Commit Graph

28 Commits

Author SHA1 Message Date
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for `# type:` comments.  flake8-3 will
report such an import as unused; flake8-2 will not.  For now, I just
noqa'd all these sites.
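
For context, a minimal sketch (not from the PR) of the two suppression patterns described above; the function is hypothetical:

```py
# Import kept visible on purpose; without the noqa, F401 reports it as unused.
import os  # noqa: F401

# Imported only for use in a `# type:` comment: flake8-3 reports it as
# unused, flake8-2 does not, hence the blanket noqa'ing described above.
from typing import List  # noqa: F401


def first(xs):
    # type: (List[int]) -> int
    return xs[0]
```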

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
vishwakftw
291746f110 Rename trtrs to triangular_solve (#18213)
Summary:
Changelog:
- Renames `trtrs` to `triangular_solve` to remain consistent with `cholesky_solve` and `solve`.
- Rename all tests, fix callsites
- Create a tentative alias for `triangular_solve` under the name `trtrs`, and add a deprecation warning to discourage usage (see the sketch after this list).
- Move `isnan` to _torch_docs.py
- Remove unnecessary imports
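
A minimal sketch of what such a deprecation alias could look like (simplified; not the exact code from the PR):

```py
import warnings

import torch


def trtrs(b, A, upper=True, transpose=False, unitriangular=False):
    # Deprecated alias kept for backward compatibility: warn, then forward.
    warnings.warn(
        "torch.trtrs is deprecated in favour of torch.triangular_solve",
        stacklevel=2,
    )
    return torch.triangular_solve(
        b, A, upper=upper, transpose=transpose, unitriangular=unitriangular
    )
```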
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18213

Differential Revision: D14566902

Pulled By: ezyang

fbshipit-source-id: 544f57c29477df391bacd5de700bed1add456d3f
2019-03-21 14:27:21 -07:00
Vishwak Srinivasan
a519217ee7 Add batched version of trtrs (#18025)
Summary:
- Remove single batch TH/THC implementations
- Remove `_batch_trtrs_lower` from `multivariate_normal`
- Add tests for batched behavior
- Modify trtrs_backward to accommodate the batched case
- Modify docs

In a future PR, this will be renamed to `triangular_solve`.
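
For illustration, batched usage (a sketch under the post-rename API; shapes are arbitrary):

```py
import torch

# A batch of 100 lower-triangular 4x4 systems A @ x = b.
A = torch.randn(100, 4, 4).tril() + 4 * torch.eye(4)  # keep well conditioned
b = torch.randn(100, 4, 2)

x, _ = torch.triangular_solve(b, A, upper=False)
print(x.shape)  # torch.Size([100, 4, 2])
```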
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18025

Differential Revision: D14523004

Pulled By: ifedan

fbshipit-source-id: 11c6a967d107f969b60e5a5c73ce6bb8099ebbe1
2019-03-20 11:11:32 -07:00
Nicki Skafte
8045b3eb14 Registering of kl-divergence for independent distribution (#17681)
Summary:
This addresses issue https://github.com/pytorch/pytorch/issues/13545 and implements the proposed fix together with a single test.
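
For illustration, a usage sketch of what the registration enables (not from the PR):

```py
import torch
import torch.distributions as dist

# KL between two Independent-wrapped Normals now dispatches to a
# registered rule instead of raising NotImplementedError.
p = dist.Independent(dist.Normal(torch.zeros(3), torch.ones(3)), 1)
q = dist.Independent(dist.Normal(torch.ones(3), torch.ones(3)), 1)
print(dist.kl_divergence(p, q))  # scalar: the component KLs summed
```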
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17681

Differential Revision: D14360161

Pulled By: ezyang

fbshipit-source-id: 427afc88e9054b5b0dc39ebbab1087b990695ea5
2019-03-11 08:10:16 -07:00
vishwakftw
2681af1c8a Remove redundant wrappers in torch.distributions (#16807)
Summary:
Changelog:
- Remove torch.distributions.multivariate_normal._batch_diag: the same functionality is provided by torch.diagonal
- Remove torch.distributions.lowrank_multivariate_normal._batch_vector_diag: the same functionality is provided by torch.diag_embed (see the usage sketch below)
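
For reference, a usage sketch of the built-ins that replace those wrappers:

```py
import torch

x = torch.randn(5, 3, 3)
d = torch.diagonal(x, dim1=-2, dim2=-1)  # batched diagonal extraction, shape (5, 3)
m = torch.diag_embed(d)                  # batched diagonal embedding, shape (5, 3, 3)
```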
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16807

Differential Revision: D13985550

Pulled By: soumith

fbshipit-source-id: 25c7d00c52ff7f85e431134e9ce0d5dda453667b
2019-02-07 01:13:55 -08:00
fehiepsi
67308a9323 Fix expanded mvn and lowrankmvn (#14557)
Summary:
This PR fixes the slowness of expanded MVN.

A notebook demonstrating the problem is [here](https://gist.github.com/fehiepsi/b15ac2978f1045d6d96b1d35b640d742). Basically, mvn's `sample` and `log_prob` involve expensive computations based on `cholesky` and `trtrs`. We can save a lot of computation by caching the unbroadcasted version of `scale_tril` (or `cov_diag`, `cov_factor` in low-rank mvn).
When expanding, this cached tensor should not be expanded together with the other arguments.
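
A usage sketch of the effect (not from the PR):

```py
import torch
import torch.distributions as dist

base = dist.MultivariateNormal(torch.zeros(3), scale_tril=torch.eye(3))
big = base.expand((1000,))

# The factor derived from `scale_tril` is computed once on `base` and
# shared with the expanded copy, so sample/log_prob stay cheap.
x = big.sample()
print(big.log_prob(x).shape)  # torch.Size([1000])
```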

Ref: https://github.com/uber/pyro/issues/1586

cc neerajprad fritzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14557

Differential Revision: D13277408

Pulled By: soumith

fbshipit-source-id: a6b16f999b008d5da148ccf519b7f32d9c6a5351
2018-11-30 10:49:13 -08:00
Johan Gudmundsson
9646d68962 support broadcasting in _kl_categorical_categorical (#10533)
Summary:
Support broadcasting in _kl_categorical_categorical

This makes it possible to do:
```py
import torch.distributions as dist
import torch
p_dist = dist.Categorical(torch.ones(1,10))
q_dist = dist.Categorical(torch.ones(100,10))
dist.kl_divergence(p_dist, q_dist)
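# result has batch shape (100,): p_dist's batch of 1 broadcasts against q_dist's 100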
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10533

Differential Revision: D9341252

Pulled By: soumith

fbshipit-source-id: 34575b30160b43b6c9e4c3070dd7ef07c00ff5d7
2018-08-15 12:40:17 -07:00
fehiepsi
9525925119 Low rank multivariate normal (#8635)
Summary:
This pull request implements the low-rank multivariate normal distribution, where the covariance matrix has the form `W @ W.T + D`. Here D is a diagonal matrix and W has shape n x m with m << n. It uses the matrix determinant lemma and the Woodbury matrix identity to save computational cost.
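
For illustration, a usage sketch of the resulting distribution (shapes arbitrary):

```py
import torch
import torch.distributions as dist

n, m = 50, 3  # m << n
W = torch.randn(n, m)
D = torch.rand(n) + 0.1  # strictly positive diagonal

p = dist.LowRankMultivariateNormal(torch.zeros(n), cov_factor=W, cov_diag=D)
x = p.sample()
# log_prob costs O(n * m^2) rather than O(n^3), thanks to the Woodbury
# identity and the matrix determinant lemma mentioned above.
print(p.log_prob(x))
```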

Along the way, I also revised the MultivariateNormal distribution a bit. Here are the other changes:
+ `torch.trtrs` works with CUDA tensors, so I use it instead of `torch.inverse`.
+ Use `torch.matmul` instead of `torch.bmm` in `_batch_mv`. The former is faster and simpler.
+ Use `torch.diagonal` for `_batch_diag`.
+ Reimplement `_batch_mahalanobis` based on `_batch_trtrs_lower`.
+ Use trtrs to compute the second term of the KL divergence.
+ `variance` relies on `scale_tril` instead of `covariance_matrix`.

TODO:
- [x] Resolve the fail at `_gradcheck_log_prob`
- [x] Add test for KL

cc fritzo stepelu apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8635

Differential Revision: D8951893

Pulled By: ezyang

fbshipit-source-id: 488ee3db6071150c33a1fb6624f3cfd9b52760c3
2018-07-23 10:10:53 -07:00
Tongzhou Wang
27455e9c78 Use _six for inf and nan (#9500)
Summary:
Things like `float('inf')` are actually quite expensive.
```py
In [1]: import math

In [2]: %timeit -n 200 math.inf
49.3 ns ± 1.42 ns per loop (mean ± std. dev. of 7 runs, 200 loops each)

In [3]: %timeit -n 200 float('inf')
194 ns ± 39.1 ns per loop (mean ± std. dev. of 7 runs, 200 loops each)
```
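
The same bind-once pattern using the stdlib constant the benchmark measures (a sketch; `torch._six` served the analogous role for Python-2/3 compatibility):

```py
import math

# Bound once at module level: each use is a cheap name lookup instead
# of a fresh float('inf') construction per call.
INF = math.inf


def is_unbounded(x):
    return x == INF
```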
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9500

Reviewed By: soumith

Differential Revision: D8876229

Pulled By: SsnL

fbshipit-source-id: 78602b76bb53d5588910b58270930c0bd413d2d7
2018-07-18 10:40:29 -07:00
Du Phan
9d88ff7d0d Add half cauchy, half normal distributions (#8411) 2018-06-14 10:28:42 +02:00
Vishwak Srinivasan
3cbaa6b785 [ready] Clean up torch.distributions (#8046) 2018-06-02 16:54:53 +02:00
Neeraj Pradhan
3964253f94 Allowing for vectorized counts in Binomial Distribution (#6720) 2018-04-26 15:53:01 +02:00
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Vishwak Srinivasan
3497f0207c [distributions] KL-Divergence for Multivariate Normal (#6172) 2018-04-04 13:19:47 +02:00
Maruan
7cbbc0bc74 Implementation of the logistic-normal distribution (#5547) 2018-03-22 00:32:14 +01:00
Sam Gross
54b4cdeffa
Replace all uses of 'Tensor or Variable' with 'Tensor' (#5508)
Replace all uses of 'Tensor or Variable' and 'Variable or Tensor' with 'Tensor'
2018-03-02 14:26:11 -05:00
gchanan
3b63e552f9
Fix test_distributions when WITH_SCALARS. (#5121)
* Fix test_distributions when WITH_SCALARS.

* Use SCALAR_SHAPE in test, use self.scale in AffineTransform.

* Handle device correctly for scalars.

* Fix one hot categorical.

* Fix relaxed categorical.

* Add a new_tensor instance method to Variable that takes only data.

This is to work around the legacy problems of `new`, where e.g.
`new(5)` gives you an uninitialized tensor rather than a scalar (see the sketch after this list).

* Fix cuda scalar code path.

* Remove double return.

* Work around lack of WITH_SCALARS.

* Use tensor_new.
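
A quick illustration of the pitfall (a sketch using the modern tensor API for brevity):

```py
import torch

t = torch.tensor([1.0, 2.0])
t.new(5)         # uninitialized tensor with 5 elements, NOT the value 5
t.new_tensor(5)  # tensor(5.): a 0-dim tensor holding the value 5
```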
2018-02-09 11:01:13 -05:00
Vishwak Srinivasan
85a7e0fc41 Addition of ExponentialFamily (#4876) 2018-02-04 12:18:28 +01:00
Vishwak Srinivasan
423677bacc Add KL-divergence for Categorical and OneHotCategorical and stronger tests (#4961) 2018-02-03 12:47:13 +01:00
Alican Bozkurt
967bceb16b Implement Transforms (#4771) 2018-01-28 21:17:16 +01:00
Rachit Singh
e58a53af6f Added Poisson self KL + Bernoulli/Poisson KL 2018-01-27 22:36:00 +01:00
gchanan
c046da76ef
More distributions fixes for scalars. (#4849) 2018-01-25 12:33:01 -05:00
Vishwak Srinivasan
8593c6f4f7 Adding better KL-Tests (#4739) 2018-01-20 21:47:11 +01:00
Alican Bozkurt
f72d86e0d3 Implement geometric distribution (#4708) 2018-01-19 21:45:14 +01:00
Alican Bozkurt
8c2d35c754 Refactor distributions (#4688) 2018-01-17 11:58:08 +01:00
Alican Bozkurt
3254eca8c8 Implement binomial distribution (#4658) 2018-01-16 21:39:05 +01:00
Vishwak Srinivasan
86fe793948 Addition of KL-Divergences for torch.distributions (#4638) 2018-01-14 22:52:28 +01:00
Fritz Obermeyer
3a335427b0 Start framework for kl_divergence(-,-) in torch.distributions (#4525) 2018-01-09 09:44:59 +01:00