Commit Graph

115 Commits

Author SHA1 Message Date
Richard Zou
71626491c4 Add batched linear solver to torch.gesv() (#6100)
* Add batched linear solver to torch.gesv()

Fixes #3164
Picks up from #4502

This moves `gesv` to ATen, adds bindings for MAGMA's `gesv_batched` function
for CUDA, and, on CPU, runs `THLapack(gesv)` in a for loop.

The new function supports arbitrary batch dimensions (and broadcasting
of those dimensions). For example, the 4-d tensor `A x B x M x M` should
be treated as having batch-size `(A x B)`.

The overhead of creating the magma_queue_t is ~350,000 microseconds the first
time it's called and ~6 microseconds on every call after that.
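
For illustration, a minimal sketch assuming the 0.4-era `torch.gesv(B, A)` interface:

```python
import torch

A = torch.randn(2, 3, 4, 4)   # batch dims (2, 3); each element is a 4x4 system
B = torch.randn(2, 3, 4, 6)   # matching batch dims; 6 right-hand sides each
X, LU = torch.gesv(B, A)      # solves A @ X = B for every batch element
print(X.shape)                # torch.Size([2, 3, 4, 6])
```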

* Tests and docs

* Address comments

* Address comments

* Rebase

* Address comments

* Fix rebase

* Addressed comments

* Address comments

* Address comments

* Addressed comments
2018-05-08 17:06:27 -04:00
Thomas Viehmann
07513cfd1d implement sum over multiple dimensions (fixes #2006) (#6152) 2018-05-02 21:50:29 -04:00
gchanan
8031da5479
Implement torch.as_tensor, similar to numpy.asarray. (#7109)
* Implement torch.as_tensor, similar to numpy.asarray.
torch.as_tensor behaves like torch.tensor except that it avoids copies where possible, making it also somewhat like tensor.new but without the size overloads.
I didn't add a requires_grad field because we haven't decided on the related semantics, such as as_param.
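
A quick sketch of the copy-avoiding behavior:

```python
import numpy as np
import torch

a = np.array([1, 2, 3])
t = torch.as_tensor(a)   # shares memory with `a` when possible -- no copy
t[0] = -1
print(a)                 # [-1  2  3]: the numpy array sees the change

c = torch.tensor(a)      # torch.tensor always copies
c[0] = 99
print(a)                 # still [-1  2  3]
```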

* Remove requires_grad for doc.
2018-05-01 12:54:43 -04:00
Kento NOZAWA
dccfdf317b Fix example of torch.clamp (#7131) 2018-05-01 14:52:32 +02:00
Thomas Viehmann
c730792d51 Add big warning about averaging to KLDivLoss documentation #6622 (#7006)
* Add big warning about averaging to KLDivLoss documentation #6622

Also: an (independent) change to tensor formatting in the diagonal
docstring.

* Improve note with example

Thank you Richard Zou!

* use log_softmax
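
A sketch of the pitfall the warning covers, assuming the 0.4-era reduction flags (newer releases spell this `reduction='batchmean'`):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 5)                    # batch of 4, 5 classes
target = F.softmax(torch.randn(4, 5), dim=1)  # targets are probabilities

log_probs = F.log_softmax(logits, dim=1)      # inputs must be log-probabilities
loss = F.kl_div(log_probs, target)            # default averaging divides by ALL
                                              # 4*5 elements, not the batch size
per_batch = F.kl_div(log_probs, target, size_average=False) / logits.size(0)
```
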
2018-04-27 15:45:26 -04:00
gchanan
a6bfa16c17
torch.arange: add numpy-style type inference. (#7016)
* torch.arange: add numpy-style type inference.

This is a backwards-compatibility breaking change.
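
A short sketch of the new inference rules:

```python
import torch

torch.arange(5).dtype      # torch.int64: integer args infer an integral dtype
torch.arange(5.0).dtype    # float args infer the default (floating) dtype
torch.arange(0, 5, 0.5)    # a float step likewise yields a floating result
```

Previously all three returned the default float type, hence the compatibility break.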

* Fix flake8.

* Use at::optional.

* Remove unneeded header files.

* Use reference wrapper.

* Update arange for test.

* Address review comments.
2018-04-27 15:11:45 -04:00
Thomas Viehmann
2b44c420c8 Enhance diagonal (fixes #6479) (#6718)
* Enhance diagonal

This patch
- adds Tensor.diagonal to complement torch.diagonal
- implements diagonal natively in ATen
- makes diagonal a view
- implements taking arbitrary diagonals
- implements diagonal backward instead of referring
  to the (more limited) diag
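
A minimal sketch of the enhanced behavior:

```python
import torch

x = torch.randn(4, 4)
d = x.diagonal()                  # new Tensor method, complements torch.diagonal
s = torch.diagonal(x, offset=1)   # arbitrary diagonals via the offset argument

d.zero_()                         # diagonal is now a view, so this writes into x
print(x)                          # main diagonal is all zeros
```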

* add tests, copy diagonal code to backward for double differentiability

* improve tests and doc comment. Thank you, Adam!

* Mark diagonal as view function in gen_autograd.py, use simple backward.
2018-04-26 11:11:20 -04:00
Tongzhou Wang
1ee009599c Add torch.get_default_dtype doc (#6872)
* add torch.get_default_dtype doc

* address comments
2018-04-23 18:58:01 -04:00
Tongzhou Wang
95d0e9aaa2 [docs] Update set_default_(tensor_|d)type docs (#6843)
* update set_default_(tensor_|d)type docs

* make ndarray display nicer
2018-04-22 13:44:20 -04:00
li-roy
d564ecb4a5 Update docs with new tensor repr (#6454)
* Update docs with new tensor repr

* remove cuda in dtype

* remove changes to gloo submodule

* [docs] document tensor.new_* ctor

* [docs] Add docs for tensor.to(), tensor.float(), etc

* [docs] Moar examples for docs.

* [docs] Warning for tensor ctor copy behavior

* Quick fix

* [docs] Document requires_grad_()

* [docs] Add example for requires_grad_()

* update slogdet and *fft

* update tensor rst

* small fixes

* update some docs

* additional doc changes

* update torch and tensor docs

* finish changing tensor docs

* fix flake8

* slogdet with negative det

* Update functional.py tensor ctors

* Fix nll_loss docs

* reorder to move device up

* torch.LongTensor -> torch.tensor or torch.empty in docs

* update tensor constructors in docs

* change tensor constructors

* change constructors

* change more Tensor() to tensor()

* Show requires_grad_ docs

* Fix set_default_dtype docs

* Link to torch.no_grad, etc, from torch doc

* Add dtype aliases to table

* regen docs again

* Tensor attributes stub page

* link to inplace sampling

* Link torch.dtype, device, and layout

* fix dots after nonfinite floats

* better layout docs
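
A few of the documented constructors and conversions, sketched:

```python
import torch

x = torch.ones(2, 3, dtype=torch.float64)
x.new_zeros(4)                       # new_* ctors inherit x's dtype and device
x.to(torch.float32)                  # tensor.to() converts dtype (and/or device)
torch.empty(2, 2).requires_grad_()   # in-place toggle documented here
```
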
2018-04-21 07:35:37 -04:00
gchanan
a568b91a5d [docs] Add missing device parameters to factories, refer to dtypes as data types rather than types. (#6803) 2018-04-20 21:01:16 -04:00
Richard Zou
47bd4be4d3
[docs] More factory functions (#6709)
* More factory functions

Changes:
- Added the remaining factory and factory-like functions
- Better argument reuse via string templates
- Link under torch.rst's Creation Ops to the randomized creation ops

* Add double tick around False

* fix flake8

* Fix False

* Clarify comment: hopefully it is clearer now
2018-04-19 13:16:07 -04:00
Thomas Viehmann
bd0cc7d364 Implement torch.einsum (fixes #1889) (#6307)
* start at generic trilinear

* Implement einsum (fixes #1889)

This provides a simple implementation of einsum. It is built on
top of the work for computing bilinear (#6110).
It uses a naive left-to-right resolution at the moment.
Autograd is able to differentiate by itself.
The obvious unsupported feature is taking diagonals (einsum('ii->i', (a,))).
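
A couple of illustrative contractions, written with the operand tuple this commit's API accepts:

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(4, 5)
c = torch.einsum('ij,jk->ik', (a, b))   # matrix multiply

x = torch.randn(2, 3)
y = torch.randn(2, 3)
d = torch.einsum('bi,bi->b', (x, y))    # batched (per-row) dot products
```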

* add tests and docs

* fix flake8

* clean diff

* rebase on current master to resolve conflicting String wrapping

* clean up after rebase

* better commentary in einsum and sumproduct_pair

* don't say fixme if it's fixed and rename num_outputs to num_output_dims

* adapt python wrapper to use std::string instead of String to avoid typedef at::String

* typos and some vector to array conversion

* fix accidental python<->python3 change

* really fix bad rebase
2018-04-18 13:41:27 +02:00
Richard Zou
7de61c3b8c
Update tensors.rst Tensor introduction (#6670)
Changes:
- Delete docs for the old constructor; add a link to the new `torch.tensor` ctor
- Add docs for `torch.tensor`
- Add some info on dtypes to the top of `tensors.rst`.
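
A sketch of the new ctor's dtype inference:

```python
import torch

torch.tensor([1, 2, 3]).dtype               # torch.int64, inferred from the data
torch.tensor([1.0, 2.0]).dtype              # the default floating dtype
torch.tensor([1, 2], dtype=torch.float64)   # or force a dtype explicitly
```
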
2018-04-17 16:52:22 -04:00
Richard Zou
1f2829dd2a
Update tensor factory method docs (#6640)
* Update tensor factory method docs

Also add new docs for `torch.empty`.

* Add full; some refactoring to make docs nicer
2018-04-17 14:30:46 -04:00
Richard Zou
dd91d57c3f
Update docs for torch.zeros factory method (#6594)
* Update docs for torch.zeros factory method

If this looks good, I'll submit another PR rewriting the other factory
methods in this fashion.

* Address comments

* Better explanation for device default

* Add variable argument back

* s/set/sequence/g

* Remove class from torch.strided
2018-04-16 18:28:12 -04:00
Richard Zou
eaf1e4b6ab
Docs for torch.*_like(...) factory functions (#6589)
* Docs for torch.*_like(...) factory functions

In the same spirit as `torch.randn_like`.
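
A minimal sketch of the *_like family these docs cover:

```python
import torch

x = torch.randn(2, 3, dtype=torch.float64)
torch.zeros_like(x)                      # same size, dtype and device as x
torch.randn_like(x)
torch.ones_like(x, dtype=torch.float32)  # properties can be overridden
```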

* Address comments

* Recommend ones/zeros with out keyword
2018-04-13 20:49:35 -04:00
Richard Zou
99cfb56698 Add docs for torch.randn_like (#6565)
* Add docs for torch.randn_like

* Address comments

* Address comments

* Address comments
2018-04-13 11:33:56 -04:00
Tongzhou Wang
8aa0ae3836 Support arbitrary number of batch dimensions in *FFT (#6528) 2018-04-12 15:03:22 -04:00
Tongzhou Wang
0dff2b5e35
[fft] [3 of 3] Implements backward of fft ifft rfft irfft (#5537)
* change irfft signal_sizes arg to be the last

* add docs for fft, ifft, rfft, irfft; update doc for stft

* fix typo in window function docs

* improve gradcheck error message

* implement backward of fft, ifft, rfft, irfft

* add grad tests for fft, ifft, rfft, irfft

* fix nits and typos from #6118

* address comments
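
A sketch of the now-differentiable round trip, assuming the 0.4-era `torch.rfft`/`torch.irfft` signatures (since replaced by the `torch.fft` module):

```python
import torch

x = torch.randn(8, 100, requires_grad=True)   # 8 signals of length 100
spec = torch.rfft(x, signal_ndim=1)           # one-sided real-to-complex FFT
recon = torch.irfft(spec, signal_ndim=1, signal_sizes=x.shape[-1:])
recon.sum().backward()                        # backward is implemented here
```
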
2018-04-10 22:09:36 -04:00
Naman Jain
acb7df11a2 Add torch.randint and torch.randint_like functions (#6136)
Adds randint and randint_like to TensorFactories.cpp
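
A short sketch:

```python
import torch

torch.randint(0, 10, (3, 3))   # uniform integers drawn from [0, 10)
x = torch.randn(2, 2)
torch.randint_like(x, 5)       # same size (and dtype/device) as x, in [0, 5)
```
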
2018-04-10 12:08:21 -04:00
Kento NOZAWA
3b58b859b2 Fix typos in docs (#6389) 2018-04-07 12:41:15 -04:00
Vishwak Srinivasan
0aa35780bf [ready] Implement log2 and log10 in PyTorch (#6272)
* Implemented log2 and log10

* Re-add incorrectly removed files

* Fix minor bugs

* Fix log1p docs

* Add a try-except for python2 math module in log2 test

* Revert changes made to aten/doc/*

* Fix docstring errors

* Fix windows build
2018-04-05 14:28:37 -04:00
Thomas Viehmann
32ba2ca203 add documentation for diagflat and diagonal (#6161) 2018-03-31 18:03:21 +02:00
samuela
7ffcb20295 small math cleanups in the docs (#6057) 2018-03-29 22:50:08 +02:00
Tongzhou Wang
39829c1670 Improve docs (#5999)
* Clarify det and svd doc on when backward is not stable

* Fix some links in nn.functional doc; improve upsampling doc
2018-03-26 14:09:11 -04:00
Tongzhou Wang
940a0ab67b Add logdet and slogdet (#5393)
* 1. Add logdet and slogdet in ATen side
2. Previously, det could return a result with an incorrect sign for symmetric
   matrices. This was caused by a wrong assumption I had about SVD (that
   U = V^T when the input is symmetric). This fixes it.
3. Moreover, after fixing 2, QR is now always needed for the det forward, so I
   moved SVD to the backward call. Since this is a specific variant of SVD, it
   is named _svd_with_positive_UV_det, with its derivatives.yaml entry being
   svd_backward.
4. Updated/added backward functions for det, logdet and slogdet, which uses
   _svd_with_positive_UV_det and svd_backward inside.
5. Optimized svd_backward:
   a. Avoid unnecessary kernels when only sigma has gradient (this is the usual
      case, and also true with *det backward functions).
   b. Fix SVD double backward by avoiding a nan.

* 1. Add/update grad checks for det, logdet, and slogdet.
2. Fix an incorrect check for dim_args_idx in test_autograd.py
3. Add option to only test a subset of output values, specified by
   test_output_indices, for cases like slogdet where only the
   second output is differentiable.
4. Add better doc for the test generating list.

* Add/improve output tests for det, logdet and slogdet
Add a scaling to random matrices so closeness checks are more robust

* Remove unnecessary Variable wrappers in some test files

* Add logdet slogdet docs

* Improve an err msg in THTensorLapack.c

* add inverse-based backward for invertible matrices; use SVD only for the
non-invertible case, so the special variant is no longer needed

* use LU rather than QR
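
A sketch of the new functions:

```python
import torch

A = torch.randn(3, 3)
sign, logabsdet = torch.slogdet(A)   # stable even for tiny/huge determinants
sign * torch.exp(logabsdet)          # recovers torch.det(A)

B = A @ A.t() + 3 * torch.eye(3)     # positive definite, so det(B) > 0
torch.logdet(B)                      # logdet requires a positive determinant
```
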
2018-03-16 09:23:00 -04:00
Vishwak Srinivasan
76a283db40 [ready] General Documentation Improvements - 2 (#5685)
* Fix some minor errors in existing docs.

* Fix Convolution and Pooling docs in torch.nn.functional

* Cleaned up torch.nn.functional docs

* Address @SsnL 's comments

* Add multiplication sign missing in docs

* Fix more typos, and clear some warnings

* Change infinity symbol in LPPool2d

* Revert some changes in torch.nn.functional

* Few more minor changes
2018-03-13 09:47:43 -04:00
Sam Gross
a2641500bf Implement torch.reshape and Tensor.reshape (#5575)
* Implement torch.reshape and Tensor.reshape

This implements reshape, which has semantics similar to numpy.reshape. It
returns a view of the source tensor if possible; otherwise, it returns a
copy.
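
A minimal sketch of the view-or-copy behavior:

```python
import torch

x = torch.arange(6.)
v = x.reshape(2, 3)          # x is contiguous, so this is a view
v[0, 0] = 42
print(x[0])                  # tensor(42.) -- storage is shared

nc = torch.randn(3, 4).t()   # non-contiguous (transposed)
c = nc.reshape(12)           # cannot be expressed as a view, so this copies
```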

* Remove in-place reshape_ that was an alias for resize_

* Update documentation
2018-03-12 16:20:40 -04:00
Benjamin Heinzerling
42bf2f9289 Explain floating point issue in torch.arange doc (#5708)
* Explain floating point issue in torch.arange doc

https://github.com/pytorch/pytorch/issues/5556
https://github.com/pytorch/pytorch/issues/5704
https://github.com/pytorch/pytorch/pull/5600
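
A hedged illustration of the issue (the exact element count depends on rounding):

```python
import torch

# With a floating-point step, accumulated rounding error means the last
# element can land on, or just short of, `end`; the length may surprise you.
torch.arange(1.0, 1.3, 0.1)    # may contain 3 or 4 elements

# For an exact, inclusive endpoint, fix the element count instead:
torch.linspace(1.0, 1.3, 4)
```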

* Add line break to stay below max comment length

* Copyedit

* Typofix
2018-03-12 12:04:51 -04:00
Richard Zou
4e190c2fed Fix floor latex rendering (#5682)
* Make floors larger

* Improve Latex rendering of floor

* Improve latex rendering of ceil

* Fix flake8
2018-03-09 23:53:14 -05:00
Richard Zou
74043b69c2 Alias torch.diagonal, torch.diagflat (#5622)
* Alias torch.diagonal, torch.diagflat

* Address comments; Add sanity tests for torch.diagonal and torch.diagflat
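
A sketch of the two aliases:

```python
import torch

v = torch.tensor([1, 2, 3])
torch.diagflat(v)        # 3x3 matrix with v on its main diagonal

m = torch.randn(3, 3)
torch.diagonal(m, 1)     # extracts the first superdiagonal of m
```
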
2018-03-09 23:46:42 -05:00
Tongzhou Wang
71d73211f4 [ready] torch.* doc update for Variable/Tensor merge, and other improvements (#5443)
* 1. Update doc to reflect changes in Variable/Tensor merge, and new printing style
2. Remove functions in torch/functional.py that are already implemented with native_function
3. Add set_default_tensor_type doc

* fix torch.split

* py2 unicode string fix

* update torch.gels doc

* address @fmassa 's comments

* double-colon
2018-03-08 23:02:38 -05:00
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs

* Add more changes
1. Modify all torch.Tensor wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clear up prints in the build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
Christian S. Perone
8720d72d7c Fixing inconsistent docs (missing parameters docs). (#5620) 2018-03-08 10:42:40 +01:00
Sam Gross
30ec06c140
Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code. Subsequent PRs will remove the dead code, clean-up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variables and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
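
A sketch of the post-merge workflow, assuming the 0.4-style factory kwargs:

```python
import torch

# Factories now return autograd-capable tensors directly; no need to wrap
# results in torch.autograd.Variable.
x = torch.randn(2, 2, requires_grad=True)
y = (x * x).sum()
y.backward()
print(x.grad)    # gradients accumulate on the tensor itself
```
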
2018-02-23 18:03:31 -05:00
Choongwoo Han
fae6c67121 Configurable flushing denormal numbers on CPU (#5294)
* Configurable flushing denormal numbers on CPU

* Formatting

* Update docs

* Minor doc changes
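
A minimal sketch, mirroring the behavior being documented:

```python
import torch

torch.set_flush_denormal(True)                # True if the CPU supports it
torch.tensor([1e-323], dtype=torch.float64)   # denormal input flushes to 0.
torch.set_flush_denormal(False)
torch.tensor([1e-323], dtype=torch.float64)   # tiny denormal value preserved
```
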
2018-02-19 19:23:43 -05:00
Yimeng Zhang
da79697d45 make explicit about keyword-onlyness of out (#5165)
* make explicit about keyword-onlyness of `out`

fix issue 2 of https://github.com/pytorch/pytorch/issues/5156#issuecomment-364521510
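
A one-line illustration:

```python
import torch

a, b = torch.randn(3), torch.randn(3)
result = torch.empty(3)
torch.add(a, b, out=result)   # `out` must be passed by keyword, never positionally
```
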
2018-02-13 09:55:36 -08:00
Kevin Zakka
6dc41f9e63 fixed doc for cholesky potrs (#5180) 2018-02-11 20:23:10 -05:00
Tongzhou Wang
47ee86776e Fix CPU torch.multinomial with noncontiguous prob tensor (#5093)
* fix CPU torch.multinomial not working on noncontiguous probability distributions

* address comments

* change some tabs to spaces in THStorage.c
2018-02-06 22:11:43 -05:00
Vishwak Srinivasan
e519ef5337 Adding torch.expm1() and its inplace function (#4350) 2017-12-28 18:56:03 +09:00
yongjik
15163a3273 Improved documentation of several index operations. 2017-12-26 06:08:44 +08:00
SsnL
658d4c7ea8 allow optional int tensor 2017-12-24 03:08:28 +08:00
Tongzhou Wang
d8b2e5d091 Add python only default init expression; Implement stft, hann/hamming/bartlett window. (#4095)
* implement stft

* addressed comments; implemented window functions; added support for python-only default initialization
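
A sketch of the new window functions:

```python
import torch

torch.hann_window(10)                      # 10-point periodic Hann window
torch.hamming_window(10)
torch.bartlett_window(10, periodic=False)  # symmetric variant
```
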
2017-12-18 12:28:23 -05:00
Richard Zou
9394e65b44 Add proper shape checking to torch.cat (#4087)
* Fix catArray in THTensor

Asserts that the inputs have the same size except in the
cat dimension or are empty (or a mix of both).
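
A sketch of the shape rule being enforced:

```python
import torch

a = torch.zeros(2, 3)
b = torch.ones(4, 3)
torch.cat([a, b], dim=0)       # OK: sizes match outside the cat dimension

c = torch.ones(4, 5)
# torch.cat([a, c], dim=0)     # now raises: sizes differ in a non-cat dimension
```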

* Fix catArray for THCTensor

* Document torch.cat shape checks

* Fix types
2017-12-18 02:05:58 -05:00
Tongzhou Wang
fc8ad6fde6 improve svd doc (#4155) 2017-12-15 12:57:14 -05:00
Sam Gross
d76f7a806d
Fix potrf gradient and enable gradchecks (#3861) 2017-12-04 16:50:41 -05:00
Tongzhou Wang
932e484029 fix doc change lint; (#3974) 2017-12-01 17:24:30 -05:00
Tongzhou Wang
fe12ac57a4 Improve docs for torch and torch.Tensor (#3969)
* doc overhaul

* update split doc
2017-12-01 14:56:48 -05:00
Tongzhou Wang
c681b03d37 Add determinant function on variable; Add backward on svd (#3816)
* determinant on variable

* svd bwd
2017-12-01 13:22:46 -05:00