Commit Graph

80 Commits

Xiang Gao
07b5782ff7 Add some missing docs to torch.rst, new unittest to enforce torch.rst no longer miss anything (#16039)
Summary:
This prevents people (reviewers, PR authors) from forgetting to add things to `torch.rst`.

When something new is added to `_torch_doc.py` or `functional.py` but intentionally not in `torch.rst`, people should manually whitelist it in `test_docs_coverage.py`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16039

Differential Revision: D14070903

Pulled By: ezyang

fbshipit-source-id: 60f2a42eb5efe81be073ed64e54525d143eb643e
2019-02-15 07:02:31 -08:00
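The coverage test this commit describes amounts to diffing the public API against the names the `.rst` file documents. A minimal stand-alone sketch of that idea (the function name, directive format, and whitelist argument here are illustrative, not the actual `test_docs_coverage.py`):

```python
import re

def undocumented(public_names, rst_text, whitelist=()):
    """Return public names missing from the .rst source.

    Assumes documented entries appear as ``.. autofunction:: name``
    directives (an assumption for this sketch; the real test parses
    torch.rst differently).
    """
    documented = set(re.findall(r"^\.\. autofunction:: (\w+)", rst_text, re.M))
    return sorted(set(public_names) - documented - set(whitelist))

rst = """
.. autofunction:: add
.. autofunction:: mul
"""
print(undocumented(["add", "mul", "sub"], rst))  # ['sub']
```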
vishwakftw
95febdfacc Add is_floating_point to docs (#15704)
Summary:
Fixes #15700 .

Changelog:

- Expose torch.*.is_floating_point to docs

Differential Revision: D13580734

Pulled By: zou3519

fbshipit-source-id: 76edb4af666c08237091a2cebf53d9ba5e6c8909
2019-01-07 10:43:22 -08:00
Gao, Xiang
4a716250cc Add torch.rot90 to torch.rst
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15512

Differential Revision: D13545775

Pulled By: soumith

fbshipit-source-id: 2a8896571745630cff4aaf3d5469ef646bdcddb4
2018-12-23 14:31:11 -08:00
vishwakftw
41e7e1bc40 Rename potrs to cholesky_solve (#15334)
Summary:
Changelog:
- Renames `potrs` to `cholesky_solve` to remain consistent with Tensorflow and Scipy (not really, they call their function chol_solve)
- The default argument for upper in cholesky_solve is False. This allows a seamless interface between `cholesky` and `cholesky_solve`, since the `upper` argument in both functions is the same.
- Rename all tests
- Create a tentative alias for `cholesky_solve` under the name `potrs`, and add a deprecation warning to discourage usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15334

Differential Revision: D13507724

Pulled By: soumith

fbshipit-source-id: b826996541e49d2e2bcd061b72a38c39450c76d0
2018-12-19 12:31:24 -08:00
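For context on why sharing the `upper` convention matters: `cholesky_solve` consumes the factor that `cholesky` produces. A dependency-free sketch of the 2x2 lower-triangular case (helper names are made up; the real routine wraps LAPACK):

```python
import math

def cholesky2x2(A):
    # lower-triangular L with A = L L^T (A symmetric positive definite)
    l00 = math.sqrt(A[0][0])
    l10 = A[1][0] / l00
    l11 = math.sqrt(A[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]

def cholesky_solve2x2(b, L):
    # forward substitution: L y = b
    y0 = b[0] / L[0][0]
    y1 = (b[1] - L[1][0] * y0) / L[1][1]
    # back substitution: L^T x = y
    x1 = y1 / L[1][1]
    x0 = (y0 - L[1][0] * x1) / L[0][0]
    return [x0, x1]

A, b = [[4.0, 2.0], [2.0, 3.0]], [2.0, 5.0]
x = cholesky_solve2x2(b, cholesky2x2(A))  # solves A x = b
```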
vishwakftw
1c9df7facf Expose torch.roll function and method (#14880)
Summary: Fixes #14859 .

Differential Revision: D13376915

Pulled By: zou3519

fbshipit-source-id: f1fc0e8492a159431a3fc0a19a41aa10429ecc80
2018-12-07 07:42:47 -08:00
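The semantics of `torch.roll` along one dimension, sketched on a plain Python list (illustrative only):

```python
def roll(xs, shift):
    # elements shifted off the end reappear at the front;
    # the modulo also handles negative shifts, as torch.roll does
    shift %= len(xs)
    return xs[-shift:] + xs[:-shift] if shift else list(xs)

print(roll([1, 2, 3, 4], 1))   # [4, 1, 2, 3]
print(roll([1, 2, 3, 4], -1))  # [2, 3, 4, 1]
```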
Thomas Viehmann
f0ed927b62 Add diag_embed to ATen and torch (#12447)
Summary:
Fixes: #12160
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12447

Differential Revision: D12916234

Pulled By: SsnL

fbshipit-source-id: 512a04efb0c2e0a54295b857a61be66c3aae13da
2018-11-05 08:55:28 -08:00
vishwakftw
d714ecf879 Rename potrf to cholesky (#12699)
Summary:
This PR performs a renaming of the function `potrf` responsible for the Cholesky
decomposition on positive definite matrices to `cholesky` as NumPy and TF do.

Billing of changes
- make potrf cname for cholesky in Declarations.cwrap
- modify the function names in ATen/core
- modify the function names in Python frontend
- issue warnings when potrf is called to notify users of the change

Reviewed By: soumith

Differential Revision: D10528361

Pulled By: zou3519

fbshipit-source-id: 19d9bcf8ffb38def698ae5acf30743884dda0d88
2018-11-01 15:10:55 -07:00
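The warn-then-forward aliasing pattern the last bullet describes can be sketched as follows (function bodies and the warning category are illustrative; this is not the actual PyTorch shim):

```python
import warnings

def cholesky(a):
    # stand-in for the real decomposition in this sketch
    return ("cholesky", a)

def potrf(a):
    # deprecated alias: warn, then forward to the new name
    warnings.warn(
        "potrf is deprecated in favour of cholesky",
        DeprecationWarning, stacklevel=2)
    return cholesky(a)
```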
vishwakftw
48bc57fa8d Introduce chain_matmul (#12380)
Summary:
- This was one of the few functions left out from the list of functions in
  NumPy's `linalg` module
- `multi_mm` is particularly useful for DL research, for quick analysis of
  deep linear networks
- Added tests and doc string
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12380

Differential Revision: D10357136

Pulled By: SsnL

fbshipit-source-id: 52b44fa18d6409bdeb76cbbb164fe4e88224458e
2018-10-12 03:58:12 -07:00
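Why the multiplication order matters for a chain: for sizes 10x100, 100x5, 5x50, multiplying left-to-right needs an order of magnitude fewer scalar multiplications than right-to-left. A small cost counter (illustrative; `chain_matmul` chooses the order by dynamic programming, which this sketch does not implement):

```python
def cost_left(dims):
    # dims[i] x dims[i+1] matrices, multiplied strictly left-to-right
    cost, rows = 0, dims[0]
    for i in range(1, len(dims) - 1):
        cost += rows * dims[i] * dims[i + 1]
    return cost

def cost_right(dims):
    # same chain, multiplied strictly right-to-left
    cost, cols = 0, dims[-1]
    for i in range(len(dims) - 2, 0, -1):
        cost += dims[i - 1] * dims[i] * cols
    return cost

dims = [10, 100, 5, 50]
print(cost_left(dims))   # 7500
print(cost_right(dims))  # 75000
```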
Wei Yang
5ffc915f26 fix docs (#12126)
Summary:
- fix https://github.com/pytorch/pytorch/issues/12120
- add `torch.argsort`, `torch.pdist`, `broadcast_tensors` to *.rst files
- add parameter dim to `torch.unique` doc
- fix table and args for `torch.norm`
- test plan: make html and check docs in browser

gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12126

Differential Revision: D10087006

Pulled By: weiyangfb

fbshipit-source-id: 25f65c43d14e02140d0da988d8742c7ade3d8cc9
2018-09-29 22:26:45 -07:00
Tongzhou Wang
d3f98b5ffc Add matrix power (#11421)
Summary:
vishwakftw Your patch needed some updates because the default native function dispatches changed from `[function, method]` to `[function]`. The CI was run before that change happened so it still shows green, but the internal test caught it.

I did some changes when rebasing and updating so I didn't just force push to your branch. Let's see if this passes CI and internal test. If it does, let me know if you want me to force push to your branch or use this PR instead.

Note to reviewers: patch was already approved at #10068 .

cc yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11421

Differential Revision: D9733407

Pulled By: SsnL

fbshipit-source-id: cf2ed293bb9942dcc5158934ff4def2f63252599
2018-09-08 15:25:56 -07:00
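Matrix power is conventionally computed by exponentiation-by-squaring; a pure-Python 2x2 sketch of that technique (illustrative, not the ATen implementation):

```python
def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matrix_power2(a, n):
    # exponentiation by squaring: O(log n) multiplications
    result = [[1, 0], [0, 1]]  # identity
    while n:
        if n & 1:
            result = matmul2(result, a)
        a = matmul2(a, a)
        n >>= 1
    return result

# powers of [[1,1],[1,0]] generate Fibonacci numbers
print(matrix_power2([[1, 1], [1, 0]], 5))  # [[8, 5], [5, 3]]
```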
Thomas Viehmann
d4060d2d0e Implement torch.tensordot (#10025)
Summary:
Fixes: #8988
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10025

Reviewed By: ezyang

Differential Revision: D9540967

Pulled By: yf225

fbshipit-source-id: 6ba2a7777162983977db884b693e6f4543b31aeb
2018-09-04 21:10:07 -07:00
vishwakftw
593d74061f Document torch.allclose (#11185)
Summary:
- Modify torch.autograd.gradcheck to use torch.allclose instead
- Expose doc strings

Closes #10355
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11185

Differential Revision: D9628016

Pulled By: soumith

fbshipit-source-id: 22a30622b9fe52e41b5b3540406137b59d8c5a75
2018-09-02 09:26:07 -07:00
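`torch.allclose` checks the documented tolerance condition |a - b| <= atol + rtol * |b| elementwise; a list-based sketch of that formula:

```python
def allclose(xs, ys, rtol=1e-5, atol=1e-8):
    # elementwise |x - y| <= atol + rtol * |y|
    return all(abs(x - y) <= atol + rtol * abs(y)
               for x, y in zip(xs, ys))

print(allclose([1.0, 2.0], [1.0 + 1e-9, 2.0]))  # True
print(allclose([1.0], [1.1]))                   # False
```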
zou3519
7169906249 torch.digamma (#10967)
Summary:
Fixes #10307

cc SsnL
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10967

Differential Revision: D9546748

Pulled By: zou3519

fbshipit-source-id: 764e27b1cc8dd487270b3ffa653b806c86f717dd
2018-08-29 09:43:19 -07:00
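As a quick sanity check on digamma's definition (the derivative of log-gamma), one can approximate it by a central difference of `math.lgamma`. This is a numerical sketch only; real implementations use series expansions:

```python
import math

def digamma_approx(x, h=1e-5):
    # central difference of lgamma; error is O(h^2)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

# digamma(1) is -gamma, the negated Euler-Mascheroni constant
print(digamma_approx(1.0))  # close to -0.5772156649
```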
Vishwak Srinivasan
5fb9b31ed5 Add matrix_rank (#10338)
Summary:
- Similar functionality as NumPy
- Added doc string
- Added tests

Differential Revision: D9240850

Pulled By: SsnL

fbshipit-source-id: 1d04cfadb076e99e03bdf699bc41b8fac06831bf
2018-08-22 09:58:38 -07:00
Richard Zou
ad6d62250a Add torch.compiled_with_cxx11_abi(). (#10071)
Summary:
It returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1.

Fixes #8385
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10071

Differential Revision: D9088946

Pulled By: zou3519

fbshipit-source-id: b00fd92ee340ef34f60bdd6027ceaf46dd7442c0
2018-08-01 15:34:48 -07:00
Vishwak Srinivasan
e41eb43327 Remove deprecated masked_copy (#9819)
Summary:
No tests are affected by this removal.

Closes https://github.com/pytorch/pytorch/issues/1885 and closes #9817

While I was at it, I also fixed #9876 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9819

Differential Revision: D9018126

Pulled By: SsnL

fbshipit-source-id: a9142bf4e2403bef05779a097f61fa8b7db04b71
2018-07-26 20:55:18 -07:00
bhushan
ea67a2bd11 Allows negative index to tensor.narrow (Fixes: #9546)
Summary:
Fixes #9546
Test cases added

Reviewed By: ezyang

Differential Revision: D8974842

Pulled By: zou3519

fbshipit-source-id: a7707406c2a21e8e14f9c2a8ad4d64c8b08156df
2018-07-25 09:25:45 -07:00
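Negative start indices in `narrow` are resolved by adding the dimension size, as with Python slicing; a list sketch of the semantics (illustrative):

```python
def narrow(xs, start, length):
    # negative start counts from the end of the dimension
    if start < 0:
        start += len(xs)
    assert 0 <= start and start + length <= len(xs)
    return xs[start:start + length]

print(narrow([0, 1, 2, 3, 4], 1, 3))   # [1, 2, 3]
print(narrow([0, 1, 2, 3, 4], -2, 2))  # [3, 4]
```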
Vishwak Srinivasan
360c1bbd5b Add multivariate log-gamma (mvlgamma) (#9451)
Summary:
1. Add tests in test_cuda, test_torch
2. Add doc strings

Closes https://github.com/pytorch/pytorch/issues/9378 .

Differential Revision: D8859746

Pulled By: ezyang

fbshipit-source-id: 939c309d90940a7aa08f53004c9e7b3b1c9cf54e
2018-07-24 12:10:10 -07:00
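The multivariate log-gamma reduces to a sum of ordinary log-gammas, log Gamma_p(x) = p(p-1)/4 * log(pi) + sum_{i=1..p} lgamma(x + (1-i)/2), and degenerates to plain `lgamma` at p = 1. A stdlib sketch:

```python
import math

def mvlgamma(x, p):
    # log of the multivariate gamma function Gamma_p(x)
    return (p * (p - 1) / 4.0) * math.log(math.pi) + sum(
        math.lgamma(x + (1 - i) / 2.0) for i in range(1, p + 1))
```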
Tongzhou Wang
bfe2aa093e docs fixes (#9607)
Summary:
fixes #9589 #9507 #9502 #9390
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9607

Reviewed By: ezyang, soumith

Differential Revision: D8923575

Pulled By: SsnL

fbshipit-source-id: cb61d990333b700d813ce781040c3d0325999b8c
2018-07-20 07:55:25 -07:00
bhushan23
5eaed750c2 Implementing torch.isfinite (#9487)
Summary:
fixes #9132
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9487

Reviewed By: soumith

Differential Revision: D8875529

Pulled By: SsnL

fbshipit-source-id: d1b8aa825d202cfbdca27897da6a8bc1b714f856
2018-07-18 08:25:20 -07:00
bhushan
5eb9d40cc6 Introducing IsInf (#9169)
Summary:
torch.isinf - checks element-wise for +/- inf; implements #9132
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9169

Reviewed By: SsnL

Differential Revision: D8768614

Pulled By: zou3519

fbshipit-source-id: dd1b5f6c976deb421d626e22cdd25500ec04d796
2018-07-15 07:55:09 -07:00
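Elementwise infinity checking (and the `isfinite` counterpart from the commit above) can be sketched with the stdlib; note that NaN counts as neither finite nor infinite:

```python
import math

def isinf(xs):
    return [math.isinf(x) for x in xs]

def isfinite(xs):
    return [math.isfinite(x) for x in xs]

vals = [1.0, float("inf"), float("-inf"), float("nan")]
print(isinf(vals))     # [False, True, True, False]
print(isfinite(vals))  # [True, False, False, False]
```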
Alican Bozkurt
d017e1798f add erfc
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/9366

Differential Revision: D8816768

Pulled By: soumith

fbshipit-source-id: 7d709f932cf156a2e7ec71c710837beb7f647d66
2018-07-12 08:32:02 -07:00
Vishwak Srinivasan
14cbd9adb8 Implement torch.pinverse : Pseudo-inverse (#9052)
Summary:
1. Used SVD to compute.
2. Tests in test_autograd, test_cuda and test_torch
3. Doc strings in _torch_docs.py and _tensor_docs.py

Closes #6187
Closes https://github.com/pytorch/pytorch/pull/9052

Reviewed By: soumith

Differential Revision: D8714628

Pulled By: SsnL

fbshipit-source-id: 7e006c9d138b9f49e703bd0ffdabe6253be78dd9
2018-07-05 09:11:24 -07:00
vishwakftw
08daed40f7 Fix bug in flip() (#9156)
Summary:
Closes #9147
Added a test to prevent regression in test_torch
Added entries in docs

cc ezyang weiyangfb
Closes https://github.com/pytorch/pytorch/pull/9156

Differential Revision: D8732095

Pulled By: soumith

fbshipit-source-id: 7a6892853cfc0ccb0142b4fd25015818849adf61
2018-07-04 07:24:01 -07:00
Vishwak Srinivasan
61f61de270 Expose logsumexp docs and mark log_sum_exp in distributions for internal use (#8428) 2018-06-13 12:27:58 -04:00
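`logsumexp` exists because the naive log(sum(exp(x))) overflows for large inputs; the standard max-subtraction trick keeps every exponent non-positive:

```python
import math

def logsumexp(xs):
    # subtract the max so exp() cannot overflow
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# math.exp(1000.0) would raise OverflowError, but this is fine:
print(logsumexp([1000.0, 1000.0]))  # 1000 + log(2)
```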
Seth Hendrickson
94888106a9 Add docstring for torch.sparse_coo_tensor (#8152)
* add sparse_coo_tensor docstring

* update empty tensor example

* whitespace

* whitespace again
2018-06-11 00:03:51 -04:00
Tongzhou Wang
742912512c Move signal window functions to ATen; add Blackman window (#8130)
* Move signal window functions to ATen; add Blackman window

* fix cuda test not checking scipy
2018-06-08 11:37:46 -04:00
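The Blackman window added here follows the classic three-term cosine form w[k] = 0.42 - 0.5*cos(2*pi*k/(N-1)) + 0.08*cos(4*pi*k/(N-1)); a sketch of the symmetric (non-periodic) variant:

```python
import math

def blackman(n):
    # symmetric Blackman window of length n (n >= 2)
    return [0.42
            - 0.5 * math.cos(2 * math.pi * k / (n - 1))
            + 0.08 * math.cos(4 * math.pi * k / (n - 1))
            for k in range(n)]

w = blackman(9)  # near-zero at the edges, peaks at 1.0 in the middle
```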
gchanan
8031da5479
Implement torch.as_tensor, similar to numpy.asarray. (#7109)
* Implement torch.as_tensor, similar to numpy.asarray.
torch.as_tensor behaves like torch.tensor except it avoids copies if possible; so also somewhat like tensor.new but without the size overloads.
I didn't add a requires_grad field, because we haven't decided on the semantics such as as_param.

* Remove requires_grad for doc.
2018-05-01 12:54:43 -04:00
Tongzhou Wang
1ee009599c Add torch.get_default_dtype doc (#6872)
* add torch.get_default_dtype doc

* address comments
2018-04-23 18:58:01 -04:00
li-roy
d564ecb4a5 Update docs with new tensor repr (#6454)
* Update docs with new tensor repr

* remove cuda in dtype

* remove changes to gloo submodule

* [docs] document tensor.new_* ctor

* [docs] Add docs for tensor.to(), tensor.float(), etc

* [docs] Moar examples for docs.

* [docs] Warning for tensor ctor copy behavior

* Quick fix

* [docs] Document requires_grad_()

* [docs] Add example for requires_grad_()

* update slogdet and *fft

* update tensor rst

* small fixes

* update some docs

* additional doc changes

* update torch and tensor docs

* finish changing tensor docs

* fix flake8

* slogdet with negative det

* Update functional.py tensor ctors

* Fix nll_loss docs

* reorder to move device up

* torch.LongTensor -> torch.tensor or torch.empty in docs

* update tensor constructors in docs

* change tensor constructors

* change constructors

* change more Tensor() to tensor()

* Show requires_grads_ docs

* Fix set_default_dtype docs

* Link to torch.no_grad, etc, from torch doc

* Add dtype aliases to table

* regen docs again

* Tensor attributes stub page

* link to inplace sampling

* Link torch.dtype, device, and layout

* fix dots after nonfinite floats

* better layout docs
2018-04-21 07:35:37 -04:00
Richard Zou
47bd4be4d3
[docs] More factory functions (#6709)
* More factory functions

Changes:
- Added the remaining factory and factory-like functions
- Better argument reuse via string templates
- Link under torch.rst's Creation Ops to the randomized creation ops

* Add double tick around False

* fix flake8

* Fix False

* Clarify comment: hopefully it is clearer now
2018-04-19 13:16:07 -04:00
Thomas Viehmann
bd0cc7d364 Implement torch.einsum (fixes #1889) (#6307)
* start at generic trilinear

* Implement einsum (fixes #1889)

This provides a simple implementation of einsum. It is built on
top of the work for computing bilinear (#6110).
It uses a naive left-to-right resolution at the moment.
Autograd is able to differentiate by itself.
The obvious unsupported feature is taking diagonals (einsum('ii->i', (a,))).

* add tests and docs

* fix flake8

* clean diff

* rebase on current master to resolve conflicting String wrapping

* clean up after rebase

* better commentary in einsum and sumproduct_pair

* don't say fixme if it's fixed and rename num_outputs to num_output_dims

* adapt python wrapper to use std::string instead of String to avoid typedef at::String

* typos and some vector to array conversion

* fix accidental python<->python3 change

* really fix bad rebase
2018-04-18 13:41:27 +02:00
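The sum-product that einsum computes can be made concrete by brute force: enumerate every assignment of the index letters and accumulate products into the output indices. This sketch on nested lists illustrates the semantics only; the actual implementation contracts operands pairwise left-to-right instead of enumerating all index tuples:

```python
from itertools import product

def naive_einsum(spec, *ops):
    """Brute-force einsum over nested-list operands.

    Returns a dict keyed by output index tuples (an illustrative
    representation, not a tensor).
    """
    lhs, out = spec.split("->")
    ins = lhs.split(",")
    # infer the range of each index letter from the operand shapes
    sizes = {}
    for labels, op in zip(ins, ops):
        a = op
        for l in labels:
            sizes[l] = len(a)
            a = a[0]
    letters = sorted(sizes)
    result = {}
    for assign in product(*(range(sizes[l]) for l in letters)):
        env = dict(zip(letters, assign))
        term = 1
        for labels, op in zip(ins, ops):
            a = op
            for l in labels:
                a = a[env[l]]
            term *= a
        key = tuple(env[l] for l in out)
        result[key] = result.get(key, 0) + term
    return result

# 'ij,jk->ik' is matrix multiplication
print(naive_einsum("ij,jk->ik", [[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```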
Richard Zou
7de61c3b8c
Update tensors.rst Tensor introduction (#6670)
Changes:
- Deleted docs for old constructor. Add link to new `torch.tensor` ctor
- Add docs for `torch.tensor`
- Add some info on dtypes to the top of `tensors.rst`.
2018-04-17 16:52:22 -04:00
Richard Zou
1f2829dd2a
Update tensor factory method docs (#6640)
* Update tensor factory method docs

Also add new docs for `torch.empty`.

* Add full; some refactoring to make docs nicer
2018-04-17 14:30:46 -04:00
gchanan
d7cb78478f Split set_default_tensor_type(dtype) into set_default_dtype(dtype). (#6599)
* Split set_default_tensor_type(dtype) into set_default_dtype(dtype).

* Fix flake8.

The difference between this one and set_default_tensor_type is that it only sets the scalar type. What determines the type + device of a tensor returned from a factory function with defaults is the default tensor type + the current device (if the default tensor type is cuda). This just changes the scalar type of the default tensor type.

We do eventually want to deprecate set_default_tensor_type; it is not clear how to do that in a sensible and backwards compatible way.
2018-04-16 13:49:00 -04:00
Richard Zou
99cfb56698 Add docs for torch.randn_like (#6565)
* Add docs for torch.randn_like

* Address comments

* Address commetns

* Address comments
2018-04-13 11:33:56 -04:00
Naman Jain
1e5611014d Adding autofunction entry for torch.randint (#6507)
* added randint function in ATEN yaml as well as Tensorfactories.cpp

* corrected randint

* randint with overloading complete,getting tuple of ints behaviour though

* done randintlike and randint_out

Left : adding docs and test, and remove the bug on size = (5)

* Removed my error messages, ThRandomTensor will handle all exceptions

* added docs and tests, corrected a mistake

Tested with manual seeds in some test cases as well. Seems fine to me (check documentation though)

* corrected indentation to spaces, and improved sizes argument description

* made documentation argument description shorter

* added whitespace after ',' in torch docs

* added spaces in documentation

* added more tests (including bounds and overloading features)

* added whitespaces in test_torch

* removed trailing whitespaces

* removed whitespace from a blank line

* removed positive requirement from docs. Added dtype argument and gave eg

* made randint over randn in all files

* changed to data type for dtype in docs for randint

* added autofunction entry for randint in torch.rst
2018-04-11 12:34:25 -04:00
Tongzhou Wang
0dff2b5e35
[fft] [3 of 3] Implements backward of fft ifft rfft irfft (#5537)
* change irfft signal_sizes arg to be the last

* add docs for fft, ifft, rfft, irfft; update doc for stft

* fix typo in window function docs

* improve gradcheck error message

* implement backward of fft, ifft, rfft, irfft

* add grad tests for fft, ifft, rfft, irfft

* fix nits and typos from #6118

* address comments
2018-04-10 22:09:36 -04:00
Vishwak Srinivasan
0aa35780bf [ready] Implement log2 and log10 in PyTorch (#6272)
* Implemented log2 and log10

* Re-add incorrectly removed files

* Fix minor bugs

* Fix log1p docs

* Add a try-except for python2 math module in log2 test

* Revert changes made to aten/doc/*

* Fix docstring errors

* Fix windows build
2018-04-05 14:28:37 -04:00
Peter Goldsborough
9ba70856a1 Add max_values and argmax convenience functions to ATen (#6201)
* Add max_values and argmax convenience functions to ATen

* Add documentation for torch.argmax/argmin and skip max_values

* Add tests for argmax/argmin

* Dont default the dim argument

* Use dim=0 in test_torch.py for argmax tests

* Implement argmin()  and argmax() without dim

* Call .contiguous() before .view(-1)
2018-04-04 15:53:26 -04:00
Thomas Viehmann
32ba2ca203 add documentation for diagflat and diagonal (#6161) 2018-03-31 18:03:21 +02:00
Tongzhou Wang
24fca0efb2 fix some methods not showing up in doc (#5882) 2018-03-19 14:48:15 -04:00
Sam Gross
a2641500bf Implement torch.reshape and Tensor.reshape (#5575)
* Implement torch.reshape and Tensor.reshape

This implements reshape which has similar semantics to numpy.reshape. It
will return a view of the source tensor if possible. Otherwise, it
returns a copy.

* Remove in-place reshape_ that was an alias for resize_

* Update documentation
2018-03-12 16:20:40 -04:00
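The view-vs-copy distinction cannot be reproduced on Python lists, but the shape semantics can; a sketch of 2-D reshape that (unlike `torch.reshape` on a contiguous tensor) always copies:

```python
def reshape(xs, rows, cols):
    # flatten, then regroup into the requested shape;
    # total element count must be preserved
    flat = [v for row in xs for v in row]
    assert len(flat) == rows * cols
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

print(reshape([[1, 2, 3], [4, 5, 6]], 3, 2))  # [[1, 2], [3, 4], [5, 6]]
```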
Tongzhou Wang
71d73211f4 [ready] torch.* doc update for Variable/Tensor merge, and other improvements (#5443)
* 1. Update doc to reflect changes in Variable/Tensor merge, and new printing style
2. Remove functions in torch/functional.py that are already implemented with native_function
3. Add set_detault_tensor_type doc

* fix torch.split

* py2 unicode string fix

* update torch.gels doc

* address @fmassa 's comments

* double-colon
2018-03-08 23:02:38 -05:00
theweiho
c2721ab503 Add per-element unique op for CPU (#5503)
Questions/possible future works:

How to template-ize to extend support beyond LongTensor?
How to check if autograd works (and if not, how to add explicit gradient)?
CUDA support?
Testing command:
DEBUG=1 NO_CUDA=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build && DEBUG=1 NO_CUDA=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py develop && python3 test/test_torch.py

Partially fixes #2031

* Initial commit for unique op

* Working unique with test

* Make inverse indices shape conform to input

* flake8 whitespace removal

* address review comment nits

* Expose fn and add docs. Explicitly declare no gradients

* Trial generic dispatch implementation

* Add tests for generics

* flake8 whitespace

* Add basic CUDA error throwing and templateize set

* Explicit contiguous and AT_DISPATCH_ALL_TYPES return

* Remove extraneous numpy conversion

* Refactor out .data calls

* Refactored to variable return length API with wrapper fn as opposed to returning a 0-length tensor, per off-line reviewer comments

* Remove A

* Don't use hidden torch._unique() in test

* Fix documentations
2018-03-07 18:16:51 -05:00
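"Make inverse indices shape conform to input" means each input element maps to its position in the unique output, so the input can be reconstructed. A sketch of sorted unique with inverse indices (illustrative only, not the templated C++ implementation):

```python
def unique(xs):
    # sorted unique values, plus an inverse index per input element
    values = sorted(set(xs))
    index = {v: i for i, v in enumerate(values)}
    inverse = [index[x] for x in xs]
    return values, inverse

values, inverse = unique([3, 1, 3, 2])
print(values, inverse)  # [1, 2, 3] [2, 0, 2, 1]
```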
Choongwoo Han
cf71385ec9 Implement torch.isnan (#5273)
* Implement torch.isnan

* Simple python implementation

* Fix typo
2018-02-19 19:46:35 -05:00
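A common way to implement a NaN check, since NaN is the only IEEE 754 value that compares unequal to itself (illustrative; the PR's actual approach may differ):

```python
def isnan(x):
    # NaN != NaN is true; every other value equals itself
    return x != x

print(isnan(float("nan")))  # True
print(isnan(1.0))           # False
```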
Choongwoo Han
fae6c67121 Configurable flushing denormal numbers on CPU (#5294)
* Configurable flushing denormal numbers on CPU

* Formatting

* Update docs

* Minor doc changes
2018-02-19 19:23:43 -05:00
Tongzhou Wang
5918243b0c Methods for checking CUDA memory usage (#4511)
* gpu mem allocated

* add test

* addressed some of @apaszke 's comments

* cache stats

* add more comments about test
2018-01-09 11:47:48 -05:00
Vishwak Srinivasan
e519ef5337 Adding torch.expm1() and its inplace function (#4350) 2017-12-28 18:56:03 +09:00
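`expm1` matters because computing exp(x) - 1 directly loses precision to cancellation near zero, where the true answer is approximately x itself:

```python
import math

x = 1e-10
naive = math.exp(x) - 1.0  # catastrophic cancellation near 0
accurate = math.expm1(x)   # computed without forming exp(x) first
print(naive, accurate)     # accurate stays much closer to x
```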
SsnL
9a48f8d7c3 add tests for btrifact_with_info and doc for btriunpack 2017-12-24 03:08:28 +08:00