Commit Graph

299 Commits

zou3519
23bffc4f14 Fix most documentation warnings (#27782)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27782

Warnings show up when running `make html` to build documentation. All of
the warnings are very reasonable and point to bugs in our docs. This PR
attempts to fix most of those warnings.

In the future we will add something to the CI that asserts that there
are no warnings in our docs.

Test Plan: - build and view changes locally

Differential Revision: D17887067

Pulled By: zou3519

fbshipit-source-id: 6bf4d08764759133b20983d6cd7f5d27e5ee3166
2019-10-13 10:34:01 -07:00
Hong Xu
4da68227e9 Clarify that when the divisor in div is zero and the dividend is integral, the behavior is undefined. (#25968)
Summary:
Currently when an integral tensor is divided by zero, it emits a
"floating point exception" (which can be different from system to
system). Clarify in the document that nothing would be guaranteed under
this circumstance.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25968

Differential Revision: D17888097

Pulled By: ezyang

fbshipit-source-id: 7c3ce3ac4080479d637cc2710b6aa3ae7e42431d
2019-10-11 15:37:09 -07:00
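A small sketch of the distinction this commit documents (hedged: floating-point behavior follows IEEE 754; the integral case is explicitly left undefined):

```python
import torch

# Floating-point division by zero is well defined by IEEE 754:
x = torch.tensor([1.0, -1.0, 0.0])
q = x / torch.tensor(0.0)  # inf, -inf, nan
# Integral division by zero, by contrast, is undefined and may crash
# with a "floating point exception", so it must not be relied upon.
```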
Dmytro Dzhulgakov
d931c8bf75 substantially restructure all quantized docs to group logically (#27677)
Summary:
- Make everything clickable
- Organize APIs logically in subsections
- Fix many typos
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27677

Differential Revision: D17850650

Pulled By: dzhulgakov

fbshipit-source-id: 060f6ed988d1c4beecba6bc8daf55626961fac98
2019-10-10 00:50:02 -07:00
Dylan Bespalko
7c472ec597 Vectorized complex unary and binary op support. (#26500)
Summary:
Added Complex support with AVX to unary ops and binary ops.

I need to add nan propagation to minimum() and maximum() in the future.
In-tree changes to pytorch to support complex numbers are being submitted here.
Out-of-tree support for complex numbers is here: pytorch-cpu-strided-complex extension

Preliminary Benchmarks are here.

I tried the rrii and riri memory layouts and found that riri is better in most situations.
Divide is very slow because 1/(x+iy) cannot be simplified without multiplying by the conjugate.
Sqrt is also very slow.
Reciprocal could be sped up after I add conj().
Everything else is typically within 20% of the real-number performance.
Questions:

Why does macOS not support MKL here? `#if AT_MKL_ENABLED() && !defined(__APPLE__)` in vml.h. MKL does support some complex operations like Abs, so I was curious about trying it.
Is MKL just calling AVX?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26500

Differential Revision: D17835431

Pulled By: ezyang

fbshipit-source-id: 6746209168fbeb567af340c22bf34af28286bd54
2019-10-09 12:49:21 -07:00
vishwakftw
0222eceaaa Remove outdated note in cholesky_solve and triangular_solve doc strings (#26989)
Summary:
We do support inputs with dim > 2 in the `_out` variants.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26989

Differential Revision: D17785632

Pulled By: soumith

fbshipit-source-id: d42ba7ca9c225ad1a26ff3b410d0c5c08eaed001
2019-10-06 23:28:48 -07:00
Ilia Cherniavskii
74572fc985 Relax restrictions on set_num_threads (#27190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27190

Allow set_num_threads to be called multiple times when the TBB
parallel backend is used

Test Plan:
BUILD_BINARY=1 USE_TBB=1 ATEN_THREADING=TBB python setup.py develop install --cmake
./build/bin/test_parallel
./build/bin/thread_init_test

Reviewed By: kostmo

Differential Revision: D17704236

Pulled By: ilia-cher

fbshipit-source-id: 274380795e78ba417301c5faa18c9e9d3198bd5e
2019-10-03 15:51:03 -07:00
Brian Vaughan
0c6a18de8d Add torch.promote_types function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26655

Test Plan: Imported from OSS

Differential Revision: D17556196

Pulled By: nairbv

fbshipit-source-id: eeebce8968bfb2ffd25c066595bc19e5dee6ea6f
2019-09-27 16:48:38 -07:00
Brian Vaughan
2a43b74196 Add torch.can_cast(from, to) function (#26805)
Summary:
https://github.com/pytorch/pytorch/issues/25472
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26805

Differential Revision: D17628434

Pulled By: nairbv

fbshipit-source-id: 6af8031ac3afda1505d338075c0637ad043f8b7e
2019-09-27 08:40:34 -07:00
Brian Vaughan
002c250139 Expose a torch.result_type and simplify tensor iterator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26012

Test Plan: Imported from OSS

Differential Revision: D17556197

Pulled By: nairbv

fbshipit-source-id: c0be3ac9e99fecc26a181e301defc1942bc6708c
2019-09-25 06:52:23 -07:00
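The three type-promotion commits above can be illustrated together (hedged: a sketch of current torch semantics, assuming the default dtype is float32):

```python
import torch

# promote_types returns the smallest dtype both inputs can be safely cast to
assert torch.promote_types(torch.int32, torch.float32) == torch.float32

# can_cast(from, to) checks the type-promotion casting rules
assert torch.can_cast(torch.int32, torch.float64)
assert not torch.can_cast(torch.float64, torch.int32)

# result_type computes the dtype an operation on the operands would produce:
# an int tensor combined with a Python float promotes to the default float dtype
assert torch.result_type(torch.tensor([1, 2]), 1.0) == torch.float32
```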
Hong Xu
71ec9a0035 Clarify and correct the doc of atan2.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26180

Reviewed By: ezyang

Differential Revision: D17500224

Pulled By: albanD

fbshipit-source-id: 98b9f32aa443963fe1e89b83e15bed9ff83a2694
2019-09-20 12:58:12 -07:00
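A quick illustration of the clarified behavior (hedged: atan2 is the quadrant-aware two-argument arctangent, so the result lies in (-pi, pi]):

```python
import math
import torch

# atan2(input, other) is elementwise arctan(input / other), with the
# quadrant chosen from the signs of both arguments.
y = torch.tensor([1.0, -1.0])
x = torch.tensor([-1.0, -1.0])
angles = torch.atan2(y, x)  # 3*pi/4 and -3*pi/4
```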
François Darmon
ec3793362f Documentation change of torch.where (#25554)
Summary:
Change the doc of torch.where: the parameters are `x` and `y`, not `input` and `other`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25554

Differential Revision: D17227193

Pulled By: soumith

fbshipit-source-id: 96d8a6f60ae8e788648247320ae715d0058de2b4
2019-09-06 12:55:16 -07:00
vishwakftw
1e4832ffad Enable broadcasting of batch dimensions RHS and LHS tensors for lu_solve (#24333)
Summary:
Changelog:
- Enable broadcasting of RHS and LHS tensors for lu_solve. This means that you can now have RHS with size `3 x 2` and LHS with size `4 x 3 x 3` for instance
- Remove deprecated behavior of having 2D tensors for RHS. Now all tensors have to have a last dimension which equals the number of right hand sides
- Modified docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24333

Test Plan: - Add tests for new behavior in test_torch.py with a port to test_cuda.py

Differential Revision: D17165463

Pulled By: zou3519

fbshipit-source-id: cda5d5496ddb29ed0182bab250b5d90f8f454aa6
2019-09-03 15:14:48 -07:00
Igor Fedan
896cd1c510 Documentation for cdist (#25221)
Summary:
https://github.com/pytorch/pytorch/issues/21730
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25221

Differential Revision: D17073908

Pulled By: ifedan

fbshipit-source-id: 19e2534183d6a2a7e9cdfcee4734cff1b124e05a
2019-09-03 14:16:07 -07:00
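A minimal sketch of the documented function (hedged: `cdist` computes the p-norm distance between every pair of row vectors):

```python
import torch

a = torch.tensor([[0.0, 0.0], [1.0, 0.0]])
b = torch.tensor([[0.0, 0.0], [0.0, 3.0], [4.0, 3.0]])
d = torch.cdist(a, b, p=2)  # shape (2, 3); d[i, j] = ||a[i] - b[j]||_2
```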
Hong Xu
07fe66f25e logical_xor doc cleanup
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25364

Differential Revision: D17105048

Pulled By: gchanan

fbshipit-source-id: 8bef3e330ef00decb3118a5ae7d17308a58878a2
2019-08-29 09:09:16 -07:00
Gregory Chanan
f362a5a04b Revert "Let logical_xor support non-bool tensors." (#25269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25269

This reverts commit 5ca612b55e.

Test Plan: Imported from OSS

Differential Revision: D17080088

fbshipit-source-id: e6b6215b713910c448e9a6b831b08f28b849c64a
2019-08-28 15:41:51 -07:00
Kexuan Sun
4b3ea92787 Test if descriptions of args are in the template (#24161)
Summary:
As in https://github.com/pytorch/pytorch/issues/23439, some descriptions of arguments in `_torch_docs.py` have been replaced by `common_args`. It would be helpful to check whether descriptions in new docs can likewise be replaced in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24161

Differential Revision: D16889293

Pulled By: ezyang

fbshipit-source-id: bf6f581494482d6eb32e634f73e84a4586766230
2019-08-20 16:34:50 -07:00
Hong Xu
5ca612b55e Let logical_xor support non-bool tensors.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23978

Test Plan: Imported from OSS

Differential Revision: D16719299

Pulled By: gchanan

fbshipit-source-id: 2fe170be6090733e20410db7cf99266543299c58
2019-08-15 12:21:31 -07:00
Hong Xu
00e4870001 Let logical_not support non-bool tensors. (#23916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23916

Test Plan: Imported from OSS

Differential Revision: D16719300

Pulled By: gchanan

fbshipit-source-id: 5be6aeea9a38cc40ad59d0449d25a25f7dfa2787
2019-08-15 12:21:27 -07:00
Hong Xu
338f9c860f Add logical_xor operator (#23847)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23847

Related to #23836

Test Plan: Imported from OSS

Differential Revision: D16678300

Pulled By: gchanan

fbshipit-source-id: 67020aca5830b6bec2f561105954e0a8c2ee37e0
2019-08-15 08:40:25 -07:00
Hong Xu
1f4c73618c Add logical_not operator. (#23839)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23839

Close #23836

Test Plan: Imported from OSS

Differential Revision: D16678301

Pulled By: gchanan

fbshipit-source-id: 54e7b3f3b04c577e239b88493247e1c036266774
2019-08-15 08:40:21 -07:00
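The logical-op commits above (operators added, then extended to non-bool tensors) can be sketched as follows (hedged: current semantics, where nonzero elements are treated as True):

```python
import torch

a = torch.tensor([0, 1, 10, -4])  # non-bool: nonzero counts as True
b = torch.tensor([0, 1, 0, 1])

not_a = torch.logical_not(a)      # tensor([ True, False, False, False])
xor_ab = torch.logical_xor(a, b)  # tensor([False, False,  True, False])
```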
Vishwak Srinivasan
5411d1a27b Fix docstring for argmax (#23775)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/23757
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23775

Differential Revision: D16644692

Pulled By: soumith

fbshipit-source-id: d759bb85f73383021e4657325dbac79913042ad2
2019-08-07 09:42:19 -07:00
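A short sketch of the behavior the corrected docstring describes (hedged: the dim-less form indexes into the flattened tensor):

```python
import torch

t = torch.tensor([[1.0, 5.0],
                  [2.0, 0.0]])
flat = torch.argmax(t)            # index into the flattened tensor: 1
per_row = torch.argmax(t, dim=1)  # reduce along dim 1: tensor([1, 0])
```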
Kexuan Sun
8e9f9b424f Replace descriptions of args in doc with template (#23439)
Summary:
Many descriptions of arguments could be replaced by items in the template such as `factory_common_args`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23439

Differential Revision: D16688527

Pulled By: ezyang

fbshipit-source-id: 406ce45d72e297f46b5fa9ea5472b3284c8d4324
2019-08-07 08:50:09 -07:00
Ailing Zhang
51d59a43ba fix torch.frac documentation (#23830)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/13968 .

Following the math formula in wiki: https://en.wikipedia.org/wiki/Fractional_part
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23830

Differential Revision: D16656871

Pulled By: ailzhang

fbshipit-source-id: a71467870cf9566e0c7b1a045f72607dada81e1f
2019-08-05 17:43:17 -07:00
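A one-line illustration of the corrected formula (hedged: `frac(x) = x - trunc(x)`, so the result keeps the sign of the input):

```python
import torch

f = torch.frac(torch.tensor([1.5, -1.5, 2.0]))  # tensor([ 0.5, -0.5,  0.0])
```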
Hong Xu
1aa4afde80 Document bool tensors for bitwise_not. (#23800)
Summary:
Requested by vadimkantorov at https://github.com/pytorch/pytorch/pull/23621#issuecomment-517945167
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23800

Differential Revision: D16651008

Pulled By: gchanan

fbshipit-source-id: 4ce21158bd5dd142edcd951e7ac941521b3d54af
2019-08-05 12:11:45 -07:00
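A sketch of the documented bool-tensor behavior (hedged: on bool tensors `bitwise_not` acts as logical NOT; on integers it is the two's-complement `~x == -x - 1`):

```python
import torch

flipped = torch.bitwise_not(torch.tensor([True, False]))  # logical NOT
complemented = torch.bitwise_not(torch.tensor([0, 1, -2]))  # ~x == -x - 1
```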
Iurii Zdebskyi
19c675178f Updated docs and added deprecation warnings to acknowledge a bool tensor (#22261)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22261
ghimport-source-id: 1611d62d056a04c0ad15ef662e594a3d206a78e2

Test Plan: Imported from OSS

Differential Revision: D16005990

Pulled By: izdeby

fbshipit-source-id: 2413824aa75a0755719e4df11acd21e6607e5a85
2019-08-05 07:42:34 -07:00
vishwakftw
8e2b9de860 Document empty_strided (#23735)
Summary:
Changelog:
- Add doc string for torch.empty_strided
- Remove empty file named `python` in test/

Fixes https://github.com/pytorch/pytorch/issues/23688
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23735

Differential Revision: D16623438

Pulled By: ailzhang

fbshipit-source-id: acd5a47da9220243467ccc6bff92edd209cca709
2019-08-02 20:02:44 -07:00
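A minimal sketch of the newly documented function (hedged: the tensor's memory is uninitialized, so only the shape and stride are checked here):

```python
import torch

# empty_strided creates an uninitialized tensor with an explicit size and stride
t = torch.empty_strided((2, 3), (1, 2))
# t's values are arbitrary; its layout is what empty_strided controls
```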
vishwakftw
5d130e4232 Allowing batching for det/logdet/slogdet operations (#22909)
Summary:
Changelog:
- Add batching for det / logdet / slogdet operations
- Update derivative computation to support batched inputs (and consequently batched outputs)
- Update docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22909

Test Plan:
- Add a `test_det_logdet_slogdet_batched` method in `test_torch.py` to test `torch.det`, `torch.logdet` and `torch.slogdet` on batched inputs. This relies on the correctness of `torch.det` on single matrices (tested by `test_det_logdet_slogdet`). A port of this test is added to `test_cuda.py`
- Add autograd tests for batched inputs

Differential Revision: D16580988

Pulled By: ezyang

fbshipit-source-id: b76c87212fbe621f42a847e3b809b5e60cfcdb7a
2019-07-31 10:01:32 -07:00
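The batching described above can be sketched as follows (hedged: identity matrices are used so the expected determinants are known exactly):

```python
import torch

batch = torch.eye(3).unsqueeze(0).repeat(4, 1, 1)  # shape (4, 3, 3)
dets = torch.det(batch)                            # shape (4,), all 1.0
sign, logabsdet = torch.slogdet(batch)             # signs 1.0, log|det| 0.0
```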
vishwakftw
b3a9a7a9b9 Rename gels to lstsq (#23460)
Summary:
Changelog:
- Rename `gels` to `lstsq`
- Fix all callsites
- Rename all tests
- Create a tentative alias for `lstsq` under the name `gels` and add a deprecation warning to discourage its use.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23460

Test Plan: - All tests should pass to confirm that the patch is correct

Differential Revision: D16547834

Pulled By: colesbury

fbshipit-source-id: b3bdb8f4c5d14c7716c3d9528e40324cc544e496
2019-07-30 09:56:04 -07:00
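For context (hedged: both `gels` and the `lstsq` introduced here were later superseded; in current releases the equivalent least-squares solver is `torch.linalg.lstsq`):

```python
import torch

# Least-squares fit of y = a + b*x at x = 1, 2, 3 with y = 1, 2, 2
A = torch.tensor([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
B = torch.tensor([[1.0], [2.0], [2.0]])
sol = torch.linalg.lstsq(A, B).solution  # shape (2, 1): intercept, slope
```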
Mingbo Wan
f546a3b8d8 fixing documentation, issue 22697 (#23268)
Summary:
As fmassa commented:

> Agree, it should probably be weight, start, end
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23268

Differential Revision: D16493403

Pulled By: zou3519

fbshipit-source-id: 51ed07f6f7abdbd41dc323570aed41d804fa9c1b
2019-07-29 07:24:49 -07:00
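The argument ordering being fixed here can be illustrated as follows (hedged: `lerp(start, end, weight) = start + weight * (end - start)` in current releases):

```python
import torch

start = torch.tensor([0.0, 10.0])
end = torch.tensor([10.0, 20.0])
out = torch.lerp(start, end, 0.5)  # tensor([ 5., 15.])
```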
Pavel Belevich
dd79d45c5a Added torch.bitwise_not docstr (#23397)
Summary:
Fixing https://github.com/pytorch/pytorch/issues/23311
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23397

Differential Revision: D16505107

Pulled By: pbelevich

fbshipit-source-id: 8d515fc27e253469393941c8da23d8e0510e64df
2019-07-25 18:32:58 -07:00
Will Feng
3ed79f4b6c Fix argument names in torch doc (#22973)
Summary:
I manually went through all functions in `torch.*` and corrected any mismatch between the arguments mentioned in doc and the ones actually taken by the function. This fixes https://github.com/pytorch/pytorch/issues/8698.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22973

Differential Revision: D16419602

Pulled By: yf225

fbshipit-source-id: 5562c9b0b95a0759abee41f967c45efacf2267c2
2019-07-24 11:22:45 -07:00
Vishwak Srinivasan
0ab19d66ee Port lu_solve to ATen (#22379)
Summary:
Changelog:
- Port TH implementation to ATen/native/BatchLinearAlgebra.cpp
- Port THC implementation to ATen/native/cuda/BatchLinearAlgebra.cu
- Remove TH/THC implementations
- Update doc strings
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22379

Test Plan: - Added new tests in test_torch.py (port to test_cuda.py exists)

Differential Revision: D16089645

Pulled By: zou3519

fbshipit-source-id: dc8561aadacacb23e80c375b4fec687df2b6bbc8
2019-07-23 19:11:35 -07:00
Kexuan Sun
45d3f495ef Add document of function torch.as_strided (#22842)
Summary:
Documentation for `torch.as_strided` and `Tensor.as_strided` was missing, as mentioned in https://github.com/pytorch/pytorch/issues/9886
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22842

Differential Revision: D16254106

Pulled By: soumith

fbshipit-source-id: dee142483fb9ef7bea84bd44a970b6eccdcdc471
2019-07-23 06:06:00 -07:00
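A small sketch of the newly documented function (hedged: `as_strided` reinterprets the same storage with an explicit size and stride, so overlapping views are possible):

```python
import torch

x = torch.arange(6.0)
# A 2x2 sliding-window view over x's storage: rows start one element apart
w = x.as_strided((2, 2), (1, 1))  # tensor([[0., 1.], [1., 2.]])
```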
Hong Xu
502766e99e Add the mathematical definition of torch.sign to clarify this is the sgn function.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22894

Differential Revision: D16345027

Pulled By: ezyang

fbshipit-source-id: 1421571f1f8764539a35b9060d90ea6075f889d3
2019-07-18 11:45:27 -07:00
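The clarified definition (hedged: sgn(x) is -1, 0, or 1 by the sign of x) in one line:

```python
import torch

s = torch.sign(torch.tensor([-3.0, 0.0, 2.5]))  # tensor([-1., 0., 1.])
```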
Tongzhou Wang
14ecf92d42 Slightly improve irfft doc
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22995

Differential Revision: D16356435

Pulled By: soumith

fbshipit-source-id: f6cfd9990fd79faebfb566704359c866ddf36525
2019-07-18 03:12:49 -07:00
Hong Xu
3ea04b59c0 Resolve the doc issue in which two asterisks have weird links. (#22896)
Summary:
Asterisks start emphasis in reST. We should either escape them or mark them up as interpreted text.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22896

Differential Revision: D16282869

Pulled By: zou3519

fbshipit-source-id: 15ec4286434db55fb8357b1a12e6f70ef54f8c66
2019-07-16 11:23:06 -07:00
vishwakftw
7d055c21b3 Port SVD to ATen, enable batching for matrix inputs (#21588)
Summary:
Changelog:
- Port SVD TH implementation to ATen/native/BatchLinearAlgebra.cpp
- Port SVD THC implementation to ATen/native/cuda/BatchLinearAlgebra.cu
- Allow batches of matrices as arguments to `torch.svd`
- Remove existing implementations in TH and THC
- Update doc string
- Update derivatives to support batching
- Modify nuclear norm implementation to use at::svd instead of _batch_svd
- Remove _batch_svd as it is redundant
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21588

Test Plan:
- Add new test suite for SVD in test_torch.py with port to test_cuda.py
- Add tests in common_methods_invocations.py for derivative testing

Differential Revision: D16266115

Pulled By: nairbv

fbshipit-source-id: e89bb0dbd8f2d58bd758b7830d2389c477aa61fb
2019-07-15 13:34:01 -07:00
Hong Xu
e2dc1fc715 Add a bitwise NOT operator for integer and Boolean types (CPU).
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22283

Test Plan: Imported from OSS

Differential Revision: D16183576

Pulled By: colesbury

fbshipit-source-id: 2e539fab8ff885dddb9bff334d1d784b28d65b8f
2019-07-10 12:17:44 -07:00
Brennan Vincent
e210c65097 Add torch.where overload with only condition argument (#21986)
Summary:
Requested in https://github.com/pytorch/pytorch/issues/21798
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21986

Differential Revision: D16081577

Pulled By: zhangguanheng66

fbshipit-source-id: 658c0f451b833aceb1a41ee424c7990eec00bc02
2019-07-02 18:18:15 -07:00
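The new overload can be sketched as follows (hedged: with only a condition, `torch.where` returns a tuple of index tensors, one per dimension):

```python
import torch

x = torch.tensor([[0, 3], [4, 0]])
rows, cols = torch.where(x > 0)  # indices of the True elements
# rows == tensor([0, 1]), cols == tensor([1, 0])
```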
Brennan Vincent
dcd902bdde provide "size" parameter in torch.normal when called with two floats (#20545)
Summary:
This has been requested in https://github.com/pytorch/pytorch/issues/20323

(It is still not exactly the same as NumPy, which allows you to pass tensors as mean/std and broadcast them with size, but the present PR is extremely simple and does the main thing people are asking for)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20545

Differential Revision: D15358736

Pulled By: zhangguanheng66

fbshipit-source-id: 762ea5eab5b8667afbac2df0137df017ba6e413c
2019-07-02 18:18:11 -07:00
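The added overload in one line (hedged: two float arguments plus an explicit `size`, matching the request in the linked issue):

```python
import torch

samples = torch.normal(0.0, 1.0, size=(2, 3))  # 2x3 tensor of N(0, 1) draws
```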
Hong Xu
e259894e83 Test raising TypeError in torch.from_numpy() (#21607)
Summary:
With some additional cleanup.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21607

Differential Revision: D16046063

Pulled By: li-roy

fbshipit-source-id: 15256a0e94afea39db3cb581c546c2a18a8a7fda
2019-06-27 23:54:47 -07:00
vishwakftw
bcb5fd8f06 Port symeig to ATen and enable batching of inputs (#21858)
Summary:
Changelog:
- Port `symeig` from TH/THC to ATen
- Enable batching of matrix inputs for `symeig`
- Modify derivative computation based on batching
- Update docs to reflect the change
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21858

Test Plan: - Added additional tests in `test_torch.py` (with a port to `test_cuda.py`) and `common_methods_invocations.py` to test if both the port and batching work.

Differential Revision: D15981789

Pulled By: soumith

fbshipit-source-id: ab9af8361f8608db42318aabc8421bd99a1ca7ae
2019-06-25 12:13:27 -07:00
Brennan Vincent
4cd7d78718 correct arange docs (#21992)
Summary:
https://github.com/pytorch/pytorch/issues/21579 correctly points out an inaccuracy in the docs for `arange`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21992

Differential Revision: D15914411

Pulled By: umanwizard

fbshipit-source-id: 3eb1734b29af3f3858f0f4d54c71e28dbda5c75b
2019-06-20 12:36:00 -07:00
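The corrected behavior can be sketched as follows (hedged: `arange` produces `ceil((end - start) / step)` values, with `end` itself excluded):

```python
import torch

t = torch.arange(0, 2.5, 0.5)  # tensor([0.0, 0.5, 1.0, 1.5, 2.0]); 2.5 excluded
```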
Syed Tousif Ahmed
effcc398c4 Refactor Random Number Generators in ATen (#21555)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21555
ghimport-source-id: dd900a8c3e1ef9ef1e011b8bb5476626d18cc462

Test Plan: Imported from OSS

Differential Revision: D15875780

Pulled By: ezyang

fbshipit-source-id: 6e04e90af62ab9c9593d74f344a3a084aaaf6f43
2019-06-19 13:54:09 -07:00
vishwakftw
c9ba3f699d Bag of documentation fixes (#21846)
Summary:
Thanks to henon for raising the issues.

Fixes https://github.com/pytorch/pytorch/issues/21830
Fixes https://github.com/pytorch/pytorch/issues/21831
Fixes https://github.com/pytorch/pytorch/issues/21832
Fixes https://github.com/pytorch/pytorch/issues/21827
Fixes https://github.com/pytorch/pytorch/issues/21822
Fixes https://github.com/pytorch/pytorch/issues/21820
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21846

Differential Revision: D15847389

Pulled By: soumith

fbshipit-source-id: 421cc48af646a2618af731697de7d4de83d3eabe
2019-06-16 19:35:27 -07:00
Shagun
b9675efb5a Fix the issue of sizes vs size for tensor creation ops (#21686)
Summary:
Related to [pytorch#20921](https://github.com/pytorch/pytorch/issues/20921)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21686

Differential Revision: D15816109

Pulled By: gchanan

fbshipit-source-id: 4428b8e77b6c8b297ddb77e58fc1cb916c9cc46e
2019-06-14 07:34:56 -07:00
Brennan Vincent
699de487db numerical integration "trapz" function. (#21610)
Summary:
This is intended to match [numpy.trapz](https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html): numerical integration based on the trapezoid rule.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21610

Differential Revision: D15747618

Pulled By: umanwizard

fbshipit-source-id: 8eadb2e75c9877b07592d875ca0b2cca6cb72297
2019-06-12 15:30:13 -07:00
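A minimal sketch of the new function (hedged: the trapezoid rule sums `0.5 * (y[i] + y[i+1]) * (x[i+1] - x[i])` over consecutive sample points):

```python
import torch

y = torch.tensor([1.0, 2.0, 3.0])
x = torch.tensor([0.0, 1.0, 2.0])
area = torch.trapz(y, x)  # (1+2)/2 + (2+3)/2 = 4.0
```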
Syed Tousif Ahmed
ae342fd076 Refactor Random Number Generators in ATen (#21364)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21364
ghimport-source-id: ca7d37e10190ba46dc8512f437404ca9216d3369

Differential Revision: D15696497

Pulled By: ezyang

fbshipit-source-id: 2e713b8566ae915e175b5a79ac1dd9b86cc2a23d
2019-06-12 13:01:30 -07:00
Brennan Vincent
039629cedd fix incorrect use of TeX in docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21649

Differential Revision: D15766392

Pulled By: umanwizard

fbshipit-source-id: a362ec06e971ee12c47a45bc9c15cc773ec878e3
2019-06-11 16:19:40 -07:00
Brennan Vincent
f4f32cecfd numpy like nonzero (called nonzero_tuple) (#20293)
Summary:
No performance degradation compared to NumPy when indexing:

```
In [15]: x=torch.randn((1000,1000))

In [16]: %timeit x[x.nonzero_tuple()]
4.63 ms ± 102 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [17]: y=x.numpy()

In [18]: %timeit y[y.nonzero()]
14.6 ms ± 281 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [20]: x=x.t()

In [22]: %timeit x[x.nonzero_tuple()]
9.01 ms ± 626 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [24]: y=x.numpy()

In [25]: %timeit y[y.nonzero()]
16.8 ms ± 770 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20293

Differential Revision: D15358754

Pulled By: umanwizard

fbshipit-source-id: 1344aabd95c969eeda9780c475a39551231879e1
2019-06-06 12:50:59 -07:00
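For context (hedged: the `nonzero_tuple` name from this PR did not survive; in current releases the equivalent NumPy-style behavior is spelled `nonzero` with `as_tuple=True`):

```python
import torch

x = torch.tensor([[0, 1], [2, 0]])
idx = torch.nonzero(x, as_tuple=True)  # one index tensor per dimension
vals = x[idx]                          # tensor([1, 2])
```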