Commit Graph

323 Commits

Michael Suo
62b10721fb Actually make flake8 do something (#30892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892

Fixes all outstanding lints and actually installs a properly configured
flake8

Test Plan: Imported from OSS

Differential Revision: D18862825

Pulled By: suo

fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
2019-12-06 17:50:50 -08:00
Tongzhou Wang
a68b790293 fix ref to nonexistent torch.repeat
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30614

Differential Revision: D18808517

Pulled By: ezyang

fbshipit-source-id: 27f9bda6fbbd1c3c751a0e96fdc336bf724c0b31
2019-12-04 07:27:01 -08:00
Tongzhou Wang
ec7bb9de1c format tri[lu]_indices doc better
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30377

Differential Revision: D18689152

Pulled By: zou3519

fbshipit-source-id: 7fab1e39ecd39ef6a3869befcbe217f8d3b6a87e
2019-12-04 07:16:34 -08:00
Hong Xu
bb5dcaf24f Add logical_and and logical_or (#30521)
Summary:
Re-lands 8bbafa0b32 with the CI failure it caused now fixed (incorrect return type of the lambdas in the CUDA kernels)
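For reference, a minimal usage sketch of the two new ops (outputs shown as comments):
```
import torch

a = torch.tensor([True, False, True])
b = torch.tensor([True, False, False])
torch.logical_and(a, b)  # tensor([ True, False, False])
torch.logical_or(a, b)   # tensor([ True, False,  True])
```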
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30521

Differential Revision: D18770151

Pulled By: ailzhang

fbshipit-source-id: 02f0fe1d5718c34d24da6dbb5884ee8b247ce39a
2019-12-03 18:24:54 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Richard Zou
ec5c08de74 Revert D18580867: Add logical_and and logical_or
Test Plan: revert-hammer

Differential Revision:
D18580867

Original commit changeset: 7e4d7c37da4d

fbshipit-source-id: 81fb604c7aef8d847f518f5faa016e7bd0423016
2019-11-27 09:27:00 -08:00
Hong Xu
8bbafa0b32 Add logical_and and logical_or (#28162)
Summary:
Superseding https://github.com/pytorch/pytorch/issues/24379 as type promotion has been implemented.

Close https://github.com/pytorch/pytorch/issues/24379
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28162

Differential Revision: D18580867

Pulled By: ailzhang

fbshipit-source-id: 7e4d7c37da4dc8df87314bd4f1f6a7539e46586a
2019-11-26 17:38:22 -08:00
vishwakftw
dcd9f49809 Specify ordering on singular values and eigenvalues output from torch.svd/symeig respectively (#30389)
Summary:

Changelog:
- Adds a note to docstrings of the both functions specifying the ordering

Fixes https://github.com/pytorch/pytorch/issues/30301
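A sketch of the ordering being documented (assuming the note lands as described):
```
import torch

a = torch.randn(4, 4)
_, s, _ = torch.svd(a)           # singular values: descending order
e, _ = torch.symeig(a + a.t())   # eigenvalues: ascending order
```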
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30389

Differential Revision: D18707608

Pulled By: zou3519

fbshipit-source-id: b0f73631578f39a24fae9af4997c6491de8be9a8
2019-11-26 10:23:47 -08:00
Zhang Zhi
ab2ec4d835 Fix nonexistent parameter in documentation (#24335)
Summary:
There is no `out` argument to `argsort` according to the source code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24335

Differential Revision: D16829134

Pulled By: vincentqb

fbshipit-source-id: 8f91154984cd4a753ba1d6105fb8a9bfa0da22b3
2019-11-26 06:53:17 -08:00
Pavel Belevich
cc81769e10 C++ API parity: isfinite
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30083

Test Plan: Imported from OSS

Differential Revision: D18594723

Pulled By: pbelevich

fbshipit-source-id: 5970e0aa6ef8994e9c4a741784fd053383aaceb7
2019-11-19 20:00:05 -08:00
Will Feng
3bd0f476d4 Revert D18233037: C++ API parity: isfinite
Test Plan: revert-hammer

Differential Revision:
D18233037

Original commit changeset: c76b9467bbc1

fbshipit-source-id: 97d2cfa9de767a8c3a0ca919f9d768e959fa484e
2019-11-18 20:26:19 -08:00
Pavel Belevich
8df5e10ee9 C++ API parity: isfinite
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28918

Test Plan: Imported from OSS

Differential Revision: D18233037

Pulled By: pbelevich

fbshipit-source-id: c76b9467bbc1fbb2c9bf49855895c98438b36c12
2019-11-18 19:06:57 -08:00
SsnL
38340f59fd randint accept generator=None (#29748)
Summary:
This PR fixes the inconsistent behavior of `randint`'s `generator=` kwarg. It does not accept `None`, which is inconsistent with how other random functions behave:
```
In [12]: torch.randint(0, 4, size=(2,3), generator=torch.Generator())
Out[12]:
tensor([[2, 0, 1],
        [0, 1, 3]])

In [13]: torch.randint(0, 4, size=(2,3), generator=None)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-13-a6bc6525a1e1> in <module>
----> 1 torch.randint(0, 4, size=(2,3), generator=None)

TypeError: randint() received an invalid combination of arguments - got (int, int, generator=NoneType, size=tuple), but expected one of:
 * (int high, tuple of ints size, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
 * (int high, tuple of ints size, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
 * (int low, int high, tuple of ints size, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
 * (int low, int high, tuple of ints size, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
```

Other random functions work fine:
```
In [9]: torch.bernoulli(torch.ones(3))
Out[9]: tensor([1., 1., 1.])

In [10]: torch.bernoulli(torch.ones(3), generator=None)
Out[10]: tensor([1., 1., 1.])
```

This PR also documents the `generator=` kwarg, and fixes https://github.com/pytorch/pytorch/issues/29683 since it's a related easy fix.
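After this fix, both calls below behave consistently (a sketch of the intended behavior):
```
import torch

g = torch.Generator().manual_seed(0)
torch.randint(0, 4, size=(2, 3), generator=g)     # explicit generator
torch.randint(0, 4, size=(2, 3), generator=None)  # now accepted; same as omitting the kwarg
```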
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29748

Differential Revision: D18529951

Pulled By: ezyang

fbshipit-source-id: e956cc989decc94e9483fd4a30f9255240d7c07e
2019-11-18 08:07:29 -08:00
Hong Xu
bd0394d473 Add op bitwise_xor to replace __xor__ and __ixor__ (#25665)
Summary:
We define `bitwise_xor` instead of
`__xor__` and `__ixor__`. The reasons are that (a) it is not idiomatic to call
functions starting and ending with double underscores, (b) the
types of arguments we can accept are limited (e.g., no `out`), and (c) it is consistent with the naming of `bitwise_not` and with NumPy.

Fix https://github.com/pytorch/pytorch/issues/24513,  Fix https://github.com/pytorch/pytorch/issues/24517, Fix https://github.com/pytorch/pytorch/issues/24660, Fix https://github.com/pytorch/pytorch/issues/24664
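A minimal sketch of the new op, including the `out` variant that `__xor__` could not offer:
```
import torch

x = torch.tensor([0b1100, 0b1010], dtype=torch.int32)
y = torch.tensor([0b1010, 0b0110], dtype=torch.int32)
torch.bitwise_xor(x, y)                           # tensor([ 6, 12], dtype=torch.int32)
torch.bitwise_xor(x, y, out=torch.empty_like(x))  # `out=` is now supported
x.bitwise_xor_(y)                                 # in-place variant
```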
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25665

Differential Revision: D17577143

Pulled By: VitalyFedyunin

fbshipit-source-id: 042f6385f9305bd66d50a8ce82e28f40a23a7266
2019-11-12 16:14:04 -08:00
Alban Desmaison
1dcf1b8938 Update pinverse doc for recent commit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28877

Differential Revision: D18225510

Pulled By: albanD

fbshipit-source-id: 698af06ac9e4259eed93d146edb3a7fb13e39242
2019-10-31 07:36:35 -07:00
Dylan Bespalko
f8b758b141 CPU-Strided-Complex Support for reduce ops and linpack ops (#27653)
Summary:
In-tree changes to pytorch to support complex numbers are being submitted here.
Out-of-tree support for complex numbers is here: [pytorch-cpu-strided-complex extension](https://gitlab.com/pytorch-complex/pytorch-cpu-strided-complex)

Changes so far:

- [x]  Renamed references to the variable "I" that may be confused with the "I" defined in complex.h. I did this to avoid confusing CI failure messages, as complex.h is included by more source files.
     - aten/src/ATen/native/cpu/Loops.h (Renamed I to INDEX)
     - aten/src/ATen/native/cuda/Loops.cuh (Renamed I to INDEX)
     - aten/src/ATen/core/ivalue_inl.h (Renamed I to INDEX)
     - c10/util/Array.h (Renamed I to INDEX)
     - c10/util/C++17.h (Renamed I to INDEX)
    - c10/util/Metaprogramming.h (Renamed I to INDEX)
    - c10/util/SmallVector.h (custom renaming)
- [x]  Added complex support of Linear Algebra Ops.
     - SVD needed to be modified to support mixed data types
     - Example: U(std::complex<double>), S(double), V(std::complex<double>)
     - See the before and after benchmarks below (no observable change in performance).
- [x]  Added complex support of Reduce Ops.
     - var/std computations could have been faster if it were possible to interpret a std::complex<double> Tensor as a double Tensor.
- [x]  Added complex derivative support for autograd functionality.
     - Derivatives are the same as those defined by the numpy-based autograd library for real(), imag(), conj(), and angle(); these functions only affect complex numbers. A usage sketch of these component ops follows the list.
     - The derivative of abs() has not been modified, so as not to interfere with existing code.
     - Autograd defines abs() for complex numbers and fabs() for real numbers. I will look into this further down the road.
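As referenced above, a usage sketch of the component ops whose derivatives were added (assuming a build with complex dtype support):
```
import torch

z = torch.tensor([1.0 + 2.0j, 3.0 - 4.0j])
torch.real(z), torch.imag(z)  # component views
torch.conj(z)                 # tensor([1.-2.j, 3.+4.j])
torch.angle(z)                # elementwise phase, in radians
```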

----------------------------------------
PyTorch/Caffe2 Operator Micro-benchmarks Before Changes
----------------------------------------
Tag : short

Benchmarking PyTorch: svd
Mode: Eager
Name: svd_M512_N512
Input: M: 512, N: 512
Forward Execution Time (us) : 162339.425
Forward Execution Time (us) : 162517.479
Forward Execution Time (us) : 162847.775

----------------------------------------
PyTorch/Caffe2 Operator Micro-benchmarks After Changes
----------------------------------------
Tag : short

Benchmarking PyTorch: svd
Mode: Eager
Name: svd_M512_N512
Input: M: 512, N: 512
Forward Execution Time (us) : 162032.117
Forward Execution Time (us) : 161943.484
Forward Execution Time (us) : 162513.786
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27653

Differential Revision: D17907886

Pulled By: ezyang

fbshipit-source-id: a88b6d0427591ec1fba09e97c880f535c5d0e513
2019-10-24 09:31:06 -07:00
Nathan Goldbaum
139fec2d14 remove type information from docstrings of quantization functions (#28556)
Summary:
Following from https://github.com/pytorch/pytorch/issues/28479 let's remove the type information from the docstrings of these functions as well, making them valid python signatures matching the other signatures in the docstrings for the torch API.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28556

Differential Revision: D18115641

Pulled By: ezyang

fbshipit-source-id: e4c3d56981b16f5acabe8be7bfbe6ae506972d7f
2019-10-24 08:13:48 -07:00
vishwakftw
657430e1f0 Return 0-numel empty tensor from symeig when eigenvectors=False (#28338)
Summary:
Changelog:
- symeig now returns a 0-numel empty tensor for the eigenvectors when eigenvectors=False, instead of a zero tensor, matching the behavior of torch.eig
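A sketch of the new behavior:
```
import torch

a = torch.randn(3, 3)
a = a + a.t()  # symmetrize
e, v = torch.symeig(a, eigenvectors=False)
e.shape    # torch.Size([3]) -- eigenvalues are still computed
v.numel()  # 0 -- empty tensor instead of a zero tensor, matching torch.eig
```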
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28338

Test Plan: - test_symeig has been modified appropriately for this change

Differential Revision: D18085280

Pulled By: ezyang

fbshipit-source-id: 43129a96dd01743997157974100e5a7270742b46
2019-10-23 11:44:57 -07:00
Nathan Goldbaum
9d767db493 remove extraneous type information from torch.matrix_rank documentation (#28479)
Summary:
The types don't appear in the docstrings for other functions in the `torch` namespace so I think this was included here because of a copy/paste error.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28479

Differential Revision: D18086150

Pulled By: ezyang

fbshipit-source-id: 2481bccba6df36b12779a330f8c43d4aea68495f
2019-10-23 11:08:30 -07:00
Igor Fedan
12dde7f58a cdist performance improvement for euclidean distance (#25799)
Summary:
jacobrgardner proposed a way to speed up the euclidean distance calculation in https://github.com/pytorch/pytorch/issues/15253#issuecomment-491467128. This PR is an implementation of that solution for both the normal and batch versions.

Also simonepri provided performance metrics https://github.com/pytorch/pytorch/issues/15253#issuecomment-502363581
![image](https://user-images.githubusercontent.com/12058312/64460756-44a24580-d0c9-11e9-9f7f-a5942f4c832d.png)

The current implementation has a speedup compared to jacobrgardner's approach
![image](https://user-images.githubusercontent.com/12058312/64461495-5553bb00-d0cb-11e9-87e6-302b8cc7e12b.png)
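The quadratic-expansion trick referenced above, as a minimal sketch (not the exact kernel landed in this PR):
```
import torch

def euclidean_cdist(x, y):
    # ||x - y||^2 = ||x||^2 - 2*x.y + ||y||^2, so one matmul replaces the pairwise loop
    x_sq = x.pow(2).sum(dim=-1, keepdim=True)  # (n, 1)
    y_sq = y.pow(2).sum(dim=-1).unsqueeze(-2)  # (1, m)
    d_sq = x_sq - 2.0 * x @ y.t() + y_sq
    return d_sq.clamp_min_(0).sqrt()           # clamp guards tiny negative round-off

x, y = torch.randn(1000, 64), torch.randn(500, 64)
torch.allclose(euclidean_cdist(x, y), torch.cdist(x, y), atol=1e-4)
```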
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25799

Differential Revision: D17964982

Pulled By: ifedan

fbshipit-source-id: bf7bd0dbfca51fd39e667da55139347480f30a2f
2019-10-17 14:56:54 -07:00
Hong Xu
cbb4c87d43 Improve the doc and test of logical_xor (#28031)
Summary:
Following up on https://github.com/pytorch/pytorch/issues/27248, per suggestion by gchanan.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28031

Differential Revision: D17962226

Pulled By: gchanan

fbshipit-source-id: 788e4e1fc78b1cfc7915aedaa10c8656b19edc4d
2019-10-16 13:57:53 -07:00
Hong Xu
e6a71405a0 Let logical_xor support non-bool tensors (again) (#27248)
Summary:
f362a5a04b reverted
5ca612b55e due to build time concerns (also
see https://github.com/pytorch/pytorch/issues/25254). Now we come back to this by reusing the underlying code in
the comparison operators: logical operators on non-bool variables are
essentially comparison operators that semantically output bool
values. Compared with the previous implementation, we compromise by
always applying XOR on the same input type, while the output can be either
the input type or the bool type.
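A sketch of the resulting behavior on non-bool inputs:
```
import torch

a = torch.tensor([0, 1, 2], dtype=torch.int32)
b = torch.tensor([1, 1, 0], dtype=torch.int32)
torch.logical_xor(a, b)  # tensor([ True, False,  True])
torch.logical_xor(a, b, out=torch.empty(3, dtype=torch.int32))  # output in the input type
```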
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27248

Differential Revision: D17929356

Pulled By: ezyang

fbshipit-source-id: dbac08c7614b36f05d24c69104fee9df9ca523d5
2019-10-15 10:56:32 -07:00
vishwakftw
ad47788647 Add Polygamma to the docs (#27696)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25347
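For reference, a quick usage sketch of the newly documented function:
```
import torch

# polygamma(n, x) is the n-th derivative of the digamma function at x;
# polygamma(1, 1) == pi**2 / 6
torch.polygamma(1, torch.tensor([1.0]))  # tensor([1.6449])
```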
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27696

Differential Revision: D17916790

Pulled By: ezyang

fbshipit-source-id: ac2635a300b1ef0ab437e3ffac152239754fe828
2019-10-15 07:00:57 -07:00
vishwakftw
82a69a690f Add documentation for torch.lgamma (#27812)
Summary:
Changelog:
- Add doc string in _torch_docs.py, _tensor_docs.py
- Expose in docs/source/torch.rst, docs/source/tensors.rst
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27812

Test Plan:
- Remove `lgamma`, `lgamma_` from the blacklist

Fixes https://github.com/pytorch/pytorch/issues/27783
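A usage sketch of the documented function:
```
import torch

# lgamma computes the log of the absolute value of the gamma function
torch.lgamma(torch.tensor([0.5, 1.0, 4.0]))  # tensor([0.5724, 0.0000, 1.7918])
```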

Differential Revision: D17907630

Pulled By: ezyang

fbshipit-source-id: 14e662a4e5262126889a437e5c4bfb21936730e8
2019-10-14 08:47:04 -07:00
zou3519
23bffc4f14 Fix most documentation warnings (#27782)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27782

Warnings show up when running `make html` to build documentation. All of
the warnings are very reasonable and point to bugs in our docs. This PR
attempts to fix most of those warnings.

In the future we will add something to the CI that asserts that there
are no warnings in our docs.

Test Plan: - build and view changes locally

Differential Revision: D17887067

Pulled By: zou3519

fbshipit-source-id: 6bf4d08764759133b20983d6cd7f5d27e5ee3166
2019-10-13 10:34:01 -07:00
Hong Xu
4da68227e9 Clarify that when the divisor in div is zero and the dividend is integral, the behavior is undefined. (#25968)
Summary:
Currently, when an integral tensor is divided by zero, it emits a
"floating point exception" (which can differ from system to
system). Clarify in the documentation that nothing is guaranteed under
this circumstance.
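To illustrate the distinction (floating-point division follows IEEE semantics; the integral case is what this note declares undefined):
```
import torch

torch.tensor([1.0, 0.0, -1.0]) / 0.0  # tensor([inf, nan, -inf]) -- IEEE semantics
# torch.tensor([1, 0, -1]) / 0        # integral case: undefined (may raise SIGFPE)
```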
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25968

Differential Revision: D17888097

Pulled By: ezyang

fbshipit-source-id: 7c3ce3ac4080479d637cc2710b6aa3ae7e42431d
2019-10-11 15:37:09 -07:00
Dmytro Dzhulgakov
d931c8bf75 substantially restructure all quantized docs to group logically (#27677)
Summary:
- Make everything clickable
- Organize APIs logically in subsections
- Fix many typos
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27677

Differential Revision: D17850650

Pulled By: dzhulgakov

fbshipit-source-id: 060f6ed988d1c4beecba6bc8daf55626961fac98
2019-10-10 00:50:02 -07:00
Dylan Bespalko
7c472ec597 Vectorized complex unary and binary op support. (#26500)
Summary:
Added Complex support with AVX to unary ops and binary ops.

I need to add nan propagation to minimum() and maximum() in the future.
In-tree changes to pytorch to support complex numbers are being submitted here.
Out-of-tree support for complex numbers is here: pytorch-cpu-strided-complex extension

Preliminary Benchmarks are here.

- I tried rrii and riri layouts and found that riri is better in most situations.
- Divide is very slow because you can't reduce 1/(x+y).
- Sqrt is also very slow.
- Reciprocal could be sped up after I add conj().
- Everything else is typically within 20% of the real-number performance.

Questions:

- Why does macOS not support MKL (`#if AT_MKL_ENABLED() && !defined(__APPLE__)` in vml.h)? MKL does support some complex operations like Abs, so I was curious about trying it.
- Is MKL just calling AVX?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26500

Differential Revision: D17835431

Pulled By: ezyang

fbshipit-source-id: 6746209168fbeb567af340c22bf34af28286bd54
2019-10-09 12:49:21 -07:00
vishwakftw
0222eceaaa Remove outdated note in cholesky_solve and triangular_solve doc strings (#26989)
Summary:
We do support inputs with dim > 2 in the _out variants.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26989

Differential Revision: D17785632

Pulled By: soumith

fbshipit-source-id: d42ba7ca9c225ad1a26ff3b410d0c5c08eaed001
2019-10-06 23:28:48 -07:00
Ilia Cherniavskii
74572fc985 Relax restrictions on set_num_threads (#27190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27190

Allow set_num_threads to be called multiple times in the case of the TBB
parallel backend.
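A sketch of what the relaxation permits (assuming a TBB-backed build):
```
import torch

torch.set_num_threads(4)
torch.set_num_threads(8)        # a repeated call is now allowed with the TBB backend
print(torch.get_num_threads())  # 8
```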

Test Plan:
BUILD_BINARY=1 USE_TBB=1 ATEN_THREADING=TBB python setup.py develop install --cmake
./build/bin/test_parallel
./build/bin/thread_init_test

Reviewed By: kostmo

Differential Revision: D17704236

Pulled By: ilia-cher

fbshipit-source-id: 274380795e78ba417301c5faa18c9e9d3198bd5e
2019-10-03 15:51:03 -07:00
Brian Vaughan
0c6a18de8d Add torch.promote_types function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26655
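A quick sketch of the new function:
```
import torch

torch.promote_types(torch.int32, torch.float32)  # torch.float32
torch.promote_types(torch.uint8, torch.int8)     # torch.int16 (smallest type holding both)
```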

Test Plan: Imported from OSS

Differential Revision: D17556196

Pulled By: nairbv

fbshipit-source-id: eeebce8968bfb2ffd25c066595bc19e5dee6ea6f
2019-09-27 16:48:38 -07:00
Brian Vaughan
2a43b74196 Add torch.can_cast(from, to) function (#26805)
Summary:
https://github.com/pytorch/pytorch/issues/25472
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26805
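A quick sketch of the new function, which applies the type-promotion casting rules:
```
import torch

torch.can_cast(torch.int32, torch.float64)  # True  -- safe widening cast
torch.can_cast(torch.float64, torch.int32)  # False -- would discard the fractional part
```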

Differential Revision: D17628434

Pulled By: nairbv

fbshipit-source-id: 6af8031ac3afda1505d338075c0637ad043f8b7e
2019-09-27 08:40:34 -07:00
Brian Vaughan
002c250139 Expose a torch.result_type and simplify tensor iterator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26012
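A quick sketch of the exposed function:
```
import torch

torch.result_type(torch.tensor([1, 2], dtype=torch.int32), 1.0)  # torch.float32
torch.result_type(torch.tensor([1, 2], dtype=torch.int32), 1)    # torch.int32
```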

Test Plan: Imported from OSS

Differential Revision: D17556197

Pulled By: nairbv

fbshipit-source-id: c0be3ac9e99fecc26a181e301defc1942bc6708c
2019-09-25 06:52:23 -07:00
Hong Xu
71ec9a0035 Clarify and correct the doc of atan2.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26180
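A sketch of the clarified semantics (`atan2(input, other)` is the angle of the point `(other, input)`):
```
import math
import torch

y = torch.tensor([1.0, 1.0, -1.0])
x = torch.tensor([1.0, -1.0, -1.0])
torch.atan2(y, x) / math.pi  # tensor([ 0.2500,  0.7500, -0.7500]) -- full four-quadrant result
```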

Reviewed By: ezyang

Differential Revision: D17500224

Pulled By: albanD

fbshipit-source-id: 98b9f32aa443963fe1e89b83e15bed9ff83a2694
2019-09-20 12:58:12 -07:00
François Darmon
ec3793362f Documentation change of torch.where (#25554)
Summary:
Change the doc of torch.where: the parameters are x and y instead of input and other.
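For reference, a quick sketch of the documented call:
```
import torch

cond = torch.tensor([True, False, True])
x = torch.tensor([1, 2, 3])
y = torch.tensor([10, 20, 30])
torch.where(cond, x, y)  # tensor([ 1, 20,  3])
```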
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25554

Differential Revision: D17227193

Pulled By: soumith

fbshipit-source-id: 96d8a6f60ae8e788648247320ae715d0058de2b4
2019-09-06 12:55:16 -07:00
vishwakftw
1e4832ffad Enable broadcasting of batch dimensions RHS and LHS tensors for lu_solve (#24333)
Summary:
Changelog:
- Enable broadcasting of RHS and LHS tensors for lu_solve. This means that you can now have an RHS with size `3 x 2` and an LHS with size `4 x 3 x 3`, for instance (see the sketch after this changelog)
- Remove the deprecated behavior of allowing 2D tensors for the RHS. Now all tensors have to have a last dimension that equals the number of right-hand sides
- Modified docs
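A sketch of the newly allowed broadcasting:
```
import torch

A = torch.randn(4, 3, 3)             # batch of 4 LHS matrices
b = torch.randn(3, 2)                # single RHS with 2 right-hand sides, broadcast over the batch
LU, pivots = torch.lu(A)
torch.lu_solve(b, LU, pivots).shape  # torch.Size([4, 3, 2])
```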
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24333

Test Plan: - Add tests for new behavior in test_torch.py with a port to test_cuda.py

Differential Revision: D17165463

Pulled By: zou3519

fbshipit-source-id: cda5d5496ddb29ed0182bab250b5d90f8f454aa6
2019-09-03 15:14:48 -07:00
Igor Fedan
896cd1c510 Documentation for cdist (#25221)
Summary:
https://github.com/pytorch/pytorch/issues/21730
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25221

Differential Revision: D17073908

Pulled By: ifedan

fbshipit-source-id: 19e2534183d6a2a7e9cdfcee4734cff1b124e05a
2019-09-03 14:16:07 -07:00
Hong Xu
07fe66f25e logical_xor doc cleanup
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25364

Differential Revision: D17105048

Pulled By: gchanan

fbshipit-source-id: 8bef3e330ef00decb3118a5ae7d17308a58878a2
2019-08-29 09:09:16 -07:00
Gregory Chanan
f362a5a04b Revert "Let logical_xor support non-bool tensors." (#25269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25269

This reverts commit 5ca612b55e.

Test Plan: Imported from OSS

Differential Revision: D17080088

fbshipit-source-id: e6b6215b713910c448e9a6b831b08f28b849c64a
2019-08-28 15:41:51 -07:00
Kexuan Sun
4b3ea92787 Test if descriptions of args are in the template (#24161)
Summary:
As in https://github.com/pytorch/pytorch/issues/23439, some descriptions of arguments in `_torch_docs.py` have been replaced by `common_args`. It would be helpful to check whether any descriptions can be replaced for new docs in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24161

Differential Revision: D16889293

Pulled By: ezyang

fbshipit-source-id: bf6f581494482d6eb32e634f73e84a4586766230
2019-08-20 16:34:50 -07:00
Hong Xu
5ca612b55e Let logical_xor support non-bool tensors.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23978

Test Plan: Imported from OSS

Differential Revision: D16719299

Pulled By: gchanan

fbshipit-source-id: 2fe170be6090733e20410db7cf99266543299c58
2019-08-15 12:21:31 -07:00
Hong Xu
00e4870001 Let logical_not support non-bool tensors. (#23916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23916

Test Plan: Imported from OSS

Differential Revision: D16719300

Pulled By: gchanan

fbshipit-source-id: 5be6aeea9a38cc40ad59d0449d25a25f7dfa2787
2019-08-15 12:21:27 -07:00
Hong Xu
338f9c860f Add logical_xor operator (#23847)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23847

Related to #23836

Test Plan: Imported from OSS

Differential Revision: D16678300

Pulled By: gchanan

fbshipit-source-id: 67020aca5830b6bec2f561105954e0a8c2ee37e0
2019-08-15 08:40:25 -07:00
Hong Xu
1f4c73618c Add logical_not operator. (#23839)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23839

Close #23836

Test Plan: Imported from OSS

Differential Revision: D16678301

Pulled By: gchanan

fbshipit-source-id: 54e7b3f3b04c577e239b88493247e1c036266774
2019-08-15 08:40:21 -07:00
Vishwak Srinivasan
5411d1a27b Fix docstring for argmax (#23775)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/23757
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23775

Differential Revision: D16644692

Pulled By: soumith

fbshipit-source-id: d759bb85f73383021e4657325dbac79913042ad2
2019-08-07 09:42:19 -07:00
Kexuan Sun
8e9f9b424f Replace descriptions of args in doc with template (#23439)
Summary:
Many descriptions of arguments could be replaced by items in the template such as `factory_common_args`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23439

Differential Revision: D16688527

Pulled By: ezyang

fbshipit-source-id: 406ce45d72e297f46b5fa9ea5472b3284c8d4324
2019-08-07 08:50:09 -07:00
Ailing Zhang
51d59a43ba fix torch.frac documentation (#23830)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/13968 .

Following the math formula in wiki: https://en.wikipedia.org/wiki/Fractional_part
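The documented definition keeps the sign of the input, i.e. frac(x) = x - trunc(x); a quick sketch:
```
import torch

torch.frac(torch.tensor([1.5, -1.5, 2.9]))  # tensor([ 0.5000, -0.5000,  0.9000])
```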
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23830

Differential Revision: D16656871

Pulled By: ailzhang

fbshipit-source-id: a71467870cf9566e0c7b1a045f72607dada81e1f
2019-08-05 17:43:17 -07:00
Hong Xu
1aa4afde80 Document bool tensors for bitwise_not. (#23800)
Summary:
Requested by vadimkantorov at https://github.com/pytorch/pytorch/pull/23621#issuecomment-517945167
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23800

Differential Revision: D16651008

Pulled By: gchanan

fbshipit-source-id: 4ce21158bd5dd142edcd951e7ac941521b3d54af
2019-08-05 12:11:45 -07:00
Iurii Zdebskyi
19c675178f Updated docs and added deprecation warnings to acknowledge a bool tensor (#22261)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22261
ghimport-source-id: 1611d62d056a04c0ad15ef662e594a3d206a78e2

Test Plan: Imported from OSS

Differential Revision: D16005990

Pulled By: izdeby

fbshipit-source-id: 2413824aa75a0755719e4df11acd21e6607e5a85
2019-08-05 07:42:34 -07:00
vishwakftw
8e2b9de860 Document empty_strided (#23735)
Summary:
Changelog:
- Add doc string for torch.empty_strided
- Remove empty file named `python` in test/

Fixes https://github.com/pytorch/pytorch/issues/23688
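A quick sketch of the documented function:
```
import torch

t = torch.empty_strided((2, 3), (1, 2))  # uninitialized tensor with explicit size and strides
t.size()    # torch.Size([2, 3])
t.stride()  # (1, 2)
```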
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23735

Differential Revision: D16623438

Pulled By: ailzhang

fbshipit-source-id: acd5a47da9220243467ccc6bff92edd209cca709
2019-08-02 20:02:44 -07:00