Commit Graph

1133 Commits

Author SHA1 Message Date
pbialecki
1451d84766 Minor doc fix: change truncating to rounding in TF32 docs (#49625)
Summary:
Minor doc fix clarifying that the input data is rounded, not truncated.

CC zasdfgbnm ngimel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49625

Reviewed By: mruberry

Differential Revision: D25668244

Pulled By: ngimel

fbshipit-source-id: ac97e41e0ca296276544f9e9f85b2cf1790d9985
2020-12-22 13:46:33 -08:00
Xiong Wei
3779bdec56 Implementing NumPy-like function torch.broadcast_to (#48997)
Summary:
Related https://github.com/pytorch/pytorch/issues/38349

Implement NumPy-like function `torch.broadcast_to` to broadcast the input tensor to a new shape.
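
A quick usage sketch:

```python
import torch

x = torch.tensor([1, 2, 3])
torch.broadcast_to(x, (2, 3))
# tensor([[1, 2, 3],
#         [1, 2, 3]])
```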

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48997

Reviewed By: anjali411, ngimel

Differential Revision: D25663937

Pulled By: mruberry

fbshipit-source-id: 0415c03f92f02684983f412666d0a44515b99373
2020-12-21 11:24:50 -08:00
Ivan Yashchuk
8be205ae13 Added linalg.solve (#48456)
Summary:
This PR adds `torch.linalg.solve`.

`linalg_solve_out` uses in-place operations on the provided result tensor.

I modified `apply_solve` to accept a tensor of ints instead of a std::vector; that way we can write a function similar to `linalg_solve_out` but without the error checks and device memory synchronization.

In comparison to `torch.solve`, this routine accepts 1-dimensional tensors and batches of 1-dim tensors for the right-hand-side term; `torch.solve` requires it to be at least 2-dimensional.
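
A minimal sketch, including the 1-D right-hand side:

```python
import torch

A = torch.randn(3, 3)
b = torch.randn(3)            # 1-D right-hand side is accepted
x = torch.linalg.solve(A, b)
torch.allclose(A @ x, b)      # True, up to numerical error
```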

Ref. https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48456

Reviewed By: izdeby

Differential Revision: D25562222

Pulled By: mruberry

fbshipit-source-id: a9355c029e2442c2e448b6309511919631f9e43b
2020-12-21 10:11:12 -08:00
Jeffrey Wan
d0a12c5a47 Add sinc operator (#48740)
Summary:
Implements the sinc operator.
See https://numpy.org/doc/stable/reference/generated/numpy.sinc.html

![image](https://user-images.githubusercontent.com/13428986/101653855-cdffa080-3a0d-11eb-8426-ecc81c152ebd.png)
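
A quick usage sketch:

```python
import torch

x = torch.tensor([0.0, 0.5, 1.0])
torch.sinc(x)  # normalized sinc: sin(pi*x)/(pi*x), with sinc(0) == 1 and sinc(1) == 0
```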

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48740

Reviewed By: ezyang

Differential Revision: D25597565

Pulled By: soulitzer

fbshipit-source-id: 6dbcf282ee4eba34930bc9e5c85c0c5e79cf0322
2020-12-18 15:52:24 -08:00
Ilia Cherniavskii
daaf932a99 New profiler API (#48280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48280

Adding a new API for the kineto profiler that supports an enable-predicate function.
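
For context, a minimal sketch of schedule-based profiling using the `torch.profiler` interface this work fed into; the schedule plays the role of the enable predicate (this API surface is from later public releases, not this diff):

```python
import torch

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU],
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=2),
) as prof:
    for _ in range(4):
        torch.mm(torch.randn(64, 64), torch.randn(64, 64))
        prof.step()  # the schedule decides whether this step is recorded
print(prof.key_averages().table())
```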

Test Plan: unit test

Reviewed By: ngimel

Differential Revision: D25142220

Pulled By: ilia-cher

fbshipit-source-id: c57fa42855895075328733d7379eaf3dc1743d14
2020-12-18 11:49:02 -08:00
jonykarki
0b27d57062 fixed the first line of torch.rst to match the __init__.py file's first line (#49584)
Summary:
Changed the first line of the torch.rst file to match that of the __init__.py file

Fixes https://github.com/pytorch/pytorch/issues/49228

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49584

Reviewed By: VitalyFedyunin

Differential Revision: D25639260

Pulled By: mrshenli

fbshipit-source-id: a0bafd945ff92115eed932662feedc46d29dfaab
2020-12-18 08:55:58 -08:00
Jerry Zhang
b8d98f05e7 [reland][quant][docs] Add fx graph mode quantization to quantization docs (#49211) (#49515)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49515

Test Plan:
Imported from OSS

Reviewed By: vkuzo

Differential Revision: D25601061

fbshipit-source-id: 74e917d57895e9b4131a01fdcea8df3e94322bec
2020-12-17 10:30:10 -08:00
Mike Ruberry
676bfa6dbd Revert D25507480: [quant][docs] Add fx graph mode quantization to quantization docs
Test Plan: revert-hammer

Differential Revision:
D25507480 (7729581414)

Original commit changeset: 9e9e4b5fef97

fbshipit-source-id: fdb08d824209b97defaba2e207d1a914575a6ae7
2020-12-16 14:26:18 -08:00
Jeffrey Wan
7767dcfc8d Revert D25564477: [pytorch][PR] Add sinc operator
Test Plan: revert-hammer

Differential Revision:
D25564477 (bbc71435b7)

Original commit changeset: 13f36a2b84da

fbshipit-source-id: 58cbe8109efaf499dd017531878b9fbbb27976bc
2020-12-16 13:19:16 -08:00
Jerry Zhang
7729581414 [quant][docs] Add fx graph mode quantization to quantization docs (#49211)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49211

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D25507480

fbshipit-source-id: 9e9e4b5fef979f5621c1bbd1b49e9cc6830da617
2020-12-16 12:40:02 -08:00
Natalia Gimelshein
afce5890ff Revert D25421263: [pytorch][PR] [numpy] torch.{all/any} : output dtype is always bool
Test Plan: revert-hammer

Differential Revision:
D25421263 (c508e5b1bf)

Original commit changeset: c6c681ef9400

fbshipit-source-id: 4c0c9acf42b06a3ed0af8f757ea4512ca35b6c59
2020-12-16 11:11:13 -08:00
Jeffrey Wan
bbc71435b7 Add sinc operator (#48740)
Summary:
Implements the sinc operator.
See https://numpy.org/doc/stable/reference/generated/numpy.sinc.html

![image](https://user-images.githubusercontent.com/13428986/101653855-cdffa080-3a0d-11eb-8426-ecc81c152ebd.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48740

Reviewed By: izdeby

Differential Revision: D25564477

Pulled By: soulitzer

fbshipit-source-id: 13f36a2b84dadfb4fd1442a2a40a3a3246cbaecb
2020-12-16 10:33:02 -08:00
kshitij12345
c508e5b1bf [numpy] torch.{all/any} : output dtype is always bool (#47878)
Summary:
BC-breaking note:

This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)

PR summary:

https://github.com/pytorch/pytorch/pull/44790#issuecomment-725596687

Fixes 2 and 3

Also Fixes https://github.com/pytorch/pytorch/issues/48352

Changes
* Output dtype is always `bool` (consistent with NumPy). **BC-breaking**: previously the output dtype matched the input dtype. See the sketch after the TODO list.
* Uses vectorized version for all dtypes on CPU
* Enables test for complex
* Update doc for `torch.all` and `torch.any`

TODO
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
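
A minimal sketch of the new behavior:

```python
import torch

t = torch.tensor([0, 1, 2], dtype=torch.uint8)
torch.any(t).dtype  # torch.bool (previously torch.uint8)
torch.all(t).dtype  # torch.bool
```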

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47878

Reviewed By: H-Huang

Differential Revision: D25421263

Pulled By: mruberry

fbshipit-source-id: c6c681ef94004d2bcc787be61a72aa059b333e69
2020-12-15 13:59:32 -08:00
James Reed
778006918c [WIP][FX] Add FX page to docs (#48814)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48814

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25320051

Pulled By: jamesr66a

fbshipit-source-id: b1fdec9615a7a4eb97c557bb3cba7f90b0a4d933
2020-12-15 09:48:29 -08:00
Ralf Gommers
6cfd7c3811 Remove type annotations from signatures in html docs (#49294)
Summary:
One unintended side effect of moving type annotations inline was that those annotations now show up in signatures in the html docs. This is more confusing and ugly than it is helpful. An example for `MaxPool1d`:

![image](https://user-images.githubusercontent.com/98330/102010280-77f86900-3d3d-11eb-8f83-e7ee0991ed92.png)

This makes the docs readable again. The parameter descriptions often already have type information, and there will be many cases where the type annotations will make little sense to the user (e.g., returning typevar T, long unions).

Change to `MaxPool1d` example:

![image](https://user-images.githubusercontent.com/98330/102010304-91011a00-3d3d-11eb-860d-ffa174b4d43b.png)

Note that once we can build the docs with Sphinx 3 (which is far off right now), we have two options to make better use of the extra type info in the annotations (some of which is useful):
- `autodoc_type_aliases`, so we can leave things like large unions unevaluated to keep things readable
- `autodoc_typehints = 'description'`, which moves the annotations into the parameter descriptions.
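
A hypothetical `conf.py` sketch of those two settings (both are real Sphinx options; the alias mapping is purely illustrative):

```python
# conf.py -- requires a sufficiently new Sphinx 3.x
autodoc_typehints = "description"  # move annotations into the parameter descriptions
autodoc_type_aliases = {"Tensor": "torch.Tensor"}  # keep aliases unevaluated for readability
```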

Another, more labour-intensive option, is what vadimkantorov suggested in gh-44964: show annotations on hover. Could also be done with some foldout, or other optional way to make things visible. Would be nice, but requires a Sphinx contribution or plugin first.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49294

Reviewed By: glaringlee

Differential Revision: D25535272

Pulled By: ezyang

fbshipit-source-id: 5017abfea941a7ae8c4595a0d2bdf8ae8965f0c4
2020-12-14 12:19:48 -08:00
shubhambhokare1
e1c1a7e964 [ONNX] Changes to export API to better handle named arguments (#47367)
Summary:
The `args` parameter of ONNX export is changed to better support optional arguments, such that `args` is represented as:
args (tuple of arguments or torch.Tensor, optionally ending with a dictionary of named arguments):
            the dictionary specifies the input for each named parameter:
            - KEY: str, the named parameter
            - VALUE: the corresponding input
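
A minimal sketch of the convention (the module and inputs below are hypothetical):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x, mask=None):
        return x if mask is None else x * mask

x, mask = torch.randn(2, 3), torch.ones(2, 3)
# positional inputs first; the trailing dict supplies the named argument `mask`
torch.onnx.export(M(), (x, {"mask": mask}), "model.onnx")
```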

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47367

Reviewed By: H-Huang

Differential Revision: D25432691

Pulled By: bzinodev

fbshipit-source-id: 9d4cba73cbf7bef256351f181f9ac5434b77eee8
2020-12-10 12:31:00 -08:00
Ivan Yashchuk
bea88ee1d0 Added entry for torch.linalg.cond to linalg.rst (#48941)
Summary:
This PR makes documentation for `cond` available at https://pytorch.org/docs/master/linalg.html
I forgot to include this change in https://github.com/pytorch/pytorch/issues/45832.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48941

Reviewed By: ngimel

Differential Revision: D25379244

Pulled By: mruberry

fbshipit-source-id: c8c0a0b8a05c17025d6c3cea405b2add369e2019
2020-12-07 19:01:05 -08:00
Rohan Varma
d6b5f3ad98 Add object-based collective APIs to public docs (#48909)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48909

Adds these new APIs to the documentation
ghstack-source-id: 117965961

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D25363279

fbshipit-source-id: af6889d377f7b5f50a1a77a36ab2f700e5040150
2020-12-07 14:30:25 -08:00
Peter Bell
5180caeeb4 Remove deprecated spectral ops from torch namespace (#48594)
Summary:
Ref https://github.com/pytorch/pytorch/issues/42175

This removes the 4 deprecated spectral functions: `torch.{fft,rfft,ifft,irfft}`. `torch.fft` is also now imported by default.

The actual `at::native` functions are still used in `torch.stft`, so they can't be fully removed yet. But they will be once https://github.com/pytorch/pytorch/issues/47601 has been merged.
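
For migration, a quick sketch of the module-based API that replaces the removed functions:

```python
import torch

t = torch.arange(4, dtype=torch.float32)
torch.fft.rfft(t)  # replaces the removed torch.rfft
```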

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48594

Reviewed By: heitorschueroff

Differential Revision: D25298929

Pulled By: mruberry

fbshipit-source-id: e36737fe8192fcd16f7e6310f8b49de478e63bf0
2020-12-05 04:12:32 -08:00
kiyosora
6ab84ca0f3 Implement NumPy-like function torch.msort() (#48440)
Summary:
- Related to https://github.com/pytorch/pytorch/issues/38349
- Implements the NumPy-like function `torch.msort()`.
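
A quick usage sketch:

```python
import torch

t = torch.tensor([[3., 1.], [2., 0.]])
torch.msort(t)  # sorts along the first dimension: tensor([[2., 0.], [3., 1.]])
```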

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48440

Reviewed By: bdhirsh

Differential Revision: D25265753

Pulled By: mruberry

fbshipit-source-id: 7709ac5e5667e7541a3dc9048b9c9896b1a6dfa1
2020-12-04 04:32:09 -08:00
shubhambhokare1
5fd61de99e [ONNX] Added hardswish symbolic in opset 9 (#48423)
Summary:
Adds support for torch.nn.Hardswish operator in Export

Fixes https://github.com/pytorch/pytorch/issues/43665

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48423

Reviewed By: heitorschueroff

Differential Revision: D25309868

Pulled By: bzinodev

fbshipit-source-id: f5583eb01b1b0e8f0bc95d5054941dd29605d6a5
2020-12-03 23:22:21 -08:00
Tongzhou Wang
86540dbf41 Fix jit doc model loading example (#48104)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48104

Reviewed By: jamesr66a

Differential Revision: D25028353

Pulled By: suo

fbshipit-source-id: aaf74a40e7150a278d100e129740cfe1cef99af2
2020-12-03 20:47:20 -08:00
Heitor Schueroff
c134f32835 Implemented torch.inner (#46716)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46716

Implemented torch.inner similar to [numpy.inner](https://numpy.org/doc/stable/reference/generated/numpy.inner.html). For now it's implemented as a composite op.
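
A quick usage sketch:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(4, 3)
torch.inner(a, b).shape  # torch.Size([2, 4]); contracts over the last dimension
```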

TODO

- [x] Add documentation

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D24860351

Pulled By: heitorschueroff

fbshipit-source-id: de5c82f285893495491fdba73b35634f4d00bac8
2020-12-03 11:37:55 -08:00
kshitij12345
5c9cef9a6c [numpy] Add torch.moveaxis (#48581)
Summary:
References: https://github.com/pytorch/pytorch/issues/38349, #36048, https://github.com/pytorch/pytorch/pull/41480#issuecomment-734398262
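
A quick usage sketch:

```python
import torch

t = torch.zeros(2, 3, 4)
torch.moveaxis(t, 0, -1).shape  # torch.Size([3, 4, 2])
```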

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48581

Reviewed By: bdhirsh

Differential Revision: D25276307

Pulled By: mruberry

fbshipit-source-id: 3e3e4df1343c5ce5b71457badc43f08c419ec5c3
2020-12-03 10:34:33 -08:00
Fritz Obermeyer
313e77fc06 Add broadcast_shapes() function and use it in MultivariateNormal (#43935)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/43837

This adds a `torch.broadcast_shapes()` function similar to Pyro's `broadcast_shape()` and JAX's [lax.broadcast_shapes()](https://jax.readthedocs.io/en/test-docs/_modules/jax/lax/lax.html). This helper is useful, e.g., in multivariate distributions that are parameterized by multiple tensors where we want to `torch.broadcast_tensors()` but the parameter tensors have different "event shapes" (e.g. mean vectors and covariance matrices). This helper is already heavily used in Pyro's distribution codebase, and we would like to start using it in `torch.distributions`.
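
A quick usage sketch:

```python
import torch

torch.broadcast_shapes((5, 1, 3), (1, 4, 1), (3,))  # torch.Size([5, 4, 3])
```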

- [x] refactor `MultivariateNormal`'s expansion logic to use `torch.broadcast_shapes()`
- [x] add unit tests for `torch.broadcast_shapes()`
- [x] add docs

cc neerajprad

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43935

Reviewed By: bdhirsh

Differential Revision: D25275213

Pulled By: neerajprad

fbshipit-source-id: 1011fdd597d0a7a4ef744ebc359bbb3c3be2aadc
2020-12-03 02:42:04 -08:00
peter
3c5db30eaa Update magma to 2.5.4 for Windows (#48656)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48527

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48656

Reviewed By: zhangguanheng66

Differential Revision: D25261601

Pulled By: malfet

fbshipit-source-id: 4ba0036ca882bccd1990108d13596455d179d06e
2020-12-02 09:45:21 -08:00
Vishwak Srinivasan
47db191f0c Implement Kumaraswamy Distribution (#48285)
Summary:
This PR implements the Kumaraswamy distribution.
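
A minimal usage sketch:

```python
import torch
from torch.distributions import Kumaraswamy

d = Kumaraswamy(torch.tensor([2.0]), torch.tensor([5.0]))
d.sample()  # a draw from the (0, 1) support
```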

cc: fritzo alicanb sdaulton

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48285

Reviewed By: ejguan

Differential Revision: D25221015

Pulled By: ezyang

fbshipit-source-id: e621b25a9c75671bdfc94af145a4d9de2f07231e
2020-12-02 07:46:45 -08:00
Ivan Yashchuk
74330e0497 Added linalg.matrix_rank (#48206)
Summary:
This PR adds `torch.linalg.matrix_rank`.

Changes compared to the original `torch.matrix_rank`:
- input with the complex dtype is supported
- batched input is supported
- "symmetric" kwarg renamed to "hermitian"

Should I update the documentation for `torch.matrix_rank`?

For input with no elements (for example, a 0×0 matrix), the current implementation diverges from NumPy: NumPy fails because `max` is not defined for such input, while here I chose to return an appropriately sized tensor of zeros. I think that's mathematically the correct thing to do.
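
A small sketch (the empty-input line reflects the convention chosen here):

```python
import torch

a = torch.eye(4)
a[-1, -1] = 0.0
torch.linalg.matrix_rank(a)                  # tensor(3)
torch.linalg.matrix_rank(torch.zeros(0, 0))  # tensor(0), per the convention above
```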

Ref https://github.com/pytorch/pytorch/issues/42666.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48206

Reviewed By: albanD

Differential Revision: D25211965

Pulled By: mruberry

fbshipit-source-id: ae87227150ab2cffa07f37b4a3ab228788701837
2020-12-02 03:29:25 -08:00
Akifumi Imanishi
492683bd42 Add LazyConvXd and LazyConvTransposeXd (#47350)
Summary:
This PR implements LazyConvXd and LazyConvTransposeXd based on https://github.com/pytorch/pytorch/issues/44538. (cc. emcastillo and albanD)
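
A minimal sketch of the lazy-initialization behavior:

```python
import torch

conv = torch.nn.LazyConv2d(out_channels=16, kernel_size=3)
out = conv(torch.randn(1, 8, 32, 32))  # in_channels is inferred from the first input
conv.in_channels  # 8
```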

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47350

Reviewed By: ejguan

Differential Revision: D25220645

Pulled By: albanD

fbshipit-source-id: b5e2e866d53761a3415fd762d05a81920f8b16c3
2020-12-01 07:00:28 -08:00
AishwaryaKalloli
fe80638212 added docs to nn.rst (#48374)
Summary:
Fixes  https://github.com/pytorch/pytorch/issues/48198
Added the following functions to a "Global Hooks For Module" subsection in the containers section of nn.rst (usage sketched after the list).
- register_module_forward_pre_hook
- register_module_forward_hook
- register_module_backward_hook
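
A minimal sketch of a global forward hook:

```python
import torch

def log_forward(module, inputs, output):
    print(type(module).__name__, tuple(output.shape))

# installed on every Module; returns a RemovableHandle
handle = torch.nn.modules.module.register_module_forward_hook(log_forward)
torch.nn.Linear(3, 2)(torch.randn(1, 3))  # prints: Linear (1, 2)
handle.remove()
```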

screenshots:
![image](https://user-images.githubusercontent.com/30429206/99903019-9ee7f000-2ce7-11eb-95dd-1092d5e57ce7.png)
![image](https://user-images.githubusercontent.com/30429206/99903027-ac04df00-2ce7-11eb-9983-42ce67de75ba.png)
![image](https://user-images.githubusercontent.com/30429206/99903039-c3dc6300-2ce7-11eb-81c4-a0240067fe23.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48374

Reviewed By: ejguan

Differential Revision: D25219507

Pulled By: albanD

fbshipit-source-id: 0dd9d65f562c001c993ebcb51465e8ddcf631231
2020-11-30 11:34:49 -08:00
Hameer Abbasi
4e15877d5c Add documentation for torch.overrides submodule. (#48170)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48087

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48170

Reviewed By: ejguan

Differential Revision: D25220942

Pulled By: ezyang

fbshipit-source-id: a2b7f7b565f5e77173d8ce2fe9676a8131f929b6
2020-11-30 11:25:31 -08:00
mariosasko
755b8158e2 Fix __config__ docs (#48557)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48287

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48557

Reviewed By: ngimel

Differential Revision: D25211872

Pulled By: mruberry

fbshipit-source-id: ac916e16722809e747bd8960675c1477e3a1084d
2020-11-29 23:57:06 -08:00
kiyosora
272f4db043 Implement NumPy-like function torch.float_power() (#44937)
Summary:
- Related to https://github.com/pytorch/pytorch/issues/38349
- Implements the NumPy-like function `torch.float_power()`.
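
A quick usage sketch:

```python
import torch

torch.float_power(torch.tensor([2, 4]), 0.5)  # tensor([1.4142, 2.0000], dtype=torch.float64)
```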

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44937

Reviewed By: ngimel

Differential Revision: D25192119

Pulled By: mruberry

fbshipit-source-id: 2e446b8e0c2825f045fe057e30c9419335557a05
2020-11-27 18:01:42 -08:00
kshitij12345
33cc1d6a64 [docs] fix torch.swap{dim/axes} to showup in docs (#48376)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48372

Verified locally that it is generated
![Screenshot from 2020-11-22 20-38-15](https://user-images.githubusercontent.com/19503980/99907517-298a1880-2d03-11eb-9a8f-9809609c2d2d.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48376

Reviewed By: ngimel

Differential Revision: D25176483

Pulled By: mruberry

fbshipit-source-id: 911b57d43319059cc9f809ea0396c3740ff81ff5
2020-11-25 13:15:39 -08:00
Fayçal Arbai
2e0a8b75d8 An implementation of torch.tile as requested in pytorch/pytorch#38349 (#47974)
Summary:
The approach is to simply reuse `torch.repeat`, adding one more behavior to tile: prepending 1's to the reps array if the tensor has more dimensions than the reps given in input. Thus, for a tensor of shape (64, 3, 24, 24), reps of (2, 2) become (1, 1, 2, 2), which is what NumPy does.
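
A quick sketch of that behavior:

```python
import torch

t = torch.randn(64, 3, 24, 24)
torch.tile(t, (2, 2)).shape  # torch.Size([64, 3, 48, 48]); reps padded to (1, 1, 2, 2)
```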

I've encountered some instability with the test on my end, where the test could fail randomly (sometimes due to a random value of `self.dim()`, and sometimes due to segfaults). I'd appreciate any feedback on the test or an explanation for this instability so I can fix it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47974

Reviewed By: ngimel

Differential Revision: D25148963

Pulled By: mruberry

fbshipit-source-id: bf63b72c6fe3d3998a682822e669666f7cc97c58
2020-11-24 18:07:25 -08:00
Ivan Yashchuk
4ed7f36ed1 Added linalg.eigh, linalg.eigvalsh (#45526)
Summary:
This PR adds `torch.linalg.eigh`, and `torch.linalg.eigvalsh` for NumPy compatibility.
The current `torch.symeig` uses (on CPU) a different LAPACK routine than NumPy (`syev` vs `syevd`). Even though it shouldn't matter in practice, `torch.linalg.eigh` uses `syevd` (as NumPy does).
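
A minimal usage sketch:

```python
import torch

a = torch.randn(3, 3)
a = a + a.T                  # symmetric input
w, v = torch.linalg.eigh(a)  # eigenvalues (ascending) and eigenvectors
```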

Ref https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45526

Reviewed By: gchanan

Differential Revision: D25022659

Pulled By: mruberry

fbshipit-source-id: 3676b77a121c4b5abdb712ad06702ac4944e900a
2020-11-22 04:57:28 -08:00
Brian Johnson
63b04dc11d Update index.rst (#47282)
Summary:
Updating master to match changes we made to 1.7.

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47282

Reviewed By: zhangguanheng66

Differential Revision: D24727322

Pulled By: brianjo

fbshipit-source-id: 64e3f06eb32c965390f282b81084460903d872a2
2020-11-20 08:52:00 -08:00
Randall Hunt
562d4c3bc5 Add basic ldexp operator for numpy compatibility (#45370)
Summary:
Adds ldexp operator for https://github.com/pytorch/pytorch/issues/38349

I'm not entirely sure the changes to `NamedRegistrations.cpp` were needed but I saw other operators in there so I added it.

Normally the ldexp operator is used along with frexp to construct and deconstruct floating-point values. This is useful for performing operations on either the mantissa or the exponent portion of a floating-point value.

Sleef, std math.h, and cuda support both ldexp and frexp but not for all data types. I wasn't able to figure out how to get the iterators to play nicely with a vectorized kernel so I have left this with just the normal CPU kernel for now.
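
A quick usage sketch:

```python
import torch

torch.ldexp(torch.tensor([1.0]), torch.tensor([3]))  # tensor([8.]) == 1.0 * 2**3
```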

This is the first operator I'm adding so please review with an eye for errors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45370

Reviewed By: mruberry

Differential Revision: D24333516

Pulled By: ranman

fbshipit-source-id: 2df78088f00aa9789aae1124eda399771e120d3f
2020-11-20 04:09:39 -08:00
Ivan Yashchuk
343b3e5cae Added linalg.tensorinv (#45969)
Summary:
This PR adds `torch.linalg.tensorinv` for NumPy compatibility.
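
A quick usage sketch:

```python
import torch

a = torch.eye(24).reshape(4, 6, 8, 3)
ainv = torch.linalg.tensorinv(a, ind=2)
ainv.shape  # torch.Size([8, 3, 4, 6])
```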

Ref https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45969

Reviewed By: zhangguanheng66

Differential Revision: D25060568

Pulled By: mruberry

fbshipit-source-id: 3b145ce64e4bd5021bc229f5ffdd791c572673a0
2020-11-19 11:54:50 -08:00
kiyosora
008f840e7a Implement in-place method torch.cumsum_ and torch.cumprod_ (#47651)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47193
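
A quick sketch of the new in-place methods:

```python
import torch

t = torch.tensor([1., 2., 3.])
t.cumsum_(dim=0)   # in place; t is now tensor([1., 3., 6.])
t.cumprod_(dim=0)  # t is now tensor([1., 3., 18.])
```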

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47651

Reviewed By: zou3519

Differential Revision: D24992438

Pulled By: ezyang

fbshipit-source-id: c38bea55f4af1fc92be780eaa8e1d462316e6192
2020-11-19 11:20:12 -08:00
mattip
975ff6624b DOC: backport doc build fix from 1.7, tweak link (#47349)
Summary:
xref gh-46927 to the 1.7 release branch

This backports a fix to the script that pushes docs to pytorch/pytorch.github.io. Specifically, it now pushes to the correct directory when a tag is created here. This issue became apparent in the 1.7 release cycle and should be backported here.

Along the way, fix the canonical link to the pytorch/audio documentation now that they use subdirectories for the versions, xref pytorch/audio#992. This saves a redirect.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47349

Reviewed By: zhangguanheng66

Differential Revision: D25073752

Pulled By: seemethere

fbshipit-source-id: c778c94a05f1c3e916217bb184f69107e7d2c098
2020-11-19 09:51:18 -08:00
mfkasim91
8819bad86c Implement igammac (3rd PR) (#48171)
Summary:
Related: https://github.com/pytorch/pytorch/issues/46183 (torch.igamma)
This is the regularized upper incomplete gamma function.
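
A small sketch of the defining identity:

```python
import torch

a = torch.tensor([1.0, 2.0])
x = torch.tensor([0.5, 1.5])
torch.igamma(a, x) + torch.igammac(a, x)  # tensor([1., 1.]); P(a, x) + Q(a, x) == 1
```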

This is supposed to be exactly the same as https://github.com/pytorch/pytorch/issues/47463, but after rebasing the `viable/strict` branch.

cc: mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48171

Reviewed By: zhangguanheng66

Differential Revision: D25060107

Pulled By: mruberry

fbshipit-source-id: 89780dea21dbb2141cbc4f7f18192cb78a769b17
2020-11-18 23:44:32 -08:00
kshitij12345
68a3a3f3b5 Add torch.swapdims and torch.swapaxes (#46041)
Summary:
Reference https://github.com/pytorch/pytorch/issues/38349

Delegates to `torch.transpose` (not sure what the best way to alias is).
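
A quick usage sketch:

```python
import torch

t = torch.zeros(2, 3, 4)
torch.swapaxes(t, 0, 2).shape  # torch.Size([4, 3, 2])
torch.swapdims(t, 0, 2).shape  # same; both alias torch.transpose
```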

TODO:
* [x] Add test
* [x] Add documentation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46041

Reviewed By: gchanan

Differential Revision: D25022816

Pulled By: mruberry

fbshipit-source-id: c80223d081cef84f523ef9b23fbedeb2f8c1efc5
2020-11-18 11:35:53 -08:00
Howard Huang
a6898cb5f4 Small documentation changes for RRef and Dist Autograd (#48123)
Summary:
Small wording changes and polishing documentation for:

https://pytorch.org/docs/master/rpc/rref.html
https://pytorch.org/docs/master/rpc/distributed_autograd.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48123

Reviewed By: zhangguanheng66

Differential Revision: D25059320

Pulled By: H-Huang

fbshipit-source-id: 7a0be56f062de06483b3bd3a5d617234101862ba
2020-11-18 10:57:59 -08:00
Jerry Zhang
8aaca4b46a [reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48038

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.

This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.

Test Plan:
Imported from OSS

Reviewed By: vkuzo

Differential Revision: D25000462

fbshipit-source-id: e3609a3ae4a3476a42f61276619033054194a0d2
2020-11-17 09:52:21 -08:00
Vasiliy Kuznetsov
ee995d33bd rename torch.Assert to torch._assert (#47763) (#47972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47972

Changing the name due to the discussion in
https://github.com/pytorch/pytorch/pull/47399.

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
python test/test_fx_experimental.py
```

Reviewed By: supriyar

Differential Revision: D24974298

Pulled By: vkuzo

fbshipit-source-id: 24ded93a7243ec79a0375f4eae8a3db9b787f857
2020-11-16 11:43:27 -08:00
Hameer Abbasi
3a2aad9314 Fix documentation to point to torch.overrides instead of _overrides. (#47842)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47697

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47842

Reviewed By: smessmer

Differential Revision: D24951750

Pulled By: ezyang

fbshipit-source-id: df62ec2e52f1c561c864a50bac4abf4a55e4f8e6
2020-11-16 08:28:53 -08:00
Vasiliy Kuznetsov
4779553921 Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47949

This reverts commit 1478e5ec2a.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24966363

Pulled By: vkuzo

fbshipit-source-id: ca1126f699eef84027a15df35962728296c8a790
2020-11-14 08:40:30 -08:00
Masaki Kozuki
2eb1e866e8 Update links in DDP note (#47663)
Summary:
Update the links in https://pytorch.org/docs/stable/notes/ddp.html#.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47663

Reviewed By: smessmer

Differential Revision: D24951684

Pulled By: ezyang

fbshipit-source-id: c1c104d76cf0292a7fc75a627bf76bb56fea72d0
2020-11-13 21:26:28 -08:00
Ivan Yashchuk
260daf088d Added linalg.cholesky (#46083)
Summary:
This PR adds `torch.linalg.cholesky` function that matches `numpy.linalg.cholesky`.
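
A minimal usage sketch:

```python
import torch

a = torch.randn(3, 3)
a = a @ a.T + 3 * torch.eye(3)  # symmetric positive-definite
L = torch.linalg.cholesky(a)
torch.allclose(L @ L.T, a)      # True, up to numerical error
```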

Fixed `lda` argument to `lapackCholesky` calls.
Added `random_hermitian_pd_matrix` helper function for tests.

Ref https://github.com/pytorch/pytorch/issues/42666.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46083

Reviewed By: ailzhang

Differential Revision: D24861752

Pulled By: mruberry

fbshipit-source-id: 214dbceb4e8a2c589df209493efd843962d25593
2020-11-13 16:50:40 -08:00