Commit Graph

1104 Commits

AishwaryaKalloli
fe80638212 added docs to nn.rst (#48374)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48198
Added the following functions to a "Global Hooks For Module" subsection in the containers section of nn.rst:
- register_module_forward_pre_hook
- register_module_forward_hook
- register_module_backward_hook

screenshots:
![image](https://user-images.githubusercontent.com/30429206/99903019-9ee7f000-2ce7-11eb-95dd-1092d5e57ce7.png)
![image](https://user-images.githubusercontent.com/30429206/99903027-ac04df00-2ce7-11eb-9983-42ce67de75ba.png)
![image](https://user-images.githubusercontent.com/30429206/99903039-c3dc6300-2ce7-11eb-81c4-a0240067fe23.png)
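
For illustration, a minimal sketch of the global-hook API documented here (the hook body and module are made up):

```
import torch
import torch.nn as nn

def log_forward(module, inputs, output):
    # fires after the forward pass of every module in the process
    print(type(module).__name__, tuple(output.shape))

handle = nn.modules.module.register_module_forward_hook(log_forward)
nn.Linear(4, 2)(torch.randn(3, 4))  # prints: Linear (3, 2)
handle.remove()
```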

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48374

Reviewed By: ejguan

Differential Revision: D25219507

Pulled By: albanD

fbshipit-source-id: 0dd9d65f562c001c993ebcb51465e8ddcf631231
2020-11-30 11:34:49 -08:00
Hameer Abbasi
4e15877d5c Add documentation for torch.overrides submodule. (#48170)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48087
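
For a quick orientation to the submodule being documented, a small sketch (the function choices are illustrative):

```
import torch
from torch.overrides import get_ignored_functions, get_overridable_functions

overridable = get_overridable_functions()    # mapping of namespaces to functions
print(sum(len(fns) for fns in overridable.values()))
print(torch.add in get_ignored_functions())  # False: torch.add is overridable
```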

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48170

Reviewed By: ejguan

Differential Revision: D25220942

Pulled By: ezyang

fbshipit-source-id: a2b7f7b565f5e77173d8ce2fe9676a8131f929b6
2020-11-30 11:25:31 -08:00
mariosasko
755b8158e2 Fix __config__ docs (#48557)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48287
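
For reference, the two entry points covered by the `__config__` docs can be exercised like this (output is build-dependent):

```
import torch
print(torch.__config__.show())           # compiler, CUDA, BLAS, and other build info
print(torch.__config__.parallel_info())  # intra-/inter-op threading configuration
```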

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48557

Reviewed By: ngimel

Differential Revision: D25211872

Pulled By: mruberry

fbshipit-source-id: ac916e16722809e747bd8960675c1477e3a1084d
2020-11-29 23:57:06 -08:00
kiyosora
272f4db043 Implement NumPy-like function torch.float_power() (#44937)
Summary:
- Related with https://github.com/pytorch/pytorch/issues/38349
- Implements the NumPy-like function `torch.float_power()`; a usage sketch follows.
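
A minimal sketch (sample values are illustrative):

```
import torch
x = torch.tensor([2, 3, 4])
# float_power always computes in double precision (float64, or complex128)
print(torch.float_power(x, 2))   # tensor([ 4.,  9., 16.], dtype=torch.float64)
print(torch.float_power(x, -1))  # tensor([0.5000, 0.3333, 0.2500], dtype=torch.float64)
```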

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44937

Reviewed By: ngimel

Differential Revision: D25192119

Pulled By: mruberry

fbshipit-source-id: 2e446b8e0c2825f045fe057e30c9419335557a05
2020-11-27 18:01:42 -08:00
kshitij12345
33cc1d6a64 [docs] fix torch.swap{dim/axes} to show up in docs (#48376)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48372

Verified locally that the documentation is generated:
![Screenshot from 2020-11-22 20-38-15](https://user-images.githubusercontent.com/19503980/99907517-298a1880-2d03-11eb-9a8f-9809609c2d2d.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48376

Reviewed By: ngimel

Differential Revision: D25176483

Pulled By: mruberry

fbshipit-source-id: 911b57d43319059cc9f809ea0396c3740ff81ff5
2020-11-25 13:15:39 -08:00
Fayçal Arbai
2e0a8b75d8 An implementation of torch.tile as requested in pytorch/pytorch#38349 (#47974)
Summary:
The approach is to simply reuse `torch.repeat`, adding one more piece of functionality to tile: prepending 1's to the reps array when the tensor has more dimensions than the reps given in the input. Thus, for a tensor of shape (64, 3, 24, 24), reps of (2, 2) become (1, 1, 2, 2), which is what NumPy does.
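
A minimal sketch of that prepending behavior (the shapes are the ones from the example above):

```
import torch
x = torch.randn(64, 3, 24, 24)
y = torch.tile(x, (2, 2))  # reps (2, 2) is treated as (1, 1, 2, 2)
print(y.shape)             # torch.Size([64, 3, 48, 48])
```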

I've encountered some instability with the test on my end, where I could get a random failure of the test (due to, sometimes, a random value of `self.dim()`, and sometimes, segfaults). I'd appreciate any feedback on the test or an explanation for this instability so I can fix it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47974

Reviewed By: ngimel

Differential Revision: D25148963

Pulled By: mruberry

fbshipit-source-id: bf63b72c6fe3d3998a682822e669666f7cc97c58
2020-11-24 18:07:25 -08:00
Ivan Yashchuk
4ed7f36ed1 Added linalg.eigh, linalg.eigvalsh (#45526)
Summary:
This PR adds `torch.linalg.eigh`, and `torch.linalg.eigvalsh` for NumPy compatibility.
The current `torch.symeig` uses (on CPU) a different LAPACK routine than NumPy (`syev` vs `syevd`). Even though it shouldn't matter in practice, `torch.linalg.eigh` uses `syevd` (as NumPy does).
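
A minimal sketch of the two new functions (random input, assuming a real symmetric matrix in float64):

```
import torch
a = torch.randn(3, 3, dtype=torch.float64)
a = a + a.T                                         # make it symmetric (Hermitian)
w, v = torch.linalg.eigh(a)                         # ascending eigenvalues, eigenvectors
print(torch.allclose(v @ torch.diag(w) @ v.T, a))   # True: reconstructs the input
print(torch.allclose(torch.linalg.eigvalsh(a), w))  # True: eigenvalues only
```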

Ref https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45526

Reviewed By: gchanan

Differential Revision: D25022659

Pulled By: mruberry

fbshipit-source-id: 3676b77a121c4b5abdb712ad06702ac4944e900a
2020-11-22 04:57:28 -08:00
Brian Johnson
63b04dc11d Update index.rst (#47282)
Summary:
Updating master to match changes we made to 1.7.

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47282

Reviewed By: zhangguanheng66

Differential Revision: D24727322

Pulled By: brianjo

fbshipit-source-id: 64e3f06eb32c965390f282b81084460903d872a2
2020-11-20 08:52:00 -08:00
Randall Hunt
562d4c3bc5 Add basic ldexp operator for numpy compatibility (#45370)
Summary:
Adds ldexp operator for https://github.com/pytorch/pytorch/issues/38349

I'm not entirely sure the changes to `NamedRegistrations.cpp` were needed, but I saw other operators in there, so I added them.

Normally the ldexp operator is used along with frexp to construct and deconstruct floating-point values. This is useful for performing operations on either the mantissa or the exponent portion of floating-point values.

Sleef, the standard math.h, and CUDA support both ldexp and frexp, but not for all data types. I wasn't able to figure out how to get the iterators to play nicely with a vectorized kernel, so I have left this with just the normal CPU kernel for now.

This is the first operator I'm adding so please review with an eye for errors.
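
A minimal sketch of the operator (sample values are illustrative):

```
import torch
# ldexp(input, other) computes input * 2**other
mantissa = torch.tensor([0.5, 0.75])
exponent = torch.tensor([3, 4])
print(torch.ldexp(mantissa, exponent))  # tensor([ 4., 12.])
```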

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45370

Reviewed By: mruberry

Differential Revision: D24333516

Pulled By: ranman

fbshipit-source-id: 2df78088f00aa9789aae1124eda399771e120d3f
2020-11-20 04:09:39 -08:00
Ivan Yashchuk
343b3e5cae Added linalg.tensorinv (#45969)
Summary:
This PR adds `torch.linalg.tensorinv` for NumPy compatibility.
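
A minimal sketch (the shape is chosen so the leading and trailing dimensions both multiply to 24):

```
import torch
a = torch.randn(4, 6, 8, 3)              # prod(4, 6) == prod(8, 3) == 24
ainv = torch.linalg.tensorinv(a, ind=2)  # result dims: trailing, then leading
print(ainv.shape)                        # torch.Size([8, 3, 4, 6])
```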

Ref https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45969

Reviewed By: zhangguanheng66

Differential Revision: D25060568

Pulled By: mruberry

fbshipit-source-id: 3b145ce64e4bd5021bc229f5ffdd791c572673a0
2020-11-19 11:54:50 -08:00
kiyosora
008f840e7a Implement in-place method torch.cumsum_ and torch.cumprod_ (#47651)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47193
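
A minimal sketch of the new in-place methods (values are illustrative):

```
import torch
t = torch.tensor([1., 2., 3.])
t.cumsum_(dim=0)   # in-place cumulative sum
print(t)           # tensor([1., 3., 6.])
t = torch.tensor([1., 2., 3.])
t.cumprod_(dim=0)  # in-place cumulative product
print(t)           # tensor([1., 2., 6.])
```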

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47651

Reviewed By: zou3519

Differential Revision: D24992438

Pulled By: ezyang

fbshipit-source-id: c38bea55f4af1fc92be780eaa8e1d462316e6192
2020-11-19 11:20:12 -08:00
mattip
975ff6624b DOC: backport doc build fix from 1.7, tweak link (#47349)
Summary:
xref gh-46927 to the 1.7 release branch

This backports a fix to the script that pushes docs to pytorch/pytorch.github.io. Specifically, it pushes to the correct directory when a tag is created here. This issue became apparent in the 1.7 release cycle and should be backported here.

Along the way, fix the canonical link to the pytorch/audio documentation now that they use subdirectories for the versions, xref pytorch/audio#992. This saves a redirect.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47349

Reviewed By: zhangguanheng66

Differential Revision: D25073752

Pulled By: seemethere

fbshipit-source-id: c778c94a05f1c3e916217bb184f69107e7d2c098
2020-11-19 09:51:18 -08:00
mfkasim91
8819bad86c Implement igammac (3rd PR) (#48171)
Summary:
Related: https://github.com/pytorch/pytorch/issues/46183 (torch.igamma)
This is the regularized upper incomplete gamma function.
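
A minimal sketch, using the identity P(a, x) + Q(a, x) = 1 between the lower and upper regularized functions (sample values are illustrative):

```
import torch
a = torch.tensor([1.0, 2.0])
x = torch.tensor([0.5, 1.5])
print(torch.igammac(a, x))                       # Q(a, x)
print(torch.igamma(a, x) + torch.igammac(a, x))  # tensor([1., 1.])
```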

This is supposed to be exactly the same as https://github.com/pytorch/pytorch/issues/47463, but after rebasing onto the `viable/strict` branch.

cc: mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48171

Reviewed By: zhangguanheng66

Differential Revision: D25060107

Pulled By: mruberry

fbshipit-source-id: 89780dea21dbb2141cbc4f7f18192cb78a769b17
2020-11-18 23:44:32 -08:00
kshitij12345
68a3a3f3b5 Add torch.swapdims and torch.swapaxes (#46041)
Summary:
Reference https://github.com/pytorch/pytorch/issues/38349

Delegates to `torch.transpose` (not sure what the best way to alias is).
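
A minimal sketch of the aliases (shapes are illustrative):

```
import torch
x = torch.randn(2, 3, 4)
print(torch.swapdims(x, 0, 2).shape)  # torch.Size([4, 3, 2])
print(torch.swapaxes(x, 0, 2).shape)  # same result; both delegate to torch.transpose
```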

TODO:
* [x] Add test
* [x] Add documentation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46041

Reviewed By: gchanan

Differential Revision: D25022816

Pulled By: mruberry

fbshipit-source-id: c80223d081cef84f523ef9b23fbedeb2f8c1efc5
2020-11-18 11:35:53 -08:00
Howard Huang
a6898cb5f4 Small documentation changes for RRef and Dist Autograd (#48123)
Summary:
Small wording changes and polishing documentation for:

https://pytorch.org/docs/master/rpc/rref.html
https://pytorch.org/docs/master/rpc/distributed_autograd.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48123

Reviewed By: zhangguanheng66

Differential Revision: D25059320

Pulled By: H-Huang

fbshipit-source-id: 7a0be56f062de06483b3bd3a5d617234101862ba
2020-11-18 10:57:59 -08:00
Jerry Zhang
8aaca4b46a [reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48038

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.

This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.
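
A minimal sketch of that point, with nn.ReLU handling both float and quantized input (values are illustrative):

```
import torch
import torch.nn as nn

relu = nn.ReLU()
x = torch.randn(4)
print(relu(x))  # float path
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
print(relu(q))  # quantized path, same module
```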

Test Plan:
Imported from OSS

Reviewed By: vkuzo

Differential Revision: D25000462

fbshipit-source-id: e3609a3ae4a3476a42f61276619033054194a0d2
2020-11-17 09:52:21 -08:00
Vasiliy Kuznetsov
ee995d33bd rename torch.Assert to torch._assert (#47763) (#47972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47972

Changing the name due to the discussion in
https://github.com/pytorch/pytorch/pull/47399.
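
For context, a minimal sketch of the renamed helper (the message string is made up):

```
import torch
# behaves like a plain Python assert, but is traceable by torch.fx
torch._assert(True, "condition must hold")
```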

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
python test/test_fx_experimental.py
```

Reviewed By: supriyar

Differential Revision: D24974298

Pulled By: vkuzo

fbshipit-source-id: 24ded93a7243ec79a0375f4eae8a3db9b787f857
2020-11-16 11:43:27 -08:00
Hameer Abbasi
3a2aad9314 Fix documentation to point to torch.overrides instead of _overrides. (#47842)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47697

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47842

Reviewed By: smessmer

Differential Revision: D24951750

Pulled By: ezyang

fbshipit-source-id: df62ec2e52f1c561c864a50bac4abf4a55e4f8e6
2020-11-16 08:28:53 -08:00
Vasiliy Kuznetsov
4779553921 Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47949

This reverts commit 1478e5ec2a.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24966363

Pulled By: vkuzo

fbshipit-source-id: ca1126f699eef84027a15df35962728296c8a790
2020-11-14 08:40:30 -08:00
Masaki Kozuki
2eb1e866e8 Update links in DDP note (#47663)
Summary:
Update the links in https://pytorch.org/docs/stable/notes/ddp.html#.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47663

Reviewed By: smessmer

Differential Revision: D24951684

Pulled By: ezyang

fbshipit-source-id: c1c104d76cf0292a7fc75a627bf76bb56fea72d0
2020-11-13 21:26:28 -08:00
Ivan Yashchuk
260daf088d Added linalg.cholesky (#46083)
Summary:
This PR adds `torch.linalg.cholesky` function that matches `numpy.linalg.cholesky`.

Fixed `lda` argument to `lapackCholesky` calls.
Added `random_hermitian_pd_matrix` helper function for tests.
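
A minimal sketch of the new function (random positive-definite input, using float64 for stability):

```
import torch
a = torch.randn(3, 3, dtype=torch.float64)
a = a @ a.T + 3 * torch.eye(3, dtype=torch.float64)  # Hermitian positive-definite
l = torch.linalg.cholesky(a)                         # lower-triangular factor
print(torch.allclose(l @ l.T, a))                    # True
```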

Ref https://github.com/pytorch/pytorch/issues/42666.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46083

Reviewed By: ailzhang

Differential Revision: D24861752

Pulled By: mruberry

fbshipit-source-id: 214dbceb4e8a2c589df209493efd843962d25593
2020-11-13 16:50:40 -08:00
Jerry Zhang
1478e5ec2a [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.

This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24747035

fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769
2020-11-12 10:56:30 -08:00
David Fan
9ea7a6c7c5 [ONNX] Update ONNX doc for writing pytorch model (#46961)
Summary:
For tracing to succeed, the PyTorch model needs to be written in an idiomatic, trace-friendly Torch way, so we add instructions with examples here.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46961

Reviewed By: ailzhang

Differential Revision: D24900040

Pulled By: bzinodev

fbshipit-source-id: b375b533396b11dbc9656fa61e84a3f92f352e4b
2020-11-12 10:16:45 -08:00
Xiang Gao
4a7de2746f Add docs on how to toggle TF32 flags on C++ (#47331)
Summary:
I have been asked several times how to toggle this flag in libtorch. I think it would be good to mention it in the docs.
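
For reference, the Python-side flags whose C++ (libtorch) equivalents this PR documents (the toggles below are the existing `torch.backends` API):

```
import torch
torch.backends.cuda.matmul.allow_tf32 = False  # disable TF32 for matmuls
torch.backends.cudnn.allow_tf32 = True         # keep TF32 for cuDNN convolutions
```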

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47331

Reviewed By: glaringlee

Differential Revision: D24777576

Pulled By: mruberry

fbshipit-source-id: cc2a338c477bb57e0bb74b8960c47fde99665e41
2020-11-08 01:29:24 -08:00
Elias Ellison
7ab843e78b [JIT] add freeze to docs (#47120)
Summary:
`freeze` was temporarily renamed to `_freeze` in a reorg and then removed from the docs [here](https://github.com/pytorch/pytorch/pull/43473). This adds it back to the docs.
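
A minimal sketch of the documented API (the module is made up):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

scripted = torch.jit.script(M().eval())  # freezing requires eval mode
frozen = torch.jit.freeze(scripted)      # inlines parameters and attributes
print(frozen(torch.ones(2)))             # tensor([2., 2.])
```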

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47120

Reviewed By: suo

Differential Revision: D24650712

Pulled By: eellison

fbshipit-source-id: 399e31586b8093de66937ba1266007ee291f509e
2020-11-04 13:50:36 -08:00
Erjia Guan
f1ac63d324 Implement copysign (#46396)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46396

Related #38349

[numpy](https://numpy.org/doc/stable/reference/generated/numpy.copysign.html?highlight=copysign#numpy.copysign)
- No in-place function
- No method
- Optional output
- Available: byte, char, bool, int, short, long, float, double, half
- Integral promoted to float
- Not available: float/double complex

`c = np.copysign(a, b)`

|  a |  b |  c | a.grad |
|---:|---:|---:|-------:|
| -1 | -1 | -1 |      1 |
| -0 | -1 | -0 |      0 |
|  0 | -1 | -0 |      0 |
|  1 | -1 | -1 |     -1 |
| -1 | -0 | -1 |      1 |
| -0 | -0 |  0 |      0 |
|  0 | -0 |  0 |      0 |
|  1 | -0 | -1 |     -1 |
| -1 |  0 |  1 |     -1 |
| -0 |  0 |  0 |      0 |
|  0 |  0 |  0 |      0 |
|  1 |  0 |  1 |      1 |
| -1 |  1 |  1 |     -1 |
| -0 |  1 |  0 |      0 |
|  0 |  1 |  0 |      0 |
|  1 |  1 |  1 |      1 |

This function becomes **non-differentiable** at `a=0` for any `b`. So, in my opinion, we may set the gradient for `a=0` to 0.
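
A minimal sketch of the forward behavior tabulated above (sample values are illustrative):

```
import torch
a = torch.tensor([-1.0,  0.0,  2.0, -3.0])
b = torch.tensor([ 1.0, -1.0, -1.0,  1.0])
print(torch.copysign(a, b))  # tensor([ 1., -0., -2.,  3.])
```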

TODO:
- [x] test (cpu/gpu)
- [x] doc
- [x] ~kernel_vec~

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D24401366

Pulled By: ejguan

fbshipit-source-id: 3621c5ff74b185376a3705589983bb5197ab896d
2020-11-04 08:08:57 -08:00
Ivan Yashchuk
f276ab55cd Added Kronecker product of tensors (torch.kron) (#45358)
Summary:
This PR adds a function for calculating the Kronecker product of tensors.
The implementation is based on `at::tensordot` with permutations and reshape.
Tests pass.
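
A minimal sketch (sample values are illustrative):

```
import torch
a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.eye(2)
print(torch.kron(a, b))
# tensor([[1., 0., 2., 0.],
#         [0., 1., 0., 2.],
#         [3., 0., 4., 0.],
#         [0., 3., 0., 4.]])
```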

TODO:

- [x] Add more test cases
- [x] Write documentation
- [x] Add entry to `common_methods_invocations.py`

Ref. https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45358

Reviewed By: mrshenli

Differential Revision: D24680755

Pulled By: mruberry

fbshipit-source-id: b1f8694589349986c3abfda3dc1971584932b3fa
2020-11-03 12:41:41 -08:00
Taylor Robie
ac8a8185eb expose Timer docs to PyTorch website. (#46880)
Summary:
CC: gchanan jspisak seemethere

I previewed the docs and they look reasonable. Let me know if I missed anything.
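
For orientation, a minimal sketch of the Timer API now exposed on the website (statement and sizes are made up):

```
import torch
from torch.utils.benchmark import Timer

t = Timer(
    stmt="torch.mm(a, b)",
    setup="a = torch.randn(64, 64); b = torch.randn(64, 64)",
    globals={"torch": torch},
)
print(t.timeit(100))  # a Measurement with wall-time statistics
```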

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46880

Reviewed By: seemethere, izdeby

Differential Revision: D24551503

Pulled By: robieta

fbshipit-source-id: 627f73d3dd4d8f089777bca8653702735632b9fc
2020-11-02 21:59:29 -08:00
Xiong Wei
74d730c0b5 implement NumPy-like functionality column_stack, row_stack (#46313)
Summary:
Related https://github.com/pytorch/pytorch/issues/38349

This PR implements `column_stack` as a composite of `torch.reshape` and `torch.hstack`, and makes `row_stack` an alias of `torch.vstack`.
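
A minimal sketch (sample values are illustrative):

```
import torch
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print(torch.column_stack((a, b)))  # tensor([[1, 4], [2, 5], [3, 6]])
print(torch.row_stack((a, b)))     # alias of torch.vstack: tensor([[1, 2, 3], [4, 5, 6]])
```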

Todo

- [x] docs
- [x] alias pattern for `row_stack`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46313

Reviewed By: ngimel

Differential Revision: D24585471

Pulled By: mruberry

fbshipit-source-id: 62fc0ffd43d051dc3ecf386a3e9c0b89086c1d1c
2020-10-29 12:14:39 -07:00
mfkasim91
6eaa324c9f Implement torch.igamma (#46183)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41637
This is the regularized lower incomplete gamma function, equivalent to SciPy's `gammainc` and TensorFlow's `igamma`.
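
A minimal sketch (sample values are illustrative; for a = 1, P(1, x) = 1 - exp(-x)):

```
import torch
a = torch.tensor([1.0])
x = torch.tensor([1.0])
print(torch.igamma(a, x))  # tensor([0.6321]) == 1 - exp(-1)
```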

cc fritzo mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46183

Reviewed By: gchanan

Differential Revision: D24479126

Pulled By: mruberry

fbshipit-source-id: fdf8ea289fe4ca1b408810732192411e948fcdfe
2020-10-29 11:40:18 -07:00
Ivan Yashchuk
f629fbe235 Added torch.linalg.tensorsolve (#46142)
Summary:
This PR adds `torch.linalg.tensorsolve` function that matches `numpy.linalg.tensorsolve`.
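
A minimal sketch (shapes chosen so the leading dimensions of `a` multiply to the trailing one):

```
import torch
a = torch.randn(2, 3, 6)            # prod(2, 3) == 6
b = torch.randn(2, 3)
x = torch.linalg.tensorsolve(a, b)  # solves tensordot(a, x) == b
print(x.shape)                      # torch.Size([6])
print(torch.allclose(torch.tensordot(a, x, dims=1), b, atol=1e-5))
```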

Ref https://github.com/pytorch/pytorch/issues/42666.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46142

Reviewed By: izdeby

Differential Revision: D24539400

Pulled By: mruberry

fbshipit-source-id: 6e38364fe0bc511e739036deb274d9307df119b2
2020-10-29 10:29:28 -07:00
Zafar
57bf0b596a [docs] Changing the wording on quantization versioning and support (#46858)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46858

Test Plan: Imported from OSS

Reviewed By: dskhudia

Differential Revision: D24542598

Pulled By: z-a-f

fbshipit-source-id: 0eb7a2dcc8f8ad52954f2555cf41d5f7524cbc2c
2020-10-26 14:30:50 -07:00
BowenBao
52f8d320b3 [ONNX] Update ONNX doc for indexing export (#46349)
Summary:
Adding example code for supported cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46349

Reviewed By: gchanan

Differential Revision: D24459449

Pulled By: malfet

fbshipit-source-id: 65021a96cd12225615aa40af5d916e0cda56d107
2020-10-23 09:49:43 -07:00
Pearu Peterson
905ed3c840 Revised sparse tensor documentation. (#45400)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/44635.
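
For orientation, a minimal sketch of the COO construction the revised docs center on (values are illustrative):

```
import torch
i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])      # indices: one row per sparse dimension
v = torch.tensor([3.0, 4.0, 5.0])  # values
s = torch.sparse_coo_tensor(i, v, (2, 3))
print(s.to_dense())
# tensor([[0., 0., 3.],
#         [4., 0., 5.]])
```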

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45400

Reviewed By: ezyang

Differential Revision: D24359410

Pulled By: mruberry

fbshipit-source-id: 37c691a49a7b0042c7a298e0ed1226702b097c8b
2020-10-22 02:07:54 -07:00
Lillian Johnson
f83cf2dab3 [JIT] adding torch.jit.isinstance support (#46062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46062

Adds support for torch.jit.isinstance in both eager and script mode

Example use:

```
import torch
from typing import Any, List

class TestModule(torch.nn.Module):
    def __init__(self):
        super(TestModule, self).__init__()

    def call(self, input1: str, input2: str) -> str:
        return input1

    def forward(self, input: Any) -> None:
        if torch.jit.isinstance(input, List[str]):
            for el in input:
                print(el)

TestModule().forward(["1","2"])
scripted_module = torch.jit.script(TestModule())
scripted_module(["1", "2"])
```

Test Plan: Imported from OSS

Reviewed By: bertmaher, zou3519

Differential Revision: D24264415

Pulled By: Lilyjjo

fbshipit-source-id: 039c95bddd854c414027ac8332832e6bc830b5b9
2020-10-20 16:47:49 -07:00
Emilio Castillo
d38a71d579 torch.nn.modules.LazyModuleMixin and torch.nn.LazyLinear (Shape Inference II) (#44538)
Summary:
Retake on https://github.com/pytorch/pytorch/issues/40493 after all the feedback from albanD

This PR implements the generic Lazy mechanism and a sample `LazyLinear` layer with the `UninitializedParameter`.

There are two main differences from the previous PR:
Now `torch.nn.Module` remains untouched.
We don't require an explicit initialization or a dummy forward pass before starting the training or inference of the actual module, making this much simpler to use from the user side.

As we discussed offline, there was the suggestion of not using a mixin, but changing the `__class__` attribute of `LazyLinear` to become `Linear` once it's completely initialized. While this can be useful, for the time being we need `LazyLinear` to be a `torch.nn.Module` subclass, since there are many checks that rely on modules being instances of `torch.nn.Module`.
This can cause problems when we create complex modules such as
```
class MyNetwork(torch.nn.Module):
    def __init__(self):
        super(MyNetwork, self).__init__()
        self.conv = torch.nn.Conv2d(20, 4, 2)
        self.linear = torch.nn.LazyLinear(10)
    def forward(self, x):
        y = self.conv(x).clamp(min=0)
        return self.linear(y)
```
Here, when `__setattr__` is called at the time LazyLinear is registered, it won't be added to the child modules of `MyNetwork`, so we would have to add it manually later; but currently there is no way to do so, as we can't access the parent module from LazyLinear once it becomes the Linear module. (We can add a workaround for this if needed.)

TODO:

- Add convolutions once the design is OK
- Fix docstrings
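
A minimal usage sketch of the mechanism (sizes are made up):

```
import torch
layer = torch.nn.LazyLinear(10)  # out_features only; in_features is inferred
x = torch.randn(4, 25)
y = layer(x)                     # parameters materialize on the first forward pass
print(layer.weight.shape)        # torch.Size([10, 25])
```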

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44538

Reviewed By: ngimel

Differential Revision: D24162854

Pulled By: albanD

fbshipit-source-id: 6d58dfe5d43bfb05b6ee506e266db3cf4b885f0c
2020-10-19 13:13:54 -07:00
Yanan Cao
6a2f40dc66 Expose script_if_tracing as public API (#46494)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45921

`torch.jit._script_if_tracing` is still kept for BC
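
A minimal sketch of the now-public API (the function body is made up):

```
import torch

@torch.jit.script_if_tracing
def helper(x: torch.Tensor) -> torch.Tensor:
    # compiled only when called during torch.jit.trace; otherwise runs eagerly
    return x.relu() + 1

print(helper(torch.tensor([-1.0, 2.0])))  # tensor([1., 3.])
```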

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46494

Reviewed By: ZolotukhinM

Differential Revision: D24381621

Pulled By: gmagogsfm

fbshipit-source-id: 35d9f2da38c591039ba95cd95ef186e6c7e47586
2020-10-17 17:31:57 -07:00
Peter Bell
da95eec613 torch.fft: Two dimensional FFT functions (#45164)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45164

This PR implements `fft2`, `ifft2`, `rfft2` and `irfft2`. These are the last functions required for `torch.fft` to match `numpy.fft`. If you look at either NumPy or SciPy you'll see that the 2-dimensional variants are identical to `*fftn` in every way, except for the default value of `axes`. In fact you can even use `fft2` to do general n-dimensional transforms.
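
A minimal sketch of the new 2-D variants (input size is illustrative):

```
import torch
x = torch.randn(8, 8)
X = torch.fft.fft2(x)            # transforms the last two dimensions by default
print(torch.allclose(torch.fft.ifft2(X).real, x, atol=1e-6))  # True
print(torch.fft.rfft2(x).shape)  # torch.Size([8, 5]): half-size last dimension
```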

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D24363639

Pulled By: mruberry

fbshipit-source-id: 95191b51a0f0b8e8e301b2c20672ed4304d02a57
2020-10-17 16:23:06 -07:00
senius
e7dbaa252e Update optim.rst for better understanding (#45944)
Summary:
The `i` variable on line 272 may cause ambiguity in understanding; I think it should be renamed to `epoch`.

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45944

Reviewed By: agolynski

Differential Revision: D24219486

Pulled By: vincentqb

fbshipit-source-id: 2af0408594613e82a1a1b63971650cabde2b576e
2020-10-14 09:36:06 -07:00
anjali411
ac245f6b45 Complex autograd doc fix (#46258)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46258

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D24286512

Pulled By: anjali411

fbshipit-source-id: 60bc98d69336101c0d8fe5ab542b9757b5e7faac
2020-10-13 14:36:50 -07:00
Vitaly Fedyunin
31ee5d8d8b Adding information how to control randomness with DataLoader (#45749)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45749
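
For orientation, a sketch of the seeding recipe the added note describes (the dataset and seed values are made up):

```
import random
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32  # per-worker seed set by DataLoader
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)

ds = TensorDataset(torch.arange(10).float())
loader = DataLoader(ds, batch_size=2, shuffle=True,
                    worker_init_fn=seed_worker, generator=g)
print([batch[0].tolist() for batch in loader])  # identical order on every run
```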

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D24088407

Pulled By: VitalyFedyunin

fbshipit-source-id: 398b73ec5e8c83000ebc692001da847fc0aaa48f
2020-10-12 16:57:58 -07:00
Erjia Guan
bed3b40523 Implement ravel (#46098)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46098

Doc:
![image](https://user-images.githubusercontent.com/68879799/95611323-ae5cf380-0a2f-11eb-9b8e-56bf79ce68af.png)
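
A minimal sketch (values are illustrative):

```
import torch
t = torch.tensor([[1, 2], [3, 4]])
print(torch.ravel(t))  # tensor([1, 2, 3, 4]); copies only when necessary
```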

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24253213

Pulled By: ejguan

fbshipit-source-id: 42a866c902272cbe3743a9d0cb3afb9165d51c0b
2020-10-12 16:00:44 -07:00
Rohan Varma
362d9a932e Remove object-based collective APIs from public docs (#46075)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46075

Removes these from public docs for now as we are still
iterating/formalizing these APIs. Will add them back once they are part of a
PyTorch release.
ghstack-source-id: 113928700

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D24211510

fbshipit-source-id: 3e36ff6990cf8e6ef72b6e524322ae06f9097aa2
2020-10-09 09:24:51 -07:00
Peter Bell
c86ee082a2 torch.fft: Add helper functions section to docs (#46032)
Summary:
Fixes https://github.com/pytorch/pytorch/pull/44877#issuecomment-705411068
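
For orientation, a sketch of two of the helpers the new section covers (assuming fftfreq and fftshift are among them):

```
import torch
freqs = torch.fft.fftfreq(5)      # tensor([ 0.0000,  0.2000,  0.4000, -0.4000, -0.2000])
print(torch.fft.fftshift(freqs))  # tensor([-0.4000, -0.2000,  0.0000,  0.2000,  0.4000])
```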

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46032

Reviewed By: ngimel

Differential Revision: D24191580

Pulled By: mruberry

fbshipit-source-id: 58a32de886b40f85653ddc3b65bf8d551395f023
2020-10-08 17:57:12 -07:00
n-v-k
64b0686986 Expose ChannelShuffle (#46000)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45999
Also a small fix for the caffe2 counterpart.
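
A minimal sketch of the newly exposed module (sizes are illustrative):

```
import torch
cs = torch.nn.ChannelShuffle(2)            # two channel groups
x = torch.arange(1., 5.).view(1, 4, 1, 1)
print(cs(x).flatten())                     # tensor([1., 3., 2., 4.])
```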

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46000

Reviewed By: mruberry

Differential Revision: D24185855

Pulled By: ngimel

fbshipit-source-id: c5d599bb8100b86b81c6901f1b8b8baefc12cb16
2020-10-08 16:00:01 -07:00
anjali411
89256611b5 Doc note update for complex autograd (#45270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45270

<img width="1679" alt="Screen Shot 2020-10-07 at 1 45 59 PM" src="https://user-images.githubusercontent.com/20081078/95368324-fa7b2d00-08a3-11eb-9066-2e659a4085a2.png">
<img width="1673" alt="Screen Shot 2020-10-07 at 1 46 10 PM" src="https://user-images.githubusercontent.com/20081078/95368332-fbac5a00-08a3-11eb-9be5-77ce6deb8967.png">
<img width="1667" alt="Screen Shot 2020-10-07 at 1 46 30 PM" src="https://user-images.githubusercontent.com/20081078/95368337-fe0eb400-08a3-11eb-80a2-5ad23feeeb83.png">
<img width="1679" alt="Screen Shot 2020-10-07 at 1 46 48 PM" src="https://user-images.githubusercontent.com/20081078/95368345-00710e00-08a4-11eb-96d9-e2d544554a4b.png">
<img width="1680" alt="Screen Shot 2020-10-07 at 1 47 03 PM" src="https://user-images.githubusercontent.com/20081078/95368350-023ad180-08a4-11eb-89b3-f079480741f4.png">
<img width="1680" alt="Screen Shot 2020-10-07 at 1 47 12 PM" src="https://user-images.githubusercontent.com/20081078/95368364-0535c200-08a4-11eb-82fc-9435a046e4ca.png">

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24203257

Pulled By: anjali411

fbshipit-source-id: cd637dade5fb40cecf5d9f4bd03d508d36e26fcd
2020-10-08 15:04:52 -07:00
Heitor Schueroff de Souza
636eb18029 Fixed median nan propagation and implemented nanmedian (#45847)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45847

Original PR here https://github.com/pytorch/pytorch/pull/45084. Created this one because I was having problems with ghstack.
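
A minimal sketch of both changes (values are illustrative):

```
import torch
t = torch.tensor([1.0, float('nan'), 3.0, 4.0])
print(torch.median(t))     # tensor(nan): median now propagates NaN
print(torch.nanmedian(t))  # tensor(3.): NaN values are ignored
```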

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D24136629

Pulled By: heitorschueroff

fbshipit-source-id: dd7c7540a33f6a19e1ad70ba2479d5de44abbdf9
2020-10-08 11:20:21 -07:00
Kurt Mohler
ef4817fe5a Add tensor_split function, based on numpy.array_split (#45168)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/9382
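
A minimal sketch of the NumPy-style uneven split (values are illustrative):

```
import torch
x = torch.arange(7)
print(torch.tensor_split(x, 3))
# (tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6]))
```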

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45168

Reviewed By: ngimel

Differential Revision: D24166164

Pulled By: mruberry

fbshipit-source-id: 795459821e52885bc99623a01a2abec060995ce6
2020-10-07 23:14:48 -07:00
Rohan Varma
154347d82f Fix distributed documentation for asynchronous collective Work objects (#45709)
Summary:
Closes https://github.com/pytorch/pytorch/issues/42247. Clarifies some documentation related to `Work` object semantics (outputs of async collective functions), clarifies the difference between CPU and CUDA operations (on the Gloo or NCCL backend), and provides an example where understanding a CUDA operation's wait() semantics is necessary for correct code.
![sync](https://user-images.githubusercontent.com/8039770/94875710-6f64e780-040a-11eb-8fb5-e94fd53534e5.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45709

Reviewed By: ngimel

Differential Revision: D24171256

Pulled By: rohan-varma

fbshipit-source-id: 6365a569ef477b59eb2ac0a8a9a1c1f34eb60e22
2020-10-07 19:59:51 -07:00
Michael Carilli
5640b79bf8 Allow consumer ops to sync on GraphRoot's gradient (#45787)
Summary:
Currently, a GraphRoot instance doesn't have an associated stream.  Streaming backward synchronization logic assumes the instance ran on the default stream, and tells consumer ops to sync with the default stream.  If the gradient the GraphRoot instance passes to consumer backward ops was populated on a non-default stream, we have a race condition.

The race condition can exist even if the user doesn't give a manually populated gradient:
```python
with torch.cuda.stream(side_stream):
    # loss.backward() implicitly synthesizes a one-element 1.0 tensor on side_stream
    # GraphRoot passes it to consumers, but consumers first sync on default stream, not side_stream.
    loss.backward()

    # Internally to backward(), streaming-backward logic takes over, stuff executes on the same stream it ran on in forward,
    # and the side_stream context is irrelevant.  GraphRoot's interaction with its first consumer(s) is the spot where
    # the side_stream context causes a problem.
```

This PR fixes the race condition by associating a GraphRoot instance, at construction time, with the current stream(s) on the device(s) of the grads it will pass to consumers. (I think this relies on GraphRoot executing in the main thread, before backward thread(s) fork, because the grads were populated on the main thread.)

The test demonstrates the race condition. It fails reliably without the PR's GraphRoot diffs and passes with the GraphRoot diffs.

With the GraphRoot diffs, manually populating an incoming-gradient arg for `backward` (or `torch.autograd.grad`) and the actual call to `autograd.backward` will have the same stream-semantics relationship as any other pair of ops:
```python
# implicit population is safe
with torch.cuda.stream(side_stream):
    loss.backward()

# explicit population in side stream then backward in side stream is safe
with torch.cuda.stream(side_stream):
    kickoff_grad = torch.ones_like(loss)
    loss.backward(gradient=kickoff_grad)

# explicit population in one stream then backward kickoff in another stream
# is NOT safe, even with this PR's diffs, but that unsafety is consistent with
# stream-semantics relationship of any pair of ops
kickoff_grad = torch.ones_like(loss)
with torch.cuda.stream(side_stream):
    loss.backward(gradient=kickoff_grad)

# Safe, as you'd expect for any pair of ops
kickoff_grad = torch.ones_like(loss)
side_stream.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(side_stream):
    loss.backward(gradient=kickoff_grad)
```
This PR also adds the last three examples above to the CUDA docs and references them from the autograd docstrings.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45787

Reviewed By: nairbv

Differential Revision: D24138376

Pulled By: albanD

fbshipit-source-id: bc4cd9390f9f0358633db530b1b09f9c1080d2a3
2020-10-07 08:53:53 -07:00