Commit Graph

942 Commits

Author SHA1 Message Date
Mike Ruberry
60f2fa6a84 Updates serialization note to explain versioned symbols and dynamic versioning (#41395)
Summary:
Doc update intended to clarify and expand our current serialization behavior, including explaining the difference between torch.save/torch.load, torch.nn.Module.state_dict/torch.nn.Module.load_state_dict, and torch.jit.save/torch.jit.load. Also explains, for the first time, when historic serialized TorchScript behavior is preserved, and gives our recommendation for preserving behavior (use the same PyTorch version to consume a model as produced it).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41395

Reviewed By: ngimel

Differential Revision: D22560538

Pulled By: mruberry

fbshipit-source-id: dbc2f1bb92ab61ff2eca4888febc21f7dda76ba1
2020-07-15 19:05:19 -07:00
Xingying Cheng
04320a47d7 Add optimizer_for_mobile doc into python api root doc (#41211)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41211

Test Plan: Imported from OSS

Reviewed By: xta0

Differential Revision: D22543608

fbshipit-source-id: bf522a6c94313bf2696eca3c5bb5812ea98998d0
2020-07-15 09:57:40 -07:00
Shen Li
3a63a939d4 Revert D22517785: [pytorch][PR] Enable TF32 support for cuBLAS
Test Plan: revert-hammer

Differential Revision:
D22517785 (288ece89e1)

Original commit changeset: 87334c893561

fbshipit-source-id: 0a0674f49c1bcfc98f7f88af5a8c7de93b76e458
2020-07-15 08:15:48 -07:00
Qiao Tan
359cdc20e2 Revert D22432885: [pytorch][PR] unsafe_split, unsafe_split_with_sizes, unsafe_chunk operations
Test Plan: revert-hammer

Differential Revision:
D22432885 (c17670ac50)

Original commit changeset: 324aef091b32

fbshipit-source-id: 6b7c52bde46932e1cf77f61e7035d8a641b0beb6
2020-07-14 16:06:42 -07:00
Wojciech Baranowski
c17670ac50 unsafe_split, unsafe_split_with_sizes, unsafe_chunk operations (#39299)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/36403

Copy-paste of the issue description:

* Escape hatch: Introduce unsafe_* versions of the three functions above that keep the current behavior (outputs not tracked as views). The documentation will explain in detail why they are unsafe and when it is safe to use them (basically, either the outputs or the input can be modified in place, but not both; otherwise, you will get wrong gradients). See the sketch after this list.
* Deprecation: Use the CreationMeta on views to track views created by these three ops, and warn when any of the views is modified in place, saying that this is deprecated and will raise an error soon. Users that really need to modify these views in place should look at the doc of the unsafe_* versions to make sure their use case is valid:
  * If it is not, then PyTorch is computing wrong gradients for their use case and they should not modify in place anymore.
  * If it is, then they can use the unsafe_* versions to keep the current behavior.
* Removal: Use the CreationMeta on views to prevent any in-place modification of these views (like we do for all other views coming from multi-output Nodes). Users will still be able to use the unsafe_* versions if they really need to do this.
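As a hedged illustration of the escape-hatch caveat (a sketch of my own, not from the PR; it assumes the proposed `unsafe_split` name):

```python
import torch

x = torch.randn(4, requires_grad=True)
y = x * 2  # some differentiable op

# Regular split: outputs are tracked as views of y, so autograd
# checks in-place modifications of the chunks.
a, b = torch.split(y, 2)

# unsafe_split: outputs are NOT tracked as views. In-place ops on
# the outputs are fine as long as y itself is not also modified
# in place; doing both would silently produce wrong gradients.
c, d = torch.unsafe_split(y, 2)
c.add_(1.0)
(c.sum() + d.sum()).backward()
```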

Note about BC-breaking:
- This PR changes the behavior of the regular functions by making them return proper views now. This is a modification that the user will be able to see.
- We skip all the view logic for these views, so the code should behave the same as before (except for the change in the `._is_view()` value).
- Even though the view logic is not performed, we do raise deprecation warnings for the cases where doing these ops would throw an error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/39299

Differential Revision: D22432885

Pulled By: albanD

fbshipit-source-id: 324aef091b32ce69dd067fe9b13a3f17d85d0f12
2020-07-14 14:15:41 -07:00
Xiang Gao
288ece89e1 Enable TF32 support for cuBLAS (#40800)
Summary:
Benchmark on a fully connected network and torchvision models (time in seconds) on GA100:

| model              | batch size | forward(TF32) | forward(FP32) | backward(TF32) | backward(FP32) |
|--------------------|------------|---------------|---------------|----------------|----------------|
| FC 512-128-32-8    | 512        | 0.000211      | 0.000321      | 0.000499       | 0.000532       |
| alexnet            | 512        | 0.0184        | 0.0255        | 0.0486         | 0.0709         |
| densenet161        | 128        | 0.0665        | 0.204         | 0.108          | 0.437          |
| googlenet          | 256        | 0.0925        | 0.110         | 0.269          | 0.326          |
| inception_v3       | 256        | 0.155         | 0.214         | 0.391          | 0.510          |
| mnasnet1_0         | 512        | 0.108         | 0.137         | 0.298          | 0.312          |
| mobilenet_v2       | 512        | 0.114         | 0.294         | 0.133          | 0.303          |
| resnet18           | 512        | 0.0722        | 0.100         | 0.182          | 0.228          |
| resnext50_32x4d    | 256        | 0.170         | 0.237         | 0.373          | 0.479          |
| shufflenet_v2_x1_0 | 512        | 0.0463        | 0.0473        | 0.125          | 0.123          |
| squeezenet1_0      | 512        | 0.0870        | 0.0948        | 0.205          | 0.214          |
| vgg16              | 256        | 0.167         | 0.234         | 0.401          | 0.502          |
| wide_resnet50_2    | 512        | 0.186         | 0.310         | 0.415          | 0.638          |
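
For context, a hedged sketch of how TF32 is toggled (my own addition; the `torch.backends.cuda.matmul.allow_tf32` / `torch.backends.cudnn.allow_tf32` flags exist in later PyTorch releases and may postdate this exact PR):

```python
import torch

# Allow TF32 for matmuls routed through cuBLAS (the subject of this PR)
torch.backends.cuda.matmul.allow_tf32 = True
# Allow TF32 for cuDNN convolutions
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
c = a @ b  # runs in TF32 on Ampere GPUs when allowed
```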

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40800

Reviewed By: mruberry

Differential Revision: D22517785

Pulled By: ngimel

fbshipit-source-id: 87334c8935616f72a6af5abbd3ae69f76923dc3e
2020-07-14 13:21:10 -07:00
Xiaomeng Yang
80d5b3785b Add torch.logit function (#41062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41062

Add torch.logit function
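
A minimal usage sketch (my own illustration; `torch.logit` is the inverse of the sigmoid, log(p / (1 - p)), with an optional `eps` clamp):

```python
import torch

p = torch.tensor([0.1, 0.5, 0.9])
print(torch.logit(p))  # tensor([-2.1972,  0.0000,  2.1972])

# eps clamps inputs into [eps, 1 - eps], avoiding +/- inf at 0 and 1
print(torch.logit(torch.tensor([0.0, 1.0]), eps=1e-6))
```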

Test Plan: buck test mode/dev-nosan //caffe2/test:torch -- "logit"

Reviewed By: hl475

Differential Revision: D22406912

fbshipit-source-id: b303374f4c68850eb7477eb0645546a24b844606
2020-07-13 19:33:20 -07:00
xueht-fnst
0651887eb4 Improve repr for torch.iinfo & torch.finfo (#40488)
Summary:
- fix https://github.com/pytorch/pytorch/issues/39991
- Include the `min`/`max`/`eps`/`tiny` values directly in the repr of `torch.iinfo` & `torch.finfo` for inspection
- Use `torch.float16` / `torch.int16` instead of the non-corresponding legacy names `Half` / `Short`
- The improved repr looks like:
```
>>> torch.iinfo(torch.int8)
iinfo(type=torch.int8, max=127, min=-128)
>>> torch.iinfo(torch.int16)
iinfo(type=torch.int16, max=32767, min=-32768)
>>> torch.iinfo(torch.int32)
iinfo(type=torch.int32, max=2.14748e+09, min=-2.14748e+09)
>>> torch.iinfo(torch.int64)
iinfo(type=torch.int64, max=9.22337e+18, min=-9.22337e+18)
>>> torch.finfo(torch.float16)
finfo(type=torch.float16, eps=0.000976563, max=65504, min=-65504, tiny=6.10352e-05)
>>> torch.finfo(torch.float32)
finfo(type=torch.float32, eps=1.19209e-07, max=3.40282e+38, min=-3.40282e+38, tiny=1.17549e-38)
>>> torch.finfo(torch.float64)
finfo(type=torch.float64, eps=2.22045e-16, max=1.79769e+308, min=-1.79769e+308, tiny=2.22507e-308)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40488

Differential Revision: D22445301

Pulled By: mruberry

fbshipit-source-id: 552af9904c423006084b45d6c4adfb4b5689db54
2020-07-10 15:22:55 -07:00
Michael Carilli
d927aee312 Small clarification of torch.cuda.amp multi-model example (#41203)
Summary:
Some people have been confused by `retain_graph` in the snippet; they thought it was an additional requirement imposed by amp.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41203

Differential Revision: D22463700

Pulled By: ngimel

fbshipit-source-id: e6fc8871be2bf0ecc1794b1c6f5ea99af922bf7e
2020-07-10 11:13:26 -07:00
anjali411
db38487ece Autograd Doc for Complex Numbers (#41012)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41012

Test Plan: Imported from OSS

Differential Revision: D22476911

Pulled By: anjali411

fbshipit-source-id: 7da20cb4312a0465272bebe053520d9911475828
2020-07-10 09:57:43 -07:00
Heitor Schueroff de Souza
75a4862f63 Added SiLU activation function (#41034)
Summary:
Implemented the SiLU activation function as discussed in https://github.com/pytorch/pytorch/issues/3169.
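
A usage sketch of the new module (my own example; SiLU(x) = x * sigmoid(x), also known as swish):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

act = nn.SiLU()
x = torch.randn(3)
assert torch.allclose(act(x), x * torch.sigmoid(x))

y = F.silu(x)  # functional form
```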

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41034

Reviewed By: glaringlee

Differential Revision: D22465203

Pulled By: heitorschueroff

fbshipit-source-id: b27d064529fc99600c586ad49b594b52b718b0d2
2020-07-10 07:37:30 -07:00
Luca Wehrstedt
dde3d5f4a8 [RPC docs] Remove mention of TensorPipe's SHM and CMA backends as they're not built (#41200)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41200

In short, we messed up. The SHM and CMA backends of TensorPipe are Linux-specific and thus are guarded by an #ifdef in the agent's code. Due to a mishap with CMake (owing to the fact that TensorPipe has two CMake files, one for PyTorch and a "standalone" one) we were not correctly propagating some flags, and these #ifdefs were always false. This means that these two backends have always been disabled and have thus never been covered by our OSS CI. It would be irresponsible to enable them now in v1.6, so instead we remove any mention of them from the docs.

Note that this is perhaps not as bad as it sounds. These two backends provided higher performance (lower latency) when the two endpoints were on the same machine. However, I suspect that most RPC users only do transfers across machines, for which SHM and CMA wouldn't have played any role.
ghstack-source-id: 107458630

Test Plan: Docs only

Differential Revision: D22462158

fbshipit-source-id: 0d72fea11bcaab6d662184bbe7270529772a5e9b
2020-07-09 15:33:07 -07:00
mattip
a88099ba3e restore old documentation references (#39086)
Summary:
Fixes gh-39007

We replaced actual content with links to generated content in many places to break the documentation into manageable chunks. This caused references like
```
https://pytorch.org/docs/stable/torch.html#torch.flip
```
to become
```
https://pytorch.org/docs/master/generated/torch.flip.html#torch.flip
```
The textual content that was located at the old reference was replaced with a link to the new reference. This PR adds a `<p id="xxx"></p>` anchor next to the link, so that older references from outside tutorials and forums still work: they will bring the user to the link, which they can then follow through to see the actual content.

The way this is done is to monkeypatch the Sphinx writer method that produces the link. It is ugly but practical, and in my mind no worse than adding JavaScript to do the same thing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/39086

Differential Revision: D22462421

Pulled By: jlin27

fbshipit-source-id: b8f913b38c56ebb857c5a07bded6509890900647
2020-07-09 15:20:10 -07:00
Shen Li
0edbe6b063 Add a link in RPC doc page to point to PT Distributed overview (#41108)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41108

Test Plan: Imported from OSS

Differential Revision: D22440751

Pulled By: mrshenli

fbshipit-source-id: 9e7b002091a3161ae385fdfcc26484ae8fc243bb
2020-07-08 14:00:05 -07:00
Michael Carilli
0911c1e71a Added index_put to promotelist (#41035)
Summary:
[index_put](https://pytorch.org/docs/master/tensors.html#torch.Tensor.index_put) requires the src and dst tensors to be the same dtype, so imo it belongs on the promote list when autocast is active (the output should be the widest dtype among the input dtypes).

I also put some other registrations in alphabetical order.
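
A hedged sketch of what landing on the promote list means in practice (my own example, not from the PR):

```python
import torch

dst = torch.zeros(4, device="cuda")  # float32
src = torch.randn(2, device="cuda", dtype=torch.float16)
idx = torch.tensor([0, 2], device="cuda")

with torch.cuda.amp.autocast():
    # On the promote list, autocast casts the inputs to the widest
    # participating dtype (float32 here) before running the op.
    out = dst.index_put((idx,), src)
```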

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41035

Differential Revision: D22418305

Pulled By: ngimel

fbshipit-source-id: b467cb16ac6c2ba1f9e43531f69a144b17f00b87
2020-07-07 20:36:55 -07:00
mattip
75155df8b4 Doc warnings (#41068)
Summary:
solves most of gh-38011 in the framework of solving gh-32703.

These should only be formatting fixes; I did not try to fix grammar or syntax.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41068

Differential Revision: D22411919

Pulled By: zou3519

fbshipit-source-id: 25780316b6da2cfb4028ea8a6f649bb18b746440
2020-07-07 11:43:21 -07:00
Karel Ha
00ee54d2a4 Fix link to PyTorch organization (from Governance) (#40984)
Summary:
PR fixes https://github.com/pytorch/pytorch/issues/40666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40984

Differential Revision: D22404543

Pulled By: ngimel

fbshipit-source-id: 0d39e8f4d701517cce9c31fddaaad46be3d4844b
2020-07-07 11:22:57 -07:00
Edward Leardi
733b8c23c4 Fix several quantization documentation typos (#40567)
Summary:
This PR fixes several typos I noticed in the docs here: https://pytorch.org/docs/master/quantization.html. In one case there was a misspelled module [torch.nn.instrinsic.qat](https://pytorch.org/docs/master/quantization.html#torch-nn-instrinsic-qat) which I corrected and am including screenshots of below just in case.

<img width="1094" alt="before" src="https://user-images.githubusercontent.com/54918401/85766765-5cdd6280-b6e5-11ea-93e6-4944cf820b71.png">

<img width="1093" alt="after" src="https://user-images.githubusercontent.com/54918401/85766769-5d75f900-b6e5-11ea-8850-0d1f5ed67b16.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40567

Differential Revision: D22311291

Pulled By: ezyang

fbshipit-source-id: 65d1f3dd043357e38a584d9e30f31634a5b0995c
2020-07-07 09:45:23 -07:00
Edward Leardi
6b50874cb7 Fix HTTP links in documentation to HTTPS (#40878)
Summary:
I ran `make linkcheck` using `sphinx.builders.linkcheck` on the documentation and noticed a few links weren't using HTTPS so I quickly updated them all.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40878

Differential Revision: D22404647

Pulled By: ngimel

fbshipit-source-id: 9c9756db59197304023fddc28f252314f6cf4af3
2020-07-06 20:05:21 -07:00
raghuramank100
e173278348 Update quantization.rst (#40896)
Summary:
Add documentation for dynamic quantized modules

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40896

Differential Revision: D22395955

Pulled By: z-a-f

fbshipit-source-id: cdc956d1509a0901bc24b73b6ca68a1b65e00cc2
2020-07-06 13:47:39 -07:00
kshitij12345
4104ab8b18 Add torch.count_nonzero (#39992)
Summary:
Reference https://github.com/pytorch/pytorch/issues/38349

TODO:

* [x] Add tests
* [x] Add docs (pending addition to docs.rst)
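A minimal usage sketch of the new op (my own illustration; `torch.count_nonzero` counts non-zero entries, optionally along a dim):

```python
import torch

t = torch.tensor([[0, 1, 2],
                  [3, 0, 0]])
print(torch.count_nonzero(t))         # tensor(3)
print(torch.count_nonzero(t, dim=0))  # tensor([1, 1, 1])
```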
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39992

Reviewed By: ezyang

Differential Revision: D22236738

Pulled By: mruberry

fbshipit-source-id: 8520068b086b5ffc4de9e4939e746ff889293987
2020-06-30 06:39:13 -07:00
Ailing Zhang
d7cd16858f Add documentation that storage sharing is preserved and document serialized file size (#40412)
Summary:
fixes https://github.com/pytorch/pytorch/issues/40157
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40412

Reviewed By: ezyang

Differential Revision: D22265639

Pulled By: ailzhang

fbshipit-source-id: 16b0301f16038bd784e7e92f63253fedc7820adc
2020-06-29 17:23:29 -07:00
Jeong Ukjae
b4db529352 Fix wrong link in docs/source/notes/ddp.rst (#40484)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40484

Differential Revision: D22259834

Pulled By: mrshenli

fbshipit-source-id: 4ec912c600c81010bdb2778c35cbb0321480199f
2020-06-28 13:55:56 -07:00
Wanchao Liang
eebd492dcf [doc] fix autograd doc subsubsection display issue (#40582)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40582

There's a misuse of "~~~~" under `requires_grad`; "~~~~" is not an official section marker, so change it to "^^^^" to denote subsubsections. Also fix the other places where we should use the subsection marker "-----" instead of the subsubsection marker "^^^^".

see https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections

Before:
<img width="712" alt="rst_before" src="https://user-images.githubusercontent.com/9443650/85789835-2226fa80-b6e4-11ea-97b6-2b19fdf324a4.png">
After:
<img width="922" alt="rst_after" src="https://user-images.githubusercontent.com/9443650/85789856-281cdb80-b6e4-11ea-925f-cb3f4ebaa2bf.png">

Test Plan: Imported from OSS

Differential Revision: D22245747

Pulled By: wanchaol

fbshipit-source-id: 11548ed42f627706863bb74d4269827d1b3450d4
2020-06-25 23:28:33 -07:00
Jessica Lin
2e6e8d557c Update docs feature classifications (#39966)
Summary:
Update the following feature classifications in docs to align with the changes:
1. [High Level Autograd APIs](https://pytorch.org/docs/stable/autograd.html#functional-higher-level-api): Beta (was experimental)
2. [Eager Mode Quantization](https://pytorch.org/docs/stable/quantization.html): Beta (was experimental)
3. [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html): Prototype (was experimental)
4. [TorchScript/RPC](https://pytorch.org/docs/stable/rpc.html#rpc): Prototype (was experimental)
5. [Channels Last Memory Layout](https://pytorch.org/docs/stable/tensor_attributes.html#torch-memory-format): Beta (was experimental)
6. [Custom C++ Classes](https://pytorch.org/docs/stable/cpp_index.html): Beta (was experimental)
7. [Torch.Sparse](https://pytorch.org/docs/stable/sparse.html): Beta (was experimental)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39966

Differential Revision: D22213217

Pulled By: jlin27

fbshipit-source-id: dc49337cbc7026ed8dcac506fc60029dc3add854
2020-06-24 15:35:59 -07:00
Shihao Xu
0ecea2d64d [JIT x RPC] Consolidate Future type class and Future impl class (#40406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40406

Same motivation as https://github.com/pytorch/pytorch/issues/35110.

`Future` and `RRef` are two important types for the `rpc` module; they should be easy for users to use.

Reference, https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#directive-autoclass

Follow https://github.com/pytorch/pytorch/pull/35694.
ghstack-source-id: 106484664

Test Plan:
```
buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork

buck build mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork && \
buck-out/gen/caffe2/test/distributed/rpc/jit/rpc_fork\#binary.par \
-r test_rref_local_value
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc/tensorpipe:rpc_fork_tensorpipe
```

```
pyre -l caffe2/torch/fb/training_toolkit
pyre -l caffe2/torch/fb/distributed
pyre -l aiplatform
```

Differential Revision: D7722176

fbshipit-source-id: f3b9ccd7bccb233b2b33ad59dd65e178ba34d67f
2020-06-24 01:44:49 -07:00
Shihao Xu
7c07c39845 [torch.distributed.rpc] Install method docstrings from PyRRef to RRef (#40461)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40461

It turned out `:inherited-members:` (see [doc](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#directive-autoclass)) is not really usable.

This is because pybind11 generates docstrings that write `self` as the parent class, `rpc.PyRRef`, type.

As a workaround, I am pulling the docstrings of the parent class, `PyRRef`, into the subclass, `RRef`, and doing surgery on the docstrings generated by pybind11.

{F241283111}

ghstack-source-id: 106472496

Test Plan:
buck test mode/dev-nosan //caffe2/test/distributed/rpc/:rpc_fork

buck build mode/dev-nosan //caffe2/test/distributed/rpc/:rpc_fork && \
buck-out/gen/caffe2/test/distributed/rpc/rpc_fork\#binary.par \
-r test_rref_str

buck build mode/dev-nosan //caffe2/test/distributed/rpc/:rpc_fork && \
buck-out/gen/caffe2/test/distributed/rpc/rpc_fork\#binary.par \
-r test_return_local_rrefs

buck test mode/dev-nosan //caffe2/torch/fb/distributed/model_parallel/tests:test_elastic_averaging -- 'test_elastic_averaging_center \(caffe2\.torch\.fb\.distributed\.model_parallel\.tests\.test_elastic_averaging\.TestElasticAveragingCenter\)'

P134031188

Differential Revision: D7933834

fbshipit-source-id: c03a8a4c9d98888b64492a8caba1591595bfe247
2020-06-23 19:58:36 -07:00
Jessica Lin
7c737eab59 Remove table of contents at the top of rpc.rst (#40205)
Summary:
mattip - Can we remove the table of contents created by the `.. contents:: :local: :depth: 2` since this page isn't one of the large documentation pages (https://github.com/pytorch/pytorch/issues/38010) and is simply a landing page for the Distributed RPC Framework?

Changes made in this original PR: f10fbcc820 (diff-250b9b23fd6f1a5c15aecdb72afb9d7d)

cc mrshenli
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40205

Differential Revision: D22194943

Pulled By: jlin27

fbshipit-source-id: 4e42845daf2784a17ad81645fe3b838385656bba
2020-06-23 19:45:11 -07:00
Elias Ellison
8c20fb6481 [JIT] freeze doc (#40409)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40409

Reviewed By: ezyang

Differential Revision: D22192709

Pulled By: eellison

fbshipit-source-id: 68cdb2e5040d31957fbd64690fdc03c058d13f9a
2020-06-23 15:44:03 -07:00
Elias Ellison
f000b44d89 Fork/Join Inline Docs (relanding) (#40438)
Summary:
Added fork/wait to docs/source/jit.rst; hopefully that will fix the test error.
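For reference, a sketch of the documented fork/wait pattern (my own example):

```python
import torch

def slow_add(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a + b

@torch.jit.script
def run(x: torch.Tensor) -> torch.Tensor:
    fut = torch.jit.fork(slow_add, x, x)  # runs asynchronously
    y = x * 2
    return y + torch.jit.wait(fut)        # joins the forked task
```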
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40438

Differential Revision: D22188152

Pulled By: eellison

fbshipit-source-id: c19277284455fb6e7c0138b0c1423d90b147d18e
2020-06-23 13:25:51 -07:00
Michael Carilli
3b040c478a Make custom_fwd a no-op when not executed under autocast (#36171)
Summary:
Currently, a custom autograd function written with
```
@torch.cuda.amp.custom_fwd(cast_inputs=dtype)
def forward(ctx, *args):
    ...
```
casts incoming floating-point CUDA tensors to `dtype` unconditionally, regardless of whether the function executes in an autocast-enabled region.  I think I had the wrong idea there.  Autocast-disabled regions should give the user control of input types.  Also, `custom_fwd(cast_inputs=dtype)`-decorated functions' behavior should align with native fp32list/fp16list functions.  C++-side casting wrappers have no effect when autocast is disabled, and  `custom_fwd`'s casting should behave the same way.

The present PR changes `custom_fwd` so it only casts in autocast-enabled regions (also updates custom_fwd to ignore fp64 inputs, like the C++ wrappers).
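
A sketch of the resulting usage pattern (my own example following the `custom_fwd`/`custom_bwd` API):

```python
import torch

class MyMM(torch.autograd.Function):
    @staticmethod
    @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
    def forward(ctx, a, b):
        # In an autocast-enabled region, fp16 CUDA inputs are cast to
        # float32 first; with autocast disabled (post-PR), inputs pass
        # through untouched.
        ctx.save_for_backward(a, b)
        return a.mm(b)

    @staticmethod
    @torch.cuda.amp.custom_bwd
    def backward(ctx, grad):
        a, b = ctx.saved_tensors
        return grad.mm(b.t()), a.t().mm(grad)
```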
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36171

Differential Revision: D22179511

Pulled By: ngimel

fbshipit-source-id: 5a93d070179a43206066bce19da0a5a19ecaabbd
2020-06-23 10:23:02 -07:00
Vasiliy Kuznetsov
9bf255573f quant docs: add and clean up ELU (#40377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40377

Cleans up the docstring for quantized ELU and adds it to the quantization docs.

Test Plan: * build on Mac OS and inspect

Differential Revision: D22162834

Pulled By: vkuzo

fbshipit-source-id: e548fd4dc8d67db27ed19cac4dbdf2a942586759
2020-06-23 09:02:43 -07:00
Vasiliy Kuznetsov
d71ec51c0e quant docs: add and clean up BatchNorm{n}d (#40346)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40346

Cleans up docstrings for quantized BatchNorm and adds to quantization docs

Test Plan: * build on Mac OS and inspect

Differential Revision: D22152633

Pulled By: vkuzo

fbshipit-source-id: e0bf02194158231e0205b5b2df7f6f1ffc3c4d65
2020-06-23 09:02:41 -07:00
Vasiliy Kuznetsov
5e683517a7 quant docs: add and clean up InstanceNorm{n}d (#40345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40345

Fixes docstrings and adds to quantization docs for quantized InstanceNorm.

Test Plan: * build on Mac OS and inspect

Differential Revision: D22152637

Pulled By: vkuzo

fbshipit-source-id: 7a485311ead20796b7a0944827d1d04e14ec8dcd
2020-06-23 09:02:39 -07:00
Vasiliy Kuznetsov
6e3fdd77ca quant docs: add and clean up GroupNorm (#40343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40343

Cleans up the quantized GroupNorm docstring and adds it to quantization docs.

Test Plan: * build on Mac OS and inspect

Differential Revision: D22152635

Pulled By: vkuzo

fbshipit-source-id: 5553b841c7a5d77f1467f0c40657db9e5d730a12
2020-06-23 09:02:36 -07:00
Vasiliy Kuznetsov
d15fcc7e49 quant docs: add and clean up LayerNorm (#40342)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40342

Cleans up the docstrings for quantized LayerNorm, and adds it to the docs.

Test Plan: * build on Mac OS and inspect

Differential Revision: D22152639

Pulled By: vkuzo

fbshipit-source-id: 38adf14b34675d1983ac4ed751938aa396e5400b
2020-06-23 09:02:34 -07:00
Vasiliy Kuznetsov
d27f8eaf92 quant docs: add and clean up hardtanh (#40341)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40341

Cleans up the hardtanh docstring and adds it to quantization docs.

Test Plan: * build and inspect on Mac OS

Differential Revision: D22152636

Pulled By: vkuzo

fbshipit-source-id: c98e635199c8be332aa6958664ff23faad834908
2020-06-23 09:02:32 -07:00
Vasiliy Kuznetsov
8e74fb6a0c quant docs: add and clean up hardsigmoid (#40340)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40340

Adds and simplifies quantization docs for hardsigmoid

Test Plan:
* build docs on Mac OS
* inspect

Differential Revision: D22152634

Pulled By: vkuzo

fbshipit-source-id: 18da273023fb00e5f0bc1e881b00536492c606d3
2020-06-23 09:02:29 -07:00
Vasiliy Kuznetsov
c4594a97ae quant docs: clean up hardswish (#40323)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40323

Cleans up the naming and the function param docs for quantized hardswish.
Remove redundant docstrings and link to floating point modules instead.

Test Plan:
* build the docs on Mac OS
* verify that every link works as expected

Differential Revision: D22152638

Pulled By: vkuzo

fbshipit-source-id: fef04874ae460b449c677424a6a1c6dd47054795
2020-06-23 08:59:34 -07:00
Michael Carilli
8066fba226 [RELAND2] Change AccumulateGrad to yield .grads that match weights' memory layout (#40358)
Summary:
https://github.com/pytorch/pytorch/pull/40129 fixed the error responsible for the first revert, but exposed another error in the same test.

This PR is intended as the "master copy" for merge, and it runs on full CI.
Two other PRs (restricted to run on a small subset of CI) support debugging DDP failures/hangs with multiple devices per process (`test_c10d.py:DistributedDataParallelTest.test_grad_layout_1devicemodule_2replicaperprocess`).
- https://github.com/pytorch/pytorch/pull/40290 tries the test with purely rowmajor contiguous params on an untouched master.  In other words https://github.com/pytorch/pytorch/pull/40290 contains none of this PR's diffs aside from the test itself.
- https://github.com/pytorch/pytorch/pull/40178, for comparison, tries the test with this PR's diffs.

Both fail the same way, indicating the failure is unrelated to this PR's other diffs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40358

Differential Revision: D22165785

Pulled By: albanD

fbshipit-source-id: ac7cdd79af5c080ab74341671392dca8e717554e
2020-06-22 17:13:21 -07:00
Rohan Varma
ae2f1f0372 [DDP Note] Remove refs to RoundRobin PG until we officially support it (#40380)
Summary:
Removes the line mentioning `ProcessGroupRoundRobin` since we don't intend it to be used as a public API just yet. We can add this back when we officially support the API.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40380

Differential Revision: D22165556

Pulled By: rohan-varma

fbshipit-source-id: 24d0477d881dc74f2ff579de61dfd1ced2b09e75
2020-06-22 16:19:29 -07:00
anjali411
8ec2ae9a9f Add view_as_real, view_as_complex for complex tensors (#39099)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39099
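
A minimal usage sketch (my own illustration; `view_as_real` exposes a complex tensor as a real tensor with a trailing dimension of size 2, and `view_as_complex` is its inverse):

```python
import torch

z = torch.tensor([1 + 2j, 3 - 4j])
r = torch.view_as_real(z)      # tensor([[ 1.,  2.], [ 3., -4.]])
z2 = torch.view_as_complex(r)  # back to complex; shares storage with z
```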

Test Plan: Imported from OSS

Differential Revision: D22057886

Pulled By: anjali411

fbshipit-source-id: bad5ba7097ba0dd13f2c549b2463094dee9afa14
2020-06-22 15:15:27 -07:00
Edward Yang
e4766fb4d9 Meta tensors, but without code deduplication (#38490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38490

A meta tensor is a tensor that is a lot like a normal tensor,
except it doesn't actually have any data associated with it.
You can use them to carry out shape/dtype computations without
actually having to run the actual code; for example, this could
be used to do shape inference in a JIT analysis pass.
Check out the description in DispatchKey.h for more information.

Meta tensors are part of a larger project to rationalize how we
write kernels so that we don't have to duplicate shape logic
in CPU kernel, CUDA kernel and meta kernel (this PR makes the
duplication problem worse!)  However, that infrastructure can
be built on top of this proof of concept, which just shows how
you can start writing meta kernels today even without this
infrastructure.

There are a lot of things that don't work:
- I special cased printing for dense tensors only; if you try to
  allocate a meta sparse / quantized tensor things aren't going
  to work.
- The printing formula implies that torch.tensor() can take an
  ellipsis, but I didn't add this.
- I wrote an example formula for binary operators, but it isn't
  even right!  (It doesn't do type promotion or memory layout
  correctly.)  The most future-proof way to do it right is to
  factor the relevant computation out of TensorIterator,
  as it is quite involved.
- Nothing besides torch.add works right now
- Meta functions are ALWAYS included in mobile builds (selective
  build doesn't work on them).  This isn't a big deal for now
  but will become more pressing as more meta functions are added.

One reason I'm putting up this PR now is to check with Yinghai Lu
if we can unblock shape inference for accelerators, while we are
still working on a long term plan for how to unify all shape
computation across our kernels.
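
For illustration, a hedged sketch (my own; in current PyTorch the entry
point is the `meta` device string, which may postdate this proof of
concept):

```python
import torch

# Meta tensors carry shape/dtype but no storage, so shape/dtype
# computations run without allocating or touching any data.
a = torch.empty(2, 3, device="meta")
b = torch.empty(2, 3, device="meta")
c = a + b          # per the PR, torch.add is the op that works today
print(c.shape)     # torch.Size([2, 3])
print(c.device)    # meta
```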

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21935609

Pulled By: ezyang

fbshipit-source-id: f7d8636eeb8516b6bc296db99a16e56029972eee
2020-06-22 09:18:33 -07:00
Jerry Zhang
59ca1d31ca [quant][graphmode] docstrings for top level APIs (#40328)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40328

Test Plan: Imported from OSS

Differential Revision: D22149708

fbshipit-source-id: 63a1cd229d9e4668fba0ef3977e894cb8984318b
2020-06-19 22:20:23 -07:00
Mike Ruberry
4f761f325c Back out "[pytorch][PR] Removes dunder div"
Summary: NVIDIA's Apex is updating to no longer rely on this behavior, but we're reverting this Python2->Python3 update to unblock internal Apex users.

Test Plan: Sandcastle + OSS CI.

Reviewed By: ngimel

Differential Revision: D22146782

fbshipit-source-id: f9483d2cbf9dc3a469ad48a6c863edea3ae51070
2020-06-19 18:31:20 -07:00
Shen Li
3ca05500fa Improve RPC documents (#40296)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40296

1. Added a link to parameter server tutorial
2. Explained current states for TorchScript support

Test Plan: Imported from OSS

Differential Revision: D22142647

Pulled By: mrshenli

fbshipit-source-id: ffd697dd64a3aa874cf3f3488122ed805903370d
2020-06-19 15:34:49 -07:00
James Reed
c73095e78f Add note to serialization docs about zipfile format (#40288)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40288

Test Plan: Imported from OSS

Differential Revision: D22140324

Pulled By: jamesr66a

fbshipit-source-id: 01d7aa642ed2f4e4bdac4b7f3223bf4d7e62fd4d
2020-06-19 13:40:08 -07:00
Negin Raoof
73a156e81f [ONNX] Update pytorch/onnx docs for new export API args (#39802)
Summary:
Update pytorch/onnx docs for new export API args:
Use external data format and Training args.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39802

Reviewed By: hl475

Differential Revision: D22139664

Pulled By: houseroad

fbshipit-source-id: 7d6dcf75129cb88987f8c37b7d9d48ca594c0f38
2020-06-19 13:38:47 -07:00
Luca Wehrstedt
2393bab036 [TensorPipe] Update documentation (#40222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40222

Mention the TensorPipe agent in the RPC docs and give users the information they need to choose which agent to use.
ghstack-source-id: 106225711

Test Plan: Export to GitHub, build locally and try out the docs.

Differential Revision: D22116494

fbshipit-source-id: 30703ba8410c40f64e785f60d71dfd9faa8de4a1
2020-06-19 04:26:49 -07:00
Meghan Lele
d58b8222b7 [JIT] Add support for with statements (#34705)
Summary:
**Summary**
This commit adds support for with statements to PyTorch JIT. Each
of the with items in a with statement is represented in the JIT IR
as a pair of `prim::Enter` and `prim::Exit` nodes that call the
`__enter__` and `__exit__` methods defined on the context manager objects
returned by the expressions in the with item.
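
A hedged sketch of what this enables (my own example; `torch.no_grad()` is one context manager usable from TorchScript):

```python
import torch

@torch.jit.script
def scaled(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():  # lowered to prim::Enter / prim::Exit
        y = x * 2.0
    return y

print(scaled(torch.ones(2)))  # tensor([2., 2.])
```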

**Testing**
This commit adds unit tests for with statements with named with items,
nameless with items, and with statements that encounter exceptions.
```
$ python test/test_jit.py TestWith.test_with_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.430s

OK
```

```
$ python test/test_jit.py TestWith.test_with_no_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.264s

OK
```

```
$ python test/test_jit.py TestWith.test_with_exceptions
Fail to import hypothesis in common_utils, tests are not derandomized
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 1.053s

OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34705

Differential Revision: D22095945

Pulled By: SplitInfinity

fbshipit-source-id: f661565a834786725259b8ea014b4d7532f9419d
2020-06-18 16:57:18 -07:00