Commit Graph

1458 Commits

Author SHA1 Message Date
Meghan Lele
a3db8e0a26 [docs] Add torch.package documentation preamble (#59491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59491

**Summary**
This commit adds a preamble to the `torch.package` documentation page
that explains briefly what `torch.package` is.

**Test Plan**
Continuous integration.

<img width="881" alt="Captura de Pantalla 2021-06-04 a la(s) 3 57 01 p  m" src="https://user-images.githubusercontent.com/4392003/120872203-d535e000-c552-11eb-841d-b38df19bc992.png">

Test Plan: Imported from OSS

Reviewed By: Lilyjjo

Differential Revision: D29050630

Pulled By: SplitInfinity

fbshipit-source-id: 70a3fd43f076751c6ea83be3ead291686c641158
2021-06-10 19:51:37 -07:00
Rohan Varma
2f395f3b54 [reland] Document debugability features in torch.distributed (#59726)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59726

Reland of https://github.com/pytorch/pytorch/pull/59604 with indentation fix
ghstack-source-id: 130979356

Test Plan: ci

Reviewed By: SciPioneer

Differential Revision: D29001923

fbshipit-source-id: 225d9dc5054c223b453f3b39749e2b62f61b9a2c
2021-06-09 16:40:11 -07:00
Luca Wehrstedt
f1786b293d Revert D28972444: [pytorch][PR] Document debugability features in torch.distributed
Test Plan: revert-hammer

Differential Revision:
D28972444 (a9d2810817)

Original commit changeset: da5e8ee84f0d

fbshipit-source-id: 94d3b3b75ddec74ea5b2b76f6a7519dc921ee2a7
2021-06-09 03:04:36 -07:00
Rohan Varma
a9d2810817 Document debugability features in torch.distributed (#59604)
Summary:
Adds comprehensive documentation around the debuggability features added to `torch.distributed` recently, including `monitored_barrier` and the `TORCH_DISTRIBUTED_DEBUG` environment variable.

![dist_one](https://user-images.githubusercontent.com/8039770/121102672-0f052180-c7b3-11eb-974c-81dbbe102cb6.png)
![dist_two](https://user-images.githubusercontent.com/8039770/121102734-39ef7580-c7b3-11eb-94f7-c75469351440.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59604

Reviewed By: jbschlosser, SciPioneer

Differential Revision: D28972444

Pulled By: rohan-varma

fbshipit-source-id: da5e8ee84f0d6f252c703c4d70ff2a0d5817cc4e
2021-06-08 23:52:19 -07:00
Jeffrey Wan
f52e202840 Add warning when accessing Tensor::grad() in the C++ API (#59362)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/35379

 - Adds  `retains_grad` attribute backed by cpp as a native function. The python bindings for the function are skipped to be consistent with `is_leaf`.
   - Tried writing it without a native function, but the jit test `test_tensor_properties` seems to require that it be a native function (or alternatively, maybe it could also work if we manually add a prim implementation?).
 - Python API now uses `retain_grad` implementation from cpp
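
A minimal sketch of the resulting behavior (hedged; the values here just illustrate the property described above):

```python
import torch

x = torch.ones(2, requires_grad=True)
y = x * 2                 # non-leaf tensor
print(y.retains_grad)     # False: .grad is not populated for non-leaf tensors
y.retain_grad()           # ask autograd to populate y.grad during backward
print(y.retains_grad)     # True
```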

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59362

Reviewed By: jbschlosser

Differential Revision: D28969298

Pulled By: soulitzer

fbshipit-source-id: 335f2be50b9fb870cd35dc72f7dadd6c8666cc02
2021-06-08 19:43:21 -07:00
James Reed
02d380450d [FX][docs][EZ] Fix link to fuser example (#59670)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59670

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D28975704

Pulled By: jamesr66a

fbshipit-source-id: 2fb759224b5b1ecc62c0ab26563d2a35ed422794
2021-06-08 17:32:55 -07:00
Vasiliy Kuznetsov
dafa4b3517 quantization: improve documentation on natively supported backends (#58925)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58925

Cleans up documentation on natively supported backends.  In particular:
* adds a section title
* deduplicates information about fbgemm/qnnpack
* clarifies what `torch.backends.quantized.engine` does
* adds code samples with default settings for `fbgemm` and `qnnpack`
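
A hedged sketch of what such a code sample looks like (not necessarily the exact snippet added to the docs):

```python
import torch

# Select the natively supported quantized backend before running quantized ops:
torch.backends.quantized.engine = 'fbgemm'     # server / x86
# torch.backends.quantized.engine = 'qnnpack'  # mobile / ARM
```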

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D28681840

Pulled By: vkuzo

fbshipit-source-id: 51a6ab66934f657553351f6c84a638fd5f7b4e12
2021-06-07 17:29:03 -07:00
Thomas J. Fan
6ff001c125 DOC Improve documentation for LayerNorm (#59178)
Summary:
Closes https://github.com/pytorch/pytorch/issues/51455

I think the current implementation is aggregating over the correct dimensions. The shape of `normalized_shape` is only used to determine the dimensions to aggregate over. The actual values of `normalized_shape` are used when `elementwise_affine=True` to initialize the weights and biases.

This PR updates the docstring to clarify how `normalized_shape` is used. Here is a short script comparing the TensorFlow and PyTorch implementations:

```python
import numpy as np
import torch
import torch.nn as nn

import tensorflow as tf
from tensorflow.keras.layers import LayerNormalization

rng = np.random.RandomState()
x = rng.randn(10, 20, 64, 64).astype(np.float32)
# make the data slightly non-trivial
x[:, :10, ...] = x[:, :10, ...] * 10 + 20
x[:, 10:, ...] = x[:, 10:, ...] * 30 - 100

# Tensorflow Layer norm
x_tf = tf.convert_to_tensor(x)
layer_norm_tf = LayerNormalization(axis=[-3, -2, -1], epsilon=1e-5)
output_tf = layer_norm_tf(x_tf)
output_tf_np = output_tf.numpy()

# PyTorch Layer norm
x_torch = torch.as_tensor(x)
layer_norm_torch = nn.LayerNorm([20, 64, 64], elementwise_affine=False)
output_torch = layer_norm_torch(x_torch)
output_torch_np = output_torch.detach().numpy()

# check tensorflow and pytorch
torch.testing.assert_allclose(output_tf_np, output_torch_np)

# manual computation
manual_output = ((x_torch - x_torch.mean(dim=(-3, -2, -1), keepdims=True)) /
                 (x_torch.var(dim=(-3, -2, -1), keepdims=True, unbiased=False) + 1e-5).sqrt())

torch.testing.assert_allclose(output_torch, manual_output)
```

To get to the layer normalization as shown here:

<img width="157" alt="Screen Shot 2021-05-29 at 2 13 52 PM" src="https://user-images.githubusercontent.com/5402633/120080691-1e37f100-c088-11eb-9060-4f263e4cd093.png">

One needs to pass in a `normalized_shape` of length `x.dim() - 1`, containing the sizes of the channel dimension and all spatial dimensions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59178

Reviewed By: ejguan

Differential Revision: D28931877

Pulled By: jbschlosser

fbshipit-source-id: 193e05205b9085bb190c221428c96d2ca29f2a70
2021-06-07 14:34:10 -07:00
anjali411
3607478ecd Conjugate View (#54987)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54987

Based off of ezyang (https://github.com/pytorch/pytorch/pull/44799) and bdhirsh (https://github.com/pytorch/pytorch/pull/43702) 's prototype:

Here's a summary of the changes in this PR:
This PR adds a new dispatch key called Conjugate. This enables us to make the conjugate operation a view and leverage the specialized library functions that fast-path the Hermitian operation (conj + transpose).

1. Conjugate operation will now return a view with conj bit (1) for complex tensors and returns self for non-complex tensors as before. This also means `torch.view_as_real` will no longer be a view on conjugated complex tensors and is hence disabled. To fill the gap, we have added `torch.view_as_real_physical` which would return the real tensor agnostic of the conjugate bit on the input complex tensor. The information about conjugation on the old tensor can be obtained by calling `.is_conj()` on the new tensor.
2. NEW API:
    a) `.conj()` -- now returning a view.
    b) `.conj_physical()` -- does the physical conjugate operation. If the conj bit for input was set, you'd get `self.clone()`, else you'll get a new tensor with conjugated value in its memory.
    c) `.conj_physical_()`, and `out=` variant
    d) `.resolve_conj()`  -- materializes the conjugation. returns self if the conj bit is unset, else returns a new tensor with conjugated values and conj bit set to 0.
    e) `.resolve_conj_()` in-place version of (d)
    f) `view_as_real_physical` -- as described in (1), it's functionally same as `view_as_real`, just that it doesn't error out on conjugated tensors.
    g) `view_as_real` -- existing function, but now errors out on conjugated tensors.
3. Conjugate Fallback
    a) Vast majority of PyTorch functions would currently use this fallback when they are called on a conjugated tensor.
    b) This fallback is well equipped to handle the following cases:
        - functional operation e.g., `torch.sin(input)`
        - Mutable inputs and in-place operations e.g., `tensor.add_(2)`
        - out-of-place operation e.g., `torch.sin(input, out=out)`
        - Tensorlist input args
        - NOTE: Meta tensors don't work with conjugate fallback.
4. Autograd
    a) `resolve_conj()` is an identity function w.r.t. autograd
    b) Everything else works as expected.
5. Testing:
    a) All method_tests run with conjugate view tensors.
    b) OpInfo tests that run with conjugate views
        - test_variant_consistency_eager/jit
        - gradcheck, gradgradcheck
        - test_conj_views (that only run for `torch.cfloat` dtype)

NOTE: functions like `empty_like`, `zero_like`, `randn_like`, `clone` don't propagate the conjugate bit.
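
A minimal sketch of the API described in (1) and (2) above:

```python
import torch

x = torch.tensor([1 + 1j, 2 - 2j])
y = x.conj()             # now a view; no data is copied
print(y.is_conj())       # True: the conj bit is set
z = y.resolve_conj()     # materializes the conjugation into new memory
print(z.is_conj())       # False
w = y.conj_physical()    # eager conjugation, regardless of the conj bit
```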

Follow up work:
1. conjugate view RFC
2. Add neg bit to re-enable view operation on conjugated tensors
3. Update linalg functions to call into specialized functions that fast path with the hermitian operation.

Test Plan: Imported from OSS

Reviewed By: VitalyFedyunin

Differential Revision: D28227315

Pulled By: anjali411

fbshipit-source-id: acab9402b9d6a970c6d512809b627a290c8def5f
2021-06-04 14:12:41 -07:00
Jeffrey Wan
4ae5764d47 Add is_inference to native functions (#58729)
Summary:
Adds `is_inference` as a native function w/ manual cpp bindings.
Also changes instances of `is_inference_tensor` to `is_inference` to be consistent with other properties such as `is_complex`.
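
A minimal hedged sketch of the renamed property:

```python
import torch

x = torch.ones(2)
print(x.is_inference())      # False
with torch.inference_mode():
    y = torch.ones(2)
print(y.is_inference())      # True: y was created under inference mode
```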

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58729

Reviewed By: mruberry

Differential Revision: D28874507

Pulled By: soulitzer

fbshipit-source-id: 0fa6bcdc72a4ae444705e2e0f3c416c1b28dadc7
2021-06-04 08:59:11 -07:00
Kushashwa Ravi Shrimali
44c20ce676 Alias for i0 to special namespace (#59141)
Summary:
See https://github.com/pytorch/pytorch/issues/50345

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59141

Reviewed By: ngimel

Differential Revision: D28784097

Pulled By: mruberry

fbshipit-source-id: 9b61a21906ef337292686fd40e328502a79e6f09
2021-06-01 23:04:09 -07:00
Thomas J. Fan
8af6281201 DOC Adds register_module_full_backward_hook into docs (#58954)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54443

Adds `register_module_full_backward_hook` into the index so it is rendered in the html docs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58954

Reviewed By: ngimel

Differential Revision: D28801816

Pulled By: jbschlosser

fbshipit-source-id: a2e737fe983e5d7e4e26d7639183bca34b571cb8
2021-06-01 15:47:10 -07:00
kshitij12345
fea7a79e0b [special] Add ndtr (#58126)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Plot:
![image](https://user-images.githubusercontent.com/19503980/117942099-54efd680-b328-11eb-8948-c3080779ce19.png)
https://colab.research.google.com/drive/1Of67A042rOImj8wrLF_fUTgoy_wVEOZS?usp=sharing

TODO:
* [x] Add docs (https://13385714-65600975-gh.circle-artifacts.com/0/docs/special.html#torch.special.ndtr)
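
A quick hedged sketch of the new function (`ndtr` computes the CDF of the standard normal distribution, so `ndtr(0) == 0.5`):

```python
import torch

torch.special.ndtr(torch.tensor([-1.0, 0.0, 1.0]))
# tensor([0.1587, 0.5000, 0.8413])
```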

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58126

Reviewed By: anjali411

Differential Revision: D28700957

Pulled By: mruberry

fbshipit-source-id: 5b9991e97ec1e8fd01518cc9d9849108d35fe406
2021-05-30 21:12:04 -07:00
kshitij12345
5c18994674 [special] Add i1 and i1e (#56352)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

* [x] Check Docs https://12721710-65600975-gh.circle-artifacts.com/0/docs/special.html
* [x] Investigate fp32 failure on CI?! (Fails on clang. Reproduced locally with clang-11)
* [ ] Kernel vs Composite?
* [x] Autograd for `i0e` for zero?
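
A quick hedged sketch of the new functions (`i1` is the first-order modified Bessel function of the first kind; `i1e` is its exponentially scaled variant, `i1e(x) = exp(-|x|) * i1(x)`):

```python
import torch

x = torch.tensor([0.5, 1.0, 2.0])
torch.special.i1(x)   # modified Bessel function of the first kind, order 1
torch.special.i1e(x)  # exponentially scaled variant; better behaved for large x
```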

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56352

Reviewed By: anjali411

Differential Revision: D28700888

Pulled By: mruberry

fbshipit-source-id: 91a3cbb94f5b8a3b063589ec38179848c11def83
2021-05-29 20:55:23 -07:00
Jeffrey Wan
9e60c7dee3 Add docstring for is_inference_mode_enabled (#59047)
Summary:
Fixes #{issue number}

Testing:
```
>>> import torch
>>> torch.is_inference_mode_enabled.__doc__
'\nis_inference_mode_enabled(input) -> (bool)\n\nReturns True if inference mode is currently enabled.\n\nArgs:\n    input (Tensor): the input tensor.\n'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59047

Reviewed By: ailzhang

Differential Revision: D28726991

Pulled By: soulitzer

fbshipit-source-id: c117c7d73e551a1b5f0e215f2aed528bf558ef7c
2021-05-26 19:27:33 -07:00
Joel Schlosser
a749e8edf5 Add UninitializedBuffer to nn docs (#59021)
Summary:
The `UninitializedBuffer` class was previously left out of `nn.rst`, so it was not included in the generated documentation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59021

Reviewed By: anjali411

Differential Revision: D28723044

Pulled By: jbschlosser

fbshipit-source-id: 71e15b0c7fabaf57e8fbdf7fbd09ef2adbdb36ad
2021-05-26 14:36:05 -07:00
Jeffrey Wan
a7a5992d7d Add no-grad inference mode note (#58513)
Summary:
Adds a note explaining the difference between several often conflated mechanisms in the autograd note
Also adds a link to this note from the docs in `grad_mode` and `nn.module`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58513

Reviewed By: gchanan

Differential Revision: D28651129

Pulled By: soulitzer

fbshipit-source-id: af9eb1749b641fc1b632815634eea36bf7979156
2021-05-25 13:06:54 -07:00
Adnios
09a8f22bf9 Add mish activation function (#58648)
Summary:
See issue: https://github.com/pytorch/pytorch/issues/58375
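
A quick hedged sketch of the new activation (`Mish(x) = x * tanh(softplus(x))`), exposed as `nn.Mish` and `F.mish`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.linspace(-2.0, 2.0, 5)
F.mish(x)       # functional form
nn.Mish()(x)    # module form
```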

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4
2021-05-25 10:36:21 -07:00
Joel Schlosser
c58709b7bb Helper function for skipping module parameter / buffer initialization (#57555)
Summary:
This PR introduces a helper function named `torch.nn.utils.skip_init()` that accepts a module class object + `args` / `kwargs` and instantiates the module while skipping initialization of parameter / buffer values. See discussion at https://github.com/pytorch/pytorch/issues/29523 for more context. Example usage:

```python
import torch

m = torch.nn.utils.skip_init(torch.nn.Linear, 5, 1)
print(m.weight)

m2 = torch.nn.utils.skip_init(torch.nn.Linear, 5, 1, device='cuda')
print(m2.weight)

m3 = torch.nn.utils.skip_init(torch.nn.Linear, in_features=5, out_features=1)
print(m3.weight)
```
```
Parameter containing:
tensor([[-3.3011e+28,  4.5915e-41, -3.3009e+28,  4.5915e-41,  0.0000e+00]],
       requires_grad=True)
Parameter containing:
tensor([[-2.5339e+27,  4.5915e-41, -2.5367e+27,  4.5915e-41,  0.0000e+00]],
       device='cuda:0', requires_grad=True)
Parameter containing:
tensor([[1.4013e-45, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]],
       requires_grad=True)
```

Bikeshedding on the name / namespace is welcome, as well as comments on the design itself - just wanted to get something out there for discussion.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57555

Reviewed By: zou3519

Differential Revision: D28640613

Pulled By: jbschlosser

fbshipit-source-id: 5654f2e5af5530425ab7a9e357b6ba0d807e967f
2021-05-24 11:28:32 -07:00
Rohan Varma
071d49a970 Document monitored barrier (#58322)
Summary:
Will not land before the release, but it would be good to have this function documented in master for its use in distributed debuggability.
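
A minimal hedged sketch of the function being documented (assumes a GLOO process group has already been initialized via `dist.init_process_group`):

```python
import datetime
import torch.distributed as dist

# Rank 0 monitors the barrier; if some rank fails to reach it within the
# timeout, an error naming the unresponsive rank(s) is raised instead of
# the job hanging indefinitely.
dist.monitored_barrier(timeout=datetime.timedelta(seconds=30))
```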

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58322

Reviewed By: SciPioneer

Differential Revision: D28595405

Pulled By: rohan-varma

fbshipit-source-id: fb00fa22fbe97a38c396eae98a904d1c4fb636fa
2021-05-21 19:04:57 -07:00
Michael Carilli
e8c6a65074 Adds grid_sampler to autocast fp32 list for 1.9 (#58679)
Summary:
Temporary fix for https://github.com/pytorch/pytorch/issues/42218.

Numerically, grid_sampler should be fine in fp32 or fp16, so grid_sampler really belongs on the promote list. But performance-wise, the native grid_sampler backward kernels use gpuAtomicAdd, which is notoriously slow in fp16. So the simplest functionality fix is to put grid_sampler on the fp32 list.

In https://github.com/pytorch/pytorch/pull/58618 I implement the right long-term fix (refactoring kernels to use fp16-friendly fastAtomicAdd and moving grid_sampler to the promote list). But that's more invasive, and for 1.9 ngimel says this simple temporary fix is preferred.
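
A hedged sketch of the effect of this change (requires CUDA; under autocast, ops on the fp32 list run in float32 even for fp16 inputs):

```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 1, 4, 4, device="cuda", dtype=torch.float16)
grid = torch.randn(1, 4, 4, 2, device="cuda", dtype=torch.float16)
with torch.cuda.amp.autocast():
    out = F.grid_sample(inp, grid, align_corners=False)
print(out.dtype)  # expected after this change: torch.float32
```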

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58679

Reviewed By: soulitzer

Differential Revision: D28576559

Pulled By: ngimel

fbshipit-source-id: d653003f37eaedcbb3eaac8d7fec26c343acbc07
2021-05-20 14:05:09 -07:00
abladawood
1fc3e1e1fb Abladawood patch 1 (#58496)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58496

Reviewed By: soulitzer

Differential Revision: D28562333

Pulled By: ailzhang

fbshipit-source-id: aa9fcc03ba7ffe03db6cc5da353d37d679a0a160
2021-05-20 10:32:18 -07:00
Gary Miguel
703cfdc9ed [JIT] improve documentation (#57991)
Summary:
* Fix lots of links.
* Minor improvements for consistency, clarity or grammar.
* Update jit_python_reference to note the limitations on __exit__.
  (Related to https://github.com/pytorch/pytorch/issues/41420).
* Fix a comment in exit_transforms.cpp: removed the word "not" which
  made the comment say the opposite of the truth.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57991

Reviewed By: malfet

Differential Revision: D28522247

Pulled By: SplitInfinity

fbshipit-source-id: fc63a59d19ea6c89f957c9f7d451be17d1c5fc91
2021-05-19 11:47:32 -07:00
Horace He
79a258f448 s/foward/forward/g (#58497)
Summary:
Annoying typo.

Prompted by these profiling results: https://github.com/pytorch/pytorch/issues/56419#issuecomment-825787828

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58497

Reviewed By: malfet

Differential Revision: D28521081

Pulled By: Chillee

fbshipit-source-id: ab91a2e167dd7d3387fd56106a6cff81f7a32f10
2021-05-19 11:42:42 -07:00
Richard Zou
e059fd40a8 Remove master documentation from being indexable by search engines (#58056)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58056

This PR addresses an action item in #3428: disabling search engine
indexing of master documentation. This is desirable because we want to
direct users to our stable documentation (instead of master
documentation) because they are more likely to have a stable version of
PyTorch installed.

Test Plan:
1. run `make html`, check that the noindex tags are there
2. run `make html-stable`, check that the noindex tags aren't there

Reviewed By: bdhirsh

Differential Revision: D28490504

Pulled By: zou3519

fbshipit-source-id: 695c944c4962b2bd484dd7a5e298914a37abe787
2021-05-18 06:20:09 -07:00
Rohan Varma
52bb8120b8 Mention distributed profiling in documentation (#58286)
Summary:
Added a simple section indicating that distributed profiling is expected to work similarly to profiling other torch operators, and is supported for all communication backends out of the box.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58286

Reviewed By: bdhirsh

Differential Revision: D28436489

Pulled By: rohan-varma

fbshipit-source-id: ce1905a987c0ede8011e8086a2c30edc777b4a38
2021-05-14 09:43:00 -07:00
Jeffrey Wan
e1bb9d2d99 Reimplement spectral_norm using new parametrization functionality (#57784)
Summary:
Adds a new file under `torch/nn/utils/parametrizations.py` which should contain all the parametrization implementations

For spectral_norm we add the `SpectralNorm` module which can be registered using `torch.nn.utils.parametrize.register_parametrization` or using a wrapper: `spectral_norm`, the same API the old implementation provided.

Most of the logic is borrowed from the old implementation:
 - Just like the old implementation, there are cases where retrieving the weight should perform another power iteration (thus updating the weight) and cases where it shouldn't. For example, in eval mode (`self.training=False`) we do not perform power iteration.

There are also some differences/difficulties with the new implementation:
 - Using the new parametrization functionality as-is, there doesn't seem to be a good way to tell whether a 'forward' call was the result of the parametrization being unregistered (with `leave_parametrized=True`) or of the injected property's getter being invoked. The issue is that we want to perform power iteration in the latter case but not the former, but we don't have this control as-is. So, in this PR I modified the parametrization functionality to change the module to eval mode before triggering its forward call.
 - Updates the vectors based on the weight on initialization to fix https://github.com/pytorch/pytorch/issues/51800 (this avoids silently updating weights in eval mode). This also means that we perform twice as many power iterations by the first forward.
 - right_inverse is just the identity for now, but maybe it should assert that the passed value already satisfies the constraints.
 - So far, all the old spectral_norm tests have been cloned, but maybe we don't need so much testing now that the core functionality is already well tested. A minimal usage sketch follows.
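
The sketch below assumes the module path `torch/nn/utils/parametrizations.py` introduced by this PR:

```python
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

# Same wrapper surface as the old spectral_norm, now backed by
# register_parametrization.
m = spectral_norm(nn.Linear(20, 40))
print(m.parametrizations.weight)   # the registered SpectralNorm parametrization
w = m.weight                       # accessing the weight runs the parametrization
```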

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57784

Reviewed By: ejguan

Differential Revision: D28413201

Pulled By: soulitzer

fbshipit-source-id: e8f1140f7924ca43ae4244c98b152c3c554668f2
2021-05-13 14:16:13 -07:00
Ivan Yashchuk
c1430c3425 Add torch.linalg.inv_ex without checking for errors by default (#58039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58039

The new function has the following signature
`inv_ex(Tensor input, *, bool check_errors=False) -> (Tensor inverse, Tensor info)`.
When `check_errors=True`, an error is thrown if the matrix is not invertible; when `check_errors=False`, responsibility for checking the result is on the user.

`linalg_inv` is implemented using calls to `linalg_inv_ex` now.

Resolves https://github.com/pytorch/pytorch/issues/25095
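
A minimal sketch of the new API, per the signature above:

```python
import torch

A = torch.randn(3, 3)
inverse, info = torch.linalg.inv_ex(A)   # no error checking by default
if info != 0:                            # a nonzero info signals a singular input
    raise RuntimeError("matrix is not invertible")
```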

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28405148

Pulled By: mruberry

fbshipit-source-id: b8563a6c59048cb81e206932eb2f6cf489fd8531
2021-05-13 09:42:15 -07:00
Jeffrey Wan
e71b526e7e Add inference mode python bindings and tests (#58045)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56608

 - Adds binding to the `c10::InferenceMode` RAII class in `torch._C._autograd.InferenceMode` through pybind. Also binds the `torch.is_inference_mode` function.
 - Adds context manager `torch.inference_mode` to manage an instance of `c10::InferenceMode` (global).  Implemented in `torch.autograd.grad_mode.py` to reuse the `_DecoratorContextManager` class.
 - Adds some tests based on those linked in the issue + several more for just the context manager
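
A minimal hedged sketch of the context manager and bindings described above:

```python
import torch

x = torch.ones(2, requires_grad=True)
with torch.inference_mode():
    print(torch.is_inference_mode_enabled())  # True
    y = x * 2                                 # no autograd graph is recorded
print(y.requires_grad)                        # False: y is an inference tensor
```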

Issues/todos (not necessarily for this PR):
- Improve short inference mode description
- Small example
- Improved testing since there is no direct way of checking TLS/dispatch keys

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58045

Reviewed By: agolynski

Differential Revision: D28390595

Pulled By: soulitzer

fbshipit-source-id: ae98fa036c6a2cf7f56e0fd4c352ff804904752c
2021-05-13 08:55:35 -07:00
Alexander Golynski
bc30c3165c Update docs for get_future support (#58107)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58107

Test Plan: Imported from OSS

Reviewed By: SciPioneer

Differential Revision: D28387374

Pulled By: agolynski

fbshipit-source-id: 70052afbb0b07ba341ea55f7ec30f7d9759b7bd4
2021-05-12 18:29:28 -07:00
Can Balioglu
028f2f62ac [torch/elastic] Update the rendezvous docs (#58160)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58160

This PR updates the Torch Distributed Elastic documentation with references to the new `c10d` backend.
ghstack-source-id: 128783809

Test Plan: Visually verified the correct rendering.

Reviewed By: tierex

Differential Revision: D28384996

fbshipit-source-id: a40b0c37989ce67963322565368403e2be5d2592
2021-05-12 16:54:28 -07:00
Michael Suo
01d0eb9dac [package] Add an intern keyword (#57341)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57341

Require that users be explicit about what they are going to be
interning. There are a lot of changes that are enabled by this. The new
overall scheme is:

PackageExporter maintains a dependency graph. Users can add to it,
either explicitly (by issuing a `save_*` call) or implicitly (through
dependency resolution). Users can also specify what action to take when
PackageExporter encounters a module (deny, intern, mock, extern).

Nothing (except pickles, though that can be changed with a small amount
of work) is written to the zip archive until we are finalizing the
package. At that point, we consult the dependency graph and write out
the package exactly as it tells us to.

This accomplishes two things:
1. We can gather up *all* packaging errors instead of showing them one at a time.
2. We require that users be explicit about what's going in packages, which is a common request.
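
A hedged sketch of the resulting workflow (the module patterns and file names here are hypothetical):

```python
import torch
from torch.package import PackageExporter

model = torch.nn.Linear(2, 2)
with PackageExporter("my_package.pt") as exporter:
    exporter.intern("my_project.**")    # package these modules' source
    exporter.extern("numpy.**")         # resolve against the loading environment
    exporter.mock("matplotlib.**")      # replace with a stub at load time
    exporter.save_pickle("model", "model.pkl", model)
# the dependency graph is consulted and the archive written on exit
```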

Differential Revision: D28114185

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Pulled By: suo

fbshipit-source-id: fa1abf1c26be42b14c7e7cf3403ecf336ad4fc12
2021-05-12 16:22:43 -07:00
Yi Wang
581bf01074 [Gradient Compression] Remove unnecessary warning on the rst file and the check on C++ version (#58170)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58170

Comm hooks can now be supported on the MPI and GLOO backends besides NCCL, so these warnings and this check are no longer needed.
ghstack-source-id: 128799123

Test Plan: N/A

Reviewed By: agolynski

Differential Revision: D28388861

fbshipit-source-id: f56a7b9f42bfae1e904f58cdeccf7ceefcbb0850
2021-05-12 14:15:10 -07:00
albanD
cbd1227809 Add a note in the parametrize doc about the naming choice (#58142)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58142

Reviewed By: agolynski

Differential Revision: D28386655

Pulled By: albanD

fbshipit-source-id: c2793ac377ef7082c1840e1a50604da3ff9c61ac
2021-05-12 13:15:56 -07:00
Jithun Nair
ab6b5fa036 Add HIP (ROCm) semantics doc (#57871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57871

Reviewed By: agolynski

Differential Revision: D28385510

Pulled By: malfet

fbshipit-source-id: 9cf69e52d026a1cf74cc12d8727ca17ae026235e
2021-05-12 12:34:07 -07:00
PCTURBOX\anton
5ea87f9c24 Grammatically updated the tech docs (complex_numbers.rst) (#57540)
Summary:
Small grammatical change in complex_numbers.rst.
You can see the changes in the screenshot below:
![Capture](https://user-images.githubusercontent.com/38073192/117013956-01aed000-acf9-11eb-9d17-1e369de68585.PNG)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57540

Reviewed By: albanD

Differential Revision: D28233650

Pulled By: mrshenli

fbshipit-source-id: 0cec7bb1f4bd61e929e2a8fc5292bc20b77aee35
2021-05-12 09:05:18 -07:00
Luca Wehrstedt
d623fb7e04 Add a disclaimer about limited CUDA support in RPC (#58023)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58023

Clearly state that some features of RPC aren't yet compatible with CUDA.
ghstack-source-id: 128688856

Test Plan: None

Reviewed By: agolynski

Differential Revision: D28347605

fbshipit-source-id: e8df9a4696c61a1a05f7d2147be84d41aeeb3b48
2021-05-12 00:11:22 -07:00
Ilqar Ramazanli
8b816e9010 To implement gradient for Pytorch (#54617)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56129
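
A quick hedged sketch of the new function (NumPy-style central differences, one-sided at the boundaries):

```python
import torch

t = torch.tensor([1.0, 2.0, 4.0, 8.0])
(dt,) = torch.gradient(t)   # returns a tuple, one tensor per dimension
print(dt)                   # expected: tensor([1.0000, 1.5000, 3.0000, 4.0000])
```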

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54617

Reviewed By: anjali411

Differential Revision: D28057452

Pulled By: iramazanli

fbshipit-source-id: 9bd86679282d34f5e5393e6447121586517eb4f0
2021-05-11 18:52:20 -07:00
Kimish Patel
b7d674eb21 Revert D28331386: [pytorch][PR] [torch/elastic] Update the rendezvous docs
Test Plan: revert-hammer

Differential Revision:
D28331386 (e4418b67c7)

Original commit changeset: 95dd32146222

fbshipit-source-id: 5522d4a09bc06ac42943eec9aa8bf5292cc778b2
2021-05-11 18:10:46 -07:00
Ivan Yashchuk
a90c229900 Remove the BETA status for torch.linalg (#58043)
Summary:
We are ready to move to the new stage for our `torch.linalg` module, which is stable (or STABLE?).

Ref. https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58043

Reviewed By: ngimel

Differential Revision: D28356172

Pulled By: mruberry

fbshipit-source-id: e2c1effa79b9635b2ef0a820a03a0685105042bd
2021-05-11 16:11:48 -07:00
Gary Miguel
f9c8b7f1a8 [FX][docs] minor fixes (#58085)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58085

Reviewed By: mruberry

Differential Revision: D28364553

Pulled By: jamesr66a

fbshipit-source-id: 0d953672de9a86ecf5b1900b22e6ddef850dbe8f
2021-05-11 15:35:49 -07:00
Can Balioglu
e4418b67c7 [torch/elastic] Update the rendezvous docs (#57973)
Summary:
This PR updates the rendezvous documentation for the Torch Distributed Elastic section of PyTorch docs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57973

Reviewed By: kiukchung

Differential Revision: D28331386

Pulled By: cbalioglu

fbshipit-source-id: 95dd32146222aaeff246bd3c3d2caf0036a9011b
2021-05-11 15:32:50 -07:00
Luca Wehrstedt
3e46d6c9e4 Update docs to mention CUDA support for Future (#50048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50048

To reflect the many changes introduced recently.

In my mind, CUDAFuture should be considered a "private" subclass, which in practice should always be returned as a downcast pointer to an ivalue::Future. Hence, we should document the CUDA behavior in the superclass, even if it's CUDA-agnostic, since that's the interface the users will see also for CUDA-enabled futures.
ghstack-source-id: 128640983

Test Plan: Built locally and looked at them.

Reviewed By: mrshenli

Differential Revision: D25757474

fbshipit-source-id: c6f66ba88fa6c4fc33601f31136422d6cf147203
2021-05-11 08:26:33 -07:00
Yi Wang
38500d5d7b [RPC Framework] Move the annotation w/ bold effect out of the quotes (#57965)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57965

The bold effect does not work under quotes, so move it out.
ghstack-source-id: 128570357

Test Plan:
locally view

{F614715259}

Reviewed By: rohan-varma

Differential Revision: D28329694

fbshipit-source-id: 299b427f4c0701ba70c84148f65203a6e2d6ac61
2021-05-10 16:51:23 -07:00
nikithamalgi
bf053a1296 Fix hasattr support type (#57950)
Summary:
`hasattr` is partially supported. This PR fixes that in the builtin table.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57950

Reviewed By: pbelevich

Differential Revision: D28329005

Pulled By: nikithamalgifb

fbshipit-source-id: c4cfba9badcc8f7cbc8250a5c21dfb62b35a83fc
2021-05-10 12:21:56 -07:00
Heitor Schueroff
4cf2c646c2 Added torch.linalg.matrix_norm (#57127)
Summary:
This PR is focused on the API for `linalg.matrix_norm` and delegates computations to `linalg.norm` for the moment.

The main difference between the norms is when `dim=None`. In this case
- `linalg.norm` will compute a vector norm on the flattened input if `ord=None`, otherwise it requires the input to be either 1D or 2D in order to disambiguate between vector and matrix norm
- `linalg.vector_norm` will flatten the input
- `linalg.matrix_norm` will compute the norm over the last two dimensions, treating the input as batch of matrices

In future PRs, the computations will be moved to `torch.linalg.matrix_norm`, and `torch.norm` and `torch.linalg.norm` will delegate computations to either `linalg.vector_norm` or `linalg.matrix_norm` based on the arguments provided.
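
A minimal hedged sketch of the `dim=None` behavior described above:

```python
import torch

A = torch.randn(5, 3, 4)        # a batch of five 3x4 matrices
torch.linalg.matrix_norm(A)     # norm over the last two dims -> shape (5,)
torch.linalg.vector_norm(A)     # flattens the input -> a single scalar
```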

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57127

Reviewed By: mrshenli

Differential Revision: D28186736

Pulled By: mruberry

fbshipit-source-id: 99ce2da9d1c4df3d9dd82c0a312c9570da5caf25
2021-05-09 04:50:33 -07:00
Yi Wang
94080f45ab [RPC Framework] Update rpc.rst (#57876)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57876

ghstack-source-id: 128484049

Test Plan: N/A

Reviewed By: pritamdamania87

Differential Revision: D28305719

fbshipit-source-id: cc0d79fb46077a0d1cf6026c373893e7d3b7761e
2021-05-07 19:42:29 -07:00
Holly Sweeney
626ae7f036 Copy edit of TorchScript Language Reference (#57694)
Summary:
Initial copy edit of the file.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57694

Reviewed By: malfet, ngimel

Differential Revision: D28289209

Pulled By: holly1238

fbshipit-source-id: 7035d6790767a2f758e6019ae63df16537ef2725
2021-05-07 12:17:32 -07:00
Philip Meier
0dd0151c64 add torch.testing to docs (#57247)
Summary:
Redo of https://github.com/pytorch/pytorch/issues/56373 out of stack.

 ---

To reviewers: **please be nitpicky**. I've read this so often that I probably missed some typos and inconsistencies.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57247

Reviewed By: albanD

Differential Revision: D28247402

Pulled By: mruberry

fbshipit-source-id: 71142678ee5c82cc8c0ecc1dad6a0b2b9236d3e6
2021-05-07 09:16:39 -07:00
Nicolas Hug
1fc89d9ffc Use proper Google Analytics id (#56578)
Summary:
This PR fixes the GA id and relies on `pytorch-sphinx-theme`  to set the GA script instead of hard-coding it (this is supported since https://github.com/pytorch/pytorch_sphinx_theme/pull/110 was merged).

Similar PRs were opened and merged in torchvision/audio/text, e.g.: https://github.com/pytorch/vision/pull/3700

CC brianjo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56578

Reviewed By: mrshenli

Differential Revision: D28199244

Pulled By: ranman

fbshipit-source-id: a20b7fd1b1da3ebff491286c3eeb1410f3c80670
2021-05-04 13:23:16 -07:00
Kiuk Chung
a80b215a9a [1/n][torch/elastic] Move torchelastic docs *.rst (#148)
Summary:
Pull Request resolved: https://github.com/pytorch/elastic/pull/148

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56811

Moves the docs sphinx `*.rst` files from the torchelastic repository to torch. Note: this only moves the rst files; the next step is to link them to the main pytorch `index.rst` and write a new `examples.rst`

Reviewed By: H-Huang

Differential Revision: D27974751

fbshipit-source-id: 8ff9f242aa32e0326c37da3916ea0633aa068fc5
2021-05-04 00:57:56 -07:00
Ilqar Ramazanli
15975cf6a6 To add priority of int/int? over int[] on signature matching and adding {h,v,d}split methods (#57346)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54555

It has been discussed in the issue https://github.com/pytorch/pytorch/issues/54555 that the {h,v,d}split methods unexpectedly match an argument of a single int[] when they are expected to match a single argument of int. The same unexpected behavior can happen in other functions/methods that accept both int[] and int? as single-argument signatures.

In this PR we solve this problem by giving higher priority to int/int? arguments over int[] when sorting signatures.

We also add the {h,v,d}split methods here, which helped us discover this unexpected behavior (a short sketch follows).
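
```python
import torch

x = torch.arange(16.0).reshape(4, 4)
x.hsplit(2)        # a single int: split the columns into 2 equal sections
x.hsplit([1, 3])   # an int[]: split before column indices 1 and 3
```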

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57346

Reviewed By: ezyang

Differential Revision: D28121234

Pulled By: iramazanli

fbshipit-source-id: 851cf40b370707be89298177b51ceb4527f4b2d6
2021-05-03 18:52:41 -07:00
Ivan Yashchuk
75a2a92b02 Add torch.linalg.cholesky_ex without checking for errors by default (#56724)
Summary:
The new function has the following signature `cholesky_ex(Tensor input, *, bool check_errors=False) -> (Tensor L, Tensor infos)`. When `check_errors=True`, an error is thrown if the decomposition fails; when `check_errors=False`, responsibility for checking the decomposition is on the user.

When `check_errors=False`, we don't have host-device memory transfers for checking the values of the `info` tensor.

Rewrote the internal code for `torch.linalg.cholesky`. Added `cholesky_stub` dispatch. `linalg_cholesky` is implemented using calls to `linalg_cholesky_ex` now.
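
A minimal sketch of the new API, per the signature above:

```python
import torch

A = torch.randn(3, 3)
A = A @ A.transpose(-2, -1) + torch.eye(3)   # make A positive definite
L, info = torch.linalg.cholesky_ex(A)
assert info == 0                             # zero means the decomposition succeeded
```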

Resolves https://github.com/pytorch/pytorch/issues/57032.

Ref. https://github.com/pytorch/pytorch/issues/34272, https://github.com/pytorch/pytorch/issues/47608, https://github.com/pytorch/pytorch/issues/47953

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56724

Reviewed By: ngimel

Differential Revision: D27960176

Pulled By: mruberry

fbshipit-source-id: f05f3d5d9b4aa444e41c4eec48ad9a9b6fd5dfa5
2021-05-01 18:48:27 -07:00
kshitij12345
d4ddb47719 [special] Add xlog1py (#55138)
Summary:
Reference : https://github.com/pytorch/pytorch/issues/50345

* [x] Check Rendered Document (https://12494173-65600975-gh.circle-artifacts.com/0/docs/special.html#torch.special.xlog1py)
* [x] Tests in Binary Ufunc
* [x] OpInfo
* [x] Structured Kernel
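
A quick hedged sketch of the new function (`xlog1py(x, y)` computes `x * log1p(y)`, following the SciPy convention that the result is 0 wherever `x == 0`):

```python
import torch

x = torch.tensor([0.0, 1.0, 2.0])
y = torch.tensor([0.5, 0.5, 0.5])
torch.special.xlog1py(x, y)   # tensor([0.0000, 0.4055, 0.8109])
```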

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55138

Reviewed By: ngimel

Differential Revision: D27961461

Pulled By: mruberry

fbshipit-source-id: 30a8f41970a829bf50254aadf5615e8ce4148c7e
2021-04-30 05:51:13 -07:00
Yanan Cao
2aadeac0ff Remove duplicate entry for filter in language ref v2 (#57154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57154

Reviewed By: zou3519

Differential Revision: D28061690

Pulled By: gmagogsfm

fbshipit-source-id: b895238c0425cc6b60f5e19c67fc5bc6e0115d7f
2021-04-29 04:52:50 -07:00
Lillian Johnson
31e59c3869 torch.package change Folder to Directory and add doc strings (#56925)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56925

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D28002145

Pulled By: Lilyjjo

fbshipit-source-id: 6265970202d1530c4fb7ea10011b0e09094037d5
2021-04-28 13:03:12 -07:00
Nikitha Malgi
ce79bd255d Fix doc issues (#57153)
Summary:
Fixes inconsistencies in the TorchScript Language reference.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57153

Reviewed By: zou3519, gmagogsfm

Differential Revision: D28061449

Pulled By: nikithamalgifb

fbshipit-source-id: a055c7b1417391afe00ec0b35e1042acb049feed
2021-04-28 11:47:10 -07:00
albanD
d16ed1ee8a Add first draft of gradcheck note (#55966)
Summary:
You can find the latest rendered version in the `python_doc_build` CI job below, in the artifact tab of that build on circle CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55966

Reviewed By: H-Huang

Differential Revision: D28032446

Pulled By: albanD

fbshipit-source-id: 227ad37b03d39894d736c19cae3195b4d56fc62f
2021-04-27 14:33:42 -07:00
Akifumi Imanishi
9da0f2e95e Support __pos__ and positive (#55891)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55604.

This PR implements `torch.Tensor.__pos__` and `torch.positive` for compatibility with NumPy’s interface. (cc: mruberry, rgommers, emcastillo and kmaehashi)
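
A quick hedged sketch of the new surface:

```python
import torch

t = torch.tensor([1, -2, 3])
+t                  # calls t.__pos__(); returns the input unchanged
torch.positive(t)   # same behavior, NumPy-compatible
```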

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55891

Reviewed By: H-Huang

Differential Revision: D28025928

Pulled By: mruberry

fbshipit-source-id: e43e329a802f31bf8805f6efab5c2c7ef34c88b9
2021-04-27 13:23:59 -07:00
lezcano
d578e8cfa2 Improved docs for torch.linalg (#56265)
Summary:
This PR tries to make the docs of `torch.linalg` have/be:
- More uniform notation and structure for every function.
- More uniform use of back-quotes and the `:attr:` directive
- More readable for a non-specialised audience through explanations of the form that factorisations take and of when it is beneficial to use which arguments in some solvers.
- More connected among the different functions through the use of  the `.. seealso::` directive.
- More information on when do gradients explode / when is a function silently returning a wrong result / when things do not work in general

I tried to follow the structure of "one short description and then the rest" to be able to format the docs like those of `torch.` or `torch.nn`. I did not do that yet, as I am waiting for the green light on this idea:
https://github.com/pytorch/pytorch/issues/54878#issuecomment-816636171

What this PR does not do:
- Clean the documentation of other functions that are not in the `linalg` module (although I started doing this for `torch.svd`, but then I realised that this PR would touch way too many functions).

Fixes https://github.com/pytorch/pytorch/issues/54878

cc mruberry IvanYashchuk

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56265

Reviewed By: H-Huang

Differential Revision: D27993986

Pulled By: mruberry

fbshipit-source-id: adde7b7383387e1213cc0a6644331f0632b7392d
2021-04-27 11:16:09 -07:00
Yukio Siraichi
9d54475032 Hide module paths leaking in the documentation. (#54585)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54354

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54585

Reviewed By: H-Huang

Differential Revision: D28027037

Pulled By: mruberry

fbshipit-source-id: 219874e143221f5e8349d007f88464e0be1a6243
2021-04-27 10:58:01 -07:00
iramazanli
3e006fc57e Adding hsplit,vsplit and dsplit methods (#53536)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53536

Reviewed By: albanD

Differential Revision: D27938880

Pulled By: iramazanli

fbshipit-source-id: f741119517783ec2bafa296622ee518b587dd127
2021-04-26 09:39:09 -07:00
IceTDrinker
689d3a70aa Fix broken link to fx graph quant guide in quantization.rst (#56776)
Summary:
No outstanding issue; can create one if needed.

Was looking for that resource and it was moved without fixing the documentation.

Cheers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56776

Reviewed By: heitorschueroff

Differential Revision: D27967020

Pulled By: ezyang

fbshipit-source-id: a5cd7d554da43a9c9e44966ccd0b0ad9eef2948c
2021-04-26 08:22:28 -07:00
Ilqar Ramazanli
70d9be0f42 Replace duplicative s with alpha (#56804)
Summary:
It is always easier to read a document when different objects / concepts are denoted with different variables / representations.
In this PR we make sure that, in the [complex autograd](https://pytorch.org/docs/master/notes/autograd.html#autograd-for-complex-numbers) documentation, the variables for the output and the step size differ.

Fixes https://github.com/pytorch/pytorch/issues/53633

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56804

Reviewed By: anjali411

Differential Revision: D27989959

Pulled By: iramazanli

fbshipit-source-id: c271590ee744c8aeeff62bfaa2295429765ef64e
2021-04-25 16:27:09 -07:00
Ilqar Ramazanli
d1fe68e70b To add single and chained learning schedulers to docs (#56705)
Summary:
In the optimizer documentation, many of the learning rate scheduler [examples](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate) are provided according to a generic template. In this PR we provide a precise, simple use-case example showing how to use learning rate schedulers. Moreover, in a follow-up example we show how to chain two schedulers one after the other.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56705

Reviewed By: ezyang

Differential Revision: D27966704

Pulled By: iramazanli

fbshipit-source-id: f32b2d70d5cad7132335a9b13a2afa3ac3315a13
2021-04-23 09:36:00 -07:00
Stas Bekman
1dbbbbe904 [doc] FX Graph Mode Quantization - fix preamble (#52192)
Summary:
The preamble here is misformatted, at the least, and is hard to make sense of: https://pytorch.org/docs/master/quantization.html#prototype-fx-graph-mode-quantization

This PR is trying to make things easier to understand.

As I'm new to this please verify that my modifications remain in line with what may have been meant originally.

Thanks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52192

Reviewed By: ailzhang

Differential Revision: D27941730

Pulled By: vkuzo

fbshipit-source-id: 6c4bbf7c87d8fb87ab5d588b690a72045752e47a
2021-04-22 10:20:31 -07:00
Erjia Guan
8cf85a1152 [DataLoader][doc] Randomness for base_seed generator and NumPy seed (#56528)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56528

Tried to search across internal and external usage of DataLoader. People haven't started to use `generator` for `DataLoader`.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27908487

Pulled By: ejguan

fbshipit-source-id: 14c83ed40d4ba4dc988b121968a78c2732d8eb93
2021-04-22 09:40:45 -07:00
M.L. Croci
1f0223d6bb Fix bug in gaussian_nll_loss (#56469)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53964. cc albanD almson

## Major changes:
- Overhauled the actual loss calculation so that the shapes are now correct (in functional.py)
- added the missing doc in nn.functional.rst

## Minor changes (in functional.py):
- I removed the previous check on whether input and target were the same shape. This is to allow for broadcasting, say when you have 10 predictions that all have the same target.
- I added some comments to explain each shape check in detail. Let me know if these should be shortened/cut.

Screenshots of updated docs attached.
Let me know what you think, thanks!

## Edit: Description of change of behaviour (affecting BC):
The backwards-compatibility is only affected for the `reduction='none'` mode. This was the source of the bug. For tensors with size (N, D), the old returned loss had size (N), as incorrect summation was happening. It will now have size (N, D) as expected.

### Example
Define input tensors, all with size (2, 3).
`input = torch.tensor([[0., 1., 3.], [2., 4., 0.]], requires_grad=True)`
`target = torch.tensor([[1., 4., 2.], [-1., 2., 3.]])`
`var = 2*torch.ones(size=(2, 3), requires_grad=True)`

Initialise loss with reduction mode 'none'. We expect the returned loss to have the same size as the input tensors, (2, 3).
`loss = torch.nn.GaussianNLLLoss(reduction='none')`

Old behaviour:
`print(loss(input, target, var)) `
`# Gives tensor([3.7897, 6.5397], grad_fn=<MulBackward0>). This has size (2).`

New behaviour:
`print(loss(input, target, var)) `
`# Gives tensor([[0.5966, 2.5966, 0.5966], [2.5966, 1.3466, 2.5966]], grad_fn=<MulBackward0>)`
`# This has the expected size, (2, 3).`

To recover the old behaviour, sum along all dimensions except for the 0th:
`print(loss(input, target, var).sum(dim=1))`
`# Gives tensor([3.7897, 6.5397], grad_fn=<SumBackward1>).`

![doc1](https://user-images.githubusercontent.com/26558092/115391089-f7f47b00-a1d6-11eb-8726-e4da9057aee0.png)
![doc2](https://user-images.githubusercontent.com/26558092/115391094-f925a800-a1d6-11eb-954b-afd187f42bc7.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56469

Reviewed By: jbschlosser, agolynski

Differential Revision: D27894170

Pulled By: albanD

fbshipit-source-id: 197890189c97c22109491c47f469336b5b03a23f
2021-04-22 07:43:48 -07:00
Meghan Lele
eac082891f [package] Massage exporter docstrings (#56547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56547

**Summary**
This commit tweaks the docstrings of `PackageExporter` so that they look
nicer on the docs website.

**Test Plan**
Continuous integration.

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D27912965

Pulled By: SplitInfinity

fbshipit-source-id: 38c0a715365b8cfb9eecdd1b38ba525fa226a453
2021-04-21 14:06:54 -07:00
Nikitha Malgi
c65284aa07 Remove caption for Lang Reference (#56526)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56526

Test Plan: Imported from OSS

Reviewed By: navahgar, gmagogsfm

Differential Revision: D27891208

Pulled By: nikithamalgifb

fbshipit-source-id: 50da4f08a01b5407c9a1ead535539a5a26aea0f7
2021-04-20 14:33:42 -07:00
UnmeshPadhye
eacf6f1b51 Updated the tech docs to be consistent with other two descriptions (#56338)
Summary:
Updated the Beta channel description to be consistent with the other two channels (Stable, Prototype).

The screenshot attached is for reference before changes.

![Screenshot 2021-04-18 12-36-55](https://user-images.githubusercontent.com/20245964/115137303-0c077380-a043-11eb-9532-c46486e8a75a.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56338

Reviewed By: heitorschueroff

Differential Revision: D27854350

Pulled By: bdhirsh

fbshipit-source-id: a21208c11242e84de313d5b11269264756bf9029
2021-04-20 09:00:42 -07:00
Zhengxu Chen
8176ab6ca0 [JIT] Put explicit error message on class attribute accesses. (#55723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55723

Resolving https://github.com/pytorch/pytorch/issues/51139

Test Plan:
python test/test_jit.py TestClassType.test_unresolved_attributes

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27691960

fbshipit-source-id: 1d078a4ab25af1a73109ca6ef0333a67a634bff6
2021-04-16 15:47:10 -07:00
Nikitha Malgi
643dd26389 Fix formatting for the new language reference (#56042)
Summary:
This PR fixes the formatting issues in the new language reference

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56042

Reviewed By: gmagogsfm

Differential Revision: D27830179

Pulled By: nikithamalgifb

fbshipit-source-id: bce3397d4de3f1536a1a8f0a16f10a703e7d4406
2021-04-16 14:18:09 -07:00
Heitor Schueroff
33159b68a3 Revert "Deprecate legacy constructor torch.Tensor() (#54414)" (#55831)
Summary:
This PR reverts https://github.com/pytorch/pytorch/pull/54414 because of https://github.com/pytorch/pytorch/issues/55780

cc ysiraichi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55831

Reviewed By: agolynski

Differential Revision: D27762264

Pulled By: heitorschueroff

fbshipit-source-id: 8079a660cc440cafb9d22aa031d36dde121e13b3
2021-04-15 14:06:10 -07:00
kshitij12345
50057e560b [special] Add i0e (#54409)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Changes:
* Add `i0e`
* Move some kernels from `UnaryOpsKernel.cu` to `UnarySpecialOpsKernel.cu` to decrease compilation time per file.

Time taken by the i0e_vs_scipy tests: around 6.33 s

<details>

<summary>Test Run Log</summary>

```
(pytorch-cuda-dev) kshiteej@qgpu1:~/Pytorch/pytorch_module_special$ pytest test/test_unary_ufuncs.py -k _i0e_vs
======================================================================= test session starts ========================================================================
platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/kshiteej/Pytorch/pytorch_module_special, configfile: pytest.ini
plugins: hypothesis-5.38.1
collected 8843 items / 8833 deselected / 10 selected

test/test_unary_ufuncs.py ...sss....                                                                                                                         [100%]

========================================================================= warnings summary =========================================================================
../../.conda/envs/pytorch-cuda-dev/lib/python3.8/site-packages/torch/backends/cudnn/__init__.py:73
test/test_unary_ufuncs.py::TestUnaryUfuncsCUDA::test_special_i0e_vs_scipy_cuda_bfloat16
  /home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/site-packages/torch/backends/cudnn/__init__.py:73: UserWarning: PyTorch was compiled without cuDNN/MIOpen support. To use cuDNN/MIOpen, rebuild PyTorch making sure the library is visible to the build system.
    warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/warnings.html
===================================================================== short test summary info ======================================================================
SKIPPED [3] test/test_unary_ufuncs.py:1182: not implemented: Could not run 'aten::_copy_from' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_copy_from' is only available for these backends: [BackendSelect, Named, InplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
InplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:56 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8761 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_4.cpp:9348 [kernel]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:250 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
==================================================== 7 passed, 3 skipped, 8833 deselected, 2 warnings in 6.33s =====================================================
```

</details>

TODO:
* [x] Check rendered docs (https://11743402-65600975-gh.circle-artifacts.com/0/docs/special.html)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54409

Reviewed By: jbschlosser

Differential Revision: D27760472

Pulled By: mruberry

fbshipit-source-id: bdfbcaa798b00c51dc9513c34626246c8fc10548
2021-04-15 06:06:11 -07:00
mattip
40d74e6f71 breakup optim, cuda documentation (#55673)
Summary:
Related to https://github.com/pytorch/pytorch/issues/52256

Use autosummary instead of autofunction to create subpages for optim and cuda functions/classes.

Also fix some minor formatting issues in the optim.LBFGS and cuda.stream docstrings

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55673

Reviewed By: jbschlosser

Differential Revision: D27747741

Pulled By: zou3519

fbshipit-source-id: 070681f840cdf4433a44af75be3483f16e5acf7d
2021-04-14 12:44:00 -07:00
mattip
fd15557ccc breakup autograd documentation (#55672)
Summary:
Related to https://github.com/pytorch/pytorch/issues/52256

Use autosummary instead of autofunction to create subpages for autograd functions. I left the autoclass parts intact but manually laid out their members.

Also, the LaTeX formatting of the special page emitted a warning (solved by adding `\begin{align}...\end{align}`), and the alignment of equations was fixed (by using `&=` instead of `=`).

zou3519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55672

Reviewed By: jbschlosser

Differential Revision: D27736855

Pulled By: zou3519

fbshipit-source-id: addb56f4f81c82d8537884e0ff243c1e34969a6e
2021-04-14 12:40:00 -07:00
Natalia Gimelshein
f94c95a2dd Revert D23752058: [pytorch][PR] Don't split oversize cached blocks
Test Plan: revert-hammer

Differential Revision:
D23752058 (67dcd62310)

Original commit changeset: ccb7c13e3cf8

fbshipit-source-id: 12ae9702135ea510e9714ed97fb75ca3b9f97c27
2021-04-14 09:24:08 -07:00
Michael Wootton
67dcd62310 Don't split oversize cached blocks (#44742)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/35901

This change is designed to prevent fragmentation in the Caching Allocator. Permissive block splitting in the allocator allows very large blocks to be split into many pieces. Once split too finely, it is unlikely all the pieces will be 'free' at the same time, so the original allocation can never be returned. Anecdotally, we've seen a model run out of memory failing to alloc a 50 MB block on a 32 GB card while the caching allocator is holding 13 GB of 'split free blocks'.

Approach:

- Large blocks above a certain size are designated "oversize". This limit is currently set one decade above "large", at 200 MB
- Oversize blocks cannot be split
- Oversize blocks must closely match the requested size (e.g. a 200 MB request will match an existing 205 MB block, but not a 300 MB block)
- In lieu of splitting oversize blocks, there is a mechanism to quickly free a single oversize block (back to the system allocator) so that an appropriately sized block can be allocated. This is activated under memory pressure and prevents _release_cached_blocks()_ from triggering

Initial performance tests show this is similar to or quicker than the original strategy.  Additional tests are ongoing.
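
For illustration, here is a minimal Python sketch of the matching policy described above; the constants, `Block` type, and `find_block` helper are hypothetical stand-ins for the real C++ caching-allocator logic:

```python
from dataclasses import dataclass

OVERSIZE_THRESHOLD = 200 * 1024 * 1024  # blocks above this are "oversize" (hypothetical constant)
MATCH_SLACK = 20 * 1024 * 1024          # how closely an oversize block must match a request

@dataclass
class Block:
    size: int

def find_block(free_blocks, requested):
    """Return (block, may_split) for the smallest suitable free block, or (None, False)."""
    for block in sorted(free_blocks, key=lambda b: b.size):
        if block.size < requested:
            continue
        if block.size >= OVERSIZE_THRESHOLD:
            # Oversize blocks are never split and must closely match the request.
            if block.size - requested <= MATCH_SLACK:
                return block, False
            continue
        return block, True  # regular blocks may be split to fit
    return None, False  # caller may free one oversize block and retry
```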

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44742

Reviewed By: ngimel

Differential Revision: D23752058

Pulled By: ezyang

fbshipit-source-id: ccb7c13e3cf8ef2707706726ac9aaac3a5e3d5c8
2021-04-14 03:04:41 -07:00
mattip
f61556a7ce Use autosummary on torch.fft, torch.linalg (#55748)
Summary:
Related to https://github.com/pytorch/pytorch/issues/52256

Use autosummary instead of autofunction to create subpages for `torch.fft` and `torch.linalg` functions.

zou3519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55748

Reviewed By: jbschlosser

Differential Revision: D27739282

Pulled By: heitorschueroff

fbshipit-source-id: 37aa06cb8959721894ffadc15ae8c3b83481a319
2021-04-13 12:02:36 -07:00
Meghan Lele
fc6985eceb [package] Minor fixes to PackageExporter docstrings (#55817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55817

**Summary**
This commit makes minor edits to the docstrings of `PackageExporter` so
that they render properly in the `torch.package` API reference.

**Test Plan**
Continuous integration (especially the docs tests).

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27726817

Pulled By: SplitInfinity

fbshipit-source-id: b81276d7278f586fceded83d23cb4d0532f7c629
2021-04-13 10:00:38 -07:00
Meghan Lele
6a738196af [package] Create API reference (#55812)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55812

**Summary**
This commit creates a barebones API reference doc for `torch.package`.
The content is sourced from the docstrings in the source for the
`torch.package`.

**Test Plan**
Continuous integration (specifically the docs tests).

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27726816

Pulled By: SplitInfinity

fbshipit-source-id: 5e9194536f80507e337b81c5ec3b5635d7121818
2021-04-13 09:58:45 -07:00
Jeff Yang
5a4e5db9ad docs: fix profiler docstring (#55750)
Summary:
Description:
- change the docstrings for profiler module as per google docstring
- add link to `torch.autograd` module
- document `ProfilerAction` and `ProfilerActivity`

https://12292060-65600975-gh.circle-artifacts.com/0/docs/profiler.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55750

Reviewed By: yinghai

Differential Revision: D27725494

Pulled By: ngimel

fbshipit-source-id: 32d0a18e274a871ac712b28b61ba63eb08299a03
2021-04-13 00:23:14 -07:00
Sameer Deshmukh
5fb1142702 Add CSR (compressed sparse row) layout for sparse tensors (#50937)
Summary:
Implement compressed sparse row format. Derived from the GCS implementation at https://github.com/pytorch/pytorch/pull/44190
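
For reference, a minimal sketch of constructing a CSR tensor through the public API as it exists in later releases (the exact entry point may have differed at the time of this commit):

```python
import torch

# The 2x3 matrix [[1, 0, 2], [0, 3, 0]] in compressed sparse row form.
crow_indices = torch.tensor([0, 2, 3])  # row i's values live in values[crow[i]:crow[i+1]]
col_indices = torch.tensor([0, 2, 1])
values = torch.tensor([1.0, 2.0, 3.0])
csr = torch.sparse_csr_tensor(crow_indices, col_indices, values, size=(2, 3))
dense = csr.to_dense()  # back to a regular strided tensor
```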

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50937

Reviewed By: mrshenli

Differential Revision: D27439865

Pulled By: ezyang

fbshipit-source-id: 3ba3dcb9679505b980ff6a5f513e913bbae2fb1d
2021-04-12 10:09:12 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
mattip
7d56de1834 DOC: use autosummary on tensors.rst (#55042)
Summary:
Related to https://github.com/pytorch/pytorch/issues/52256

Splits the tensors documentation into a table-of-contents page and many sub-pages, one for each function

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55042

Reviewed By: mrshenli

Differential Revision: D27628688

Pulled By: zou3519

fbshipit-source-id: 08e87700a8e7d5b3fba3f1949e29e988a42bf2c6
2021-04-08 06:44:23 -07:00
kshitij12345
902bf0bbbe [special] Alias for sigmoid and logit & follow-up (#54759)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Changes:
* Alias for sigmoid and logit
* Adds out variant for C++ API
* Updates docs to link back to `special` documentation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54759

Reviewed By: mrshenli

Differential Revision: D27615208

Pulled By: mruberry

fbshipit-source-id: 8bba908d1bea246e4aa9dbadb6951339af353556
2021-04-08 00:56:59 -07:00
James Reed
ec38dda1cc Remove extra close bracket in extending.rst (#55409)
Summary:
Small typo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55409

Reviewed By: pbelevich

Differential Revision: D27611177

Pulled By: jamesr66a

fbshipit-source-id: 8a5ff702e4ab8a7eb2403432889f8b7a5a69484b
2021-04-07 21:15:46 -07:00
Peter Bell
8ac0619784 Avoid infinite recursion in __torch_function__ example (#55391)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55284

This gets the example to run but probably doesn't help the readability of the example.

Thoughts?
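
For context, here is a minimal sketch of the unwrap-before-dispatch pattern that avoids the recursion; the `Wrapper` class below is illustrative, not the actual example from extending.rst:

```python
import torch

class Wrapper:
    def __init__(self, t):
        self.t = t

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Unwrap before calling the real torch function; calling func on
        # Wrapper instances again would recurse back into __torch_function__.
        unwrapped = tuple(a.t if isinstance(a, Wrapper) else a for a in args)
        return Wrapper(func(*unwrapped, **kwargs))

out = torch.add(Wrapper(torch.ones(3)), Wrapper(torch.ones(3)))  # no infinite recursion
```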

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55391

Reviewed By: mrshenli

Differential Revision: D27621096

Pulled By: ezyang

fbshipit-source-id: d02c4fb0001e54139a167b477fd3b4a229e4dc8c
2021-04-07 20:31:46 -07:00
whiteking64
e6bfff679d [ONNX] Add hardsigmoid symbolic in opset 9 #49649 (#54193)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49649
Adds support for the torch.nn.Hardsigmoid operator in torch.onnx.export

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54193

Reviewed By: anjali411

Differential Revision: D27522969

Pulled By: SplitInfinity

fbshipit-source-id: 33abcec578f4bc3cf5c3ee1c1bed7d94816bee96
2021-04-07 14:28:31 -07:00
mattip
b9a02128bc split nn.functional (#55038)
Summary:
Related to https://github.com/pytorch/pytorch/issues/52256

Splits torch.nn.functional into a table-of-contents page and many sub-pages, one for each function

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55038

Reviewed By: gchanan

Differential Revision: D27502677

Pulled By: zou3519

fbshipit-source-id: 38e450a0fee41c901eb56f94aee8a32f4eefc807
2021-04-07 06:35:47 -07:00
James Reed
c96f076248 Fix typo in extending.rst (#55408)
Summary:
Small typo in docs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55408

Reviewed By: pbelevich

Differential Revision: D27611175

Pulled By: jamesr66a

fbshipit-source-id: a83a6220054c0411329792c7ac6afceb2b699f44
2021-04-07 03:46:01 -07:00
Ivan Yashchuk
84d18727bd Added linalg.eig, linalg.eigvals (#52491)
Summary:
This PR adds `torch.linalg.eig`, and `torch.linalg.eigvals` for NumPy compatibility.

MAGMA uses a hybrid CPU-GPU algorithm and doesn't have a GPU interface for the non-symmetric eigendecomposition. This forces us to transfer inputs living in GPU memory to the CPU before calling MAGMA, and then transfer the results back to the GPU afterwards. That is rather slow for smaller matrices, and MAGMA is faster than the CPU path only for matrices larger than 3000x3000.
Unfortunately, there is no cuSOLVER function for this operation.

Autograd support for `torch.linalg.eig` will be added in a follow-up PR.
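
A minimal usage sketch (random input for illustration):

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
w = torch.linalg.eigvals(A)   # eigenvalues only (complex dtype)
w2, V = torch.linalg.eig(A)   # eigenvalues and right eigenvectors
# A @ V == V @ diag(w2), up to numerical error.
assert torch.allclose(A.to(w2.dtype) @ V, V @ torch.diag(w2))
```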

Ref https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52491

Reviewed By: anjali411

Differential Revision: D27563616

Pulled By: mruberry

fbshipit-source-id: b42bb98afcd2ed7625d30bdd71cfc74a7ea57bb5
2021-04-06 13:53:26 -07:00
Pritam Damania
e0c5d0ea15 Add tutorials to pipeline docs. (#55209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55209

ghstack-source-id: 125588324

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D27528715

fbshipit-source-id: e6de3649e7265f34de03d452ffdf66ae45569d58
2021-04-05 20:01:00 -07:00
Yi Wang
6a2f046504 [SPMD] Restrict DDP communication hooks to SPSD mode (#55253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55253

Previously, DDP communication hooks took a tensor list as input. Now they take only a single tensor, in preparation for retiring SPMD and providing only a single model replica to DDP communication hooks.

The next step is limiting Reducer to a single model replica.
ghstack-source-id: 125677637

Test Plan: waitforbuildbot

Reviewed By: zhaojuanmao

Differential Revision: D27533898

fbshipit-source-id: 5db92549c440f33662cf4edf8e0a0fd024101eae
2021-04-05 16:46:47 -07:00
Jerry Zhang
7613b1150b [docs][quant] Add fx graph mode quant api doc (#55306)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55306

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27567187

fbshipit-source-id: ceef873b78fc77e366a47be66c8efd856bac013e
2021-04-05 13:56:23 -07:00
Yi Wang
e593044748 [Gradient Compression] Update a warning in ddp_comm_hooks.rst (#55031)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55031

It turns out that PowerSGD hooks can work with PyTorch's native AMP package, but not with the Apex AMP package, which can somehow mutate gradients during the execution of communication hooks.

{F561544045}
ghstack-source-id: 125268206

Test Plan:
Used the native AMP backend for the same pytext model and it worked:
f261564342
f261561664

Reviewed By: rohan-varma

Differential Revision: D27436484

fbshipit-source-id: 2b63eb683ce373f9da06d4d224ccc5f0a3016c88
2021-04-02 12:07:50 -07:00
Yanan Cao
ec609e7420 Adds torch.* API section for TorchScript Lang Ref (#53236)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53236

Reviewed By: SplitInfinity

Differential Revision: D27526584

Pulled By: gmagogsfm

fbshipit-source-id: ea931ea63aa4b37a7782935a1760bebffedc5b67
2021-04-02 03:01:08 -07:00
Yanan Cao
1b2b3ca86d Language Ref Python Builtin Functions and Values (#52830)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52830

Reviewed By: SplitInfinity, nikithamalgifb

Differential Revision: D27407474

Pulled By: gmagogsfm

fbshipit-source-id: 06fcafbcc66376c5f1818cb12fca2f2a57843c9d
2021-04-01 10:14:03 -07:00
Heitor Schueroff
5d68b3695c [Relanding] Implemented torch.linalg.multi_dot (#52859)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52859

This reverts commit 92a4ee1cf6.

Added support for bfloat16 for CUDA 11 and removed the fast path for empty input tensors that was affecting the autograd graph.
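
A minimal usage sketch of the relanded op (shapes are illustrative):

```python
import torch

A = torch.randn(10, 100, dtype=torch.float64)
B = torch.randn(100, 5, dtype=torch.float64)
C = torch.randn(5, 50, dtype=torch.float64)
# multi_dot picks the cheapest multiplication order for the whole chain.
out = torch.linalg.multi_dot([A, B, C])
assert torch.allclose(out, A @ B @ C)
```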

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D27402390

Pulled By: heitorschueroff

fbshipit-source-id: 73c5ccf54f3da3d29eb63c9ed3601e2fe6951034
2021-04-01 04:49:05 -07:00
Negin Raoof
c5f3d92816 [ONNX] Update scripting docs (#54634) (#54868)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54868

* Updating docs for scripting

* Rebase

* Fix formatting

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408980

Pulled By: SplitInfinity

fbshipit-source-id: 2b176a5a746c1a2369be1940d84e6491a1ecd015
2021-03-31 21:14:27 -07:00
nikithamalgi
790b69e096 Language Ref for Statements in Torchscript (#52847)
Summary:
Documents the statements supported in TorchScript for the language spec

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52847

Reviewed By: gmagogsfm

Differential Revision: D27463142

Pulled By: nikithamalgifb

fbshipit-source-id: ff3def1b878092b0a2afc7c2f47b7857e6658ecf
2021-03-31 19:15:53 -07:00
nikithamalgi
444e5f0b60 Add Type System (I) (#53244)
Summary:
**Summary**
This commit adds a new .rst file to update the language specification with the updated content for the Type System section.

**Test Plan**

![image](https://user-images.githubusercontent.com/70345919/109920057-9308b400-7c6e-11eb-8391-83635efbf036.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53244

Reviewed By: H-Huang

Differential Revision: D27445210

Pulled By: nikithamalgifb

fbshipit-source-id: 984c25b06686ba7a72cc03c5c069d819709eedb8
2021-03-30 23:10:27 -07:00
Michael Carilli
920eb01e2e Add scatter_add to amp docs (#54908)
Summary:
Updates docs to reflect https://github.com/pytorch/pytorch/pull/52133.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54908

Reviewed By: agolynski

Differential Revision: D27431302

Pulled By: H-Huang

fbshipit-source-id: fa3dc6267bc73c81cdd96f986c971daee1922cb5
2021-03-30 15:26:41 -07:00
Sam Estep
5bcbbf5373 Lint trailing newlines (#54737)
Summary:
*Context:* https://github.com/pytorch/pytorch/issues/53406 added a lint for trailing whitespace at the ends of lines. However, in order to pass FB-internal lints, that PR also had to normalize the trailing newlines in four of the files it touched. This PR adds an OSS lint to normalize trailing newlines.

The changes to the following files (made in 54847d0adb9be71be4979cead3d9d4c02160e4cd) are the only manually-written parts of this PR:

- `.github/workflows/lint.yml`
- `mypy-strict.ini`
- `tools/README.md`
- `tools/test/test_trailing_newlines.py`
- `tools/trailing_newlines.py`

I would have liked to make this just a shell one-liner like the other three similar lints, but nothing I could find quite fit the bill. Specifically, all the answers I tried from the following Stack Overflow questions were far too slow (at least a minute and a half to run on this entire repository):

- [How to detect file ends in newline?](https://stackoverflow.com/q/38746)
- [How do I find files that do not end with a newline/linefeed?](https://stackoverflow.com/q/4631068)
- [How to list all files in the Git index without newline at end of file](https://stackoverflow.com/q/27624800)
- [Linux - check if there is an empty line at the end of a file [duplicate]](https://stackoverflow.com/q/34943632)
- [git ensure newline at end of each file](https://stackoverflow.com/q/57770972)
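
For illustration, a minimal Python sketch of the kind of check `tools/trailing_newlines.py` performs (this is a hypothetical stand-in, not the actual script):

```python
import sys

def missing_trailing_newline(path):
    """True if the file is non-empty and does not end in exactly one newline."""
    with open(path, "rb") as f:
        data = f.read()
    return data != b"" and (not data.endswith(b"\n") or data.endswith(b"\n\n"))

if __name__ == "__main__":
    bad = [p for p in sys.argv[1:] if missing_trailing_newline(p)]
    print("\n".join(bad))
    sys.exit(1 if bad else 0)
```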

To avoid giving false positives during the few days after this PR is merged, we should probably only merge it after https://github.com/pytorch/pytorch/issues/54967.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54737

Test Plan:
Running the shell script from the "Ensure correct trailing newlines" step in the `quick-checks` job of `.github/workflows/lint.yml` should print no output and exit in a fraction of a second with a status of 0. That was not the case prior to this PR, as shown by this failing GHA workflow run on an earlier draft of this PR:

- https://github.com/pytorch/pytorch/runs/2197446987?check_suite_focus=true

In contrast, this run (after correcting the trailing newlines in this PR) succeeded:

- https://github.com/pytorch/pytorch/pull/54737/checks?check_run_id=2197553241

To unit-test `tools/trailing_newlines.py` itself (this is run as part of our "Test tools" GitHub Actions workflow):
```
python tools/test/test_trailing_newlines.py
```

Reviewed By: malfet

Differential Revision: D27409736

Pulled By: samestep

fbshipit-source-id: 46f565227046b39f68349bbd5633105b2d2e9b19
2021-03-30 13:09:52 -07:00
Meghan Lele
d60874354f [docs] Add updated TorchScript language reference section for types (#53673)
Summary:
**Summary**
This commit adds information about type annotation and inference to
the updated language specification. It will be rebased on top of https://github.com/pytorch/pytorch/issues/52494
after it lands.

**Test Plan**
Continuous integration.

Screen capture:
https://user-images.githubusercontent.com/4392003/110560184-66371f80-80fa-11eb-803a-923cf8de25ff.mov

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53673

Reviewed By: gmagogsfm

Differential Revision: D27413001

Pulled By: SplitInfinity

fbshipit-source-id: b54b300b4b1f10537ec06e2ee9eeb6d2b1f1810b
2021-03-30 10:32:58 -07:00
kshitij12345
c9d0c855f7 [special] Alias for special.expm1 and special.exp2 (#54670)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54670

Reviewed By: H-Huang

Differential Revision: D27401440

Pulled By: mruberry

fbshipit-source-id: 02b1fd0e8ffd3f5a017d6b6b9229b76b92b4b745
2021-03-30 10:03:13 -07:00
Jerry Zhang
a1bd7918cc [docs][quant] Fix FX Graph Mode Quantization tutorial link (#54715)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54715

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27338515

fbshipit-source-id: d61b140284548073df42ead1900f179c6ada2f02
2021-03-29 17:25:19 -07:00
Yanan Cao
f4dfa02c03 Add documentation for torch.jit.Attribute and torch.jit.annotate (#54485)
Summary:
This is to prepare for the new language reference spec, which needs to describe `torch.jit.Attribute` and `torch.jit.annotate`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54485

Reviewed By: SplitInfinity, nikithamalgifb

Differential Revision: D27406843

Pulled By: gmagogsfm

fbshipit-source-id: 98983b9df0f974ed69965ba4fcc03c1a18d1f9f5
2021-03-29 14:44:53 -07:00
Jeff Yang
02f5c50828 docs: separate autosummary for flatten layers (#54663)
Summary:
fixes https://github.com/pytorch/pytorch/issues/46881
https://11815123-65600975-gh.circle-artifacts.com/0/docs/generated/torch.nn.Flatten.html#torch.nn.Flatten

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54663

Reviewed By: ailzhang

Differential Revision: D27328367

Pulled By: zou3519

fbshipit-source-id: de1651a670181db8ea8ab16624c17ba08a88eb5d
2021-03-29 10:23:34 -07:00
Jeff Yang
7eef0c3ab5 docs: add functional group_norm (#54673)
Summary:
fixes https://github.com/pytorch/pytorch/issues/34209
https://11813548-65600975-gh.circle-artifacts.com/0/docs/nn.functional.html#normalization-functions

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54673

Reviewed By: ailzhang

Differential Revision: D27328211

Pulled By: zou3519

fbshipit-source-id: 75c49849377047502962157239857ed99afe6d1e
2021-03-29 10:21:50 -07:00
Jeff Yang
475251631b docs: reference links to serialization.html (#54659)
Summary:
fixes https://github.com/pytorch/pytorch/issues/54311
https://11811979-65600975-gh.circle-artifacts.com/0/docs/generated/torch.save.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54659

Reviewed By: ailzhang

Differential Revision: D27328281

Pulled By: zou3519

fbshipit-source-id: b88d02e5407238a338d537d013a297ae9cdf922b
2021-03-29 10:15:07 -07:00
Jeff Yang
84232b762b docs: add reset_peak_memory_stats in cuda.rst (#54668)
Summary:
fixes https://github.com/pytorch/pytorch/issues/41808
https://11812999-65600975-gh.circle-artifacts.com/0/docs/cuda.html

One question: does `reset_peak_stats` exist in `torch.cuda`?
I can't find it anywhere.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54668

Reviewed By: ailzhang

Differential Revision: D27328444

Pulled By: zou3519

fbshipit-source-id: 098024d43da98e3249aa9aa71cb10126095504a4
2021-03-29 10:05:20 -07:00
Yukio Siraichi
4e5af53d29 Deprecate legacy constructor torch.Tensor() (#54414)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47112

This pull request is the final step in [the proposed plan](https://github.com/pytorch/pytorch/issues/47112#issuecomment-789972007) for deprecating `torch.Tensor()` constructor. Specifically, it **updates the docs and throws `TORCH_WARN_ONCE` if someone uses `torch.Tensor()`**.
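
For context, the usual replacements recommended in place of the legacy constructor (a sketch, not part of this PR's diff):

```python
import torch

# Instead of the legacy torch.Tensor(...) constructor:
t1 = torch.tensor([1.0, 2.0, 3.0])  # construct from existing data
t2 = torch.empty(2, 3)              # construct uninitialized storage of a given shape
t3 = torch.zeros(2, 3)              # construct with a fill value
```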

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54414

Reviewed By: ailzhang

Differential Revision: D27325267

Pulled By: heitorschueroff

fbshipit-source-id: 5442572603d340b89e8cc5a886a330dd9b13550a
2021-03-29 05:14:47 -07:00
kshitij12345
0527d14248 [numpy] Add torch.take_along_dim (#52833)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/38349

Wrapper around the existing `torch.gather` with broadcasting logic.
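
A minimal usage sketch mirroring `numpy.take_along_axis` (values are illustrative):

```python
import torch

t = torch.tensor([[10, 30, 20], [60, 40, 50]])
idx = t.argsort(dim=1)
# Gather each row's values in sorted order, like numpy.take_along_axis.
sorted_rows = torch.take_along_dim(t, idx, dim=1)
# tensor([[10, 20, 30], [40, 50, 60]])
```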

TODO:
* [x] Add Doc entry (see if phrasing can be improved)
* [x] Add OpInfo
* [x] Add test against numpy
* [x] Handle broadcasting behaviour and when dim is not given.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52833

Reviewed By: malfet

Differential Revision: D27319038

Pulled By: mruberry

fbshipit-source-id: 00f307825f92c679d96e264997aa5509172f5ed1
2021-03-28 05:22:51 -07:00
Pritam Damania
f612d4eb58 Add 'remote_parameters' and 'get_module_rref' to RemoteModule docs. (#54645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54645

Had to replace RRef[..] with just RRef in the return signature, since
Sphinx seemed to completely mess up rendering RRef[..].
ghstack-source-id: 125024783

Test Plan: View locally.

Reviewed By: SciPioneer

Differential Revision: D27314609

fbshipit-source-id: 2dd9901e79f31578ac7733f79dbeb376f686ed75
2021-03-26 21:41:28 -07:00
kshitij12345
6f8328ef44 [special] Add special.entr (#53500)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

TODO:

* [x] Verify docs rendering (https://11397990-65600975-gh.circle-artifacts.com/0/docs/special.html)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53500

Reviewed By: ngimel

Differential Revision: D27287096

Pulled By: mruberry

fbshipit-source-id: 6b3dfd53e811a0f023ee444a0b56176f825d39e9
2021-03-24 18:44:42 -07:00
Ansley Ussery
b032316c41 Improve nn.Sequential documentation (#53380)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53380

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D26849861

Pulled By: ansley

fbshipit-source-id: 2add8c73ae421332ed1c03340806e25656bafabb
2021-03-24 13:02:43 -07:00
Heitor Schueroff
f9e7f132fb Added torch.linalg.matrix_power (#52608)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52608

**TODO**

- [x] Add OpInfo
- [x] Update documentation
- [x] Add more tests and compare against NumPy

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D27261532

Pulled By: heitorschueroff

fbshipit-source-id: c1e4ab297da3683f6d5751be8790602f9dc37b6b
2021-03-23 15:10:06 -07:00
Ioana Tivadar
1041fdd069 Grammatically update tech docs (#54370)
Summary:
Small grammatical update to nn.rst

![Screenshot 2021-03-20 at 11 44 29](https://user-images.githubusercontent.com/80534697/111867047-d868f900-8971-11eb-8cc2-0ae7d2c59229.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54370

Reviewed By: radkris-git

Differential Revision: D27243944

Pulled By: heitorschueroff

fbshipit-source-id: 08d8061d9e74ffaf95c8a610107a8632259474ca
2021-03-23 02:59:19 -07:00
Wanchao Liang
270d675f86 update distributed doc table for alltoall nccl (#54277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54277

alltoall is already supported in the NCCL backend, so update the doc to reflect it.

Test Plan: Imported from OSS

Reviewed By: divchenko

Differential Revision: D27172904

Pulled By: wanchaol

fbshipit-source-id: 9afa89583d56b247b2017ea2350936053eb30827
2021-03-19 15:35:10 -07:00
kshitij12345
bfd009836e [torch.special] Add special.erf{c, inv} (#53260)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Also adds `overrides` entry for module and the newly added functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53260

Reviewed By: agolynski

Differential Revision: D27114342

Pulled By: mruberry

fbshipit-source-id: b1dd88f373db251bb71df12d33b160382138f63f
2021-03-18 19:06:25 -07:00
Kurt Mohler
382a47b493 Add torch.linalg.vector_norm function (#51099)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/50214

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51099

Reviewed By: agolynski

Differential Revision: D27147360

Pulled By: mruberry

fbshipit-source-id: 1056f840e7027ad81971c9d1a9f952ab9648f1b5
2021-03-18 06:41:39 -07:00
Ivan Yashchuk
564456ac44 Added autograd support for torch.orgqr (#52637)
Summary:
This PR adds autograd support for `torch.orgqr`.

Since `torch.orgqr` is one of the few functions that expose LAPACK's naming, and all other linear algebra routines were renamed a long time ago, I also added a new function with a new name; `torch.orgqr` is now an alias for it.

The new proposed name is `householder_product`. For a matrix `input` and a vector `tau`, LAPACK's orgqr operation takes the columns of `input` (called Householder vectors or elementary reflectors) and the scalars of `tau`, which together represent Householder matrices, and computes the product of these matrices. See https://www.netlib.org/lapack/lug/node128.html.
Other linear algebra libraries that I'm aware of do not expose this LAPACK function, so there is some freedom in naming it. It is usually used internally only for QR decomposition, but it can be useful for deep learning tasks now that it supports differentiation.
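
A minimal sketch of reconstructing Q from a QR factorization with the new name (namespaced under `torch.linalg` in current releases; shapes are illustrative):

```python
import torch

a = torch.randn(5, 3, dtype=torch.float64)
h, tau = torch.geqrf(a)  # Householder vectors below the diagonal, plus scaling factors
q = torch.linalg.householder_product(h, tau)
r = torch.triu(h[:3])
assert torch.allclose(q @ r, a)
```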

Resolves https://github.com/pytorch/pytorch/issues/50104

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52637

Reviewed By: agolynski

Differential Revision: D27114246

Pulled By: mruberry

fbshipit-source-id: 9ab51efe52aec7c137aa018c7bd486297e4111ce
2021-03-18 05:42:18 -07:00
Yi Wang
4b00bce156 [Gradient Compression] Introduce fp16_compress_wrapper in ddp_comm_hooks.rst (#54052)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54052

Introduce `fp16_compress_wrapper`, which can give some speedup on top of some gradient compression algorithms like PowerSGD.

ghstack-source-id: 124001805

Test Plan: {F509205173}

Reviewed By: iseessel

Differential Revision: D27076064

fbshipit-source-id: 4845a14854cafe2112c0caefc1e2532efe9d3ed8
2021-03-16 15:40:10 -07:00
mattip
ae154a8c2c various doc building cleanups (#53851)
Summary:
brianjo
- Add a javascript snippet to close the expandable left navbar sections 'Notes', 'Language Bindings', 'Libraries', 'Community'
- Fix two latex bugs that were causing output in the log that might have been misleading when looking for true doc build problems
- Change the way release versions interact with Sphinx. I tested these via building docs twice: once with `export RELEASE=1` and once without.
  - Remove the perl scripting that turns the static version text into a link to the versions.html document. Instead, put this where it belongs in the layout.html template. This is the way the domain libraries (text, vision, audio) do it.
  - There were two separate templates for master and release, the only difference between them being that master has the admonition "You are viewing unstable developer preview docs....". Instead, toggle that with the value of `release`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53851

Reviewed By: mruberry

Differential Revision: D27085875

Pulled By: ngimel

fbshipit-source-id: c2d674deb924162f17131d895cb53cef08a1f1cb
2021-03-16 15:01:59 -07:00
Xiong Wei
da10ccd35f Implements cpu_kernel_multiple_outputs and torch.frexp (#51097)
Summary:
Closes https://github.com/pytorch/pytorch/issues/51108
Related https://github.com/pytorch/pytorch/issues/38349

This PR implements `cpu_kernel_multiple_outputs` to support returning multiple values from a CPU kernel.
```c++
auto iter = at::TensorIteratorConfig()
  .add_output(out1)
  .add_output(out2)
  .add_input(in1)
  .add_input(in2)
  .build();

at::native::cpu_kernel_multiple_outputs(iter,
  [=](float a, float b) -> std::tuple<float, float> {
    float add = a + b;
    float mul = a * b;
    return std::tuple<float, float>(add, mul);
  }
);
```

`out1` will equal `torch.add(in1, in2)`, while `out2` will equal `torch.mul(in1, in2)`.
This makes it more convenient for developers to implement new torch functions that return two tensors, such as the NumPy-like functions [divmod](https://numpy.org/doc/1.18/reference/generated/numpy.divmod.html?highlight=divmod#numpy.divmod) and [frexp](https://numpy.org/doc/stable/reference/generated/numpy.frexp.html#numpy.frexp).

This PR adds the `torch.frexp` function to exercise the new functionality provided by `cpu_kernel_multiple_outputs`.
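
A minimal usage sketch of the new op:

```python
import torch

x = torch.tensor([1.0, 2.0, 0.5, 0.0])
mantissa, exponent = torch.frexp(x)
# x == mantissa * 2 ** exponent; torch.ldexp reconstructs the input.
assert torch.allclose(x, torch.ldexp(mantissa, exponent))
```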

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51097

Reviewed By: albanD

Differential Revision: D26982619

Pulled By: heitorschueroff

fbshipit-source-id: cb61c7f2c79873ab72ab5a61cbdb9203531ad469
2021-03-15 10:44:32 -07:00
Isaac Seessel
3078233e9a [Gradient Compression] Make FP16 compression as a wrapper that can be combined with other communication hooks (#53808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53808

Create an FP16 wrapper that can combine FP16 gradient compression with any gradient compression algorithm.
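
For context, a usage sketch of the wrapper as exposed under `torch.distributed.algorithms.ddp_comm_hooks` (exact import paths may have shifted across releases; `ddp_model` is assumed to be an initialized DistributedDataParallel instance):

```python
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

state = powerSGD.PowerSGDState(process_group=None, matrix_approximation_rank=1)
# Run PowerSGD compression, with gradients cast to FP16 around the hook.
ddp_model.register_comm_hook(state, default_hooks.fp16_compress_wrapper(powerSGD.powerSGD_hook))
```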

Test Plan:
Unit test:
```
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_fp16_compress_wrapper
```

Performance Test on DDP QPS Benchmark: Check if AllReduce + FP16 Wrapper = FP16 Compression
1) FP16 Compression:
f256897690

2) FP16 Wrapper + AllReduce (after patching D26960986):
f256897289

Reviewed By: SciPioneer

Differential Revision: D26978832

fbshipit-source-id: 0dcd18b050c02f5e9f3cff56344d1f39a04e20c0
2021-03-12 17:31:07 -08:00
Nikita Vedeneev
afa1ff8e04 Implements torch.linalg.lstsq (#49093)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/44378 by providing a wider range of drivers similar to what SciPy is doing.

The supported CPU drivers are `gels, gelsy, gelsd, gelss`.
The CUDA interface has only `gels` implemented, and only for overdetermined systems.
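
A minimal usage sketch (driver choice is illustrative):

```python
import torch

A = torch.randn(10, 3)   # overdetermined system
B = torch.randn(10, 2)
result = torch.linalg.lstsq(A, B, driver="gelsd")
X = result.solution      # X minimizes ||A @ X - B|| in the least-squares sense
```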

The current state of this PR:
- [x] CPU interface
- [x] CUDA interface
- [x] CPU tests
- [x] CUDA tests
- [x] Memory-efficient batch-wise iteration with broadcasting which fixes https://github.com/pytorch/pytorch/issues/49252
- [x] docs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49093

Reviewed By: albanD

Differential Revision: D26991788

Pulled By: mruberry

fbshipit-source-id: 8af9ada979240b255402f55210c0af1cba6a0a3c
2021-03-12 13:25:55 -08:00
Stas Bekman
924c15c962 [doc] reorg dist init and non-init functions (#52976)
Summary:
This PR proposes to improve the distributed doc:

* [x] putting the init functions together
* [x] moving post-init functions into their own sub-section as they are only available after init and moving that group to after all init sub-sections

If this is too much, could we at least put these 2 functions together:

```
.. autofunction:: init_process_group

.. autofunction:: is_initialized
```
as they are interconnected, and the other functions are not alphabetically sorted in the first place.

Thank you.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52976

Reviewed By: albanD

Differential Revision: D26993933

Pulled By: mrshenli

fbshipit-source-id: 7cacbe28172ebb5849135567b1d734870b49de77
2021-03-12 08:48:18 -08:00
BowenBao
705131c5d3 [ONNX] Update ONNX documentation (#51362) (#53313)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53313

Add information about .data field

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922421

Pulled By: SplitInfinity

fbshipit-source-id: 5117ac20990e286dcacb44f7b810b1bcc75d3dd6
2021-03-12 02:49:38 -08:00
Meghan Lele
b69dd910e8 [docs] Add starter content for new TorchScript language reference (#53837)
Summary:
**Summary**
This commit adds a new .rst file to use for updating the language specification and prepopulates it with the updated content for the expressions section.

**Test Plan**
https://user-images.githubusercontent.com/4392003/110441235-638ee880-806e-11eb-83ae-3b908bf00d5b.mov

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53837

Reviewed By: nikithamalgifb

Differential Revision: D26990801

Pulled By: SplitInfinity

fbshipit-source-id: 3b4e711bfaa8aac4ee3a075822fed7267a818121
2021-03-11 18:18:27 -08:00
Yi Wang
8d8a4a0624 Remove the extra ":noindex:" in ddp_comm_hooks.rst (#53855)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53855

Remove "noindex" here:

{F492926346}
ghstack-source-id: 123724419

Test Plan:
waitforbuildbot

The failure on doctest does not seem to be relevant.

Reviewed By: rohan-varma

Differential Revision: D26967086

fbshipit-source-id: adf9db1144fa1475573f617402fdbca8177b7c08
2021-03-11 17:26:50 -08:00
Edward Yang
ffac9b2ead Revert D26965463: [pytorch][PR] [docs] Add starter content for new TorchScript language reference
Test Plan: revert-hammer

Differential Revision:
D26965463 (d49c5c74f5)

Original commit changeset: 246c76a56d91

fbshipit-source-id: 50de1a2ac92204a2f3a2ad9b8fa163338062bf58
2021-03-11 07:26:00 -08:00
Meghan Lele
d49c5c74f5 [docs] Add starter content for new TorchScript language reference (#52494)
Summary:
**Summary**
This commit adds a new .rst file to use for updating the language specification and prepopulates it with the updated content for the expressions section.

**Test Plan**
https://user-images.githubusercontent.com/4392003/110441235-638ee880-806e-11eb-83ae-3b908bf00d5b.mov

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52494

Reviewed By: nikithamalgifb

Differential Revision: D26965463

Pulled By: SplitInfinity

fbshipit-source-id: 246c76a56d911a8061e720abd200a44d7dfa1f35
2021-03-10 19:36:27 -08:00
hyperfraise
f9185973d1 [quantization] Add some support for 3d operations (#50003)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/50002

The last commit adds tests for 3d conv with the `SubModelFusion` and `SubModelWithoutFusion` classes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50003

Reviewed By: mrshenli

Differential Revision: D26325953

Pulled By: jerryzh168

fbshipit-source-id: 7406dd2721c0c4df477044d1b54a6c5e128a9034
2021-03-10 16:40:35 -08:00
Yi Wang
fe0810e2f8 Add a section to introduce GradBucket class in ddp_comm_hooks.rst (#53253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53253

Since the GradBucket class is becoming public, mention this class in ddp_comm_hooks.rst.

Screenshot:
{F478201008}

ghstack-source-id: 123596842

Test Plan: viewed generated html file

Reviewed By: rohan-varma

Differential Revision: D26812210

fbshipit-source-id: 65b70a45096b39f7d41a195e65b365b722645000
2021-03-10 16:14:34 -08:00
James Reed
f8e7d8bb0d [FX][docs] Render inherited methods in fx.Tracer API reference (#53630)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53630

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26918962

Pulled By: jamesr66a

fbshipit-source-id: 2c84e308889d4ba3176018c7bd44a841e715e6c8
2021-03-09 14:30:41 -08:00
Eric Jang
c2ccb3578e Fix inport -> import typo in documentation (#53589)
Summary:
Fixes a small documentation typo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53589

Reviewed By: ngimel

Differential Revision: D26907045

Pulled By: Chillee

fbshipit-source-id: 15c35bec8d75dd897fe8886d0e0e1b889df65b24
2021-03-08 23:56:42 -08:00
Horace He
c07a62b854 [FX] change dynamic control flow example to a *more* dynamic version (#53250)
Summary:
This is a more fundamental example, as we may support some amount of shape specialization in the future.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53250

Reviewed By: navahgar

Differential Revision: D26841272

Pulled By: Chillee

fbshipit-source-id: 027c719afafc03828a657e40859cbfbf135e05c9
2021-03-08 10:00:19 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
lezcano
7aeee2849b Parametrization Functionality (#33344)
Summary:
Provides the implementation for feature request issue https://github.com/pytorch/pytorch/issues/28937.

Adds the `Parametrization` functionality and implements `Pruning` on top of it.
It adds the `auto` mode, in which the parametrization is computed just once per forward pass. The previous implementation computed the pruning on every forward, which is not optimal when pruning RNNs, for example.

It implements a caching mechanism for parameters. This is implemented through the mechanism proposed at the end of the discussion https://github.com/pytorch/pytorch/issues/7313. In particular, it assumes that the user will not manually change the updated parameters between the call to `backward()` and `optimizer.step()`. If they do so, they would need to manually call the `.invalidate()` function provided in the implementation. This could be made into a function that takes a model and invalidates all the parameters in it. It might be the case that this function has to be called in `.cuda()`, `.to()`, and related functions.

As described in https://github.com/pytorch/pytorch/issues/7313, this could be used to implement `weight_norm` and `spectral_norm` in a cleaner way. It also allows, as described in https://github.com/pytorch/pytorch/issues/28937, for the implementation of constrained optimization on manifolds (e.g. orthogonal constraints, positive definite matrices, invertible matrices, weights on the sphere or the hyperbolic space...)
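
For a flavor of the functionality, a sketch using the parametrization API as it landed in `torch.nn.utils.parametrize` (names may differ slightly from this PR's intermediate state):

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Symmetric(nn.Module):
    def forward(self, X):
        # Constrain the weight to be symmetric.
        return X.triu() + X.triu(1).transpose(-1, -2)

layer = nn.Linear(4, 4)
parametrize.register_parametrization(layer, "weight", Symmetric())
assert torch.allclose(layer.weight, layer.weight.T)
```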

TODO (when implementation is validated):
- More thorough test
- Documentation

Resolves  https://github.com/pytorch/pytorch/issues/28937

albanD

Pull Request resolved: https://github.com/pytorch/pytorch/pull/33344

Reviewed By: zhangguanheng66

Differential Revision: D26816708

Pulled By: albanD

fbshipit-source-id: 07c8f0da661f74e919767eae31335a9c60d9e8fe
2021-03-04 12:45:27 -08:00
kshitij12345
c4c77e2001 [special] add torch.special namespace (#52296)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

* Add `torch.special` namespace
* Add `torch.special.gammaln` (alias to `torch.lgamma`); see the usage sketch below
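
A minimal sketch of the new alias in use (values are illustrative):

```python
import torch

x = torch.tensor([0.5, 1.5, 2.0])
# torch.special.gammaln is an alias for torch.lgamma.
assert torch.equal(torch.special.gammaln(x), torch.lgamma(x))
```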

TODO:
* Add proper entries for docs.
   * [x] Add .rst file entry
   * [x] Add documentation
   * [x] Update `lgamma` OpInfo entry for alias to `special.gammaln`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52296

Reviewed By: ngimel

Differential Revision: D26754890

Pulled By: mruberry

fbshipit-source-id: 73479f68989d6443ad07b7b02763fa98973c15f6
2021-03-04 00:04:36 -08:00
Wanchao Liang
79944f7ad9 [fx] simple doc fix
Reviewed By: houseroad

Differential Revision: D26739803

fbshipit-source-id: e680ce961a9ed1a5042d675aca9f5cf118c8ff85
2021-03-03 15:47:40 -08:00
Mike Ruberry
9c2673df46 Revert D26723384: [pytorch][PR] Implements torch.linalg.lstsq
Test Plan: revert-hammer

Differential Revision:
D26723384 (3ac9013235)

Original commit changeset: c9866a95f140

fbshipit-source-id: 3e5263d71facdc91ca09d7dcbbbe3ba818ee2821
2021-03-03 15:24:25 -08:00
Pritam Damania
59c0c19be2 Add RemoteModule to master RPC docs. (#53084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53084

Adding RemoteModule to master RPC docs since it is a prototype
feature.
ghstack-source-id: 122816689

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D26743372

fbshipit-source-id: 00ce9526291dfb68494e07be3e67d7d9c2686f1b
2021-03-03 13:52:11 -08:00
Nikita Vedeneev
3ac9013235 Implements torch.linalg.lstsq (#49093)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/44378 by providing a wider range of drivers similar to what SciPy is doing.

The supported CPU drivers are `gels, gelsy, gelsd, gelss`.
The CUDA interface has only `gels` implemented, and only for overdetermined systems.

The current state of this PR:
- [x] CPU interface
- [x] CUDA interface
- [x] CPU tests
- [x] CUDA tests
- [x] Memory-efficient batch-wise iteration with broadcasting which fixes https://github.com/pytorch/pytorch/issues/49252
- [x] docs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49093

Reviewed By: H-Huang

Differential Revision: D26723384

Pulled By: mruberry

fbshipit-source-id: c9866a95f14091955cf42de22f4ac9e2da009713
2021-03-02 19:00:07 -08:00
Joel Schlosser
e86476f736 Huber loss (#50553)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48595.

## Background

This PR implements HuberLoss, which differs from SmoothL1Loss by a factor of beta. The current implementation does not share logic between the two. Feedback is welcome for the optimal way to minimize code duplication while remaining performant.

I've done some early [benchmarking](https://pytorch.org/tutorials/recipes/recipes/benchmark.html#collecting-instruction-counts-with-callgrind) with Huber calling into the Smooth L1 kernel and scaling afterwards; for the simple test case I used, instruction counts are as follows:
```
Huber loss calls dedicated Huber kernel: 2,795,300
Huber loss calls Smooth L1 kernel and scales afterwards: 4,523,612
```
With these numbers, instruction counts are ~62% higher when using the pre-existing Smooth L1 kernel.
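
For reference, the beta/delta relationship between the two losses can be checked directly (functional names follow the current API; a sketch, not this PR's test code):

```python
import torch
import torch.nn.functional as F

x, y = torch.randn(100), torch.randn(100)
delta = 2.0
# huber_loss(delta) == delta * smooth_l1_loss(beta=delta)
huber = F.huber_loss(x, y, delta=delta)
smooth = F.smooth_l1_loss(x, y, beta=delta)
assert torch.allclose(huber, delta * smooth, atol=1e-6)
```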

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50553

Test Plan:
```
python test/test_nn.py TestNN.test_HuberLoss
python test/test_nn.py TestNN.test_HuberLoss_delta
python test/test_nn.py TestNN.test_huber_loss_invalid_delta
python test/test_nn.py TestNNDeviceTypeCPU.test_smooth_l1_loss_vs_huber_loss_cpu
python test/test_nn.py TestNNDeviceTypeCUDA.test_smooth_l1_loss_vs_huber_loss_cuda
python test/test_nn.py TestNNDeviceTypeCPU.test_invalid_reduction_strings_cpu
python test/test_nn.py TestNNDeviceTypeCUDA.test_invalid_reduction_strings_cuda
python test/test_nn.py TestNN.test_loss_equal_input_target_shape
python test/test_nn.py TestNN.test_pointwise_loss_broadcast
python test/test_overrides.py
python test/test_jit.py TestJitGeneratedFunctional.test_nn_huber_loss
python test/test_type_hints.py
python test/test_cpp_api_parity.py
build/bin/test_api
```

## Documentation
<img width="677" alt="Screen Shot 2021-01-14 at 4 25 08 PM" src="https://user-images.githubusercontent.com/75754324/104651224-5a445980-5685-11eb-884b-14ea517958c2.png">
<img width="677" alt="Screen Shot 2021-01-14 at 4 24 35 PM" src="https://user-images.githubusercontent.com/75754324/104651190-4e589780-5685-11eb-974d-8c63a89c050e.png">
<img width="661" alt="Screen Shot 2021-01-14 at 4 24 45 PM" src="https://user-images.githubusercontent.com/75754324/104651198-50225b00-5685-11eb-958e-136b36f6f8a8.png">
<img width="869" alt="Screen Shot 2021-01-14 at 4 25 27 PM" src="https://user-images.githubusercontent.com/75754324/104651208-53b5e200-5685-11eb-9fe4-5ff433aa13c5.png">
<img width="862" alt="Screen Shot 2021-01-14 at 4 25 48 PM" src="https://user-images.githubusercontent.com/75754324/104651209-53b5e200-5685-11eb-8051-b0cfddcb07d3.png">

Reviewed By: H-Huang

Differential Revision: D26734071

Pulled By: jbschlosser

fbshipit-source-id: c98c1b5f32a16f7a2a4e04bdce678080eceed5d5
2021-03-02 17:30:45 -08:00
Shen Li
29034b9487 [Reland] Update and expose ZeroRedundancyOptimizer docs (#53112)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53112

Test Plan: Imported from OSS

Reviewed By: blefaudeux

Differential Revision: D26752289

Pulled By: mrshenli

fbshipit-source-id: 897257417b530e6e18788cb40c44e5cb7ac688d5
2021-03-02 14:16:12 -08:00
Shen Li
931100f829 Revert D26696938: Update and expose ZeroRedundancyOptimizer docs
Test Plan: revert-hammer

Differential Revision:
D26696938 (a586c02962)

Original commit changeset: dafb00e5c9f0

fbshipit-source-id: b08604d2009f4df7b620699dd6659dfed2b02792
2021-03-02 07:14:23 -08:00