Commit Graph

1634 Commits

Author SHA1 Message Date
Alban Desmaison
734281c3d6 Cleanup all module references in doc (#73983)
Summary:
Working towards https://docs.google.com/document/d/10yx2-4gs0gTMOimVS403MnoAWkqitS8TUHX73PN8EjE/edit?pli=1#

This PR:
- Ensures that all the submodules are listed in an rst file (which ensures they are considered by the coverage tool)
- Removes some long-deprecated code that just errors out on import
- Removes the allow list altogether to ensure nothing gets added back there

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73983

Reviewed By: anjali411

Differential Revision: D34787908

Pulled By: albanD

fbshipit-source-id: 163ce61e133b12b2f2e1cbe374f979e3d6858db7
(cherry picked from commit c9edfead7a01dc45bfc24eaf7220d2a84ab1f62e)
2022-03-10 22:26:29 +00:00
Alban Desmaison
238f7d9cbf rename config module file to work with gh pages better
Fixes https://github.com/pytorch/pytorch/issues/62018

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74038
Approved by: https://github.com/mruberry, https://github.com/seemethere
2022-03-10 20:41:44 +00:00
Rohit Goswami
979a78f8b2 Sphinx panel
Fixes https://github.com/pytorch/pytorch/issues/73835.

The full context for this is detailed in the issue, but briefly:

- Adds `sphinx-panel`

Other PRs will demonstrate usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73836
Approved by: https://github.com/albanD
2022-03-07 14:50:09 +00:00
Pritam Damania
71aa3ab020 Add note in RPC docs about retries. (#73601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73601

Some users had questions about how the RPC framework deals with
failures and whether we retry. Adding a note about this to our docs to
elaborate on our current behavior and why we chose that approach.
ghstack-source-id: 150359866

Test Plan: view docs.

Reviewed By: mrshenli

Differential Revision: D34560199

fbshipit-source-id: ee33ceed7fa706270d4ca5c8fcff7535583490ff
(cherry picked from commit 954a906240cc40aacf08ca13f6554a35303a678a)
2022-03-03 00:29:31 +00:00
Ren Pang
e8b10b6e34 fix wrong indexing of class names in docs
Fixes #73631

Locally built and tested.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73632
Approved by: jbschlosser
2022-03-02 22:21:21 +00:00
Christian Puhrsch
484c0de670 Minimal NestedTensor (#72881)
Summary:
This PR adds a minimal version of a NestedTensor. It introduces the general harness that future development can be built around.
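A quick usage sketch, using the nested tensor constructor as it appears in later public releases (`torch.nested.nested_tensor`); the minimal harness introduced here may expose a narrower entry point:

```python
import torch

# Hypothetical usage sketch: pack two differently sized tensors into one
# nested tensor and recover them; API name per later public releases.
nt = torch.nested.nested_tensor([torch.randn(2, 3), torch.randn(4, 3)])
print(nt.is_nested)        # True
for t in nt.unbind():      # the constituent tensors keep their own shapes
    print(t.shape)
```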

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72881

Reviewed By: albanD

Differential Revision: D34259177

Pulled By: cpuhrsch

fbshipit-source-id: 0245c36f603424e20f3b09651043c207f526d760
(cherry picked from commit 10764e8d427f29b364567e4cbc86ed73c3933158)
2022-03-02 16:31:51 +00:00
Nikita Shulga
8ac7393565 Revert D33767740: [pytorch][PR] Sparse CSR CPU: cuSolverSP backend for linalg.solve
Test Plan: revert-hammer

Differential Revision:
D33767740 (199d9a992c)

Original commit changeset: a945f065210c

Original Phabricator Diff: D33767740 (199d9a992c)

fbshipit-source-id: b7934df18118f8d6d5f165deb5aae9887953ae43
(cherry picked from commit d3ddbb021b227e3638f6f7c22c6eadfa73695e31)
2022-03-01 18:33:23 +00:00
Kushashwa Ravi Shrimali
199d9a992c Sparse CSR CPU: cuSolverSP backend for linalg.solve (#71399)
Summary:
This PR introduces the `cuSolverSP` backend for `linalg.solve` with sparse CSR input matrices. The motivation comes from the issue: https://github.com/pytorch/pytorch/issues/69538.

`cuSolver` provides the [`cusolverSp<t>csrlsvluHost`](https://docs.nvidia.com/cuda/cusolver/index.html#cusolver-lt-t-gt-csrlsvlu) API; a few things to note:

1. As mentioned in the documentation, `only CPU (Host) path is provided.` From the profiling, there doesn't seem to be any GPU kernel launched for the computation; see the profiling below.
2. Since only the `host` path is provided, the CPU path uses `csrlsvluHost` (though it still requires PyTorch to be installed/built with CUDA support).
3. The documentation mentions that reordering helps optimization, but it isn't clear how it affects performance. Several reordering options are available, so we stick with `reorder = 0` as the default choice.

`cuSolver` also has the [`csrlsvqr`](https://docs.nvidia.com/cuda/cusolver/index.html#cusolver-lt-t-gt-csrlsvqr) function, which provides a `device` path for solving the linear system. This function is used for the CUDA path in this PR.

**Gist:**

For CPU Path: we call [`csrlsvluHost` function of cuSolver](https://docs.nvidia.com/cuda/cusolver/index.html#cusolver-lt-t-gt-csrlsvlu).
For CUDA Path: we call [`csrlsvqr` function of cuSolver](https://docs.nvidia.com/cuda/cusolver/index.html#cusolver-lt-t-gt-csrlsvqr).

**Profiling:** (on a sparse input tensor of size 1000 x 1000, with a vector of length 1000), for the `csrlsvlu` function (showing that no GPU kernel is launched for the computation)

```cpp
==3999651== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:  100.00%  2.1440us         1  2.1440us  2.1440us  2.1440us  [CUDA memcpy HtoD]
      API calls:   99.72%  1.07199s         9  119.11ms     500ns  1.07164s  cudaFree
                    0.11%  1.2182ms       398  3.0600us     140ns  137.94us  cuDeviceGetAttribute
                    0.06%  674.45us         4  168.61us  165.50us  173.64us  cuDeviceTotalMem
                    0.03%  357.07us         4  89.268us  2.7800us  201.89us  cudaMalloc
                    0.03%  309.29us         1  309.29us  309.29us  309.29us  cudaGetDeviceProperties
                    0.01%  160.47us       332     483ns     350ns  3.3300us  cudaFuncSetAttribute
                    0.01%  115.12us         4  28.780us  26.290us  33.410us  cuDeviceGetName
                    0.00%  28.591us         5  5.7180us     440ns  16.921us  cudaGetDevice
                    0.00%  22.061us         4  5.5150us     871ns  18.690us  cudaDeviceSynchronize
                    0.00%  20.370us        18  1.1310us     410ns  6.9900us  cudaEventDestroy
                    0.00%  16.390us         1  16.390us  16.390us  16.390us  cudaMemcpy
                    0.00%  11.540us         2  5.7700us  1.4900us  10.050us  cuDeviceGetPCIBusId
                    0.00%  10.510us        18     583ns     430ns  1.6200us  cudaEventCreateWithFlags
                    0.00%  7.9100us        21     376ns     290ns     700ns  cudaDeviceGetAttribute
                    0.00%  1.4300us         6     238ns     150ns     590ns  cuDeviceGet
                    0.00%  1.2200us         4     305ns     190ns     500ns  cuDeviceGetCount
                    0.00%     900ns         1     900ns     900ns     900ns  cuInit
                    0.00%     860ns         4     215ns     180ns     260ns  cuDeviceGetUuid
                    0.00%     240ns         1     240ns     240ns     240ns  cuDriverGetVersion
                    0.00%     230ns         1     230ns     230ns     230ns  cudaGetDeviceCount
```

Script:

```python
import torch

def solve(x, other, out):
    torch.linalg.solve(x, other, out=out)

if __name__ == "__main__":
    dense_inp = torch.randn((1000, 1000), dtype=torch.float64)
    # Set 50% of the values to 0 randomly
    dense_inp = torch.nn.functional.dropout(dense_inp, p=0.5)
    sparse_inp = dense_inp.to_sparse_csr()

    other = torch.randint(100, (1000,), dtype=torch.float64)
    out = torch.randint(1, (1000,), dtype=torch.float64)

    solve(sparse_inp, other, out)
```

The following error is raised when the function is used on a CPU device with PyTorch built/installed without CUDA support:

```python
/home/krshrimali/pytorch/torch/autograd/profiler.py:151: UserWarning: CUDA is not available, disabling CUDA profiling
  warn("CUDA is not available, disabling CUDA profiling")
Traceback (most recent call last):
  File "/home/krshrimali/pytorch/test_sp.py", line 17, in <module>
    solve(x, other, out)
  File "/home/krshrimali/pytorch/test_sp.py", line 5, in solve
    torch.linalg.solve(x, other, out=out)
RuntimeError: PyTorch was not built with CUDA support. Please use PyTorch built CUDA support
```

**Performance Comparison** (vs SciPy's [`scipy.sparse.linalg.spsolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.spsolve.html)):

- `scipy.sparse.linalg.spsolve`: 0.595 seconds
- `torch.linalg.solve` (CPU): 4.565 seconds
- `torch.linalg.solve` (CUDA): 1.838 seconds

The inputs are of dimensions: (17281, 17281) and (17281, 1), and were taken from https://math.nist.gov/MatrixMarket/extreme.html.

Thanks to IvanYashchuk for helping me with the PR, and guiding me through it.

cc: IvanYashchuk pearu nikitaved cpuhrsch

cc nikitaved pearu cpuhrsch

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71399

Reviewed By: VitalyFedyunin

Differential Revision: D33767740

Pulled By: cpuhrsch

fbshipit-source-id: a945f065210cd719096eb8d7cdbf8e8937c2fce9
(cherry picked from commit f4f35c17da414e1ca6c6d91402933521857aa1ea)
2022-03-01 05:32:35 +00:00
Vasiliy Kuznetsov
01bd6f4357 pytorch: fix typo in quantization docs (#73511)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73511

Fixes typo in describing the `torch.qint32` data type.

Test Plan: CI

Reviewed By: andrewor14

Differential Revision: D34522741

Pulled By: vkuzo

fbshipit-source-id: f05f8440d9708281213a4b3736e8f59199dd7b1a
(cherry picked from commit ca9e598d60cac016e58fda9cd0f329ca412ec36b)
2022-02-28 23:11:52 +00:00
Peter Bell
f437ca6e8e Remove legacy tensor constructors for complex dtypes
PR #72405 added four new types to the public python API:
`torch.ComplexFloatTensor`, `torch.ComplexDoubleTensor`,
`torch.cuda.ComplexFloatTensor` and `torch.cuda.ComplexDoubleTensor`.

I believe this was unintentional and a clarifying comment as to the
purpose of `all_declared_types` is needed to avoid this in future.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73370
2022-02-28 15:13:44 +00:00
Philip Meier
c6f1bbc0ac promote torch.testing to stable (#73348)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73348

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D34457727

Pulled By: mruberry

fbshipit-source-id: 2cc812b643e0d1e753bead2751ee79b3f03fde20
(cherry picked from commit bcdaca1a019a679b8b274e2fb5f19bfd08874ce9)
2022-02-25 06:30:31 +00:00
Jacob Hepkema
91261feb7b Add SoftplusTransform (#52300)
Summary:
This pull request introduces `SoftplusTransform` to `torch.distributions.transforms`. `SoftplusTransform` transforms via the mapping `Softplus(x) = log(1 + exp(x))`. Note that the transform is different to [`torch.nn.Softplus`](https://pytorch.org/docs/stable/generated/torch.nn.Softplus.html#torch.nn.Softplus), as that has additional `beta` and `threshold` parameters. Inverse and `log_abs_det_jacobian` for a more complex `SoftplusTransform` can be added in the future.
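A minimal sketch of how such a transform composes with `TransformedDistribution` (assuming `SoftplusTransform` is exposed from `torch.distributions.transforms`):

```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import SoftplusTransform

# Push a Normal through Softplus(x) = log(1 + exp(x)) to get a distribution
# supported on the positive reals.
base = Normal(torch.zeros(3), torch.ones(3))
positive = TransformedDistribution(base, [SoftplusTransform()])

sample = positive.sample()
print(sample)                       # all entries > 0
print(positive.log_prob(sample))    # uses the transform's log_abs_det_jacobian
```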

vitkl fritzo

Addresses the issue discussed here: [pyro issue 855](https://github.com/pyro-ppl/numpyro/issues/855)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52300

Reviewed By: albanD, ejguan

Differential Revision: D34082655

Pulled By: neerajprad

fbshipit-source-id: 6114e74ee5d73c1527191bed612a142d691e2094
(cherry picked from commit a181a3a9e53a34214a503d38760ad7778d08a680)
2022-02-25 02:30:03 +00:00
Can Balioglu
0e7a7a5fe7 Add documentation for c10d log levels (#73361)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73361

This PR adds the documentation for the newly introduced `TORCH_CPP_LOG_LEVEL` and how it can be used along with `TORCH_DISTRIBUTED_DEBUG` to adjust the log level of c10d.
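A sketch of how the two environment variables might be combined (variable names as described above; the values shown are illustrative):

```python
import os

# Set these before torch.distributed is initialized: TORCH_CPP_LOG_LEVEL
# controls the C++ logging verbosity, TORCH_DISTRIBUTED_DEBUG the extra
# c10d debug checks and logging.
os.environ["TORCH_CPP_LOG_LEVEL"] = "INFO"
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"

import torch.distributed as dist  # noqa: E402  (imported after the env vars are set)
```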
ghstack-source-id: 149874995

Test Plan: Locally rendered and checked the documentation.

Reviewed By: rohan-varma

Differential Revision: D34452352

fbshipit-source-id: ecb54590f3030ddef9921a7152ca9f7fc9438345
(cherry picked from commit f4c7c6f3b27dbd3006686cf26a6e9e53cd2c8f09)
2022-02-24 20:38:15 +00:00
Edgar Andrés Margffoy Tuay
86deecd7be Check clang++/g++ version when compiling CUDA extensions (#63230)
Summary:
See https://github.com/pytorch/pytorch/issues/55267

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63230

Reviewed By: soulitzer

Differential Revision: D34159119

Pulled By: malfet

fbshipit-source-id: 6eef7582388bf6a42dcc1d82b6e4b1f40f418dd7
(cherry picked from commit 2056d0a0be7951602de22f8d3b4efc28dd71b6c2)
2022-02-24 08:32:32 +00:00
Can Balioglu
e1db2f13ce Refactor TORCH_DISTRIBUTED_DEBUG implementation (#73166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73166

This PR refactors, cleans up, and optimizes the implementation of `TORCH_DISTRIBUTED_DEBUG`. It also introduces three new user APIs: `get_debug_level()`, `set_debug_level()`, and `set_debug_level_from_env()` to retrieve and modify the debug level after a process has started.
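A sketch of the three new APIs; the exact module path (assumed here to be `torch.distributed`) and the `DebugLevel` enum spelling are assumptions based on the description above:

```python
import torch.distributed as dist

# Query the current debug level, raise it at runtime, then re-read it from
# the TORCH_DISTRIBUTED_DEBUG environment variable. Names per this PR's
# description; module path and enum are assumptions.
print(dist.get_debug_level())
dist.set_debug_level(dist.DebugLevel.DETAIL)
dist.set_debug_level_from_env()
```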
ghstack-source-id: 149778566

Test Plan: Run the existing unit tests.

Reviewed By: rohan-varma

Differential Revision: D34371226

fbshipit-source-id: e18443b411adcbaf39b2ec999178c198052fcd5b
(cherry picked from commit 26d6bb1584b83a0490d8b766482656a5887fa21d)
2022-02-24 02:33:05 +00:00
Nikita Karetnikov
75db05c3fd Check if the iterator is valid before dereferencing it (#72405)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72405

Fixes #71674.

This shouldn't segfault now:

```
import torch
d = torch.complex64
torch.set_default_dtype(d)
```

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D34423660

Pulled By: anjali411

fbshipit-source-id: cac92a6f56846f2c0727a120b5f568aa75baa21e
(cherry picked from commit eaab813a0fddced24303b3bd50e4fcdba1516e46)
2022-02-23 18:33:46 +00:00
Nikita Shulga
cfb6c942fe scatter_reduce documentation (#73125)
Summary:
Reland of https://github.com/pytorch/pytorch/issues/68580 (which was milestoned for 1.11) plus a partial revert of https://github.com/pytorch/pytorch/pull/72543

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73125

Reviewed By: bdhirsh

Differential Revision: D34355217

Pulled By: malfet

fbshipit-source-id: 325ecdeaf53183d653b44ee5e6e8839ceefd9200
(cherry picked from commit 71db31748a)
2022-02-22 19:33:46 +00:00
Gary Miguel
dbac0f5cdc Update persons of interest for ONNX (#72072)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72072

Reviewed By: H-Huang

Differential Revision: D34230534

Pulled By: malfet

fbshipit-source-id: ed5abdfacf0d9628c6cc99957fa578d71a79d025
(cherry picked from commit 4669c346c4)
2022-02-16 23:01:13 +00:00
Elias Ellison
f8a2efc190 Make fusion strategy api public (#72639)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72639

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D34159123

Pulled By: eellison

fbshipit-source-id: 27e4d9694a83e8d6829009882715be4308c96a9f
(cherry picked from commit 1cadcd2f75)
2022-02-16 03:45:15 +00:00
Kurt Mohler
8e7fe87630 Rename Typed/UntypedStorage to _Typed/_UntypedStorage (#72540)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72540

Reviewed By: jbschlosser

Differential Revision: D34216823

Pulled By: bdhirsh

fbshipit-source-id: 1bc9930ab582771ebf02308e035576cd1a0dbe47
(cherry picked from commit 329238f612)
2022-02-15 23:53:01 +00:00
Nikita Shulga
cb00d9601c Revert D33800694: [pytorch][PR] scatter_reduce documentation
Test Plan: revert-hammer

Differential Revision:
D33800694 (12a1df27c7)

Original commit changeset: 2e09492a29ce

Original Phabricator Diff: D33800694 (12a1df27c7)

fbshipit-source-id: 2a4775c0042551607fe3ab77f5bfe9f2e4b6b78e
(cherry picked from commit 4bd6c0d2bb)
2022-02-15 20:10:26 +00:00
rusty1s
12a1df27c7 scatter_reduce documentation (#68580)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/63780 (part 2)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68580

Reviewed By: atalman

Differential Revision: D33800694

Pulled By: malfet

fbshipit-source-id: 2e09492a29cef115a7cca7c8209d1dcb6ae24eb9
(cherry picked from commit 696ff75940)
2022-02-15 19:43:54 +00:00
Huamin Li
32dd4a8639 move fx_acc out of pytorch core (#72803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72803

as title

Reviewed By: jfix71

Differential Revision: D34101788

fbshipit-source-id: a9fd84671929af21405c049603e9895ec68de3d8
(cherry picked from commit e98fd1c32d)
2022-02-15 16:13:43 +00:00
mattip
fb4504da2f DOC: release documentation version should be major.minor (#72706)
Summary:
Fixes pytorch/pytorch.github.io#929

The pytorch doc team would like to move to only major.minor documentation at https://pytorch.org/docs/versions.html, not major.minor.patch. This has been done in the CI scripts, but the generated documentation still has the patch version. Remove it when building RELEASE documentation. This allows simplifying the logic to `'.'.join(torch_version.split('.')[:2])`, since we no longer care about trimming off the hash: it gets removed automatically.
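A tiny illustration of the trimming described above (the version strings are made up):

```python
# "1.11.0a0+git1234567" and "1.11.0" both trim to "1.11"; the hash falls away
# because it lives in the third dot-separated component.
for torch_version in ("1.11.0a0+git1234567", "1.11.0"):
    print(".".join(torch_version.split(".")[:2]))
```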

holly1238, brianjo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72706

Reviewed By: samdow

Differential Revision: D34215815

Pulled By: albanD

fbshipit-source-id: 8437036cc6636674d9ab8b1666f37b561d0527e1
(cherry picked from commit d8caf988f9)
2022-02-14 23:37:43 +00:00
Rohit Goswami
801abc0cdd MAINT, DOC: Trivial spellings and warnings (#72745)
Summary:
Fixes N/A.
Just minor annoyances.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72745

Reviewed By: samdow

Differential Revision: D34216016

Pulled By: albanD

fbshipit-source-id: b65600b50e41a1dd7bf7d076b0dd3e2d1c99caf9
(cherry picked from commit b959392a5f)
2022-02-14 21:55:19 +00:00
Kurt Mohler
47c6993355 Update from_dlpack tests and documentation (#70543)
Summary:
Part of https://github.com/pytorch/pytorch/issues/58742

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70543

Reviewed By: soulitzer

Differential Revision: D34172475

Pulled By: mruberry

fbshipit-source-id: d498764b8651a8b7a19181b3421aeebf28a5db2b
(cherry picked from commit 05332f164c)
2022-02-14 03:35:17 +00:00
Felix Divo
340fae4363 [Doc] Better formatting in autograd.rst (#72586)
Summary:
See title.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72586

Reviewed By: soulitzer

Differential Revision: D34177704

Pulled By: albanD

fbshipit-source-id: 1adf6ebed4f64ec4d8fff160df300c8e6ee528ea
(cherry picked from commit bbb586d67d)
2022-02-11 22:46:10 +00:00
BowenBao
9257de7efa [ONNX] Minor doc update (#69501) (#69550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69550

Fix the wiki URL.

Also minor reorganization in onnx.rst.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32994269

Pulled By: malfet

fbshipit-source-id: 112acfe8b7c778d7e3c2cef684023fdaf2c6ec9c
(cherry picked from commit f0787fabde)
2022-02-11 22:05:15 +00:00
BowenBao
ce5b155ccb [ONNX] Link to the wiki (#68505) (#72663)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72663

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D34150535

Pulled By: malfet

fbshipit-source-id: 230b786f6235549fff764083eac2c3744c6bff88

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
(cherry picked from commit c848c582d1)
2022-02-11 22:05:15 +00:00
Felix Divo
25fba4a019 [DOC] Add link to "double backward" from "extending pytorch" page (#72584)
Summary:
It is probably the most user friendly to link to that (lesser known?) feature.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72584

Reviewed By: soulitzer

Differential Revision: D34173999

Pulled By: albanD

fbshipit-source-id: 99fff7a55412faf54888f8317ab2388f4d7d30e4
(cherry picked from commit 2191ee7657)
2022-02-11 20:34:13 +00:00
BowenBao
04c5d978b9 [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic functions to access extra context if needed, through `SymbolicFunctionState`.
  * In particular, the `prim::PythonOp` special case can access the node without needing to pass the node through inputs. Updates will be made downstream, and in a follow-up PR we will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc. are now moved out of `_run_symbolic_function` in utils.py and into symbolic_opset9.py.

Motivation for this change:
- Better maintainability and reduced complexity. It is easier to add symbolics for operators, both simple and complex ones (that need additional context), without the former needing to know about the existence of the latter.
- The design idea was long outdated: prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`; as a result, that function became too clumsy. There were also prim op symbolics added in symbolic_opset#.py with the signature `prim_[opname]`, creating separation and confusion.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
2022-02-11 18:35:35 +00:00
Mike Ruberry
2fa34fb7b9 Revert D34154832: [pytorch][PR] Add multi_head_attention_forward to functional rst docs
Test Plan: revert-hammer

Differential Revision:
D34154832 (bafaf0d610)

Original commit changeset: 7279d05f31d4

Original Phabricator Diff: D34154832 (bafaf0d610)

fbshipit-source-id: fcbc896b25f3b51a7ce0c5dc1dca652f57f7218c
(cherry picked from commit afa53acdfd)
2022-02-11 05:08:46 +00:00
ProGamerGov
bafaf0d610 Add multi_head_attention_forward to functional rst docs (#72675)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/72597

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72675

Reviewed By: malfet

Differential Revision: D34154832

Pulled By: jbschlosser

fbshipit-source-id: 7279d05f31d41259e57ba28fe6fdb7079d603660
(cherry picked from commit 68c32cdbd7)
2022-02-11 01:52:58 +00:00
Till Hoffmann
b014d4ddb9 Add transformation using cdf of distribution. (#72495)
Summary:
This PR adds a transform that uses the cumulative distribution function of a given probability distribution.

For example, the following code constructs a simple Gaussian copula.

```python
# Imports assumed for a self-contained example; CumulativeDistributionTransform
# is taken to live in torch.distributions.transforms.
import torch
from torch.distributions import LKJCholesky, MultivariateNormal, Normal, TransformedDistribution, Weibull
from torch.distributions.transforms import CumulativeDistributionTransform

# Construct a Gaussian copula from a multivariate normal.
base_dist = MultivariateNormal(
    loc=torch.zeros(2),
    scale_tril=LKJCholesky(2).sample(),
)
transform = CumulativeDistributionTransform(Normal(0, 1))
copula = TransformedDistribution(base_dist, [transform])
```

The following snippet creates a "wrapped" Gaussian copula for correlated positive variables with Weibull marginals.

```python
transforms = [
    CumulativeDistributionTransform(Normal(0, 1)),
    CumulativeDistributionTransform(Weibull(4, 2)).inv,
]
wrapped_copula = TransformedDistribution(base_dist, transforms)
```

cc fritzo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72495

Reviewed By: ejguan

Differential Revision: D34085919

Pulled By: albanD

fbshipit-source-id: 7917391519a96b0d9b54c52db65d1932f961d070
(cherry picked from commit 572196146e)
2022-02-09 14:46:47 +00:00
Yinghai Lu
3670466201 Move fx2trt out of PyTorch core (#72499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72499

Pull Request resolved: https://github.com/pytorch/benchmark/pull/740

Move fx2trt out of tree to reduce the bloat of PyTorch core.

This is the first and biggest step. Next, we will move acc_tracer out of the tree and rearrange some FX passes.

Reviewed By: suo

Differential Revision: D34065866

fbshipit-source-id: c72b7ad752d0706abd9a63caeef48430e85ec56d
(cherry picked from commit 91647adbca)
2022-02-09 04:04:49 +00:00
Noufel
8d525d4760 Correcting a minor typo: "Users should pay" instead of "Users should be pay" (#72500)
Summary:
Correcting a minor typo: "Users should pay" instead of "Users should be pay"

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72500

Reviewed By: albanD

Differential Revision: D34077972

Pulled By: ejguan

fbshipit-source-id: 5d7a138d1f17eca838d2c1da76d7759d96e4375f
(cherry picked from commit d046baa89c)
2022-02-08 23:08:25 +00:00
Kushashwa Ravi Shrimali
bc03c1d000 Structured Kernels for index_copy, add out variant (#67329)
Summary:
This PR ports the `index_copy` implementation to structured kernels and also adds an `out` variant.
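A usage sketch of the functional form with the new `out=` argument (the values are illustrative):

```python
import torch

x = torch.zeros(5, 3)
src = torch.arange(9, dtype=torch.float).reshape(3, 3)
index = torch.tensor([0, 4, 2])

out = torch.empty_like(x)
# Rows 0, 4 and 2 of `out` are taken from `src`; the rest come from `x`.
torch.index_copy(x, 0, index, src, out=out)
```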

~Note to the reviewers: This is in draft mode, waiting for the tests from the CI, and I'll give a final look before requesting the review.~

Issue tracker: https://github.com/pytorch/pytorch/issues/55070

cc: bdhirsh ysiraichi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67329

Reviewed By: ejguan

Differential Revision: D34077219

Pulled By: bdhirsh

fbshipit-source-id: 6accda33957f654b753261c5c3d765a27a64d2c0
(cherry picked from commit f3ac83217a)
2022-02-08 22:52:27 +00:00
Ivan Yashchuk
8cdcc1181c Add missing entry for sampled_addmm in sparse.rst (#72312)
Summary:
Let's make the documentation for `torch.sparse.sampled_addmm` searchable in the PyTorch documentation.
This PR shall be cherry-picked for the next 1.11 release.
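A small sketch of the op this entry documents; in this release support may be limited to CUDA inputs, so the device choice below is an assumption:

```python
import torch

# sampled_addmm computes beta * input + alpha * (mat1 @ mat2), evaluated only
# at the sparsity pattern of the sparse CSR `input`.
input_csr = torch.eye(3, device="cuda").to_sparse_csr()
mat1 = torch.randn(3, 5, device="cuda")
mat2 = torch.randn(5, 3, device="cuda")
out = torch.sparse.sampled_addmm(input_csr, mat1, mat2)
print(out)
```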

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72312

Reviewed By: davidberard98

Differential Revision: D34045230

Pulled By: cpuhrsch

fbshipit-source-id: c1b1dc907443284857f48c8ce1efab22c6701bbe
(cherry picked from commit 225929ecf2)
2022-02-08 00:07:20 +00:00
Yanli Zhao
2336571cb7 make fsdp folder to be public (#72084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72084

make fsdp folder to be public
ghstack-source-id: 148173447

Test Plan: unit tests

Reviewed By: mrshenli

Differential Revision: D33903417

fbshipit-source-id: 7852a2adc4af09af48a5ffa52ebf210489f834d5
(cherry picked from commit bd06513cfe)
2022-02-02 15:50:14 +00:00
Richard Zou
f99147dec0 Targeted documentation updates in autograd.functional (#72111)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72111

For vectorize flag:
- Advertises the use of functorch

For autograd.functional.jvp:
- Advertises the use of functorch and the low-level jvp API, both of which will be more performant than the double-backprop trick (see the sketch below).
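For context, a small sketch of the API the note targets; the functorch call mentioned in the comment is an assumption about the advertised alternative:

```python
import torch
from torch.autograd.functional import jvp

def f(x):
    return x.sin()

x = torch.randn(3)
v = torch.ones(3)

# autograd.functional.jvp uses the double-backprop trick under the hood.
out, jvp_val = jvp(f, (x,), (v,))

# The functorch equivalent (advertised by the updated docs) would be
# functorch.jvp(f, (x,), (v,)), which uses true forward-mode AD.
```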

Test Plan: - view docs

Reviewed By: albanD

Differential Revision: D33918065

Pulled By: zou3519

fbshipit-source-id: 6e19699aa94f0e023ccda0dc40551ad6d932b7c7
(cherry picked from commit b4662ceb99)
2022-02-02 03:19:31 +00:00
Tristan Rice
6208c2800e torch/monitor: merge Interval and FixedCount stats (#72009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72009

This simplifies the Stats interface by merging IntervalStat and FixedCountStat into a single Stat with a specific window-size duration and an optional maximum number of samples per window. This preserves the original intention of having comparably sized windows (for statistical purposes) while also providing consistent output bandwidth.

Test Plan:
```
buck test //caffe2/test:monitor //caffe2/test/cpp/monitor:monitor
```

Reviewed By: kiukchung

Differential Revision: D33822956

fbshipit-source-id: a74782492421be613a1a8b14341b6fb2e8eeb8b4
(cherry picked from commit 293b94e0b4)
2022-01-30 23:21:59 +00:00
soulitzer
0c2b1b8bcf Update docs for forward AD and make them public (#71643)
Summary:
Follow up: we would need to update the links to the tutorial later
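A minimal forward-mode AD sketch along the lines of what the newly public docs cover:

```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.randn(3)
tangent = torch.ones(3)

with fwAD.dual_level():
    dual = fwAD.make_dual(primal, tangent)
    out = dual.sin()
    # Directional derivative of sin at `primal` along `tangent`, i.e. cos(primal).
    jvp = fwAD.unpack_dual(out).tangent
```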

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71643

Reviewed By: albanD

Differential Revision: D33713982

Pulled By: soulitzer

fbshipit-source-id: a314ffa4e7d5c5ebdef9c50033f338b06578d71c
(cherry picked from commit ba30daaaa5)
2022-01-28 03:33:00 +00:00
Wanchao Liang
9b53d3194c Implement gather primitive for ProcessGroupNCCL (#66745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66745

This PR implements NCCL gather and adds gather to ProcessGroupNCCL using the NCCL send/recv API.

NCCL doesn't directly provide a gather primitive, so it needs to be implemented on top of NCCL's send/recv API (a user-facing usage sketch follows the list below).
1. In ProcessGroupNCCL.cpp, the outputTensors are first flattened, then inputTensors and outputFlattened are passed by the collective class to the gather() function in nccl.cpp.
1. In nccl.cpp, gather is implemented using ncclSend/ncclRecv: all the ranks send inputTensor to the root rank, and the root rank uses a for loop to receive these inputTensors.
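A sketch of the user-facing collective this enables on the NCCL backend (assumes a process group with `world_size` ranks is already initialized and each rank owns one GPU):

```python
import torch
import torch.distributed as dist

def run(rank: int, world_size: int) -> None:
    tensor = torch.full((2,), float(rank), device=f"cuda:{rank}")
    if rank == 0:
        # Only the destination rank provides the output list.
        gather_list = [torch.empty(2, device="cuda:0") for _ in range(world_size)]
        dist.gather(tensor, gather_list, dst=0)
    else:
        dist.gather(tensor, dst=0)
```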
ghstack-source-id: 147754838

Test Plan:
test_gather_ops
test_gather_checks
test_gather_stress

Reviewed By: pritamdamania87

Differential Revision: D29616361

fbshipit-source-id: b500d9b8e67113194c5cc6575fb0e5d806dc7782
(cherry picked from commit d560ee732e)
2022-01-27 19:37:55 +00:00
Tristan Rice
7aa4a1f63e torch/monitor: TensorboardEventHandler (#71658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71658

This adds the beginnings of a TensorboardEventHandler which will log stats to Tensorboard.
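A sketch of the intended wiring (handler and registration names as described in this PR; treat the exact import paths as assumptions):

```python
from torch.monitor import TensorboardEventHandler, register_event_handler
from torch.utils.tensorboard import SummaryWriter

# Forward torch.monitor stat events to a TensorBoard log directory.
writer = SummaryWriter("log_dir")
register_event_handler(TensorboardEventHandler(writer))
```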

Test Plan: buck test //caffe2/test:monitor

Reviewed By: edward-io

Differential Revision: D33719954

fbshipit-source-id: e9847c1319255ce0d9cf2d85d8b54b7a3c681bd2
(cherry picked from commit 5c8520a6ba)
2022-01-27 08:33:55 +00:00
lezcano
108b37db84 [Array API] Add linalg.diagonal (#70599)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70599

This PR adds `linalg.diagonal` following the Array API:
https://data-apis.org/array-api/latest/extensions/linear_algebra_functions.html#linalg-diagonal-x-axis1-0-axis2-1-offset-0
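A quick sketch of the Array-API-conforming defaults (diagonals are taken over the last two dimensions, unlike `torch.diagonal`):

```python
import torch

A = torch.randn(2, 4, 4)
d = torch.linalg.diagonal(A)             # shape (2, 4): batched main diagonals
d1 = torch.linalg.diagonal(A, offset=1)  # shape (2, 3): first superdiagonal
```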

Fixes https://github.com/pytorch/pytorch/issues/62813

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33760506

Pulled By: mruberry

fbshipit-source-id: e32c3490321d8c3f31b3bb538bc1f72b39bd2854
(cherry picked from commit 44f41f8e39)
2022-01-26 08:08:32 +00:00
Shen Li
7bc220e060 Update distributed.rst for ProcessGroup Extensions (#71482)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71482

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D33745986

Pulled By: mrshenli

fbshipit-source-id: fe2d0491901bf00be09deb5c556bc1e1d359b725
(cherry picked from commit be5104bfd7)
2022-01-25 00:30:08 +00:00
Priyam Parashar
f75e92a936 Fix for retracing documentation which would break for n-ary operators (#71599)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68195

Updated the fx.rst documentation and followed the instructions in [contributing.md](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#writing-documentation) to generate the HTML. I faced errors that looked very similar to https://github.com/pytorch/pytorch/issues/32703, but gathered from the thread that a non-zero exit is OK for documentation builds and that these are warnings that do not affect the HTML generation (at least for the root rst folder). The HTML output is plain, without any styling; please confirm this is intentional.

Screenshot of generated html:
<img width="1438" alt="Screen Shot 2022-01-20 at 4 31 24 PM" src="https://user-images.githubusercontent.com/9580531/150439448-1a626d74-68ba-4f94-91f2-a6942959b049.png">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71599

Reviewed By: jamesr66a

Differential Revision: D33719546

Pulled By: zephirefaith

fbshipit-source-id: cc9b8ddb13cfdb9f14ebff54cf0d894a8b842aa1
(cherry picked from commit 170db5d7be)
2022-01-24 20:07:08 +00:00
Tristan Rice
26d54b4076 monitor: add docstrings to pybind interface (#71481)
Summary:
This adds argument names and docstrings so the docs are a lot more understandable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71481

Test Plan:
docs/tests CI should suffice

![Screenshot 2022-01-19 at 16-35-10 torch monitor — PyTorch master documentation](https://user-images.githubusercontent.com/909104/150240882-e69cfa17-e2be-4569-8ced-71979a89b369.png)

Reviewed By: edward-io

Differential Revision: D33661255

Pulled By: d4l3k

fbshipit-source-id: 686835dfe331b92a51f4409ec37f8ee6211e49d3
(cherry picked from commit 0a6accda1b)
2022-01-21 23:04:33 +00:00
Michael Suo
9f0227a0eb Revert "[ONNX] Minor doc update (#69501)" (#71615)
This reverts commit 114c13d020.
2022-01-20 17:35:04 -08:00
BowenBao
114c13d020 [ONNX] Minor doc update (#69501)
Fix the wiki URL.

Also minor reorganization in onnx.rst.

[ONNX] restore documentation of public functions (#69623)

The build-docs check requires all public functions to be documented.
These should really not be public, but we'll fix that later.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71609
2022-01-21 00:13:40 +00:00