Commit Graph

1953 Commits

Author SHA1 Message Date
Joel Schlosser
8b55b86dbd Move sym_int and sym_float alongside SymInt / SymFloat in base torch package (#91317)
This PR moves the definitions for:
* `sym_int`
* `sym_ceil` (used only for `sym_int`)
* `sym_floor` (used only for `sym_int`)
* `sym_float`

from `torch/fx/experimental/symbolic_shapes.py` to `torch/__init__.py`, where `SymInt` and `SymFloat` are already defined.

This removes the need for several in-line imports, and enables proper JIT script gating for #91318. I'm very open to doing this in a better way!
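
For a concrete sense of the entry points after the move, here is a minimal eager-mode sketch (the names `torch.sym_int` / `torch.sym_float` are the ones this PR relocates; on plain Python numbers they fall through to `int()`/`float()`):

```python
import torch

# With no symbolic types involved these behave like int()/float();
# sym_int truncates toward zero for floats (floor for >= 0, ceil for < 0).
print(torch.sym_int(2.7))    # 2
print(torch.sym_int(-2.7))   # -2
print(torch.sym_float(3))    # 3.0
```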

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91317
Approved by: https://github.com/ezyang, https://github.com/anijain2305
2022-12-28 16:08:16 +00:00
Salahuddin
f1d8fef4d4 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py`
- `pytorch/docs/XXX.rst` files, where `XXX` stands for each of the `.rst` files I edited

Although I have added `softmax` to the `docs` directory, I was not sure exactly which files/folders required the edits, so there could be issues.
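
As a quick illustration of the API the new docs cover (both forms already exist; this example is editorial, not from the PR):

```python
import torch

x = torch.randn(2, 3)
# The functional form and the Tensor method compute the same thing.
y1 = torch.softmax(x, dim=1)
y2 = x.softmax(dim=1)
assert torch.allclose(y1, y2)
```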

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-28 15:06:24 +00:00
PyTorch MergeBot
af7132302a Revert "Softmax added to tensor, torch and docs (#91292)"
This reverts commit f8b28799f8.

Reverted https://github.com/pytorch/pytorch/pull/91292 on behalf of https://github.com/weiwangmeta due to breaking internal distributed testing builds
2022-12-28 14:30:46 +00:00
Salahuddin
f8b28799f8 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py`
- `pytorch/docs/XXX.rst` files, where `XXX` stands for each of the `.rst` files I edited

Although I have added `softmax` to the `docs` directory, I was not sure exactly which files/folders required the edits, so there could be issues.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-25 12:59:45 +00:00
Ikko Ashimine
a188e6ddc0 Fix typo in troubleshooting.rst (#91301)
enviornment -> environment

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91301
Approved by: https://github.com/msaroufim
2022-12-23 21:39:38 +00:00
Takeshi Watanabe
55749b9c41 [dynamo] Write full code of how to enable output_code (#91230)
Ref https://github.com/pytorch/pytorch/pull/91223
Since it was trickier than I expected.
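
A hedged sketch of the flags this doc change covers, using config names from the era of this commit (assumed; they may have moved in later releases):

```python
import logging

import torch._dynamo

# Assumed-era knobs: raise the log level and ask dynamo to print the
# generated output code for each compiled graph.
torch._dynamo.config.log_level = logging.INFO
torch._dynamo.config.output_code = True
```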

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91230
Approved by: https://github.com/soumith
2022-12-22 14:09:06 +00:00
bowen0701
e803d336eb Fix missing indentation in serialization.rst (#91253)
In serialization.rst, fix the missing indentation in class ControlFlowModule's forward().
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91253
Approved by: https://github.com/kit1980
2022-12-21 20:14:44 +00:00
Eddie Yan
8b617f813d [cuBLAS] Add an option to disable reduced precision reductions for BF16 GEMM (#89172)
Essentially the same change as #67946, except that the default is to disallow reduced precision reductions in `BFloat16` GEMMs (for now). If performance is severely regressed, we can change the default, but this option appears to be necessary to pass some `addmm` `BFloat16` tests on H100.
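
A minimal usage sketch, assuming a flag name parallel to the FP16 toggle introduced in #67946:

```python
import torch

# Disallow reduced-precision reductions in BF16 GEMMs (the new default),
# or set to True to re-enable them if the performance regression matters.
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False
```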

CC @ptrblck @ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89172
Approved by: https://github.com/ngimel
2022-12-21 18:58:28 +00:00
Takeshi Watanabe
0476201482 Update debug option for torch._dynamo (#91223)
The documented debug option seems outdated (see https://www.youtube.com/watch?v=egZB5Uxki0I).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91223
Approved by: https://github.com/ngimel
2022-12-21 05:06:42 +00:00
richardachen
f460893cec Update optim.rst (#91195)
Fixes #91080

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91195
Approved by: https://github.com/kit1980
2022-12-20 23:22:25 +00:00
Richard Zou
41846e205e [torch.func] Setup torch.func, populate it with all transforms (#91016)
This PR sets up torch.func and populates it with the following APIs:
- grad
- grad_and_value
- vjp
- jvp
- jacrev
- jacfwd
- hessian
- functionalize
- vmap

It also renames all instances of `functorch` in the APIs for those docs
to `torch.func`.
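
A quick taste of the new namespace (these transforms are the same objects previously exposed via `functorch`; the example itself is editorial):

```python
import torch
from torch.func import grad, vmap

x = torch.randn(3)

# grad turns a scalar-valued function into one that computes its gradient.
g = grad(lambda t: t.sin().sum())(x)
assert torch.allclose(g, x.cos())

# vmap maps a function over a leading batch dimension.
dots = vmap(torch.dot)(torch.randn(4, 3), torch.randn(4, 3))
assert dots.shape == (4,)
```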

We rewrite the `__module__` fields on some of the above APIs so that the
APIs fit PyTorch's public api definition.
- For an API to be public, it must have a `__module__` that points to a
  public PyTorch submodule. However, `torch._functorch.eager_transforms`
  is not public due to the leading underscore.
- The solution is to rewrite `__module__` to point to where the API is
  exposed (torch.func). This is what both Numpy and JAX do for their
  APIs.
- h/t pmeier in
  https://github.com/pytorch/pytorch/issues/90284#issuecomment-1348595246
  for the idea and code
- The helper function, `exposed_in`, is confined to
  torch._functorch/utils for now because we're not completely sure if
  this should be the long-term solution (a minimal sketch follows below).
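
A hypothetical sketch of what an `exposed_in`-style decorator looks like (the real helper lives in torch/_functorch/utils and may differ):

```python
# Hypothetical illustration only: rewrite __module__ so the API appears
# to live in the public module it is exposed from.
def exposed_in(module):
    def wrapper(fn):
        fn.__module__ = module
        return fn
    return wrapper


@exposed_in("torch.func")
def grad(func):
    ...
```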

Implication for functorch.* APIs:
- functorch.grad is the same object as torch.func.grad
- this means that the functorch.grad docstring is actually the
  torch.func.grad docstring and will refer to torch.func instead of
  functorch.
- This isn't really a problem since the plan on record is to deprecate
  functorch in favor of torch.func. We can fix these if we really want,
  but I'm not sure if a solution is worth maintaining.

Test Plan:
- view docs preview

Future:
- vmap should actually just be torch.vmap. This requires an extra step
  where I need to test internal callsites, so, I'm separating it into a
  different PR.
- make_fx should be in torch.func to be consistent with `import
  functorch`. This one is a bit more of a headache to deal with w.r.t.
  public api, so going to deal with it separately.
- beef up func.rst with everything else currently on the functorch
  documentation website. func.rst is currently just an empty shell.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91016
Approved by: https://github.com/samdow
2022-12-20 00:00:52 +00:00
Bin Bao
548960f68e Replace TORCHINDUCTOR_TRACE with TORCH_COMPILE_DEBUG in documentation (#91011)
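A hedged usage sketch of the renamed knob (an environment variable, so it must be set before the first compilation in the process):

```python
import os

# Replaces the old TORCHINDUCTOR_TRACE variable in the docs.
os.environ["TORCH_COMPILE_DEBUG"] = "1"

import torch  # import and compile only after setting the variable
```
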
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91011
Approved by: https://github.com/mlazos, https://github.com/jansel, https://github.com/msaroufim
2022-12-19 14:45:27 +00:00
Alvaro Gaona
ddf5b68dcb Nuttall window (#90103)
Relates to #85366
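
A one-liner sketch, assuming the `torch.signal.windows` namespace that #85366 targets:

```python
import torch

# Nuttall window of length 64 (namespace assumed from the tracking issue).
w = torch.signal.windows.nuttall(64)
print(w.shape)  # torch.Size([64])
```
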
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90103
Approved by: https://github.com/lezcano
2022-12-16 09:05:53 +00:00
HDCharles
1ca9d43d4e [ao] quantize.py fixing public v private (#87521)
Summary: made _register_activation_post_process_hook, _add_observer,
_get_unique_devices_, _get_observer_dict private

Test Plan: python test/test_public_bindings.py

Differential Revision: [D40709277](https://our.internmc.facebook.com/intern/diff/D40709277)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87521
Approved by: https://github.com/jerryzh168
2022-12-14 22:50:39 +00:00
Sherlock Huang
b4b8a56589 Doc for Canonical Aten and Prims IR (#90644)
as title.

Sample output: https://docs-preview.pytorch.org/90644/ir.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90644
Approved by: https://github.com/ezyang
2022-12-13 21:30:47 +00:00
Arek Sredzki
44dac51c36 Improve Autograd Documentation Clarity (#89401)
This makes minor adjustments to the autograd docs, improving clarity and resolving grammatical errors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89401
Approved by: https://github.com/kit1980
2022-12-06 06:45:04 +00:00
xiny
57bb4cd046 [Doc][Distributed] Add missing functions to distributed.rst (#89905)
Add missing documentation for `torch.distributed.all_to_all_single` and other functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89905
Approved by: https://github.com/kit1980
2022-12-04 07:22:54 +00:00
Christian Puhrsch
a306f85ea7 Update Persons of Interest (#90069)
Creates sections for contributors to MaskedTensor and NestedTensor and updates torchaudio.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90069
Approved by: https://github.com/drisspg, https://github.com/mikaylagawarecki, https://github.com/nateanl
2022-12-02 23:06:57 +00:00
PyTorch MergeBot
cba96366a2 Revert "remove torch.equal usages (#89527)"
This reverts commit 4095ef8b80.

Reverted https://github.com/pytorch/pytorch/pull/89527 on behalf of https://github.com/clee2000 due to broke periodic multigpu tests 4095ef8b80 https://github.com/pytorch/pytorch/actions/runs/3592806602/jobs/6049368502
2022-12-02 21:36:13 +00:00
XiaobingSuper
8b2f9887bf update quantization doc: add x86 backend as default backend of server inference (#86794)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86794
Approved by: https://github.com/jgong5, https://github.com/kit1980
2022-12-02 02:10:25 +00:00
Nikita Shulga
768bd3fb4a Add torch.compile implementation (#89607)
`torch.compile` can be used either as a decorator or to optimize a model directly. For example:
```python
@torch.compile
def foo(x):
  return torch.sin(x) + x.max()
```
or
```python
mod = torch.nn.ReLU()
optimized_mod = torch.compile(mod, mode="max-autotune")
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89607
Approved by: https://github.com/soumith
2022-12-01 20:17:52 +00:00
Svetlana Karslioglu
015b05af18 Editorial pass on Dynamo docs (#89921)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89921
Approved by: https://github.com/msaroufim
2022-12-01 18:53:16 +00:00
Philip Meier
4095ef8b80 remove torch.equal usages (#89527)
Preparation for the next PR in this stack: #89559.

I replaced

- `self.assertTrue(torch.equal(...))` with `self.assertEqual(..., rtol=0, atol=0, exact_device=True)`,
- the same for `self.assertFalse(...)` with `self.assertNotEqual(...)`, and
- `assert torch.equal(...)` with `torch.testing.assert_close(..., rtol=0, atol=0)` (note that we don't need to set `check_device=True` here since that is the default).

There were a few instances where the result of `torch.equal` is used directly. In those cases I've replaced it with `(... == ...).all().item()`, sometimes also dropping the `.item()` depending on the context.
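
In code, the replacement pattern looks like this (a small editorial illustration of the mapping described above):

```python
import torch

a = torch.ones(2)
b = torch.ones(2)

# before: assert torch.equal(a, b)
# after: bitwise-exact comparison via assert_close
torch.testing.assert_close(a, b, rtol=0, atol=0)

# when the boolean result is needed directly
same = (a == b).all().item()
```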

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89527
Approved by: https://github.com/mruberry
2022-12-01 11:22:52 +00:00
Philip Meier
d72cd4c4e5 document torch.testing.assert_allclose (#89526)
After our failed attempt to remove `assert_allclose` in #87974, we decided to add it to the documentation after all. Although we drop the expected removal date, the function continues to be deprecated in favor of `assert_close`.
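
Side by side, the deprecated function and its recommended replacement (an editorial illustration):

```python
import torch

actual = torch.ones(2)
expected = torch.ones(2)

torch.testing.assert_allclose(actual, expected)  # deprecated, now documented
torch.testing.assert_close(actual, expected)     # preferred
```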

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89526
Approved by: https://github.com/mruberry
2022-12-01 11:22:50 +00:00
Wanchao Liang
4451eb24e6 Move tensor_parallel out to distributed.tensor folder (#89878)
This PR moves tensor parallel from torch.distributed._tensor.parallel
to torch.distributed.tensor.parallel, to prepare for the beta release.
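
The import-path change in a nutshell (module paths from the PR description; exported symbols are unchanged):

```python
# before this PR (private namespace):
# import torch.distributed._tensor.parallel as tp

# after this PR (public namespace, preparing for beta):
import torch.distributed.tensor.parallel as tp
```
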
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89878
Approved by: https://github.com/fduwjj
2022-11-30 22:13:10 +00:00
Will Constable
447283752c Update DDP docs for Dynamo/DDPOptimizer (#89096)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89096
Approved by: https://github.com/msaroufim
2022-11-30 05:50:12 +00:00
andrewor14
fb47a66989 [Quant][docs] Use get_default_qconfig_mapping (#87299)
Summary: The recommended way to use QConfigMapping is through
`get_default_qconfig_mapping`. However, the docs still reference
usages like `QConfigMapping().set_global(...)`. This doesn't
actually work well in practice when the model has fixed-qparams
ops, for example. This commit updates these usages.
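
The recommended pattern, as a short sketch (the backend string is illustrative):

```python
from torch.ao.quantization import get_default_qconfig_mapping

# Start from the default mapping instead of QConfigMapping().set_global(...),
# so ops with fixed qparams get the special qconfigs they require.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
```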

Reviewers: vkuzo

Subscribers: vkuzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87299
Approved by: https://github.com/jerryzh168
2022-11-29 18:08:16 +00:00
Mark Saroufim
9048cf16fe Move Dynamo docs back to core (#89769)
With contributions from @svekars and @malfet

Waiting for doc build job to complete
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89769
Approved by: https://github.com/soumith, https://github.com/malfet
2022-11-29 04:38:53 +00:00
PyTorch MergeBot
47cca5e444 Revert "Move Dynamo docs back to core (#89769)"
This reverts commit be2816db18.

Reverted https://github.com/pytorch/pytorch/pull/89769 on behalf of https://github.com/clee2000 due to broke lint
2022-11-28 21:04:33 +00:00
eqy
8321066031 Tweak formatting of note on macros (#89598)
For readability when viewing the rendered file, e.g., from the browser.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89598
Approved by: https://github.com/kit1980
2022-11-28 20:42:30 +00:00
Mark Saroufim
be2816db18 Move Dynamo docs back to core (#89769)
With contributions from @svekars and @malfet

Waiting for doc build job to complete
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89769
Approved by: https://github.com/soumith
2022-11-28 20:32:05 +00:00
albanD
098cbe23c3 Update masked.rst (#89758)
Fix https://github.com/pytorch/pytorch/issues/89734

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89758
Approved by: https://github.com/anjali411, https://github.com/malfet, https://github.com/cpuhrsch
2022-11-28 17:55:43 +00:00
Alvaro Gaona
abb446af8c Implement old windows in Python (#87082)
Relates to #85366

- Bartlett, Blackman, Hamming, Hann.
- Except Kaiser, which will be in a different PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87082
Approved by: https://github.com/mruberry, https://github.com/lezcano
2022-11-25 11:09:28 +00:00
Emilio Castillo
c9d4390d13 Add Pluggable CUDA allocator backend (#86786)
Fixes #43144

This uses the Backend system added by [82682](https://github.com/pytorch/pytorch/pull/82682) to change allocators dynamically during code execution. This will allow us to use RMM, to use CUDA managed memory for portions of the code that do not fit in GPU memory, to write static memory allocators that reduce fragmentation while training models, and to improve interoperability with external DL compilers/libraries.

For example, we could have the following allocator in c++

```c++
#include <sys/types.h>
#include <cuda_runtime_api.h>
#include <iostream>

extern "C" {
void* my_malloc(ssize_t size, int device, cudaStream_t stream) {
   void *ptr;
   std::cout<<"alloc "<< size<<std::endl;
   cudaMalloc(&ptr, size);
   return ptr;
}

void my_free(void* ptr) {
   std::cout<<"free "<<std::endl;
   cudaFree(ptr);
}
}
```

Compile it as a shared library
```
nvcc allocator.cc -o alloc.so -shared --compiler-options '-fPIC'
```

And use it from PyTorch as follows

```python
import torch

# Uncommenting the next line would initialize the default caching allocator
# first, which would make the allocator change below fail:
# b = torch.zeros(10, device='cuda')
new_alloc = torch.cuda.memory.CUDAPluggableAllocator('alloc.so', 'my_malloc', 'my_free')
old = torch.cuda.memory.get_current_allocator()
torch.cuda.memory.change_current_allocator(new_alloc)
b = torch.zeros(10, device='cuda')
# This will error since the current allocator was already instantiated
torch.cuda.memory.change_current_allocator(old)
```

Things to discuss:
- How to test this; it needs compiling external code ...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86786
Approved by: https://github.com/albanD
2022-11-23 17:54:36 +00:00
Nikita Shulga
2de38a0714 Add torch._dynamo to docs (#89510)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89510
Approved by: https://github.com/msaroufim
2022-11-23 16:33:13 +00:00
Li-Huai (Allan) Lin
c2ce79f06e Fix dev-discuss link in the maintainer docs (#89493)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89493
Approved by: https://github.com/H-Huang
2022-11-22 19:33:21 +00:00
AllenTiTaiWang
126e44173d [ONNX] Add onnx-script into ONNX docs (#89078)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89078
Approved by: https://github.com/BowenBao
2022-11-17 06:27:17 +00:00
Kazuaki Ishizaki
a5f04e9a91 Fix typos in .md and .rst files (#88962)
This PR fixes the typo `Github` in `.md` and `.rst` files.
`Github` -> `GitHub`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88962
Approved by: https://github.com/kit1980
2022-11-17 03:37:02 +00:00
Mikayla Gawarecki
5848704ef8 Removed unnecessary check in select_nested (#89150)
The implementation in #88585 should work for all dimensions, so this removes the unnecessary check that constrained select to dims 0 and 1.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89150
Approved by: https://github.com/cpuhrsch
2022-11-16 22:11:37 +00:00
Iris
aee96bbf5a [PT-D][Checkpointing] Move distributed checkpointing from torch.distributed._shard.checkpoint to torch.distributed.checkpoint (#88698)
Context in RFC: https://github.com/pytorch/pytorch/issues/86620

.rst file will be finalized in subsequent PRs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88698
Approved by: https://github.com/wanchaol
2022-11-16 21:06:38 +00:00
BowenBao
0581331963 [ONNX] Document ONNX diagnostics (#88371)
Reference pages:
- Landing page: https://docs-preview.pytorch.org/88371/onnx_diagnostics.html
- Individual rule: https://docs-preview.pytorch.org/88371/generated/onnx_diagnostics_rules/POE0004%3Aoperator-supported-in-newer-opset-version.html

An initial PR to set up the document generation for ONNX diagnostics.
* Add document page for ONNX diagnostics.
* Add document generation for diagnostics rules from `rules.yaml`.
* Add dependency on `myst-parser` for markdown to rst parsing.

More content to be added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88371
Approved by: https://github.com/abock, https://github.com/justinchuby, https://github.com/malfet, https://github.com/kit1980
2022-11-16 19:21:46 +00:00
Driss Guessous
b291c1213a Create native function for determining which implementation of SDP to call (#89029)
# Summary
Creates a callable native function that can determine which implementation of scaled dot product will get called. This allows us to reorder the runtime dispatch of SDP to enable autograd.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89029
Approved by: https://github.com/cpuhrsch
2022-11-16 03:07:54 +00:00
Kevin Tse
be8d88f8d0 [DataLoader] Removing DataLoader2 related code (#88848)
Removing these lines of code as `DataLoader2` has been added to [TorchData](https://github.com/pytorch/data). I'm importing this to confirm it will not impact internal code.

Differential Revision: [D41201578](https://our.internmc.facebook.com/intern/diff/D41201578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88848
Approved by: https://github.com/ejguan
2022-11-11 22:27:01 +00:00
Kurt Mohler
ee28b865ee Deprecate TypedStorage, its derived classes, and all of their public methods (#85303)
Part of #85302

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85303
Approved by: https://github.com/ezyang
2022-11-08 18:11:01 +00:00
Howard Huang
bc66ddb5cb Add torch.distributed.DistBackendError exception type, thrown from C10D_NCCL_CHECK (#88134)
Currently all of the distributed errors are thrown from the `TORCH_CHECK` macro, which throws a generic `RuntimeError`. This change introduces a new error type, `DistBackendError`, which derives from `RuntimeError`, to signify that there was an error with the backend communication library. This allows for better error handling and analysis at higher levels in the stack. Motivation: https://docs.google.com/document/d/1j6VPOkC6znscliFuiDWMuMV1_fH4Abgdq7TCHMcXai4/edit#heading=h.a9rc38misyx8

Changes:
- Introduce the new error type
- Update `C10D_NCCL_CHECK`

Sample script to demonstrate the new error type:

```python
# python -m torch.distributed.run --nproc_per_node=2 <script>.py

import torch
import torch.distributed as dist

if __name__ == "__main__":
    dist.init_process_group("nccl")
    dist.broadcast(torch.tensor([1, 2, 3]).cuda(), 0)
```
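
Since the new type derives from `RuntimeError`, existing handlers keep working; a short editorial sketch of catching it specifically:

```python
import torch
import torch.distributed as dist

try:
    dist.broadcast(torch.tensor([1, 2, 3]).cuda(), 0)
except dist.DistBackendError as err:
    # Subclasses RuntimeError, so `except RuntimeError` blocks still match too.
    print(f"communication backend failure: {err}")
```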

Differential Revision: [D40998803](https://our.internmc.facebook.com/intern/diff/D40998803)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88134
Approved by: https://github.com/rohan-varma
2022-11-08 13:26:42 +00:00
lezcano
d453b3c4d4 Add a note on the stability of linalg functions. (#88313)
This was long overdue, as it keeps coming up in issues.

Fixes https://github.com/pytorch/pytorch/issues/85950
Fixes https://github.com/pytorch/pytorch/issues/59720
Fixes https://github.com/pytorch/pytorch/issues/59782

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88313
Approved by: https://github.com/soumith, https://github.com/mruberry
2022-11-07 22:44:23 +00:00
Codrin Popa
5b767d404e Modified roundup_power2_divisions to specify the number of divisions for each power of two interval (#87290)
Summary:
Improved the roundup_power2_divisions knob so it allows better control of rounding in the PyTorch CUDA caching allocator.

This new version allows setting the number of divisions per power-of-two interval, starting from 1MB and ending at 64GB and above. An example use case is when rounding is desirable for small allocations, but there are also very large persistent allocations that would not benefit from rounding and would take up extra space.
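
A hedged sketch of how the knob might be set (the exact interval syntax is an assumption to be checked against the allocator docs; it must be configured before the first CUDA allocation):

```python
import os

# Assumed syntax: one division count per power-of-two size interval,
# with `>` covering everything above the last listed size.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "roundup_power2_divisions:[256:1,512:2,1024:4,>:8]"
)
```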

Test Plan: Tested locally

Differential Revision: D40103909

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87290
Approved by: https://github.com/zdevito
2022-11-04 19:31:16 +00:00
Pruthvi Madugundu
fbd08fb358 Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)
- Asserts for CUDA are enabled by default
- Disabled for ROCm by default by setting `TORCH_DISABLE_GPU_ASSERTS` to `ON`
- Can be enabled for ROCm by setting the above variable to `OFF` during build, or forcefully enabled by setting `ROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON`

This is a follow-up change per the comment in PR #81790 ([link](https://github.com/pytorch/pytorch/pull/81790#issuecomment-1215929021)).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84190
Approved by: https://github.com/jeffdaily, https://github.com/malfet
2022-11-04 04:43:05 +00:00
Christian Puhrsch
5e6ceebccb Add support for neg to NestedTensor (#88131)
Partially fixes #86889
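
A small editorial illustration, assuming the `torch.nested` constructor available at the time:

```python
import torch

nt = torch.nested.nested_tensor([torch.randn(2), torch.randn(3)])
out = torch.neg(nt)  # elementwise negation now dispatches for nested tensors
```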

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88131
Approved by: https://github.com/drisspg
2022-11-03 15:15:57 +00:00
PyTorch MergeBot
99c07735e4 Revert "Add support for neg to NestedTensor (#88131)"
This reverts commit 6a75a0d1a1.

Reverted https://github.com/pytorch/pytorch/pull/88131 on behalf of https://github.com/mehtanirav due to [Internal breakages](https://www.internalfb.com/intern/sandcastle/job/13510799692239080/insights)
2022-11-02 18:43:36 +00:00