Commit Graph

2232 Commits

Author SHA1 Message Date
Bin Bao
edebad81a9 Add a rst doc for the performance dashboard (#100592)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100592
Approved by: https://github.com/msaroufim, https://github.com/huydhn
2023-05-04 18:28:09 +00:00
Richard Barnes
9c185b6b46 [codemod] Replace hasattr with getattr in caffe2/docs/source/notes/extending.rst (#100598)
Summary:
The pattern
```
X.Y if hasattr(X, "Y") else Z
```
can be replaced with
```
getattr(X, "Y", Z)
```

The [getattr](https://www.w3schools.com/python/ref_func_getattr.asp) function gives more succinct code than the [hasattr](https://www.w3schools.com/python/ref_func_hasattr.asp) function. Please use it when appropriate.
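For illustration, a minimal before/after (names are hypothetical):

```python
class Config:
    retries = 3

cfg = Config()

# Equivalent attribute lookups with a default; getattr is more succinct:
timeout = cfg.timeout if hasattr(cfg, "timeout") else 30
timeout = getattr(cfg, "timeout", 30)
```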

**This diff is very low risk. Green tests indicate that you can safely Accept & Ship.**

Test Plan: Sandcastle

Differential Revision: D44886464

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100598
Approved by: https://github.com/Skylion007
2023-05-04 16:36:15 +00:00
Angela Yi
8eb82135d1 [docs] Docs for writing ATen IR passes + FX Pattern matching (#100577)
I'm not really sure where to put this...maybe just link it somewhere in torch.compile docs?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100577
Approved by: https://github.com/msaroufim
2023-05-04 05:17:10 +00:00
shibo
6aeb85add8 add checkpoint support for custom device (#99626)
1. Add checkpoint support for custom devices.
2. Add a device argument: the goal was to add a `device="cuda"` parameter to the `forward` function of `CheckpointFunction` so that the device type can be specified at call time, but the `apply` function of `torch.autograd.Function` does not support `kwargs`, so a variable named `_device` was added instead.
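As a hedged illustration of the `apply` limitation described above (a minimal sketch, not the PR's actual code):

```python
import torch

class Doubler(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, device="cuda"):  # keyword default; apply() cannot set it
        return x * 2

    @staticmethod
    def backward(ctx, grad_output):
        # one gradient per forward input: x and device
        return grad_output * 2, None

x = torch.randn(3, requires_grad=True)
# Doubler.apply(x, device="cpu")  # fails: apply() takes no keyword arguments
y = Doubler.apply(x)              # positional arguments only
```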
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99626
Approved by: https://github.com/soulitzer
2023-05-04 00:23:42 +00:00
vfdev-5
6a12f10b08 Publicly exposing torch.backends.cpu.get_cpu_capability() (#100164)
Description:

- As suggested by Nikita, created `torch.backends.cpu` submodule and exposed `get_cpu_capability`.

- In torchvision's Resize method, we want to know the current CPU capability in order to pick the appropriate codepath depending on CPU capabilities.

The newly written vectorized resize of uint8 images on AVX2-supported CPUs is now faster than the older approach (uint8 -> float -> resize -> uint8). However, on non-AVX hardware (e.g. Mac M1) certain configs are slower using native uint8.
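A minimal usage sketch of the new API (the example return values are assumptions based on the native CPU-capability strings):

```python
import torch

cap = torch.backends.cpu.get_cpu_capability()
print(cap)  # e.g. "AVX2", "AVX512", "NO AVX", or "DEFAULT"

if cap in ("AVX2", "AVX512"):
    pass  # pick the vectorized uint8 codepath
else:
    pass  # fall back to uint8 -> float -> resize -> uint8
```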

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100164
Approved by: https://github.com/albanD, https://github.com/malfet
2023-05-03 19:02:07 +00:00
Svetlana Karslioglu
d425da8bf3 Replace master with main in links and docs/conf.py (#100176)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100176
Approved by: https://github.com/albanD, https://github.com/malfet
2023-05-02 18:20:32 +00:00
Hirochika Matsumoto
f143c92739 [docs] Fix typo in get-started.rst (#100355)
This PR fixes a typo, changing `""nvprims_nvfuser"` to `"nvprims_nvfuser"`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100355
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-05-02 00:29:53 +00:00
BowenBao
c94b6a6712 [ONNX] Introduce 'diagnostics' to 'dynamo_export' api (#99668)
Summary
* Introduce `DiagnosticContext` to `torch.onnx.dynamo_export`.
* Remove `DiagnosticEngine` in preparation for updating 'diagnostics' in `dynamo_export` to drop dependencies on the global diagnostic context. No plans to update `torch.onnx.export` diagnostics.

Next steps
* Separate `torch.onnx.export` diagnostics and `torch.onnx.dynamo_export` diagnostics.
* Drop dependencies on global diagnostic context. https://github.com/pytorch/pytorch/pull/100219
* Replace 'print's with 'logger.log'.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99668
Approved by: https://github.com/justinchuby, https://github.com/abock
2023-05-01 19:58:49 +00:00
pbialecki
8fe91d16b0 Remove CUDA 11.6 note from complex docs (#100118)
Removes note in the complex docs pointing to the CUDA 11.6 wheels introduced in https://github.com/pytorch/pytorch/pull/80363.
Background: this warning was added via https://github.com/pytorch/pytorch/issues/79876, which pointed out a slow compilation time in 11.3. The 11.6 pip wheels were thus recommended but are not built anymore, as our current support is 11.7 and 11.8 (with 12.1 experimental in nightlies).

The note is confusing users as it doesn't explain why 11.6 is needed.
Reference: https://discuss.pytorch.org/t/complex-numbers-cuda-11-6-documentation-warning/178588/1

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100118
Approved by: https://github.com/msaroufim
2023-04-27 16:26:27 +00:00
milesial
45bf3f6216 Optimized EMA implementation (#94820)
This PR proposes an optimized way to do Exponential Moving Average (EMA), which is faster than the current way using `swa_utils.AveragedModel` described in https://pytorch.org/docs/stable/optim.html#custom-averaging-strategies.

This implementation is asynchronous, and is built as an optimizer wrapper so that the EMA weight update happens without any additional CPU/GPU sync, just after optimizer steps, and with limited code changes.
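The core idea can be sketched as follows (a minimal illustration of the wrapper pattern, not the PR's actual `EMAOptimizer` implementation):

```python
import torch

class SimpleEMA:
    """Keep an EMA of the parameters, updated right after each optimizer step."""

    def __init__(self, optimizer, model, decay=0.9999):
        self.optimizer = optimizer
        self.decay = decay
        self.params = list(model.parameters())
        self.ema_params = [p.detach().clone() for p in self.params]

    def step(self):
        self.optimizer.step()
        with torch.no_grad():
            # ema = decay * ema + (1 - decay) * param, done with fused foreach ops
            torch._foreach_mul_(self.ema_params, self.decay)
            torch._foreach_add_(self.ema_params, self.params, alpha=1 - self.decay)
```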

Example usage:
```
model = Model().to(device)
opt = torch.optim.Adam(model.parameters())

opt = EMAOptimizer(opt, device, 0.9999)

for epoch in range(epochs):
    training_loop(model, opt)

    regular_eval_accuracy = evaluate(model)

    with opt.swap_ema_weights():
        ema_eval_accuracy = evaluate(model)
```

Here are some benchmarks (time per iteration) on various torchvision models:

|model|this PR iteration time|swa_utils.AveragedModel iteration time|iteration speedup|
|-----|-----|-----|-----|
|regnet_x_1_6gf|62.73                        |67.998                 |1.08                                         |
|regnet_x_3_2gf|101.75                       |109.422                |1.08                                         |
|regnet_x_400mf|25.13                        |32.005                 |1.27                                         |
|regnet_x_800mf|33.01                        |37.466                 |1.13                                         |
|regnet_x_8gf|128.13                       |134.868                |1.05                                         |
|regnet_y_16gf|252.91                       |261.292                |1.03                                         |
|regnet_y_1_6gf|72.14                        |84.22                  |1.17                                         |
|regnet_y_3_2gf|99.99                        |109.296                |1.09                                         |
|regnet_y_400mf|29.53                        |36.506                 |1.24                                         |
|regnet_y_800mf|37.82                        |43.634                 |1.15                                         |
|regnet_y_8gf|196.63                       |203.317                |1.03                                         |
|resnet101|128.80                       |137.434                |1.07                                         |
|resnet152|182.85                       |196.498                |1.07                                         |
|resnet18|29.06                        |29.975                 |1.03                                         |
|resnet34|50.73                        |53.443                 |1.05                                         |
|resnet50|76.88                        |80.602                 |1.05                                         |
|resnext101_32x8d|277.29                       |280.759                |1.01                                         |
|resnext101_64x4d|269.56                       |281.052                |1.04                                         |
|resnext50_32x4d|100.73                       |101.102                |1.00                                         |
|shufflenet_v2_x0_5|10.56                        |15.419                 |1.46                                         |
|shufflenet_v2_x1_0|13.11                        |18.525                 |1.41                                         |
|shufflenet_v2_x1_5|18.05                        |23.132                 |1.28                                         |
|shufflenet_v2_x2_0|25.04                        |30.008                 |1.20                                         |
|squeezenet1_1|14.26                        |14.325                 |1.00                                         |
|swin_b|264.52                       |274.613                |1.04                                         |
|swin_s|180.66                       |188.914                |1.05                                         |
|swin_t|108.62                       |112.632                |1.04                                         |
|swin_v2_s|220.29                       |231.153                |1.05                                         |
|swin_v2_t|127.27                       |133.586                |1.05                                         |
|vgg11|95.52                        |103.714                |1.09                                         |
|vgg11_bn|106.49                       |120.711                |1.13                                         |
|vgg13|132.94                       |147.063                |1.11                                         |
|vgg13_bn|149.73                       |165.256                |1.10                                         |
|vgg16|158.19                       |172.865                |1.09                                         |
|vgg16_bn|177.04                       |192.888                |1.09                                         |
|vgg19|184.76                       |194.194                |1.05                                         |
|vgg19_bn|203.30                       |213.334                |1.05                                         |
|vit_b_16|217.31                       |219.748                |1.01                                         |
|vit_b_32|69.47                        |75.692                 |1.09                                         |
|vit_l_32|223.20                       |258.487                |1.16                                         |
|wide_resnet101_2|267.38                       |279.836                |1.05                                         |
|wide_resnet50_2|145.06                       |154.918                |1.07                                         |

You can see that in all cases it is faster than using `AveragedModel`. In fact in many cases, adding EMA does not add any overhead since the computation is hidden behind the usual iteration flow.

This is a similar implementation to the one currently in [NVIDIA NeMo](https://github.com/NVIDIA/NeMo).

If the team is interested in merging this, let me know and I'll add some documentation similar to `swa_utils` and tests.

Credits to @szmigacz for the implementation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94820
Approved by: https://github.com/janeyx99
2023-04-26 18:02:11 +00:00
Chris Gottbrath
f0e28b1cb9 Adding the maintainers approved in 2023Q1 Core Maintainers meeting (#98520)
Added Nikita to Core Maintainers
Merged MKLDNN with CPU Performance
Renamed CUDA to GPU Performance
Added Jiong to Compiler and CPU Performance
Added Xiaobing to CPU Performance
Marking Vitaly and Jian Hui as Emeritus
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98520
Approved by: https://github.com/ezyang, https://github.com/soumith, https://github.com/dzhulgakov
2023-04-24 17:58:18 +00:00
Kurt Mohler
1e8cf6ad7f Add documentation for torch._logging.set_logs (#99219)
Part of #98871
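For context, a hedged usage sketch of the documented API:

```python
import logging
import torch

# Enable INFO-level dynamo logs and print generated graph code:
torch._logging.set_logs(dynamo=logging.INFO, graph_code=True)
```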

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99219
Approved by: https://github.com/mlazos, https://github.com/lezcano
2023-04-24 08:06:57 +00:00
BowenBao
51742a467d [ONNX] Fix missing import numpy for docs example (#99663)
Fixes https://github.com/pytorch/pytorch/issues/99408
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99663
Approved by: https://github.com/justinchuby
2023-04-21 04:06:45 +00:00
Simon Seo
9f95032101 Fix broken links in contribution_guide.rst (#99295)
mainly from `master` to `main`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99295
Approved by: https://github.com/kit1980
2023-04-20 22:20:56 +00:00
Will Constable
e6aa8e0729 Test and document dynamo backward hooks support (#99382)
No new support added, but backward hooks are working and now there is a test and some documentation about the limitations (hooks fire only after the whole graph has run).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99382
Approved by: https://github.com/yanboliang
2023-04-18 03:03:29 +00:00
Will Constable
6eab5e88c8 Graph-break on allowed modules if they have hooks (#97184)
Allowed modules are stuck into dynamo's fx graph as call_module
nodes, without dynamo doing any tracing of the module.  This means
during AOT trace time, hooks will fire during tracing when the
call_module is executed, but the hooks themselves will disappear
after that and not be present in the compiled program.
  (worse, if they performed any tensor operations, those would get
   traced so you could end up with part of the hook's functionality).

To circumvent this, there are two options for 'allowed modules' with hooks.
1) don't treat them as 'allowed' - trace into them
2) graph-break, so the module is no longer part of the dynamo trace at all

(1) will fail for users that opted into allowed modules because they know
    their module has problems being traced by dynamo.
(2) causes graph breaks on common modules such as nn.Linear, just because they
    are marked as 'allowed'.

It would help matters if we could differentiate between types of allowed modules
  (A) allowed to avoid overheads - used for common ops like nn.Linear
  (B) allowed to avoid dynamo graphbreaks caused by unsupported code

Ideally, we'd use method (1) for group (A) and (2) for (B).

For now, graph-break on all cases of allowed modules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97184
Approved by: https://github.com/jansel
2023-04-15 01:46:15 +00:00
BowenBao
606ce5b653 [ONNX] Introduce Input/Output adapter; Switch to 'DynamoExporter' (#98421)
Summary
* Introduce input/output adapter. Due to design differences, the input/output format
between a PyTorch model and the exported ONNX model is often not the same. E.g., `None`
inputs are allowed for a PyTorch model but are not supported by ONNX. Nested constructs
of tensors are allowed for a PyTorch model, but only flattened tensors are supported by ONNX,
etc. The new input/output adapter is exported with the model, providing an interface to
automatically convert and validate the input/output format.
* As suggested by #98251,
provide extension for unwrapping user defined python classes for `dynamo.export` based
exporter. Unblock huggingface models.
* Re-wire tests to run through `DynamoExporter` w/ `dynamo_export` api. Kept
`DynamoOptimizeExporter` in the tests for now for coverage of this change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98421
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms, https://github.com/thiagocrepaldi
2023-04-15 01:13:00 +00:00
PyTorch MergeBot
dda7ce4bb3 Revert "[core][pruning][be] Rename sparsifier folder to pruner (#98758)"
This reverts commit 778fd1922a.

Reverted https://github.com/pytorch/pytorch/pull/98758 on behalf of https://github.com/jcaip due to https://www.internalfb.com/diff/D44905951 need to fix broken import in fbcode
2023-04-13 16:30:47 +00:00
Tugsbayasgalan Manlaibaatar
39fd7f945f Add Symbool support in python to C++ translation (#98453)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98453
Approved by: https://github.com/ezyang
2023-04-12 03:21:57 +00:00
Mark Saroufim
bc8cb62bcb torch.compile benchmark utility (#97699)
I've had many exchanges that look like this https://github.com/rasbt/faster-pytorch-blog/pull/2 so this is an attempt to make this problem easier

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97699
Approved by: https://github.com/ezyang
2023-04-12 03:02:06 +00:00
soulitzer
367051e47e [docs] Add missing functions to autograd.rst (#98854)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98854
Approved by: https://github.com/albanD
2023-04-11 20:45:49 +00:00
Jesse Cai
778fd1922a [core][pruning][be] Rename sparsifier folder to pruner (#98758)
Summary:
att

Test Plan:
```
python test/test_ao_sparsity.py
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98758
Approved by: https://github.com/jerryzh168
2023-04-11 17:26:29 +00:00
Edward Z. Yang
b8b840be3d Convert logging f-strings to use % format, part five (#98765)
This does some annoying but simple cases by hand.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98765
Approved by: https://github.com/wanchaol
2023-04-11 13:17:59 +00:00
Guspan Tanadi
ab385bd49e docs: Linking ResNeXt PyTorch Hub Pipeline (#98689)
Introduces a link to the ResNeXt model on PyTorch Hub; see the Skip connections section.
Handles the issue in #98690.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98689
Approved by: https://github.com/zou3519, https://github.com/kit1980
2023-04-11 02:20:26 +00:00
Will Constable
390c51bf87 Skip nnmodule hook guards by default (#98371)
This PR makes basic nnmodule forward hooks work by default, without any overhead, but it leaves silent correctness issues if users modify/remove their hooks later, and thus also emits a warning.

- the usual case is to not use hooks, so avoid guard overhead here
- registering any hook before compile will trigger a warning about hook support
- registering a hook later (or removing one) requires user knowledge and opting in,
  currently this isn't warnable (but maybe we can observe compiled nnmodules to make it
  warnable).

Why skip hook guards by default instead of not tracing __call__/hooks by default?
- avoid having a mode flag that alters dynamo tracing behavior (harder to test both codepaths
  in CI with full coverage)
- the most basic hook usecase (registering a hook before compile, and never removing it)
  will work by default with this PR, while it would require enablement and incur overhead
  in the 'not tracing __call__' proposal.
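The basic use case that works by default looks like this (a minimal sketch):

```python
import torch

model = torch.nn.Linear(4, 4)

def double_output(module, inputs, output):
    return output * 2

model.register_forward_hook(double_output)  # registered before compile
compiled = torch.compile(model)             # hook fires; no guard overhead
print(compiled(torch.randn(2, 4)).shape)
```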

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98371
Approved by: https://github.com/jansel
2023-04-07 15:10:51 +00:00
BJ Hargrave
555ab310dc Add itemsize and nbytes properties to Tensor (#98322)
Adds properties for itemsize and nbytes to Tensor matching the properties in NumPy.

Fixes https://github.com/pytorch/pytorch/issues/12728
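For example:

```python
import torch

x = torch.zeros(4, 3, dtype=torch.float32)
print(x.itemsize)  # 4  -- bytes per element, like numpy's ndarray.itemsize
print(x.nbytes)    # 48 -- 12 elements * 4 bytes, like numpy's ndarray.nbytes
```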

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98322
Approved by: https://github.com/ezyang
2023-04-05 12:11:55 +00:00
Aaron Bockover
558e5a240e Introduce torch.onnx.dynamo_export API (#97920)
This is the first phase of the new ONNX exporter API for exporting from TorchDynamo and FX, and represents the beginning of a new era for exporting ONNX from PyTorch.

The API here is a starting point upon which we will layer more capability and expressiveness in subsequent phases. This first phase introduces the following into `torch.onnx`:

```python
dynamo_export(
    model: torch.nn.Module,
    /,
    *model_args,
    export_options: Optional[ExportOptions] = None,
    **model_kwargs,
) -> ExportOutput:
    ...

class ExportOptions:
    opset_version: Optional[int] = None
    dynamic_shapes: Optional[bool] = None
    logger: Optional[logging.Logger] = None

class ExportOutputSerializer(Protocol):
    def serialize(
        self,
        export_output: ExportOutput,
        destination: io.BufferedIOBase,
    ) -> None:
        ...

class ExportOutput:
    model_proto: onnx.ModelProto

    def save(
        self,
        destination: Union[str, io.BufferedIOBase],
        *,
        serializer: Optional[ExportOutputSerializer] = None,
    ) -> None:
        ...
```

In addition to the API in the first commit on this PR, we have a few experiments for exporting Dynamo and FX to ONNX that this PR rationalizes through the new Exporter API and adjusts tests to use the new API.

- A base `FXGraphModuleExporter` exporter from which all derive:
  - `DynamoExportExporter`: uses dynamo.export to acquire FX graph
  - `DynamoOptimizeExporter`: uses dynamo.optimize to acquire FX graph
  - `FXSymbolicTraceExporter`: uses FX symbolic tracing

The `dynamo_export` API currently uses `DynamoOptimizeExporter`.
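As a hedged usage sketch of the API above (the model and shapes are illustrative):

```python
import torch

class MyModel(torch.nn.Module):
    def forward(self, x):
        return x.relu()

export_output = torch.onnx.dynamo_export(MyModel(), torch.randn(2, 3))
export_output.save("model.onnx")  # default protobuf serialization
```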

### Next Steps (subsequent PRs):

* Combine `DynamoExportExporter` and `DynamoOptimizeExporter` into a single `DynamoExporter`.
* Make it easy to test `FXSymbolicTraceExporter` through the same API; eventually `FXSymbolicTraceExporter` goes away entirely when the Dynamo approach works for large models. We want to keep `FXSymbolicTraceExporter` around for now for experimenting and internal use.
* Parameterize (on `ExportOptions`) and consolidate Dynamo exporter tests.
  - This PR intentionally leaves the existing tests unchanged as much as possible except for the necessary plumbing.
* Subsequent API phases:
  - Diagnostics
  - Registry, dispatcher, and Custom Ops
  - Passes
  - Dynamic shapes

Fixes #94774

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97920
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms, https://github.com/thiagocrepaldi, https://github.com/shubhambhokare1
2023-04-04 18:13:29 +00:00
Richard Zou
6b9e22f3f6 Clarify the saving of intermediates in the "extending torch.func" docs (#98020)
Fixes https://github.com/pytorch/pytorch/issues/97260

We got some feedback that the page reads like "in order to save an input
for backward, you must return it as an output of the
autograd.Function.forward".

Doing so actually raises an error (on master and as of 2.1), but results
in an ambiguous situation on 2.0.0. To avoid more users running into
this, we clarify the documentation so it doesn't read like the above
and clearly mentions that you can save things from the inputs or
outputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98020
Approved by: https://github.com/soulitzer, https://github.com/kshitij12345
2023-03-31 13:57:37 +00:00
drisspg
a5b6f10c5d Fix format bug in NT docs (#97998)
Fixes a formatting bug in the NT docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97998
Approved by: https://github.com/jbschlosser
2023-03-31 01:00:25 +00:00
Driss Guessous
5a81508bb6 Add NestedTensor ops: logical_not, logical_not_, masked_fill (#97934)
# Summary
### <samp>🤖 Generated by Copilot at 7954302</samp>

This pull request adds support for `logical_not` and `masked_fill` operations on nested tensors, which are tensors that can have tensors as elements. It modifies the `native_functions.yaml` file to dispatch these operations to the nested tensor backend, implements the logic for these operations in `NestedTensorBinaryOps.cpp` and `NestedTensorUnaryOps.cpp`, adds documentation in `nested.rst`, and adds tests in `test_nestedtensor.py`.

## Description
### <samp>🤖 Generated by Copilot at 7954302</samp>

*  Implement `logical_not` operation on nested tensors ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991R1164), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991R1172), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-f7c94671810b3ce652f9ad5458518cb7bbd67e8bf7e84e0a2fba641d878ba7c5R45-R56), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-c8b131d009badb3f92031b2aaa6e7f93a793f13caee278ea78e1c57d78c0399eR203), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-6eef496a8ec635930b6e52507358e069c80021f3535b8737d39e14ffc38950c0L854-R867))
  - Add `NestedTensor_logical_not` and `NestedTensor_logical_not_` functions to `native_functions.yaml` for CPU and CUDA dispatch ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991R1164), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991R1172))
  - Define `NestedTensor_logical_not` and `NestedTensor_logical_not_` functions in `NestedTensorUnaryOps.cpp` using `map_nt` and `get_buffer` ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-f7c94671810b3ce652f9ad5458518cb7bbd67e8bf7e84e0a2fba641d878ba7c5R45-R56))
  - Document `torch.logical_not` function for nested tensors in `nested.rst` ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-c8b131d009badb3f92031b2aaa6e7f93a793f13caee278ea78e1c57d78c0399eR203))
  - Add subtest for `logical_not` function in `test_activations` method in `TestNestedTensorDeviceType` class in `test_nestedtensor.py` ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-6eef496a8ec635930b6e52507358e069c80021f3535b8737d39e14ffc38950c0L854-R867))
* Implement `masked_fill` operation on nested tensors ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991R7439), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-f847e41e3d373230df0b25574e993ec0e6b699bf16796b3df9ae9fb518048e25L210-R224), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-c8b131d009badb3f92031b2aaa6e7f93a793f13caee278ea78e1c57d78c0399eR197), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-6eef496a8ec635930b6e52507358e069c80021f3535b8737d39e14ffc38950c0R677-R688), [link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-6eef496a8ec635930b6e52507358e069c80021f3535b8737d39e14ffc38950c0R2515-R2528))
  - Add `NestedTensor_masked_fill` function to `native_functions.yaml` for CPU and CUDA dispatch ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991R7439))
  - Define `NestedTensor_masked_fill` function in `NestedTensorBinaryOps.cpp` using `NestedTensor_elementwise_Tensor` ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-f847e41e3d373230df0b25574e993ec0e6b699bf16796b3df9ae9fb518048e25L210-R224))
  - Document `torch.Tensor.masked_fill` function for nested tensors in `nested.rst` ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-c8b131d009badb3f92031b2aaa6e7f93a793f13caee278ea78e1c57d78c0399eR197))
  - Add test case for `masked_fill` function in `TestNestedTensorDeviceType` class in `test_nestedtensor.py` ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-6eef496a8ec635930b6e52507358e069c80021f3535b8737d39e14ffc38950c0R677-R688))
  - Add test case for backward pass of `masked_fill` function in `TestNestedTensorAutograd` class in `test_nestedtensor.py` ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-6eef496a8ec635930b6e52507358e069c80021f3535b8737d39e14ffc38950c0R2515-R2528))
* Improve error message for unsupported element-wise binary operations on nested dense tensors ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-f847e41e3d373230df0b25574e993ec0e6b699bf16796b3df9ae9fb518048e25L142-R150))
  - Modify `NestedTensor_elementwise_Tensor` function in `NestedTensorBinaryOps.cpp` to include operation name in error message ([link](https://github.com/pytorch/pytorch/pull/97934/files?diff=unified&w=0#diff-f847e41e3d373230df0b25574e993ec0e6b699bf16796b3df9ae9fb518048e25L142-R150))

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97934
Approved by: https://github.com/cpuhrsch
2023-03-30 08:14:39 +00:00
Driss Guessous
f603873c1b add various NT ops needed for testing (#97837)
# Summary
Add some simple unary and binary NT ops:
- sub
- sgn
- abs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97837
Approved by: https://github.com/cpuhrsch
2023-03-29 23:43:37 +00:00
vfdev
0f424f7f05 Fixed broken link to troubleshooting.html docs page (#97330)
Seen first in error message:
```
[2023-03-22 10:30:39,786] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64)
   function: '<resume in paste_mask_in_image>' (/vision/torchvision/models/detection/roi_heads.py:407)
   reasons:  w == 857
to diagnose recompilation issues, see https://pytorch.org/docs/master/dynamo/troubleshooting.html.
[2023-03-22 10:30:40,036] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64)
   function: '<resume in paste_mask_in_image>' (/vision/torchvision/models/detection/roi_heads.py:406)
   reasons:  ___stack0 == 207
to diagnose recompilation issues, see https://pytorch.org/docs/master/dynamo/troubleshooting.html.
```

Broken link:
- https://pytorch.org/docs/master/dynamo/troubleshooting.html.

Good link:
- https://pytorch.org/docs/master/compile/troubleshooting.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97330
Approved by: https://github.com/zou3519
2023-03-22 16:40:21 +00:00
Mikayla Gawarecki
b04363ead4 [easy] Expose documentation for a few global nn.Module hooks (#97185)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97185
Approved by: https://github.com/albanD
2023-03-21 20:09:29 +00:00
Kazuaki Ishizaki
50ed38a7eb Fix typo under docs directory (#97202)
This PR fixes typos in `.rst` files under the docs directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97202
Approved by: https://github.com/kit1980
2023-03-21 01:24:10 +00:00
Driss Guessous
a269e5fa04 Add forward and backward support for silu to NestedTensors (#97181)
# Summary
Add forward and backward support for silu to NestedTensors
- Add forward support to silu
- Add forward support to silu_
- Add backward support to silu
- Add to NT docs
- Add tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97181
Approved by: https://github.com/cpuhrsch, https://github.com/jbschlosser
2023-03-20 23:46:12 +00:00
Mark Saroufim
6110effa86 Rework torch.compile docs (#96706)
Chatted with @stas00 on slack and here are some great improvements he suggested to the compile docs

- [x] Rename `dynamo` folder to `compile`
- [x] Link `compile` docstring on `torch.html` to main index page for compile
- [x] Create a new index page that describes why people should care
  - [x] easy perf, memory reduction, 1 line
  - [x] Short benchmark table
  - [x] How to guide
  - [x] TOC that links to the more technical pages folks have written, make the existing docs we have a Technical overview
- [x] Highlight the new APIs for `torch._inductor.list_options()` and `torch._inductor.list_mode_options()` - clarify these are inductor specific and add more prose around which ones are most interesting

He also highlighted an interesting way to think about who is reading these docs:

- [x] End users, that just want things to run fast
- [x] Library maintainers wrapping torch.compile which would care for example about understanding when in their code they should compile a model, which backends are supported
- [x] Debuggers, whose needs are somewhat addressed by the troubleshooting guide and FAQ, though those could be dramatically reworked to say what we expect to break

And in a separate PR I'll work on the below with @SherlockNoMad
- [ ] Authors of new backends that care about how to plug into dynamo or inductor layer so need to explain some more internals like
  - [ ] IR
  - [ ] Where to plugin, dynamo? inductor? triton?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96706
Approved by: https://github.com/svekars
2023-03-15 04:41:13 +00:00
Bin Bao
f03db8d6cb [reland2][inductor] Add an AOT compilation mode for Inductor CPP backend (#96520)
Summary: This is a reland of https://github.com/pytorch/pytorch/pull/94822.
Solved the long compilation issue for inductor cpp tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96520
Approved by: https://github.com/huydhn, https://github.com/malfet
2023-03-14 16:10:54 +00:00
eqy
6e3e22d58c [CUDA][cuFFT] Minor fix for cuFFT plan cache docs (#96373)
The attributes described in the docs require indexing into the plan cache manager, as there is a separate plan cache per device.
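Concretely, the per-device cache is reached by indexing, e.g.:

```python
import torch

# There is a separate cuFFT plan cache per device; index into the manager:
torch.backends.cuda.cufft_plan_cache[0].max_size = 32
print(torch.backends.cuda.cufft_plan_cache[0].size)
torch.backends.cuda.cufft_plan_cache[0].clear()
```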

CC @ptrblck @ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96373
Approved by: https://github.com/ngimel
2023-03-14 00:28:14 +00:00
Driss Guessous
f330281fb2 Add torch.nn.LayerNorm() to documented list of supported nested tensor ops (#96434)
Layer norm is supported and this updates the documentation to reflect that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96434
Approved by: https://github.com/cpuhrsch, https://github.com/jbschlosser
2023-03-13 23:16:09 +00:00
Joel Schlosser
30d56dd8c1 Support randn_like() for NT (#96528)
To satisfy an internal ask.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96528
Approved by: https://github.com/mikaylagawarecki, https://github.com/cpuhrsch
2023-03-13 19:39:51 +00:00
Kiuk Chung
55a1bd3fc6 [PT-D] Update CODEOWNERS, merge_rules, and Persons-of-Interest for to… (#96321)
Synchronize CODEOWNERS, merge_rules, and POI files to reflect kiukchung and d4l3k (Tristan Rice) as maintainers of the distributed module.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96321
Approved by: https://github.com/d4l3k, https://github.com/albanD, https://github.com/malfet
2023-03-13 17:38:43 +00:00
Joel Schlosser
024ea1a21e Support zeros_like() for NT (#96527)
This is used for the fake tensor fallbacks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96527
Approved by: https://github.com/cpuhrsch
2023-03-13 15:15:08 +00:00
Rishub Tamirisa
f3b8638074 Adding nn.ZeroPad1d and nn.ZeroPad3d (#96295)
Fixes #95796

### Implementation
Adds python implementation for `nn.ZeroPad1d` and `nn.ZeroPad3d` in `torch/nn/modules/padding.py`.

Adds cpp implementation for `nn::ZeroPad1d` and `nn::ZeroPad3d` in the following 3 files, refactored with templates similarly to `nn::ConstantPad`'s implementation:
- `torch/csrc/api/include/torch/nn/modules/padding.h`
- `torch/csrc/api/include/torch/nn/options/padding.h`
- `torch/csrc/api/src/nn/modules/padding.cpp`

Also added relevant definitions in `torch/nn/modules/__init__.py`.
### Testing
Adds the following tests:
-  cpp tests of similar length and structure as `ConstantPad` and the existing `ZeroPad2d` impl in `test/cpp/api/modules.cpp`
- cpp API parity tests in `torch/testing/_internal/common_nn.py`
- module init tests in `test/test_module_init.py`

Also added relevant definitions in `test/cpp_api_parity/parity-tracker.md`
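A short usage sketch of the new modules:

```python
import torch
import torch.nn as nn

pad1d = nn.ZeroPad1d(2)                         # 2 zeros on each side of the last dim
print(pad1d(torch.randn(1, 3, 5)).shape)        # torch.Size([1, 3, 9])

pad3d = nn.ZeroPad3d((1, 1, 2, 2, 0, 0))        # (left, right, top, bottom, front, back)
print(pad3d(torch.randn(1, 2, 4, 4, 4)).shape)  # torch.Size([1, 2, 4, 8, 6])
```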

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96295
Approved by: https://github.com/soulitzer
2023-03-10 03:51:41 +00:00
Joel Schlosser
7324aef9a8 Add torch.empty_like() to documented list of supported nested tensor ops (#96211)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96211
Approved by: https://github.com/drisspg
2023-03-07 23:33:34 +00:00
Iris
a7698a8260 [DCP] Add DCP FSDP sharded_state_dict checkpoint example to DCP .rst file (#95517)
As title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95517
Approved by: https://github.com/kumpera
2023-03-03 18:09:10 +00:00
Svetlana Karslioglu
004bcffc6a Fix formatting (#95906)
Fixing list formatting by adding a missing blank line:

Before:
![Screenshot 2023-03-02 at 3 17 28 PM (2)](https://user-images.githubusercontent.com/5317992/222585127-9b6ed4dd-4719-4756-b2ac-1ba6e8f97b87.png)

After:
![Screenshot 2023-03-02 at 3 16 48 PM (2)](https://user-images.githubusercontent.com/5317992/222585172-3ef35a48-641f-4b73-9f7b-f419a122196b.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95906
Approved by: https://github.com/orionr
2023-03-03 16:18:12 +00:00
Michael Lazos
184fb9f11d Small doc update for torch_compile_debug (#95809)
Updates the troubleshooting documentation with the folder structure of the debug directory
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95809
Approved by: https://github.com/msaroufim
2023-03-02 00:25:28 +00:00
Mark Saroufim
f7b26bdd22 Remove mention of dynamo.optimize() in docs (#95802)
This should be self-contained enough to merge, but other things that have been bugging me are:
* Instructions on debugging IMA issues
* Dynamic shape instructions
* Explaining config options better

Will look at adding a config options doc

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95802
Approved by: https://github.com/svekars
2023-03-01 23:24:09 +00:00
ajithvallabai
e9c70b0b20 Fix typo and grammatical errors in community docs and dynamo docs (#95692)
Fixes typos and grammatical errors in the community docs and dynamo docs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95692
Approved by: https://github.com/H-Huang
2023-03-01 18:10:46 +00:00
ajithvallabai
3944e7c3e8 Fix grammatical errors in contribution guide (#95454)
Fixed the following errors in the contribution guide.

"deep neural networks using a **on** tape-based autograd systems." to "deep neural networks **using a tape-based** autograd systems."

"the best entrance **point** and are great places to start." to "the best entrance **points** and are great places to start."
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95454
Approved by: https://github.com/ezyang
2023-02-28 03:44:40 +00:00
Svetlana Karslioglu
d7146e7870 Update copyright (#95652)
Updating the copyright notice to reflect what is shown on the website.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95652
Approved by: https://github.com/atalman
2023-02-27 23:15:55 +00:00
Jane Xu
b215af2db8 [optim] Add general documentation on our algorithm defaults (#95391)
I added a section + table under Algorithms
https://docs-preview.pytorch.org/95391/optim.html?highlight=optim#module-torch.optim
<img width="725" alt="image" src="https://user-images.githubusercontent.com/31798555/221246256-99325a27-9016-407b-a9fe-404d61e41a82.png">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95391
Approved by: https://github.com/albanD
2023-02-24 21:35:30 +00:00
Mark Saroufim
9f707f164e Add more GPU metric instrumentation (#91717)
Fixes https://github.com/pytorch/serve/issues/1937

A fairly common query I see folks running while using pytorch is

`nvidia-smi --format=csv,noheader,nounits --query-gpu=utilization.gpu,utilization.memory,memory.total,memory.used,temperature.gpu,power.draw,clocks.current.sm,clocks.current.memory -l 10`

Existing metrics we have
* For kernel utilization: `torch.cuda.utilization()`
* For memory utilization, we have metrics under `torch.cuda.memory`, e.g. the memory allocated with `torch.cuda.memory.memory_allocated()`
* For total available memory, we have `torch.cuda.get_device_properties(0).total_memory`

Which means the only metrics we're missing are
* Temperature: now in `torch.cuda.temperature()`
* Power draw: now in `torch.cuda.power()`
* Clock speed: now in `torch.cuda.clock_speed()`

With some important details on each

* Clock speed settings: I picked the SM clock domain which is documented here https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceEnumvs.html#group__nvmlDeviceEnumvs_1g805c0647be9996589fc5e3f6ff680c64
* Temperature: I use `pynvml.nvmlDeviceGetTemperature(handle, 0)` where 0 refers to the GPU die temperature
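A hedged polling sketch using the names from this message (the merged function names may differ, and `pynvml` must be installed):

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.utilization())       # kernel utilization (%)
    print(torch.cuda.memory_allocated())  # bytes currently allocated
    print(torch.cuda.temperature())       # GPU die temperature
```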
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91717
Approved by: https://github.com/ngimel
2023-02-24 00:38:03 +00:00
Atharva Kavitkar
627282fa6c Corrected grammar in contribution guide (#93014)
Corrected the grammar of a sentence in "Implementing Features or Fixing Bugs" section of the contribution guide.

**Before:**
Issues that are labeled first-new-issue, low, or medium priority provide the best entrance point are great places to start.

**After:**
Issues that are labeled first-new-issue, low, or medium priority provide the best entrance point _and_ are great places to start.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93014
Approved by: https://github.com/albanD, https://github.com/kit1980
2023-02-24 00:22:14 +00:00
fduwjj
b209d8fa0d [PT-D][Sequence Parallelism] Enable DTensor based Naive sequence parallelism (#94369)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94369
Approved by: https://github.com/wanchaol
2023-02-16 21:21:00 +00:00
Wanchao Liang
cd9ca4c73f [tp] additional doc fixes (#94786)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94786
Approved by: https://github.com/fduwjj
2023-02-15 21:25:26 +00:00
Yaoyao Ding
57b22bc6d8 [Dynamo] Backend registration with `entry_points` (#93873)
Fixes #91824

This PR adds a new dynamo backend registration mechanism through ``entry_points``. The ``entry_points`` of a package provide a way for the package to register a plugin for another one.

The docs of the new mechanism:
![image](https://user-images.githubusercontent.com/23381083/216133221-18cf18e2-6ad6-4cf7-8da2-9b9b883389c8.png)
(the typo '...named "my_backend" that has been..." has been fixed to '...named "my_compiler" that has been...')

# Discussion

## About the test
I did not add a test for this PR, as it is hard to either install a fake package during a test or manually hack the entry-points function by replacing it with a fake one. I have tested this PR offline with the hidet compiler and it works fine. Please let me know if you have any good ideas for testing this PR.

## About the dependency of ``importlib_metadata``
This PR adds a dependency on ``importlib_metadata`` for Python < 3.10, because the modern usage of ``importlib`` only became stable at that Python version (see the documentation of the importlib package [here](https://docs.python.org/3/library/importlib.html)). For Python < 3.10, the package ``importlib_metadata`` implements the same features as ``importlib``. The current PR hints the user to install ``importlib_metadata`` if their Python version is < 3.10.

## About the name and docs
Please let me know what you think of the name ``torch_dynamo_backend`` as the entry-point group name, and of the documentation for this registration mechanism.
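For illustration, a plugin package could register a backend roughly like this (a sketch assuming the proposed group name; `my_compiler` and the package names are hypothetical):

```python
# setup.py of a hypothetical plugin package
from setuptools import setup

setup(
    name="my-dynamo-backend",
    packages=["my_package"],
    entry_points={
        "torch_dynamo_backend": [
            "my_compiler = my_package.backends:my_compiler",
        ],
    },
)
```

Once installed, the backend would be discoverable by name, e.g. `torch.compile(model, backend="my_compiler")`.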

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93873
Approved by: https://github.com/malfet, https://github.com/jansel
2023-02-14 15:44:25 +00:00
fduwjj
39511697d4 [PT-D][BE] Update 2D parallelism API name and docs (#94771)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94771
Approved by: https://github.com/wanchaol
2023-02-14 08:13:15 +00:00
PyTorch MergeBot
28ed0bdb37 Revert "[tp] additional doc fixes (#94786)"
This reverts commit 7522ca55f1.

Reverted https://github.com/pytorch/pytorch/pull/94786 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but the doc failure looks related and they are also failing in trunk 7522ca55f1
2023-02-14 05:43:37 +00:00
Wanchao Liang
7522ca55f1 [tp] additional doc fixes (#94786)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94786
Approved by: https://github.com/fduwjj
2023-02-14 04:52:04 +00:00
Wanchao Liang
2db12e3844 [tp] minor update to TP docs (#94748)
minor update to TP docs for beta release
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94748
Approved by: https://github.com/fduwjj
2023-02-13 21:54:19 +00:00
Quajak
c0e7077674 Fix link in docs (#94686)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94686
Approved by: https://github.com/kit1980
2023-02-13 20:42:24 +00:00
Ramin Azarmehr
b57e6fdb50 [MPS] Enable Memory Leak Detection for test_mps.py (#94646)
- To check for Memory Leaks in `test_mps.py`, set the env-variable `PYTORCH_TEST_MPS_MEM_LEAK_CHECK=1` when running test_mps.py (used CUDA code as reference).
- Added support for the following new python interfaces in MPS module:
`torch.mps.[empty_cache(), set_per_process_memory_fraction(), current_allocated_memory(), driver_allocated_memory()]`
- Renamed `_is_mps_on_macos_13_or_newer()` to `_mps_is_on_macos_13_or_newer()`, and `_is_mps_available()` to `_mps_is_available()` to be consistent in naming with prefix `_mps`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94646
Approved by: https://github.com/malfet
2023-02-13 17:56:24 +00:00
Mikayla Gawarecki
5ce1fad711 Add rnn.unpad_sequence and rnn.unpack_sequence to documentation (#94316)
Fix #76064

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94316
Approved by: https://github.com/jbschlosser
2023-02-13 17:47:10 +00:00
Ramin Azarmehr
bdd8f518d7 [MPS] Add Python Module Bindings for the MPS backend (#94417)
- This PR is a prerequisite for the upcoming Memory Leak Detection PR.
- Enable global manual seeding via `torch.manual_seed()` + test case
- Add `torch.mps.synchronize()` to wait for MPS stream to finish + test case
- Enable the following python interfaces for MPS:
  `torch.mps.[get_rng_state(), set_rng_state(), synchronize(), manual_seed(), seed()]`
- Added some test cases in test_mps.py
- Added `mps.rst` to document the `torch.mps` module.
- Fixed the failure with `test_public_bindings.py`
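A short usage sketch of the new interfaces (runs only on an MPS-capable Mac):

```python
import torch

if torch.backends.mps.is_available():
    torch.manual_seed(0)             # global seeding now also covers MPS
    x = torch.randn(3, device="mps")
    torch.mps.synchronize()          # wait for the MPS stream to finish
    state = torch.mps.get_rng_state()
    torch.mps.set_rng_state(state)
```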

Description of new files added:
- `torch/csrc/mps/Module.cpp`: implements `torch._C` module functions for `torch.mps` and `torch.backends.mps`.
- `torch/mps/__init__.py`: implements Python bindings for `torch.mps` module.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94417
Approved by: https://github.com/albanD
2023-02-12 21:22:30 +00:00
Xuehai Pan
8d45f555d7 [BE] [1/3] Rewrite super() calls in caffe2 and benchmarks (#94587)
Rewrite calls to the Python built-in `super()`. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases where the rewrite would change the semantics are kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94587
Approved by: https://github.com/ezyang
2023-02-11 18:19:48 +00:00
PyTorch MergeBot
4fe365774a Revert "[MPS] Add Python Module Bindings for the MPS backend (#94417)"
This reverts commit beb4f5bf39.

Reverted https://github.com/pytorch/pytorch/pull/94417 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to break MacOS test in trunk bae397ec63
2023-02-11 05:24:45 +00:00
Ramin Azarmehr
beb4f5bf39 [MPS] Add Python Module Bindings for the MPS backend (#94417)
- This PR is a prerequisite for the upcoming Memory Leak Detection PR.
- Enable global manual seeding via `torch.manual_seed()` + test case
- Add `torch.mps.synchronize()` to wait for MPS stream to finish + test case
- Enable the following python interfaces for MPS:
  `torch.mps.[get_rng_state(), set_rng_state(), synchronize(), manual_seed(), seed()]`
- Added some test cases in test_mps.py
- Added `mps.rst` to document the `torch.mps` module.
- Fixed the failure with `test_public_bindings.py`

Description of new files added:
- `torch/csrc/mps/Module.cpp`: implements `torch._C` module functions for `torch.mps` and `torch.backends.mps`.
- `torch/mps/__init__.py`: implements Python bindings for `torch.mps` module.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94417
Approved by: https://github.com/albanD
2023-02-10 23:18:41 +00:00
Driss Guessous
70026aaad6 [SDPA] update type hint for scaled_dot_product_attention and documentation (#94008)
# Summary
- Adds type hinting support for SDPA
- Updates the documentation adding warnings and notes on the context manager
- Adds scaled_dot_product_attention to the non-linear activation function section of nn.functional docs
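A brief usage sketch of the function and the backend context manager documented here:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(2, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
out = F.scaled_dot_product_attention(q, k, v)

# Restrict which fused kernels may be used (CUDA context manager):
with torch.backends.cuda.sdp_kernel(
    enable_flash=False, enable_math=True, enable_mem_efficient=False
):
    out = F.scaled_dot_product_attention(q, k, v)
```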

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94008
Approved by: https://github.com/cpuhrsch
2023-02-10 18:02:43 +00:00
Xuehai Pan
a229b4526f [BE] Prefer dash over underscore in command-line options (#94505)
Preferring dash over underscore in command-line options. Add `--command-arg-name` to the argument parser. The old arguments with underscores `--command_arg_name` are kept for backward compatibility.

Both dashes and underscores are used in the PyTorch codebase. Some argument parsers only have dashes or only have underscores in arguments. For example, the `torchrun` utility for distributed training only accepts underscore arguments (e.g., `--master_port`). The dashes are more common in other command-line tools. And it looks to be the default choice in the Python standard library:

`argparse.BooleanOptionalAction`: 4a9dff0e5a/Lib/argparse.py (L893-L895)

```python
class BooleanOptionalAction(Action):
    def __init__(...):
            if option_string.startswith('--'):
                option_string = '--no-' + option_string[2:]
                _option_strings.append(option_string)
```

It adds `--no-argname`, not `--no_argname`. Also, typing `_` requires pressing the Shift key, unlike `-`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94505
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-09 20:16:49 +00:00
Xuehai Pan
69e0bda999 [BE] Import Literal, Protocol, and Final from standard library typing as of Python 3.8+ (#94490)
Changes:

1. `typing_extensions -> typing-extensions` in dependencies. Use a dash rather than an underscore to fit the [PEP 503: Normalized Names](https://peps.python.org/pep-0503/#normalized-names) convention.

```python
import re

def normalize(name):
    return re.sub(r"[-_.]+", "-", name).lower()
```

2. Import `Literal`, `Protocol`, and `Final` from the standard library `typing` module as of Python 3.8+
3. Replace `Union[Literal[XXX], Literal[YYY]]` with `Literal[XXX, YYY]`.
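For example, item 3 amounts to:

```python
from typing import Literal, Union

# Before
Mode = Union[Literal["train"], Literal["eval"]]

# After: equivalent and shorter
Mode = Literal["train", "eval"]
```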

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94490
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-09 19:17:49 +00:00
double7
685108b201 [docs] Fix incorrect wrapping of function (#94446)
The sample code in the documentation incorrectly wraps the function decorator. To fix this, update the attributes of `func` based on `torch_function`.

Fixes #94305

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94446
Approved by: https://github.com/ezyang
2023-02-09 16:01:10 +00:00
kshitij12345
4f3858c6d8 [functorch] linearize (#94173)
Fixes https://github.com/pytorch/functorch/issues/724

TODO:
* [x] Docs

NOTE: `const_fold` pass raises UserWarning -> https://github.com/pytorch/pytorch/issues/94374
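A hedged usage sketch (assuming the API is exposed as `torch.func.linearize`):

```python
import torch
from torch.func import linearize

def f(x):
    return x.sin()

x = torch.randn(3)
out, jvp_fn = linearize(f, x)  # computes f(x) once and returns a reusable JVP
tangent = torch.ones(3)
print(jvp_fn(tangent))         # equals x.cos() * tangent
```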

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94173
Approved by: https://github.com/Chillee
2023-02-09 15:45:08 +00:00
PyTorch MergeBot
e0e4f1a890 Revert "[functorch] linearize (#94173)"
This reverts commit b6b9e1e6e0.

Reverted https://github.com/pytorch/pytorch/pull/94173 on behalf of https://github.com/kshitij12345 due to Broke lint runner
2023-02-09 09:22:39 +00:00
Kshiteej K
b6b9e1e6e0 [functorch] linearize (#94173)
Fixes https://github.com/pytorch/functorch/issues/724

TODO:
* [x] Docs

NOTE: `const_fold` pass raises UserWarning -> https://github.com/pytorch/pytorch/issues/94374

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94173
Approved by: https://github.com/Chillee
2023-02-09 08:57:05 +00:00
fduwjj
41e3189222 [PT-D][Tensor parallelism] Add documentations for TP (#94421)
This is far from complete, and we will definitely polish it down the road.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94421
Approved by: https://github.com/wz337
2023-02-09 02:31:06 +00:00
Vasiliy Kuznetsov
a9f57db607 AO migration: migrate .rst files to new locations (#94211)
Summary:

Migrates the PyTorch documentation to point to the new locations
of AO code.  Context: https://github.com/pytorch/pytorch/issues/81667

Process:
1. run https://gist.github.com/vkuzo/c38d4ba201604579d7d316ec4a4692e7 for automated replacement
2. manually fix the doc build errors (by removing the module declarations which are now duplicate)

Test plan: CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94211
Approved by: https://github.com/jerryzh168
2023-02-07 02:32:23 +00:00
Jason Ansel
e071d72f3c Tag dynamo backends as debug/experimental (#93878)
Hides debug/experimental backends by default.

Before:
```
torch._dynamo.list_backends()
['aot_eager', 'aot_eager_decomp_partition', 'aot_torchxla_trace_once', 'aot_torchxla_trivial', 'aot_ts', 'aot_ts_nvfuser', 'cudagraphs', 'dynamo_accuracy_minifier_backend', 'dynamo_minifier_backend', 'eager', 'inductor', 'ipex', 'nvprims_aten', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'torchxla_trace_once', 'torchxla_trivial', 'ts', 'tvm']
```

After:
```
torch._dynamo.list_backends()
['aot_ts_nvfuser', 'cudagraphs', 'inductor', 'ipex', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'tvm']
```

Fixes https://github.com/pytorch/pytorch/issues/93733

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93878
Approved by: https://github.com/voznesenskym
2023-02-04 00:50:51 +00:00
Svetlana Karslioglu
5197496799 Add a private API banner (#93996)
Add a banner that will appear on all pages where the last segment of the URL starts with an underscore "_".
Example pages:
* https://pytorch.org/docs/master/_dynamo.html
* https://pytorch.org/docs/master/_modules/torch/_jit_internal.html
Sample screenshots:
<img width="885" alt="Screenshot 2023-02-03 at 1 13 47 PM" src="https://user-images.githubusercontent.com/5317992/216711948-6ba35d38-da8f-4145-9580-bafc921a1df5.png">
<img width="871" alt="Screenshot 2023-02-03 at 1 12 51 PM" src="https://user-images.githubusercontent.com/5317992/216711951-877a760e-3449-4593-b81c-14bf3b9943da.png">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93996
Approved by: https://github.com/malfet, https://github.com/albanD
2023-02-03 21:40:15 +00:00
Jason Ansel
5d709af59a Rename aot_cudagraphs to cudagraphs (#93821)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93821
Approved by: https://github.com/ezyang
2023-02-03 21:01:27 +00:00
Svetlana Karslioglu
3b7140d938 Add the new submission form (#94000)
Adding the new form for submitting topics on quarterly maintainers meetings.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94000
Approved by: https://github.com/orionr
2023-02-03 16:46:30 +00:00
soulitzer
77cbaedd5c [docs] Add section about tensor hooks on in-place in autograd note (#93116)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93116
Approved by: https://github.com/albanD
2023-02-01 17:35:21 +00:00
Ivan Kobzarev
9daca46dc4 [jit][await] Apply review comments (#93284)
Differential Revision: [D42849920](https://our.internmc.facebook.com/intern/diff/D42849920)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93284
Approved by: https://github.com/malfet
2023-02-01 07:22:06 +00:00
Svetlana Karslioglu
218d4eac56 Remove submission form (#93287)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93287
Approved by: https://github.com/orionr
2023-01-31 23:41:16 +00:00
akhilkedia
129a1bc715 Minor error in docs regarding execution time (#93258)
The previous sentence seemed to imply that sparse may not always be helpful, i.e., your execution time may increase when using sparse, but the docs stated otherwise.

A simple reordering of two words in the documentation to better align with the intended meaning.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93258
Approved by: https://github.com/cpuhrsch
2023-01-31 23:32:42 +00:00
Ivan Yashchuk
fba13d94a1 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

- [x] XLA PR: https://github.com/pytorch/xla/pull/4498
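The documented replacement is `torch.linalg.eigh`, e.g.:

```python
import torch

A = torch.randn(3, 3)
A = A + A.T  # symmetric input

# Before (removed by this PR):
# e, v = torch.symeig(A, eigenvectors=True)

# After:
e, v = torch.linalg.eigh(A)
```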

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980, https://github.com/malfet
2023-01-31 11:59:11 +00:00
William Wen
2a6e085704 Update custom backend docs (#92721)
Title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92721
Approved by: https://github.com/jansel
2023-01-30 23:54:49 +00:00
Ivan Kobzarev
2fc73622f8 [jit] Support Awaitable type (#90863)
We want to make TorchRec sharded models TorchScriptable.

TorchRec sharded models uses generic types Awaitable[W] and LazyAwaitable[W] (https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/types.py#L212).
In the sharded model those types are used instead of the contained type W, holding an initialization function that produces an object of type W.

At the moment the first attribute of W is requested, `LazyAwaitable[W]` will call its initialization function (on the same stack), cache the result inside, and work transparently as an object of W. So we can think of it as delayed object initialization.

To support this behavior in TorchScript, we propose a new TorchScript type: `Await`.
In eager mode it works the same as `LazyAwaitable[W]` in TorchRec, being dynamically typed: it acts as a type `W` while it is `Await[W]`.

Within TorchScript it is `Await[W]` and can only be explicitly converted to `W`, using the special function `torch.jit._awaitable_wait(aw)`.
Creation of this `Await[W]` is done via another special function, `torch.jit._awaitable(func, *args)`.

The semantics are close to `torch.jit.Future` with fork/wait, and it uses the same JIT mechanics (inlined fork closures), with the difference that it does not start the function in parallel on fork. It only stores the call as a lambda inside an IValue that will be invoked on the same thread when `torch.jit._awaitable_wait` is called.

For example (more examples in this PR `test/jit/test_await.py`)
```
      def delayed(z: Tensor) -> Tensor:
          return z * 3

      @torch.jit.script
      def fn(x: Tensor):
          aw: Await[Tensor] = torch.jit._awaitable(delayed, torch.eye(2))
          a = torch.eye(2)
          b = torch.jit._awaitable_wait(aw)
          return a + b + x
```

Function semantics:

`_awaitable(func: Callable[..., W], *args, **kwargs) -> Await[W]`

Creates an Await object that owns args and kwargs. On the first `_awaitable_wait` call it executes `func` and caches the result; subsequent `_awaitable_wait` calls return that cached result.

`_awaitable_wait(Await[W]) -> W`

Returns the cached result if this is not the first `_awaitable_wait` call on this Await object; otherwise executes the stored function and caches its result.

`_awaitable_nowait(W) -> Await[W]`

Creates a trivial `Await[W]` wrapper around the specified object, to be type-compliant in corner cases.
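
A hedged eager-mode sketch of the trivial-wrapper path (assuming eager behavior matches the semantics above):
```python
import torch

# Eager mode: the wrapper is transparent, so waiting just returns the value.
aw = torch.jit._awaitable_nowait(torch.ones(2) * 2)
print(torch.jit._awaitable_wait(aw))  # tensor([2., 2.])
```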

Differential Revision: [D42502706](https://our.internmc.facebook.com/intern/diff/D42502706)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90863
Approved by: https://github.com/davidberard98
2023-01-30 17:38:59 +00:00
Edward Z. Yang
c7b03010ec Split the aot/dynamo TORCHDYNAMO_REPRO_AFTER cases (#93226)
I often copy-paste this line, and it is annoying to have to edit its
value to select between aot and dynamo.
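
For reference, a hedged sketch of consuming the variable (import-time read of the env var is assumed):
```python
import os

# Set before torch._dynamo is imported, since the config reads the
# environment at import time (assumed); "aot" selects minification after
# AOTAutograd, "dynamo" after TorchDynamo tracing.
os.environ["TORCHDYNAMO_REPRO_AFTER"] = "aot"  # or "dynamo"

import torch._dynamo  # noqa: E402
```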

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93226
Approved by: https://github.com/desertfire
2023-01-30 14:23:16 +00:00
Felix Divo
219e9533f0 Improve autograd doc on complex numbers (#93065)
A tiny change to fix formatting and clarify a bit in [this section](https://pytorch.org/docs/stable/notes/autograd.html#what-are-complex-derivatives).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93065
Approved by: https://github.com/albanD
2023-01-27 09:36:38 +00:00
Sherlock Huang
a6ac922eab Rename Canonical Aten IR to Core Aten IR (#92904)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92904
Approved by: https://github.com/bdhirsh
2023-01-25 05:12:23 +00:00
PyTorch MergeBot
acdd462b1a Revert "Remove deprecated torch.symeig (#70988)"
This reverts commit d70ed68162.

Reverted https://github.com/pytorch/pytorch/pull/70988 on behalf of https://github.com/kit1980 due to Failing XLA tests, forward fix unsuccessful
2023-01-24 19:03:40 +00:00
Rodrigo Kumpera
9e56378ef2 Add documentation for DCP. (#92813)
This populates the website with some basic documentation.

It's far from ideal, as it should include some basic usage examples.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92813
Approved by: https://github.com/wz337
2023-01-24 17:21:51 +00:00
Ivan Yashchuk
d70ed68162 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980
2023-01-23 22:51:40 +00:00
Kazuaki Ishizaki
d40a4540d6 Fix typo under docs directory (#92762)
This PR fixes typos and URLs (`http` -> `https`) in `rst` files under the `docs` directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92762
Approved by: https://github.com/H-Huang
2023-01-23 18:07:22 +00:00
Masaki Kozuki
30876229a7 [mta] Backward of unary foreach functions (#89591)
As per the title, this PR defines the backward of those functions.

This doesn't implement forward-mode automatic differentiation as [the current codegen](a747326423/tools/autograd/gen_variable_type.py (L1513)) doesn't seem to handle `ArrayRef<Tensor>`.

Rel:
- https://github.com/pytorch/pytorch/issues/53796
- https://github.com/pytorch/pytorch/issues/58833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89591
Approved by: https://github.com/albanD
2023-01-23 08:28:06 +00:00
Edward Z. Yang
85a1f0223a Add a warning about performance cost of set_default_device (#92703)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92703
Approved by: https://github.com/albanD
2023-01-21 02:23:13 +00:00
Edward Z. Yang
5c6f5439b7 Implement SymBool (#92149)
We have known for a while that we should in principle support SymBool as a separate concept from SymInt and SymFloat (in particular, every distinct numeric type should get its own API). However, recent work with unbacked SymInts in, e.g., https://github.com/pytorch/pytorch/pull/90985 has made this a priority to implement. The essential problem is that our logic for computing the contiguity of tensors branches on the passed-in input sizes, and this causes us to require guards when constructing tensors from unbacked SymInts. Morally, this should not be a big deal because we only really care about the regular (non-channels-last) contiguity of the tensor, which should be guaranteed since most people aren't calling `empty_strided` on the tensor. However, because we store a bool (not a SymBool, which prior to this PR didn't exist) on TensorImpl, we are forced to *immediately* compute these values, even if the value ends up not being used at all. In particular, even when a user allocates a contiguous tensor, we still must compute channels-last contiguity (as some contiguous tensors are also channels-last contiguous, but others are not).

This PR implements SymBool, and makes TensorImpl use SymBool to store the contiguity information in ExtraMeta. There are a number of knock-on effects, which I now discuss below.

* I introduce a new C++ type SymBool, analogous to SymInt and SymFloat. This type supports logical and, logical or, and logical negation. I support the bitwise operations on this class (but not the conventional logic operators) to make it clear that logical operations on SymBool are NOT short-circuiting. I also, for now, do NOT support implicit conversion of SymBool to bool (creating a guard in this case). This does not matter too much in practice, as in this PR I did not modify the equality operations (e.g., `==` on SymInt) to return SymBool, so all preexisting implicit guards did not need to be changed. I also introduced symbolic comparison functions `sym_eq`, etc. on SymInt to make it possible to create SymBool. The current implementation of comparison functions makes it unfortunately easy to accidentally introduce guards when you do not mean to (as both `s0 == s1` and `s0.sym_eq(s1)` are valid spellings of the equality operation); in the short term, I intend to prevent excess guarding in this situation by unit testing; in the long term, making the equality operators return SymBool is probably the correct fix.
* ~~I modify TensorImpl to store SymBool for the `is_contiguous` fields and friends on `ExtraMeta`. In practice, this essentially meant reverting most of the changes from https://github.com/pytorch/pytorch/pull/85936 . In particular, the fields on ExtraMeta are no longer strongly typed; at the time I was particularly concerned about the giant lambda I was using as the setter getting a desynchronized argument order, but now that I have individual setters for each field, the only "big list" of boolean arguments is in the constructor of ExtraMeta, which seems like an acceptable risk. The semantics of TensorImpl are now that we guard only when you actually attempt to access the contiguity of the tensor via, e.g., `is_contiguous`. By and large, the contiguity calculation in the implementations now needs to be duplicated (as the boolean version can short-circuit, but the SymBool version cannot); you should carefully review the duplicate new implementations. I typically use the `identity` template to disambiguate which version of the function I need, and rely on overloading to allow for implementation sharing. The changes to the `compute_` functions are particularly interesting; for most of the functions, I preserved their original non-symbolic implementation, and then introduced a new symbolic implementation that is branch-less (making use of our new SymBool operations). However, `compute_non_overlapping_and_dense` is special; see the next bullet.~~ This appears to cause performance problems, so I am leaving this to an update PR.
* (Update: the Python side pieces for this are still in this PR, but they are not wired up until later PRs.) While the contiguity calculations are relatively easy to write in a branch-free way, `compute_non_overlapping_and_dense` is not: it involves a sort on the strides. While in principle we can still make it go through by using a data oblivious sorting network, this seems like too much complication for a field that is likely never used (because typically, it will be obvious that a tensor is non overlapping and dense, because the tensor is contiguous.) So we take a different approach: instead of trying to trace through the logic computation of non-overlapping and dense, we instead introduce a new opaque operator IsNonOverlappingAndDenseIndicator which represents all of the compute that would have been done here. This function returns an integer 0 if `is_non_overlapping_and_dense` would have returned `False`, and an integer 1 otherwise, for technical reasons (Sympy does not easily allow defining custom functions that return booleans). The function itself only knows how to evaluate itself if all of its arguments are integers; otherwise it is left unevaluated. This means we can always guard on it (as `size_hint` will always be able to evaluate through it), but otherwise its insides are left a black box. We typically do NOT expect this custom function to show up in actual boolean expressions, because we will typically shortcut it due to the tensor being contiguous. It's possible we should apply this treatment to all of the other `compute_` operations, more investigation necessary. As a technical note, because this operator takes a pair of a list of SymInts, we need to support converting `ArrayRef<SymNode>` to Python, and I also unpack the pair of lists into a single list because I don't know if Sympy operations can actually validly take lists of Sympy expressions as inputs. See for example `_make_node_sizes_strides`
* On the Python side, we also introduce a SymBool class, and update SymNode to track bool as a valid pytype. There is some subtlety here: bool is a subclass of int, so one has to be careful about `isinstance` checks (in fact, in most cases I replaced `isinstance(x, int)` with `type(x) is int` for expressly this reason). Additionally, unlike C++, I do NOT define bitwise inverse on SymBool, because it does not do the correct thing when run on booleans, e.g., `~True` is `-2`. (For that matter, they don't do the right thing in C++ either, but at least in principle the compiler can warn you about it with `-Wbool-operation`, and so the rule is simple in C++: only use logical operations if the types are statically known to be SymBool). Alas, logical negation is not overridable, so we have to introduce `sym_not`, which must be used in place of `not` whenever a SymBool can turn up. To avoid confusion with `__not__`, which may imply that `operators.__not__` might be acceptable to use (it isn't), our magic method is called `__sym_not__`. The other bitwise operators `&` and `|` do the right thing with booleans and are acceptable to use.
* There is some annoyance working with booleans in Sympy. Unlike int and float, booleans live in their own algebra and they support fewer operations than regular numbers. In particular, `sympy.expand` does not work on them. To get around this, I introduce `safe_expand`, which only calls expand on operations which are known to be expandable.
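
A small, hedged illustration of the Python-side rules above (`sym_not` is the negation helper named in this PR; its behavior on plain bools is assumed from the description):
```python
import torch

# `not` cannot be overridden in Python, so symbolic code calls sym_not;
# on a plain bool it simply negates.
print(torch.sym_not(True))  # False

# Why bitwise inverse is deliberately NOT defined on the Python SymBool:
print(~True)  # -2, because bool subclasses int

# ...and why `type(x) is int` replaces `isinstance(x, int)` in this PR:
print(isinstance(True, int), type(True) is int)  # True False
```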

TODO: this PR appears to greatly regress performance of symbolic reasoning. In particular, `python test/functorch/test_aotdispatch.py -k max_pool2d` performs really poorly with these changes. Need to investigate.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92149
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-01-21 02:21:56 +00:00
Will Constable
a2b8e891f6 Fix/modernize dynamo docs (#92572)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92572
Approved by: https://github.com/ezyang
2023-01-19 16:15:31 +00:00
Edward Z. Yang
6420fecdc4 Introduce sym_min and sym_max (#92107)
It turns out our old max/min implementation didn't do anything, because `__max__` and `__min__` are not actually magic methods in Python. So I give 'em the `sym_` treatment, similar to the other non-overrideable builtins.
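
A quick sketch of the new helpers (per the title; fallback to the builtins on plain numbers is assumed):
```python
import torch

# max()/min() have no overridable dunder, so symbolic code must call the
# sym_ variants; on plain numbers they fall back to the builtins.
print(torch.sym_max(3, 5), torch.sym_min(3, 5))  # 5 3
```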

NB: I would like to use `sym_max` when computing contiguous strides but this appears to make `python test/functorch/test_aotdispatch.py -v -k test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool2d_cpu_float32` run extremely slowly. Needs investigating.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92107
Approved by: https://github.com/albanD, https://github.com/voznesenskym, https://github.com/Skylion007
2023-01-18 20:57:27 +00:00
Richard Zou
98b78aa11c [autograd.Function] setup_context always appears on the Function (#92312)
Previously, we used the existence of setup_context to decide whether
forward should take a ctx object.

To be consistent with all of the other staticmethods (which always exist on
the autograd.Function), this PR changes it so that whether forward takes a
ctx object depends on whether the user overrides setup_context.
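
A minimal sketch of the resulting convention (separate forward and setup_context):
```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(x):  # no ctx: setup_context is overridden below
        return x ** 2

    @staticmethod
    def setup_context(ctx, inputs, output):
        x, = inputs
        ctx.save_for_backward(x)

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.randn(3, requires_grad=True)
Square.apply(x).sum().backward()
```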

Fixes https://github.com/pytorch/pytorch/issues/91451

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92312
Approved by: https://github.com/albanD, https://github.com/soulitzer
2023-01-18 02:55:42 +00:00
soulitzer
88366a9075 Document hooks ordering behavior in the autograd note (#91667)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91667
Approved by: https://github.com/albanD
2023-01-18 00:20:13 +00:00
soulitzer
388b245d54 Expose autograd.graph.Node as an abstract base class (#91475)
This PR:
- registers all of the codegened Nodes to the torch._C._functions module; this is where special nodes like AccumulateGrad are already registered.
- creates a autograd.graph.Node abstract base class that all of the newly registered nodes subclass from. We make the subclassing happen by implementing the ``__subclasshook__`` method
- enables static type checking to work and also enables Sphinx to generate documentation for the Node and its methods
- handles both the custom Function and codegened cases
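
A minimal check of the registration described above (a sketch):
```python
import torch
from torch.autograd.graph import Node

x = torch.randn(3, requires_grad=True)
y = (x * 2).sum()

# Codegened nodes register as virtual subclasses of the abstract base class:
assert isinstance(y.grad_fn, Node)
print(type(y.grad_fn).__name__)  # SumBackward0
```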

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91475
Approved by: https://github.com/albanD
2023-01-18 00:20:13 +00:00
Richard Zou
16f9d1bb83 [torch.func] Add migration guide from functorch (#91811)
Test Plan:
- view preview

Future:
- still need to figure out the make_fx situation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91811
Approved by: https://github.com/albanD
2023-01-17 22:14:42 +00:00
Richard Zou
2f9166ef89 [autograd.Function] Cleanup asymmetry in generate_vmap_rule and vmap (#91787)
This PR:
- changes generate_vmap_rule to either be True or False. Previously it
  could be True, False, or not set. This simplifies the implementation a
  bit.
- changes the vmap staticmethod to always be on the autograd.Function
  rather than only sometimes defined.
  This is how the other staticmethods (forward, backward, jvp) are
  implemented, and it allows us to document it.

There are 4 possible states for the autograd.Function w.r.t. to the
above:
- generate_vmap_rule is True, vmap staticmethod overriden. This raises
  an error when used with vmap.
- generate_vmap_rule is False, vmap staticmethod overriden. This is
  valid.
- generate_vmap_rule is True, vmap staticmethod not overriden. This is
  valid.
- generate_vmap_rule is False, vmap staticmethod not overriden. This
  raises an error when used with vmap.
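
A sketch of one of the two valid states above (generate_vmap_rule=True with no vmap override; torch.func.vmap assumed as the entry point):
```python
import torch

class MyExp(torch.autograd.Function):
    generate_vmap_rule = True  # valid: vmap staticmethod is not overridden

    @staticmethod
    def forward(x):
        return x.exp()

    @staticmethod
    def setup_context(ctx, inputs, output):
        ctx.save_for_backward(output)

    @staticmethod
    def backward(ctx, grad):
        out, = ctx.saved_tensors
        return grad * out

print(torch.func.vmap(MyExp.apply)(torch.randn(3)))
```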

Future:
- setup_context needs the same treatment, but that's a bit tricker to
  implement.

Test Plan:
- new unittest
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91787
Approved by: https://github.com/soulitzer
2023-01-17 13:36:34 +00:00
Salil Desai
da43584bef [Reland] Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#92081)
Summary:
X-link: https://github.com/facebookresearch/d2go/pull/459

Reland of D41690203 (370df963e0)

Remove MobileOptimizerType and all rewrite flags from torch.X and torch._C.X to clean up torch.X and torch._C.X namespaces

The affected rewrite flags are
- CONV_BN_FUSION
- FUSE_ADD_RELU
- HOIST_CONV_PACKED_PARAMS
- INSERT_FOLD_PREPACK_OPS
- REMOVE_DROPOUT
- VULKAN_AUTOMATIC_GPU_TRANSFER

Bc-Breaking Change:

Before this change, the rewrite flags were accessible through all of
1. torch.utils.mobile_optimizer.MobileOptimizerType.X
2. torch._C.MobileOptimizerType.X
3. torch.X
4. torch.MobileOptimizerType.X
5. torch._C.X

But after this change, only torch.utils.mobile_optimizer.MobileOptimizerType.X  (option 1 above) and the newly added torch._C._MobileOptimizerType.X remain

Corresponding updates to PyTorch Tutorial Docs are in https://github.com/pytorch/tutorials/pull/2163

Test Plan:
```buck test caffe2/test:test_mobile_optimizer```
```
Summary
  Pass: 6
  Skip: 1
    ↻ caffe2/test:test_mobile_optimizer - test_mobilenet_optimize_for_mobile (test_mobile_optimizer.TestOptimizer)
  ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/4222124793514412
```
___
```buck test caffe2/torch/fb/mobile/tests:model_exporter_tests```
Tests pass
___

With temporary testing changes in D41690204:

```buck run caffe2:test_rewrite_flags_api```
Before:
```
torch.utils.mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C._MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch._C' has no attribute '_MobileOptimizerType')
torch._C.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
```
After:
```
torch.utils.mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C._MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch._C' has no attribute 'MobileOptimizerType')
torch.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch' has no attribute 'VULKAN_AUTOMATIC_GPU_TRANSFER')
torch.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch' has no attribute 'MobileOptimizerType')
torch._C.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch._C' has no attribute 'VULKAN_AUTOMATIC_GPU_TRANSFER')
```

```buck test caffe2/test:public_bindings -- test_no_new_bindings```
```
Summary
  Pass: 1
  ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/7881299473114294
```

Reviewed By: SS-JIA

Differential Revision: D42442395

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92081
Approved by: https://github.com/albanD
2023-01-14 17:06:00 +00:00
Pearu Peterson
b3e4f5029b Add check-sparse-tensor-invariants flag to Context - 2nd try. (#92094)
This PR is a copy of https://github.com/pytorch/pytorch/pull/90849 that merge was reverted.

The PR adds "check sparse tensor invariants" flag to Context that when enabled will trigger sparse tensor data invariants checks in unsafe methods of constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to UI:

`torch.sparse.check_sparse_tensor_invariants` class provides different ways to enable/disable the invariant checking.

`torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional argument `check_invariants` to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.
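
A short usage sketch of the two UI changes (context-manager use of the flag class is assumed from its description):
```python
import torch

i = torch.tensor([[0, 1], [0, 1]])
v = torch.tensor([1.0, 2.0])

# Per-call override via the new kwarg:
t = torch.sparse_coo_tensor(i, v, (2, 2), check_invariants=True)

# Scoped enable; the flag class also works as a context manager:
with torch.sparse.check_sparse_tensor_invariants():
    t = torch.sparse_coo_tensor(i, v, (2, 2))
```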

The PR fixes https://github.com/pytorch/pytorch/issues/90833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92094
Approved by: https://github.com/cpuhrsch
2023-01-13 14:50:33 +00:00
andrewor14
0bd3fa3d22 [Quant][docs] Move parts of BackendConfig tutorial (#91999)
Summary: This commit moves the API specification section of
the BackendConfig tutorial to the docstrings, which is a more
suitable place for this content. This change also reduces some
duplication. There is no new content added in this change.

Reviewers: jerryzh168, vkuzo

Subscribers: jerryzh168, vkuzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91999
Approved by: https://github.com/vkuzo, https://github.com/jerryzh168
2023-01-13 05:59:22 +00:00
samdow
515dff7811 [functorch] move batch_norm_replacement to torch.func (#91412)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91412
Approved by: https://github.com/zou3519
2023-01-12 19:15:41 +00:00
PyTorch MergeBot
c7a22bb7c7 Revert "Add check-sparse-tensor-invariants flag to Context. (#90849)"
This reverts commit b9a035c1c5.

Reverted https://github.com/pytorch/pytorch/pull/90849 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-12 09:58:16 +00:00
Emilio Castillo
07e595e88a Add device_idx to free_fn in CUDAPluggableAllocator (#91398)
This was requested by NVIDIA folks: also track the device_id in the free function.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91398
Approved by: https://github.com/albanD
2023-01-12 05:03:48 +00:00
BowenBao
c537f5bee8 [ONNX] Documentation for torch.onnx.find_mismatch (#90728)
Doc preview:
* `find_mismatch`: https://docs-preview.pytorch.org/90728/onnx.html#torch.onnx.verification.find_mismatch
* `GraphInfo`: https://docs-preview.pytorch.org/90728/onnx.html#classes and https://docs-preview.pytorch.org/90728/generated/torch.onnx.verification.GraphInfo.html#torch.onnx.verification.GraphInfo
* `VerificationOptions`: https://docs-preview.pytorch.org/90728/onnx.html#classes and  https://docs-preview.pytorch.org/90728/generated/torch.onnx.verification.VerificationOptions.html#torch.onnx.verification.VerificationOptions
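
For orientation, a hedged invocation sketch (model/args are placeholders; method names per the linked GraphInfo docs):
```python
import torch
from torch.onnx import verification

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
args = (torch.randn(2, 4),)

# Returns a GraphInfo tree describing where ONNX/PyTorch outputs diverge.
graph_info = verification.find_mismatch(model, args)
graph_info.pretty_print_tree()
```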

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90728
Approved by: https://github.com/titaiwangms, https://github.com/justinchuby
2023-01-11 23:58:57 +00:00
Pearu Peterson
b9a035c1c5 Add check-sparse-tensor-invariants flag to Context. (#90849)
This PR adds "check sparse tensor invariants" flag to Context that when enabled will trigger sparse tensor data invariants checks in unsafe methods of constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to UI:

- `torch.enable_check_sparse_tensor_invariants` and `torch.is_check_sparse_tensor_invariants_enabled` functions to globally enable/disable the invariant checks and to retrieve the state of the feature, respectively
- `torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional argument `check_invariants` to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.

The PR also fixes https://github.com/pytorch/pytorch/issues/90833

# Main issue

*The following content is outdated after merging the PRs in this ghstack but kept for the record.*

The importance of this feature is that when enabling the invariants checks by default, say, via

<details>

```
$ git diff
diff --git a/torch/__init__.py b/torch/__init__.py
index c8543057c7..19a91d0482 100644
--- a/torch/__init__.py
+++ b/torch/__init__.py
@@ -1239,3 +1239,8 @@ if 'TORCH_CUDA_SANITIZER' in os.environ:

 # Populate magic methods on SymInt and SymFloat
 import torch.fx.experimental.symbolic_shapes
+
+# temporarily enable sparse tensor arguments validation in unsafe
+# constructors:
+
+torch._C._set_check_sparse_tensor_invariants(True)
```

</details>

a massive number of test failures/errors occur in test_sparse_csr.py tests:
```
$ pytest -sv test/test_sparse_csr.py
<snip>
==== 4293 failed, 1557 passed, 237 skipped, 2744 errors in 69.71s (0:01:09) ====
```
that means that we are silently constructing sparse compressed tensors that do not satisfy the sparse tensor invariants. In particular, the following errors are raised:

```
AssertionError: "resize_as_sparse_compressed_tensor_: self and src must have the same layout" does not match "expected values to be a strided and contiguous tensor"

RuntimeError: CUDA error: device-side assert triggered

RuntimeError: `col_indices[..., crow_indices[..., i - 1]:crow_indices[..., i]] for all i = 1, ..., nrows are sorted and distinct along the last dimension values` is not satisfied.

RuntimeError: expected col_indices to be a strided and contiguous tensor

RuntimeError: expected row_indices to be a strided and contiguous tensor

RuntimeError: expected values to be a strided and contiguous tensor

RuntimeError: for_each: failed to synchronize: cudaErrorAssert: device-side assert triggered

RuntimeError: tensor dimensionality must be sum of batch, base, and dense dimensionalities (=0 + 2 + 0) but got 3
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90849
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-01-11 01:05:14 +00:00
Kazuaki Ishizaki
4f91b8e0ee Fix typo under docs directory (#91871)
This PR fixes typos in '.rst' files under the 'docs' directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91871
Approved by: https://github.com/ngimel
2023-01-10 22:33:36 +00:00
PyTorch MergeBot
3aeb7127b4 Revert "Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#91600)"
This reverts commit 370df963e0.

Reverted https://github.com/pytorch/pytorch/pull/91600 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2023-01-10 21:38:40 +00:00
Salil Desai
370df963e0 Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#91600)
Summary:
X-link: https://github.com/facebookresearch/d2go/pull/452

Remove MobileOptimizerType and all rewrite flags from torch.X and torch._C.X to clean up torch.X and torch._C.X namespaces

The affected rewrite flags are
- CONV_BN_FUSION
- FUSE_ADD_RELU
- HOIST_CONV_PACKED_PARAMS
- INSERT_FOLD_PREPACK_OPS
- REMOVE_DROPOUT
- VULKAN_AUTOMATIC_GPU_TRANSFER

Bc-Breaking Change:

Before this change, the rewrite flags were accessible through all of
1. torch.utils.mobile_optimizer.MobileOptimizerType.X
2. torch._C.MobileOptimizerType.X
3. torch.X
4. torch.MobileOptimizerType.X
5. torch._C.X

But after this change, only torch.utils.mobile_optimizer.MobileOptimizerType.X  (option 1 above) and the newly added torch._C._MobileOptimizerType.X remain

Corresponding updates to PyTorch Tutorial Docs are in https://github.com/pytorch/tutorials/pull/2163

Test Plan:
```buck test caffe2/test:test_mobile_optimizer```
```
Summary
  Pass: 6
  Skip: 1
    ↻ caffe2/test:test_mobile_optimizer - test_mobilenet_optimize_for_mobile (test_mobile_optimizer.TestOptimizer)
  ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/4222124793514412
```
___

With temporary testing changes in D41690204:

```buck run caffe2:test_rewrite_flags_api```
Before:
```
torch.utils.mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C._MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch._C' has no attribute '_MobileOptimizerType')
torch._C.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
```
After:
```
torch.utils.mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C._MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result: 
torch._C.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch._C' has no attribute 'MobileOptimizerType')
torch.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch' has no attribute 'VULKAN_AUTOMATIC_GPU_TRANSFER')
torch.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch' has no attribute 'MobileOptimizerType')
torch._C.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected:  | Result:  (module 'torch._C' has no attribute 'VULKAN_AUTOMATIC_GPU_TRANSFER')
```

```buck test caffe2/test:public_bindings -- test_no_new_bindings```
```
Summary
  Pass: 1
  ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/7881299473114294
```

Differential Revision: D41690203

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91600
Approved by: https://github.com/albanD, https://github.com/malfet
2023-01-10 20:16:53 +00:00
Sean Silva
e9cd7e0869 [dynamo] Fix rst syntax for list (#90390)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90390
Approved by: https://github.com/soumith
2023-01-10 19:56:26 +00:00
Edward Z. Yang
333540a458 Reland "Add torch.utils.device_mode" (#91796)
Original PR https://github.com/pytorch/pytorch/pull/91525

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91796
Approved by: https://github.com/albanD
2023-01-09 20:57:12 +00:00
Will Constable
630ef6c711 Fix Dynamo+DDP documentation (#91832)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91832
Approved by: https://github.com/soumith, https://github.com/davidberard98
2023-01-09 17:35:49 +00:00
PyTorch MergeBot
9b415240d4 Revert "Reland "Add torch.utils.device_mode" (#91796)"
This reverts commit 81b5eff3c3.

Reverted https://github.com/pytorch/pytorch/pull/91796 on behalf of https://github.com/huydhn due to This breaks trunk with the following failed test https://hud.pytorch.org/failure/test_jit_save%2CTestTracer
2023-01-09 04:45:47 +00:00
Edward Z. Yang
81b5eff3c3 Reland "Add torch.utils.device_mode" (#91796)
Original PR https://github.com/pytorch/pytorch/pull/91525

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91796
Approved by: https://github.com/albanD
2023-01-08 03:44:56 +00:00
drisspg
eb8547e939 Add a NestedTensor Readme (#91472)
# Summary
This PR adds a NestedTensor Readme which explains the code structure and will hopefully serve as a reference point for new contributors, especially if they would like to implement a NestedTensor kernel.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91472
Approved by: https://github.com/mikaylagawarecki, https://github.com/cpuhrsch
2023-01-06 14:44:55 +00:00
PyTorch MergeBot
f571ae4fdb Revert "Make torch.device usable as a context manager (#91525)"
This reverts commit 619d52a5d2.

Reverted https://github.com/pytorch/pytorch/pull/91525 on behalf of https://github.com/mehtanirav due to Internal breakages
2023-01-05 21:34:50 +00:00
Edward Z. Yang
619d52a5d2 Make torch.device usable as a context manager (#91525)
Fixes https://github.com/pytorch/pytorch/issues/82296
Fixes https://github.com/pytorch/pytorch/issues/27878
Fixes https://github.com/pytorch/pytorch/issues/260
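
A minimal sketch of the new usage:
```python
import torch

with torch.device("cpu"):
    x = torch.randn(3)  # factory functions default to the active device
print(x.device)  # cpu
```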

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91525
Approved by: https://github.com/albanD
2023-01-04 01:32:00 +00:00
samdow
162474d7fd [functorch] add new ensembling api, demonstrate in example (#88850)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88850
Approved by: https://github.com/zou3519
2023-01-04 00:33:14 +00:00
samdow
c5e5916fff [functorch] add functorch functional_call, update tests to test this (#89213)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89213
Approved by: https://github.com/zou3519
2023-01-04 00:33:14 +00:00
Richard Zou
264f5ed516 [autograd.Function] Add docs on the functorch interaction (#91452)
This PR:
- Updates autograd.Function.forward docs to reflect how you either
  define a forward with ctx or a separate forward and setup_context
- Updates the "Extending Autograd" docs to suggest the usage of
  autograd.Function with separate forward and setup_context. This should
  be the default because there is a low barrier to go from this to
  an autograd.Function that is fully supported by functorch transforms.
- Adds a new "Extending torch.func with autograd.Function" doc that
  explains how to use autograd.Function with torch.func. It also
  explains how to use generate_vmap_rule and how to manually write a
  vmap staticmethod.

While writing this, I noticed that the implementation of
setup_context staticmethod/generate_vmap_rule/vmap staticmethod are a
bit inconsistent with the other method/attributes on autograd.Function:
- https://github.com/pytorch/pytorch/issues/91451
- I'm happy to fix those if we think it is a problem, either in this PR
  or a followup (this PR is getting long, I want some initial docs
  out that I can point early adopters at, and fixing the problems in the
  future isn't really BC-breaking).

Test Plan:
- view docs preview
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91452
Approved by: https://github.com/soulitzer
2023-01-04 00:28:19 +00:00
Richard Zou
31e66ca4ef [torch.func] Add docs (#91319)
Docs copy-pasted from functorch docs with minor adjustments. We are
keeping the functorch docs for BC, though that's up for debate -- we
could also just say "see .. in torch.func" for some, but not all doc
pages (we still want to keep around any examples that use
make_functional so that users can tell what the difference between that
and the new functional_call is).

Test Plan:
- docs preview
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91319
Approved by: https://github.com/samdow
2022-12-30 02:51:18 +00:00
Kurt Mohler
08a47549af Rename Tensor._storage to Tensor.untyped_storage and update docs (#91414)
Fixes #89224

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91414
Approved by: https://github.com/ezyang
2022-12-28 19:21:34 +00:00
Joel Schlosser
8b55b86dbd Move sym_int and sym_float alongside SymInt / SymFloat in base torch package (#91317)
This PR moves the definitions for:
* `sym_int`
* `sym_ceil` (used only for `sym_int`)
* `sym_floor` (used only for `sym_int`)
* `sym_float`

from `torch/fx/experimental/symbolic_shapes.py` to `torch/__init__.py`, where `SymInt` and `SymFloat` are already defined.

This removes the need for several in-line imports, and enables proper JIT script gating for #91318. I'm very open to doing this in a better way!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91317
Approved by: https://github.com/ezyang, https://github.com/anijain2305
2022-12-28 16:08:16 +00:00
Salahuddin
f1d8fef4d4 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py `
- `pytorch/docs/XXX.rst` files. Here XXX represents all those files where I made the change

Although I have added `softmax` to the `docs` directory, I was not sure which files/folders required the edits, so there could be issues

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-28 15:06:24 +00:00
PyTorch MergeBot
af7132302a Revert "Softmax added to tensor, torch and docs (#91292)"
This reverts commit f8b28799f8.

Reverted https://github.com/pytorch/pytorch/pull/91292 on behalf of https://github.com/weiwangmeta due to breaking internal distributed testing builds
2022-12-28 14:30:46 +00:00
Salahuddin
f8b28799f8 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py `
- `pytorch/docs/XXX.rst` files. Here XXX represents all those files where I made the change

Although I have added `softmax` to the `docs` directory, I was not sure which files/folders required the edits, so there could be issues

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-25 12:59:45 +00:00
Ikko Ashimine
a188e6ddc0 Fix typo in troubleshooting.rst (#91301)
enviornment -> environment

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91301
Approved by: https://github.com/msaroufim
2022-12-23 21:39:38 +00:00
Takeshi Watanabe
55749b9c41 [dynamo] Write full code of how to enable output_code (#91230)
Ref https://github.com/pytorch/pytorch/pull/91223
Since it was trickier than I'd expected.
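
For reference, a hedged sketch of the full enabling code (config knob names of this era are assumed):
```python
import logging
import torch._dynamo

torch._dynamo.config.log_level = logging.INFO  # raise log verbosity
torch._dynamo.config.output_code = True        # log the generated code
```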

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91230
Approved by: https://github.com/soumith
2022-12-22 14:09:06 +00:00
bowen0701
e803d336eb Fix missing indentation in serialization.rst (#91253)
Fixes #ISSUE_NUMBER

In serialization.rst, fix missing indentation in class ControlFlowModule's forward().
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91253
Approved by: https://github.com/kit1980
2022-12-21 20:14:44 +00:00
Eddie Yan
8b617f813d [cuBLAS] Add an option to disable reduced precision reductions for BF16 GEMM (#89172)
Essentially the same change as #67946, except that the default is to disallow reduced precision reductions in `BFloat16` GEMMs (for now). If performance is severely regressed, we can change the default, but this option appears to be necessary to pass some `addmm` `BFloat16` tests on H100.
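
A sketch of the new toggle (flag name assumed by analogy with the FP16 flag from #67946):
```python
import torch

# Opt back in to reduced-precision BF16 GEMM reductions for speed
# (disallowed by default as of this PR):
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = True
```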

CC @ptrblck @ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89172
Approved by: https://github.com/ngimel
2022-12-21 18:58:28 +00:00
Takeshi Watanabe
0476201482 Update debug option for torch._dynamo (#91223)
The currently documented option seems outdated compared to https://www.youtube.com/watch?v=egZB5Uxki0I

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91223
Approved by: https://github.com/ngimel
2022-12-21 05:06:42 +00:00
richardachen
f460893cec Update optim.rst (#91195)
Fixes #91080

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91195
Approved by: https://github.com/kit1980
2022-12-20 23:22:25 +00:00
Richard Zou
41846e205e [torch.func] Setup torch.func, populate it with all transforms (#91016)
This PR sets up torch.func and populates it with the following APIs:
- grad
- grad_and_value
- vjp
- jvp
- jacrev
- jacfwd
- hessian
- functionalize
- vmap

It also renames all instances of `functorch` in the APIs for those docs
to `torch.func`.

We rewrite the `__module__` fields on some of the above APIs so that the
APIs fit PyTorch's public api definition.
- For an API to be public, it must have a `__module__` that points to a
  public PyTorch submodule. However, `torch._functorch.eager_transforms`
  is not public due to the leading underscore.
- The solution is to rewrite `__module__` to point to where the API is
  exposed (torch.func). This is what both Numpy and JAX do for their
  APIs.
- h/t pmeier in
  https://github.com/pytorch/pytorch/issues/90284#issuecomment-1348595246
  for idea and code
- The helper function, `exposed_in`, is confined to
  torch._functorch/utils for now because we're not completely sure if
  this should be the long-term solution.

Implications for functorch.* APIs:
- functorch.grad is the same object as torch.func.grad
- this means that the functorch.grad docstring is actually the
  torch.func.grad docstring and will refer to torch.func instead of
  functorch.
- This isn't really a problem since the plan on record is to deprecate
  functorch in favor of torch.func. We can fix these if we really want,
  but I'm not sure if a solution is worth maintaining.

Test Plan:
- view docs preview

Future:
- vmap should actually just be torch.vmap. This requires an extra step
  where I need to test internal callsites, so I'm separating it into a
  different PR.
- make_fx should be in torch.func to be consistent with `import
  functorch`. This one is a bit more of a headache to deal with w.r.t.
  public API, so I'm going to deal with it separately.
- beef up func.rst with everything else currently on the functorch
  documentation website. func.rst is currently just an empty shell.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91016
Approved by: https://github.com/samdow
2022-12-20 00:00:52 +00:00
Bin Bao
548960f68e Replace TORCHINDUCTOR_TRACE with TORCH_COMPILE_DEBUG in documentation (#91011)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91011
Approved by: https://github.com/mlazos, https://github.com/jansel, https://github.com/msaroufim
2022-12-19 14:45:27 +00:00
Alvaro Gaona
ddf5b68dcb Nuttall window (#90103)
Relates #85366
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90103
Approved by: https://github.com/lezcano
2022-12-16 09:05:53 +00:00
HDCharles
1ca9d43d4e [ao] quantize.py fixing public v private (#87521)
Summary: made _register_activation_post_process_hook, _add_observer,
_get_unique_devices_, _get_observer_dict private

Test Plan: python test/test_public_bindings.py

Differential Revision: [D40709277](https://our.internmc.facebook.com/intern/diff/D40709277)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87521
Approved by: https://github.com/jerryzh168
2022-12-14 22:50:39 +00:00
Sherlock Huang
b4b8a56589 Doc for Canonical Aten and Prims IR (#90644)
As per the title.

Sample output: https://docs-preview.pytorch.org/90644/ir.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90644
Approved by: https://github.com/ezyang
2022-12-13 21:30:47 +00:00
Arek Sredzki
44dac51c36 Improve Autograd Documentation Clarity (#89401)
This makes minor adjustments to the autograd docs, improving clarity and resolving grammatical errors

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89401
Approved by: https://github.com/kit1980
2022-12-06 06:45:04 +00:00
xiny
57bb4cd046 [Doc][Distributed] Add missing functions to distributed.rst (#89905)
Add missing documentation for `torch.distributed.all_to_all_single` and other functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89905
Approved by: https://github.com/kit1980
2022-12-04 07:22:54 +00:00
Christian Puhrsch
a306f85ea7 Update Persons of Interest (#90069)
Creates sections for contributors to MaskedTensor and NestedTensor and updates torchaudio.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90069
Approved by: https://github.com/drisspg, https://github.com/mikaylagawarecki, https://github.com/nateanl
2022-12-02 23:06:57 +00:00
PyTorch MergeBot
cba96366a2 Revert "remove torch.equal usages (#89527)"
This reverts commit 4095ef8b80.

Reverted https://github.com/pytorch/pytorch/pull/89527 on behalf of https://github.com/clee2000 due to broke periodic multigpu tests 4095ef8b80 https://github.com/pytorch/pytorch/actions/runs/3592806602/jobs/6049368502
2022-12-02 21:36:13 +00:00
XiaobingSuper
8b2f9887bf update quantization doc: add x86 backend as default backend of server inference (#86794)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86794
Approved by: https://github.com/jgong5, https://github.com/kit1980
2022-12-02 02:10:25 +00:00
Nikita Shulga
768bd3fb4a Add torch.compile implementation (#89607)
`torch.compile` can be used either as a decorator or to optimize a model directly, for example:
```
@torch.compile
def foo(x):
  return torch.sin(x) + x.max()
```
or
```
mod = torch.nn.ReLU()
optimized_mod = torch.compile(mod, mode="max-autotune")
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89607
Approved by: https://github.com/soumith
2022-12-01 20:17:52 +00:00
Svetlana Karslioglu
015b05af18 Editorial pass on Dynamo docs (#89921)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89921
Approved by: https://github.com/msaroufim
2022-12-01 18:53:16 +00:00
Philip Meier
4095ef8b80 remove torch.equal usages (#89527)
Preparation for the next PR in this stack: #89559.

I replaced

- `self.assertTrue(torch.equal(...))` with `self.assertEqual(..., rtol=0, atol=0, exact_device=True)`,
- the same for `self.assertFalse(...)` with `self.assertNotEqual(...)`, and
- `assert torch.equal(...)` with `torch.testing.assert_close(..., rtol=0, atol=0)` (note that we don't need to set `check_device=True` here since that is the default).

There were a few instances where the result of `torch.equal` is used directly. In those cases I replaced it with `(... == ...).all().item()`, sometimes also dropping the `.item()` depending on the context.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89527
Approved by: https://github.com/mruberry
2022-12-01 11:22:52 +00:00
Philip Meier
d72cd4c4e5 document torch.testing.assert_allclose (#89526)
After our failed attempt to remove `assert_allclose` in #87974, we decided to add it to the documentation after all. Although we dropped the expected removal date, the function continues to be deprecated in favor of `assert_close`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89526
Approved by: https://github.com/mruberry
2022-12-01 11:22:50 +00:00
Wanchao Liang
4451eb24e6 Move tensor_parallel out to distributed.tensor folder (#89878)
This PR moves tensor parallel from torch.distributed._tensor.parallel
to torch.distributed.tensor.parallel, to prepare for the beta release
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89878
Approved by: https://github.com/fduwjj
2022-11-30 22:13:10 +00:00
Will Constable
447283752c Update DDP docs for Dynamo/DDPOptimizer (#89096)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89096
Approved by: https://github.com/msaroufim
2022-11-30 05:50:12 +00:00
andrewor14
fb47a66989 [Quant][docs] Use get_default_qconfig_mapping (#87299)
Summary: The recommended way to use QConfigMapping is through
`get_default_qconfig_mapping`. However, the docs still reference
usages of `QConfigMapping().set_global(...)`. This doesn't
actually work well in practice when the model has fixed-qparams
ops, for example. This commit updates these usages.
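
A minimal sketch of the recommended usage (backend string assumed; see the `torch.ao.quantization` docs):
```python
from torch.ao.quantization import get_default_qconfig_mapping

# Start from the default mapping rather than QConfigMapping().set_global(...),
# which mishandles fixed-qparams ops:
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
```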

Reviewers: vkuzo

Subscribers: vkuzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87299
Approved by: https://github.com/jerryzh168
2022-11-29 18:08:16 +00:00
Mark Saroufim
9048cf16fe Move Dynamo docs back to core (#89769)
With contributions from @svekars and @malfet

Waiting for doc build job to complete
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89769
Approved by: https://github.com/soumith, https://github.com/malfet
2022-11-29 04:38:53 +00:00
PyTorch MergeBot
47cca5e444 Revert "Move Dynamo docs back to core (#89769)"
This reverts commit be2816db18.

Reverted https://github.com/pytorch/pytorch/pull/89769 on behalf of https://github.com/clee2000 due to broke lint
2022-11-28 21:04:33 +00:00
eqy
8321066031 Tweak formatting of note on macros (#89598)
For readability when viewing the rendered file, e.g., in a browser.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89598
Approved by: https://github.com/kit1980
2022-11-28 20:42:30 +00:00
Mark Saroufim
be2816db18 Move Dynamo docs back to core (#89769)
With contributions from @svekars and @malfet

Waiting for doc build job to complete
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89769
Approved by: https://github.com/soumith
2022-11-28 20:32:05 +00:00
albanD
098cbe23c3 Update masked.rst (#89758)
Fix https://github.com/pytorch/pytorch/issues/89734

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89758
Approved by: https://github.com/anjali411, https://github.com/malfet, https://github.com/cpuhrsch
2022-11-28 17:55:43 +00:00
Alvaro Gaona
abb446af8c Implement old windows in Python (#87082)
Relates to #85366

- Bartlett, Blackman, Hamming, Hann.
- Except Kaiser, which will be in a different PR.
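
A hedged usage sketch (module path per the torch.signal docs; the `sym` kwarg is assumed):
```python
from torch.signal import windows

w = windows.hann(16)                 # symmetric by default
b = windows.blackman(16, sym=False)  # periodic variant
```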

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87082
Approved by: https://github.com/mruberry, https://github.com/lezcano
2022-11-25 11:09:28 +00:00
Emilio Castillo
c9d4390d13 Add Pluggable CUDA allocator backend (#86786)
Fixes #43144

This uses the Backend system added by [82682](https://github.com/pytorch/pytorch/pull/82682) to change allocators dynamically during code execution. This will allow us to use RMM, to use CUDA managed memory for portions of the code that do not fit in GPU memory, to write static memory allocators that reduce fragmentation while training models, and to improve interoperability with external DL compilers/libraries.

For example, we could have the following allocator in c++

```c++
#include <sys/types.h>
#include <cuda_runtime_api.h>
#include <iostream>

extern "C" {
void* my_malloc(ssize_t size, int device, cudaStream_t stream) {
   void *ptr;
   std::cout<<"alloc "<< size<<std::endl;
   cudaMalloc(&ptr, size);
   return ptr;
}

void my_free(void* ptr) {
   std::cout<<"free "<<std::endl;
   cudaFree(ptr);
}
}
```

Compile it as a shared library
```
nvcc allocator.cc -o alloc.so -shared --compiler-options '-fPIC'
```

And use it from PyTorch as follows

```python
import torch

# Init caching
# b = torch.zeros(10, device='cuda')
new_alloc = torch.cuda.memory.CUDAPluggableAllocator('alloc.so', 'my_malloc', 'my_free')
old = torch.cuda.memory.get_current_allocator()
torch.cuda.memory.change_current_allocator(new_alloc)
b = torch.zeros(10, device='cuda')
# This will error since the current allocator was already instantiated
torch.cuda.memory.change_current_allocator(old)
```

Things to discuss
- How to test this, since it needs compiling external code ...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86786
Approved by: https://github.com/albanD
2022-11-23 17:54:36 +00:00
Nikita Shulga
2de38a0714 Add torch._dynamo to docs (#89510)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89510
Approved by: https://github.com/msaroufim
2022-11-23 16:33:13 +00:00
Li-Huai (Allan) Lin
c2ce79f06e Fix dev-discuss link in the maintainer docs (#89493)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89493
Approved by: https://github.com/H-Huang
2022-11-22 19:33:21 +00:00
AllenTiTaiWang
126e44173d [ONNX] Add onnx-script into ONNX docs (#89078)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89078
Approved by: https://github.com/BowenBao
2022-11-17 06:27:17 +00:00
Kazuaki Ishizaki
a5f04e9a91 Fix typos in .md and .rst files (#88962)
This PR fixes the typo `Github` in `.md` and `.rst` files:
`Github` -> `GitHub`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88962
Approved by: https://github.com/kit1980
2022-11-17 03:37:02 +00:00
Mikayla Gawarecki
5848704ef8 Removed unnecessary check in select_nested (#89150)
Implementation in #88585 should work for all dimensions. Removed the unnecessary check that constrained select to dims 0 and 1.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89150
Approved by: https://github.com/cpuhrsch
2022-11-16 22:11:37 +00:00
Iris
aee96bbf5a [PT-D][Checkpointing] Move distributed checkpointing from torch.distributed._shard.checkpoint to torch.distributed.checkpoint (#88698)
Context in RFC: https://github.com/pytorch/pytorch/issues/86620

.rst file will be finalized in subsequent PRs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88698
Approved by: https://github.com/wanchaol
2022-11-16 21:06:38 +00:00
BowenBao
0581331963 [ONNX] Document ONNX diagnostics (#88371)
Reference pages:
- Landing page: https://docs-preview.pytorch.org/88371/onnx_diagnostics.html
- Individual rule: https://docs-preview.pytorch.org/88371/generated/onnx_diagnostics_rules/POE0004%3Aoperator-supported-in-newer-opset-version.html

An initial PR to set up document generation for ONNX diagnostics.
* Add document page for ONNX diagnostics.
* Add document generation for diagnostics rules from `rules.yaml`.
* Add dependency on `myst-parser` for markdown to rst parsing.

More content to be added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88371
Approved by: https://github.com/abock, https://github.com/justinchuby, https://github.com/malfet, https://github.com/kit1980
2022-11-16 19:21:46 +00:00
Driss Guessous
b291c1213a Create native function for determining which implementation of SDP to call (#89029)
# Summary
Creates a callable native function that can determine which implementation of scaled dot product will get called. This allows us to re-order the runtime dispatch of SDP to enable autograd.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89029
Approved by: https://github.com/cpuhrsch
2022-11-16 03:07:54 +00:00
Kevin Tse
be8d88f8d0 [DataLoader] Removing DataLoader2 related code (#88848)
Removing these lines of code as `DataLoader2` has been added to [TorchData](https://github.com/pytorch/data). I'm importing this to confirm it will not impact internal code.

Differential Revision: [D41201578](https://our.internmc.facebook.com/intern/diff/D41201578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88848
Approved by: https://github.com/ejguan
2022-11-11 22:27:01 +00:00
Kurt Mohler
ee28b865ee Deprecate TypedStorage, its derived classes, and all of their public methods (#85303)
Part of #85302

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85303
Approved by: https://github.com/ezyang
2022-11-08 18:11:01 +00:00
Howard Huang
bc66ddb5cb Add torch.distributed.DistBackendError exception type, thrown from C10D_NCCL_CHECK (#88134)
Currently all of the distributed errors are thrown from the `TORCH_CHECK` macro, which throws a generic `RuntimeError`. This change introduces a new error type, `DistBackendError`, which derives from `RuntimeError` to signify that there was an error with the backend communication library. This allows for better error handling and analysis at higher levels in the stack. Motivation: https://docs.google.com/document/d/1j6VPOkC6znscliFuiDWMuMV1_fH4Abgdq7TCHMcXai4/edit#heading=h.a9rc38misyx8

Changes:
- introduce new error type
- Update `C10D_NCCL_CHECK`

Sample script to demonstrate the new error type:

```python
# python -m torch.distributed.run --nproc_per_node=2 <script>.py

import torch
import torch.distributed as dist

if __name__ == "__main__":
    dist.init_process_group("nccl")
    dist.broadcast(torch.tensor([1, 2, 3]).cuda(), 0)
```
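
Since `DistBackendError` derives from `RuntimeError`, existing handlers keep working; a hedged sketch of targeted handling:
```python
import torch
import torch.distributed as dist

try:
    dist.init_process_group("nccl")
    dist.broadcast(torch.tensor([1, 2, 3]).cuda(), 0)
except dist.DistBackendError:
    # NCCL-level failures are now distinguishable, while existing
    # `except RuntimeError` handlers keep working via inheritance.
    print("backend communication failure")
```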

Differential Revision: [D40998803](https://our.internmc.facebook.com/intern/diff/D40998803)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88134
Approved by: https://github.com/rohan-varma
2022-11-08 13:26:42 +00:00
lezcano
d453b3c4d4 Add a note on the stability of linalg functions. (#88313)
This was long due, as it keeps coming up in issues.

Fixes https://github.com/pytorch/pytorch/issues/85950
Fixes https://github.com/pytorch/pytorch/issues/59720
Fixes https://github.com/pytorch/pytorch/issues/59782

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88313
Approved by: https://github.com/soumith, https://github.com/mruberry
2022-11-07 22:44:23 +00:00
Codrin Popa
5b767d404e Modified roundup_power2_divisions to specify the number of divisions for each power of two interval (#87290)
Summary:
Improved the roundup_power2_divisions knob so it allows better control of rounding in the PyTorch CUDA Caching Allocator.

This new version allows setting the number of divisions per power-of-two interval, starting from 1MB and ending at 64GB and above. An example use case is when rounding is desirable for small allocations, but there are also very large allocations which are persistent and thus would not benefit from rounding, taking up extra space instead.
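
A sketch of how the knob is set, via the allocator's `PYTORCH_CUDA_ALLOC_CONF` environment variable (the bracketed per-interval syntax below is an assumption based on the description, not quoted from this commit):

```python
import os

# Must be set before the CUDA caching allocator is initialized.
# Hypothetical syntax: 8 divisions up to 256MB, 2 up to 512MB, 1 above.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "roundup_power2_divisions:[256:8,512:2,>:1]"
)

import torch  # imported after the env var on purpose

# This allocation's request size is rounded per the configured divisions.
x = torch.empty(300 * 1024 * 1024, dtype=torch.uint8, device="cuda")
```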

Test Plan: Tested locally

Differential Revision: D40103909

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87290
Approved by: https://github.com/zdevito
2022-11-04 19:31:16 +00:00
Pruthvi Madugundu
fbd08fb358 Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)
- Asserts for CUDA are enabled by default
- Disabled for ROCm by default by setting `TORCH_DISABLE_GPU_ASSERTS` to `ON`
- Can be enabled for ROCm by setting the above variable to `OFF` during the build, or forcefully enabled by setting `ROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON`

This is a follow-up change as per the comment in PR #81790, comment [link](https://github.com/pytorch/pytorch/pull/81790#issuecomment-1215929021)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84190
Approved by: https://github.com/jeffdaily, https://github.com/malfet
2022-11-04 04:43:05 +00:00
Christian Puhrsch
5e6ceebccb Add support for neg to NestedTensor (#88131)
Partially fixes #86889
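
A quick sketch of the newly supported op (assuming the `torch.nested.nested_tensor` constructor of that era):

```python
import torch

nt = torch.nested.nested_tensor(
    [torch.tensor([1.0, -2.0]), torch.tensor([3.0])]
)
print(torch.neg(nt))  # negates each constituent tensor
```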

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88131
Approved by: https://github.com/drisspg
2022-11-03 15:15:57 +00:00
PyTorch MergeBot
99c07735e4 Revert "Add support for neg to NestedTensor (#88131)"
This reverts commit 6a75a0d1a1.

Reverted https://github.com/pytorch/pytorch/pull/88131 on behalf of https://github.com/mehtanirav due to [Internal breakages](https://www.internalfb.com/intern/sandcastle/job/13510799692239080/insights)
2022-11-02 18:43:36 +00:00
PyTorch MergeBot
0fa23663cc Revert "Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)"
This reverts commit 1e2c4a6e0e.

Reverted https://github.com/pytorch/pytorch/pull/84190 on behalf of https://github.com/malfet due to Needs internal changes, has to be landed via co-dev
2022-11-02 18:13:37 +00:00
Pruthvi Madugundu
1e2c4a6e0e Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)
- Asserts for CUDA are enabled by default
- Disabled for ROCm by default by setting `TORCH_DISABLE_GPU_ASSERTS` to `ON`
- Can be enabled for ROCm by setting the above variable to `OFF` during the build, or forcefully enabled by setting `ROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON`

This is a follow-up change as per the comment in PR #81790, comment [link](https://github.com/pytorch/pytorch/pull/81790#issuecomment-1215929021)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84190
Approved by: https://github.com/jeffdaily, https://github.com/malfet
2022-11-02 17:41:57 +00:00
Philip Meier
bc73affdad prepare removal of deprecated functionality in torch.testing (#87969)
_Redo of #86586 with all BC breaking changes granularly placed into separate commits._

---

Per title. Deprecation happened on Feb 25, 2022 in c6f1bbc0ac, which made it into the 1.12 release. Since it is now 245 days later and the next release will be 1.14, the removals later in the stack comply with the [BC policy](https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#minimizing-the-disruption-of-bc-breaking-changes).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87969
Approved by: https://github.com/mruberry
2022-11-02 14:04:48 +00:00
Christian Puhrsch
6a75a0d1a1 Add support for neg to NestedTensor (#88131)
Partially fixes #86889

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88131
Approved by: https://github.com/drisspg
2022-11-01 02:37:42 +00:00
Christian Puhrsch
139afc50ec Fix links to tutorial in torch masked docs (#88129)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88129
Approved by: https://github.com/jisaacso
2022-10-31 21:31:54 +00:00
Kazuaki Ishizaki
7d2f1cd211 Fix typos under docs directory (#88033)
This PR fixes typos in `.rst` and `.Doxyfile` files under the docs directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88033
Approved by: https://github.com/soulitzer
2022-10-31 19:31:56 +00:00
Andrew Gu
9d9267c6f7 [FSDP()][3/N] Refactor public APIs (#87917)
- This PR defines a new `api.py` meant to hold the public API for FSDP (minus `FullyShardedDataParallel` itself). This is needed because several of the `_<...>_utils.py` files rely on the public API, and we cannot import from `torch.distributed.fsdp.fully_sharded_data_parallel` without a circular import. Calling the file `api.py` follows the convention used by `ShardedTensor`.
- This PR cleans up the wording in the `BackwardPrefetch`, `ShardingStrategy`, `MixedPrecision`, and `CPUOffload` docstrings.
- This PR adds the aforementioned classes to `fsdp.rst` to have them rendered in public docs.
- To abide by the public bindings contract (`test_public_bindings.py`), the aforementioned classes are removed from `fully_sharded_data_parallel.py`'s `__all__`. This is technically BC breaking if someone uses `from torch.distributed.fsdp.fully_sharded_data_parallel import *`; however, that does not happen in any of our own external or internal code.
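
As a sketch, the classes named above are then importable from the package itself (member names assumed for illustration):

```python
from torch.distributed.fsdp import (
    BackwardPrefetch,
    CPUOffload,
    MixedPrecision,
    ShardingStrategy,
)

# e.g. pick a sharding strategy to pass to FullyShardedDataParallel
strategy = ShardingStrategy.FULL_SHARD  # member name assumed
```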
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87917
Approved by: https://github.com/mrshenli
2022-10-31 16:45:21 +00:00
Salil Desai
df1cc0ef47 [Vulkan] Add Vulkan Rewrite to Transfer Inputs and Outputs to Vulkan and CPU Backends Respectively (#87432)
With this change, we no longer have to manually transfer inputs to the Vulkan backend and outputs back to the CPU backend when running Vulkan models.

Graph rewrite code based on:
- 32efff45ba (diff-a473bddb458dc24225866a45092d6eca064eddd256245d93020e48e216eee4d5R160-R179)

Differential Revision: [D39519168](https://our.internmc.facebook.com/intern/diff/D39519168/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39519168/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87432
Approved by: https://github.com/mcr229, https://github.com/digantdesai
2022-10-31 14:18:45 +00:00
Driss Guessous
35c611d30f Add mem efficient backend flag (#87946)
# Summary
Add a torch.backends.cuda flag and update the context manager to pick between the three implementations of scaled_dot_product_attention.
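
A sketch of the intended usage (the exact flag and context-manager names are assumptions based on this description):

```python
import torch
import torch.nn.functional as F

# Globally toggle the memory-efficient implementation (assumed name).
torch.backends.cuda.enable_mem_efficient_sdp(True)

# Or select implementations locally via the context manager (assumed name).
with torch.backends.cuda.sdp_kernel(
    enable_flash=False, enable_math=False, enable_mem_efficient=True
):
    q = k = v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
    out = F.scaled_dot_product_attention(q, k, v)
```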

cc @cpuhrsch @jbschlosser @bhosmer @mikaylagawarecki
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87946
Approved by: https://github.com/cpuhrsch
2022-10-28 15:51:10 +00:00
Alvaro Gaona
46b16977d9 Reimplement Kaiser window (#87330)
Relates to #85366

- For reference, follow #87082.
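
A sketch of the expected call (assuming the window lands under `torch.signal.windows`, like the related windows referenced above):

```python
import torch

w = torch.signal.windows.kaiser(64, beta=12.0)  # module path and kwarg assumed
print(w.shape)  # torch.Size([64])
```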
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87330
Approved by: https://github.com/lezcano, https://github.com/mruberry
2022-10-27 21:01:01 +00:00
Edward Z. Yang
1ff52225f1 Unify SymIntNode and SymFloatNode into SymNode (#87817)
This refactor was prompted by challenges handling mixed int/float
operations in C++.  A previous version of this patch
added overloads for each permutation of int/float and was unwieldy
(https://github.com/pytorch/pytorch/pull/87722/).  This PR takes a different
approach.

The general outline of the patch is to combine the C++ types SymIntNode
and SymFloatNode into a single type, SymNode.  This is type erased; we
no longer know statically in C++ whether we have an int/float and have to test
it with the is_int()/is_float() virtual methods.  This has a number of
knock-on effects.

- We no longer have C++ classes to bind to Python.  Instead, we take an
  entirely new approach to our Python API, where we have a SymInt/SymFloat
  class defined entirely in Python, which hold a SymNode (which corresponds
  to the C++ SymNode).  However, SymNode is not pybind11-bound; instead,
  it lives as-is in Python, and is wrapped into C++ SymNode using PythonSymNode
  when it goes into C++.  This implies a userland rename.

  In principle, it is also possible for the canonical implementation of SymNode
  to be written in C++, and then bound to Python with pybind11 (we have
  this code, although it is commented out.)  However, I did not implement
  this as we currently have no C++ implementations of SymNode.

  Because we do return SymInt/SymFloat from C++ bindings, the C++ binding
  code needs to know how to find these classes.  Currently, this is done
  just by manually importing torch and getting the attributes.

- Because SymInt/SymFloat are easy Python wrappers, __sym_dispatch__ now
  takes SymInt/SymFloat, rather than SymNode, bringing it in line with how
  __torch_dispatch__ works.

Some miscellaneous improvements:

- SymInt now has a constructor that takes SymNode.  Note that this
  constructor is ambiguous if you pass in a subclass of SymNode,
  so an explicit downcast is necessary.  This means toSymFloat/toSymInt
  are no more.  This is a mild optimization as it means rvalue reference
  works automatically.

- We uniformly use the caster for c10::SymInt/SymFloat, rather than
  going the long way via the SymIntNode/SymFloatNode.

- Removed some unnecessary toSymInt/toSymFloat calls in normalize_*
  functions, pretty sure this doesn't do anything.

- guard_int is now a free function, since to guard on an int you cannot
  assume the method exists.  A function can handle both int and SymInt
  inputs.

- We clean up the magic method definition code for SymInt/SymFloat/SymNode.
  ONLY the user classes (SymInt/SymFloat) get magic methods; SymNode gets
  plain methods; this is to help avoid confusion between the two types.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87817
Approved by: https://github.com/albanD, https://github.com/anjali411
2022-10-27 20:56:02 +00:00
HDCharles
d0e12d1cc8 [ao] Adding FAQ to docs (#87322)
Summary: migrated from: https://discuss.pytorch.org/t/quantization-frequently-asked-questions/161251

Test Plan: CircleCI tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87322
Approved by: https://github.com/z-a-f
2022-10-25 20:18:04 +00:00
Masaki Kozuki
28593a8339 [docs] batch_isend_irecv and P2POp of torch.distributed (#86438)
Reopening https://github.com/pytorch/pytorch/pull/79722
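
A minimal sketch of the two documented APIs (run with two ranks, e.g. `torchrun --nproc_per_node=2`; the gloo backend is assumed here for a CPU-only example):

```python
import torch
import torch.distributed as dist

dist.init_process_group("gloo")
rank = dist.get_rank()
peer = 1 - rank  # assumes exactly two ranks

send_t = torch.full((2,), float(rank))
recv_t = torch.zeros(2)
ops = [
    dist.P2POp(dist.isend, send_t, peer),
    dist.P2POp(dist.irecv, recv_t, peer),
]
# Issue the batched point-to-point ops, then wait on the returned requests.
for req in dist.batch_isend_irecv(ops):
    req.wait()
print(f"rank {rank} received {recv_t}")
```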

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86438
Approved by: https://github.com/kit1980
2022-10-25 00:11:50 +00:00
Kazuaki Ishizaki
72ec1b5fc1 Fix typo under docs directory (#87583)
This PR fixes typos in `.rst` files under the docs directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87583
Approved by: https://github.com/kit1980
2022-10-24 23:52:44 +00:00
Svetlana Karslioglu
7e83f65ad5 Add General Project Policies (#87385)
Add General Project Policies to the Governance page

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87385
Approved by: https://github.com/orionr
2022-10-20 21:02:09 +00:00
George Qi
17202b3637 [maskedtensor] fix docs formatting (#87387)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87387
Approved by: https://github.com/cpuhrsch
2022-10-20 20:48:25 +00:00
George Qi
cf2be34ff5 [maskedtensor] add docs (#84887)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84887
Approved by: https://github.com/cpuhrsch
2022-10-19 20:44:34 +00:00
Christian Puhrsch
e8c4adf3c3 Add torch.sparse overview section (#85265)
The goal of this section is to provide a general overview of how PyTorch handles sparsity for readers who are already familiar with sparse matrices and their operators.
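
A small taste of the material the overview covers (standard public API):

```python
import torch

dense = torch.tensor([[0.0, 1.0], [2.0, 0.0]])
sparse = dense.to_sparse()  # COO format: stores only nonzero indices/values
print(sparse)
print(sparse.to_dense())    # round-trips back to the original dense tensor
```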
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85265
Approved by: https://github.com/jisaacso
2022-10-18 21:07:57 +00:00
albanD
9db7270ee7 Small update to Module note (#87142)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87142
Approved by: https://github.com/cpuhrsch
2022-10-17 22:56:49 +00:00
Jan Margeta
e85dbcc9b0 [docs] Fix ScalarTensor __repr__ in Extending PyTorch example (#86330)
This PR fixes the `__repr__` of the `ScalarTensor` class in the Extending PyTorch example so that it reports the class's own name rather than `DiagonalTensor`.
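
A sketch of the corrected method (attribute names assumed from the docs example):

```python
class ScalarTensor(object):
    def __init__(self, N, value):
        self._N = N
        self._value = value

    def __repr__(self):
        # Previously printed "DiagonalTensor(...)"; now matches the class name.
        return "ScalarTensor(N={}, value={})".format(self._N, self._value)
```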
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86330
Approved by: https://github.com/bdhirsh
2022-10-17 20:01:10 +00:00
Nikita Karetnikov
91b3cd0b5a [primTorch] Add a ref for narrow_copy (#86748)
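For context, a short sketch of the public operator the new reference mirrors:

```python
import torch

x = torch.arange(6).reshape(2, 3)
# Unlike narrow(), narrow_copy() returns a copy rather than a view.
y = torch.narrow_copy(x, 1, 0, 2)
print(y)  # tensor([[0, 1], [3, 4]])
```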
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86748
Approved by: https://github.com/mruberry
2022-10-17 10:16:05 +00:00