Commit Graph

2208 Commits

Author SHA1 Message Date
Thiago Crepaldi
b1729d8bbe Fix doc preview page url at CONTRIBUTING.md (#108580)
The URL for previewing documentation directly on a PR has changed, and CONTRIBUTING.md was outdated. This also includes a minor fix to a URL that pointed to a non-existent document.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108580
Approved by: https://github.com/svekars, https://github.com/kit1980
2023-09-05 20:17:55 +00:00
kshitij12345
a74f50d524 torch.compile-functorch interaction: update docs (#108130)
Doc Preview: https://docs-preview.pytorch.org/pytorch/pytorch/108130/torch.compiler_faq.html#torch-func-works-with-torch-compile-for-grad-and-vmap-transforms

Will also cherry-pick this for release branch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108130
Approved by: https://github.com/zou3519
2023-09-05 18:24:08 +00:00
youkaichao
ba9acbebfc [Doc] Update the dynamo deepdive doc (#108147)
With the new tool `depyf`, which decompiles bytecode into human-readable source code, understanding dynamo becomes much easier.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108147
Approved by: https://github.com/jansel
2023-09-03 13:08:13 +00:00
Pritam Damania
704b0b3c67 [RESUBMIT] Standardize on error types for distributed errors. (#108191)
We have a plethora of error types for various errors raised from c10d. These include `RuntimeError`, `TimeoutError`, `SocketError`, `DistBackendError` etc.

This results in messy code during error handling somewhat like this:
```
if "NCCL" in exception_str:
  ...
if "Timed out initializing process group in store based barrier on rank" in exception_str:
  ...
if "The client socket has timed out after" in exception_str:
  ...
if "Broken pipe" in exception_str:
  ...
if "Connection reset by peer" in exception_str:
  ...
```

To address this issue, this PR adds the following error types (a usage sketch follows the list):

1. **DistError** - the base type of all distributed errors
2. **DistBackendError** - this already existed and referred to PG backend errors
3. **DistStoreError** - for errors originating from the store
4. **DistNetworkError** - for general network errors coming from the socket library
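
A minimal usage sketch of the typed handling this enables, assuming the new classes are exposed under `torch.distributed` (illustrative only, not the PR's own code):

```python
import torch.distributed as dist

def init_with_typed_handling(backend: str = "nccl") -> None:
    try:
        dist.init_process_group(backend)
    except dist.DistNetworkError:
        ...  # socket-level failures (broken pipe, connection reset, client timeout)
    except dist.DistStoreError:
        ...  # errors originating from the store (e.g. store-based barrier timeouts)
    except dist.DistBackendError:
        ...  # process-group backend (NCCL/Gloo) errors
    except dist.DistError:
        ...  # any other distributed error
```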

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108191
Approved by: https://github.com/H-Huang
2023-08-30 21:47:39 +00:00
Jane Xu
fa49be2a49 [docs] Properly link register_post_accumulate_grad_hook docs (#108157)
It shows up now:

![image](https://github.com/pytorch/pytorch/assets/31798555/0aa86839-b9c5-4b4b-b1b1-aa1c0c0abbab)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108157
Approved by: https://github.com/soulitzer, https://github.com/albanD
2023-08-29 22:13:33 +00:00
PyTorch MergeBot
d4ff06ec84 Revert "Standardize on error types for distributed errors. (#107651)"
This reverts commit 0e2317479b.

Reverted https://github.com/pytorch/pytorch/pull/107651 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing an inductor test in trunk for one of its models, moco ([comment](https://github.com/pytorch/pytorch/pull/107651#issuecomment-1696578138))
2023-08-28 23:58:33 +00:00
Pritam Damania
0e2317479b Standardize on error types for distributed errors. (#107651)
We have a plethora of error types for various errors raised from c10d. These include `RuntimeError`, `TimeoutError`, `SocketError`, `DistBackendError` etc.

This results in messy code during error handling somewhat like this:
```
if "NCCL" in exception_str:
  ...
if "Timed out initializing process group in store based barrier on rank" in exception_str:
  ...
if "The client socket has timed out after" in exception_str:
  ...
if "Broken pipe" in exception_str:
  ...
if "Connection reset by peer" in exception_str:
  ...
```

To address this issue, this PR adds the following error types:

1. **DistError** - the base type of all distributed errors
2. **DistBackendError** - this already existed and referred to PG backend errors
3. **DistStoreError** - for errors originating from the store
4. **DistNetworkError** - for general network errors coming from the socket library
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107651
Approved by: https://github.com/H-Huang
2023-08-28 21:58:15 +00:00
Aaron Bockover
15e5bd5103 [ONNX] Support torch.compile(backend="onnxrt", options=OrtBackendOptions(...)) (#107973)
This reworks the DORT backend factory function to support the options kwarg of torch.compile, and defines a concrete OrtBackendOptions type that can be used to influence the backend.

Caching is also implemented in order to reuse backends with equal options.

Wrapping the backend in auto_autograd also becomes an option, which allows `OrtBackend` to always be returned as the callable for torch.compile; wrapping happens internally if opted into (True by default).

Lastly, options are exposed for configuring preferred execution providers (which will be attempted first), whether or not to attempt to infer an ORT EP from a device found by torch in the graph or inputs, and finally the default/fallback EPs.

### Demo

The following demo runs `Gelu` through `torch.compile(backend="onnxrt")` using various backend options through a dictionary form and a strongly typed form. It additionally exports the model through both the ONNX TorchScript exporter and the new TorchDynamo exporter.

```python
import math

import onnx.inliner
import onnxruntime
import torch
import torch.onnx

torch.manual_seed(0)

class Gelu(torch.nn.Module):
    def forward(self, x):
        return x * (0.5 * torch.erf(math.sqrt(0.5) * x) + 1.0)

@torch.compile(
    backend="onnxrt",
    options={
        "preferred_execution_providers": [
            "NotARealEP",
            "CPUExecutionProvider",
        ],
        "export_options": torch.onnx.ExportOptions(dynamic_shapes=True),
    },
)
def dort_gelu(x):
    return Gelu()(x)

ort_session_options = onnxruntime.SessionOptions()
ort_session_options.log_severity_level = 0

dort_gelu2 = torch.compile(
    Gelu(),
    backend="onnxrt",
    options=torch.onnx._OrtBackendOptions(
        preferred_execution_providers=[
            "NotARealEP",
            "CPUExecutionProvider",
        ],
        export_options=torch.onnx.ExportOptions(dynamic_shapes=True),
        ort_session_options=ort_session_options,
    ),
)

x = torch.randn(10)

torch.onnx.export(Gelu(), (x,), "gelu_ts.onnx")

export_output = torch.onnx.dynamo_export(Gelu(), x)
export_output.save("gelu_dynamo.onnx")
inlined_model = onnx.inliner.inline_local_functions(export_output.model_proto)
onnx.save_model(inlined_model, "gelu_dynamo_inlined.onnx")

print("Torch Eager:")
print(Gelu()(x))

print("DORT:")
print(dort_gelu(x))
print(dort_gelu2(x))
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107973
Approved by: https://github.com/BowenBao
2023-08-26 18:20:18 +00:00
Pearu Peterson
c5ad44be1d Add torch.sparse.as_sparse_gradcheck decorator of gradcheck that allows gradcheck input function to receive and return sparse tensors (#107150)
Compared to #104848, this PR goes a step further: when the enable_sparse_support decorator is applied to `torch.autograd.gradcheck`, the resulting callable is equivalent to `torch.autograd.gradcheck` with the extra feature of supporting functions that take sparse tensors as inputs and/or return sparse tensors.

At the same time, the underlying call to `torch.autograd.gradcheck` will operate on strided tensors only. This basically means that torch/autograd/gradcheck.py can be cleaned up by removing the code that deals with sparse tensors.
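
A minimal sketch of how the wrapped gradcheck might be used, assuming it accepts the same call signature as `torch.autograd.gradcheck` (exact keyword arguments may differ):

```python
import torch

# Wrap gradcheck so it can handle functions that take and/or return sparse tensors.
sparse_gradcheck = torch.sparse.as_sparse_gradcheck(torch.autograd.gradcheck)

x = torch.randn(3, 3, dtype=torch.float64).to_sparse_coo().requires_grad_()
# Internally, the wrapped gradcheck converts sparse inputs to strided tensors,
# so torch.autograd.gradcheck itself only ever operates on strided tensors.
sparse_gradcheck(lambda t: t.to_dense(), (x,))
```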

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107150
Approved by: https://github.com/albanD, https://github.com/amjames, https://github.com/cpuhrsch
ghstack dependencies: #107638, #107777
2023-08-26 07:24:31 +00:00
BowenBao
25d98a3e3b [ONNX] Remove API reference for TorchScript export diagnostics (#107979)
Remove both the API reference and the rules specific to TorchScript ONNX export. The page should display only information related to `torch.onnx.dynamo_export` diagnostics.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107979
Approved by: https://github.com/justinchuby
2023-08-26 00:52:59 +00:00
gmagogsfm
9af0e47653 Hide transform method by renaming it (#107940)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107940
Approved by: https://github.com/tugsbayasgalan
2023-08-25 16:31:44 +00:00
gmagogsfm
39854df1d3 Make validate private by renaming validate to _validate (#107927)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107927
Approved by: https://github.com/tugsbayasgalan
2023-08-25 08:14:56 +00:00
gmagogsfm
bfb09204bd Expose torch.export.{save,load} APIs (#107888)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107888
Approved by: https://github.com/angelayi
2023-08-25 06:06:36 +00:00
gmagogsfm
7dd1113463 Expose ExportedProgram and related classes (#107852)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107852
Approved by: https://github.com/zhxchen17, https://github.com/angelayi
2023-08-25 00:07:00 +00:00
Digant Desai
8a7a6867b9 [PyTorch][Tensor] Introduce tensor.dim_order (#106835)
Summary:
This is a stride-based attribute for a tensor, available in Python.

This can help inspect tensors generated using `torch.empty_permuted(.., physical_layout, ...)`, where physical_layout should match the dim_order returned here. `empty_permuted` will be renamed to use dim_order as the param name in the future. It also helps the Executorch export pipeline implement dim_order-based tensors.
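
A short sketch of the kind of inspection this enables (assuming `dim_order` is exposed as a tensor method):

```python
import torch

# Physical layout (0, 2, 3, 1) corresponds to an NHWC-style memory layout
# for a logically NCHW-shaped tensor.
t = torch.empty_permuted((2, 3, 4, 5), (0, 2, 3, 1))
print(t.dim_order())  # expected to report the physical layout, e.g. (0, 2, 3, 1)
```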

Differential Revision: D48134476

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106835
Approved by: https://github.com/ezyang
2023-08-25 00:06:03 +00:00
Zachary DeVito
40cbda274b document memory snapshotting (#107660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107660
Approved by: https://github.com/albanD
ghstack dependencies: #107171, #107399
2023-08-24 19:20:03 +00:00
angelayi
6ec2ec845c [exportdb] Fix generating docs (#107838)
Previously I accidentally replaced all `=` with `-`, resulting in clowny code rendering like:
![image](https://github.com/pytorch/pytorch/assets/10901756/738eaf92-8cc6-43bd-b531-224ec44afa9f)

The purpose of replacing the `=` with `-` is to change the RST heading size of modules. So now, I replace strings with more than 3 `=`'s with `-`. This should avoid incorrectly replacing code where we set variables with `=` and do equality checks with `==`.
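A sketch of that heuristic (not the actual implementation), assuming it operates on the generated RST text:

```python
import re

def demote_rst_headings(text: str) -> str:
    # Only runs of four or more '=' (RST heading underlines) become '-',
    # so '=' assignments and '==' comparisons in code samples are left intact.
    return re.sub(r"={4,}", lambda m: "-" * len(m.group(0)), text)
```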
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107838
Approved by: https://github.com/gmagogsfm
2023-08-24 06:32:51 +00:00
gmagogsfm
f8119f8bda Move Constraint class to torch.export() to avoid circular dependency in _dynamo package (#107750)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107750
Approved by: https://github.com/tugsbayasgalan
2023-08-24 03:07:28 +00:00
gmagogsfm
652ccfadc1 Expose torch.export.constrain_as_{size,value} APIs (#107735)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107735
Approved by: https://github.com/avikchaudhuri
2023-08-23 20:13:40 +00:00
PyTorch MergeBot
ecde622649 Revert "reseed all Generators in Dataloader's _worker_loop() -- via GC (#107131)"
This reverts commit 42625da5e1.

Reverted https://github.com/pytorch/pytorch/pull/107131 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/107131#issuecomment-1690325745))
2023-08-23 17:08:07 +00:00
gmagogsfm
137d96a26e Expose torch.export.dynamic_dim() API (#107635)
With updated doc
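
A hedged sketch of the exposed API, assuming the then-current constraints-based `torch.export.export` signature:

```python
import torch
from torch.export import dynamic_dim, export

def f(x):
    return x * 2

x = torch.randn(4, 8)
# Mark dim 0 of x as dynamic so the exported program does not specialize on size 4.
ep = export(f, (x,), constraints=[dynamic_dim(x, 0)])
```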

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107635
Approved by: https://github.com/avikchaudhuri
2023-08-22 18:40:49 +00:00
Jane Xu
515aa993e3 Document post acc grad hooks in backward hooks execution (#107323)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107323
Approved by: https://github.com/soulitzer, https://github.com/albanD
2023-08-22 18:37:03 +00:00
Alexander Jipa
2e054037da fixing named tensor unflatten example (#106921)
Fixes an example from the documentation [here](https://pytorch.org/docs/stable/named_tensor.html#manipulating-dimensions).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106921
Approved by: https://github.com/zou3519
2023-08-22 18:00:10 +00:00
gmagogsfm
bbb216bca4 Move torch.export() to torch.export.export() (#107609)
New plan:

torch.export.export() as the main API

All other utilities will be torch.export.foo_utilities
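A minimal sketch of the relocated entry point (illustrative; the module being exported is made up):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.sin() + 1

exported = torch.export.export(M(), (torch.randn(3),))
print(exported)  # an ExportedProgram
```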
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107609
Approved by: https://github.com/tugsbayasgalan, https://github.com/msaroufim
2023-08-22 00:38:32 +00:00
moto
a250cc9bd7 Update persons_of_interest.rst (#107592)
Updating the state of PyTorch Audio.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107592
Approved by: https://github.com/cpuhrsch
2023-08-21 20:01:46 +00:00
Chien-Chin Huang
7ba513b6e4 [FSDP][state_dict] Expose optimizer state_dict config (#105949)
The optimizer state_dict configs are not exposed. This PR exposes the two dataclasses.
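
A hedged sketch, assuming the two dataclasses are `FullOptimStateDictConfig` and `ShardedOptimStateDictConfig` (only the full variant is shown):

```python
from torch.distributed.fsdp import (
    FullOptimStateDictConfig,
    FullStateDictConfig,
    FullyShardedDataParallel as FSDP,
    StateDictType,
)

# Constructing the configs does not require an initialized process group.
optim_cfg = FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=True)
model_cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)

# Inside a distributed job, the optimizer config is passed alongside the model
# state_dict config when selecting the state_dict type:
# FSDP.set_state_dict_type(fsdp_model, StateDictType.FULL_STATE_DICT, model_cfg, optim_cfg)
```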

Differential Revision: [D47766024](https://our.internmc.facebook.com/intern/diff/D47766024/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105949
Approved by: https://github.com/rohan-varma
2023-08-21 07:29:49 +00:00
Nicolas Hug
42625da5e1 reseed all Generators in Dataloader's _worker_loop() -- via GC (#107131)
Alternative to https://github.com/pytorch/pytorch/pull/107034, implements @ezyang's suggestion from https://github.com/pytorch/pytorch/pull/107034#discussion_r1292857201.

This PR addresses https://fb.workplace.com/groups/pytorch.oss.dev/posts/1699944830430051 and does a bunch of stacked changes:

- Make the `Generator` class support GC; this makes all `Generator` instances tracked and accessible through Python's GC.
- Use the GC to retrieve all existing Generator instances in Dataloader's `_worker_loop` and re-seed them: this extends what is already applied to the global/default Generator, which was already re-seeded (a minimal sketch of the idea follows this list).
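
A minimal sketch of the idea (not the actual `_worker_loop` code; the seeding scheme here is illustrative):

```python
import gc
import torch

def reseed_all_generators(base_seed: int, worker_id: int) -> None:
    # Generators are now GC-tracked, so the GC can enumerate every live instance.
    for obj in gc.get_objects():
        if isinstance(obj, torch.Generator):
            obj.manual_seed(base_seed + worker_id)
```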

~TODO: a bit of docs and justification, which I'll do if this PR is mergeable.~ -- Done

CC @albanD @ezyang  as previously discussed

BC-Breaking Note
-------------------

We now re-seed all `Generator` instances within the `Dataloader` workers' loop to ensure that their RNG is different across workers.
Previously, the RNG of user-defined `Generators` would be the same across workers, which could lead to wrong training procedures. This only affects user-defined `Generators`, not the default `Generator` (which was already re-seeded).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107131
Approved by: https://github.com/ezyang
2023-08-18 10:23:23 +00:00
Alexander Pivovarov
35b2b3ee47 Fix rst formatting in torch.compiler_troubleshooting.rst (#107360)
Fix some rst formatting - mostly around ``.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107360
Approved by: https://github.com/kit1980
2023-08-18 01:04:24 +00:00
Alexander Pivovarov
a98f745c80 Use compiled model in torch.compiler_get_started (#107267)
- The text says `Next, let’s try a real model like resnet50 from the PyTorch` but the code example uses `resnet18`. Fixed the code to use `resnet50` for consistency (the corrected example is sketched after this list).
- One of the examples in the TorchDynamo Overview used an uncompiled model; fixed it so it now uses the compiled model.
- Removed an unused import of `_dynamo` from one of the examples.
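
A sketch of what the corrected getting-started example looks like after these fixes (assumes torchvision is installed):

```python
import torch
import torchvision.models as models

model = models.resnet50()
opt_model = torch.compile(model, backend="inductor")
opt_model(torch.randn(1, 3, 64, 64))
```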
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107267
Approved by: https://github.com/soulitzer
2023-08-17 09:26:54 +00:00
Alexander Pivovarov
11e366943d Fix rst formatting in dynamo/guards-overview doc (#107275)
Fix rst formatting in dynamo/guards-overview doc
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107275
Approved by: https://github.com/soulitzer
2023-08-17 09:04:44 +00:00
fduwjj
983fd5ba79 [2D][TP] Enable DDP TP integration with unit test (#106583)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106583
Approved by: https://github.com/kumpera, https://github.com/fegin, https://github.com/wanchaol
ghstack dependencies: #107313
2023-08-17 02:54:17 +00:00
gmagogsfm
ddba7a5a55 Expose torch.export() API (#106904)
Other class definitions and utilities will be moved in subsequent PRs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106904
Approved by: https://github.com/avikchaudhuri
2023-08-16 10:47:26 +00:00
BowenBao
19a76290d8 [ONNX] Public diagnostic options for 'dynamo_export' (#106741)
Generate diagnostic reports to monitor the internal stages of the export process. This tool aids in unblocking model exports and debugging the exporter.

#### Settings

~~1. Choose if you want to produce a .sarif file and specify its location.~~
1. Updated: saving the .sarif file should be done via `export_output.save_sarif_log(dst)`, similar to saving the exported ONNX model with `export_output.save(model_dst)` (see the sketch after this list).
2. Customize diagnostic options:
    - Set the desired verbosity for diagnostics.
    - Treat warnings as errors.
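
A minimal sketch using only the calls mentioned above (the model is a placeholder; diagnostic options are omitted):

```python
import torch

model = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)

export_output = torch.onnx.dynamo_export(model, x)
export_output.save("model.onnx")              # exported ONNX model
export_output.save_sarif_log("model.sarif")   # diagnostic SARIF report
```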

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106741
Approved by: https://github.com/titaiwangms, https://github.com/justinchuby, https://github.com/malfet
2023-08-15 17:46:15 +00:00
youkaichao
05db3d9969 improve doc on how to understand dynamo (#106860)
Per the discussion in https://github.com/pytorch/pytorch/pull/106673#issuecomment-1669939815, I added more documentation to explain the output of dynamo compilation. I didn't find a decompilation library, so I manually decompiled the bytecode. The result looks good.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106860
Approved by: https://github.com/jansel, https://github.com/msaroufim
2023-08-14 19:49:24 +00:00
BowenBao
22095acfd7 [ONNX] Migrate to PT2 logging (#106592)
Summary
- The 'dynamo_export' diagnostics leverages the PT2 artifact logger to handle the verbosity
level of logs that are recorded in each SARIF log diagnostic. In addition to SARIF log,
terminal logging is by default disabled. Terminal logging can be activated by setting
the environment variable `TORCH_LOGS="onnx_diagnostics"`. When the environment variable
is set, it also fixes logging level to `logging.DEBUG`, overriding the verbosity level
specified in the diagnostic options.
See `torch/_logging/__init__.py` for more on PT2 logging.
- Replaces 'with_additional_message' with 'Logger.log'-like APIs.
- Introduce 'LazyString', adopted from 'torch._dynamo.utils', to skip
evaluation if the message will not be logged into the diagnostic.
- Introduce 'log_source_exception' for easier exception logging.
- Introduce 'log_section' for easier markdown title logging.
- Updated all existing code to use new api.
- Removed 'arg_format_too_verbose' diagnostic.
- Rename legacy diagnostic classes for TorchScript Onnx Exporter to avoid
confusion.

Follow ups
- The 'dynamo_export' diagnostic now will not capture python stack
information at point of diagnostic creation. This will be added back in
follow up PRs for debug level logging.
- There is type mismatch due to subclassing 'Diagnostic' and 'DiagnosticContext'
for 'dynamo_export' to incorporate with PT2 logging. Follow up PR will
attempt to fix it.
- More docstrings with examples.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106592
Approved by: https://github.com/titaiwangms
2023-08-11 23:27:00 +00:00
Howard Huang
149e458846 [BE] RPC is missing RRef docs (#106902)
The current `RRef` class derives from `PyRRef`, which has all the method definitions and documentation, yet none of this shows up in the current documentation:

<img width="891" alt="image" src="https://github.com/pytorch/pytorch/assets/14858254/62897766-a660-4846-97bf-182e4aa45079">

Changing to `:inherited-members:` so Sphinx can pick up these methods.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106902
Approved by: https://github.com/svekars
2023-08-10 16:26:27 +00:00
Ivan Yashchuk
c913f3857f Remove dynamo+nvfuser (#105789)
This PR removes unmaintained Dynamo+nvFuser.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105789
Approved by: https://github.com/jansel, https://github.com/jjsjann123, https://github.com/albanD
2023-08-08 22:29:32 +00:00
Jason Lu
bc88028e8e Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
Summary:
Original commit changeset: 81319beb97f3

Original Phabricator Diff: D47961182

Test Plan: revert to maintain backward compat with legacy ads_dper3 production package. Read details in: S357822

Reviewed By: atuljangra

Differential Revision: D48131623

@diff-train-skip-merge
(D48131623 landed internally)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106743
Approved by: https://github.com/malfet
2023-08-08 15:27:34 +00:00
PyTorch MergeBot
891bb259f8 Revert "Remove dynamo+nvfuser (#105789)"
This reverts commit 6030151d37.

Reverted https://github.com/pytorch/pytorch/pull/105789 on behalf of https://github.com/DanilBaibak due to Breaks a lot of tests on main. ([comment](https://github.com/pytorch/pytorch/pull/105789#issuecomment-1669710571))
2023-08-08 14:20:32 +00:00
Ivan Yashchuk
6030151d37 Remove dynamo+nvfuser (#105789)
This PR removes unmaintained Dynamo+nvFuser.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105789
Approved by: https://github.com/jansel, https://github.com/jjsjann123, https://github.com/albanD
2023-08-08 13:29:31 +00:00
Ramin Azarmehr
cdfd0ea162 [MPS] Introduce torch.mps.Event() APIs (#102121)
- Implement `MPSEventPool` to recycle events.
- Implement Python bindings for the `torch.mps.Event` class using the MPSEventPool backend. The current member functions of the Event class are `record()`, `wait()`, `synchronize()`, `query()`, and `elapsed_time()` (a usage sketch follows this list).
- Add API to measure elapsed time between two event recordings.
- Added documentation for Event class to `mps.rst`.
- Added test case to `test_mps.py`.
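
A usage sketch of the API described above (requires an MPS device; the timing flag name is assumed from the CUDA Event analogue):

```python
import torch

if torch.backends.mps.is_available():
    start = torch.mps.Event(enable_timing=True)
    end = torch.mps.Event(enable_timing=True)

    x = torch.randn(1024, 1024, device="mps")
    start.record()
    y = x @ x
    end.record()
    end.synchronize()
    print(start.elapsed_time(end))  # milliseconds between the two recordings
```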

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102121
Approved by: https://github.com/albanD, https://github.com/kulinseth
2023-08-08 03:45:45 +00:00
AllenTiTaiWang
b782beb18e [ONNX] Expose OnnxRegistry publicly (#106140)
The official move of `OnnxRegistry` to `torch.onnx` allows it to become one of the parameters in `torch.onnx.ExportOptions`. By incorporating `OnnxRegistry` in `torch.onnx.ExportOptions`, users gain access to various functionalities, including the ability to register custom operators using `register_custom_op`, check whether an operator is supported using `is_registered_op`, and obtain symbolic functions that support specific operators using `get_functions`.

Additionally, `opset_version` is now available exclusively on `torch.onnx.OnnxRegistry`, as it has been removed from `torch.onnx.ExportOptions`. Initializing the registry with torchlib under the provided opset version ensures that the exporter uses the specified opset version as the primary version for exporting.

These changes encompass scenarios where users can:

1. Register an unsupported ATen operator with a custom implementation using onnx-script.
2. Override an existing symbolic function (onnx invariant).

NOTE: Custom registered functions take priority in the ONNX dispatcher, and if there are multiple custom ones, the one registered last is picked.
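A hedged sketch of the described workflow; the exact signatures of `register_custom_op` and `is_registered_op`, and the `onnx_registry` keyword on `ExportOptions`, are assumptions based on the description above:

```python
import torch.onnx

registry = torch.onnx.OnnxRegistry()
print(registry.opset_version)  # opset the exporter will target

# Registering a custom onnx-script implementation for an ATen op (illustrative):
# registry.register_custom_op(my_onnxscript_function, "mylib", "custom_add")
# assert registry.is_registered_op("mylib", "custom_add")

export_options = torch.onnx.ExportOptions(onnx_registry=registry)
```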
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106140
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi
2023-08-04 20:46:03 +00:00
wangxiyuan
4eeda6616c Correct URL Link for torchDynamo (#105903)
Correct some erroneous or 404 URLs in the TorchDynamo doc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105903
Approved by: https://github.com/malfet
2023-07-31 20:50:09 +00:00
Mikayla Gawarecki
d8e5f2aa6d Reland "Make adding buffers more like adding parameters (#104069)" (#106224)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106224
Approved by: https://github.com/atalman, https://github.com/albanD
2023-07-31 17:18:56 +00:00
Svetlana Karslioglu
4d3ea5df65 Restructure torch.compile docs (#105376)
The current torch.compile docs had become a bit of a mess, with the docs expanded in the left nav. This PR moves them under the torch.compiler menu item in the left nav. A number of rewrites were made in collaboration with @msaroufim to address formatting issues, and the latest updates that moved some APIs into the public torch.compiler namespace were addressed as well. The documentation is broken down into three categories that address three main audiences: PyTorch users, PyTorch developers, and PyTorch backend vendors. While the user-facing documentation was significantly rewritten, the dev and vendor docs were left mostly untouched; this can be addressed in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105376
Approved by: https://github.com/msaroufim
2023-07-28 20:58:57 +00:00
Mikayla Gawarecki
035124774a Enable registering fallthroughs to (op, dk) from torch.library (#106086)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106086
Approved by: https://github.com/zou3519, https://github.com/albanD
2023-07-28 19:37:59 +00:00
fduwjj
487ebcac3b Clean up unused MHA code to avoid confusion (#105956)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105956
Approved by: https://github.com/wz337, https://github.com/ezyang, https://github.com/wanchaol
2023-07-27 17:10:17 +00:00
Edward Z. Yang
edebdaf182 Change _dynamo.explain to be explain(f)(*args, **kwargs) (#106066)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
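
A short sketch of the new calling convention named in the title (previously `explain(f, *args, **kwargs)`):

```python
import torch
import torch._dynamo

def f(x):
    return x.sin() + x.cos()

explanation = torch._dynamo.explain(f)(torch.randn(8))
print(explanation)  # graph count, graph break reasons, ops per graph, etc.
```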

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106066
Approved by: https://github.com/wanchaol, https://github.com/voznesenskym
2023-07-27 03:21:52 +00:00
Edward Z. Yang
f70844bec7 Enable UFMT on a bunch of low traffic Python files outside of main files (#106052)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106052
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-07-27 01:01:17 +00:00
Jerry Zhang
3a77f9aaaf [quant][api] Move torch.ao.quantization.pt2e.quantizer to torch.ao.quantization.quantizer (#105885)
Summary: move the quantizer to torch.ao.quantization to make it a public API, since pt2e is a folder for implementations.
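
A one-line sketch of the import-path change implied by this move (the old path is shown for contrast):

```python
# Before: from torch.ao.quantization.pt2e.quantizer import Quantizer
from torch.ao.quantization.quantizer import Quantizer
```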

Test Plan:
CIs

sanity check: "buck test //executorch/backends/xnnpack/test:test_xnnpack_quantized_models -- test_resnet18"

Differential Revision: D47727838

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105885
Approved by: https://github.com/andrewor14
2023-07-26 18:20:09 +00:00