Commit Graph

70 Commits

Author SHA1 Message Date
Tom Ritchford
c0582fd0f8 Remove unused Python variables in torch/[b-z]* (#136963)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136963
Approved by: https://github.com/ezyang
2024-10-19 16:45:22 +00:00
Aaron Orenstein
abcd329359 [BE] typing for decorators - onnx/symbolic_helper (#131565)
See #131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131565
Approved by: https://github.com/justinchuby, https://github.com/oulgen, https://github.com/zou3519, https://github.com/titaiwangms
2024-07-24 16:39:47 +00:00
Aaron Orenstein
5a0068cc69 [BE] mypy: disallow untyped decorators (#131428)
Untyped decorators strip the types from their decorated function, so even if the underlying function is fully typed, callers don't get any benefit from its type annotations.

Step 1 - Enable the error and override in all the offending files.

#131429
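
For illustration, a minimal sketch (not the actual PyTorch decorator) of why untyped decorators are a problem and how `ParamSpec` lets the signature flow through to callers:

```python
from typing import Callable, TypeVar
from typing_extensions import ParamSpec  # `from typing import ParamSpec` on Python 3.10+

_P = ParamSpec("_P")
_T = TypeVar("_T")

def untyped_deco(fn):
    # mypy sees the result as untyped; callers of the wrapped function lose all hints
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def typed_deco(fn: Callable[_P, _T]) -> Callable[_P, _T]:
    # the wrapped function keeps its parameter and return types
    def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> _T:
        return fn(*args, **kwargs)
    return wrapper
```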

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
Justin Chu
fd4899bc58 [ONNX] Run ruff pyupgrade to update type annotations (#130657)
Use the newest syntax for type annotations
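
For example (illustrative sketch, not taken from the PR itself), the pyupgrade rules rewrite annotations roughly like this:

```python
from __future__ import annotations  # lets the new syntax be used on older runtimes

# Before: def f(x: Optional[List[int]]) -> Union[int, str]: ...
# After:
def f(x: list[int] | None) -> int | str:
    ...
```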
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130657
Approved by: https://github.com/titaiwangms
2024-07-19 05:09:44 +00:00
Justin Chu
e880cb2fe0 [ONNX] Remove beartype usage (#130484)
beartype has served us well in identifying type errors and ensuring we call internal functions with the correct arguments (thanks!). However, the value of having beartype is diminished because of the following:

1. When beartype improved its support for `Dict[]` type checking, it discovered typing mistakes in some functions that were previously uncaught. This caused the exporter to fail with newer versions of beartype where it used to succeed. Since we cannot fix PyTorch and release a new version just because of this, it creates confusion for users who have beartype in their environment when they use torch.onnx.
2. beartype adds an additional call line to the traceback, which makes the already thick dynamo stack even larger, affecting readability when users diagnose errors from the traceback.
3. Since the typing annotations need to be evaluated, we cannot use new syntax like `|` because we need to maintain compatibility with Python 3.8. We don't want to wait for PyTorch to take Python 3.10 as the lowest supported version before using the new typing syntax.
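
A small sketch of the constraint described in point 3 (illustrative only, not PyTorch code):

```python
from __future__ import annotations  # makes `int | None` *parse* on Python 3.8...
import typing

def f(x: int | None) -> int:
    return x or 0

# ...but a runtime checker such as beartype must evaluate the annotation, and on
# Python 3.8 evaluating "int | None" raises TypeError, so the new syntax cannot be
# used while 3.8 is supported and runtime checking is enabled.
try:
    typing.get_type_hints(f)
except TypeError as e:
    print("annotation evaluation failed on this runtime:", e)
```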

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130484
Approved by: https://github.com/titaiwangms
2024-07-18 22:07:40 +00:00
PyTorch MergeBot
0851de5b16 Revert "[ONNX] Remove beartype usage (#130484)"
This reverts commit 1794c35912.

Reverted https://github.com/pytorch/pytorch/pull/130484 on behalf of https://github.com/clee2000 due to test_sympy_utils failure is real https://github.com/pytorch/pytorch/actions/runs/9961499559/job/27523758780 1794c35912.  Dr CI is matching with commits in current commit? ([comment](https://github.com/pytorch/pytorch/pull/130484#issuecomment-2231575577))
2024-07-16 18:41:51 +00:00
Justin Chu
1794c35912 [ONNX] Remove beartype usage (#130484)
beartype has served us well in identifying type errors and ensuring we call internal functions with the correct arguments (thanks!). However, the value of having beartype is diminished because of the following:

1. When beartype improved its support for `Dict[]` type checking, it discovered typing mistakes in some functions that were previously uncaught. This caused the exporter to fail with newer versions of beartype where it used to succeed. Since we cannot fix PyTorch and release a new version just because of this, it creates confusion for users who have beartype in their environment when they use torch.onnx.
2. beartype adds an additional call line to the traceback, which makes the already thick dynamo stack even larger, affecting readability when users diagnose errors from the traceback.
3. Since the typing annotations need to be evaluated, we cannot use new syntax like `|` because we need to maintain compatibility with Python 3.8. We don't want to wait for PyTorch to take Python 3.10 as the lowest supported version before using the new typing syntax.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130484
Approved by: https://github.com/titaiwangms
2024-07-16 17:34:36 +00:00
PyTorch MergeBot
0effcb70ef Revert "[ONNX] Remove beartype usage (#130484)"
This reverts commit f44739cf42.

Reverted https://github.com/pytorch/pytorch/pull/130484 on behalf of https://github.com/huydhn due to Sorry for reverting your change but those failures show up in trunk after the commit landed f44739cf42, I am reverting it to see if it fix trunk ([comment](https://github.com/pytorch/pytorch/pull/130484#issuecomment-2226812311))
2024-07-13 07:52:59 +00:00
Justin Chu
f44739cf42 [ONNX] Remove beartype usage (#130484)
beartype has served us well in identifying type errors and ensuring we call internal functions with the correct arguments (thanks!). However, the value of having beartype is diminished because of the following:

1. When beartype improved its support for `Dict[]` type checking, it discovered typing mistakes in some functions that were previously uncaught. This caused the exporter to fail with newer versions of beartype where it used to succeed. Since we cannot fix PyTorch and release a new version just because of this, it creates confusion for users who have beartype in their environment when they use torch.onnx.
2. beartype adds an additional call line to the traceback, which makes the already thick dynamo stack even larger, affecting readability when users diagnose errors from the traceback.
3. Since the typing annotations need to be evaluated, we cannot use new syntax like `|` because we need to maintain compatibility with Python 3.8. We don't want to wait for PyTorch to take Python 3.10 as the lowest supported version before using the new typing syntax.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130484
Approved by: https://github.com/titaiwangms
2024-07-13 00:08:25 +00:00
cyy
163847b1bb [1/N] [Caffe2] Remove caffe2_aten_fallback code (#128675)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128675
Approved by: https://github.com/r-barnes
2024-06-17 21:25:59 +00:00
Aaron Orenstein
27f9d3b0a1 Flip default value for mypy disallow_untyped_defs [8/11] (#127845)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127845
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843, #127844
2024-06-08 18:49:56 +00:00
AllenTiTaiWang
1ca2e993af [ONNX] Support aten::logit (#102377)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102377
Approved by: https://github.com/BowenBao
2023-06-02 03:39:35 +00:00
Thiago Crepaldi
a8f40b39ce Update all ONNX symbolics with new JitScalarType API (#87245)
Fixes https://github.com/pytorch/pytorch/issues/84365 and more

This PR addresses not only the issue above, but the entire family of issues related to `torch._C.Value.type()` parsing when `scalarType()` or `dtype()` is not available.

This issue existed before `JitScalarType` was introduced, but the new implementation reintroduced the bug because the new APIs `from_name` and `from_dtype` require parsing `torch._C.Value.type()` to get proper inputs, which is exactly the root cause of this family of bugs.

Therefore `from_name` and `from_dtype` must be called only when the implementer knows the `name` and `dtype` without parsing a `torch._C.Value`. To handle the corner cases hidden within `torch._C.Value`, a new `from_value` API was introduced; it should be used instead of the former ones in most cases. The new API is safer and doesn't require type parsing from the user, which could trigger JIT asserts in the core of PyTorch.

Although CI is passing for all tests, please review all symbolics/helpers refactoring carefully to make sure the meaning/intention of the old calls is not changed in the new calls.
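
A rough sketch of the intended usage pattern (the names come from this description; the exact import path and signature are assumptions):

```python
from torch.onnx import _type_utils

def symbolic_fn(g, self):
    # from_value handles torch._C.Value objects whose type() lacks scalarType()/dtype(),
    # instead of forcing the caller to parse the type and risk JIT asserts.
    scalar_type = _type_utils.JitScalarType.from_value(
        self, _type_utils.JitScalarType.FLOAT  # assumed fallback default
    )
    ...
```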

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87245
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-11-03 03:01:33 +00:00
Justin Chu
5deeb09d4e [ONNX] Annotate all g as GraphContext (#85491)
- Use g.opset to test export opset version
- Annotate all `g` as GraphContext
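
A minimal sketch of what the annotation enables (illustrative; attribute name as described here):

```python
def some_symbolic(g: "GraphContext", input):
    # branch on the export opset directly instead of consulting a global
    if g.opset >= 13:
        return g.op("SomeOp13", input)
    return g.op("SomeOp", input)
```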

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85491
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-28 22:39:28 +00:00
Justin Chu
3d2316670f [ONNX] Create GraphContext and load g.op method to the class (#84728)
This PR creates the `GraphContext` class, which relays all graph methods to `_C.Graph` and implements the `g.op` method. The `GraphContext` object is passed into the symbolic functions in place of `_C.Graph` for compatibility with existing symbolic functions.

This way (1) we can type-annotate all `g` args because the methods are defined, (2) we can use additional context information in symbolic functions, and (3) there is no more monkey patching on `_C.Graph`.

Also

- Fix return type of `_jit_pass_fixup_onnx_controlflow_node`
- Create `torchscript.py` to house torch.Graph related functions
- Change `GraphContext.op` to create nodes in the Block instead of the Graph
- Create `add_op_with_blocks` to handle scenarios where we need to directly manipulate sub-blocks. Update loop and if symbolic functions to use this function.

## Discussion

Should we put all the context inside `SymbolicContext` and make it an attribute in the `GraphContext` class? This way we only define two attributes `GraphContext.graph` and `GraphContext.context`. Currently all context attributes are directly defined in the class.

### Decision

Keep GraphContext flat and note that it will change in the future.
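
A rough sketch of the flat context object described above (field names are assumptions based on this description, not the exact implementation):

```python
import dataclasses
import torch

@dataclasses.dataclass
class GraphContext:
    graph: torch._C.Graph         # the underlying graph
    block: torch._C.Block         # nodes are created in the current block, not the graph
    opset: int                    # export opset version, usable inside symbolic functions
    original_node: torch._C.Node  # the aten node currently being converted

    def op(self, opname: str, *args, **kwargs):
        # relays to the underlying block/graph, mirroring the old g.op behavior
        ...
```
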
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84728
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-28 22:21:55 +00:00
Jane Xu
e7e1cd945f Add path optimize kwarg to einsum (#84890)
## This PR seeks to:
- [x] add c++ support for an optimize path
- [x] add python opt_einsum path passthrough
- [x] add opt_einsum to OSS requirements, but a soft one
- [x] show benchmark results here

Additional things I've explored + their conclusions:
- **Delaying the summing over dimensions** => added!
    - The idea here is to not incur kernel calls to `sum` as we try to early sum out in einsum. Thus, we collect all the dimensions that need to be summed together in one contraction + sum at the end instead of summing as we go. While this optimization didn't feel like it made things faster for the random cases we've selected (they all summed 1 dim per contraction), it is a good principle and would help more common use cases that would reduce multiple dimensions at a time (like `bxy,xyi,xyj->bij`; see the sketch after this list).
- **Caching contract_path based on equation and tensor sizes** => dropped :(
    - The benchmarks were strictly worse for all the cases, and, from scanning the use cases, I observed people do not often call einsum on the same equation/tensor order enough for caching to be justified. I do think caching can be effective in the future, but it would require further investigation.
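
A hedged example of the kind of multi-operand equation where contraction order matters (the `bxy,xyi,xyj->bij` case mentioned above; shapes are made up):

```python
import torch

b, x, y, i, j = 8, 32, 32, 16, 16
A = torch.randn(b, x, y)
B = torch.randn(x, y, i)
C = torch.randn(x, y, j)

# With opt_einsum installed, torch.einsum can pick a cheaper pairwise contraction
# order instead of strictly contracting left to right.
out = torch.einsum("bxy,xyi,xyj->bij", A, B, C)
print(out.shape)  # torch.Size([8, 16, 16])
```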

## Not a part of this PR (but are next steps):
- adding opt_einsum package to OSS CI
- adding it to internal CI
- potentially adding a kwarg path argument to the python API -- if the path is given, we wouldn't have to spend time calculating it, but there would be some time lost validating user input.

## Testing:
- Added more tests to CI

## Benchmarking:
**TL;DRs**
- **torch.einsum with opt_einsum is a definite win for the production case**.
- **torch.einsum with opt_einsum installed is consistently fast, but has an overhead** of needing to find the path. If the path is already found/optimal, it will be slightly slower.
- The einsum overhead decreases for bigger dimensions.
- **torch.einsum without opt_einsum installed is comparable to before this commit**, with occasional slowness potentially due to not reshaping/squeezing as we contract until the end.
- For many of the random generated cases, the dimensions were too similar and small where an optimal order wasn't that much more optimal than just going left to right. However, in production, dimensions are commonly quite distinct (batch size will be small, but the data will be huge).
- **torch.einsum opt is comparable (slightly faster overall) compared to numpy.einsum opt for the cpu case**. This is interesting given that torch.einsum currently spends time computing the path, but numpy.einsum takes it as input.
- **torch.einsum opt is significantly faster than numpy.einsum opt for the gpu case**. This is because numpy doesn't take advantage of GPUs.

The following benchmarks were done on an A100 GPU and Linux CPUs. The line in the first chart separates GPU (on top) from CPU, and the line in the second graph separates CPU (on top) and then GPU. Sorry it's flipped 😛 .

Production example (see [colab benchmark](https://colab.research.google.com/drive/1V2s4v1dOOKwRvp5T_DC-PNUosOV9FFJx?authuser=1#scrollTo=WZoQkC8Mdt6I) for more context):
<img width="1176" alt="image" src="https://user-images.githubusercontent.com/31798555/192012636-9a68bfa7-2601-43b1-afeb-b4e0877db6a4.png">

Randomly generated examples (the same ones as in https://github.com/pytorch/pytorch/pull/60191)
<img width="1176" alt="image" src="https://user-images.githubusercontent.com/31798555/192012804-1c639595-b3e6-48c9-a385-ad851c13e1c2.png">

Open below to see old + not super relevant benchmarking results:
<details>
Benchmark results BEFORE this PR (on Linux -- I will update devices so they are consistent later):
<img width="776" alt="image" src="https://user-images.githubusercontent.com/31798555/190807274-18f71fce-556e-47f4-b18c-e0f7d0c0d5aa.png">

Benchmark results with the code on this PR (on my x86 mac):
For the CPU internal use case --
![image](https://user-images.githubusercontent.com/31798555/190801376-6f591b00-cebd-4ca7-bb23-ae8f17f1634e.png)

For the general use case --
It looks like numpy opt still does better in several of these random cases, but torch einsum opt is consistently faster than torch.einsum.
![image](https://user-images.githubusercontent.com/31798555/190811730-fbb6797d-af59-4f5a-92da-ba4103372014.png)
</details>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84890
Approved by: https://github.com/albanD, https://github.com/soulitzer
2022-09-24 03:47:36 +00:00
Justin Chu
2f50d2f685 [ONNX] Update docs on symbolic registration (#85290)
- Move inline instructions on editing symbolic functions to the README
- Add a line on using the symbolic function registration decorator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85290
Approved by: https://github.com/BowenBao
2022-09-22 13:37:11 +00:00
Justin Chu
76d60778eb [ONNX] Use decorators for symbolic function registration (#84448)
This is the 4th PR in the series of #83787. It enables the use of `@onnx_symbolic` across `torch.onnx`.

- **Backward breaking**: Removed some symbolic functions from `__all__` because of the use of  `@onnx_symbolic` for registering the same function on multiple aten names.
- Decorate all symbolic functions with `@onnx_symbolic`
- Move Quantized and Prim ops out from classes to functions defined in the modules. Eliminate the need for `isfunction` checking, speeding up the registration process by 60%.
    - Remove the outdated unit test `test_symbolic_opset9.py`
- Symbolic function registration moved from the first call to `_run_symbolic_function` to init time.
- Registration is fast:
  ![image](https://user-images.githubusercontent.com/11205048/189164959-f3fca173-19bc-4682-b150-f13a586387bf.png)
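
A rough sketch of decorator-based registration (the decorator name comes from this PR; the import path and the `opset` argument are assumptions):

```python
from torch.onnx._internal import registration

@registration.onnx_symbolic("aten::relu", opset=9)
@registration.onnx_symbolic("aten::relu_", opset=9)  # same function, multiple aten names
def relu(g, self):
    return g.op("Relu", self)
```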

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84448
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-22 06:25:24 +00:00
Justin Chu
388368b699 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
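
A minimal sketch of what beartype's runtime checking does (using beartype's public decorator, not the internal PyTorch wrapper added here):

```python
from beartype import beartype

@beartype
def select_opset(opset_version: int) -> str:
    return f"opset{opset_version}"

select_opset(15)  # fine
try:
    select_opset("15")  # wrong argument type
except Exception as e:  # beartype raises its own violation exception at call time
    print("rejected at call time:", type(e).__name__)
```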
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-09-03 01:40:18 +00:00
PyTorch MergeBot
d8cc8368ab Revert "[ONNX] Fix type annotations and enable type checking for all apis (#84091)"
This reverts commit 6446da1730.

Reverted https://github.com/pytorch/pytorch/pull/84091 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-08-28 12:28:58 +00:00
Justin Chu
6446da1730 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao
2022-08-27 04:40:41 +00:00
Justin Chu
3dfb8dfcf3 [ONNX] Use errors.SymbolicValueError for more context (#83332)
Replace runtime errors in torch.onnx with `errors.SymbolicValueError` for more context around jit values.

- Extend `_unimplemented`, `_onnx_unsupported`, `_onnx_opset_unsupported`, `_onnx_opset_unsupported_detailed` errors to include JIT value information
- Replace plain RuntimeError with `errors.SymbolicValueError`
- Clean up: Use `_is_bool` to replace string comparison on jit types
- Clean up: Remove the todo `Remove type ignore after #81112`

#77316
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83332
Approved by: https://github.com/AllenTiTaiWang, https://github.com/thiagocrepaldi, https://github.com/BowenBao
2022-08-23 05:39:17 +00:00
Justin Chu
27108d9434 [ONNX] Update typing and error messages in symbolic_helper (#83007)
### Description

- Clearer error messages with more context
- Created `SymbolicValueError`, which adds context about the value to the error message
- Type annotations

example error message:

```
torch.onnx.errors.SymbolicValueError: ONNX symbolic does not understand the Constant node '%1 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
' specified with descriptor 'is'.  [Caused by the value '1 defined in (%1 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Constant'.]

    Inputs:
        Empty
    Outputs:
        #0: 1 defined in (%1 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
    )  (type 'Tensor')
```

### Issue

- #77316 (Runtime error during symbolic conversion)

### Testing

Unit tested
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83007
Approved by: https://github.com/BowenBao
2022-08-11 23:26:13 +00:00
Justin Chu
c6cdca5c68 [ONNX] Reland #81953 Type utility for converting among JIT, torch and ONNX data types (#82995)
Re-land #81953

Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Deprecated: "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type" in `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
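
A hedged sketch of the enum-style conversions that replace the dictionary/list lookups (method names follow this description; exact signatures may differ):

```python
import torch
from torch.onnx import _type_utils

scalar_type = _type_utils.JitScalarType.from_dtype(torch.float16)
onnx_type = scalar_type.onnx_type()  # ONNX data type
torch_dtype = scalar_type.dtype()    # back to torch.float16

# vs. the removed style: chained scalar_type_to_onnx / scalar_type_to_pytorch_type lookups
```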
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82995
Approved by: https://github.com/kit1980
2022-08-08 23:43:43 +00:00
PyTorch MergeBot
b170a52a09 Revert "[ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)"
This reverts commit 6ddf4c6f58.

Reverted https://github.com/pytorch/pytorch/pull/81953 on behalf of https://github.com/kit1980 due to Broke internal builds by removing functions without deprecation
2022-08-07 20:15:28 +00:00
Justin Chu
6ddf4c6f58 [ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)
Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Breaking: **Remove "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type"** from `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81953
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 22:24:45 +00:00
killbigger
561f1568c6 [ONNX] Make einsum_helper in opset12 private (#82402)
### Description

`einsum_helper` was a helper function, but it was public. I made it private by removing it from `__all__` and renaming it to `_einsum_helper`.

### Issue

Fixes #82245

### Testing

Unit tested

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82402
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-08-05 18:57:35 +00:00
Li-Huai (Allan) Lin
6bdf89b0c7 [ONNX] Fix argmin and argmax test cases (#79503)
Part of #79263

The `keepdim` argument is theoretically ignored when `dim` is not specified (See [docs](https://pytorch.org/docs/stable/generated/torch.argmin.html)).

Unfortunately the PyTorch implementation seems to still take it into account, resulting in a non-fully-reduced tensor, which is undefined behavior. Thus, I add a `dim` argument to the tests to make the outputs between PyTorch and ONNX Runtime consistent.
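
For reference, the PyTorch-side behavior described above (plain PyTorch semantics, not exporter code):

```python
import torch

x = torch.randn(3, 4)
torch.argmin(x)                       # dim omitted: index into the flattened tensor
torch.argmin(x, dim=1, keepdim=True)  # well defined: output shape (3, 1)
# Passing keepdim without dim is the ambiguous case the updated tests now avoid.
```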

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79503
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 18:09:47 +00:00
Wei-Sheng Chin
b7f9315eac [ONNX] Export aten::native_dropout (#81743)
This PR adds exporter for `aten::native_dropout` with a test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81743
Approved by: https://github.com/BowenBao, https://github.com/justinchuby
2022-07-26 21:47:06 +00:00
Huy Do
6ea422dd0b Format torch/onnx with ufmt (#82137)
This is the last batch for the new ufmt (black + usort) linter. After this, black linter can finally be replaced. The previous PR to format ONNX tests was #81335
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82137
Approved by: https://github.com/kit1980, https://github.com/AllenTiTaiWang
2022-07-25 22:42:21 +00:00
Gary Miguel
fb7a761ffd [ONNX] reduce log spam when exporting dropout in training mode (#78309)
The default for `torch.onnx.export` is `TrainingMode.EVAL`:
0d76299ff7/torch/onnx/__init__.py (L63)

That means that this warning is only printed when the caller overrides
that and explicitly specifies that they want training ops like Dropout.
We should assume the user knows what they're doing and not warn.

Also set `do_constant_folding=False` in the dropout related training tests. Without this, warnings are printed like:
```
UserWarning: It is recommended that constant folding be turned off ('do_constant_folding=False') when exporting the model in training-amenable mode
```
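
A hedged example of the export call these training tests make conceptually; training ops such as Dropout are only emitted when the caller opts into TRAINING mode:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.5))
dummy = torch.randn(1, 4)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    training=torch.onnx.TrainingMode.TRAINING,  # explicit opt-in, so no warning is needed
    do_constant_folding=False,                  # recommended when exporting a training graph
)
```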
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78309
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-06-03 18:41:07 +00:00
Justin Chu
0d76299ff7 [ONNX] Clean up module imports (#77423)
Cleaning up onnx module imports to prepare for updating `__init__`.

- Simplify importing the `_C` and `_C._onnx` name spaces
- Remove alias of the symbolic_helper module in imports
- Remove any module level function imports. Import modules instead
    - Alias `symbolic_opsetx` as `opsetx`
- Fix some docstrings

Requires:
- https://github.com/pytorch/pytorch/pull/77448
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77423
Approved by: https://github.com/BowenBao
2022-05-20 01:56:24 +00:00
Justin Chu
563c2719bf [ONNX] Refactor to remove inline imports - attempt 2 (#77448)
Re-land
- #77142

(diff: https://github.com/pytorch/pytorch/compare/c08b8f0..justinchuby:justinchu/remove-patch2)

Fixed:
- Delay import symbolic_opsets in the registry.

Tested locally with torchvision
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77448
Approved by: https://github.com/garymm
2022-05-16 14:44:24 +00:00
PyTorch MergeBot
6b366dd3c1 Revert "[ONNX] Refactor to remove inline imports (#77142)"
This reverts commit c08b8f0967.

Reverted https://github.com/pytorch/pytorch/pull/77142 on behalf of https://github.com/malfet
2022-05-13 19:44:17 +00:00
Justin Chu
c08b8f0967 [ONNX] Refactor to remove inline imports (#77142)
Reduce circular dependencies

- Lift constants and flags from `symbolic_helper` to `_constants` and `_globals`
    - Standardized constant naming to make it consistent
- Make `utils` strictly dependent on `symbolic_helper`, removing inline imports from symbolic_helper
- Move side effects from `utils` to `_patch_torch`
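
An illustrative sketch of the dependency direction this creates (module names from this PR, contents assumed):

```python
# torch/onnx/_constants.py -- leaf module with no torch.onnx imports
ONNX_BASE_OPSET = 9  # assumed constant name, for illustration only

# torch/onnx/symbolic_helper.py -- depends only on the leaf module
# from torch.onnx import _constants
# ...no more inline `from torch.onnx import utils` inside function bodies,
# so symbolic_helper no longer participates in an import cycle with utils.
```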

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77142
Approved by: https://github.com/garymm, https://github.com/BowenBao
2022-05-13 03:46:33 +00:00
Justin Chu
5dd1c67776 [ONNX] Format ONNX python with black
Format all onnx python code with black and isort with

```sh
isort torch/onnx/ test/onnx
black torch/onnx/ test/onnx
```

Updated lintrunner config to include these paths.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76754
Approved by: https://github.com/suo, https://github.com/BowenBao
2022-05-05 00:19:22 +00:00
Thiago Crepaldi
eab3f42883 Update symbolics policy to emit aten::ATen for Caffe2 build only
Currently ONNX exporter symbolics can emit ATen operators when `operator_export_type==ONNX_ATEN_FALLBACK`. However, this is a behavior specific to Caffe2 builds, as the intended use of `ONNX_ATEN_FALLBACK` is to emit ATen operators only when there is no ONNX equivalent.

The reason Caffe2 chooses to emit ATen operators even when an ONNX counterpart exists is performance on its particular engine implementation, which might not be true for other implementations; e.g., ONNX Runtime can optimize the generated ONNX graph into something more efficient.
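
A hedged usage example of the export mode discussed above (public API call, not the internal policy change itself):

```python
import torch

model = torch.nn.Linear(4, 4)
torch.onnx.export(
    model,
    torch.randn(1, 4),
    "model.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
# In this mode, ATen ops should only be emitted when no ONNX equivalent exists --
# except on Caffe2 builds, which keep emitting ATen ops for performance reasons.
```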

This PR must be merged only after https://github.com/pytorch/pytorch/pull/73954
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74680
Approved by: https://github.com/garymm, https://github.com/malfet
2022-04-19 15:57:54 +00:00
BowenBao
9210e8f540 [ONNX] Adds overload_name to Aten op (#69378) (#73280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73280

This PR adds a new attribute overload_name to the Aten node so that third party applications can implement calls to libtorch without using PyTorch source code.

This is necessary because torch's torch::jit::findOperatorFor(fullname) requires a full name, including operator and overload names.

ATen op was originally created for Caffe2, which leveraged the availability of the pytorch yaml files to create calls to the aten operators directly, not relying on torch::jit::findOperatorFor.

The first part of the PR refactors all symbolics that create Aten ops so that there is a single helper for this operator. Next, all symbolics are updated to pass in the relevant overload name, or an empty string if not applicable.
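
A rough sketch of the idea (attribute names are assumptions based on this description, not the exact helper added in the PR):

```python
def _aten_op(g, op_name: str, *args, overload_name: str = ""):
    # The fallback node now carries the overload name, so a consumer can resolve
    # the full schema, e.g. "aten::div.Tensor_mode", via torch::jit::findOperatorFor.
    return g.op("ATen", *args, operator_s=op_name, overload_name_s=overload_name)
```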

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D34625645

Pulled By: malfet

fbshipit-source-id: 37d58cfb5231833768172c122efc42edf7d8609a
(cherry picked from commit e92f09117d3645b38bc3235b30aba4b4c7c71dfa)
2022-03-09 14:26:18 +00:00
Gary Miguel
37688148ae [ONNX] Support opset 15 (#67121) (#67805)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67805

Also fix Reduce ops on binary_cross_entropy_with_logits

The graph says the output is a scalar, but with `keepdims=1`
(the default) the output would actually be a tensor of rank 1. We set
`keepdims=0` to make it clear that we want a scalar output.

This previously went unnoticed because ONNX Runtime does not strictly
enforce shape inference mismatches if the model is not using the latest
opset version.
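
A hedged sketch of the fix described above (symbolic-function style; the surrounding code is not shown in this commit message):

```python
def symbolic_fn(g, loss):
    # keepdims_i=0 asks ONNX ReduceMean for a true scalar, matching the graph's
    # declared output, instead of the rank-1 tensor the default keepdims=1 yields.
    return g.op("ReduceMean", loss, keepdims_i=0)
```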

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181304

Pulled By: malfet

fbshipit-source-id: 1462d8a313daae782013097ebf6341a4d1632e2c

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-11-08 14:37:00 -08:00
Nikita Shulga
0bc9928f31 [ONNX] Symbolic: dynamic input for OneHot, bool for Einsum (#65940) (#66147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66147

Symbolic: dynamic input for OneHot, bool for Einsum

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424094

fbshipit-source-id: 76bea22b29c93d1621c597fe7ab59deb3685087f

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-22 13:46:24 -07:00
Thomas J. Fan
d3bcba5f85 ENH Adds label_smoothing to cross entropy loss (#63122)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/7455

Partially resolves pytorch/vision#4281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63122

Reviewed By: iramazanli

Differential Revision: D30586076

Pulled By: jbschlosser

fbshipit-source-id: 06afc3aa1f8b9edb07fe9ed68c58968ad1926924
2021-08-29 23:33:04 -07:00
BowenBao
2aa19f33c6 [ONNX] Fix for batchnorm training op mode (#52758) (#62760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62760

* Rebase

# Conflicts:
#	torch/csrc/jit/passes/onnx/eval_peephole.cpp

# Conflicts:
#	test/onnx/test_utility_funs.py
#	torch/onnx/symbolic_opset9.py

* Update symbolic_opset12.py

* Update test.sh
# Conflicts:
#	.jenkins/caffe2/test.sh

* Merge

* Fix utility tests

# Conflicts:
#	test/onnx/test_pytorch_onnx_onnxruntime.py
#	test/onnx/test_utility_funs.py

* Fix for comment

* Enable BN tests

* Fix for test

* Update test_pytorch_onnx_onnxruntime.py

* Update test_pytorch_onnx_onnxruntime.py

* Update test_utility_funs.py

* Update test_pytorch_onnx_onnxruntime.py

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349060

Pulled By: msaroufim

fbshipit-source-id: 93312c17607974731c17099ae181acb6e4c1c409
2021-08-18 13:29:07 -07:00
BowenBao
3a7bbf5fb7 [ONNX] Add support for opset14 in PT-ONNX exporter (#59486) (#62758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62758

* Add initial changes for opset14

* Fixed flake

* Add onnx submodule changes and removed utility func tests

* Add updated batchNorm symbolic

* Add triu/tril symbolics

* Fix lint

* Fixed test failures

* Add reshape with allowzero

* Added tests/refactored opset versioning

* Bump onnxruntime version

* Fix clang/lint failures

* Add reshape shape inference for opset 14

* Changes for allowzero

* Fix lint/clang and test failures

* Updated PR

* Flake fixes

* Fix flake

* Remove new_jit_api tests

* Add opset14 models

* Update allowzero

* Fix test failures

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349063

Pulled By: msaroufim

fbshipit-source-id: 54724246149b01a2f627c43d7396253a7e9c9eb9

Co-authored-by: Shubham Bhokare <sbhokare@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-08-18 13:29:01 -07:00
BowenBao
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
BowenBao
5a455dc717 [ONNX] Enable tensordot symbolic function. (#55654) (#56166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56166

Support tensordot in symbolic function of opset 12, and add tests accordingly.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866140

Pulled By: SplitInfinity

fbshipit-source-id: 68e218cfbd630900fb92871fc7c0de3e7e8c8c3d
2021-04-20 23:00:41 -07:00
DeyuHuang
40869884cd Add outer export to onnx (#53603) (#54869)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54869

Add a symbolic function to support torch.outer export to ONNX.
Adds support for the transfo-xl-wt103 model.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408978

Pulled By: SplitInfinity

fbshipit-source-id: 70c89a9fc1a5e4a4ddcf674afb1e82e492a7d3b9
2021-03-31 21:14:29 -07:00
Bel H
645119eaef Lowering NLLLoss/CrossEntropyLoss to ATen code (#53789)
Summary:
* Lowering NLLLoss/CrossEntropyLoss to ATen dispatch
* This allows the MLC device to override these ops
* Reduce code duplication between the Python and C++ APIs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53789

Reviewed By: ailzhang

Differential Revision: D27345793

Pulled By: albanD

fbshipit-source-id: 99c0d617ed5e7ee8f27f7a495a25ab4158d9aad6
2021-03-26 07:31:08 -07:00
BowenBao
8dd9fefacb [ONNX] Fix bug in unfold symbolic (#50504) (#51515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51515

Fix bug in unfold symbolic

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203113

Pulled By: SplitInfinity

fbshipit-source-id: 3a1b0013624d918de762a88ac6de8c9cafa0f732
2021-02-04 12:43:50 -08:00
BowenBao
b308fb78d1 [ONNX] Add binary_cross_entropy_with_logits op to ONNX opset version 12 (#49675) (#50908)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50908

Fixes #47997
Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050885

Pulled By: SplitInfinity

fbshipit-source-id: e4167895eed804739aa50481679500a4d564b360
2021-01-27 17:48:49 -08:00
BowenBao
7e4c956955 [ONNX] Support opset13 Squeeze and Unsqueeze (#50150) (#50906)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50906

In opset 13, Squeeze/Unsqueeze are updated to take axes as an input instead of an attribute.
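
A hedged sketch of the difference (symbolic-function style; helper details assumed):

```python
import torch

def squeeze_opset11(g, self, dim: int):
    return g.op("Squeeze", self, axes_i=[dim])  # axes as an attribute

def squeeze_opset13(g, self, dim: int):
    axes = g.op("Constant", value_t=torch.tensor([dim], dtype=torch.int64))
    return g.op("Squeeze", self, axes)          # axes as a tensor input
```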

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050883

Pulled By: SplitInfinity

fbshipit-source-id: 7b5faf0e016d476bc75cbf2bfee6918d77e8aecd
2021-01-27 17:48:40 -08:00