Commit Graph

615 Commits

Yuanyuan Chen
fc8ac1216c [4/N] Remove unused loop variables in tests (#166690)
This PR removes unused loop variables in tests.
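
For illustration, a minimal before/after sketch (hypothetical test helper) of the kind of cleanup this PR applies:

```python
def run_check() -> None:
    print("checked")

# Pattern this change removes: the loop variable `i` is bound but never used.
for i in range(3):
    run_check()

# Cleaned-up form: `_` marks the variable as intentionally unused.
for _ in range(3):
    run_check()
```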

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166690
Approved by: https://github.com/justinchuby, https://github.com/mlazos
2025-10-31 10:20:48 +00:00
linhaifeng
695cb0d342 [2/N][Fix] Fix typo in test folder (#166374)
Fixes typos in the test folder. The entries below are whitelisted as intentional spellings:

_typos.toml
```toml
[default.extend-words]
nd = "nd"
arange = "arange"
Nd = "Nd"
GLOBALs = "GLOBALs"
hte = "hte"
iy = "iy"
PN = "PN"
Dout = "Dout"
optin = "optin"
gam = "gam"
PTD = "PTD"
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166374
Approved by: https://github.com/cyyever, https://github.com/ezyang
2025-10-29 03:02:07 +00:00
Yuanyuan Chen
e925dfcc6b Enable all SIM rules except disabled ones (#164645)
`SIM` rules are useful for simplifying boolean expressions and enhance code readability.
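
For illustration, one typical simplification flagged by the SIM rules (SIM103, "return the condition directly"); a sketch, not code from this PR:

```python
# Flagged by SIM103: the if/else just wraps a boolean condition.
def is_adult(age: int) -> bool:
    if age >= 18:
        return True
    else:
        return False

# Simplified form the rule suggests:
def is_adult_simplified(age: int) -> bool:
    return age >= 18
```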

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164645
Approved by: https://github.com/ezyang, https://github.com/mlazos
2025-10-17 07:27:11 +00:00
PyTorch MergeBot
5d7360bb03 Revert "Enable all SIM rules except disabled ones (#164645)"
This reverts commit 321e602692.

Reverted https://github.com/pytorch/pytorch/pull/164645 on behalf of https://github.com/izaitsevfb due to causes lint failures ([comment](https://github.com/pytorch/pytorch/pull/164645#issuecomment-3369274351))
2025-10-05 19:32:21 +00:00
Yuanyuan Chen
321e602692 Enable all SIM rules except disabled ones (#164645)
`SIM` rules are useful for simplifying boolean expressions and enhance code readability.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164645
Approved by: https://github.com/ezyang
2025-10-05 07:38:25 +00:00
Justin Chu
bd39e47fee [ONNX] Default to dynamo export (#159646)
Set dynamo=True and enable fallback.

1. Implemented compatible behavior so that BytesIO objects are accepted as `f`
2. Updated tests to explicitly set dynamo=False

#151693
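
A minimal sketch of what the new default means for callers, assuming a trivial model:

```python
import io

import torch

class Model(torch.nn.Module):
    def forward(self, x):
        return x + 1

model = Model()
args = (torch.randn(2, 3),)

# With the new default, export goes through the dynamo-based exporter;
# a BytesIO object is still accepted as `f` for compatibility (item 1).
buffer = io.BytesIO()
torch.onnx.export(model, args, buffer)

# Tests that need the legacy TorchScript exporter now opt out explicitly (item 2).
torch.onnx.export(model, args, "model.onnx", dynamo=False)
```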

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159646
Approved by: https://github.com/titaiwangms
2025-09-02 22:45:55 +00:00
Justin Chu
524b78d4f6 [ONNX] Refactor torchscript based exporter (#161323)
Refactor the TorchScript-based exporter logic by moving it to a single (private) location for better code management. The original public module and method APIs are preserved.

- Updated module paths in `torch/csrc/autograd/python_function.cpp` accordingly
- Removed `check_onnx_broadcast` from `torch/autograd/_functions/utils.py` because it is private and unused

@albanD / @soulitzer could you review changes in `torch/csrc/autograd/python_function.cpp` and
`torch/autograd/_functions/utils.py`? Thanks!

## BC Breaking
- **Deprecated members in `torch.onnx.verification` are removed**

Differential Revision: [D81236421](https://our.internmc.facebook.com/intern/diff/D81236421)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161323
Approved by: https://github.com/titaiwangms, https://github.com/angelayi
2025-09-02 16:10:30 +00:00
PyTorch MergeBot
82c7a1eb4b Revert "[ONNX] Default to dynamo export (#159646)"
This reverts commit 11b6ceb7b4.

Reverted https://github.com/pytorch/pytorch/pull/159646 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/159646#issuecomment-3198507767))
2025-08-18 21:41:32 +00:00
Justin Chu
11b6ceb7b4 [ONNX] Default to dynamo export (#159646)
Set dynamo=True and enable fallback.

1. Implemented compatible behavior so that BytesIO objects are accepted as `f`
2. Updated tests to explicitly set dynamo=False

#151693

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159646
Approved by: https://github.com/titaiwangms
2025-08-16 04:48:58 +00:00
Xuehai Pan
c73a92fbf5 [BE][CI] bump ruff to 0.9.2: multiline assert statements (#144546)
Reference: https://docs.astral.sh/ruff/formatter/black/#assert-statements

> Unlike Black, Ruff prefers breaking the message over breaking the assertion, similar to how both Ruff and Black prefer breaking the assignment value over breaking the assignment target:
>
> ```python
> # Input
> assert (
>     len(policy_types) >= priority + num_duplicates
> ), f"This tests needs at least {priority+num_duplicates} many types."
>
>
> # Black
> assert (
>     len(policy_types) >= priority + num_duplicates
> ), f"This tests needs at least {priority+num_duplicates} many types."
>
> # Ruff
> assert len(policy_types) >= priority + num_duplicates, (
>     f"This tests needs at least {priority + num_duplicates} many types."
> )
> ```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144546
Approved by: https://github.com/malfet
2025-02-27 20:46:16 +00:00
Aaron Gokaslan
e738f7ba23 [BE]: Enable ruff rule SIM113 (#147290)
A lint rule that tells the user to avoid keeping track of their own counter and to use the builtin enumerate when possible.
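
A sketch of the pattern SIM113 flags and its suggested replacement:

```python
items = ["a", "b", "c"]

# Flagged by SIM113: a manually maintained counter.
idx = 0
for item in items:
    print(idx, item)
    idx += 1

# Preferred form using the builtin enumerate:
for idx, item in enumerate(items):
    print(idx, item)
```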

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147290
Approved by: https://github.com/jansel
2025-02-16 22:41:16 +00:00
Aaron Orenstein
99dbc5b0e2 PEP585 update - test (#145176)
See #145101 for details.
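
A representative before/after sketch of the PEP 585 migration applied here:

```python
from typing import Dict, List

# Before: typing-module generics.
def group(names: List[str]) -> Dict[str, List[str]]:
    return {n[0]: [n] for n in names}

# After PEP 585: builtin generics, available since Python 3.9.
def group_pep585(names: list[str]) -> dict[str, list[str]]:
    return {n[0]: [n] for n in names}
```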

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145176
Approved by: https://github.com/bobrenjc93
2025-01-22 04:48:28 +00:00
Leo Yang
40305dd37e [onnx] Fix bug for exporting torch.cdist into onnx and support 'compute_mode' (#144213)
### Fix bug for exporting torch.cdist and support 'compute_mode'
In [cdist](https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py#L6181), 'compute_mode' was ignored, which leads to a big difference in the computation flow between the original torch.cdist and the exported ONNX file when computing Euclidean distance (p=2). For Euclidean distance, running the exported ONNX model can be 10x slower than running torch.cdist directly, and is also very likely to cause CUDA OOM for larger matrices unnecessarily.

This change exports the same ONNX computation flow as the forward of torch.cdist, defined in `aten/src/ATen/native/Distance.cpp` (L66-L149 at commit 9225f149eb), under every compute_mode.

Fixes #144212
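
A sketch (shapes assumed) of the call whose compute_mode was previously dropped during export:

```python
import torch

x1 = torch.randn(100, 64)
x2 = torch.randn(200, 64)

# compute_mode controls how Euclidean distance (p=2) is computed; before
# this fix the exporter ignored it and always expanded pairwise differences,
# which is slow and memory-hungry for large matrices.
d = torch.cdist(x1, x2, p=2, compute_mode="use_mm_for_euclid_dist")
```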

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144213
Approved by: https://github.com/justinchuby
2025-01-09 20:07:20 +00:00
Tom Ritchford
d8c8ba2440 Fix unused Python variables in test/[e-z]* (#136964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
titaiwangms
e48ee2cf50 [ONNX] Fix scaled_dot_product_attention with float scale (#135594)
Fixes #125158
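
A minimal repro sketch (shapes and names assumed) of the pattern this fixes:

```python
import torch
import torch.nn.functional as F

class Attn(torch.nn.Module):
    def forward(self, q, k, v):
        # A float `scale` argument previously broke ONNX export of this op.
        return F.scaled_dot_product_attention(q, k, v, scale=0.125)

q = k = v = torch.randn(1, 4, 8, 16)
torch.onnx.export(Attn(), (q, k, v), "attn.onnx")
```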

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135594
Approved by: https://github.com/justinchuby
2024-09-10 23:04:02 +00:00
Justin Chu
a6b9d444fb [ONNX] Refactor exporter errors (#135180)
Refactor exporter errors to combine old errors and new errors for API consistency.

This PR also

1. Removes the `_C._check_onnx_proto(proto)` call in the old exporter. We don't need the ONNX checker because it is limited.
2. Removes the `OnnxExporterError` defined in the dynamo module. This class unnecessarily stores the onnx program object, making it very bulky. Instead, we revert to using the plain OnnxExporterError defined in the `errors` module and use it as the base class for all errors.
3. Continues to expose `OnnxExporterError` in `torch.onnx` and the rest of the errors in `torch.onnx.errors`.
4. Removes the `CheckerError` and `InvalidExportOptionsError` from `torch.onnx`. This is BC breaking but should have low impact.
5. I did not rename existing errors out of compatibility considerations, even though `ExporterError` would have been more succinct.
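
A sketch of how callers can rely on the consolidated error hierarchy (model and paths hypothetical):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

# OnnxExporterError stays exposed in torch.onnx and now serves as the base
# class for exporter errors; the rest live in torch.onnx.errors.
try:
    torch.onnx.export(M(), (torch.randn(2),), "m.onnx")
except torch.onnx.OnnxExporterError as err:
    print(f"export failed: {err}")
```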

Fixes https://github.com/pytorch/pytorch/issues/135125
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135180
Approved by: https://github.com/titaiwangms
2024-09-07 00:50:15 +00:00
PyTorch MergeBot
a681260caf Revert "[ONNX] Refactor exporter errors (#135180)"
This reverts commit 5eebd9315a.

Reverted https://github.com/pytorch/pytorch/pull/135180 on behalf of https://github.com/clee2000 due to I think this broke test_public_bindings.py::TestPublicBindings::test_correct_module_names [GH job link](https://github.com/pytorch/pytorch/actions/runs/10743909338/job/29800779403) [HUD commit link](5eebd9315a), possibly a landrace with the PR that landed before it ([comment](https://github.com/pytorch/pytorch/pull/135180#issuecomment-2334844191))
2024-09-06 21:39:18 +00:00
Justin Chu
5eebd9315a [ONNX] Refactor exporter errors (#135180)
Refactor exporter errors to combine old errors and new errors for API consistency.

This PR also

1. Removes the `_C._check_onnx_proto(proto)` call in the old exporter. We don't need the ONNX checker because it is limited.
2. Removes the `OnnxExporterError` defined in the dynamo module. This class unnecessarily stores the onnx program object, making it very bulky. Instead, we revert to using the plain OnnxExporterError defined in the `errors` module and use it as the base class for all errors.
3. Continues to expose `OnnxExporterError` in `torch.onnx` and the rest of the errors in `torch.onnx.errors`.
4. Removes the `CheckerError` and `InvalidExportOptionsError` from `torch.onnx`. This is BC breaking but should have low impact.
5. I did not rename existing errors out of compatibility considerations, even though `ExporterError` would have been more succinct.

Fixes https://github.com/pytorch/pytorch/issues/135125
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135180
Approved by: https://github.com/titaiwangms
2024-09-06 19:10:56 +00:00
Justin Chu
b319fa3fd9 [ONNX] Opt into ruff fmt (#134120)
Add ONNX directory to use ruff format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134120
Approved by: https://github.com/XuehaiPan, https://github.com/Skylion007
2024-08-22 22:44:03 +00:00
PyTorch MergeBot
b0171c3920 Revert "[ONNX] Opt into ruff fmt (#134120)"
This reverts commit 0870398fa8.

Reverted https://github.com/pytorch/pytorch/pull/134120 on behalf of https://github.com/albanD due to Breaks main branch lint ([comment](https://github.com/pytorch/pytorch/pull/134120#issuecomment-2305089756))
2024-08-22 15:48:14 +00:00
Justin Chu
0870398fa8 [ONNX] Opt into ruff fmt (#134120)
Add ONNX directory to use ruff format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134120
Approved by: https://github.com/XuehaiPan, https://github.com/Skylion007
2024-08-21 21:43:55 +00:00
Oguz Ulgen
221350e3a4 Add None return type to init -- tests (#132352)
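An illustrative sketch (class name hypothetical) of the annotation this change adds:

```python
class Widget:
    # The explicit `-> None` return annotation added to __init__ methods
    # across the test files.
    def __init__(self, size: int) -> None:
        self.size = size
```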
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132352
Approved by: https://github.com/ezyang
ghstack dependencies: #132335, #132351
2024-08-01 15:44:51 +00:00
ekamiti
9e473fd868 Make adding Buffers more like adding Parameters (#125971)
Adds semantics for creating a buffer object similar to creating a parameter, by introducing a new Buffer class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the register_buffer method has not been changed. The persistent argument on the Buffer type indicates whether the buffer should be persistent or not. The other non-test changes make the new Buffer type recognized by inductor and dynamo. The remaining changes are test changes to ensure that the Buffer type can be used as a drop-in replacement for register_buffer, as it just leads to register_buffer being called. Normal tensors can still be used as buffers, so these changes are intended to be backwards compatible.

Fixes #35735
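
A minimal sketch of the new parameter-like usage:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Assigning a Buffer registers it, just as assigning an nn.Parameter
        # registers a parameter; persistent=False keeps it out of state_dict.
        self.running_mean = nn.Buffer(torch.zeros(10), persistent=False)

m = Model()
print(list(m.named_buffers()))
```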

Co-authored-by: Mikayla Gawarecki <mikaylagawarecki@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125971
Approved by: https://github.com/albanD, https://github.com/anijain2305, https://github.com/mlazos
2024-07-31 10:32:40 +00:00
Justin Chu
ae708e9791 [ONNX] Remove the deprecated SymbolicContext (#132184)
Remove the deprecated SymbolicContext class from torch.onnx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132184
Approved by: https://github.com/titaiwangms
2024-07-31 04:24:32 +00:00
Xuehai Pan
fbe6f42dcf [BE][Easy][8/19] enforce style for empty lines in import segments in test/[k-p]*/ (#129759)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129759
Approved by: https://github.com/justinchuby, https://github.com/ezyang
2024-07-31 02:09:20 +00:00
Xuehai Pan
973037be6a [BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() (#130199)
This PR changes empty collection factory calls to Python literals:

- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`

The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:

```bash
$ python3 -m dis - <<EOS
import collections

d1 = {}
d2 = dict()

dict = collections.OrderedDict
d3 = dict()
EOS
```

```text
  0           0 RESUME                   0

  1           2 LOAD_CONST               0 (0)
              4 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (collections)
              8 STORE_NAME               0 (collections)

  3          10 BUILD_MAP                0
             12 STORE_NAME               1 (d1)

  4          14 PUSH_NULL
             16 LOAD_NAME                2 (dict)
             18 CALL                     0
             26 STORE_NAME               3 (d2)

  6          28 LOAD_NAME                0 (collections)
             30 LOAD_ATTR                8 (OrderedDict)
             50 STORE_NAME               2 (dict)

  7          52 PUSH_NULL
             54 LOAD_NAME                2 (dict)
             56 CALL                     0
             64 STORE_NAME               5 (d3)
             66 RETURN_CONST             1 (None)
```

The dict literal `{}` only has one bytecode `BUILD_MAP`, while the factory call `dict()` has three `PUSH_NULL + LOAD_NAME + CALL`. Also, the factory call is not safe if users override the `dict` name in `locals` or `globals` (see the example of replacing with `OrderedDict` above).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
titaiwangms
705346bf8d [ONNX] Skip optimizer when it fails (#127349)
Continues #127039.

1. Skips the optimizer when it fails
2. Updates onnx, ort, and onnx-script
3. The onnx-script update is what actually enables the optimizer and rewriter in this PR; https://github.com/pytorch/pytorch/pull/123379 did not update onnx-script.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127349
Approved by: https://github.com/justinchuby
2024-05-30 07:08:45 +00:00
Xuehai Pan
26f4f10ac8 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes are generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
2024-05-27 14:49:57 +00:00
PyTorch MergeBot
55c0ab2887 Revert "[5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)"
This reverts commit 7763c83af6.

Reverted https://github.com/pytorch/pytorch/pull/127126 on behalf of https://github.com/XuehaiPan due to Broken CI ([comment](https://github.com/pytorch/pytorch/pull/127126#issuecomment-2133044286))
2024-05-27 09:22:08 +00:00
Xuehai Pan
7763c83af6 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes are generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
ghstack dependencies: #127122, #127123, #127124, #127125
2024-05-27 04:22:18 +00:00
zjgarvey
edea2b81b5 [ONNX] Adds Support for Some Bitwise Ops in Onnx Exporter (#126229)
Addresses #126194

Adds support for
- "aten::bitwise_right_shift"
- "aten::bitwise_left_shift"
- "aten::bitwise_and"

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126229
Approved by: https://github.com/justinchuby
2024-05-22 07:47:43 +00:00
Aaron Gokaslan
c5fafe9f48 [BE]: TRY002 - Ban raising vanilla exceptions (#124570)
Adds a ruff lint rule to ban raising raw exceptions. Most of these should at the very least be runtime errors, value errors, type errors, or some other specific error type. There are hundreds of instances of these bad exception types already in the codebase, so I have noqa'd most of them. Hopefully this error code will get committers to rethink what exception type they should raise when they submit a PR.

I also encourage people to gradually fix all the existing noqas that have been added so they can be removed over time and our exception typing can be improved.
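
A sketch of the pattern TRY002 bans and a typed alternative:

```python
# Flagged by TRY002: raising the vanilla Exception type.
def check_rank(rank: int) -> None:
    if rank < 0:
        raise Exception("rank must be non-negative")  # noqa: TRY002

# Preferred: a specific built-in (or custom) exception type.
def check_rank_typed(rank: int) -> None:
    if rank < 0:
        raise ValueError("rank must be non-negative")
```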

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124570
Approved by: https://github.com/ezyang
2024-04-21 22:26:40 +00:00
liqunfu
bbe846f430 Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20, extend opset 18 support (#118828)
Start to fix https://github.com/pytorch/pytorch/issues/114801

Co-authored-by: Thiago Crepaldi <thiagofc@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118828
Approved by: https://github.com/thiagocrepaldi
2024-03-22 18:01:33 +00:00
Francesco Fusco
26431db939 [ONNX] Perform implicit casting of constants for the onnx::where operator (#118733) (#120619)
This PR fixes the problem of having the `Where` operator bound to different types in cases where the dtype is not explicitly set. The PR extends the implicit casting to the onnx::Where operator to fix the issue, and includes the corresponding unit test.

Fixes #118733

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120619
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2024-03-04 19:27:30 +00:00
liqunfu
cd9a1934fb [ONNX] Bump to onnx1.15.0 and ort1.17.0 in CI (#119106)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119106
Approved by: https://github.com/thiagocrepaldi, https://github.com/titaiwangms
2024-02-08 19:26:13 +00:00
titaiwangms
a3cec6a7fa [ONNX] Eliminate redundant TODOs (#119060)
Removes TODOs created by titaiwangms/AllenTiTaiWang/titaiwang:

1. Resolved TODOs
2. Turned TODOs into NOTEs if they are not actionable
3. Merged duplicated TODOs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119060
Approved by: https://github.com/kit1980, https://github.com/thiagocrepaldi
2024-02-02 23:37:52 +00:00
CYuxian
f543093e06 [ONNX] Fix output mismatch issue of repeat_interleave when dim is None (#116689)
'input' was introduced but got mixed up with 'self' in repeat_interleave, which causes the mismatch issue.
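
A minimal repro sketch (values assumed) of the dim=None path this fixes:

```python
import torch

class Model(torch.nn.Module):
    def forward(self, x):
        # With dim=None the input is flattened first; the exporter previously
        # mixed up 'input' and 'self' on this path, producing wrong outputs.
        return torch.repeat_interleave(x, 2)

torch.onnx.export(Model(), (torch.tensor([[1, 2], [3, 4]]),), "ri.onnx")
```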

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116689
Approved by: https://github.com/thiagocrepaldi
2024-01-03 18:38:00 +00:00
Aaron Gokaslan
bd10fea79a [BE]: Enable F821 and fix bugs (#116579)
Fixes #112371

I tried to fix as many of the bugs as I could; for a few I could not figure out the proper fix, so I left them with noqas.
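
For illustration, the kind of latent bug F821 surfaces (names hypothetical):

```python
# F821 flags undefined names, which usually indicate a real bug:
def total(items):
    return sum(item.price for item in itmes)  # F821: undefined name `itmes`
```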

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116579
Approved by: https://github.com/ezyang
2024-01-01 08:40:46 +00:00
Aaron Gokaslan
ee5d981249 [BE]: Enable RUFF PERF402 and apply fixes (#115505)
* Enables PERF402, which makes code more efficient and succinct by removing useless element-by-element list copies that can be accomplished either via a list constructor or an extend call. All test cases have noqa added since performance is not as sensitive in that folder.
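
A sketch of the copy pattern PERF402 flags and the forms it suggests:

```python
source = ["a", "b", "c"]

# Flagged by PERF402: copying element by element.
copied = []
for item in source:
    copied.append(item)

# Equivalent, more efficient forms the rule suggests:
copied_fast = list(source)   # fresh copy via the list constructor
existing: list[str] = []
existing.extend(source)      # or extend an existing list in one call
```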

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115505
Approved by: https://github.com/malfet
2023-12-20 18:01:24 +00:00
Aaron Gokaslan
794545c11f [BE]: Enable RUF015 codebase wide (#115507)
Constant-time access of the first value in a collection: `next(iter(x))` is a constant-time operation, unlike converting the collection to a list to get the first item, which is linear. The rule is turned on, which automatically autofixes and enforces this.
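
A sketch of the access pattern RUF015 rewrites:

```python
data = {"a": 1, "b": 2}

# Flagged by RUF015: builds a full list just to take the first element (O(n)).
first_key = list(data)[0]

# Constant-time form the rule autofixes to:
first_key = next(iter(data))
```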

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115507
Approved by: https://github.com/malfet
2023-12-11 15:51:01 +00:00
CYuxian
9bab96c78c [ONNX] Consider negative dim in _index_fill_reshape_helper (#114050)
Fixes an export issue of the index_copy op with a negative dim.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114050
Approved by: https://github.com/thiagocrepaldi
2023-11-22 15:40:57 +00:00
CYuxian
b88abb1674 [ONNX] Fix export issue of aten::layer_norm in opset 17 (#114058)
For torch.nn.LayerNorm, weight and bias can be None (when the parameter elementwise_affine is False or bias is False), but for the ONNX op LayerNormalization from opset 17, weight and bias cannot be None.
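
A minimal repro sketch of the affected configuration:

```python
import torch

# With elementwise_affine=False, weight and bias are None in PyTorch, but
# opset-17 LayerNormalization requires a scale input; the fix supplies
# appropriate defaults during export.
ln = torch.nn.LayerNorm(8, elementwise_affine=False)
torch.onnx.export(ln, (torch.randn(2, 8),), "ln.onnx", opset_version=17)
```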

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114058
Approved by: https://github.com/thiagocrepaldi
2023-11-21 22:45:50 +00:00
BowenBao
275a4521a9 [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
Fixes https://github.com/pytorch/pytorch/issues/104594.

The reason for the exporter behavior in the originally posted issue is as follows:
ONNX models track shape-related computations that were done in PyTorch with Python
numbers as tensor computations. This is the only way for ONNX to track them properly,
since ONNX only has the tensor type; otherwise the computation result would be tracked
statically as a constant, and the model would not work for another input that differs in shape.

For type promotion logic, scalars should be treated differently from tensors.
The exporter mistook the shape-related scalars for tensors in this case and incorrectly promoted them.

This PR fixes the behavior and relaxes the criteria of scalar recognition. For floating point,
previously only a value from a model initializer with dtype torch.double and rank 0 was
treated as a scalar. Now this is relaxed to any intermediate value, as well as to dtype torch.float.
The previous assumption was that a Python number is traced with dtype torch.double, which
no longer appears to be valid.

NOTE that this might introduce a regression where a REAL 0-rank tensor is now recognized as a
scalar. The downside is that the model will drop in accuracy for these cases, as certain
computations will happen in lower-precision data types.
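
A minimal repro sketch (shapes assumed) of the promotion pattern this fixes:

```python
import torch

class Scale(torch.nn.Module):
    def forward(self, x):
        # `0.5` is a Python float scalar; with an fp16 tensor input it must
        # be treated as a scalar, not promoted as an fp32 tensor, on export.
        return x * 0.5

torch.onnx.export(Scale(), (torch.randn(4, dtype=torch.float16),), "scale.onnx")
```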

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113404
Approved by: https://github.com/justinchuby
2023-11-15 20:32:55 +00:00
PyTorch MergeBot
0fd856ca22 Revert "[ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)"
This reverts commit 39ca5a3226.

Reverted https://github.com/pytorch/pytorch/pull/113404 on behalf of https://github.com/jeanschmidt due to sorry it is breaking CI jobs on main ([comment](https://github.com/pytorch/pytorch/pull/113404#issuecomment-1808314277))
2023-11-13 14:56:35 +00:00
BowenBao
39ca5a3226 [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
Fixes https://github.com/pytorch/pytorch/issues/104594.

The reason for the exporter behavior in the originally posted issue is as follows:
ONNX models track shape-related computations that were done in PyTorch with Python
numbers as tensor computations. This is the only way for ONNX to track them properly,
since ONNX only has the tensor type; otherwise the computation result would be tracked
statically as a constant, and the model would not work for another input that differs in shape.

For type promotion logic, scalars should be treated differently from tensors.
The exporter mistook the shape-related scalars for tensors in this case and incorrectly promoted them.

This PR fixes the behavior and relaxes the criteria of scalar recognition. For floating point,
previously only a value from a model initializer with dtype torch.double and rank 0 was
treated as a scalar. Now this is relaxed to any intermediate value, as well as to dtype torch.float.
The previous assumption was that a Python number is traced with dtype torch.double, which
no longer appears to be valid.

NOTE that this might introduce a regression where a REAL 0-rank tensor is now recognized as a
scalar. The downside is that the model will drop in accuracy for these cases, as certain
computations will happen in lower-precision data types.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113404
Approved by: https://github.com/justinchuby
2023-11-11 15:08:07 +00:00
PyTorch MergeBot
3cb6cf1e8a Revert "[ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)"
This reverts commit f2cd68102a.

Reverted https://github.com/pytorch/pytorch/pull/113404 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing in trunk f2cd68102a, may be a landrace or flaky of sort ([comment](https://github.com/pytorch/pytorch/pull/113404#issuecomment-1806613497))
2023-11-11 02:09:22 +00:00
AllenTiTaiWang
e8e3afb784 [ONNX] Refactor MaxPool to support dynamic inputs (#113318)
In https://github.com/pytorch/pytorch/pull/106270, the solution managed to solve the [`ceil_mode` corner issue](https://github.com/onnx/onnx/issues/5711) with the usage of `get_pool_ceil_padding`. However, padding the ceil on the converter side only works when the input shapes are already known, so a regression occurred for dynamic inputs.

This PR (1) refactors the code with the torchlib implementation, (2) adds dynamic-shapes tests, and (3) disables the corner-case tests with comments saying to re-enable them when the [real fix from ONNX](https://github.com/onnx/onnx/pull/5741) is merged.
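
A sketch (shapes and axis names assumed) of the dynamic-input export that regressed under the old ceil-padding workaround:

```python
import torch

pool = torch.nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)
x = torch.randn(1, 1, 10, 10)

# With dynamic spatial dims, the converter cannot pre-compute ceil padding,
# which is why the old workaround only held for static shapes.
torch.onnx.export(
    pool, (x,), "pool.onnx",
    input_names=["x"],
    dynamic_axes={"x": {2: "height", 3: "width"}},
)
```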
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113318
Approved by: https://github.com/thiagocrepaldi
2023-11-10 23:23:49 +00:00
BowenBao
f2cd68102a [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
Fixes https://github.com/pytorch/pytorch/issues/104594.

The reason for the exporter behavior in the originally posted issue is as follows:
ONNX models track shape-related computations that were done in PyTorch with Python
numbers as tensor computations. This is the only way for ONNX to track them properly,
since ONNX only has the tensor type; otherwise the computation result would be tracked
statically as a constant, and the model would not work for another input that differs in shape.

For type promotion logic, scalars should be treated differently from tensors.
The exporter mistook the shape-related scalars for tensors in this case and incorrectly promoted them.

This PR fixes the behavior and relaxes the criteria of scalar recognition. For floating point,
previously only a value from a model initializer with dtype torch.double and rank 0 was
treated as a scalar. Now this is relaxed to any intermediate value, as well as to dtype torch.float.
The previous assumption was that a Python number is traced with dtype torch.double, which
no longer appears to be valid.

NOTE that this might introduce a regression where a REAL 0-rank tensor is now recognized as a
scalar. The downside is that the model will drop in accuracy for these cases, as certain
computations will happen in lower-precision data types.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113404
Approved by: https://github.com/justinchuby
2023-11-10 22:31:25 +00:00
Thiago Crepaldi
9ab6ac5bc1 [ONNX] Fix aten::new_zeros due to TorchScript behavior change on Pytorch 2.1 Fix #110935 (#110956)
Fixes #110597

Summary:

* Generic code: `torch._C.Value.node().mustBeNone()` is encapsulated into the high-level API `JitScalarType.from_value`; `_is_none` was also extended to allow either `None` or `torch._C.Value.node.mustBeNone()`, so users don't have to manually call into the TorchScript API when implementing operators
* Specific to `new_zeros` (and `*_like` and `new_*` ops): when checking `dtype`, we must always use `_is_none`, which applies the check proposed by #110935
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110956
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2023-10-16 18:28:20 +00:00
veritas-Qiu
a3e9b80082 Fix torch.diagonal for torch.onnx.export when dim1<0 or dim2<0 (#111130)
In many cases, torch.diagonal is called with negative dims (e.g., dim1=-2, dim2=-1), and ONNX export previously always failed in these cases. This PR fixes that bug.
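
A minimal repro sketch of the negative-dim call that previously failed to export:

```python
import torch

class Diag(torch.nn.Module):
    def forward(self, x):
        # Negative dims, e.g. the last two dimensions, are the common
        # calling convention that previously broke export.
        return torch.diagonal(x, dim1=-2, dim2=-1)

torch.onnx.export(Diag(), (torch.randn(2, 3, 3),), "diag.onnx")
```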
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111130
Approved by: https://github.com/thiagocrepaldi
2023-10-13 22:05:53 +00:00