Commit Graph

1147 Commits

Author SHA1 Message Date
Justin Chu
f3aba45049 [ONNX] Create onnxscript-torchlib specific xfails/skips for fx tests (#110536)
Creates xfail_onnxscript/skip_onnxscript markers so that it is clear which ops torchlib still needs to support.
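
A minimal, hypothetical sketch of what such markers could look like (the real helpers live in PyTorch's fx op-consistency test suite, and their exact signatures may differ):

```python
import unittest

def xfail_onnxscript(op: str, reason: str):
    """Mark a test as an expected failure until torchlib supports the op."""
    del op, reason  # recorded at the call site for readability
    return unittest.expectedFailure

def skip_onnxscript(op: str, reason: str):
    """Skip a test outright because torchlib does not yet support the op."""
    return unittest.skip(f"torchlib is missing {op}: {reason}")
```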
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110536
Approved by: https://github.com/BowenBao
2023-10-05 00:39:05 +00:00
CaoE
9399e0b1ff add fp16 support for gemm (#99498)
### Testing

Native matmul vs. mkldnn matmul on SPR (with avx512_fp16 support)

Single core:

Input | Naïve impl / ms | oneDNN / ms | Speed up
-- | -- | -- | --
M: 128, N: 128, K: 128, trans_a: False, trans_b: False | 2010.387 | 64.700 | 31.072
M: 128, N: 256, K: 128, trans_a: False, trans_b: False | 4027.116 | 107.780 | 37.364
M: 8192, N: 768, K: 768, trans_a: False, trans_b: False | 28685868.488 | 90663.008 | 316.401

56 cores:
Input | Naïve impl / ms | oneDNN / ms | Speed up
-- | -- | -- | --
M: 128, N: 128, K: 128, trans_a: False, trans_b: False | 5.091 | 0.24 | 211.30
M: 128, N: 128, K: 128, trans_a: False, trans_b: True | 5.224 | 0.23 | 220.09
M: 128, N: 256, K: 128, trans_a: False, trans_b: False | 10.006 | 0.30 | 330.31
M: 8192, N: 768, K: 768, trans_a: False, trans_b: False | 29435.372 | 1.770 | 1662.80
M: 8192, N: 768, K: 768, trans_a: False, trans_b: True | 31464.961 | 1.728 | 18204.76
M: 8192, N: 768, K: 3072, trans_a: False, trans_b: False | 115035.849 | 7.990 | 14396.90
M: 8192, N: 768, K: 3072, trans_a: False, trans_b: True | 122981.023 | 7.725 | 15918.34
Batch: 768, M: 128, N: 64, K: 128 | 2032.523 | 0.705 | 2882.23
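
A minimal timing sketch of the comparison above (sizes from the first row; whether the oneDNN fast path is taken depends on the build and CPU ISA, e.g. avx512_fp16 on SPR):

```python
import torch
from torch.utils import benchmark

a = torch.randn(128, 128, dtype=torch.half)
b = torch.randn(128, 128, dtype=torch.half)

# Measures fp16 matmul on CPU; compare against a float32 baseline if desired.
timer = benchmark.Timer(stmt="torch.mm(a, b)", globals={"torch": torch, "a": a, "b": b})
print(timer.timeit(100))
```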

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99498
Approved by: https://github.com/jgong5, https://github.com/malfet
2023-09-28 01:03:50 +00:00
PyTorch MergeBot
a5364b12bb Revert "[ONNX] Remove the deprecated function _export (#109763)"
This reverts commit d7c05bb2e8.

Reverted https://github.com/pytorch/pytorch/pull/109763 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/109763#issuecomment-1734201053))
2023-09-25 17:47:21 +00:00
Kunal Vaishnavi
c0d746c90e [ONNX] Relax getting module attributes in ONNX export (#109759)
### Description

This PR fixes a bug with getting module attributes during `torch.onnx.export` when `export_modules_as_functions` is used. With this fix, we can compare the LLaMA-2 models produced by the TorchScript exporter and the [Dynamo exporter](https://github.com/pytorch/pytorch/issues/104903).

### Context
When exporting LLaMA-2 from Hugging Face with `export_modules_as_functions`, the `Embedding` object does not have the `freeze` attribute.
```
  File "/home/kvaishnavi/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 662, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1558, in _call_impl
    args_result = hook(self, args)
  File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1394, in _track_module_attributes_forward_pre_hook
    setattr(module, attr_name, _get_module_attributes(module))
  File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1474, in _get_module_attributes
    return {k: getattr(module, k) for k in annotations}
  File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1474, in <dictcomp>
    return {k: getattr(module, k) for k in annotations}
  File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1696, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'Embedding' object has no attribute 'freeze'
```
To get around this issue, we can skip adding a key to the dictionary when the object does not have the corresponding attribute.
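
The fix described above amounts to guarding the dict comprehension, roughly like this (a sketch of the idea, not the exact patch):

```python
import typing
import torch

def _get_module_attributes(module: torch.nn.Module) -> dict:
    annotations = typing.get_type_hints(type(module))
    # Skip annotated attributes the instance does not actually have
    # (e.g. `Embedding.freeze`) instead of raising AttributeError.
    return {k: getattr(module, k) for k in annotations if hasattr(module, k)}
```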
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109759
Approved by: https://github.com/BowenBao
2023-09-23 02:47:51 +00:00
wangxiyuan
d7c05bb2e8 [ONNX] Remove the deprecated function _export (#109763)
The `_export` API was deprecated and should be removed after 2.0.

See: https://github.com/pytorch/pytorch/pull/107208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109763
Approved by: https://github.com/thiagocrepaldi
2023-09-22 07:14:13 +00:00
Gustav Larsson
8dcdc74915 torch->onnx export support: quantized::linear_relu (#109755)
- Adds support for quantized::linear_relu
  - Adds a weight-unpacking pattern matcher
  - Adds export support for opsets 10 and 13
- Adds a QAT test modeled after the conv2d+relu fusion test (a usage sketch follows below)
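
A hedged sketch of an export that should now hit `quantized::linear_relu` (eager-mode quantization; the module and file names are illustrative):

```python
import torch
import torch.ao.quantization as tq

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.fc = torch.nn.Linear(4, 4)
        self.relu = torch.nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.fc(self.quant(x))))

m = M().eval()
m.qconfig = tq.get_default_qconfig("fbgemm")
tq.fuse_modules(m, [["fc", "relu"]], inplace=True)  # Linear+ReLU -> LinearReLU
tq.prepare(m, inplace=True)
m(torch.randn(2, 4))                 # calibration pass
tq.convert(m, inplace=True)          # lowers to quantized::linear_relu
torch.onnx.export(m, (torch.randn(2, 4),), "linear_relu_quant.onnx", opset_version=13)
```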

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109755
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2023-09-21 23:24:20 +00:00
wangxiyuan
f9947830bb [ONNX] Remove deprecated functions in symbolic_helper (#109681)
These three functions in symbolic_helper are deprecated and should be removed after PyTorch 2.0.

The cleanup will be split into several patches to ensure safety. See: https://github.com/pytorch/pytorch/pull/107208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109681
Approved by: https://github.com/thiagocrepaldi
2023-09-20 19:31:39 +00:00
PyTorch MergeBot
cd31c170c9 Revert "[ONNX] Remove deprecated functions (#107208)"
This reverts commit 263ca7d69b.

Reverted https://github.com/pytorch/pytorch/pull/107208 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/107208#issuecomment-1726183104))
2023-09-19 17:26:48 +00:00
CaoE
54c28c564f add Half support for BatchNorm on CPU (#102070)
Fixes #106543

### Testing

Single core:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.7116 | 0.1427 | 0.1744 | 0.2638 | 0.2002 | 0.2556
(1, 32, 100, 100) | 0.8579 | 0.1725 | 0.2077 | 0.3023 | 0.2399 | 0.2995
(32, 16, 200, 200) | 57.3466 | 12.2179 | 13.1320 | 45.9524 | 24.1526 | 24.9882

28 cores:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.2571 | 0.0713 | 0.0846 | 0.1140 | 0.0883 |  0.1043
(1, 32, 100, 100) | 0.1077 | 0.0510 | 0.0548 | 0.0700 | 0.0645 | 0.0713
(32, 16, 200, 200) | 5.5060 | 1.4195 | 1.4663 | 6.773 | 3.0886 | 3.1343
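
A minimal repro of the new support (shape from the first row above; before this change, fp16 BatchNorm was unsupported on CPU):

```python
import torch

bn = torch.nn.BatchNorm2d(4).to(torch.float16)
x = torch.randn(1, 4, 256, 256, dtype=torch.float16, requires_grad=True)

y = bn(x)                      # fp16 forward on CPU
y.sum().backward()             # fp16 backward on CPU
print(y.dtype, x.grad.dtype)   # torch.float16 torch.float16
```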

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102070
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki, https://github.com/mingfeima
2023-09-19 10:43:33 +00:00
Aaron Bockover
0e2b22c451 [ONNX] switch from onnxscript-preview to onnxscript (#109139)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109139
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2023-09-18 22:24:47 +00:00
CYuxian
504dceacb1 [ONNX] Fix indexing issue of meshgrid op (#109350)
The symbolic should unpack `tensor_list` before swapping the elements for indexing `'xy'`.
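
A small sketch of the behavior being fixed (hypothetical module and file names; with `indexing='xy'` the first two inputs are swapped, so the symbolic must unpack the tensor list before swapping):

```python
import torch

class Meshgrid(torch.nn.Module):
    def forward(self, x, y):
        return torch.meshgrid(x, y, indexing="xy")

x, y = torch.arange(3.0), torch.arange(2.0)
gx, gy = Meshgrid()(x, y)
assert gx.shape == (2, 3)  # 'xy' outputs are the transpose of 'ij'

torch.onnx.export(Meshgrid(), (x, y), "meshgrid_xy.onnx", opset_version=13)
```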

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109350
Approved by: https://github.com/thiagocrepaldi
2023-09-15 19:49:43 +00:00
wangxiyuan
263ca7d69b [ONNX] Remove deprecated functions (#107208)
The usage of some functions is deprecated; this PR drops them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107208
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi
2023-09-14 19:09:56 +00:00
PyTorch MergeBot
b226373d16 Revert "add Half support for BatchNorm on CPU (#102070)"
This reverts commit b6a1d3fb97.

Reverted https://github.com/pytorch/pytorch/pull/102070 on behalf of https://github.com/clee2000 due to I'm very sorry but it looks like #106543 was not fixed, I still see it failing on main b6a1d3fb97 https://github.com/pytorch/pytorch/actions/runs/6185704949/job/16793975677 ([comment](https://github.com/pytorch/pytorch/pull/102070#issuecomment-1719747065))
2023-09-14 16:13:34 +00:00
CaoE
b6a1d3fb97 add Half support for BatchNorm on CPU (#102070)
Fixes #106543

### Testing

Single core:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.7116 | 0.1427 | 0.1744 | 0.2638 | 0.2002 | 0.2556
(1, 32, 100, 100) | 0.8579 | 0.1725 | 0.2077 | 0.3023 | 0.2399 | 0.2995
(32, 16, 200, 200) | 57.3466 | 12.2179 | 13.1320 | 45.9524 | 24.1526 | 24.9882

28 cores:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.2571 | 0.0713 | 0.0846 | 0.1140 | 0.0883 |  0.1043
(1, 32, 100, 100) | 0.1077 | 0.0510 | 0.0548 | 0.0700 | 0.0645 | 0.0713
(32, 16, 200, 200) | 5.5060 | 1.4195 | 1.4663 | 6.773 | 3.0886 | 3.1343

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102070
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki
2023-09-14 12:23:59 +00:00
PyTorch MergeBot
04a765f95d Revert "add Half support for BatchNorm on CPU (#102070)"
This reverts commit 6065e7a97c.

Reverted https://github.com/pytorch/pytorch/pull/102070 on behalf of https://github.com/clee2000 due to sorry it looks like this is causing an unexpected success for `test_jit_fuser_te.py::TestNNCOpInfoCPU::test_nnc_correctness_nn_functional_batch_norm_cpu_float16` 6065e7a97c https://github.com/pytorch/pytorch/actions/runs/6178069462/job/16770849782 ([comment](https://github.com/pytorch/pytorch/pull/102070#issuecomment-1718402208))
2023-09-13 22:38:42 +00:00
CaoE
6065e7a97c add Half support for BatchNorm on CPU (#102070)
Fixes #106543

### Testing

Single core:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.7116 | 0.1427 | 0.1744 | 0.2638 | 0.2002 | 0.2556
(1, 32, 100, 100) | 0.8579 | 0.1725 | 0.2077 | 0.3023 | 0.2399 | 0.2995
(32, 16, 200, 200) | 57.3466 | 12.2179 | 13.1320 | 45.9524 | 24.1526 | 24.9882

28 cores:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.2571 | 0.0713 | 0.0846 | 0.1140 | 0.0883 |  0.1043
(1, 32, 100, 100) | 0.1077 | 0.0510 | 0.0548 | 0.0700 | 0.0645 | 0.0713
(32, 16, 200, 200) | 5.5060 | 1.4195 | 1.4663 | 6.773 | 3.0886 | 3.1343

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102070
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki
2023-09-13 17:30:16 +00:00
Aaron Bockover
bd1229477d [ONNX] Add initial support for FP8 ONNX export (#107962)
This PR resurrects @tcherckez-nvidia's #106379 with changes to resolve conflicts against newer `main` and defines our own constants for the new ONNX types to [avoid breaking Meta's internal usage of an old ONNX](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675189340).

- `::torch::onnx::TensorProto_DataType_FLOAT8E4M3FN=17`
- `::torch::onnx::TensorProto_DataType_FLOAT8E5M2=19`
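
A hedged usage sketch (assumes a torch build with the fp8 dtypes; the export targets the new ONNX tensor types at opset 19):

```python
import torch

class CastToFP8(torch.nn.Module):
    def forward(self, x):
        # torch.float8_e4m3fn maps to FLOAT8E4M3FN; torch.float8_e5m2 to FLOAT8E5M2.
        return x.to(torch.float8_e4m3fn)

torch.onnx.export(CastToFP8(), (torch.randn(2, 2),), "fp8_cast.onnx", opset_version=19)
```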
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107962
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms
2023-09-08 20:40:39 +00:00
Kurt Mohler
3f88e3105f Reland: Remove remaining global set_default_dtype calls from tests (#108088)
Fixes #68972

Relands #107246

To avoid causing Meta-internal CI failures, this PR avoids always asserting that the default dtype is float in the `TestCase.setUp/tearDown` methods. Instead, the assert is only done if `TestCase._default_dtype_check_enabled == True`. `_default_dtype_check_enabled` is set to True in the `if __name__ == "__main__":` blocks of all the relevant test files that required changes for this issue.
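
A simplified sketch of the opt-in pattern described above (the real logic lives in torch.testing._internal.common_utils):

```python
import unittest
import torch

class TestCase(unittest.TestCase):
    _default_dtype_check_enabled = False

    def setUp(self):
        if self._default_dtype_check_enabled:
            assert torch.get_default_dtype() == torch.float

    def tearDown(self):
        if self._default_dtype_check_enabled:
            assert torch.get_default_dtype() == torch.float

if __name__ == "__main__":
    TestCase._default_dtype_check_enabled = True
    unittest.main()
```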

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108088
Approved by: https://github.com/ezyang
2023-09-07 03:04:34 +00:00
Jirka Borovec
9178deedff removing some redundant str splits (#106089)
Drop some redundant string splits; no functional changes, just cleaning up the codebase.
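
For illustration, a hypothetical instance of this kind of cleanup:

```python
name = "fc1.weight"
# Before: split the whole string just to take the first piece.
prefix = name.split(".")[0]
# After: partition stops at the first separator and reads more directly.
prefix = name.partition(".")[0]
```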

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106089
Approved by: https://github.com/albanD, https://github.com/malfet
2023-09-01 00:22:58 +00:00
AllenTiTaiWang
d72b990bab [ONNX] Move large scale models without non-persistent buffers to runtime test (#108084)
Fixes https://github.com/pytorch/pytorch/issues/107715

Update models with their configs to save CI running time and memory. Move some models that don't need non-persistent buffers to the runtime test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108084
Approved by: https://github.com/thiagocrepaldi
2023-08-31 06:05:19 +00:00
Aaron Bockover
b0d109f29f [ONNX] Bump onnx submodule to 1.14.1; ONNX Runtime 1.16 (#106984)
Bump dependencies:

- ort-nightly 1.16.0.dev20230824005
- onnx 1.14.1rc2
- onnxscript 0.1.0.dev20230825
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106984
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2023-08-28 20:11:29 +00:00
Aaron Bockover
15e5bd5103 [ONNX] Support torch.compile(backend="onnxrt", options=OrtBackendOptions(...)) (#107973)
This reworks the DORT backend factory function to support the options kwarg of torch.compile, and defines a concrete OrtBackendOptions type that can be used to influence the backend.

Caching is also implemented in order to reuse backends with equal options.

Wrapping the backend in auto_autograd also becomes an option, which allows `OrtBackend` to always be returned as the callable for torch.compile; wrapping happens internally if opted into (True by default).

Lastly, expose options for configuring the preferred execution providers (attempted first), whether to attempt to infer an ORT EP from a torch device found in the graph or inputs, and finally the default/fallback EPs.

### Demo

The following demo runs `Gelu` through `torch.compile(backend="onnxrt")`, passing backend options both in dictionary form and in strongly typed form. It additionally exports the model through both the ONNX TorchScript exporter and the new TorchDynamo exporter.

```python
import math

import onnx.inliner
import onnxruntime
import torch
import torch.onnx

torch.manual_seed(0)

class Gelu(torch.nn.Module):
    def forward(self, x):
        return 0.5 * x * (torch.erf(math.sqrt(0.5) * x) + 1.0)

@torch.compile(
    backend="onnxrt",
    options={
        "preferred_execution_providers": [
            "NotARealEP",
            "CPUExecutionProvider",
        ],
        "export_options": torch.onnx.ExportOptions(dynamic_shapes=True),
    },
)
def dort_gelu(x):
    return Gelu()(x)

ort_session_options = onnxruntime.SessionOptions()
ort_session_options.log_severity_level = 0

dort_gelu2 = torch.compile(
    Gelu(),
    backend="onnxrt",
    options=torch.onnx._OrtBackendOptions(
        preferred_execution_providers=[
            "NotARealEP",
            "CPUExecutionProvider",
        ],
        export_options=torch.onnx.ExportOptions(dynamic_shapes=True),
        ort_session_options=ort_session_options,
    ),
)

x = torch.randn(10)

torch.onnx.export(Gelu(), (x,), "gelu_ts.onnx")

export_output = torch.onnx.dynamo_export(Gelu(), x)
export_output.save("gelu_dynamo.onnx")
inlined_model = onnx.inliner.inline_local_functions(export_output.model_proto)
onnx.save_model(inlined_model, "gelu_dynamo_inlined.onnx")

print("Torch Eager:")
print(Gelu()(x))

print("DORT:")
print(dort_gelu(x))
print(dort_gelu2(x))
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107973
Approved by: https://github.com/BowenBao
2023-08-26 18:20:18 +00:00
CYuxian
35f4bb9a25 [ONNX] Return input itself for non-fp inputs and support decimals for aten::round op (#107920)
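A hedged sketch of the two behaviors (hypothetical module and file names):

```python
import torch

class Round(torch.nn.Module):
    def forward(self, x):
        return torch.round(x, decimals=2)

# Floating-point inputs round to the requested number of decimals;
# for non-fp inputs the exported graph now returns the input unchanged.
torch.onnx.export(Round(), (torch.randn(3),), "round.onnx", opset_version=11)
```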
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107920
Approved by: https://github.com/justinchuby
2023-08-26 05:54:52 +00:00
CaoE
3267996372 add channel last 3d support for maxpool3d on CPU (#97775)
### Testing
Single socket (28 cores):

shape | fp32 forward / ms | bf16 forward / ms | fp32 backward / ms | bf16 backward / ms
-- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: contig | 3.959584 | 5.493402 | 0.557232 | 0.568485
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: CL | 0.815511 | 1.351261 | 5.710506 | 10.57506
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: contig | 10.63426 | 15.28637 | 2.67656 | 1.71365
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: CL | 2.63570 | 2.05532 | 2.55452 | 2.33923
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: contig | 0.375469 | 0.479748 | 0.066364 | 0.065155
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: CL3d | 0.112197 | 0.112326 | 0.111697 | 0.145364

Single core:

shape | fp32 forward / ms | bf16 forward / ms | fp32 backward / ms | bf16 backward / ms
-- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: contig | 92.16582 | 128.6513 | 6.684325 | 12.21541
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: CL | 10.14318 | 29.80297 | 7.350142 | 11.25323
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: contig | 238.55453 | 331.89967 | 19.694657 | 32.78853
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: CL | 30.17079 | 32.75628 | 22.44543 | 30.17796
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: contig | 7.474389 | 9.937217 | 0.236015 | 0.434229
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: CL3d | 2.318954 | 2.469444 | 0.262125 | 0.401361
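
A minimal sketch exercising the new path (shape from the last rows above):

```python
import torch

x = torch.randn(4, 19, 10, 16, 16).to(memory_format=torch.channels_last_3d)
pool = torch.nn.MaxPool3d(kernel_size=3, stride=1)

y = pool(x)
# With this change, the channels-last-3d layout should be preserved on CPU.
print(y.is_contiguous(memory_format=torch.channels_last_3d))
```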

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97775
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki
2023-08-26 00:21:27 +00:00
AllenTiTaiWang
ee171465ad [ONNX] Support constant tensors in FakeMode exporting (#107836)
Fixes https://github.com/pytorch/pytorch/issues/107475

- Constant tensors were wrongly recognized as weights and buffers, and were then detached from their default values during `to_model_proto`. This PR fixes the bug and picks the Bloom CI test back up successfully. NOTE: non-persistent buffers and weights are a different situation and are not fixed by this PR.
- Reduce transformers model sizes by modifying their config parameters to speed up CI tests. (Unrelated to this PR's title.)

Corresponding change with https://github.com/microsoft/onnxscript/pull/1023
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107836
Approved by: https://github.com/BowenBao, https://github.com/justinchuby
2023-08-26 00:06:49 +00:00
PyTorch MergeBot
161ea463e6 Revert "Remove remaining global set_default_dtype calls from tests (#107246)"
This reverts commit aa8ea1d787.

Reverted https://github.com/pytorch/pytorch/pull/107246 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/107246#issuecomment-1693838522))
2023-08-25 19:34:55 +00:00
BowenBao
00e9735ee3 [ONNX] Enable 'ExportOutput.save' for models larger than 2GB (#107904)
Previously, saving failed during serialization, even though onnxscript graph_building managed to return a ModelProto larger than 2GB.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107904
Approved by: https://github.com/abock
2023-08-25 03:08:38 +00:00
Kurt Mohler
aa8ea1d787 Remove remaining global set_default_dtype calls from tests (#107246)
Fixes #68972

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107246
Approved by: https://github.com/ezyang
2023-08-24 16:10:48 +00:00
Justin Chu
387556318e [ONNX] Cap opset version at 17 for torch.onnx.export (#107829)
Cap the opset version at 17 for torch.onnx.export and suggest that users use the dynamo exporter. Warn instead of failing hard, because we should still allow users to create custom symbolic functions for opset > 17.
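
A sketch of what users should see now (assuming the version check warns and proceeds rather than raising):

```python
import torch

model, x = torch.nn.Linear(2, 2), torch.randn(1, 2)

# opset_version > 17 now emits a warning suggesting the dynamo exporter,
# but still runs so custom symbolic functions for newer opsets keep working.
torch.onnx.export(model, (x,), "linear.onnx", opset_version=18)
```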

Also updates the default opset version by running `tools/onnx/update_default_opset_version.py`.

Fixes #107801 Fixes #107446
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107829
Approved by: https://github.com/BowenBao
2023-08-24 07:21:10 +00:00
BowenBao
5b632bf7a6 [ONNX] More debug logging from fx to onnx (#107654)
Summary:
- Log fx graph name for 'fx-graph-to-onnx' diagnostic.
- Log fx graph and onnx graph under DEBUG verbosity level for 'fx-graph-to-onnx' diagnostic.
- Adjust unittest to run with diagnostics verbosity level logging.DEBUG.
- Sarif logs will be saved for unittest when `TORCH_LOGS="onnx_diagnostics"` is set.

<img width="640" alt="image" src="https://github.com/pytorch/pytorch/assets/9376104/a5718530-3594-46fb-85a2-b8bcc8ba01c7">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107654
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms
ghstack dependencies: #107408, #107409, #107653
2023-08-23 18:05:15 +00:00
BowenBao
c3c1b68ae8 [ONNX] Enclose package info for modules exported as local functions (#107409)
Encloses the source package info of modules that are exported as ONNX local functions in the exported ONNX model. GPT2 model example:

<img width="350" alt="image" src="https://github.com/pytorch/pytorch/assets/9376104/5e361bdd-ca24-45e7-a9ba-191c35acf3bb">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107409
Approved by: https://github.com/justinchuby
ghstack dependencies: #107408
2023-08-23 18:05:13 +00:00
BowenBao
7a8db57e37 [ONNX] Re-purpose 'name' field of GraphProto (#107408)
Previously, the top-level GraphProto was hardcoded with the name "torch_jit", and the subgraphs with "torch_jit_{count}". This offers no insight into the graph; it merely encodes the graph producer as jit (torchscript), which is no longer accurate now that the graph can also be produced from dynamo.

As a naive first step, this PR re-purposes the names to "main_graph" and "sub_graph_{count}" respectively. More delicate processing could name the subgraphs with respect to their parent node or module; this can be done as a follow-up.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107408
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms
2023-08-23 18:05:11 +00:00
AllenTiTaiWang
400c4de53b [ONNX] Add huggingface models into CI tests (#107247)
1. Add a list of HF models to CI tests. The PR intends to build them from Config, but some of them are not supported with Config. NOTE: Loading from a pre-trained model could potentially hit the [uint8/bool conflict](https://github.com/huggingface/transformers/issues/21013) when a newer version of transformers is used.
    - Dolly has torch.fx.Node in an OnnxFunction attribute, which is currently not supported.
    - Falcon and MPT contain user code unsupported by Dynamo.
2. Only update GPT2 exporting with real tensors to Config, as FakeMode raises unequal-input errors between PyTorch and ORT. The reason is that [non-persistent buffers are not supported](https://github.com/pytorch/pytorch/issues/107211)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107247
Approved by: https://github.com/wschin, https://github.com/BowenBao
2023-08-23 07:28:26 +00:00
Aaron Gokaslan
660e8060ad [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so enabling it now keeps things that way. :)
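
For reference, the pattern RUF017 flags, with a linear alternative:

```python
import itertools

lists = [[1, 2], [3], [4, 5]]

flat = sum(lists, [])  # flagged by RUF017: quadratic, copies on every addition
flat = list(itertools.chain.from_iterable(lists))  # linear replacement
```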

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
PyTorch MergeBot
d59a6864fb Revert "[BE]: Update ruff to 0.285 (#107519)"
This reverts commit 88ab3e4322.

Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please help them get unblocked? It seems like one of the strings was prob accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480))
2023-08-22 19:53:32 +00:00
BowenBao
f9f88f2d31 [ONNX] Add unittest for exporting embedding_bag (#105862)
Issue list:
* Unsupported FX nodes: {'call_function': ['aten.embedding_renorm.default', ~~'aten._embedding_bag_forward_only.default'~~]}.
* aten._embedding_bag.default is not captured by the test; hence this test does not reflect the pattern seen in the model from onnxbench. Update: needs validation again, unsure if this is still the case.
* `padding_idx` is always emitted for `aten._embedding_bag` and `aten._embedding_bag_forward_only`. This overload is unsupported by Torchlib.
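
A minimal version of the pattern under test (hypothetical; the real unittest lives in the fx exporter test suite):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.bag = torch.nn.EmbeddingBag(10, 3, mode="sum")

    def forward(self, input, offsets):
        return self.bag(input, offsets)

input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])
# Under the dynamo exporter this lowers to an aten embedding_bag variant.
torch.onnx.dynamo_export(M(), input, offsets)
```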
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105862
Approved by: https://github.com/justinchuby
2023-08-22 03:52:38 +00:00
AllenTiTaiWang
a4eae43315 [ONNX] Update xfail reasons in fx runtime tests (#107257)
1. Update xfail reasons in the fx runtime tests.
2. Enable bloom-560m in the runtime test. However, it's blocked by the unsupported constant-tensor case. The previous error occurred because, when the model loads with external data, it surpasses 2GB and couldn't be inlined. The fix is to inline the model itself and then replace the original one; pointing ORT to the path allows it to load the model with external data at runtime.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107257
Approved by: https://github.com/justinchuby
2023-08-21 19:21:56 +00:00
Aaron Gokaslan
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so enabling it now keeps things that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
Wei-Sheng Chin
22f5889753 [Dynamo, ONNX] Replace onnxrt backend with new backend from ONNXRuntime team (#106929)
In https://github.com/pytorch/pytorch/pull/106589, a new ONNXRuntime-based Dynamo backend was introduced. As mentioned in that PR, we hope to replace the legacy `onnxrt` with that new backend. This PR removes the legacy `onnxrt` and registers the new backend under the same name.
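
Usage is unchanged for callers (assuming onnxruntime is installed):

```python
import torch

@torch.compile(backend="onnxrt")
def f(x):
    return torch.relu(x) + 1.0

print(f(torch.randn(4)))
```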

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106929
Approved by: https://github.com/thiagocrepaldi, https://github.com/BowenBao, https://github.com/abock, https://github.com/msaroufim, https://github.com/jansel
2023-08-15 22:50:46 +00:00
BowenBao
d8a71a6633 [ONNX] Set 'Generic[Diagnostic]' as base class for 'DiagnosticContext' (#107165)
Allows overriding the `Diagnostic` type for `DiagnosticContext` and enables type checking.
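
The shape of the change, as a generic-typing sketch with simplified names:

```python
from typing import Generic, List, TypeVar

class Diagnostic:
    ...

_Diagnostic = TypeVar("_Diagnostic", bound=Diagnostic)

class DiagnosticContext(Generic[_Diagnostic]):
    def __init__(self) -> None:
        self.diagnostics: List[_Diagnostic] = []

    def log(self, diagnostic: _Diagnostic) -> None:
        # Subclasses can bind _Diagnostic to their own Diagnostic subtype
        # and get type checking on everything they log.
        self.diagnostics.append(diagnostic)
```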
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107165
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms
ghstack dependencies: #106741, #107158
2023-08-15 21:01:17 +00:00
BowenBao
e9cb7179cb [ONNX] Fix diagnostic log and add unittest (#107158)
As titled. Previously, the message was formatted but mistakenly not logged.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107158
Approved by: https://github.com/titaiwangms
ghstack dependencies: #106741
2023-08-15 17:46:15 +00:00
BowenBao
19a76290d8 [ONNX] Public diagnostic options for 'dynamo_export' (#106741)
Generate diagnostic reports to monitor the internal stages of the export process. This tool aids in unblocking model exports and debugging the exporter.

#### Settings

~~1. Choose if you want to produce a .sarif file and specify its location.~~
1. Updated: saving the .sarif file should be done via `export_output.save_sarif_log(dst)`, similar to saving the exported ONNX model via `export_output.save(model_dst)`.
2. Customize diagnostic options:
    - Set the desired verbosity for diagnostics.
    - Treat warnings as errors.
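
A hedged usage sketch following the description above (the `DiagnosticOptions` spelling and its fields are assumptions; `save_sarif_log` is the method named in this PR):

```python
import logging
import torch

model, x = torch.nn.Linear(2, 2), torch.randn(1, 2)

options = torch.onnx.ExportOptions(
    diagnostic_options=torch.onnx.DiagnosticOptions(  # hypothetical spelling
        verbosity_level=logging.INFO,
        warnings_as_errors=False,
    )
)
export_output = torch.onnx.dynamo_export(model, x, export_options=options)
export_output.save("model.onnx")
export_output.save_sarif_log("report.sarif")
```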

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106741
Approved by: https://github.com/titaiwangms, https://github.com/justinchuby, https://github.com/malfet
2023-08-15 17:46:15 +00:00
BowenBao
22095acfd7 [ONNX] Migrate to PT2 logging (#106592)
Summary
- The 'dynamo_export' diagnostics leverage the PT2 artifact logger to handle the verbosity
level of logs recorded in each SARIF log diagnostic. In addition to the SARIF log,
terminal logging is disabled by default. It can be activated by setting the environment
variable `TORCH_LOGS="onnx_diagnostics"`. When the environment variable is set, it also
fixes the logging level to `logging.DEBUG`, overriding the verbosity level specified in
the diagnostic options.
See `torch/_logging/__init__.py` for more on PT2 logging.
- Replaces 'with_additional_message' with 'Logger.log'-like APIs.
- Introduces 'LazyString', adopted from 'torch._dynamo.utils', to skip
evaluation if the message will not be logged into the diagnostic.
- Introduces 'log_source_exception' for easier exception logging.
- Introduces 'log_section' for easier markdown title logging.
- Updates all existing code to use the new API.
- Removes the 'arg_format_too_verbose' diagnostic.
- Renames legacy diagnostic classes for the TorchScript ONNX exporter to avoid
confusion.

Follow-ups
- The 'dynamo_export' diagnostic no longer captures Python stack
information at the point of diagnostic creation. This will be added back in
follow-up PRs for debug-level logging.
- There is a type mismatch due to subclassing 'Diagnostic' and 'DiagnosticContext'
for 'dynamo_export' to incorporate PT2 logging. A follow-up PR will
attempt to fix it.
- More docstrings with examples.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106592
Approved by: https://github.com/titaiwangms
2023-08-11 23:27:00 +00:00
PyTorch MergeBot
71be8f2223 Revert "Add initial support for FP8 ONNX export (#106379)"
This reverts commit 08704f96f0.

Reverted https://github.com/pytorch/pytorch/pull/106379 on behalf of https://github.com/kit1980 due to breaking multiple internal builds ([comment](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675192700))
2023-08-11 18:22:35 +00:00
Thiago Crepaldi
0b05aef8d0 Add ONNX export support for huggingface's bigscience/bloom-560m model (#106930)
Ports the fix from https://github.com/huggingface/safetensors/pull/318 into the ONNX exporter until it is merged upstream

* This adds support for loading safetensors within a FakeTensorMode, which results in creating `torch.empty((shape,), dtype=...)`. This is done through a monkeypatch for the in-progress https://github.com/huggingface/safetensors/pull/318
* Adds a test for the HF bloom model (bigscience/bloom-560m)
* This PR also fixes existing fake tensor unit tests by moving the `torch.onnx.dynamo_export` call inside the `enable_fake_mode()` context, as sketched below. Although calling `torch.onnx.dynamo_export` outside the context works for several models, the right way of using fake mode is to call the exporter within the context manager.
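
A sketch of the recommended pattern (model name from this PR; loading and exporting both happen inside the context):

```python
import torch
import transformers

with torch.onnx.enable_fake_mode():
    # Weights (including safetensors checkpoints) load as fake tensors.
    model = transformers.AutoModel.from_pretrained("bigscience/bloom-560m")
    input_ids = torch.randint(0, 1000, (1, 16))
    # Call the exporter inside the context manager: the supported pattern.
    export_output = torch.onnx.dynamo_export(model, input_ids)
```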
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106930
Approved by: https://github.com/BowenBao
2023-08-11 16:34:24 +00:00
AllenTiTaiWang
e93a90bdd5 [ONNX] Refactor perfect/nearest match criteria to allow optional inputs and disallow mismatched attributes (#106478)
Fix #106057, except **Attribute dtype mismatch. E.g., alpha of aten.add.Tensor. -> Attribute: alpha INT vs FLOAT**.

Summary of the change:
* Fill in attribute defaults when `param_schema` is applied. This relaxes matching on default attributes.
* Fill in None for optional inputs when `param_schema` is applied.
* Keep extra kwargs in attributes to make matching strict.
* Allow an input to be None when its dtype is `optional[INPUT]`

The change comes with the guarantee from torchlib that attributes are never None. For example, if `memory_format` is needed, the function should specify it like this:
```python
@torch_op("aten::clone")
def aten_clone(
    self: TTensor, memory_format: str = ""  # pylint: disable=unused-argument
) -> TTensor:
    """clone(Tensor self, *, MemoryFormat? memory_format=None) -> Tensor"""

    return op.Identity(self)
```

Prior to this PR, OpSchema matching didn't strictly guard the number of inputs/attributes in order to allow nearest matches, which introduced the bug of dispatching `aten::div.Tensor` to `aten::div.default` while disregarding the fact that `aten::div.Tensor` has an extra attribute `rounding_mode`. This PR fixes the issue with the new perfect/nearest match logic. In particular, it strictly restricts what qualifies as a nearest-match candidate.

For each ONNX variant, we check step by step:
1. Check whether the number of inputs in the function signature matches the number of provided inputs.
2. Check whether the attribute names in the function signature match the set of provided attribute names.

If either of the above two criteria is not met, the ONNX variant is neither a perfect match nor a nearest-match candidate (match_score=None).

3. Check whether the input dtypes match.
4. Check whether the attribute dtypes match.

If 3 and 4 are met, this is a perfect match; otherwise, it is still considered a nearest-match candidate with a matching score.

## Case Study

### Optional Input
The dispatcher recognizes optional inputs. However, an optional input can't be omitted; None must be provided.
```python
# Perfect match is found
inputs = (Tensor([2, 3]), None)
aten_op(X: TTensor, Y: Optional[INT64]):
    ...
```
Real Case: aten::convolution
NOTE: There are not, and will not be, optional attributes in torchlib.

### Different attributes
If an attribute is provided with a value, it must match an attribute in the function signature.
```python
# Not perfect match, nor nearest match
inputs = (Tensor([2, 3]),)
attributes = {"a":1, "b":2}
aten_op(X: TTensor, a: int):
    ...
```
Real Case: aten::div and aten::div.Tensor_mode

### Default attribute
A default attribute's value will be filled into the inputs/attributes.
```python
# Perfect match is found
inputs = (Tensor([2, 3]),)
attributes = {}
aten_op(X: TTensor, a: int = 3):
    ...
```
Real case: aten::clone

### Ignore attribute with None value
Attributes with a None value will be ignored in matching.
```python
# Perfect match
inputs = (Tensor([2, 3]),)
attributes = {"a": None}
aten_op(X: TTensor):
    ...

# Not perfect match, but eligible for nearest match
inputs = (Tensor([2, 3]),)
attributes = {"a": None}
aten_op(X: TTensor, a: int = 3):
    ...
```
Real case: aten::div and aten::div.Tensor_mode

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106478
Approved by: https://github.com/thiagocrepaldi, https://github.com/BowenBao
2023-08-10 03:08:23 +00:00
Tal Cherckez
08704f96f0 Add initial support for FP8 ONNX export (#106379)
Add support for ONNX_NAMESPACE::TensorProto_DataType_FLOAT8E5M2 and ONNX_NAMESPACE::TensorProto_DataType_FLOAT8E4M3FN to enable export of torch models that use FP8 (E4M3 and E5M2) to ONNX (opset 19)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106379
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi, https://github.com/malfet
2023-08-10 01:02:45 +00:00
Wei-Sheng Chin
99a10da295 [Dynamo] a dynamo backend based on ONNXRuntime (#106589)
This PR migrates the dynamo backend developed under ONNXRuntime into PyTorch. The ultimate goal is to replace the legacy `onnxrt` backend in dynamo with the dynamo compiler from the ONNXRuntime team.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106589
Approved by: https://github.com/abock, https://github.com/thiagocrepaldi
2023-08-10 00:09:19 +00:00
BowenBao
2a138d7f1d [ONNX] Turn on batch norm related unittest (#105769)
As titled, add tests for ops that are already supported.
Bump ORT in CI to the 1.15.1 release version.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105769
Approved by: https://github.com/titaiwangms, https://github.com/thiagocrepaldi
2023-08-08 19:51:04 +00:00
Jason Lu
bc88028e8e Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
Summary:
Original commit changeset: 81319beb97f3

Original Phabricator Diff: D47961182

Test Plan: revert to maintain backward compat with legacy ads_dper3 production package. Read details in: S357822

Reviewed By: atuljangra

Differential Revision: D48131623

@diff-train-skip-merge
(D48131623 landed internally)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106743
Approved by: https://github.com/malfet
2023-08-08 15:27:34 +00:00