pytorch/test/onnx/pytorch_test_common.py
BowenBao 436edc5ac3 [ONNX] Retire 'DynamoOptimizeExporter' (#99202)
### <samp>🤖 Generated by Copilot at f2ccd03</samp>

### Summary
🗑️📝🛠️

<!--
1.  🗑️ - This emoji represents the removal of unused or unnecessary code, such as the class `DynamoOptimizeExporter` and some imports and decorators.
2.  📝 - This emoji represents the improvement of code readability and consistency, such as replacing `skip_fx_exporters` with `xfail` and using more descriptive names for the FX exporters.
3.  🛠️ - This emoji represents the simplification and refactoring of the code, such as removing some FX exporters and reducing the number of arguments and conditions in the tests.
-->
Removed unused code and simplified test logic for FX to ONNX conversion. This involved removing `DynamoOptimizeExporter`, and replacing the custom `skip_fx_exporters` helper with `xfail` in `pytorch_test_common.py` and `test_fx_to_onnx_with_onnxruntime.py`.

> _Some FX exporters were not in use_
> _So they were removed without excuse_
> _The tests were updated_
> _With `xfail` annotated_
> _To make the ONNX logic more smooth_

### Walkthrough
*  Remove unused imports of `Mapping`, `Type`, and `exporter` from `test/onnx/pytorch_test_common.py` ([link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-26ce853445bf331686abb33393ee166726923ce36aa2a8de98ac7a2e3bc5a6d8L9-R9), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-26ce853445bf331686abb33393ee166726923ce36aa2a8de98ac7a2e3bc5a6d8L16-R16))
*  Replace custom `skip_fx_exporters` function with standard `xfail` decorator in `test/onnx/pytorch_test_common.py` and `test/onnx/test_fx_to_onnx_with_onnxruntime.py` to simplify test skipping logic and mark tests as expected to fail ([link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-26ce853445bf331686abb33393ee166726923ce36aa2a8de98ac7a2e3bc5a6d8L209-R220), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL319-R288), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL375-R343), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL619-R563), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL721-R656), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL788-R718))
*  Remove unused `DynamoOptimizeExporter` class from `torch/onnx/_internal/fx/dynamo_exporter.py` and remove references to it in `test/onnx/test_fx_to_onnx_with_onnxruntime.py` to simplify FX exporter logic and remove unused code ([link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-3ecf10bc5f6eb95a19441118cb947bd44766dc5eb9b26346f922759bb0f8c9f2L16-L85), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL37-R37), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL411-L415), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL452-L454))
*  Remove unused variables and parameters related to different FX exporters in `test/onnx/test_fx_to_onnx_with_onnxruntime.py` and use `torch.onnx.dynamo_export` directly to simplify code ([link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL50-R47), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL191-R188), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL245-R224), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL265-R237), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL279), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL296))
*  Replace `skip` decorators with `xfail` decorators in `test/onnx/test_fx_to_onnx_with_onnxruntime.py` to mark tests as expected to fail instead of skipping them unconditionally ([link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL524-R471), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL665-R600), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL748-R675), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL767-R696))
*  Replace `skip_fx_exporters` decorator with `skip_dynamic_fx_test` decorator in `test/onnx/test_fx_to_onnx_with_onnxruntime.py` to skip tests only for dynamic shapes instead of a specific FX exporter ([link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL591-R541), [link](https://github.com/pytorch/pytorch/pull/99202/files?diff=unified&w=0#diff-c8fa56eefd7f98fb4f9739d57df57f02ede77e28528133736010a6d06651ebcbL831-R761))

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99202
Approved by: https://github.com/abock
2023-04-18 01:40:47 +00:00


# Owner(s): ["module: onnx"]
from __future__ import annotations

import functools
import os
import random
import sys
import unittest
from typing import Optional

import numpy as np
import packaging.version

import torch
from torch.autograd import function
from torch.onnx._internal import diagnostics
from torch.testing._internal import common_utils

pytorch_test_dir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
sys.path.insert(-1, pytorch_test_dir)

torch.set_default_tensor_type("torch.FloatTensor")

BATCH_SIZE = 2

RNN_BATCH_SIZE = 7
RNN_SEQUENCE_LENGTH = 11
RNN_INPUT_SIZE = 5
RNN_HIDDEN_SIZE = 3
def _skipper(condition, reason):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            if condition():
                raise unittest.SkipTest(reason)
            return f(*args, **kwargs)

        return wrapper

    return decorator
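As a minimal, self-contained sketch, the `_skipper` factory above can be exercised outside the test suite like this (`skip_always` and `some_check` are hypothetical names for illustration):

```python
import functools
import unittest


def _skipper(condition, reason):
    # Copied from above: builds a decorator that raises SkipTest
    # when `condition()` is truthy at call time.
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            if condition():
                raise unittest.SkipTest(reason)
            return f(*args, **kwargs)

        return wrapper

    return decorator


# Hypothetical usage: the condition is evaluated lazily, each time
# the wrapped function runs, not at decoration time.
skip_always = _skipper(lambda: True, "demo: always skipped")


@skip_always
def some_check():
    return "ran"


try:
    outcome = some_check()
except unittest.SkipTest as e:
    outcome = f"skipped: {e}"
```

Lazy evaluation is why helpers like `skipIfNoCuda` below can be defined at import time and still reflect the runtime environment.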
skipIfNoCuda = _skipper(lambda: not torch.cuda.is_available(), "CUDA is not available")

skipIfTravis = _skipper(lambda: os.getenv("TRAVIS"), "Skip In Travis")

skipIfNoBFloat16Cuda = _skipper(
    lambda: not torch.cuda.is_bf16_supported(), "BFloat16 CUDA is not available"
)
# Skips tests for all opset versions below min_opset_version.
# If exporting the op is only supported after a specific version,
# add this wrapper to prevent running the test for opset_versions
# smaller than the currently tested opset_version.
def skipIfUnsupportedMinOpsetVersion(min_opset_version):
    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self.opset_version < min_opset_version:
                raise unittest.SkipTest(
                    f"Unsupported opset_version: {self.opset_version} < {min_opset_version}"
                )
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
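A sketch of how this decorator reads the `opset_version` attribute off the test instance at call time (`_FakeTestCase` is a hypothetical stand-in, not part of the suite):

```python
import functools
import unittest


def skipIfUnsupportedMinOpsetVersion(min_opset_version):
    # Copied from above.
    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self.opset_version < min_opset_version:
                raise unittest.SkipTest(
                    f"Unsupported opset_version: {self.opset_version} < {min_opset_version}"
                )
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec


class _FakeTestCase:
    # Hypothetical stand-in for a real ONNX export test case.
    opset_version = 13

    @skipIfUnsupportedMinOpsetVersion(14)
    def test_new_op(self):
        return "ran"


case = _FakeTestCase()
try:
    result = case.test_new_op()
except unittest.SkipTest:
    result = "skipped"
```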
# Skips tests for all opset versions above max_opset_version.
def skipIfUnsupportedMaxOpsetVersion(max_opset_version):
    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self.opset_version > max_opset_version:
                raise unittest.SkipTest(
                    f"Unsupported opset_version: {self.opset_version} > {max_opset_version}"
                )
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
# Skips tests for all opset versions.
def skipForAllOpsetVersions():
    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self.opset_version:
                raise unittest.SkipTest(
                    "Skip verify test for unsupported opset_version"
                )
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
def skipTraceTest(skip_before_opset_version: Optional[int] = None, reason: str = ""):
    """Skip tracing test for opset versions less than skip_before_opset_version.

    Args:
        skip_before_opset_version: The opset version before which to skip the tracing test.
            If None, the tracing test is always skipped.
        reason: The reason for skipping the tracing test.

    Returns:
        A decorator for skipping the tracing test.
    """

    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if skip_before_opset_version is not None:
                self.skip_this_opset = self.opset_version < skip_before_opset_version
            else:
                self.skip_this_opset = True
            if self.skip_this_opset and not self.is_script:
                raise unittest.SkipTest(f"Skip verify test for torch trace. {reason}")
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
def skipScriptTest(skip_before_opset_version: Optional[int] = None, reason: str = ""):
    """Skip scripting test for opset versions less than skip_before_opset_version.

    Args:
        skip_before_opset_version: The opset version before which to skip the scripting test.
            If None, the scripting test is always skipped.
        reason: The reason for skipping the scripting test.

    Returns:
        A decorator for skipping the scripting test.
    """

    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if skip_before_opset_version is not None:
                self.skip_this_opset = self.opset_version < skip_before_opset_version
            else:
                self.skip_this_opset = True
            if self.skip_this_opset and self.is_script:
                raise unittest.SkipTest(f"Skip verify test for TorchScript. {reason}")
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
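`skipTraceTest` and `skipScriptTest` are mirror images: both compute the same `skip_this_opset` flag, but one fires only for tracing runs (`is_script` false) and the other only for scripting runs. A sketch of the scripting side, with hypothetical stand-in classes:

```python
import functools
import unittest
from typing import Optional


def skipScriptTest(skip_before_opset_version: Optional[int] = None, reason: str = ""):
    # Copied from above.
    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if skip_before_opset_version is not None:
                self.skip_this_opset = self.opset_version < skip_before_opset_version
            else:
                self.skip_this_opset = True
            if self.skip_this_opset and self.is_script:
                raise unittest.SkipTest(f"Skip verify test for TorchScript. {reason}")
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec


class _FakeScriptCase:
    # Hypothetical stand-in: a scripting run at opset 12.
    opset_version = 12
    is_script = True

    @skipScriptTest(skip_before_opset_version=14)
    def test_model(self):
        return "ran"


class _FakeTraceCase(_FakeScriptCase):
    # Same opset, but a tracing run, so the scripting skip does not fire.
    is_script = False


try:
    script_outcome = _FakeScriptCase().test_model()
except unittest.SkipTest:
    script_outcome = "skipped"

trace_outcome = _FakeTraceCase().test_model()
```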
# TODO(titaiwang): dynamic_only is specific to the situation that the dynamic FX exporter
# is not supported by ORT until 1.15.0. Remove dynamic_only once ORT 1.15.0 is released.
def skip_min_ort_version(reason: str, version: str, dynamic_only: bool = False):
    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if (
                packaging.version.parse(self.ort_version).release
                < packaging.version.parse(version).release
            ):
                if dynamic_only and not self.dynamic_shapes:
                    return func(self, *args, **kwargs)

                raise unittest.SkipTest(
                    f"ONNX Runtime version: {self.ort_version} is older than required version {version}. "
                    f"Reason: {reason}."
                )
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
def skip_dynamic_fx_test(reason: str):
    """Skip dynamic exporting test.

    Args:
        reason: The reason for skipping the dynamic exporting test.

    Returns:
        A decorator for skipping the dynamic exporting test.
    """

    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self.dynamic_shapes:
                raise unittest.SkipTest(
                    f"Skip verify dynamic shapes test for FX. {reason}"
                )
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
def xfail(reason: str):
    """Expect failure.

    Args:
        reason: The reason for the expected failure. Note that
            unittest.expectedFailure takes no arguments, so the reason is
            documentation-only and is not propagated to the test report.

    Returns:
        A decorator marking the test as an expected failure.
    """
    return unittest.expectedFailure
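Because `xfail` simply returns `unittest.expectedFailure`, a decorated test that fails is reported as an expected failure rather than an error, and the run still counts as successful. A minimal sketch (`_Demo` is a hypothetical test class):

```python
import unittest


def xfail(reason: str):
    # Copied from above: `reason` is not propagated because
    # unittest.expectedFailure takes no arguments.
    return unittest.expectedFailure


class _Demo(unittest.TestCase):
    @xfail("hypothetical known bug")
    def test_broken(self):
        self.assertEqual(1, 2)  # fails, but the failure is expected


suite = unittest.TestLoader().loadTestsFromTestCase(_Demo)
result = unittest.TestResult()
suite.run(result)
```

This is the key difference from a plain skip: an `xfail` test still runs, so an unexpected pass is surfaced in `result.unexpectedSuccesses` once the underlying bug is fixed.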
# Skips tests for opset versions listed in unsupported_opset_versions.
# If the caffe2 test cannot be run for a specific version, add this wrapper
# (for example, when an op was modified but the change is not supported in caffe2).
def skipIfUnsupportedOpsetVersion(unsupported_opset_versions):
    def skip_dec(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self.opset_version in unsupported_opset_versions:
                raise unittest.SkipTest(
                    "Skip verify test for unsupported opset_version"
                )
            return func(self, *args, **kwargs)

        return wrapper

    return skip_dec
def skipShapeChecking(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        self.check_shape = False
        return func(self, *args, **kwargs)

    return wrapper


def skipDtypeChecking(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        self.check_dtype = False
        return func(self, *args, **kwargs)

    return wrapper
def flatten(x):
    return tuple(function._iter_filter(lambda o: isinstance(o, torch.Tensor))(x))


def set_rng_seed(seed):
    torch.manual_seed(seed)
    random.seed(seed)
    np.random.seed(seed)
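Re-seeding all three RNGs makes every draw after the call reproducible, which is why `ExportTestCase.setUp` below calls it before each test. A stdlib-only sketch of the effect (the real helper additionally seeds torch and numpy):

```python
import random


def set_rng_seed(seed):
    # Stdlib-only sketch; the real helper also calls
    # torch.manual_seed(seed) and np.random.seed(seed).
    random.seed(seed)


set_rng_seed(0)
first_draw = random.random()
set_rng_seed(0)
second_draw = random.random()  # identical to first_draw
```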
class ExportTestCase(common_utils.TestCase):
    """Test case for ONNX export.

    Any test case that tests functionality under torch.onnx should inherit from this class.
    """

    def setUp(self):
        super().setUp()
        # TODO(#88264): Flaky test failures after changing seed.
        set_rng_seed(0)
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(0)
        diagnostics.engine.clear()