Should fix #78844
Custom op related tests use the inline C++ extension to build a custom
operator from a C++ source snippet. Only two test cases became flaky after
the switch to parallel runs, and both use the inline C++ extension. Reverting
to running these tests in a single process to try to resolve the flakiness.
Reverts the test skip previously added in #78936.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78944
Approved by: https://github.com/janeyx99, https://github.com/garymm
Currently `torch.onnx.export(..., operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK)` only emits ATen ops through explicit requests (e.g. `g.at()` calls) inside each op's symbolic function. This is done based on specific conditions such as `operator_export_type == OperatorExportTypes.ONNX_ATEN_FALLBACK` or `is_caffe2_aten_fallback()`.
This PR extends the ATen fallback mechanism to scenarios where the symbolic function raises `RuntimeError` during export. The idea is that partial implementations of existing ONNX ops can fall back to ATen as a last resort. That is valuable because each operator can have many input combinations, and not all of them are always implemented.
A minor fix was made to ensure the `overload_name` attribute is added to explicit ATen fallback requests when no symbolic function is registered for a particular op.
PS: The behavior for builds with `BUILD_CAFFE2=1` is unchanged to preserve backward compatibility.
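For reference, a minimal sketch of opting into the fallback at export time; the toy module and inputs below are placeholders, not code from this PR:
```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.relu()

# ONNX_ATEN_FALLBACK: emit standard ONNX ops where possible and fall back
# to ATen ops otherwise.
torch.onnx.export(
    M(),
    (torch.randn(2, 3),),
    "model.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```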
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74759
Approved by: https://github.com/garymm, https://github.com/msaroufim
`torch.cuda.synchronize()` is a heavy hammer and distorts benchmarking results significantly. `Timer` provides results that are closer to the kernel times observed in the profiler.
If you prefer, instead of `blocked_autorange` you can use `timeit`, which repeats the stmt a fixed number of times.
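For example, a minimal `Timer` sketch; the matmul workload below is a placeholder:
```
import torch
from torch.utils.benchmark import Timer

x = torch.randn(1024, 1024, device="cuda")
t = Timer(stmt="x @ x", globals={"x": x})

# blocked_autorange() picks the number of repeats adaptively;
# timeit(n) runs the stmt a fixed n times instead.
print(t.blocked_autorange())
print(t.timeit(100))
```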
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75393
Approved by: https://github.com/davidberard98
Summary:
And add a new tool to update it in the future, which follows the policy
of using "latest as of 18 months ago". This policy is meant to balance:
* recent enough to increase the odds of being able to successfully
export
* old enough to increase the odds of the exported model being runnable by
different ONNX implementations
Related changes:
* test_models.py: explicitly pin opset_version to 9 rather than relying on the default, since Caffe2 doesn't support newer versions (see the sketch after this list).
* symbolic_helper.py:
* Remove a misleading comment
* Remove unnecessary check in `_set_opset_version`
* Use a range to define `_onnx_stable_opsets`
* test_pytorch_common.py:
* Rename a variable from min -> max. I think it was a copy-paste error.
* Make skip test messages more informative.
* Remove unused `skipIfONNXShapeInference`. More on that below.
* test_pytorch_onnx_onnxruntime.py:
* Make all the `TestCase` classes explicitly specify opset version.
* Make `test_unsupported_pad` respect `opset_version` by using `run_test`.
* Unrelated simplification: make it obvious that all tests run with `onnx_shape_inference=True`. AFAICT this was already the case.
* There was one test (test_tolist) that was entirely disabled because it asked to be skipped whenever `onnx_shape_inference=True`, and that was always True. I changed the model being tested so as to preserve the intended test coverage while still having the test actually pass.
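As referenced above, a hedged sketch of pinning the opset explicitly; the model and inputs are placeholders:
```
import torch

model = torch.nn.Linear(3, 3)  # placeholder model
args = (torch.randn(1, 3),)    # placeholder inputs
torch.onnx.export(model, args, "model.onnx", opset_version=9)
```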
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73898
Reviewed By: msaroufim
Differential Revision: D35264615
Pulled By: malfet
fbshipit-source-id: cda8fbdffe4cc8210d8d96e659e3a9adf1b5f1d2
(cherry picked from commit b5e639e88828d34442282d0b50c977e610a2ba3a)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75108
- Add an option to only run some graphs
- Add NNC static vs. dynamic
- Update make_tensor because it wasn't using strides
Test Plan: Imported from OSS
Reviewed By: ejguan
Differential Revision: D35374000
Pulled By: eellison
fbshipit-source-id: df16b8647f2309a8837207cacba55d30f46845ce
(cherry picked from commit 19feb54db049186972b47548cf3d83e76512adfd)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74076
Extends the repro script to CPU and NNC. Usage (as documented in the file):
```
1. Run your script and pipe into a log file
PYTORCH_JIT_LOG_LEVEL=">>tensorexpr_fuser" python3 my_test.py &> log.txt
2. Run log_extract:
log_extract.py log.txt --baseline --nnc
```
Test Plan: Imported from OSS
Reviewed By: gchanan
Differential Revision: D34946883
Pulled By: eellison
fbshipit-source-id: 644012dbbca0b490820ef83e761c06b0dd009e52
(cherry picked from commit 5256c8f3ff8545033d1335cc96d34194abda1370)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73881
NVFuser fusion groups can contain nvfuser-only ops, e.g. `prim::reshape_copy`. Previously, we couldn't get a baseline performance measurement because the nvfuser-only ops would error out on the NNC and no-fusion runs. Instead, dump the fallback graphs after they have been corrected into runnable fallbacks.
Test Plan: Imported from OSS
Reviewed By: eellison
Differential Revision: D34698307
Pulled By: davidberard98
fbshipit-source-id: c357b2736b789bfd347afe9c83a1b610b64881e0
(cherry picked from commit 5918d826502ff75fbc22d242844ae6435dd7d22a)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72889
The script, along with the GRAPH_EXPORT macro, allows for an easy way to extract IR from logs. One use case in this diff is to extract the fusion groups from nvfuser so that the fusions can be tested individually.
Usage (e.g. for the nvfuser test):
1. Write some test.py file that uses nvfuser
2. `PYTORCH_JIT_LOG_LEVEL=">>graph_fuser" python3 test.py 2>&1 | tee output.txt`
3. `python3 pytorch/scripts/jit/log_extract.py output.txt --nvfuser`
This will run with and without nvfuser to compare the output.
Alternatively, use `--output` to dump the IR so that it can be used in other applications.
Currently, only `--output` works (since generating input tensors is not supported).
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D34440189
Pulled By: davidberard98
fbshipit-source-id: fca0f619200ee37aba34bb39b69e6c640c263e26
(cherry picked from commit eb319166075db160f1628f0de545641fbecde8be)
Summary:
RFC: https://github.com/pytorch/rfcs/pull/40
This PR (re)introduces Python codegen for unboxing wrappers. Given an entry in `native_functions.yaml`, the codegen should be able to generate the corresponding C++ code to convert IValues from the stack to their proper types. To trigger the codegen, run:
```
tools/jit/gen_unboxing.py -d cg/torch/share/ATen
```
Merged changes on CI test. In https://github.com/pytorch/pytorch/issues/71782 I added an e2e test for static dispatch + codegen unboxing. The test exports a mobile model of MobileNetV2, then loads and runs it on a new binary for the lite interpreter: `test/mobile/custom_build/lite_predictor.cpp`.
## Lite predictor build specifics
1. Codegen: `gen.py` generates `RegisterCPU.cpp` and `RegisterSchema.cpp`. With this PR, once `static_dispatch` mode is enabled, `gen.py` will not generate `TORCH_LIBRARY` API calls in those cpp files, thus avoiding interaction with the dispatcher. Once `USE_LIGHTWEIGHT_DISPATCH` is turned on, `cmake/Codegen.cmake` calls `gen_unboxing.py`, which generates `UnboxingFunctions.h`, `UnboxingFunctions_[0-4].cpp` and `RegisterCodegenUnboxedKernels_[0-4].cpp`.
2. Build: `USE_LIGHTWEIGHT_DISPATCH` adds the generated sources into `all_cpu_cpp` in `aten/src/ATen/CMakeLists.txt`. All other files remain unchanged. In reality the `Operators_[0-4].cpp` files are not necessary, but we can rely on the linker to strip them out.
## Current CI job test coverage update
Created a new CI job `linux-xenial-py3-clang5-mobile-lightweight-dispatch-build` that enables the following build options:
* `USE_LIGHTWEIGHT_DISPATCH=1`
* `BUILD_LITE_INTERPRETER=1`
* `STATIC_DISPATCH_BACKEND=CPU`
This job triggers `test/mobile/lightweight_dispatch/build.sh` and builds `libtorch`. Then the script runs the C++ tests written in `test_lightweight_dispatch.cpp` and `test_codegen_unboxing.cpp`. Recent commits added tests to cover as many C++ argument types as possible: in `build.sh` we install the PyTorch Python API so that we can export test models in `tests_setup.py`. Then we run the C++ test binary to execute these models on the lightweight-dispatch-enabled runtime.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69881
Reviewed By: iseeyuan
Differential Revision: D33692299
Pulled By: larryliu0820
fbshipit-source-id: 211e59f2364100703359b4a3d2ab48ca5155a023
(cherry picked from commit 58e1c9a25e3d1b5b656282cf3ac2f548d98d530b)
These were left out of the initial migration for some reason, so this just
transfers over those tests.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71644
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64929
Auto-categorized 63% of the commits for the PyTorch 1.10 release (2.2k out of 3.4k commits).
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D33768760
Pulled By: anjali411
fbshipit-source-id: 0655090af83e923f8c26fa1ce9f190edc542b97e
(cherry picked from commit 2fe30f77b8)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69332
---
## Context
The `build_android.sh` script currently does not forward Vulkan configuration options, which makes it impossible to control them when running `build_pytorch_android.sh`.
## Changes
Slightly change the script to allow Vulkan configuration options to propagate from `build_pytorch_android.sh` to `build_android.sh`.
Test Plan: Imported from OSS
Reviewed By: beback4u
Differential Revision: D32840908
Pulled By: SS-JIA
fbshipit-source-id: e55d89c93c996b92b743cf047f5a285bb516bbc4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67805
Also fix Reduce ops in binary_cross_entropy_with_logits.
The graph says the output is a scalar, but with `keepdims=1`
(the default) the output should be a tensor of rank 1. We set
`keepdims=0` to make it clear that we want a scalar output.
This previously went unnoticed because ONNX Runtime does not strictly
enforce shape inference mismatches if the model is not using the latest
opset version.
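As a plain-PyTorch analogy of the `keepdims` semantics at play (illustrative, not code from this PR):
```
import torch

x = torch.randn(4)
print(x.mean(dim=0, keepdim=True).shape)   # torch.Size([1]): rank-1 tensor
print(x.mean(dim=0, keepdim=False).shape)  # torch.Size([]): scalar
```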
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D32181304
Pulled By: malfet
fbshipit-source-id: 1462d8a313daae782013097ebf6341a4d1632e2c
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66513
These were missed in the migration of ONNX to GitHub Actions.
Adds ORT tests with 2 shards for the ONNX workflow.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31599433
Pulled By: seemethere
fbshipit-source-id: 73dce0d3017c4280e64f0c8578e2be7ef6a168d6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66513
These were missed in the migration of ONNX to GitHub Actions.
Adds ORT tests with 2 shards for the ONNX workflow.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31591512
Pulled By: seemethere
fbshipit-source-id: 4a8bb3f0e62ff98ee77d3d8afc905f4e02db6f24
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62419
This diff adds support for a CPU-only Kineto profiler on mobile, thus
enabling Chrome trace generation on mobile. This brings the C++ API for
mobile profiling on par with TorchScript.
This is done via:
1. Utilizing debug handle annotations in KinetoEvent.
2. Adding post-processing capability, via callbacks, to
KinetoThreadLocalState.
3. Creating a new RAII-style profiler, KinetoEdgeCPUProfiler, which can be
used in the surrounding scope of model execution. This will write the Chrome
trace to the location specified in the profiler constructor.
Test Plan:
MobileProfiler.ModuleHierarchy
Imported from OSS
Reviewed By: raziel
Differential Revision: D29993660
fbshipit-source-id: 0b44f52f9e9c5f5aff81ebbd9273c254c3c03299
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62855
Test Plan: Test on Private Pod with the [HelloWorld](https://fburl.com/3hiwkkhm) demo
Reviewed By: xta0
Differential Revision: D30174151
Pulled By: hanton
fbshipit-source-id: 22cd8663ac239811bf8ed1c3b6301460d798dbfa
Summary:
Two changes:
1. Build the lite interpreter as the default for iOS
2. Switch the previous lite interpreter test to a full JIT build test
Test Plan: Imported from OSS
Differential Revision: D27698039
Reviewed By: xta0
Pulled By: cccclai
fbshipit-source-id: 022b554f4997ae577681f2b79a9ebe9236ca4f7d
Summary:
Build the lite interpreter as the default for Android; this should wait until https://github.com/pytorch/pytorch/pull/56002 lands.
Mainly two changes:
1. Use the lite interpreter as the default for Android
2. Switch the lite interpreter build test to a full JIT build test
Test Plan: Imported from OSS
Differential Revision: D27695530
Reviewed By: IvanKobzarev
Pulled By: cccclai
fbshipit-source-id: e1b2c70fee6590accc22c7404b9dd52c7d7c36e2
Summary:
Some machines don't have a versionless `python` on their PATH, which breaks these existing shebangs.
I'm assuming that all the existing versionless `python` shebangs are meant to be `python3` and not `python2`; please let me know if my assumption was incorrect for any of these.
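The change is of this shape (illustrative, not an actual diff from this PR):
```
#!/usr/bin/env python3
# was: #!/usr/bin/env python  (breaks on machines without a versionless `python`)
```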
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58275
Test Plan: CI.
Reviewed By: zhouzhuojie
Differential Revision: D28428143
Pulled By: samestep
fbshipit-source-id: 6562be3d12924db72a92a0207b060ef740f61ebf
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57597
* Special post-processing for onnx::Cast and onnx::ConstantOfShape
* Update `test_pytorch_onnx_shape_inference.py` to be a unit test over shape inference patterns.
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D28393529
Pulled By: SplitInfinity
fbshipit-source-id: fc26032ddb842d4e299447da39564b28049752ed
Co-authored-by: BowenBao <bowbao@microsoft.com>
Summary:
Expanding support to all builds
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56323
Test Plan: CI
Reviewed By: malfet
Differential Revision: D28171478
Pulled By: ilia-cher
fbshipit-source-id: 16bc752d1be3cbaeda5316f5d8a687ae05a83d22
Summary:
[distutils](https://docs.python.org/3/library/distutils.html) is on its way out: it will be deprecated-on-import for Python 3.10+ and removed in Python 3.12 (see [PEP 632](https://www.python.org/dev/peps/pep-0632/)). There's no reason for us to keep it around since all the functionality we want from it can be found in `setuptools` / `sysconfig`. `setuptools` includes a copy of most of `distutils` (which is fine to use according to the PEP) that it uses under the hood, so this PR also uses that in some places.
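For illustration, a hedged sketch of typical stdlib replacements; these are representative examples, not the exact edits in this PR:
```
import shutil
import sysconfig

# distutils.spawn.find_executable("ninja") becomes:
ninja_path = shutil.which("ninja")

# distutils.sysconfig.get_python_inc() becomes:
python_include = sysconfig.get_paths()["include"]
```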
Fixes #56527
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57040
Pulled By: driazati
Reviewed By: nikithamalgifb
Differential Revision: D28051356
fbshipit-source-id: 1ca312219032540e755593e50da0c9e23c62d720