Commit Graph

449 Commits

Author SHA1 Message Date
Aaron Gokaslan
1d6c5972c1 [BE]: Optimize min/max/sum comprehensions C419 (#123960)
Automatic fixes that replace certain list comprehensions with generator expressions where appropriate, so that they are immediately consumed. This is preview functionality in ruff for rule C419, and it was applied automatically.
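
For illustration, a minimal example of the kind of rewrite this rule performs (not taken from the PR's diff):

```python
values = [3, 1, 4, 1, 5]

# Before: the list comprehension materializes an intermediate list
total = sum([v * v for v in values])

# After: the generator expression is consumed directly by sum()
total = sum(v * v for v in values)
```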

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123960
Approved by: https://github.com/malfet
2024-04-12 23:54:15 +00:00
atalman
a95ceb51a2 Release fix pinning slow-tests.json (#121746)
The apply-release-changes script adds a version to SLOW_TESTS_FILE, which should not change.

Test:
```
SLOW_VER=test
sed -i -e s#/slow-tests.json#"/slow-tests.json?versionId=${SLOW_VER}"#  tools/stats/import_test_stats.py
```
Output:
```
SLOW_TESTS_FILE = ".pytorch-slow-tests.json"
...
url = "https://ossci-metrics.s3.amazonaws.com/slow-tests.json?versionId=test"
```

related to: https://github.com/pytorch/pytorch/pull/121726
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121746
Approved by: https://github.com/huydhn
2024-03-12 22:04:55 +00:00
atalman
00a53b58dd Refactor release only changes to two step execution (#121728)
Refactor release-only changes into a two-step execution.

1. Step ``tag-docker-images.sh`` tags the latest docker images for the current release. This step takes about 30 minutes to complete and may fail due to space issues on the local host or HTTP connection errors when pulling images, so it should be rerun if it fails.

2. Step ``apply-release-changes.sh`` prepares a PR with release-only changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121728
Approved by: https://github.com/jeanschmidt
2024-03-12 17:22:22 +00:00
atalman
12191f4b3e Fix make triton command on release branch (#121169)
Fixes #120044

Should fix build from source instructions on release branch here: https://github.com/pytorch/pytorch#from-source

Please note we are using the /test/ channel for the release here to make sure it works before the actual release is completed.

Test main:
```
make triton
pip3 uninstall -y triton
WARNING: Skipping triton as it is not installed.
Looking in indexes: https://download.pytorch.org/whl/nightly/
Collecting pytorch-triton==3.0.0+a9bc1a3647
  Downloading https://download.pytorch.org/whl/nightly/pytorch_triton-3.0.0%2Ba9bc1a3647-cp310-cp310-linux_x86_64.whl (239.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 239.0/239.0 MB 8.7 MB/s eta 0:00:00
Requirement already satisfied: filelock in /home/atalman/miniconda3/envs/py310/lib/python3.10/site-packages (from pytorch-triton==3.0.0+a9bc1a3647) (3.13.1)
Installing collected packages: pytorch-triton
  Attempting uninstall: pytorch-triton
    Found existing installation: pytorch-triton 2.2.0
    Uninstalling pytorch-triton-2.2.0:
      Successfully uninstalled pytorch-triton-2.2.0
Successfully installed pytorch-triton-3.0.0+a9bc1a3647
```

Test release/2.2:
```
make triton
pip3 uninstall -y triton
WARNING: Skipping triton as it is not installed.
Looking in indexes: https://download.pytorch.org/whl/test/
Collecting pytorch-triton==2.2.0
  Using cached https://download.pytorch.org/whl/test/pytorch_triton-2.2.0-cp310-cp310-linux_x86_64.whl (183.1 MB)
Requirement already satisfied: filelock in /home/atalman/miniconda3/envs/py310/lib/python3.10/site-packages (from pytorch-triton==2.2.0) (3.13.1)
Installing collected packages: pytorch-triton
  Attempting uninstall: pytorch-triton
    Found existing installation: pytorch-triton 3.0.0+a9bc1a3647
    Uninstalling pytorch-triton-3.0.0+a9bc1a3647:
      Successfully uninstalled pytorch-triton-3.0.0+a9bc1a3647
Successfully installed pytorch-triton-2.2.0
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121169
Approved by: https://github.com/seemethere
2024-03-05 13:53:53 +00:00
James Wu
82099ab87b [easy] Reword unexpected success error messages and generated github issues now that we have sentinel files (#120766)
It's a bit annoying to have to read through the test name in verbose mode just to see what the test's sentinel file is actually called when encountering an unexpected success. Now that we have sentinel files, we can directly list the file path from root in the error message.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120766
Approved by: https://github.com/Skylion007
2024-02-28 11:15:29 +00:00
rzou
7b1cc140aa Use lxml in scripts/compile_tests when it is available (#120633)
It's around 30x (300s -> 10s) faster.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120633
Approved by: https://github.com/oulgen
2024-02-26 21:35:22 +00:00
Oguz Ulgen
3eefe96297 Update scripts/compile_tests/update_failures.py (#120529)
In order to unbreak this script, I have only tested with
```
./scripts/compile_tests/update_failures.py 97918e8c37
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120529
Approved by: https://github.com/zou3519
2024-02-23 22:15:44 +00:00
Jason Ansel
0f68bcaa5c Make filename optional in update_failures.py (#119289)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119289
Approved by: https://github.com/zou3519
2024-02-06 21:56:09 +00:00
rzou
debc3b3254 Download reports only if they're necessary (#119027)
Previously we were downloading all of (eager311, dynamo38, dynamo311).
Now we just download what's necessary. This is useful for
update_failures.py because the dynamo tests finish much faster than the
eager tests and it only needs the result from the dynamo tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119027
Approved by: https://github.com/jamesjwu
ghstack dependencies: #118874, #118882, #118931
2024-02-02 20:11:01 +00:00
rzou
a68cf3ef7d update_failures.py: add option to also remove "skipped" tests (#118931)
Previously, you could run update_failures.py (with a commit hash) and it
would add new expected failures and skips for newly failing tests and
remove expected failures for newly passing tests.

This PR teaches update_failures.py to also remove skips for tests that
are now passing without them.

The way we do this is:
- dynamo_test_failures.py doesn't actually skip tests -- it runs the
  test and then suppresses the signal.
- if the test actually passed, then the test gets skipped with a special
  skip message
- we teach update_failures.py to look for the presence of that skip
  message (a rough sketch of this flow follows the list).
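
A rough, hypothetical sketch of the run-and-suppress idea described above (not the actual dynamo_test_failures.py code; the decorator name and skip message are invented for illustration):

```python
import functools
import unittest

# Hypothetical marker string an updater script could grep for
UNEXPECTED_SUCCESS_SKIP_MESSAGE = "test passed, expected failure suppressed"

def suppress_expected_failure(test_func):
    """Run the test anyway; if it unexpectedly passes, skip it with a marker message."""
    @functools.wraps(test_func)
    def wrapper(self, *args, **kwargs):
        try:
            test_func(self, *args, **kwargs)
        except Exception:
            # Expected failure: swallow the signal so the suite stays green.
            return
        # The test actually passed: surface that via a recognizable skip message.
        raise unittest.SkipTest(UNEXPECTED_SUCCESS_SKIP_MESSAGE)
    return wrapper
```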

Test Plan:
- Used this to generate https://github.com/pytorch/pytorch/pull/118928
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118931
Approved by: https://github.com/yanboliang
ghstack dependencies: #118874, #118882
2024-02-02 20:11:01 +00:00
rzou
292243d1aa Automatically pull test reports from CI (#118882)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118882
Approved by: https://github.com/jamesjwu, https://github.com/yanboliang
ghstack dependencies: #118874
2024-02-02 14:18:56 +00:00
rzou
0f7954107a Add ability to print histogram as a github issue (#118874)
Adds the ability to print the failures histogram into lines that can be
copy-pasted into a github issue.

I used this to generate https://github.com/orgs/pytorch/projects/43

Test Plan:
- tested locally
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118874
Approved by: https://github.com/jamesjwu
2024-02-02 14:18:56 +00:00
rzou
318e6ff40e Fix __name__ on a reconstructed NestedUserFunctionVariable (#118768)
```
def f():
    def g():
        return ()

    print(g.__name__)

f()
```

The following script should print `g` (with or without torch.compile),
but prints `f.<locals>.g` with torch.compile.

The problem looks like we use the co_qualname when reconstructing the
NestedUserFunctionVariable. I switched this over to use the co_name.
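
For reference, plain CPython already distinguishes the two spellings on a nested function; a small standalone example (independent of torch.compile):

```python
def f():
    def g():
        return ()
    # __name__ reflects the code object's co_name ("g"),
    # while __qualname__ carries the qualified path ("f.<locals>.g").
    print(g.__name__)      # g
    print(g.__qualname__)  # f.<locals>.g

f()
```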

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118768
Approved by: https://github.com/yanboliang, https://github.com/jansel
2024-02-01 18:59:01 +00:00
James Wu
8d6e34b21b Add verbose option to failures histogram (#118757)
Sample output: https://gist.github.com/jamesjwu/cc80d7da305add0a69c5e39aae09a077
Using directories from https://hud.pytorch.org/pr/118597:
eager_tests: [linux-focal-py3.11-clang10 / test (default, 1, 3, linux.2xlarge)](https://github.com/pytorch/pytorch/actions/runs/7716582714/job/21034340833)
dynamo_tests: [linux-focal-py3.11-clang10 / test (dynamo, 1, 3, linux.2xlarge)](https://github.com/pytorch/pytorch/actions/runs/7716582714/job/21034342747)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118757
Approved by: https://github.com/zou3519
2024-02-01 02:46:36 +00:00
rzou
41dfd0e063 Update Dynamo passrate/histogram scripts (#118752)
Changelog:
- Don't count running PYTORCH_TEST_WITH_DYNAMO=1 on dynamo/ tests in the pass
rate. This was a bug (we were counting all of these as failing, but in
reality, most of these pass). The net effect is that the passrate is (artificially)
6% higher.
- Have the histogram script filter out skips based on the passrate metric.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118752
Approved by: https://github.com/jamesjwu
2024-01-31 19:15:17 +00:00
Zhengxu Chen
2d37a046e7 [export] Enforce serialization BC/FC with updater script. (#118424)
Summary:
This diff implements a mechanism for safely updating the torch.export serialization schema, aka schema.py, which is the API surface with the strongest compatibility guarantee.

The diff consists of 3 changes:
- Added a script to "build" or "materialize" schema.py into a platform-neutral format (yaml), which serves as the committed form of the serialization schema.
- Added a unit test that compares schema.py against schema.yaml, so that developers are forced to execute the updater script when the two files mismatch.
- Added a checker inside the updater script, so that every compatible change results in a minor version bump and every incompatible change results in a major version bump (a rough sketch of this idea follows the list).
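
A loose illustration of that kind of compatibility check (not the actual updater script; the field-set comparison and bump rule here are simplified assumptions):

```python
def decide_version_bump(old_fields: set, new_fields: set) -> str:
    """Toy rule: removing fields is incompatible (major bump);
    purely additive changes are compatible (minor bump)."""
    if old_fields - new_fields:
        return "major"
    if new_fields - old_fields:
        return "minor"
    return "none"

# Adding a field is treated as a compatible, minor change
print(decide_version_bump({"name", "dtype"}, {"name", "dtype", "layout"}))  # minor
```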

torch.export's serialization BC/FC policy is (tentatively) documented here: https://docs.google.com/document/d/1EN7JrHbOPDhbpLDtiYG4_BPUs7PttpXlbZ27FuwKhxg/edit#heading=h.pup7ir8rqjhx , we will update the

As noted in the code doc, people should be able to run the following command to update schema properly from now on:

```
    python scripts/export/update_schema.py --prefix <path_to_torch_development_diretory>
or
    buck run caffe2:export_update_schema -- --prefix /data/users/$USER/fbsource/fbcode/caffe2/
```

Test Plan:
buck test mode/opt caffe2/test:test_export -- -r test_schema
buck run caffe2:update_export_schema -- --prefix /data/users/$USER/fbsource/fbcode/caffe2/

Differential Revision: D52971020

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118424
Approved by: https://github.com/angelayi
2024-01-31 05:37:58 +00:00
rzou
8f973038d5 Update update_failures.py given feedback (#118237)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118237
Approved by: https://github.com/drisspg
2024-01-25 15:42:01 +00:00
Yanbo Liang
c0732c8d5e [Dynamo] Add complex to literal constant (#117819)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117819
Approved by: https://github.com/zou3519
2024-01-23 23:46:46 +00:00
rzou
dc1b9d758e Update passrate calculation script to skip inductor and export (#118030)
We don't want to count running test/inductor/ and test/export/ with
PYTORCH_TEST_WITH_DYNAMO=1 as a part of the pass rate.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118030
Approved by: https://github.com/ydwu4
ghstack dependencies: #117998
2024-01-23 02:33:57 +00:00
rzou
162f643090 Script to generate failures histogram (#118008)
Generates something that looks like
https://gist.github.com/zou3519/43aa8ef28a327bd68cfbac83d84c0999
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118008
Approved by: https://github.com/yanboliang, https://github.com/oulgen
2024-01-23 02:28:55 +00:00
Nikita Shulga
aadbaf8e2d [EZ][BE] Move build_android_gradle.sh (#117795)
From `.circleci/scripts` to `scripts`, next to another `build_android.sh`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117795
Approved by: https://github.com/huydhn
2024-01-19 02:14:28 +00:00
Nikita Shulga
044b9012d5 Update PocketFFT (#117595)
This updates PocketFFT submodule to 9d3ab05a7f

Probably fixes https://github.com/pytorch/pytorch/issues/117589 (as it includes https://github.com/mreineck/pocketfft/issues/5 that should fix PocketFFT compilation on Windows)

Also adjusts the `#if __cplusplus >= 201703` replacement path in the Android scripts (the fix needs to be submitted back to PocketFFT)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117595
Approved by: https://github.com/huydhn
2024-01-18 17:08:44 +00:00
rzou
c30346db0e Check in some torch.compile helper scripts (#117400)
- passrate.py: compute the pass rate
- update_failures.py: update `dynamo_test_failures.py`

Both of these scripts require you to download the test results from CI
locally. Maybe we can automate this more in the future. Checking these
in for now, with no tests :P.
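
As a rough sketch of what a pass-rate computation over downloaded test results might look like (hypothetical data shape; the real passrate.py parses CI test reports):

```python
def pass_rate(results):
    """results: mapping of test name -> one of 'passed', 'failed', 'skipped'."""
    passed = sum(1 for outcome in results.values() if outcome == "passed")
    failed = sum(1 for outcome in results.values() if outcome == "failed")
    considered = passed + failed  # skipped tests do not count toward the rate
    return passed / considered if considered else 0.0

print(pass_rate({"test_a": "passed", "test_b": "failed", "test_c": "skipped"}))  # 0.5
```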
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117400
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117391
2024-01-16 17:14:43 +00:00
Jacob Rodal
a699b10339 [buck2][win] fix caffe2 protobuf_rule (#115954)
Summary:
c2_protobuf_rule ([here](https://fburl.com/code/iyiulpmv)) is broken on buck2, ultimately due to the following error:

> .\./caffe2.proto: File does not reside within any path specified using --proto_path (or -I).  You must specify a --proto_path which encompasses this file.  Note that the proto_path must be an exact prefix of the .proto file names -- protoc is too dumb to figure out when two paths (e.g. absolute and relative) are equivalent (it's harder than you think).

The root cause is differences in how buck1 and buck2 handle `%SRCDIR%` (absolute versus relative paths). This diff fixes the build.

Test Plan:
# Before

```
buck2 build arvr/mode/win/opt //xplat/caffe2:caffe2.pb.h
```

```
More details at https://www.internalfb.com/intern/buck/build/c6550454-ae6d-479e-9d08-016e544ef050
BUILD SUCCEEDED
```

```
Action failed: fbsource//xplat/caffe2:caffe2.pb.h (genrule)
Remote command returned non-zero exit code <no exit code>
Reproduce locally: frecli cas download-action 5df17cf64b7e2fc5ab090c91e1129f2f3cad36dc72c7c182ab052af23d3f32aa:145
stdout:
stderr:
OUTMISS: Missing outputs: buck-out/v2/gen/fbsource/dd87aacb8683145b/xplat/caffe2/caffe2.pb.h/out/caffe2.pb.h
```

# After

Buck1 still works

```
buck1 build arvr/mode/win/opt //xplat/caffe2:caffe2.pb.h
```

Buck2 works

```
buck2 build arvr/mode/win/opt //xplat/caffe2:caffe2.pb.h
```

```
Buck UI: https://www.internalfb.com/buck2/e5dae607-325a-4eab-b0c9-66fe4e9a6254
BUILD SUCCEEDED
```

Differential Revision: D52218365

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115954
Approved by: https://github.com/mcr229
2023-12-18 21:41:10 +00:00
BowenBao
b0a36944cc [ONNX] Add sanity check in CI for onnxbench (#110178)
ONNX CI to run benchmark with `--quick` to validate the onnxbench infra.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110178
Approved by: https://github.com/thiagocrepaldi
2023-12-02 00:17:07 +00:00
atalman
56a95afb42 [RelEng] Pin disabled and slow test for release (#114515)
Follow up for https://github.com/pytorch/pytorch/pull/114355
Pin disabled and slow tests when applying release only changes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114515
Approved by: https://github.com/DanilBaibak
2023-11-27 15:15:19 +00:00
atalman
7a697c4683 [RelEng] Tag docker images for release, pin unstable and disabled jobs, apply release only changes (#114355)
1. This tags docker images using docker pull/tag/push for current release
2. Sets RELEASE_VERSION_TAG var and regenerates the workflows using the new docker tag
3. Remove conda token setting and binary tests release changes; these are already automated
4. Pin unstable and disabled jobs, automate: https://github.com/pytorch/pytorch/pull/111675

Test:
```
RELEASE_VERSION=2.2 ./scripts/release/apply-release-changes.sh
Tagging pytorch/manylinux-builder:cuda11.8-main to pytorch/manylinux-builder:cuda11.8-2.2 , dry_run: enabled
Tagging pytorch/manylinux-builder:cuda12.1-main to pytorch/manylinux-builder:cuda12.1-2.2 , dry_run: enabled
Tagging pytorch/libtorch-cxx11-builder:cuda11.8-main to pytorch/libtorch-cxx11-builder:cuda11.8-2.2 , dry_run: enabled
Tagging pytorch/libtorch-cxx11-builder:cuda12.1-main to pytorch/libtorch-cxx11-builder:cuda12.1-2.2 , dry_run: enabled
Tagging pytorch/manylinux-builder:rocm5.6-main to pytorch/manylinux-builder:rocm5.6-2.2 , dry_run: enabled
Tagging pytorch/manylinux-builder:rocm5.7-main to pytorch/manylinux-builder:rocm5.7-2.2 , dry_run: enabled
Tagging pytorch/libtorch-cxx11-builder:rocm5.6-main to pytorch/libtorch-cxx11-builder:rocm5.6-2.2 , dry_run: enabled
Tagging pytorch/libtorch-cxx11-builder:rocm5.7-main to pytorch/libtorch-cxx11-builder:rocm5.7-2.2 , dry_run: enabled
Tagging pytorch/manylinux-builder:cpu-main to pytorch/manylinux-builder:cpu-2.2 , dry_run: enabled
Tagging pytorch/libtorch-cxx11-builder:cpu-main to pytorch/libtorch-cxx11-builder:cpu-2.2 , dry_run: enabled
Tagging pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main to pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.2 , dry_run: enabled
Tagging pytorch/manylinuxaarch64-builder:cpu-aarch64-main to pytorch/manylinuxaarch64-builder:cpu-aarch64-2.2 , dry_run: enabled
Tagging pytorch/conda-builder:cuda11.8-main to pytorch/conda-builder:cuda11.8-2.2 , dry_run: enabled
Tagging pytorch/conda-builder:cuda12.1-main to pytorch/conda-builder:cuda12.1-2.2 , dry_run: enabled
Tagging pytorch/conda-builder:cpu-main to pytorch/conda-builder:cpu-2.2 , dry_run: enabled
/data/users/atalman/pytorch/.github/workflows/generated-linux-binary-manywheel-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-linux-binary-conda-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-linux-binary-libtorch-cxx11-abi-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-linux-binary-libtorch-pre-cxx11-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-linux-aarch64-binary-manywheel-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-linux-binary-manywheel-main.yml
/data/users/atalman/pytorch/.github/workflows/generated-linux-binary-libtorch-cxx11-abi-main.yml
/data/users/atalman/pytorch/.github/workflows/generated-linux-binary-libtorch-pre-cxx11-main.yml
/data/users/atalman/pytorch/.github/workflows/generated-windows-binary-wheel-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-windows-binary-conda-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-windows-binary-libtorch-release-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-windows-binary-libtorch-debug-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-windows-binary-libtorch-release-main.yml
/data/users/atalman/pytorch/.github/workflows/generated-windows-binary-libtorch-debug-main.yml
/data/users/atalman/pytorch/.github/workflows/generated-macos-binary-wheel-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-macos-binary-conda-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-macos-binary-libtorch-cxx11-abi-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-macos-arm64-binary-libtorch-cxx11-abi-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-macos-arm64-binary-wheel-nightly.yml
/data/users/atalman/pytorch/.github/workflows/generated-macos-arm64-binary-conda-nightly.yml
```

Result of pinning unstable and disabled jobs:
```
# The link to the published list of disabled jobs
DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json?versionid=kKJlAXdrUbk3CilXbKu.6OwNTGQB8a.B"
# and unstable jobs
UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json?versionid=vzaicOxSsh55iXBXwgGrW6dFeVtPfrhr"
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114355
Approved by: https://github.com/malfet
2023-11-23 02:14:22 +00:00
atalman
2322d989e8 Apply release only changes to core (#109208)
Utility script to run after the branch cut has been completed.
Execute: ``RELEASE_VERSION=2.1 apply-release-changes.sh``
Similar to: https://github.com/pytorch/audio/pull/3590

Test PR: https://github.com/pytorch/pytorch/pull/109210

Automate generation of PRs:
https://github.com/pytorch/pytorch/pull/108053
https://github.com/pytorch/pytorch/pull/108688
https://github.com/pytorch/pytorch/pull/108064

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109208
Approved by: https://github.com/seemethere
2023-11-07 19:47:30 +00:00
Oleg Bulatov
192477b5ba Enable flake8-bugbear B020 lint (#110823)
Fixes part of https://github.com/pytorch/pytorch/issues/106571

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110823
Approved by: https://github.com/Skylion007
2023-10-24 22:43:47 +00:00
Jerry Zhang
5cc1a38370 [release_notes] Some updates after 2.1 release (#110771)
Summary:
1. aligned topic with labels
2. added some more descriptions in release note worksheet template

Test Plan:
.

Reviewers:

Subscribers:

Tasks:

Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110771
Approved by: https://github.com/drisspg
2023-10-07 03:10:46 +00:00
Huy Do
65afa760a6 Add a script to run iOS test app on AWS Device Farm (#110202)
This adds a script to test PyTorch on actual iOS devices on AWS Device Farm. The test could take quite a long time waiting for the devices to become available, so the steps are done manually and documented in `ios/TestApp/README.md`.

### Testing

1. TestApp itself runs fine on my local iPhone 13 and on [device farm](https://us-west-2.console.aws.amazon.com/devicefarm/home#/mobile/projects/b531574a-fb82-40ae-b687-8f0b81341ae0/runs/d2653ca8-8ee2-44dd-b15e-0402f9ab0aca). I can see the benchmark results output in the console log.
```
BUILD_LITE_INTERPRETER=1 USE_PYTORCH_METAL=1 USE_COREML_DELEGATE=1 IOS_PLATFORM=OS IOS_ARCH=arm64 ./scripts/build_ios.sh

pushd ios/TestApp/benchmark
ruby setup.rb --lite 1 -t 9HKVT38N77 --benchmark
popd

ruby scripts/xcode_build.rb -i build_ios/install -x ios/TestApp/TestApp.xcodeproj -p "OS"
```

2. Trying to run TestAppTests https://github.com/pytorch/pytorch/blob/main/ios/TestApp/TestAppTests/TestLiteInterpreter.mm on my local iPhone ends up with the error `Logic Testing Unavailable. Logic Testing on iOS devices is not supported. You can run logic tests on the Simulator`. I updated the Xcode project to reuse TestApp as the host application.
```
ruby setup.rb --lite 1 -t 9HKVT38N77
```

3. Trying [another round of testing on device farm](https://us-west-2.console.aws.amazon.com/devicefarm/home#/mobile/projects/b531574a-fb82-40ae-b687-8f0b81341ae0/runs/18dbd69d-8608-46d8-a868-bd05b69375db)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110202
Approved by: https://github.com/kit1980
2023-10-06 08:23:16 +00:00
Huy Do
f7909cb947 Build and test iOS on GitHub M1 runners (#110406)
They are here https://github.blog/2023-10-02-introducing-the-new-apple-silicon-powered-m1-macos-larger-runner-for-github-actions

I have been able to run iOS simulator tests on my M1 laptop without issues.  Some numbers:

* iOS build takes ~1h with x86 runners
* The new M1 runners take ~20m https://github.com/pytorch/pytorch/actions/runs/6386171957

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110406
Approved by: https://github.com/malfet, https://github.com/seemethere
2023-10-03 03:17:10 +00:00
Aaron Gokaslan
6d725e7d66 [BE]: enable ruff rules PLR1722 and PLW3301 (#109461)
Enables two ruff rules derived from pylint:
* PLR1722 replaces any exit() calls with sys.exit(). exit() is only designed to be used in REPL contexts and may not always be available by default; this always uses the version in the sys module, which is better.
* PLW3301 replaces nested min / max calls with simplified versions (i.e. `min(a, min(b, c))` => `min(a, b, c)`). The new version is more idiomatic and more efficient (see the short example below).
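
A small illustrative example of both rules (not from the PR's diff):

```python
import sys

a, b, c = 3, 1, 2

# PLW3301: nested min/max calls can be flattened into one call
lo_nested = min(a, min(b, c))  # flagged form
lo_flat = min(a, b, c)         # preferred, equivalent result
assert lo_nested == lo_flat

# PLR1722: prefer sys.exit() over the built-in exit(),
# which is intended for interactive REPL use
if lo_flat < 0:
    sys.exit("unexpected negative minimum")
```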

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109461
Approved by: https://github.com/ezyang
2023-09-18 02:07:21 +00:00
Jerry Zhang
9ed0b3fcd9 [release_note_tool] Update test and skip commits that errors out (#108252)
Summary:
att

Test Plan:
python test_release_notes.py

Reviewers:

Subscribers:

Tasks:

Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108252
Approved by: https://github.com/drisspg
2023-08-31 04:38:53 +00:00
chuboning
329a9a90c0 fix some typos (#106253)
Fixes typos

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106253
Approved by: https://github.com/awgu
2023-07-29 16:11:52 +00:00
Edward Z. Yang
f70844bec7 Enable UFMT on a bunch of low traffic Python files outside of main files (#106052)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106052
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-07-27 01:01:17 +00:00
Justin Chu
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow up on the pyupgrade series to convert more strings to use f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`
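
For context, the kind of rewrite flynt performs (illustrative example, not from the diff):

```python
name, count = "tensor", 3

# Before: percent-formatting and str.format
msg = "found %d instances of %s" % (count, name)
msg = "found {} instances of {}".format(count, name)

# After: the equivalent f-string
msg = f"found {count} instances of {name}"
```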

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
Sergii Dymchenko
a1c26ba77c Rename READEME.md to README.md (#103230)
Fix the typo so the file is shown for the dir.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103230
Approved by: https://github.com/ZainRizvi
2023-06-08 18:42:53 +00:00
Nikita Shulga
991b1c0286 Do not use --extra-index-url in testing wheels (#100183)
Should prevent regressions like the ones reported in  https://github.com/pytorch/pytorch/issues/100104 from sneaking undetected.

Same for `install_triton_wheel.sh` - always use packages from https://download.pytorch.org/whl/

<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at deda821</samp>

> _`pip install` changed_
> _Only use PyTorch nightly_
> _Snowflake packages_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100183
Approved by: https://github.com/kit1980, https://github.com/pmeier
2023-04-27 18:48:02 +00:00
Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.jit allow simple generator expressions, which allows us to enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280 but I split it off into this PR so that it can be easily reverted should anything break.
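
A minimal example of the any/all rewrite this enables (illustrative only):

```python
words = ["alpha", "beta", "gamma"]

# Before: the full list is built before any() can run
has_long = any([len(w) > 4 for w in words])

# After: the generator lets any() short-circuit on the first match
has_long = any(len(w) > 4 for w in words)
```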

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
BowenBao
d41aa448b8 [ONNX] Run ONNX tests as part of standard run_test script (#99215)
<!--
copilot:all
-->
### <samp>🤖 Generated by Copilot at dcbf7e2</samp>

### Summary
📝🧹🚩

<!--
1.  📝 for simplifying the `./scripts/onnx/test.sh` script
2.  🧹 for refactoring the `test/onnx/dynamo/test_exporter_api.py` file
3.  🚩 for adding the `--onnx` flag to `test/run_test.py` and updating the `TESTS` list
-->
This pull request improves the ONNX testing infrastructure in PyTorch by refactoring the test code, normalizing the scope names, adding a flag to run only the ONNX tests, and simplifying the test script.

> _To export PyTorch models to ONNX_
> _We refactored some scripts and contexts_
> _We used `common_utils`_
> _And normalized the scopes_
> _And added a flag to run the tests_

### Walkthrough
*  Simplify `./scripts/onnx/test.sh` to use `run_test.py` with `--onnx` flag instead of `pytest` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-0017f5b22ae1329acb0f54af8d9811c9b6180a72dac70d7a5b89d7c23c958198L44-R46))
*  Remove `onnx` test from `TESTS` list in `test/run_test.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-e72503c9e3e8766e2d1bacf3fad7b88aa166e0e90a7e103e7df99357a35df8d7L127-R127)). Replace with `onnx_caffe2`.
*  Add `onnx/test_pytorch_onnx_onnxruntime_cuda` and `onnx/test_models` tests to `blocklisted_tests` list in `test/run_test.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-e72503c9e3e8766e2d1bacf3fad7b88aa166e0e90a7e103e7df99357a35df8d7R154-R155))
*  Add `ONNX_SERIAL_LIST` list to `test/run_test.py` to specify ONNX tests that must run serially ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-e72503c9e3e8766e2d1bacf3fad7b88aa166e0e90a7e103e7df99357a35df8d7R296-R301))
*  Add `ONNX_TESTS` list to `test/run_test.py` to store all ONNX tests ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-e72503c9e3e8766e2d1bacf3fad7b88aa166e0e90a7e103e7df99357a35df8d7R370))
*  Add `--onnx` flag to `parse_args` function in `test/run_test.py` to run only ONNX tests ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-e72503c9e3e8766e2d1bacf3fad7b88aa166e0e90a7e103e7df99357a35df8d7R920-R928))
*  Include `ONNX_SERIAL_LIST` in `must_serial` function in `test/run_test.py` to run ONNX tests serially or parallelly based on memory usage ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-e72503c9e3e8766e2d1bacf3fad7b88aa166e0e90a7e103e7df99357a35df8d7R1120))
*  Filter selected tests based on `--onnx` flag in `get_selected_tests` function in `test/run_test.py` to exclude non-ONNX tests ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-e72503c9e3e8766e2d1bacf3fad7b88aa166e0e90a7e103e7df99357a35df8d7R1158-R1165))

### Other minor changes to accommodate this change
*  Replace `unittest` module with `common_utils.TestCase` in `test/onnx/dynamo/test_exporter_api.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L4), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L29-R28), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L71-R70), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L147-R146))
*  Import `TemporaryFileName` class from `common_utils` in `test/onnx/dynamo/test_exporter_api.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L19-R18))
*  Use `common_utils.TemporaryFileName` instead of `TemporaryFileName` in `TestDynamoExportAPI` class in `test/onnx/dynamo/test_exporter_api.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L92-R91), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L110-R109), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L129-R128))
*  Use `common_utils.run_tests` instead of `unittest.main` in `test/onnx/dynamo/test_exporter_api.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-4545f0c15c73ebe90a875e9bee6c5ca4b6b92fb1ed0ec5560d1568e0f6339d02L155-R154))
*  Add `re` module to `test/onnx/test_utility_funs.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7R6))
*  Add `_remove_test_environment_prefix_from_scope_name` function to `test/onnx/test_utility_funs.py` to normalize scope names of ONNX nodes ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7R32-R58))
*  Use `_remove_test_environment_prefix_from_scope_name` function to compare scope names of ONNX nodes in `TestUtilityFuns` class in `test/onnx/test_utility_funs.py` ([link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7L1099-R1133), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7L1119-R1152), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7L1170-R1188), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7L1181-R1199), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7L1220-R1239), [link](https://github.com/pytorch/pytorch/pull/99215/files?diff=unified&w=0#diff-da71d2c81c9dc7ac0c47ff086fded82e4edcb67ba0cd3d8b5c983d7467343bc7L1235-R1258))

Fixes #98626

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99215
Approved by: https://github.com/huydhn, https://github.com/titaiwangms
2023-04-19 06:17:47 +00:00
BowenBao
4f9dbc17a4 [ONNX] Enable xdoctests in CI (#98546)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98546
Approved by: https://github.com/justinchuby, https://github.com/kit1980
2023-04-07 22:20:18 +00:00
Driss Guessous
dcafe3f271 Updates to the release notes scripts and documentation (#94560)
# Summary
This PR made some significant changes to the scripts around Release Scripts. At a high level:
- Turned the quips into docs and updated links
- Update the common.categorizes list in the hope of making it the source of truth for releases. This is hard since the release_notes labels can be changed at will. An alternative would be to poll the GitHub API, but I think that is overkill. The notebook does a set compare and will show you new categories. I think we want this to be manual so that the release notes engineer will decide how to categorize.
- Created category groups after speaking with folks on distributed and AO who told me these different release categories can be merged.
- I am the newest person on Core and don't use ghstack, so I made token retrieval a little more generic.
- Added a classifier.py file. This file will train a commit categorizer for you, hopefully with decent accuracy. I was able to achieve 75% accuracy. I drop the highest-frequency class, "skip", since this creates a more useful categorizer.
- I updated the categorize.py script so that the prompt will be what the classifier thinks, gated by a flag.
- Added a readme that will hopefully help future release notes engineers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94560
Approved by: https://github.com/albanD
2023-03-16 00:09:26 +00:00
Aaron Gokaslan
dd5e6e8553 [BE]: Merge startswith calls - rule PIE810 (#96754)
Merges startswith, endswith calls into a single call that feeds in a tuple. Not only are these calls more readable, they are also more efficient, as each string is iterated through only once.
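
An illustrative example of the merged-call form (hypothetical strings, not from the diff):

```python
name = "test_addmm"

# Before: two separate startswith calls
if name.startswith("test_") or name.startswith("bench_"):
    print("test or benchmark")

# After: a single call that takes a tuple of prefixes
if name.startswith(("test_", "bench_")):
    print("test or benchmark")
```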
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96754
Approved by: https://github.com/ezyang
2023-03-14 22:05:20 +00:00
Huy Do
6a2dcfd738 Move all ONNX test dependencies to Docker (#96590)
Per title. This is the first of a two-part process:

[x] Move all ONNX test dependencies to Docker https://github.com/pytorch/pytorch/pull/96590
[ ] Move the test model used by [TestFxToOnnxWithOnnxRuntime.test_gpt2_tiny](https://hud.pytorch.org/failure/FAILED%20test%2Fonnx%2Ftest_fx_to_onnx_with_onnxruntime.py%3A%3ATestFxToOnnxWithOnnxRuntime%3A%3Atest_large_scale_exporter_with_tiny_gpt2%20-%20requests.exceptions.ReadTimeout%3A%20HTTPSConnectionPool(host%3D'huggingface.co'%2C%20port%3D443)%3A%20Read%20timed%20out.%20(read%20timeout%3D10.0))
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96590
Approved by: https://github.com/ZainRizvi
2023-03-14 06:19:00 +00:00
Edward Z. Yang
a8d1eb1961 Convenience script for getting correct Triton nightly binary (#96669)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96669
Approved by: https://github.com/ngimel, https://github.com/malfet
2023-03-13 18:58:38 +00:00
Xuehai Pan
b005ec62b9 [BE] Remove dependency on six and future (#94709)
Remove the Python 2 and 3 compatibility libraries [six](https://pypi.org/project/six) and [future](https://pypi.org/project/future) and `torch._six`. We only support Python 3.8+ now. It's time to retire them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94709
Approved by: https://github.com/malfet, https://github.com/Skylion007
2023-02-14 09:14:14 +00:00
Aaron Gokaslan
67d9790985 [BE] Apply almost all remaining flake8-comprehension checks (#94676)
Applies the remaining flake8-comprehension fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into just the set call.
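
A short example of the pattern being cleaned up (illustrative only):

```python
names = ["Ada", "Bob", "ada"]

# Before: an unnecessary generator passed to set()
unique = set(n.lower() for n in names)

# After: a set comprehension, which is more direct
unique = {n.lower() for n in names}
```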

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
2023-02-12 01:01:25 +00:00
Aaron Gokaslan
3d82d8d0ed [BE] Enable more flake8-comprehensions checks (#94601)
I applied some flake8 fixes and enabled checking for them in the linter. I also enabled some checks for my previous comprehensions PR.

This is a follow up to #94323 where I enable the flake8 checkers for the fixes I made and fix a few more of them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94601
Approved by: https://github.com/ezyang
2023-02-10 23:40:29 +00:00
Mikayla Gawarecki
df13247e67 small bugfixes to release notes script (#94536)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94536
Approved by: https://github.com/drisspg
2023-02-10 01:23:07 +00:00
Xuehai Pan
a229b4526f [BE] Prefer dash over underscore in command-line options (#94505)
Prefer dashes over underscores in command-line options. Add `--command-arg-name` to the argument parser. The old arguments with underscores, `--command_arg_name`, are kept for backward compatibility.

Both dashes and underscores are used in the PyTorch codebase. Some argument parsers only have dashes or only have underscores in arguments. For example, the `torchrun` utility for distributed training only accepts underscore arguments (e.g., `--master_port`). The dashes are more common in other command-line tools. And it looks to be the default choice in the Python standard library:

`argparse.BooleanOptionalAction`: 4a9dff0e5a/Lib/argparse.py (L893-L895)

```python
class BooleanOptionalAction(Action):
    def __init__(...):
            if option_string.startswith('--'):
                option_string = '--no-' + option_string[2:]
                _option_strings.append(option_string)
```

It adds `--no-argname`, not `--no_argname`. Also, typing `_` requires pressing the shift or caps-lock key, unlike `-`.
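
A small sketch of the backward-compatible pattern (hypothetical option name, not taken from this PR's diff):

```python
import argparse

parser = argparse.ArgumentParser()
# Register the preferred dashed form plus the legacy underscore form;
# both spellings store into the same destination.
parser.add_argument("--master-port", "--master_port", dest="master_port",
                    type=int, default=29500)

args = parser.parse_args(["--master_port", "1234"])
print(args.master_port)  # 1234
```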

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94505
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-09 20:16:49 +00:00
Jane Xu
0ecb071fc4 [BE][CI] change references from .jenkins to .ci (#92624)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92624
Approved by: https://github.com/ZainRizvi, https://github.com/huydhn
2023-01-30 22:50:07 +00:00
Edward Z. Yang
93e71cc2f5 Add helpers for running tests and then putting them in a CSV (#92642)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92642
Approved by: https://github.com/albanD
2023-01-22 02:00:39 +00:00
salilsdesai
ec94cbc66a [Vulkan] Remove GLSL Code Gen (#91912)
@bypass-github-export-checks

GLSL Code Gen is not used, so this diff removes
- GLSL parts of ShaderSource
- Anything enclosed by USE_VULKAN_SHADERC_RUNTIME, as well as the flag itself
- gen_vulkan_glsl script

Plus some additional refactoring

Differential Revision: [D41358861](https://our.internmc.facebook.com/intern/diff/D41358861/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D41358861/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91912
Approved by: https://github.com/mcr229
2023-01-10 20:29:47 +00:00
Remi Domingues
fdbbd20f32 Cache conda and pip for IOS CI (#91359)
Fixes T137630520

Caching for conda and pip dependencies for iOS CI workflow.

- Conda and pip dependencies have been moved from [_ios-build-test.yml](https://github.com/pytorch/pytorch/blob/master/.github/workflows/_ios-build-test.yml) to dedicated requirements files
- Miniconda shell installation has been replaced by `setup-miniconda@main` which supports caching
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91359
Approved by: https://github.com/malfet, https://github.com/huydhn
2022-12-30 17:52:20 +00:00
Justin Chu
634555d981 [ONNX] Auto test based on OpInfo (#86182)
This change introduces a mechanism to test onnx export based on sample inputs registered in OpInfo, similar to how MPS and other components of pytorch are tested. It provides test coverage on ops and dtypes previously unattainable with manually created test models. This is the best way for us to discover gaps in the exporter support, especially for ops with partial existing support.

This test is adapted from https://github.com/pytorch/pytorch/blob/master/test/test_mps.py

This PR also

- Update sqrt to support integer inputs to match pytorch behavior
- Add pytest-subtests for unittest subtests support in the new test file

I only enabled a few ops: `t`, `ceil` and `sqrt`, because otherwise too many things will fail due to (1) unsupported dtypes in the exporter, (2) unimplemented dtype support in onnxruntime, and (3) unexpected input to verification.verify.

Subsequent PRs should improve `verification.verify` first for it to accept any legal input to a pytorch model, then incrementally fix the symbolic functions to enable more test cases.

Fixes #85363
Design #88118
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86182
Approved by: https://github.com/BowenBao
2022-12-16 14:43:41 +00:00
Nikita Shulga
36ac095ff8 Migrate PyTorch to C++17 (#85969)
With CUDA-10.2 gone we can finally do it!

This PR mostly contains build system related changes, invasive functional ones are to be followed.
Among many expected tweaks to the build system, here are few unexpected ones:
 - Force onnx_proto project to be updated to C++17 to avoid `duplicate symbols` error when compiled by gcc-7.5.0, as storage rule for `constexpr` changed in C++17, but gcc does not seem to follow it
 - Do not use `std::apply` on CUDA but rely on the built-in variant, as it results in test failures when CUDA runtime picks host rather than device function when `std::apply` is invoked from CUDA code.
 - `std::decay_t` -> `::std::decay_t` and `std::move`->`::std::move` as VC++ for some reason claims that `std` symbol is ambiguous
 - Disable use of `std::aligned_alloc` on Android, as its `libc++` does not implement it.

Some prerequisites:
 - https://github.com/pytorch/pytorch/pull/89297
 - https://github.com/pytorch/pytorch/pull/89605
 - https://github.com/pytorch/pytorch/pull/90228
 - https://github.com/pytorch/pytorch/pull/90389
 - https://github.com/pytorch/pytorch/pull/90379
 - https://github.com/pytorch/pytorch/pull/89570
 - https://github.com/facebookincubator/gloo/pull/336
 - https://github.com/facebookincubator/gloo/pull/343
 - 919676fb32

Fixes https://github.com/pytorch/pytorch/issues/56055

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85969
Approved by: https://github.com/ezyang, https://github.com/kulinseth
2022-12-08 02:27:48 +00:00
Zain Rizvi
837ca8f344 Remove --retry-all-errors from environment with old curl (#89298)
The version of curl on the `ubuntu-latest` box doesn't support the `--retry-all-errors` param and is breaking periodic builds

Example: https://github.com/pytorch/pytorch/actions/runs/3495466804/jobs/5852265880
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89298
Approved by: https://github.com/huydhn
2022-11-18 19:36:09 +00:00
Zain Rizvi
ab75982d3a Always retry curl downloads (#89157)
Modify our curl commands so that they always retry downloads.

By default, curl only retries what it considers to be "transient" errors, based on the server's response. However, curl's estimate of what's transient is very conservative.  By adding the --retry-all-errors parameter we'll always retry curl commands.

In particular, I'm hoping this mitigates errors where curl fails with the below error ([logs](https://github.com/pytorch/pytorch/actions/runs/3468758110/jobs/5794939941))
`curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to ossci-linux.s3.amazonaws.com:443`

Some of the modified downloads didn't even have retries, so I added them in

More details: https://everything.curl.dev/usingcurl/downloads/retry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89157
Approved by: https://github.com/kit1980, https://github.com/malfet
2022-11-18 07:03:24 +00:00
David Berard
c413a32135 Release note script: match topics with spaces or underscores (#87011)
e.g. match "new features" in the category as "new_features"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87011
Approved by: https://github.com/albanD, https://github.com/soulitzer
2022-10-19 02:28:45 +00:00
John Detloff
06868004b7 Remove codesigning from ios circleci workflows (#85630)
This PR is a follow-up to https://github.com/pytorch/pytorch/pull/85597, which removes codesigning from our GitHub Actions workflows. This is the analogous change for our CircleCI workflows. Since we only run TestApp on the simulator we don't need this codesigning logic. (And more pressingly, this dev cert is expiring at the end of the month and we don't have a replacement.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85630
Approved by: https://github.com/atalman, https://github.com/malfet
2022-09-29 19:49:11 +00:00
Dhruv Matani
747f27a9ad [Mobile] Update build_mobile.sh to allow lite interpreter and tracing based builds (#84647)
Summary: Currently, build_mobile.sh doesn't allow lite interpreter builds or tracing based selective builds. build_mobile.sh is used for host builds of PyTorch for Mobile deployment.

Additionally, certain flags such as `USE_BLAS` were not being respected as they should be. This change addresses that as well.

Test Plan: Build using:

```
cat /tmp/selected_ops.yaml
- aten::add
- aten::sub
```

```
BUILD_PYTORCH_MOBILE_WITH_HOST_TOOLCHAIN=1 USE_LIGHTWEIGHT_DISPATCH=0 BUILD_LITE_INTERPRETER=1 SELECTED_OP_LIST=/tmp/selected_ops.yaml ./scripts/build_mobile.sh
```

```
cat /tmp/main.cpp

int main() {
  auto m = torch::jit::_load_for_mobile("/tmp/path_to_model.ptl");
  auto res = m.forward({});
  return 0;
}
```

Test using:

```
g++ /tmp/main.cpp -L build_mobile/lib/ -I build_mobile/install/include/ -lpthread -lc10 -ltorch_cpu -ltorch -lXNNPACK -lpytorch_qnnpack -lcpuinfo -lclog -lpthreadpool -lgloo -lkineto -lfmt -ldl -lc10
```

Reviewers:

Subscribers:

Tasks:

Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84647
Approved by: https://github.com/JacobSzwejbka, https://github.com/cccclai
2022-09-09 15:02:29 +00:00
John Detloff
e0229d6517 Remove caffe2 mobile (#84338)
We're no longer building Caffe2 mobile as part of our CI, and it adds a lot of clutter to our make files. Any lingering internal dependencies will use the buck build and so won't be affected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84338
Approved by: https://github.com/dreiss
2022-09-08 01:49:55 +00:00
Justin Chu
bf25a140f9 [ONNX] Add runtime type checking to export (#83673)
This PR adds an internal wrapper on the [beartype](https://github.com/beartype/beartype) library to perform runtime type checking in `torch.onnx`. It uses beartype when it is found in the environment and is reduced to a no-op when beartype is not found.

Setting the env var `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK=ERRORS` will turn on the feature. Setting `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK=DISABLED` will disable all checks. When not set and `beartype` is installed, a warning message is emitted.
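
A minimal sketch of the "use beartype when present, no-op otherwise" pattern described above (not the actual torch.onnx internal wrapper):

```python
def runtime_typed(func):
    """Apply beartype's runtime checks if the library is installed; otherwise return func unchanged."""
    try:
        from beartype import beartype
    except ImportError:
        return func
    return beartype(func)

@runtime_typed
def scale(x: float, factor: float) -> float:
    return x * factor

print(scale(2.0, 3.0))  # 6.0; scale(2.0, "3") raises a beartype error when beartype is installed
```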

Now when users call an api with invalid arguments e.g.

```python
torch.onnx.export(conv, y, path, export_params=True, training=False)

# training should take a TrainingMode, not a bool
```

they get

```
Traceback (most recent call last):
  File "bisect_m1_error.py", line 63, in <module>
    main()
  File "bisect_m1_error.py", line 59, in main
    reveal_error()
  File "bisect_m1_error.py", line 32, in reveal_error
    torch.onnx.export(conv, y, cpu_model_path, export_params=True, training=False)
  File "<@beartype(torch.onnx.utils.export) at 0x1281f5a60>", line 136, in export
  File "pytorch/venv/lib/python3.9/site-packages/beartype/_decor/_error/errormain.py", line 301, in raise_pep_call_exception
    raise exception_cls(  # type: ignore[misc]
beartype.roar.BeartypeCallHintParamViolation: @beartyped export() parameter training=False violates type hint <class 'torch._C._onnx.TrainingMode'>, as False not instance of <protocol "torch._C._onnx.TrainingMode">.
```

when `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK` is not set and `beartype` is installed, a warning message is emitted.

```
>>> torch.onnx.export("foo", "bar", "f")
<stdin>:1: CallHintViolationWarning: Traceback (most recent call last):
  File "/home/justinchu/dev/pytorch/torch/onnx/_internal/_beartype.py", line 54, in _coerce_beartype_exceptions_to_warnings
    return beartyped(*args, **kwargs)
  File "<@beartype(torch.onnx.utils.export) at 0x7f1d4ab35280>", line 39, in export
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/beartype/_decor/_error/errormain.py", line 301, in raise_pep_call_exception
    raise exception_cls(  # type: ignore[misc]
beartype.roar.BeartypeCallHintParamViolation: @beartyped export() parameter model='foo' violates type hint typing.Union[torch.nn.modules.module.Module, torch.jit._script.ScriptModule, torch.jit.ScriptFunction], as 'foo' not <protocol "torch.jit.ScriptFunction">, <protocol "torch.nn.modules.module.Module">, or <protocol "torch.jit._script.ScriptModule">.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/justinchu/dev/pytorch/torch/onnx/_internal/_beartype.py", line 63, in _coerce_beartype_exceptions_to_warnings
    return func(*args, **kwargs)
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 482, in export
    _export(
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 1422, in _export
    with exporter_context(model, training, verbose):
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 177, in exporter_context
    with select_model_mode_for_export(
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 95, in select_model_mode_for_export
    originally_training = model.training
AttributeError: 'str' object has no attribute 'training'
```

We see the error is caught right when the type mismatch happens, improving from what otherwise would become `AttributeError: 'str' object has no attribute 'training'`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83673
Approved by: https://github.com/BowenBao
2022-08-25 21:24:37 +00:00
BowenBao
8324cdda35 [ONNX] Add quantized model tests to CI (#80398)
In parallel to #80039, start tracking torchvision quantized model export in CI.

This PR depends on ~~#80393~~ #79256, bumping the torchvision version in CI, due to PyTorch not being backward compatible with vision #74028.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80398
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/garymm
2022-07-28 21:25:29 +00:00
Sergii Dymchenko
34bb3714f0 Remove obsolete onnx_c2 scripts (#82285)
The scripts look completely obsolete.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82285
Approved by: https://github.com/ezyang
2022-07-28 00:10:25 +00:00
BowenBao
4c0000a98e [ONNX] Remove duplicated test run (#81146)
Looks like it was caused by incorrect (auto) merge conflict resolution. The `test_models_onnxruntime.py` script is run twice.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81146
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/garymm
2022-07-22 21:43:44 +00:00
Jing Xu
3c7044728b Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)
More detailed description of benefits can be found at #41001. This is Intel's counterpart of NVidia’s NVTX (https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.emit_nvtx).

ITT is a functionality for labeling trace data during application execution across different Intel tools.
For integrating Intel(R) VTune Profiler into Kineto, ITT needs to be integrated into PyTorch first. It works with both the standalone VTune Profiler (https://www.intel.com/content/www/us/en/developer/tools/oneapi/vtune-profiler.html) and Kineto-integrated VTune functionality in the future.
It works for both Intel CPU and Intel XPU devices.

Pitch
Add VTune Profiler's ITT API function calls to annotate PyTorch ops, as well as developer customized code scopes on CPU, like NVTX for NVidia GPU.

This PR rebases the code changes at https://github.com/pytorch/pytorch/pull/61335 to the latest master branch.

Usage example:
```
with torch.autograd.profiler.emit_itt():
    for i in range(10):
        torch.itt.range_push('step_{}'.format(i))
        model(input)
        torch.itt.range_pop()
```

cc @ilia-cher @robieta @chaekit @gdankel @bitfort @ngimel @orionr @nbcsm @guotuofeng @guyang3532 @gaoteng-git
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63289
Approved by: https://github.com/malfet
2022-07-13 13:50:15 +00:00
John Clow
3c4c7d3e6b [Release Notes] fix bug with categorize call (#81284)
This was pointed out by @kit1980 in #78190
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81284
Approved by: https://github.com/kit1980
2022-07-12 19:02:15 +00:00
John Clow
7fd0cf5581 [Release Notes] Add way to export result from Google Sheets to Markdown (#79911)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79911
Approved by: https://github.com/soulitzer
2022-07-07 22:42:46 +00:00
John Clow
6d4410b8c6 [Release Notes] Simple script to merge categories (#79910)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79910
Approved by: https://github.com/soulitzer
2022-07-07 22:42:46 +00:00
John Clow
755861063d Adding additional topics to align with github topics list (#79909)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79909
Approved by: https://github.com/soulitzer
2022-07-07 22:42:46 +00:00
John Clow
770fc74e33 [Release Notes] Add Github PR link to csv export (#79908)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79908
Approved by: https://github.com/soulitzer
2022-07-07 22:42:46 +00:00
John Clow
ad6328ea51 [Release Notes] Adding CSV Category Export (#78212)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78212
Approved by: https://github.com/soulitzer
2022-07-07 22:42:46 +00:00
John Clow
62bf807113 Always use the CommitCache, and make it a singleton (#78203)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78203
Approved by: https://github.com/soulitzer
2022-07-07 22:42:46 +00:00
John Clow
da549f58d5 Adding Author and Accepters information into pytorch release notes gen (#78190)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78190
Approved by: https://github.com/soulitzer, https://github.com/malfet
2022-07-07 22:42:46 +00:00
John Clow
8549fafd36 Refactoring release notes script to use dataclasses and have a shorter test. (#78189)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78189
Approved by: https://github.com/soulitzer
2022-07-07 22:42:46 +00:00
PyTorch MergeBot
1454515253 Revert "Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)"
This reverts commit f988aa2b3f.

Reverted https://github.com/pytorch/pytorch/pull/63289 on behalf of https://github.com/malfet due to broke trunk, see f988aa2b3f
2022-06-30 12:49:41 +00:00
Jing Xu
f988aa2b3f Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)
A more detailed description of the benefits can be found at #41001. This is Intel's counterpart of NVIDIA's NVTX (https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.emit_nvtx).

ITT is a functionality for labeling trace data during application execution across different Intel tools.
For integrating Intel(R) VTune Profiler into Kineto, ITT needs to be integrated into PyTorch first. It works with both the standalone VTune Profiler (https://www.intel.com/content/www/us/en/developer/tools/oneapi/vtune-profiler.html) and, in the future, Kineto-integrated VTune functionality.
It works for both Intel CPU and Intel XPU devices.

Pitch
Add VTune Profiler's ITT API function calls to annotate PyTorch ops, as well as developer-customized code scopes, on CPU, similar to NVTX for NVIDIA GPUs.

This PR rebases the code changes at https://github.com/pytorch/pytorch/pull/61335 to the latest master branch.

Usage example:
```
with torch.autograd.profiler.emit_itt():
    for i in range(10):
        torch.itt.range_push('step_{}'.format(i))
        model(input)
        torch.itt.range_pop()
```

cc @ilia-cher @robieta @chaekit @gdankel @bitfort @ngimel @orionr @nbcsm @guotuofeng @guyang3532 @gaoteng-git
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63289
Approved by: https://github.com/malfet
2022-06-30 05:14:03 +00:00
Linbin Yu
d32ab80c32 Update buck_setup.sh (#80467)
Add a parameter for proxy setup when running this script on a devserver

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80467
Approved by: https://github.com/malfet
2022-06-29 01:39:37 +00:00
Nikita Shulga
2d7d5a75aa Cleanup buck_setup.sh a bit (#80198)
Use `curl -L | tar xf` to extract the downloaded file right into the folder one wants to use

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80198
Approved by: https://github.com/linbinyu
2022-06-24 15:31:45 +00:00
Linbin Yu
3507bee7d1 Update buck_setup.sh (#80116)
Remove destination folders if they already exist; otherwise the copy step will fail. This happens if people try to run this script several times.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80116
Approved by: https://github.com/kit1980
2022-06-23 05:50:41 +00:00
BowenBao
73f6601cfc [ONNX] Refactor heavy memory usage tests
* Move memory-heavy tests from `test_pytorch_onnx_onnxruntime.py` to
  `test_models_onnxruntime.py`. The former is run in parallel in CI,
  while the latter is not. One change is that the moved tests are now
  only covered in default opset export.
* Refactor and create a base class for tests that export a model to ONNX
  and verify it with ONNX Runtime. The new base class is parameterized
  with `opset_version` and `is_script`. Further work can be done to
  refactor existing test classes in `test_pytorch_onnx_onnxruntime.py`.
  See #75630
* Reduce unnecessarily large tensor size in
  `test_pytorch_onnx_onnxruntime.py` to further reduce memory usage
  and test time.

After this PR, the running time for `test_pytorch_onnx_onnxruntime.py`
is reduced from `1338.82s (0:22:18)` to `225.07s (0:03:45)`,
benchmarked on 10900x with `-n 10`.

Fixes #79179

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79640

Approved by: https://github.com/justinchuby, https://github.com/garymm
2022-06-18 02:28:00 +00:00
titaiwang
44764f131b [ONNX] Move tests in test_onnx_export.py to test_pytorch_onnx_no_runtime.py (#78310)
Fixes #78308
This should be merged after
- #78116

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78310
Approved by: https://github.com/justinchuby, https://github.com/garymm
2022-06-14 06:08:29 +00:00
BowenBao
4750f745bf [ONNX] Disable parallel run for custom op related tests in CI (#78944)
Should fix #78844
Custom-op-related tests use an inline cpp extension to build a custom
operator from a C++ source snippet. Only two test cases became flaky after
the parallel run, and both use the inline cpp extension. Revert to running
these tests in a single process to try to resolve the flakiness.
Reverts the test skip previously added in #78936.
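For context, a minimal sketch of the inline-extension mechanism these tests rely on, using `torch.utils.cpp_extension.load_inline`; the function name and C++ snippet here are hypothetical and not taken from the test suite:
```
import torch
from torch.utils.cpp_extension import load_inline

# Hypothetical C++ snippet; the real tests build full custom operators this way.
cpp_source = """
torch::Tensor add_one(torch::Tensor x) {
    return x + 1;
}
"""

# load_inline compiles the snippet on the fly, which is likely why these
# tests became flaky when run by multiple test processes in parallel.
ext = load_inline(name="inline_add_one", cpp_sources=cpp_source, functions=["add_one"])
print(ext.add_one(torch.zeros(3)))  # tensor([1., 1., 1.])
```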
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78944
Approved by: https://github.com/janeyx99, https://github.com/garymm
2022-06-07 01:03:22 +00:00
Eli Uriegas
4220799ea7 scripts: Fix dry run for cut-release-branch.sh
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77978

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Approved by: https://github.com/suo, https://github.com/atalman
2022-06-02 19:23:51 +00:00
BowenBao
cfc968956c [ONNX] Update CI test script to run parallel by default (#78200)
Also update the default process count to auto, matching the CI machine's CPU core count.

Fixes #77678

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78200
Approved by: https://github.com/garymm
2022-06-02 00:25:17 +00:00
Linbin Yu
1f8049566f Re-land BUCK build for pytorch mobile (#77612)
see https://github.com/pytorch/pytorch/pull/76480
fixed most lint errors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77612
Approved by: https://github.com/kit1980
2022-05-17 00:30:13 +00:00
PyTorch MergeBot
530481ed69 Revert "[mobile] add buck build for mobile targets (#76480)"
This reverts commit 168dc70faf.

Reverted https://github.com/pytorch/pytorch/pull/76480 on behalf of https://github.com/atalman
2022-05-16 16:14:17 +00:00
Linbin Yu
168dc70faf [mobile] add buck build for mobile targets (#76480)
Create buck targets to replicate internal BUCK build, including
- XNNPACK
- QNNPACK
- C10
- aten_cpu
- torch_mobile_core
- torch_mobile_all_ops
- ptmobile_benchmark

MobileNet v2 can be run using ptmobile_benchmark (with all ops).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76480
Approved by: https://github.com/seemethere, https://github.com/dreiss
2022-05-15 18:42:41 +00:00
Brian Hirsh
43f6d79e51 update release notes script to automatically grab labels from the PR
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75020

Approved by: https://github.com/albanD, https://github.com/anjali411
2022-05-12 18:39:24 +00:00
Brian Hirsh
5ed7312081 release notes script changes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72154

Approved by: https://github.com/albanD, https://github.com/anjali411
2022-05-12 18:39:24 +00:00
Masaki Kozuki
0ae3aa648e [torch.onnx] support torch.nn.functional.grid_sample
summary

- Adds `F.grid_sample` support
- Adds a test case

Fixes #27212
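
A minimal sketch of what this enables, assuming an opset that includes the ONNX GridSample operator (opset 16); the module and output path are illustrative:
```
import torch
import torch.nn.functional as F

class GridSampleModel(torch.nn.Module):
    def forward(self, x, grid):
        return F.grid_sample(x, grid, align_corners=False)

x = torch.randn(1, 3, 10, 10)          # NCHW input feature map
grid = torch.rand(1, 6, 6, 2) * 2 - 1  # sampling grid with values in [-1, 1]
torch.onnx.export(GridSampleModel(), (x, grid), "grid_sample.onnx", opset_version=16)
```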
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76159
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-05-02 22:07:58 +00:00
Catherine Lee
4bf5380ec7 remove references to ort_test
Fixes #ISSUE_NUMBER

ort_test, -test1, and -test2 appear to predate the migration to GHA;
this removes dead/no-longer-relevant code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76091
Approved by: https://github.com/janeyx99
2022-04-26 18:29:59 +00:00
Thiago Crepaldi
90d31cb311 Emit ATen ops when symbolics raise + minor fixes
Currently `torch.onnx.export(.., operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK)` only issues ATen ops through explicit requests (e.g. `g.at()` calls) inside each op symbolic function. This is done based on specific conditions such as `operator_export_type == OperatorExportTypes.ONNX_ATEN_FALLBACK` or `is_caffe2_aten_fallback()`

This PR extends the ATen fallback mechanism to scenarios where the symbolic function raises `RuntimeError` during export. The idea is that a partial implementation of an existing ONNX op can fall back to ATen as a last resort. That is valuable because each operator can have many input combinations, and not all are always implemented.

A minor fix was done to make sure the `overload_name` attribute is added to explicit ATen op fallback requests when a symbolic is not registered to a particular op.

ps: The behavior for builds with BUILD_CAFFE2=1 is not changed to ensure BC.
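
As a rough illustration (not part of this PR), a sketch of an export call that opts into the fallback path; the model is just a stand-in:
```
import torch
from torch.onnx import OperatorExportTypes

model = torch.nn.Linear(4, 2)  # stand-in model
dummy = torch.randn(1, 4)

# With ONNX_ATEN_FALLBACK, an op whose symbolic raises (or is missing) can be
# emitted as an ATen op instead of failing the whole export.
torch.onnx.export(
    model,
    dummy,
    "model_aten_fallback.onnx",
    operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```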
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74759
Approved by: https://github.com/garymm, https://github.com/msaroufim
2022-04-23 21:24:25 +00:00
David Berard
9d05ce602e [JIT] Move log_extract.py helper functions to torch.utils
This will allow us to reuse the log_extract.py tools in torchbench

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75436

Approved by: https://github.com/eellison
2022-04-07 22:17:58 +00:00
Natalia Gimelshein
7e9bb1c273 use Timer for cuda benchmarks
`torch.cuda.synchronize()` is a heavy hammer and distorts benchmarking results a lot. Timer provides results that are closer to the kernel times observed in the profiler.
If you want, instead of `blocked_autorange` you can use `timeit`, which repeats the stmt a fixed number of times.
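
A minimal sketch of the Timer-based approach, assuming a CUDA device is available; the matmul workload is only illustrative:
```
import torch
from torch.utils.benchmark import Timer

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")

t = Timer(stmt="a @ b", globals={"a": a, "b": b})

# blocked_autorange chooses the number of repeats itself and handles CUDA
# synchronization, so results track profiler kernel times more closely.
print(t.blocked_autorange(min_run_time=1.0))

# timeit runs the stmt a fixed number of times instead.
print(t.timeit(100))
```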
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75393
Approved by: https://github.com/davidberard98
2022-04-07 01:03:12 +00:00
Gary Miguel
ca374773b4 [ONNX] update default opset_version to 13 (#73898)
Summary:
And add a new tool to update it in the future, which follows the policy
of using "latest as of 18 months ago". This policy is meant to balance:
* recent enough to increase the odds of being able to successfully
  export
* old enough to increase the odds of exported model being runnable by
  different ONNX implementations
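
A minimal sketch of relying on the new default versus pinning an opset explicitly, assuming the standard export API; the model and file names are illustrative:
```
import torch

model = torch.nn.Linear(4, 2)  # stand-in model
dummy = torch.randn(1, 4)

# Uses the default opset (now 13) when opset_version is omitted.
torch.onnx.export(model, dummy, "model_default_opset.onnx")

# Pin an older opset explicitly, e.g. 9 for the Caffe2-constrained model tests.
torch.onnx.export(model, dummy, "model_opset9.onnx", opset_version=9)
```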

Related changes:

* test_models.py: explicitly fix opset_version to 9 rather than relying on default. Caffe2 doesn't support newer versions.
* symbolic_helper.py:
  * Remove a misleading comment
  * Remove unnecessary check in `_set_opset_version`
  * Use a range to define `_onnx_stable_opsets`
* test_pytorch_common.py:
  * Rename a variable from min -> max. I think it was a copy-paste error.
  * Make skip test messages more informative.
  * Remove unused `skipIfONNXShapeInference`. More on that below.
* test_pytorch_onnx_onnxruntime.py:
  * Make all the `TestCase` classes explicitly specify opset version.
  * Make `test_unsupported_pad` respect `opset_version` by using `run_test`
  * Unrelated simplification: make it obvious that all tests run with `onnx_shape_inference=True`. AFAICT this was already the case.
  * There was one test that was entirely disabled (test_tolist) because it was asking to be skipped whenever `onnx_shape_inference=True`, but it was always True. I changed the model being tested so as to preserve the intended test coverage but still have the test actually pass.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73898

Reviewed By: msaroufim

Differential Revision: D35264615

Pulled By: malfet

fbshipit-source-id: cda8fbdffe4cc8210d8d96e659e3a9adf1b5f1d2
(cherry picked from commit b5e639e88828d34442282d0b50c977e610a2ba3a)
2022-04-07 00:02:31 +00:00
Elias Ellison
24c255ee7c Small repro improvements (#75108)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75108

- Add option to only run some graphs
- Add NNC Static vs Dynamic
- Update make_tensor because it wasn't using strides

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D35374000

Pulled By: eellison

fbshipit-source-id: df16b8647f2309a8837207cacba55d30f46845ce
(cherry picked from commit 19feb54db049186972b47548cf3d83e76512adfd)
2022-04-06 18:00:53 +00:00
Elias Ellison
c90be037b4 Extend Graph Export to NNC, extend script to support CPU (#74076)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74076

Extends the repro script to cpu and NNC. As in file:
Usage:
```
1. Run your script and pipe into a log file
  PYTORCH_JIT_LOG_LEVEL=">>tensorexpr_fuser" python3 my_test.py &> log.txt
2. Run log_extract:
  log_extract.py log.txt --baseline --nnc
```

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D34946883

Pulled By: eellison

fbshipit-source-id: 644012dbbca0b490820ef83e761c06b0dd009e52
(cherry picked from commit 5256c8f3ff8545033d1335cc96d34194abda1370)
2022-03-29 18:38:52 +00:00