Summary:
Adds a58c6aea5a0c9f8759a4154e46f544c8b03b8db1 and 7106d216c29ca16a3504aa2bedad948ebcf4abc2 to the list of excluded
commits, since these were landed through Phabricator and cherry-picked to master
directly.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76417
Reviewed By: janeyx99
Differential Revision: D35951416
Pulled By: seemethere
fbshipit-source-id: 30a226c381e0cebfccc82f7ccfa7ce79075220c9
(cherry picked from commit b75fbe3b9e8024734e749a42464620c1879265ad)
Currently `torch.onnx.export(.., operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK)` only issues ATen ops through explicit requests (e.g. `g.at()` calls) inside each op's symbolic function. This is done based on specific conditions such as `operator_export_type == OperatorExportTypes.ONNX_ATEN_FALLBACK` or `is_caffe2_aten_fallback()`.
This PR extends the ATen fallback mechanism to scenarios where the symbolic function raises `RuntimeError` during export. The idea is that partial implementations of existing ONNX ops can fall back to ATen as a last resort. That is valuable because each operator can have many input combinations and not all of them are always implemented.
A minor fix was also done to make sure the `overload_name` attribute is added to explicit ATen op fallback requests when a symbolic is not registered for a particular op.
PS: The behavior for builds with `BUILD_CAFFE2=1` is not changed, to ensure backward compatibility.
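As a rough illustration of the mechanism (a hedged sketch, not the PR's implementation; only `g.at()` comes from the description above, while the wrapper name `export_with_aten_fallback` and the bare `op_name` parameter are hypothetical):

```python
# Hedged sketch of the RuntimeError-based ATen fallback described above.
def export_with_aten_fallback(g, symbolic_fn, op_name, *inputs):
    try:
        # First try the regular ONNX symbolic for this op.
        return symbolic_fn(g, *inputs)
    except RuntimeError:
        # Partial implementations raise RuntimeError for unsupported
        # input combinations; emit an ATen op as a last resort, carrying
        # the `overload_name` attribute mentioned above (empty here).
        return g.at(op_name, *inputs, overload_name="")
```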
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74759
Approved by: https://github.com/garymm, https://github.com/msaroufim
Summary:
Differences between b5222584e6 and 69e048b090 are reconciled in b3aa2de5be, so the commit must be manually skipped.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76231
Reviewed By: bigfootjon
Differential Revision: D35845975
Pulled By: malfet
fbshipit-source-id: 4e4a2f03a26202bffe2045ac80704b356144164e
(cherry picked from commit dd32c3e33059b28c4727ffbeb40661dd14b3c7dc)
Undo #75783 because setting fetch depth 1 doesn't really help reduce time, since most of the jobs need either master or viable/strict.
Also, more branches need viable/strict than I thought, so sharding isn't picking up test times (although default sharding seems to do pretty well). Regarding the jobs I didn't realize needed viable/strict: it looks like the linux-bionic jobs don't fail when `git rev-parse viable/strict` is run while viable/strict doesn't exist, but the linux-xenial ones do.
Pretty sure jobs are broken only because they use the master version of `checkout-pytorch/action.yml`.
Tested via #76077.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76090
Approved by: https://github.com/seemethere
crossref is a new strategy for performing tests when you want to
run a normal PyTorch API call, separately run some variation of
the API call (e.g., the same call but with all arguments as meta tensors),
and then cross-reference the results to check that they are consistent.
Any logic you add to CrossRefMode will get run on *every* PyTorch API
call made in the course of PyTorch's test suite. This can
be a good choice for correctness testing when OpInfo testing is not
exhaustive enough.
For now, the crossref test doesn't do anything except verify that
we can validly push a mode onto the torch function mode stack for all
functions.
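For a rough idea of the shape of such a mode, here is a minimal sketch using the public `TorchFunctionMode` API; `CrossRefSketch` is a hypothetical stand-in, not the actual CrossRefMode from this PR:

```python
import torch
from torch.overrides import TorchFunctionMode

class CrossRefSketch(TorchFunctionMode):
    """Minimal sketch: intercepts every torch API call made under it."""
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        out = func(*args, **kwargs)
        # A real crossref mode would re-run `func` under some variation
        # (e.g. with meta tensor arguments) and compare against `out`.
        return out

with CrossRefSketch():
    torch.add(torch.ones(2), torch.ones(2))  # the mode sees this call
```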
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75988
Approved by: https://github.com/seemethere
This PR would allow Quansight sparse experts (in addition to metamates) to approve sparse-related changes. As the sparse module is relatively new and should not have many internal dependencies, we can start encouraging more GitHub 1st (GH1) landing for these.
This is DIFFERENT from the superuser rule because it allows non-metamates to be approvers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75872
Approved by: https://github.com/IvanYashchuk, https://github.com/osalpekar
This PR would allow Quansight FFT experts (in addition to metamates) to approve FFT-related changes. As the fft module is not really used internally, we can start encouraging more GitHub 1st (GH1) landing for these.
This is DIFFERENT from the superuser rule because it allows non-metamates to be approvers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75874
Approved by: https://github.com/osalpekar
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75837
We run this as an internal service to check if a PR can be merged. We don't care about internal checks because these diffs are landing internally.
bypass-github-export-checks
Reviewed By: seemethere, osalpekar
Differential Revision: D35657708
fbshipit-source-id: f52cf28a424839532b5be4cce0f7010a6816e179
(cherry picked from commit f7a8f8c4f979e77b3ce6c659e49fc213860b3351)
This PR would allow Quansight linear algebra experts (in addition to metamates) to approve linear-algebra-related changes. Linear algebra would be a great place to start encouraging more GitHub 1st (GH1) landing, to test our external contributor GH1 experience.
This is DIFFERENT from the superuser rule because it allows non-metamates to be approvers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75881
Approved by: https://github.com/osalpekar
Currently, ONNX exporter symbolics can emit ATen operators when `operator_export_type == ONNX_ATEN_FALLBACK`. However, this behavior is specific to Caffe2 builds, as the intended use of `ONNX_ATEN_FALLBACK` is to emit ATen operators only when there is no ONNX equivalent.
The reason Caffe2 chooses to emit ATen operators even when an ONNX counterpart exists is performance on its particular engine implementation, which might not hold for other implementations; e.g., ONNX Runtime can optimize the generated ONNX graph into something more efficient.
This PR must be merged only after https://github.com/pytorch/pytorch/pull/73954
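For reference, a minimal export call opting into this mode might look like the following sketch (the model and shapes are placeholders):

```python
import torch

model = torch.nn.Linear(4, 2)
torch.onnx.export(
    model,
    torch.randn(1, 4),
    "model.onnx",
    # Emit ATen ops only when no ONNX equivalent exists (non-Caffe2 builds).
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```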
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74680
Approved by: https://github.com/garymm, https://github.com/malfet
We used to have a ton of workflow runs, each with few jobs, but now we are switching to fewer workflow runs with many jobs each.
Thus, edit the query so we can get the maximum number of checks for a PR, which is a prerequisite for when we want to add more required status checks :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75820
Approved by: https://github.com/seemethere, https://github.com/osalpekar
Tested via #75232 because the source of the workflow needs to be changed.
- set fetch-depth: 1
- manually check out additional branches/history (usually either viable/strict, or master and the rest of the commit's history) when needed
- seems to reduce checkout time by about 30s for jobs that don't need additional branches/history, but gives minimal improvement otherwise
- checkouts for most lint jobs now take <15s
Rough estimates for how long different parts of checkout take on Linux (Windows is similar, but scaled up):
- just the commit, no history: <15s, usually around 6-7s
- viable/strict: 25-30s
- submodules: 80-120s
- master + commit history: 40-50s (if viable/strict was checked out before this, this time is much smaller, <10s)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75783
Approved by: https://github.com/seemethere, https://github.com/janeyx99