Commit Graph

17764 Commits

Author SHA1 Message Date
Edward Z. Yang
f7365eca90 Add unbacked symints support; item works now (#90624)
The big idea is to add `create_unbacked_symfloat` and `create_unbacked_symint` to ShapeEnv, allowing you to allocate symbolic floats/ints corresponding to data you don't know about at compile time. Then, instead of immediately erroring out when you try to call local_scalar_dense on a FakeTensor, we instead create a fresh symint/symfloat and return that.

There are a bunch of odds and ends that need to be handled:

* A number of `numel` calls converted to `sym_numel`
* When we finally return from item(), we need to ensure we actually produce a SymInt/SymFloat when appropriate. The previous binding code assumed that you would have to get a normal Python item. I add a pybind11 binding for Scalar (to PyObject only) and refactor the code to use that. There is some trickiness where you are NOT allowed to go through c10::SymInt if there isn't actually any SymInt involved. See comment.
* One of our unit tests tripped an implicit data dependent access which occurs when you pass a Tensor as an argument to a sizes parameter. This is also converted to support symbolic shapes
* We now support tracking bare SymInt/SymFloat returns in proxy tensor mode (this was already in symbolic-shapes branch)
* Whenever we allocate an unbacked symint, we record the stack trace it was allocated at. These get printed when you attempt data dependent access on the symint (e.g., you try to guard on it)
* Subtlety: unbacked symints are not necessarily > 1. I added a test for this.

These unbacked symints are not very useful right now as you will almost always immediately raise an error later when you try to guard on them. The next logical step is adding an assertion refinement system that lets ShapeEnv learn facts about unbacked symints so it can do a better job eliding guards that are unnecessary.
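
A minimal sketch of the new ShapeEnv entry points named above (internal API; the call signatures are assumed from this message and may have changed since):

```python
from torch.fx.experimental.symbolic_shapes import ShapeEnv

shape_env = ShapeEnv()
i = shape_env.create_unbacked_symint()    # symbolic int with no backing value known at compile time
f = shape_env.create_unbacked_symfloat()  # symbolic float with no backing value
# Attempting to guard on `i` (e.g. `bool(i > 0)`) raises a data-dependent
# error that includes the allocation stack trace mentioned above.
```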

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90624
Approved by: https://github.com/Skylion007, https://github.com/voznesenskym
2022-12-12 13:33:07 +00:00
Yanbo Liang
2e0ce24890 [Dynamo] Support access nn.Module keys (#90502)
Fixes https://github.com/pytorch/torchdynamo/issues/1973

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90502
Approved by: https://github.com/jansel
2022-12-12 09:15:42 +00:00
XiaobingSuper
4ca2fc485c inductor(CPU): add Conv+binary+unary fusion filter (#90259)
For Conv+binary+unary fusion, we only support conv+add+relu; this PR adds such a check to fix failing TIMM models.
TODO: enable more Conv+binary+unary fusion to improve TIMM models' performance.
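
For reference, the only binary+unary pattern the filter keeps, per the message, looks like this (illustrative module, not taken from the PR):

```python
import torch
import torch.nn as nn

class ConvAddReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x, other):
        # conv -> add -> relu is the fusable pattern; other binary/unary
        # combinations fall back to unfused ops under this filter.
        return torch.relu(self.conv(x) + other)
```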

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90259
Approved by: https://github.com/EikanWang, https://github.com/jgong5, https://github.com/jansel
2022-12-12 06:04:55 +00:00
Edward Z. Yang
8fd31ac4da Preserve original GraphArgs for shape guard codegen (#90665)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90665
Approved by: https://github.com/voznesenskym
2022-12-12 02:35:23 +00:00
Yuxin Wu
8127724c3b Skip some unittests (#90609)
* Skip a unittest that needs FFT if not built with FFT
* Mark a test with "slow": `python test/test_ops.py -k TestCompositeComplianceCUDA.test_forward_ad_svd_lowrank_cuda_float32` took >5min on my machine.
* Skip a flaky test that's marked "expectedFailure", similar to #90233
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90609
Approved by: https://github.com/soumith
2022-12-11 23:53:05 +00:00
Yuxin Wu
5d8618dfbd Some memory saving in large unittests (#90148)
The tests test_large_cumsum and test_large_cumprod use a lot of memory. This PR:
* Reduces their memory usage by avoiding `self.assertEqual` and a temporary Python variable
* Marks their memory requirement with a decorator.

related to https://github.com/pytorch/pytorch/issues/84944
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90148
Approved by: https://github.com/soumith
2022-12-11 21:04:38 +00:00
Edward Z. Yang
e33f1eeeb7 SymIntify resize_ and deduplicate memory format logic (#90442)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90442
Approved by: https://github.com/bdhirsh
2022-12-11 14:38:38 +00:00
Shen Li
80542add73 [FSDP] Allow MixedPrecision to skip inputs (#90620)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90620
Approved by: https://github.com/rohan-varma, https://github.com/awgu
2022-12-11 06:39:38 +00:00
Rohan Varma
c7d2fb7f86 Adopt state_dict_pre_hook in FSDP (#90436)
Use register_state_dict_pre_hook in FSDP to simplify state_dict implementations and remove hacks. This removes `def state_dict` entirely and paves the way for the composable API as well.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90436
Approved by: https://github.com/fegin
2022-12-11 03:54:26 +00:00
Andrew Gu
746c773d7c [FSDP][Easy] Move to _storage() in test file (#90622)
This is to silence some deprecation warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90622
Approved by: https://github.com/rohan-varma
2022-12-11 03:50:30 +00:00
Andrew Gu
6845598617 [FSDP] Uncomment test for use_orig_params=True (#90610)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90610
Approved by: https://github.com/rohan-varma
2022-12-11 03:50:23 +00:00
Shen Li
a69cdd9cf8 Add global registry to composable API contract (#90579)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90579
Approved by: https://github.com/awgu, https://github.com/yhcharles
2022-12-10 22:41:10 +00:00
Edward Z. Yang
45109ec30a Completely redo how ShapeEnv guards are generated (#90528)
Instead of inferring shape mappings from a bunch of data structures that were plumbed through InstructionTranslator, we instead work out mappings by just iterating over the GraphArgs and mapping symbols to arguments as they show up. If multiple argument sizes/strides/offsets map to the same symbol, it means they are duck sized, so we also generate extra equality tests that they must be equal. Finally, we generate 0/1 specialization guards. The resulting code is much shorter, and I think also easier to understand.
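
A conceptual sketch of that iteration (hypothetical names, heavily simplified; not the actual Dynamo codegen):

```python
def generate_shape_guards(graph_args):
    """graph_args: list of (arg_name, [symbol, ...]) pairs for each tensor's sizes."""
    symbol_to_source = {}
    guards = []
    for name, sizes in graph_args:
        for dim, sym in enumerate(sizes):
            source = f"{name}.size({dim})"
            if sym in symbol_to_source:
                # Duck sizing: the same symbol shows up twice, so emit an equality test.
                guards.append(f"{symbol_to_source[sym]} == {source}")
            else:
                symbol_to_source[sym] = source
                # 0/1 specialization guard for the first occurrence of the symbol.
                guards.append(f"{source} != 0 and {source} != 1")
    return guards

# e.g. generate_shape_guards([("x", ["s0", "s1"]), ("y", ["s0"])])
```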

TODO: Delete all the tensor ref tracking code, it's unnecessary

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90528
Approved by: https://github.com/voznesenskym
2022-12-10 13:35:04 +00:00
Edward Z. Yang
49c674e155 Revert guaranteed symint allocation (#90381)
So, uh, I have a new strategy for generating dupe guards, one where I don't actually need to allocate symints for every tensor that is fakeified. So I'm reverting the changes I made from earlier PRs in this one.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90381
Approved by: https://github.com/voznesenskym
2022-12-10 13:17:34 +00:00
Edward Z. Yang
b68dead20c Keep track of source name on all allocated SymInts (#90295)
Wow, I had to sweat so much to get this PR out lol.

This PR enforces the invariant that whenever we allocate SymInts as part of fakeification, the SymInt is associated with a Source, and in fact we store the string source name on SymbolWithSourceName. We use 'sname' as the shorthand for source name, as 'name' is already used by sympy to name symbols.

In order to store source names, we have to plumb source names from Dynamo to PyTorch. This made doing this PR a bit bone crushing, because there are many points in the Dynamo codebase where we are improperly converting intermediate tensors into fake tensors, where there is no source (and there cannot be, because it's a frickin' intermediate tensor). I've fixed all of the really awful cases in earlier PRs in the stack. This PR is just plumbing in source names from places where we do have it.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90295
Approved by: https://github.com/voznesenskym
2022-12-10 13:17:34 +00:00
blzheng
f9aa099074 [Inductor] fix issue: redeclaration of float g_tmp_buffer_xxx (#90270)
This PR fixes the issue: redeclaration of 'float g_tmp_buffer_in_ptr1[16] = {0};'.
If a bool or uint8 tensor is used by multiple ops, it will be loaded multiple times. Each load writes the declaration of this variable, i.e., `self.loads.writeline(f"float {g_tmp_buf}[{nelements}] = {{0}};")`, which introduces a redeclaration error.
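
A simplified sketch of the dedup idea (hypothetical structure; the real change lives in inductor's CPU C++ codegen):

```python
class Loads:
    def __init__(self):
        self.lines = []
        self.declared = set()

    def load_bool_as_float(self, g_tmp_buf: str, nelements: int):
        # Declare the temporary float buffer only on the first load of this
        # tensor; later loads reuse the existing declaration instead of
        # redeclaring it.
        if g_tmp_buf not in self.declared:
            self.lines.append(f"float {g_tmp_buf}[{nelements}] = {{0}};")
            self.declared.add(g_tmp_buf)
```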

![image](https://user-images.githubusercontent.com/69951214/205869956-5c325761-dc09-4aa8-a9ed-fad7f4c85917.png)
![image](https://user-images.githubusercontent.com/69951214/205870695-ee252f17-8f54-484f-9b0a-3a424c479327.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90270
Approved by: https://github.com/EikanWang, https://github.com/jgong5, https://github.com/desertfire, https://github.com/jansel
2022-12-10 12:59:30 +00:00
Zachary DeVito
0457020d2c [dims] Fix large array inputs (#88596)
Variable length arguments can overflow the arena being used to keep overhead
low for torch dims. If we hit this case, we know the amount of work being done
is already relatively big, so we just fall back to standard memory allocation.
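
A conceptual Python sketch of the strategy (the real code is C++ arena allocation in functorch dims; sizes and layout here are made up):

```python
class Arena:
    def __init__(self, size: int = 4096):
        self.buf = bytearray(size)
        self.used = 0

    def allocate(self, nbytes: int) -> memoryview:
        if self.used + nbytes > len(self.buf):
            # Overflow: the call is already doing a lot of work, so plain
            # allocation overhead is acceptable here.
            return memoryview(bytearray(nbytes))
        start = self.used
        self.used += nbytes
        # Fast path: bump-pointer allocation out of the fixed arena.
        return memoryview(self.buf)[start:start + nbytes]
```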

Fixes #88586
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88596
Approved by: https://github.com/ezyang
2022-12-10 03:49:16 +00:00
Yanli Zhao
2bac4d1fae [reland] add save and load stats in memory_tracker (#90510)
Reland of https://github.com/pytorch/pytorch/pull/90144; this PR removes the temporary path "memory.trace" in the unit test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90510
Approved by: https://github.com/rohan-varma
2022-12-10 01:39:22 +00:00
BowenBao
1b2c59ad24 [ONNX] Introduce ONNX reference evaluator for verification (#89808)
The reference evaluator requires ONNX >= 1.13. Running it in CI is blocked because the onnx submodule version cannot be bumped, as in #83201. Local tests pass.
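
For context, this is roughly how the ONNX reference evaluator is driven (requires onnx >= 1.13; the model path and input name are placeholders, not from this PR):

```python
import numpy as np
import onnx
from onnx.reference import ReferenceEvaluator

model = onnx.load("model.onnx")                      # placeholder exported model
sess = ReferenceEvaluator(model)
# Run the pure-Python reference implementation on sample inputs.
outputs = sess.run(None, {"input": np.zeros((1, 3), dtype=np.float32)})
```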

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89808
Approved by: https://github.com/justinchuby
2022-12-10 01:29:12 +00:00
BowenBao
79f9672249 [ONNX] Use VerificationOptions to wrap option arguments (#89807)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89807
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms
2022-12-09 23:49:51 +00:00
Angela Yi
6de216a2e8 [fx] Have replace_pattern return replaced nodes (#90244)
Summary: Modified replace_pattern in the subgraph rewriter to return a list of pairs of matches along with their corresponding replacement nodes in the modified graph (`List[Tuple[Match, List[Node]]]`). This allows us to easily modify the replaced nodes, including setting the metadata.
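
A hedged sketch of consuming the new return value (the return shape is taken from the summary above; the graph, pattern, and metadata key are illustrative):

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.subgraph_rewriter import replace_pattern

def pattern(x):
    return torch.relu(x) + 1

def replacement(x):
    return torch.relu(x + 1)

def f(x):
    return torch.relu(x) + 1

gm = symbolic_trace(f)
# Per the summary, each entry pairs a Match with its replacement nodes.
for match, new_nodes in replace_pattern(gm, pattern, replacement):
    for node in new_nodes:
        node.meta["origin"] = "rewritten"   # e.g. set metadata on replaced nodes
```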

Test Plan: CI

Differential Revision: D41737056

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90244
Approved by: https://github.com/SherlockNoMad
2022-12-09 23:43:16 +00:00
Angela Yi
02eb0bdbc1 [fx] Added better tests to pass infra (#90432)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90432
Approved by: https://github.com/SherlockNoMad
2022-12-09 21:43:18 +00:00
Andrew Gu
1a735a8094 [FSDP] Subtest CPUOffload for test_fsdp_grad_acc.py (#90545)
In preparation for the next PR, I wanted to reduce the time to run these gradient accumulation tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90545
Approved by: https://github.com/mrshenli
2022-12-09 21:28:27 +00:00
Driss Guessous
912748e3b7 [SDP] Fix alignment check for efficient_attention (#90413)
Fixes a bug found using head_dim_size==100 on an A100 GPU. This PR contains stricter guards on the input shape. These constraints are taken from xformers: https://github.com/facebookresearch/xformers/blob/gh/danthe3rd/60/orig/xformers/ops/fmha/cutlass.py#L23
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90413
Approved by: https://github.com/mikekgfb
2022-12-09 21:09:25 +00:00
Michael Lazos
9c4189f82d [dynamo] Add is_compiling for dynamo (#90329)
`is_compiling` returns True during dynamo tracing and False when run in eager mode.
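
A hedged usage sketch (the import location is assumed for illustration; only the helper's behavior comes from the message):

```python
import torch

def maybe_specialize(x):
    # True while dynamo is tracing this code, False in plain eager execution.
    if torch._dynamo.is_compiling():
        return x + 1
    return x - 1
```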

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90329
Approved by: https://github.com/jansel
2022-12-09 20:19:41 +00:00
Shen Li
082450609c [FSDP] Allow nested FSDP wrapper to use different mixed precision (#90523)
The main change is to move the `args` and `kwargs` dtype conversion from `_root_pre_forward` to `_pre_forward`, so that every FSDP instance has a chance to apply its own precision.
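
A hedged sketch of what this allows (module structure is illustrative; requires an initialized process group):

```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

# Nested wrappers can now each apply their own precision to inputs.
inner = FSDP(nn.Linear(8, 8),
             mixed_precision=MixedPrecision(param_dtype=torch.float32))
model = FSDP(nn.Sequential(inner, nn.Linear(8, 8)),
             mixed_precision=MixedPrecision(param_dtype=torch.float16))
```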
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90523
Approved by: https://github.com/awgu, https://github.com/rohan-varma
2022-12-09 20:06:05 +00:00
mfkasim1
eedf7a4989 Log1p complex for CUDA (#90422)
Another pull request in the direction of solving #89205: log1p for complex numbers in CUDA.
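
What this enables, roughly (requires a CUDA build; values are illustrative):

```python
import torch

# log1p keeps precision for tiny complex inputs, where log(1 + z) would lose it.
z = torch.tensor([1e-8 + 1e-8j], device="cuda")
y = torch.log1p(z)   # complex log1p now has a CUDA kernel
```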
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90422
Approved by: https://github.com/lezcano
2022-12-09 19:53:22 +00:00
Yuxin Wu
4e1881b8b7 use proper temp directories in test_tensorboard.py (#89826)
The old `temp_dir` is created under `PWD`. But `PWD` may not be writable and in general is not a good place to create temporary directories. Use the standard `tempfile` instead.
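
The pattern in question, as a sketch (not the exact test code):

```python
import tempfile

# Create event files in a managed temporary directory that is cleaned up
# automatically, instead of writing under the current working directory.
with tempfile.TemporaryDirectory() as temp_dir:
    log_dir = temp_dir   # e.g. pass to SummaryWriter(log_dir=temp_dir)
```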
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89826
Approved by: https://github.com/soumith
2022-12-09 19:33:03 +00:00
Thiago Crepaldi
bcf7036be5 Disable BUILD_CAFFE2 from ONNX builds (#90475)
Fixes https://github.com/microsoft/onnx-converters-private/issues/132

@kit1980 and @malfet agreed to disable ONNX tests for Caffe2 builds.
With this change, exporting models with `operator_export_type=ONNX_ATEN_FALLBACK` will properly test non-Caffe2 builds, which is the only scenario for ATen fallback after the Caffe2 deprecation.
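
An illustrative export call for that scenario (model and inputs are placeholders, not from the PR):

```python
import torch

model = torch.nn.Linear(3, 3)
example_input = torch.randn(1, 3)

# Export with the ATen fallback operator export type described above.
torch.onnx.export(
    model, (example_input,), "model.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```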

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90475
Approved by: https://github.com/kit1980, https://github.com/BowenBao
2022-12-09 18:02:48 +00:00
Bin Bao
282dfe8ba4 [inductor][Reland] Use decomposition for _to_copy (#90494)
Summary: also contains a fix for https://github.com/pytorch/pytorch/issues/89633

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90494
Approved by: https://github.com/ngimel
2022-12-09 16:51:50 +00:00
Alex Settle
6b7efac3c9 Reland "Add hierarchical module names to torchFX graph.node" (#90205)
Fixes #87659

Reland of PR #87742

Resolves errors that caused the changes to be backed out.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90205
Approved by: https://github.com/jerryzh168
2022-12-09 06:20:31 +00:00
Sean Ross-Ross
0a00858095 Implement checks for vmap escaped errors (#89585)
Follow on to https://github.com/pytorch/pytorch/pull/89077
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89585
Approved by: https://github.com/zou3519
2022-12-09 05:58:07 +00:00
Xilun Wu
3759777edc [threaded PG] fix long hang issue in testing (#90515)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90515
Approved by: https://github.com/wanchaol
2022-12-09 05:24:08 +00:00
Nikita Shulga
6fb79b7004 Bump version: 1.14.0->2.0.0 (#90491)
Besides the usual location, I had to update the version in one of the ONNX expect patterns, namely here: 43660051d8/test/onnx/expect/TestOperators.test_avg_pool2d.expect (L3)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90491
Approved by: https://github.com/jansel, https://github.com/albanD
2022-12-09 01:08:08 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
c8f5c194ca Fix bug in dynamic shapes multiply (#90336)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90336
Approved by: https://github.com/ezyang
2022-12-09 00:59:50 +00:00
Andrew Gu
2cf703214b [Composable API][Easy] Fix some follow-ups (#90471)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90471
Approved by: https://github.com/mrshenli
2022-12-09 00:26:38 +00:00
William Wen
eb5b4c21e1 Deepcopy GraphModule in minifier (#90401)
Fixes https://github.com/pytorch/pytorch/issues/90397. Remove deepcopy calls in minifier tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90401
Approved by: https://github.com/anijain2305, https://github.com/mlazos
2022-12-08 23:59:05 +00:00
Howard Huang
80150788bc [21/N] Add alltoall_base custom op with CPU/CUDA implementations (#89813)
Differential Revision: [D41812670](https://our.internmc.facebook.com/intern/diff/D41812670)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89813
Approved by: https://github.com/kwen2501
2022-12-08 23:39:26 +00:00
Sergii Dymchenko
912a1f7b27 Fix issue 38095 TODOs in test_quantized_tensor.py (#90344)
Fix TODOs related to https://github.com/pytorch/pytorch/issues/38095
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90344
Approved by: https://github.com/malfet
2022-12-08 22:28:15 +00:00
Michael Lazos
76f440f20a [dynamo] Rewrite inplace addcdiv and inplace add (#90330)
Rewrite inplace addcdiv to a div, mul and inplace add to avoid graph break
Rewrite inplace add to a mul and inplace add to avoid graph break

Needed to close optimizer graph breaks
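
An eager-mode sketch of the equivalences behind these rewrites (reading the second item as `add_` with a scalar `alpha`; this is not dynamo's actual transformation code):

```python
import torch

p, a, b = torch.randn(3), torch.randn(3), torch.rand(3) + 1
v = 0.1

# inplace addcdiv -> div, mul, inplace add:
p.add_(a / b * v)        # same result as p.addcdiv_(a, b, value=v)

# scaled inplace add -> mul, inplace add:
p.add_(a * v)            # same result as p.add_(a, alpha=v)
```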

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90330
Approved by: https://github.com/jansel
2022-12-08 21:19:23 +00:00
Stephen Macke
0c972fb5c7 [rfc][pkg] check spec for module source before falling back to file in package exporter (#90258)
Summary: To get source for a particular module, the "correct" thing to do is to check the module's spec and use `get_source` if it's a SourceFileLoader, since subclasses may look elsewhere than the `__file__`, and the spec will give the source of truth. For torch packager, however, we prefer to use linecache, but the loader could still change the file, so we figure out the file for the module using the spec's loader rather than using `module.__file__`, if possible.
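
A conceptual sketch of the lookup order described above (hypothetical helper, not the actual torch.package code):

```python
import linecache

def get_module_source(module):
    # Prefer the filename reported by the spec's loader, since subclasses may
    # look elsewhere than __file__; then read the source via linecache.
    spec = getattr(module, "__spec__", None)
    filename = None
    if spec is not None and spec.loader is not None and hasattr(spec.loader, "get_filename"):
        filename = spec.loader.get_filename(module.__name__)
    if filename is None:
        filename = getattr(module, "__file__", None)
    if filename is None:
        return None
    return "".join(linecache.getlines(filename))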

Test Plan: This code path will get exercised by CI. Also added a test for remapped files.

Differential Revision: D41412983

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90258
Approved by: https://github.com/PaliC
2022-12-08 20:24:45 +00:00
Elias Ellison
b651e06049 Add Pointwise Tag from pointwise set in DTensor, use in aot_autograd partitioner (#90029)
Takes the pointwise op list from [DTensor](https://github.com/pytorch/pytorch/blob/master/torch/distributed/_tensor/ops/pointwise_ops.py#L36) as an initial starting point for pointwise ops, and feeds them to the aot autograd partitioner.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90029
Approved by: https://github.com/ezyang
2022-12-08 20:21:17 +00:00
Richard Zou
7342251281 functorch.grad support for autograd.Function (#89860)
Happy to split this PR more if it helps.

This PR adds functorch.grad support for autograd.Function. There's a lot
going on; here is the high level picture and there are more details as
comments in the code.

Mechanism (PyOperator)
- Somehow, autograd.Function needs to dispatch with functorch. This is
necessary because every layer of functorch needs to see the
autograd.Function; grad layers need to preserve the backward pass.
- The mechanism for this is via PyOperator. If functorch transforms are
active, then we wrap the autograd.Function in a `custom_function_call`
PyOperator where we are able to define various rules for functorch
transforms.
- `custom_function_call` has a rule for the functorch grad transform.

autograd.Function changes
- I needed to make some changes to autograd.Function to make this work.
- First, this PR splits autograd.Function into a _SingleLevelFunction
(that works with a single level of functorch transform) and
autograd.Function (which works with multiple levels). This is necessary
because functorch's grad rule needs some way of specifying a backward
pass for that level only.
- This PR changes autograd.Function's apply to either call
`custom_function_call` (if functorch is active) or super().apply (if
functorch isn't active).

Testing
- Most of this PR is just testing. It creates an autograd.Function
OpInfo database that then gets passed to the functorch grad-based tests
(grad, vjp, vjpvjp).
- Since functorch transform tests are autogenerated from OpInfo tests,
this is the easiest way to test various autograd.Function with
functorch.

Future
- jvp and vmap support coming next
- better error message (functorch only supports autograd.Function that
have the optional setup_context staticmethod)
- documentation to come when we remove the feature flag
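
A hedged sketch of the user-facing pattern this is building toward, once the feature flag mentioned below is on (the optional setup_context split is described in the next commits; exact details may differ from what finally shipped):

```python
import torch
from functorch import grad

class MySquare(torch.autograd.Function):
    @staticmethod
    def forward(x):
        return x * x

    @staticmethod
    def setup_context(ctx, inputs, output):
        # ctx-specific logic lives here rather than in forward().
        (x,) = inputs
        ctx.save_for_backward(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

# functorch.grad now sees the custom backward at every transform level.
print(grad(MySquare.apply)(torch.tensor(3.0)))  # expected: tensor(6.)
```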

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89860
Approved by: https://github.com/soulitzer
2022-12-08 19:31:04 +00:00
Richard Zou
eb314f9b1a Add setup_context staticmethod to autograd.Function (#89859)
Adds a setup_context staticmethod to autograd.Function.
If it exists, then the user splits the ctx-specific logic from the
forward() and puts it in the setup_context staticmethod.

Docs will come later when we remove the feature flag.

Test Plan:
- some light tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89859
Approved by: https://github.com/soulitzer
2022-12-08 19:31:04 +00:00
Richard Zou
103be1f164 Add feature flag for the autograd.Function extension (#89858)
This PR adds a private runtime feature flag for the feature work we're going
to do with extending autograd.Function. The motivation of the feature flag
is:
- to guard the feature against unsuspecting users
- control the release of the feature to when we are ready to release it

We might not even need the feature flag (because we hope to have the
work done in the next month), but it is good practice and it does touch
currently public API (autograd.Function).

Concretely, "autograd.Function extension" refers to:
- adding an optional `setup_context` staticmethod to autograd.Function
- adding an optional `vmap` staticmethod to autograd.Function
- autograd.Function support for functorch

Test Plan:
- new test that the feature flag works
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89858
Approved by: https://github.com/soulitzer
2022-12-08 19:31:01 +00:00
PyTorch MergeBot
e89685b0b5 Revert "[inductor] Use decomposition for _to_copy (#90314)"
This reverts commit 3fdb5f2dda.

Reverted https://github.com/pytorch/pytorch/pull/90314 on behalf of https://github.com/desertfire due to regresses performance on hf_Bert
2022-12-08 18:29:06 +00:00
Denis Vieriu
b71c710db1 Add additional tests for view slice tensors (#86282)
Fixes https://github.com/pytorch/pytorch/issues/83995 and https://github.com/pytorch/pytorch/issues/84489

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86282
Approved by: https://github.com/kulinseth
2022-12-08 17:59:55 +00:00
PyTorch MergeBot
465005c1e0 Revert "Fix issue 38095 TODO in test_multiprocessing.py (#90335)"
This reverts commit cbb2d5af81.

Reverted https://github.com/pytorch/pytorch/pull/90335 on behalf of https://github.com/clee2000 due to somehow caused test_multiprocessing to timeout cbb2d5af81 https://github.com/pytorch/pytorch/actions/runs/3645873711/jobs/6159998523
2022-12-08 17:12:10 +00:00
Bin Bao
d2ee94231e [inductor] Fallback for index with None in the middle of indices (#90022)
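As I read the title, the indexing pattern in question looks like this (illustrative shapes):

```python
import torch

x = torch.randn(4, 5, 6)
idx = torch.tensor([0, 1])
# Lowers to aten.index with a None in the middle of the indices list,
# which inductor now falls back on instead of compiling.
y = x[idx, :, idx]
```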
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90022
Approved by: https://github.com/ngimel
2022-12-08 16:18:57 +00:00
Rohan Varma
793a999ce0 Hybrid Sharded Data Parallel (#89915)
Adds 2 new hybrid sharding strategies to FSDP:
1. HYBRID_SHARD: applies zero-3 style sharding within a node, and data parallelism across nodes
2. HYBRID_SHARD_ZERO2: applies zero-2 style sharding within a node, and data parallelism across nodes

These are useful for medium sized models and aim to decrease communication volume, tests and benchmarks will be run to understand which workloads are optimal under which sharding strategy.

Hybrid sharding in general works by sharding the model using a process group within a single node, and creating inter-node process groups for replication / data parallelism. The user either needs to pass in a tuple of these process groups, or None, and we generate the process groups appropriately.
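
A hedged sketch of opting in (requires an initialized process group spanning multiple nodes; the zero-2 variant may be spelled differently in the released API):

```python
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy

# With process_group left as None, FSDP constructs the intra-node sharding
# group and the inter-node replication group itself, as described above.
model = FSDP(nn.Linear(8, 8), sharding_strategy=ShardingStrategy.HYBRID_SHARD)
```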

**Acknowledgements**
- @awgu's excellent prototype: 5ad3a16d48
- @liangluofb for ideation, feedback, and initial implementation and experimentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89915
Approved by: https://github.com/awgu
2022-12-08 16:18:03 +00:00