Commit Graph

890 Commits

Author SHA1 Message Date
kato8966
64c39ece65 Fix a docstring of resolve_neg (#104151)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104151
Approved by: https://github.com/malfet
2023-07-19 03:55:20 +00:00
Horace He
b88b742db8 fixed torch.manual_seed note (#105175)
Fixes https://github.com/pytorch/pytorch/issues/87509

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105175
Approved by: https://github.com/ezyang
2023-07-13 23:43:44 +00:00
Kurt Mohler
f987d11fa7 Reland: Make torch.empty* deterministic by filling with NaN or max int (#104995)
Relands #101849 after #104302 reverted it.

torchrec PR https://github.com/pytorch/torchrec/pull/1269 fixes the torchrec failure that caused #101849 to be reverted

Part of #82004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104995
Approved by: https://github.com/albanD
2023-07-13 22:18:03 +00:00
Oren Leung
3ff111a4b4 doc: fix fake_quantize_per_tensor_affine docs (#104453)
Fixes #82800

Fixes wrong `fake_quantize_per_tensor_affine` example and wrong `fake_quantize_per_tensor_affine` formula

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104453
Approved by: https://github.com/kit1980
2023-06-30 22:59:00 +00:00
Amr Elshennawy
a78bddac01 Revert D46920584: Multisect successfully blamed D46920584 for test or build failures (#104269) (#104302)
Summary:

This diff is reverting D46920584
D46920584: Make `torch.empty*` deterministic by filling with NaN or max int value (#101849) by generatedunixname499836121 has been identified as causing the following test or build failures:

Tests affected:
- [torchrec/distributed/composable/tests:test_fsdp - torchrec.distributed.composable.tests.test_fsdp.FullyShardTest: test_composable_checkpoint](https://www.internalfb.com/intern/test/281475062923125/)

Here's the Multisect link:
https://www.internalfb.com/multisect/2341386
Here are the tasks that are relevant to this breakage:

We're generating a revert to back out the changes in this diff; please note that the backout may land if someone accepts it.

If you believe this diff has been generated in error you may Commandeer and Abandon it.

Test Plan: NA

Reviewed By: huydhn, osalpekar

Differential Revision: D46997394

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104302
Approved by: https://github.com/osalpekar
2023-06-29 20:20:58 +00:00
JenDL
a6b9a61a6a Added a note to torch.round doc to indicate the return type (#97227)
Added a note to the torch.round doc to indicate the return type of the output tensor

Fixes #89056
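As a quick illustration of the note (the values here are my own example, not from the PR): `torch.round` preserves the input dtype rather than returning an integer tensor.

```python
import torch

x = torch.tensor([0.4, 1.6])
r = torch.round(x)
assert r.dtype == x.dtype          # output dtype matches the input dtype
assert r.tolist() == [0.0, 2.0]    # values are rounded, dtype is still float
```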

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97227
Approved by: https://github.com/albanD
2023-06-29 20:02:59 +00:00
Kurt Mohler
2642f31e4c Make torch.empty* deterministic by filling with NaN or max int value (#101849)
Part of #82004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101849
Approved by: https://github.com/lezcano, https://github.com/albanD, https://github.com/kulinseth
2023-06-21 02:53:22 +00:00
Zheng, Zhaoqiong
d52d1fd5ba add description for unexpected case (#103500)
Fixes #88547

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103500
Approved by: https://github.com/mingfeima, https://github.com/mikaylagawarecki
2023-06-20 19:02:45 +00:00
Liang Hou
e82616d900 Add generator argument in torch.randn signature (#102075)
Fix the document issue of `torch.randn`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102075
Approved by: https://github.com/kit1980, https://github.com/soulitzer
2023-06-14 23:37:19 +00:00
Simon-Martin Schröder
a0885dff98 Link torch.cat in docstring of torch.stack and vice versa (#103421)
torch.cat and torch.stack are similar enough that they should point to each other.
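A small sketch of the distinction the cross-links clarify (shapes are my own example):

```python
import torch

a, b = torch.zeros(2, 3), torch.ones(2, 3)
# cat joins along an existing dimension...
assert torch.cat([a, b], dim=0).shape == (4, 3)
# ...while stack inserts a new dimension.
assert torch.stack([a, b], dim=0).shape == (2, 2, 3)
```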

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103421
Approved by: https://github.com/malfet, https://github.com/svekars, https://github.com/kit1980
2023-06-14 23:31:22 +00:00
Snorf Yang
2a3e45a2a8 Docs: update default device description (#101283)
Closes #101274

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101283
Approved by: https://github.com/albanD
2023-05-16 17:07:31 +00:00
Yukio Siraichi
b3b333205f Fix asarray doc examples. (#100971)
Fixes issue raised on [PyTorch discuss](https://discuss.pytorch.org/t/confused-on-an-example-on-pytorch-official-documentation/178785).

**Summary:** the examples in the `asarray` docs contain a few mistakes that make them not work. This PR fixes those.
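One of the behaviors the examples document is memory sharing; a minimal sketch (my own values, assuming NumPy is available):

```python
import numpy
import torch

a = numpy.array([1, 2, 3])
t = torch.asarray(a)   # shares memory with the NumPy array when possible
a[0] = 7               # the change is visible through the tensor
assert t[0].item() == 7
```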
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100971
Approved by: https://github.com/Skylion007, https://github.com/lezcano
2023-05-12 11:52:10 +00:00
ts
2a6a159c0c Modify repeat_interleave docs to highlight potential overloading (#99650)
Fixes #99259, drawing attention to the fact that input is optional by putting a variation of the method signature at the top of the file and by modifying the input arguments.

Note that I'm not certain how to get the additional signature at the same level of indentation as the first one, but I think this change does a good job of highlighting that the argument is optional.

Would be happy to iterate on this if there are any issues.
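A sketch of the two call shapes the doc change highlights (example values are mine):

```python
import torch

x = torch.tensor([1, 2, 3])
# With an input tensor: each element is repeated `repeats` times.
assert torch.repeat_interleave(x, 2).tolist() == [1, 1, 2, 2, 3, 3]
# Without an input: repeats alone yields each index repeated by its count.
assert torch.repeat_interleave(torch.tensor([2, 3])).tolist() == [0, 0, 1, 1, 1]
```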

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99650
Approved by: https://github.com/mikaylagawarecki
2023-05-01 17:53:03 +00:00
Akinori Mitani
c11441fda3 Update torch.arange doc. (#99963)
To always exclude `end` without being affected by rounding error, `epsilon` should be subtracted, instead of being added.

Fixes #99853

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99963
Approved by: https://github.com/kit1980
2023-04-26 04:18:56 +00:00
Kiersten Stokes
5c16dfd708 Add half to real param description in torch.complex docs (#99938)
Fixes #89733 according to the issue description

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99938
Approved by: https://github.com/Skylion007
2023-04-25 21:23:16 +00:00
gusty1g
efc90c797d improvements to torch.gradient docs (#98824)
Fixes #98693

Clarified docs for `torch.gradient` on `h_l` and how the gradient is computed. For the mathematical equations, I followed this reference: https://www.dam.brown.edu/people/alcyew/handouts/numdiff.pdf.
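A short sketch of the scheme being documented (sample values are mine): second-order central differences in the interior, one-sided differences at the edges.

```python
import torch

# f(x) sampled at unit spacing h_l = 1.
y = torch.tensor([1.0, 2.0, 4.0, 7.0])
(g,) = torch.gradient(y, spacing=1.0)
# Edges: (2-1)/1 and (7-4)/1; interior: (4-1)/2 and (7-2)/2.
assert g.tolist() == [1.0, 1.5, 2.5, 3.0]
```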

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98824
Approved by: https://github.com/ngimel, https://github.com/kit1980
2023-04-12 23:43:40 +00:00
Connor Henderson
9d04d376d8 docs: Match open bracket with close bracket in unsqueeze (#95215)
I was going to fix something else that I thought was an issue, but it isn't, so I'm just leaving this tiny thing in case it's wanted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95215
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-02-24 03:56:59 +00:00
Edward Z. Yang
ce950b412f Reland "Add torch.empty_permuted (#95069)" (#95208)
This reverts commit 92e03cd583.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95208
Approved by: https://github.com/albanD
2023-02-21 18:02:48 +00:00
PyTorch MergeBot
92e03cd583 Revert "Add torch.empty_permuted (#95069)"
This reverts commit bedeb1f014.

Reverted https://github.com/pytorch/pytorch/pull/95069 on behalf of https://github.com/jeanschmidt due to Breaking internal builds. More in https://fburl.com/phabricator/ztrxrroq
2023-02-21 12:05:20 +00:00
Edward Z. Yang
bedeb1f014 Add torch.empty_permuted (#95069)
torch.empty_permuted is a generalized version of torch.empty(memory_format=...), where you can pass an arbitrary physical layout as a tuple of dims to allow you to set up dense, non-overlapping tensors with a non-standard memory format. Check the docblock for a full description of the semantics.

The initial motivation for this PR is with guard-less unbacked SymInts. Traditionally, the way we allocate dense tensors with arbitrary layout is with `empty_strided`. However, `empty_strided` does not know that the given strides are actually contiguous, and must test this manually to find out if it is the case. With `empty_permuted`, this is known statically to be the case and helps us skip some 0/1 guards.

However, I also think torch.empty_permuted is a useful API in its own right. It is technically possible to simulate this with an empty and a permute; however, there are some downsides:

* The manual incant is tricky to work out. To allocate an NHWC tensor, the invocation is `torch.empty(N, H, W, C).permute(0, 3, 1, 2)`; the permute call has to take NHWC to NCHW, and is the *inverse* of the permutation people are typically thinking of when they talk about NHWC (0, 2, 3, 1). Instead, torch.empty_permuted lets you say `torch.empty_permuted((N, C, H, W), (0, 2, 3, 1))`, letting you provide the intuitive permutation. It can literally be read off as NHWC if you assign N=0, C=1, H=2, W=3.
* An empty(requires_grad=True).permute() is no longer a leaf tensor. You can force it to be a leaf with a detach(), but it is more straightforward and less error prone to allow directly allocating a tensor with the correct permutation.

It is also technically possible to simulate this with empty_strided. However, this requires the user to manually compute the contiguous output strides and is bad from a reduction of guards perspective. For what it's worth, this is one of the more common uses of as_strided in the wild, and it would be nice to get rid of it.

A nice enhancement of this feature would be to accept `physical_layout` anywhere `memory_format` is accepted. However, this would be a pretty involved change, so I'm doing the easy thing instead.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95069
Approved by: https://github.com/malfet, https://github.com/ngimel, https://github.com/albanD, https://github.com/dagitses
2023-02-20 00:23:10 +00:00
Ivan Yashchuk
fba13d94a1 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

- [x] XLA PR: https://github.com/pytorch/xla/pull/4498

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980, https://github.com/malfet
2023-01-31 11:59:11 +00:00
Minh-Long Luu (刘明龙)
00b3f22210 Add missing scalar example in docs of torch.where (#93145)
[`torch.where(condition, x, y)`](https://pytorch.org/docs/stable/generated/torch.where.html) accepts `x` and `y` as either `Tensor` or Scalar, but the Scalar example is missing in the docs. I simply add the example.
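A sketch of the kind of scalar example being added (values are mine, not the PR's exact example):

```python
import torch

cond = torch.tensor([True, False, True])
# Scalars may be used for either or both of x and y.
assert torch.where(cond, 1.0, 0.0).tolist() == [1.0, 0.0, 1.0]
assert torch.where(cond, torch.tensor([1.0, 2.0, 3.0]), -1.0).tolist() == [1.0, -1.0, 3.0]
```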

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93145
Approved by: https://github.com/ngimel
2023-01-28 03:46:44 +00:00
PyTorch MergeBot
acdd462b1a Revert "Remove deprecated torch.symeig (#70988)"
This reverts commit d70ed68162.

Reverted https://github.com/pytorch/pytorch/pull/70988 on behalf of https://github.com/kit1980 due to Failing XLA tests, forward fix unsuccessful
2023-01-24 19:03:40 +00:00
Yukio Siraichi
3f64c96655 asarray: Add support for NumPy scalars (#90914)
Follow up from: Quansight-Labs/numpy_pytorch_interop#3

This PR adds support for NumPy scalars for `torch.asarray`.

**Before:** treats the scalar as an object that implements the buffer protocol, and thus interprets the data as the default data type (`float32`).

```python
>>> torch.asarray(numpy.float64(0.5))
tensor([0.0000, 1.7500])
```

**After:** identifies the NumPy scalar and does the "right" thing, i.e. creates a 0-dimensional tensor from the NumPy array that doesn't share its memory.

```python
>>> torch.asarray(numpy.float64(0.5))
tensor(0.5000, dtype=torch.float64)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90914
Approved by: https://github.com/lezcano, https://github.com/mruberry
2023-01-24 08:09:30 +00:00
Ivan Yashchuk
d70ed68162 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980
2023-01-23 22:51:40 +00:00
Masaki Kozuki
30876229a7 [mta] Backward of unary foreach functions (#89591)
As per the title, this PR defines the backward of those functions.

This doesn't implement forward-mode automatic differentiation as [the current codegen](a747326423/tools/autograd/gen_variable_type.py (L1513)) doesn't seem to handle `ArrayRef<Tensor>`.

Rel:
- https://github.com/pytorch/pytorch/issues/53796
- https://github.com/pytorch/pytorch/issues/58833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89591
Approved by: https://github.com/albanD
2023-01-23 08:28:06 +00:00
Peter Bell
fb1427ea8f squeeze: allow squeezing multiple dimensions at once (#89017)
Ref #70924

This addresses part 1 of the issue, allowing `torch.squeeze` to be
passed a tuple of dimensions. e.g.
```python
x.squeeze(0).squeeze(0)
```
can now be written
```python
x.squeeze((0, 1))
```
(assuming x has at least 2 dimensions)
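A runnable version of the snippet above (concrete shape is my own choice):

```python
import torch

x = torch.zeros(1, 1, 3)
assert x.squeeze((0, 1)).shape == (3,)
# Dimensions of size != 1 listed in the tuple are simply left untouched.
assert x.squeeze((0, 2)).shape == (1, 3)
```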

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89017
Approved by: https://github.com/albanD
2023-01-17 14:20:15 +00:00
Luca Lumetti
a4a0195c6c Fix torch.where signature mismatch that was caused by torchgen (#91627)
Fixes #91003

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91627
Approved by: https://github.com/albanD
2023-01-13 16:17:55 +00:00
Pearu Peterson
b3e4f5029b Add check-sparse-tensor-invariants flag to Context - 2nd try. (#92094)
This PR is a copy of https://github.com/pytorch/pytorch/pull/90849 that merge was reverted.

The PR adds a "check sparse tensor invariants" flag to Context that, when enabled, triggers sparse tensor data invariant checks in the unsafe methods for constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to the UI:

`torch.sparse.check_sparse_tensor_invariants` class provides different ways to enable/disable the invariant checking.

`torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional argument `check_invariants` to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.
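A sketch of the two ways to enable the checks (the example tensor values are mine):

```python
import torch

# Per-call: check_invariants=True validates the inputs eagerly.
try:
    torch.sparse_coo_tensor([[3]], [1.0], (3,), check_invariants=True)
    raised = False
except RuntimeError:
    raised = True
assert raised  # index 3 is out of bounds for a size-3 dimension

# Scoped: the class doubles as a context manager toggling the global flag.
with torch.sparse.check_sparse_tensor_invariants():
    t = torch.sparse_coo_tensor([[0, 2]], [1.0, 2.0], (3,))
assert t.shape == (3,)
```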

The PR fixes https://github.com/pytorch/pytorch/issues/90833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92094
Approved by: https://github.com/cpuhrsch
2023-01-13 14:50:33 +00:00
PyTorch MergeBot
c7a22bb7c7 Revert "Add check-sparse-tensor-invariants flag to Context. (#90849)"
This reverts commit b9a035c1c5.

Reverted https://github.com/pytorch/pytorch/pull/90849 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-12 09:58:16 +00:00
Pearu Peterson
b9a035c1c5 Add check-sparse-tensor-invariants flag to Context. (#90849)
This PR adds a "check sparse tensor invariants" flag to Context that, when enabled, triggers sparse tensor data invariant checks in the unsafe methods for constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to the UI:

- `torch.enable_check_sparse_tensor_invariants` and `torch.is_check_sparse_tensor_invariants_enabled` functions to globally enable/disable the invariant checks and to retrieve the state of the feature, respectively
- `torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional argument `check_invariants` to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.

The PR also fixes https://github.com/pytorch/pytorch/issues/90833

# Main issue

*The following content is outdated after merging the PRs in this ghstack but kept for the record.*

The importance of this feature is that when enabling the invariants checks by default, say, via

<details>

```
$ git diff
diff --git a/torch/__init__.py b/torch/__init__.py
index c8543057c7..19a91d0482 100644
--- a/torch/__init__.py
+++ b/torch/__init__.py
@@ -1239,3 +1239,8 @@ if 'TORCH_CUDA_SANITIZER' in os.environ:

 # Populate magic methods on SymInt and SymFloat
 import torch.fx.experimental.symbolic_shapes
+
+# temporarily enable sparse tensor arguments validation in unsafe
+# constructors:
+
+torch._C._set_check_sparse_tensor_invariants(True)
```

</details>

a massive number of test failures/errors occur in test_sparse_csr.py tests:
```
$ pytest -sv test/test_sparse_csr.py
<snip>
==== 4293 failed, 1557 passed, 237 skipped, 2744 errors in 69.71s (0:01:09) ====
```
that means that we are silently constructing sparse compressed tensors that do not satisfy the sparse tensor invariants. In particular, the following errors are raised:

```
AssertionError: "resize_as_sparse_compressed_tensor_: self and src must have the same layout" does not match "expected values to be a strided and contiguous tensor"

RuntimeError: CUDA error: device-side assert triggered

RuntimeError: `col_indices[..., crow_indices[..., i - 1]:crow_indices[..., i]] for all i = 1, ..., nrows are sorted and distinct along the last dimension values` is not satisfied.

RuntimeError: expected col_indices to be a strided and contiguous tensor

RuntimeError: expected row_indices to be a strided and contiguous tensor

RuntimeError: expected values to be a strided and contiguous tensor

RuntimeError: for_each: failed to synchronize: cudaErrorAssert: device-side assert triggered

RuntimeError: tensor dimensionality must be sum of batch, base, and dense dimensionalities (=0 + 2 + 0) but got 3
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90849
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-01-11 01:05:14 +00:00
Peter Bell
2a64365a29 Fix rendering of std/var docs (#91730)
Due to the indentation, "versionchanged" is being rendered as if it were an argument.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91730
Approved by: https://github.com/albanD, https://github.com/lezcano
2023-01-05 22:17:37 +00:00
PyTorch MergeBot
df4b3b13bc Revert "squeeze: allow squeezing multiple dimensions at once (#89017)"
This reverts commit e26cb06681.

Reverted https://github.com/pytorch/pytorch/pull/89017 on behalf of https://github.com/mehtanirav due to Internal breakages
2023-01-05 19:25:08 +00:00
Peter Bell
e26cb06681 squeeze: allow squeezing multiple dimensions at once (#89017)
Ref #70924

This addresses part 1 of the issue, allowing `torch.squeeze` to be
passed a tuple of dimensions. e.g.
```python
x.squeeze(0).squeeze(0)
```
can now be written
```python
x.squeeze((0, 1))
```
(assuming x has at least 2 dimensions)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89017
Approved by: https://github.com/albanD
2023-01-04 14:40:56 +00:00
Salahuddin
f1d8fef4d4 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py `
- `pytorch/docs/XXX.rst` files. Here XXX represents all those files where I made the change

Although I have added `softmax` to the `docs` directory, I was not sure which files/folders required the edits, so there could be issues.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-28 15:06:24 +00:00
PyTorch MergeBot
af7132302a Revert "Softmax added to tensor, torch and docs (#91292)"
This reverts commit f8b28799f8.

Reverted https://github.com/pytorch/pytorch/pull/91292 on behalf of https://github.com/weiwangmeta due to breaking internal distributed testing builds
2022-12-28 14:30:46 +00:00
Salahuddin
f8b28799f8 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py `
- `pytorch/docs/XXX.rst` files. Here XXX represents all those files where I made the change

Although I have added `softmax` to the `docs` directory, I was not sure which files/folders required the edits, so there could be issues.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-25 12:59:45 +00:00
Peter Bell
e6a7278753 Give std/var correction overloads proper defaults (#56398)
The `correction` overloads' defaults were left off for forward-compatibility reasons, but that FC window expired well over a year ago.
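With the defaults in place, omitting `correction` behaves like `correction=1` (Bessel's correction), e.g.:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0])
# The default correction is 1 (sample variance), matching unbiased=True.
assert torch.equal(torch.var(x), torch.var(x, correction=1))
assert torch.equal(torch.std(x), torch.std(x, correction=1))
```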

Differential Revision: [D29625593](https://our.internmc.facebook.com/intern/diff/D29625593)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56398
Approved by: https://github.com/mruberry
2022-12-07 15:15:00 +00:00
David Boetius
f3aeed4960 Add generator argument to torch.rand docstring (#90071)
The documentation of `torch.rand` was missing the `generator` keyword argument in the function signature. However, the argument is explained in the documentation and `torch.rand` accepts that argument.
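A minimal sketch of the argument being documented:

```python
import torch

g1 = torch.Generator().manual_seed(0)
g2 = torch.Generator().manual_seed(0)
a = torch.rand(3, generator=g1)
b = torch.rand(3, generator=g2)
# Seeding two explicit generators identically reproduces the same sample.
assert torch.equal(a, b)
```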

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90071
Approved by: https://github.com/janeyx99
2022-12-04 07:19:24 +00:00
Chung-chieh Shan
1a25e6f3c3 Fix indentation (#90110)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90110
Approved by: https://github.com/kit1980
2022-12-04 07:13:53 +00:00
Pearu Peterson
526e4aa5f8 Update to_sparse docs regarding the layout and blocksize kw arguments. (#89912)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89912
Approved by: https://github.com/cpuhrsch
2022-12-02 12:23:15 +00:00
Pearu Peterson
5167108c1a Add device note to the docs of sparse tensor factory functions (#89910)
Fixes #89402

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89910
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2022-12-01 00:06:38 +00:00
Alvaro Gaona
abb446af8c Implement old windows in Python (#87082)
Relates to #85366

- Bartlett, Blackman, Hamming, Hann.
- Except Kaiser, which will be in a different PR
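A sketch using the resulting `torch.signal.windows` functions (assuming a PyTorch version that includes them):

```python
import torch
import torch.signal

for fn in (
    torch.signal.windows.bartlett,
    torch.signal.windows.blackman,
    torch.signal.windows.hamming,
    torch.signal.windows.hann,
):
    w = fn(8)          # an 8-point window
    assert w.shape == (8,)
```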

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87082
Approved by: https://github.com/mruberry, https://github.com/lezcano
2022-11-25 11:09:28 +00:00
Nikita Karetnikov
4270bb37da [primTorch] Improve narrow and narrow_copy: refs, tests, docs (#87045)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87045
Approved by: https://github.com/mruberry
2022-11-12 15:03:50 +00:00
PyTorch MergeBot
93d3bd626e Revert "[primTorch] Improve narrow and narrow_copy: refs, tests, docs (#87045)"
This reverts commit aa8279bcb8.

Reverted https://github.com/pytorch/pytorch/pull/87045 on behalf of https://github.com/izaitsevfb due to BC-breaking change, D41161182
2022-11-09 20:48:32 +00:00
Nikita Karetnikov
aa8279bcb8 [primTorch] Improve narrow and narrow_copy: refs, tests, docs (#87045)
Fixes #87019.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87045
Approved by: https://github.com/mruberry
2022-11-09 09:19:28 +00:00
Kazuaki Ishizaki
2ddefbdc3c Fix typos used in documents under torch directory (#88300)
This PR fixes typos in comments of Python files that were found via the search box at https://pytorch.org/docs/master/search.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88300
Approved by: https://github.com/lezcano
2022-11-02 09:38:13 +00:00
foram-chandra
0f7df16c71 [doc] Add out-kwarg documentation to torch.where (#87870)
Fixes #87862

cc: @lezcano

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87870
Approved by: https://github.com/lezcano
2022-10-27 21:03:42 +00:00
S.Cao-office
312628d299 Fixed minor typos in torch.flip and torch.rot90 (#87724)
Fixes #87721

@malfet

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87724
Approved by: https://github.com/malfet
2022-10-25 19:51:42 +00:00
Edward Z. Yang
838b699e10 as_strided_scatter storage offset defaults to None not 0 (#87481)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87481
Approved by: https://github.com/bdhirsh
2022-10-21 20:12:40 +00:00