Commit Graph

47 Commits

Author SHA1 Message Date
Laith Sakka
189a054cfb Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. [attempt2] (#160869)
[relanding again after fixing internal build]
Summary:
This might cause some new DDEs on call sites that do not use is_contiguous_or_false() or sym_is_contiguous(), but we want to find those call sites and handle them properly, by explicitly calling is_contiguous_or_false() instead of is_contiguous() where appropriate. I had to fix one issue after removing the implicit size-oblivious reasoning; here is the context.

In https://github.com/pytorch/pytorch/pull/157472 we defined sym_is_contiguous to be the function that computes contiguity for dynamic shapes in C++. It returns a symbolic expression representing contiguity and is guaranteed not to throw a DDE.

When callers use is_contiguous(), we do sym_is_contiguous().guard_bool().
When callers use is_contiguous_or_false(), we do sym_is_contiguous().guard_or_false().
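As a rough sketch of the difference between the two guard flavors (a toy Python model purely for illustration; the real c10::SymBool lives in C++):

```python
# Toy model of the two guard flavors on a symbolic boolean.
class DataDependentError(Exception):
    """Stands in for a DDE (data-dependent error)."""

class SymBool:
    def __init__(self, value):
        # True/False when statically known, None when data-dependent.
        self.value = value

    def guard_bool(self):
        # is_contiguous() semantics: needs a definite answer,
        # so an unknown (data-dependent) condition raises a DDE.
        if self.value is None:
            raise DataDependentError("could not guard on data-dependent expression")
        return self.value

    def guard_or_false(self):
        # is_contiguous_or_false() semantics: unknown falls back to False.
        return False if self.value is None else self.value

known = SymBool(True)
unbacked = SymBool(None)
assert known.guard_bool() is True
assert unbacked.guard_or_false() is False  # no DDE raised
```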

One path that was not handled well was this one:
```
c10::SymBool TensorImpl::sym_is_contiguous_custom(
    at::MemoryFormat memory_format) const {
  if (C10_UNLIKELY(matches_python_custom(SizesStridesPolicy::CustomStrides))) {
    return pyobj_slot_.load_pyobj_interpreter()->is_contiguous(
        this, memory_format);
  }

  return sym_is_contiguous_default(memory_format);
}
```
Namely, if sym_is_contiguous_custom is called while matches_python_custom(SizesStridesPolicy::CustomStrides) returns true, we used to call is_contiguous(this, memory_format).

This went through load_pyobj_interpreter and ended up calling the Python is_contiguous, which relied on the implicit size-oblivious reasoning. Once that implicit reasoning is removed, the right call is
return pyobj_slot_.load_pyobj_interpreter()->sym_is_contiguous(this, memory_format);
otherwise we would get a DDE even when the caller is using sym_is_contiguous.

So I had to define sym_is_contiguous for the PyInterpreter, and then override it for nested tensors.
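A hypothetical sketch of that dispatch fix (toy Python standing in for the C++/PyInterpreter plumbing; all names here are illustrative, not the real API):

```python
class DataDependentError(Exception):
    pass

class ToyInterpreter:
    """Stand-in for the Python-object interpreter a subclass tensor routes to."""
    def is_contiguous(self, t):
        # Old path: forces a concrete bool, so a data-dependent
        # (unbacked) shape raises a DDE immediately.
        if t.contiguity is None:
            raise DataDependentError("is_contiguous on data-dependent shape")
        return t.contiguity

    def sym_is_contiguous(self, t):
        # New path: return the symbolic answer; the caller chooses
        # guard_bool() or guard_or_false() later.
        return t.contiguity  # None means "still symbolic"

class ToyTensor:
    def __init__(self, contiguity, custom_strides=False):
        self.contiguity = contiguity        # True / False / None
        self.custom_strides = custom_strides
        self.interp = ToyInterpreter()

    def sym_is_contiguous_custom(self):
        if self.custom_strides:
            # The fix: route to sym_is_contiguous, not is_contiguous,
            # so callers using guard_or_false() never hit a DDE here.
            return self.interp.sym_is_contiguous(self)
        return self.contiguity

t = ToyTensor(contiguity=None, custom_strides=True)
assert t.sym_is_contiguous_custom() is None  # stays symbolic, no DDE
```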


Test Plan:
contbuild & OSS CI, see e444cd24d4

Rollback Plan:

Differential Revision: D80435179

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160869
Approved by: https://github.com/ezyang
2025-09-08 22:59:13 +00:00
PyTorch MergeBot
b82aa3df20 Revert "Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. (#159197)"
This reverts commit e444cd24d4.

Reverted https://github.com/pytorch/pytorch/pull/159197 on behalf of https://github.com/laithsakka due to internal build failures ([comment](https://github.com/pytorch/pytorch/pull/159197#issuecomment-3195436668))
2025-08-18 07:22:13 +00:00
Laith Sakka
e444cd24d4 Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. (#159197)
This might cause some new DDEs on call sites that do not use is_contiguous_or_false() or sym_is_contiguous(), but we want to find those call sites and handle them properly, by explicitly calling is_contiguous_or_false() instead of is_contiguous() where appropriate. I had to fix one issue after removing the implicit size-oblivious reasoning; here is the context.

In https://github.com/pytorch/pytorch/pull/157472 we defined sym_is_contiguous to be the function that computes contiguity for dynamic shapes in C++. It returns a symbolic expression representing contiguity and is guaranteed not to throw a DDE.

When callers use is_contiguous(), we do sym_is_contiguous().guard_bool().
When callers use is_contiguous_or_false(), we do sym_is_contiguous().guard_or_false().

One path that was not handled well was this one:
```
c10::SymBool TensorImpl::sym_is_contiguous_custom(
    at::MemoryFormat memory_format) const {
  if (C10_UNLIKELY(matches_python_custom(SizesStridesPolicy::CustomStrides))) {
    return pyobj_slot_.load_pyobj_interpreter()->is_contiguous(
        this, memory_format);
  }

  return sym_is_contiguous_default(memory_format);
}
```
Namely, if sym_is_contiguous_custom is called while matches_python_custom(SizesStridesPolicy::CustomStrides) returns true, we used to call is_contiguous(this, memory_format).

This went through load_pyobj_interpreter and ended up calling the Python is_contiguous, which relied on the implicit size-oblivious reasoning. Once that implicit reasoning is removed, the right call is
return pyobj_slot_.load_pyobj_interpreter()->sym_is_contiguous(this, memory_format);
otherwise we would get a DDE even when the caller is using sym_is_contiguous.

So I had to define sym_is_contiguous for the PyInterpreter, and then override it for nested tensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159197
Approved by: https://github.com/ezyang
2025-08-16 09:15:58 +00:00
Ayan Das
2620361d19 Add batching rule for torch.matrix_exp (#155202)
## Summary

Adds the missing batching rule for `torch.matrix_exp` to enable efficient `vmap` support.
Previously, using `vmap` with `matrix_exp` would trigger a performance warning and fall back to a slow loop-based implementation, even though `matrix_exp` natively supports batched inputs.

Fixes #115992

## Details

`torch.matrix_exp` is an alias for `torch.linalg.matrix_exp`. This PR adds vmap support by registering `matrix_exp` with `OP_DECOMPOSE`, which reuses the existing CompositeImplicitAutograd decomposition to automatically generate batching behavior from the operation's simpler component operations.
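The effect of `OP_DECOMPOSE` can be illustrated with a toy sketch (plain Python, not the functorch internals; `matrix_exp` is approximated here by a first-order Taylor term purely for illustration): when a composite op is written in terms of primitives that already batch, the composite batches for free.

```python
# Toy "primitive" with a batching rule: elementwise add mapped over a
# batch of square matrices (represented as lists of lists).
def batched_add(a_batch, b_batch):
    return [[[x + y for x, y in zip(ra, rb)]
             for ra, rb in zip(a, b)]
            for a, b in zip(a_batch, b_batch)]

def matrix_exp_taylor1(batch):
    # "Composite" op: I + A, a 1st-order Taylor stand-in for matrix_exp.
    # It is defined entirely via the batched primitive above, so it is
    # automatically batched -- the spirit of OP_DECOMPOSE.
    identity = [[[1.0 if i == j else 0.0 for j in range(len(a))]
                 for i in range(len(a))] for a in batch]
    return batched_add(identity, batch)

batch = [[[0.0, 1.0], [0.0, 0.0]],
         [[0.0, 0.0], [1.0, 0.0]]]
out = matrix_exp_taylor1(batch)
assert out[0] == [[1.0, 1.0], [0.0, 1.0]]
```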

## Testing

The existing test suite for vmap and matrix_exp should cover this change. The fix enables:
- No performance warning when using `vmap(torch.matrix_exp)`
- Efficient native batched execution instead of loop-based fallback

**Edit:** Updated Details section to accurately reflect the implementation approach (decomposition rather than batch rule registration)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155202
Approved by: https://github.com/zou3519
2025-06-18 17:35:35 +00:00
chilli
e40a0a9359 Add randomness checking for sdpa vmap (#135176)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135176
Approved by: https://github.com/zou3519
2024-09-06 04:50:49 +00:00
Huamin Li
311af3b988 Add new ops wrapped_linear_prepack and wrapped_quantized_linear_prepacked (#134232)
Summary:
This diff adds two new operators, torch.ops._quantized.wrapped_linear_prepack and torch.ops._quantized.wrapped_quantized_linear_prepacked. Together they are a decomposition of the op torch.ops._quantized.wrapped_quantized_linear added in the previous diff.

We decomposed it this way because the packed weight can be computed early, so we don't need to compute it in every forward pass in AOTI.

Reviewed By: jerryzh168

Differential Revision: D61395887

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134232
Approved by: https://github.com/houseroad
2024-08-23 04:54:26 +00:00
Xuehai Pan
76169cf691 [BE][Easy][9/19] enforce style for empty lines in import segments in test/[e-h]*/ (#129760)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129760
Approved by: https://github.com/ezyang
2024-07-17 14:25:29 +00:00
Tom Ritchford
edb45dce85 Add OpInfo entry for as_strided_copy (#127231)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127231
Approved by: https://github.com/lezcano
2024-06-13 13:58:47 +00:00
Tom Ritchford
2386045e4f Add OpInfo entry for alias_copy (#127232) (#128142)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128142
Approved by: https://github.com/lezcano
2024-06-12 09:39:58 +00:00
PyTorch MergeBot
3b73f5de3a Revert "Add OpInfo entry for alias_copy (#127232) (#128142)"
This reverts commit 04da6aeb61.

Reverted https://github.com/pytorch/pytorch/pull/128142 on behalf of https://github.com/DanilBaibak due to The changes broke the test_output_match_alias_copy_cpu_complex64 test. ([comment](https://github.com/pytorch/pytorch/pull/128142#issuecomment-2158793878))
2024-06-10 16:17:16 +00:00
Tom Ritchford
04da6aeb61 Add OpInfo entry for alias_copy (#127232) (#128142)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128142
Approved by: https://github.com/lezcano
2024-06-10 15:01:53 +00:00
PyTorch MergeBot
c58d3af3b4 Revert "Add OpInfo entry for alias_copy (#127232)"
This reverts commit 457df212e1.

Reverted https://github.com/pytorch/pytorch/pull/127232 on behalf of https://github.com/clee2000 due to broke [onnx](https://github.com/pytorch/pytorch/actions/runs/9397057801/job/25880181144) and [mps](https://github.com/pytorch/pytorch/actions/runs/9397057805/job/25879818705) tests, [hud link](457df212e1) , base is 15 days old, the onnx test xfailed on the pr but the xfail was removed so if you rebase itll surface, mps build failed so no mps tests were run on the pr ([comment](https://github.com/pytorch/pytorch/pull/127232#issuecomment-2152848758))
2024-06-06 15:44:47 +00:00
Tom Ritchford
457df212e1 Add OpInfo entry for alias_copy (#127232)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127232
Approved by: https://github.com/lezcano
2024-06-06 07:46:26 +00:00
Yuanhao Ji
e3ac61587a Enable UFMT on test/functorch (#123541)
Partially addresses #123062

Ran lintrunner on:

- `test/functorch`

Co-authored-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123541
Approved by: https://github.com/zou3519, https://github.com/ezyang
2024-04-15 06:21:52 +00:00
kshitij12345
1a3dbf57ca vmap: simple inplace batch rule (#113513)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113513
Approved by: https://github.com/zou3519
2023-11-21 18:55:54 +00:00
Guilherme Leobas
b8a10a8a2d Add batch decomposition for torch.unsafe_chunk (#110862)
This updates the docs as well to show `torch.unsafe_chunk`. Should the `unsafe_*` functions not appear in the docs at all?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110862
Approved by: https://github.com/kshitij12345, https://github.com/zou3519
2023-10-31 00:37:08 +00:00
Guilherme Leobas
974c47a20e remove flatten.using_ints, linalg_*, linear, log_softmax.int, logdet, special_* from xfail list (#110985)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110985
Approved by: https://github.com/Skylion007, https://github.com/zou3519
2023-10-20 18:15:39 +00:00
Guilherme Leobas
935f697754 remove movedim.intlist, tensor_split*, to.* from xfail list (#110999)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110999
Approved by: https://github.com/kshitij12345
2023-10-19 23:54:45 +00:00
Guilherme Leobas
e151307db0 Clean-up composite implicit ops for aten::isfinite, isreal and log_sigmoid (#110896)
Functions:
* aten::isfinite
* aten::log_sigmoid
* aten::isreal
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110896
Approved by: https://github.com/Skylion007, https://github.com/kshitij12345
2023-10-11 19:28:10 +00:00
Guilherme Leobas
0a580da582 Add batch decomposition for torch.linalg.eigh (#110640)
Closes https://github.com/pytorch/pytorch/issues/108481

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110640
Approved by: https://github.com/kshitij12345, https://github.com/zou3519
2023-10-09 21:36:49 +00:00
kshitij12345
b8a3998c23 add batch rule for missing inplace ops (#110692)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110692
Approved by: https://github.com/ezyang
2023-10-06 20:53:28 +00:00
kshitij12345
371d8ba599 vmap: decompose real and imag instead of registering batch rule (#110508)
Clean-up

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110508
Approved by: https://github.com/zou3519
2023-10-06 06:01:12 +00:00
vfdev-5
d9fe1713c3 Enabled batch rule decompositions for upsample*.vec ops (#110333)
Follow-up PR to https://github.com/pytorch/pytorch/pull/110172
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110333
Approved by: https://github.com/zou3519
2023-10-03 06:58:18 +00:00
vfdev-5
c62be12061 Added batch rules for _upsample_bi*2d_aa and _upsample_bi*2d_aa_backward (#110172)
Description:
- Added batch rules for `_upsample_bi*2d_aa` and `_upsample_bi*2d_aa_backward`
- Added few more test cases into `sample_inputs_upsample_aten`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110172
Approved by: https://github.com/kshitij12345, https://github.com/zou3519
2023-09-28 17:42:48 +00:00
SherlockNoMad
d997969b8b [Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#103107)
Differential Revision: D46459100

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103107
Approved by: https://github.com/angelayi, https://github.com/soulitzer
2023-06-12 19:18:49 +00:00
Nikita Shulga
20cf42de2c Revert "[Reland] Add sym_size/stride/numel/storage_offset to native_function.… (#100749)"
This reverts commit bb454891ed.
2023-05-16 18:17:02 -07:00
Sherlock Huang
bb454891ed [Reland] Add sym_size/stride/numel/storage_offset to native_function.… (#100749)
…yaml (#91… (#91919)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91919
Approved by: https://github.com/ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92402

Reviewed By: ezyang

Differential Revision: D42565586

Pulled By: SherlockNoMad

fbshipit-source-id: 1c2986e45307e076d239836a1b45441a9fa3c9d9
ghstack-source-id: 969f4928486e04c57aaf98e20e3c3ca946c51613

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100749
Approved by: https://github.com/zhxchen17, https://github.com/albanD
2023-05-12 22:57:42 +00:00
Li-Huai (Allan) Lin
c0674c439c [vmap] Add max_pool3d batch rule (#99522)
Also add a helper to integrate `max_pool2d_with_indices` and `max_pool3d_with_indices`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99522
Approved by: https://github.com/zou3519
2023-04-20 05:08:19 +00:00
Li-Huai (Allan) Lin
d31a00e713 [vmap] Add max_pool1d batch_rule (#99517)
Fixes #97558

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99517
Approved by: https://github.com/Skylion007, https://github.com/zou3519
2023-04-20 05:08:17 +00:00
kshitij12345
5e014bfbbd [vmap] ldl_factor: batch rule (#97518)
Ref https://github.com/pytorch/pytorch/issues/96855

Will look into `ldl_solve` separately.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97518
Approved by: https://github.com/zou3519
2023-03-25 04:37:32 +00:00
Danni Li
7711d24717 vmap support for linalg.lu_factor (#94328)
Differential Revision: D43093457

Fix #91415

### Expected behaviour

No fallback performance warning is raised.

```python
import torch
from functorch import vmap

x = torch.randn(4, 3, 2)
z = vmap(torch.linalg.lu_factor)(x)
```
Same behaviour as the equivalent for-loop:

```python
import torch

x = torch.randn(4, 3, 2)
results = []
for xi in x:
  y = torch.linalg.lu_factor(xi)
  results.append(y)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94328
Approved by: https://github.com/zou3519, https://github.com/Skylion007, https://github.com/Chillee
2023-03-23 14:18:57 +00:00
Kshiteej K
24c49dbf14 [functorch] batch rule : few decomposition ops (#96744)
Fixes https://github.com/pytorch/pytorch/issues/96741

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96744
Approved by: https://github.com/zou3519
2023-03-15 18:55:05 +00:00
Richard Zou
13011afb87 Fix vmap registration for t, t_ (#96539)
- t, t_ are not CompositeImplicitAutograd.
- They were previously registered in BatchRulesDecompositions.cpp.
- The only things that should get registered in BatchRulesDecompositions.cpp are CompositeImplicitAutograd ops.
- This PR moves their registrations out of there and into BatchRulesViews.cpp.

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96539
Approved by: https://github.com/srossross, https://github.com/kshitij12345, https://github.com/Chillee
2023-03-13 16:08:32 +00:00
Sean Ross-Ross
6650aac8ce move more operators to BatchRulesDecompositions (#93164)
Moving operators over to `BatchRulesDecompositions.cpp` to remove xfails. I noticed that composite-compliant does not mean inductor- or vmap-compliant, so I added more `isTensorSubclassLike` checks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93164
Approved by: https://github.com/lezcano, https://github.com/kshitij12345
2023-02-03 16:36:05 +00:00
PyTorch MergeBot
f7bd5d0ccb Revert "[Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#91… (#92402)"
This reverts commit 965f4ea3ba.

Reverted https://github.com/pytorch/pytorch/pull/92402 on behalf of https://github.com/zhxchen17 due to Caused a regression for an export model.
2023-02-03 03:12:43 +00:00
Sherlock Huang
965f4ea3ba [Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#91… (#92402)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91919
Approved by: https://github.com/ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92402
Approved by: https://github.com/ezyang
2023-02-01 04:47:49 +00:00
Khushi Agrawal
4c074ddfd2 [functorch][reland] vmap: bitwise operators (#92836)
Previous PR: #91971

Fixes: https://github.com/pytorch/functorch/issues/1069

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92836
Approved by: https://github.com/Chillee
2023-01-26 06:12:47 +00:00
Sean Ross-Ross
d354499faf adding some more missing ops to vmap (#92110)
removes some xfails that were a part of https://github.com/pytorch/functorch/issues/1009 and https://github.com/pytorch/functorch/issues/1087

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92110
Approved by: https://github.com/zou3519
2023-01-25 19:43:12 +00:00
PyTorch MergeBot
7ddcf4e0c3 Revert "[functorch] vmap: bitwise operators (#91971)"
This reverts commit e54f7b3edd.

Reverted https://github.com/pytorch/pytorch/pull/91971 on behalf of https://github.com/malfet due to Broke functorch bitwise, see e54f7b3edd
2023-01-23 14:52:16 +00:00
Khushi Agrawal
e54f7b3edd [functorch] vmap: bitwise operators (#91971)
Fixes https://github.com/pytorch/functorch/issues/1069

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91971
Approved by: https://github.com/kshitij12345, https://github.com/Chillee
2023-01-23 09:03:13 +00:00
Henry Cheng
b6cfd62285 vmap support for torch.linalg.vander (#91749)
Adds vmap support for torch.linalg.vander in a similar manner to how view_as_complex is implemented.

#91700

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91749
Approved by: https://github.com/lezcano
2023-01-19 14:49:54 +00:00
PyTorch MergeBot
befe815466 Revert "Add sym_size/stride/numel/storage_offset to native_function.yaml (#91919)"
This reverts commit 0388400f3f.

Reverted https://github.com/pytorch/pytorch/pull/91919 on behalf of https://github.com/atalman due to Break internal build
2023-01-17 21:03:18 +00:00
Sherlock Huang
0388400f3f Add sym_size/stride/numel/storage_offset to native_function.yaml (#91919)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91919
Approved by: https://github.com/ezyang
2023-01-17 03:39:57 +00:00
Sean Ross-Ross
0100293a7b feat: adding greater_equal Scalar variant (#91324)
Fixes https://github.com/pytorch/functorch/issues/1080

```py
import torch
from functorch import vmap

def f(x):
    return torch.greater_equal(torch.cumsum(x, dim=0), .5 * 10)

x = torch.randn([10,10])
vmap(f)(x)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91324
Approved by: https://github.com/zou3519
2023-01-05 20:25:38 +00:00
Joel Schlosser
1effabe257 Support per-parameter test decoration (#91658)
Continuation of #79979.

Fixes #79161

This PR does the following:
* Expands the `parametrize_fn()` signature from returning a 3-tuple of `(test, test_name, param_kwargs)` to returning a 4-tuple of `(test, test_name, param_kwargs, decorator_fn)`. Expected signature for the addition is `decorator_fn(param_kwargs) -> List[decorator]` i.e. given the full set of test params, return a list of decorators to apply.
    * `modules`, `ops`, and `parametrize` now fit the new signature, returning `decorator_fn`s instead of applying decorators themselves.
    * `instantiate_parametrized_tests()` and `instantiate_device_type_tests()` now call the returned `decorator_fn`, passing in the full set of `param_kwargs` (after composition + `device` / `dtype` additions) and applying the returned decorators.
    * Composing multiple `parametrize_fn`s also composes the corresponding `decorator_fn`s; the composed `decorator_fn` simply concatenates the decorator lists returned by the constituents.
* Expands `DecorateInfo.is_active` to support callables:
```python
DecorateInfo(
    unittest.expectedFailure, "TestOps", "test_python_ref_executor",
    device_type='cuda', active_if=lambda params: params['executor'] == 'nvfuser'
),
```
* Adds several tests to `test/test_testing.py` ensuring proper decoration using `@parametrize`, `@modules`, and `@ops`.
* (minor) Fixes a couple `ModuleInfo` naming oddities uncovered during testing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91658
Approved by: https://github.com/malfet
2023-01-04 21:08:32 +00:00
Sean Ross-Ross
cb3204823e adding test to audit CompositeImplicitAutograd ops that do not have a batching rule (#91367)
Fixes https://github.com/pytorch/functorch/issues/1087

It looks like there are `306` rules that should be looked into
```
test/functorch/test_vmap_registrations.py .x.....xxxxxxx.x.x.x.x.x.x.x.x........xx.x.x..x.x.xxx...xxxx.x.x.x........x.........xxxxx..x..x.....xx...xx.....xxx.xxxxxxxxxxxxxxxxx.. [ 24%]
.........x.x......x.xxxxxx..x..xx.x.xxx.x.......x.xxx.xx..xxx.xxx...xxxxx.x....xxxxxxxxxxxxxxx....xx.xxx.xx.x...xx...xx...xxxxxx...xxxxx..x...xxxxxxxxxxxx..xx..xx.xx.x..xxxx..xx [ 56%]
.xx..x.x....xxxxxx.x.xx...xxxxx.xx...x..x.x.xx...xx.xxxxxx.xxxxxx..x........xxxxxxxx..xxxxxxxx..xx.xxxxxxxxxxxxxxxxxxxxxxx..........xxxx.xxxx.........xxxxxxxx..xxx..xxx.x.x.x.xx [ 88%]
xx.xxx.x......xxx.x.xxxxxxxx....x......xxxxxxxxx.xx.x.x.x.......xx                                                                                                                [100%]

=================================================================== 249 passed, 1185 deselected, 306 xfailed in 3.17s ===================================================================

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91367
Approved by: https://github.com/zou3519
2023-01-03 04:21:39 +00:00
Sean Ross-Ross
dcce5677fd Adding test when registering a batching rule for a CompositeImplicitAutograd operation (#89465)
This is a follow-on from https://github.com/pytorch/pytorch/pull/88771, which should close out https://github.com/pytorch/functorch/issues/1009. I've got another PR moving some operators over: https://github.com/pytorch/pytorch/pull/89762.

You can see the new test file being picked up in [this run](https://github.com/pytorch/pytorch/actions/runs/3617298059/jobs/6096218583#step:10:472).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89465
Approved by: https://github.com/zou3519
2022-12-12 16:21:07 +00:00