Commit Graph

224 Commits

Edward Z. Yang
c2c269c10a Convert MetaConverter's tensor memo into a weak value dictionary. (#87911)
This is in preparation for unifying fake tensor converter and meta converter's memo tables.
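A minimal standalone sketch of the weak-value-dictionary behavior this relies on (plain Python objects stand in for tensors here; this is not the MetaConverter code itself):

```python
import weakref

class MetaLike:
    """Stand-in for a memoized meta tensor (torch.Tensor also supports weakrefs)."""

memo = weakref.WeakValueDictionary()

value = MetaLike()
memo["source-tensor-key"] = value
print("source-tensor-key" in memo)  # True while a strong reference to `value` exists

del value
print("source-tensor-key" in memo)  # False -- the entry vanishes once the value dies
```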

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87911
Approved by: https://github.com/eellison
2022-10-28 21:05:13 +00:00
Sherlock Huang
13de4d2137 Meta OpInfo Test for stride correctness (#87849)
Failing test logs here:
https://gist.github.com/SherlockNoMad/a7e132f3cb4152900f8a6d7df358c59e
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87849
Approved by: https://github.com/eellison
2022-10-28 03:40:14 +00:00
Sherlock Huang
b21fe312c0 Fix meta for index_add and index_put (#87775)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87775
Approved by: https://github.com/ezyang, https://github.com/ngimel
2022-10-26 20:33:23 +00:00
Sherlock Huang
f3f1b44778 Fix meta for meta_fill_ (#87493)
Existing meta_fill_ doesn't correctly reflect the aliasing relationship for aten.fill. A new meta tensor should be returned instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87493
Approved by: https://github.com/eellison, https://github.com/bdhirsh
2022-10-22 12:41:03 +00:00
albanD
9bd6ea5d76 Add meta inplace testing (#87291)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87291
Approved by: https://github.com/ezyang
2022-10-20 14:20:16 +00:00
Nikita Karetnikov
91b3cd0b5a [primTorch] Add a ref for narrow_copy (#86748)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86748
Approved by: https://github.com/mruberry
2022-10-17 10:16:05 +00:00
Sherlock Huang
ef045695e0 Fix decomp for huber_loss_backward (#86955)
Fixes https://github.com/pytorch/pytorch/issues/86846

aten.huber_loss_backward calls aten.huber_loss_backward.out in its CompositeExplicitAutograd kernel.
The decomp was mistakenly registered for both aten.huber_loss_backward.default and aten.huber_loss_backward.out.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86955
Approved by: https://github.com/Chillee
2022-10-14 18:53:02 +00:00
Brian Hirsh
e17732b234 [test] add cross-ref tests for python meta kernels (#86228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86228
Approved by: https://github.com/albanD
2022-10-13 14:14:26 +00:00
Brian Hirsh
3376050543 fix type promotion for group_norm composite C++ kernel (#86607)
The Python decomp for `native_group_norm` is correct in more cases than the C++ composite. Updating the tests to fail properly in this case was more annoying than just fixing the C++ decomp, so I fixed it here.

When the input tensor had a dtype with less precision than float32, the C++ decomp would unconditionally set the mean/variance to float32, which was wrong.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86607
Approved by: https://github.com/albanD
2022-10-13 14:14:22 +00:00
Fabio Rocha
493ded249e [primTorch] decomposition for bucketize (#86366)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86366
Approved by: https://github.com/mruberry
2022-10-12 12:25:42 +00:00
albanD
6d7235e3d3 enable cpu meta testing (#86226)
Just add the relevant skips for now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86226
Approved by: https://github.com/ezyang
2022-10-05 00:15:13 +00:00
lezcano
787028cadb Implement col2im decomposition and fix im2col and add a few preconditions (#85541)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85541
Approved by: https://github.com/jansel
2022-09-30 09:31:53 +00:00
Edward Z. Yang
224b689cf1 Handling for getitem with boolean in meta, and other improvements (#85807)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85807
Approved by: https://github.com/albanD
2022-09-28 16:53:53 +00:00
Nikolay Korovaiko
b4f9b68225 should_check_strides (#85416)
This PR ports `should_check_strides` checks from `origin/symbolic-shapes` to `master` as part of our dynamic shapes landing effort.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85416
Approved by: https://github.com/ezyang
2022-09-23 04:55:50 +00:00
kshitij12345
56a41b5998 [composite compliance] ctc_loss (#84752)
Ref: #69991

I have mixed feelings about adding new (private) operators. Backend writers will have to override them as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84752
Approved by: https://github.com/zou3519
2022-09-22 00:21:11 +00:00
lezcano
d17b144e65 Adding multigammaln ref and fix arange (#85153)
Partially based on https://github.com/pytorch/pytorch/pull/83662.

I'll help land this one, as Rob does not work in the PyTorch project
anymore

I removed the data-dependent check for the args, as data dependencies
are bad for many reasons (and it was failing when the input has NaNs).

It also registers arange as a decomposition, and fixes the naming of its
args.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85153
Approved by: https://github.com/mruberry, https://github.com/ngimel
2022-09-20 17:52:56 +00:00
kshitij12345
4f6027b78a [opinfo] narrow: add new sample for Tensor overload (#84785)
`narrow` accepts its `start` argument as a Tensor. We add a sample to test this overload.
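A quick illustration of the overload being sampled, assuming a build where `start` may be a 0-dim integer tensor:

```python
import torch

x = torch.arange(10)
# The Tensor overload: `start` is a 0-dim integer tensor rather than a Python int.
out = torch.narrow(x, 0, torch.tensor(2), 3)
print(out)  # tensor([2, 3, 4])
```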

NOTE: This leads to a bunch of failed tests and hence the skips and xfails
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84785
Approved by: https://github.com/zou3519
2022-09-12 16:59:08 +00:00
Ivan Yashchuk
01c54ad6de Remove deprecated torch.eig (#70982)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.eig`.
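For anyone migrating, the usual replacement (context only, not part of this commit) is `torch.linalg.eig`:

```python
import torch

A = torch.randn(3, 3)
# torch.eig is gone; torch.linalg.eig returns complex eigenvalues/eigenvectors.
eigenvalues, eigenvectors = torch.linalg.eig(A)
print(eigenvalues.dtype)  # torch.complex64
```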

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70982
Approved by: https://github.com/Lezcano, https://github.com/malfet
2022-09-09 21:31:57 +00:00
Fabio Rocha
91a5f52f51 Decomp for nn.functional.grid_sampler_2d (#84350)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84350
Approved by: https://github.com/jansel, https://github.com/Lezcano
2022-09-05 21:33:26 +00:00
soulitzer
bfdfeecd15 Add per-op MPS gradient tests and update skips (#84242)
Follow up:
- ~Remove non-float dtypes from allow-list for gradients~
- ~Map dtypes to short-hand so there aren't so many lines, i.e. float16 should be f16.~
- ~There were a lot of linting issues that flake8 wouldn't format for me, so I reformatted with black. This makes the diff a little trickier to parse.~

Observations:
- there are entries in the allow-list that weren't there before
- some forwards that were previously passing now fail with requires_grad=True
- Because the allow list does not know about variants, a special skip was added for that in the block list

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84242
Approved by: https://github.com/kulinseth, https://github.com/malfet
2022-09-01 16:41:52 +00:00
Sherlock Huang
ef3ab31f1c Decomp for aten.im2col (#84303)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84303
Approved by: https://github.com/jansel, https://github.com/ngimel
2022-09-01 00:06:35 +00:00
Edward Z. Yang
ad44670fa1 Back out "Revert D38984222: Don't introduce new overload for SymInt (#83628)" (#84173)
Also Back out "Revert D39075159: [acc_tensor] Use SymIntArrayRef for overloaded empty.memory_format's signature"

Original commit changeset: dab4a9dba4fa
Original commit changeset: dcaf16c037a9

Original Phabricator Diff: D38984222
Original Phabricator Diff: D39075159

Also update Metal registrations for C++ registration changes.

Also update NNPI registration to account for tightened schema checking

Differential Revision: [D39084762](https://our.internmc.facebook.com/intern/diff/D39084762/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39084762/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84173
Approved by: https://github.com/Krovatkin
2022-08-29 18:01:07 +00:00
PyTorch MergeBot
c7edcd6968 Revert "Don't introduce new overload for SymInt (#83628)"
This reverts commit 9790d90e4b.

Reverted https://github.com/pytorch/pytorch/pull/83628 on behalf of https://github.com/malfet due to Breaks internal builds, see D39076487
2022-08-27 01:23:17 +00:00
Edward Z. Yang
9790d90e4b Don't introduce new overload for SymInt (#83628)
Previously, we introduced new SymInt overloads for every function we wanted.  This led to a lot of boilerplate, and also a lot of confusion about how the overloads needed to be implemented.

This PR takes a simpler but more risky approach: just take the original function and changes its ints to SymInts.

This is BC-breaking in the following ways:

* The C++ API for registering implementations for aten operators will change from int64_t to SymInt whenever you make this change. Code-generated registrations in PyTorch do not change, as codegen handles the translation automatically, but manual registrations will need to follow the change. Typically, if you now accept a SymInt where you previously only took int64_t, you have to convert it back manually. This will definitely break XLA; see companion PR https://github.com/pytorch/xla/pull/3914. Note that not all dispatch keys get the automatic translation; all the composite keys and Meta keys are modified to take SymInt directly (because they should handle them directly), and so there are adjustments for this.

This is not BC-breaking in the following ways:

* The user-facing C++ API remains compatible. Even if a function changes from int to SymInt, the default C++ binding still takes only ints (e.g., at::empty(IntArrayRef, ...)). To call with SymInts, you must call at::empty_symint instead. This involved adding two more signatures to CppSignatureGroup; in many cases I refactored code to iterate over all signatures in the group instead of hard-coding the two that previously existed.
* This is TorchScript compatible; internally we treat SymInts as ints so there is no change to what happens at runtime in TorchScript. In particular, it's OK to reference an empty schema by its old type (using int types): as long as you're not doing string equality (which you shouldn't be), these parse to the same underlying type.

Structure of the PR:

* The general strategy of this PR is that, even when you write `SymInt` inside `native_functions.yaml`, sometimes, we will treat it *as if* it were an `int`. This idea pervades the codegen changes, where we have a translation from SymInt to c10::SymInt or int64_t, and this is controlled by a symint kwarg which I added and then audited all call sites to decide which I wanted. Here are some of the major places where we pick one or the other:
  * The C++ FunctionSchema representation represents `SymInt` as `int`. There are a few places we do need to know that we actually have a SymInt and we consult `real_type()` to get the real type in this case. In particular:
    * When we do schema validation of C++ operator registration, we must compare against the true schema (as the C++ API will provide `c10::SymInt`, and this will only be accepted if the schema is `SymInt`). This is handled with cloneWithRealTypes before we check for schema differences.
    * In `toIValue` argument parsing, we parse against the true schema value. For backwards compatibility reasons, I do still accept ints in many places where Layout/SymInt/etc were expected. (Well, accepting int where SymInt is expected is not BC, it's just the right logic!)
  * In particular, because SymInt never shows up as type() in FunctionSchema, this means that we no longer need a dedicated Tag::SymInt. This is good, because SymInts never show up in mobile anyway.
* Changes to functorch/aten are mostly about tracking changes to the C++ API registration convention. Additionally, since SymInt overloads no longer exist, registrations for SymInt implementations are deleted. In many cases, the old implementations did not properly support SymInts; I did not add any new functionality with this PR, but I did try to annotate with TODOs where there is work to do. Finally, because the signature of the `native::` API changed from int to SymInt, I needed to find alternative APIs for people who were directly calling these functions. Typically, I insert a new dispatch call when perf doesn't matter, or use the `at::compositeexplicitautograd` namespace to handle other cases.
* The change to `make_boxed_from_unboxed_functor.h` is so that we accept a plain IntList IValue anywhere a SymIntList is expected; these are read-only arguments so covariant typing is OK.
* I changed how the unboxing logic works slightly. Previously, we interpreted the C++ type for Layout/etc. directly as the IntType JIT type, which worked well because the incoming IValue is tagged as an integer. Now, we interpret the C++ type for Layout as its true type, e.g., LayoutType (change to `jit_type.h`), but then we accept an int IValue for it anyway. This makes it symmetric with SymInt, where we interpret the C++ type as SymIntType, and then accept SymInt and int IValues for it.
* I renamed the `empty.names` overload to `empty_names` to make it less confusing (I kept mixing it up with the real empty overload)
* I deleted the `empty.SymInt` overload, which ended up killing a pile of functions. (This was originally a separate PR but the profiler expect test was giving me grief so I folded it in.)
* I deleted the LazyDynamicOpsTest tests. These were failing after these changes, and I couldn't figure out why they used to be passing: they make use of `narrow_copy` which didn't actually support SymInts; they were immediately converted to ints.
* I bashed LTC into working. The patches made here are not the end of the story. The big problem is that SymInt translates into Value, but what if you have a list of SymInt? This cannot be conveniently represented in the IR today, since variadic Values are not supported. To work around this, I translate SymInt[] into plain int[] (this is fine for tests because LTC dynamic shapes never actually worked); but this will need to be fixed for proper LTC SymInt support. The LTC codegen also looked somewhat questionable; I added comments based on my code reading.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83628
Approved by: https://github.com/albanD, https://github.com/bdhirsh
2022-08-26 01:35:40 +00:00
PyTorch MergeBot
a7edf71360 Revert "Don't introduce new overload for SymInt (#83628)"
This reverts commit 8fae7027b3.

Reverted https://github.com/pytorch/pytorch/pull/83628 on behalf of https://github.com/malfet due to breaking internal builds, see https://www.internalfb.com/diff/D38984222
2022-08-25 00:49:40 +00:00
Edward Z. Yang
8fae7027b3 Don't introduce new overload for SymInt (#83628)
Previously, we introduced new SymInt overloads for every function we wanted.  This led to a lot of boilerplate, and also a lot of confusion about how the overloads needed to be implemented.

This PR takes a simpler but more risky approach: just take the original function and changes its ints to SymInts.

This is BC-breaking in the following ways:

* The C++ API for registering implementations for aten operators will change from int64_t to SymInt whenever you make this change. Code-generated registrations in PyTorch do not change, as codegen handles the translation automatically, but manual registrations will need to follow the change. Typically, if you now accept a SymInt where you previously only took int64_t, you have to convert it back manually. This will definitely break XLA; see companion PR https://github.com/pytorch/xla/pull/3914. Note that not all dispatch keys get the automatic translation; all the composite keys and Meta keys are modified to take SymInt directly (because they should handle them directly), and so there are adjustments for this.

This is not BC-breaking in the following ways:

* The user-facing C++ API remains compatible. Even if a function changes from int to SymInt, the default C++ binding still takes only ints (e.g., at::empty(IntArrayRef, ...)). To call with SymInts, you must call at::empty_symint instead. This involved adding two more signatures to CppSignatureGroup; in many cases I refactored code to iterate over all signatures in the group instead of hard-coding the two that previously existed.
* This is TorchScript compatible; internally we treat SymInts as ints so there is no change to what happens at runtime in TorchScript. In particular, it's OK to reference an empty schema by its old type (using int types): as long as you're not doing string equality (which you shouldn't be), these parse to the same underlying type.

Structure of the PR:

* The general strategy of this PR is that, even when you write `SymInt` inside `native_functions.yaml`, sometimes, we will treat it *as if* it were an `int`. This idea pervades the codegen changes, where we have a translation from SymInt to c10::SymInt or int64_t, and this is controlled by a symint kwarg which I added and then audited all call sites to decide which I wanted. Here are some of the major places where we pick one or the other:
  * The C++ FunctionSchema representation represents `SymInt` as `int`. There are a few places we do need to know that we actually have a SymInt and we consult `real_type()` to get the real type in this case. In particular:
    * When we do schema validation of C++ operator registration, we must compare against the true schema (as the C++ API will provide `c10::SymInt`, and this will only be accepted if the schema is `SymInt`). This is handled with cloneWithRealTypes before we check for schema differences.
    * In `toIValue` argument parsing, we parse against the true schema value. For backwards compatibility reasons, I do still accept ints in many places where Layout/SymInt/etc were expected. (Well, accepting int where SymInt is expected is not BC, it's just the right logic!)
  * In particular, because SymInt never shows up as type() in FunctionSchema, this means that we no longer need a dedicated Tag::SymInt. This is good, because SymInts never show up in mobile anyway.
* Changes to functorch/aten are mostly about tracking changes to the C++ API registration convention. Additionally, since SymInt overloads no longer exist, registrations for SymInt implementations are deleted. In many cases, the old implementations did not properly support SymInts; I did not add any new functionality with this PR, but I did try to annotate with TODOs where there is work to do. Finally, because the signature of the `native::` API changed from int to SymInt, I needed to find alternative APIs for people who were directly calling these functions. Typically, I insert a new dispatch call when perf doesn't matter, or use the `at::compositeexplicitautograd` namespace to handle other cases.
* The change to `make_boxed_from_unboxed_functor.h` is so that we accept a plain IntList IValue anywhere a SymIntList is expected; these are read-only arguments so covariant typing is OK.
* I changed how the unboxing logic works slightly. Previously, we interpreted the C++ type for Layout/etc. directly as the IntType JIT type, which worked well because the incoming IValue is tagged as an integer. Now, we interpret the C++ type for Layout as its true type, e.g., LayoutType (change to `jit_type.h`), but then we accept an int IValue for it anyway. This makes it symmetric with SymInt, where we interpret the C++ type as SymIntType, and then accept SymInt and int IValues for it.
* I renamed the `empty.names` overload to `empty_names` to make it less confusing (I kept mixing it up with the real empty overload)
* I deleted the `empty.SymInt` overload, which ended up killing a pile of functions. (This was originally a separate PR but the profiler expect test was giving me grief so I folded it in.)
* I deleted the LazyDynamicOpsTest tests. These were failing after these changes, and I couldn't figure out why they used to be passing: they make use of `narrow_copy` which didn't actually support SymInts; they were immediately converted to ints.
* I bashed LTC into working. The patches made here are not the end of the story. The big problem is that SymInt translates into Value, but what if you have a list of SymInt? This cannot be conveniently represented in the IR today, since variadic Values are not supported. To work around this, I translate SymInt[] into plain int[] (this is fine for tests because LTC dynamic shapes never actually worked); but this will need to be fixed for proper LTC SymInt support. The LTC codegen also looked somewhat questionable; I added comments based on my code reading.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83628
Approved by: https://github.com/albanD, https://github.com/bdhirsh
2022-08-23 22:04:07 +00:00
Edward Z. Yang
b9c8db435b Allow map location to meta device (#82603)
Fixes https://github.com/pytorch/pytorch/issues/82412
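A small usage sketch of what this enables (assuming a build containing this change):

```python
import io
import torch

buf = io.BytesIO()
torch.save(torch.randn(4, 4), buf)
buf.seek(0)

# Map the checkpoint straight onto the meta device: shapes/dtypes, no real data.
t = torch.load(buf, map_location="meta")
print(t.device, t.shape)  # meta torch.Size([4, 4])
```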

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82603
Approved by: https://github.com/eellison
2022-08-08 19:56:59 +00:00
Edward Z. Yang
a64c981d09 Fixed QuantizedMeta creation via empty (#82587)
Need to properly audit the rest of quantization and meta tensors.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82587
Approved by: https://github.com/malfet, https://github.com/khabinov
2022-08-01 20:40:01 +00:00
kshitij12345
e93b5210ec [composite compliance] allclose, linalg_eig (#82437)
Ref: #69991

Make `allclose` CompositeExplicit, as it calls `item` (we can't get away from it), which makes it non-Composite-Compliant.

`linalg_eig` backward passes CompositeCompliance, as it calls `allclose`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82437
Approved by: https://github.com/zou3519
2022-08-01 18:01:15 +00:00
soulitzer
16093a1d81 Fix primtorch out_wrapper semantics for factory functions (#82375)
This PR:
- introduces a new OpInfo attribute, `is_factory_function`
- updates OpInfo test_out to handle the case when `is_factory_function=True`
- corrects the primtorch out_wrapper
- updates sample inputs for arange, linspace, logspace to not explicitly pass in dtype or device (having this sample is necessary for the test to get triggered)

Fixes https://github.com/pytorch/pytorch/issues/82364

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82375
Approved by: https://github.com/ezyang, https://github.com/ngimel
2022-07-29 00:57:57 +00:00
Elias Ellison
1c0f7bd6d2 Enable complex for meta tensors (#79975)
There weren't really any fundamental blockers
- add support for `aten::complex`
- update `angle` for complex
- remove the error in the fallback kernel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79975
Approved by: https://github.com/ezyang
2022-07-27 22:19:14 +00:00
soulitzer
80e2d5704b Add OpInfo and ref for linspace and logspace (#81826)
Implements linspace with arange, and logspace with linspace.
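The core idea, as a rough sketch (hypothetical helper names, not the actual `torch._refs` code):

```python
import torch

def linspace_via_arange(start, end, steps, dtype=torch.float32):
    # steps evenly spaced points: start + i * (end - start) / (steps - 1)
    if steps == 1:
        return torch.full((1,), start, dtype=dtype)
    step = (end - start) / (steps - 1)
    return (start + torch.arange(steps, dtype=torch.float64) * step).to(dtype)

def logspace_via_linspace(start, end, steps, base=10.0, dtype=torch.float32):
    # logspace is just base ** linspace
    return (base ** linspace_via_arange(start, end, steps, torch.float64)).to(dtype)

print(linspace_via_arange(0, 1, 5))    # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
print(logspace_via_linspace(0, 2, 3))  # tensor([  1.,  10., 100.])
```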
- Implements a more precise path in linspace's ref when the dtype is integral, to avoid off-by-one issues when the output of the computation is cast to int. The trade-off is an increased chance of overflow.
- Files several issues (#82242, #82230, #81996) on preexisting problems with linspace and logspace. These mainly concern integral dtypes; the affected tests are xfailed in this PR.
- Fixes the check that the reference implementation is closer to the precise implementation than the torch implementation, so that it also updates the dtype kwarg to the precise dtype.

TODO:
- ~support negative bases~ (not in this PR)
- ~support complex. Since arange does not support complex, but linspace does, one solution is to just call linspace separately on the real and imag components and sum the results in the end~ (not in this PR)
- ~default dtypes need to be explicitly handled since computation is done in a different dtype than result~ (done)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81826
Approved by: https://github.com/ngimel
2022-07-27 05:53:06 +00:00
soulitzer
6b0ca72b61 Add prim, ref, and OpInfo for arange (#81734)
Per title.
- See https://github.com/pytorch/pytorch/issues/81959 for discussion on overloading

TODO:
- ~Handle remaining TensorOptions: layout, pin_memory (won't do in this PR)~
- ~Add sample inputs for floating point and complex numbers (done for floating point, won't do for complex in this PR)~
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81734
Approved by: https://github.com/ngimel
2022-07-26 19:35:02 +00:00
PyTorch MergeBot
dc57624622 Revert "Add prim, ref, and OpInfo for arange (#81734)"
This reverts commit 67dc9abbd8.

Reverted https://github.com/pytorch/pytorch/pull/81734 on behalf of https://github.com/kit1980 due to Broke trunk slow tests 67dc9abbd8
2022-07-26 05:56:09 +00:00
soulitzer
67dc9abbd8 Add prim, ref, and OpInfo for arange (#81734)
Per title.
- See https://github.com/pytorch/pytorch/issues/81959 for discussion on overloading

TODO:
- ~Handle remaining TensorOptions: layout, pin_memory (won't do in this PR)~
- ~Add sample inputs for floating point and complex numbers (done for floating point, won't do for complex in this PR)~
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81734
Approved by: https://github.com/ngimel
2022-07-25 23:05:01 +00:00
Robert
47b4ec8aa7 Conv2d shape calculation for meta tensors (#79834)
Fixes #79512

This PR adds support for convolutional meta modules and computes the output shape correctly for a meta input tensor.
Currently in progress, no tests written so far.
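Since a meta kernel only needs shapes, the relevant piece is the standard Conv2d output-size formula; a small standalone sketch (not the PR's code):

```python
import math

def conv2d_output_hw(h_in, w_in, kernel_size, stride=(1, 1), padding=(0, 0), dilation=(1, 1)):
    # out = floor((in + 2*pad - dilation*(kernel-1) - 1) / stride + 1), per spatial dim
    def one(size, k, s, p, d):
        return math.floor((size + 2 * p - d * (k - 1) - 1) / s + 1)
    return (one(h_in, kernel_size[0], stride[0], padding[0], dilation[0]),
            one(w_in, kernel_size[1], stride[1], padding[1], dilation[1]))

print(conv2d_output_hw(32, 32, (3, 3), stride=(2, 2)))  # (15, 15)
```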

**Feature implementations**:
- [x] `Conv1d`
- [x] `Conv2d`
- [x] `Conv3d`

**Tests**:
- [x] `Conv1d`
- [x] `Conv2d`
- [x] `Conv3d`

cc @albanD @anjali411
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79834
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-23 05:58:56 +00:00
samdow
2ac24675cc get rid of push_torch_{dispatch, function}_mode (#78215)
Currently we have two ways of doing the same thing for torch dispatch and function modes:
`with push_torch_dispatch_mode(X)` (or `with X.push(...)`)
is now equivalent to
`with X()`

This removes the first API (which is older and private, so we don't need to go through a deprecation cycle).
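A minimal sketch of the surviving style (the module path is private and may differ across versions):

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LoggingMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(f"dispatch: {func}")
        return func(*args, **(kwargs or {}))

# New style: just instantiate the mode and use it as a context manager.
with LoggingMode():
    torch.ones(2) + 1
```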

There is some risk here that this might land race with a PR that uses the old API but in general it seems like most are using the `with X()` API or `enable_torch_dispatch_mode(X())` which isn't getting removed.

EDIT: left the `with X.push(...)` API since there were ~3 land races with that over the past day or so. But made it give a warning and ask users to use the other API
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78215
Approved by: https://github.com/ezyang
2022-07-22 18:56:37 +00:00
soulitzer
f595467e5c Reenable slow gradcheck and make it pass (#80514)
Context: For a while slow gradcheck CI was skipping nearly all tests and this hid the fact that it should've been failing and timing out (10+h runtime for TestGradients). The CI configuration has since been fixed to correct this, revealing the test failures. This PR reenables slow gradcheck CI and makes it pass again.

This PR:
- makes slow and failing tests run in fast gradcheck mode only (a small example follows this list)
- reduces the input size for slow gradcheck only for unary/binary ufuncs (alternatively, the test is skipped entirely)
- skips entire test files on the slow gradcheck runner if they don't use gradcheck (test_ops, test_meta, test_decomp, test_ops_jit)
- reduces the input size for some ops
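For reference, a tiny illustration of invoking gradcheck in fast mode (illustrative only, not test code from this PR):

```python
import torch
from torch.autograd import gradcheck

x = torch.randn(4, dtype=torch.double, requires_grad=True)
# fast_mode=True is the cheaper check the failing tests are being moved to;
# the default slow mode reconstructs the full Jacobian column by column.
assert gradcheck(torch.sin, (x,), fast_mode=True)
```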

Follow ups:
1. Investigate slow mode failures https://github.com/pytorch/pytorch/issues/80411
2. See if we can re-enable slow gradcheck tests for some of the slow tests by reducing the sizes of their inputs

The following are failing in slow mode, they are now running in fast mode only.
```
test_fn_fwgrad_bwgrad___rmod___cuda_float64
test_fn_fwgrad_bwgrad_linalg_householder_product_cuda_complex128
test_fn_fwgrad_bwgrad__masked_prod_cuda_complex128
test_fn_fwgrad_bwgrad__masked_prod_cuda_float64
test_fn_fwgrad_bwgrad_linalg_matrix_power_cuda_complex128
test_fn_fwgrad_bwgrad_cat_cuda_complex128
test_fn_fwgrad_bwgrad_linalg_lu_factor_ex_cuda_float64
test_fn_fwgrad_bwgrad_copysign_cuda_float64
test_fn_fwgrad_bwgrad_cholesky_inverse_cuda_complex128
test_fn_fwgrad_bwgrad_float_power_cuda_complex128
test_fn_fwgrad_bwgrad_fmod_cuda_float64
test_fn_fwgrad_bwgrad_float_power_cuda_float64
test_fn_fwgrad_bwgrad_linalg_lu_cuda_float64
test_fn_fwgrad_bwgrad_remainder_cuda_float64
test_fn_fwgrad_bwgrad_repeat_cuda_complex128
test_fn_fwgrad_bwgrad_prod_cuda_complex128
test_fn_fwgrad_bwgrad_slice_scatter_cuda_float64
test_fn_fwgrad_bwgrad_tile_cuda_complex128
test_fn_fwgrad_bwgrad_pow_cuda_float64
test_fn_fwgrad_bwgrad_pow_cuda_complex128
test_fn_fwgrad_bwgrad_fft_*
test_fn_fwgrad_bwgrad_zero__cuda_complex128
test_fn_gradgrad_linalg_lu_factor_cuda_float64
test_fn_grad_div_trunc_rounding_cuda_float64
test_fn_grad_div_floor_rounding_cuda_float64
```

Marks the OpInfos for the following ops that run slowly in slow gradcheck as `fast_gradcheck` only (the left column represents runtime in seconds):
```
0  918.722  test_fn_fwgrad_bwgrad_nn_functional_conv_transpose3d_cuda_float64
1  795.042  test_fn_fwgrad_bwgrad_nn_functional_unfold_cuda_complex128
2  583.63  test_fn_fwgrad_bwgrad_nn_functional_max_pool3d_cuda_float64
3  516.946  test_fn_fwgrad_bwgrad_svd_cuda_complex128
4  503.179  test_fn_fwgrad_bwgrad_linalg_svd_cuda_complex128
5  460.985  test_fn_fwgrad_bwgrad_linalg_lu_cuda_complex128
6  401.04  test_fn_fwgrad_bwgrad_linalg_lstsq_grad_oriented_cuda_complex128
7  353.671  test_fn_fwgrad_bwgrad_nn_functional_max_pool2d_cuda_float64
8  321.903  test_fn_fwgrad_bwgrad_nn_functional_gaussian_nll_loss_cuda_float64
9  307.951  test_fn_fwgrad_bwgrad_stft_cuda_complex128
10  266.104  test_fn_fwgrad_bwgrad_svd_lowrank_cuda_float64
11  221.032  test_fn_fwgrad_bwgrad_istft_cuda_complex128
12  183.741  test_fn_fwgrad_bwgrad_lu_unpack_cuda_complex128
13  132.019  test_fn_fwgrad_bwgrad_nn_functional_unfold_cuda_float64
14  125.343  test_fn_fwgrad_bwgrad_nn_functional_pad_constant_cuda_complex128
15  124.2  test_fn_fwgrad_bwgrad_kron_cuda_complex128
16  123.721  test_fn_fwgrad_bwgrad_pca_lowrank_cuda_float64
17  121.074  test_fn_fwgrad_bwgrad_nn_functional_max_unpool3d_cuda_float64
18  119.387  test_fn_fwgrad_bwgrad_rot90_cuda_complex128
19  112.889  test_fn_fwgrad_bwgrad__masked_normalize_cuda_complex128
20  107.541  test_fn_fwgrad_bwgrad_dist_cuda_complex128
21  106.727  test_fn_fwgrad_bwgrad_diff_cuda_complex128
22  104.588  test_fn_fwgrad_bwgrad__masked_cumprod_cuda_complex128
23  100.135  test_fn_fwgrad_bwgrad_nn_functional_feature_alpha_dropout_with_train_cuda_float64
24  88.359  test_fn_fwgrad_bwgrad_mH_cuda_complex128
25  86.214  test_fn_fwgrad_bwgrad_nn_functional_max_unpool2d_cuda_float64
26  83.037  test_fn_fwgrad_bwgrad_nn_functional_bilinear_cuda_float64
27  79.987  test_fn_fwgrad_bwgrad__masked_cumsum_cuda_complex128
28  77.822  test_fn_fwgrad_bwgrad_diag_embed_cuda_complex128
29  76.256  test_fn_fwgrad_bwgrad_mT_cuda_complex128
30  74.039  test_fn_fwgrad_bwgrad_linalg_lu_solve_cuda_complex128
```
```
0  334.142  test_fn_fwgrad_bwgrad_unfold_cuda_complex128
1  312.791  test_fn_fwgrad_bwgrad_linalg_lu_factor_cuda_complex128
2  121.963  test_fn_fwgrad_bwgrad_nn_functional_max_unpool3d_cuda_float64
3  108.085  test_fn_fwgrad_bwgrad_diff_cuda_complex128
4  89.418  test_fn_fwgrad_bwgrad_nn_functional_max_unpool2d_cuda_float64
5  72.231  test_fn_fwgrad_bwgrad___rdiv___cuda_complex128
6  69.433  test_fn_fwgrad_bwgrad___getitem___cuda_complex128
7  68.582  test_fn_fwgrad_bwgrad_ldexp_cuda_complex128
8  68.572  test_fn_fwgrad_bwgrad_linalg_pinv_cuda_complex128
9  67.585  test_fn_fwgrad_bwgrad_nn_functional_glu_cuda_float64
10  66.567  test_fn_fwgrad_bwgrad_lu_cuda_float64
```
```
0  630.13  test_fn_gradgrad_nn_functional_conv2d_cuda_complex128
1  81.086  test_fn_gradgrad_linalg_solve_triangular_cuda_complex128
2  71.332  test_fn_gradgrad_norm_cuda_complex128
3  64.308  test_fn_gradgrad__masked_std_cuda_complex128
4  59.519  test_fn_gradgrad_div_no_rounding_mode_cuda_complex128
5  58.836  test_fn_gradgrad_nn_functional_adaptive_avg_pool3
```

Reduces the sizes of the inputs for:
- diff
- diag_embed

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80514
Approved by: https://github.com/albanD
2022-07-22 02:05:37 +00:00
soulitzer
6cf0d9249f Add prelu and relu6 refs missing from __all__ and decomp db (#81420)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81420
Approved by: https://github.com/ngimel
2022-07-19 20:50:04 +00:00
lezcano
e505796a2c [Array API] Add linalg.vecdot (#70542)
This PR adds the function `linalg.vecdot` specified by the [Array
API](https://data-apis.org/array-api/latest/API_specification/linear_algebra_functions.html#function-vecdot)

For the complex case, it chooses to implement \sum x_i y_i. See the
discussion in https://github.com/data-apis/array-api/issues/356
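A quick real-valued illustration of the new function (complex semantics follow the linked discussion; assumes a build that includes this PR):

```python
import torch

x = torch.randn(3, 4)
y = torch.randn(3, 4)
# For real inputs, vecdot reduces over the last dim like an elementwise product + sum.
out = torch.linalg.vecdot(x, y)
assert torch.allclose(out, (x * y).sum(-1))
```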

Edit: when it comes to testing, this function is not quite a binop, nor a reduction op. As such, we're this close to being able to get the extra testing, but we don't quite make it. Now, it's such a simple op that I think we'll make it without this.

Resolves https://github.com/pytorch/pytorch/issues/18027.

cc @mruberry @rgommers @pmeier @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70542
Approved by: https://github.com/IvanYashchuk, https://github.com/mruberry
2022-07-12 14:28:54 +00:00
PyTorch MergeBot
39f659c3ba Revert "[Array API] Add linalg.vecdot (#70542)"
This reverts commit 74208a9c68.

Reverted https://github.com/pytorch/pytorch/pull/70542 on behalf of https://github.com/malfet due to Broke CUDA-10.2 for vecdot_bfloat16, see 74208a9c68
2022-07-08 22:56:51 +00:00
lezcano
74208a9c68 [Array API] Add linalg.vecdot (#70542)
This PR adds the function `linalg.vecdot` specified by the [Array
API](https://data-apis.org/array-api/latest/API_specification/linear_algebra_functions.html#function-vecdot)

For the complex case, it chooses to implement \sum x_i y_i. See the
discussion in https://github.com/data-apis/array-api/issues/356

Edit: when it comes to testing, this function is not quite a binop, nor a reduction op. As such, we're this close to being able to get the extra testing, but we don't quite make it. Now, it's such a simple op that I think we'll make it without this.

Resolves https://github.com/pytorch/pytorch/issues/18027.

cc @mruberry @rgommers @pmeier @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70542
Approved by: https://github.com/IvanYashchuk, https://github.com/mruberry
2022-07-08 15:37:58 +00:00
Nikolay Korovaiko
0a5123a752 Revert "Revert "Add support for directly passing symint to empty"" (#79954)
Relanding https://github.com/Krovatkin/pytorch/pull/new/krovatkin/symint_empty

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79954
Approved by: https://github.com/Chillee, https://github.com/kulinseth
2022-07-04 20:08:55 +00:00
lezcano
37a5819665 Make slogdet, linalg.slogdet and logdet support meta tensors (#79742)
This PR makes all these functions support out= and be composite,
depending on one underlying function. We also extend the support of
`logdet` to complex numbers and improve the docs of all these functions.

We also use `linalg_lu_factor_ex` in these functions, so we remove the
synchronisation present before.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79742
Approved by: https://github.com/IvanYashchuk, https://github.com/albanD
2022-07-01 16:09:21 +00:00
Elias Ellison
1058b47562 Weak-ref-ify MetaConverter and FakeTensorConverter (#80544)
Make `MetaConverter` and `FakeTensorConverter` hold weak references to their memoized tensors, and also have `MetaConverter` hold a weak reference to tensor storage. Otherwise it can be tricky for users to make sure all existing FakeTensors or FakeTensorModes are deleted, which would otherwise leak memory.

I ran into https://github.com/pytorch/pytorch/issues/7733 which I was able to get around with the following (see comment from code):

```
# torch.Tensors cannot be used as a key in a dictionary
# because they define a custom __eq__ function which when used
# to resolve hash collisions will throw when comparing tensors:
# "RuntimeError: bool value of Tensor with more than one value is ambiguous."
# To avoid that, we use an object which will hold a Tensor and use
# its id for both hashing and equality.
# In order to use this as a weak key reference, we cannot
# simply use weakref.WeakKeyDictionary because the only use of the newly
# constructed WeakTensorRefKey would be the dictionary, so it would have no
# strong references.
# To get around this issue, we can use it as a normal key, and then set
# `weakref.finalize` to delete the key when its contained tensor dies.
```
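A standalone, simplified sketch of that pattern (identifiers here are illustrative, not the actual PyTorch classes):

```python
import weakref

class RefKey:
    """Dict key that hashes/compares by the held object's id, holding it only weakly."""
    def __init__(self, obj):
        self.ref = weakref.ref(obj)
        self._id = id(obj)
    def __hash__(self):
        return self._id
    def __eq__(self, other):
        return isinstance(other, RefKey) and self._id == other._id

class FakeThing:  # stand-in for a real torch.Tensor
    pass

memo = {}
t = FakeThing()
key = RefKey(t)
memo[key] = "converted result"
# Remove the memo entry when the referenced object dies, so the table cannot leak.
weakref.finalize(t, memo.pop, key, None)

del t
print(len(memo))  # 0
```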

While for the tensor memo we can set a `weakref.finalize` callback that will remove the corresponding `WeakTensorRefKey` from the tensor memo, at the point that this callback is invoked the tensor storage is not yet deallocated.. See comment from code:

```
# [expired-storages]
# NB: even though the tensor has died,
# the deallocation of its storage can take longer,
# even when the storage has no other uses/views.
# In this case, the StorageWeakRef object will be kept alive
# longer than it needs to be, however the storage itself
# will be deallocated. We retain the possibly dead storages
# and periodically check if any of them are expired and
# can be freed.
```

partial fix for https://github.com/pytorch/torchdynamo/issues/468
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80544
Approved by: https://github.com/ezyang
2022-06-29 23:36:35 +00:00
Ryan Spring
1d0d506e97 Add Div reference (#77936)
Add Prims:
-  trunc
-  Replace _wrap_scalar with scalar_tensor

Add References (the floor/trunc division rounding semantics are sketched below):
-  copysign
- div
- floor_divide
- trunc_divide

Other:
* Add support for `variant_test_name` in _find_referenced_opinfo
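A rough illustration of the rounding-mode semantics behind the floor_divide and trunc_divide refs (function names here are illustrative, not the actual torch._refs entries):

```python
import torch

def trunc_divide(a, b):
    # divide, then round toward zero
    return torch.trunc(torch.true_divide(a, b))

def floor_divide(a, b):
    # divide, then round toward negative infinity
    return torch.floor(torch.true_divide(a, b))

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])
print(trunc_divide(a, b))  # tensor([ 3., -3.])
print(floor_divide(a, b))  # tensor([ 3., -4.])
```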
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77936
Approved by: https://github.com/mruberry
2022-06-27 14:46:17 +00:00
lezcano
ea5299fca3 Make linalg_det function structured (#79486)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79486
Approved by: https://github.com/IvanYashchuk, https://github.com/albanD
2022-06-23 23:24:44 +00:00
Horace He
e89676f76c fix logical_not reland issues
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79900

Approved by: https://github.com/ngimel
2022-06-21 03:41:18 +00:00
Peter Bell
9bf52f4be8 Add OpInfo for torch.equal and fix support for non-standard bools
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79389

Approved by: https://github.com/mruberry
2022-06-20 23:48:39 +00:00
Elias Ellison
9705fb03b3 Add support for a couple ops
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79581

Approved by: https://github.com/Chillee
2022-06-20 22:25:39 +00:00
Nikita Shulga
f5eb05f107 Revert "Reland #2 of "Added {logical_not, trace} refs, moved logical ops to use method overloads""
This reverts commit f3665dd237.

Reverted https://github.com/pytorch/pytorch/pull/79819 on behalf of https://github.com/malfet due to land raced with softshrink refs
2022-06-20 14:22:15 -07:00
Horace He
f3665dd237 Reland #2 of "Added {logical_not, trace} refs, moved logical ops to use method overloads"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79819

Approved by: https://github.com/mruberry
2022-06-20 19:50:43 +00:00
lezcano
648a6658ec Remove python implementation for eigh meta
Following https://github.com/pytorch/pytorch/pull/79072#discussion_r898210048

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79786

Approved by: https://github.com/ngimel, https://github.com/bdhirsh
2022-06-17 18:52:28 +00:00
lezcano
549a597c00 Port linalg_eigh and linalg_eigvalsh to structured
This follows the structure of linalg.svd.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79072

Approved by: https://github.com/IvanYashchuk, https://github.com/albanD
2022-06-14 20:17:01 +00:00
kshitij12345
31ada133cb [meta] nansum, nanmedian (and few minor clean-ups) (#79411)
meta support for `nansum` and `nanmedian`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79411
Approved by: https://github.com/anjali411
2022-06-14 16:21:13 +00:00
kshitij12345
a732bbea23 [meta] Add meta support for fft ops (#79311)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79311
Approved by: https://github.com/ezyang
2022-06-13 01:56:42 +00:00
kshitij12345
bd1a35dfc8 [meta] diag ops, trace (#79341)
meta registration for `diag.out` and update test skips/expectedFailures
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79341
Approved by: https://github.com/ezyang
2022-06-12 18:45:03 +00:00
lezcano
54949a5abc Simplify and optimize linalg.solve
This PR heavily simplifies the code of `linalg.solve`. At the same time,
this implementation saves quite a few copies of the input data in some
cases (e.g. A is contiguous)

We also implement it in such a way that the derivative goes from
computing two LU decompositions and two LU solves to no LU
decompositions and one LU solve. It also avoids a number of unnecessary
copies the derivative was performing (at least the copies of two
matrices).

On top of this, we add a `left` kw-only arg that allows the user to
solve `XA = B` rather concisely.
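For reference, how the new keyword reads in practice (assuming a build with this change):

```python
import torch

A = torch.randn(3, 3, dtype=torch.double)
B = torch.randn(2, 3, dtype=torch.double)
# left=False solves X A = B instead of the default A X = B.
X = torch.linalg.solve(A, B, left=False)
assert torch.allclose(X @ A, B)
```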

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74046

Approved by: https://github.com/nikitaved, https://github.com/IvanYashchuk, https://github.com/mruberry
2022-06-11 04:06:40 +00:00
kshitij12345
7b307e5fca [meta] angle, angle.out (#79278)
meta registration for `angle, angle.out`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79278
Approved by: https://github.com/anjali411
2022-06-10 20:06:31 +00:00
Brian Hirsh
7b3a0ff87a Port index.Tensor to structured kernels.
Tracking issue: #55070

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69607

Approved by: https://github.com/bdhirsh
2022-06-10 17:27:47 +00:00
PyTorch MergeBot
fefff54cad Revert "Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads"""
This reverts commit a2d2981e8e.

Reverted https://github.com/pytorch/pytorch/pull/79224 on behalf of https://github.com/suo due to broke lots of things a2d2981e8e
2022-06-10 04:40:43 +00:00
Horace He
a2d2981e8e Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads""
This reverts commit d67309aefb.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79224

Approved by: https://github.com/mruberry
2022-06-10 03:07:14 +00:00
PyTorch MergeBot
d67309aefb Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads"
This reverts commit 64b6bd8c1e.

Reverted https://github.com/pytorch/pytorch/pull/79000 on behalf of https://github.com/malfet due to Introduces test failure, see https://hud.pytorch.org/pr/79000
2022-06-09 13:11:23 +00:00
Horace He
64b6bd8c1e Added {logical_not, trace} refs, moved logical ops to use method overloads
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79000

Approved by: https://github.com/ezyang
2022-06-09 07:16:36 +00:00
Horace He
c3531c9bce Ported roll to use torch ops and added as a decomposition
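The gist of expressing roll with plain torch ops (a sketch of the idea only, not the decomposition as merged):

```python
import torch

x = torch.arange(6)
shift, n = 2, x.size(0)
# A 1-D roll is just a cat of two narrows: the tail followed by the head.
rolled = torch.cat([x.narrow(0, n - shift, shift), x.narrow(0, 0, n - shift)])
assert torch.equal(rolled, torch.roll(x, shift))
```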
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78991

Approved by: https://github.com/mruberry
2022-06-09 03:20:28 +00:00
Edward Z. Yang
50f2af84da Add embedding_bag meta functions
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78997

Approved by: https://github.com/Chillee, https://github.com/Lezcano
2022-06-08 22:03:27 +00:00
PyTorch MergeBot
4b82ef7928 Revert "Port index.Tensor to structured kernels."
This reverts commit cfd84125bd.

Reverted https://github.com/pytorch/pytorch/pull/69607 on behalf of https://github.com/zengk95 due to This is breaking mac trunk tests cfd84125bd
2022-06-08 20:16:10 +00:00
Brian Hirsh
cfd84125bd Port index.Tensor to structured kernels.
Tracking issue: #55070

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69607

Approved by: https://github.com/bdhirsh
2022-06-08 18:17:52 +00:00
Edward Z. Yang
41bd5b85fd cdist meta function
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78993

Approved by: https://github.com/Lezcano, https://github.com/Chillee
2022-06-08 01:57:00 +00:00
Edward Z. Yang
d09e3674d8 addbmm meta function
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78992

Approved by: https://github.com/Lezcano, https://github.com/Chillee
2022-06-07 23:24:57 +00:00
lezcano
c7d6cec078 Add linalg.lu_solve
This PR adds `linalg.lu_solve`. While doing so, I found a bug in MAGMA
when calling the batched MAGMA backend with trans=True. We work around
that by solving the system via two triangular systems instead.
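For reference, basic usage of the new function together with `linalg.lu_factor`:

```python
import torch

A = torch.randn(3, 3, dtype=torch.double)
B = torch.randn(3, 2, dtype=torch.double)
LU, pivots = torch.linalg.lu_factor(A)
# Reuse the factorization to solve A X = B.
X = torch.linalg.lu_solve(LU, pivots, B)
assert torch.allclose(A @ X, B)
```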

We also update the heuristics for this function, as they were fairly
outdated. We found that cuSolver is king, so luckily we do not need to
rely on the buggy backend from magma for this function.

We added tests testing this function left and right. We also added tests
for the different backends. We also activated the tests for AMD, as
those should work as well.

Fixes https://github.com/pytorch/pytorch/issues/61657

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77634

Approved by: https://github.com/malfet
2022-06-07 22:28:28 +00:00
Mikayla Gawarecki
814ff74460 Add prod reduce option to segment_reduce + opinfo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76067

Approved by: https://github.com/cpuhrsch
2022-06-07 17:06:07 +00:00
Edward Z. Yang
54c99a9e1d relu ref
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78601

Approved by: https://github.com/ngimel
2022-06-04 02:18:56 +00:00
Edward Z. Yang
83d40a4dba linalg_cholesky_ex meta function
Taken from https://github.com/albanD/subclass_zoo/blob/main/python_meta_tensor.py

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78604

Approved by: https://github.com/bdhirsh, https://github.com/ngimel, https://github.com/Lezcano
2022-06-03 23:11:02 +00:00
Edward Z. Yang
6120a8e05d Implement meta function for aten::index.Tensor
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78527

Approved by: https://github.com/bdhirsh, https://github.com/ngimel, https://github.com/Lezcano
2022-06-03 23:11:02 +00:00
Edward Z. Yang
1bd21dd152 _linalg_qr_helper meta function
Taken from https://github.com/albanD/subclass_zoo/blob/main/python_meta_tensor.py

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78603

Approved by: https://github.com/Lezcano, https://github.com/mruberry
2022-06-03 20:27:05 +00:00
Elias Ellison
26d273959c Add Caching of Conversion to Fake/Meta tensors in FakeTensorMode
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78090

Approved by: https://github.com/ezyang
2022-06-03 13:56:00 +00:00
Edward Z. Yang
9446f9678a repeat_interleaves meta function
Taken from https://github.com/albanD/subclass_zoo/blob/main/python_meta_tensor.py

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78602

Approved by: https://github.com/mruberry
2022-06-02 21:24:46 +00:00
Edward Z. Yang
d7dd0df22b Add ability to compare skips/xfails against a text file of operators
When I run against a list of DPER ops I get this list:

```
aten.count_nonzero.dim_IntList
aten.count_nonzero.default
aten.empty.memory_format  # SKIP
aten.repeat_interleave.Tensor
aten.relu.default
aten.nonzero.out
aten.nonzero.default
```

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78526

Approved by: https://github.com/zou3519
2022-06-01 00:46:51 +00:00
PyTorch MergeBot
fca1f495c2 Revert "Port index.Tensor to structured kernels."
This reverts commit 9fe6f1baf5.

Reverted https://github.com/pytorch/pytorch/pull/69607 on behalf of https://github.com/suo due to this broke master, see: 9fe6f1baf5
2022-06-01 00:12:15 +00:00
Brian Hirsh
9fe6f1baf5 Port index.Tensor to structured kernels.
Tracking issue: #55070

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69607

Approved by: https://github.com/bdhirsh
2022-05-31 22:15:20 +00:00
Edward Z. Yang
7313a7a987 Make Meta into a backend component
Seems like it should be one.  This will make it possible to register
meta implementations even when there is a CompositeImplicitAutograd
registration already.  It also paves the way for sparse meta, etc.
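A loose sketch of what "Meta as a backend" makes possible, using a hypothetical custom op registered via torch.library (not code from this PR):

```python
import torch

lib = torch.library.Library("mylib", "DEF")
lib.define("twice(Tensor x) -> Tensor")
lib.impl("twice", lambda x: x * 2, "CPU")
# A meta kernel can now be registered for the same op under the Meta key:
lib.impl("twice", lambda x: torch.empty_like(x), "Meta")

print(torch.ops.mylib.twice(torch.empty(3, device="meta")).shape)  # torch.Size([3])
```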

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78469

Approved by: https://github.com/ngimel
2022-05-31 18:59:16 +00:00
Edward Z. Yang
eee2aa14a6 Register std_mean ref as a decomposition
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78468

Approved by: https://github.com/ngimel
2022-05-31 18:59:16 +00:00
Elias Ellison
678213ead2 Fake Tensor Part 1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77969

Approved by: https://github.com/ezyang
2022-05-31 16:20:35 +00:00
Edward Z. Yang
789115e05e Don't check for linalg errors on meta tensors
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78467

Approved by: https://github.com/Chillee
2022-05-31 14:18:49 +00:00
Edward Z. Yang
59fdb627a3 Reenable TestMeta native_batch_norm and native_layer_norm
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78466

Approved by: https://github.com/Chillee
2022-05-31 14:18:49 +00:00
Edward Z. Yang
be0629e925 Reenable TestMeta slice
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78465

Approved by: https://github.com/mruberry
2022-05-31 14:18:49 +00:00
Edward Z. Yang
4bbc3e2809 Some helper code for determining missing meta coverage for XLA ops
When I ran it I got this:

```
$ PYTORCH_COMPARE_XLA=/scratch/ezyang/xla/xla_native_functions.yaml python test/test_meta.py
aten.logdet.default
aten._local_scalar_dense.default
aten.cholesky.default
aten.diag.default
aten.empty.memory_format  # SKIP
aten.index.Tensor
aten.kthvalue.default
aten.masked_select.default
aten.max_pool3d_with_indices.default
aten.max_unpool2d.default
aten.max_unpool3d.default
aten.native_batch_norm.default  # SKIP
aten.nll_loss2d_forward.default
aten.nonzero.default
aten.prelu.default
aten.relu.default
aten.roll.default
aten.rrelu_with_noise.default
aten.std_mean.correction
aten.symeig.default
aten.take.default
aten.trace.default
aten.native_layer_norm.default  # SKIP
```

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78464

Approved by: https://github.com/albanD
2022-05-31 14:18:49 +00:00
Edward Z. Yang
0865df4b87 Reenable TestMeta testing for isnan
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78463

Approved by: https://github.com/ngimel, https://github.com/mruberry
2022-05-31 14:18:49 +00:00
Edward Z. Yang
1e11fc894c Reenable tensor_split meta testing
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78462

Approved by: https://github.com/mruberry
2022-05-31 14:18:49 +00:00
Edward Z. Yang
b7215de32f prod ref
It turns out the prim is implemented incorrectly as torch.prod does not accept
a dim list, so I added a little stub for this.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78461

Approved by: https://github.com/ngimel
2022-05-31 14:18:49 +00:00
Ryan Spring
2df1da09e1 Add Elementwise unary ops 4 references (#78216)
Add reference implementations for `nan_to_num, positive, sigmoid, signbit, tanhshrink`
Add prims for `minimum_value(dtype)` and `maximum_value(dtype)`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78216
Approved by: https://github.com/mruberry
2022-05-27 21:55:34 +00:00
Aidyn-A
31016eb81e [primTorch] Elementwise Binary Ops I (#78023)
This PR is a result of collaboration with @rdspring1 and @mruberry on primTorch.

It adds the following prims:
- `fmax`
- `fmin`
- `fmod`

And adds the following refs (a short NaN-handling example follows below):
- `fmax`
- `fmin`
- `fmod`
- `logical_xor`

The work is in progress as there are some tests that fail.
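The behavior these ops pin down, contrasted with maximum/minimum (NaN handling):

```python
import torch

a = torch.tensor([1.0, float("nan")])
b = torch.tensor([2.0, 3.0])
# fmax/fmin ignore a NaN operand when the other value is a number...
print(torch.fmax(a, b))     # tensor([2., 3.])
# ...whereas maximum/minimum propagate the NaN.
print(torch.maximum(a, b))  # tensor([2., nan])
```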
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78023
Approved by: https://github.com/mruberry
2022-05-26 20:22:27 +00:00
Edward Z. Yang
a1765f0176 addr ref
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78014

Approved by: https://github.com/ngimel
2022-05-25 01:40:11 +00:00
Kshiteej K
88fca3be59 [reland][complex32] conv1d, conv2d : enable test (#77999)
Reland: #77239
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77999
Approved by: https://github.com/anjali411
2022-05-23 05:49:03 +00:00
Mike Ruberry
d4345ed0a6 [primTorch] Adds random operations (#78026)
This PR...

**Issues Found**
- https://github.com/pytorch/pytorch/issues/78058
- https://github.com/pytorch/pytorch/issues/78054
- https://github.com/pytorch/pytorch/issues/78053
- https://github.com/pytorch/pytorch/issues/78050
- https://github.com/pytorch/pytorch/issues/77932

**Testing**
- disables stride consistency checks in test_ops and test_meta pending resolution of https://github.com/pytorch/pytorch/issues/78050
- skips chalf in reference tests (addressing https://github.com/pytorch/pytorch/issues/78054)
- splits test_python_reference_consistency into one test for the ctx where torch.foo is torch.foo, and another for when torch.foo is refs.foo
- updates test names to be more natural and consistent:
  - test_python_reference_errors -> test_python_ref_errors
  - test_python_reference_consistency -> test_python_ref and test_python_ref_torch_fallback
  - test_python_reference_meta_functions -> test_python_ref_meta
  - test_reference_testing -> test_numpy_ref
- updates test_python_ref and test_python_ref_torch_fallback to check that the reference is more accurate than the torch op if the reference and torch op results are not close; a warning is raised when this occurs (addressing https://github.com/pytorch/pytorch/issues/77687)
- adds reference inputs for broadcast_tensors
- Updates the "fill_" OpInfo to "fill", adding a NumPy reference and making it an elementwise unary operator
- Adds 1D no element sample inputs to the cat OpInfo and updates the NumPy reference to handle them and type promotion correctly
- Adds reference inputs for elementwise ternary operations, like clamp
- Adds a NumPy reference for clamp
- Adds reference inputs to where's OpInfo
- Makes softplus an elementwise unary OpInfo
- Removes the great majority of Python reference OpInfo skips and xfails due to the above test changes
- Adds Python reference OpInfos for fill, dropout, clamp, broadcast_tensors, and where

**Prims**
- adds the fill, empty_strided, and uniform prims
- removes the empty, empty_like, full, and full_like prims -- these are now references that use empty_strided and fill
- renames the "concatenate" and "select" prims to "cat" and "where", respectively, to be consistent with PyTorch
- extends the `_elementwise_meta` operation to accept tensors that don't participate in type promotion, like the `cond` tensor in `where`
- fixes a bug in the stride propagation of broadcast_in_dim
- moves some error checks from prims.cat and prims.where to refs.cat and refs.where, respectively, consistent with our new policy of doing as much error checking in the ref as possible

**Utils**
- adds the canonicalize_device, extract_shape, and extract_shape_from_varargs helpers
- adds the elementwise_unary_scalar_wrapper -- this allows elementwise unary operators to take and return scalar values (e.g. refs.sin(1) will return .84...)

**Refs**
- adds the fill, broadcast_tensors, clamp, empty_strided, ones, zeros, and uniform references
- adds the nn.functional.dropout reference
- fixes refs.cat to handle 1D tensors with no elements, consistent with eager mode
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78026
Approved by: https://github.com/ngimel
2022-05-23 01:56:28 +00:00
PyTorch MergeBot
acfbc16b1c Revert "[primTorch] Adds random operations (#78026)"
This reverts commit 043cf1f9c7.

Reverted https://github.com/pytorch/pytorch/pull/78026 on behalf of https://github.com/suo due to This broke trunk: 043cf1f9c7
2022-05-22 18:11:14 +00:00
Mike Ruberry
043cf1f9c7 [primTorch] Adds random operations (#78026)
This PR...

**Issues Found**
- https://github.com/pytorch/pytorch/issues/78058
- https://github.com/pytorch/pytorch/issues/78054
- https://github.com/pytorch/pytorch/issues/78053
- https://github.com/pytorch/pytorch/issues/78050
- https://github.com/pytorch/pytorch/issues/77932

**Testing**
- disables stride consistency checks in test_ops and test_meta pending resolution of https://github.com/pytorch/pytorch/issues/78050
- skips chalf in reference tests (addressing https://github.com/pytorch/pytorch/issues/78054)
- splits the test test_python_reference_consistency into one test for the context where torch.foo is torch.foo, and another for when torch.foo is refs.foo
- updates test names to be more natural and consistent:
  - test_python_reference_errors -> test_python_ref_errors
  - test_python_reference_consistency -> test_python_ref and test_python_ref_torch_fallback
  - test_python_reference_meta_functions -> test_python_ref_meta
  - test_reference_testing -> test_numpy_ref
- updates test_python_ref and test_python_ref_torch_fallback to check that the reference is more accurate than the torch op when the reference and torch op results are not close; a warning is raised when this occurs (addressing https://github.com/pytorch/pytorch/issues/77687)
- adds reference inputs for broadcast_tensors
- Updates the "fill_" OpInfo to "fill", adding a NumPy reference and making it an elementwise unary operator
- Adds 1D, zero-element sample inputs to the cat OpInfo and updates the NumPy reference to handle them and type promotion correctly
- Adds reference inputs for elementwise ternary operations, like clamp
- Adds a NumPy reference for clamp
- Adds reference inputs to where's OpInfo
- Makes softplus an elementwise unary OpInfo
- Removes the great majority of Python reference OpInfo skips and xfails due to the above test changes
- Adds Python reference OpInfos for fill, dropout, clamp, broadcast_tensors, and where

**Prims**
- adds the fill, empty_strided, and uniform prims
- removes the empty, empty_like, full, and full_like prims -- these are now references that use empty_strided and fill
- renames the "concatenate" and "select" prims to "cat" and "where", respectively, to be consistent with PyTorch
- extends the `_elementwise_meta` operation to accept tensors that don't participate in type promotion, like the `cond` tensor in `where`
- fixes a bug in the stride propagation of broadcast_in_dim
- moves some error checks from prims.cat and prims.where to refs.cat and refs.where, respectively, consistent with our new policy of doing as much error checking in the refs as possible

**Utils**
- adds the canonicalize_device, extract_shape, and extract_shape_from_varargs helpers
- adds the elementwise_unary_scalar_wrapper -- this allows elementwise unary operators to take and return scalar values (e.g. refs.sin(1) will return 0.84...)

**Refs**
- adds the fill, broadcast_tensors, clamp, empty_strided, ones, zeros, and uniform references
- adds the nn.functional.dropout reference
- fixes refs.cat to handle 1D tensors with no elements, consistent with eager mode
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78026
Approved by: https://github.com/ngimel
2022-05-22 10:06:24 +00:00
Edward Z. Yang
774b4ff83e Reenable mul tests
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78016

Approved by: https://github.com/mruberry
2022-05-21 19:33:27 +00:00
Edward Z. Yang
47834679ba Disable complex32 meta conversion, which removes a few skips
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77854

Approved by: https://github.com/mruberry
2022-05-21 02:35:14 +00:00
Edward Z. Yang
6b273444c4 Add logit ref; allow non-refs to be called in refs.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77816

Approved by: https://github.com/mruberry
2022-05-21 02:35:14 +00:00
Horace He
64b4bb4b01 Fix meta tests on norm (and relanding norm fixes) (#77930)
Had a land race with meta tests.

Will also be relanding https://github.com/pytorch/pytorch/pull/77407
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77930
Approved by: https://github.com/malfet, https://github.com/ezyang
2022-05-20 23:15:53 +00:00
PyTorch MergeBot
53b30579b7 Revert "[complex32] conv1d, conv2d : enable test (#77239)"
This reverts commit 2d3a6d7274.

Reverted https://github.com/pytorch/pytorch/pull/77239 on behalf of https://github.com/suo due to This broke nvfuser tests on master, see: 2d3a6d7274
2022-05-20 19:10:42 +00:00
kshitij12345
2d3a6d7274 [complex32] conv1d, conv2d : enable test (#77239)
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77239
Approved by: https://github.com/anjali411
2022-05-20 15:54:30 +00:00
Edward Z. Yang
9e0e949484 Fix bugs, increase meta coverage
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77499

Approved by: https://github.com/mruberry
2022-05-19 21:04:57 +00:00
Edward Z. Yang
e69e9b0ed8 Don't test std values if tensor is meta; fixes normal meta
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77740

Approved by: https://github.com/mruberry
2022-05-19 14:43:35 +00:00
Edward Z. Yang
88c89c9eb9 log_sigmoid_forward out support; out_wrapper_multi
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77739

Approved by: https://github.com/mruberry
2022-05-19 14:43:35 +00:00
Edward Z. Yang
baeffdbc6c reflection_pad2d support
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77681

Approved by: https://github.com/mruberry
2022-05-19 14:43:35 +00:00
Edward Z. Yang
e3403ff4ab square support
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77682

Approved by: https://github.com/ngimel, https://github.com/mruberry
2022-05-19 01:16:19 +00:00
Edward Z. Yang
4941e72e40 Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)""
This reverts commit c35bd8d423.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77719

Approved by: https://github.com/Chillee, https://github.com/malfet
2022-05-18 18:40:57 +00:00
Mike Ruberry
580a053832 [primTorch] Enforces stride metadata (#77542)
This PR...

**Filed the Following Issues**
- https://github.com/pytorch/pytorch/issues/77553
- https://github.com/pytorch/pytorch/issues/77526
- https://github.com/pytorch/pytorch/issues/77600

**Testing**
- Updates test_dtypes to no longer attempt to test the backward of sample inputs where no inputs require grad
- Adds a new test_python_reference_errors; it ensures the meta operations for references throw errors as expected
- Updates compare_tensor_meta to better handle CUDA devices, and (temporarily) restricts stride checking to the CUDA device type
- Elementwise unary and elementwise binary operators now have arbitrarily strided reference inputs
- Reference inputs for _like functions are added
- An OpInfo for torch.empty is added
- Reference inputs for torch.clone are added
- A NumPy reference for clone is added
- Adds OpInfos for refs.empty and refs.empty_like

**Prims**
- Renames the "max" and "min" prims have been renamed to "maximum" and "minimum," respectively, to better conform to their ATen names
- Adds the empty, empty_like, full, and full_like prims
- Fixes the elementwise meta function's stride propagation
- Fixes clone's meta function's stride propagation
- Fixes convert_element_type's meta's stride propagation
- Adds a (temporary) _to_dtype private prim that casts a tensor while preserving its stride permutation
- Removes the _set prim comment
- Adds utils.compute_elementwise_output_strides, which computes the correct output strides for elementwise operations (a small behavior demo follows this list)
- Corrects an issue where utils.make_contiguous_strides_for was creating the incorrect strides for tensors with no elements
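A small behavior demo of the stride propagation these utilities model (expected eager behavior, not code from this PR): elementwise ops preserve the stride permutation of their inputs, e.g. a channels-last input produces a channels-last output.

```python
import torch

x = torch.randn(2, 3, 4, 5).to(memory_format=torch.channels_last)
y = torch.relu(x)                  # elementwise op
print(x.stride(), y.stride())      # both (60, 1, 15, 3)
assert y.stride() == x.stride()    # output keeps the channels-last layout
```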

**References**
- Adds the empty, empty_like, full, full_like, and ones_like refs
- Extends make_elementwise_unary_reference to accept an additional callable to perform extra input validation
- Adds an extra validation function to handle refs.neg(BoolTensor)
- Updates the isfinite ref to call ones_like when appropriate
- Models Python scalar handling for elementwise binary operations
- Added a 64 dim check for the amin and amax references
- opmath is now a flag that can be set separately for cpu and CUDA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77542
Approved by: https://github.com/ezyang
2022-05-18 13:57:26 +00:00
lezcano
ff7b6d6b5f Update linalg.*norm
This PR does a number of things:
- Move linalg.vector_norm to structured kernels and simplify the logic
- Fixes a number of preexisting issues with the dtype kwarg of these ops
- Heavily simplifies and corrects the logic of `linalg.matrix_norm` and `linalg.norm` to be consistent with the docs (see the short usage sketch after this list)
  - Before, the `_out` versions of these functions were incorrect
  - Their implementation is now as efficient as expected, as it avoids reimplementing these operations whenever possible.
- Deprecates `torch.frobenius_norm` and `torch.nuclear_norm`, as they were exposed in the API and they are apparently being used in mobile (??!!) even though they were not documented and their implementation was slow.
  - I'd love to get rid of these functions already, but I guess we have to go through their deprecation.
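A short usage sketch of how these entry points relate, based on their documented behavior (not code from this PR):

```python
import torch

v = torch.randn(5)
A = torch.randn(3, 4)

torch.linalg.vector_norm(v, ord=2)       # vector norms
torch.linalg.norm(v)                     # 1-D input: the vector 2-norm

torch.linalg.matrix_norm(A, ord="fro")   # matrix norms
torch.linalg.norm(A, ord="fro")          # 2-D input with "fro": Frobenius norm
```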

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76547

Approved by: https://github.com/mruberry
2022-05-18 11:46:50 +00:00
PyTorch MergeBot
e9d660c331 Revert "Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"""
This reverts commit acf7136a52.

Reverted https://github.com/pytorch/pytorch/pull/77719 on behalf of https://github.com/suo
2022-05-18 05:06:50 +00:00
Edward Z. Yang
acf7136a52 Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)""
This reverts commit c35bd8d423.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77719

Approved by: https://github.com/Chillee, https://github.com/malfet
2022-05-18 03:25:43 +00:00
PyTorch MergeBot
48581d74ad Revert "Add dispatch mode testing for meta tensors and other stuff"
This reverts commit c1cdb1216b.

Reverted https://github.com/pytorch/pytorch/pull/77477 on behalf of https://github.com/malfet
2022-05-18 02:56:48 +00:00
Edward Z. Yang
c1cdb1216b Add dispatch mode testing for meta tensors and other stuff
We don't have any coverage for meta tensor correctness for backwards
because torch function mode can only allow us to interpose on
Python torch API calls, but backwards invocations happen from C++.
To make this possible, I add a torch_dispatch_meta test which runs the
tests with __torch_dispatch__.
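A rough sketch of the cross-check idea (assuming a recent PyTorch with TorchDispatchMode; an illustration rather than the actual test harness): run each dispatched op a second time on meta copies of its tensor arguments and compare the output metadata.

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode
from torch.utils._pytree import tree_map

class MetaCrossRef(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        out = func(*args, **kwargs)

        def to_meta(x):
            return x.to("meta") if isinstance(x, torch.Tensor) else x

        try:
            meta_out = func(*tree_map(to_meta, args), **tree_map(to_meta, kwargs))
        except NotImplementedError:
            return out  # no meta kernel for this op; nothing to compare
        if isinstance(out, torch.Tensor) and isinstance(meta_out, torch.Tensor):
            assert out.shape == meta_out.shape and out.dtype == meta_out.dtype
        return out

with MetaCrossRef():
    torch.randn(2, 3).relu().sum()
```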

While doing this, I needed to generate fresh expected failure / skip
lists for the new test suite, and I discovered that my original
scaffolding for this purpose was woefully insufficient.  So I rewrote
how the test framework worked, and at the same time rewrote the
__torch_function__ code to also use the new logic.  Here's what's
new:

- Expected failure / skip is now done on a per function call basis,
  rather than the entire test.  This means that separate OpInfo
  samples for a function don't affect each other.

- There are now only two lists: the expected failure list (where the test
  consistently fails on all runs) and the skip list (where the test
  sometimes passes and sometimes fails).

- We explicitly notate the dtype that failed.  I considered detecting
  when something failed on all dtypes, but this was complicated and
  listing everything out seemed to be nice and simple.  To keep the
  dtypes short, I introduce a shorthand notation for dtypes.

- Conversion to meta tensors is factored into its own class
  MetaConverter

- To regenerate the expected failure / skip lists, just run with
  PYTORCH_COLLECT_EXPECT and filter on a specific test type
  (test_meta or test_dispatch_meta) for whichever you want to update.

Other misc fixes:

- Fix max_pool1d to work with BFloat16 in all circumstances, by making
  it dispatch and then fixing a minor compile error (constexpr doesn't
  work with BFloat16)

- Add resolve_name for turning random torch API functions into string
  names

- Add push classmethod to the Mode classes, so that you can more easily
  push a mode onto the mode stack

- Add some more skips for missing LAPACK

- Added an API to let you query if there's already a registration for
  a function, added a test to check that we register_meta for all
  decompositions (except detach, that decomp is wrong lol), and then
  updated all the necessary sites to make the test pass.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77477

Approved by: https://github.com/zou3519
2022-05-18 00:18:34 +00:00
Edward Z. Yang
2f602abf14 Register more decomps for meta.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77362

Approved by: https://github.com/mruberry
2022-05-14 02:24:23 +00:00
Mikayla Gawarecki
841c65f499 Unprivate _index_reduce and add documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76997

Approved by: https://github.com/cpuhrsch
2022-05-13 19:48:38 +00:00
Edward Z. Yang
d5ed73badd Make it possible to register decompositions to Meta key
Decompositions can be used to fill in meta support where necessary,
assuming the operations they decompose into support the Meta key.
This PR adds a register_meta kwarg to register_decomposition that
optionally lets you register the decomposition as the meta kernel in
the C++ dispatch table for meta tensors. I use this to get the meta
functions for where and huber_loss for free.
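A hypothetical usage sketch of that kwarg (the import path, the huber_loss decomposition body, and the reduction handling are illustrative assumptions, not code from this PR):

```python
import torch
from torch._decomp import register_decomposition  # assumed import path

aten = torch.ops.aten

# Hypothetical: register a Python decomposition and, via register_meta=True,
# also install it as the meta kernel for the op.
@register_decomposition(aten.huber_loss, register_meta=True)
def huber_loss(self, target, reduction=1, delta=1.0):
    z = (self - target).abs()
    loss = torch.where(z < delta, 0.5 * z * z, delta * (z - 0.5 * delta))
    if reduction == 1:   # mean
        return loss.mean()
    if reduction == 2:   # sum
        return loss.sum()
    return loss          # reduction == 0: none
```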

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77353

Approved by: https://github.com/mruberry
2022-05-12 23:20:16 +00:00
anjali411
767af8e335 Add meta tensor support for some operations using python registration
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76916

Approved by: https://github.com/ezyang
2022-05-10 17:55:06 +00:00
Nikita Shulga
b4fa1e88be Skip lu_solve meta tests (#77110)
Fixes regressions introduced by revert of https://github.com/pytorch/pytorch/pull/72935

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77110
Approved by: https://github.com/suo, https://github.com/janeyx99
2022-05-09 22:59:51 +00:00
Edward Z. Yang
60f131fb6c Add OpInfo based meta tensor tests [RELAND]
PR #75994 was taking too long to ship so I extracted out the CrossRef gadget and
had it run on a simple OpInfo invocation only.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77008

Approved by: https://github.com/ngimel
2022-05-07 12:15:10 +00:00
PyTorch MergeBot
828fb8c620 Revert "Add OpInfo based meta tensor tests"
This reverts commit d9fda18c4b.

Reverted https://github.com/pytorch/pytorch/pull/76905 on behalf of https://github.com/ezyang
2022-05-06 23:11:35 +00:00
Edward Z. Yang
d9fda18c4b Add OpInfo based meta tensor tests
https://github.com/pytorch/pytorch/pull/75994 was taking too long to
ship so I extracted out the CrossRef gadget and had it run on a simple
OpInfo invocation only.

TODO: There are failures that correspond to known bugs and need to be
skipped.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76905

Approved by: https://github.com/anjali411, https://github.com/mruberry, https://github.com/albanD
2022-05-06 20:12:28 +00:00