Commit Graph

702 Commits

Michael Andreas Dagitses
ab2ca95dd1 turn on -Werror=unused-variable in our Bazel CPU build
Summary:
We also fix any existing issues. Note that we only do this for the CPU
build because nvcc is considered a C++ toolchain but it does not have
the same flag support. Adding flags to the GPU build will cause nvcc
errors.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79156

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-11 02:46:34 +00:00
Scott Wolchok
fff1948b02 [PyTorch] intrusive_ptr: don't guarantee release_resources will be called
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76767

We're paying for a virtual function call in the common case, where
there are no weak references, just to save a small amount of care in the
few intrusive_ptr_target subclasses that override release_resources.
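A minimal sketch of the fast path being described, assuming a deliberately simplified refcounting model (the real logic lives in c10/util/intrusive_ptr.h; names here are illustrative):

```cpp
#include <atomic>
#include <cstdint>

// Hedged sketch: skip the virtual release_resources() call in the common
// case where no weak references remain and we can just delete the object.
struct TargetSketch {
  std::atomic<int64_t> refcount_{1};
  std::atomic<int64_t> weakcount_{1};  // held implicitly while refcount_ > 0
  virtual void release_resources() {}
  virtual ~TargetSketch() = default;
};

inline void decref(TargetSketch* t) {
  if (t->refcount_.fetch_sub(1) == 1) {
    if (t->weakcount_.fetch_sub(1) == 1) {
      delete t;  // common case: destructor frees everything, no extra vcall
    } else {
      // weak refs keep the object alive; release heavy resources eagerly
      t->release_resources();
    }
  }
}
```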

Differential Revision: [D36109757](https://our.internmc.facebook.com/intern/diff/D36109757/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36109757/)!

Approved by: https://github.com/ezyang
2022-06-10 19:30:35 +00:00
George Qi
a90f006fe5 add strides to slow path
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78610

Approved by: https://github.com/ezyang
2022-06-10 16:59:14 +00:00
Edward Z. Yang
5ca24c60a1 Add Meta backend
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78803

Approved by: https://github.com/bdhirsh
2022-06-09 15:28:46 +00:00
Peter Bell
c936396af2 Always convert truthy booleans to 1
Ref #54789

A `bool` has only two valid values, 1 or 0. Any in-memory value
outside of those leads to undefined behavior. So, instead of
`reinterpret_cast`-ing to `bool*` I introduce `c10::load<scalar_t>`
which will read as `unsigned char` and convert to a valid `bool`.
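A minimal sketch of the idea with assumed names (`load_sketch` stands in for `c10::load`, whose real implementation lives in the c10 utilities):

```cpp
#include <cstring>

// Sketch: read through memcpy, and normalize bool by loading the raw
// byte as unsigned char so any non-zero pattern becomes a valid `true`.
template <typename T>
T load_sketch(const void* src) {
  T value;
  std::memcpy(&value, src, sizeof(T));
  return value;
}

template <>
inline bool load_sketch<bool>(const void* src) {
  unsigned char byte;
  std::memcpy(&byte, src, sizeof(byte));
  return byte != 0;
}
```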

This gets >90% of operators working; the remaining operators, for which
skips and xfails have been added, will require individual attention.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77122

Approved by: https://github.com/mruberry
2022-06-07 16:00:30 +00:00
Brian Hirsh
19228205ae functionalization: update wrapper to propagate symints
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78820

Approved by: https://github.com/ezyang
2022-06-06 14:14:06 +00:00
PyTorch MergeBot
6d7eddbb75 Make allocator check C10_UNLIKELY
This popped up while having a look at possible causes for https://github.com/pytorch/pytorch/issues/78800
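For context, a minimal sketch of what such a branch hint does; UNLIKELY_SKETCH is an illustrative stand-in for the real C10_UNLIKELY macro, assuming the GCC/Clang __builtin_expect builtin:

```cpp
#include <new>

// Illustrative stand-in for the branch-prediction hint macro.
#define UNLIKELY_SKETCH(x) (__builtin_expect(!!(x), 0))

void* check_alloc(void* ptr) {
  if (UNLIKELY_SKETCH(ptr == nullptr)) {
    throw std::bad_alloc();  // error path kept off the predicted hot path
  }
  return ptr;
}
```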

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78801

Approved by: https://github.com/ezyang
2022-06-03 19:41:29 +00:00
Michael Suo
22b10873f3 Allow torchdispatch to customize dim()
This follows the template in
https://github.com/pytorch/pytorch/pull/77396

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78691

Approved by: https://github.com/ezyang
2022-06-02 20:54:13 +00:00
Kshiteej K
849b08f14b [reland][chalf] where(cpu and cuda), pow(cuda) (#78665)
Reland: https://github.com/pytorch/pytorch/pull/77640
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78665
Approved by: https://github.com/ngimel
2022-06-02 18:04:06 +00:00
PyTorch MergeBot
78824a7d54 Revert "Always convert truthy booleans to 1"
This reverts commit 3c3c6cd982.

Reverted https://github.com/pytorch/pytorch/pull/77122 on behalf of https://github.com/mruberry because it broke some jobs, like https://github.com/pytorch/pytorch/runs/6706333043?check_suite_focus=true
2022-06-02 13:45:54 +00:00
Peter Bell
3c3c6cd982 Always convert truthy booleans to 1
Ref #54789

A `bool` has only two valid values, 1 or 0. Any in-memory value
outside of those leads to undefined behavior. So, instead of
`reinterpret_cast`-ing to `bool*` I introduce `c10::load<scalar_t>`
which will read as `unsigned char` and convert to a valid `bool`.

This gets >90% of operators working; the remaining operators, for which
skips and xfails have been added, will require individual attention.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77122

Approved by: https://github.com/mruberry
2022-06-02 04:18:34 +00:00
PyTorch MergeBot
4bb8db85e9 Revert "[chalf] where(cpu and cuda), pow(cuda) (#77640)"
This reverts commit 3697cf7f76.

Reverted https://github.com/pytorch/pytorch/pull/77640 on behalf of https://github.com/mruberry because it broke ROCm on trunk
2022-06-01 19:39:38 +00:00
kshitij12345
3697cf7f76 [chalf] where(cpu and cuda), pow(cuda) (#77640)
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77640
Approved by: https://github.com/anjali411, https://github.com/ngimel
2022-06-01 18:35:53 +00:00
Richard Zou
de5a2320f2 Mark more methods of DispatchKeySet as constexpr
Added operator-, DispatchKeySet::add, and DispatchKeySet::remove.
I wanted to use these in functorch to make a constexpr DispatchKeySet.

Also adds C10_NODISCARD to DispatchKeySet::remove to make it
consistent with DispatchKeySet::add (this will raise a
warning if someone calls remove without assigning the result to a
variable; remove is NOT mutable and this is a pitfall that I run into a
lot)
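A minimal sketch of the non-mutating pattern being annotated, assuming a simplified 64-bit bitset representation (the real class is c10::DispatchKeySet; C10_NODISCARD expands to [[nodiscard]] where available):

```cpp
#include <cstdint>

class KeySetSketch {
  uint64_t repr_ = 0;
  constexpr explicit KeySetSketch(uint64_t repr) : repr_(repr) {}
 public:
  constexpr KeySetSketch() = default;
  // Both return a new set and leave *this untouched; [[nodiscard]]
  // warns when a caller forgets to use the result.
  [[nodiscard]] constexpr KeySetSketch add(uint8_t bit) const {
    return KeySetSketch(repr_ | (uint64_t{1} << bit));
  }
  [[nodiscard]] constexpr KeySetSketch remove(uint8_t bit) const {
    return KeySetSketch(repr_ & ~(uint64_t{1} << bit));
  }
};

// ks.remove(3);       // warning: result ignored, ks is unchanged
// ks = ks.remove(3);  // correct: remove is not mutating
```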

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78558

Approved by: https://github.com/bdhirsh
2022-06-01 17:29:03 +00:00
Edward Z. Yang
c20969c40c Fix ParameterList printing meta tensor
Fixes https://github.com/pytorch/pytorch/issues/78250

There are actually two bugs.  First, the crash is caused
by TensorOptions::backend incorrectly reporting noexcept when
it can fail.  Second, ParameterList is using torch.tensortype
for no good reason; we can just print the dtype instead.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78529

Approved by: https://github.com/albanD
2022-06-01 00:46:52 +00:00
Edward Z. Yang
7313a7a987 Make Meta into a backend component
Seems like it should be one.  This will make it possible to register
meta implementations even when there is a CompositeImplicitAutograd
registration already.  It also paves the way for sparse meta, etc.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78469

Approved by: https://github.com/ngimel
2022-05-31 18:59:16 +00:00
Brian Hirsh
7ff091fc4e move Functionalize dispatch key closer to backends
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77132

Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-05-26 16:15:43 +00:00
Elias Ellison
13e444bfa5 Fix internal build
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78282

Approved by: https://github.com/davidberard98
2022-05-25 22:55:06 +00:00
Michael Suo
49979c4021 [symint] Make TensorImpl::sizes_and_strides_ contain SymInt
Change our representation of sizes and strides to contain SymInts
instead of int64_t.

Right now it's not actually possible to create a Tensor with symbolic
shape, so this change is intended to be a no-op.

But the intended behavior is:
- If you create a Tensor with symbolic shape, a `CustomSizes` policy
will be set, and the `has_symbolic_sizes_strides_` bit will be set. (not
currently implemented)
- Calling any TensorImpl function that naively interacts with sizes and
strides will throw. For hot-path functions (`sizes()`, `strides()`), we
make use of the existing policy check to throw. For others, we just have
a regular `TORCH_CHECK(!has_symbolic_sizes_strides_)`.

This also undoes the explicit constructor I made in
https://github.com/pytorch/pytorch/pull/77666; it ended up being more
annoying than useful when making these changes.
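A hedged sketch of the guard described above, with assumed simplified types (the real checks live in TensorImpl and use TORCH_CHECK):

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

struct TensorImplSketch {
  bool has_symbolic_sizes_strides_ = false;
  std::vector<int64_t> concrete_sizes_;

  // Non-hot-path accessor: a plain check that throws on symbolic shapes,
  // standing in for TORCH_CHECK(!has_symbolic_sizes_strides_).
  int64_t dim() const {
    if (has_symbolic_sizes_strides_) {
      throw std::runtime_error(
          "dim() called on a tensor with symbolic sizes/strides");
    }
    return static_cast<int64_t>(concrete_sizes_.size());
  }
};
```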

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78272

Approved by: https://github.com/Krovatkin, https://github.com/Chillee
2022-05-25 20:54:51 +00:00
Elias Ellison
2d93e1fada Add slow path for device
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77684

Approved by: https://github.com/ezyang
2022-05-24 21:56:01 +00:00
PyTorch MergeBot
fb84f2223c Revert "[symint] Make TensorImpl::sizes_and_strides_ contain SymInt"
This reverts commit a7a818d9e2.

Reverted https://github.com/pytorch/pytorch/pull/77994 on behalf of https://github.com/seemethere due to Talked with @suo and we decided to revert because of broken [internal builds](https://www.internalfb.com/intern/sandcastle/job/678535557/). Also appears as though internal codegen might be broken as well.
2022-05-24 00:14:02 +00:00
Michael Suo
a7a818d9e2 [symint] Make TensorImpl::sizes_and_strides_ contain SymInt
Change our representation of sizes and strides to contain SymInts
instead of int64_t.

Right now it's not actually possible to create a Tensor with symbolic
shape, so this change is intended to be a no-op.

But the intended behavior is:
- If you create a Tensor with symbolic shape, a `CustomSizes` policy
will be set, and the `has_symbolic_sizes_strides_` bit will be set. (not
currently implemented)
- Calling any TensorImpl function that naively interacts with sizes and
strides will throw. For hot-path functions (`sizes()`, `strides()`), we
make use of the existing policy check to throw. For others, we just have
a regular `TORCH_CHECK(!has_symbolic_sizes_strides_)`.

This also undoes the explicit constructor I made in
https://github.com/pytorch/pytorch/pull/77666; it ended up being more
annoying than useful when making these changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77994

Approved by: https://github.com/Krovatkin
2022-05-20 20:17:06 +00:00
kshitij12345
efb2c093fc [fix] complex type promotion (#77524)
Fixes https://github.com/pytorch/pytorch/issues/76803

Before Fix:
```python
>>> a = torch.randn((2, 2), dtype=torch.float)
>>> b = torch.tensor(1, dtype=torch.cdouble)
>>> (a + b).dtype
torch.complex128
```

After Fix:
```python
>>> a = torch.randn((2, 2), dtype=torch.float)
>>> b = torch.tensor(1, dtype=torch.cdouble)
>>> (a + b).dtype
torch.complex64
```

**Note**: This is a BC-breaking change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77524
Approved by: https://github.com/anjali411, https://github.com/mruberry
2022-05-20 10:23:56 +00:00
Nikolay Korovaiko
df1f9b9840 Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#77756)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77756
Approved by: https://github.com/desertfire
2022-05-20 05:39:03 +00:00
Michael Suo
68e22aa9fc [symint] add support for negative integers
The bit packing scheme is described in the comments.
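As a purely hypothetical illustration (not the actual scheme from the PR), one way to tag symbolic entries while keeping ordinary negative integers representable is to reserve the far-negative range:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical tag range: values <= -2^62 mean "symbolic index".
constexpr int64_t kSymTag = INT64_C(1) << 62;

inline int64_t pack_symbolic(int64_t index) {
  assert(index >= 0 && index < kSymTag);
  return -(index + kSymTag);  // lands in [-(2^63 - 1), -2^62]
}
inline bool is_symbolic(int64_t v) { return v <= -kSymTag; }
inline int64_t unpack_index(int64_t v) { return -v - kSymTag; }
```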

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77913

Approved by: https://github.com/Krovatkin
2022-05-20 03:46:29 +00:00
George Qi
294fff16ec add slow path for is_contiguous (#77906)
Test Plan: CI

Reviewed By: malfet, b0noI

Differential Revision: D36493890

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77906
Approved by: https://github.com/malfet
2022-05-19 22:52:45 +00:00
PyTorch MergeBot
00a187c373 Revert "add slow path for is_contiguous"
This reverts commit f6beda89c6.

Reverted https://github.com/pytorch/pytorch/pull/77396 on behalf of https://github.com/malfet
2022-05-19 17:07:54 +00:00
PyTorch MergeBot
e9d660c331 Revert "Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"""
This reverts commit acf7136a52.

Reverted https://github.com/pytorch/pytorch/pull/77719 on behalf of https://github.com/suo
2022-05-18 05:06:50 +00:00
Edward Z. Yang
acf7136a52 Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)""
This reverts commit c35bd8d423.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77719

Approved by: https://github.com/Chillee, https://github.com/malfet
2022-05-18 03:25:43 +00:00
PyTorch MergeBot
c35bd8d423 Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"
This reverts commit fc4c3c9bc7.

Reverted https://github.com/pytorch/pytorch/pull/76836 on behalf of https://github.com/suo
2022-05-18 02:45:25 +00:00
George Qi
f6beda89c6 add slow path for is_contiguous
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77396

Approved by: https://github.com/ezyang, https://github.com/cpuhrsch
2022-05-18 02:25:27 +00:00
Nikolay Korovaiko
fc4c3c9bc7 Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)
LTC Tensors now create real IR (SizeNode) for sym_sizes() in LTCTensorImpl.cpp.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76836
Approved by: https://github.com/ezyang
2022-05-18 00:40:42 +00:00
Michael Suo
7f1e331b34 Make SymInt constructor explicit
Since we plan to have a bunch of code that is sensitive to whether or
not a SymInt contains a symbolic shape, it seems like a bad idea to have
an implicit constructor.

For example, code like:
```
sizes_and_strides_.stride_at_unchecked(dim) = 0;
```

would sail through, and the `0` would get implicitly promoted to a
SymInt.

This is a tradeoff though: it makes code that handles `SymInt`s more
clunky as `int64_t`s and integer literals need to be explicitly wrapped
in `SymInt` before being used.
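A small sketch of that tradeoff, with assumed names:

```cpp
#include <cstdint>

class SymIntSketch {
  int64_t data_;
 public:
  explicit SymIntSketch(int64_t d) : data_(d) {}
};

void set_stride(SymIntSketch) {}

int main() {
  // set_stride(0);            // no longer compiles: constructor is explicit
  set_stride(SymIntSketch(0)); // callers must wrap integer literals
}
```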

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77666

Approved by: https://github.com/ezyang
2022-05-17 22:28:35 +00:00
Michael Suo
c673696b17 [skip ci] fix comment spacing in SymIntArrayRef.h
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77665

Approved by: https://github.com/ezyang
2022-05-17 22:22:23 +00:00
Edward Z. Yang
b5bc954a71 Fix optional dtype/layout/memory_format pycall; fix memory format
Double-header bug fix:

- As reported by jansel, dtypes are still showing up as integers
  when the schema is an optional dtype.  This is simple enough to
  fix and I added a test for it.  But while I was at it...

- I noticed that the THPMemoryFormat_new idiom with "unused" name
  doesn't actually work: the repr of the returned memory format
  object is wrong, and this shows up when we try to log the args/kwargs.
  So I fixed memory format to do it properly, along with everything
  else.

Fixes https://github.com/pytorch/pytorch/issues/77135

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77543

Approved by: https://github.com/albanD, https://github.com/jansel
2022-05-16 16:46:08 +00:00
Nikita Shulga
649bd82acc c10/SymInt.h: Fix "integer conversion resulted in a change of sign" (#77398)
Summary:
Cast both operands to uint64_t, since bitwise operations on signed types can be architecture-dependent; for x86_64 it doesn't matter:
https://godbolt.org/z/n81EncGz9
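A sketch of the fix pattern with an illustrative helper (not the actual c10 code):

```cpp
#include <cstdint>

// Do the bit manipulation on unsigned values, then convert back; this
// avoids "integer conversion resulted in a change of sign" warnings.
inline int64_t and_masked(int64_t v, int64_t mask) {
  return static_cast<int64_t>(
      static_cast<uint64_t>(v) & static_cast<uint64_t>(mask));
}
```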

Test Plan: CI

Differential Revision: D36365405

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77398
Approved by: https://github.com/Krovatkin
2022-05-13 20:45:56 +00:00
Kulin Seth
e011a8e18b Enable PyTorch operations on MPS Backend. (#77343)
Add PyTorch operations to the MPS backend.

- https://github.com/pytorch/pytorch/issues/77394
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77343
Approved by: https://github.com/albanD
2022-05-13 18:28:53 +00:00
Hangchen Yu
abb6fab0f4 Add new PrivateUse1 DeviceType for non-public devices (#77208)
Summary: The new PrivateUse1 DeviceType is associated with the PrivateUse1 DispatchKey, which can be used for non-public devices without introducing a new device type. Note that the stringified name of the PrivateUse1 device is "privateuseone".

Test Plan: All CI should pass.

Differential Revision: D35859437

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77208
Approved by: https://github.com/bdhirsh
2022-05-13 16:03:27 +00:00
Brian Hirsh
5762c7b25b fix StridesPolicy logic for FunctionalTensorWrapper
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77358

Approved by: https://github.com/ezyang
2022-05-13 13:27:06 +00:00
Kulin Seth
f348b1b2b5 Add the Runtime components for MPS backend. (#76725)
The PR adds the runtime components and a few basic operations, like copy and as_strided, for the MPS backend.

The current list of identified TODOs is:

-  https://github.com/pytorch/pytorch/issues/77176
- Unify the logic with CUDACachingAllocator and remove redundant code.
-  https://github.com/pytorch/pytorch/issues/77170
- Look into using C++ smart pointers where possible with ObjC code
- Use empty_strided_generic() to implement the `empty_strided_mps` code
- https://github.com/pytorch/pytorch/issues/77144
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76725
Approved by: https://github.com/albanD
2022-05-11 17:19:45 +00:00
Nikolay Korovaiko
99339fddd9 move SymInt and SymIntArrayRef to c10/core (#77009)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77009
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-05-11 16:21:31 +00:00
Edward Z. Yang
2896f81dd4 Consolidate customization contiguous/sizes policy into unified policy
Prior to this PR, we had a mish-mash of ways of getting unconventional
sizes/strides behavior:

- In OSS (but not in fbcode), some methods are virtual and you
  can override them directly

- There is an is_contiguous policy, a bitfield tag that, when set,
  makes is_contiguous() either error or hit the virtual method
  is_contiguous_custom().  Ordinarily is_contiguous()
  is virtual and you can just override it, but this works EVEN IF
  is_contiguous() is non-virtual (e.g., in fbcode)

- There is also a sizes policy which is the same idea but for sizes

This PR unifies these mechanisms, and in doing so, eliminates the
maybe virtual/not-virtualness of the methods in question.  The primary
downside of this change is that it is BC-breaking (but the BC break is
very easy to fix!)

The new scheme works like this: we have three levels of policy for
sizes/strides (order matters).

- The Default policy is a conventional dense tensor, where we use
  all of the built-in fields to directly represent the
  sizes/strides/numel/contiguity of the tensor, and it is possible
  to bypass virtual call entirely.

- The CustomStrides policy represents tensors which have a custom
  notion of strides (most typically, that they don't support them),
  shunting strides() and is_contiguous() to virtual methods
  strides_custom() and is_contiguous_custom().  This INCLUDES handling
  for contiguity, since they typically go hand-in-hand (although
  the situation is murky with batched tensors).  The default
  implementations of these functions raise errors saying the tensor
  doesn't support them.

- The CustomSizes policy represents tensors which have a custom
  notion of sizes (the two notable examples are nested tensor, which
  doesn't have a representation of sizes in the conventional form, and
  XLA/LTC tensor, which synchronizes its sizes with an underlying
  compiler backend).  This shunts sizes(), numel() and dim() (along
  with everything from strides) to _custom() variants.

There is no special policy for erroring; instead, we just do a vcall
and expect the virtual method to raise an exception (the performance
hit from the vcall doesn't matter because you're about to raise a C++
exception anyway).  The default implementations of all overridable
functions are available as _default() variants, which is helpful in some
situations when you just want to do a "sync" and then run the
conventional semantics.
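A hedged sketch of the dispatch structure described above, with assumed simplified types (the real policy enum and accessors live in TensorImpl):

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

enum class PolicySketch : uint8_t { Default, CustomStrides, CustomSizes };

struct ImplSketch {
  PolicySketch policy_ = PolicySketch::Default;
  std::vector<int64_t> sizes_;

  // Hot path: bypass the virtual call entirely under the lower policies.
  const std::vector<int64_t>& sizes() const {
    if (policy_ == PolicySketch::CustomSizes) return sizes_custom();
    return sizes_default();
  }
  const std::vector<int64_t>& sizes_default() const { return sizes_; }
  // The default _custom() raises; subclasses with unconventional sizes
  // (nested tensor, XLA/LTC) override it.
  virtual const std::vector<int64_t>& sizes_custom() const {
    throw std::runtime_error("this tensor does not support sizes()");
  }
  virtual ~ImplSketch() = default;
};
```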

This PR could be extended further in two ways but I did not do them
due to time constraints:

- Ideally, all TENSORIMPL_MAYBE_VIRTUAL would be eliminated from
  TensorImpl, by using the same policy trick.

- set_size and set_stride are still virtual; it's not entirely clear
  the same trick should be used here though as these methods are
  deprecated.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77036

Approved by: https://github.com/bdhirsh
2022-05-11 00:23:07 +00:00
Edward Z. Yang
4bd5b1614b Move legacy Caffe2 TensorImpl methods out of header
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77028

Approved by: https://github.com/bdhirsh
2022-05-11 00:23:07 +00:00
kshitij12345
00fb828276 [chalf] update type promotion table (#76893)
Reference #74537

TODO:
* [x] Add tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76893
Approved by: https://github.com/anjali411
2022-05-10 19:51:33 +00:00
Sherlockk Huang
8b6a78f39f Python Interface for Jiterator
This PR allows users to author a CUDA kernel in Python.

```
from torch.cuda.jiterator import create_jit_fn

code_string = "template <typename T> T my_kernel(T x, T y, T alpha) { return  -x * y + x - y + alpha; }"
jitted_fn = create_jit_fn(code_string, alpha=0)

a = torch.rand(3, device='cuda')
b = torch.rand(3, device='cuda')
result = jitted_fn(a, b, alpha=1.0)
```

Limitations:
- Only supports elementwise kernels
- 1~8 tensor inputs (empty input, e.g. for factory methods, is not supported)
- Input tensors must live on a CUDA device
- CPU scalars are not supported
- kwargs must be pre-declared when calling create_jit_fn
- kwargs must be convertible to at::Scalar: one of float64, int64_t, or bool (complex is not supported for now)

TODOs:
- [x] consolidate union and c10::variant implementation
- [x] plug into existing op testing framework
- [ ] rename files, place files in the right folder
- [ ] place util functions in the right file
- [x] enforce assumptions in python interface e.g <8 inputs, kwargs types
- [x] Add user-facing documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76394
Approved by: https://github.com/mruberry
2022-05-06 18:44:28 +00:00
caipengxiang
9bcb4de168 check parameter k and l
Fixes #76715

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76719
Approved by: https://github.com/ezyang
2022-05-03 11:50:36 +00:00
Peter Bell
2e480fc2db Cleanup ATen-core forward declarations
I noticed that when `SymInt` was introduced, `jit_type_base.h` was
added as an include to the `Operator.h` template, which is supposed to
be kept extremely clean and only use forward declarations. I also
noticed that forward declarations for `OptionalArrayRef` were missing.

So, I've refactored the forward declarations into
`ATen/core/ATen_fwd.h` and cleaned up some of the `c10`
headers that were masking these missing declarations. I've also
re-generated the pre-compiled header so `SymInt` is included.
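A sketch of the forward-declaration header idea with illustrative contents (see ATen/core/ATen_fwd.h for the real set):

```cpp
// ATen_fwd.h-style header sketch: declare, don't define, so that
// generated headers like Operator.h stay free of heavy includes.
namespace c10 {
template <typename T> class ArrayRef;
template <typename T> class OptionalArrayRef;
class SymInt;
class Scalar;
}  // namespace c10
```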

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76576
Approved by: https://github.com/albanD
2022-05-02 14:50:48 +00:00
Kulin Seth
54c75e1e8f Add "mps" device to PyTorch framework.
Remove the "mlc" device for Mac platforms.

This commit will be followed up with:

* adding MPS runtime components
* PyTorch ops for MPS device

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76291
Approved by: https://github.com/albanD
2022-04-27 19:21:57 +00:00
Can Balioglu
a0bf0f5611 Add new dispatch keys for Fake Tensor and Deferred Module Initialization
Thanks to @bdhirsh's work, we now have room for new dispatch keys in the `DispatchKey` enum. This PR adds two new keys for the out-of-core [Fake Tensor](https://pytorch.org/torchdistx/latest/fake_tensor.html) and [Deferred Module Initialization](https://pytorch.org/torchdistx/latest/deferred_init.html) features.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76139
Approved by: https://github.com/bdhirsh
2022-04-27 18:48:44 +00:00
Brian Hirsh
aae7b00f7c fix nested grad(functionalize(f)) transforms
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76318

Approved by: https://github.com/zou3519
2022-04-27 14:22:50 +00:00