Commit Graph

9 Commits

Richard Zou
9fcce1baf1 [custom_op] Allow constructor to infer more types (#106799)
This expands the torch._custom_ops.custom_op API so that it can construct
operators that return (int, bool, float, Scalar, List[Tensor]), bringing it
more in line with our torch.library API.

NB: our custom_op autograd registration API also needs updates. For ease
of review, those changes will go in the next PR up, but I can squash if
requested.
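
A minimal sketch of what the expanded constructor enables, assuming the
decorator form of torch._custom_ops used elsewhere in this stack (the
"mylib::count_nonzero" op and its implementation are made up for
illustration):

```
import torch
import torch._custom_ops as custom_ops

# Hypothetical op whose schema is inferred from the annotations; an int
# return type is now accepted by the constructor.
@custom_ops.custom_op("mylib::count_nonzero")
def count_nonzero(x: torch.Tensor) -> int:
    ...

# Hypothetical implementation registered for the op above.
@custom_ops.impl("mylib::count_nonzero")
def count_nonzero_impl(x):
    return int((x != 0).sum())
```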

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106799
Approved by: https://github.com/soulitzer
2023-08-14 14:33:43 +00:00
Richard Zou
16b6873885 [custom_ops] extend impl_abstract to work with existing torch.library ops (#106088)
This PR extends impl_abstract to work with existing
torch.library/TORCH_LIBRARY ops.

There's a question of what to do if the user calls impl_abstract
and the op already has a registration for:
- DispatchKey::Meta. We raise.
- DispatchKey::CompositeImplicitAutograd. We raise.
- DispatchKey::CompositeExplicitAutograd. To be pragmatic, we don't
raise, since the user's CompositeExplicitAutograd kernel might work for
all backends other than Meta.
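
For instance, a hedged sketch (the "sandbox::twice" op and its
torch.library registration here are illustrative, not taken from the PR):

```
import torch
import torch._custom_ops as custom_ops

# An op defined the existing way, via the torch.library API.
lib = torch.library.Library("sandbox", "DEF")
lib.define("twice(Tensor x) -> Tensor")
lib.impl("twice", lambda x: x * 2, "CPU")

# After this PR, an abstract (meta/fake) implementation can be attached.
# Per the rules above, an existing Meta or CompositeImplicitAutograd
# registration would make this raise; CompositeExplicitAutograd would not.
@custom_ops.impl_abstract("sandbox::twice")
def twice_abstract(x):
    return torch.empty_like(x)
```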

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106088
Approved by: https://github.com/soulitzer
ghstack dependencies: #106075, #106076
2023-08-08 13:53:20 +00:00
Richard Zou
cebff39fad [custom_ops] make custom_ops.impl work on existing operators (#106076)
The design is that we construct a CustomOp object around the existing
operator and then use it to register things. It is totally OK if the
operator isn't functional (unlike torch._custom_ops.custom_op, which can
only construct functional operators).

If the operator already has an implementation from a backend (either via
direct registration to e.g. DispatchKey::CPU, or an indirect
registration like CompositeImplicitAutograd/CompositeExplicitAutograd),
we raise an error.
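
A hedged sketch of the intended flow (all names are hypothetical; the
device_types keyword is assumed from the surrounding torch._custom_ops
API):

```
import torch
import torch._custom_ops as custom_ops

# An existing operator declared via torch.library, with no backend kernel yet.
lib = torch.library.Library("sandbox2", "DEF")
lib.define("clip(Tensor x, float lo, float hi) -> Tensor")

# Allowed: no CPU (or Composite*) kernel exists for the op yet. If one did,
# this registration would raise, as described above.
@custom_ops.impl("sandbox2::clip", device_types="cpu")
def clip_cpu(x, lo, hi):
    return x.clamp(lo, hi)
```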

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106076
Approved by: https://github.com/soulitzer
ghstack dependencies: #106075
2023-08-08 13:53:20 +00:00
Richard Zou
60a4ac3068 [custom_ops] Block overload names (#106075)
Overload names are valid with the torch.library API, but (1) they add
complexity and (2) I have never seen a custom op actually use one.
For simplicity we block all overload names.

Test Plan:
- new test
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106075
Approved by: https://github.com/soulitzer
2023-08-08 13:53:18 +00:00
Jeffrey Dunn
c5b9dc1f40 Optimize stack frame inspection in torch._custom_op.impl:CustomOp._register_impl (#105940)
Summary: Inspecting the full stack is surprisingly expensive when the stack is deep. We can instead process only the specific stack frame that's relevant, which is much faster.

Test Plan:
```
import inspect
import sys
import time

def make_deep_stack(fn, n: int = 10):
    # Recurse n times before calling fn so that fn sees a deep call stack.
    if n > 0:
        return make_deep_stack(fn, n - 1)

    return fn()

def full_stack():
    # Old approach: materialize the entire stack, then read one frame's name.
    return inspect.stack()[1][3]

def via_current_frame():
    # New approach: inspect only the single relevant frame.
    return inspect.getframeinfo(sys._getframe(1))[2]

start = time.perf_counter()
for _ in range(1000):
    make_deep_stack(full_stack)
print(f"full_stack took {time.perf_counter() - start}s")

start = time.perf_counter()
for _ in range(1000):
    make_deep_stack(via_current_frame)
print(f"via_current_frame took {time.perf_counter() - start}s")

> full_stack took 31.788201928138733s
> via_current_frame took 2.33455612603575s
```

Differential Revision: D47674015

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105940
Approved by: https://github.com/zou3519
2023-07-31 15:49:33 +00:00
Richard Zou
dad65d09f2 Update custom op API (#105947)
As described in
https://docs.google.com/document/d/1aGWtgxV3HppuxQAdddyPrs74_aEntpkYt9MalnCKnhk/edit

This PR changes the CustomOp API to be private and adds new public
wrappers around it so that the user does not need to know about the
"CustomOp" object. We've effectively changed the "CustomOp" object to be
some metadata about the operator that the user does not directly
interact with.

The "updated custom op API" is in torch._custom_ops. Pending good customer
feedback, we will promote this module to torch.custom_ops.
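
For illustration, a registration through the new wrappers might look like
the sketch below ("mylib::mysin" is a hypothetical op; only the qualified
name, not a CustomOp object, is handled by the user):

```
import torch
import torch._custom_ops as custom_ops

# Declare the op; the schema is inferred from the annotations.
@custom_ops.custom_op("mylib::mysin")
def mysin(x: torch.Tensor) -> torch.Tensor:
    ...

# Register an implementation by qualified name. The CustomOp object stays
# internal bookkeeping that the user never touches.
@custom_ops.impl("mylib::mysin")
def mysin_impl(x):
    return torch.sin(x)
```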

NB: I cannot move around the older torch._custom_op APIs yet because
people are already using them.

Test Plan:
- I changed all of our tests to use the new `torch._custom_ops` module
instead of the old CustomOp API.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105947
Approved by: https://github.com/soulitzer
2023-07-28 13:30:58 +00:00
Justin Chu
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
Richard Zou
723f111545 [custom_op] explicit autograd API (#101824)
This PR adds an explicit API for registering a backward formula for a
CustomOp. In the end state, we will likely have both this explicit API and
a magic API (which is sugar on top of the explicit API), since different
groups of users prefer different ones.

Concretely, to define a backward formula for a CustomOp:
- a user must provide us a "save for backward" function that accepts
(inputs, output) and returns exactly what they want saved for backward.
- a user must provide us a "backward" function that accepts
(ctx, saved, *grads) and returns the grad_inputs as a dict mapping
argument name (str) to its gradient.
Please see the changes in custom_op_db.py for examples of the API.
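
A hedged sketch of the two functions for a made-up multiply op, written
against the torch._custom_ops wrapper names (an assumption, since the
registration surface evolves across these commits); the signatures follow
the description above, and attribute access on `inputs` is also an
assumption:

```
import torch
import torch._custom_ops as custom_ops

@custom_ops.custom_op("mylib::mymul")
def mymul(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    ...

@custom_ops.impl("mylib::mymul")
def mymul_impl(x, y):
    return x * y

# "save for backward": gets (inputs, output) and returns exactly what
# should be stashed for the backward pass.
@custom_ops.impl_save_for_backward("mylib::mymul")
def mymul_save_for_backward(inputs, output):
    return (inputs.x, inputs.y)

# "backward": gets (ctx, saved, *grads) and returns grad_inputs as a
# dict mapping input name to its gradient.
@custom_ops.impl_backward("mylib::mymul")
def mymul_backward(ctx, saved, grad_output):
    x, y = saved
    return {"x": grad_output * y, "y": grad_output * x}
```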

There are a number of pieces to this PR and I'm happy to split it if it
helps. They are:
- The actual APIs for specifying the two functions
(impl_save_for_backward, impl_backward)
- The autograd kernel: we take the functions the user gives us and
construct an autograd.Function object that we then register to
the Autograd dispatch key
- Indirection for the autograd kernel. We add a layer of indirection so
that the autograd kernel can be swapped out. This is necessary because,
by default, we register an "autograd not implemented" kernel as the
Autograd implementation and then swap it for the real kernel once the
user provides one.

Test Plan:
- We apply this API to give backward formulas for things in
custom_op_db. We then hook up custom_op_db to the Autograd OpInfo tests.
- Various tests in test_python_dispatch.py to check error cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101824
Approved by: https://github.com/ezyang
2023-05-23 18:31:29 +00:00
Richard Zou
8487105fae [custom_op] Create a new torch._custom_op namespace (#101823)
torch/custom_op.py is getting long, and the autograd pieces are going to
make it even longer. I'm planning on just organizing the files under
a torch/_custom_op folder.

Note that the imports now look a bit unwieldy (from torch._custom_op.impl
import...) but they will look better once we figure out the plan to
make custom_op public (coming later).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101823
Approved by: https://github.com/ezyang, https://github.com/albanD, https://github.com/bdhirsh
2023-05-23 18:31:29 +00:00