Summary:
This PR improves `generate_opcheck_tests`:
- We no longer run automated testing through operators called under
torch.jit.trace / torch.jit.script.
- I improved the error message and added a guide on what to do if one of the
tests fails.
- While dogfooding this, I realized I wanted a way to reproduce a failure
without using the test suite. If you set `PYTORCH_OPCHECK_PRINT_REPRO`, the
test will now print a minimal repro on failure. This involves serializing some
tensors to disk.
- The minimal repro includes a call to a new API called `opcheck`.
The opcheck utility runs the same checks as the tests generated
by `generate_opcheck_tests`. It doesn't have a lot of knobs on it for
simplicity. The general workflow is: if an autogenerated test fails, then the
user may find it easier to reproduce the failure without the test suite by
using `opcheck`.
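For illustration, a minimal sketch of the kind of repro this enables; note that it uses the newer `torch.library.custom_op` / `torch.library.opcheck` spellings, which may differ from the module paths in this PR, and the operator is hypothetical:

```python
import torch

# Hypothetical custom op standing in for the operator that failed.
@torch.library.custom_op("mylib::scale", mutates_args=())
def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    return x * factor

@scale.register_fake
def _(x, factor):
    return torch.empty_like(x)

# opcheck runs the same checks as the autogenerated tests
# (schema, autograd registration, faketensor, aot_autograd) and raises on failure.
torch.library.opcheck(scale, (torch.randn(3), 2.0), {})
```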
Test Plan:
- new tests
Differential Revision: D48485013
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107597
Approved by: https://github.com/ezyang
This PR adds `generate_opcheck_tests`. This is a utility that adds
additional crossref tests to an existing TestCase whose tests
invoke operators. The main use case is a large test suite
that already exercises operators, where you want automated testing that
the operators are correct without refactoring your code into
something like OpInfos.
Given a `test_` method of a TestCase, we will generate one
additional test for each of {schema correctness, autograd registration,
faketensor rule, aot_autograd static shapes, aot_autograd dynamic
shapes}. Each newly generated test runs the original test method under a
special torch_function mode (OpCheckMode) that intercepts
`op(*args, **kwargs)` calls and additionally passes (op, args, kwargs) to
a separate function (e.g. SchemaCheck).
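For intuition, here is a rough sketch of the interception mechanism (not the actual OpCheckMode implementation): a torch_function mode forwards each intercepted (op, args, kwargs) triple to a checking function before running the op normally.

```python
import torch
from torch.overrides import TorchFunctionMode

def run_check(op, args, kwargs):
    # Stand-in for a real check such as SchemaCheck; here we just log the call.
    print(f"checking {op} with {len(args)} args and {len(kwargs)} kwargs")

class SimpleOpCheckMode(TorchFunctionMode):
    """Toy version of OpCheckMode: intercept op calls, hand them to a checker, then run them."""
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        run_check(func, args, kwargs)
        return func(*args, **kwargs)

with SimpleOpCheckMode():
    torch.add(torch.randn(2), torch.randn(2))  # intercepted, checked, then executed
```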
Nitty-gritty details:
- If a test is named test_cumsum, we end up generating new tests
(`test_schema__test_cumsum`, `test_<something>__test_cumsum`)
- Users can provide a dictionary of expected failures / skips that is keyed on
operator name. This gives us a sense of which operators support PT2 and which
operators require fixing before they support PT2.
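One hypothetical shape for such a dictionary (the actual format expected by `generate_opcheck_tests` may differ); it is keyed on operator name, with a status and comment per generated test:

```python
# Hypothetical expected-failures / skips dict, keyed on operator qualname.
# "xfail" marks a known failure; "skip" means the generated test is not run.
FAILURES_DICT = {
    "mylib::my_op": {
        "test_schema__test_cumsum": {
            "status": "xfail",
            "comment": "schema check fails; op needs a fix before it supports PT2",
        },
        "test_faketensor__test_cumsum": {
            "status": "skip",
            "comment": "faketensor rule not implemented yet",
        },
    },
}
```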
Due to some co-dev limitations, I'm planning on landing this PR first
and then using it to add crossref testing for internal tests and
fbgemm operators. I could squash this PR with the internal changes if we want
to see how that works out; just let me know.
Test Plan:
- We create a mini op test suite called MiniOpTests.
- Then, we use `generate_opcheck_tests` to generate tests onto it.
- We have our own test xfail list to check that the things that should
fail do fail.
- Finally, there is a separate TestGenerateOpcheckTests that checks that
the correct number of tests were generated and also tests some helper
functions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106903
Approved by: https://github.com/ezyang, https://github.com/bdhirsh
- impl_save_for_backward/impl_backward only work for functional,
non-view schemas. We validate this.
- impl_save_for_backward/impl_backward raise if there already exists an
autograd implementation from torch.library / TORCH_LIBRARY.
- Operators constructed via custom_op receive an "autograd indirection
kernel". The "autograd indirection kernel" automatically pulls the
constructed autograd kernel out of a dict. When
impl_save_for_backward/impl_backward get used with torch.library
operators, we also register the "autograd indirection kernel" so we can
reuse the logic.
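A rough sketch of the usage pattern this enables: attaching a backward to an op that was defined through torch.library rather than custom_op. The decorator signatures, the `(inputs, output)` convention for the save-for-backward function, and the dict-of-grads return convention are assumptions here and may not match the real API exactly.

```python
import torch
import torch._custom_ops as custom_ops

# A functional, non-view op defined via torch.library (hypothetical namespace/op).
lib = torch.library.Library("mylib", "DEF")
lib.define("my_sin(Tensor x) -> Tensor")

@torch.library.impl(lib, "my_sin", "CPU")
def my_sin_cpu(x):
    return torch.sin(x)

# Register autograd support after the fact. This is expected to raise if an
# autograd implementation already exists for mylib::my_sin.
@custom_ops.impl_save_for_backward("mylib::my_sin")
def my_sin_save_for_backward(inputs, output):
    return inputs.x  # assumption: saved values come off a struct of named inputs

@custom_ops.impl_backward("mylib::my_sin")
def my_sin_backward(ctx, saved, grad):
    return {"x": grad * saved.cos()}  # assumption: grads returned as a dict keyed on input name
```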
Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106817
Approved by: https://github.com/soulitzer
ghstack dependencies: #106799, #106800
Recall that the user must give us a backward function that accepts
`(ctx, saved, *grads)`, with one grad per output. Previously,
impl_backward only worked for functions that return one or more Tensors.
The new semantics are that if an output is:
- a TensorList, the backward function provided by the user will receive
a List[Tensor] of grads for that output.
- a number, the backward function provided by the user will receive
None as the grad for that output.
Also recall that impl_backward is implemented by registering an
autograd.Function to the autograd dispatch key.
We needed to make the following changes:
- If an output is a TensorList, autograd.Function will ignore it, so we
need to tree-flatten it before returning it from the autograd.Function.
- This means that the autograd.Function receives a flat list of grads
during the backward pass. We need to tree-unflatten it into the correct
structure before passing it to the user-defined backward.
- We modify the logic of output_differentiability. Only
Tensor/TensorList outputs can be marked as differentiable. If a
TensorList is marked as non-differentiable, then this is equivalent to
all Tensors in the list being non-differentiable. There is no
finer-grained control over this (to match derivatives.yaml).
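For intuition on the flatten/unflatten step, here is a sketch of the general technique using `torch.utils._pytree` (not the actual implementation): flatten a (TensorList, int) output before handing it to autograd.Function, and restore the grads' structure before calling the user-defined backward.

```python
import torch
import torch.utils._pytree as pytree

# Structured output, e.g. from an op returning (TensorList, int).
output = ([torch.randn(2), torch.randn(3)], 4)

# Tree-flatten before returning from the autograd.Function,
# since autograd.Function ignores nested TensorLists.
flat_outputs, spec = pytree.tree_flatten(output)

# During the backward pass, autograd hands back one flat grad per flattened
# output; the non-Tensor output (the int) gets None.
flat_grads = [torch.ones(2), torch.ones(3), None]

# Tree-unflatten into the original structure before calling the user-defined
# backward: the TensorList output sees a List[Tensor] of grads, the int sees None.
grads = pytree.tree_unflatten(flat_grads, spec)
print(grads)  # ([tensor([...]), tensor([...])], None)
```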
Test Plan:
- There are new `numpy_split_copy` (returns TensorList) and
`numpy_split_copy_with_int` (returns (TensorList, int)) operators in
custom_op_db.
- Added tests for output_differentiability to test/test_custom_ops.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106800
Approved by: https://github.com/soulitzer
ghstack dependencies: #106799
This expands the torch._custom_ops.custom_op API so that it can construct
operators that return (int, bool, float, Scalar, List[Tensor]), making
it more in line with our torch.library API.
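A sketch of what such a definition might look like with the annotation-based `custom_op` decorator; the exact annotation spellings and decorator signatures here are assumptions, and the hypothetical op loosely mirrors `numpy_split_copy_with_int` from custom_op_db.

```python
from typing import List, Tuple
import torch
import torch._custom_ops as custom_ops

# Hypothetical operator returning both a TensorList and an int.
@custom_ops.custom_op("mylib::split_with_count")
def split_with_count(x: torch.Tensor, sections: int) -> Tuple[List[torch.Tensor], int]:
    ...

@custom_ops.impl("mylib::split_with_count")
def split_with_count_impl(x, sections):
    chunks = [c.clone() for c in torch.tensor_split(x, sections)]
    return chunks, len(chunks)
```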
NB: our custom_op autograd registration API needs updates as well. For ease of
review, those changes will go in the next PR up, but I can squash if requested.
Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106799
Approved by: https://github.com/soulitzer
This PR extends impl_abstract to work with existing
torch.library/TORCH_LIBRARY ops.
There's a question of what to do if the user calls impl_abstract
and the op already has a registration for:
- DispatchKey::Meta. We raise.
- DispatchKey::CompositeImplicitAutograd. We raise.
- DispatchKey::CompositeExplicitAutograd. To be pragmatic, we don't
raise, since the user's CompositeExplicitAutograd kernel might work for
every backend except Meta.
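A sketch of the intended usage: registering an abstract (meta/fake) rule for an op defined through torch.library. The `torch.library.impl_abstract` spelling follows the later public API and may differ from the module used in this PR; the op itself is hypothetical.

```python
import torch

lib = torch.library.Library("mylib", "DEF")  # hypothetical namespace
lib.define("pad_to(Tensor x, int size) -> Tensor")

@torch.library.impl(lib, "pad_to", "CPU")
def pad_to_cpu(x, size):
    flat = x.flatten()
    out = flat.new_zeros(size)
    n = min(size, flat.numel())
    out[:n] = flat[:n]
    return out

# Abstract rule: describes output metadata (shape/dtype/device) without real data.
# Per the rules above, this is expected to raise if the op already has a Meta or
# CompositeImplicitAutograd registration.
@torch.library.impl_abstract("mylib::pad_to")
def pad_to_abstract(x, size):
    return x.new_empty(size)
```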
Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106088
Approved by: https://github.com/soulitzer
ghstack dependencies: #106075, #106076
The design is that we construct a CustomOp object around the existing
operator and then use it to register things. It is totally OK if the
operator isn't functional (unlike torch._custom_ops.custom_op, which can
only construct functional operators).
If the operator already has an implementation from a backend (either via
direct registration to e.g. DispatchKey::CPU, or an indirect
registration like CompositeImplicitAutograd/CompositeExplicitAutograd),
we raise an error.
Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106076
Approved by: https://github.com/soulitzer
ghstack dependencies: #106075
Overload names are valid with the torch.library API, but (1) they add complexity
and (2) I have never seen a custom op actually use an overload name
before. For simplicity, we block all overloads.
Test Plan:
- new test
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106075
Approved by: https://github.com/soulitzer
This PR:
- Changes the AOTAutograd tests to also check that the output of the
forward is equal under AOTAutograd and eager-mode PyTorch.
- Adds a "check_gradients" flag to `check_aot_autograd`.
- If True, then we attempt to compute gradients and check them.
- If False, then we just check that the outputs are equal.
- If "auto", then we will compute gradients and check them only if
some input and some output requires grad. This option is useful for
crossref tests where we don't necessarily have inputs that require
grad.
Motivation for the "auto" option:
1) I need a testing utility to test "AOTAutograd for inference",
e.g. make_fx + functionalize.
2) I want to run aot_autograd_check in crossref tests for other test
suites (e.g. fbgemm), where not all inputs require grad.
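A minimal sketch of how the "auto" decision described above could be made (a hypothetical helper, not the actual check_aot_autograd internals):

```python
import torch

def should_check_gradients(check_gradients, inputs, outputs):
    """Decide whether to compare gradients, following the 'auto' semantics above."""
    if isinstance(check_gradients, bool):
        return check_gradients
    assert check_gradients == "auto"
    # Only check grads if at least one input and at least one output requires grad;
    # otherwise there is nothing meaningful to differentiate (common in crossref
    # tests whose inputs do not require grad).
    any_input = any(isinstance(t, torch.Tensor) and t.requires_grad for t in inputs)
    any_output = any(isinstance(t, torch.Tensor) and t.requires_grad for t in outputs)
    return any_input and any_output

# Example: no input requires grad, so "auto" skips the gradient comparison.
print(should_check_gradients("auto", [torch.randn(3)], [torch.randn(3)]))  # False
```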
Test Plan:
- existing tests
- new tests to test the degenerate cases
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106558
Approved by: https://github.com/ezyang, https://github.com/soulitzer
As described in
https://docs.google.com/document/d/1aGWtgxV3HppuxQAdddyPrs74_aEntpkYt9MalnCKnhk/edit
This PR changes the CustomOp API to be private and adds new public
wrappers around it so that the user does not need to know about the
"CustomOp" object. We've effectively changed the "CustomOp" object to be
some metadata about the operator that the user does not directly
interact with.
The "updated custom op API" is in torch._custom_ops. Pending good customer
feedback, we will promote this module to torch.custom_ops.
NB: I cannot move around the older torch._custom_op APIs yet because
people are already using them.
Test Plan:
- I changed all of our tests to use the new `torch._custom_ops` module
instead of the old CustomOp API.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105947
Approved by: https://github.com/soulitzer
This PR moves most custom-op-related tests from
test/test_python_dispatch.py to test/test_custom_ops.py. The motivation is
that I had a difficult time finding the custom op tests inside
test_python_dispatch.py.
This doesn't preserve blame, but it's OK - I'm the only person who has
really touched the moved tests so far :).
Test Plan:
- run tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106036
Approved by: https://github.com/bdhirsh, https://github.com/soulitzer