Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html
This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).
This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).
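The F403 hazard can be sketched without any real modules; the dicts below stand in for module namespaces (everything here is illustrative, not PyTorch code):

```python
# Simulate `from module import *`: each update() merges a module's names into
# the importing namespace, so a later wildcard silently shadows an earlier one.
module_a = {"process": lambda x: x + 1}
module_b = {"process": lambda x: x * 2}  # defines the same name as module_a

namespace = {}
namespace.update(module_a)  # from module_a import *
namespace.update(module_b)  # from module_b import *  (last writer wins)

result = namespace["process"](3)
print(result)  # 6, not 4 -- and nothing warned about the collision
```

An explicit import list makes such collisions a visible error at review time, which is why the linter flags F403.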
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838
Test Plan: CI. You can also run `flake8` locally.
Reviewed By: jbschlosser
Differential Revision: D27724232
Pulled By: samestep
fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54470
```
git grep -l 'DefaultBackend' | xargs sed -i 's/DefaultBackend/CompositeExplicitAutograd/g'
```
Plus a quick fixup in native/README.md
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bdhirsh
Differential Revision: D27253240
Pulled By: ezyang
fbshipit-source-id: 964df951ea8b52fa72937f3cc66aeaf49a702e6f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54466
I had to very carefully audit all the use sites since there are a lot
of other uses of the string Math; I did most of the conversion by
grepping for all occurrences of Math and then doing a search
replace.
I also updated documentation for clarity.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27253239
Pulled By: ezyang
fbshipit-source-id: afb485d07ff39575742a4f0e1e205179b60bc953
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54427
A StructuredNativeFunctions is no longer guaranteed to actually
be structured (test the `structured` property for that), so we rename
it to a more neutral name.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ailzhang
Differential Revision: D27235380
Pulled By: ezyang
fbshipit-source-id: 2b438d615bf06a47fc9c7bf6eb66fd8b4df31bc8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54426
Previously, we only put NativeFunctions in StructuredNativeFunctions
if the out variant advertised that the kernel was structured. However,
there are a few code generation things that can take advantage of
this trio structure, even if the kernel itself hasn't been ported
to be structured. So better to always group things when they are
related, and then let clients decide whether or not to use the
structure or throw it away.
While doing this, I had hoped that there weren't any functional/inplace
pairs that didn't also have an out variant. This turned out not to
be true. These are probably all oversights and should get fixed at
some point.
Bill of changes:
- The actual operational change happens in
StructuredNativeFunctions.from_dict; then I need to relax some
__post_init__ invariants. To tell if a StructuredNativeFunctions
is actually structured, there is a new structured property, which
is queried from a few new locations in code
- Refactor native_functions.py into gen_structured/gen_unstructured
functions so I can easily call gen_unstructured from two contexts
I intend to s/StructuredNativeFunctions/NativeFunctionsGroup/ but
for ease of review this rename hasn't been done in this PR.
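The relaxed grouping can be sketched as follows (a toy stand-in for StructuredNativeFunctions.from_dict; the kind strings and return shape are assumptions):

```python
def group_related(funcs):
    """funcs: iterable of (base_name, kind, func) with kind in
    {"functional", "inplace", "out"}. Always group related overloads;
    the caller decides whether to use the structure or throw it away."""
    groups = {}
    for base, kind, func in funcs:
        groups.setdefault(base, {})[kind] = func
    return groups

groups = group_related([
    ("add", "functional", "add"),
    ("add", "inplace", "add_"),
    ("add", "out", "add_out"),
    ("abs", "functional", "abs"),
    ("abs", "inplace", "abs_"),  # a functional/inplace pair with no out variant
])
print(sorted(groups["add"]))   # ['functional', 'inplace', 'out']
print("out" in groups["abs"])  # False: likely an oversight, per the note above
```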
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ailzhang
Differential Revision: D27235379
Pulled By: ezyang
fbshipit-source-id: d8a15de9abb75b365348ab94e67b830704e30cf0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54419
I'm planning to break it into some helper functions, so let's put it in its own module first.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ailzhang
Differential Revision: D27235378
Pulled By: ezyang
fbshipit-source-id: c03c5440d2d753859e2c5ec2b2c8b1b82870f03a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53859
The redispatch API wasn't linking properly when static dispatch is enabled. I'm still not sure why this wasn't caught by the static dispatch test in CI; maybe, as swolchok pointed out, we have a flag set somewhere that defers undefined symbols until runtime.
Before, building with static dispatch enabled locally + running `import torch` gave me this error:
```
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/raid/hirsheybar/pytorch/torch/__init__.py", line 197, in <module>
from torch._C import *
ImportError: /raid/hirsheybar/pytorch/torch/lib/libtorch_cpu.so: undefined symbol: _ZN2at10redispatch11logical_or_EN3c1014DispatchKeySetERNS_6TensorERKS3_
>>>
```
Printing the symbol:
```
(pytorch) hirsheybar@devfair017:/scratch/hirsheybar/pytorch$ c++filt _ZN2at10redispatch11logical_or_EN3c1014DispatchKeySetERNS_6TensorERKS3_
at::redispatch::logical_or_(c10::DispatchKeySet, at::Tensor&, at::Tensor const&)
```
Sure enough, the functions defined in `RedispatchFunctions.cpp` don't have the DispatchKeySet argument included. Adding them in this PR.
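The shape of the fix can be sketched as a string-building step (the helper below is hypothetical; the real change lives in the codegen for `RedispatchFunctions.cpp`): the definition must carry the same leading DispatchKeySet parameter the declaration advertises, or the mangled symbols won't match at link time.

```python
def redispatch_defn(name: str, params: list) -> str:
    # Prepend the DispatchKeySet parameter so the definition's mangled
    # symbol matches the declaration in RedispatchFunctions.h.
    full = ["c10::DispatchKeySet dispatchKeySet"] + params
    return f"Tensor & {name}({', '.join(full)})"

sig = redispatch_defn("logical_or_", ["Tensor & self", "const Tensor & other"])
print(sig)
```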
Test Plan: Imported from OSS
Reviewed By: ljk53
Differential Revision: D26998735
Pulled By: bdhirsh
fbshipit-source-id: c6c1104e42d13b7ec9d964b7e08d2adc8b344b78
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51957
This is a simplified version of #51554.
Compared to #51554, this version only supports statically dispatching to
a specific backend. The benefit is that it skipped the dispatch key
computation logic thus has less framework overhead. The downside is that
if input tensors do not match the specified backend it will throw an error
instead of falling back to regular dispatch.
Sample code:
```
Tensor empty(IntArrayRef size, TensorOptions options, c10::optional<MemoryFormat> memory_format) {
return at::cpu::empty(size, options, memory_format);
}
// aten::conj(Tensor(a) self) -> Tensor(a)
Tensor conj(const Tensor & self) {
return at::math::conj(self);
}
// aten::conj.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)
Tensor & conj_out(Tensor & out, const Tensor & self) {
return at::cpu::conj_out(out, self);
}
// aten::conj.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)
Tensor & conj_outf(const Tensor & self, Tensor & out) {
return at::cpu::conj_out(out, self);
}
// aten::_conj(Tensor self) -> Tensor
Tensor _conj(const Tensor & self) {
return at::defaultbackend::_conj(self);
}
```
For ops without the specific backend dispatch, it will throw an error:
```
// aten::_use_cudnn_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank) -> bool
bool _use_cudnn_ctc_loss(const Tensor & log_probs, const Tensor & targets, IntArrayRef input_lengths, IntArrayRef target_lengths, int64_t blank) {
TORCH_CHECK(false, "Static dispatch does not support _use_cudnn_ctc_loss for CPU.");
}
```
Differential Revision: D26337857
Test Plan: Imported from OSS
Reviewed By: bhosmer
Pulled By: ljk53
fbshipit-source-id: a8e95799115c349de3c09f04a26b01d21a679364
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51585
Some payoff from the stack of refactors. When I initially landed
at::cpu, Brian asked me why I couldn't just separate the anonymous
and namespaced definitions. Well, it used to be annoying. Now it's
not annoying anymore, so go ahead and split them up.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: zou3519
Differential Revision: D26209873
Pulled By: ezyang
fbshipit-source-id: 63057d22acfaa0c17229947d9e65ec1193e360ec
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51508
No substantive changes. The codegen for this file was getting a
bit long so I moved it off into tools.codegen.dest submodule (I
wanted to do tools.codegen.gen but that conflicts with the existing
module; oy vey!) To do this I had to move some other functions around
so that they were more generally accessible. Otherwise
self-explanatory.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ljk53
Differential Revision: D26187856
Pulled By: ezyang
fbshipit-source-id: fd3784571d03d01c4acb7ca589fcde4492526408
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51500
I'm going to add some new Target types shortly, so having tighter
types for the individual unions will make it clearer which ones
are valid.
This is also the first use of typing_extensions in the codegen,
and I want to make sure it works.
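The tighter unions can be sketched like this (names illustrative; `typing.Literal` is used here for self-containedness, where the PR used `typing_extensions` for older Pythons):

```python
from typing import Literal, Union

# Instead of one catch-all Target enum, each code path accepts only the
# literals that are valid for it, so mypy flags invalid targets statically.
DeclTarget = Literal["declaration"]
DefnTarget = Literal["definition", "registration"]

def gen(target: Union[DeclTarget, DefnTarget]) -> str:
    if target == "declaration":
        return "TORCH_API Tensor add(const Tensor & self, const Tensor & other);"
    return "// definition or registration body"

print(gen("declaration"))
```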
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D26187854
Pulled By: ezyang
fbshipit-source-id: 6a9842f19b3f243b90b210597934db902b816c21
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51499
I'm going to turn on at::cpu signatures for all operators; before
I do it I want to make sure I'm at feature parity everywhere.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D26187855
Pulled By: ezyang
fbshipit-source-id: 8fdfd9d843fc98435b1f1df8b475d3184d87dc96
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51490
Mutable Tensor ref is a source of endless confusion for kernel writers;
if we're going to make everyone rewrite their kernels, might as well
also get rid of mutable Tensor& while we're at it.
This is a refactor-then-small-update double whammy. The refactor
is to separate tools.codegen.api.structured from api.native for
describing the type signatures of structured kernels (previously,
I was naughtily reusing native for this purpose--now I need it to
behave differently as Tensor). This started off as a copy paste, but
since there are not that many structured kernels so far I could delete
all of the legacy logic from native that didn't make sense (without
having to go out and fix all the use sites all at once).
One more small addition was teaching translate to convert Tensor& to const Tensor&.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D26182413
Pulled By: ezyang
fbshipit-source-id: ed636866add3581179669cf9283f9835fcaddc06
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51245
Splitting this out from #51164 (D26069629) to allow it to
land separately; I'm sure this is a good idea but I'm less sure about
#51164.
ghstack-source-id: 120697499
Test Plan:
double-check effect on empty benchmark with perf stat;
didn't move
Reviewers: ezyang, messmer
Reviewed By: ezyang
Differential Revision: D26112627
fbshipit-source-id: 50d4418d351527bcedd5ccdc49106bc642699870
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51115
Add enum type for dispatch key. Prepare to implement the DispatchTable
computation logic in python for static dispatch.
Verified byte-for-byte compatibility of the codegen output.
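A minimal sketch of what such an enum looks like (the members and parse helper here are invented; the real list in tools/codegen/model.py is much longer):

```python
from enum import Enum

class DispatchKey(Enum):
    CPU = "CPU"
    CUDA = "CUDA"
    Autograd = "Autograd"

    def __str__(self) -> str:
        return self.value

    @staticmethod
    def parse(value: str) -> "DispatchKey":
        # Map the raw string from native_functions.yaml to the enum member,
        # turning a typo into a loud error rather than a silent mismatch.
        for k in DispatchKey:
            if k.value == value:
                return k
        raise AssertionError(f"unknown dispatch key {value}")

print(DispatchKey.parse("CPU") is DispatchKey.CPU)
```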
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D26077430
Pulled By: ljk53
fbshipit-source-id: 86e74f3eb32266f31622a2ff6350b91668c8ff42
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50794
Original commit changeset: b4a7948088c0
There are some subtle extra tweaks on top of the original. I can unbundle them, but I've opted to keep it with the port because it's the easiest way to make sure the changes are exercised.
* There's a bugfix in the codegen to test if a dispatch key is structured *before* short circuiting because the dispatch key was missing in the table. This accounts for mixed structured-nonstructured situations where the dispatch table is present, but the relevant structured key isn't (because the dispatch table only exists to register, e.g., QuantizedCPU)
* Dispatch tables for functions which delegate to structured kernels don't have Math entries generated for them.
* It's now illegal to specify a structured dispatch key in a delegated structured kernel (it would be ignored!); `add` is now fixed to follow this
* There are some extra sanity checks for NativeFunctions validation
* Finally, unlike the original PR, I switched the .vec variant of upsample_nearest2d to also be DefaultBackend, bringing it inline with upsample_nearest1d.
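The ordering bugfix in the first bullet can be sketched as follows (all helper and field names are hypothetical):

```python
def compute_kernel(f, dispatch_key):
    """Decide which kernel to generate for f at dispatch_key."""
    # Test structured-ness FIRST: in mixed structured/non-structured
    # situations the dispatch table may exist (e.g. only to register
    # QuantizedCPU) while the relevant structured key is absent from it.
    if f.get("structured_delegate") and dispatch_key in ("CPU", "CUDA"):
        return "structured"
    if dispatch_key not in f.get("dispatch", {}):
        return None  # short circuit only after the structured test
    return f["dispatch"][dispatch_key]

f = {"structured_delegate": "add.out", "dispatch": {"QuantizedCPU": "qadd"}}
print(compute_kernel(f, "CPU"))           # structured, despite no table entry
print(compute_kernel(f, "QuantizedCPU"))  # qadd
```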
ghstack-source-id: 120038038
Test Plan:
```
buck test mode/dev //coreai/tiefenrausch:python_tests -- --exact 'coreai/tiefenrausch:python_tests - test_can_run_local_async_inference_cpu (coreai.tiefenrausch.tests.python_test.TiefenrauschPY)' --run-disabled
```
Reviewed By: ngimel
Differential Revision: D25962873
fbshipit-source-id: d29a9c97f15151db3066ae5efe7a0701e6dc05a3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49505
I have a problem which is that static runtime needs a way to bypass
dispatch and call into kernels directly. Previously, it used
native:: bindings to do this; but these bindings no longer exist
for structured kernels! Enter at::cpu: a namespace of exactly
at:: compatible functions that assume all of their arguments are
CPU and non-autograd! The header looks like this:
```
namespace at {
namespace cpu {
CAFFE2_API Tensor & add_out(Tensor & out, const Tensor & self, const Tensor & other, Scalar alpha=1);
CAFFE2_API Tensor add(const Tensor & self, const Tensor & other, Scalar alpha=1);
CAFFE2_API Tensor & add_(Tensor & self, const Tensor & other, Scalar alpha=1);
CAFFE2_API Tensor & upsample_nearest1d_out(Tensor & out, const Tensor & self, IntArrayRef output_size, c10::optional<double> scales=c10::nullopt);
CAFFE2_API Tensor upsample_nearest1d(const Tensor & self, IntArrayRef output_size, c10::optional<double> scales=c10::nullopt);
CAFFE2_API Tensor & upsample_nearest1d_backward_out(Tensor & grad_input, const Tensor & grad_output, IntArrayRef output_size, IntArrayRef input_size, c10::optional<double> scales=c10::nullopt);
CAFFE2_API Tensor upsample_nearest1d_backward(const Tensor & grad_output, IntArrayRef output_size, IntArrayRef input_size, c10::optional<double> scales=c10::nullopt);
}}
```
This slows down static runtime because these are not the "allow
resize of nonzero tensor" variant binding (unlike the ones I had manually
written). We can restore this: it's a matter of adding codegen smarts to
do this, but I haven't done it just yet since it's marginally more
complicated.
In principle, non-structured kernels could get this treatment too.
But, like an evil mastermind, I'm withholding it from this patch, as an extra
carrot to get people to migrate to structured muahahahaha.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D25616105
Pulled By: ezyang
fbshipit-source-id: 84955ae09d0b373ca1ed05e0e4e0074a18d1a0b5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49164
This PR removes the logic paths in codegen that were responsible for handling non-c10-full ops.
This only goes through our basic codegen. It does not simplify C++ code yet and it does not remove the codegen for generated unboxing wrappers yet.
ghstack-source-id: 119450487
Test Plan: waitforsandcastle
Reviewed By: ezyang
Differential Revision: D25462977
fbshipit-source-id: 7e70d14bea96948f5056d98125f3e6ba6bd78285
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49498
In the near future, I want to code generate some functions that are
visible externally to this compilation unit. I cannot easily do this
if all the codegen code is wrapped in a global anonymous namespace,
so push the namespace in.
Registration has to stay in an anonymous namespace to avoid name conflicts.
This could also have been solved by making the wrapper functions have
more unique names but I didn't do this in the end.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: albanD, smessmer
Differential Revision: D25616104
Pulled By: ezyang
fbshipit-source-id: 323c0dda05a081502aab702f359a08dfac8c41a4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49735
This is the final wave of autograd codegen data model migration.
After this PR:
- autograd codegen no longer depends on Declarations.yaml;
- autograd codegen sources are fully type annotated and pass mypy-strict check;
To avoid potential merge conflicts with other pending PRs, some structural
changes are intentionally avoided, e.g. didn't move inner methods out, didn't
change all inner methods to avoid reading outer function's variables, and etc.
Confirmed byte-for-byte compatible with the old codegen:
```
Run it before and after this PR:
.jenkins/pytorch/codegen-test.sh <baseline_output_dir>
.jenkins/pytorch/codegen-test.sh <test_output_dir>
Then run diff to compare the generated files:
diff -Naur <baseline_output_dir> <test_output_dir>
```
Confirmed clean mypy-strict run:
```
mypy --config mypy-strict.ini
```
Test Plan: Imported from OSS
Reviewed By: ezyang, bhosmer
Differential Revision: D25678879
Pulled By: ljk53
fbshipit-source-id: ba6e2eb6b9fb744208f7f79a922d933fcc3bde9f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49490
No reason to let people to do the legacy thing for the brand new kernel.
This simplifies the codegen. I have to port the two structured kernels
to this new format.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D25595406
Pulled By: ezyang
fbshipit-source-id: b5931873379afdd0f3b00a012e0066af05de0a69
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49489
Previously, it was done at a use site, but that meant other use
sites don't get the right logic. Pushing it in makes sure everyone
gets it.
I also fixed one case of confusion where defn() was used to define a decl().
If you want to define a declaration with no defaults, say no_default().decl()
which is more direct and will give us code reviewers a clue if you should
have pushed this logic in.
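The decl()/defn()/no_default() trio described above can be sketched with a toy signature class (the real one in tools/codegen/api/types.py has different internals):

```python
class CppSignature:
    def __init__(self, args, with_defaults=True):
        # args: list of (type, name, default-or-None)
        self.args = args
        self.with_defaults = with_defaults

    def no_default(self):
        # Explicitly request a declaration without defaults.
        return CppSignature(self.args, with_defaults=False)

    def decl(self):
        # Declarations carry defaults unless suppressed via no_default().
        return ", ".join(
            f"{t} {n}" + (f"={d}" if d is not None and self.with_defaults else "")
            for t, n, d in self.args
        )

    def defn(self):
        # Definitions never carry defaults; C++ forbids repeating them.
        return ", ".join(f"{t} {n}" for t, n, _ in self.args)

s = CppSignature([("const Tensor &", "self", None), ("Scalar", "alpha", "1")])
print(s.decl())               # const Tensor & self, Scalar alpha=1
print(s.no_default().decl())  # const Tensor & self, Scalar alpha
```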
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D25595407
Pulled By: ezyang
fbshipit-source-id: 89c664f0ed4d95699794a0d3123d54d0f7e4cba4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49348
This is a redux of #45666 post refactor, based off of
d534f7d4c5
Credit goes to peterbell10 for the implementation.
Fixes #43945.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D25594004
Pulled By: ezyang
fbshipit-source-id: c8eb876bb3348308d6dc8ba7bf091a2a3389450f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49341
I noticed that #49097 was using manual_kernel_registration incorrectly,
so this diff tightens up the testing so that:
1. We don't generate useless wrapper functions when manual_kernel_registration
is on (it's not going to be registered, so it does nothing).
2. manual_kernel_registration shouldn't affect generation of functions in
Functions.h; if you need to stop bindings, use manual_cpp_binding
3. Combining structured and manual_kernel_registration is a hard error
4. We raise an error if you set dispatch and manual_kernel_registration at the
same time.
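The hard-error checks can be sketched as assertions in the yaml-loading step (field names assumed to mirror native_functions.yaml entries):

```python
def validate(f: dict) -> None:
    if not f.get("manual_kernel_registration"):
        return
    # Structured + manual_kernel_registration is a hard error.
    assert not f.get("structured"), \
        "structured kernels cannot use manual_kernel_registration"
    # Specifying dispatch alongside manual_kernel_registration is an error.
    assert "dispatch" not in f, \
        "cannot specify dispatch and manual_kernel_registration together"

validate({"func": "retain_grad(Tensor(a!) self) -> ()",
          "manual_kernel_registration": True})  # passes
try:
    validate({"func": "bad", "manual_kernel_registration": True,
              "structured": True})
except AssertionError as e:
    print(e)
```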
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D25594003
Pulled By: ezyang
fbshipit-source-id: 655b10e9befdfd8bc95f1631b2f48f995a31a59a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49245
This will make it easier to implement the POC in
d534f7d4c5
see also https://github.com/pytorch/pytorch/pull/45666
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D25594005
Pulled By: ezyang
fbshipit-source-id: e458d3dc3a765ec77425761b9b17f23769cecf9e
Summary:
Since caffe2 and torch have been consolidated, CAFFE2_API should be merged with TORCH_API. Addresses a TODO.
Manually edited some references of the removed `CAFFE2_API`:
* `CONTRIBUTING.md`
* `caffe2/proto/CMakeLists.txt`
* `cmake/ProtoBuf.cmake`
* `c10/macros/Export.h`
* `torch/csrc/WindowsTorchApiMacro.h`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49496
Reviewed By: malfet, samestep
Differential Revision: D25600726
Pulled By: janeyx99
fbshipit-source-id: 7e068d959e397ac183c097d7e9a9afeca5ddd782
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49510
Adding old-style operators with out arguments will break XLA. This prevents that. See for background: https://fb.workplace.com/groups/pytorch.dev/permalink/809934446251704/
This is a temporary change that will prevent this breakage for the next couple of days until the problem is resolved for good.
It will be deleted in https://github.com/pytorch/pytorch/pull/49164 then.
ghstack-source-id: 118756437
(Note: this ignores all push blocking failures!)
Test Plan: waitforsandcastle
Reviewed By: bhosmer
Differential Revision: D25599112
fbshipit-source-id: 6b0ca4da4b55da8aab9d1b332cd9f68e7602301e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49122
cpparguments_exprs has induced a lot of head scratching in many recent PRs about how to structure the code in a good way. This PR replaces the old algorithm with an entirely new one inspired by logic programming. The net result is shorter, cleaner, and should be more robust to future changes.
This PR is a bit of a whopper. Here is the order to review it.
- tools/codegen/api/types.py
- Deleted CppArgument, CppArgumentPackIface (and subclasses), CppExpr, DispatcherExpr, DispatcherArgument, NativeExpr, NativeArgument, MetaArgument. All things previously called XArgument are now Binding. All things previously called XExpr are now Expr. I deleted the `__str__` implementation on Binding and fixed all call sites not to use it. On Binding, I renamed `str_no_default` and `str_default` to `defn` and `decl` for better symmetry with the corresponding signature concepts, although I'm open to naming them back to their original versions.
- Obviously, things are less type safe without the class distinctions. So I introduce a new ADT called CType. CType represents the *semantic C++ type* of a binding: it is both the C++ type (e.g., `const Tensor&`) as well as the argument name that specifies what the binding denotes (e.g., `other`). Every binding now records its CType. The key observation here is that you don't actually care if a given expression is from the cpp or dispatcher or native API; what you care is having enough information to know what the expression means, so you can use it appropriately. CType has this information. For the most part, ArgNames are just the string names of the arguments as you see them in JIT schema, but there is one case (`possibly_redundant_memory_format`) where we encode a little extra information. Unlike the plain strings we previously used to represent C++ types, CType have a little bit of structure around optional and references, because the translation code needs to work around these concepts.
- I took the opportunity to kill all of the private fields like `_arguments` and `_returns_type` (since the argument types don't make sense anymore). Everything is computed for you on the fly. If this is a perf problem in codegen we can start using `cached_property` decorator.
- All of the heavy lifting in CppSignature.argument_packs has been moved to the cpp module. We'll head over there next. Similarly, all of the exprs methods are now calling translate, the new functionality which we haven't gotten to yet
- tools/codegen/api/cpp.py
- We refactor all of the type computation functions to return CType instead of str. Because CTypes need to know the denotation, there is a new `binds: ArgName` argument to most functions that provides the denotation, so we can slot it in. (An alternative would have been to construct CTypes without denotations and then fill them in post-facto, but I didn't do it this way. One downside is there are some places where I need a CType without denotation, so I fill these in with `__placeholder__` whenever this happens).
- `argument` and `arguments` are now extremely simple. There is no more Pack business, just produce one or more Bindings. The one thing of note is that when both a `memory_format` and `options` are in scope, we label the memory format as `possibly_redundant_memory_format`. This will be used in translation
- tools/codegen/api/dispatcher.py and tools/codegen/api/native.py - same deal as cpp.py. One thing is that `cpparguments_exprs` is deleted; that is in the translator
- tools/codegen/api/translate.py - the translator! It uses a very simple backwards deduction engine to work out how to fill in the arguments of functions. There are comments in the file that explain how it works.
- Everything else: just some small call site tweaks for places when I changed API.
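The backwards-deduction idea in translate.py can be sketched in a few lines (a drastic simplification: CType is reduced to a (type, name) pair and only one conversion rule is shown):

```python
# Goal-directed translation: for each goal CType, find a binding in the
# context that provides it, possibly via a known-safe conversion.
def translate(context, goals):
    exprs = []
    for goal_type, goal_name in goals:
        for expr, (typ, name) in context:
            if name != goal_name:
                continue  # denotation must match, not just the C++ type
            if typ == goal_type:
                exprs.append(expr)
                break
            # One deduction rule: a `Tensor &` binding satisfies a
            # `const Tensor &` goal (binding mutable to const is safe).
            if goal_type == "const Tensor &" and typ == "Tensor &":
                exprs.append(expr)
                break
        else:
            raise RuntimeError(f"cannot satisfy {goal_type} {goal_name}")
    return exprs

ctx = [("out", ("Tensor &", "out")), ("self", ("const Tensor &", "self"))]
print(translate(ctx, [("const Tensor &", "self"), ("const Tensor &", "out")]))
```

The key point mirrors the description above: the solver cares about what an expression *denotes* (its ArgName), not which API it came from.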
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ljk53
Differential Revision: D25455887
Pulled By: ezyang
fbshipit-source-id: 90dc58d420d4cc49281aa8647987c69f3ed42fa6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49012
For some reason we apply default arguments to the functions in at::native too. So when an out overload had default arguments,
we couldn't move the out argument to the end because of those default arguments preceding it.
This PR fixes that and makes out overloads with default arguments c10-full
ghstack-source-id: 118619222
(Note: this ignores all push blocking failures!)
Test Plan: waitforsandcastle
Reviewed By: ezyang
Differential Revision: D25394605
fbshipit-source-id: 2ed1c3ce0d04a548e3141df2dca517756428fe15
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49244
see https://fb.quip.com/ceEdANd5iVsO
RegisterMkldnnCPU kernels incorrectly used makeUnboxedOnly() calls to register add_.Tensor kernels. This is because the codegen incorrectly thought they're not c10-full.
This PR fixes that.
ghstack-source-id: 118411117
Test Plan: After this PR, RegisterMkldnnCPU doesn't contain the makeUnboxedOnly() calls anymore.
Reviewed By: ezyang
Differential Revision: D25500246
fbshipit-source-id: 8a8c2be9c4f4a5ce7eaae94257c2f8cbd176e92e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48911
This enables us to use hacky_wrapper_for_legacy_signatures for ops with out arguments so they can use templated unboxing logic without having to be rewritten.
This only actually enables it for one op as a proof of concept. There will be a separate PR enabling it for more ops.
ghstack-source-id: 118379659
Test Plan: waitforsandcastle
Reviewed By: bhosmer
Differential Revision: D25363336
fbshipit-source-id: da075d2cc58814f886a25d52652511dbbe990cec
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49092
Functions which specify manual_cpp_binding don't automatically
get C++ bindings generated for them in TensorBody.h or
Functions.h. This lets end users manually define the bindings
themselves, which may be helpful if there is a way to
short circuit the dispatcher entirely. contiguous() is switched
to use this mechanism.
Although manual_cpp_binding suggests that we don't generate the
binding at all, it is often the case that there is some "fast
path", but when this path is not satisfied, we should go back
to the slow dispatch. So we still generate a fallback method/function
which the user-defined binding can call into in case that we
have to go slowpath.
The correctness conditions for bindings manually written in this
way are subtle. Here are the ones I can think of off the top
of my head:
- Whatever condition is tested in the C++ body, must ALSO be
tested again in the native:: implementation on the other
side of the dispatcher. This is because you are NOT GUARANTEED
to hit the native:: implementation through the C++ binding,
you may go straight to the implementation via a boxed call.
- If a binding is written in this way, it is only safe to
skip dispatch if you would have returned the same tensor as
before. In any situation you would return a fresh tensor,
you MUST go to the slow path, because you need to actually
get to the autograd kernel.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D25428440
Pulled By: swolchok
fbshipit-source-id: 6e71767cb8d1086d56cd827c1d2d56cac8f6f5fe
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49043
Previously, this function had nontrivial algorithmic content,
but after #48195, this was just a swiss army knife for pasting
together arguments while maintaining structure. I added some
more properties for Arguments for convenient access in this way,
and then inlined the implementation of group_arguments into all of its call
sites, simplifying wherever context allowed. This might be controversial, but I
think the resulting code is easier to understand.
You may notice that there is some modest code duplication between
dispatcher.cpparguments_exprs and CppSignature.argument_packs.
This is a known problem and I will be attempting to fix it in
a follow up PR.
Confirmed to be byte-for-byte compatible.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D25455885
Pulled By: ezyang
fbshipit-source-id: 8fbe066e8c3cb7ee8adb5b87296ec5bd7b49e01f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49042
I want the names positional and kwarg_only to give the unflat
representation (e.g., preserving TensorOptionsArguments in the
returned Union). So I regret my original naming choice when
I moved grouping to model. This renames them to have flat_ prefix
and also adds a flat_non_out argument for cases where you just
want to look at non-out arguments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D25455884
Pulled By: ezyang
fbshipit-source-id: f923f8881267a3e3e8e9521519412f7cc25034fc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49079
BackendSelect kernels have been changed to be written the new way, so this hacky_wrapper here isn't needed anymore.
This PR is not expected to change perf or anything, just to simplify the code a bit. The hacky_wrapper here was a no-op and did not create any actual wrappers,
because it short-circuits to skip creating a wrapper when no wrapper is needed.
ghstack-source-id: 118318436
Test Plan: waitforsandcastle
Reviewed By: bhosmer
Differential Revision: D25421633
fbshipit-source-id: 7a6125613f465dabed155dd892c8be6af5c617cf
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48718
This PR rewrites structured kernels to do the class-based mechanism (instead of defining a meta and impl function, they are methods on a class), and adds enough customizability on the class to support TensorIterator. To show it works, add is made a structured kernel. Don't forget to check https://github.com/pytorch/rfcs/pull/9 for a mostly up-to-date high level description of what's going on here.
High level structure of this PR (the order you should review files):
* TensorMeta.h - TensorMeta is deleted entirely; instead, meta functions will call `set_output` to allocate/resize their outputs. MetaBase gets a new `maybe_get_output` virtual method for retrieving the (possibly non-existent) output tensor in a meta function; this makes it easier to do special promotion behavior, e.g., as in TensorIterator.
* TensorIterator.cpp - Two major changes: first, we add TensorIteratorBase::set_output, which is a "light" version of TensorIterator::set_output; it sets up the internal data structures in TensorIterator, but it doesn't do allocation (that is assumed to have been handled by the structured kernels framework). The control flow here is someone will call the subclassed set_output, which will allocate output, and then we will call the parent class (TensorIteratorBase) to populate the fields in TensorIterator so that other TensorIterator phases can keep track of it. Second, we add some tests for meta tensors, and skip parts of TensorIterator which are not necessary when data is not available.
* tools/codegen/model.py - One new field in native_functions.yaml, structured_inherits. This lets you override the parent class of a structured meta class; normally it's MetaBase, but you can make it point at TensorIteratorBase instead for TensorIterator based kernels
* tools/codegen/gen.py - Now generate all of the classes we promised. It's kind of hairy because this is the first draft. Check the RFC for what the output looks like, and then follow the logic here. There are some complications: I need to continue to generate old style wrapper functions even if an operator is structured, because SparseCPU/SparseCUDA/etc won't actually use structured kernels to start. The most complicated code generation is the instantiation of `set_output`, which by and large replicates the logic in `TensorIterator::set_output`. This will continue to live in codegen for the foreseeable future as we would like to specialize this logic per device.
* aten/src/ATen/native/UpSampleNearest1d.cpp - The previous structured kernel is ported to the new format. The changes are very modest.
* aten/src/ATen/native/BinaryOps.cpp - Add is ported to structured.
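The `structured_inherits` override from the model.py bullet above could look roughly like this as a dataclass field (names and defaults are approximate sketches, not the real tools/codegen/model.py code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class NativeFunctionSketch:
    """Toy stand-in for the NativeFunction model entry."""
    name: str
    structured: bool = False
    # None means the generated meta class inherits from MetaBase;
    # set to "TensorIteratorBase" for TensorIterator-based kernels.
    structured_inherits: Optional[str] = None

    def meta_parent(self) -> str:
        return self.structured_inherits or "MetaBase"

f = NativeFunctionSketch(name="add.out", structured=True,
                         structured_inherits="TensorIteratorBase")
assert f.meta_parent() == "TensorIteratorBase"
```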
TODO:
* Work out an appropriate entry point for static runtime, since native:: function stubs are no longer generated
* Refactor TensorIteratorConfig construction into helper functions, like before
* Make Tensor-Scalar addition structured to fix perf regression
* Fix `verify_api_visibility.cpp`
* Refactor tools/codegen/gen.py for clarity
* Figure out why header changes resulted in undefined reference to `at::Tensor::operator[](long) const`
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D25278031
Pulled By: ezyang
fbshipit-source-id: 57c43a6e5df21929b68964d485995fbbae4d1f7b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47712
This adds a faithful API for ops with out arguments, as described in https://docs.google.com/document/d/1h7nBibRwkRLQ8rsPhfALlwWR0QbkdQm30u4ZBwmaps8/edit# .
After this, an op will generate the following overloads for the C++ API:
```cpp
// Generated from the aten::abs operator (NOT from aten::abs.out)
Tensor at::abs(const Tensor& self)
// Generated from the aten::abs.out operator
Tensor& at::abs(const Tensor& self, Tensor& out)
Tensor& at::abs_out(Tensor& out, const Tensor& self)
```
This is an important step towards making those ops c10-full (it allows VariableType, XLA and other backends to ignore reordering and just call through with the same argument order), but this does not make any of those ops c10-full yet.
It enables the faithful API independent from c10-fullness. That means the API is more consistent with the same API for all ops and making an op c10-full in the future will not trigger future C++ API changes.
ghstack-source-id: 118068091
Test Plan: waitforsandcastle
Reviewed By: ezyang
Differential Revision: D24835252
fbshipit-source-id: dedfabd07140fc8347bbf16ff219aad3b20f2870
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47711
It seems we generated the declaration for all ops, but the definition only for c10-full ops. We should also generate the definition for non-c10-full ops.
This makes future migrations of ops from non-c10-full to c10-full have a lower impact on the C++ API.
ghstack-source-id: 118064755
Test Plan: waitforsandcastle
Reviewed By: bhosmer
Differential Revision: D24835006
fbshipit-source-id: 8f5c3c0ffcdc9b479ca3785d57da16db508795f5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48195
The general approach is to change Arguments, splitting `positional`, `kwarg_only` and `out`, into `pre_self_positional`, `self_arg`, `post_self_positional`, and `pre_tensor_options_kwarg_only`, `tensor_options` and `post_tensor_options_kwarg_only`. The splits are as you'd expect: we extract out the self argument and the tensor options arguments, and record the other arguments that came before and after. To do this, we move the logic in `group_arguments` to the parsing process.
Some fuzz in the process:
* I renamed `ThisArgument` to `SelfArgument`, since we don't actually use the terminology "this" outside of C++ (and the model is Python biased)
* I kept the `group_arguments` function, which now just reads out the arguments from the structured model in the correct order. In the long term, we should get rid of this function entirely, but for now I kept it as is to reduce churn.
* I decided to arbitrarily say that when self is missing, everything goes in "post-self", but when tensor options is missing, everything goes in "pre-tensor-options". This was based on where you typically find the argument in question: self is usually at front (so most args are after it), while tensor options are typically at the end (so most args go before it).
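A toy sketch of the placement convention from the last bullet (hypothetical helpers; the real logic lives in the parsing code):

```python
from typing import List, Optional, Tuple

TENSOR_OPTIONS = ("dtype", "layout", "device", "pin_memory")

def split_self(args: List[str]) -> Tuple[List[str], Optional[str], List[str]]:
    """Split positional args into (pre_self, self, post_self)."""
    if "self" in args:
        i = args.index("self")
        return args[:i], "self", args[i + 1:]
    # self is usually at the front, so when it is missing
    # everything goes in "post-self"
    return [], None, args

def split_tensor_options(kwargs: List[str]) -> Tuple[List[str], List[str], List[str]]:
    """Split kwarg-only args into (pre_options, tensor_options, post_options)."""
    if not any(a in TENSOR_OPTIONS for a in kwargs):
        # tensor options are usually at the end, so when they are missing
        # everything goes in "pre-tensor-options"
        return kwargs, [], []
    i = next(i for i, a in enumerate(kwargs) if a in TENSOR_OPTIONS)
    opts = [a for a in kwargs[i:] if a in TENSOR_OPTIONS]
    post = [a for a in kwargs[i:] if a not in TENSOR_OPTIONS]
    return kwargs[:i], opts, post

assert split_self(["other", "alpha"]) == ([], None, ["other", "alpha"])
assert split_tensor_options(["out"]) == (["out"], [], [])
```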
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: zhangguanheng66
Differential Revision: D25231166
Pulled By: ezyang
fbshipit-source-id: 25d77ad8319c4ce0bba4ad82e451bf536ef823ad
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48182
I'm planning to add a bunch more argument fields following
https://github.com/pytorch/pytorch/pull/45890#discussion_r503646917 and
it will be a lot more convenient if the arguments get to live
in their own dedicated struct. Type checker will tell you if
I've done it wrong. No change to output.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ljk53
Differential Revision: D25057897
Pulled By: ezyang
fbshipit-source-id: dd377181dad6ab0c894d19d83408b7812775a691
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48116
If you port kernels to be structured, you get Meta kernels automatically
generated for you. This is one payoff of structured kernels.
Code generation was mercifully really simple, although at risk of
"swiss cheese" syndrome: there's two new conditionals in the codegen
to tweak behavior when generating for meta keys. It's not too bad
right now but there's a risk of things getting out of hand. One
way to rationalize the logic here would be to transmit "TensorMeta-ness"
inside the TensorOptions (so tensor_from_meta can deal with it); then
the "Meta" kernel magic would literally just be generating empty
out_impls to call after all the scaffolding is done. But I didn't
do this because it seemed like it would be more annoying short term.
Also had to teach resize_ to work on meta tensors, since we use them
to implement the out kernels.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer, ailzhang
Differential Revision: D25056640
Pulled By: ezyang
fbshipit-source-id: f8fcfa0dbb58a94d9b4196748f56e155f83b1521
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48252
Moved to a shared place so that gen_variable_type.py can reuse it.
Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D25087808
Pulled By: ljk53
fbshipit-source-id: 1f32e506956fc4eb08734cfde0add47b3e666bd9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45277
Implements structured kernels as per https://github.com/pytorch/rfcs/pull/9 and ports upsample_nearest1d to use the framework.
The general structure of this diff:
- Define a new syntax for specifying structured kernels in `native_functions.yaml`. You put `structured: True` on the `out` function (that's what you implement) and `structured_delegate: foo.out` on the functional/inplace variants to define them in terms of the `out` function. There's a bunch of new consistency checking to see if you've done this right, though the error messages are of varying quality. This is most of what's going on in tools.codegen.model
- NativeFunctionGroup turns into StructuredNativeFunctions. Previously I thought that maybe we would use this grouping mechanism for both structured and unstructured kernels, but it turned out that Jiakai needed to make his own grouping structure. So now I've specialized it for structured kernels, which also means I get to add a bunch of invariants, like requiring structured kernels to have both a functional and an out variant. This is the lower bundle of changes in tools.codegen.model
- When you make an out kernel structured, this induces us to generate a new meta function signature for you to write shape checking and output allocation code. The signatures of these is defined by `tools.codegen.api.meta` and generated into `MetaFunctions.h`. Coverage here is very bare bones and will be driven by actual operators we port as we go.
- The meaty part of code generation is what we do when we have some grouped StructuredNativeFunctions. We continue to generate a wrapper per function type, but they're a bit different, as they call your meta functions and make reference to the actual implementations in out.
- Then there's a port of `upsample_nearest1d`; easiest to review by just looking at what the final code looks like.
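The new syntax from the first bullet can be sketched like this (argument lists are abbreviated and illustrative, not copied from native_functions.yaml):

```yaml
- func: upsample_nearest1d.out(Tensor self, int[1] output_size, *, Tensor(a!) out) -> Tensor(a!)
  structured: True

- func: upsample_nearest1d(Tensor self, int[1] output_size) -> Tensor
  structured_delegate: upsample_nearest1d.out
```

The `out` entry is the one you implement; the functional variant is defined purely in terms of it via `structured_delegate`.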
Missing pieces:
- Stride calculation in TensorMeta
- Sufficient sanity checking for inplace/out variants
- Enough rope to make TensorIterator work
This PR improves instruction counts on `upsample_nearest1d` because it eliminates an extra redispatch. Testing `at::upsample_nearest1d(x, {10});`
* Functional: before 1314105, after 1150705
* Out: before 915705, after 838405
These numbers may be jittered up to +-16400 (which is the difference when I tested against an unaffected operator `at::upsample_linear1d`), though that may also because unrelated changes affected all operators globally.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D24253555
Test Plan: Imported from OSS
Reviewed By: smessmer
Pulled By: ezyang
fbshipit-source-id: 4ef58dd911991060f13576864c8171f9cc614456
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47011
smessmer has complained about how it is difficult to find generated
code. Well hopefully this diff helps a bit with that.
There are three components to this refactor:
- Rename TypeDerived (CPUType) to RegisterDispatchKey (RegisterCPU).
The 'Type' nomenclature is vestigial and I think Register says
what these files do a lot more clearly. I also got rid of
the CPUType namespace; everything just goes in anonymous
namespace now, less moving parts this way.
- Give Math and DefaultBackend their own files (RegisterMath and
RegisterDefaultBackend)
- Restructure code generation so that schema definition is done
completely separately from RegisterDispatchKey
I decided to name the files RegisterCPU rather than the old convention
BackendSelectRegister, because it seems better to me if these
files clump together in an alphabetical listing rather than being
spread out everywhere. There are a few manual registration files
which should probably get similar renaming.
I also did a little garden cleaning about how we identify if a
dispatch key is a cuda key or a generic key (previously called
KEYWORD_ALL_BACKENDS but I like my naming better).
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D24600806
Test Plan: Imported from OSS
Reviewed By: smessmer
Pulled By: ezyang
fbshipit-source-id: c1b510dd7515bd95e3ad25b8edf961b2fb30a25a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47008
bhosmer has been complaining about how it is difficult to distinguish
between local variables and closed over variables in the higher order
functions. Well, closures and objects do basically the same thing, so
just convert all these HOFs into objects.
The decoder ring:
- Higher order function => Constructor for object
- Access to closed over variable => Access to member variable on object
- with_native_function => method_with_native_function (because it's
hard writing decorators that work for both functions and methods)
I didn't even have to change indentation (much).
When there is no need for closed over variables (a few functions), I
kept them as plain old functions, no need for an object with no
members.
While I was at it, I also deleted the kwargs, since the types are
enough to prevent mistakes.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D24600805
Pulled By: ezyang
fbshipit-source-id: 7e3ce8cb2446e3788f934ddcc17f7da6e9299511
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47002
There was no good reason for TypeDerived.h (CPUType.h) codegen
to exist after static dispatch was deleted, and now that we
have Math alias key TypeDefault.h header is not needed either.
Sorry to anyone who was using these out of tree.
I didn't entirely delete TypeDefault.h as it has a use in
a file that I can't conveniently compile test locally. Will
kill it entirely in a follow up.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D24596583
Pulled By: ezyang
fbshipit-source-id: b5095d3509098ff74f836c5d0c272db0b2d226aa
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46991
This change is motivated by a problem bdhirsh observed which is
that in internal builds that include both SchemaRegister.cpp
and TypeDefault.cpp, some operators have their schemas defined
multiple times. Instead of dumping schema registrations in
multiple files, it seems better to just toggle how many schemas
we write into TypeDefault.cpp.
ljk53 observes that technically SchemaRegister.cpp is only needed by the
full-JIT frontend, and not by the light interpreter (to resolve schema
lookups). However, in practice, the registration file seems to be
unconditionally loaded. This change will make it harder to do the
optimization where we drop schemas in the light interpreter, but you
probably want to architect this differently (similar to per-op
registrations, DON'T do any registrations in ATen, and then write out
the schema registrations in a separate library.)
I took this opportunity to also simplify the TypeDefault generation
logic by reworking things so that we only ever call with None argument
when registering. Soon, we should be able to just split these
files up entirely.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ljk53
Differential Revision: D24593704
Pulled By: ezyang
fbshipit-source-id: f01ea22a3999493da77b6e254d188da0ce9adf2f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46970
Now that catchall declarations are reinterpreted as registrations to
dispatch key Math, we can now simplify code generation logic by directly
generating to Math, and bypassing logic for catchall. This also helps
avoid bugs where we incorrectly classify some kernels as Math and others
as not, even though they get registered in the same way.
Bill of changes:
- Give Math its own unique TORCH_LIBRARY_IMPL
- Make it so NativeFunction.dispatch is always non-None. Simplify
downstream conditionals accordingly
- When parsing NativeFunction, fill in missing dispatch with a
singleton Math entry (pointing to the cpp.name!)
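The "fill in missing dispatch with a singleton Math entry" rule from the last bullet can be sketched in Python (function and field names are approximate, not the real parser):

```python
def fill_dispatch(entry: dict, cpp_name: str) -> dict:
    """If a parsed native_functions.yaml entry has no dispatch table,
    synthesize a singleton Math entry pointing at the cpp name,
    so NativeFunction.dispatch is always non-None downstream."""
    dispatch = entry.get("dispatch")
    if dispatch is None:
        dispatch = {"Math": cpp_name}
    return {**entry, "dispatch": dispatch}

# Missing dispatch becomes a Math catchall...
e = fill_dispatch({"func": "relu(Tensor self) -> Tensor"}, "relu")
assert e["dispatch"] == {"Math": "relu"}

# ...while an explicit dispatch table is left alone.
e2 = fill_dispatch({"func": "f", "dispatch": {"CPU": "f_cpu"}}, "f")
assert e2["dispatch"] == {"CPU": "f_cpu"}
```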
One thing that is a little big about this change is that a lot of kernels
which previously didn't report as "math" now report as math. I picked
a setting for these booleans that made sense to me, but I'm not sure
if e.g. XLA will handle it 100% correctly.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D24592391
Pulled By: ezyang
fbshipit-source-id: 2e3355f19f9525698864312418df08411f30a85d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46938
It turns out that after https://github.com/pytorch/pytorch/pull/42194
landed we no longer actually generate any registrations into this
file. That means it's completely unnecessary.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: IvanKobzarev
Differential Revision: D24573518
Pulled By: ezyang
fbshipit-source-id: b41ada9e394b780f037f5977596a36b896b5648c
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal
Makes them more readable and possibly faster. Care has to be taken because `list(map(f, xs))` builds the whole list immediately, while `(f(x) for x in xs)` is a generator expression which gets evaluated lazily. This is a benefit in some cases where it is not required to actually create the list of values in memory (e.g. when passing to `tuple` or `extend` or `join`)
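For illustration, a minimal demonstration of the eager-vs-lazy distinction:

```python
xs = [1, 2, 3]

# A list comprehension materializes a throwaway list before joining;
# a generator expression streams values straight into join.
joined_list = ", ".join([str(x) for x in xs])
joined_gen = ", ".join(str(x) for x in xs)
assert joined_list == joined_gen == "1, 2, 3"

# Laziness means side effects only run when the generator is consumed.
log = []
gen = (log.append(x) or x for x in xs)
assert log == []          # nothing evaluated yet
assert list(gen) == xs    # consuming the generator runs the appends
assert log == xs
```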
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462
Reviewed By: zou3519
Differential Revision: D24422343
Pulled By: ezyang
fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45722
This diff does a bunch of things:
1. Introduces some abstractions as detailed in https://fb.quip.com/2oEzAR5MKqbD to help with selective build related codegen in multiple files.
2. Adds helper methods to combine operators, debug info, operator lists, etc...
3. Currently, the selective build machinery queries `op_registration_whitelist` directly at various places in the code. `op_registration_whitelist` is a list of allowed operator names (without overload name). We want to move to a world where the overload names are also included so that we can be more selective about which operators we include. To that effect, it makes sense to hide the checking logic in a separate abstraction and have the build use that abstraction instead of putting all this selective build specific logic in the code-generator itself. This change is attempting to do just that.
4. Updates generate_code, unboxing-wrapper codegen, and autograd codegen to accept the operator selector paradigm as opposed to a selected operator list.
5. Update `tools/code_analyzer/gen_op_registration_allowlist.py` to expose providing an actual structured operator dependency graph in addition to a serialized string.
There are a bunch of structural changes as well:
1. `root_op_list.yaml` and `combined_op_list.yaml` are now actual YAML files (not a space separated list of operator names)
2. `generate_code.py` accepts only paths to operator list YAML files (both old style as well as new style) and not list of operator names on the command line as arguments
3. `gen.py` optionally also accepts a custom build related operators YAML path (this file has information about which operators to register in the generated library).
ghstack-source-id: 114578753
(Note: this ignores all push blocking failures!)
Test Plan:
`buck test caffe2/test:selective_build`
Generated YAML files after the change:
{P143981979}
{P143982025}
{P143982056}
Ensure that the generated files are same before and after the change:
```
[dhruvbird@devvm2490 /tmp/TypeDefault.cpp] find -name "*.cpp" | xargs md5sum
d72c3d125baa7b77e4c5581bbc7110d2 ./after_change/gen_aten/TypeDefault.cpp
42353036c83ebc7620a7159235b9647f ./after_change/lite_predictor_lib_aten/TypeDefault.cpp
d72c3d125baa7b77e4c5581bbc7110d2 ./before_change/gen_aten/TypeDefault.cpp
42353036c83ebc7620a7159235b9647f ./before_change/lite_predictor_lib_aten/TypeDefault.cpp
```
`VariableTypes_N.cpp` are generated the same both before and after the change:
```
[dhruvbird@devvm2490 /tmp/VariableType] find -name "*.cpp" | xargs -n 1 md5sum | sort
3be89f63fd098291f01935077a60b677 ./after/VariableType_2.cpp
3be89f63fd098291f01935077a60b677 ./before/VariableType_2.cpp
40a3e59d64e9dbe86024cf314f127fd6 ./after/VariableType_4.cpp
40a3e59d64e9dbe86024cf314f127fd6 ./before/VariableType_4.cpp
a4911699ceda3c3a430f08c64e8243fd ./after/VariableType_1.cpp
a4911699ceda3c3a430f08c64e8243fd ./before/VariableType_1.cpp
ca9aa611fcb2a573a8cba4e269468c99 ./after/VariableType_0.cpp
ca9aa611fcb2a573a8cba4e269468c99 ./before/VariableType_0.cpp
e18f639ed23d802dc4a31cdba40df570 ./after/VariableType_3.cpp
e18f639ed23d802dc4a31cdba40df570 ./before/VariableType_3.cpp
```
Reviewed By: ljk53
Differential Revision: D23837010
fbshipit-source-id: ad06b1756af5be25baa39fd801dfdf09bc565442
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45990
In #45890 we introduced the concept of a CppSignature, which bundled
up all of the information necessary to declare a C++ signature for
the cpp API. This PR introduces analogous concepts for dispatcher
and native: DispatcherSignature and NativeSignature.
The three interfaces are not particularly well coupled right now,
but they do have some duck typing coincidences:
- defn() which renders the C++ definition "bool f(int x)"
- decl() which renders the C++ declaration "bool f(int x = 2)"
- type() which renders the C++ function type "bool(int)"
Maybe at some point we'll introduce a Protocol, or a supertype.
Many other methods (like arguments()) have varying types. These
signatures also have some helper methods that forward back to real
implementations in the api modules. Something to think about is
whether we should attempt to reduce boilerplate here;
I'm not too sure about it yet.
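The duck-typed trio of `defn()`/`decl()`/`type()` could be pinned down with a `typing.Protocol` along these lines (a sketch only; the real signature classes carry more state):

```python
from typing import Protocol

class HasCppSignature(Protocol):
    """Structural interface shared by the signature classes."""
    def defn(self) -> str: ...   # C++ definition, e.g. "bool f(int x)"
    def decl(self) -> str: ...   # C++ declaration, e.g. "bool f(int x = 2)"
    def type(self) -> str: ...   # C++ function type, e.g. "bool(int)"

class FakeSignature:
    """Stands in for CppSignature/DispatcherSignature/NativeSignature."""
    def defn(self) -> str: return "bool f(int x)"
    def decl(self) -> str: return "bool f(int x = 2)"
    def type(self) -> str: return "bool(int)"

def render_header(sig: HasCppSignature) -> str:
    # Works for any object satisfying the protocol, no inheritance needed.
    return sig.decl() + ";"

assert render_header(FakeSignature()) == "bool f(int x = 2);"
```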
The net effect is we get to reduce the number of variables we
have to explicitly write out in the codegen, since now these are all
bundled together into a signature. Something extra special happens
in BackendSelect, where we now dynamically select between dispatcher_sig
and native_sig as "how" the backend select is implemented.
A little bit of extra cleanup:
- Some places where we previously advertised Sequence, we now advertise
a more informative Tuple.
- defn() may take an optional positional parameter overriding the entire
name, or a kwarg-only prefix parameter to just add a prefix to the
name.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D24223100
Pulled By: ezyang
fbshipit-source-id: f985eced08af4a60ba9641d125d0f260f8cda9eb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45975
I reordered declarations in the faithful API reimplementation to
make sure the diffs lined up nicely; they're not necessary now.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D24223102
Pulled By: ezyang
fbshipit-source-id: 77c6ae40c9a3dac36bc184dd6647d6857c63a50c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45974
The term "legacy dispatcher" caused a bunch of confusion between
me and Sebastian when discussing what the intended semantics of
legacy dispatcher argument is. Legacy dispatcher argument implies
that you ought NOT to use it when you have use_c10_dispatcher: full;
but that's not really what's going on; legacy dispatcher API describes
the API that you write native:: functions (NativeFunctions.h) to.
Renaming it here makes this more clear.
I applied these seds:
```
git grep -l 'legacy_dispatcher' | xargs sed -i 's/legacy_dispatcher/native/g'
git grep -l 'legacydispatcher' | xargs sed -i 's/legacydispatcher/native/g'
git grep -l 'LegacyDispatcher' | xargs sed -i 's/LegacyDispatcher/Native/g'
```
and also grepped for "legacy" in tools/codegen and fixed documentation.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D24223101
Pulled By: ezyang
fbshipit-source-id: d1913b8b823b3b95e4546881bc0e876acfa881eb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45918
This groups together related native functions (functional, inplace, out)
into a single group. It's not used by anything but Jiakai said this
would be useful for his stuff so I'm putting it in immediately.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D24163526
Pulled By: ezyang
fbshipit-source-id: 9979b0fe9249c78e4a64a50c5ed0e2ab99f499b9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45890
This rewrite is as per my comments at https://github.com/pytorch/pytorch/pull/44087#issuecomment-701664506
I did the rewrite by reverting #44087 and then reimplementing it on top.
You may find it easier to review by diffing against master with only #44087
reverted.
There are two main ideas.
First, we now factor cpp argument processing into two phases operating
on three representations of data:
1. `FunctionSchema` - this is the source from native_functions.yaml
2. `Union[Argument, ThisArgument, TensorOptionsArgument]` - this is
the arguments after doing some basic semantic analysis to group
them (for TensorOptions) or identify the this argument (if this
is a method). There is only ever one of these per functions.
3. `Union[CppArgument, CppThisArgument, CppTensorOptionsArgument]` -
this is the arguments after we've elaborated them to C++. There
may be multiple of these per actual C++ signature.
You can think of (2) as common processing, whereas (3) bakes in specific
assumptions about whether or not you have a faithful or non-faithful
signature.
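The three representations can be sketched as a toy pipeline (all class and function names here are simplified stand-ins for the real model, not the actual tools/codegen code):

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Argument:          # representation (1): straight from the schema
    name: str
    type: str

@dataclass
class ThisArgument:      # representation (2): `self` identified for methods
    argument: Argument

@dataclass
class CppArgument:       # representation (3): elaborated to C++
    decl: str

def group(args: List[Argument], method: bool) -> List[Union[Argument, ThisArgument]]:
    """Phase 1: basic semantic analysis, shared by all C++ signatures."""
    return [ThisArgument(a) if method and a.name == "self" else a
            for a in args]

def elaborate(a: Union[Argument, ThisArgument]) -> CppArgument:
    """Phase 2: bake in C++-specific rendering decisions."""
    inner = a.argument if isinstance(a, ThisArgument) else a
    return CppArgument(decl=f"const {inner.type}& {inner.name}")

args = [Argument("self", "Tensor"), Argument("other", "Tensor")]
cpp = [elaborate(a) for a in group(args, method=True)]
assert cpp[0].decl == "const Tensor& self"
```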
Second, we now have CppSignature and CppSignatureGroup representing
the *total* public C++ API signature. So those dataclasses are what
know how to render definitions/declarations, and you no longer have
to manually type it out in the Functions/TensorMethods codegen.
Here is an exhaustive accounting of the changes.
tools.codegen.api.types
- CppSignature and CppSignatureGroup got moved to tools.codegen.api.types
- Add new CppThisArgument and CppTensorOptionsArguments (modeled off
of ThisArgument and TensorOptionsArguments) so that we can retain
high level semantic structure even after elaborating terms with C++
API information. Once this is done, we can refine
CppArgument.argument to no longer contain a ThisArgument (ThisArgument
is always translated to CppThisArgument. Note that this doesn't
apply to TensorOptionsArguments, as those may be expanded or not
expanded, and so you could get a single CppArgument for 'options')
- Add no_default() functional mutator to easily remove default arguments
from CppArgument and friends
- Add an explicit_arguments() method to CppArgument and friends to
extract (flat) argument list that must be explicitly written in the signature.
This is everything except (Cpp)ThisArgument, and is also convenient
when you don't care about the extra structure of
CppTensorOptionsArguments
tools.codegen.api.cpp
- group_arguments is back, and it doesn't send things directly to a
CppSignatureGroup; instead, it moves us from representation (1) to (2)
(perhaps it should live in model). Here I changed my mind from my
PR comment; I discovered it was not necessary to do classification at
grouping time, and it was simpler and easier to do it later.
- argument got split into argument_not_this/argument/argument_faithful.
argument and argument_faithful are obvious enough what they do,
and I needed argument_not_this as a more refined version of argument
so that I could get the types to work out on TensorOptionsArguments
tools.codegen.api.dispatcher
- Here we start seeing the payoff. The old version of this code had a
"scatter" mode and a "gather" mode. We don't need that anymore:
cppargument_exprs is 100% type-directed via the passed in cpp
arguments. I am able to write the functions without any reference
to use_c10_dispatcher
tools.codegen.gen
- Instead of having exprs_str and types_str functions, I moved these to
live directly on CppSignature, since it seemed pretty logical.
- The actual codegen for TensorMethods/Functions is greatly simplified,
since (1) all of the heavy lifting is now happening in
CppSignature(Group) construction, and (2) I don't need to proxy one
way or another, the new dispatcher translation code is able to handle
both cases no problem. There is a little faffing about with ordering
to reduce the old and new diff which could be removed afterwards.
Here are codegen diffs. For use_c10_dispatcher: full:
```
+// aten::_cudnn_init_dropout_state(float dropout, bool train, int dropout_seed, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False) -> Tensor
Tensor _cudnn_init_dropout_state(double dropout, bool train, int64_t dropout_seed, const TensorOptions & options) {
- return _cudnn_init_dropout_state(dropout, train, dropout_seed, optTypeMetaToScalarType(options.dtype_opt()), options.layout_opt(), options.device_opt(), options.pinned_memory_opt());
+ static auto op = c10::Dispatcher::singleton()
+ .findSchemaOrThrow("aten::_cudnn_init_dropout_state", "")
+ .typed<Tensor (double, bool, int64_t, c10::optional<ScalarType>, c10::optional<Layout>, c10::optional<Device>, c10::optional<bool>)>();
+ return op.call(dropout, train, dropout_seed, optTypeMetaToScalarType(options.dtype_opt()), options.layout_opt(), options.device_opt(), options.pinned_memory_opt());
}
```
Otherwise:
```
+// aten::empty_meta(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor
Tensor empty_meta(IntArrayRef size, c10::optional<ScalarType> dtype, c10::optional<Layout> layout, c10::optional<Device> device, c10::optional<bool> pin_memory, c10::optional<MemoryFormat> memory_format) {
- return empty_meta(size, TensorOptions().dtype(dtype).layout(layout).device(device).pinned_memory(pin_memory), memory_format);
+ static auto op = c10::Dispatcher::singleton()
+ .findSchemaOrThrow("aten::empty_meta", "")
+ .typed<Tensor (IntArrayRef, const TensorOptions &, c10::optional<MemoryFormat>)>();
+ return op.call(size, TensorOptions().dtype(dtype).layout(layout).device(device).pinned_memory(pin_memory), memory_format);
}
```
Things that I probably did not get right:
- The Union[Argument, TensorOptionsArguments, ThisArgument] and
the Cpp variants are starting to get a little unwieldy. Not sure if
this means I should add a supertype (or at the very least an
alias); in some cases I do purposely omit one of these from the Union
- Code may not necessarily live in the most logical files. There isn't
very much rhyme or reason to it.
- The fields on CppSignature. They're not very well constrained and
it will be better if people don't use them directly.
- Disambiguation. We should do this properly in #44087 and we don't
need special logic for deleting defaulting for faithful signatures;
there is a more general story here.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: smessmer
Differential Revision: D24144035
Pulled By: ezyang
fbshipit-source-id: a185f8bf9df8b44ca5718a7a44dac23cefd11c0a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45742
Add a new flag to native_functions.yaml: `use_c10_dispatcher: hacky_wrapper_for_legacy_signatures`
and the codegen only wraps kernels in the aforementioned wrapper if that flag is set.
Apart from that, `use_c10_dispatcher: hacky_wrapper_for_legacy_signatures` is equivalent to `full`,
i.e. it has full boxing and unboxing support.
This greatly reduces the number of ops we apply the hacky_wrapper to, i.e. all ops marked as `use_c10_dispatcher: full` don't have it anymore.
ghstack-source-id: 113982139
Test Plan:
waitforsandcastle
vs fbcode:
https://www.internalfb.com/intern/fblearner/details/214511705/
vs base diff:
https://www.internalfb.com/intern/fblearner/details/214693207/
Reviewed By: ezyang
Differential Revision: D23328718
fbshipit-source-id: be120579477b3a05f26ca5f75025bfac37617620
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45665
Fixes #43944
Note that the codegen doesn't use a proper parser so, in the same way as with lists, the string `, ` cannot appear in defaults or it will be interpreted as a splitting point between arguments.
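A minimal illustration of that limitation (the helper name is hypothetical; the real splitting happens inside the codegen's schema parsing):

```python
def split_args(arg_list: str):
    """Naive argument splitter on the literal ", " separator."""
    return arg_list.split(", ")

# Fine for ordinary schemas:
assert split_args("Tensor self, int dim=0") == ["Tensor self", "int dim=0"]

# But a default value containing ", " is wrongly split into two "arguments":
assert split_args('str mode="a, b"') == ['str mode="a', 'b"']
```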
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D24141835
Pulled By: ezyang
fbshipit-source-id: 578127861fd2504917f4486c44100491a2c40343
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44087
Each op taking a TensorOptions argument now has an additional overload in the C++ frontend where it takes scattered ScalarType, Layout, Device, bool instead of one TensorOptions argument.
If it is a c10-full op, then the scattered version calls into the dispatcher and the gathered version is a proxy calling into the scattered version.
If it is a non-c10-full op, then the gathered version calls into the dispatcher and the scattered version is a proxy calling into the gathered version.
This should minimize the amount of gathering and scattering needed.
This PR is also a prerequisite to remove the re-gathering of arguments that is currently happening in VariableKernel. Currently, VariableKernels gather arguments into a TensorOptions object
to call into the C++ API. In a PR stacked on top of this, VariableKernel will just directly call into the scattered C++ API introduced here and avoid the gathering step.
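The scattered/gathered proxying pattern can be sketched in Python (the C++ overloads are what the PR actually generates; these names and shapes are hypothetical stand-ins):

```python
# Minimal sketch of the scatter/gather pattern described above: for a
# c10-full op the scattered form is primary and the gathered overload is
# a thin proxy that unpacks its TensorOptions. All names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TensorOptions:
    dtype: Optional[str] = None
    layout: Optional[str] = None
    device: Optional[str] = None
    pin_memory: Optional[bool] = None

def empty_scattered(size, dtype=None, layout=None, device=None, pin_memory=None):
    # For a c10-full op this form would call into the dispatcher directly.
    return {"size": size, "dtype": dtype, "layout": layout,
            "device": device, "pin_memory": pin_memory}

def empty_gathered(size, options: TensorOptions):
    # The gathered overload just scatters the options and forwards,
    # so no intermediate re-gathering is needed along the call path.
    return empty_scattered(size, options.dtype, options.layout,
                           options.device, options.pin_memory)
```

For a non-c10-full op the roles are reversed: the gathered form is primary and the scattered overload is the proxy.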
ghstack-source-id: 113355689
Test Plan:
waitforsandcastle
vs master: https://www.internalfb.com/intern/fblearner/details/216169815/
vs previous diff: https://www.internalfb.com/intern/fblearner/details/216169957/
Reviewed By: ezyang
Differential Revision: D23492188
fbshipit-source-id: 3e84c467545ad9371e98e09075a311bd18411c5a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44062
Previously, BackendSelect kernels were still written in the legacy way, i.e. they took one TensorOptions argument instead of scattered dtype, layout, device, pin_memory, and they used hacky_wrapper to be callable. This caused a re-wrapping step: calling into a BackendSelect kernel required taking the individual scattered arguments and packing them into a TensorOptions, and the kernel itself then scattered them again for redispatch.
Now with this PR, BackendSelect kernels are written in the new way and no hacky_wrapper or rewrapping is needed for them.
ghstack-source-id: 112825789
Test Plan:
vs master: https://www.internalfb.com/intern/fblearner/details/216117032/
vs previous diff: https://www.internalfb.com/intern/fblearner/details/216170194/
Reviewed By: ezyang
Differential Revision: D23484192
fbshipit-source-id: e8fb49c4692404b6b775d18548b990c4cdddbada
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45134
Per-Op-Registration was a mechanism used for mobile selective build v0. Since then, a new dispatching mechanism has been built for PyTorch, and this code path isn't used any more. Remove it to simplify understanding/updating the code-generator's code-flow.
ghstack-source-id: 112723942
Test Plan: `buck build` and sandcastle.
Reviewed By: ezyang
Differential Revision: D23806632
fbshipit-source-id: d93cd324650c541d9bfc8eeff2ddb2833b988ecc
Summary:
This PR moves `DispatchKey::Autograd` to an alias dispatch key mapping to `AutogradCPU, AutogradCUDA, AutogradXLA, AutogradOther, AutogradPrivate*` keys.
A few things are handled in this PR:
- Update alias dispatch key mapping and precompute dispatchTable logic
- Move `Autograd` key from `always_included` set to TensorImpl constructor.
- Update `dummyTensor` constructor to take `requires_grad` as optional argument so that it's closer to the real application in op_registration_test.
- Use `BackendSelect` key for both backend select before and after autograd layer. (1 liner in backend_select codegen)
A few planned followups ordered by priority:
- [cleanup] Update `test_dispatch.py` to include testing `Autograd`.
- [cleanup] Add Math alias key and move catchAll to Math. (to remove 2.2 in `computeDispatchTableEntryWithDebug`)
- [new feature] Add support for Math in native_functions.yaml
- [cleanup] Add iterator like functionality to DispatchKeySet
- [cleanup/large] Only add Autograd backend keys when tensor requires grad. (cc: ljk53 ?)
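The alias-key expansion at the heart of this change can be sketched as a small table-building step (a hypothetical Python model of the C++ precompute logic; the key names follow the summary, but the function and table are illustrative):

```python
# Sketch of expanding the Autograd alias dispatch key into its concrete
# per-backend autograd keys when precomputing a dispatch table. The
# mapping below mirrors the keys listed above; names are illustrative.

AUTOGRAD_ALIAS_EXPANSION = {
    "Autograd": ["AutogradCPU", "AutogradCUDA", "AutogradXLA",
                 "AutogradOther", "AutogradPrivateUse1"],
}

def expand_alias_keys(keys):
    expanded = []
    for k in keys:
        # Non-alias keys pass through unchanged; alias keys fan out.
        expanded.extend(AUTOGRAD_ALIAS_EXPANSION.get(k, [k]))
    return expanded

keys = expand_alias_keys(["CPU", "Autograd"])
assert "Autograd" not in keys          # the alias itself never lands in the table
assert "AutogradCPU" in keys
```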
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43070
Reviewed By: ezyang
Differential Revision: D23281535
Pulled By: ailzhang
fbshipit-source-id: 9ad00b17142e9b83304f63cf599f785500f28f71
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42629
How to approach reviewing this diff:
- The new codegen itself lives in `tools/codegen`. Start with `gen.py`, then read `model.py` and then the `api/` folder. The comments at the top of the files describe what is going on. The CLI interface of the new codegen is similar to the old one, but (1) it is no longer necessary to explicitly specify cwrap inputs (and now we will error if you do so) and (2) the default settings for source and install dir are much better; to the extent that if you run the codegen from the root source directory as just `python -m tools.codegen.gen`, something reasonable will happen.
- The old codegen is (nearly) entirely deleted; every Python file in `aten/src/ATen` was deleted except for `common_with_cwrap.py`, which now permanently finds its home in `tools/shared/cwrap_common.py` (previously cmake copied the file there), and `code_template.py`, which now lives in `tools/codegen/code_template.py`. We remove the copying logic for `common_with_cwrap.py`.
- All of the inputs to the old codegen are deleted.
- Build rules now have to be adjusted to not refer to files that no longer exist, and to abide by the (slightly modified) CLI.
- LegacyTHFunctions files have been generated and checked in. We expect these to be deleted as these final functions get ported to ATen. The deletion process is straightforward; simply delete each function as you port it. There are 39 more functions left to port.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D23183978
Pulled By: ezyang
fbshipit-source-id: 6073ba432ad182c7284a97147b05f0574a02f763