Commit Graph

62 Commits

Author SHA1 Message Date
Ailing Zhang
7cb8d68ae1 Rename XLAPreAutograd to AutogradXLA. (#43047)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43047

Reviewed By: ezyang

Differential Revision: D23134326

Pulled By: ailzhang

fbshipit-source-id: 5fcbc23755daa8a28f9b03af6aeb3ea0603b5c9a
2020-08-17 10:47:43 -07:00
Edward Yang
840ad94ef5 Add reference documentation for torch/library.h (#41470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41470

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D22577426

Pulled By: ezyang

fbshipit-source-id: 4bfe5806061e74181a74d161c868acb7c1ecd1e4
2020-07-17 10:05:16 -07:00
Sebastian Messmer
86b1afa039 Assert that kernels are called with the right signature (#40251)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40251

Rather than segfaulting, we should show a good error message when the Return type or Args types in op.call<Return, Args...>(...) don't match the kernel's signature.

This adds an assertion comparing two std::type_index values to the call path, but that should be fast. Hashing the function signature is also in the call path and not strictly constexpr, but I checked on godbolt that GCC >= 5 and Clang >= 3.8 optimize it away and make it constexpr, i.e. it's not part of the assembly.
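
For illustration, a minimal standalone sketch of the idea (KernelEntry and callOp are made-up names, not the actual dispatcher types; the real code records the signature at kernel registration time):

```
// Hypothetical sketch: a type-erased kernel entry that remembers its
// signature so calls with the wrong Return/Args types fail loudly.
#include <stdexcept>
#include <typeindex>
#include <typeinfo>
#include <utility>

struct KernelEntry {
  void* fn;                  // type-erased pointer to the kernel
  std::type_index signature; // typeid of the kernel's Return(Args...)
};

template <class Return, class... Args>
Return callOp(const KernelEntry& entry, Args... args) {
  // The assertion added to the call path: compare the caller's expected
  // signature against the one recorded for the kernel, and report a good
  // error instead of segfaulting on a mismatch.
  if (entry.signature != std::type_index(typeid(Return(Args...)))) {
    throw std::runtime_error("Kernel was called with the wrong signature");
  }
  auto* typed = reinterpret_cast<Return (*)(Args...)>(entry.fn);
  return typed(std::forward<Args>(args)...);
}

int add(int a, int b) { return a + b; }

int main() {
  KernelEntry entry{reinterpret_cast<void*>(&add),
                    std::type_index(typeid(int(int, int)))};
  int three = callOp<int>(entry, 1, 2);  // signatures match: returns 3
  // callOp<double>(entry, 1.0, 2.0);    // would throw instead of segfaulting
  return three == 3 ? 0 : 1;
}
```
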
ghstack-source-id: 106194240

Test Plan: waitforsandcastle

Differential Revision: D22126701

fbshipit-source-id: 6c908a822e295757bcc0014f78f51e6a560f221f
2020-06-18 21:54:05 -07:00
Sebastian Messmer
cb8b2f0636 Revert D21534052: Assert that kernels are called with the right signature
Test Plan: revert-hammer

Differential Revision: D21534052

Original commit changeset: 6be436a3f205

fbshipit-source-id: a149c5ca7f9e78947ae3059ac4470712f291660b
2020-06-18 15:00:13 -07:00
Sebastian Messmer
55cdd31bd0 Assert that kernels are called with the right signature (#38361)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38361

Rather than segfaulting, we should show a good error message when the Return type or Args types in op.call<Return, Args...>(...) don't match the kernel's signature.

This adds an assertion comparing two std::type_index values to the call path, but that should be fast. Hashing the function signature is also in the call path and not strictly constexpr, but I checked on godbolt that GCC >= 5 and Clang >= 3.8 optimize it away and make it constexpr, i.e. it's not part of the assembly.

supersedes D17485438

ghstack-source-id: 106178820

Test Plan: waitforsandcastle

Differential Revision: D21534052

fbshipit-source-id: 6be436a3f20586277a051d764af29e21d5567da0
2020-06-18 14:22:48 -07:00
Sebastian Messmer
f69b72c738 Back out "Revert D21986243: TORCH_FN" (#40110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40110

Original commit changeset: 72c690c2b4c2
ghstack-source-id: 105993222

Test Plan: waitforsandcastle

Differential Revision: D22072829

fbshipit-source-id: 0bc1a3e389e2afb05688c472793d34eaddb67f2a
2020-06-16 13:38:29 -07:00
Mike Ruberry
8939849f72 Revert D21986243: TORCH_FN
Test Plan: revert-hammer

Differential Revision: D21986243

Original commit changeset: a123571c18aa

fbshipit-source-id: 72c690c2b4c2fc39e1c9192d1c410f49bb4077a5
2020-06-16 04:43:46 -07:00
Sebastian Messmer
12cb80b5b8 TORCH_FN (#39823)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39823

Add a compile-time function pointer that can be used to pass function pointers as template arguments.
This is very useful for metaprogramming function wrappers.
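
As a rough sketch of the pattern this enables (the names below are illustrative, not the actual c10 implementation):

```
// Hypothetical sketch: wrap a function pointer in a type so that it can
// travel through template arguments and be recovered at compile time.
#include <iostream>

template <class FuncType, FuncType* func_ptr>
struct CompileTimeFnPtr {
  static constexpr FuncType* func() { return func_ptr; }
};

// Convenience macro in the spirit of TORCH_FN: captures both the function's
// type and its address as template arguments.
#define MAKE_FN(fn) CompileTimeFnPtr<decltype(fn), &fn>()

// A metaprogrammed wrapper: logs before delegating to the wrapped function.
// Because the pointer is a template argument, the compiler can inline it.
template <class CompileTimeFn, class... Args>
auto logged_call(CompileTimeFn, Args... args) {
  std::cout << "calling wrapped function\n";
  return CompileTimeFn::func()(args...);
}

int add(int a, int b) { return a + b; }

int main() {
  std::cout << logged_call(MAKE_FN(add), 1, 2) << "\n";  // prints 3
  return 0;
}
```
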
ghstack-source-id: 105944072

Test Plan: waitforsandcastle

Differential Revision: D21986243

fbshipit-source-id: a123571c18aa0e65908cbb131f28922ceb59061c
2020-06-16 03:08:08 -07:00
Edward Yang
2b6a48e962 Remove supports_named_tensor from codegen entirely. (#38739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38739

Instead of codegenning the named tensor support checks into
CPUType/CUDAType, we add a new dispatch key that is put into
a tensor whenever it has names.  By default, the fallback
implementation says that named tensors are not supported, but
if they are supported, we register a fallthrough which lets
us through to the true backend implementation.

There are a bunch of small pieces which are necessary to make this
happen:

- NameMode now also excludes DispatchKey::Named from the dispatch set
- To avoid bad error messages, we add a teensy special case to
  the dispatcher for named_not_supported_kernel: if the boxed
  kernel we need to invoke from unboxed is this kernel, and we
  don't support boxing, but the kernel is known not to need
  boxing, we just pass in nullptr for the stack.
  The special case here is very nice: it doesn't affect the fast
  path and only gets exercised when things are not supported.
- I need to add support for per-operator fallthrough registration.
  This is done similarly to how we support the fallthrough fallback,
  by just keeping track of whether the registered kernel for an
  operator is a fallthrough.

It is possible we could go even further down this path, and move
the named tensor logic itself into this key.  I leave this
up to future work.
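
A sketch of how supported operators can opt in under this scheme, assuming the TORCH_LIBRARY_IMPL / CppFunction registration API (the key name and registration sites follow the description above, not code copied from this diff):

```
#include <torch/library.h>

// Operators that do support named tensors register a fallthrough on the
// Named key, so dispatch skips the "named tensors are not supported"
// fallback and proceeds to the true backend kernel. This relies on the
// per-operator fallthrough registration added in this change.
TORCH_LIBRARY_IMPL(aten, Named, m) {
  m.impl("add.Tensor", torch::CppFunction::makeFallthrough());
  m.impl("sum", torch::CppFunction::makeFallthrough());
}
```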

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21662643

Pulled By: ezyang

fbshipit-source-id: 5bc6ae14a1f600189bd8bf865f74dd1700d932f7
2020-06-01 13:09:08 -07:00
Edward Yang
a894fff265 Back out "Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API"
Summary: Original commit changeset: 636e8a11afc6

Test Plan: export to OSS

Reviewed By: malfet

Differential Revision: D21170502

fbshipit-source-id: e8f35f103c4924aedbcaaf868475008d24bdeeab
2020-04-22 09:18:23 -07:00
James Reed
2ccdc39dce Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API
Test Plan: revert-hammer

Differential Revision: D21089648

Original commit changeset: 8d54329c1252

fbshipit-source-id: 636e8a11afc628a4cdae9d44824985c10c70555e
2020-04-21 12:21:45 -07:00
Edward Yang
01100cb477 Put TORCH_LIBRARY in torch/library.h; add custom class API (#36742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36742

Now, you can define a custom class inside a TORCH_LIBRARY block.
It looks very similar to what you did before.  Instead of

```
static auto m = torch::class_<Class>("Namespace", "Class").def("foo", foo);
```

you write

```
TORCH_LIBRARY(Namespace, m) {
  m.class_<Class>("Class")
    .def("foo", foo);
}
```

All the old usages still work, but at some point, once we're ready to
go 100% live with the new pybind11-style API, we should start updating
the tutorials.
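
For a self-contained sketch of the new style with a made-up class (the class and method names here are hypothetical; custom classes derive from torch::CustomClassHolder as in the custom class tutorial):

```
#include <torch/custom_class.h>
#include <torch/library.h>

// A made-up custom class, held by intrusive_ptr via CustomClassHolder.
struct Counter : torch::CustomClassHolder {
  int64_t value_;
  explicit Counter(int64_t start) : value_(start) {}
  int64_t increment() { return ++value_; }
  int64_t current() { return value_; }
};

TORCH_LIBRARY(my_ops, m) {
  m.class_<Counter>("Counter")
      .def(torch::init<int64_t>())
      .def("increment", &Counter::increment)
      .def("current", &Counter::current);
}
```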

The custom class API previously lived in the torch/ folder and in the torch
namespace, so for consistency, the new TORCH_LIBRARY also got moved to
torch/library.h. The definition of Library::class_ is at the bottom of that
header because I need all of the class_ constructors available, but there is
a circular dependency between the two headers.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D21089648

Test Plan: Imported from OSS

Pulled By: ezyang

fbshipit-source-id: 8d54329c125242605336c22fa1642aae6940b507
2020-04-21 10:05:21 -07:00