Commit Graph

13 Commits

Sherlock Huang
8b6a78f39f Python Interface for Jiterator
This PR allows users to author CUDA kernels in Python.

```
import torch
from torch.cuda.jiterator import create_jit_fn

# The kernel is authored as a C++ template string and compiled on first use.
code_string = "template <typename T> T my_kernel(T x, T y, T alpha) { return -x * y + x - y + alpha; }"
# Keyword arguments (here: alpha) must be pre-declared with default values.
jitted_fn = create_jit_fn(code_string, alpha=0)

a = torch.rand(3, device='cuda')
b = torch.rand(3, device='cuda')
# Pre-declared kwargs can be overridden at call time.
result = jitted_fn(a, b, alpha=1.0)
```

Limitations:
- Only supports elementwise kernels
- 1~8 tensor inputs (zero-input kernels, e.g. factory methods, are not supported)
- input tensors must live on a CUDA device
- CPU Scalars are not supported
- kwargs must be pre-declared when calling create_jit_fn
- kwargs must be convertible to at::Scalar, i.e. one of float64, int64_t, or bool (complex is not supported for now)

TODOs:
- [x] consolidate union and c10::variant implementation
- [x] plug into existing op testing framework
- [ ] rename files, place files in the right folder
- [ ] place util functions in the right file
- [x] enforce assumptions in the Python interface, e.g. <8 inputs, kwargs types
- [x] Add user-facing documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76394
Approved by: https://github.com/mruberry
2022-05-06 18:44:28 +00:00
Scott Wolchok
eca4f14b6c [PyTorch] Add C10_ prefix to MPARK_* macros in variant.h (#65589)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65589

Without this prefix, the include guards interfere with attempts to indirectly include both c10::variant and the original mpark variant in the same translation unit.
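
A minimal sketch of the clash (the guard names below are assumed from the upstream mpark/variant header, not copied from the diff):

```
// Hypothetical reconstruction; exact guard names are assumed.
// c10/util/variant.h (the vendored copy), before this change:
#ifndef MPARK_VARIANT_HPP
#define MPARK_VARIANT_HPP
// ...c10's copy of the variant implementation...
#endif

// If the same translation unit later does `#include <mpark/variant.hpp>`,
// the original header's own `#ifndef MPARK_VARIANT_HPP` is now false and
// its contents are silently skipped. Prefixing the vendored guards fixes it:
#ifndef C10_MPARK_VARIANT_HPP
#define C10_MPARK_VARIANT_HPP
// ...c10's copy, guarded independently of the upstream header...
#endif
```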
ghstack-source-id: 138901838

Test Plan: Temporarily `#include <c10/util/variant.h>` in ivalue.h and buck build //data_preproc/preproc:preproc_adapter_utils mode/no-gpu -- this delayed D31101962 (01720d6a23) from fixing S244170

Reviewed By: bhosmer

Differential Revision: D31159414

fbshipit-source-id: 234c5ed37ca853702bcdf3263e4f185b95ac1d08
2021-09-24 12:57:26 -07:00
Joel Schlosser
ee482edf0a Callable activation function support for Transformer modules (C++) (#62342)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60747

Enhances the C++ versions of `Transformer`, `TransformerEncoderLayer`, and `TransformerDecoderLayer` to support callables as their activation functions. The old way of specifying activation function still works as well.
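
A sketch of the two styles, assuming the options API follows the usual torch::nn builder pattern (the exact sizes and callable here are illustrative):

```
#include <torch/torch.h>

int main() {
  using namespace torch::nn;

  // Old way: enum-based activation, still supported.
  TransformerEncoderLayer enum_layer(
      TransformerEncoderLayerOptions(/*d_model=*/512, /*nhead=*/8)
          .activation(torch::kGELU));

  // New way: any callable mapping Tensor -> Tensor.
  TransformerEncoderLayer callable_layer(
      TransformerEncoderLayerOptions(/*d_model=*/512, /*nhead=*/8)
          .activation([](const torch::Tensor& t) { return torch::gelu(t); }));
}
```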

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62342

Reviewed By: malfet

Differential Revision: D30022592

Pulled By: jbschlosser

fbshipit-source-id: d3c62410b84b1bd8c5ed3a1b3a3cce55608390c4
2021-08-02 08:06:39 -07:00
Richard Barnes
ee44d73e59 Modernize override (#61744)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61744
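
For context, the codemod swaps redundant `virtual` for the C++11 `override` specifier, along these lines (the types below are illustrative):

```
struct Base {
  virtual void run() const;
  virtual ~Base() = default;
};

// Before: intent is implicit, and a signature mismatch (e.g. a dropped
// `const`) silently declares a brand-new virtual instead of overriding.
// struct Derived : Base { virtual void run() const; };

// After: `override` documents the intent and makes mismatches a
// compile-time error.
struct Derived : Base {
  void run() const override;
};
```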

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D29717320

fbshipit-source-id: 6eea4295ee2e5572ab337620be412376fcc2f3cc
2021-07-23 23:04:46 -07:00
Andres Suarez
5455df2b99 [codemod][dirsync] Apply clang-format
Test Plan: Sandcastle and visual inspection.

Reviewed By: igorsugak

Differential Revision: D28477071

fbshipit-source-id: e844e0fad2f5599fd27e0fd113a328031cb63aa7
2021-05-20 21:23:24 -07:00
Scott Wolchok
44cc873fba [PyTorch] Autoformat c10 (#56830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56830

Opt into formatting on GitHub and format everything. This is a trial run before turning on formatting for more of the codebase, and eventually all of it.

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D27979080

fbshipit-source-id: a80f0c48691c08ae8ca0af06377b87e6a2351151
2021-04-30 21:23:28 -07:00
root
ab14375b08 Workaround for CUDA10.2.89 CUDA extension compilation error (#33230)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/33203
PR based on https://github.com/mpark/variant/pull/73

Verified locally on CUDA10.2.89 and 10.1.243

Thanks ngimel for the hint and gridley for the initial fix in the variant repo! :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33230

Differential Revision: D19858083

Pulled By: ngimel

fbshipit-source-id: b9438084f5688712c6aa6b17813c68ccde237bbb
2020-02-12 14:23:30 -08:00
Will Feng
aad5071206 Use torch::variant for enums in C++ API
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26837
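
A hedged sketch of the pattern (the tag types below are illustrative stand-ins, not the actual torch::enumtype definitions):

```
#include <c10/util/variant.h>
#include <string>

// Illustrative stand-ins for the enum tag types.
struct ReLUTag {};
struct GELUTag {};
using Activation = c10::variant<ReLUTag, GELUTag>;

std::string activation_name(const Activation& act) {
  // Dispatch on whichever alternative is active, as an options struct might.
  if (c10::holds_alternative<ReLUTag>(act)) {
    return "relu";
  }
  return "gelu";
}
```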

Test Plan: Imported from OSS

Differential Revision: D17579438

Pulled By: yf225

fbshipit-source-id: 9ac59df28a317fdb3be2cc02c65962ad99117127
2019-10-16 22:40:57 -07:00
Will Feng
1b385e7e5f Add std::variant backport (mpark) as c10::variant, with gcc 7.3.1 fix (#27575)
Summary:
This is the same as https://github.com/pytorch/pytorch/pull/26836 with workarounds for gcc 7.3.1 bug in light of https://github.com/pytorch/pytorch/pull/27277#issue-324044466. The workaround also limits the use cases of `c10::variant`, but it is sufficient for our (simple) use case.
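
A minimal usage sketch, assuming the backport mirrors the std::variant surface under the c10 namespace:

```
#include <c10/util/variant.h>
#include <iostream>

int main() {
  // std::variant-style usage without requiring C++17's <variant>.
  c10::variant<int, double> v = 42;
  std::cout << c10::get<int>(v) << "\n"; // prints 42

  v = 3.14;
  // visit applies a callable to whichever alternative is active.
  c10::visit([](auto&& x) { std::cout << x << "\n"; }, v);
  return 0;
}
```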
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27575

Differential Revision: D17834410

Pulled By: yf225

fbshipit-source-id: e8f3c0be2904ec3d2975cbb80af237a5c9d0cb92
2019-10-09 12:10:39 -07:00
Will Feng
7e95b89980 Revert "Add std::variant backport as c10::variant (#26836)" (#27277)
Summary:
This reverts commit 0cd188035a.

As reported by jerryzh168 and pritamdamania87, mpark::variant doesn't compile with gcc 7.3.1 on the fb devserver and throws an error similar to https://github.com/mpark/variant/issues/43. (However, it doesn't fail with gcc 7.3.1 in OSS CI, based on https://circleci.com/api/v1.1/project/github/pytorch/pytorch/2995606/output/107/0?file=true)
A plausible workaround is to upgrade the devserver to devtoolset-8, but that would in turn cause the CUDA build to complain:
```
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/host_config.h:119:2: error: #error -- unsupported GNU version! gcc versions later than 7 are not supported!
 #error -- unsupported GNU version! gcc versions later than 7 are not supported!
```
(Thanks pritamdamania87 for the report!)

The solution for now is to revert the mpark::variant addition, and I will find alternatives that work with gcc 7.3.1 on the fb devserver.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27277

Differential Revision: D17739804

fbshipit-source-id: ad945b3d86ab7ddbff58f4ecab95e0e1ac725ae9
2019-10-03 09:33:48 -07:00
Will Feng
0cd188035a Add std::variant backport as c10::variant (#26836)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26836

* **#26836 Add std::variant backport as c10::variant**

Test Plan: Imported from OSS

Differential Revision: D17649064

Pulled By: yf225

fbshipit-source-id: aa5ee26fe7078cc66d03663b9ff9e998e1d5839a
2019-09-27 20:53:29 -07:00
Karl Ostmo
baa227b410 Revert D17579439: Add std::variant backport as torch::variant
Test Plan: revert-hammer

Differential Revision:
D17579439

Original commit changeset: 6416521047f5

fbshipit-source-id: 0a57bef5d1d2d5366f84fcfa52b3968e01802164
2019-09-27 14:31:50 -07:00
Will Feng
71011211c1 Add std::variant backport as torch::variant
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26836

Test Plan: Imported from OSS

Differential Revision: D17579439

Pulled By: yf225

fbshipit-source-id: 6416521047f5b93c01514e3cd153c9abc3ad3417
2019-09-27 12:44:13 -07:00