Commit Graph

66 Commits

Sebastian Messmer
02f794b102 Add overload names to native_functions.yaml (#23532)
Summary:
We need this to be able to register them with the c10 dispatcher.

The overload names are based on a one-letter-per-argument-type scheme.

Script used to change native_functions.yaml and derivatives.yaml: P75630718
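
To illustrate the idea, a toy sketch of such a naming scheme (the letter mapping below is invented for demonstration; the real one is defined by the script above):

```python
# Invented mapping, for illustration only.
TYPE_LETTERS = {"Tensor": "t", "Scalar": "s", "int": "i", "bool": "b"}

def overload_name(arg_types):
    """Build an overload name from one letter per argument type."""
    return "".join(TYPE_LETTERS.get(t, "x") for t in arg_types)

print(overload_name(["Tensor", "Scalar"]))  # -> "ts"
```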

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23532
ghstack-source-id: 87539687

Differential Revision: D16553437

fbshipit-source-id: a1d0f10c42d284eba07e2a40641f71baa4f82ecf
2019-08-01 02:08:37 -07:00
Roy Li
0a04513367 Remove old Type based backend extensions (#22009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22009
ghimport-source-id: e481b64707434a1abdc382fd80bd70f165540711

Test Plan: Imported from OSS

Differential Revision: D15914755

Pulled By: li-roy

fbshipit-source-id: 9230b8b234f71a5d865bf6bca93347c68c349ff7
2019-07-30 14:07:46 -07:00
Brian Vaughan
97a604ef57 Re-reapply optional ScalarType interface changes that were reverted in D16079809 (#22456)
Summary:
re-apply changes reverted in:
https://github.com/pytorch/pytorch/pull/22412

Also change log_softmax to take positional arguments. Long-term we do want the kwarg-only interface, but it currently seems to be incompatible with JIT serialization.
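
A minimal sketch of the two call styles in eager mode:

```python
import torch

x = torch.randn(3, 4)
y_pos = torch.log_softmax(x, 1)     # positional form, JIT-serialization friendly
y_kw = torch.log_softmax(x, dim=1)  # keyword form still works in eager mode
```
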
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22456

Differential Revision: D16097159

Pulled By: nairbv

fbshipit-source-id: 8cb73e9ca18fc66b35b873cf4a574b167a578b3d
2019-07-03 20:03:25 -07:00
Wanchao Liang
dff2c07183 Manual revert of D16012838
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22412

Reviewed By: nairbv, houseroad

Differential Revision: D16079809

fbshipit-source-id: ee0d805ff7a2bc5f98bcc65f90b8199751c840f6
2019-07-01 19:58:21 -07:00
Roy Li
2a698682e4 Remove Type dispatch (#21964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21964
ghimport-source-id: fdfb555ac4efbf31ae7d2c700a5aa44ad0cc4d7f

Test Plan: Imported from OSS

Differential Revision: D15897424

Pulled By: li-roy

fbshipit-source-id: 3cd6744254e34d70e6875ffde749b5cf959b663c
2019-06-30 04:11:35 -07:00
Roy Li
6c454ff14c Stop using Type in Python bindings (#21963)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21963
ghimport-source-id: 4d9d66ba2c8587503d892b67f535cc2a62e2d19e

Test Plan: Imported from OSS

Differential Revision: D15897423

Pulled By: li-roy

fbshipit-source-id: 2dd55ceb80971df7c86545b7bfff733387f13572
2019-06-30 04:11:32 -07:00
Brian Vaughan
7707dee761 Re apply optional ScalarType changes (#22237)
Summary:
This is (mostly) the re-application of:
https://github.com/pytorch/pytorch/pull/21088

which was reverted due to an issue conflicting with changes in:
https://github.com/pytorch/pytorch/pull/22104
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22237

Differential Revision: D16012838

Pulled By: nairbv

fbshipit-source-id: 35f4a73c97ab68b4e2648aca96b2176f07b5a883
2019-06-26 13:36:25 -07:00
Vitaly Fedyunin
516c7e4456 Adding memory_format to empty and empty_like operators (#20558)
Summary:
Original RFC https://github.com/pytorch/pytorch/issues/19092

To ensure that we are not introducing a BC-breaking change, `empty_like` returns a contiguous tensor by default.

```python
import torch

N, C, H, W = 2, 3, 4, 5
nCwh = torch.randn(N, C, H, W)
nhwC = nCwh.contiguous(memory_format=torch.channels_last)

new_nCwh = torch.empty_like(nhwC)
assert not new_nCwh.is_contiguous(memory_format=torch.channels_last)
```

Now we need a way to preserve memory format in `empty_like`

```python
import torch

N, C, H, W = 2, 3, 4, 5
nCwh = torch.randn(N, C, H, W)
nhwC = nCwh.contiguous(memory_format=torch.channels_last)

new_nhwC = torch.empty_like(nhwC, memory_format=torch.preserve_format)
assert new_nhwC.is_contiguous(memory_format=torch.channels_last)

like_nCwh = torch.empty_like(nCwh, memory_format=torch.preserve_format)
assert not like_nCwh.is_contiguous(memory_format=torch.channels_last)
```

Usage of `torch.preserve_format` allows us to avoid `if` constructs.
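
A small sketch of the branch this avoids:

```python
import torch

src = torch.randn(2, 3, 4, 5).contiguous(memory_format=torch.channels_last)

# Without preserve_format, callers would need an explicit branch:
if src.is_contiguous(memory_format=torch.channels_last):
    out = torch.empty_like(src, memory_format=torch.channels_last)
else:
    out = torch.empty_like(src)

# With preserve_format, this collapses to a single call:
out = torch.empty_like(src, memory_format=torch.preserve_format)
```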

We can also generate outputs in a different memory format:

```python
import torch

N, C, H, W = 2, 3, 4, 5
nCwh = torch.randn(N, C, H, W)
nhwC = nCwh.contiguous(memory_format=torch.channels_last)

new_nhwC = torch.empty_like(nCwh, memory_format=torch.channels_last)
assert new_nhwC.is_contiguous(memory_format=torch.channels_last)

new_nCwh = torch.empty_like(nhwC, memory_format=torch.contiguous_format)
assert not new_nCwh.is_contiguous(memory_format=torch.channels_last)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20558

Differential Revision: D15502474

Pulled By: VitalyFedyunin

fbshipit-source-id: 2e120d57eefad6fb8e04b8322c79871392f64331
2019-06-26 11:48:27 -07:00
Michael Suo
e016a424ef Revert D15944971: [pytorch][PR] merge interfaces that have an optional scalartype parameter
Differential Revision:
D15944971

Original commit changeset: 53473c370813

fbshipit-source-id: a18158b448cb8993b12e1a3bf2c2a3e0d6df6b10
2019-06-24 09:41:33 -07:00
Brian Vaughan
142361a7e4 merge interfaces that have an optional scalartype parameter (#21088)
Summary:
This change is backwards incompatible in *C++ only* on mean(), sum(), and prod() interfaces that accepted either of:
```
Tensor sum(IntArrayRef dim, bool keepdim=false) const;
Tensor sum(IntArrayRef dim, ScalarType dtype) const;
```
but now, specifying both dim and dtype requires passing the keepdim parameter:
```
Tensor sum(IntArrayRef dim, bool keepdim=false, c10::optional<ScalarType> dtype=c10::nullopt) const;
```
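
For reference (the BC break above is C++-only), the analogous Python call already allows combining these arguments:

```python
import torch

t = torch.randn(2, 3)
s = t.sum(dim=1, keepdim=True, dtype=torch.float64)  # dim + keepdim + dtype together
```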

[xla ci]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21088

Reviewed By: ailzhang

Differential Revision: D15944971

Pulled By: nairbv

fbshipit-source-id: 53473c370813d9470b190aa82764d0aea767ed74
2019-06-24 07:17:58 -07:00
Roy Li
3d44cd6d19 Replace Type dispatch with ATenDispatch (#22008)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22008
ghimport-source-id: b0a5cc3da283b195f88636e2a61939d2facd11d9

Test Plan: Imported from OSS

Differential Revision: D15914756

Pulled By: li-roy

fbshipit-source-id: 5bc300ec525a3ee9e6491dd4c55e78bbd977d691
2019-06-19 21:42:54 -07:00
Ailing Zhang
cba79f4872 Revert D15637222: [wip] Replace Type dispatch with ATenDispatch
Differential Revision:
D15637222

Original commit changeset: fcfaea0b5480

fbshipit-source-id: 9bca7ebb91d7a3609b86663089140d7c5a33f58d
2019-06-19 17:36:52 -07:00
Roy Li
24a6c32407 Replace Type dispatch with ATenDispatch (#21320)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21320
ghimport-source-id: cc18f746a1c74df858cb0f6d8b7d4de4315683c7

Test Plan: Imported from OSS

Differential Revision: D15637222

Pulled By: li-roy

fbshipit-source-id: fcfaea0b5480ab966175341cce92e3aa0be7e3cb
2019-06-19 15:46:45 -07:00
Will Feng
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class.
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`.
3. Remove the `Variable.data()` API.
4. Add `Variable.variable_data()`, which matches `tensor.data` in the Python API: it creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that Variable always has AutogradMeta in its TensorImpl, but Tensor doesn't.
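
For reference, a minimal sketch of the Python `tensor.data` behavior that `variable_data()` mirrors:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x.data   # shares storage and metadata with x, but has no autograd history
y.add_(1)    # in-place edits are visible through x, invisible to autograd
print(x.requires_grad, y.requires_grad)  # True False
```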

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` works even if `x` and `y` are of different TensorImpl type (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` doesn't work anymore if `x` and `y` are of different TensorImpl type, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl type.
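
A minimal sketch of this use case:

```python
import torch

x = torch.randn(2, 3)               # dense CPU tensor (impl: TensorImpl)
y = torch.randn(2, 3).to_sparse()   # sparse CPU tensor (impl: SparseTensorImpl)
x.data = y  # worked before this PR; errors afterwards, since the impls differ
```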

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
import torch

params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
Edward Yang
73a97387c1 Replace AT_CHECK with TORCH_CHECK [shard 9/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20435

Reviewed By: jerryzh168

Differential Revision: D15318877

fbshipit-source-id: 4d83571187ea14a604fef83ac355d328b46d93e1
2019-05-15 08:05:59 -07:00
Roy Li
17e4cd0c0a Remove old complex Types (#19616)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19616
ghimport-source-id: d8b3e15d84d3e6f810af3cb83d1413c5f048bcdc

Differential Revision: D15047741

Pulled By: li-roy

fbshipit-source-id: 572045f88f410d97f60c56298018bfee6268b375
2019-04-24 19:18:16 -07:00
Roy Li
689dd800ed Generate only one Type class per backend (#19295)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19295
ghimport-source-id: 9345110f91f044a449804ddd5116cc9179444a00

Differential Revision: D14948581

Pulled By: li-roy

fbshipit-source-id: a317b03d58d621e8df162918038f7543bfb13ba2
2019-04-21 21:16:14 -07:00
Roy Li
189f30603c Make complex its own backend (#19275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19275
ghimport-source-id: 73fd40b02152aed6f24225a88d7ffde7f700899e

Differential Revision: D14948582

Pulled By: li-roy

fbshipit-source-id: a1be6e57057defc74a007c5351c5edb2b9dcaf30
2019-04-21 21:16:10 -07:00
Will Feng
c7b5a8a876 Change is_variable() to check existence of AutogradMeta, and remove is_variable_ (#19139)
Summary:
Currently, a TensorImpl's `is_variable_` is true if and only if the TensorImpl has AutogradMeta. This PR unifies these two concepts by removing `is_variable_` and changing `is_variable()` to check for the existence of AutogradMeta instead.

Removing `is_variable_` is part of the work in Variable/Tensor merge.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19139

Differential Revision: D14893339

Pulled By: yf225

fbshipit-source-id: ceb5e22c3c01f79b5d21d5bdbf4a7d1bc397796a
2019-04-11 14:03:33 -07:00
Edward Yang
515238e0a5 Unify cudaGetDeviceCount implementations. (#18445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18445
ghimport-source-id: 30d018737bf6989bc68b7e3676f44e0ca6141fde

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18242 Test running a CUDA build on CPU machine.
* **#18445 Unify cudaGetDeviceCount implementations.**

I went about doing this by searching for calls to cudaGetDeviceCount,
and then methodically replacing them with references to c10::cuda::device_count()
or at::cuda::device_count().

There is a point to doing this: the various implementations wildly differed
in their handling of what to do when cudaGetDeviceCount returns an error.
The final standardized behavior is that **all errors are swallowed** and
we return device count of zero.  This indirectly fixes running CUDA builds
on CPU, which was broken in #17847.
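
As a hedged illustration of the Python-visible effect (the commit itself changes the C++ helpers):

```python
import torch

# On a machine with no usable CUDA devices, a CUDA build now reports
# zero devices instead of propagating the cudaGetDeviceCount error:
print(torch.cuda.device_count())  # 0
```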

I added 'noexcept' to the 'deviceCount' virtual method on DeviceGuardImpl.
This is a BC-breaking change for anyone inheriting from DeviceGuardImpl,
but all you need to do is put 'noexcept' on your method and it is backwards
compatible with older libtorch.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14612189

fbshipit-source-id: 3c8d186e3dd623c0e27625212c7ce30f75d943cb
2019-03-26 09:50:14 -07:00
Davide Libenzi
272a48f6fe Enable autograd to recognize the XLA backend as one providing multiple devices (#17847)
Summary:
Enable autograd to recognize the XLA backend as one providing multiple devices, while not being CUDA/HIP.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17847

Differential Revision: D14545634

Pulled By: ezyang

fbshipit-source-id: 417181bf2ff4f8978544afe2fb6b042e787854ed
2019-03-20 13:58:36 -07:00
Roy Li
80a7eac79e Remove Type::elementSizeInBytes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17785

Reviewed By: ezyang

Differential Revision: D14379074

fbshipit-source-id: 60727f187d61eb571b144bd6eed4dd4908da0b51
2019-03-15 12:56:02 -07:00
Roy Li
340cf2a2dd Generate derived extension backend Type classes for each scalar type (#17278)
Summary:
Previously we only generated one Type class per extension backend. This caused issues with scalarType() calls and with mapping from variable Types to non-variable Types. With this change we generate one Type per scalar type.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17278

Reviewed By: ezyang

Differential Revision: D14161489

Pulled By: li-roy

fbshipit-source-id: 91e6a8f73d19a45946c43153ea1d7bc9d8fb2409
2019-02-22 18:38:33 -08:00
Gregory Chanan
6454e3262d Make getting the dtype of a tensor work for backend extensions.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17131

Differential Revision: D14093163

Pulled By: gchanan

fbshipit-source-id: 06638706e26505e3c741b7ae290000ca258599db
2019-02-15 13:47:37 -08:00
Edward Yang
4404762d7d Rename IntList to IntArrayRef. (#16751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16751

This was made more complicated by the fact that ivalue::IntList
is a thing. So I had to fix all of the sites where we were referring
to IValue post facto.

The following codemods were run, in this order:

```
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntList IntArrayRef
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntArrayRef::create IntList::create
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in ivalue::IntArrayRef ivalue::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in Tag::IntArrayRef Tag::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in isIntArrayRef isIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in toIntArrayRef toIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'Shared<IntArrayRef>' 'Shared<IntList>'
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'intrusive_ptr<IntArrayRef>' 'intrusive_ptr<IntList>'
```

Some manual fixups were done afterwards; they can be reviewed separately
at https://github.com/pytorch/pytorch/pull/16752

Reviewed By: dzhulgakov

Differential Revision: D13954363

fbshipit-source-id: b5c40aacba042402155a2f5a229fa6db7992ac64
2019-02-05 14:54:34 -08:00
Roy Li
4c803f4ebd Expose backend extensions to python
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16582

Reviewed By: gchanan

Differential Revision: D13887539

fbshipit-source-id: 8755babf2e3e849af974655f2f3a91740efe977e
2019-02-01 11:00:18 -08:00
Edward Yang
3e790f6ee8 complex_registration_extension.cpp includes to angled brackets
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16122

Reviewed By: smessmer

Differential Revision: D13717900

fbshipit-source-id: 8401f39d993482d3e08d2d79bc1841deafee2a5b
2019-01-22 14:22:38 -08:00
Edward Yang
0f45e6dbdc Remove ATen/Allocator.h forwarding header.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16121

Reviewed By: smessmer

Differential Revision: D13717899

fbshipit-source-id: 83488f2aa801ca75059949ec85171ec03e64c4ff
2019-01-22 14:22:36 -08:00
Edward Yang
b9b160d86f Remove ATen/Half.h and ATen/core/Half.h forwarding headers.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16115

Reviewed By: bddppq

Differential Revision: D13717049

fbshipit-source-id: fb1d690183a932a1fa1a2d235f3219520f51620a
2019-01-18 10:55:21 -08:00
Soumyaroop Roy
95d3fed68f Fix for issue 14829 (#14908)
Summary:
* Modify the testcase as outlined in the issue
   * Issue url: https://github.com/pytorch/pytorch/issues/14829
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14908

Differential Revision: D13490360

Pulled By: ezyang

fbshipit-source-id: ff11a72e19b49223652182e82c2b4e65fe444ca7
2018-12-17 14:28:50 -08:00
Peter Goldsborough
4327a2d70a Better tests/support for Python/C++ inter-op (#15193)
Summary:
Methods like `module.named_modules()` return a container of `shared_ptr<nn::Module>`. Currently the `nn::Module` base class does not have Python bindings. This PR fixes this, and adds more unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15193

Differential Revision: D13458713

Pulled By: goldsborough

fbshipit-source-id: 4091fe1b96a1be8db14c6a4307fbacc2b41ff6fe
2018-12-14 08:42:10 -08:00
Peter Goldsborough
0bf1383f0a Python <-> C++ Frontend inter-op (#13481)
Summary:
This PR enables C++ frontend modules to be bound into Python and added as submodules of Python modules. For this, I added lots of pybind11 bindings for the `torch::nn::Module` class, and modified the `torch.nn.Module` class in Python to have a new Metaclass that makes `isinstance(m, torch.nn.Module)` return true when `m` is a C++ frontend module. The methods and fields of C++ modules are bound in such a way that they work seamlessly as submodules of Python modules for most operations (one exception I know of: calling `.to()` ends up calling `.apply()` on each submodule with a Python lambda, which cannot be used in C++ -- this may require small changes on the Python side).

I've added quite a bunch of tests to verify the bindings and equality with Python. I think I should also try out adding a C++ module as part of some large PyTorch module, like a WLM or something, and see if everything works smoothly.

The next step for inter-op across our system is ScriptModule <-> C++ Frontend Module inter-op. I think this will then also allow using C++ frontend modules from TorchScript.

apaszke zdevito

CC dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13481

Differential Revision: D12981996

Pulled By: goldsborough

fbshipit-source-id: 147370d3596ebb0e94c82cec92993a148fee50a7
2018-12-13 08:04:02 -08:00
Sebastian Messmer
3fa53da61a Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818

Reviewed By: ezyang

Differential Revision: D13348042

fbshipit-source-id: 11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
2018-12-11 21:01:45 -08:00
Sebastian Messmer
2dfdbef91d Fix include paths for TensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14816

Reviewed By: ezyang

Differential Revision: D13348040

fbshipit-source-id: a7204d89c2dd277d13093b0ed862f40b53dee82f
2018-12-11 21:01:40 -08:00
Roy Li
c03851e93a remove copy_wrapper (#13937)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13937

We can now replace s_copy_ with our new _copy_ function. I experimented with moving s_copy_ out of VariableManualType.cpp, but it seemed like there was enough special casing to warrant it staying.

Reviewed By: ezyang

Differential Revision: D13053648

fbshipit-source-id: e9e04d460baf4ee49b500212cf91b95221acd769
2018-11-30 11:12:59 -08:00
Peter Goldsborough
6f2307ba6a Allow building libraries with setuptools that don't have an ABI suffix (#14130)
Summary:
When using `setuptools` to build a Python extension, setuptools will automatically add an ABI suffix like `cpython-37m-x86_64-linux-gnu` to the shared library name when using Python 3. This is required for extensions meant to be imported as Python modules. When we use setuptools to build shared libraries not meant as Python modules, for example libraries that define and register TorchScript custom ops, having your library called `my_ops.cpython-37m-x86_64-linux-gnu.so` is a bit annoying compared to just `my_ops.so`, especially since you have to reference the library name when loading it with `torch.ops.load_library` in Python.

This PR fixes this by adding a `with_options` class method to the `torch.utils.cpp_extension.BuildExtension` which allows configuring the `BuildExtension`. In this case, the first option we add is `no_python_abi_suffix`, which we then use in `get_ext_filename` (override from `setuptools.build_ext`) to throw away the ABI suffix.

I've added a test `setup.py` in a `no_python_abi_suffix_test` folder.
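
A minimal `setup.py` using this option might look like the following (module name and source file are illustrative):

```python
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="my_ops",
    ext_modules=[CppExtension("my_ops", ["my_ops.cpp"])],
    # Drop the Python ABI tag so the output is my_ops.so rather than
    # my_ops.cpython-37m-x86_64-linux-gnu.so:
    cmdclass={"build_ext": BuildExtension.with_options(no_python_abi_suffix=True)},
)
```

The resulting `my_ops.so` can then be referenced directly when loading it with `torch.ops.load_library`.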

Fixes https://github.com/pytorch/pytorch/issues/14188

t-vi fmassa soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14130

Differential Revision: D13216575

Pulled By: goldsborough

fbshipit-source-id: 67dc345c1278a1a4ee4ca907d848bc1fb4956cfa
2018-11-27 17:35:53 -08:00
Tugrul Ates
1224ef9ea1 Delete backwards compatibility StorageImpl.h and TensorImpl.h (#14230)
Summary:
Since they directly include the real ones in core.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14230

Differential Revision: D13140323

Pulled By: tugrulates

fbshipit-source-id: d7e3b94e891b2d7fa273d01c0b7edfebdbd7e368
2018-11-20 12:29:24 -08:00
Peter Goldsborough
393ad6582d Use torch:: instead of at:: in all C++ APIs (#13523)
Summary:
In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and also avoids bugs around creating tensors vs. variables. The same should from now on be true for the TorchScript C++ API (for running and loading models) and all C++ extensions.

Note that since we're just talking about typedefs, this change does not break any existing code.

Once this lands I will update stuff in `pytorch/tutorials` too.

zdevito ezyang gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523

Differential Revision: D12942787

Pulled By: goldsborough

fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
2018-11-06 14:32:25 -08:00
Peter Goldsborough
7b9d755d88 Restructure torch/torch.h and extension.h (#13482)
Summary:
This PR restructures the public-facing C++ headers in a backwards compatible way. The problem right now is that the C++ extension header `torch/extension.h` does not include the C++ frontend headers from `torch/torch.h`. However, those C++ frontend headers can be convenient. Further, including the C++ frontend main header `torch/torch.h` in a C++ extension currently raises a warning because we want to move people away from exclusively including `torch/torch.h` in extensions (which was the correct thing 6 months ago), since that *used* to be the main C++ extension header but is now the main C++ frontend header. In short: it should be possible to include the C++ frontend functionality from `torch/torch.h`, but without including that header directly because it's deprecated for extensions.

For clarification: why is `torch/torch.h` deprecated for extensions? Because for extensions we need to include Python stuff, but for the C++ frontend we don't want this Python stuff. For now the python stuff is included in `torch/torch.h` whenever the header is used from a C++ extension (enabled by a macro passed by `cpp_extensions.py`) to not break existing users, but this should change in the future.

The overall fix is simple:

1. C++ frontend sub-headers move from `torch/torch.h` into `torch/all.h`.
2. `torch/all.h` is included in:
    1. `torch/torch.h`, as is.
    2. `torch/extension.h`, to now also give C++ extension users this functionality.

With the next release we can then:
1. Remove the Python includes from `torch/torch.h`
2. Move C++-only sub-headers from `all.h` back into `torch.h`
3. Make `extension.h` include `torch.h` and `Python.h`

This will then break old C++ extensions that include `torch/torch.h`, since the correct header for C++ extensions is `torch/extension.h`.

I've also gone ahead and deprecated `torch::CPU` et al., since those are long overdue for removal.

ezyang soumith apaszke fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13482

Differential Revision: D12924999

Pulled By: goldsborough

fbshipit-source-id: 5bb7bdc005fcb7b525195b769065176514efad8a
2018-11-05 16:46:52 -08:00
Peter Goldsborough
482b1366e6 Remove half_support.* (#13534)
Summary:
These two files are unused. I think at the time I moved the code into an inline extension (https://github.com/pytorch/pytorch/blob/master/test/test_cpp_extensions.py#L288) and forgot to delete the files.

soumith ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13534

Differential Revision: D12924365

Pulled By: goldsborough

fbshipit-source-id: 050dd7da267008ea58a5dcc8febee7d7e443bc3d
2018-11-05 10:04:21 -08:00
Gregory Chanan
bb35d085ef Dispatch backend-specific TensorOptions-based 'factory' functions via… (#12071)
Summary:
Dispatch backend-specific TensorOptions-based 'factory' functions via Type.

This allows one to write a cpu/cuda split 'factory' function that uses TensorOptions.
Also move all remaining native_functions with either function or method variants that use Type to use TensorOptions.
Thus, there are no more Types in the public function / method API.

I believe there is a _lot_ of opportunity for cleanup here, as the old tensor, th_tensor, native_tensor and sparse variants can probably be removed, but let's do that in a follow-on patch.
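
On the Python side, these factory functions surface the TensorOptions fields as keyword arguments, e.g.:

```python
import torch

t = torch.ones(2, 3, dtype=torch.float64, device="cpu")  # TensorOptions fields as kwargs
```
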
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12071

Reviewed By: ezyang

Differential Revision: D10041600

Pulled By: gchanan

fbshipit-source-id: 30ebc17146d344bc3e32ccec7b98b391aac5470b
2018-10-15 15:21:11 -07:00
Yangqing Jia
713e706618 Move exception to C10 (#12354)
Summary:
There is still some work to be done:

- Move logging and unify AT_WARN with LOG(ERROR).
- A few header files are still being plumbed through and need cleaning.
- caffe2::EnforceNotMet aliasing is not done yet.
- need to unify the macros. See c10/util/Exception.h

This is mainly a codemod and does not cause functional changes. If you find your job failing and trace it back to this diff, it can usually be fixed by one of the following approaches:

(1) add //caffe2/c10:c10 to your dependency (or transitive dependency).
(2) change objects such as at::Error, at::Optional to the c10 namespace.
(3) change functions to the c10 namespace. In particular, caffe2::MakeString is not overridden by the unified c10::str function. Nothing else changes.

Please kindly consider not reverting this diff - it involves multiple rounds of rebasing and the fix is usually simple. Contact jiayq@ or AI Platform Dev for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12354

Reviewed By: orionr

Differential Revision: D10238910

Pulled By: Yangqing

fbshipit-source-id: 7794d5bf2797ab0ca6ebaccaa2f7ebbd50ff8f32
2018-10-15 13:33:18 -07:00
Peter Goldsborough
e05d689c49 Unify C++ API with C++ extensions (#11510)
Summary:
Currently the C++ API and C++ extensions are effectively two different, entirely orthogonal code paths. This PR unifies the C++ API with the C++ extension API by adding an element of Python binding support to the C++ API. This means the `torch/torch.h` included by C++ extensions, which currently routes to `torch/csrc/torch.h`, can now be rerouted to `torch/csrc/api/include/torch/torch.h` -- i.e. the main C++ API header. This header then includes Python binding support conditioned on a define (`TORCH_WITH_PYTHON_BINDINGS`), *which is only passed when building a C++ extension*.

Currently stacked on top of https://github.com/pytorch/pytorch/pull/11498

Why is this useful?

1. One less codepath. In particular, there has been trouble again and again due to the two `torch/torch.h` header files and ambiguity when both ended up in the include path. This is now fixed.
2. I have found that it is quite common to want to bind a C++ API module back into Python. This could be for simple experimentation, or to have your training loop in Python but your models in C++. This PR makes this easier by adding pybind11 support to the C++ API.
3. The C++ extension API simply becomes richer by gaining access to the C++ API headers.

soumith ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11510

Reviewed By: ezyang

Differential Revision: D9998835

Pulled By: goldsborough

fbshipit-source-id: 7a94b44a9d7e0377b7f1cfc99ba2060874d51535
2018-09-24 14:44:21 -07:00
Christian Puhrsch
e998038bc0 Use TypeMeta instead of TypeIdentifier within at::StorageImpl (#11236)
Summary:
Further aligns at::StorageImpl with caffe2::StorageImpl
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11236

Differential Revision: D9776286

Pulled By: cpuhrsch

fbshipit-source-id: f2c53995fcece013b77b3a1f709ab0f9df8ab23e
2018-09-12 22:26:00 -07:00
Gregory Chanan
86ab92b0a9 Move TensorImpl / UndefinedTensor(Impl) to core (#11441)
Summary:
Moves TensorImpl to core.
Renames UndefinedTensor to UndefinedTensorImpl and moves to core.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11441

Differential Revision: D9736620

Pulled By: gchanan

fbshipit-source-id: 0322ae3b903e338de253b35a0d74a9d3e219204b
2018-09-11 07:45:56 -07:00
Peter Goldsborough
35008e0a1a Add flags to fix half comparison and test (#11395)
Summary:
A user found there are some issues when using comparison operators for half types when certain THC headers are included. I was able to reproduce this and added a test. I also fix the issue by adding the proper definitions to avoid it.

Reported in https://github.com/pytorch/pytorch/pull/10301#issuecomment-416773333
Related: https://github.com/pytorch/tutorials/pull/292

soumith fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11395

Differential Revision: D9725102

Pulled By: goldsborough

fbshipit-source-id: 630425829046bbebea3409bb792a9d62c91f41ad
2018-09-10 14:10:21 -07:00
Peter Goldsborough
31d36b1d31 move complex registration test out-of-line (#11397)
Summary:
Moves the complex registration code into an out-of-line C++ extension to de-noise the test_cpp_extensions.py file. Let's keep it nice and tidy so we can point our users at it for usage examples.

ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11397

Differential Revision: D9725335

Pulled By: goldsborough

fbshipit-source-id: 290618f2ee711b1895cdb8f05276034dfe315c6d
2018-09-07 16:56:14 -07:00
Johannes M Dieterich
a4c59a9dab MIOpen integration, more tests enabled, bug fixes (#10612)
Summary:
* first integration of MIOpen for batch norm and conv on ROCm
* work around a ROCm compiler bug exposed by elementwise_kernel through explicit capture of variables in the densest packing
* work around a ROCm compiler bug exposed by having `extern "C" __host__` as a definition and just `__host__` in the implementation, through the hipify script
* use fabs() in accordance with C++11 for double absolute value, not ::abs(), which is integer-only on ROCm
* enable the test_sparse set on CI, skipping tests that don't currently work on ROCm
* enable more tests in test_optim after the elementwise bug got fixed
* enable more tests in test_dataloader
* improvements to hipification and the ROCm build

With this, resnet18 on CIFAR data trains without hang or crash in our tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10612

Reviewed By: bddppq

Differential Revision: D9423872

Pulled By: ezyang

fbshipit-source-id: 22c0c985217d65c593f35762b3eb16969ad96bdd
2018-08-23 15:24:47 -07:00
Edward Yang
6bdbad93b9 Refactor Device to not depend on Backend. (#10478)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10478

- Removed Backend constructor from Device, and fixed all
  use-sites to use DeviceType::CPU instead of kCPU, or
  use a new function backendToDeviceType to perform
  the conversion.
- New method device_type() on Type; it gives you the
  underlying device type, e.g., CPU for SparseCPU.
- We add backward compatibility for kCPU/kCUDA uses,
  by introducing a new special type which is implicitly
  convertible to both DeviceType and Backend.  As long as
  you don't define a function that's overloaded on both
  DeviceType and Backend (but not on BackendOrDeviceType),
  the implicit conversions will ensure that uses
  of at::Device(at::kCPU) keep working. We fixed use-sites in
  the library, but did NOT fix sites in the test code, so that
  we can exercise this BC code.

Reviewed By: Yangqing

Differential Revision: D9301861

fbshipit-source-id: 9a9d88620500715c7b37e655b4fd761f6dd72716
2018-08-18 17:39:14 -07:00
Mike Ruberry
1003ccfa15 Creates CUDAContext (#9435)
Summary:
ezyang noticed that the CUDAStream files lived under ATen/ despite being CUDA-specific, and suggested porting them to ATen/cuda and exposing them with a new CUDAContext. This PR does that. It also:

- Moves ATen's CUDA-specific exceptions for ATen/cudnn to ATen/cuda for consistency
- Moves getDeviceProperties() and getCurrentCUDASparseHandle() to CUDAContext from CUDAHooks

The separation between CUDAContext and CUDAHooks is straightforward. Files that are in CUDA-only builds should rely on CUDAContext, while CUDAHooks is for runtime dispatch in files that can be included in CPU-only builds. A comment in CUDAContext.h explains this pattern. Acquiring device properties and CUDA-specific handles is something only done in builds with CUDA, for example, so I moved them from CUDAHooks to CUDAContext.

This PR will conflict with #9277 and I will merge with master after #9277 goes in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9435

Reviewed By: soumith

Differential Revision: D8917236

Pulled By: ezyang

fbshipit-source-id: 219718864234fdd21a2baff1dd3932ff289b5751
2018-07-20 12:56:15 -07:00