Commit Graph

15 Commits

Bel H
30cb6ac53c Introduce mlc device (ML Compute device) to PyTorch's device list (#50634)
Summary:
Apple recently announced ML Compute, a new framework available in macOS Big Sur, which enables users to accelerate the training of neural networks on Mac hardware. This PR is the first in a series of PRs that will enable the integration with ML Compute. Most of the integration code will live in a separate subrepo named `mlc`.
The integration with `mlc` (ML Compute) will be very similar to that of xla. We rely on registering our ops through:

TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl_UNBOXED(<op_schema_name>, &customized_op_kernel);
  ...
}

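A minimal sketch (not code from the actual `mlc` subrepo) of what such a PrivateUse1 registration can look like with the `m.impl` API; the choice of op (`aten::abs`), the kernel name, and the CPU round-trip are illustrative placeholders:

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Placeholder kernel for aten::abs on the PrivateUse1 backend. A real backend
// would hand the computation to its own runtime (here: ML Compute); this
// sketch just round-trips through the CPU kernel.
at::Tensor mlc_abs(const at::Tensor& self) {
  at::Tensor result = at::abs(self.cpu());
  return result.to(self.device());
}

TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  // Register the kernel for the aten::abs schema on the PrivateUse1 key.
  m.impl("abs", mlc_abs);
}
```
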
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50634

Reviewed By: malfet

Differential Revision: D26614213

Pulled By: smessmer

fbshipit-source-id: 3b492b346c61cc3950ac880ac01a82fbdddbc07b
2021-02-24 22:39:11 -08:00
chengjun
4a8ef4525e Add new backend type for Intel heterogeneous computation platform. (#49786)
Summary:
Add a new device type 'XPU' ('xpu' in lower case) to PyTorch. Changes are needed in code related to the device model and kernel dispatch, e.g. DeviceType, Backend, and DispatchKey.

https://github.com/pytorch/pytorch/issues/48246
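
For illustration only (not code from this PR), a small sketch of what the new device type looks like from the c10 device model once the string parser knows about it:

```cpp
#include <c10/core/Device.h>
#include <iostream>

int main() {
  // "xpu" now parses like any other device type string.
  c10::Device d("xpu:0");
  std::cout << std::boolalpha
            << (d.type() == c10::DeviceType::XPU && d.index() == 0)
            << "\n";          // true
  std::cout << d << "\n";     // xpu:0
  return 0;
}
```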

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49786

Reviewed By: mrshenli

Differential Revision: D25893962

Pulled By: ezyang

fbshipit-source-id: 7ff0a316ee34cf0ed6fc7ead08ecdeb7df4b0052
2021-01-20 08:15:18 -08:00
Ivan Kobzarev
3112e23428 [py][vulkan][reland] Add is_vulkan to py api, add vulkan to device type parsing (#46655)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46655

Test Plan: Imported from OSS

Pulled By: IvanKobzarev

Reviewed By: mrshenli

Differential Revision: D24448984

fbshipit-source-id: 5000846a06077f7a5a06dd51da422d2a42f70820
2020-10-22 09:35:50 -07:00
Shen Li
cebe87fe3a Revert D24379422: [py][vulkan] Add is_vulkan to py api, add vulkan to device type parsing
Test Plan: revert-hammer

Differential Revision:
D24379422 (e8fbe54cf5)

Original commit changeset: afab89bb9e17

fbshipit-source-id: 743c77e453239f10c155c67490cba5a42ab42f58
2020-10-21 08:23:05 -07:00
Ivan Kobzarev
e8fbe54cf5 [py][vulkan] Add is_vulkan to py api, add vulkan to device type parsing (#46511)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46511

Test Plan: Imported from OSS

Reviewed By: AshkanAliabadi

Differential Revision: D24379422

Pulled By: IvanKobzarev

fbshipit-source-id: afab89bb9e17c50934083598262bbe14ea82e893
2020-10-20 20:04:24 -07:00
Dylan Bespalko
c767d65caf Added FPGA DispatchKey, DeviceType, Backend (#38938)
Summary:
ezyang,

I have added the changes to DispatchKey, DeviceType, and Backend to support the out-of-tree FPGA backend.

cc. tataetae
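
For context, a minimal sketch (not from this PR) of what the in-tree piece provides: the core device model recognizes the FPGA device type, while the kernels themselves are registered out of tree against the FPGA dispatch key. This assumes the device-string parser also recognizes "fpga":

```cpp
#include <c10/core/Device.h>
#include <iostream>

int main() {
  // FPGA is now a first-class DeviceType in core, even though every
  // FPGA kernel lives out of tree.
  c10::Device by_enum(c10::DeviceType::FPGA);
  c10::Device by_string("fpga");
  std::cout << std::boolalpha << (by_enum == by_string) << "\n";  // true
  return 0;
}
```
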
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38938

Differential Revision: D21748955

Pulled By: ezyang

fbshipit-source-id: fe76d9730818205961430d2a0e00727b5c547b32
2020-06-03 07:28:14 -07:00
SsnL
ae392a77a6 Add better device idx parse checks (#37376)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/32079
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37376
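
A rough sketch of the kind of check this tightens (the exact strings and error messages covered by the linked issue are not reproduced here):

```cpp
#include <c10/core/Device.h>
#include <exception>
#include <iostream>

int main() {
  c10::Device ok("cuda:1");  // well-formed index: type cuda, index 1
  std::cout << ok << "\n";

  try {
    c10::Device bad("cuda:foo");  // malformed index: expected to throw
  } catch (const std::exception& e) {
    std::cout << "rejected malformed device index\n";
  }
  return 0;
}
```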

Differential Revision: D21476036

Pulled By: zou3519

fbshipit-source-id: 86907083c23cbaf165b645307fb340f2656b814e
2020-05-14 09:07:12 -07:00
Danny Huang
ced9edbaa4 [Torch Device][c10] Fix the expected torch device error message (#36446)
Summary:
This PR updates the expected torch device string error message to include `xla` as an acceptable torch device prefix string.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36446

Test Plan:
No logic changed; made sure `xla` is accepted by `torch.device`.
```
import torch

device = torch.device("xla")
```

```
device = torch.device("unrecognized")

RuntimeError: Expected one of cpu, cuda, mkldnn, opengl, opencl, ideep, hip, msnpu, xla device type at start of device string: unrecognized
```

Differential Revision: D20993449

Pulled By: dahsh

fbshipit-source-id: 83afe4f913a650a655bfda9c2a64bf9e5aa27e16
2020-04-13 12:02:07 -07:00
cyy
8a14b41617 fix warnings reported by PVS (#33868)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33868

Differential Revision: D20169059

Pulled By: ailzhang

fbshipit-source-id: ec12226ae27ddd89fa5bacdd35151981ebfedcfd
2020-03-02 18:51:38 -08:00
Jeremy Lilley
abf55eb3a8 Pickler: convert std::stringstream cases. (#29351)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29351

When torch::save()ing a smallish tensor, we still spend ~5% of the time in std::stringstream constructors.

This removes the last couple of cases. Benchmark shows ~5% improvement:
  TorchSaveSmallTensor Pre: 13.12us
  TorchSaveSmallTensor Post: 12.48us
ghstack-source-id: 93517928
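
A generic sketch of the flavor of this change (not the Pickler's actual code): appending raw bytes to a std::string buffer avoids constructing a std::stringstream on every call:

```cpp
#include <cstdint>
#include <string>

// Append the raw bytes of a trivially copyable value to a string buffer,
// instead of formatting through a freshly constructed std::stringstream.
template <typename T>
void appendRaw(std::string& out, const T& value) {
  out.append(reinterpret_cast<const char*>(&value), sizeof(T));
}

std::string serializeHeader(uint32_t version, uint64_t payload_size) {
  std::string buffer;
  buffer.reserve(sizeof(version) + sizeof(payload_size));
  appendRaw(buffer, version);
  appendRaw(buffer, payload_size);
  return buffer;
}
```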

Test Plan:
buck build mode/opt experimental/jeremyl/c2:
   buck-out/opt/gen/experimental/jeremyl/c2/SerializationBench  --bm_regex=TorchSaveSmallTensor

Differential Revision: D18365066

fbshipit-source-id: a3284bec004751cedae1cdadf27f969422faff8e
2019-11-08 14:26:40 -08:00
Sam Gross
dee11a92c1 Use Device instead of Backend in TensorIterator (#20690)
Summary:
This PR also moves Device::validate into the header file, which makes
statements like `Device d = kCPU` effectively free.

Device includes the device's index, so TensorIterator::compute_types
now implicitly checks that all CUDA inputs are on the same GPU.
Previously, this was done ad-hoc in places like TensorIterator::binary_op.

Note that zero-dim Tensors (scalars) are NOT required to be on the
same device as other inputs because they behave almost like Python numbers.
TensorIterator handles copying zero-dim Tensors to the common device.

Prior to this PR, TensorIterator would copy zero-dim Tensors between CPU
and GPU, but not between different GPUs (because Backend didn't encode
the GPU index). This removes that restriction.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20690
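
A sketch of the behavior described above from the ATen side (assumes two visible CUDA devices; the commented-out error is paraphrased, not the exact message):

```cpp
#include <ATen/ATen.h>

void example() {
  at::Tensor a = at::randn({3}, at::TensorOptions().device(at::Device(at::kCUDA, 0)));
  at::Tensor b = at::randn({3}, at::TensorOptions().device(at::Device(at::kCUDA, 1)));

  // Zero-dim tensors behave like Python numbers: TensorIterator copies the
  // CPU scalar to the common device, so mixing it with a CUDA input is fine.
  at::Tensor s = at::scalar_tensor(2.0);
  at::Tensor ok = a + s;

  // Non-scalar inputs must now agree on the full Device (type AND index);
  // with Device instead of Backend, cuda:0 vs cuda:1 is caught.
  // at::Tensor bad = a + b;  // would throw: inputs on different CUDA devices
  (void)ok;
  (void)b;
}
```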

Differential Revision: D15414826

Pulled By: colesbury

fbshipit-source-id: 1d0ad1f7d663252af36dd4590bcda418c2f7a09f
2019-05-24 12:14:08 -07:00
Edward Yang
365fc26571 Replace AT_CHECK with TORCH_CHECK [shard 8/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20434
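
For readers who have not seen the new macro: TORCH_CHECK is used the same way AT_CHECK was, e.g. (illustrative snippet, not taken from the shard itself):

```cpp
#include <c10/util/Exception.h>
#include <cstdint>

void checkInput(int64_t dim) {
  // Previously: AT_CHECK(dim == 2, "expected a 2-D tensor, got ", dim, "-D");
  TORCH_CHECK(dim == 2, "expected a 2-D tensor, got ", dim, "-D");
}
```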

Reviewed By: jerryzh168

Differential Revision: D15318396

fbshipit-source-id: dcd0f51be2d64b9440bb95ce8f40acb12545c2f4
2019-05-15 08:05:56 -07:00
Davide Libenzi
66084c0bc9 Add recognition for XLA device types.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16844

Differential Revision: D13988805

Pulled By: gchanan

fbshipit-source-id: 4e89d6d2cde8bdac41739efa65cc91569a360953
2019-02-07 14:51:28 -08:00
Roy Li
4c803f4ebd Expose backend extensions to python
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16582

Reviewed By: gchanan

Differential Revision: D13887539

fbshipit-source-id: 8755babf2e3e849af974655f2f3a91740efe977e
2019-02-01 11:00:18 -08:00
Sebastian Messmer
d408324350 Move files to/from c10/core and c10/util (#15316)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15316

This starts cleaning up the files in c10 according to the module structure we decided on.

Move to c10/util:
- Half.h, Half-inl.h, Half.cpp, bitcasts.h

Move to c10/core:
- Device.h, Device.cpp
- DeviceType.h, DeviceType.cpp

i-am-not-moving-c2-to-c10

Reviewed By: dzhulgakov

Differential Revision: D13498493

fbshipit-source-id: dfcf1c490474a12ab950c72ca686b8ad86428f63
2019-01-10 16:22:22 -08:00