pytorch/caffe2
Yu, Guangye 79811e765c [2/4] Intel GPU Runtime Upstreaming for Device (#116833)
# Motivation
Following [[1/4] Intel GPU Runtime Upstreaming for Device](https://github.com/pytorch/pytorch/pull/116019), and as outlined in [[RFC] Intel GPU Runtime Upstreaming](https://github.com/pytorch/pytorch/issues/114842), this second PR covers the device-related changes under `aten`.

# Design
We will compile the XPU code separately into a library named `libtorch_xpu.so`. For now, it primarily offers device-related APIs, including the following (a usage sketch follows the list):
- `getCurrentDeviceProperties`
- `getDeviceProperties`
- `getGlobalIdxFromDevice`
- `getDeviceFromPtr`
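
These mirror the existing CUDA device utilities. Below is a minimal usage sketch; the `at::xpu` namespace placement, the header path, the parameter types, and the `name` field on the properties struct are assumptions patterned on the CUDA counterparts rather than details confirmed by this PR.

```cpp
// Hedged sketch: assumes at::xpu placement and CUDA-like signatures.
#include <ATen/xpu/XPUContext.h>  // assumed header location
#include <iostream>

int main() {
  // Properties of the current XPU device (assumed to return a pointer to a
  // DeviceProp-style struct, like at::cuda::getCurrentDeviceProperties()).
  auto* prop = at::xpu::getCurrentDeviceProperties();
  std::cout << "current device: " << prop->name << "\n";  // `name` field assumed

  // Properties of an explicit device index.
  auto* prop0 = at::xpu::getDeviceProperties(/*device=*/0);
  (void)prop0;

  // Global SYCL device index for a torch device; whether the parameter is a
  // plain index or an at::Device is an assumption here.
  auto global_idx = at::xpu::getGlobalIdxFromDevice(/*device=*/0);
  (void)global_idx;

  // at::xpu::getDeviceFromPtr(ptr) would map a SYCL USM allocation back to
  // its owning at::Device; omitted here because it needs a real USM pointer.
  return 0;
}
```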

# Additional Context
`XPUHooks` is an indispensable part of the runtime. We upstream `XPUHooks` in this PR because it contains `Device`-related code; we also refine some logic and code to avoid a forward declaration in `DLPack`.
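
For reference, a hooks interface of this kind is typically consumed from device-agnostic code through a registry accessor that falls back to a stub when the device library is not loaded. The sketch below follows the existing `CUDAHooks` pattern; the `at::detail::getXPUHooks()` accessor, the header path, and the exact method signature are assumptions, not details confirmed by this PR.

```cpp
// Hedged sketch following the CUDAHooks pattern; accessor name, header path,
// and method signature here are assumptions, not confirmed by this PR.
#include <ATen/detail/XPUHooksInterface.h>  // assumed header location
#include <c10/core/Device.h>

// Resolve the device that owns a SYCL USM allocation without linking against
// libtorch_xpu.so directly: the hooks registry dispatches to the real
// implementation only if the XPU library registered itself at load time.
c10::Device device_of_usm_pointer(void* data) {
  return at::detail::getXPUHooks().getDeviceFromPtr(data);
}
```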

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116833
Approved by: https://github.com/EikanWang, https://github.com/jgong5, https://github.com/gujinghui, https://github.com/malfet
2024-01-18 05:02:42 +00:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| contrib | Revert "Add _foreach_clamp (#106574)" | 2023-08-11 21:05:04 +00:00 |
| core | Add function to materialize COW storages (#117053) | 2024-01-10 15:34:16 +00:00 |
| cuda_rtc | | |
| db | | |
| distributed | [4/N] Add -Wdeprecated and related fixes (#110204) | 2023-10-07 19:46:08 +00:00 |
| experiments | [BE] Remove dependency on six and future (#94709) | 2023-02-14 09:14:14 +00:00 |
| ideep | [ONEDNN][BC-breaking] update onednn from v2.7.3 to v3.1.1 (#97957) | 2023-08-25 12:13:18 +00:00 |
| image | | |
| mobile | | |
| mpi | [BE] Enforce missing override keyword (#104032) | 2023-06-24 02:34:24 +00:00 |
| observers | | |
| onnx | | |
| operators | [codemod] Fix shadows in PyTorch (#117562) | 2024-01-17 20:33:50 +00:00 |
| opt | [codemod][lowrisk] Remove extra semi colon from caffe2/caffe2/opt/optimizer.cc (#115018) | 2023-12-13 23:11:33 +00:00 |
| perfkernels | Revert "Use missing-prototypes in torch_cpu (#103725)" | 2023-06-22 18:30:31 +00:00 |
| predictor | | |
| proto | extract torch.proto to its own library (#97614) | 2023-03-30 10:35:03 +00:00 |
| python | Revert "[Reland2] Update NVTX to NVTX3 (#109843)" | 2023-12-05 16:10:20 +00:00 |
| quantization | [BE]: Apply PYI autofixes to various types (#107521) | 2023-08-20 02:42:21 +00:00 |
| queue | [caffe2] Replace CAFFE_ prefixes in static_tracepoint.h macros with TORCH_ (#106380) | 2023-08-03 21:51:36 +00:00 |
| serialize | Reduce single reader check time for inline_container (#113328) | 2023-11-09 22:02:28 +00:00 |
| sgd | [CUDA] Drop CUDA 10 support (#89582) | 2023-01-05 05:11:53 +00:00 |
| share | Revert "Use missing-prototypes in torch_cpu (#103725)" | 2023-06-22 18:30:31 +00:00 |
| test | | |
| transforms | [1/N] Enable Wunused-result and Wunused-variable in torch targets (#110722) | 2023-10-08 23:43:45 +00:00 |
| utils | [codemod][lowrisk] Remove extra semi colon from caffe2/c10/util/Float8_e5m2.h (#115761) | 2024-01-04 02:02:26 +00:00 |
| video | | |
| __init__.py | | |
| .clang-format | | |
| BUILD_MODE.bzl | | |
| CMakeLists.txt | [2/4] Intel GPU Runtime Upstreaming for Device (#116833) | 2024-01-18 05:02:42 +00:00 |
| README.md | | |
| release-notes.md | | |
| requirements.txt | | |
| unexported_symbols.lds | | |
| VERSION_NUMBER | | |
| version_script.lds | | |

Caffe2

Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.

Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.

Further Resources on Caffe2.ai