# Motivation
According to [[1/4] Intel GPU Runtime Upstreaming for Device](https://github.com/pytorch/pytorch/pull/116019), as mentioned in [[RFC] Intel GPU Runtime Upstreaming](https://github.com/pytorch/pytorch/issues/114842), this second PR covers the changes under `aten`.

# Design
We will compile the code for XPU separately into a library named `libtorch_xpu.so`. Currently, it primarily offers device-related APIs, including:
- `getCurrentDeviceProperties`
- `getDeviceProperties`
- `getGlobalIdxFromDevice`
- `getDeviceFromPtr`

# Additional Context
`XPUHooks` is an indispensable part of the runtime. We upstream `XPUHooks` in this PR since it contains some `Device`-related code, and we also refine some logic and code to avoid a forward declaration in `DLPack`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116833
Approved by: https://github.com/EikanWang, https://github.com/jgong5, https://github.com/gujinghui, https://github.com/malfet
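The Design section lists the new device APIs only by name. As a rough picture of how they might be consumed from C++ once `libtorch_xpu.so` is linked in, here is a minimal sketch; the `ATen/xpu/XPUContext.h` header path, the `at::xpu` namespace, the `DeviceProp` field names, and the exact signatures are assumptions patterned after the existing CUDA helpers, not details confirmed by this PR.

```cpp
// Illustrative sketch only: the header path, the at::xpu namespace, and the
// DeviceProp member names below are assumptions modeled on the CUDA
// counterparts (ATen/cuda/CUDAContext.h); check the ATen/xpu sources for the
// exact spellings.
#include <cstdint>
#include <iostream>

#include <ATen/xpu/XPUContext.h>  // assumed location of the device query APIs

int main() {
  // Query properties of the device bound to the current thread.
  const auto* props = at::xpu::getCurrentDeviceProperties();
  std::cout << "Current XPU device: " << props->name << '\n';

  // Query properties of an explicit device index.
  const auto* props0 = at::xpu::getDeviceProperties(/*device=*/0);
  std::cout << "Device 0 global memory: " << props0->global_mem_size << " bytes\n";

  // Translate an ATen device index into the global SYCL device index.
  int32_t global_idx = at::xpu::getGlobalIdxFromDevice(/*device=*/0);
  std::cout << "Global SYCL index of device 0: " << global_idx << '\n';

  // getDeviceFromPtr maps a USM allocation back to its owning at::Device
  // (signature assumed):
  //   at::Device dev = at::xpu::getDeviceFromPtr(ptr);
  return 0;
}
```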
| Name |
|---|
| .. |
| contrib |
| core |
| cuda_rtc |
| db |
| distributed |
| experiments |
| ideep |
| image |
| mobile |
| mpi |
| observers |
| onnx |
| operators |
| opt |
| perfkernels |
| predictor |
| proto |
| python |
| quantization |
| queue |
| serialize |
| sgd |
| share |
| test |
| transforms |
| utils |
| video |
| __init__.py |
| .clang-format |
| BUILD_MODE.bzl |
| CMakeLists.txt |
| README.md |
| release-notes.md |
| requirements.txt |
| unexported_symbols.lds |
| VERSION_NUMBER |
| version_script.lds |
# Caffe2
Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.
## Questions and Feedback
Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.