Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73445
Refactors the codebase to pass lazy tensors around as LazyTensorPtr (defined as a c10::intrusive_ptr), which lets XLA define a derived class XlaLazyTensor and override functionality.
This PR is only the first step; we will still need to add a factory class that XLA can override in its backend to actually hook up its derived tensor class.
Parallel PR on lazy_tensor_staging: #73429
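A minimal sketch of the idea (not the actual torch/csrc/lazy declarations): an intrusive_ptr-based handle lets a backend plug in a derived tensor class behind the same pointer type. The overridden method (ToString) and the factory function (MakeXlaTensor) are hypothetical placeholders for the follow-up factory mentioned above.

#include <string>
#include <c10/util/intrusive_ptr.h>

namespace torch {
namespace lazy {

// Core lazy tensor: ref-counted via intrusive_ptr_target so it can be
// held by c10::intrusive_ptr and subclassed by backends.
class LazyTensor : public c10::intrusive_ptr_target {
 public:
  ~LazyTensor() override = default;
  virtual std::string ToString() const { return "LazyTensor"; }
};

// The handle type the codebase now passes around.
using LazyTensorPtr = c10::intrusive_ptr<LazyTensor>;

} // namespace lazy
} // namespace torch

// In the XLA backend (hypothetical), a derived class overrides behavior:
class XlaLazyTensor : public torch::lazy::LazyTensor {
 public:
  std::string ToString() const override { return "XlaLazyTensor"; }
};

// A follow-up factory would let XLA construct its derived type while
// callers only ever see LazyTensorPtr:
torch::lazy::LazyTensorPtr MakeXlaTensor() {
  return c10::make_intrusive<XlaLazyTensor>();
}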
Test Plan: Tested via test_ptltc on lazy_tensor_staging, TorchBench, and CI.
Reviewed By: ezyang
Differential Revision: D34481918
fbshipit-source-id: 01176b127df6b79039aa1bc57bc6da5505161f87
(cherry picked from commit 52b9ae4e22d2703d44c6436311d79d40bd62c6aa)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68027
This commit upstreams the BackendDevice class to master. BackendDevice is a
backend-specific representation of the actual hardware, for instance CPU, GPU,
or TPU.
This concept is important for backends like XLA, which need to distinguish the
actual hardware type from the c10::DeviceType::Lazy virtual device during both
IR construction and lowering.
Test Plan: ./build/bin/test_lazy --gtest_filter=BackendDeviceTest.*
Reviewed By: wconstab
Differential Revision: D32261838
Pulled By: alanwaketan
fbshipit-source-id: 579c3fc5f9da7847c887a383c6047e8ecb9cc5bc