Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71291
This commit removes torch::lazy::convertShapes since it is no longer used.
In addition, it replaces the numel logic within LTCTensorImpl.
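The exact replacement lives in the PR; as a rough illustration only, the delegation pattern reads like the sketch below, where ShapeLike and TensorImplSketch are hypothetical stand-ins for torch::lazy::Shape and LTCTensorImpl, not the actual upstream code.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for torch::lazy::Shape (illustrative only).
struct ShapeLike {
  std::vector<int64_t> sizes;
  int64_t numel() const {
    int64_t n = 1;
    for (int64_t s : sizes) {
      n *= s;
    }
    return n;
  }
};

// Sketch of the idea behind the LTCTensorImpl change: numel() forwards
// to the shape's element count instead of recomputing it inline.
struct TensorImplSketch {
  ShapeLike shape;
  int64_t numel() const { return shape.numel(); }
};
```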
Test Plan:
./build/bin/test_lazy
CI in lazy_tensor_staging branch
Reviewed By: wconstab
Differential Revision: D33575084
Pulled By: alanwaketan
fbshipit-source-id: b104ef39fd552822e1f4069eab2cb942d48423a6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68314
Add a convenience method to lazy::Shape for counting the number of elements (by multiplying out the dimensions). Tensor already exposes numel() as a method, and as other lazy tensor shape utilities switch to ATen shape inference, they need element counts as well.
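A minimal sketch of such a convenience, assuming the shape stores its dimensions as a vector of int64_t; ShapeSketch and its members are illustrative names rather than the upstream signatures.

```cpp
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Illustrative shape type; the real torch::lazy::Shape also carries a
// scalar type and other metadata.
struct ShapeSketch {
  std::vector<int64_t> sizes;

  // Number of elements: the product of all dimensions. An empty size
  // list (a scalar) yields 1, matching Tensor::numel() semantics.
  int64_t numel() const {
    return std::accumulate(
        sizes.begin(), sizes.end(), int64_t{1}, std::multiplies<int64_t>());
  }
};
```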
Test Plan: add unit tests
Reviewed By: alanwaketan
Differential Revision: D32409138
fbshipit-source-id: 3ae725300f8826d38e45412f46501d5e5f776fb2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68027
This commit upstreams the BackendDevice class to master. It is a
backend-specific representation of the actual hardware, for instance CPU,
GPU, or TPU.
This concept is important for backends like XLA, which need to distinguish
the actual hardware type from the c10::DeviceType::Lazy virtual device
during both IR construction and lowering.
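As a hedged illustration of the concept (the names below are hypothetical, not the upstream definition), a BackendDevice pairs a backend-defined hardware type with a device ordinal, which lets a backend map the Lazy virtual device back to concrete hardware.

```cpp
#include <cstdint>

// Backend-defined hardware kind; each backend assigns its own meaning
// to the enum values (e.g. 0 = CPU, 1 = GPU, 2 = TPU).
struct BackendDeviceTypeSketch {
  int8_t type{0};
};

// Illustrative BackendDevice: hardware type plus device ordinal.
class BackendDeviceSketch {
 public:
  BackendDeviceSketch(BackendDeviceTypeSketch type, int64_t ordinal)
      : type_(type), ordinal_(ordinal) {}

  int8_t type() const { return type_.type; }
  int64_t ordinal() const { return ordinal_; }

 private:
  BackendDeviceTypeSketch type_;
  int64_t ordinal_;
};
```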
Test Plan: ./build/bin/test_lazy --gtest_filter=BackendDeviceTest.*
Reviewed By: wconstab
Differential Revision: D32261838
Pulled By: alanwaketan
fbshipit-source-id: 579c3fc5f9da7847c887a383c6047e8ecb9cc5bc