Summary:
Since `c10::ArrayRef` now supports `c10::ArrayRef<const T>`, let's restore `ComputePostOrder` to accept `const Node*` again, which is more suitable for the context of the given helpers.
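For illustration, a minimal self-contained sketch of an iterative post-order traversal over `const Node*` pointers; the toy `Node` struct and the `std::vector` standing in for `c10::ArrayRef` are assumptions here, not the actual PyTorch types or implementation:
```cpp
#include <cstddef>
#include <unordered_set>
#include <utility>
#include <vector>

// Toy IR node; the real lazy IR node type is much richer.
struct Node {
  std::vector<const Node*> operands;
};

// Iterative post-order over const node pointers; with c10::ArrayRef<const T>
// available, a helper like this can take c10::ArrayRef<const Node*> directly.
std::vector<const Node*> ComputePostOrder(const std::vector<const Node*>& roots) {
  std::vector<const Node*> post_order;
  std::unordered_set<const Node*> emitted;
  std::vector<std::pair<const Node*, size_t>> stack;  // (node, next operand index)
  for (const Node* root : roots) {
    if (emitted.count(root)) continue;
    stack.emplace_back(root, 0);
    while (!stack.empty()) {
      auto& frame = stack.back();
      if (frame.second < frame.first->operands.size()) {
        // Descend into the next unvisited operand.
        const Node* child = frame.first->operands[frame.second++];
        if (!emitted.count(child)) stack.emplace_back(child, 0);
      } else {
        // All operands emitted; emit this node and pop its frame.
        if (emitted.insert(frame.first).second) post_order.push_back(frame.first);
        stack.pop_back();
      }
    }
  }
  return post_order;
}
```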
Test Plan:
CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88773
Approved by: https://github.com/JackCaoG
Proposed solution for #76826
Basically adds a context that is only "active" while a mark step is running. Any backend can then use it to check whether it is within a mark step.
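A minimal sketch of the idea, using hypothetical names (`MarkStepScope`, `InMarkStepScope`, the thread-local flag) that are assumptions rather than the actual PyTorch API: an RAII scope marks the mark-step region, and a backend's compile path can consult it:
```cpp
#include <iostream>

namespace sketch {

thread_local bool g_in_mark_step = false;

// Scope object the mark_step entry point would install for its duration.
struct MarkStepScope {
  MarkStepScope() { g_in_mark_step = true; }
  ~MarkStepScope() { g_in_mark_step = false; }
};

bool InMarkStepScope() { return g_in_mark_step; }

// What a backend's compile hook might do with this information.
void Compile() {
  if (!InMarkStepScope()) {
    std::cerr << "[W] Compile outside of mark step\n";
  }
  // ... lower and compile the captured graph ...
}

}  // namespace sketch

int main() {
  sketch::Compile();              // warns: no mark step active
  {
    sketch::MarkStepScope scope;  // what mark_step() would install
    sketch::Compile();            // no warning
  }
}
```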
I've also added an example warning in the TS backend so that we now see the following:
```python
>>> import torch
>>> import torch._lazy
>>> import torch._lazy.ts_backend
>>> torch._lazy.ts_backend.init()
>>> a = torch.tensor([1, 2, 3, 4], device="lazy")
>>> b = torch.tensor([5, 6, 7, 8], device="lazy")
>>> c = a + b
>>> c
[W ts_backend_impl.cpp:187] Compile outside of mark step
tensor([ 6, 8, 10, 12], device='lazy:0')
>>> d = a * b
>>> torch._lazy.mark_step()
>>> d
tensor([ 5, 12, 21, 32], device='lazy:0')
```
Though it was added mainly as an example, and I'm happy to remove it if this warning is not desired.
Fixes #76826
CC: @wconstab @desertfire @henrytwo @ke1337
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76840
Approved by: https://github.com/desertfire
Next stage of breaking up https://github.com/pytorch/pytorch/pull/74710
IR builder class introduced to decouple the explicit usage of `TsNode` in core lazy tensors.
Requires https://github.com/pytorch/pytorch/pull/75324 to be merged in first.
**Background**
- there are ~5 special ops used in lazy core but defined as `: public {Backend}Node` (DeviceData, Expand, Scalar, ...)
- we currently require all nodes to derive from `{Backend}Node`, so that backends can make this assumption safely
- it is hard to have shared IR classes in `core/` because they depend on `Node`
**Motivation**
1. avoid copy-paste of "special" node classes for each backend
2. in general decouple and remove all dependencies that LTC has on the TS backend
**Summary of changes**
- new `IRBuilder` interface that knows how to make the 5 special ops (rough sketch after this list)
- move 'special' node classes to `ts_backend/`
- implement TSIRBuilder that makes the special TS Nodes
- new backend interface API to get the IRBuilder
- update core code to call the builder
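A rough sketch of the shape of the change, under assumed, heavily trimmed signatures (the stand-in `Node`, the builder method names, and the TS node types below are illustrative, not the real headers):
```cpp
#include <memory>
#include <string>

struct Node { virtual ~Node() = default; };  // stand-in for the lazy IR node type
using NodePtr = std::shared_ptr<Node>;

// Interface owned by core lazy tensors: one factory per "special" op.
struct IrBuilder {
  virtual ~IrBuilder() = default;
  virtual NodePtr MakeDeviceData(const std::string& data) const = 0;
  virtual NodePtr MakeScalar(double value) const = 0;
  virtual NodePtr MakeExpand(const NodePtr& input) const = 0;
  // ... remaining special ops in the same style ...
};

// Backend-specific node classes, living under ts_backend/ in this sketch.
struct TsDeviceData : Node { std::string data; };
struct TsScalar : Node { double value; };
struct TsExpand : Node { NodePtr input; };

// Concrete builder the TS backend would hand back through its backend interface.
struct TsIrBuilder : IrBuilder {
  NodePtr MakeDeviceData(const std::string& data) const override {
    auto n = std::make_shared<TsDeviceData>(); n->data = data; return n;
  }
  NodePtr MakeScalar(double value) const override {
    auto n = std::make_shared<TsScalar>(); n->value = value; return n;
  }
  NodePtr MakeExpand(const NodePtr& input) const override {
    auto n = std::make_shared<TsExpand>(); n->input = input; return n;
  }
};
```
Core code would then build the special nodes through the `IrBuilder` it gets from the backend, instead of naming `TsNode` subclasses directly.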
CC: @wconstab @JackCaoG @henrytwo
Partially fixes #74628
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75433
Approved by: https://github.com/wconstab
Summary:
This PR enables Input/Output aliasing for Lazy Tensor Core. `SetUpAlias` is a virtual function that can be overridden in a vendor's custom `LoweringContext` implementation.
The return type of `LoweringContext::GetResultShape` has also been updated to return a `c10::optional` value, since `GetResultShape` isn't currently implemented for the TorchScript backend.
The changes here mirror the interface used by `torch_xla`: https://github.com/pytorch/xla/blob/master/torch_xla/csrc/tensor.cpp#L1548-L1549
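A hedged sketch of the two interface points, with approximate signatures (the parameter names/types and the stand-in `Shape` are assumptions, not the actual `LoweringContext` header):
```cpp
#include <cstddef>
#include <optional>
#include <vector>

struct Shape {};  // stand-in for the real shape type

struct LoweringContext {
  virtual ~LoweringContext() = default;

  // Vendors may override this in their custom LoweringContext implementation
  // to declare that an output buffer aliases one of the input buffers.
  virtual void SetUpAlias(const std::vector<size_t>& output_index,
                          size_t param_number,
                          const std::vector<size_t>& param_index) {
    (void)output_index; (void)param_number; (void)param_index;
    // Default: aliasing not supported.
  }

  // Returns nullopt for backends (e.g. TorchScript) that do not implement it.
  virtual std::optional<Shape> GetResultShape(size_t index) const {
    (void)index;
    return std::nullopt;
  }
};
```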
cc: antoniojkim ke1337 wconstab silvasean
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75828
Reviewed By: Krovatkin
Differential Revision: D35952593
Pulled By: wconstab
fbshipit-source-id: e20b11e44e0e1beda1b1c47aa3a8b611afd97b7f
(cherry picked from commit bcbc9ef01ef8eb84667e5c42edc10d38d5d78395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67927
- `BackendData` - represents 'tensor data' in opaque backend storage
- `LoweringContext` - interface for performing backend-specific IR lowering
- `BackendImplInterface` - interface for lazy tensor backends to implement
- Reorgs backend-related files into a `lazy/backend` subdir
Includes a few small fixes that were made on lazy_tensor_staging but need to be back-ported to master. A rough sketch of how these pieces fit together follows.
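This is a very rough, self-contained sketch with hypothetical, heavily trimmed signatures; the method names here are assumptions and are not guaranteed to match the real interfaces under the `lazy/backend` subdir:
```cpp
#include <memory>
#include <string>

struct Node { virtual ~Node() = default; };  // stand-in for the lazy IR node type

// Opaque handle to tensor data living in backend-specific storage.
struct BackendData {
  virtual ~BackendData() = default;
  virtual std::string device() const = 0;
};
using BackendDataPtr = std::shared_ptr<BackendData>;

// Per-graph object that lowers lazy IR nodes into a backend computation.
struct LoweringContext {
  virtual ~LoweringContext() = default;
  virtual void AddResult(const Node* node) = 0;
};

// Entry point a lazy tensor backend implements and registers with core.
struct BackendImplInterface {
  virtual ~BackendImplInterface() = default;
  virtual BackendDataPtr MakeDataFromScalar(double value,
                                            const std::string& device) = 0;
  virtual std::unique_ptr<LoweringContext> CreateLoweringContext(
      const std::string& device) = 0;
};
```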
Test Plan: used by lazy_tensor_staging branch
Reviewed By: desertfire
Differential Revision: D32142032
fbshipit-source-id: 828c717bcd0d511876e64ad209b50f7bfb10cec5