This PR re-implements pin memory to get rid of the optional `device` argument and make all related APIs device-agnostic. We add two new abstract APIs to [AcceleratorHooksInterface](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/detail/AcceleratorHooksInterface.h#L12) and redefine pin memory as: "memory is always pinned for the current accelerator device". In detail, pin_memory/is_pinned use [getAcceleratorHooksInterface](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/Context.h#L61) to obtain the appropriate device and invoke the corresponding overridden interfaces, instead of going through BackendSelect and then dispatching to CUDA or another specific backend's implementation.
Note: new backends that want to implement and use pin memory only need to inherit from AcceleratorHooksInterface and override the `isPinnedPtr` and `getPinnedMemoryAllocator` methods.
Additional context: to avoid breaking backward compatibility, this PR preserves the `device` arg of the related APIs and emits a deprecation warning if it is passed. A follow-up PR will update all PyTorch callers (`Tensor.is_pinned()`, `Tensor.pin_memory()`, ...) to stop passing this arg, and the `device` arg will eventually be removed.
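As a quick usage-level sketch (assuming a build where an accelerator such as CUDA is available), the device-agnostic behavior looks roughly like this:

```python
import torch

x = torch.randn(4)

# pin_memory()/is_pinned() now target the current accelerator; no `device` argument is needed.
y = x.pin_memory()
print(y.is_pinned())  # True when the accelerator provides a pinned-memory allocator

# Passing `device` still works during the deprecation period, but emits a warning per this PR.
z = x.pin_memory(device="cuda")
```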
Relates #124908
Relates #14560
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126376
Approved by: https://github.com/albanD
1) Skip undefined tensors in the cpu fallback when calling _copy_from_and_resize;
2) Modify the to_cpu function to support optional tensors;
3) Copy back to the original optional tensor when alias_info isWrite is true.
@ezyang @bdhirsh
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130237
Approved by: https://github.com/ezyang
1) Add the operand and get_dim_names APIs;
2) Set will_resize to true when the output tensor is undefined;
3) Add abs_stub for the dummy device and perform the calculation on the cpu device;
4) Support strided copies for the dummy device.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120792
Approved by: https://github.com/ezyang
Fixes https://github.com/pytorch/pytorch/issues/102970. See the comment [here](https://github.com/pytorch/pytorch/issues/102970#issuecomment-1577223773) for details.
We normally treat "outputs that alias inputs" specially in AOTAutograd, by replaying the views at runtime instead of baking them into the graph. For views that come out of custom autograd functions, though, we can't do that view replay, since it would clobber the backward function the user specified in their custom autograd.Function.
In this PR, I distinguish between "aliased outputs that are normal views" and "aliased outputs that are views coming from an autograd.Function call" by checking the output's `.grad_fn` field to see if it inherits from our custom CBackward function class. I also added a new `OutputType` enum value that we effectively treat the "normal" way (the same way we treat ordinary, non-aliased outputs). The new enum value is mostly for debugging, so we can print it and know that the graph had aliased outputs from a custom autograd.Function.
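For illustration, here is a minimal example of the case being handled: a hypothetical custom autograd.Function whose output aliases its input. On the Python side, the output's `grad_fn` inherits from `torch.autograd.function.BackwardCFunction`, which is the kind of check described above.

```python
import torch

class ScaledView(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.input_shape = x.shape
        return x.view(-1)  # the output aliases the input

    @staticmethod
    def backward(ctx, grad_out):
        # user-defined backward that view-replay would otherwise clobber
        return grad_out.view(ctx.input_shape) * 2

x = torch.randn(2, 3, requires_grad=True)
out = ScaledView.apply(x)
print(isinstance(out.grad_fn, torch.autograd.function.BackwardCFunction))  # True
out.sum().backward()
print(x.grad)  # all twos, produced by the custom backward
```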
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102992
Approved by: https://github.com/ezyang, https://github.com/zou3519
As the title says, this adds context support for the custom device, along with a test case.
In the future, we may want to refactor these hooks for different devices to unify the APIs; would you agree with this idea? @albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105056
Approved by: https://github.com/albanD
Add serialization of backend metadata to tensor serialization, implemented through custom registration functions.
In #97429, the BackendMeta structure was added to TensorImpl, and we think this information may also need to be serialized for custom backends.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99808
Approved by: https://github.com/ezyang, https://github.com/huydhn
In the scenario where users subclass StorageImpl to implement their own variants, the current storage creation path cannot correctly create storage objects.
Following the registration approach used for Allocator, this extends StorageImpl creation so that users can register their own custom StorageImpl constructors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100237
Approved by: https://github.com/albanD
Fixes #99326
Support storage `pin_memory` and `is_pinned` for custom devices by calling the dispatched tensor operations.
@ezyang this PR is what we discussed in issue #99326; would you please take a moment to review it? Thanks.
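A rough sketch of the idea (on a stock build this needs an accelerator such as CUDA to provide the pinned-memory allocator; with a PrivateUse1 backend, the backend's own `pin_memory` kernel is dispatched to instead):

```python
import torch

s = torch.UntypedStorage(16)  # 16-byte CPU storage
print(s.is_pinned())          # False

# Storage pinning is routed through the dispatched tensor op, so a backend
# that registers a pin_memory kernel is picked up automatically.
pinned = s.pin_memory()
print(pinned.is_pinned())     # True
```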
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99712
Approved by: https://github.com/ezyang
Currently, storage only supports a subset of backends. We want storage to be creatable on a custom backend via the PrivateUse1 key.
This also provides easy, automatic generation of storage-related attributes: when the user registers a new backend, the corresponding methods and attributes are generated automatically.
Run the following:
`torch.utils.rename_privateuse1_backend('foo')`
`torch.utils.generate_storage_for_privateuse1_backend()`
Then the following methods and attributes become available:
`torch.TypedStorage.is_foo`
`torch.TypedStorage.foo()`
`torch.UntypedStorage.is_foo`
`torch.UntypedStorage.foo()`
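A small sketch of that workflow, using the names given above (a real 'foo' PrivateUse1 extension must already be loaded for the generated device methods to actually move data):

```python
import torch

torch.utils.rename_privateuse1_backend('foo')
torch.utils.generate_storage_for_privateuse1_backend()

s = torch.UntypedStorage(8)
print(s.is_foo)    # generated attribute: is this storage on the 'foo' device?
# s_foo = s.foo()  # generated method: move the storage to the 'foo' device (needs the real backend)
```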
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98478
Approved by: https://github.com/albanD