Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57985
Fixes https://github.com/pytorch/pytorch/issues/57756
This PR introduces a new `pyobj_interpreter_` field on TensorImpl that tracks which Python interpreter (if any) owns the TensorImpl. This makes it illegal to bind a TensorImpl from multiple Python interpreters, and means we can now store the PyObject pointer directly on TensorImpl even in the presence of multiple Python interpreters, as is the case in torchdeploy. This is a necessary step for PyObject preservation, which cannot be easily implemented when there are multiple Python interpreters.
Although the PR is not that long, there is a very subtle portion of the implementation devoted to ensuring that the tagging process is thread-safe, since multiple threads can concurrently try to tag a PyObject. See Note [Python interpreter tag] and Note [Memory ordering on Python interpreter tag] for a detailed discussion of how this is handled. Please check this code carefully in review; I did not torture test the multithreaded paths in any meaningful way.
In a follow-up PR, I will pack the interpreter and PyObject fields into a single atomic word on 64-bit platforms.
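As a rough illustration of the tagging scheme described above (a simplified sketch, not the actual TensorImpl code; the type and method names here are placeholders):
```cpp
#include <atomic>

struct PyInterpreter;  // opaque handle to one Python interpreter

struct TensorImplSketch {
  // Starts untagged; the first interpreter to touch the tensor claims it.
  std::atomic<PyInterpreter*> pyobj_interpreter_{nullptr};

  // Returns true if `self` owns the tag after the call (either it just won
  // the race or it had already tagged this TensorImpl earlier).
  bool try_tag(PyInterpreter* self) {
    PyInterpreter* expected = nullptr;
    // acq_rel: the winner publishes its writes; losers observe who won.
    if (pyobj_interpreter_.compare_exchange_strong(
            expected, self, std::memory_order_acq_rel)) {
      return true;
    }
    return expected == self;  // false -> tagged by another interpreter (illegal to bind)
  }
};
```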
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: wconstab
Differential Revision: D28390242
Pulled By: ezyang
fbshipit-source-id: a6d9b244ee6b9c7209e1ed185e336297848e3017
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58412
Second try: avoid the ctor/dtor exception handling this time, since it is pointless if the rethrow will still call terminate(), and it upsets -Werror=terminate.
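For context, a rough sketch of the pattern being avoided (a hypothetical guard type, not code from this change): destructors are noexcept by default, so an exception escaping one calls std::terminate(), which makes a catch-and-rethrow in a dtor both pointless and exactly what -Werror=terminate complains about.
```cpp
struct InterpreterGuard {  // hypothetical RAII type, for illustration only
  ~InterpreterGuard() {
    try {
      // teardown work that may throw
    } catch (...) {
      // Rethrowing cannot propagate past a noexcept destructor; the
      // program terminates anyway, so this handler buys nothing and is
      // the pattern -Werror=terminate flags.
      throw;
    }
  }
};
```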
Original commit changeset: 1775bed18269
Test Plan: existing unit tests and CI
Reviewed By: suo
Differential Revision: D28478588
fbshipit-source-id: 84191cecc3ef52e23f11bfea07bbb9773ebc5df4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58192
Exceptions thrown by deploy internals need to be sanitized
for application safety.
See the comment in deploy.h for a detailed explanation.
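A minimal sketch of the sanitization idea (illustrative names only; the real logic and rationale live in deploy.h): catch whatever the interpreter internals throw at the API boundary and rethrow a plain error type the embedding application can safely handle.
```cpp
#include <stdexcept>
#include <string>

// Hypothetical wrapper illustrating the boundary translation.
template <typename Fn>
auto callSanitized(Fn&& fn) -> decltype(fn()) {
  try {
    return fn();
  } catch (const std::exception& e) {
    // Strip internal exception types; callers only ever see a
    // std::runtime_error carrying a descriptive message.
    throw std::runtime_error(std::string("torch::deploy: ") + e.what());
  } catch (...) {
    throw std::runtime_error("torch::deploy: unknown internal error");
  }
}
```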
Test Plan: Added unit test
Reviewed By: suo
Differential Revision: D28371127
fbshipit-source-id: c0ced2f194424a394c5852bd4ab5cb41b0f4e87b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57748
To be used by PyTorchPredictor integration for deploy.
Original commit changeset: 4d41efc733b2
Test Plan: tested via new unit tests
Reviewed By: suo
Differential Revision: D28258525
fbshipit-source-id: 8b9436e47501d7c1c16e79909e668100f825711e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57484
To be used by PyTorchPredictor integration for deploy.
Test Plan: tested via new unit tests
Reviewed By: suo
Differential Revision: D28154522
fbshipit-source-id: 5ba57a8d7f01686180e6fd47663635ec3ab2120d
Summary:
In my last PR I missed the CUDA and distributed folders; fixing this now.
This change is autogenerated by `python tool/clang_tidy.py -s`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57235
Reviewed By: janeyx99
Differential Revision: D28084444
Pulled By: malfet
fbshipit-source-id: bf222f69ee90c7872c3cb0931e8cdb84f0cb3cda
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53670
This puts deploy into the torch::deploy namespace. It also renames some
objects to better match their behavior:
* PythonObject -> Obj: in the future it will refer to either a Python object or a handle to a script object, so rename it torch::deploy::Obj to be generic.
* MovableObject -> ReplicatedObj: to prevent confusion with the unrelated "std::move", and to note that we are replicating this object across interpreters.
Test Plan: Imported from OSS
Reviewed By: wconstab
Differential Revision: D26932131
Pulled By: zdevito
fbshipit-source-id: 8041d6c5b2041a7c3192c1a17d2edb38112a89f3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51754
This API allows you to manage multiple Python interpreters in a single
process to deploy PyTorch models packaged with torch.package.
torch/csrc/deploy/deploy.h contains the API definition.
torch/csrc/deploy/test_deploy.cpp has some examples (a rough usage sketch also follows the notes below).
Notes:
* A mutex is added to PyTorchStreamReader to make it safe to use from multiple threads at once.
* USE_DEPLOY is only true for the special libtorch_deployinterpreter.so library; when enabled, we use a hash table to maintain the PyObject <-> at::Tensor mapping rather than the internal pointer in Tensor, since more than one interpreter may hold a reference to the tensor.
* serialization.py gains some additional functions for creating pickle objects while keeping storages in memory, used for transferring tensors between interpreters.
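A hypothetical end-to-end usage sketch. ReplicatedObj is named in the rename PR above, but the other type and method names (InterpreterManager, Package, loadPackage, loadPickle) and the call signature are assumptions for illustration only; torch/csrc/deploy/test_deploy.cpp has the authoritative examples.
```cpp
#include <torch/csrc/deploy/deploy.h>  // API definition mentioned above
#include <torch/torch.h>
#include <iostream>

int main() {
  // Spin up several independent Python interpreters inside this process.
  // Class/method names below are assumptions; see test_deploy.cpp.
  torch::deploy::InterpreterManager manager(4);

  // Load a model that was packaged with torch.package ("my_model.pt" is a
  // placeholder path).
  torch::deploy::Package package = manager.loadPackage("my_model.pt");
  torch::deploy::ReplicatedObj model = package.loadPickle("model", "model.pkl");

  // The object is replicated across interpreters, so any free interpreter
  // can service the call.
  at::IValue output = model({torch::ones({1, 10})});
  std::cout << output.toTensor() << std::endl;
  return 0;
}
```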
Test Plan: Imported from OSS
Reviewed By: wconstab
Differential Revision: D26329468
Pulled By: zdevito
fbshipit-source-id: d75f4ebb9a27f1d911179d9996041bcb3ca04a07