pytorch/torch/csrc/utils
Edward Z. Yang f7365eca90 Add unbacked symints support; item works now (#90624)
The big idea is to add `create_unbacked_symfloat` and `create_unbacked_symint` to ShapeEnv, allowing you to allocate symbolic floats/ints corresponding to data you don't know about at compile time. Then, instead of immediately erroring out when you try to call local_scalar_dense on a FakeTensor, we instead create a fresh symint/symfloat and return that.
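The allocation idea above can be sketched in a few lines of plain Python. This is a hypothetical mock, not the actual ShapeEnv from `torch.fx.experimental.symbolic_shapes`; it only illustrates what "allocate a fresh symbol instead of erroring" means:

```python
import itertools

class ShapeEnv:
    """Toy stand-in for the real ShapeEnv, showing only unbacked allocation."""

    def __init__(self):
        self._counter = itertools.count()

    def create_unbacked_symint(self):
        # "Unbacked" = no concrete hint value backs this symbol; its value
        # is genuinely unknown at compile time (e.g., it came from .item()
        # on a FakeTensor, whose data does not exist).
        return f"u{next(self._counter)}"

env = ShapeEnv()
s0 = env.create_unbacked_symint()  # fresh symbol for one .item() call
s1 = env.create_unbacked_symint()  # a second call gets a distinct symbol
```

Each call hands back a distinct symbol, so two `.item()` calls on the same fake tensor are not conflated.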

There are a bunch of odds and ends that need to be handled:

* A number of `numel` calls converted to `sym_numel`
* When we finally return from item(), we need to ensure we actually produce a SymInt/SymFloat when appropriate. The previous binding code assumed that you would always get a plain Python number. I add a pybind11 binding for Scalar (to PyObject only) and refactor the code to use it. There is some trickiness: you are NOT allowed to go through c10::SymInt if there isn't actually any SymInt involved. See comment.
* One of our unit tests tripped an implicit data-dependent access, which occurs when you pass a Tensor as an argument to a sizes parameter; this path is also converted to support symbolic shapes.
* We now support tracking bare SymInt/SymFloat returns in proxy tensor mode (this was already in symbolic-shapes branch)
* Whenever we allocate an unbacked symint, we record the stack trace it was allocated at. This trace gets printed when you attempt data-dependent access on the symint (e.g., when you try to guard on it).
* Subtlety: unbacked symints are not necessarily > 1. I added a test for this.
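The last two bullets (recorded allocation stacks, and no `> 1` assumption) can be sketched together. Again a hypothetical mock, not PyTorch's real SymInt: it records the stack at allocation time and raises on any guard, replaying that stack in the error:

```python
import traceback

class UnbackedSymInt:
    """Toy unbacked symint: any guard on it is a data-dependent error."""

    def __init__(self, name):
        self.name = name
        # Record where this symbol was allocated; printed on guard failure
        # so the user can find the offending .item()/data-dependent call.
        self.alloc_stack = "".join(traceback.format_stack())

    def _guard(self, expr_desc):
        raise RuntimeError(
            f"data-dependent access on {self.name} ({expr_desc}); "
            f"allocated at:\n{self.alloc_stack}"
        )

    def __gt__(self, other):
        # Unlike ordinary sizes, we may NOT assume this symbol is > 1,
        # so even `s > 1` must be treated as a data-dependent guard.
        return self._guard(f"{self.name} > {other}")

s = UnbackedSymInt("u0")
err_msg = None
try:
    s > 1  # guarding on an unbacked symint fails loudly
except RuntimeError as e:
    err_msg = str(e)
```

The design point is that the error fires at *guard* time, not allocation time, but still points back to the allocation site.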

These unbacked symints are not very useful right now, as you will almost always immediately hit an error later when you try to guard on them. The next logical step is an assertion refinement system that lets ShapeEnv learn facts about unbacked symints, so it can do a better job of eliding unnecessary guards.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90624
Approved by: https://github.com/Skylion007, https://github.com/voznesenskym
2022-12-12 13:33:07 +00:00
auto_gil.h
byte_order.cpp
byte_order.h
cpp_stacktraces.cpp
cpp_stacktraces.h
cuda_enabled.h
cuda_lazy_init.cpp
cuda_lazy_init.h
disable_torch_function.cpp Revert "rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)" 2022-11-11 19:13:05 +00:00
disable_torch_function.h Revert "rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)" 2022-11-11 19:13:05 +00:00
init.cpp
init.h
invalid_arguments.cpp Fix a PyObject leak (#87608) 2022-10-24 23:55:13 +00:00
invalid_arguments.h
memory.h
nested.cpp Implement a constructor for nested_tensor that is similar to torch.tensor() (#88213) 2022-11-08 00:03:18 +00:00
nested.h Implement a constructor for nested_tensor that is similar to torch.tensor() (#88213) 2022-11-08 00:03:18 +00:00
numpy_stub.h
object_ptr.cpp
object_ptr.h
out_types.cpp
out_types.h
pybind.cpp Add unbacked symints support; item works now (#90624) 2022-12-12 13:33:07 +00:00
pybind.h Add unbacked symints support; item works now (#90624) 2022-12-12 13:33:07 +00:00
pycfunction_helpers.h
python_arg_parser.cpp Fix bugs found by static analysis (#85705) 2022-10-28 23:51:55 +00:00
python_arg_parser.h Add unbacked symints support; item works now (#90624) 2022-12-12 13:33:07 +00:00
python_compat.h Make functorch compilable with Py-3.11 (#85054) 2022-09-23 04:48:18 +00:00
python_dispatch.cpp Add crossref debug mode for functionalization, catches stride errors (#89498) 2022-11-23 04:18:25 +00:00
python_dispatch.h Make Python op registration work with torchdeploy/multipy (#87162) 2022-11-03 12:56:44 +00:00
python_numbers.h Add SymInt to Scalar (#84958) 2022-09-25 23:51:06 +00:00
python_scalars.h
python_strings.h
python_stub.h
python_symnode.cpp Unify SymIntNode and SymFloatNode into SymNode (#87817) 2022-10-27 20:56:02 +00:00
python_symnode.h Use standard __func__ macro in symbolic shape. (#89264) 2022-11-18 17:03:53 +00:00
python_torch_function_mode.h [Modes] refactor modes to only use a stack in cpp (#86458) 2022-10-21 19:18:23 +00:00
python_tuples.h
schema_info.cpp
schema_info.h
six.h
structseq.cpp
structseq.h
tensor_apply.cpp
tensor_apply.h
tensor_dtypes.cpp Revert "Add bits tensor types (#88594)" 2022-11-30 11:37:56 +00:00
tensor_dtypes.h
tensor_flatten.cpp
tensor_flatten.h
tensor_layouts.cpp
tensor_layouts.h
tensor_list.cpp
tensor_list.h
tensor_memoryformats.cpp Consistent compute numel/contiguous strategy with SymInts (#85858) 2022-09-30 21:26:34 +00:00
tensor_memoryformats.h Add tests for custom pybind type_casters (#89897) 2022-12-02 07:02:09 +00:00
tensor_new.cpp Symintify repeat_interleave.self_int (#89111) 2022-11-18 05:04:02 +00:00
tensor_new.h
tensor_numpy.cpp
tensor_numpy.h
tensor_qschemes.cpp
tensor_qschemes.h
tensor_types.cpp add XLA backend into tensor type strings (#86881) 2022-10-17 18:27:49 +00:00
tensor_types.h
throughput_benchmark-inl.h
throughput_benchmark.cpp
throughput_benchmark.h
torch_dispatch_mode.h [Modes] refactor modes to only use a stack in cpp (#86458) 2022-10-21 19:18:23 +00:00
variadic.cpp
variadic.h
verbose.cpp