pytorch/tools/autograd/templates
Laith Sakka 7cfd054075 [attempt 2] Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#157472)
Summary:
When we compute contiguity for a tensor with dynamic shapes we:
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any shape is unhinted, compute it symbolically.

sym_is_contiguous returns a SymBool, which can then either be evaluated, or have guard_or_false called
on it to avoid data-dependent errors.

ex:
 bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
is_contiguous_or_false is a helper function that wraps this pattern.

In this PR I only handle default contiguity; a follow-up will cover other memory formats such as channels_last.
We use this pattern in several locations in this PR to avoid DDEs (data-dependent errors).

Test Plan:
contbuild & OSS CI

Rollback Plan:

Reviewed By: malfet

Differential Revision: D77639021

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157472
Approved by: https://github.com/aorenste
2025-07-02 23:12:29 +00:00
ADInplaceOrViewType.cpp
annotated_fn_args.py.in
Functions.cpp functional compiled autograd (#144707) 2025-01-27 05:20:56 +00:00
Functions.h
python_enum_tag.cpp
python_fft_functions.cpp
python_functions.cpp
python_functions.h
python_linalg_functions.cpp
python_nested_functions.cpp
python_nn_functions.cpp [3/N] Fix clang-tidy warnings in python_variable_methods.cpp (#139248) 2024-10-31 03:29:19 +00:00
python_return_types.cpp
python_return_types.h
python_sparse_functions.cpp
python_special_functions.cpp
python_torch_functions.cpp
python_variable_methods.cpp [attempt 2] Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#157472) 2025-07-02 23:12:29 +00:00
TraceType.cpp
variable_factories.h
VariableType.cpp
VariableType.h Remove ConstQuantizerPtr in torchgen (#142375) 2024-12-10 02:37:01 +00:00
ViewFuncs.cpp
ViewFuncs.h