Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-06 12:20:52 +01:00
Summary: When we compute contiguity for a tensor with dynamic shapes, we:

1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.

`sym_is_contiguous` returns a `SymBool` that is then either evaluated, or `guard_or_false` can be called on it to avoid data-dependent errors. For example:

    bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);

`is_contiguous_or_false` is a helper function that does this. In this PR I only handle default contiguity; a follow-up will cover other memory formats such as `channels_last`. We use this pattern in several locations in this PR to avoid data-dependent errors (DDEs).

Test Plan: contbuild & OSS CI

Rollback Plan:

Reviewed By: malfet

Differential Revision: D77639021

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157472

Approved by: https://github.com/aorenste
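The fallback behavior described above can be sketched with a small model. This is not the real PyTorch implementation (which lives in C++ and in `torch.fx.experimental.symbolic_shapes`); the `SymBool` class and `sym_is_contiguous` function below are simplified stand-ins that only illustrate the semantics: when a symbolic boolean has a concrete hint, `guard_or_false` returns it, and when the value is data-dependent (unhinted), it conservatively returns `False` instead of raising an error.

```python
class SymBool:
    """Toy model of a symbolic boolean whose concrete value may be unknown."""

    def __init__(self, value=None):
        # None models an unhinted, data-dependent value.
        self._value = value

    def guard_or_false(self):
        # If the value is known, use it; otherwise fall back to False
        # instead of raising a data-dependent error.
        return self._value if self._value is not None else False


def sym_is_contiguous(hint):
    # hint is True/False when shapes are hinted, None when purely symbolic.
    return SymBool(hint)


# Hinted contiguous tensor: the guard resolves to the real answer.
assert sym_is_contiguous(True).guard_or_false() is True
# Unhinted (data-dependent) case: falls back to False, no error raised.
assert sym_is_contiguous(None).guard_or_false() is False
```

The design choice this models: code that only needs contiguity as an optimization hint can safely treat "unknown" as "not contiguous" and take the generic path, rather than forcing a guard that might fail on unbacked symbolic shapes.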
Files in this directory:

- ADInplaceOrViewType.cpp
- annotated_fn_args.py.in
- Functions.cpp
- Functions.h
- python_enum_tag.cpp
- python_fft_functions.cpp
- python_functions.cpp
- python_functions.h
- python_linalg_functions.cpp
- python_nested_functions.cpp
- python_nn_functions.cpp
- python_return_types.cpp
- python_return_types.h
- python_sparse_functions.cpp
- python_special_functions.cpp
- python_torch_functions.cpp
- python_variable_methods.cpp
- TraceType.cpp
- variable_factories.h
- VariableType.cpp
- VariableType.h
- ViewFuncs.cpp
- ViewFuncs.h