pytorch/torch
Laith Sakka 39df901b2a introduce definitely_contiguous and use it for reshape and tensor meta data computation. (#153432)
When a tensor has unbacked symbols, its sizes and strides can be general enough to represent both contiguous and non-contiguous tensors.
In that case we can't really evaluate is_contiguous. Many places in the code base check is_contiguous to take a fast path, but the general path usually works for both contiguous and non-contiguous tensors; in those cases we probably want
to use the definitely_contiguous API instead.

This is applied to reshape in this PR and also to tensor metadata computation: the metadata now has an attribute saying the tensor is contiguous only when it is always contiguous, i.e. we store it only if definitely_contiguous is true.
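
A minimal sketch of the idea described above, using plain Python ints with None standing in for unbacked symbols. The real helper from #153432 operates on symbolic sizes/strides inside torch internals; the function name and layout here are illustrative assumptions, not the actual implementation.

from typing import Optional, Sequence

def definitely_contiguous_sketch(
    sizes: Sequence[Optional[int]], strides: Sequence[Optional[int]]
) -> bool:
    # Return True only when contiguity is provable; None means "unbacked",
    # i.e. the value cannot be decided statically.
    expected = 1
    for size, stride in zip(reversed(sizes), reversed(strides)):
        if size is None or stride is None:
            return False  # can't decide -> not *definitely* contiguous
        if size == 1:
            continue  # size-1 dims never affect contiguity
        if stride != expected:
            return False
        expected *= size
    return True

# Fast-path usage: take the cheap path only when contiguity is provable,
# otherwise fall back to the general path that handles both layouts.
print(definitely_contiguous_sketch([2, 3], [3, 1]))        # True
print(definitely_contiguous_sketch([2, None], [None, 1]))  # False (unbacked)

Unlike is_contiguous, which would need to guard on the unbacked values, this check simply answers False when it cannot prove contiguity, so callers fall through to the general path without raising a data-dependent error.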

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153432
Approved by: https://github.com/bobrenjc93
2025-05-28 03:41:26 +00:00
_awaits
_C [BE][CI][Easy] Run lintrunner on generated .pyi stub files (#150732) 2025-05-27 14:58:02 +00:00
_C_flatbuffer
_custom_op
_decomp Fix torch.isin decomposition for scalar inputs (#153216) 2025-05-09 20:26:25 +00:00
_dispatch
_dynamo [dynamo] dynamic gb_type -> static gb_type (#154435) 2025-05-28 03:14:26 +00:00
_export PYFMT lint grandfathered files 1 (#154261) 2025-05-25 17:36:14 +00:00
_functorch Don't CSE unbacked nodes (#154387) 2025-05-28 02:21:56 +00:00
_higher_order_ops [export][cond] support merging constant ints as unbacked symint (#152742) 2025-05-22 17:25:38 +00:00
_inductor introduce definitely_contiguous and use it for reshape and tensor meta data computation. (#153432) 2025-05-28 03:41:26 +00:00
_lazy
_library Add torch._C.Tag.needs_contiguous_strides (#152859) 2025-05-08 04:49:59 +00:00
_logging Add flag _metrics_log_runtime to disable runtime metric logging by default (#153506) 2025-05-22 01:02:11 +00:00
_numpy Fix broken URLs (#152237) 2025-04-27 09:56:42 +00:00
_prims
_prims_common introduce definitely_contiguous and use it for reshape and tensor meta data computation. (#153432) 2025-05-28 03:41:26 +00:00
_refs introduce definitely_contiguous and use it for reshape and tensor meta data computation. (#153432) 2025-05-28 03:41:26 +00:00
_strobelight
_subclasses Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) 2025-05-27 14:10:00 +00:00
_vendor
accelerator Add torch.accelerator.device_index as accelerator's device switch context (#148864) 2025-04-25 09:45:25 +00:00
amp [Intel GPU] skip a cuda api call in amp to save some host overhead on xpu (#151111) 2025-04-13 06:37:07 +00:00
ao Update to using mypy 1.15 (#154054) 2025-05-24 04:30:57 +00:00
autograd Add memory reporting for XPU to Memory Profiler (#152842) 2025-05-21 01:19:19 +00:00
backends Revert "refine fp32 precision api (#125888)" 2025-05-11 00:35:46 +00:00
compiler add sticky cache pgo (#154418) 2025-05-27 16:40:18 +00:00
contrib
cpu [device_mesh] improve device selection logic (#150897) 2025-05-14 06:29:16 +00:00
csrc Fix the Problems About Defining Static Variable in Inline Function (#147095) 2025-05-28 02:47:16 +00:00
cuda Update to using mypy 1.15 (#154054) 2025-05-24 04:30:57 +00:00
distributed Make torch importable if compiled without TensorPipe (#154382) 2025-05-27 18:13:38 +00:00
distributions Use property instead of ClassVar for Uniform.arg_constraints and Wishart.arg_constraints (#154361) 2025-05-26 17:48:28 +00:00
export [export] Move PT2ArchiveWriter/Reader to torch/export (#153795) 2025-05-23 19:04:36 +00:00
fft
func
futures Render Example: and not Example:: in docs (#153978) 2025-05-21 01:03:26 +00:00
fx introduce definitely_contiguous and use it for reshape and tensor meta data computation. (#153432) 2025-05-28 03:41:26 +00:00
jit Fix broken URLs (#152237) 2025-04-27 09:56:42 +00:00
legacy
lib
linalg Fix broken URLs (#152237) 2025-04-27 09:56:42 +00:00
masked Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) 2025-05-27 14:10:00 +00:00
monitor
mps
mtia Add getDeviceProperties api to torch mtia device (#153577) 2025-05-27 11:55:58 +00:00
multiprocessing
nativert [nativert] Move file_util to pytorch core (#153162) 2025-05-27 03:42:47 +00:00
nested Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) 2025-05-27 14:10:00 +00:00
nn [BE][CI][Easy] Run lintrunner on generated .pyi stub files (#150732) 2025-05-27 14:58:02 +00:00
onnx [ONNX] Update onnx to 1.18 (#153746) 2025-05-25 20:58:47 +00:00
optim Fix lr_scheduler unexpectedly calls step() when init argument last_epoch is larger than -1 (#149312) 2025-05-22 08:42:37 +00:00
package [BE]: Enable ruff YTT linter for Python version checks (#153547) 2025-05-14 21:09:16 +00:00
profiler [profiler][retry] don't disable CUPTI_LAZY_REINIT for cuda >= 12.6 (#151124) 2025-04-15 16:11:49 +00:00
quantization
signal
sparse Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) 2025-05-27 14:10:00 +00:00
special Render Example: and not Example:: in docs (#153978) 2025-05-21 01:03:26 +00:00
testing [BE][CI][Easy] Run lintrunner on generated .pyi stub files (#150732) 2025-05-27 14:58:02 +00:00
utils [BE][CI][Easy] Run lintrunner on generated .pyi stub files (#150732) 2025-05-27 14:58:02 +00:00
xpu Correct torch.xpu.is_bf16_supported return False if no XPU detected (#152317) 2025-05-06 10:03:17 +00:00
__config__.py
__future__.py
__init__.py Add missing docstring for sym_ite (#154201) 2025-05-26 15:59:21 +00:00
_appdirs.py Fix broken URLs (#152237) 2025-04-27 09:56:42 +00:00
_classes.py
_compile.py
_custom_ops.py Render Example: and not Example:: in docs (#153978) 2025-05-21 01:03:26 +00:00
_deploy.py
_environment.py
_guards.py Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)" 2025-05-16 08:29:26 +00:00
_jit_internal.py Revert "[BE]: Type previously untyped decorators (#153726)" 2025-05-22 16:49:08 +00:00
_linalg_utils.py
_lobpcg.py Fixed rerr computation in lobpcg (#152789) 2025-05-08 12:22:31 +00:00
_lowrank.py
_meta_registrations.py [Intel GPU][Inductor] Fallback embedding_dense_backward on XPU (#151637) 2025-05-19 02:19:37 +00:00
_namedtensor_internals.py
_ops.py Revert "Improve torch.ops typing (#153558)" 2025-05-19 23:32:36 +00:00
_python_dispatcher.py
_size_docs.py Render Example: and not Example:: in docs (#153978) 2025-05-21 01:03:26 +00:00
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py Render Example: and not Example:: in docs (#153978) 2025-05-21 01:03:26 +00:00
_tensor_str.py
_tensor.py Avoid triggering ignored requires_grad warning in our code (#152686) 2025-05-05 23:56:40 +00:00
_thread_safe_fork.py
_torch_docs.py Render Example: and not Example:: in docs (#153978) 2025-05-21 01:03:26 +00:00
_utils_internal.py [reland] Add graph module runtime asserts to AOTI (#153182) 2025-05-09 22:56:19 +00:00
_utils.py Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) 2025-05-27 14:10:00 +00:00
_VF.py
_vmap_internals.py Fix broken URLs (#152237) 2025-04-27 09:56:42 +00:00
_weights_only_unpickler.py
CMakeLists.txt Refactor torch/utils/data/datapipes/gen_pyi.py with torchgen (#150626) 2025-05-17 06:21:41 +00:00
custom_class_detail.h
custom_class.h
extension.h
functional.py Optimize cdist param description (#151178) 2025-04-14 13:53:10 +00:00
header_only_apis.txt Add torch/header_only_apis.txt and enforce they're tested (#153635) 2025-05-20 23:42:24 +00:00
hub.py
library.h Overload Library::def rather than templating it (#151626) 2025-04-18 22:51:16 +00:00
library.py Render Example: and not Example:: in docs (#153978) 2025-05-21 01:03:26 +00:00
overrides.py [CUDA][cuBLAS] Aten GEMM overload for FP32 output from FP16/BF16 inputs (#150812) 2025-04-18 01:53:26 +00:00
py.typed
quasirandom.py
random.py Update description for torch.random.fork_rng (#151881) 2025-04-23 16:59:29 +00:00
return_types.py
script.h
serialization.py Update serialization docs (#153631) 2025-05-19 20:22:07 +00:00
storage.py
torch_version.py
types.py
version.py.tpl