pytorch/torch
Nikita Shulga 2bc6f329b2 Make PyTorch argparser understand complex (#129580)
It already understands `float` and `int`, so why not `complex`?

Test plan: `python -c "import torch;print(torch.rand(3, dtype=complex))"`

Fixes https://github.com/pytorch/pytorch/issues/126837
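The change described above lets the argument parser accept Python's builtin `complex` wherever a dtype is expected, as the test plan shows. A minimal sketch of such a builtin-type-to-dtype mapping (the table and function names here are illustrative, not PyTorch's actual internals):

```python
# Illustrative sketch of mapping builtin Python numeric types to dtype
# names, by analogy with the argparser change in #129580.
# NOTE: _PY_TYPE_TO_DTYPE and resolve_dtype are hypothetical names,
# not part of PyTorch.
_PY_TYPE_TO_DTYPE = {
    bool: "torch.bool",
    int: "torch.int64",
    float: "torch.float32",
    complex: "torch.complex64",  # the newly supported case: builtin `complex`
}

def resolve_dtype(dtype):
    """Accept either a dtype string or a builtin Python numeric type."""
    if isinstance(dtype, str):
        return dtype
    try:
        return _PY_TYPE_TO_DTYPE[dtype]
    except KeyError:
        raise TypeError(f"unsupported dtype: {dtype!r}") from None

print(resolve_dtype(complex))  # torch.complex64
print(resolve_dtype(float))    # torch.float32
```

With this mapping in place, a call like `torch.rand(3, dtype=complex)` can resolve the builtin type to a concrete complex dtype instead of raising a TypeError.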

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129580
Approved by: https://github.com/albanD
2024-06-29 01:21:12 +00:00
_awaits
_C Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)" 2024-06-29 00:44:24 +00:00
_C_flatbuffer
_custom_op Rename impl_abstract to register_fake, part 2/2 (#123938) 2024-06-14 14:37:24 +00:00
_decomp Add decomposition for slice_scatter (#123744) 2024-06-28 17:02:10 +00:00
_dispatch
_dynamo Revert "[cond] inlining into one of the branches when pred is a python constant (#128709)" 2024-06-29 01:03:55 +00:00
_export Prototype for export_for_training (#129092) 2024-06-27 18:27:11 +00:00
_functorch Fix typo in stack_module_state doc (#129126) 2024-06-28 21:36:40 +00:00
_higher_order_ops Revert "[cond] inlining into one of the branches when pred is a python constant (#128709)" 2024-06-29 01:03:55 +00:00
_inductor Revert "[BE][Easy] use pathlib.Path instead of dirname / ".." / pardir (#129374)" 2024-06-29 00:47:15 +00:00
_lazy
_library [custom op] add error message (#129417) 2024-06-28 01:03:14 +00:00
_logging [ts migration] add logging as part of torch logging system (#129405) 2024-06-27 00:20:20 +00:00
_numpy Revert "[BE] enforce style for empty lines in import segments (#129751)" 2024-06-29 00:41:41 +00:00
_prims [Traceable FSDP2] Add Dynamo support for run_with_rng_state HOP (#127247) 2024-06-28 01:04:49 +00:00
_prims_common
_refs Fix rot90 decomposition for no rotation (#129097) 2024-06-24 22:19:42 +00:00
_strobelight
_subclasses Revert "Conversions between strided and jagged layouts for Nested Tensors (#115749)" 2024-06-29 00:16:47 +00:00
_vendor
amp [BE] enable UFMT for torch/storage.py (#127706) 2024-06-27 23:16:24 +00:00
ao chore(quantization): Enable PT2E symmetric dynamic quantization (#124615) 2024-06-26 16:14:58 +00:00
autograd [profiler] Directly use end_ns to create the FunctionEvent instead of using start_ns + duration_ns in pytorch profiler post processing for checking parent-child precisely (#129554) 2024-06-27 10:46:05 +00:00
backends Revert "[cuDNN][SDPA] Remove TORCH_CUDNN_SDPA_ENABLED=1, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)" 2024-06-28 06:03:54 +00:00
compiler
contrib
cpu [inductor][cpp] BF16 AMX micro-gemm support (#127195) 2024-06-21 07:21:47 +00:00
csrc Make PyTorch argparser understand complex (#129580) 2024-06-29 01:21:12 +00:00
cuda Inductor to fail gracefully on Voltas for bf16 tensors (#129288) 2024-06-25 00:04:13 +00:00
distributed Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)" 2024-06-29 00:44:24 +00:00
distributions [BE]: Update mypy to 1.10.0 (#127717) 2024-06-13 15:57:13 +00:00
export [export] make with_effect mark op has_effect to prevent them from DCEed. (#129680) 2024-06-28 02:22:30 +00:00
fft
func
futures
fx Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)" 2024-06-29 00:44:24 +00:00
jit Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)" 2024-06-29 00:44:24 +00:00
legacy
lib
linalg
masked [BE] update type annotations for basic utilities in torch/__init__.py (#129001) 2024-06-24 18:04:38 +00:00
monitor
mps Add support in Python API for the recommended max working set size. (#128289) 2024-06-12 16:03:57 +00:00
mtia [MTIA] Fix synchronize API (#128714) 2024-06-17 21:58:46 +00:00
multiprocessing expose set_thread_name to Python and set thread names (#128448) 2024-06-13 16:38:23 +00:00
nested Revert "Conversions between strided and jagged layouts for Nested Tensors (#115749)" 2024-06-29 00:16:47 +00:00
nn Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)" 2024-06-29 00:44:24 +00:00
onnx [BE] enable UFMT for torch/storage.py (#127706) 2024-06-27 23:16:24 +00:00
optim Optim package docstring fix (#129086) 2024-06-21 14:30:53 +00:00
package Revert "[BE] enforce style for empty lines in import segments (#129751)" 2024-06-29 00:41:41 +00:00
profiler [Profiler] Clean up use_mtia to follow standard use_device instead (#126284) 2024-06-18 21:01:03 +00:00
quantization
signal
sparse
special
testing Revert "[BE][Easy] use pathlib.Path instead of dirname / ".." / pardir (#129374)" 2024-06-29 00:47:15 +00:00
utils Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)" 2024-06-29 00:44:24 +00:00
xpu
__config__.py
__future__.py
__init__.py Refine typing annotation for compile (#129136) 2024-06-28 17:57:44 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py Evaluate symexprs on load path of cache not write (#128997) 2024-06-20 08:55:12 +00:00
_jit_internal.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_linalg_utils.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_lobpcg.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
_lowrank.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_meta_registrations.py Revert "[cuDNN][SDPA] Remove TORCH_CUDNN_SDPA_ENABLED=1, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)" 2024-06-28 06:03:54 +00:00
_namedtensor_internals.py
_ops.py Torchbind call method + effects support (#128397) 2024-06-14 21:28:17 +00:00
_python_dispatcher.py
_size_docs.py
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py
_tensor_str.py
_tensor.py [BE] explicitly export subpackage torch.utils (#128342) 2024-06-13 04:39:16 +00:00
_torch_docs.py Update torch.nanmean() docstring to mention input dtype requirement (#128155) 2024-06-12 17:46:36 +00:00
_utils_internal.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_utils.py Remove dependency on private _compat_pickle in CPython (#129509) 2024-06-26 14:20:27 +00:00
_VF.py
_vmap_internals.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_weights_only_unpickler.py Improve error message for weights_only load (#129705) 2024-06-28 19:36:31 +00:00
abi-check.cpp
CMakeLists.txt Expose nholmann json to torch (#129570) 2024-06-26 21:59:26 +00:00
custom_class_detail.h Revert "[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)" 2024-06-15 01:58:20 +00:00
custom_class.h
extension.h
functional.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
hub.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
library.h Revert "[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)" 2024-06-15 01:58:20 +00:00
library.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
overrides.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
py.typed
quasirandom.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
random.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
README.txt
return_types.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
script.h
serialization.py Improve error message for weights_only load (#129705) 2024-06-28 19:36:31 +00:00
storage.py [BE] enable UFMT for torch/storage.py (#127706) 2024-06-27 23:16:24 +00:00
torch_version.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
types.py [BE] enable UFMT for torch/storage.py (#127706) 2024-06-27 23:16:24 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public headers, but they are really *internal implementation detail*
headers, whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.