_C
Autogen Tags enum, and allow specifying tags while defining an op
2022-06-11 00:29:32 +00:00
_C_flatbuffer
_decomp
Revert "Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads"""
2022-06-10 04:40:43 +00:00
_lazy
Revert "Revert "[LT] Codegen ReuseNode for supported ops""
2022-05-16 20:14:42 +00:00
_masked
masked logsumexp/logaddexp
2022-06-11 05:46:36 +00:00
_prims
Revert "Fixes maybe_broadcast to actually broadcast only when needed ( #79298 )"
2022-06-11 23:36:18 +00:00
_refs
Revert "Fixes maybe_broadcast to actually broadcast only when needed ( #79298 )"
2022-06-11 23:36:18 +00:00
_subclasses
Add Dynamic Output Shape Tag for data-dependent ops, handle in FakeTensor
2022-06-09 22:16:16 +00:00
amp
remove spurious warning in amp ( #79203 )
2022-06-10 21:53:58 +00:00
ao
[ao] Added fx model report per_channel detector
2022-06-10 08:09:59 +00:00
autograd
[forward ad] forbid non-float non-complex tangent and primal
2022-05-31 20:58:19 +00:00
backends
Deprecate torch.lu
2022-06-07 22:50:14 +00:00
contrib
cpu
add autocast cpu doc
2022-03-22 02:02:43 +00:00
csrc
Put symint overloads on a different name
2022-06-12 14:36:39 +00:00
cuda
Resolve TODO after Python 2 for custom_fwd ( #78592 )
2022-06-01 05:17:41 +00:00
distributed
Fix shard_module to appropriately deal with sub process groups.
2022-06-12 03:50:45 +00:00
distributions
add type annotation to distributions.kl_divergence ( #78432 )
2022-06-10 13:39:20 +00:00
fft
[complex32] fft support (cuda only) ( #74857 )
2022-05-12 04:28:55 +00:00
futures
fx
Ported proxy tensor tests over to core ( #78890 )
2022-06-07 00:28:53 +00:00
jit
Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 ( #74353 ) ( #76771 )
2022-06-07 21:44:55 +00:00
legacy
lib
turn on -Werror=unused-variable in our Bazel CPU build
2022-06-11 02:46:34 +00:00
linalg
Simplify and optimize linalg.solve
2022-06-11 04:06:40 +00:00
monitor
multiprocessing
Restore old names for private funcs in legacy storages ( #77861 )
2022-05-20 02:03:34 +00:00
nested
nn
Port index.Tensor to structured kernels.
2022-06-10 17:27:47 +00:00
onnx
Add onnx support for movedim and moveaxis ( #78931 )
2022-06-09 19:41:09 +00:00
optim
Adding maximize to Adamax ( #77409 )
2022-05-16 17:34:44 +00:00
package
torch/package: add fix for implicit numpy dependency ( #78979 )
2022-06-08 17:07:00 +00:00
profiler
Revert "Revert "[Profiler] Move python tracing to unified event type (Part 2)""
2022-06-09 19:45:02 +00:00
quantization
[quant][fx][improvement] Renamed default_affine_fixed_qparams_observer and default_symmetric_fixed_qparams_observer ( #76637 )
2022-05-04 02:39:20 +00:00
sparse
Compressed sparse layout conversion stubs ( #77489 )
2022-05-16 18:37:42 +00:00
special
Orthogonal Polynomials ( #78304 )
2022-06-03 22:38:56 +00:00
testing
Fix shard_module to appropriately deal with sub process groups.
2022-06-12 03:50:45 +00:00
utils
[DataPipe] Correcting deprecation version
2022-06-10 19:31:29 +00:00
__config__.py
__future__.py
__init__.py
[CUBLAS][TF32] Fix broken docstring for set_float32_matmul_precision ( #78949 )
2022-06-06 22:04:10 +00:00
_appdirs.py
_classes.py
_deploy.py
[lint] upgrade mypy to latest version
2022-05-03 20:51:34 +00:00
_jit_internal.py
Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 ( #74353 ) ( #76771 )
2022-06-07 21:44:55 +00:00
_linalg_utils.py
Remove deprecated torch.solve ( #70986 )
2022-05-10 13:44:07 +00:00
_lobpcg.py
remove inverse from LOBPCG
2022-04-20 19:03:00 +00:00
_lowrank.py
_meta_registrations.py
[meta] Add meta support for fft ops ( #79311 )
2022-06-13 01:56:42 +00:00
_namedtensor_internals.py
_ops.py
Autogen Tags enum, and allow specifying tags while defining an op
2022-06-11 00:29:32 +00:00
_python_dispatcher.py
Lint fix
2022-05-05 05:52:40 +00:00
_six.py
_sources.py
Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 ( #74353 ) ( #76771 )
2022-06-07 21:44:55 +00:00
_storage_docs.py
Merge torch.cuda._UntypedStorage into torch._UntypedStorage ( #75459 )
2022-05-19 13:54:39 +00:00
_tensor_docs.py
Move Tensor.grad back into C++
2022-06-10 13:44:45 +00:00
_tensor_str.py
Support str for Sparse Compressed tensors
2022-05-18 12:58:54 +00:00
_tensor.py
Move Tensor.grad back into C++
2022-06-10 13:44:45 +00:00
_torch_docs.py
MAINT: Harmonize argsort params with array_api ( #75162 )
2022-06-09 12:32:01 +00:00
_utils_internal.py
_utils.py
[DOCS] Add docstring to _get_async_or_non_blocking in _utils.py ( #78036 )
2022-06-01 16:19:43 +00:00
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt
Make Wunused-local-typedef a hard error ( #77918 )
2022-06-09 18:14:01 +00:00
custom_class_detail.h
custom_class.h
deploy.h
extension.h
functional.py
Deprecate torch.lu
2022-06-07 22:50:14 +00:00
hub.py
Minor torchhub docs
2022-05-10 11:01:02 +00:00
library.h
Autogen Tags enum, and allow specifying tags while defining an op
2022-06-11 00:29:32 +00:00
library.py
Make torch.library decorators return function
2022-06-08 01:57:00 +00:00
overrides.py
Add offsets-based reduction to segment_reduce (CPU, CUDA)
2022-06-11 17:43:42 +00:00
py.typed
quasirandom.py
Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
2021-08-12 11:45:01 -07:00
random.py
Adds return type annotation for fork_rng function ( #63724 )
2021-08-27 09:03:40 -07:00
README.txt
return_types.py
Simplify and optimize linalg.solve
2022-06-11 04:06:40 +00:00
script.h
serialization.py
Merge torch.cuda._UntypedStorage into torch._UntypedStorage ( #75459 )
2022-05-19 13:54:39 +00:00
storage.py
Fix _free_weak_ref error ( #78575 )
2022-06-01 00:07:48 +00:00
torch_version.py
Move Tensor.grad back into C++
2022-06-10 13:44:45 +00:00
types.py