| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `_awaits` | | |
| `_C` | Revert "Refactor gpu trace to be device-agnostic (#121794)" | 2024-03-21 20:33:17 +00:00 |
| `_C_flatbuffer` | | |
| `_custom_op` | infer_schema can add alias annotations when passed a list of mutated args (#122343) | 2024-03-21 21:39:07 +00:00 |
| `_decomp` | Batch Norm Consolidation (#116092) | 2024-03-18 21:01:30 +00:00 |
| `_dispatch` | | |
| `_dynamo` | Revert "Proper view support for jagged layout NestedTensor (#113279)" | 2024-03-21 22:03:01 +00:00 |
| `_export` | [export] build the infra to rollout predispatch export. (#122326) | 2024-03-22 00:55:10 +00:00 |
| `_functorch` | Revert "Teach dynamo about torch.func.jvp (#119926)" | 2024-03-20 18:34:43 +00:00 |
| `_higher_order_ops` | triton_kernel_wrap shouldn't access FakeTensor.data_ptr (#122418) | 2024-03-21 21:48:07 +00:00 |
| `_inductor` | Revert "Precompile triton templates (#121998)" | 2024-03-21 23:05:59 +00:00 |
| `_lazy` | | |
| `_library` | Fix FallbackKernel behavior on mutable ops (#118649) | 2024-02-09 19:01:54 +00:00 |
| `_logging` | Switch TORCH_TRACE to accept a directory by default (#121331) | 2024-03-06 22:46:18 +00:00 |
| `_numpy` | | |
| `_prims` | add decomposition for frexp (#119217) | 2024-02-23 21:52:42 +00:00 |
| `_prims_common` | Make expected stride test in torch._prims_common size oblivious (#122370) | 2024-03-21 17:14:42 +00:00 |
| `_refs` | Fix #83241: torch.nn.TripletMarginLoss allowed margin less or equal to 0 (#121978) | 2024-03-19 23:19:11 +00:00 |
| `_subclasses` | Revert "Support for torch.nested.as_nested_tensor(t) (#113280)" | 2024-03-21 22:00:44 +00:00 |
| `_vendor` | | |
| `amp` | Remove device assert in Gradscaler (#119362) | 2024-02-22 08:02:18 +00:00 |
| `ao` | [quant][pt2e] Add support for conv transpose + bn + {relu} weights fusion in PTQ (#122046) | 2024-03-19 21:00:57 +00:00 |
| `autograd` | Revert "Teach dynamo about torch.func.jvp (#119926)" | 2024-03-20 18:34:43 +00:00 |
| `backends` | [TorchElastic] Refactoring to support non-default logging strategy (#120691) | 2024-02-29 20:59:17 +00:00 |
| `compiler` | [torch.export] Support is_compiling() flag for non-strict mode (#119602) | 2024-02-29 05:52:51 +00:00 |
| `contrib` | | |
| `cpu` | | |
| `csrc` | Revert "Proper view support for jagged layout NestedTensor (#113279)" | 2024-03-21 22:03:01 +00:00 |
| `cuda` | Revert "Refactor gpu trace to be device-agnostic (#121794)" | 2024-03-21 20:33:17 +00:00 |
| `distributed` | Add tensor step to adadelta (#122252) | 2024-03-21 07:28:47 +00:00 |
| `distributions` | | |
| `export` | [export] allow static constraints in dynamic_shapes (#121860) | 2024-03-21 16:59:59 +00:00 |
| `fft` | | |
| `func` | Let torch dynamo inline torch.func.grad (#118407) | 2024-02-28 20:05:00 +00:00 |
| `futures` | | |
| `fx` | Make check_is_size clamp to sys.maxsize - 1, so sys.maxsize comparison returns False (#122372) | 2024-03-21 17:14:42 +00:00 |
| `jit` | Add scuba logging for TorchScript usage (#121936) | 2024-03-19 17:38:27 +00:00 |
| `legacy` | | |
| `lib` | Remove unneeded linking of torch_shm_manager in CMake (#119540) | 2024-02-11 06:33:35 +00:00 |
| `linalg` | Move doc links to point to main (#121823) | 2024-03-15 19:49:37 +00:00 |
| `masked` | | |
| `monitor` | | |
| `mps` | | |
| `multiprocessing` | | |
| `nested` | Revert "Proper view support for jagged layout NestedTensor (#113279)" | 2024-03-21 22:03:01 +00:00 |
| `nn` | Fix #83241: torch.nn.TripletMarginLoss allowed margin less or equal to 0 (#121978) | 2024-03-19 23:19:11 +00:00 |
| `onnx` | Allow fake models to run with ONNXProgram.__call__ (#122230) | 2024-03-21 22:28:05 +00:00 |
| `optim` | Add tensor step to adadelta (#122252) | 2024-03-21 07:28:47 +00:00 |
| `package` | Back out "Support triton.language.dtype with torch.compile (#121690)" (#122108) | 2024-03-18 20:50:28 +00:00 |
| `profiler` | [profiler] Fix recorded profiler step number (#121127) | 2024-03-09 06:54:51 +00:00 |
| `quantization` | | |
| `signal` | Clarifying windows cosine behaviour in the documentation (#119444) | 2024-02-09 05:57:44 +00:00 |
| `sparse` | [sparse] semi-structured sparse refactor (#117302) | 2024-02-14 01:10:40 +00:00 |
| `special` | | |
| `testing` | Revert "Introduce XPU implementation for PyTorch ATen operators (#120891)" | 2024-03-21 20:30:20 +00:00 |
| `utils` | Revert "Refactor gpu trace to be device-agnostic (#121794)" | 2024-03-21 20:33:17 +00:00 |
| `xpu` | Revert "Support gpu trace on XPU (#121795)" | 2024-03-21 20:33:16 +00:00 |
| `__config__.py` | | |
| `__future__.py` | Update nn.Module._apply to not gate on should_use_set_data when swap_tensors is set (#120659) | 2024-02-28 00:59:34 +00:00 |
| `__init__.py` | [dynamo][guards] Move backend match to eval_frame (#121954) | 2024-03-17 06:52:10 +00:00 |
| `_appdirs.py` | | |
| `_classes.py` | | |
| `_compile.py` | | |
| `_custom_ops.py` | | |
| `_deploy.py` | [Lint] replace [assigment] with [method-assign] for methods (#119706) | 2024-02-13 02:06:04 +00:00 |
| `_guards.py` | [dynamo] Compile time optimizations in tx.step() (#121790) | 2024-03-15 01:01:05 +00:00 |
| `_jit_internal.py` | Add scuba logging for TorchScript usage (#121936) | 2024-03-19 17:38:27 +00:00 |
| `_linalg_utils.py` | | |
| `_lobpcg.py` | [Lint] replace [assigment] with [method-assign] for methods (#119706) | 2024-02-13 02:06:04 +00:00 |
| `_lowrank.py` | | |
| `_meta_registrations.py` | add int8 packed gemm support on CPU device (#118056) | 2024-03-07 08:41:43 +00:00 |
| `_namedtensor_internals.py` | | |
| `_ops.py` | Don't cache predispatch kernels (#121712) | 2024-03-12 18:05:59 +00:00 |
| `_python_dispatcher.py` | | |
| `_sources.py` | | |
| `_storage_docs.py` | | |
| `_streambase.py` | | |
| `_tensor_docs.py` | update the tensor.scatter_ doc (#120169) | 2024-02-23 02:51:55 +00:00 |
| `_tensor_str.py` | Add sparse compressed meta tensor support (#120707) | 2024-03-01 13:28:47 +00:00 |
| `_tensor.py` | Move doc links to point to main (#121823) | 2024-03-15 19:49:37 +00:00 |
| `_torch_docs.py` | Graph-Safe RNG State Exchange for Tensor Parallelism (#114068) | 2024-03-21 01:57:08 +00:00 |
| `_utils_internal.py` | [export] build the infra to rollout predispatch export. (#122326) | 2024-03-22 00:55:10 +00:00 |
| `_utils.py` | Revert "Refactor gpu trace to be device-agnostic (#121794)" | 2024-03-21 20:33:17 +00:00 |
| `_VF.py` | | |
| `_vmap_internals.py` | | |
| `_weights_only_unpickler.py` | | |
| `abi-check.cpp` | | |
| `CMakeLists.txt` | | |
| `custom_class_detail.h` | | |
| `custom_class.h` | | |
| `extension.h` | | |
| `functional.py` | Fix ouput typos (#120870) | 2024-02-29 08:29:14 +00:00 |
| `hub.py` | Add verbose parameter to torch.hub.list (#120717) | 2024-03-01 07:39:48 +00:00 |
| `library.h` | | |
| `library.py` | Better error messages for impl_abstract_pystub (#120959) | 2024-03-04 15:24:36 +00:00 |
| `overrides.py` | Add assign argument to torch.Tensor.module_load (#121158) | 2024-03-06 01:32:06 +00:00 |
| `py.typed` | | |
| `quasirandom.py` | | |
| `random.py` | [2/2] Intel GPU Runtime Upstreaming for Generator (#118613) | 2024-02-28 05:28:11 +00:00 |
| `README.txt` | | |
| `return_types.py` | register torch.return_types in torch.fx._pytree (#120027) | 2024-02-23 21:52:42 +00:00 |
| `script.h` | | |
| `serialization.py` | Add support to save safetensors checkpoint directly into onnx (#121001) | 2024-03-11 15:21:59 +00:00 |
| `storage.py` | Add hpu device support in storage/resize (#119761) | 2024-02-17 01:04:27 +00:00 |
| `torch_version.py` | | |
| `types.py` | | |
| `version.py.tpl` | | |