pytorch/torch/nn
Directory listing (latest commit 2022-06-10 17:27:47 +00:00)
backends/
intrinsic/ [ao][sparsity] Composability of fusion and sparsity (#74847) 2022-04-08 00:44:12 +00:00
modules/ Move Tensor.grad back into C++ 2022-06-10 13:44:45 +00:00
parallel/ Guard distributed imports (#77727) 2022-05-18 11:27:52 +00:00
qat/ [quant][fx] Use native backend_config_dict in prepare 2022-04-12 17:05:31 +00:00
quantizable/ Add flag to optionally average output attention weights across heads (#70055) 2022-01-06 17:32:37 -08:00
quantized/ [quant][refactor] Remove the base class from __all__ 2022-05-20 17:56:22 +00:00
utils/ Port index.Tensor to structured kernels. 2022-06-10 17:27:47 +00:00
__init__.py Explicitly import functional into the torch.nn namespace 2022-04-20 19:08:38 +00:00
_reduction.py
common_types.py
cpp.py
functional.py Revert "kl_div: fix for grads wrt target, double backward, forward-over-reverse AD support. (#79007)" 2022-06-09 13:07:03 +00:00
functional.pyi.in Fix typehint of multi_head_attention_forward 2022-04-27 13:47:43 +00:00
grad.py
init.py Add type hints for a few random functions/classes 2022-05-04 13:53:00 +00:00
parameter.py Throw a nice error when SubTensor.__torch_dispatch__() returns the wrong type for detach() 2022-05-18 20:00:42 +00:00
parameter.pyi