pytorch/torch/nn
backends/
intrinsic/          [ao][sparsity] Composability of fusion and sparsity (#74847)                                 2022-04-08 00:44:12 +00:00
modules/            [docs][nn] conv: complex support note (#78351)                                                2022-05-26 20:33:36 +00:00
parallel/           Guard distributed imports (#77727)                                                            2022-05-18 11:27:52 +00:00
qat/                [quant][fx] Use native backend_config_dict in prepare                                         2022-04-12 17:05:31 +00:00
quantizable/        Add flag to optionally average output attention weights across heads (#70055)                2022-01-06 17:32:37 -08:00
quantized/          [quant][refactor] Remove the base class from __all__                                          2022-05-20 17:56:22 +00:00
utils/              Added setattr to functional_call. (#77137)                                                    2022-05-17 05:40:46 +00:00
__init__.py         Explicitly import functional into the torch.nn namespace                                     2022-04-20 19:08:38 +00:00
_reduction.py
common_types.py
cpp.py
functional.py       [docs][nn] conv: complex support note (#78351)                                                2022-05-26 20:33:36 +00:00
functional.pyi.in   Fix typehint of multi_head_attention_forward                                                  2022-04-27 13:47:43 +00:00
grad.py
init.py             Add type hints for a few random functions/classes                                             2022-05-04 13:53:00 +00:00
parameter.py        Throw a nice error when SubTensor.__torch_dispatch__() returns the wrong type for detach()   2022-05-18 20:00:42 +00:00
parameter.pyi