pytorch/torch/nn
Latest commit bc68a8745f by jlquinn (2020-01-09 11:13:23 -08:00): Spelling fix in transformer docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31973

Differential Revision: D19330660

Pulled By: zou3519

fbshipit-source-id: 29ea1e790a34f0241cb7aba85110f087cdc069ba
backends Remove Module._backend as it's not used anymore. 2019-08-29 15:43:49 -07:00
intrinsic Refactor QAT Conv module for better extensibility (#30362) 2019-11-26 06:53:12 -08:00
modules Spelling fix in transformer docs 2020-01-09 11:13:23 -08:00
parallel Fix typos (#30606) 2019-12-02 20:17:42 -08:00
qat Refactor QAT Conv module for better extensibility (#30362) 2019-11-26 06:53:12 -08:00
quantized Fix default instantiation of dynamic quantized LSTM 2019-12-18 16:59:00 -08:00
utils Fix typos (#30606) 2019-12-02 20:17:42 -08:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
__init__.pyi Fixes #25454 2019-08-30 07:59:26 -07:00
_reduction.py Remove weak script (#22212) 2019-07-03 17:28:25 -07:00
_VF.py Convert functional dropouts to weak script (#13484) 2018-11-05 17:13:07 -08:00
common_types.pyi Fix Typing Error for Padding with asymmetric signatures (#24895) 2019-08-20 14:14:12 -07:00
cpp.py Fix _apply in nn.Module (#15305) 2018-12-17 16:22:21 -08:00
functional.py Renaming scales parameter for interpolate (#31526) 2020-01-02 08:19:30 -08:00
functional.pyi.in Add torch.nn.GELU for GELU activation (#28944) 2019-11-03 21:55:05 -08:00
grad.py Fix bug in grad.py when conv bias != None (#12281) 2018-10-05 12:55:14 -07:00
init.py Simplify _calculate_fan_in_and_fan_out (#29370) 2019-11-07 15:53:05 -08:00
parameter.py Explicitly provide memory format when calling clone() in parameter.py 2019-11-07 07:38:44 -08:00
parameter.pyi Fix typing on nn.Parameter (#25586) 2019-09-09 07:54:27 -07:00
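
The functional.pyi.in entry above references the addition of torch.nn.GELU (#28944). As a minimal sketch, assuming a PyTorch build that already ships this activation, the module form and the functional form can be used interchangeably; the input shape here is purely illustrative:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 8)        # small batch of activations, illustrative shape
gelu = nn.GELU()             # module form, e.g. for use inside nn.Sequential
y_module = gelu(x)
y_functional = F.gelu(x)     # functional form exposed via torch.nn.functional
assert torch.allclose(y_module, y_functional)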
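
The parameter.py entry concerns Parameter's internal clone() call (the memory-format note); the sketch below only shows the public behaviour built on the stock nn.Module and nn.Parameter API. The Scale module and its sizes are hypothetical names used for illustration:

import copy
import torch
import torch.nn as nn

class Scale(nn.Module):
    """Hypothetical one-parameter module used only for illustration."""
    def __init__(self, n):
        super().__init__()
        # Assigning an nn.Parameter attribute registers it with the module,
        # so it appears in parameters() and receives gradients.
        self.weight = nn.Parameter(torch.ones(n))

    def forward(self, x):
        return x * self.weight

m = Scale(3)
print([name for name, _ in m.named_parameters()])  # ['weight']
m2 = copy.deepcopy(m)            # deepcopy clones each Parameter under the hood
print(type(m2.weight).__name__)  # 'Parameter' (still a Parameter after the clone)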