pytorch/torch/nn
Latest commit: 9409e03903 by Negin Raoof (2020-07-09 17:24:53 -07:00)
[ONNX][1.6] Update interpolate recompute_scale_factor default (#41117)

* Update interpolate recompute_scale_factor default
* Update upsampling.h
* Update functional.py
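The commit above changes the default of `recompute_scale_factor` in `torch.nn.functional.interpolate`. As a rough illustration of what the flag controls, here is a minimal pure-Python sketch (the `effective_scale` helper is hypothetical, not PyTorch API): the output size is always the floor of `input_size * scale`, and the flag decides whether the kernel then interpolates with the scale recomputed from that rounded integer size or with the user-supplied float scale directly.

```python
import math

def effective_scale(in_size: int, scale: float, recompute_scale_factor: bool) -> float:
    # Hypothetical helper sketching interpolate()'s scale handling.
    # The integer output size is always floor(in_size * scale).
    out_size = math.floor(in_size * scale)
    if recompute_scale_factor:
        # Recompute the scale from the rounded integer output size;
        # the kernel then interpolates with this recomputed ratio.
        return out_size / in_size
    # Otherwise the kernel uses the user-supplied float scale as-is.
    return scale

# in_size=10, scale=1.55 -> out_size=15
print(effective_scale(10, 1.55, True))   # recomputed: 15 / 10
print(effective_scale(10, 1.55, False))  # original float scale
```

The two branches can produce slightly different sampling grids, which is why the default matters for ONNX export reproducibility.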
backends Remove Module._backend as it's not used anymore. 2019-08-29 15:43:49 -07:00
intrinsic fused convbn: respect the strict argument when loading from state dict (#39205) 2020-05-28 19:25:45 -07:00
modules [1.6] Make IterableDataset DataLoader.__len__ warning clearer (#41185) 2020-07-09 14:07:58 -07:00
parallel Add a link in RPC doc page to point to PT Distributed overview (#41108) (#41156) 2020-07-09 07:49:10 -07:00
qat qat eager: remove unneeded modules (#40396) 2020-06-22 17:45:51 -07:00
quantized Docstring changes for dynamic quantized classes (#40931) (#41032) 2020-07-06 21:37:53 -07:00
utils [jit] support pad_sequence/pack_sequence (#39844) 2020-06-19 19:03:14 -07:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
__init__.pyi fix type stub errors (#33762) 2020-02-27 06:58:53 -08:00
_reduction.py Fix type annotations and make MyPy run on torch/ (#36584) 2020-04-22 14:17:08 -07:00
common_types.py Move all torch.nn.modules type annotations inline (#38211) 2020-06-11 15:59:57 -07:00
cpp.py
functional.py [ONNX][1.6] Update interpolate recompute_scale_factor default (#41117) 2020-07-09 17:24:53 -07:00
functional.pyi.in Delete torch/__init__.pyi, deferring to direct extension stubs (#38157) 2020-05-11 07:20:13 -07:00
grad.py Grad input padding support for dilation argument (#33872) 2020-04-09 11:09:55 -07:00
init.py docs: Fixed docstring indentation for documentation (#37739) 2020-05-04 19:08:55 -07:00
parameter.py explicitly provide memory format when calling clone() in parameter.py 2019-11-07 07:38:44 -08:00
parameter.pyi Fix wrong typing (torch/nn/parameter.pyi) (#32617) 2020-01-25 16:19:33 -08:00