pytorch/torch
Michael Suo 93db2b86d1 Fix type sharing on loaded ScriptModules (#29826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29826

After save/load, we lose concrete type information. So if you tried to
script something that contained a loaded ScriptModule as a submodule,
the following sequence happened:
1. During ConcreteModuleType inference, the loaded submodule got a new
inferred type.
2. But it already had a type, so there was a type mismatch.

To fix this, we should generate a ConcreteModuleType directly from the
loaded submodule's type (similar to what we do for interfaces). This
makes sense too: the ConcreteModuleType should be empty, since all the
"sugaredness" was stripped out during the save/load process.
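
A minimal sketch of the failure mode described above (module and file
names are illustrative, not from the original report):

    import torch

    class Inner(torch.nn.Module):
        def forward(self, x):
            return x + 1

    # Round-trip a scripted module through save/load; the loaded module's
    # "sugaredness" (Python-only attributes, etc.) has been stripped out.
    torch.jit.save(torch.jit.script(Inner()), "inner.pt")
    loaded = torch.jit.load("inner.pt")

    class Outer(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.inner = loaded  # a loaded ScriptModule as a submodule

        def forward(self, x):
            return self.inner(x)

    # Before this fix, recursive scripting inferred a fresh concrete type
    # for `loaded`, which clashed with the type it already carried.
    scripted = torch.jit.script(Outer())
    print(scripted(torch.zeros(3)))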

Test Plan: Imported from OSS

Differential Revision: D18575009

Pulled By: suo

fbshipit-source-id: 4d329b7e9b7e7624f459e50092e35ab0ab813791
2019-11-20 01:13:09 -08:00
autograd explicitly provide memory format when calling to *_like operators 2019-11-18 21:47:52 -08:00
backends Remove fbgemm_is_cpu_supported in favor of torch.backends.quantized.supported_qengines (#26840) 2019-09-27 13:45:15 -07:00
contrib
csrc Fix type sharing on loaded ScriptModules (#29826) 2019-11-20 01:13:09 -08:00
cuda Move torch.cuda's atfork handler into C++ (#29101) 2019-11-11 07:34:27 -08:00
distributed Polish rpc docstring. (#30069) 2019-11-19 23:10:14 -08:00
distributions explicitly provide memory format when calling to clone() at Indexing.cpp 2019-11-07 05:38:32 -08:00
for_onnx Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
jit Fix type sharing on loaded ScriptModules (#29826) 2019-11-20 01:13:09 -08:00
legacy
lib explicitly provide memory format when calling to clone() at ProcessGroupGloo.cpp 2019-11-11 11:48:53 -08:00
multiprocessing Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
nn Overwrite __setstate__ func in MultiheadAttention (#29001) 2019-11-19 18:32:44 -08:00
onnx Support Exporting Bitshift to ONNX (#28210) 2019-11-19 09:25:50 -08:00
optim explicitly provide memory format when calling to *_like operators 2019-11-19 16:19:29 -08:00
quantization Make PerChannelMinMaxObserver scriptable using torch.jit.ignore (#29416) 2019-11-19 19:12:55 -08:00
sparse
testing
utils Hipify contrib/nccl (#29385) 2019-11-08 10:39:17 -08:00
__config__.py
__future__.py
__init__.py explicitly provide memory format when calling to *_like operators 2019-11-11 17:57:34 -08:00
__init__.pyi.in Improve Tensor type hints (#28578) 2019-10-27 04:43:51 -07:00
_classes.py
_jit_internal.py use new overload mechanism for rnns (#29614) 2019-11-13 15:44:25 -08:00
_namedtensor_internals.py Allow align_to to take in partially named tensors (#27308) 2019-10-09 16:28:45 -07:00
_ops.py
_six.py module dedupe (#26666) 2019-10-12 09:51:57 -07:00
_storage_docs.py
_tensor_docs.py Add op bitwise_xor to replace __xor__ and __ixor__ (#25665) 2019-11-12 16:14:04 -08:00
_tensor_str.py Per-channel quantized tensor to have only a single axis (#26675) 2019-09-23 22:29:01 -07:00
_torch_docs.py C++ API parity: isfinite 2019-11-19 20:00:05 -08:00
_utils_internal.py Add a wrapper for inspect in JIT to produce better error message (#25415) 2019-09-14 21:27:51 -07:00
_utils.py Implement pickle support for sparse tensors and torch.layout instances (#27062) 2019-10-04 08:09:32 -07:00
abi-check.cpp
CMakeLists.txt Add support for quantized operator conversion from PT to C2 via ONNX (#29694) 2019-11-18 12:12:40 -08:00
custom_class.h Small fixes for torchbind (#28800) 2019-10-28 16:45:24 -07:00
extension.h
functional.py C++ API parity: isfinite 2019-11-19 20:00:05 -08:00
hub.py Fix hub when branch name contains slash. (#27960) 2019-10-18 10:18:12 -07:00
py.typed
quasirandom.py explicitly provide memory format when calling to clone() at Indexing.cpp 2019-11-07 05:38:32 -08:00
random.py Move torch.cuda's atfork handler into C++ (#29101) 2019-11-11 07:34:27 -08:00
README.txt
script.h turn off autograd mode in android JNI wrapper (#26477) 2019-09-19 21:25:39 -07:00
serialization.py Add zipfile serialization (#29232) 2019-11-19 10:17:32 -08:00
storage.py
tensor.py explicitly provide memory format when calling to clone() at Indexing.cpp 2019-11-07 05:38:32 -08:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather
than C headers.  These headers serve double duty as *internal
implementation detail* headers, whose contents should generally not be
used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
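
As an illustration only, here is a minimal sketch of the intended usage,
assuming the classic TH C API (the `THFloatTensor_*` functions exposed
via the public headers); the anti-pattern this note marks is described
in the comment:

    #include <stdio.h>
    #include <TH/TH.h>  /* public C API (THTensor.h and friends) */

    int main(void) {
      /* Allocate and manipulate a tensor through the public API only. */
      THFloatTensor *t = THFloatTensor_newWithSize2d(2, 3);
      THFloatTensor_fill(t, 1.0f);
      THFloatTensor_set2d(t, 0, 1, 42.0f);
      float v = THFloatTensor_get2d(t, 0, 1);  /* reads back 42.0f */
      printf("%f\n", v);

      /* The abstraction violation would instead include THTensor.hpp and
         poke at the struct's internals directly, coupling the caller to
         implementation details that change when TH is refactored. */

      THFloatTensor_free(t);
      return 0;
    }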