Add semantics for creating a buffer object analogous to creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same: the `register_buffer` method has not been changed. The `persistent` parameter on the `Buffer` type indicates whether the buffer should be persistent or not. The other non-test changes get the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, since it just leads to `register_buffer` being called. Normal tensors can still be used as buffers, so these changes are intended to be backwards compatible.
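A minimal sketch of the intended usage (assuming the new class is exposed as `torch.nn.Buffer`):
```python
import torch
from torch.nn import Buffer

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning a Buffer to an attribute leads to register_buffer being called
        self.running_mean = Buffer(torch.zeros(4))
        # persistent=False keeps the buffer out of the state_dict
        self.scratch = Buffer(torch.zeros(4), persistent=False)

net = Net()
print([name for name, _ in net.named_buffers()])  # ['running_mean', 'scratch']
print(list(net.state_dict().keys()))              # ['running_mean'] only
```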
Fixes #35735
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
Summary: Make it possible for `torch.jit.load(model, device)` to load to a device when `model` contains weights that are on the `meta` device. The `meta` weights are left on `meta`, and the weights that can be loaded are loaded to the target device.
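A minimal sketch of the resulting behavior (assuming `model.pt` holds a ScriptModule with some weights on `meta`):
```python
import torch

# Weights stored on "meta" stay on "meta"; all others are loaded
# to the requested target device.
m = torch.jit.load("model.pt", map_location="cpu")
```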
Reviewed By: singlaiiit, RoshanPAN, sayitmemory
Differential Revision: D45099145
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100495
Approved by: https://github.com/houseroad
Summary: Support for the BINUNICODE8 pickle opcode is missing, so add it. This lets us support attributes larger than 4 GB; for example, for very large models we save the lowered model in the EngineHolder using a string attribute.
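A rough sketch of the scenario this enables (names are illustrative; the pickler only emits BINUNICODE8 for strings larger than 4 GB, so actually running this needs a lot of memory):
```python
import torch

class Holder(torch.nn.Module):
    def __init__(self, blob: str):
        super().__init__()
        self.blob = blob  # string attribute; > 4 GB triggers BINUNICODE8

    def forward(self) -> str:
        return self.blob

m = torch.jit.script(Holder("x" * (2 ** 32 + 1)))
m.save("engine.pt")
torch.jit.load("engine.pt")  # previously failed while reading BINUNICODE8
```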
Test Plan: buck2 test mode/opt //caffe2/test:jit -- --exact 'caffe2/test:jit - test_save_load_large_string_attribute (jit.test_save_load.TestSaveLoad)'
Differential Revision: D44905770
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99104
Approved by: https://github.com/qihqi
**Summary** NamedTuple attributes can be annotated to declare their type:
```python
class MyNamedTuple(NamedTuple):
    x: int
    y: torch.Tensor
    z: MyOtherType
```
Normally in Python you can also declare your types as strings, e.g. `x: 'int'`. NamedTuples previously didn't support this, because their annotation evaluation process was slightly different. This PR updates the NamedTuple attribute type annotation evaluation to support ForwardRef declarations (i.e. declaring types as strings).
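A minimal sketch of what this enables:
```python
from typing import NamedTuple
import torch

class MyTuple(NamedTuple):
    x: 'int'              # string annotation, seen as ForwardRef('int')
    y: 'torch.Tensor'

@torch.jit.script
def fn(t: MyTuple) -> torch.Tensor:
    return t.y + t.x

print(fn(MyTuple(1, torch.ones(2))))  # tensor([2., 2.])
```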
**Details**
Below I repeat the comment I left in _jit_internal.py:
NamedTuple types are slightly different from normal types.
Normally, annotations are evaluated like this (during jit.script):
1. Load strings of python code into c++ and parse.
2. Get annotations as strings
3. Use the PythonResolver's resolution callback (rcb) to convert the string into a python object
4. We call into annotations.py:ann_to_type to convert the Python object from step 3 into a type that torchscript understands.
NamedTuples are more complicated, because they have sub-types. Normally, once we have the NamedTuple type object from step 3, we can just look at the annotation literal values and use ann_to_type directly on them.
But sometimes, users will annotate with string literals, e.g.
```
x: 'int'
```
This also happens with PEP 563 (`from __future__ import annotations`).
These annotations appear in the annotation dict as ForwardRef('int').
Then, we need to convert the string into a Python object. This requires having local context for custom objects or imported types. rcb() is what gives us this. So, we plumb rcb through the stack so it can be used in this context for the if block below.
FAQ:
- Why do we need this special handling for NamedTuple but string annotations work fine for normal types? Normally, we parse the string directly and then call rcb() directly from C++.
- Why not use ForwardRef._evaluate? For that, we need globals() and locals() for the local context where the NamedTuple was defined. rcb is what lets us look up into these. So, basically rcb does the hard work for us.
- What is rcb? rcb is a ResolutionCallback: a Python callable that takes a string and returns a type. It's generated by `createResolutionCallback.*` in _jit_internal.py (see the sketch below).
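A conceptual sketch (not the actual `createResolutionCallback` implementation) of what an rcb provides, namely name resolution against the scope where the annotated type was defined:
```python
import torch

def make_rcb(defining_globals: dict, defining_locals: dict):
    # Resolve a type name (e.g. "torch.Tensor") against the captured scope.
    def rcb(name: str):
        return eval(name, defining_globals, defining_locals)
    return rcb

rcb = make_rcb(globals(), locals())
print(rcb("torch.Tensor"))  # <class 'torch.Tensor'>
```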
**Why is this only partial support**:
This only plumbs the rcb through some paths. In particular, the `toSugaredValue` path uses a fake rcb.
**Alternatives**:
We could also treat this the way we treat non-nn.Module classes: we evaluate them separately, ahead of time. That solution is probably better, but probably requires a more risky refactor for the way NamedTuples are handled.
Fixes #95858
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96933
Approved by: https://github.com/qihqi
Applies the remaining flake8-comprehensions fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, more performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into just the set call.
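For example:
```python
names = ["Alice", "Bob", "alice"]

# Before: an unnecessary generator expression materialized by set()
unique = set(name.lower() for name in names)
# After: a set comprehension, more succinct and more performant
unique = {name.lower() for name in names}

# A useless generator resolves into just the set call:
copies = set(x for x in names)  # before
copies = set(names)             # after
```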
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
Preparation for the next PR in this stack: #89559.
I replaced
- `self.assertTrue(torch.equal(...))` with `self.assertEqual(..., rtol=0, atol=0, exact_device=True)`,
- the same for `self.assertFalse(...)` with `self.assertNotEqual(...)`, and
- `assert torch.equal(...)` with `torch.testing.assert_close(..., rtol=0, atol=0)` (note that we don't need to set `check_device=True` here since that is the default).
There were a few instances where the result of `torch.equal` is used directly. In those cases I've replaced it with `(... == ...).all().item()`, sometimes dropping the `.item()` depending on the context.
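For illustration, the standalone replacement looks like this:
```python
import torch

actual, expected = torch.ones(3), torch.ones(3)

# Old style: a bare boolean check with no diagnostics on failure
assert torch.equal(actual, expected)

# New style: exact comparison with a useful error message on mismatch
torch.testing.assert_close(actual, expected, rtol=0, atol=0)
```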
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89527
Approved by: https://github.com/mruberry
Summary:
Includes the following refactors:
1. Common loading and operator-validation code that was duplicated in the pickle and flatbuffer loaders is moved to function.h/cpp.
2. Allow loading a function without wiring up its operators. This will be used to implement get_bundled_input and friends for flatbuffer.
Test Plan: contbuild & OSS CI, see 69fa49f123
Reviewed By: cccclai
Differential Revision: D36348549
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77624
Approved by: https://github.com/cccclai
Includes the following refactors:
1. Common loading and operator-validation code that was duplicated in the pickle and flatbuffer loaders is moved to function.h/cpp.
2. Allow loading a function without wiring up its operators. This will be used to implement get_bundled_input and friends for flatbuffer.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77328
Approved by: https://github.com/cccclai
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75244
Original commit changeset: d653a5af662a
Original Phabricator Diff: D35060736 (d9d34922a0)
Test Plan: Model loading test; verified that D35060736 (d9d34922a0) causes the torch::save => torch::load failure.
Reviewed By: yinghai, jianyuh
Differential Revision: D35387009
fbshipit-source-id: 9d176992d402d57779e2af3d905b3c1538335298
(cherry picked from commit 6c8cc0d3b8a88b15e35702d70e18bbae8aa4628a)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74387
Make temporary Python bindings for flatbuffer to test ScriptModule save/load.
(Note: this ignores all push blocking failures!)
Test Plan: unittest
Reviewed By: iseeyuan
Differential Revision: D34968080
fbshipit-source-id: d23b16abda6e4b7ecf6b1198ed6e00908a3db903
(cherry picked from commit 5cbbc390c5f54146a1c469106ab4a6286c754325)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73435
Add support for torch.jit.save and torch.jit.load of meta tensors, for use in meta-tensor-based XL weights.
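A minimal sketch of the behavior this adds (module name is illustrative):
```python
import torch

class XL(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # A meta tensor carries only shape/dtype, no storage
        self.weight = torch.nn.Parameter(torch.empty(4, 4, device="meta"))

    def forward(self, x):
        return x @ self.weight

m = torch.jit.script(XL())
m.save("xl.pt")                  # meta weights can now be serialized
loaded = torch.jit.load("xl.pt")
print(loaded.weight.is_meta)     # True
```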
Test Plan:
```
buck test //caffe2/test:jit -- -r .*save_load_meta_tensors.*
```
Reviewed By: houseroad
Differential Revision: D34479511
fbshipit-source-id: 117ccb12e9e427290a17297204508ec85495e3be
(cherry picked from commit ee9aaaf8208d6c9530c828a4b9f28cf2cca05630)
Summary:
1. Split the test file `test_save_load.py` into two files, moving the operator-versioning-related tests to `test_save_load_for_op_versions.py`.
2. Add mobile-module-related tests to `test_save_load_for_op_versions.py`.
How to run:
```
buck test mode/opt //caffe2/test:jit
or
python test/test_jit.py TestSaveLoadForOpVersion
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70090
ghstack-source-id: 146103547
Test Plan:
```
buck test mode/opt //caffe2/test:jit
python test/test_jit.py TestSaveLoadForOpVersion
```
Reviewed By: tugsbayasgalan
Differential Revision: D33180767
fbshipit-source-id: dd31e313c81e90b598ea9dd5ad04a853c017f994
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68367
- bmm_test.py was using syntax not allowed in Python 3.6
- Some suppressions were not placed on the correct line.
With this file,
```
lintrunner --paths-cmd='git grep -Il .'
```
passes successfully.
Test Plan: Imported from OSS
Reviewed By: janeyx99, mrshenli
Differential Revision: D32436644
Pulled By: suo
fbshipit-source-id: ae9300c6593d8564fb326822de157d00f4aaa3c2
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.
Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27: print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28: print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:
- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
```
test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
```
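For reference, a correctly qualified suppression needs the colon:
```python
# Without the colon, flake8 ignores the code and suppresses ALL errors:
import os, sys  # noqa E401
# With the colon, only E401 (multiple imports on one line) is suppressed:
import os, sys  # noqa: E401
```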
I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272
Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:
- https://github.com/pytorch/pytorch/runs/2365189927
Reviewed By: janeyx99
Differential Revision: D27830127
Pulled By: samestep
fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49594
**Summary**
This commit adds a unit test to `test_save_load.py` that checks that
saving and loading a module preserves metadata about which module
attributes are parameters and buffers. The hooks currently used to
automatically check serialization of every function and module in the
unit tests verify that the archive produced by saving, loading, and
saving again is the same, and that the type tags for the actual IValues
representing the module match before saving and after loading. However,
these checks do not ensure that buffer and parameter metadata is not
lost or destroyed during serialization.
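A minimal sketch of the kind of round-trip check the new test performs (module and attribute names are illustrative):
```python
import io
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(2))
        self.register_buffer("count", torch.zeros(1))

m = torch.jit.script(M())
f = io.BytesIO()
torch.jit.save(m, f)
f.seek(0)
loaded = torch.jit.load(f)

# Parameter/buffer metadata should survive the round trip
assert "weight" in dict(loaded.named_parameters())
assert "count" in dict(loaded.named_buffers())
```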
**Test Plan**
Ran the new unit test.
Test Plan: Imported from OSS
Reviewed By: xw285cornell
Differential Revision: D25730603
Pulled By: SplitInfinity
fbshipit-source-id: 06a202935d9e0654cb1966c34f54707f0a28a331
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47775
When serializing graphs, we check every node for named types referenced,
so that we can register them as dependencies. We were skipping this
check for the graph inputs themselves. Since types used at input are
almost always used somewhere in the graph, we never noticed this gap
until a user reported an issue with NamedTuples.
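A minimal sketch of the failure mode (names are illustrative): a named type referenced only by the graph's input signature previously wasn't registered as a dependency when saving:
```python
from typing import NamedTuple
import io
import torch

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

class M(torch.nn.Module):
    def forward(self, p: Point) -> int:
        return 0  # Point appears only in the input signature

m = torch.jit.script(M())
f = io.BytesIO()
torch.jit.save(m, f)  # Point must be registered as a dependency here
f.seek(0)
torch.jit.load(f)
```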
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24896289
Pulled By: suo
fbshipit-source-id: 4ce76816cb7997a7b65e7cea152ea52ed8f27276
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44266
If PyTorchStreamWriter is writing to a file in a non-existent path, it throws an exception. During unwinding, the destructor calls writeEndOfFile() and throws again. To avoid this double exception, a check-and-throw is added in the constructor. In that case the destructor is never called and the exception can propagate through the unwinding.
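A minimal sketch of the user-visible effect:
```python
import torch

m = torch.jit.script(torch.nn.Linear(2, 2))
try:
    # Saving to a non-existent path now raises a clean exception instead of
    # throwing a second time from the destructor during stack unwinding.
    m.save("/no/such/dir/model.pt")
except Exception as e:
    print(type(e).__name__, e)
```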
Test Plan: python test/test_jit.py TestSaveLoad.test_save_nonexit_file
Reviewed By: dreiss
Differential Revision: D23560770
Pulled By: iseeyuan
fbshipit-source-id: 51b24403500bdab3578c7fd5e017780467a5d06a
Summary:
This PR:
- updates div to perform true division
- makes torch.true_divide an alias of torch.div
This follows on work in previous PyTorch releases that first deprecated div performing "integer" or "floor" division, then prevented it by throwing a runtime error.
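For example:
```python
import torch

a = torch.tensor([5])
b = torch.tensor([2])

print(torch.div(a, b))           # tensor([2.5000]): true division
print(torch.true_divide(a, b))   # identical result, since it is an alias
print(torch.floor_divide(a, b))  # tensor([2]): explicit floor division
```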
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42907
Reviewed By: ngimel
Differential Revision: D23622114
Pulled By: mruberry
fbshipit-source-id: 414c7e3c1a662a6c3c731ad99cc942507d843927
Summary:
This is for the case where we want to store binary files using the `ScriptModule.save(..., _extra_files=...)` functionality. With Python 3 we can just use bytes and not bother about encodings.
I had to copy-paste from the pybind sources; maybe we should upstream it, but that would mean adding a bunch of template arguments to `bind_map`, which is a bit untidy.
Let me know if there's a better place to park this function (it seems to be the only invocation of `bind_map`, so I put it in the same file).
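A minimal sketch of the usage this enables (filenames are illustrative):
```python
import torch

m = torch.jit.script(torch.nn.Linear(2, 2))
# Binary payloads can now be passed as bytes:
m.save("model.pt", _extra_files={"blob.bin": b"\x00\x01\x02"})

extra = {"blob.bin": ""}
torch.jit.load("model.pt", _extra_files=extra)
print(extra["blob.bin"])  # the stored payload, read back
```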
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43241
Reviewed By: zdevito
Differential Revision: D23205244
Pulled By: dzhulgakov
fbshipit-source-id: 8f291eb4294945fe1c581c620d48ba2e81b3dd9c
Summary:
After being deprecated in 1.5 and throwing a runtime error in 1.6, we can now enable torch.full inferring its dtype when given bool and integer fill values. This PR enables that inference and updates the tests and docs to reflect this.
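For example:
```python
import torch

print(torch.full((2,), True).dtype)  # torch.bool
print(torch.full((2,), 7).dtype)     # torch.int64
print(torch.full((2,), 1.5).dtype)   # torch.float32 (unchanged)
```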
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41912
Reviewed By: albanD
Differential Revision: D22836802
Pulled By: mruberry
fbshipit-source-id: 33dfbe4d4067800c418b314b1f60fab8adcab4e7
Summary:
After being deprecated in 1.5 and throwing a runtime error in 1.6, we can now enable torch.full inferring its dtype when given bool and integer fill values. This PR enables that inference and updates the tests and docs to reflect this.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41912
Reviewed By: pbelevich
Differential Revision: D22790718
Pulled By: mruberry
fbshipit-source-id: 8d1eb01574b1977f00bc0696974ac38ffdd40d9e
Summary:
BC-breaking NOTE:
In PyTorch 1.6, bool and integral fill values given to torch.full must set the dtype or out keyword arguments. In prior versions of PyTorch these fill values would return float tensors by default, but in PyTorch 1.7 they will return a bool or long tensor, respectively. The documentation for torch.full has been updated to reflect this.
PR NOTE:
This PR causes torch.full to throw a runtime error when it would have inferred a float dtype by being given a boolean or integer value. A versioned symbol for torch.full is added to preserve the behavior of already serialized Torchscript programs. Existing tests for this behavior being deprecated have been updated to reflect it now being unsupported, and a couple new tests have been added to validate the versioned symbol behavior. The documentation of torch.full has also been updated to reflect this change.
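A sketch of the 1.6 behavior (the first call is commented out because it only errors under 1.6):
```python
import torch

# torch.full((2,), 7)  # RuntimeError in 1.6: dtype must be given explicitly

torch.full((2,), 7, dtype=torch.long)  # fine: dtype stated explicitly
torch.full((2,), 7.0)                  # fine: float fill values still infer float
```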
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40364
Differential Revision: D22176640
Pulled By: mruberry
fbshipit-source-id: b20158ebbcb4f6bf269d05a688bcf4f6c853a965
Summary:
**1.6 Deprecation Note**
In PyTorch 1.6, attempting to divide two integer tensors or an integer tensor and an integer scalar will throw a runtime error. This behavior was deprecated with a warning in PyTorch 1.5. In PyTorch 1.7, torch.div and the division operator will always perform true division like Python 3 and NumPy.
To divide integer values use either torch.true_divide, for true division, or torch.floor_divide (the // operator) for floor division.
**PR Summary**
This PR updates the warning message when performing integer division to be a runtime error. Because some serialized Torchscript programs may rely on torch.div's historic behavior it also implements a "versioned symbol" for div that lets those models retain their current behavior. Extensive tests of this behavior are the majority of this PR.
Note this change bumps the produced file format version to delineate which programs should have their historic div behavior preserved.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38620
Differential Revision: D21612598
Pulled By: mruberry
fbshipit-source-id: c9c33591abce2f7e97f67f0f859901f5b03ed47d
Summary:
PyTorch users write programs and save them as serialized Torchscript. When this Torchscript is loaded it contains symbols like "aten::div" describing some of the program's behavior. If the behavior of these symbols has changed since the program was serialized, however, then the original program's semantics may not be preserved.
For example, when we make aten::div always perform "true" division, like NumPy, Python 3, and JAX, then serialized Torchscript programs relying on aten::div performing floor division on integral inputs will break.
This PR demonstrates the "Versioned Symbol" pattern that lets symbols be remapped into Torchscript builtins that preserve their historic behavior. Using this pattern, after we update aten::div to always perform true division, we could remap it in older Torchscript programs to a builtin that implements its historic behavior.
The pattern is described in the [Versioned Symbols] note in the code and is implemented like this:
- If BuiltinModule is given a version, before it returns a symbol it queries to see if another symbol should be substituted for it.
- versioned_symbol.cpp has a map for symbols and the version range for which another symbol should be substituted for them.
- The substitutions are implemented as builtin functions.
An example using the new, test-only _subcmul function is implemented to test this behavior. A test in jit/test_save_load.py follows the pattern described in the [Versioned Symbols] note and uses a fixture serialized with file version 2 to verify that the historic behavior is preserved.
In the future we will likely need a slightly more complex mechanism with multiple substitutions with distinct version ranges, and this just requires changing the map to be Symbol->Iterable.
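A conceptual Python sketch of the lookup (the real implementation lives in C++ in versioned_symbol.cpp; the names and version ranges here are hypothetical):
```python
# Map a symbol to a substitute builtin and the file-version range
# for which the substitution applies.
SUBSTITUTIONS = {
    "aten::div": ("upgraders::div_historic", (0, 3)),  # hypothetical entry
}

def resolve(symbol: str, file_version: int) -> str:
    entry = SUBSTITUTIONS.get(symbol)
    if entry is not None:
        replacement, (lo, hi) = entry
        if lo <= file_version <= hi:
            return replacement
    return symbol

print(resolve("aten::div", 2))  # upgraders::div_historic
print(resolve("aten::div", 5))  # aten::div
```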
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36300
Differential Revision: D21058990
Pulled By: mruberry
fbshipit-source-id: 2b7c732878c0ecfcd9f0a6205fb6d6421feeaf61