Summary:
TorchScript currently doesn't support `*args, **kwargs` in method signatures, which are used extensively in the forward methods of DPER3 low-level modules. To make DPER3 low-level modules scriptable, I propose having a forward method *only* for TorchScript, and replacing it when we are not in scripting mode.
This solution works today, and I would like to add a test to make sure it will always work in the future.
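A minimal pure-Python sketch of the idea (the class, function names, and `scripting` flag are illustrative stand-ins, not the actual DPER3/TorchScript code): define a scriptable `forward` with an explicit signature, and rebind a flexible `*args, **kwargs` version on the class when not scripting.

```python
# Illustrative sketch only; `Module`, `_flexible_forward`, and the
# `scripting` flag are hypothetical stand-ins, not DPER3/TorchScript APIs.
class Module:
    def forward(self, x, y):
        # Explicit signature: this is the version TorchScript can compile.
        return x + y

def _flexible_forward(self, *args, **kwargs):
    # Eager-only version with the flexible signature TorchScript rejects.
    return sum(args) + sum(kwargs.values())

scripting = False  # stand-in for "are we compiling with torch.jit.script?"
if not scripting:
    Module.forward = _flexible_forward  # swap in the flexible method

m = Module()
print(m.forward(1, 2, 3, z=4))  # 10
```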
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38158
Differential Revision: D21485657
Pulled By: yf225
fbshipit-source-id: df7368e8a5265418be7c305e6666ffd76e595466
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35623
Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.
Test Plan: CI
Differential Revision: D20842874
Pulled By: dreiss
fbshipit-source-id: 9f12e046f827d4f9d5eca99b0b0b46f73e06ff51
Summary:
Previously, we weren't adding the location to implicit conversions, so the error message wouldn't show location when these ops failed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38442
Differential Revision: D21563500
Pulled By: eellison
fbshipit-source-id: 19dd786ab8580f11ed919aac669efeed0ef52dcb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37994
Before, reassigning a method in a module (like `forward = _forward`)
didn't work, because we looked at the function object's name for our def
name when building the AST. Make that overridable to handle cases like
reassignment.
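A small Python illustration of why the lookup failed (with a hypothetical class `M`): after `forward = _forward`, the function object's `__name__` is still `_forward`, so an AST built from `__name__` would define the wrong method, and the def name has to be overridable explicitly.

```python
# Pure-Python illustration; `M` is a hypothetical example class.
class M:
    def _forward(self, x):
        return x * 2
    forward = _forward  # alias, as in the modules this PR fixes

# Both names point at the same function object, whose __name__ is
# still '_forward' -- so the def name must be passed in explicitly.
print(M.forward.__name__)  # '_forward', not 'forward'
print(M().forward(3))      # 6
```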
Test Plan: Imported from OSS
Differential Revision: D21444535
Pulled By: suo
fbshipit-source-id: 4f045f18b5a146edc8005689af525d7d7ed8dd5f
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986
Follows the stack in https://github.com/pytorch/pytorch/pull/33783 to make functions in `torch/functional.py` resolve to their Python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
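The boolean-dispatch idea can be sketched in plain Python (an illustrative analogue, not the actual `torch._jit_internal.boolean_dispatch`): dispatch on a boolean flag to one of two concrete functions, so each branch has a single, static return type.

```python
# Illustrative analogue of boolean dispatch; function names are made up.
def _unique_with_inverse(values):
    # Returns (unique values in first-seen order, inverse indices).
    seen, order, inverse = {}, [], []
    for v in values:
        if v not in seen:
            seen[v] = len(order)
            order.append(v)
        inverse.append(seen[v])
    return order, inverse

def _unique_plain(values):
    # Single static return type: just the unique values.
    return _unique_with_inverse(values)[0]

def unique(values, return_inverse=False):
    # The boolean picks one of the two statically-typed implementations.
    if return_inverse:
        return _unique_with_inverse(values)
    return _unique_plain(values)

print(unique([3, 1, 3, 2]))                       # [3, 1, 2]
print(unique([3, 1, 3, 2], return_inverse=True))  # ([3, 1, 2], [0, 1, 0, 2])
```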
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156
Differential Revision: D21504449
Pulled By: eellison
fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38253
This pass removes dropout and dropout_ nodes when training is false. It
requires the freeze_module pass to have run first, since that pass does both
inlining and constant propagation; without it, the training variable remains
an attribute instead of a constant.
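A toy sketch of the transformation on a hypothetical node list (not the real C++ pass): once constant propagation has pinned `training` to False, each dropout node is an identity, so it can be deleted by rewiring its users to its input.

```python
# Toy IR as a list of dicts; the node schema here is made up for
# illustration and is not the real TorchScript IR.
nodes = [
    {"op": "linear", "out": "a", "in": ["x"]},
    {"op": "dropout", "out": "b", "in": ["a"], "training": False},
    {"op": "relu", "out": "c", "in": ["b"]},
]

def remove_dropout(nodes):
    alias = {}  # maps a removed node's output to its surviving input
    kept = []
    for n in nodes:
        ins = [alias.get(i, i) for i in n["in"]]
        if n["op"] == "dropout" and n.get("training") is False:
            alias[n["out"]] = ins[0]  # dropout(x) == x in eval mode
            continue
        kept.append({**n, "in": ins})
    return kept

print(remove_dropout(nodes))
```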
ghstack-source-id: 103939141
Test Plan: python test/test_jit.py TestScript.test_remove_dropout
Reviewed By: dreiss
Differential Revision: D21505863
fbshipit-source-id: 42ea45804e4653b625b6a254c8d8480757264aa8
Summary:
Followup of https://github.com/pytorch/pytorch/issues/37848. I realized that it's better to condition on the `Value` type instead of the token type, so now it also supports indexing through list variables (it used to be list literals only).
Apparently our eager frontend accepts indexing with a float list as well, so this edge-case behavior is matched too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37966
Reviewed By: suo
Differential Revision: D21439642
Pulled By: ailzhang
fbshipit-source-id: cedb8431ef38747d4aa9909a6bbf8e954dbe0e25
Summary:
**Summary**
This commit adds `torch::jit::RegisterBackend`, an API that allows
external backends to be registered for the execution of JIT subgraphs
outside the JIT interpreter. In order to register an external backend,
one must extend the provided abstract class `PyTorchBackendInterface` and provide
two additional functions: one that creates an instance of the aforementioned subclass
of `PyTorchBackendInterface`, and another that preprocesses a `ScriptModule` so that
it can run on the backend. Then, a `ScriptModule` that can compile and execute a given
JIT subgraph using the functions provided at registration time is generated
for each registered backend.
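The registration shape described above can be sketched in Python (a hypothetical analogue of the C++ `torch::jit::RegisterBackend`, with made-up names): a backend supplies a factory for its `PyTorchBackendInterface` subclass plus a preprocessing function, and lowering a module pairs the two.

```python
# Hypothetical Python analogue; register_backend, lower_to_backend, and
# DummyBackend are illustrative names, not the real C++ API.
_backends = {}

def register_backend(name, factory, preprocess):
    # factory: creates the backend instance;
    # preprocess: readies a module for that backend.
    _backends[name] = (factory, preprocess)

def lower_to_backend(name, module):
    factory, preprocess = _backends[name]
    return factory(), preprocess(module)

class DummyBackend:
    def execute(self, compiled, inputs):
        # Trivial "execution": increment every input.
        return [x + 1 for x in inputs]

register_backend("test_backend", DummyBackend, lambda m: ("compiled", m))
backend, compiled = lower_to_backend("test_backend", "my_module")
print(compiled)                           # ('compiled', 'my_module')
print(backend.execute(compiled, [1, 2]))  # [2, 3]
```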
**Testing**
This commit adds a unit test that uses a minimal test backend
to make sure that the registration endpoint and generated
`ScriptModule` work.
```
$ python test/test_jit.py TestBackends
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.183s
OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35833
Differential Revision: D21231955
Pulled By: SplitInfinity
fbshipit-source-id: 452db1123d0e5d83f97fe5da8a00fdfdb50dbef9
Summary:
this is failing in the profiling_executor job
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37961
Differential Revision: D21434341
Pulled By: eellison
fbshipit-source-id: b34f94b1595ef6f6edee76cd200f951a2ef21f22
Summary:
The existing context manager only conditionally enabled profiling mode, which was counterintuitive. When we changed the default executor, it broke internal benchmarking as a result.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37825
Differential Revision: D21404611
Pulled By: eellison
fbshipit-source-id: 306b3c333ef4eb44ab6a6e5ab4e0682e5ce312ce
Summary:
We used to only support indexing through
- numbers like `x[0, 1]`
- tuple like `x[(0, 1)]`
- tensor like `x[torch.tensor([0, 1])]`
This PR adds support for indexing through list which is equivalent to tensor.
- `x[[0, 1, 5]]`
- `x[[0, 1], [0, 1]]`
- `x[[[0, 1], [0, 1]], [[0, 1], [0, 1]]]`
Note for `x[[0, 1, 5]]` we had a bug in the AST conversion code, so we used to treat it like `x[0, 1, 5]`, which means it might accidentally run and produce a wrong result (fixes https://github.com/pytorch/pytorch/issues/37286, fixes https://github.com/pytorch/pytorch/issues/18616). Now that it's fixed, we probably want to mark it as BC-breaking.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37848
Reviewed By: suo
Differential Revision: D21409840
Pulled By: ailzhang
fbshipit-source-id: 6f2d962885c6dc009cb384d98be1822f5ca7a189
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37842
Fixes https://github.com/pytorch/pytorch/issues/23993.
Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample1 = torch.ones(1)
sample2 = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample1, sample2,),))
> produces a graph with something like:
> %sample1, %sample2 = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
> produces a graph with something like
> %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.
Test Plan: Imported from OSS
Differential Revision: D21406478
Pulled By: suo
fbshipit-source-id: 3c7066b95d4a6e9b528888309954b02dadbc1a07
Summary:
We were previously only looking at class attributes, which didn't include methods etc., and would silently give wrong semantics. This makes hasAttr go through the same resolution as our other attribute lookups.
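A pure-Python analogue of the semantic point (the TorchScript internals differ): an existence check that scans only instance data attributes misses methods, while the full lookup chain finds them.

```python
# Illustrative only; `Mod` is a made-up class, not a TorchScript module.
class Mod:
    def __init__(self):
        self.weight = 1.0
    def forward(self):
        return self.weight

m = Mod()
print("forward" in vars(m))   # False: data-attribute scan misses methods
print(hasattr(m, "forward"))  # True: full resolution, like getattr
```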
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37424
Differential Revision: D21282633
Pulled By: eellison
fbshipit-source-id: 8e970f365c2740d137a02331739c2ed93747b918
Summary:
As a part of moving to dynamic shapes, we are now passing `frame_id` to each profiling callback. The implementation requires copying profiling callbacks into the Interpreter, so the `first`s are actually different for every run. The dynamic-shapes merging algorithm won't use `first`, but in the meantime, while we get there, this should be a good enough fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36806
Differential Revision: D21307173
Pulled By: Krovatkin
fbshipit-source-id: 7dade56ebcc72ebd40bb7f3d636c7b83c99b628f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37464
Fixes https://github.com/pytorch/pytorch/issues/23993.
There are two fixes here:
1. Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample, sample,),))
# produces a graph with something like
# %sample, %sample = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
# produces a graph with something like
# %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.
2. Previously, our input cloning for trace checking didn't do a memoized
deep copy. So:
```
_clone_inputs((sample, sample, sample))
```
produces a tuple with three non-aliased tensors. That's wrong! Use
copy.deepcopy with a memoization argument to fix this.
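The aliasing point can be shown with plain `copy.deepcopy` (a sketch of the fix, not the actual `_clone_inputs` code): independent per-element deep copies lose the aliasing between repeated inputs, while copies sharing one memo dict preserve it.

```python
import copy

sample = [1, 2, 3]
inputs = (sample, sample)  # the two inputs alias each other

# Naive: an independent deepcopy per element breaks the aliasing.
naive = tuple(copy.deepcopy(x) for x in inputs)
print(naive[0] is naive[1])    # False: aliasing lost

# Fixed: a shared memo dict maps each original object to one copy.
memo = {}
shared = tuple(copy.deepcopy(x, memo) for x in inputs)
print(shared[0] is shared[1])  # True: aliasing preserved
```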
Test Plan: Imported from OSS
Differential Revision: D21297549
Pulled By: suo
fbshipit-source-id: 981d5879a4a244520dd68489767129ff357f1497
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37012
Removes an if statement in `torch.nn.functional.affine_grid`
Test Plan: Imported from OSS
Differential Revision: D21160755
Pulled By: eellison
fbshipit-source-id: 8b030936c9fbdb05b44abc9f254805d102f2acc2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36953
Add support for generic lists as a constant; generic dicts & tuples are already implemented. This is a pretty common pattern and cuts down on the number of non-tensor nodes executed in interpolate tests.
Test Plan: Imported from OSS
Differential Revision: D21160761
Pulled By: eellison
fbshipit-source-id: 1e6b7b25b7580f09067794772d44e615601c60c4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37088
For an inlined expression tree like `(e_0, (e_1, e_long))`, the previous
algorithm only scanned the same statement as `e_long`, splitting the
inlined expressions across lines. Because it did not scan `e_0`, `e_0`
would still get emitted inline, causing it to reverse order with `e_1` and
`e_long`. The new algorithm scans starting at `e_long` and going all
the way back up the expression until it reaches the end of the inlined
statement. Caching of what has already been scanned has been added so that
if there were a second long expression `e_long2` after `e_long`, it would not
rescan and re-inline the statements that were already split.
Test Plan: Imported from OSS
Differential Revision: D21180394
Pulled By: zdevito
fbshipit-source-id: 4d142c83a04c89a47d04282f67a513f82cf153c0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615
Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).
Test Plan: CI
Differential Revision: D20842886
Pulled By: dreiss
fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36188
* Remove the O(n^2) behavior when scanning whether to split or not;
otherwise long inline chains take a long time re-scanning.
Test Plan: Imported from OSS
Differential Revision: D20907254
Pulled By: zdevito
fbshipit-source-id: ebfc1a4eefc26d5806381e7afd75b7a9cd4cde97
Summary:
Our test suite used to set double as its default scalar type. When it was switched to not do so (to be more consistent with how users experience PyTorch), a few tests still had to set the default scalar type to double to function properly. Now that the JIT no longer creates double tensors so frequently, it appears that test_jit no longer needs to set double as its default scalar type either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36982
Differential Revision: D21152120
Pulled By: mruberry
fbshipit-source-id: ea6d3c1ad55552dc5affa1fe1bd0e5189849e6d7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34258
This PR allows both atol and rtol to be specified, uses defaults based on the prior analysis (spreadsheet attached to https://github.com/pytorch/pytorch/pull/32538), but retains the absolute tolerance behavior in cases where precision was previously specified explicitly.
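For reference, the combined check that such atol/rtol pairs feed into can be sketched as follows (the default values here are illustrative placeholders, not necessarily the ones this PR picked):

```python
# Sketch of a combined absolute/relative tolerance check; the default
# rtol/atol values below are illustrative placeholders.
def close(actual, expected, rtol=1e-5, atol=1e-8):
    # Absolute term covers values near zero; relative term scales with
    # the magnitude of the expected value.
    return abs(actual - expected) <= atol + rtol * abs(expected)

print(close(1.0 + 1e-7, 1.0))  # True: within combined tolerance
print(close(1.1, 1.0))         # False
```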
Test Plan: Imported from OSS
Differential Revision: D21110255
Pulled By: nairbv
fbshipit-source-id: 57b3a004c7d5ac1be80ee765f03668b1b13f4a7e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36696
This PR adds dictionaries as a supported output of the tracer under the strict
flag.
Test Plan: Imported from OSS
Reviewed By: houseroad
Differential Revision: D21056962
Pulled By: wanchaol
fbshipit-source-id: ace498182d636de853cf8a1efb3dc77f5d53db29
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36727
Looks like this was renamed by accident in 0cbd7fa46f
Test Plan:
Unit test.
Lint.
Differential Revision: D21076697
Pulled By: dreiss
fbshipit-source-id: dbd18cb41c7b26479984a7a7b12ad41a4c5b7658