Summary:
This simplifies our handling and makes it easy to pass CompilationUnits from Python to C++-defined functions via PyBind.
Discussed on Slack with SplitInfinity
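For illustration, a rough sketch of the kind of usage this is meant to make easy (`my_cpp_ext.process_unit` is a hypothetical pybind-bound C++ function, not part of this PR):
```python
import torch

# Build a CompilationUnit from TorchScript source in Python.
cu = torch.jit.CompilationUnit("""
def foo(x: int) -> int:
    return x + 1
""")
print(cu.foo(41))  # 42
# Hypothetical: hand the CompilationUnit to a pybind-bound C++ function directly.
# my_cpp_ext.process_unit(cu)
```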
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50614
Reviewed By: anjali411
Differential Revision: D25938005
Pulled By: SplitInfinity
fbshipit-source-id: 94aadf0c063ddfef7ca9ea17bfa998d8e7b367ad
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50593
There are no equivalents to torch.FloatTensor and torch.cuda.FloatTensor for complex
types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors.
Also found a few places that explicitly cast inputs to floating point types,
which would drop the imaginary component before running the test.
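For illustration, a minimal sketch (not from the test suite) of why such casts hide failures, and of the dtype/device-based construction that does work for complex types:
```python
import torch

z = torch.tensor([1 + 2j, 3 - 4j])
# Casting to a real dtype keeps only the real part (PyTorch typically warns about this).
print(z.float())  # tensor([1., 3.])

# dtype/device-based construction covers complex types, unlike torch.cuda.FloatTensor.
w = torch.zeros(2, dtype=torch.complex64, device="cpu")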
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D25954050
Pulled By: mruberry
fbshipit-source-id: 1fa8e5af233aa095c839d5e2f860564baaf92aef
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50074
Adds Conv-BN fusion for models that have been frozen. I haven't explicitly tested perf yet, but it should be equivalent to the results from Chillee's PR [here](https://github.com/pytorch/pytorch/pull/47657) and [here](https://github.com/pytorch/pytorch/pull/47657#issuecomment-725752765). Click on the PR for details, but it's a good speed-up.
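For context, a minimal numerical sketch of what Conv-BN folding computes (this is not the actual JIT pass; `fold_conv_bn` is an illustrative helper and assumes an affine BatchNorm in eval mode with running statistics):
```python
import torch

def fold_conv_bn(conv: torch.nn.Conv2d, bn: torch.nn.BatchNorm2d) -> torch.nn.Conv2d:
    # Folding is only valid in eval mode, where BN uses its frozen running stats.
    fused = torch.nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                            stride=conv.stride, padding=conv.padding,
                            dilation=conv.dilation, groups=conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    with torch.no_grad():
        # Scale each output channel of the conv weight and absorb BN into the bias.
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused
```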
In a later PR in the stack I plan on making this optimization on by default as part of `torch.jit.freeze`. I will also add a peephole in a later PR so that conv->batchnorm2d does not generate a conditional checking the number of dims.
Zino was working on freezing and left the team, so I'm not really sure who should be reviewing this, but I don't care too much so long as I get a review.
Test Plan: Imported from OSS
Reviewed By: tugsbayasgalan
Differential Revision: D25856261
Pulled By: eellison
fbshipit-source-id: da58c4ad97506a09a5c3a15e41aa92bdd7e9a197
Summary:
This adds guarding for DifferentiableGraph nodes in order to not depend on
It also bails out when gradients are required for the CUDA fuser.
Fixes https://github.com/pytorch/pytorch/issues/49299
I still need to look into a handful of failing tests, but maybe this can serve as a basis for discussion.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49433
Reviewed By: ngimel
Differential Revision: D25681374
Pulled By: Krovatkin
fbshipit-source-id: 8e7be53a335c845560436c0cceeb5e154c9cf296
Summary:
=======
This PR addresses the following:
* Adds JIT support for CUDA Streams
* Adds JIT support for CUDA Events
* Adds JIT support for CUDA Stream context manager
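For reference, a minimal sketch of the eager-mode stream/event usage this PR is about making expressible in `torch.jit.script` functions (the scripted constructor signatures may differ slightly from eager; requires a CUDA device):
```python
import torch

x = torch.randn(1024, device="cuda")
s = torch.cuda.Stream()
e = torch.cuda.Event()
with torch.cuda.stream(s):
    y = x * 2       # runs on the side stream
    e.record(s)
# Make the default stream wait for the side-stream work before using y.
torch.cuda.current_stream().wait_event(e)
```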
Testing:
======
python test/test_jit.py -v TestCUDA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48020
Reviewed By: navahgar
Differential Revision: D25725749
Pulled By: nikithamalgifb
fbshipit-source-id: b0addeb49630f8f0c430ed7badeca43bb9d2535c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022
**BC-breaking note**:
Previously torch.stft took an optional `return_complex` parameter that indicated whether the output would be a floating point tensor or a complex tensor. By default `return_complex` was False to be consistent with the previous behavior of torch.stft. This PR changes this behavior so `return_complex` is a required argument.
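For illustration, a minimal sketch of the call after this change (`n_fft=256` is an arbitrary choice):
```python
import torch

x = torch.randn(1024)
# return_complex must now be passed explicitly; omitting it is an error rather than a warning.
spec = torch.stft(x, n_fft=256, return_complex=True)  # complex tensor
spec_real = torch.view_as_real(spec)                  # (..., 2) float layout if needed
```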
**PR Summary**:
* **#49022 stft: Change require_complex warning to an error**
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D25658906
Pulled By: mruberry
fbshipit-source-id: 11932d1102e93f8c7bd3d2d0b2a607fd5036ec5e
Summary:
========
Fixes #42915
This commit adds support for the augmented assignment shorthands in TorchScript, i.e. |=, &=, ^=, <<=, >>=, **=
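A minimal sketch in the spirit of the new tests (not the exact test code; it exercises the bitwise and shift shorthands on ints):
```python
import torch

@torch.jit.script
def aug_assign(a: int, b: int) -> int:
    a |= b
    a &= b
    a ^= b
    a <<= 1
    a >>= 1
    return a

print(aug_assign(5, 3))
```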
Testing:
======
This commit also adds test for the above fix in test_jit.py
The test can be invoked by
pytest -k augassign test/test_jit.py
Here is a snapshot of the testing:
<img width="1238" alt="image" src="https://user-images.githubusercontent.com/70345919/93105141-8f9f5300-f663-11ea-836b-3b52da6d2be5.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44621
Reviewed By: mrshenli
Differential Revision: D23906344
Pulled By: nikithamalgifb
fbshipit-source-id: 4c93a7430a625f698b163609ccec15e51417d564
Summary:
Fixes https://github.com/pytorch/pytorch/issues/598
This is BC-breaking as we now explicitly don't call the hook when there are no Tensors at the top level of the output.
This feature was not working anyway, as the returned grad_input/grad_output were wrong (they did not respect the output structure, and the inputs were wrong for multi-Node Modules).
This is also BC-breaking as we now report the correct gradients for `nn.Module`s that contain multiple autograd `Node`s, whereas we used to return incorrect results before.
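For reference, a minimal sketch of observing the corrected grad_input/grad_output (assuming the full backward hook API available in recent PyTorch; the model here is illustrative):
```python
import torch
import torch.nn as nn

def hook(module, grad_input, grad_output):
    # grad_output now mirrors the module's actual output structure.
    print([None if g is None else tuple(g.shape) for g in grad_output])

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.register_full_backward_hook(hook)
model(torch.randn(3, 4, requires_grad=True)).sum().backward()
```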
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46163
Reviewed By: ailzhang, mruberry
Differential Revision: D24894180
Pulled By: albanD
fbshipit-source-id: e1b5d193d2818eb2f51e2a2722c7405c8bd13c2b
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49362
**Summary:**
This PR fixes the issue where invalid annotation types are used for a dictionary.
An "unsupported annotation" error message is now generated for all invalid annotations.
**Test Case**:
python test/test_jit.py TestJit.test_dict_invalid_annotations
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49425
Reviewed By: navahgar
Differential Revision: D25601578
Pulled By: nikithamalgifb
fbshipit-source-id: 91633e3d0891bdcb5402f044a74d02fe352ecd6f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48660
We previously supported tuple slicing without any step size; this PR extends the feature to support arbitrary step sizes. We do this by manually reconstructing a new tuple in the IR instead of relying on the TupleSlice prim.
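A minimal sketch of what this enables (illustrative function, not from the tests):
```python
import torch
from typing import Tuple

@torch.jit.script
def every_other(t: Tuple[int, int, int, int, int]) -> Tuple[int, int, int]:
    # Slicing a tuple with a step; the result is a statically typed 3-tuple.
    return t[::2]

print(every_other((0, 1, 2, 3, 4)))  # (0, 2, 4)
```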
Test Plan:
python tests
Imported from OSS
Reviewed By: gmagogsfm
Differential Revision: D25359336
fbshipit-source-id: 28cde536f28dd8a00607814b2900765e177f0ed7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47695
The method_tests from common_methods_invocations.py are being migrated into a new OpInfo class-based testing framework. The work in this commit pulls out the functions embedded in the old method_tests logic and places them in a location that both the old method_tests and OpInfo tests can use.
Specifically: created torch/testing/_internal/common_jit.py from functions and methods in torch/testing/_internal/jit_utils.py and test/test_jit.py. Also created a new intermediate class, JitCommonTestCase, to house the moved methods, and slightly modified jit_metaprogramming_utils.py to work for OpInfo tests.
Test Plan: Imported from OSS
Reviewed By: mruberry
Differential Revision: D25212437
Pulled By: Lilyjjo
fbshipit-source-id: 97bc52c95d776d567750e7478fac722da30f4985
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46703
Previously, we would compile only one side of an if-statement if it was a type-based expression we could statically resolve. I think it's reasonable to extend this metacompilation to booleans that are constant at compile time. There have been some instances where I've recommended unintuitive workarounds due to not having this behavior.
This is also possibly needed if we add boolean literals to schema declarations, which is a feature that might be needed to cleanup our `boolean_dispatch` mechanism.
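For illustration, a minimal sketch of the pattern this enables (assuming a `torch.jit.Final` bool attribute is treated as a compile-time constant; `python_only` is a hypothetical unscriptable helper):
```python
import torch

def python_only(x):
    return x.numpy()  # not scriptable

class M(torch.nn.Module):
    use_scripted_path: torch.jit.Final[bool]

    def __init__(self):
        super().__init__()
        self.use_scripted_path = True

    def forward(self, x):
        if self.use_scripted_path:
            return x * 2
        # With boolean metacompilation, this branch is skipped at compile time
        # because the condition is a constant True.
        return python_only(x)

m = torch.jit.script(M())
```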
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46721
Reviewed By: ppwwyyxx
Differential Revision: D25008862
Pulled By: eellison
fbshipit-source-id: 5bc60a18f1021c010cb6abbeb5399c669fe04312
Summary:
Fix for https://github.com/pytorch/pytorch/issues/46122
For `Any`, we infer the type of the ivalue to set the ivalue's type tag. When we saw a Tensor, we would use a specialized Tensor type, so when `Dict[str, Tensor]` was passed in as an `Any` arg it would be inferred as `Dict[str, Float(2, 2, 2, 2)]`, which breaks runtime `isinstance` checking.
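A minimal sketch of the failing pattern (illustrative function, not the test itself):
```python
import torch
from typing import Any, Dict

@torch.jit.script
def is_str_tensor_dict(x: Any) -> bool:
    # A Dict[str, Tensor] passed as Any should match the unspecialized type.
    return isinstance(x, Dict[str, torch.Tensor])

print(is_str_tensor_dict({"w": torch.ones(2, 2)}))  # True
```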
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46130
Reviewed By: glaringlee
Differential Revision: D24261447
Pulled By: eellison
fbshipit-source-id: 8a2bb26ce5b6c56c8dcd8db79e420f4b5ed83ed5
Summary:
Inside IValue.h, we previously printed -0.0 as 0.0, which caused inconsistencies when using -0.0.
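For illustration, a minimal sketch of the behavior the fix restores (in the spirit of the test case described below):
```python
import torch

@torch.jit.script
def div_by_neg_zero(x: torch.Tensor) -> torch.Tensor:
    # The constant -0.0 must keep its sign so that division matches eager mode.
    return x / -0.0

print(div_by_neg_zero(torch.ones(2)))  # tensor([-inf, -inf])
```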
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47081
Test Plan:
A new test case inside test_jit that divides a tensor by -0. and checks if it outputs -inf for all modes.
Fixes https://github.com/pytorch/pytorch/issues/46848
Reviewed By: mrshenli
Differential Revision: D24688572
Pulled By: gmagogsfm
fbshipit-source-id: 01a9d3f782e0711dd10bf24e6f3aa62eee72c895
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47211
The `training` attribute is getting shadowed by the default one set on all modules,
and the `__setattr__` on the TracedModule object prevents setting it correctly. For example:
```python
import torch

inp = torch.zeros(1, 3, 224, 224)
model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()
print(model.training)
with torch.no_grad():
    traced = torch.jit.trace(model, inp)
print(traced.training)
traced.eval()
print(traced.training)
traced.training = False
print(traced.training)
torch.jit.freeze(traced)
```
Test Plan: Imported from OSS
Reviewed By: suo
Differential Revision: D24686690
Pulled By: zdevito
fbshipit-source-id: 9c1678dc68e9bf83176e9f5a20fa8f6bff5d69a0
Summary:
If no annotation is given, we want to show users that the type was inferred.
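For illustration, a minimal sketch of the situation (the exact error wording is not reproduced here):
```python
import torch

@torch.jit.script
def double(x):  # no annotation: x is inferred to be a Tensor
    return x * 2

try:
    double("not a tensor")
except RuntimeError as e:
    # The message should indicate that x's Tensor type was inferred, not annotated.
    print(e)
```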
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46969
Test Plan:
Added a new test case that throws an error with the expected error message
Fixes https://github.com/pytorch/pytorch/issues/46326
Reviewed By: ZolotukhinM
Differential Revision: D24614450
Pulled By: gmagogsfm
fbshipit-source-id: dec555a53bfaa9cdefd3b21b5142f5e522847504
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46686
I was trying to page this code back in after a while and some things
stuck out as unnecessarily confusing.
1. Improve documentation of closures and fork stuff to be more accurate
to how we use them today.
2. Change `prim::LocalVariableScope` to `prim::ListComprehension`. It is
only ever used for a list comprehensions, and in general the nodes
emitted by `ir_emitter` should correspond to concrete operations or
language features rather than semantic constraints.
3. Change the somewhat mysterious "inputs" and "attributes" argument
names throughout the codebase to be the more obvious "args" and "kwargs"
that they generally represent (I think "inputs" and "attributes" come
from the AST naming).
Test Plan: Imported from OSS
Reviewed By: navahgar, jamesr66a
Differential Revision: D24464197
Pulled By: suo
fbshipit-source-id: 1f4b1475b58b5690a0b204e705caceff969533b4
Summary:
It used to be that TorchScript only supported hashing of `int`, `float` and `str`. This PR adds hashing for many other types including `Tuple`, `bool`, `device` by implementing generic hashing on IValue.
* Tensor hashing follows eager behavior, which is identity-based (hash according to pointer address rather than tensor content).
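A minimal sketch of the new coverage (illustrative function, not from the tests):
```python
import torch
from typing import Tuple

@torch.jit.script
def hash_things(t: Tuple[int, str], flag: bool) -> Tuple[int, int, int]:
    # hash() on a tuple, a bool, and a device; previously only int/float/str were hashable.
    return hash(t), hash(flag), hash(torch.device("cpu"))
```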
Fixes https://github.com/pytorch/pytorch/issues/44038
This is based on suo's https://github.com/pytorch/pytorch/issues/44047, with some cleaning, more tests and fixing BC check issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46441
Reviewed By: robieta
Differential Revision: D24440713
Pulled By: gmagogsfm
fbshipit-source-id: 851f413f99b6f65084b551383ad21e558e7cabeb
Summary:
As per title. Limitations: only for batches of square, full-rank matrices.
CC albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46284
Reviewed By: zou3519
Differential Revision: D24448266
Pulled By: albanD
fbshipit-source-id: d98215166268553a648af6bdec5a32ad601b7814
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal.
Makes them more readable and possibly faster. Care has to be taken because `list(map(f, xs))` (or a list comprehension) applies the function immediately, while `(f(x) for x in xs)` is a generator expression which gets evaluated lazily. This is a benefit in some cases where it is not required to actually create the list of values in memory (e.g. when passing to `tuple` or `extend` or `join`).
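A small illustration of the eager-vs-lazy distinction described above (hypothetical values):
```python
xs = range(5)

eager = [str(x) for x in xs]   # the full list is built immediately
lazy = (str(x) for x in xs)    # nothing is evaluated until consumed

print(",".join(lazy))          # consumes the generator without an intermediate list
```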
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462
Reviewed By: zou3519
Differential Revision: D24422343
Pulled By: ezyang
fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46601
* except excluded tests and magic methods.
https://github.com/pytorch/pytorch/issues/38731
Previously, we'd only run these tests for in-place operations. Since this is a lot more tests, fixed these issues that came up when running them:
- Updated schema of conj() to reflect existing behaviour.
- Updated deepEquals method in check_alias_annotation.cpp to re-use the overloaded == operator. Previous implementation did not cover all types of IValues.
- Corrected the order inputs are passed in during autograd testing of 'view' & 'reshape'.
- Subbed out aten::ger with the function it's aliased to, aten::outer, for testing. The alias annotation checking code doesn't handle aliased operators properly.
ghstack-source-id: 114830903
Test Plan: Ran all tests in test:jit and verified they pass.
Reviewed By: eellison
Differential Revision: D24424955
fbshipit-source-id: 382d7e2585911b81b1573f21fff1d54a5e9a2054