Commit Graph

1978 Commits

Han Qi
5547741960 Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74353

Repatched `d00de0d43598522b8f6ab2de553b6aaf6768faa5` by Nora Belrose (norabelrose), with the following changes:
* Register a fake source for the generated methods in linecache so that inspect.getsource will succeed.
* This patching is only triggered if the given dataclass was previously passed to torch.jit.script. Effectively, this makes the feature opt-in.

## Original Summary:
Fixes #72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. `torch/jit/_dataclass_impls.py` has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this, but `!=` is not working in this PR at the moment.
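
As a rough illustration of what this enables, here is a hypothetical scripting example (the dataclass and function below are invented for illustration, not taken from the PR's tests):

```
import torch
from dataclasses import dataclass

@dataclass(order=True)
class Point:
    x: float
    y: float

@torch.jit.script
def lessthan(a: Point, b: Point) -> bool:
    # Uses the comparison methods synthesized for order=True.
    return a < b

print(lessthan(Point(1.0, 2.0), Point(3.0, 4.0)))
```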

Test Plan:
unittest

Also run previously failed test:
```
buck test mode/dev-nosan //fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests -- --exact 'fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests - test_mixmatch_multiclass (fblearner.flow.projects.fluent2.definition.transformers.contrib.faim.test.faim_mixmatch_test.TestFaimTransformerMixMatch)'
```
passes

Differential Revision: D35206262

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74889
Approved by: https://github.com/zhxchen17
2022-03-31 00:20:48 +00:00
Elias Ellison
aacdf291e0 [JIT] Make aot autograd decompositions usable in JIT, add script for serializing the decompositions (#73938)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73938

This is a first step toward porting all of the decompositions defined in [functorch](https://github.com/pytorch/functorch/blob/main/functorch/_src/decompositions.py#L349) and making them usable in core and in the JIT, as well as in C++.

The decompositions are defined in Python, scripted and inlined, and then serialized as C++ code which TorchScript can parse. The workflow is: edit the Python decomposition file, then run [tools/codegen/decompositions/gen_jit_decompositions.py](https://github.com/pytorch/pytorch/pull/73938/files#diff-6adef2116be233c3524e3b583e373ab0ffc9169beb6c1f6d96b5d0385e75afa1).

Decompositions are mapped to their corresponding aten schemas via the schema in their Python def. This allows multiple decompositions for an overloaded op like `aten.var` (shown here in the example).
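
For flavor, a minimal self-contained sketch of what such a Python decomposition looks like (illustrative only; the real ported decompositions live in the functorch file linked above):

```
import torch

@torch.jit.script
def var_decomposition(x: torch.Tensor, unbiased: bool = True) -> torch.Tensor:
    # Decompose var into primitive ops, in the style of the functorch
    # decompositions this PR ports (illustrative, not the PR's code).
    n = x.numel()
    mean = x.sum() / n
    sq = (x - mean) * (x - mean)
    denom = n - 1 if unbiased else n
    return sq.sum() / denom

x = torch.randn(8)
assert torch.allclose(var_decomposition(x), torch.var(x))
```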

This is just a first PR; I'm sure there will be many follow-ups, such as:
- making these runnable in C++ with the simple executor
- porting over more decompositions from AOT Autograd
- using OpInfos / more robust testing
- categorizing decompositions
- hooking in decompositions at various points of JIT execution

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D34938126

Pulled By: eellison

fbshipit-source-id: 9559a7cb731982e3a726f2f95af498b84fb09c13
(cherry picked from commit a4e0e748791e378e7e12a9dd0b63fb3c62dc1890)
2022-03-29 18:38:52 +00:00
Elias Ellison
6694fdaccd Clean up profiling mode and profiling executor strategy (#73875)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73875

Previously we had a few settings:
- getExecutor - which toggled between the Profiling Executor and the Legacy executor
- getGraphOptimize - if true, overrides PE/Legacy to run with the simple executor (no optimizations)

and then...
- getProfilingMode - which would set PE to 0 specializations.

The last mode is redundant with getGraphOptimize; we should just remove it and use getGraphOptimize in these cases. It would lead to potentially invalid combinations of logic - what does it mean if getProfilingMode is true but getExecutor is set to false? This would lead to a bug in specialize_autograd_zero in this case; see: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/passes/specialize_autogradzero.cpp#L93.
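
For context, a sketch of the Python-side knobs that sit on top of these C++ settings (the torch._C binding names below are assumptions based on this era's bindings, not part of this PR):

```
import torch

torch._C._jit_set_profiling_executor(True)   # backs getExecutor
torch._C._set_graph_executor_optimize(True)  # backs getGraphOptimize
torch._C._jit_set_profiling_mode(True)       # backs getProfilingMode (the
                                             # redundant setting removed here)
```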

The tests here are failing but get fixed with the PR above it, so I'll squash for landing.

Test Plan: Imported from OSS

Reviewed By: cpuhrsch

Differential Revision: D34938130

Pulled By: eellison

fbshipit-source-id: 1a9c0ae7f6d1cfddc2ed3499a5af611053ae5e1b
(cherry picked from commit cf69ce3d155ba7d334022c42fb2cee54bb088c23)
2022-03-29 18:38:51 +00:00
Davit Kobaladze
8e12d2bf25 fixes torch.jit.script lp_pool bug. (#73287)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60258

I used the solution proposed in https://github.com/pytorch/pytorch/issues/61275. That solution failed unit tests, and there was no progress after 2021-08-07. I'm willing to fix problems if they arise during CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73287

Reviewed By: navahgar, zou3519

Differential Revision: D35057812

Pulled By: eellison

fbshipit-source-id: 8e82e9f73b9536979aecf476c5c65336cdffc93a
(cherry picked from commit e85e912a4edec1111623c5cbbba4171fe3bc5b1d)
2022-03-28 23:16:07 +00:00
Slava Kovalevskyi
3b3bdfd51c Revert D34808842: Reland "[pytorch][PR] Support dataclasses in TorchScript"
Test Plan: revert-hammer

Differential Revision:
D34808842 (b57cc9c752)

Original commit changeset: 02f807cff1ea

Original Phabricator Diff: D34808842 (b57cc9c752)

fbshipit-source-id: bd7c47493b598677e77634d06d7dc3e3a457b92d
(cherry picked from commit e1853d73b3ad2494457626fbb34c65169ae8cc31)
2022-03-25 17:17:30 +00:00
Han Qi
b57cc9c752 Reland "[pytorch][PR] Support dataclasses in TorchScript" (#74353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74353

Repatched `d00de0d43598522b8f6ab2de553b6aaf6768faa5` by Nora Belrose (norabelrose), with the following changes:
* Register a fake source for the generated methods in linecache so that inspect.getsource will succeed.
* This patching is only triggered if the given dataclass was previously passed to torch.jit.script. Effectively, this makes the feature opt-in.

## Original Summary:
Fixes #72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. `torch/jit/_dataclass_impls.py` has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this, but `!=` is not working in this PR at the moment.

Test Plan:
unittest

Also run previously failed test:
```
buck test mode/dev-nosan //fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests -- --exact 'fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests - test_mixmatch_multiclass (fblearner.flow.projects.fluent2.definition.transformers.contrib.faim.test.faim_mixmatch_test.TestFaimTransformerMixMatch)'
```
passes

Reviewed By: zhxchen17

Differential Revision: D34808842

fbshipit-source-id: 02f807cff1ea99e606333960225c71a239743a4b
(cherry picked from commit ec885a2bc04f9e5f65838fa5704d9a05815ebd37)
2022-03-25 06:41:07 +00:00
Han Qi
75d6cbe605 [4/5]Testing jit module in flatbuffer in Python. (#74387)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74387

Make temporary python bindings for flatbuffer to test ScriptModule save / load.

(Note: this ignores all push blocking failures!)

Test Plan: unittest

Reviewed By: iseeyuan

Differential Revision: D34968080

fbshipit-source-id: d23b16abda6e4b7ecf6b1198ed6e00908a3db903
(cherry picked from commit 5cbbc390c5f54146a1c469106ab4a6286c754325)
2022-03-24 23:29:47 +00:00
David Berard
15c98700ed Add CPU slow test job (#73748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73748

This adds CPU-only slow test jobs, which previously would never run.

Includes fixes/skips for slow tests which fail (they need to be skipped now because they used to never run)

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D34628803

Pulled By: davidberard98

fbshipit-source-id: c090ab7bf7bda9e24ec5cdefa6fd35c6310dbac0
(cherry picked from commit 06f7a94a57cc7023e9c5442be8298d20cd011144)
2022-03-23 21:17:27 +00:00
jjsjann123
fde282fc23 supporting complex with requires_grad in autodiff (#74339)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/65480

autodiff should propagate requires_grad for complex tensors as well as float tensors.
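
A hypothetical repro sketch in the spirit of the linked issue (simplified; not the PR's test):

```
import torch

@torch.jit.script
def f(x):
    return x * 2 + x

x = torch.randn(4, dtype=torch.cfloat, requires_grad=True)
for _ in range(3):  # warm up so the profiling executor can specialize
    y = f(x)
assert y.requires_grad  # requires_grad now propagates for complex dtypes
y.sum().abs().backward()
```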

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74339

Reviewed By: anjali411

Differential Revision: D34967622

Pulled By: eellison

fbshipit-source-id: 89d23469294c0191f3a5d1c8e1df3d34acc94056
(cherry picked from commit 712f8bdf03b072ab6f4ab90a64ccaad11d64c862)
2022-03-21 21:32:24 +00:00
gmagogsfm
fdd12a9f4c Support tensor.__getitem__() in TorchScript compilation (#73952)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73952
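
A minimal sketch of the call form this enables (assumed equivalent to plain indexing):

```
import torch

@torch.jit.script
def first_row(x: torch.Tensor) -> torch.Tensor:
    # Explicit __getitem__ call; assumed to compile the same as x[0].
    return x.__getitem__(0)

print(first_row(torch.arange(6).reshape(2, 3)))
```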

Reviewed By: tugsbayasgalan

Differential Revision: D34743346

Pulled By: gmagogsfm

fbshipit-source-id: 2273c289c2224166cb1eed10a138d4ac7043ed83
(cherry picked from commit 37aefb9a95e0df4586bb623a1aaa974fbe799687)
2022-03-11 01:45:18 +00:00
Apoorva Garg
63932edcc7 Back out "[pytorch][PR] Support dataclasses in TorchScript"
Summary:
Original commit changeset: f5a792555c88

Original Phabricator Diff: D34398107 (d00de0d435)

Backing out as this broke fluent2 tests

Test Plan: sandcastle

Reviewed By: qihqi

Differential Revision: D34597363

fbshipit-source-id: 26bbe64b981aeb53b901cda61557614d9f28700e
(cherry picked from commit f17adfed8125ef84efaf2c8923c11a751eb7fb98)
2022-03-03 14:30:54 +00:00
bing
dc81ba1f9f parse TernaryIf as right associative, fix #68221 (#68416)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68221

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68416

Reviewed By: gchanan

Differential Revision: D32819402

Pulled By: eellison

fbshipit-source-id: c32d9fcf49e24cc0df877b794dfcb8df7c7a6d78
(cherry picked from commit 8a5a1000859bb4bdbf84730b4b137a3ec171151f)
2022-03-01 23:28:14 +00:00
Nora Belrose
d00de0d435 Support dataclasses in TorchScript (#73066)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. `torch/jit/_dataclass_impls.py` has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this but `!=` is not working in this PR at the moment.

qihqi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73066

Reviewed By: mrshenli

Differential Revision: D34398107

Pulled By: qihqi

fbshipit-source-id: f5a792555c88f3631f97837a96687e4890660a32
(cherry picked from commit ea7f077dc49a4ee75ca0d1409aedd85228952881)
2022-02-28 19:34:20 +00:00
Philip Meier
0973c5a1cc align signature of make_tensor with other creation ops (#72702)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72702

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D34457729

Pulled By: mruberry

fbshipit-source-id: 83d580c4201eef946dc9cf4b9e28a3d36be55609
(cherry picked from commit aa4cf20fbeb4b795595729b8ac2e6ba7707d8283)
2022-02-25 06:30:31 +00:00
Elias Ellison
8bc28e9c9c [JIT] Add more python ir utilities (#69871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69871

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33515232

Pulled By: eellison

fbshipit-source-id: d48da7b398a3f1a8862789484a4035d874196763
(cherry picked from commit e5976b8b7a4995be25a93601bbae5c52d6d3fca8)
2022-02-25 01:07:05 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
e59403fe2a Make TS recognize input arg name (#73253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73253

This PR allows TS schema matching to match the `input` arg with `self` for aten operators. This is because operators in their functional form have `input` as a parameter instead of `self`.

fixes: https://github.com/pytorch/pytorch/issues/71994
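
A hypothetical example of the kind of keyword call this lets schema matching accept (illustrative):

```
import torch

@torch.jit.script
def total(x: torch.Tensor) -> torch.Tensor:
    # `input` is the name used by the functional form; schema matching
    # should now line it up with the schema's `self` parameter.
    return torch.sum(input=x)

print(total(torch.ones(3)))
```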

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D34427556

Pulled By: tugsbayasgalan

fbshipit-source-id: 96c2340d605c59634bf6e37db1db6025d93a933a
(cherry picked from commit 45a593d73bc5e6308dd80a4a29afed8e318a0a1c)
2022-02-24 20:38:15 +00:00
Alban Desmaison
3bd1507ff2 Revert D33994011: Make debug_pkl smaller by only emitting unique traces.
Test Plan: revert-hammer

Differential Revision:
D33994011 (3d37f5b052)

Original commit changeset: 8e6224c6e942

Original Phabricator Diff: D33994011 (3d37f5b052)

fbshipit-source-id: 885e739efa1081382e1fcf9c6cccba92c57e9f7a
(cherry picked from commit a6d98c85a736c2eb321a6f38005dd0f5dc43eb87)
2022-02-24 16:38:55 +00:00
Han Qi
3d37f5b052 Make debug_pkl smaller by only emitting unique traces. (#72596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72596

The debug_pkl file inside PyTorch's .pt file consists of a list of SourceRanges. Each SourceRange points to a Source, which is a stack trace, a filename, and start/end numbers. Those are emitted in the debug_pkl file as strings.

Since many SourceRanges share the same Source, the string for each trace can be deduped.

The newer format saves the set of unique traces in a tuple, then each SourceRange saves the offset of its trace w.r.t. its position in that tuple (i.e., manually applying dictionary compression).

The above helps with smaller file size. On loading, if we copied each trace into a Source as a string, the runtime memory would still blow up.
To mitigate this, we use SourceView directly instead of Source, which takes a reference to the string inside the Deserializer and makes it a string_view. This is safe because the Deserializer is held by the Unpickler via shared_ptr, and the Unpickler is also held via shared_ptr by another Source object. That Source object will be alive during model construction.
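
A minimal Python sketch of the dedup idea, illustrative only (the real change is in the C++ serializer):

```
# Source ranges, each carrying a (possibly repeated) trace string.
ranges = [("trace_a", 0, 10), ("trace_a", 11, 20), ("trace_b", 0, 5)]

# Store each unique trace once (order-preserving), then reference by offset.
unique = tuple(dict.fromkeys(t for t, _, _ in ranges))
index = {t: i for i, t in enumerate(unique)}
packed = [(index[t], s, e) for t, s, e in ranges]

# Serialize (unique, packed) instead of repeating every trace string.
print(unique, packed)
```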

Test Plan:
unit test

Took original file (312271638_930.predictor.disagg.local); loaded with `torch.jit.load` save again with `torch.jit.save`. Unzip both, look at contents:
```
[qihan@devvm5585.vll0 ~]$ du archive -h
4.0K    archive/xl_model_weights
3.7M    archive/extra
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform
8.0K    archive/code/__torch__/caffe2/torch/fb
8.0K    archive/code/__torch__/caffe2/torch
8.0K    archive/code/__torch__/caffe2
20M     archive/code/__torch__/torch/fx/graph_module
20M     archive/code/__torch__/torch/fx
8.0K    archive/code/__torch__/torch/classes
20M     archive/code/__torch__/torch
20M     archive/code/__torch__
20M     archive/code
2.7M    archive/constants
35M     archive
[qihan@devvm5585.vll0 ~]$ du resaved -h
4.0K    resaved/extra
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform
8.0K    resaved/code/__torch__/caffe2/torch/fb
8.0K    resaved/code/__torch__/caffe2/torch
8.0K    resaved/code/__torch__/caffe2
1.3M    resaved/code/__torch__/torch/fx/graph_module
1.3M    resaved/code/__torch__/torch/fx
8.0K    resaved/code/__torch__/torch/classes
1.4M    resaved/code/__torch__/torch
1.4M    resaved/code/__torch__
1.4M    resaved/code
2.7M    resaved/constants
13M     resaved
[qihan@devvm5585.vll0 ~]$
```

Reviewed By: JasonHanwen

Differential Revision: D33994011

fbshipit-source-id: 8e6224c6e942e91c3403f686c8f0937d1002ed41
(cherry picked from commit a7014dd4029308c95007f362a57c31796d686647)
2022-02-24 09:31:16 +00:00
Shunting Zhang
763ad1bf25 (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#72899)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72899

Reland D33282878 (911d527b87). This is the frontend change.
ghstack-source-id: 149204031

Test Plan: Refer to D33282878 (911d527b87). Also check CI

Reviewed By: gmagogsfm

Differential Revision: D34252127

fbshipit-source-id: 27b17ddd4d05d904eb91fd9ee094d9121f00e388
(cherry picked from commit 1d276baca3)
2022-02-16 03:45:15 +00:00
Michael Suo
7db4a48d92 Revert D33342569: (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change
Test Plan: revert-hammer

Differential Revision:
D33342569 (856157fcee)

Original commit changeset: 57984ac67ae2

Original Phabricator Diff: D33342569 (856157fcee)

fbshipit-source-id: 4c12235a1776a3652e7f91e93b626705759d5176
(cherry picked from commit 4cbd7d8bab)
2022-02-15 18:45:44 +00:00
Shunting Zhang
856157fcee (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#70471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70471

Reland D33282878 (911d527b87). This is the frontend change.
ghstack-source-id: 149114933

Test Plan: Refer to D33282878 (911d527b87). Also check CI

Reviewed By: gmagogsfm

Differential Revision: D33342569

fbshipit-source-id: 57984ac67ae2c56c38f72d3b1fb69105901fb472
(cherry picked from commit b47cc935ee)
2022-02-15 07:21:19 +00:00
Nikita Shulga
dc5cda0cca Update min python version to 3.7 in setup.py and mypy configs (#71494)
Summary:
As Python-3.6 have reached EOL

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71494

Reviewed By: atalman

Differential Revision: D33667509

Pulled By: malfet

fbshipit-source-id: ab1f03085cfb9161df77ba5ce373b81f5e7ef3ae
(cherry picked from commit 60343166d9)
2022-01-20 00:03:57 +00:00
John Clow
dabcbb2726 Testing for Default Inference for Device Type (#69052)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69052

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D33555888

Pulled By: Gamrix

fbshipit-source-id: dbd43ebfc1bea4b17a96bdd378ea730ccf5944b2
2022-01-13 13:59:12 -08:00
Elias Ellison
97e8dcba5e Fix mis-specified device arg name (#69645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69645

As noted in code comment:
The existing device operator is registered with input name `a`, which prevents `torch.device(type="cuda")` from working; add a shim layer here.
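
A minimal sketch of the call the shim unblocks (illustrative):

```
import torch

@torch.jit.script
def make_device() -> torch.device:
    # The keyword form previously failed to resolve because the operator's
    # input was registered under the name `a`.
    return torch.device(type="cuda")
```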

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33515231

Pulled By: eellison

fbshipit-source-id: c04af8158a9568a20cd5fbbbd573f6efab98fd60
2022-01-11 22:11:24 -08:00
John Clow
80659b71a5 Hoisting common expressions out of If blocks [retry] (#65645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65645

This is a retry of PR: https://github.com/pytorch/pytorch/pull/59492

Latest changes: added more tests, added the getOrCreateDB pattern, updated logic to remove unnecessary checks, and addressed all comments.

Adding code to find common expressions from the two subblocks of an if
operation and hoist them before the if block.
This also allows Dead Code Elimination to
then eliminate some if blocks.
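
A hypothetical input for the pass (invented for illustration):

```
import torch

@torch.jit.script
def f(x: torch.Tensor, flag: bool) -> torch.Tensor:
    # Both branches compute x * 2; the pass hoists it above the prim::If,
    # which can then let DCE simplify further.
    if flag:
        y = x * 2 + 1
    else:
        y = x * 2 - 1
    return y
```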

Test Plan: python test_jit.py TestIfHoisting

Reviewed By: eellison

Differential Revision: D33302065

Pulled By: Gamrix

fbshipit-source-id: a5a184a480cf07354359aaca344c6e27b687a3c2
2022-01-10 13:28:17 -08:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
70b18b9511 Fix comment indentation issue (#70227)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/70227

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D33251107

Pulled By: tugsbayasgalan

fbshipit-source-id: 293ffe5dde38480ea13963a2d7e1eb99dc594d22
2022-01-06 19:14:39 -08:00
Joel Schlosser
7b8f73dd32 No-batch-dim support for ConvNd (#70506)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/70506

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33355034

Pulled By: jbschlosser

fbshipit-source-id: 5a42645299b1d82cee7d461826acca1c5b35a71c
2022-01-06 16:53:50 -08:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
b0fdca8855 Bump version number to 7 and compile old operators with old schema (#68358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68358

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33433730

Pulled By: tugsbayasgalan

fbshipit-source-id: 202c58365bae13195d3545cefcb0da9162b02151
2022-01-05 23:57:22 -08:00
Michael Suo
0ece9a49d7 Revert D33198155: Bump version number to 7 and compile old operators with old schema
Test Plan: revert-hammer

Differential Revision:
D33198155 (d35fc409ad)

Original commit changeset: 38a1185f9ecb

Original Phabricator Diff: D33198155 (d35fc409ad)

fbshipit-source-id: 411aaeb4e047aad9202db50d4d0f2ff35bc51f9d
2022-01-04 13:44:59 -08:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
d35fc409ad Bump version number to 7 and compile old operators with old schema (#68358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68358

Test Plan: Imported from OSS

Reviewed By: samdow

Differential Revision: D33198155

Pulled By: tugsbayasgalan

fbshipit-source-id: 38a1185f9ecb34a33f737ad0b060b3490956300c
2022-01-04 01:31:25 -08:00
Bo Wu
bf610f08b0 Back out "Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions"
Summary: as title

Test Plan:
```
buck run mode/opt-split-dwarf -c=python.package_style=inplace //ai_infra/distributed_ai/pyper_test_framework/templates:pyper_release_v2 -- --model inline_cvr_post_imp_deterministic_shrunk_pyper_release_v2 --cluster TSCTestCluster --hpc_identity oncall_pyper_oncall --stage prod_offline_training --test_module training_platform
...
############## Start inline_cvr_post_imp_model Test Results Analysis ##############
I1226 22:03:56.789000 3346280 test_driver.py:139  UNKNOWN     ] Test finished in 808.2743511786684 seconds.
+-------------------------+---------+------------------------+-----------------+
| Test Case               | Status  | Message                | Model Entity ID |
+-------------------------+---------+------------------------+-----------------+
| SmallWorld_release_test | Success | finished successfully. | 987987491       |
+-------------------------+---------+------------------------+-----------------+
I1226 22:03:56.790000 3346280 test_driver.py:143  UNKNOWN     ] test_run_id: 3d085f61-28d1-411d-bd27-940ea2554b23 use this id to find your run in scuba pyper_test_framework
I1226 22:03:56.792000 3346280 test_driver.py:160  UNKNOWN     ] Calling cleanup
I1226 22:03:56.792000 3346280 training_platform_test_launcher.py:385  UNKNOWN     ] Stopping launched jobs 1
I1226 22:03:59.563122 3346280 ClientSingletonManager.cpp:100] Shutting down Manifold ClientSingletonManager
```

Reviewed By: seemethere

Differential Revision: D33325936

fbshipit-source-id: 64414bf7061ad77e8ac12eb8abafee4043e0fa1e
2021-12-27 09:11:46 -08:00
Shunting Zhang
911d527b87 Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions (#70339)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70339

When a Python program is translated to TorchScript, the Python exception type is dropped. This makes users' lives hard when they need to categorize errors based on more than just the exception message.

Here we make the change so that when we raise a Python exception, we record the fully qualified class name of the exception. Later on, when the TorchScript is interpreted, a special exception CustomJITException is thrown. Users can get the Python class name from CustomJITException::getPythonClassName.
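
A minimal sketch of the kind of scripted raise whose class name is now recorded (illustrative):

```
import torch

@torch.jit.script
def checked(x: float) -> float:
    if x < 0.0:
        # The fully qualified name (builtins.ValueError) is recorded, so
        # callers can categorize by exception type, not just message.
        raise ValueError("x must be non-negative")
    return x
```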

Note that this diff does not customize the mapping from C++ exceptions to Python exceptions. It's left to the users to do whatever mapping they want.

Code under scripts/shunting are just my own experimental code. I can split them out if requested.
ghstack-source-id: 146221879

Test Plan: buck test mode/opt //caffe2/test:jit

Reviewed By: gmagogsfm

Differential Revision: D33282878

fbshipit-source-id: 910f67a764519f1053a48589d1a34df69001525d
2021-12-24 00:25:40 -08:00
Chen Lai
c321d4c1ca [Operator Versioning] Split the upgrader test to a separate file and cover mobile part (#70090)
Summary:
1. Split the test `test_save_load.py` into two files. Basically, move the operator-versioning-related changes to `test_save_load_for_op_versions.py`.
2. Add mobile module related test to `test_save_load_for_op_versions.py`

How to run:
```
buck test mode/opt //caffe2/test:jit
or
python test/test_jit.py TestSaveLoadForOpVersion
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70090

ghstack-source-id: 146103547

Test Plan:
```
buck test mode/opt //caffe2/test:jit
python test/test_jit.py TestSaveLoadForOpVersion
```

Reviewed By: tugsbayasgalan

Differential Revision: D33180767

fbshipit-source-id: dd31e313c81e90b598ea9dd5ad04a853c017f994
2021-12-21 13:08:01 -08:00
David Berard
ebc35a7ead [JIT] Enable freezing for sparse COO tensors (#69614)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69614

Previously sparse COO tensors were ignored during freezing, because
`tryInsertConstant` would fail during `freeze_module.cpp`, and because
hashes weren't implemented for COO tensor IValues.
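
A hypothetical sketch of what freezing can now handle (invented example, not the PR's test):

```
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # A sparse COO attribute; previously skipped during freezing.
        self.w = torch.sparse_coo_tensor([[0, 1]], [1.0, 2.0], (2,))

    def forward(self) -> torch.Tensor:
        return self.w

frozen = torch.jit.freeze(torch.jit.script(M()).eval())
print(frozen.graph)
```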

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D32954620

Pulled By: davidberard98

fbshipit-source-id: a91f97fdfc2152b417f43a6948100c94970c0831
2021-12-14 15:43:50 -08:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
20f7c893c1 Populate runtime with upgrader graph (#68773)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68773

Test Plan: Imported from OSS

Reviewed By: qihqi, gmagogsfm

Differential Revision: D32603258

Pulled By: tugsbayasgalan

fbshipit-source-id: 6fa0b7ee4ebe46c9aa148923c6ef3e1de106ad13
2021-12-11 13:44:24 -08:00
Nik B
2d5b3101c1 Added ScriptFunction pkl exception for issue #61210 #61381 (#67076)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61381, https://github.com/pytorch/pytorch/issues/61210

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67076

Reviewed By: jbschlosser

Differential Revision: D32908175

Pulled By: suo

fbshipit-source-id: f6e175793243dc96cde5e44022d92f2623b934eb

Co-authored-by: LucaStubbe <stubbeluca@gmail.com>
Co-authored-by: Kanon Tromp <ktromp1@student.cccd.edu>
2021-12-09 09:44:49 -08:00
John Clow
adb619a193 Adding hardswish, opinfo tests to custom rules (#69399)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69399

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D32937576

Pulled By: Gamrix

fbshipit-source-id: 0e53d9e6669e70abcc744399f022a902214ef213
2021-12-08 11:56:34 -08:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
2ea70a6462 Allow Union of scalars to be NumberType (#66591)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66591

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D31632599

Pulled By: tugsbayasgalan

fbshipit-source-id: 374065da1d91334a19c15c604faf13ebec1681f6
2021-12-02 10:52:02 -08:00
Alban Desmaison
28c519961f Follow the undefined Tensor <-> None rule better in torch dispatch (#67793)
Summary:
As per title. This in particular allows to more easily override backward function for which the underlying backend returns `None`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67793

Reviewed By: zou3519

Differential Revision: D32242962

Pulled By: albanD

fbshipit-source-id: 6e114def90ee9499161e1303d301ba7fd003ff89
2021-12-02 07:46:56 -08:00
Christian Puhrsch
75955e4ef8 [clone][sparse] Add torch._C._sparse namespace (#68672)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68672

This PR adds `python_module: sparse` to `native_function.yaml`.
These functions would appear in `torch._C._sparse` namespace instead of
just `torch`.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D32517813

fbshipit-source-id: 7c3d6df57a24d7c7354d0fefe1b628dc89be9431
2021-11-19 19:47:38 -08:00
jiej
ca92111758 Add native_dropout (#63937)
Summary:
Adds native_dropout to provide a reasonable target for TorchScript autodiff. native_dropout has scale and train as arguments in its signature; this makes native_dropout more consistent with other operators and removes conditionals in the autodiff definition.

cc gmagogsfm

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63937

Reviewed By: mruberry

Differential Revision: D32477657

Pulled By: ngimel

fbshipit-source-id: d37b137a37acafa50990f60c77f5cea2818454e4
2021-11-18 19:41:10 -08:00
Nikolay Korovaiko
ab1d879b33 [WIP] forbid aliasing between the outputs of a differentiable graph (#67732)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67732

Reviewed By: cpuhrsch

Differential Revision: D32522826

Pulled By: Krovatkin

fbshipit-source-id: 9fdf3509dcd1b885f7c7f06d22b340c0f93bbe12
2021-11-18 15:03:35 -08:00
Michael Suo
5c3529a86d [lint] small pass to make lint clean (#68367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68367

- bmm_test.py was using syntax not allowed in 3.6
- Some suppressions were not placed on the correct line.

With this file,
```
lintrunner --paths-cmd='git grep -Il .'
```
passes successfully.

Test Plan: Imported from OSS

Reviewed By: janeyx99, mrshenli

Differential Revision: D32436644

Pulled By: suo

fbshipit-source-id: ae9300c6593d8564fb326822de157d00f4aaa3c2
2021-11-16 10:27:00 -08:00
David Berard
bf60c6e71b [JIT] remove prim::SetAttr from list of ops with side effects (#68311)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68311

prim::SetAttr is listed as an op with side effects, but in AliasDb, `analyzeSetAttr` already accounts for its behavior. By removing it from the list of ops with side effects, dead code elimination will work in a few other scenarios.

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D32409510

fbshipit-source-id: 52ed9e19f92afb95c669ad3c2440f72f9515ba4c
2021-11-16 08:39:24 -08:00
Elias Ellison
6b44e75f6b aliasing fixes (#66977)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66977

Fix for https://github.com/pytorch/pytorch/issues/47218

More context is in original PR here: https://github.com/pytorch/pytorch/pull/20556

Test Plan: Imported from OSS

Reviewed By: malfet, albanD

Differential Revision: D31935573

Pulled By: eellison

fbshipit-source-id: 3658d5711116396c35f1d5016773b0096ed347a5
2021-11-09 18:33:37 -08:00
John Clow
ec8a71f9ac Dtype Analysis for Unary and Binary ops with Metatensors (#66898)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66898

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32175961

Pulled By: Gamrix

fbshipit-source-id: 72721259b900e5a311b6bcb5c350366ba420b734
2021-11-04 19:00:50 -07:00
Jane Xu
09c7771e9c Set test owners for jit tests (#66808)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66808

Reviewed By: mrshenli

Differential Revision: D31761414

Pulled By: janeyx99

fbshipit-source-id: baf8c49ff9c4bcda7b0ea0f6aafd26380586e72d
2021-10-25 07:51:10 -07:00
Nikita Shulga
77beccaedb Do not build PyTorch with caffe2 by default (#66658)
Summary:
CAFFE2 has been deprecated for a while, but it is still included in every PyTorch build.
We should stop building it by default, although CI should still validate that caffe2 code is buildable.

Build even fewer dependencies when compiling mobile builds without Caffe2
Introduce `TEST_CAFFE2` in torch.common.utils
Skip `TestQuantizedEmbeddingOps` and `TestJit.test_old_models_bc` if code is compiled without Caffe2
Should be landed after https://github.com/pytorch/builder/pull/864

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66658

Reviewed By: driazati, seemethere, janeyx99

Differential Revision: D31669156

Pulled By: malfet

fbshipit-source-id: 1cc45e2d402daf913a4685eb9f841cc3863e458d
2021-10-21 20:32:47 -07:00
Jane Xu
32e3003726 Have test classes extend from common_utils.TestCase, not unittest.TestCase (#66900)
Summary:
This causes some functionality to not work, such as disabling tests via issues, e.g. https://github.com/pytorch/pytorch/issues/66641

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66900

Reviewed By: seemethere

Differential Revision: D31778293

Pulled By: janeyx99

fbshipit-source-id: df3023ddaf7969ffb60117d1e1d7e36d87bc6139
2021-10-19 16:54:05 -07:00
Gary Miguel
543b7fb942 [JIT] Fix type annotations of pooling modules (#65847)
Summary:
All of the pooling modules except MaxUnpool and LPPool return either a
Tensor or [Tensor, Tensor]. The current type annotations are inaccurate
and prevent scripting the module if return_indices is set to True in the
module.

There's not a great way to make this agree with mypy because the
overload is dependent on the value of return_indices, an attribute.

I tried changing the annotations from `Tensor` to
`Union[Tensor, Tuple[Tensor, Tensor]]`, but that breaks a bunch of uses
that have return_indices=False.
For example, this breaks:
4e94e84f65/torch/nn/modules/container.py (L139)

Also clean up how test names were being constructed in test_jit, since
otherwise we were getting name collisions when there were two tests on
the same nn.Module.

Fixes https://github.com/pytorch/pytorch/issues/45904
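
A minimal sketch of the scripting pattern this unblocks (assumed behavior, simplified):

```
import torch

# Scripting with return_indices=True previously failed type checking.
pool = torch.nn.MaxPool2d(kernel_size=2, return_indices=True)
scripted = torch.jit.script(pool)
out, indices = scripted(torch.randn(1, 1, 4, 4))
```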

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65847

Reviewed By: ZolotukhinM

Differential Revision: D31462517

Pulled By: eellison

fbshipit-source-id: 6f9e8df1be6c75e5e1e9bae07cf3ad3603ba59bd
2021-10-14 10:59:19 -07:00
Natalia Gimelshein
7d9bbd3596 Revert D31580382: [pytorch][PR] dropout update in autodiff
Test Plan: revert-hammer

Differential Revision:
D31580382 (eb8138d886)

Original commit changeset: 41d15da99bf4

fbshipit-source-id: 59f751ee59602a5fd09c17f8c7565dca5e2beb50
2021-10-13 19:52:05 -07:00
jiej
eb8138d886 dropout update in autodiff (#66273)
Summary:
1. Unifies dropout op in autodiff
2. Removes dropout inference support in autodiff

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66273

Reviewed By: jbschlosser, gmagogsfm

Differential Revision: D31580382

Pulled By: eellison

fbshipit-source-id: 41d15da99bf4ce6c47cc335a4156c4a1c9705a70
2021-10-13 16:23:40 -07:00
lezcano
82a216c45b Add tensor.{adjoint(),H,mT,mH} methods and properties (#64179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64179

This PR follows the discussion in https://github.com/pytorch/pytorch/issues/45063#issuecomment-904431478

Fixes https://github.com/pytorch/pytorch/issues/45063

cc ezyang anjali411 dylanbespalko mruberry Lezcano nikitaved rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi heitorschueroff

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D30730483

Pulled By: anjali411

fbshipit-source-id: 821d25083f5f682450f6812bf852dc96a1cdf9f2
2021-10-13 07:44:43 -07:00
Natalia Gimelshein
09eb3e661c don't check 0 elements for cat symbolic diff (#65751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65751

Fixes symbolic script grad formula for cat to correctly handle empty tensors

Test Plan: Existing tests

Reviewed By: eellison

Differential Revision: D31208364

fbshipit-source-id: d676d9abcc033b56076fa946f58f3db50034502d
2021-09-29 09:34:03 -07:00
David Berard
8eb21488fd [JIT] Improve BatchMM mutability handling (#65097)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65097

Previously, BatchMM would skip any block containing any mutable
operators. Now it will avoid batching any operation whose inputs or
outputs are ever mutated. Specifically: consider a tree of ADD, T,
and MM nodes rooted at an ADD node.  If any input or output to any
node in the tree is ever mutated, then the entire tree will be ignored
by BatchMM.

Test Plan: python test/test_jit.py TestBatchMM

Reviewed By: eellison

Differential Revision: D30973515

Pulled By: davidberard98

fbshipit-source-id: 9d836faa1ef0c9e3fefe0ffc0bd265f275471f48
2021-09-16 10:46:14 -07:00
Ansley Ussery
c60075d4b5 Preserve types during empty container assignment (#58911)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58911

Stack from [ghstack](https://github.com/ezyang/ghstack):
* __->__ #58911

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D30785623

Pulled By: ansley

fbshipit-source-id: 4e05d6369318974290fea02ad2bc148293c25090
2021-09-10 16:49:21 -07:00
leslie-fang-intel
768014b3e6 Allow disabling cache in autocast (automatic mixed precision) (#63552)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63552

In this PR, we want to exclude these two cases from the `Autocast` weight cache usage:

- Using `torch.jit.trace` under `Autocast`
As reported in https://github.com/pytorch/pytorch/issues/50231 and several other discussions, when using `torch.jit.trace` under `Autocast`, the trace process hits Autocast's weight cache and fails. So we should disable the weight cache during tracing.
- Using `Autocast` with `Grad mode`

  - Usually we use `Grad mode` for training. Since the weights change at every step of training, we don't need to cache them.
  - For the recommended `Autocast` training case in the [doc](https://pytorch.org/docs/stable/amp.html), `Autocast` will clear the cache at every step when leaving the context. We should disable it to save those clear operations.
    ```
    model = Net().cuda()
    optimizer = optim.SGD(model.parameters(), ...)

    for input, target in data:
        optimizer.zero_grad()
        with autocast():
            output = model(input)
            loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    ```
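
A usage sketch under the assumption that this surfaces as a `cache_enabled` flag on autocast (treat the keyword as an assumption here, not the PR's confirmed API):

```
import torch

model = torch.nn.Linear(4, 4).cuda()
example = torch.randn(1, 4, device="cuda")

# Disable the weight cache so tracing doesn't bake in cached casts.
with torch.cuda.amp.autocast(cache_enabled=False):
    traced = torch.jit.trace(model, example)
```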

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D30644913

Pulled By: ezyang

fbshipit-source-id: ad7bc87372e554e7aa1aa0795e9676871b3974e7
2021-09-08 07:47:18 -07:00
Ansley Ussery
6831d8e379 Support Union in TorchScript (#64234)
Summary:
This PR is created to replace the https://github.com/pytorch/pytorch/pull/53180 PR stack, which has all the review discussions. The reason a replacement is needed is a messy Sandcastle issue.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64234

Reviewed By: gmagogsfm

Differential Revision: D30656444

Pulled By: ansley

fbshipit-source-id: 77536c8bcc88162e2c72636026ca3c16891d669a
2021-09-03 06:12:24 -07:00
Salil Desai
86c9654291 Update optimize_for_mobile to preserve node's debug information (#63106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63106

Propagate debug info to the re-written nodes in the graph.

Test Plan:
- Clone open source repo and build
- ``` python3 test/test_jit.py TestOptimizeForMobilePreserveDebugInfo ```
- Tests pass

Reviewed By: kimishpatel

Differential Revision: D28654659

fbshipit-source-id: 2d7c87f2fb95a3be53246375f35639bbd97c237e
2021-09-01 14:34:20 -07:00
gmagogsfm
479fc4e412 Remove outdated warning about RecursiveScriptModule not being copiable (#64085)
Summary:
RecursiveScriptModule has customized `__copy__` and `__deepcopy__` methods defined. The warning/error that says it is not copyable is outdated.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64085

Reviewed By: rohan-varma

Differential Revision: D30598623

Pulled By: gmagogsfm

fbshipit-source-id: 0701d8617f42d818bc7b88244caee4cd47fbe976
2021-08-31 21:31:32 -07:00
Kushashwa Ravi Shrimali
d37636901e [Doc] make_tensor to torch.testing module (#63925)
Summary:
This PR aims to add `make_tensor` to the `torch.testing` module in PyTorch docs.

TODOs:

* [x] Add examples

cc: pmeier mruberry brianjo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63925

Reviewed By: ngimel

Differential Revision: D30633487

Pulled By: mruberry

fbshipit-source-id: 8e5a1f880c6ece5925b4039fee8122bd739538af
2021-08-30 12:25:40 -07:00
Philip Meier
57d4c6cf42 replace self.assertTrue(torch.allclose(..)) with self.assertEqual(…) (#63637)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/63565

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63637

Reviewed By: malfet

Differential Revision: D30541266

Pulled By: mruberry

fbshipit-source-id: ab461949782c6908a589ea098fcfcf5c3e081ee6
2021-08-25 16:47:40 -07:00
Ansley Ussery
01c35115d8 Fix bug in check_empty_containers (#63492)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63492

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D30402749

Pulled By: ansley

fbshipit-source-id: 7de533355fe91ca4f45b2bafc3bfb205a028c1ed
2021-08-25 09:05:08 -07:00
jiej
e926f75b0b BatchNorm autodiff re-enabled (#57321)
Summary:
Turns on BN in autodiff:

1. outputs an empty tensor for running stats to bypass the autodiff issue with None;
2. fixes BN inference backward in cudnn & miopen, where the backward falls back to the native batchnorm kernel instead;

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57321

Reviewed By: albanD, ngimel

Differential Revision: D30250419

Pulled By: jansel

fbshipit-source-id: a62553789c20fb50a820003a056f40d9d642dfaa
2021-08-21 09:07:31 -07:00
Philip Meier
99203580a9 Updates internal assert_allclose callsites in favor of assert_close (#61841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61841

Redo of #60863.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D30408145

Pulled By: mruberry

fbshipit-source-id: 0b34ebc7f23ba38ecd89640b61d8aca59b7eab58
2021-08-19 12:50:41 -07:00
Alban Desmaison
ce61100923 Revert D29399533: Hoisting common expressions out of If blocks
Test Plan: revert-hammer

Differential Revision:
D29399533 (9477211e7d)

Original commit changeset: 9336b9dc48c0

fbshipit-source-id: f081c7280203f40328bcbb0c03a7c6a007acedb7
2021-08-19 06:20:40 -07:00
John Clow
9477211e7d Hoisting common expressions out of If blocks (#59492)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59492

Adding code to find common expressions from the two subblocks of an if
operation and hoist them before the if block.
This also allows Dead Code Elimination to
then eliminate some if blocks.

Also eliminated some dead code in the codebase.

Test Plan:
python test_jit.py TestIfHoisting

Imported from OSS

Reviewed By: ngimel

Differential Revision: D29399533

fbshipit-source-id: 9336b9dc48c02c38862f98f98cd72fc1767a1802
2021-08-18 16:29:30 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Yida Wang
4ea6a3aa74 Fix issues with printing certain torch modules (#62447)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54420

When I tested on master with the testing code below, there were multiple objects on the garbage collector that could not be printed.

Testing code:
```
import torch
import gc
import os
import sys

print(torch.__version__)

a = torch.rand(10)

print(a)

objects = gc.get_objects()

for i in range(len(objects)):
   print(objects[i])
```

### 1
```
print(torch.classes)
```

As SplitInfinity mentioned in the GitHub issue, the solution here is to set `__file__` for `torch.classes` to something. Similar to [_ops.py](https://github.com/pytorch/pytorch/blob/master/torch/_ops.py#L69), where `__file__` is set to `_ops.py`, we could set `__file__` for torch.classes to `_classes.py`.

### 2
```
print(torch._ops.ops.quantized)
print(torch._ops.ops.atan)
```

When we try to print these two modules, it will call `_OpNamespace::__getattr__`, but the `op_name` is `__file__`. This becomes a problem when `torch._C._jit_get_operation(qualified_op_name)` [(link)](https://github.com/pytorch/pytorch/blob/master/torch/_ops.py#L60) tries to look for an actual op on the native C++ side.

Only when we get the attribute for an actual op, e.g. `print(torch._ops.ops.quantized.elu)`, the `op_name` becomes proper (e.g. `elu`).

My current solution is to return a hardcoded string (i.e. “torch.ops”) if `op_name` is `"__file__"`.
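
A minimal Python sketch of the described guard (illustrative, not the exact patch; the real `_OpNamespace` lives in torch/_ops.py):

```
class _OpNamespace:
    def __getattr__(self, op_name):
        # gc/print probes ask for __file__; short-circuit before the
        # C++ operator lookup, which would fail on a fake op name.
        if op_name == "__file__":
            return "torch.ops"
        # ... otherwise fall through to
        # torch._C._jit_get_operation(qualified_op_name)
        raise AttributeError(op_name)
```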

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62447

Reviewed By: saketh-are

Differential Revision: D30234654

Pulled By: yidawang-oss

fbshipit-source-id: de43a8f599739c749fb3307eea015cc61f1da60e
2021-08-11 09:40:41 -07:00
Sze Wai Celeste Yuen
c5de83adca Fix inconsisteny between Python and JIT power operation (#62842)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62842

Test Plan:
Wrote unit test TestAtenPow to test behavior of aten::pow when:
1. base is int, exponent is int
2. base is int, exponent is float
3. base is float, exponent is int
4. base is float, exponent is float

Specifically, we test that when the base is zero and the exponent is negative, we raise an error. In all other cases, we expect the behavior to be the same as the result returned by Python.

Because the C++ code relies on overloading, we need to make sure all combinations of types give us the expected result.

Reviewed By: zhxchen17

Differential Revision: D30146115

Pulled By: szewaiyuen7

fbshipit-source-id: dc661897ad38da286ee454120fbe41314b7f2995
2021-08-11 08:41:46 -07:00
Edward Yang
cdf702b60c Reject kwonly arguments passed positionally in torch.ops (#62981)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62981

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D30211030

Pulled By: ezyang

fbshipit-source-id: aae426592e92bf3a50076f470e153a4ae7d6f101
2021-08-10 07:16:00 -07:00
Zhengxu Chen
e62189ad69 [jit] Better checking for overload function declarations. (#59956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59956

Issue #50175. Basically two things need to be checked and are lacking currently:
1. Overload declarations should always have a single `pass` statement as the body.
2. There should always be an implementation provided for decls that don't
   have the torch.jit._overload decorator. So in this case we need to check
   whether we are actually compiling a function body with the decorator ahead.

Test Plan:
python test/test_jit.py TestScript.test_function_overloads

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D29106555

fbshipit-source-id: 2d9d7df2fb51ab6db0e1b726f9644e4cfbf733d6
2021-08-05 14:21:48 -07:00
Nikita Shulga
24a2681358 Revert D30094460: [profiler] Re-enable test on Windows
Test Plan: revert-hammer

Differential Revision:
D30094460 (5a1017be97)

Original commit changeset: 80521f6bc136

fbshipit-source-id: 7c01493ad078be7df1bbb81c08be6364d6ffaa4d
2021-08-05 08:34:15 -07:00
Ilia Cherniavskii
5a1017be97 [profiler] Re-enable test on Windows (#62703)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62703

Re-enable test on Windows

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D30094460

Pulled By: ilia-cher

fbshipit-source-id: 80521f6bc1365d2c252f20b5d0485fc062c8d9c3
2021-08-04 12:32:24 -07:00
Ilia Cherniavskii
773a8eede4 [profiler][refactor] Refactor the usage of legacy profiler implementation (#61931)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61931

This PR consolidates the profiling code around a new C++ implementation
(profiler_kineto.h/cpp) and uses it unconditionally from
torch.autograd.profiler/torch.profiler:
1. Always use profiler_kineto.h/cpp as the C++ implementation
2. Simplify profiler.py to remove unneeded parts depending on legacy
impl
3. Move some of the legacy logic into profiler_legacy.py (to be fully
deleted later)

Test Plan:
USE_KINETO=1 USE_CUDA=1 USE_MKLDNN=1 BLAS=MKL BUILD_BINARY=1 python setup.py develop install --cmake
python test/test_profiler.py -v
USE_KINETO=0 USE_CUDA=1 USE_MKLDNN=1 BLAS=MKL BUILD_BINARY=1 python setup.py develop install --cmake
python test/test_profiler.py -v

Imported from OSS

Reviewed By: gdankel

Differential Revision: D29801599

fbshipit-source-id: 9794d29f2af38dddbcd90dbce4481fc8575fa29e
2021-08-03 18:51:29 -07:00
Amy He
a03466cb07 Back out "Revert D29687143: [3/N] Nnapi Backend Delegate Preprocess: Basic OSS Test" (#61878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61878

CMakeLists.txt
Android NNAPI delegate library was moved from test/cpp/jit/CMakeLists.txt to torch/CMakeLists.txt. This resolves the issue the original PR had, where the NNAPI delegate library was added to builds without Python (when it depends on Python).
Original PR: https://github.com/pytorch/pytorch/pull/61594

There's an error where the library cannot be built on MacOS. This problem existed in the original PR as well, but now an issue has been created: https://github.com/pytorch/pytorch/issues/61930

test_backend_nnapi.py
Also changed the unit test skip conditions so they're a little cleaner. Now the unit tests are skipped if the Nnapi delegate library file is not found. Previously, the skip was based on the platform (only allowing Linux).

Test Plan:
To run NNAPI delegate unit tests: `python test/test_jit.py TestNnapiBackend`

Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D29799895

fbshipit-source-id: b69a767b5cde3814b0853cfbc84d61ab4155f619
2021-07-21 11:58:45 -07:00
Jerry Cai
873cc7a46d Support 3 argument variant of the getattr() call where the third arg is the default return value (#61599)
Summary:
Issue: https://github.com/pytorch/pytorch/issues/56909

Note: the emitted code for such a call will be either (a) a getattr() call with the first two args, if the
attribute name (which must be a string literal) is determined to be valid based on the hasAttr() result,
or (b) just the AST node for the default value (the third arg) alone, with no getattr call at all.
Test code:

```
import torch
import numpy as np

class Shape:
    def __init__(self):
        self.center = 1.0

def f(x):
    s = Shape()
    return getattr(s, "missing", [])

y = torch.jit.script(f)
print(y.graph)
```
Output:
```
graph(%x : Tensor):
  %s.1 : __torch__.Shape = prim::CreateObject()
  %2 : NoneType = prim::CallMethod[name="__init__"](%s.1) # ts.py:10:8
  %4 : Tensor[] = prim::ListConstruct()
  return (%4)
```

Another example:
```
import torch
from typing import List

class Shape:
    def __init__(self):
        self.center = 1.0

def f(x):
    s = Shape()
    y = getattr(s, "center")
    w: List[float] = [1.0]
    z = getattr(s, "missing", w)
    z.append(y)
    return z

y = torch.jit.script(f)
print(y.graph)
 --- output ---

graph(%x : Tensor):
  %5 : float = prim::Constant[value=1.]() # ts.py:12:23
  %s.1 : __torch__.Shape = prim::CreateObject()
  %2 : NoneType = prim::CallMethod[name="__init__"](%s.1) # ts.py:10:8
  %center : float = prim::GetAttr[name="center"](%s.1)
  %w.1 : float[] = prim::ListConstruct(%5)
  %11 : float[] = aten::append(%w.1, %center) # ts.py:14:4
  return (%w.1)
```
Fixes #56969

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61599

Reviewed By: ZolotukhinM

Differential Revision: D29776058

Pulled By: jerryzhenleicai

fbshipit-source-id: 76333bd54002e08a064677c1f287115a80cc7c8e
2021-07-19 20:04:21 -07:00
Nikita Shulga
ee2f2ec9a5 Revert D29687143: [3/N] Nnapi Backend Delegate Preprocess: Basic OSS Test
Test Plan: revert-hammer

Differential Revision:
D29687143 (5798a00aa4)

Original commit changeset: 9ba9e57f7f85

fbshipit-source-id: 6a672c76a04366b35c492698ae5b39fd4dd1785f
2021-07-16 13:32:51 -07:00
Amy He
5798a00aa4 [3/N] Nnapi Backend Delegate Preprocess: Basic OSS Test (#61594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61594

### Summary:
Added a unit test for the Nnapi delegate's preprocess() function. The
function was previously tested locally, but now a basic test is
added for OSS.

See https://github.com/pytorch/pytorch/pull/61499 for preprocess
implementation. See D29647123 for local testing.

**TODO:**
Add more comprehensive tests.
Add tests for model execution, after the Nnapi delegate's initialization
and execution is implemented T91991928.

**CMakeLists.txt:**
Added a library for the Nnapi delegate
- Explicit linking of torch_python is necessary for the Nnapi delegate's use of pybind

**test_backends.py:**
Added a test for lowering to Nnapi
- Based off https://github.com/pytorch/pytorch/blob/master/test/test_nnapi.py
- Only differences are the loading of the nnapi backend library and the need to change dtype from float64 to float32

### Test Plan:
Running `python test/test_jit.py TestBackendsWithCompiler -v` succeeds. Also saved and examined the model file locally.

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D29687143

fbshipit-source-id: 9ba9e57f7f856e5ac15e13527f6178d613b32802
2021-07-16 11:00:38 -07:00
Ansley Ussery
f5c10fdbd3 Allow for heterogenous List and Dict values + Improve container typing algorithm (#57137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57137

This PR corrects and expands our typing algorithm for unannotated, non-empty dicts and lists. Previously, to verify type correctness for an unannotated, non-empty container, we took the type of the first element in the container, then checked whether each following element was a subtype of the first type. That's too restrictive: what if the first element were a subtype of the second element? Instead, we should type the container by finding the smallest common supertype of all the given elements.

We need slightly different rules for keys and values in dicts, though: because the set of key types is restricted, finding two key types that cannot be unified should cause an error. On the other hand, the set of value types is not restricted, so we should be able to use `Any` as a valid supertype. We need to keep the set of keys restricted since the keys are used to generate and match schemas.
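
A hypothetical example of the new unification (assumed behavior, not taken from the PR's tests):

```
import torch
from typing import List, Optional

@torch.jit.script
def f() -> List[Optional[torch.Tensor]]:
    # Old rule: fails, since Tensor is not a subtype of the first
    # element's type (NoneType). New rule: unifies to Optional[Tensor].
    xs = [None, torch.zeros(1)]
    return xs
```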

This does not break backwards compatibility, because the default element type is the smallest supertype of all the given types. So, if someone creates an unannotated dict where the keys are all `str` and the values are all `torch.Tensor`, the dict will be inferred to `Dict[str, Tensor]` just like it was before. Empty lists are still typed as `List[torch.Tensor],` and empty dicts are still typed as `Dict[str, Tensor]`.

This PR unblocks three engineers on an FB-internal team and improves FX-TorchScript compatibility.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D28231839

Pulled By: ansley

fbshipit-source-id: 7297bf239749daa54895add708185c75e6ca5999
2021-07-10 14:29:05 -07:00
Gary Miguel
dec5aa2260 [JIT] clean up (#60390)
Summary:
* Minor: spelling, grammar.
* Add calls to `GRAPH_DUMP()` where they were missing.
* Add or expand a few comments.
* Move a few comments to seemingly more appropriate spots.
* In canonicalize_graph_fuser_ops.cpp inline `runnableInputs()` since it
  was only called in one place and had a misleading comment and
  confusing name.
* In `PeepholeOptimizeImpl::optimizeBlock()`, set `changed = true;` when
  removing `aten::is_complex`. Pretty sure its absence was a bug.
* Delete unused `_jit_pass_remove_inplace_ops` and its
  implementation `RemoveInplaceOps()`.
* In `preprocessCaffe2Ops()`, remove redundant check for nested optional
  types. It was already checked in `checkONNXCompatibility()`.
* In `EncoderBase::AddAttribute`, log the unexpected attribute kind.
  I don't remember the repro case now but I did hit this error at some
  point and this additional logging made it easier to understand.
* In `fuseConvBatchNorm()` in eval_peephole.cpp, consistently use
  camelCase instead of snake_case for local variables.
* Add curly braces around the bodies of if and loops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60390

Reviewed By: Krovatkin

Differential Revision: D29523283

Pulled By: SplitInfinity

fbshipit-source-id: 4e16c5648616f53da07d68dab7fdf252e06a0752
2021-07-09 16:28:27 -07:00
Meghan Lele
4a2e8b53bb [JIT] Add torch._C.ScriptList` (#52832)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52832

**Summary**
This commit adds `torch._C.ScriptList`, a list type that has reference
semantics across the Python/TorchScript boundary. That is, modifications
made in TorchScript to instances of `torch._C.ScriptList`
are visible in Python even when it is not returned from the function.

`torch._C.ScriptList` is implemented using a modified version of pybind's
`stl_bind.h`-style bindings attached to `ScriptList` and `ScriptListIterator`,
wrapper classes around `c10::impl::GenericList` and
`c10::impl::GenericList::iterator`. These bindings allow instances of
`torch._C.ScriptList` to be used as if they were regular
`list`s in Python. Reference semantics are achieved by simply
retrieving the `IValue` contained in `ScriptList` in `toIValue` (invoked
when converting Python arguments to `IValues` before calling TorchScript
code).
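
A sketch of the reference semantics, assuming (by analogy with `ScriptDict`) that a `torch._C.ScriptList` is constructed by passing a Python list to `torch.jit.script`:

```python
import torch
from typing import List

@torch.jit.script
def append_item(xs: List[int]):
    xs.append(42)  # mutation happens inside TorchScript

xs = torch.jit.script([1, 2, 3])  # assumed ScriptList construction
append_item(xs)
print(xs[3])  # 42 -- the mutation is visible in Python
```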

**Test Plan**
This commit adds `TestScriptList` to `test_list_dict.py`, a set of tests
that check that all of the common list operations are supported
and that instances have reference semantics across the
Python/TorchScript boundary.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D29478121

Pulled By: SplitInfinity

fbshipit-source-id: 652cc25cfa37debe28db9527504846f22abd8b54
2021-07-01 20:28:13 -07:00
Zafar
c1499a9933 Enable jit tracing to parametrization and add jit tests (#60969)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60969

This PR fixes the tracing in the parametrizations.
The current resolution is that when tracing is performed while caching is enabled, we throw an error.
Without caching, the tracing should work properly (tests added).

Currently, the parametrizations don't support scripting.
This PR introduces the same logic as with the tracing (throw error if caching).
However, scripting itself cannot be enabled due to the use of generator expressions in the parametrizations.
Added a TODO to fix it.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D29462887

Pulled By: z-a-f

fbshipit-source-id: 49721d3059be58f36055d1c374080df41a748d66
2021-06-30 23:54:02 -07:00
Heitor Schueroff
8f658d537d Improved JIT support for torch.einsum (#59265)
Summary:
Added JIT support for the vararg version of `torch.einsum`. Note that JIT does not support Python's Ellipsis object (`...`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59265

Reviewed By: VitalyFedyunin

Differential Revision: D29328469

Pulled By: heitorschueroff

fbshipit-source-id: 5e4b177fda93255251f45d735b00c08220f0f124
2021-06-29 14:01:21 -07:00
Ansley Ussery
0fbc471d10 Support default values on NamedTuple fields (#54682)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54682
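
By way of example, the newly supported pattern looks like this (the `Point` type is hypothetical):

```python
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: float
    y: float = 0.0  # default value on a NamedTuple field

@torch.jit.script
def make_point(x: float) -> Point:
    return Point(x)  # `y` falls back to its default inside TorchScript
```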

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27327241

Pulled By: ansley

fbshipit-source-id: 76546f1770d50ebc3435bba3b74540e3c6be8a1c
2021-06-26 15:18:21 -07:00
Amy He
d52ef2497a Python basic module execution unit test on delegation of backend_with_compiler_demo (#60468)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60468

Added a unit test for the execution of a basic module with a compiler
ghstack-source-id: 132307488

Test Plan:
Running `python test/test_jit.py TestBackendsWithCompiler -v` passes.

Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D29306225

fbshipit-source-id: bf1ff075ebc63acbbe46d6ea030086405e29d7d3
2021-06-24 11:43:45 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
fca931d181 List striding with arbitrary step size (#58537)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58537
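
A small illustrative snippet (function names are hypothetical):

```python
import torch
from typing import List

@torch.jit.script
def every_other(xs: List[int]) -> List[int]:
    return xs[::2]  # step sizes other than 1 are now accepted

@torch.jit.script
def reversed_list(xs: List[int]) -> List[int]:
    return xs[::-1]  # negative steps as well, per the PR title
```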

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28531721

Pulled By: tugsbayasgalan

fbshipit-source-id: 8c8ed32ca00366603bfb5086e87dfa62736ff4b2
2021-06-22 11:25:23 -07:00
Michael Dagitses
91451369ed require non-empty inputs to grad() calls in the API (#52016)
Summary:
The grad() function needs to return the updated values, and hence
needs a non-empty `inputs` to populate.
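
For reference, a sketch of the required usage:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = (x * x).sum()
# `inputs` must be non-empty; grad() returns one gradient per input
(gx,) = torch.autograd.grad(outputs=y, inputs=(x,))
print(gx)  # tensor([2., 2., 2.])
```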

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52016

Test Plan:
Passes Python and C++ unit tests, and added new tests to catch this behavior.

Fixes https://github.com/pytorch/pytorch/issues/47061

Reviewed By: albanD

Differential Revision: D26406444

Pulled By: dagitses

fbshipit-source-id: 023aeca9a40cd765c5bad6a1a2f8767a33b75a1a
2021-06-22 10:10:58 -07:00
Thomas J. Fan
c16f87949f ENH Adds nn.ReflectionPad3d (#59791)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/27655

This PR adds a C++ and Python version of ReflectionPad3d with structured kernels. The implementation uses lambdas extensively to better share code from the backward and forward pass.
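
A quick usage sketch:

```python
import torch
import torch.nn as nn

pad = nn.ReflectionPad3d(1)  # pad all six sides of the volume by 1
x = torch.arange(8.0).reshape(1, 1, 2, 2, 2)
y = pad(x)  # output shape: (1, 1, 4, 4, 4)
```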

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59791

Reviewed By: gchanan

Differential Revision: D29242015

Pulled By: jbschlosser

fbshipit-source-id: 18e692d3b49b74082be09f373fc95fb7891e1b56
2021-06-21 10:53:14 -07:00
Meghan Lele
c01939a9b1 [JIT] Handle modules that already have __constants__ (#60003)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60003

**Summary**
`infer_concrete_type_builder` in `_recursive.py` assumes `__constants__`
is a `set` if it exists as an attribute on the module being scripted.
Instead, it should create a set out of whatever `__constants__` is.
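
An illustrative sketch of the case this fixes (the module is hypothetical):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    __constants__ = ["alpha"]  # a list rather than a set now also works

    def __init__(self):
        super().__init__()
        self.alpha = 2.0

    def forward(self, x):
        return x * self.alpha

scripted = torch.jit.script(M())
```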

**Test Plan**
Ran code from the issue.

**Fixes**
This commit fixes #59947.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D29174243

Pulled By: SplitInfinity

fbshipit-source-id: aeb8bded80038da35478714b6a697a766ac447f5
2021-06-16 20:01:18 -07:00
Mike Ruberry
de40c8e495 Adds remaining OpInfos and removes redundant test generators (#55558)
Summary:
Per title.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55558

Reviewed By: ngimel

Differential Revision: D28922522

Pulled By: mruberry

fbshipit-source-id: 89cefd93788bc8aa0683f4583cf5caa81aa2dc93
2021-06-06 14:52:26 -07:00
Nikita Shulga
f1ce7f4b7f Update PyTorch version to 0.10.0a (#59345)
Summary:
Also fix `TestProducerVersion` by removing the assumption that the major and minor versions are single digits

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59345

Reviewed By: robieta

Differential Revision: D28853720

Pulled By: malfet

fbshipit-source-id: 4b6d03c6b0c9d652a5aef792aaa84eaa522d10e8
2021-06-03 07:55:44 -07:00
Bin Bao
add291cf66 [JIT] Add a phase to perform inplace<->functional conversion for activation operators (#57477)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57477

Currently the conversion only deals with activation operators. The legality check is somewhat strict for now.
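
Conceptually, the pass rewrites between forms like these (an eager-mode sketch of the two shapes, not the pass itself):

```python
import torch

def functional_form(x: torch.Tensor) -> torch.Tensor:
    return torch.relu(x)   # out-of-place activation

def inplace_form(x: torch.Tensor) -> torch.Tensor:
    return torch.relu_(x)  # in-place activation; only legal when the input is not aliased
```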

Test Plan:
```
python test/test_jit.py -k test_functional_to_inplace_activation
python test/test_jit.py -k test_inplace_to_functional_activation
```

Reviewed By: mrshenli

Differential Revision: D28155153

Pulled By: desertfire

fbshipit-source-id: df092830c4dff3ce9578ff76285eb7a566b7d81b
2021-06-03 06:43:23 -07:00
Meghan Lele
b14c3205fd [JIT] Add torch._C.ScriptDict (#52659)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52659

**Summary**
This commit adds `torch._C.ScriptDict`, a dictionary type that has reference
semantics across the Python/TorchScript boundary. That is, modifications
made to instances of `torch._C.ScriptDict` in TorchScript are visible in
Python even when it is not returned from the function. Instances can be
constructed by passing an instance of a Python dictionary to
`torch.jit.script`. In the case of an empty dictionary, its type is
assumed to be `Dict[str, Tensor]` to be consistent with the handling of
empty dictionaries in TorchScript source code.

`torch._C.ScriptDict` is implemented using a modified version of pybind's `stl_bind.h`-style bindings attached to `ScriptDict`, `ScriptDictIterator` and `ScriptDictKeyIterator`, wrapper classes around `c10::impl::GenericDict` and `c10::impl::GenericDict::iterator`. These bindings allow instances of `torch._C.ScriptDict` to be used as if they were regular `dict`s in Python. Reference semantics are achieved by simply retrieving the `IValue` contained in `ScriptDict` in `toIValue` (invoked when converting Python arguments to `IValue`s before calling TorchScript code).
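
A short sketch of the reference semantics described above:

```python
import torch
from typing import Dict

@torch.jit.script
def add_entry(d: Dict[str, int]):
    d["b"] = 2  # mutation happens inside TorchScript

d = torch.jit.script({"a": 1})  # constructs a torch._C.ScriptDict
add_entry(d)
print(d["a"], d["b"])  # 1 2 -- the mutation is visible in Python
```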

**Test Plan**
This commit adds `TestScriptDict` to `test_list_dict.py`, a set of tests
that check that all of the common dictionary operations are supported
and that instances have reference semantics across the
Python/TorchScript boundary.

Differential Revision: D27211605

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Pulled By: SplitInfinity

fbshipit-source-id: 446d4e5328375791aa73eb9e8b04dfe3465af960
2021-05-27 10:25:30 -07:00
Zhengxu Chen
6b6a27e430 [jit] Add Python API for ScriptProfile (#57398)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57398

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D28133577

fbshipit-source-id: dcb8338159a24b00b5c495ecec66a3303d9b4aba
2021-05-25 11:09:18 -07:00
Kimish Patel
470160ad78 [Pytorch] Update FuseLinear to map source range information (#58492)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58492

Update graph rewrite to specify how values in replacement pattern should
map to values in original pattern for fuse_linear pass

(Note: this ignores all push blocking failures!)

Test Plan:
python test/test_quantization.py TestQuantizeJitPasses.test_fuse_linear

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D28512464

fbshipit-source-id: 250a69cebc11eb4328a34c8f685b36e337439aae
2021-05-25 09:19:57 -07:00
Kimish Patel
e067675167 [Pytorch] Provide API to preserve source range and callstack information during graph rewrite (#58300)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58300

Current state: graph rewrites that fuse or add nodes produce new nodes
without the debug information that was available on the original nodes.
Thus we lose this information during graph rewrite.

This PR changes graph rewriting API to let user specify how the values
in the replacement pattern map to values in the pattern to be matched.
Then the graph rewriting will copy source range and inlined callstack
from the matched nodes onto the nodes being inserted.

(Note: this ignores all push blocking failures!)

Test Plan:
python test/test_jit.py
TestJit.test_pattern_based_rewrite_with_source_range_preserved

Imported from OSS

Reviewed By: malfet

Differential Revision: D28512465

fbshipit-source-id: 863173c29de726be85b3acbd3ddf3257eea36d13
2021-05-25 09:18:59 -07:00
Elias Ellison
f39471a171 Initial Symbolic Shape Analysis (#54809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54809

I'm going to post on dev-discuss soon with a more thorough explanation of the design and advantages of this shape analysis, so I'm leaving that out for now.

There is still a ton left to do; I'm posting this initial version so we can get something on master that multiple people can work on. Remaining steps include:

- [ ] Add symbolic shapes support
- [ ] Bind shape functions for operators in C++
- [ ] Make classes of operators share the same shape function (e.g. pointwise, broadcast two inputs)
- [ ] Refactor APIs
- [ ] Only iteratively optimize shape function while a change has been made
- [ ] Expand coverage to common ops
- [ ] Add shape analysis pass on Graph that handles Ifs and Loops
- [ ] Allow concurrent reads to the operator map
- [ ] Successive applications of same inputs to same shape function (e.g. series of pointwise ops)

For this review, I am mostly looking for comments related to the implementation of symbolic_shape_analysis.cpp, with the caveats listed above. I am not really looking for comments related to api/registration/graph level analysis as those are all planned to be changed. I am fine landing this as is or waiting until the necessary components of the TODOs above are finished.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27750998

Pulled By: eellison

fbshipit-source-id: 4338b99e8651df076291c6b781c0e36a1bcbec03
2021-05-21 08:49:46 -07:00
Heitor Schueroff
72ae924fad Added sublist support for torch.einsum (#56625)
Summary:
This PR adds an alternative way of calling `torch.einsum`. Instead of specifying the subscripts as letters in the `equation` parameter, one can now specify the subscripts as a list of integers as in `torch.einsum(operand1, subscripts1, operand2, subscripts2, ..., [subscripts_out])`. This would be equivalent to `torch.einsum('<subscripts1>,<subscripts2>,...,->[<subscript_out>]', operand1, operand2, ...)`
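
For example, the two call styles are equivalent:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

out_eq = torch.einsum("ij,jk->ik", a, b)              # equation form
out_sub = torch.einsum(a, [0, 1], b, [1, 2], [0, 2])  # sublist form

assert torch.allclose(out_eq, out_sub)
```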

TODO
- [x] Update documentation
- [x] Add more error checking
- [x] Update tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56625

Reviewed By: zou3519

Differential Revision: D28062616

Pulled By: heitorschueroff

fbshipit-source-id: ec50ad34f127210696e7c545e4c0675166f127dc
2021-05-21 08:36:45 -07:00
Kyle Vedder
bbf92e6176 Add missing .to_sparse(ndim) gradient (#58413)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46720, extends PR https://github.com/pytorch/pytorch/issues/46825 by adding test requested in [this comment](https://github.com/pytorch/pytorch/pull/46825#issuecomment-842304079).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58413

Reviewed By: ailzhang

Differential Revision: D28540550

Pulled By: albanD

fbshipit-source-id: d7e292e09b5402336c43844ee233b83b0a095035
2021-05-20 15:08:34 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
9db64e6e56 Revert "Striding for lists Part 2 (#49352)" (#58523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58523

This reverts commit fee7e8b91d.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D28528023

Pulled By: tugsbayasgalan

fbshipit-source-id: 9fa1d86f0c81fcc6fd3798e0d51a712a3c9b3952
2021-05-20 13:20:33 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
88ff651e90 torch.jit.ignore as a context manager (#55172)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55172

Description:
This is part 1 of a series of PRs supporting torch.jit.ignore as a context manager. The following features are implemented in this PR:

- Unique name for the registered function under the torch.jit.frontend module. The unique name is generated from the file name and line number of the context manager.
- Forcing the user to explicitly annotate the inputs and outputs.
- No side effects are considered.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27895283

Pulled By: tugsbayasgalan

fbshipit-source-id: 5d36d9aa5d457055a6bb1676f264647a745ec36a
2021-05-14 01:53:50 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
fee7e8b91d Striding for lists Part 2 (#49352)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49352

In this PR, we replace all definitions of slice to take None parameters for the start, end, and step. This will simplify the compiler logic

Test Plan:
test_jit test cases

Imported from OSS

Reviewed By: jamesr66a, nikithamalgifb

Differential Revision: D25929903

fbshipit-source-id: 5bfc6bad514a8aafbef2dacc706f95f867fe85f1
2021-05-13 00:16:02 -07:00
Yanan Cao
b8ca1219de Add tests for custom state_dict save/load methods in TorchScript (#57886)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57886

Reviewed By: jamesr66a

Differential Revision: D28309228

Pulled By: gmagogsfm

fbshipit-source-id: 6ac60b1d4a8017aefb6f6dff49cde598de000265
2021-05-10 18:04:56 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
b0c27b44cf Enable backward/forward compatibility for TS runtime (#57498)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57498

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D28162448

Pulled By: tugsbayasgalan

fbshipit-source-id: 5c21ced42a22aca7cee089e876e9d98d32f68955
2021-05-07 15:41:45 -07:00
Kimish Patel
f4a921600a [PyTorch, Mobile] Serialization format change for source range (#54284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54284

In order to bring mobile deployment, via the lite interpreter, to feature
parity with JIT with respect to model-level debug information, we must make
model-level debug information available to the mobile runtime.
At the moment, model-level debug information is stored in SourceRange,
which associates nodes of the graph with where they come from in the original
Python source code.
This information is serialized as part of debug_pkl and deserialized
when JIT loads the model and reads the model code.
On the lite interpreter, we do not have access to all the functionality of
JIT, and hence we cannot load the model the same way JIT does, by reading
the code, constructing the module hierarchy and the graphs corresponding to
module methods, etc. Instead, in the lite interpreter, only the bytecode
corresponding to the compiled graph, Code, is saved.
Thus in order to annotate OPs in the bytecode with equivalent
SourceRange information we do the following:
1. During model serialization, we create a unique tag for each source
range of the model.
2. Create a map of <SourceRange, tag>
3. During debug_pkl serialization we save tag along with SourceRange, on
top of byte offset.
4. During bytecode generation, the methods of the top module are
lowered. During this process methods are inlined. In the inlined graph,
when the node of a graph is lowered to bytecode, we query node's source
range and look it up against the map.
5. Resulting source range tag is serialized in module_debug_info.
6. During model deserialization, we read all the debug_pkl records in
the archive and create a map of <tag, SourceRange>
7. This map can be used to find source code information.

During mobile runtime:
1. We read all the debug_pkl records and create <tag=debug_handle,
SourceRange> map.
   1.1 This map, MobileDebugInfo, is a member of mobile Module.
2. Interpreter catches appropriate exceptions and sets the thread local
debug handle and rethrows the exception.
3. In Function's run method we catch exception and query current debug
handle where the exception happened.
4. Query MobileDebugInfo with debug handle to retrieve source range and
augment error with source range info.

This information is still incomplete as it does not contain entire
callstack.

In the following diffs we will serialize InlinedCallStack directly.

Note that compilation is gated by SYMBOLICATE_MOBILE_DEBUG_HANDLE macro,
so that mobile builds can avoid building MobileDebugInfo, source range
and source range pickler/unpickler. Later we will add path where, if
building without debug support stack trace will contain only debug
handles. They can be symbolicated later.

Test Plan:
Ported a bunch of source range tests from test_jit.py. Added one more test
in test_lite_interpreter.py

Imported from OSS

Reviewed By: raziel

Differential Revision: D27174722

fbshipit-source-id: a7b7c6088ce16dec37e823c7fefa4f0b61047e12
2021-05-04 09:19:27 -07:00
nikithamalgifb
2e2c0099eb Support type inference of nn.Module methods using PDT (#57165)
Summary:
Adds support for type inference of nn.Module methods using monkeytype in JIT

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57165

Reviewed By: gmagogsfm

Differential Revision: D28064983

Pulled By: nikithamalgifb

fbshipit-source-id: 303eaf8d7a27e74be09874f70f519b4c1081645b
2021-04-29 11:09:37 -07:00
Yanan Cao
78736a72a5 Fix default dtype for randperm, triu/tril_indices inside TorchScript (#57105)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56676

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57105

Reviewed By: ezyang

Differential Revision: D28060969

Pulled By: gmagogsfm

fbshipit-source-id: 6b074418306377f5f906aafd121b614964972fc3
2021-04-28 16:18:33 -07:00
Edward Yang
4d72538f80 Give Tensor a trivial (for now) metaclass _TensorMeta (#56147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56147

This is in support of #55686; you can see the broader context of the metaclass in
a more complete PR #56017.  The short story is that in the future I want to
give Tensor a non-trivial metaclass, so to derisk the change first I give it a
trivial metaclass to shake out any bugs that might be caused by it.  The
metaclass shouldn't have any performance impact on Tensor as it only gets
invoked upon subclass creation.

By the way, it was totally not documented how to create metaclasses in the Python
C API, and it took a good bit of trial and error to figure it out (and the answer is
now immortalized in https://stackoverflow.com/q/67077317/23845 -- the things
that I got wrong in earlier versions of the PR included setting tp_basicsize
incorrectly, incorrectly setting Py_TPFLAGS_HAVE_GC on the metaclass--you want
to leave it unset so that it inherits, and determining that tp_init is what
actually gets called when you construct a class, not tp_call as another
not-to-be-named StackOverflow question suggests).

Aside: Ordinarily, adding a metaclass to a class is a user visible change, as
it means that it is no longer valid to mixin another class with a different
metaclass. However, because _C._TensorBase is a C extension object, it will
typically conflict with most other metaclasses, so this is not BC breaking.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D28028747

Pulled By: ezyang

fbshipit-source-id: c1e35a986aeb3db540c73d188f53dce951eeed33
2021-04-28 09:24:21 -07:00
Heitor Schueroff
57e37080cd Added OpInfo for torch.einsum (#56276)
Summary:
Adds OpInfo testing for torch.einsum.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56276

Reviewed By: mruberry

Differential Revision: D27967095

Pulled By: heitorschueroff

fbshipit-source-id: 60524273d2ca885e7eeb932db3e7fd697ae5ca8e
2021-04-27 07:39:38 -07:00
Sam Estep
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
Zhengxu Chen
8176ab6ca0 [JIT] Put explicit error message on class attribute accesses. (#55723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55723

Resolving https://github.com/pytorch/pytorch/issues/51139

Test Plan:
python test/test_jit.py TestClassType.test_unresolved_attributes

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27691960

fbshipit-source-id: 1d078a4ab25af1a73109ca6ef0333a67a634bff6
2021-04-16 15:47:10 -07:00
Tugsbayasgalan Manlaibaatar
1360980659 Remove duplicate test due to rebasing mistake (#56287)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56287

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27828430

Pulled By: tugsbayasgalan

fbshipit-source-id: a5d846871ce78399409113fd5dbf2c43a4e46296
2021-04-16 13:57:13 -07:00
Tugsbayasgalan Manlaibaatar
b405e2ce12 Implicit conversion from null tensor to NoneType (#55823)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55823

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27717324

Pulled By: tugsbayasgalan

fbshipit-source-id: a071b90bcea9e8f2b5da633a8dadd11772fb5101
2021-04-16 00:05:52 -07:00
nikithamalgi
14d529a368 Add support for refinement for torch.jit.Future (#56148)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56148

Fixes issue: #55787

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D27796830

Pulled By: nikithamalgifb

fbshipit-source-id: b7a60218010793a54eb52d6b7602d333dc5a1c9e
2021-04-15 14:08:58 -07:00
James Reed
71a5314591 Fix ScriptMethod dispatch on __torch_function__ (#56103)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56103

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D27784142

Pulled By: jamesr66a

fbshipit-source-id: 555dcb7c3a98b8fb9e9ca9b499cafad54e819aa7
2021-04-15 08:46:43 -07:00
nikithamalgi
d7d7556f17 Move tensor implicit conversions to test_builtins.py (#55532)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55532

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D27729682

Pulled By: nikithamalgifb

fbshipit-source-id: d2517ee68b83e59cde87b8fb7d5bf7203f02cbc6
2021-04-13 07:13:20 -07:00
nikithamalgi
8fc16da649 [Hackathon]Move tests for slice to test_slice.py (#55524)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55524

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D27686738

Pulled By: nikithamalgifb

fbshipit-source-id: f1896d739c3a3a7ece987f6eea4072477626231b
2021-04-12 21:02:19 -07:00
nikithamalgi
5cd73df8f8 [Hackathon]Move complex tests to test_complex.py (#55514)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55514

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27679881

Pulled By: nikithamalgifb

fbshipit-source-id: 8a4f4ab8f375187b72ede6feaea37ab546da6d3e
2021-04-12 20:35:36 -07:00
James Reed
68e0796466 [JIT][write path] Make NoneType annotation_str emit NoneType instead of None (#54746)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54746

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D27350331

Pulled By: jamesr66a

fbshipit-source-id: 3f44d6589c29f39378432d0b6b281d96bb4829e7
2021-04-12 17:36:45 -07:00
James Reed
a3c06e69aa [JIT][write path] Fix TupleType.annotation_str to conform to typing module syntax for empty tuple type (#54745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54745

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D27350332

Pulled By: jamesr66a

fbshipit-source-id: 62af3b2b53561bb8e4adbdc0aec54520e08a5bf7
2021-04-12 17:29:33 -07:00
Sameer Deshmukh
5fb1142702 Add CSR (compressed sparse row) layout for sparse tensors (#50937)
Summary:
Implement compressed sparse row format. Derived from the GCS implementation at https://github.com/pytorch/pytorch/pull/44190
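
A construction sketch, assuming the `torch.sparse_csr_tensor` factory introduced for this layout:

```python
import torch

# Dense equivalent:
# [[1., 0., 2.],
#  [0., 3., 0.]]
crow_indices = torch.tensor([0, 2, 3])  # row i spans values[crow[i]:crow[i+1]]
col_indices = torch.tensor([0, 2, 1])
values = torch.tensor([1.0, 2.0, 3.0])
t = torch.sparse_csr_tensor(crow_indices, col_indices, values, size=(2, 3))
```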

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50937

Reviewed By: mrshenli

Differential Revision: D27439865

Pulled By: ezyang

fbshipit-source-id: 3ba3dcb9679505b980ff6a5f513e913bbae2fb1d
2021-04-12 10:09:12 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Yanan Cao
fa29a647db [JIT] Allow unpacking tuple and assign their values to SELECT-type expressions (#55268)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51176
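
An illustrative sketch of the newly allowed pattern (the module is hypothetical):

```python
import torch
from typing import Tuple

class Bounds(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lo = 0
        self.hi = 0

    def forward(self, bounds: Tuple[int, int]) -> int:
        self.lo, self.hi = bounds  # unpack into attribute (SELECT-type) targets
        return self.hi - self.lo

m = torch.jit.script(Bounds())
```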

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55268

Reviewed By: pbelevich, izdeby

Differential Revision: D27551950

Pulled By: gmagogsfm

fbshipit-source-id: 35324b728649bb1e6c5410a1004d2f6964f98304
2021-04-11 14:21:48 -07:00
Tugsbayasgalan Manlaibaatar
b80c6f863f Disambiguate error message for working with not fully refined tuple types (#55745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55745

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27698691

Pulled By: tugsbayasgalan

fbshipit-source-id: 7855042d37290f19d53adfc0b4da606430501663
2021-04-11 02:30:05 -07:00
Tugsbayasgalan Manlaibaatar
076961e8b5 Add tuple add operator (#52292)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52292
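
For example:

```python
import torch
from typing import Tuple

@torch.jit.script
def concat(a: Tuple[int, int], b: Tuple[int, int]) -> Tuple[int, int, int, int]:
    return a + b  # tuple concatenation via the + operator
```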

Test Plan: Imported from OSS

Reviewed By: Lilyjjo

Differential Revision: D26792416

Pulled By: tugsbayasgalan

fbshipit-source-id: 882325b171c1ff53ec40243d3f9334049c03fe57
2021-04-09 10:24:48 -07:00
nikithamalgi
c998f3573c [Hackathon]Move tests related to containers in typing to test_typing.py (#55504)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55504

Test Plan: Imported from OSS

Reviewed By: navahgar, pbelevich

Differential Revision: D27666760

Pulled By: nikithamalgifb

fbshipit-source-id: c1a7904f33855efa4f60f8f54c029a95a5fd529c
2021-04-08 18:40:37 -07:00
Rong Rong (AI Infra)
55db156229 remove test_jit_py3.py entirely (#55560)
Summary:
1. move module related stuff to test_module_container
2. created test_types for types and annotation
3. created test_misc for the rest

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55560

Reviewed By: VitalyFedyunin

Differential Revision: D27650911

Pulled By: walterddr

fbshipit-source-id: d895a7da9e9c3d25a662a37faf4daabc276b9c1a
2021-04-08 14:28:54 -07:00
Nikitha Malgi
432df40d83 [Hackathon] Move python builtins to test_python_builtins.py (#55479)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55479

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27642098

Pulled By: nikithamalgifb

fbshipit-source-id: 8d92a7d0f6db63f3cc3f439cb75a8d809af9106d
2021-04-08 08:06:54 -07:00
Rong Rong (AI Infra)
2dd7dafb62 [Hackathon][take2] jit py3 move list dict tuple to jit/ (#55515)
Summary:
moved to jit/test_list_dict.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55515

Reviewed By: mrshenli

Differential Revision: D27627615

Pulled By: walterddr

fbshipit-source-id: 6b17a4d6535ae2d6d848532a4df2278d3aaefa7b
2021-04-07 14:25:04 -07:00
Tugsbayasgalan Manlaibaatar
10abbb812a Support tensor subclasses in Torchscript (#54817)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54817

Test Plan:
python test case

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27407723

fbshipit-source-id: 459b9067f07908026f94620c1cfa3e00e8b50a4e
2021-04-07 12:10:27 -07:00
James Reed
3db2333d09 [JIT] Make NoneType annotation_str emit NoneType instead of None (#54642)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54642

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D27314174

Pulled By: jamesr66a

fbshipit-source-id: 153e9aa4ab781fa1d49d9d55a2e487bf7b04f0d7
2021-03-26 11:32:20 -07:00
James Reed
1e9ad6e5cd [JIT] Fix TupleType.annotation_str to conform to typing module syntax for empty tuple type (#54641)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54641

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D27314173

Pulled By: jamesr66a

fbshipit-source-id: 13c6e6b571672adc443429f59f3b30aae356c03d
2021-03-26 11:30:17 -07:00
anjali411
f9ca0d87a7 Teach Python TS frontend to parse complex literals (#52881)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52881

**This PR adds:**
1. logic to parse complex constants (complex literals of the form `bj`)
2. logic to parse complex lists
3. support for complex constructors: `complex(tensor/int/float/bool, tensor/int/float/bool)`
4. Limited operator support
     - `add`, `sub`, `mul`, `torch.tensor`, `torch.as_tensor`
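
A sketch of the supported subset (the function is hypothetical):

```python
import torch

@torch.jit.script
def complex_math(x: float, y: float):
    z = complex(x, y)  # complex constructor from floats
    w = 2j             # complex literal of the form `bj`
    return z * w       # mul is among the supported operators
```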

**Follow-up work:**
1. Add complex support for unary and other registered ops.
2. support complex constructor with string as input (this is supported in Python eager mode).
3. Test all emitXYZ for all XYZ in `ir_emitter.cpp` (currently only emitConst, emitValueToTensor are tested). e.g., test loops etc.
4. onnx doesn't support complex tensors, so we should error out with a clear and descriptive error message.

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D27245059

Pulled By: anjali411

fbshipit-source-id: af043b5159ae99a9cc8691b5a8401503fa8d6f05
2021-03-24 08:12:17 -07:00
Elias Ellison
9be4c75fa0 [JIT] Add Reinplacing to MKLDNN Subgraphs (#53908)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53908

This adds reinplacing to MKLDNN subgraphs so that we replace `aten::add` with `aten::add_`. Normally you would have to prove device and dtype, but we know those already, and because we have explicit broadcast nodes for other reasons we don't have to prove that the output shape of add is the same as its inputs'.

I've tested correctness on resnet, and I'm going to do more extensive testing as well. When I benchmarked the "unsafe" version (always inplace) I saw average speedups of ~16% for both single-threaded and multithreaded runs. I don't think the "safe" version will be far behind; when I looked at resnet, for example, every `add` and `relu` was reinplaced.

There's some question of reusing other alias / liveness / inplacing passes in SR. I thought about it, but I didn't want to add a cross-dependency between very different parts of the code base with a bunch of different assumptions. The logic here also covers a simpler case and does not add much complexity IMO.

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D27132969

Pulled By: eellison

fbshipit-source-id: 121a38daaedf01363f6b66a814beaaa72a0ab0dc
2021-03-22 19:21:03 -07:00
Yanan Cao
f48a9712b7 Rewrite functional.tensordot to be TorchScript-able (#53672)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53487

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53672

Reviewed By: VitalyFedyunin

Differential Revision: D26934392

Pulled By: gmagogsfm

fbshipit-source-id: f842af340e4be723bf90b903793b0221af158ca7
2021-03-20 23:03:30 -07:00
Edward Yang
c2f41b6b84 Add meta device to generic device testing framework, skip NotImplementedError (#53682)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53682

With this, under the meta device, 101 tests passed and 16953 skipped.
It ain't much, but it's a start.

Some various bits and bobs:
- NotImplementedError suppression at test level is implemented
  in the same way as CUDA memory leak check, i.e., by wrapping
  test methods and monkeypatching them back in.
- I had to reimplement assertRaises/assertRaisesRegex from scratch to
  ignore NotImplementedError when _ignore_not_implemented_error is True.
  The implementation relies on a small amount of private API that hasn't
  changed since 2010
- expectedAlertNondeterministic doesn't really work so I skipped them
  all; there's probably a way to do it better

I tested this using `pytest --disable-warnings --tb=native -k meta --sw
test/*.py` and a pile of extra patches to make collection actually work
(lol).

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26955539

Pulled By: ezyang

fbshipit-source-id: ac21c8734562497fdcca3b614a28010bc4c03d74
2021-03-14 20:41:19 -07:00
Philip Meier
b0afe945a7 Fix pylint error torch.tensor is not callable (#53424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53424

Fixes https://github.com/pytorch/pytorch/issues/24807 and supersedes the stale https://github.com/pytorch/pytorch/issues/25093 (Cc Microsheep). If you now run the reproduction

```python
import torch

if __name__ == "__main__":
    t = torch.tensor([1, 2, 3], dtype=torch.float64)
```

with `pylint==2.6.0`, you get the following output

```
test_pylint.py:1:0: C0114: Missing module docstring (missing-module-docstring)
test_pylint.py:4:8: E1101: Module 'torch' has no 'tensor' member; maybe 'Tensor'? (no-
member)
test_pylint.py:4:38: E1101: Module 'torch' has no 'float64' member (no-member)
```

Now `pylint` doesn't recognize `torch.tensor` at all, but it is promoted in the stub. Given that it also doesn't recognize `torch.float64`, I think fixing this is out of scope of this PR.

 ---

## TL;DR

This BC-breaking only for users that rely on unintended behavior. Since `torch/__init__.py` loaded `torch/tensor.py` it was populated in `sys.modules`. `torch/__init__.py` then overwrote `torch.tensor` with the actual function. With this `import torch.tensor as tensor` does not fail, but returns the function rather than the module. Users that rely on this import need to change it to `from torch import tensor`.

Reviewed By: zou3519

Differential Revision: D26223815

Pulled By: bdhirsh

fbshipit-source-id: 125b9ff3d276e84a645cd7521e8d6160b1ca1c21
2021-03-09 11:32:53 -08:00
Yanan Cao
bf88a4dad5 Support parsing Ellipsis in JIT frontend (#53576)
Summary:
De-sugars `Ellipsis` into dots (`...`)

Fixes https://github.com/pytorch/pytorch/issues/53517
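
For example:

```python
import torch

@torch.jit.script
def drop_last_dim(x: torch.Tensor) -> torch.Tensor:
    return x[Ellipsis, 0]  # now parsed the same as x[..., 0]
```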

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53576

Reviewed By: pbelevich

Differential Revision: D26904361

Pulled By: gmagogsfm

fbshipit-source-id: 5b23e049a075a9a99e37dcb47a9410b6f82a6fb7
2021-03-09 00:06:21 -08:00
mattip
54a2498919 Modify tests to use assertWarnsOnceRegex instead of maybeWarnsRegex (#52387)
Summary:
Related to https://github.com/pytorch/pytorch/issues/50006

Follow-on to https://github.com/pytorch/pytorch/issues/48560 to ensure TORCH_WARN_ONCE warnings are caught. Most of this is straightforward find-and-replace, but I did find one place where the TORCH_WARN_ONCE warning was not wrapped into a Python warning.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52387

Reviewed By: albanD

Differential Revision: D26773387

Pulled By: mruberry

fbshipit-source-id: 5be7efbc8ab4a32ec8437c9c45f3b6c3c328f5dd
2021-03-08 03:32:14 -08:00
Joel Schlosser
6557ea0509 Context manager for hiding source ranges (#53188)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/52456

## Background

Provides a context manager `_hide_source_ranges()` that disables printing graph source ranges by default. It can be overridden on a per-graph basis if desired.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53188

Test Plan:
```
python test/test_jit.py TestJit.test_hide_source_ranges_context_manager
```

```python
import torch

@torch.jit.script
def foo(x):
    return torch.add(x, x)

print(foo.graph)
with torch.jit._hide_source_ranges():
    print(foo.graph)

    # Override context manager
    print(foo.graph.str(print_source_ranges=True))

print(foo.graph)
```

```
graph(%x.1 : Tensor):
  %3 : int = prim::Constant[value=1]()
  %4 : Tensor = aten::add(%x.1, %x.1, %3) # /Users/jbschlosser/misc/example.py:5:11
  return (%4)

graph(%x.1 : Tensor):
  %3 : int = prim::Constant[value=1]()
  %4 : Tensor = aten::add(%x.1, %x.1, %3)
  return (%4)

graph(%x.1 : Tensor):
  %3 : int = prim::Constant[value=1]()
  %4 : Tensor = aten::add(%x.1, %x.1, %3) # /Users/jbschlosser/misc/example.py:5:11
  return (%4)

graph(%x.1 : Tensor):
  %3 : int = prim::Constant[value=1]()
  %4 : Tensor = aten::add(%x.1, %x.1, %3) # /Users/jbschlosser/misc/example.py:5:11
  return (%4)
```

Reviewed By: walterddr, zhangguanheng66

Differential Revision: D26817070

Pulled By: jbschlosser

fbshipit-source-id: e9d123452c616b0a9dda9e134ef6c2886f229d9b
2021-03-04 09:11:08 -08:00
nikithamalgi
c5b0c2fa8b Support torch.complex (#53227)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53227

========
Adds support for torch.complex in JIT
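
A minimal sketch:

```python
import torch

@torch.jit.script
def to_complex(real: torch.Tensor, imag: torch.Tensor) -> torch.Tensor:
    return torch.complex(real, imag)
```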

Test:
====
python test/test_jit.py -k test_torch_complex

Test Plan: Imported from OSS

Reviewed By: zdevito, bhosmer

Differential Revision: D26808285

Pulled By: nikithamalgifb

fbshipit-source-id: c6918b2baac814e78613a264d90941b8c6102237
2021-03-04 00:03:00 -08:00
Nikitha Malgi
7a60b7dc3e Add support to compare devices (#53045)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53045
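
An illustrative sketch (the function name is hypothetical):

```python
import torch

@torch.jit.script
def same_device(a: torch.Tensor, b: torch.Tensor) -> bool:
    return a.device == b.device  # == and != on devices are now scriptable
```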

Test Plan:
=====
python test/test_jit.py -k test_device_not_equal

Reviewed By: pbelevich

Differential Revision: D26737964

Pulled By: nikithamalgifb

fbshipit-source-id: 2205aa1f214a86282602168c364dca1363d2f7dd
2021-03-01 21:04:43 -08:00
nikithamalgi
c423733967 Add support for builtin sum (#52188)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/18627
Adds torch.sum support for JIT
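
A sketch of the builtin form exercised by test_list_sum:

```python
import torch
from typing import List

@torch.jit.script
def total(xs: List[float]) -> float:
    return sum(xs)  # the Python builtin `sum` over a list
```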

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52188

Test Plan:
python test/test_jit.py -k test_list_sum
python test/test_jit.py -k test_torch_sum

Reviewed By: pbelevich, anjali411

Differential Revision: D26670022

Pulled By: nikithamalgifb

fbshipit-source-id: eb58f0a3a64dab4b9fa1f4eb854e9854fa9bda55
2021-02-25 21:09:34 -08:00
Tugsbayasgalan Manlaibaatar
e658d7c37b Ignore user annotated ignored attributes (#52367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52367

This fixes https://github.com/pytorch/pytorch/issues/52217

Test Plan: Imported from OSS

Reviewed By: navahgar, gmagogsfm

Differential Revision: D26574411

Pulled By: tugsbayasgalan

fbshipit-source-id: 7eac097f5b97cfe65854bceca14d41c156cd6e0a
2021-02-23 10:40:44 -08:00
Nikolay Korovaiko
847d1d4d53 add debug_flush_compilation_cache to Method (#52317)
Summary:
Forgot to add `debug_flush_compilation_cache ` to `Method` as well.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52317

Reviewed By: bdhirsh

Differential Revision: D26583313

Pulled By: Krovatkin

fbshipit-source-id: 1b3e503950cc3314796aff53b3b8038d16767870
2021-02-22 12:31:09 -08:00
nikithamalgi
e677b71056 Add support for pow (#52374)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/18627
Adds pow support for JIT

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52374

Test Plan: python test/test_jit.py -k test_torch_pow

Reviewed By: heitorschueroff

Differential Revision: D26555070

Pulled By: nikithamalgifb

fbshipit-source-id: 0d325f09cf893e4ae50277a95a6b7ad67d94f342
2021-02-21 19:55:58 -08:00
nikithamalgi
d819a21692 Support any (#52360)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/18627
Adds torch.any support for JIT

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52360

Test Plan:
python test/test_jit.py -k test_torch_any
python test/test_jit.py -k test_any

Reviewed By: tugsbayasgalan

Differential Revision: D26550626

Pulled By: nikithamalgifb

fbshipit-source-id: 36c2ae15e3bfb7b32bbf442818c879b0d2120cf1
2021-02-21 15:49:57 -08:00
Yuxin Wu
a62b0deae0 [pytorch] make is_tracing scriptable (#49853)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49853

fix https://github.com/pytorch/pytorch/issues/47379

Test Plan: buck test mode/dev-nosan //caffe2/test:jit -- 'test_script_is_tracing'

Reviewed By: SplitInfinity

Differential Revision: D25704315

fbshipit-source-id: 33c09c5bc1f1b62ef254f58e18ab1e951dbd1790
2021-02-20 02:53:28 -08:00
Richard Zou
b71215a909 Revert D26515596: [pytorch][PR] Add support for pow
Test Plan: revert-hammer

Differential Revision:
D26515596 (83feaebfc3)

Original commit changeset: 0c25a8eba8ed

fbshipit-source-id: 1a206f0b2923d922911fdaa5448a4e3a844ac5c4
2021-02-19 07:29:37 -08:00
nikithamalgi
83feaebfc3 Add support for pow (#52374)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/18627
Adds pow support for JIT

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52374

Test Plan: python test/test_jit.py -k test_torch_pow

Reviewed By: Lilyjjo

Differential Revision: D26515596

Pulled By: nikithamalgifb

fbshipit-source-id: 0c25a8eba8ed93291c5e447e863edac2a35b61fb
2021-02-18 23:03:28 -08:00
Nikolay Korovaiko
0019a20a2b [WIP] Add a _flush_compilation_cache for testing (#52001)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52001

Reviewed By: eellison

Differential Revision: D26371876

Pulled By: Krovatkin

fbshipit-source-id: db773d7124916bad31e80bdd7bb9b4170060977b
2021-02-16 10:49:38 -08:00
Ansley Ussery
1657d59641 Walk Python AST to check for unsupported attribute type annotations (#51805)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51805

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26418589

Pulled By: ansley

fbshipit-source-id: c13e9096dcfa242d158ebf1ae4f86ef6c46ff0ec
2021-02-12 18:18:01 -08:00
Yanan Cao
705fa7e964 [Usability] Capture argument names for traced functions and modules (#51775)
Summary:
Previously, `torch.jit.trace` relied on autograd hooks to infer the names of tensors in a computation, including those of function/method arguments. This often doesn't work out because:

- These names often do not exist
- The tracer uses the argument name of the first tensor operation on each tensor as the inferred argument name. These tensor operations have programmatically generated names like `argument_1`

This PR extracts argument names directly from Python functions and passes them down to the tracer, which then assigns them to the correct graph inputs. This way, we always have the correct argument names captured in the IR.

This is useful for both debugging and supporting using `InterfaceType` to represent traced modules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51775

Reviewed By: izdeby

Differential Revision: D26273105

Pulled By: gmagogsfm

fbshipit-source-id: 934a385041137dc3731bb6fa8657b11532fed9e5
2021-02-10 18:28:08 -08:00
Mike Ruberry
594a66d778 Warn about floor_divide performing incorrect rounding (#50281) (#50281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51745

Test Plan: Imported from OSS

Reviewed By: ngimel

Pulled By: mruberry

Differential Revision: D26257855

fbshipit-source-id: e5d497cf07b0c746838ed081c5d0e82fb4cb701b
2021-02-10 03:13:34 -08:00
nikithamalgi
9c0caf0384 Adding support for comparing two bool variables (#51844)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51844

Fixes issue #48174

=========

Adds support to compare two bool variables

Test:
======
python test/test_jit.py -k test_compare_two_bool_inputs

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26353694

Pulled By: nikithamalgifb

fbshipit-source-id: 41af5ba3e4075ed7a21595b10e388a7302aa1fce
2021-02-10 02:13:25 -08:00
nikithamalgi
141f615161 Support torch.type (#51904)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51904

Fixes issue: #25433

=========
Makes tensor.type(dtype) scriptable
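
For example:

```python
import torch

@torch.jit.script
def as_double(x: torch.Tensor) -> torch.Tensor:
    return x.type(torch.float64)  # tensor.type(dtype) inside TorchScript
```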

Test:
======
python test/test_jit.py -v TestJit.test_script_tensor_type

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26331503

Pulled By: nikithamalgifb

fbshipit-source-id: d9188999fee601a8402fdc4d9052dee4e0d529d5
2021-02-09 11:39:57 -08:00
Chester Liu
58eb23378f Clean up usage of torch._six partially (#49785)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49785

Reviewed By: mruberry

Differential Revision: D25963833

Pulled By: bugra

fbshipit-source-id: 11c90d6b8d3f206c9d0a4d8621b773beb10c6ba2
2021-02-08 13:58:34 -08:00
Yanan Cao
b9acfcddeb Support mypy ignore annotation with particular rule specified (#51675)
Summary:
Previously TorchScript allows a ignore-all type check suppression rule that looks like
```
code code code  # type: ignore
```

But a more common use case is
```
code code code  # type: ignore[specific-rule]
```
This PR allows the more common use case

Fixes https://github.com/pytorch/pytorch/issues/48643

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51675

Reviewed By: ansley

Differential Revision: D26304870

Pulled By: gmagogsfm

fbshipit-source-id: 0ac9ee34f0219c86e428318a69484d5aa3ec433f
2021-02-08 11:21:47 -08:00
nikithamalgi
fa70168804 Add metacompile of Ternary if (#51789)
Summary:
Fixes issue: https://github.com/pytorch/pytorch/issues/49728
========
The ternary if operation used to fail in TorchScript when the condition variable was annotated as Final.
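
A sketch of the now-working pattern (the module is hypothetical):

```python
import torch
from typing import Final

class M(torch.nn.Module):
    shift_up: Final[bool]  # torch.jit.Final also works on older Pythons

    def __init__(self, shift_up: bool):
        super().__init__()
        self.shift_up = shift_up

    def forward(self, x):
        # ternary on a Final condition is resolved statically
        return x + 1 if self.shift_up else x - 1

m = torch.jit.script(M(True))
```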

Tests:
=======
pytest -k test_ternary_static_if test/test_jit.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51789

Reviewed By: gmagogsfm

Differential Revision: D26278969

Pulled By: nikithamalgifb

fbshipit-source-id: 27d1383290211503188428fb2e8b7749f59ba16e
2021-02-06 10:14:30 -08:00
jiej
4d703d040b Linear autodiff revert revert (#51613)
Summary:
Patch PR https://github.com/pytorch/pytorch/issues/50856 and roll back the revert D26105797 (e488e3c443)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51613

Reviewed By: mruberry

Differential Revision: D26253999

Pulled By: ngimel

fbshipit-source-id: a20b1591de06dd277e4cd95542e3291a2f5a252c
2021-02-04 16:32:05 -08:00
nikithamalgi
ecf8166522 Support Union[NoneType, T] as input type (#51605)
Summary:
ghstack-source-id: 32db9661ce0f9441ef7061285bc24967c2808ea6
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51605

Fixes https://github.com/pytorch/pytorch/issues/51582
=========
In Python 3.9+, treat Union[T, NoneType] and Union[NoneType, T] as OptionalType.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51606

Test Plan:
====
python test/test_jit.py -v TestJit.test_union_to_optional

Reviewed By: pbelevich

Differential Revision: D26242353

Pulled By: nikithamalgifb

fbshipit-source-id: 0ac441fa1bdf2fb1044e3fe131bee47adda90bbb
2021-02-04 06:25:41 -08:00
Yanan Cao
75ee575671 [Usability] Handle repeated jit.script calls on function gracefully (#51545)
Summary:
Repeated calls on a `class` are not handled since the `class` compilation process will change soon in https://github.com/pytorch/pytorch/issues/44324

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51545

Reviewed By: H-Huang

Differential Revision: D26207010

Pulled By: gmagogsfm

fbshipit-source-id: 5f3f64b0e4b4ab4dbf5c9411d9c143472922a106
2021-02-03 02:09:25 -08:00
Natalia Gimelshein
26f9ac98e5 Revert D26105797: [pytorch][PR] Exposing linear layer to fuser
Test Plan: revert-hammer

Differential Revision:
D26105797 (e488e3c443)

Original commit changeset: 6f7cedb9f6e3

fbshipit-source-id: f0858cefed76d726e9dba61e51e1eaf2af4c99c5
2021-02-02 17:39:17 -08:00
jiej
e488e3c443 Exposing linear layer to fuser (#50856)
Summary:
1. enabling linear in autodiff;
2. removing control flow in Python for linear.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50856

Reviewed By: pbelevich

Differential Revision: D26105797

Pulled By: eellison

fbshipit-source-id: 6f7cedb9f6e3e46daa24223d2a6080880498deb4
2021-02-02 15:39:01 -08:00
Meghan Lele
751c30038f [JIT] Properly convert Python strings implictly to device (#51340)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51340

**Summary**
`toIValue` assumes that any value passed for an argument of type
`torch.device` is a valid device object, even when it is not. This can
lead to device type arguments of functions being assigned incorrect
values (see #51098).

This commit adds an explicit check that the passed in object is indeed a
`torch.device` using `THPDevice_Check` and only then does is it
converted to an `IValue`. Since implicit conversion from strings to
devices is generally allowed, if `THPDevice_Check` fails, it is assumed
that the object is a string and an `IValue` containing a `c10::Device`
containing the passed in string is returned.

**Test Plan**
This commit adds a unit test to `test_jit.py` to test that invalid
strings passed as devices are not longer silently accepted.

**Fixes**
This commit fixes #51098.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26187190

Pulled By: SplitInfinity

fbshipit-source-id: 48c990203431da30f9f09381cbec8218d763325b
2021-02-02 10:57:56 -08:00
Ansley Ussery
09e48dbd33 Handle error during dict expansion (#51374)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51374

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D26155995

Pulled By: ansley

fbshipit-source-id: 04e924cb641565341c570c6cf5e5eec42e4f9c8b
2021-01-29 18:46:10 -08:00
anjali411
f9f22c8b5c Add serialization logic for complex numbers (#51287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51287

This reverts commit dfdb1547b9.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26131165

Pulled By: anjali411

fbshipit-source-id: 047167fac594ddb670c5e169446e90e74991679a
2021-01-28 17:25:35 -08:00
Nikitha Malgi
b955da3310 Adding correct error message for for..else (#51258)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51040

========
Add error message for for..else statement in Torchscript

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51258

Test Plan:
=====
pytest -k test_for_else test/test_jit.py

Reviewed By: pbelevich

Differential Revision: D26125148

Pulled By: nikithamalgifb

fbshipit-source-id: 82b67ab1c68e29312162ff5d73b82c8c0c9553df
2021-01-28 08:17:31 -08:00
Lillian Johnson
3b6f30824c OpInfo JIT op.output_func handling support (#50775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50775

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25964541

Pulled By: Lilyjjo

fbshipit-source-id: 8cf1ee9191d526cc46ae283f38c2d64bd60afdb2
2021-01-27 15:04:23 -08:00
Nikita Shulga
00adc7b07f Fix more JIT tests under Python-3.9 (#51182)
Summary:
Mostly replaces `global Foo` with `make_global(Foo)`.
The only real fix is generating Subscript annotations, which is a follow-up from https://github.com/pytorch/pytorch/pull/48676

Fixes https://github.com/pytorch/pytorch/issues/49617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51182

Reviewed By: gmagogsfm

Differential Revision: D26095244

Pulled By: malfet

fbshipit-source-id: 0e043d9a2cf43fff71dfbb341f708cd7af87c39a
2021-01-27 10:57:03 -08:00
Thomas Viehmann
ac0a3cc5fd Merge CompilationUnit from torch._C and torch.jit (#50614)
Summary:
This simplifies our handling and makes it easy to pass CompilationUnits from Python to C++-defined functions via PyBind.

Discussed on Slack with SplitInfinity
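
A minimal sketch of direct CompilationUnit usage (the function defined here is just an example):

```python
import torch

# After the merge, torch.jit.CompilationUnit is the single class backing
# both the Python and C++ sides; functions defined on it are callable
# as attributes.
cu = torch.jit.CompilationUnit('''
def add_one(x: int) -> int:
    return x + 1
''')
print(cu.add_one(41))  # 42
```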

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50614

Reviewed By: anjali411

Differential Revision: D25938005

Pulled By: SplitInfinity

fbshipit-source-id: 94aadf0c063ddfef7ca9ea17bfa998d8e7b367ad
2021-01-25 11:06:40 -08:00
Peter Bell
47f0bda3ef Improve complex support in common_nn test machinery (#50593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50593

There is no equivalent of torch.FloatTensor or torch.cuda.FloatTensor for complex
types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors.

Also found a few places that explicitly cast inputs to floating point types,
which would drop the imaginary component before running the test.
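
A minimal sketch of the dtype-preserving alternative, assuming the fix is to move tensors with `.to(...)` rather than FloatTensor-style casts:

```python
import torch

# Casting a complex tensor to a float type drops the imaginary part;
# moving it with .to(device=...) keeps the dtype intact.
x = torch.randn(3, dtype=torch.complex64)
y = x.to("cuda") if torch.cuda.is_available() else x
assert y.dtype == torch.complex64  # imaginary component preserved
```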

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25954050

Pulled By: mruberry

fbshipit-source-id: 1fa8e5af233aa095c839d5e2f860564baaf92aef
2021-01-22 09:44:45 -08:00
Lillian Johnson
3b88e1b0e7 [WIP] JIT Static Hooks: python tests (#49546)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49546

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D25771119

Pulled By: Lilyjjo

fbshipit-source-id: bf8a8e20f790691d3ff58fa9c8d0d9ab3e8322c4
2021-01-20 09:12:53 -08:00
Guilherme Leobas
a9e46f1413 add type annotations to torch.nn.modules.container (#48969)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48969

Reviewed By: mrshenli

Differential Revision: D25728987

Pulled By: walterddr

fbshipit-source-id: 02c3aa2078f4ed6cc6edd90ffe1177d789c328a9
2021-01-19 15:12:17 -08:00
Nikolay Korovaiko
8e60bf9034 add RequiresGradCheck (#50392)
Summary:
This change improves perf by 3-4% on fastrnns.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50392

Reviewed By: izdeby

Differential Revision: D25891392

Pulled By: Krovatkin

fbshipit-source-id: 44d9b6907d3975742c9d77102fe6a85aab2c08c0
2021-01-15 16:50:42 -08:00
Guilherme Leobas
0d981eea6c add type annotations to torch.nn.modules.conv (#49564)
Summary:
closes gh-49563

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49564

Reviewed By: albanD

Differential Revision: D25917441

Pulled By: walterddr

fbshipit-source-id: 491dc06cfc1bbf694dfd9ccefca4f55488a931b2
2021-01-15 11:16:11 -08:00
Guilherme Leobas
374951d102 Add type annotations to torch.nn.modules.padding (#49494)
Summary:
Closes gh-49492

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49494

Reviewed By: mruberry

Differential Revision: D25723837

Pulled By: walterddr

fbshipit-source-id: 92af0100f6d9e2bb25b259f5a7fe9d449ffb6443
2021-01-12 15:34:28 -08:00
Elias Ellison
a69f008cb7 [JIT] Factor out peephole to own test file (#50220)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50220

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856263

Pulled By: eellison

fbshipit-source-id: f3d918d860e64e788e0bb9b9cb85125660f834c6
2021-01-12 11:39:06 -08:00
Elias Ellison
035229c945 [JIT] Frozen Graph Conv-BN fusion (#50074)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50074

Adds Conv-BN fusion for models that have been frozen. I haven't explicitly tested perf yet, but it should be equivalent to the results from Chillee's PR [here](https://github.com/pytorch/pytorch/pull/47657) and [here](https://github.com/pytorch/pytorch/pull/47657#issuecomment-725752765). Click on the PR for details, but it's a good speedup.

 In a later PR in the stack I plan on making this optimization on by default as part of `torch.jit.freeze`. I will also add a peephole in a later PR so that conv -> batchnorm2d doesn't generate a conditional checking the number of dims. The weight-folding algebra is sketched below.
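
A minimal sketch of that algebra (illustrative only; not the graph pass itself, and the `fold_conv_bn` helper below is hypothetical):

```python
import torch
import torch.nn as nn

# Fold an eval-mode BatchNorm2d into the preceding Conv2d: BN's affine
# transform is absorbed into the conv weight and bias per output channel.
def fold_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    with torch.no_grad():
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                          conv.kernel_size, conv.stride, conv.padding)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
        fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

conv, bn = nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8)
bn.running_mean.uniform_(-1, 1); bn.running_var.uniform_(0.5, 1.5)
bn.weight.data.uniform_(0.5, 1.5); bn.bias.data.uniform_(-1, 1)
bn.eval()
x = torch.randn(1, 3, 16, 16)
print(torch.allclose(fold_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5))  # True
```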

Zino was working on freezing and left the team, so I'm not really sure who should be reviewing this, but I don't care too much so long as I get a review.

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856261

Pulled By: eellison

fbshipit-source-id: da58c4ad97506a09a5c3a15e41aa92bdd7e9a197
2021-01-12 11:37:32 -08:00
Thomas Viehmann
ea087e2d92 JIT: guard DifferentiableGraph node (#49433)
Summary:
This adds guarding for DifferentiableGraph nodes so that a specialized graph only runs on inputs it was validated for. It also bails out on required gradients for the CUDA fuser.

Fixes https://github.com/pytorch/pytorch/issues/49299

I still need to look into a handful of failing tests, but maybe this can serve as a basis for discussion.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49433

Reviewed By: ngimel

Differential Revision: D25681374

Pulled By: Krovatkin

fbshipit-source-id: 8e7be53a335c845560436c0cceeb5e154c9cf296
2021-01-08 20:01:27 -08:00
Richard Barnes
ec6d29d6fa Drop unused imports from test (#49973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49973

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727350

fbshipit-source-id: 237ec4edd85788de920663719173ebec7ddbae1c
2021-01-07 12:09:38 -08:00
Nikitha Malgi
12b73fdbbf Adding JIT support for cuda streams and events (#48020)
Summary:
=======

This PR addresses the following:

 * Adds JIT support for CUDA Streams
 * Adds JIT support for CUDA Events
 * Adds JIT support for CUDA Stream context manager (a rough sketch follows)
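
A rough sketch of the kind of scripted program this enables; a CUDA build and GPU are required, and the constructor and context-manager spellings inside TorchScript are assumed to mirror the eager `torch.cuda` API:

```python
import torch

# A hedged sketch only: script a function that runs an op on a side stream
# and synchronizes before returning.
@torch.jit.script
def add_on_side_stream(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    s = torch.cuda.Stream()
    with torch.cuda.stream(s):
        z = x + y
    s.synchronize()
    return z

if torch.cuda.is_available():
    a = torch.ones(4, device="cuda")
    print(add_on_side_stream(a, a))
```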

Testing:
======

python test/test_jit.py -v TestCUDA

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48020

Reviewed By: navahgar

Differential Revision: D25725749

Pulled By: nikithamalgifb

fbshipit-source-id: b0addeb49630f8f0c430ed7badeca43bb9d2535c
2020-12-29 20:24:57 -08:00
peter
8d7338e820 Enable tests using named temp files on Windows (#49640)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49640

Reviewed By: ngimel

Differential Revision: D25681548

Pulled By: malfet

fbshipit-source-id: 0e2b25817c98d749920cb2b4079033a2ee8c1456
2020-12-29 09:57:35 -08:00
Ansley Ussery
58fe67967c Support the in operator with str (#47057)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47057
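
A minimal sketch of the newly supported check:

```python
import torch

# `in` on strings now compiles in TorchScript.
@torch.jit.script
def mentions_cuda(s: str) -> bool:
    return "cuda" in s

print(mentions_cuda("cuda:0"), mentions_cuda("cpu"))  # True False
```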

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24863370

Pulled By: ansley

fbshipit-source-id: 5d17165b06052f0a4676537c5f6757083185a591
2020-12-28 10:26:24 -08:00
Erjia Guan
b80a36614f Fix return type Any for Ternary ops (#49165)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49165

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D25463694

Pulled By: ejguan

fbshipit-source-id: 5cf907e8de6eeb0171d61175a60fac9812b76c6c
2020-12-21 10:12:41 -08:00
Peter Bell
5c25f8faf3 stft: Change require_complex warning to an error (#49022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

**BC-breaking note**:

Previously torch.stft took an optional `return_complex` parameter that indicated whether the output would be a floating point tensor or a complex tensor. By default `return_complex` was False to be consistent with the previous behavior of torch.stft. This PR changes this behavior so `return_complex` is a required argument.
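
A minimal sketch of the new requirement (the n_fft value is illustrative):

```python
import torch

x = torch.randn(1024)
spec = torch.stft(x, n_fft=256, return_complex=True)  # complex output
print(spec.dtype)  # torch.complex64
# torch.stft(x, n_fft=256)  # omitting return_complex is now an error
```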

**PR Summary**:

* **#49022 stft: Change require_complex warning to an error**

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25658906

Pulled By: mruberry

fbshipit-source-id: 11932d1102e93f8c7bd3d2d0b2a607fd5036ec5e
2020-12-20 14:48:25 -08:00
Nikitha Malgi
e17f0fd676 Adding support for bitwise augassignment operators (#44621)
Summary:
========
Fixes #42915

This commit adds support for bitwise augmented-assignment shorthands in TorchScript, i.e., |=, &=, ^=, <<=, >>=, **= (see the sketch below).
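
A minimal sketch covering a subset of the operators (values illustrative):

```python
import torch

@torch.jit.script
def bits(x: int) -> int:
    x |= 8   # 5  -> 13
    x &= 14  # 13 -> 12
    x ^= 3   # 12 -> 15
    x <<= 2  # 15 -> 60
    x >>= 1  # 60 -> 30
    return x

print(bits(5))  # 30
```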

Testing:
======
This commit also adds test for the above fix in test_jit.py
The test can be invoked by
pytest -k augassign test/test_jit.py


Pull Request resolved: https://github.com/pytorch/pytorch/pull/44621

Reviewed By: mrshenli

Differential Revision: D23906344

Pulled By: nikithamalgifb

fbshipit-source-id: 4c93a7430a625f698b163609ccec15e51417d564
2020-12-18 12:07:54 -08:00
albanD
ccd646696b Fix Module backward hooks for all Tensor inputs/outputs (#46163)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/598

This is BC-breaking as we now explicitly don't call the hook when there are no Tensors at the top level of the output.
This feature was not working anyway, as the returned grad_input/grad_output were wrong (not respecting the output structure, and wrong inputs for multi-Node Modules).

This is also BC-breaking as we now report the correct gradients for `nn.Module`s that contain multiple autograd `Node`s, whereas we previously returned incorrect results.
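
A minimal sketch of the fixed hooks in action, assuming the corrected behavior is the one exposed as `register_full_backward_hook` (as in current PyTorch):

```python
import torch
import torch.nn as nn

# grad_input / grad_output now line up with the Tensors at the top level
# of the module's inputs and outputs.
def report(module, grad_input, grad_output):
    print("grad_input:", [g.shape for g in grad_input if g is not None])
    print("grad_output:", [g.shape for g in grad_output])

m = nn.Linear(4, 2)
m.register_full_backward_hook(report)
m(torch.randn(3, 4, requires_grad=True)).sum().backward()
```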

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46163

Reviewed By: ailzhang, mruberry

Differential Revision: D24894180

Pulled By: albanD

fbshipit-source-id: e1b5d193d2818eb2f51e2a2722c7405c8bd13c2b
2020-12-18 09:04:36 -08:00
Ansley Ussery
d17dc37112 Add dict comprehension (#47774)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47774
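
A minimal sketch of the newly accepted syntax:

```python
import torch
from typing import Dict, List

@torch.jit.script
def square_map(xs: List[int]) -> Dict[int, int]:
    return {x: x * x for x in xs}  # dict comprehension now compiles

print(square_map([1, 2, 3]))  # {1: 1, 2: 4, 3: 9}
```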

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25615464

Pulled By: ansley

fbshipit-source-id: 10bba6f70e812fa580cbbbf097e93de7142484cc
2020-12-17 15:25:30 -08:00
Mike Ruberry
f5b68e74d7 Revert D25574962: [pytorch][PR] Updated derivative rules for complex svd and pinverse
Test Plan: revert-hammer

Differential Revision:
D25574962 (9955355853)

Original commit changeset: 832b61303e88

fbshipit-source-id: d73f77f3e51b0f535dad6d21c5bebf8d41a6bfbd
2020-12-17 00:59:43 -08:00
Nikitha Malgi
26e076d19e Adding fix for invalid annotation types for dictionary (#49425)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49362

**Summary:**
This PR fixes the issue where invalid annotation types are used for a dictionary.
A clear "unsupported annotation" error message is now generated for all invalid annotations.

**Test Case**:
python test/test_jit.py TestJit.test_dict_invalid_annotations

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49425

Reviewed By: navahgar

Differential Revision: D25601578

Pulled By: nikithamalgifb

fbshipit-source-id: 91633e3d0891bdcb5402f044a74d02fe352ecd6f
2020-12-17 00:28:29 -08:00
Mike Ruberry
47c65f8223 Revert D25569586: stft: Change require_complex warning to an error
Test Plan: revert-hammer

Differential Revision:
D25569586 (5874925b46)

Original commit changeset: 09608088f540

fbshipit-source-id: 6a5953b327a4a2465b046e29bb007a0c5f4cf14a
2020-12-16 16:21:52 -08:00
Peter Bell
5874925b46 stft: Change require_complex warning to an error (#49022)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25569586

Pulled By: mruberry

fbshipit-source-id: 09608088f540c2c3fc70465f6a23f2aec5f24f85
2020-12-16 12:47:56 -08:00
Ivan Yashchuk
9955355853 Updated derivative rules for complex svd and pinverse (#47761)
Summary:
Updated `svd_backward` to work correctly for complex-valued inputs.
Updated `common_methods_invocations.py` to take dtype, device arguments for input construction.
Removed `test_pinverse` from `test_autograd.py`; it is replaced by entries in `common_methods_invocations.py`.
Added `svd` and `pinverse` to list of complex tests.

References for complex-valued SVD differentiation:

- https://giggleliu.github.io/2019/04/02/einsumbp.html
- https://arxiv.org/abs/1909.02659

The derived rules assume gauge invariance of loss functions, so the result would not be correct for loss functions that are not gauge invariant.
https://re-ra.xyz/Gauge-Problem-in-Automatic-Differentiation/
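
For background (standard SVD facts, not taken from this PR): the complex SVD is only unique up to per-pair phases, so gradients through it are well defined only for losses that ignore those phases:

```latex
A = U \Sigma V^{H} = (U D)\, \Sigma\, (V D)^{H},
\qquad D = \operatorname{diag}\!\left(e^{i\phi_1}, \dots, e^{i\phi_k}\right),
```

so a loss L(U, Σ, V) has a well-defined gradient through the SVD only if L(UD, Σ, VD) = L(U, Σ, V) for every choice of phases φ_j.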

The same rule is implemented in Tensorflow and [BackwardsLinalg.jl](https://github.com/GiggleLiu/BackwardsLinalg.jl).

Ref. https://github.com/pytorch/pytorch/issues/33152

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47761

Reviewed By: izdeby

Differential Revision: D25574962

Pulled By: mruberry

fbshipit-source-id: 832b61303e883ad3a451b84850ccf0f36763a6f6
2020-12-16 12:32:22 -08:00
Chen Lai
717f31d984 Remove unused reconstruct_scopes function (#48822)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48822

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D25325012

Pulled By: cccclai

fbshipit-source-id: 86ea4c0b2926257c0f82aa05cbcd83278b1b67f7
2020-12-11 23:43:36 -08:00
Tugsbayasgalan Manlaibaatar
42c78ed745 Tuple Slice with both negative and positive stepped size (#48660)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48660

We previously supported tuple slicing only without a step size; this PR extends the feature to arbitrary step sizes (sketched below). We do this by manually reconstructing a new tuple in the IR instead of relying on the TupleSlice prim.
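
A minimal sketch of the extended feature:

```python
import torch
from typing import Tuple

# Tuple slicing with positive and negative steps now compiles; the result
# type is computed statically from the slice.
@torch.jit.script
def every_other(t: Tuple[int, int, int, int]) -> Tuple[int, int]:
    return t[::2]

@torch.jit.script
def reversed_pair(t: Tuple[int, int]) -> Tuple[int, int]:
    return t[::-1]

print(every_other((1, 2, 3, 4)), reversed_pair((5, 6)))  # (1, 3) (6, 5)
```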

Test Plan:
python tests

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D25359336

fbshipit-source-id: 28cde536f28dd8a00607814b2900765e177f0ed7
2020-12-11 11:00:38 -08:00
Peter Bell
533c837833 Register OpInfos for torch.fft transforms (#48427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48427

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25266218

Pulled By: mruberry

fbshipit-source-id: 406e7ed5956bc7445daf8c027c9b4d2c8ff88fa1
2020-12-07 17:19:29 -08:00
Ivan Yashchuk
cb285080b0 Added computing matrix condition numbers (linalg.cond) (#45832)
Summary:
This PR adds `torch.linalg.cond` for NumPy compatibility.

Ref https://github.com/pytorch/pytorch/issues/42666.
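
A minimal sketch of the new API (matrix values illustrative):

```python
import torch

A = torch.tensor([[1.0, 0.0], [0.0, 1e-4]])
print(torch.linalg.cond(A))         # 2-norm condition number, ~1e4
print(torch.linalg.cond(A, "fro"))  # Frobenius-norm variant
```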

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45832

Reviewed By: ngimel

Differential Revision: D25183690

Pulled By: mruberry

fbshipit-source-id: a727959bfec2bc2dc36df59d9ef79c0534b68194
2020-12-04 02:23:57 -08:00