Commit Graph

1679 Commits

Yuxin Wu
a62b0deae0 [pytorch] make is_tracing scriptable (#49853)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49853

fix https://github.com/pytorch/pytorch/issues/47379
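A minimal sketch (not from the PR) of what this enables, calling `torch.jit.is_tracing` from scripted code:

```
import torch

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    # torch.jit.is_tracing() can now be compiled by the scripter
    if torch.jit.is_tracing():
        return x
    return x + 1

print(f(torch.ones(2)))  # not tracing, so prints tensor([2., 2.])
```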

Test Plan: buck test mode/dev-nosan //caffe2/test:jit -- 'test_script_is_tracing'

Reviewed By: SplitInfinity

Differential Revision: D25704315

fbshipit-source-id: 33c09c5bc1f1b62ef254f58e18ab1e951dbd1790
2021-02-20 02:53:28 -08:00
Richard Zou
b71215a909 Revert D26515596: [pytorch][PR] Add support for pow
Test Plan: revert-hammer

Differential Revision:
D26515596 (83feaebfc3)

Original commit changeset: 0c25a8eba8ed

fbshipit-source-id: 1a206f0b2923d922911fdaa5448a4e3a844ac5c4
2021-02-19 07:29:37 -08:00
nikithamalgi
83feaebfc3 Add support for pow (#52374)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/18627
Adds pow support for JIT
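A hedged sketch of the kind of code this makes scriptable (the exact overloads covered are whatever `test_torch_pow` exercises):

```
import torch

@torch.jit.script
def scalar_pow(a: float, b: float) -> float:
    # builtin pow on scalars, compiled by TorchScript
    return pow(a, b)

print(scalar_pow(2.0, 3.0))  # 8.0
```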

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52374

Test Plan: python test/test_jit.py -k test_torch_pow

Reviewed By: Lilyjjo

Differential Revision: D26515596

Pulled By: nikithamalgifb

fbshipit-source-id: 0c25a8eba8ed93291c5e447e863edac2a35b61fb
2021-02-18 23:03:28 -08:00
Nikolay Korovaiko
0019a20a2b [WIP] Add a _flush_compilation_cache for testing (#52001)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52001

Reviewed By: eellison

Differential Revision: D26371876

Pulled By: Krovatkin

fbshipit-source-id: db773d7124916bad31e80bdd7bb9b4170060977b
2021-02-16 10:49:38 -08:00
Ansley Ussery
1657d59641 Walk Python AST to check for unsupported attribute type annotations (#51805)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51805

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26418589

Pulled By: ansley

fbshipit-source-id: c13e9096dcfa242d158ebf1ae4f86ef6c46ff0ec
2021-02-12 18:18:01 -08:00
Yanan Cao
705fa7e964 [Usability] Capture argument names for traced functions and modules (#51775)
Summary:
Previously, `torch.jit.trace` relied on AutoGrad hooks to infer the names of tensors in the computation, including those of function/method arguments. This often doesn't work out because:

- These names often do not exist
- The tracer uses the argument name of the first tensor operation on each tensor as the inferred argument name, and these operations have programmatically generated names like `argument_1`

This PR extracts argument names directly from the Python function and passes them down to the tracer, which then assigns them to the correct graph inputs. This way, we always have the correct argument names captured in the IR.

This is useful for both debugging and supporting using `InterfaceType` to represent traced modules.
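A minimal sketch of the effect (example not taken from the PR):

```
import torch

def forward(feature: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    return feature * mask

traced = torch.jit.trace(forward, (torch.rand(3), torch.ones(3)))
# The graph inputs should now carry the Python argument names
# ("feature", "mask") rather than inferred placeholder names.
print(traced.graph)
```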

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51775

Reviewed By: izdeby

Differential Revision: D26273105

Pulled By: gmagogsfm

fbshipit-source-id: 934a385041137dc3731bb6fa8657b11532fed9e5
2021-02-10 18:28:08 -08:00
Mike Ruberry
594a66d778 Warn about floor_divide performing incorrect rounding (#50281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51745
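For context, a sketch of the rounding behavior being warned about:

```
import torch

# floor_divide actually truncates toward zero rather than flooring,
# so for negative quotients it disagrees with Python's // operator:
print(torch.floor_divide(torch.tensor(-5), torch.tensor(2)))  # tensor(-2), now with a warning
print(-5 // 2)                                                # -3
```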

Test Plan: Imported from OSS

Reviewed By: ngimel

Pulled By: mruberry

Differential Revision: D26257855

fbshipit-source-id: e5d497cf07b0c746838ed081c5d0e82fb4cb701b
2021-02-10 03:13:34 -08:00
nikithamalgi
9c0caf0384 Adding support for comparing two bool variables (#51844)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51844

Fixes issue #48174

=========

Adds support to compare two bool variables
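A minimal sketch of the newly supported comparison (not from the PR):

```
import torch

@torch.jit.script
def same_flag(a: bool, b: bool) -> bool:
    return a == b

print(same_flag(True, False))  # False
```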

Test:
======
python test/test_jit.py -k test_compare_two_bool_inputs

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26353694

Pulled By: nikithamalgifb

fbshipit-source-id: 41af5ba3e4075ed7a21595b10e388a7302aa1fce
2021-02-10 02:13:25 -08:00
nikithamalgi
141f615161 Support torch.type (#51904)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51904

Fixes issue: #25433

=========
Makes tensor.type(dtype) scriptable
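A minimal sketch of the newly scriptable call (assuming the eager `Tensor.type(dtype)` semantics carry over):

```
import torch

@torch.jit.script
def as_double(x: torch.Tensor) -> torch.Tensor:
    return x.type(torch.float64)

print(as_double(torch.ones(2)).dtype)  # torch.float64
```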

Test:
======
python test/test_jit.py -v TestJit.test_script_tensor_type

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26331503

Pulled By: nikithamalgifb

fbshipit-source-id: d9188999fee601a8402fdc4d9052dee4e0d529d5
2021-02-09 11:39:57 -08:00
Chester Liu
58eb23378f Clean up usage of torch._six partially (#49785)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49785

Reviewed By: mruberry

Differential Revision: D25963833

Pulled By: bugra

fbshipit-source-id: 11c90d6b8d3f206c9d0a4d8621b773beb10c6ba2
2021-02-08 13:58:34 -08:00
Yanan Cao
b9acfcddeb Support mypy ignore annotation with particular rule specified (#51675)
Summary:
Previously, TorchScript only allowed an ignore-all type-check suppression rule that looks like
```
code code code  # type: ignore
```

But a more common use case is
```
code code code  # type: ignore[specific-rule]
```
This PR allows the more common use case as well.

Fixes https://github.com/pytorch/pytorch/issues/48643
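A sketch of the now-accepted comment form (the suppressed rule name here is illustrative):

```
import torch

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    # the bracketed ignore previously failed to parse in TorchScript
    y = x.relu()  # type: ignore[no-any-return]
    return y
```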

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51675

Reviewed By: ansley

Differential Revision: D26304870

Pulled By: gmagogsfm

fbshipit-source-id: 0ac9ee34f0219c86e428318a69484d5aa3ec433f
2021-02-08 11:21:47 -08:00
nikithamalgi
fa70168804 Add metacompile of Ternary if (#51789)
Summary:
Fixes issue: https://github.com/pytorch/pytorch/issues/49728
========
The ternary if operation fails in TorchScript when the condition variable is annotated as `Final`.
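A sketch of the pattern this fixes (assuming `typing.Final` module attributes, as TorchScript supports):

```
import torch
from typing import Final

class M(torch.nn.Module):
    use_relu: Final[bool]

    def __init__(self):
        super().__init__()
        self.use_relu = True

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # with a Final condition, only the selected arm needs to compile
        return x.relu() if self.use_relu else x

scripted = torch.jit.script(M())
print(scripted(torch.tensor([-1.0, 2.0])))  # tensor([0., 2.])
```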

Tests:
=======
pytest -k test_ternary_static_if test/test_jit.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51789

Reviewed By: gmagogsfm

Differential Revision: D26278969

Pulled By: nikithamalgifb

fbshipit-source-id: 27d1383290211503188428fb2e8b7749f59ba16e
2021-02-06 10:14:30 -08:00
jiej
4d703d040b Linear autodiff revert revert (#51613)
Summary:
Patches PR https://github.com/pytorch/pytorch/issues/50856 and rolls back the revert D26105797 (e488e3c443)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51613

Reviewed By: mruberry

Differential Revision: D26253999

Pulled By: ngimel

fbshipit-source-id: a20b1591de06dd277e4cd95542e3291a2f5a252c
2021-02-04 16:32:05 -08:00
nikithamalgi
ecf8166522 Support Union[NoneType, T] as input type (#51605)
Summary:
ghstack-source-id: 32db9661ce0f9441ef7061285bc24967c2808ea6
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51605

Fixes https://github.com/pytorch/pytorch/issues/51582
=========
In Python 3.9+, both Union[T, NoneType] and Union[NoneType, T] are now treated as OptionalType.
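A minimal sketch of the now-accepted annotation order (not from the PR):

```
import torch
from typing import Union

@torch.jit.script
def value_or_zeros(x: Union[None, torch.Tensor]) -> torch.Tensor:
    # Union[None, T] is treated as Optional[T] regardless of argument order
    if x is None:
        return torch.zeros(1)
    return x

print(value_or_zeros(None))  # tensor([0.])
```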

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51606

Test Plan:
====
python test/test_jit.py -v TestJit.test_union_to_optional

Reviewed By: pbelevich

Differential Revision: D26242353

Pulled By: nikithamalgifb

fbshipit-source-id: 0ac441fa1bdf2fb1044e3fe131bee47adda90bbb
2021-02-04 06:25:41 -08:00
Yanan Cao
75ee575671 [Usability] Handle repeated jit.script calls on function gracefully (#51545)
Summary:
Repeated calls on a `class` are not handled, since the `class` compilation process will change soon in https://github.com/pytorch/pytorch/issues/44324
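A minimal sketch of the function case that is now handled gracefully:

```
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    return x + 1

once = torch.jit.script(f)
twice = torch.jit.script(once)  # previously problematic; now a graceful no-op
print(twice is once)            # presumably True: the already-compiled function
```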

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51545

Reviewed By: H-Huang

Differential Revision: D26207010

Pulled By: gmagogsfm

fbshipit-source-id: 5f3f64b0e4b4ab4dbf5c9411d9c143472922a106
2021-02-03 02:09:25 -08:00
Natalia Gimelshein
26f9ac98e5 Revert D26105797: [pytorch][PR] Exposing linear layer to fuser
Test Plan: revert-hammer

Differential Revision:
D26105797 (e488e3c443)

Original commit changeset: 6f7cedb9f6e3

fbshipit-source-id: f0858cefed76d726e9dba61e51e1eaf2af4c99c5
2021-02-02 17:39:17 -08:00
jiej
e488e3c443 Exposing linear layer to fuser (#50856)
Summary:
1. Enable linear in autodiff.
2. Remove control flow in Python for linear.
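A minimal sketch of the affected call path (not from the PR):

```
import torch
import torch.nn.functional as F

@torch.jit.script
def lin(x: torch.Tensor, w: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # emits a single linear op rather than Python-side control flow,
    # so autodiff and the fuser can treat it as one node
    return F.linear(x, w, b)
```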

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50856

Reviewed By: pbelevich

Differential Revision: D26105797

Pulled By: eellison

fbshipit-source-id: 6f7cedb9f6e3e46daa24223d2a6080880498deb4
2021-02-02 15:39:01 -08:00
Meghan Lele
751c30038f [JIT] Properly convert Python strings implicitly to device (#51340)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51340

**Summary**
`toIValue` assumes that any value passed for an argument of type
`torch.device` is a valid device object, even when it is not. This can
lead to device type arguments of functions being assigned incorrect
values (see #51098).

This commit adds an explicit check, using `THPDevice_Check`, that the
passed-in object is indeed a `torch.device`, and only then converts it to
an `IValue`. Since implicit conversion from strings to devices is
generally allowed, if `THPDevice_Check` fails, the object is assumed to be
a string and an `IValue` containing a `c10::Device` constructed from that
string is returned.
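A sketch of the resulting behavior (the invalid string is illustrative):

```
import torch

@torch.jit.script
def move(x: torch.Tensor, d: torch.device) -> torch.Tensor:
    return x.to(d)

move(torch.ones(1), "cpu")    # valid string, implicitly converted to a device
move(torch.ones(1), "bogus")  # now raises instead of yielding a wrong device
```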

**Test Plan**
This commit adds a unit test to `test_jit.py` to check that invalid
strings passed as devices are no longer silently accepted.

**Fixes**
This commit fixes #51098.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26187190

Pulled By: SplitInfinity

fbshipit-source-id: 48c990203431da30f9f09381cbec8218d763325b
2021-02-02 10:57:56 -08:00
Ansley Ussery
09e48dbd33 Handle error during dict expansion (#51374)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51374

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D26155995

Pulled By: ansley

fbshipit-source-id: 04e924cb641565341c570c6cf5e5eec42e4f9c8b
2021-01-29 18:46:10 -08:00
anjali411
f9f22c8b5c Add serialization logic for complex numbers (#51287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51287

This reverts commit dfdb1547b9.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26131165

Pulled By: anjali411

fbshipit-source-id: 047167fac594ddb670c5e169446e90e74991679a
2021-01-28 17:25:35 -08:00
Nikitha Malgi
b955da3310 Adding correct error message for for..else (#51258)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51040

========
Adds an error message for the for..else statement in TorchScript
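A sketch of the construct that now gets a dedicated error (exact message per the PR):

```
import torch
from typing import List

def search(xs: List[int]) -> int:
    for x in xs:
        if x == 0:
            break
    else:
        return -1
    return 0

# scripting now reports that for...else is unsupported,
# rather than failing with an unrelated error:
torch.jit.script(search)
```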

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51258

Test Plan:
=====
pytest -k test_for_else test/test_jit.py

Reviewed By: pbelevich

Differential Revision: D26125148

Pulled By: nikithamalgifb

fbshipit-source-id: 82b67ab1c68e29312162ff5d73b82c8c0c9553df
2021-01-28 08:17:31 -08:00
Lillian Johnson
3b6f30824c OpInfo JIT op.output_func handling support (#50775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50775

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25964541

Pulled By: Lilyjjo

fbshipit-source-id: 8cf1ee9191d526cc46ae283f38c2d64bd60afdb2
2021-01-27 15:04:23 -08:00
Nikita Shulga
00adc7b07f Fix more JIT tests under Python-3.9 (#51182)
Summary:
Mostly replaces `global Foo` with `make_global(Foo)`.
The only real fix is generating the Subscript annotation, which is a follow-up to https://github.com/pytorch/pytorch/pull/48676

Fixes https://github.com/pytorch/pytorch/issues/49617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51182

Reviewed By: gmagogsfm

Differential Revision: D26095244

Pulled By: malfet

fbshipit-source-id: 0e043d9a2cf43fff71dfbb341f708cd7af87c39a
2021-01-27 10:57:03 -08:00
Thomas Viehmann
ac0a3cc5fd Merge CompilationUnit from torch._C and torch.jit (#50614)
Summary:
This simplifies our handling and allows easily passing CompilationUnits from Python to C++-defined functions via PyBind.

Discussed on Slack with SplitInfinity
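A minimal sketch of the merged type in use (assuming the long-standing source-string constructor):

```
import torch

cu = torch.jit.CompilationUnit("""
def add_one(x: int) -> int:
    return x + 1
""")
print(cu.add_one(41))  # 42
# After the merge the Python-side object is the C++ type (or a subclass),
# so it can be passed directly to pybind11-bound C++ functions.
print(isinstance(cu, torch._C.CompilationUnit))
```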

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50614

Reviewed By: anjali411

Differential Revision: D25938005

Pulled By: SplitInfinity

fbshipit-source-id: 94aadf0c063ddfef7ca9ea17bfa998d8e7b367ad
2021-01-25 11:06:40 -08:00
Peter Bell
47f0bda3ef Improve complex support in common_nn test machinery (#50593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50593

There is no equivalent of torch.FloatTensor or torch.cuda.FloatTensor for
complex types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors.

Also found a few places that explicitly cast inputs to floating point types,
which would drop the imaginary component before running the test.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25954050

Pulled By: mruberry

fbshipit-source-id: 1fa8e5af233aa095c839d5e2f860564baaf92aef
2021-01-22 09:44:45 -08:00
Lillian Johnson
3b88e1b0e7 [WIP] JIT Static Hooks: python tests (#49546)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49546

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D25771119

Pulled By: Lilyjjo

fbshipit-source-id: bf8a8e20f790691d3ff58fa9c8d0d9ab3e8322c4
2021-01-20 09:12:53 -08:00
Guilherme Leobas
a9e46f1413 add type annotations to torch.nn.modules.container (#48969)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48969

Reviewed By: mrshenli

Differential Revision: D25728987

Pulled By: walterddr

fbshipit-source-id: 02c3aa2078f4ed6cc6edd90ffe1177d789c328a9
2021-01-19 15:12:17 -08:00
Nikolay Korovaiko
8e60bf9034 add RequiresGradCheck (#50392)
Summary:
This change improves perf by 3-4% on fastrnns.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50392

Reviewed By: izdeby

Differential Revision: D25891392

Pulled By: Krovatkin

fbshipit-source-id: 44d9b6907d3975742c9d77102fe6a85aab2c08c0
2021-01-15 16:50:42 -08:00
Guilherme Leobas
0d981eea6c add type annotations to torch.nn.modules.conv (#49564)
Summary:
closes gh-49563

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49564

Reviewed By: albanD

Differential Revision: D25917441

Pulled By: walterddr

fbshipit-source-id: 491dc06cfc1bbf694dfd9ccefca4f55488a931b2
2021-01-15 11:16:11 -08:00
Guilherme Leobas
374951d102 Add type annotations to torch.nn.modules.padding (#49494)
Summary:
Closes gh-49492

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49494

Reviewed By: mruberry

Differential Revision: D25723837

Pulled By: walterddr

fbshipit-source-id: 92af0100f6d9e2bb25b259f5a7fe9d449ffb6443
2021-01-12 15:34:28 -08:00
Elias Ellison
a69f008cb7 [JIT] Factor out peephole to own test file (#50220)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50220

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856263

Pulled By: eellison

fbshipit-source-id: f3d918d860e64e788e0bb9b9cb85125660f834c6
2021-01-12 11:39:06 -08:00
Elias Ellison
035229c945 [JIT] Frozen Graph Conv-BN fusion (#50074)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50074

Adds Conv-BN fusion for models that have been frozen. I haven't explicitly tested perf yet but it should be equivalent to the results from Chillee's PR [here](https://github.com/pytorch/pytorch/pull/47657) and [here](https://github.com/pytorch/pytorch/pull/47657#issuecomment-725752765). Click on the PR for details, but it's a good speedup.

In a later PR in the stack I plan on turning this optimization on by default as part of `torch.jit.freeze`. I will also, in a later PR, add a peephole so that conv->batchnorm2d doesn't generate a conditional checking the number of dims.

Zino was working on freezing and left the team, so I'm not really sure who should be reviewing this, but I don't care too much so long as I get a review.
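A minimal sketch of the pattern the pass targets (freezing shown; enabling the fusion by default comes later in the stack):

```
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.BatchNorm2d(8),
).eval()

frozen = torch.jit.freeze(torch.jit.script(model))
# Once the fusion runs, the frozen graph should contain a single
# convolution with folded batch-norm parameters instead of conv + batch_norm.
print(frozen.graph)
```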

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856261

Pulled By: eellison

fbshipit-source-id: da58c4ad97506a09a5c3a15e41aa92bdd7e9a197
2021-01-12 11:37:32 -08:00
Thomas Viehmann
ea087e2d92 JIT: guard DifferentiableGraph node (#49433)
Summary:
This adds guarding for DifferentiableGraph nodes, and also bails out on
required gradients for the CUDA fuser.

Fixes https://github.com/pytorch/pytorch/issues/49299

I still need to look into a handful of failing tests, but maybe this can serve as a basis for discussion.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49433

Reviewed By: ngimel

Differential Revision: D25681374

Pulled By: Krovatkin

fbshipit-source-id: 8e7be53a335c845560436c0cceeb5e154c9cf296
2021-01-08 20:01:27 -08:00
Richard Barnes
ec6d29d6fa Drop unused imports from test (#49973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49973

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727350

fbshipit-source-id: 237ec4edd85788de920663719173ebec7ddbae1c
2021-01-07 12:09:38 -08:00
Nikitha Malgi
12b73fdbbf Adding JIT support for cuda streams and events (#48020)
Summary:
=======

This PR addresses the following:

 * Adds JIT support for CUDA Streams
 * Adds JIT support for CUDA Events
 * Adds JIT support for CUDA Stream context manager
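A hedged sketch of scripted stream/event usage (names mirror the eager API; the exact scripted signatures are per this PR):

```
import torch

@torch.jit.script
def doubled_on_stream(x: torch.Tensor) -> torch.Tensor:
    s = torch.cuda.Stream()
    e = torch.cuda.Event()
    with torch.cuda.stream(s):
        y = x * 2.0
        e.record(s)
    e.synchronize()
    return y
```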

Testing:
======

python test/test_jit.py -v TestCUDA

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48020

Reviewed By: navahgar

Differential Revision: D25725749

Pulled By: nikithamalgifb

fbshipit-source-id: b0addeb49630f8f0c430ed7badeca43bb9d2535c
2020-12-29 20:24:57 -08:00
peter
8d7338e820 Enable tests using named temp files on Windows (#49640)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49640

Reviewed By: ngimel

Differential Revision: D25681548

Pulled By: malfet

fbshipit-source-id: 0e2b25817c98d749920cb2b4079033a2ee8c1456
2020-12-29 09:57:35 -08:00
Ansley Ussery
58fe67967c Support the in operator with str (#47057)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47057
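A minimal sketch of the newly supported operator (not from the PR):

```
import torch

@torch.jit.script
def contains(haystack: str, needle: str) -> bool:
    return needle in haystack

print(contains("torchscript", "script"))  # True
```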

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24863370

Pulled By: ansley

fbshipit-source-id: 5d17165b06052f0a4676537c5f6757083185a591
2020-12-28 10:26:24 -08:00
Erjia Guan
b80a36614f Fix return type Any for Ternary ops (#49165)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49165

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D25463694

Pulled By: ejguan

fbshipit-source-id: 5cf907e8de6eeb0171d61175a60fac9812b76c6c
2020-12-21 10:12:41 -08:00
Peter Bell
5c25f8faf3 stft: Change require_complex warning to an error (#49022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

**BC-breaking note**:

Previously torch.stft took an optional `return_complex` parameter that indicated whether the output would be a floating point tensor or a complex tensor. By default `return_complex` was False to be consistent with the previous behavior of torch.stft. This PR changes this behavior so `return_complex` is a required argument.

**PR Summary**:

* **#49022 stft: Change require_complex warning to an error**
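A sketch of the BC-breaking change from the caller's side:

```
import torch

x = torch.randn(800)
spec = torch.stft(x, n_fft=64, return_complex=True)  # complex tensor output
# Omitting return_complex, previously only a warning, now raises:
# torch.stft(x, n_fft=64)
```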

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25658906

Pulled By: mruberry

fbshipit-source-id: 11932d1102e93f8c7bd3d2d0b2a607fd5036ec5e
2020-12-20 14:48:25 -08:00
Nikitha Malgi
e17f0fd676 Adding support for bitwise augassignment operators (#44621)
Summary:
========
Fixes #42915

This commit adds support for bitwise augmented-assignment shorthands in TorchScript, i.e. `|=`, `&=`, `^=`, `<<=`, `>>=`, `**=`
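A minimal sketch of the shorthands in scripted code (values traced in the comments):

```
import torch

@torch.jit.script
def mix(x: int) -> int:
    x |= 4   # 3 -> 7
    x &= 7   # 7 -> 7
    x ^= 1   # 7 -> 6
    x <<= 2  # 6 -> 24
    x >>= 1  # 24 -> 12
    return x

print(mix(3))  # 12
```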

Testing:
======
This commit also adds test for the above fix in test_jit.py
The test can be invoked by
pytest -k augassign test/test_jit.py


Pull Request resolved: https://github.com/pytorch/pytorch/pull/44621

Reviewed By: mrshenli

Differential Revision: D23906344

Pulled By: nikithamalgifb

fbshipit-source-id: 4c93a7430a625f698b163609ccec15e51417d564
2020-12-18 12:07:54 -08:00
albanD
ccd646696b Fix Module backward hooks for all Tensor inputs/outputs (#46163)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/598

This is BC-breaking, as we now explicitly don't call the hook when there are no Tensors at the top level of the output.
This feature was not working anyway, as the returned grad_input/grad_output were wrong (they did not respect the output structure, and the inputs were wrong for multi-Node Modules).

This is also BC-breaking, as we now report the correct gradients for `nn.Module`s that contain multiple autograd `Node`s, whereas we used to return incorrect results.
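A minimal sketch of the corrected hook behavior (assuming the `register_full_backward_hook` API that landed with this work):

```
import torch

def hook(module, grad_input, grad_output):
    # grad_output now matches the structure of the module's Tensor outputs
    print("grad_output shapes:", [g.shape for g in grad_output])

m = torch.nn.Linear(3, 2)
m.register_full_backward_hook(hook)
m(torch.randn(4, 3)).sum().backward()  # prints [torch.Size([4, 2])]
```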

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46163

Reviewed By: ailzhang, mruberry

Differential Revision: D24894180

Pulled By: albanD

fbshipit-source-id: e1b5d193d2818eb2f51e2a2722c7405c8bd13c2b
2020-12-18 09:04:36 -08:00
Ansley Ussery
d17dc37112 Add dict comprehension (#47774)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47774
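A minimal sketch of the newly supported syntax (not from the PR):

```
import torch
from typing import Dict, List

@torch.jit.script
def squares(xs: List[int]) -> Dict[int, int]:
    return {x: x * x for x in xs}

print(squares([1, 2, 3]))  # {1: 1, 2: 4, 3: 9}
```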

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25615464

Pulled By: ansley

fbshipit-source-id: 10bba6f70e812fa580cbbbf097e93de7142484cc
2020-12-17 15:25:30 -08:00
Mike Ruberry
f5b68e74d7 Revert D25574962: [pytorch][PR] Updated derivative rules for complex svd and pinverse
Test Plan: revert-hammer

Differential Revision:
D25574962 (9955355853)

Original commit changeset: 832b61303e88

fbshipit-source-id: d73f77f3e51b0f535dad6d21c5bebf8d41a6bfbd
2020-12-17 00:59:43 -08:00
Nikitha Malgi
26e076d19e Adding fix for invalid annotation types for dictionary (#49425)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49362

**Summary:**
This PR fixes the issue where invalid annotation types are used for a dictionary.
An error message flagging the annotation as unsupported is now generated for all invalid annotations.
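A hypothetical example of the kind of annotation the check now rejects cleanly (the exact rejected forms are per `test_dict_invalid_annotations`):

```
import torch
from typing import Dict

@torch.jit.script  # now fails with a clear "unsupported annotation" message
def f() -> Dict[int, int]:
    d: Dict = {}  # Dict without key/value types
    return d
```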

**Test Case**:
python test/test_jit.py TestJit.test_dict_invalid_annotations

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49425

Reviewed By: navahgar

Differential Revision: D25601578

Pulled By: nikithamalgifb

fbshipit-source-id: 91633e3d0891bdcb5402f044a74d02fe352ecd6f
2020-12-17 00:28:29 -08:00
Mike Ruberry
47c65f8223 Revert D25569586: stft: Change require_complex warning to an error
Test Plan: revert-hammer

Differential Revision:
D25569586 (5874925b46)

Original commit changeset: 09608088f540

fbshipit-source-id: 6a5953b327a4a2465b046e29bb007a0c5f4cf14a
2020-12-16 16:21:52 -08:00
Peter Bell
5874925b46 stft: Change require_complex warning to an error (#49022)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25569586

Pulled By: mruberry

fbshipit-source-id: 09608088f540c2c3fc70465f6a23f2aec5f24f85
2020-12-16 12:47:56 -08:00
Ivan Yashchuk
9955355853 Updated derivative rules for complex svd and pinverse (#47761)
Summary:
Updated `svd_backward` to work correctly for complex-valued inputs.
Updated `common_methods_invocations.py` to take dtype, device arguments for input construction.
Removed `test_pinverse` from `test_autograd.py`; it is replaced by entries in `common_methods_invocations.py`.
Added `svd` and `pinverse` to the list of complex tests.

References for complex-valued SVD differentiation:

- https://giggleliu.github.io/2019/04/02/einsumbp.html
- https://arxiv.org/abs/1909.02659

The derived rules assume gauge invariance of loss functions, so the result would not be correct for loss functions that are not gauge invariant.
https://re-ra.xyz/Gauge-Problem-in-Automatic-Differentiation/
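A small sketch of a gauge-invariant use (singular values only), where the new rule applies:

```
import torch

a = torch.randn(3, 3, dtype=torch.complex128, requires_grad=True)
u, s, v = torch.svd(a)
# The loss depends only on the singular values, not on the phase
# conventions of u and v, so it is gauge invariant.
s.sum().backward()
print(a.grad.dtype)  # torch.complex128
```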

The same rule is implemented in Tensorflow and [BackwardsLinalg.jl](https://github.com/GiggleLiu/BackwardsLinalg.jl).

Ref. https://github.com/pytorch/pytorch/issues/33152

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47761

Reviewed By: izdeby

Differential Revision: D25574962

Pulled By: mruberry

fbshipit-source-id: 832b61303e883ad3a451b84850ccf0f36763a6f6
2020-12-16 12:32:22 -08:00
Chen Lai
717f31d984 Remove unused reconstruct_scopes function (#48822)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48822

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D25325012

Pulled By: cccclai

fbshipit-source-id: 86ea4c0b2926257c0f82aa05cbcd83278b1b67f7
2020-12-11 23:43:36 -08:00
Tugsbayasgalan Manlaibaatar
42c78ed745 Tuple Slice with both negative and positive stepped size (#48660)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48660

We previously supported tuple slicing only without a step size; this PR extends the feature to arbitrary step sizes. We do this by manually constructing a new tuple in the IR instead of relying on the TupleSlice prim.
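A minimal sketch of the extended slicing (not from the PR):

```
import torch
from typing import Tuple

@torch.jit.script
def evens(t: Tuple[int, int, int, int]) -> Tuple[int, int]:
    return t[::2]  # arbitrary, including negative, step sizes now work

print(evens((0, 1, 2, 3)))  # (0, 2)
```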

Test Plan:
python tests

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D25359336

fbshipit-source-id: 28cde536f28dd8a00607814b2900765e177f0ed7
2020-12-11 11:00:38 -08:00
Peter Bell
533c837833 Register OpInfos for torch.fft transforms (#48427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48427

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25266218

Pulled By: mruberry

fbshipit-source-id: 406e7ed5956bc7445daf8c027c9b4d2c8ff88fa1
2020-12-07 17:19:29 -08:00