Commit Graph

19 Commits

Xuehai Pan
6ff1e43a41 [BE][Easy][13/19] enforce style for empty lines in import segments in test/j*/ (#129764)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129764
Approved by: https://github.com/ezyang
2024-08-01 12:13:42 +00:00
Yuanhao Ji
604c9c5601 Enable UFMT on all of test/jit (#123623)
Partially addresses #123062

Ran lintrunner on:

- `test/jit`

with command:

```bash
lintrunner -a --take UFMT --all-files
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123623
Approved by: https://github.com/ezyang
2024-04-11 23:45:05 +00:00
Kurt Mohler
ffce2492af Remove set_default_dtype calls from jit and ops tests (#105072)
Part of #68972

This only attempts to avoid setting the default dtype in `test_jit.py` and `test_ops.py`. Other tests, like `test_nn.py`, will be addressed in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105072
Approved by: https://github.com/ezyang
2023-07-15 03:18:33 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrites calls to the Python built-in `super()`. Only non-semantic changes are applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change semantics are kept unchanged, e.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)
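For reference, the zero-argument form is equivalent to the explicit two-argument form when called inside a class body; a minimal self-contained illustration (the `Base`/`OldStyle`/`NewStyle` names are hypothetical, for demonstration only):

```python
class Base:
    def __init__(self):
        self.initialized = True

class OldStyle(Base):
    def __init__(self):
        # Explicit two-argument form, as found before the rewrite.
        super(OldStyle, self).__init__()

class NewStyle(Base):
    def __init__(self):
        # Zero-argument form after the rewrite; Python 3 supplies
        # the enclosing class and the instance automatically.
        super().__init__()

assert OldStyle().initialized and NewStyle().initialized
```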

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
Brian Hirsh
ea5209c9fd functionalization: add native fill() op
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76084

Approved by: https://github.com/ezyang
2022-04-25 21:34:16 +00:00
Jane Xu
09c7771e9c Set test owners for jit tests (#66808)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66808

Reviewed By: mrshenli

Differential Revision: D31761414

Pulled By: janeyx99

fbshipit-source-id: baf8c49ff9c4bcda7b0ea0f6aafd26380586e72d
2021-10-25 07:51:10 -07:00
Elias Ellison
b059f035be Fix bug preventing optimization from firing (#65573)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65573

When we remove mutation on
```python
x = [0, 1, 3, 4]
x[-2] = 4
```
we have a safety check that the normalized index is in bounds of the list. In practice this should always be the case; otherwise there would be a runtime error. Within that check (not within the actual adjustment) we were using the wrong input length, preventing the optimization from firing.
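The index normalization the check relies on can be sketched in Python (the helper name is hypothetical; the actual check lives in the C++ mutation-removal pass):

```python
def normalize_index(idx: int, list_len: int) -> int:
    """Map a possibly-negative list index to its non-negative form,
    mirroring the in-bounds check the mutation-removal pass performs."""
    norm = idx + list_len if idx < 0 else idx
    if not (0 <= norm < list_len):
        raise IndexError(f"index {idx} out of bounds for length {list_len}")
    return norm

# x = [0, 1, 3, 4]; the write x[-2] = 4 targets index 2
assert normalize_index(-2, 4) == 2
```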

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D31797469

Pulled By: eellison

fbshipit-source-id: 02a1686b9f6016eb5aeb87ed342c043c203dcd0e
2021-10-20 16:13:01 -07:00
Michael Suo
90b42452e2 Revert D31732417: Fix bug preventing optimization from firing
Test Plan: revert-hammer

Differential Revision:
D31732417 (853fc25fb0)

Original commit changeset: dd734254c021

fbshipit-source-id: 3da0663dac5b5d2117b3d7abdbcd45d96f98de33
2021-10-19 20:07:02 -07:00
Elias Ellison
853fc25fb0 Fix bug preventing optimization from firing (#65573)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65573

When we remove mutation on
```python
x = [0, 1, 3, 4]
x[-2] = 4
```
we have a safety check that the normalized index is in bounds of the list. In practice this should always be the case; otherwise there would be a runtime error. Within that check (not within the actual adjustment) we were using the wrong input length, preventing the optimization from firing.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D31732417

Pulled By: eellison

fbshipit-source-id: dd734254c0212ca459c1c135da262974de5299be
2021-10-19 16:41:24 -07:00
Elias Ellison
fff83f3f66 Add handling of list write to remove mutation (#62904)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62904

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30168493

Pulled By: eellison

fbshipit-source-id: 3b25982b235938cc7439dd3a5236dfce68254c05
2021-08-09 08:56:06 -07:00
Peng Wu
1e09880b45 Add support for list insertion for mutation removal (#54271)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54271

Test Plan:
python test/test_jit.py TestRemoveMutation.test_lists_insert

Imported from OSS
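Conceptually, the pass turns an in-place `insert` into a functional construction of a new list; a Python sketch of the equivalence (illustrative only, not the actual C++ pass):

```python
def functional_insert(lst, idx, value):
    """Return a new list equal to lst with value inserted at idx,
    without mutating lst -- the functional form the pass rewrites to."""
    i = idx if idx >= 0 else max(idx + len(lst), 0)
    i = min(i, len(lst))  # list.insert clamps out-of-range indices
    return lst[:i] + [value] + lst[i:]

orig = [1, 2, 4]
assert functional_insert(orig, 2, 3) == [1, 2, 3, 4]
assert orig == [1, 2, 4]  # the original list is untouched
```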

Reviewed By: bertmaher

Differential Revision: D27180031

fbshipit-source-id: ba4388b6688cf83caf70901934f4adacd22d8ca6
2021-03-22 14:27:24 -07:00
Sam Estep
fee263595c Remove trailing whitespace introduced in #52175 (#53762)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53762

Test Plan: CI.

Reviewed By: seemethere

Differential Revision: D26961133

Pulled By: samestep

fbshipit-source-id: 972ea480baa3f34b65327abdf7e8bfdf30788572
2021-03-10 15:35:04 -08:00
Elias Ellison
5563248b58 [JIT] [Remove Mutation] Add handling of normal_ (#52175)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52175

Reviewed By: mrshenli

Differential Revision: D26919193

Pulled By: eellison

fbshipit-source-id: d036cbc7b42377f88a3d381e4932a710b8d22a04
2021-03-10 14:28:09 -08:00
jiej
4d703d040b Linear autodiff revert revert (#51613)
Summary:
Patch PR https://github.com/pytorch/pytorch/issues/50856 and roll back the revert D26105797 (e488e3c443)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51613

Reviewed By: mruberry

Differential Revision: D26253999

Pulled By: ngimel

fbshipit-source-id: a20b1591de06dd277e4cd95542e3291a2f5a252c
2021-02-04 16:32:05 -08:00
Natalia Gimelshein
26f9ac98e5 Revert D26105797: [pytorch][PR] Exposing linear layer to fuser
Test Plan: revert-hammer

Differential Revision:
D26105797 (e488e3c443)

Original commit changeset: 6f7cedb9f6e3

fbshipit-source-id: f0858cefed76d726e9dba61e51e1eaf2af4c99c5
2021-02-02 17:39:17 -08:00
jiej
e488e3c443 Exposing linear layer to fuser (#50856)
Summary:
1. Enable linear in autodiff.
2. Remove control flow in Python for linear.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50856

Reviewed By: pbelevich

Differential Revision: D26105797

Pulled By: eellison

fbshipit-source-id: 6f7cedb9f6e3e46daa24223d2a6080880498deb4
2021-02-02 15:39:01 -08:00
Elias Ellison
ae286d81e0 [JIT] improve alias analysis for list constructs (#39111)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39111

In our present alias analysis, we consider any Value that enters another container as entering the heap, and thus aliasing all other heap values of the same type. There are a number of advantages to this approach:
- it is not too hard to maintain the AliasDb implementation
- it is much easier from an op-schema perspective - there are many composite list ops registered internally and externally that would be tricky to register and get right if we did something more complicated
- it limits the size of the AliasDb, because a container of size 10 only contains a single memory DAG element instead of 10 elements

The downside is that we are unable to handle the simple and extremely common case of a list of tensors being used in an ATen op.

In an example like:

```python
def foo(input):
    x = torch.tensor([1, 2, 3, 4])
    y = [x, x]
    input.add_(1)
    return torch.cat(y)
```

we will consider x to be written to: any write to any wildcard element (an element that enters a tuple, or an element taken from a list) marks x as written to. This limits our ability to create a functional subset and fuse graphs - as a result, 4 of the TorchVision classification models could not be functionalized.
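The conservative behavior described above can be modeled with a toy alias set (hypothetical names; this is a sketch of the idea, not the real AliasDb API):

```python
class ToyAliasDb:
    """Toy model of the conservative analysis: every value that enters a
    container joins a single 'wildcard' set, and a write to any wildcard
    marks all of them as written to."""
    def __init__(self):
        self.wildcards = set()
        self.written = set()

    def enters_container(self, value):
        self.wildcards.add(value)

    def write(self, value):
        if value in self.wildcards:
            # Conservative: a write to one wildcard taints all of them.
            self.written |= self.wildcards
        else:
            self.written.add(value)

db = ToyAliasDb()
db.enters_container("x")  # x enters the list y = [x, x]
db.enters_container("z")  # some unrelated value that entered a tuple
db.write("z")
assert "x" in db.written  # x is considered written to, blocking fusion
```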

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D23828003

Pulled By: eellison

fbshipit-source-id: 9109fcb6f2ca20ca897cae71683530285da9d537
2020-09-22 09:38:59 -07:00
Elias Ellison
de400fa5ac [JIT] handle specially mapped ops (#41503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41503

Fix for https://github.com/pytorch/pytorch/issues/41192

We can map `fill_` and `zero_` to their functional equivalents `full_like` and `zeros_like`.
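The mapping amounts to a name substitution over IR nodes; a hypothetical sketch over a toy node representation (not the real JIT IR API):

```python
# Toy rewrite table mirroring the commit: in-place op -> functional twin.
FUNCTIONAL_TWINS = {
    "aten::fill_": "aten::full_like",
    "aten::zero_": "aten::zeros_like",
}

def rewrite_node(op, args):
    """Replace an in-place op with its functional equivalent, returning
    a new (op, args) pair, or the input unchanged if no mapping exists."""
    return (FUNCTIONAL_TWINS.get(op, op), args)

assert rewrite_node("aten::fill_", ("x", 5)) == ("aten::full_like", ("x", 5))
assert rewrite_node("aten::zero_", ("x",)) == ("aten::zeros_like", ("x",))
```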

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D22629269

Pulled By: eellison

fbshipit-source-id: f1c62684dc55682c0b3845022e0461ec77d07179
2020-07-20 12:03:31 -07:00
Elias Ellison
6161730174 [JIT] move remove mutation to its own test file (#41502)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41502

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D22629270

Pulled By: eellison

fbshipit-source-id: fcec6ae4ff8f108164539d67427ef3d72fa07494
2020-07-20 12:03:28 -07:00