Commit Graph

101 Commits

Avik Chaudhuri
2ae71c4598 [user errors] compulsory case names, allow multiple (#110878)
We want to get to a point where most UserErrors link to exportdb examples. This PR makes passing case names non-optional to make this intent clearer and encourage developers who raise UserErrors to make or point to examples that make fixing such errors more obvious for users.

In addition, sometimes there are multiple examples that are relevant to an error. Thus this PR also enables passing multiple case names.
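
As a sketch of the intended call pattern (the error type, message, and case name below are illustrative, and the exact mechanism for passing several case names is an assumption):

```
from torch._dynamo.exc import UserError, UserErrorType

# Library-side usage sketch: a UserError that names an exportdb case so
# the message can point users at a worked example. "cond_operands" is an
# illustrative case name, not necessarily one that exists.
raise UserError(
    UserErrorType.DYNAMIC_CONTROL_FLOW,
    "Dynamic control flow is not supported at the moment.",
    case_name="cond_operands",
)
```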

Retry of #110733 which was reverted due to a landrace.

Differential Revision: [D50087148](https://our.internmc.facebook.com/intern/diff/D50087148/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110878
Approved by: https://github.com/gmagogsfm, https://github.com/tugsbayasgalan
2023-10-10 03:48:07 +00:00
Animesh Jain
58637c4b43 [dynamo] Remove SuperSource (#110475)
The motivation for removing this is already present in the pre-PR comments. Copying it here:

~~~
# NB - SuperSource is a weird one.
# it is our only source with 2 bases, so we use the object
# as the base, rather than the type, since an invocation
# like super(Foo, foo) is represented here, the source object base is more spiritually
# aligned with the instance, rather than the type.
# This whole construction is questionable tho, and we should probably find a way to
# avoid this exception to our otherwise nice source parentage invariant.
~~~

Instead of using super(a, b), we can use `type(b).__mro__[index]`.
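
A minimal sketch of the equivalence this relies on (the class names are illustrative):

```
class Base:
    def greet(self):
        return "base"

class Foo(Base):
    def greet(self):
        # equivalent to super(Foo, self).greet(): walk the MRO of the
        # instance's type explicitly and dispatch to the next class
        return type(self).__mro__[1].greet(self)

assert Foo().greet() == "base"
```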

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110475
Approved by: https://github.com/jansel
2023-10-08 04:45:06 +00:00
Tugsbayasgalan Manlaibaatar
0a5f0b5db3 Support tracing HuggingFace models (#110748)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110748
Approved by: https://github.com/avikchaudhuri
2023-10-07 22:37:28 +00:00
Huy Do
18f0d3af72 Revert "[user errors] compulsory case names, allow multiple (#110733)" (#110783)
This reverts commit 983f6f36db. I have no idea how to revert https://github.com/pytorch/pytorch/pull/110733 with the bot, so I am reverting it manually for now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110783
Approved by: https://github.com/ZainRizvi, https://github.com/kit1980
2023-10-07 07:32:39 +00:00
Avik Chaudhuri
983f6f36db [user errors] compulsory case names, allow multiple (#110733)
We want to get to a point where most `UserError`s link to `exportdb` examples. This PR makes passing case names non-optional to make this intent clearer and encourage developers who raise `UserError`s to make or point to examples that make fixing such errors more obvious for users.

In addition, sometimes there are multiple examples that are relevant to an error. Thus this PR also enables passing multiple case names.

Differential Revision: [D50020465](https://our.internmc.facebook.com/intern/diff/D50020465/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110733
Approved by: https://github.com/zhxchen17
2023-10-07 01:25:12 +00:00
Avik Chaudhuri
416eca9736 export db links for user errors (#110555)
Ideally all `_dynamo.exc.UserError`s should have "case names", i.e., link to examples in `exportdb`.

This PR adds case names to several instances of `_dynamo.exc.UserError`. In particular, looking at coverage based on `UserErrorType`:
* `DYNAMIC_CONTROL_FLOW`, `ANTI_PATTERN`, and `STANDARD_LIBRARY` are fully covered.
* `CONSTRAINT_VIOLATION` and `DYNAMIC_DIM` have no coverage. We don't seem to have any dedicated examples of specifying dynamic shapes in `exportdb` (although they are used in some other examples without explanation, to avoid some specialization that would make such examples moot).
* `INVALID_INPUT` is only partly covered. Frankly this is tedious to cover via examples.

Differential Revision: [D49928518](https://our.internmc.facebook.com/intern/diff/D49928518/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110555
Approved by: https://github.com/angelayi, https://github.com/ydwu4
2023-10-05 05:03:04 +00:00
lezcano
4b1e138162 [dynamo] [easy]Remove InstructionTranslator from within Set (#110521)
I believe this was a leftover from the before times. See if CI agrees.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110521
Approved by: https://github.com/ezyang
2023-10-05 04:01:18 +00:00
Ken Jin
31d635803b [Dynamo] Fx proxy for builtin all with list iterators (#109972)
Fixes https://github.com/pytorch/pytorch/issues/109057.
Fixes https://github.com/pytorch/pytorch/issues/103620.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109972
Approved by: https://github.com/ezyang
2023-10-04 07:59:26 +00:00
Peter Bell
dc794ec32c [dynamo] Trace through builtin abs (#110398)
In Python, `abs(x)` does nothing but delegate to `x.__abs__()`, so we should do
the same in dynamo. This also adds `SymNode.__abs__` so we can trace through
indexing expressions involving `abs`.
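
A minimal sketch of the newly traceable pattern (backend and shapes are illustrative):

```
import torch

@torch.compile(backend="eager", dynamic=True)
def f(x):
    # under dynamic shapes this difference is a SymInt, and abs() on it
    # now traces via SymNode.__abs__ instead of falling back
    d = x.shape[0] - x.shape[1]
    return x.flatten()[: abs(d)]

f(torch.ones(5, 3))
```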

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110398
Approved by: https://github.com/jansel, https://github.com/lezcano
2023-10-03 19:25:37 +00:00
Moritz Hennen
09c598745c Rename torch._C._TensorBase to TensorBase (#109940)
I have gone ahead and implemented the renaming of the type `torch._C._TensorBase` to the non-private class name `TensorBase`.
The changes also leave `torch._C._TensorBase` as an alias to the new type, both in the C++ code (70458768fb/torch/csrc/autograd/python_variable.cpp (L2196-L2197)) and in the corresponding `__init__.pyi.in` file:
70458768fb/torch/_C/__init__.pyi.in (L1522)

Fixes #109438
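
A quick sanity check of the rename (a sketch, assuming a build that includes this change):

```
import torch

# the new public name and the old private alias should be the same type
assert torch._C.TensorBase is torch._C._TensorBase
assert isinstance(torch.ones(1), torch._C.TensorBase)
```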

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109940
Approved by: https://github.com/ezyang
2023-09-25 19:10:22 +00:00
Michael Voznesensky
a902150a1e [Easy] ConstantVariable() -> .create (#109896)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109896
Approved by: https://github.com/ezyang
2023-09-22 22:30:15 +00:00
Edward Z. Yang
518308a740 Trace through pytree API with dynamo. (#108533)
Fix: #107315

This PR enables dynamo to trace through the `pytree` API by inlining its functions. In
order to do so, a few details of `pytree` had to be changed.

In summary, this PR:

- Introduces `TreeSpecVariable` for representing `TreeSpec` instances
- Specializes the `<type>.__bases__` call, returning a `TupleVariable`
- Enables calling the `id` builtin on every variable that implements the
  `as_python_constant` method
- Specializes `ConstantVariable.call_method` for its (un)flatten functions
- Implements `UserDefinedObjectVariable.as_python_constant`
- Modifies `pytree` by:
    - Making `SUPPORTED_NODES` a map of ids (instead of types) to `NodeDef`
    - Removing the `functools.wraps` call, since it can't be inlined
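
A minimal sketch of the kind of pytree code this makes traceable (backend and inputs are illustrative):

```
import torch
import torch.utils._pytree as pytree

@torch.compile(backend="eager")
def f(inputs):
    # flatten/unflatten calls are now inlined rather than graph-broken
    leaves, spec = pytree.tree_flatten(inputs)
    return pytree.tree_unflatten([x * 2 for x in leaves], spec)

f({"a": torch.ones(2), "b": (torch.zeros(3),)})
```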

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108533
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
ghstack dependencies: #109201
2023-09-20 00:04:56 +00:00
Edward Z. Yang
d860313903 Improve can't call type() error message (#109378)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109378
Approved by: https://github.com/Skylion007
2023-09-16 16:12:58 +00:00
weifengpy
9021fb8dac [dynamo] implement custom dict variable as a general solution for HF's ModelOutput class (#105044)
Before the PR, for HF's ModelOutput class, we used dicts.py/DataClassVariable with our own implementation of `__getitem__`, `__setattr__`, and `__setitem__`. There is a risk that the ModelOutput logic may change, since it is user code.

After the PR, we inline `__getitem__`, `__setattr__`, and `__setitem__` using dicts.py/CustomizedDictVariable, so the logic always stays in sync with the user code.

Unit test:
* python test/dynamo/test_model_output.py -k test_HF_bert_model_output

Test on the HF benchmark:
* python benchmarks/dynamo/huggingface.py -d cuda --inference --accuracy --progress --inductor --print-dataframe-summary 2>&1
* all metrics are the same before/after the PR, including pass rate, unique_graphs, graph_breaks, unique_graph_breaks
  * before the PR: P790393916
  * after the PR: P790368991

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105044
Approved by: https://github.com/jansel
2023-09-14 17:15:50 +00:00
Nakul Camsamudram
109ab6a0df Support str() on user defined functions (#108973)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108973
Approved by: https://github.com/anijain2305
2023-09-14 01:32:02 +00:00
Yanbo Liang
9862c7196b [Dynamo] SetVariable supports contains (#108189)
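A minimal sketch of the supported pattern (backend is illustrative):

```
import torch

@torch.compile(backend="eager")
def f(x):
    s = {1, 2, 3}
    # `in` on a traced set no longer falls over
    return x + 1 if 2 in s else x

f(torch.ones(2))
```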

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108189
Approved by: https://github.com/voznesenskym
2023-08-31 04:28:49 +00:00
Jason Ansel
6d61d74545 [dynamo] Fix setattr nn.Module with new attribute (#108098)
This fixes one (but not all) of the issues in DALLE2_pytorch.
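
A minimal sketch of the fixed pattern (module and backend are illustrative):

```
import torch
import torch.nn as nn

class M(nn.Module):
    def forward(self, x):
        # a brand-new attribute assigned during forward; previously this
        # was mishandled by dynamo
        self.scale = 2.0
        return x * self.scale

torch.compile(M(), backend="eager")(torch.ones(3))
```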

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108098
Approved by: https://github.com/eellison
ghstack dependencies: #108096, #108087
2023-08-29 02:58:48 +00:00
Animesh Jain
0156eeb564 [dynamo] bugfix - make module setattr more restrictive (#107828)
A check got missed in https://github.com/pytorch/pytorch/pull/106092

Fixes https://github.com/pytorch/pytorch/issues/107721

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107828
Approved by: https://github.com/eellison
2023-08-24 16:00:29 +00:00
lezcano
207b06d099 [dynamo] Wrap ndarray dunder methods (#107689)
Fixes https://github.com/pytorch/pytorch/issues/107437

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107689
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710, #107711, #107746
2023-08-23 13:55:36 +00:00
lezcano
612c8a8c84 Guard numpy imports in the dynamo folder (#107299)
Fixes https://github.com/pytorch/pytorch/issues/107228
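
The guarded-import pattern in question, as a sketch (the helper is illustrative):

```
# modules must import cleanly even when numpy is not installed
try:
    import numpy as np
except ModuleNotFoundError:
    np = None

def is_ndarray(obj):
    return np is not None and isinstance(obj, np.ndarray)
```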

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107299
Approved by: https://github.com/atalman
2023-08-21 19:07:20 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of it into core.

In the next commits, I do a number of things in this order:
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one but after the merge stopped working and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient), as this is a collaboration and ghstack doesn't allow for shared contributions.
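
A minimal sketch of what this enables (shapes are illustrative):

```
import numpy as np
import torch

@torch.compile
def f(x):
    # NumPy calls inside the compiled region are traced and mapped
    # onto torch operations
    a = x.numpy()
    return torch.from_numpy(np.sin(a) ** 2 + np.cos(a) ** 2)

f(torch.randn(4))
```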

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
Yukio Siraichi
e514386315 Normalize builtin types to dtypes. (#106074)
Fix: #105052
Follow-up: #105588

This PR normalizes builtin Python types (e.g. `int` and `float`) into PyTorch data types
when they are passed as arguments, instead of being used as functions.

In summary, we:

- Implement `BuiltinVariable.as_proxy`, mapping Python types into PyTorch data types
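
A minimal sketch of the now-supported pattern (backend is illustrative; that Python's `float` maps to float64 here is an assumption based on the usual Python-to-dtype mapping):

```
import torch

@torch.compile(backend="eager")
def f(x):
    # the builtin type is passed as an argument, not called
    return x.to(float)

f(torch.ones(2, dtype=torch.int32))
```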

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106074
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-08-01 13:32:19 +00:00
Elias Ellison
76a2ec49d7 [Dynamo] Ignore no-op tensor assignment (#106092)
Ignore no-op `self.attr = self.attr` on NN Modules when attr is a Tensor attribute.

This comes from a [llama pattern](https://github.com/pytorch/benchmark/blob/main/torchbenchmark/models/llama/model.py#L121-L122). Normally, when a setattr occurs on an nn.Module, we turn it into an `UnspecializedNNModuleVariable`, which prevents static buffers and parameters. In a subsequent PR I will add support for cudagraph mutation of buffers/params, which together with this PR takes llama from 1.6x to 4.4x in inference.
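
A sketch of the pattern being ignored (module and names are illustrative):

```
import torch
import torch.nn as nn

class Cache(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("cache_k", torch.zeros(4, 4))

    def forward(self, x):
        # the llama-style no-op self-assignment; dynamo now ignores it
        # instead of demoting the module
        self.cache_k = self.cache_k
        return x + self.cache_k

torch.compile(Cache(), backend="eager")(torch.ones(4, 4))
```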

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106092
Approved by: https://github.com/yanboliang
2023-07-28 17:16:19 +00:00
Wanchao Liang
f139aab2f4 [dynamo] add initial dynamo support for DTensor (#103146)
This PR adds initial dynamo support for DTensor. In particular, it:
- allows DTensor to be passed into a compiled function, and allows fakifying
DTensor during dynamo tracing by turning the inner local tensor into a meta
tensor.
- uses `allow_in_graph` to include `DTensor` and `DTensor.from_local`, representing them as `TorchVariable`
- the DTensor created becomes a normal `TensorVariable`, and it inserts any tensor operations into the output graph just like torch.Tensor
- note that DTensor has a new instance method `redistribute` compared to plain tensors, and we currently special-case it in `TensorVariable`

`from_local` and `redistribute` both accept some non-trivial metadata as arguments (i.e., DeviceMesh, Placement), which fx.Graph does not support. In order to let these two APIs appear in the dynamo-captured graph, we encoded the metadata into a new function (like `functools.partial`), and the new function only accepts prim args (i.e., tensors); then we put a `call_function` with this new function into the graph. This was suggested by @ezyang. The underlying rationale is that the metadata will not change across graph invocations, so it is safe to encode it.

Captured graph:
```
    def forward(self, L_x_ : torch.Tensor):
        l_x_ = L_x_

        # File: /scratch/wanchaol/work/pytorch/test/distributed/_tensor/test_dtensor.py:685, code: dt = DTensor.from_local(x, mesh, [Shard(0)], run_check=False)
        prim_from_local = torch__dynamo_variables_torch_prim_from_local(l_x_, run_check = False);  l_x_ = None

        # File: /scratch/wanchaol/work/pytorch/test/distributed/_tensor/test_dtensor.py:686, code: return dt.redistribute(mesh, [Replicate()]).to_local() + 2
        prim_redistribute = torch__dynamo_variables_tensor_prim_redistribute(prim_from_local);  prim_from_local = None
        to_local = prim_redistribute.to_local();  prim_redistribute = None
        add = to_local + 2;  to_local = None
        return (add,)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103146
Approved by: https://github.com/voznesenskym
2023-07-19 16:01:12 +00:00
Michael Voznesensky
a6758cb304 Revert "Revert "SetVariable in dynamo (#103205)"" + Fix for improved graph breaks (#105345)
This reverts commit 94b3f9f646.

Fix

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105345
Approved by: https://github.com/atalman
2023-07-17 23:21:30 +00:00
PyTorch MergeBot
94b3f9f646 Revert "SetVariable in dynamo (#103205)"
This reverts commit 82fb5edfc7.

Reverted https://github.com/pytorch/pytorch/pull/103205 on behalf of https://github.com/atalman due to Failing cuda11.8-py3.10-gcc7-sm86 / test (inductor_torchbench_dynamic) with CUDA oom ([comment](https://github.com/pytorch/pytorch/pull/103205#issuecomment-1638115073))
2023-07-17 13:13:47 +00:00
Michael Voznesensky
82fb5edfc7 SetVariable in dynamo (#103205)
Set initial
Fixes https://github.com/pytorch/pytorch/issues/94738
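
A minimal sketch of set construction and mutation under dynamo (backend is illustrative):

```
import torch

@torch.compile(backend="eager")
def f(x):
    s = {1, 2}
    s.add(3)
    return x + len(s)

f(torch.ones(2))
```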

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103205
Approved by: https://github.com/jansel
2023-07-15 02:25:31 +00:00
Animesh Jain
9647a251cb [dynamo] Dataclass variables with default field (#104840)
The main complexity comes from the `__init__` function of Dataclass variables, which looks something like this:

```
[2023-07-10 05:01:29,548] torch._dynamo.symbolic_convert: [DEBUG] INLINING <code object __init__ at 0x7f7015154450, file "<string>", line 2>
  3           0 LOAD_FAST                1 (b)
              2 LOAD_FAST                0 (self)
              4 STORE_ATTR               0 (b)

  4           6 LOAD_FAST                2 (named_tensors)
              8 LOAD_DEREF               0 (_HAS_DEFAULT_FACTORY)
             10 IS_OP                    0
             12 POP_JUMP_IF_FALSE       20
             14 LOAD_DEREF               1 (_dflt_named_tensors)
             16 CALL_FUNCTION            0
             18 JUMP_FORWARD             2 (to 22)
        >>   20 LOAD_FAST                2 (named_tensors)
        >>   22 LOAD_FAST                0 (self)
             24 STORE_ATTR               1 (named_tensors)
             26 LOAD_CONST               0 (None)
             28 RETURN_VALUE
```

There are multiple issues:
* The VariableBuilder call in functions.py was wrong. We were calling *options as args.
* We were not setting a source while tracking the new object. This led to no source for the Dataclass variable, which has some new variables in its closures, as seen in the above bytecode.
* There is an IS_OP in the above bytecode, which brings more cases.
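
A sketch mirroring the bytecode above: a dataclass `__init__` with a default-factory field (the names are illustrative):

```
import dataclasses
import torch

@dataclasses.dataclass
class Pack:
    b: int
    named_tensors: dict = dataclasses.field(default_factory=dict)

@torch.compile(backend="eager")
def f(x):
    # construction goes through the generated __init__ shown above
    p = Pack(b=2)
    return x * p.b

f(torch.ones(3))
```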

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104840
Approved by: https://github.com/jansel
2023-07-13 01:25:57 +00:00
Michael Lazos
86680a6c0b [dynamo] handle calls to typing.cast (#104799)
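A minimal sketch of the handled call (backend is illustrative):

```
import typing
import torch

@torch.compile(backend="eager")
def f(x):
    # typing.cast is a runtime no-op; dynamo now traces through it
    y = typing.cast(torch.Tensor, x)
    return y + 1

f(torch.ones(2))
```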

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104799
Approved by: https://github.com/jansel
2023-07-10 21:05:17 +00:00
Andy Rock
fb1ad02833 Support bit shifting SymInts (#104318)
Fixes #104228.
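
A hedged sketch of the supported operation (backend is illustrative):

```
import torch

@torch.compile(backend="eager", dynamic=True)
def f(x):
    # under dynamic shapes, x.shape[0] is a SymInt; the shift stays
    # symbolic instead of erroring
    return x.new_ones(x.shape[0] << 1)

f(torch.ones(3))
```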

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104318
Approved by: https://github.com/ezyang
2023-07-05 18:35:57 +00:00
Animesh Jain
2bb83cd45c [dynamo][ac] Minor refactor for better code organization and a bugfix (#104276)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104276
Approved by: https://github.com/zou3519
2023-06-29 12:57:59 +00:00
cdzhan
c06bb82ba1 Fix specialization when you pass an unspec int into slicing on a Python list (#104142)
Fixes #103545
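
A hedged sketch of the fixed pattern (whether `n` is actually treated as unspecialized depends on configuration; shapes are illustrative):

```
import torch

def f(lst, n):
    # slicing a Python list with an int that may be unspecialized
    return torch.stack(lst[:n]).sum()

opt_f = torch.compile(f, backend="eager")
opt_f([torch.ones(2)] * 4, 2)
```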

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104142
Approved by: https://github.com/malfet, https://github.com/jansel
2023-06-28 13:13:07 +00:00
Tugsbayasgalan Manlaibaatar
d4b85f3031 Support params/buffers inside cond and map (#102310)
With #102022, params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.

Differential Revision: [D46746202](https://our.internmc.facebook.com/intern/diff/D46746202)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri, https://github.com/zou3519
2023-06-20 05:33:10 +00:00
Yanbo Liang
1be1f5090e [Dynamo] Fix broken NNModule comparison (#103812)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103812
Approved by: https://github.com/msaroufim
2023-06-20 04:01:24 +00:00
Mengwei Liu
96c23fe212 [dynamo][numpy] Add support for builtin functions (#103457)
In order to be able to run stuff like:
```
def f(x):
    a = x.numpy()
    return a + a
```
This PR adds a branch in `BuiltinVariable` to handle the `NumpyNdarrayVariable` case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103457
Approved by: https://github.com/ezyang
2023-06-15 09:18:45 +00:00
PyTorch MergeBot
2087d32811 Revert "Support params/buffers inside cond and map (#102310)"
This reverts commit 766f236bad.

Reverted https://github.com/pytorch/pytorch/pull/102310 on behalf of https://github.com/huydhn due to The test is failing in trunk 766f236bad ([comment](https://github.com/pytorch/pytorch/pull/102310#issuecomment-1592159710))
2023-06-15 00:29:20 +00:00
Tugsbayasgalan Manlaibaatar
766f236bad Support params/buffers inside cond and map (#102310)
With #102022, params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri, https://github.com/zou3519
2023-06-14 22:32:33 +00:00
Michael Voznesensky
056bf951bf Strengthen partially supported invariant of base for chained sources (#103445)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103445
Approved by: https://github.com/ezyang
2023-06-13 22:44:28 +00:00
Edward Z. Yang
1d40b394e6 Remove getitem dynamic shapes special case (#103296)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103296
Approved by: https://github.com/voznesenskym
2023-06-10 01:27:22 +00:00
Bin Bao
39bf86ae90 [dynamo] Support OrderedDict constructor with kwargs (#103192)
Summary: To solve an issue in https://github.com/pytorch/pytorch/issues/102878.
The solution follows the example in https://github.com/pytorch/pytorch/pull/98660.
It only solves the problem for the standard OrderedDict; there is another
problem if we use a user-defined CustomDict.
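
A minimal sketch of the now-supported constructor call (backend is illustrative):

```
import torch
from collections import OrderedDict

@torch.compile(backend="eager")
def f(x):
    # OrderedDict built from kwargs
    d = OrderedDict(a=x + 1, b=x - 1)
    return d["a"] * d["b"]

f(torch.ones(2))
```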

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103192
Approved by: https://github.com/yanboliang
2023-06-08 12:14:21 +00:00
Mark Saroufim
790f5732f6 Fix Graph Break on builtin comparison on NNModule (#103176)
Fixes https://github.com/pytorch/pytorch/issues/102338
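
A minimal sketch of the fixed pattern (module and backend are illustrative):

```
import torch
import torch.nn as nn

lin = nn.Linear(2, 2)

@torch.compile(backend="eager")
def f(mod, x):
    # a builtin comparison against a module no longer graph-breaks
    return x + 1 if mod is lin else x

f(lin, torch.ones(2))
```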

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103176
Approved by: https://github.com/anijain2305
2023-06-07 22:51:43 +00:00
Michael Lazos
2434a205de Support unary not on lists (#102210)
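A minimal sketch of the supported pattern (backend is illustrative):

```
import torch

@torch.compile(backend="eager")
def f(x, items):
    # unary `not` on a list reduces to an emptiness check
    if not items:
        return x + 1
    return x - 1

f(torch.ones(2), [])
```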

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102210
Approved by: https://github.com/anijain2305
2023-05-25 02:45:36 +00:00
Animesh Jain
2fa1b563da [dynamo] Activation checkpoint higher order ops - Reland 101028 (#101790)
https://github.com/pytorch/pytorch/pull/101028 was reverted due to internal breakage. Relanding.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101790
Approved by: https://github.com/zou3519
2023-05-18 19:09:14 +00:00
PyTorch MergeBot
d0db7d624d Revert "[dynamo] Activation checkpointing as higher order op (#101028)"
This reverts commit de15e740a1.

Reverted https://github.com/pytorch/pytorch/pull/101028 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/101028#issuecomment-1548280970))
2023-05-15 17:47:08 +00:00
Animesh Jain
de15e740a1 [dynamo] Activation checkpointing as higher order op (#101028)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101028
Approved by: https://github.com/voznesenskym, https://github.com/zou3519
2023-05-12 03:17:41 +00:00
Michael Voznesensky
aafc6ce8cc Produce constant variables in cases where a SymNode is created with a constant (#100144)
`AOT_DYNAMIC_SHAPES=1 TORCHDYNAMO_DYNAMIC_SHAPES=1 benchmarks/dynamo/huggingface.py --performance --training --amp --backend eager --disable-cudagraphs --device cuda --only AllenaiLongformerBase --explain`

Looks promising!

Goes from:

Dynamo produced 173 graphs covering 2760 ops with 160 graph breaks (14 unique)

To:

Dynamo produced 6 graphs covering 2298 ops with 15 graph breaks (7 unique)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100144
Approved by: https://github.com/ezyang
2023-05-01 21:32:11 +00:00
PyTorch MergeBot
89c43f4108 Revert "Produce constant variables in cases where a SymNode is created with a constant (#100144)"
This reverts commit d7bdfd3454.

Reverted https://github.com/pytorch/pytorch/pull/100144 on behalf of https://github.com/ezyang due to ci failure is real ([comment](https://github.com/pytorch/pytorch/pull/100144#issuecomment-1529587039))
2023-05-01 11:10:48 +00:00
Michael Voznesensky
d7bdfd3454 Produce constant variables in cases where a SymNode is created with a constant (#100144)
`AOT_DYNAMIC_SHAPES=1 TORCHDYNAMO_DYNAMIC_SHAPES=1 benchmarks/dynamo/huggingface.py --performance --training --amp --backend eager --disable-cudagraphs --device cuda --only AllenaiLongformerBase --explain`

Looks promising!

Goes from:

Dynamo produced 173 graphs covering 2760 ops with 160 graph breaks (14 unique)

To:

Dynamo produced 6 graphs covering 2298 ops with 15 graph breaks (7 unique)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100144
Approved by: https://github.com/ezyang
2023-04-30 17:13:57 +00:00
Michael Voznesensky
e789de952f Make sizevar addition work properly (#100015)
Rm

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100015
Approved by: https://github.com/ezyang
2023-04-26 15:59:26 +00:00
Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.JIT allow simple generator expressions, which lets us enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.
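
The rule in practice, as a small sketch:

```
xs = [1, 2, 0, 3]

slow = all([x > 0 for x in xs])  # materializes the whole list first
fast = all(x > 0 for x in xs)    # can short-circuit at the first falsy item
assert slow == fast
```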

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00