Commit Graph

129 Commits

Author SHA1 Message Date
Oguz Ulgen
8894a97707 [Dynamo] Fix source for autograd.function default value (#116894)
Before this PR, the source guard would emit
```
globals()['Gradient'].__class__.forward.__defaults__[0]
```
which is incorrect: `Gradient` here is already a class, so the extra `__class__` hop resolves `forward` via the metaclass rather than on `Gradient` itself.
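
A minimal sketch of what the guard is reading; the `Gradient` class and its `scale` default below are hypothetical:

```python
import torch

class Gradient(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale=2.0):  # hypothetical default argument
        return x * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * 2.0, None  # one grad per input; None for `scale`

# The default lives on the class's own `forward`:
Gradient.forward.__defaults__[0]        # 2.0

# The extra `__class__` hop lands on the metaclass instead of Gradient:
Gradient.__class__ is type(Gradient)    # True, and that is not Gradient
```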

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116894
Approved by: https://github.com/zou3519, https://github.com/yanboliang
2024-01-06 00:36:00 +00:00
lezcano
7c8f38700a [dynamo] Fix np.issubdtype (#116459)
Fixes the issue described at https://github.com/pytorch/pytorch/issues/93697#issuecomment-1828346590

This doesn't fix the full issue yet; now we hit:
```python
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 744, in step
    getattr(self, inst.opname)(inst)
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1366, in BUILD_MAP
    assert (
AssertionError
```
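
A guess at the kind of code the fix unblocks (the function below is illustrative, not from the PR):

```python
import numpy as np
import torch

@torch.compile
def f(a):
    # np.issubdtype is evaluated on the traced array's dtype
    if np.issubdtype(a.dtype, np.floating):
        return a * 2.0
    return a + 1

f(np.ones(3, dtype=np.float32))
```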

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116459
Approved by: https://github.com/peterbell10
2024-01-05 01:48:07 +00:00
PyTorch MergeBot
75dae4f691 Revert "[dynamo] Fix np.issubdtype (#116459)"
This reverts commit b5c33ccdb3.

Reverted https://github.com/pytorch/pytorch/pull/116459 on behalf of https://github.com/zou3519 due to Broke CI, seems to be a landrace ([comment](https://github.com/pytorch/pytorch/pull/116459#issuecomment-1877135999))
2024-01-04 14:00:11 +00:00
lezcano
b5c33ccdb3 [dynamo] Fix np.issubdtype (#116459)
Fixes the issue described at https://github.com/pytorch/pytorch/issues/93697#issuecomment-1828346590

This doesn't fix the full issue yet; now we hit:
```python
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 744, in step
    getattr(self, inst.opname)(inst)
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1366, in BUILD_MAP
    assert (
AssertionError
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116459
Approved by: https://github.com/peterbell10
2024-01-04 03:55:50 +00:00
PyTorch MergeBot
68105da229 Revert "[Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358)"
This reverts commit 97891b184c.

Reverted https://github.com/pytorch/pytorch/pull/116358 on behalf of https://github.com/izaitsevfb due to Breaks internal accuracy test, see D52491095, pytorch/benchmark/fb/test_gpu:run_test_gpu - test_train_ig_feed_over_inductor_accuracy  ([comment](https://github.com/pytorch/pytorch/pull/116358#issuecomment-1875779697))
2024-01-03 18:20:51 +00:00
Oguz Ulgen
97891b184c [Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358)
For training graphs (i.e. when inputs require grad), we would previously speculate the forward and backward graphs to determine whether there were any graph breaks, side effects, etc., but we would not actually use these speculated graphs. We would just insert a call function node into the graph and later rely on autograd's tracing.

This approach does not work for more generalized graphs, such as graphs that include user-defined triton kernels, because autograd is not able to do the higher-order function conversion.

This PR speculates the forward and backward functions and emits them in a HOF that later gets used via a templating mechanism.

While working on this PR, I exposed some bugs in the current tracing where trampoline functions lose source information, resulting in incorrect graphs being produced. I have fixed these source-information bugs and killed the trampolines.
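
A minimal sketch of the scenario this PR addresses (the `Scale` function below is illustrative, not from the PR): an autograd.Function applied to inputs that require grad inside a compiled region, so dynamo must speculate both forward and backward.

```python
import torch

class Scale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors  # shows the ctx plumbing dynamo must trace
        return grad_out * 2

@torch.compile
def f(x):
    return Scale.apply(x)

x = torch.randn(3, requires_grad=True)  # training graph: inputs require grad
f(x).sum().backward()
```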

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116358
Approved by: https://github.com/jansel
2023-12-30 01:51:30 +00:00
Yanbo Liang
7e12e722af [Dynamo][12/N] Remove allowed_functions.py (#116401)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116401
Approved by: https://github.com/angelayi
2023-12-28 21:26:06 +00:00
Yanbo Liang
6375eb15ef [Dynamo][11/N] allow_in_graph/disallow_in_graph decorator refactor (#116365)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116365
Approved by: https://github.com/jansel
2023-12-27 23:50:35 +00:00
PyTorch MergeBot
13505898c9 Revert "[Dynamo][11/N] allow_in_graph/disallow_in_graph decorator refactor (#116365)"
This reverts commit 951da38800.

Reverted https://github.com/pytorch/pytorch/pull/116365 on behalf of https://github.com/kit1980 due to Need to revert this because of https://github.com/pytorch/pytorch/pull/116312 ([comment](https://github.com/pytorch/pytorch/pull/116365#issuecomment-1869824468))
2023-12-26 23:43:45 +00:00
Yanbo Liang
951da38800 [Dynamo][11/N] allow_in_graph/disallow_in_graph decorator refactor (#116365)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116365
Approved by: https://github.com/jansel
2023-12-25 07:15:09 +00:00
Yanbo Liang
be9de33240 [Dynamo][9/N] Make SkipFilesVariable wrap functions only (#115963)
Make ```SkipFilesVariable``` handle only the function type, and route skipped classes to ```UserDefinedClassVariable```. The reasons behind this are:
* We'd like to remove ```is_allowed```, so the allowed/disallowed torch classes need a proper place to be handled. Under the current architecture we could put them in either ```SkipFilesVariable``` or ```UserDefinedClassVariable```, but it's confusing to have two places do one thing.
   - Going forward, let's make ```SkipFilesVariable``` handle only functions; I'll probably rename it to ```SkippedFunctionVariable``` in the following PRs.
   - Let's dispatch by the value's type, so all torch class handling would go to ```UserDefinedClassVariable``` in the next PR (see the sketch after this list).
* We'd like to merge the in_graph/skip/inline trace decisions into the same API, ```trace_rules.lookup```, so we probably have to limit the input to functions only, to better organize the ```VariableBuilder._wrap``` logic.
   - Next step, I'll merge ```skipfiles.check``` into ```trace_rules.lookup``` and do the skipfile check before wrapping values into the correct variable tracker.
   - Though ```TorchCtxManagerClassVariable``` is decided by ```trace_rules.lookup```, I'll refactor it out in the following PRs.
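
A minimal, hypothetical sketch of the dispatch-by-type rule proposed above; the stand-in classes below are placeholders, not dynamo's real implementations:

```python
import types

class SkipFilesVariable:            # stand-in for the dynamo VT
    def __init__(self, fn):
        self.fn = fn

class UserDefinedClassVariable:     # stand-in for the dynamo VT
    def __init__(self, cls):
        self.cls = cls

def wrap_value(value):
    # dispatch purely on the value's type
    if isinstance(value, types.FunctionType):
        return SkipFilesVariable(value)           # functions only
    if isinstance(value, type):
        return UserDefinedClassVariable(value)    # classes, incl. skipped torch classes
    raise NotImplementedError(type(value))
```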

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115963
Approved by: https://github.com/jansel
2023-12-21 01:35:07 +00:00
PyTorch MergeBot
bdfabe5e7d Revert "[Dynamo][9/N] Make SkipFilesVariable wrap functions only (#115963)"
This reverts commit bb5a27052f.

Reverted https://github.com/pytorch/pytorch/pull/115963 on behalf of https://github.com/jeanschmidt due to causing significant performance regression, identified by number of ops in ads, please check internal diff ([comment](https://github.com/pytorch/pytorch/pull/115963#issuecomment-1864361697))
2023-12-20 12:06:55 +00:00
Yanbo Liang
bb5a27052f [Dynamo][9/N] Make SkipFilesVariable wrap functions only (#115963)
Make ```SkipFilesVariable``` handle only the function type, and route skipped classes to ```UserDefinedClassVariable```. The reasons behind this are:
* We'd like to remove ```is_allowed```, so the allowed/disallowed torch classes need a proper place to be handled. Under the current architecture we could put them in either ```SkipFilesVariable``` or ```UserDefinedClassVariable```, but it's confusing to have two places do one thing.
   - Going forward, let's make ```SkipFilesVariable``` handle only functions; I'll probably rename it to ```SkippedFunctionVariable``` in the following PRs.
   - Let's dispatch by the value's type, so all torch class handling would go to ```UserDefinedClassVariable``` in the next PR.
* We'd like to merge the in_graph/skip/inline trace decisions into the same API, ```trace_rules.lookup```, so we probably have to limit the input to functions only, to better organize the ```VariableBuilder._wrap``` logic.
   - Next step, I'll merge ```skipfiles.check``` into ```trace_rules.lookup``` and do the skipfile check before wrapping values into the correct variable tracker.
   - Though ```TorchCtxManagerClassVariable``` is decided by ```trace_rules.lookup```, I'll refactor it out in the following PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115963
Approved by: https://github.com/jansel
2023-12-19 02:01:47 +00:00
Yanbo Liang
14a6b24c8b [Dynamo][8/N] Wrap itertools.* as ItertoolsVariable (#115802)
This is part of a series of changes before removing ```is_allowed```.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115802
Approved by: https://github.com/voznesenskym
2023-12-16 01:42:02 +00:00
Yanbo Liang
db851b1bc9 [Dynamo][7/N] Wrap python modules under torch as regular PythonModuleVariable (#115724)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115724
Approved by: https://github.com/jansel
2023-12-13 21:23:14 +00:00
Yanbo Liang
274fdc81f8 [Dynamo][6.3/N] Further cleanup torch.py (#114669)
A follow-up PR to clean up what I found during the refactor of torch.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114669
Approved by: https://github.com/jansel
2023-12-11 07:16:03 +00:00
Michael Lazos
fbeca60b1f Remove replace_all and make VTs mutable (#113725)
1. Removes calls to `replace_all` and `clone` and makes VTs mutable.
2. Properly handles TupleIterator mutation. Previously, TupleIterator variables would only be properly reconstructed if they were advanced at least once in a frame. On calls to `next`, the source information would be lost (due to constructing a new iterator without using the builder), which ensured that during codegen the variable would be reconstructed from scratch. Now that VTs are mutated, the source is never lost, so we need to properly track mutation and handle it by replaying calls to `next` at the end of the modified bytecode (see the sketch after this list).
3. Added a test checking `iadd` side effects; this was missing from our unit test coverage.
4. Fixed two incorrect sources: DelayGraphBreakVariable and UserMethodVariable both relied on setting the source to AttrSource(parent, name) at the callsite of `var_getattr`.
5. Fixed a bug in in-place adding for lists: it would set the resulting VariableTracker's source to `None`, which would take a different reconstruct path in codegen. Now this is handled explicitly by reconstructing vars when allow_cache=`False`, so that during side-effect replay the mutated var is correctly updated.
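
A sketch of the item-2 behavior (assuming a tuple iterator is accepted as a frame input; the function is illustrative):

```python
import torch

@torch.compile
def f(it):
    # `next` advances the iterator inside the compiled frame; the mutation
    # is replayed at the end of the modified bytecode so the caller sees it
    a = next(it)
    return torch.ones(1) * a

it = iter((1, 2, 3))
f(it)
print(next(it))  # 2: the advance inside f is visible here
```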

In subsequent PRs:
* Refactor side-effect tracking to be significantly simpler (I think we only need an `is_modified` flag)
* Refactor the `next_variables` iterator to match the signature of `next`
* Remove all references to `options` in the code
* Refactor VTs representing mutable collections to implement their own mutation-update handling
* Remove `clone` and/or make it specific to lists for creating slices
* Add mutation tracking/replay for sets
* Add mutation tracking/replay for iter.py
* Remove setting the source in the builder (it's set at the top level after a var is returned)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113725
Approved by: https://github.com/jansel
2023-12-10 09:31:21 +00:00
Michael Lazos
1c3a4a864c Remove always restore (#115317)
Removes "always restore", assuming that a HOP will clean up any leftover state from tracing fwd + bwd.

This required a minor change to the autograd fn variable higher-order op: if we are tracing the forward, DON'T add the call_function node into the main graph, since we are only tracing it for the purposes of speculation. Instead, return the result directly to be passed to the backward for speculation. This was the only observable side effect on the output graph that I found.

Test plan:
test_smoke_from_test_autograd in test_autograd_function.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115317
Approved by: https://github.com/voznesenskym, https://github.com/jansel
2023-12-08 18:17:37 +00:00
Jason Ansel
f4c67ffff4 [dynamo] Improve support for dynamic shapes str.format and _assert (#115203)
This removes a graph break in vision_maskrcnn.
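
A guess at the pattern this enables (the function and message below are illustrative):

```python
import torch

@torch.compile(dynamic=True)
def f(x):
    # str.format on a symbolic size plus torch._assert, without a graph break
    torch._assert(x.size(0) > 0, "got batch size {}".format(x.size(0)))
    return x * 2

f(torch.randn(4, 3))
```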

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115203
Approved by: https://github.com/yanboliang
2023-12-06 04:54:45 +00:00
Yanbo Liang
4620170008 [Dynamo] Revert multiple PRs since they triggered compilation stuck internally (#115126)
Revert the following PRs to mitigate an internal compilation hang:
#113432
#114016
#114507
#114196
#114739
#114669

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115126
Approved by: https://github.com/xush6528
2023-12-05 22:35:37 +00:00
Jason Ansel
a70c85ce90 [dynamo] Improve support for inspect.signature().parameters (#115047)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115047
Approved by: https://github.com/oulgen
ghstack dependencies: #114830
2023-12-04 19:08:36 +00:00
Xuehai Pan
2e8ac5ea93 [dynamo] support dict.fromkeys() / OrderedDict.fromkeys() / defaultdict.fromkeys() (#115010)
Add support for `dict.fromkeys`, `OrderedDict.fromkeys`, and `defaultdict.fromkeys`.

Fixes #114963

- #114963
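
A minimal sketch of the newly supported call (illustrative, not from the PR):

```python
import torch

@torch.compile
def f(x):
    d = dict.fromkeys(["a", "b"], 0)   # now traced instead of graph breaking
    d["a"] = x.sum()
    return d["a"] + d["b"]

f(torch.randn(3))
```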

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115010
Approved by: https://github.com/jansel
2023-12-04 01:49:59 +00:00
Yanbo Liang
ab5385fc50 [Dynamo][6.3/N] Further cleanup torch.py (#114669)
A follow-up PR to clean up what I found during the refactor of torch.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114669
Approved by: https://github.com/jansel
2023-12-01 04:08:29 +00:00
rzou
ce4bff4013 [dynamo] fix functools.wraps on nested functions (#114279)
Updated version of #108885 addressing the review. In this PR:
- We add a VT.can_reconstruct utility that checks if VT.reconstruct()
  does something.
- If functools.wraps(fn) is passed a `fn` that either has a source or
  has .can_reconstruct() == True, then we stash the source (or the VT)
- Later on, we use the source (or VT.reconstruct) to actually
  reconstruct the object in codegen.
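
A sketch of the scenario (the `make_adder` helper is illustrative): `functools.wraps` applied to a nested function, which has no global source of its own.

```python
import functools
import torch

def make_adder(x):
    def inner(y):
        return y + x

    @functools.wraps(inner)  # `inner` is a nested function: no global source
    def wrapper(y):
        return inner(y)

    return wrapper

@torch.compile
def f(x):
    g = make_adder(x)  # wraps() runs while dynamo traces this frame
    return g(x)

f(torch.randn(3))
```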

Test Plan:
- New tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114279
Approved by: https://github.com/voznesenskym
2023-11-28 22:34:59 +00:00
Bin Bao
0bef97fac3 [dynamo] Support itertools.groupby (#114192)
Summary: for https://github.com/pytorch/pytorch/issues/108698
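
A guess at the now-supported usage (illustrative):

```python
import itertools
import torch

@torch.compile
def f(x):
    out = x
    for key, group in itertools.groupby([1, 1, 2, 3, 3]):
        out = out + key * len(list(group))
    return out

f(torch.zeros(2))
```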

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114192
Approved by: https://github.com/jansel
2023-11-28 14:58:59 +00:00
Yanbo Liang
bab41f44b8 [dynamo] Fix allow_in_graph decorator doesn't work on autograd.Function (#113510)
Fixes #111032
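
A sketch of the now-working pattern (the `MyFn` class is illustrative):

```python
import torch
import torch._dynamo

@torch._dynamo.allow_in_graph  # previously had no effect on autograd.Function
class MyFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * 2

@torch.compile
def f(x):
    return MyFn.apply(x)

f(torch.randn(3, requires_grad=True))
```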

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113510
Approved by: https://github.com/zou3519
2023-11-16 22:44:46 +00:00
PyTorch MergeBot
5d170fce29 Revert "Support tensors as Dict keys (#111196)"
This reverts commit b0805fa5d0.

Reverted https://github.com/pytorch/pytorch/pull/111196 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing internally. I will provide the details there ([comment](https://github.com/pytorch/pytorch/pull/111196#issuecomment-1813410149))
2023-11-15 23:08:00 +00:00
lezcano
b0805fa5d0 Support tensors as Dict keys (#111196)
This prepares the PR where we implement sets in terms of dicts.
Rather than storing internally a dictionary that maps literals to
VariableTrackers, we now store (pretty much) a dictionary from VTs to VTs.
To do so, keys are wrapped in an opaque internal class `_Hashable`.
The class is opaque on purpose, so that it fails hard if it
inadvertently leaks back into user code.

We also found and fixed a number of latent bugs and inconsistencies
in the way dynamo checked what can be a dict key. More generally, we
make much clearer what needs to be modified to add a new supported
key type to Dicts.
Fixes https://github.com/pytorch/pytorch/issues/107595
Fixes https://github.com/pytorch/pytorch/issues/111603
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111196
Approved by: https://github.com/jansel
2023-11-14 19:14:03 +00:00
Jason Ansel
3914566c73 [dynamo] Refactor OrderedDict to dict (#113234)
In Python 3, all dicts are ordered.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113234
Approved by: https://github.com/oulgen, https://github.com/lezcano
2023-11-08 09:27:08 +00:00
Jason Ansel
356f3458c4 [dynamo] Remove incorrect sources (#112961)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112961
Approved by: https://github.com/voznesenskym, https://github.com/Skylion007
ghstack dependencies: #111306, #111415, #111725, #111726, #112962
2023-11-07 22:01:40 +00:00
Jason Ansel
5fe96eaaf4 [dynamo] Remove VariableTracker.propagate (#111726)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111726
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306, #111415, #111725
2023-11-07 19:55:19 +00:00
Jason Ansel
843a8ecd24 [dynamo] Remove VariableTracker.add_options (#111725)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111725
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306, #111415
2023-11-07 19:55:19 +00:00
Jason Ansel
9664190952 [dynamo] Eagerly install guards (#111415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111415
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306
2023-11-07 19:55:19 +00:00
Evgeni Burovski
d5fff7338e BUG: gracefully fall back to numpy.random if asked in dynamo.config (#109205)
Graph break if `config.use_numpy_random_stream=True` instead of a hard failure in inductor.
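
A usage sketch (the compiled function is illustrative; the config flag is the one named above):

```python
import numpy as np
import torch
import torch._dynamo

torch._dynamo.config.use_numpy_random_stream = True

@torch.compile
def f(x):
    # graph-breaks here, so numpy's RNG runs eagerly instead of failing
    return x + np.random.randn()

f(torch.randn(3))
```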

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109205
Approved by: https://github.com/lezcano
2023-11-04 14:54:05 +00:00
Yanbo Liang
6f681ab5d9 [torch.compile] autograd.Function with multiple return values (#112475)
Fixes #106389
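
A minimal sketch (the `TwoOutputs` class is illustrative):

```python
import torch

class TwoOutputs(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2, x + 1        # multiple return values

    @staticmethod
    def backward(ctx, g1, g2):
        return g1 * 2 + g2         # one grad for the single input

@torch.compile
def f(x):
    a, b = TwoOutputs.apply(x)
    return a + b

f(torch.randn(3, requires_grad=True)).sum().backward()
```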

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112475
Approved by: https://github.com/zou3519
2023-11-02 04:43:49 +00:00
Jon Chuang
41720c2a48 [dynamo] add infinite generators itertools.{count, repeat, cycle} (#110967)
Fixes https://github.com/pytorch/pytorch/pull/110953/files#r1352868935

Depends on: https://github.com/pytorch/pytorch/pull/110953

Why not use these for `repeat(item, count)`:
> These are not preferred as they return an opaque VariableTracker. In particular, one cannot do `enumerate(repeat(1))`. `repeat(1, 10)` benefits from the integration enjoyed by `ListVariableIterator`
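
A guess at a now-traceable pattern (illustrative): the infinite iterator is bounded by `zip`, so tracing terminates.

```python
import itertools
import torch

@torch.compile
def f(x):
    out = x
    for i, c in zip(range(3), itertools.count(10)):
        out = out + i * c
    return out

f(torch.zeros(2))
```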

Follow ups:
- [ ] make listiterator an IteratorVariable, define iterator integrations on base IteratorVariable where unspecialized https://github.com/pytorch/pytorch/pull/110967#discussion_r1356656469
    - Please make a new issue for this
- [ ] explore integrating cpython itertools test suite https://github.com/pytorch/pytorch/pull/110967#discussion_r1358326402
- [ ] Use something other than `StopIteration` to handle iterator termination https://github.com/pytorch/pytorch/pull/110967#discussion_r1358336038
- [ ] Add test case for consuming iterator simultaneously from two code points https://github.com/pytorch/pytorch/pull/110967/files#r1358325511

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110967
Approved by: https://github.com/ezyang
2023-11-01 00:33:17 +00:00
Jason Ansel
0948550c53 [dynamo] Remove mutation in AutogradFunctionContextVariable (#112216)
AutogradFunctionContextVariable was mutating self._saved_tensors, which is generally not allowed since VariableTracker objects should be read-only and are frequently copied via apply/clone.  This was causing some test failures up the PR stack.

This moves the mutation into a separate object that is not copied.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112216
Approved by: https://github.com/voznesenskym
ghstack dependencies: #112122
2023-10-28 06:46:48 +00:00
Jason Ansel
c7b78fb76c [dynamo] Replace recursively_contains with parents_tracker (#112122)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112122
Approved by: https://github.com/voznesenskym
2023-10-28 06:46:48 +00:00
lezcano
1774704fc1 [dynamo] Simplify add_dict in preparation to refactor it with call_set (#110523)
The previous implementation had a fair amount of repeated code, and did
things like calling `add_options` where options was always empty (which
is fine, as the guards are already set within ConstDictVariable).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110523
Approved by: https://github.com/yanboliang, https://github.com/jansel
ghstack dependencies: #110522
2023-10-27 20:17:10 +00:00
Michael Lazos
fb8876069d Support tracing base torch_function impl (#111731)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111731
Approved by: https://github.com/jansel
ghstack dependencies: #111730
2023-10-23 07:11:32 +00:00
voznesenskym
303c54dbd9 [dynamo] share a subgraph tracer across fwd and bwd in autograd.Function (#111588)
Fixes https://github.com/pytorch/pytorch/issues/111031

The current design of autograd.Function tracing in dynamo is that we:

1) speculate fwd, and if it's fine,
2) speculate bwd, and if it's fine,
3) install the .apply in the graph alongside fwd guards

The mechanism for doing so involves creating HOPs for fwd, bwd, and apply. The speculations for fwd and bwd each create their own subtracer. This is fine, until a proxy created in fwd is used in bwd.

For a simple example, consider:

```python
class Foo(Function):
    @staticmethod
    def forward(ctx, x):
        ctx.x0 = x.size(0)
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * ctx.x0
```
The value stored at `x0` is a proxy, but it is a proxy belonging to the fwd speculation subtracer. Rather than teaching the bwd subtracer about it, we choose to create a single subtracer that covers both fwd and bwd speculation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111588
Approved by: https://github.com/zou3519
2023-10-20 21:32:02 +00:00
Michael Lazos
a55ecec195 [dynamo][__torch_function__ 2/n] Refactor TensorWithTFOverrideVariable (#109556)
This is purely a refactor that preserves the existing behavior and tests.

The main contribution of the PR is to refactor the dispatch of `__torch_function__` so that it can be called with TF override objects in any argument position, matching the eager dispatch behavior.
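
For context, a minimal override and the any-position dispatch being matched (the `Wrapped` subclass is illustrative):

```python
import torch

class Wrapped(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        return super().__torch_function__(func, types, args, kwargs)

x = torch.randn(3).as_subclass(Wrapped)
y = torch.randn(3)

# eager dispatch invokes __torch_function__ whether the override object
# is the first or a later argument; the refactor matches this behavior
torch.add(x, y)
torch.add(y, x)
```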

This will allow for the following in upcoming PRs:

1) have TensorWithTFOverrideVariable inherit from TensorVariable
2) enable tracing through the base `__torch_function__` implementation.

Note: this depends on https://github.com/pytorch/pytorch/pull/109542

towards tracing for https://github.com/pytorch/pytorch/issues/93723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109556
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-10-20 18:53:38 +00:00
Aaron Gokaslan
cb856b08b2 [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
Did some easy fixes found by enabling TRY200. Most of these seem like oversights rather than intentional. The proper way to silence intentional cases is with `from None`, to note that you thought about whether the exception should carry its cause and decided against it.
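
A sketch of the pattern TRY200 enforces (the `parse` helper is hypothetical):

```python
def parse(raw):
    return {"mode": raw["mode"]}   # hypothetical helper

def load(raw):
    try:
        return parse(raw)
    except KeyError as exc:
        # attach the original exception as the cause
        raise ValueError(f"bad config: {raw!r}") from exc

# Intentional suppression is spelled `raise ... from None`, signaling
# that dropping the cause was a deliberate choice.
```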

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
2023-10-19 21:56:36 +00:00
Jon Chuang
6e770c0dda [dynamo] Add itertools.repeat via polyfill (#110953)
Fixes https://github.com/pytorch/pytorch/issues/110286
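
In the spirit of the commit's approach, a pure-Python stand-in that dynamo can inline (the actual polyfill in dynamo may differ):

```python
def repeat(item, count=None):
    if count is None:
        while True:
            yield item
    else:
        for _ in range(count):
            yield item

assert list(repeat(7, 3)) == [7, 7, 7]
```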

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110953
Approved by: https://github.com/ezyang
2023-10-10 20:40:33 +00:00
Animesh Jain
e1f0f9c64e [dynamo][easy] Move code from GetAttrVariable to a suitable place (#110535)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110535
Approved by: https://github.com/jansel
2023-10-08 22:37:34 +00:00
Jon Chuang
844ea6408b feat(dynamo): handle accumulate kwargs ("func", "initial") (#110686)
Follow up to: https://github.com/pytorch/pytorch/pull/110683
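
A guess at the now-handled call (illustrative):

```python
import itertools
import operator
import torch

@torch.compile
def f(x):
    out = x
    # both the "func" and "initial" kwargs are handled
    for v in itertools.accumulate([1, 2, 3], func=operator.add, initial=10):
        out = out + v
    return out

f(torch.zeros(2))
```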

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110686
Approved by: https://github.com/ezyang
2023-10-08 07:06:52 +00:00
Animesh Jain
58637c4b43 [dynamo] Remove SuperSource (#110475)
The motivation for removing this is already present in the pre-PR comments; copying it here:

~~~
# NB - SuperSource is a weird one.
# it is our only source with 2 bases, so we use the object
# as the base, rather than the type, since an invocation
# like super(Foo, foo) is represented here, the source object base is more spiritually
# aligned with the instance, rather than the type.
# This whole construction is questionable tho, and we should probably find a way to
# avoid this exception to our otherwise nice source parentage invariant.
~~~

Instead of using super(a, b), we can use `type(b).__mro__[index]`.
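
The equivalence, as a plain-Python check (the example classes are hypothetical):

```python
class Base:
    def hello(self):
        return "base"

class Child(Base):
    def hello(self):
        return "child"

c = Child()

# the class right after Child in the MRO plays the role of super()
index = type(c).__mro__.index(Child) + 1
assert super(Child, c).hello() == "base"
assert type(c).__mro__[index].hello(c) == "base"
```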

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110475
Approved by: https://github.com/jansel
2023-10-08 04:45:06 +00:00
Jon Chuang
9b55194f81 fix(dynamo): Incorrect accumulate implementation, bad tests (#110683)
Root cause of: https://github.com/pytorch/pytorch/issues/110287

Fixed many tests that didn't actually test anything, due to the unreliability of `CompileCounter.frame_count` in detecting graph breaks: https://github.com/pytorch/pytorch/issues/110730
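
For reference, a usage sketch of the counter in question (`CompileCounter` is dynamo's testing backend; the compiled function is illustrative):

```python
import torch
import torch._dynamo.testing

cnt = torch._dynamo.testing.CompileCounter()

@torch.compile(backend=cnt)
def f(x):
    return x + 1

f(torch.randn(3))
print(cnt.frame_count)  # counts compiled frames; a graph break adds frames
```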

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110683
Approved by: https://github.com/voznesenskym
2023-10-06 23:07:56 +00:00
Yanbo Liang
9bc5e10899 [New][1/N] Dynamo skipfiles refactor (#110330)
This is the replacement for #109567. This time I preserved all existing semantics, focusing only on API (for developers) and code-structure changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110330
Approved by: https://github.com/ezyang
2023-10-03 16:50:33 +00:00
atalman
b253fc9c93 Revert "[1/N] Dynamo skipfiles refactor (#109567)" (#110296)
This reverts commit 84c5435b29.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110296
Approved by: https://github.com/yanboliang
2023-09-29 20:35:46 +00:00