Commit Graph

173 Commits

Author SHA1 Message Date
PyTorch MergeBot
54e320d4d1 Revert "[dynamo] Lazy disable_dynamo API out-of-dynamo (#104317)"
This reverts commit 5c12a810ac.

Reverted https://github.com/pytorch/pytorch/pull/104317 on behalf of https://github.com/huydhn due to This has been reverted internally by D47166892, so I need to also revert it on OSS to keep them in sync ([comment](https://github.com/pytorch/pytorch/pull/104317#issuecomment-1621099151))
2023-07-05 06:21:48 +00:00
PyTorch MergeBot
40f53912cf Revert "[dynamo][ac] Remove disable monkeypatching of utils.checkpoint (#104397)"
This reverts commit 537a6c0651.

Reverted https://github.com/pytorch/pytorch/pull/104397 on behalf of https://github.com/huydhn due to This has been reverted internally by D47216591, so I need to also revert it on OSS to keep them in sync ([comment](https://github.com/pytorch/pytorch/pull/104397#issuecomment-1621086360))
2023-07-05 06:11:08 +00:00
Animesh Jain
537a6c0651 [dynamo][ac] Remove disable monkeypatching of utils.checkpoint (#104397)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104397
Approved by: https://github.com/wconstab
2023-06-30 02:27:06 +00:00
Animesh Jain
5c12a810ac [dynamo] Lazy disable_dynamo API out-of-dynamo (#104317)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104317
Approved by: https://github.com/jansel, https://github.com/wconstab, https://github.com/mlazos
2023-06-29 13:30:17 +00:00
Tugsbayasgalan Manlaibaatar
936cd4f2f5 Migrate exportdb to torch.export (#104260)
Reapply of https://github.com/pytorch/pytorch/pull/103861. Things that needed to be fixed:

- Fix a bug with returning dict output type
- Make pass_base work with map implementation
- Fix subtle bug with dynamo not propagating "val" in node.meta
- Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104260
Approved by: https://github.com/angelayi
2023-06-27 17:49:18 +00:00
Michael Voznesensky
ec24f1e4cc Simulate treespec flattening/unflattening (#101896)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101896
Approved by: https://github.com/jansel, https://github.com/anijain2305
2023-06-23 10:53:15 +00:00
PyTorch MergeBot
518abe8b7e Revert "Migrate exportdb to torch.export from torchdynamo.export (#103861)"
This reverts commit fb6173a4ac.

Reverted https://github.com/pytorch/pytorch/pull/103861 on behalf of https://github.com/huydhn due to It looks like this change is failing in trunk due to a landrace fb6173a4ac ([comment](https://github.com/pytorch/pytorch/pull/103861#issuecomment-1601960600))
2023-06-22 03:24:01 +00:00
Tugsbayasgalan Manlaibaatar
fb6173a4ac Migrate exportdb to torch.export from torchdynamo.export (#103861)
Things that needed to be fixed:
1. Fix a bug with returning dict output type
2. Make pass_base work with map implementation
3. Fix subtle bug with dynamo not propagating "val" in node.meta
4. Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103861
Approved by: https://github.com/zhxchen17, https://github.com/ydwu4
2023-06-22 02:53:41 +00:00
Brian Hirsh
c3c03e7cb8 Reland of https://github.com/pytorch/pytorch/pull/101818 (#103888)
Original PR broke internal builds.

This reverts commit 5ed618132f.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103888
Approved by: https://github.com/albanD
2023-06-21 21:00:56 +00:00
Michael Lazos
38f35b4fc3 Add some missing disabled functions (#103662)
Disable Adadelta, rprop, multitensor, and the fused optimizers

Fixes https://github.com/pytorch/benchmark/actions/runs/5167132765/jobs/9307817625

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103662
Approved by: https://github.com/janeyx99
2023-06-16 00:11:13 +00:00
Thiago Crepaldi
6f655d4195 Add symbolic tracing support to torch._dynamo.export (fake input + weights) (#100017)
Fixes #95900
Using the following repro as a guide:

```python
import torch
import torch._dynamo
from torch._subclasses import fake_tensor
from torch.fx.experimental.symbolic_shapes import ShapeEnv
from torch._dynamo.output_graph import config
class Model(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(2, 2)
        self.linear2 = torch.nn.Linear(2, 2)

    def forward(self, x):
        out = self.linear(x)
        out = self.linear2(out)
        return out

fake_mode = fake_tensor.FakeTensorMode(allow_non_fake_inputs=False,
                                       allow_fallback_kernels=True,
                                       shape_env=ShapeEnv(
                                            allow_scalar_outputs=config.capture_scalar_outputs,
                                            allow_dynamic_output_shape_ops=config.capture_dynamic_output_shape_ops,
                                            frame_id=0
                                        ),
)
# Fakefying input/model before calling torch._dynamo.export
with fake_mode:
    fake_x = torch.rand(5, 2, 2)
    model = Model()

# Calling torch._dynamo.export without active fake mode
graph_module, guards = torch._dynamo.export(
    model,
    fake_x,
    aten_graph=True,
    fake_mode=fake_mode
)
graph_module.print_readable()
graph_module.graph.print_tabular()
```

Summary of changes:

* Plumb fake_mode through the torch._dynamo.export API. When specified, it replaces the creation of a new FakeTensorMode at InstructionTranslator on behalf of OutputGraph.
* Hack FakeTensor.__new__ to prevent a torch.tensor._make_subclass call for inputs that are already fakefied by the user. This probably needs to be fixed in a nicer way. Any idea?
* Remove a few asserts that didn't want fake tensors coming from the user script.
* Add torch._subclasses.fake_tensor.FakeTensor to the type list on a few assert checks to allow fake inputs.

The changes above allowed symbolic tracing with both static and dynamic shapes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100017
Approved by: https://github.com/ezyang
2023-06-15 21:28:10 +00:00
Edward Z. Yang
bc6ec97e02 Switch dynamic_shapes to True by default (#103597)
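A minimal check of the new default (a sketch; `torch._dynamo.config.dynamic_shapes` is the flag this commit flips):

```python
import torch._dynamo.config as config

# Previously off by default; after this commit symbolic shapes are the default.
assert config.dynamic_shapes
```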
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103597
Approved by: https://github.com/voznesenskym
2023-06-15 15:16:20 +00:00
Edward Z. Yang
8b015c166c Don't test dynamic_shapes in tensor_always_has_static_shape (#103517)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103517
Approved by: https://github.com/anijain2305
2023-06-14 07:04:17 +00:00
Laith Hasanian
4a52694b08 [torch.compile] Add explain as a backend #102053 (#103259)
Fixes #102053

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103259
Approved by: https://github.com/voznesenskym
2023-06-13 00:32:17 +00:00
PyTorch MergeBot
5ed618132f Revert "change pre_autograd to pre_dispatch tracing (#101818)"
This reverts commit b0392de2c3.

Reverted https://github.com/pytorch/pytorch/pull/101818 on behalf of https://github.com/izaitsevfb due to Breaks internal builds see D46629736 TypeError: wrap_key() got an unexpected keyword argument pre_autograd ([comment](https://github.com/pytorch/pytorch/pull/101818#issuecomment-1587837667))
2023-06-12 18:16:37 +00:00
Edward Z. Yang
c3fdfca5da Always create ShapeEnv, always apply unspec logic (#103302)
Originally, my goal for this PR was to remove the `dynamic_shapes` tests in torch/_dynamo/variables/builder.py. However, one thing led to another, and it turned out that it was easiest to do all of the following in one go:

* Unconditionally allocate a ShapeEnv, no matter whether dynamic_shapes is enabled or not (torch/_dynamo/output_graph.py). There is a small adjustment to export in torch/_dynamo/eval_frame.py to account for the fact that a ShapeEnv always exists, even if you're not doing symbolic export.
* Remove the dynamic_shapes test from the unspec logic (torch/_dynamo/variables/builder.py), the original goal.
* Specialize strides and storage offset if all sizes are dynamic (torch/fx/experimental/symbolic_shapes.py). This is required to deal with the unconditional ShapeEnv: if a ShapeEnv exists, fake tensor-ification may choose to allocate symbols. The idea is that with `automatic_dynamic_shapes == False`, Dynamo should never request dynamic sizes, but this invariant was not upheld for nontrivial strides/offsets (see the sketch after the fixup list below).

The rest are just auxiliary fixups from the above:

* Workaround bug in FakeTensorProp where sometimes it doesn't return a FakeTensor (torch/fx/passes/fake_tensor_prop.py), see https://github.com/pytorch/pytorch/pull/103395 for follow up
* Make ShapeProp correctly handle int inputs (torch/fx/passes/shape_prop.py)
* Disable indexing strength reduction if `assume_static_by_default` is False (torch/_inductor/codegen/triton.py)
* Fix hf_T5_generate to NOT toggle `assume_static_by_default` if dynamic shapes is not enabled (benchmarks/dynamo/common.py); technically this is not necessary anymore but it's in for safety.
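A hedged illustration of the invariant above, using the public config flag:

```python
import torch
import torch._dynamo.config as config

# With automatic dynamic shapes off, Dynamo should never request dynamic
# sizes, even though a ShapeEnv is now always allocated; strides and
# storage offsets specialize along with sizes.
config.automatic_dynamic_shapes = False

@torch.compile
def f(x):
    return x + 1

f(torch.randn(4))
f(torch.randn(8))  # recompiles with fully static shapes
```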

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103302
Approved by: https://github.com/voznesenskym
2023-06-12 12:48:28 +00:00
Edward Z. Yang
cf8af57c4a Make torch.compile(dynamic=True) not assume static by default (#99469)
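A hedged sketch of the new behavior:

```python
import torch

def f(x):
    return x.sum()

# With this change, dynamic=True compiles symbolically up front rather
# than assuming static shapes and waiting for a size change to recompile.
g = torch.compile(f, dynamic=True)
g(torch.randn(4))
g(torch.randn(8))  # expected to reuse the dynamic graph
```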
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99469
Approved by: https://github.com/ezyang
2023-06-10 02:56:01 +00:00
Avik Chaudhuri
59ee6cd864 fix soundness bug with unsupported constraints (#102897)
We do not raise constraint violations for complex binary conditions, such as conditions involving `%`. Moreover, while these constraints are discovered by our solver, the solver does not inject new constraint violations. This can result in cases where export passes, appropriate assertions are not added, and we get runtime crashes.

Now, when the solver discovers constraints that are too complex, we force-specialize the involved dimensions and raise a constraint violation when such dimensions are marked dynamic. This forces the user to remove the dynamic marking, and causes the appropriate specialization assertions to be added.
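A hedged sketch of the failure mode being fixed, assuming the `torch._export` APIs of this era (`export`, `dynamic_dim`):

```python
import torch
from torch._export import export, dynamic_dim

def f(x):
    if x.shape[0] % 2 == 0:  # '%' produces a guard too complex for the solver
        return x + 1
    return x - 1

x = torch.randn(4, 3)
# After this fix, dim 0 is force-specialized, and marking it dynamic raises
# a constraint violation instead of exporting without the needed assertion.
export(f, (x,), constraints=[dynamic_dim(x, 0)])
```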

Differential Revision: [D46415786](https://our.internmc.facebook.com/intern/diff/D46415786/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102897
Approved by: https://github.com/tugsbayasgalan
2023-06-10 01:59:55 +00:00
Brian Hirsh
b0392de2c3 change pre_autograd to pre_dispatch tracing (#101818)
We discussed in a composability meeting a few weeks ago that `pre_autograd` should probably be renamed to `pre_dispatch`.

One question in this PR was: should I re-use a dispatch key? Or should I create a new dispatch key (that yet again corresponds to "top of the dispatcher")?

~~For now, I ended up sticking our proxy mode on the mode stack corresponding to `PythonTLSSnapshot`, because it was simple and it works. It looks like one of the functorch dispatch keys has higher priority though, so it's possible that functorch will end up running first. Open to options, but we can consider adding a new dispatch key later if that becomes a problem~~

Update: I added a dedicated dispatch key, `PreDispatch`.
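A hedged usage sketch of the renamed knob via `make_fx` (assuming the `pre_dispatch` kwarg that replaces `pre_autograd`):

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    return torch.relu(x).sum()

# Tracing runs at the top of the dispatcher (the new PreDispatch key),
# i.e., above autograd.
gm = make_fx(f, pre_dispatch=True)(torch.randn(3))
gm.print_readable()
```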

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101818
Approved by: https://github.com/ezyang, https://github.com/Neilblaze, https://github.com/albanD, https://github.com/zou3519
2023-06-09 17:30:15 +00:00
PyTorch MergeBot
79e0a1eacb Revert "Make torch.compile(dynamic=True) not assume static by default (#99469)"
This reverts commit 7108c035bc.

Reverted https://github.com/pytorch/pytorch/pull/99469 on behalf of https://github.com/ZainRizvi due to Breaks trunk ([comment](https://github.com/pytorch/pytorch/pull/99469#issuecomment-1584868864))
2023-06-09 16:46:29 +00:00
Edward Z. Yang
7108c035bc Make torch.compile(dynamic=True) not assume static by default (#99469)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99469
Approved by: https://github.com/ezyang
2023-06-09 13:36:40 +00:00
Michael Lazos
05e91a50d9 Manually generate guards for optimizer (#103121)
Manually generate guards for the optimizer rather than using the variable builder, which can be slow with lots of params.

This was the reason for the ~10s compile slowdown.

Re-disable `_init_group`. This is important because if, for any reason, a frame that calls `_init_group` is run in the Python interpreter, we will trace it, which we don't want to do. We only want to call it when it is accessed via the fast path implemented with the optimizer variable during symbolic interpretation.
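A usage sketch of the fast path this enables (a compiled optimizer step; the function name here is illustrative):

```python
import torch

model = torch.nn.Linear(8, 8)
opt = torch.optim.Adam(model.parameters())

@torch.compile
def opt_step():
    opt.step()

model(torch.randn(2, 8)).sum().backward()
opt_step()  # _init_group runs via the optimizer variable's fast path, untraced
```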

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103121
Approved by: https://github.com/jansel
2023-06-08 21:45:19 +00:00
xuanqi
c37f02f61c [PyTorch][HAM]: Deprecate functionalize (#103053)
TL;DR: remove this flag and use the lower-level library's functionalization functionality directly.

Differential Revision: [D46469134](https://our.internmc.facebook.com/intern/diff/D46469134/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103053
Approved by: https://github.com/tugsbayasgalan
2023-06-08 00:46:41 +00:00
Mark Saroufim
95fced4483 Pretty dataclass dynamo explain (#102869)
Also thinking out loud: maybe we only print graph break reasons? And for the rest we have a verbose print which prints everything?

TODO: some tests are failing based on what they expect a guard string to look like; easy to fix, I'll do it early next week.

# After

```
(sourcetorch) ubuntu@ip-172-31-1-136:~/test$ python pretty.py
BREAK
Graph Count: 2
Graph Break Count: 1
Op Count: 2
Break Reasons:
  Break Reason 1:
    Reason: call_function BuiltinVariable(print) [ConstantVariable(str)] {}
    User Stack:
      <FrameSummary file /home/ubuntu/test/pretty.py, line 6 in fn>
Ops per Graph:
  Ops 1:
    <built-in function add>
  Ops 2:
    <built-in function add>
Out Guards:
  Guard 1:
    Name: ''
    Source: global
    Create Function: GRAD_MODE
    Guard Types: ['GRAD_MODE']
    Code List: ['___is_grad_enabled()']
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 2:
    Name: ''
    Source: global
    Create Function: DEFAULT_DEVICE
    Guard Types: ['DEFAULT_DEVICE']
    Code List: ['utils_device.CURRENT_DEVICE == None']
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 3:
    Name: "G['print']"
    Source: global
    Create Function: BUILTIN_MATCH
    Guard Types: None
    Code List: None
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 4:
    Name: ''
    Source: global
    Create Function: DETERMINISTIC_ALGORITHMS
    Guard Types: ['DETERMINISTIC_ALGORITHMS']
    Code List: ['not ___are_deterministic_algorithms_enabled()']
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 5:
    Name: "L['x']"
    Source: local
    Create Function: TENSOR_MATCH
    Guard Types: None
    Code List: None
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 6:
    Name: ''
    Source: global
    Create Function: GRAD_MODE
    Guard Types: ['GRAD_MODE']
    Code List: ['___is_grad_enabled()']
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 7:
    Name: ''
    Source: global
    Create Function: DEFAULT_DEVICE
    Guard Types: ['DEFAULT_DEVICE']
    Code List: ['utils_device.CURRENT_DEVICE == None']
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 8:
    Name: ''
    Source: global
    Create Function: DETERMINISTIC_ALGORITHMS
    Guard Types: ['DETERMINISTIC_ALGORITHMS']
    Code List: ['not ___are_deterministic_algorithms_enabled()']
    Object Weakref: None
    Guarded Class Weakref: None
  Guard 9:
    Name: "L['x']"
    Source: local
    Create Function: TENSOR_MATCH
    Guard Types: None
    Code List: None
    Object Weakref: None
    Guarded Class Weakref: None
Compile Times: TorchDynamo compilation metrics:
Function                        Runtimes (s)
------------------------------  --------------
_compile                        0.0164, 0.0035
OutputGraph.call_user_compiler  0.0000, 0.0000
```

## Before

```
('Dynamo produced 2 graphs with 1 graph break and 2 ops', [{Guard(name='print', source=<GuardSource.GLOBAL: 1>, create_fn=<function GuardBuilder.BUILTIN_MATCH at 0x7f92ea5009d0>, is_volatile=False, guard_types=None, code_list=None, obj_weakref=None, guarded_class_weakref=None), Guard(name='x', source=<GuardSource.LOCAL: 0>, create_fn=<function GuardBuilder.TENSOR_MATCH at 0x7f92ea501000>, is_volatile=False, guard_types=['TENSOR_MATCH'], code_list=None, obj_weakref=<weakref at 0x7f9224d28f40; dead>, guarded_class_weakref=<weakref at 0x7f92d81734c0; to 'torch._C._TensorMeta' at 0x540b610 (Tensor)>)}, {Guard(name='x', source=<GuardSource.LOCAL: 0>, create_fn=<function GuardBuilder.TENSOR_MATCH at 0x7f92ea501000>, is_volatile=False, guard_types=['TENSOR_MATCH'], code_list=None, obj_weakref=<weakref at 0x7f9224d5e700; dead>, guarded_class_weakref=<weakref at 0x7f92d81734c0; to 'torch._C._TensorMeta' at 0x540b610 (Tensor)>)}], [GraphModule(), GraphModule()], [[<built-in function add>], [<built-in function add>]], [GraphCompileReason(reason='call_function BuiltinVariable(print) [ConstantVariable(str)] {}', user_stack=[<FrameSummary file <ipython-input-1-9e2ddb639697>, line 6 in fn>]), GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file <ipython-input-1-9e2ddb639697>, line 8 in <graph break in fn>>])], 'Dynamo produced 2 graphs with 1 graph break and 2 ops\n Break reasons: \n\n1. call_function BuiltinVariable(print) [ConstantVariable(str)] {}\n  File "<ipython-input-1-9e2ddb639697>", line 6, in fn\n    print("BREAK")\n \n2. return_value\n  File "<ipython-input-1-9e2ddb639697>", line 8, in <graph break in fn>\n    return x\n \nTorchDynamo compilation metrics:\nFunction                        Runtimes (s)\n------------------------------  --------------\n_compile                        0.0418, 0.0084\nOutputGraph.call_user_compiler  0.0001, 0.0001')

```

## Program

```python
import torch
import torch._dynamo

def fn(x):
    x = x + 1
    print("BREAK")
    x = x + 1
    return x

out = torch._dynamo.explain(fn, torch.randn(10))
print(out)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102869
Approved by: https://github.com/voznesenskym
2023-06-07 22:38:57 +00:00
Michael Lazos
0769a50a5f Disable dynamo on some opt methods and differentiable optimizer tests (#103066)
- Disables dynamo on the differentiable optimizer tests
- Disables dynamo on some test methods which expose a very rare dynamo edge case
- Disables dynamo on export/save optimizer state methods because it shouldn't trace those anyway (a sketch of the disabling mechanism follows the list).
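The mechanism used is `torch._dynamo.disable`; a minimal sketch (the decorated function here is illustrative, not the actual method patched by this PR):

```python
import torch
import torch._dynamo

@torch._dynamo.disable
def save_optimizer_state(optimizer):
    # Dynamo will not trace into this function.
    return optimizer.state_dict()
```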

I have a draft PR to fix the two tests marked skip due to unsupported mutation of step.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103066
Approved by: https://github.com/janeyx99, https://github.com/malfet
2023-06-07 03:50:42 +00:00
Ali Moezzi
719584600b Merge original module attributes with attributes assigned by __setattr__ (#102910)
Fixes https://github.com/pytorch/pytorch/issues/94478 @davidberard98

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102910
Approved by: https://github.com/Skylion007, https://github.com/Neilblaze, https://github.com/davidberard98
2023-06-05 19:14:07 +00:00
Michael Lazos
c46af25bb3 Initialize optimizer in dynamo to avoid graph break and tracing slowness (#102640)
On calls to `_init_group`, rather than tracing through it, extract Python values from the arguments and call the initialization directly. This avoids having to trace this function, which is very slow with large parameter counts, and also avoids graph-breaking on it. This is sound in this case because the state is only initialized once in the eager case. Guards on the state and params are generated explicitly rather than via tracing the initialization.

Caveats:
`_init_group` also gathers various state tensors into lists, via mutating list arguments, to pass to the functional optimizer implementation. These state tensors exist on the optimizer itself, but we don't know exactly how the gathering is done or which tensors correspond to which attributes of the optimizer module (each optimizer has different state). To rectify this, we keep weak pointers to all of the tensors collected into the lists in globals (similar to how parameter keys are stored for dictionaries). These pointers are guaranteed to stay alive as long as the optimizer object is alive, provided the internal state is not interfered with, and they are guarded with weakref guards.
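A hedged sketch of the weak-pointer bookkeeping described above, using only public APIs:

```python
import weakref
import torch

p = torch.nn.Parameter(torch.randn(4))
p.grad = torch.randn(4)
opt = torch.optim.Adam([p])
opt.step()  # populates opt.state[p] (exp_avg, exp_avg_sq, step)

# Keep weakrefs to the gathered state tensors; they remain alive for the
# lifetime of the optimizer, so weakref guards can check them cheaply.
refs = [weakref.ref(t) for t in opt.state[p].values()
        if isinstance(t, torch.Tensor)]
assert all(r() is not None for r in refs)
```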

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102640
Approved by: https://github.com/jansel
2023-06-03 15:49:51 +00:00
Michael Lazos
00f1bb0963 Fix optimizer cuda health check graph break (can be done in the compiler) (#102765)
- Ignore the health check if we are compiling
- Don't disable the function anymore

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102765
Approved by: https://github.com/albanD
2023-06-03 03:42:23 +00:00
David Berard
c36d235db0 Revert "implement __dir__ for dynamo (#102480)" (#102766)
This reverts commit b02f48b181.

If a user does this:

```
mod = torch.compile(mod)
mod.is_compiled = True
assert "is_compiled" in dir(mod)
```

it will fail after #102480.

Differential Revision: [D46368712](https://our.internmc.facebook.com/intern/diff/D46368712)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102766
Approved by: https://github.com/msaroufim
2023-06-02 19:40:44 +00:00
Tugsbayasgalan Manlaibaatar
d9f75dded1 [export] Add aot_export 1/N (#101490)
This PR adds aot_export_module as the lowering path from the torch-level graph to the ATen graph. Some known limitations that need to be addressed in follow-up PRs:
1. Store param/buffer data in ExportedProgram
2. Fully support torch.cond with params/buffers
3. Make sure there are no duplicated ExportMetaData entries
4. This API will break Executorch if used on PyE; we will figure out a plan internally.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101490
Approved by: https://github.com/avikchaudhuri
2023-05-31 20:56:21 +00:00
ALi
b02f48b181 implement __dir__ for dynamo (#102480)
Fixes #94478: modules' attributes are not included when `__dir__` is called on the optimized module.
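A hedged repro of the fixed behavior (the attribute name is illustrative):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.my_flag = True  # plain attribute, not a parameter/buffer

    def forward(self, x):
        return x + 1

opt_mod = torch.compile(M())
assert "my_flag" in dir(opt_mod)  # forwarded from the wrapped module
```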

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102480
Approved by: https://github.com/msaroufim
2023-05-30 18:46:10 +00:00
Angela Yi
c4028de462 [export] ExportedProgram (#102259)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102259
Approved by: https://github.com/ydwu4, https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan, https://github.com/zhxchen17
2023-05-26 23:36:38 +00:00
Avik Chaudhuri
8751002215 equality assertions (#102256)
Previously we had runtime asserts for range constraints. This diff adds runtime asserts for equality constraints.

This requires a bit of refactoring that is worth calling out.
1. [Minor] Some of the data structures produced by export and consumed by the runtime assertion pass need to be broadened. This is a WIP. There are some associated code improvements that are included in this diff, but by and large the structures are similar to what exists now. Meanwhile @angelayi and I are chatting about how to make it qualitatively better: briefly, we want to index everything by symbols, which are 1-1 with (name, dim) pairs.
2. [Major] The order in which runtime asserts are emitted is changed. Previously we used to do the work in `placeholder`, now this diff adds a hook for "post-processing" after processing of all placeholders is done. This is needed because equality constraints can mention different placeholders. This change also opens the way to optimizing codegen: e.g., each (name, dim) pair should correspond to a single intermediate variable that is reused across runtime asserts. This is future work.
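A minimal sketch of an equality constraint whose runtime assert this diff adds, assuming the `torch._export` APIs of this era (`export`, `dynamic_dim`):

```python
import torch
from torch._export import export, dynamic_dim

def f(x, y):
    return x + y

x, y = torch.randn(4, 3), torch.randn(4, 3)
# Equality constraint across two placeholders; the assertion pass now emits
# a runtime check that x.shape[0] == y.shape[0].
ep = export(f, (x, y), constraints=[dynamic_dim(x, 0) == dynamic_dim(y, 0)])
```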

Differential Revision: [D46177642](https://our.internmc.facebook.com/intern/diff/D46177642/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102256
Approved by: https://github.com/tugsbayasgalan, https://github.com/angelayi
2023-05-26 14:57:31 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
dbf6912be6 Populate all args with fake tensor value (#102129)
Summary: We don't need to leak matched input positions from dynamo anymore if we can just populate all args with corresponding fake tensors.

Test Plan: CI

Differential Revision: D46131556

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102129
Approved by: https://github.com/angelayi
2023-05-24 22:01:47 +00:00
Avik Chaudhuri
38f8f756bf group constraints by arg (#102096)
Differential Revision: [D46110979](https://our.internmc.facebook.com/intern/diff/D46110979/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102096
Approved by: https://github.com/ydwu4
2023-05-24 05:27:54 +00:00
Avik Chaudhuri
73e887b5c7 [easy] refactor signature flattening transform (#101886)
Move `ChangeInputOutputSignature` out of the export function to avoid closed-over variables that make dependencies hard to understand. Also rename it while we're at it.

Differential Revision: [D46029076](https://our.internmc.facebook.com/intern/diff/D46029076/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101886
Approved by: https://github.com/tugsbayasgalan
2023-05-20 00:47:04 +00:00
Angela Yi
4f9aa7cb0f [export] Error when constraining on static values (#101655)
Fixes https://github.com/pytorch/pytorch/issues/100415

Results in the following error:
```
Traceback (most recent call last):
  File "/scratch/angelayi/work/pytorch/test/export/test_export.py", line 572, in test_export_constrain_static
    export(f, example_inputs, constraints)
  File "/scratch/angelayi/work/pytorch/torch/_export/__init__.py", line 348, in export
    method_name_to_graph_module[compile_spec.method_name] = _export(
  File "/scratch/angelayi/work/pytorch/torch/_export/__init__.py", line 119, in _export
    raise UserError(UserErrorType.CONSTRAIN_VIOLATION, str(e))
torch._dynamo.exc.UserError:   File "/scratch/angelayi/work/pytorch/test/export/test_export.py", line 561, in f
    constrain_as_value(c, min=1, max=3)

It appears that you're trying to set a constraint on a value which we evaluated to have a static value of 3. Scroll up to see where this constraint was set.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101655
Approved by: https://github.com/avikchaudhuri
2023-05-19 18:27:36 +00:00
ydwu4
113b67059f Fix specify_constraints signature for exporting module (#101831)
Currently, when exporting a module, specify_constraints' signature is wrong. We make it consistent with the calling convention of the module being exported.

@build[pytorch_dynamo_inductor]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101831
Approved by: https://github.com/avikchaudhuri
2023-05-19 16:58:17 +00:00
Animesh Jain
2fa1b563da [dynamo] Activation checkpoint higher order ops - Reland 101028 (#101790)
https://github.com/pytorch/pytorch/pull/101028 was reverted due to internal breakage. Relanding.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101790
Approved by: https://github.com/zou3519
2023-05-18 19:09:14 +00:00
Angela Yi
f994d0b619 [dynamo] Change dimension constraint summary to log.info (#101584)
Running with `TORCH_LOGS=dynamo` will have the dimension constraint summary pop up again.
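For reference, a programmatic equivalent of `TORCH_LOGS=dynamo` (a sketch using `torch._logging.set_logs`, which should surface these `log.info` messages):

```python
import logging
import torch._logging

# Re-surface the dimension constraint summary without the env var.
torch._logging.set_logs(dynamo=logging.INFO)
```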

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101584
Approved by: https://github.com/avikchaudhuri
2023-05-17 06:47:14 +00:00
Nikita Shulga
20cf42de2c Revert "[Reland] Add sym_size/stride/numel/storage_offset to native_function.… (#100749)"
This reverts commit bb454891ed.
2023-05-16 18:17:02 -07:00
William Wen
0e811044bd [dynamo 3.11] enable other torch 3.11 dynamo-related tests (#99180)
Notes:
- No segfaults observed in any CI tests: dynamo unittests, inductor unittests, dynamo-wrapped pytorch tests. So we remove the warning that using dynamo 3.11 may result in segfaults.
- Fixed a weakreflist copying bug that caused a few dynamo-wrapped tests to hang.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99180
Approved by: https://github.com/malfet, https://github.com/TamirFriedman-RecoLabs
2023-05-15 22:06:28 +00:00
Tugsbayasgalan Manlaibaatar
9ffad5b62b Remove input tracker from runtime assertion pass (#100955)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100955
Approved by: https://github.com/ydwu4
2023-05-15 21:26:47 +00:00
PyTorch MergeBot
d0db7d624d Revert "[dynamo] Activation checkpointing as higher order op (#101028)"
This reverts commit de15e740a1.

Reverted https://github.com/pytorch/pytorch/pull/101028 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/101028#issuecomment-1548280970))
2023-05-15 17:47:08 +00:00
Aaron Gokaslan
dfe484a3b3 [BE]: Bugfix functorch and some generic typing improvements (#101337)
Fixes some typing bugs found with newer versions of mypy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101337
Approved by: https://github.com/ezyang
2023-05-14 14:20:56 +00:00
Sherlock Huang
bb454891ed [Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#91919) (#100749)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91919
Approved by: https://github.com/ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92402

Reviewed By: ezyang

Differential Revision: D42565586

Pulled By: SherlockNoMad

fbshipit-source-id: 1c2986e45307e076d239836a1b45441a9fa3c9d9
ghstack-source-id: 969f4928486e04c57aaf98e20e3c3ca946c51613

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100749
Approved by: https://github.com/zhxchen17, https://github.com/albanD
2023-05-12 22:57:42 +00:00
PyTorch MergeBot
a64e97b62c Revert "[dynamo 3.11] enable other torch 3.11 dynamo-related tests (#99180)"
This reverts commit aa8dcab1ce.

Reverted https://github.com/pytorch/pytorch/pull/99180 on behalf of https://github.com/huydhn due to Sorry for reverting this, but linux-bionic-py3.11-clang9 test starts to timeout after this taking more than 3h30m. This is probably a landrace ([comment](https://github.com/pytorch/pytorch/pull/99180#issuecomment-1545982256))
2023-05-12 16:18:22 +00:00
William Wen
aa8dcab1ce [dynamo 3.11] enable other torch 3.11 dynamo-related tests (#99180)
Notes:
- No segfaults observed in any CI tests: dynamo unittests, inductor unittests, dynamo-wrapped pytorch tests. So we remove the warning that using dynamo 3.11 may result in segfaults.
- Some dynamo-wrapped pytorch tests hang. They will be skipped in the dynamo-wrapped test suite and will be addressed in a future PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99180
Approved by: https://github.com/malfet
2023-05-12 07:03:09 +00:00
Animesh Jain
de15e740a1 [dynamo] Activation checkpointing as higher order op (#101028)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101028
Approved by: https://github.com/voznesenskym, https://github.com/zou3519
2023-05-12 03:17:41 +00:00
PyTorch MergeBot
d57544d39a Revert "fix specify_constraints's signature when exporting model (#100739)"
This reverts commit b0a372e1fa.

Reverted https://github.com/pytorch/pytorch/pull/100739 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/100739#issuecomment-1540920698))
2023-05-09 21:42:35 +00:00