Commit Graph

143 Commits

Author SHA1 Message Date
Guilherme Leobas
673cc88fd6 Add support for contextmanager in Dynamo (#136033)
Fixes #130559

* Intro

This PR adds support for `@contextmanager` in Dynamo. We chose to limit the
scope of this work to only `@contextmanager` and plan to handle generators fully
in #141055 (still in draft).

* Motivation

Dynamo lacks support for generator functions. When it encounters one, it traces
it as if it were a regular function. This is problematic because it can lead to
incorrect behavior. To illustrate, consider the test case below:

```python
import torch
import contextlib

@contextlib.contextmanager
def set_default_dtype(dtype):
    old_dtype = torch.get_default_dtype()
    try:
        torch.set_default_dtype(dtype)
        yield
    finally:
        torch.set_default_dtype(old_dtype)

@torch.compile(backend="eager", fullgraph=True)
def fn():
    with set_default_dtype(torch.float64):
        x = torch.tensor([3.0, 3.0 + 5.0j])
    return x
```

Before this work, Dynamo would not stop at the `yield`: the graph it produced
contained both calls to `set_default_dtype` executed back to back, so the body
of the `with` block never ran under the temporary dtype. This is incorrect
because a context manager must execute its setup code before the `yield` and
its cleanup code after it.

* List of changes

`YIELD_VALUE` now raises an exception (`YieldValueOp`) to signal that control
flow must be suspended and returned to the caller. Additionally, `RETURN_VALUE`
behaves differently in a generator function. Unlike regular functions, where
`RETURN_VALUE` indicates the final result, in generators it signifies that the
generator is exhausted and implicitly raises `StopIteration`.

A new `VariableTracker` named `FunctionDecoratedByContextlibContextManagerVariable`
was introduced to handle `@contextmanager`. This variable tracker acts not just
as a wrapper for the original function but also maintains an internal `tx`
(InstructionTranslator) object to suspend and return control flow to the parent
tracer when a `yield` is encountered.

* Corner cases

Returning a context manager from a compiled function is not supported. This
would require PyTorch to synchronize the generator state between Dynamo and the
interpreter. Any attempt to return it will result in an `IncorrectUsage`
exception.

Graph breaks require special handling as well. In the event of a graph break,
the frame associated with the context manager is skipped, and the context
manager runs in eager mode.

* This PR is breaking my code

There is a configuration flag (`enable_trace_contextlib`) that can be set to
`False` to disable tracing context managers. If this still causes crashes,
please revert this PR.
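
A minimal sketch of opting out (assuming the flag lives on `torch._dynamo.config`, like other Dynamo feature flags):

```python
import torch._dynamo

# Disable tracing of @contextmanager functions and fall back to the old
# behavior of running them eagerly.
torch._dynamo.config.enable_trace_contextlib = False
```
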
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136033
Approved by: https://github.com/zou3519
2024-12-20 12:02:20 +00:00
Edward Z. Yang
256bfd1096 Rename 'cache limit' to 'recompile limit' (#141542)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141542
Approved by: https://github.com/oulgen, https://github.com/jansel
2024-12-11 05:05:11 +00:00
Bob Ren
f3f7ba5a69 Restart dynamo analysis when we fail to tensorify away all symfloat inputs (#140346)
Fixes a bunch of benchmarks that failed with cudagraph errors including `tlp python benchmarks/dynamo/timm_models.py --device cuda --inductor --accuracy --amp --training --only resmlp_12_224` when `specialize_float=False`

Also brings down the number of overall failures (with keep-going) from 108 => 62. I'd estimate >80% of those 62 are flaky expect tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140346
Approved by: https://github.com/ezyang
ghstack dependencies: #140983, #141003
2024-11-20 21:20:41 +00:00
Will Feng
36c6ad71ba [tlparse] Add dynamo_graph_break_reason logging to trace_structured (#138778)
A common challenge during torch.compile enablement is answering the user's question: "where is the graph break?". This PR makes that easier by surfacing graph breaks and their corresponding user stack trace / compiler stack trace in a direct link, e.g. `0_0_0/dynamo_graph_break_reason_0.txt`, from the tlparse index.html.

![image](https://github.com/user-attachments/assets/79cd43f5-af14-4d08-9d5b-cb47d8203851)

![image](https://github.com/user-attachments/assets/23233ee2-0d56-4526-bf9a-d22c337c4d18)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138778
Approved by: https://github.com/ezyang
2024-10-25 02:00:04 +00:00
Prajesh Praveen Anchalia
bb65c9b883 [PyTorch] Classify Unsupported mutated Dynamic Shapes as User Error (#137054)
Summary: We don't need a hard assert for unsupported dynamic-shape inputs; remove the assert and raise a user exception instead.

Differential Revision: D63661569

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137054
Approved by: https://github.com/bdhirsh
2024-10-23 03:15:37 +00:00
William Wen
2157e396a3 [dynamo] attempt run only mode when dynamo cache limit is hit (#136655)
Implement https://github.com/pytorch/pytorch/issues/135458.

Try run-only mode when dynamo cache limit is hit. If no valid cache entries are found, then skip code recursively.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136655
Approved by: https://github.com/jansel
2024-09-27 17:15:05 +00:00
Edward Z. Yang
a2d2a30311 Add torch._dynamo.config.fail_on_cache_limit_hit (#136767)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136767
Approved by: https://github.com/albanD, https://github.com/jansel
ghstack dependencies: #136533
2024-09-27 03:58:00 +00:00
Guilherme Leobas
e09c5b6046 Remove vt argument in raise_observed_exception (#136037)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136037
Approved by: https://github.com/zou3519
2024-09-24 02:36:57 +00:00
William Wen
95e976a63f [dynamo] recursively skip frames when Dynamo cache limit is hit (#135144)
Fixes https://github.com/pytorch/pytorch/pull/135144 and [T197117723](https://www.internalfb.com/intern/tasks/?t=197117723).

In general, adds `SkipCodeRecursiveException` to Dynamo - when raised in Dynamo, convert_frame will return a `skip_code_recursive_flag` back to C Dynamo, signaling it to skip the current frame and all recursive calls.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135144
Approved by: https://github.com/jansel, https://github.com/anijain2305
2024-09-06 21:38:53 +00:00
Animesh Jain
8a2b064236 [dynamo][user_defined][stable-diffusion] Raise ObservedAttributeError on UserDefinedObject var_getattr (#132806)
Fixes https://github.com/pytorch/pytorch/issues/132551

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132806
Approved by: https://github.com/williamwen42
2024-08-16 04:30:06 +00:00
Edward Z. Yang
90d2593b3e Revert #132806, #132736, #132539, #132487 (#133570)
This reverts commit 25df063f04.
This reverts commit de00c79583.
This reverts commit 419b76c4ac.
This reverts commit bc57d5b6ff.

Differential Revision: [D61335013](https://our.internmc.facebook.com/intern/diff/D61335013)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133570
Approved by: https://github.com/albanD, https://github.com/jansel, https://github.com/anijain2305
2024-08-15 20:54:21 +00:00
Animesh Jain
25df063f04 [dynamo][user_defined][stable-diffusion] Raise ObservedAttributeError on UserDefinedObject var_getattr (#132806)
Fixes https://github.com/pytorch/pytorch/issues/132551

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132806
Approved by: https://github.com/williamwen42
2024-08-07 18:19:49 +00:00
Oguz Ulgen
6e79932543 Add basic mypy annotations to dynamo (#132415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132415
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
2024-08-04 18:43:36 +00:00
PyTorch MergeBot
3558a8cf4a Revert "Add basic mypy annotations to dynamo (#132415)"
This reverts commit 71e22e0959.

Reverted https://github.com/pytorch/pytorch/pull/132415 on behalf of https://github.com/ZainRizvi due to Sorry, this PR has entered a weird state in the diff train. Trying to revert it to skip it, and then we can try relanding it ([comment](https://github.com/pytorch/pytorch/pull/132415#issuecomment-2267631785))
2024-08-04 18:39:29 +00:00
Animesh Jain
6c4ce4331c [dynamo][exception] Raise Observed KeyError exception for dict __getitem__ (#132425)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132425
Approved by: https://github.com/yanboliang, https://github.com/Skylion007
2024-08-02 02:58:31 +00:00
Oguz Ulgen
71e22e0959 Add basic mypy annotations to dynamo (#132415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132415
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
2024-08-01 20:14:25 +00:00
Oguz Ulgen
72d2dba992 Add None return type to init (#132335)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132335
Approved by: https://github.com/albanD
2024-08-01 15:26:45 +00:00
Xuehai Pan
e74ba1b34a [BE][Easy][15/19] enforce style for empty lines in import segments in torch/_d*/ (#129767)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129767
Approved by: https://github.com/anijain2305
2024-07-31 21:18:11 +00:00
Animesh Jain
2a4ca5ccc4 [dynamo] Pop the exception stack on handling the StopIteration natively (#131801)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131801
Approved by: https://github.com/yanboliang
ghstack dependencies: #131795
2024-07-25 23:33:19 +00:00
Edward Z. Yang
0c6f1ca064 Introduce torch._dynamo.config.enable_compiler_collectives for syncing compilation across ranks (#130935)
This PR implements an opt-in configuration option for synchronizing compilation across all ranks at the end of Dynamo tracing (and potentially other places in the future). There are two pieces to this PR:

1. Implementing infrastructure for compiler collectives (DistributedState/LocalState, the actual collective)
2. Using this infrastructure to synchronize automatic dynamic choices across all ranks

The infrastructure in part one can be used for other purposes; just add more (serializable) fields to LocalState.

Here is how automatic dynamic synchronization works:

1. Preflight in "torch/_dynamo/variables/builder.py": On the first Dynamo trace run, we trace without automatic dynamic at all; we assume all Tensor inputs that are not otherwise marked are static. This run is purely to collect all Tensor input sizes in the program.
2. "torch/_dynamo/output_graph.py": At the end of the first Dynamo trace run, we perform a compiler collective to distribute all Tensor input sizes to all ranks. Then we restart Dynamo.
3. Apply the updates in "torch/_dynamo/variables/builder.py": Now that we have the sizes from every rank, we update frame state with the observed sizes for all ranks, in rank order. Under the assumption that frame state is consistent on all ranks, this series of updates will preserve consistency.

For future work, it would be safer if we force a consistent hint on all ranks; this is more involved as we have to interpose in fakification.
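
A minimal sketch of opting in (assuming the flag is named after the PR title and lives on the Dynamo config object):

```python
import torch._dynamo

# Every rank must enable this so automatic dynamic decisions are
# synchronized at the end of Dynamo tracing.
torch._dynamo.config.enable_compiler_collectives = True
```
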

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130935
Approved by: https://github.com/jansel
2024-07-24 11:24:11 +00:00
Animesh Jain
6850e42266 [dynamo][exception] Remove older specialization for StopIteration (#131512)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131512
Approved by: https://github.com/yanboliang
ghstack dependencies: #131347, #131367, #131378, #131389, #131405, #131480
2024-07-24 00:06:53 +00:00
Shangdi Yu
cfb9ccab6c [export] Filter errors by exception type, add case name (#131327)
Summary:
- Log export errors to Scuba and mark them with "classified" and "unclassified"
- Classify errors by exception type (ALLOW_LIST) and a `case_name` attribute
- Add `case_name` for some exceptions.

Test Plan:
Running the code below logs a classified error to the `torch_export_usage` table in Scuba.

```
import torch

from torch._export.db.case import SupportLevel

class TorchSymMin(torch.nn.Module):
    """
    torch.sym_min operator is not supported in export.
    """

    def forward(self, x):
        return x.sum() + torch.sym_min(x.size(0), 100)

example_args = (torch.randn(3, 2),)
tags = {"torch.operator"}
support_level = SupportLevel.NOT_SUPPORTED_YET
model = TorchSymMin()

torch.export.export(model, example_args)
```

Differential Revision: D59981459

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131327
Approved by: https://github.com/zhxchen17
2024-07-23 18:01:13 +00:00
Aaron Orenstein
4c3348932c typing: convert_frame (#130670)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130670
Approved by: https://github.com/Skylion007
ghstack dependencies: #130669
2024-07-16 14:31:35 +00:00
Aaron Orenstein
dcfa7702c3 Flip default value for mypy disallow_untyped_defs [1/11] (#127838)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127838
Approved by: https://github.com/oulgen
2024-06-08 18:16:33 +00:00
Animesh Jain
4aa7a1efcf [dynamo] Initial exception handling support (#126923)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126923
Approved by: https://github.com/williamwen42, https://github.com/jansel
2024-06-01 13:00:32 +00:00
ydwu4
461ffaaaf3 [dynamo] support torchbind object input (#124978)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124978
Approved by: https://github.com/jansel
2024-05-07 03:02:00 +00:00
Xuehai Pan
7b11fb4695 [Dynamo] fix opcode YIELD_FROM and SEND (#123912)
This PR is split from #120300.

- #120300

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123912
Approved by: https://github.com/anijain2305
2024-04-12 21:57:47 +00:00
Jason Ansel
781e8d2201 [dynamo] Support __next__ on UserDefinedObjectVariable (#122565)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122565
Approved by: https://github.com/yanboliang
2024-03-31 19:00:03 +00:00
Peter Bell
5790096059 [dynamo] Remove uses of raise unimplemented (#122136)
`unimplemented` is a function that raises an error, so
`raise unimplemented(...)` never reaches the `raise`.
Another related issue is that `raise unimplemented(...) from e`
doesn't attach the exception cause correctly. I fix this by adding
a `from_exc` argument to `unimplemented`.
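
A toy illustration of the bug and the fix (the `Unsupported` class and exact signature here are stand-ins, not Dynamo's real code):

```python
class Unsupported(Exception):
    pass

def unimplemented(msg, *, from_exc=None):
    # The function raises on its own, so a surrounding `raise` is dead code.
    if from_exc is not None:
        raise Unsupported(msg) from from_exc
    raise Unsupported(msg)

# Before this PR (anti-pattern): the outer `raise` never executes, and
# `from e` never attaches the cause.
#     raise unimplemented("sparse tensors not supported") from e
# After: pass the cause explicitly.
#     unimplemented("sparse tensors not supported", from_exc=e)
```
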

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122136
Approved by: https://github.com/lezcano
2024-03-22 19:29:58 +00:00
James Wu
df1cdaedeb Log restart reasons and extra compile time in CompilationMetrics (#121827)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121827
Approved by: https://github.com/ezyang, https://github.com/yanboliang
2024-03-18 18:59:25 +00:00
Angela Yi
b2a3d6ba0d [exportdb] Remove torch/fb/exportdb (#117866)
Summary: This has already been moved to torch/_export/db

Test Plan: no tests? I think?

Reviewed By: avikchaudhuri

Differential Revision: D52875607

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117866
Approved by: https://github.com/ydwu4
2024-01-22 17:41:33 +00:00
angelayi
249a226113 [export] Error on not pytree-flattened nodes (#117598)
Attempts to make the input/output mismatch error better by first checking whether the inputs/outputs can be pytree-flattened into supported types (tensors, symints, ...). So if a user passes in a data structure which does not have a pytree flatten registration, this will error with a message like "It looks like one of the inputs with type CustomType is not supported or pytree flatten-able... please register a pytree flatten/unflatten function using the pytree.register_pytree_node API".

The check inside produce_matching should now only error if something unexpected happens (dynamo accidentally adds an input or removes an output), and should be considered an internal error.
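
A hedged sketch of the registration the error message points at, assuming the `torch.utils._pytree.register_pytree_node` entry point; `Pair` is a hypothetical container type:

```python
from dataclasses import dataclass

import torch
import torch.utils._pytree as pytree

@dataclass
class Pair:  # hypothetical user type that would otherwise fail to flatten
    first: torch.Tensor
    second: torch.Tensor

pytree.register_pytree_node(
    Pair,
    lambda p: ([p.first, p.second], None),   # flatten -> (children, context)
    lambda children, ctx: Pair(*children),   # unflatten(children, context)
)
```
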

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117598
Approved by: https://github.com/avikchaudhuri, https://github.com/BowenBao
2024-01-19 17:13:39 +00:00
PyTorch MergeBot
646229218f Revert "[export] Error on not pytree-flattened nodes (#117598)"
This reverts commit 560213de2d.

Reverted https://github.com/pytorch/pytorch/pull/117598 on behalf of https://github.com/PaliC due to breaking executorch tests internally ([comment](https://github.com/pytorch/pytorch/pull/117598#issuecomment-1898926720))
2024-01-18 17:37:59 +00:00
angelayi
560213de2d [export] Error on not pytree-flattened nodes (#117598)
Attempts to make the input/output mismatch error better by first checking whether the inputs/outputs can be pytree-flattened into supported types (tensors, symints, ...). So if a user passes in a data structure which does not have a pytree flatten registration, this will error with a message like "It looks like one of the inputs with type CustomType is not supported or pytree flatten-able... please register a pytree flatten/unflatten function using the pytree.register_pytree_node API".

The check inside produce_matching should now only error if something unexpected happens (dynamo accidentally adds an input or removes an output), and should be considered an internal error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117598
Approved by: https://github.com/avikchaudhuri, https://github.com/BowenBao
2024-01-18 03:06:42 +00:00
PyTorch MergeBot
06dab05405 Revert "[export] Error on not pytree-flattened nodes (#117598)"
This reverts commit 35e8478305.

Reverted https://github.com/pytorch/pytorch/pull/117598 on behalf of https://github.com/huydhn due to Sorry for reverting you change but it is failing ONNX test in trunk 35e8478305, probably a landrace as the PR signal looks fine ([comment](https://github.com/pytorch/pytorch/pull/117598#issuecomment-1896389009))
2024-01-17 18:29:04 +00:00
angelayi
35e8478305 [export] Error on not pytree-flattened nodes (#117598)
Attempts to make the input/output mismatch error better by first checking whether the inputs/outputs can be pytree-flattened into supported types (tensors, symints, ...). So if a user passes in a data structure which does not have a pytree flatten registration, this will error with a message like "It looks like one of the inputs with type CustomType is not supported or pytree flatten-able... please register a pytree flatten/unflatten function using the pytree.register_pytree_node API".

The check inside produce_matching should now only error if something unexpected happens (dynamo accidentally adds an input or removes an output), and should be considered an internal error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117598
Approved by: https://github.com/avikchaudhuri
2024-01-17 16:33:57 +00:00
Yanbo Liang
b4d6443bcf [Dynamo] Log innermost user frame filename & lineno for better error aggregation (#115899)
CompilationMetrics example:
```
frame_key='1',
co_name='fn',
co_filename='/data/users/ybliang/debug/debug1.py',
co_firstlineno=58,
cache_size=0,
accumulated_cache_size=0,
guard_count=None,
graph_op_count=None,
graph_node_count=None,
graph_input_count=None,
entire_frame_compile_time_s=None,
backend_compile_time_s=None,
fail_type="<class 'torch._dynamo.exc.Unsupported'>",
fail_reason='custome dict init with args/kwargs unimplemented',
fail_user_frame_filename='/data/users/ybliang/debug/debug1.py',
fail_user_frame_lineno=61
```
where:
* ```fail_type``` and ```fail_reason``` are exceptions inside of Dynamo.
* ```fail_user_frame_filename``` and ```fail_user_frame_lineno``` are where the original user code triggered the exception.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115899
Approved by: https://github.com/davidberard98, https://github.com/ydwu4
2023-12-15 08:24:55 +00:00
Jez Ng
c1fa708b03 [dynamo] Enable typechecking for utils.py (#112971)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112971
Approved by: https://github.com/lezcano, https://github.com/jansel
ghstack dependencies: #112130, #112970
2023-11-08 21:17:45 +00:00
Jason Ansel
7818a2887a [dynamo] Replace InstructionTranslator.checkpoint with speculate/restart (#112902)
In my work on making guards installed eagerly (look up the stack), I found that our checkpoint/restore mechanism is very broken. There is a lot of state (especially in shape_env) which we don't checkpoint and restore properly. We also have lots of mutable state on variable trackers that is not checkpointed/restored. (See other PRs in this stack for some spot fixes.)

Since we wanted to get rid of this anyway for making VariableTracker mutable, I figured I would just switch to restarting analysis.

For other usages of copy_graphstate/restore_graphstate:
1) Many usages were pointless and not needed; these are removed in PRs below this one.
2) Some other usage (similar to this one) is removed in PRs above this.
3) The tricky one I am not handling is higher_order_ops, which uses checkpoint/restore a lot. There might be some cases there where this speculate/restart trick won't work.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112902
Approved by: https://github.com/voznesenskym
2023-11-05 17:09:29 +00:00
Jez Ng
632ac01bef [dynamo] Enable typechecking for exc.py (#112127)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112127
Approved by: https://github.com/Skylion007
ghstack dependencies: #111894, #111992, #112031
2023-10-27 06:18:58 +00:00
PyTorch MergeBot
33403336fa Revert "[user errors] compulsory case names, allow multiple (#110878)"
This reverts commit 2ae71c4598.

Reverted https://github.com/pytorch/pytorch/pull/110878 on behalf of https://github.com/kit1980 due to export/test_export.py::TestExport::test_multiple_definitions_same_name_dim - TypeError: UserError.__init__() missing 1 required positional argument: 'case_names' ([comment](https://github.com/pytorch/pytorch/pull/110878#issuecomment-1754360051))
2023-10-10 04:44:40 +00:00
Avik Chaudhuri
2ae71c4598 [user errors] compulsory case names, allow multiple (#110878)
We want to get to a point where most UserErrors link to exportdb examples. This PR makes passing case names non-optional to make this intent clearer and encourage developers who raise UserErrors to make or point to examples that make fixing such errors more obvious for users.

In addition, sometimes there are multiple examples that are relevant to an error. Thus this PR also enables passing multiple case names.
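
A hedged sketch of the new call shape (the exact signature and the case name are assumptions inferred from this PR and the error message in the revert above):

```python
from torch._dynamo.exc import UserError, UserErrorType

# Assumption: case_names is now a required argument and accepts a list of
# exportdb case names rather than a single optional one.
raise UserError(
    UserErrorType.ANTI_PATTERN,
    "calling this API inside a compiled region is not supported",
    case_names=["some_exportdb_case"],  # hypothetical case name
)
```
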

Retry of #110733 which was reverted due to a landrace.

Differential Revision: [D50087148](https://our.internmc.facebook.com/intern/diff/D50087148/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110878
Approved by: https://github.com/gmagogsfm, https://github.com/tugsbayasgalan
2023-10-10 03:48:07 +00:00
Huy Do
18f0d3af72 Revert "[user errors] compulsory case names, allow multiple (#110733)" (#110783)
This reverts commit 983f6f36db.  I have no idea how to revert https://github.com/pytorch/pytorch/pull/110733 with the bot.  So reverting it manually for now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110783
Approved by: https://github.com/ZainRizvi, https://github.com/kit1980
2023-10-07 07:32:39 +00:00
Avik Chaudhuri
983f6f36db [user errors] compulsory case names, allow multiple (#110733)
We want to get to a point where most `UserError`s link to `exportdb` examples. This PR makes passing case names non-optional to make this intent clearer and encourage developers who raise `UserError`s to make or point to examples that make fixing such errors more obvious for users.

In addition, sometimes there are multiple examples that are relevant to an error. Thus this PR also enables passing multiple case names.

Differential Revision: [D50020465](https://our.internmc.facebook.com/intern/diff/D50020465/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110733
Approved by: https://github.com/zhxchen17
2023-10-07 01:25:12 +00:00
Avik Chaudhuri
416eca9736 export db links for user errors (#110555)
Ideally all `_dynamo.exc.UserError`s should have "case names", i.e., link to examples in `exportdb`.

This PR adds case names to several instances of `_dynamo.exc.UserError`. In particular, looking at coverage based on `UserErrorType`:
* `DYNAMIC_CONTROL_FLOW`, `ANTI_PATTERN`, and `STANDARD_LIBRARY` are fully covered.
* `CONSTRAINT_VIOLATION` and `DYNAMIC_DIM` have no coverage. We don't seem to have any dedicated examples of specifying dynamic shapes in `exportdb` (although they are used in some other examples without explanation, to avoid some specialization that would make such examples moot).
* `INVALID_INPUT` is only partly covered. Frankly this is tedious to cover via examples.

Differential Revision: [D49928518](https://our.internmc.facebook.com/intern/diff/D49928518/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110555
Approved by: https://github.com/angelayi, https://github.com/ydwu4
2023-10-05 05:03:04 +00:00
Edward Z. Yang
8791e8697a Print full stack trace on suppressed error (#110106)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110106
Approved by: https://github.com/zou3519, https://github.com/voznesenskym
2023-09-27 16:09:06 +00:00
PyTorch MergeBot
8caaa4f4cd Revert "Re-land: Break graph on manual_seed. (#108647)"
This reverts commit c887309437.

Reverted https://github.com/pytorch/pytorch/pull/108647 on behalf of https://github.com/huydhn due to Ouch, we are hit again by another internal import error from https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L205-L206 ([comment](https://github.com/pytorch/pytorch/pull/108647#issuecomment-1712230103))
2023-09-08 21:18:00 +00:00
Yukio Siraichi
c887309437 Re-land: Break graph on manual_seed. (#108647)
Trying to re-land #107594.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108647
Approved by: https://github.com/eellison
2023-09-07 12:52:38 +00:00
ydwu4
49e964cad6 Automatically turn on dynamo in cond (#108028)
A replacement of https://github.com/pytorch/pytorch/pull/107932.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108028
Approved by: https://github.com/zou3519
ghstack dependencies: #108025, #108026, #108027
2023-08-28 10:16:41 +00:00
ydwu4
6f8eecfb10 Add UncapturedHigherOrderOpError to always raise exceptions for cond. (#108027)
We want cond to always throw errors regardless of the user's torch.compile mode.

The current implementation is to
1. catch the UserError.GRAPH_BREAK_IN_CONTROL_FLOW and, once we see it, directly raise: once in [break_graph_if_unsupported](bad3f2db40/torch/_dynamo/symbolic_convert.py (L1250)), which catches and raises for call_function (the entry point of higher-order operators) and a few others.
2. The raised exception is caught and raised again in [step](bad3f2db40/torch/_dynamo/symbolic_convert.py (L691)), where all instructions' exceptions are handled.
3. At the top level, we treat it like a hard error and do not suppress it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108027
Approved by: https://github.com/zou3519
ghstack dependencies: #108025, #108026
2023-08-28 07:23:03 +00:00
Animesh Jain
7cb2a6bfab [dynamo][fallback] Fallback to eager when backend fails with fake tensor exceptions (#107179)
Example (I think we should fix this test case for real, but I'm using it to test the UX around fallbacks):

~~~
@torch.compile(backend="aot_eager")
def fn(x):
    return torch.sum(x, dim=1).tolist()

print(fn(torch.rand(4, 4).to(dtype=torch.int64)))
~~~

Running the script as is

~~~
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING] Backend compiler failed with a fake tensor exception at
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING]   File "/data/users/anijain/pytorch/examples/spl.py", line 5, in fn
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING]     return torch.sum(x, dim=1).tolist()
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING] Falling back to eager for this frame. Please use TORCH_LOGS=graph_breaks to see the full stack trace.
[0, 0, 0, 0]
~~~

Running the script with TORCH_LOGS="graph_breaks"

~~~
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] WON'T CONVERT fn /data/users/anijain/pytorch/examples/spl.py line 3
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] ========== TorchDynamo Stack Trace ==========
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] Traceback (most recent call last):
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/output_graph.py", line 995, in call_user_compiler
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     compiled_fn = compiler_fn(gm, self.example_inputs())
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     compiled_gm = compiler_fn(gm, example_inputs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/__init__.py", line 1586, in __call__
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return self.compiler_fn(model_, inputs_, **self.kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/backends/common.py", line 55, in compiler_fn
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     cg = aot_module_simplified(gm, example_inputs, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 3795, in aot_module_simplified
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     compiled_fn = create_aot_dispatcher_function(
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/utils.py", line 194, in time_wrapper
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     r = func(*args, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 3283, in create_aot_dispatcher_function
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     fw_metadata = run_functionalized_fw_and_collect_metadata(
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 757, in inner
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     flat_f_outs = f(*flat_f_args)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 3400, in functional_call
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     out = Interpreter(mod).run(*args[params_len:], **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/fx/interpreter.py", line 138, in run
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     self.env[node] = self.run_node(node)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/fx/interpreter.py", line 195, in run_node
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return getattr(self, n.op)(n.target, args, kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/fx/interpreter.py", line 289, in call_method
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return getattr(self_obj, target)(*args_tail, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/utils/_stats.py", line 20, in wrapper
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return fn(*args, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 1233, in __torch_dispatch__
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return self.dispatch(func, types, args, kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 1470, in dispatch
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     op_impl_out = op_impl(self, func, *args, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 501, in local_scalar_dense
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     raise DataDependentOutputException(func)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] While executing %item : [num_users=1] = call_method[target=item](args = (%getitem,), kwargs = {})
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] Original traceback:
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/examples/spl.py", line 5, in fn
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return torch.sum(x, dim=1).tolist()
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]
~~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107179
Approved by: https://github.com/ezyang
2023-08-16 14:57:42 +00:00
Edward Z. Yang
76163a56c0 Refactor stack handling to always use TracingContext to populate real stack on exception (#106277)
The basic gist of the PR is simple, but it's accompanied by some careful modifications and unit tests to make sure I got it right. Check the inline comments for more details.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106277
Approved by: https://github.com/albanD, https://github.com/voznesenskym
2023-08-02 00:09:16 +00:00
Edward Z. Yang
d3b508d068 Fix typo which suppresses user exception reporting (#106289)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106289
Approved by: https://github.com/albanD
2023-07-31 14:35:33 +00:00
gmagogsfm
f5def50461 Suppress eager fallback suggestions when exporting (#105767)
Previously during torch.export(), when an exception was raised during tracing, Dynamo displayed this error:

“You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True”

This suggestion is not viable in torch.export(), so this diff suppresses it during export.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105767
Approved by: https://github.com/anijain2305
2023-07-22 19:17:08 +00:00
Yanbo Liang
4c73016ff2 [Dynamo] Enable torch._dynamo.config.suppress_errors by default (#105307)
Summary:
We are working toward full model compilation: when a compilation error happens, we just fall back to eager mode rather than error out.
But at the same time, we should fix these issues if they are bugs. We will:
* 1/ log warnings in OSS;
* 2/ log warnings and write them into Scuba in fbcode;

to prevent us from ignoring these issues.

Test Plan: Manual test

Differential Revision: D47506314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105307
Approved by: https://github.com/jansel
2023-07-21 19:17:46 +00:00
Tugsbayasgalan Manlaibaatar
936cd4f2f5 Migrate exportdb to torch.export (#104260)
Reapply of (https://github.com/pytorch/pytorch/pull/103861). Things that needed to be fixed:

- Fix a bug with returning dict output type
- Make pass_base work with map implementation
- Fix subtle bug with dynamo not propagating "val" in node.meta
- Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104260
Approved by: https://github.com/angelayi
2023-06-27 17:49:18 +00:00
PyTorch MergeBot
518abe8b7e Revert "Migrate exportdb to torch.export from torchdynamo.export (#103861)"
This reverts commit fb6173a4ac.

Reverted https://github.com/pytorch/pytorch/pull/103861 on behalf of https://github.com/huydhn due to It looks like this change is failing in trunk due to a landrace fb6173a4ac ([comment](https://github.com/pytorch/pytorch/pull/103861#issuecomment-1601960600))
2023-06-22 03:24:01 +00:00
Tugsbayasgalan Manlaibaatar
fb6173a4ac Migrate exportdb to torch.export from torchdynamo.export (#103861)
Things that needed to be fixed:
1. Fix a bug with returning dict output type
2. Make pass_base work with map implementation
3. Fix subtle bug with dynamo not propagating "val" in node.meta
4. Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103861
Approved by: https://github.com/zhxchen17, https://github.com/ydwu4
2023-06-22 02:53:41 +00:00
Tugsbayasgalan Manlaibaatar
d4b85f3031 Support params/buffers inside cond and map (#102310)
With #102022, params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.

Differential Revision: [D46746202](https://our.internmc.facebook.com/intern/diff/D46746202)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri, https://github.com/zou3519
2023-06-20 05:33:10 +00:00
PyTorch MergeBot
2087d32811 Revert "Support params/buffers inside cond and map (#102310)"
This reverts commit 766f236bad.

Reverted https://github.com/pytorch/pytorch/pull/102310 on behalf of https://github.com/huydhn due to The test is failing in trunk 766f236bad ([comment](https://github.com/pytorch/pytorch/pull/102310#issuecomment-1592159710))
2023-06-15 00:29:20 +00:00
Tugsbayasgalan Manlaibaatar
766f236bad Support params/buffers inside cond and map (#102310)
With #102022, params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri, https://github.com/zou3519
2023-06-14 22:32:33 +00:00
Michael Lazos
40dbbcab6c Update error message with torch logging instructions (#102892)
https://github.com/pytorch/pytorch/issues/100109

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102892
Approved by: https://github.com/yanboliang
2023-06-09 00:07:08 +00:00
Tugsbayasgalan Manlaibaatar
cea899cd57 Add early validation logic to dynamic_dim (#102982)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102982
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-08 20:23:49 +00:00
zhxchen17
8d598f2f25 [exportdb] Change case ids to case names for UserErrors. (#100600)
Associate UserErrors with the unique case name instead of the
case ids, because in practice they work similarly but names are more
meaningful to use and remember.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100600
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-05-04 06:14:50 +00:00
vfdev
0692bdd95f Improved message to suppress errors in _dynamo/exc.py (#97345)
If a user simply adds this to their code:
```python
import torch

torch._dynamo.config.suppress_errors = True
```
they will get:
```
AttributeError: module 'torch' has no attribute '_dynamo'
```
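
For reference, a form that works is to import the `_dynamo` submodule explicitly:

```python
import torch
import torch._dynamo  # `import torch` alone does not expose torch._dynamo

torch._dynamo.config.suppress_errors = True
```
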

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97345
Approved by: https://github.com/zou3519, https://github.com/kit1980
2023-04-28 01:12:08 +00:00
zhxchen17
9e012fd401 [export] Associate one cond() error case with exportdb. (#99844)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99844
Approved by: https://github.com/tugsbayasgalan, https://github.com/avikchaudhuri
2023-04-25 21:33:24 +00:00
Guang Yang
aa4ed332c3 Improve torch.cond useability: Return UserError with actionable error messages (#98909)
It's part of the effort to improve the PT2 Export UX. This PR improves the usability of `torch.cond()` by separating user errors from Dynamo-internal errors. By definition, a user error means the usage of `torch.cond()` violates the restrictions of this API and therefore requires the user to take action and fix the error.

In this notebook N3363227 we discovered a bunch of limitations of using `torch.cond(pred, true_fn, false_fn, operands)`. In summary, the limitations can be categorized as:
 - predicate restriction (`pred`)
 - operands restriction (`operands`)
 - branch restriction (`true_fn` & `false_fn`)

The error message will be more accurate about where the (user) error is from and more actionable for users to fix it.

For example, `operands` must be a list of tensors, and the signatures of `true_fn` and `false_fn` must match the `operands`.
If the operands contain non-tensor types, the user will see an error message like:
```
torch._dynamo.exc.UserError: Expected a list of tensors but got ["<class 'torch.Tensor'>", "<class 'float'>"]

from user code:
   File "~/pytorch/test/dynamo/test_export.py", line 2504, in f_non_tensor_operands
    return cond(True, lambda x, a: x.sin(), lambda x, a: x.cos(), [x, a])
```
If the signature of the branch function doesn't match the `operands`, the user will see an error message like:
```
torch._dynamo.exc.UserError: too many positional arguments.
  func = 'false_fn' ~/pytorch/test/dynamo/test_export.py:2514, args = [<class 'torch.Tensor'>, <class 'torch.Tensor'>], kwargs = {}
```
Or if the tensors returned from the user-defined branches have different metadata (e.g. shapes, dtypes), the user will see an error message like:
```
TypeError: Expected each tensor to have same metadata but got:
  cond_true_0 returns TensorMetadata(shape=torch.Size([2, 1]), dtype=torch.int64, requires_grad=False, stride=(1, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})
  cond_false_0 returns TensorMetadata(shape=torch.Size([1]), dtype=torch.float32, requires_grad=False, stride=(1,), memory_format=torch.contiguous_format, is_quantized=False, qparams={})
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98909
Approved by: https://github.com/jansel
2023-04-20 17:20:41 +00:00
Jason Ansel
47c685def3 [dynamo] Support DELETE_ATTR (#98698)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98698
Approved by: https://github.com/yanboliang
2023-04-15 20:31:40 +00:00
Angela Yi
1d077f28ed [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported once torch._assert support is implemented; then we can convert the constrain_range calls to torch._asserts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-13 21:20:10 +00:00
PyTorch MergeBot
ab761605ae Revert "[export] Constraints API (#98433)"
This reverts commit 1510eb4072.

Reverted https://github.com/pytorch/pytorch/pull/98433 on behalf of https://github.com/izaitsevfb due to Breaks internal tests, asked by author to revert
2023-04-12 23:37:19 +00:00
PyTorch MergeBot
629377ea8b Revert "Replace _dynamo.config with an object instead of module (#96455)"
This reverts commit 420104a886.

Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Animesh Jain
951df11af8 [dynamo] Raise exception on incorrect usage of disallow_in_graph (#98892)
Summary -
`disallow_in_graph` is mostly useful for backends. Suppose your backend does not support `torch.abs()`; you can then use `disallow_in_graph` to force a graph break there (see the sketch after this list).

The assumption in the above statement is that `disallow_in_graph` is called on an `allowed` callable. `allowed` in Dynamo language refers to a callable that is put as-is in the Dynamo graph.

Therefore, if one uses `disallow_in_graph` on some non-torch, non-allowed function, we want to raise an exception to tell the user that they probably want something else:
* If they want to disable Dynamo, they should use torch._dynamo.disable.
* If they want to stop inlining, they should use torch._dynamo.graph_break. However, this is not a decorator, so we would need to provide another API. But the question is: who would want to do this?
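
A minimal sketch of the intended usage on an allowed callable (so the new exception fires only on misuse):

```python
import torch
import torch._dynamo

# torch.abs is an `allowed` callable; disallowing it makes Dynamo graph
# break at each call site instead of putting it in the graph.
torch._dynamo.disallow_in_graph(torch.abs)

@torch.compile
def fn(x):
    return torch.abs(x) + 1  # graph breaks around torch.abs

fn(torch.randn(4))
```
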

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98892
Approved by: https://github.com/jansel
2023-04-12 07:50:56 +00:00
Angela Yi
1510eb4072 [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported once torch._assert support is implemented; then we can convert the constrain_range calls to torch._asserts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-12 01:32:44 +00:00
Han Qi
420104a886 Replace _dynamo.config with an object instead of module (#96455)
Summary:
Replace _dynamo.config with an object instead of a module.

Current usage patterns of setting and reading fields on config will work unchanged.

Only changes needed going forward:
1. `import torch._dynamo.config` will not work. However, just doing
   `import torch._dynamo` is sufficient to access dynamo config as
   `torch._dynamo.config`.

2. Files inside the _dynamo folder need to access config via
   `from torch._dynamo.config_util import config` instead of
   `from torch._dynamo import config`, because _dynamo/__init__.py
   imports some of those files and a direct import would be circular.
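
A short sketch of the two access patterns this PR describes:

```python
# Outside torch/_dynamo: existing usage keeps working.
import torch._dynamo
torch._dynamo.config.suppress_errors = True

# Inside torch/_dynamo modules: import the config object directly to avoid
# a circular import through _dynamo/__init__.py (per this PR's layout).
from torch._dynamo.config_util import config
```
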

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Tugsbayasgalan Manlaibaatar
12f340dcd9 Add round as UserError (#98376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98376
Approved by: https://github.com/anijain2305
2023-04-06 19:28:00 +00:00
Tugsbayasgalan Manlaibaatar
37dc47a1ac Make calling type on user defined class UserError (#98366)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98366
Approved by: https://github.com/anijain2305
2023-04-06 05:20:50 +00:00
Tugsbayasgalan Manlaibaatar
7f9533e224 [Dynamo] Add UserError type (#97705)
To kick off the Dynamo error message improvement effort, we discussed adding a new user error type that covers cases where the user used something TorchDynamo doesn't support and there are clear actions they can take.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97705
Approved by: https://github.com/anijain2305, https://github.com/yanboliang
2023-04-01 01:18:00 +00:00
Michael Lazos
e626be79a4 Add config setting to error on recompile (#97829)
Adds a config setting `error_on_recompile`: when set, dynamo will raise an exception after compiling a function for the second time.

This was requested to help debugging in pyper.
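
A one-line sketch of enabling it:

```python
import torch._dynamo

torch._dynamo.config.error_on_recompile = True  # recompiling a frame now raises
```
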

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97829
Approved by: https://github.com/bertmaher
2023-03-29 19:00:43 +00:00
Jason Ansel
de2230baa7 [dynamo] Improve error message for missing backend (#97255)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97255
Approved by: https://github.com/msaroufim
2023-03-21 19:36:04 +00:00
Michael Lazos
a1c46e5f8f component-level configurable logging for dynamo, inductor, aot (#94858)
Summary:

Adds NNC-like logging that is configured through the env var `TORCH_LOGS`
Examples:
`TORCH_LOGS="dynamo,guards" python script.py` - prints dynamo logs at level INFO with guards of all functions that are compiled

`TORCH_LOGS="+dynamo,guards,graph" python script.py` - prints dynamo logs at level DEBUG with guards and graphs (in tabular) format of all graphs that are compiled

[More examples with full output](https://gist.github.com/mlazos/b17f474457308ce15e88c91721ac1cce)

Implementation:
The implementation parses the log settings from the environment, finds any components (aot, dynamo, inductor) or other loggable objects (guards, graph, etc.) and generates a log_state object. This object contains all of the enabled artifacts, and a qualified log name -> level mapping. _init_logs then adds handlers to the highest level logs (the registered logs), and sets any artifact loggers to level DEBUG if the artifact is enabled.

Note: set_logs is an alternative for manipulating the log_state, but if the environment contains TORCH_LOGS, the environment settings will be prioritized.

Adding a new log:
To add a new log, a dev should add their log name to torch._logging._registrations (there are examples there already).

Adding a new artifact:
To add a new artifact, a dev should add their artifact name to torch._logging._registrations as well.
Additionally, wherever the artifact is logged, `torch._logging.getArtifactLogger(__name__, <artifact_name>)` should be used instead of the standard logging implementation.
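
A minimal sketch for an existing artifact name (assuming "graph" is registered, as the examples above suggest):

```python
from torch._logging import getArtifactLogger

# Logs only when the "graph" artifact is enabled, e.g. TORCH_LOGS="graph".
graph_log = getArtifactLogger(__name__, "graph")
graph_log.debug("compiled graph for %s", "fn")
```
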

[design doc](https://docs.google.com/document/d/1ZRfTWKa8eaPq1AxaiHrq4ASTPouzzlPiuquSBEJYwS8/edit#)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94858
Approved by: https://github.com/ezyang
2023-03-18 04:17:31 +00:00
BowenBao
60a68477a6 Bump black version to 23.1.0 (#96578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96578
Approved by: https://github.com/ezyang
2023-03-15 06:27:59 +00:00
Michael Lazos
203890e1e0 Properly show buck target to run (#96089)
Summary: Makes the debug dir location configurable with TORCH_COMPILE_DEBUG_DIR env var

Test Plan: TORCH_COMPILE_DEBUG_DIR="." buck2 run mode/dev-nosan //caffe2/test/inductor:minifier_smoke

Reviewed By: bertmaher

Differential Revision: D43639955

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96089
Approved by: https://github.com/bertmaher
2023-03-07 22:52:27 +00:00
Jason Ansel
95d17dc93d [inductor] Reland #95567 part 1 (#96023)
This is the non-problematic part of #95567.  The errors were coming from
IR printing changes which will be next in the stack.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96023
Approved by: https://github.com/ngimel, https://github.com/mlazos
2023-03-06 22:57:22 +00:00
Jason Ansel
43dd043ea7 Revert "[inductor] Improve error messages (#95567)" (#96014)
This reverts commit 62b775583f.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96014
Approved by: https://github.com/Chillee
2023-03-04 04:03:31 +00:00
Jason Ansel
62b775583f [inductor] Improve error messages (#95567)
Example error message before/after (710 to 131 lines):
https://gist.github.com/jansel/6fecad057738089fa95bf08c3de9fc8a

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95567
Approved by: https://github.com/mlazos
2023-03-02 02:20:55 +00:00
Jason Ansel
4d6a4401f8 Raise warning if torch.compile options change without reset (#94680)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94680
Approved by: https://github.com/wconstab, https://github.com/malfet
2023-02-13 20:21:04 +00:00
Xuehai Pan
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases where the change would alter semantics are kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
Edward Z. Yang
ca9ebf9e2b Delete dynamo_import and inductor_import (#93851)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93851
Approved by: https://github.com/albanD, https://github.com/jansel
2023-02-02 01:51:29 +00:00
Michael Lazos
cac217c80a Fix key error formatting and move exc code to exc.py (#92593)
Fixes https://github.com/pytorch/torchdynamo/issues/1953 and moves exception formatting code from convert_frame.py to exc.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92593
Approved by: https://github.com/ezyang
2023-01-19 02:54:00 +00:00
Horace He
12dd877395 Fix all references to torchdynamo from the merge (#87731)
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @jansel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87731
Approved by: https://github.com/yanboliang, https://github.com/ezyang, https://github.com/anijain2305, https://github.com/jansel
2022-10-31 06:51:07 +00:00
Michael Lazos
9691ba2dbd Remove excess exception logging for minifier, cleanup backend failure exception format (#87537)
Fixes https://github.com/pytorch/torchdynamo/issues/1376

Ensures exceptions are printed only in one place, once.

Implements some of the ideas from https://github.com/pytorch/torchdynamo/issues/1754:
- Attaches a field to the exception which indicates that it's minified; a usage message is printed if this field is present.

cc @jansel @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @lezcano @fdrocha
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87537
Approved by: https://github.com/anijain2305
2022-10-28 21:33:55 +00:00
Edward Z. Yang
96691865b9 [dynamo] Unify raise_on_* config to suppress_errors and raise by default (#87440)
I noticed that a lot of bugs are being suppressed by torchdynamo's default
error suppression, and worse yet, there's no way to unsuppress them.  After
discussion with voz and soumith, we decided that we will unify error suppression
into a single option (suppress_errors) and default suppression to False.

If your model used to work and no longer works, try TORCHDYNAMO_SUPPRESS_ERRORS=1
to bring back the old suppression behavior.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87440
Approved by: https://github.com/voznesenskym, https://github.com/albanD
2022-10-21 17:03:29 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00