Commit Graph

92 Commits

ydwu4
9f6d70b2fd Support SymBool input to torch.compile (#107850)
We could have SymBool inputs for torch.compile, e.g., in the following situation:
```python
def f(pred: bool, x: torch.Tensor):
    ...

def g(x: torch.Tensor):
    # Under symbolic tracing, x.size(0) is a SymInt, so pred is a SymBool.
    pred = x.size(0) == 3
    torch.compile(f)(pred, x)

make_fx(g, tracing_mode="symbolic")(x)
```

The idea of this PR (credit to @ezyang) is to support SymBool by reusing the infrastructure we already have for SymInt, so that we don't need to replicate a lot of stuff.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107850
Approved by: https://github.com/ezyang
ghstack dependencies: #107662
2023-09-14 01:16:29 +00:00
CK Luk
366baf690b Back out "[Dynamo x FSDP] Add support for params, buffers, submodules on FSDPManagedNNModuleVariable (#107923)" (#108823)
Summary:
Original commit changeset: 33650f7cb0fb

Original Phabricator Diff: D48833682

Test Plan: See T162942232 for how we figured out that this diff caused a significant numeric difference.

Reviewed By: voznesenskym

Differential Revision: D49082219

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108823
Approved by: https://github.com/xw285cornell
2023-09-08 14:39:43 +00:00
voznesenskym
f3a8d57aea [Dynamo x FSDP] Add support for params, buffers, submodules on FSDPManagedNNModuleVariable (#107923)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107923
Approved by: https://github.com/wconstab
2023-08-29 08:54:13 +00:00
Animesh Jain
a506d0ad8f [dynamo] Store originating source in the Guard object (#107634)
Many times, I find myself wanting to know the source for a guard. This PR adds the originating source as a field on the Guard itself.
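
A hedged sketch of the shape of the change (the field layout below is illustrative, not the exact `torch._guards.Guard` definition):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Guard:
    name: str
    create_fn: Callable
    # New in this PR: the Source the guard was derived from, so debugging
    # code can ask where a guarded value originally came from.
    originating_source: Any = None
```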

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107634
Approved by: https://github.com/voznesenskym
ghstack dependencies: #107622
2023-08-22 02:16:31 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls it all into core.

In the next commits, I do a number of things in this order
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge, so I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was joint work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient), as this is a collaboration and ghstack doesn't allow for shared contributions.
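
As a usage sketch (assuming the post-merge behavior, where NumPy calls inside a compiled region are traced rather than causing a graph break):

```python
import numpy as np
import torch

@torch.compile
def numpy_fn(x):
    # np.* calls are traced through the upstreamed compatibility layer
    # and executed as torch operations.
    return np.sin(x) + np.cos(x)

print(numpy_fn(np.ones(3)))
```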

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
Edward Z. Yang
697893568d Improve error message when export encounters non-local input (#106403)
Previously, you would get an error like

```
Dynamo input and output is a strict subset of traced input/output
```

now you get

```
Cannot export model which references tensors that are neither
buffers/parameters/constants nor are direct inputs.  For each tensor, if you'd
like this tensor to be an explicit input, add it as a dummy argument
to the top-level model definition you are exporting; if you would
like its value to be embedded as an exported constant, wrap its access
in a function marked with @assume_constant_result.

G['bulbous_bouffant'], accessed at:
  File "test_export.py", line N, in f
    return bulbous_bouffant + y
```

This doesn't handle outputs; I'm going to hit that next.
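
For reference, a minimal sketch of the second remedy the new message suggests (names are taken from the example above; `assume_constant_result` lives in `torch._dynamo`):

```python
import torch
from torch._dynamo import assume_constant_result

bulbous_bouffant = torch.randn(3)  # the non-local tensor f references

@assume_constant_result
def get_bouffant():
    return bulbous_bouffant

def f(y):
    # the tensor's value is now embedded as an exported constant
    return get_bouffant() + y
```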

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106403
Approved by: https://github.com/tugsbayasgalan
2023-08-03 12:35:25 +00:00
Michael Lazos
05eea20eb9 [dynamo] Simulate torch function enablement state (#105091)
Part of https://github.com/pytorch/pytorch/issues/93723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105091
Approved by: https://github.com/voznesenskym, https://github.com/anijain2305
2023-07-13 17:42:20 +00:00
Michael Voznesensky
d27bc34f4b Simple Source traversal util (#103450)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103450
Approved by: https://github.com/ezyang
2023-06-14 20:04:20 +00:00
Michael Voznesensky
056bf951bf Strengthen partially supported invariant of base for chained sources (#103445)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103445
Approved by: https://github.com/ezyang
2023-06-13 22:44:28 +00:00
Yukio Siraichi
7550ec16a4 Add support for dictionary with torch object keys. (#103158)
Fixes: #101979

This PR adds support for dictionaries with torch objects as keys in dynamo.

The main problem was that, for example, the source built for `d[torch.float]` (`d` being a
dictionary) was `ODictGetItemSource(GlobalSource('d'), index=torch.float)`. When
`Source.name` method was called, we got `odict_getitem(G['d'], torch.float)`. Evaluating
that string raised an error, since `torch` was only available in the global dictionary `G`
as `G["torch"]`.

Instead, this PR builds the source:
`ODictGetItemSource(GlobalSource('d'), index=AttrSource(GlobalSource('torch'), 'float'))`.
The to-be-evaluated string is correctly generated as:
`odict_getitem(G['d'], G['torch'].float)`.

Here's a minimal example that reproduces the error, before this PR:

```python
import torch

d = {
    torch.float16: torch.float32,
}

@torch.compile
def f():
    return torch.randn(3, dtype=d[torch.float16])

f()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103158
Approved by: https://github.com/mlazos
2023-06-09 20:18:49 +00:00
Brian Hirsh
98ab11a2c3 separate out dynamo .requires_grad and .is_grad_enabled guards (#100570)
Fixes https://github.com/pytorch/pytorch/issues/100977

This will hopefully fix this error (from [issue](https://github.com/pytorch/pytorch/issues/99616))

This PR fixes an internal model: we were running an inductor inference graph, but `torch.is_grad_enabled()` was True, causing us to error inside of the inference graph when we encountered an out= operator.

I haven't been able to create a smaller repro; before landing this, I want to produce one, to convince myself of why we need to separate out these guards.
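
A minimal sketch of the failure mode being guarded against (illustrative, not the internal model):

```python
import torch

@torch.compile
def f(x):
    return x.add(1)

x = torch.randn(3)  # requires_grad=False

with torch.no_grad():
    f(x)  # compiles an inference graph: no grad on inputs, grad disabled

# x.requires_grad is unchanged, but torch.is_grad_enabled() now differs.
# With separate guards, this triggers a recompile instead of incorrectly
# reusing the inference graph.
f(x)
```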

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100570
Approved by: https://github.com/ezyang
2023-05-24 14:58:40 +00:00
PyTorch MergeBot
7f3fed125e Revert "separate out dynamo .requires_grad and .is_grad_enabled guards (#100570)"
This reverts commit 1fabee399d.

Reverted https://github.com/pytorch/pytorch/pull/100570 on behalf of https://github.com/PaliC due to breaking inductor tests along with #101219 ([comment](https://github.com/pytorch/pytorch/pull/100570#issuecomment-1555271267))
2023-05-19 21:29:09 +00:00
Brian Hirsh
1fabee399d separate out dynamo .requires_grad and .is_grad_enabled guards (#100570)
Fixes https://github.com/pytorch/pytorch/issues/100977

This will hopefully fix this error (from [issue](https://github.com/pytorch/pytorch/issues/99616))

This PR fixes an internal model: we were running an inductor inference graph, but `torch.is_grad_enabled()` was True, causing us to error inside of the inference graph when we encountered an out= operator.

I haven't been able to create a smaller repro; before landing this, I want to produce one, to convince myself of why we need to separate out these guards.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100570
Approved by: https://github.com/ezyang
2023-05-19 16:14:56 +00:00
Michael Voznesensky
0ac0d9d224 Pass locals to enum_repr to correctly make the guard str for enums (#99680)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99680
Approved by: https://github.com/jansel
2023-04-21 07:14:49 +00:00
Edward Z. Yang
e47e8c9d98 Guard on default device (#99551)
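
A hedged sketch of the behavior this enables (using the public `torch.set_default_device` API):

```python
import torch

@torch.compile
def f():
    return torch.zeros(2)

f()  # guards on the current default device
torch.set_default_device("meta")
f()  # default device changed, so the guard fails and f recompiles
```
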
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99551
Approved by: https://github.com/voznesenskym, https://github.com/mlazos
2023-04-20 17:02:59 +00:00
Will Constable
98907589ee Make GetItemSource(*, slice) hashable (#99379)
All Sources must be hashable, since we are using set equality to check for
duplicate sources in AOTAutograd.  We should have a more systematic way
of asserting this.  For this PR just fix the local issue.

Fixes #99145
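
A hedged sketch of the local fix (not the exact `torch._dynamo.source` code): slice objects are unhashable, so a frozen dataclass holding one still cannot go into the set AOTAutograd uses for dedup; normalizing the slice restores hashability.

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class GetItemSource:
    base: Any
    index: Any

    def __post_init__(self):
        if isinstance(self.index, slice):
            # frozen dataclass, so mutate through object.__setattr__
            object.__setattr__(
                self, "index", (self.index.start, self.index.stop, self.index.step)
            )

s = GetItemSource(base="L['x']", index=slice(0, 3))
print(hash(s))  # works: the slice was normalized to a hashable tuple
```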

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99379
Approved by: https://github.com/ezyang
2023-04-19 13:50:49 +00:00
Edward Z. Yang
039faf0dbf Add invariant that all symbolic shapes must be bound in graph (#99089)
Previously, we had a problem when partitioning forward-backward dynamic graphs: we could end up with a backward graph that mentions a symbol from an input tensor (e.g., `f32[s0 + s1]`) without that symbol being otherwise bound elsewhere. When this happens, we have no way of actually deriving the values of `s0` and `s1`. Our fix for this in https://github.com/pytorch/pytorch/pull/93059 was to just retrace the graph, so that s0 + s1 got allocated a new symbol s2 and everything was happy. However, this strategy had other problems, namely (1) we lost all information from the previous ShapeEnv, including guards, and (2) we ended up allocating a LOT of fresh new symbols in backwards.

With this change, we preserve the same ShapeEnv between forward and backwards. How do we do this? We simply require that every symbol which may be present inside tensors ALSO be a plain SymInt input to the graph. This invariant is enforced by Dynamo. Once we have done this, we can straightforwardly modify the partitioner to preserve these SymInts as saved for backwards, if they are needed in the backwards graph to preserve the invariant there as well.
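
Illustratively (a sketch of what the invariant means for graph signatures, not actual generated code):

```python
# Before: the backward graph could mention a composite symbol with no way
# to solve for its constituents:
#     def backward(grad: "f32[s0 + s1]"): ...
#
# After: Dynamo passes every symbol as a plain SymInt input as well, and
# the partitioner saves the ones that backward needs:
#     def forward(s0: "Sym(s0)", s1: "Sym(s1)", x: "f32[s0 + s1]"): ...
#     def backward(s0: "Sym(s0)", s1: "Sym(s1)", grad: "f32[s0 + s1]"): ...
```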

This apparently breaks yolov3, but since everything else is OK I'm merging this as obviously good and investigating later.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99089
Approved by: https://github.com/voznesenskym
2023-04-16 01:48:19 +00:00
Edward Z. Yang
9abae6ae32 Make all Source subclasses frozen. (#98737)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98737
Approved by: https://github.com/albanD
2023-04-10 17:51:10 +00:00
knwng
e943b120a3 Fix incorrectly getting the name of OrderedDict's index in dynamo (#96940)
Fixes #96737

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96940
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-04-05 03:53:45 +00:00
Michael Voznesensky
b1e60bfb6a Pass f_locals as a dict rather than kwargs (#98107)
Fixes https://github.com/pytorch/pytorch/issues/97688

One big problem is that instead of printing `x < y` we now print
`E["x"] < E["y"]`, and now all of the tests wobbled and I'm mad.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98107
Approved by: https://github.com/ezyang
2023-04-04 00:30:08 +00:00
Sam Gross
87f5e92916 [dynamo] Add guards for deterministic algos (#96695)
Inductor now falls back to eager mode for deterministic algos. Add guards in dynamo to check if the deterministic algos mode changes.

See #93537
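
A minimal sketch of what the guard catches (using the public `torch.use_deterministic_algorithms` API):

```python
import torch

@torch.compile
def f(x):
    return torch.cumsum(x, dim=0)

x = torch.randn(4)
f(x)
torch.use_deterministic_algorithms(True)
f(x)  # the new guard sees the mode change and triggers a recompile
torch.use_deterministic_algorithms(False)
```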

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96695
Approved by: https://github.com/ngimel, https://github.com/jansel
2023-03-31 16:28:45 +00:00
William Wen
24a5d006f2 [dynamo 3.11] Refactor create_instruction (#96499)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96499
Approved by: https://github.com/jansel, https://github.com/albanD
2023-03-30 17:05:27 +00:00
Yanbo Liang
f388bec985 [Dynamo] torch.Generator state should have a source and be reconstructed properly (#97403)
Fixes #97077 partially.

During FX graph propagation, we require that every tensor have a source:
a524123c91/torch/_dynamo/variables/builder.py (L929)
However, the output of ```torch.Generator.get_state()``` is a tensor without a source, since it's generated inside the FX graph. My change follows what we did for [Python random functions](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/variables/user_defined.py#L260): introduce a dedicated ```GeneratorStateSource```. We also have to update the reconstruction logic, since we reuse the ```TensorVariable``` reconstruction.
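
A hedged sketch of the pattern described:

```python
import torch

@torch.compile
def f(x):
    # get_state() returns a tensor created inside the traced region, so it
    # has no pre-existing source; GeneratorStateSource gives it one, and the
    # updated TensorVariable reconstruction returns it correctly.
    state = torch.default_generator.get_state()
    return x + 1, state

f(torch.randn(3))
```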

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97403
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-03-29 04:31:23 +00:00
Will Constable
c1a6dde79e Make dynamo-FSDP skip guards (#97463)
Create a new GuardSource for FSDP modules, and use it
to opt out of guard installation.

Based on @awgu's work in https://github.com/pytorch/pytorch/pull/97091
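
A hedged sketch of the opt-out (illustrative, not the exact torch code):

```python
# Guard installation consults the guard's GuardSource and returns early
# for FSDP-managed modules, so no guards are installed for them.
def install_guard(guard):
    if guard.source.is_fsdp_module():  # hypothetical check on the new source kind
        return
    ...  # normal guard installation
```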

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97463
Approved by: https://github.com/voznesenskym, https://github.com/jansel, https://github.com/awgu
2023-03-28 04:04:34 +00:00
Michael Voznesensky
f9ce593267 Extend aot autograd dedup guards to params, stop using positions (#96774)
The purpose of this PR is to remove reliance on argument positions in dedup guards, AND extend the functionality to params.

A version of this PR was stamped previously (https://github.com/pytorch/pytorch/pull/95831), but it was kinda gross because it was based on an underlying PR that did way too much with source names.

This PR leaves most of that alone, in favor of just reusing the same name standardization logic that dynamo module registration does.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96774
Approved by: https://github.com/ezyang
2023-03-21 05:59:33 +00:00
Yanbo Liang
b8f7bd593c [Dynamo] Guard name should be valid Python identifier (#96174)
Fixes #96149

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96174
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-03-08 01:33:29 +00:00
William Wen
055a9e45aa [dynamo 3.11] changes to LOAD_GLOBAL and function calls (#94098)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94098
Approved by: https://github.com/albanD
2023-02-21 18:47:30 +00:00
Yanbo Liang
304d8dd6c8 [Dynamo] Support enum.Enum type as dict key (#93026)
Fixes a Meta-internal use case of using ```enum.Enum``` type as a dict key; please refer to the added test case for details.
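
A minimal sketch along the lines of the added test case (names here are illustrative):

```python
import enum
import torch

class Color(enum.Enum):
    RED = 0
    GREEN = 1

scales = {Color.RED: 2.0, Color.GREEN: 3.0}

@torch.compile
def f(x):
    return x * scales[Color.RED]  # enum.Enum value used as a dict key

f(torch.randn(3))
```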

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93026
Approved by: https://github.com/mlazos
2023-01-29 06:37:10 +00:00
Michael Voznesensky
4ca511c69e Fix positional issues in dedup guards (#93137)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93137
Approved by: https://github.com/bertmaher, https://github.com/wconstab, https://github.com/bdhirsh
2023-01-28 19:21:32 +00:00
Will Constable
6cfaa92239 Handle tensor default func args when inlining (#90575)
Handle tensor default func/method args when inlining.

Previously, when inlining a function, its default arguments were only wrapped with VariableTrackers if non-tensor. Now, tensor default args are also handled by adding them to the parent InstructionTranslator as an attribute.

- also patches up a missing source in nnmodule call_function, needed to properly guard on a default arg in its methods
- adds a new 'DefaultsSource' type which guards either a `__defaults__` or `__kwdefaults__` entry on a function

Fixes #90361  https://github.com/pytorch/torchdynamo/issues/1968
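
A minimal sketch of the pattern this enables (illustrative names):

```python
import torch

_default_bias = torch.ones(3)

def inner(x, bias=_default_bias):  # tensor default argument
    return x + bias

@torch.compile
def outer(x):
    return inner(x)  # inlining now wraps and guards on the tensor default

outer(torch.randn(3))
```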

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90575
Approved by: https://github.com/voznesenskym
2023-01-12 05:04:18 +00:00
Will Constable
8e2e648f84 Propagate sources in VariableBuilder and add SuperSource (#91729)
**Motivation**
When adding support for default args (#90575), many VariableTrackers were encountered that were missing sources. Currently, it often seems OK to skip the source for newly created VariableTrackers (especially during inlining), but that assumption breaks down when inlining functions with default arguments.

**Summary** of changes
- Propagate the `self.source` of the VariableBuilder to the new variables being built; skipping this seems to have been an omission previously
- Add SuperSource to track usages of super(), so that SuperVariables can support function calls with default args (see the sketch below)
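
A minimal sketch of the super()-with-default-args pattern this fixes (illustrative names):

```python
import torch

class Base(torch.nn.Module):
    def scale(self, x, factor=2.0):  # default arg reached through super()
        return x * factor

class Child(Base):
    def forward(self, x):
        return super().scale(x)  # SuperSource tracks the super() usage

torch.compile(Child())(torch.randn(3))
```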

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91729
Approved by: https://github.com/ezyang
2023-01-12 05:04:18 +00:00
Will Constable
75c652821c Assert valid base source for derivative sources (#91711)
We should not allow creating a derived source (e.g., AttrSource) without a valid base source.

It's more reliable to check this in the source `__init__` or `__post_init__` than asserting we have a valid source before passing that to an AttrSource() call.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91711
Approved by: https://github.com/voznesenskym
2023-01-07 00:51:55 +00:00
Edward Z. Yang
f8740db410 Properly resolve source_ref when constructing shape guards (#91058)
Whenever you guard on something, you're supposed to tell GuardBuilder about it, so GuardBuilder knows that it has to actually bind it in scope when it creates the guard function. But shape env guards bypass that mechanism completely. Well, now they don't.

For the most part, this didn't matter in practice, because we usually had a `TENSOR_MATCH` guard floating around that made sure that the guard stayed live. But if we ever eliminate those guards (e.g., because we build it into the shape guard directly; something we'll probably want to do when https://github.com/pytorch/pytorch/pull/89707 goes online) then this will indeed matter.

One complication: some of the shape env guards are on globals. You have to make sure to shunt the usage to the correct guard builder in that case. Maybe it would be better if we refactored things so there is only one GuardBuilder. Not sure.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91058
Approved by: https://github.com/voznesenskym
2022-12-30 05:56:56 +00:00
Edward Z. Yang
bcf15cd93b Store source, not sname, in Symbol (#91057)
I'm going to need this in the follow-up PR. Instead of storing only Source.name() in Symbol, I now store a full-on Source. Lots of replumbing ensues. In particular:

- Move Source to torch._guards to break cycles
- I have to add TensorPropertySource and NegateSource to handle the `x.size()[0]` and `-x` codegen that I was doing with string manipulation previously
- I tighten up invariants so that I never pass source=None; instead I pass ConstantSource (these are constant sources, right?) and test for that rather than for the source being missing. I think this is more parsimonious
- Some mypy wobbles from new imports

I didn't move LocalSource and friends to torch._guards, but I ended up needing to access them in a few places. The main annoyance with moving these is that then I also need to move the bytecode codegen stuff, and that's not so easy to move without bringing in the kitchen sink.
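
A hedged sketch of the new structural sources (the exact `.name()` output is illustrative):

```python
# Composing sources structurally, instead of string manipulation:
#
#   TensorPropertySource(LocalSource("x"), TensorProperty.SIZE, idx=0).name()
#       -> roughly "L['x'].size()[0]"
#   NegateSource(LocalSource("x")).name()
#       -> roughly "-L['x']"
```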

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91057
Approved by: https://github.com/albanD, https://github.com/voznesenskym, https://github.com/zou3519
2022-12-30 05:56:56 +00:00
PyTorch MergeBot
b68fd7e319 Revert "Store source, not sname, in Symbol (#91057)"
This reverts commit 88c581be87.

Reverted https://github.com/pytorch/pytorch/pull/91057 on behalf of https://github.com/atalman due to causing internal build failures
2022-12-21 22:33:15 +00:00
Edward Z. Yang
88c581be87 Store source, not sname, in Symbol (#91057)
I'm going to need this in the follow-up PR. Instead of storing only Source.name() in Symbol, I now store a full-on Source. Lots of replumbing ensues. In particular:

- Move Source to torch._guards to break cycles
- I have to add TensorPropertySource and NegateSource to handle the `x.size()[0]` and `-x` codegen that I was doing with string manipulation previously
- I tighten up invariants so that I never pass source=None; instead I pass ConstantSource (these are constant sources, right?) and test for that rather than for the source being missing. I think this is more parsimonious
- Some mypy wobbles from new imports

I didn't move LocalSource and friends to torch._guards, but I ended up needing to access them in a few places. The main annoyance with moving these is that then I also need to move the bytecode codegen stuff, and that's not so easy to move without bringing in the kitchen sink.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91057
Approved by: https://github.com/albanD, https://github.com/voznesenskym
2022-12-21 04:51:51 +00:00
Edward Z. Yang
57390116e0 Restructure ShapeEnv so it uses GuardBuilder.SHAPE_ENV directly (#91055)
The idea is to make ShapeEnv guards less of a one-off special snowflake and integrate them more closely with the regular builder infrastructure. But it is not so easy: the shape env code has to live after the tensor match code, because we need to know that the values in question are tensors before we start matching on them. So we introduce a new `shape_env_code` field for the special shape env code, which lets us append it to the final constructed code after the tensor checks.

Everything else works the obvious way. There's a new ShapeEnvSource for constructing the singleton SHAPE_ENV guard that drives the shape env guard construction. I added some more docs and also made the printed code for guards include the enclosing lambda for more clarity.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91055
Approved by: https://github.com/albanD, https://github.com/voznesenskym
2022-12-21 03:50:47 +00:00
Michael Voznesensky
11442accc6 Make torch._guards, shuffle structures around for migration (#90636)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90636
Approved by: https://github.com/ezyang
2022-12-11 23:16:07 +00:00
PyTorch MergeBot
15a4c60383 Revert "Make torch._guards, shuffle structures around for migration (#90636)"
This reverts commit 933b6c4eed.

Reverted https://github.com/pytorch/pytorch/pull/90636 on behalf of https://github.com/huydhn due to Breaking lint on master. Please rebase and run lintrunner -a before re-merging the PR
2022-12-11 10:15:47 +00:00
Michael Voznesensky
933b6c4eed Make torch._guards, shuffle structures around for migration (#90636)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90636
Approved by: https://github.com/ezyang
2022-12-11 06:04:17 +00:00
Edward Z. Yang
dcefc8f90f Implement guard_source on RandomValueSource (#89711)
I audited the pattern matches on the enum, and it didn't look like this one should apply there.

Sorry, no test; I know this matters on the symbolic-shapes branch, but I haven't had time to extract a minimal reproducer. Take my word for it.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89711
Approved by: https://github.com/jansel
2022-11-28 00:32:48 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00