Commit Graph

14 Commits

Author SHA1 Message Date
Simon Fan
8ac0f072e6 [aot eager] Support frontend graphs with list arguments (#123212)
We already support bumpy (non-flat) inputs for third-party frontends and the compiled backward graph; we should add the same behavior to aot_eager too.
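
The change is internal to aot_eager's handling of such graphs, but a minimal, purely illustrative sketch of the kind of input involved might look like this (dynamo flattens list inputs before handing the graph to the backend, so this only gestures at the shape of the problem):

    import torch

    # A function whose argument is a list of tensors (a "bumpy" input).
    def f(xs):
        return xs[0] + xs[1]

    # After this change, the aot_eager backend handles such graphs too.
    compiled = torch.compile(f, backend="aot_eager")
    print(compiled([torch.randn(3), torch.randn(3)]))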

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123212
Approved by: https://github.com/jansel
ghstack dependencies: #122691, #122746, #123007
2024-04-03 17:07:52 +00:00
Elias Ellison
d03b11ad5b Pass inductor strides forward in ddp optimizer (#120523)
# Note: Returning Fake Tensors on First AOT Autograd Call
#
# Inductor will optimize strides of outputs when it deems it profitable.
# For instance, converting to channels last. When we split the graph here
# into multiple inductor compilations, we need to make sure that the
# output strides of one compilation are appropriately passed to the
# subsequent compilations. However, the mapping from inductor output to
# dynamo output is non-trivial due to aot_autograd's deduping, de-aliasing,
# mutation, re-writing, subclass handling, etc. In order to replay all this
# logic we set a flag such that the first invocation of inductor in
# aot_autograd will return fake tensors with appropriate strides. Then, all
# of aot_autograd's runtime logic is replayed. This gives us the
# appropriately strided outputs here, which will reflect runtime strides.
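
As a hedged sketch of the underlying mechanism (not the ddp_optimizer code itself): under FakeTensorMode, tensors carry shapes and strides but no data, so one compilation's output strides can be handed to the next compilation without running real kernels. The channels-last strides below are an illustrative choice inductor might make for a 4-d output:

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode

    with FakeTensorMode():
        # Suppose inductor chose channels-last for an output of shape (2, 3, 8, 8).
        channels_last_strides = (192, 1, 24, 3)
        fake_out = torch.empty_strided((2, 3, 8, 8), channels_last_strides)
        assert fake_out.stride() == channels_last_strides
        # fake_out can now serve as an example input when compiling the next
        # subgraph, so that it is specialized on the real runtime strides.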

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120523
Approved by: https://github.com/yf225, https://github.com/bdhirsh
2024-02-29 22:25:00 +00:00
Edward Z. Yang
d03173e88c Unify MYPYINDUCTOR and MYPY (#118432)
The original motivation for MYPYINDUCTOR was a faster type checking configuration that only checked a subset of files. With the removal of `follow_imports = ignore`, we are now able to use dmypy to do fast incremental typechecking, eliminating the need for this.

Perhaps erroneously, when I teed up this PR I elected to delete the `follow_imports = skip` designations in mypy-inductor.ini. This led to a number of extra type error suppressions that I manually edited. You will need to review.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118432
Approved by: https://github.com/Skylion007
ghstack dependencies: #118414, #118418
2024-01-27 17:23:20 +00:00
Animesh Jain
735e6ae801 [dynamo] Maintainable code - Move decorators to a separate file (#105070)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105070
Approved by: https://github.com/ezyang
2023-07-13 07:41:19 +00:00
Bin Bao
86ddfc7f68 [inductor] Move cpp wrapper trigger logic to inner_compile (#100611)
Summary: This enables cpp wrapper for backward as well.
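
A hedged usage sketch, assuming the torch._inductor.config.cpp_wrapper knob and a working C++ toolchain; with the trigger logic moved into inner_compile, the same setting now covers the backward graph as well:

    import torch
    import torch._inductor.config as inductor_config

    inductor_config.cpp_wrapper = True  # assumed knob; needs a C++ toolchain

    @torch.compile
    def f(x):
        return (x * x).sum()

    x = torch.randn(4, requires_grad=True)
    f(x).backward()  # the backward graph is now also compiled with the cpp wrapper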

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100611
Approved by: https://github.com/jansel
2023-05-08 15:24:02 +00:00
Edward Z. Yang
a109453df4 Delete use_functionalize feature flag (#99317)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99317
Approved by: https://github.com/voznesenskym
2023-04-18 02:09:57 +00:00
Edward Z. Yang
17d7be68ee Delete functorch use_fake_tensor and debug_fake_cross_ref (#99314)
Using fake tensors with AOTAutograd is now mandatory, simplifying our logic. Unfortunately, this means debug_fake_cross_ref must go, but I don't think anyone has used it recently.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99314
Approved by: https://github.com/eellison, https://github.com/zou3519
2023-04-18 02:09:54 +00:00
Sun, Jiayi
f959a0d56c Modify 'fake_tensor_unsupported' function (#98585)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98585
Approved by: https://github.com/jansel
2023-04-08 01:04:00 +00:00
Elias Ellison
9c144bc4fe Don't increment generation if a forward's backward is pending, and warn on deallocation of live tensors (#97168)
Refining the logic for when it is okay to ignore previously live outputs from cudagraphs. If there is a forward that has been invoked without invocation of the corresponding backward, don't allow overwriting its outputs.
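
A minimal sketch of that rule, with hypothetical names (not the actual cudagraph trees API): the generation only advances, allowing live outputs to be overwritten, once every forward has had its backward run.

    class GenerationTracker:
        def __init__(self):
            self.generation = 0
            self.pending = 0  # forwards whose backward has not yet run

        def on_forward(self):
            self.pending += 1

        def on_backward(self):
            self.pending -= 1

        def maybe_increment_generation(self):
            if self.pending == 0:
                self.generation += 1  # safe: no outputs still needed for backward
                return True
            return False  # some forward's outputs may still be needed; keep them live

    tracker = GenerationTracker()
    tracker.on_forward()
    assert not tracker.maybe_increment_generation()  # backward still pending
    tracker.on_backward()
    assert tracker.maybe_increment_generation()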

Differential Revision: [D44228369](https://our.internmc.facebook.com/intern/diff/D44228369)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97168
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-03-22 18:27:36 +00:00
Brian Hirsh
7a076b7b93 [aot_autograd] only perform functionalization analysis pass once (#95992)
For a while now, we've been running our functionalization analysis pass twice: once to get metadata when dedup'ing, and an entire second time during aot_dispatch_base/autograd.

This should also speed up compile times noticeably, since we're going from:

(a) inference-only trace case: 3 fw traces -> 2 fw traces
(b) autograd trace case: 2 fw traces + 1 joint trace -> 1 fw trace + 1 joint trace
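
A toy sketch of the shape of the change, with hypothetical names (not the aot_autograd API): run the expensive analysis once and thread its result through both consumers, instead of each consumer re-tracing.

    def analyze(fn, args):
        # stand-in for the functionalization analysis trace (the expensive part)
        print(f"analysis trace of {fn.__name__}")
        return {"n_args": len(args)}

    def dedup(args, meta):
        # consumer 1: dedup'ing, which previously re-ran the analysis internally
        return list(dict.fromkeys(args))

    def dispatch(fn, args):
        meta = analyze(fn, args)  # traced exactly once
        args = dedup(args, meta)  # reuses meta instead of re-tracing
        return fn(*args)          # consumer 2 (dispatch) also reuses meta

    def add(x, y):
        return x + y

    print(dispatch(add, (1, 2)))  # the analysis trace line prints only once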

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95992
Approved by: https://github.com/ezyang
2023-03-15 13:45:40 +00:00
Jiayi Sun
5fe72b8716 [Dynamo] modify dynamo ipex backend (#94169)
1. Extend fake_tensor_unsupported to support dynamic shapes mode.
2. Use fake_tensor_unsupported in the dynamo ipex backend (see the sketch below).
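
A hedged sketch of how a backend opts out of fake tensors, assuming the decorator's current import path: the decorator swaps the fake example inputs for real ones before the backend sees them, which is how the ipex backend uses it after this change.

    import torch
    from torch._dynamo.backends.common import fake_tensor_unsupported  # assumed path

    @fake_tensor_unsupported
    def my_backend(gm: torch.fx.GraphModule, example_inputs):
        # by this point, example_inputs are real tensors rather than fake ones
        return gm.forward  # no-op "compiler": just run the captured graph

    @torch.compile(backend=my_backend)
    def f(x):
        return x.sin()

    print(f(torch.randn(3)))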

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94169
Approved by: https://github.com/jgong5, https://github.com/jansel
2023-02-08 05:10:42 +00:00
Jason Ansel
0a93e6db5a Fix/refactor dynamo ipex backend (#93863)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93863
Approved by: https://github.com/desertfire
2023-02-03 21:42:27 +00:00
Jason Ansel
a5ff40032d Fix/refactor dynamo onnxrt backend (#93818)
Fixes https://github.com/pytorch/pytorch/issues/90352

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93818
Approved by: https://github.com/voznesenskym
2023-02-03 20:48:02 +00:00
Jason Ansel
60e8c766b5 Refactor dynamo training backends (#93409)
This splits training.py into many files and moves them from `dynamo.optimizations.training` to `dynamo.backends.*`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93409
Approved by: https://github.com/ezyang
2023-02-03 03:07:15 +00:00