Animesh Jain
735e6ae801
[dynamo] Maintainable code - Move decorators into a separate file (#105070)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105070
Approved by: https://github.com/ezyang
2023-07-13 07:41:19 +00:00
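For context on what this file houses, here is a minimal sketch using one of the public `torch._dynamo` decorators; the helper function is illustrative, and only the documented `torch._dynamo.disable` entry point is assumed:

```python
import torch
import torch._dynamo as dynamo

@dynamo.disable  # run this helper eagerly; never trace or compile it
def debug_print(x):
    print("shape:", x.shape)
    return x

def f(x):
    return debug_print(x).sin() + 1

compiled = torch.compile(f)
print(compiled(torch.randn(3)))
```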
Bin Bao
86ddfc7f68
[inductor] Move cpp wrapper trigger logic to inner_compile (#100611)
Summary: This enables cpp wrapper for backward as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100611
Approved by: https://github.com/jansel
2023-05-08 15:24:02 +00:00
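A sketch of how this surfaces to users, assuming the existing `torch._inductor.config.cpp_wrapper` flag; per the summary above, the backward graph is now compiled through the same path:

```python
import torch
import torch._inductor.config as inductor_config

inductor_config.cpp_wrapper = True  # emit Inductor's C++ wrapper instead of the Python one

model = torch.nn.Linear(8, 8)
compiled = torch.compile(model)

out = compiled(torch.randn(4, 8))
out.sum().backward()  # with this change, the backward graph also gets the cpp wrapper
```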
Edward Z. Yang
a109453df4
Delete use_functionalize feature flag (#99317)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99317
Approved by: https://github.com/voznesenskym
2023-04-18 02:09:57 +00:00
Edward Z. Yang
17d7be68ee
Delete functorch use_fake_tensor and debug_fake_cross_ref (#99314)
Using fake tensors with AOTAutograd is now mandatory, which simplifies our logic. Unfortunately, this means debug_fake_cross_ref must go, but I don't think anyone has used it recently.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99314
Approved by: https://github.com/eellison, https://github.com/zou3519
2023-04-18 02:09:54 +00:00
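For readers unfamiliar with the now-mandatory fake tensors, a minimal illustration using the public `FakeTensorMode` (not the deleted flags): fake tensors propagate shapes, dtypes, and devices without allocating real storage, which is what lets AOTAutograd trace without running real kernels.

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    x = torch.empty(128, 64)   # no real storage is allocated
    w = torch.empty(64, 32)
    y = x @ w                  # metadata-only "execution"
    print(y.shape)             # torch.Size([128, 32])
```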
Sun, Jiayi
f959a0d56c
Modify 'fake_tensor_unsupported' function (#98585)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98585
Approved by: https://github.com/jansel
2023-04-08 01:04:00 +00:00
Elias Ellison
9c144bc4fe
Don't increment generation if forward of backward exists, and warn on deallocation of live tensors (#97168)
Refining the logic for when it is okay to ignore previously live outputs from cudagraphs. If there is a forward that has been invoked without invocation of the corresponding backward, don't allow overwriting outputs.
Differential Revision: [D44228369](https://our.internmc.facebook.com/intern/diff/D44228369)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97168
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-03-22 18:27:36 +00:00
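An illustrative sketch of the invariant described above; this is a toy model of the bookkeeping, not the actual cudagraph trees implementation:

```python
# Toy model of the rule: outputs of a forward stay protected until its
# backward runs; only then may the pool bump the generation and reuse memory.
class ToyGraphPool:
    def __init__(self):
        self.pending_backwards = set()
        self.generation = 0

    def run_forward(self, step):
        self.pending_backwards.add(step)

    def run_backward(self, step):
        self.pending_backwards.discard(step)

    def try_increment_generation(self):
        if self.pending_backwards:      # a forward's backward has not run yet
            return False                # do not overwrite its live outputs
        self.generation += 1
        return True

pool = ToyGraphPool()
pool.run_forward(0)
print(pool.try_increment_generation())  # False: backward for step 0 pending
pool.run_backward(0)
print(pool.try_increment_generation())  # True
```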
Brian Hirsh
7a076b7b93
[aot_autograd] Only perform functionalization analysis pass once (#95992)
For a while now, we've been running our functionalization analysis pass twice: once to get metadata when dedup'ing, and an entire second time during aot_dispatch_base/autograd.
This should also probably speed up compile times pretty noticeably, since we're going from:
(a) inference-only trace case: 3 fw traces -> 2 fw traces
(b) autograd trace case: 2 fw traces + 1 joint trace -> 1 fw trace + 1 joint trace
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95992
Approved by: https://github.com/ezyang
2023-03-15 13:45:40 +00:00
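The shape of the fix, sketched with a generic memoization pattern (illustrative only; aot_autograd's actual plumbing threads the metadata through explicitly rather than using a cache):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def analysis_pass(fn_key):
    print(f"analysis trace of {fn_key}")  # expensive trace: now happens once
    return {"mutated_inputs": (), "num_outputs": 1}

def dedup_inputs(fn_key):
    return analysis_pass(fn_key)          # first (and only) trace

def aot_dispatch(fn_key):
    return analysis_pass(fn_key)          # cache hit: no second trace

dedup_inputs("model")
aot_dispatch("model")
```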
Jiayi Sun
5fe72b8716
[Dynamo] Modify dynamo ipex backend (#94169)
1. Extend fake_tensor_unsupported to support dynamic shapes mode.
2. Use fake_tensor_unsupported in dynamo ipex backend.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94169
Approved by: https://github.com/jgong5, https://github.com/jansel
2023-02-08 05:10:42 +00:00
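A sketch of the decorator in use on a hypothetical custom backend (`my_eager_backend` is made up; `register_backend` and `fake_tensor_unsupported` are the real `torch._dynamo` entry points): the decorator hands real example inputs to a backend that cannot handle FakeTensor inputs.

```python
import torch
from torch._dynamo import register_backend
from torch._dynamo.backends.common import fake_tensor_unsupported

@register_backend
@fake_tensor_unsupported  # give the backend real example inputs, not FakeTensors
def my_eager_backend(gm: torch.fx.GraphModule, example_inputs):
    return gm.forward  # trivially "compile" by running the captured graph eagerly

fn = torch.compile(lambda x: x.sin() + 1, backend="my_eager_backend")
print(fn(torch.randn(4)))
```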
Jason Ansel
0a93e6db5a
Fix/refactor dynamo ipex backend (#93863)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93863
Approved by: https://github.com/desertfire
2023-02-03 21:42:27 +00:00
Jason Ansel
a5ff40032d
Fix/refactor dynamo onnxrt backend (#93818)
Fixes https://github.com/pytorch/pytorch/issues/90352
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93818
Approved by: https://github.com/voznesenskym
2023-02-03 20:48:02 +00:00
Jason Ansel
60e8c766b5
Refactor dynamo training backends (#93409)
This splits training.py into many files and moves them from `dynamo.optimizations.training` to `dynamo.backends.*`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93409
Approved by: https://github.com/ezyang
2023-02-03 03:07:15 +00:00
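After the move, built-in backends are discoverable by name through the registry; a quick usage sketch with the public API:

```python
import torch
import torch._dynamo as dynamo

print(dynamo.list_backends())  # e.g. ['cudagraphs', 'inductor', 'onnxrt', ...]

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model, backend="inductor")
print(compiled(torch.randn(2, 4)).shape)
```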