Commit Graph

12 Commits

Shiyan Deng
2e73c86d45 [fx][split] make sure we copy node.meta over during split (#107248)
Summary: Previously, when we created placeholder nodes for subgraph modules, we didn't copy `node.meta` over (see the sketch after this entry).

Test Plan: CI

Differential Revision: D48330866

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107248
Approved by: https://github.com/zyan0, https://github.com/houseroad, https://github.com/Neilblaze
2023-08-22 00:06:45 +00:00
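
A minimal sketch of the fix described above, assuming a hypothetical `create_placeholder` helper (the real split utilities differ in detail): when a placeholder is created for a subgraph, the original node's `meta` dict is copied onto it.

```
import torch.fx

# Hypothetical helper, for illustration only: create a placeholder in a
# subgraph for an existing node and carry its meta dict over.
def create_placeholder(subgraph: torch.fx.Graph, node: torch.fx.Node) -> torch.fx.Node:
    placeholder = subgraph.placeholder(node.name)
    # The fix: copy node.meta (tensor_meta, stack trace, etc.) so that
    # downstream passes on the submodule still see this information.
    placeholder.meta = node.meta.copy()
    return placeholder
```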
Kazuaki Ishizaki
105ef68f72 Fix typos under torch/fx directory (#97596)
This PR fixes typos in comments and messages of `.py` files under the `torch/fx` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97596
Approved by: https://github.com/dagitses, https://github.com/kit1980
2023-04-10 21:57:36 +00:00
Isaac Hoffman
20018aa766 modify split_by_tags to retain output order (#84136)
Summary: Currently `split_by_tags` determines submodule output order by iterating over `used_in_main`. Since this is a `Set`, insertion order is not retained, so we run into problems with submodule output order being "randomized" and inconsistent between splits. By using `Dict[Node, None]` we can implement `used_in_main` as an ordered set, so that output order is consistent when splitting the same model (see the sketch after this entry).

Test Plan: CI

Differential Revision: D39039268

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84136
Approved by: https://github.com/houseroad
2022-08-30 20:36:33 +00:00
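
A short sketch of the ordered-set trick described above; `mark_used_in_main` is an illustrative name, not the splitter's actual code. Since Python 3.7, `dict` preserves insertion order, so `Dict[Node, None]` behaves like a set with deterministic iteration order while keeping O(1) membership tests.

```
from typing import Dict

from torch.fx import Node

used_in_main: Dict[Node, None] = {}

def mark_used_in_main(node: Node) -> None:
    # setdefault inserts the key once and keeps its original position,
    # giving set semantics with a stable iteration order.
    used_in_main.setdefault(node, None)

# Iterating yields nodes in the order they were first marked, so the
# submodule output order is the same every time the model is split.
output_nodes = list(used_in_main)
```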
Sherlock Huang
ac5a94789f Refactor lift_subgraph_as_module as a fx.passes.util function (#80292)
`lift_subgraph_as_module` can be shared between `fuser_utils.py` and `splitter_utils.py`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80292
Approved by: https://github.com/jjsjann123, https://github.com/842974287
2022-06-29 22:35:39 +00:00
anjali411
3bcc19b29a Add __all__ to various submodules in torch.fx, distributions, distributed, package (#80367)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80367
Approved by: https://github.com/albanD
2022-06-27 21:27:30 +00:00
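
For illustration, this is what declaring `__all__` in a submodule buys; the symbols below are hypothetical, not the ones the PR actually exported.

```
# Hypothetical submodule: __all__ pins down what `from module import *`
# re-exports and what documentation tooling treats as public API.
__all__ = ["split_module"]

def split_module():
    ...

def _internal_helper():  # private: not listed in __all__
    ...
```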
Yinghai Lu
5a559a547d [FX] fix split_util by using getattr_recursive instead of getattr (#80011)
Summary: If the model contains a `ModuleList`, it's possible that we get some of the weight attributes as qualified names like `module.sub.0.weight`. `getattr` doesn't work in this case, and we have a dedicated function, `getattr_recursive`, for exactly that; just use it (see the sketch after this entry).

Reviewed By: houseroad

Differential Revision: D37326955

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80011
Approved by: https://github.com/houseroad
2022-06-23 03:35:46 +00:00
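
A minimal sketch of the recursive lookup the commit switches to (the real `getattr_recursive` in FX may differ in detail): plain `getattr` cannot resolve a dotted qualified name such as `sub.0.weight`, but walking the path one segment at a time can.

```
import torch

def getattr_recursive(obj, qualified_name: str):
    # Resolve a dotted path like "sub.0.weight" one attribute at a time.
    for attr in qualified_name.split("."):
        obj = getattr(obj, attr)
    return obj

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = torch.nn.ModuleList([torch.nn.Linear(4, 4)])

m = Model()
# getattr(m, "sub.0.weight") raises AttributeError; the recursive
# version resolves each segment, including the ModuleList index "0".
w = getattr_recursive(m, "sub.0.weight")
```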
Shiyan Deng
5e86505693 Move util functions to a more common place (#73519)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73519

Move `getattr_recursive()` and `setattr_recursive()` to fx main.

Test Plan: CI

Reviewed By: khabinov

Differential Revision: D34524723

fbshipit-source-id: a656e821d9dc1d446aa80cdc03a923bf0c05aeb5
(cherry picked from commit 4835965ac72d299487be14687823ea62394f4079)
2022-03-01 01:33:30 +00:00
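
The companion setter, sketched under the same assumptions as the `getattr_recursive` example above: resolve every segment but the last with ordinary attribute access, then assign the final attribute.

```
import torch

def setattr_recursive(obj, qualified_name: str, value) -> None:
    # Walk to the owner of the final attribute, then set it.
    *path, last = qualified_name.split(".")
    for attr in path:
        obj = getattr(obj, attr)
    setattr(obj, last, value)

m = torch.nn.Sequential(torch.nn.Linear(2, 2))
setattr_recursive(m, "0.bias", torch.nn.Parameter(torch.zeros(2)))
```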
Shirong Wu
84d4087874 Fix trt const_fold as output use case (#71194)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/71194

Reviewed By: jfix71, khabinov

Differential Revision: D33541168

fbshipit-source-id: dd5787430b272977963323a6ce38b3e15e979278
2022-01-12 16:57:19 -08:00
James Reed
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
Oleg Khabinov
36a22967b7 [fx ir] Handle the case when output consumes get_attr directly (#57844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57844

Reviewed By: 842974287

Differential Revision: D28294298

fbshipit-source-id: db337fadca9f10f208324c9da6d95620178a189b
2021-05-10 22:04:43 -07:00
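
An illustration of the case being handled (the module below is hypothetical): when `forward` returns an attribute directly, the traced graph's `output` node consumes a `get_attr` node with no ops in between, which splitting code must handle specially.

```
import torch
import torch.fx

class ReturnsWeight(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(3))

    def forward(self, x):
        # The second output is the bare attribute: in the traced graph,
        # the output node consumes the get_attr node directly.
        return x + 1, self.weight

gm = torch.fx.symbolic_trace(ReturnsWeight())
print(gm.graph)
```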
Shiyan Deng
d896d1f4ce [fx splitter] Fix fusion group utility (#57280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57280

We've found an issue where a fusion group can result in a circular dependency. For example:
```
a -> b -> c -> d
|              ^
+--------------+
```
Only `a` has non-tensor output, and currently we would create a fusion group `(a, b, d)`. This results in a circular dependency because the fusion group depends on `c` while `c` depends on the fusion group as well.

This diff implements the solution discussed before: when we add a node to the fusion group, we also add all the nodes that lie between the fusion group and the newly added node (see the sketch after this entry).

Use the same logic in the minimizer to build fusion groups.

Test Plan: split_tests and net_min_tests

Reviewed By: khabinov

Differential Revision: D27917432

fbshipit-source-id: a3d99fe5929dbc9f8eb0f45bccd83fd7b173795a
2021-04-30 10:18:01 -07:00
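
A sketch of the rule described above, with illustrative helper names rather than the splitter's actual internals: when a node joins the fusion group, every node lying on a path between the group and that node joins as well, so no outside node (like `c` in the diagram) can both depend on the group and feed it.

```
from typing import Set

from torch.fx import Node

def add_to_fusion_group(fusion_group: Set[Node], new_node: Node) -> None:
    fusion_group.add(new_node)
    # Walk upstream from new_node; any node that can also reach back
    # into the group sits "in the middle" and must be pulled in too.
    stack = list(new_node.all_input_nodes)
    seen: Set[Node] = set()
    while stack:
        node = stack.pop()
        if node in seen or node in fusion_group:
            continue
        seen.add(node)
        if reaches_group(node, fusion_group):
            fusion_group.add(node)
            stack.extend(node.all_input_nodes)

def reaches_group(node: Node, fusion_group: Set[Node]) -> bool:
    # Illustrative DFS over inputs: does any upstream path from `node`
    # lead back into the fusion group?
    stack, seen = list(node.all_input_nodes), set()
    while stack:
        n = stack.pop()
        if n in fusion_group:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(n.all_input_nodes)
    return False
```

In the diagram above, adding `d` to the group `{a, b}` visits `c`, finds that `c` reaches `b` inside the group, and pulls `c` in, yielding `{a, b, c, d}` with no cycle.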
Shiyan Deng
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer into the superclasses `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00