Commit Graph

16 Commits

Author SHA1 Message Date
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.
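
As an illustration of the kind of pattern such an automated fix targets (not the actual diff from this PR), a dictionary loop whose value is never used can iterate over the keys directly:

```
d = {"conv": 1, "relu": 2}

# Before: the value is bound but never used.
for name, _value in d.items():
    print(name)

# After: iterate over the keys directly.
for name in d:
    print(name)
```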

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Angela Yi
a076bdb357 [fx] Copy codegen in legalize_graph (#90023)
Test Plan: CI

Differential Revision: D41666330

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90023
Approved by: https://github.com/SherlockNoMad
2022-12-07 21:09:38 +00:00
Horace He
51bbf6329a Improved legalize_graph pass in FX (#82874)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82874
Approved by: https://github.com/jamesr66a
2022-08-07 00:13:17 +00:00
Shirong Wu
09059d9148 integrate plugin (#82395)
Differential Revision: D38162861

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82395
Approved by: https://github.com/frank-wei
2022-08-02 00:41:36 +00:00
anjali411
3bcc19b29a Add __all__ to various submodules in torch.fx, distributions, distributed, package (#80367)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80367
Approved by: https://github.com/albanD
2022-06-27 21:27:30 +00:00
Shirong Wu
ea8a0184b7 Fix fuse_parallel_linear (#76202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76202

Move legalize_graph to a common tool class.

Reviewed By: yinghai, jfix71, 842974287

Differential Revision: D35694145

fbshipit-source-id: b044df3b46b3029c383581f7853a4338c2b13c62
(cherry picked from commit 49884d557d220f981f5f894bdcd381df749e3efb)
2022-04-22 18:59:07 +00:00
Jordan Fix
4737ae7a16 [tools_common] Don't remove underscores from call_module targets in get_acc_ops_name (#72664)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72664

Test Plan: CI.

Reviewed By: wushirong

Differential Revision: D34148357

fbshipit-source-id: 9c75aaeae59461d7550fb00c6f98c879e98274f6
(cherry picked from commit 553525698a)
2022-02-11 08:32:10 +00:00
Shirong Wu
e03c3dd150 Add leaf module code example (#72100)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72100

Facebook:
- Enable the splitter to properly read the leaf modules specified by the acc_tracer leaf module list, and parse a leaf module as run_on_acc if a custom leaf module converter is provided (see the sketch below for background on FX leaf modules).
- Add a scratch board for custom leaf module converters and example code for a std_conv2d_same converter.
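
The acc_tracer pieces referenced above are internal, but the "leaf module" idea itself is plain torch.fx: a custom `Tracer` can keep a module opaque by overriding `is_leaf_module`, so downstream passes see a single call_module node. A minimal sketch, with `MyConv` and `Model` as illustrative classes that are not from this PR:

```
import torch
import torch.nn as nn
import torch.fx as fx

class MyConv(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = MyConv()

    def forward(self, x):
        return torch.relu(self.block(x))

class LeafTracer(fx.Tracer):
    def is_leaf_module(self, m, qualname):
        # Keep MyConv opaque: it appears as one call_module node instead of
        # being traced through into its constituent ops.
        return isinstance(m, MyConv) or super().is_leaf_module(m, qualname)

model = Model()
gm = fx.GraphModule(model, LeafTracer().trace(model))
print(gm.graph)  # contains a single call_module node targeting "block"
```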

Reviewed By: jfix71

Differential Revision: D33698402

fbshipit-source-id: 01ce84ee1543f0fb8a8899256530ef1300797417
(cherry picked from commit 1357b2d528)
2022-02-03 02:07:00 +00:00
Kefei Lu
911d01c1de type annotate operator_support (#65136)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65136

Opportunistically add type annotations for operator_support.py

Test Plan: run linter, CI

Reviewed By: yinghai

Differential Revision: D30928464

fbshipit-source-id: 615c75152b9938792f03cdceb2a113bda6ab28c7
2021-09-29 10:38:47 -07:00
James Reed
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
Kefei Lu
adbcc819cd Fix fx2trt SplitterBase non_tensor_input logic (#64286)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64286

During graph splitting, `_SplitterBase` supports taking into consideration whether the subnet boundary nodes
produce "supported" outputs that will cross the acc/non-acc boundary. Specifically, if the backend only
supports Tensor-based data passing across the boundary, then we cannot split the graph at a place where the node
output is a non-Tensor type (e.g., `Tuple[Tensor]`).

There's currently a bug in this logic: it does not correctly detect the output type of a Node. Instead of
using `Node.meta['tensor_meta']`, we should check `Node.meta['type']`.

`Node.meta['tensor_meta']` is not appropriate because this key exists whenever the node output is an iterable
and one of the elements is of type `Tensor`, so `Tuple[Tensor]` would wrongly be considered "supported".
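
A minimal sketch of the intended check, assuming the `Node.meta` keys described above (illustrative, not the actual `_SplitterBase` code):

```
import torch
from torch.fx import Node

def produces_plain_tensor(node: Node) -> bool:
    # Rely on the recorded Python type of the node's output rather than the
    # presence of 'tensor_meta', which is also populated for containers such
    # as Tuple[Tensor].
    return node.meta.get("type") is torch.Tensor
```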

Test Plan:
arc lint
run CI tests

Reviewed By: yinghai, 842974287

Differential Revision: D30617147

fbshipit-source-id: e8ba70dfaddc05cafb8037d58fca73b7ccbb1a49
2021-09-07 04:02:29 -07:00
Shiyan Deng
cc18654d66 [fx_acc] Refactoring acc_tracer (#61963)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61963

Test Plan: CI

Reviewed By: jfix71

Differential Revision: D29772522

fbshipit-source-id: 4b117735147624f9428b933ea798495823423a0e
2021-07-21 20:09:15 -07:00
Shiyan Deng
9d56176034 Fix splitter and add a unittest (#58075)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58075

Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1687

Reviewed By: mikekgfb

Differential Revision: D28357724

fbshipit-source-id: 36c2d211576a90107bc75468a39408ffecaeed43
2021-05-12 10:40:37 -07:00
Aravind Kalaiah
747312bf61 Support for accumulate nodes traversal and to access op names in the compare function (#57685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57685

- Accumulate traversal: `minimizer.settings.traverse_method = "accumulate"`
   - Feature
   - net_min_tests
- Return op names to the compare function so that we can map the cosine similarity to individual ops (see the sketch below)
- Fix the settings combinations in net_min_tests
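
A hypothetical compare callback illustrating the idea; the exact minimizer signature and return convention may differ, and the names here are placeholders:

```
import torch
import torch.nn.functional as F

def compare_fn(ref_out, test_out, op_names):
    # Knowing which ops produced these outputs lets us attribute the cosine
    # similarity to individual ops in a report.
    sim = F.cosine_similarity(
        ref_out.flatten().float(), test_out.flatten().float(), dim=0
    ).item()
    print(f"{op_names}: cosine similarity = {sim:.4f}")
    return sim, sim > 0.99  # (numeric score, pass/fail)
```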

Test Plan:
buck test glow/fb/nnpi/lowering:net_min_tests

NNPI_LOG_LEVEL=5 USE_INF_API=1 buck run mode/opt -j 12 --config fbcode//cxx.link_weight=3 --config misc.strip_binaries=debug-non-line -c glow.nnpi_project_name='fb-nnpi-nextgen' ai_codesign/video/inference:xrayvideo_2019a_eval -- --job create --model_a model_prod --device_a PTCPU --trace_a none --model_b model_v3 --device_b NNPI --trace_b fusion --replace_b true --log_level INFO --use_scrambled false --save_repro false --num_ab_runs 0 --symbolic_trace_b true --save_modified_model_b false

USE_INF_API=1 buck test glow/fb/nnpi/lowering:net_min_tests

Reviewed By: 842974287

Differential Revision: D27867010

fbshipit-source-id: 6a756468b1f1fe24ef0400669d911825a7562484
2021-05-10 15:52:17 -07:00
Shiyan Deng
d896d1f4ce [fx splitter] Fix fusion group utility (#57280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57280

We've found an issue where a fusion group would result in a circular dependency. For example:
```
a -> b -> c -> d
|              ^
+--------------+
```
Only `a` has a non-tensor output, and currently we would create a fusion group (a, b, d). This results in a circular dependency because the fusion group now depends on `c` while `c` depends on the fusion group as well.

This diff implements the solution discussed before: when we add a node to a fusion group, we also add all the nodes that lie between the fusion group and the newly added node (a toy sketch follows below).

Use the same logic in the minimizer to build fusion groups.
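
A toy sketch of that rule on the graph above, using a plain dependency dict rather than real FX nodes (the helper names are illustrative):

```
deps = {"a": [], "b": ["a"], "c": ["b"], "d": ["c", "a"]}  # node -> inputs

def ancestors(node):
    seen, stack = set(), list(deps[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(deps[n])
    return seen

def add_to_fusion_group(group, new_node):
    # Pull in every node that sits between the group and the new node, i.e.
    # an ancestor of new_node that itself depends on the group.
    middle = {n for n in ancestors(new_node)
              if n not in group and ancestors(n) & group}
    return group | middle | {new_node}

print(sorted(add_to_fusion_group({"a"}, "d")))  # ['a', 'b', 'c', 'd']
```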

Test Plan: split_tests and net_min_tests

Reviewed By: khabinov

Differential Revision: D27917432

fbshipit-source-id: a3d99fe5929dbc9f8eb0f45bccd83fd7b173795a
2021-04-30 10:18:01 -07:00
Shiyan Deng
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer so they derive from the superclasses `_SplitterBase` and `_MinimizerBase`, and move those superclasses to OSS. This is needed to create an OSS example of GPU lowering with those tools.
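
As a rough sketch of the OSS surface (constructor details of `_SplitterBase` vary across versions, so this only shows the operator-support side, which is the part a backend customizes):

```
import torch
import torch.fx as fx
from torch.fx.passes.operator_support import OperatorSupportBase

class ReluOnlySupport(OperatorSupportBase):
    """Pretend backend that only accelerates torch.relu calls."""

    def is_node_supported(self, submodules, node):
        return node.op == "call_function" and node.target is torch.relu

gm = fx.symbolic_trace(lambda x: torch.relu(x) + 1)
support = ReluOnlySupport()
supported = [n.name for n in gm.graph.nodes
             if support.is_node_supported(dict(gm.named_modules()), n)]
print(supported)  # only the relu node is marked as supported
```

A concrete splitter would subclass `_SplitterBase`, hand it an operator-support object like this, and emit separate acc / non-acc submodules for the supported and unsupported regions.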

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00