Commit Graph

14 Commits

Zejun Huang
d271a5c796 [minimizer]skip mode for minimizer (#109399)
Summary: skip known-issue nodes in the minimizer and check the whole graph
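As a rough illustration of the skip mode described above (plain Python, not the actual PyTorch implementation; all names here are hypothetical), the minimizer can ignore nodes already known to fail and still verify the rest of the graph:

```python
def minimize_with_skip(nodes, run_and_compare, known_issue_nodes):
    """Check every node except those already known to fail.

    nodes: iterable of node names in the graph.
    run_and_compare: callable returning True if a node's output
        matches the reference implementation.
    known_issue_nodes: set of node names to skip.
    Returns the list of newly found culprit nodes.
    """
    culprits = []
    for node in nodes:
        if node in known_issue_nodes:
            continue  # skip mode: ignore nodes with known issues
        if not run_and_compare(node):
            culprits.append(node)
    return culprits
```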

Reviewed By: siyan-lin

Differential Revision: D48990707

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109399
Approved by: https://github.com/jfix71
2023-09-20 06:23:46 +00:00
Edward Z. Yang
b8b840be3d Convert logging f-strings to use % format, part five (#98765)
This does some annoying but simple cases by hand.
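For context on why the codemod prefers %-style: the standard `logging` module defers formatting until a handler actually emits the record, so a filtered-out message costs nothing. A minimal before/after sketch (the function names are illustrative only):

```python
import logging

logger = logging.getLogger(__name__)

def before(x):
    # f-string: the message is formatted eagerly, even when the
    # log level filters the record out.
    logger.info(f"value is {x}")

def after(x):
    # %-style: logging formats the message lazily, only if a
    # handler actually emits the record.
    logger.info("value is %s", x)
```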

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98765
Approved by: https://github.com/wanchaol
2023-04-11 13:17:59 +00:00
Kazuaki Ishizaki
105ef68f72 Fix typos under torch/fx directory (#97596)
This PR fixes typos in comments and messages of `.py` files under the `torch/fx` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97596
Approved by: https://github.com/dagitses, https://github.com/kit1980
2023-04-10 21:57:36 +00:00
Qiming Lu
e71370064c Improvements to FX Minimizer (#83833)
Summary: This diff improves the FX Minimizer for better error reports, and fixes a few other issues.

Test Plan: CI

Differential Revision: D38900309

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83833
Approved by: https://github.com/yuhc, https://github.com/Chillee
2022-09-01 18:39:26 +00:00
anjali411
f68f77610a Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80376
Approved by: https://github.com/albanD
2022-06-27 21:36:27 +00:00
Jerry Zhang
eaae62fed9 Make args work in the uru10x10_to_trt_eval script (#74707)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74707

att

Test Plan:
```
buck run mode/dev-nosan -c fbcode.split-dwarf=true -c fbcode.platform=platform009 accelerators/workloads/models/uru10x10:uru_10x10_to_trt_eval -- -h
```

Reviewed By: 842974287

Differential Revision: D34088069

fbshipit-source-id: 5c89d25db6493e0f66f7e57aac24ed72196d0378
(cherry picked from commit d9d79f03e28d609a14ddc3e55b97c52b0e102438)
2022-03-25 03:52:47 +00:00
James Reed
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
Kefei Lu
5757d03145 Add logging for _MinimizerBase
Summary: Add logging so we know which nodes are currently being visited

Test Plan: lint & SC tests

Reviewed By: 842974287

Differential Revision: D30509865

fbshipit-source-id: 09e77e44c97c825242e0b24f90463b50f3ca19c6
2021-08-26 00:52:58 -07:00
Philip Meier
d5988c5eca remove unused type: ignore directives (#60006)
Summary:
During development it is common practice to put `type: ignore` comments on lines that are correct, but that `mypy` doesn't recognize as such. This often stems from the fact that the `mypy` version in use wasn't able to handle the pattern.

With every new release `mypy` gets better at handling complex code. In addition to fixing all the previously accepted but now failing patterns, we should also revisit all `type: ignore` comments to see if they are still needed. Fortunately, we don't need to do this manually: by adding `warn_unused_ignores = True` to the configuration, `mypy` will error out whenever it encounters a `type: ignore` that is no longer needed.
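The flag mentioned above is a standard mypy configuration option; a minimal `mypy.ini` fragment would look like:

```ini
# mypy.ini (fragment) — with this flag enabled, mypy reports any
# "type: ignore" comment that no longer suppresses an error.
[mypy]
warn_unused_ignores = True
```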

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60006

Reviewed By: jbschlosser, malfet

Differential Revision: D29133237

Pulled By: albanD

fbshipit-source-id: 41e82edc5cd5affa7ccedad044b59b94dad4425a
2021-06-18 07:23:31 -07:00
Shiyan Deng
9d56176034 Fix splitter and add a unittest (#58075)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58075

Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1687

Reviewed By: mikekgfb

Differential Revision: D28357724

fbshipit-source-id: 36c2d211576a90107bc75468a39408ffecaeed43
2021-05-12 10:40:37 -07:00
Aravind Kalaiah
747312bf61 Support for accumulate nodes traversal and to access op names in the compare function (#57685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57685

- Accumulate traversal: `minimizer.settings.traverse_method = "accumulate"`
   - Feature
   - net_min_tests
- Return the op name to the compare function so that we can map the cosine similarity to individual ops
- Fix the settings combinations in net_min_tests
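A rough sketch of the compare-function change described above (plain Python; the callback signature and names are hypothetical, not the actual PyTorch API): with the op name passed in, the similarity score can be recorded per op rather than per run.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical per-op score map filled in by the compare callback.
scores = {}

def compare_fn(result_a, result_b, op_name):
    """Compare two results and record the similarity under op_name."""
    sim = cosine_similarity(result_a, result_b)
    scores[op_name] = sim
    return sim > 0.99
```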

Test Plan:
buck test glow/fb/nnpi/lowering:net_min_tests

NNPI_LOG_LEVEL=5 USE_INF_API=1 buck run mode/opt -j 12 --config fbcode//cxx.link_weight=3 --config misc.strip_binaries=debug-non-line -c glow.nnpi_project_name='fb-nnpi-nextgen' ai_codesign/video/inference:xrayvideo_2019a_eval -- --job create --model_a model_prod --device_a PTCPU --trace_a none --model_b model_v3 --device_b NNPI --trace_b fusion --replace_b true --log_level INFO --use_scrambled false --save_repro false --num_ab_runs 0 --symbolic_trace_b true --save_modified_model_b false

USE_INF_API=1 buck test glow/fb/nnpi/lowering:net_min_tests

Reviewed By: 842974287

Differential Revision: D27867010

fbshipit-source-id: 6a756468b1f1fe24ef0400669d911825a7562484
2021-05-10 15:52:17 -07:00
Shiyan Deng
d896d1f4ce [fx splitter] Fix fusion group utility (#57280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57280

We've found an issue where a fusion group can result in a circular dependency. For example:
```
a -> b -> c -> d
|              ^
+ -------------+
```
Only `a` has a non-tensor output, and currently we would create the fusion group (a, b, d). This creates a circular dependency: the fusion group depends on `c`, while `c` depends on the fusion group.

This diff implements the solution discussed before: when we add a node to the fusion group, we also add all the nodes that lie between the existing fusion group and the newly added node.

The minimizer uses the same logic to build fusion groups.
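The fix described above can be sketched in plain Python (hypothetical names, a dict-based graph rather than FX): when a node joins the fusion group, every node lying on a path between the group and that node is pulled in too, so the group can never straddle an outside dependency.

```python
def reachable(graph, start):
    """All nodes reachable from start via directed edges.
    graph is a dict: node -> list of successor nodes."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def add_to_fusion_group(graph, group, new_node):
    """Add new_node plus every node between the current group and
    new_node, so the group cannot create a circular dependency."""
    ancestors_of_new = {n for n in graph if new_node in reachable(graph, n)}
    group = set(group) | {new_node}
    for member in list(group):
        # Nodes "in the middle": descendants of a member that are
        # also ancestors of the newly added node.
        group |= reachable(graph, member) & ancestors_of_new
    return group
```

On the example graph above, fusing `d` into the group (a, b) pulls in `c` as well, eliminating the cycle.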

Test Plan: split_tests and net_min_tests

Reviewed By: khabinov

Differential Revision: D27917432

fbshipit-source-id: a3d99fe5929dbc9f8eb0f45bccd83fd7b173795a
2021-04-30 10:18:01 -07:00
Shiyan Deng
a6fa6a6cda [fx minimizer] Add an option to minimizer to allow return all intermediate results (#57279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57279

Added an option `return_intermediate`. If true, when building the submodule we want to run, we replace the output with all the nodes, so that the intermediate results of every node are returned as output.

This is recommended for use with the `run_node()` function.
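As a rough stand-in for the behavior described above (plain Python, not the actual FX graph machinery; names are hypothetical), `return_intermediate` switches the submodule's output from the final node's result to every node's result:

```python
def run_submodule(ops, inputs, return_intermediate=False):
    """Run a chain of named ops, where ops is a list of (name, fn)
    pairs and each fn consumes the previous result.

    With return_intermediate=True, return a dict of every node's
    result instead of only the last one, mirroring the option.
    """
    results = {}
    value = inputs
    for name, fn in ops:
        value = fn(value)
        results[name] = value
    if return_intermediate:
        return results
    return value
```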

Test Plan: `buck test glow/fb/nnpi/lowering:net_min_tests`

Reviewed By: khabinov

Differential Revision: D27913887

fbshipit-source-id: 5a3eab02da05214fb9adeb25656c267b58075b1d
2021-04-29 13:46:25 -07:00
Shiyan Deng
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer to superclass `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00