Commit Graph

227 Commits

Oleg Khabinov
a0c1c7e5d4 Fixing the case when starter nodes depend on get_attr node (#62234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62234

There was a typo that we didn't catch until recently; this fixes it.

Reviewed By: 842974287

Differential Revision: D29924190

fbshipit-source-id: ee6259fcd41358aefe9680b419acc87c0c2821cb
2021-07-27 10:29:53 -07:00
Shiyan Deng
cc18654d66 [fx_acc] Refactoring acc_tracer (#61963)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61963

Test Plan: CI

Reviewed By: jfix71

Differential Revision: D29772522

fbshipit-source-id: 4b117735147624f9428b933ea798495823423a0e
2021-07-21 20:09:15 -07:00
Zeina Migeed
4e2fe9718d flatten operation (resnet50) (#61265)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61265

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D29626383

Pulled By: migeed-z

fbshipit-source-id: 107769fc14f1fad295a93a10e84235f25ae17357
2021-07-16 16:06:10 -07:00
Malay Bag
287c0ab170 [FX] Add requires_grad to TensorMetadata (#60972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60972

For PyTorch model memory requirement calculation, requires_grad is needed. Output tensors with requires_grad are saved in the module context and increase memory during the forward pass.
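
A minimal sketch (assuming the `ShapeProp` pass from `torch.fx`; the toy module and the byte-count heuristic are illustrative) of reading `requires_grad` back out of each node's recorded tensor metadata:

```
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.shape_prop import ShapeProp

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x).relu()

gm = symbolic_trace(M())
ShapeProp(gm).propagate(torch.randn(2, 4))

# Rough proxy for extra forward-pass memory: bytes of outputs that
# require grad and hence may be saved for backward.
saved_bytes = 0
for node in gm.graph.nodes:
    tm = node.meta.get("tensor_meta")
    if tm is not None and tm.requires_grad:
        saved_bytes += tm.shape.numel() * torch.empty((), dtype=tm.dtype).element_size()
print(saved_bytes)
```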

Test Plan: Existing test cases

Reviewed By: jamesr66a

Differential Revision: D29024932

fbshipit-source-id: def990f8c6ff6fa4537bfc377c646b9d44464ebd
2021-06-29 23:07:27 -07:00
Philip Meier
d5988c5eca remove unused type: ignore directives (#60006)
Summary:
During development it is common practice to put `type: ignore` comments on lines that are correct, but that `mypy` doesn't recognize as such. This often stems from the fact that the `mypy` version in use wasn't able to handle the pattern.

With every new release `mypy` gets better at handling complex code. In addition to fixing the previously accepted but now failing patterns, we should also revisit all `type: ignore` comments to see if they are still needed. Fortunately, we don't need to do this manually: by adding `warn_unused_ignores = True` to the configuration, `mypy` will error out whenever it encounters a `type: ignore` that is no longer needed.
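
For illustration, a stale suppression of the kind this setting catches (hypothetical snippet):

```
# An older mypy needed this ignore; a newer one understands the line fine.
x: int = int("3")  # type: ignore[misc]

# With `warn_unused_ignores = True`, mypy now reports on that line:
#   error: Unused "type: ignore" comment
```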

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60006

Reviewed By: jbschlosser, malfet

Differential Revision: D29133237

Pulled By: albanD

fbshipit-source-id: 41e82edc5cd5affa7ccedad044b59b94dad4425a
2021-06-18 07:23:31 -07:00
Oleg Khabinov
0d7d316dc1 [fx ir] Support lists and dicts in FX IR GraphDrawer (#58775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58775

Reviewed By: RoshanPAN

Differential Revision: D28613939

fbshipit-source-id: 4164e2dd772b59240ea3907001fe4ebddb003060
2021-06-10 01:55:53 -07:00
Jordan Fix
6d97a80dd2 [fx][graph_drawer] Improve graph drawer coloring and tensor_meta handling (#58699)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58699

Make `call_function`/`call_method` nodes random colors based on their target name. The coloring is stable with respect to the target's name. Also handle tensor_meta more elegantly for quantized types, including printing q_scale/q_zero_point if they're used.
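
A sketch of the stable-coloring idea (illustrative only, not the GraphDrawer code): derive the color from a content hash of the target name, so the same op gets the same color in every run and every graph.

```
import hashlib

def color_for_target(target_name: str) -> str:
    # md5 is deterministic across processes (unlike Python's built-in
    # hash()), which is what makes the coloring "stable".
    digest = hashlib.md5(target_name.encode()).hexdigest()
    return "#" + digest[:6]  # first three bytes as an RGB hex color

print(color_for_target("torch.relu"))  # identical output on every run
```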

Test Plan: Tested locally

Reviewed By: chenccfb, 842974287

Differential Revision: D28580333

fbshipit-source-id: ad9961e1106a1bfa5a018d009b0ddb8802d2163c
2021-05-20 21:26:04 -07:00
Shiyan Deng
bcacf91a71 [fx_glow]Add Support for importing quantized linear in FXIRImporter (#57483)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57483

Pull Request resolved: https://github.com/pytorch/glow/pull/5622

Quantized linear has packed parameters. We want to unpack them so that it is easier for graph optimizations and the importer to deal with the weight and bias. A customized remapping function is used to unpack quantized linear and map it to acc_op.linear.
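
A hedged sketch of what "packed parameters" means here (`_weight_bias()` is a private helper of the quantized module; exact APIs vary by release):

```
import torch
import torch.nn.quantized as nnq

# A quantized linear stores weight and bias together as one packed object.
qlinear = nnq.Linear(4, 8)

# Unpacking exposes them separately, which is what graph optimizations and
# the importer want to see.
weight, bias = qlinear._weight_bias()
print(weight.shape, weight.dtype)  # torch.Size([8, 4]) torch.qint8
print(bias.shape)                  # torch.Size([8])
```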

Test Plan: `buck test glow/fb/fx/nnpi_importer:test_importer`

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: e46e961734788fd5333e227ca6143fd37c33204e
2021-05-14 18:48:31 -07:00
Shiyan Deng
9d56176034 Fix splitter and add a unittest (#58075)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58075

Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1687

Reviewed By: mikekgfb

Differential Revision: D28357724

fbshipit-source-id: 36c2d211576a90107bc75468a39408ffecaeed43
2021-05-12 10:40:37 -07:00
Oleg Khabinov
36a22967b7 [fx ir] Handle the case when output consumes get_attr directly (#57844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57844

Reviewed By: 842974287

Differential Revision: D28294298

fbshipit-source-id: db337fadca9f10f208324c9da6d95620178a189b
2021-05-10 22:04:43 -07:00
Aravind Kalaiah
747312bf61 Support for accumulate nodes traversal and to access op names in the compare function (#57685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57685

- Accumulate traversal: `minimizer.settings.traverse_method = "accumulate"`
   - Feature
   - net_min_tests
- Return op names to the compare function so that we can map the cosine similarity to individual ops (see the sketch below)
- Fix the settings combinations in net_min_tests
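
A hedged sketch of a compare function that receives op names (the callback signature and return shape are assumed from the minimizer's convention; `compare_fn` is an illustrative name):

```
import torch

def compare_fn(ref_out, test_out, names):
    # `names` identifies the ops in the submodule under test, so the cosine
    # similarity can be attributed to individual ops in a report.
    cos = torch.nn.functional.cosine_similarity(
        ref_out.flatten().float(), test_out.flatten().float(), dim=0
    )
    print(f"{names}: cosine similarity = {cos.item():.6f}")
    return cos.item(), cos.item() > 0.99  # (score, pass/fail)
```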

Test Plan:
buck test glow/fb/nnpi/lowering:net_min_tests

NNPI_LOG_LEVEL=5 USE_INF_API=1 buck run mode/opt -j 12 --config fbcode//cxx.link_weight=3 --config misc.strip_binaries=debug-non-line -c glow.nnpi_project_name='fb-nnpi-nextgen' ai_codesign/video/inference:xrayvideo_2019a_eval -- --job create --model_a model_prod --device_a PTCPU --trace_a none --model_b model_v3 --device_b NNPI --trace_b fusion --replace_b true --log_level INFO --use_scrambled false --save_repro false --num_ab_runs 0 --symbolic_trace_b true --save_modified_model_b false

USE_INF_API=1 buck test glow/fb/nnpi/lowering:net_min_tests

Reviewed By: 842974287

Differential Revision: D27867010

fbshipit-source-id: 6a756468b1f1fe24ef0400669d911825a7562484
2021-05-10 15:52:17 -07:00
Oleg Khabinov
73f22bcbf9 [fx ir] Handle cases in GraphDrawer when shape, type or stride are not present (#57845)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57845

As title says

Test Plan: N/A

Reviewed By: 842974287

Differential Revision: D28295999

fbshipit-source-id: f2cbf80c468f13685b17bb396c1f48972744ced0
2021-05-07 17:24:48 -07:00
Shiyan Deng
d896d1f4ce [fx splitter] Fix fusion group utility (#57280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57280

We've found an issue where a fusion group can result in a circular dependency. For example:
```
a -> b -> c -> d
|              ^
+ -------------+
```
Only `a` has a non-tensor output, and currently we would create the fusion group (a, b, d). This results in a circular dependency, because the fusion group depends on `c` while `c` depends on the fusion group as well.

This diff implements the solution discussed before: when we add a node to a fusion group, we also add all the nodes that lie between the fusion group and the newly added node.

Use the same logic in the minimizer to build fusion groups.
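
A sketch of the rule on an FX graph (illustrative, not the splitter's actual code): a node sits "between" the group and the new node if it is an ancestor of the new node and itself depends on a group member.

```
from torch.fx import Node

def ancestors(node: Node) -> set:
    # All transitive inputs of `node`.
    seen, stack = set(), list(node.all_input_nodes)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(n.all_input_nodes)
    return seen

def nodes_to_pull_in(group: set, new_node: Node) -> set:
    # Leaving any such node out would make the fused submodule both feed
    # it and consume it, i.e. a circular dependency.
    return {n for n in ancestors(new_node) if ancestors(n) & group}
```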

Test Plan: split_tests and net_min_tests

Reviewed By: khabinov

Differential Revision: D27917432

fbshipit-source-id: a3d99fe5929dbc9f8eb0f45bccd83fd7b173795a
2021-04-30 10:18:01 -07:00
Shiyan Deng
a6fa6a6cda [fx minimizer] Add an option to minimizer to allow return all intermediate results (#57279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57279

Added an option `return_intermediate`. If true, when building the submodule we want to run, we replace the output with all the nodes, so that the intermediate results of every node are returned as outputs.

This is recommended for use with the `run_node()` function.
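
A minimal sketch of the rewrite behind `return_intermediate` (assuming a plain `torch.fx.GraphModule`; the helper name is illustrative):

```
import torch
from torch.fx import symbolic_trace

def return_all_intermediates(gm):
    # Point the output node at every other node, so one forward run yields
    # every intermediate value.
    output_node = next(n for n in gm.graph.nodes if n.op == "output")
    output_node.args = (tuple(n for n in gm.graph.nodes if n.op != "output"),)
    gm.graph.lint()
    gm.recompile()
    return gm

gm = return_all_intermediates(symbolic_trace(
    torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())))
print(len(gm(torch.randn(2, 4))))  # input + linear + relu results
```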

Test Plan: `buck test glow/fb/nnpi/lowering:net_min_tests`

Reviewed By: khabinov

Differential Revision: D27913887

fbshipit-source-id: 5a3eab02da05214fb9adeb25656c267b58075b1d
2021-04-29 13:46:25 -07:00
Horace He
786b0a8091 [FX] fix normalization issues with lists of tensors (#57004)
Summary:
Fixes an issue with lists of tensors not being normalized correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57004

Reviewed By: jamesr66a

Differential Revision: D28034559

Pulled By: Chillee

fbshipit-source-id: f935f0b73a8356acd8a2ae93fcfc0417f0eab224
2021-04-27 20:02:00 -07:00
Shiyan Deng
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer into superclasses `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00
Horace He
0df239e550 [FX] Make arg normalization a method on Node and not a pass (also augment tests to be exhaustive) (#55992)
Summary:
Commandeered from https://github.com/pytorch/pytorch/pull/54563

Primary changes from the first PR:
1. Refactored the primary `normalize_function` logic into `operator_schemas.py` so that non-FX users can use it (see the usage sketch below).
2. Refactored tests a bit, and added a path to call `normalize_function` directly.
3. Moved the check for `boolean_dispatch` so that `torch.lu` is also handled properly.
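
A hedged usage sketch of `normalize_function` (assuming the `torch.fx.operator_schemas` module path; it returns an args/kwargs pair, or `None` if no schema matches):

```
import torch
from torch.fx.operator_schemas import normalize_function

# Map positional arguments onto the schema's parameter names.
pair = normalize_function(
    torch.add, (torch.randn(3), torch.randn(3)),
    normalize_to_only_use_kwargs=True,
)
if pair is not None:
    print(pair.args, list(pair.kwargs))  # e.g. () ['input', 'other']
```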

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55992

Reviewed By: mruberry

Differential Revision: D27774396

Pulled By: Chillee

fbshipit-source-id: 7f65632e1d608e4abd55aec5ccbfdc3f67f52b8e
2021-04-22 03:53:41 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
James Reed
d02919dd50 [FX] Make shape_prop handle targets with aggregate outputs (#56221)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56221

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D27810693

Pulled By: jamesr66a

fbshipit-source-id: 17c6ad671786b3bacb5026bd88b8f5b7b4b96a1a
2021-04-16 18:58:25 -07:00
Jordan Fix
5eadc243f3 Preserve node meta info in split_module (#56212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56212

The current design doesn't make it easy to use `node.copy()`. Explicitly copy over the node's meta.
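
A sketch of the pattern (explicitly carrying `meta` when copying nodes between graphs; the snippet is illustrative, not `split_module` itself):

```
import torch
from torch.fx import Graph, symbolic_trace

gm = symbolic_trace(torch.nn.Sequential(torch.nn.Linear(4, 4)))
new_graph = Graph()
env = {}
for node in gm.graph.nodes:
    new_node = new_graph.node_copy(node, lambda n: env[n])
    new_node.meta = node.meta.copy()  # don't lose tensor_meta etc.
    env[node] = new_node
```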

Test Plan: Updated `test_subgraph_creation` in `test_fx_experimental`

Reviewed By: jamesr66a

Differential Revision: D27808477

fbshipit-source-id: 7fe7b6428c830307dbd1e395f16fa2774936d3b3
2021-04-16 18:02:50 -07:00
James Reed
2236f43da0 [FX] Put tensor metadata into a NamedTuple in ShapeProp (#55930)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55930
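
A sketch of reading the named tuple back after propagation (the field set is assumed from this and neighboring commits: shape, dtype, stride, memory_format, requires_grad):

```
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.shape_prop import ShapeProp

gm = symbolic_trace(torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU()))
ShapeProp(gm).propagate(torch.randn(2, 4))

for node in gm.graph.nodes:
    tm = node.meta.get("tensor_meta")  # a TensorMetadata named tuple
    if tm is not None:
        print(node.name, tuple(tm.shape), tm.dtype, tm.stride)
```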

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27741730

Pulled By: jamesr66a

fbshipit-source-id: 0a0a1b94beed6c482add9e9551f316f3b4220ab2
2021-04-13 22:21:50 -07:00
James Reed
8bdea14cd3 [FX] Add memory_format to shape_prop (#55815)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55815

Test Plan: Imported from OSS

Reviewed By: pbelevich, ansley

Differential Revision: D27716342

Pulled By: jamesr66a

fbshipit-source-id: f7c22dd77a4f48650700fc4c3c44b1c59196282e
2021-04-13 16:37:54 -07:00
Shiyan Deng
43ede4c2e3 Add Per Tensor Quantization Support to FXIRImporter (#55405)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55405

Pull Request resolved: https://github.com/pytorch/glow/pull/5516

Allows FXIRImporter to import quantized models.

This diff doesn't include support for per-channel weights, linear, and conv. Those will be addressed in the next diff.
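
For reference, per-tensor quantization uses a single scale/zero-point pair for the whole tensor (quick illustration with core PyTorch APIs):

```
import torch

x = torch.randn(2, 3)
# One (scale, zero_point) pair for the entire tensor -- "per tensor".
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
print(q.q_scale(), q.q_zero_point())  # 0.1 128

# Per-channel quantization (one scale per channel) is what this diff defers:
# torch.quantize_per_channel(x, scales, zero_points, axis=0, dtype=torch.qint8)
```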

Test Plan: buck test glow/fb/fx/nnpi_importer:test_importer

Reviewed By: jackm321, jfix71

Differential Revision: D27313543

fbshipit-source-id: bf5c96ef5f2ff1835c09db981e0ceefaec56dd5b
2021-04-09 10:49:48 -07:00
James Reed
641d4ff160 [FX] Add stride to shape_prop pass (#55108)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55108

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27482241

Pulled By: jamesr66a

fbshipit-source-id: 7d928015712126e916c86225dc3ab27aba22d431
2021-04-02 19:57:11 -07:00
James Reed
bcb4583170 [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54926
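
A small sketch of `node.meta` as a general side-channel for passes (the key below is hypothetical; shape propagation stores its results under `tensor_meta` in the same dict):

```
import torch
from torch.fx import symbolic_trace

gm = symbolic_trace(torch.nn.Sequential(torch.nn.Linear(2, 2)))
for i, node in enumerate(gm.graph.nodes):
    node.meta["topo_index"] = i  # arbitrary per-node data rides along
print({n.name: n.meta["topo_index"] for n in gm.graph.nodes})
```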

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27417801

Pulled By: jamesr66a

fbshipit-source-id: 68a5155120a235065f58aa64ba1a6a97818dd0c1
2021-03-31 14:36:54 -07:00
Zeina Migeed
5105250e16 [FX] Add docs for shape propagation (#54554)
Summary:
Fixes #54538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54554

Reviewed By: nikithamalgifb

Differential Revision: D27281263

Pulled By: jamesr66a

fbshipit-source-id: 2fd3914f0e24be0b6a18ad7715f3336dcf7949ba
2021-03-23 21:18:11 -07:00
James Reed
a1c5eba4bd [FX] Move some heavily used passes out of experimental (#51392)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51392

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26161172

Pulled By: jamesr66a

fbshipit-source-id: 04bfe606555bdf1988f527231d4de2e0196e6b37
2021-02-01 19:02:26 -08:00