Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62234
There was a typo that we didn't catch until recently; this fixes it.
Reviewed By: 842974287
Differential Revision: D29924190
fbshipit-source-id: ee6259fcd41358aefe9680b419acc87c0c2821cb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60972
For PyTorch model memory requirement calculation, `requires_grad` is needed. Output tensors with `requires_grad` are saved in the module context and increase memory during the forward pass.
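A minimal sketch (not part of this diff) of the behavior being accounted for: an output that requires grad keeps its autograd graph, and with it the intermediate activations, alive during the forward pass.
```
import torch

lin = torch.nn.Linear(1024, 1024)
x = torch.randn(8, 1024)

y = lin(x)              # y.requires_grad is True; grad_fn retains activations
print(y.requires_grad)  # True -> contributes to forward-pass memory

with torch.no_grad():
    z = lin(x)          # no graph is retained for z
print(z.requires_grad)  # False
```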
Test Plan: Existing test cases
Reviewed By: jamesr66a
Differential Revision: D29024932
fbshipit-source-id: def990f8c6ff6fa4537bfc377c646b9d44464ebd
Summary:
During development it is common practice to put `type: ignore` comments on lines that are correct but that `mypy` doesn't recognize as such. This often stems from the fact that the `mypy` version in use wasn't able to handle the pattern.
With every new release `mypy` gets better at handling complex code. In addition to fixing all the previously accepted but now failing patterns, we should also revisit all `type: ignore` comments to see whether they are still needed. Fortunately, we don't need to do this manually: by adding `warn_unused_ignores = True` to the configuration, `mypy` will error out whenever it encounters a `type: ignore` that is no longer needed.
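As an illustrative example (not from this PR), with `warn_unused_ignores = True` set in the `mypy` configuration, an ignore comment that a newer `mypy` no longer needs is reported instead of silently tolerated:
```
# mypy now understands this line, so the ignore is flagged:
#   error: Unused "type: ignore" comment
def double(x: int) -> int:
    return x * 2  # type: ignore
```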
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60006
Reviewed By: jbschlosser, malfet
Differential Revision: D29133237
Pulled By: albanD
fbshipit-source-id: 41e82edc5cd5affa7ccedad044b59b94dad4425a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58699
Give `call_function`/`call_method` nodes random colors based on their target name; the coloring is stable, since it is derived from the name of the target. Also handle `tensor_meta` more elegantly for quantized types, including printing `q_scale`/`q_zero_point` if they're used.
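A hypothetical sketch of stable per-target coloring (illustrative names, not the actual FxGraphDrawer code): hashing the target name makes the same target map to the same color on every run, unlike Python's per-process-randomized `hash()`.
```
import hashlib

def color_for_target(target_name: str) -> str:
    # md5 is deterministic across processes, so the color is stable
    digest = hashlib.md5(target_name.encode("utf-8")).hexdigest()
    return "#" + digest[:6]  # first three bytes as an RGB hex color

print(color_for_target("torch.relu"))  # same color every run
```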
Test Plan: Tested locally
Reviewed By: chenccfb, 842974287
Differential Revision: D28580333
fbshipit-source-id: ad9961e1106a1bfa5a018d009b0ddb8802d2163c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57483
Pull Request resolved: https://github.com/pytorch/glow/pull/5622
Quantized linear has packed parameters. We want to unpack them so that it is easier for graph optimizations and the importer to deal with the weight and bias. A customized remapping function is used to unpack quantized linear and map it to acc_op.linear.
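A hedged sketch of the unpacking idea (the actual remapping function lives in the acc tracer; the attribute names below may vary across PyTorch versions):
```
import torch

qlinear = torch.nn.quantized.Linear(4, 4)
packed = qlinear._packed_params._packed_params        # packed weight + bias
weight, bias = torch.ops.quantized.linear_unpack(packed)
print(weight.shape, None if bias is None else bias.shape)
```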
Test Plan: `buck test glow/fb/fx/nnpi_importer:test_importer`
Reviewed By: gcatron, jfix71, khabinov
Differential Revision: D27451237
fbshipit-source-id: e46e961734788fd5333e227ca6143fd37c33204e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57280
We've found an issue where a fusion group could result in a circular dependency. For example:
```
a -> b -> c -> d
|              ^
+--------------+
```
Only `a` has a non-tensor output, and currently we would create a fusion group (a, b, d). This results in a circular dependency: the fusion group depends on `c` (through `d`), while `c` depends on the fusion group (through `b`).
This diff implements the solution discussed before: when we add a node to a fusion group, we also add all the nodes that lie between the fusion group and the newly added node.
The same logic is used in the minimizer to build fusion groups.
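A generic sketch of the fix (hypothetical helper, not the splitter's actual code): when pulling a node into a fusion group, also pull in every node on a dependency path between the group and that node, so no outside node both consumes from and feeds into the group.
```
def nodes_between(deps, group, new_node):
    """deps[n] = set of inputs of n. Return every node on a dependency
    path from new_node back to the group, excluding both ends."""
    memo = {}

    def reaches_group(n):
        # Does n transitively depend on any node in the group?
        if n not in memo:
            memo[n] = n in group or any(reaches_group(d) for d in deps.get(n, ()))
        return memo[n]

    middle = set()

    def collect(n):
        for d in deps.get(n, ()):
            if d not in group and d not in middle and reaches_group(d):
                middle.add(d)
                collect(d)

    collect(new_node)
    return middle

# a -> b -> c -> d plus a -> d: adding d to group {a, b} must pull in c.
deps = {"b": {"a"}, "c": {"b"}, "d": {"c", "a"}}
print(nodes_between(deps, {"a", "b"}, "d"))  # {'c'}
```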
Test Plan: split_tests and net_min_tests
Reviewed By: khabinov
Differential Revision: D27917432
fbshipit-source-id: a3d99fe5929dbc9f8eb0f45bccd83fd7b173795a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57279
Added an option `return_intermediate`. If true, when building the submodule we want to run, we replace the output with all of the nodes, so that the intermediate results of every node are returned as the output.
This is recommended for use with the `run_node()` function.
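A hedged sketch of the idea (not the minimizer's actual implementation): rewrite a `GraphModule`'s output node so that every intermediate node's result is returned.
```
import torch
import torch.fx

def return_all_intermediates(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    graph = gm.graph
    intermediates = [
        n for n in graph.nodes if n.op not in ("placeholder", "output")
    ]
    for node in list(graph.nodes):
        if node.op == "output":
            graph.erase_node(node)  # drop the old single-value output
    graph.output(tuple(intermediates))  # return every node's result
    gm.recompile()
    return gm

m = torch.fx.symbolic_trace(
    torch.nn.Sequential(torch.nn.Linear(2, 2), torch.nn.ReLU())
)
outs = return_all_intermediates(m)(torch.randn(1, 2))  # tuple of all results
```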
Test Plan: `buck test glow/fb/nnpi/lowering:net_min_tests`
Reviewed By: khabinov
Differential Revision: D27913887
fbshipit-source-id: 5a3eab02da05214fb9adeb25656c267b58075b1d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201
Refactor Splitter and Minimizer into superclasses `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with these tools.
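A conceptual sketch of what a splitter does (hypothetical helper, not the `_SplitterBase` API): tag consecutive runs of supported and unsupported nodes so that each run can become its own submodule.
```
import torch.fx

def tag_runs(gm: torch.fx.GraphModule, is_supported) -> dict:
    """Group consecutive supported/unsupported nodes into alternating runs."""
    tags, current, run_id = {}, None, -1
    for node in gm.graph.nodes:
        if node.op in ("placeholder", "output"):
            continue
        supported = bool(is_supported(node))
        if supported != current:  # start a new run on every flip
            current, run_id = supported, run_id + 1
        tags[node] = ("acc" if supported else "cpu", run_id)
    return tags
```
Such tags could then drive `torch.fx.passes.split_module.split_module` to materialize one submodule per run.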
Test Plan: CI
Reviewed By: jackm321
Differential Revision: D27629598
fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
Summary:
Commandeered from https://github.com/pytorch/pytorch/pull/54563
Primary changes from first PR:
1. Refactored primary `normalize_function` logic into `operator_schemas.py` so that non-FX users can use it (see the usage sketch after this list).
2. Refactored tests a bit, and added a path to call `normalize_function` directly.
3. Moved check for `boolean_dispatch` so that `torch.lu` also gets properly handled.
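A hedged usage sketch of the refactored entry point (the exact signature may differ across versions; the point is that it's callable without going through FX tracing):
```
import torch
from torch.fx.operator_schemas import normalize_function

t = torch.randn(3)
pair = normalize_function(torch.add, (t, t), normalize_to_only_use_kwargs=True)
if pair is not None:           # None when the overload can't be resolved
    print(pair.kwargs.keys())  # e.g. dict_keys(['input', 'other', 'alpha'])
```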
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55992
Reviewed By: mruberry
Differential Revision: D27774396
Pulled By: Chillee
fbshipit-source-id: 7f65632e1d608e4abd55aec5ccbfdc3f67f52b8e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56212
The current design doesn't make it easy to use `node.copy()`. This diff explicitly copies over the node's `meta`.
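A hedged illustration of the resulting behavior (not the patch itself): copying a node into another graph carries `node.meta` along.
```
import torch
import torch.fx

gm = torch.fx.symbolic_trace(torch.nn.ReLU())
new_graph = torch.fx.Graph()
env = {}
for node in gm.graph.nodes:
    # node_copy remaps argument nodes via the env of already-copied nodes
    new_node = new_graph.node_copy(node, lambda n: env[n.name])
    env[node.name] = new_node
    assert new_node.meta == node.meta  # meta (e.g. tensor_meta) is preserved
```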
Test Plan: Updated `test_subgraph_creation` in `test_fx_experimental`
Reviewed By: jamesr66a
Differential Revision: D27808477
fbshipit-source-id: 7fe7b6428c830307dbd1e395f16fa2774936d3b3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55405
Pull Request resolved: https://github.com/pytorch/glow/pull/5516
Allows FXIRImport to import quantized models.
This diff doesn't include support for per-channel weights, linear, or conv. Those will be addressed in the next diff.
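A hedged illustration of the qparams an importer has to read, and of the per-channel case this diff defers:
```
import torch

t = torch.randn(4, 4)

q = torch.quantize_per_tensor(t, scale=0.1, zero_point=0, dtype=torch.qint8)
print(q.qscheme(), q.q_scale(), q.q_zero_point())   # per-tensor qparams

qc = torch.quantize_per_channel(
    t,
    scales=torch.full((4,), 0.1),
    zero_points=torch.zeros(4, dtype=torch.long),
    axis=0,
    dtype=torch.qint8,
)
print(qc.qscheme(), qc.q_per_channel_scales())      # per-channel qparams
```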
Test Plan: buck test glow/fb/fx/nnpi_importer:test_importer
Reviewed By: jackm321, jfix71
Differential Revision: D27313543
fbshipit-source-id: bf5c96ef5f2ff1835c09db981e0ceefaec56dd5b