Summary:
Fixes: https://github.com/pytorch/pytorch/issues/78117
Fixes: https://github.com/pytorch/pytorch/issues/73463
This PR adds a normalization pass that converts all positional args to keyword args (in positional order), and it fixes lowering code that previously used only `node.args` to use both `args` and `kwargs`.
We also tried to add a test for F.conv2d, but conv2d matches multiple schemas, so an extra schema match is needed, and because `transform` uses symbolic values there is no schema match; F.conv2d therefore still fails with runtime errors. We can resolve this later if the need arises.
Another option under consideration is to do the normalization with real inputs instead of symbolic inputs, relying on `inspect.signature` rather than on operator_schemas (which is based on TorchScript). I tried this briefly but didn't get far: it appears we cannot get the Python signature for `torch._C._nn.linear`. That might be fixable too, but it will need follow-up discussion.
The goal of this PR is simply to introduce normalization into our codebase so that downstream code can start adapting to it, and to fix the F.linear issue.
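For background, here is a minimal sketch of this kind of normalization using torch.fx's schema utilities (`normalize_function` from `torch.fx.operator_schemas`). This is an illustration, not the exact pass added in this PR, and as noted above it cannot resolve every op:

```python
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace
from torch.fx.operator_schemas import normalize_function

class M(torch.nn.Module):
    def forward(self, x):
        return F.hardtanh(x, -1.0, 1.0)  # all-positional call

gm = symbolic_trace(M())
for node in gm.graph.nodes:
    if node.op == "call_function":
        # Match the call against the target's signature/schema and rewrite
        # it to use keyword arguments only, in positional order.
        normalized = normalize_function(
            node.target, node.args, node.kwargs,
            normalize_to_only_use_kwargs=True,
        )
        if normalized is not None:
            node.args, node.kwargs = normalized.args, normalized.kwargs
gm.recompile()
# The generated code now calls hardtanh(input=x, min_val=-1.0, max_val=1.0).
```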
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_normalize_args
Differential Revision: [D37163228](https://our.internmc.facebook.com/intern/diff/D37163228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79095
Approved by: https://github.com/andrewor14
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74637
We forgot to update the expect file in https://github.com/pytorch/pytorch/pull/74242. This reland includes the changes to the expect file.
Test Plan: unit test
Reviewed By: yinghai
Differential Revision: D35089989
fbshipit-source-id: 5e3ad9c696cf31cbc691d34fdb77eff26f92e38d
(cherry picked from commit 110ac12f5e2bcca7552d4b4691c7d98fafb21a57)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74242
If users use custom codegen, the inputs and outputs of the graph module might differ from the inputs and outputs of the underlying graph. Interpreter runs the graph rather than the generated forward function, so it can fail when the user provides inputs in the graph module's calling convention. To fill the gap, we now call `process_inputs` and `process_outputs` inside Interpreter.
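A minimal sketch of the mismatch, loosely following the `test_interpreter_with_codegen` unit test (the `ListCodeGen` here is illustrative):

```python
import torch
from torch.fx import symbolic_trace, Interpreter
from torch.fx.graph import CodeGen

class ListCodeGen(CodeGen):
    """Codegen whose generated forward() takes all inputs packed in one
    list, so the module-level and graph-level calling conventions differ."""
    def gen_fn_def(self, free_vars, maybe_return_annotation):
        return f"""
def forward(self, args_list: list){maybe_return_annotation}:
    {', '.join(free_vars)} = args_list"""

    def process_inputs(self, *inputs):
        # Unpack the single list argument into the graph's placeholders.
        assert len(inputs) == 1
        return inputs[0]

    def process_outputs(self, outputs):
        # Repack the graph's tuple output as a list at the module boundary.
        return list(outputs)

def f(a, b):
    return a + b, a - b

gm = symbolic_trace(f)
gm.graph.set_codegen(ListCodeGen())
gm.recompile()

vals = [torch.randn(3), torch.randn(3)]
gm(vals)                   # module convention: a single list argument
Interpreter(gm).run(vals)  # now matches, via process_inputs/process_outputs
```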
Test Plan: unit test: test_interpreter_with_codegen
Reviewed By: jamesr66a, Chillee
Differential Revision: D34898108
fbshipit-source-id: 250bd236f6c8c1268a363cf19a09521a4f64b3a9
(cherry picked from commit b33076fa3b10788d455cecc590bc01c4ad8ef94c)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74189
Use the codegen from the original graph module for the new graph module produced by Transformer.
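A short illustrative sketch (reusing the hypothetical `ListCodeGen` from the example above):

```python
import torch
from torch.fx import symbolic_trace, Transformer

def f(a, b):
    return a + b, a - b

gm = symbolic_trace(f)
gm.graph.set_codegen(ListCodeGen())  # the custom codegen sketched earlier
gm.recompile()

# After this diff, the new module produced by Transformer reuses gm's
# codegen, so it keeps the custom list-packed calling convention instead
# of reverting to the default forward(self, a, b).
transformed = Transformer(gm).transform()
```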
Test Plan: Added a unit test: test_custom_codegen_with_transformer
Reviewed By: yinghai
Differential Revision: D34867938
fbshipit-source-id: fcda6600faeccfa7a650ba7226ca125e8440b19c
(cherry picked from commit d098c12081f61ddcf69052db5b8a1f31b0a0b67b)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73406
Placeholder defaults are stored in `node.args`, but normalization had been dropping them. This diff passes the default args through the normalization transformation.
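For context, a small sketch of where a placeholder's default value lives in torch.fx (the module is illustrative):

```python
import torch
from torch.fx import symbolic_trace

class M(torch.nn.Module):
    def forward(self, x, scale=2.0):
        return x * scale

gm = symbolic_trace(M())
for node in gm.graph.nodes:
    if node.op == "placeholder":
        # node.args is () for a required input and (default_value,) for an
        # optional one; the normalization pass must carry this through
        # instead of dropping it.
        print(node.target, node.args)
# prints: x ()
#         scale (2.0,)
```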
Test Plan:
Added tests to cover cases with optional inputs. The tests cover:
* nothing passed to optional input
* `None` passed to optional input
* a tensor passed to optional input
Reviewed By: jfix71
Differential Revision: D34463493
fbshipit-source-id: f0c3a4083cb3dd4a69111a758561f0d2c0609787
(cherry picked from commit 7fb482cbfc34077426efa18ac74311bd4533dcdf)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60057
This ensures that if a function was `wrap`'d before symbolic tracing and the traced module is then passed into the Transformer, the function will still be wrapped.
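A minimal sketch of the scenario (`leaf_fn` and `M` are illustrative names):

```python
import torch
import torch.fx

@torch.fx.wrap
def leaf_fn(x):
    # Marked as a leaf: recorded as a call_function node rather than
    # being traced through.
    return x + 1

class M(torch.nn.Module):
    def forward(self, x):
        return leaf_fn(x)

gm = torch.fx.symbolic_trace(M())
transformed = torch.fx.Transformer(gm).transform()
# With this change, leaf_fn is still treated as wrapped in the new module,
# so transformed(torch.randn(3)) behaves the same as gm(torch.randn(3)).
```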
Test Plan: Added test to `test_fx.py`
Reviewed By: jamesr66a
Differential Revision: D29151191
fbshipit-source-id: 93560be59505bdcfe8d4f013e21d4719788afd59
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52473
Use `map_aggregate` to create the output for the new graph so that it won't raise an error when some outputs are not `Proxy`s.
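A small sketch of the case this fixes (the module is illustrative):

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        # The second output is a plain int, not a Proxy.
        return x + 1, 3

gm = torch.fx.symbolic_trace(M())
# Previously Transformer assumed every output leaf was a Proxy and raised
# here; with map_aggregate, constant leaves are passed through as-is.
transformed = torch.fx.Transformer(gm).transform()
print(transformed(torch.randn(2)))
```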
Test Plan: `test_transformer_multi_outputs` in `test_fx.py`
Reviewed By: jamesr66a
Differential Revision: D26502277
fbshipit-source-id: 404d9030a9b84db3f66f8505887a75717a28ad30