pytorch/torch/quantization
Vasiliy Kuznetsov 2a2bc1fc8a ns for fx: add fqn to results, when present (#61377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61377

Both the quantization tracer and the NS tracer record
`_node_name_to_scope`, which contains the mapping from
node name to the fully qualified name (FQN) of the enclosing module.
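For reference, the recorded mapping looks roughly like the sketch below. The exact value shape (a `(module FQN, module type)` tuple, per the quantize_fx tracer's `Scope`) is an assumption here, not something this PR defines:

```
import torch.nn as nn

# Illustrative shape of the recorded mapping (assumed:
# node name -> (module FQN, module type)):
_node_name_to_scope = {
    'fc': ('fc', nn.Linear),  # node created while tracing self.fc
}
```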

This PR adds the FQN information to the NS results, making it
easier for users to attribute an NS result to the corresponding
module in their model.
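As a sketch of how the new field surfaces to users, the snippet below compares an fp32 model against its prepared copy and reads the `fqn` entry from each weight result. The toy model `M`, the qconfig dict, and the exact results layout (layer name -> `'weight'` -> model name -> list of result dicts) are assumptions based on the `_numeric_suite_fx` API at this snapshot:

```
import copy

import torch
import torch.nn as nn
from torch.quantization import quantize_fx
from torch.quantization import _numeric_suite_fx as ns

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        return self.fc(x)

m = M().eval()
qconfig_dict = {'': torch.quantization.default_qconfig}
mp = quantize_fx.prepare_fx(copy.deepcopy(m), qconfig_dict)

# Compare weights of the fp32 model against its prepared copy.
results = ns.extract_weights('fp32', m, 'fp32_prepared', mp)

# Each individual result dict now carries an 'fqn' entry when the
# tracer recorded one, e.g. 'fc' for self.fc above.
for layer_name, layer_results in results.items():
    for model_name, model_results in layer_results['weight'].items():
        for r in model_results:
            print(layer_name, model_name, r.get('fqn'))
```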

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_fqn
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_match_activations_fqn
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_shadow_activations_fqn
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D29600349

fbshipit-source-id: df489e03daff97dd380f59c83ffdc2b0012a0a53
2021-07-17 20:53:41 -07:00
fx ns for fx: support comparing fp32 vs fp32_prepared, except shadowed (#61129) 2021-07-17 20:52:23 -07:00
ns ns for fx: add fqn to results, when present (#61377) 2021-07-17 20:53:41 -07:00
__init__.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
_correct_bias.py
_equalize.py [quant] Eager mode equalization support for ConvReLU and LinearReLU (#58792) 2021-05-24 17:25:13 -07:00
_learnable_fake_quantize.py [docs] Fix backticks in docs (#60474) 2021-06-24 06:27:41 -07:00
_numeric_suite_fx.py ns for fx: add fqn to results, when present (#61377) 2021-07-17 20:53:41 -07:00
_numeric_suite.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
fake_quantize.py [quant] update FakeQuant modules to use tensor qparams (#61318) 2021-07-10 19:43:02 -07:00
fuse_modules.py Add lint for unqualified noqa (#56272) 2021-04-19 13:16:18 -07:00
fuser_method_mappings.py fix docstring for fusing functions (#58638) 2021-05-24 18:27:22 -07:00
observer.py [quant] Add tensor_qparam variant to fake_quantize_per_tensor (#61317) 2021-07-10 19:41:55 -07:00
qconfig.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
quant_type.py
quantization_mappings.py [quant][graphmode][fx] Produce reference linear module in convert (#60152) 2021-06-20 20:08:12 -07:00
quantize_fx.py [quant] Input Weight Equalization - prepare modifications (#59747) 2021-06-16 22:32:28 -07:00
quantize_jit.py Enable the quantization on XPU devices (#54857) 2021-05-20 17:02:13 -07:00
quantize.py [quant][eager][fix] Fix a typo in convert function in eager mode quantization (#59571) 2021-06-08 10:24:22 -07:00
stubs.py
utils.py [quant] Equalization Observer modifications (#59953) 2021-06-16 22:32:30 -07:00