pytorch/torch/quantization
James Reed a0b13b4fa5 extra_repr for quantized modules (#24443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24443

This gives us useful information about the Module when we print it, like so:

```
FloatModule(
  (quant): Quantize()
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1), scale=0.08209919929504395, zero_point=128)
  (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1), scale=0.16885940730571747, zero_point=128)
  (fc1): Linear(in_features=800, out_features=500, bias=True, scale=0.12840059399604797, zero_point=128)
  (fc2): Linear(in_features=500, out_features=10, bias=True, scale=0.260015606880188, zero_point=128)
  (dequant): DeQuantize()
)
```
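
For context, `nn.Module.__repr__` builds that printout by calling each submodule's `extra_repr()` hook and splicing the returned string between the parentheses; this PR overrides that hook on the quantized modules so `scale` and `zero_point` show up. A minimal sketch of the mechanism (the `ToyQuantizedLinear` class below is hypothetical, not the actual diff):

```
import torch.nn as nn

class ToyQuantizedLinear(nn.Module):
    # Hypothetical stand-in for a quantized module; it only stores the
    # parameters needed to demonstrate extra_repr().
    def __init__(self, in_features, out_features, scale, zero_point):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.scale = scale
        self.zero_point = zero_point

    def extra_repr(self):
        # Whatever this returns is inserted into repr(module) by
        # nn.Module.__repr__, producing output like the example above.
        return 'in_features={}, out_features={}, scale={}, zero_point={}'.format(
            self.in_features, self.out_features, self.scale, self.zero_point)

print(ToyQuantizedLinear(800, 500, 0.1284, 128))
# ToyQuantizedLinear(in_features=800, out_features=500, scale=0.1284, zero_point=128)
```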

Test Plan: Imported from OSS

Differential Revision: D16847140

Pulled By: jamesr66a

fbshipit-source-id: 8c995108f17ed1b086d1fb30471a41c532c68080
2019-08-16 22:38:45 -07:00
__init__.py Dynamic Quantized Linear Module (#23128) 2019-08-13 21:01:23 -07:00
fake_quantize.py Change return type of observer to two tensors (#24339) 2019-08-15 10:26:44 -07:00
fuse_modules.py ConvBn2d/ConvBnReLU2d (#23357) 2019-08-01 10:07:00 -07:00
observer.py extra_repr for quantized modules (#24443) 2019-08-16 22:38:45 -07:00
QConfig.py Fix QConfig_dynamic typename (#24431) 2019-08-15 15:25:19 -07:00
quantize.py Fixes the adding of the observer to the FloatFunctional (#24418) 2019-08-15 17:27:00 -07:00