Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40396
Removes the activation and normalization modules from eager mode QAT.
They were added by mistake and are not actually needed.
Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining
```
Imported from OSS
Differential Revision: D22169768
fbshipit-source-id: b5bd753dafe92e90e226fb773eb18c6aae179703
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39203
Adds logic and test coverage for optional weights and biases in
the quantized normalization operators. This was broken before this
PR because the `TORCH_LIBRARY` registration marked them as required parameters;
that requirement is removed and the call sites are cleaned up.
Note: the registrations are consolidated in `native_functions.yaml` rather than `library.cpp`,
following a discussion with ezyang.
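A minimal sketch of the new behavior, calling the quantized layer_norm op with weight and bias omitted; the exact op schema (argument order and the output scale/zero_point arguments) is an assumption based on the registration of this era:
```
import torch

# Hedged sketch: weight=None and bias=None are now accepted by the quantized
# layer_norm op; the schema used below is assumed, not confirmed by this PR text.
x = torch.randn(2, 8, 4, 4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)

# args: input, normalized_shape, weight, bias, eps, output_scale, output_zero_point
qy = torch.ops.quantized.layer_norm(qx, [8, 4, 4], None, None, 1e-5, 0.1, 128)
print(qy.shape)
```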
Test Plan:
```
python test/test_quantization.py TestQuantizedOps.test_qlayer_norm
python test/test_quantization.py TestQuantizedOps.test_group_norm
python test/test_quantization.py TestQuantizedOps.test_instance_norm
python test/test_quantization.py TestStaticQuantizedModule.test_layer_norm
python test/test_quantization.py TestStaticQuantizedModule.test_group_norm
python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_layer_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_group_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_instance_norm
```
Imported from OSS
Differential Revision: D21885259
fbshipit-source-id: 978c7b8bd6c11a03e9e5fdb68f154cb80cc43599
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40101
Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the quantized LSTM module and compares it
against the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability,
and serialization of the quantized LSTM (a minimal sketch follows this list).
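A minimal sketch of the dynamic quantization workflow and numerics comparison these tests exercise (standard torch.quantization API; the shapes and the comparison are illustrative, not the tests' actual values):
```
import torch
import torch.nn as nn

# Build a float LSTM and its dynamically quantized counterpart.
fp32_lstm = nn.LSTM(input_size=8, hidden_size=8, num_layers=1)
q_lstm = torch.quantization.quantize_dynamic(fp32_lstm, {nn.LSTM}, dtype=torch.qint8)

seq = torch.randn(5, 3, 8)                     # (seq_len, batch, input_size)
out_fp32, _ = fp32_lstm(seq)
out_q, _ = q_lstm(seq)
print(torch.max(torch.abs(out_fp32 - out_q)))  # numerics check, as in test_qlstm
```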
ghstack-source-id: 105997268
(Note: this ignores all push blocking failures!)
Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details
buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'
buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details
Differential Revision: D22070826
fbshipit-source-id: 46c333e19b9eab8fa5cab6f132e89b80a635791a
Summary:
Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the quantized LSTM module and compares it
against the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability,
and serialization of the quantized LSTM (a sketch of the script/serialize path follows this list).
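A rough sketch of the scriptability and serialization path that test_quantized_rnn covers (the shapes and buffer handling are illustrative):
```
import io
import torch
import torch.nn as nn

# Dynamically quantize an LSTM, then script, save, and reload it.
q_lstm = torch.quantization.quantize_dynamic(nn.LSTM(8, 8), {nn.LSTM}, dtype=torch.qint8)

scripted = torch.jit.script(q_lstm)    # scriptability
buf = io.BytesIO()
torch.jit.save(scripted, buf)          # serialization
buf.seek(0)
loaded = torch.jit.load(buf)
out, _ = loaded(torch.randn(5, 3, 8))
```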
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38851
ghstack-source-id: 105945574
(Note: this ignores all push blocking failures!)
Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details
buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'
buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details
Differential Revision: D21628596
fbshipit-source-id: 4aeda899f2e5f14bfbe3d82096cb4ce89c725fa1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39090
Makes quantized GroupNorm work in eager mode post-training static quantization.
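A minimal sketch of the eager mode post-training static quantization flow this enables, assuming the standard torch.quantization API of this era (the model, qconfig choice, and calibration data are illustrative):
```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.gn = nn.GroupNorm(2, 4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.gn(self.quant(x)))

m = M().eval()
m.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 4, 8, 8))              # calibration pass
torch.quantization.convert(m, inplace=True)
print(type(m.gn))                       # expected: the quantized GroupNorm module
```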
Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
python test/test_quantization.py TestStaticQuantizedModule.test_group_norm
```
Imported from OSS
Differential Revision: D21885262
fbshipit-source-id: 58b0ffb59c601fcb4c79f711c7c98a667ffc6170
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38434
We insert a dequantize node for each use in order to produce quantization patterns that will
later be fused; after that, we should also remove the extra dequantize nodes produced by this operation.
Test Plan: Imported from OSS
Differential Revision: D21597834
fbshipit-source-id: 18dfb2760bbb08932aa4e1d06f96cfc5fb37ed88
Summary:
Return the unmodified type from the decorator if fbgemm is present.
Fixes `Tried to trace <__torch__.torch.classes.rnn.CellParamsBase object at 0x55f504c56b40> but it is not part of the active trace. Modules that are called during a trace must be registered as submodules of the thing being traced` thrown from `TestPostTrainingDynamic.test_quantized_rnn` by preserving submodules in the returned qRNNBase (i.e. by partially reverting https://github.com/pytorch/pytorch/pull/38134).
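An illustrative sketch of the pattern described above (a hypothetical decorator, not the one from this PR): when FBGEMM is available, return the decorated object unchanged so its attributes and submodules are preserved; otherwise skip:
```
import unittest
import torch

def skip_if_no_fbgemm(obj):
    # Hypothetical decorator illustrating "return unmodified type if fbgemm is present".
    if 'fbgemm' in torch.backends.quantized.supported_engines:
        return obj  # nothing is wrapped or rebuilt
    return unittest.skip("Quantized operations require FBGEMM")(obj)
```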
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38432
Differential Revision: D21567333
Pulled By: malfet
fbshipit-source-id: 364fa2c8fc6e400b4f2e425b922a977756aec1d8
Summary:
I picked the wrong revision when landing the diff; it should have had an actual check rather than `if True`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38058
Differential Revision: D21466152
Pulled By: malfet
fbshipit-source-id: 03fdc510562fab44b7d64a42284d4c3c1f8e940a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37366
- fake quant module and observer module tests can both live in test_workflow_module.py
- added test_quantized_functional.py
- moved the tests in test_numerics.py into test_quantize.py and removed test_numerics.py
Test Plan:
python test/test_quantization.py
Imported from OSS
Differential Revision: D21282198
fbshipit-source-id: 60107cee7d1ed2cd14a45650e91ec28b8a262c52
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36604
Adds the logic to wrap the Hardswish module in FakeQuant
to support QAT.
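A minimal sketch of the eager mode QAT flow this touches, assuming the standard torch.quantization API; the mapping internals this PR changes are not shown:
```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.hs = nn.Hardswish()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.hs(self.quant(x)))

m = M().train()
m.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(m, inplace=True)
m(torch.randn(2, 3))  # fake-quant observers update during the forward pass
```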
Test Plan:
Added test to cover that this happens properly.
Imported from OSS
Differential Revision: D21045322
fbshipit-source-id: 8c46559ade58a5d5c56442285842627a3143eb0f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36545
* adds a quantized nn.Module for Hardswish so we can observe activation values
* modifies the hardswish op to allow specifying the output scale + zero_point
* makes the Hardswish module get properly swapped during static quantization
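A hedged sketch of the resulting usage: the quantized Hardswish module takes an explicit output scale and zero_point (the module name and signature are assumed from the torch.nn.quantized namespace of this era):
```
import torch
import torch.nn.quantized as nnq

x = torch.randn(2, 3)
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=64, dtype=torch.quint8)

# Output quantization parameters are specified explicitly (assumed signature).
qhs = nnq.Hardswish(scale=0.1, zero_point=0)
qy = qhs(qx)
print(qy.q_scale(), qy.q_zero_point())
```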
Test Plan:
added tests and they pass for:
* the new _out flavor of hardswish
* QNNPACK changes
* static quant e2e
Imported from OSS
Differential Revision: D21045320
fbshipit-source-id: ab7e52f0f54a7d5923ab6f58197022cc28c12354
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36173
Previously we ignored the conv bias during training if it existed.
This PR includes the conv bias in the conv+bn fusion process.
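For reference, a small sketch of the conv+bn folding math this relies on; the point of the change is that an existing conv bias b enters the folded bias term instead of being dropped (symbols follow the usual batch-norm folding formulas, not this PR's code):
```
import torch

def fold_conv_bn(w, b, gamma, beta, running_mean, running_var, eps=1e-5):
    # w: (out_ch, in_ch, kH, kW); b: (out_ch,) or None
    std = torch.sqrt(running_var + eps)
    scale = gamma / std
    w_folded = w * scale.reshape(-1, 1, 1, 1)
    if b is None:
        b = torch.zeros_like(running_mean)
    b_folded = (b - running_mean) * scale + beta  # conv bias included here
    return w_folded, b_folded
```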
Test Plan:
python test/quantization/test_quantization.py
Imported from OSS
Differential Revision: D20921613
fbshipit-source-id: eacb2ccf9107f413ac4ef23163ba914af9b90924