Quantization Backend Configuration
----------------------------------

FX Graph Mode Quantization allows the user to configure various
quantization behaviors of an op in order to match the expectation
of their backend.
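
For example, a ``BackendConfig`` can be passed to the FX Graph Mode
Quantization entry points. The snippet below is a minimal sketch, assuming
the ``torch.ao.quantization`` APIs; the toy module, example inputs, and
qconfig mapping are illustrative placeholders::

    import torch
    from torch.ao.quantization import get_default_qconfig_mapping
    from torch.ao.quantization.backend_config import get_native_backend_config
    from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

    # Illustrative toy model; any FX-traceable nn.Module would work.
    model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
    example_inputs = (torch.randn(1, 8),)

    # The backend config describes how each op may be quantized on the
    # backend; here we reuse the native config shared by x86 and qnnpack.
    backend_config = get_native_backend_config()
    qconfig_mapping = get_default_qconfig_mapping("x86")

    prepared = prepare_fx(
        model, qconfig_mapping, example_inputs, backend_config=backend_config
    )
    # ... feed representative calibration data through `prepared` here ...
    quantized = convert_fx(prepared, backend_config=backend_config)
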

In the future, this document will contain a detailed spec of
these configurations.

Default values for native configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Below is the output of the configuration for quantization of ops
in x86 and qnnpack (PyTorch's default quantized backends).

Results:

.. literalinclude:: scripts/quantization_backend_configs/default_backend_config.txt
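
The same information can also be inspected programmatically. Below is a
minimal sketch, assuming the ``get_native_backend_config`` helper from
``torch.ao.quantization.backend_config`` (this is not necessarily the exact
script that generated the file included above)::

    from pprint import pprint

    from torch.ao.quantization.backend_config import get_native_backend_config

    # The "native" backend config is shared by PyTorch's default quantized
    # backends (x86 and qnnpack).
    backend_config = get_native_backend_config()

    # to_dict() gives a plain-dict view of the per-op pattern configurations.
    pprint(backend_config.to_dict())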