pytorch/docs/source/torch.quantization.rst

.. _torch_quantization:

torch.quantization
------------------

.. automodule:: torch.quantization

This module implements the functions you call
directly to convert your model from FP32 to quantized form. For
example, :func:`~torch.quantization.prepare` is used in post-training
quantization to prepare your model for the calibration step, and
:func:`~torch.quantization.convert` actually converts the weights to int8 and
replaces the operations with their quantized counterparts. There are
other helper functions for things like quantizing the input to your
model and performing critical fusions like conv+relu.
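
For orientation, here is a minimal sketch of the post-training static
quantization workflow described above. The toy model, the random
calibration data, and the choice of the ``fbgemm`` qconfig are
illustrative assumptions, not requirements of the API.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.quantization

    # QuantStub/DeQuantStub mark where tensors enter and leave the
    # quantized region of the network.
    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = torch.quantization.QuantStub()
            self.fc = nn.Linear(8, 4)
            self.dequant = torch.quantization.DeQuantStub()

        def forward(self, x):
            return self.dequant(self.fc(self.quant(x)))

    model = M().eval()
    model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

    # Insert observers, run representative data through the model to
    # calibrate them, then convert weights and ops to quantized form.
    prepared = torch.quantization.prepare(model)
    for _ in range(4):  # stand-in for a real calibration loader
        prepared(torch.randn(2, 8))
    quantized = torch.quantization.convert(prepared)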

Top-level quantization APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autofunction:: quantize
.. autofunction:: quantize_dynamic
.. autofunction:: quantize_qat
.. autofunction:: prepare
.. autofunction:: prepare_qat
.. autofunction:: convert
.. autoclass:: QConfig
.. autoclass:: QConfigDynamic
.. FIXME: The following doesn't display correctly.
.. autoattribute:: default_qconfig
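
Of these, :func:`quantize_dynamic` is the quickest to try, since it
needs no calibration step. A small sketch (quantizing only
``nn.Linear`` modules to ``torch.qint8`` is a common choice for this
example, not a requirement):

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.quantization

    float_model = nn.Sequential(
        nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4)
    ).eval()

    # Weights are quantized to int8 ahead of time; activations are
    # quantized dynamically at run time.
    quantized_model = torch.quantization.quantize_dynamic(
        float_model, {nn.Linear}, dtype=torch.qint8
    )
    out = quantized_model(torch.randn(1, 16))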

Preparing model for quantization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autofunction:: fuse_modules
.. autoclass:: QuantStub
.. autoclass:: DeQuantStub
.. autoclass:: QuantWrapper
.. autofunction:: add_quant_dequant
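
As a sketch of :func:`fuse_modules` (the ``'0'``/``'1'``/``'2'`` names
below simply index into the illustrative ``nn.Sequential``):

.. code-block:: python

    import torch.nn as nn
    import torch.quantization

    m = nn.Sequential(
        nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU()
    ).eval()

    # Fuse conv+bn+relu into a single module so the whole pattern can
    # later be swapped for one quantized operation.
    fused = torch.quantization.fuse_modules(m, [['0', '1', '2']])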

Utility functions
~~~~~~~~~~~~~~~~~

.. autofunction:: add_observer_
.. autofunction:: swap_module
.. autofunction:: propagate_qconfig_
.. autofunction:: default_eval_fn
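
Roughly speaking, :func:`prepare` is composed of these utilities: it
propagates ``qconfig`` attributes down the module tree and then
attaches observers. A hedged sketch of performing the same two steps
by hand:

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.quantization

    model = nn.Sequential(nn.Linear(4, 4)).eval()
    model.qconfig = torch.quantization.default_qconfig

    # Push the qconfig down to child modules, then attach observers to
    # every module that ends up with a qconfig.
    torch.quantization.propagate_qconfig_(model)
    torch.quantization.add_observer_(model)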

Observers
~~~~~~~~~

.. autoclass:: ObserverBase
    :members:
.. autoclass:: MinMaxObserver
.. autoclass:: MovingAverageMinMaxObserver
.. autoclass:: PerChannelMinMaxObserver
.. autoclass:: MovingAveragePerChannelMinMaxObserver
.. autoclass:: HistogramObserver
.. autoclass:: FakeQuantize
.. autoclass:: NoopObserver
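
Observers record statistics of the tensors passing through them and
compute quantization parameters from those statistics. A minimal
standalone sketch:

.. code-block:: python

    import torch
    from torch.quantization import MinMaxObserver

    obs = MinMaxObserver(dtype=torch.quint8)
    obs(torch.randn(16, 16))  # record the min/max of the data seen
    scale, zero_point = obs.calculate_qparams()

In a :class:`QConfig`, observer classes are typically supplied via
``with_args``, e.g. ``MinMaxObserver.with_args(dtype=torch.quint8)``.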

Debugging utilities
~~~~~~~~~~~~~~~~~~~

.. autofunction:: get_observer_dict
.. autoclass:: RecordingObserver
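
A sketch of collecting all observers from a prepared model with
:func:`get_observer_dict` (the model here is just a placeholder):

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.quantization

    model = nn.Sequential(nn.Linear(4, 4)).eval()
    model.qconfig = torch.quantization.default_qconfig
    prepared = torch.quantization.prepare(model)
    prepared(torch.randn(2, 4))  # populate the observers

    # Collects every observer in the module tree, keyed by its
    # qualified name in the model.
    observers = {}
    torch.quantization.get_observer_dict(prepared, observers)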

.. currentmodule:: torch

.. autosummary::
    :nosignatures:

    nn.intrinsic