pytorch/caffe2/quantization/server/kl_minimization.h
Jongsoo Park 3c2462cf24 use pragma once (#14163)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14163

Some of the names we were using to guard the header files were too short (e.g. DYNAMIC_HISTOGRAM_H).

Reviewed By: csummersea

Differential Revision: D13115451

fbshipit-source-id: cef8c84c62922616ceea17effff7bdf8d67302a2
2018-11-20 00:56:04 -08:00


#pragma once

#include "quantization_error_minimization.h"

namespace dnnlowp {

/**
 * A quantization scheme that minimizes Kullback-Leibler divergence.
 */
class KLDivergenceMinimization final : public QuantizationErrorMinimization {
 public:
  TensorQuantizationParams ChooseQuantizationParams(
      const Histogram& hist,
      bool preserve_sparsity = false,
      int precision = 8) override;
};

} // namespace dnnlowp