pytorch/caffe2/operators/sparse_lp_regularizer_op.h
Jamie King 7f1a96d43c Adding sparse Lp regularization operator to Caffe2 (#38574)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38574

Adding a sparse L1 and L2 regularization operator to Caffe2.  The operator works only with run_after_optimize, not run_on_loss.  Applying the regularizer after the optimizer step (run_after_optimize) rather than through the loss (run_on_loss) was easier to implement, particularly for the L1 norm, which is preferable in some cases but is non-differentiable at zero.

Test Plan: Wrote and ran unit tests in operator_test:sparse_lp_regularizer_test.

Differential Revision: D21003029

fbshipit-source-id: 81070a621752560ce03e320d065ce27807a5d278
2020-06-01 15:21:19 -07:00

#pragma once

#include "caffe2/core/operator.h"
#include "caffe2/utils/math.h"

namespace caffe2 {

// Applies Lp (p=1 or p=2) regularization to the rows of PARAM selected by
// INDICES, as a step run after the optimizer update rather than as part of
// the loss.
template <typename T, class Context>
class CAFFE2_API SparseLpRegularizerOp final : public Operator<Context> {
 public:
  USE_OPERATOR_CONTEXT_FUNCTIONS;
  template <class... Args>
  explicit SparseLpRegularizerOp(Args&&... args)
      : Operator<Context>(std::forward<Args>(args)...),
        p_(this->template GetSingleArgument<float>("p", 2.0)),
        reg_lambda_(
            this->template GetSingleArgument<float>("reg_lambda", 1e-5)) {
    CAFFE_ENFORCE(
        p_ == 1.0 || p_ == 2.0,
        "Sparse Lp regularizer only implemented for p=1 or p=2.");
    CAFFE_ENFORCE_GT(
        reg_lambda_,
        0.0,
        "Lambda for sparse Lp regularizer must be greater than 0.");
    CAFFE_ENFORCE_LT(
        reg_lambda_,
        1.0,
        "Lambda for sparse Lp regularizer must be less than 1.");
  }

  bool RunOnDevice() override;

  // Dispatched on the index type of INDICES (e.g. int32_t or int64_t).
  template <typename SIndex>
  bool DoRunWithType();

 protected:
  // Norm order: 1.0 for L1 regularization, 2.0 for L2 regularization.
  float p_;
  // Regularization strength; must lie in (0, 1).
  float reg_lambda_;

  INPUT_TAGS(PARAM, INDICES);
  OUTPUT_TAGS(OUTPUT_PARAM);
};

} // namespace caffe2
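
The header above only declares the operator; the per-row update itself lives in the corresponding .cc file, which is not shown here. As a rough illustration of what a post-optimizer (proximal) Lp update of this kind typically looks like, the standalone sketch below applies L2 shrinkage or L1 soft-thresholding to the parameter rows selected by INDICES. The function name SparseLpRegularize, the flat std::vector storage, and the block_size argument are illustrative assumptions, not the operator's actual kernel or signature.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative sketch only; not the operator's real implementation.
// Applies a post-optimizer Lp update to the rows of `param` selected by
// `indices`; each row has `block_size` elements.
void SparseLpRegularize(
    std::vector<float>& param,
    const std::vector<int64_t>& indices,
    int64_t block_size,
    float p,
    float reg_lambda) {
  for (const int64_t idx : indices) {
    float* row = param.data() + idx * block_size;
    for (int64_t j = 0; j < block_size; ++j) {
      if (p == 2.0f) {
        // L2: shrink each entry multiplicatively toward zero.
        row[j] *= (1.0f - reg_lambda);
      } else {
        // L1: soft-thresholding; entries with magnitude <= reg_lambda
        // become exactly zero, which is what produces sparsity.
        const float shrunk = std::max(std::fabs(row[j]) - reg_lambda, 0.0f);
        row[j] = std::copysign(shrunk, row[j]);
      }
    }
  }
}

The L1 branch is one reason the commit message prefers run_after_optimize: instead of requiring a subgradient of |w| at zero during the backward pass, the post-optimizer update simply snaps small entries to exactly zero.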