pytorch/caffe2/operators/feature_maps_ops.cc
Lu Fang 664fe34e0a
[Caffe2][fbcode=>GH sync] Update from facebook 4323b18ce13c (#7116)
* [fix] Re-enable events in RNN ops

We earlier added event disabling in RNN ops because, at the time, we didn't use
events; with current use cases this is no longer true
(https://fburl.com/8vd0lp8y)

* use ops with CUDA impl

* Revert D7729695: [caffe2][fix] Re-enable events in RNN ops

This reverts commit 4b215c7496fb724656ff4c776933a15bdbbcde5e

@bypass-lint

An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files

* [observer] Clean up observer_config.h

#accept2ship

* [1/n] Refactor dataio_test.py

Replace code duplication with a common function

* Add barrier net that runs before training nets

Add a synchronize barrier net that is run before training nets. With this net, shards that are faster will wait for other shards before starting training. This reduces the chance of the faster shards timing out during GLOO AllReduce.

Removed the explicit data_parallel_model.py.synchronize call in the holmes workflow. A similar change in the speech/asr_training workflow will come in another diff.

* Support the dnnlowp backend in caffe2_benchmark

This is for SHARE operator latency evaluation

* Migrate integral_image_op to main caffe2

Migrate integral_image_op (GPU version) from https://fburl.com/yvqezigi
to caffe2/caffe2/operators and implement its CPU version. Write a test
using the hypothesis_test mechanism.

* [pos_disc, fbcode] Implement unjoined lr loss

As explained in https://our.intern.facebook.com/intern/wiki/Model_Based_Calibration/, when the dataset is a joined dataset, where labels might change later, we need to use the unjoined logloss.

The implementation is almost the same as in Sigrid (https://fburl.com/1trngsls), where
    loss = y (log(p) - log(1-p)) + (1-y) log(1-p) = xy - (1-y)x - (1-y)log(1+exp(-x))

For x < 0, to ensure stability and avoid overflow, we rewrite the exponential using
log(1+exp(-x)) = -x + log(1+exp(x)), which gives
    loss = xy - (1-y)x + (1-y)x - (1-y)log(1+exp(x)) = xy - (1-y)log(1+exp(x))

Combining both cases, the final expression becomes
    loss = xy + (y-1) * x * (x >= 0) - (1-y) * log(1 + exp(x - 2 * x * (x >= 0)))

where y is the true label, x is the dot product (logit), and p = logistic(x).

This implementation is aligned with the current implementation of the original cross entropy in
https://phabricator.intern.facebook.com/diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/cross_entropy_op.cc;0bae3b5d0f825897c5e0dd0ff10f489d7271bf25$7-13
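
For reference, the final expression above maps directly into code. A minimal sketch (function and variable names here are illustrative, in the style of the cross_entropy_op.cc helpers, not necessarily the exact code added in this diff):

    // Numerically stable unjoined logistic loss (illustrative sketch; needs <cmath>).
    // x is the dot product (logit), y is the label in {0, 1}.
    inline float unjoined_sigmoid_xent_forward(float x, float y) {
      return x * y + (y - 1.0f) * x * (x >= 0.0f) -
          (1.0f - y) * std::log(1.0f + std::exp(x - 2.0f * x * (x >= 0.0f)));
    }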

* Keep the array to fix the conflict

* [C2] Compute Adagrad effective LR

The AdagradWithLR op outputs an extra blob which contains the average effective learning rate across all weights in this blob.
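
Assuming the standard Adagrad update (w_i -= lr * g_i / (sqrt(h_i) + epsilon), with h_i the accumulated squared gradient of weight i), the per-weight effective learning rate referred to here would be

    effective_lr_i = lr / (sqrt(h_i) + epsilon)

and the extra blob holds its average over all weights in the blob.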

* Open-source extractMetaNetDef & runGlobalInitialization, add new Predictor constructor from db file, and add run_map_outputs

1. Open-source extractMetaNetDef and runGlobalInitialization, for use in (2).
2. Add a new Predictor constructor from a db file.
3. Add a new run function that returns outputs as a TensorMap.

* Disable eigen cpu

Disable the Eigen CPU path in transpose and reduce.

* Introduce request_only/object_only property of ModelLayer

By default this is False.

* A simple TC Caffe2 benchmark

We can run the tuner, get MappingOptions, and then use them to
compare against cuBLAS.

Currently broken due to LLVM issues. How to run:

hg checkout eec1ab31b59c03b8deded1c755a9abaf8c45be01
add D7401202
add D7434625
add D7506031
add D7540728

buck run @mode/dev-nosan tc/tc/benchmarks_python:caffe2_benchmark

* Move Caffe2 feature_maps_ops to open source

The feature maps operators are needed in the open-source project facebookresearch/BlueWhale.

* Manually fix the conflicts in channel shuffle op

* Fix the inconsistency between GitHub and fbcode

* Skip Adagrad GPU test (because some GPU implementations are missing)

* Fix another test to make sure it won't run on GPU when the implementation is not available yet
2018-05-01 20:49:00 -07:00


#include "feature_maps_ops.h"
#include "caffe2/core/context.h"
namespace caffe2 {
namespace {
const std::string doc = R"DOC(
Single-feature representation:
- scalar features:
  <feature full name> T
- list features:
  <feature full name>.lengths int32
  <feature full name>.values T
- map features:
  <feature full name>.lengths int32
  <feature full name>.keys K
  <feature full name>.values V

Missing values are set to zero, and value presence flag is set accordingly:
  <feature full name>.presence bool

Multi-feature representation:
- scalar features:
  <feature type>.lengths int32
  <feature type>.keys int64
  <feature type>.values T
- list features:
  <feature type>.lengths int32
  <feature type>.keys int64
  <feature type>.values.lengths int32
  <feature type>.values.values T
- map features:
  <feature type>.lengths int32
  <feature type>.keys int64
  <feature type>.values.lengths int32
  <feature type>.values.keys K
  <feature type>.values.values V

You can read more about representing batches of lists and maps here:
https://our.intern.facebook.com/intern/dex/caffe2/sparse-operations/
)DOC";
REGISTER_CPU_OPERATOR(
MergeSingleScalarFeatureTensors,
MergeSingleScalarFeatureTensorsOp<CPUContext>);
OPERATOR_SCHEMA(MergeSingleScalarFeatureTensors)
.SetDoc(
"Merge given single-feature tensors with scalar features into one "
"multi-feature tensor." +
doc)
.NumInputs([](int n) { return n >= 2 && n % 2 == 0; })
.NumOutputs(3)
.Input(0, "in1", "")
.Input(1, "in1_presence", ".presence")
.Output(0, "out_lengths", ".lengths")
.Output(1, "out_keys", ".keys")
.Output(2, "out_values", ".values")
.Arg("feature_ids", "feature ids");
class GetMergeSingleScalarFeatureTensorsGradient : public GradientMakerBase {
  using GradientMakerBase::GradientMakerBase;
  vector<OperatorDef> GetGradientDefs() override {
    vector<string> input_blob_names{};
    vector<string> output_blob_names{};
    for (int inputIdx = 0; inputIdx < def_.input_size() / 2; ++inputIdx) {
      input_blob_names.push_back(I(inputIdx * 2 + 1));
      output_blob_names.push_back(GI(inputIdx * 2));
    }
    input_blob_names.push_back(GO(2));
    return SingleGradientDef(
        "MergeSingleScalarFeatureTensorsGradient",
        "", /* name */
        input_blob_names,
        output_blob_names);
  }
};
REGISTER_CPU_OPERATOR(
MergeSingleScalarFeatureTensorsGradient,
MergeSingleScalarFeatureTensorsGradientOp<CPUContext>);
OPERATOR_SCHEMA(MergeSingleScalarFeatureTensorsGradient)
.SetDoc(
"Explode multi-feature tensor of scalar features into one or more"
"single-feature tensors" +
doc)
.NumInputs([](int n) { return n >= 2; })
.NumOutputs([](int n) { return n >= 1; })
.Input(0, "in1_presence", ".presence")
.Input(1, ".values_grad", ".values_grad")
.Output(0, "in1_grad", "_grad of inputs");
REGISTER_GRADIENT(
MergeSingleScalarFeatureTensors,
GetMergeSingleScalarFeatureTensorsGradient);
// ##########################################################
REGISTER_CPU_OPERATOR(
MergeSingleListFeatureTensors,
MergeSingleListFeatureTensorsOp<CPUContext>);
OPERATOR_SCHEMA(MergeSingleListFeatureTensors)
.SetDoc(
"Merge given single-feature tensors with list features into one "
"multi-feature tensor." +
doc)
.NumInputs([](int n) { return n >= 3 && n % 3 == 0; })
.NumOutputs(4)
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_values", ".values")
.Input(2, "in1_presence", ".presence")
.Output(0, "out_lengths", ".lengths")
.Output(1, "out_keys", ".keys")
.Output(2, "out_values_lengths", ".values.lengths")
.Output(3, "out_values_values", ".values.values")
.Arg("feature_ids", "feature ids");
class GetMergeSingleListFeatureTensorsGradient : public GradientMakerBase {
  using GradientMakerBase::GradientMakerBase;
  vector<OperatorDef> GetGradientDefs() override {
    vector<string> input_blob_names{};
    vector<string> output_blob_names{};
    for (int inputIdx = 0; inputIdx < def_.input_size() / 3; ++inputIdx) {
      input_blob_names.push_back(I(inputIdx * 3));
      input_blob_names.push_back(I(inputIdx * 3 + 2));
      output_blob_names.push_back(GI(inputIdx * 3 + 1));
    }
    input_blob_names.push_back(GO(3));
    return SingleGradientDef(
        "MergeSingleListFeatureTensorsGradient",
        "",
        input_blob_names,
        output_blob_names);
  }
};
REGISTER_CPU_OPERATOR(
MergeSingleListFeatureTensorsGradient,
MergeSingleListOrMapFeatureTensorsGradientOp<CPUContext>);
OPERATOR_SCHEMA(MergeSingleListFeatureTensorsGradient)
.SetDoc(
"Explode multi-feature tensors with list features into "
"single-feature tensors." +
doc)
.NumInputs([](int n) { return n >= 3 && n % 2 == 1; })
.NumOutputs([](int n) { return n >= 1; })
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_presence", ".presence")
.Input(2, "out_values_values", ".values.values_grad")
.Output(0, "out1_values", ".values_grad");
REGISTER_GRADIENT(
MergeSingleListFeatureTensors,
GetMergeSingleListFeatureTensorsGradient);
// ##########################################################
REGISTER_CPU_OPERATOR(
MergeSingleMapFeatureTensors,
MergeSingleMapFeatureTensorsOp<CPUContext>);
OPERATOR_SCHEMA(MergeSingleMapFeatureTensors)
.SetDoc(
"Merge given single-feature tensors with map features into one "
"multi-feature tensor." +
doc)
.NumInputs([](int n) { return n >= 4 && n % 4 == 0; })
.NumOutputs(5)
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_keys", ".keys")
.Input(2, "in1_values", ".values")
.Input(3, "in1_presence", ".presence")
.Output(0, "out_lengths", ".lengths")
.Output(1, "out_keys", ".keys")
.Output(2, "out_values_lengths", ".values.lengths")
.Output(3, "out_values_keys", ".values.keys")
.Output(4, "out_values_values", ".values.values")
.Arg("feature_ids", "feature ids");
class GetMergeSingleMapFeatureTensorsGradient : public GradientMakerBase {
  using GradientMakerBase::GradientMakerBase;
  vector<OperatorDef> GetGradientDefs() override {
    vector<string> input_blob_names{};
    vector<string> output_blob_names{};
    for (int inputIdx = 0; inputIdx < def_.input_size() / 4; ++inputIdx) {
      input_blob_names.push_back(I(inputIdx * 4));
      input_blob_names.push_back(I(inputIdx * 4 + 3));
      output_blob_names.push_back(GI(inputIdx * 4 + 2));
    }
    input_blob_names.push_back(GO(4));
    return SingleGradientDef(
        "MergeSingleMapFeatureTensorsGradient",
        "",
        input_blob_names,
        output_blob_names);
  }
};
REGISTER_CPU_OPERATOR(
MergeSingleMapFeatureTensorsGradient,
MergeSingleListOrMapFeatureTensorsGradientOp<CPUContext>);
OPERATOR_SCHEMA(MergeSingleMapFeatureTensorsGradient)
.SetDoc(
"Explode given multi-feature tensors with map features into "
"multiple single-feature tensor." +
doc)
.NumInputs([](int n) { return n >= 3 && n % 2 == 1; })
.NumOutputs([](int n) { return n >= 1; })
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_presence", ".presence")
.Input(2, "out_values_values_grad", ".values.values_grad")
.Output(0, "in1_values_grad", ".values_grad");
REGISTER_GRADIENT(
MergeSingleMapFeatureTensors,
GetMergeSingleMapFeatureTensorsGradient);
// ##########################################################
REGISTER_CPU_OPERATOR(
MergeMultiScalarFeatureTensors,
MergeMultiScalarFeatureTensorsOp<CPUContext>);
OPERATOR_SCHEMA(MergeMultiScalarFeatureTensors)
.SetDoc(
"Merge given multi-feature tensors with scalar features into one." +
doc)
.NumInputs([](int n) { return n >= 3 && n % 3 == 0; })
.NumOutputs(3)
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_keys", ".keys")
.Input(2, "in1_values", ".values")
.Output(0, "out_lengths", ".lengths")
.Output(1, "out_keys", ".keys")
.Output(2, "out_values", ".values");
class GetMergeMultiScalarFeatureTensorsGradient : public GradientMakerBase {
  using GradientMakerBase::GradientMakerBase;
  vector<OperatorDef> GetGradientDefs() override {
    vector<string> input_blob_names{};
    vector<string> output_blob_names{};
    for (int inputIdx = 0; inputIdx < def_.input_size() / kNumTensorsPerInput;
         ++inputIdx) {
      input_blob_names.push_back(I(inputIdx * kNumTensorsPerInput));
      output_blob_names.push_back(GI(inputIdx * kNumTensorsPerInput + 2));
    }
    input_blob_names.push_back(GO(2));
    return SingleGradientDef(
        "MergeMultiScalarFeatureTensorsGradient",
        "",
        input_blob_names,
        output_blob_names);
  }

 private:
  const int kNumTensorsPerInput = 3;
};
REGISTER_CPU_OPERATOR(
MergeMultiScalarFeatureTensorsGradient,
MergeMultiScalarFeatureTensorsGradientOp<CPUContext>);
OPERATOR_SCHEMA(MergeMultiScalarFeatureTensorsGradient)
.SetDoc(
"Explode given multi-feature tensors with scalar features into many." +
doc)
.NumInputs([](int n) { return n >= 2; })
.NumOutputs([](int n) { return n >= 1; })
.Input(0, "in1_lengths", ".lengths")
.Input(1, "out_values_grad", ".values_grad")
.Output(0, "in1_values_grad", ".values_grad");
REGISTER_GRADIENT(
MergeMultiScalarFeatureTensors,
GetMergeMultiScalarFeatureTensorsGradient);
// ##########################################################
REGISTER_CPU_OPERATOR(
MergeMultiListFeatureTensors,
MergeMultiListFeatureTensorsOp<CPUContext>);
REGISTER_CPU_OPERATOR(
MergeMultiListFeatureTensorsGradient,
MergeMultiListOrMapFeatureTensorsGradientOp<CPUContext>);
OPERATOR_SCHEMA(MergeMultiListFeatureTensors)
.SetDoc(
"Merge given multi-feature tensors with list features into one." + doc)
.NumInputs([](int n) { return n >= 4 && n % 4 == 0; })
.NumOutputs(4)
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_keys", ".keys")
.Input(2, "in1_values_lengths", ".values.lengths")
.Input(3, "in1_values_values", ".values.values")
.Output(0, "out_lengths", ".lengths")
.Output(1, "out_keys", ".keys")
.Output(2, "out_values_lengths", ".values.lengths")
.Output(3, "out_values_values", ".values.values");
OPERATOR_SCHEMA(MergeMultiListFeatureTensorsGradient)
.SetDoc(
"Explode given multi-feature tensors with list features "
"into many." +
doc)
.NumInputs([](int n) { return n >= 3 && n % 2 == 1; })
.NumOutputs([](int n) { return n >= 1; })
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_values_lengths", ".values.lengths")
.Input(2, "out_values_values_grad", ".values.values_grad")
.Output(0, "in1_values_values_grad", ".values.values_grad");
class GetMergeMultiListFeatureTensorsGradient : public GradientMakerBase {
  using GradientMakerBase::GradientMakerBase;
  vector<OperatorDef> GetGradientDefs() override {
    vector<string> input_blob_names{};
    vector<string> output_blob_names{};
    for (int inputIdx = 0; inputIdx < def_.input_size() / kNumTensorsPerInput;
         ++inputIdx) {
      input_blob_names.push_back(I(inputIdx * kNumTensorsPerInput));
      input_blob_names.push_back(I(inputIdx * kNumTensorsPerInput + 2));
      output_blob_names.push_back(GI(inputIdx * kNumTensorsPerInput + 3));
    }
    input_blob_names.push_back(GO(3));
    return SingleGradientDef(
        "MergeMultiListFeatureTensorsGradient",
        "",
        input_blob_names,
        output_blob_names);
  }

 private:
  const int kNumTensorsPerInput = 4;
};
REGISTER_GRADIENT(
MergeMultiListFeatureTensors,
GetMergeMultiListFeatureTensorsGradient);
// ##########################################################
REGISTER_CPU_OPERATOR(
MergeMultiMapFeatureTensors,
MergeMultiMapFeatureTensorsOp<CPUContext>);
OPERATOR_SCHEMA(MergeMultiMapFeatureTensors)
.SetDoc(
"Merge given multi-feature tensors with map features into one." + doc)
.NumInputs([](int n) { return n >= 5 && n % 5 == 0; })
.NumOutputs(5)
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_keys", ".keys")
.Input(2, "in1_values_lengths", ".values.lengths")
.Input(3, "in1_values_keys", ".values.keys")
.Input(4, "in1_values_values", ".values.values")
.Output(0, "out_lengths", ".lengths")
.Output(1, "out_keys", ".keys")
.Output(2, "out_values_lengths", ".values_lengths")
.Output(3, "out_values_keys", ".values.keys")
.Output(4, "out_values_values", ".values.values");
class GetMergeMultiMapFeatureTensorsGradient : public GradientMakerBase {
  using GradientMakerBase::GradientMakerBase;
  vector<OperatorDef> GetGradientDefs() override {
    vector<string> input_blob_names{};
    vector<string> output_blob_names{};
    for (int inputIdx = 0; inputIdx < def_.input_size() / kNumTensorsPerInput;
         ++inputIdx) {
      input_blob_names.push_back(I(inputIdx * kNumTensorsPerInput));
      input_blob_names.push_back(I(inputIdx * kNumTensorsPerInput + 2));
      output_blob_names.push_back(GI(inputIdx * kNumTensorsPerInput + 4));
    }
    input_blob_names.push_back(GO(4));
    return SingleGradientDef(
        "MergeMultiMapFeatureTensorsGradient",
        "",
        input_blob_names,
        output_blob_names);
  }

 private:
  const int kNumTensorsPerInput = 5;
};
REGISTER_CPU_OPERATOR(
MergeMultiMapFeatureTensorsGradient,
MergeMultiListOrMapFeatureTensorsGradientOp<CPUContext>);
OPERATOR_SCHEMA(MergeMultiMapFeatureTensorsGradient)
.SetDoc(
"Explode given multi-feature tensors with map features "
"into many." +
doc)
.NumInputs([](int n) { return n >= 3 && n % 2 == 1; })
.NumOutputs([](int n) { return n >= 1; })
.Input(0, "in1_lengths", ".lengths")
.Input(1, "in1_values_lengths", ".values.lengths")
.Input(2, "out_values_values_grad", ".values.values_grad")
.Output(0, "in1_values_values_grad", ".values.values_grad");
REGISTER_GRADIENT(
MergeMultiMapFeatureTensors,
GetMergeMultiMapFeatureTensorsGradient);
} // namespace
} // namespace caffe2