pytorch/caffe2/python/layers
Huazhong Ning 8168e8ac25 allow specifying output names for functional layers
Summary:
Currently the output schema and blobs are named "field_i", which is
bad for debugging. This diff allows us to specify output names.
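
A minimal sketch of the usage change, assuming a LayerModelHelper
instance "model" that exposes the Functional layer from functional.py
and an input_feature_schema with a float_features field; the normalize
functions and the output name are illustrative only:

    def normalize(net, in_record, out_record):
        # Subtract the per-batch mean from the input feature.
        mean = net.ReduceFrontMean(in_record(), 1)
        net.Sub([in_record(), mean], out_record(), broadcast=1)

    # Before this diff: only an output count could be given, so the
    # single output blob was auto-named "field_0".
    out = model.Functional(
        model.input_feature_schema.float_features, 1,
        normalize, name="normalizer")

    # After this diff: a list of names can be passed instead of a
    # count, and the output schema and blobs pick up those names.
    # With named outputs the output record is a Struct keyed by the
    # given names rather than a bare Scalar.
    def normalize_named(net, in_record, out_record):
        mean = net.ReduceFrontMean(in_record(), 1)
        net.Sub([in_record(), mean], out_record.normalized(), broadcast=1)

    out = model.Functional(
        model.input_feature_schema.float_features, ['normalized'],
        normalize_named, name="normalizer")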

Reviewed By: kennyhorror

Differential Revision: D4744949

fbshipit-source-id: 8ac4d3c75cacbb4c9b5f55793ac969fe1cf20467
2017-03-23 13:18:58 -07:00
__init__.py fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00
batch_distill_lr_loss.py implemented DistillLRLoss 2017-03-20 16:01:29 -07:00
batch_lr_loss.py migrate mtml to dper2 2017-03-16 17:48:05 -07:00
batch_mse_loss.py added BatchL2Loss layer 2017-03-16 17:32:20 -07:00
batch_sigmoid_cross_entropy_loss.py BatchSigmoidCrossEntropyLoss 2017-03-17 09:35:51 -07:00
batch_softmax_loss.py BatchSoftmaxLoss layer 2017-03-17 10:19:06 -07:00
concat.py small change to concat layer to make tensor board vis nicer 2017-03-12 23:01:18 -07:00
dot_product.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
expand_dims.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
fc_without_bias.py FCWithoutBias layer 2017-03-15 11:03:37 -07:00
fc.py model and preprocessor can handle empty dense inputs 2017-02-22 11:19:15 -08:00
functional.py allow specifying output names for functional layers 2017-03-23 13:18:58 -07:00
layers.py Add SparseNN workflow for feed. 2017-03-01 11:02:38 -08:00
simple_operator_layers.py Fix random issues with some layers going missing from the registry. 2017-01-10 15:14:31 -08:00
sparse_lookup.py clean old unit test, add sum processor and sqrt pooling 2017-03-08 23:04:19 -08:00
sparse_to_dense.py fix for special case when dense dim is 1 2017-03-16 05:19:10 -07:00
split.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
tags.py fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00