Commit Graph

1536 Commits

Author SHA1 Message Date
Qinqing Zheng
90a3363f29 Return an empty TaskGroup if node managers exist in MultiNodeCheckpointManager
Summary: Currently MultiNodeCheckpointManager returns None in this case, yet in JobRunner we assume this function returns a valid task group, i.e. we call session.run(self.checkpoint_manager.init(...)) directly. This will fail in the case where we use LocalHostScheduler and reuse a MultiNodeCheckpointManager

Reviewed By: azzolini

Differential Revision: D6843450

fbshipit-source-id: a7ec942cfe692f19e8751b0078ae6a6108f29e54
2018-01-30 19:20:50 -08:00
Alexander Sidorov
98a4c3f9b2 Enable rnn_cell_test in jenkins
Summary: Closes https://github.com/caffe2/caffe2/pull/1839

Differential Revision: D6847623

Pulled By: salexspb

fbshipit-source-id: b8a32cb39a8063b8938c89556e5d42606735238d
2018-01-30 11:48:35 -08:00
Lu Fang
560e5c94bd Change default value of LeakyRelu's alpha from 0 to 0.01
Summary: To match the semantic in ONNX, change the default value of alpha of LeakyRelu to 0.01

Reviewed By: dzhulgakov

Differential Revision: D6840975

fbshipit-source-id: 08543f80fd86cbe96a0eee8d725ef137a5bf4ab8
2018-01-29 22:31:12 -08:00
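The semantics of the change above can be sketched in NumPy (a hedged illustration of LeakyRelu with the new ONNX-matching default, not the Caffe2 operator itself):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Pass positive values through; scale negatives by alpha.
    # alpha=0.01 is the ONNX default that this change aligns Caffe2 with.
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 3.0])
print(leaky_relu(x))  # negatives become -0.02 and -0.005
```

With the previous default of alpha=0, LeakyRelu degenerated to plain ReLU, which is why the ONNX mismatch mattered.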
Xiaomeng Yang
6b1f848df6 Adds gpu implementation for FCTransposed
Summary: Adds gpu implementation for FCTransposed.

Reviewed By: salexspb

Differential Revision: D6572785

fbshipit-source-id: a7cd0f7364ace286942c46b91e0287307cbfea83
2018-01-29 19:03:24 -08:00
mdschatz
3c952426fb Add operator attaching net observer
Summary:
Commonly, net observers attach operator observers at construction. This diff separates the logic into a base class to inherit from.
Closes https://github.com/caffe2/caffe2/pull/1806

Reviewed By: salexspb

Differential Revision: D6808623

Pulled By: mdschatz

fbshipit-source-id: 75ef0eea913ef30943541c829c0a976965f42736
2018-01-29 14:34:34 -08:00
Xiaolong Wang
f8575f6d68 Breakdown Dispatcher
Summary: dispatch by Ngram breakdown

Differential Revision: D6794082

fbshipit-source-id: 7f6e8fa3a0abe0dc6d0d466c95e8c4fc865e3abb
2018-01-26 17:47:54 -08:00
Anders Papitto
33d2212751 LSTM sequence lengths: allow unspecified sequence lengths
Summary:
In this case, each sequence is treated as having a length equal to the
first dimension of the input tensor. This matches the semantics of
ONNX when the sequence length input is left out.
Closes https://github.com/caffe2/caffe2/pull/1764

Reviewed By: dzhulgakov

Differential Revision: D6751219

Pulled By: anderspapitto

fbshipit-source-id: 89e0efd12339157627494e2b8c83e952bdd8a9f8
2018-01-26 16:32:56 -08:00
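The defaulting behavior described in the commit above can be sketched as follows (illustrative names, not the Caffe2 API; assumes a [T, N, D] input layout):

```python
import numpy as np

def default_sequence_lengths(inputs, seq_lengths=None):
    # When no lengths are supplied, every sequence in the batch is
    # treated as spanning the full time dimension (the first axis of
    # the assumed [T, N, D] input), matching ONNX semantics when the
    # sequence-length input is omitted.
    T, N, _ = inputs.shape
    if seq_lengths is None:
        return np.full(N, T, dtype=np.int32)
    return np.asarray(seq_lengths, dtype=np.int32)

x = np.zeros((5, 3, 4))  # T=5 time steps, batch of 3
print(default_sequence_lengths(x))             # [5 5 5]
print(default_sequence_lengths(x, [2, 5, 1]))  # [2 5 1]
```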
Lin Yang
252211b001 testPairwiseDotProduct
Summary: as title.

Reviewed By: kennyhorror

Differential Revision: D6793829

fbshipit-source-id: f803e0400635ca37184f1dd5bb711bfe0e4bea21
2018-01-26 11:33:08 -08:00
Alexander Sidorov
a3b8c459d4 Revamp MNIST tutorial
Summary:
Main changes:

1. Move reader creation to Brew in order to be consistent and avoid a wild use of param_init_net
2. Use optimizers for training function, avoid manual optimizer construction
3. Add MLP mode (a default)
4. Fix a bunch of too verbose comments and add a bit of new explanations
Closes https://github.com/caffe2/caffe2/pull/1760

Differential Revision: D6749059

Pulled By: salexspb

fbshipit-source-id: 9dfbbb2d9772a74a0300c2e404a92e791f7cc593
2018-01-26 09:17:31 -08:00
Peter Goldsborough
0fd41a63a1 Integrate Fused8BitRowwise ops with DPER
Summary: Updates `sparse_lookup.py` for the new fused 8-bit rowwise quantization. Mostly just changing the same files as the original diffs (D5753626 and D5761202). I know very little about this code here so please let me know if this is safe, also in terms of migration away from the non-fused storage.

Reviewed By: kennyhorror

Differential Revision: D6710784

fbshipit-source-id: 185f147af52a094a937ba631b0351225e660d205
2018-01-25 15:02:42 -08:00
Frank Jiang
304e607b70 Fix adam test
Reviewed By: pietern

Differential Revision: D6787780

fbshipit-source-id: a2d1428b0e028d6f3d8f7c312c90f3fa411cd0a2
2018-01-25 12:59:54 -08:00
Xiaolong Wang
b2cfc5ea53 add KeySplitOp
Summary:
as titled

After converting categorical to Ngram keys, use this op to extract eids

Differential Revision: D6794020

fbshipit-source-id: 4f9251a22d7a129da30b92845e312876e6510e7e
2018-01-25 10:50:53 -08:00
Xiaomeng Yang
d695027300 Adds cuda support for LC op
Summary: Adds cuda support for LC Op

Reviewed By: QueryConnectionException

Differential Revision: D6803659

fbshipit-source-id: 538bbf6fd202c79154132fda0e90e175eb09d025
2018-01-25 10:19:48 -08:00
Huazhong Ning
90543ff13a weighted sampling reader dequeue outputs table index
Summary: Weighted sampling reader dequeue randomly chooses a hive reader to read a mini-batch. This diff allows dequeue to output the index of the randomly chosen table to a specific blob.

Reviewed By: kennyhorror

Differential Revision: D6621070

fbshipit-source-id: 754b981fc2bcfdb0146d2a0a5b677e7cfe74211b
2018-01-24 19:06:25 -08:00
Huan Gui
c261b9ce70 Fix NGram from categorical test
Summary: Fix the flaky NGram-from-categorical test

Reviewed By: dragonxlwang

Differential Revision: D6801152

fbshipit-source-id: dcbae17b1d3737a41fb2f5c794c1146a02c542bb
2018-01-24 18:51:16 -08:00
Xiaomeng Yang
afafe8a466 Add LC Layer
Summary: Add the 1st version of LC layer.

Reviewed By: Yangqing

Differential Revision: D6788647

fbshipit-source-id: ebee9215a1d6e1e567548a0fef771802851682a3
2018-01-24 16:51:17 -08:00
Aarti Basant
fc56e86c7d Introduce init API for the optional Checkpoint Metadata Handler object
Summary:
Every call to the checkpoint_metadata_handler write() API requires us to pass all params like db_prefix, db_type, etc.
This diff introduces an init API in the checkpoint_metadata_handler so that such params can be saved once and need not be passed in every API call

Reviewed By: mraway, anshulverma

Differential Revision: D6792651

fbshipit-source-id: 059fa4309e8fce1ee5ab009af3e0570573c24245
2018-01-24 15:19:55 -08:00
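The init-once pattern described in the commit above can be sketched like this (class and method names are hypothetical, not the actual checkpoint_metadata_handler interface):

```python
# Hypothetical sketch: store shared parameters at init time so write()
# no longer needs them on every call.
class CheckpointMetadataHandler:
    def __init__(self):
        self._db_prefix = None
        self._db_type = None

    def init(self, db_prefix, db_type):
        # Save the common parameters once...
        self._db_prefix = db_prefix
        self._db_type = db_type

    def write(self, epoch):
        # ...so each write() call only needs the varying argument.
        return f"{self._db_prefix}/checkpoint_{epoch}.{self._db_type}"

h = CheckpointMetadataHandler()
h.init(db_prefix="/tmp/run", db_type="minidb")
print(h.write(epoch=3))  # /tmp/run/checkpoint_3.minidb
```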
Lukasz Wesolowski
29a4c942fe Add support for multi-device batch normalization through an option to data_parallel_model
Summary: Stage 3 in stack of diffs for supporting multi-device batch normalization. Adds input parameter to data_parallel_model to enable multi-device batch normalization. Depends on D6699258.

Reviewed By: pietern

Differential Revision: D6700387

fbshipit-source-id: 24ed62915483fa4da9b1760eec0c1ab9a64b94f8
2018-01-24 13:24:06 -08:00
Lukasz Wesolowski
9414072159 Add operators to support batch normalization across multiple devices on the same node
Summary: This is the first in a series of diffs to enable batch normalization across multiple devices on the same node with data parallel model. The diff contains the ops for computing the per-channel statistics required to obtain the mean and variance across multiple devices on the same node on the forward pass, and the gradient of the bias and scale during backpropagation. The actual modifications to SpatialBN and SpatialBNGradient to make use of these results will be in a separate diff.

Reviewed By: rbgirshick

Differential Revision: D6697336

fbshipit-source-id: 0de2750fe7e851795f238d9f625aeb4d74023dc2
2018-01-24 13:24:04 -08:00
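The per-channel statistics idea in the commit above can be sketched in NumPy (a minimal illustration of combining per-device partial sums, not the actual ops):

```python
import numpy as np

def combine_stats(partial_sums, partial_sumsq, counts):
    # Each device contributes per-channel sum(x), sum(x^2), and an
    # element count; the global mean/variance follow from the totals.
    total = float(sum(counts))
    s = np.sum(partial_sums, axis=0)
    ss = np.sum(partial_sumsq, axis=0)
    mean = s / total
    var = ss / total - mean ** 2
    return mean, var

# Two "devices", one channel, holding [1, 2] and [3, 4] respectively:
mean, var = combine_stats(
    partial_sums=[np.array([3.0]), np.array([7.0])],
    partial_sumsq=[np.array([5.0]), np.array([25.0])],
    counts=[2, 2],
)
print(mean, var)  # [2.5] [1.25] -- the stats of the full batch [1, 2, 3, 4]
```

Because only per-channel scalars cross device boundaries, the reduction is cheap regardless of the spatial size of the activations.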
Pieter Noordhuis
7a232aae49 Add random seed to NGramFromCategorical test
Summary: TSIA

Reviewed By: Yangqing, Maratyszcza, dzhulgakov

Differential Revision: D6797213

fbshipit-source-id: e1132229cda09d1fbde63686aaec81b995989c03
2018-01-24 13:05:28 -08:00
Xiaolong Wang
29c7c682d8 add NGramFromCategorical Op
Summary: as titled

Differential Revision: D6783763

fbshipit-source-id: 78280cf15c2cdc3c308562d3f27a81b61ef8d662
2018-01-23 15:08:25 -08:00
Xue Feng
0e9b0cf779 add error msg in fc input_record
Summary: as titled

Reviewed By: xianjiec

Differential Revision: D6787879

fbshipit-source-id: 4bbdd11455480b25fa18121fa4527a9f0a03addc
2018-01-23 14:48:15 -08:00
Anders Papitto
0aa1a6387e Add a seed to the gru unit test
Summary:
as it calls np.random and sometimes fails unreproducibly
Closes https://github.com/caffe2/caffe2/pull/1779

Reviewed By: pietern

Differential Revision: D6779802

Pulled By: anderspapitto

fbshipit-source-id: 2ad069f8a15f70a8110b1a6bdb06f81577c53ad4
2018-01-23 13:47:43 -08:00
Xianjie Chen
76a141f016 add error msg in get_key
Summary: as title

Differential Revision: D6782896

fbshipit-source-id: bd29f6d085e56f51deb4bf6ad81771787fd85a5a
2018-01-23 11:04:05 -08:00
Dániel Simig
2dd79eb53a Visualize distribution of activation functions
Summary:
This is a first attempt at completing bootcamp task T24449916. This diff contains 3 major changes:
1) Change LayerModelHelper to allow for exposing the output and parameters of any layer to metrics
2) Added a runner that allows metrics to draw arbitrary plots to a matplotlib axes object
3) Implement a metric that aggregates distributions of values in a blob over the training, and try this out in a notebook

Reviewed By: kennyhorror

Differential Revision: D6671273

fbshipit-source-id: b8961837395e89c957edbf5c7c862bdb845ccf4b
2018-01-23 10:36:40 -08:00
Lin Yang
8e0177255e Test for PositionWeighted
Summary: add Test for SparseLookup with PositionWeighted.

Reviewed By: kennyhorror

Differential Revision: D6771612

fbshipit-source-id: b4b3bfd514f366f579b4192643330ae73843d4f9
2018-01-22 19:20:46 -08:00
Viswanath Sivakumar
231d6f7b09 Add SqueezeOp in MKLDNN
Summary:
SqueezeOp supports dropping dims of size 1. MKLMemory now supports Reshape()
if the buffer is in plain layout, in which case just the dims and layouts are
modified similar to caffe2::Tensor. SqueezeOp takes care of converting the
input to plain layout if needed via an intermediate buffer before calling
Reshape().

Differential Revision: D6735656

fbshipit-source-id: 953309498370e1b8986e8c593bc6963f38036255
2018-01-22 18:39:42 -08:00
Wei Zhang
1d4e996b87 Separate parameter downloading tasks from training tasks and run them in a different group
Summary:
At the end of distributed training, trainer needs to download the parameters back from parameter servers for saving the model. Currently, this parameter downloading happens at the end of job's epoch task group, which creates several problems when checkpointing is enabled for distributed training:

1. When checkpointing is enabled, we run multiple training epochs. At the end of each epoch, the model download tasks will run to collect parameters, but we won't save the model until the true end of training, so there is a big waste of resources.
2. After trainer0 downloads the parameters, these parameters take a lot of memory, so trainer0 can easily run out of memory in the next epoch of training.

Our solution is to insert a parameter download task group between the job's training epoch_group and the job's exit_group.

Reviewed By: azzolini

Differential Revision: D6765393

fbshipit-source-id: 5a4f556fc3c1cd7834a7c406a3c0de3fccd50c49
2018-01-22 14:04:12 -08:00
Pieter Noordhuis
d618c05174 Increase lower bound of values for values in div test
Summary:
This should translate to a 1% error margin. The gradient checker uses a .5% threshold.
Closes https://github.com/caffe2/caffe2/pull/1766

Differential Revision: D6774077

Pulled By: pietern

fbshipit-source-id: f97c7ffb2ef34fdd71d69320a7fdcf4a6a457715
2018-01-22 09:06:12 -08:00
Viswanath Sivakumar
b5d513b1f9 Add op in MKLDNN
Summary:
Just redirects to MKLSumOp. Doesn't support broadcast though since dnnSumCreate
expects identical dims.

Differential Revision: D6729788

fbshipit-source-id: 3e189465ad9d026bec4954648562ffe4e67fc393
2018-01-21 08:21:43 -08:00
James Cross
91066559a8 truthy check for empty string in NameScope()
Summary:
As in name. The LATTE translation team, moving some code from Python 2 to 3, uncovered a case where comparison between unicode and str types causes NameScope('') to prepend a separator to the beginning of blob names. This fixes it.

Thank you so much to dzhulgakov for tracking down the cause of this so quickly!

Reviewed By: dzhulgakov

Differential Revision: D6766866

fbshipit-source-id: fbe46cff581f425ba10e8668400915ea40baab94
2018-01-19 21:34:09 -08:00
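The class of bug fixed above can be illustrated with a minimal sketch (function names are hypothetical, not the Caffe2 NameScope implementation):

```python
# Hypothetical sketch: a scoping helper that checks the prefix against a
# specific value can mis-handle '' after a Python 2 -> 3 migration; a
# plain truthiness check handles both None and ''.
_SEP = '/'

def scoped_name_buggy(prefix, name):
    # A check like `prefix is not None` lets '' through and
    # prepends a stray separator: '' + '/' + name.
    if prefix is not None:
        return prefix + _SEP + name
    return name

def scoped_name_fixed(prefix, name):
    # Truthy check: both None and '' leave the name unprefixed.
    if prefix:
        return prefix + _SEP + name
    return name

print(scoped_name_buggy('', 'blob'))       # /blob  (stray leading separator)
print(scoped_name_fixed('', 'blob'))       # blob
print(scoped_name_fixed('scope', 'blob'))  # scope/blob
```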
Ilia Cherniavskii
4ce4bc5c7f Fix occasional test timeouts
Summary: Make test less computationally expensive

Reviewed By: Yangqing, dzhulgakov

Differential Revision: D6766236

fbshipit-source-id: 59e51faa1331d804b11da9f7237ee9ce0cb27df8
2018-01-19 20:08:58 -08:00
Yangqing Jia
ced2c7e2b2 Remove Set/GetDefaultGPUID and move to use current gpu id instead.
Summary:
Reason for this change:

(1) Setting/Getting default gpu id doesn't seem to be used at all.
(2) It actually is confusing compared to the CUDA_VISIBLE_DEVICES options etc.
(3) When setting cuda_gpu_id=-1 in the CUDAContext arg, it used to use the
default gpu id but probably we should use the current gpu - so that the caller
will be able to control the device placement.

One use case is for TensorRT - if we have a custom callback layer, then it would
be easier for TRT or whatever caller to set the running device.

Reviewed By: dzhulgakov

Differential Revision: D6740357

fbshipit-source-id: 2ea710e434b10220d5a198e31c93847304636863
2018-01-19 18:03:21 -08:00
Peter Goldsborough
cded9683ad Implement fused 8bit rowwise sparse lengths reductions
Summary: Building on D6710785 (float <-> fused_8bit_rowwise conversions) and D6710843 (`FusedEmbeddingLookup`), this diff implements the new reduction operations for the fused 8-bit rowwise storage. I mostly followed the [old 8-bit quantized code](diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/lengths_reducer_rowwise_8bit_ops.h) and [full-precision code](diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/lengths_reducer_ops.h).

Reviewed By: kennyhorror

Differential Revision: D6710844

fbshipit-source-id: b9e85db7437bd32dd44d01733c3749f35c00b06e
2018-01-19 15:44:35 -08:00
Peter Goldsborough
8dc0702af5 Add float32 <-> fused_rowwise_8bit conversion Caffe2 operators
Summary: This first diff adds the conversion operators that go from float to our fused 8bit rowwise quantized storage and back again. For now I've put the scale and bias in front of each row because it makes the pointer arithmetic nicer here and in the EmebddingLookup perfkernel. If benchmarks or other reasons point out that this is a bad idea we can change it easily.

Reviewed By: kennyhorror

Differential Revision: D6710785

fbshipit-source-id: 086ab91c12d3b472564a06eff6329be6cb9e680e
2018-01-19 15:44:33 -08:00
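The fused storage layout described above (scale and bias in front of each row) can be sketched as follows (a simplified illustration; the actual operators pack everything into a single fused byte buffer and use optimized perfkernels):

```python
import numpy as np

# Hypothetical sketch of 8-bit rowwise quantization: each row carries its
# own scale and bias (here kept as a tuple alongside the uint8 data).
def quantize_rowwise(mat):
    out = []
    for row in mat:
        lo, hi = row.min(), row.max()
        scale = (hi - lo) / 255.0 or 1.0  # avoid div-by-zero for flat rows
        q = np.round((row - lo) / scale).astype(np.uint8)
        out.append((np.float32(scale), np.float32(lo), q))
    return out

def dequantize_rowwise(rows):
    return np.stack([q.astype(np.float32) * scale + bias
                     for scale, bias, q in rows])

mat = np.array([[0.0, 1.0, 2.0], [10.0, 20.0, 30.0]], dtype=np.float32)
approx = dequantize_rowwise(quantize_rowwise(mat))
# Round-trip error is bounded by half a quantization step per element.
```

Keeping the scale/bias adjacent to each row is what makes the pointer arithmetic simple in a fused embedding-lookup kernel: one contiguous read per row yields both the metadata and the data.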
Heng Wang
c052eb6bbb update the video input op in caffe2
Summary:
This updates the video input op in Caffe2 so that it is up to date.
It adds additional support for:
1. optical flow and early fusion
2. different ways of sampling clips from video
3. different ways of resizing the input video

Reviewed By: dutran

Differential Revision: D6752788

fbshipit-source-id: 0cbd4d4bbbe97b0ada4cba7a55adc91a7af60d5f
2018-01-19 09:52:25 -08:00
Lin Yang
4ea6e6a556 testSparseLookup
Summary: add basic test for SparseLookup

Reviewed By: kennyhorror

Differential Revision: D6749915

fbshipit-source-id: f97af785e4f89f36788a992843066fd1ec2b75a9
2018-01-19 09:27:20 -08:00
Orion Reblitz-Richardson
b28d5a3586 Build doxygen docs with cmake and fix catalog generation
Summary:
This updates https://github.com/caffe2/caffe2/pull/1096/ to build doxygen docs with cmake and fixes operator catalog generation. See the new README.md for details, but you can run

```
mkdir build && cd build
cmake -DBUILD_DOCS=ON .. && make
```
and

```
python caffe2/python/docs/github.py ~/c2docs/_docs/operators-catalogue.md
```

to generate docs.

There was one weird issue in `generator.py` that we sometimes receive tuples and sometimes objects. I handled this just by testing `isinstance`, but we might want to be more principled in the future.
Closes https://github.com/caffe2/caffe2/pull/1758

Reviewed By: pietern

Differential Revision: D6752127

Pulled By: orionr

fbshipit-source-id: 9ba9ad8efc920b27a57327f8a7d3050f3650d4ce
2018-01-18 18:47:59 -08:00
Anders Papitto
e3e6680b48 Add ElmanCell and ElmanRNN
Summary: Closes https://github.com/caffe2/caffe2/pull/1742

Reviewed By: dzhulgakov

Differential Revision: D6706809

Pulled By: anderspapitto

fbshipit-source-id: 15a05786a26aeb719ea4377f4dbbb62738d9e697
2018-01-18 12:14:02 -08:00
Anirban Roychowdhury
158e001238 Checking for positive epoch size before running epoch
Summary: Checking for positive epoch size before running epoch

Reviewed By: pietern

Differential Revision: D6738966

fbshipit-source-id: 64e1fb461d784786b20a316999e4c037787f3a14
2018-01-18 11:48:35 -08:00
Frank Jiang
6f0bb28afb Stop running RowWiseSparseAdam test on GPU
Reviewed By: pietern

Differential Revision: D6739194

fbshipit-source-id: 0892cdc6a575a84147f86984c67e7b4bf605a197
2018-01-17 15:05:21 -08:00
Frank Jiang
61356cbadc RowWiseSparseAdam operator
Summary: Added the RowWise functionality for SparseAdam, which saves roughly 2/3 of memory usage by keeping only one first- and second-moment term for each row of the parameter tensor, rather than one for each individual parameter.

Differential Revision: D6679342

fbshipit-source-id: ce6fb27e35ce41a890c66f6089cd2748d10e7a44
2018-01-16 19:39:31 -08:00
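The rowwise idea from the commit above can be sketched as follows (a hedged simplification, not the actual RowWiseSparseAdam operator: one first- and second-moment scalar per row shrinks the optimizer state from 2·N·D to 2·N floats, at the cost of a single shared update magnitude per row):

```python
import numpy as np

def rowwise_adam_step(param, grad, m, v, t, lr=0.01,
                      beta1=0.9, beta2=0.999, eps=1e-8):
    # m and v have shape (num_rows,): one scalar moment per row,
    # fed by row-averaged gradient statistics.
    m = beta1 * m + (1 - beta1) * grad.mean(axis=1)
    v = beta2 * v + (1 - beta2) * (grad ** 2).mean(axis=1)
    m_hat = m / (1 - beta1 ** t)  # standard Adam bias correction
    v_hat = v / (1 - beta2 ** t)
    # Broadcast the per-row update back over each row's elements.
    param = param - lr * m_hat[:, None] / (np.sqrt(v_hat)[:, None] + eps)
    return param, m, v

param = np.ones((4, 8))           # 4 rows, 8 columns
m, v = np.zeros(4), np.zeros(4)   # rowwise state: 8 floats, not 64
grad = np.full((4, 8), 0.5)
param, m, v = rowwise_adam_step(param, grad, m, v, t=1)
print(param[0, 0])  # ~0.99 after one step
```

This trade-off suits sparse embedding tables, where rows are the natural unit of access and per-element moments dominate memory.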
Leon Masopust
81898e5d47 Fix for wrong newline in caffe_translator.py (Crop layer translation)
Summary:
- fixed the spurious newline at the initialization of the crop layer translation, which caused the exceptions described in issue #1215
Closes https://github.com/caffe2/caffe2/pull/1746

Differential Revision: D6716228

Pulled By: Yangqing

fbshipit-source-id: dd93b06b3b903f96505d6e6f8e67caeb6981fe66
2018-01-12 16:17:53 -08:00
Anders Papitto
db6777eaf4 fix gru_cell bug
Summary:
the fc needs to be in the output_gate_t scope so it can find its input
weights correctly
Closes https://github.com/caffe2/caffe2/pull/1739

Reviewed By: dzhulgakov

Differential Revision: D6705443

Pulled By: anderspapitto

fbshipit-source-id: 139e83ac77589a203ffe404fedab98eea5b1a51c
2018-01-12 15:34:23 -08:00
Viswanath Sivakumar
b2964a92d9 Add MKLConcatOp
Summary:
MKLConcatOp along the channel dim of NCHW tensors. Spec:
https://software.intel.com/en-us/mkl-developer-reference-c-dnnconcatcreate

Reviewed By: ajtulloch

Differential Revision: D6689716

fbshipit-source-id: 492bc440474f8ce37caa85509789496659b03e79
2018-01-11 14:19:22 -08:00
Xue Feng
dda33ca53a enable setting model initialization seed
Summary: This diff enables setting the model initialization seed, instead of a random seed, when reproducible results are desired.

Reviewed By: xianjiec

Differential Revision: D6642971

fbshipit-source-id: 387b1ee2ecef4f8f66570c882498fb97d7007e17
2018-01-11 14:04:03 -08:00
Aarti Basant
33d734fcf1 Generalize construction of db_name in checkpoint manager
Summary:
Instead of constructing db_name as a member of checkpoint_manager, generalize
this function

Reviewed By: anshulverma

Differential Revision: D6671088

fbshipit-source-id: c528538def66933619f2fdf67820bca5d13571ea
2018-01-10 11:49:17 -08:00
Di Yu
cd3e90c16f Fix failed test due to D6665466
Summary: Tests in Jenkins fail because test_global_pooling_3d filtered out too many examples. We made use of the inferred value of global_pooling (pad and stride will be constant) to reduce the number of test samples generated.

Reviewed By: pietern

Differential Revision: D6686840

fbshipit-source-id: d316c0e9f9070b12770170ab9f36e33de68a9ab9
2018-01-09 16:40:35 -08:00
Di Yu
82198831e7 Fix pool op custom path issue 2, wrongful routing to global pooling
Summary:
In D5681122, the condition for routing to global max pool and average pool is not correct.
See T24876217 for discussion.

Reviewed By: Yangqing

Differential Revision: D6665466

fbshipit-source-id: dcb5b4686249e6ee8e1e976ab66b003ef09b32fd
2018-01-09 00:54:45 -08:00
Anders Papitto
12309f4aa6 GRU cell: add linear_before_reset boolean parameter
Summary:
This matches the semantics of cudnn (and others, like pytorch)
Closes https://github.com/caffe2/caffe2/pull/1695

Reviewed By: dzhulgakov

Differential Revision: D6658208

Pulled By: anderspapitto

fbshipit-source-id: 00e1716fba47b0ac296d1e9e0131165f4997ac7d
2018-01-08 13:22:56 -08:00
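The two candidate-state conventions the linear_before_reset flag selects between can be sketched as follows (illustrative names, not the Caffe2 operator interface):

```python
import numpy as np

def gru_candidate(x, h, r, W, U, b_h, linear_before_reset=False):
    if linear_before_reset:
        # cuDNN/PyTorch convention: apply the hidden-to-hidden linear
        # map (with its bias) first, then gate the result with r.
        return np.tanh(W @ x + r * (U @ h + b_h))
    # Classic GRU: gate the hidden state first, then apply the linear map.
    return np.tanh(W @ x + U @ (r * h) + b_h)

x, h = np.array([0.1, 0.2]), np.array([0.3, 0.4])
W, U, b_h = np.eye(2), 2.0 * np.eye(2), np.array([1.0, 1.0])
r = np.array([0.5, 0.5])
# With a partial reset and a nonzero hidden bias the two conventions
# produce different candidate states:
print(gru_candidate(x, h, r, W, U, b_h, linear_before_reset=True))
print(gru_candidate(x, h, r, W, U, b_h, linear_before_reset=False))
```

When the reset gate is fully open (r = 1) the two formulas coincide; they diverge only for partial resets, which is why the flag matters for matching cuDNN exactly.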