Summary: Add support for SparseMomentumSGDUpdate and tests for momentum SGD in both dense and sparse cases
Reviewed By: akyrola
Differential Revision: D6234834
fbshipit-source-id: 9848c29ea06794ef35f1ebaff0f5e81eac4f4db9
Summary:
This seems to be faster in a bunch of cases. Prefer to keep it as a
separate op instead of MatMul + Add so it's easy to compare perf on a
per-op basis between this one and the baseline (normal FC).
Reviewed By: akyrola
Differential Revision: D6169187
fbshipit-source-id: 09b96325d44bd181896f396aec88b27314c435b0
Summary:
The resnet50 trainer saves the 'optimizer_iteration' blob in checkpoints but loads it in GPU context. This fails because AtomicIter/Iter expect the blob to be in CPU context, so manually reset optimizer_iteration in CPU context.
I am thinking of making the iter operators do this switch automatically, but in the meantime this unbreaks the trainer.
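A minimal sketch of the workaround described above, assuming the blob was already loaded from the checkpoint; the calls shown are illustrative, not copied from the trainer:
```python
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

# Re-feed the iteration counter in CPU context so AtomicIter/Iter can use it.
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU)):
    workspace.FeedBlob(
        "optimizer_iteration",
        workspace.FetchBlob("optimizer_iteration"),  # value loaded from the checkpoint
    )
```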
Reviewed By: sf-wind
Differential Revision: D6232626
fbshipit-source-id: da7c183a87803e008f94c86b6574b879c3b76438
Summary:
Implementation of the polling async net executor.
Notes:
- New net executor async_polling: schedules CPU and GPU ops asynchronously and uses a single polling thread (see the sketch after this list)
- Events: updated Caffe2 events to support async CPU events, adding new methods:
  Query() - non-blocking check of event state: INITIALIZED -> RECORDED -> SUCCESS/FAILED
  ErrorMessage() - when an operation runs asynchronously and fails, calling this on the event returns the error message
- Tasks: uses the existing DAGNet algorithm to compute CPU and GPU chains, with a separate task for each chain
- Polling: a single thread queries the state of events - for CPU tasks it atomically queries task state, for GPU tasks it uses cudaEventQuery via the Event interface
- Scheduling of CPU ops: uses global thread pools
- Scheduling of GPU ops: uses a GPU thread pool per GPU device
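A minimal sketch (not from this diff) of how a net would opt into the new executor, assuming the executor is selected via the NetDef `type` field; the op in the net is a placeholder:
```python
from caffe2.python import core, workspace

# Sketch only: assumes the async_polling executor is picked via NetDef.type.
net = core.Net("example")
net.ConstantFill([], "x", shape=[1], value=1.0)  # placeholder op
net.Proto().type = "async_polling"

workspace.CreateNet(net)
workspace.RunNet(net.Proto().name)
```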
Reviewed By: dzhulgakov
Differential Revision: D5985110
fbshipit-source-id: a9de7fcbb71d046a3aa1b573072b89a65dfeee8c
Summary: Previously, boundary checking happened after the first access for 8-bit ops.
Reviewed By: Yangqing
Differential Revision: D6206753
fbshipit-source-id: 07ab240cae8c67b3048f03aa79af0b6399b9940b
Summary: Still assumes a complete subgraph, but slightly more generic.
Reviewed By: Yangqing
Differential Revision: D6103228
fbshipit-source-id: bfa0d46067e05baa0478a4c37a67ccf8f81f34ec
Summary:
See comments for where this can be useful (disabling the
OperatorDef::DeviceOption(...) so we can control the scope at the
NetDef::DeviceOption(...) level).
Reviewed By: viswanathgs
Differential Revision: D6103412
fbshipit-source-id: 75a9be54275760132f6d1e71acbe9190e7099289
Summary: Updated brew SpatialBN to use initializers similar to other brew ops such as conv and fc, instead of initializing all of its parameters itself within the brew call.
Reviewed By: asaadaldien
Differential Revision: D5840359
fbshipit-source-id: 9f3d688d4957605eaf7ecd2488bc26bfb1da3f78
Summary:
My commit bab5bc broke things with fp16 compute, as I had tested it only with the null input, which actually produced fp32 data (even though dtype was given as float16). Also, I had confused the concepts of "float16 compute" and fp16 data. Issue #1408.
This fixes those issues, tested on both Volta and M40 GPUs. Basically restored much of the previous code and fixed the null input to do FloatToHalf.
Reviewed By: pietern
Differential Revision: D6211849
fbshipit-source-id: 5b41cffdd605f61a438a4c34c56972ede9eee28e
Summary: This cleans up the _hack_get_slice_end() using the Conditional operator.
Reviewed By: jmp84
Differential Revision: D6177797
fbshipit-source-id: 5ce0b76b8472123415bba39488aa2c69aad96111
Summary: Added a simple function to synchronize a blob across machines (but not across devices), i.e., blobs that are not synced over devices.
Reviewed By: yqwangustc
Differential Revision: D6192922
fbshipit-source-id: a4d653c9fb09f06b0c42330bdae07b42f5e6346c
Summary:
Implemented a new CUDA class for the SparseAdagrad operator. The param and moment inputs can now be float or float16.
The functions for mixed-precision add/mult/store are defined in a separate header file ("caffe2/core/float16_util.h") for reuse.
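A rough usage sketch of the op (shapes and hyperparameters are illustrative); the float16 variant described above would feed `param` and `moment` as float16 arrays instead:
```python
import numpy as np
from caffe2.python import core, workspace

# Dense param/moment, sparse gradient rows selected by `indices`.
workspace.FeedBlob("param", np.random.rand(10, 4).astype(np.float32))
workspace.FeedBlob("moment", np.zeros((10, 4), dtype=np.float32))
workspace.FeedBlob("indices", np.array([0, 3, 7], dtype=np.int64))
workspace.FeedBlob("grad", np.random.rand(3, 4).astype(np.float32))
workspace.FeedBlob("lr", np.array([0.01], dtype=np.float32))

op = core.CreateOperator(
    "SparseAdagrad",
    ["param", "moment", "indices", "grad", "lr"],
    ["param", "moment"],  # updated in place
    epsilon=1e-5,
)
workspace.RunOperatorOnce(op)
```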
Reviewed By: azzolini
Differential Revision: D5880200
fbshipit-source-id: dca227f38629a03a9d771f42efe2c0b673075c4d
Summary: Allow the GEMMs in the FC/FCGradient Op to do FP16 compute instead of FP32 if the appropriate op flag is set.
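A hedged sketch of what setting that flag might look like; the flag name `float16_compute` is an assumption here, not taken from this diff:
```python
from caffe2.python import core

# Assumed flag name; the GEMMs inside FC would use FP16 math when it is set.
fc_op = core.CreateOperator(
    "FC",
    ["X", "W", "b"],
    ["Y"],
    float16_compute=True,
)
```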
Reviewed By: asaadaldien
Differential Revision: D5839777
fbshipit-source-id: 8051daedadf72bf56c298c1cf830b019b7019f43
Summary:
The RNN executor had a disadvantage compared to plain nets when running in forward-only mode: for plain nets, we only create two workspaces and two nets and alternate between them. With the RNN executor, we had only four workspaces (4 > 2 because it was faster in some cases), but the nets (or rather the ops) were created for each of the timesteps, which has significant overhead. This diff changes this so that if the executor is in forward-only mode (i.e., has the limited-parallelism setting), it will reuse the same operators as the (t - 4)'th net, excluding the ops that require the timestep blob. That exception is required because the RNN executor needs a different timestep blob for each timestep, since it cannot modify the value of the timestep blob the way running nets in a loop can.
Also removed redundancy in the dependency computation and added a debug flag to the executor that outputs the description of the rnn contents.
Reviewed By: salexspb
Differential Revision: D6155510
fbshipit-source-id: c47f727d2128649b081270d15020a08d41e5748d
Summary: Added an initializer which sets up the ParameterInfo object in the opposite format to pFP16Initializer. This is needed when the op requires the initialized blob to be FP32 but an FP16 copy of the weights is needed.
Reviewed By: wesolwsk
Differential Revision: D5840832
fbshipit-source-id: 439e87f41a1dbc58bf63a5c0e7f7fc4cb00b4d65
Summary: Given an additional tensor containing the values corresponding to the weighted samples, add a tensor output that contains the values selected by the sampled indexes.
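An illustrative sketch of the described behavior; the op name (WeightedSample) and its input/output layout are assumptions:
```python
import numpy as np
from caffe2.python import core, workspace

# One record with three candidates: sampling weights plus a matching values tensor.
workspace.FeedBlob("weights", np.array([[0.1, 0.7, 0.2]], dtype=np.float32))
workspace.FeedBlob("values", np.array([[10.0, 20.0, 30.0]], dtype=np.float32))

op = core.CreateOperator(
    "WeightedSample",
    ["weights", "values"],
    ["sampled_indexes", "sampled_values"],  # second output holds the selected values
)
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob("sampled_indexes"), workspace.FetchBlob("sampled_values"))
```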
Reviewed By: akyrola
Differential Revision: D6050094
fbshipit-source-id: 1eccc641b99e30d36ae83d49f630b018a53e4147
Summary: Sigmoid + CrossEntropy has a numerical stability issue. The gradient of sigmoid is `dx = dy * y * (1-y)`. When `label=0` and `x` is large, `1-y` can be rounded to (near) 0 and we lose `dx`. Switching to `SigmoidCrossEntropyWithLogits` solves the issue because the gradient does not depend on `y`.
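A toy float32 illustration of the issue (not code from this diff):
```python
import numpy as np

x = np.float32(20.0)                      # large logit
label = np.float32(0.0)
one = np.float32(1.0)
y = one / (one + np.float32(np.exp(-x)))  # sigmoid saturates to exactly 1.0 in fp32

# Sigmoid + CrossEntropy backprop uses dx = dy * y * (1 - y); the (1 - y)
# factor rounds to 0 here, so dx is lost.
print(one - y)     # 0.0

# SigmoidCrossEntropyWithLogits differentiates w.r.t. the logit directly,
# giving dx = sigmoid(x) - label, which stays well-behaved.
print(y - label)   # ~1.0
```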
Reviewed By: chocjy
Differential Revision: D6086950
fbshipit-source-id: f990ae726802aa5c56fa62cf5e23f2e61ee047fa
Summary:
We need to use Cluster to isolate the definition of the nodes.
Otherwise, the contexts are polluted and the run becomes
stateful.
Reviewed By: Yangqing
Differential Revision: D6140404
fbshipit-source-id: 09d1c86ef12bb01eaa16b1dade4d2e1e93be287a
Summary:
seq2seq/translate.py was running much slower with the RNN executor. This was because the RNN executor has significant init overhead (I have another diff to reduce it, though not completely eliminate it), and translate was calling the decoder with RunNetOnce -- thus always recreating the net and the ops. Changing this to RunNet() makes translate run faster than without the executor. RunNet takes the net name and uses the already created net, while RunNetOnce passes the whole protobuf.
Noticed a similar bug in the seq2seq ensemble beam model, which also calls CreateNet() but uses RunNetOnce() instead of RunNet().
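The difference, in a minimal sketch (the net contents are placeholders):
```python
from caffe2.python import core, workspace

net = core.Net("decoder_step")
net.ConstantFill([], "x", shape=[1], value=0.0)  # placeholder op

# RunNetOnce ships the whole NetDef protobuf and recreates the net and ops on every call.
workspace.RunNetOnce(net)

# CreateNet once, then RunNet by name: the already created net (and, with the
# RNN executor, its cached timestep state) is reused across calls.
workspace.CreateNet(net)
for _ in range(10):
    workspace.RunNet(net.Proto().name)
```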
Reviewed By: jhcross
Differential Revision: D6156566
fbshipit-source-id: a933453e36a0d8fd163d0584186fda427a680687
Summary:
In order to reproduce the StarSpace model using the architecture of the Two Tower model, we need to implement the ranking loss that is used in StarSpace as well as the Filament model. In both the StarSpace and Filament models, all negative samples come from random negative sampling, so the number of negative samples per positive record is fixed (say 64). To calculate the total loss, for each positive record the hinge distance between the positive score and the negative scores (the 64 scores in the example) is calculated. This diff implements this loss in the Dper framework.
The main idea is to add an option so that negative_sampling.py can output random negative samples as an independent field rather than merging them with the original input_record. In this way, we can calculate the positive score and negative score separately, which will eventually be used when calculating the ranking loss (see the sketch below).
(Note: this ignores all push blocking failures!)
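A minimal numpy sketch of the loss shape described above; the margin value and exact reduction are assumptions, not taken from the diff:
```python
import numpy as np

def hinge_ranking_loss(pos_score, neg_scores, margin=0.1):
    # pos_score: (batch,); neg_scores: (batch, num_negatives), e.g. 64 random
    # negatives per positive record. Hinge distance between the positive score
    # and each negative score, summed over negatives, averaged over the batch.
    return np.mean(
        np.sum(np.maximum(0.0, margin - pos_score[:, None] + neg_scores), axis=1)
    )

pos = np.array([2.0, 0.5], dtype=np.float32)
neg = np.array([[1.0, 2.5], [0.4, 0.6]], dtype=np.float32)
print(hinge_ranking_loss(pos, neg))
```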
Reviewed By: kittipatv
Differential Revision: D5854486
fbshipit-source-id: f8a5b77be744a6cc8a2b86433282b3b5c7e1ab4a
Summary: Made the assertion message clearer to let people know that rowwise is not supported for dense adagrad.
Differential Revision: D6135363
fbshipit-source-id: d706135a335305627310c69a2a6d7721b0a47f0e
Summary:
The RNN executor has significant overhead when creating the timestep nets the first time, and this is especially bad with beam search, which is complex.
So disable the RNN executor for now until the perf regression is fixed (I have a pending diff for it).
Reviewed By: salexspb
Differential Revision: D6138878
fbshipit-source-id: ce63ab9ce9cc1c0f67097aea1e370494ca98c680
Summary:
Added two new ops, FP16MomentumSGDUpdate and FP32MomentumSGDUpdate, which perform both the momentum SGD and weight decay updates to a given parameter in a single op -- thus being more efficient.
Also updated the standard momentum SGD test to check that Nesterov momentum works.
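A hedged sketch of constructing the fused update; the input/output layout mirrors the existing MomentumSGDUpdate op and the argument names are assumptions, not copied from this diff:
```python
from caffe2.python import core

update = core.CreateOperator(
    "FP16MomentumSGDUpdate",
    ["grad", "moment", "lr", "param"],
    ["grad", "moment", "param"],  # updated in place
    momentum=0.9,
    nesterov=True,        # Nesterov momentum, as covered by the updated test
    weight_decay=1e-4,    # weight decay folded into the same op
)
```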
Reviewed By: asaadaldien
Differential Revision: D5837837
fbshipit-source-id: 5ad487b9c59434491d3a4fcfdeed820db6083f57
Summary:
Added FP16SgdOptimizer to the optimizers. The optimizer updates the params using the FP16MomentumSGDUpdate and FP32MomentumSGDUpdate ops. To determine which update op to call, the optimizer expects either the fp32_update flag to be set or the blobs to be in a recognized format created by initializers.py.
These requirements can be loosened if the blob DataType can be queried in Python, though I am unsure of how to do this.
It also forces FP32 updates for SpatialBN, as cuDNN does not support FP16 params for SpatialBN.
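A hedged sketch of the selection logic described above; the flag name fp32_update comes from the summary, everything else is illustrative:
```python
def pick_update_op(fp32_update, blob_format):
    # Illustrative only: the optimizer needs to know the parameter's precision,
    # either via the fp32_update flag or via the format set up by initializers.py.
    if fp32_update or blob_format == "fp32_with_fp16_copy":
        return "FP32MomentumSGDUpdate"
    return "FP16MomentumSGDUpdate"
```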
Reviewed By: asaadaldien
Differential Revision: D5840806
fbshipit-source-id: 84ab8dc11a6e91a198ed72c00287f4809607079d
Summary: Adding "dtype" parameter for the GivenTensorOp. Also, providing backwards compatibility for the existing code, byt supporting the templating if "dtype" is not provided.
Reviewed By: bddppq
Differential Revision: D6090049
fbshipit-source-id: f5deaa57b49f2280289975f4583aba5bc064a2bc
Summary: CUDA version of weighted sampling operator; minor changes for CPU version
Reviewed By: asaadaldien
Differential Revision: D6106668
fbshipit-source-id: 42d7607bd845a4a39cf5b89d7476904cb5928431
Summary: Before we fix it properly with the 'type' argument.
Reviewed By: bddppq
Differential Revision: D6103973
fbshipit-source-id: 8c00a93c373dd0ad0bbfe59944495f6574223ab6
Summary:
A parameter can be initialized multiple times in init_net if parameter sharing is enabled. With the original implementation, only the first parameter init was replaced by pre-trained parameters, while the subsequent ones remained unchanged. This change overwrites every initialization with the pre-trained parameters.
This diff fixes the issue and also supports model init for the ads-intent project.
Reviewed By: dragonxlwang
Differential Revision: D5991291
fbshipit-source-id: 36173f6239c56bd0d604a77bd94e36072f32faa7
Summary:
Currently, type inference infers FLOAT as the type for all GivenTensor*Fill operators. However, the inferred type should match the actual operator.
Also, for the `Slice` operator, there is a corner case where type inference fails.
Reviewed By: azzolini
Differential Revision: D6096813
fbshipit-source-id: d65b7c0f42436138cbc49d8a5a62374fa5e927e1
Summary: A model with rowwise RMSProp does not work in the net-rewriting pipeline (fbl 29841194). This diff solves the issue by changing the way the Slice op is used in the model and adds a rule to `parallelize.py` to cover the needed cases.
Reviewed By: azzolini
Differential Revision: D6096022
fbshipit-source-id: c4f615b2ba99da9f77a1d49c9fb898e0e59401f8