Summary:
Implementation of ##LSTMWithAttention##
Still TBD:
1. There are problems with backpropagation, because gradients are not implemented for ops with broadcasting
2. I need to make initial_recurrent_state be of shape [dim] rather than [1, batch_size, dim], so that one doesn't need to provide batch_size to LSTMWithAttention
Differential Revision: D4298735
fbshipit-source-id: 8903fcff4d6a66647ee6d45a6ef28803fc3091e5
Summary:
It could be that only the first item in the batch was really used, in case the rest of the memory was 0. Or, if the memory there contained a big positive integer, then the whole sequence was used. So whether the rest of the batch was used depended on our luck :)
Reviewed By: Yangqing
Differential Revision: D4599569
fbshipit-source-id: ae89cee796bbcbc232e4abcab71dee360b0d8bc6
Summary:
In-place is a ~30% speedup, but needs a change to torch2caffe
or a graph rewrite on the client.
Differential Revision: D4577582
fbshipit-source-id: c31bf8ba97f4fa4cedf355cf2475eb7bab48b304
Summary:
The cudnn_ws arg was already there. This PR only uses that arg when the model is created.
Closes https://github.com/caffe2/caffe2/pull/164
Differential Revision: D4598443
Pulled By: Yangqing
fbshipit-source-id: c2e83f73059360ecf2fedf2c62be7cacbb4034ca
Summary: we may not need dense feature inputs in some models (e.g., double helix).
Reviewed By: dzhulgakov
Differential Revision: D4568755
fbshipit-source-id: 6850508f86fafb53f81783b2a2a38776be5455d7
Summary: Another part of making DPER compatible with half-floats. This diff adds support for fp16 to the segment reduction operators used in DPER.
Reviewed By: dzhulgakov
Differential Revision: D4587560
fbshipit-source-id: 0ae10648a7286a820bffaee802464dd9464584bc
Summary:
First part of adding half-float support to DPER 2.0. Let's add an option, use_half_floats, to enable converting some of the model's weights from fp32 to fp16 before saving them to the predictor model parts. For now it covers the SparseLookup layer's embeddings. All conversion is done after training is finished, and the saved models are ready to be used on remote predictors as-is (they will be stored compacted in memory). The new fp16 blobs are saved to the model in place of the original ones, under the same names, so we don't modify MetaNetDef at all.
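As a minimal sketch of the conversion step (assuming the relevant embedding blob names are known; the blob name below is a placeholder, not the actual SparseLookup naming):

```python
import numpy as np
from caffe2.python import workspace

# Placeholder name for a SparseLookup embedding blob selected for conversion;
# in the real flow the names would come from the model's parameter list.
embedding_blob_names = ["feature_x/sparse_lookup/w"]

for name in embedding_blob_names:
    fp32_weights = workspace.FetchBlob(name)        # original fp32 parameters
    fp16_weights = fp32_weights.astype(np.float16)  # compact half-float copy
    # Re-feed under the same name so MetaNetDef does not need to change.
    workspace.FeedBlob(name, fp16_weights)
```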
Next steps:
1) support on the delivery side -- operators working with these blobs should support both float and float16 input types
2) benchmark performance to make sure there is no regression
a) of serialization
b) of delivery
3) support realtime training (I'm thinking about adding a new pre-publishing net which will be executed each time the realtime trainer stops to publish a new snapshot)
Depends on D4567304
Reviewed By: kennyhorror
Differential Revision: D4571710
fbshipit-source-id: 19967a17d3bd84878d66e8c0ed8c5342bf38d979
Summary:
This operator always outputs dense gradients regardless of
the input gradients. For the forward pass, it passes inputs to outputs in place.
Reviewed By: xianjiec
Differential Revision: D4582511
fbshipit-source-id: 7eb2c5d2142aa05d373f06cab1e7f89d8b747d34
Summary: Set up a server node that periodically gathers the values of all nodes' perf counters, allowing them to be published at once.
Reviewed By: dzhulgakov
Differential Revision: D4555116
fbshipit-source-id: 8e49ac8353b52b2be82aedf305762478e7fa687a
Summary:
We were running into a problem where a Job could not be pickled. It needs to be pickled in order for the master flow operator to execute it using the session.
This introduces the concept of a "compiled" Job, which pretty much only stores protobufs with the Jobs to be executed, avoiding any issues with pickling.
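A minimal sketch of the pattern, assuming a hypothetical CompiledJobSketch container that keeps only serialized PlanDef protobufs (this is not the actual compiled-Job API):

```python
import pickle
from caffe2.proto import caffe2_pb2

class CompiledJobSketch(object):
    """Hypothetical container: stores only serialized protobufs, so it pickles cleanly."""

    def __init__(self, plan_protos):
        # Keep raw bytes instead of live Python objects with unpicklable state.
        self.serialized_plans = [p.SerializeToString() for p in plan_protos]

    def plans(self):
        # Rebuild the PlanDef protobufs on demand at execution time.
        restored = []
        for data in self.serialized_plans:
            plan = caffe2_pb2.PlanDef()
            plan.ParseFromString(data)
            restored.append(plan)
        return restored

# The compiled object round-trips through pickle without issue.
compiled = CompiledJobSketch([caffe2_pb2.PlanDef()])
compiled = pickle.loads(pickle.dumps(compiled))
```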
Reviewed By: dzhulgakov
Differential Revision: D4554799
fbshipit-source-id: 2ee9877ca49a796d51925e5ec917436e3d930984
Summary:
Previously we had several limitations for a reporter net:
- it needed to be a net, not an execution step
- only one was allowed per execution step, with a single interval
Now, "reporter nets" become reporter steps, and multiple of them can be specified with different timeouts.
Reviewed By: dzhulgakov
Differential Revision: D4583686
fbshipit-source-id: ad7266e16f96e7829fd24dcc1f165f39e9db573d
Summary: this fixes a bug in the Eigen implementation that calculates cross-entropy
Reviewed By: salexspb
Differential Revision: D4582078
fbshipit-source-id: 4c92047e9dbbe219fcbef618a45c584c2fbfaad5
Summary: Removed Model API because no one {seems to,should} be using it
Reviewed By: Yangqing
Differential Revision: D4575126
fbshipit-source-id: 174d39e9aa46750f1fae8295f7e1e5452559af33
Summary:
- Key-value store for counters.
- Counters are updated via macros that also export USTD probes.
- Counter values can be exported using caffe2 operators.
- Snapshot mechanism for tracking time-window counter values.
Reviewed By: dzhulgakov, pietern
Differential Revision: D4553761
fbshipit-source-id: 25a1a91a3168dcff2159c6fba7b357d3fd3aa9bf
Summary:
This diff adds the ability to train a multiclass classifier on a sampled subset of classes. This basically implements what is described in https://arxiv.org/abs/1412.2007, without the sampling probability correction. Since this implements uniform sampling, the sampling probabilities cancel out in the softmax anyway.
The trick to make this work is to have two different nets for prediction and training that share parameters. The model is built normally until the last layer. If sampling is needed, the class sampling works as follows:
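The detailed sampling steps are not spelled out in this summary; purely as an illustration, here is a hedged numpy sketch of uniform class sampling without a probability correction (function and variable names are mine, not from this diff):

```python
import numpy as np

def sample_classes(labels, num_classes, num_sampled, rng=np.random):
    """Uniformly sample a class subset that always contains the true labels.

    With uniform sampling the (identical) sampling probabilities cancel out in
    the softmax, so no correction term is applied here.
    """
    positives = np.unique(labels)
    candidates = np.setdiff1d(np.arange(num_classes), positives)
    negatives = rng.choice(candidates, size=num_sampled, replace=False)
    sampled = np.concatenate([positives, negatives])
    # Remap the original labels to their indices within the sampled subset.
    remapped_labels = np.searchsorted(positives, labels)
    return sampled, remapped_labels
```

In this sketch, the training net would gather only the softmax weight rows for `sampled`, while the prediction net keeps the full softmax over all classes, with both nets sharing the underlying parameters.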
Reviewed By: xianjiec
Differential Revision: D4512859
fbshipit-source-id: ab537bcac81d5e5877a8795045e8682c8064da68
Summary: Do I understand correctly? It must be of size 1 for sigrid
Reviewed By: kennyhorror
Differential Revision: D4576541
fbshipit-source-id: 92fa8dc62e36ff095e14cceeb80b03c0028f5695
Summary:
Move the open source version of build_ftrl to the open source directory.
build_ftrl can use several engines, and the SIMD engine is fb-specific, so we
keep an fb-specific build_ftrl in the fb/optimizers/sgd.py file.
If the caller only uses the open source engine, it can import the
open source build_ftrl. If the caller may use the SIMD engine, it needs
to import the fb-specific build_ftrl.
Also move the tests to the python directory.
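A sketch of the resulting import choice (assuming the open-source location is caffe2.python.optimizer; the fb-internal module path in the comment is inferred from the summary, not verified):

```python
# Open-source engine only: import the public build_ftrl
# (assuming the open-source module is caffe2.python.optimizer).
from caffe2.python.optimizer import build_ftrl

# If the SIMD engine is needed, the fb-specific build_ftrl has to be imported
# instead, roughly (internal path per the summary above):
# from caffe2.fb.optimizers.sgd import build_ftrl
```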
Reviewed By: salexspb
Differential Revision: D4560384
fbshipit-source-id: 84fc915d3bbe42fd19503ef132d3277088f6fab3
Summary:
Remove the use of `NextName` in the layer model helper, so that the same function returns a `model_helper` that constructs an identical `Net` when run under the same NameScope.
`NextScopedBlob` should only take effect when there is a real name conflict; otherwise it returns a ScopedBlobReference.
This is critical for parameter blobs. In the long run, we need to be able to specify parameter blobs more explicitly (kennyhorror is working on this). This solution works in the short term, e.g., for two-tower sparse nn models.
Reviewed By: kennyhorror
Differential Revision: D4555423
fbshipit-source-id: 2c4b99a61392e5d51aa878f7346466a8f14be187
Summary:
Pass through the h-value recurrent output unchanged at each LSTM step beyond the valid part of a sequence (computed based on seqLengths, allowing batching of sequences of different lengths). This enables using the final-step output of each sequence as the output when one vector is desired for the entire sequence. The gradient is also passed back unchanged.
Also made some cosmetic changes to recurrent_network_test.py (seq_lengths offset corrected; it should be in [1, T] rather than [0, T-1]).
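As a small numpy sketch of why the pass-through makes the last timestep directly usable as the per-sequence output (assuming an output layout of [T, N, D]; this is not the operator implementation):

```python
import numpy as np

T, N, D = 5, 3, 4
hidden_all = np.random.randn(T, N, D).astype(np.float32)
seq_lengths = np.array([5, 2, 4])

# Simulate the pass-through: past each sequence's valid length, h is copied
# forward unchanged at every step.
for t in range(1, T):
    expired = t >= seq_lengths          # sequences already past their end
    hidden_all[t, expired] = hidden_all[t - 1, expired]

# The last timestep now holds each sequence's final valid hidden state,
# so no per-sequence gather is needed.
final_states = hidden_all[-1]                          # shape [N, D]
gathered = hidden_all[seq_lengths - 1, np.arange(N)]   # explicit gather
assert np.allclose(final_states, gathered)
```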
Reviewed By: urikz
Differential Revision: D4540307
fbshipit-source-id: 73a9f6326069d713dcb0cdc8d17869317c6dbe96
Summary:
In the current implementation of SaveOp we always use names for blobs from the
current workspace. But there is a use case for replacing names in a saved model:
for example, to use half-floats in the prediction model but keep full floats for
the training model, we might want to save a blob "w_fp16" as "w".
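A hedged usage sketch via core.CreateOperator; the blob_name_overrides argument name and the db settings below are assumptions for illustration, not taken from this summary:

```python
from caffe2.python import core

# Sketch only: save the half-float blob "w_fp16" under the name "w".
# The renaming argument name (blob_name_overrides) is an assumption.
save_op = core.CreateOperator(
    "Save",
    ["w_fp16"],
    [],
    db="/tmp/model_sketch.minidb",
    db_type="minidb",
    blob_name_overrides=["w"],
)
```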
Differential Revision: D4567304
fbshipit-source-id: 87bc84fa6a45d8bfa33edb55ac1fb1cff542dbe3
Summary: This diff adds shape inference for the SoftmaxWithLoss Operator
Differential Revision: D4565835
fbshipit-source-id: 1c2db398524c765977ec4d8a22c9b986bf9faf82
Summary: Every time data is put into the logger, it checks if a second has passed. If so, it displays how many inputs were put in the last second.
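A minimal sketch of the described behavior (class and method names are mine, not the actual logger):

```python
import time

class InputRateLogger(object):
    """On every put, check whether a second has passed and, if so, report
    how many inputs arrived during that second."""

    def __init__(self):
        self._window_start = time.time()
        self._count = 0

    def put(self, data):
        self._count += 1
        now = time.time()
        if now - self._window_start >= 1.0:
            print("inputs in the last second: %d" % self._count)
            self._count = 0
            self._window_start = now
```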
Differential Revision: D4527148
fbshipit-source-id: f197eb975ed81111449705e0719d1e56f385fd8d
Summary:
One can find the reason why I need a gradient for CopyOp in this post: https://fb.facebook.com/groups/1405155842844877/permalink/1639683782725414/
The gradient for CopyOp is trivial when the device is the same (CPU, or the same GPU), but gets a little harder when the copy was made across two different GPUs.
I introduce a new operator, CopyOnDeviceLike, which has an additional second input. The op copies the first input to the same device as the second one. The default implementation is exactly the same as CopyOp, but I specialize it for CUDAContext.
Please let me know if I'm doing anything wrong here! This is my first caffe2 diff related to operator definitions.
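A hedged usage sketch with core.CreateOperator; the blob names are placeholders, only the operator name comes from this summary:

```python
from caffe2.python import core

# "X" lives on one device; "Y_on_gpu1" lives on the target device. The op
# copies its first input onto the device of its second input.
copy_op = core.CreateOperator(
    "CopyOnDeviceLike",
    ["X", "Y_on_gpu1"],
    ["X_on_gpu1"],
)
```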
Reviewed By: Yangqing
Differential Revision: D4557258
fbshipit-source-id: 9494be589cc1e5696bbbfe25b7622aaa4c9efe4a
Summary: As in headline. I had missed these originally.
Reviewed By: kennyhorror
Differential Revision: D4560255
fbshipit-source-id: e69458e8a2574b981e40e915d87c8e16dadee7d6
Summary:
(Caffe2) Modified RecurrentNetworkGradient operator so that training is possible with any of the output blob(s) receiving gradient during the backward pass. This is realized through a new argument for the RecurrentNetwork op, outputs_with_grads, which takes a list of the indices of the output blobs which will receive gradient. The default case (only receiving gradient from the first output blob) remains the default.
New unit test covers the case where outputs_with_grads = [1, 2] using Python LSTM wrapper.
Reviewed By: urikz
Differential Revision: D4518516
fbshipit-source-id: 5c531582b20f3cf727d1aa91239b4d5a2b8a7c1f
Summary:
The existing op transforms the input in a general way: it needs M transform mappings to transform an NxM input tensor.
But for binary predictions X (an Nx2 tensor), we know that X[:, 0] = 1 - X[:, 1].
So we just need one mapping for X[:, 1]; after it is transformed, we can compute X[:, 0].
This diff handles that case.
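An illustrative numpy sketch of the idea (the calibration mapping is a placeholder, not the actual transform op):

```python
import numpy as np

def transform_binary_predictions(X, transform):
    """X is an Nx2 tensor of binary prediction scores with X[:, 0] = 1 - X[:, 1];
    only the positive column needs a mapping, the other is recomputed from it."""
    positive = transform(X[:, 1])
    return np.stack([1.0 - positive, positive], axis=1)

X = np.array([[0.8, 0.2], [0.3, 0.7]])
calibrated = transform_binary_predictions(X, lambda p: np.clip(p * 1.1, 0.0, 1.0))
```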
Differential Revision: D4550441
fbshipit-source-id: 42d8c6e88d830c97628ee930b543740a32acf904
Summary: This is like `UniformIntFill` but guarantees to return unique elements in the output, excluding the optional elements to avoid.
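A numpy sketch of the intended semantics (not the operator implementation; names are mine):

```python
import numpy as np

def unique_uniform_sample(low, high, count, avoid=(), rng=np.random):
    """Draw `count` distinct integers uniformly from [low, high], never
    returning anything in `avoid`."""
    candidates = np.setdiff1d(np.arange(low, high + 1),
                              np.asarray(avoid, dtype=np.int64))
    return rng.choice(candidates, size=count, replace=False)

# Example: 4 unique values in [0, 9], excluding 2 and 5.
print(unique_uniform_sample(0, 9, 4, avoid=[2, 5]))
```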
Reviewed By: xianjiec
Differential Revision: D4511814
fbshipit-source-id: 5dc98ee580616e60e46ee74ebb3f5ddd29a09965
Summary: Updates revise_recurrent_network_op(), which supports cloning recurrent networks by adding a blob-name prefix to string arguments to maintain correspondence. It previously relied on many hard-coded indices referring to the positions of arguments and inputs of RecurrentNetworkOp and its corresponding gradient operator, and therefore broke when the implementation changed. This fix should make it more general and robust.
Differential Revision: D4559768
fbshipit-source-id: fb85b0b1ffb1393dc84760d6ae5dc473e8b764b0
Summary: to verify that a model uses only a subset of the parameters of another model (e.g., the model doing training).
Differential Revision: D4557787
fbshipit-source-id: bd8ac96f5e78e05f6f56086db6e6ddcda36c1d37
Summary: Generates a fair amount of documentation from the operators. Also provides a framework for later documentation generation and custom syntax.
Reviewed By: dzhulgakov
Differential Revision: D4168311
fbshipit-source-id: 89ae9d023ad883623cdc1879c11e10b202b68793
Summary:
build_sgd, build_adagrad, and build_adam are in the open source python directory
now.
Move the tests to the same directory.
Extract TestBase to test_util.py so that TestFtrl can still refer to it.
Depends on D4552227
Reviewed By: salexspb
Differential Revision: D4554549
fbshipit-source-id: 35aed05b82c78530808ef623a25bb7532b2abbae
Summary:
The change migrates build_adam function to the open source python directory.
Depends on D4551871
Reviewed By: salexspb
Differential Revision: D4552227
fbshipit-source-id: 2b6bef183ecfd645d0f26215a784846d8841b845
Summary:
hasattr(x, ops) should always work, regardless of whether you're inside or outside a NetBuilder context.
There's no ideal solution here; I think this is sensible enough.
Reviewed By: kennyhorror
Differential Revision: D4557228
fbshipit-source-id: 4b1c1db5c8b11e4ccbf977b3f82c63b2c3e6e7db
Summary: These operators update the state of the instance and therefore should have the instance in the output list.
Reviewed By: xianjiec
Differential Revision: D4554773
fbshipit-source-id: 556d484fcf58878308aa6b0f7cd7ea2446d3f29e
Summary:
The change migrates build_adagrad function to the open source python directory.
Depends on D4547016.
Reviewed By: salexspb
Differential Revision: D4551871
fbshipit-source-id: cb68d9b2a723b0f069c8a24cfa3062f1e676c016
Summary:
In the tutorial, I found that the call to Model() was not correct. After this change, it works.
Closes https://github.com/caffe2/caffe2/pull/148
Reviewed By: bwasti
Differential Revision: D4556894
Pulled By: Yangqing
fbshipit-source-id: 949a8d0496861f19869436908ffe1ef1a0f853b1
Summary: ContextManager was thread-local. This caused issues because the context registration needs to be global; what needs to be thread-local is the current context.
Reviewed By: jhcross
Differential Revision: D4556050
fbshipit-source-id: 5de1c0d9fd0a778c4cb1eadef01f9a1ab488f603
Summary:
Currently build_sgd is in a facebook-specific directory. Move it to python so that
the open source world can use it.
Reviewed By: salexspb
Differential Revision: D4547016
fbshipit-source-id: d699b7b1ab8051afdeadedb4d247ec2a04a7a3e7
Summary:
Inputs have to be arranged in such a way that the j-th example of
batch i goes right before the j-th example of batch i+1 in the text.
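An illustrative numpy sketch of this arrangement (helper name and shapes are assumptions):

```python
import numpy as np

def arrange_text_batches(tokens, batch_size, seq_length):
    """Split a token stream into batch_size parallel streams so that the j-th
    row of batch i is immediately followed in the original text by the j-th
    row of batch i+1."""
    tokens = np.asarray(tokens)
    num_batches = len(tokens) // (batch_size * seq_length)
    usable = tokens[:num_batches * batch_size * seq_length]
    # Each of the batch_size streams is one contiguous chunk of the text.
    streams = usable.reshape(batch_size, num_batches * seq_length)
    # Batch i takes columns [i*seq_length, (i+1)*seq_length) of every stream.
    return [streams[:, i * seq_length:(i + 1) * seq_length]
            for i in range(num_batches)]

batches = arrange_text_batches(np.arange(24), batch_size=2, seq_length=3)
# batches[0][1] == [12, 13, 14] continues as batches[1][1] == [15, 16, 17].
```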
Reviewed By: urikz
Differential Revision: D4519553
fbshipit-source-id: 9dd80658e0c4d9ff0f97a7904cbb164f267fe39f