Summary: As in the title + added scuba logging of the results.
Reviewed By: andrewwdye
Differential Revision: D4974261
fbshipit-source-id: 3e05b97133be95ffe37c8bcafd8a5a6bf3e7da93
Summary: Only a CPU impl was available; wrote simple CUDA kernels.
Reviewed By: akyrola
Differential Revision: D4577736
fbshipit-source-id: c2540aa9d332fcdeac46cc7f89aab164d107d7a8
Summary: Both SquaredL2Distance and SquaredL2DistanceGradient had bad CUDA implementations. Use proper reductions and batched kernels.
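For reference, a minimal numpy sketch of the per-example quantity the kernels compute (the 0.5 factor is assumed from the CPU implementation):
```
import numpy as np

# One squared-L2 distance per row (batch element); the CUDA kernels
# reduce this per-row sum with proper parallel reductions.
def squared_l2_distance(X, Y):
    return 0.5 * np.sum((X - Y) ** 2, axis=1)
```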
Reviewed By: asaadaldien
Differential Revision: D4968527
fbshipit-source-id: f7cf82072d38bc127c757c5751863a9439aca8b5
Summary: Implement CPU and GPU gradient for Leaky ReLU op.
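As a reference for what the gradient kernels compute, a numpy sketch (the default alpha here is illustrative):
```
import numpy as np

# dX = dY where X > 0, and alpha * dY elsewhere.
def leaky_relu_grad(dY, X, alpha=0.01):
    return np.where(X > 0, dY, alpha * dY)
```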
Differential Revision: D4943905
fbshipit-source-id: 541f13cd5f274a18b69ecf1362722b1bc0105ad9
Summary:
Instance norm failed the gradient check in some cases that needed a smaller step size. Decreased the step size, but also increased the threshold slightly.
Related diff: D4627379
Reviewed By: kennyhorror
Differential Revision: D4941827
fbshipit-source-id: d6f565340da92af40bfee90627960a3356c69412
Summary:
This is a naive layering approach till we have a better
one. A better one could be C++ based and support diagonal execution. Not integrating into the main LSTM API yet as this might be revised a bit. Would like to land it so we can compare against the current implementation in the benchmark and also use this as an example of how LSTMs can be combined (some folks are doing similar things with some variations).
Later we can make LSTM() support the API of layered_LSTM() and also change it under the hood so it stacks cells into a bigger cell instead. This way, if we make the RNN op use a kind of DAG net, the RNN op can provide more parallelism across stacked cells.
Reviewed By: urikz
Differential Revision: D4936015
fbshipit-source-id: b1e25f12d985dda582f0c67d9a02508027e5497f
Summary:
This is useful when the data contains standalone sequences that are
not connected to each other by any meaningful context.
Reviewed By: yqwangustc
Differential Revision: D4835164
fbshipit-source-id: f95626acc26acc3eba3bca7efb08ed1dbdb36c83
Summary:
A new argument `blob_name_overrides` is added to specify the destination of
each loaded blob (allowing them to have different names than those in the
saved file/db).
This will be used for parameter initialization from a pretrained model
in Dper 2. When loading a blob, we need to avoid name collisions by assigning
the loaded blob a new (temp) name.
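A minimal sketch of the intended usage; the db path and blob names are hypothetical, and the exact pairing of the output blobs with the overrides is assumed from this summary:
```
from caffe2.python import core

# Load blobs saved as "fc_w"/"fc_b" but place them in the workspace
# under temporary names to avoid collisions with the live params.
load_op = core.CreateOperator(
    "Load", [], ["fc_w", "fc_b"],
    db="/tmp/pretrained.minidb", db_type="minidb",
    blob_name_overrides=["tmp_fc_w", "tmp_fc_b"])
```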
Reviewed By: xianjiec
Differential Revision: D4952485
fbshipit-source-id: 4ce79bf40223314bb94981c22cbe537ae3f3d27c
Summary:
Free scratch blobs when data workers exit. Also add a utility function that you can use to reset gradient blobs easily:
from caffe2.python import utils
grad_blobs = [b for b in workspace.Blobs() if b.endswith("_grad") or b.endswith("_shared")]
utils.ResetBlobs(grad_blobs)
Reviewed By: rpenggithub
Differential Revision: D4955531
fbshipit-source-id: d33b2bb2b5247dd2c4cff51c82b1257c871a4179
Summary: Current eval nets contain loss operators (see example: https://fburl.com/6otbe0n7), which is unnecessary. This diff removes them from the eval net.
Differential Revision: D4934589
fbshipit-source-id: 1ba96c20a3a7ef720414acb4124002fb54cabfc7
Summary: Now you can call coordinator.stop_coordinator("train") to stop the train model's data input and release its memory.
Reviewed By: rpenggithub
Differential Revision: D4955014
fbshipit-source-id: c1bc3ec67337b94aff8ea9b306c3b4158eeef42c
Summary:
The _param_init_net does not exist. All the other places reference
param_init_net instead. So far no one has encountered any problem
because all the passed params are BlobReferences. This diff makes
this assumption explicit.
Reviewed By: azzolini
Differential Revision: D4922930
fbshipit-source-id: e6dbd7a29ea640b7e62fcfec7ced3cc7d149f872
Summary:
ScaleGradient is a helper operator that does no actual numerical computation;
in the gradient computation phase it scales the gradient being propagated
through it.
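A minimal usage sketch, assuming the op exposes a float `scale` argument:
```
from caffe2.python import core

net = core.Net("example")
# Identity in the forward pass; during gradient computation the
# incoming gradient is scaled by 0.5 before flowing past this op.
y = net.ScaleGradient(["x"], ["y"], scale=0.5)
```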
Differential Revision: D4920719
fbshipit-source-id: 0e1e0888f79594be874fdbdda5ccef7389064c50
Summary:
The issue is that AliasOp doesn't work well with the swaps that we do for
param.grad and param.accGrad. The tensors become the same if there is no
reallocation of the gradient tensor inside the backward cell net's
local workspace.
bug explanation from akyrola:
```
gpu_0/decoder/decoder_hidden_encoder_outputs_sum_grad: tensor A
on each timestep back to 0, we Alias
gpu_0/decoder/weighted_encoder_outputs_grad,
so then also
gpu_0/decoder/weighted_encoder_outputs_grad: tensor A
It's acc is:
gpu_0/decoder/weighted_encoder_outputs_grad_acc: tensor B
Now after timesteps, we swap (line 626) with _acc to get
gpu_0/decoder/weighted_encoder_outputs_grad: tensor B
gpu_0/decoder/weighted_encoder_outputs_grad_acc: tensor A
OPTION A -- batch size is same as before or smaller:
Then on next iteration, we do again the Alias to
gpu_0/decoder/decoder_hidden_encoder_outputs_sum_grad, so now
gpu_0/decoder/weighted_encoder_outputs_grad: tensor A
and also
gpu_0/decoder/weighted_encoder_outputs_grad_acc: tensor A
swapping them does nothing and they are the same
OPTION B -- batch size increases
gpu_0/decoder/decoder_hidden_encoder_outputs_sum_grad is reallocated,
becomes tensor C
gpu_0/decoder/weighted_encoder_outputs_grad becomes tensor C with
Alias
gpu_0/decoder/weighted_encoder_outputs_grad_acc: is tensor A
```
Reviewed By: urikz
Differential Revision: D4946730
Tags: rnn, caffe2
fbshipit-source-id: b52d63cb238b81d2ad40e05e70deb32a81336f47
Summary: A layer that takes raw ids as inputs and outputs indices that can be used as labels. The mapping will be stored with the model.
Reviewed By: kittipatv
Differential Revision: D4902556
fbshipit-source-id: 647db47b0362142cdba997effa2ef7a5294c84ee
Summary:
Adding add_weight_decay and image_input to the brew module & removing `getWeights` and `getBias` from CNNModelHelper.
Searching fbgs for `useWeights` shows that no one but add_weight_decay uses this function. I checked with the Oculus people; their getWeights is a different function.
kennyhorror Please notice whether this is going to affect you :)
Reviewed By: salexspb
Differential Revision: D4945392
fbshipit-source-id: 4ef350fd81dd40a91847e9f3ebc5421eb564df32
Summary: printing resnet training loss and accuracy for each batch so that people will have a better idea of what is going on
Reviewed By: pietern
Differential Revision: D4945390
fbshipit-source-id: 0fcd60f4735e81641355aba6e6cbf0e57e886e38
Summary:
lengthTile goes from one row to multiple rows; the gradient op is simply
the reverse, adding the fanned-out rows of gradients back together into one
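A numpy sketch of that reverse step (the lengths/row layout is assumed from the forward op):
```
import numpy as np

# Each input row was tiled lengths[i] times in the forward pass, so the
# gradient sums the corresponding fanned-out rows back into one row.
def lengths_tile_grad(dY, lengths):
    row_ids = np.repeat(np.arange(len(lengths)), lengths)
    dX = np.zeros((len(lengths),) + dY.shape[1:], dtype=dY.dtype)
    np.add.at(dX, row_ids, dY)  # unbuffered accumulation per row id
    return dX
```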
Reviewed By: kittipatv
Differential Revision: D4943375
fbshipit-source-id: deae9984e849974a0d484a10b94efdb1d30941cc
Summary:
Added optional support for sharing activation blobs as well. This change revealed a non-optimal implementation in the blob sharing: when reusing free blobs, we need to prefer those blobs that are already shared by many other blobs. Otherwise the memory usage can increase as the pool of 'free blobs' grows.
Also, my first version only passed "free blobs" (i.e. blobs in the recycling pool) down the first branch when operators forked. Now we also pass the blobs that were not used by the first branch down the second branch, and so on.
Also added support for blob size information in the heuristic. This uses the shape inference mechanism.
I had to also do some small tweaks:
- use the Sum() operator as a way to match the shapes of blobs that otherwise had unknown shapes. This is related to the Sum() operator that is added to combine multiple incoming gradient inputs (with _autosplit gradients).
- a couple of random shape inference fixes
This reduces the Resnet-50 memory usage on a 64 batch from 9.45 GiB to 8.5 GiB.
For a 32 batch, the memory usage is 4330 MiB, down from 4800 MiB, compared to Torch's 6856 MiB (thanks prigoyal for checking this for me).
This is unfortunately quite a bunch to review...
Reviewed By: asaadaldien
Differential Revision: D4393909
fbshipit-source-id: 9c7c94125f96512bea80463ebcb63c215ef95ff9
Summary:
This diff contains the following changes:
- implementing __repr__ on Field types; this makes it a little easier to see what's broken in the unit tests
- preserving the shape of ndarray inputs to schema; previously, empty and scalar arrays lost their shape, while other arrays kept it
- type-checking ndarray inputs; this ensures basic integrity of the schema
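A small illustration of the first point (field names are hypothetical):
```
import numpy as np
from caffe2.python import schema

# With __repr__ implemented, a failing unit test now prints the field
# structure instead of an opaque object reference.
record = schema.Struct(
    ('label', schema.Scalar(np.int32)),
    ('embedding', schema.Scalar((np.float32, (16,)))),
)
print(repr(record))
```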
Reviewed By: xianjiec
Differential Revision: D4913030
fbshipit-source-id: bd0f6b8722d95bfe800edf98ba05029c5b99d2af
Summary:
This PR is based on commit "977c6b3" as this version allows MKL to use all the cores available.
All MKL related files are added here after incorporating review comments, major changes include
1. usage of Clang-format (linter) with --style=Google
2. usage of macros for checking input and filter dimension in the mkl operators
3. merged Max and Average pooling functions
4. created a new folder for MKL-related python scripts in the python folder and moved them there
5. removed mkl_alexnet_test.py as it was redundant; convnet_benchmark.py does the same thing
Closes https://github.com/caffe2/caffe2/pull/270
Differential Revision: D4905219
Pulled By: Yangqing
fbshipit-source-id: e5f5b189714a835b93b9ebda24c52e09572dfca7
Summary:
If an exception is thrown inside a namescope, the scope won't be reset to
its previous value. This diff changes this behavior to the expected one.
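A minimal illustration of the fixed behavior:
```
from caffe2.python import core

# After this diff the name scope is restored even when the body throws.
try:
    with core.NameScope("foo"):
        raise RuntimeError("boom")
except RuntimeError:
    pass
assert core.ScopedName("x") == "x"  # "foo/" was popped despite the error
```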
Reviewed By: kittipatv
Differential Revision: D4928621
fbshipit-source-id: 1d3579f2093ca60901b0d37ae3f2108deb2333ea
Summary: Instead of requiring gradient updates to run on GPU, this change allows loss computation to happen on GPU while all grad updates happen on CPU.
Reviewed By: jhcross
Differential Revision: D4943996
fbshipit-source-id: 1f2144c4277dfdb865877e0d0216ca1ac7dd7309
Summary:
Add a pointwise `IsMemberOf` operator to Caffe2.
The original idea was `In`, but I think that is not as clear.
I used `UnaryElementwiseWithArgsOp` at some point, but it made the code a bit more difficult to read without bringing any feature.
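A usage sketch; the `value` argument name is an assumption here:
```
from caffe2.python import core, workspace
import numpy as np

workspace.FeedBlob("x", np.array([0, 1, 2, 3], dtype=np.int32))
# Pointwise test of membership in a fixed set of values.
op = core.CreateOperator("IsMemberOf", ["x"], ["y"], value=[0, 2])
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob("y"))  # [ True False  True False]
```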
Reviewed By: ender-wieczorek
Differential Revision: D4912655
fbshipit-source-id: 716b66bb51468dd59db5f76f23d78cda85961b58
Summary:
Two new operators to pack and unpack a dataset. This is so that we can
re-use other operators that do not understand the schema format. The immediate
use-case is to use it with a partition operator.
Packing works by splitting the input into separate tensors, putting them in a
vector, and wrapping it in a shared_ptr (as opposed to a unique_ptr, so we can
copy).
Unpack takes the packed input and concatenates it back to the original.
I also had a hard time understanding the iteration, so I created a TreeWalker
that hides the complexity of operating on all the arrays and provides
short, purpose-specific functions that, at least for me, are easier to
understand.
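A hedged sketch of the intended flow; the op names and the `fields` argument are assumed from this summary:
```
from caffe2.python import core

fields = ["lengths", "values"]  # hypothetical schema field names

# Pack the dataset into a single blob of records (shared_ptr inside,
# so the packed blob can be copied, e.g. by a Partition operator)...
pack = core.CreateOperator("PackRecords", fields, ["packed"],
                           fields=fields)
# ...then concatenate the packed records back to the original layout.
unpack = core.CreateOperator("UnPackRecords", ["packed"], fields,
                             fields=fields)
```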
Reviewed By: dzhulgakov
Differential Revision: D4918002
fbshipit-source-id: ecbf9196ed25e886a94383961176b8c84dde2d2f
Summary:
Added a forward_only option to recurrent_net and RNNCell. If this is set, the backward_step_net is not passed to the operator.
When backward_step_net is not available, the operator knows it is in forward-only mode and does not create workspaces for each step but cycles
through a single private workspace.
Note: we could avoid doing a lot of work in the recurrent.py:recurrent_network call when the backward step is not needed, but doing that nicely requires
more refactoring than I wanted to do now. Thus, we still create the backward step nets etc., but just don't pass them to the op.
This can be used to create more efficient inference models. You can also sanitize existing inference nets and remove the backward_step_net argument to
get the benefits.
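A minimal sketch of building a forward-only cell (blob names and the exact argument placement are assumptions, not the exact API):
```
from caffe2.python import model_helper, rnn_cell

model = model_helper.ModelHelper(name="inference")
# With forward_only=True, no backward_step_net is attached, so the
# RecurrentNetwork op recycles a single private workspace.
outputs = rnn_cell.LSTM(
    model, "input", "seq_lengths", None,
    dim_in=64, dim_out=128, scope="lstm", forward_only=True)
```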
Reviewed By: salexspb
Differential Revision: D4916482
fbshipit-source-id: c99b93c9cb897c32b0f449253f7f6d6a942618ad
Summary:
This is needed to have a stateful PythonOp (such as the PyTorch one in the following diff) where computing f produces state (not tensors) that's consumed by grad_f.
python_func_type is a type that is constructed as python_func_type(f) and provides forward and backward methods (delegated to f and f_grad). We construct this object at op registration time so we can have it as a thread local.
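A sketch of what such a python_func_type could look like; everything beyond "constructed with f, exposes forward/backward, keeps non-tensor state" is an assumption:
```
class StatefulPythonOp(object):
    """Constructed as python_func_type(f); exposes forward/backward.
    How f_grad is wired in is an assumption of this sketch."""

    def __init__(self, f):
        self.f = f
        self.state = None  # non-tensor state shared across passes

    def forward(self, inputs, outputs):
        self.state = {"num_inputs": len(inputs)}  # produced by f's pass
        return self.f(inputs, outputs)

    def backward(self, grad_inputs, grad_outputs):
        # In the real op this would delegate to f_grad; the point is
        # that self.state from forward is available here.
        assert self.state is not None
        return self.f(grad_inputs, grad_outputs)
```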
Differential Revision: D4900963
fbshipit-source-id: 00a6a55fa372e2244048921914e22e710d11f7ce
Summary:
rename model_helpers to brew. This is a big diff now. I did these things:
1. replace model_helpers with brew:
find . -type f -exec sed -i 's/model_helpers/brew/g' {} +
2. rename model_helpers.py and model_helpers_test.py
3. rename ModelHelpersTest to BrewTest
4. lowercase all the helper functions to distinguish them from single ops
5. run my unittests
6. run converge tests
Reviewed By: salexspb
Differential Revision: D4930465
fbshipit-source-id: f420a1b03238df1cbe9f4426e0b9c43a12119661
Summary:
rename ModelHelperBase to ModelHelper.
This is the result of running:
find . -type f -exec sed -i 's/ModelHelperBase/ModelHelper/g' {} +
We had 19 results when fbgs'ing ModelHelperBase. Here there are 20 instances because I added 1 test in model_helpers_test.py.
Reviewed By: salexspb
Differential Revision: D4928337
fbshipit-source-id: bc4c12b60b90c167e717de50ea9fe17521e142e3
Summary:
This is getting too messy again, so cleaning it up even more. One thing I added here: not calling random to generate the input sequence. Ideally we'd do this for all other inputs too; this was reported to be an issue when hypothesis finds bad examples, as it can make the test run very long.
Also I tuned the ranges a bit so the test finishes faster. On my devgpu the whole test took 600 seconds before and now takes 39 seconds.
One more important thing: we want to test all combinations of the things that are in the for loop, while the things provided by hypothesis are just random tensor inputs.
Differential Revision: D4902956
fbshipit-source-id: ceb02d6761406b3192101d3b255abe90b2866770
Summary:
CUDA version of PRelu and its gradient. The forward pass is straightforward; the backward pass requires a reduction over the weights.
tsaizhenling, please patch this and test.
Differential Revision: D4931630
fbshipit-source-id: 1238e7d536e41480713865ced91aaef88f4feef5
Summary:
Simple FindOp for CPU and GPU that searches for a list of unordered needles in an unordered index. The CPU version might be faster if we first sorted the index/needles, but we can get back to that later.
The CUDA op is also kind of brutish, but quite parallel. Since the index and the queries are smallish, at least in the use case currently in mind (the Machine Translation team's word candidate search), I think this is a sufficient start.
Note that this is much simpler than the Index class of ops, which allow modifying the index etc. Since CUDA ops are more complex to implement for the full Index functionality, I decided to make a separate op with this very simple functionality.
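A usage sketch (the `missing_value` argument name is an assumption):
```
from caffe2.python import core, workspace
import numpy as np

workspace.FeedBlob("index", np.array([10, 20, 30], dtype=np.int64))
workspace.FeedBlob("needles", np.array([20, 99], dtype=np.int64))
# Position of each needle in the index; absent needles get missing_value.
op = core.CreateOperator("Find", ["index", "needles"], ["positions"],
                         missing_value=-1)
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob("positions"))  # [ 1 -1]
```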
Differential Revision: D4910131
fbshipit-source-id: 6df35c9e3c71d5392a500d5b98fd708ab0c8e587
Summary:
An arg_scope module for model_helpers.
Some coding examples with it:
with model_helpers.arg_scope([model_helpers.FC], kwargs):
    model_helpers.FC(model, "x", "out_1", n, n)

with model_helpers.arg_scope([myhelper], n=-3):
    with model_helpers.arg_scope([myhelper], n=-2):
        with model_helpers.arg_scope([myhelper], n=n):
            res = model_helpers.myhelper(None)

with model_helpers.arg_scope([myhelper], n=-3), \
     model_helpers.arg_scope([myhelper], n=-2), \
     model_helpers.arg_scope([myhelper], n=n):
    res = model_helpers.myhelper(None)
Reviewed By: salexspb
Differential Revision: D4837180
fbshipit-source-id: 2cbd81681779d6cd1e61ee189edcc1cf3bb07d15
Summary: Work in progress for improving the performance of the TransposeOp on CPU. This is used extensively for inference in several neural MT systems, so optimizing this function is worthwhile and will reduce request latency.
Differential Revision: D4913075
fbshipit-source-id: fa2742829291d91f3eba00fdfe7d6c0dae83e206
Summary: CuDNN LSTM weights were incorrectly sized for layers > 0: there was an assumption that the input size to middle layers is the same as for the first layer, but actually a middle layer gets its input from the layer below, which has dimension equal to the output (hidden) dimension. This worked fine when input_dim and hidden_dim were equal, as they are in the default params for lstm_benchmark.
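In other words, a one-function sketch of the corrected sizing:
```
def cudnn_lstm_layer_input_dim(layer, input_dim, hidden_dim):
    # Layer 0 consumes the external input; every deeper layer consumes
    # the hidden-state output of the layer below.
    return input_dim if layer == 0 else hidden_dim
```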
Reviewed By: salexspb
Differential Revision: D4922824
fbshipit-source-id: 3ed05529dcb0a4e66ad440084a55df1c5932fd33
Summary:
downloaded_size needs to be incremented by the length of the returned data_chunk.
Otherwise, when the last block's size is less than the chunk size, the reported percentage exceeds 100%.
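A minimal sketch of the fixed accounting (function and variable names are hypothetical):
```
def report_progress(chunks, total_size):
    downloaded_size = 0
    for data_chunk in chunks:
        downloaded_size += len(data_chunk)  # not += chunk_size
        print("%.1f%%" % (100.0 * downloaded_size / total_size))
```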
Closes https://github.com/caffe2/caffe2/pull/329
Differential Revision: D4922227
Pulled By: Yangqing
fbshipit-source-id: 7d05d9bbf2dad0a9d330be96b60e658908185a46