Commit Graph

43 Commits

intel
b3b66e3d00 MKL related files with review comments incorporated
Summary:
This PR is based on commit "977c6b3", as that version allows MKL to use all the available cores.
All MKL-related files are added here after incorporating review comments; the major changes are:

1. used clang-format (linter) with --style=Google
2. added macros for checking input and filter dimensions in the MKL operators
3. merged the Max and Average pooling functions
4. created a new folder for MKL-related Python scripts in the python folder and moved them there
5. removed mkl_alexnet_test.py, which was redundant since convnet_benchmark.py does the same thing
Closes https://github.com/caffe2/caffe2/pull/270

Differential Revision: D4905219

Pulled By: Yangqing

fbshipit-source-id: e5f5b189714a835b93b9ebda24c52e09572dfca7
2017-04-25 00:31:29 -07:00
Ahmed Taei
7440cd5ef4 Add python_func_type to PythonOp
Summary:
This is needed to have a stateful PythonOp (such as the PyTorch op in the following diff), where computing f produces a state (not tensors) that is consumed by grad_f.
python_func_type is a type constructed as python_func_type(f) that provides forward and backward methods (delegated to f and grad_f). We construct this object at op registration time so that it is thread-local.
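A minimal sketch of the contract, with illustrative names (the actual interface in the diff may differ):

    class StatefulFunc(object):  # hypothetical python_func_type
        def __init__(self, f):
            self.f = f
            self.state = None  # non-tensor state produced by forward

        def forward(self, inputs, outputs):
            # computing f produces a state that backward will consume
            self.state = self.f(inputs, outputs)

        def backward(self, inputs, outputs):
            # the gradient computation (grad_f) consumes self.state here
            ...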

Differential Revision: D4900963

fbshipit-source-id: 00a6a55fa372e2244048921914e22e710d11f7ce
2017-04-24 15:52:26 -07:00
Aapo Kyrola
570c6bb9b7 Fix backward pass computation when an input is used in a Fill-op input for shape
Summary:
Fix an issue that amyzhang encountered. She was using ConstantFill to create a blob of the same size as another blob. This interrupted the gradient computation flow at the ConstantFill, since the gradient for the input blob was set to None (even though it already had another gradient set). The correct solution is to avoid overwriting gradient assignments with None if the blob already has a gradient, UNLESS that blob is an output of the same op, as with the StopGradient op. (Note that Amy's problem was fixed by instead using a fixed-shape ConstantFill and Add with broadcast=1, which is a better solution anyway.)

Not sure if I explained this well, but see the new unit tests. Before this change, testAddAndDynamicConstant failed while testAddAndStaticConstant succeeded.
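For reference, a sketch of the failing pattern (testAddAndDynamicConstant-style); ConstantFill reads "x" only to infer the output shape:

    from caffe2.python import core, workspace
    import numpy as np

    workspace.FeedBlob("x", np.random.rand(4).astype(np.float32))
    net = core.Net("dynamic_constant")
    # "x" is consumed only for its shape; its gradient assignment must not
    # be overwritten with None, since "x" also gets a gradient via Add
    const = net.ConstantFill(["x"], "const", value=1.0)
    net.Add(["x", const], "y")
    net.AddGradientOperators(["y"])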

Reviewed By: dzhulgakov

Differential Revision: D4861176

fbshipit-source-id: 3b53621bfaba2e36786a5e4664145038995f6616
2017-04-11 19:32:22 -07:00
Aapo Kyrola
8da2d75ec8 [Caffe2/Recurrent] recurrent.py API to cuDNN LSTM
Summary:
Quite a large diff to make the cuDNN LSTM and our LSTM produce the same results, and to provide a Python API for the cuDNN LSTM.

* Added operators RecurrentParamGet and RecurrentParamSet to access the weights and biases for the different gates, input/recurrent.
* Removed RecurrentInit, as it is not needed
* recurrent.cudnn_LSTM() returns a special net and a mapping that can be used to retrieve the parameters from the LSTM
* recurrent.cudnn_LSTM() can be passed blobs that hold the parameters for the individual gate weights and biases
* recurrent.InitFromLSTMParams() can be used to initialize our own LSTM from cuDNN params. This way we can test whether cuDNN and our own implementation produce the same result.

recurrent_test.py tests for equivalence.
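Roughly, the check looks like this (a sketch; exact signatures live in recurrent.py, and model/input blobs are assumed to be set up already):

    # build both LSTMs, then initialize ours from cuDNN's per-gate params
    our_out, _ = recurrent.LSTM(
        model, input_blob, seq_lengths, init_states, dim_in, dim_out, scope="lstm")
    cudnn_out, cudnn_param_map = recurrent.cudnn_LSTM(
        model, input_blob, init_states, dim_in, dim_out, scope="cudnn_lstm")
    recurrent.InitFromLSTMParams(lstm_param_blobs, cudnn_param_map)
    # run both nets and assert the outputs match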

Reviewed By: salexspb

Differential Revision: D4654988

fbshipit-source-id: 6c1547d873cadcf33e03b0e0110248f0a7ab8cb0
2017-04-05 14:20:23 -07:00
Aaron Markham
58f7f2b441 doxygen python block added
Summary: Closes https://github.com/caffe2/caffe2/pull/226

Differential Revision: D4793550

Pulled By: JoelMarcey

fbshipit-source-id: cc33e58186304fa8dcac2ee9115dcc271d785b1e
2017-03-29 06:46:16 -07:00
Aapo Kyrola
91f468b15c fixes to make data parallel model work for RecurrentNet + test case
Summary:
First, this diff includes a full test of data-parallel LSTM, which confirms that it works correctly. To make it work, some changes had to be made (see the sketch after this list):
 - cell net/step net external inputs must be namespace-scoped
 - prevent double-namescoping of cell-net inputs
 - make the data parallel model understand RecurrentNets so that the device mapping works
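For context, the test drives the usual data_parallel_model entry point with an LSTM builder, roughly (a sketch; the builder functions are assumed to be defined, and the exact signature is in data_parallel_model.py):

    from caffe2.python import data_parallel_model

    # build_lstm constructs the LSTM via recurrent.LSTM; the fixes above let
    # the per-GPU device mapping see blobs inside the recurrent step net
    data_parallel_model.Parallelize_GPU(
        model,
        input_builder_fun=add_inputs,
        forward_pass_builder_fun=build_lstm,
        param_update_builder_fun=add_parameter_update,
        devices=range(4),
    )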

Reviewed By: salexspb

Differential Revision: D4708840

fbshipit-source-id: 4b0ddc43642d449076a2b6f67ad1c47f84138ff4
2017-03-14 15:48:07 -07:00
Aapo Kyrola
783e40e806 Fix lengths-remapping again + better errors
Summary: When cloning a recurrent net op, we remap the lengths blobs. But if they don't exist (as with CRF), we should not do that.

Differential Revision: D4702123

fbshipit-source-id: 37a22d11e709011b8b98b2cc3d9f08eb9fda06c4
2017-03-14 11:04:45 -07:00
Andrey Malevich
3e54601bab New approach to metrics.
Summary:
This diff changes the way we specify metrics: from a reporter that has to know in advance all the blobs it should access, to a reporter that is connected through schema.

This diff also reports an arbitrary number of learning curves to Flow and provides a really flexible way to specify all the metrics we care about.

TODO: Modify the model helper to allow providing intermediate results for reporting.
TODO: Add an evaluation net (instead of a prediction net).
TODO: Move all other places in DPER 2.0 to use these abstractions instead.
TODO: Get rid of LogScoreEstimator in favor of a metric that really suits our needs.

Reviewed By: azzolini, dzhulgakov, kittipatv

Differential Revision: D4577548

fbshipit-source-id: 3515bd41e0f92263ff90ce2f7207abf65d01b1f7
2017-03-06 14:48:16 -08:00
Huazhong Ning
f747bbec2e move the dper 1.0 utils to c2 or fb utils
Summary: so that the utils can be used by a wider audience.

Reviewed By: xianjiec

Differential Revision: D4637462

fbshipit-source-id: f0695f430902aef26360efa511069b3755eaf52a
2017-03-06 14:31:45 -08:00
Artem Volkhin
8c4310ac16 minor fix for _add_net_to_dict
Summary: fix a check for whether the net is a net_dict

Reviewed By: kennyhorror

Differential Revision: D4647493

fbshipit-source-id: e0a62fc5847c99c85857c5635b4e39d59c66d5ce
2017-03-02 23:31:27 -08:00
Qichao Que
2f68632a32 Add SparseNN workflow for feed.
Summary: Add SparseNN workflow for feed. I haven't fully thought about the changes needed for ads, as I added a property called 'preproc_output_schema' to LayerModelHelper.

Reviewed By: xianjiec

Differential Revision: D4585796

fbshipit-source-id: 060d08f4beb928e7e7863f2e563f612c358951fb
2017-03-01 11:02:38 -08:00
Zachary Mirman
1c92e85dae Added editDistance helper to caffe2 operators
Summary: Added editDistance helper to caffe2 operators
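The operator computes the classic Levenshtein distance; a plain-Python sketch of the dynamic program (the operator's actual input/output schema is not shown here):

    def edit_distance(a, b):
        # prev[j] holds the distance between the processed prefix of a and b[:j]
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    assert edit_distance("kitten", "sitting") == 3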

Differential Revision: D4622152

fbshipit-source-id: 4d6246b8226c1283d5883edfaa27e8f7748fdc4c
2017-02-28 13:31:56 -08:00
Xianjie Chen
b257fd8e83 Other places that may need NameScope
Summary:
For the code in the layer model helper and layers, it is intentional not to have a NameScope by default.

This looks like another place that may need a default NameScope:
https://fburl.com/wdwtxp0m
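For context, NameScope just prefixes blob names created under it, e.g.:

    from caffe2.python import core

    with core.NameScope("tower_1"):
        w = core.ScopedBlobReference("fc_w")  # resolves to "tower_1/fc_w"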

Reviewed By: kennyhorror

Differential Revision: D4606971

fbshipit-source-id: b560bf59d3242e3f9443cd5aeda5c7e2e4e89079
2017-02-23 21:16:35 -08:00
Yangqing Jia
47b65b6d8d Add a create your own dataset tutorial
Summary:
bwasti - will follow up via email.
Closes https://github.com/caffe2/caffe2/pull/166

Differential Revision: D4596858

Pulled By: Yangqing

fbshipit-source-id: 6d088ccf1604e0dc9b94cbf0a75b51587e734d95
2017-02-22 03:31:47 -08:00
Alisson Gusatti Azzolini
8fa156d082 Improve "reporter net" design
Summary:
Previously we had several limitations for a reporter net:
 - it needed to be a net, not an execution step
 - only one was allowed per execution step, with a single interval

Now, "reporter nets" become reporter steps, and multiple of them can be specified with different timeouts.

Reviewed By: dzhulgakov

Differential Revision: D4583686

fbshipit-source-id: ad7266e16f96e7829fd24dcc1f165f39e9db573d
2017-02-21 20:17:40 -08:00
Xianjie Chen
d0621a2449 NextScopedBlob with well-defined behavior and respect namescope
Summary:
Remove the use of `NextName` in the layer model helper, so that the same function returns a `model_helper` that constructs an identical `Net` when run under the same NameScope.

`NextScopedBlob` should only take effect when there is a real name conflict; otherwise it returns a ScopedBlobReference.

This is critical for parameter blobs. In the long run, we need to be able to specify parameter blobs more explicitly (kennyhorror is working on this). This solution works in the short term, e.g., for two-tower sparse NN models.
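A sketch of the intended behavior (illustrative; the exact helper location and suffix format may differ):

    from caffe2.python import core

    net = core.Net("sparse_nn")
    with core.NameScope("tower"):
        b1 = net.NextScopedBlob("w")  # no conflict: plain "tower/w"
        b2 = net.NextScopedBlob("w")  # conflict: a suffixed name, e.g. "tower/w_auto_0"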

Reviewed By: kennyhorror

Differential Revision: D4555423

fbshipit-source-id: 2c4b99a61392e5d51aa878f7346466a8f14be187
2017-02-16 17:16:36 -08:00
Alisson Gusatti Azzolini
039ac56a68 Better names for nets, steps and tasks
Summary:
- NetBuilder now honors its name
- When Nets are created in the context of a NetBuilder, they take the NetBuilder's name as a prefix
- When a NetBuilder is created in the context of a Task, it takes the Task's name
- pipe() now tries to find a good name based on the name of its processor, output queue, or input queue
- RPC tries to find a name from its handler's name
- Better names in DataStream
- net_printer prints the names of Tasks and Steps
- net_printer optionally factors out common prefixes from blob names

Differential Revision: D4527578

fbshipit-source-id: 5d3d1237c186e9576313c5aa01cc8800a9051217
2017-02-09 16:33:54 -08:00
Yangqing Jia
f2b3f0ab5c remove decode()
Summary: This should not be needed anymore since we use pybind. It will help the Python 3 migration.

Reviewed By: salexspb

Differential Revision: D4535490

fbshipit-source-id: a47615f73b5c35b940d21bb2d5d55060fa0850be
2017-02-09 10:08:13 -08:00
Alisson Gusatti Azzolini
1d3834eeb2 Nodes to support resource requirements and outputs
Summary: See distributed.py for example of usage

Reviewed By: xianjiec

Differential Revision: D4467723

fbshipit-source-id: c74f71bebaa1751098379838d3da55945aac62bd
2017-01-30 11:29:25 -08:00
Ou Jin
ed04a20289 distributed reader for evaluation
Summary:
Using multiple readers for model evaluation. Since it is built on the new framework, only NativeLoader is supported.

With 5 readers, the evaluation speed is 124k. The speed for a single evaluator is 32k. There is still room for improvement, since the evaluator machine is under-utilized.
(Hive is the bottleneck. Adding more loading threads helps improve the speed to 240k. More readers can improve it further.)

Reviewed By: azzolini

Differential Revision: D4469393

fbshipit-source-id: b55af5f798faca4c150b2c0663fe5db0f154cb70
2017-01-27 10:44:24 -08:00
Dmytro Dzhulgakov
aed53dd7cf Pass cmd flags of GlobalInit down to workers in Flow
Summary:
It's a trick similar to dyndeps. The idea is that global state is best simply replicated to gang workers, as otherwise it causes a lot of confusion.

In particular, it's useful if one wants to enable detailed logging (--v).

For other operators the user still needs to call GlobalInit explicitly. We should consider doing it for all Flow operators, but I'll leave that for future consideration.
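For reference, the explicit call mirrors what now happens automatically for gang workers (the flags shown are examples):

    from caffe2.python import workspace

    # replicate the parent's flags, e.g. verbose logging, in a worker process
    workspace.GlobalInit(["caffe2", "--v=2"])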

Reviewed By: kennyhorror

Differential Revision: D4460686

fbshipit-source-id: 5836737dd3195f9ad12589fd899a3ff63f173e05
2017-01-25 11:14:51 -08:00
Ross Girshick
e0c90de6e6 Speedup get_op_ids_in_path
Summary:
Perf bug report: https://www.facebook.com/groups/1405155842844877/permalink/1617904561570003/

Diagnosis:

I've done some digging into this and here's what I've found:
(1) In this use case, the call is disallowed_op_ids = get_op_ids_in_path(ssa, blob_versions, [], inputs), where inputs = ['res4_22_sum'] is the last blob produced by the res4 stage of a ResNet-101 model.
(2) get_op_ids_in_path has running time that is exponential in the number of blocks in the res4 stage of ResNet. Extrapolating from empirical running times, this call would take about 4.5 days to complete on my devgpu.
(3) I haven't familiarized myself enough with the IR and SSA code in core.py to understand the algorithmic fix yet, but surely there's a more efficient algorithm to compute the same thing.

Reviewed By: Yangqing

Differential Revision: D4446278

fbshipit-source-id: 8bd147f92d62b865dc355d5802a53e92d64b6e21
2017-01-23 09:44:26 -08:00
Andrey Malevich
9f0a7935f6 Replace one more place from _net.external_input to _external_input_map
Summary: #accept2ship

Reviewed By: dzhulgakov

Differential Revision: D4435301

fbshipit-source-id: 6b62492c190325e82bc14d5397852106d07d5235
2017-01-19 12:29:30 -08:00
Xianjie Chen
4b3bd06a7f sparse nn converges better by dedupping sparse gradient by mean
Summary:
this normalizes the sparse gradient, so that the "effective learning rate" of each sparse parameter is NOT affected by the number of examples in a batch that "use" that sparse parameter.

Experiments show it helps convergence (about 0.1% better train NE): https://fburl.com/1230747813683956. It's not conclusive yet, and we still need to do more experiments. But this diff adds it as an option and does not change the default behavior, so we can get this in first.
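A numpy sketch of the normalization (average the gradient rows that share an index; names are illustrative):

    import numpy as np

    def dedup_sparse_grad_by_mean(indices, values):
        # rows of `values` updating the same parameter row are averaged, so a
        # parameter's effective step no longer scales with its usage count
        uniq, inverse, counts = np.unique(
            indices, return_inverse=True, return_counts=True)
        summed = np.zeros((len(uniq),) + values.shape[1:], dtype=values.dtype)
        np.add.at(summed, inverse, values)
        return uniq, summed / counts.reshape((-1,) + (1,) * (values.ndim - 1))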

Differential Revision: D4367283

fbshipit-source-id: 49ea80dfa9ea776ff4160e220cf6c86593521607
2016-12-27 22:59:29 -08:00
Huazhong Ning
47bd606f63 Better visualization for gpu training plan
Summary:
The current GPU training plan has many sub-steps with the same name (e.g., "train/epoch"). This messes up the plan visualization. This diff fixes that.

before: https://our.intern.facebook.com/intern/graphviz?paste=56899036
after: https://our.intern.facebook.com/intern/graphviz?paste=56899704

Reviewed By: xianjiec

Differential Revision: D4343739

fbshipit-source-id: 8dbc01b4f3221999c78cb80a22ec8c11abf81172
2016-12-21 09:29:43 -08:00
Yury Zemlyanskiy
c2d28fb874 RNNs API simplification
Summary:
This is a first step in improving our RNN story. It provides a wrapper around the current RecurrentNetworkOp implementation which infers most of the redundant parameters and makes the API much simpler.

Also, in order to support general step nets, I added an extra argument to the RecurrentNetworkOp.

Future work:

1. Infer step-net outputs and internal blob (scratch) sizes and types
2. Avoid accessing blobs by name in the C++ part
3. Remove the requirement of 1:1 correspondence between inputs and outputs in the step net
4. Make the Python API support networks with operators like Sum on the border of the cell net (currently there is an issue with such networks, where gradient blobs on the side are not explicitly created)

Differential Revision: D4268503

fbshipit-source-id: f8a66491c2b55daa730caeed7e9f2b3921541b49
2016-12-21 09:29:43 -08:00
Huazhong Ning
70dcba376c using BlobReference for Sum gradients.
Summary:
We create a Sum operator to sum up the gradients. Currently we use strings for its input/output blobs,
so the code will fail if AddAllGradients() runs within a NameScope.
To avoid this, just use BlobReference instead of string for blobs.
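For context, a string input gets re-prefixed by the active NameScope, while a BlobReference keeps its fully qualified name (sketch):

    from caffe2.python import core

    net = core.Net("grad_sum")
    grad = core.BlobReference("fc_w_grad")  # already fully qualified
    with core.NameScope("model"):
        # a plain string "fc_w_grad" would resolve to "model/fc_w_grad";
        # the BlobReference still points at "fc_w_grad"
        net.Sum([grad, grad], grad)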

Reviewed By: xianjiec

Differential Revision: D4343701

fbshipit-source-id: 2d008916e192d75c6e20f97921331ac4c7b73363
2016-12-18 09:29:22 -08:00
Aapo Kyrola
d38499f727 Optimize BlobIsDefined() + benchmark --> net construction 95 secs to 8.2 secs!
Summary:
I have noticed that constructing the Xray model takes quite a while. To measure this, I wrote a benchmark script that creates a resnet-50 model on 8 gpus. This takes about 95 secs -- which is kind of annoying when you want to quickly debug stuff.

Profiling (using Python's cProfile), I was able to see that most of the time is spent in net.BlobIsDefined(), which does a linear search over external inputs and operator outputs, so it gets slower and slower with large nets. This can be fully optimized by keeping a separate lookup table of operator inputs and outputs (and external inputs and outputs). It is a bit annoying to keep this separate data structure, but I set up the unit tests to ensure things are done correctly over Clones.

After the optimization, the net construction drops from 95 secs to 8.2 secs!
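The gist of the optimization (an illustrative excerpt, not the actual core.Net code):

    class Net(object):
        def __init__(self):
            self._blob_lookup = set()  # kept in sync as ops/inputs are added

        def AddOp(self, op):
            self._blob_lookup.update(str(o) for o in op.output)

        def BlobIsDefined(self, blob):
            # O(1) membership test instead of a linear scan over all ops
            return str(blob) in self._blob_lookup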

Reviewed By: azzolini

Differential Revision: D4288307

fbshipit-source-id: 0bb82c8bde9d86a2702b298f4aa706cba509346e
2016-12-15 12:01:30 -08:00
Dmytro Dzhulgakov
3125e6a821 Hacky fix for cloned model rewriting
Summary:
Disclaimer: this is really hacky

Continues a fix from D4218902. The root problem is that DPER builds the net incrementally and input_record doesn't support that properly. For now I just manipulate the input record directly. Alisson wants to fix it properly later by allowing set_input_record to accept a superset of the current record.

But it should unblock our experimentation.

I'm curious how it's going to look in dper_example world.

Reviewed By: azzolini

Differential Revision: D4255285

fbshipit-source-id: ff65b6f943d705a9b3399035597e2e8ded2e1ff3
2016-12-05 11:53:26 -08:00
Martin Raison
ea9a0f24bf automatic aggregation of sparse gradients
Summary:
This adds support for automatic aggregation of sparse gradients. We simply concatenate indices and values (no attempt to deduplicate, since that is already done before feeding into the optimizer). This should support various cases (indices and/or values can be generated by one or more gradient ops, or gradient outputs can be passed directly from inputs).

I tried to minimize the code footprint, but I introduced SparseGradGenMeta because GradGenMeta didn't lend itself very well to being used with sparse gradients.
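Conceptually the aggregation is just concatenation (numpy sketch; names are illustrative):

    import numpy as np

    def aggregate_sparse_grads(parts):
        # parts: list of (indices, values) pairs from each gradient op
        indices = np.concatenate([p[0] for p in parts])
        values = np.concatenate([p[1] for p in parts])
        return indices, values  # duplicates kept; dedup happens in the optimizer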

Reviewed By: dzhulgakov

Differential Revision: D4219788

fbshipit-source-id: 1d074664cffd82a8764e4b1473ada6bc46e6c51a
2016-12-05 11:53:26 -08:00
Dmytro Dzhulgakov
119b687994 Allow PythonOp to access the workspace
Summary:
DPER has very strange Python ops that play with the Workspace. They are somewhat similar to LoadOp/SaveOp, so I guess the semantics are fine.

Thus it makes sense to allow Python operators to receive a workspace pointer, similarly to regular Operators.

I didn't figure out a better way to implement the optional argument than just checking, on the Python side, the number of args the function receives.
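So a Python op function can be declared either way, and the arity decides whether it receives the workspace (sketch):

    def f(inputs, outputs):
        # plain op: sees only its input/output tensors
        ...

    def f_with_ws(inputs, outputs, workspace):
        # workspace-aware op: can fetch or create blobs directly,
        # similar in spirit to LoadOp/SaveOp
        ...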

Reviewed By: ajtulloch

Differential Revision: D4242943

fbshipit-source-id: d97d4227815b741c8f884cfe254b06d2b56b5a41
2016-12-05 11:53:26 -08:00
Martin Raison
da72658fa8 sparsehash-based implementation of UniqueOp
Summary:
A faster implementation of UniqueOp using google::dense_hash_map, as suggested by dzhulgakov. I haven't benchmarked it precisely, but early measurements with my workflow show a significant speed bump (this operation went from using 20% of overall CPU time down to 7%).

I gated the implementation behind the "engine" feature, to avoid adding sparsehash as a dependency of caffe2.
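Opting into the faster path goes through the operator's engine field; a sketch (the engine name below is a placeholder, not necessarily the one registered by this diff):

    from caffe2.python import core

    op = core.CreateOperator(
        "Unique", ["ids"], ["unique_ids"],
        engine="SPARSE_HASH",  # placeholder engine name for the dense_hash_map path
    )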

Reviewed By: dzhulgakov

Differential Revision: D4219768

fbshipit-source-id: 2f142981e772105b42fffa24afb199ef816f8e0c
2016-11-29 15:18:39 -08:00
Yangqing Jia
238ceab825 fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00
Yangqing Jia
d1e9215184 fbsync 2016-10-07 13:08:53 -07:00
Yangqing Jia
0a09d09431 fbsync 2016-09-08 17:56:14 -07:00
Yangqing Jia
b23e51d467 chunky sync 2016-09-06 15:55:19 -07:00
Yangqing Jia
05512d1e10 sync 2016-08-10 11:02:15 -07:00
Yangqing Jia
c15e45c9bb chunky sync again 2016-08-01 20:58:46 -07:00
Yangqing Jia
bcea409c82 sync 2016-07-28 15:06:43 -07:00
Yangqing Jia
6463eebc7b chunky sync - build scripts to be written 2016-07-21 10:16:42 -07:00
Yangqing Jia
559053d3a8 chunky sync 2016-05-13 14:43:48 -07:00
Yangqing Jia
cf7ca23fc1 make caffe2.python build 2016-03-08 16:48:19 -08:00
Yangqing Jia
9ae880bb6f move pycaffe2 to caffe2.python 2016-03-08 15:45:30 -08:00