Commit Graph

106 Commits

Author SHA1 Message Date
Scott Yost
a7a81351f2 Revert D6035393: [caffe2] expose observers to python, add multiple observers per observable
Summary:
This reverts commit 4563cf0203095fa979bb2160621cd16dd22ff830

bypass-lint

Differential Revision: D6035393

fbshipit-source-id: 090fba774ce433904f7ef769dda75c2fbbf784a8
2017-10-14 21:47:34 -07:00
Bram Wasti
58fe66e337 expose observers to python, add multiple observers per observable
Summary: observer framework can now be used in python + a small writeup of how to use it

Reviewed By: sf-wind

Differential Revision: D6035393

fbshipit-source-id: 4563cf0203095fa979bb2160621cd16dd22ff830
2017-10-14 13:09:29 -07:00
Yangqing Jia
b1508e8e86 Revert D5905002: [caffe2] expose observers to python
Summary:
This reverts commit e40ec24a55e08fb73beea9b4f3b68e71fc66ffb1

bypass-lint

Differential Revision: D5905002

fbshipit-source-id: 4f1b79d9a318978f6b74565f633f34b9701a9d5c
2017-10-10 22:12:00 -07:00
Andrey Malevich
e13f199452 Switch RNNOp to use NetDef argument for step representation.
Summary: Before this diff, RNNOp used TextFormat to represent steps. This diff changes RNNOp to prefer a NetDef argument instead. For backward compatibility it still supports TextFormat for existing models, though we can now compile RNNs without TextFormat as well.

Reviewed By: salexspb

Differential Revision: D5949330

fbshipit-source-id: 9336a8f5ccf30ad8d8e3a7067b9437e1704b1c9f
2017-10-10 22:01:51 -07:00
Bram Wasti
63caca89db expose observers to python
Summary: observer framework can now be used in python + a small writeup of how to use it

Reviewed By: salexspb

Differential Revision: D5905002

fbshipit-source-id: e40ec24a55e08fb73beea9b4f3b68e71fc66ffb1
2017-10-10 16:10:41 -07:00
Hassan Eslami
de43326cfc Identify components after sparse layers' tagging
Summary: Given a pair (init_net, train_net) where ops in sparse layers are tagged, this diff detects the components and renames the `node_name` (e.g. tag) to reflect the component name.
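
A minimal sketch of the renaming step (the helper and the blob-to-component map are hypothetical; only the `node_name` tag is from this summary):

```
from caffe2.python import core

def rename_components_by_tag(net, component_of):
    # component_of: blob name -> component id, as computed by the
    # component-detection pass (assumed precomputed here).
    for op in net.Proto().op:
        blobs = set(op.input) | set(op.output)
        comps = {component_of[b] for b in blobs if b in component_of}
        if len(comps) == 1:
            # Rewrite the tag so it reflects the component name.
            op.device_option.node_name = "component_%d" % comps.pop()
```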

Reviewed By: azzolini

Differential Revision: D5948222

fbshipit-source-id: aeda9cfc88bb64922bf7a9942b969e3c5066718a
2017-10-04 21:03:47 -07:00
Junjie Bai
91bb6ce095 Allow explicitly specifying to use operators' default implementation
Reviewed By: dzhulgakov

Differential Revision: D5973635

fbshipit-source-id: 12dccc6332a8dd264ccc9f831a053a3be9b89c56
2017-10-04 12:17:36 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Aapo Kyrola
b9009df222 Add mask device, fix test
Reviewed By: azzolini

Differential Revision: D5930258

fbshipit-source-id: 16fdc2aeba7d95e815e55ca495118a5129495bb0
2017-09-28 12:33:01 -07:00
Alisson Gusatti Azzolini
e3609a0619 Correctly propagate remap_blob across net boundaries
Summary: If a blob is copied from device A to device B in the init_net and then used as an external_input in the train_net, we want the train_net to correctly use the blob already on device B instead of copying it over and over again.
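
A minimal sketch of the scenario, assuming the injection entry point is core.InjectDeviceCopiesAmongNets and that it returns the rewritten nets plus a blob-to-device map:

```
from caffe2.python import core
from caffe2.proto import caffe2_pb2

init_net = core.Net("init")
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU)):
    w = init_net.ConstantFill([], "w", shape=[4], value=1.0)

train_net = core.Net("train")
train_net.AddExternalInput(w)
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)):
    train_net.Relu([w], "y")

# The copy of w onto CUDA made while processing init_net should be
# remembered (via remap_blob), so train_net reuses the CUDA-side copy
# instead of inserting a fresh Copy op for every use.
nets, blob_to_device = core.InjectDeviceCopiesAmongNets([init_net, train_net])
```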

Reviewed By: akyrola

Differential Revision: D5800870

fbshipit-source-id: d93f44bba80e4ed70eb03183d552496b54a966b5
2017-09-24 21:21:57 -07:00
Ilia Cherniavskii
f8f5e79f5f Backpropagation for If operator
Summary:
Adding backward pass support for the If operator:
 - Implemented the necessary changes to the Do operator and to gradient Do operator generation, to properly forward gradient blobs in and out of the subnet
 - Using a WorkspaceManager to keep track of the workspaces used by Do, in case we need access to local blobs to compute gradients (also important for loop backprop)
 - Updated Workspace to handle blob binding from multiple parent workspaces
 - Implemented generation of the gradient If operator
 - Added a unit test that builds and trains a net with an If control op

Reviewed By: azzolini

Differential Revision: D5745096

fbshipit-source-id: 1023c90a2113716254424d1e50b9e560fe9083e5
2017-09-18 16:17:42 -07:00
Jongsoo Park
e9581e47a2 fix comment on core.Net.RunAllOnMKL
Summary: Fix comment on core.Net.RunAllOnMKL (the comment was actually for core.Net.RunAllOnGPU)

Reviewed By: zem7

Differential Revision: D5734309

fbshipit-source-id: 2cc40a99a2c0083c73ec1e4c8279f55f296a003c
2017-09-13 19:32:18 -07:00
Luke Yeager
f775149205 tests: use assertRaises, not expectedFail
Summary:
I would expect that tests marked "expected failure" mean that there is a known issue in the code which will be fixed later. Both of these tests are simply verifying proper error-checking - nothing needs fixing.

Before (looks like something is wrong):
```
======================================= 2 xfailed in 0.27 seconds =======================================
```
After:
```
======================================= 2 passed in 0.28 seconds ========================================
```
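
For illustration, the pattern the tests move to (a generic unittest sketch, not the actual tests from this PR):

```
import unittest

class ErrorCheckingTest(unittest.TestCase):
    def test_invalid_input_raises(self):
        # The raised error is the expected, correct behavior, so assert
        # it explicitly rather than marking the test expectedFailure.
        with self.assertRaises(ValueError):
            int("not a number")

if __name__ == "__main__":
    unittest.main()
```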
/cc akyrola gsethi523
Closes https://github.com/caffe2/caffe2/pull/1209

Differential Revision: D5825373

Pulled By: akyrola

fbshipit-source-id: 1b98f503e4e406f69567d02425532f43bd16a465
2017-09-13 11:39:35 -07:00
Andrey Malevich
03711e9ab8 Handle bools correctly in net.Const
Summary: As desc.

Reviewed By: volkhin

Differential Revision: D5745310

fbshipit-source-id: 66c3da37a42cf98bae05cead58f3f694eae19e0d
2017-08-31 12:02:58 -07:00
Jiyan Yang
33ef5f38a0 Fixed cuda loss op
Summary:
Currently the loss ops are still not on GPU even though the ALL strategy is selected.
This diff enables that.

Reviewed By: xianjiec

Differential Revision: D5671255

fbshipit-source-id: 033863f171e1f89c8d75430d3af6a1e6d0d2eff2
2017-08-30 17:02:23 -07:00
Ilia Cherniavskii
a0204331a8 Control flow operators
Summary:
This diff adds control flow operators to Caffe2 (starting with If and While):
 - Added an If operator that executes a then/else subnet
 - The branch subnet is executed in a separate, isolated workspace, with some of the blobs transparently forwarded from the outer workspace
 - Added a new NetBuilder subclass to construct nets using the new operator
 - NetBuilder also keeps track of outer blob names and automatically sets up blob bindings between the outer and inner workspace, implementing a generic convention for handling local/global variables in blocks
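
A minimal sketch of the intended usage, assuming the new NetBuilder mode is enabled via a flag like `_use_control_ops` and exposes `ops.IfNet` / `ops.ElseNet` (names inferred from this summary, so treat them as assumptions):

```
from caffe2.python.net_builder import NetBuilder, ops

with NetBuilder(_use_control_ops=True) as nb:
    x = ops.Const(2.0)
    one = ops.Const(1.0)
    y = ops.Const(0.0)
    # Branch bodies run in an isolated child workspace; the builder
    # binds the outer blobs (x, y) into it automatically.
    with ops.IfNet(ops.GT([x, one])):
        ops.Mul([x, x], y)
    with ops.ElseNet():
        ops.Copy(x, y)
```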

Reviewed By: volkhin

Differential Revision: D5720644

fbshipit-source-id: a674cde0c789f6a6ffdcd9d80159d1e42e49133f
2017-08-28 20:04:43 -07:00
Artem Volkhin
d3c8e68004 Revert D5641588: [caffe2] Control flow operators
Summary:
This reverts commit f9e04429961c3da7da4ebca3e8163bfcc2a09ec9

bypass-lint

Differential Revision: D5641588

fbshipit-source-id: bb23b213d08e9c3ea509216fce9367625943d007
2017-08-26 00:07:58 -07:00
Lei Chen
432cba6c05 Set up run_every_ms when constructing ExecutionStep
Summary: same as title.

Differential Revision: D5709274

fbshipit-source-id: f88b1325f3e6b948b836cc90f4d9c38a27be28ab
2017-08-25 15:58:29 -07:00
Alisson Gusatti Azzolini
ae0c4c8e66 Respect inplace blobs in InjectCrossDeviceCopies
Summary:
Before this diff, we were not respecting in-place blobs. E.g. if we had:

  with DeviceOption(CPU):
      blob = net.MyOpA([])
  with DeviceOption(CUDA):
      net.MyOpB([blob], [blob])

then after InjectCrossDeviceCopies we would have:

  blob = net.MyOpA([], device=CPU)
  blob_cuda0 = net.Copy([blob], [blob_cuda0], device=CUDA)
  net.MyOpB([blob_cuda0], [blob], device=CUDA)

i.e. MyOpB is no longer in-place. After this diff, the in-place blob is kept.
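
A runnable sketch of the same scenario with real ops (ConstantFill/Relu standing in for the hypothetical MyOpA/MyOpB; the tuple returned by core.InjectCrossDeviceCopies is assumed to be the rewritten net plus a blob-to-device map):

```
from caffe2.python import core
from caffe2.proto import caffe2_pb2

net = core.Net("inplace_example")
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU)):
    blob = net.ConstantFill([], "blob", shape=[4], value=1.0)
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)):
    net.Relu([blob], [blob])  # in-place op on CUDA

new_net, blob_to_device = core.InjectCrossDeviceCopies(net)
# Relu should now read and write the same CUDA-side blob instead of
# writing its output back into the CPU blob's name.
print(new_net.Proto())
```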

Reviewed By: harouwu

Differential Revision: D5671867

fbshipit-source-id: 6ad68c612dae19d7e1f45f4988d929644100b4d5
2017-08-25 14:57:58 -07:00
Ilia Cherniavskii
86cc7ace93 Control flow operators
Summary:
This diff adds control flow operators to Caffe2 (starting with If and While):
 - Added an If operator that executes a then/else subnet
 - The branch subnet is executed in a separate, isolated workspace, with some of
   the blobs transparently forwarded from the outer workspace
 - Added a new NetBuilder subclass to construct nets using the new operator
 - NetBuilder also keeps track of outer blob names and automatically sets up
   blob bindings between the outer and inner workspace, implementing a generic
   convention for handling local/global variables in blocks

Reviewed By: azzolini

Differential Revision: D5641588

fbshipit-source-id: f9e04429961c3da7da4ebca3e8163bfcc2a09ec9
2017-08-25 12:31:14 -07:00
Lei Chen
14950a9082 Support session in distributed realtime trainer
Summary:
Convert a PlanDef protobuf into a Python Plan object by recursively creating
Nets and ExecutionSteps.

Also support running a Plan object directly in a Session.

Reviewed By: azzolini

Differential Revision: D5608393

fbshipit-source-id: c0ae3b6da743a759af6db3b614a5a3935fe0b34c
2017-08-16 10:28:55 -07:00
Junjie Bai
1ce95090ca Add support for specifying engine preferences
Reviewed By: Yangqing

Differential Revision: D5460994

fbshipit-source-id: 08a8af699eebec37defc070389a8415b3e81ac16
2017-08-09 00:47:18 -07:00
Thomas Dudziak
676bedd298 Fixes for Python 3 in caffe2/caffe2/fb/data
Summary: As title

Reviewed By: MisterTea

Differential Revision: D5532387

fbshipit-source-id: 0a51ca40b93cc2eb5371f0b86f2800354cd1939c
2017-08-01 15:22:55 -07:00
Szymon Piechowicz
3324db447f Caffe2: allow nets that don't use all input in net.ClonePartial
Summary: Caffe2: allow nets that don't use all input in net.ClonePartial

Differential Revision: D5535564

fbshipit-source-id: 0ec8fb3ade4d7d6cd4a702c9c265d9c77f27a627
2017-08-01 10:05:46 -07:00
Yiming Wu
b51e0ec0c2 quick fix inplace blob bug
Summary: fixing the case where the init net initializes the same blob twice. I made an exception by allowing in-place blobs among ops as long as the blob stays on the same device. This should fix the problem in a generalized way, as most of our training is CPU-only for now.

Reviewed By: dzhulgakov

Differential Revision: D5450564

fbshipit-source-id: 525c4c9a2e5216a70dbd1229da2d9f8a58b89e47
2017-07-23 02:18:16 -07:00
Yiming Wu
4a256dfc97 save/load/run nets and params with device info correctly
Summary:
===Update log 7/10===

We are currently blocked by a connection problem. Will post an update if it is not fixed in 2 hours.

===Update 7/6===

Luke is experimenting on the convergence of this diff. Hopefully he can present results next week.

Right now this is not affecting our original CPU training pipeline, because the loading op is still correct in the CPU case.

I will need a final test to make sure, but that is currently blocked by the log device issue t19952135.

Saving CPU/GPU nets will be handled in a separate diff.

====Update before 7.4====
It's actually working! Local run screenshot included:
{F67959016}

dogscience

Reviewed By: dzhulgakov

Differential Revision: D5307058

fbshipit-source-id: cad5d9324c239419530f4b120392ec2ccbb72280
2017-07-23 02:18:15 -07:00
Alisson Gusatti Azzolini
8e80ef7e6d s/CopyGPUToGPU/Copy
Summary: CopyGPUToGPU does not exist; Copy seems to do the trick. Didn't go into the details of how Copy works, so not sure whether it ends up triggering UVA.

Reviewed By: akyrola

Differential Revision: D5471014

fbshipit-source-id: d8bc1aed9b19070c92f3ffc76f5617bdd0054563
2017-07-21 13:51:11 -07:00
Aapo Kyrola
cbb85545ec warn about orphan StopGradient output
Summary: A quite common confusion is how to use StopGradient, and a typical bug is forgetting to pass the same blob as both input and output. This adds a sanity check to the gradient builder that checks whether any StopGradient outputs are orphaned.
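
For reference, a minimal sketch of the correct in-place usage (blob names are illustrative):

```
from caffe2.python import core

net = core.Net("stop_gradient_example")
x = net.ConstantFill([], "x", shape=[1], value=1.0)
# Correct: input and output are the same blob, so downstream ops
# consume the gradient-stopped version of x.
x = net.StopGradient([x], [x])
# The bug this check now flags: net.StopGradient([x], ["x_stopped"])
# with x_stopped never consumed -- the output is orphaned and the
# gradient into x is not actually stopped.
```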

Reviewed By: dzhulgakov

Differential Revision: D5458341

fbshipit-source-id: 056fef4f0ee53eb10e66e9be0ecb55b55f9cc3d7
2017-07-20 21:41:41 -07:00
Tao Wu
78c4c4f885 handle RecurrentNetwork operator when clone net
Summary: Added support for passing remap_funcs to clone_and_bind_net so that it can forward them to the clone method. Also added other utils to ensure the RecurrentNetwork operator is correctly cloned based on the remap_blob. The reason the RecurrentNetwork operator needs special treatment is that its arguments contain protos and blobs.

Reviewed By: kittipatv

Differential Revision: D5421532

fbshipit-source-id: 5de68365ce97df2de483f02ad260d78c8d35eead
2017-07-17 17:33:21 -07:00
Dmytro Dzhulgakov
b6c1c0ac4e Fix communication_schema decoding
Summary: Allows overriding the input/output record as long as the field blobs are the same.

Reviewed By: yangyangyyy

Differential Revision: D5362132

fbshipit-source-id: 3ac2ac22802902b7eed5c226b00a7e1971ad264c
2017-07-02 13:04:20 -07:00
Aapo Kyrola
ab0fe0a5f4 add debug information when there is blob version mismatch
Summary:
It is a quite common situation that users get some variant of "blob has version 2 but gradient expects version 1" in their backward pass, and the error message is completely unhelpful.
To remedy this, I added proper debug information that tells the user how the version number of a blob was incremented over time, i.e. which ops caused the version to go up. This should help
in understanding the issue.
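
A sketch of the kind of net that triggers the message (illustrative only; the version numbers are the internal counters the gradient builder keeps per blob name):

```
from caffe2.python import core

net = core.Net("version_example")
x = net.ConstantFill([], "x", shape=[2], value=1.0)  # x @ version 0
x = net.Relu([x], [x])                               # x @ version 1
loss = net.AveragedLoss([x], "loss")                 # reads x @ version 1
x = net.Scale([x], [x], scale=2.0)                   # x @ version 2
# Backward generation finds that loss's gradient needs x @ version 1,
# but the blob ends the net at version 2 -> the error above. The new
# debug output lists the ops (Relu, Scale) that bumped x's version.
net.AddGradientOperators([loss])
```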

Reviewed By: dzhulgakov

Differential Revision: D5358227

fbshipit-source-id: bc09d048ac33200c35d56460e44e86c2f2888f3f
2017-06-30 16:22:46 -07:00
Thomas Dudziak
5355634dac Dict fixes/improvements and unittest targets for Python 3 in caffe2 core
Summary: As title

Reviewed By: salexspb

Differential Revision: D5316104

fbshipit-source-id: aee43819d817842e5ce6ba3d045a55b1a2491c30
2017-06-29 17:05:41 -07:00
Yiming Wu
1fce3eac4e single trainer hybrid device
Summary:
First try of single trainer hybrid device training for sparsenn

Comparison results with CPU training:
https://our.intern.facebook.com/intern/fblearner/run/compare/?compare_to[0]=20016969&compare_to[1]=19660293&baseline_run=19660293&all_runs[0]=20016969&all_runs[1]=19660293

Reviewed By: dzhulgakov

Differential Revision: D5205723

fbshipit-source-id: 4a024324ac2efc3248dd470d4c533cf2ecec2e92
2017-06-27 22:06:30 -07:00
Alexander Sidorov
c8410859d9 Operator python stacktraces, attempt 2
Summary:
Last time I used a uuid filled into OperatorDef, and operator_tracebacks was populated using traceback.extract_stack. There were several issues with this approach:

1. A random field in OperatorDef breaks workflows relying on memoization, i.e. when computation is skipped based on an already computed result.
2. Adding one more field revealed that RNNs are not forward compatible wrt new fields in there. The prototxt format seems to not allow forward compatibility (thanks jamesr66a for the investigation!). For RNNs we need to switch to a more resilient approach. azzolini's proposed change to OperatorDef / NetDef would allow that by nesting NetDef directly inside OperatorDef without the need for extra serialization.
3. traceback.extract_stack is very slow when the executable is on a remote filesystem. It does one or more os.stat calls for each frame on the stack. In some cases this added up to 15 extra minutes of model construction time.

In this diff I use a different approach, which should fix all of the problems above.

1. and 2. are solved by not adding a new field at all. Instead I record the operator index wrt the net it runs in. Thanks akyrola and dzhulgakov for the idea. The downside is that operator list manipulation breaks the logic, and separately created ops are not covered at all.
3. is solved by operating on raw frames, without using the traceback and inspect modules, which end up doing a lot of file system calls. See the function extract_stacktrace in core.py with additional comments.
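
A minimal sketch of the raw-frame idea (the reference implementation is the function in core.py; this just illustrates why it avoids filesystem access):

```
import sys

def extract_stacktrace():
    # Walk the raw frame objects directly. traceback/inspect resolve
    # source lines, which means os.stat() per frame -- slow on remote
    # filesystems. Raw frames touch no files at all.
    result = []
    frame = sys._getframe(1)  # skip this helper's own frame
    while frame is not None:
        code = frame.f_code
        result.append((code.co_filename, frame.f_lineno, code.co_name))
        frame = frame.f_back
    return result
```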

Reviewed By: dzhulgakov

Differential Revision: D5286285

fbshipit-source-id: 626dd0f5f6b8b1d86bd6bf519078b122f43ddcaa
2017-06-25 19:32:58 -07:00
Thomas Dudziak
342de07231 Core unit test fixes for Python 3
Summary: As title

Differential Revision: D5291327

fbshipit-source-id: 7dd9279c53ba55d3422c31973ffcec5705787fdf
2017-06-23 13:22:16 -07:00
Alisson Gusatti Azzolini
7d482742fd Allow tasks/execution_steps to be cloned at runtime
Summary:
Advantages of cloning the tasks/execution_steps at runtime:
- Less complexity on the python side: no need to clone nets and add prefixes to blob names
- Faster start-up: we had cases of complex plans that took up to 30min to be created.
- Better isolation: each task cloned at runtime has its own child workspace, preventing false sharing of blobs.
- Opens up possibility for dynamic scheduling: Number of threads per task can be increased on the fly, at runtime.

Reviewed By: dzhulgakov

Differential Revision: D5100730

fbshipit-source-id: 71b83193b135da4e6eaf2536d8fc266528e1fdcc
2017-06-20 22:32:07 -07:00
Alexander Sidorov
83e6a0bec8 Revert uuid change to OperatorDef protobuf
Summary:
a few issues:

1. Randomization hurts memoization.
2. Even if we make it non-random, we can get key collisions when loading it back.
3. RNNs use prototxt for the step net, and apparently it's not forward compatible the way normal protobuf is.

I am thinking of a better, less invasive solution now.

Reviewed By: jamesr66a

Differential Revision: D5272118

fbshipit-source-id: ab577fad04fbfc632e1fceffa923377a0d3da1be
2017-06-19 16:47:31 -07:00
Dmytro Dzhulgakov
12094b5114 Add random shuffle through the data to the benchmark workflow
Reviewed By: kdub0

Differential Revision: D5171727

fbshipit-source-id: 1d9182bb820224b479682fc0ca5014f909ba19d5
2017-06-16 13:22:46 -07:00
Aapo Kyrola
7ffd76db51 check operator schema before calling gradient creator
Summary: Hard-to-debug problems arise when a gradient creator fails because the forward op itself is incorrect. Add checking of the schema before calling the creator. Also clarify the error messages.

Reviewed By: Yangqing

Differential Revision: D5256016

fbshipit-source-id: 78550f7e2ce5b88e26b69fdae4be0eece52edfea
2017-06-15 13:04:58 -07:00
Alisson Gusatti Azzolini
d03ffb211c Remove WORKER_INIT_CALLS
Summary: This was only needed in order to initialize stateful PythonOps. Now PythonOp has support for initialization at Op creation time, so this is not used anymore.

Reviewed By: dzhulgakov

Differential Revision: D5242908

fbshipit-source-id: dbaa249466dd0f37f25d204d387b1f99c6dd4fed
2017-06-13 20:18:48 -07:00
Alexander Sidorov
eebda50b79 Operator python traceback
Summary: This is going to show a Caffe2 Python user where a failed operator was created. The motivation for keeping this information out of the protobuf is to avoid making it too verbose, and to keep the ability to read a net's protobuf after a simple print() call.

Reviewed By: jamesr66a

Differential Revision: D5226047

fbshipit-source-id: 7edfe850e05a2ec209577142aa3368664a57a108
2017-06-13 18:50:02 -07:00
Alisson Gusatti Azzolini
d3ec6e8f55 Run python op builder at op creation time
Summary:
This allows constructing a Python op by passing a pickled "builder function call" as an argument to the op.
The builder function is called at PythonOp construction time and returns the function that will be called when the op is run.

This way we can drop the dependency on 'tokens', which didn't work properly for protobufs that get distributed to other processes. Now the PythonOp definition is self-contained: as long as the build dependencies are right, shipping the protobuf is enough to execute the net remotely.
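
A conceptual sketch of the builder pattern (the function name and the way the pickled call is attached to the op are hypothetical illustrations, not this diff's actual wiring):

```
import pickle

def scale_builder(scale):
    # Runs once, at PythonOp construction time -- possibly in a remote
    # process, since only the pickled call travels with the proto.
    state = {"scale": scale}

    def run(inputs, outputs):
        # Runs on every op execution, with the prebuilt state in scope.
        outputs[0].feed(inputs[0].data * state["scale"])

    return run

# A (function, args, kwargs) triple like this is what gets pickled into
# the operator argument; the op unpickles it and calls scale_builder(2.0)
# exactly once when the op is constructed.
builder_call = pickle.dumps((scale_builder, (2.0,), {}))
```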

Reviewed By: dzhulgakov

Differential Revision: D5080833

fbshipit-source-id: a5deaca5d3143024cdb121519689224e9dbec5ce
2017-06-13 16:29:22 -07:00
Thomas Dudziak
b877d4b5f8 Misc fixes for Python 3
Summary: As title

Differential Revision: D5216942

fbshipit-source-id: def5563f1b259efefab3a829d8a78d8d3297ffc7
2017-06-13 12:18:43 -07:00
Alexander Sidorov
7f1385e70c Improve gradient accumulation of the framework: 1.5x - 2x
Summary:
We waste extra memory by creating two autosplit gradient
blobs and then accumulating them into the main one. Sometimes, when Sum
/ Sub ops are involved, we can avoid wasting the extra memory entirely.

Ideally we would not waste any memory and would make ops add into the same
blob rather than calculating separate results and then merging
them. But that would require a substantial change to the framework and
rewriting a lot of operators.
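
For context, a blob consumed by two ops is where autosplit gradients appear; a minimal sketch (the `_autosplit` blob names in the comment are assumed from the naming convention, not quoted from this diff):

```
from caffe2.python import core

net = core.Net("autosplit_example")
x = net.ConstantFill([], "x", shape=[2], value=1.0)
a = net.Relu([x], "a")
b = net.Relu([x], "b")
s = net.Add([a, b], "s")
loss = net.AveragedLoss([s], "loss")
net.AddGradientOperators([loss])
# x receives gradient from both Relu ops; before this diff that meant
# two temporary blobs (e.g. x_grad_autosplit_0/1) plus a Sum into
# x_grad. The improvement folds the accumulation to skip the temporaries.
print(net.Proto())
```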

Reviewed By: dzhulgakov

Differential Revision: D5157667

fbshipit-source-id: 8293824d6cdd971d8853ae90aee68e4a6d1e132b
2017-06-11 22:02:30 -07:00
Dmytro Dzhulgakov
638fe804dc Implement recover_input_schema_by_prefix
Summary:
This is very useful for simple cases like benchmarking nets, where we want to encode the input/output record in the net itself and don't want to go through the hurdles of storing the input/output record in a MetaNetDef.

For those cases I propose remapping the input/output record before saving to 'input_record/{field_name}'. Then we can recover the input/output record back just from the names of the blobs.
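
A hand-rolled sketch of the recovery idea (an illustration of what recover_input_schema_by_prefix presumably does, not the actual helper):

```
from caffe2.python import core, schema

def recover_record_by_prefix(blob_names, prefix="input_record/"):
    # Rebuild a flat record from blobs named '<prefix><field_name>'.
    fields = [
        (name[len(prefix):], schema.Scalar(blob=core.BlobReference(name)))
        for name in blob_names
        if name.startswith(prefix)
    ]
    return schema.Struct(*fields)

rec = recover_record_by_prefix(["input_record/dense", "input_record/label"])
```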

Differential Revision: D5170473

fbshipit-source-id: ac5daa60051605ed93022aec1377a49f08f15663
2017-06-11 15:37:12 -07:00
Alexander Sidorov
df72826ead Static RNN
Summary:
Static RNN allows unrolling an RNN into the Caffe2 graph using all the existing cell abstractions. In this diff I introduce several new tests that already caught a few bugs in our RecurrentNetworkOp gradient accumulation logic, by comparing it to an unrolled version.

Another use case is perf: potentially we can run an unrolled net faster because DAGNet will have access to the whole graph. The same applies to memonger. But that work is not part of this diff.

Reviewed By: akyrola

Differential Revision: D5200943

fbshipit-source-id: 20f16fc1b2ca500d06ccc60c4cec6e81839149dc
2017-06-08 17:48:48 -07:00
Alexander Sidorov
264f75fdd0 ZeroGradient op
Summary:
When building a multi-layer static RNN, the last timestep of
the first layer (and of the other layers except the last one) doesn't get a
gradient for the cell state, because normally the user consumes results only
from the last layer, and the cell state doesn't flow upward either.

ZeroGradient provides a general solution for injecting 0-gradient
blobs. It is in some ways similar to the StopGradient operator, which is
also special-cased.
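
A minimal sketch of the intended use, assuming ZeroGradient takes the blob as its only input and produces no outputs (signature assumed from this summary):

```
from caffe2.python import core

net = core.Net("zero_gradient_example")
# Cell state of a non-final layer's last timestep: nothing downstream
# consumes it, so it would otherwise receive no gradient at all.
cell_state = net.ConstantFill([], "cell_state", shape=[4], value=0.0)
# Inject an all-zero gradient blob for it so gradient accumulation in
# the layer still sees a contribution for this blob.
net.ZeroGradient([cell_state], [])
```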

Reviewed By: bwasti

Differential Revision: D5198375

fbshipit-source-id: a21d0cfb3676a77fac72e5897a200d0bd25fc6de
2017-06-08 16:02:38 -07:00
Yiming Wu
4fefff0bbb Auto injecting device copy for single net and several nets
Summary:
This diff attacks the problem where we want to just annotate the device option on operators and let Caffe2 inject the cross-device copy functions for us. This feature is useful for mixed-device training and for multi-device training with several nets, where previously we did the heavy lifting of adding the copy functions ourselves.

Ideally, this feature will be used like this:

      # construct your nets first
      core.InjectDeviceCopyAmongNets([train_init, train_net, ...])

My ideas are written in the comments. I will update them here as well later.

Reviewed By: dzhulgakov

Differential Revision: D5134103

fbshipit-source-id: 173f7da9d1773d1c50ccdc27f1b5cd3067b04af5
2017-06-07 20:03:18 -07:00
Thomas Dudziak
60c78d6160 Fixes range/xrange for Python 3
Summary: As title

Differential Revision: D5151894

fbshipit-source-id: 7badce5d3122e8f2526a7170fbdcf0d0b66e2638
2017-06-07 00:04:26 -07:00
Yiming Wu
8cd208ad6f Infer input and output device from OperatorDef through OperatorSchema
Summary: Infer input and output devices from an OperatorDef through the OperatorSchema. This is inspired by shape inference. With this feature, we can easily analyze device information for all blobs in a net in a generic way. It is really helpful for automatic cross-device execution.
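
A sketch of what this enables, assuming the Python entry point is core.InferOpBlobDevices returning per-blob device options (the exact name is an assumption):

```
from caffe2.python import core
from caffe2.proto import caffe2_pb2

op = core.CreateOperator(
    "CopyCPUToGPU", ["x_cpu"], ["x_gpu"],
    device_option=core.DeviceOption(caffe2_pb2.CUDA, 0),
)
# The schema reports per-blob devices: the input stays on CPU even
# though the op itself runs with a CUDA device option.
input_devices, output_devices = core.InferOpBlobDevices(op)
```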

Reviewed By: akyrola, dzhulgakov

Differential Revision: D5161065

fbshipit-source-id: ee656123112171a4ca00f2fb3f6940f32ddf3135
2017-06-05 23:47:33 -07:00