Summary: Before this diff, RNNOp used TextFormat to represent its step nets. This diff changes RNNOp to prefer a NetDef argument instead. For backward compatibility it still supports TextFormat for existing models, though RNNs can now also be compiled without TextFormat.
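As a hedged illustration of the two representations (not taken from the diff itself; it assumes the Argument proto carries an optional NetDef field `n`, and the op/argument names are only illustrative), a step net could be attached to an operator either way:
```
from caffe2.proto import caffe2_pb2
from google.protobuf import text_format

# Build a (trivial) step net for the recurrent op.
step_net = caffe2_pb2.NetDef()
step_net.name = "rnn_step"

op = caffe2_pb2.OperatorDef()
op.type = "RecurrentNetwork"

# Old style: the step net serialized with TextFormat into a string argument.
arg_txt = op.arg.add()
arg_txt.name = "step_net"
arg_txt.s = text_format.MessageToString(step_net).encode("utf-8")

# New style: the step net embedded directly as a NetDef argument,
# so no TextFormat round-trip is needed.
arg_net = op.arg.add()
arg_net.name = "step_net"
arg_net.n.CopyFrom(step_net)
```
Embedding the NetDef directly avoids the forward-compatibility problems of the TextFormat string that are noted in later diffs in this log.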
Reviewed By: salexspb
Differential Revision: D5949330
fbshipit-source-id: 9336a8f5ccf30ad8d8e3a7067b9437e1704b1c9f
Summary: observer framework can now be used in python + a small writeup of how to use it
Reviewed By: salexspb
Differential Revision: D5905002
fbshipit-source-id: e40ec24a55e08fb73beea9b4f3b68e71fc66ffb1
Summary: Given a pair (init_net, train_net) where ops in sparse layers are tagged, this diff detects the components and renames the `node_name` (e.g. the tag) to reflect the component name.
Reviewed By: azzolini
Differential Revision: D5948222
fbshipit-source-id: aeda9cfc88bb64922bf7a9942b969e3c5066718a
Summary: If a blob is copied from device A to device B in the init_net and then used as an external_input in the train_net, we want the train_net to correctly use the blob already on device B instead of copying it over and over again.
Reviewed By: akyrola
Differential Revision: D5800870
fbshipit-source-id: d93f44bba80e4ed70eb03183d552496b54a966b5
Summary:
Adding backward pass support for If operator:
- Implemented the necessary changes to the Do operator and to the generation of the gradient Do operator to properly forward gradient blobs in and out of the subnet
- Used WorkspaceManager to keep track of workspaces used by Do, in case we need access to local blobs to compute gradients (also important for loop backprop)
- Updated Workspace to handle blob binding from multiple parent workspaces
- Implemented generation of the gradient If operator
- Added a unit test that builds and trains a net with the If control op
Reviewed By: azzolini
Differential Revision: D5745096
fbshipit-source-id: 1023c90a2113716254424d1e50b9e560fe9083e5
Summary: Fix comment on core.Net.RunAllOnMKL (the comment was actually for core.Net.RunAllOnGPU)
Reviewed By: zem7
Differential Revision: D5734309
fbshipit-source-id: 2cc40a99a2c0083c73ec1e4c8279f55f296a003c
Summary:
I would expect tests marked "expected failure" to mean that there is a known issue in the code which will be fixed later. Both of these tests simply verify proper error-checking - nothing needs fixing.
Before (looks like something is wrong):
```
======================================= 2 xfailed in 0.27 seconds =======================================
```
After:
```
======================================= 2 passed in 0.28 seconds ========================================
```
/cc akyrola gsethi523
Closes https://github.com/caffe2/caffe2/pull/1209
Differential Revision: D5825373
Pulled By: akyrola
fbshipit-source-id: 1b98f503e4e406f69567d02425532f43bd16a465
Summary:
Currently the loss ops are still not placed on GPU even when the ALL strategy is selected.
This diff enables that.
Reviewed By: xianjiec
Differential Revision: D5671255
fbshipit-source-id: 033863f171e1f89c8d75430d3af6a1e6d0d2eff2
Summary:
This diff adds control flow operators in Caffe2 (starting with If, While):
- Added an If operator that executes a then/else subnet
- The branch subnet is executed in a separate, isolated workspace, with some of the blobs transparently forwarded from the outer workspace
- Added a new NetBuilder subclass to construct nets using the new operator (see the sketch below)
- The NetBuilder also keeps track of outer blob names and automatically sets up blob bindings between the outer and inner workspaces, implementing a generic convention for handling local/global variables in blocks
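A minimal, non-authoritative sketch of how the NetBuilder-based syntax might look (it assumes the `ops.IfNet` / `ops.Else` context managers and a `_use_control_ops` flag introduced around these diffs; blob names are illustrative):
```
from caffe2.python import core, workspace
from caffe2.python.net_builder import ops, NetBuilder

# _use_control_ops asks NetBuilder to emit real If ops instead of plain
# execution steps (an assumption about the flag name).
with NetBuilder(_use_control_ops=True) as nb:
    # Blobs created in the outer block act as "globals" for the branches.
    ops.Const(0.0, blob_out="zero")
    ops.Const(1.0, blob_out="one")
    ops.Const(0.5, blob_out="x")
    ops.Const(0.0, blob_out="y")
    with ops.IfNet(ops.GT(["x", "zero"])):
        # then-branch: runs in an isolated child workspace, writes outer 'y'
        ops.Copy("one", "y")
    with ops.Else():
        # else-branch: also writes outer 'y'
        ops.Copy("zero", "y")

plan = core.Plan("if_example")
plan.AddStep(core.to_execution_step(nb))
workspace.RunPlan(plan)
print(workspace.FetchBlob("y"))  # expected: 1.0
```
Per the convention described above, only blobs that already exist in the outer block (like 'y' here) are bound back to the outer workspace; blobs created inside a branch stay local to it.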
Reviewed By: volkhin
Differential Revision: D5720644
fbshipit-source-id: a674cde0c789f6a6ffdcd9d80159d1e42e49133f
Summary:
Before this diff, we were not respecting in-place blobs. E.g. if we had:
    with DeviceOption(CPU):
        blob = net.MyOpA([])
    with DeviceOption(CUDA):
        net.MyOpB([blob], [blob])
then after InjectCrossDeviceCopies we would get:
    blob = net.MyOpA([], device=CPU)
    blob_cuda0 = net.Copy([blob], [blob_cuda0], device=CUDA)
    net.MyOpB([blob_cuda0], [blob], device=CUDA)
i.e. the in-place blob was not respected. After this diff, we keep the in-place blob.
Reviewed By: harouwu
Differential Revision: D5671867
fbshipit-source-id: 6ad68c612dae19d7e1f45f4988d929644100b4d5
Summary:
This diff adds control flow operators in Caffe2 (starting with If, While):
- Added an If operator that executes a then/else subnet
- The branch subnet is executed in a separate, isolated workspace, with some of the
  blobs transparently forwarded from the outer workspace
- Added a new NetBuilder subclass to construct nets using the new operator
- The NetBuilder also keeps track of outer blob names and automatically sets up
  blob bindings between the outer and inner workspaces, implementing a generic
  convention for handling local/global variables in blocks
Reviewed By: azzolini
Differential Revision: D5641588
fbshipit-source-id: f9e04429961c3da7da4ebca3e8163bfcc2a09ec9
Summary:
Convert a PlanDef protobuf into a Python Plan object by recursively creating
Nets and ExecutionSteps.
Also support running the Plan object directly in a Session.
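A hedged usage sketch of what this enables (the entry-point name `Plan.create_from_proto` is an assumption; only the PlanDef-to-Plan conversion itself is described in this summary):
```
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

# Load a serialized PlanDef from disk.
plan_def = caffe2_pb2.PlanDef()
with open("plan.pb", "rb") as f:
    plan_def.ParseFromString(f.read())

# Recursively rebuild Python Nets and ExecutionSteps from the proto,
# then run the resulting Plan object directly.
plan = core.Plan.create_from_proto(plan_def)  # assumed classmethod name
workspace.RunPlan(plan)
```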
Reviewed By: azzolini
Differential Revision: D5608393
fbshipit-source-id: c0ae3b6da743a759af6db3b614a5a3935fe0b34c
Summary: Caffe2: allow nets that don't use all of their inputs in net.ClonePartial
Differential Revision: D5535564
fbshipit-source-id: 0ec8fb3ade4d7d6cd4a702c9c265d9c77f27a627
Summary: Fixes the case where the init net would initialize the same blob twice. I made an exception that allows an in-place blob among ops as long as the blob stays on the same device. This should fix the problem in a generalized way, since most of our training is CPU-only for now.
Reviewed By: dzhulgakov
Differential Revision: D5450564
fbshipit-source-id: 525c4c9a2e5216a70dbd1229da2d9f8a58b89e47
Summary:
===Update log 7/10===
We are currently blocked by a connection problem. Will post an update if it is not fixed within 2 hours.
===Update 7/6===
Luke is experimenting with the convergence of this diff. Hopefully he can present results next week.
Right now this does not affect our original CPU training pipeline, because the loading op is still correct in the CPU-only case.
I will need a final test to make sure, but that is currently blocked by the log device issue t19952135.
I will handle saving CPU/GPU nets in a separate diff.
====Update before 7/4====
It's actually working! Local run screenshot included:
{F67959016}
dogscience
Reviewed By: dzhulgakov
Differential Revision: D5307058
fbshipit-source-id: cad5d9324c239419530f4b120392ec2ccbb72280
Summary: CopyGPUToGPU does not exist; Copy seems to do the trick. I didn't go into the details of how Copy works, so I'm not sure whether it ends up triggering UVA.
Reviewed By: akyrola
Differential Revision: D5471014
fbshipit-source-id: d8bc1aed9b19070c92f3ffc76f5617bdd0054563
Summary: A quite common point of confusion is how to use StopGradient, and a typical bug is forgetting to specify input == output. This adds a sanity check to the gradient builder that checks whether any StopGradient outputs are orphaned.
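A minimal sketch of the in-place convention being checked (blob names are hypothetical; the FC weights are assumed to exist):
```
from caffe2.python import core

net = core.Net("stop_gradient_example")

h = net.FC(["data", "w1", "b1"], "h")
# Correct usage: StopGradient must be applied in place (input == output).
# If the output were a different, unused blob, the gradient builder's new
# sanity check would flag it as an orphaned StopGradient output.
h = net.StopGradient([h], [h])
pred = net.FC([h, "w2", "b2"], "pred")
```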
Reviewed By: dzhulgakov
Differential Revision: D5458341
fbshipit-source-id: 056fef4f0ee53eb10e66e9be0ecb55b55f9cc3d7
Summary: Added support for passing remap_funcs to clone_and_bind_net so that it can forward them to the clone method. Added other utils to ensure the RecurrentNetwork operator is correctly cloned based on the remap_blob. The reason the RecurrentNetwork operator needs special treatment is that its arguments contain protos and blobs.
Reviewed By: kittipatv
Differential Revision: D5421532
fbshipit-source-id: 5de68365ce97df2de483f02ad260d78c8d35eead
Summary: Allows overriding the input/output record as long as the field blobs are the same.
Reviewed By: yangyangyyy
Differential Revision: D5362132
fbshipit-source-id: 3ac2ac22802902b7eed5c226b00a7e1971ad264c
Summary:
It is a quite common issue that users get some variant of "blob has version 2 but gradient expects version 1" in their backward pass, and the error message is completely unhelpful.
To remedy this, I added proper debug information that tells the user how the version number of a blob was incremented over time, i.e. which ops caused the version to go up. This should help users understand the issue.
Reviewed By: dzhulgakov
Differential Revision: D5358227
fbshipit-source-id: bc09d048ac33200c35d56460e44e86c2f2888f3f
Summary:
Last time I used a uuid filled into OperatorDef, and operator_tracebacks was populated using traceback.extract_stack. There were several issues with this approach:
1. A random field in OperatorDef breaks workflows relying on memoization, i.e. when computation is skipped based on an already computed result.
2. Adding one more field revealed that RNNs are not forward compatible w.r.t. new fields in OperatorDef. The prototxt format does not seem to allow forward compatibility (thanks jamesr66a for the investigation!). For RNNs we need to switch to a more resilient approach: azzolini's proposed change to OperatorDef / NetDef would allow that by nesting NetDef directly inside OperatorDef without the need for extra serialization.
3. traceback.extract_stack is very slow when the executable is on a remote filesystem. It does one or more os.stat calls for each frame on the stack; in some cases this added up to 15 extra minutes of model construction time.
In this diff I use a different approach which should fix all of the problems above.
1, 2: solved by not adding a new field at all. Instead I record the operator's index within the net it runs in. Thanks akyrola and dzhulgakov for the idea. The downside is that manipulating the operator list breaks the logic, and separately created ops are not covered at all.
3: solved by operating on raw frames, without using the traceback and inspect modules, which end up doing a lot of filesystem calls. See the function extract_stacktrace in core.py for additional comments; a sketch of the idea follows below.
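A minimal sketch of the raw-frame approach from point 3 (the actual helper in core.py may differ in details; this only illustrates avoiding traceback/inspect):
```
import sys

def extract_stacktrace():
    # Walk raw frame objects instead of using traceback.extract_stack, which
    # reads source lines and can issue os.stat calls per frame - painfully
    # slow when the executable lives on a remote filesystem.
    frame = sys._getframe(1)
    result = []
    while frame is not None:
        code = frame.f_code
        result.append((code.co_filename, frame.f_lineno, code.co_name))
        frame = frame.f_back
    return result
```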
Reviewed By: dzhulgakov
Differential Revision: D5286285
fbshipit-source-id: 626dd0f5f6b8b1d86bd6bf519078b122f43ddcaa
Summary:
Advantages of cloning the tasks/execution_steps at runtime:
- Less complexity on the python side: no need to clone nets and add prefixes to blob names
- Faster start-up: we had cases of complex plans that took up to 30min to be created.
- Better isolation: each task cloned at runtime has its own child workspace, preventing false sharing of blobs.
- Opens up possibility for dynamic scheduling: Number of threads per task can be increased on the fly, at runtime.
Reviewed By: dzhulgakov
Differential Revision: D5100730
fbshipit-source-id: 71b83193b135da4e6eaf2536d8fc266528e1fdcc
Summary:
A few issues:
1. Randomization hurts memoization. Even if we make it non-random, we can get key collisions when loading it back.
2. RNNs use prototxt for the step net, and apparently it's not forward compatible the way normal protobuf is.
I am thinking of a better, less invasive solution now.
Reviewed By: jamesr66a
Differential Revision: D5272118
fbshipit-source-id: ab577fad04fbfc632e1fceffa923377a0d3da1be
Summary: Hard-to-debug problems arise when a gradient creator fails because the forward op itself is incorrect. Add a schema check before calling the creator, and clarify the error messages.
Reviewed By: Yangqing
Differential Revision: D5256016
fbshipit-source-id: 78550f7e2ce5b88e26b69fdae4be0eece52edfea
Summary: This was only needed in order to initialize stateful PythonOps. Now PythonOp has support for initialization at Op creation time, so this is not used anymore.
Reviewed By: dzhulgakov
Differential Revision: D5242908
fbshipit-source-id: dbaa249466dd0f37f25d204d387b1f99c6dd4fed
Summary: This shows a Python Caffe2 user where a failed operator was created. The motivation for keeping this information out of the protobuf itself is to avoid making it too verbose and to keep the ability to read a net's protobuf after a simple print() call.
Reviewed By: jamesr66a
Differential Revision: D5226047
fbshipit-source-id: 7edfe850e05a2ec209577142aa3368664a57a108
Summary:
This allows constructing a Python op by passing a pickled "builder function call" as an argument to the op.
The builder function is called at PythonOp construction time and returns the function that will be called when the op is run.
This lets us drop the dependency on 'tokens', which didn't work properly for protobufs that get distributed to other processes. Now the PythonOp definition is self-contained: as long as the build dependencies are right, shipping the protobuf is enough to execute the net remotely.
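A hedged, self-contained sketch of the builder-function pattern (the names and the pickling convention here are illustrative, not the actual PythonOp wiring):
```
import pickle

def build_scaler(factor):
    # Runs once, at op construction time; whatever state it captures
    # (here just `factor`) is baked into the returned function.
    def run(inputs, outputs):
        outputs[0] = [x * factor for x in inputs[0]]
    return run

# The "builder function call" that would be pickled into the op definition:
# the protobuf then carries everything needed to rebuild the op remotely,
# provided the builder's module is importable there.
builder_call = pickle.dumps((build_scaler, (2.0,), {}))

# On the remote side: unpickle, build, and run.
fn, args, kwargs = pickle.loads(builder_call)
op_fn = fn(*args, **kwargs)
outputs = [None]
op_fn([[1.0, 2.0, 3.0]], outputs)
print(outputs[0])  # [2.0, 4.0, 6.0]
```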
Reviewed By: dzhulgakov
Differential Revision: D5080833
fbshipit-source-id: a5deaca5d3143024cdb121519689224e9dbec5ce
Summary:
We waste extra memory by creating two autosplit gradient blobs and then accumulating them into the main one. Sometimes, when Sum / Sub ops are involved, we can avoid wasting the extra memory at all.
Ideally we would not waste any memory and would make ops add into the same blob rather than calculating separate results and then merging them. But that would require a substantial change to the framework and rewriting a lot of operators.
Reviewed By: dzhulgakov
Differential Revision: D5157667
fbshipit-source-id: 8293824d6cdd971d8853ae90aee68e4a6d1e132b
Summary:
It's very useful for simple cases like benchmarking nets, where we want to encode the input/output record in the net itself and don't want to go through the hurdles of storing it in a MetaNetDef.
For those cases I propose remapping the input/output record blobs to 'input_record/{field_name}' before saving. Then we can recover the input/output record just from the names of the blobs.
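A hedged sketch of the naming convention (the helper and its signature are assumptions; only the 'input_record/{field_name}' scheme comes from the summary):
```
from caffe2.python import core, schema

def remap_record_blobs(net, record, prefix="input_record"):
    # Copy each field blob to a deterministically named blob so the record
    # can later be reconstructed purely from blob names in the NetDef.
    remapped = []
    for field_name, blob in zip(record.field_names(), record.field_blobs()):
        new_blob = core.BlobReference("{}/{}".format(prefix, field_name))
        net.Copy(blob, new_blob)
        remapped.append(new_blob)
    return schema.from_blob_list(record, remapped)
```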
Differential Revision: D5170473
fbshipit-source-id: ac5daa60051605ed93022aec1377a49f08f15663
Summary:
Static RNN allows unrolling an RNN into a Caffe2 graph using all the existing cell abstractions. In this diff I introduce several new tests that already caught a few bugs in our RecurrentNetworkOp gradient accumulation logic by comparing it against an unrolled version.
Another use case is performance: we can potentially run an unrolled net faster because DAGNet will have access to the whole graph. The same goes for memonger. But that work is not part of this diff.
Reviewed By: akyrola
Differential Revision: D5200943
fbshipit-source-id: 20f16fc1b2ca500d06ccc60c4cec6e81839149dc
Summary:
When building a multi-layer static RNN, the last timestep of the first layer (and of every layer except the last one) doesn't get a gradient for the cell state, since the user normally only uses results from the last layer and the cell state doesn't flow upward either.
ZeroGradient provides a general solution for injecting zero gradient blobs. It is in some ways similar to the StopGradient operator, which is also special-cased.
Reviewed By: bwasti
Differential Revision: D5198375
fbshipit-source-id: a21d0cfb3676a77fac72e5897a200d0bd25fc6de
Summary:
This diff plans to attack the problem where we want to just annotate the device option on operators and let Caffe2 inject the cross-device copy functions for us. This feature would be useful for mixed-device training and for multi-device training with several nets, where previously we did the heavy lifting of adding the copy functions ourselves.
Ideally, this feature will be used like this:
    # construct your nets first
    core.InjectDeviceCopyAmongNets([train_init, train_net, ...])
My ideas are written in the comments. I will update them here as well later.
Reviewed By: dzhulgakov
Differential Revision: D5134103
fbshipit-source-id: 173f7da9d1773d1c50ccdc27f1b5cd3067b04af5
Summary: Infer input and output devices from an OperatorDef through its OperatorSchema. This is inspired by shape inference. With this feature, we can easily analyze device information for all blobs in the net in a generic way. It is really helpful for automatic cross-device execution.
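A hedged usage sketch (it assumes a Python helper along the lines of core.InferOpBlobDevices that surfaces the schema's device inference; the name and return shape are not confirmed by this summary):
```
from caffe2.proto import caffe2_pb2
from caffe2.python import core

gpu = core.DeviceOption(caffe2_pb2.CUDA, 0)
op = core.CreateOperator("Relu", ["x"], ["y"], device_option=gpu)

# Per-blob DeviceOptions for the op's inputs and outputs, inferred from the
# operator schema (assumed helper name and tuple return).
input_devs, output_devs = core.InferOpBlobDevices(op)
```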
Reviewed By: akyrola, dzhulgakov
Differential Revision: D5161065
fbshipit-source-id: ee656123112171a4ca00f2fb3f6940f32ddf3135
Summary:
I'm using Python ops in a project and need corresponding Python gradient ops. For my use case, only a subset of the forward op outputs have gradients and only a subset of forward op inputs have gradients. However the current implementation of `GetPythonGradient` forces all grad inputs and outputs to exist. This diff allows one to specify that only a subset of grad inputs / outputs are used when constructing the Python op.
I'm not sure if this is up to caffe2 standards, so please push back on style and content as needed.
Reviewed By: dzhulgakov
Differential Revision: D4897004
fbshipit-source-id: 96fffe8634c51a49b6bce7339a46c6235f7d4bbd
Summary:
Fixing the missing `future` package issue.
Recently we found that some of our users do not have the `future` module installed, so we might need a try/except wrapper around every `past` import.
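A minimal sketch of the guarded-import pattern described above (the specific symbol is illustrative):
```
try:
    # Provided by the third-party `future` package.
    from past.builtins import basestring
except ImportError:
    # Fall back to a Python 3 equivalent when `future` is not installed.
    basestring = str
```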
Reviewed By: Yangqing
Differential Revision: D5183547
fbshipit-source-id: 262fdf2940ee1be4454bf0b0abb9e6a0f1a0ee82
Summary:
A bug repro is in the test. Generally speaking, accumulation was not happening
when len(ys) >= 2 (the list of blobs we compute gradients from) and some blob
in the net was both in the ys list and also got a gradient propagated from
another element of ys.
Reviewed By: akyrola
Differential Revision: D5121695
fbshipit-source-id: 282d88f2f4f6e27dadae311964f40246a2739130