Summary: Adds support for backprop to While op, fixes gradient computation for Pow
Reviewed By: azzolini
Differential Revision: D6456875
fbshipit-source-id: 9f660317ad6f3898ff7d8ce43098f85c3426409b
Summary: Currently, the device_option equality check lives in a specialized private function. Ideally, we should be able to test equality from other places in the code and perform a more detailed check.
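For illustration, a minimal sketch of what a shared equality helper could look like (the helper name, signature, and field handling here are assumptions for this example, not the actual implementation):
```
from caffe2.proto import caffe2_pb2

def device_option_equal(opt1, opt2, ignore_node_name=True):
    # Simplified check: compare the fields that determine placement.
    # The real check is more detailed (e.g. treating an unset option as CPU).
    if opt1.device_type != opt2.device_type:
        return False
    if not ignore_node_name and opt1.node_name != opt2.node_name:
        return False
    return opt1.cuda_gpu_id == opt2.cuda_gpu_id

cpu = caffe2_pb2.DeviceOption()  # defaults to CPU
gpu = caffe2_pb2.DeviceOption(device_type=caffe2_pb2.CUDA, cuda_gpu_id=0)
assert not device_option_equal(cpu, gpu)
```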
Reviewed By: akyrola
Differential Revision: D6316608
fbshipit-source-id: c3fd085583e535d7936d05e4c8b15d2eff91c744
Summary: Print the full operator definition when gradient creation fails. This helps debug cases where the same op type is used in many places.
Differential Revision: D6282832
fbshipit-source-id: 4b9dab2602c7c53f795da93a3085cf5c8ca741c1
Summary: The observer framework can now be used in Python, plus a small writeup of how to use it. This is D6035393 with a fix for ct-scan.
Reviewed By: salexspb
Differential Revision: D6066380
fbshipit-source-id: 896c4c580d4387240b81ac2dbbc43db51d4bfeb9
Summary: The observer framework can now be used in Python, plus a small writeup of how to use it.
Reviewed By: sf-wind
Differential Revision: D6035393
fbshipit-source-id: 4563cf0203095fa979bb2160621cd16dd22ff830
Summary: Before this diff, RNNOp used TextFormat to represent its step nets. This diff changes RNNOp to prefer a NetDef argument instead. To stay backward compatible it still supports TextFormat for existing models, though we can now compile RNNs without TextFormat as well.
Reviewed By: salexspb
Differential Revision: D5949330
fbshipit-source-id: 9336a8f5ccf30ad8d8e3a7067b9437e1704b1c9f
Summary: The observer framework can now be used in Python, plus a small writeup of how to use it.
Reviewed By: salexspb
Differential Revision: D5905002
fbshipit-source-id: e40ec24a55e08fb73beea9b4f3b68e71fc66ffb1
Summary: Given a pair (init_net, train_net) where ops in sparse layers are tagged, this diff detects the components and renames the `node_name` (e.g. tag) to reflect the component name.
Reviewed By: azzolini
Differential Revision: D5948222
fbshipit-source-id: aeda9cfc88bb64922bf7a9942b969e3c5066718a
Summary: If a blob is copied from device A to device B in the init_net and then used as an external_input in the train_net, we want the train_net to correctly use the blob already on device B instead of copying it over and over again.
Reviewed By: akyrola
Differential Revision: D5800870
fbshipit-source-id: d93f44bba80e4ed70eb03183d552496b54a966b5
Summary:
Adding backward-pass support for the If operator:
- Implemented the necessary changes to the Do operator and to the generation of the gradient Do operator, so gradient blobs are properly forwarded in and out of the subnet
- Using WorkspaceManager to keep track of the workspaces used by Do, in case we need access to local blobs to compute gradients (also important for the loop's backprop)
- Updated Workspace to handle blob binding from multiple parent workspaces
- Implemented generation of the gradient If operator
- Added a unit test that builds and trains a net with the If control op
Reviewed By: azzolini
Differential Revision: D5745096
fbshipit-source-id: 1023c90a2113716254424d1e50b9e560fe9083e5
Summary: Fix comment on core.Net.RunAllOnMKL (the comment was actually for core.Net.RunAllOnGPU)
Reviewed By: zem7
Differential Revision: D5734309
fbshipit-source-id: 2cc40a99a2c0083c73ec1e4c8279f55f296a003c
Summary:
I would expect that tests marked "expected failure" mean that there is a known issue in the code which will be fixed later. Both of these tests are simply verifying proper error-checking - nothing needs fixing.
Before (looks like something is wrong):
```
======================================= 2 xfailed in 0.27 seconds =======================================
```
After:
```
======================================= 2 passed in 0.28 seconds ========================================
```
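For reference, a minimal illustration (not from this PR) of the distinction: a test that verifies error checking can assert the expected exception directly instead of being marked xfail.
```
import pytest

def set_rate(rate):
    # Toy stand-in for the kind of argument validation these tests cover.
    if rate <= 0:
        raise ValueError("rate must be positive")
    return rate

# Asserting the error makes the intent explicit; marking the test xfail
# would wrongly suggest the code has a known bug awaiting a fix.
def test_set_rate_rejects_nonpositive():
    with pytest.raises(ValueError):
        set_rate(0)
```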
/cc akyrola gsethi523
Closes https://github.com/caffe2/caffe2/pull/1209
Differential Revision: D5825373
Pulled By: akyrola
fbshipit-source-id: 1b98f503e4e406f69567d02425532f43bd16a465
Summary:
Currently the loss ops are still not placed on GPU even though the ALL strategy is selected.
This diff enables that.
Reviewed By: xianjiec
Differential Revision: D5671255
fbshipit-source-id: 033863f171e1f89c8d75430d3af6a1e6d0d2eff2
Summary:
This diff adds control flow operators in Caffe2 (starting with If, While):
- Added If operator that executes then/else subnet
- Branch subnet is executed in a separate isolated workspace, with some of the blobs transparently forwarded from the outer workspace
- Adding a new NetBuilder subclass to construct nets using the new operator
- NetBuilder also keeps track of outer blob names and automatically sets up blob bindings between the outer and inner workspace, implementing a generic convention for handling local/global variables in blocks (a usage sketch follows below)
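For context, a rough sketch of how an If op with branch subnets might be wired up (illustrative only; the exact NetBuilder-level API and argument plumbing from this diff may differ):
```
from caffe2.python import core

# Build the two branch subnets; each runs in its own child workspace.
then_net = core.Net("then_net")
then_net.ConstantFill([], ["y"], shape=[1], value=1.0)
else_net = core.Net("else_net")
else_net.ConstantFill([], ["y"], shape=[1], value=0.0)

# The If op reads the boolean condition blob and carries the branch
# subnets as NetDef arguments (argument names here are assumptions).
if_op = core.CreateOperator(
    "If", ["cond"], ["y"],
    then_net=then_net.Proto(),
    else_net=else_net.Proto(),
)
```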
Reviewed By: volkhin
Differential Revision: D5720644
fbshipit-source-id: a674cde0c789f6a6ffdcd9d80159d1e42e49133f
Summary:
Before this diff, we were not respecting in-place blobs. E.g. if we had:

    with DeviceOption(CPU):
        blob = net.MyOpA([])
    with DeviceOption(CUDA):
        net.MyOpB([blob], [blob])

then after InjectCrossDevicesCopies we would get:

    blob = net.MyOpA([], device=CPU)
    blob_cuda0 = net.Copy([blob], [blob_cuda0], device=CUDA)
    net.MyOpB([blob_cuda0], [blob], device=CUDA)

i.e. the in-place use of the blob in MyOpB was lost. After this diff, we keep the in-place blob.
Reviewed By: harouwu
Differential Revision: D5671867
fbshipit-source-id: 6ad68c612dae19d7e1f45f4988d929644100b4d5
Summary:
This diff adds control flow operators in Caffe2 (starting with If, While):
- Added If operator that executes then/else subnet
- Branch subnet is executed in a separate isolated workspace, with some of the
blobs transparently forwarded from the outer workspace
- Adding a new NetBuilder subclass to construct nets using the new operator
- NetBuilder also keeps track of outer blob names and automatically sets up
blob bindings between the outer and inner workspace, implementing a generic
convention for handling local/global variables in blocks
Reviewed By: azzolini
Differential Revision: D5641588
fbshipit-source-id: f9e04429961c3da7da4ebca3e8163bfcc2a09ec9
Summary:
Convert a PlanDef protobuf into a python Plan object by recursively creating
Nets and ExecutionSteps.
Also support running the Plan object directly in a Session.
Reviewed By: azzolini
Differential Revision: D5608393
fbshipit-source-id: c0ae3b6da743a759af6db3b614a5a3935fe0b34c
Summary: Caffe2: allow nets that don't use all inputs in net.ClonePartial
Differential Revision: D5535564
fbshipit-source-id: 0ec8fb3ade4d7d6cd4a702c9c265d9c77f27a627
Summary: Fixes the case where the init net initializes the same blob twice. I made an exception to allow an in-place blob among ops as long as the blob stays on the same device. This should fix the problem in a generalized way, since most of our training is CPU-only for now.
Reviewed By: dzhulgakov
Differential Revision: D5450564
fbshipit-source-id: 525c4c9a2e5216a70dbd1229da2d9f8a58b89e47
Summary:
===Update log 7/10===
We are currently blocked by a connection problem. Will post an update if it is not fixed within 2 hours.
===Update 7/6===
Luke is experimenting with the convergence of this diff. Hopefully he can present results next week.
Right now this does not affect our original CPU training pipeline, because the loading op is still correct in the CPU case.
I will need a final test to make sure, but that is currently blocked by the log device issue t19952135.
I will handle saving CPU/GPU nets in a separate diff.
====Update before 7/4====
It's actually working! Local run screenshot included:
{F67959016}
dogscience
Reviewed By: dzhulgakov
Differential Revision: D5307058
fbshipit-source-id: cad5d9324c239419530f4b120392ec2ccbb72280
Summary: CopyGPUToGPU does not exist; Copy seems to do the trick. I didn't go into the details of how Copy works, so I'm not sure whether it ends up triggering UVA.
Reviewed By: akyrola
Differential Revision: D5471014
fbshipit-source-id: d8bc1aed9b19070c92f3ffc76f5617bdd0054563
Summary: A quite common point of confusion is how to use StopGradient, and a typical bug is forgetting to specify input=output. This adds a sanity check to the gradient builder that checks whether any StopGradient outputs are orphaned.
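For context, a minimal sketch of the correct in-place usage (assuming the standard caffe2.python.core API; blob names are illustrative):
```
from caffe2.python import core

net = core.Net("stop_gradient_example")
x = net.ConstantFill([], ["x"], shape=[1], value=1.0)
# Correct usage: input and output are the same blob, so the gradient
# builder knows which blob's gradient flow to cut off.
x = net.StopGradient([x], [x])
# Typical bug the new check catches: writing StopGradient to a fresh
# output blob, leaving the stopped gradient orphaned:
# net.StopGradient([x], ["x_stopped"])
```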
Reviewed By: dzhulgakov
Differential Revision: D5458341
fbshipit-source-id: 056fef4f0ee53eb10e66e9be0ecb55b55f9cc3d7
Summary: Added support for passing remap_funcs to clone_and_bind_net so that it can pass them through to the clone method. Added other utils to ensure the RecurrentNetwork operator is correctly cloned based on remap_blob. The RecurrentNetwork operator needs special treatment because its arguments contain protos and blobs.
Reviewed By: kittipatv
Differential Revision: D5421532
fbshipit-source-id: 5de68365ce97df2de483f02ad260d78c8d35eead
Summary: Allows overriding the input/output record as long as the field blobs are the same.
Reviewed By: yangyangyyy
Differential Revision: D5362132
fbshipit-source-id: 3ac2ac22802902b7eed5c226b00a7e1971ad264c
Summary:
A quite common question arises when users get some variant of "blob has version 2 but gradient expects version 1" in their backward pass. The error message is completely unhelpful.
To remedy this, I added proper debug information that tells the user how the version number of a blob was incremented over time, i.e. which ops caused the version to go up. This should help in understanding the issue.
Reviewed By: dzhulgakov
Differential Revision: D5358227
fbshipit-source-id: bc09d048ac33200c35d56460e44e86c2f2888f3f
Summary:
Last time I used a uuid filled into OperatorDef, and operator_tracebacks was populated using traceback.extract_stack. There were several issues with this approach:
1. A random field in OperatorDef breaks workflows relying on memoization, i.e. when computation is skipped based on an already computed result.
2. Adding one more field revealed that RNNs are not forward compatible w.r.t. new fields there. The prototxt format does not seem to allow forward compatibility (thanks jamesr66a for the investigation!). For RNNs we need to switch to a more resilient approach; azzolini's proposed change to OperatorDef / NetDef would allow that by nesting NetDef directly inside OperatorDef without the need for extra serialization.
3. traceback.extract_stack is very slow when the executable is on a remote filesystem. It does one or more os.stat calls for each frame on the stack. In some cases this added up to 15 extra minutes of model construction.
In this diff I use a different approach which should fix all of the problems above.
1 and 2 are solved by not adding a new field at all. Instead I report the operator index with respect to the net it runs in. Thanks akyrola and dzhulgakov for the idea. The downside is that operator-list manipulation breaks the logic, and separately created ops are not covered at all.
3 is solved by operating on raw frames, without using the traceback and inspect modules which end up doing a lot of filesystem calls. See the function extract_stacktrace in core.py (with additional comments); a rough sketch follows below.
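A rough sketch of the raw-frame idea (illustrative only; the actual helper in core.py may differ in naming, ordering, and frame filtering):
```
import sys

def _extract_stacktrace():
    # Walk raw frame objects instead of calling traceback.extract_stack,
    # which stat()s the source file of every frame (slow on remote
    # filesystems). Only cheap in-memory attributes are touched here.
    frames = []
    frame = sys._getframe(1)
    while frame is not None:
        frames.append((frame.f_code.co_filename,
                       frame.f_lineno,
                       frame.f_code.co_name))
        frame = frame.f_back
    # Reverse so the outermost frame comes first, matching the ordering
    # of traceback.extract_stack.
    frames.reverse()
    return frames
```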
Reviewed By: dzhulgakov
Differential Revision: D5286285
fbshipit-source-id: 626dd0f5f6b8b1d86bd6bf519078b122f43ddcaa
Summary:
Advantages of cloning the tasks/execution_steps at runtime:
- Less complexity on the python side: no need to clone nets and add prefixes to blob names.
- Faster start-up: we had cases of complex plans that took up to 30 min to create.
- Better isolation: each task cloned at runtime has its own child workspace, preventing false sharing of blobs.
- Opens up the possibility of dynamic scheduling: the number of threads per task can be increased on the fly, at runtime.
Reviewed By: dzhulgakov
Differential Revision: D5100730
fbshipit-source-id: 71b83193b135da4e6eaf2536d8fc266528e1fdcc
Summary:
A few issues:
1. Randomization hurts memoization.
2. Even if we make it non-random, we can get key collisions when loading it back.
3. RNNs use prototxt for the step net, and apparently it is not forward compatible the way normal protobuf is.
I am thinking of a better, less invasive solution now.
Reviewed By: jamesr66a
Differential Revision: D5272118
fbshipit-source-id: ab577fad04fbfc632e1fceffa923377a0d3da1be
Summary: Hard-to-debug problems arise when a gradient creator fails because the forward op itself is incorrect. Add a check of the op's schema before calling the creator. Also clarify the error messages.
Reviewed By: Yangqing
Differential Revision: D5256016
fbshipit-source-id: 78550f7e2ce5b88e26b69fdae4be0eece52edfea
Summary: This was only needed in order to initialize stateful PythonOps. Now PythonOp has support for initialization at Op creation time, so this is not used anymore.
Reviewed By: dzhulgakov
Differential Revision: D5242908
fbshipit-source-id: dbaa249466dd0f37f25d204d387b1f99c6dd4fed
Summary: This shows a python Caffe2 user where a failed operator was created. The motivation for not putting this information directly in the protobuf is to avoid making it too verbose and to keep the ability to read a net's protobuf after a simple print() call.
Reviewed By: jamesr66a
Differential Revision: D5226047
fbshipit-source-id: 7edfe850e05a2ec209577142aa3368664a57a108
Summary:
This allows constructing a python op by passing a pickled "builder function call" as an argument to the op.
The builder function is called at PythonOp construction time and returns the function that will be called when the op is run.
This way we can drop the dependency on 'tokens', which didn't work properly for protobufs that get distributed to other processes. Now the PythonOp definition is self-contained: as long as the build dependencies are right, shipping the protobuf is enough to execute the net remotely.
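As an illustration of the pattern (names and the exact op argument plumbing are assumptions, not the code in this diff):
```
import pickle

def scale_builder(factor):
    # Builder: invoked once at PythonOp construction time; it returns the
    # callable that the op will invoke on every run.
    def run(inputs, outputs):
        outputs[0] = [v * factor for v in inputs[0]]
    return run

# Conceptually, the op carries something like this pickled builder call
# as an argument, so any process that unpickles it can rebuild the
# runtime function without relying on process-local 'tokens'.
pickled_builder_call = pickle.dumps((scale_builder, (2.0,), {}))

builder, args, kwargs = pickle.loads(pickled_builder_call)
op_function = builder(*args, **kwargs)

outputs = [None]
op_function([[1.0, 2.0]], outputs)
assert outputs[0] == [2.0, 4.0]
```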
Reviewed By: dzhulgakov
Differential Revision: D5080833
fbshipit-source-id: a5deaca5d3143024cdb121519689224e9dbec5ce