Summary:
(Ignore the convolution-op related changes; they will be patched separately later.)
This diff includes work from the last few weeks:
- some refactoring of the flow ops
- no_bias setting
- MAP computation (instead of accuracy) for OC (see the sketch after this list)
- adaptive learning rate for Xray concepts
- various small bug fixes
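For the record, here is a minimal numpy sketch of what a MAP metric computes (mean over concepts of average precision); the function name and array shapes are illustrative, not the actual OC code:

```
import numpy as np

def mean_average_precision(scores, labels):
    # scores, labels: (num_examples, num_concepts); labels are 0/1.
    aps = []
    for c in range(scores.shape[1]):
        order = np.argsort(-scores[:, c])     # rank examples by score, descending
        hits = labels[order, c]
        num_pos = hits.sum()
        if num_pos == 0:
            continue                          # concept absent from the eval set
        ranks = np.arange(1, len(hits) + 1)
        prec_at_k = np.cumsum(hits) / ranks   # precision at each rank cutoff
        aps.append(float((prec_at_k * hits).sum() / num_pos))
    return float(np.mean(aps))
```

Unlike plain accuracy, MAP is threshold-free and rewards ranking the positives near the top, which matters when positives are rare.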
Reviewed By: viswanathgs
Differential Revision: D4329500
fbshipit-source-id: 000d4fd22ec408af5290480c788eb86546bff52e
Summary: The Torch docs on ResNets, and soumith's comment, both mention significant memory savings from in-place ReLU. prigoyal already had this in her code, but I did not. This saves a lot of memory: 9851 MiB -> 7497 MiB.
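For illustration, a minimal Caffe2 sketch (model and blob names are made up, assuming the CNNModelHelper API): passing the same blob as both input and output makes the ReLU run in place, so no separate activation blob is allocated.

```
from caffe2.python import cnn

model = cnn.CNNModelHelper(name="resnet_sketch")
conv = model.Conv("data", "conv1", dim_in=3, dim_out=64, kernel=7)
# Out-of-place (what I had): allocates a separate activation blob.
#   relu = model.Relu(conv, "conv1_relu")
# In-place: the activation overwrites the conv output's memory.
relu = model.Relu(conv, conv)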
Reviewed By: prigoyal
Differential Revision: D4346100
fbshipit-source-id: e9c5d5e93787f47487fade668b65b9619bfc9741
Summary:
prigoyal sharply noticed a bug in the ResNet models: we have been neither checkpointing, nor synchronizing between GPUs, the moving average and variance computed by the SpatialBN ops. The first problem in particular is serious, since models starting from a checkpoint would have started from a null state for SpatialBN. Not synchronizing within the data parallel model is less tragic, since each GPU should see very similar data.
Thus I propose keeping track of "computed params", i.e. params that are computed from the data but not optimized. I don't know if there are other examples, but SpatialBN's moving average and variance definitely qualify.
- I modified the checkpointing for the Xray model to store those blobs, and also to ensure the synchronization of those blobs
- I modified data parallel model to broadcast those params from gpu0 (a rough sketch follows below). I first tried averaging them, but hit some NCCL deadlocks ... :(
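A rough sketch of the broadcast, assuming data_parallel_model's "gpu_N/" blob naming; the helper function and the running mean/var suffixes are illustrative, not the actual code:

```
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2

def broadcast_computed_params(computed_params, num_gpus):
    # computed_params: unscoped names of data-computed blobs, e.g. the
    # SpatialBN running mean and variance.
    for name in computed_params:
        master = workspace.FetchBlob("gpu_0/{}".format(name))
        for gpu in range(1, num_gpus):
            # Overwrite each replica with gpu_0's copy -- a plain feed,
            # avoiding the NCCL collective that deadlocked.
            workspace.FeedBlob(
                "gpu_{}/{}".format(gpu, name),
                master,
                device_option=core.DeviceOption(caffe2_pb2.CUDA, gpu),
            )
```

The same computed-params list is what the checkpoint now stores, so a run restarted from a checkpoint does not reinitialize the SpatialBN state.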
Differential Revision: D4281265
fbshipit-source-id: 933311afeec4b7e9344a13cf2d38aa939c50ac31