Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9905
This diff improves the lars operator in Caffe2 by applying clipping to the computed learning rate.
Reviewed By: pjh5
Differential Revision: D9020606
fbshipit-source-id: b579f1d628113c09366feac9406002f1ef4bd54f
* [C2] Implement Layer-wise Adaptive Rate Scaling (LARS)
* add unit test for Lars
* set default value for lars to be None
* remove lars for subclasses of SgdOptimizer
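
For reference, a minimal NumPy sketch of the idea described above: compute a layer-wise LARS rescale factor from the parameter and gradient norms, then clip it. The formula follows the standard LARS formulation; the names `trust`, `lr_min`, `lr_max`, and `eps` are illustrative and not necessarily the exact Caffe2 operator interface.

```python
import numpy as np

def lars_lr_rescale(param, grad, trust=0.001, eps=1e-9, lr_min=0.0, lr_max=0.02):
    """Illustrative LARS learning-rate rescale with clipping.

    Computes trust * ||param|| / (||grad|| + eps) and clips the result to
    [lr_min, lr_max]. This is a sketch, not the exact Caffe2 operator contract.
    """
    w_norm = np.linalg.norm(param)
    g_norm = np.linalg.norm(grad)
    rescale = trust * w_norm / (g_norm + eps)
    # Clipping the computed learning rate keeps the per-layer step bounded
    # even when the gradient norm is very small.
    return float(np.clip(rescale, lr_min, lr_max))

# Example: a layer with a tiny gradient norm would otherwise get a huge rate.
param = np.ones(1000)
grad = np.full(1000, 1e-6)
print(lars_lr_rescale(param, grad))  # clipped to lr_max
```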