pytorch/caffe2/python/modeling
sf-wind 602a09dde7 Update caffe2 from facebook 4f527ef46abf (#2234)
* [GanH]: two_task_discriminator

as titled

and adds label smoothing

* [Dper2] Simplified UI options needed for blob magnitude visualization

* [GanH]: fix tags

as titled

* Added type and shape inference for GatherRange operator

This helps with type / shape inference when using this operator in layers.
It's also just nice to have in general.

* Demonstrate Caffe2 exception handling with StoreHandlerTimeoutError in Python

We'd like to catch and recover from certain Caffe2 net exceptions. This diff demonstrates the pattern of registering a pybind exception mapping and catching it in Python, using caffe2::StoreHandlerTimeoutException as the example.
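The catch-and-recover pattern described above can be sketched as follows. Note this is a self-contained illustration: `StoreHandlerTimeoutError` here is a locally defined stand-in for the exception class the pybind mapping would export, and `run_net` simulates a net run that times out waiting on the store.

```python
# Sketch of the catch-and-recover pattern, with stand-ins for the
# pybind-registered exception and the net runner (both hypothetical).

class StoreHandlerTimeoutError(RuntimeError):
    """Stand-in for the exception class registered via pybind11."""

def run_net(peer_available):
    # Simulates running a net that waits on a distributed key-value store.
    if not peer_available:
        raise StoreHandlerTimeoutError("timed out waiting for peer")
    return "ran"

def run_with_recovery(peer_available, max_retries=3):
    for _ in range(max_retries):
        try:
            return run_net(peer_available)
        except StoreHandlerTimeoutError:
            # Recover: e.g. re-establish rendezvous, then retry.
            peer_available = True  # pretend the peer came back
    raise RuntimeError("gave up after retries")

print(run_with_recovery(False))  # recovers on the retry and prints "ran"
```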

* Bind Gloo IoException to IoError in Python

Allow peer failure handling and recovery using an exception based mechanism. This diff registers gloo::IoException with pybind.

* [GanH]: add label smoothing to softmax with loss

as titled
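Label smoothing in its standard form mixes the one-hot targets with a uniform distribution over the K classes; a minimal sketch (the standard formulation, not necessarily the op's exact implementation):

```python
def smooth_labels(one_hot, eps=0.1):
    # Standard label smoothing: y_smooth = (1 - eps) * y + eps / K,
    # i.e. mix the one-hot target with a uniform distribution over K classes.
    k = len(one_hot)
    return [(1.0 - eps) * y + eps / k for y in one_hot]

y = [0.0, 1.0, 0.0, 0.0]
print(smooth_labels(y, eps=0.1))  # approximately [0.025, 0.925, 0.025, 0.025]
```

The smoothed targets still sum to 1, but the loss no longer drives the softmax logits toward infinity for the true class.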

* [C2] Enable LARS in Adagrad and hook it to DPER

* [DPER] Don't pass LayerModelHelper in create_trainer_nodes

Since we plan to get rid of LayerModelHelper eventually and want access to a
NetDef-only interface ASAP, this removes references to LMH where we don't
really need them.

* fix bugs in LambdaRankNdcgOp

The loss and gradient computed in LambdaRankNdcgOp were incorrect: the loss should be the negative log of the probabilities, not the log itself.
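The sign issue can be illustrated numerically. This is a generic negative-log-likelihood example, not the op's exact pairwise NDCG formulation:

```python
import math

# For a probability p in (0, 1), log(p) is negative, so minimizing it
# would reward bad predictions. The correct loss is -log(p): positive,
# and shrinking toward 0 as the predicted probability approaches 1.
p_good, p_bad = 0.9, 0.1
loss_good = -math.log(p_good)  # approx. 0.105
loss_bad = -math.log(p_bad)    # approx. 2.303
assert loss_good < loss_bad    # better prediction -> smaller loss
```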

* Restrict thread pool on iOS to only big cores

Historically, iPhones exposed only one type of cores, and Caffe2 thread pool used all of them.
However, the iPhone 8 and iPhone X expose 2 big + 4 LITTLE cores. As our thread pool doesn't support work stealing or other forms of load balancing, fast cores end up waiting for the slow ones, and it may be better to restrict execution to only the 2 fast cores, as we do on Android.

* Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine

* make clang happy and get fewer warnings

* [Personalization] Support add_output_schema() in layer_model_helper

Problem:
Currently the output_schema of sparse_nn can only be set once. https://fburl.com/efth5zer.

Solution:
For flexibility, we want to add fields to output_schema incrementally.

Plan:
Wrap the change of `model._output_schema` into a new function `add_output_schema()` for adding additional output_schema.

Callsite:
add_output_schema() should then be called at https://fburl.com/efth5zer instead of assigning output_schema directly.

Reference:
The newly added `add_output_schema()` will be similar to `add_loss()` in https://fburl.com/t2ii8njh
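The incremental pattern can be sketched with a plain dict standing in for the schema struct; the real caffe2 code would merge `schema.Struct` fields instead, and `ModelHelperSketch` is a hypothetical stand-in for LayerModelHelper:

```python
class ModelHelperSketch:
    """Toy stand-in for LayerModelHelper's output_schema handling."""

    def __init__(self):
        self._output_schema = {}  # the real code holds a schema.Struct

    def add_output_schema(self, name, field):
        # Instead of overwriting the whole schema in one shot, merge
        # fields in one at a time, rejecting duplicate names.
        if name in self._output_schema:
            raise ValueError("field %r already in output_schema" % name)
        self._output_schema[name] = field

model = ModelHelperSketch()
model.add_output_schema("prediction", "blob:prediction")
model.add_output_schema("loss", "blob:loss")
print(sorted(model._output_schema))  # ['loss', 'prediction']
```

This mirrors how `add_loss()` accumulates losses rather than replacing them.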
2018-03-12 12:22:59 -07:00
__init__.py Experimental support for setup.py develop mode install 2018-02-12 23:36:18 -08:00
compute_norm_for_blobs_test.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
compute_norm_for_blobs.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
compute_statistics_for_blobs_test.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
compute_statistics_for_blobs.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
initializers_test.py Clean up naming of FP16-related code, add comments 2018-03-09 13:51:42 -08:00
initializers.py Clean up naming of FP16-related code, add comments 2018-03-09 13:51:42 -08:00
net_modifier.py [Dper2] Add NetModifier abstraction and support for plotting the norm of blobs (#2201) 2018-03-08 13:41:32 -08:00
parameter_info.py Re-license to Apache 2017-09-28 16:22:00 -07:00
parameter_sharing_test.py Fix a bug in nested parameter sharing logic. 2018-02-08 13:32:53 -08:00
parameter_sharing.py Fix a bug in nested parameter sharing logic. 2018-02-08 13:32:53 -08:00