Commit Graph

44 Commits

Author SHA1 Message Date
Jiyan Yang
0199d59d3a Resubmit: Set the correct engine name for position weighted pooling when fp16 is used for training
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13768

Reviewed By: xianjiec

Differential Revision: D12996103

fbshipit-source-id: 5ca4cda4210f68ece2b5d6eced8cf52ee91fb36f
2018-11-27 14:51:56 -08:00
Huan Gui
60e7d04961 Add Recency Weighted into SparseLookup (#14291)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14291

Add RecencyWeighted into SparseLookup.

Reviewed By: Wakeupbuddy

Differential Revision: D13147738

fbshipit-source-id: de5dc3aaee8ce7d41c6d30d2ff47e9786a7fa4da
2018-11-24 02:43:31 -08:00
Andrey Malevich
eaf33f22c8 Revert D10123465: Set the correct engine name for position weighted pooling when fp16 is used for training
Differential Revision:
D10123465

Original commit changeset: e8d929d4153d

fbshipit-source-id: 36269e49ac79955fe695ac1a53a3c386aa2f5bec
2018-10-15 01:53:48 -07:00
Jiyan Yang
635cbff300 Set the correct engine name for position weighted pooling when fp16 is used for training
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12225

Reviewed By: hyuen, xianjiec

Differential Revision: D10123465

fbshipit-source-id: e8d929d4153d1ee987ae3d1c37892525d7574d16
2018-10-12 20:15:13 -07:00
Jiyan Yang
c5f7da3f4a Support FP16 sparse lookup (#11674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11674

Pull Request resolved: https://github.com/pytorch/pytorch/pull/11658

Reviewed By: hyuen

Differential Revision: D9676950

fbshipit-source-id: 89a115b9664b84e4e4436b7da033e5a428c2246d
2018-09-14 02:40:08 -07:00
Huayu Li
46d8002800 Fix bug that always uses the same blob when repeating poolings
Reviewed By: houseroad

Differential Revision: D9027902

fbshipit-source-id: 957702ad9736812ec5aa32066d286c2c3adffc49
2018-07-28 00:09:16 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
sf-wind
602a09dde7 Update caffe2 from facebook 4f527ef46abf (#2234)
* [GanH]: two_task_discriminator

as titled

and adding label smoothing

* [Dper2] Simplified UI options needed for blob magnitude visualization

* [GanH]: fix tags

as titled

* Added type and shape inference for GatherRange operator

This helps with type / shape inference when using this operator in layers.
Also just a nice to have in general.

* Demonstrate Caffe2 exception handling with StoreHandlerTimeoutError in Python

We'd like to catch and recover from certain Caffe2 net exceptions. Use this diff to demonstrate a pattern of registering a pybind exception mapping and catching in Python using caffe2::StoreHandlerTimeoutException.

* Bind Gloo IoException to IoError in Python

Allow peer failure handling and recovery using an exception based mechanism. This diff registers gloo::IoException with pybind.

* [GanH]: add label smoothing to softmax with loss

as titled

* [C2] Enable LARS in Adagrad and hook it to DPER

* [DPER] Don't pass LayerModelHelper in create_trainer_nodes

Since we're planning to get rid of it eventually and I want access to the
NetDef-only interface ASAP, I'm looking to remove all references to LMH where we
don't really need them.

* fix bugs in LambdaRankNdcgOp

The loss and gradient in LambdaRankNdcgOp are incorrect: the loss should be the negative log of the probabilities, not the log.

* Restrict thread pool on iOS to only big cores

Historically, iPhones exposed only one type of core, and the Caffe2 thread pool used all of them.
However, iPhone 8/iPhone X expose 2 big + 4 LITTLE cores. As our thread pool doesn't support work stealing or other forms of load balancing, the fast cores end up waiting for the slow ones, and it may be better to restrict execution to only the 2 fast cores, as we do on Android.

* Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine

Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine

* make clang happy and get fewer warnings

make clang happy and get fewer warnings

* [Personalization] Support add_output_schema() in layer_model_helper

Problem:
Currently the output_schema of sparse_nn can only be set once. https://fburl.com/efth5zer.

Solution:
For flexibility, we want to add fields to output_schema incrementally.

Plan:
Wrap the change of `model._output_schema` into a new function `add_output_schema()` for adding additional output_schema.

Callsite:
The add_output_schema() should be called instead at https://fburl.com/efth5zer

Reference:
The newly added `add_output_schema()` will be similar to `add_loss()` in https://fburl.com/t2ii8njh (a minimal sketch follows this commit entry).
2018-03-12 12:22:59 -07:00
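For illustration, a minimal sketch of the add_output_schema() idea from the [Personalization] item above, assuming caffe2.python.schema.Struct fields can be merged with "+"; the class name and exact signature are illustrative, not the actual LayerModelHelper code:

  from caffe2.python import schema

  class ModelSketch(object):
      def __init__(self):
          self._output_schema = None

      def add_output_schema(self, name, field):
          # Extend the output schema incrementally instead of assigning it once.
          new_fields = schema.Struct((name, field))
          if self._output_schema is None:
              self._output_schema = new_fields
          else:
              self._output_schema = self._output_schema + new_fields

Callers could then invoke add_output_schema() several times, each call adding one field to the final output_schema.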
Dmytro Dzhulgakov
f2ec5b7b0e [DPER] Fix bug in uint8 quantization shortcut.
After D6953547 some of the blobs were no longer impacted by uint8 quantization,
but they would still generate operators expecting uint8 inputs and thus fail.

This diff adds a temporary hack to avoid doing this quantization when the layer
is not quantized.

Will fix it by switching to Net rewriting instead.
2018-03-06 00:33:11 -08:00
Andrey Malevich
60dc3ca66f Use 8-bit quantization only in cases when it makes sense.
Summary:
In some cases we were doing quantization even when we should not. This diff
prevents that from happening.

Reviewed By: rayleichen

Differential Revision: D6953547

fbshipit-source-id: 7c65baaf969e5e1bddb68ca8182f4f3b43f2431d
2018-02-15 19:33:03 -08:00
Yan Shang
e816c777eb Add regularization for sparse features
Reviewed By: xianjiec

Differential Revision: D5767997

fbshipit-source-id: b9b7c47d11417fbe67d861a2a6b4daa38adbe57b
2018-02-02 16:03:32 -08:00
Peter Goldsborough
0fd41a63a1 Integrate Fused8BitRowwise ops with DPER
Summary: Updates `sparse_lookup.py` for the new fused 8-bit rowwise quantization. Mostly just changing the same files as the original diffs (D5753626 and D5761202). I know very little about this code here so please let me know if this is safe, also in terms of migration away from the non-fused storage.

Reviewed By: kennyhorror

Differential Revision: D6710784

fbshipit-source-id: 185f147af52a094a937ba631b0351225e660d205
2018-01-25 15:02:42 -08:00
Lin Yang
8e0177255e Test for PositionWeighted
Summary: add a test for SparseLookup with PositionWeighted.

Reviewed By: kennyhorror

Differential Revision: D6771612

fbshipit-source-id: b4b3bfd514f366f579b4192643330ae73843d4f9
2018-01-22 19:20:46 -08:00
Yan Shang
cf07820849 Enable SparseLengthsMean
Differential Revision: D6445834

fbshipit-source-id: 5cbc95e6975b2447dc82dbe293d0ddd9adf6b5a3
2017-11-30 16:04:38 -08:00
Xianjie Chen
5250d7fd11 simplify logic for weighted pooling using id score list
Summary:
so that users can use the 'WeightedSum' pooling method when there is a mix of id list and id score list features.

- it's still intuitive to have "WeightedSum" for id lists, and we do not need to introduce a new "UnWeightedSum" etc.

Reviewed By: chocjy

Differential Revision: D6369270

fbshipit-source-id: 722fa08d1a7986bc6ecf4c7cb02bbae0825bcab4
2017-11-22 17:32:04 -08:00
Yan Shang
dcaaf51100 Support /sqrt(n) pooling
Differential Revision: D6378584

fbshipit-source-id: 3c6606c4e71afbd31dbb97ceeac38dfbe7b40090
2017-11-21 09:04:02 -08:00
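For context, /sqrt(n) pooling is commonly understood as summing the embedding rows of each example and scaling by 1/sqrt(n), where n is the number of ids; a minimal numpy sketch under that assumption (function and argument names are illustrative, not the Caffe2 operator):

  import numpy as np

  def sqrt_n_pool(embedding_rows, lengths):
      # Sum each segment of embedding rows, then scale by 1 / sqrt(segment length).
      out, start = [], 0
      for n in lengths:
          seg = embedding_rows[start:start + n]
          out.append(seg.sum(axis=0) / np.sqrt(max(n, 1)))
          start += n
      return np.stack(out)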
Xue Feng
f0306c12ff add Mean Pooling distributed support
Reviewed By: dragonxlwang

Differential Revision: D6114111

fbshipit-source-id: bc0a79a4455e490bdfaa1d5d6d77badfacd2375c
2017-11-14 17:30:31 -08:00
Xianjie Chen
1b5c843a9c cleaner logic on sparse feature hashing
Reviewed By: kennyhorror

Differential Revision: D6195525

fbshipit-source-id: f687ac3d4914c3dbb0d35679e3a3d3a64a71ac53
2017-11-03 07:27:45 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Huazhong Ning
808c9e3e70 fix a small typo error in sparse_lookup
Summary: as title

Reviewed By: kittipatv

Differential Revision: D5908455

fbshipit-source-id: e7c66e84a27273156d66dfd043e9cfd9b0ab9a98
2017-09-25 21:46:56 -07:00
Xianjie Chen
ec713d437d make sure the output of sparse lookup layer is float
Summary: currently, if reducer=None, the output is fp16

Differential Revision: D5773560

fbshipit-source-id: 24d7e5fae366d70352582e9a1ee14c7613753b7a
2017-09-07 17:47:39 -07:00
Dmitrii Podoprikhin
c7684e3b27 Rowwise quantization
Reviewed By: kennyhorror

Differential Revision: D5753626

fbshipit-source-id: 680c627a81658bcd653feab68e7040db0cb7a185
2017-09-06 10:19:38 -07:00
Long Jin
3faeb621d3 support id_score_list for Feed
Reviewed By: xianjiec

Differential Revision: D5624894

fbshipit-source-id: 1b2caba9ffcce68f346020485cb1f4edb01ca5e7
2017-08-24 00:32:05 -07:00
Jiyan Yang
a8695178aa Adding parameter sharing API to Dper2
Summary:
To achieve this, I modified the blob name scheme defined in a layer.
Before it was scope/fc_w and scope/fc_w_auto_0 (if there is another fc
    within the same scope).
Now I changed it to scope/fc/w and scope/fc_auto_0/w.
That is, we rely on the uniqueness of the scoped layer name to define
names for blobs.

I also overrode the create_param method in LayerModelHelper to let it
use the resolved name for blobs given the parameter sharing context.

There are some details such as making the initializer more structured
that I need to finalize.

Reviewed By: kennyhorror

Differential Revision: D5435132

fbshipit-source-id: a0525f5ea0977e255dd5ea765b38913f5951d455
2017-08-03 00:33:18 -07:00
Tao Wu
4a81b0f24a make SparseLookup support None pooling
Summary: Adding None as a pooling option; with it, SparseLookup will gather the embedding for each id.

Reviewed By: kittipatv

Differential Revision: D5421667

fbshipit-source-id: 1e8e2b550893ff3869dab12f8eb1fe24a063c3d5
2017-07-18 16:39:55 -07:00
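To illustrate the None pooling behavior described above: with pooling disabled the lookup is just a gather, returning one embedding row per id with no reduction. A tiny numpy sketch (array names are illustrative):

  import numpy as np

  emb_table = np.random.rand(100, 16).astype(np.float32)  # 100 ids, 16-dim embeddings
  ids = np.array([3, 7, 7, 42])
  per_id_embeddings = emb_table[ids]  # shape (4, 16): one row per id, no pooling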
Wael Abdelghani
5447f5c0d7 Move position weighted to separate layer
Reviewed By: kennyhorror

Differential Revision: D5063086

fbshipit-source-id: 212c08946728437bcc8b6049438ae82235137ec6
2017-06-05 15:49:22 -07:00
Xianjie Chen
8a7f00d61b fix mean pooling
Summary:
Segment-based ops require increasing segment ids without gaps. Lengths-based ops do not
have this requirement.

Other pooling methods, e.g., LogExpMean, do not have Lengths-based ops available yet.

Differential Revision: D5019165

fbshipit-source-id: ab01a220e10d4ed9fa2162939579d346607f905e
2017-05-08 01:09:07 -07:00
Chonglin Sun
e8e93066e7 add workflow for user complicated embedding
Summary: Correctly propagate the request_only tag to all layers.

Reviewed By: kennyhorror

Differential Revision: D4751496

fbshipit-source-id: e65fd8cfe56d2989213d44e684a528ede691d316
2017-05-02 10:46:52 -07:00
Kittipat Virochsiri
3b4c950862 Add option to use id_score_list_features column
Summary: Somehow, feed non-ranking training data usually has this type of column. Add an option to support it.

Reviewed By: xianjiec, kennyhorror

Differential Revision: D4773960

fbshipit-source-id: 5a7ef4618a070e04f3cd8ddfcbf2b7441c00d92d
2017-04-03 17:03:09 -07:00
Ou Jin
cd4160c894 distributed training for dper2
Summary:
Add distributed training to dper2 and keep dper1 working.

* Created a ModelDelegator to wrap ModelHelper and LayerModelHelper to mitigate the differences.
* To get the average length for sparse features, I extracted some information in feature_processor. There should be a better way to do it after we have the new compute_meta.
* Metrics right now only run on the first trainer.
* The model is saved correctly for evaluation. But I'm still not sure how to handle the weights for adagrad.

Reviewed By: kennyhorror

Differential Revision: D4767745

fbshipit-source-id: 0559d264827a7fd9327071e8367d1e84a936bea9
2017-03-30 19:04:50 -07:00
Aaron Markham
58f7f2b441 doxygen python block added
Summary: Closes https://github.com/caffe2/caffe2/pull/226

Differential Revision: D4793550

Pulled By: JoelMarcey

fbshipit-source-id: cc33e58186304fa8dcac2ee9115dcc271d785b1e
2017-03-29 06:46:16 -07:00
Xianjie Chen
95501a0165 clean old unit test, add sum processor and sqrt pooling
Summary: the sum processor and sqrt pooling are meant to mimic the DoubleHelix model.

Differential Revision: D4678413

fbshipit-source-id: fc1ccfe3c92c540ce5914dfd8ff1a040805c48db
2017-03-08 23:04:19 -08:00
Chonglin Sun
7472631e7f fix bug in Mean pooling
Summary: simple fix

Reviewed By: xianjiec

Differential Revision: D4655469

fbshipit-source-id: 6dbcfcd2f3f7f7bd74aca88af4f60c6ddffb9138
2017-03-06 11:31:10 -08:00
Artem Volkhin
000db87bc7 Half-floats support for the rest of segment ops
Summary:
Previously the fp16 type was supported only in the SparseLengthsSum operator; now it
works in all other segment operators as well.

Reviewed By: dzhulgakov

Differential Revision: D4624312

fbshipit-source-id: c9d72110e3762167270bb088405eaf9c56e88493
2017-02-28 11:19:15 -08:00
Artem Volkhin
45e1905722 add support of fp16 to SparseLengthsSum and SparseLengthsMean
Summary: Another part of making DPER compatible with half-floats. This diff adds support for fp16 to the segment reduction operators used in DPER.

Reviewed By: dzhulgakov

Differential Revision: D4587560

fbshipit-source-id: 0ae10648a7286a820bffaee802464dd9464584bc
2017-02-22 11:05:55 -08:00
Artem Volkhin
b2cf0fad15 Convert SparseLookup layer's embedding to fp16 blobs for predictor
Summary:
First part of adding half-floats support to DPER 2.0. Let's add an option use_half_floats to enable converting some weights of the model from fp32 to fp16 before saving them to the predictor model parts. For now it's for the SparseLookup layer's embeddings. All conversion is done after training is finished, and the saved models are ready to be used on remote predictors as-is (they will be stored compacted in memory). New fp16 blobs are saved to the model instead of the original ones, under the same names, so we don't modify the MetaNetDef at all.

Next steps:
1) support on delivery side -- operators working with these blobs should support both float and float16 input types
2) benchmark performance to make sure there is no regression
 a) of serialization
 b) of delivery
3) support realtime training (I'm thinking about adding new pre-publishing net which will be executed each time the realtime trainer stops to publish a new snapshot)

Depends on D4567304

Reviewed By: kennyhorror

Differential Revision: D4571710

fbshipit-source-id: 19967a17d3bd84878d66e8c0ed8c5342bf38d979
2017-02-22 11:05:49 -08:00
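A minimal sketch of the post-training conversion described above, using the Caffe2 Python workspace API; the blob name is a hypothetical placeholder, and the real change hooks into model saving rather than a standalone script:

  import numpy as np
  from caffe2.python import workspace

  blob_name = "sparse_lookup/w"  # hypothetical embedding blob name
  w_fp32 = workspace.FetchBlob(blob_name)
  # Re-feed the same blob name with an fp16 copy so the saved predictor model
  # stores the embedding compacted, without modifying the MetaNetDef.
  workspace.FeedBlob(blob_name, w_fp32.astype(np.float16))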
Xianjie Chen
d0621a2449 NextScopedBlob with well-defined behavior and respect namescope
Summary:
Remove the use of `NextName` in the layer model helper, so that the same function returns a `model_helper` that constructs an identical `Net` when under the same NameScope.

`NextScopedBlob` should only take effect when there is a real name conflict; otherwise it returns a ScopedBlobReference.

This is critical for parameter blobs. In the long run, we need to be able to specify parameter blobs more explicitly (kennyhorror is working on this). This solution works in the short term for, e.g., two-tower sparse nn models.

Reviewed By: kennyhorror

Differential Revision: D4555423

fbshipit-source-id: 2c4b99a61392e5d51aa878f7346466a8f14be187
2017-02-16 17:16:36 -08:00
Andrey Malevich
86fb25cefa Rely on embedding size in split
Summary: As desc.

Differential Revision: D4471823

fbshipit-source-id: 2685c64c22556da1749b3e3e6b21a684a7231e7b
2017-01-27 19:44:31 -08:00
Vsevolod Oparin
5e5486491d Replace Gather + RowMul by SparseLengthsWeightedSum
Summary:
Improving performance using the SparseLengthsWeightedSum operator. Results for my run:
Before:

  8.98474 RowMul
  6.89952 Gather
  0.80991 LengthsSum
  2.02056 SparseLengthsWeightedSum
  Total: 18.71

After:

  1.075 Gather
  6.54999 SparseLengthsWeightedSum
  Total: 7.62

Log of run: P56992396

With skip_backward. Command:

  CLASSPATH=/mnt/vol/gfsetlprocstore-oregon/users/cxj/hivereader-wrapper-1.0-SNAPSHOT-standalone.jar OMP_NUM_THREADS=1 MKL_NUM_THREADS=1 MKL_DYNAMIC=FALSE ./buck-out/gen/caffe2/caffe2/fb/dper/tools/speed_benchmark.par -loader_param /mnt/vol/gfsfblearner-altoona/flow/data/2017-01-22/d832bb7b-5598-422e-9fee-b3299a9c8c1f -negDownsampleRate 0.1 -hidden 'unary(dot{"num_dense": 6, "pooling_method": "PositionWeighted"}(128, 64)128-128, 1)' -model_type mlp_sparse -warmup_runs 10 -main_runs 1000 -run_individual -skip_backward 2>&1 | tee /tmp/log.txt

Before: P56993234$7509
After: P56992503$7344

Command:

  ./fblearner/nn/ads/canary all

https://our.intern.facebook.com/intern/fblearner/details/13320564/?notif_channel=cli

Cloned "caffe2 ads sparse nn canary" run: https://our.intern.facebook.com/intern/fblearner/details/13322337/

Reviewed By: xianjiec

Differential Revision: D4451073

fbshipit-source-id: 0a4e9693d7b8b0372b2efefa61154e987a493210
2017-01-24 20:44:21 -08:00
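A rough sketch of the rewritten pooling net described above, built with the Caffe2 Python net builder; blob names and the exact wiring of the weight Gather are assumptions, not the actual DPER code:

  from caffe2.python import core

  net = core.Net("position_weighted_pooling_sketch")
  # Per-position weights are still gathered once per looked-up id.
  net.Gather(["position_weights", "weight_indices"], "gathered_weights")
  # The explicit RowMul + LengthsSum pair is folded into one fused operator;
  # inputs are DATA, WEIGHTS, INDICES, LENGTHS.
  net.SparseLengthsWeightedSum(
      ["embedding_table", "gathered_weights", "indices", "lengths"],
      "pooled_output",
  )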
Ievgen Soboliev
1632f053e5 implement user-only metadata for input_record
Summary:
We want to implement a request-only net, and to do this we decided to split the work into two parts. The first part will propagate the required metadata and the second part will cut the nets properly.
This diff propagates the request_only metadata across the layers.

A few notes about the implementation:
  - Each layer contains a field request_only which can be set based on the input_record. If all the scalars from the input_record are marked request_only, we mark the layer as request_only;
  - The Sparse-To-Dense layer sets request_only metadata;
  - The SigridTransformation and SparseLookup layers propagate the request_only status;
  - For now we join request_only and other sparse features together in the input_record, but ideally we may want to separate them, because request_only features should be served separately;

Reviewed By: xianjiec

Differential Revision: D4259505

fbshipit-source-id: db8a30ef92cba84f1a843981b9dde3a8b9633608
2016-12-15 12:01:29 -08:00
Xianjie Chen
c70e8115a1 dper_example use RowMul for speed
Summary:
Faster ~65k vs 25k:

After: 11444089
Before: 11259149

Differential Revision: D4275671

fbshipit-source-id: 57de414676799980632c1d29142ee698965b1b68
2016-12-15 12:01:28 -08:00
Xianjie Chen
2045a5de9f add position based weighting
Summary: adding more methods to the layer representation. The corresponding implementation in DPER is: https://fburl.com/563869364

Differential Revision: D4256583

fbshipit-source-id: 91326b7bb9e960a5bc70b5a13812fce90054eceb
2016-12-05 11:53:26 -08:00
Yangqing Jia
589398950f fbsync at f5a877 2016-11-18 15:41:06 -08:00
Yangqing Jia
238ceab825 fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00