Mirror of https://github.com/zebrajr/pytorch.git (synced 2025-12-07 12:21:27 +01:00)
Summary: Support attention weights input to SparseLookup. In attention-sum pooling, if the attention weights can be pre-calculated before the embedding lookup, they can be passed to SparseLookup and processed by the SparseLengthsWeightedSum op. One example is id_score attention-sum pooling. Essentially the net is converted from:

LengthsSum(Mul(Gather(keys, w), att_weight))

to:

SparseLengthsWeightedSum(keys, w, att_weight)

This unblocks a potential efficiency gain with distributed training.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/26748

Test Plan: unit test

Reviewed By: chocjy

Differential Revision: D17553345

Pulled By: wheatkit

fbshipit-source-id: 60cc3c4b0bc1eade5459ac598e85286f3849a412
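The rewrite described in the summary can be sketched at the operator level. This is a minimal sketch, not code from the PR: the blob names, shapes, and the `broadcast`/`axis` arguments to `Mul` are illustrative assumptions. It only checks that the fused `SparseLengthsWeightedSum` op produces the same pooled output as the `Gather` → `Mul` → `LengthsSum` composition it replaces.

```python
# Sketch of the net conversion described above (hypothetical blob names/shapes).
import numpy as np
from caffe2.python import core, workspace

# Embedding table `w`, sparse ids `keys`, per-id attention weights `att_weight`,
# and `lengths` giving how many ids belong to each example.
workspace.FeedBlob("w", np.random.randn(10, 4).astype(np.float32))
workspace.FeedBlob("keys", np.array([1, 3, 5, 2], dtype=np.int64))
workspace.FeedBlob("att_weight", np.array([0.2, 0.8, 0.5, 0.5], dtype=np.float32))
workspace.FeedBlob("lengths", np.array([2, 2], dtype=np.int32))

# Before: gather embeddings, scale each row by its attention weight, pool per example.
before = core.Net("before")
gathered = before.Gather(["w", "keys"], "gathered")
weighted = before.Mul([gathered, "att_weight"], "weighted", broadcast=1, axis=0)
before.LengthsSum([weighted, "lengths"], "pooled_before")

# After: a single fused op does the weighted lookup-and-pool.
after = core.Net("after")
after.SparseLengthsWeightedSum(
    ["w", "att_weight", "keys", "lengths"], "pooled_after"
)

workspace.RunNetOnce(before)
workspace.RunNetOnce(after)
np.testing.assert_allclose(
    workspace.FetchBlob("pooled_before"),
    workspace.FetchBlob("pooled_after"),
    rtol=1e-5,
)
```

With weights precomputed before the lookup, the fused form avoids materializing the gathered and elementwise-scaled intermediates, which is where the summary's efficiency gain in distributed training would come from.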
| Name |
|---|
| __init__.py |
| adaptive_weight.py |
| add_bias.py |
| arc_cosine_feature_map.py |
| batch_huber_loss.py |
| batch_lr_loss.py |
| batch_mse_loss.py |
| batch_normalization.py |
| batch_sigmoid_cross_entropy_loss.py |
| batch_softmax_loss.py |
| blob_weighted_sum.py |
| bpr_loss.py |
| bucket_weighted.py |
| build_index.py |
| concat.py |
| constant_weight.py |
| conv.py |
| dropout.py |
| fc_without_bias.py |
| fc.py |
| feature_sparse_to_dense.py |
| functional.py |
| gather_record.py |
| homotopy_weight.py |
| label_smooth.py |
| last_n_window_collector.py |
| layer_normalization.py |
| layers.py |
| margin_rank_loss.py |
| merge_id_lists.py |
| pairwise_similarity.py |
| position_weighted.py |
| random_fourier_features.py |
| reservoir_sampling.py |
| sampling_train.py |
| sampling_trainable_mixin.py |
| select_record_by_context.py |
| semi_random_features.py |
| sparse_dropout_with_replacement.py |
| sparse_feature_hash.py |
| sparse_lookup.py |
| split.py |
| tags.py |
| uniform_sampling.py |