Commit Graph

23 Commits

Author SHA1 Message Date
Jamie King
1dfb687f3c Fixed off-by-one bug in Adam Smart Decay (#62135)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62135

The initial implementation of Adam with Smart Decay had an off-by-one error.  This was in the summation of the geometric series used to calculate how much built-up momentum would have been discharged in skipped minibatches.

The unit tests should have caught this, but the testing strategy missed it because k, the number of skipped minibatches, was always either 0 or so high that the impact of the bug was too small to detect; the impact of the bug is proportional to 1/k.  The testing strategy has been adjusted to cover this case.
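
To make the 1/k behavior concrete, here is a small, purely illustrative calculation (Python; not the operator code, and the exact summation bounds used in Caffe2 may differ):

```
# Illustrative only: relative impact of an off-by-one in the geometric series
# sum_{j=1..k} beta1^j used to estimate the momentum discharged over k skipped
# minibatches. The bounds chosen here are an assumption for illustration.
beta1 = 0.999

def series(lo, hi):
    return sum(beta1 ** j for j in range(lo, hi + 1))

for k in (2, 5, 10, 100):
    correct = series(1, k)        # k terms
    off_by_one = series(0, k)     # k + 1 terms: one spurious full-strength term
    rel_err = (off_by_one - correct) / correct
    print(k, round(rel_err, 4))   # shrinks roughly like 1/k when beta1 is near 1
```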

Differential Revision: D29889309

fbshipit-source-id: b086c0efed5c27f621061e726533c73658daffc6
2021-07-26 11:55:38 -07:00
Jamie King
c23db9327a Smart Decay for Adam - Caffe2 (#61548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61548

We want to decay learning parameters properly.  Previously this was not done when a parameter was absent from a minibatch.  We fix this by keeping track of missed minibatches and making the decay catch up accordingly.

The exponential moving averages (EMAs) for the first and second moments used in Adam are updated only for parameters seen in a minibatch.  Ideally, for the parameters absent from a minibatch, 0 would be added to the EMAs and the EMAs would then be decayed by multiplying by beta1 and beta2, respectively.

To avoid the computational overhead of touching every parameter for every minibatch, we:
* keep track of the last time each parameter was seen
* instead of decaying the EMAs by multiplying by beta1 and beta2, multiply by beta1^k and beta2^k, where k is the number of minibatches since the parameter was last seen
* calculate the amount of momentum that would have been discharged over the missed minibatches and update the weight accordingly (sketched below)
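
A minimal sketch of this bookkeeping, assuming a dense NumPy representation, no bias correction, and a fixed second moment over the skipped steps; names like `last_seen` are illustrative and this is not the Caffe2 operator code:

```
import numpy as np

def sparse_adam_smart_decay_step(w, m, v, rows, grads, step, last_seen,
                                 lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """Update only the rows in this minibatch, catching up the decay (and the
    momentum that would have been discharged) for minibatches they missed."""
    for r, g in zip(rows, grads):
        skipped = step - last_seen[r] - 1        # minibatches where row r was absent
        if skipped > 0:
            # Momentum that would have been discharged over the skipped steps,
            # summed as a geometric series (second moment held fixed here as a
            # simplification). The bounds of this sum are exactly where the
            # off-by-one fixed in #62135 would live.
            discharge = beta1 * (1 - beta1 ** skipped) / (1 - beta1)
            w[r] -= lr * discharge * m[r] / (np.sqrt(v[r]) + eps)
            # Catch up the EMAs as if the row had received `skipped` zero gradients.
            m[r] *= beta1 ** skipped
            v[r] *= beta2 ** skipped
        # Ordinary Adam step (bias correction omitted) for the current minibatch.
        m[r] = beta1 * m[r] + (1 - beta1) * g
        v[r] = beta2 * v[r] + (1 - beta2) * g * g
        w[r] -= lr * m[r] / (np.sqrt(v[r]) + eps)
        last_seen[r] = step
```

Multiplying the EMAs by beta1^k and beta2^k is exact here because an absent parameter contributes a zero gradient at each skipped step.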

Differential Revision: D29654246

fbshipit-source-id: 7a6cd7966eb1f31116d99dfce79a78b2d3ee9e3e
2021-07-14 10:22:38 -07:00
Nikita Shulga
f291b1899f Revert D27978269: Smart Decay for Adam - Caffe2
Test Plan: revert-hammer

Differential Revision:
D27978269 (aaa1e07609)

Original commit changeset: e47524101ddf

fbshipit-source-id: 334824bbf9a6ed788e75af9c292754081f70a19b
2021-07-10 13:09:58 -07:00
Jamie King
aaa1e07609 Smart Decay for Adam - Caffe2 (#61488)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61488

We want to decay learning parameters properly.  Previously this was not done when a parameter was absent from a minibatch.  We fix this by keeping track of missed minibatches and making the decay catch up accordingly.

The exponential moving averages (EMAs) for the first and second moments used in Adam are updated only for parameters seen in a minibatch.  Ideally, for the parameters absent from a minibatch, 0 would be added to the EMAs and the EMAs would then be decayed by multiplying by beta1 and beta2, respectively.

To avoid the computational overhead of touching every parameter for every minibatch, we:
* keep track of the last time each parameter was seen
* instead of decaying the EMAs by multiplying by beta1 and beta2, multiply by beta1^k and beta2^k, where k is the number of minibatches since the parameter was last seen.

Differential Revision: D27978269

fbshipit-source-id: e47524101ddfcb281c46c505b9b7a8f0835bc64a
2021-07-09 18:28:21 -07:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Bugra Akyildiz
27c7158166 Remove __future__ imports for legacy Python2 supports (#45033)
Summary:
The `2to3` tool has a `future` fixer that specifically removes these imports; the `caffe2` directory has the most of these redundant imports:

```2to3 -f future -w caffe2```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033

Reviewed By: seemethere

Differential Revision: D23808648

Pulled By: bugra

fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
Yan Xie
bdd9ef1981 Support RowWiseSparseAdam on GPU (#35404)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35404

Implement RowWiseSparseAdam on CUDA

Reviewed By: xw285cornell

Differential Revision: D20650225

fbshipit-source-id: 5f871e2f259e362b713c9281b4d94534453995cf
2020-07-31 10:47:29 -07:00
Edward Yang
d927d58c2a Revert D20289209: Support RowWiseSparseAdam on GPU
Test Plan: revert-hammer

Differential Revision:
D20289209

Original commit changeset: a7a8a21bd18c

fbshipit-source-id: 4a8ae684d099a5499c28b7e65578fc7ab10b248d
2020-03-18 07:35:07 -07:00
Yan Xie
959a7138fd Support RowWiseSparseAdam on GPU (#34341)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34341

Implement RowWiseSparseAdam on CUDA

Reviewed By: xianjiec

Differential Revision: D20289209

fbshipit-source-id: a7a8a21bd18c1b9891f04f202d3ecaf183e30cad
2020-03-17 15:08:24 -07:00
Jongsoo Park
5f8e849d84 eliminate FE_INVALID in optimizer related operators and tests (#20501)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20501

Fix FE_INVALID issues in optimizer-related operators and their unit tests.

Reviewed By: hx89

Differential Revision: D15307410

fbshipit-source-id: e5400c26e08f26191ee542fe6b02e0a69bc4e1ae
2019-05-16 08:23:46 -07:00
Chenguang Xi
cdefc27795 Support lr adaption for SparseAdam and RowWiseSparseAdam (#11162)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11162

As titled; also fixes the PR test failure.

Reviewed By: chocjy

Differential Revision: D9619308

fbshipit-source-id: 0a2228841ed8fadb15f07e94d3575aa701b10146
2018-09-17 10:29:03 -07:00
Edward Yang
3073051a18 Revert D9554375: Support lr adaption for SparseAdam and RowWiseSparseAdam
Differential Revision:
D9554375

Original commit changeset: b88768f470ef

fbshipit-source-id: 2c103c616c8680684892c7d9085fd7bb8289d2f1
2018-08-31 07:54:31 -07:00
Chenguang Xi
0555768e0f Support lr adaption for SparseAdam and RowWiseSparseAdam (#10993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10993

as title

Reviewed By: chocjy

Differential Revision: D9554375

fbshipit-source-id: b88768f470ef7d023dd481c6a97b91594892f422
2018-08-31 00:55:39 -07:00
bddppq
f94ae3ba1d Update from facebook (#7696)
* Fix handling of empty batches in SumReduceDimsOp

As titled

* Deferrable async_scheduling finishRun fix

Proper order of finishing run operations in deferrable_async_scheduling net

* Simplify exception handling in async_scheduling

Simplify exception handling: there is no need to busy-wait, since the thread that processes the
last task can finish the run

* [C2]worker_coordinator_memorize_worker_ids

As titled. This is related to T28689868, where the number of blobs we want to create is equal to the number of worker ids

* Add unit test for nets with no type set

* Ignore the total_length argument in the pad_packed_sequence symbolic function

1. There was a mistake in the code: total_length was added to the wrong symbolic function (pack_padded_sequence) instead of pad_packed_sequence.
2. There is no need to throw an exception if total_length is given, since it is only used to enable data-parallel training on multiple GPUs and has nothing to do with ONNX export, so just ignore it. https://fburl.com/tk4gciqp

* Add support for MKLDNN to async_scheduling

Just add MKLDNN as a possible CPU option to async_scheduling's pool function

* [AuFL][ensemble] support branch output for prediction

This diff supports using predictions from different branches and thus enables model ensembling (not fully independent).

* Fix a bug in add_loss in layer_model_helper

As titled.

* Support lradaption for adam

1. lr adaption operator
2. apply to dense adam

* Perf tweaks for async_scheduling

Restore single pool option + remove unnecessary (no-ops) calls

* add quantization to SparseSimdAdagradOp

add a bunch of quantization signatures to SparseSimdAdagradOp, implementations to come next

* [sr] [codemod] Change all SR callsites to use new API

@allow-large-files

This diff refactors all callsites of SR to use the slightly changed API introduced in the diff below. In practice this means you need to include the correct header, and if you were using `ClientFactory::newFactory` you should no longer prefix it with `ClientFactory::`.

```
cd ~/fbsource/fbcode
find ./ -type f -exec sed -i -e 's:#include "servicerouter/client/cpp2/ClientFactory.h":#include "servicerouter/client/cpp2/ServiceRouter.h":' -e 's:#include <servicerouter/client/cpp2/ClientFactory.h>:#include <servicerouter/client/cpp2/ServiceRouter.h>:' -e 's/ClientFactory::newFactory(/newFactory(/g' {} \;
```

Also manually fixed spots that couldn't be done automatically (or broke because they depended on transitive includes).

* Back out "Fix handling of empty batches in SumReduceDimsOp"

Original commit changeset: 282da1730cc2

This commit is blocking the GitHub->fbcode sync, which really needs to get merged ASAP. D7881937, which this diff depends on, will be reverted in the sync D7990948, which causes this to break. The sync diff cannot be patched with this reversion because it must be landed against base revision 5c8c099, and D7881937 must not be included in the sync diff because it is breaking GPU tests that are not available in Sandcastle; see https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-cuda8.0-cudnn6-ubuntu16.04-test/3638/console for one example.

* Add the flow to support operator benchmark

1) generate a model with the operator
2) upload it to Everstore
3) generate the model spec into a JSON file
4) run the benchmark

* [tum][gpu] Connect DPM trainer with flow and unit tests

This diff:
- Fixes some small bugs in Yiming's recent changes to the parallelizer, so it suits real use cases.
- Adds the correct tags to the TUM code, so we can do the data-parallel transform.
- Passes extra info at instantiation.
- Adds a unit test for using DPM in the TUM model.

After this diff, we can run a simple-box, multi-GPU, fully-synced trainer for TUM in an FBLearner workflow, but we may still need to do speed benchmarking.

* w/o normalized lradaption for adam dense only

The previous lr adaption included a normalization step when performing the dot product operation, which is not exactly the same as what is proposed in the paper. This adds normalization as an option: without it, the operator performs exactly what the paper proposed; with the option, the normalization step is added.
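
A hedged sketch of the mechanism as described here (the actual Caffe2 operator's argument names, update rule, and sign convention may differ; this only illustrates the dot product with and without the normalization option):

```
import numpy as np

def adapt_lr(lr, grad, effective_grad, lr_alpha=1e-4, normalized=True):
    """Nudge the learning rate using the dot product of the current gradient and
    an 'effective gradient' from the previous step; `normalized` corresponds to
    the optional normalization step discussed above. The sign convention here is
    an assumption for illustration."""
    dot = float(np.dot(grad, effective_grad))
    if normalized:
        dot /= np.linalg.norm(grad) * np.linalg.norm(effective_grad) + 1e-12
    return lr + lr_alpha * dot
```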

* [fb] Use SharedPromise in DeferrableAsyncSchedulingNet

This simplifies DeferrableAsyncSchedulingNet by removing a condition variable, plus small fixes.

* [tum] implement cuda sparseLengthsMean and LengthsMean

as title

* Adding an optional parameter to allow use of protobufs in InferShapesAndTypes function.

Adding an optional parameter to allow use of protobufs in InferShapesAndTypes function.

* Move feature_to_index to FeatureSpec.feature_to_index

Move feature_to_index to FeatureSpec.feature_to_index to avoid overriding other fields.

* [Caffe2] Rename bytes_moved to bytes_written

Just a rename in preparation for supporting bytes_read.

* [c2] fix ReduceFrontSumOp for empty case by setting 0

Otherwise, it may use the results from the last iteration when the batch is empty.

* [Caffe2] [Int8] Improve Intel CPU performance

* [Easy] Improve PrependDim op logging

as titled

* DBFileReader expand db_path using os.path.expanduser(..)

Since there are many use cases where `DBFileReader` reads from a path under the user's home directory, like `~/local/sample.db`, this saves people the trouble of calling `os.path.expanduser(db_path)` themselves.
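
In effect (a trivial sketch, not the `DBFileReader` code itself):

```
import os

db_path = "~/local/sample.db"
# The reader can now be handed the "~" form directly; previously the caller
# had to expand it first.
print(os.path.expanduser(db_path))   # e.g. /home/<user>/local/sample.db
```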

* [Caffe2] Add bytes_read to cost structure

We're adding analytical read bytes to the cost functions.  This extends the structure accordingly for all operators with CostInference defined.
Additionally, some small bug fixes were performed:
1) Cost functions now extract the type information of operands instead of assuming float.

* Fix sleef on aarch64 for hhvm

@bypass-lint

Rename flag

* Remove duplicated part in caffe2/ideep/operators/conv_op.cc

This appears to be a sync error.

* Rename test helper function test_adagrad_sparse_helper to adagrad_sparse_test_helper to avoid confusing pytest
2018-05-19 23:10:48 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Frank Jiang
c809d89810 Fix RowWiseSparseAdam implementation
Summary: The original implementation averaged the momentum across the embedding dimensions, which doesn't make sense: every embedding dimension received the same update, effectively turning the row into a very memory-expensive one-dimensional embedding.
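
A small illustration (not the Caffe2 kernel) of why averaging the first moment over a row's dimensions collapses the row to a single update direction:

```
import numpy as np

g_row = np.array([0.4, -0.2, 0.1, -0.1])   # gradient for one embedding row
beta1 = 0.9

# Buggy: a single momentum scalar per row, so all dimensions move identically.
m_scalar = (1 - beta1) * np.mean(g_row)
buggy_update = np.full_like(g_row, m_scalar)

# Fixed: per-element momentum, so each dimension keeps its own direction.
m_vector = (1 - beta1) * g_row
fixed_update = m_vector

print(buggy_update)   # the same value in every slot
print(fixed_update)
```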

Differential Revision: D7003135

fbshipit-source-id: ed54e3427bc13895a4e949e96b4b17f6ebfb6d53
2018-02-16 13:28:26 -08:00
Frank Jiang
304e607b70 Fix adam test
Reviewed By: pietern

Differential Revision: D6787780

fbshipit-source-id: a2d1428b0e028d6f3d8f7c312c90f3fa411cd0a2
2018-01-25 12:59:54 -08:00
Frank Jiang
6f0bb28afb Stop running RowWiseSparseAdam test on GPU
Reviewed By: pietern

Differential Revision: D6739194

fbshipit-source-id: 0892cdc6a575a84147f86984c67e7b4bf605a197
2018-01-17 15:05:21 -08:00
Frank Jiang
61356cbadc RowWiseSparseAdam operator
Summary: Added the RowWise functionality for SparseAdam, which saves roughly 2/3 of the memory usage by keeping only one first- and one second-moment term for each row of the parameter tensor, rather than one for each individual parameter.
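
A back-of-the-envelope check of the "roughly 2/3" figure as this commit describes it, using illustrative sizes for an N x D parameter tensor:

```
N, D = 1_000_000, 64                 # rows x embedding dimension (assumed sizes)

param_elems = N * D
adam_state = 2 * N * D               # per-element first and second moments
rowwise_state = 2 * N                # one first- and one second-moment value per row

full = param_elems + adam_state      # = 3 * N * D elements with plain SparseAdam
rowwise = param_elems + rowwise_state

print(1 - rowwise / full)            # -> about 0.66, i.e. roughly 2/3 saved
```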

Differential Revision: D6679342

fbshipit-source-id: ce6fb27e35ce41a890c66f6089cd2748d10e7a44
2018-01-16 19:39:31 -08:00
Pieter Noordhuis
9835ca9bac Ensure indices list in sparse optimizer tests is unique
Summary:
There were no dimensionality constraints on the generated indices
array, causing many examples to be generated and then filtered out. Instead,
we should ensure that the probability of unique indices is high.

There is a better fix for this by using the `unique` keyword argument
to `hypothesis.extra.numpy.arrays`, but this is available only in
hypothesis version 3.28.0 and later.
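
For reference, a sketch of that better fix (assumes hypothesis >= 3.28.0; the dtype and bounds here are illustrative, not the values used in the Caffe2 tests):

```
import numpy as np
import hypothesis.strategies as st
from hypothesis.extra.numpy import arrays

# Generate index arrays whose entries are guaranteed distinct, instead of
# generating freely and filtering out duplicates.
unique_indices = arrays(
    dtype=np.int64,
    shape=st.integers(min_value=1, max_value=20),
    elements=st.integers(min_value=0, max_value=10_000),
    unique=True,
)
```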

This is related to #1536 and #1599.

Once this change has proven to be OK, we can modify the other tests
that now have health check suppression enabled as well.
Closes https://github.com/caffe2/caffe2/pull/1686

Reviewed By: Yangqing

Differential Revision: D6651789

Pulled By: pietern

fbshipit-source-id: d80886c9ccf0a7a842a7580a279f33a2d6cca97c
2018-01-03 12:19:14 -08:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Luke Yeager
a47652379f Fix SparseAdagrad for indices.ndim>1
Summary:
Same fix as https://github.com/caffe2/caffe2/pull/249, but for SparseAdagrad.

Also update the tests for both ops to test this functionality.
Closes https://github.com/caffe2/caffe2/pull/675

Differential Revision: D5148750

Pulled By: akyrola

fbshipit-source-id: d30b722429bc547fd53400c1a29e4ee9e2e6ed18
2017-05-30 12:02:18 -07:00
Luke Yeager
8bd0522c20 Add tests and GPU impls for sparse optimizers
Summary:
These GPU paths are probably even buggier than the CPU paths for sparse gradients with duplicate indices. Both paths cause multiple momentum updates in a single iteration, but only the GPU path is non-deterministic. Depending on how we decide to address the issues on the CPU path, pooyadavoodi has a good idea for how to match dense behavior with the sparse GPU ops.
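
To make the duplicate-index issue concrete, a small sketch (not the Caffe2 kernels): with a repeated index, the momentum slot is updated once per occurrence, whereas a dense-equivalent path would sum the duplicate gradients first; on GPU the order of the per-occurrence updates is additionally non-deterministic.

```
import numpy as np

beta1 = 0.9
indices = np.array([3, 3, 7])             # index 3 appears twice in one minibatch
grads = np.array([0.5, 0.1, -0.2])

# Sparse path: one EMA update per occurrence of the index.
m_sparse = np.zeros(10)
for i, g in zip(indices, grads):
    m_sparse[i] = beta1 * m_sparse[i] + (1 - beta1) * g

# Dense-equivalent path: duplicate gradients are accumulated before one update.
g_dense = np.zeros(10)
np.add.at(g_dense, indices, grads)
m_dense = beta1 * np.zeros(10) + (1 - beta1) * g_dense

print(m_sparse[3], m_dense[3])            # 0.055 vs 0.06 -- the two paths disagree
```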
Closes https://github.com/caffe2/caffe2/pull/254

Reviewed By: bwasti

Differential Revision: D4871680

Pulled By: dzhulgakov

fbshipit-source-id: 220be57a0f699a22ea85ed4f7022d92d362d06b3
2017-04-13 11:07:40 -07:00