Commit Graph

1008 Commits

Author SHA1 Message Date
Alexander Golynski
25e07c6e91 Revert D27422219: [caffe2] Support Log1p operator
Test Plan: revert-hammer

Differential Revision:
D27422219 (d92e2520de)

Original commit changeset: f9eba82bf09c

fbshipit-source-id: 7cd5b778ae5f296187f57b6efaa782de97a6f013
2021-03-31 06:03:45 -07:00
Oleg Khabinov
d92e2520de [caffe2] Support Log1p operator (#54968)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54968

Support Log1p operator to add feature parity with PyTorch.

NumPy doc https://numpy.org/doc/stable/reference/generated/numpy.log1p.html
PyTorch doc https://pytorch.org/docs/stable/generated/torch.log1p.html
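For context, log1p computes log(1 + x) while staying accurate for small x, where naively forming 1 + x loses precision; a quick NumPy illustration (not the Caffe2 kernel):
```
import numpy as np

x = np.float64(1e-10)

print(np.log(1.0 + x))  # ~1.000000083e-10 -- rounding error from forming 1 + x
print(np.log1p(x))      # ~1e-10 -- accurate for small x
```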

Test Plan:
```
$ buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:log1p_op_test
```

Differential Revision: D27422219

fbshipit-source-id: f9eba82bf09c1c440f11a33f8ae2bf8084609457
2021-03-30 16:38:37 -07:00
Sam Estep
5bcbbf5373 Lint trailing newlines (#54737)
Summary:
*Context:* https://github.com/pytorch/pytorch/issues/53406 added a lint for trailing whitespace at the ends of lines. However, in order to pass FB-internal lints, that PR also had to normalize the trailing newlines in four of the files it touched. This PR adds an OSS lint to normalize trailing newlines.

The changes to the following files (made in 54847d0adb9be71be4979cead3d9d4c02160e4cd) are the only manually-written parts of this PR:

- `.github/workflows/lint.yml`
- `mypy-strict.ini`
- `tools/README.md`
- `tools/test/test_trailing_newlines.py`
- `tools/trailing_newlines.py`

I would have liked to make this just a shell one-liner like the other three similar lints, but nothing I could find quite fit the bill. Specifically, all the answers I tried from the following Stack Overflow questions were far too slow (at least a minute and a half to run on this entire repository):

- [How to detect file ends in newline?](https://stackoverflow.com/q/38746)
- [How do I find files that do not end with a newline/linefeed?](https://stackoverflow.com/q/4631068)
- [How to list all files in the Git index without newline at end of file](https://stackoverflow.com/q/27624800)
- [Linux - check if there is an empty line at the end of a file [duplicate]](https://stackoverflow.com/q/34943632)
- [git ensure newline at end of each file](https://stackoverflow.com/q/57770972)

To avoid giving false positives during the few days after this PR is merged, we should probably only merge it after https://github.com/pytorch/pytorch/issues/54967.
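For illustration, a fast pure-Python check only needs to read the last two bytes of each file; a minimal sketch of the idea (not the actual `tools/trailing_newlines.py`):
```
import os

def missing_trailing_newline(path: str) -> bool:
    """Return True if a non-empty file does not end in exactly one newline."""
    size = os.path.getsize(path)
    if size == 0:
        return False  # empty files are fine
    with open(path, "rb") as f:
        f.seek(-min(size, 2), os.SEEK_END)
        tail = f.read()
    # Must end with '\n' but not with a blank line ('\n\n').
    return not tail.endswith(b"\n") or tail.endswith(b"\n\n")
```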

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54737

Test Plan:
Running the shell script from the "Ensure correct trailing newlines" step in the `quick-checks` job of `.github/workflows/lint.yml` should print no output and exit in a fraction of a second with a status of 0. That was not the case prior to this PR, as shown by this failing GHA workflow run on an earlier draft of this PR:

- https://github.com/pytorch/pytorch/runs/2197446987?check_suite_focus=true

In contrast, this run (after correcting the trailing newlines in this PR) succeeded:

- https://github.com/pytorch/pytorch/pull/54737/checks?check_run_id=2197553241

To unit-test `tools/trailing_newlines.py` itself (this is run as part of our "Test tools" GitHub Actions workflow):
```
python tools/test/test_trailing_newlines.py
```

Reviewed By: malfet

Differential Revision: D27409736

Pulled By: samestep

fbshipit-source-id: 46f565227046b39f68349bbd5633105b2d2e9b19
2021-03-30 13:09:52 -07:00
Lanlan Liu
695eef05a4 optimizer exploration - v1 and v2 + fix position_weighted optimizer + decoupled weight decay (#54042)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54042

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53881

1. Fix position_weighted optimizer: The position-weighted layer uses the default optimizer but its gradient is actually a gradient_slice, which will cause problems if we do not handle it properly in the new optimizer. The solution is to use SparseAdagrad when the gradient is a gradient_slice.
2. Optimizer implementation of v1 and v2: using 1st momentum with/without bias_correction.
3. Also implemented decoupled weight decay in the new optimizer (a minimal sketch follows).
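A minimal sketch of that decoupled update, using a plain first-momentum step for concreteness (the optimizer internals here are assumptions, not the dper code):
```
import numpy as np

def step_decoupled_wd(w, grad, m, lr=0.01, beta=0.9, weight_decay=1e-4):
    """One update with first momentum and decoupled weight decay (sketch).

    The decay is applied directly to the weights rather than added to the
    gradient, so it is not scaled by the momentum machinery.
    """
    m = beta * m + (1.0 - beta) * grad   # first-momentum accumulator
    w = w - lr * m                       # gradient step
    w = w - lr * weight_decay * w        # decoupled decay term
    return w, m
```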

Test Plan:
buck test //caffe2/caffe2/fb/dper/layer_models/tests/split_1:sparse_nn_test_2 -- test_mlp_optimization

buck test //caffe2/caffe2/python:optimizer_test -- TestDecayAdagrad

buck test //caffe2/caffe2/python/operator_test:decay_adagrad_test

ctr_mbl_feed work flow: f255731660
oc work flow: f255739503

Reviewed By: 0x10cxR1

Differential Revision: D26839668

fbshipit-source-id: 2b6881c1a88540ef5766be40f5e80001257e2199
2021-03-27 23:03:29 -07:00
Adam Simpkins
87989a6cf9 [caffe2] support serializing float data as bfloat16 (#53735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53735

Add an option to BlobSerializationOptions to request that float data be
serialized as bfloat16.  This reduces the serialized data size at the expense
of some loss in precision.
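For context, bfloat16 keeps float32's 8-bit exponent but only 8 bits of significand, so a round-trip can be sketched by truncating the low 16 bits of each float32 (illustrative only, not the serializer's code):
```
import numpy as np

def float_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to their top 16 bits (bfloat16), illustrative."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bfloat16_bits_to_float(b: np.ndarray) -> np.ndarray:
    """Expand stored bfloat16 bit patterns back to float32."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159], dtype=np.float32)
print(bfloat16_bits_to_float(float_to_bfloat16_bits(x)))  # ~3.140625, precision lost
```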
ghstack-source-id: 124317910

Test Plan: Included a new unit test.

Reviewed By: mraway

Differential Revision: D26658205

fbshipit-source-id: 74521ed161059066355a3f208488ed01a344dbb5
2021-03-24 13:27:22 -07:00
Chester Liu
f6df18f6ca Clean up future imports for Python 2 (#53349)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53349

Reviewed By: malfet

Differential Revision: D27039089

Pulled By: bugra

fbshipit-source-id: 8063dc184248604506a8dbb1bcb73da8ec85bb18
2021-03-14 15:56:13 -07:00
Adam Simpkins
7e5ffbfa94 [caffe2] add a SerializationOptions field for the save operator (#53402)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53402

Add an `options` field to the `Save` operator which accepts options for how to
serialize different blobs.  At the moment this simply allows controlling the
existing `chunk_size` behavior, but in the future we can add other options,
such as the ability to control compression settings or other serialization
formats.
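As a rough usage sketch of the pre-existing chunked save path (hedged; treat the exact argument set as an assumption):
```
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("w", np.random.rand(1000, 16).astype(np.float32))

# Sketch: the existing `chunk_size` argument controls how many elements
# go into each serialized chunk of the blob.
save_op = core.CreateOperator(
    "Save", ["w"], [],
    db="/tmp/blobs.minidb", db_type="minidb",
    absolute_path=1, chunk_size=100,
)
workspace.RunOperatorOnce(save_op)
```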
ghstack-source-id: 123567034

Test Plan:
Added a new test to `load_save_test.py` that passes in options and verifies
that blobs were serialized with the expected number of chunks.

  buck test caffe2/caffe2:caffe2_test_cpu \
    caffe2/caffe2/core:serialization_test \
    caffe2/caffe2/python/operator_test:load_save_test

Reviewed By: mraway

Differential Revision: D26502577

fbshipit-source-id: 6e302e530bb96990517c2e35c505db7f14a56284
2021-03-11 13:02:58 -08:00
Adam Simpkins
023948e6d7 [caffe2] update load_save_test.py to also verify the chunking behavior (#53401)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53401

This is a reland of D26641599 (cd9ac54ea7) after rebasing onto D26802576 (f595ba1bae).

Add some small utility functions to read the blob names back from the minidb
file so that we can verify how many chunks were written for each blob.
ghstack-source-id: 123567033

Test Plan: buck test caffe2/caffe2/python/operator_test:load_save_test

Reviewed By: mraway

Differential Revision: D26853942

fbshipit-source-id: 0b45078fdd279f547752c8fdb771e296374a00da
2021-03-10 15:29:36 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Nikita Shulga
68810c1836 Delete test_rand_quantization (#53234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53234

Test has been permanently skipped since Nov 2019, see https://github.com/pytorch/pytorch/pull/29463

Test Plan: CI

Reviewed By: mruberry

Differential Revision: D26802660

fbshipit-source-id: ea66be1afd4d7cfbe692594df5d9dd8c29bc5d23
2021-03-03 20:59:00 -08:00
Natalia Gimelshein
69b2d5c7c3 Revert D26641599: [caffe2] update load_save_test.py to also verify the chunking behavior
Test Plan: revert-hammer

Differential Revision:
D26641599 (cd9ac54ea7)

Original commit changeset: bccb0af157d8

fbshipit-source-id: 9fe35382876d19aefd16496bf8f920e12aa6f169
2021-02-25 21:30:36 -08:00
Adam Simpkins
cd9ac54ea7 [caffe2] update load_save_test.py to also verify the chunking behavior
Summary:
Add some small utility functions to read the blob names back from the minidb
file so that we can verify how many chunks were written for each blob.

Test Plan: buck test caffe2/caffe2/python/operator_test:load_save_test

Reviewed By: mraway

Differential Revision: D26641599

fbshipit-source-id: bccb0af157d85e585e95bc7be61c4584fba3cb04
2021-02-25 20:24:06 -08:00
Adam Simpkins
e2afb269b8 [caffe2] add a Python test for SaveOp chunking
Summary:
Add a test in `load_save_test.py` that passes in a chunk_size parameter,
to ensure that we exercise the logic that passes the chunk size to the C++
serialization code.

Test Plan:
Ran the tests with the vlog level set to 3 and manually verified the log
messages showed that we were serializing in the expected chunks.
There are existing C++ tests that confirm chunking behavior works as expected
in the pure C++ code.

Reviewed By: mraway

Differential Revision: D26502578

fbshipit-source-id: cd0074f2358da81c68b0fed2c2a94818d83a957d
2021-02-23 11:52:13 -08:00
Adam Simpkins
fa0a049d4e Add a make_tempdir() utility function to the TestCase base class (#51762)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51762

Update test_util.py to add a `make_tempdir()` function to the `TestCase`
class.  The main advantage of this function is that the temporary
directory will be automatically cleaned up when the test case finishes,
so that test case does not need to worry about manually cleaning up this
directory.

This also prefixes the directory name with `caffe2_test.` so that it is
more obvious where the temporary directories came from if they are ever
left behind after a crashed or killed test process.

This updates the tests in `operator_test/load_save_test.py` to use this
new function, so they no longer have to perform their own manual cleanup
in each test.
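A minimal sketch of what such a helper can look like (an assumed shape, not the exact `test_util.py` code):
```
import shutil
import tempfile
import unittest

class TestCase(unittest.TestCase):
    def make_tempdir(self) -> str:
        """Create a temp dir that is removed automatically after the test."""
        path = tempfile.mkdtemp(prefix="caffe2_test.")
        self.addCleanup(shutil.rmtree, path, ignore_errors=True)
        return path
```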

Test Plan: python caffe2/python/operator_test/load_save_test.py

Reviewed By: mraway

Differential Revision: D26271178

Pulled By: simpkins

fbshipit-source-id: 51175eefed39d65c03484482e84923e5f39a4768
2021-02-12 10:56:01 -08:00
Roy, Arindam
517185f946 test_lc_1d: Increase deadline to 5 seconds (#52013)
Summary:
Increasing the deadline to avoid flakiness of the test on ROCm.
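For reference, a per-test hypothesis deadline is raised like this (sketch; the test name and body are hypothetical):
```
from hypothesis import given, settings, strategies as st

@settings(deadline=5000)  # milliseconds; five seconds instead of the default
@given(st.integers())
def test_lc_1d_is_not_flaky(n):
    ...  # hypothetical body; the real test lives in the caffe2 test suite
```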

Signed-off-by: Roy, Arindam <rarindam@gmail.com>


Pull Request resolved: https://github.com/pytorch/pytorch/pull/52013

Reviewed By: albanD

Differential Revision: D26360209

Pulled By: mrshenli

fbshipit-source-id: 1ddc7062c5ff7c980233d22844073de9fb7dcbb3
2021-02-11 11:59:56 -08:00
Adam Simpkins
81b9aa743b [pytorch] Update caffe2/python to eliminate Pyre errors (#52083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52083

This makes minor fixes in `caffe2/python` to address all errors currently
reported by Pyre.

I update the code to fix errors when doing so looked simple and safe,
and added `pyre-fixme` comments in other places.
ghstack-source-id: 121109695

Test Plan: Confirmed that Pyre no longer reports errors under `caffe2/python`

Differential Revision: D26272279

fbshipit-source-id: b1eb19d323b613f23280ce9c71e800e874ca1162
2021-02-11 11:04:59 -08:00
Andrey Malevich
7e54a64828 [C2] Add shape inference logic for ColwiseMax operator. (#51914)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51914

As desc.

Test Plan: Unit-test.

Reviewed By: intermilan

Differential Revision: D26299115

fbshipit-source-id: 9c80236f843e907476da1747dcd623c85147fa90
2021-02-09 14:12:07 -08:00
Arindam Roy
09b896261c Skip test_lc_1d for ROCM (#50964)
Summary:
The test is flaky on ROCm when the deadline is set to 1 second. This is affecting builds as it is failing randomly.
Disabling for now.

Signed-off-by: Arindam Roy <rarindam@gmail.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50964

Reviewed By: houseroad

Differential Revision: D26049370

Pulled By: BIT-silence

fbshipit-source-id: 22337590a8896ad75f1281e56fbbeae897f5c3b2
2021-01-25 11:43:37 -08:00
Richard Barnes
9945fd7253 Drop unused imports from caffe2/python (#49980)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49980

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727359

fbshipit-source-id: c4f60005b10546423dc093d31d46deb418352286
2021-01-05 13:17:46 -08:00
skyline75489
46b83212d1 Remove unused six code for Python 2/3 compatibility (#48077)
Summary:
This is basically a reborn version of https://github.com/pytorch/pytorch/issues/45254 .

Ref: https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48077

Reviewed By: ngimel

Differential Revision: D25687042

Pulled By: bugra

fbshipit-source-id: 05f20a6f3c5212f73d0b1505b493b720e6cf74e5
2020-12-22 18:07:08 -08:00
Andrey Malevich
f5a26a554b [C2] Revive unsafe CoalesceOp (#49402)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49402

In cases of NCCLAllReduce operations there could be non-trivial overhead for
launching cooperative kernels (especially in case of async execution of
different parts of the model). This diff is reviving this operator to make it
possible to fuse multiple operations into a single kernel.

Test Plan:
Unit-test.
Used in a later diff.

Reviewed By: xianjiec

Differential Revision: D25531206

fbshipit-source-id: 64b1c161233a726f9e2868f1059316e42a8ea1fc
2020-12-17 04:31:29 -08:00
Andrey Malevich
46debe7f23 [DPER] Introduce barrier operation to force synchronization of threads in async execution (#49322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49322

In some cases async execution might lose dependencies (Alias-like ops) or produce suboptimal scheduling when there is a choice of which parts to schedule first. An example of the latter behavior can happen in ModelParallel training, where a copy can get lower priority compared to the rest of the execution on the given GPU, which causes other GPUs to starve.

This operator makes it possible to address these issues by introducing extra explicit dependencies between ops.

Test Plan:
Unit-test.
E2E testing in the future diffs.

Reviewed By: xianjiec

Differential Revision: D24933471

fbshipit-source-id: 1668994c7856d73926cde022378a99e1e8db3567
2020-12-15 16:13:42 -08:00
Xiaomeng Yang
2039ff3fbb [Caffe2] Optimize MishOp on CPU (#48212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48212

Optimize MishOp on CPU

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:activation_ops_test -- "mish"

Reviewed By: houseroad

Differential Revision: D25071304

fbshipit-source-id: fe94bfab512188d60412d66962983eff4f37bc07
2020-11-19 14:17:27 -08:00
Shiyan Deng
c19eb4ad73 BoxWithNMSLimit support int batch_splits input (#47504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47504

allow int type input of `batch_splits`

Test Plan:
```
buck test caffe2/caffe2/python/operator_test:torch_integration_test -- test_box_with_nms_limits
```

Reviewed By: jackm321

Differential Revision: D24629522

fbshipit-source-id: 61cb132e792bddd8f9f1bca5b808f1a9131808f0
2020-11-07 00:27:51 -08:00
Yen-Jung Chang
6e22b6008d [MLF] Allow for computing prune quantile thresholds on absolute value of indicators in distributed-inference-compatible embedding LUT pruning (#46789)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46789

1. Now `SelfBinningHistogram` can calculate the binning histogram using the absolute values from a given array of values.
2. Update the invocation of `SelfBinningHistogram` in `post_training_prune`.

Test Plan:
1. [buck test caffe2/caffe2/python/operator_test:self_binning_histogram_test](https://www.internalfb.com/intern/testinfra/testconsole/testrun/6473924488326108/)
2. [buck test dper3/dper3_backend/delivery/tests:post_training_prune_test](https://www.internalfb.com/intern/testinfra/testconsole/testrun/2251799854023163/)

Reviewed By: hwangjeff

Differential Revision: D24494097

fbshipit-source-id: 95e47137b25746e686ef9baa9409560af5d58fc1
2020-11-02 11:31:31 -08:00
Huan Gui
b5662ba0f0 [uhm][0/n] add cuda Mod Op (#46732)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46732

as titled

Test Plan:
unittest

buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:mod_op_test

Reviewed By: xianjiec

Differential Revision: D24368100

fbshipit-source-id: 1232d22a67ac268986043911d548fa9d657470ec
2020-10-26 11:07:51 -07:00
Alexander Grund
93719440b8 Replace map(lambda constructs (#46462)
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal

Makes them more readable and possibly faster. Care has to be taken because `list(map(...))` builds the full list immediately, while a generator expression `(f(x) for x in xs)` is only evaluated when consumed. This is a benefit in some cases where it is not required to actually create the list of values in memory (e.g. when passing to `tuple`, `extend`, or `join`); a small illustration follows.
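```
xs = [1, 2, 3]

# Before: builds the whole list immediately.
squares = list(map(lambda x: x * x, xs))

# After: a list comprehension, equivalent and more readable.
squares = [x * x for x in xs]

# A generator expression defers evaluation until it is consumed,
# avoiding the intermediate list for consumers like sum/tuple/join.
total = sum(x * x for x in xs)
```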

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462

Reviewed By: zou3519

Differential Revision: D24422343

Pulled By: ezyang

fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
2020-10-22 09:50:22 -07:00
Jeff Hwang
9b5197b763 [mlf][efficiency] add tensor inference function to last-n collector op (#46693)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46693

title

Test Plan: unit tests

Reviewed By: hx89

Differential Revision: D23946770

fbshipit-source-id: f7c3d4a1b4ef3b0e5f56e5a9a30f5003ce9f40b0
2020-10-22 01:15:00 -07:00
Alexander Grund
5b0f400488 Replace list(map(...)) constructs by list comprehensions (#46461)
Summary:
As discussed in https://github.com/pytorch/pytorch/issues/46392 this makes the code more readable and possibly more performant.

It also fixes a bug, detected by this change, where the argument order of `map` was confused: 030a24906e (diff-5bb26bd3a23ee3bb540aeadcc0385df2a4e48de39f87ed9ea76b21990738fe98L1537-R1537)

Fixes https://github.com/pytorch/pytorch/issues/46392

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46461

Reviewed By: ailzhang

Differential Revision: D24367015

Pulled By: ezyang

fbshipit-source-id: d55a67933cc22346b00544c9671f09982ad920e7
2020-10-19 18:42:49 -07:00
Jianyu Huang
5c67cc7a9e [caffe2] Enable fp16 for SparseNormalize op (#45551)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45551

The FP16 version of the SparseNormalize op in Caffe2 is missing. This diff adds FP16 support to unblock the MC process of adding FP16 to Dper3.

Check https://fb.quip.com/L0T2AXGwUY3n#EReACAeifk3 .

One question is whether the pure FP16 SparseNormalize op will affect accuracy; maybe we should do it in the FP32 domain.
ghstack-source-id: 114184398

Test Plan:
```
 buck run mode/opt //caffe2/caffe2/python/operator_test:sparse_normalize_test
```

```
buck run mode/opt -c python.package_style=inplace mode/no-gpu //caffe2/caffe2/python/benchmarks:sparse_normalize_benchmark -- --fp16
```

Reviewed By: jspark1105

Differential Revision: D24005618

fbshipit-source-id: 8b918ec4063fdaafa444779b95206ba2b7b38537
2020-10-13 15:35:22 -07:00
Pawel Garbacki
fb50fcaa82 [C2] Add string equality operator (#45886)
Summary:
This diff adds a string equality checking operator.

Another attempt at reverted D24042344 (cf48872d28)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45886

Test Plan: unit tests, github builds

Reviewed By: dzhulgakov

Differential Revision: D24129953

fbshipit-source-id: caa53c7eac5c67c414c37e9d93416104f72556b9
2020-10-06 12:08:26 -07:00
Dmytro Dzhulgakov
519c086418 Revert D24042344: [C2] Add string equality operator
Test Plan: revert-hammer

Differential Revision:
D24042344 (cf48872d28)

Original commit changeset: c8997c6130e3

fbshipit-source-id: 3d8aec1104a2a59c67ab4b7e77caeaf9fc94ae1d
2020-10-05 15:09:03 -07:00
Pawel Garbacki
cf48872d28 [C2] Add string equality operator
Summary: This diff adds a string equality checking operator.

Test Plan: Unit tests

Differential Revision: D24042344

fbshipit-source-id: c8997c6130e3438f2ae95dae69f76978e2e95527
2020-10-05 10:47:53 -07:00
Marcio Porto
c31066ac9d Torch Integration Test Formatting Changes (#45740)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45740

Reviewed By: esqu1

Differential Revision: D23869021

fbshipit-source-id: 5910d44f9475bd7a53dc0478b69b39572dc8666f
2020-10-02 14:02:31 -07:00
Marcio Porto
b234acd414 Exposes SparseToDenseMask Caffe2 Operator (#45670)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45670

Reviewed By: esqu1

Differential Revision: D23868280

fbshipit-source-id: d6afa129c073fe611cb43a170025bc3c880a4bec
2020-10-02 10:05:13 -07:00
Thomas Bredillet
0fa551f0ab [c2] Fix int types for learning rate
Summary: Currently GetSingleArgument overflows since it expects an int instead of an int64 when using a 1cycle (hill policy) annealing schedule.

Test Plan:
unittest

buck test  caffe2/caffe2/python/operator_test:learning_rate_op_test

Differential Revision: D23938169

fbshipit-source-id: 20d65df800d7a0f1dd9520705af31f63ae716463
2020-09-26 10:59:29 -07:00
Dianshi Li
03dde4c62a Resend diff D23858329 (#45315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45315

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45314

In D23858329 (721cfbf842), we put the PriorCorrectionCalibrationPrediction unit test in an OSS file, which causes test failures in the public trunk.

This diff moves it to an FB-only test file.

Test Plan:
```
 buck test //caffe2/caffe2/python/operator_test:torch_integration_test -- test_gather_ranges_to_dense_op

buck test //caffe2/caffe2/fb/python/operator_test:torch_integration_test -- test_prior_correct_calibration_prediction_op
```
all pass.

Reviewed By: houseroad

Differential Revision: D23899012

fbshipit-source-id: 1ed97d8702e2765991e6caf5695d4c49353dae82
2020-09-24 18:41:49 -07:00
Xiaomeng Yang
e2bcdc7b69 [Caffe2] Fix LayerNormOp when batch_size == 0. (#45250)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45250

[Caffe2] Fix LayerNormOp when batch_size == 0.

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:layer_norm_op_test

Reviewed By: houseroad

Differential Revision: D23892091

fbshipit-source-id: 9a34654dd6880c9d14b7111fcf850e4f48ffdf91
2020-09-24 12:30:03 -07:00
Mike Ruberry
956a25d061 Revert D23858329: [PT Model Split] Support 2 operators in PT by C2 conversion
Test Plan: revert-hammer

Differential Revision:
D23858329 (721cfbf842)

Original commit changeset: ed37118ca7f0

fbshipit-source-id: 30c700f80665be11afc608b00a77766064e60b35
2020-09-23 21:20:21 -07:00
Dianshi Li
721cfbf842 [PT Model Split] Support 2 operators in PT by C2 conversion (#45231)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45231

Two operators, `PriorCorrectionCalibrationPrediction` and `GatherRangesToDense`, are not supported in PT, which means GLOW cannot work.

To unblock, we first try to use C2->PT conversion. In the long term, we need to implement PT custom ops.

This diff does this conversion to unblock the current project.

Test Plan:
Run unit tests. The test input is from the current DPER example.
All pass.
```buck test //caffe2/caffe2/python/operator_test:torch_integration_test -- test_prior_correct_calibration_prediction_op  --print-passing-details

> c2 reference output
> [0.14285715 0.27272728 0.39130434 0.5 ]

> PT converted output
> tensor([0.1429, 0.2727, 0.3913, 0.5000])

buck test //caffe2/caffe2/python/operator_test:torch_integration_test -- test_gather_ranges_to_dense_op  --print-passing-details

c2 reference output
> [array([[6, 5, 4, 3], [0, 0, 0, 0]], dtype=int64)]

> PT converted output
> [tensor([[6, 5, 4, 3], [0, 0, 0, 0]])]
```

Reviewed By: allwu, qizzzh

Differential Revision: D23858329

fbshipit-source-id: ed37118ca7f09e1cd0ad1fdec3d37f66dce60dd9
2020-09-23 18:31:57 -07:00
Bugra Akyildiz
27c7158166 Remove __future__ imports for legacy Python2 supports (#45033)
Summary:
There is a tool called `2to3` whose `future` fixer specifically removes these imports; the `caffe2` directory has the most redundant imports:

```2to3 -f future -w caffe2```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033

Reviewed By: seemethere

Differential Revision: D23808648

Pulled By: bugra

fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
Yan Xie
285ba0d068 Enable fp16 for UniformFill (#44540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44540

Support output type to be fp16 for UniformFill

Reviewed By: jianyuh

Differential Revision: D23558030

fbshipit-source-id: 53a5b2c92cfe78cd11f55e6ee498e1bd682fe4a1
2020-09-15 15:09:18 -07:00
Yan Xie
4ce6af35c4 Enable fp16 for CUDA SparseLengthsSum/Mean (#44089)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44089

Add support for fp16 as input type in the SparseLengthsSum/Mean caffe2 operators

Reviewed By: xianjiec

Differential Revision: D23436877

fbshipit-source-id: 02fbef2fde17d4b0abea9ca5d17a36aa989f98a0
2020-09-15 11:10:54 -07:00
Brandon Lin
ea55820606 [dper3] Export PackSegments and UnpackSegments to Pytorch
Summary: As title.

Test Plan:
```
buck test //caffe2/caffe2/python/operator_test/:torch_integration_test -- test_pack_segments
```

Reviewed By: yf225

Differential Revision: D23610495

fbshipit-source-id: bd8cb61f2284a08a54091a4f982f01fcf681f215
2020-09-11 09:29:24 -07:00
Xiaomeng Yang
135ebbde6d [Caffe2] Add RMSNormOp (#44338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44338

Add RMSNormOp in Caffe2
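For context, RMSNorm normalizes by the root-mean-square of the features instead of centering by mean and variance; a NumPy sketch of the forward pass (illustrative; whether the Caffe2 op includes the bias term is an assumption here):
```
import numpy as np

def rms_norm(x, gamma, beta, eps=1e-6):
    """Normalize by the root-mean-square over the last axis, then affine."""
    rms = np.sqrt(np.mean(np.square(x), axis=-1, keepdims=True) + eps)
    return x / rms * gamma + beta
```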

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:rms_norm_op_test

Reviewed By: houseroad

Differential Revision: D23546424

fbshipit-source-id: 8f3940a0bb42230bfa647dc66b5e359cc84491c6
2020-09-08 23:50:44 -07:00
Brandon Lin
5de805d8a7 [dper3] Export Caffe2 operator LearningRate to PyTorch
Summary: Exports the operator to PyTorch, to be made into a low-level module.

Test Plan:
```
buck test //caffe2/caffe2/python/operator_test:torch_integration_test -- test_learning_rate
```

Reviewed By: yf225

Differential Revision: D23545582

fbshipit-source-id: 6b6d9aa6a47b2802ccef0f87c1263c6cc2d2fdf6
2020-09-08 08:50:09 -07:00
kshitij12345
c7787f7fbf [numpy compatibility]Fix argmin/argmax when multiple max/min values (#42004)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41998
Fixes https://github.com/pytorch/pytorch/issues/22853
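For context, the NumPy rule being matched is to return the index of the first occurrence when the extreme value is repeated:
```
import numpy as np

a = np.array([1, 3, 3, 2])
print(np.argmax(a))                     # 1 -- index of the *first* maximum, not 2
print(np.argmin(np.array([0, 0, 1])))   # 0 -- first minimum likewise
```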

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42004

Reviewed By: ngimel

Differential Revision: D23049003

Pulled By: mruberry

fbshipit-source-id: a6fddbadfec4b8696730550859395ce4f0cf50d6
2020-08-28 06:42:42 -07:00
Sean Lynch
f9a766bb39 Increase deadline time for load_save tests (#43205)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43205

A number of tests that forward to `TestLoadSaveBase.load_save` are all marked as flaky due to them regularly taking much longer to start up than hypothesis' default timeout of 200ms. This diff fixes the problem by removing the timeout for `load_save`. This is alright as these tests aren't meant to be testing the performance of these operators.

I would set the deadline to 60s if I could, however it appears that the caffe2 GitHub CI uses a different version of hypothesis that doesn't allow using `datetime.timedelta`, so instead of trying to figure out an approach that works on both I've just removed the deadline time.

I've also tagged all existing tasks WRT these failures.

Differential Revision: D23175752

fbshipit-source-id: 324f9ff034df1ac4874797f04f50067149a6ba48
2020-08-20 08:41:24 -07:00
Edson Romero
5014cf4a4d Export MergeIdLists Caffe2 Operator to PyTorch
Summary: As titled.

Test Plan: buck test //caffe2/caffe2/python/operator_test:torch_integration_test -- test_merge_id_lists

Reviewed By: yf225

Differential Revision: D23076951

fbshipit-source-id: c37dfd93003590eed70b0d46e0151397a402dde6
2020-08-14 14:46:17 -07:00
Ren Chen
e182ec97b3 Fix illegal memory access issue for CUDA version of SplitByLengths operator.
Summary:
1. Fix illegal memory access issue for SplitByLengths operator in the CUDA context.
2. Add support to scaling lengths vector for SplitByLengths operator.
3. Add support to test SplitByLengths operator in the CUDA context.

Example for SplitByLengths operator processing scaling lengths vector:
value vector A = [1, 2, 3, 4, 5, 6]
length vector B = [1, 2]
after execution of SplitByLengths operator,
the output should be [1,2] and [3,4,5,6]
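A NumPy sketch of these semantics, including the scaling case above where the lengths sum to a divisor of the value count (illustrative, not the CUDA kernel):
```
import numpy as np

def split_by_lengths(values, lengths):
    # When sum(lengths) evenly divides len(values), each length acts as a
    # proportion and is scaled up by len(values) // sum(lengths).
    scale = len(values) // sum(lengths)
    sizes = [n * scale for n in lengths]
    return np.split(values, np.cumsum(sizes)[:-1])

print(split_by_lengths(np.array([1, 2, 3, 4, 5, 6]), [1, 2]))
# [array([1, 2]), array([3, 4, 5, 6])]
```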

Test Plan: buck test mode/dev-nosan caffe2/caffe2/python/operator_test:concat_split_op_test

Reviewed By: kennyhorror

Differential Revision: D23079841

fbshipit-source-id: 3700e7f2ee0a5a2791850071fdc16e5b054f8400
2020-08-14 01:04:08 -07:00
Christopher Whelan
7a9ae52550 [hypothesis] Deadline followup (#42842)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42842

Test Plan: `buck test`

Reviewed By: thatch

Differential Revision: D23045269

fbshipit-source-id: 8a3f4981869287a0f5fb3f0009e13548b7478086
2020-08-11 15:33:23 -07:00
Edson Romero
71dbfc79b3 Export BatchBucketOneHot Caffe2 Operator to PyTorch
Summary: As titled.

Test Plan:
```
buck test caffe2/caffe2/python/operator_test:torch_integration_test -- test_batch_bucket_one_hot_op
```

Reviewed By: yf225

Differential Revision: D23005981

fbshipit-source-id: 1daa8d3e7d6ad75e97e94964db95ccfb58541672
2020-08-11 14:00:19 -07:00
Mike Ruberry
dedcc30c84 Fix ROCm CI by increasing test timeout (#42827)
Summary:
ROCm is failing to run this test in the allotted time. See, for example, https://app.circleci.com/pipelines/github/pytorch/pytorch/198759/workflows/f6066acf-b289-46c5-aad0-6f4f663ce820/jobs/6618625.

cc jeffdaily

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42827

Reviewed By: pbelevich

Differential Revision: D23042220

Pulled By: mruberry

fbshipit-source-id: 52b426b0733b7b52ac3b311466d5000334864a82
2020-08-10 20:26:20 -07:00
Christopher Whelan
5cd0f5e8ec [PyFI] Update hypothesis and switch from tp2 (#41645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41645

Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1405

Test Plan: buck test

Reviewed By: thatch

Differential Revision: D20323893

fbshipit-source-id: 54665d589568c4198e96a27f0ed8e5b41df7b86b
2020-08-08 12:13:04 -07:00
Edson Romero
2b04712205 Exposing Percentile Caffe2 Operator in PyTorch
Summary: As titled.

Test Plan:
```
buck test caffe2/caffe2/python/operator_test:torch_integration_test -- test_percentile
```

Reviewed By: yf225

Differential Revision: D22999896

fbshipit-source-id: 2e3686cb893dff1518d533cb3d78c92eb2a6efa5
2020-08-07 16:22:37 -07:00
Rui Liu
92b7347fd7 Enforce counter value to double type in rowwise_counter
Summary:
Enforce counter value to double type in rowwise_counter.

**Context:**
The existing implementation is using float type for the counter value. But due to the precision limit of a single-precision floating-point number [1], we observed in our earlier experiments that the counter value can't increment beyond 16777216.0 (i.e., the max value is 16777216.0). We decided to enforce double type to avoid this issue.

[1] https://stackoverflow.com/questions/12596695/why-does-a-float-variable-stop-incrementing-at-16777216-in-c
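The saturation is easy to reproduce: at 2^24 a float32 no longer has integer resolution, so the increment is rounded away:
```
import numpy as np

counter = np.float32(16777216.0)         # 2**24
print(counter + np.float32(1.0))         # 16777216.0 -- the +1 is rounded away
print(np.float64(16777216.0) + 1.0)      # 16777217.0 -- double keeps counting
```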

Test Plan:
op test
```
ruixliu@devvm1997:~/fbsource/fbcode/caffe2/caffe2/python/operator_test(f0b0b48c)$ buck test :rowwise_counter_test
Trace available for this run at /tmp/testpilot.20200728-083200.729292.log
TestPilot test runner for Facebook. See https://fburl.com/testpilot for details.
Testpilot build revision cd2638f1f47250eac058b8c36561760027d16add fbpkg f88726c8ebde4ba288e1172a348c7f46 at Mon Jul 27 18:11:43 2020 by twsvcscm from /usr/local/fbprojects/packages/testinfra.testpilot/887/t.par
Discovering tests
Running 1 test
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/7881299364977047
      ✓ caffe2/caffe2/python/operator_test:rowwise_counter_test - test_rowwise_counter (caffe2.caffe2.python.operator_test.rowwise_counter_test.TestRowWiseCounter) 0.265 1/1 (passed)
      ✓ caffe2/caffe2/python/operator_test:rowwise_counter_test - main 14.414 (passed)
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/7881299364977047
Summary (total time 18.51s):
  PASS: 2
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

optimizer test
```
ruixliu@devvm1997:~/fbsource/fbcode/caffe2/caffe2/python(7d66fbb9)$ buck test :optimizer_test
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/7036874434841896
Summary (total time 64.87s):
  PASS: 48
  FAIL: 0
  SKIP: 24
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestMomentumSgd)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestGFtrl)
    caffe2/caffe2/python:optimizer_test - test_caffe2_cpu_vs_numpy (caffe2.caffe2.python.optimizer_test.TestYellowFin)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestSparseRAdam)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestRowWiseAdagradWithCounter)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestAdagrad)
    caffe2/caffe2/python:optimizer_test - test_caffe2_gpu_vs_numpy (caffe2.caffe2.python.optimizer_test.TestYellowFin)
    caffe2/caffe2/python:optimizer_test - testDense (caffe2.caffe2.python.optimizer_test.TestRowWiseAdagrad)
    caffe2/caffe2/python:optimizer_test - testGPUDense (caffe2.caffe2.python.optimizer_test.TestFtrl)
    caffe2/caffe2/python:optimizer_test - testSparse (caffe2.caffe2.python.optimizer_test.TestRmsProp)
    ...and 14 more not shown...
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

param download test
```
ruixliu@devvm1997:~/fbsource/fbcode/caffe2/caffe2/fb/net_transforms/tests(7ef20a38)$ sudo buck test :param_download_test
Finished test run: Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/6473924481526935
```

e2e flow:
f208394929
f207991149
f207967273

ANP notebook to check the counter value loaded from the flows
https://fburl.com/anp/5fdcbnoi

screenshot of the loaded counter (note that counter max is larger than 16777216.0)

{F250926501}

Reviewed By: ellie-wen

Differential Revision: D22711514

fbshipit-source-id: 426fed7415270aa3f276dda8141907534734337f
2020-08-05 20:40:51 -07:00
Mike Ruberry
24e2a8a171 Revert D22780307: Fix illegal memory access issue for CUDA version of SplitByLengths operator.
Test Plan: revert-hammer

Differential Revision:
D22780307 (76905527fe)

Original commit changeset: c5ca60ae16b2

fbshipit-source-id: f3c99eec5f05121e2bed606fe2ba84a0be0cdf16
2020-08-05 12:47:56 -07:00
Ren Chen
76905527fe Fix illegal memory access issue for CUDA version of SplitByLengths operator.
Summary:
1. Fix illegal memory access issue for SplitByLengths operator in the CUDA context.
2. Add support to scaling lengths vector for SplitByLengths operator.
3. Add support to test SplitByLengths operator in the CUDA context.

Example for SplitByLengths operator processing scaling lengths vector:
value vector A = [1, 2, 3, 4, 5, 6]
length vector B = [1, 2]
after execution of SplitByLengths operator,
the output should be [1,2] and [3,4,5,6]

Test Plan: buck test mode/dev-nosan caffe2/caffe2/python/operator_test:concat_split_op_test

Reviewed By: kennyhorror

Differential Revision: D22780307

fbshipit-source-id: c5ca60ae16b24032cedfa045a421503b713daa6c
2020-08-05 11:46:00 -07:00
Xiaomeng Yang
5769b06ab5 [Caffe2] Remove explicit divide-by-zero in SpatialBN training mode (#42380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42380

[Caffe2] Remove explicit divide-by-zero in SpatialBN training mode

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:spatial_bn_op_test

Reviewed By: houseroad

Differential Revision: D22873214

fbshipit-source-id: 70b505391b5db02b45fc46ecd7feb303e50c6280
2020-08-01 11:54:58 -07:00
Yan Xie
bdd9ef1981 Support RowWiseSparseAdam on GPU (#35404)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35404

Implement RowWiseSparseAdam on CUDA

Reviewed By: xw285cornell

Differential Revision: D20650225

fbshipit-source-id: 5f871e2f259e362b713c9281b4d94534453995cf
2020-07-31 10:47:29 -07:00
Xiaomeng Yang
60f51542dc [Caffe2] Fix spatial_bn bug for computing running_var on CPU or on CUDA without CuDNN (#42151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42151

Previously our Caffe2 SpatialBN op impl was incorrect: it computed running_var without the unbias coefficient. It should actually have failed the test because the output differs from CuDNN's output; however, our tests were too weak to find this bug. This diff fixes all of them.
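For context, the unbias coefficient is the Bessel correction n/(n-1) applied when folding a batch's variance into running_var (sketch):
```
import numpy as np

x = np.random.randn(32).astype(np.float32)   # one batch of activations
n = x.size
biased = np.var(x)                   # divides by n (what the op was storing)
unbiased = biased * n / (n - 1)      # Bessel-corrected variance
assert np.isclose(unbiased, np.var(x, ddof=1))
```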

Test Plan: buck test mode/dev-nosan //caffe2/caffe2/python/operator_test:spatial_bn_op_test

Reviewed By: houseroad

Differential Revision: D22786127

fbshipit-source-id: db80becb67d60c44faae180c7e4257cb136a266d
2020-07-29 11:20:03 -07:00
Nikita Shulga
2f61aca17b Skip DataIO tests relying on LevelDB if compiled without it (#42169)
Summary:
Found while trying to get RocM Caffe2 job green

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42169

Reviewed By: seemethere

Differential Revision: D22791896

Pulled By: malfet

fbshipit-source-id: 9df6233876aec5ead056365499bab970aa7e8bdc
2020-07-28 10:18:26 -07:00
Lingyi Liu
d6f1346c37 Add a new op for converting the dense feature to sparse representation
Summary: we need this op to avoid splicing a dense tensor and then using the Mergesinglescaler op

Test Plan: integrated test with dper2

Differential Revision: D22677523

fbshipit-source-id: f4f9a1f06841b0906ec8cbb435482ae0a89e1721
2020-07-27 12:45:37 -07:00
Hongzheng Shi
581e9526bb [GradualGating] support better k value change (#41557)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41557

 - add new learning rate functor "slope"
 - use "slope" learning rate in gated_sparse_feature module

Test Plan:
buck test dper3/dper3/modules/tests:core_modules_test -- test_gated_sparse_features_shape_num_warmup_tensor_k
buck test caffe2/caffe2/python/operator_test:learning_rate_op_test -- test_slope_learning_rate_op

Reviewed By: huayuli00

Differential Revision: D22544628

fbshipit-source-id: f2fcae564e79e1d8bcd3a2305d0c11ca7c0d3b3c
2020-07-17 20:44:28 -07:00
Stanislau Hlebik
b774ce54f8 remediation of S205607
fbshipit-source-id: 798decc90db4f13770e97cdce3c0df7d5421b2a3
2020-07-17 17:19:47 -07:00
Stanislau Hlebik
8fdea489af remediation of S205607
fbshipit-source-id: 5113fe0c527595e4227ff827253b7414abbdf7ac
2020-07-17 17:17:03 -07:00
Yavuz Yetim
d04a2e4dae Back out "Revert D22329069: Self binning histogram" (#41313)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41313

This diff backs out the backout diff. The failure was due to C++ `or`
not being supported in MSVC. This is now replaced with `||`.

Original commit changeset: fc7f3f8c968d

Test Plan: Existing unit tests, check github CI.

Reviewed By: malfet

Differential Revision: D22494777

fbshipit-source-id: 3271288919dc3a6bfb82508ab9d021edc910ae45
2020-07-13 11:46:34 -07:00
Nikita Shulga
7bae5780a2 Revert D22329069: Self binning histogram
Test Plan: revert-hammer

Differential Revision:
D22329069 (16c8146da9)

Original commit changeset: 28406b94e284

fbshipit-source-id: fc7f3f8c968d1ec7d2a1cf7a4d05900f51055d82
2020-07-10 16:22:29 -07:00
Yavuz Yetim
16c8146da9 Self binning histogram (#40875)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40875

This op uses the given num_bins and a spacing strategy to automatically bin and compute the histogram of given matrices.

Test Plan: Unit tests.

Reviewed By: neha26shah

Differential Revision: D22329069

fbshipit-source-id: 28406b94e284d52d875f73662fc82f93dbc00064
2020-07-10 13:55:42 -07:00
rohithkrn
df252c059c [ROCm] Skip caffe2 unique op test for rocm3.5 (#41219)
Summary:
The unique op test failure in caffe2 blocks upgrading CI to ROCm 3.5.1. Skipping the test to unblock; will re-enable after root-causing and fixing the issue.
jeffdaily sunway513

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41219

Differential Revision: D22471452

Pulled By: xw285cornell

fbshipit-source-id: 9e503c8b37c0a4b92632f77b2f8a90281a9889c3
2020-07-09 20:00:29 -07:00
lcskrishna
302cf6835e [ROCm][Caffe2] Enable MIOpen 3D Pooling (#38260)
Summary:
This PR contains the following updates:
1. MIOpen 3D pooling enabled in Caffe2.
2. Refactored the MIOpen pooling code in caffe2.
3. Enabled unit test cases for 3D pooling.

CC: ezyang jeffdaily ashishfarmer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38260

Differential Revision: D21524754

Pulled By: xw285cornell

fbshipit-source-id: ddfe09dc585cd61e42eee22eff8348d326fd0c3b
2020-07-08 17:42:55 -07:00
Alyssa Wang
e0e8b98c43 Export logit op to pytorch
Summary: Export logit op to pt for better preproc perf

Test Plan:
unit test
Also tested with model re-generation

Reviewed By: houseroad

Differential Revision: D22324611

fbshipit-source-id: 86accb6b4528e5c818d2c3f8c67926f279d158d6
2020-07-08 02:27:09 -07:00
Dongxin Liu
cbe52d762c Mish Activation Function (#40856)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40856

Add a new activation function - Mish: A Self Regularized Non-Monotonic Neural Activation Function https://arxiv.org/abs/1908.08681
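For reference, Mish is defined as x * tanh(softplus(x)); a NumPy sketch (illustrative, not the optimized kernel):
```
import numpy as np

def mish(x):
    """Mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

print(mish(np.array([-1.0, 0.0, 1.0])))  # ~[-0.3034, 0.0, 0.8651]
```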

Test Plan:
buck test //caffe2/caffe2/python/operator_test:elementwise_ops_test -- 'test_mish'

{F242275183}

Differential Revision: D22158035

fbshipit-source-id: 459c1dd0ac5b515913fc09b5f4cd13dcf095af31
2020-07-06 15:51:23 -07:00
Vitaly Fedyunin
a1c234e372 Revert D22330340: [C2] Fixed a bug in normalization operator
Test Plan: revert-hammer

Differential Revision:
D22330340 (ce63f70981)

Original commit changeset: 0bccf925bb76

fbshipit-source-id: e27d70dee0fbe9e708b0cf3be81dbd33c4015026
2020-07-02 16:05:23 -07:00
Pawel Garbacki
ce63f70981 [C2] Fixed a bug in normalization operator (#40925)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40925

The normalization operator does not handle empty tensors correctly. This is a fix.

Test Plan: unit tests

Differential Revision: D22330340

fbshipit-source-id: 0bccf925bb768ebb997ed0c88130c5556308087f
2020-07-02 13:24:56 -07:00
Neha Shah
5ad885b823 [Caffe2][Pruning] Make the caffe2 Sum operator support long types (#40379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40379

The current Sum operator doesn't support Long, hence modify the code.

Test Plan: Write a test case

Reviewed By: jspark1105, yinghai

Differential Revision: D21917365

fbshipit-source-id: b37d2c100c70d17d2f89c309e40360ddfab584ee
2020-06-23 14:18:29 -07:00
Jongsoo Park
7a837019a4 [caffe2] optimize 2/4-bit row-wise quantization (#387)
Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/39985

avx2-optimized 2/4-bit row-wise quantization/dequantization in perfkernels.
This diff slightly changes the numerics of quantization by multiplying with the inverse of scale instead of dividing by scale.
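A rough sketch of row-wise n-bit quantization and the scale-inverse trick described above (illustrative, not the perfkernels code):
```
import numpy as np

def quantize_rowwise(row, num_bits):
    """Quantize one row to num_bits unsigned integers (illustrative)."""
    lo, hi = row.min(), row.max()
    scale = (hi - lo) / (2 ** num_bits - 1)
    if scale == 0:
        scale = 1.0
    inv_scale = 1.0 / scale
    # Multiply by the precomputed inverse instead of dividing by scale;
    # faster, with numerics that can differ in the last bit.
    q = np.round((row - lo) * inv_scale)
    return q.astype(np.uint8), np.float32(scale), np.float32(lo)
```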

Test Plan:
In my devserver

for i in 2 4 8; do echo $i; buck run mode/opt :fused_rowwise_nbit_conversion_bench -- --bit-rate=$i; done

Before this diff
2-bit
        3.35394 ms.        100%. FloatToFused2BitRowwiseQuantized
4-bit
        3.60351 ms.        100%. FloatToFused4BitRowwiseQuantized
8-bit
       0.434467 ms.        100%. FloatToFused8BitRowwiseQuantized

After this diff

2-bit
       0.606386 ms.        100%. FloatToFused2BitRowwiseQuantized
4-bit
       0.446683 ms.        100%. FloatToFused4BitRowwiseQuantized
8-bit
         0.4349 ms.        100%. FloatToFused8BitRowwiseQuantized

Reviewed By: choudharydhruv, jianyuh

Differential Revision: D22033195

fbshipit-source-id: d3a219e47b8345268d90a160c9314ed0d5b71467
2020-06-19 21:28:31 -07:00
Nikita Shulga
e2a178ca21 Update caffe2 hypothesis_test_util to support hypothesis-5 (#39498)
Summary:
Extracting the forward/backward-compatible `hypothesis` interface update parts of https://github.com/pytorch/pytorch/pull/39430 into a separate PR
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39498

Differential Revision: D21900210

Pulled By: malfet

fbshipit-source-id: 75e637cf839f49dc141d37e1686ce45ff4721245
2020-06-05 08:27:50 -07:00
Jongsoo Park
fca928cabf [caffe2] fix test error in video_input_op_test (#39382)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39382

Test Plan: buck test caffe2/caffe2/python/operator_test:video_input_op_test

Reviewed By: dutran

Differential Revision: D21832355

fbshipit-source-id: 47b1b0610b9600437fe1ed317d5af47d624767fb
2020-06-02 11:48:01 -07:00
Jongsoo Park
04ac41fe70 [caffe2] format video_input_op_test.py (#39381)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39381

To prepare D21832355

Test Plan: Just formatting

Reviewed By: dutran

Differential Revision: D21832354

fbshipit-source-id: bbf6a1377752adaa115ee2e2a5ba546964e3fd08
2020-06-02 11:46:01 -07:00
Jamie King
7f1a96d43c Adding sparse Lp regularization operator to Caffe2 (#38574)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38574

Adding a sparse L1 and L2 regularization operator to Caffe2. This doesn't work using run_on_loss, only run_after_optimize. Applying it to run_after_optimize rather than run_on_loss was easier to implement, particularly for the L1 norm, which is preferable in some cases and is non-differentiable at zero (see the sketch below).
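For context, applying L1 after the optimizer step amounts to a proximal soft-threshold, which sidesteps the non-differentiability at zero; a minimal sketch with a hypothetical helper name:
```
import numpy as np

def l1_shrink(w, reg_lambda, lr):
    """Soft-threshold (proximal) step for L1, run after the optimizer update."""
    return np.sign(w) * np.maximum(np.abs(w) - lr * reg_lambda, 0.0)

print(l1_shrink(np.array([0.5, -0.001, 0.002]), reg_lambda=1.0, lr=0.01))
# [ 0.49 -0.    0.  ]  -- small weights are driven exactly to zero
```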

Test Plan: Wrote and ran unit tests in operator_test:sparse_lp_regularizer_test.

Differential Revision: D21003029

fbshipit-source-id: 81070a621752560ce03e320d065ce27807a5d278
2020-06-01 15:21:19 -07:00
Xiaodong Wang
fcef43965b [AMD] Fix broken test (#39297)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39297

The histogram op doesn't have a GPU implementation. It's breaking the CI GPU test. Make the test run CPU-only.

Test Plan: CI

Reviewed By: hwangjeff

Differential Revision: D21800824

fbshipit-source-id: 9c835786f22bac7d420ce610397a6ee69084c19a
2020-05-30 13:12:24 -07:00
Jeff Hwang
0b9d537056 [dper][pruning] add histogram op (#38514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38514

This diff introduces the `Histogram` caffe2 op, which computes a histogram tensor for a list of input tensors. The bin edges of the histogram are defined by the arg `bin_edges`.
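The semantics mirror NumPy's fixed-edge histogram, with counts accumulated across all input tensors (sketch):
```
import numpy as np

bin_edges = [0.0, 0.5, 1.0, 2.0]
inputs = [np.array([0.1, 0.6]), np.array([1.5, 0.2, 0.7])]

# Accumulate one histogram across all input tensors.
hist = sum(np.histogram(x, bins=bin_edges)[0] for x in inputs)
print(hist)  # [2 2 1]
```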

Test Plan: tests

Reviewed By: chocjy

Differential Revision: D21553956

fbshipit-source-id: fc98c8db691d66d2dad57b6ad14867109913cb6f
2020-05-28 15:45:04 -07:00
Yan Zhu
c40a79a027 [c2] cuda impl for WeightScale op (#38712)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38712

as title

Test Plan: buck test;

Reviewed By: ustctf

Differential Revision: D21586705

fbshipit-source-id: 12cd34f04f074ee12b77304055f3ba6068cf38fb
2020-05-26 12:50:54 -07:00
Yan Zhu
dfbf9f397f Back out "Back out "[c2] register cuda op for LpNorm (fallback)"" (#38566)
Summary:
Previously we got a CI issue in the original submission (D21562485), so we backed out the original diff (D21588831). Resubmitting here to reproduce the CI issue and ask a caffe2 dev to take a look.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38566

Original commit changeset: 6dda4b71904d

Test Plan: buck test

Reviewed By: houseroad

Differential Revision: D21589352

fbshipit-source-id: de40ff2884019e14476e31c4c952f24d6e438f5f
2020-05-19 10:37:25 -07:00
Yan Zhu
fac9f36563 Back out "[c2] register cuda op for LpNorm (fallback)"
Summary: Original commit changeset: 573419e5a8da

Test Plan: D21562485  breaks CI build. Unlanding

Reviewed By: olittle

Differential Revision: D21588831

fbshipit-source-id: 6dda4b71904d7765f32f570f9722e4a9a6cbc97b
2020-05-14 20:25:30 -07:00
Yan Zhu
bbfd0ef244 [c2] register cuda op for LpNorm (fallback) (#38517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38517

as title

Test Plan: buck test

Reviewed By: olittle

Differential Revision: D21562485

fbshipit-source-id: 573419e5a8dae4121d99d5b72ed3960a92db7a54
2020-05-14 16:54:12 -07:00
Jongsoo Park
6be3e5d3bb [caffe2] weight_decay in reduced precision adagrad
Summary: As title

Test Plan: CI

Reviewed By: taiqing

Differential Revision: D21512729

fbshipit-source-id: 0777c90954ebad0cbd5785460e7b2a7c8c146316
2020-05-12 20:33:40 -07:00
Taiqing Wang
8cb1f2f9dc implement L2 regularization for Adagrad in caffe2 and dper (#37705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37705

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37372

Posted note: [Regularizing SparseNN Against Over-fitting](https://fb.workplace.com/notes/taiqing-wang/regularizing-sparsenn-against-over-fitting/220306075902708/)

**Problem formulation**

L(w) = J(w) + lambda/2 * ||w||^2
J(w) is the empirical loss, and ||w||^2 is the squared L2 norm of the parameters, a.k.a. L2 regularizer.

dL(w)/dw_i = dJ(w)/dw_i + lambda * w_i
dL(w)/dw_i is the gradient of L(w) w.r.t. w_i.

To implement the L2 regularizer, lambda * w_i is added to the gradient of J(w) w.r.t. w_i. lambda is called weight decay in this implementation (a short sketch follows the code-changes list below).

**Code changes**
* In the initialization method of AdagradOptimizer, a new input argument, weight_decay, is added.
* In the _run function of AdagradOptimizer, the weight decay will be skipped for 1d bias vectors.
* In the parameter update functions of Adagrad, the gradient is updated by weight_decay * w_i. The default value for weight_decay is zero.
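A one-line sketch of the coupled update described above (illustrative; contrast with decoupled weight decay, which applies the decay to the weights directly):
```
import numpy as np

w = np.array([0.5, -0.2], dtype=np.float32)
grad = np.array([0.1, 0.3], dtype=np.float32)
weight_decay = 1e-4  # lambda above

# Coupled L2: fold the decay into the gradient before the Adagrad update.
# (1-D bias vectors are skipped, per the code changes above.)
effective_grad = grad + weight_decay * w   # dL/dw_i = dJ/dw_i + lambda * w_i
```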

Test Plan:
`
buck build caffe2/caffe2/fb/dper/layer_models/tests/split_1:sparse_nn_test_weight_decay
`

`
./buck-out/gen/caffe2/caffe2/fb/dper/layer_models/tests/split_1/sparse_nn_test_weight_decay#binary.par
`

Reviewed By: jspark1105

Differential Revision: D21258652

fbshipit-source-id: d2366ddcd736a03205a2d16f914703b16d9fce8f
2020-05-03 10:42:49 -07:00
Nikita Shulga
527cf877d6 Delete old mkl_speed_test.py
Summary: It has been skipped for the last 1.5 years (since D10372230 landed)

Test Plan: CI

Reviewed By: ailzhang

Differential Revision: D21036194

fbshipit-source-id: 9ace60b45a123a9372a88310b91f33a69ae8880c
2020-04-15 11:02:01 -07:00
Yuxi Hu
f7c9faab05 Implementation and operator test for STORM optimizer (#36225)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36225

Implemented the [STORM](https://arxiv.org/abs/1905.10018) optimizer operator for dense and sparse cases.

Test Plan:
All newly added unit tests passed using "buck test //caffe2/caffe2/python/operator_test:storm_test".

{F233643713}

Reviewed By: chocjy

Differential Revision: D18702897

fbshipit-source-id: d25eeb492aa2a03c69754d3f076a8239230b3bf4
2020-04-14 23:04:26 -07:00
Hao Lu
fb70b4fb93 [caffe2] Add support for std::shared_ptr<std::vector<TensorList>> in PackRecordsOp and UnPackRecordsOp (#36550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36550

Separate dataset_ops changes into a separate diff.

Test Plan:
```
buck test caffe2/caffe2/python/operator_test:dataset_ops_test
```

AI/AF canary (tested with D20959214):
https://our.intern.facebook.com/intern/experiment_store/experiment/3298538636995/#commit1-commit2
https://our.intern.facebook.com/intern/experiment_store/experiment/2199027015376/#commit1-commit2

Reviewed By: yinghai

Differential Revision: D20988910

fbshipit-source-id: b37a7bfd131813e9472a5e2fa24d681d1ef19018
2020-04-14 03:43:21 -07:00
Devin He
b46fddf506 idtt + zch distributed inference (#35763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35763

Adds inference function and test for ScatterAssign

Test Plan: Updated unit test

Reviewed By: yyetim, shunting1986

Differential Revision: D20501079

fbshipit-source-id: 7ec6ef0127a151250dd699c90c2b80c35cfb1fe4
2020-04-03 12:09:34 -07:00
Tristan Rice
676fc929b7 [caffe2] fix type and shape inference for common gradient ops (#35857)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35857

This fixes a lot of common ops for InferBlobShapesAndTypes as well as adds support for testing the inferred shapes and types of gradient ops.

Ops:
* Concat
* Split
* LeakyReLU
* Relu
* Prelu
* Gelu
* Elu
* Sinh, Tanh, Cosh
* Abs
* ... and a number of other simple element wise ops

Test Plan:
Added support to hypothesis test to check the shape and type of gradient ops.

Enabled it for all the ops I fixed the shape and type inference for.

  buck test caffe2/caffe2/python/operator_test:

Reviewed By: pradeepd24

Differential Revision: D20806284

fbshipit-source-id: 77f796d9ff208e09e871bdbadf9a0a7c196b77f2
2020-04-02 11:17:04 -07:00
Yinghai Lu
af4d86788c Split SparseLengthsSumSparse into SparseLengthsSumSparseLookup + SparseLengthsSum (#35507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35507

We want to split up the SparseLengthsSumSparse op into an indirection op and the SparseLengthsSum op so that we can lower the latter part.  The indirection part is a plain impl now.

Test Plan:
```
for i in `seq 10`; do buck test caffe2/caffe2/python/operator_test:lengths_reducer_fused_nbit_rowwise_ops_test -- test_sparse_lengths_sum_rowwise_sparse; done
```

Reviewed By: jspark1105

Differential Revision: D20683478

fbshipit-source-id: 509effe88719d20aa0c4783bbe0ce1f183ee473c
2020-03-30 13:33:29 -07:00
Tristan Rice
d4f3bc7f8e [dt] [caffe2] add/fix shape inference for StumpFunc, SliceGradient and ResizeLike (#35430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35430

This fixes and adds tests for several commonly used operators.

There are some formatting differences due to running clang-format on one of the files.

Test Plan: buck test //caffe2/caffe2/fb/operators:hypothesis_test //caffe2/caffe2/python/operator_test:utility_ops_test //caffe2/caffe2/python/operator_test:concat_split_op_test

Reviewed By: yyetim

Differential Revision: D20657405

fbshipit-source-id: 51d86d0834003b8ac8d6acb5149ae13d7bbfc6ab
2020-03-26 17:50:32 -07:00
Xiaodong Wang
53fceff1e1 Change weight scale test to cpu only (#35346)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35346

The weight scale op doesn't have a GPU impl. This is breaking OSS CI from D20506032. Making it CPU-only.

Test Plan: OSS CI

Reviewed By: ustctf

Differential Revision: D20637440

fbshipit-source-id: 9aa6cce63ce637ab7856788e5d02f527decb2a26
2020-03-25 09:18:58 -07:00
Fei Tian
845b19c4ef Add weight_scale in Adagrad (#34944)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34944

Reviewed By: chonglinsun

Differential Revision: D20506032

fbshipit-source-id: ef025e536da01fdcabc783466bc065685b80ab9a
2020-03-20 22:36:51 -07:00
Edward Yang
d927d58c2a Revert D20289209: Support RowWiseSparseAdam on GPU
Test Plan: revert-hammer

Differential Revision:
D20289209

Original commit changeset: a7a8a21bd18c

fbshipit-source-id: 4a8ae684d099a5499c28b7e65578fc7ab10b248d
2020-03-18 07:35:07 -07:00
Jongsoo Park
bcbdba450c [caffe2] open source 2/4-bit SLS operators (#34903)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34903

Reattempt of D20461609

Moving 2/4-bit SLS and row-wise 2/4-bit conversion operator to open source to be used by DLRM

Test Plan: CI

Reviewed By: jianyuh

Differential Revision: D20495304

fbshipit-source-id: 66a99677583f50fd40e29c514710c7b1a8cdbc29
2020-03-17 22:55:10 -07:00