Commit Graph

2940 Commits

Author SHA1 Message Date
Ramanpreet Nara
f587267dc7 Revert D31705359: use irange for loops 8
Test Plan: revert-hammer

Differential Revision:
D31705359 (17e5200441)

Original commit changeset: c9ea2fbc0f9c

fbshipit-source-id: 08fff2d12beca953ad30dd0baabf86e39ac84f14
2021-12-02 12:55:08 -08:00
Richard Barnes
17e5200441 use irange for loops 8 (#66743)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66743

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for(TYPE var=x0;var<x_max;x++)`

to the format

`for(const auto var: irange(xmax))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable warning suppressions added by hand.

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D31705359

fbshipit-source-id: c9ea2fbc0f9cd29e97a52dcb203addc5f2abb09b
2021-12-02 10:21:29 -08:00
Jane Xu
8b0c2c18eb Fix pretrained=True for test_pt_onnx_trt (#67818)
Summary:
Addresses https://github.com/pytorch/pytorch/pull/66312#issuecomment-960357403

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67818

Reviewed By: malfet

Differential Revision: D32161208

Pulled By: janeyx99

fbshipit-source-id: 076e52ddc8718c74eb2941e867d92bfa4fe70f80
2021-11-04 09:49:42 -07:00
Shashank Chaudhry
06d1be2447 [NOOP][clangformat][codemod] Enable CLANGFORMAT for caffe2/caffe2/* (#67624)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67624

Test Plan: Visual inspection. Sandcastle.

Reviewed By: malfet

Differential Revision: D31986628

fbshipit-source-id: c872bded7325997a2945dbf5d4d052628dcb3659
2021-11-02 22:14:04 -07:00
Xue Li
2f099c7555 Revert D30652629: use irange for loops
Test Plan: revert-hammer

Differential Revision:
D30652629 (687c2267d4)

Original commit changeset: 0ae6c4bbbb55

fbshipit-source-id: 5c4f067b584a021c8c9656454d1ee60999600fb3
2021-10-15 15:23:10 -07:00
Richard Barnes
687c2267d4 use irange for loops (#66234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66234

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for(TYPE var=x0;var<x_max;x++)`

to the format

`for(const auto var: irange(xmax))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable warning suppressions added by hand.

bypass_size_limit
allow-large-files

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D30652629

fbshipit-source-id: 0ae6c4bbbb554bad42e372792a6430e1acf15e3e
2021-10-15 13:50:33 -07:00
Lu Fang
a6eec0c60f Upgrade onnx submodule to 85546f8c44e627f8ff1181725d03cc49f675e44f (#66427)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66427

Update the onnx submodule, so https://github.com/pytorch/pytorch/pull/66140 can land.

Test Plan: ci

Reviewed By: ezyang

Differential Revision: D31544610

fbshipit-source-id: 94831ef531bbd654a6aeb744cd53a38155848079
2021-10-12 09:46:08 -07:00
Atul Jangra
49f1605392 [RFC] Reduce logging noise from AdagradOptimizer (#66443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66443

For some reason, this logging is adding noise to a lot of flow jobs, and I am not sure it is actually needed.
It is called from __init__, so it runs every time and logs every key:value pair among the current local symbols.

Test Plan: N/A

Reviewed By: chowarfb

Differential Revision: D31534372

fbshipit-source-id: bed032b66fed548c97a6f66b1b9e905fd2738851
2021-10-11 13:25:41 -07:00
Jane Xu
7c2f53b363 [BE] set pretrained=False for onnx tests (#66312)
Summary:
Addresses this network risk mitigation mentioned in https://github.com/pytorch/pytorch/issues/65439#issuecomment-924627239.

I didn't include any mobile app/benchmarking changes because I think the pretrained weights matter there.

I ended up removing the changes in test_utils because those were sensitive to the pretrained variable.

I am saving the quantization test changes for another PR because they are currently disabled.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66312

Reviewed By: ejguan

Differential Revision: D31542992

Pulled By: janeyx99

fbshipit-source-id: 57b4f70247af25cc96c57abd9e689c34641672ff
2021-10-11 08:29:11 -07:00
Hector Yuen
0fc6bd2e47 [gpu ne eval] disable adam decay unit test for gpu (#66056)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66056

We keep running into this unrelated failure when landing diffs for the GPU inference project,
so we are disabling this operator unit test on GPU, where the operator doesn't exist:

RuntimeError: [enforce fail at operator.cc:277] op. Cannot create operator of type 'SmartDecaySparseAdam' on the device 'CUDA'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: input: "param" input: "mom1" input: "mom2" input: "last_seen" input: "indices" input: "grad" input: "lr" input: "iter" output: "param" output: "mom1" output: "mom2" output: "last_seen" name: "" type: "SmartDecaySparseAdam" arg { name: "beta1" f: 0 } arg { name: "beta2" f: 0.9 } arg { name: "epsilon" f: 1e-05 } device_option { device_type: 1 }

https://www.internalfb.com/intern/testinfra/diagnostics/5910974579962988.562949996565057.1633122845/

Test Plan: sandcastle

Reviewed By: jianyuh

Differential Revision: D31364731

fbshipit-source-id: 7fbd994cbe7f6ca116f5f34506a1ed7f14759bdf
2021-10-03 07:40:23 -07:00
Pruthvi Madugundu
085e2f7bdd [ROCm] Changes not to rely on CUDA_VERSION or HIP_VERSION (#65610)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65610

- Replace HIP_PLATFORM_HCC with USE_ROCM
- Don't rely on CUDA_VERSION or HIP_VERSION; use USE_ROCM and ROCM_VERSION instead.

- In the next PR:
   - Remove the mapping from CUDA_VERSION to HIP_VERSION and CUDA to HIP in hipify.
   - HIP_PLATFORM_HCC is deprecated, so add HIP_PLATFORM_AMD to support HIP host code compilation on gcc.

cc jeffdaily sunway513 jithunnair-amd ROCmSupport amathews-amd

Reviewed By: jbschlosser

Differential Revision: D30909053

Pulled By: ezyang

fbshipit-source-id: 224a966ebf1aaec79beccbbd686fdf3d49267e06
2021-09-29 09:55:43 -07:00
Nikita Shulga
399214efd6 Revert D31172530: [pytorch][PR] Enable CUPTI for kineto by default on windows
Test Plan: revert-hammer

Differential Revision:
D31172530 (6b60884f12)

Original commit changeset: 2c69ed0282c5

fbshipit-source-id: 649e040a8c44b0f536a8db397b4325309a285934
2021-09-24 19:18:15 -07:00
Guangyun Han
6b60884f12 Enable CUPTI for kineto by default on windows (#65608)
Summary:
Retry of https://github.com/pytorch/pytorch/pull/62175

See https://github.com/pytorch/pytorch/pull/62175#issuecomment-926411151 for more information.

malfet gdankel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65608

Reviewed By: zou3519

Differential Revision: D31172530

Pulled By: gdankel

fbshipit-source-id: 2c69ed0282c54fa6cdb6e604096d0370e230fd66
2021-09-24 13:00:49 -07:00
BowenBao
e6c39a521b [ONNX] Update submodule to 1.10.1 (#63716) (#64576)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **https://github.com/pytorch/pytorch/issues/64576 [ONNX] Update submodule to 1.10.1 (https://github.com/pytorch/pytorch/issues/63716)**

* [ONNX] Update IR version to 7

* [ONNX] update submodule to 1.10.1

* Disable some tests in caffe2 that fail because caffe2 doesn't support the
  new ops.
* Update Bazel file.

* Update expect files for new ONNX IR version

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64576

Reviewed By: jansel

Differential Revision: D31006896

Pulled By: msaroufim

fbshipit-source-id: f3bf97709f23a5a2cd49c708e7363231f2c1961a
2021-09-16 22:29:54 -07:00
Tanvir Zaman
25e2578967 Fix bytes_written and bytes_read (#64244)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64244

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64040

In operator cost inference functions, in many places we use sizeof(x.data_type()). Since data_type() returns a 32-bit integer from [this enum](https://www.internalfb.com/code/fbsource/[15e7ffe4073cf08c61077c7c24a4839504b964a2]/fbcode/caffe2/caffe2/proto/caffe2.proto?lines=20), sizeof(x.data_type()) is basically always 4, no matter which data type x actually has. Big thanks to Jack Langman for specifically pointing out this bug.

We now use the size in bytes of the actual data type instead.
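
To illustrate the fix in Python terms (a sketch only; the byte-size table below is an illustrative subset, not the actual C++ mapping):

```python
from caffe2.proto import caffe2_pb2

# sizeof(x.data_type()) in C++ measures the enum's storage (always 4
# bytes); the fix sizes elements by the actual dtype instead, e.g.:
NUM_BYTES = {
    caffe2_pb2.TensorProto.INT8: 1,
    caffe2_pb2.TensorProto.FLOAT16: 2,
    caffe2_pb2.TensorProto.FLOAT: 4,
    caffe2_pb2.TensorProto.DOUBLE: 8,
}

def tensor_bytes(shape, data_type):
    n = 1
    for d in shape:
        n *= d
    return n * NUM_BYTES[data_type]
```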

Test Plan:
Added unit tests BatchMatMulMemCostTest:

buck test //caffe2/caffe2/fb/fbgemm:batch_matmul_op_test -- BatchMatMulMemCostTest

Extended existing unit test test_columnwise_concat for different data types:

buck test //caffe2/caffe2/python/operator_test:concat_op_cost_test -- test_columnwise_concat

Reviewed By: CrazySherman

Differential Revision: D30656698

fbshipit-source-id: d42c0c9a0c5b0ddc5dba39e4994f1f85a5e618bf
2021-09-01 13:35:41 -07:00
Alban Desmaison
c3464e78a4 Revert D30561459: Fix bytes_written and bytes_read
Test Plan: revert-hammer

Differential Revision:
D30561459 (e98173ff34)

Original commit changeset: 976fa5167097

fbshipit-source-id: 43f4c234ca400820fe6db5b4f37a25e14dc4b0dd
2021-08-30 14:59:54 -07:00
Tanvir Zaman
e98173ff34 Fix bytes_written and bytes_read (#64040)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64040

In operator cost inference functions, in many places we use sizeof(x.data_type()). Since data_type() returns a 32-bit integer from [this enum](https://www.internalfb.com/code/fbsource/[15e7ffe4073cf08c61077c7c24a4839504b964a2]/fbcode/caffe2/caffe2/proto/caffe2.proto?lines=20), sizeof(x.data_type()) is basically always 4, no matter which data type x actually has. Big thanks to Jack Langman for specifically pointing out this bug.

We now use the size in bytes of the actual data type instead.

Test Plan:
Added unit tests BatchMatMulMemCostTest:

buck test //caffe2/caffe2/fb/fbgemm:batch_matmul_op_test -- BatchMatMulMemCostTest

Extended existing unit test test_columnwise_concat for different data types:

buck test //caffe2/caffe2/python/operator_test:concat_op_cost_test -- test_columnwise_concat

Differential Revision: D30561459

fbshipit-source-id: 976fa5167097a35af548498480001aafd7851d93
2021-08-30 12:57:31 -07:00
Tanvir Zaman
cc6b023cba Add CostInferenceFunction for SplitOp (#63133)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63133

SplitOp is costly but lacks a cost inference function, which hurts cost-based balancing. Changes are:
(1) Addition of a CostInferenceFunction for SplitOp
(2) Small fix in the CostInferenceFunction for ConcatOp
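
As a rough illustration of why the op needs one (illustrative accounting only, not the actual CostInferenceFunction signature): a split is pure data movement, so its cost is dominated by bytes moved, not flops.

```python
def split_cost(input_shape, itemsize):
    # Split moves every input byte to exactly one output shard:
    # no flops; bytes_read == bytes_written == total input bytes.
    n_bytes = itemsize
    for d in input_shape:
        n_bytes *= d
    return {"flops": 0, "bytes_read": n_bytes, "bytes_written": n_bytes}
```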

Test Plan:
Added unit tests:

buck test //caffe2/caffe2/python/operator_test:split_op_cost_test

buck test //caffe2/caffe2/python/operator_test:concat_op_cost_test

Reviewed By: smacke

Differential Revision: D30247360

fbshipit-source-id: 989e962f3a981acc85b73aac3fb23e603b7d1591
2021-08-13 12:28:15 -07:00
Nikita Shulga
709ac6853a Fix warnings (#62930)
Summary:
Add `-Wno-writable-strings` (clang's flavor of `-Wwrite-strings`) to the list of warnings ignored while compiling torch_python.
Avoid unnecessary copies in range loops.
Fix a number of signed-unsigned comparisons.

Found while building locally on M1

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62930

Reviewed By: albanD

Differential Revision: D30171981

Pulled By: malfet

fbshipit-source-id: 25bd43dab5675f927ca707e32737ed178b04651e
2021-08-11 14:07:10 -07:00
Stephen Macke
3d3ad0a52f [easy] add an inplace argument to MutableNetProto.to_net() and core.Net() constructor (#63068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63068

The caffe2 core.Net constructor can accept a caffe2_pb2.NetDef proto, but it always creates a copy. This is wasteful when we can prove that the proto being passed to it will not be used anywhere else. So we add an "inplace" argument to the `core.Net` constructor that allows clients to give away ownership of the passed proto without copying. We default this argument to `False`, ensuring that behavior does not change unless explicitly requested.
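
A minimal usage sketch of the new opt-in (the keyword name comes from this summary; everything else is standard caffe2.python usage):

```python
from caffe2.proto import caffe2_pb2
from caffe2.python import core

proto = caffe2_pb2.NetDef()
proto.name = "my_net"

copied = core.Net(proto)               # default: the proto is deep-copied
owned = core.Net(proto, inplace=True)  # give away ownership, skip the copy
# After the second call, `proto` must not be reused anywhere else.
```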

Test Plan: Let CI run.

Differential Revision: D29976510

fbshipit-source-id: 26e13ca76f3431b8ef0de51f08bbf263491d323e
2021-08-11 11:10:52 -07:00
Pyre Bot Jr
6915bc0781 [typing] suppress errors in fbcode/caffe2 - batch 2
Test Plan: Sandcastle

Differential Revision: D30222378

fbshipit-source-id: 6a0a5d210266f19de63273240a080365c9143eb0
2021-08-10 10:26:52 -07:00
Stephen Macke
174433267c [dte] fastpath implementation for broadcast utility function (4/x) (#62493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62493

This diff adds a broadcast fastpath for the caffe2 broadcast utility function, which just copies the contents of a smaller tensor into a larger one. We also update the tests to exercise the new functionality.

Test Plan: unit tests + let CI run

Differential Revision: D29938285

fbshipit-source-id: 543ecc548500380e307be91902696033454964a2
2021-07-30 16:15:10 -07:00
Stephen Macke
956c22b1f9 [dte] fastpath implementations for mulgrad / divgrad (3/x) (#62437)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62437

In this diff we add a broadcast fastpath for MulGradient and DivGradient ops, whose tests we update to exercise the new functionality.

Test Plan: Added test cases to elementwise ops (which will exercise the new MulGradient / DivGradient broadcast fastpath functionality) that will be run by CI. It's worth noting there's still no code (outside of the new test cases) that takes the new code paths added -- the user must explicitly request `allow_broadcast_fastpath=True`, and nothing outside of the added tests currently does so.

Differential Revision: D29938273

fbshipit-source-id: 281c1a109e38c25b9bf9ff8d832de60ac3c231a9
2021-07-30 00:05:34 -07:00
Stephen Macke
eef85f89b9 [dte] broadcast fastpath implementations for reduce utility functions (2/x) (#62428)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62428

In this diff we add a broadcast fastpath for reduce utility functions. These functions are used by various elementwise ops, whose tests we update to exercise the new functionality.

Test Plan: Added test cases to elementwise ops (which will exercise the new reducer functionality) that will be run by CI. It's worth noting there's still no code (outside of the new test cases) that takes the new code paths added -- the user must explicitly request  `allow_broadcast_fastpath=True`, and nothing outside of the added tests currently does so.

Differential Revision: D29938264

fbshipit-source-id: 5d5542bd93afb85fd9f7a4073f766adc07eb3b65
2021-07-29 17:27:39 -07:00
Tanvir Zaman
df18d05429 Make bytes_read available for OperatorCost (#62059)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62059

GetOperatorCost in Workspace exposes flops and bytes_written only. Make an additional piece, bytes_read, available from OperatorSchema::Cost.
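
A usage sketch (the unpacking of the returned cost is an assumption about its shape; the summary only states that bytes_read becomes available alongside flops and bytes_written):

```python
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("X", np.random.rand(4, 8).astype(np.float32))
workspace.FeedBlob("Y", np.random.rand(4, 8).astype(np.float32))
op = core.CreateOperator("Mul", ["X", "Y"], ["Z"])

cost = workspace.GetOperatorCost(op, ["X", "Y"])
print(cost.flops, cost.bytes_written, cost.bytes_read)  # bytes_read is new
```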

Test Plan:
Added the two additional pieces in the unit test testGetOperatorCost in workspace_test

buck test caffe2/caffe2/python:workspace_test -- testGetOperatorCost

buck test //aml/ml_foundation/exp_platform/large_scale_training/distributed_hogwild/auto_device_placement/tests/...

buck test //aiplatform/training/autotuning/tests/...

buck test //aiplatform/training/pipelining/tests/...

buck test //deeplearning/fblsim/tests/...

Flow tests:

ADP Greedy: f288078287
ADP MILP: f288079278

Reviewed By: CrazySherman, xtaofb

Differential Revision: D29860676

fbshipit-source-id: 8b3a9f2bf17c0dae48cfe2800e8821bf441e0b03
2021-07-27 12:48:36 -07:00
Jamie King
1dfb687f3c Fixed off-by-one bug in Adam Smart Decay (#62135)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62135

The initial implementation of Adam with Smart Decay had an off-by-one error.  This was in the summation of the geometric series used to calculate how much built-up momentum would have been discharged in skipped minibatches.

The unit tests should have caught this, but the testing strategy missed it because k, the "number of skipped minibatches", was always either 0 or so high that the impact of the bug was too small.  The impact of the bug was proportional to 1/k.  The testing strategy has also been adjusted to cover this bug.
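
For reference, the closed form of the k-term geometric series involved (a sketch; the exact bounds used by the op are not shown in this log):

```python
def discharged_momentum_weight(beta1: float, k: int) -> float:
    # Sum of beta1**i for i = 1..k: the factor applied to built-up
    # momentum discharged over k skipped minibatches.  Shifting the
    # bounds by one (i = 0..k-1) multiplies the result by 1/beta1,
    # which is the kind of off-by-one this diff fixes.
    return beta1 * (1.0 - beta1 ** k) / (1.0 - beta1)
```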

Differential Revision: D29889309

fbshipit-source-id: b086c0efed5c27f621061e726533c73658daffc6
2021-07-26 11:55:38 -07:00
Jamie King
812bc1dde6 Smart Decay for Adam - DPER3 (#62058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62058

This is the second diff in this stack.  This diff includes the changes to DPER3; the first diff includes the changes to Caffe2.

We want to decay learning parameters properly.  Previously this was not done when a parameter was absent from a minibatch.  We fix this by keeping track of missed minibatches and making decay catch up accordingly.

The exponential moving averages (EMA) for the first and second moments used in Adam are updated only for parameters seen in a minibatch.  Properly, for the absent parameters, 0 should be added to the EMAs, and the EMAs should then be decayed by multiplying by beta1 and beta2 respectively.

To avoid the computational overhead of touching every parameter for every minibatch, we:
* keep track of the last time a parameter is seen
* instead of decaying the EMAs by multiplying by beta1 and beta2, we multiply by beta1^k and beta2^k, where k is the number of minibatches since the parameter was last seen.

We hope this will significantly improve the inconsistent learning parameter issue we have seen with Adam.
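
A minimal NumPy sketch of the catch-up rule described above (names and the surrounding Adam arithmetic are illustrative, not the exact Caffe2 operator, which also discharges built-up momentum into the weight):

```python
import numpy as np

def sparse_adam_row_update(w, m, v, grad, it, last_seen,
                           lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    k = it - last_seen - 1   # minibatches fully skipped for this row
    m = m * beta1 ** k       # one multiplication stands in for k
    v = v * beta2 ** k       # zero-gradient EMA decays
    # Ordinary Adam step for the minibatch where the row is seen.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    w = w - lr * m / (np.sqrt(v) + eps)
    return w, m, v, it       # caller stores `it` as the new last_seen
```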

Differential Revision: D29638897

fbshipit-source-id: 18d8e227d72c2e23010ca81e0f6eeb78872c8d3c
2021-07-23 13:26:30 -07:00
Nikita Shulga
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
As GoogleTest `TEST` macro is non-compliant with it as well as `DEFINE_DISPATCH`

All changes but the ones to `.clang-tidy` are generated using following script:
```
for i in `find . -type f -iname "*.c*" -or -iname "*.h"|xargs grep cppcoreguidelines-avoid-non-const-global-variables|cut -f1 -d:|sort|uniq`;  do sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
Pyre Bot Jr
d00bb45846 [typing] suppress errors in fbcode/caffe2 - batch 2
Test Plan: Sandcastle

Differential Revision: D29827809

fbshipit-source-id: 7ca7c2a33d691ac57392945b78a320d253c84ed4
2021-07-21 17:56:26 -07:00
Kaige Liu
094abf5fd0 [BE] Include a unit test for Save Operator with db_options
Summary: A test case that triggers db_options with the save operator is missing.

Test Plan: buck test

Differential Revision: D29642719

fbshipit-source-id: 72b7374d40430398abac26dfe91538550525384d
2021-07-19 12:22:59 -07:00
Jamie King
c23db9327a Smart Decay for Adam - Caffe2 (#61548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61548

We want to decay learning parameters properly.  Previously this was not done when a parameter was absent from a minibatch.  We fix this by keeping track of missed minibatches and making decay catch up accordingly.

The exponential moving averages (EMA) for the first and second moments used in Adam are updated only for parameters seen in a minibatch.  Properly, for the absent parameters, 0 should be added to the EMAs, and the EMAs should then be decayed by multiplying by beta1 and beta2 respectively.

To avoid the computational overhead of touching every parameter for every minibatch, we:
* keep track of the last time a parameter is seen
* instead of decaying the EMAs by multiplying by beta1 and beta2, we multiply by beta1^k and beta2^k, where k is the number of minibatches since the parameter was last seen
* we calculate the amount of momentum that would have been discharged over the missed minibatches and update the weight accordingly.

Differential Revision: D29654246

fbshipit-source-id: 7a6cd7966eb1f31116d99dfce79a78b2d3ee9e3e
2021-07-14 10:22:38 -07:00
Kaige Liu
58adaaba60 Enable C2 load rate limiter [2/n] (#61551)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61551

We aim to enable the rate limiter in C2 load, with a fixed bandwidth limit.
This diff updates LoadOp to pass down the manifold db options.

Test Plan:
```
buck test mode/opt caffe2/caffe2/python/operator_test:load_save_test
```

Differential Revision: D29639102

fbshipit-source-id: cf69549adadf4c7f12a8a2b7f3ca39092cab4b99
2021-07-14 08:27:05 -07:00
Nikita Shulga
f291b1899f Revert D27978269: Smart Decay for Adam - Caffe2
Test Plan: revert-hammer

Differential Revision:
D27978269 (aaa1e07609)

Original commit changeset: e47524101ddf

fbshipit-source-id: 334824bbf9a6ed788e75af9c292754081f70a19b
2021-07-10 13:09:58 -07:00
Jamie King
aaa1e07609 Smart Decay for Adam - Caffe2 (#61488)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61488

We want to decay learning parameters properly.  Previously this was not done when a parameter was absent from a minibatch.  We fix this by keeping track of missed minibatches and making decay catch up accordingly.

The exponential moving averages (EMA) for the first and second moments used in Adam are updated only for parameters seen in a minibatch.  Properly, for the absent parameters, 0 should be added to the EMAs, and the EMAs should then be decayed by multiplying by beta1 and beta2 respectively.

To avoid the computational overhead of touching every parameter for every minibatch, we:
* keep track of the last time a parameter is seen
* instead of decaying the EMAs by multiplying by beta1 and beta2, we multiply by beta1^k and beta2^k, where k is the number of minibatches since the parameter was last seen.

Differential Revision: D27978269

fbshipit-source-id: e47524101ddfcb281c46c505b9b7a8f0835bc64a
2021-07-09 18:28:21 -07:00
Feng Shi
b4a4a8434d [1/n]support double for Caffe2 ScatterWeightedSum (#60402)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60402

Add float64 data type support to ScatterWeightedSum for cases where float32's ~10^7 precision is not sufficient.
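
A quick illustration of the precision cliff (float32 carries ~24 bits of mantissa, so sub-unit increments vanish near 10^7):

```python
import numpy as np

print(np.float32(1e7) + np.float32(0.5))  # 10000000.0 -- update lost
print(np.float64(1e7) + np.float64(0.5))  # 10000000.5 -- preserved
```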

Test Plan: buck test caffe2/caffe2/python/operator_test:sparse_ops_test -- testScatterWeightedSum

Reviewed By: jianyuh

Differential Revision: D29190324

fbshipit-source-id: 871a60744694e901a2c7685a67350860745d6729
2021-06-29 14:17:04 -07:00
Adam Simpkins
fadaa52f64 [caffe2] add an EstimateAllBlobSizes operator (#59775)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59775

This operator is similar to `GetAllBlobNames` but also returns the estimated
size required to serialize each blob.

One goal of this operator is to allow checkpoint saving logic to estimate the
amount of space/bandwidth required to save a checkpoint when first starting
training, without actually serializing any blobs yet.  Currently the
checkpointing logic uses `GetAllBlobNames` to determine the blobs to
checkpoint.  It can instead be updated to use `EstimateAllBlobSizes` to also
get an estimate for how much space will be required for the checkpoint.
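
A hypothetical usage sketch (the (names, sizes) two-output arrangement mirrors `GetAllBlobNames` and is an assumption, not taken from the diff):

```python
from caffe2.python import core, workspace

op = core.CreateOperator("EstimateAllBlobSizes", [], ["names", "sizes"])
workspace.RunOperatorOnce(op)
names = workspace.FetchBlob("names")
sizes = workspace.FetchBlob("sizes")
# Size a checkpoint before serializing anything.
print("estimated:", sizes.sum(), "bytes across", len(names), "blobs")
```
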
ghstack-source-id: 132275153

Test Plan: Included a new unit test.

Reviewed By: mraway

Differential Revision: D29020227

fbshipit-source-id: 811e5d86c4b59183e84e6424c48c97739be09043
2021-06-24 16:55:22 -07:00
Baichuan Yuan
dca97b4394 Weighted decay with frequency (count-based) (#60382)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60382

Instead of setting the weight decay `w` uniformly for all ids, for each row i of the sparse embedding table the actual weight decay `w_i` becomes `w * freq_i`, where `freq_i = halflife / counter_i` is clamped to `[log(2), halflife]`. The counter comes from `rowwise_counter`, with the update `counter_i = 1 + exp(-iter_delta * rho) * counter_i`.
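
The per-row scaling from the summary as a small NumPy sketch (function and argument names are illustrative):

```python
import numpy as np

def row_wise_weight_decay(w, counter, halflife):
    # freq_i = halflife / counter_i, clamped to [log(2), halflife]
    freq = np.clip(halflife / counter, np.log(2.0), halflife)
    return w * freq  # effective per-row decay w_i

def update_counter(counter, iter_delta, rho):
    # counter_i = 1 + exp(-iter_delta * rho) * counter_i
    return 1.0 + np.exp(-iter_delta * rho) * counter
```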

Test Plan:
buck test //caffe2/caffe2/python/operator_test:adagrad_test -- test_row_wise_sparse_adagrad

buck test caffe2/caffe2/fb/dper/layer_models/tests/split_1:sparse_nn_test_weight_decay

Reviewed By: 0x10cxR1

Differential Revision: D25581030

fbshipit-source-id: 54b3831b20516c76c559b13d8deb809e2ee3b446
2021-06-21 18:46:35 -07:00
Stephen Macke
769c299dcf [caffe2] add tests for inplace elementwise ops (#60106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60106

In Caffe2, some elementwise in-place compatible ops lack coverage for the in-place case. We add tests for a subset of them here and thereby increase coverage.

Test Plan:
```
buck test //caffe2/caffe2/python/operator_test:elementwise_ops_test
```
Let CI run.

Reviewed By: clrfb

Differential Revision: D29143189

fbshipit-source-id: 83138ad8eff8fe95c40aece53714da3577396a23
2021-06-21 12:04:18 -07:00
Masaki Kozuki
c19acf816f Replace TensorRT's deprecated API in caffe2/python/trt/test_pt_onnx_trt.py (#60236)
Summary:
TensorRT v8 is going to remove some functions/methods that are used in this test.

ref:
- getMaxWorkspaceSize deprecation: b2d60b6e10/include/NvInfer.h (L6984-L6993)
- buildCudaEngine deprecation: b2d60b6e10/include/NvInfer.h (L7079-L7087)

cc ptrblck
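
The gist of the migration, sketched against the TensorRT Python API (API details vary across TRT versions; treat this as an approximation and consult the NvInfer headers linked above):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network()

# Pre-v8 pattern (removed in TensorRT 8):
#   builder.max_workspace_size = 1 << 30
#   engine = builder.build_cuda_engine(network)

# Replacement: route build options through a builder config.
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30
engine = builder.build_engine(network, config)
```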

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60236

Reviewed By: gchanan

Differential Revision: D29232376

Pulled By: ngimel

fbshipit-source-id: 2b8a48787bf61c68a81568b6026d6afd5a83e751
2021-06-19 19:56:30 -07:00
Stephen Macke
e50f264b51 [caffe2] make MulGradient implementation in-place compatible (#60035)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60035

In Caffe2, the operator schema for the MulGradient op indicates that MulGradient may be performed in-place, overwriting one of its inputs as the output. However, the implementation is not safe to perform in-place, due to an accidentally introduced write-read dependency on the overwritten input in the in-place case. We fix it here.
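
The hazard in NumPy terms (a sketch of the ordering constraint, not the actual Caffe2 kernel):

```python
import numpy as np

# For elementwise C = A * B: dA = dC * B and dB = dC * A.
def mul_gradient(dC, A, B, out_dA):
    dB = dC * A                     # must read A before it is clobbered
    np.multiply(dC, B, out=out_dA)  # out_dA may alias A (in-place case)
    return out_dA, dB

A = np.arange(4.0); B = 2.0 * np.ones(4); dC = np.ones(4)
dA, dB = mul_gradient(dC, A, B, out_dA=A)  # in-place: dA overwrites A
```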

Test Plan:
```
buck test //caffe2/caffe2/python/operator_test:elementwise_ops_test
```

Note that the newly added test fails without this change, but passes with this change:

```
    ✓ ListingSuccess: caffe2/caffe2/python/operator_test:elementwise_ops_test - main (24.992)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_exp (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_log1p (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_abs (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_bitwise_and (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_reciprocal (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_sqr (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_rsqrt (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_mul (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_sqrt (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_add (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_swish_gradient_inplace (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_sigmoid (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_bitwise_or (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_cbrt_grad (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_not (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_sub (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_div (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_eq (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_softsign (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_eq_bcast (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_powt (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
*************************************************************************************************************************************************************************************
***********************************<NEW_TEST_YAY>************************************************************************************************************************************
*************************************************************************************************************************************************************************************

   ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_mul_gradient_inplace (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)

*************************************************************************************************************************************************************************************
***********************************</NEW_TEST_YAY>***********************************************************************************************************************************
*************************************************************************************************************************************************************************************
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_hard_sigmoid (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_bitwise_xor (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_log (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_cube (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_swish (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_cbrt (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - test_div_legacy_grad (caffe2.caffe2.python.operator_test.elementwise_ops_test.TestElementwiseOps) (125.898)
    ✓ Pass: caffe2/caffe2/python/operator_test:elementwise_ops_test - main (125.898)
Summary
  Pass: 30
  ListingSuccess: 1
```

Reviewed By: clrfb

Differential Revision: D29034265

fbshipit-source-id: 98550e1d5976398e45d37ff2120591af1439c42a
2021-06-15 20:26:04 -07:00
Wei Wen
3b0c6a7b50 fix AddPadding tensor shape inference (#59572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59572

fix AddPadding tensor shape inference

Test Plan: sandcastle

Reviewed By: dehuacheng

Differential Revision: D28686983

fbshipit-source-id: 03f70335fcfd94a1241562f8fbf12043a0deac2b
2021-06-08 11:02:33 -07:00
Jeongmin Lee
bca25d97ad [itemwise-dropout][1/x][low-level module] Implement Itemwise Sparse Feature Dropout in Dper3 (#59322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59322

Implement sparse feature dropout (with replacement) that can drop out individual items in each sparse feature. For example, the existing sparse feature dropout with replacement drops out the whole feature (e.g., a list of page ids) when the feature is selected for dropout. This itemwise dropout instead assigns a dropout probability to, and drops out, individual items within each sparse feature.
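
A NumPy sketch of the distinction (replacement value and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
page_ids = np.array([101, 102, 103, 104])

def listwise_dropout(ids, p, replacement=-1):
    # Existing behavior: the whole list is kept or replaced as a unit.
    return np.full_like(ids, replacement) if rng.random() < p else ids

def itemwise_dropout(ids, p, replacement=-1):
    # This diff: each item is dropped independently with probability p.
    drop = rng.random(ids.shape) < p
    return np.where(drop, replacement, ids)
```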

Test Plan:
```
buck test mode/dev caffe2/torch/fb/sparsenn:test
```

https://www.internalfb.com/intern/testinfra/testrun/281475166777899/

```
buck test mode/dev //dper3/dper3/modules/tests:sparse_itemwise_dropout_with_replacement_test
```
https://www.internalfb.com/intern/testinfra/testrun/6473924504443423

```
buck test mode/opt caffe2/caffe2/python:layers_test
```
https://www.internalfb.com/intern/testinfra/testrun/2533274848456607

```
buck test mode/opt caffe2/caffe2/python/operator_test:sparse_itemwise_dropout_with_replacement_op_test
```
https://www.internalfb.com/intern/testinfra/testrun/8725724318782701

Reviewed By: Wakeupbuddy

Differential Revision: D27867213

fbshipit-source-id: 8e173c7b3294abbc8bf8a3b04f723cb170446b96
2021-06-04 19:59:17 -07:00
Nikita Shulga
eae84f0d5d Fix ONNX forward compatibility (#59327)
Summary:
Fixes the `onnx.utils.polish_model` not-found exception raised when running with onnx-1.9

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59327

Reviewed By: H-Huang

Differential Revision: D28840563

Pulled By: malfet

fbshipit-source-id: 403a29a88e7dee8b3414602b9fe2b31baf737dce
2021-06-02 12:39:56 -07:00
neginraoof
599f5058cf [ONNX] Update ONNX to rel-1.9 (#55889) (#57080)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57080

The ONNX optimizer is removed in ONNX 1.9.
This PR removes the ONNX optimizer from the C++ code path and uses a `try-except` block in Python to keep it compatible with both ONNX 1.8 and 1.9.

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D28467330

Pulled By: malfet

fbshipit-source-id: 5e4669dd0537648898e593f9e253da18d6dc7568

Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
2021-06-02 08:27:17 -07:00
Janet Yang
c06d2afa99 [caffe2] Add support for int32 lengths in BatchSparseToDense (#58062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58062

Make the function templated to ensure BatchSparseToDense supports int32 lengths/indices.

Test Plan:
```
buck test //caffe2/caffe2/python/operator_test:batch_sparse_to_dense_op_test
```

Reviewed By: khabinov

Differential Revision: D28271423

fbshipit-source-id: 41b88b7a3663616b533aaf4731ff35cdf6ec4c85
2021-05-26 10:33:32 -07:00
Natalia Gimelshein
db5e5781ad replace all remaining occurrences of deadline=1000, to prevent test flakiness
Summary: Per title
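
These deadlines are hypothesis's per-example `settings`; a minimal sketch of the change (the test body is illustrative):

```python
from hypothesis import given, settings, strategies as st

@settings(deadline=None)  # was: @settings(deadline=1000)
@given(st.integers())
def test_some_op(x):
    assert x == x  # slow on loaded CI hosts; no per-example deadline now
```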

Test Plan: Fixes existing tests

Reviewed By: robieta

Differential Revision: D28690296

fbshipit-source-id: d7b5b5065517373b75d501872814c89b24ec8cfc
2021-05-25 15:55:30 -07:00
Natalia Gimelshein
45aa54d83c relax test deadlines
Summary: Relax test deadlines for c2 tests. We run on loaded machines, and timings are unreliable.

Test Plan: Fixes existing tests

Reviewed By: mruberry

Differential Revision: D28690006

fbshipit-source-id: 457707e81a1ec92548c1f23ea7a0022fa0a3bfda
2021-05-25 15:02:52 -07:00
Natalia Gimelshein
056287aec4 turn off deadline for adagrad test
Summary: Tests are frequently failing with "exceeded the deadline of 1000.00ms". We expect this to happen, so remove the deadline.

Test Plan: N/A: Fix breakages

Reviewed By: robieta

Differential Revision: D28581051

fbshipit-source-id: 4825ada9af151fa5d57c45c549138c15ba613705
2021-05-20 13:47:02 -07:00
Taylor Robie
6989eb60e5 Remove timeouts for C2 tests
Summary: When run on very heavily loaded machines, some of these tests are timing out. It's not an issue with the tests; it's an issue with the environment. I've removed the timeouts so we at least keep unit test coverage.

Test Plan: N/A: Fix breakages

Reviewed By: ngimel

Differential Revision: D28492334

fbshipit-source-id: aed3ee371763161aab2d356f5623c7df053fda6f
2021-05-17 16:39:30 -07:00
Sam Estep
3507ca320b Remove unused python2 shebang (#58409)
Summary:
This is the only line (not in `third_party`) matching the regex `^#!.*python2`, and [it is not the first line of its file](https://github.com/koalaman/shellcheck/wiki/SC1128), so it has no effect. As a followup to https://github.com/pytorch/pytorch/issues/58275, this PR removes that shebang to reduce confusion, so now all Python shebangs in this repo are `python3`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58409

Reviewed By: walterddr

Differential Revision: D28478469

Pulled By: samestep

fbshipit-source-id: c17684c8651e45d3fc383cbbc04a31192d10f52f
2021-05-17 13:19:32 -07:00