Commit Graph

49 Commits

ashish
5500b62f28 Enable zero batch conv tests for ROCm (#46305)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/26669

This PR enables convolution tests for zero batch size implemented in https://github.com/pytorch/pytorch/pull/26214/.
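
For illustration, a minimal sketch (not from the PR) of the zero-batch case these tests exercise:

```python
import torch
import torch.nn as nn

# A zero batch dimension should produce an empty output with the correct
# non-batch shape instead of raising an error.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
x = torch.randn(0, 3, 16, 16)  # batch size of zero
print(conv(x).shape)  # torch.Size([0, 8, 14, 14])
```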

jamesr66a jeffdaily sunway513

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46305

Reviewed By: navahgar

Differential Revision: D24307981

Pulled By: heitorschueroff

fbshipit-source-id: dfc595fa855ae084b60a693e209b0fdcc714221d
2020-10-14 11:36:30 -07:00
Natalia Gimelshein
9201c37d02 Use addmm directly for 1x1 convolution (#45557)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45274
Based on https://github.com/pytorch/pytorch/issues/44041, this sets an intermediate for the backward computation (otherwise, the backward tests fail).
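
For illustration, roughly the equivalence being exploited; the PR's actual implementation differs in detail, and the reshaping below is only a sketch:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 4, 5, 5)        # N, C_in, H, W
weight = torch.randn(8, 4, 1, 1)   # C_out, C_in, 1, 1
bias = torch.randn(8)

ref = F.conv2d(x, weight, bias)

# A 1x1 convolution (stride 1, no padding) is a matrix multiply over the
# channel dimension, so addmm can compute it directly.
N, C_in, H, W = x.shape
C_out = weight.shape[0]
out = torch.addmm(
    bias.unsqueeze(1),                               # broadcast over columns
    weight.view(C_out, C_in),                        # C_out x C_in
    x.permute(1, 0, 2, 3).reshape(C_in, N * H * W),  # C_in x (N*H*W)
).view(C_out, N, H, W).permute(1, 0, 2, 3)

print(torch.allclose(ref, out, atol=1e-5))  # True
```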

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45557

Reviewed By: izdeby

Differential Revision: D24030655

Pulled By: ngimel

fbshipit-source-id: 368fe9440668dffc004879f8b1d2dd3787d915c9
2020-10-02 00:26:53 -07:00
lixinyu
417e3f85e5 Support tuple inputs in NN Module test (#44853)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44853

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23750441

Pulled By: glaringlee

fbshipit-source-id: 1b111a370a726b40521134b711c35f48dda99411
2020-09-28 22:05:05 -07:00
Gregory Chanan
1097fe0088 Remove CriterionTest.test_cuda code for dtype None. (#45316)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45316

It's never used.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23919449

Pulled By: gchanan

fbshipit-source-id: f9aaeeabf3940389156bfc01bc3118d348ca4cf6
2020-09-28 15:08:09 -07:00
Gregory Chanan
5d1fee23b3 Remove convert_target from NN tests. (#45291)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45291

It's not necessary; you can just check whether the dtype is integral.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23911963

Pulled By: gchanan

fbshipit-source-id: 230139e1651eb76226f4095e31068dded30e03e8
2020-09-28 14:21:42 -07:00
Brian Hirsh
439930c81b adding a beta parameter to the smooth_l1 loss fn (#44433)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44433

Not entirely sure why, but changing the type of beta from `float` to `double` in autocast_mode.cpp and FunctionsManual.h fixes my compiler errors; the build then fails at link time instead.

Fixed some type errors and updated the function signature in a few more files.

Removed my usage of Scalar, making beta a double everywhere instead.
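
For illustration, a short sketch of the semantics the new parameter adds (beta is the cutover point between the quadratic and linear regions):

```python
import torch
import torch.nn.functional as F

x, y, beta = torch.randn(10), torch.randn(10), 0.5

# Piecewise definition controlled by beta: quadratic below beta, linear above.
diff = (x - y).abs()
manual = torch.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta).mean()

print(torch.allclose(manual, F.smooth_l1_loss(x, y, beta=beta)))  # True
```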

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D23636720

Pulled By: bdhirsh

fbshipit-source-id: caea2a1f8dd72b3b5fd1d72dd886b2fcd690af6d
2020-09-25 16:36:28 -07:00
Gao, Xiang
3f5eee666c Adjust TF32 tests (#44240)
Summary:
- The thresholds of some tests are bumped up. Depending on the random generator, these tests sometimes fail with errors like "0.0059 is not smaller than 0.005". I ran `test_nn.py` and `test_torch.py` 10+ times to check that these are no longer flaky.
- Add `tf32_on_and_off` to new `matrix_exp` tests.
- Disable TF32 on test suites other than `test_nn.py` and `test_torch.py`
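
For context, a minimal sketch (assuming an Ampere-or-newer GPU) of the global TF32 switches that decorators like `tf32_on_and_off` toggle around a test:

```python
import torch

if torch.cuda.is_available():
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")

    # TF32 trades matmul precision for speed; tests comparing against
    # higher-precision references need looser tolerances when it is on.
    torch.backends.cuda.matmul.allow_tf32 = True
    tf32 = a @ b
    torch.backends.cuda.matmul.allow_tf32 = False
    full = a @ b
    print((tf32 - full).abs().max())  # nonzero on Ampere+ GPUs
```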

cc: ptrblck

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44240

Reviewed By: mruberry

Differential Revision: D23882498

Pulled By: ngimel

fbshipit-source-id: 44a9ec08802c93a2efaf4e01d7487222478b6df8
2020-09-24 10:25:58 -07:00
Gao, Xiang
2111ec3bf3 CUDA BFloat16 losses (#45011)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45011

Reviewed By: mruberry

Differential Revision: D23805840

Pulled By: ngimel

fbshipit-source-id: 3eb60d4367c727100763879e20e9df9d58bf5ad6
2020-09-21 22:51:17 -07:00
Gregory Chanan
a6895d43b6 Turn on gradgrad check for BCELoss Criterion Tests. (#44894)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44894

Looks like we added double backwards support but only turned on the ModuleTests.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23762544

Pulled By: gchanan

fbshipit-source-id: b5cef579608dd71f3de245c4ba92e49216ce8a5e
2020-09-21 07:14:22 -07:00
Gregory Chanan
07b7e44ed1 Stop using check_criterion_jacobian. (#44786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44786

This predates gradcheck, which does the same and more.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23731902

Pulled By: gchanan

fbshipit-source-id: 425fd30e943194f63a663708bada8960265b8f05
2020-09-18 07:04:57 -07:00
Gregory Chanan
6d178f6b8e Stop ignoring errors in cuda nn module tests. (#44783)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44783

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23731778

Pulled By: gchanan

fbshipit-source-id: 32df903a9e36bbf3f66645ee2d77efa5ed6ee429
2020-09-18 07:03:41 -07:00
Gregory Chanan
192c4111a3 Simplify target handling in nn gradcheck. (#44507)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44507

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23635799

Pulled By: gchanan

fbshipit-source-id: 75090d6a48771e5c92e737a0829fbfa949f7c8a7
2020-09-11 13:25:59 -07:00
Gregory Chanan
5579b53a7f Fix SmoothL1Loss when target.requires_grad is True. (#44486)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44486

SmoothL1Loss had a completely different (and incorrect, see #43228) path when target.requires_grad was True.

This PR does the following:

1) adds derivative support for target via the normal derivatives.yaml route
2) kill the different (and incorrect) path for when target.requires_grad was True
3) modify the SmoothL1Loss CriterionTests to verify that the target derivative is checked.
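
For illustration, a sketch of the kind of check point 3 enables, with the target itself requiring grad:

```python
import torch
from torch.autograd import gradcheck

inp = torch.randn(4, dtype=torch.double, requires_grad=True)
target = torch.randn(4, dtype=torch.double, requires_grad=True)

# After this fix, the derivative w.r.t. target goes through the normal
# autograd route and can be verified with gradcheck.
print(gradcheck(torch.nn.functional.smooth_l1_loss, (inp, target)))  # True
```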

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23630699

Pulled By: gchanan

fbshipit-source-id: 0f94d1a928002122d6b6875182867618e713a917
2020-09-11 12:13:36 -07:00
Gregory Chanan
3de2c0b42f Fix L1Loss when target.requires_grad is True. (#44471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44471

L1Loss had a completely different (and incorrect, see #43228) path when target.requires_grad was True.

This PR does the following:

1) adds derivative support for target via the normal derivatives.yaml route
2) kill the different (and incorrect) path for when target.requires_grad was True
3) modify the L1Loss CriterionTests to verify that the target derivative is checked.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23626008

Pulled By: gchanan

fbshipit-source-id: 2828be16b56b8dabe114962223d71b0e9a85f0f5
2020-09-11 09:51:16 -07:00
Gregory Chanan
d07d25a8c5 Fix MSELoss when target.requires_grad is True. (#44437)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44437

MSELoss had a completely different (and incorrect, see https://github.com/pytorch/pytorch/issues/43228) path when target.requires_grad was True.

This PR does the following:
1) adds derivative support for target via the normal derivatives.yaml route
2) kill the different (and incorrect) path for when target.requires_grad was True
3) modify the MSELoss CriterionTests to verify that the target derivative is checked.

TODO:
1) do we still need check_criterion_jacobian when we run grad/gradgrad checks?
2) ensure the Module tests check when target.requires_grad
3) do we actually test when reduction='none' and reduction='mean'?

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D23612166

Pulled By: gchanan

fbshipit-source-id: 4f74d38d8a81063c74e002e07fbb7837b2172a10
2020-09-11 08:51:28 -07:00
Gregory Chanan
c8914afdfa Merge criterion_tests and new_criterion_tests. (#44398)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44398

These end up executing the same tests, so no reason to have them separate.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23600855

Pulled By: gchanan

fbshipit-source-id: 0952492771498bf813f1bf8e1d7c8dce574ec965
2020-09-10 08:29:59 -07:00
Gregory Chanan
fa158c4ca6 Combine criterion and new criterion tests in test_jit. (#43958)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43958

There is no difference between these tests (I'm merging them elsewhere), so let's merge them in the JIT as well.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23452337

Pulled By: gchanan

fbshipit-source-id: e6d13cdb164205eec3dbb7cdcd0052b02c961778
2020-09-10 08:28:14 -07:00
Gregory Chanan
af9cad761a Stop ignoring NotImplementedErrors in cuda CriterionTests. (#44381)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44381

Perhaps this was necessary when the test was originally introduced, but it's difficult to figure out what is actually tested. And I don't think we actually use NotImplementedErrors.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23598646

Pulled By: gchanan

fbshipit-source-id: aa18154bfc4969cca22323e61683a301198823be
2020-09-10 08:18:33 -07:00
Gregory Chanan
49215d7f26 For CriterionTests, have check_gradgrad actually only affect gradgrad checks. (#44060)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44060

Right now it skips grad checks as well.

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23484018

Pulled By: gchanan

fbshipit-source-id: 24a8f1af41f9918aaa62bc3cd78b139b2f8de1e1
2020-09-03 12:29:32 -07:00
Gregory Chanan
5973b44d9e Rename NewCriterionTest to CriterionTest. (#44056)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44056

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23482573

Pulled By: gchanan

fbshipit-source-id: dde0f1624330dc85f48e5a0b9d98fb55fdb72f68
2020-09-03 10:29:20 -07:00
Gregory Chanan
cae52b4036 Merge CriterionTest into NewCriterionTest. (#44055)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44055

There is no functional change here.  Another patch will rename NewCriterionTest to CriterionTest.

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23482572

Pulled By: gchanan

fbshipit-source-id: de364579067e2cc9de7df6767491f8fa3a685de2
2020-09-03 08:14:34 -07:00
Gregory Chanan
68a1fbe308 Allow criterion backwards test on modules requiring extra args (i.e. CTCLoss). (#44050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44050

We don't actually turn on the CTCLoss tests, since they fail, but this allows you to toggle check_forward_only so the code actually runs.

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23481091

Pulled By: gchanan

fbshipit-source-id: f2a3b0a2dee27341933c5d25f1e37a878b04b9f6
2020-09-03 07:41:21 -07:00
Gregory Chanan
5f89aa36cf Actually run backward criterion tests. (#44030)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44030

This looks to have been a mistake from https://github.com/pytorch/pytorch/pull/9287.

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23476274

Pulled By: gchanan

fbshipit-source-id: 81ed9d0c9a40d49153fc97cd69fdcd469bec0c73
2020-09-03 07:39:13 -07:00
Gregory Chanan
c61a16b237 Kill dead code in common_nn as part of merging Criterion and NewCriterionTests. (#43956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43956

See https://github.com/pytorch/pytorch/pull/43769 and https://github.com/pytorch/pytorch/pull/43776 for proof this code is dead.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23452217

Pulled By: gchanan

fbshipit-source-id: 6850aab2daaa1c321a6b7714f6f113f364f41973
2020-09-02 07:54:05 -07:00
Gao, Xiang
5e97f251a8 Enable TF32 support for cuDNN (#40737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40737

Reviewed By: mruberry

Differential Revision: D22801525

Pulled By: ngimel

fbshipit-source-id: ac7f7e728b4b3e01925337e8c9996f26a6433fd2
2020-09-01 15:34:24 -07:00
Xiaomeng Yang
4ae832e106 Optimize SiLU (Swish) op in PyTorch (#42976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42976

Optimize SiLU (Swish) op in PyTorch.

Some benchmark result

input = torch.rand(1024, 32768, dtype=torch.float, device="cpu")
forward: 221ms -> 133ms
backward: 600ms -> 170ms

input = torch.rand(1024, 32768, dtype=torch.double, device="cpu")
forward: 479ms -> 297ms
backward: 1438ms -> 387ms

input = torch.rand(8192, 32768, dtype=torch.float, device="cuda")
forward: 24.34ms -> 9.83ms
backward: 97.05ms -> 29.03ms

input = torch.rand(4096, 32768, dtype=torch.double, device="cuda")
forward: 44.24ms -> 30.15ms
backward: 126.21ms -> 49.68ms
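
For reference, the identity the optimized kernel computes (a sketch; `F.silu` assumes a PyTorch build where the op is exposed under that name):

```python
import torch
import torch.nn.functional as F

# SiLU (a.k.a. Swish): silu(x) = x * sigmoid(x). The optimized kernel fuses
# this rather than running two separate elementwise ops.
x = torch.rand(1024, 32768, dtype=torch.float, device="cpu")
print(torch.allclose(F.silu(x), x * torch.sigmoid(x)))  # True
```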

Test Plan: buck test mode/dev-nosan //caffe2/test:nn -- "SiLU"

Reviewed By: houseroad

Differential Revision: D23093593

fbshipit-source-id: 1ba7b95d5926c4527216ed211a5ff1cefa3d3bfd
2020-08-16 13:21:57 -07:00
Kurt Mohler
ec683299eb Reland Add non-deterministic alert to CUDA operations that use atomicAdd() (#41538)
Summary:
Reland PR https://github.com/pytorch/pytorch/issues/40056

A new overload of upsample_linear1d_backward_cuda was added in a recent commit, so I had to add the nondeterministic alert to it.
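
For context, a sketch of how the alert surfaces to users; the toggle has been renamed across releases (`torch.set_deterministic` then, `torch.use_deterministic_algorithms` now), so treat the exact call as an assumption:

```python
import torch
import torch.nn.functional as F

if torch.cuda.is_available():
    torch.use_deterministic_algorithms(True)
    x = torch.randn(2, 3, 8, device="cuda", requires_grad=True)
    y = F.interpolate(x, scale_factor=2, mode="linear", align_corners=False)
    # upsample_linear1d's CUDA backward uses atomicAdd, so deterministic
    # mode makes this raise instead of silently being nondeterministic.
    y.sum().backward()  # RuntimeError
```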

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41538

Reviewed By: zou3519

Differential Revision: D22608376

Pulled By: ezyang

fbshipit-source-id: 54a2aa127e069197471f1feede6ad8f8dc6a2f82
2020-07-22 13:12:29 -07:00
Shen Li
954c260061 Revert D22480638: [pytorch][PR] Add non-deterministic alert to CUDA operations that use atomicAdd()
Test Plan: revert-hammer

Differential Revision:
D22480638 (6ff306b8b5)

Original commit changeset: 4cc913cb3ca6

fbshipit-source-id: e47fa14b5085bb2b74a479bd0830efc2d7604eea
2020-07-15 12:10:05 -07:00
Kurt Mohler
6ff306b8b5 Add non-deterministic alert to CUDA operations that use atomicAdd() (#40056)
Summary:
Issue https://github.com/pytorch/pytorch/issues/15359

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40056

Differential Revision: D22480638

Pulled By: ezyang

fbshipit-source-id: 4cc913cb3ca6d4206de80f4665bbc9031aa3ca01
2020-07-15 10:57:32 -07:00
Natalia Gimelshein
155fb22e77 Run single-threaded gradgradcheck in testnn (#41147)
Summary:
Reland https://github.com/pytorch/pytorch/issues/40999

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41147

Reviewed By: mruberry

Differential Revision: D22450357

Pulled By: ngimel

fbshipit-source-id: 02b6e020af5e6ef52542266bd9752b9cfbec4159
2020-07-08 22:53:27 -07:00
Brian Vaughan
a04af4dccb Revert D22396896: [pytorch][PR] run single-threaded gradgradcheck in test_nn
Test Plan: revert-hammer

Differential Revision:
D22396896 (dac63a13cb)

Original commit changeset: 3b247caceb65

fbshipit-source-id: 90bbd71ca5128a7f07fe2907c061ee0922d16edf
2020-07-07 07:43:39 -07:00
Natalia Gimelshein
dac63a13cb run single-threaded gradgradcheck in test_nn (#40999)
Summary:
The most time-consuming tests in test_nn (taking about half the total time) were gradgradchecks on Conv3d. Reduce their sizes and, most importantly, run gradgradcheck single-threaded, because that cuts the time of the Conv3d tests by an order of magnitude while barely affecting other tests.
These changes bring test_nn time down from 1200 s to ~550 s on my machine.
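
For illustration, a minimal sketch of the pattern (module and sizes here are made up):

```python
import torch
from torch.autograd import gradgradcheck

# Intra-op parallelism has high per-op overhead for the many small ops that
# gradgradcheck issues, so a single thread is much faster for these checks.
torch.set_num_threads(1)

conv = torch.nn.Conv3d(2, 2, 2).double()
x = torch.randn(1, 2, 4, 4, 4, dtype=torch.double, requires_grad=True)
print(gradgradcheck(lambda t: conv(t), (x,)))  # True
```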

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40999

Differential Revision: D22396896

Pulled By: ngimel

fbshipit-source-id: 3b247caceb65d64be54499de1a55de377fdf9506
2020-07-06 17:21:25 -07:00
Mike Ruberry
13120bf677 Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to require that either both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
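
For illustration, a minimal sketch of the new calling convention (test name and values are made up):

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class TestTolerances(TestCase):
    def test_explicit_tolerances(self):
        a = torch.tensor([1.0, 2.0])
        b = a + 1e-6
        # atol and rtol must now be given together (or not at all), and the
        # message keyword is `msg`, matching unittest.assertEqual.
        self.assertEqual(a, b, atol=1e-5, rtol=1.3e-6, msg="values diverged")

if __name__ == "__main__":
    run_tests()
```
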
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21740237

Pulled By: mruberry

fbshipit-source-id: acbc027aa1d7877a49664d94db9a5fff91a07042
2020-05-27 06:31:07 -07:00
Rohan Varma
63e545e0fe Revert D21717199: [pytorch][PR] Updates assertEqual to require atol and rtol, removes positional atol
Test Plan: revert-hammer

Differential Revision:
D21717199

Original commit changeset: 9feb856f94ee

fbshipit-source-id: bfde9c39a5ce99f0ca6183a7dde703c65b7c8259
2020-05-26 18:23:59 -07:00
Mike Ruberry
6ddca30b2d Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to require that either both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21717199

Pulled By: mruberry

fbshipit-source-id: 9feb856f94eee911b44f6c7140a1d07c1b026d3a
2020-05-26 08:30:23 -07:00
SsnL
5f2a274015 Fix conv non zero padding being applied in wrong dim (#37881)
Summary:
Turns out F.pad takes in dims in reverse order. Fixes https://github.com/pytorch/pytorch/issues/37844
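
For reference, `F.pad`'s pad tuple starts from the last dimension, which is what the buggy code got wrong. A quick sketch:

```python
import torch
import torch.nn.functional as F

x = torch.zeros(1, 1, 2, 3)  # N, C, H, W

# (left, right) pads only W; (left, right, top, bottom) pads W then H.
y = F.pad(x, (1, 1, 0, 0))
print(y.shape)  # torch.Size([1, 1, 2, 5]) -- width grew, height did not
```
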
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37881

Differential Revision: D21554011

Pulled By: soumith

fbshipit-source-id: a85a7f6db9f981d915728965903c5c57b6617c93
2020-05-14 11:56:38 -07:00
Vitaly Fedyunin
57d01be92b Replacing assertEqual with assertEqualIgnoreType wherever types mismatch (#38102)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38102

Test Plan: Imported from OSS

Differential Revision: D21477060

Pulled By: VitalyFedyunin

fbshipit-source-id: 25e0fd837ca9bfccf0ce994c80f7790c894096d4
2020-05-09 14:48:55 -07:00
lixinyu
0692804747 add slope == 0 case into standard leaky relu nn test (#37559)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37559

Test Plan: Imported from OSS

Differential Revision: D21319922

Pulled By: glaringlee

fbshipit-source-id: 212ef8e9d0f0d55a312d282693cd5990e0376c6a
2020-04-30 20:56:11 -07:00
Nikita Shulga
3b832ee2bf Use Python3 super() throughout torch.testing. (#37024)
Summary:
Hattip to ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37024

Differential Revision: D21173244

Pulled By: malfet

fbshipit-source-id: 7079703e28777d873f69bf9fd4dcbad8d53a2682
2020-04-22 09:00:28 -07:00
Mike Ruberry
71ec8b2002 Switches test_jit to use float32 as its default scalar type (#36982)
Summary:
Our test suite used to set double as its default scalar type, and when it was switched to not do so (to be more consistent with how users experience PyTorch), a few tests had to still set the default scalar type to double to function properly. Now that the jit no longer creates double tensors so frequently, it appears that test_jit no longer needs to set double as its default scalar type, too.
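
For context, a short sketch of what the default scalar type controls:

```python
import torch

print(torch.tensor([1.0]).dtype)   # torch.float32 (the default)
torch.set_default_dtype(torch.double)
print(torch.tensor([1.0]).dtype)   # torch.float64
torch.set_default_dtype(torch.float32)  # restore the default
```
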
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36982

Differential Revision: D21152120

Pulled By: mruberry

fbshipit-source-id: ea6d3c1ad55552dc5affa1fe1bd0e5189849e6d7
2020-04-21 11:23:28 -07:00
lixinyu
b55dee9fe1 fix max_pool2d cuda version Dimension out of range issue(#36046) (#36095)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36095

Test Plan: Imported from OSS

Differential Revision: D20876733

Pulled By: glaringlee

fbshipit-source-id: a2b92fd2dd0254c5443af469e3fb2faa2323e5c9
2020-04-07 01:12:00 -07:00
Will Feng (FAIAR)
2fa3c1570d Refactor C++ API parity test mechanism and turn it on in CI again (#35190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35190

The following are the main changes:
- The main logic of C++ API parity test mechanism is moved from `test/test_cpp_api_parity.py` to `test/cpp_api_parity/module_impl_check.py` and `test/cpp_api_parity/functional_impl_check.py`, so that there is a clear separation between module tests and functional tests, although they still share a lot of common utility functions which are all in `test/cpp_api_parity/utils.py`.
- Module init tests (i.e. testing whether C++ module accepts the same constructor options as the corresponding Python module) is removed and will be added again in the future.
- `cpp_constructor_args` / `cpp_options_args` / `cpp_function_call` are added as appropriate to all test params dict in `torch/testing/_internal/common_nn.py`, to indicate how to run C++ API parity test for this test params dict.
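
For illustration, a rough sketch of the shape of such a test params dict (the specific module and values are illustrative, not from the PR):

```python
# The cpp_* entries tell the parity harness how to construct and call the
# equivalent C++ API for this test case.
linear_test = dict(
    module_name='Linear',
    constructor_args=(10, 8),
    cpp_constructor_args='torch::nn::LinearOptions(10, 8)',
    input_size=(4, 10),
)
```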

Test Plan: Imported from OSS

Differential Revision: D20588198

Pulled By: yf225

fbshipit-source-id: 11238c560c8247129584b9b49df73fff40c4d81d
2020-04-03 11:20:36 -07:00
Nik Ved
35cdb78522 Make kl_div accept target in log space (#34586)
Summary:
Fixes [32520](https://github.com/pytorch/pytorch/issues/32520), implements [34536](https://github.com/pytorch/pytorch/issues/34536).

Here are some benchmarks:
```python
import torch
import torch.nn.functional as F
from IPython import get_ipython

ipython = get_ipython()

torch.set_num_threads(1)

for d in [5, 10, 20, 50, 100, 1000]:
    i = torch.rand(d, d)
    t = torch.rand(d, d)
    print(f"Size: {d}x{d}")
    ipython.magic("timeit F.kl_div(i, t, reduction='none', log_target=False)")
    ipython.magic("timeit F.kl_div(i, t.log(), reduction='none', log_target=True)")
```
Output:
```
Size: 5x5
16 µs ± 33 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
8.24 µs ± 17.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 10x10
16.7 µs ± 17.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
8.7 µs ± 20.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 20x20
17.7 µs ± 47.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
9.7 µs ± 28.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 50x50
23.6 µs ± 60.1 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
15 µs ± 33.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 100x100
42.8 µs ± 223 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
34 µs ± 17.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Size: 1000x1000
3.9 ms ± 1.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.45 ms ± 364 ns per loop (mean ± std. dev. of 7 runs, 100 loops each)

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34586

Differential Revision: D20652726

Pulled By: ezyang

fbshipit-source-id: 480697b4cd01341bbeee7514a8b812705a0600ea
2020-04-01 12:26:58 -07:00
Will Feng
2dc2933358 Move NewModuleTest and NewCriterionTest from test_nn.py to common_nn.py (#35189)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35189

Test Plan: Imported from OSS

Differential Revision: D20588197

Pulled By: yf225

fbshipit-source-id: 5a28159b653895678c250cbc0c1ddd51bc7a3123
2020-03-24 14:05:45 -07:00
rohithkrn
29b673392f [ROCm] Enable BFloat16 type for loss functions and few misc ops required for resnet50 (#34469)
Summary:
This PR enables the bfloat16 type for loss criterion ops (and the ops they depend on) and a few miscellaneous ops required to train resnet50.
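
For illustration, a minimal sketch of the newly enabled path (shown on CPU for brevity; the PR targets ROCm builds, so treat device support as an assumption):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 10, dtype=torch.bfloat16)
y = torch.randn(8, 10, dtype=torch.bfloat16)

# Loss ops now accept bfloat16 inputs on builds where the kernels exist.
print(F.mse_loss(x, y).dtype)  # torch.bfloat16
```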

iotamudelta ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34469

Differential Revision: D20348856

Pulled By: ezyang

fbshipit-source-id: 0a8f06c2169cfa3c9cf319120e27150170095f6c
2020-03-10 08:39:07 -07:00
Xiao Wang
ccf6fab65e Fix doc and type hints for "torch.add"; fix deprecated python calls in tests (#33935)
Summary:
This PR fixed the documentation for `torch.add` with alpha. It also fixed deprecated Python calls to `torch.add` and `torch.addmm` in tests, which may affect performance in *test/test_sparse.py* and *test/test_nn.py*.

cc csarofeen ptrblck
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33935

Differential Revision: D20313320

Pulled By: ngimel

fbshipit-source-id: fb08413d7e244865952e3fc0e1be7f1794ce4e9a
2020-03-06 15:53:58 -08:00
Sameer Deshmukh
5ca7bf453d Tests for verifying behaviour of BatchNorm using 0-dim batch sizes. (#32384)
Summary:
The `BatchNorm*` part of the issue (see gh-12013) seems to have been fixed on the master branch, and these tests make that concrete.

However I would appreciate comments on https://github.com/pytorch/pytorch/issues/12013#issuecomment-575871264 on whether the current behaviour is satisfactory.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32384

Differential Revision: D19704154

Pulled By: ngimel

fbshipit-source-id: 1bbbbf1ae1215a460b22cf26e6b263e518ecf60b
2020-02-03 16:58:23 -08:00
Sameer Deshmukh
ca9dc67094 0-dim batch size input for interpolate. (#32400)
Summary:
This PR adds support for 0-dim batch size input for `torch.nn.functional.interpolate` for various modes of interpolation.

Fixes part of gh-12013
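
For illustration, a minimal sketch of the supported case (not from the PR):

```python
import torch
import torch.nn.functional as F

# A zero batch dimension flows through and yields an empty output with
# correctly scaled spatial dimensions.
x = torch.empty(0, 3, 8, 8)
print(F.interpolate(x, scale_factor=2, mode="nearest").shape)
# torch.Size([0, 3, 16, 16])
```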

CC: rgommers  ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32400

Differential Revision: D19557090

Pulled By: ezyang

fbshipit-source-id: 6822f148bb47bfbcacb5e03798bf2744f24a2a32
2020-01-27 09:24:46 -08:00
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00