Commit Graph

174 Commits

Author SHA1 Message Date
Philip Meier
0809553cf0 refactor assert_close to be more modular (#67794)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67794

This change is needed to conveniently use the same comparison mechanism for our internal test suite (see #67796). The reworked version is on par with the previous one except for the ability to pass a custom message as a callable. Previously, everything was converted to a tensor, so it was fairly easy to provide consistent mismatch diagnostics to the callable. Now that comparison happens through arbitrary `Pair`s, that is no longer viable.
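
For illustration, here is a minimal sketch of such a `Pair`-based design; all names below are assumptions for exposition, not the actual internals:

```python
import abc

import torch

class Pair(abc.ABC):
    """Illustrative base class: one Pair subclass per supported input type."""

    def __init__(self, actual, expected):
        self.actual = actual
        self.expected = expected

    @abc.abstractmethod
    def compare(self) -> None:
        """Raise an AssertionError describing the mismatch, if any."""

class TensorPair(Pair):
    def compare(self) -> None:
        if not torch.allclose(self.actual, self.expected):
            raise AssertionError("Tensor-likes are not close!")

class ScalarPair(Pair):
    def compare(self) -> None:
        if self.actual != self.expected:
            raise AssertionError("Scalars are not equal!")
```

Since every `Pair` produces its own diagnostics, there is no single tensor-based mismatch report left to hand to a user-supplied callable, which is why that option was dropped.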

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D32532206

Pulled By: mruberry

fbshipit-source-id: dc847fba6a795c1766e01bc3e88b680a68287b1e
2021-11-19 12:37:16 -08:00
Jane Xu
f3e2fefe09 Actually enable PYTORCH_RETRY_TEST_CASES for linux tests (#68486)
Summary:
After noticing that CUDA mem leaks were not rerun, I realized I had forgotten to pass the env var as a Docker variable.

What a noob mistake.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68486

Reviewed By: seemethere

Differential Revision: D32501718

Pulled By: janeyx99

fbshipit-source-id: 9918d626e90bea1562a3094c6eb12cb7d86dbf6a
2021-11-17 11:50:48 -08:00
kshitij12345
885a8e53ba replace onlyOnCPUAndCUDA with onlyNativeDeviceTypes (#65201)
Summary:
Reference https://github.com/pytorch/pytorch/issues/53849

Replace `onlyOnCPUAndCUDA` with `onlyNativeDeviceTypes`, which covers `cpu`, `cuda`, and `meta`.
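
For example, a test using the new decorator might look like this (a sketch; the test class and method are made up, but the decorator and helpers live in the modules shown):

```python
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, onlyNativeDeviceTypes)
from torch.testing._internal.common_utils import TestCase, run_tests

class TestExample(TestCase):
    @onlyNativeDeviceTypes  # instantiated for cpu, cuda, and meta only
    def test_something(self, device):
        pass

instantiate_device_type_tests(TestExample, globals())

if __name__ == '__main__':
    run_tests()
```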

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65201

Reviewed By: mrshenli

Differential Revision: D31299718

Pulled By: mruberry

fbshipit-source-id: 2d8356450c035d6a314209ab51b2c237583920fd
2021-11-01 09:22:34 -07:00
Natalia Gimelshein
a72a6365c9 disallow requires_grad=True in make_tensor for integral inputs (#67149)
Summary:
per title
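
A sketch of the behavior this change enforces (the exact exception type is an assumption):

```python
import torch
from torch.testing import make_tensor

# Fine: floating point tensors can require grad.
make_tensor((2, 3), device='cpu', dtype=torch.float32, requires_grad=True)

# Disallowed after this change: autograd does not support integral dtypes.
try:
    make_tensor((2, 3), device='cpu', dtype=torch.int64, requires_grad=True)
except ValueError as e:
    print(e)
```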

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67149

Reviewed By: albanD

Differential Revision: D31928613

Pulled By: ngimel

fbshipit-source-id: 4491954c4fcd4a4e3121155d4451cc7370c27a0b
2021-10-26 16:19:28 -07:00
Jane Xu
8a65047acc [skip ci] Set test owners for everything considered with module: tests (#66865)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66865

Reviewed By: anjali411

Differential Revision: D31771147

Pulled By: janeyx99

fbshipit-source-id: 8bebe5ac2098364ef1ee93b590abb5f4455b0f89
2021-10-20 09:37:03 -07:00
Philip Meier
f9c2dc860d make layout check optional in torch.testing.assert_close() (#65419)
Summary:
In case the inputs have a different layout, `assert_close(..., check_layout=False)` converts them to strided before comparison. This is helpful if you just want to compare the values of a sparse COO / CSR tensor against a strided reference.

This keeps BC, since the default `check_layout=True` was the old, hard-coded behavior.
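
A short sketch of the new flag:

```python
import torch

strided = torch.tensor([[1.0, 0.0], [0.0, 2.0]])
sparse = strided.to_sparse()

# Fails by default: the layouts differ (torch.sparse_coo vs. torch.strided).
# torch.testing.assert_close(sparse, strided)

# Passes: with check_layout=False both inputs are converted to strided
# before their values are compared.
torch.testing.assert_close(sparse, strided, check_layout=False)
```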

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65419

Reviewed By: H-Huang

Differential Revision: D31133629

Pulled By: mruberry

fbshipit-source-id: ca8918af81fb0e0ba263104836a4c2eeacdfc7e6
2021-09-28 23:23:41 -07:00
Joel Schlosser
b7ec7d760d Generic test parametrization functionality (#60753)
Summary:
This PR plays around with the implementation & usage of a `parametrize` decorator for test parametrization, similar to `pytest.mark.parametrize`, based on previous work introducing a `_TestParametrizer` class. It works with the internal `DeviceTest` hierarchy & composes with `dtype`, `skip*`, and other decorators. Basic usage is demonstrated in `test/test_blah.py`:

```python
import unittest
from itertools import product
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, deviceCountAtLeast, ops)
from torch.testing._internal.common_methods_invocations import op_db
from torch.testing._internal.common_utils import (
    TestCase, run_tests, parametrize, instantiate_parametrized_tests, subtest)

class TestBlah(TestCase):
    @parametrize("x", range(5))
    def test_default_names(self, x):
        print('Passed in:', x)

    # Use default names but add an expected failure.
    @parametrize("x", [subtest(0, decorators=[unittest.expectedFailure]),
                       *range(1, 5)])
    def test_default_names_expected_failure(self, x):
        if x == 0:
            raise RuntimeError('Boom')
        print('Passed in:', x)

    @parametrize("bias", [False, True], name_fn=lambda b: 'bias' if b else 'no_bias')
    def test_custom_names(self, bias):
        print('Passed in:', bias)

    @parametrize("bias", [subtest(True, name='bias'),
                          subtest(False, name='no_bias')])
    def test_custom_names_alternate(self, bias):
        print('Passed in:', bias)

    @parametrize("x,y", [(1, 2), (1, 3), (1, 4)])
    def test_two_things_default_names(self, x, y):
        print('Passed in:', x, y)

    @parametrize("x", [1, 2, 3])
    @parametrize("y", [4, 5, 6])
    def test_two_things_composition(self, x, y):
        print('Passed in:', x, y)

    @parametrize("x", [subtest(0, decorators=[unittest.expectedFailure]),
                       *range(1, 3)])
    @parametrize("y", [4, 5, subtest(6, decorators=[unittest.expectedFailure])])
    def test_two_things_composition_expected_failure(self, x, y):
        if x == 0 or y == 6:
            raise RuntimeError('Boom')
        print('Passed in:', x, y)

    @parametrize("x", [1, 2])
    @parametrize("y", [3, 4])
    @parametrize("z", [5, 6])
    def test_three_things_composition(self, x, y, z):
        print('Passed in:', x, y, z)

    @parametrize("x", [1, 2], name_fn=str)
    @parametrize("y", [3, 4], name_fn=str)
    @parametrize("z", [5, 6], name_fn=str)
    def test_three_things_composition_custom_names(self, x, y, z):
        print('Passed in:', x, y, z)

    @parametrize("x,y", product(range(2), range(3)))
    def test_two_things_product(self, x, y):
        print('Passed in:', x, y)

    @parametrize("x,y", [subtest((1, 2), name='double'),
                         subtest((1, 3), name='triple'),
                         subtest((1, 4), name='quadruple')])
    def test_two_things_custom_names(self, x, y):
        print('Passed in:', x, y)

    @parametrize("x,y", [(1, 2), (1, 3), (1, 4)], name_fn=lambda x, y: '{}_{}'.format(x, y))
    def test_two_things_custom_names_alternate(self, x, y):
        print('Passed in:', x, y)

class TestDeviceBlah(TestCase):
    @parametrize("x", range(10))
    def test_default_names(self, device, x):
        print('Passed in:', device, x)

    @parametrize("x,y", [(1, 2), (3, 4), (5, 6)])
    def test_two_things(self, device, x, y):
        print('Passed in:', device, x, y)

    @deviceCountAtLeast(1)
    def test_multiple_devices(self, devices):
        print('Passed in:', devices)

    @ops(op_db)
    @parametrize("flag", [False, True], lambda f: 'flag_enabled' if f else 'flag_disabled')
    def test_op_parametrized(self, device, dtype, op, flag):
        print('Passed in:', device, dtype, op, flag)

instantiate_parametrized_tests(TestBlah)
instantiate_device_type_tests(TestDeviceBlah, globals())

if __name__ == '__main__':
    run_tests()
```

Generated tests:
```
TestBlah.test_custom_names_alternate_bias
TestBlah.test_custom_names_alternate_no_bias
TestBlah.test_custom_names_bias
TestBlah.test_custom_names_no_bias
TestBlah.test_default_names_expected_failure_x_0
TestBlah.test_default_names_expected_failure_x_1
TestBlah.test_default_names_expected_failure_x_2
TestBlah.test_default_names_expected_failure_x_3
TestBlah.test_default_names_expected_failure_x_4
TestBlah.test_default_names_x_0
TestBlah.test_default_names_x_1
TestBlah.test_default_names_x_2
TestBlah.test_default_names_x_3
TestBlah.test_default_names_x_4
TestBlah.test_three_things_composition_custom_names_1_3_5
TestBlah.test_three_things_composition_custom_names_1_3_6
TestBlah.test_three_things_composition_custom_names_1_4_5
TestBlah.test_three_things_composition_custom_names_1_4_6
TestBlah.test_three_things_composition_custom_names_2_3_5
TestBlah.test_three_things_composition_custom_names_2_3_6
TestBlah.test_three_things_composition_custom_names_2_4_5
TestBlah.test_three_things_composition_custom_names_2_4_6
TestBlah.test_three_things_composition_x_1_y_3_z_5
TestBlah.test_three_things_composition_x_1_y_3_z_6
TestBlah.test_three_things_composition_x_1_y_4_z_5
TestBlah.test_three_things_composition_x_1_y_4_z_6
TestBlah.test_three_things_composition_x_2_y_3_z_5
TestBlah.test_three_things_composition_x_2_y_3_z_6
TestBlah.test_three_things_composition_x_2_y_4_z_5
TestBlah.test_three_things_composition_x_2_y_4_z_6
TestBlah.test_two_things_composition_expected_failure_x_0_y_4
TestBlah.test_two_things_composition_expected_failure_x_0_y_5
TestBlah.test_two_things_composition_expected_failure_x_0_y_6
TestBlah.test_two_things_composition_expected_failure_x_1_y_4
TestBlah.test_two_things_composition_expected_failure_x_1_y_5
TestBlah.test_two_things_composition_expected_failure_x_1_y_6
TestBlah.test_two_things_composition_expected_failure_x_2_y_4
TestBlah.test_two_things_composition_expected_failure_x_2_y_5
TestBlah.test_two_things_composition_expected_failure_x_2_y_6
TestBlah.test_two_things_composition_x_1_y_4
TestBlah.test_two_things_composition_x_1_y_5
TestBlah.test_two_things_composition_x_1_y_6
TestBlah.test_two_things_composition_x_2_y_4
TestBlah.test_two_things_composition_x_2_y_5
TestBlah.test_two_things_composition_x_2_y_6
TestBlah.test_two_things_composition_x_3_y_4
TestBlah.test_two_things_composition_x_3_y_5
TestBlah.test_two_things_composition_x_3_y_6
TestBlah.test_two_things_custom_names_alternate_1_2
TestBlah.test_two_things_custom_names_alternate_1_3
TestBlah.test_two_things_custom_names_alternate_1_4
TestBlah.test_two_things_custom_names_double
TestBlah.test_two_things_custom_names_quadruple
TestBlah.test_two_things_custom_names_triple
TestBlah.test_two_things_default_names_x_1_y_2
TestBlah.test_two_things_default_names_x_1_y_3
TestBlah.test_two_things_default_names_x_1_y_4
TestBlah.test_two_things_product_x_0_y_0
TestBlah.test_two_things_product_x_0_y_1
TestBlah.test_two_things_product_x_0_y_2
TestBlah.test_two_things_product_x_1_y_0
TestBlah.test_two_things_product_x_1_y_1
TestBlah.test_two_things_product_x_1_y_2
TestDeviceBlahCPU.test_default_names_x_0_cpu
TestDeviceBlahCPU.test_default_names_x_1_cpu
TestDeviceBlahCPU.test_default_names_x_2_cpu
TestDeviceBlahCPU.test_default_names_x_3_cpu
TestDeviceBlahCPU.test_default_names_x_4_cpu
TestDeviceBlahCPU.test_default_names_x_5_cpu
TestDeviceBlahCPU.test_default_names_x_6_cpu
TestDeviceBlahCPU.test_default_names_x_7_cpu
TestDeviceBlahCPU.test_default_names_x_8_cpu
TestDeviceBlahCPU.test_default_names_x_9_cpu
TestDeviceBlahCPU.test_multiple_devices_cpu
TestDeviceBlahCPU.test_op_parametrized_<opname>_<variant>_cpu_uint8_flag_enabled_cpu
TestDeviceBlahCPU.test_two_things_x_1_y_2_cpu
TestDeviceBlahCPU.test_two_things_x_3_y_4_cpu
TestDeviceBlahCPU.test_two_things_x_5_y_6_cpu
TestDeviceBlahMETA.test_default_names_x_0_meta
TestDeviceBlahMETA.test_default_names_x_1_meta
TestDeviceBlahMETA.test_default_names_x_2_meta
TestDeviceBlahMETA.test_default_names_x_3_meta
TestDeviceBlahMETA.test_default_names_x_4_meta
TestDeviceBlahMETA.test_default_names_x_5_meta
TestDeviceBlahMETA.test_default_names_x_6_meta
TestDeviceBlahMETA.test_default_names_x_7_meta
TestDeviceBlahMETA.test_default_names_x_8_meta
TestDeviceBlahMETA.test_default_names_x_9_meta
TestDeviceBlahMETA.test_multiple_devices_meta
TestDeviceBlahMETA.test_op_parametrized_<opname>_<variant>_meta_uint8_flag_enabled_meta
TestDeviceBlahMETA.test_two_things_x_1_y_2_meta
TestDeviceBlahMETA.test_two_things_x_3_y_4_meta
TestDeviceBlahMETA.test_two_things_x_5_y_6_meta
```

Caveats:
* `parametrize` decorators cannot be "stacked" yet; each one overwrites the previous. This will change to either:
  * Allow stacking of multiple decorators
  * Error out with a nice error message if multiple decorators are specified

The PR introduces `instantiate_parametrized_tests()` in addition to `instantiate_device_type_tests()`. The former should be used for non-device-specific tests, and the latter for device-specific tests, as usual. Both support the `parametrize` decorator. Only the latter supports the `ops` decorator (no change here; this was already the case).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60753

Reviewed By: saketh-are

Differential Revision: D30606615

Pulled By: jbschlosser

fbshipit-source-id: a34f36d643f68a6e221f419d9bb3e1ae1d84dd65
2021-09-14 19:52:59 -07:00
Philip Meier
26b7ff5aea deprecate dtype getters from torch.testing namespace (#63554)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63554

Following https://github.com/pytorch/pytorch/pull/61840#issuecomment-884087809, this deprecates all the dtype getters publicly exposed in the `torch.testing` namespace. The reason for this is twofold:

1. If someone is not familiar with the C++ dispatch macros PyTorch uses, the names are misleading. For example, `torch.testing.floating_types()` will only give you `float32` and `float64`, skipping `float16` and `bfloat16`.
2. The dtype getters provide very minimal functionality that can be easily emulated by downstream libraries.

We thought about [providing a replacement](https://gist.github.com/pmeier/3dfd2e105842ad0de4505068a1a0270a), but ultimately decided against it. The major problem is BC: if we keep it, either the namespace gets messy again whenever a new dtype is added, or we need to somehow version the return values of the getters.
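
As an illustration of point 2, a downstream library can emulate a getter with a plain tuple (a sketch, not the rejected replacement linked above):

```python
import torch

# Mirrors torch.testing.floating_types(): note that it intentionally excludes
# float16 and bfloat16, which is what makes the name misleading.
FLOATING_TYPES = (torch.float32, torch.float64)

# A downstream test suite can then iterate over its own, explicit list:
for dtype in FLOATING_TYPES:
    assert torch.empty((), dtype=dtype).is_floating_point()
```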

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D30662206

Pulled By: mruberry

fbshipit-source-id: a2bdb10ab02ae665df1b5b76e8afa9af043bbf56
2021-09-07 08:58:51 -07:00
Philip Meier
eafe33c995 remove componentwise comparison of complex values in torch.testing.assert_close (#63841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63841

Closes #61906.

cc ezyang gchanan

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30633526

Pulled By: mruberry

fbshipit-source-id: ddb5d61838cd1e12d19d0093799e827344382cdc
2021-08-30 12:38:44 -07:00
Philip Meier
401bbb2aa0 remove componentwise comparison of complex values in TestCase.assertEqual (#63572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63572

Addresses #61906. The issue will be fixed later in the stack, when `torch.testing.assert_close` gets the same treatment.

cc ezyang gchanan

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30633527

Pulled By: mruberry

fbshipit-source-id: c2002a4998a7a75cb2ab83f87190bde43a9d4f7c
2021-08-30 12:36:45 -07:00
Kushashwa Ravi Shrimali
d37636901e [Doc] make_tensor to torch.testing module (#63925)
Summary:
This PR aims to add `make_tensor` to the `torch.testing` module in PyTorch docs.

TODOs:

* [x] Add examples

cc: pmeier mruberry brianjo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63925

Reviewed By: ngimel

Differential Revision: D30633487

Pulled By: mruberry

fbshipit-source-id: 8e5a1f880c6ece5925b4039fee8122bd739538af
2021-08-30 12:25:40 -07:00
Philip Meier
b1154cc774 enable equal_nan for complex values in isclose (#63571)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63571

Test Plan: Imported from OSS

Reviewed By: malfet, ngimel

Differential Revision: D30560127

Pulled By: mruberry

fbshipit-source-id: 8958121ca24e7c139d869607903aebbe87bc0740
2021-08-25 22:05:49 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Philip Meier
f16c73b9f3 Improve error messages of torch.testing.assert_close for sparse inputs (#61583)
Summary:
This utilizes the feature introduced in https://github.com/pytorch/pytorch/issues/60091 to modify the header of the error message.

Before:

```
AssertionError: Tensor-likes are not equal!

Mismatched elements: 1 / 2 (50.0%)
Greatest absolute difference: 1 at index 1
Greatest relative difference: 0.3333333432674408 at index 1

The failure occurred for the values.
```

After:

```
AssertionError: Sparse COO values of tensor-likes are not equal!

Mismatched elements: 1 / 2 (50.0%)
Greatest absolute difference: 1 at index 1
Greatest relative difference: 0.3333333432674408 at index 1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61583

Reviewed By: malfet

Differential Revision: D30014797

Pulled By: cpuhrsch

fbshipit-source-id: 66e30645e94de5c8c96510822082ff9aabef5329
2021-07-30 11:23:26 -07:00
Xue Haotian
3d6aa3a2f6 Enable torch.isclose to support bool tensors (#61271)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60533
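
A quick sketch of the newly supported call:

```python
import torch

a = torch.tensor([True, False, True])
b = torch.tensor([True, True, True])

# Previously raised for bool inputs; now compares elementwise.
print(torch.isclose(a, b))  # tensor([ True, False,  True])
```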

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61271

Reviewed By: zhxchen17

Differential Revision: D29737618

Pulled By: SplitInfinity

fbshipit-source-id: 45314bc7e0b9a28c10700455b1e6267c0db3eefc
2021-07-21 18:50:14 -07:00
Philip Meier
8ad584823f add shortcircuit in isclose for zero tolerances (#61529)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61412.

Large integers gave false positives because the comparison always takes place in floating point dtypes. This happens because the integer precision of a floating point dtype is lower than the range of an integer dtype with the same number of bits.

For non-extremal values, `isclose` is defined by the following equation:

```python
abs(a - b) <= atol + rtol * abs(b)
```

For `rtol == 0 and atol == 0`, this is equivalent to `a == b`. This PR goes for the low-hanging fruit and adds a shortcut for this case that falls back to an actual equality check.
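
A sketch of the false positive described above, with values chosen so that they collide after casting to `float64`:

```python
import torch

a = torch.tensor(2**53, dtype=torch.int64)
b = torch.tensor(2**53 + 1, dtype=torch.int64)

print(float(a) == float(b))  # True: the values are indistinguishable as floats
# With the shortcut, zero tolerances fall back to an exact equality check:
print(torch.isclose(a, b, rtol=0, atol=0))  # tensor(False)
```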

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61529

Reviewed By: gchanan

Differential Revision: D29707534

Pulled By: mruberry

fbshipit-source-id: 71b8c4901e9cd4f366442437e52032b0d3002b4a
2021-07-16 12:48:16 -07:00
Philip Meier
736bb26746 use rand over empty in flaky test (#61710)
Summary:
Fixes https://github.com/pytorch/pytorch/pull/61694#issuecomment-880641635. cc krshrimali.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61710

Reviewed By: anjali411

Differential Revision: D29719660

Pulled By: mruberry

fbshipit-source-id: 589574a039ad431acc7d095d452f0b3e52260208
2021-07-16 10:50:05 -07:00
Philip Meier
682ebc1dd1 remove UsageError in favor of ValueError (#61031)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61031

See https://github.com/pytorch/pytorch/pull/58916#issuecomment-868519515.

Test Plan: Imported from OSS

Reviewed By: iramazanli

Differential Revision: D29626810

Pulled By: mruberry

fbshipit-source-id: 25ddf26815f9ef82b8234d7dac811a6a13a53c54
2021-07-09 11:28:33 -07:00
Philip Meier
09c90b3589 relax type equality constraint (#60638)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60638

Initial proposal in https://github.com/pytorch/pytorch/pull/58981#issuecomment-866690334. As opposed to that proposal, this PR only relaxes the type equality constraint to a common superclass constraint, for example `torch.Tensor` vs. `torch.nn.Parameter`. Inputs that do not share a common superclass will still fail.
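
For example (a sketch of the relaxed behavior):

```python
import torch

param = torch.nn.Parameter(torch.tensor([1.0]))
plain = torch.tensor([1.0])

# torch.Tensor is a common superclass of both inputs, so this now passes.
torch.testing.assert_close(param, plain)
```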

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D29626811

Pulled By: mruberry

fbshipit-source-id: 1916c3b710d38889de7ce57eb0770c76cbbb8166
2021-07-09 11:27:32 -07:00
Philip Meier
29ecb9f90b Don't check stride by default (#60637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60637

We now have ~three out of three~  four out of four datapoints that `check_stride` will be `partial`'ed to `False`:

- `torch` test suite: https://github.com/pytorch/pytorch/pull/58981#discussion_r639514081
- `torchvision` test suite: https://github.com/pytorch/pytorch/issues/56544#issuecomment-845352605
- `kornia`: 9041c42b41/test/utils.py (L25)
- `torch.fft`: https://github.com/pytorch/pytorch/pull/60304#pullrequestreview-687882323

Given that strides are in most cases an implementation detail, IMO we should change the default to `False`. In cases where matching strides is a requirement for closeness / equality, it can always be set to `True` manually.
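
A short sketch of the new default:

```python
import torch

a = torch.arange(4.0).reshape(2, 2)
b = a.t().contiguous().t()  # same values as a, but strides (1, 2) vs. (2, 1)

torch.testing.assert_close(a, b)  # passes: strides are ignored by default
# torch.testing.assert_close(a, b, check_stride=True)  # would fail
```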

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D29556355

Pulled By: mruberry

fbshipit-source-id: 0029a44280d8f4369fbdb537dce3202eeee4b1d9
2021-07-07 09:55:36 -07:00
Philip Meier
e2a3f4b560 Use maximum of tolerances in case of mismatching dtypes (#60636)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60636

See https://github.com/pytorch/pytorch/pull/58981#issuecomment-866654600.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D29556352

Pulled By: mruberry

fbshipit-source-id: 36e97e0f338df5d17a94af078f172c668ef51ecb
2021-07-07 09:55:34 -07:00
Philip Meier
5f18ba7075 upcast to most precise dtype within their category before the comparison (#60536)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60536

`torch.isclose` does not do this for bool tensors, which results in a test failure, since subtraction (`abs(actual - expected)`) is not supported for them (see #58981). Since the `dtype` is already checked at this point, we can safely move the upcasting before `torch.isclose` is invoked.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D29556356

Pulled By: mruberry

fbshipit-source-id: 4c65fad4f06cf402d6aab9dde5b127235766d5e0
2021-07-07 09:55:32 -07:00
Philip Meier
5ac87cde30 tests for diagnostics in callable msg in torch.testing.assert_close (#60254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60254

Before, we only tested that the correct error message is returned if `msg` is passed as a callable. This adds tests that make sure that

- the inputs passed to the callable are the same inputs passed to `torch.testing.assert_close`, and
- the `diagnostics` namespace has the same attributes and types as documented.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D29556354

Pulled By: mruberry

fbshipit-source-id: 9793c6d86fda842b6329381fc03b945eee878464
2021-07-07 09:55:30 -07:00
Philip Meier
76d9e680d7 update docstring examples of torch.testing.assert_close (#60163)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60163

Changes to the default error message in case of mismatching values need to be reflected in the examples given in the docstring. Normally this should be enforced by a [`doctest`](https://docs.python.org/3/library/doctest.html). mruberry do you know why we don't have such a check?

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D29556353

Pulled By: mruberry

fbshipit-source-id: 8dbc3f566f429618811b542a059d9abde9a6530b
2021-07-07 09:55:29 -07:00
Philip Meier
9979289037 Improve error messages of torch.testing.assert_close in case of mismatching values (#60091)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60091

Closes #58383. (1) and (2) are implemented. (3) was rejected. No consensus was reached on (4) and (5).

Improvements:

- Instead of calling everything "Tensors", we now use "Scalars" and "Tensor-likes" depending on the shape. Plus, we now internally have the option to adapt this identifier, for example to report "Imaginary components of complex tensor-likes", which is even more expressive.
- The reported conditions "not close" and "not equal" are now determined based on `rtol` and `atol`.
- The number of mismatched elements and the offending indices are only reported if the inputs are not scalars.
- The allowed `rtol` and `atol` are only reported if `> 0`.

**Example 1**

```python
torch.testing.assert_close(1, 3, rtol=0, atol=1)
```

Before:

```
AssertionError: Tensors are not close!

Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 2 at 0 (up to 1 allowed)
Greatest relative difference: 0.6666666865348816 at 0 (up to 0 allowed)
```

After:

```
AssertionError: Scalars are not close!

Absolute difference: 2 (up to 1 allowed)
Relative difference: 0.6666666865348816
```

**Example 2**

```python
torch.manual_seed(0)
t = torch.rand((2, 2), dtype=torch.complex64)
torch.testing.assert_close(t, t + complex(0, 1))
```

Before:

```
AssertionError: Tensors are not close!

Mismatched elements: 4 / 4 (100.0%)
Greatest absolute difference: 1.0000000596046448 at (0, 0) (up to 1e-05 allowed)
Greatest relative difference: 0.8833684352411922 at (0, 1) (up to 1.3e-06 allowed)

The failure occurred for the imaginary part.
```

After:

```
AssertionError: Imaginary components of tensor-likes are not close!

Mismatched elements: 4 / 4 (100.0%)
Greatest absolute difference: 1.0000000596046448 at index (0, 0) (up to 1e-05 allowed)
Greatest relative difference: 0.8833684352411922 at index (0, 1) (up to 1.3e-06 allowed)
```

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D29556357

Pulled By: mruberry

fbshipit-source-id: 559d4a19ad4fc069b2b4f8cb5fc2f6058621e33d
2021-07-07 09:54:09 -07:00
Philip Meier
db1dd9e7e0 add support for quantized tensors in torch.testing.assert_close (#58926)
Summary:
This adds support for quantized tensors the same way torch.testing._internal.common_utils.TestCase.assertEqual does:

bf269fdc98/torch/testing/_internal/common_utils.py (L1314-L1341)

- `.qscheme()` is checked for equality
- `.q_scale()` and `.q_zero_point()` are checked for equality (see comment below) for `.qscheme() == torch.per_tensor_affine`
- `.q_per_channel_scales()`, `.q_per_channel_zero_points()`, and `.q_per_channel_axis()` are checked for equality (see comment below) for `.qscheme() == torch.per_channel_affine`
- values are checked with the default checks after a `.int_repr().to(torch.int32)` call

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58926

Reviewed By: jerryzh168

Differential Revision: D29483532

Pulled By: mruberry

fbshipit-source-id: 003fde7e21cf844778a879c3de0a7c84d13877bd
2021-06-30 21:43:02 -07:00
Philip Meier
44b3dc4eac resolve conjugate bit in torch.testing.assert_close (#60522)
Summary:
We need to resolve the conjugate bit for complex tensors, because otherwise we may not be able to access the imaginary component:

```python
>>> torch.tensor(complex(1, 1)).conj().imag
RuntimeError: view_as_real doesn't work on unresolved conjugated tensors.  To resolve the conjugate tensor so you can view it as real, use self.resolve_conj(); however, be warned that the resulting tensor will NOT alias the original.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60522

Reviewed By: ngimel

Differential Revision: D29353095

Pulled By: mruberry

fbshipit-source-id: c36eaf883dd55041166f692f7b1d35cd2a34acfb
2021-06-30 01:31:30 -07:00
Rong Rong (AI Infra)
7e619b9588 First step to rearrange files in tools folder (#60473)
Summary:
Changes include:
- introduced `linter/`, `testing/`, `stats/` folders in `tools/`
- move appropriate scripts into these folders
- change grepped references in the pytorch/pytorch repo

Next step
- introduce `build/` folder for build scripts

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60473

Test Plan:
- CI (this is important b/c pytorch/test-infra also relies on some script references).
- tools/tests/

Reviewed By: albanD

Differential Revision: D29352716

Pulled By: walterddr

fbshipit-source-id: bad40b5ce130b35dfd9e59b8af34f9025f3285fd
2021-06-24 10:13:58 -07:00
Philip Meier
6ea22672c4 add support for sparse tensors in torch.testing.assert_close (#58844)
Summary:
This adds support for sparse tensors the same way `torch.testing._internal.common_utils.TestCase.assertEqual` does:

5c7dace309/torch/testing/_internal/common_utils.py (L1287-L1313)

- Tensors are coalesced before comparison.
- Indices and values are compared individually.
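
A short sketch of a comparison that goes through this path:

```python
import torch

i = torch.tensor([[0, 1], [0, 1]])
v = torch.tensor([1.0, 2.0])
a = torch.sparse_coo_tensor(i, v, (2, 2))
# The same tensor, with its entries stored in the opposite order:
b = torch.sparse_coo_tensor(i.flip(1), v.flip(0), (2, 2))

# Both inputs are coalesced first; then indices and values are compared.
torch.testing.assert_close(a, b)
```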

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58844

Reviewed By: zou3519

Differential Revision: D29160250

Pulled By: mruberry

fbshipit-source-id: b0955656c2c7ff3db37a1367427ca54ca14f2e87
2021-06-23 21:59:01 -07:00
Philip Meier
7d39608a29 split TestAsserts by functionality (#58919)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58919

Instead of having one large TestAsserts test case, we split off tests for
self-contained functionality, like container or complex checking, into
separate test cases. That makes it a lot easier to keep an overview of
what is tested.

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D29259407

Pulled By: mruberry

fbshipit-source-id: 9769cb6d56c1a3790280542db398cb247986b09a
2021-06-21 20:44:23 -07:00
Philip Meier
14b0191d1f make assert_equal an example how to partial torch.testing.assert_close (#58918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58918

~Instead of a distinct `torch.testing.assert_close` and `torch.testing.assert_equal`, this makes `torch.testing.assert_equal` a special case of `torch.testing.assert_close` for `rtol=atol=0`. In this case the closeness definition `abs(actual - expected) <= atol + rtol * abs(expected)` boils down to `abs(actual - expected) <= 0`. Since `abs(x)` can never be `<0`, this is equivalent to `abs(a - b) == 0` and this again boils down to `a == b`.~

Following https://github.com/pytorch/pytorch/pull/58918#issuecomment-860642057 and some offline discussions, we opted to use `assert_equal` as an example of how to `partial` it.

This makes maintaining the module a lot easier, because we don't need to keep two functions in sync.
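
A minimal sketch of the `partial` pattern described above:

```python
import functools

import torch

# An exact-equality assert built from torch.testing.assert_close.
assert_equal = functools.partial(torch.testing.assert_close, rtol=0, atol=0)

assert_equal(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 2.0]))  # passes
```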

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D29259404

Pulled By: mruberry

fbshipit-source-id: fa1a1fa93672a7ed1c5f0e4beb0dcd45b5c14fce
2021-06-21 20:44:21 -07:00
kshitij12345
64aec8d2ca [testing] OpInfoHelper tool (#58698)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/57577

Usage:
Add an OpInfo entry to `common_methods_invocations` with `dtypes=_DYNAMIC_DTYPES`.
E.g.:
```
OpInfo('atan2',
        dtypes=_DYNAMIC_DTYPES,
        sample_inputs_func=sample_inputs_atan2,)
```

Run the helper with `python -m torch.testing._internal.opinfo_helper`

Output
```
OpInfo(atan2,
       # hint: all_types + (torch.bool,),
       dtypes=[torch.float32, torch.float64, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.bool],
       # hint: all_types + (torch.bool, torch.bfloat16, torch.float16),
       dtypesIfCUDA=[torch.float32, torch.float64, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.bool, torch.bfloat16, torch.float16],
       sample_inputs_func=sample_inputs_atan2)
```

Output without CUDA (run with `$ CUDA_VISIBLE_DEVICES=-1 python -m torch.testing._internal.opinfo_helper`)
```
UserWarning: WARNING: CUDA is not available, information pertaining to CUDA could be wrong
  warnings.warn("WARNING: CUDA is not available, information pertaining to CUDA could be wrong")
OpInfo(atan2,
       # hint: all_types + (torch.bool,),
       dtypes=[torch.float32, torch.float64, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.bool],
       sample_inputs_func=sample_inputs_atan2)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58698

Reviewed By: H-Huang

Differential Revision: D29160668

Pulled By: mruberry

fbshipit-source-id: 707370a83b451b02ad2fe539775c8c50ecf90be8
2021-06-16 17:17:03 -07:00
Sam Estep
2e26976ad3 Disallow versionless Python shebangs (#58275)
Summary:
Some machines don't have a versionless `python` on their PATH, which breaks these existing shebangs.

I'm assuming that all the existing versionless `python` shebangs are meant to be `python3` and not `python2`; please let me know if my assumption was incorrect for any of these.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58275

Test Plan: CI.

Reviewed By: zhouzhuojie

Differential Revision: D28428143

Pulled By: samestep

fbshipit-source-id: 6562be3d12924db72a92a0207b060ef740f61ebf
2021-05-14 08:26:02 -07:00
Philip Meier
9148f19e85 enable support for nested containers in torch.testing.assert(equal|close) (#57270)
Summary:
In contrast to the initial opinion in https://github.com/pytorch/pytorch/issues/55385, there are legitimate use cases for nested containers. One such example is the output of [`torch.nn.LSTM`](https://pytorch.org/docs/stable/generated/torch.nn.LSTM):

```python
lstm = torch.nn.LSTM(input_size=10, hidden_size=20)  # hypothetical sizes
output: Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]] = lstm(input)
assert_close(output, expected)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57270

Reviewed By: albanD

Differential Revision: D28249303

Pulled By: mruberry

fbshipit-source-id: 75caa4414cc184ff0ce4cfc0dd5aafddfad42bcf
2021-05-12 15:37:42 -07:00
Philip Meier
8824f49e68 Split test_testing.py::TestAsserts for multiple devices (#56365)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56365

Follow-up to https://github.com/pytorch/pytorch/pull/54784#discussion_r614156172. Instead of having one large test case where most methods are decorated with `onlyCPU`, this factors out all tests that actually need another device into a separate test case.

Test Plan: Imported from OSS

Reviewed By: walterddr, albanD

Differential Revision: D28247529

Pulled By: mruberry

fbshipit-source-id: 946e7694b70e736941565f29b5dd459ed7fbca4e
2021-05-11 19:47:56 -07:00
Philip Meier
71ca3e99af Only use actually mismatched elements for reporting in torch.testing (#57923)
Summary:
Redo of https://github.com/pytorch/pytorch/issues/57135 out of stack

 ---

Currently all values are used for the reported absolute and relative differences. This usually works fine, but breaks down for extremal values:

```python
torch.testing.assert_close(torch.tensor([1.0, 0.0]), torch.tensor([2.0, 0.0]))
```

```
[...]
Greatest absolute difference: 1.0 at 0 (up to 1e-05 allowed)
Greatest relative difference: nan at 1 (up to 1.3e-06 allowed)
```

Although the second element matches, it is listed as the offender for the greatest relative difference. The `NaN` stems from the `0 / 0` division.

To overcome this, we should only use the values that were considered a mismatch for the reported stats.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57923

Reviewed By: ngimel

Differential Revision: D28317316

Pulled By: mruberry

fbshipit-source-id: 4c604493bbe13b37f41225ea9af9e839a7304161
2021-05-10 20:58:47 -07:00
Philip Meier
0dd0151c64 add torch.testing to docs (#57247)
Summary:
Redo of https://github.com/pytorch/pytorch/issues/56373 out of stack.

 ---

To reviewers: **please be nitpicky**. I've read this so often that I probably missed some typos and inconsistencies.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57247

Reviewed By: albanD

Differential Revision: D28247402

Pulled By: mruberry

fbshipit-source-id: 71142678ee5c82cc8c0ecc1dad6a0b2b9236d3e6
2021-05-07 09:16:39 -07:00
Philip Meier
126ea1ccad relax type equality constraint for scalars (#57532)
Summary:
Currently we require type equality for `torch.testing.assert_(equal|close)`:

3db45bcb91/torch/testing/_asserts.py (L509-L513)

That means `assert_equal(1, 1.0)` will correctly fail. Although the type of a scalar is similar to the dtype of a tensor, `assert_equal(1, 1.0, check_dtype=False)` will also fail, while `assert_equal(torch.as_tensor(1), torch.as_tensor(1.0), check_dtype=False)` will pass.

To make the interface more consistent, this PR relaxes the type equality constraint, by disabling it in case both inputs are scalars.
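
With this change the scalar and tensor cases behave consistently (sketch):

```python
import torch

# Still fails: the scalar types differ (int vs. float).
# torch.testing.assert_close(1, 1.0)

# Passes after this change, mirroring the tensor behavior below.
torch.testing.assert_close(1, 1.0, check_dtype=False)
torch.testing.assert_close(torch.as_tensor(1), torch.as_tensor(1.0), check_dtype=False)
```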

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57532

Reviewed By: ngimel

Differential Revision: D28242428

Pulled By: mruberry

fbshipit-source-id: b643c77f48b64fc2c8a43925120d2b634ec336b5
2021-05-05 22:42:51 -07:00
Philip Meier
5c68072ee8 add support for complex input to torch.testing.assert_(equal|close) (#57162)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57162

Reviewed By: ngimel

Differential Revision: D28141902

Pulled By: mruberry

fbshipit-source-id: fd35e73e10167e3e44da4daf6582183bc4a0de7f
2021-05-02 16:13:12 -07:00
Philip Meier
805129f957 enable support for custom error messages in torch.testing (#55890)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55890

Proof-of-concept for https://github.com/pytorch/pytorch/pull/55145#issuecomment-817297273

With this the user is able to pass a custom error message to `assert_(equal|close)` which will be used in case the values mismatch. Optionally, a callable can be passed which will be called with mismatch diagnostics and should return an error message:

```python
def make_msg(a, b, info):
    return (
        f"Argh, we found {info.total_mismatches} mismatches! "
        f"That is {info.mismatch_ratio:.1%}!"
    )

torch.testing.assert_equal(torch.tensor(1), torch.tensor(2), msg=make_msg)
```

If you imagine `a` and `b` as the outputs of binary ufuncs, the error message could look like this:

```python
def make_msg(input, torch_output, numpy_output, info):
    return (
        f"For input {input} torch.binary_op() and np.binary_op() do not match: "
        f"{torch_output} != {numpy_output}"
    )

torch.testing.assert_equal(
    torch.binary_op(input),
    numpy.binary_op(input),
    msg=lambda a, b, info: make_msg(input, a, b, info),
)
```

This should make it much easier for developers to find out what is actually going wrong.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903842

Pulled By: mruberry

fbshipit-source-id: 4c82e3d969e9a621789018018bec6399724cf388
2021-04-24 23:37:44 -07:00
Philip Meier
edfbc989d1 add support for equal_nan in torch.testing.assert_close (#55788)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55788

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903821

Pulled By: mruberry

fbshipit-source-id: c10254b2cdc7c1ae5a31b22913136013f0472b26
2021-04-24 23:37:43 -07:00
Philip Meier
27148db5df Add support for scalars and numpy in torch.testing (#55786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55786

Add support to compare scalars as well as `np.ndarray`'s with torch.testing. We are reusing the matching functionality that is already in place for tensors by casting the inputs. The approach can easily be extended if we want to support other input types, as long as they can be cast to a tensor.
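
For example (sketch):

```python
import numpy as np
import torch

# numpy arrays and Python scalars are cast to tensors internally.
torch.testing.assert_close(np.array([1.0, 2.0]), np.array([1.0, 2.0]))
torch.testing.assert_close(1.0, 1.0 + 1e-9)  # well within default tolerances
```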

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903814

Pulled By: mruberry

fbshipit-source-id: fe3d063d0c9513cbd8b3408a2023e94c490c817e
2021-04-24 23:37:41 -07:00
Philip Meier
dbf3451c6e Add support for checking tensor containers in torch.testing (#55385)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55385

This renames `assert_tensors_(equal|close)` to `_check_tensors_(equal|close)` and exposes two new functions: `assert_(equal|close)`. In addition to tensor pairs, the newly added functions also support the comparison of tensors in sequences or mappings. Otherwise their signature stays the same.
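
For example, sequences and mappings of tensors can now be compared directly (sketch):

```python
import torch

actual = {'logits': torch.tensor([1.0, 2.0]), 'loss': torch.tensor(0.5)}
expected = {'logits': torch.tensor([1.0, 2.0]), 'loss': torch.tensor(0.5)}

# Keys are matched up and the corresponding values are compared pairwise.
torch.testing.assert_close(actual, expected)
```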

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903805

Pulled By: mruberry

fbshipit-source-id: 719d19a1d26de8d14cb25846e3d22a6ac828c80a
2021-04-24 23:36:36 -07:00
Philip Meier
d168eae114 make torch.testing error messages more expressive (#55145)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55145

Repeating the discussion from https://github.com/pytorch/pytorch/pull/54784#issuecomment-811792089

The error messages for mismatched values are directly adapted from the old `_compare_tensors_internal`:

50cb75edce/torch/testing/__init__.py (L104-L111)

A sample error message right now looks like this

```
With rtol=1.3e-06 and atol=1e-05, found 1 different element(s) out of 12 (8.3%). The greatest difference of 4.0 (5.0 vs. 9.0) occurred at index (2, 3)
```

Using the same data with `numpy.testing.assert_equal` gives the following output:

```
Not equal to tolerance rtol=1.3e-06, atol=1e-05

Mismatched elements: 1 / 12 (8.33%)
Max absolute difference: 4.
Max relative difference: 0.44444445
 x: array([[5., 5., 5., 5.],
       [5., 5., 5., 5.],
       [5., 5., 5., 5.]], dtype=float32)
 y: array([[5., 5., 5., 5.],
       [5., 5., 5., 5.],
       [5., 5., 5., 9.]], dtype=float32)
```

Pros:

- The info is presented in a list instead of a sentence. IMO this makes it more readable
- The maximum relative difference is reported, which is beneficial in case a comparison fails due to the `rtol`

Cons:

- The values of the inputs are reported (this can be disabled by passing `verbose=False`, but let's face it: most users will use the default setting). In case the inputs are large, the output gets truncated with `...`. Not only is it hard to visually find the mismatching values, they could also live within the truncated part, making the output completely useless.
- Even after visually finding the offending values, it is hard to map them back to indices in the inputs.

This implements a mix of both to get a short but expressive message:

```
Tensors are not close according to rtol=1.3e-6 and atol=1e-05:

Mismatched elements: 1 / 12 (8.3%)
Max. rel. diff.: 4.44e-1 at (2, 3)
Max. abs. diff.: 4.0 at (2, 3)
```

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D27877157

Pulled By: mruberry

fbshipit-source-id: 6898a995f116f127e3ae8ed0bcb1ada63eadc45a
2021-04-21 06:29:42 -07:00
Philip Meier
0e106fce9c add tests for torch.testing (#54784)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54784

* #54769 make torch.testing asserts importable

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D27717422

Pulled By: mruberry

fbshipit-source-id: 7526af4f17d8ffcc4ea5e5a5d98f07ceac89df40
2021-04-19 03:47:31 -07:00
Rong Rong (AI Infra)
5ed3be799d skip test_filtering_env_var for rocm (#56178)
Summary:
ROCm doesn't report the correct number of expected test device types. Skipping for now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56178

Reviewed By: seemethere

Differential Revision: D27802139

Pulled By: walterddr

fbshipit-source-id: 2e58df1a3ba2411e690be52babf946e284c4efcc
2021-04-15 13:20:03 -07:00
Rong Rong (AI Infra)
e0f9a5fed8 [BE] add test selector to test_testing (#55931)
Summary:
This is a reflection of recent failures in https://github.com/pytorch/pytorch/issues/55753 and https://github.com/pytorch/pytorch/issues/55522.
We are lacking a test to safeguard these test env vars.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55931

Test Plan:
1. CI
2. Run locally using `python test/test_testing.py -k test_filtering_env_var -v`
  - gives failure on 2ca45cb9e8 and d0cd16899f
  - passes on 159e1100bf and current master

Reviewed By: jbschlosser

Differential Revision: D27747537

Pulled By: walterddr

fbshipit-source-id: c88e1c818199c7838866037d702d4012cacf510e
2021-04-15 08:00:46 -07:00
Mike Ruberry
399b66c813 Ports logdet from method_tests() to op_db (#55743)
Summary:
Per title. Also updates some tensor construction helpers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55743

Reviewed By: ngimel

Differential Revision: D27702060

Pulled By: mruberry

fbshipit-source-id: f64b7bee855733ad1f4fd182819ceec5831d9878
2021-04-11 20:39:16 -07:00
Sam Estep
8cd4dac78f Move mypy wrapper to tools (#54268)
Summary:
This PR

- moves `torch/testing/_internal/mypy_wrapper.py` (and its accompanying tests from `test/test_testing.py`) to `tools`,
- removes the now-unused `test_run_mypy` from `test/test_type_hints.py`, and
- replaces the hardcoded list of `mypy` configs (previously duplicated across `mypy_wrapper.py` and `.github/workflows/lint.yml`) with a simpler glob

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54268

Test Plan:
Should also be run in the "Test tools" GHA workflow in CI:
```
python tools/test/test_mypy_wrapper.py
```

Reviewed By: janeyx99

Differential Revision: D27168095

Pulled By: samestep

fbshipit-source-id: a8dc18407b5e4c103ace23a636b0a8534951905a
2021-03-18 15:41:27 -07:00
Jane Xu
0645e2b490 Use shard file if present, improve functions used for sharding (#54210)
Summary:
Step 2 to fixing https://github.com/pytorch/pytorch/issues/53882 :)

This changes TARGET_DET_LIST and sharding automation by checking if there's already cached data from the commit in `.pytorch-test-times`. If not, it pulls data from S3 and updates the file to have the stats. This way, S3 pulling does not need to happen more than once for the same commit.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54210

Test Plan:
the following methods should run the same set of tests.
First `export CIRCLE_JOB=pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test2` or your favorite CIRCLE JOB.

1. Pull data first and use it:
Download the data from S3 and write it to the cache file with `python test/run_test.py --export-historic-test-times .pytorch-test-times`
Now run `python test/run_test.py --shard 1 10`

2. Make the sharding job pull data:
Delete the file you just created: `rm .pytorch-test-times`
Now run `python test/run_test.py --shard 1 10`

Reviewed By: walterddr

Differential Revision: D27136849

Pulled By: janeyx99

fbshipit-source-id: 51a42c4e2fa3f8cf15e682679dd3eb6130aad927
2021-03-18 13:25:51 -07:00
Jane Xu
2e7311ef25 First step to refactoring S3 reading logic (#53755)
Summary:
This is an initial attempt at refactoring and consolidating our S3 read logic for print_test_stats.py, test_history.py, and run_test.py. This way, boto3 and botocore do not need to be imported in various places throughout the code base, and duplicated logic (such as the many type definitions) can exist in one place: `tools/stat_utils/s3_stat_parser.py`. walterddr contributed to this PR by moving print_test_stats.py to the tools folder and the corresponding tests to a subfolder within tools.

**NOTE: this removes those tests from CI as the new `tools/test/test_stats.py` is not in the test/ directory as the other tests in TESTS in run_test.py.**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53755

Test Plan:
This refactoring change should not break anything, so running the files as before should work as they did previously.
To make sure that print_test_stats.py still functions: run `python tools/test/test_stats.py` and make sure all tests pass.
To make sure that test_history.py works, run the example commands from `tools/test_history.py --help` and check that their output matches that shown. Note that the script will continue printing for a while, so don't be alarmed.

Some next steps:
- Actually coming up with similarities among the three current use cases and further refactoring/consolidating of functions (e.g., combining simplify and get_cases)
- Moving more parsing logic to s3_stat_parser.py to have better abstraction between our files
- Adding tests for s3_stat_parser.py when there is more functionality in it

Reviewed By: agolynski, samestep

Differential Revision: D27030285

Pulled By: janeyx99

fbshipit-source-id: e664781324ef7c0c30943bfd7f17c895075ef7a7
2021-03-17 12:38:09 -07:00
Sam Estep
c0fafcc766 Don't actually print anomalies in TTRR (#54078)
Summary:
This PR disables the bulk of the output for test time regression reporting, since it's obscuring more important signal (especially in cases where shards are shifting around).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54078

Test Plan:
```
python test/test_testing.py
```

Reviewed By: ezyang, walterddr

Differential Revision: D27088987

Pulled By: samestep

fbshipit-source-id: 06a4eeb75641552bad2ab4b9154a8c70c57b0d68
2021-03-16 14:26:32 -07:00
Jane Xu
ee35060888 Fix sharding algo + test it (#53942)
Summary:
This PR:
1. moves the sharding algorithm from run_test.py to framework_utils.py (let me know if you have a better place for it)
2. adds tests for the algorithm in test_testing.py
3. fixes the algorithm so that it doesn't tack all the unknown jobs onto the shard with the minimum time, but instead distributes them around the shards (see the sketch below).
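
A hedged sketch of the fixed algorithm in (3); the function name and signature are assumptions, not the actual framework_utils code:

```python
import heapq
from typing import Dict, List, Optional, Tuple

def shard_jobs(job_times: Dict[str, Optional[float]], num_shards: int) -> List[List[str]]:
    heap: List[Tuple[float, int]] = [(0.0, i) for i in range(num_shards)]
    heapq.heapify(heap)
    shards: List[List[str]] = [[] for _ in range(num_shards)]
    known = [(t, name) for name, t in job_times.items() if t is not None]
    unknown = [name for name, t in job_times.items() if t is None]
    # Greedy: the longest known job goes to the currently smallest shard.
    for t, name in sorted(known, reverse=True):
        total, idx = heapq.heappop(heap)
        shards[idx].append(name)
        heapq.heappush(heap, (total + t, idx))
    # The fix: round-robin jobs with unknown times across all shards instead
    # of piling them all onto the shard with the minimum time.
    for i, name in enumerate(unknown):
        shards[i % num_shards].append(name)
    return shards
```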

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53942

Test Plan: python test/test_testing.py -k TestFrameworkUtils

Reviewed By: samestep

Differential Revision: D27047223

Pulled By: janeyx99

fbshipit-source-id: 824b20009c0bb707aa5361de445cdec795d5e3f1
2021-03-15 16:33:56 -07:00
Jane Xu
09ce9b5877 Store test file in S3 as well for every TestSuite (#52869)
Summary:
We want to store the file names that trigger each test suite so that we can use this data for categorizing those test files.

~~After considering several solutions, this one is the most backwards compatible, and the current test cases in test_testing.py for print test stats don't break.~~

The previous plan did not work, as there are multiple Python test jobs that spawn the same suites. Instead, the new S3 format will store test files (e.g., `test_nn` and `distributed/test_distributed_fork`), which will contain the suites they spawn, which will in turn contain the test cases run within each suite. (Currently, there is no top layer of test files.)

Because of this major structural change, a lot of changes have now been made (thank you samestep!) to test_history.py and print_test_stats.py to make this new format backwards compatible.

Old test plan:
Make sure that the data is as expected in S3 after https://github.com/pytorch/pytorch/pull/52873 finishes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52869

Test Plan: Added tests to test_testing.py which pass, and CI.

Reviewed By: samestep

Differential Revision: D26672561

Pulled By: janeyx99

fbshipit-source-id: f46b91e16c1d9de5e0cb9bfa648b6448d979257e
2021-03-02 07:36:00 -08:00
Heitor Schueroff
08d7f29601 Add discontiguous kwarg to make_tensor (#51985)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51985

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26375733

Pulled By: heitorschueroff

fbshipit-source-id: bb7831dc28c24b90c6f83885681eeccfdbb83438
2021-02-24 08:57:24 -08:00
Rong Rong (AI Infra)
e8ab58bfc7 [reland] Early terminate CUDA on common_utils TestCases (#52126)
Summary:
Take 2 of https://github.com/pytorch/pytorch/issues/50914
This change moves the early termination logic into common_utils.TestCase class.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52126

Test Plan: CI with ci-all tag

Reviewed By: malfet

Differential Revision: D26391762

Pulled By: walterddr

fbshipit-source-id: a149ecc47ccda7f2795e107fb95915506ae060b4
2021-02-12 07:32:42 -08:00
Nikita Shulga
9f1f5636d7 Revert D26019289: [pytorch][PR] Early terminate CUDA on common_utils TestCases
Test Plan: revert-hammer

Differential Revision:
D26019289 (c1b7ca8062)

Original commit changeset: ddc7c1c0d00d

fbshipit-source-id: 6902d03fa06cda5d03191846bc4dd98af501b594
2021-02-10 17:29:10 -08:00
Sam Estep
ce8ba5f3bc Fix test time history report if no ancestor report (#52054)
Summary:
This fixes an issue (currently blocking https://github.com/pytorch/pytorch/issues/51905) where the test time regression reporting step will fail if none of the most recent `master` ancestors have any reports in S3 (e.g. if a new job is added).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52054

Test Plan:
```
python test/test_testing.py
```

Reviewed By: walterddr

Differential Revision: D26369507

Pulled By: samestep

fbshipit-source-id: 4c4e1e290cb943ce8fcdadacbf51d66b31c3262a
2021-02-10 11:02:46 -08:00
Rong Rong (AI Infra)
c1b7ca8062 Early terminate CUDA on common_utils TestCases (#50914)
Summary:
This is a follow-up to https://github.com/pytorch/pytorch/issues/49869.

Previously, CUDA early termination only happened for generic test classes that extend from `DeviceTypeTestBase`. However, JIT test cases, which extend from common_utils.TestCase, could not benefit from the early termination.

This change moves the early termination logic into common_utils.TestCase class.
- All tests extended from common_utils.TestCase should now terminate early if a CUDA assert occurs.
- For TestCases that extend from common_device_type.DeviceTypeTestBase, torch.cuda.synchronize() is still only called when a RuntimeError is thrown.
- For TestCases that extend common_utils.TestCase, regardless of whether a test case uses the GPU or not, CUDA will always be synchronized as long as `torch.cuda.is_initialized()` returns true.
- This is disabled in common_distributed.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50914

Reviewed By: malfet

Differential Revision: D26019289

Pulled By: walterddr

fbshipit-source-id: ddc7c1c0d00db4d073a6c8bc5b7733637a7e77d1
2021-02-10 07:15:40 -08:00
Sam Estep
21ef248fb8 [reland] Report test time regressions (#50171)
Summary:
This is a followup to https://github.com/pytorch/pytorch/issues/49190. Vaguely speaking, the goals are to make it easy to identify test time regressions introduced by PRs. Eventually the hope is to use this information to edit Dr CI comments, but this particular PR just does the analysis and prints it to stdout, so a followup PR would be needed to edit the actual comments on GitHub.

**Important:** for uninteresting reasons, this PR moves the `print_test_stats.py` file.

- *Before:* `test/print_test_stats.py`
- *After:* `torch/testing/_internal/print_test_stats.py`

Notes on the approach:

- Just getting the mean and stdev for the total job time of the last _N_ commits isn't sufficient, because e.g. if `master` was broken 5 commits ago, then a lot of those job times will be much shorter, breaking the statistics.
- We use the commit history to make better estimates for the mean and stdev of individual test (and suite) times, but only when the test in that historical commit is present and its status matches that of the base commit.
- We list all the tests that were removed or added, or whose status changed (e.g. skipped to not skipped, or vice versa), along with time (estimate) info for that test case and its containing suite.
- We don't list tests whose time changed a lot if their status didn't change, because there's a lot of noise and it's unclear how to do that well without too many false positives.
- We show a human-readable commit graph that indicates exactly how many commits are in the pool of commits that could be causing regressions (e.g. if a PR has multiple commits in it, or if the base commit on `master` doesn't have a report in S3).
- We don't show an overall estimate of whether the PR increased or decreased the total test job time, because it's noisy and it's a bit tricky to aggregate stdevs up from individual tests to the whole job level. This might change in a followup PR.
- Instead, we simply show a summary at the bottom which says how many tests were removed/added/modified (where "modified" means that the status changed), and our best estimates of the mean times (and stdevs) of those changes.
- Importantly, the summary at the bottom is only for the test cases that were already shown in the more verbose diff report, and does not include any information about tests whose status didn't change but whose running time got much longer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50171

Test Plan:
To run the unit tests:
```
$ python test/test_testing.py
$ python test/print_test_stats.py
```

To verify that this works, check the [CircleCI logs](https://app.circleci.com/pipelines/github/pytorch/pytorch/258628/workflows/9cfadc34-e042-485e-b3b3-dc251f160307) for a test job run on this PR; for example:
- pytorch_linux_bionic_py3_6_clang9_test

To test locally, use the following steps.

First run an arbitrary test suite (you need to have some XML reports so that `test/print_test_stats.py` runs, but we'll be ignoring them here via the `--use-json` CLI option):
```
$ DATA_DIR=/tmp
$ ARBITRARY_TEST=testing
$ python test/test_$ARBITRARY_TEST.py --save-xml=$DATA_DIR/test/test_$ARBITRARY_TEST
```
Now choose a commit and a test job (it has to be on `master` since we're going to grab the test time data from S3, and [we only upload test times to S3 on the `master`, `nightly`, and `release` branches](https://github.com/pytorch/pytorch/pull/49645)):
```
$ export CIRCLE_SHA1=c39fb9771d89632c5c3a163d3c00af3bef1bd489
$ export CIRCLE_JOB=pytorch_linux_bionic_py3_6_clang9_test
```
Download the `*.json.bz2` file(s) for that commit/job pair:
```
$ aws s3 cp s3://ossci-metrics/test_time/$CIRCLE_SHA1/$CIRCLE_JOB/ $DATA_DIR/ossci-metrics/test_time/$CIRCLE_SHA1/$CIRCLE_JOB --recursive
```
And feed everything into `test/print_test_stats.py`:
```
$ bzip2 -kdc $DATA_DIR/ossci-metrics/test_time/$CIRCLE_SHA1/$CIRCLE_JOB/*Z.json.bz2 | torch/testing/_internal/print_test_stats.py --compare-with-s3 --use-json=/dev/stdin $DATA_DIR/test/test_$ARBITRARY_TEST
```
The first part of the output should be the same as before this PR; here is the new part, at the end of the output:

- https://pastebin.com/Jj1svhAn

Reviewed By: malfet, izdeby

Differential Revision: D26317769

Pulled By: samestep

fbshipit-source-id: 1ba06cec0fafac77f9e7341d57079543052d73db
2021-02-08 15:35:21 -08:00
Sam Estep
21dccbca62 Revert D26232345: [pytorch][PR] Report test time regressions
Test Plan: revert-hammer

Differential Revision:
D26232345 (7467f90b13)

Original commit changeset: b687b1737519

fbshipit-source-id: 10a031c5500b083f7c82f2ae2743b671c5a07bff
2021-02-08 10:15:07 -08:00
Sam Estep
7467f90b13 Report test time regressions (#50171)
Summary:
This is a followup to https://github.com/pytorch/pytorch/issues/49190. Broadly speaking, the goal is to make it easy to identify test time regressions introduced by PRs. Eventually the hope is to use this information to edit Dr CI comments, but this particular PR just does the analysis and prints it to stdout, so a followup PR would be needed to edit the actual comments on GitHub.

**Important:** for uninteresting reasons, this PR moves the `print_test_stats.py` file.

- *Before:* `test/print_test_stats.py`
- *After:* `torch/testing/_internal/print_test_stats.py`

Notes on the approach:

- Just getting the mean and stdev for the total job time of the last _N_ commits isn't sufficient, because e.g. if `master` was broken 5 commits ago, then a lot of those job times will be much shorter, breaking the statistics.
- We use the commit history to make better estimates for the mean and stdev of individual test (and suite) times, but only when the test in that historical commit is present and its status matches that of the base commit.
- We list all the tests that were removed or added, or whose status changed (e.g. skipped to not skipped, or vice versa), along with time (estimate) info for that test case and its containing suite.
- We don't list tests whose time changed a lot if their status didn't change, because there's a lot of noise and it's unclear how to do that well without too many false positives.
- We show a human-readable commit graph that indicates exactly how many commits are in the pool of commits that could be causing regressions (e.g. if a PR has multiple commits in it, or if the base commit on `master` doesn't have a report in S3).
- We don't show an overall estimate of whether the PR increased or decreased the total test job time, because it's noisy and it's a bit tricky to aggregate stdevs up from individual tests to the whole job level. This might change in a followup PR.
- Instead, we simply show a summary at the bottom which says how many tests were removed/added/modified (where "modified" means that the status changed), and our best estimates of the mean times (and stdevs) of those changes.
- Importantly, the summary at the bottom is only for the test cases that were already shown in the more verbose diff report, and does not include any information about tests whose status didn't change but whose running time got much longer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50171

Test Plan:
To run the unit tests:
```
$ python test/test_testing.py
$ python test/print_test_stats.py
```

To verify that this works, check the [CircleCI logs](https://app.circleci.com/pipelines/github/pytorch/pytorch/258628/workflows/9cfadc34-e042-485e-b3b3-dc251f160307) for a test job run on this PR; for example:
- pytorch_linux_bionic_py3_6_clang9_test

To test locally, use the following steps.

First run an arbitrary test suite (you need to have some XML reports so that `test/print_test_stats.py` runs, but we'll be ignoring them here via the `--use-json` CLI option):
```
$ DATA_DIR=/tmp
$ ARBITRARY_TEST=testing
$ python test/test_$ARBITRARY_TEST.py --save-xml=$DATA_DIR/test/test_$ARBITRARY_TEST
```
Now choose a commit and a test job (it has to be on `master` since we're going to grab the test time data from S3, and [we only upload test times to S3 on the `master`, `nightly`, and `release` branches](https://github.com/pytorch/pytorch/pull/49645)):
```
$ export CIRCLE_SHA1=c39fb9771d89632c5c3a163d3c00af3bef1bd489
$ export CIRCLE_JOB=pytorch_linux_bionic_py3_6_clang9_test
```
Download the `*.json.bz2` file(s) for that commit/job pair:
```
$ aws s3 cp s3://ossci-metrics/test_time/$CIRCLE_SHA1/$CIRCLE_JOB/ $DATA_DIR/ossci-metrics/test_time/$CIRCLE_SHA1/$CIRCLE_JOB --recursive
```
And feed everything into `test/print_test_stats.py`:
```
$ bzip2 -kdc $DATA_DIR/ossci-metrics/test_time/$CIRCLE_SHA1/$CIRCLE_JOB/*Z.json.bz2 | torch/testing/_internal/print_test_stats.py --compare-with-s3 --use-json=/dev/stdin $DATA_DIR/test/test_$ARBITRARY_TEST
```
The first part of the output should be the same as before this PR; here is the new part, at the end of the output:

- https://pastebin.com/Jj1svhAn

Reviewed By: walterddr

Differential Revision: D26232345

Pulled By: samestep

fbshipit-source-id: b687b1737519d2eed68fbd591a667e4e029de509
2021-02-08 07:54:34 -08:00
Sam Estep
6dda0363bb [reland] Refactor mypy configs list into editor-friendly wrapper (#50826)
Summary:
Closes https://github.com/pytorch/pytorch/issues/50513 by resolving all four checkboxes. If this PR is merged, I will also modify one or both of the following wiki pages to add instructions on how to use this `mypy` wrapper for VS Code editor integration:

- [Guide for adding type annotations to PyTorch](https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch)
- [Lint as you type](https://github.com/pytorch/pytorch/wiki/Lint-as-you-type)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50826

Test Plan:
Unit tests for globbing function:
```
python test/test_testing.py TestMypyWrapper -v
```

Manual checks (a sketch of the wrapper's core idea follows this list):

- Uninstall `mypy` and run `python test/test_type_hints.py` to verify that it still works when `mypy` is absent.
- Reinstall `mypy` and run `python test/test_type_hints.py` to verify that this didn't break the `TestTypeHints` suite.
- Run `python test/test_type_hints.py` again (should finish quickly) to verify that this didn't break `mypy` caching.
- Run `torch/testing/_internal/mypy_wrapper.py` on a few Python files in this repo to verify that it doesn't give any additional warnings when the `TestTypeHints` suite passes. Some examples (compare with the behavior of just running `mypy` on these files):
  ```sh
  torch/testing/_internal/mypy_wrapper.py $PWD/README.md
  torch/testing/_internal/mypy_wrapper.py $PWD/tools/fast_nvcc/fast_nvcc.py
  torch/testing/_internal/mypy_wrapper.py $PWD/test/test_type_hints.py
  torch/testing/_internal/mypy_wrapper.py $PWD/torch/random.py
  torch/testing/_internal/mypy_wrapper.py $PWD/torch/testing/_internal/mypy_wrapper.py
  ```
- Remove type hints from `torch.testing._internal.mypy_wrapper` and verify that running `mypy_wrapper.py` on that file gives type errors.
- Remove the path to `mypy_wrapper.py` from the `files` setting in `mypy-strict.ini` and verify that running it again on itself no longer gives type errors.
- Add `test/test_type_hints.py` to the `files` setting in `mypy-strict.ini` and verify that running the `mypy` wrapper on it again now gives type errors.
- Change a return type in `torch/random.py` and verify that running the `mypy` wrapper on it again now gives type errors.
- Add the suggested JSON from the docstring of `torch.testing._internal.mypy_wrapper.main` to your `.vscode/settings.json` and verify that VS Code gives the same results (inline, while editing any Python file in the repo) as running the `mypy` wrapper on the command line, in all the above cases.
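
For context, here is a minimal, hypothetical sketch of the wrapper's core idea; the real `torch.testing._internal.mypy_wrapper` differs, e.g. in how it parses and matches the `files` globs:

```python
# Hypothetical sketch: run mypy on a single file with exactly the configs
# whose `files` setting covers it, so editor results match TestTypeHints.
import configparser
import fnmatch
import sys

import mypy.api  # provided by the `mypy` package

CONFIGS = ["mypy.ini", "mypy-strict.ini"]  # assumed config list

def config_covers(config_file: str, filename: str) -> bool:
    parser = configparser.ConfigParser()
    parser.read(config_file)
    files = parser.get("mypy", "files", fallback=None) if parser.has_section("mypy") else None
    if not files:
        return False
    globs = files.replace("\n", " ").split(",")
    return any(fnmatch.fnmatch(filename, g.strip()) for g in globs if g.strip())

def main(filename: str) -> int:
    exit_code = 0
    for config_file in CONFIGS:
        if config_covers(config_file, filename):
            stdout, _, status = mypy.api.run(["--config-file", config_file, filename])
            print(stdout, end="")
            exit_code = max(exit_code, status)
    return exit_code

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```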

Reviewed By: walterddr

Differential Revision: D26049052

Pulled By: samestep

fbshipit-source-id: 0b35162fc78976452b5ea20d4ab63937b3c7695d
2021-01-26 09:04:14 -08:00
Sam Estep
5c1c858ca8 Revert D25977352: [pytorch][PR] Refactor mypy configs list into editor-friendly wrapper
Test Plan: revert-hammer

Differential Revision:
D25977352 (73dffc8452)

Original commit changeset: 4b3a5e8a9071

fbshipit-source-id: a0383ea4158f54be6f128b9ddb2cd12fc3a3ea53
2021-01-22 15:53:44 -08:00
Sam Estep
73dffc8452 Refactor mypy configs list into editor-friendly wrapper (#50826)
Summary:
Closes https://github.com/pytorch/pytorch/issues/50513 by resolving the first three checkboxes. If this PR is merged, I will also modify one or both of the following wiki pages to add instructions on how to use this `mypy` wrapper for VS Code editor integration:

- [Guide for adding type annotations to PyTorch](https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch)
- [Lint as you type](https://github.com/pytorch/pytorch/wiki/Lint-as-you-type)

The test plan below is fairly manual, so let me know if I should add more automated tests to this PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50826

Test Plan:
Unit tests for globbing function:
```
python test/test_testing.py TestMypyWrapper -v
```

Manual checks:

- Uninstall `mypy` and run `python test/test_type_hints.py` to verify that it still works when `mypy` is absent.
- Reinstall `mypy` and run `python test/test_type_hints.py` to verify that this didn't break the `TestTypeHints` suite.
- Run `python test/test_type_hints.py` again (should finish quickly) to verify that this didn't break `mypy` caching.
- Run `torch/testing/_internal/mypy_wrapper.py` on a few Python files in this repo to verify that it doesn't give any additional warnings when the `TestTypeHints` suite passes. Some examples (compare with the behavior of just running `mypy` on these files):
  ```sh
  torch/testing/_internal/mypy_wrapper.py README.md
  torch/testing/_internal/mypy_wrapper.py tools/fast_nvcc/fast_nvcc.py
  torch/testing/_internal/mypy_wrapper.py test/test_type_hints.py
  torch/testing/_internal/mypy_wrapper.py torch/random.py
  torch/testing/_internal/mypy_wrapper.py torch/testing/_internal/mypy_wrapper.py
  ```
- Remove type hints from `torch.testing._internal.mypy_wrapper` and verify that running `mypy_wrapper.py` on that file gives type errors.
- Remove the path to `mypy_wrapper.py` from the `files` setting in `mypy-strict.ini` and verify that running it again on itself no longer gives type errors.
- Add `test/test_type_hints.py` to the `files` setting in `mypy-strict.ini` and verify that running the `mypy` wrapper on it again now gives type errors.
- Remove type hints from `torch/random.py` and verify that running the `mypy` wrapper on it again now gives type errors.
- Add the suggested JSON from the docstring of `torch.testing._internal.mypy_wrapper.main` to your `.vscode/settings.json` and verify that VS Code gives the same results (inline, while editing any Python file in the repo) as running the `mypy` wrapper on the command line, in all the above cases.

Reviewed By: glaringlee, walterddr

Differential Revision: D25977352

Pulled By: samestep

fbshipit-source-id: 4b3a5e8a9071fcad65a19f193bf3dc7dc3ba1b96
2021-01-22 13:35:44 -08:00
Rong Rong (AI Infra)
71766d89ea [BE] unified run_process_no_exception code (#49774)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49774
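
For context, the helper's contract is roughly: run a command and return its exit code and output instead of raising. A hypothetical sketch of that contract (not the actual unified implementation):

```python
# Hypothetical sketch of the run_process_no_exception contract:
# return (returncode, stdout, stderr) rather than raising on failure.
import subprocess

def run_process_no_exception(command, timeout=None):
    proc = subprocess.Popen(
        command, shell=True,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    )
    try:
        stdout, stderr = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        stdout, stderr = proc.communicate()
    return proc.returncode, stdout, stderr
```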

Reviewed By: janeyx99

Differential Revision: D25756811

Pulled By: walterddr

fbshipit-source-id: 4d2b3bd772572764ff96e5aad70323b58393e332
2021-01-04 13:43:09 -08:00
Rong Rong (AI Infra)
9c64b9ffba early termination of CUDA tests (#49869)
Summary:
This is a follow-up to https://github.com/pytorch/pytorch/issues/49799.

* Uses `torch.cuda.synchronize()` to detect that a CUDA assert fired, instead of inspecting the error message (see the sketch below).
* Removes the non-CUDA tests.

Hopefully this can reproduce why slow_tests fails while the normal test does not, since the test still runs for >1 min.
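
A minimal sketch of the synchronize-based check (illustrative; the helper name is hypothetical, and this is not the actual common_utils code). Device-side asserts surface lazily, so synchronizing forces any pending error to be raised on the host:

```python
# Hypothetical sketch: detect whether a previously launched kernel hit a
# device-side assert by forcing synchronization and catching the error.
import torch

def cuda_assert_was_thrown() -> bool:
    if not torch.cuda.is_available():
        return False
    try:
        torch.cuda.synchronize()  # surfaces pending asynchronous CUDA errors
    except RuntimeError as err:
        return "device-side assert" in str(err)
    return False
```

Once such an assert fires, the CUDA context is left in an unusable state, which is why the remaining CUDA tests are terminated early rather than run against a corrupted context.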

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49869

Reviewed By: mruberry

Differential Revision: D25714385

Pulled By: walterddr

fbshipit-source-id: 04f8ccb50d8c9ee42826a216c49baf90285b247f
2020-12-28 09:18:00 -08:00
Rong Rong (AI Infra)
69b1373587 Revert D25692616: [pytorch][PR] [reland] Early terminate when CUDA assert were thrown
Test Plan: revert-hammer

Differential Revision:
D25692616 (e6a215592e)

Original commit changeset: 9c5352220d63

fbshipit-source-id: dade8068cad265d15ee908d98abe0de5b81a195d
2020-12-23 17:48:12 -08:00
Rong Rong (AI Infra)
e6a215592e [reland] Early terminate when CUDA assert were thrown (#49799)
Summary:
This is a reland of https://github.com/pytorch/pytorch/issues/49527.

Fixes the slow test not running properly on Python 3.6: `subprocess.run`'s `capture_output` argument was only introduced in Python 3.7.
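
The Python 3.6-compatible spelling passes the pipes explicitly, since `capture_output=True` is shorthand for exactly that. A minimal sketch (the command is a placeholder):

```python
import subprocess

# Python 3.7+ spelling: subprocess.run(cmd, capture_output=True)
# Python 3.6-compatible equivalent:
result = subprocess.run(
    ["python", "--version"],  # placeholder command
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
print(result.returncode, result.stdout, result.stderr)
```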

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49799

Reviewed By: janeyx99

Differential Revision: D25692616

Pulled By: walterddr

fbshipit-source-id: 9c5352220d632ec8d7464e5f162ffb468a0f30df
2020-12-23 14:25:14 -08:00
Natalia Gimelshein
abacf27038 Revert D25623219: [pytorch][PR] early terminate when CUDA assert were thrown
Test Plan: revert-hammer

Differential Revision:
D25623219 (be091600ed)

Original commit changeset: 1b414623ecce

fbshipit-source-id: ba304c57eea29d19550ac1e864ccfcd0cec68bec
2020-12-22 17:57:19 -08:00
Rong Rong (AI Infra)
be091600ed early terminate when CUDA assert were thrown (#49527)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49019

I marked the test_testing function as slow, since running the subprocess test suite takes ~1 minute.
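
Marking a test as slow uses the existing decorator from `torch.testing._internal.common_utils`; illustrative usage (the test name here is hypothetical):

```python
from torch.testing._internal.common_utils import TestCase, slowTest

class TestTesting(TestCase):
    @slowTest  # runs only when slow tests are enabled, e.g. PYTORCH_TEST_WITH_SLOW=1
    def test_cuda_assert_terminates_suite(self):
        ...
```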

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49527

Reviewed By: malfet

Differential Revision: D25623219

Pulled By: walterddr

fbshipit-source-id: 1b414623ecce14aace5e0996d5e4768a40e12e06
2020-12-22 14:33:41 -08:00
Rong Rong
69522410fa add user vs internal msg support in common_utils.TestCase (#48935)
Summary:
Should fix https://github.com/pytorch/pytorch/issues/48879.

To test the effect of the messages, make a test fail, e.g. by adding `self.assertEqual(1, 2, "user_msg")` to any test:
* Before:
```
AssertionError: False is not true : user_msg
```
* After
```
AssertionError: False is not true : Scalars failed to compare as equal! Comparing 1 and 2 gives a difference of 1, but the allowed difference with rtol=0 and atol=0 is only 0!
user_msg;
```
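
A minimal sketch of the combination logic, assuming the generated diagnostic is built first and the user message is appended rather than substituted (hypothetical helper, not the actual common_utils implementation):

```python
# Hypothetical sketch: keep the internally generated diagnostic and append
# the user-supplied message after it, matching the "After" output above.
def combine_msg(internal_msg: str, user_msg: str = "") -> str:
    return f"{internal_msg}\n{user_msg};" if user_msg else internal_msg
```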

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48935

Reviewed By: samestep

Differential Revision: D25382153

Pulled By: walterddr

fbshipit-source-id: 95633a9f664f4b05a28801786b12a10bd21ff431
2020-12-10 15:25:46 -08:00
Mike Ruberry
36c87f1243 Refactors test_torch.py to be fewer than 10k lines (#47356)
Summary:
Creates multiple new test suites so that test_torch.py contains fewer tests, consistent with previously created suites like test_unary_ufuncs.py and test_linalg.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47356

Reviewed By: ngimel

Differential Revision: D25202268

Pulled By: mruberry

fbshipit-source-id: 75fde3ca76545d1b32b86d432a5cb7a5ba8f5bb6
2020-11-28 20:11:40 -08:00