Commit Graph

2053 Commits

Michael Suo
17641fed2a Revert D32942007: OpInfo: Convert more sample_input_funcs to generators
Test Plan: revert-hammer

Differential Revision:
D32942007 (d21646c432)

Original commit changeset: bb5b253d6d87

Original Phabricator Diff: D32942007 (d21646c432)

fbshipit-source-id: d37c78174f0acea48e4cd4af3ac67ca4ee7ac54d
2021-12-09 10:54:41 -08:00
Peter Bell
d21646c432 OpInfo: Convert more sample_input_funcs to generators (#69257)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69257

These are sample input functions that already use generators internally; this change just moves the `yield` into the sample function itself (see the sketch below).
Diff is best viewed ignoring whitespace changes https://github.com/pytorch/pytorch/pull/69257/files?diff=unified&w=1
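
As a hedged sketch (`sample_inputs_foo` and the shapes are illustrative, not taken from the diff), the conversion looks like this:

```python
from torch.testing import make_tensor
from torch.testing._internal.common_methods_invocations import SampleInput

# Before: the generator is hidden inside the sample function and materialized.
def sample_inputs_foo(op_info, device, dtype, requires_grad, **kwargs):
    def generator():
        for shape in [(), (3,), (2, 3)]:
            yield SampleInput(make_tensor(shape, device=device, dtype=dtype,
                                          requires_grad=requires_grad))
    return list(generator())

# After: the sample function is itself the generator.
def sample_inputs_foo(op_info, device, dtype, requires_grad, **kwargs):
    for shape in [(), (3,), (2, 3)]:
        yield SampleInput(make_tensor(shape, device=device, dtype=dtype,
                                      requires_grad=requires_grad))
```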

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D32942007

Pulled By: mruberry

fbshipit-source-id: bb5b253d6d87b3495b7059924bed35b09d2768a2
2021-12-09 08:38:51 -08:00
Peter Bell
6de9f0fc94 OpInfo: Allow sample_inputs_func to be any iterable (#69256)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69256

Closes #52486

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D32942008

Pulled By: mruberry

fbshipit-source-id: f5b01b0298c0160b0bec6e86e2b6db8cfe746206
2021-12-09 08:37:26 -08:00
Gao, Xiang
d2917f705a Fix errors in common_utils.py (#69578)
Summary:
This fixes the following error:
```python
Traceback (most recent call last):
  File "/home/gaoxiang/pytorch-ucc2/test/distributed/test_distributed_spawn.py", line 40, in <module>
    run_tests()
  File "/home/gaoxiang/.local/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 618, in run_tests
    ['--import-slow-tests'] if IMPORT_SLOW_TESTS else List[str]([]))
  File "/usr/lib/python3.9/typing.py", line 680, in __call__
    raise TypeError(f"Type {self._name} cannot be instantiated; "
TypeError: Type List cannot be instantiated; use list() instead
Traceback (most recent call last):
  File "/home/gaoxiang/pytorch-ucc2/test/run_test.py", line 1058, in <module>
    main()
  File "/home/gaoxiang/pytorch-ucc2/test/run_test.py", line 1036, in main
    raise RuntimeError(err_message)
RuntimeError: distributed/test_distributed_spawn failed!
```
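
The offending expression tries to instantiate the `typing.List` alias; presumably the fix replaces it with a plain list (a sketch of the likely change, not the exact diff; `argv` is an illustrative name):

```python
# Before: `List[str]` is a typing alias and cannot be instantiated.
argv = ['--import-slow-tests'] if IMPORT_SLOW_TESTS else List[str]([])

# After: typing.List is only for annotations; use a list literal instead.
argv = ['--import-slow-tests'] if IMPORT_SLOW_TESTS else []
```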

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69578

Reviewed By: mrshenli

Differential Revision: D32963113

Pulled By: malfet

fbshipit-source-id: b064e230c5e572e890b4ac66ebdda2707b8c12d7
2021-12-09 07:33:43 -08:00
Pritam Damania
eb2a803406 Run test_embedding_bag_with_no_grad_tensors only for TensorPipe (#69626)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69626

Sparse tensors are only supported by the TensorPipe RPC backend. As a
result, this moves test_embedding_bag_with_no_grad_tensors to be a
TensorPipe-specific test.
ghstack-source-id: 145134888

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D32959952

fbshipit-source-id: d65f2edbb6dad7705475690a8c6293a322299dde
2021-12-08 18:29:38 -08:00
Bryan Reese
51b6981c36 [PyTorch Tests] Split out skip logic, make changes for plugins (#67256)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67256

To change what tests can be run in various cases, the check logic should be moved to functions and variables that can be changed.

One challenge here is that decorators are not dynamic: a value that is read at import time and changed afterwards will not actually take effect. This means we need to separate out the variables that have to be changeable for our use case.

Those are put into common_distributed.py and can be changed before importing the distributed_test.py code.

The use case is to add new backends to the tests and split them into tests that can be run on demand as a separate instance. To do so, you would change DistTestSkipCases after importing it in a launcher or a setup script and then load distributed_test, as sketched below.
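
A minimal sketch of that flow (the structure of `DistTestSkipCases` shown here is hypothetical; only the module paths come from the commit):

```python
# launcher.py -- mutate the skip configuration *before* the test module is
# imported, so that decorators evaluated at import time see the new values.
import torch.testing._internal.common_distributed as common_distributed

common_distributed.DistTestSkipCases["skip"].append("test_foo")  # hypothetical shape

# Importing the test module runs its decorators with the updated config.
import torch.testing._internal.distributed.distributed_test  # noqa: F401
```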

Test Plan: Check the signals

Reviewed By: mrshenli

Differential Revision: D31906947

fbshipit-source-id: 45e3258c55f4dc34e12a468bed65280f4c25748f
2021-12-08 12:23:15 -08:00
kshitij12345
7407e3d6fd [fix] cross_entropy : fix weight with ignore_index and label_smoothing (#69511)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/69339
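
For reference, a small example exercising the combination being fixed (the values are illustrative):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)
target = torch.tensor([0, 2, -100, 1])   # -100 entries are ignored
weight = torch.tensor([1.0, 2.0, 0.5])   # per-class weights
loss = F.cross_entropy(logits, target, weight=weight,
                       ignore_index=-100, label_smoothing=0.1)
```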

cc albanD mruberry jbschlosser walterddr

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69511

Reviewed By: mrshenli

Differential Revision: D32951935

Pulled By: jbschlosser

fbshipit-source-id: 482eae851861a32f96bd6231dd3448fb6d44a015
2021-12-08 12:08:33 -08:00
Rohan Varma
d44d59aa70 [BE] Enable C++ stacktraces for MultiProcessTestCase (#69175)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69175

Shows C++ stacktraces for python distributed tests that inherit from
MultiProcessTestCase. Closes https://github.com/pytorch/pytorch/issues/69168
ghstack-source-id: 145085858
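
Presumably this flips on PyTorch's existing `TORCH_SHOW_CPP_STACKTRACES` switch for the spawned test processes (an assumption; the environment variable itself is a known knob):

```python
import os

# When set before torch initializes error handling, C++ frames are appended
# to the Python-side error message.
os.environ["TORCH_SHOW_CPP_STACKTRACES"] = "1"
```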

Test Plan: CI

Reviewed By: H-Huang

Differential Revision: D32736872

fbshipit-source-id: 743e870eefa7a9e77c5791d0936e2ebd5c9b1016
2021-12-08 11:57:51 -08:00
anjali411
3e6164449f Add efficient zero tensors (#64837)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64837

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D32834987

Pulled By: anjali411

fbshipit-source-id: 20ea08ade0db0044ca633d9c1a117a6a2e65d1fd
2021-12-08 10:37:39 -08:00
Kushashwa Ravi Shrimali
2cb385dd6e OpInfo for nn.functional.dropout2d, revise sample inputs for dropout (#67891)
Summary:
Earlier, we were only testing inputs of shape `(5,)` for `nn.functional.dropout`, but since it's used a lot, I feel it's a good idea to test a few more shapes, including scalars. This PR:

1. Revises sample inputs for `nn.functional.dropout`
2. Adds an OpInfo for `nn.functional.dropout2d`.

A note regarding the documentation:

Looks like `nn.functional.dropout2d` also supports inputs of shape `(H, W)` apart from `(N, C, H, W) / (C, H, W)` but the [documentation](https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html#torch.nn.Dropout2d) doesn't mention that (`H, W` case). Should that be revised or am I missing anything here? (Filed an issue here: https://github.com/pytorch/pytorch/issues/67892)

```python
# A 2D tensor is a valid input for Dropout2d
In [11]: tensor = torch.randn((3, 4), device='cpu', dtype=torch.float32)
In [12]: dropout2d = torch.nn.Dropout2d(p=0.5)

In [13]: dropout2d(tensor)
Out[13]:
tensor([[-0.1026, -0.0000, -0.0000, -0.0000],
        [-1.5647,  0.0000, -0.0000, -0.5820],
        [-0.0000, -3.2080,  0.1164, -3.6780]])
```

Issue Tracker: https://github.com/pytorch/pytorch/issues/54261

cc: mruberry zou3519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67891

Reviewed By: mrshenli

Differential Revision: D32628527

Pulled By: mruberry

fbshipit-source-id: 4c9b89550f1d49526e294378ce107eba9f29cabb
2021-12-08 08:54:16 -08:00
Philip Meier
f54745a6ff add OpInfo for torch.diagflat (#65680)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65680

cc mruberry

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D31730001

Pulled By: mruberry

fbshipit-source-id: 487e41da4b043944cc5b26d6081209fb0875f4de
2021-12-08 08:49:45 -08:00
Philip Meier
7e49f4638c add OpInfo for torch.nn.functional.kl_div (#65469)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65469

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D31111698

Pulled By: mruberry

fbshipit-source-id: 0af41a2ef2b199db3d8c63050277e72213f04565
2021-12-08 08:48:18 -08:00
Nikita Vedeneev
c236247826 OpInfo tests for (svd|pca)_lowrank (#69107)
Summary:
As per title.

While working on this I discovered several issues with these methods related to gradient instabilities. I will file them and link them here later. Getting these methods to pass all the tests despite the discovered issues was quite painful, sorry for the delay, mruberry!

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69107

Reviewed By: zou3519

Differential Revision: D32920341

Pulled By: mruberry

fbshipit-source-id: 15b33e2b46acdcbff8a37d8e43e381eb55d1a296
2021-12-07 19:50:12 -08:00
Kushashwa Ravi Shrimali
63470f9449 Sparse CSR: Implement unary ufuncs (with 0->0 correspondence) (#69292)
Summary:
This PR attempts to add support for unary ufuncs (with 0->0 correspondence) for Sparse CSR Layout.

Ops supported: `['abs', 'asin', 'asinh', 'atan', 'atanh', 'ceil', 'conj_physical', 'floor', 'log1p', 'neg', 'round', 'sin', 'sinh', 'sign', 'sgn', 'signbit', 'tan', 'tanh', 'trunc', 'expm1', 'sqrt', 'angle', 'isinf', 'isposinf', 'isneginf', 'isnan', 'erf', 'erfinv']`
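
A quick usage sketch with one of the listed ops (values illustrative):

```python
import torch

crow = torch.tensor([0, 2, 3])
col = torch.tensor([0, 2, 1])
vals = torch.tensor([-1.5, 2.0, -3.0])
csr = torch.sparse_csr_tensor(crow, col, vals, size=(2, 3))

# 0 -> 0 ops act on the values and preserve the CSR sparsity pattern.
print(torch.abs(csr).values())  # tensor([1.5000, 2.0000, 3.0000])
```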

cc nikitaved pearu cpuhrsch IvanYashchuk peterbell10

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69292

Reviewed By: pbelevich

Differential Revision: D32805514

Pulled By: cpuhrsch

fbshipit-source-id: 9ae20817e77a36d3aa6c5afa532b9dc3b8cf1dd3
2021-12-07 12:07:41 -08:00
Shiyan Deng
3211588308 [fx2trt] Separate sign from trunc_div and use it for acc_ops.sign (#69486)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69486

As the title says. Migrates from the sign plugin to native TRT layers. All the layers are fused into a single PWN kernel in TRT.

```
[TensorRT] VERBOSE: Engine Layer Information:
Layer(PointWiseV2): PWN(sign_1_sign_rhs + sign_1_sign_rhs_broadcast, PWN(PWN(sign_1_floor_div*2_rhs + sign_1_floor_div*2_rhs_broadcast, PWN(PWN(PWN([UNARY]-[acc_ops.sign]-[sign_1_prod_abs], [UNARY]-[acc_ops.sign]-[sign_1_prod_abs_exp]), PWN([UNARY]-[acc_ops.sign]-[sign_1_prod_exp], [ELEMENTWISE]-[acc_ops.sign]-[sign_1_exp_floor_div])), [ELEMENTWISE]-[acc_ops.sign]-[sign_1_floor_div*2])), [ELEMENTWISE]-[acc_ops.sign]-[sign_1_sign])), Tactic: 0, x[Float(2,2,3)] -> output0[Float(2,2,3)]
```

Test Plan: CI

Reviewed By: wushirong

Differential Revision: D32887537

fbshipit-source-id: ac250b5197e340319de29653a27f879a0e1ea9cd
2021-12-06 16:54:44 -08:00
Shiyan Deng
e23827e6d6 [fx2trt] [Prep for release] Add type hints to converters and separate main files (#69458)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69458

1. Added type hints to acc ops converters.
2. Moved some of the classes/logic in fx2trt.py into separate files (input_tensor_spec.py, trt_module.py, converter_registry.py).
3. Added imports in `__init__.py` so that users can just write `from torch.fx.experimental.fx2trt import xxx` instead of `experimental.fx2trt.fx2trt`.

Test Plan: CI

Reviewed By: wushirong

Differential Revision: D32884637

fbshipit-source-id: e3e1e597edb9a08b47b4595bd371f570f2f3c9b6
2021-12-06 16:54:41 -08:00
Mike Ruberry
b6f41bb848 The Jiterator (#69439)
Summary:
This PR:

- creates the "jiterator" pattern, allowing elementwise unary and binary kernels that don't accept scalars to be jit compiled when called
- ports the gcd and i1 CUDA kernels to use the jiterator
- extends elementwise binary systemic testing to be comparable to elementwise unary systemic testing
- separates one test case from test_out in test_ops.py
- updates more OpInfos to use expected failures instead of skips

The jiterator currently does not support half, bfloat16 or complex dtypes. It also (as mentioned above) doesn't support scalar inputs. In the future we expect to add support for those datatypes and scalars.
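
The jiterator was later exposed through a Python frontend (`torch.cuda.jiterator`); assuming that frontend, the pattern looks roughly like this (a sketch, not this PR's internal C++ API):

```python
import torch

# The elementwise op is supplied as a C++ source string and compiled the
# first time the returned callable is invoked.
code_string = """
template <typename T> T my_gcd(T a, T b) {
  while (b != 0) { T t = b; b = a % b; a = t; }
  return a;
}
"""
jitted_gcd = torch.cuda.jiterator._create_jit_fn(code_string)

a = torch.randint(1, 100, (4,), device="cuda")
b = torch.randint(1, 100, (4,), device="cuda")
print(jitted_gcd(a, b))
```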

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69439

Reviewed By: ngimel

Differential Revision: D32874968

Pulled By: mruberry

fbshipit-source-id: d44bb9cde4f602703e75400ec5a0b209f085e9b3
2021-12-06 07:32:48 -08:00
Saketh Are
6a4fa86026 Add OpInfos for misc nn.functional operators (#68922)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68922

Reviewed By: Chillee

Differential Revision: D32842301

Pulled By: saketh-are

fbshipit-source-id: b7166faefb64668fc76cca6c528501b0d360c43b
2021-12-03 17:03:02 -08:00
Saketh Are
a07ffe8d0e Add OpInfos for combinations, cartesian_prod, sum_to_size, ldexp, as_strided (#68853)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68853

Reviewed By: davidberard98

Differential Revision: D32811147

Pulled By: saketh-are

fbshipit-source-id: 941dcf949072f8d10faf4d5a0fa0ef409ac6a7db
2021-12-02 21:22:56 -08:00
Mark Richardson
834bd3134e Back out "Add efficient zero tensors" (#69327)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69327

Original commit changeset: d44096d88265

Original Phabricator Diff: D32144240 (668574af4a)

Test Plan:
CI

The original diff failed 175 builds in CI.

Reviewed By: airboyang, anjali411

Differential Revision: D32809407

fbshipit-source-id: c7c8e69bcee0274992e2d5da901f035332e60071
2021-12-02 19:11:41 -08:00
ankitaS11
c572a603a6 fix for python 3.10 for gradient opinfo (#68113)
Summary:
This PR fixes https://github.com/pytorch/pytorch/issues/67612 by creating a tensor first and then converting its dtype explicitly with a `.to(dtype)` call.
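
A sketch of the pattern described above (the data and dtype are illustrative):

```python
import torch

data = [0.0, 1.5, 3.0]

# Instead of constructing with the target dtype directly,
# create the tensor first and convert explicitly:
t = torch.tensor(data).to(torch.bfloat16)
```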

Looking forward to your feedback and suggestions on this.

cc: kshitij12345 mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68113

Reviewed By: zou3519

Differential Revision: D32797329

Pulled By: saketh-are

fbshipit-source-id: 5c34709ab277c82cda316a3ea1cf01e853e4c38b
2021-12-02 19:01:01 -08:00
kshitij12345
9f39a2de0a [fix] add range check for k kthvalue_cpu (#68863)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68813

Long-term, it might make more sense to port it to a structured kernel.
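
A small example of the behavior being guarded (illustrative; with the range check, an out-of-range `k` presumably raises a `RuntimeError` instead of reading out of bounds):

```python
import torch

t = torch.arange(5.0)
print(torch.kthvalue(t, 3))  # fine: k is within [1, t.numel()]
torch.kthvalue(t, 10)        # out of range: now rejected with an error
```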

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68863

Reviewed By: H-Huang

Differential Revision: D32749372

Pulled By: mruberry

fbshipit-source-id: 85a1b2a31e922ff1df0d0f3f576ad219e652aa49
2021-12-02 15:33:06 -08:00
kshitij12345
5b2586fe09 [testing] Ignore expected_regex in assertRaisesRegex for non-native device (#68723)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/29719

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68723

Reviewed By: zou3519

Differential Revision: D32797061

Pulled By: mruberry

fbshipit-source-id: 3bcae6d3d62d180059dbe39be520b0e7f9aea19f
2021-12-02 14:52:27 -08:00
Joel Schlosser
36ba1b6b3a Remove unused _convolution_nogroup op (#68829)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68829

Test Plan: Imported from OSS

Reviewed By: zou3519, albanD

Differential Revision: D32627578

Pulled By: jbschlosser

fbshipit-source-id: 8a4c0ac58aae184a465b1fd40cce880a60d67339
2021-12-02 14:42:08 -08:00
anjali411
668574af4a Add efficient zero tensors (#64837)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64837

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D32144240

Pulled By: anjali411

fbshipit-source-id: d44096d882657c7f9270a16636900e0b73cefa40
2021-12-02 08:47:45 -08:00
Pearu Peterson
370d0afc1b Strided masked var. (#68738)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68738

Test Plan: Imported from OSS

Reviewed By: davidberard98

Differential Revision: D32767155

Pulled By: cpuhrsch

fbshipit-source-id: a5c095103405fbfc28b9f4fd624bdbbc45e7f715
2021-12-01 19:19:37 -08:00
Yanli Zhao
92f168941e remove accidentally committed redundant debug print (#68510)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68510

remove accidentally committed redundant debug print
ghstack-source-id: 144362817

Test Plan: unit tests

Reviewed By: rohan-varma

Differential Revision: D32487736

fbshipit-source-id: 279030f782e6b716a6bbfd591c5ce761de3ddd63
2021-12-01 11:35:34 -08:00
Pearu Peterson
1842364b30 Strided masked normalize. (#68694)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68694

Test Plan: Imported from OSS

Reviewed By: samdow

Differential Revision: D32724552

Pulled By: cpuhrsch

fbshipit-source-id: 82f579a86b0b265e0b9b3715a8a327b775dd55e1
2021-12-01 10:45:16 -08:00
Kshiteej K
e5e0c19882 OpInfo : embedding_bag (#67252)
Summary:
Adds OpInfo for `embedding_bag`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67252

Reviewed By: VitalyFedyunin

Differential Revision: D32462157

Pulled By: zou3519

fbshipit-source-id: 70303349a718720c4fa47519fa94ae900e052939
2021-12-01 07:00:50 -08:00
Peter Bell
1da1707568 Sparse: Implement simple unary ufuncs operators (#68887)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68887

Closes #46988, closes #46987, closes #46761

By "simple" I mean operators that map 0->0 so we can implement it by
just re-dispatching on the values tensor. That does mean we have `sin`
but not `cos` for example, but without fill value support this is the
best that can be done.

Most of these don't support autograd because the derivative formulas
use unsupported operators.
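
A sketch of the 0->0 re-dispatch idea on a COO tensor (illustrative, not the actual kernel):

```python
import torch

# For f with f(0) == 0, applying f to just the values leaves the
# sparsity pattern (the indices) intact.
s = torch.tensor([[0.0, -1.0], [2.0, 0.0]]).to_sparse().coalesce()
out = torch.sparse_coo_tensor(s.indices(), s.values().sin(), s.size())
assert torch.allclose(out.to_dense(), s.to_dense().sin())
```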

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D32734911

Pulled By: cpuhrsch

fbshipit-source-id: 203ab105799f3d2d682b01ca3d6b18e7c994776a
2021-12-01 05:43:19 -08:00
Elias Ellison
a23d1036ab Add ops for BI (mean) (#68826)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68826

Test Plan: Imported from OSS

Reviewed By: samdow

Differential Revision: D32732465

Pulled By: eellison

fbshipit-source-id: e8b185d89e5ecbe5c8e09d576c84a1f0a402a5e0
2021-12-01 00:45:00 -08:00
Rohan Varma
7fad758e02 [FSDP] AutoWrap Main API (#68155)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68155

Per title
ghstack-source-id: 144398229

Test Plan: CI

Reviewed By: pbelevich, mrshenli

Differential Revision: D32327954

fbshipit-source-id: 36bdf06c1c50932a93acbfa97017c549fa490a6c
2021-12-01 00:16:38 -08:00
Ivan Yashchuk
219db3b4e1 Add OpInfo for torch.linalg.tensorsolve (#68810)
Summary:
This PR adds an OpInfo entry for the tensorsolve function.
The keyword argument is different from NumPy's, so a lambda function needs to be passed to `ref=`.
I had to change the dtypes for `test_reference_testing` because NumPy computes internally in double precision for all linear algebra functions, and maybe for some other functions too. Using `torch.float64` and `torch.complex128` is more reliable for NumPy comparisons.
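
The adapter in question is tiny; a sketch (NumPy's keyword is `axes`, while `torch.linalg.tensorsolve` takes `dims`):

```python
import numpy as np

# The OpInfo ref= expects a NumPy-compatible callable, so remap the keyword:
ref = lambda a, b, dims=None: np.linalg.tensorsolve(a, b, axes=dims)
```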

cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68810

Reviewed By: soulitzer

Differential Revision: D32696065

Pulled By: mruberry

fbshipit-source-id: a4305065d3e7d0097503dc05938b3c4784e14996
2021-11-30 20:31:12 -08:00
Eli Uriegas
251686fc4c Revert D32706197: Sparse: Implement simple unary ufuncs operators
Test Plan: revert-hammer

Differential Revision:
D32706197 (fbaa19a6fa)

Original commit changeset: 65e1acb36457

fbshipit-source-id: 45c4b486f9eee200d5a1f6d46d267617124f8a5e
2021-11-30 10:50:12 -08:00
Richard Zou
6fea7499c2 CompositeImplicitAutograd compliance testing (#65819)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65819

Related to #61669.

Functions registered as CompositeImplicitAutograd MUST work for most, if
not all, backends. This includes Tensor subclasses.

To achieve this, we (PyTorch) impose a set of constraints on how a
CompositeImplicitAutograd function can be written.

Concretely, this PR adds tests for all OpInfos that check for
compliance. The properties tested in this PR apply to composite
ops and are that:
- the op does not change the metadata of a Tensor without performing
dispatches
- the op does not call set_ or resize_
- the op does not directly access the data ptr

The mechanism for the test is to create a new __torch_dispatch__
object, CompositeCompliantTensor. For each operator, we wrap all inputs
in CompositeCompliantTensor, turn on python mode for it,
and send it through the operator.

Non-CompositeImplicitAutograd operators will pass the test because they
perform a dispatch to backend code. Here's how CompositeCompliantTensor
catches problems:

- If it sees set_ or resize_ getting called, it will directly error
out
- After each operation, CompositeCompliantTensor checks to make sure
that its metadata is consistent with that of the thing it is wrapping.
If the CompositeImplicitAutograd op modifies the metadata directly
(through e.g. the TensorImpl API) then the metadata will go out of sync.
- If data_ptr gets called, that returns a nice error (because the
storage is meta).

CompositeCompliantTensor is written in an interesting way. First off,
if a view operation occurs (e.g. `B = A.view_op(...)`), then B.storage()
must alias A.storage() where B.storage() is CompositeCompliantTensor's
storage, NOT the storage of the tensor it is wrapping. This is an
invariant in autograd, see #62182 for details. To handle
this we replay the view on A's storage and set it as B's storage.

Secondly, there are cases where the metadata is allowed to go out of
sync. I believe this is only possible with in-place view functions, like
transpose_, t_, squeeze_, unsqueeze_. Those are special cased.

Finally, I added a new section to aten/src/ATen/native/README.md about
what it means to be CompositeImplicitAutograd Compliant
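
A minimal sketch of the wrapper-subclass idea described above (illustrative; the real CompositeCompliantTensor in the test suite also replays views on the wrapper's storage and special-cases the in-place view ops):

```python
import torch
from torch.utils._pytree import tree_map

class ComplianceTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, elem):
        # The wrapper itself carries no real storage, so data_ptr() accesses
        # inside a composite op fail loudly.
        r = torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), dtype=elem.dtype, requires_grad=elem.requires_grad
        )
        r.elem = elem
        return r

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        # Composite ops must not call set_ or resize_ (heuristic name check).
        if "set_" in str(func) or "resize_" in str(func):
            raise RuntimeError(f"composite op called non-compliant {func}")
        unwrap = lambda t: t.elem if isinstance(t, ComplianceTensor) else t
        wrap = lambda t: ComplianceTensor(t) if isinstance(t, torch.Tensor) else t
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs or {}))
        # The real test additionally verifies here that the wrapper's metadata
        # still matches elem's, catching direct TensorImpl metadata mutation.
        return tree_map(wrap, out)
```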

Test Plan: - run tests

Reviewed By: ezyang, bdhirsh

Differential Revision: D31268369

Pulled By: zou3519

fbshipit-source-id: 31634b1cbe1778ab30196013cfc376ef9bd2e8b1
2021-11-30 07:35:22 -08:00
Peter Bell
fbaa19a6fa Sparse: Implement simple unary ufuncs operators (#68887)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68887

Closes #46988, closes #46987, closes #46761

By "simple" I mean operators that map 0->0 so we can implement it by
just re-dispatching on the values tensor. That does mean we have `sin`
but not `cos` for example, but without fill value support this is the
best that can be done.

Most of these don't support autograd because the derivative formulas
use unsupported operators.

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D32706197

Pulled By: cpuhrsch

fbshipit-source-id: 65e1acb3645737ca7bdb7f2db739d8e118906f4b
2021-11-30 00:30:30 -08:00
Joel Schlosser
61ea2fc35e Fix device type / dtype handling for parametrized test names (#65217)
Summary:
This PR absolves `_TestParametrizer`s (e.g. `ops`, `modules`, `parametrize`) of the responsibility of adding device type (e.g. `'cpu'`, `'cuda'`, etc.) / dtype (e.g. 'float32') to generated test names. This fixes repeated instances of the device string being added to generated test names (e.g. `test_batch_norm_training_True_cuda_track_running_stats_True_cuda_affine_True_cuda`).

The responsibility for placing device / dtype suffixes is now handled by `instantiate_device_type_tests()` instead so it is added a single time. It will place `<device>_<dtype>` at the end of the test name unconditionally, maintaining the current naming convention.

As part of this work, I also tightened the semantics through some additional error case handling:
* Composing multiple decorators that each try to handle the same parameter will error out with a nice message. This includes the case of trying to compose `modules` + `ops`, as they each try to handle `dtype`. Similarly, `ops` + `dtypes` is forbidden when both try to handle `dtype`. This required changes in the following test files:
  * `test/test_unary_ufuncs.py`
  * `test/test_foreach.py`
* The `modules` / `ops` decorators will now error out with a nice message if used with `instantiate_parametrized_tests()` instead of `instantiate_device_type_tests()`, since they're not (currently) written to work outside of a device-specific context.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65217

Reviewed By: mruberry

Differential Revision: D32627303

Pulled By: jbschlosser

fbshipit-source-id: c2957228353ed46a0b7da8fa1a34c67598779312
2021-11-29 19:02:23 -08:00
Pearu Peterson
fb63bb60ec Strided masked norm. (#68584)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68584

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D32581285

Pulled By: cpuhrsch

fbshipit-source-id: 896ee1e58957b46c2f6a16a170adff4cb3b8da62
2021-11-29 14:23:27 -08:00
Peter Bell
f5fa91ba2e Sparse: Add additional opinfo tests (#68886)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68886

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D32697933

Pulled By: cpuhrsch

fbshipit-source-id: fffdd1bc663cc1bc49abe8cf3680982d1cb497bc
2021-11-29 12:49:20 -08:00
Peter Bell
9ee5db490b neg_sparse: Fix output dtype (#68885)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68885

`torch.neg` should preserve the input dtype but for sparse tensors it
was promoting integers to floating point. This would have been picked
up by the OpInfo-based test, but `neg` wasn't marked with
`supports_sparse=True` so it was never run.
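
The expected behavior, as a quick check (illustrative):

```python
import torch

s = torch.tensor([0, -2, 3]).to_sparse()
assert s.dtype == torch.int64
assert torch.neg(s).dtype == torch.int64  # previously promoted to floating point
```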

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D32680008

Pulled By: cpuhrsch

fbshipit-source-id: 502f8743c1c33ab802e3d9d097792887352cd220
2021-11-29 08:48:22 -08:00
Richard Zou
871cd7c5b9 Forward-mode AD support for torch.split, torch.split_with_sizes (#68566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68566

These are just auto-linear as pointed out by Jeffrey.
ghstack-source-id: 143814393

Test Plan: - Run OpInfo tests.

Reviewed By: albanD, soulitzer

Differential Revision: D32520239

Pulled By: zou3519

fbshipit-source-id: 807115157b131e6370f364f61db1b14700279789
2021-11-29 07:50:53 -08:00
Philip Meier
3315c4b31e add instructions for unhandled exceptions in assert_close (#68722)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68722

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D32684446

Pulled By: mruberry

fbshipit-source-id: 04fe5730721d24e44692cdc9bb327484356ead3f
2021-11-28 21:35:53 -08:00
Mike Ruberry
6ae34ea6f8 Revert D32521980: Add linalg.lu_factor
Test Plan: revert-hammer

Differential Revision:
D32521980 (b10929a14a)

Original commit changeset: 26a49ebd87f8

fbshipit-source-id: e1a6bb9c2ece9bd78190fe17e16a46e3358c5c82
2021-11-28 17:22:15 -08:00
lezcano
b10929a14a Add linalg.lu_factor (#66933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66933

This PR exposes `torch.lu` as `torch.linalg.lu_factor` and
`torch.linalg.lu_factor_ex`.

This PR also adds support for empty matrices, that is, zero-sized
dimensions in either the matrix itself or the batch. Note that the
function simply returns empty tensors of the correct size in this case.

We add a test and an OpInfo for the new function.

This PR also adds documentation for this new function in line with
the documentation in the rest of `torch.linalg`.
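
A minimal usage sketch of the new functions (shapes illustrative):

```python
import torch

A = torch.randn(3, 3)
LU, pivots = torch.linalg.lu_factor(A)

# The *_ex variant reports failure through `info` instead of raising.
LU, pivots, info = torch.linalg.lu_factor_ex(A)
assert int(info) == 0  # 0 means the factorization succeeded
```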

Fixes https://github.com/pytorch/pytorch/issues/56590
Fixes https://github.com/pytorch/pytorch/issues/64014

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D32521980

Pulled By: mruberry

fbshipit-source-id: 26a49ebd87f8a41472f8cd4e9de4ddfb7f5581fb
2021-11-27 17:52:48 -08:00
kshitij12345
01ddd5dde6 [opinfo] use dtypes instead of dtypesIfCPU (#68732)
Summary:
Reland https://github.com/pytorch/pytorch/issues/67619

Replaces usage of dtypesIfCPU with dtypes in the OpInfo class and also makes dtypes a mandatory argument.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68732

Reviewed By: jbschlosser

Differential Revision: D32594344

Pulled By: mruberry

fbshipit-source-id: 660b38aef97752ba064228e8989041ed1d5777fe
2021-11-27 16:07:51 -08:00
Xiang Gao
cffad597ea Tune test_reference_numerics_normal (#68019)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68019

Reviewed By: albanD

Differential Revision: D32482535

Pulled By: mruberry

fbshipit-source-id: 48300a5c6a4484fb81789f9049d3f08272d9f31c
2021-11-26 18:59:31 -08:00
Nikita Shulga
14dc9759f2 Revert D32650384: OpInfos for torch.{flatten, column_stack}
Test Plan: revert-hammer

Differential Revision:
D32650384 (aceb46e4ce)

Original commit changeset: 9ead83b378d0

fbshipit-source-id: 3ef281e536b1f21a6f13c6c51309021cf92b53b2
2021-11-24 14:55:26 -08:00
anjali411
aceb46e4ce OpInfos for torch.{flatten, column_stack} (#67555)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67555

Test Plan: Imported from OSS

Reviewed By: cpuhrsch

Differential Revision: D32650384

Pulled By: anjali411

fbshipit-source-id: 9ead83b378d0ece60569e1a0fc7d8849f89566b3
2021-11-24 10:25:37 -08:00
anjali411
c7d5e0f53f OpInfos for torch.atleast_{1d, 2d, 3d} (#67355)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67355

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D32649416

Pulled By: anjali411

fbshipit-source-id: 1b42e86c7124427880fff52fbe490481059da967
2021-11-24 09:55:39 -08:00
Samantha Andow
23288fdacc Making norms inputs independent (#68526)
Summary:
An update to https://github.com/pytorch/pytorch/issues/67442 to make sure all of the inputs produced are independent

Updates group_norm and instance_norm (local_response_norm was already producing independent inputs)

Also fixes a bug in one set of instance_norm inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68526

Reviewed By: ngimel

Differential Revision: D32532076

Pulled By: samdow

fbshipit-source-id: 45b9320fd9aecead052b21f838f95887cfb71821
2021-11-23 09:41:36 -08:00