Commit Graph

1626 Commits

Author SHA1 Message Date
Nikita Vedeneev
0048d97eda remove index_fill side-effect for scalar tensors (#52209)
Summary:
`index_fill` silently promotes zero dim Tensors to 1-dim Tensors. This PR fixes that.
Was:
```
In [1]: import torch

In [2]: x = torch.tensor(1)

In [3]: idx = torch.tensor(0).long()

In [4]: x.dim()
Out[4]: 0

In [5]: x.index_fill(0, idx, -1).dim()
Out[5]: 1

```
Now:
```
In [6]: x.index_fill(0, idx, -1).dim()
Out[6]: 0

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52209

Reviewed By: ejguan

Differential Revision: D26446470

Pulled By: ngimel

fbshipit-source-id: 4737e6941a7216b57f3416b59362817834df3a3a
2021-02-25 00:35:27 -08:00
Jane Xu
09516d2d0c Reenables skipped tests for all CUDA versions except 11.2 (#52359)
Summary:
This PR adds functionality to skip a test based on CUDA version.

This way, we can be more specific when skipping a test, such as when the test only fails for a particular CUDA version.

This allows us to re-enable, for other CUDA versions such as 10.1 and 11.1, the tests that had been skipped because of CUDA 11.2 failures.

I tested this locally (by using 11.0 instead of 11.2), but will run all the CI to make sure it works.
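
For illustration, the kind of version-gated skip this enables could look like the sketch below. The helper name and its location are assumptions for this example; only the comparison against `torch.version.cuda` is taken from the description above.
```
import unittest
import torch

# Hypothetical helper for this example only; the decorator actually added by this
# PR may have a different name and live in the shared testing framework.
def skip_if_cuda_version(versions, reason="known failure on this CUDA version"):
    cuda = torch.version.cuda  # e.g. "11.2", or None on CPU-only builds
    return unittest.skipIf(cuda is not None and cuda in versions, reason)

class TestExample(unittest.TestCase):
    @skip_if_cuda_version(["11.2"])
    def test_only_broken_on_11_2(self):
        self.assertEqual(torch.ones(1).sum().item(), 1.0)

if __name__ == "__main__":
    unittest.main()
```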

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52359

Reviewed By: walterddr

Differential Revision: D26487951

Pulled By: janeyx99

fbshipit-source-id: 45c71cc6105ffd9985054880009cf68ea5ef3f6a
2021-02-19 15:30:55 -08:00
Nikita Vedeneev
9699c703c2 Stable sort for the CPU take 2. (#51790)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/38681.
A duplicate of https://github.com/pytorch/pytorch/pull/50052, created so it can be imported into the fb internal tests.
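
For reference, the property a stable sort adds is that equal keys keep their original relative order. The sketch below assumes the `stable=True` keyword on `torch.sort` that later releases expose; whether this exact flag is part of this commit is not stated here.
```
import torch

x = torch.tensor([3, 1, 3, 1, 3])
values, indices = torch.sort(x, stable=True)  # `stable=` is an assumption, see above
# With a stable sort, ties keep their original order: the 1s come back as
# indices 1, 3 and the 3s as indices 0, 2, 4.
print(values)   # tensor([1, 1, 3, 3, 3])
print(indices)  # tensor([1, 3, 0, 2, 4])
```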

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51790

Reviewed By: agolynski

Differential Revision: D26279045

Pulled By: glaringlee

fbshipit-source-id: 348e171dee9c370a76002b65d0c82c329f57a421
2021-02-19 09:28:57 -08:00
Xiong Wei
c7b0005831 Enhance Tensor.unflatten to support -1 as the inferred size (#51955)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51719, https://github.com/pytorch/pytorch/issues/28142

**Change**
- Update `torch.Tensor.unflatten` to support passing `-1` as the inferred size, for both tensors and named tensors (see the example below).
- Examples of using `-1` with `unflatten` are added to the docs.
- Fix the rendering issue in the original `unflatten` docs by removing a blank line before its example section.
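
A minimal illustration of the new behavior (plain tensor case; named tensors follow the docs mentioned above):
```
import torch

x = torch.randn(2, 12)
# One entry of the target shape may now be -1 and is inferred from the rest.
y = x.unflatten(1, (3, -1))
print(y.shape)  # torch.Size([2, 3, 4])
```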

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51955

Reviewed By: agolynski

Differential Revision: D26467198

Pulled By: zou3519

fbshipit-source-id: 6a3ede25561223187273796427ad0cb63f125364
2021-02-18 08:37:41 -08:00
Ailing Zhang
83fa713f2b Fix test to use proper condition. (#52216)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52216

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26427506

Pulled By: ailzhang

fbshipit-source-id: ba4f2f66794cb2843926e5566eb4d25582f7fb2b
2021-02-12 12:59:35 -08:00
Kshiteej K
d7ea0fe75a [testing] Add OpInfo for rad2deg and deg2rad (#51283)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50006

We should probably add aliases for these operators to be consistent with NumPy names i.e. `np.degrees` and `np.radians`.
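
For context, a quick illustration of the two ops being covered here:
```
import math
import torch

x = torch.tensor([0.0, math.pi / 2, math.pi])
print(torch.rad2deg(x))                 # tensor([  0.,  90., 180.])
print(torch.deg2rad(torch.rad2deg(x)))  # round-trips back to x
```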

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51283

Reviewed By: ngimel

Differential Revision: D26171163

Pulled By: mruberry

fbshipit-source-id: 1869604ed400820d95f6ff50a0e3cba1de1ffa84
2021-02-10 19:45:10 -08:00
Jane Xu
bff8194522 Replace 11.1 with 11.2 on CI for Windows (#51598)
Summary:
Adding CUDA 11.2 to Windows CI.

Disabled tests:

The following ran into `CUDA error: misaligned address` for CUDA 11.2: (issue linked below)
`test_where_scalar_valid_combination_cuda_complex128` in test_torch.py
`test_sgn_complex_cuda` in test_autograd.py

The following ran into `CUDA error: too many resources requested for launch` for CUDA 11.2: (https://github.com/pytorch/pytorch/issues/52002)
`test_EmbeddingBag_per_sample_weights_and_new_offsets_cuda_int64_float64`
`test_EmbeddingBag_per_sample_weights_and_offsets_cuda_int64_float64`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51598

Reviewed By: mrshenli

Differential Revision: D26344965

Pulled By: janeyx99

fbshipit-source-id: 3c9a4ed16d748969e96593220ec0a9f33e1ffcef
2021-02-10 17:59:11 -08:00
vfdev
8b0cb5ede3 OpInfo: Added clamp and trunc tests with aliases (#51167)
Summary:
Description:
- Added clamp, trunc tests with aliases
- Added tests for aliases for asin(h), acos(h), etc
- Fixed the 'fix' alias implementation
- Fixed annotations in test_jit_alias_remapping
- Updated the native_functions.yaml alias guidelines
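
As a quick sanity check of the alias pairs exercised by these tests (illustration only, not code from this PR):
```
import torch

x = torch.tensor([-1.7, -0.2, 0.5, 2.3])
# `clip` aliases `clamp`, and `fix` aliases `trunc`
assert torch.equal(torch.clip(x, -1.0, 1.0), torch.clamp(x, -1.0, 1.0))
assert torch.equal(torch.fix(x), torch.trunc(x))
print(torch.trunc(x))  # tensor([-1., -0.,  0.,  2.])
```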

Blocked by https://github.com/pytorch/pytorch/issues/50368

cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51167

Reviewed By: gchanan

Differential Revision: D26245753

Pulled By: mruberry

fbshipit-source-id: e17b657f0515139735a8a677b1ae284904f98aef
2021-02-10 05:36:18 -08:00
Mike Ruberry
594a66d778 Warn about floor_divide performing incorrect rounding (#50281) (#50281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51745
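
For context, the rounding discrepancy the new warning points at is that `floor_divide` truncates toward zero instead of flooring, which differs for negative quotients (illustration only):
```
import torch

a = torch.tensor([-5, 5])
b = torch.tensor([2, 2])
print(torch.floor_divide(a, b))            # tensor([-2,  2])  truncation (now warned about)
print(torch.floor(a.float() / b.float()))  # tensor([-3.,  2.])  an actual floor division
```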

Test Plan: Imported from OSS

Reviewed By: ngimel

Pulled By: mruberry

Differential Revision: D26257855

fbshipit-source-id: e5d497cf07b0c746838ed081c5d0e82fb4cb701b
2021-02-10 03:13:34 -08:00
kshitij12345
768662913a Migrate masked_fill__cuda to ATen (#51404)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49543

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51404

Reviewed By: mrshenli

Differential Revision: D26329833

Pulled By: ngimel

fbshipit-source-id: 510988888fad015239ab4766eb391a89b742130b
2021-02-09 22:57:03 -08:00
mattip
b97a040f71 ENH: toggle TORCH_WARN_ONCE to TORCH_WARN for tests (#48560)
Summary:
Toward fixing https://github.com/pytorch/pytorch/issues/47624

~Step 1: add `TORCH_WARN_MAYBE` which can either warn once or every time in c++, and add a c++ function to toggle the value.
Step 2 will be to expose this to python for tests. Should I continue in this PR or should we take a different approach: add the python level exposure without changing any c++ code and then over a series of PRs change each call site to use the new macro and change the tests to make sure it is being checked?~

Step 1: add a python and c++ toggle to convert TORCH_WARN_ONCE into TORCH_WARN so the warnings can be caught in tests
Step 2: add a python-level decorator to use this toggle in tests
Step 3: (in future PRs): use the decorator to catch the warnings instead of `maybeWarnsRegex`
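
A rough sketch of how a test could use such a toggle, assuming the Python-level switch surfaces as `torch.set_warn_always` (the exact name and the decorator introduced by this work may differ):
```
import warnings
import torch

torch.set_warn_always(True)   # assumed toggle name, see above
try:
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        # A TORCH_WARN_ONCE inside the op now fires on every call, so the test
        # can observe it even if an earlier test already triggered it.
        torch.floor_divide(torch.tensor([5]), torch.tensor([2]))
    print(len(caught), "warning(s) captured")
finally:
    torch.set_warn_always(False)
```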

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48560

Reviewed By: ngimel

Differential Revision: D26171175

Pulled By: mruberry

fbshipit-source-id: d83c18f131d282474a24c50f70a6eee82687158f
2021-02-08 08:21:19 -08:00
wanyu2018umac
444203c52f Fix torch.cdist backward CUDA error due to illegal gridDim setting (#51569)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49928

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51569

Reviewed By: mruberry

Differential Revision: D26215694

Pulled By: ngimel

fbshipit-source-id: 0710417e6a802424e2dcada325f27452c95d042f
2021-02-02 20:41:24 -08:00
Jeffrey Wan
b18eeaa80a Implement np.diff for single order differences (#50569)
Summary:
Implements `np.diff` for single order differences only:
 - method and function variants for `diff` and function variant for `diff_out`
 - supports out variant, but not in-place since shape changes
 - adds OpInfo entry, and test in `test_torch`
 - automatic autograd because we are using the `Math` dispatch

_Update: we only support Tensors for prepend and append in this PR. See discussion below and comments for more details._

Currently there is a quirk in the c++ API based on how this is implemented: it is not possible to specify a scalar prepend or append without also specifying all 4 arguments.

That is because the goal is to match NumPy's diff signature of `diff(int n=1, int dim=-1, Union[Scalar, Tensor] prepend=None, Union[Scalar, Tensor] append=None)` where all arguments are optional, positional and in the correct order.
There are a couple blockers. One is c++ ambiguity. This prevents us from simply doing `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)` etc for all combinations of {Tensor, Scalar} x {Tensor, Scalar}.

You might ask: why not drop the default args for prepend and append and then write out the whole power set of {Tensor, Scalar, omitted} x {Tensor, Scalar, omitted}? Aside from having to write 18 overloads, this is actually illegal because arguments with defaults must come after arguments without defaults. This would mean having to write `diff(prepend, append, n, dim)`, which is not desired. Finally, writing out the entire power set of all arguments n, dim, prepend, append is out of the question because that would involve 2 * 2 * 3 * 3 = 36 combinations. And if we include the out variant, that would be 72 overloads!

With this in mind, the current implementation actually still does `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)`, but also makes use of `cpp_no_default_args`. The idea is to have only one of the 4 {Tensor, Scalar} x {Tensor, Scalar} overloads provide default arguments for the c++ api, and add `cpp_no_default_args` for the remaining 3 overloads. With this, the Python API works as expected, but some calls such as `diff(prepend=1)` won't work in the c++ api.

We can optionally add 18 more overloads that cover the {dim, n, no-args} x {scalar-tensor, tensor-scalar, scalar-scalar} x {out, non-out} cases for the c++ api. _[edit: counting is hard - just realized this number is still wrong. We should try to count the cases we do cover instead and subtract that from the total: (2 * 2 * 3 * 3) - (3 + 2^4) = 17. 3 comes from the 3 of 4 combinations of {tensor, scalar}^2 that we declare to be `cpp_no_default_args`, and the one remaining case that has default arguments covers 2^4 cases. So the actual count is 34 additional overloads to support all possible calls]_

_[edit: thanks to https://github.com/pytorch/pytorch/issues/50767 hacky_wrapper is no longer necessary; it is removed in the latest commit]_
 hacky_wrapper was also necessary here because `Tensor?` will cause dispatch to look for the `const optional<Tensor>&` schema but also generate a `const Tensor&` declaration in Functions.h. hacky_wrapper allows us to define our function as `const Tensor&` but wraps it in optional for us, so this avoids both the errors while linking and loading.

_[edit: rewrote the above to improve clarity and correct the fact that we actually need 18 more overloads (26 total), not 18 in total to complete the c++ api]_
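
For reference, the Python-level behavior this lands (single order differences, Tensor prepend/append), as a small usage sketch:
```
import torch

x = torch.tensor([1, 3, 6, 10])
print(torch.diff(x))                             # tensor([2, 3, 4])
print(x.diff())                                  # method variant
print(torch.diff(x, prepend=torch.tensor([0])))  # tensor([1, 2, 3, 4])
print(torch.diff(x, append=torch.tensor([10])))  # tensor([2, 3, 4, 0])
```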

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50569

Reviewed By: H-Huang

Differential Revision: D26176105

Pulled By: soulitzer

fbshipit-source-id: cd8e77cc2de1117c876cd71c29b312887daca33f
2021-02-02 20:25:16 -08:00
Max Balandat
a990ff7001 [SobolEngine] Fix edge case of dtype of first sample (#51578)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51578

https://github.com/pytorch/pytorch/pull/49710 introduced an edge case in which
drawing a single sample resulted in ignoring the `dtype` arg to `draw`. This PR
fixes that and adds a unit test to cover this behavior.
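
The fixed edge case, as a small check (sketch based on the description above):
```
import torch
from torch.quasirandom import SobolEngine

engine = SobolEngine(dimension=2)
sample = engine.draw(1, dtype=torch.float64)  # single sample: the path fixed here
print(sample.dtype)   # torch.float64, no longer silently the default dtype
print(sample.shape)   # torch.Size([1, 2])
```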

Test Plan: Unit tests

Reviewed By: danielrjiang

Differential Revision: D26204393

fbshipit-source-id: 441a44dc035002e7bbe6b662bf6d1af0e2cd88f4
2021-02-02 14:24:56 -08:00
vfdev
b106250047 Introduced AliasInfo for OpInfo (#50368)
Summary:
Introduced AliasInfo for OpInfo.

Context: Split of https://github.com/pytorch/pytorch/issues/49158

cc mruberry, please let me know if you'd like to see more code here to cover

> [ ] fold test_op_aliases.py into OpInfo-based testing in test_ops.py

from https://github.com/pytorch/pytorch/issues/50006

and/or add `UnaryUfuncInfo('abs')` as discussed https://github.com/pytorch/pytorch/pull/49158/files#r548774221

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50368

Reviewed By: ngimel

Differential Revision: D26177261

Pulled By: mruberry

fbshipit-source-id: 2e3884a387e8d5365fe05945375f0a9d1b5f5d82
2021-02-02 00:10:09 -08:00
kshitij12345
4b65a27a35 [testing] Add OpInfo for round and logit (#51272)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50006

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51272

Reviewed By: ngimel

Differential Revision: D26177020

Pulled By: mruberry

fbshipit-source-id: 4728b14c7a42980c7ca231ca1946430e0e38ed5b
2021-02-01 21:15:40 -08:00
Nikita Vedeneev
b198cf4f1c port index_fill_ from TH to ATen. (#50578)
Summary:
As per title. The port is based on TensorIterator.
Supports complex input.

Resolves https://github.com/pytorch/pytorch/issues/24714.
Resolves https://github.com/pytorch/pytorch/issues/24577.
Resolves https://github.com/pytorch/pytorch/issues/36328.
Possibly resolves https://github.com/pytorch/pytorch/issues/48230
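
A small usage sketch of the ported op on a complex tensor; the complex fill value is assumed to be accepted based on the "supports complex input" note above:
```
import torch

x = torch.zeros(4, dtype=torch.complex64)
idx = torch.tensor([1, 3])
x.index_fill_(0, idx, 2 + 3j)   # complex fill value, per the note above
print(x)  # tensor([0.+0.j, 2.+3.j, 0.+0.j, 2.+3.j])
```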

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50578

Reviewed By: ngimel

Differential Revision: D26049539

Pulled By: anjali411

fbshipit-source-id: 2be4e78f7a01700c593a9e893e01f69191e51ab1
2021-02-01 16:08:37 -08:00
kshitij12345
50fa415a4d [testing] Add OpInfo for ceil and floor (#51198)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50006

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51198

Reviewed By: malfet

Differential Revision: D26105099

Pulled By: mruberry

fbshipit-source-id: 6cfa89f42b87cca66dbc5bf474d17a6cad7eb45a
2021-02-01 10:10:36 -08:00
Max Balandat
449098c2d2 [SobolEngine] Update direction numbers to 21201 dims (#49710)
Summary:
Performs the update that was suggested in https://github.com/pytorch/pytorch/issues/41489

Adjust the functionality to largely match that of the scipy companion PR https://github.com/scipy/scipy/pull/10844/, including
- a new `draw_base2` method
- include zero as the first point in the (unscrambled) Sobol sequence

The scipy PR is also quite opinionated when the `draw` method doesn't get called with a base-2 number of points (for which the resulting sequence has nice properties; see the scipy PR for a comprehensive discussion of this).

Note that this update is a **breaking change** in the sense that sequences generated with the same parameters after as before will not be identical! They will have the same (better, arguably) distributional properties, but calling the engine with the same seed will result in different numbers in the sequence.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49710

Test Plan:
```
from torch.quasirandom import SobolEngine

# unscrambled 3-dim sequence; zero is now included as the first point
sobol = SobolEngine(3)
sobol.draw(4)

# scrambled 4-dim sequence
sobol = SobolEngine(4, scramble=True)
sobol.draw(5)

# new draw_base2 method: draws 2**2 = 4 points
sobol = SobolEngine(4, scramble=True)
sobol.draw_base2(2)
```

Reviewed By: malfet

Differential Revision: D25657233

Pulled By: Balandat

fbshipit-source-id: 9df50a14631092b176cc692b6024aa62a639ef61
2021-02-01 08:44:31 -08:00
kshitij12345
a88e1d3ddf [complex] Complex support for masked_scatter and autograd support for masked_scatter and masked_select (#51281)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/33152

Changes
* Enable complex support for masked_scatter
* Enable half support for masked_scatter CPU
* Enable complex autograd support for masked_scatter CPU and masked_select (both CPU and CUDA).

**Note**:
Complex Support for masked_scatter CUDA is disabled as it depends on `masked_fill` which is yet to be ported to ATen.
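
A minimal CPU illustration of the newly enabled complex path (not code from this PR):
```
import torch

dest = torch.zeros(4, dtype=torch.complex64)          # CPU, per the note above
mask = torch.tensor([True, False, True, False])
src = torch.tensor([1 + 1j, 2 + 2j], dtype=torch.complex64)
print(dest.masked_scatter(mask, src))  # tensor([1.+1.j, 0.+0.j, 2.+2.j, 0.+0.j])
```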

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51281

Reviewed By: ailzhang

Differential Revision: D26127561

Pulled By: anjali411

fbshipit-source-id: 6284926b934942213c5dfc24b5bcc8538d0231af
2021-01-29 13:49:31 -08:00
kshitij12345
eaf5ca09dc Migrate masked_scatter_ CUDA to ATen (#50039)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49542

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50039

Reviewed By: heitorschueroff

Differential Revision: D26096247

Pulled By: ngimel

fbshipit-source-id: ec1810d3412e0d7ab6b950265a3123519ad886c1
2021-01-27 14:17:02 -08:00
kshitij12345
6d098095eb [numpy] torch.lgamma: promote integer inputs to float (#50140)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515
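
With this change, integer inputs go through the float path instead of requiring a manual cast, e.g.:
```
import torch

x = torch.tensor([1, 2, 3, 4])     # int64 input
y = torch.lgamma(x)                # promoted to the default float dtype
print(y.dtype)  # torch.float32
print(y)        # tensor([0.0000, 0.0000, 0.6931, 1.7918])
```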

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50140

Reviewed By: mrshenli

Differential Revision: D25951094

Pulled By: mruberry

fbshipit-source-id: e53f1dbddff889710f05d43dbc9587382d3decb0
2021-01-27 12:08:46 -08:00
Peter Bell
9b6d463704 Move std and var tests to OpInfos (#50901)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50901

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D26083289

Pulled By: mruberry

fbshipit-source-id: 7e14ff37bba46dd456e0bc0aa9c4e0a632d0734c
2021-01-27 10:50:51 -08:00
mattip
345844d9d8 test, fix deepcopy of tensor with grad (#50663)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/3307

Previously, `self.grad` was not ~cloned~ deepcopied to the returned tensor in `deepcopy`. Added a test and an implementation.
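
The fixed behavior, sketched:
```
import copy
import torch

x = torch.ones(3, requires_grad=True)
x.sum().backward()           # populate x.grad
y = copy.deepcopy(x)
print(y.grad)                # tensor([1., 1., 1.]): the grad is deepcopied too
print(y.grad is x.grad)      # False: a separate tensor, not a shared reference
```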

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50663

Reviewed By: heitorschueroff

Differential Revision: D26074811

Pulled By: albanD

fbshipit-source-id: 536dad36415f1d03714b4ce57453f406ad802b8c
2021-01-26 16:19:53 -08:00
anjali411
e544d74c55 [CPU] Add torch.trace for complex tensors (#50380)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50380

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D25949361

Pulled By: anjali411

fbshipit-source-id: 9910bc5b532c9bf3add530221d643b2c41c62d01
2021-01-23 09:04:31 -08:00
kshitij12345
a291b254ee Migrate masked_scatter_ CPU to ATen (#49732)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49541

Reference: https://github.com/pytorch/pytorch/issues/24507

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49732

Reviewed By: ejguan

Differential Revision: D25991438

Pulled By: ngimel

fbshipit-source-id: a43bd0bfe043d8e32a6cadbbf736a0eaa697e7ec
2021-01-22 12:05:56 -08:00
Kurt Mohler
8ab1a1495d Rename set_deterministic to use_deterministic_algorithms (#49904)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49100
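
The renamed API in use (the getter name shown is the one in current releases and is an assumption for this sketch):
```
import torch

torch.use_deterministic_algorithms(True)              # new name after this PR
print(torch.are_deterministic_algorithms_enabled())   # True (assumed getter name)
torch.use_deterministic_algorithms(False)
```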

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49904

Reviewed By: ezyang, mrshenli

Differential Revision: D25956761

Pulled By: mruberry

fbshipit-source-id: 86a59289d50825a0ebbd7c358b483c8d8039ffa6
2021-01-22 11:27:07 -08:00
Kyle Chen
16faabe7f0 [ROCm] re-enable tests (#50691)
Summary:
Signed-off-by: Kyle Chen <kylechen@amd.com>

cc: jeffdaily

re-enable test_torch.py and test_unary_ufuncs.py tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50691

Reviewed By: mruberry

Differential Revision: D25967842

Pulled By: ngimel

fbshipit-source-id: dc0f6cb68fe4d151c2719bdf67ead96e1396acf2
2021-01-20 11:23:39 -08:00
Xinyu Li
7526e38cd3 Revert "Stable sort for CPU (#50052)" (#50752)
Summary:
This reverts commit c99f356051.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50752

Reviewed By: zou3519

Differential Revision: D25958146

Pulled By: glaringlee

fbshipit-source-id: f4068d038f9bd337bac8b673eaeb46a4646f6c77
2021-01-19 18:21:25 -08:00
kshitij12345
316f0b89c3 [testing] Port torch.{repeat, tile} tests to use OpInfo machinery (#50199)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50013

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50199

Reviewed By: ngimel

Differential Revision: D25949791

Pulled By: mruberry

fbshipit-source-id: 10eaf2d749fac8c08847f50461e72ad1c75c61e3
2021-01-19 06:02:27 -08:00
nikitaved
c458558334 kill multinomial_alias_setup/draw (#50489)
Summary:
As per title. Partially Fixes https://github.com/pytorch/pytorch/issues/49421.
These functions appear to be dead code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50489

Reviewed By: mruberry

Differential Revision: D25948912

Pulled By: ngimel

fbshipit-source-id: 108723bd4c76cbc3535eba902d6f74597bfdfa58
2021-01-19 00:23:58 -08:00
76181208+imaginary-person@users.noreply.github.com
3f052ba07b Remove unnecessary dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops (#50465)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/50064

**PROBLEM DESCRIPTION:**
1. The previous PR for this issue (https://github.com/pytorch/pytorch/issues/50347) had not removed the dtype checks for complex types.
These type-checks were added in https://github.com/pytorch/pytorch/issues/36377, but are no longer necessary,
as we now rely upon dispatch macros to produce error messages.
2. dtype checks in `clamp_max()` and `clamp_min()` for complex inputs had not been removed either.
3. Complex dispatch had not been removed for the min/max pointwise ops in TensorCompareKernel.cpp.

### **FIX DESCRIPTION:**
**FIX SUMMARY:**
1. Removed dtype checks added in https://github.com/pytorch/pytorch/issues/36377, and added 3 more in TensorCompare.cpp.
2. Removed dtype checks for complex inputs in `clamp_max()` and `clamp_min()`.
3. Disabled complex dispatch for min/max pointwise ops in TensorCompareKernel.cpp.
4. The error messages of the exceptions raised when min/max ops are not implemented are now checked to contain the text _not support_ (which is also present in _not supported_) or _not implemented_, so that the messages stay informative.

**REASON FOR NOT CHANGING DISPATCH FOR CUDA AND CLAMP OPS**:

As for the CUDA min/max operations, their kernels do not seem to be compiled & dispatched for complex types anyway, so no further changes seem to be required. Basically, the dispatch macros currently being used don't have cases for complex types.

For example,

1. the reduce CUDA ops use [`AT_DISPATCH_ALL_TYPES_AND2`](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Dispatch.h#L548-L575) in [ReduceMinMaxKernel.cu](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/ReduceMinMaxKernel.cu), and that macro doesn't allow complex types.

2. In [MaxMinElementwiseKernel.cu](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/MaxMinElementwiseKernel.cu), the CUDA pointwise ops use [`AT_DISPATCH_FLOATING_TYPES_AND2`](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Dispatch.h#L240-L263) for non-integral & non-boolean types, and this macro doesn't have a case for complex types either.

3. [clamp CUDA ops](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/UnaryOpsKernel.cu#L170-L211) use `AT_DISPATCH_ALL_TYPES_AND2`, which doesn't have a case for complex types.

Similarly, [CPU clamp min/max ops](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp#L428-L458) use the `AT_DISPATCH_ALL_TYPES_AND` dispatch macro, which doesn't have a case for complex types.

**REASON FOR ADDING 3 dtype CHECKS:**
There are a few cases in which the methods corresponding to `min_stub()` or `max_stub()` are not called, so dispatch macros don't get invoked, resulting in no exceptions being raised. Hence, `dtype` checks are necessary at 3 places to raise exceptions:

1. 52dcc72999/aten/src/ATen/native/TensorCompare.cpp (L342)
2. 52dcc72999/aten/src/ATen/native/TensorCompare.cpp (L422)
3. 52dcc72999/aten/src/ATen/native/TensorCompare.cpp (L389)

The first dtype check requirement can be verified from the following example Python code based on `test_complex_unsupported()`:
```
import unittest
import torch

class MyTestCase(unittest.TestCase):

    def test_1(self):
        # max over a dimension is not implemented for complex inputs, so this must raise
        t = torch.tensor((1 + 1j), device='cpu', dtype=torch.complex128)
        with self.assertRaises(Exception):
            torch.max(t, dim=0)

if __name__ == '__main__':
    unittest.main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50465

Reviewed By: mruberry

Differential Revision: D25938106

Pulled By: ngimel

fbshipit-source-id: 95e2df02ba8583fa3ce87d4a2fdcd60b912dda46
2021-01-17 22:00:05 -08:00
nikitaved
c99f356051 Stable sort for CPU (#50052)
Summary:
Fixes [https://github.com/pytorch/pytorch/issues/38681](https://github.com/pytorch/pytorch/issues/38681) for the CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50052

Reviewed By: mrshenli

Differential Revision: D25900823

Pulled By: glaringlee

fbshipit-source-id: 1a3fa336037d0aa2344d79f46dcacfd478a353d1
2021-01-15 19:34:27 -08:00
kshitij12345
5546a12fe3 remove redundant tests from tensor_op_tests (#50096)
Summary:
All of these unary operators already have an entry in the OpInfo DB.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50096

Reviewed By: zhangguanheng66

Differential Revision: D25870048

Pulled By: mruberry

fbshipit-source-id: b64e06d5b9ab5a03a202cda8c22fdb7e4ae8adf8
2021-01-12 04:53:12 -08:00
kshitij12345
9f832c8d3e [numpy] torch.exp: promote integer inputs to float (#50093)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50093

Reviewed By: H-Huang

Differential Revision: D25803549

Pulled By: mruberry

fbshipit-source-id: e6f245b5e728f2dca6072f8c359f03dff63aa14d
2021-01-08 06:30:18 -08:00
Thomas Viehmann
def8aa5499 Remove cpu half and dead code from multinomial (#50063)
Summary:
Based on ngimel's (thank you!) feedback, CPU half support was only accidental, so I'm removing it.

This lets us ditch the old codepath for sampling without replacement in favour of the new, better one.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50063

Reviewed By: mruberry

Differential Revision: D25772449

Pulled By: ngimel

fbshipit-source-id: 608729c32237de4ee6d1acf7e316a6e878dac7f0
2021-01-05 19:46:33 -08:00
anjali411
8fb5f16931 Complex backward for indexing, slicing, joining, and mutating ops (#49552)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49552

This PR:
1. Migrates the independent autograd tests for `hstack`, `dstack`, `vstack`, `movedim`, `moveaxis` from `test_autograd.py` to the new `OpInfo` based tests.
2. Migrates the autograd tests for `gather`, `index_select` from the method_tests to the new `OpInfo` based tests.
3. Enables complex backward for `stack, gather, index_select, index_add_` and adds complex autograd tests for all the above mentioned ops (see the sketch below).
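
A minimal sketch of the newly enabled complex autograd path, using `index_select`:
```
import torch

x = torch.randn(4, dtype=torch.cfloat, requires_grad=True)
y = torch.index_select(x, 0, torch.tensor([0, 2]))  # complex forward
y.sum().abs().backward()                            # reduce to a real scalar first
print(x.grad)                                       # gradient reaches the selected entries
```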

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25682511

Pulled By: anjali411

fbshipit-source-id: 5d8f89db4a9ec340ab99a6196987d44a23e2c6c6
2021-01-04 19:44:15 -08:00
kshitij12345
42d2e31cd6 [numpy] torch.rsqrt : promote integer inputs to float (#47909)
Summary:
Reference https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47909

Reviewed By: ngimel

Differential Revision: D25730876

Pulled By: mruberry

fbshipit-source-id: c87a8f686e1dd64e511640e0278021c4a584ccf2
2020-12-30 10:33:14 -08:00
kshitij12345
963f7629b5 [numpy] torch.digamma : promote integer inputs to float (#48302)
Summary:
**BC-breaking Note:**

This PR updates PyTorch's digamma function to be consistent with SciPy's special.digamma function. This changes the result of the digamma function on the nonpositive integers, where the gamma function is not defined. Since the gamma function is undefined at these points, the (typical) derivative of the logarithm of the gamma function is also undefined at these points, and for negative integers this PR updates digamma to return NaN. For zero, however, it returns -inf to be consistent with SciPy.

Interestingly, SciPy made a similar change, which was noticed by at least one user: https://github.com/scipy/scipy/issues/9663#issue-396587679.

SciPy's returning of negative infinity at zero is intentional:
59347ae8b8/scipy/special/cephes/psi.c (L163)

This change is consistent with the C++ standard for the gamma function:
https://en.cppreference.com/w/cpp/numeric/math/tgamma
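
The post-change behavior described above, in one call:
```
import torch

x = torch.tensor([0.0, -1.0, -2.0, 1.0])
print(torch.digamma(x))
# tensor([   -inf,     nan,     nan, -0.5772])
# zero -> -inf (matching SciPy), negative integers -> NaN, digamma(1) = -Euler-Mascheroni constant
```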

**PR Summary:**
Reference https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48302

Reviewed By: ngimel

Differential Revision: D25664087

Pulled By: mruberry

fbshipit-source-id: 1168e81e218bf9fe5b849db0e07e7b22e590cf73
2020-12-24 22:42:55 -08:00
Kshiteej K
3f4b98d568 [numpy] torch.erfinv: promote integer inputs to float (#49155)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49155

Reviewed By: ngimel

Differential Revision: D25664234

Pulled By: mruberry

fbshipit-source-id: 630fd1d334567d78c8130236a67dda0f5ec02560
2020-12-23 14:22:03 -08:00
Kshiteej K
461aafe389 [numpy] torch.angle: promote integer inputs to float (#49163)
Summary:
**BC-Breaking Note:**

This PR updates PyTorch's angle operator to be consistent with NumPy's. Previously angle would return zero for all floating point values (including NaN). Now angle returns `pi` for negative floating point values, zero for non-negative floating point values, and propagates NaNs.
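
The new behavior described above, illustrated on real inputs:
```
import math
import torch

x = torch.tensor([-3.0, 0.0, 2.0, float('nan')])
print(torch.angle(x))  # tensor([3.1416, 0.0000, 0.0000,    nan])
assert math.isclose(torch.angle(torch.tensor(-1.0)).item(), math.pi, rel_tol=1e-6)
```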

**PR Summary:**

Reference: https://github.com/pytorch/pytorch/issues/42515

TODO:

* [x] Add BC-Breaking Note (previously all real numbers returned `0`, even `nan`) -> fixed to match the correct behavior of NumPy.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49163

Reviewed By: ngimel

Differential Revision: D25681758

Pulled By: mruberry

fbshipit-source-id: 54143fe6bccbae044427ff15d8daaed3596f9685
2020-12-22 18:43:14 -08:00
Xiang Gao
50b361a821 Enable BF16 for indexing on CUDA (#48801)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48801

Reviewed By: glaringlee

Differential Revision: D25542914

Pulled By: ngimel

fbshipit-source-id: 4113eb2729d15b40a89268172cc37122b5213624
2020-12-14 17:24:31 -08:00
Chester Liu
3a943e9f82 Use Unicode friendly API on Win32 in THAllocator (#47905)
Summary:
This replaces the narrow character set APIs with the wide character set ones in `THAllocator.cpp`. This fixes the potential crashes caused by passing non-ASCII characters in `torch::from_file` on Windows.

See: https://github.com/pytorch/pytorch/issues/47422
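
For illustration, the kind of call this protects on Windows (the filename below is just an example of a non-ASCII path):
```
import torch

path = "données.bin"   # non-ASCII filename that previously could crash on Windows
n = 8
with open(path, "wb") as f:
    f.write(bytes(4 * n))          # back the file with n float32 zeros

t = torch.from_file(path, shared=False, size=n, dtype=torch.float32)
print(t.shape)  # torch.Size([8])
```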

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47905

Reviewed By: zhangguanheng66

Differential Revision: D25399146

Pulled By: ezyang

fbshipit-source-id: 0a183b65de171c48ed1718fa71e773224eaf196f
2020-12-14 14:24:20 -08:00
Brian Hirsh
f54ab8fbfe Revert "Revert D25003113: make validate debug-only in Device copy ctr" (#49123)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49123

This reverts commit 7a4a2df225.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D25463531

Pulled By: bdhirsh

fbshipit-source-id: 7c7ecdc1d63ffd137b84a129887c424b2083a958
2020-12-14 07:33:37 -08:00
kiyosora
15200e385a Enable torch.where() to support Float16 & BFloat16 type inputs (#49004)
Summary:
Fixed https://github.com/pytorch/pytorch/issues/49075

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49004

Reviewed By: zou3519

Differential Revision: D25495225

Pulled By: H-Huang

fbshipit-source-id: 09418ee5503f65c8862e40119c5802779505a4db
2020-12-11 13:36:41 -08:00
kshitij12345
eb9516eaa4 [numpy] torch.exp{2, m1}: promote integer inputs to float (#48926)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48926

Reviewed By: zhangguanheng66

Differential Revision: D25392344

Pulled By: mruberry

fbshipit-source-id: ddbabcfd58cc4c944153b1a224cc232efa022104
2020-12-10 00:14:22 -08:00
Kurt Mohler
27f7d1c286 Port eig CPU from TH to ATen (#43215)
Summary:
Also consolidates shared logic between `eig` CPU and CUDA implementations

Fixes https://github.com/pytorch/pytorch/issues/24693

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43215

Reviewed By: VitalyFedyunin, zhangguanheng66

Differential Revision: D23862622

Pulled By: ngimel

fbshipit-source-id: ca1002428850520cd74cd5b7ed8cb4d12dbd9c52
2020-12-09 23:27:35 -08:00
Peter Bell
5765bbd78c Review memory overlap checks for advanced indexing operations (#48651)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45964

Indexing operators, e.g. `scatter`/`gather`, use tensor restriding, so `TensorIterator`'s built-in overlap checking needs to be disabled. This adds the missing overlap checks for these operators.

In addition, some indexing operators don't work well with `MemOverlapStatus::FULL`, which is explicitly allowed by `assert_no_partial_overlap`. So, I've introduced `assert_no_overlap`, which will raise an error on partial _or_ full overlap.
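
A rough illustration of the kind of aliasing these checks are meant to reject; that this exact call raises after the change is an assumption of the sketch:
```
import torch

x = torch.zeros(5)
idx = torch.arange(5)
try:
    x.scatter_(0, idx, x)   # destination and source fully overlap
except RuntimeError as e:
    print("rejected:", e)
```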

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48651

Reviewed By: zhangguanheng66

Differential Revision: D25401047

Pulled By: ngimel

fbshipit-source-id: 53abb41ac63c4283f3f1b10a0abb037169f20b89
2020-12-09 15:10:52 -08:00
Supriya Rao
7a4a2df225 Revert D25003113: make validate debug-only in Device copy ctr
Test Plan: revert-hammer

Differential Revision:
D25003113 (4b26cafb8f)

Original commit changeset: e17e6495db65

fbshipit-source-id: fd636c954a97bd80892464feb974a11b9dd96899
2020-12-09 13:58:11 -08:00
Brian Hirsh
4b26cafb8f make validate debug-only in Device copy ctr (#47854)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47854

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D25003113

Pulled By: bdhirsh

fbshipit-source-id: e17e6495db65c48c7daf3429acbd86742286a1f3
2020-12-09 08:11:24 -08:00