Commit Graph

28 Commits

Author SHA1 Message Date
davidriazati@fb.com
4b96fc060b Remove distutils (#57040)
Summary:
[distutils](https://docs.python.org/3/library/distutils.html) is on its way out: it will be deprecated-on-import for Python 3.10+ and removed in Python 3.12 (see [PEP 632](https://www.python.org/dev/peps/pep-0632/)). There's no reason for us to keep it around since all the functionality we want from it can be found in `setuptools` / `sysconfig`. `setuptools` includes a copy of most of `distutils` that it uses under the hood (which is fine to use according to the PEP), so this PR also uses that in some places.
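As a rough illustration (not from the PR itself), build information that used to come from `distutils.sysconfig` is available from the stdlib `sysconfig` module:

```
import sysconfig

# Previously: from distutils.sysconfig import get_python_inc
include_dir = sysconfig.get_paths()["include"]    # directory containing Python.h
lib_dir = sysconfig.get_config_var("LIBDIR")      # directory containing libpython
print(include_dir, lib_dir)
```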

Fixes #56527
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57040

Pulled By: driazati

Reviewed By: nikithamalgifb

Differential Revision: D28051356

fbshipit-source-id: 1ca312219032540e755593e50da0c9e23c62d720
2021-04-29 12:10:11 -07:00
Arindam Roy
b907d6e3b6 [ROCm] skip some tests to enable 4.1 CI upgrade (#54536)
Summary:
Skips the tests indicated as failing in https://github.com/pytorch/pytorch/issues/54535.

During the ROCm CI upgrade from 4.0.1 to 4.1, some tests regressed: specifically, the FFT tests in test_spectral_ops.py and test_grid_sample in test_nn.py. In order to keep a passing CI signal, we need to disable these temporarily.
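For reference, a minimal sketch of how such a temporary skip is typically expressed (assuming PyTorch's `skipIfRocm` helper; the exact decorator used in the PR may differ):

```
import torch
from torch.testing._internal.common_utils import TestCase, run_tests, skipIfRocm

class TestSpectralOps(TestCase):
    @skipIfRocm  # temporarily disabled on ROCm, see the issue above
    def test_fft_returns_complex(self):
        self.assertTrue(torch.fft.fft(torch.randn(64)).is_complex())

if __name__ == "__main__":
    run_tests()
```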

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54536

Reviewed By: H-Huang

Differential Revision: D27442974

Pulled By: malfet

fbshipit-source-id: 07dffb957757a5fc7afaa5bf78b935a427251ef4
2021-03-30 17:49:45 -07:00
Michael Melesse
fef0219f7e [ROCM] Fix hipfft transform type error (#53411)
Summary:
This PR enables some previously failing unit tests for fft in PyTorch on ROCm.

These tests were failing due to an error in how hipfft was invoked for the different transform types with float inputs, causing mismatches when compared against the baselines.

We solved the problem by calling hipfft with the right config for each transform type.
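As an illustrative sketch of the kind of baseline comparison these tests perform (not the actual test code), a float input is transformed and checked against a NumPy reference:

```
import numpy as np
import torch

x = torch.randn(128, dtype=torch.float32)   # real float input
result = torch.fft.fft(x)                   # promoted to a complex result
baseline = np.fft.fft(x.numpy())            # NumPy reference
assert np.allclose(result.numpy(), baseline, atol=1e-5)
```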

This PR does not enable all fft tests. There are still other issues that need to be resolved before that can happen.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53411

Reviewed By: albanD

Differential Revision: D27008323

Pulled By: mruberry

fbshipit-source-id: 649c65d0f12a889a426ec475f7d8fcc6f1d81bd3
2021-03-17 19:26:04 -07:00
Peter Bell
f62e9156dc Add missing decorators in test_spectral_ops (#53736)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53456

I'm confused why this wasn't picked up in CI. There's definitely at least one CI job that builds without MKL. Are spectral_ops not being run at all on that job?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53736

Reviewed By: albanD

Differential Revision: D27007901

Pulled By: mruberry

fbshipit-source-id: cd93a2c48f4ccb2fd2e0e35768ee059039868a1b
2021-03-12 12:00:25 -08:00
mattip
54a2498919 Modify tests to use assertWarnsOnceRegex instead of maybeWarnsRegex (#52387)
Summary:
Related to https://github.com/pytorch/pytorch/issues/50006

Follow-on for https://github.com/pytorch/pytorch/issues/48560 to ensure TORCH_WARN_ONCE warnings are caught. Most of this is a straightforward find-and-replace, but I did find one place where the TORCH_WARN_ONCE warning was not wrapped into a Python warning.
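A minimal sketch of the new pattern (assuming the `assertWarnsOnceRegex` context manager on PyTorch's `TestCase`; the exact signature may differ slightly, and `torch.range` is used here only as a convenient warning source):

```
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class TestWarnOnce(TestCase):
    def test_deprecated_op_warns(self):
        # catches warn-once warnings even if they already fired earlier in the process
        with self.assertWarnsOnceRegex(UserWarning, "deprecated"):
            torch.range(0, 3)

if __name__ == "__main__":
    run_tests()
```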

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52387

Reviewed By: albanD

Differential Revision: D26773387

Pulled By: mruberry

fbshipit-source-id: 5be7efbc8ab4a32ec8437c9c45f3b6c3c328f5dd
2021-03-08 03:32:14 -08:00
Michael Melesse
51c28e4d7e [ROCm] enable fft tests (#51581)
Summary:
This PR enables some previously failing unit tests for fft in PyTorch on ROCm.

These tests were failing because hipfft clobbers its inputs, causing mismatches in tests that check that applying an fft and its inverse returns the original input.

We solve this by cloning the input on the ROCm platform, using an existing flag.
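A minimal sketch of the property the affected tests rely on (illustrative only): the transform must leave its input untouched so that the round trip matches the original signal.

```
import torch

x = torch.randn(256, dtype=torch.cfloat, device="cuda")  # "cuda" maps to HIP on ROCm
x_before = x.clone()
y = torch.fft.ifft(torch.fft.fft(x))

assert torch.equal(x, x_before)           # input must not be clobbered
assert torch.allclose(x, y, atol=1e-5)    # round trip recovers the signal
```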

This PR does not enable all fft tests. There are other issues that need to be resolved before that can happen.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51581

Reviewed By: ejguan

Differential Revision: D26489344

Pulled By: seemethere

fbshipit-source-id: 472fce8e514adcf91e7f46a686cbbe41e91235a9
2021-02-17 13:27:55 -08:00
Peter Bell
52ab858f07 STFT: Improve error message when window is on wrong device (#51128)
Summary:
Closes https://github.com/pytorch/pytorch/issues/51042

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51128

Reviewed By: mruberry

Differential Revision: D26108998

Pulled By: ngimel

fbshipit-source-id: 1166c19c2ef6846e29b16c1aa06cb5c1ce3ccb0d
2021-01-27 22:31:57 -08:00
Peter Bell
db079a9877 Padding: support complex dtypes (#50594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50594

Fixes #50234

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25987316

Pulled By: anjali411

fbshipit-source-id: c298b771fe52b267a86938e886ea402badecfe3e
2021-01-22 11:57:42 -08:00
Peter Bell
fb73cc4dc4 Migrate some torch.fft tests to use OpInfos (#48428)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48428

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25868666

Pulled By: mruberry

fbshipit-source-id: ca6d0c4e44f4c220675dc264a405d960d4b31771
2021-01-12 04:42:54 -08:00
Peter Bell
26391143b6 Support out argument in torch.fft ops (#49335)
Summary:
Ref https://github.com/pytorch/pytorch/issues/42175

This adds out argument support to all functions in the `torch.fft` namespace except for `fftshift` and `ifftshift` because they rely on `at::roll` which doesn't have an out argument version.

Note that there's no general way to do the transforms directly into the output since both cufft and mkl-fft only support single batch dimensions. At a minimum, the output may need to be re-strided, which I don't think is normally expected of `out` arguments. So, on CPU this just copies the result into the out tensor. On CUDA, the normalization is changed to call `at::mul_out` instead of an in-place multiply.

If it's desirable, I could add a special case to transform into the output when `out.numel() == 0` since there's no expectation to preserve the strides in that case anyway. But that would lead to the slightly odd situation where `out` having the correct shape follows a different code path from `out.resize_(0)`.
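A hedged usage sketch of the new argument (shapes and dtypes chosen purely for illustration):

```
import torch

t = torch.randn(128, dtype=torch.cfloat)
out = torch.empty(128, dtype=torch.cfloat)
torch.fft.fft(t, out=out)   # result is written (copied) into `out`
```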

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49335

Reviewed By: mrshenli

Differential Revision: D25756635

Pulled By: mruberry

fbshipit-source-id: d29843f024942443c8857139a2abdde09affd7d6
2021-01-05 17:17:49 -08:00
Mike Ruberry
5e1c8f24d4 Make stft (temporarily) warn (#50102)
Summary:
When continuing the deprecation process for stft, it was made to throw an error when `return_complex` was not explicitly set by the user. Unfortunately that change missed a model relying on the historic stft behavior. Before re-enabling the error we'll need to write an upgrader for that model.

This PR turns the error back into a warning to allow that model to continue running as before.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50102

Reviewed By: ngimel

Differential Revision: D25784325

Pulled By: mruberry

fbshipit-source-id: 825fb38af39b423ce11b376ad3c4a8b21c410b95
2021-01-05 15:39:00 -08:00
Peter Bell
5c25f8faf3 stft: Change require_complex warning to an error (#49022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

**BC-breaking note**:

Previously torch.stft took an optional `return_complex` parameter that indicated whether the output would be a floating point tensor or a complex tensor. By default `return_complex` was False to be consistent with the previous behavior of torch.stft. This PR changes this behavior so `return_complex` is a required argument.
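An illustrative call under the new behavior (parameters chosen arbitrarily):

```
import torch

x = torch.randn(8000)
# return_complex must now be passed explicitly for real inputs
spec = torch.stft(x, n_fft=512, return_complex=True)
print(spec.shape, spec.dtype)   # complex spectrogram, e.g. torch.complex64
```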

**PR Summary**:

* **#49022 stft: Change require_complex warning to an error**

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25658906

Pulled By: mruberry

fbshipit-source-id: 11932d1102e93f8c7bd3d2d0b2a607fd5036ec5e
2020-12-20 14:48:25 -08:00
Mike Ruberry
47c65f8223 Revert D25569586: stft: Change require_complex warning to an error
Test Plan: revert-hammer

Differential Revision:
D25569586 (5874925b46)

Original commit changeset: 09608088f540

fbshipit-source-id: 6a5953b327a4a2465b046e29bb007a0c5f4cf14a
2020-12-16 16:21:52 -08:00
Peter Bell
5874925b46 stft: Change require_complex warning to an error (#49022)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25569586

Pulled By: mruberry

fbshipit-source-id: 09608088f540c2c3fc70465f6a23f2aec5f24f85
2020-12-16 12:47:56 -08:00
Peter Bell
fc0a3a1787 Improve torch.fft n-dimensional transforms (#46911)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46911

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25420647

Pulled By: mruberry

fbshipit-source-id: bf7e6a2ec41f9f95ffb05c128ee0f3297e34aae2
2020-12-09 12:40:06 -08:00
Peter Bell
5180caeeb4 Remove deprecated spectral ops from torch namespace (#48594)
Summary:
Ref https://github.com/pytorch/pytorch/issues/42175

This removes the 4 deprecated spectral functions: `torch.{fft,rfft,ifft,irfft}`. The `torch.fft` namespace is also now imported by default.

The actual `at::native` functions are still used in `torch.stft`, so they can't be fully removed yet, but they will be once https://github.com/pytorch/pytorch/issues/47601 has been merged.
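A rough migration sketch (the old signature is paraphrased from memory and may not match exactly):

```
import torch

x = torch.randn(64)

# Old, removed API: torch.rfft(x, signal_ndim=1) returned a real tensor
# with a trailing dimension of size 2.
Xc = torch.fft.rfft(x)               # new namespace, returns a complex tensor
X_as_real = torch.view_as_real(Xc)   # recover the old (..., 2) layout if needed
```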

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48594

Reviewed By: heitorschueroff

Differential Revision: D25298929

Pulled By: mruberry

fbshipit-source-id: e36737fe8192fcd16f7e6310f8b49de478e63bf0
2020-12-05 04:12:32 -08:00
Peter Bell
7df8445242 torch.fft: Remove complex gradcheck workaround (#48425)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48425

gradcheck now natively supports functions with complex inputs and/or outputs.
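A minimal sketch of what this now allows (double precision is used because gradcheck needs it):

```
import torch
from torch.autograd import gradcheck

t = torch.randn(16, dtype=torch.cdouble, requires_grad=True)
assert gradcheck(torch.fft.fft, (t,))   # complex input, complex output
```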

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25176377

Pulled By: mruberry

fbshipit-source-id: d603e2511943f38aeb3b8cfd972af6bf4701ed29
2020-11-26 22:45:59 -08:00
Nikita Shulga
172ed51a17 Mark parts of spectral tests as slow (#46509)
Summary:
According to https://app.circleci.com/pipelines/github/pytorch/pytorch/228154/workflows/31951076-b633-4391-bd0d-b2953c940876/jobs/8290059
TestFFTCUDA.test_fftn_backward_cuda_complex128 takes 242 seconds to finish, with most of the time spent checking the 2nd-order gradient.

* Refactor the common part of test_fft_backward and test_fftn_backward into _fft_grad_check_helper
* Introduce the `slowAwareTest` decorator
* Split the test into fast and slow parts by checking the 2nd-order gradient only during the slow part (sketched below)
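A rough sketch of the fast/slow split (assuming the usual PYTORCH_TEST_WITH_SLOW gating; the actual `slowAwareTest` decorator may behave differently):

```
import os
import unittest

RUN_SLOW = os.getenv("PYTORCH_TEST_WITH_SLOW", "0") == "1"

class TestFFTGrad(unittest.TestCase):
    def test_fft_backward_fast(self):
        # first-order gradcheck only; always runs
        pass

    @unittest.skipUnless(RUN_SLOW, "2nd-order gradient check is slow")
    def test_fft_backward_slow(self):
        # gradgradcheck; runs only in the slow-test job
        pass
```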

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46509

Reviewed By: walterddr

Differential Revision: D24378901

Pulled By: malfet

fbshipit-source-id: 606670c2078480219905f63b9b278b835e760a66
2020-10-19 10:11:46 -07:00
Peter Bell
da95eec613 torch.fft: Two dimensional FFT functions (#45164)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45164

This PR implements `fft2`, `ifft2`, `rfft2` and `irfft2`. These are the last functions required for `torch.fft` to match `numpy.fft`. If you look at either NumPy or SciPy you'll see that the 2-dimensional variants are identical to `*fftn` in every way, except for the default value of `axes`. In fact you can even use `fft2` to do general n-dimensional transforms.
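A small illustration of that equivalence:

```
import torch

x = torch.randn(4, 8, 8)
# fft2 is fftn restricted to the last two dimensions by default
assert torch.allclose(torch.fft.fft2(x), torch.fft.fftn(x, dim=(-2, -1)))
```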

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D24363639

Pulled By: mruberry

fbshipit-source-id: 95191b51a0f0b8e8e301b2c20672ed4304d02a57
2020-10-17 16:23:06 -07:00
Peter Bell
99d3f37bd4 Run gradgradcheck on torch.fft transforms (#46004)
Summary:
Ref https://github.com/pytorch/pytorch/issues/42175

As already noted in the `torch.fft` `gradcheck` tests, `gradcheck` isn't fully working for complex types yet and the function inputs need to be real. A similar workaround works for `gradgradcheck`: viewing the complex outputs as real before returning them makes `gradgradcheck` pass.
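A sketch of the workaround described above (real inputs, complex outputs viewed as real):

```
import torch
from torch.autograd import gradgradcheck

def fft_as_real(x):
    # view the complex output as a real tensor so gradgradcheck can handle it
    return torch.view_as_real(torch.fft.fft(x))

t = torch.randn(8, dtype=torch.double, requires_grad=True)   # real input
assert gradgradcheck(fft_as_real, (t,))
```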

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46004

Reviewed By: ngimel

Differential Revision: D24187000

Pulled By: mruberry

fbshipit-source-id: 33c2986b07bac282dff1bd4f2109beb70e47bf79
2020-10-08 00:02:05 -07:00
Peter Bell
d44eaf63d1 torch.fft helper functions (#44877)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44877

Part of gh-42175. This implements the `torch.fft` helper functions: `fftfreq`, `rfftfreq`, `fftshift` and `ifftshift`.

* #43009 Cleanup tracer handling of optional arguments
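A brief usage sketch of the four helpers added here:

```
import torch

freqs = torch.fft.fftfreq(8)              # sample frequencies for an 8-point FFT
rfreqs = torch.fft.rfftfreq(8)            # non-negative frequencies only
centered = torch.fft.fftshift(freqs)      # reorder so the zero frequency is centered
restored = torch.fft.ifftshift(centered)  # undo the shift
assert torch.equal(freqs, restored)
```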

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D24043473

Pulled By: mruberry

fbshipit-source-id: 35de7b70b27658a426773f62d23722045ea53268
2020-10-05 22:04:52 -07:00
Peter Bell
6a2e9eb51c torch.fft: Multi-dimensional transforms (#44550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44550

Part of the `torch.fft` work (gh-42175).
This adds n-dimensional transforms: `fftn`, `ifftn`, `rfftn` and `irfftn`.

This is aiming for correctness first, with the implementation on top of the existing `_fft_with_size` restrictions. I plan to follow up later with a more efficient rewrite that makes `_fft_with_size` work with arbitrary numbers of dimensions.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D23846032

Pulled By: mruberry

fbshipit-source-id: e6950aa8be438ec5cb95fb10bd7b8bc9ffb7d824
2020-09-23 22:09:58 -07:00
Peter Bell
da7863f46b Add one dimensional FFTs to torch.fft namespace (#43011)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43011

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D23751850

Pulled By: mruberry

fbshipit-source-id: 8dc5fec75102d8809eeb85a3d347ba1b5de45b33
2020-09-19 23:32:22 -07:00
Peter Bell
caea1adc35 Complex support for stft and istft (#43886)
Summary:
Ref https://github.com/pytorch/pytorch/issues/42175, fixes https://github.com/pytorch/pytorch/issues/34797

This adds complex support to `torch.stft` and `torch.istft`. Note that there are really two issues with complex here: complex signals, and returning complex tensors.

## Complex signals and windows
`stft` currently assumes all signals are real and uses `rfft` with `onesided=True` by default. Similarly, `istft` always takes a complex fourier series and uses `irfft` to return real signals.

For `stft`, I now allow complex inputs and windows by calling the full `fft` if either is complex. If the user gives `onesided=True` and the signal is complex, then this doesn't work and an error is raised instead. For `istft`, there's no way to automatically know what to do when `onesided=False` because that could be either a redundant representation of a real signal or a complex signal. So there, the user needs to pass the argument `return_complex=True` in order to use `ifft` and get a complex result back.
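An illustrative call pattern for a complex signal (parameters chosen arbitrarily):

```
import torch

x = torch.randn(4096, dtype=torch.cfloat)          # complex signal
spec = torch.stft(x, n_fft=256, onesided=False,    # onesided=True would raise here
                  return_complex=True)
y = torch.istft(spec, n_fft=256, onesided=False,
                return_complex=True)               # ask for a complex signal back
```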

## stft returning complex tensors
The other issue is that `stft` returns a complex result, represented as a `(..., 2)` real tensor. I think ideally we want this to return proper complex tensors, but to preserve BC I've had to add a `return_complex` argument to manage this transition. `return_complex` defaults to False for real inputs to preserve BC, but defaults to True for complex inputs where there is no BC to consider.

In order to `return_complex` by default everywhere without a sudden BC-breaking change, a simple transition plan could be:
1. introduce `return_complex`, defaulted to false when BC is an issue but giving a warning. (this PR)
2. raise an error in cases where `return_complex` defaults to false, making it a required argument.
3. change `return_complex` default to true in all cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43886

Reviewed By: glaringlee

Differential Revision: D23760174

Pulled By: mruberry

fbshipit-source-id: 2fec4404f5d980ddd6bdd941a63852a555eb9147
2020-09-18 01:39:47 -07:00
Mike Ruberry
6cb0807f88 Fixes ROCm CI (#42701)
Summary:
Per title. ROCm CI doesn't have MKL so this adds a couple missing test annotations.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42701

Reviewed By: ngimel

Differential Revision: D22986273

Pulled By: mruberry

fbshipit-source-id: efa717e2e3771562e9e82d1f914e251918e96f64
2020-08-06 15:24:50 -07:00
Mike Ruberry
85a00c4c92 Skips spectral tests to prevent ROCm build from timing out (#42667)
Summary:
Per title.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42667

Reviewed By: ailzhang

Differential Revision: D22978531

Pulled By: mruberry

fbshipit-source-id: 0c3ba116836ed6c433e2c6a0e1a0f2e3c94c7803
2020-08-06 12:41:32 -07:00
Mike Ruberry
ccfce9d4a9 Adds fft namespace (#41911)
Summary:
This PR creates a new namespace, torch.fft (torch::fft), and puts a single function, fft, in it. This function is a simplified version of NumPy's [numpy.fft.fft](https://numpy.org/doc/1.18/reference/generated/numpy.fft.fft.html?highlight=fft#numpy.fft.fft) that accepts no optional arguments. It is intended to demonstrate how to add and document functions in the namespace, and is not intended to deprecate the existing torch.fft function.

Adding this namespace was complicated by the existence of the torch.fft function in Python. Creating a torch.fft Python module makes this name ambiguous: does it refer to a function or module? If the JIT didn't exist, a solution to this problem would have been to make torch.fft refer to a callable class that mimicked both the function and module. The JIT, however, cannot understand this pattern. As a workaround it's required to explicitly `import torch.fft` to access the torch.fft.fft function in Python:

```
import torch.fft

t = torch.randn(128, dtype=torch.cdouble)
torch.fft.fft(t)
```

See https://github.com/pytorch/pytorch/issues/42175 for future work. Another possible future PR is to get the JIT to understand torch.fft as a callable class so it need not be imported explicitly to be used.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41911

Reviewed By: glaringlee

Differential Revision: D22941894

Pulled By: mruberry

fbshipit-source-id: c8e0b44cbe90d21e998ca3832cf3a533f28dbe8d
2020-08-06 00:20:50 -07:00
Mike Ruberry
4b6e5f42a4 Creates spectral ops test suite (#42157)
Summary:
In preparation for creating the new torch.fft namespace and NumPy-like fft functions, as well as supporting our goal of refactoring and reducing the size of test_torch.py, this PR creates a test suite for our spectral ops.

The existing spectral op tests from test_torch.py and test_cuda.py are moved to test_spectral_ops.py and updated to run under the device generic test framework.
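A minimal sketch of the device-generic pattern the moved tests now follow (the test body is illustrative):

```
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import instantiate_device_type_tests

class TestSpectralOps(TestCase):
    # each test receives a `device` argument and is instantiated per backend
    def test_fft_ifft_round_trip(self, device):
        x = torch.randn(64, device=device)
        self.assertEqual(torch.fft.ifft(torch.fft.fft(x)).real, x)

# generates TestSpectralOpsCPU, TestSpectralOpsCUDA, ... variants
instantiate_device_type_tests(TestSpectralOps, globals())

if __name__ == "__main__":
    run_tests()
```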

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42157

Reviewed By: albanD

Differential Revision: D22811096

Pulled By: mruberry

fbshipit-source-id: e5c50f0016ea6bb8b093cd6df2dbcef6db9bb6b6
2020-07-29 11:36:18 -07:00