Commit Graph

517 Commits

Li-Huai (Allan) Lin
740137df6f [MPS] Add bucketize op (#112830)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112830
Approved by: https://github.com/kulinseth, https://github.com/malfet
ghstack dependencies: #112829
2023-11-07 17:22:08 +00:00
Li-Huai (Allan) Lin
c4bb77323d [MPS] Add searchsorted op (#112829)
The Metal kernels implemented here closely follow `Bucketization.cu`. A usage sketch follows the benchmark below.

Benchmark:
```
[----------------------------- searchsorted ----------------------------]
                                                         |  cpu   |  mps
1 threads: --------------------------------------------------------------
      Batch size: 8; In features: 64; Sorter: True       |    44  |   530
      Batch size: 8; In features: 64; Sorter: False      |    31  |    12
      Batch size: 8; In features: 256; Sorter: True      |   131  |   520
      Batch size: 8; In features: 256; Sorter: False     |   107  |    12
      Batch size: 8; In features: 1024; Sorter: True     |   499  |   590
      Batch size: 8; In features: 1024; Sorter: False    |   398  |    12
      Batch size: 16; In features: 64; Sorter: True      |    71  |   540
      Batch size: 16; In features: 64; Sorter: False     |    57  |    12
      Batch size: 16; In features: 256; Sorter: True     |   242  |   610
      Batch size: 16; In features: 256; Sorter: False    |   200  |    12
      Batch size: 16; In features: 1024; Sorter: True    |   999  |   720
      Batch size: 16; In features: 1024; Sorter: False   |   842  |    12
      Batch size: 32; In features: 64; Sorter: True      |   124  |   509
      Batch size: 32; In features: 64; Sorter: False     |   103  |    12
      Batch size: 32; In features: 256; Sorter: True     |   477  |   650
      Batch size: 32; In features: 256; Sorter: False    |   407  |    12
      Batch size: 32; In features: 1024; Sorter: True    |  1940  |   833
      Batch size: 32; In features: 1024; Sorter: False   |  1710  |    12
      Batch size: 64; In features: 64; Sorter: True      |   231  |   590
      Batch size: 64; In features: 64; Sorter: False     |   194  |    12
      Batch size: 64; In features: 256; Sorter: True     |   937  |   710
      Batch size: 64; In features: 256; Sorter: False    |   800  |    13
      Batch size: 64; In features: 1024; Sorter: True    |  3980  |  1290
      Batch size: 64; In features: 1024; Sorter: False   |  3330  |    12
      Batch size: 128; In features: 64; Sorter: True     |   448  |   650
      Batch size: 128; In features: 64; Sorter: False    |   390  |    13
      Batch size: 128; In features: 256; Sorter: True    |  1830  |   850
      Batch size: 128; In features: 256; Sorter: False   |  1590  |    12
      Batch size: 128; In features: 1024; Sorter: True   |  7790  |  2850
      Batch size: 128; In features: 1024; Sorter: False  |  6670  |    13
```
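
For reference, a minimal usage sketch of `torch.searchsorted`, including the `sorter` path the benchmark exercises; the device-selection fallback is an assumption for portability:

```python
import torch

# Run on MPS when available; fall back to CPU otherwise (portability assumption).
device = "mps" if torch.backends.mps.is_available() else "cpu"

boundaries = torch.tensor([1.0, 3.0, 5.0, 7.0, 9.0], device=device)
values = torch.tensor([2.0, 6.0, 8.5], device=device)

# Index where each value would be inserted to keep the sequence sorted.
print(torch.searchsorted(boundaries, values))  # tensor([1, 3, 4])

# For an unsorted sequence, pass `sorter` (argsort indices) instead of sorting it.
seq = torch.tensor([9.0, 1.0, 5.0, 3.0, 7.0], device=device)
sorter = torch.argsort(seq)
print(torch.searchsorted(seq, values, sorter=sorter))  # tensor([1, 3, 4])
```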

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112829
Approved by: https://github.com/malfet
2023-11-07 17:22:08 +00:00
CaoE
455241bbd3 Add Half for atan2, logaddexp, logaddexp2, hypot, and nextafter on CPU (#112138)
Add Half support for atan2, logaddexp, logaddexp2, hypot, and nextafter on CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112138
Approved by: https://github.com/cpuhrsch
2023-11-06 06:01:29 +00:00
CaoE
26b5e27ace Add Half support for cummax, cummin, cumprod, logcumsumexp, and prod on CPU (#112132)
Add Half support for cummax, cummin, cumprod, logcumsumexp, and prod on CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112132
Approved by: https://github.com/cpuhrsch
2023-11-05 12:31:38 +00:00
Li-Huai (Allan) Lin
30237aaeec [MPS] Fix bug when value is of complex (#111937)
When the value passed to `fill` is complex, the line `value.toDouble() == 0.0` errors out, saying that converting a complex value to double will cause overflow. So we should first handle the complex value and only then enter this condition.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111937
Approved by: https://github.com/malfet
ghstack dependencies: #111885
2023-10-31 17:50:56 +00:00
CaoE
a310cc8968 Add Half support for kthvalue, cross, hist, and logit on CPU (#112135)
Add Half support for kthvalue, cross, hist, and logit on CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112135
Approved by: https://github.com/cpuhrsch
2023-10-31 09:12:47 +00:00
Peter Bell
bbd5b935e4 Use pytree.tree_leaves everywhere (#112324)
This changes all the instances I could find of `tree_flatten(...)[0]` or
`x, _ = tree_flatten(...)` to use `tree_leaves`.
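
A minimal sketch of the equivalence this change relies on (note `torch.utils._pytree` is a private module):

```python
from torch.utils import _pytree as pytree

tree = {"a": [1, 2], "b": (3, {"c": 4})}

# Old pattern: flatten and discard the spec.
leaves_old = pytree.tree_flatten(tree)[0]
# New pattern: ask for the leaves directly.
leaves_new = pytree.tree_leaves(tree)

assert leaves_old == leaves_new  # [1, 2, 3, 4]
```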

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112324
Approved by: https://github.com/lezcano
ghstack dependencies: #112327, #112323
2023-10-30 03:39:04 +00:00
Cao E
1c89ea7f72 Add Half support for softmax and log_softmax on CPU (#103315)
Add Half support for softmax and log_softmax on CPU.
Note: This introduces a correctness issue with MPS https://github.com/pytorch/pytorch/issues/111416 and https://github.com/pytorch/pytorch/issues/111479.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103315
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki, https://github.com/malfet
2023-10-26 08:38:54 +00:00
Peter Bell
46e80ce58a [ATen] Support multi dim any and all reductions (#110310)
This adds a new overload to `all` and `any` with support for multiple reduction dims.
```
all.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
any.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
```
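A quick sketch of what the new overload enables, reducing over several dims in one call:

```python
import torch

x = torch.rand(2, 3, 4) > 0.5

# Previously `dim` had to be a single int; the new overload accepts a tuple.
print(torch.all(x, dim=(0, 2)).shape)                # torch.Size([3])
print(torch.any(x, dim=(0, 2), keepdim=True).shape)  # torch.Size([1, 3, 1])
```
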
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110310
Approved by: https://github.com/lezcano, https://github.com/albanD, https://github.com/justinchuby
2023-10-24 21:33:53 +00:00
Li-Huai (Allan) Lin
4b804dac33 [MPS] Add complex support for fill (#111885)
Fixes #110537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111885
Approved by: https://github.com/malfet
2023-10-24 06:41:10 +00:00
CaoE
4b324a8717 Add Half support for aminmax on CPU (#106853)
Add Half support for aminmax on CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106853
Approved by: https://github.com/cpuhrsch
2023-10-23 17:43:47 +00:00
CaoE
d1afb7d43d add Half support for multinomial on CPU (#104178)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104178
Approved by: https://github.com/jgong5, https://github.com/kulinseth, https://github.com/cpuhrsch
2023-10-20 19:16:04 +00:00
CaoE
2a40b7efcb Add Half support for addcmul, addcdiv, cumsum, and topk on CPU (#103319)
Add Half support for addcmul, addcdiv, cumsum, and topk on CPU.
Note: This PR will introduce the issue  https://github.com/pytorch/pytorch/issues/111454.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103319
Approved by: https://github.com/jgong5, https://github.com/cpuhrsch
2023-10-19 17:47:45 +00:00
CaoE
8713a1a363 add Half support for bernoulli on CPU (#104176)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104176
Approved by: https://github.com/mingfeima, https://github.com/cpuhrsch
2023-10-13 01:18:55 +00:00
Kurt Mohler
5292a92e03 Add torch.unravel_index (#110580)
Fixes #35674
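
A small usage sketch of the new op, which converts flat indices into per-dimension coordinates:

```python
import torch

shape = (4, 5)
flat = torch.tensor([0, 7, 19])

rows, cols = torch.unravel_index(flat, shape)
print(rows)  # tensor([0, 1, 3])
print(cols)  # tensor([0, 2, 4])
```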

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110580
Approved by: https://github.com/lezcano, https://github.com/kulinseth
2023-10-12 00:55:51 +00:00
igm503
95ff51d8ed [MPS] Add support for Softshrink to MPS Backend (#110814)
Adds the softshrink activation function to the mps backend.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110814
Approved by: https://github.com/kulinseth
2023-10-11 07:55:39 +00:00
igm503
4b881b0da3 [MPS] add support for sgn to MPS backend (#110829)
Fixes #86805

Adds support for sgn to MPS backend.

Notes:

1. @malfet self-assigned this when he was working on implementing polar, but from what I can tell, he didn't end up needing to implement it.

2. @Berzeg implemented this last year, before view_as_complex was supported. Because of @malfet's recent contributions, however, @Berzeg's implementation works. I've removed the part of his implementation that dealt with non-complex dtypes (since these can just be passed to at::sign), matched the more recent pattern we've been using in UnaryOps.mm, and thrown in a simple implementation of _efficientzerotensor for MPS so that the backward function works.

3. @Berzeg deserves a good bit of credit for this, so let me know if there's a way to assign him some without jamming up the PR (he seems to have been AWOL since last working on this).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110829
Approved by: https://github.com/malfet
2023-10-09 16:53:25 +00:00
vfdev-5
d2a2a67fa4 Added new test sample to interpolate op in OpInfo (#104181)
Description:
- Added new test sample to interpolate op in OpInfo
- Fixed silent issue with zero tensor test sample for uint8 dtype

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104181
Approved by: https://github.com/pmeier, https://github.com/lezcano
2023-10-09 10:55:56 +00:00
igm503
a389181f2e [MPS] add support for aten::nextafter (#109685)
Fixes https://github.com/pytorch/pytorch/issues/77764#issuecomment-1722515591

Adds support for aten::nextafter to the MPS backend. Supports float and half types.

Notes:
- I've added nextafter to the output_grad_check XFAILLIST since neither this nor the CPU implementation has a grad function
- Metal Shading Language 3.1 seems to have a native nextafter() function, so once that's available, this kernel can just call that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109685
Approved by: https://github.com/kulinseth
2023-10-03 19:20:22 +00:00
PyTorch MergeBot
df3ab70dde Revert "Added new test sample to interpolate op in OpInfo (#104181)"
This reverts commit 87f8bc65f8.

Reverted https://github.com/pytorch/pytorch/pull/104181 on behalf of https://github.com/peterbell10 due to Causing OOM in slow-gradcheck ([comment](https://github.com/pytorch/pytorch/pull/104181#issuecomment-1745472323))
2023-10-03 18:07:02 +00:00
vfdev-5
87f8bc65f8 Added new test sample to interpolate op in OpInfo (#104181)
Description:
- Added new test sample to interpolate op in OpInfo
- Fixed silent issue with zero tensor test sample for uint8 dtype

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104181
Approved by: https://github.com/pmeier, https://github.com/lezcano
2023-10-02 15:35:48 +00:00
CaoE
9399e0b1ff add fp16 support for gemm (#99498)
### Testing

Native matmul vs. mkldnn matmul  on SPR (with avx512_fp16 support)

single core:

Input | Naïve impl   / ms | oneDNN /   ms | Speed up
-- | -- | -- | --
M: 128, N: 128, K: 128, trans_a: False, trans_b: False | 2010.387 | 64.700 | 31.072
M: 128, N: 256, K: 128, trans_a: False, trans_b: False | 4027.116 | 107.780 | 37.364
M: 8192, N: 768, K: 768, trans_a: False, trans_b: False | 28685868.488 | 90663.008 | 316.401

56 cores:
Input | Naïve impl   / ms | oneDNN /   ms | Speed up
-- | -- | -- | --
M: 128, N: 128, K: 128, trans_a: False, trans_b: False | 5.091 | 0.24 | 211.30
M: 128, N: 128, K: 128, trans_a: False, trans_b: True | 5.224 | 0.23 | 220.09
M: 128, N: 256, K: 128, trans_a: False, trans_b: False | 10.006 | 0.30 | 330.31
M: 8192, N: 768, K: 768, trans_a: False, trans_b: False | 29435.372 | 1.770 | 1662.80
M: 8192, N: 768, K: 768, trans_a: False, trans_b: True | 31464.961 | 1.728 |  18204.76
M: 8192, N: 768, K: 3072, trans_a: False, trans_b: False | 115035.849  | 7.990 | 14396.90
M: 8192, N: 768, K: 3072, trans_a: False, trans_b: True | 122981.023 |  7.725 | 15918.34
Batch: 768, M: 128, N: 64, K: 128  | 2032.523 | 0.705 | 2882.23

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99498
Approved by: https://github.com/jgong5, https://github.com/malfet
2023-09-28 01:03:50 +00:00
Li-Huai (Allan) Lin
ac1e85161e [MPS] Fix nll_loss with default ignore_index (#109574)
`-100` should be a valid `ignore_index` as indicated in the linked issue. This PR also cleans up some unnecessary MPSTensor copies.
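
A minimal sketch of the behavior being fixed: targets equal to the default `ignore_index` of -100 must be skipped by the loss:

```python
import torch
import torch.nn.functional as F

log_probs = torch.log_softmax(torch.randn(3, 5), dim=1)
target = torch.tensor([1, -100, 4])  # the -100 entry must be ignored

# -100 is also the default ignore_index, so passing it explicitly must work too.
print(F.nll_loss(log_probs, target, ignore_index=-100))
```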

Fixes #108148
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109574
Approved by: https://github.com/kulinseth
ghstack dependencies: #109557
2023-09-26 04:13:09 +00:00
Li-Huai (Allan) Lin
0087118997 [MPS] Fix mps to cpu copy with storage offset (#109557)
Fix #108978

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109557
Approved by: https://github.com/DenisVieriu97
2023-09-26 04:13:08 +00:00
CaoE
7c9052165a add fp16 support for native conv and deconv on CPU (#99497)
### Testing

Native conv vs. mkldnn conv on SPR (with avx512_fp16 support)

Single core:

Input | Naïve impl   / us | oneDNN /   us | Speed up
-- | -- | -- | --
IC:   64, OC: 256, kernel: 1, stride: 1, N: 256, H: 56, W: 56, G: 1, pad: 0 | 34676789 | 524199.8 | 66.15185
IC:   128, OC: 512, kernel: 1, stride: 1, N: 256, H: 28, W: 28, G: 1, pad: 0 | 33454125 | 349844.4 | 95.62573
IC: 256, OC: 256, kernel: 3, stride: 1,   N: 1, H: 16, W: 16, G: 1, pad: 0 | 317650.1 | 2317.677 | 137.0554
IC: 128, OC: 256, kernel: 3, stride: 1,   N: 1, L: 64 | 15334.68 | 167.264 | 91.67952

56 cores:
Input | Naïve impl   / us | oneDNN /   us | Speed up
-- | -- | -- | --
IC:   64, OC: 256, kernel: 1, stride: 1, N: 256, H: 56, W: 56, G: 1, pad: 0 | 1032064 | 11073.58 | 93.20061
IC:   128, OC: 512, kernel: 1, stride: 1, N: 256, H: 28, W: 28, G: 1, pad: 0 | 1000097 | 16371.19 | 61.08883
IC:   256, OC: 1024, kernel: 1, stride: 1, N: 256, H: 14, W: 14, G: 1, pad: 0 | 981813.4 | 9008.908 | 108.9825
IC: 1024, OC: 256, kernel: 1, stride: 1,   N: 256, H: 14, W: 14, G: 1, pad: 0 | 1082606 | 10150.47 | 106.6558
IC: 256, OC: 256, kernel: 3, stride: 1,   N: 1, H: 16, W: 16, G: 1, pad: 0 | 319980.6 | 181.598 | 1762.027

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99497
Approved by: https://github.com/jgong5, https://github.com/cpuhrsch
2023-09-25 01:31:26 +00:00
igm503
255d1a776a [MPS] Add support for Mish to MPS backend (#109786)
Fixes https://github.com/pytorch/pytorch/issues/77764#issuecomment-1712894444

Adds the mish activation function to the mps backend.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109786
Approved by: https://github.com/kulinseth
2023-09-21 21:01:20 +00:00
igm503
0317626df5 [MPS] adding weight_norm_interface support for mps (#108008)
Fixes #104513

Adds support for aten::_weight_norm_interface to the mps backend.

Also adds a consistency test for the output and the grad.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108008
Approved by: https://github.com/kulinseth
2023-09-20 02:18:28 +00:00
CaoE
54c28c564f add Half support for BatchNorm on CPU (#102070)
Fixes #106543

### Testing

Single core:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.7116 | 0.1427 | 0.1744 | 0.2638 | 0.2002 | 0.2556
(1, 32, 100, 100) | 0.8579 | 0.1725 | 0.2077 | 0.3023 | 0.2399 | 0.2995
(32, 16, 200, 200) | 57.3466 | 12.2179 | 13.1320 | 45.9524 | 24.1526 | 24.9882

28 cores:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.2571 | 0.0713 | 0.0846 | 0.1140 | 0.0883 |  0.1043
(1, 32, 100, 100) | 0.1077 | 0.0510 | 0.0548 | 0.0700 | 0.0645 | 0.0713
(32, 16, 200, 200) | 5.5060 | 1.4195 | 1.4663 | 6.773 | 3.0886 | 3.1343

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102070
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki, https://github.com/mingfeima
2023-09-19 10:43:33 +00:00
PyTorch MergeBot
be9f73f031 Revert "Add meta and OpInfo for _embedding_bag_dense_backward (#109211)"
This reverts commit fe14e43d14.

Reverted https://github.com/pytorch/pytorch/pull/109211 on behalf of https://github.com/clee2000 due to Sorry I think the test_ops.py::TestCommonCUDA::test_compare_cpu__embedding_bag_dense_backward_cuda_float32 is failing 492a93d185 https://github.com/pytorch/pytorch/actions/runs/6190707847/job/16808644559 not sure why this is run in slow when it looks to be a new test ([comment](https://github.com/pytorch/pytorch/pull/109211#issuecomment-1720235918))
2023-09-14 22:29:12 +00:00
Edward Z. Yang
fe14e43d14 Add meta and OpInfo for _embedding_bag_dense_backward (#109211)
The sample inputs are a bit involved because there are a lot of
shenanigans in the derivative formula. Check the comments.

This is exercised in vdd, internal test `buck2 run '@fbcode//mode/opt' fbcode//pytorch/benchmark/fb/test_gpu:run_test_gpu -- 'pytorch.benchmark.fb.test_gpu.test_gpu.TestBenchmarkFbGpu.test_train_blue_reels_vdd_v3_inductor_speedup'`

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109211
Approved by: https://github.com/albanD, https://github.com/zou3519
2023-09-14 18:49:32 +00:00
PyTorch MergeBot
b226373d16 Revert "add Half support for BatchNorm on CPU (#102070)"
This reverts commit b6a1d3fb97.

Reverted https://github.com/pytorch/pytorch/pull/102070 on behalf of https://github.com/clee2000 due to I'm very sorry but it looks like #106543 was not fixed, I still see it failing on main b6a1d3fb97 https://github.com/pytorch/pytorch/actions/runs/6185704949/job/16793975677 ([comment](https://github.com/pytorch/pytorch/pull/102070#issuecomment-1719747065))
2023-09-14 16:13:34 +00:00
CaoE
b6a1d3fb97 add Half support for BatchNorm on CPU (#102070)
Fixes #106543

### Testing

Single core:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.7116 | 0.1427 | 0.1744 | 0.2638 | 0.2002 | 0.2556
(1, 32, 100, 100) | 0.8579 | 0.1725 | 0.2077 | 0.3023 | 0.2399 | 0.2995
(32, 16, 200, 200) | 57.3466 | 12.2179 | 13.1320 | 45.9524 | 24.1526 | 24.9882

28 cores:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.2571 | 0.0713 | 0.0846 | 0.1140 | 0.0883 |  0.1043
(1, 32, 100, 100) | 0.1077 | 0.0510 | 0.0548 | 0.0700 | 0.0645 | 0.0713
(32, 16, 200, 200) | 5.5060 | 1.4195 | 1.4663 | 6.773 | 3.0886 | 3.1343

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102070
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki
2023-09-14 12:23:59 +00:00
PyTorch MergeBot
04a765f95d Revert "add Half support for BatchNorm on CPU (#102070)"
This reverts commit 6065e7a97c.

Reverted https://github.com/pytorch/pytorch/pull/102070 on behalf of https://github.com/clee2000 due to sorry it looks like this is causing an unexpected success for `test_jit_fuser_te.py::TestNNCOpInfoCPU::test_nnc_correctness_nn_functional_batch_norm_cpu_float16` 6065e7a97c https://github.com/pytorch/pytorch/actions/runs/6178069462/job/16770849782 ([comment](https://github.com/pytorch/pytorch/pull/102070#issuecomment-1718402208))
2023-09-13 22:38:42 +00:00
Nikita Shulga
916183a012 [MPS] Fix crash if nonzero is called concurrently (#108996)
Surrounds the `stream->synchronize()` call with `dispatch_sync(stream->queue(), ^{});`, which is a no-op for a single-threaded program but serializes synchronize calls across threads using the same stream.

Prevents the `[IOGPUMetalCommandBuffer validate]:215: failed assertion 'commit an already committed command buffer'` non-recoverable exception, which is triggered every time one uses PyCharm to inspect tensors on an MPS device.

Fixes https://github.com/pytorch/pytorch/issues/100285
<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at 1662ce2</samp>

> _Sing, O Muse, of the swift and skillful coders_
> _Who fixed the dreadful deadlock of the stream_
> _That crashed the mighty tensors of the MPS_
> _When they sought out the nonzero elements._

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108996
Approved by: https://github.com/kulinseth
2023-09-13 19:28:47 +00:00
CaoE
6065e7a97c add Half support for BatchNorm on CPU (#102070)
Fixes #106543

### Testing

Single core:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.7116 | 0.1427 | 0.1744 | 0.2638 | 0.2002 | 0.2556
(1, 32, 100, 100) | 0.8579 | 0.1725 | 0.2077 | 0.3023 | 0.2399 | 0.2995
(32, 16, 200, 200) | 57.3466 | 12.2179 | 13.1320 | 45.9524 | 24.1526 | 24.9882

28 cores:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
(1, 4, 256, 256) | 0.2571 | 0.0713 | 0.0846 | 0.1140 | 0.0883 |  0.1043
(1, 32, 100, 100) | 0.1077 | 0.0510 | 0.0548 | 0.0700 | 0.0645 | 0.0713
(32, 16, 200, 200) | 5.5060 | 1.4195 | 1.4663 | 6.773 | 3.0886 | 3.1343

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102070
Approved by: https://github.com/jgong5, https://github.com/mikaylagawarecki
2023-09-13 17:30:16 +00:00
igm503
1b9b3a2d15 [MPS] Adding lgamma, digamma, and polygamma implementations (#106292)
Fixes issue mentioned in #77764

e.g. https://github.com/pytorch/pytorch/issues/77764#issuecomment-1654111744

Adds MPS support for the following ops:

- lgamma
- mvlgamma
- digamma
- polygamma

The lgamma function did not previously have an MPS backend implementation. I've added one using a custom Metal kernel (following John D. Cook's C++ implementation of the log gamma function: https://www.johndcook.com/blog/cpp_gamma/). For the backward pass op, I've added a digamma kernel that follows the CPU+CUDA digamma implementation, and for the backward pass of the digamma op, I've added a polygamma + trigamma kernel following, again, the CPU+CUDA implementations.

NOTE:

The CPU implementation of the polygamma function incorrectly (as far as I can tell) outputs a finite number for order = 1 and x in the negative integers. The MPS implementation correctly outputs infinity. (See https://github.com/pytorch/pytorch/issues/106692.)

The polygamma tests currently don't pass because of the error in the CPU+CUDA kernels, but also because there are smallish discrepancies near the negative integers between the CPU+CUDA and the MPS polygamma and trigamma kernels. I'm not sure exactly why this is, but let me know if the discrepancies are too big.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106292
Approved by: https://github.com/kulinseth
2023-09-12 16:43:37 +00:00
Li-Huai (Allan) Lin
293d3b89d8 Add Opinfos for the Tensor overload of linspace/logspace (#107958)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107958
Approved by: https://github.com/zou3519
2023-09-11 22:30:19 +00:00
Nikita Shulga
9b12a28d89 [MPS] Implement mul operation for complex types (#108395)
Using existing BinaryKernel template

Add `mul`, as well as `kron` and `outer`, to the list of MPS ops that support complex types

This should add all the missing ops mentioned in https://github.com/pytorch/pytorch/issues/105665
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108395
Approved by: https://github.com/albanD
ghstack dependencies: #108393, #108394
2023-09-10 05:39:12 +00:00
Nikita Shulga
c7bb842d35 [MPS] Add complex add/sub (#108394)
Using `view_as_real` and running elementwise ops on the resulting tensors.
Add `add` and `sub` to the list of complex ops that should work on MPS.
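
A CPU sketch of the `view_as_real` trick in Python terms (the actual change lives in the MPS backend, but the math is the same):

```python
import torch

a = torch.tensor([1 + 2j, 3 - 1j])
b = torch.tensor([0.5 + 0.5j, 1 + 1j])

# Complex add is just an elementwise add on the (real, imag) float pairs.
ra, rb = torch.view_as_real(a), torch.view_as_real(b)
print(torch.view_as_complex(ra + rb))  # tensor([1.5000+2.5000j, 4.0000+0.0000j])
print(a + b)                           # same result
```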
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108394
Approved by: https://github.com/albanD
ghstack dependencies: #108393
2023-09-10 05:39:12 +00:00
Nikita Shulga
53a4ca4b58 [MPS][BE] Add dispatch_sync_with_rethrow (#108393)
And enable testing for match_output for complex types.
Most of them should throw an "unsupported XYZ" error, rather than crash.
This fixed several crashes when linalg ops were invoked with complex inputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108393
Approved by: https://github.com/kit1980, https://github.com/kulinseth
2023-09-10 02:07:12 +00:00
alexdremov
b60273b88a [MPS] Pixel shuffle unshuffle support (#99306)
Fixes #83196

Now the MPS implementation is blazingly fast (a small round-trip sketch follows the quoted doc below).

I have several questions on improving this PR, though:

1. I copied code from `test_nn.py`. Is there a better way to test this?
2. I decided to use `usePixelShuffleOrder:YES`. Am I right performance-wise? According to the docs:
```
`usePixelShuffleOrder` can be
used to control how the data within spatial blocks is ordered in the
`depthAxis` dimension: with `usePixelShuffleOrder=YES` the values within the
spatial blocks are stored contiguously within the `depthAxis` dimension whereas
otherwise they are stored interleaved with existing values in the `depthAxis` dimension.
```
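
For reference, a round-trip sketch of the ops this PR implements on MPS (run on CPU here so it works anywhere):

```python
import torch

x = torch.randn(1, 8, 2, 2)
y = torch.pixel_shuffle(x, upscale_factor=2)  # (1, 8, 2, 2) -> (1, 2, 4, 4)
assert torch.equal(torch.pixel_unshuffle(y, downscale_factor=2), x)
```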

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99306
Approved by: https://github.com/kulinseth, https://github.com/malfet
2023-09-06 09:11:39 +00:00
CaoE
42f94d7e9f add Half support for maxpool on CPU (#98819)
### Testing
Single socket (28 cores):

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3,   stride: 1, mem_format: contig | 4.12895 | 6.9669 | 5.30297 | 0.55775 | 1.98917 | 0.72233
size: (1, 56, 264, 264), kernel: 3,   stride: 1, mem_format: CL | 0.85093 | 1.88813 | 1.38063 | 5.5742 | 36.5086 | 10.58552
size: (32, 16, 200, 200), kernel: 3,   stride: 1, mem_format: contig | 22.37212 | 37.90383 | 30.94482 | 6.85868 | 10.6116 | 3.9993
size: (32, 16, 200, 200), kernel: 3,   stride: 1, mem_format: CL | 5.41658 | 4.71098 | 4.66578 | 6.69875 | 14.7171 | 5.1167
size: (32, 32, 100, 100), kernel: 3,   stride: 1, mem_format: contig | 10.69831 | 18.0468 | 13.71657 | 2.61192 | 4.96172 | 1.68635
size: (32, 32, 100, 100), kernel: 3,   stride: 1, mem_format: CL | 2.52637 | 2.0096 | 2.0055 | 2.60314 | 7.2093 | 2.49843
size: (4, 19, 10, 16, 16), kernel: 3,   stride: 1, mem_format: contig | 0.47605 | 0.88398 | 0.65326 | 0.06525 | 0.115489 | 0.0674
size: (4, 19, 10, 16, 16), kernel: 3,   stride: 1, mem_format: CL3d | 0.10902 | 0.25293 | 0.157475 | 0.11386 | 0.53319 | 0.17836

Single core:

shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3,   stride: 1, mem_format: contig | 90.9809 | 163.473 | 126.1276 | 6.57721 | 41.40833 | 11.82505
size: (1, 56, 264, 264), kernel: 3,   stride: 1, mem_format: CL | 9.88405 | 38.39137 | 29.62069 | 7.10636 | 36.97535 | 11.0525
size: (32, 16, 200, 200), kernel: 3,   stride: 1, mem_format: contig | 476.782 | 855.4769 | 648.2248 | 46.6488 | 219.2586 | 67.10599
size: (32, 16, 200, 200), kernel: 3,   stride: 1, mem_format: CL | 80.29271 | 91.33854 | 87.80345 | 48.81692 | 203.9974 | 63.39004
size: (32, 32, 100, 100), kernel: 3,   stride: 1, mem_format: contig | 235.2113 | 419.0799 | 315.4284 | 20.6049 | 107.1524 | 32.39169
size: (32, 32, 100, 100), kernel: 3,   stride: 1, mem_format: CL | 29.47653 | 33.54905 | 32.82823 | 22.59674 | 98.5586 | 30.05763
size: (4, 19, 10, 16, 16), kernel: 3,   stride: 1, mem_format: contig | 7.90684 | 13.9208 | 10.03272 | 0.23725 | 1.35269 | 0.41728
size: (4, 19, 10, 16, 16), kernel: 3,   stride: 1, mem_format: CL3d | 2.33638 | 3.36894 | 2.64635 | 0.26535 | 1.244 | 0.38895

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98819
Approved by: https://github.com/mingfeima, https://github.com/mikaylagawarecki
2023-09-05 18:23:41 +00:00
Nikita Shulga
bae409388c [MPS] Fix .item() for multi-dim scalar (#107913)
By refactoring `_local_scalar_dense_mps` to use `_empty_like` to allocate the CPU tensor.
Also, print a more reasonable error message when the dst dim is less than the src in `mps_copy_`.

This fixes regression introduced by https://github.com/pytorch/pytorch/pull/105617 and adds regression test.

<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at abd06e6</samp>

> _Sing, O Muse, of the valiant deeds of the PyTorch developers_
> _Who strive to improve the performance and usability of tensors_
> _And who, with skill and wisdom, fixed a bug in the MPS backend_
> _That caused confusion and dismay to many a user of `item()`_

Fixes https://github.com/pytorch/pytorch/issues/107867

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107913
Approved by: https://github.com/albanD
2023-08-31 21:08:29 +00:00
vfdev
b7624fc91e Cleaned up test_mps.py::test_output*_match (#108092)
Description:
- cleaned up test_mps.py::test_output_match and test_mps.py::test_output_grad_match tests
  - removed unused variables and useless brackets
  - simplified atol/rtol setup if/else code

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108092
Approved by: https://github.com/kulinseth
2023-08-29 10:46:02 +00:00
Nikita Shulga
6e85a68829 [MPS] Implement polar via metal shader (#107324)
Use `view_as_real` to cast complex into a pair of floats and then it becomes just another binary operator.
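
For context, a minimal CPU sketch of `polar`'s semantics (it builds a complex tensor from magnitude and phase):

```python
import math
import torch

magnitude = torch.tensor([1.0, 2.0])
angle = torch.tensor([0.0, math.pi / 2])

# polar(abs, angle) = abs * (cos(angle) + i*sin(angle))
z = torch.polar(magnitude, angle)
print(z)  # ~tensor([1.+0.j, 0.+2.j]) up to float rounding
```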

Enable `polar` and `view_as_complex` consistency tests, but skip `test_output_grad_match_polar_cpu`, as the `mul` operator is not yet supported.

Remove the redundant `#ifdef __OBJC__` and capture and re-throw exceptions raised during the `createCacheBlock` block.
Fixes https://github.com/pytorch/pytorch/issues/78503

TODOs(in followup PRs):
  - Implement backwards (requires complex mul and sgn)
  - Measure the perf impact of computing the strides on the fly rather than ahead of time (unrelated to this PR)

Partially addresses https://github.com/pytorch/pytorch/issues/105665
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107324
Approved by: https://github.com/albanD
2023-08-25 03:16:23 +00:00
Aaron Gokaslan
660e8060ad [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I'm enabling the rule so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
PyTorch MergeBot
d59a6864fb Revert "[BE]: Update ruff to 0.285 (#107519)"
This reverts commit 88ab3e4322.

Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please hep them get unblocked? It seems like one of the strings was prob accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480))
2023-08-22 19:53:32 +00:00
Aaron Gokaslan
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I'm enabling the rule so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
arunppsg
4bfc55ba8b [MPS] Enable forward test for renorm (#106666)
Enabled forward test for renorm

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106666
Approved by: https://github.com/kulinseth, https://github.com/albanD
2023-08-17 16:46:06 +00:00
Jason Lu
bc88028e8e Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
Summary:
Original commit changeset: 81319beb97f3

Original Phabricator Diff: D47961182

Test Plan: revert to maintain backward compat with legacy ads_dper3 production package. Read details in: S357822

Reviewed By: atuljangra

Differential Revision: D48131623

@diff-train-skip-merge
(D48131623 landed internally)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106743
Approved by: https://github.com/malfet
2023-08-08 15:27:34 +00:00
Ramin Azarmehr
cdfd0ea162 [MPS] Introduce torch.mps.Event() APIs (#102121)
- Implement `MPSEventPool` to recycle events.
- Implement Python bindings with the `torch.mps.Event` class using the MPSEventPool backend. The current member functions of the Event class are `record()`, `wait()`, `synchronize()`, `query()`, and `elapsed_time()` (a usage sketch follows below).
- Add an API to measure elapsed time between two event recordings.
- Added documentation for the Event class to `mps.rst`.
- Added a test case to `test_mps.py`.
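
A hedged usage sketch of the APIs listed above, assuming CUDA-event-style `enable_timing` semantics and millisecond units for `elapsed_time()`:

```python
import torch

if torch.backends.mps.is_available():
    start = torch.mps.Event(enable_timing=True)  # assumption: timing must be enabled
    end = torch.mps.Event(enable_timing=True)

    start.record()
    a = torch.randn(1024, 1024, device="mps")
    b = a @ a
    end.record()
    end.synchronize()  # wait until the second recording completes

    print(start.elapsed_time(end))  # elapsed time between the two recordings
```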

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102121
Approved by: https://github.com/albanD, https://github.com/kulinseth
2023-08-08 03:45:45 +00:00
Li-Huai (Allan) Lin
d4d086ce7b [MPS] Fix Clamp with strided outputs/inputs (#97858)
Fixes #94396
Fixes #87348

1. If output is strided, we don't gather input tensors.
2. If output is not strided but min_t or max_t is strided, we make min_t or max_t contiguous.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97858
Approved by: https://github.com/kulinseth
2023-08-04 09:32:12 +00:00
Peter Stefek
c9c2b14c53 Fix copy_ broadcast behavior on mps (#105617)
Fixes #105277

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105617
Approved by: https://github.com/malfet
2023-08-03 04:03:32 +00:00
PyTorch MergeBot
d83b887f2a Revert "Add error checking for padding modules (#106147)"
This reverts commit 0547b6279d.

Reverted https://github.com/pytorch/pytorch/pull/106147 on behalf of https://github.com/jeanschmidt due to sadly it is breaking internal builds, and I can't coordinate a FF due to timezone differences ([comment](https://github.com/pytorch/pytorch/pull/106147#issuecomment-1661870970))
2023-08-02 09:37:40 +00:00
Denis Vieriu
d1a2aa1909 [MPS] Fix MPS clamp issue with different dtypes between input and min/max tensors (#105747)
- Fix the FP16 clamp issue (FP32 and FP16 are not broadcast compatible)
- Fix clamp (cached graph nodes were previously replaced with the cast version)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105747
Approved by: https://github.com/kulinseth
2023-08-02 02:51:34 +00:00
Peter Stefek
97e5055a69 Add cumprod support for device mps (#104688)
Related to #77764

Add support for the cumprod operation (which in turn allows its gradient). This also allows us to compute the gradient of prod since it was blocked behind cumprod in the case where exactly one element of the tensor was 0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104688
Approved by: https://github.com/kulinseth
2023-08-01 21:51:20 +00:00
Mikayla Gawarecki
0547b6279d Add error checking for padding modules (#106147)
Fixes https://github.com/pytorch/pytorch/issues/105627

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106147
Approved by: https://github.com/albanD
ghstack dependencies: #106325
2023-08-01 12:49:58 +00:00
Mikayla Gawarecki
d8e5f2aa6d Reland "Make adding buffers more like adding parameters (#104069)" (#106224)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106224
Approved by: https://github.com/atalman, https://github.com/albanD
2023-07-31 17:18:56 +00:00
cyy
b8eb827d93 use UBSAN on some tests (#103655)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103655
Approved by: https://github.com/kshitij12345, https://github.com/zou3519
2023-07-24 14:24:49 +00:00
Peter Pham
bba06ad751 [MPS] aten::erfinv metal kernel ops (#101507)
I've added an implementation of erfinv using the algorithm from 4154c8ea15/aten/src/ATen/native/Math.h (L152) so that the MPS-based algorithm matches the CPU automatic test. This PR uses the new Metal API calls from https://github.com/pytorch/pytorch/pull/100661

Testing shows MPS has a decent speedup (~270x) compared to CPU on a tensor of 200 million elements.
```
import torch
x = torch.arange(-1, 1, 1e-8) # default cpu tensor
#measure CPU compute time by calling torch.erfinv
time = %timeit -o -q -r 5 torch.erfinv(x)
cpu_time = time.average
print("CPU torch.erfinv time: ", cpu_time)
x = x.to("mps")
# measure MPS compute time
time = %timeit -o -q -r 5 torch.erfinv(x)
mps_time = time.average
print("MPS torch.erfinv time: ", mps_time)
print(f"MPS torch.erfinv is {cpu_time/mps_time*100} percent faster than CPU torch.erfinv")

# compute MSE between MPS and CPU torch.erfinv
x = x.to("cpu")
y_cpu = torch.erfinv(x)
x = x.to("mps")
y_mps = torch.erfinv(x)
y_mps = y_mps.to("cpu")
mask = torch.isfinite(y_cpu) & torch.isfinite(y_mps.to("cpu"))
y_mps = y_mps[mask]
y_cpu = y_cpu[mask]
x = x[mask]
print(f"length of y_mps: {len(y_mps)}, length of y_cpu: {len(y_cpu)}, length of x: {len(x)}")
mse = torch.square(y_cpu - y_mps).mean()
print("MSE between MPS and CPU torch.erfinv: ", mse)
diff = torch.abs(y_cpu - y_mps)
print("Largest difference")
print(f"x:  {x[torch.argmax(diff)]}, y_cpu: {y_cpu[torch.argmax(diff)]}, y_mps: {y_mps[torch.argmax(diff)]} , diff = {y_cpu[torch.argmax(diff)] - y_mps[torch.argmax(diff)]}")
```
CPU torch.erfinv time:  2.654937833400254
MPS torch.erfinv time:  0.009831255332002912
MPS torch.erfinv is 27005.07456822776 percent faster than CPU torch.erfinv
length of y_mps: 199999992, length of y_cpu: 199999992, length of x: 199999992
MSE between MPS and CPU torch.erfinv:  tensor(4.2339e-14)
Largest difference
x:  -0.9999980330467224, y_cpu: -3.363569736480713, y_mps: -3.3635685443878174 , diff = -1.1920928955078125e-06

Fixes https://github.com/pytorch/pytorch/issues/86808

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101507
Approved by: https://github.com/kulinseth
2023-07-23 01:36:43 +00:00
Jane Xu
803d42e457 add lerp cpu support for half (#105607)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105607
Approved by: https://github.com/albanD
2023-07-21 20:29:05 +00:00
Andrey Talman
c6653b65d8 Back out "Make adding buffers more like adding parameters (#104069)" (#105581)
Summary:
D47537831 is breaking pyper tests: https://fb.workplace.com/groups/802176577445480/posts/1018902842439518/

with `TypeError: register_buffer() takes 3 positional arguments but 4 were given`

Original commit changeset: d4b4069fbd38

Original Phabricator Diff: D47537831

Test Plan:
```
buck2 run //caffe2/torch/fb/training_toolkit/integration_tests/training_lifecycle/cogwheel_tests/pyper_release_v2:cogwheel_smallworld_inline_cvr_infer_pyper_pyper__canary_offline_training-launcher -- --run-harness-in-tupperware --build-fbpkg ads_dper3 --build-fbpkg training_platform
```

Reviewed By: atalman

Differential Revision: D47600140

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105581
Approved by: https://github.com/mikaylagawarecki
2023-07-20 03:39:53 +00:00
Justin Chu
73e1455327 [BE] Enable ruff's UP rules and autoformat test/ (#105434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
Peter Stefek
d2c24eca8a Fix mps unary op issue on non densely stored tensors (#105512)
This PR fixes a bug where non-densely-stored tensors were not converted to dense tensors of the correct scalar type in the MPS `unary_op` helper function.

Fixes https://github.com/pytorch/pytorch/issues/105284
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105512
Approved by: https://github.com/malfet
2023-07-19 03:56:38 +00:00
Nikita Shulga
8cd94e1eab [MPS] Add lerp implementation (#105470)
lerp.Scalar fits very well into the binary op template.
Add a very naive implementation for `lerp.Tensor` as `add_out(self, weights.mul(end.sub(self)))`.

Enable `lerp` testing in `test_mps`.
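
A small sketch showing that the naive decomposition above matches `lerp`'s definition:

```python
import torch

start = torch.tensor([0.0, 10.0])
end = torch.tensor([1.0, 20.0])
weight = torch.tensor([0.25, 0.5])

# lerp(start, end, weight) = start + weight * (end - start),
# i.e. the add_out(self, weights.mul(end.sub(self))) decomposition.
out = torch.lerp(start, end, weight)
print(out)  # tensor([ 0.2500, 15.0000])
assert torch.allclose(out, start + weight * (end - start))
```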

Fixes https://github.com/pytorch/pytorch/issues/105382

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105470
Approved by: https://github.com/albanD
2023-07-18 20:01:04 +00:00
ekamiti
32d422f335 Make adding buffers more like adding parameters (#104069)
Add similar semantics for creating a buffer object similar to creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same as the `register_buffer` method has not been changed. The `persistent` parameter in the `Buffer` type is to indicate whether a buffer object should be persistent or not. Other non-test changes have to do with getting the new `Buffer` type recognized by inductor and dynamo. Remaining changes are test changes to make sure that the `Buffer` type can be used as a drop in replacement for `register_buffer` as it just leads to `register_buffer` being called. The addition of this new functionality still allows for normal tensors to be used as buffers so these changes are intended to be backwards compatible.
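
A sketch of the intended usage, assuming the new class is exposed as `torch.nn.Buffer`:

```python
import torch
from torch import nn

class Window(nn.Module):
    def __init__(self, n: int):
        super().__init__()
        # New style: assign a Buffer the way you would a Parameter...
        self.win = nn.Buffer(torch.hann_window(n), persistent=False)
        # ...equivalent to the existing explicit registration:
        # self.register_buffer("win", torch.hann_window(n), persistent=False)

m = Window(8)
print("win" in dict(m.named_buffers()))  # True
```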

Fixes #35735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
2023-07-17 17:59:05 +00:00
David Radley
17250976f3 correct empty tensor mps all operation (#105218)
Fixes #104694

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105218
Approved by: https://github.com/ezyang, https://github.com/kulinseth
2023-07-14 17:42:54 +00:00
albanD
08cbfb2a58 Avoid tensor creation and use scalar overload (#104264)
I would expect this preserves the behavior but there might be weird edge cases?
@mruberry might know?

The aim is to fix https://github.com/pytorch/pytorch/pull/104254 (and make `1 ** t` capturable via cudagraph)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104264
Approved by: https://github.com/zou3519
2023-07-12 18:11:27 +00:00
Nikita Shulga
5e4ee15e85 [MPS] Fix unique flatten logic (#104938)
The tensor must be flattened if `dim` is None before checking whether or not the `dim` dimension is already None.

Fixes https://github.com/pytorch/pytorch/issues/104879

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104938
Approved by: https://github.com/albanD
2023-07-11 19:55:56 +00:00
soulitzer
91dcc3b272 Fix activation checkpoint for mps (#104787)
Fixes https://github.com/pytorch/pytorch/issues/104478

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104787
Approved by: https://github.com/albanD
2023-07-08 14:57:05 +00:00
Jerry Zhang
611febf6cf [quant] Support integer implementations for max_pool2d (#104225)
Summary:
This is needed for representing quantized models in the PT2 export quantization flow.

Test Plan:
Tested by OpInfo: python test/test_ops.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104225
Approved by: https://github.com/kimishpatel
2023-07-05 23:54:07 +00:00
Nikita Shulga
01e6d64dd2 [MPS] Fix unary ops over sparse-mapped tensors (#100765)
If the input tensor is backed by a sparse view, create a dense copy before running the unary op; otherwise the op will be applied against the wrong elements.
Introduce `is_dense_in_storage`, which returns true if the tensor/view is mapped to a dense area in the tensor storage.
Add a unit test to validate the fix.

Fixes https://github.com/pytorch/pytorch/issues/98074
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100765
Approved by: https://github.com/albanD
2023-07-05 23:17:43 +00:00
Denis Vieriu
28720ad585 Fix argmax and argmin clamp value on MPS (#104374)
Replace the `LLONG_MAX` clamp value with the largest integer value that can be stored in a double. `constantWithScalar` takes a `double` value as input, and `LLONG_MAX` does not fit in a double, resulting in failures on x86.

Fixes https://github.com/pytorch/pytorch/issues/98191, https://github.com/pytorch/pytorch/issues/92311

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104374
Approved by: https://github.com/razarmehr, https://github.com/kulinseth
2023-06-30 18:11:49 +00:00
cyy
54cb61f7d9 enable ASAN on some tests (#103647)
Enable more tests under ASAN; meanwhile we disable float-divide-by-zero and float-cast-overflow, both because they are also disabled by default in the latest Clang.
The following cited doc explains the reasons.
```
-fsanitize=float-cast-overflow: Conversion to, from, or between floating-point types
which would overflow the destination. Because the range of representable values
for all floating-point types supported by Clang is [-inf, +inf], the only cases detected are
conversions from floating point to integer types.
-fsanitize=float-divide-by-zero: Floating point division by zero.
This is undefined per the C and C++ standards,
 but is defined by Clang (and by ISO/IEC/IEEE 60559 / IEEE 754) as producing
either an infinity or NaN value,
so is not included in -fsanitize=undefined.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103647
Approved by: https://github.com/kit1980
2023-06-28 02:17:14 +00:00
magic-akari
e56cdfd74b [MPS] Handle deserialization more permissively (#98834)
MPS deserialization should handle `mps:0`.
It is generated by code like the following:

```python
torch.rand(size=(3, 4)).to("mps")
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98834
Approved by: https://github.com/kulinseth, https://github.com/kit1980, https://github.com/malfet
2023-06-15 15:51:03 +00:00
Pearu Peterson
45401ef745 Enable float16 and complex32 support for sparse CSR elementwise multiplication operation. (#100394)
As in the title. In addition, the PR adds float16 addcmul support for the CPU device.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100394
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-06-14 14:42:39 +00:00
Li-Huai (Allan) Lin
cce58a43c9 [MPS] Fix softplus with f16 input (#101948)
Fixes #101946
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101948
Approved by: https://github.com/malfet
2023-05-31 00:40:10 +00:00
ecao
3f4fee735a add Half support for logsigmoid, threshold, elu, gelu, hardtanh, hardsigmoid, hardswish, hardshrink, softshrink, leakyrelu, softplus, glu, silu, mish, and prelu on CPU (#98745)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98745
Approved by: https://github.com/jgong5, https://github.com/mingfeima, https://github.com/ngimel
2023-05-27 16:20:21 +00:00
Li-Huai (Allan) Lin
0db704d240 [OpInfo] Add multi_head_attention_forward (#100153)
<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at 8f8d620</samp>

This pull request improves the testing of the `nn.functional.multi_head_attention_forward` function by adding it to the `OpInfo` framework, adjusting the tolerance and skipping criteria for some test cases, and restricting the dtype for the `MetaProgrammingSystem` tests. These changes aim to address the randomness and numerical precision issues of the function.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100153
Approved by: https://github.com/drisspg
2023-05-26 01:58:17 +00:00
Denis Vieriu
de7ec2ddd7 [MPS] Allow saved models to be loaded directly to MPS through torch.jit.load (#102204)
<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at 94eed69</samp>

This pull request adds support for serializing and deserializing tensors on the `mps` device using JIT. It includes a test case in `test/test_mps.py` and a device handling logic in `torch/csrc/jit/serialization/unpickler.cpp`.

Fixes https://github.com/pytorch/pytorch/issues/88820, https://github.com/pytorch/pytorch/issues/87504
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102204
Approved by: https://github.com/kulinseth, https://github.com/malfet
2023-05-25 23:32:29 +00:00
Li-Huai (Allan) Lin
02a7318a5b [MPS] Add aminmax op (#101691)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101691
Approved by: https://github.com/malfet
2023-05-23 18:01:34 +00:00
Li-Huai (Allan) Lin
330c907301 [MPS] Fix embedding cache key (#101857)
Fixes #101198

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101857
Approved by: https://github.com/kulinseth
2023-05-21 06:11:25 +00:00
Aaron Gokaslan
3e2ea32dab [BE]: Enable ruff rule TRY302 and apply fixes (#101874)
Removes useless try statements and unreachable code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101874
Approved by: https://github.com/malfet
2023-05-19 17:30:52 +00:00
Khushi
1aaf0396eb [reland][opinfo] empty_strided (#101782)
Follows #100223

Previous PR: #100890

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101782
Approved by: https://github.com/ezyang
2023-05-19 03:06:29 +00:00
PyTorch MergeBot
dfac4364c4 Revert "[opinfo] empty_strided (#100890)"
This reverts commit 01c7106580.

Reverted https://github.com/pytorch/pytorch/pull/100890 on behalf of https://github.com/PaliC due to broke test_ops.py slow test ([comment](https://github.com/pytorch/pytorch/pull/100890#issuecomment-1551903975))
2023-05-17 19:00:15 +00:00
Li-Huai (Allan) Lin
bb3558961f [MPS] Add histogram ops (#96652)
Adds `torch.histc`, `torch.histogram`, `torch.histogramdd`
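
A quick usage sketch of the ops:

```python
import torch

x = torch.tensor([0.5, 1.5, 1.7, 3.2])

# histc: counts over `bins` equal-width bins spanning [min, max].
print(torch.histc(x, bins=4, min=0.0, max=4.0))  # tensor([1., 2., 0., 1.])

# histogram: also returns the bin edges.
hist, edges = torch.histogram(x, bins=4, range=(0.0, 4.0))
print(hist)   # tensor([1., 2., 0., 1.])
print(edges)  # tensor([0., 1., 2., 3., 4.])
```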

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96652
Approved by: https://github.com/kulinseth, https://github.com/malfet
2023-05-17 01:25:43 +00:00
Khushi
01c7106580 [opinfo] empty_strided (#100890)
Follows: #100223

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100890
Approved by: https://github.com/ezyang
2023-05-15 23:39:39 +00:00
Nikita Shulga
9e089db32e [MPS] Enable arange for int8 and uint8 dtypes (#101303)
Not sure why it was not enabled previously.
Sort types in `AT_DISPATCH_MPS_TYPES` by group (floats first, then integers) and size.
Test implicitly in `test_bernoulli`.

<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at 80c7ed7</samp>

> _`Char` and `Byte` types_
> _MPS can dispatch them now_
> _Winter of tensors_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101303
Approved by: https://github.com/huydhn, https://github.com/ZainRizvi, https://github.com/atalman, https://github.com/kulinseth
2023-05-13 01:19:08 +00:00
Ramin Azarmehr
0be53d83fc [MPS] Add support for MPSProfiler Python bindings (#101002)
- Added `torch.mps.profiler.start()` and `stop()` APIs with RST documentation (a usage sketch follows below)
- Added test case in test_mps
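
A minimal sketch of the API, wrapping an MPS workload between start and stop:

```python
import torch

if torch.backends.mps.is_available():
    torch.mps.profiler.start()  # begin capturing MPS profiling info
    x = torch.randn(1024, 1024, device="mps")
    y = x @ x
    torch.mps.synchronize()
    torch.mps.profiler.stop()   # end the capture
```
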
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101002
Approved by: https://github.com/malfet
2023-05-12 21:55:34 +00:00
Sun, Jiayi
d56e1b2f67 add Half support for unary ops on CPU (#98493)
Add Half support for log_sigmoid and some unary ops on CPU, including sinc, acosh, asinh, atanh, digamma, trigamma, rsqrt, acos, asin, atan, ceil, cos, erf, erfc, erfinv, exp, expm1, floor, log, log10, log1p, log2, i0, round, sin, sqrt, tan, tanh, trunc, lgamma.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98493
Approved by: https://github.com/jgong5, https://github.com/mingfeima, https://github.com/ngimel
2023-05-12 04:52:34 +00:00
Nikita Shulga
b7bf953bbc [MPS] Fix bernoulli for int types (#100946)
<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at 069fd23</samp>

This pull request enhances the MPS implementation of random operations in `Distributions.mm` and adds more dtype tests for the bernoulli distribution in `test_mps.py`. This improves the performance, correctness, and usability of the MPS backend for PyTorch.

Fixes https://github.com/pytorch/pytorch/issues/100717

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100946
Approved by: https://github.com/kulinseth
2023-05-11 23:52:38 +00:00
Nikita Shulga
87084643e5 [CI][MPS] Actually make grid_sampler_2d available (#101108)
In CI, an older macOS SDK can be used to compile the binary, so add a guard for the availability of the `MPSGraphResizeNearestRoundingModeRoundToEven` enum value.
MPS feature availability checks are deliberately done at runtime (by using `is_macos_13_or_newer` and forward-declaring methods in `MPSGraphVenturaOps.h`) rather than at compile time (by using `#ifdef`s).

Modify the error message and XFAIL condition in `test_mps.py` so the test fails due to the missing conditional on macOS 13.2 or newer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101108
Approved by: https://github.com/kulinseth
2023-05-11 10:35:09 +00:00
Khushi
51fe53e619 [opinfo] item (#100313)
Follows #100223

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100313
Approved by: https://github.com/ezyang
2023-05-10 11:32:45 +00:00
Ramin Azarmehr
cecfcf1e17 [MPS] Handle MPS failures of test_modules.py in common_modules.py (#95334)
- Also cleaned up the skipMPS code in `test_modules.py`.
- Added `skipMPS` for unsupported or failing tests on the MPS backend in common_modules.py.
   (We'll remove `skipMPS` from those tests once a fix is available for them.)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95334
Approved by: https://github.com/kulinseth, https://github.com/albanD
2023-05-09 03:55:16 +00:00
Li-Huai (Allan) Lin
3b6a7f4d51 [MPS] Fix index_put with deterministic algorithm enabled (#97660)
Prevent the use of parallel computation when a deterministic algorithm is set (a usage sketch follows the benchmark below).

Fixes #97574

Benchmark:
```
[--------------- index_put_ Deterministic Algorithm Enabled ---------------]
                                                              |  cpu  |  mps
1 threads: -----------------------------------------------------------------
      Dtype: torch.float32 Features: 1024; Num Indices: 512   |   37  |   49
      Dtype: torch.float32 Features: 1024; Num Indices: 1024  |   54  |   50
      Dtype: torch.float32 Features: 1024; Num Indices: 2048  |   86  |   50
      Dtype: torch.float32 Features: 1024; Num Indices: 4096  |  150  |   49

Times are in microseconds (us).

[-------------- index_put_ Deterministic Algorithm Disabled ---------------]
                                                              |  cpu  |  mps
1 threads: -----------------------------------------------------------------
      DType: torch.float32 Features: 1024; Num Indices: 512   |   37  |   49
      DType: torch.float32 Features: 1024; Num Indices: 1024  |   53  |   49
      DType: torch.float32 Features: 1024; Num Indices: 2048  |   86  |   49
      DType: torch.float32 Features: 1024; Num Indices: 4096  |  147  |   50

Times are in microseconds (us).
```
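
A sketch of the behavior this guards; with duplicate indices and accumulation, the summation order matters, which is why deterministic mode must avoid the parallel kernel (the CPU fallback is a portability assumption):

```python
import torch

torch.use_deterministic_algorithms(True)

device = "mps" if torch.backends.mps.is_available() else "cpu"
t = torch.zeros(5, device=device)
idx = torch.tensor([0, 0, 1], device=device)
src = torch.tensor([1.0, 2.0, 3.0], device=device)

t.index_put_((idx,), src, accumulate=True)
print(t)  # tensor([3., 3., 0., 0., 0.])
```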

<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at ebf2ff3</samp>

Added a deterministic version of `index_put` for MPS tensors that runs on a single thread and can be enabled by a global context flag. Refactored the existing `index_put` function and the kernel selection logic to support both parallel and serial modes. Added a test function to verify the deterministic behavior of `index_put` under different conditions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97660
Approved by: https://github.com/kulinseth
2023-05-08 00:57:29 +00:00
Kulin Seth
e20c94bda9 [MPS] Add the test for 5D in test_mps which is skipped. (#99271)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99271
Approved by: https://github.com/DenisVieriu97
2023-05-05 22:57:06 +00:00
Li-Huai (Allan) Lin
13da6585b6 [MPS] Skip all empty ops tests (#100368)
Fixes #100175

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100368
Approved by: https://github.com/kulinseth
2023-05-02 00:43:58 +00:00
Li-Huai (Allan) Lin
a50fb50c51 [MPS] Fix exception regex not compared (#100367)
Previously, when using `self.assertRaisesRegex` to test a raised exception and its regex, the regex wasn't actually compared because mps was not in the `NATIVE_DEVICES`. This PR fixes that by enabling exception regex comparisons for the mps device.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100367
Approved by: https://github.com/albanD
2023-05-02 00:43:58 +00:00
Nikita Shulga
2442858f52 [MPS] Fix layer_norm_backward_mps key (#100295)
Followup after https://github.com/pytorch/pytorch/pull/98794
See report in https://github.com/pytorch/pytorch/issues/98602#issuecomment-1527312211 and reproducer in https://github.com/pytorch/pytorch/issues/98602#issuecomment-1528214175

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100295
Approved by: https://github.com/kit1980, https://github.com/izaitsevfb
2023-04-29 03:37:35 +00:00
Li-Huai (Allan) Lin
81978120ec [MPS] Fix trace exceptions not raised for error inputs (#99239)
Also rename `trace_mps_out` to `trace_mps`, as it is not an out version.

Remove `index_add` from the XFAILLIST, as it seems to work as expected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99239
Approved by: https://github.com/kulinseth
2023-04-26 14:41:50 +00:00
Li-Huai (Allan) Lin
f4a37c9a5d [MPS] Fix max_pool2d exceptions not raised for error inputs (#99238)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99238
Approved by: https://github.com/kulinseth
2023-04-26 14:41:50 +00:00
Li-Huai (Allan) Lin
f4cf744380 [MPS] Fix gelu exceptions not raised for error inputs (#99237)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99237
Approved by: https://github.com/kulinseth
2023-04-26 14:41:46 +00:00
Li-Huai (Allan) Lin
1fcf40da63 [MPS] Add linear inputs check (#99228)
Fixes #98211

https://github.com/pytorch/pytorch/issues/98211#issuecomment-1496005668
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99228
Approved by: https://github.com/kit1980
2023-04-26 04:44:23 +00:00
Denis Vieriu
89baa1a74c [MPS] Add support for linalg.vector_norm (#99811)
Summary of changes:

- Add support for linalg.vector_norm
- Fix zero norm, correct formula is: sum(x != 0)
- Add additional tests in test_mps
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99811
Approved by: https://github.com/kulinseth
2023-04-26 01:34:29 +00:00
Justin Chu
79c9e82e27 Fix flake8 lint errors reported by ruff - take 2 (#99798)
Replaces #99784. This PR is pure autofix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99798
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-04-23 23:09:51 +00:00
BJ Hargrave
dc52ba2906 Fix test_mps for macos 13.3 (#98739)
The expected dtype is changed from torch.int64 to torch.int32 prior to macOS 13.3.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98739
Approved by: https://github.com/kulinseth
2023-04-12 19:23:08 +00:00
Li-Huai (Allan) Lin
be8a4eb8e3 [MPS] Add index_fill op (#98694)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98694
Approved by: https://github.com/kulinseth
2023-04-12 18:13:33 +00:00
Li-Huai (Allan) Lin
71aea7f56e [MPS] Add error inputs check (#98167)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98167
Approved by: https://github.com/kulinseth
2023-04-12 17:19:13 +00:00
Nikita Shulga
583193e1d9 [MPS] Fix batch_norm_backwards key (#98794)
One needs different graphs for batch_norm_backwards depending on whether or not gradients are required for some of the params

Fixes https://github.com/pytorch/pytorch/issues/98602

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98794
Approved by: https://github.com/kulinseth
2023-04-11 17:23:36 +00:00
Guang Yang
c377a8590b Add nonzero_static() op to pytorch to unblock export (#97417)
Summary: Add new experimental python op (`torch.nonzero_static`) for export. There is NO cuda impl included in this PR

Example:

Say input tensor is `x = torch.tensor([[1, 0], [3, 2]])`

Calling regular `nonzero()` on `x` gives you `tensor([[0, 0], [1, 0], [1, 1]])`.
Calling `nonzero_static(x, size=4)` gives you `tensor([[0, 0], [1, 0], [1, 1], [fill_value, fill_value]])` (padded).
Calling `nonzero_static(x, size=2)` gives you `tensor([[0, 0], [1, 0]])` (truncated).
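The same examples as a runnable snippet (assuming the documented default `fill_value` of -1 and `size` as a keyword argument):

```python
import torch

x = torch.tensor([[1, 0], [3, 2]])

print(torch.nonzero(x))                 # tensor([[0, 0], [1, 0], [1, 1]])
print(torch.nonzero_static(x, size=4))  # padded: last row is [fill_value, fill_value]
print(torch.nonzero_static(x, size=2))  # truncated to the first 2 rows
```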

Test Plan:
**Unit Tests**
```
buck test @mode/dev-nosan //caffe2/test:test_dynamo -- 'caffe2/test:test_dynamo - test_export.py::ExportTests::test_export_with_nonzero_static' -- 'caffe2/test:test_dynamo - test_misc.py::MiscTests::test_nonzero_static'
```

**PT2 Export with `nonzero_static()`**
Example of `GraphModule` in the exported graph
```
def forward(self, x):
    arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
    nonzero_static_default = torch.ops.aten.nonzero_static.default(arg0, size = 4);  arg0 = None
    return pytree.tree_unflatten([nonzero_static_default], self._out_spec)
```

Differential Revision: D44324808

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97417
Approved by: https://github.com/ezyang
2023-04-11 05:13:36 +00:00
Nikita Shulga
29cde00701 [MPS] Add random_ overload (#98333)
That simply calls `torch.random_(from=0, to=None)`

Also, fix optional upper bound calculation for all `dtypes` but int64:
As one can see from https://pytorch.org/docs/stable/generated/torch.Tensor.random_.html, the `from` boundary is inclusive, but `to` is exclusive, i.e. if `to` is omitted for the `torch.int8` dtype, it should be set to `128`, and to `2` for `torch.bool`.
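A sketch of the boundary semantics described above (device and sizes are arbitrary):

```python
import torch

# With `to` omitted, the exclusive upper bound is derived from the dtype:
a = torch.empty(10, dtype=torch.int8, device="mps").random_()  # values in [0, 128)
b = torch.empty(10, dtype=torch.bool, device="mps").random_()  # values in [0, 2)
```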

Add test for `torch.random_`

Fixes https://github.com/pytorch/pytorch/issues/98118

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98333
Approved by: https://github.com/kulinseth
2023-04-05 21:24:45 +00:00
Li-Huai (Allan) Lin
db8abde9b6 [MPS] Enable conditional indexing tests (#97871)
The tests seem to be working now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97871
Approved by: https://github.com/kulinseth
2023-04-01 16:15:08 +00:00
Li-Huai (Allan) Lin
7776653a0c Add linear gradgrad (#97151)
Fixes #92206
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97151
Approved by: https://github.com/albanD
2023-03-30 07:25:02 +00:00
Philip Meier
2f6c18d1a2 improve memory footprint of torch.testing.assert_close (#96131)
Redo of #90172 out of stack.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96131
Approved by: https://github.com/pearu, https://github.com/mruberry
2023-03-29 23:49:56 +00:00
Li-Huai (Allan) Lin
4afef85dda [MPS] Fix index_select_scalar test (#97773)
#96408 introduced a check that prevents the index into a scalar from being non-singleton.

Fixes #94162

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97773
Approved by: https://github.com/kulinseth
2023-03-28 19:23:59 +00:00
Li-Huai (Allan) Lin
100641aadf [MPS] Fix torch.eye unsupported bool constant on macOS 12 (#97027)
Fixes #91620

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97027
Approved by: https://github.com/kulinseth
2023-03-20 18:08:36 +00:00
Ramin Azarmehr
50beab2978 [MPS] Fix the failure with ReplicatePad3D (#96988)
- Only ReflectPad needs the torch checks for input arguments, not ReplicatePad
- Added a test case
- The failure was originally found in test_modules with test `test_forward_nn_ReplicationPad3d_mps_float32`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96988
Approved by: https://github.com/DenisVieriu97
2023-03-17 01:41:12 +00:00
alexdremov
62eb7a2e97 [MPS] LSTM grad_y missing fix (#96601)
Fixes #96416
Added tests that do not use the LSTM output, similarly to the issue

Seems like this fix once again introduces backward incompatibility.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96601
Approved by: https://github.com/albanD, https://github.com/kulinseth
2023-03-16 15:53:56 +00:00
Li-Huai (Allan) Lin
c95bcb6694 [MPS] Fix flip where no dims need to be flipped (#96605)
Fixes #96558

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96605
Approved by: https://github.com/kulinseth
2023-03-14 00:34:30 +00:00
Li-Huai (Allan) Lin
a87f3f612e [MPS] Fall back multi-layer LSTM on macOS 12 (#90909)
The native implementation of LSTM has been fixed on macOS 13.

On macOS 12, the multi-layer LSTM still has a numerical correctness issue that cannot be resolved on the OS's side.

Thus, we fall back the multi-layer LSTM on macOS 12 to LSTMCell iteration. It might have a performance impact but will make LSTM on macOS 12 fully usable.

Fixes: #90421
Issues related: #80306, #83144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90909
Approved by: https://github.com/albanD, https://github.com/kulinseth
2023-03-10 03:10:49 +00:00
Nikita Shulga
075a49442d [MPS] Allow float16 input to float32 LayerNorm (#96430)
Only for forward pass

Subset of https://github.com/pytorch/pytorch/pull/96208

Create constant with scalar using `input_mps_dtype` and use
`reciprocalWithTensor` instead of `divisionWithPrimaryTensor:1.0
secondaryTensor:`

Fixes https://github.com/pytorch/pytorch/issues/96113

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96430
Approved by: https://github.com/kulinseth
2023-03-09 22:09:10 +00:00
Kulin Seth
2bb022e902 [MPS] Adding xfaillist with all categories of failures. (#96176)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96176
Approved by: https://github.com/malfet
2023-03-08 08:41:21 +00:00
Catherine Lee
eea0733045 Reduce pytest blocklist (#96016)
`TestCase = object` or variations of it get switched to `TestCase = NoTest`.

unittest collects tests based on subclassing unittest.TestCase, so setting TestCase = object removes a class from unittest test collection. pytest collects based on name (https://docs.pytest.org/en/7.1.x/reference/reference.html#confval-python_classes) but can be told to ignore a class (bottom of https://docs.pytest.org/en/7.1.x/example/pythoncollection.html#changing-naming-conventions), as in the sketch below.
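A minimal sketch of the `NoTest` pattern (the `__test__` attribute is pytest's documented opt-out for name-based collection):

```python
import unittest

class NoTest:
    # pytest honors __test__ = False and skips this class during
    # name-based collection; unittest never collects it because it
    # does not subclass unittest.TestCase.
    __test__ = False

# Instead of `TestCase = object`, alias to NoTest so both unittest
# and pytest leave the class out of collection.
TestCase = NoTest
```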
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96016
Approved by: https://github.com/ZainRizvi, https://github.com/huydhn
2023-03-07 18:30:27 +00:00
Li-Huai (Allan) Lin
2f66b57a7a [MPS] Fix in-place add and sub with alpha == 0.0 (#96184)
Apart from fixing the below issue, this PR integrates the test for `sub` into the test for `add` as they are implemented using the same template.

Fixes #96065

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96184
Approved by: https://github.com/kulinseth
2023-03-07 17:17:53 +00:00
Nikita Shulga
769cc8a614 [MPS] Add type promotion to torch.addcmul (#96164)
Fixes crash while running something like `python -c "import torch;x=torch.rand(3, 3, dtype=torch.float16, device='mps');y=x.addcmul(torch.ones(3, device='mps'), torch.ones(3, device='mps'));print(y)"`

Modify `castMPSTensor` to become a no-op if cast is not needed

Define `common_dtype` via `c10::promoteTypes` between self, tensor1, and
tensor2. Cast to any output type.

Add mixed-types test to `TestMPS.test_addcmul`, though it does not cover
all the permutations

Discovered while looking at https://github.com/pytorch/pytorch/issues/96113

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96164
Approved by: https://github.com/kulinseth
2023-03-07 04:19:30 +00:00
alexdremov
78da315afd [MPS] Fix bidirectional LSTM & small one-direction LSTM fix (#95563)
Fixes #94754

With this PR I hope to finish my breathtaking journey of fixing MPS LSTM.

Here, I enable `bidirectional` on MPS. Also, I've noticed that cache key did not account for all parameters, so there could have been problems with one-directional LSTM when created without bias or dropout and then with one of them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95563
Approved by: https://github.com/jhavukainen, https://github.com/kulinseth, https://github.com/malfet
2023-03-05 00:19:54 +00:00
Nikita Shulga
436993d52b [MPS] Error on unsupported types (#95982)
I.e. attempt to create tensors of all possible types and make sure that a structured error is raised for non-MPS types

Also, rename `test_resize_as_all_dtypes_and_devices` to `test_resize_as_mps_dtypes` and `test_resize_all_dtypes_and_devices` to `test_resize_mps_dtypes`, and run both tests for all MPS dtypes (rather than just bool, float16 and bfloat16 as they were running before)

Fixes https://github.com/pytorch/pytorch/issues/95976

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95982
Approved by: https://github.com/kulinseth
2023-03-04 01:29:07 +00:00
Denis Vieriu
304a95435d [MPS] Disallow reshape in slice (#95905)
Disallow reshapes for arrayViews.
The current code allows a base shape of `[2, 4, 256]` to be sliced into `[4, 1, 256]` (the view's shape), which is not possible. Slicing a smaller dimension into a bigger one will always error out.

Fixes https://github.com/pytorch/pytorch/issues/95883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95905
Approved by: https://github.com/razarmehr, https://github.com/kulinseth
2023-03-03 08:08:34 +00:00
Denis Vieriu
d0dd898943 [MPS] Remove remaining casts from 13.3 (#95870)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95870
Approved by: https://github.com/kulinseth
2023-03-02 12:44:59 +00:00
Denis Vieriu
4d3352ed90 [MPS] Remove casts from reduction/cumsum/sort ops starting with macOS 13.3 (#95817)
MPS in macOS 13.3 has added support for int64 in reduction ops / cumsum / sort / argsort. This change removes the hard-coded casts and error messages prior to macOS 13.3, allowing the ops to run natively with int64.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95817
Approved by: https://github.com/kulinseth
2023-03-02 00:26:24 +00:00
Kulin Seth
5d9d8c6154 [MPS] Add fixes for div with floor and raise error for div_trunc (#95769)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95769
Approved by: https://github.com/DenisVieriu97
2023-03-01 20:52:28 +00:00
Denis Vieriu
e5a959a2d4 [MPS] Fix views with 3 or more sliced dimensions (#95762)
Fixes https://github.com/pytorch/pytorch/issues/95482
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95762
Approved by: https://github.com/razarmehr
2023-03-01 16:16:49 +00:00
Denis Vieriu
ed1957dc19 [MPS] Add support for masked_scatter (#95743)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95743
Approved by: https://github.com/kulinseth
2023-03-01 01:36:36 +00:00
Li-Huai (Allan) Lin
f33180fb7f [MPS] Add pow.Scalar (#95201)
1. Adds `pow.Scalar`.
2. Modifies testing `atol` and `rtol` to get pow output match tests pass.
3. Xfails numerically incorrect dtypes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95201
Approved by: https://github.com/kulinseth
2023-02-28 16:11:15 +00:00
Li-Huai (Allan) Lin
9e16f1281f [MPS] Add copysign op. (#95552)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95552
Approved by: https://github.com/kulinseth
2023-02-28 06:49:46 +00:00
Li-Huai (Allan) Lin
b7c2a65139 [MPS] Fix type casting copy with storage offset (#95573)
This PR handles the case where the `dst` tensor of type casting has a storage offset by creating a temporary buffer to store results and then copy them back to the dst with the offset added.

Fixes #95417

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95573
Approved by: https://github.com/kulinseth
2023-02-28 05:24:31 +00:00
Li-Huai (Allan) Lin
4930ae7f82 [MPS] Add roll op (#95168)
Reuse the cpu implementation here, as currently there is no native roll implementation in the MPS API (if there is one, please let me know).

Compared to falling back to cpu using `PYTORCH_ENABLE_MPS_FALLBACK=1`, this way we keep tensors on MPS.

Did a small benchmark:

```python
import time
import torch

for num in [10, 100, 1000, 10000]:
    for shft in [1, 5]:
        sz = num * num
        x = torch.arange(sz, device="cpu").view(num, num)
        s = time.time()
        r = torch.roll(x, shft)
        cpu_e = time.time() - s
        x = torch.arange(sz, device="mps").view(num, num)
        s = time.time()
        r = torch.roll(x, shft)
        mps_e = time.time() - s
        print(f"size: ({num}, {num}) shft: {shft} cpu: {cpu_e} mps: {mps_e}")
```

```
size: (10, 10) shft: 1 cpu: 0.00015163421630859375 mps: 0.003078937530517578
size: (10, 10) shft: 5 cpu: 6.794929504394531e-05 mps: 0.0014979839324951172
size: (100, 100) shft: 1 cpu: 0.0001621246337890625 mps: 0.0016200542449951172
size: (100, 100) shft: 5 cpu: 0.00016379356384277344 mps: 0.00154876708984375
size: (1000, 1000) shft: 1 cpu: 0.0022068023681640625 mps: 0.0017690658569335938
size: (1000, 1000) shft: 5 cpu: 0.009071111679077148 mps: 0.0020020008087158203
size: (10000, 10000) shft: 1 cpu: 0.16785407066345215 mps: 0.011695146560668945
size: (10000, 10000) shft: 5 cpu: 0.1160881519317627 mps: 0.011452913284301758
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95168
Approved by: https://github.com/albanD
2023-02-27 18:31:17 +00:00
Nikita Shulga
fd8367a7b1 [MPS][BE] Introduce xfail (#95045)
Add `mps_ops_modifier` function that adds `unittest.expectedFailure` decorators to the operators that are supposed to fail on MPS.

This allows one to know whether or not an operation will fail, rather than skipping it.
For example:
```
% python test_mps.py -v -k test_output_match_dot
test_output_match_dot_cpu_float32 (__main__.TestConsistencyCPU) ... ok
test_output_match_dot_cpu_int16 (__main__.TestConsistencyCPU) ... ok
test_output_match_dot_cpu_int32 (__main__.TestConsistencyCPU) ... ok
test_output_match_dot_cpu_int64 (__main__.TestConsistencyCPU) ... expected failure
test_output_match_dot_cpu_uint8 (__main__.TestConsistencyCPU) ... ok

----------------------------------------------------------------------
Ran 5 tests in 0.175s

OK (expected failures=1)
```
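A minimal illustration of xfail versus skip (the test name and error are hypothetical stand-ins for an MPS op test):

```python
import unittest

class TestXfailExample(unittest.TestCase):
    # An expectedFailure still runs; the suite reports "expected failure"
    # and, crucially, flags an *unexpected success* if the op starts working.
    @unittest.expectedFailure
    def test_dot_int64(self):
        raise RuntimeError("int64 dot is not supported on this backend")

if __name__ == "__main__":
    unittest.main()
```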

Moved a few functions from blocklist to xfail, and found that some of the functions in the list actually work, for example `torch.long`.

Also, allow `None` to be used in `ALLOWLIST` instead of specifying all types explicitly (which aligns with the `DecorateInfo` semantics)

Eventually, we should get rid of `ALLOWLIST` (i.e. all ops are allowed), keep a small `BLOCKLIST`, and move the rest to `XFAILLIST`

Add step to print HW/SW info before running MPS tests.

Fix type promotion in `trace_mps_out`

Introduce `MACOS_12_X_XFAILLIST` and skip almost every function for `torch.uint8`, although some of those don't make much sense and feel like a regression from PyTorch 1.13

Re-enabled MPS testing on macOS 12, as runners seem to be available again
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95045
Approved by: https://github.com/albanD
2023-02-27 15:01:01 +00:00
Li-Huai (Allan) Lin
4dca9bde05 [MPS] Add fmax fmin op (#95191)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95191
Approved by: https://github.com/kulinseth
2023-02-25 07:21:48 +00:00
Li-Huai (Allan) Lin
5cad542e43 [MPS] Add log_sigmoid op (#95280)
1. Add log_sigmoid.
2. Make log1p a common function. Operators that use log1p: mish, softplus, log_sigmoid (maybe more).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95280
Approved by: https://github.com/kulinseth
2023-02-24 01:38:30 +00:00
alexdremov
b9e95158d5 [MPS] Fix LSTM backward and forward pass (#95137)
Fixes #91694
Fixes #92615

Several transpositions were missing for the backward graph in the case of `batch_first=True`. #91694 is not reproduced with `batch_first=False`.

After fixing the transpose issue, I finally thought that I could now use LSTM freely in my project. And then I got horrific results in training. Seems related to #92615.

After that I decided to fix LSTM's backward step completely. I collected all my findings in this thread — seems like I succeeded

Funny enough, backward tests were completely disabled before and were not passing:
```python
    @unittest.skipIf(True, "Backward of lstm returns wrong result")
    def test_lstm_2(self, device="mps", dtype=torch.float32):
```

UPD: the forward pass of the multi-layer version was also wrong due to incorrect `initState, initCell` slices. Tests were passing because states were initialized with zeros. *Accidentally* fixed this too

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95137
Approved by: https://github.com/jhavukainen, https://github.com/kulinseth, https://github.com/soulitzer
2023-02-23 17:32:42 +00:00
Denis Vieriu
86efa104f5 [MPS] Fix view op slicing for 2nd dim in case of 0 offset (#95381)
* Fix view op slicing for 2nd dim in case of 0 offset

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95381
Approved by: https://github.com/razarmehr
2023-02-23 17:26:10 +00:00
XiaobingSuper
5730cabdd0 using float type to do the computation of norm reduce for cpu half and bfloat16 dtype (#95166)
As the title says, we should use a higher-precision dtype to compute the norm reduction for the half and bfloat16 dtypes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95166
Approved by: https://github.com/peterbell10, https://github.com/jgong5, https://github.com/ngimel, https://github.com/lezcano
2023-02-23 05:00:25 +00:00
Li-Huai (Allan) Lin
69c76ff05e [MPS] Add xlogy op (#95213)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95213
Approved by: https://github.com/kulinseth, https://github.com/soulitzer
2023-02-22 19:43:12 +00:00
Denis Vieriu
5e47571a13 [MPS] Convolution cleanup; remove unnecessary contiguous calls (#95078)
- Fixes convolution crashes in backward with weights
- Removes unnecessary contiguous calls
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95078
Approved by: https://github.com/kulinseth
2023-02-22 18:04:12 +00:00
Kulin Seth
02a6d4334b [MPS] Handle broadcasting by expanding src tensor in Copy.mm (#95272)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95272
Approved by: https://github.com/DenisVieriu97
2023-02-22 18:02:42 +00:00
Denis Vieriu
8475af7761 [MPS] Cast int64 to int32 for reduction ops (#95231)
- give warnings of converting int64 for reduction ops
- use cast tensor for reduction sum on trace
- unblock trace from running
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95231
Approved by: https://github.com/razarmehr
2023-02-22 17:23:25 +00:00
Li-Huai (Allan) Lin
f70a3430aa [MPS] Add hypot op (#95196)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95196
Approved by: https://github.com/kulinseth
2023-02-21 22:40:20 +00:00
Li-Huai (Allan) Lin
e0a0329a67 [MPS] Add hardsigmoid op (#95164)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95164
Approved by: https://github.com/kulinseth
2023-02-21 07:06:37 +00:00
Li-Huai (Allan) Lin
d96aac8d2a [MPS] Add logit op (#95162)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95162
Approved by: https://github.com/kulinseth
2023-02-21 07:02:45 +00:00
alexdremov
a17a7ccc92 [MPS] LogSoftmax numerical stability (#95091)
Fixes #94043

Calculations are now consistent with the numerically stable formula and with the CPU:

$\mathrm{LogSoftmax}(X, \mathrm{dim}) = X - \max(X, \mathrm{dim}) - \log\big(\mathrm{sum}(\exp(X - \max(X, \mathrm{dim})), \mathrm{dim})\big)$
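A minimal reference implementation of the stable formula (plain PyTorch, for illustration only):

```python
import torch

def log_softmax_stable(x: torch.Tensor, dim: int) -> torch.Tensor:
    # Subtract the per-dim max before exponentiating so exp() cannot overflow.
    m = x.max(dim=dim, keepdim=True).values
    shifted = x - m
    return shifted - shifted.exp().sum(dim=dim, keepdim=True).log()
```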

@malfet

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95091
Approved by: https://github.com/malfet, https://github.com/kulinseth
2023-02-18 18:26:29 +00:00
Ramin Azarmehr
9511b9fad2 [MPS] Fix copy_cast_mps() on tensors with storage offset (#95093)
- The copy_cast path requires storage_offset to be applied before casting
- This should fix some correctness issues in transformer models

Fixes #94980

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95093
Approved by: https://github.com/kulinseth
2023-02-18 16:29:01 +00:00
Li-Huai (Allan) Lin
25ee6dd335 [MPS] Fix fill_ where input tensor has a storage offset (#95113)
Fixes #94390

Apart from fixing the issue above, this PR also fixes a bug that when an input tensor can be sliced, a sliced array view is created. This array view seems to be not writable or have a different storage from the original tensor, causing incorrect results with the in-place `fill`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95113
Approved by: https://github.com/kulinseth
2023-02-18 16:19:15 +00:00
Li-Huai (Allan) Lin
0a9c608461 [MPS] Fix tensor with non-zero storage offset graph gathering (#91071)
Previously, the "can slice" flag in the Placeholder constructor in `OperationUtils.mm` was conditioned on whether the numbers of dimensions of the base shape and the view shape are the same. This doesn't consider the situation where a view tensor could be the base tensor's sliced and then unsqueezed version, resulting in a different number of dims.

For example, if we want to stack `y_mps` and `x_mps` on the last dim:
```
t_mps = torch.tensor([1, 2, 3, 4], device="mps")
x_mps = t_mps[2:]  # [3, 4]
y_mps = t_mps[:2]  # [1, 2]

res_mps = torch.stack((y_mps, x_mps), dim=-1)
```

the kernel will unsqueeze both of them on the last dim and then concatenate them, which is equivalent to:

```
res_mps = torch.cat((y_mps.unsqueeze(-1), x_mps.unsqueeze(-1)), dim=-1)
```

`x_mps.unsqueeze(-1)` is an unsqueezed and contiguous tensor with a storage offset; this kind of tensor should be sliceable without cloning its storage.

Fixes #87856
Fixes #91065

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91071
Approved by: https://github.com/kulinseth
2023-02-17 18:44:20 +00:00
Denis Vieriu
a2afc657da [MPS] Fix upsample for NHWC output (#94963)
Fixes https://github.com/huggingface/diffusers/issues/941

**Before**:
<img width="1144" alt="Screenshot 2023-02-15 at 8 11 53 PM" src="https://user-images.githubusercontent.com/104024078/219266709-6a77636a-2fc0-4802-b130-85069b95953f.png">

**After**:
<img width="1144" alt="Screenshot 2023-02-15 at 8 12 02 PM" src="https://user-images.githubusercontent.com/104024078/219266694-ea743c02-fb55-44f1-b7d6-5946106527c3.png">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94963
Approved by: https://github.com/razarmehr
2023-02-17 05:07:22 +00:00
Denis Vieriu
5d1e9fd214 [MPS] Fix prelu backward pass (#94933)
Allocate the correct shape for the weights gradient
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94933
Approved by: https://github.com/razarmehr
2023-02-17 03:45:12 +00:00
Denis Vieriu
bc361fdfdf [MPS] Fix bilinear backward pass (#94892)
Fixes backward pass for bilinear.

Summary of changes:
- the bilinear op is able to produce **contiguous, non-view** tensors with a storage offset, such as: shape=`[1, 1, 1, 1]`, `storage_offset=12`. This seems like a weird case, but it is valid, and for this type of tensor we wouldn't be able to gather/scatter since we look at the view flag (which is not set here). This change looks at `storage_offset` only, rather than the is_view flag, which is not set
- **reduction sum** must return a zeroed-out output if passed an input with 0 elements (e.g. a shape of (0, 5)).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94892
Approved by: https://github.com/kulinseth
2023-02-16 00:30:29 +00:00
Kulin Seth
54ebf255ab [MPS] Fixes for LSTM. (#94889)
- The backward pass has to give an explicit bias tensor of zeros if none is passed to the op, or the bias gradient will not be calculated.
- Fixed the bias tensor mistakenly getting overwritten to zeros
- Fixes a crash when the lstm op is called with has_biases set to false. The change takes into account the changed shape of the input params TensorList depending on the bias flag.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94889
Approved by: https://github.com/DenisVieriu97
2023-02-15 16:10:40 +00:00
Denis Vieriu
71ec2617d2 [MPS] Block uint8 data type for unary and binary ops on macOS 12 (#94876)
Blocks uint8 data type for unary and binary ops on macOS 12
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94876
Approved by: https://github.com/kulinseth
2023-02-15 06:09:56 +00:00
Kulin Seth
94f0808629 [MPS] Add fmod op. (#94722)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94722
Approved by: https://github.com/DenisVieriu97
2023-02-14 14:55:26 +00:00
Xuehai Pan
b005ec62b9 [BE] Remove dependency on six and future (#94709)
Remove the Python 2 and 3 compatibility libraries [six](https://pypi.org/project/six) and [future](https://pypi.org/project/future) and `torch._six`. We only support Python 3.8+ now. It's time to retire them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94709
Approved by: https://github.com/malfet, https://github.com/Skylion007
2023-02-14 09:14:14 +00:00
Denis Vieriu
1f06a71797 [MPS] Error out for square int64 input (#94766)
- add checks for whether macOS is greater than 13.2
- remove square from block list
- throw error messages if power int64 is called before macOS 13.2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94766
Approved by: https://github.com/kulinseth
2023-02-14 04:45:41 +00:00
Denis Vieriu
cedb7e3d77 [MPS] Fix remainder op for integral dtypes (#94757)
Map remainder op to the same template as div (integral dtypes will be cast to float)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94757
Approved by: https://github.com/kulinseth
2023-02-14 01:06:49 +00:00
Denis Vieriu
4acdc446b2 [MPS] Fix batch norm for NHWC (#94760)
Fixes `test_modules.py` batch norm NHWC testcases:
- `test_memory_format_nn_BatchNorm2d_eval_mode_mps_float32`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94760
Approved by: https://github.com/kulinseth
2023-02-13 23:31:10 +00:00
OwenPendrighElliott
840fb74ec8 86990 range mps support (#91075)
Fixes #86990

- Added range_mps_out to RangeFactories.mm
- Updated native_functions.yaml
- Added tests in test_mps.py

I did observe that despite [the documentation for torch.range](https://pytorch.org/docs/stable/generated/torch.range.html), the existing implementations do not adjust their return type based on the arguments passed to them. The MPS implementation provided here behaves the same way as the existing CPU and CUDA implementations in this regard, hence the conversion to float32 in the test cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91075
Approved by: https://github.com/kulinseth, https://github.com/DenisVieriu97
2023-02-13 23:19:10 +00:00
Ramin Azarmehr
b57e6fdb50 [MPS] Enable Memory Leak Detection for test_mps.py (#94646)
- To check for memory leaks in `test_mps.py`, set the env variable `PYTORCH_TEST_MPS_MEM_LEAK_CHECK=1` when running test_mps.py (the CUDA code was used as reference).
- Added support for the following new python interfaces in MPS module:
`torch.mps.[empty_cache(), set_per_process_memory_fraction(), current_allocated_memory(), driver_allocated_memory()]`
- Renamed `_is_mps_on_macos_13_or_newer()` to `_mps_is_on_macos_13_or_newer()`, and `_is_mps_available()` to `_mps_is_available()` to be consistent in naming with prefix `_mps`.
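A quick usage sketch of the new memory interfaces listed above (reported values depend on the machine):

```python
import torch

x = torch.rand(1024, 1024, device="mps")
print(torch.mps.current_allocated_memory())  # bytes occupied by live tensors
print(torch.mps.driver_allocated_memory())   # total bytes obtained from the driver
torch.mps.empty_cache()                      # release cached, unused blocks
```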

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94646
Approved by: https://github.com/malfet
2023-02-13 17:56:24 +00:00
Kulin Seth
18587cb31f [MPS] Add sort and argSort Op. (#94697)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94697
Approved by: https://github.com/DenisVieriu97
2023-02-13 01:03:22 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases that change the semantics should be kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
Ramin Azarmehr
bdd8f518d7 [MPS] Add Python Module Bindings for the MPS backend (#94417)
- This PR is a prerequisite for the upcoming Memory Leak Detection PR.
- Enable global manual seeding via `torch.manual_seed()` + test case
- Add `torch.mps.synchronize()` to wait for MPS stream to finish + test case
- Enable the following python interfaces for MPS:
  `torch.mps.[get_rng_state(), set_rng_state(), synchronize(), manual_seed(), seed()]`
- Added some test cases in test_mps.py
- Added `mps.rst` to document the `torch.mps` module.
- Fixed the failure with `test_public_bindings.py`
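A short sketch exercising the interfaces listed above:

```python
import torch

torch.manual_seed(42)              # now also seeds the MPS generator
state = torch.mps.get_rng_state()
a = torch.rand(4, device="mps")
torch.mps.set_rng_state(state)     # restore -> the same stream replays
b = torch.rand(4, device="mps")
torch.mps.synchronize()            # wait for the MPS stream to finish
assert torch.equal(a, b)
```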

Description of new files added:
- `torch/csrc/mps/Module.cpp`: implements `torch._C` module functions for `torch.mps` and `torch.backends.mps`.
- `torch/mps/__init__.py`: implements Python bindings for `torch.mps` module.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94417
Approved by: https://github.com/albanD
2023-02-12 21:22:30 +00:00
Henry Cheng
fe0c7fbcf8 [MPS] Add repeat_interleave to MPS (#88649)
Fixes #87219

Implements new ``repeat_interleave`` function into ``aten/src/ATen/native/mps/operations/Repeat.mm``
Adds it to ``aten/src/ATen/native/native_functions.yaml``
Adds new test ``test_repeat_interleave`` to ``test/test_mps.py``
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88649
Approved by: https://github.com/kulinseth
2023-02-12 08:43:55 +00:00
Denis Vieriu
b794fd19c5 [MPS] Add scatter gather kernels (support up to 5 dimensions) (#94663)
Add scatter gather kernels (support up to 5 dimensions)
- Fixes int64 issues for `mH`, `mT`, `T`, `H` on Monterey

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94663
Approved by: https://github.com/kulinseth
2023-02-12 08:17:26 +00:00
Kulin Seth
54c0f37646 [MPS] Add support for TopK k>16 (#94639)
Fixes: https://github.com/pytorch/pytorch/issues/78915

* Add the topk>16 support
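For example (assuming an MPS device is available):

```python
import torch

# k > 16 previously failed; it now runs natively on mps.
values, indices = torch.topk(torch.rand(100, device="mps"), k=32)
```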

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94639
Approved by: https://github.com/DenisVieriu97
2023-02-12 00:57:53 +00:00
Denis Vieriu
4a762cb622 [MPS] Fix channels last copies in ELU,ReLU and Hardswish (#94664)
Fixes test_modules.py tests:
```
test_memory_format_nn_Hardswish_mps_float32
test_non_contiguous_tensors_nn_Hardswish_mps_float32
test_memory_format_nn_ReLU_mps_float32
```
Fixes elu when run with `ChannelsLast` memory format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94664
Approved by: https://github.com/kulinseth
2023-02-11 22:05:21 +00:00
Kulin Seth
c74f438c01 [MPS] Fix the cat op for NHWC case (#94662)
* add unit test cat with non-contiguous

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94662
Approved by: https://github.com/DenisVieriu97
2023-02-11 19:43:33 +00:00
PyTorch MergeBot
4fe365774a Revert "[MPS] Add Python Module Bindings for the MPS backend (#94417)"
This reverts commit beb4f5bf39.

Reverted https://github.com/pytorch/pytorch/pull/94417 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to break MacOS test in trunk bae397ec63
2023-02-11 05:24:45 +00:00
Ramin Azarmehr
030209088f [MPS] Fix the regression with test_index_select_scalar() (#94645)
PR #94347 caused a regression in test_mps, which this patch fixes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94645
Approved by: https://github.com/DenisVieriu97
2023-02-11 01:36:51 +00:00
Denis Vieriu
7ce785b50b [MPS] Fix gelu forward and backward ops (#94529)
Forward pass:
```
fix gelu_out_mps key
add calculation for gelu with tanh
remove gelu from blocklist
```
Backward pass:
```
fix gelu_backward_out_mps key
uniform format
add calculation for tanh approximate backward pass
unblock grad test from blocklist
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94529
Approved by: https://github.com/razarmehr, https://github.com/kulinseth
2023-02-11 00:24:30 +00:00
Denis Vieriu
507b8c3423 [MPS] Native implementation for addr (#94538)
```
addr_out_mps to perform res = beta*input + alpha*(vec1 ⊗ vec2)
move addr f16 to low precision list
move addr non-float to unsupported list
add test_addr tests
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94538
Approved by: https://github.com/razarmehr
2023-02-11 00:16:50 +00:00
Denis Vieriu
0b31ebf9e4 [MPS] Added zero check to inverse & fix for any op to avoid segfault issue (#94551)
Fixes the empty placeholder error in the inverse op. The change to the any op should also resolve previously seen segfaults
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94551
Approved by: https://github.com/kulinseth
2023-02-10 23:39:12 +00:00
Ramin Azarmehr
beb4f5bf39 [MPS] Add Python Module Bindings for the MPS backend (#94417)
- This PR is a prerequisite for the upcoming Memory Leak Detection PR.
- Enable global manual seeding via `torch.manual_seed()` + test case
- Add `torch.mps.synchronize()` to wait for MPS stream to finish + test case
- Enable the following python interfaces for MPS:
  `torch.mps.[get_rng_state(), set_rng_state(), synchronize(), manual_seed(), seed()]`
- Added some test cases in test_mps.py
- Added `mps.rst` to document the `torch.mps` module.
- Fixed the failure with `test_public_bindings.py`

Description of new files added:
- `torch/csrc/mps/Module.cpp`: implements `torch._C` module functions for `torch.mps` and `torch.backends.mps`.
- `torch/mps/__init__.py`: implements Python bindings for `torch.mps` module.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94417
Approved by: https://github.com/albanD
2023-02-10 23:18:41 +00:00
Denis Vieriu
728dfeee48 [MPS] Fix ops with bool issues in macOS Monterey (#94464)
Summary:
- Remove redundant bool casts from scatter/gather
- Make the workarounds for scatter/gather (for bool/uint8 data types) OS-specific: use them only on macOS Monterey, and ignore them starting with macOS Ventura
- Make all tensors ranked in scatter

Fixes following tests:
```
test_output_match_slice_scatter_cpu_bool
test_output_match_select_scatter_cpu_bool
test_output_match_diagonal_scatter_cpu_bool
test_output_match_repeat_cpu_bool
test_output_match_rot90_cpu_bool
etc..
```

Still failing on macOS Monterey (needs additional investigation):
```
test_output_match_scatter_cpu_bool
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94464
Approved by: https://github.com/kulinseth
2023-02-10 21:36:25 +00:00
Ramin Azarmehr
7c4acdad4a [MPS] Fix the crash in huberloss with Float16 (#94567)
- Also fix FP16 correctness issues in several other ops by lowering their FP16 precision requirements via the new list `FP16_LOW_PRECISION_LIST`.
- Add atol/rtol to the `assertEqual()` of gradient tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94567
Approved by: https://github.com/kulinseth
2023-02-10 19:20:29 +00:00
Denis Vieriu
92d8c4b37c [MPS] Fix cumsum for integral data types (#94530)
- Make intermediate type for cumsum ScalarType::Int: fixes https://github.com/pytorch/pytorch/issues/90635
- Add support for negative dimensions in cumsum: fixes https://github.com/pytorch/pytorch/issues/92329
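Both fixes in one short sketch:

```python
import torch

x = torch.arange(6, device="mps", dtype=torch.int32).reshape(2, 3)
print(x.cumsum(dim=-1))  # negative dim and integral dtype now work on mps
```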
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94530
Approved by: https://github.com/kulinseth
2023-02-10 17:40:29 +00:00
Kulin Seth
1d3980656c [MPS] Fix min/max_reduction_with_dim ops (#94386)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94386
Approved by: https://github.com/DenisVieriu97, https://github.com/razarmehr
2023-02-10 15:23:47 +00:00
Kulin Seth
0fe11589df [MPS] Add im2col and col2im to Fallback (#94491)
These are not in the hot path as they are mostly used in preprocessing layers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94491
Approved by: https://github.com/razarmehr
2023-02-10 15:22:59 +00:00
PyTorch MergeBot
f152a79be9 Revert "update aten op overload to not use from to avoid compile errors (#89797)"
This reverts commit 021d267694.

Reverted https://github.com/pytorch/pytorch/pull/89797 on behalf of https://github.com/jeanschmidt due to breaking internal builds - more details on https://fburl.com/sandcastle/bz8mgkil
2023-02-10 11:32:25 +00:00
Denis Vieriu
a1f15fb987 [MPS] Fix batchnorm forward and backward pass (#94351)
Fixes batchnorm forward/backward pass and layer_norm:

Batchnorm Forward pass:
```
- fix batch_norm_mps_out key
- return 1/sqrt(var+epsilon) instead of var
- return empty tensor for mean and var if train is not enabled
- remove native_batch_norm from block list
```

Batchnorm Backward pass:
```
- add revert calculation for save_var used in backward path
- add backward test for native_batch_norm and _native_batch_norm_legit
```

Layer norm:
```
- remove the duplicate calculation from layer_norm_mps
- enable native_layer_norm backward test
- raise atol rtol for native_layer_norm
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94351
Approved by: https://github.com/razarmehr
2023-02-10 05:53:36 +00:00
Denis Vieriu
016f0b2f62 [MPS] Calculate nonzero count inside nonzero op (#94442)
Calculate the nonzero count directly in the nonzero op.
Additionally, synchronize before entering the nonzero op to make sure all previous operations have finished (the output shape is allocated based on the count_nonzero count)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94442
Approved by: https://github.com/kulinseth
2023-02-10 00:53:52 +00:00
Denis Vieriu
336d9354d6 [MPS] Enable index add for TestConsistency (#94356)
Enable index_add TestConsistency TestCase
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94356
Approved by: https://github.com/kulinseth
2023-02-10 00:21:11 +00:00
Kulin Seth
299ada9cff [MPS] Add the floor_divide fixes. (#94488)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94488
Approved by: https://github.com/razarmehr
2023-02-10 00:10:08 +00:00
Kulin Seth
f35f12320a [MPS] Fixes for arange_mps for empty tensor. (#94485)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94485
Approved by: https://github.com/razarmehr
2023-02-09 19:30:17 +00:00
Kulin Seth
105f7205bd [MPS] Fix and unblock TestConsistency for median (#94489)
- fix num_output_dims calculation
- fix median_out_mps key
- cast tensor sent to sortWithTensor and argSortWithTensor
- note down same issue for unique
- unblock median from blocklist
- adding test_median_int16 test

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94489
Approved by: https://github.com/razarmehr
2023-02-09 19:29:07 +00:00
Ramin Azarmehr
4f691d2e2f [MPS] Fix correctness issue with fill_scalar_mps() (#94479)
- The self tensor was not contiguous, and inline filling produced wrong results
- Added a test case for the issue

Fixes the zero_like() issue reported in #94190

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94479
Approved by: https://github.com/DenisVieriu97, https://github.com/kulinseth
2023-02-09 19:07:13 +00:00
jinsu kim
a5b052259b Add MPS support for aten::remainder.Tensor_out (#92139)
Fixes #86806

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92139
Approved by: https://github.com/kulinseth, https://github.com/DenisVieriu97
2023-02-09 15:32:30 +00:00
Soof Golan
e4fe11eecb [MPS] Fix torch.topk for empty tensors and k=0 on mps (#91884)
Fixes #91878

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91884
Approved by: https://github.com/kulinseth
2023-02-09 10:42:52 +00:00
Soof Golan
19264b50bb [MPS] Add support for nansum on mps (#93845)
* Add `nansum_out_mps` and `nansum_mps` functions
* Moved `get_dtype_from_self` into ReduceOpsUtils.h

Fixes #86809

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93845
Approved by: https://github.com/malfet
2023-02-09 10:30:55 +00:00
Kulin Seth
02ca2253cc [MPS] Fixes for Binary ops with casting issues from FP to uint8 (#94382)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94382
Approved by: https://github.com/razarmehr
2023-02-09 09:44:02 +00:00
Denis Vieriu
5b8e485a34 [MPS] Add 2d grid sampler (#94273)
Add support for MPS grid sampler
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94273
Approved by: https://github.com/razarmehr
2023-02-09 02:25:46 +00:00
Ramin Azarmehr
6c80d0a5a5 [MPS] Fix correctness issues with Pool2D ops (#94348)
- Fix wrong results in AvgPool2D when `count_include_pad=True`
- Fix issues with adaptive average and max pool2d
- Remove the redundant blocking copies from `AdaptiveMaxPool2d`
- Add `divisor` to cached string key to avoid conflicts
- Add test case when both `ceil_mode` and `count_include_pad` are True (previously failed).
- Clean up redundant code
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94348
Approved by: https://github.com/kulinseth
2023-02-09 02:06:40 +00:00
Elias Ellison
021d267694 update aten op overload to not use from to avoid compile errors (#89797)
Fix for https://github.com/pytorch/pytorch/issues/93591 by changing `random_.from` to `random_.from_int`.

The previous signature would fail when printed in an fx graph, because `from` is a reserved Python keyword. This change affects serialization, but I have added an adapter.
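Why `from` is problematic when the graph is printed as Python source:

```python
import keyword

# Generated code like `aten.random_.from(...)` fails to parse when
# round-tripped, because `from` cannot appear as an attribute name.
print(keyword.iskeyword("from"))  # True
```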

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89797
Approved by: https://github.com/tugsbayasgalan
2023-02-08 22:04:59 +00:00