Commit Graph

65 Commits

Author SHA1 Message Date
Evgeni Burovski
324b23f337 MAINT: torch/_numpy: remove stubs raising NIError (#108902)
Remove the remaining stubs. There is no point raising NotImplementedError now that a missing function triggers a graph break simply by being absent from the `torch._numpy` namespace.
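
A hedged sketch of the resulting behaviour, assuming `np.polyfit` is one of the functions absent from `torch._numpy`:

```
import torch
import numpy as np

@torch.compile(fullgraph=False)
def fit(x, y):
    # np.polyfit (assumed missing from torch._numpy) is simply not found
    # in the namespace, so dynamo graph-breaks and runs it in eager
    return np.polyfit(x, y, 1)

fit(np.arange(5.0), 2 * np.arange(5.0))
```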

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108902
Approved by: https://github.com/lezcano
2023-09-09 00:11:14 +00:00
Evgeni Burovski
1f20531939 fall back to eager on NotImplementedError (#107863)
Follow-up to https://github.com/pytorch/pytorch/pull/107710:

Help dynamo fall back to eager when compiling unimplemented numpy constructs:

- arrays of strings
- (arg){min, max} for complex types
- various arguments typed as NotImplemented (`np.ones(4, order="F")` etc.)
- numpy functions which torch._numpy does not implement

To test, run the following (we do not implement arrays of strings):

```
import torch
import numpy as np

@torch.compile(fullgraph=False)
def fn():
    return np.asarray(["L", "U"])
```

and observe that it compiles with `fullgraph=False` and fails with `fullgraph=True`.
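
For contrast, a sketch of the strict mode (a hypothetical variant of the snippet above, reusing its imports; the exact exception is dynamo's graph-break error):

```
@torch.compile(fullgraph=True)
def fn_strict():
    return np.asarray(["L", "U"])

# fn_strict()  # raises instead of falling back, since fullgraph=True
#              # forbids graph breaks
```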

Fixes https://github.com/pytorch/pytorch/issues/107970

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107863
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-09-07 21:22:20 +00:00
lezcano
2a6ef9b04d [dynamo] Avoid recompilation when the PyTorch function accepts scalars (#108162)
Before, it would create a 0D tensor from the input, which would incur
a guard and specialisation.

It's not clear whether the guard and specialisation are the right behaviour
when we create 0D tensors, but that's a story for another day.
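
A sketch of the pattern that previously recompiled (function and values are illustrative):

```
import torch
import numpy as np

@torch.compile
def scale(x, c):
    return np.multiply(x, c)  # c is a Python scalar

x = np.ones(3)
scale(x, 2.0)
scale(x, 3.0)  # before: c was wrapped in a 0D tensor whose value was
               # guarded on and specialised, forcing a recompile here
```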

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108162
Approved by: https://github.com/ev-br, https://github.com/peterbell10
2023-09-01 14:35:42 +00:00
Jirka Borovec
9178deedff removing some redundant str splits (#106089)
Drop some redundant string splits; no functional changes, just cleaning up the codebase.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106089
Approved by: https://github.com/albanD, https://github.com/malfet
2023-09-01 00:22:58 +00:00
Evgeni Burovski
01dfa7620d MAINT: np.unique works with f16 directly (#108228)
(follow up on gh-107768)

Remove an f16->f32 workaround from np.unique, since torch.unique and np.unique seem to just work with float16 tensors.
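
A quick sketch of what now works directly (a spot check, not an exhaustive test):

```
import torch._numpy as tnp

a = tnp.asarray([1.0, 2.0, 1.0], dtype=tnp.float16)
u = tnp.unique(a)              # no f16 -> f32 round-trip
assert u.dtype == tnp.float16  # result stays in f16
```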

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108228
Approved by: https://github.com/lezcano
2023-08-31 16:21:13 +00:00
Evgeni Burovski
55d6b80188 torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768)
Confine the workarounds for _RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'_ to CPU tensors; do computations on CUDA tensors in f16.

Fixes https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/170

We do not systematically test CUDA tensors in torch._numpy, so I only spot-checked locally that the affected functions work with `tensor.to("cuda")`.
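
A sketch of the underlying asymmetry in plain torch (whether the CPU half matmul raises depends on the PyTorch version):

```
import torch

t = torch.ones(4, 4, dtype=torch.float16)
# torch.mm(t, t)  # on CPU this raised:
#                 # RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
cpu_out = torch.mm(t.float(), t.float()).half()  # the CPU workaround
if torch.cuda.is_available():
    gpu_out = torch.mm(t.to("cuda"), t.to("cuda"))  # CUDA handles f16 directly
```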

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107768
Approved by: https://github.com/lezcano
2023-08-23 18:35:47 +00:00
lezcano
b5c90ba7e7 [dynamo] Fix ndarray.__pow__ (#107746)
As per title. Tests are in the next PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107746
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710, #107711
2023-08-23 13:55:36 +00:00
lezcano
fada0527fa Dispatch take_along_axis to gather (#107711)
Gather does the same thing, but it's much better supported in the
`torch.compile` stack.
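
A sketch of the equivalence being relied on (illustrative example; both calls sort each row):

```
import numpy as np
import torch

a = np.array([[10, 30, 20],
              [60, 40, 50]])
idx = np.argsort(a, axis=1)

out_np = np.take_along_axis(a, idx, axis=1)
out_pt = torch.gather(torch.as_tensor(a), 1, torch.as_tensor(idx).long())
assert (out_pt.numpy() == out_np).all()
```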

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107711
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710
2023-08-23 01:21:23 +00:00
lezcano
62113a2361 [dynamo] np.sort(complex) is not implemented (#107710)
This issue was discovered once we were able to trace without breaking
in https://github.com/pytorch/pytorch/pull/107689. Same for the next
one.
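
For context, a minimal reproduction in plain torch (the exact error message varies across versions):

```
import torch

t = torch.tensor([1 + 2j, 0 + 1j])
# torch.sort(t)  # raises: sort is not implemented for complex dtypes,
#                # so torch._numpy cannot back np.sort for complex input
```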

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107710
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688
2023-08-23 01:21:23 +00:00
lezcano
2fc828312c Support negative indices in ndarray.__getitem__ (#107688)
In this case, we copy, but this is part of the set of divergences
described in https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/73.

This does not work with dynamic shapes, but it's not clear to me what
the best fix would be.
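
A minimal, hedged sketch of the new behaviour (illustrative only; the copying caveat above is what diverges from NumPy):

```
import torch._numpy as tnp

a = tnp.arange(5)
print(a[-1])  # 4 -- negative indices now resolve in __getitem__
```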

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107688
Approved by: https://github.com/ezyang
ghstack dependencies: #107687
2023-08-23 01:21:23 +00:00
Aaron Gokaslan
660e8060ad [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I am enabling it so that it stays that way. :)
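
For reference, a sketch of the pattern RUF017 flags, next to a linear-time alternative:

```
import itertools

lists = [[1], [2, 3], [4]]

flat = sum(lists, [])  # RUF017: quadratic -- each + builds a fresh list

flat = list(itertools.chain.from_iterable(lists))  # linear
```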

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
PyTorch MergeBot
d59a6864fb Revert "[BE]: Update ruff to 0.285 (#107519)"
This reverts commit 88ab3e4322.

Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please help them get unblocked? It seems like one of the strings was probably accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480))
2023-08-22 19:53:32 +00:00
Evgeni Burovski
da67b414d9 torch._numpy: remove noops and half-implemented nan-functions (#107596)
As discussed in the review of https://github.com/pytorch/pytorch/pull/106211, remove several noops (https://github.com/pytorch/pytorch/pull/106211#pullrequestreview-1559806543 and https://github.com/pytorch/pytorch/pull/106211#pullrequestreview-1559809287).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107596
Approved by: https://github.com/lezcano
2023-08-21 21:17:55 +00:00
Aaron Gokaslan
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I am enabling it so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
The first commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of it into core.

In the next commits, I do a number of things, in this order:
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge, so I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was joint work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient) because this is a collaboration, and ghstack doesn't allow for shared contributions.
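
A minimal sketch of what this enables (illustrative function; the NumPy calls are traced through `torch._numpy` rather than executed by NumPy):

```
import torch
import numpy as np

@torch.compile
def fn(x):
    # traced into torch ops, so it can fuse with surrounding graph code
    return np.sin(x) ** 2 + np.cos(x) ** 2

fn(np.linspace(0.0, 2.0 * np.pi, 8))
```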

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00