Commit Graph

15 Commits

Evgeni Burovski
3603f646eb BUG: fix torch._numpy.arange(5, dtype="float32") (#110005)
Make `np.arange` respect an explicitly provided dtype.

Also remove duplicated tests:
- torch_np/test_function_base.py::TestArange is a dupe of
- torch_np/numpy_tests/core/test_multiarray.py::TestArange

Fixes #109975
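The intended semantics, sketched with plain NumPy (which `torch._numpy` mirrors): an explicit `dtype` must override the default type inference from the start/stop/step arguments.

```python
import numpy as np

# An explicit dtype overrides arange's default inference: integer
# arguments would otherwise produce an integer array.
a = np.arange(5, dtype="float32")
assert a.dtype == np.float32
assert a.tolist() == [0.0, 1.0, 2.0, 3.0, 4.0]
```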

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110005
Approved by: https://github.com/lezcano
2023-09-28 18:21:18 +00:00
Guilherme Leobas
d046376c4f Dispatch numpy.take_along_axis to torch.take_along_dim (#108880)
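For reference, the NumPy behaviour being forwarded, as a minimal sketch in plain NumPy (`torch.take_along_dim` implements the same indexing, with `dim` in place of `axis`):

```python
import numpy as np

x = np.array([[10, 30, 20]])
idx = np.argsort(x, axis=1)  # [[0, 2, 1]]
# take_along_axis gathers along the given axis using an index array of
# the same rank -- here it yields the row-sorted values.
out = np.take_along_axis(x, idx, axis=1)
assert out.tolist() == [[10, 20, 30]]
```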
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108880
Approved by: https://github.com/lezcano
ghstack dependencies: #108879
2023-09-13 23:13:09 +00:00
Evgeni Burovski
cd46b5db76 make sure all torch._numpy tests run on CI (#108762)
- Add `if __name__ == "__main__": run_tests()` stanzas to test files in the `torch_np` folder so that these tests run on CI
- Skip / xfail tests smoked out by this change
- Remove a stray Python file that should not have been added to the tests in the first place
- Fix einsum when opt_einsum is present
- Add skips for older NumPy versions
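The stanza in question, sketched with the standard-library `unittest` (PyTorch's test files call `run_tests()` from `torch.testing._internal.common_utils` rather than `unittest.main()`; the test class here is a made-up placeholder):

```python
import unittest

class TestExample(unittest.TestCase):
    def test_smoke(self):
        self.assertTrue(True)

# Without a stanza like this, running the file directly executes nothing,
# which is how these tests were silently skipped on CI. PyTorch's real
# stanza is run_tests(); argv/exit are pinned here only so the sketch
# can run inside another harness.
if __name__ == "__main__":
    unittest.main(argv=["example"], exit=False)
```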

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108762
Approved by: https://github.com/lezcano
2023-09-12 17:12:21 +00:00
PyTorch MergeBot
090fe45e1c Revert "make sure all torch._numpy tests run on CI (#108762)"
This reverts commit 7abeb92796.

Reverted https://github.com/pytorch/pytorch/pull/108762 on behalf of https://github.com/clee2000 due to sorry but I think the asan test_scalarmath failure is real 7abeb92796 https://github.com/pytorch/pytorch/actions/runs/6132913963/job/16645381921 ([comment](https://github.com/pytorch/pytorch/pull/108762#issuecomment-1714214523))
2023-09-11 16:29:20 +00:00
Evgeni Burovski
7abeb92796 make sure all torch._numpy tests run on CI (#108762)
- Add `if __name__ == "__main__": run_tests()` stanzas to test files in the `torch_np` folder so that these tests run on CI
- Skip / xfail tests smoked out by this change
- Remove a stray Python file that should not have been added to the tests in the first place
- Fix einsum when opt_einsum is present
- Add skips for older NumPy versions

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108762
Approved by: https://github.com/lezcano
2023-09-09 20:05:27 +00:00
Evgeni Burovski
324b23f337 MAINT: torch/_numpy: remove stubs raising NIError (#108902)
Remove the remaining stubs. There is no point in raising NotImplementedError now that a missing function triggers a graph break simply by being absent from the `torch._numpy` namespace.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108902
Approved by: https://github.com/lezcano
2023-09-09 00:11:14 +00:00
Evgeni Burovski
1f20531939 fall back to eager on NotImplementedError (#107863)
Follow-up to https://github.com/pytorch/pytorch/pull/107710:

Help dynamo fall back to eager when compiling unimplemented NumPy constructs:

- arrays of strings
- (arg){min, max} for complex types
- various arguments typed as NotImplemented (`np.ones(4, order="F")`, etc.)
- numpy functions which torch._numpy does not implement

To test, run the following (arrays of strings are not implemented):

```
import torch
import numpy as np

@torch.compile(fullgraph=False)
def fn():
    return np.asarray(["L", "U"])
```

and observe that it compiles with `fullgraph=False` but fails with `fullgraph=True`.

Fixes https://github.com/pytorch/pytorch/issues/107970

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107863
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-09-07 21:22:20 +00:00
lezcano
2a6ef9b04d [dynamo] Avoid recompilation when the PyTorch function accepts scalars (#108162)
Before, it would create a 0D tensor from the input, which would incur a
guard and specialisation.

It's not clear whether the guard and specialisation are the right behaviour
when we create 0D tensors, but that's a story for another day.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108162
Approved by: https://github.com/ev-br, https://github.com/peterbell10
2023-09-01 14:35:42 +00:00
Jirka Borovec
9178deedff removing some redundant str splits (#106089)
Drop some redundant string splits; no functional changes, just cleaning up the codebase.
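A hypothetical before/after of the kind of cleanup meant here (the names are made up for illustration):

```python
# Before: a constant list built by splitting a string literal at import time.
MODES_BEFORE = "constant reflect replicate".split()

# After: the explicit list the split produces -- the same value, one
# fewer call at import time, and greppable element by element.
MODES_AFTER = ["constant", "reflect", "replicate"]

assert MODES_BEFORE == MODES_AFTER
```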

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106089
Approved by: https://github.com/albanD, https://github.com/malfet
2023-09-01 00:22:58 +00:00
Evgeni Burovski
01dfa7620d MAINT: np.unique works with f16 directly (#108228)
(follow up on gh-107768)

Remove an f16->f32 workaround from np.unique, since torch.unique and np.unique both appear to work with float16 tensors directly.
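The behaviour being relied on, checked directly against NumPy (per the commit message, `torch.unique` on a float16 tensor behaves the same way):

```python
import numpy as np

a = np.array([1.0, 2.0, 1.0, 0.5], dtype=np.float16)
u = np.unique(a)
# No f16 -> f32 round-trip needed: the sorted, deduplicated result
# stays in float16.
assert u.dtype == np.float16
assert u.tolist() == [0.5, 1.0, 2.0]
```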

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108228
Approved by: https://github.com/lezcano
2023-08-31 16:21:13 +00:00
Evgeni Burovski
55d6b80188 torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768)
Confine the workarounds for _RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'_ to CPU tensors; keep computations on CUDA tensors in f16.

Fixes https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/170

We do not really systematically test CUDA tensors in torch._numpy, so I only spot-checked locally that the affected functions work with `tensor.to("cuda")`.
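A minimal sketch of the dispatch pattern described (the helper name is hypothetical; the motivating error is CPU lacking a Half matmul kernel):

```python
import torch

def _matmul_maybe_upcast(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # CUDA has f16 matmul kernels, so stay in f16 there; on CPU, where
    # addmm historically is not implemented for Half, round-trip via f32.
    if a.is_cuda:
        return a @ b
    return (a.float() @ b.float()).to(a.dtype)

x = torch.ones(2, 2, dtype=torch.float16)
y = _matmul_maybe_upcast(x, x)
assert y.dtype == torch.float16
assert y.tolist() == [[2.0, 2.0], [2.0, 2.0]]
```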

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107768
Approved by: https://github.com/lezcano
2023-08-23 18:35:47 +00:00
lezcano
fada0527fa Dispatch take_along_axis to gather (#107711)
`gather` does the same thing, but it is much better supported in the
`torch.compile` stack.
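The equivalence being exploited, as a sketch (the indices here are arbitrary):

```python
import torch

x = torch.tensor([[10, 30, 20]])
idx = torch.tensor([[2, 0, 1]])
# take_along_dim and gather perform the same indexing along a dimension;
# gather is simply the better-supported primitive under torch.compile.
a = torch.take_along_dim(x, idx, dim=1)
b = torch.gather(x, 1, idx)
assert torch.equal(a, b)
assert a.tolist() == [[20, 10, 30]]
```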

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107711
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710
2023-08-23 01:21:23 +00:00
lezcano
62113a2361 [dynamo] np.sort(complex) is not implemented (#107710)
This issue was discovered once we were able to trace without breaking
in https://github.com/pytorch/pytorch/pull/107689. Same for the next
one.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107710
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688
2023-08-23 01:21:23 +00:00
Evgeni Burovski
da67b414d9 torch._numpy: remove noops and half-implemented nan-functions (#107596)
As discussed in the review of https://github.com/pytorch/pytorch/pull/106211, remove several noops (https://github.com/pytorch/pytorch/pull/106211#pullrequestreview-1559806543 and https://github.com/pytorch/pytorch/pull/106211#pullrequestreview-1559809287).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107596
Approved by: https://github.com/lezcano
2023-08-21 21:17:55 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of it into core.

In the next commits, I do a number of things in this order:
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend over backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (I think they were not testing anything) and are not passing now

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient), as this is a collaboration and ghstack doesn't allow for shared contributions.
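What the feature enables, sketched end to end (`backend="eager"` is used here only to keep the illustration light; it is not required by the PR):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(x):
    # Plain NumPy code: under torch.compile it is traced through
    # torch._numpy rather than executed by NumPy itself.
    return np.sin(x) ** 2 + np.cos(x) ** 2

out = fn(np.linspace(0.0, 1.0, 5))
assert np.allclose(out, 1.0)
```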

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00