Commit Graph

31 Commits

Evgeni Burovski
dc3d0caab3 BUG: fix np.ndarray.resize under dynamo (#113931)
Make sure ndarray.resize actually works in-place, so that dynamo does the right thing tracking the result.
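
A minimal sketch of the behavior being fixed (assuming `resize` is reached through `torch.compile`, as in the linked issue):

```
import numpy as np
import torch

@torch.compile
def fn():
    a = np.arange(3.0)
    a.resize(5)        # must mutate `a` in place, zero-filling the new elements
    return a.shape

print(fn())            # expected: (5,)
```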

Fixes #113539

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113931
Approved by: https://github.com/lezcano
2023-11-17 18:12:17 +00:00
Evgeni Burovski
237cbd5be6 BUG: trace frames with numpy scalar -> ndarray functions (#112959)
Fixes #112951

Make dynamo detect that `np.arange(3)` returns a FakeTensor, so the frame needs to be traced.
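
For example (a sketch; the point is that a frame with no array inputs still gets traced because `np.arange` produces a traced array):

```
import numpy as np
import torch

@torch.compile
def fn():
    # no tensor or ndarray inputs: dynamo should still trace this frame,
    # because np.arange(3) produces a traced (fake) array during compilation
    return np.arange(3) * 2

print(fn())   # expected: array([0, 2, 4])
```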

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112959
Approved by: https://github.com/lezcano
2023-11-17 03:00:24 +00:00
lezcano
6ce5de5275 Avoid calling as_tensor twice (#112866)
Sometimes doing so may copy, and that's not good. We avoid that by setting global flags.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112866
Approved by: https://github.com/kit1980, https://github.com/ev-br
2023-11-07 16:10:59 +00:00
Evgeni Burovski
6a3922d523 BUG: compile np.array(list_of_arrays) (#112711)
Add a shortcut for a sequence of arrays only. This removes a graph break on a common pattern of
`np.array([np.cos(theta), np.sin(theta)])` and its ilk.

This PR is a simplified alternative to https://github.com/pytorch/pytorch/pull/112521 --- it still breaks on mixing arrays and scalars or array_likes (e.g. `np.array([[1, 2], np.array([3, 4])])`) and instead adds a simple shortcut.
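
A sketch of the pattern that now compiles without the break:

```
import numpy as np
import torch

@torch.compile
def unit_vector(theta):
    # a list containing only arrays takes the new shortcut
    return np.array([np.cos(theta), np.sin(theta)])

print(unit_vector(np.linspace(0.0, 1.0, 4)))   # shape (2, 4)
```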

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112711
Approved by: https://github.com/lezcano
2023-11-02 20:18:16 +00:00
PyTorch MergeBot
7e654c8f88 Revert "WIP / TST: allow testing torch._numpy under Dynamo (#110401)"
This reverts commit 5ed4a423de.

Reverted https://github.com/pytorch/pytorch/pull/110401 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing dynamo job in trunk 5ed4a423de ([comment](https://github.com/pytorch/pytorch/pull/110401#issuecomment-1779811943))
2023-10-25 18:21:16 +00:00
Evgeni Burovski
5ed4a423de WIP / TST: allow testing torch._numpy under Dynamo (#110401)
Use conditional imports: when running under dynamo, import the original NumPy, not torch._numpy. The former is what we want to trace, not our reimplementation.

With this, the test suite passes with and without `PYTORCH_TEST_WITH_DYNAMO=1` (modulo a couple of test modules which are not meant to be compiled, e.g. `test_nep50_examples`). There are two new decorators, `x{fail,pass}ifTorchDynamo`; the `xpass` in most cases indicates a graph break and a fallback to eager for things we do not implement.
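
Roughly, the conditional import looks like this (a sketch; the exact flag the test suite checks may differ):

```
import os

if os.environ.get("PYTORCH_TEST_WITH_DYNAMO") == "1":
    # running under dynamo: trace the real NumPy, which is what users write
    import numpy as np
else:
    # plain eager run: exercise the torch._numpy reimplementation directly
    import torch._numpy as np
```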

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110401
Approved by: https://github.com/lezcano
2023-10-25 16:02:16 +00:00
Aaron Gokaslan
cb856b08b2 [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
Did some easy fixes from enabling TRY200. Most of these seem like oversights rather than intentional. The proper way to silence an intentional omission is `raise ... from None`, which signals that you considered whether the exception should carry its cause and decided against it.
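
For reference, a minimal example of the pattern TRY200 enforces (illustrative, not taken from the PR):

```
import json

def load_config(path):
    try:
        with open(path) as f:
            return json.load(f)
    except OSError as exc:
        # attach the original error as the cause instead of silently dropping it
        raise RuntimeError(f"could not load config from {path!r}") from exc
        # deliberately suppressing the cause would instead use `... from None`
```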

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
2023-10-19 21:56:36 +00:00
Evgeni Burovski
48989bc820 trace frames with np.ndarray (#110512)
Fixes #109604

Resubmit gh-109715 + several skips and small fixes to make tests pass.

The main fix here is by @ysiraichi: previously, dynamo did not resume tracing numpy ndarrays after a graph break.
While at it, fix several small issues Yukio's fix uncovered:

- graph break gracefully on numpy dtypes which do not map to torch.dtypes (uint16 etc)
- recognize array scalars in dynamo, treat them as 0D ndarrays
- make sure that iterating over a torch._numpy ndarray yields arrays, not bare tensors (a sketch follows this list)
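
A sketch of the iteration fix from the last bullet:

```
import numpy as np
import torch

@torch.compile
def fro_norm(x):
    acc = 0.0
    for row in x:               # each `row` should behave as an ndarray, not a bare tensor
        acc = acc + (row * row).sum()
    return np.sqrt(acc)

print(fro_norm(np.arange(6.0).reshape(2, 3)))
```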

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110512
Approved by: https://github.com/lezcano
2023-10-15 00:56:10 +00:00
Kazuaki Ishizaki
19ce68a45c Fix typo under torch/_numpy directory (#110782)
This PR fixes typos in comments in files under the torch/_numpy directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110782
Approved by: https://github.com/Skylion007
2023-10-07 17:42:35 +00:00
Evgeni Burovski
3603f646eb BUG: fix torch._numpy.arange(5, dtype="float32") (#110005)
Make `np.arange` respect an explicitly provided dtype.
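
A quick check of the fixed behavior (a sketch using the private `torch._numpy` namespace):

```
import torch._numpy as tnp

a = tnp.arange(5, dtype="float32")
print(a.dtype)   # should be float32, matching the explicitly requested dtype
```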

Also remove duplicated tests: `torch_np/test_function_base.py::TestArange` is a dupe of `torch_np/numpy_tests/core/test_multiarray.py::TestArange`.

Fixes #109975

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110005
Approved by: https://github.com/lezcano
2023-09-28 18:21:18 +00:00
aashishthakur10
ee8983da70 109605 dynamo scalar ndarray pow gen (#109953)
Fixes #109605

Generated code before:
```
def call(args):
    arg0_1, = args
    args.clear()
    assert_size_stride(arg0_1, (8, ), (1, ))
    buf0 = empty_strided((), (), device='cpu', dtype=torch.int64)
    cpp_fused_lift_fresh_0(c_void_p(buf0.data_ptr()))
    # Source Nodes: [wrapped_pow], Original ATen: [aten.lift_fresh, aten.pow]
    buf1 = aten.pow(arg0_1, reinterpret_tensor(buf0, (8, ), (0, ), 0))
    del arg0_1
    del buf0
    buf2 = buf1
    assert_size_stride(buf2, (8, ), (1, ))
    del buf1
    return (buf2, )
```

Generated code now:
```
def call(args):
    arg0_1, = args
    args.clear()
    assert_size_stride(arg0_1, (8, ), (1, ))
    buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64)
    cpp_fused_pow_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()))
    del arg0_1
    return (buf0, )
```
@lezcano What would be a good way to add a test for this?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109953
Approved by: https://github.com/lezcano
2023-09-28 13:11:06 +00:00
Guilherme Leobas
d046376c4f Dispatch numpy.take_along_axis to torch.take_along_dim (#108880)
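For context, a small sketch of the correspondence between the two APIs:

```
import numpy as np
import torch

a = np.array([[10.0, 30.0, 20.0], [60.0, 40.0, 50.0]])
idx = np.argsort(a, axis=1)

ref = np.take_along_axis(a, idx, axis=1)
out = torch.take_along_dim(torch.from_numpy(a), torch.from_numpy(idx), dim=1)

print(np.allclose(ref, out.numpy()))   # True
```
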
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108880
Approved by: https://github.com/lezcano
ghstack dependencies: #108879
2023-09-13 23:13:09 +00:00
Evgeni Burovski
3ac2396e00 Fix torch._numpy.random (#108944)
Fix several issues with `torch._numpy.random` functions in eager mode:

1. actually return scalars when `size is None`
2. fix dispatch with USE_NUMPY_STREAM
3. make tnp.random functions composable: make numpy functions receive numpy arguments, not `tnp.ndarray`s
4. fix random.shuffle for e.g. lists

The main reason for these gymnastics is that `np.random` functions return an ndarray or a Python scalar depending on the `size` argument. We decided a while ago to replicate this behavior in `tnp.random`, but not elsewhere, where we always return 0D arrays instead.
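
A sketch of the `size`-dependent return type this preserves (assuming `tnp.random.uniform` mirrors the NumPy signature):

```
import torch._numpy as tnp

s = tnp.random.uniform(0.0, 1.0)           # size is None: returns a Python scalar
a = tnp.random.uniform(0.0, 1.0, size=3)   # explicit size: returns a tnp.ndarray

print(type(s), type(a))
```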

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108944
Approved by: https://github.com/lezcano
2023-09-13 05:08:19 +00:00
Evgeni Burovski
cd46b5db76 make sure all torch._numpy tests run on CI (#108762)
- Add `if __name__ == "__main__": run_tests()` stanzas to test files in the `torch_np` folder so that these tests run on CI (a sketch of the stanza follows this list)
- Skip / xfail things smoked out by this change
- remove a stray python file which should not have been added to tests in the first place.
- fix einsum if opt_einsum is present
- add skips for older numpies
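
A minimal sketch of the stanza referenced in the first bullet (file and test names are illustrative):

```
from torch.testing._internal.common_utils import TestCase, run_tests

class TestExample(TestCase):
    def test_trivial(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    run_tests()
```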

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108762
Approved by: https://github.com/lezcano
2023-09-12 17:12:21 +00:00
PyTorch MergeBot
090fe45e1c Revert "make sure all torch._numpy tests run on CI (#108762)"
This reverts commit 7abeb92796.

Reverted https://github.com/pytorch/pytorch/pull/108762 on behalf of https://github.com/clee2000 due to sorry but I think the asan test_scalarmath failure is real 7abeb92796 https://github.com/pytorch/pytorch/actions/runs/6132913963/job/16645381921 ([comment](https://github.com/pytorch/pytorch/pull/108762#issuecomment-1714214523))
2023-09-11 16:29:20 +00:00
Evgeni Burovski
7abeb92796 make sure all torch._numpy tests run on CI (#108762)
- Add `if __name__ == "__main__": run_tests()` stanzas to test files in `torch_np` folder so that these tests run on CI
- Skip / xfail things smoked out by this change
- remove a stray python file which should not have been added to tests in the first place.
- fix einsum if opt_einsum is present
- add skips for older numpies

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108762
Approved by: https://github.com/lezcano
2023-09-09 20:05:27 +00:00
Evgeni Burovski
324b23f337 MAINT: torch/_numpy: remove stubs raising NIError (#108902)
Remove the remaining stubs. There is no point raising NotImplementedError, now that a missing function triggers a graph break just by being absent from the `torch._numpy` namespace.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108902
Approved by: https://github.com/lezcano
2023-09-09 00:11:14 +00:00
Evgeni Burovski
1f20531939 fall back to eager on NotImplementedError (#107863)
Follow-up to https://github.com/pytorch/pytorch/pull/107710:

Help dynamo fall back to eager when compiling unimplemented numpy constructs:

- arrays of strings
- (arg){min, max} for complex types
- various arguments typed as NotImplemented (`np.ones(4, order="F")` etc)
- numpy functions which torch._numpy does not implement

To test, run (we do not implement arrays of strings):

```
import torch
import numpy as np

@torch.compile(fullgraph=False)
def fn():
    return np.asarray(["L", "U"])

fn()   # falls back to eager on the unsupported string array
```

and observe that it compiles with fullgraph=False and fails with fullgraph=True.

Fixes https://github.com/pytorch/pytorch/issues/107970

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107863
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-09-07 21:22:20 +00:00
lezcano
2a6ef9b04d [dynamo] Avoid recompilation when the PyTorch function accepts scalars (#108162)
Before, it would create a 0D tensor with the input, which would incur
a guard and specialisation.

It's not clear whether the guard and specialisation are the right behaviour
when we create 0D tensors, but that's a story for another day.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108162
Approved by: https://github.com/ev-br, https://github.com/peterbell10
2023-09-01 14:35:42 +00:00
Jirka Borovec
9178deedff removing some redundant str splits (#106089)
Drop some redundant string splits; no functional changes, just cleaning up the codebase.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106089
Approved by: https://github.com/albanD, https://github.com/malfet
2023-09-01 00:22:58 +00:00
Evgeni Burovski
01dfa7620d MAINT: np.unique works with f16 directly (#108228)
(follow up on gh-107768)

Remove an f16->f32 workaround from np.unique, since torch.unique and np.unique seem to just work with float16 tensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108228
Approved by: https://github.com/lezcano
2023-08-31 16:21:13 +00:00
Evgeni Burovski
55d6b80188 torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768)
Confine workarounds for _RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'_ to CPU tensors; do computations on CUDA tensors in f16.

Fixes https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/170

We do not really systematically test CUDA tensors in torch._numpy, so I only spot-checked locally that the affected functions work with `tensor.to("cuda")`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107768
Approved by: https://github.com/lezcano
2023-08-23 18:35:47 +00:00
lezcano
b5c90ba7e7 [dynamo] Fix ndarray.__pow__ (#107746)
As per title. Tests are in the next PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107746
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710, #107711
2023-08-23 13:55:36 +00:00
lezcano
fada0527fa Dispatch take_along_axis to gather (#107711)
`gather` does the same thing, but it is much better supported in the
`torch.compile` stack.
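
A sketch of why the dispatch is safe: with same-shaped indices the two are equivalent:

```
import torch

a = torch.arange(6.0).reshape(2, 3)
idx = torch.tensor([[2, 0, 1], [0, 0, 2]])

via_take = torch.take_along_dim(a, idx, dim=1)
via_gather = torch.gather(a, 1, idx)

print(torch.equal(via_take, via_gather))   # True
```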

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107711
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710
2023-08-23 01:21:23 +00:00
lezcano
62113a2361 [dynamo] np.sort(complex) is not implemented (#107710)
This issue was discovered once we were able to trace without breaking
in https://github.com/pytorch/pytorch/pull/107689. Same for the next
one.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107710
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688
2023-08-23 01:21:23 +00:00
lezcano
2fc828312c Support negative indices in ndarray.__getitem__ (#107688)
In this case, we copy, but this is part of the set of divergences
described in https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/73.

This does not work with dynamic shapes, but it's not clear to me what
would be the best fix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107688
Approved by: https://github.com/ezyang
ghstack dependencies: #107687
2023-08-23 01:21:23 +00:00
Aaron Gokaslan
660e8060ad [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I'm enabling it so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
PyTorch MergeBot
d59a6864fb Revert "[BE]: Update ruff to 0.285 (#107519)"
This reverts commit 88ab3e4322.

Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please help them get unblocked? It seems like one of the strings was prob accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480))
2023-08-22 19:53:32 +00:00
Evgeni Burovski
da67b414d9 torch._numpy: remove noops and half-implemented nan-functions (#107596)
As discussed in the review of https://github.com/pytorch/pytorch/pull/106211, remove several noops (https://github.com/pytorch/pytorch/pull/106211#pullrequestreview-1559806543 and https://github.com/pytorch/pytorch/pull/106211#pullrequestreview-1559809287).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107596
Approved by: https://github.com/lezcano
2023-08-21 21:17:55 +00:00
Aaron Gokaslan
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I'm enabling it so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
The first commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of it into core.
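
As a small illustration of what this enables (a sketch; any NumPy-only function can be compiled this way):

```
import numpy as np
import torch

@torch.compile
def kl_div(p, q):
    # plain NumPy code: under torch.compile it is traced through torch._numpy
    return np.sum(p * np.log(p / q))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])
print(kl_div(p, q))
```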

In the next commits, I do a number of things in this order:
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno and me. I did not create this PR via ghstack (which would have been convenient) because this is a collaboration, and ghstack doesn't allow for shared contributions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00