Fixes #141563
In NumPy, an ellipsis always acts as a separator between advanced indices, even when it matches zero dimensions. In PyTorch, an empty ellipsis does not cause a separation. This leads to differing behavior between NumPy and PyTorch in this edge case.
This difference in behavior leads to a bug when using torch.compile:
```python
>>> import numpy as np
>>> import torch
>>> f = lambda x: x[:,(0,1),...,(0,1)].shape
>>> a = np.ones((3, 4, 5))
>>> f(a)
(2, 3)
>>> torch.compile(f)(a)
(3, 2)
```
Here the ellipsis matches zero dimensions, yet NumPy still treats the two advanced indices as separated and moves the broadcast dimension to the front, while PyTorch keeps it in place. As in #157676, this PR doesn't change PyTorch's behavior; it fixes the translation layer, ensuring torch._numpy compatibility with NumPy. I am marking this PR as fixing #141563, even though PyTorch behavior isn't modified.
Note that there are still other bugs in PyTorch's advanced indexing that need to be fixed (mainly regarding proper accounting of dimensions when multidimensional boolean masks are present), but those need to be fixed at the ATen operator level. Examples:
- #71673
- #107699
- #158125
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158297
Approved by: https://github.com/soumith
Fixes #157720
### What's in this PR?
This PR improves the error handling in `torch.compile` for `ndarray.astype('O')` (or `object`). It now explicitly raises a `torch._dynamo.exc.Unsupported` exception with a clear explanation, instead of failing with a less intuitive error during fake tensor propagation.
This is achieved by adding a check within `NumpyNdarrayVariable.call_method` for this specific `astype` pattern.
A new test, `test_ndarray_astype_object_graph_break`, is also added to `test/test_numpy_interop.py` to verify this new behavior.
### Background
Previously, attempting to `torch.compile` a function containing `ndarray.astype('O')` would result in a `TorchRuntimeError` wrapping a `TypeError: data type 'O' not understood`. This error message, originating deep within the tensor mechanism, was not very user-friendly and didn't clearly state *why* it was unsupported.
This change makes the failure more explicit and provides a better user experience by giving a direct, actionable error message.
**Old Behavior (Error Traceback):**
```
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: ... got TypeError("data type 'O' not understood")
```
**New Behavior (Error Message):**
```
torch._dynamo.exc.Unsupported: ndarray.astype(object)
Explanation: ndarray.astype('O') or ndarray.astype(object) is not supported by torch.compile, as there is no equivalent to object type in torch.
```
### Testing
A new test has been added to `test_numpy_interop.py` which decorates a function containing `ndarray.astype("O")` with `torch.compile`. The test asserts that a `torch._dynamo.exc.Unsupported` exception is raised, confirming the new error handling works as expected.
The test can be run with:
`pytest test/test_numpy_interop.py -k test_ndarray_astype_object_graph_break`
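For illustration, here is a minimal sketch of the pattern being tested (not the exact test code from this PR; `fullgraph=True` is assumed so the `Unsupported` error surfaces instead of triggering a graph break):
```python
import numpy as np
import torch
import torch._dynamo.exc

def fn(x):
    # object dtype has no torch equivalent, so Dynamo cannot trace this
    return x.astype("O")

compiled = torch.compile(fn, fullgraph=True)
try:
    compiled(np.ones(3))
except torch._dynamo.exc.Unsupported as e:
    print("raised as expected:", e)
```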
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157810
Approved by: https://github.com/jansel
Written with Claude Code.
Fixes https://github.com/pytorch/pytorch/issues/157569
Fixes https://github.com/pytorch/pytorch/issues/158134
NumPy and PyTorch handle advanced indexing differently when advanced indices are separated by slices (e.g., `arr[:, [0], :, 0]`). PyTorch uses "outer" indexing, placing result dimensions in their original positions, while NumPy uses "vectorized" indexing, moving the advanced-index dimensions to the front.
This adds `_numpy_style_advanced_indexing()` to detect separated advanced indices and transpose results to match NumPy's dimension ordering, ensuring `torch._numpy` maintains compatibility with NumPy's indexing behavior.
Fixes cases like the following (demonstrated in the sketch after this list):
- `arr[:, [0], :, 0]` now returns shape `(1, 5, 7)` instead of `(5, 1, 7)`
- `arr[:, [0, 1], :, 0]` now returns shape `(2, 5, 7)` instead of `(5, 2, 7)`
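For concreteness, a minimal NumPy reproduction of these shapes (the input shape `(5, 4, 7, 3)` is an assumption of this sketch; the PR only lists the result shapes):
```python
import numpy as np

arr = np.ones((5, 4, 7, 3))

# The advanced indices [0] and 0 are separated by a slice, so NumPy
# broadcasts them together and moves the resulting dimension to the front.
print(arr[:, [0], :, 0].shape)     # (1, 5, 7)
print(arr[:, [0, 1], :, 0].shape)  # (2, 5, 7)
```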
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157676
Approved by: https://github.com/manuelcandales
Co-authored-by: Claude <noreply@anthropic.com>
Dynamo was aggressively specializing on lazy VTs over `set_name_hint` in `STORE_FAST`, etc., and `isinstance` in `LOAD_FAST_CHECK`. This caused regional `torch.compile` for optimizing ComfyUI GGUF + LoRA to either (1) exceed the recompilation limit of 8, which results in suboptimal performance, or (2) even with the recompilation limit increased, incur unnecessarily high compilation time (180s vs. 20s for Flux).
This patch fixes the recompilation issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156891
Approved by: https://github.com/williamwen42, https://github.com/mlazos
Raise `NotImplementedError` when an op is unimplemented for a specific dtype; this makes more sense than a `RuntimeError`.
Example:
```python
>>> import torch
>>> torch.nn.Hardshrink()(torch.randint(0, 5, (10,)))
NotImplementedError: "hardshrink_cpu" not implemented for 'Long'
```
release notes bc-breaking: After this release, a `NotImplementedError` exception will be raised when an ATen operation is called on a combination of input tensor dtypes it has not been implemented for.
Also marks a few more unary ops as unimplemented, to keep foreach-test error reporting consistent between CPU and CUDA.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155470
Approved by: https://github.com/albanD, https://github.com/Skylion007
This enables a check that a class which only inherits from immutable classes like `str`, `tuple`, and `NamedTuple` also defines `__slots__`, so instances don't allocate memory unnecessarily. It also ensures contributors think about how they define classes that subclass `NamedTuple` and `str`, of which we have many in our codebase.
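A minimal sketch of the pattern the check enforces (class names are illustrative):
```python
from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int

# Without __slots__ = (), this subclass would gain a per-instance
# __dict__, defeating the memory savings of the tuple-based base class.
class LabeledPoint(Point):
    __slots__ = ()

    @property
    def label(self) -> str:
        return f"({self.x}, {self.y})"
```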
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146276
Approved by: https://github.com/aorenste
Previously `TestGradient.test_second_order_accurate` was failing because of a small tolerance error (0.03..., which is just above the 0.03 tolerance).
Upon investigation, `np.random.random` caused some divergence between eager and compiled randomness: in compiled code we are not using `np.random`'s seed, but rather end up using `torch`'s. This in turn caused the numerical divergence and the aforementioned accuracy issue.
This patch fixes the failure by patching the test case with `use_numpy_random_stream=True`, which forces a graph break on `np.random.random()` and thereby falls back to eager, ensuring consistent NumPy randomness (see the sketch below).
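A minimal sketch of what the flag does (assuming the standard `torch._dynamo.config.patch` helper):
```python
import numpy as np
import torch

def fn():
    # With use_numpy_random_stream=True, Dynamo graph-breaks here and runs
    # np.random.random() in eager NumPy, so compiled and eager code draw
    # from the same NumPy RNG stream.
    return np.random.random(3) * 2

with torch._dynamo.config.patch(use_numpy_random_stream=True):
    out = torch.compile(fn)()
```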
Fixes #116746.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145293
Approved by: https://github.com/lezcano
Changes:
1. Bump `ruff` from 0.7.4 to 0.8.4
2. Change `%`-formatted strings to f-string
3. Change arguments with the `__` prefix to positional-only arguments with the `/` separator in function signatures (illustrated in the sketch after this list).
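A hypothetical before/after for item 3:
```python
# Before: the double-underscore prefix conventionally marks the
# arguments as positional-only.
def clamp(__value, __low, __high):
    return max(__low, min(__value, __high))

# After: PEP 570 positional-only parameters, enforced by the syntax.
def clamp(value, low, high, /):
    return max(low, min(value, high))
```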
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143753
Approved by: https://github.com/Skylion007
Related to #107302
`TestExport.test_exported_objects` in `test/torch_np/test_basic.py` is failing with NumPy 2.
The test is checking if all methods under `torch._numpy` exist in `numpy`.
However, some of them were removed in NumPy 2.
This PR fixes the issue by not checking the removed methods when running with NumPy 2 (see the sketch below).
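A minimal sketch of the skip logic (the names listed are illustrative; the actual test derives its list from `torch._numpy`'s exports):
```python
import numpy as np

# Illustrative subset of names that NumPy 2 removed.
removed_in_numpy_2 = {"alltrue", "sometrue", "product", "cumproduct", "msort"}

numpy_2 = int(np.__version__.split(".")[0]) >= 2

def should_check(name: str) -> bool:
    return not (numpy_2 and name in removed_in_numpy_2)
```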
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137814
Approved by: https://github.com/albanD
Continuation of https://github.com/pytorch/pytorch/pull/131909. This PR makes the numpy tests compatible with numpy>=2.0.0. Specifically, it deals with APIs that have been removed in numpy-2.0.
Changes in this PR:
1. Use `numpy.exceptions.ComplexWarning` if the `numpy.exceptions` namespace is present. In numpy-2.0, `numpy.ComplexWarning` has been removed in favor of `numpy.exceptions.ComplexWarning` (see [numpy-2.0 migration guide](https://numpy.org/devdocs/numpy_2_0_migration_guide.html#changes-to-namespaces)). Note that `numpy.exceptions` was introduced in numpy-1.25.0 and hence does not exist in numpy<=1.24.x (see the sketch after this list).
2. Do the same for `numpy.exceptions.VisibleDeprecationWarning`
3. Use `np.sort(..., axis=0)` over `np.msort()` (`np.msort()` was removed in numpy-2.0)
4. Use `np.pad()` over `np.lib.pad()` (`np.lib` removed in numpy-2.0)
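The compatibility shim for items 1 and 2 follows this pattern (a sketch; `numpy.exceptions` exists from numpy-1.25 onward):
```python
import numpy as np

try:
    # numpy >= 1.25: the warnings live in the numpy.exceptions namespace
    from numpy.exceptions import ComplexWarning, VisibleDeprecationWarning
except ImportError:
    # numpy <= 1.24: fall back to the old top-level aliases
    ComplexWarning = np.ComplexWarning
    VisibleDeprecationWarning = np.VisibleDeprecationWarning
```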
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136152
Approved by: https://github.com/atalman
Migrates usages of deprecated APIs in NumPy-2.0 per [numpy-2.0 migration guide](https://numpy.org/devdocs/numpy_2_0_migration_guide.html#numpy-2-0-migration-guide).
I did a grep on the old API usages (see list below), and these were only referenced in test files under `test/torch_np/numpy_tests/**/*.py`.
Specifically, it migrates the usages of the following APIs (a few are shown in context after the list):
1. `np.sctypes` → Access dtypes explicitly instead
2. `np.float_` → `np.float64`
3. `np.complex_` → `np.complex128`
4. `np.longcomplex` → `np.clongdouble`
5. `np.unicode_` → `np.str_`
6. `np.product` → `np.prod`
7. `np.cumproduct` → `np.cumprod`
8. `np.alltrue` → `np.all`
9. `np.sometrue` → `np.any`
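A few of these substitutions in context (a quick sketch):
```python
import numpy as np

x = np.array([1.0, 2.0, 3.0], dtype=np.float64)  # was np.float_

print(np.prod(x))     # was np.product
print(np.cumprod(x))  # was np.cumproduct
print(np.all(x > 0))  # was np.alltrue
print(np.any(x > 2))  # was np.sometrue
```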
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131909
Approved by: https://github.com/rgommers, https://github.com/Skylion007, https://github.com/atalman