pytorch/test/functorch
Nikita Shulga a952956d05 Add isnan exit condition to special ops (#157464)
They might have been slow on CUDA 11.3, but that version of CUDA is long gone. The more fundamental underlying issue was the linear complexity of the recursive polynomial definitions for higher-order polynomials; see, for example, this loop from the implementation of the Chebyshev polynomial of the first kind (sketched in Python below)
7081b8233a/aten/src/ATen/native/Math.h (L2969-L2973)
which was tested by `test_compare_cpu` using the following values (as sample index 16)
7081b8233a/torch/testing/_internal/opinfo/core.py (L2079)
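For illustration, here is a minimal Python sketch of that three-term recurrence (the actual kernel is the C++ loop in Math.h linked above, not this code); the point is that the cost is O(n), one loop iteration per polynomial order, which is what makes orders around 1e6 slow:
```
# Sketch only, not the ATen implementation: Chebyshev polynomial of the
# first kind via the recurrence T_k(x) = 2*x*T_{k-1}(x) - T_{k-2}(x).
# Cost is O(n): one loop iteration per order.
def chebyshev_t(x: float, n: int) -> float:
    if n == 0:
        return 1.0
    if n == 1:
        return x
    p, q = 1.0, x  # T_0(x), T_1(x)
    for _ in range(2, n + 1):
        p, q = q, 2.0 * x * q - p
    return q
```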

Luckily, Chebyshev polynomials evaluated at arguments with absolute value greater than 1 overflow to infinity (and then NaN) pretty quickly, see below
```
python3 -c "import torch;print(torch.special.chebyshev_polynomial_v(torch.nextafter(torch.tensor(1.0), torch.tensor(2.0)), torch.tensor(1e6)))"
tensor(nan)
```
That is not the case for Laguerre polynomials, but it's probably fine to just limit it to 1e7.
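
To make the intent of the "isnan exit condition" concrete, here is a hedged Python sketch of the idea; the real change is in the C++ special-op kernels, and `chebyshev_t_early_exit` is a hypothetical helper, not a PyTorch API. Once the running value becomes NaN, every further iteration would only propagate it, so the loop can stop immediately:
```
import math

# Sketch only: same recurrence as above, but bail out as soon as the
# running value is NaN instead of spinning through the remaining
# iterations (which would all produce NaN anyway).
def chebyshev_t_early_exit(x: float, n: int) -> float:
    if n == 0:
        return 1.0
    if n == 1:
        return x
    p, q = 1.0, x  # T_0(x), T_1(x)
    for _ in range(2, n + 1):
        if math.isnan(q):
            return q  # early exit: result can only stay NaN from here on
        p, q = q, 2.0 * x * q - p
    return q
```
Per the example above, for |x| just above 1 and n = 1e6 the result is already NaN, so the early exit skips most of the loop for exactly the inputs that used to be slow.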

Before
```
$ PYTORCH_TEST_WITH_SLOW=1 python test_ops.py -k chebyshev_polynomial_
ssssssss..ssssss..ssssss..ssssssssssssssssssssss..ssssss/home/ubuntu/py3.10-nightly/lib/python3.10/site-packages/torch/backends/cuda/__init__.py:131: UserWarning: This API is going to be deprecated, please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:78.)
  return torch._C._get_cublas_allow_tf32()
....ssssssssssss..ssssss..ssssss............ssssssssssssssssssssssssssssssssssss..ssssssssssssss..ssssss..ssssssssssssssssssssssssssssss..ssssss....ssssssssssss..ssssss..ssssss............ssssssssssssssssssssssssssssssssssss..ssssss..ssssssssssssss..ssssss..ssssss..ssssssssssssss..ssssss..ssssss..ssssss..ssssss..ssssss..ssssss..ssssss..ssssss..ssssss..ssssss..ssssssssssssss
----------------------------------------------------------------------
Ran 432 tests in 8.575s

OK (skipped=344)
```
After
```
$ PYTORCH_TEST_WITH_SLOW=1 python test_ops.py -k chebyshev_polynomial_
ssssssss........................ssssssssssssssss......../home/ubuntu/pytorch/torch/backends/cuda/__init__.py:131: UserWarning: This API is going to be deprecated, please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at /home/ubuntu/pytorch/aten/src/ATen/Context.cpp:78.)
  return torch._C._get_cublas_allow_tf32()
........................................................................................xxxxxxxx................ssssssssssssssssssssssss........................................................................................................ssssssss........................ssssssss........................................................................................ssssssss
----------------------------------------------------------------------
Ran 432 tests in 45.580s

OK (skipped=72, expected failures=8)
```

Fixes https://github.com/pytorch/pytorch/issues/79528

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157464
Approved by: https://github.com/Skylion007, https://github.com/dcci
ghstack dependencies: #157488
2025-07-05 04:19:50 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| attn_ft.py | [BE][Easy][9/19] enforce style for empty lines in import segments in test/[e-h]*/ (#129760) | 2024-07-17 14:25:29 +00:00 |
| attn_positional.py | Enable UFMT on test/functorch (#123541) | 2024-04-15 06:21:52 +00:00 |
| common_utils.py | [BE][PYFMT] migrate PYFMT for test/[a-h]*/ to ruff format (#144555) | 2025-06-24 04:53:54 +00:00 |
| discover_coverage.py | Apply ruff fixes to tests (#146140) | 2025-02-04 05:41:01 +00:00 |
| functorch_additional_op_db.py | fix numpy compatibility for 2d small list indices (#154806) | 2025-06-04 01:58:52 +00:00 |
| test_ac_knapsack.py | [BE][PYFMT] migrate PYFMT for test/[a-h]*/ to ruff format (#144555) | 2025-06-24 04:53:54 +00:00 |
| test_ac_logging.py | PEP585: More UP006 fixes (#146392) | 2025-02-20 06:18:13 +00:00 |
| test_ac.py | Respect checkpointed boundaries when using knapsack formulation in the partitioner (#141684) | 2025-05-01 15:28:41 +00:00 |
| test_aotdispatch.py | Add AOTDispatcher config to set backward autocast behavior (#156356) | 2025-06-27 14:58:58 +00:00 |
| test_control_flow.py | Revert "[cond] support gen_schema for cond (#154193)" | 2025-06-26 17:10:00 +00:00 |
| test_dims.py | [functorch] clean up asserts in test_dims.py (#144276) | 2025-01-08 01:21:40 +00:00 |
| test_eager_transforms.py | [BE][PYFMT] migrate PYFMT for test/[a-h]*/ to ruff format (#144555) | 2025-06-24 04:53:54 +00:00 |
| test_logging.py | Enable UFMT on test/functorch (#123541) | 2024-04-15 06:21:52 +00:00 |
| test_memory_efficient_fusion.py | [BE][PYFMT] migrate PYFMT for test/[a-h]*/ to ruff format (#144555) | 2025-06-24 04:53:54 +00:00 |
| test_minifier.py | Add None return type to init -- tests (#132352) | 2024-08-01 15:44:51 +00:00 |
| test_ops.py | [BE][PYFMT] migrate PYFMT for test/[a-h]*/ to ruff format (#144555) | 2025-06-24 04:53:54 +00:00 |
| test_parsing.py | [BE][PYFMT] migrate PYFMT for test/[a-h]*/ to ruff format (#144555) | 2025-06-24 04:53:54 +00:00 |
| test_rearrange.py | [BE][PYFMT] migrate PYFMT for test/[a-h]*/ to ruff format (#144555) | 2025-06-24 04:53:54 +00:00 |
| test_vmap_registrations.py | Add batching rule for torch.matrix_exp (#155202) | 2025-06-18 17:35:35 +00:00 |
| test_vmap.py | Add isnan exit condition to special ops (#157464) | 2025-07-05 04:19:50 +00:00 |
| xfail_suggester.py | [BE][Easy][9/19] enforce style for empty lines in import segments in test/[e-h]*/ (#129760) | 2024-07-17 14:25:29 +00:00 |