Commit Graph

121 Commits

Author SHA1 Message Date
Aaron Gokaslan
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I am enabling the rule to keep it that way. :)
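For context, the kind of pattern RUF017 flags (an illustrative example, not something found in the codebase): summing a list of lists with `sum()` copies on every addition, which is quadratic, while `itertools.chain` is linear.

```python
import itertools

rows = [[1, 2], [3, 4], [5, 6]]

# Flagged by RUF017: sum() concatenates the lists pairwise, copying each time.
flattened_slow = sum(rows, [])

# A linear alternative.
flattened_fast = list(itertools.chain.from_iterable(rows))

assert flattened_slow == flattened_fast == [1, 2, 3, 4, 5, 6]
```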

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
Jane Xu
803d42e457 add lerp cpu support for half (#105607)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105607
Approved by: https://github.com/albanD
2023-07-21 20:29:05 +00:00
Justin Chu
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow-up to the pyupgrade series, converting more strings to f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`
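As an illustration of the kind of rewrite `flynt` performs (example strings are made up, not from this PR):

```python
name, count = "tensor", 3

# Before: old-style and .format() formatting.
msg_old = "converted %s in %d passes" % (name, count)
msg_fmt = "converted {} in {} passes".format(name, count)

# After: the equivalent f-string.
msg_new = f"converted {name} in {count} passes"

assert msg_old == msg_fmt == msg_new
```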

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
Justin Chu
73e1455327 [BE] Enable ruff's UP rules and autoformat test/ (#105434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
BJ Hargrave
def7b3ed60 Enable bitwise shift operations tests (#97150)
With #70904 fixed, we can remove the skips for the bitwise shift tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97150
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-07-06 15:32:57 +00:00
cyy
54cb61f7d9 enable ASAN on some tests (#103647)
Enable more tests under ASAN; meanwhile, we disable float-divide-by-zero and float-cast-overflow, both of which are also disabled by default in the latest Clang.
The following excerpt from the Clang documentation explains why:
```
-fsanitize=float-cast-overflow: Conversion to, from, or between floating-point types
which would overflow the destination. Because the range of representable values for
all floating-point types supported by Clang is [-inf, +inf], the only cases detected
are conversions from floating point to integer types.
-fsanitize=float-divide-by-zero: Floating point division by zero. This is undefined
per the C and C++ standards, but is defined by Clang (and by ISO/IEC/IEEE 60559 /
IEEE 754) as producing either an infinity or NaN value, so is not included in
-fsanitize=undefined.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103647
Approved by: https://github.com/kit1980
2023-06-28 02:17:14 +00:00
fuwenguang
c2498d3deb Fixed indentation error in test_binary_ufuncs.py (#102244)
Fixes #102147

Move the code that calls _scalar_helper out of its definition scope. Otherwise, test_div_and_floordiv_vs_python will test nothing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102244
Approved by: https://github.com/kit1980
2023-05-25 21:21:30 +00:00
Yanbo Liang
075d36d37f [Dynamo] Fix nested function resume execution (#100426)
Fixes #99665

Let me explain the root cause using the unit test I added:
* This bug is triggered when:
  * ```wrapped``` is a nested function.
  * ```wrapped``` is in another module which is different from the main function ```fn```.
  * There is a graph break inside of ```wrapped```.
* The root cause: when resuming the nested function, we were actually using the outermost function's (```fn``` in my example) global variables, but ```wrapped``` calls ```inner_func```, which is not part of ```fn```'s globals, so we have to set the correct globals when the nested function resumes execution (see the sketch below).
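A rough sketch of the failing shape, assuming the usual `torch._dynamo.graph_break()` helper to force the break (simplified; the real unit test puts ```wrapped``` in a separate module so ```inner_func``` is genuinely absent from ```fn```'s globals):

```python
import torch
import torch._dynamo

# In the real repro these two live in a different module than fn,
# so inner_func is not in fn's globals.
def inner_func(x):
    return x + 1

def make_wrapped():
    def wrapped(x):                    # nested function
        x = inner_func(x)
        torch._dynamo.graph_break()    # graph break inside wrapped
        return inner_func(x)           # the resume function must still find inner_func
    return wrapped

wrapped = make_wrapped()

@torch.compile(backend="eager")
def fn(x):
    return wrapped(x)

fn(torch.ones(3))  # before this fix, resuming wrapped could fail to resolve inner_func
```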

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100426
Approved by: https://github.com/jansel
2023-05-11 03:10:23 +00:00
PyTorch MergeBot
4b8127b90e Revert "[Dynamo] Fix nested function resume execution (#100426)"
This reverts commit d719f0276d.

Reverted https://github.com/pytorch/pytorch/pull/100426 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/100426#issuecomment-1540915913))
2023-05-09 21:32:13 +00:00
Yanbo Liang
d719f0276d [Dynamo] Fix nested function resume execution (#100426)
Fixes #99665

Let me explain the root cause using the unit test I added:
* This bug is triggered when:
  * ```wrapped``` is a nested function.
  * ```wrapped``` is in another module which is different from the main function ```fn```.
  * There is a graph break inside of ```wrapped```.
* The root cause: when resuming the nested function, we were actually using the outermost function's (```fn``` in my example) global variables, but ```wrapped``` calls ```inner_func```, which is not part of ```fn```'s globals, so we have to set the correct globals when the nested function resumes execution.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100426
Approved by: https://github.com/jansel
2023-05-06 05:04:50 +00:00
mingfeima
a8429342df fix mul/div overflow issue on CPU float16 (#98820)
Fix https://github.com/pytorch/pytorch/issues/63482 and https://github.com/pytorch/pytorch/issues/98691

The above two issues have the same root cause:

**binary_ops** will create a TensorIterator with the flag `promote_inputs_to_common_dtype` on, which converts both input tensors to the common_dtype_ (this logic is bypassed on CUDA) and might overflow on Half. If one of the inputs is a scalar with an absolute value larger than ~65000, it will overflow.

This patch fetches the scalar value from `original_tensor_base`, which records the original scalar input value; then, in `cpu_kernel_vec`, the TensorIterator is treated as a unary op.

Previously, CPU and CUDA had different behaviors for this scenario; with this patch they are aligned. Test cases are added for both CPU and CUDA devices.

The following is the results:

#### before:
```
>>> torch.tensor([3388.], dtype=torch.half).div(524288.0)
tensor([0.], dtype=torch.float16)
>>> torch.tensor([0.01], dtype=torch.float16) * torch.tensor(65536, dtype=torch.float32)
tensor([inf], dtype=torch.float16)
```

#### after:
```
>>> torch.tensor([3388.], dtype=torch.half).div(524288.0)
tensor([0.0065], dtype=torch.float16)
>>> torch.tensor([0.01], dtype=torch.float16) * torch.tensor(65536, dtype=torch.float32)
tensor([655.5000], dtype=torch.float16)
```

The `RReLU` implementation also needs to be updated to use float for the intermediate results; otherwise the following test case would fail:
```
. build/bin/test_api --gtest_filter=ModulesTest.RReLU
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98820
Approved by: https://github.com/jgong5, https://github.com/ngimel
2023-04-17 07:12:53 +00:00
Aaron Gokaslan
47dca20d80 [BE] Enable flake8-comprehension rule C417 (#97880)
Enables flake8-comprehensions rule C417. Ruff autogenerated these fixes across the codebase.
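For reference, the kind of rewrite C417 (unnecessary `map`) produces; this is an illustrative example, not a hunk from this diff:

```python
values = [1, 2, 3]

# Flagged by C417: map() with a lambda.
squares_map = list(map(lambda v: v * v, values))

# The comprehension form the autofix rewrites it to.
squares_comp = [v * v for v in values]

assert squares_map == squares_comp == [1, 4, 9]
```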

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97880
Approved by: https://github.com/ezyang, https://github.com/kit1980, https://github.com/albanD
2023-03-30 14:34:24 +00:00
BJ Hargrave
b59a60ddff Fix CPU bitwise shifts for out-of-limit shift values (#96659)
Negative shift values and positive shift values greater than the bit size of the dtype (i.e., outside the limit `0..bits`) now yield expected results that are consistent with numpy.

Left shift with an out-of-limit shift value results in a value of `0`. Right shift with an out-of-limit shift value results in `-1` for negative inputs and `0` for non-negative inputs (sign preserving).
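A small illustration of the behavior described above (values chosen for illustration only):

```python
import torch

x = torch.tensor([-8, 8], dtype=torch.int8)  # int8: valid shift amounts are 0..7

# Out-of-limit left shift yields 0.
print(torch.bitwise_left_shift(x, 100))   # expected: tensor([0, 0], dtype=torch.int8)

# Out-of-limit right shift is sign preserving: -1 for negative, 0 for non-negative.
print(torch.bitwise_right_shift(x, 100))  # expected: tensor([-1, 0], dtype=torch.int8)
```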

Fixes https://github.com/pytorch/pytorch/issues/70904

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96659
Approved by: https://github.com/ngimel, https://github.com/albanD, https://github.com/zou3519, https://github.com/jgong5, https://github.com/malfet
2023-03-17 21:35:34 +00:00
mfkasim1
975333d80c Logaddexp for complex in CPU (#95717)
Continuation of PR #93153, where I implemented logaddexp for complex but didn't expose it through `torch.logaddexp`. This PR exposes the complex logaddexp to `torch.logaddexp`.
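A quick usage sketch of what is now exposed (illustrative values; for moderate magnitudes `logaddexp(a, b)` should agree with the naive `log(exp(a) + exp(b))`):

```python
import torch

a = torch.tensor([0.5 + 1.0j], dtype=torch.complex64)
b = torch.tensor([1.5 - 0.5j], dtype=torch.complex64)

out = torch.logaddexp(a, b)
naive = torch.log(torch.exp(a) + torch.exp(b))  # fine here, unstable for large magnitudes

print(torch.allclose(out, naive))  # expected: True
```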

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95717
Approved by: https://github.com/lezcano
2023-03-01 20:37:46 +00:00
Xuehai Pan
b005ec62b9 [BE] Remove dependency on six and future (#94709)
Remove the Python 2 and 3 compatibility libraries [six](https://pypi.org/project/six) and [future](https://pypi.org/project/future), along with `torch._six`. We only support Python 3.8+ now; it's time to retire them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94709
Approved by: https://github.com/malfet, https://github.com/Skylion007
2023-02-14 09:14:14 +00:00
Aaron Gokaslan
67d9790985 [BE] Apply almost all remaining flake8-comprehension checks (#94676)
Applies the remaining flake8-comprehensions fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, more performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into just the set call.
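An illustrative instance of the patterns being cleaned up (not taken from the diff):

```python
b = [1, 2, 2, 3]

# Useless generator: the generator expression adds nothing over passing b directly.
s_old = set(a for a in b)
s_new = set(b)

# Unnecessary generator inside list(): a list comprehension is more direct.
doubled_old = list(x * 2 for x in b)
doubled_new = [x * 2 for x in b]

assert s_old == s_new and doubled_old == doubled_new
```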

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
2023-02-12 01:01:25 +00:00
Muhammad Firmansyah Kasim
aaa27a6b6d Vectorized more stable complex division (#93277)
Fixes #92043 and completes #92539 by implementing the vectorized, more stable complex division.
I implement this using the internal `abs_` function to avoid branching, and I also re-implement the internal `abs_` to make it more stable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93277
Approved by: https://github.com/peterbell10, https://github.com/lezcano
2023-02-03 11:48:20 +00:00
mfkasim1
1f55f3b0de Solving the under/overflow for complex division (#92539)
Fixes #92043.
I'm following numpy's implementation, as suggested by @min-jean-cho.
I found that this implementation still overflows for numbers greater than `finfo.max / 2`, but this is still much better than the previous implementation, which overflows for numbers greater than `finfo.max ** 0.5`.
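For intuition, here is a scalar Python sketch of the scaling trick such implementations rely on (a classic Smith-style formulation; an illustration of the idea, not the code in this PR): dividing by the larger-magnitude component first avoids squaring `c` and `d`, which is what overflows so early in the naive `(ac + bd) / (c^2 + d^2)` form.

```python
def stable_complex_div(a: float, b: float, c: float, d: float) -> complex:
    """Compute (a + bj) / (c + dj) without forming c*c + d*d directly."""
    if abs(c) >= abs(d):
        r = d / c
        den = c + d * r
        return complex((a + b * r) / den, (b - a * r) / den)
    else:
        r = c / d
        den = c * r + d
        return complex((a * r + b) / den, (b * r - a) / den)

big = 1e200  # the naive formula would overflow: c*c + d*d == 2e400 -> inf
print(stable_complex_div(big, big, big, big))  # (1+0j)
```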

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92539
Approved by: https://github.com/lezcano
2023-01-26 01:14:06 +00:00
Natalia Gimelshein
f1b78224ca Fix type promotion for 2 wrapped scalar args (#87845)
Fixes #76801

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87845
Approved by: https://github.com/SherlockNoMad, https://github.com/mruberry
2022-10-27 15:53:11 +00:00
CaoE
cca909645f Add bfloat16 support for lerp on CPU (#84327)
### Description
Add bfloat16 support for lerp on CPU

### Testing
single core:

op | shape | fp32 forward/s | bf16 forward/s | fp32 backward/s | bf16 backward/s
-- | -- | -- | -- | -- | --
lerp (tensor) | [10, 128, 10, 124] | 0.005489 | 0.000613 | 0.006658 | 0.003385
  | [10, 128, 20, 124] | 0.011057 | 0.001204 | 0.016032 | 0.007869
  | [10, 128, 30, 124] | 0.016691 | 0.001954 | 0.025549 | 0.012823
  |   |   |   |   |  
lerp (scalar) | [10, 128, 10, 124] | 0.001096 | 0.000507 | 0.002024 | 0.001479
  | [10, 128, 20, 124] | 0.00247 | 0.000997 | 0.005468 | 0.002907
  | [10, 128, 30, 124] | 0.004178 | 0.001513 | 0.009775 | 0.004859


single socket (28 cores):

op | shape | fp32 forward/s | bf16 forward/s | fp32 backward/s | bf16 backward/s
-- | -- | -- | -- | -- | --
lerp (tensor) | [10, 128, 10, 124] | 0.000236 | 3.93E-05 | 0.000494 | 0.000235
  | [10, 128, 20, 124] | 0.000525 | 7.39E-05 | 0.002485 | 0.000638
  | [10, 128, 30, 124] | 0.000801 | 0.000121 | 0.004235 | 0.001529
  |   |   |   |   |  
lerp (scalar) | [10, 128, 10, 124] | 5.90E-05 | 3.32E-05 | 0.000129 | 0.000116
  | [10, 128, 20, 124] | 0.000155 | 5.87E-05 | 0.000368 | 0.000206
  | [10, 128, 30, 124] | 0.000324 | 9.04E-05 | 0.001322 | 0.000313

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84327
Approved by: https://github.com/frank-wei
2022-09-29 01:16:16 +00:00
Natalia Gimelshein
a998a8eb10 Fix segfault for out with a large number of dims (#85294)
Fixes #85166, #85167, #79218, #85251

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85294
Approved by: https://github.com/malfet
2022-09-19 22:00:43 +00:00
Mario Lezcano
88d3acd6b1 Fix and improve the efficiency of the backward of xlog* functions. (#82713)
That is `xlogy`, `special.xlogy`, `special.xlog1py`.

Fixes https://github.com/pytorch/pytorch/issues/80770
Fixes https://github.com/pytorch/pytorch/issues/74279
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82713
Approved by: https://github.com/albanD
2022-08-18 21:55:42 +00:00
Sergii Dymchenko
a0b3854548 Change seperate -> separate (#83056)
One instance was caught by Meta-internal "exact-word-misspell" linter in D38505529.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83056
Approved by: https://github.com/huydhn, https://github.com/seemethere
2022-08-09 23:11:34 +00:00
Peter Bell
5e3d1ef49f Allow ufunc OpInfos to have no reference (#82348)
The `ref` property was moved down from `{Unary,Binary}UfuncInfo` into
`OpInfo` quite some time ago, but `OpInfo` uses `None` to signal no
reference is available while the others use `_NOTHING`. This makes
everything consistently use `None`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82348
Approved by: https://github.com/ngimel
2022-08-09 04:38:17 +00:00
PyTorch MergeBot
814c19b266 Revert "Allow ufunc OpInfos to have no reference (#82348)"
This reverts commit 566d734396.

Reverted https://github.com/pytorch/pytorch/pull/82348 on behalf of https://github.com/peterbell10 due to This stack broke macos tests on trunk
2022-08-06 21:09:09 +00:00
Peter Bell
566d734396 Allow ufunc OpInfos to have no reference (#82348)
The `ref` property was moved down from `{Unary,Binary}UfuncInfo` into
`OpInfo` quite some time ago, but `OpInfo` uses `None` to signal no
reference is available while the others use `_NOTHING`. This makes
everything consistently use `None`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82348
Approved by: https://github.com/ngimel
2022-08-06 20:01:39 +00:00
Natalia Gimelshein
ac8c6d09d1 Don't check overflow for scalar arg in comparison ops (#78881)
That check was potentially synchronizing (people were running into synchronization in real workloads) and mostly unneeded.
Type promotion takes care of comparing to values that cannot be safely converted to the type of the other argument.
This now works for `comp(int_tensor, float('inf'))` as expected. For `comp(uint8_tensor, large_int)` the integer silently wraps to uint8, whereas it would error out before, but this also happens with other arithmetic ops.
Also adds reference np implementation to comparison op OpInfos, and additional reference inputs to test newly enabled behavior.
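An illustration of the newly enabled behavior described above (hedged; outputs are what the description implies):

```python
import torch

t = torch.tensor([1, 2, 3], dtype=torch.int64)
# float('inf') cannot be safely converted to int64, but type promotion
# now handles the comparison instead of raising.
print(t < float('inf'))   # expected: tensor([True, True, True])

u = torch.tensor([1, 2, 3], dtype=torch.uint8)
# 300 wraps to 44 in uint8 (300 % 256), matching other arithmetic ops;
# previously this raised an overflow error.
print(u > 300)            # expected: tensor([False, False, False])
```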

Fixes #76805
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78881
Approved by: https://github.com/mruberry
2022-06-05 01:43:51 +00:00
Kshiteej K
849b08f14b [reland][chalf] where(cpu and cuda), pow(cuda) (#78665)
Reland: https://github.com/pytorch/pytorch/pull/77640
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78665
Approved by: https://github.com/ngimel
2022-06-02 18:04:06 +00:00
PyTorch MergeBot
4bb8db85e9 Revert "[chalf] where(cpu and cuda), pow(cuda) (#77640)"
This reverts commit 3697cf7f76.

Reverted https://github.com/pytorch/pytorch/pull/77640 on behalf of https://github.com/mruberry due to as it broke ROCM on trunk
2022-06-01 19:39:38 +00:00
kshitij12345
3697cf7f76 [chalf] where(cpu and cuda), pow(cuda) (#77640)
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77640
Approved by: https://github.com/anjali411, https://github.com/ngimel
2022-06-01 18:35:53 +00:00
Mike Ruberry
089203f8bc Updates floor_divide to perform floor division (#78411)
Fixes https://github.com/pytorch/pytorch/issues/43874

This PR changes floor_divide to perform floor division instead of truncation division.

This is a BC-breaking change, but it's a "bug fix," and we've already warned users for several releases that this behavior would change.
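A small illustration of the change (the difference only shows up when the operands have different signs):

```python
import torch

a = torch.tensor([-7.0, 7.0])
b = torch.tensor([2.0, -2.0])

# New behavior: floor division rounds toward negative infinity.
print(torch.floor_divide(a, b))                # tensor([-4., -4.])

# The old truncation behavior remains available explicitly.
print(torch.div(a, b, rounding_mode='trunc'))  # tensor([-3., -3.])
```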
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78411
Approved by: https://github.com/ngimel
2022-05-29 21:28:45 +00:00
kshitij12345
efb2c093fc [fix] complex type promotion (#77524)
Fixes https://github.com/pytorch/pytorch/issues/76803

Before Fix:
```python
>> a = torch.randn((2, 2), dtype=torch.float)
>> b = torch.tensor(1, dtype=torch.cdouble)
>> (a + b).dtype
torch.complex128
```

After Fix:
```python
>> a = torch.randn((2, 2), dtype=torch.float)
>> b = torch.tensor(1, dtype=torch.cdouble)
>> (a + b).dtype
torch.complex64
```

**Note**: This is a BC Breaking change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77524
Approved by: https://github.com/anjali411, https://github.com/mruberry
2022-05-20 10:23:56 +00:00
Xiang Gao
f274558018 Bitwise ops improvements (#77621)
- Bitwise shift: remove floating-point support
- Bitwise and, or, xor: add a (scalar, tensor) overload
- Use `test_ops.py` to test these ops, including error cases
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77621
Approved by: https://github.com/ngimel
2022-05-17 21:16:42 +00:00
Andrew M. James
39d3a7ffe5 Connect Tensor.__ipow__ to pow_ method
The `pow_` method should be connected to `Tensor.__ipow__` so that the
operator `**=` works correctly.
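A minimal illustration of the intended behavior (hedged sketch):

```python
import torch

t = torch.tensor([2.0, 3.0])
t_id = id(t)

t **= 2   # dispatches through Tensor.__ipow__ to pow_, updating t in place

print(t)               # tensor([4., 9.])
print(id(t) == t_id)   # expected: True (same tensor object, modified in place)
```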

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76900

Approved by: https://github.com/mruberry
2022-05-15 15:06:30 +00:00
Jiayi Sun
cdf9572c52 add BFloat16 operators on CPU: histc, atan2 (#72695)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72695
Approved by: https://github.com/VitalyFedyunin, https://github.com/frank-wei
2022-05-12 17:45:08 +00:00
Xiang Gao
cc9d0f309e lshift and rshift stop support floating types (#77146)
Fixes #74358

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77146
Approved by: https://github.com/ngimel
2022-05-11 22:29:30 +00:00
Mike Ruberry
c031643e39 Adds decorators for Python References and extends Python Reference testing (#76945)
This PR does the following...

Tests:
- fixes test_type_promotion in test_binary_ufuncs to correctly generate scalar cpu tensors
- fixes test_python_reference_consistency to use the Python Reference's reference inputs
- extends Python reference testing to test_conj_view, test_neg_view, and test_neg_conj_view
- adds a NaN propagation sample input for elementwise unary and binary operations
- fixes the UnaryUfuncInfo class to properly register its reference inputs
- Updates the Python Reference OpInfos to skip error inputs when their behavior on scalar inputs is inconsistent with their reference operators

Code organization:
- moves elementwise type promotion functionality to prims.utils

Prims & Refs:
- fixes scalar cpu tensor handling by having them pass through broadcasting and device and shape checks
- adds two decorators, `elementwise_type_promotion_wrapper` and `out_wrapper`; the former allows elementwise type promotion to be automated, and the latter automatically adds the out kwarg and handles it properly (a rough sketch follows below)
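A rough, simplified sketch of what an out-kwarg wrapper of this kind does (hypothetical code, not the actual prims utility, which also enforces dtype/device and safe-casting checks):

```python
import functools
import torch

def out_wrapper_sketch(fn):
    """Hypothetical: add an `out=` kwarg to a function that returns a fresh tensor."""
    @functools.wraps(fn)
    def wrapper(*args, out=None, **kwargs):
        result = fn(*args, **kwargs)
        if out is None:
            return result
        out.resize_(result.shape)  # simplified; real out= semantics never resize a non-empty out
        out.copy_(result)
        return out
    return wrapper

@out_wrapper_sketch
def double(x):
    return x * 2

buf = torch.empty(3)
double(torch.ones(3), out=buf)
print(buf)  # tensor([2., 2., 2.])
```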

cc @ezyang who also had some thoughts on cpu scalar tensor handling
cc @chillee -- might want to use this new decorator as we converge decompositions and references
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76945
Approved by: https://github.com/ngimel
2022-05-07 03:42:24 +00:00
Mike Ruberry
b557e102d8 Fixes prim type promotion and updates type promotion testing
This PR fixes prim elementwise type promotion, tests elementwise binary references using `test_type_promotion` in the elementwise binary test suite, and updates that test with additional cases for float x complex and scalar type promotion.

The following issues were discovered while working on this PR:

- https://github.com/pytorch/pytorch/issues/76806
- https://github.com/pytorch/pytorch/issues/76805
- https://github.com/pytorch/pytorch/issues/76804
- https://github.com/pytorch/pytorch/issues/76803
- https://github.com/pytorch/pytorch/issues/76801

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76809
Approved by: https://github.com/ngimel
2022-05-04 17:58:10 +00:00
Mike Ruberry
f6bbecf8b5 Adds python ref consistency test, elementwise unary reference inputs, and formats test files
Per title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76626
Approved by: https://github.com/ngimel
2022-05-01 22:42:46 +00:00
kshitij12345
1f1d0b30ce [fix] mul chalf: cpu scalar and cuda tensor case
https://github.com/pytorch/pytorch/pull/76158 added `chalf` support for `mul` on CUDA incorrectly.

The following sample
```python
torch.mul(torch.zeros(3, device='cuda'), 2.5) # CUDA Tensor and CPU Scalar
```

fails with
```
RuntimeError: iter.device(arg).is_cuda() INTERNAL ASSERT FAILED at "../aten/src/ATen/native/cuda/JitLoops.cuh":83, please report a bug to PyTorch. argument 2: expected a CUDA device but found cpu
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76364
Approved by: https://github.com/mruberry
2022-04-30 06:31:52 +00:00
Mike Ruberry
4048d4cdd2 [primTorch] Prototype tracer and elementwise unary reference opinfo class
Adds a prototype tracer with no caching support and the `ElementwiseUnaryPythonRefInfo` class. A reference for `floor` is added to test the latter, and the elementwise binary reference inputs are extended to also return noncontiguous inputs. The SampleInput transform operation has been updated to return an actual SampleInput instead of a tuple to facilitate uniform handling of (transformed) SampleInputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76388
Approved by: https://github.com/ngimel
2022-04-27 14:40:21 +00:00
Jeff Daily
24d06182f6 [ROCm] unskip test_fmod_remainder_by_zero_integral
The test is updated to check for ROCm-specific undefined behavior for
integral fmod division by zero.

Fixes #48130.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76119
Approved by: https://github.com/mruberry
2022-04-22 17:34:36 +00:00
Natalia Gimelshein
2f9a239ae9 do intermediate math in opmath_t in lerp
Per title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76062
Approved by: https://github.com/mruberry
2022-04-20 04:05:55 +00:00
Mike Ruberry
de949a0e59 Various OpInfo architecture improvements
This PR makes the following improvements:

- moves the custom skip list for test_normalize_operator_exhaustive in test_fx_experimental to use the typical OpInfo skip architecture. The skips were updated to xfails, and that identified some operators which were no longer failing the test
- redundant tests with OpInfo-based testing in test_jit.py were removed
- test_dtypes was improved so its error messages are clear and it makes test_nondifferentiable redundant; the latter test has been removed
- OpInfo.supports_complex_autograd() is removed in favor of a more accurate and general test for whether the particular dtype is in the backward dtypes of the operator
- gradchecks have been improved to verify that an operator doesn't support grad if it claims not to
- gradchecks have been improved to test the gradient of all input tensors that require gradient
- the concept of "default test dtypes" has been removed
- excessive and mostly redundant out testing for elementwise unary operators has been removed
- metadata for whether an op supports nuanced "safe casting" to out behavior has been removed from OpInfos
- numerous skips have been converted to xfails
- numerous OpInfos have had their metadata fixed based on the new checks
- jit-specific utilities in common_methods_invocations.py have been moved to jit_programming_utils.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75951
Approved by: https://github.com/ngimel
2022-04-18 21:55:32 +00:00
Mike Ruberry
b09769992f Improves the OpInfo out= tests
Edit: OpInfos separated into their own PRs to debug an ASAN failure that doesn't identify the failing test properly. This PR now just updates the out tests.

Adds OpInfos for:

- nn.functional.smooth_l1_loss
- nn.functional.l1_loss
- nn.functional.pdist
- nn.functional.binary_cross_entropy
- nn.functional.triplet_margin_loss
- nn.functional.triplet_margin_with_distance_loss
- nn.functional.max_unpool{1, 2, 3}D
- nn.functional.alpha_dropout
- nn.functional.soft_margin_loss
- nn.functional.multilabel_soft_margin_loss
- nn.functional.multilabel_margin_loss
- nn.functional.multi_margin_loss
- nn.functional.margin_ranking_loss

These OpInfos were taken from https://github.com/pytorch/pytorch/pull/67560, https://github.com/pytorch/pytorch/pull/67823, https://github.com/pytorch/pytorch/pull/68625, and https://github.com/pytorch/pytorch/pull/67079. The sample input update from https://github.com/pytorch/pytorch/pull/67017 is also rolled into this PR.

cc @zou3519 @nikitaved @pmeier @vfdev-5 @dagitses
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75782
Approved by: https://github.com/ngimel
2022-04-15 06:16:01 +00:00
Richard Zou
791321e44d Fix minimum, maximum forward-ad formula for float32
fixes https://github.com/pytorch/pytorch/issues/69034

Test Plan:
- new tests


Pull Request resolved: https://github.com/pytorch/pytorch/pull/75277
Approved by: https://github.com/soulitzer
2022-04-07 13:58:39 +00:00
Nikita Shulga
bfac65dfe5 [testing] Update dispatch macros (#74977)
This PR is a reland of #74289.
Co-authored-by: Khushi Agrawal <khushiagrawal411@gmail.com>
2022-03-30 14:13:21 -07:00
PyTorch MergeBot
2e4152b118 Revert "[testing] Update dispatch macros"
This reverts commit eed19a0f38.

Reverted https://github.com/pytorch/pytorch/pull/74289 on behalf of https://github.com/malfet
2022-03-30 19:52:37 +00:00
Khushi Agrawal
eed19a0f38 [testing] Update dispatch macros
Hi,
This PR is the follow-up of #71561 (the previous PR had a couple of merge conflicts and was reverted; this PR resolves that).
Please take a look. Thanks!

cc: @pmeier @mruberry @kshitij12345
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74289
Approved by: https://github.com/pmeier, https://github.com/mruberry
2022-03-30 16:10:16 +00:00
Mike Ruberry
0aa3c39e5f Extends OpInfo architecture with reference inputs, adds them for elementwise binary operators
This PR extends our OpInfo test architecture with "reference inputs," an optional expansion of typical sample inputs that allows for more thorough testing. Currently only the elementwise binary operations implement an extended set of reference inputs. This PR also cleans up some smaller OpInfo-related issues, including several bugs, and it identified https://github.com/pytorch/pytorch/issues/74279.

A reference inputs function can be specified for an OpInfo by filling in its "reference_inputs_func" metadata. If this is done it's recommended that the reference inputs function first call the sample inputs function, then produce additional sample inputs. See `reference_inputs_elementwise_binary` for an example of this pattern.

In addition to implementing reference inputs for the elementwise binary operations, this PR improves their consistency and simplifies how their metadata is represented. The great majority now use a generic sample input function, and those that want extensions start by calling the generic sample input function and then adding additional samples. This removes many older sample input functions. The BinaryUfuncInfo subclass also now allows specifying scalar support more precisely, and reference inputs and error inputs are generated based on this metadata to ensure it's correct.

cc @kshitij12345 @pmeier @zou3519 @Chillee

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74280
Approved by: https://github.com/ngimel
2022-03-21 03:24:16 +00:00