Nikita Shulga
78f3937ee8
[BE] Handle errors in set_num_threads ( #113684 )
...
and `set_num_interop_threads`
Before this change, calling `torch.set_num_threads(2**65)` resulted in a segmentation fault; afterwards it becomes a good old runtime error:
```
% python -c "import torch;torch.set_num_threads(2**65)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: Overflow when unpacking long
```
Similar to https://github.com/pytorch/pytorch/pull/60073
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113684
Approved by: https://github.com/Skylion007 , https://github.com/albanD
2023-11-15 06:17:41 +00:00
Kurt Mohler
8bdce9bb74
Fix UntypedStorage.resize_ to keep same CUDA device index ( #113386 )
...
Fixes #113300
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113386
Approved by: https://github.com/albanD
2023-11-10 01:57:25 +00:00
Kurt Mohler
fd209543d5
Add torch.utils.deterministic.fill_uninitialized_memory flag ( #111377 )
...
Part of #109802
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111377
Approved by: https://github.com/albanD , https://github.com/aaronenyeshi
2023-11-01 16:10:09 +00:00
PyTorch MergeBot
ace2713d1e
Revert "Add torch.utils.deterministic.fill_uninitialized_memory flag ( #111377 )"
...
This reverts commit f1785373c0 .
Reverted https://github.com/pytorch/pytorch/pull/111377 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/111377#issuecomment-1784179040 ))
2023-10-29 17:41:55 +00:00
Nikita Shulga
b61efe1c2b
Fix `torch.[size|stride](dim=None)` invocation ( #111991 )
...
Per documentation, one should be able to explicitly pass the dim argument as None to get the tensor's sizes/strides across all dimensions, but before this change it was incorrectly interpreted as a named-tensor call.
Modify the `size` and `stride` signatures generated by `gen_pyi.py` to highlight that the overload with `None` returns a Tuple, while the one with `dim: _int` returns an `int`.
Add a regression test to validate the behavior, and remove the check for asserts from two named-tensor tests (NamedTensors are dead, aren't they?)
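A minimal sketch of the intended behavior (illustrative, not the PR's regression test):
```
import torch

x = torch.randn(2, 3)
# dim=None returns the full size/stride, same as the no-argument call
assert x.size(dim=None) == torch.Size([2, 3])
assert x.stride(dim=None) == (3, 1)
# an integer dim returns a single int
assert x.size(dim=0) == 2
assert x.stride(dim=1) == 1
```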
Fixes https://github.com/pytorch/pytorch/issues/111944
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111991
Approved by: https://github.com/zou3519
2023-10-26 04:14:35 +00:00
Kurt Mohler
f1785373c0
Add torch.utils.deterministic.fill_uninitialized_memory flag ( #111377 )
...
Part of #109802
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111377
Approved by: https://github.com/albanD
2023-10-26 02:39:06 +00:00
Nikita Shulga
7709382b50
Fix regression in torch.equal behavior for NaNs ( #111699 )
...
`torch.equal(x, x)` should return False if `x` is a tensor of floats, one of which is NaN.
This renders some of the optimizations proposed in https://github.com/pytorch/pytorch/pull/100024 invalid; as a result, `torch.equal` becomes much slower for identical floating-point tensors.
Add a regression test that calls `torch.equal` on a tensor containing NaN.
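An illustrative check of the fixed semantics (my example, not the PR's regression test):
```
import torch

x = torch.tensor([1.0, float("nan"), 3.0])
# NaN != NaN, so equality must be False even when both arguments are the very same tensor
assert not torch.equal(x, x)
assert not torch.equal(x.clone(), x)
```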
Fixes https://github.com/pytorch/pytorch/issues/111251
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111699
Approved by: https://github.com/Skylion007 , https://github.com/albanD
2023-10-21 00:02:45 +00:00
CaoE
d1afb7d43d
add Half support for multinomial on CPU ( #104178 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104178
Approved by: https://github.com/jgong5 , https://github.com/kulinseth , https://github.com/cpuhrsch
2023-10-20 19:16:04 +00:00
Evgeni Burovski
48989bc820
trace frames with np.ndarray ( #110512 )
...
Fixes #109604
Resubmit gh-109715 + several skips and small fixes to make tests pass.
The main fix here is by @ysiraichi: previously, dynamo did not resume tracing numpy ndarrays after a graph break (see the sketch below).
While at it, fix several small issues Yukio's fix uncovers:
- graph break gracefully on numpy dtypes which do not map to torch dtypes (uint16, etc.)
- recognize array scalars in dynamo and treat them as 0D ndarrays
- make sure that iterating over torch.ndarray generates arrays, not bare tensors
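A minimal sketch of the kind of code this enables - a compiled function that keeps tracing ndarray values across a graph break (illustrative; `torch._dynamo.graph_break()` is used only to force a break):
```
import numpy as np
import torch

@torch.compile
def fn(x):
    y = np.sin(x) + 1.0          # traced as a torch._numpy ndarray
    torch._dynamo.graph_break()  # force a graph break mid-function
    return (y * 2).sum()         # ndarray tracing resumes after the break

print(fn(np.arange(4.0)))
```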
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110512
Approved by: https://github.com/lezcano
2023-10-15 00:56:10 +00:00
CaoE
8713a1a363
add Half support for bernoulli on CPU ( #104176 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104176
Approved by: https://github.com/mingfeima , https://github.com/cpuhrsch
2023-10-13 01:18:55 +00:00
Prachi Gupta
53a9ac534c
Added decorator skipRocmIfTorchInductor and skipped failing tests ( #107760 )
...
This PR adds a skip decorator that disables tests in CI for the ROCm inductor workflow. This new workflow will be coming in via https://github.com/pytorch/pytorch/pull/110544
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107760
Approved by: https://github.com/jataylo , https://github.com/pruthvistony , https://github.com/atalman
2023-10-12 16:00:35 +00:00
Elias Ellison
cf1da9bd17
enable index add test ( #111016 )
...
Dynamo is swallowing a user exception when suppress_errors is set to True. There's an issue filed for that: https://github.com/pytorch/pytorch/issues/108798. In the meantime, we would still like the functionality in this test, which works with the non-default setting (don't suppress errors), not to regress.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111016
Approved by: https://github.com/yanboliang
2023-10-11 19:41:35 +00:00
eellison
fb4b9e9c8e
Re-enable a couple of fixed tests ( #110770 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110770
Approved by: https://github.com/yanboliang , https://github.com/int3 , https://github.com/Skylion007
ghstack dependencies: #110651
2023-10-10 19:13:14 +00:00
eellison
c5f06b9753
Re-enable test_copy_transpose_math_view, neg_view/dce fix ( #110651 )
...
- A neg view can just be lowered to neg() post-functionalization.
- We were treating all fallback kernels as not having side effects; we shouldn't DCE mutating fallback kernels - either mutations induced by the reinplacing pass or clone_ with unsupported arguments (complex).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110651
Approved by: https://github.com/Chillee , https://github.com/jansel , https://github.com/malfet , https://github.com/Skylion007
2023-10-10 16:34:01 +00:00
jjsjann123
37567fdf31
Nvfuser cpp api deprecation attempt 2 ( #110881 )
...
Attempting to retry #110318, deprecating the nvfuser C++ API.
The warning has been updated to TORCH_WARN_ONCE;
the warning thrown inside torch::jit::fuser::cuda::isEnabled() is turned off and will be deprecated when we pull out the TorchScript integration in a follow-up PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110881
Approved by: https://github.com/davidberard98 , https://github.com/NicolasHug
2023-10-10 08:07:03 +00:00
PyTorch MergeBot
bbdc8c7b05
Revert "deprecating nvfuser c++ API ( #110318 )"
...
This reverts commit bf0866fc16 .
Reverted https://github.com/pytorch/pytorch/pull/110318 on behalf of https://github.com/davidberard98 due to too many warnings being thrown in torchvision https://github.com/pytorch/pytorch/issues/110857 ([comment](https://github.com/pytorch/pytorch/pull/110318#issuecomment-1753245449 ))
2023-10-09 15:41:50 +00:00
jjsjann123
bf0866fc16
deprecating nvfuser c++ API ( #110318 )
...
deprecating nvfuser c++ API
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110318
Approved by: https://github.com/davidberard98
2023-10-07 02:25:21 +00:00
eellison
3812f2e40c
Preserve layout on like constructors ( #110242 )
...
Partially fixes `test_memory_format_factory_like_functions_preserve` with PYTORCH_TEST_WITH_INDUCTOR. Inductor preserves memory layouts for user-visible outputs as annotated on the FX graph it is passed. That graph is generated by running aot_autograd with decompositions; if the decompositions give incorrect strides, so will Inductor.
This PR preserves the layout of `_like` operators when it corresponds to a `torch.memory_format`. It doesn't fix a) arbitrary permutations or b) striding of non-dense outputs. Both are lower priority compared to preserving channels-last. We would need either https://github.com/pytorch/pytorch/issues/92920 or a `to` variant that takes in a physical layout for arbitrary permutations. I converted the output of rand to the correct layout instead of passing the layout in so that this composes with the `replace_random` pass, and because the two pointwise ops will get fused anyway.
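A hedged sketch of the property the test checks - `_like` factories preserving a channels-last input layout - written against eager; with this PR, the inductor-compiled version is expected to match:
```
import torch

x = torch.randn(2, 3, 8, 8).to(memory_format=torch.channels_last)
y = torch.rand_like(x)  # default memory_format=torch.preserve_format
# the output keeps the channels-last layout of its input
assert y.is_contiguous(memory_format=torch.channels_last)
```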
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110242
Approved by: https://github.com/int3
2023-10-02 23:53:55 +00:00
Moritz Hennen
09c598745c
Rename torch._C._TensorBase to TensorBase ( #109940 )
...
I have gone ahead and renamed the type `torch._C._TensorBase` to the non-private class name `TensorBase`.
The changes also keep `torch._C._TensorBase` as an alias to the new type, both in the C++ code: 70458768fb/torch/csrc/autograd/python_variable.cpp (L2196-L2197) and in the corresponding `__init__.pyi.in` file:
70458768fb/torch/_C/__init__.pyi.in (L1522)
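A quick sanity check of the rename and the retained alias (assuming a build that includes this change):
```
import torch

# the new name and the old private name refer to the same type
assert torch._C.TensorBase is torch._C._TensorBase
assert isinstance(torch.empty(0), torch._C.TensorBase)
```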
Fixes #109438
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109940
Approved by: https://github.com/ezyang
2023-09-25 19:10:22 +00:00
Jez Ng
063a62622b
Add memory overlap check to meta_copy_ ( #108989 )
...
Fixes `test_copy_many_to_one`.
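A hedged sketch of the overlap case the meta kernel now rejects, mirroring eager's existing check (illustrative, not the test itself):
```
import torch

dst = torch.empty(1).expand(5)  # five logical elements aliasing one storage location
src = torch.randn(5)
try:
    dst.copy_(src)  # many-to-one copy: eager raises, and meta/fake tensors should now too
except RuntimeError as e:
    print("copy_ rejected:", e)
```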
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108989
Approved by: https://github.com/eellison
2023-09-12 23:28:14 +00:00
Kurt Mohler
4c5e43574c
Reland 2: Add PyObject preservation for UntypedStorage ( #109039 )
...
Relands #103907 after it was reverted. This PR makes the new `ignore_hermetic_tls` argument of `check_pyobj` optional to avoid causing a compilation error in torchdistx
Part of #91395
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109039
Approved by: https://github.com/ezyang
2023-09-12 22:26:05 +00:00
PyTorch MergeBot
41bd0fde7e
Revert "Remove fixed skips ( #108674 )"
...
This reverts commit ab9fb03d6f .
Reverted https://github.com/pytorch/pytorch/pull/108674 on behalf of https://github.com/huydhn due to Sorry for picking this up a bit late, but with https://github.com/pytorch/pytorch/pull/108647 reverted, these tests are failing again. So we need to wait for the PR to reland before we can land this change ([comment](https://github.com/pytorch/pytorch/pull/108674#issuecomment-1715202692 ))
2023-09-12 08:04:32 +00:00
PyTorch MergeBot
59f605be57
Revert "Reland 2: Add PyObject preservation for UntypedStorage ( #109039 )"
...
This reverts commit 419e4e17a2 .
Reverted https://github.com/pytorch/pytorch/pull/109039 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing linter job in trunk, probably due to a landrace ([comment](https://github.com/pytorch/pytorch/pull/109039#issuecomment-1715147020 ))
2023-09-12 07:26:11 +00:00
Kurt Mohler
419e4e17a2
Reland 2: Add PyObject preservation for UntypedStorage ( #109039 )
...
Relands #103907 after it was reverted. This PR makes the new `ignore_hermetic_tls` argument of `check_pyobj` optional to avoid causing a compilation error in torchdistx
Part of #91395
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109039
Approved by: https://github.com/ezyang
2023-09-12 01:19:40 +00:00
Li-Huai (Allan) Lin
b2cba439b4
Introduce Tensor overload to linspace and logspace ( #104889 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104889
Approved by: https://github.com/zou3519
ghstack dependencies: #107958
2023-09-11 23:30:40 +00:00
PyTorch MergeBot
a7f5abeade
Revert "Introduce Tensor overload to linspace and logspace ( #104889 )"
...
This reverts commit 57e5239321 .
Reverted https://github.com/pytorch/pytorch/pull/104889 on behalf of https://github.com/clee2000 due to sorry have to revert this to revert https://github.com/pytorch/pytorch/pull/107958 ([comment](https://github.com/pytorch/pytorch/pull/104889#issuecomment-1714305768 ))
2023-09-11 17:33:48 +00:00
Li-Huai (Allan) Lin
57e5239321
Introduce Tensor overload to linspace and logspace ( #104889 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104889
Approved by: https://github.com/zou3519
ghstack dependencies: #107958
2023-09-11 15:29:39 +00:00
Edward Z. Yang
137afe74e0
Don't fastpath conj copy when conj/neg bit mismatch ( #108881 )
...
Fixes https://github.com/pytorch/pytorch/issues/106051
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108881
Approved by: https://github.com/soulitzer
2023-09-08 20:44:43 +00:00
PyTorch MergeBot
68238606f3
Revert "Reland: Add PyObject preservation for UntypedStorage ( #103907 )"
...
This reverts commit 56b848157c .
Reverted https://github.com/pytorch/pytorch/pull/103907 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing torchdistx build which uses check_pyobj here 9c1b9f5cb2/src/python/torchdistx/_C/deferred_init.cc (L87) ([comment](https://github.com/pytorch/pytorch/pull/103907#issuecomment-1712121158 ))
2023-09-08 19:27:07 +00:00
Evgeni Burovski
1f20531939
fall back to eager on NotImplementedError ( #107863 )
...
Follow-up to https://github.com/pytorch/pytorch/pull/107710 :
Help dynamo fall back to eager when compiling unimplemented numpy constructs:
- arrays of strings
- (arg){min, max} for complex types
- various arguments typed as NotImplemented (`np.ones(4, order="F")` etc)
- numpy functions which torch._numpy does not implement
To test, run (we do not implement arrays of strings)
```
import torch
import numpy as np
@torch.compile(fullgraph=False)
def fn():
    return np.asarray(["L", "U"])
```
and observe it compiles with fullgraph=False and fails with fullgraph=True
Fixes https://github.com/pytorch/pytorch/issues/107970
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107863
Approved by: https://github.com/ezyang , https://github.com/lezcano
2023-09-07 21:22:20 +00:00
eellison
ab9fb03d6f
Remove fixed skips ( #108674 )
...
These no longer fail with TEST_WITH_TORCHINDUCTOR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108674
Approved by: https://github.com/desertfire
2023-09-07 17:36:56 +00:00
Kurt Mohler
56b848157c
Reland: Add PyObject preservation for UntypedStorage ( #103907 )
...
This relands #97470 after #102553 reverted it. This PR attempts to fix the internal failure by avoiding an unnecessary intermediate storage buffer allocation in `c10::newStorageImplFromRefcountedDataPtr`.
Part of #91395
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103907
Approved by: https://github.com/ezyang
2023-09-07 04:24:11 +00:00
Kurt Mohler
3f88e3105f
Reland: Remove remaining global set_default_dtype calls from tests ( #108088 )
...
Fixes #68972
Relands #107246
To avoid causing Meta-internal CI failures, this PR avoids always asserting that the default dtype is float in the `TestCase.setUp/tearDown` methods. Instead, the assert is only done if `TestCase._default_dtype_check_enabled == True`. `_default_dtype_check_enabled` is set to True in the `if __name__ == "__main__":` blocks of all the relevant test files that have required changes for this issue
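A rough sketch of that opt-in pattern (simplified; the real logic lives on the shared TestCase in torch.testing._internal.common_utils):
```
import unittest
import torch

class TestCase(unittest.TestCase):
    _default_dtype_check_enabled = False  # opted into per test file

    def setUp(self):
        if self._default_dtype_check_enabled:
            assert torch.get_default_dtype() == torch.float

    def tearDown(self):
        if self._default_dtype_check_enabled:
            assert torch.get_default_dtype() == torch.float

if __name__ == "__main__":
    TestCase._default_dtype_check_enabled = True
    unittest.main()
```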
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108088
Approved by: https://github.com/ezyang
2023-09-07 03:04:34 +00:00
PyTorch MergeBot
43527d41a2
Revert "Remove fixed skips ( #108674 )"
...
This reverts commit 518cfda2dd .
Reverted https://github.com/pytorch/pytorch/pull/108674 on behalf of https://github.com/huydhn due to Sorry for reverting this, but one test is failing on inductor 518cfda2dd , and it seems easier to revert this than disabling the test ([comment](https://github.com/pytorch/pytorch/pull/108674#issuecomment-1709310192 ))
2023-09-07 00:56:46 +00:00
eellison
518cfda2dd
Remove fixed skips ( #108674 )
...
These no longer fail with TEST_WITH_TORCHINDUCTOR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108674
Approved by: https://github.com/desertfire
2023-09-06 22:33:43 +00:00
PyTorch MergeBot
161ea463e6
Revert "Remove remaining global set_default_dtype calls from tests ( #107246 )"
...
This reverts commit aa8ea1d787 .
Reverted https://github.com/pytorch/pytorch/pull/107246 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/107246#issuecomment-1693838522 ))
2023-08-25 19:34:55 +00:00
Digant Desai
8a7a6867b9
[PyTorch][Tensor] Introduce tensor.dim_order ( #106835 )
...
Summary:
This is a stride-based attribute for a tensor, available in Python.
It can help inspect tensors generated using `torch.empty_permuted(.., physical_layout, ...)`, where physical_layout should match the dim_order returned here. `empty_permuted` will be renamed to use dim_order as the param name in the future. It also helps the ExecuTorch export pipeline implement dim_order-based tensors.
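A small illustration of the new attribute (my example; assumes the `dim_order()` method added by this PR):
```
import torch

# logical shape (2, 3, 4, 5) laid out physically in the order (0, 2, 3, 1)
x = torch.empty_permuted((2, 3, 4, 5), (0, 2, 3, 1))
print(x.dim_order())  # expected to match the physical layout: (0, 2, 3, 1)

# a plain contiguous tensor reports the identity order
print(torch.empty(2, 3, 4).dim_order())  # (0, 1, 2)
```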
Differential Revision: D48134476
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106835
Approved by: https://github.com/ezyang
2023-08-25 00:06:03 +00:00
Kurt Mohler
aa8ea1d787
Remove remaining global set_default_dtype calls from tests ( #107246 )
...
Fixes #68972
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107246
Approved by: https://github.com/ezyang
2023-08-24 16:10:48 +00:00
Aaron Gokaslan
660e8060ad
[BE]: Update ruff to 0.285 ( #107519 )
...
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.
I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I'm enabling it so that it stays that way. :)
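For context, a hedged sketch of the pattern RUF017 flags - quadratic list flattening via `sum()` - and a linear alternative:
```
from itertools import chain

lists = [[1, 2], [3], [4, 5]]

flat_quadratic = sum(lists, [])                 # flagged: rebuilds the list at every step
flat_linear = list(chain.from_iterable(lists))  # linear-time flattening

assert flat_quadratic == flat_linear == [1, 2, 3, 4, 5]
```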
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
PyTorch MergeBot
d59a6864fb
Revert "[BE]: Update ruff to 0.285 ( #107519 )"
...
This reverts commit 88ab3e4322 .
Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please hep them get unblocked? It seems like one of the strings was prob accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480 ))
2023-08-22 19:53:32 +00:00
Aaron Gokaslan
88ab3e4322
[BE]: Update ruff to 0.285 ( #107519 )
...
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.
I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I'm enabling it so that it stays that way. :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
Catherine Lee
bc053070f8
Mark test_gradient_extreme_cases as slow for inductor ( #107189 )
...
test_gradient_extreme_cases_* takes ~5 minutes on the inductor sm86 shard, and possibly even longer on the inductor workflow since it's timing out right now (although I'm not sure what the difference between the two is), and sometimes automatic slow-test detection isn't catching it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107189
Approved by: https://github.com/ZainRizvi
2023-08-15 22:03:00 +00:00
Sam Larsen
3d00170b20
[inductor] fix test_dim_function_empty ( #106994 )
...
Summary: Looks like the assert syntax was just wrong
Test Plan:
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_torch.py -k test_dim_function_empty
PYTORCH_TEST_WITH_AOT_EAGER=1 python test/test_torch.py -k test_dim_function_empty
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106994
Approved by: https://github.com/eellison
2023-08-11 21:38:53 +00:00
Kshiteej K
a899333ffc
fix: nll_loss batch rule with negative ignore_idx ( #106118 )
...
We use Python decompositions for the batching rules instead of writing our own.
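A hedged sketch of the scenario from the linked issue - vmapping `nll_loss` with entries equal to a negative `ignore_index` (illustrative):
```
import torch
import torch.nn.functional as F
from torch.func import vmap

log_probs = torch.log_softmax(torch.randn(4, 3, 5), dim=-1)  # (batch, N, C)
targets = torch.randint(0, 5, (4, 3))
targets[:, 0] = -100  # entries matching the (negative) ignore_index are skipped

loss = vmap(lambda lp, t: F.nll_loss(lp, t, ignore_index=-100))(log_probs, targets)
print(loss.shape)  # torch.Size([4])
```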
Fixes https://github.com/pytorch/pytorch/issues/105736
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106118
Approved by: https://github.com/lezcano , https://github.com/zou3519
2023-08-04 07:43:02 +00:00
Fuzzkatt
ae1c0f42a3
update tf32 thresholds for H100 ( #105879 )
...
Addresses TF32 threshold-related failures from NVIDIA internal testing for the following unit tests:
H100:
- test_nn.py: test_ConvTranspose2d_dilated_cuda_tf32, test_ConvTranspose2d_no_bias_cuda_tf32, test_Transformer_multilayer_coder_cuda_tf32
- test_torch.py: test_cdist_non_contiguous_batch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105879
Approved by: https://github.com/ezyang
2023-08-02 16:44:01 +00:00
Scott Wolchok
b435bff53a
[PyTorch] Add tests for empty tensors w/storage null data_ptr ( #101426 )
...
Further investigation seems to show that changing this behavior (making empty tensors sometimes have a non-null data_ptr) was the real problem with #98090. Adding tests to lock down this behavior so we don't change it by accident again.
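A hedged sketch of the invariant these tests lock down (as I read the description above):
```
import torch

t = torch.empty(0)
# an empty tensor's storage is expected to report a null (zero) data pointer
assert t.numel() == 0
assert t.untyped_storage().data_ptr() == 0
```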
Differential Revision: [D45873002](https://our.internmc.facebook.com/intern/diff/D45873002/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101426
Approved by: https://github.com/zou3519
2023-07-27 05:19:42 +00:00
Nikita Karetnikov
eac9e1b35f
[OpInfo] add reference and error inputs for multilabel_margin_loss ( #105523 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105523
Approved by: https://github.com/ezyang
2023-07-23 02:16:29 +00:00
Justin Chu
4cc1745b13
[BE] f-stringify torch/ and scripts ( #105538 )
...
This PR is a follow-up in the pyupgrade series, converting more strings to f-strings using `flynt`.
- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/
Command used:
```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```
and excluded `collect_env.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang , https://github.com/malfet
2023-07-21 19:35:24 +00:00
Justin Chu
73e1455327
[BE] Enable ruff's UP rules and autoformat test/ ( #105434 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
Kurt Mohler
fcb7d4b358
Mark bincount CUDA deterministic if weights are not given ( #105244 )
...
Fixes #98316
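A hedged sketch of the behavior change (requires a CUDA device; weights are omitted so the deterministic path applies):
```
import torch

torch.use_deterministic_algorithms(True)
x = torch.randint(0, 10, (1000,), device="cuda")
# without weights, bincount no longer triggers a nondeterministic-algorithm error
counts = torch.bincount(x, minlength=10)
print(counts)
```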
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105244
Approved by: https://github.com/mikaylagawarecki
2023-07-18 01:16:51 +00:00