Commit Graph

245 Commits

Author SHA1 Message Date
Edward Z. Yang
03b254e49f Extend torch function support to ALL arguments, not just scalar type (but not insides of list) (#145089)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145089
Approved by: https://github.com/albanD, https://github.com/zou3519
2025-08-07 23:43:53 +00:00
cyy
b0556110e5 Remove unsafe PyTorchError constructor (#154961)
Use libfmt in call sites of PyTorchError.
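
For context, a minimal sketch of the libfmt pattern used at such call sites (the function and message below are hypothetical, not the actual PyTorch code):

```c++
#include <cstddef>
#include <stdexcept>

#include <fmt/format.h>

// Hypothetical call site: building the message with fmt::format instead of a
// printf-style varargs constructor means argument/format mismatches are
// caught at compile time instead of producing undefined behavior.
void check_arg_count(std::size_t got, std::size_t expected) {
  if (got != expected) {
    throw std::runtime_error(
        fmt::format("expected {} argument(s), but got {}", expected, got));
  }
}
```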

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154961
Approved by: https://github.com/albanD
2025-07-11 18:22:53 +00:00
PyTorch MergeBot
d5e6f42094 Revert "Use std::string_view in torchgen (#157050)"
This reverts commit 064288cbab.

Reverted https://github.com/pytorch/pytorch/pull/157050 on behalf of https://github.com/jeanschmidt due to Seems to have broken internal builds, more details on D77449943. @ezyang may I count on your help to get those changes merged? ([comment](https://github.com/pytorch/pytorch/pull/157050#issuecomment-3020222668))
2025-06-30 18:08:54 +00:00
cyy
064288cbab Use std::string_view in torchgen (#157050)
Let the generated code use std::string_view.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157050
Approved by: https://github.com/ezyang
2025-06-27 06:36:10 +00:00
Aaron Gokaslan
0cd18ba1ca [BE][Ez] Update deprecated pybind11 functions (#154798)
* getType() is deprecated; replace it with the new/proper static methods. These are backwards compatible with the old pybind11 versions we support, so break this off before we upgrade to pybind11 3.0 (where these methods are dropped) in #154115
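
A minimal sketch of the replacement pattern, assuming the deprecated member in question is `obj.get_type()` and its supported static counterpart is `py::type::of()`:

```c++
#include <pybind11/pybind11.h>

namespace py = pybind11;

// Sketch: py::type::of() is the static spelling that survives the pybind11
// 3.0 upgrade; the member get_type() is deprecated and later removed.
py::type type_of(const py::object& obj) {
  // Old: return obj.get_type();
  return py::type::of(obj);
}
```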

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154798
Approved by: https://github.com/jansel, https://github.com/cyyever
2025-06-01 06:17:50 +00:00
Laith Sakka
ab5137b048 used guard_or_false instead of guard_size_oblivious in is_int_or_symint (#154167)
This is a short circuit that we should not fail on. Before this PR we would not fail on u0 or u0+u1
(but only if they are size-like), yet we would fail on expressions like u0-u1 for no good reason.
guard_or_false seems appropriate for that reason.

The guard was added in https://github.com/pytorch/pytorch/pull/122145; there were no unit tests for me to verify
why it was added, and I could not reproduce the problem using the associated issue (the example does not work).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154167
Approved by: https://github.com/bobrenjc93
ghstack dependencies: #154154, #154164
2025-05-26 21:59:45 +00:00
cyy
116af809eb Use std::string_view (#145906)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145906
Approved by: https://github.com/albanD
2025-01-30 03:14:27 +00:00
Edward Z. Yang
b3e90c8c33 Add support for torch function on dtype arguments (#145085)
Along the lines of https://github.com/pytorch/pytorch/issues/119194 although it doesn't actually address the FCD case.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145085
Approved by: https://github.com/vmoens, https://github.com/Skylion007
2025-01-21 17:44:47 +00:00
Brian Hirsh
4831f89790 support numbers as tensors for aten.copy(Tensor, Tensor) (#141161)
Fixes https://github.com/pytorch/pytorch/issues/141149. `aten.copy_` supports numbers as tensors in the python arg parser. So we need to give the same treatment to `aten.copy`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141161
Approved by: https://github.com/ezyang
2025-01-16 00:08:25 +00:00
cyy
96be048f06 [1/N] Avoid copy in std::get (#141812)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141812
Approved by: https://github.com/Skylion007
2024-12-01 03:53:35 +00:00
cyy
bf1b8adee6 Turn static inline into static function (#139843)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139843
Approved by: https://github.com/ezyang
2024-11-07 23:58:18 +00:00
PyTorch MergeBot
68f1b52d8a Revert "Turn static inline into static function (#139843)"
This reverts commit 72d3f5b26d.

Reverted https://github.com/pytorch/pytorch/pull/139843 on behalf of https://github.com/ZainRizvi due to Sorry but this is causing tests to fail on trunk. See [GH job link](https://github.com/pytorch/pytorch/actions/runs/11729669425/job/32675829894) [HUD commit link](72d3f5b26d) ([comment](https://github.com/pytorch/pytorch/pull/139843#issuecomment-2463354131))
2024-11-07 22:29:45 +00:00
cyy
83fa1014f1 [3/N] Replace c10::sv with std::sv (#139861)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139861
Approved by: https://github.com/ezyang
2024-11-07 20:03:57 +00:00
cyy
72d3f5b26d Turn static inline into static function (#139843)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139843
Approved by: https://github.com/ezyang
2024-11-07 19:08:41 +00:00
Bob Ren
8d3d47e439 Trigger symfloat specialization in argument binding code (#139454)
Fixes the test `python test/inductor/test_torchinductor.py CpuTests.test_upsample_cat_conv_cpu` when `specialize_float=False`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139454
Approved by: https://github.com/ezyang
ghstack dependencies: #139569, #139457, #139568, #139572, #139846
2024-11-07 16:10:23 +00:00
cyy
4a2da52137 [1/N] Replace c10::sv with std::sv (#139453)
Picks some safe replacements.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139453
Approved by: https://github.com/Skylion007
2024-11-01 05:39:37 +00:00
albanD
88e54de219 More nogil unsafe API fix (#137142)
Covers the PyDict APIs and confirms that no update is needed for the PyModule ones.
The rest was already covered in https://github.com/pytorch/pytorch/pull/136899
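
As an illustration of the kind of change involved (not the PR's actual diff), a minimal sketch assuming CPython >= 3.13 or pythoncapi_compat.h, which provide `PyDict_GetItemRef`:

```c++
#include <Python.h>

// Sketch: PyDict_GetItemRef returns a new strong reference and reports
// errors, which is safe under free-threaded (no-GIL) builds, unlike the
// borrowed-reference PyDict_GetItem.
static PyObject* lookup(PyObject* dict, PyObject* key) {
  PyObject* value = nullptr;
  int rc = PyDict_GetItemRef(dict, key, &value);
  if (rc < 0) {
    return nullptr;   // error already set by CPython
  }
  if (rc == 0) {
    Py_RETURN_NONE;   // key not present
  }
  return value;       // caller owns the returned strong reference
}
```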

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137142
Approved by: https://github.com/eqy, https://github.com/Skylion007
2024-10-04 21:56:34 +00:00
PyTorch MergeBot
9670e9e5b0 Revert "Mark PyTorch module as no-gil valid and pythoncapi_compat.h (#136899)"
This reverts commit 4f93de8951.

Reverted https://github.com/pytorch/pytorch/pull/136899 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/136899#issuecomment-2392721534))
2024-10-04 03:28:31 +00:00
albanD
4f93de8951 Mark PyTorch module as no-gil valid and pythoncapi_compat.h (#136899)
The PyList_GetItem call sites are audited, but other APIs are not yet (they will be done in a follow-up PR to keep this one small enough).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136899
Approved by: https://github.com/colesbury, https://github.com/atalman
2024-10-01 22:05:35 +00:00
albanD
067d203b22 Upgrade pybind11 API calls for 3.13t (#136370)
This is a modified version of https://github.com/pytorch/pytorch/pull/130341 that preserves support for older pybind11 versions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136370
Approved by: https://github.com/Skylion007, https://github.com/malfet
2024-09-20 23:09:55 +00:00
Basil Wong
b7bcfdaff2 Change deprecate warning on dispatch_on_subclass to warn once (#132374)
Summary:
# Problem

`TORCH_WARN` can cause massive log spam.

I output the logs for before and after adding this change.

*Before:*

* The log file size was ~61.15 MB (61148028 bytes).

*After:*

* The log file size was ~56.44 MB (56444057 bytes).
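
For reference, a minimal sketch of the difference (the message below is illustrative, not the actual warning text):

```c++
#include <c10/util/Exception.h>

void deprecated_path_sketch() {
  // TORCH_WARN fires every time this line is reached; TORCH_WARN_ONCE fires
  // only the first time this call site is hit, which avoids log spam when the
  // deprecated path is exercised on every operator call.
  TORCH_WARN_ONCE(
      "dispatch_on_subclass-style overrides are deprecated (illustrative message).");
}
```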

# Context

Looks like we tried to land this change earlier but it was reverted:

* D59413413
* Reverted https://github.com/pytorch/pytorch/pull/130047 on behalf of https://github.com/clee2000 due to broke test_overrides.py::TestTorchFunctionWarning::test_warn_on_invalid_torch_function

# Testing Update

`test_warn_on_invalid_torch_function` would fail because the warning would not fire when handling the second torch function class, since `TORCH_WARN_ONCE` suppresses repeats globally.

Updated the test so that it runs separate programs. (I was not able to actually run the test; could someone help me with that?)

Test Plan: Need help with this...

Differential Revision: D60561181

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132374
Approved by: https://github.com/ezyang
2024-08-05 20:02:33 +00:00
Yidi Wu
32c57e78ed Specialize sym node when used as device kwarg (#131811)
Fixes https://github.com/pytorch/pytorch/issues/131189.

We specialize the symint in python_arg_parser when it is used as a device kwarg.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131811
Approved by: https://github.com/yanboliang, https://github.com/jansel, https://github.com/albanD
2024-07-30 17:11:57 +00:00
PyTorch MergeBot
161bb67116 Revert "Fix static py::object dangling pointer with py::gil_safe_call_once_and_store (#130341)"
This reverts commit ace6decc99.

Reverted https://github.com/pytorch/pytorch/pull/130341 on behalf of https://github.com/clee2000 due to unfortunately the internal pybind update got reverted cc @malfet ([comment](https://github.com/pytorch/pytorch/pull/130341#issuecomment-2253147079))
2024-07-26 17:02:56 +00:00
cyy
2988d33c80 [3/N] Fix clang-tidy warnings in jit (#131830)
Follows #131735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131830
Approved by: https://github.com/ezyang
2024-07-26 15:46:28 +00:00
Brian Hirsh
5570a0da0a dont dispatch aten.conj(scalar_tensor) back to python (#131482)
https://github.com/pytorch/pytorch/issues/105290

The problem in the original flow is that:

(1) the user calls `torch.mul(complex_tensor, complex_scalar)`
(2) python arg parser wraps the complex scalar in a `scalar_tensor`, and dispatches to `aten.mul.Tensor(self, scalar_other)`
(3) autograd sees `aten.mul.Tensor`, calls `scalar_other.conj()` [here](https://github.com/pytorch/pytorch/blob/main/torch/csrc/autograd/FunctionsManual.cpp#L597)
(4) during proxy tensor tracing, this gets dispatched to `aten._conj(scalar_tensor)`
(5) when we hit __torch_dispatch__, the scalar_tensor is converted back into a plain python scalar
(6) we error during tracing, because in `FunctionalTensorMode.__torch_dispatch__` we try to redispatch on `aten._conj.default(plain_python_scalar)`, and this overload does not accept python scalars.

My attempted fix in this PR is to update `TensorBase::conj()` to check if the current tensor is a scalar tensor (wrapped number), and if so, manually:
(1) convert the scalar tensor back into a scalar
(2) call scalar.conj() directly
(3) convert the result back into a wrapped tensor

This avoids having to go through python entirely in the tracing case (which is fine, because these scalar tensors are constants that we can const-prop during tracing anyway).

Notably, I did **not** add e.g. a new `aten._conj.Scalar` overload. This would not actually fix the problem, since the bug is that we call `aten._conj.default(python_scalar)` directly. We would also need to muck with all `__torch_dispatch__` call sites to know to convert python scalars back into tensors directly.
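
A conceptual sketch of that fast path (helper names and calls here are assumptions for illustration, not the exact PR code):

```c++
#include <ATen/ATen.h>
#include <ATen/ScalarOps.h>

// Sketch: if the input is a wrapped Python number, conjugate the scalar
// directly and re-wrap it, instead of dispatching aten._conj on a scalar
// tensor (which would round-trip through __torch_dispatch__ during tracing).
at::Tensor conj_sketch(const at::Tensor& self) {
  if (self.unsafeGetTensorImpl()->is_wrapped_number()) {
    c10::Scalar s = self.item();                      // recover the scalar
    at::Tensor out = at::scalar_to_tensor(s.conj());  // conj, then re-wrap
    out.unsafeGetTensorImpl()->set_wrapped_number(true);
    return out;
  }
  return self.conj();  // normal dispatch path
}
```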

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131482
Approved by: https://github.com/zou3519, https://github.com/ezyang
ghstack dependencies: #131403
2024-07-26 14:22:20 +00:00
Xuehai Pan
ace6decc99 Fix static py::object dangling pointer with py::gil_safe_call_once_and_store (#130341)
Fix static `py::object`s with `py::gil_safe_call_once_and_store`.

The following code keeps a static `py::object` alive until program shutdown, at which point its destructor is called. The destructor calls `Py_DECREF(obj.m_ptr)`, which may raise a segmentation fault because the Python interpreter may already have been finalized.

```c++
void func() {
    static py::object obj = py::module_::import("foo").attr("bar");

    ...
}
```

One correct fix is to hold the object through a raw pointer rather than a static instance, deliberately leaking it so its destructor never runs:

```c++
void func() {
    static py::object* obj_ptr = new py::object{py::module_::import("foo").attr("bar")};
    py::object obj = *obj_ptr;

    ...
}
```

This PR uses the `py::gil_safe_call_once_and_store` function from `pybind11`, which can run arbitrary initialization code only once under the Python GIL thread safely.

```c++
void func() {
    PYBIND11_CONSTINIT static py::gil_safe_call_once_and_store<py::object> storage;
    py::object obj = storage
                         .call_once_and_store_result(
                             []() -> py::object {
                                 return py::module_::import("foo").attr("bar");
                             }
                         )
                         .get_stored();

    ...
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130341
Approved by: https://github.com/ezyang, https://github.com/malfet
2024-07-25 05:53:09 +00:00
Richard Zou
3e9cf1cc80 Fix potential segfault during deletion (#131036)
Summary: See comment in code

Test Plan: code reading

Reviewed By: albanD

Differential Revision: D59872819

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131036
Approved by: https://github.com/Skylion007, https://github.com/albanD
2024-07-18 21:18:31 +00:00
rzou
ba941769b5 Add API for open registration between operators and subclasses (and modes) (#130064)
We add torch.library.Library._register_torch_dispatch_rule. Here, a user
can provide us a specific rule to run for a specific
(torch_dispatch_class, operator) pair. The motivation is that a user
might want to extend a subclass/mode but may not have access to the
source code of the subclass/mode.

I'll make this public in a follow-up PR if we think the approach and API
is good.

Keep in mind that many subclasses will likely deliver their own open
registration solution (DTensor has register_sharding_prop_rule and NJT
has register_jagged_op); _register_torch_dispatch_rule is meant as a
catch-all open registration mechanism for when the subclass hasn't
provided anything more specific.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130064
Approved by: https://github.com/albanD
2024-07-12 14:13:01 +00:00
PyTorch MergeBot
492de213e2 Revert "Change deprecated warning on dispatch_on_subclass to warn once (#130047)"
This reverts commit f21a21828a.

Reverted https://github.com/pytorch/pytorch/pull/130047 on behalf of https://github.com/albanD due to The failure on the PR are valid, they should not have been ignored ([comment](https://github.com/pytorch/pytorch/pull/130047#issuecomment-2223488933))
2024-07-11 17:24:02 +00:00
Iris Zhang (PyTorch)
f21a21828a Change deprecated warning on dispatch_on_subclass to warn once (#130047)
Summary:
Right now the deprecated warning fires on every operator that calls into torch_function. Changing it to TORCH_WARN_ONCE instead.

More context in https://fb.workplace.com/groups/260102303573409/permalink/445299188387052/

Test Plan: Sandcastle

Differential Revision: D59338775

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130047
Approved by: https://github.com/XilunWu
2024-07-11 17:02:26 +00:00
PyTorch MergeBot
ce499eee0c Revert "Add API for open registration between operators and subclasses (and modes) (#130064)"
This reverts commit c23d103afa.

Reverted https://github.com/pytorch/pytorch/pull/130064 on behalf of https://github.com/izaitsevfb due to fails internal builds, see [D59553526](https://www.internalfb.com/diff/D59553526) ([comment](https://github.com/pytorch/pytorch/pull/130064#issuecomment-2221587575))
2024-07-10 21:50:32 +00:00
rzou
c23d103afa Add API for open registration between operators and subclasses (and modes) (#130064)
We add torch.library.Library._register_torch_dispatch_rule. Here, a user
can provide us a specific rule to run for a specific
(torch_dispatch_class, operator) pair. The motivation is that a user
might want to extend a subclass/mode but may not have access to the
source code of the subclass/mode.

I'll make this public in a follow-up PR if we think the approach and API
is good.

Keep in mind that many subclasses will likely deliver their own open
registration solution (DTensor has register_sharding_prop_rule and NJT
has register_jagged_op); _register_torch_dispatch_rule is meant as a
catch-all open registration mechanism for when the subclass hasn't
provided anything more specific.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130064
Approved by: https://github.com/albanD
2024-07-09 21:11:27 +00:00
cyy
29861779ce [2/N] Change #include <c10/util/Optional.h> to #include <optional> (#130236)
Follows #128301. The changes were made by grep and sed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130236
Approved by: https://github.com/ezyang
2024-07-09 03:17:24 +00:00
PyTorch MergeBot
d44c30e2f9 Revert "Add API for open registration between operators and subclasses (and modes) (#130064)"
This reverts commit 922d2737d5.

Reverted https://github.com/pytorch/pytorch/pull/130064 on behalf of https://github.com/huydhn due to Sorry for reverting your change but test_profiler_tree is failing in trunk after this lands 922d2737d5, maybe a landrace ([comment](https://github.com/pytorch/pytorch/pull/130064#issuecomment-2216135497))
2024-07-09 01:48:38 +00:00
rzou
922d2737d5 Add API for open registration between operators and subclasses (and modes) (#130064)
We add torch.library.Library._register_torch_dispatch_rule. Here, a user
can provide us a specific rule to run for a specific
(torch_dispatch_class, operator) pair. The motivation is that a user
might want to extend a subclass/mode but may not have access to the
source code of the subclass/mode.

I'll make this public in a follow-up PR if we think the approach and API
is good.

Keep in mind that many subclasses will likely deliver their own open
registration solution (DTensor has register_sharding_prop_rule and NJT
has register_jagged_op); _register_torch_dispatch_rule is meant as a
catch-all open registration mechanism for when the subclass hasn't
provided anything more specific.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130064
Approved by: https://github.com/albanD
2024-07-08 22:13:05 +00:00
cyy
f4dcf2ae93 [1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128301
Approved by: https://github.com/ezyang, https://github.com/r-barnes
2024-07-08 07:03:53 +00:00
PyTorch MergeBot
6fc771d19b Revert "Change deprecated warning on dispatch_on_subclass to warn once (#130047)"
This reverts commit 8ff243bcf1.

Reverted https://github.com/pytorch/pytorch/pull/130047 on behalf of https://github.com/clee2000 due to broke test_overrides.py::TestTorchFunctionWarning::test_warn_on_invalid_torch_function on multiple jobs 8ff243bcf1 https://github.com/pytorch/pytorch/actions/runs/9812489165/job/27097342443.  Dr CI is doing something weird about the unstable failures ([comment](https://github.com/pytorch/pytorch/pull/130047#issuecomment-2211409090))
2024-07-05 21:03:36 +00:00
Iris Zhang (PyTorch)
8ff243bcf1 Change deprecated warning on dispatch_on_subclass to warn once (#130047)
Summary:
Right now the deprecated warning fires on every operator that calls into torch_function. Changing it to TORCH_WARN_ONCE instead.

More context in https://fb.workplace.com/groups/260102303573409/permalink/445299188387052/

Test Plan: Sandcastle

Differential Revision: D59338775

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130047
Approved by: https://github.com/XilunWu
2024-07-05 18:52:49 +00:00
PyTorch MergeBot
846bb30e13 Revert "[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)"
This reverts commit bd72e28314.

Reverted https://github.com/pytorch/pytorch/pull/128301 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it fails XLA build bd72e28314. Please rebase your PR before relanding because I think the failure is hidden by an unrelated broken trunk XLA failure from your current base commit ([comment](https://github.com/pytorch/pytorch/pull/128301#issuecomment-2169035822))
2024-06-15 01:58:20 +00:00
cyy
bd72e28314 [1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128301
Approved by: https://github.com/ezyang
2024-06-14 23:21:01 +00:00
feifan
c9172d4471 print default value in FunctionSignature (#127059)
Fixes #[126758](https://github.com/pytorch/pytorch/issues/126758) and #[126759](https://github.com/pytorch/pytorch/issues/126759)

The output in the issue is not accurate because `FunctionSignature::toString()` prints the schema strings without defaults.
cb6ef68caa/torch/csrc/utils/python_arg_parser.cpp (L1282-L1283)
This PR adds a `default_value` field to store the default string that should be printed. Alternatively, a new API could be added to convert `default_bool`/`default_int` back to strings, but that is slightly more complicated.
result:
![image](https://github.com/pytorch/pytorch/assets/37650440/f58a4cbf-b0f4-4c81-9106-59f0d35c54ea)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127059
Approved by: https://github.com/janeyx99
2024-05-28 18:04:31 +00:00
Richard Barnes
ed327876f5 [codemod] c10:optional -> std::optional (#126135)
Generated by running the following from PyTorch root:
```
find . -regex ".*\.\(cpp\|h\|cu\|hpp\|cc\|cxx\)$" | grep -v "build/" | xargs -n 50 -P 4 perl -pi -e 's/c10::optional/std::optional/'
```

`c10::optional` is just an alias for `std::optional`. This removes usages of that alias in preparation for eliminating it entirely.
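
Illustrative before/after (the function below is hypothetical, since the change is purely textual):

```c++
#include <cstdint>
#include <optional>

// Before the codemod this signature would have read
//   c10::optional<int64_t> maybe_dim(bool has_dim, int64_t dim);
// c10::optional is just an alias for std::optional, so behavior is unchanged.
std::optional<int64_t> maybe_dim(bool has_dim, int64_t dim) {
  return has_dim ? std::optional<int64_t>(dim) : std::nullopt;
}
```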

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126135
Approved by: https://github.com/Skylion007, https://github.com/malfet, https://github.com/albanD, https://github.com/aaronenyeshi
2024-05-14 19:35:51 +00:00
cyy
fb90b4d4b2 [TorchGen] Use std::optional in generated code (#121454)
This PR changes TorchGen to generate std::optional.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121454
Approved by: https://github.com/ezyang
2024-03-29 14:11:09 +00:00
Edward Z. Yang
7678be4667 Replace numel with sym_numel in is_int_or_symint (#122145)
Fixes https://github.com/pytorch/pytorch/issues/122124
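
A minimal sketch of the pattern (the helper below is an assumption for illustration; the real check is `is_int_or_symint` in the python arg parser):

```c++
#include <ATen/ATen.h>

// Sketch: sym_numel() returns a SymInt, so the single-element check also
// works for tensors with symbolic shapes, where numel() would have to
// materialize a concrete value.
bool is_single_element(const at::Tensor& t) {
  // Old: return t.numel() == 1;
  return t.sym_numel() == 1;
}
```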

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122145
Approved by: https://github.com/Skylion007
2024-03-19 21:58:43 +00:00
PyTorch MergeBot
dabb90f2a4 Revert "[Exception] [6/N] Remove use of torch::TypeError (#117964)"
This reverts commit 87335fabae.

Reverted https://github.com/pytorch/pytorch/pull/117964 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/117964#issuecomment-1913079096))
2024-01-27 08:44:34 +00:00
cyy
87335fabae [Exception] [6/N] Remove use of torch::TypeError (#117964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117964
Approved by: https://github.com/albanD
2024-01-25 03:35:58 +00:00
Edward Z. Yang
220cf46c2a Always accept 0-d scalar tensors as int, even if __index__ fails (#117451)
Fixes https://github.com/pytorch/pytorch/issues/117288

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117451
Approved by: https://github.com/yanboliang
2024-01-14 14:39:46 +00:00
cyy
2b5a201aa6 [Exception] [3/N] Replace torch::NotImplementedError and torch::LinAlgError with C10 counterparts. (#116824)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116824
Approved by: https://github.com/albanD
2024-01-11 11:27:04 +00:00
Edward Z. Yang
8bcdde5058 Support uint{16,32,64} deterministic empty fill and scalar Python binding handling (#116807)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116807
Approved by: https://github.com/albanD
ghstack dependencies: #116805, #116806
2024-01-10 02:17:23 +00:00
Nikita Shulga
d7caef7996 [CI] Update clang-format (#116002)
To 17.0.6, built using https://github.com/pytorch/test-infra/blob/main/.github/workflows/clang-tidy-linux.yml

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116002
Approved by: https://github.com/suo
2023-12-18 14:58:46 +00:00