Commit Graph

25 Commits

Author SHA1 Message Date
Laith Sakka
7cfd054075 [attempt 2] Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#157472)
Summary:
When we compute contiguity for a tensor with dynamic shapes, we:
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.

sym_is_contiguous returns a SymBool that is then either evaluated, or has guard_or_false called
on it to avoid data-dependent errors (DDEs).

ex:
 bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
is_contiguous_or_false is a helper function that does exactly that.

In this PR I only handle default contiguity; changes for other formats like channels_last will follow.
We use this pattern in several places in this PR to avoid DDEs.
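A minimal self-contained sketch of the pattern, using toy stand-ins for `SymBool` and `Tensor` rather than the real c10 types:

```
#include <iostream>
#include <optional>

// Toy stand-in for c10::SymBool: with dynamic shapes the truth value
// may be unknown (data-dependent) rather than a concrete bool.
struct SymBool {
  std::optional<bool> hint;  // empty = cannot be evaluated without a guard

  // Return the known value if there is one; otherwise fall back to
  // `false` instead of raising a data-dependent error.
  bool guard_or_false(const char* /*file*/, int /*line*/) const {
    return hint.value_or(false);
  }
};

// Toy tensor exposing the pattern from the commit message.
struct Tensor {
  SymBool sym_is_contiguous() const {
    return SymBool{std::nullopt};  // pretend the answer is unbacked
  }
  bool is_contiguous_or_false(const char* file, int line) const {
    return sym_is_contiguous().guard_or_false(file, line);
  }
};

int main() {
  Tensor input;
  bool is_contiguous =
      input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
  // Unknown contiguity is treated conservatively as "not contiguous".
  std::cout << std::boolalpha << is_contiguous << '\n';  // prints: false
}
```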

Test Plan:
contbuild & OSS CI,

Rollback Plan:

Reviewed By: malfet

Differential Revision: D77639021

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157472
Approved by: https://github.com/aorenste
2025-07-02 23:12:29 +00:00
PyTorch MergeBot
c6a27bae36 Revert "[do not revert] Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#155590)"
This reverts commit d0a9629435.

Reverted https://github.com/pytorch/pytorch/pull/155590 on behalf of https://github.com/laithsakka due to was asked to land this internally ([comment](https://github.com/pytorch/pytorch/pull/155590#issuecomment-3025796794))
2025-07-01 22:58:14 +00:00
Laith Sakka
d0a9629435 [do not revert] Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#155590)
When we compute contiguity for a tensor with dynamic shapes, we:
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.

sym_is_contiguous returns a SymBool that is then either evaluated, or has guard_or_false called
on it to avoid data-dependent errors (DDEs).

ex:
 bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
is_contiguous_or_false is a helper function that does exactly that.

In this PR I only handle default contiguity; changes for other formats like channels_last will follow.
We use this pattern in several places in this PR to avoid DDEs.
Differential Revision: [D77183032](https://our.internmc.facebook.com/intern/diff/D77183032)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155590
Approved by: https://github.com/ezyang
2025-07-01 21:39:38 +00:00
PyTorch MergeBot
1586521461 Revert "Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#155590)"
This reverts commit 2c76f31221.

Reverted https://github.com/pytorch/pytorch/pull/155590 on behalf of https://github.com/jeanschmidt due to Breaking 1000s of internal builds, it can't be properly landed internally; there are no options except revert and codev. ([comment](https://github.com/pytorch/pytorch/pull/155590#issuecomment-3023503929))
2025-07-01 11:23:00 +00:00
Laith Sakka
2c76f31221 Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#155590)
When we compute contiguity for a tensor with dynamic shapes, we:
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.

sym_is_contiguous returns a SymBool that is then either evaluated, or has guard_or_false called
on it to avoid data-dependent errors (DDEs).

ex:
 bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
is_contiguous_or_false is a helper function that does exactly that.

In this PR I only handle default contiguity; changes for other formats like channels_last will follow.
We use this pattern in several places in this PR to avoid DDEs.
Differential Revision: [D77183032](https://our.internmc.facebook.com/intern/diff/D77183032)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155590
Approved by: https://github.com/ezyang
2025-06-27 04:59:52 +00:00
Xu Han
6c1f1563e1 [inductor] fix UndefinedTensorImpl singleton can't export on Windows. (#132326)
This PR fixes the issue that `UndefinedTensorImpl::_singleton` can't be exported on Windows.
Snapshot: [image]

The reason is that MSVC can't export a class's static data members with external linkage; ref: https://learn.microsoft.com/en-us/cpp/cpp/using-dllimport-and-dllexport-in-cpp-classes?view=msvc-170#_pluslang_using_dllimport_and_dllexport_in_c2b2bselectivememberimportexport

On Windows, I use a different singleton implementation to avoid the issue.
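One standard workaround, sketched here under the assumption that this is roughly what the PR does for Windows, is to move the instance from an exported static data member into a function-local static, since MSVC can export the accessor function even though it can't export the data member:

```
// Before (sketch): an exported static data member, which MSVC rejects.
//
//   class C10_API UndefinedTensorImpl final {
//     static UndefinedTensorImpl _singleton;  // can't be dllexport'ed
//   };
//
// After (sketch): a function-local static behind an exported accessor.
class UndefinedTensorImplSketch final {
 public:
  static UndefinedTensorImplSketch* singleton() {
    // Initialized on first use; the function itself exports fine.
    static UndefinedTensorImplSketch instance;
    return &instance;
  }

 private:
  UndefinedTensorImplSketch() = default;
};
```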

With this PR, cpp_wrapper starts to work on Windows.
[image]

As the next step, I will enable the cpp_wrapper UTs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132326
Approved by: https://github.com/jgong5, https://github.com/desertfire
2024-08-01 13:37:12 +00:00
cyy
1544c37520 [7/N] Fixes clang-tidy warnings in c10/{core,util}/*.h (#115495)
This PR continues to fix clang-tidy warnings for headers in c10/core and c10/util.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115495
Approved by: https://github.com/malfet
2023-12-19 02:14:30 +00:00
Edward Z. Yang
c5a8946e40 Revert "Revert "Redo how custom/python_custom methods on TensorImpl work (#84796)" (#84806)
This reverts commit ca3b2bfbe3.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84806
Approved by: https://github.com/Chillee
2022-09-10 06:17:35 +00:00
Eli Uriegas
ca3b2bfbe3 Revert "Redo how custom/python_custom methods on TensorImpl work (#84796)
This reverts commit 591b75bf98.

Manual revert of https://github.com/pytorch/pytorch/pull/84641

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84796
Approved by: https://github.com/izaitsevfb
2022-09-10 00:18:13 +00:00
Edward Z. Yang
591b75bf98 Redo how custom/python_custom methods on TensorImpl work (#84641)
A longstanding confusion in the implementation of fake tensor and proxy tensor is what to do about torch.ops.aten.sym_sizes and related calls. In particular, when you have a tensor that (1) has symbolic shapes and (2) has a `__torch_dispatch__` call, previously, you would always get `__torch_dispatch__` calls for sizes/strides queries, *even if you didn't request them* via the dispatch kwargs in `make_wrapper_subclass`.

The reason for this is because we were previously mixing several concepts: "I want to dispatch to Python", "I want to call a virtual method" and "I have dynamic shapes". A single boolean variable controlled all of these things, and so it was not possible to understand inside TensorImpl what the user had actually originally requested.

In this PR, we track each of these concepts individually so that we can preserve user intent. Then, we combine these into a single "policy" variable that controls whether or not we can use the fastpath or not. For the policy to trigger, we only need one of the exceptional cases to be true.
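Conceptually, the change looks like this (a sketch with invented member names, not the actual TensorImpl fields): each ask is tracked on its own, and only the derived policy gates the fastpath:

```
// Sketch: separate intent bits instead of one overloaded boolean.
struct SizesStridesIntent {
  bool custom_cpp = false;       // "I want to call a virtual method"
  bool python_dispatch = false;  // "I want to dispatch to Python"
  bool symbolic = false;         // "I have dynamic shapes"

  // Combined policy: any exceptional case forces the slow path, but
  // the individual asks stay recoverable (e.g. for matches_python_custom).
  bool fastpath_allowed() const {
    return !custom_cpp && !python_dispatch && !symbolic;
  }
};
```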

Billing of changes:
* Rename `set_sizes_strides_policy` to `set_custom_sizes_strides`; in general, you cannot DIRECTLY set policy; you have to set it indirectly via the public functions.
* Some helpers for sizes and strides, since it's more complicated (it is an enum, rather than just bools as is the case for device and layout). `matches_python_custom` is used to test the Python dispatch user ask. `matches_policy` does the policy test (only used in the user-facing functions).
* I reorged the accessor methods so that they are more logical. This makes the diff bad, so I recommend reading the final code directly.
* The default custom implementations now more reliably call their default() implementations
* As bonus refactor, I devirtualized some functions that don't need to be virtual
* `set_sym_sizes_and_strides` is renamed to `set_sizes_and_strides` to make it easier to use in template contexts; it optionally takes a storage offset now so you can set all three values at the same time. If you use the SymInt overload but there are no symbolic integers, we give you a normal resize.
* This adds `sym_storage_offset` since we had that in the symbolic shapes branch and there's no reason not to put it in (and it reduces merge conflicts)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84641
Approved by: https://github.com/wconstab
2022-09-09 13:41:13 +00:00
Edward Z. Yang
2896f81dd4 Consolidate customization contiguous/sizes policy into unified policy
Prior to this PR, we had a mish-mash of ways of getting unconventional
sizes/strides behavior:

- In OSS (but not in fbcode), some methods are virtual and you
  can override them directly

- There is an is_contiguous policy which is a bitfield tag that lets
  you toggle is_contiguous to error or hit a virtual method
  is_contiguous_custom if it is set.  Ordinarily is_contiguous()
  is virtual and you can just override it, but this works EVEN IF
  is_contiguous() is non-virtual (e.g., in fbcode)

- There is also a sizes policy which is the same idea but for sizes

This PR unifies these mechanisms, and in doing so, eliminates the
maybe virtual/not-virtualness of the methods in question.  The primary
downside of this change is that it is BC-breaking (but the BC break is
very easy to fix!)

The new scheme works like this: we have three levels of policy for
sizes/strides (order matters).

- The Default policy is a conventional dense tensor, where we use
  all of the built-in fields to directly represent the
  sizes/strides/numel/contiguity of the tensor, and it is possible
  to bypass virtual call entirely.

- The CustomStrides policy represents tensors which have a custom
  notion of strides (most typically, that they don't support them),
  shunting strides() and is_contiguous() to virtual methods
  strides_custom() and is_contiguous_custom().  This INCLUDES handling
  for contiguity, since they typically go hand-in-hand (although
  the situation is murky with batched tensors).  The default
  implementations of these functions raise errors saying the tensor
  doesn't support them.

- The CustomSizes policy represents tensors which have a custom
  notion of sizes (the two notable examples are nested tensor, which
  doesn't have a representation of sizes in the conventional form, and
  XLA/LTC tensor, which synchronizes its sizes with an underlying
  compiler backend).  This shunts sizes(), numel() and dim() (along
  with everything from strides) to _custom() variants.

There is no special policy for erroring; instead, we just do a vcall
and expect the virtual method to raise an exception (the performance
hit from the vcall doesn't matter because you're about to raise a C++
exception anyway).  The default implementations of all overridable
functions are available at _default() which is helpful in some
situations when you just want to do a "sync" and then run the
conventional semantics.
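A condensed sketch of the resulting mechanism (member names simplified; the real TensorImpl stores the policy in a bitfield and shunts more methods):

```
#include <cstdint>
#include <stdexcept>

// Sketch of the three-level policy (ordered: each level implies the
// previous level's customizations).
enum class SizesStridesPolicy : std::uint8_t {
  Default = 0,        // dense tensor, fields read directly, no vcall
  CustomStrides = 1,  // strides()/is_contiguous() shunt to *_custom()
  CustomSizes = 2,    // sizes()/numel()/dim() shunt to *_custom() too
};

class TensorImplSketch {
 public:
  std::int64_t dim() const {
    // Fastpath: a single comparison guards the virtual call.
    if (policy_ >= SizesStridesPolicy::CustomSizes) {
      return dim_custom();
    }
    return dim_default();
  }

 protected:
  // The default implementation stays reachable for subclasses that
  // just want to "sync" and then run the conventional semantics.
  std::int64_t dim_default() const { return ndim_; }

  // Overridable; the default raises, so "error on access" needs no
  // dedicated policy level.
  virtual std::int64_t dim_custom() const {
    throw std::runtime_error("this tensor does not support dim()");
  }

  SizesStridesPolicy policy_ = SizesStridesPolicy::Default;
  std::int64_t ndim_ = 0;
};
```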

This PR could be extended further in two ways but I did not do them
due to time constraints:

- Ideally, all TENSORIMPL_MAYBE_VIRTUAL would be eliminated from
  TensorImpl, by using the same policy trick.

- set_size and set_stride are still virtual; it's not entirely clear
  the same trick should be used here though as these methods are
  deprecated.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77036

Approved by: https://github.com/bdhirsh
2022-05-11 00:23:07 +00:00
Scott Wolchok
44cc873fba [PyTorch] Autoformat c10 (#56830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56830

Opt into formatting on GitHub and format everything. This is a trial run before turning on formatting for more and eventually all of the codebase.

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D27979080

fbshipit-source-id: a80f0c48691c08ae8ca0af06377b87e6a2351151
2021-04-30 21:23:28 -07:00
Scott Wolchok
059ee85ca4 [PyTorch] Devirtualize TensorImpl::storage() (#51050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51050

Subclasses want to be able to make storage() calls throw, so
we find some free space in TensorImpl to add a flag that they can set
to make that happen without making storage() virtual. It should still
be inlineable.
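The shape of the trick, as a sketch (field and helper names are illustrative):

```
#include <stdexcept>

struct Storage { /* elided */ };

class TensorImplStorageSketch {
 public:
  // Non-virtual and small enough to inline: the common case pays only
  // for one branch on a flag stored in existing free space.
  const Storage& storage() const {
    if (storage_access_should_throw_) {
      throw_storage_access_error();
    }
    return storage_;
  }

 protected:
  void set_storage_access_should_throw() {
    storage_access_should_throw_ = true;
  }

 private:
  // Out-of-line so the cold path doesn't bloat the inlined accessor.
  [[noreturn]] static void throw_storage_access_error() {
    throw std::runtime_error("cannot access storage of this tensor");
  }

  Storage storage_;
  bool storage_access_should_throw_ = false;
};
```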
ghstack-source-id: 121819684

Test Plan:
Compared `perf stat` on 1M iterations on AdIndexer benchmark before/after

Before:
```
         74,483.15 msec task-clock                #    0.999 CPUs utilized            ( +-  0.14% )
            16,637      context-switches          #    0.223 K/sec                    ( +- 11.97% )
                 3      cpu-migrations            #    0.000 K/sec                    ( +-  7.20% )
           107,085      page-faults               #    0.001 M/sec                    ( +-  2.39% )
   147,356,440,831      cycles                    #    1.978 GHz                      ( +-  0.14% )  (50.06%)
   278,678,430,378      instructions              #    1.89  insn per cycle           ( +-  0.01% )  (50.05%)
    43,540,698,177      branches                  #  584.571 M/sec                    ( +-  0.01% )  (50.05%)
       141,028,843      branch-misses             #    0.32% of all branches          ( +-  1.00% )  (50.05%)

```

After:
```
         74,178.77 msec task-clock                #    0.999 CPUs utilized            ( +-  0.31% )
            17,125      context-switches          #    0.231 K/sec                    ( +-  3.41% )
                 3      cpu-migrations            #    0.000 K/sec
           109,535      page-faults               #    0.001 M/sec                    ( +-  1.04% )
   146,803,364,372      cycles                    #    1.979 GHz                      ( +-  0.30% )  (50.03%)
   277,726,600,254      instructions              #    1.89  insn per cycle           ( +-  0.02% )  (50.03%)
    43,299,659,815      branches                  #  583.720 M/sec                    ( +-  0.03% )  (50.03%)
       130,504,094      branch-misses             #    0.30% of all branches          ( +-  1.14% )  (50.03%)

```

Looks like approximately 0.3% instruction count win (and similarly for cycles, but that's within noise).

Reviewed By: ezyang

Differential Revision: D26013815

fbshipit-source-id: 07939957929070e18b9981d492d8279c9bb33c55
2021-02-17 11:48:06 -08:00
Scott Wolchok
6c24296795 [PyTorch] Devirtualize TensorImpl::has_storage (#51049)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51049

This diff makes it OK to query has_storage() on all TensorImpls. I added debug assertions that storage_ is indeed never set on them, which is required for this to be correct.
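Sketch of the corresponding non-virtual query plus the debug assertion described above (names illustrative):

```
#include <cassert>

class TensorImplHasStorageSketch {
 public:
  // Non-virtual: safe to call on every TensorImpl only because
  // subclasses that forbid storage never populate storage_.
  bool has_storage() const { return storage_ != nullptr; }

 protected:
  void debug_check_storage_invariant() const {
    // Debug assertion from the commit: impls whose storage() throws
    // must never actually have storage_ set.
    assert(!(storage_access_should_throw_ && storage_ != nullptr));
  }

 private:
  const void* storage_ = nullptr;  // stand-in for c10::Storage
  bool storage_access_should_throw_ = false;
};
```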
ghstack-source-id: 120714380

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D26008498

fbshipit-source-id: b3f55f0b57b04636d13b09aa55bb720c6529542c
2021-02-01 11:30:23 -08:00
Scott Wolchok
765062c085 [PyTorch] Devirtualize TensorImpl::storage_offset (#51048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51048

There doesn't seem to be any reason to prohibit accessing the always-zero storage_offset of those TensorImpls that prohibit set_storage_offset.
ghstack-source-id: 120714379

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D26008499

fbshipit-source-id: cd92ac0afdebbd5cf8f04df141843635113b6444
2021-02-01 11:27:13 -08:00
Scott Wolchok
9ebea77299 [PyTorch] Reapply D25687465: Devirtualize TensorImpl::dim() with macro (#50290)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50290

This was reverted because it landed after D24772023 (b73c018598), which
changed the implementation of `dim()`, without being rebased on top of it,
and thus broke the build.
ghstack-source-id: 119608505

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D25852810

fbshipit-source-id: 9735a095d539a3a6dc530b7b3bb758d4872d05a8
2021-01-13 15:15:32 -08:00
Scott Wolchok
b5d3826950 [PyTorch] Devirtualize TensorImpl::sizes() with macro (#50176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50176

UndefinedTensorImpl was the only type that overrode this, and IIUC we don't need to do it.
ghstack-source-id: 119609531

Test Plan: CI, internal benchmarks

Reviewed By: ezyang

Differential Revision: D25817370

fbshipit-source-id: 985a99dcea2e0daee3ca3fc315445b978f3bf680
2021-01-12 10:33:46 -08:00
Lucian Grijincu
c215ffb6a2 Revert D25687465: [PyTorch] Devirtualize TensorImpl::dim() with macro
Test Plan: revert-hammer

Differential Revision:
D25687465 (4de6b279c8)

Original commit changeset: 89aabce165a5

fbshipit-source-id: fa5def17209d1691e68b1245fa0873fd03e88eaa
2021-01-07 22:07:42 -08:00
Scott Wolchok
4de6b279c8 [PyTorch] Devirtualize TensorImpl::dim() with macro (#49770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49770

Seems like the performance cost of making this commonly-called method virtual isn't worth having use of undefined tensors crash a bit earlier (they'll still fail to dispatch).
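The "with macro" in the title presumably refers to toggling virtualness per build, along these lines (the #ifdef gate shown is illustrative; `TENSORIMPL_MAYBE_VIRTUAL` is the name used elsewhere in this log):

```
#include <cstdint>

// Sketch: dim() is virtual only in builds that keep TensorImpl
// extensible; the gate macro here is an invented placeholder.
#ifdef SKETCH_DISABLE_TENSORIMPL_EXTENSIBILITY
#define TENSORIMPL_MAYBE_VIRTUAL
#else
#define TENSORIMPL_MAYBE_VIRTUAL virtual
#endif

class TensorImplDimSketch {
 public:
  // Non-virtual (hence inlineable) when extensibility is disabled;
  // undefined tensors then fail later, at dispatch, instead of here.
  TENSORIMPL_MAYBE_VIRTUAL std::int64_t dim() const { return ndim_; }

 private:
  std::int64_t ndim_ = 0;
};
```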
ghstack-source-id: 119528065

Test Plan: framework overhead benchmarks

Reviewed By: ezyang

Differential Revision: D25687465

fbshipit-source-id: 89aabce165a594be401979c04236114a6f527b59
2021-01-07 19:05:41 -08:00
Basil Hosmer
377a09c8e8 reland fast TypeMeta/ScalarType conversion (#45544)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45544

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D24006482

Pulled By: bhosmer

fbshipit-source-id: 5da2401ab40bbf58da27a5d969e00bcee7562ed6
2020-10-29 14:07:39 -07:00
Mike Ruberry
ab5edf21b0 Revert D23789657: [wip] fast typeMeta/ScalarType conversion approach 2
Test Plan: revert-hammer

Differential Revision:
D23789657 (1ed1a2f5b0)

Original commit changeset: 5afdd52d24bd

fbshipit-source-id: 6d827be8895bcb39c8e85342eee0f7a3f5056c76
2020-09-29 09:40:53 -07:00
Basil Hosmer
1ed1a2f5b0 [wip] fast typeMeta/ScalarType conversion approach 2 (#44965)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44965

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D23789657

Pulled By: bhosmer

fbshipit-source-id: 5afdd52d24bd097891ff4a7313033f7bd400165e
2020-09-29 02:39:36 -07:00
Will Feng
e2a5b203fc Enforce same input tensor storage in VariableType functions (#16305)
Summary:
In VariableType.cpp, when a function modifies its input tensors, it should only change the input tensors' storage data in-place, and should never change the input tensors' storage pointers. This PR adds checks for this, and also fixes functions that fail this test.

This is part of the Variable/Tensor merge work (https://github.com/pytorch/pytorch/issues/13638).
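The added checks presumably look something like this (a sketch with invented names; the real checks live in the generated VariableType code):

```
#include <stdexcept>
#include <string>

struct StorageImpl {};  // stand-in for the real storage implementation

struct Tensor {
  StorageImpl* storage_impl = nullptr;  // identity of the storage
};

// Record each input's storage pointer before the op runs, then verify
// the op only wrote through the storage and never swapped it out.
class EnforceSameStorage {
 public:
  explicit EnforceSameStorage(const Tensor& t)
      : tensor_(&t), saved_(t.storage_impl) {}

  void check(const char* op_name) const {
    if (tensor_->storage_impl != saved_) {
      throw std::runtime_error(
          std::string(op_name) +
          ": input tensor's storage pointer changed; in-place ops may "
          "only modify storage data, not replace the storage");
    }
  }

 private:
  const Tensor* tensor_;
  StorageImpl* saved_;
};
```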
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16305

Differential Revision: D13897855

Pulled By: yf225

fbshipit-source-id: 0c4fc7eb530d30db88037b1f0981f6f8454d3b79
2019-02-11 13:33:12 -08:00
Edward Yang
4404762d7d Rename IntList to IntArrayRef. (#16751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16751

This was made more complicated by the fact that ivalue::IntList
is a thing. So I had to fix all of the sites where we were referring
to IValue post facto.

The following codemods were run, in this order:

```
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntList IntArrayRef
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntArrayRef::create IntList::create
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in ivalue::IntArrayRef ivalue::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in Tag::IntArrayRef Tag::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in isIntArrayRef isIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in toIntArrayRef toIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'Shared<IntArrayRef>' 'Shared<IntList>'
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'intrusive_ptr<IntArrayRef>' 'intrusive_ptr<IntList>'
```

Some manual fixups were done afterwards; they can be reviewed separately
at https://github.com/pytorch/pytorch/pull/16752

Reviewed By: dzhulgakov

Differential Revision: D13954363

fbshipit-source-id: b5c40aacba042402155a2f5a229fa6db7992ac64
2019-02-05 14:54:34 -08:00
Sebastian Messmer
63db95dd11 Move UndefinedTensorImpl to c10 (meh) (#14817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14817

Unfortunately, we still need this.

Reviewed By: ezyang

Differential Revision: D13348041

fbshipit-source-id: e8dcc89f5c71bd1ea2c9813990dac6e58e63b1fd
2018-12-11 21:01:42 -08:00