Commit Graph

43731 Commits

Author SHA1 Message Date
Eli Simhayev
b2f5a5311b RMSNorms docs - remove biases initialization (#139620)
RMSNorm doesn't use a bias in `elementwise_affine`, so I've removed it from the documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139620
Approved by: https://github.com/mikaylagawarecki
2024-11-05 05:59:41 +00:00
Chen, Zejun
9aaf3a04fa [profiler][UT] instantiate profiler UTs for devices and enable UTs for xpu profiler (#134316)
This PR makes the profiler-related unit tests device-agnostic. It instantiates the profiler UTs for different device types and enables them on the XPU backend.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134316
Approved by: https://github.com/etaf, https://github.com/aaronenyeshi, https://github.com/gujinghui
2024-11-05 05:46:13 +00:00
CaoE
9e14d86573 [Inductor][CPP] Add oneDNN BRGEMM config for Half cpp gemm template (#136255)
`kernel_micro_gemm` generated using BRGEMM:
```
template <bool accum>
inline void kernel_micro_gemm(
    const half* __restrict__ A,
    const half* __restrict__ B,
    float* __restrict__ C,
    int64_t M,
    int64_t N,
    int64_t K,
    int64_t lda,
    int64_t ldb,
    int64_t ldc
) {
    at::native::cpublas::brgemm(
      M, N, K,
      lda, ldb, ldc,
      1.f, accum ? 1.f : 0.f,
      A,
      B,
      C);
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136255
Approved by: https://github.com/jgong5, https://github.com/jansel
2024-11-05 05:33:29 +00:00
Meet Vadakkanchery
c8a55eea88 [DCP] Fix process_group logging for DCP methods (#139428)
Summary:
Currently, we incorrectly log process_group for DCP-based events.

We rely on [c10d_logger.py](https://fburl.com/v4mdme9z) to fill in information about process_group (e.g. backend, nccl_version if available).

In [checkpoint/logger.py](https://fburl.com/yho9nqbu) we pass a `msg_dict` that never contains the `process_group` param to c10d_logger, so [c10d_logger](https://fburl.com/zlw2ukxp) logs information about the default process_group, which is always `NCCL`.

Test Plan:
Before:

Always defaults to NCCL even though GLOO is passed by caller.

{F1950847585}

After:

GLOO backend shows up.

{F1950848375}

Differential Revision: D65255871

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139428
Approved by: https://github.com/teja-rao, https://github.com/mhorowitz
2024-11-05 05:24:38 +00:00
Animesh Jain
fe4fa1df9f [dynamo][eval_frame] Set the callback to None earlier for guard eval (#139655)
xref - https://fb.workplace.com/groups/1075192433118967/permalink/1536570810314458/

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139655
Approved by: https://github.com/jansel, https://github.com/williamwen42
2024-11-05 05:18:46 +00:00
Gabriel Ferns
a766d84a3c Allow inplacing buffer when other users are inconsequential (#138383)
Summary:
I think we can inplace a buffer if all of the users of said buffer are "inconsequential", defined as having been removed, being completed, or being part of the ancestors set. In particular, this allows LayerNorm to inplace its input buffer.
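Roughly, the rule can be sketched like this (a minimal illustration; the node/buffer attributes here are hypothetical placeholders, not Inductor's actual scheduler API):

```python
# Sketch of the "inconsequential users" rule described above; attribute names
# (is_removed, is_completed, users) are hypothetical.
def can_inplace(buffer, current_node, ancestors):
    def inconsequential(user):
        return user.is_removed or user.is_completed or user in ancestors

    return all(inconsequential(u) for u in buffer.users if u is not current_node)
```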

Implements:
https://github.com/pytorch/pytorch/issues/132826

Test Plan:
New unit test of a matmul followed by LayerNorm; makes sure there's an inplaced buffer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138383
Approved by: https://github.com/eellison
2024-11-05 03:44:09 +00:00
Andrew Gu
9039fbb47e [FSDP2] Make module-to-state mapping use weakrefs (#139650)
Without this, `del model` does not free memory of a module with FSDP2 applied.
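A minimal sketch (not FSDP2 code) of why a weakref-keyed mapping fixes this:

```python
# Illustration only: keying per-module state on a WeakKeyDictionary means the
# mapping does not keep the module alive, so `del model` can actually free it.
import weakref

import torch.nn as nn

_module_to_state = weakref.WeakKeyDictionary()

model = nn.Linear(4, 4)
_module_to_state[model] = {"sharded": True}  # hypothetical per-module state

del model  # the weak mapping drops its entry instead of pinning the module
```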

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139650
Approved by: https://github.com/yf225
2024-11-05 02:16:52 +00:00
cyy
5008d15ae9 [2/N] Remove usage of C array (#139589)
Follows  #139567
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139589
Approved by: https://github.com/ezyang
2024-11-05 01:58:12 +00:00
CaoE
3672c688e3 Fix layout for SetSourceTensorKernel (#137973)
Fixes #136837.
`aten.set_.source_Tensor` will make the size and stride of the first input and output follow that of the second input: https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/TensorShape.cpp#L440. If the layouts of the two inputs are different, the following `assert_size_stride` will fail.
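A tiny eager repro of that behavior (illustrative; the shapes are arbitrary):

```python
import torch

a = torch.empty(0)
b = torch.randn(3, 4).t()  # non-contiguous after the transpose
a.set_(b)                  # `a` now adopts `b`'s size and stride
assert a.size() == b.size() and a.stride() == b.stride()
```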

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137973
Approved by: https://github.com/jgong5, https://github.com/jansel
2024-11-05 00:55:17 +00:00
Edward Yang
639162f39a Add cache size to pt2_compile_events (#139627)
Summary:
I realized I wanted to check "are my cache entries/IO unreasonably large"
and there's no easy way to do it.  This lets me do it.

Test Plan: servicelab

Differential Revision: D65390363

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139627
Approved by: https://github.com/c00w
2024-11-05 00:30:10 +00:00
Nikita Shulga
0058f71002 Don't use deprecated type properties in UpsampleKernel (#139399)
By replacing `at::CPU(dtype)` pattern with `at::device(kCPU).dtype(dtype)` pattern

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139399
Approved by: https://github.com/Skylion007
ghstack dependencies: #139353, #139358
2024-11-05 00:29:58 +00:00
PyTorch MergeBot
4a3ee96427 Revert "Don't use deprecated type properties in UpsampleKernel (#139399)"
This reverts commit 9d096e4d9f.

Reverted https://github.com/pytorch/pytorch/pull/139399 on behalf of https://github.com/ZainRizvi due to Change reverted internally due to broken builds. See D65378845 ([comment](https://github.com/pytorch/pytorch/pull/139358#issuecomment-2455959040))
2024-11-05 00:13:48 +00:00
cyy
64d9ee88d7 [11/N] Fix extra warnings brought by clang-tidy-17 (#139599)
Follows #139385
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139599
Approved by: https://github.com/sraikund16
2024-11-04 23:57:41 +00:00
Laith Sakka
3f248a5735 Classify miss-inplaced tensors in logs. (#139240)
Summary:
Use signpost logs.
A follow-up is to remove the field `possibly_missed_reinplacing_opportunities` from the dynamo compile table.

Differential Revision: D65180194

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139240
Approved by: https://github.com/zou3519
2024-11-04 23:56:14 +00:00
Mikayla Gawarecki
e947649e8f [BE] Change _marked_safe_globals_list to set (#139303)
Prevent the same global from being added multiple times.
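A trivial illustration of the change (not the actual serialization code): a set de-duplicates repeated registrations, unlike the previous list.

```python
_safe_globals = set()
_safe_globals.add(int)
_safe_globals.add(int)  # registered twice, stored once
assert len(_safe_globals) == 1
```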

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139303
Approved by: https://github.com/janeyx99
ghstack dependencies: #138936, #139221, #139433, #139541, #137602
2024-11-04 23:50:55 +00:00
Pian Pawakapan
a678eaf1ad check fake/real mismatches during real tensor prop (#137747)
Summary:
While testing exportability for PT2 Inference models, we found various cases of invalid op inputs during tracing, for example errors like: `a and b must have same reduction dim`, `expected scalar type Long but found Int`, etc. Looking more closely, these turned out to be due to the same few meta kernels & eager kernels producing mismatched outputs upstream (e.g. different output tensor dtype, int output).

Adding checks to catch mismatched outputs in real tensor prop upstream, so errors are raised at the mismatched op, instead of the downstream ops taking them as inputs. Relies a lot on utils from [CrossRefFakeMode](929797dedb/torch/_subclasses/fake_utils.py (L78))
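A minimal sketch of the kind of per-op cross-check described above (illustrative only, not the actual CrossRefFakeMode utilities):

```python
# Compare the fake (meta) output of an op against its real (eager) output and
# raise at the producing op instead of letting downstream ops consume bad values.
def check_fake_matches_real(op_name, fake_out, real_out):
    if fake_out.dtype != real_out.dtype:
        raise RuntimeError(f"{op_name}: fake dtype {fake_out.dtype} != real dtype {real_out.dtype}")
    if tuple(fake_out.shape) != tuple(real_out.shape):
        raise RuntimeError(f"{op_name}: fake shape {tuple(fake_out.shape)} != real shape {tuple(real_out.shape)}")
```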

Follow ups: could add more checks, and maybe have a flag to only enable these for cases like draft mode, so perf doesn't suffer?

Test Plan: test_export, test_fake_tensor

Differential Revision: D64210055

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137747
Approved by: https://github.com/zou3519
2024-11-04 23:39:48 +00:00
Bob Ren
9919932783 Specialize symfloats that flow through is_integer (#139572)
Fixes `python test/dynamo/test_dynamic_shapes.py DynamicShapesFunctionTests.test_number_method_method_is_integer_num_type6_dynamic_shapes` when specialize_float = False

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139572
Approved by: https://github.com/ezyang
ghstack dependencies: #139569, #139457, #139568
2024-11-04 23:35:35 +00:00
Henry Tsang
350bc2a166 [export] Add support for symbool to make it usable for torch.cond (#138765)
# Why?

I want the following code to work.

minimal repro:
```
class M(torch.nn.Module):
    def forward(self, dilate_flag):
        return dilate_flag.item()

input1 = (torch.tensor([1], dtype=torch.bool, device="cuda"),)
model = M().cuda()

ep = torch.export.export(model, input1, strict=True)
path = torch._inductor.aot_compile(ep.module(), input1)
aot_model = torch._export.aot_load(path, device="cuda")
actual_output = aot_model(*input1)
```

error: AssertionError: Encountered an unsupported object of type <class 'torch.SymBool'> while writing the metadata for exported program

The second error will be handled by https://github.com/pytorch/pytorch/pull/138760

# Motivation

I could technically bypass it with a torch.int tensor. However, it doesn't work with torch.cond. I want the following to work. It would also require https://github.com/pytorch/pytorch/pull/138760 for aot compile to work.

```
class M(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.dilate_flag = 0

    def forward(self, dilate_flag):
        self.dilate_flag = dilate_flag.item()

        def true_fn(dilate_flag):
            return dilate_flag.clone()

        def false_fn(dilate_flag):
            return dilate_flag.clone()

        torch.cond(
            self.dilate_flag,
            true_fn,
            false_fn,
            (dilate_flag,),
        )
        return self.dilate_flag

input1 = (torch.tensor([1], dtype=torch.bool, device="cuda"),)
input2 = (torch.tensor([0], dtype=torch.bool, device="cuda"),)
inputs = (input1, input2)
model = M().cuda()

for input in inputs:
    expected_output = model(*input)

    ep = torch.export.export(model, input, strict=False)
    path = torch._inductor.aot_compile(ep.module(), input)
    aot_model = torch._export.aot_load(path, device="cuda")
    actual_output = aot_model(*input)

    assert (
        expected_output == actual_output
    ), f"henry they are not equal {expected_output} != {actual_output}"
```

Differential Revision: D64867504

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138765
Approved by: https://github.com/ydwu4
2024-11-04 23:31:49 +00:00
PyTorch MergeBot
6add86a29f Revert "Tighten type hints for tensor arithmetic (#135392)"
This reverts commit bf5cd8d011.

Reverted https://github.com/pytorch/pytorch/pull/135392 on behalf of https://github.com/ZainRizvi due to Sorry but this is breaking lint on trunk. See [GH job link](https://github.com/pytorch/pytorch/actions/runs/11673543178/job/32504499599) [HUD commit link](bf5cd8d011) ([comment](https://github.com/pytorch/pytorch/pull/135392#issuecomment-2455908056))
2024-11-04 23:30:15 +00:00
Jane Xu
23169a6bcc Disable foreach tests for complex128 internally (#139649)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139649
Approved by: https://github.com/ngimel
2024-11-04 23:24:47 +00:00
Tugsbayasgalan Manlaibaatar
87a379b61b Move pippy to training IR (#139233)
Differential Revision: [D65282662](https://our.internmc.facebook.com/intern/diff/D65282662)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139233
Approved by: https://github.com/kwen2501
ghstack dependencies: #138658, #139209
2024-11-04 23:07:14 +00:00
Yidi Wu
397938b453 [hop free symbols][refactor] lift freevar to parent graph before lifting to subgraph (#138559)
This refactoring is for getting a deterministic ordering when binding tensors and the sizes of tensors. When seeing a free tensor x with shape (s0,) in a subgraph, the ordering of lifting changes from
```
lift_x_in_child, lift_s0_in_child, lift_s0_in_parent, lift_x_in_parent
```
to
```
lift_x_in_parent, lift_s0_in_parent, lift_x_in_child, lift_s0_in_child
```
This produces a deterministic ordering for handling the symints in lifted tensors.

This also matches the current contract of the dynamo top-level graph: we lift the free symbols in sizes after tensor x and insert them before tensor x's proxy.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138559
Approved by: https://github.com/zou3519
ghstack dependencies: #138345, #138428, #138558, #138737
2024-11-04 22:48:14 +00:00
Yidi Wu
c5b79699e1 [hop free symbols] replace ctx.save_for_backward to support symints/ints (#138737)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138737
Approved by: https://github.com/drisspg, https://github.com/zou3519, https://github.com/Chillee
ghstack dependencies: #138345, #138428, #138558
2024-11-04 22:48:14 +00:00
Yidi Wu
ac20d0f893 [hop free symbols][refactor] make map's save_for_backward to handle int (#138558)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138558
Approved by: https://github.com/zou3519
ghstack dependencies: #138345, #138428
2024-11-04 22:48:07 +00:00
Yidi Wu
dc3a6a9d08 [hop free symbols][refactor] make create_graph_input always take example_value (#138428)
Code refactoring only. We move the wrap_to_fake_tensor logic out of wrap_fx_proxy for placeholders to provide the invariant that **all graph inputs must set their example values when creating the inputs**. This invariant helps us identify all the free symbols in the graph, in the top-level graph and in sub-graphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138428
Approved by: https://github.com/ezyang, https://github.com/zou3519
ghstack dependencies: #138345
2024-11-04 22:47:49 +00:00
Yidi Wu
54c69a785b [hop free symbols][refactor] make bound_symbols a dictionary (#138345)
Code refactoring only. Change all self.tx.output.bound_symbols to self.tx.output.root_tracer.bound_symbols.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138345
Approved by: https://github.com/zou3519
2024-11-04 22:47:41 +00:00
Felix Zimmermann
bf5cd8d011 Tighten type hints for tensor arithmetic (#135392)
Fixes #124015

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135392
Approved by: https://github.com/ezyang
2024-11-04 22:10:04 +00:00
Shunting Zhang
888110841c [inductor] don't fuse two nodes if likely increase peak memory (#138756)
Partially fixing https://github.com/pytorch/pytorch/issues/138685

Add a (relatively safe?) heuristic to skip fusion if it could potentially increase peak memory.

The doc string mainly explains what this PR is doing:
```
        The implementation is more like a heuristic since we don't really know if we are at peak
        or not when trying to fuse these two nodes. The order of nodes may change later, which makes
        peak memory estimation hard.
        Here is how we decide the LOWER BOUND of extra memory allocation if we fuse these 2 nodes:
        1. find all buffers read by each node that have a single user. These buffers are supposed to
           be reused if we don't fuse these 2 nodes
        2. find the intersection of these buffers for the two nodes and sum the total buffer size.
           If we don't fuse these two nodes, we can at least avoid this much memory allocation.
           Note that the extra memory allocation does not necessarily cause a peak memory increase.
           This is just a heuristic.
        We return true only if the savings from fusion cannot offset the extra memory allocation.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138756
Approved by: https://github.com/jansel
ghstack dependencies: #139136
2024-11-04 20:49:29 +00:00
Ze Sheng
1aa71be56c [PT2] Decouple decompose_triton_kernel_wrapper_functional from decompose_auto_functionalized (#139526)
As title. We may not always want to remove `triton_kernel_wrapper_functional`; see, for example, the references in [`unsafe_remove_auto_functionalized_pass`](c8ab9b06a2/torch/export/_remove_auto_functionalized_pass.py (L48)).

Test Plan: CI & [D62592946](https://www.internalfb.com/diff/D62592946)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139526
Approved by: https://github.com/zou3519
2024-11-04 20:16:18 +00:00
Will Constable
71dc5df93c [pipelining] Fix 'last backward' counting for dI / dW (#139415)
Since any stage can run a mixture of full backwards and split backwards,
it is important to count the sum of (full_backwards + backward_weight)
when comparing to num microbatches to determine last backward.
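In pseudocode, the counting described above looks roughly like this (illustrative, not the actual pipelining code):

```python
def is_last_backward(full_backward_count, backward_weight_count, num_microbatches):
    # A stage may run full backwards and split dW-only backwards; the "last
    # backward" is reached once their sum covers all microbatches.
    return full_backward_count + backward_weight_count == num_microbatches
```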

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139415
Approved by: https://github.com/H-Huang
2024-11-04 20:14:10 +00:00
Ryan Guo
30a83ca991 [dynamo] Improve codegen for DataPtrVariable and fix tensor reference issue (#139487)
This addresses
https://github.com/pytorch/pytorch/pull/137677/files#r1799836499, which
had to set `allow_cache=False` for codegen on `DataPtrVariable.base`,
which is a `TensorVariable`, otherwise we observe failure of
`test_no_grad_copy` when testing with Dynamo.

I've seen `test_no_grad_copy` failing a few times, and every single time
it's related to a cyclic reference; my best guess is the cyclic reference
holds some tensor object in memory longer than necessary, preventing the
optimization introduced in #11165.

This patch makes `OutputGraph.cleanup()` more aggressive by clearing out
all fields that might reference a `VariableTracker`. As a result, we can
remove the aforementioned `allow_cache=False`, which helps generate
better code (e.g., in the case of `test_no_grad_copy`, it skipped generating
a redundant graph whose only op is returning the input tensor; instead we just
generate a single `LOAD_FAST`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139487
Approved by: https://github.com/jansel, https://github.com/aakhundov
2024-11-04 19:14:06 +00:00
Bin Bao
740054ffe6 [AOTI][reland] Switch OSS dashboard to use aoti_compile_and_package (#139597)
Summary: Reland https://github.com/pytorch/pytorch/pull/139154

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139597
Approved by: https://github.com/angelayi
2024-11-04 18:53:17 +00:00
Oguz Ulgen
e76ce20177 Log to pt2 compile events (#139601)
Summary: This option was added after I wrote the original diff; let's publish to pt2_compile_events.

Test Plan: CI

Differential Revision: D65404910

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139601
Approved by: https://github.com/jamesjwu
2024-11-04 18:39:06 +00:00
Shunting Zhang
4930c4b716 [inductor] patterns to remove pointless view/permute pairs (#139136)
These are not artificial patterns I came up with; they show up in the linear+CrossEntropyLoss graph.

Consider this snippet:
```
        class LinearAndCEL(nn.Module):
            def __init__(self):
                super().__init__()
                self.linear = nn.Linear(C, V)
                self.ce = nn.CrossEntropyLoss()

            def forward(self, x, y):
                return self.ce(self.linear(x).view(B * T, V), y.view(-1))
```

`x` passed to `forward` is a 3D tensor of shape [B, T, C].
`self.linear` will first view x as a [BxT, C] tensor, do the matmul to produce a [BxT, V] tensor, and then view this output back to a 3D tensor with shape [B, T, V]. The user code then adds another view op to convert the tensor shape to [BxT, V]. This generates a pair of redundant views. A pair of redundant permutes happens in the backward pass when we compute gradients.
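A small shape trace of the redundant pair (illustrative shapes, not Inductor code):

```python
import torch

B, T, C, V = 2, 3, 4, 5
x = torch.randn(B, T, C)
w = torch.randn(V, C)

out3d = (x.view(B * T, C) @ w.t()).view(B, T, V)  # linear restores the 3D shape
logits = out3d.view(B * T, V)                     # user flattens it right back: redundant view pair
```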

The view ops make it hard to chunk linear+CEL: when a view op breaks up the dimension being chunked, what should the chunker do (even if we merge those dimensions again later)? Removing these pointless view pairs makes the chunker simpler, and I think it's generally nice to do.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139136
Approved by: https://github.com/Chillee, https://github.com/jansel
2024-11-04 18:39:02 +00:00
Mikayla Gawarecki
ca43ecd599 Flip default on weights_only (#137602)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137602
Approved by: https://github.com/malfet, https://github.com/albanD
ghstack dependencies: #138936, #139221, #139433, #139541
2024-11-04 18:30:29 +00:00
Mikayla Gawarecki
f55dfbcf87 Remove hasattr(__slots__) for BUILD logic in weights_only unpickler (#139541)
This is tested in PR stacked above in

```python
python test/distributed/fsdp/test_fsdp_state_dict.py TestFSDPStateDict.test_torch_save_load
```

We cannot depend on `hasattr(..., "__slots__")` to know whether a BUILD instruction has slot state. For example, if a class subclasses ABC, `hasattr(cls, "__slots__")` will be `True` but there might be no slots (and hence `state` will not be a tuple). So revert #138936 and follow the pickle library's code

```python

>>> from abc import ABC
>>> hasattr(ABC, "__slots__")
True
```

So

```python
import torch
from abc import ABC
from dataclasses import dataclass

class Foo(ABC):
    pass

class FooWrapper(Foo):
    def __init__(self, x, y):
        self.x = x
        self.y = y

f = FooWrapper(1, 2)
torch.save(f, "temp.pt")
with torch.serialization.safe_globals([FooWrapper]):
    torch.load("temp.pt")
```

Would fail on the previous code with
```
File "/data/users/mg1998/pytorch/torch/serialization.py", line 1934, in _load
    result = unpickler.load()
  File "/data/users/mg1998/pytorch/torch/_weights_only_unpickler.py", line 366, in load
    for k, v in slotstate.items():
```

As there is actually no slotstate

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139541
Approved by: https://github.com/malfet
ghstack dependencies: #138936, #139221, #139433
2024-11-04 18:30:29 +00:00
Tugsbayasgalan Manlaibaatar
ae0e7042f6 Fix custom obj being input (#139209)
Differential Revision: [D65158939](https://our.internmc.facebook.com/intern/diff/D65158939)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139209
Approved by: https://github.com/ydwu4
ghstack dependencies: #138658
2024-11-04 18:24:29 +00:00
rzou
85c3c4132d no-op torch.library.custom_op APIs on torch.deploy (#139509)
We forgot this case in the previous PR. Fixes
https://github.com/pytorch/pytorch/issues/137536

Test Plan:
- better tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139509
Approved by: https://github.com/williamwen42
2024-11-04 18:01:08 +00:00
PyTorch MergeBot
6dada2136a Revert "Refactor FxGraphDrawer to use HTML-like labels (#137726)"
This reverts commit 1e73842029.

Reverted https://github.com/pytorch/pytorch/pull/137726 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it looks like some internal components are failing after this change and need to be updated ([comment](https://github.com/pytorch/pytorch/pull/137726#issuecomment-2455332612))
2024-11-04 17:44:44 +00:00
Tugsbayasgalan Manlaibaatar
e080c89bdc Make test_torchbind.py training IR compatible (#138658)
In this diff, I make the test_torchbind.py tests handle the training IR. Today in the training IR, we don't see the effect token and HOP because this happens at FunctionalTensorMode. Maybe in the future we should move this logic up to the training IR so that writing passes etc. on the training IR is safer, but for migration purposes I think it is OK for now. I also fixed the following issues:
1. ep.module() doesn't register all aliased constants in the module.
2. When we retrace, we need to fakify the original TorchBind object.
3. We don't run any DCE on the training IR, so we need to add some more torch ops to the verifier.

Differential Revision: [D64853530](https://our.internmc.facebook.com/intern/diff/D64853530)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138658
Approved by: https://github.com/ydwu4, https://github.com/zhxchen17
2024-11-04 17:43:11 +00:00
Bob Ren
68c515b292 don't run z3 analysis on backed symfloat nodes (#139568)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139568
Approved by: https://github.com/ezyang
ghstack dependencies: #139569, #139457
2024-11-04 17:04:29 +00:00
PyTorch MergeBot
3ca794783f Revert "[SymmetricMemory] introduce a binding for cuMemset32Async (#138755)"
This reverts commit 924e726c3a.

Reverted https://github.com/pytorch/pytorch/pull/138755 on behalf of https://github.com/ZainRizvi due to Sorry but this breaks internally.  Can you please fix this PR so it works internally and re-merge it? See D65401876 for more details ([comment](https://github.com/pytorch/pytorch/pull/138755#issuecomment-2455173596))
2024-11-04 16:34:34 +00:00
Bob Ren
87404b6ca6 support symfloats in translation validation (#139457)
fixes `python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_cond_pytree_operands_with_non_tensor_leaves_dynamic_shapes` when `specialize_float=False`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139457
Approved by: https://github.com/ezyang
ghstack dependencies: #139569
2024-11-04 15:40:08 +00:00
Richard Barnes
6b8e3022f2 Remove c10::optional usages in PyTorch (#139525)
Test Plan: Sandcastle

Reviewed By: swolchok

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139525
Approved by: https://github.com/malfet, https://github.com/Skylion007
2024-11-04 15:35:23 +00:00
cyy
419a7e197d [6/N] Fix Wextra-semi warning (#139605)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139605
Approved by: https://github.com/ezyang
2024-11-04 13:43:16 +00:00
Bob Ren
12d225d91c add opaque unary sin and cos to SYMPY_INTERP (#139569)
Fixes `PYTORCH_TEST_WITH_DYNAMO=1 python test/test_nn.py TestNNDeviceTypeCPU.test_affine_3d_rotateRandom_cpu` when specialize_float = False

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139569
Approved by: https://github.com/ezyang
2024-11-04 07:37:11 +00:00
Sun, Jiayi
3337439dc0 [inductor] modify the heuristic for disabling vectorization (#136422)
Summary:
Since we have already implemented tail loop mask vectorization (https://github.com/pytorch/pytorch/pull/126526), I re-tuned the heuristic for disabling vectorization from a performance perspective. I changed the heuristic to: when the total number of elements along the vec dim is less than `tiling_factor/4` and the number of operations is less than 10, we disable vectorization.
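A paraphrase of the new heuristic (illustrative only, not the Inductor source):

```python
def should_disable_vectorization(num_elems_along_vec_dim, num_ops, tiling_factor):
    # Skip vectorization only for very small, cheap loops.
    return num_elems_along_vec_dim < tiling_factor / 4 and num_ops < 10
```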

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136422
Approved by: https://github.com/leslie-fang-intel, https://github.com/jgong5, https://github.com/jansel
2024-11-04 07:33:32 +00:00
James Wu
f4ee5a243d Add PT2 Compile Events for triton and kernel compilation + load_by_key_path (#139402)
Adds a few more dynamo_timed() to measure triton compilation and load_by_key_path times.

In the case of async compilation with multiple threads, we'll generate a single `kernel_compile` event that occurs when waiting on all the parallel compiles to finish.

In the case where async parallel compilation is disabled (or, compile threads are warming up), we'll generate a `triton_compile` event for each kernel.

The `triton_compile` events are a bit questionable: do we need a row for each triton compile? It might eat into our already low retention, so I might just remove that. Will discuss with @slarsen.

Differential Revision: [D65215707](https://our.internmc.facebook.com/intern/diff/D65215707/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139402
Approved by: https://github.com/oulgen
2024-11-04 06:37:18 +00:00
cyy
3179eb15ae [1/N] Remove usage of C array (#139567)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139567
Approved by: https://github.com/Skylion007, https://github.com/ezyang
2024-11-04 04:52:46 +00:00
Yuxin Wu
cadc50e7e9 LOG(INFO) -> VLOG(2) in ProcessGroupNCCL (#130696)
In the same spirit as https://github.com/pytorch/pytorch/pull/105695

Initialization and error handling logs are mostly kept. Routine logs are changed to VLOG.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130696
Approved by: https://github.com/kwen2501

Co-authored-by: Ke Wen <kw2501@fb.com>
2024-11-04 04:43:42 +00:00
Jason Ansel
ed30fa74ab [inductor] sympy.Integer([01]) -> sympy.S.(Zero|One) (#139523)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139523
Approved by: https://github.com/ezyang
ghstack dependencies: #139364, #139365, #139370, #139452
2024-11-04 04:28:40 +00:00
Jason Ansel
b6fb135c2c [inductor] Simplify remove_kernel_local_buffers (#139452)
I plan to reuse `can_buffer_be_removed_through_fusion` in some heuristics.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139452
Approved by: https://github.com/shunting314
ghstack dependencies: #139364, #139365, #139370
2024-11-04 04:28:40 +00:00
Jason Ansel
3d633f12ba [inductor] Move remove_kernel_local_buffers to Kernel (#139370)
This method mutates the kernel, so it fits better in that class.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139370
Approved by: https://github.com/shunting314
ghstack dependencies: #139364, #139365
2024-11-04 04:28:33 +00:00
Jason Ansel
66d5e2405d [inductor] Remove Node.last_usage mutation (#139365)
I can't figure out why this is needed.  Let's see if tests fail.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139365
Approved by: https://github.com/shunting314
ghstack dependencies: #139364
2024-11-04 04:28:25 +00:00
Jason Ansel
d189f92eb1 [inductor] Remove SIMDKernel.last_usage (#139364)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139364
Approved by: https://github.com/eellison, https://github.com/shunting314
2024-11-04 04:28:18 +00:00
Animesh Jain
e6ff07f00e [dynamo][guards] Consider tensors as immutable for dict tag matches (#139560)
This is a bug on main exposed by https://github.com/pytorch/pytorch/issues/139476

We have a dict tag optimization: if the dict tag does not change, we
skip guards on all the items of the dict that are "immutable". We
considered tensors as immutable in such scenarios. This is critical for
guard eval performance, because users generally don't change their
parameters.

If I try to remove this optimization, we see slowdowns, e.g., 3.03x to
2.95x on the conv_mixer TIMM benchmark.

So, I am adding a flag which keeps the current state but allows
users to remove this optimization. Not ideal, but given how critical guard eval perf is,
we are in the gray area of the unsoundness vs. performance tradeoff.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139560
Approved by: https://github.com/jansel
2024-11-04 00:54:20 +00:00
cyy
7f387fa612 [10/N] Fix extra warnings brought by clang-tidy-17 (#139385)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139385
Approved by: https://github.com/Skylion007
2024-11-04 00:47:19 +00:00
briancoutinho
3242049daa [profiler] Annotate triton kernels with kernel hash (#139531)
As above, annotates the triton kernel hash in the profile attributes.

Added a new unit test in the profiler for triton/dynamo events.

Test Plan:

Running new unit test in CI

Internal:
  buck2 run @mode/dev-nosan caffe2/test:profiler -- -r test_pt2_triton_attributes

Running on an example, this is how the kernel hash file looks
```
  {
    "ph": "X", "cat": "cpu_op", "name": "triton_poi_fused_add_cos_sin_0", "pid": 1670242, "tid": 1670242,
    "ts": 2413669097354.058, "dur": 95.812,
    "args": {
      "External id": 3, "kernel_hash": "cqaokwf2bph4egogzevc22vluasiyuui4i54zpemp6knbsggfbuu",
      "grid": "grid(100,)", "Record function id": 0, "stream": 0, "Concrete Inputs": ["", "", "", "100"], "kernel_file": "/tmp/torchinductor_bcoutinho/qa/cqaokwf2bph4egogzevc22vluasiyuui4i54zpemp6knbsggfbuu.py", "kernel_backend": "triton", "Input type": ["float", "float", "float", "Scalar"], "Input Strides": [[10, 1], [10, 1], [10, 1], []], "Input Dims": [[10, 10], [10, 10], [10, 10], []], "Ev Idx": 2
    }
  }
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139531
Approved by: https://github.com/davidberard98
2024-11-03 23:19:35 +00:00
Yifu Wang
924e726c3a [SymmetricMemory] introduce a binding for cuMemset32Async (#138755)
## This Stack

This stack does the following things to support `xformers`-style, comm-aware Triton kernels:
- Exposes `signal_pad`s as tensors in Python
- Adds a binding for `cuMemsetAsync`

These in combination aims to provide users with more flexibility to express custom signaling/synchronization patterns.

## This PR
Make `cuMemset32Async` available via `_SymmetricMemory.memset32`. We chose `cuMemset32Async` over `cudaMemsetAsync` because it allows for `uint32_t`-wise memset. This provides users with better flexibility.

To enable this, we also added the following cuda driver APIs in `c10::cuda::DriverAPI`:
- `cuDevicePrimaryCtxRetain` - for obtaining the primary context of a device in the form of `CUcontext`.
- `cuCtxGetCurrent`/`cuCtxSetCurrent` - for setting and restoring the context for cuda driver APIs such as `cuMemset32Async`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138755
Approved by: https://github.com/weifengpy, https://github.com/eqy, https://github.com/lw
2024-11-03 21:37:31 +00:00
Bob Ren
5d07651c72 only use hint_size in _smart_symbol_sort for size type symbols (#139571)
Fixes `PYTORCH_TEST_WITH_DYNAMO=1 python test/test_torch.py TestTorchDeviceTypeCPU.test_exponential_kstest_cpu_bfloat16` when specialize_float = False

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139571
Approved by: https://github.com/ezyang
ghstack dependencies: #139451, #139482, #139484, #139486
2024-11-03 21:15:08 +00:00
leslie-fang-intel
d84a344410 [Inductor] Skip coordinate_descent_tuning for mm/bmm decomposition on CPU (#139537)
**Summary**
Fix issue https://github.com/pytorch/pytorch/issues/138823: `coordinate_descent_tuning` doesn't provide a benefit on CPU, and we prefer lowering `mm`/`bmm` into ATen kernels or the CPP GEMM template.

**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_select_algorithm.py -k test_cpp_coordinate_descent_tuning
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139537
Approved by: https://github.com/jansel
2024-11-03 10:10:29 +00:00
Edward Z. Yang
585dbfa583 Profile guided optimization for automatic_dynamic (#139001)
Previously: https://github.com/pytorch/pytorch/pull/138052 but the implementation is done from scratch, so I open a new PR.

This implements the ability to save and load profiles of automatic dynamic decisions, so on subsequent runs we can directly make something automatically dynamic. Unlike the previous implementation, this cache is never enabled by default; instead, you have to specify a "job id" that says it's OK to share results. We will be able to automatically populate this id for internal MAST jobs but for generic OSS users you will have to explicitly opt into it.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139001
Approved by: https://github.com/oulgen
2024-11-03 06:29:57 +00:00
Bob Ren
a1370259ba always specialize float on export path (#139486)
This is the next step in supporting dynamic float arguments in PT2: docs.google.com/document/d/1HswUSp9H6mg8Vg27mhRk8YzC9q_uf63b6wz-gwx65BQ/edit?pli=1#heading=h.xvyiqp8tuje6. To make this more incremental and tractable, we've decided to opt the export path out of this first phase of the rollout.

Fixes python test/export/test_export.py TestExport.test_export_input_mutation_dynamic_shape when specialize_float=False

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139486
Approved by: https://github.com/ezyang
ghstack dependencies: #139451, #139482, #139484
2024-11-03 04:47:12 +00:00
Bob Ren
25f243ff5d Update tensorify pass to specialize symfloats we didn't tensorify away (#139564)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139564
Approved by: https://github.com/huydhn
2024-11-03 04:27:43 +00:00
PyTorch MergeBot
067d2a089d Revert "Expose Storage _use_count API in Python (#139426)"
This reverts commit e31136d07b.

Reverted https://github.com/pytorch/pytorch/pull/139426 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing some inductor job in trunk ([comment](https://github.com/pytorch/pytorch/pull/139426#issuecomment-2453269063))
2024-11-03 02:40:45 +00:00
Bob Ren
b8b60e0bc5 add is_integer to support example_value function whitelist (#139484)
Fixes python test/dynamo/test_dynamic_shapes.py DynamicShapesFunctionTests.test_is_integer_dynamic_shapes when specialize_float=False

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139484
Approved by: https://github.com/ezyang
ghstack dependencies: #139451, #139482
2024-11-03 02:01:38 +00:00
Ke Wen
f121eab018 [c10d] Remove dead Dynamo marker (#139545)
Per discussion with @anijain2305, `dynamo_unsupported_distributed_c10d_ops` is not referenced anywhere.
Removing this dead code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139545
Approved by: https://github.com/Skylion007
2024-11-03 00:40:26 +00:00
Yukio Siraichi
a3cb8ee38b AOTAutograd: Make general SymInt hashable when merging view inputs. (#139553)
Fix: #139111

This PR wraps `SymInt` input arguments with `SymIntEqByExpr`, making them hashable when
merging view inputs (`merge_view_inputs` function).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139553
Approved by: https://github.com/ezyang
2024-11-02 23:57:11 +00:00
Yuanhao Ji
b46e1fc141 [Dynamo] Fix graph break when tensor.split() is called within a device context manager (#139270)
Fixes: #139183

Note: this case can also be reproduced on cpu

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139270
Approved by: https://github.com/ezyang

Co-authored-by: Vincent Moens <vincentmoens@gmail.com>
2024-11-02 23:55:51 +00:00
Jane Xu
e31136d07b Expose Storage _use_count API in Python (#139426)
Would be nice to replace the torch._C._storage_Use_Count call in https://github.com/pytorch/torchtune/pull/1936, at least without needing to know about _cdata in OSS code.
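For reference, the existing private call mentioned above is used roughly like this (the exact argument form is an assumption on my part):

```python
import torch

t = torch.randn(2, 2)
count = torch._C._storage_Use_Count(t.untyped_storage()._cdata)  # requires knowing about _cdata
```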

Initially keeping it private as Tensor._use_count is also private.

Favored over https://github.com/pytorch/pytorch/pull/139109 for solving the same problem, as exposing an existing API is better than adding a new one (and this enables a more robust fix).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139426
Approved by: https://github.com/soulitzer
2024-11-02 23:36:31 +00:00
Bob Ren
232af152b5 Fix graph breaks related to specialized float inputs (#139482)
Fixes issue with timm models where

example_value = 0.09999
proxy.node.target = <built-in function sub>

would fall through to

```
        unimplemented(
            "torch.* op returned non-Tensor "
            + f"{typestr(example_value)} {proxy.node.op} {proxy.node.target}",
            case_name="unsupported_operator",
        )
```

and graph break

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139482
Approved by: https://github.com/ezyang
ghstack dependencies: #139451
2024-11-02 21:58:46 +00:00
PyTorch MergeBot
854be65fa0 Revert "[PGNCCL] Make sure we do not use split for P2P comm creation (#139013)"
This reverts commit 55038aa661.

Reverted https://github.com/pytorch/pytorch/pull/139013 on behalf of https://github.com/kwen2501 due to More flavor of test_manual_with_data_parallel failed ([comment](https://github.com/pytorch/pytorch/pull/139013#issuecomment-2453085932))
2024-11-02 18:29:10 +00:00
PyTorch MergeBot
92d7f29e59 Revert "Profile guided optimization for automatic_dynamic (#139001)"
This reverts commit f6be44c74e.

Reverted https://github.com/pytorch/pytorch/pull/139001 on behalf of https://github.com/ezyang due to more fbcode errors ([comment](https://github.com/pytorch/pytorch/pull/139001#issuecomment-2452985581))
2024-11-02 13:11:04 +00:00
Edward Z. Yang
f6be44c74e Profile guided optimization for automatic_dynamic (#139001)
Previously: https://github.com/pytorch/pytorch/pull/138052 but the implementation is done from scratch, so I open a new PR.

This implements the ability to save and load profiles of automatic dynamic decisions, so on subsequent runs we can directly make something automatically dynamic. Unlike the previous implementation, this cache is never enabled by default; instead, you have to specify a "job id" that says it's OK to share results. We will be able to automatically populate this id for internal MAST jobs but for generic OSS users you will have to explicitly opt into it.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Differential Revision: [D65065497](https://our.internmc.facebook.com/intern/diff/D65065497)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139001
Approved by: https://github.com/oulgen
2024-11-02 11:50:11 +00:00
Ke Wen
55038aa661 [PGNCCL] Make sure we do not use split for P2P comm creation (#139013)
Resolve comment https://github.com/pytorch/pytorch/pull/138527#issuecomment-2438613172

There was a split-vs-P2P bug:
When P2P comm creation invokes `getNCCLComm`, it may see a `split_from` option which is meant for the previous PG creation. The P2P comm creation may then use `ncclCommSplit` and hang, because not all ranks join this call. The bug slipped through until now because there is no CI test with the following recipe: eager init + new group + P2P in that new group.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139013
Approved by: https://github.com/shuqiangzhang
2024-11-02 07:47:55 +00:00
PyTorch MergeBot
2a3fe06ce0 Revert "[Partitioner] Enumerate partitions by iterating partition ids (#136598)"
This reverts commit 39ec5a20ea.

Reverted https://github.com/pytorch/pytorch/pull/136598 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it fails an executorch test https://github.com/pytorch/executorch/blob/main/exir/backend/test/test_graph_partition.py#L114-L175 ([comment](https://github.com/pytorch/pytorch/pull/136598#issuecomment-2452903705))
2024-11-02 07:19:22 +00:00
PyTorch MergeBot
f3238106fd Revert "Allow inplacing buffer when other users are inconsequential (#138383)"
This reverts commit 030f70b40b.

Reverted https://github.com/pytorch/pytorch/pull/138383 on behalf of https://github.com/huydhn due to Sorry for reverting this again, but I think it has a test failing internally and also on ROCm ([comment](https://github.com/pytorch/pytorch/pull/138383#issuecomment-2452898229))
2024-11-02 06:53:48 +00:00
PyTorch MergeBot
0863d6a08e Revert "[inductor] Remove SIMDKernel.last_usage (#139364)"
This reverts commit 286d3ce266.

Reverted https://github.com/pytorch/pytorch/pull/139364 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing lots of internal tests in D65345157 ([comment](https://github.com/pytorch/pytorch/pull/139364#issuecomment-2452897337))
2024-11-02 06:49:11 +00:00
PyTorch MergeBot
9331640e26 Revert "[inductor] Remove Node.last_usage mutation (#139365)"
This reverts commit 1e934b473c.

Reverted https://github.com/pytorch/pytorch/pull/139365 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing lots of internal tests in D65345157 ([comment](https://github.com/pytorch/pytorch/pull/139364#issuecomment-2452897337))
2024-11-02 06:49:10 +00:00
PyTorch MergeBot
dc4b459737 Revert "[inductor] Move remove_kernel_local_buffers to Kernel (#139370)"
This reverts commit b57b4b7f9b.

Reverted https://github.com/pytorch/pytorch/pull/139370 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing lots of internal tests in D65345157 ([comment](https://github.com/pytorch/pytorch/pull/139364#issuecomment-2452897337))
2024-11-02 06:49:10 +00:00
PyTorch MergeBot
66a401c9e1 Revert "[inductor] Simplify remove_kernel_local_buffers (#139452)"
This reverts commit 73c0762a34.

Reverted https://github.com/pytorch/pytorch/pull/139452 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing lots of internal tests in D65345157 ([comment](https://github.com/pytorch/pytorch/pull/139364#issuecomment-2452897337))
2024-11-02 06:49:10 +00:00
PyTorch MergeBot
98e11b0021 Revert "[inductor] sympy.Integer([01]) -> sympy.S.(Zero|One) (#139523)"
This reverts commit c53beab377.

Reverted https://github.com/pytorch/pytorch/pull/139523 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing lots of internal tests in D65345157 ([comment](https://github.com/pytorch/pytorch/pull/139364#issuecomment-2452897337))
2024-11-02 06:49:10 +00:00
Bob Ren
fdd298dcb7 add hex method on SymFloat (#139451)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139451
Approved by: https://github.com/ezyang
2024-11-02 05:33:19 +00:00
PyTorch MergeBot
8d1eaa3da6 Revert "Profile guided optimization for automatic_dynamic (#139001)"
This reverts commit a6630bcf87.

Reverted https://github.com/pytorch/pytorch/pull/139001 on behalf of https://github.com/ezyang due to internal code triggers import cycle ([comment](https://github.com/pytorch/pytorch/pull/139001#issuecomment-2452833882))
2024-11-02 03:38:15 +00:00
drisspg
540f3ef9b1 Fix flex_decode to build offsets off of strides (#139516)
Fixes PR: https://github.com/pytorch/pytorch/issues/139462

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139516
Approved by: https://github.com/Chillee
2024-11-02 03:17:46 +00:00
Bin Bao
a46a79fe92 [AOTI] Ignore .o files in package_aoti (#139153)
Summary: There is no point in packaging .o files since a .so file is already included in the package.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139153
Approved by: https://github.com/angelayi
2024-11-02 03:10:05 +00:00
Jason Ansel
c53beab377 [inductor] sympy.Integer([01]) -> sympy.S.(Zero|One) (#139523)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139523
Approved by: https://github.com/ezyang
ghstack dependencies: #139364, #139365, #139370, #139452
2024-11-02 03:04:22 +00:00
Justin Chu
387b120549 [ONNX] Remove type promotion rule for pow (#139527)
ONNX supports different input types in Pow, so type promotion is not needed.

The resulting graph is the following:

```py
ONNXProgram(
    model=
        <
            ir_version=9,
            opset_imports={'': 18, 'pkg.onnxscript.torch_lib.common': 1},
            producer_name='pytorch',
            producer_version='2.6.0a0+git59a1af5',
            domain=None,
            model_version=None,
        >
        graph(
            name=main_graph,
            inputs=(
                %"x"<FLOAT16,[3]>
            ),
            outputs=(
                %"pow_1"<FLOAT16,[3]>
            ),
        ) {
            0 |  # node_Constant_0
                 %"val_0"<?,?> ⬅️ ::Constant() {value=Tensor<FLOAT,[]>(array(2., dtype=float32), name=None)}
            1 |  # node_Pow_1
                 %"pow_1"<FLOAT16,[3]> ⬅️ ::Pow(%"x", %"val_0")
            return %"pow_1"<FLOAT16,[3]>
        }
...
    ,
    exported_program=
        ExportedProgram:
            class GraphModule(torch.nn.Module):
                def forward(self, x: "f16[3]"):
                     # File: /workspace/pytorch/test/onnx/exporter/test_small_models_e2e.py:53 in forward, code: return x**2.0
                    pow_1: "f16[3]" = torch.ops.aten.pow.Tensor_Scalar(x, 2.0);  x = None
                    return (pow_1,)

        Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='pow_1'), target=None)])
        Range constraints: {}

)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139527
Approved by: https://github.com/titaiwangms
2024-11-02 02:19:50 +00:00
Chen, Zejun
edd3f5a94d [profiler] fix a building warning by adding USE_KINETO namespace for setTraceID (#139461)
Fix: https://github.com/pytorch/pytorch/issues/139460
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139461
Approved by: https://github.com/EikanWang, https://github.com/guangyey, https://github.com/sraikund16
2024-11-02 01:02:29 +00:00
Angela Yi
092fe2f422 Handle nan case when checking mutations (#139483)
Test Plan: PT2 readiness models

Differential Revision: D65340986

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139483
Approved by: https://github.com/zou3519
2024-11-02 00:49:05 +00:00
William Wen
b71e813bce [dynamo, 3.13] fix bytecode nop tests (#139323)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139323
Approved by: https://github.com/jansel
2024-11-02 00:39:36 +00:00
Bin Bao
8c17830dea [AOTI] Unify how weights are stored as data section (#139471)
Summary: https://github.com/pytorch/pytorch/pull/118076 introduced a cleaner way to link weights as a data section for macOS. Unify the code by adopting that approach for Linux as well.

Test Plan: CI

Differential Revision: D65302273

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139471
Approved by: https://github.com/chenyang78
2024-11-02 00:23:24 +00:00
eellison
ee2f8a50d3 Class rename (#139490)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139490
Approved by: https://github.com/exclamaforte, https://github.com/zou3519
ghstack dependencies: #139295
2024-11-02 00:10:17 +00:00
PyTorch MergeBot
b617d4813c Revert "fix dynamo tracking numpy 2 ops (#138686)"
This reverts commit 124eac255e.

Reverted https://github.com/pytorch/pytorch/pull/138686 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but I am seeing inductor failure with hf_BigBird number of graph breaks after it lands ([comment](https://github.com/pytorch/pytorch/pull/138686#issuecomment-2452718164))
2024-11-01 23:34:06 +00:00
eellison
2382b3b6d8 [Easy] Add joint graph passes, fallback_random to bisector (#139295)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139295
Approved by: https://github.com/zou3519, https://github.com/exclamaforte
2024-11-01 23:21:53 +00:00
Gabriel Ferns
1e73842029 Refactor FxGraphDrawer to use HTML-like labels (#137726)
Fixes https://github.com/pytorch/pytorch/issues/137499
Testing: Added a new unit test to make sure that the regression case succeeds.
I'm debating about whether to make the borders visible. I'm partial to no borders, but it might make it harder for some people to read?
![68a2b0e3-orig_fx_graph_diagram](https://github.com/user-attachments/assets/fbc2fd98-9e76-488e-8ebe-c64fbf206932)
Vs.
![2bfe1c4f-orig_fx_graph_diagram](https://github.com/user-attachments/assets/b6bc88ba-dda2-4cf7-84ac-a615e1e03a74)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137726
Approved by: https://github.com/eellison, https://github.com/malfet
2024-11-01 23:19:50 +00:00
David Berard
60542eeb33 [inductor] set sanitize_overflow=False for triton kernels (#139502)
In upstream triton, https://github.com/triton-lang/triton/pull/4589 introduces overflow checks. However, overflow checks likely add some overhead, and have some correctness bugs at the moment (e.g. https://github.com/triton-lang/triton/pull/5033). Let's set `sanitize_overflow=False` but keep `debug=True` so that we can keep using device_assert but without the additional asserts added by `sanitize_overflow`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139502
Approved by: https://github.com/bertmaher
2024-11-01 23:10:21 +00:00
Mikayla Gawarecki
a979318ef7 Add section to serialization note re weights_only (#139433)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139433
Approved by: https://github.com/malfet
ghstack dependencies: #138936, #139221
2024-11-01 21:51:50 +00:00
Edward Z. Yang
a6630bcf87 Profile guided optimization for automatic_dynamic (#139001)
Previously: https://github.com/pytorch/pytorch/pull/138052 but the implementation is done from scratch, so I open a new PR.

This implements the ability to save and load profiles of automatic dynamic decisions, so on subsequent runs we can directly make something automatically dynamic. Unlike the previous implementation, this cache is never enabled by default; instead, you have to specify a "job id" that says it's OK to share results. We will be able to automatically populate this id for internal MAST jobs but for generic OSS users you will have to explicitly opt into it.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Differential Revision: [D65065497](https://our.internmc.facebook.com/intern/diff/D65065497)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139001
Approved by: https://github.com/oulgen
2024-11-01 21:43:25 +00:00
Xuan Zhang
9c2ffce71a add condition for freeable input buffer (#139480)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139480
Approved by: https://github.com/yf225
ghstack dependencies: #139396
2024-11-01 21:15:40 +00:00
Sam Larsen
c412a42ae2 [pt2 logging] move remote cache get/put logging up one level (#139423)
Summary: I need to refactor the way we record CompilationMetrics. It will be much easier to do in OSS, and having the relevant timing code in the OSS area of the codebase simplifies that. I doubt this meaningfully changes the values we see.

Test Plan: Made sure samples show up: https://fburl.com/scuba/dynamo_compile/sandbox/c38zjq0x

Differential Revision: D65290089

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139423
Approved by: https://github.com/oulgen
2024-11-01 21:06:59 +00:00
Animesh Jain
0e57f2b589 [invoke_subgraph] Change the joint_graph output signature to simplify min-cut partitioner (#139326)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139326
Approved by: https://github.com/zou3519
ghstack dependencies: #139216, #139130
2024-11-01 21:02:32 +00:00
Animesh Jain
6a268c3fbb [invoke_subgraph] Generate fake_inputs correctly (#139130)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139130
Approved by: https://github.com/zou3519
ghstack dependencies: #139216
2024-11-01 21:02:32 +00:00
Animesh Jain
4c756cacfd [invoke_subgraph] Re-enable fake tensor model in the fake tensor impl (#139216)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139216
Approved by: https://github.com/zou3519
2024-11-01 21:02:32 +00:00
Justin Chu
5d67efb809 [ONNX] New registration API (#135403)
The ONNX custom ops registration API.

## Design

1. Create a `custom_translation_table: dict[Callable, Sequence[Callable] | Callable]` parameter for specifying extra functions
2. Use a callable as the key to support all possible call_function targets in the fx graph
3. Allow a callable or a Sequence of callables as values.
		- When there is a single callable, it is the translation function for the op
		- When there is a Sequence of callable, the exporter's dispatcher will dispatch to these callables in order based on input dtypes.
		- The translation functions can be a plain python function that calls onnxscript ops (traced), or an onnxscript function.
		- Complex input support: We create special type annotations for annotating real representations of complex inputs, which are needed to handle complex computation in the ONNX graph, as we don't have any ops in ONNX that handle complex inputs. The dispatcher will have knowledge of these newly created type annotations and dispatch correctly. The complex functions will be in the same overload pool as the real functions.

```py
torch.onnx.export(
    dynamo=True,
    custom_translation_table={
        torch.ops.aten.add: [overload1, overload2],
        torch.sym_not: sym_not_onnx,
    },
)
```
Support for functions that handles complex inputs will be in separate PRs.

fixes https://github.com/pytorch/pytorch/issues/138391

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135403
Approved by: https://github.com/titaiwangms
2024-11-01 20:58:54 +00:00
Jason Ansel
73c0762a34 [inductor] Simplify remove_kernel_local_buffers (#139452)
I plan to reuse `can_buffer_be_removed_through_fusion` in some heuristics.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139452
Approved by: https://github.com/shunting314
ghstack dependencies: #139364, #139365, #139370
2024-11-01 20:36:39 +00:00
Yifu Wang
0dbc284a72 [SymmetricMemory] expose signal_pads as tensors in Python (#138754)
## This Stack

This stack does the following things to support `xformers`-style, comm-aware Triton kernels:
- Exposes `signal_pad`s as tensors in Python
- Adds a binding for `cuMemsetAsync`

These in combination aims to provide users with more flexibility to express custom signaling/synchronization patterns.

## This PR

```python
# Obtain the signal pad of the specified peer rank as a tensor.
# If both shape and dtype are unspecified, the returned tensor will be a
# 1d uint32 tensor, which is most natural for signaling purposes.
symm_mem.get_signal_pad(peer_rank)

# If only shape is specified, it is equivalent to:
# symm_mem.get_signal_pad(peer_rank)[:shape.numel()].view(shape)
symm_mem.get_signal_pad(peer_rank, shape)

# If only dtype is specified, it is equivalent to:
# symm_mem.get_signal_pad(peer_rank).view(dtype)
symm_mem.get_signal_pad(peer_rank, dtype=dtype)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138754
Approved by: https://github.com/weifengpy, https://github.com/lw
2024-11-01 20:17:15 +00:00
Haifeng Jin
124eac255e fix dynamo tracking numpy 2 ops (#138686)
Fixes #136559
As we upgrade to NumPy 2, torch incorrectly filtered out `numpy.random` as unsupported in dynamo tracing.
This PR changes the filtering rules to include them while keeping behavior with numpy 1 unchanged.

Before this PR, the following tests failed:

```
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/dynamo/test_functions.py -k FunctionTests.test_numpy_random
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/dynamo/test_unspec.py -k UnspecTests.test_to_tensor
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/test_fake_tensor.py -k FakeTensorTest.test_export_numpy
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/test_fake_tensor.py -k PropagateRealTensorsFakeTensorTest.test_export_numpy_propagate_real_tensors
```

With this PR, the supported/unsupported ops in NumPy 1 are not changed.
For NumPy 2, only the `numpy.random` ops that are already supported with NumPy 1 are added to the supported list.

I used the following scripts to check the differences before and after the change for both NumPy 1 & 2.
The output is empty for NumPy 1 since there is no change.
The output is a list of the `numpy.random` ops that are considered supported for NumPy 2.

```py
from torch._dynamo import trace_rules
import numpy as np

def new_numpy_function_ids():
    unsupported_funcs = {"seed", "ranf", "get_bit_generator", "RandomState", "set_bit_generator", "sample"}

    def is_supported(k, v, mod):
        if not callable(v):
            return False
        if not getattr(v, "__module__", None):
            return True
        if v.__module__ == mod.__name__:
            return True
        if v.__module__ == "numpy.random.mtrand" and mod.__name__ == "numpy.random" and k not in unsupported_funcs:
            return True
        return False
    rv = {}
    for mod in trace_rules.NP_SUPPORTED_MODULES:
        for k, v in mod.__dict__.items():
            if is_supported(k, v, mod):
                rv[id(v)] = f"{mod.__name__}.{k}"
    return rv

def old_numpy_function_ids():
    rv = {}
    for mod in trace_rules.NP_SUPPORTED_MODULES:
        rv.update(
            {
                id(v): f"{mod.__name__}.{k}"
                for k, v in mod.__dict__.items()
                if callable(v)
                and (getattr(v, "__module__", None) or mod.__name__) == mod.__name__
            }
        )
    return rv

rv1 = set(old_numpy_function_ids().values())
rv2 = set(new_numpy_function_ids().values())

for v in (rv1 - rv2):
    print(v)
print("****")
for v in (rv2 - rv1):
    print(v)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138686
Approved by: https://github.com/lezcano, https://github.com/williamwen42
2024-11-01 19:51:40 +00:00
Mikayla Gawarecki
ea0e09b3f3 Add utility to get all unsafe globals in checkpoint (no pickletools dependency) (#139221)
Fixes https://github.com/pytorch/pytorch/issues/129698

https://github.com/pytorch/pytorch/pull/139106 without pickletools

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139221
Approved by: https://github.com/malfet
ghstack dependencies: #138936
2024-11-01 19:31:39 +00:00
rzou
f3b485eb2a [reland] Flip triton kernel default layout constraint to "needs_fixed_stride_order" (#137064)
This is to match the default layout constraint for custom operators. By
default, Inductor should match the stride order of inputs to a triton
kernel.

IF THIS IS BREAKING YOU, PLEASE REACH OUT, especially if it's been
more than two weeks since this landed. You can flip the config locally
as a workaround.

Test Plan:
- existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137064
Approved by: https://github.com/albanD, https://github.com/eellison
2024-11-01 19:21:16 +00:00
Colin L. Rice
abc5d59dcb config: create Config objects with JK support (#138766)
This teaches install_config_module (and the underlying code) to
understand Config objects. Additionally, we've added a JK option to this
which resolves the JK.

This config gets stored within the _ConfigEntry class and is evaluated
when __getattr__ is called. If justknobs is set, it'll call
justknobs_check to see the result.

Due to preceding work, basically everything works correctly here; we
had to update a couple of tests and modify the getattr behaviour.

Note that we are updating the justknob_check function to support a
default option, to make defaults work.
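
A hedged sketch of the lookup flow described above; the `_ConfigEntry` internals, the JK name, and the `justknobs_check` default handling below are simplified assumptions, not the actual config-module code.

```python
def justknobs_check(name: str, default: bool = True) -> bool:
    # Stand-in for the internal JK resolver mentioned above.
    return default

class _ConfigEntry:
    def __init__(self, default, justknob: str | None = None):
        self.default = default
        self.justknob = justknob

    def resolve(self):
        # In the real module this happens lazily, when __getattr__ is called on the config.
        if self.justknob is not None:
            return justknobs_check(self.justknob, default=self.default)
        return self.default

entry = _ConfigEntry(default=True, justknob="pytorch/compiler:some_flag")  # hypothetical JK name
print(entry.resolve())  # True, unless the JK resolves otherwise
```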
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138766
Approved by: https://github.com/ezyang
2024-11-01 19:20:37 +00:00
Sam Larsen
d8b606ecb5 [fx graph cache] Support freezing with FX graph caching (#136505)
Summary: The main changes to support freezing are:
1) When pickling constant tensors as part of the cache key calculation: If freezing has not been applied, then keep the existing behavior (pickle the metadata and values). If freezing has been applied, then pickle the values if the constant will be inlined; otherwise, consider only the metadata.
2) If freezing has been applied, modify what we store in the cache: Instead of storing the constant attributes in the cache entry, store the _names_ of the constants, and then grab those constants from the GraphModule when we need to attach the attributes to a newly-loaded Python module. Since the cache lookup path loads the Python module, this bullet means we need to thread through a GraphModule argument in several places (see the sketch after this list).
3) Since this feature means that we may need to reload the same Python module path more than once (but attach different constant attributes), I changed PyCodeCache.load_by_key_path to not store an in-memory map of path to module (since there may be more than one). I don't _think_ this will have any effect on performance, however. It's unclear why we were using an in-memory cache here anyway, since this function should only be called once for each module that needs to be loaded.
4) Several tests were removing on-disk PyCodeCache artifacts by iterating over the modules. I made this more straightforward by implementing a cache_clear method that removes the on-disk artifacts. Arguably, this should have been the implementation all along.
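
Purely illustrative sketch of idea (2) above; the names and data structures are assumptions, not the actual FX graph cache code. With freezing, the cache entry stores constant *names* only, and the tensors are re-attached from the live GraphModule after loading.

```python
def make_cache_entry(artifact_path, constants: dict, frozen: bool):
    # With freezing, store only the names; otherwise store the tensors as before.
    stored = list(constants.keys()) if frozen else dict(constants)
    return {"artifact": artifact_path, "constants": stored, "frozen": frozen}

def attach_constants(loaded_module, entry, gm):
    if entry["frozen"]:
        for name in entry["constants"]:
            setattr(loaded_module, name, getattr(gm, name))  # fetch tensors from the GraphModule
    else:
        for name, tensor in entry["constants"].items():
            setattr(loaded_module, name, tensor)  # tensors were stored directly in the entry
```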

Differential Revision: [D63542170](https://our.internmc.facebook.com/intern/diff/D63542170)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136505
Approved by: https://github.com/eellison
2024-11-01 18:29:29 +00:00
vladimirrotariu
7d644f025f make equation behind torch.isclose element-wise (#138459)
The current formula behind torch.isclose, according to the docs, is
![imagen](https://github.com/user-attachments/assets/6b79f6d8-e675-4585-b26b-0c6933f7ecdd)

However, torch.isclose acts element-wise, so this formula may be misleading at first, given that the docs say that `input` and `other` are the first and second tensors to compare, respectively. I propose the following change, to stress the element-wise nature of the norms in the equation:
![imagen](https://github.com/user-attachments/assets/2926a1c6-c4fa-4c48-8874-106521d3f54c)
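
For readers without the screenshots, the change can be rendered as follows (this is the standard `torch.isclose` condition; the element-wise indices are the proposed addition):

```latex
% current form
\lvert \mathrm{input} - \mathrm{other} \rvert \leq \mathrm{atol} + \mathrm{rtol} \times \lvert \mathrm{other} \rvert
% proposed element-wise form
\lvert \mathrm{input}_i - \mathrm{other}_i \rvert \leq \mathrm{atol} + \mathrm{rtol} \times \lvert \mathrm{other}_i \rvert
```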
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138459
Approved by: https://github.com/soulitzer
2024-11-01 18:18:33 +00:00
sanchitintel
3cbf0c0bbf [Inductor][CPP] Cache weight tiles in L1D for AMX int8 WoQ GEMM (#136688)
# Summary

The AMX ISA-based GEMM micro-kernel template for int8 weight-only quantization (BF16 activation, int8 weights) should cache dequantized weights (int8 -> int32 -> fp32 -> bf16) so that they do not have to be dequantized again in subsequent calls to the _inner-kernel_ that use the same weights.

This change leverages the fact that even for BF16 x BF16 GEMM template, cache-blocking ensures that `Nr * Kc` weight elements are cached in L1D cache (more info [here](https://static.sched.com/hosted_files/pytorch2024/59/TorchInductor%20CPU%20Backend%20Advancements%20-%20New%20Features%20and%20Performance%20Improvements_20240915.pdf)). Here, `Nr` is the register blocking size for `N` dimension (at the granularity of the GEMM micro-kernel, it's currently also the cache blocking size for `N` dimension, although that may change in the future), and `Kc` is the cache blocking size for `K` dimension.

The figure below is from the document linked above -

<img width="476" alt="image" src="https://github.com/user-attachments/assets/e23e5476-d910-46d1-a9b3-cbf77de76d94">
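
For intuition, here is a minimal Python sketch of the caching idea; the actual change lives in the C++ AMX GEMM micro-kernel template, and the blocking sizes, scale, and dequantization step below are illustrative assumptions.

```python
import torch

M, N, K = 8, 8, 16
Mr, Nr, Kc = 4, 4, 8  # register/cache blocking sizes (illustrative)
A_bf16 = torch.randn(M, K, dtype=torch.bfloat16)
W_int8 = torch.randint(-128, 127, (K, N), dtype=torch.int8)
scale = 0.02  # per-tensor scale, assumed for simplicity
C = torch.zeros(M, N, dtype=torch.float32)

for n0 in range(0, N, Nr):
    for k0 in range(0, K, Kc):
        # Dequantize the (Kc x Nr) weight tile once (int8 -> fp32 -> bf16); it then stays
        # hot in L1D and is reused by every inner-kernel call over the M blocks below.
        w_tile = (W_int8[k0:k0 + Kc, n0:n0 + Nr].float() * scale).to(torch.bfloat16)
        for m0 in range(0, M, Mr):
            C[m0:m0 + Mr, n0:n0 + Nr] += (
                A_bf16[m0:m0 + Mr, k0:k0 + Kc].float() @ w_tile.float()
            )
```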

## Performance data

Collected on 48 physical cores of one socket of Intel Xeon  Platinum 8468H (Xeon SP 4th gen). Intel OpenMP & tcmalloc were preloaded.

|M | N | K | Latency with ATen _weight_int8pack_mm | Latency with codegened templated GEMM (current main branch) | Latency with codegened templated GEMM (this PR) |
|-----|-----|-----|------|----------|----|
|4096|4096|4096| 45.844 ms | 9.322 ms| 5.2181 ms |
|4096|11008|4096| 127.618 ms |24.6258 ms | 13.6046 ms|
|4096|4096|11008| 121.953 ms | 25.4692 ms | 10.2669 ms |
|4096|32000|4096| 478.450 ms| 75.3942 ms | 48.21 ms |

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136688
Approved by: https://github.com/jgong5
2024-11-01 16:32:22 +00:00
Jason Ansel
b57b4b7f9b [inductor] Move remove_kernel_local_buffers to Kernel (#139370)
This method mutates the kernel, so it fits better in that class.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139370
Approved by: https://github.com/shunting314
ghstack dependencies: #139364, #139365
2024-11-01 16:28:15 +00:00
Jason Ansel
1e934b473c [inductor] Remove Node.last_usage mutation (#139365)
I can't figure out why this is needed.  Let's see if tests fail.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139365
Approved by: https://github.com/shunting314
ghstack dependencies: #139364
2024-11-01 16:28:15 +00:00
Jason Ansel
286d3ce266 [inductor] Remove SIMDKernel.last_usage (#139364)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139364
Approved by: https://github.com/eellison, https://github.com/shunting314
2024-11-01 16:28:15 +00:00
Shuqiang Zhang
df0c1eceb9 [pgnccl][simple] clean up unused members of PGNCCL (#139436)
Summary:
Found those unused members when prototying something else.
Better remove unused members
Test Plan:
CI

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139436
Approved by: https://github.com/Skylion007
2024-11-01 16:25:04 +00:00
Bin Bao
33dce10ece [AOTI][reland] Update zero size computation in clone_preserve_strides (#139458)
Summary: Reland https://github.com/pytorch/pytorch/pull/139224. clone_preserve_strides implemented in _inductor/utils.py does not handle multi-dimensional 0-size tensor correctly.

Differential Revision: [D65317451](https://our.internmc.facebook.com/intern/diff/D65317451)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139458
Approved by: https://github.com/hl475
2024-11-01 13:51:02 +00:00
Yifu Wang
e6e140c3d7 [Inductor] fix a compilation time regression caused by user-visible output handling (#139420)
This PR fixes a compilation time regression manifested in timm_models/hrnet_w18 caused by https://github.com/pytorch/pytorch/pull/136732.

The regression is reproducible locally. The compilation time is a bit noisy, but it's still possible to tell the difference.

```
Before the offending PR

compilation_latency mean=176.022 seconds
compilation_latency mean=176.564 seconds

On the offending PR

compilation_latency mean=180.096 seconds
compilation_latency mean=179.101 seconds

On the fix

compilation_latency mean=173.153 seconds
compilation_latency mean=174.182 seconds
```

(I think the fix being faster than the baseline is due to noise)

The cause of the regression is an inefficiency in `is_user_visible_output()`. Specifically, it used `output_node.args[0].index(node)` to obtain the output idx for each node (and we called this for each node twice). The offending PR had the assumption that `len(output_node.args[0])` is rather small. However, it has been proven false by the benchmark (it was 1900+ for timm_models/hrnet_w18).

The fix is to precompute `user_visible_output_strides` once by iterating only over the nodes in `output_node.args[0]`.
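
A hedged sketch of the shape of the fix (names follow the description above; this is not the exact Inductor code):

```python
# Before: every query does a linear scan over the graph outputs, O(len(outputs)) per node.
def output_index_before(node, output_node):
    return output_node.args[0].index(node)

# After: build a node -> output-index map in a single pass, then do O(1) lookups.
def build_user_visible_output_map(output_node):
    return {out: idx for idx, out in enumerate(output_node.args[0])}
```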

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139420
Approved by: https://github.com/ezyang
2024-11-01 08:27:40 +00:00
Shunting Zhang
5e4c8b671c [inductor] loaf-fix (#139376)
Fix https://github.com/pytorch/pytorch/issues/128063 .

Now for this snippet
```
        def f(x):
            y = torch.sum(torch.sum(x, dim=-1))

            z = x / 10.0
            z_t = z.t().contiguous().t()
            return y, z, z_t
```
Inductor can generate a single kernel for the first reduction and the two pointwise kernels (if loop-ordering after fusion is enabled), and the generated kernel reads `x` only ONCE. (Without proper handling, the two pointwise kernels may each access `x` once even when they are fused.)

The PR fixes 2 subtle bugs regarding LOAF:
1. When we reorder loops for a FusedSchedulerNode, we check if each sub-node's sizes match. But some nodes have sizes of `list` type (if their loops are not reordered) while others have sizes of `tuple` type (if their loops are reordered). I could change the upstream code to uniformly use either `list` or `tuple`, but without strong enforcement, future code could break this. So I just convert sizes to a uniform type before comparison.
2. We have a cache for the tiling decisions of a BaseSchedulerNode. If we reorder loops for the node, we should invalidate the cache. Otherwise, a stale tiling decision can result in a (very) bad kernel.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139376
Approved by: https://github.com/jansel, https://github.com/eellison
2024-11-01 07:54:32 +00:00
lingzhi98
39ec5a20ea [Partitioner] Enumerate partitions by iterating partition ids (#136598)
Currently, we get all partition ids by iterating over `assignment`, whose size is the same as the number of nodes in the graph. But we can reach the same result by iterating over `partitions_by_id`, whose size is much smaller than the number of nodes. Assuming the number of nodes is N and the number of partitions is P, the time complexity decreases from O(N * N) to O(N * P) after this patch.
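
An illustrative toy example of the change (not the actual capability-based partitioner code): `assignment` maps node -> partition id (one entry per node), while `partitions_by_id` maps partition id -> partition (one entry per partition).

```python
assignment = {"n0": 0, "n1": 0, "n2": 1, "n3": 1, "n4": 1}
partitions_by_id = {0: ["n0", "n1"], 1: ["n2", "n3", "n4"]}

partition_ids_before = set(assignment.values())      # O(N) in the number of nodes
partition_ids_after = set(partitions_by_id.keys())   # O(P) in the number of partitions
assert partition_ids_before == partition_ids_after
```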

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136598
Approved by: https://github.com/tarun292

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2024-11-01 07:42:36 +00:00
andras_matyassy
61df90e3f6 Add TORCHDYNAMO_EXTENDED_ADVICE (#137159) (#137196)
Fixes #137159

Happy to contribute to this project for the first time. If I missed any contribution guidelines, please let me know, I'm happy to adjust.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137196
Approved by: https://github.com/ezyang
2024-11-01 06:43:26 +00:00
angelayi
86db2cd194 [export] Initial draft export (#139383)
Differential Revision: [D65288590](https://our.internmc.facebook.com/intern/diff/D65288590)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139383
Approved by: https://github.com/zou3519
2024-11-01 06:25:44 +00:00
FFFrog
300ca6368f Remove deprecated alias macro (2/3) (#137559)
**Detailed Descriptions:**
- Remove AT_ASSERTM Macro
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137559
Approved by: https://github.com/ezyang
2024-11-01 06:17:57 +00:00
William Wen
0c47657b05 [dynamo] ignore False/None callback in fail_on_recompile/force_backend stances (#139215)
Fix https://github.com/pytorch/pytorch/issues/139202

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139215
Approved by: https://github.com/jansel
2024-11-01 06:15:28 +00:00
cyy
4a2da52137 [1/N] Replace c10::sv with std::sv (#139453)
Picks some safe replacements.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139453
Approved by: https://github.com/Skylion007
2024-11-01 05:39:37 +00:00
Will Constable
84416618a6 [Pipelining] Update schedules to use I, B actions. (#138886)
Also, update tests to use I (BACKWARD_INPUT) vs B (FULL_BACKWARD)
consistently.

Previously, schedules would issue a 'B' operation and leave it ambiguous
whether that operation should be BACKWARD_INPUT or FULL_BACKWARD,
depending on a separate flag (use_full_backward) passed to the schedule
class, which would determine which behavior was taken at runtime.

Now, use_full_backward is removed and the schedule class is required to
produce unambiguous IR.  The logic for 'use_full_backward' is removed
from the runtime.

_validate_pipeline_order is replaced with _simulate_comms_compute. Both
offer similar functionality, to validate the correctness of a schedule
IR.  'validate' operates on compute-only IR, while simulate operates on
compute + comm IR.  To convert from using validate to simulate, you have
to first insert comm actions via '_add_send_recv'.

'simulate' was inefficiently written before this PR and needed to be
optimized to run quickly for extra large schedules with >32 ranks and
microbatches per rank used in some unit tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138886
Approved by: https://github.com/H-Huang
2024-11-01 03:54:06 +00:00
Bob Ren
094d288f40 Update tensorify pass to specialize symfloats we didn't tensorify away (#138868)
As discussed w/ @ezyang offline, one way to de-risk the `specialize_float=False` rollout is to specialize all backed symfloats that we fail to tensorify away. This diff does a few things:

1) It fixes a bug where item_memo gets dropped (due to incorrect epoch invalidation)
2) It updates the tensorify pass to do the backup specialization

This pass was originally part of the [PR](https://github.com/pytorch/pytorch/pull/137782) that flips `specialize_float=False`, but we learned that the blast radius is simply too large. We've pivoted to a more milestone-driven approach where we learn from the failures of the aforementioned PR and cherry-pick fixes into main first. After this current PR lands, our strategy is as follows:

1) Integrate turning off specialize float only in the automatic dynamic pass.
2) Put up a canary diff that only turns off specialize float in `backend=eager` mode to sniff out symfloat related bugs in dynamo due to code paths we previously never exercised.
3) Put up a canary diff that only turns off specialize float in `backend=aot_eager` mode to sniff out symfloat related bugs in aotautograd due to code paths we previously never exercised.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138868
Approved by: https://github.com/ezyang
2024-11-01 03:18:02 +00:00
James Wu
c8a648d4df Add option to dynamo_timed and chromium_event_logger for logging pt2 compile events (#139309)
This diff considerably changes the column format of PT2 Compile Events:

- Now, instead of logging one new column per every piece of metadata, we just log a single column, "metadata". This vastly decreases the number of columns we need to log, which should help with retention.

- Now, we only log to scuba for a set of dynamo_timed() events that we actually care about aggregating. To do so, we add a boolean to dynamo_timed() that decides whether or not to log a pt2_compile_event. We'll always log a chromium_event for every dynamo_timed(), but only log a subset of those to scuba.

Differential Revision: [D65225598](https://our.internmc.facebook.com/intern/diff/D65225598/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139309
Approved by: https://github.com/oulgen
2024-11-01 02:40:25 +00:00
Gabriel Ferns
030f70b40b Allow inplacing buffer when other users are inconsequential (#138383)
Summary:
I think we can inplace a buffer if all of the users of said buffer are "inconsequential", defined as having been removed, being completed, or being part of the ancestors set. In particular, this allows LayerNorm to inplace its input buffer.

Implements:
https://github.com/pytorch/pytorch/issues/132826

Test Plan:
New unit test of matmul followed by LayerNorm, make sure there's an inplaced buffer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138383
Approved by: https://github.com/eellison
2024-11-01 01:24:40 +00:00
cyyever
8ace3e8023 Add sv starts/ends_with (#139261)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139261
Approved by: https://github.com/Skylion007

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2024-11-01 01:17:42 +00:00
Mikayla Gawarecki
2a309c0997 Fix weights_only for BUILD instructions for user allowlisted objects with __slots__ (#138936)
Previously, the `BUILD` instruction missed handling for `__slots__`. **This only applies to things allowlisted via `add_safe_globals`/`safe_globals` that use slots.**

### Background
When does pickle serialize a `BUILD` instruction? When `state` is not `None` and `state_setter` is `None` [[link](c5b99f5c2c/Lib/pickle.py (L765))]. In this case, the docs tell us that either `__setstate__` or a `__dict__` update will be performed [[link](https://github.com/python/cpython/blob/3.13/Lib/pickletools.py#L1984)]

`__reduce__`/`__reduce_ex__` are expected to return tuples of length 2 to 6, where `state` is the 3rd argument. When the user doesn't patch `__reduce__` but patches `__setstate__`/`__getstate__`, state will be whatever is returned by `__getstate__`.

Note the return type for [`__getstate__` ](https://docs.python.org/3/library/pickle.html#object.__getstate__)

- For a class that has no instance [`__dict__`](https://docs.python.org/3/reference/datamodel.html#object.__dict__) and no [`__slots__`](https://docs.python.org/3/reference/datamodel.html#object.__slots__), the default state is None.
- For a class that has an instance [`__dict__`](https://docs.python.org/3/reference/datamodel.html#object.__dict__) and no [`__slots__`](https://docs.python.org/3/reference/datamodel.html#object.__slots__), the default state is `self.__dict__`.
- For a class that has an instance [`__dict__`](https://docs.python.org/3/reference/datamodel.html#object.__dict__) and [`__slots__`](https://docs.python.org/3/reference/datamodel.html#object.__slots__), the default state is a tuple consisting of two dictionaries: `self.__dict__`, and a dictionary mapping slot names to slot values. Only slots that have a value are included in the latter.
- For a class that has [`__slots__`](https://docs.python.org/3/reference/datamodel.html#object.__slots__) and no instance [`__dict__`](https://docs.python.org/3/reference/datamodel.html#object.__dict__), the default state is a tuple whose first item is None and whose second item is a dictionary mapping slot names to slot values described in the previous bullet.

see handling in pickle code c5b99f5c2c/Lib/pickle.py (L1846-L1867)

Before this PR, we didn't account for the fact that when `__setstate__` is not defined, `state` might be a tuple, so this would fail:

```python
import torch
from dataclasses import dataclass

# Define the dataclass
@dataclass
class MyDataClass:
    __slots__ = ["x", "y"]
    x: int
    y: str
# Create an instance of the dataclass
my_data = MyDataClass(x=2, y=3)
# Save the dataclass to a file
torch.save(my_data, "my_data.pt")
with torch.serialization.safe_globals([MyDataClass]):
    loaded_my_data = torch.load("my_data.pt", weights_only=True)
# AttributeError: 'MyDataClass' object has no attribute '__dict__'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138936
Approved by: https://github.com/malfet
2024-11-01 00:59:29 +00:00
Jason Ansel
f9ef880c0b [inductor] Refactor kernel args into SIMDKernelFeatures (#139327)
This is a refactor PR to move stuff around.  I'm planning to use the SIMDKernelFeatures class (in a future PR) to host new heuristics for selecting kernel types and block sizes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139327
Approved by: https://github.com/eellison, https://github.com/shunting314
2024-11-01 00:30:14 +00:00
PyTorch MergeBot
b6b9596607 Revert "[dynamo] Fix constant propagation in builtins and UserClasses (#131354)"
This reverts commit 44257c063e.

Reverted https://github.com/pytorch/pytorch/pull/131354 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it seems to break some internal tests ([comment](https://github.com/pytorch/pytorch/pull/131354#issuecomment-2451050605))
2024-11-01 00:13:20 +00:00
IvanKobzarev
d33849908d [aotd] Fuse tangents subclasses runtime traversals (#139068)
Reason:
Currently we have multiple traversals over tangents at runtime:
 - To check that types and structure are identical to what we guessed during tracing time
 - Coerce metadata
 - Coerce memory_format
 - Unwrap_tensor_subclass
All of them traverse the tree of subclasses via __tensor_flatten__ calls.

Change:
Do everything in one traversal at runtime (including flattening).

Implementation details:

1. Add memory_format information inside SubclassCreationMeta; for PlainTensors, keep not only the (int) unwrapped_index but also the memory_format.

Preparing the memory_format is optional (controlled by with_memory_format=True).

2. Remove the unused subclass_utils.create_metadata_for_subclass, which has no usages inside torch and would otherwise require updating the logic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139068
Approved by: https://github.com/bdhirsh
2024-11-01 00:03:02 +00:00
Xuan Zhang
86602a66d7 [orm] fix live_memory computation in lpmf algorithm (#139396)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139396
Approved by: https://github.com/yf225
2024-10-31 23:45:30 +00:00
PyTorch MergeBot
3d3551506d Revert "[dynamo, 3.13] fix bytecode nop tests (#139323)"
This reverts commit c2d754441f.

Reverted https://github.com/pytorch/pytorch/pull/139323 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it seems to cause a regression in instruction count metric ([comment](https://github.com/pytorch/pytorch/pull/139323#issuecomment-2451017609))
2024-10-31 23:34:00 +00:00
Will Constable
8e8040a5c2 [Pipelining] Optimize ready_to_schedule logic (#138924)
Used in both simulator and add_send_recv pass, the ready_to_schedule
logic works by looking at all the previously scheduled ops on a rank to
see if any of them 'unblocks' the current op to be scheduled.  For example,
to schedule a FORWARD op, a previous RECV_F op is needed, unless this is
stage 0 or there is a previous stage on the same rank that ran FORWARD
already.

The old implementation iteratively compared the candidate op to the
previous ops.  The new implementation uses set lookups to reduce
complexity.  It also maintains the set of previous ops as ops are
scheduled rather than constructing a set on demand.
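
A hedged sketch of the set-based readiness check; the action names and the unblocking rule below are simplified illustrations of the FORWARD example above, not the actual schedule-IR logic.

```python
prev_ops = set()  # maintained incrementally as actions are scheduled

def unblockers(op):
    # A FORWARD op needs its RECV_F, unless it is stage 0 or an earlier stage on the
    # same rank already ran FORWARD (mirrors the example in the description).
    if not op.startswith("F"):
        return set()
    stage = int(op[1:])
    if stage == 0:
        return set()
    return {f"RECV_{op}", f"F{stage - 1}"}

def ready_to_schedule(op):
    # O(1) set-membership checks instead of re-scanning the list of previous ops.
    needed = unblockers(op)
    return not needed or any(u in prev_ops for u in needed)

def schedule(op):
    assert ready_to_schedule(op)
    prev_ops.add(op)  # keep the set up to date rather than rebuilding it each time

schedule("F0")  # stage 0 forward needs nothing
schedule("F1")  # unblocked by F0 on the same rank, no RECV_F1 required
```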

I did not save benchmark results, but this results in a 10-100x speedup
which is most noticeable for unit tests with artificially huge schedule
IR, the largest of which took longer than 20m before (I never let it
finish) but now takes less than 14s.  Most schedules take less than
10ms.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138924
Approved by: https://github.com/H-Huang
ghstack dependencies: #138928, #131762
2024-10-31 22:49:45 +00:00
Will Constable
c82e0d117a [Pipelining] Support separate dI / dW and V-schedules (#131762)
### Separate dI / dW:

PipelineScheduleRuntime now supports execution of merged FULL_BACKWARD
or separate dI / dW operations.

Separating the B and W may add execution overhead or may be suboptimal
in cases where BW are 'fused', but it is worthwhile when separating B, W
lets the schedule be more efficient by filling in bubbles.  In some
cases, the schedule will still issue B followed by W at certain points,
so in these cases just merge them back into BW ops and execute them as
full backwards rather than executing a B followed by a W.

### V-schedules:

V-schedules have a special case where the last rank has 2 adjacent
stages.

E.g. if rank3 had stage 3 and stage 4, then we should implement direct
transfer of stage3 outputs to stage4 inputs without a
send/recv.

In the scheduling logic, we must also allow scheduling the
stage 4 forward after running the stage 3 forward, without expecting a stage
4 RECV_F.

In the runtime, we pass activations between adjacent stages without
using SEND/RECV ops since the stages are on the same rank/process.  We
add new APIs to the PipelineStage abstraction for passing the activations
both during forward and backward.  Currently the implementation directly
modifies the 'recv buffers' the stage is managing, so the
forward/backward execution logic does not need to know the difference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131762
Approved by: https://github.com/H-Huang
ghstack dependencies: #138928
2024-10-31 22:49:45 +00:00
Zhengxu Chen
45da80b970 reland D65167805 "[export] Update min_val and max_val to Optional[int] in serialization." (#139394)
Summary:
had a land race with another diff, D65166035, to fix the schema.

According to the export team's discussion, we are upgrading min_val and max_val to optional fields, which shouldn't break BC and allows the schema to express infinity.

Test Plan: buck2 test 'fbcode//mode/opt' fbcode//apf/rec/ir/tests:ir_export_deserialize_test

Differential Revision: D65273170

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139394
Approved by: https://github.com/yiming0416
2024-10-31 22:28:32 +00:00
Donald Tolley
c1e7d85ce6 Add Weighted Loss Functions to PyTorch : WMSE, WMAE, and Weighted Huber Loss (#132049)
#### Summary
This pull request introduces new weighted loss functions to the PyTorch library: `weighted_huber_loss`, `wmse_loss`, and `wmae_loss`. These functions allow for precise control over the influence of each sample during training, important for imbalanced data or when certain samples are more significant than others.

#### Changes
- **`weighted_huber_loss`**: Huber loss modified to incorporate weights, providing a balance between L1 and L2 loss based on the `delta` parameter.
- **`wmse_loss`** (Weighted Mean Squared Error): Applies weights to the standard MSE loss, useful for emphasizing certain samples in regression tasks.
- **`wmae_loss`** (Weighted Mean Absolute Error): Adjusts MAE loss calculation by including weights, ideal for datasets with outliers.

#### Code Details
- **Input Validation**: Ensures `input`, `target`, and `weights` tensors match in size to prevent broadcasting errors.
- **Reduction Options**: Supports `none`, `mean`, and `sum` reductions to suit various computational needs.
- **Backward Compatibility**: Maintains support for deprecated arguments `size_average` and `reduce`, while encouraging use of the `reduction` argument.

#### Usage Example
```python
import torch
input = torch.tensor([0.5, 2.5, 2.0], dtype=torch.float32)
target = torch.tensor([0.0, 2.0, 1.5], dtype=torch.float32)
weights = torch.tensor([1.0, 0.5, 1.5], dtype=torch.float32)

loss = weighted_huber_loss(input, target, weights, delta=1.0)
print(loss)
```
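
For reference, a minimal sketch of what a weighted Huber loss computes (illustrative only; the PR's actual implementation, argument validation, and reduction handling may differ):

```python
import torch

def weighted_huber_loss_sketch(input, target, weights, delta=1.0):
    # Element-wise Huber loss, scaled per-sample by `weights`, then mean-reduced.
    diff = (input - target).abs()
    huber = torch.where(diff <= delta, 0.5 * diff**2, delta * (diff - 0.5 * delta))
    return (weights * huber).mean()
```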
---

Feedback on these implementations is welcome; please let me know if further modifications are required.

Resolves #132465

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132049
Approved by: https://github.com/mikaylagawarecki

Co-authored-by: mikaylagawarecki <mikaylagawarecki@gmail.com>
2024-10-31 21:59:43 +00:00
Simon Fan
82e74ad40e [aot autograd] refactor CompiledFunction.backward: control flow (3/N) (#139347)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139347
Approved by: https://github.com/zou3519
ghstack dependencies: #139331, #139343
2024-10-31 21:53:03 +00:00
Simon Fan
8134456a27 [aot autograd] refactor CompiledFunction.backward: epilogue (2/N) (#139343)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139343
Approved by: https://github.com/zou3519
ghstack dependencies: #139331
2024-10-31 21:53:03 +00:00
Simon Fan
04ce9ec087 [aot autograd] refactor CompiledFunction.backward: prologue (1/N) (#139331)
So for functional autograd + CA, most nodes are inlined in aot autograd. But user-defined callables aren't safe to make_fx unless dynamo traces through them. The AOT backward must be inlined by dynamo time. We plan to directly insert calls to the backward in the graph:
- call prologue
- call bwd graph
- call epilogue

Restructuring our AOT bwd implementation will make this implementation easier.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139331
Approved by: https://github.com/zou3519
2024-10-31 21:53:03 +00:00
angelayi
8c22e09e39 [aoti] Add masked_select to cshim (#139071)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139071
Approved by: https://github.com/desertfire
2024-10-31 21:52:53 +00:00
PyTorch MergeBot
b9acbde4fd Revert "Update tensorify pass to specialize symfloats we didn't tensorify away (#138868)"
This reverts commit a494572799.

Reverted https://github.com/pytorch/pytorch/pull/138868 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I think the new tests are failing on fbcode ([comment](https://github.com/pytorch/pytorch/pull/138868#issuecomment-2450863895))
2024-10-31 21:46:06 +00:00
Laith Sakka
6a1c451479 Don't uselessly recompute axiom dict every static eval call (#138967)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138967
Approved by: https://github.com/ezyang
2024-10-31 21:16:55 +00:00
PyTorch MergeBot
c4d9428b17 Revert "[AOTI] Update zero size computation in clone_preserve_strides (#139224)"
This reverts commit 206a8dde68.

Reverted https://github.com/pytorch/pytorch/pull/139224 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/139224#issuecomment-2450811914))
2024-10-31 21:05:07 +00:00
Joel Schlosser
ddb291a881 Fix and test several NJT reductions (#139317)
I'm sick of reductions not working properly - spotty dim coverage, missing backwards, etc. This PR fixes quite a bit.

It applies to the following ops:
* `sum` / `mean` / `prod`
* `all` / `any`
* `amin` / `amax`
* `min` / `max`
* `argmin` / `argmax`

The general reduction logic has been factored out into a helper `_apply_reduction(func, func_name, identity_element, *args, **kwargs)`. The idea is that by providing a valid identity element, we can utilize conversions to padded dense when needed for reducing over the ragged dim.
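
For intuition, a minimal self-contained sketch of the identity-element idea (plain padded tensors are used here for illustration; the actual helper operates on nested/jagged tensors and their conversions to padded dense):

```python
import torch

# Two "ragged" rows of different lengths, padded with the identity element for amax (-inf)
rows = [torch.tensor([1.0, 5.0, 2.0]), torch.tensor([4.0, 3.0])]
identity = float("-inf")
max_len = max(r.numel() for r in rows)
padded = torch.stack(
    [torch.cat([r, torch.full((max_len - r.numel(),), identity)]) for r in rows]
)
print(padded.amax(dim=1))  # tensor([5., 4.]) -- the padding never wins, so the result is exact
```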

Extensive test coverage includes:
* reductions across ragged dim
* reductions across non-batch, non-ragged dims
* reductions across both batch and ragged dims
* multiple dim reductions (for ops that support this)
* full reduction -> scalar

Bonus: the PR includes backwards fixes for `sum` and `mean`, which have never worked.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139317
Approved by: https://github.com/cpuhrsch
2024-10-31 20:55:38 +00:00
PyTorch MergeBot
abb0dd4b00 Revert "[inductor] patterns to remove pointless view/permute pairs (#139136)"
This reverts commit 2b86cd74a6.

Reverted https://github.com/pytorch/pytorch/pull/139136 on behalf of https://github.com/ZainRizvi due to Sorry but this PR seems to have broken on trunk. The failure: distributed/_composable/test_replicate_with_compiler.py::ReplicateTest::test_bucketing_coalesced_op [GH job link](https://github.com/pytorch/pytorch/actions/runs/11615060962/job/32346609889) [HUD commit link](2b86cd74a6) ([comment](https://github.com/pytorch/pytorch/pull/139136#issuecomment-2450796414))
2024-10-31 20:54:17 +00:00
drisspg
53c9c19e76 [Autotune Inductor] Some clean up and dataclassing (#139157)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139157
Approved by: https://github.com/eellison
2024-10-31 20:04:55 +00:00
William Wen
c2d754441f [dynamo, 3.13] fix bytecode nop tests (#139323)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139323
Approved by: https://github.com/jansel
2024-10-31 20:03:43 +00:00
Shivam Raikundalia
ac7acfb894 [Profiler] Create Auto-Trace Frontend for Trace ID (#139310)
Summary:
This PR adds an Auto-Trace implementation for Trace ID. By default, the python side will generate a uuid in the same format as the one set in the backend by kineto. Upon running an auto-trace, the python-generated trace id will overwrite the one set in kineto using the Config variable. Since we don't expect users to generate on-demand traces after an auto-trace, we can simply keep overwriting the backend trace id whenever autotrace is run. If we one day want to do something like this, we simply have to add a call in kineto on the backend to generate a new ID upon start of profiling.

We also implement a custom callback in the frontend so that users can generate their own trace ids if they wish to. This works similarly to the default, the only difference being that they have to manually set this callback after a profiler is created. We use a specific call to set this rather than putting it in the frontend initializer in case users want to change the trace_id for different repeats.

Test Plan: Tested both default and custom callbacks using the verbose prints added. Trace ids on the frontend and the prints on the backend for the manifold upload matched.

Differential Revision: D65178308

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139310
Approved by: https://github.com/shengfukevin
2024-10-31 19:02:57 +00:00
Xuehai Pan
7faf0ad913 [dynamo] fix deque.maxlen support when extending elements from the left (#139279)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139279
Approved by: https://github.com/jansel
2024-10-31 18:38:11 +00:00
bskrlj
8e27833e30 Ensure SWA boundary conditions w.r.t. definition (#133773)
According to the [documentation](https://pytorch.org/docs/stable/optim.html), decay is a number in the [0, 1] range:
```
Decay is a parameter between 0 and 1 that controls how fast the averaged parameters are decayed. If not provided to get_ema_multi_avg_fn, the default is 0.999.
```
An inspection of `swa_utils.py` indicates there are no checks for invalid values of `decay`. Adding asserts, as suggested in this PR, ensures a valid compute range (one way to enforce correct behavior; there are perhaps more suitable ones). The papers `torch` cites for the reference idea/implementation also consider exclusively this range (e.g., https://arxiv.org/pdf/2310.04415).
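
A minimal sketch of the kind of check suggested, assuming it lives in the `swa_utils` factory functions (the exact placement and wording of the assert in the PR may differ):

```python
def get_ema_multi_avg_fn(decay=0.999):
    assert 0.0 <= decay <= 1.0, f"decay must be in [0, 1], got {decay}"
    # ... build and return the EMA averaging function as before ...
```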

Fixes https://github.com/pytorch/pytorch/issues/133772

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133773
Approved by: https://github.com/janeyx99
2024-10-31 18:24:08 +00:00
Will Constable
547d921462 [Pipelining] Remove unused special case from simulator (#138928)
The special case was added during experimentation with batched send/recv
ops.  The ops needed to be jointly scheduled or the simulator would
think that each op was unschedulable since each contained a recv that
depended on the other's send.  The workaround I added was to let the
scheduler 'peek' one op ahead for unblocking, which let batched ops be
scheduled but also changed the behavior of non-batched ops.  It let RECV
ops be simulated one step earlier than the unblocking SEND ops, which
shortened the simulated duration of schedules.

Removing this workaround simplifies the simulator but more importantly
lends to optimizing the runtime of the simulator by making it much
easier to avoid copying or extending lists of previous ops on each
iteration.  It also restores the output of the simulator for non-batched
ops to a more natural output where RECV must happen at the same time or
later than matching SEND, rather than possibly a step earlier.

For example, for this test:
`python test/distributed/pipelining/test_schedule.py -k test_send_recv_test_info0`

Before:

```
Step 0: 0F0      1RECV_F0
Step 1: 0SEND_F0
Step 2: 0F1      1RECV_F1
Step 3: 0SEND_F1 1F0
Step 4: 0RECV_B0 1B0
Step 5: 0B0      1SEND_B0
Step 6:          1F1
Step 7: 0RECV_B1 1B1
Step 8: 0B1      1SEND_B1
```

After:
```
Rank 0   Rank 1
Step 00: 0F0
Step 01: 0SEND_F0 1RECV_F0
Step 02: 0F1
Step 03: 0SEND_F1 1RECV_F1
Step 04:          1F0
Step 05:          1B0
Step 06: 0RECV_B0 1SEND_B0
Step 07: 0B0      1F1
Step 08:          1B1
Step 09: 0RECV_B1 1SEND_B1
Step 10: 0B1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138928
Approved by: https://github.com/H-Huang
2024-10-31 17:48:35 +00:00
Nikita Shulga
9d096e4d9f Don't use deprecated type properties in UpsampleKernel (#139399)
By replacing `at::CPU(dtype)` pattern with `at::device(kCPU).dtype(dtype)` pattern

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139399
Approved by: https://github.com/Skylion007
ghstack dependencies: #139353, #139358
2024-10-31 17:32:19 +00:00
Bin Bao
206a8dde68 [AOTI] Update zero size computation in clone_preserve_strides (#139224)
Summary: clone_preserve_strides implemented in _inductor/utils.py does not handle multi-dimensional 0-size tensor correctly. Fix that.

Differential Revision: [D65250405](https://our.internmc.facebook.com/intern/diff/D65250405)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139224
Approved by: https://github.com/angelayi
2024-10-31 17:07:18 +00:00
eellison
f93ebb2cf4 [Easy] Refactor post grad application of passes (#139293)
Refactors GraphTransformObserver to hook into the bisect manager pass application. And reworks post grad passes to use it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139293
Approved by: https://github.com/exclamaforte
ghstack dependencies: #139292
2024-10-31 17:05:27 +00:00
Shuqiang Zhang
5075046db2 [c10d] separate comm init from getNCClComm (#139362)
Summary:
This PR is a no-op, but it clearly separates the init logic from
getNCCLComm. getNCCLComm is now purely a 'read'-only function.
Test Plan:
existing CI

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139362
Approved by: https://github.com/wconstab
2024-10-31 16:58:20 +00:00
James Wu
864beebb41 [easy] Add start event metadata to collected metadata for PT2 Compile Events (#139289)
We should be logging metadata from event starts to PT2 Compile Events too.

Differential Revision: [D65070086](https://our.internmc.facebook.com/intern/diff/D65070086/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139289
Approved by: https://github.com/oulgen
2024-10-31 16:52:30 +00:00
PyTorch MergeBot
87f1990697 Revert "Don't uselessly recompute axiom dict every static eval call (#138967)"
This reverts commit 24b695ae2d.

Reverted https://github.com/pytorch/pytorch/pull/138967 on behalf of https://github.com/ZainRizvi due to Sorry, looks like this PR introduced a failure that was incorrectly classified as flaky, and the log classifier didn't identify the right log line either ([comment](https://github.com/pytorch/pytorch/pull/138967#issuecomment-2450228525))
2024-10-31 15:54:18 +00:00
Shunting Zhang
2b86cd74a6 [inductor] patterns to remove pointless view/permute pairs (#139136)
These are not artificial patterns I came up with; they show up in a linear+CrossEntropyLoss graph.

Consider this snippet:
```
import torch.nn as nn

# B: batch size, T: sequence length, C: input features, V: vocab size
class LinearAndCEL(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(C, V)
        self.ce = nn.CrossEntropyLoss()

    def forward(self, x, y):
        return self.ce(self.linear(x).view(B * T, V), y.view(-1))
```

`x` passed to `forward` is a 3D tensor of shape [B, T, C].
`self.linear` will first view x as a [BxT, C] tensor, do the matmul to produce a [BxT, V] tensor, and then view this output back to a 3D tensor with shape [B, T, V]. User code then adds another view op to convert the tensor shape to [BxT, V]. This generates a pair of redundant views. A pair of redundant permutes happens in the backward pass when we compute gradients.

The view ops make it hard to chunk linear+CEL. When the view op breaks up the dimension being chunked, what should the chunker do (even if we merge those dimensions again later)? Removing these pointless view pairs makes the chunker simpler, and I think it's in general nice to do.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139136
Approved by: https://github.com/Chillee, https://github.com/jansel
2024-10-31 15:35:46 +00:00
Sam Larsen
d21a25c6b7 [fx graph cache] Refactor FxGraphCachePickler, step 2 (#138683)
Summary: Move all the custom `_reduce_*` functions inside the FxGraphCachePickler class. This is mostly a cosmetic change since they're conceptually members of FxGraphCachePickler. But also in an upcoming diff, I'll add a member variable to the class to control how we handle constant tensors, so it will be convenient to be able to query that setting via `self`. I made the analogous changes to AOTAutogradCachePickler for consistency.

Test Plan: unit tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138683
Approved by: https://github.com/eellison
ghstack dependencies: #138681, #138682
2024-10-31 15:12:18 +00:00
FFFrog
ea07718a5a Remove redundant warning compress (#139367)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139367
Approved by: https://github.com/cyyever, https://github.com/Skylion007
2024-10-31 14:39:19 +00:00
augusto.yjh
c934ed6567 init kineto after torch module initialized (#131448)
Fixes #131020

As discussed in the issue thread, we can use `KINETO_DAEMON_INIT_DELAY_S` to delay the initialization of `kineto` in case `kineto` is initialized before `libtorch_cuda.so`.

It's not clear how to set a proper value for the environment variable `KINETO_DAEMON_INIT_DELAY_S`, so here's a trick to make `kineto` initialize after the `torch` module has been initialized. I'm not sure whether this is an acceptable trick; please take a look at this PR, thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131448
Approved by: https://github.com/sraikund16, https://github.com/briancoutinho
2024-10-31 13:24:24 +00:00
rzou
ccaa2a206a [inductor] make requires_stride_order more unbacked-symint-aware (#137063)
Previously, we tried to sort SymInt strides to determine the stride
order. This PR makes the sorting more unbacked symint aware: given a Tensor
with sizes (u0, u1, u2), a contiguous layout has strides (u1 * u2, u2, 1), which is
sortable under the guard_size_oblivious assumptions.

Test Plan:
- test case

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137063
Approved by: https://github.com/eellison
2024-10-31 13:11:02 +00:00
Wu, Chunyuan
3192bdeea4 [AOTI] Use len(serialized_weights) when calculating consts_size (#139054)
Fixes the failure of INT8 DLRM using AOTI.
The previous code calculates `consts_size` directly using `tensor` from `graph.constants`:
```
  consts_size = sum(
      get_nbytes_of_tensor(tensor, all_cuda)
      for (name, tensor) in graph.constants.items()
      if name not in graph.folded_constants
  )
```
Meanwhile, the actual bytes to serialize (`serialized_weights`) is using `graph.get_original_value_of_constant(name)`:
```
  serialized_weights = b"".join(
      _to_bytes(graph.get_original_value_of_constant(name), all_cuda)
      for name in graph.constants.keys()
      if name not in graph.folded_constants
  )
```

`tensor` from `graph.constants` could be different from `graph.get_original_value_of_constant(name)` thus making the `consts_size` inconsistent with the actual byte size of the `serialized_weights`, resulting in runtime error `weights_offset must be aligned to 16K boundary`, similar to what happened in https://github.com/pytorch/pytorch/pull/135205.

This PR directly gets `consts_size` using `len(serialized_weights)`, which fixes the inconsistency.

We also added a `reduce_range` argument to the `get_default_x86_inductor_quantization_config` function, which is needed in the unit test to avoid accuracy issue on CI machines (earlier CPUs without VNNI).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139054
Approved by: https://github.com/leslie-fang-intel, https://github.com/jgong5, https://github.com/desertfire
2024-10-31 09:54:16 +00:00
Laith Sakka
24b695ae2d Don't uselessly recompute axiom dict every static eval call (#138967)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138967
Approved by: https://github.com/ezyang
2024-10-31 07:46:35 +00:00
Huy Do
f98bc9a49d Revert D65167805 (#139371)
Summary:
This diff reverts D65167805, which broke the release pipeline.

Test Plan: NA

Differential Revision: D65245198

@diff-train-skip-merge (to silence facebook-github-bot until I have a stamp to land this)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139371
Approved by: https://github.com/malfet
2024-10-31 07:25:28 +00:00
cyy
f95c71867e [9/N] Fix extra warnings brought by clang-tidy-17 (#139286)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139286
Approved by: https://github.com/ezyang
2024-10-31 05:20:31 +00:00
FFFrog
42b5e191ae Fix the example of fx/interpreter (#139368)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139368
Approved by: https://github.com/ezyang
2024-10-31 05:12:43 +00:00
Bob Ren
74b7fb9519 Add conjugate method on SymFloat (#139249)
Fixes python test/dynamo/test_dynamic_shapes.py DynamicShapesFunctionTests.test_number_method_method_conjugate_num_type4_dynamic_shapes

when we turn off specialize float on eager: https://github.com/pytorch/pytorch/pull/138915

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139249
Approved by: https://github.com/ezyang
2024-10-31 04:55:36 +00:00
kshitij12345
0cf4cc3d5f [fx] split_module subgraph should always have an output node (#139275)
Fixes https://github.com/pytorch/pytorch/issues/138207

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139275
Approved by: https://github.com/ezyang
2024-10-31 04:53:19 +00:00
Sam Larsen
e3e3ab805b [fx graph cache] Refactor FxGraphCachePickler (#138682)
Summary: In an upcoming change, we need to modify FxGraphCachePickler to behave differently depending on whether or not the graph has frozen parameters. To do that, it will be convenient to change FxGraphCachePickler into a regular object instead of a collection of classmethods.

Test Plan: unit tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138682
Approved by: https://github.com/eellison
ghstack dependencies: #138681
2024-10-31 03:31:51 +00:00
cyy
1dd503c6fb [4/N] Fix Wextra-semi warning (#139256)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139256
Approved by: https://github.com/ezyang
2024-10-31 03:01:14 +00:00
Wu, Chunyuan
d7411c0cc1 [AOTI] add C shim for QConvPointWise (#138540)
This PR adds C shim for `QConvPointWisePT2E` and `QConvPointWiseBinaryPT2E` similar to https://github.com/pytorch/pytorch/pull/138439. Besides that, we aligned the implementation of `qconv_pointwise` with `qlinear_pointwise` in the following aspects:
1. The parameter orders of `qconv_pointwise` and `qlinear_pointwise` are quite different; we aligned the schema of `qconv_pointwise` to have a parameter order similar to `qlinear_pointwise` to make it more consistent.
2. We always convert `x_scale` and `x_zero_point` to Tensors, just like in the lowering of `qlinear_pointwise`. This avoids the need to create two separate C APIs (one for `double x_scale` and `int64_t x_zero_point`, and another for `Tensor` versions). Instead, we only need one API for `Tensor`-based `x_scale` and `x_zero_point`. If we later add dynamic quantization for qconv (which will use `Tensor` for `x_scale` and `x_zero_point`), we can reuse the code from this PR and don't need to change the C shim layer API.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138540
Approved by: https://github.com/jgong5, https://github.com/desertfire
ghstack dependencies: #138691, #138806
2024-10-31 02:03:01 +00:00
Oguz Ulgen
69ea2e726c Consolidate Triton cache into Inductor cache (#138239)
Summary:
This diff/PR attempts to consolidate Triton caching into the Inductor caching so that there can be just one cache that unifies them both, reducing network requests and increasing success rate.

Implementation details can be found via reading the code or the post: https://fb.workplace.com/groups/1553867532149891/posts/1605037517032892

I did not use the Autotune bundler code at all since I want to simplify that and merge it into this on the next diff/PR.

In terms of instrumentation
1) Dynamo compile: `triton_bundler_time_saved_s` is the sum over all triton.compile calls. We don't have to use the specific number; we can use this as a binary value.
2) Events table: I used dynamo_timed to measure how much time we spend on the bundler collect and write functions, which is all the work we do in this diff.
3) TLParse: I emitted the number of kernels and triton_bundler_time_saved_s into tlparse as well.

Test Plan: Updated unit tests

Adhoc running
```
TORCHINDUCTOR_BUNDLE_TRITON_INTO_FX_GRAPH_CACHE=1 buck2 run @mode/opt //scripts/oulgen:runner
```
gives
https://interncache-all.fbcdn.net/manifold/tlparse_reports/tree/logs/.tmpmTZt6b/0_0_0/fx_graph_cache_hit_4.json
<img width="771" alt="image" src="https://github.com/user-attachments/assets/478782a2-ee47-40cb-b723-fcac2bf9dd93">

Differential Revision: D64504909

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138239
Approved by: https://github.com/ezyang
2024-10-31 01:37:16 +00:00
Edward Z. Yang
c7f1fccd7a Globally enable Python dispatcher for all of Inductor compilation (#137621)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137621
Approved by: https://github.com/eellison
2024-10-31 01:35:23 +00:00
PyTorch MergeBot
289e03a429 Revert "Allow inplacing buffer when other users are inconsequential (#138383)"
This reverts commit 8840889c3f.

Reverted https://github.com/pytorch/pytorch/pull/138383 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it seems to break trunk after landing ([comment](https://github.com/pytorch/pytorch/pull/138383#issuecomment-2448824206))
2024-10-31 01:32:15 +00:00
Yidi Wu
38429938de [cond] make cond not throw warnings on constant pred in eager mode (#138837)
We no longer raise warnings for torch.cond on constant predicates in eager mode; the motivation is in https://github.com/pytorch/pytorch/issues/138782.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138837
Approved by: https://github.com/zou3519
2024-10-31 01:13:19 +00:00
Saurabh Mishra
b90503d9ae [DCP] Unit Test to validate the stateful and non-stateful loads (#139251)
Summary: Unit test to validate the stateful and non-stateful loads. This test is a follow-up to the fix in [#138575](https://github.com/pytorch/pytorch/pull/138575), which addresses an issue with stateful dicts' in-place updates in distributed checkpoint loading. Also added additional code comments regarding the stateful and non-stateful loads.

Test Plan:
```
buck2 test //caffe2/test/distributed/checkpoint/e2e:test_e2e_save_and_load
```

https://www.internalfb.com/intern/testinfra/testrun/8162774562859797

Differential Revision: D65188659

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139251
Approved by: https://github.com/LucasLLC, https://github.com/fegin
2024-10-31 01:12:51 +00:00
PyTorch MergeBot
42d790bb65 Revert "Add conjugate method on SymFloat (#139249)"
This reverts commit bcf8a0124f.

Reverted https://github.com/pytorch/pytorch/pull/139249 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but the doc build failure is legit ([comment](https://github.com/pytorch/pytorch/pull/139249#issuecomment-2448755839))
2024-10-31 00:45:48 +00:00
eellison
4db6b740bc [Easy] GraphTransformObserver Refactoring (#139292)
Uses `torch._inductor.config.trace.log_url_for_graph_xform` by default as the log url. It was only ever instantiated with this as the log_url argument.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139292
Approved by: https://github.com/shengfukevin, https://github.com/shunting314
2024-10-31 00:33:28 +00:00
Brian Hirsh
f81223938c support nesting of suppress_guards, suppress guards when generated compiled autograd graph (#138968)
Fixes https://github.com/pytorch/pytorch/issues/138920. See comments there for details.

I still need to try to get a smaller repro to write an actual test. But with the guards suppressed, I no longer see the specialization in the CA graph in the linked example:
```
        aot1_view_3: ... = torch.ops.aten.view.default(aot1_tangents_1, [aot1_sym_size_int, 48, 1])
        aot1_view_4: ... = torch.ops.aten.view.default(aot1_view_3, [aot1_sym_size_int, 48])
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138968
Approved by: https://github.com/yf225, https://github.com/xmfan
2024-10-31 00:13:39 +00:00
cz2h
48854cbfc4 Add missing operator and corresponding unittest (#138309)
Fixes https://github.com/pytorch/pytorch/issues/129690

Add operator.neg and operator.pos to _SYM_BOOL_OPS.

Provide a simple unit test under export/test_serialize.py that reproduces the issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138309
Approved by: https://github.com/ezyang, https://github.com/angelayi
2024-10-30 23:50:24 +00:00
Sherlock Huang
f32b9a5145 Fx graph always return tuple in fuse_as_graphmodule (#139236)
Summary: As title.

Test Plan: Let's see what OSS CI says

Differential Revision: D65147426

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139236
Approved by: https://github.com/ezyang
2024-10-30 23:31:06 +00:00
Bob Ren
a494572799 Update tensorify pass to specialize symfloats we didn't tensorify away (#138868)
As discussed w/ @ezyang offline, one way to de-risk the `specialize_float=False` rollout is to specialize all backed symfloats that we fail to tensorify away. This diff does a few things:

1) It fixes a bug where item_memo gets dropped (due to incorrect epoch invalidation)
2) It updates the tensorify pass to do the backup specialization

This pass was originally part of the [PR](https://github.com/pytorch/pytorch/pull/137782) that flips `specialize_float=False`, but we learned that the blast radius is simply too large. We've pivoted to a more milestone-driven approach where we learn from the failures of the aforementioned PR and cherry-pick fixes into main first. After this current PR lands, our strategy is as follows:

1) Integrate turning off specialize float only in the automatic dynamic pass.
2) Put up a canary diff that only turns off specialize float in `backend=eager` mode to sniff out symfloat related bugs in dynamo due to code paths we previously never exercised.
3) Put up a canary diff that only turns off specialize float in `backend=aot_eager` mode to sniff out symfloat related bugs in aotautograd due to code paths we previously never exercised.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138868
Approved by: https://github.com/ezyang
2024-10-30 23:28:25 +00:00
Bob Ren
bcf8a0124f Add conjugate method on SymFloat (#139249)
Fixes python test/dynamo/test_dynamic_shapes.py DynamicShapesFunctionTests.test_number_method_method_conjugate_num_type4_dynamic_shapes

when we turn off specialize float on eager: https://github.com/pytorch/pytorch/pull/138915

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139249
Approved by: https://github.com/ezyang
2024-10-30 23:28:09 +00:00
Bob Ren
a426837f85 Don't set replacement if lhs is in the free symbols of the rhs (#139250)
Fixes python test/dynamo/test_functions.py FunctionTests.test_is_integer

when we turn off specialize float on eager: https://github.com/pytorch/pytorch/pull/138915

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139250
Approved by: https://github.com/ezyang
2024-10-30 23:21:30 +00:00
Edward Z. Yang
ae6cbd4256 Block more keys from config serialization (#139285)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139285
Approved by: https://github.com/jovianjaison, https://github.com/markkm, https://github.com/c00w
2024-10-30 23:05:59 +00:00
Will Constable
4a8d12227e [Pipelining] add schedule simulator and chrometrace dump (#138134)
The schedule simulator is useful for detecting hangs in schedules and
validating that they will run to completion. It also inserts bubbles
(None actions) at any timestep where a rank cannot enqueue its next
action due to unmet dependencies, which can serve as a rough metric for
schedule efficiency. The output can be visualized. The simulator expects
a full comm + compute schedule as input.

The chrometrace dump is a basic visualization utility. It currently just
renders one 'process' per rank, and lets users visualize the schedule in
a UI instead of as text.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138134
Approved by: https://github.com/H-Huang
2024-10-30 23:00:58 +00:00
Yukio Siraichi
fef5e94657 addmm: error on output dtype mismatch. (#138520)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138520
Approved by: https://github.com/ezyang
ghstack dependencies: #138515
2024-10-30 21:46:39 +00:00
Gabriel Ferns
8840889c3f Allow inplacing buffer when other users are inconsequential (#138383)
Summary:
I think we can inplace a buffer if all of the users of said buffer are "inconsequential", defined as having been removed, being completed, or being part of the ancestors set. In particular, this allows LayerNorm to inplace its input buffer.

Implements:
https://github.com/pytorch/pytorch/issues/132826

Test Plan:
New unit test of matmul followed by LayerNorm, make sure there's an inplaced buffer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138383
Approved by: https://github.com/eellison
2024-10-30 21:35:50 +00:00
Richard Zou
ad0883a288 [real_tensor_prop] Infer Fake kernels during real tensor prop (#139213)
This PR changes real_tensor_prop to also infer fake kernels when the
operator doesn't have one.

We infer the fake output to have the same properties as the real
output, with unbacked symints in the sizes and some stride order.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139213
Approved by: https://github.com/pianpwk
ghstack dependencies: #139212
2024-10-30 21:29:33 +00:00
Zhengxu Chen
03ec25053a [export] Update min_val and max_val to Optional[int] in serialization. (#139223)
Summary: Per the export team's discussion, we are upgrading min_val and max_val to optional fields, which shouldn't break BC and allows the schema to express infinity.

Test Plan: buck test mode/opt caffe2/test:test_export -- -r test_serialize_infinite_sym_int

Differential Revision: D65167805

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139223
Approved by: https://github.com/yiming0416
2024-10-30 21:14:17 +00:00
Richard Zou
ef380f7b8e [real tensor prop] Add some asserts for custom ops (#139212)
When we see a custom op:
- check that its mutation annotations are correct
- check that its aliasing constraints match our constraints for custom
  ops.

Otherwise, there may be undefined behavior.
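
For intuition, a hedged example of the kind of mismatch these checks are meant to catch; the op name is made up and the exact detection point and error text are assumptions, not from the PR:

```python
import torch

# A custom op declared as non-mutating (mutates_args=()) whose real
# implementation mutates its input. Real tensor prop should flag the
# mismatch between the declared annotation and the observed behavior.
@torch.library.custom_op("mylib::bad_inc", mutates_args=())
def bad_inc(x: torch.Tensor) -> torch.Tensor:
    x.add_(1)  # mutation not declared in mutates_args
    return x.clone()
```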

Test Plan:
- new tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139212
Approved by: https://github.com/angelayi
2024-10-30 19:29:11 +00:00
Alex Baden
5c6d35482e [Inductor] Support Triton AttrsDescriptor cls field (#139193)
Fixes #139179

Adding corresponding changes to https://github.com/triton-lang/triton/pull/4888

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139193
Approved by: https://github.com/bertmaher
2024-10-30 18:16:38 +00:00
Pian Pawakapan
180d283156 [export] avoid debug name crash for dim hints (#139104)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139104
Approved by: https://github.com/ezyang
2024-10-30 18:12:44 +00:00
Yifu Wang
7765d1ef70 Preliminary registered-buffer collective support via Inductor (#138029)
```
NOTE [lowering-time collective optimization]

In collective communication libraries such as NCCL, every rank maintains
communication buffers that are remotely accessible by some peers. Depending
on the underlying transport, remote accessibility may be established via
mechanisms such as ib_reg_mr, CUDA P2P, or CUDA multicast. Typically, these
buffers are private to the communication library by default, and
communication ops copy user data in and out of these buffers.

To prevent these copies, an optimization commonly known as "user buffer
registration" can be employed. This allows direct establishment of remote
accessibility on user buffers, eliminating the need for copying. However,
this optimization introduces stringent usage requirements, which are
typically hard to satisfy without being intrusive to the user code:

- Establishing remote accessibility is expensive and often done ahead of
time. In such implementations, all ranks must agree on the set of allocations
used for every collective op. Failing to meet this requirement can
lead to runtime errors or even silent correctness issues.
- Even if the collective communication library supports gracefully falling
back to "unregistered" implementations, the fallback mechanism would nullify
the optimization.
- Some communication mechanisms impose stricter requirements than others. For
example, CUDA's multicast + multi-mem instructions require all ranks to agree
not only on the allocations used for every collective but also on the offsets
within these allocations.

To support all different mechanisms with optimal results, we aim to satisfy
the strictest requirement for this family of optimizations - we ensure that
every collective op invocation is guaranteed to operate on the same
allocation, at the same offset, in every iteration.

For eligible collective ops, we identify communication buffers at lowering
time and optionally choose to lower the op to a different kernel
(communication libraries like NCCL handle both registered and non-registered
buffers transparently within the same op, though some may require different
ops for different cases). Later, the codegen will perform "persistent
allocation" to satisfy the aforementioned constraints, and optionally,
perform buffer planning to optimize overall memory usage.
```

### Changes
- Created `comm_lowering.py` for the lowerings of `_c10d_functional` ops. This is to prevent cluttering `lowering.py` as we add more lowering-time collective optimizations. This PR moved the lowerings for `all_reduce` and `all_reduce_` to the file.
- Added `comm_buffer_type: Dict[str, str]` to `GraphLowering` to track whether a buffer is a comm buffer and the type of the comm buffer.
- Added codegen allocation support for comm buffers of type "symm_mem".
- Added support for auto-lowering `_c10d_functional.all_reduce_` to `symm_mem.one_shot_all_reduce`.
- Added an Inductor config for collective optimizations in general (`config._collective`).

### Limitation
Currently, each persistently allocated comm buffer is dedicated to a single callsite. This is not viable in terms of memory usage. However, this is a necessary intermediate state before we tackle memory planning for comm buffers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138029
Approved by: https://github.com/Chillee
ghstack dependencies: #138028
2024-10-30 18:11:09 +00:00
Yifu Wang
421473c234 get_symm_mem_workspace(): print helpful error during graph capture (#138028)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138028
Approved by: https://github.com/weifengpy
2024-10-30 18:11:09 +00:00
Howard Huang
f4ab8b48c5 Allow schedules to run with single stage (#138925)
Ran into issues (https://github.com/pytorch/pytorch/pull/138863) when adding a Schedule with a single stage, so adding code to support this edge case (mostly for test purposes)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138925
Approved by: https://github.com/wconstab
2024-10-30 17:33:16 +00:00
Antoni Viros
ad637a4c5c Add support for index_put_ in NT (#135722)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135722
Approved by: https://github.com/jbschlosser
2024-10-30 17:17:59 +00:00
angelayi
f14f245747 [export] Remove custom forward func in swap (#139126)
Differential Revision: [D65100694](https://our.internmc.facebook.com/intern/diff/D65100694)

Remove the custom forward function and instead move the pytree flatten/unflatten ops into the graph. This allows us to natively run via the interpreter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139126
Approved by: https://github.com/avikchaudhuri
2024-10-30 16:50:57 +00:00
Roy Hvaara
4b83302585 [MPS] Update error message for supported autocast type (#139192)
Autocast on MPS currently only supports a dtype of `torch.float16`. This PR updates the error message to reflect this.
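
For reference, a small sketch of the supported usage (this assumes an MPS-capable machine); per the commit, any other dtype should trigger the updated error message:

```python
import torch

# Assumes an MPS device is available. Only torch.float16 is a supported
# autocast dtype on MPS; other dtypes should hit the updated error.
x = torch.randn(4, 4, device="mps")
with torch.autocast(device_type="mps", dtype=torch.float16):
    y = x @ x
```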

This PR was created using [Copilot Workspace](https://copilot-workspace.githubnext.com/pytorch/pytorch/issues/139190?shareId=5b510fda-380c-4e86-8e91-6b67a078f180) with no human input other than clicking buttons.

Fixes #139190

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139192
Approved by: https://github.com/malfet
2024-10-30 16:48:29 +00:00
Arjun Vikram
29eb65fce8 Fix in-place state dict updates for distributed checkpoint loading (#138575)
`dcp.load()` is documented as "operating in place", updating the state of existing state_dict elements instead of replacing them wherever possible. However, it appears that in the case of a stateful element, the code first updates its state in place and then replaces it with a copy of itself in the state_dict. This looks like a simple oversight, so here's a PR that should fix it!

[From the docs:](https://pytorch.org/docs/stable/distributed.checkpoint.html)
> DCP is different than torch.save and torch.load in a few significant ways: *...*
> - It operates in place, meaning that the model should allocate its data first and DCP uses that storage instead.

This manifested as a strange bug in TorchTitan, causing a model loaded from a checkpoint to be saved incorrectly, resulting in a twice-resumed model being subtly broken.
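
A minimal sketch of the intended in-place contract, assuming a checkpoint previously produced by `dcp.save`; the `Counter` class and checkpoint path below are illustrative, not from the PR:

```python
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.stateful import Stateful

class Counter(Stateful):
    def __init__(self):
        self.value = 0

    def state_dict(self):
        return {"value": self.value}

    def load_state_dict(self, state_dict):
        self.value = state_dict["value"]

state = {"counter": Counter()}
counter_before = state["counter"]
dcp.load(state, checkpoint_id="checkpoint_dir")  # illustrative path
# In-place semantics: the original object is updated, not replaced,
# so this identity check should hold after the fix.
assert state["counter"] is counter_before
```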

Let me know if this makes sense, and if there's anything else I should add!

Thanks for all the work on PyTorch!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138575
Approved by: https://github.com/kwen2501, https://github.com/fegin
2024-10-30 16:10:24 +00:00
Bin Bao
04eb15da44 [AOTI] Unify the default value of allow_stack_allocation (#139147)
Summary: Unify the default value of allow_stack_allocation for fbcode and OSS

Differential Revision: D65064673

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139147
Approved by: https://github.com/hl475
2024-10-30 16:01:23 +00:00
PyTorch MergeBot
49bfbed2eb Revert "Add deterministic path for CUDA cumsum (#136224)"
This reverts commit 383eba5229.

Reverted https://github.com/pytorch/pytorch/pull/136224 on behalf of https://github.com/ezyang due to larger memory usage apparently not acceptable ([comment](https://github.com/pytorch/pytorch/pull/136224#issuecomment-2447382819))
2024-10-30 14:43:15 +00:00
cyyever
456c87c8a2 [8/N] Fix extra warnings brought by clang-tidy-17 (#139151)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139151
Approved by: https://github.com/ezyang

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2024-10-30 14:20:08 +00:00
Tom Ritchford
44257c063e [dynamo] Fix constant propagation in builtins and UserClasses (#131354)
* Fixes https://github.com/pytorch/pytorch/issues/118675
* Replaces https://github.com/pytorch/pytorch/pull/118994

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131354
Approved by: https://github.com/jansel, https://github.com/anijain2305
2024-10-30 12:47:20 +00:00
Xuehai Pan
9bbe4a67ad [dynamo] support maxlen for collections.deque (#138194)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138194
Approved by: https://github.com/jansel, https://github.com/malfet
2024-10-30 10:08:02 +00:00
Edward Z. Yang
a4b35767cb Don't have random print in convert_frame (#139203)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139203
Approved by: https://github.com/Skylion007
2024-10-30 09:35:37 +00:00
YangQun1
a19bdfb36e [compiled autograd] reorder backward hooks to match eager behavior (#138553)
Fixes #138538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138553
Approved by: https://github.com/xmfan
2024-10-30 08:46:45 +00:00
wz337
b71ab3fc85 [DTensor][Bug Fix]Fix 2D DTensor mm with mesh_shape (1, n) or (n, 1) (#139134)
Fixes #138742. In the issue, matrix multiplication with DTensor failed when the size of one of the mesh dimensions is 1 and the mesh is > 1D. We were missing tests covering this corner case where mesh_shape is (n, 1) or (1, n). The DTensor mm op is correct when the 1D mesh has shape (self.world_size,) or when the 2D mesh has no mesh dimension of size 1.

In this PR, we fix the corner case by updating `gen_einsum_strategies` in `_einsum_strategy.py`. Specifically, we cannot skip generating `mesh_dim_strategies` when `mesh_dim <= 1`, as that is not valid for an nD mesh with one of the mesh dimension sizes being 1.

Without the fix, the OpStrategy generated for a 2D mesh with mesh_shape of (1, n) or (n, 1) is wrong, as the generated OpStrategy is 1D.

```
all_mesh_dim_strategies=[[[Replicate(), Replicate(), Replicate()], [Partial(sum), Shard(dim=1), Shard(dim=0)], [Shard(dim=0), Shard(dim=0), Replicate()], [Shard(dim=1), Replicate(), Shard(dim=1)]]]
OpStrategy(all_strategies):::   [(R, R) -> R, (S(1), S(0)) -> P, (S(0), R) -> S(0), (R, S(1)) -> S(1)] @ mesh: (4, 1)[(R, R) -> R, (S(1), S(0)) -> P, (S(0), R) -> S(0), (R, S(1)) -> S(1)] @ mesh: (4, 1)
```

After the fix, we can see the OpStrategy generated is correct with 2D strategy.
```
all_mesh_dim_strategies=[[[Replicate(), Replicate(), Replicate()], [Partial(sum), Shard(dim=1), Shard(dim=0)], [Shard(dim=0), Shard(dim=0), Replicate()], [Shard(dim=1), Replicate(), Shard(dim=1)]]][[[Replicate(), Replicate(), Replicate()], [Partial(sum), Shard(dim=1), Shard(dim=0)], [Shard(dim=0), Shard(dim=0), Replicate()], [Shard(dim=1), Replicate(), Shard(dim=1)]]]
OpStrategy(all_strategies) = [(RR, RR) -> RR, (RS(1), RS(0)) -> RP, (RS(0), RR) -> RS(0), (RR, RS(1)) -> RS(1), (S(1)R, S(0)R) -> PR, (S(1)S(1), S(0)S(0)) -> PP, (S(1)S(0), S(0)R) -> PS(0), (S(1)R, S(0)S(1)) -> PS(1), (S(0)R, RR) -> S(0)R, (S(0)S(1), RS(0)) -> S(0)P, (S(0)S(0), RR) -> S(0)S(0), (S(0)R, RS(1)) -> S(0)S(1), (RR, S(1)R) -> S(1)R, (RS(1), S(1)S(0)) -> S(1)P, (RS(0), S(1)R) -> S(1)S(0), (RR, S(1)S(1)) -> S(1)S(1)] @ mesh: (4, 1)
```

*******
As a follow-up, we should add more test coverage for DTensor ops with a 2D mesh, including 2D meshes where one of the mesh dimension sizes is 1.
*******

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139134
Approved by: https://github.com/fegin
2024-10-30 08:09:39 +00:00
leslie-fang-intel
cd8f7730f4 [PT2E][Quant] Remove Redundant Method in X86 Quantizer (#139161)
**Summary**
Remove the redundant methods of the X86 Inductor Quantizer: `get_supported_quantization_configs`, `get_supported_operator_for_quantization_config` and `get_supported_operators`. They are not required to implement a customized Quantizer and are not mentioned in the existing documentation on how to use the X86 Inductor Quantizer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139161
Approved by: https://github.com/jgong5
2024-10-30 03:31:17 +00:00
PyTorch MergeBot
5861279f47 Revert "Add support for index_put_ in NT (#135722)"
This reverts commit b4836e5b5c.

Reverted https://github.com/pytorch/pytorch/pull/135722 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing on ROCm ([comment](https://github.com/pytorch/pytorch/pull/135722#issuecomment-2445651914))
2024-10-30 01:53:55 +00:00
Angela Yi
d9e87fb339 [draft-export] Include guards for constraint violation errors (#138748)
Summary:
Added logs showing where the guards that cause constraint violations are added during draft export.

Example output:
```
1. Constraint violation error.
    The specified input dynamic_shapes spec was found to be incorrect during tracing.
    Specifically, this guard was added: Eq(s0, 3), where {'s0': "L['args'][0][0].size()[0]"}.
    This occured at the following stacktrace:
        File /data/users/angelayi/fbsource/buck-out/v2/gen/fbcode/1beb9df83fd74b9a/scripts/angelayi/draft_export/__test_draft_export__/test_draft_export#link-tree/torch/nn/modules/module.py, lineno 1736, in _wrapped_call_impl
        File /data/users/angelayi/fbsource/buck-out/v2/gen/fbcode/1beb9df83fd74b9a/scripts/angelayi/draft_export/__test_draft_export__/test_draft_export#link-tree/torch/nn/modules/module.py, lineno 1747, in _call_impl
        File /data/users/angelayi/fbsource/buck-out/v2/gen/fbcode/1beb9df83fd74b9a/scripts/angelayi/draft_export/__test_draft_export__/test_draft_export#link-tree/scripts/angelayi/draft_export/test_draft_export.py, lineno 138, in forward.
    Because of this, we have modified the dynamic shapes structure to be the following:
    ```
    dynamic_shapes = {'a': {0: 3}}
    ```
```

The result of this diff is also that `dynamic` logs are permanently turned on during draft export. Otherwise we cannot capture the `[guard added]` logs from symbolic_shapes.py.

Test Plan: `buck2 run @//mode/dev-nosan scripts/angelayi/draft_export:test_draft_export -- -r "test_shape_failure" `

Differential Revision: D64862374

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138748
Approved by: https://github.com/ezyang
2024-10-30 00:24:17 +00:00
Antoni Viros
b4836e5b5c Add support for index_put_ in NT (#135722)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135722
Approved by: https://github.com/jbschlosser
2024-10-30 00:03:21 +00:00
Nikita Shulga
bd369bb182 Workaround torch.deploy failures (#139195)
Summary:
torch.deploy runtimes are backed by an older version of `typing_extensions`, but this runtime could not care less about type-checking.
So pretend that it has `TypeIs` by replacing it with `TypeGuard`.
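
A minimal sketch of the kind of shim described above; the exact patch site isn't shown in the commit message:

```python
# Fall back to TypeGuard when the available typing_extensions predates
# TypeIs; this is good enough for a runtime that never type-checks.
try:
    from typing_extensions import TypeIs
except ImportError:
    from typing_extensions import TypeGuard as TypeIs
```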

Fixes test failures introduced by https://github.com/pytorch/pytorch/pull/133814 / D65030974

Test Plan: `buck2 test 'fbcode//mode/opt' fbcode//multipy/runtime:test_deploy -- --exact 'multipy/runtime:test_deploy - TorchpyTest.TestNumpy'`

Differential Revision: D65145409

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139195
Approved by: https://github.com/Skylion007
2024-10-29 23:36:16 +00:00
Colin L. Rice
a0e095dd9f config: Modify install_config_module to use a layered approach (#138758)
This modifies the config system to use a single mapping of config ->
ConfigEntry and to store the default and user values within it.

We could have used multiple dicts (i.e. user_override and default), but
as we add more fields (justknobs in this PR, perhaps testing and env
variables later), it quickly becomes painful.

There are a couple of design decisions we could change.
1) All configs we save store the resolved value - not the default and
   user override separately.
2) All configs we load apply the resolved value as a user override.

This means that certain complexities of default behaviour and deletion
(as well as JK) will change if you save + load a config.
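
A rough sketch of the layered entry described above; the field and method names here are assumptions for illustration, not the actual implementation:

```python
from dataclasses import dataclass
from typing import Any, Optional

_UNSET = object()  # sentinel: no user override present

@dataclass
class ConfigEntry:
    default: Any
    user_override: Any = _UNSET
    justknob: Optional[str] = None  # optional JK backing, per the PR

    def resolve(self) -> Any:
        # User overrides win; otherwise fall back to the default. A real
        # implementation would also consult justknobs / env vars here.
        return self.default if self.user_override is _UNSET else self.user_override
```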

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138758
Approved by: https://github.com/ezyang
2024-10-29 23:19:36 +00:00
Joel Schlosser
23d590e518 More flexible test parametrization with @reparametrize (#138369)
**Background:** The `@parametrize` decorator enjoys widespread usage as a convenient tool for ensuring extensive test coverage. One particular feature that makes this easy is the ability to stack such decorators, testing over the cross-product of inputs. Example:
```python
class MyTestClass(TestCase):
    @parametrize("x", range(3))
    @parametrize("y", [False, True])
    def test_foo(self, x, y):
        # Invoked with:
        # x=0, y=False
        # x=1, y=False
        # x=2, y=False
        # x=0, y=True
        # x=1, y=True
        # x=2, y=True
        ...
```

Note that the `@ops` and `@modules` decorators employ the same underlying machinery for parametrizing over `OpInfo` / `ModuleInfo` entries. These decorators also parametrize over op-specific `device` / `dtype` info *according to what is supported for each op*.
```python
class MyTestClass(TestCase):
    @ops(op_db)
    def test_foo(self, op, device, dtype):
        # Invoked each OpInfo in the db along with each device / dtype that corresponds
        # with this op according to the OpInfo entry.
        ...
```

Note that this is in contrast to the naive cross product between ops and devices / dtypes, which would generate too many tests. Certain use cases benefit from a similar type of flexible parametrization that is more intelligent than simple cross-product composition. It is expensive to generate / run too many tests, even if the unneeded ones are skipped appropriately.

This PR attempts to generalize such flexible parametrization and satisfy these use cases through the introduction of a `@reparametrize` decorator, which operates on an existing parametrizer and allows for customized on-the-fly parametrization through the use of an `adapter_fn`. Examples:
```python
# adapter_fn that adds a new arg
def include_is_even_arg(test_name, param_kwargs):
    x = param_kwargs["x"]
    is_even = x % 2 == 0
    new_param_kwargs = dict(param_kwargs)
    new_param_kwargs["is_even"] = is_even
    is_even_suffix = "_even" if is_even else "_odd"
    new_test_name = f"{test_name}{is_even_suffix}"
    yield (new_test_name, new_param_kwargs)

# adapter_fn that excludes certain values
def exclude_odds(test_name, param_kwargs):
    x = param_kwargs["x"]
    is_even = x % 2 == 0
    yield None if not is_even else (test_name, param_kwargs)

class MyTestClass(TestCase):
    @reparametrize(parametrize("x", range(5)), include_is_even_arg)
    def test_foo(self, x, is_even):
        # Invoked with both the x value and the new is_even arg
        ...

    @reparametrize(parametrize("x", range(5)), exclude_odds)
    def test_bar(self, x):
        # Only invoked with even x values
        ...
```

For a more real-world use case, imagine you want to write a set of OpInfo tests that parametrize over additional op-specific things beyond `device` / `dtype` (in NJT's case, this includes contiguity type, whether to operate over the batch / ragged / other dims, etc.). The `@reparametrize` decorator allows you to customize the `@ops` parametrization to add in these additional args as they make sense on a per-op basis.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138369
Approved by: https://github.com/janeyx99
2024-10-29 22:14:38 +00:00
drisspg
80c7c7178e Make sure all SDPA tests are ran with tensor cores enabled (#135592)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135592
Approved by: https://github.com/eqy
2024-10-29 20:53:10 +00:00
Jake Schmidt
2b577ae58f Implement NJT embedding backward (#138627)
Fixes #138352

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138627
Approved by: https://github.com/jbschlosser
2024-10-29 18:44:58 +00:00
drisspg
a884462bca Add workspace to TritonTemplates (#138050)
Here's a markdown summary for the PR:

# Add workspace buffer support for Triton templates

## Summary
Adds support for templates to allocate and use temporary workspace buffers

## Key Changes
- Add `WorkspaceArg` support in Triton template system
- Automatic workspace allocation/deallocation around kernel execution
- Zero-initialization support for workspace buffers
- Seamless integration with existing tensor management

## Example Usage
```python
def generate(self, ...):
    workspace_arg = WorkspaceArg(
        count=1024*1024,  # 1MB workspace
        zero_fill=True    # Zero-initialized
    )

    return TritonTemplateCaller(..., workspace_arg=workspace_arg)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138050
Approved by: https://github.com/Chillee, https://github.com/eellison
2024-10-29 18:17:54 +00:00
Xilun Wu
7964bcc3dc [DeviceMesh] fix sub mesh size calculation in create_sub_mesh() (#138945)
**Summary**
This PR fixes a size-calculation error in DeviceMesh's create_sub_mesh().

**Error Description**
When users call `device_mesh["dim0", "dim1", "dim2", "dim3"]`, it creates a slice of the mesh, which we call a "submesh". Users can also slice a submesh from a flattened mesh. For example:
```
flattened_mesh = device_mesh["dim0", "dim1", "dim2"]._flatten("dim0-2")
alias_flattened_mesh = device_mesh["dim0-2"]  # this mesh slice leads to error in current impl
```

It triggers the error in the size calculation `reduce(lambda, mesh_dim)` happening in `create_sub_mesh`:
```
IndexError: Dimension out of range (expected to be in range of [-4, 3], but got 4)
```

**Fix**
The usage of the lambda was wrong: for `lambda x, y`, `x` is the accumulated value while `y` is the current element of the iterable.
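
For reference, a small sketch of the `functools.reduce` argument order in question; the mesh sizes below are made up:

```python
from functools import reduce

mesh_dim_sizes = [2, 4, 8]  # made-up sizes of the dims being flattened

# In `lambda x, y: ...`, x is the accumulated value and y is the next
# element of the iterable, so the product must be computed as x * y.
size = reduce(lambda x, y: x * y, mesh_dim_sizes, 1)
assert size == 64
```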

**Test**
`pytest test/distributed/test_device_mesh.py -s -k test_flatten_mesh_4d`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138945
Approved by: https://github.com/wz337
2024-10-29 17:56:56 +00:00
Abatom
c98c88a211 [Bugfix] UnicodeDecodeError: 'utf-8' codec can't decode byte (#139062)
Fixes #113564

When I used PyTorch's profiler to analyze the performance of vLLM, I encountered the following error, which is similar to #113564. After analysis and troubleshooting, I changed the temporary file from text mode to binary mode, and the error no longer occurs (a sketch of the idea follows the traceback below).

```bash
ERROR 10-28 10:25:50 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/torch/profiler/profiler.py", line 722, in stop
ERROR 10-28 10:25:50 engine.py:160]     self._transit_action(self.current_action, None)
ERROR 10-28 10:25:50 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/torch/profiler/profiler.py", line 751, in _transit_action
ERROR 10-28 10:25:50 engine.py:160]     action()
ERROR 10-28 10:25:50 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/torch/profiler/profiler.py", line 745, in _trace_ready
ERROR 10-28 10:25:50 engine.py:160]     self.on_trace_ready(self)
ERROR 10-28 10:25:50 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/torch/profiler/profiler.py", line 444, in handler_fn
ERROR 10-28 10:25:50 engine.py:160]     prof.export_chrome_trace(os.path.join(dir_name, file_name))
ERROR 10-28 10:25:50 engine.py:160]   File "/usr/local/lib/python3.12/dist-packages/torch/profiler/profiler.py", line 220, in export_chrome_trace
ERROR 10-28 10:25:50 engine.py:160]     fout.writelines(fin)
ERROR 10-28 10:25:50 engine.py:160]   File "<frozen codecs>", line 322, in decode
ERROR 10-28 10:25:50 engine.py:160] UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8e in position 5896: invalid start byte
```
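
A minimal sketch of the idea behind the fix, not the actual profiler code: copy the trace through a binary-mode temporary file so non-UTF-8 bytes never pass through a text decoder.

```python
import shutil
import tempfile

def copy_trace_binary(src_path: str, dst_path: str) -> None:
    # Read and write raw bytes; no utf-8 decoding can fail here.
    with open(src_path, "rb") as fin, \
            tempfile.NamedTemporaryFile("wb", delete=False) as tmp:
        shutil.copyfileobj(fin, tmp)
        tmp_name = tmp.name
    shutil.move(tmp_name, dst_path)
```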
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139062
Approved by: https://github.com/ezyang
2024-10-29 17:16:26 +00:00
Boyuan Feng
68134a320e [Flex Attention] Paged Attention (#137164)
This PR adds paged attention for flex attention.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137164
Approved by: https://github.com/drisspg
2024-10-29 17:05:22 +00:00
cyy
3907f36808 Turn some variables and functions into static (#136847)
Re-check some files and mark variables and functions into static and fix other warnings.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136847
Approved by: https://github.com/ezyang
2024-10-29 17:01:56 +00:00
Henry Tsang
3f9f6048da [aoti] Print output name for sympy.Expr as well (#138524)
To avoid
```
NotImplementedError: unsupported type of output=s0*s1
```

It seems like this was caused by the use of `_scaled_dot_product_flash_attention`.

Fallback kernel:
```
FallbackKernel(
  python_kernel_name='torch.ops.aten._scaled_dot_product_flash_attention.default',
  name=buf55,
  layout=MultiOutputLayout(device=device(type='cuda', index=0)),
  inputs=[ComputedBuffer(name='buf52', layout=FixedLayout('cuda', torch.bfloat16, size=[1, 6, s0*s1, 64], stride=[384*s0*s1, 64*s0*s1, 64, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.bfloat16, inner_fn=<function BaseView.make_loader.<locals>.loader at 0x7fcd7f99da20>, ranges=[1, 6, s0*s1, 64])), ComputedBuffer(name='buf53', layout=FixedLayout('cuda', torch.bfloat16, size=[1, 6, s0*s1, 64], stride=[384*s0*s1, 64*s0*s1, 64, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.bfloat16, inner_fn=<function BaseView.make_loader.<locals>.loader at 0x7fcd7f99d480>, ranges=[1, 6, s0*s1, 64])), ComputedBuffer(name='buf54', layout=FixedLayout('cuda', torch.bfloat16, size=[1, 6, s0*s1, 64], stride=[384*s0*s1, 64*s0*s1, 64, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.bfloat16, inner_fn=<function BaseView.make_loader.<locals>.loader at 0x7fcd7f99c430>, ranges=[1, 6, s0*s1, 64]))],
  constant_args=(0.125,),
  kwargs={'scale': 0.125},
  output_view=None,
  python_kernel_name=torch.ops.aten._scaled_dot_product_flash_attention.default,
  cpp_kernel_name=at::_ops::_scaled_dot_product_flash_attention::call,
  ordered_kwargs_for_cpp_kernel=['scale'],
  op_overload=aten._scaled_dot_product_flash_attention.default,
  arg_properties=[{'name': 'query', 'type': Tensor, 'default_value': None}, {'name': 'key', 'type': Tensor, 'default_value': None}, {'name': 'value', 'type': Tensor, 'default_value': None}, {'name': 'dropout_p', 'type': float, 'default_value': 0.0}, {'name': 'is_causal', 'type': bool, 'default_value': False}, {'name': 'return_debug_mask', 'type': bool, 'default_value': False}],
  kwarg_properties=None,
  unbacked_bindings=None,
  mutation_outputs=[],
  origin_node=None,
  origins=OrderedSet([_scaled_dot_product_flash_attention])
)
```

codegen with this pr
```
// Topologically Sorted Source Nodes: [scaled_dot_product_attention], Original ATen: [aten._scaled_dot_product_flash_attention]
    double var_147 = 0.125;
    AtenTensorHandle buf56_handle;
    AtenTensorHandle buf57_handle;
    auto buf55_4 = s0*s1;
    auto buf55_5 = s0*s1;
    AtenTensorHandle buf58_handle;
    AtenTensorHandle buf59_handle;
    AtenTensorHandle buf60_handle;
    AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cuda__scaled_dot_product_flash_attention(convert_arrayref_tensor_to_tensor(buf52), convert_arrayref_tensor_to_tensor(buf53), convert_arrayref_tensor_to_tensor(buf54), 0.0, 0, 0, &var_147, &buf56_handle, &buf57_handle, nullptr, nullptr, &buf55_4, &buf55_5, &buf58_handle, &buf59_handle, &buf60_handle));
    RAIIAtenTensorHandle buf56(buf56_handle);
    RAIIAtenTensorHandle buf57(buf57_handle);
    RAIIAtenTensorHandle buf58(buf58_handle);
    RAIIAtenTensorHandle buf59(buf59_handle);
    RAIIAtenTensorHandle buf60(buf60_handle);
```

Differential Revision: D64724460

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138524
Approved by: https://github.com/chenyang78
2024-10-29 16:02:45 +00:00
Jason Ansel
a762dc0357 [inductor] Multi-kernel + cooperative reductions (#138893)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138893
Approved by: https://github.com/shunting314
ghstack dependencies: #138533
2024-10-29 15:45:17 +00:00
Jason Ansel
77b0ae832d [inductor] Allow cooperative + persistent reductions (#138533)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138533
Approved by: https://github.com/shunting314, https://github.com/eellison
2024-10-29 15:45:17 +00:00
Vishwa Raj Singh
9d7a0869f0 Make DDP Quantization hooks backend Agnostic (#138816)
The current DDP quantization hooks code uses the .cuda() API to move tensors and parameters onto backend devices, which limits DDP quantization hooks to the CUDA backend.
The change makes the code backend-agnostic by moving tensors/parameters based on **tensor.device**.
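
A minimal sketch of the pattern, assuming a reference tensor is available; the concrete hook code differs in detail:

```python
import torch

def move_like(t: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # Before: t.cuda() -- only works on the CUDA backend.
    # After: follow the reference tensor's device, whatever backend it is on.
    return t.to(ref.device)
```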

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138816
Approved by: https://github.com/kwen2501
2024-10-29 15:02:45 +00:00
Wu, Chunyuan
489c66fdb3 [AOTI] fix pointer_to_list (#138806)
Fixes the `pointer_to_list` function to take `*(ptr + i)` instead of `*ptr`.
This fixes the runtime error when running INT8 yolo-v7.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138806
Approved by: https://github.com/jgong5, https://github.com/desertfire
ghstack dependencies: #138691
2024-10-29 14:33:16 +00:00
Wu, Chunyuan
9af1816974 [AOTI] add C shim for _weight_int8pack_mm (#138691)
Fixes the error of running WOQ-INT8 LLaMA:
```
E           In file included from /home/user/inductor/pytorch/torch/include/torch/csrc/inductor/aoti_runtime/arrayref_tensor.h:3,
E                            from /tmp/torchinductor_user/sw/csw5gfmlzp5iooqvfwl2gwn574frwdpmtrx2y6nu2m6x76d3xcux.cpp:4:
E           /tmp/torchinductor_user/sw/csw5gfmlzp5iooqvfwl2gwn574frwdpmtrx2y6nu2m6x76d3xcux.cpp: In function ‘void inductor_entry_impl(AtenTensorOpaque**, AtenTensorOpaque**)’:
E           /tmp/torchinductor_user/sw/csw5gfmlzp5iooqvfwl2gwn574frwdpmtrx2y6nu2m6x76d3xcux.cpp:117:33: error: ‘aoti_torch_cpu__weight_int8pack_mm’ was not declared in this scope
E             117 |     AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_cpu__weight_int8pack_mm(convert_arrayref_tensor_to_tensor(arg8_1), _frozen_param0, _frozen_param1, &buf0_handle));
E                 |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138691
Approved by: https://github.com/leslie-fang-intel, https://github.com/jgong5, https://github.com/desertfire
2024-10-29 13:53:36 +00:00
Yuanhao Ji
b9618c9b88 [Dynamo] Add itertools.compress() support (#139061)
Use a polyfill to add `itertools.compress()` support in Dynamo.
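
For context, a pure-Python equivalent of `itertools.compress()`; the actual polyfill in the PR may be written differently:

```python
def compress(data, selectors):
    # Yield elements of data whose corresponding selector is truthy.
    return (d for d, s in zip(data, selectors) if s)

assert list(compress("ABCDEF", [1, 0, 1, 0, 1, 1])) == ["A", "C", "E", "F"]
```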

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139061
Approved by: https://github.com/jansel
2024-10-29 10:25:55 +00:00
Sam Ginzburg
93d7f90c3a [inductor] getting AOT inductor to treat None args correctly (#139114)
Differential Revision: [D65102228](https://our.internmc.facebook.com/intern/diff/D65102228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139114
Approved by: https://github.com/aakhundov
2024-10-29 08:11:53 +00:00
PyTorch MergeBot
38645e8a3e Revert "Fix unbind_copy and add its decomposition (#134319)"
This reverts commit 8aedc649bd.

Reverted https://github.com/pytorch/pytorch/pull/134319 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but this is still failing the same test on ExecuTorch ([comment](https://github.com/pytorch/pytorch/pull/134319#issuecomment-2443209139))
2024-10-29 04:54:37 +00:00
Yuanhao Ji
e52ccb3ca6 [Device] Replace hardcoded devices with 'torch._C._get_accelerator()' (#139032)
I noticed some hard-coded device selection like `"cuda" if torch.cuda.is_available() else "cpu"`, which can be replaced with `torch._C._get_accelerator()`.
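
An illustrative before/after of the replacement described in the commit:

```python
import torch

# Before: hard-coded device selection
device = "cuda" if torch.cuda.is_available() else "cpu"

# After: backend-agnostic accelerator query, as named in the commit message
device = torch._C._get_accelerator()
```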

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139032
Approved by: https://github.com/ezyang
2024-10-29 04:51:47 +00:00
cyy
0274d16c01 Fix clang-tidy warnings in jit code (#138974)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138974
Approved by: https://github.com/ezyang
2024-10-29 04:33:40 +00:00
Yiming Zhou
48b55ca1b1 [export] Fix non-strict retracing with kwargs (#138927)
Summary:
`torch.fx.Interpreter.run()` only takes args as input. Currently we pass kwargs as well, which causes errors during retracing.

Flattening the kwargs and concatenating them with args solves the issue.
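
A rough sketch of the idea; the actual retracing path uses pytree flattening rather than this simplified concatenation:

```python
import torch.fx

def run_with_kwargs(gm: torch.fx.GraphModule, args, kwargs):
    # Interpreter.run() accepts positional inputs only, so append the kwarg
    # values after the positional args (simplified; the order must match the
    # graph's placeholders).
    flat_inputs = list(args) + list(kwargs.values())
    return torch.fx.Interpreter(gm).run(*flat_inputs)
```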

Several previously failing tests under `_retraceability_non_strict` now pass.

Test Plan:
```
buck2 test @//mode/dev-nosan //caffe2/test:test_export -- -r _retraceability_non_strict
```

Differential Revision: D64980053

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138927
Approved by: https://github.com/angelayi
2024-10-29 04:31:21 +00:00
PyTorch MergeBot
6aef58a249 Revert "Dont decompose aten.baddmm in inductor (#137904)"
This reverts commit c066f4a055.

Reverted https://github.com/pytorch/pytorch/pull/137904 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I think the test is failing in trunk, maybe a landrace? ([comment](https://github.com/pytorch/pytorch/pull/137904#issuecomment-2443158194))
2024-10-29 04:08:11 +00:00
Will Feng
4ee514144b [c10d][Partial-Graph Overlap] Support calling .wait_tensor() on output tensor of eager async_op=True collective if under allow_inflight_collective_as_graph_input_ctx() context manager (#137763)
This PR aims to support the following use case:
```python
def all_reduce_eager(x):
    y = x * x
    req = dist.all_reduce(y, op=dist.ReduceOp.SUM, async_op=True)
    assert isinstance(req, torch.distributed.Work)
    return y

@torch.compile(fullgraph=True)
def all_reduce_wait_compiled(y):
    torch.ops.c10d_functional.wait_tensor(y)
    return y * y

x = torch.ones(1280, 1280, device="cuda") + self.rank
with allow_inflight_collective_as_graph_input_ctx():
    y = all_reduce_eager(x)
    z = all_reduce_wait_compiled(y)
```
where the collective is issued in eager mode (with `async_op=True`) but waited on in a compiled region.

This is important for internal use cases such as TorchRec, where we issue collectives in eager mode for the SparseArch all_to_all but want to wait for them in a compiled region at the beginning of OverArch, so that the all_to_all can be overlapped with the DenseArch compute that runs in parallel.

----

**Update**: Did two items to prevent regression to existing use cases:

1. Added memory-stressed test case to test_c10d_nccl.py `test_unwaited` to cover existing user's "not calling work.wait() for non-functional collective" use case
2. Gated all new `register_work()` / `unregister_work()` calls with `c10d::allow_inflight_collective_as_graph_input()` check, which is a new context manager that requires explicit user enablement (i.e. not on by default, so should not affect existing users).

The risk of this new version of PR causing regression should be very low.

------

Test commands:
- `pytest -rA test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_eager_async_allreduce_inductor_wait`
- `pytest -rA test/test_fx.py::TestDCE::test_keep_collectives`
- `pytest -rA test/test_fx.py::TestDCE::test_keep_collectives_no_overload`
- `pytest -rA test/distributed/test_c10d_functional_native.py::TestWithNCCL::test_wait_tensor`
- `pytest -rA test/distributed/test_c10d_functional_native.py::TestWithNCCL::test_unwaited`
- `pytest -rA test/distributed/test_c10d_nccl.py::CommTest::test_wait_tensor`
- `pytest -rA test/distributed/test_c10d_nccl.py::CommTest::test_unwaited`
- `pytest -rA test/distributed/_tensor/test_tensor_ops.py::DistTensorOpsTest::test_equal`
- `pytest -rA test/distributed/_tensor/test_random_ops.py::DistTensorRandomOpTest::test_manual_seed`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_ddp_baseline_aot_eager_multiprocess`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_aot_eager`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_setattr`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_unspecialized_forced_getattr_inline`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_unspecialized_forced_getattr_no_inline`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_asymmetric_compilation`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_automatic_dynamic_scalar`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_automatic_dynamic_speculation_divergence`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_automatic_dynamic_tensor`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_dim_mismatch`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_graph_break_empty_graph_still_collective`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_missing_source`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_scalar_missing_source`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_compiler_collectives_type_mismatch`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_ddp_activation_checkpointing`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_ddp_baseline_aot_eager_multiprocess`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_activation_checkpointing`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_aot_eager`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_inductor`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_setattr`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_unspecialized_forced_getattr_inline`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_fsdp_unspecialized_forced_getattr_no_inline`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_hf_bert_ddp_aot_eager`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_hf_bert_ddp_aot_eager_static_graph`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_hf_bert_ddp_inductor`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_hf_bert_ddp_inductor_static_graph`
- `pytest -rA test/distributed/test_dynamo_distributed.py::TestMultiProc::test_hf_bert_fsdp_activation_checkpointing`
- `pytest -rA test/distributed/_tensor/test_experimental_ops.py::DistOtherOpsTest::test_bernoulli`
- `pytest -rA test/distributed/_tensor/test_dtensor_compile.py::TestDTensorCompileE2E::test_tp_compile_fullgraph_is_seq_parallel_True`
- `pytest -rA test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_allreduce_inductor_cudagraph_trees`
- `python benchmarks/dynamo/torchbench.py --ci --accuracy --timing --explain --inductor --device cuda --inference --bfloat16 --total-partitions 2 --partition-id 1 --output inference_torchbench.csv --only moco`

------

Differential Revision: [D65023311](https://our.internmc.facebook.com/intern/diff/D65023311)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137763
Approved by: https://github.com/yifuwang
2024-10-29 03:31:19 +00:00
cyy
d8f99f39cb Avoid unnecessary tensor constructions (#139039)
Because Variable is an alias of Tensor

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139039
Approved by: https://github.com/Skylion007
2024-10-29 02:23:23 +00:00
Animesh Jain
e80fe7f13a [dynamo][guards] Skip guards on empty nn module hooks (#138942)
This introduces some unsoundness in guards. Earlier we were skipping the empty nn module hooks dict guard only on inbuilt nn modules, but as seen in https://github.com/pytorch/pytorch/issues/138386, there can still be significant guard overhead. With this PR, we reduce the guard eval latency from 420 us to 280 us (a 1.5x reduction).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138942
Approved by: https://github.com/ezyang, https://github.com/jansel
ghstack dependencies: #139040, #138954
2024-10-29 02:11:47 +00:00
Animesh Jain
2aa5348356 [dynamo][guards] Skip no tensor aliasing guards on parameters (#138954)
This is another unsound guard eval optimization. Its rare in practice to
compile a function with two different parameters as inputs, and then
later call the function with one parameter input as two different inputs
(aliasing). This further reduces guard overhead from 280 us to 240 us
for the model in https://github.com/pytorch/pytorch/issues/138386

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138954
Approved by: https://github.com/jansel
ghstack dependencies: #139040
2024-10-29 02:11:47 +00:00
Animesh Jain
dee7e715ba [dynamo][refactor] Remaining cleanup from config-cleanup of enable_cpp_guard_manager (#139040)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139040
Approved by: https://github.com/williamwen42, https://github.com/jansel
2024-10-29 02:11:39 +00:00
Jeff Daily
7c7b2d89ba [ROCm] set hipblas workspace (#138791)
Fixes #138532.

This brings hipblas behavior in line with cublas behavior with respect to setting the workspace to an allocation from the caching allocator as well as the env var HIPBLAS_WORKSPACE_CONFIG.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138791
Approved by: https://github.com/naromero77amd, https://github.com/eqy, https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-10-29 01:37:55 +00:00
Syed Tousif Ahmed
1637a40796 Adds snapshot API for MemPools to get pool memory segments (#133601)
Canonically, the snapshot API returns the entire memory state of the CUDACachingAllocator (using `get_all_blocks`). There is no API that can only return the memory state of a given pool.

In this PR, we extend the functionality of the snapshot API such that it can return only the memory addresses of an active pool. When the snapshot API is called under a MemPoolContext, we only return the blocks that correspond to the pool id of the active pool.

Part of https://github.com/pytorch/pytorch/issues/124807.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133601
Approved by: https://github.com/ezyang
2024-10-29 01:01:47 +00:00
eellison
c066f4a055 Dont decompose aten.baddmm in inductor (#137904)
Previously the decomposition would upcast inputs to fp32. This led to a slowdown compared to eager, which would run in fp16. We also tried keeping the bmm in fp16 and upcasting for the epilogue, but that led to worse numerics because the bmm in eager would do the epilogue all in fp32 without a downcast in the bmm accumulator.

Fix for https://github.com/pytorch/pytorch/issues/137897

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137904
Approved by: https://github.com/ngimel
2024-10-29 00:54:29 +00:00