Commit Graph

2446 Commits

chilli
e0c5113dec Add support for capturing tensors with score_mod (#124444)
```
import torch
import torch.nn.functional as F
from torch.nn.attention._templated_attention import _templated_attention as templated_attention
from triton.testing import do_bench

torch.manual_seed(0)

B = 16
H = 16
S = 2048
D = 64

# score_mod that closes over (captures) the head_scale tensor
head_scale = torch.randn(H, device='cuda')
def alibi(score, batch, head, token_q, token_kv):
    return score + torch.ops.aten.index(head_scale, [head]) * (token_q - token_kv)
bias = torch.randn(H, S, S, dtype=torch.float16, device='cuda')

query = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
key = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
value = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)

compiled = torch.compile(templated_attention)
out = compiled(query, key, value, score_mod=alibi)
out2 = templated_attention(query, key, value, score_mod=alibi)
print((out - out2).abs().mean())
assert (out - out2).abs().mean() < 1e-3
print("Flash (no mask): ", do_bench(lambda: F.scaled_dot_product_attention(query, key, value)))
print("Flash (mask): ", do_bench(lambda: F.scaled_dot_product_attention(query, key, value, attn_mask=bias)))
print("flexattention: ", do_bench(lambda: compiled(query, key, value, score_mod=alibi)))
```
<img width="324" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/18c175d0-2720-4dfd-8747-85b8a8f609f5">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124444
Approved by: https://github.com/jansel, https://github.com/drisspg
2024-04-23 06:20:13 +00:00
Yanbo Liang
72a34eeb99 Dynamo x autograd.Function supports non-{Tensor, symnode, constant} inputs (#124360)
Fixes #118395

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124360
Approved by: https://github.com/zou3519
2024-04-22 23:32:54 +00:00
Jeff Daily
6ede882c0b preferred blas library; cublaslt gemm implementation (#122106)
Following the example of PyTorch supporting a preferred linalg library (cusolver or magma), this PR introduces a preferred BLAS library selector: cublas or cublaslt for CUDA, and hipblas or hipblaslt for ROCm via the normal hipification of sources.

The default BLAS implementation remains cublas (hipblas). cublaslt (hipblaslt) can be enabled by setting the environment variable TORCH_BLAS_PREFER_CUBLASLT=1 (TORCH_BLAS_PREFER_HIPBLASLT=1 works as an alias) or by calling `torch.backends.cuda.preferred_blas_library(backend="cublaslt")` (`backend="hipblaslt"` works as an alias).
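
A minimal opt-in sketch using the API named above:

```python
import torch

# Prefer cuBLASLt (hipBLASLt on ROCm) over the default cuBLAS backend.
torch.backends.cuda.preferred_blas_library(backend="cublaslt")

# Calling with no argument queries the current preference.
print(torch.backends.cuda.preferred_blas_library())
```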

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122106
Approved by: https://github.com/lezcano
2024-04-22 15:38:22 +00:00
Aaron Gokaslan
29cc293725 [BE]: FURB142 - Remove set mutations. Use set update (#124551)
Uses set mutation methods (`update`, `difference_update`, etc.) instead of manually reimplementing them.
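
A representative instance of the kind of rewrite FURB142 performs (illustrative code, not a specific call site from this PR):

```python
seen = {"a", "b"}
items = ["b", "c", "d"]

# Before: manually reimplementing a set mutation with a loop
for item in items:
    seen.add(item)

# After: use the dedicated mutation method
seen.update(items)
```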

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124551
Approved by: https://github.com/ezyang
2024-04-21 14:12:33 +00:00
Aaron Gokaslan
5a1216bb2e [BE]: Update ruff to 0.4.1 (#124549)
Updates ruff to 0.4.1.
This version fixes a lot of false negatives/false positives, is 20-40% faster, and has various other bug fixes.

Below is a before-and-after table showing the execution times of ruff lint and ruff format in milliseconds, courtesy of https://astral.sh/blog/ruff-v0.4.0:

| Repository                                         | Linter (v0.3) | Linter (v0.4) | Formatter (v0.3) | Formatter (v0.4) |
|----------------------------------------------------|---------------|---------------|------------------|------------------|
| [pytorch/pytorch](https://github.com/pytorch/pytorch) | 328.7         | 251.8         | 351.1            | 274.9            |

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124549
Approved by: https://github.com/ezyang
2024-04-21 14:06:23 +00:00
Edward Z. Yang
f34905f61d Assert that TracingContext is available when set_example_value is called (#124284)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124284
Approved by: https://github.com/Chillee
ghstack dependencies: #124105, #124059, #124176, #124283
2024-04-21 11:23:13 +00:00
Yanbo Liang
0d90d4d613 [Dynamo] Fix NamedTuple hasattr bug (#124531)
Fixes #124402

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124531
Approved by: https://github.com/jansel
2024-04-21 04:36:22 +00:00
Animesh Jain
a32eac345f [dynamo] Return gm.forward for eager backend (#124109)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124109
Approved by: https://github.com/yanboliang, https://github.com/jansel
ghstack dependencies: #124445
2024-04-20 14:11:05 +00:00
Animesh Jain
febc4d8759 [dynamo][easy] forbid_in_graph check to use getattr_static (#124445)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124445
Approved by: https://github.com/yanboliang, https://github.com/jansel
2024-04-20 14:11:05 +00:00
Yanbo Liang
a3e3693afc [Dynamo] Fix missing bracket in ListVariable (#124532)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124532
Approved by: https://github.com/williamwen42
2024-04-20 08:26:30 +00:00
drisspg
f1cbaf1764 Adds LSE output for templated-attention-hop if inputs require grad (#124308)

Prep PR for adding autograd support to the templated-attention-hop. The kernel needs to output the LSE during the forward pass, which will be used during the backward pass.

### Output code
https://gist.github.com/drisspg/2aea3ce5db75811e7e143eeecb774d8a

## Before
| Type    |   Speedup |   batch_size |   num_heads |   q_seq_len |   k_seq_len |   head_dim | score_mod     | dtype          |
|---------|-----------|--------------|-------------|-------------|-------------|------------|---------------|----------------|
| Average |     1.159 |              |             |             |             |            |               |                |
| Max     |     1.342 |           16 |          16 |         512 |         512 |         64 | noop          | torch.bfloat16 |
| Min     |     1.016 |            1 |          16 |         512 |         512 |         64 | relative_bias | torch.bfloat16 |

## After
| Type    |   Speedup |   batch_size |   num_heads |   q_seq_len |   k_seq_len |   head_dim | score_mod   | dtype          |
|---------|-----------|--------------|-------------|-------------|-------------|------------|-------------|----------------|
| Average |     1.155 |              |             |             |             |            |             |                |
| Max     |     1.339 |           16 |          16 |         512 |         512 |         64 | noop        | torch.bfloat16 |
| Min     |     1.009 |            1 |          16 |         512 |         512 |         64 | head_bias   | torch.bfloat16 |

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124308
Approved by: https://github.com/Chillee
2024-04-20 05:45:56 +00:00
soulitzer
cf5ca58e7f [NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124343
Approved by: https://github.com/jbschlosser
2024-04-19 23:13:59 +00:00
Laith Sakka
acbf888a13 rename sl to strobelight (#124455)
Summary:
TORCH_COMPILE_SL_PROFILE -> TORCH_COMPILE_STROBELIGHT
SL_MAX_STACK_LENGTH -> COMPILE_STROBELIGHT_MAX_STACK_LENGTH
SL_MAX_PROFILE_TIME -> COMPILE_STROBELIGHT_MAX_PROFILE_TIME
profile_with_sl() -> strobelight()
compiletime_sl_profile_meta() -> compiletime_strobelight_meta()

Test Plan:
1. run and verify
```
TORCH_COMPILE_STROBELIGHT=TRUE buck2 run  @//mode/inplace  @//mode/opt  //caffe2/fb/strobelight:compiletime_profiler_example
```
2. run and verify
```
buck2 run  @//mode/inplace  @//mode/opt  //caffe2/fb/strobelight:function_profiler_example --local-only
```
3. run and verify the truncated stack for
```
TORCH_COMPILE_STROBELIGHT=TRUE COMPILE_STROBELIGHT_MAX_STACK_LENGTH=1 buck2 run  @//mode/inplace  @//mode/opt  //caffe2/fb/strobelight:compiletime_profiler_example
```
4. add an infinite loop in `_verify` and verify samples for
```
COMPILE_STROBELIGHT_MAX_PROFILE_TIME=30 TORCH_COMPILE_STROBELIGHT=TRUE buck2 run  @//mode/inplace  @//mode/opt  //caffe2/fb/strobelight:compiletime_profiler_example
```

Reviewed By: oulgen

Differential Revision: D56327139

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124455
Approved by: https://github.com/oulgen
2024-04-19 22:50:13 +00:00
JackCaoG
7ae835eee4 Enable SourcelessBuilder to build GraphModule generated by make_fx (#123673)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123673
Approved by: https://github.com/ezyang, https://github.com/anijain2305
ghstack dependencies: #123680
2024-04-19 17:23:51 +00:00
Michael Lazos
5050e627dc Defer marking_static_address (#124309)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124309
Approved by: https://github.com/anijain2305
ghstack dependencies: #123324, #123404, #123405
2024-04-19 17:20:57 +00:00
ydwu4
e62169a8fa Support torchbind op dispatch in python (#123367)
We override the `__call__` method and register fake, functional, and proxy dispatch mode implementations by default in its `python_key_mode_table`.

The idea is:
1. when the inputs contain a FakeScriptObject, we dispatch through the `_get_dispatch` mechanism; the dispatch mode keys are implemented automatically in the operator's constructor.
2. when the inputs are not fakified, we dispatch through the original C++ dispatcher.
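
A hypothetical, self-contained sketch of that dispatch split (every name below is illustrative, not the actual implementation):

```python
class FakeScriptObject:  # stand-in for the fakified torchbind object
    pass

class TorchBindOpOverload:
    def __init__(self):
        # Mode-keyed Python implementations, registered in the constructor.
        self.python_key_mode_table = {"fake": lambda *a, **kw: "fake impl"}

    def _get_dispatch(self, mode_key):
        return self.python_key_mode_table[mode_key]

    def __call__(self, *args, **kwargs):
        if any(isinstance(a, FakeScriptObject) for a in args):
            # Fakified input: dispatch through the Python mode table.
            return self._get_dispatch("fake")(*args, **kwargs)
        # Otherwise: dispatch through the original C++ dispatcher.
        return "c++ dispatch"

print(TorchBindOpOverload()(FakeScriptObject()))  # -> fake impl
```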

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123367
Approved by: https://github.com/zou3519
2024-04-19 17:17:27 +00:00
Boyuan Feng
9a71d12d92 [CUDAGraphTree] Support mutated inputs from prior cudagraph pool (#123231)
# PR
This PR supports mutating inputs in cudagraph trees, if these inputs are outputs from previous cudagraph. Please check #121861 for more details.

# Note on Optimistic Mutation Check
To determine whether to apply cudagraphs, we need to check for input mutation, which falls into four categories: a) no mutation; b) mutation on parameters/buffers; c) mutation on cudagraph-recorded tensors; d) mutation on non-cudagraph-recorded tensors. We can apply cudagraphs for types a, b, and c, but not for type d. The input mutation type depends on the function, the current node, and the inputs.

Since `check_for_mutation` is slow, there is a trade-off between making type c and type d faster.
- To make type d faster, we want to `check_for_mutation` and call the eager function early. However, this adds unnecessary overhead to types a, b, and c due to the extra check.
- To make type c faster, we want to skip `check_for_mutation` at the beginning and only `check_for_mutation` before `record_function` for a new function. This removes the overhead of `check_for_mutation` for types a, b, and c. However, it adds extra overhead to type d due to `check_invariants` for all children nodes.

Instead, we design an optimistic mutation check (a condensed sketch follows this list). The assumption is that, given a function and a node, the input mutation type usually remains the same across inputs. So, if we have ever detected a function on a node as type d, we will never detect it as type c. The detailed design is:
- [Slow Path] On the first invocation of a function on a node, we run `check_for_mutation` once and cache the input mutation type as `non_cudagraph_managed_mutation[node_id][func_id]`.
- [Fast Path] On subsequent invocations of a function on a node, we skip `check_for_mutation`. If `non_cudagraph_managed_mutation[node_id][func_id]` is true, we directly call the eager function. Otherwise, we `check_invariants` and call the cudagraph function.
- [Slow Path] Before `record_function`, we run `check_for_mutation` again.
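
A condensed, runnable sketch of that caching scheme (the helper bodies are placeholders; only the control flow mirrors the description above):

```python
# Placeholder internals: the real versions live in cudagraph trees.
def check_for_mutation(func, inputs) -> bool:
    """Expensive: does func mutate non-cudagraph-recorded inputs (type d)?"""
    return False

def check_invariants(inputs) -> bool:
    """Cheap: do cached assumptions about these inputs still hold?"""
    return True

non_cudagraph_managed_mutation: dict = {}  # keyed by (node_id, func_id)

def call(node_id, func_id, func, inputs):
    key = (node_id, func_id)
    if key not in non_cudagraph_managed_mutation:
        # Slow path: first invocation of this function on this node.
        non_cudagraph_managed_mutation[key] = check_for_mutation(func, inputs)
    if non_cudagraph_managed_mutation[key]:
        # Type d: run the eager (or inductor-optimized) function directly.
        return func(*inputs)
    # Fast path: skip check_for_mutation; check_invariants still catches a
    # function that later behaves like type d (see Q2 below).
    if not check_invariants(inputs):
        return func(*inputs)
    return func(*inputs)  # stand-in for replaying the recorded cudagraph
```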

**Q1: Would there be overhead for type a,b,c,d?**
A: No. We only check input mutation types for the first invocation of a function on a node.

**Q2: If a function happens to be type c during the first invocation on a node, could we detect it as type d in the future?**
A: Yes. This is done by `check_invariants`, which guarantees correctness.

**Q3: If a function happens to be type d during the first invocation on a node, could it still be recognized as type c in the future?**
A: No. But this should happen rarely, according to our assumption. In the rare case that it does, there are no correctness issues, and the performance is the same as that of the eager (or inductor-optimized) function.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123231
Approved by: https://github.com/eellison
2024-04-19 10:32:12 +00:00
Xuehai Pan
a6f044a490 [dynamo, 3.8-3.9] support dataclass with frozen=True in Python 3.8/3.9 (#124393)
Closes #114966

Frozen field assignment in `__init__` in Python 3.8-3.9:

f5bd65ed37/Lib/dataclasses.py (L402-L411)

```python
import builtins

BUILTINS = builtins

def _field_assign(frozen, name, value, self_name):
    # If we're a frozen class, then assign to our fields in __init__
    # via object.__setattr__.  Otherwise, just use a simple
    # assignment.
    #
    # self_name is what "self" is called in this function: don't
    # hard-code "self", since that might be a field name.
    if frozen:
        return f'BUILTINS.object.__setattr__({self_name},{name!r},{value})'
    return f'{self_name}.{name}={value}'
```

Frozen field assignment in `__init__` in Python 3.10+:

812245ecce/Lib/dataclasses.py (L436-L445)

```python
__dataclass_builtins_object__ = object

def _field_assign(frozen, name, value, self_name):
    # If we're a frozen class, then assign to our fields in __init__
    # via object.__setattr__.  Otherwise, just use a simple
    # assignment.
    #
    # self_name is what "self" is called in this function: don't
    # hard-code "self", since that might be a field name.
    if frozen:
        return f'__dataclass_builtins_object__.__setattr__({self_name},{name!r},{value})'
    return f'{self_name}.{name}={value}'
```
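
A minimal usage sketch of what now traces on Python 3.8/3.9 (the compiled function here is illustrative):

```python
import dataclasses
import torch

@dataclasses.dataclass(frozen=True)
class Point:
    x: torch.Tensor
    y: torch.Tensor

@torch.compile(backend="eager", fullgraph=True)
def fn(a):
    # Frozen-field assignment in __init__ goes through object.__setattr__,
    # which dynamo can now inline on 3.8/3.9 as well.
    p = Point(a, a + 1)
    return p.x + p.y

print(fn(torch.ones(3)))
```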

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124393
Approved by: https://github.com/jansel
2024-04-19 05:10:33 +00:00
ydwu4
2e48b39603 Fix example_value of map (#124203)
Previously, we didn't expand the shape of the example_value of map to match the inputs (edit: the first mapped dimension). This PR fixes that bug. To make this easier, we change _call_function_and_unflatten_output to accept example_values directly instead of retrieving them from the variable trackers.

Also removes a redundant call_function node in the strict_mode higher-order op in Dynamo.

Test Plan:
existing tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124203
Approved by: https://github.com/ezyang, https://github.com/zou3519
2024-04-18 19:18:36 +00:00
PyTorch MergeBot
4a0900d04b Revert "[NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343)"
This reverts commit ef93402f61.

Reverted https://github.com/pytorch/pytorch/pull/124343 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/124343#issuecomment-2064937192))
2024-04-18 18:55:48 +00:00
Jason Ansel
7a6edb0b66 Possible fix for einops warning (#124084)
See https://github.com/arogozhnikov/einops/issues/315

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124084
Approved by: https://github.com/peterbell10
2024-04-18 17:09:50 +00:00
Boyuan Feng
aa2da0cdd2 [Export] Add runtime assert to non-strict export (#123681)
This PR moves insert_deferred_runtime_asserts from Dynamo to torch.fx.passes and uses it to add runtime assertions for non-strict export.

Differential Revision: D55944267

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123681
Approved by: https://github.com/tugsbayasgalan, https://github.com/angelayi
2024-04-18 16:13:27 +00:00
soulitzer
ef93402f61 [NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124343
Approved by: https://github.com/jbschlosser
2024-04-18 14:42:54 +00:00
Animesh Jain
704fac5618 [dynamo][cpp-guard] Reland Attempt 1 - Enable cpp guard manager (#124231)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124231
Approved by: https://github.com/jansel
ghstack dependencies: #124230, #124237
2024-04-18 06:36:20 +00:00
PyTorch MergeBot
6e86a40694 Revert "[Dynamo] Check for __bool__ attribute before accessing it (#120943)"
This reverts commit dd7aeedb72.

Reverted https://github.com/pytorch/pytorch/pull/120943 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/120943#issuecomment-2063098295))
2024-04-18 06:34:32 +00:00
William Wen
812bae09be [dynamo] fix 3.11+ refleak (#124238)
Fixes https://github.com/pytorch/pytorch/issues/119607 for 3.11+.

In 3.11+, `_PyFrame_FastToLocalsWithError` could implicitly run `COPY_FREE_VARS` on the original frame, leading to double increfs, since the dynamo shadow frame can rerun `COPY_FREE_VARS`. So the solution is to skip the first `COPY_FREE_VARS` instruction in the shadow frame if it was already executed in the original frame.

Also move the location for clearing the original frame in 3.12 to handle error cases more thoroughly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124238
Approved by: https://github.com/jansel
2024-04-18 03:02:29 +00:00
Animesh Jain
6b4b857a60 [dynamo][nn_module] Enable torch.compile/disable as decorators on the class (#124187)
Supports usage like the following. This is a UI change, so please review carefully.

```python
@torch._dynamo.disable
class SimpleLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer0 = torch.nn.Linear(4, 4)

    def forward(self, inp):
        return self.layer0(torch.sigmoid(inp))

@torch.compile(backend=cnts)
class SimpleModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer0 = SimpleLinear()
        self.layer1 = torch.nn.Linear(4, 4)

    def forward(self, inp):
        z = self.layer0(torch.sin(inp))
        return self.layer1(z)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124187
Approved by: https://github.com/yanboliang, https://github.com/jansel
2024-04-18 02:51:30 +00:00
Andrew M. James
64f42bfd52 [dynamo] Support list.reverse (#124210)
fixes #123974

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124210
Approved by: https://github.com/peterbell10
2024-04-17 23:33:32 +00:00
Matthias Reso
dd7aeedb72 [Dynamo] Check for __bool__ attribute before accessing it (#120943)
This PR checks whether the `__bool__` attribute is available before accessing it when handling a `UserDefinedObjectVariable`.
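
The guarded-access pattern, in illustrative form (not the literal diff):

```python
class NoBool:
    pass

obj = NoBool()

# Before: assumed the attribute exists -> AttributeError for plain classes
# bool_fn = type(obj).__bool__

# After: check first; objects without __bool__ are truthy by default
bool_fn = getattr(type(obj), "__bool__", None)
result = bool_fn(obj) if bool_fn is not None else True
print(result)  # True
```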

Fixes #119782

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120943
Approved by: https://github.com/zou3519
2024-04-17 23:26:55 +00:00
Edward Z. Yang
bebdbb63ce Introduce set_example_value and use it throughout Dynamo (#124176)
I'm going to set up some extra behavior when we set the example value, so
I need a convenient place to interpose.  I cannot easily do it on
meta itself because it's a generic dict with no interposition point.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124176
Approved by: https://github.com/oulgen
ghstack dependencies: #124105, #124059
2024-04-17 22:57:11 +00:00
Xuehai Pan
93e249969b [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261)
Remove useless parentheses in `raise` statements if the exception type is raised with no argument.
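
For reference, the kind of rewrite RSE performs:

```python
def validate(x):
    if x < 0:
        # Before (flagged by RSE): raise ValueError()
        raise ValueError  # After: no useless parentheses
```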

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124261
Approved by: https://github.com/albanD
2024-04-17 19:29:34 +00:00
Animesh Jain
f433517181 [dynamo][decorator] Support disable on nn modules (#124185)
Fixes https://github.com/pytorch/pytorch/issues/123979

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124185
Approved by: https://github.com/weifengpy, https://github.com/yoyoyocmu
2024-04-17 16:20:34 +00:00
Xuehai Pan
7e1c98c171 [dynamo] support object.__setattr__(obj, name, value) (#124068)
Resolves #114964
Resolves #114966

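A minimal sketch of the now-supported pattern (the wrapper class here is illustrative):

```python
import torch

class Box:
    pass

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    b = Box()
    # previously a graph break; now traced by dynamo
    object.__setattr__(b, "value", x + 1)
    return b.value

print(fn(torch.ones(2)))
```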

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124068
Approved by: https://github.com/jansel
2024-04-17 15:57:14 +00:00
Simon Fan
67bd43b510 [compiled autograd][dynamo] use aliases for stack restore when partial graphs steal inputs (#124127)
same idea as https://github.com/pytorch/pytorch/pull/123359, but for when we restore stack variables after calling a partial graph:

Illustrated by the test case:

before:
```python
def function(inputs):
    graph_out_0 = __compiled_fn_2(inputs)
    getitem_1 = graph_out_0[0]
    add = inputs[1]  # <-- error: inputs has already been cleared
    del graph_out_0
    add_1 = add + getitem_1
    add = None
    getitem_1 = None
    cpu = add_1.cpu()
    add_1 = None
    return (cpu,)
```
after:
```python
def function(inputs):
    inputs_ref_0 = inputs[1]
    graph_out_1 = __compiled_fn_2(inputs)
    getitem_1 = graph_out_1[0]
    add = inputs_ref_0
    del graph_out_1
    add_1 = add + getitem_1
    add = None
    getitem_1 = None
    cpu = add_1.cpu()
    add_1 = None
    return (cpu,)
```

Co-authored-by: Jason Ansel <jansel@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124127
Approved by: https://github.com/jansel
2024-04-16 17:01:34 +00:00
William Wen
9309580d69 [dynamo, 3.12] handle possibility of NULL local variables during graph breaks (#124095)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124095
Approved by: https://github.com/jansel
2024-04-16 08:44:43 +00:00
William Wen
2b3594f90e [dynamo] fix call_finally issue in Python 3.8 (#124122)
Fix https://github.com/pytorch/pytorch/issues/97811 again...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124122
Approved by: https://github.com/jansel
2024-04-16 08:36:20 +00:00
Animesh Jain
bb0c768c5b [dynamo][refactor] Move LazyGraphModule handling (#124113)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124113
Approved by: https://github.com/jansel
ghstack dependencies: #124078
2024-04-16 06:39:45 +00:00
PyTorch MergeBot
530bf391cc Revert "[dynamo] Turn on CPP guard manager (#123547)"
This reverts commit 3e98bdd66d.

Reverted https://github.com/pytorch/pytorch/pull/123547 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/123547#issuecomment-2058337419))
2024-04-16 06:38:15 +00:00
Shunting Zhang
df5829d0ba [inductor] let rand_strided support fp8 (#124120)
I'm working on https://fb.workplace.com/groups/1075192433118967/posts/1411161629522044/ (a Meta-internal link about an inefficient inner/persistent reduction kernel generated by inductor). I found that the generated benchmark code for a kernel ( https://gist.github.com/shunting314/13a0105f72a1c54d9c220370c7fd3845 ) cannot be run, since rand_strided fails to generate tensors for fp8. The errors are like

```
RuntimeError: "normal_kernel_cpu" not implemented for 'Float8_e4m3fn'
```
for CPU
or
```
RuntimeError: "normal_kernel_cuda" not implemented for 'Float8_e4m3fn'
```
for GPU

This PR works around that problem.
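
One plausible shape of such a workaround (the PR's exact approach may differ): sample in a dtype where `normal_` is implemented, then cast to the fp8 dtype.

```python
import torch

def rand_fp8(size, dtype=torch.float8_e4m3fn, device="cpu"):
    # normal_ isn't implemented for fp8, so sample in float32 and cast.
    return torch.randn(size, device=device).to(dtype)

t = rand_fp8((16, 16))
print(t.dtype)  # torch.float8_e4m3fn
```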

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124120
Approved by: https://github.com/Chillee, https://github.com/jansel
2024-04-16 04:15:56 +00:00
Animesh Jain
601112fdb4 [dynamo][log] Print missing skipped frame info on debug (#124078)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124078
Approved by: https://github.com/yanboliang
2024-04-15 20:33:17 +00:00
Aleksandar Samardžić
f5331aade5 Simplify ATen sparse semi-structured operators based on CUTLASS (#123473)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123473
Approved by: https://github.com/cpuhrsch
2024-04-14 06:57:41 +00:00
William Wen
0dfe72c63b [dynamo, 3.12] fix positions and offsets of added instructions when we clean (#123991)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123991
Approved by: https://github.com/jansel
ghstack dependencies: #123978
2024-04-14 03:58:04 +00:00
Jason Ansel
f3fd280238 [dynamo] Relax strict_mode for autograd.Function forward inputs (#123910)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123910
Approved by: https://github.com/oulgen
2024-04-13 19:41:59 +00:00
William Wen
6440d1baa6 [dynamo, 3.12] fix the block stack... again (#123978)
Some changes to how we handle blocks in 3.11+:
- We only keep track of `with` blocks that are not enclosed in a try block
- We do not compile partial graphs if we are in a block that is not in a tracked `with` block, i.e. any block enclosed in some non-`with` try/except/etc. block

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123978
Approved by: https://github.com/jansel
2024-04-13 17:07:02 +00:00
PyTorch MergeBot
97261be0a8 Revert "Simplify ATen sparse semi-structured operators based on CUTLASS (#123473)"
This reverts commit b2a0b8c446.

Reverted https://github.com/pytorch/pytorch/pull/123473 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/123473#issuecomment-2053561077))
2024-04-13 07:47:32 +00:00
Animesh Jain
58afcd7b61 [dynamo][dict] Add UnspecializedNNModuleVariable to dict keys (#122812)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122812
Approved by: https://github.com/jansel
ghstack dependencies: #122943, #123877, #123878
2024-04-13 02:07:35 +00:00
Animesh Jain
fefe6e2fea [dynamo][3.12] Stop backend detection on the first RETURN_VALUE (#123878)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123878
Approved by: https://github.com/williamwen42
ghstack dependencies: #122943, #123877
2024-04-13 02:07:35 +00:00
willfengg
f1654fd4b0 [PT2D][FSDP] skip FSDP hooks base on dynamo config (#123021)
unit test: `pytest test/distributed/_composable/fsdp/test_fully_shard_compile.py`

For FSDP, we turn compiling hooks on/off based on `torch._dynamo.config.skip_fsdp_hooks`
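
A minimal sketch of flipping that switch (the config flag is the one named above):

```python
import torch

# When True, dynamo skips (does not trace/compile) the FSDP hooks.
torch._dynamo.config.skip_fsdp_hooks = True
```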

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123021
Approved by: https://github.com/yf225, https://github.com/anijain2305
2024-04-13 01:47:25 +00:00
Aaron Gokaslan
1d6c5972c1 [BE]: Optimize min/max/sum comprehensions C419 (#123960)
Automatic fixes that replace certain list comprehensions with generator expressions where appropriate, so that they are immediately consumed. This is preview functionality in ruff for rule C419, and it was applied automatically.
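
A representative C419 rewrite (illustrative code):

```python
values = [1, 2, 3, 4]

# Before: builds an intermediate list just to consume it
total = sum([v * v for v in values])

# After: the generator expression is consumed directly
total = sum(v * v for v in values)
```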

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123960
Approved by: https://github.com/malfet
2024-04-12 23:54:15 +00:00
Animesh Jain
3e98bdd66d [dynamo] Turn on CPP guard manager (#123547)
As title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123547
Approved by: https://github.com/jansel
2024-04-12 23:30:56 +00:00