Maggie Moss
eb83c3ca23
Clean up unused Pyrefly suppressions ( #166178 )
...
Cleaning up ignores that are no longer needed in the repo and adding select suppressions so the main branch is clean.
Test plan:
`lintrunner -a`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166178
Approved by: https://github.com/oulgen
2025-10-25 05:32:21 +00:00
PyTorch MergeBot
e50dc40d28
Revert "Update gm.print_readable to include Annotation ( #165397 )"
...
This reverts commit 7a65770013 .
Reverted https://github.com/pytorch/pytorch/pull/165397 on behalf of https://github.com/malfet due to I don't know how/why, but it breaks windows tests, see 2e22b1a61e/1 ([comment](https://github.com/pytorch/pytorch/pull/165397#issuecomment-3417428128 ))
2025-10-17 22:35:50 +00:00
Sherlock Huang
7a65770013
Update gm.print_readable to include Annotation ( #165397 )
...
Sample output
```
[rank0]: # Annotation: {'compile_with_inductor': 'flex_attention'} File: /data/users/bahuang/pytorch/torch/nn/attention/flex_attention.py:1490 in flex_attention, code: out, lse, max_scores = flex_attention_hop(
[rank0]: score_mod_2 = self.score_mod_2
[rank0]: mask_fn_2 = self.mask_fn_2
[rank0]: flex_attention_1 = torch.ops.higher_order.flex_attention(xq_5, xk_5, xv_3, score_mod_2, (2048, 2048, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___kv_num_blocks, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___kv_indices, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___full_kv_num_blocks, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___full_kv_indices, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___q_num_blocks, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___q_indices, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___full_q_num_blocks, g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___full_q_indices, 128, 128, mask_fn_2), 0.25, {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True, 'OUTPUT_MAX': False}, (), (g____import_torchtitan_dot_models_dot_attention___flex_attention_block_masks___block_causal___none___mask_mod___closure___0_cell_contents,)); xq_5 = xk_5 = xv_3 = score_mod_2 = mask_fn_2 = None
[rank0]: out_2: "bf16[8, 4, 2048, 16]" = flex_attention_1[0]; flex_attention_1 = None
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165397
Approved by: https://github.com/yushangdi , https://github.com/anijain2305
2025-10-17 18:35:18 +00:00
Yuanyuan Chen
b11593c31b
[8/N] Apply ruff UP035 rule ( #165214 )
...
This is a follow-up of #164653, continuing to apply `UP035` fixes. The purpose is to finally enable this rule.
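For context, `UP035` flags imports of deprecated `typing` aliases. A minimal before/after sketch (illustrative only, not taken from this PR):

```python
# UP035 flags deprecated typing imports such as:
#   from typing import Callable, List
# The fix imports from collections.abc and uses builtin generics (PEP 585):
from collections.abc import Callable

def apply_all(fns: list[Callable[[int], int]], x: int) -> list[int]:
    # Apply each function to x, collecting the results.
    return [fn(x) for fn in fns]
```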
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165214
Approved by: https://github.com/ezyang
2025-10-15 03:18:57 +00:00
Yuanyuan Chen
fb64da0791
[2/N] Use "is" in python type comparison ( #165142 )
...
This is a follow-up of #165037 . It is generally recommended to use `is`/`is not` to compare types. This series of changes applies that suggestion across the code base, with the aim of finally enabling the related linter checks.
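As a minimal illustration of the convention (class names are hypothetical):

```python
class Base:
    pass

class Child(Base):
    pass

x = Child()
# Exact-type checks should use identity, since type objects are singletons:
assert type(x) is Child
assert type(x) is not Base
# When subclasses should also match, isinstance is the idiomatic check:
assert isinstance(x, Base)
```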
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165142
Approved by: https://github.com/albanD
2025-10-10 15:36:44 +00:00
Edward Yang
65aa62d50d
Use codegen for the boxed interpreters ( #164573 )
...
Authored with claude code. The arg parsing is kind of horrible, open
to more suggestions.
Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164573
Approved by: https://github.com/albanD , https://github.com/jansel
2025-10-08 06:27:44 +00:00
Maggie Moss
b13cd141b3
Add pyrefly suppressions ( #164748 )
...
Adds suppressions so pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283
Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check
step 1: delete lines in the pyrefly.toml file from the `project-excludes` field
step 2: run pyrefly check
step 3: add suppressions, clean up unused suppressions
before: https://gist.github.com/maggiemoss/4b3bf2037014e116bc00706a16aef199
after:
0 errors (4,263 ignored)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164748
Approved by: https://github.com/oulgen
2025-10-07 17:31:18 +00:00
bobrenjc93
8c54101933
add tensor subclass printing support in fx/graph.py ( #164403 )
...
It was previously quite misleading, since it looked like the inputs to the dynamo graph were plain tensors when in reality they are tensor subclasses.
before
```
class GraphModule(torch.nn.Module):
def forward(self, L_input_batch_inputs_: "i64[2, 512][512, 1]cuda:0", L_self_parameters_weight_: "f32[202048, 256][256, 1]cuda:0"):
```
after
```
class GraphModule(torch.nn.Module):
def forward(self, L_input_batch_inputs_: "DTensor(i64[2, 512][512, 1]cuda:0)", L_self_parameters_weight_: "DTensor(f32[202048, 256][256, 1]cuda:0)"):
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164403
Approved by: https://github.com/ezyang
2025-10-02 20:06:12 +00:00
FFFrog
ec0cd81c38
[Code Clean] Remove deadcodes about Python3.9 [4/N] ( #163643 )
...
As the title stated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163643
Approved by: https://github.com/albanD
ghstack dependencies: #163626 , #163627 , #163629
2025-09-24 07:30:50 +00:00
Lucas Kabela
b6c53383fe
[Dynamo][Better Engineering] Type annotation for torch/_dynamo/output_graph.py ( #159602 )
...
As part of the better engineering effort, we would like to improve our type support to improve the dev experience in dynamo.
This PR adds strict typing support to `torch/_dynamo/output_graph.py`
Running
```
mypy torch/_dynamo/output_graph.py --linecount-report /tmp/coverage_log
```
| | Lines Annotated | Lines Total | % lines covered | Funcs Annotated | Funcs Total | % funcs covered |
| -------- | ------- | -------- | ------- | ------- | ------- | ------- |
| Main | 2163 | 4792 | 45.14% | 121 | 268 | 45.15% |
| This PR | 4818 | 4818 | 100.00% | 268 | 268 | 100.00% |
| Delta | +2655 | +26 | +54.84% | +147 | 0 | +54.85% |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159602
Approved by: https://github.com/Skylion007
2025-08-05 03:50:54 +00:00
Edward Z. Yang
204eb4da5e
Add expanded_def option for FX printing, render descriptor, update tests ( #158708 )
...
----
- First, we add a new expanded_def to FX, which will expand the
definitions of variables into multiple lines, one per variable
definition. This makes extremely long args/return lists much
more readable.
- Next, we extend this mechanism to also print out descriptors on
placeholders and return values, as comments, if available. This
is how we will test descriptors.
- We update tlparse for AOTAutograd to use this format.
- We update expect tests to use this format and update their formats,
so you can inspect what it can look at. There may be other tests
I should update, open to suggestions.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158708
Approved by: https://github.com/wconstab
ghstack dependencies: #158624
2025-07-25 13:22:32 +00:00
Xuehai Pan
11c07c848c
[BE][14/16] fix typos in torch/ (torch/fx/) ( #156604 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156604
Approved by: https://github.com/jingsh
ghstack dependencies: #156318 , #156320 , #156602
2025-07-02 22:55:29 +00:00
Xuehai Pan
2e0e08588e
[BE][PYFMT] migrate PYFMT for torch/[e-n]*/ to ruff format ( #144553 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144553
Approved by: https://github.com/ezyang
ghstack dependencies: #144551
2025-06-17 08:18:47 +00:00
Menglu Yu
2d25e4d478
[1/n][Optimus][Auto-AC] Support activation quantization without scaling ( #148380 )
...
Summary: We enable the activation quantization in the forward pass, and users can customize the dtype they want to quantize.
Test Plan:
# unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:quantization -- test_activation_quantization_aten
```
Buck UI: https://www.internalfb.com/buck2/776d3911-bb86-4ac8-a527-540cf1510b9d
Test UI: https://www.internalfb.com/intern/testinfra/testrun/4785074873051017
Network: Up: 4.3MiB Down: 42MiB (reSessionID-fef7e727-68b1-4645-a519-5652854df38d)
Executing actions. Remaining 0/4 6.7s exec time total
Command: test. Finished 2 local
Time elapsed: 3:11.5s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
# E2E
### How to enable (you can override the dtype; if nothing is given, the default is fp8)
```
post_grad_fusion_options={
"activation_quantization_aten_pass": {"quant_type": "torch.float8_e5m2"}
},
```
Differential Revision: D70522237
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148380
Approved by: https://github.com/Mingming-Ding , https://github.com/Hahu803
2025-05-08 04:44:15 +00:00
Aaron Orenstein
7a0781eaad
Improve cache key graph printing performance ( #151928 )
...
Teach the graph printer how to allow overriding printing SymTypes (`SymInt`, `SymFloat`, `SymBool`) and then use that to reuse the fast SymNode printing from `torch._inductor.utils.sympy_str()` to make computing the cache key faster.
On my computer the repro from #151823 goes from 480s -> 80s (still terrible... but better).
Fixes #151823
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151928
Approved by: https://github.com/laithsakka
2025-05-06 17:39:53 +00:00
Animesh Jain
b1d34acac5
[fx] Recursive DCE on subgraphs ( #152772 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152772
Approved by: https://github.com/bdhirsh , https://github.com/zou3519
2025-05-06 02:55:34 +00:00
Nikita Shulga
13966d0bf5
[BE] Migrate dtype_abbrs into one location ( #152229 )
...
Namely `torch.utils._dtype_abbrs.dtype_abbrs`
Before that it was defined in various forms of completeness in
c02edba863/torch/fx/graph.py (L215) ,
c02edba863/torch/testing/_internal/common_utils.py (L5226)
and c02edba863/torch/testing/_internal/logging_tensor.py (L17)
TODO:
- Add linter that `torch.testing._internal` module is not referenced from any of the public facing APIs, as it can have extra dependencies such as `expect_test`
Fixes https://github.com/pytorch/pytorch/issues/152225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152229
Approved by: https://github.com/clee2000 , https://github.com/Skylion007
2025-04-28 03:52:47 +00:00
Jakub Grzybek
73358d37da
Fix codegen, change str comparison opeator to == for proper equality … ( #150611 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150611
Approved by: https://github.com/Skylion007 , https://github.com/cyyever
2025-04-04 09:59:59 +00:00
Henry Hu
5cf3029503
Remove unused rand call if not fallback to eager for rand ( #147790 )
...
Fixes #147171
Pull Request resolved: https://github.com/pytorch/pytorch/pull/147790
Approved by: https://github.com/eellison
2025-04-03 23:27:03 +00:00
vasiliy
c974b5322a
enable torch.compile for torch._scaled_mm nvfp4 recipe ( #150462 )
...
Summary:
Updates the meta registration for `torch._scaled_mm` to work for the
nvfp4 recipe.
Test Plan:
```bash
pytest test/test_matmul_cuda.py -s -k test_blockwise_nvfp4
```
Reviewers:
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150462
Approved by: https://github.com/eellison
2025-04-02 01:08:40 +00:00
Tugsbayasgalan Manlaibaatar
fb0e9cb0a0
Remove warnings on non-buffer tensor constants ( #148483 )
...
Export already registers tensor constants directly in the graph, and this is also true for Torchbind objects. This removes the warning that pollutes the output.
Differential Revision: [D70577856](https://our.internmc.facebook.com/intern/diff/D70577856 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148483
Approved by: https://github.com/zhxchen17 , https://github.com/zou3519
ghstack dependencies: #148364
2025-03-12 18:20:04 +00:00
Guilherme Leobas
4e7d264cf8
Introduce UserDefinedExceptionClassVariable ( #146504 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146504
Approved by: https://github.com/anijain2305
2025-03-11 18:55:45 +00:00
eellison
4c13a859e5
Workaround no triton float8_e8m0fnu support in inductor ( #148722 )
...
Triton doesn't support actual float8_e8m0fnu yet, so we can't currently codegen any arithmetic on them. But we can support bitcasting and view/memory operators, treating them as uint8 for now. Fix for https://github.com/pytorch/pytorch/issues/147873 .
The one question I'm not sure of is whether we need to explicitly disable triton template fusion, since it would fuse in these dtypes as uint8.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148722
Approved by: https://github.com/vkuzo
ghstack dependencies: #148450
2025-03-10 17:37:39 +00:00
Jason Ansel
a60b4ed623
[fx] Optimize TracerBase.create_arg and Graph._gen_python_code ( #148292 )
...
Before: 19502951 function calls (18702776 primitive calls) in 8.533 seconds
After: 16402551 function calls (15602452 primitive calls) in 7.701 seconds
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148292
Approved by: https://github.com/oulgen
ghstack dependencies: #148243 , #148260 , #148261 , #148288
2025-03-10 16:06:19 +00:00
Jason Ansel
8f858e226b
[fx] Optimizations for node name generation ( #148288 )
...
Before/after profiling screenshots (images not preserved in this log).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148288
Approved by: https://github.com/oulgen
ghstack dependencies: #148243 , #148260 , #148261
2025-03-10 16:06:19 +00:00
Jason Ansel
bec7bdad47
[fx] Move map_aggregate to C++ ( #148243 )
...
Microbenchmarking `fx.symbolic_trace(lambda x: functools.reduce(operator.add, [x, *range(100000)]))`, before:
```
30603618 function calls (29403419 primitive calls) in 13.744 seconds
```
after:
```
25203549 function calls (24403352 primitive calls) in 12.090 seconds
```
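For reference, a pure-Python sketch of what `map_aggregate` does (the real implementation also handles slices, namedtuples, and FX's immutable containers):

```python
def map_aggregate(a, fn):
    """Recursively apply fn to every leaf of a nested tuple/list/dict."""
    if isinstance(a, tuple):
        return tuple(map_aggregate(x, fn) for x in a)
    if isinstance(a, list):
        return [map_aggregate(x, fn) for x in a]
    if isinstance(a, dict):
        return {k: map_aggregate(v, fn) for k, v in a.items()}
    return fn(a)
```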
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148243
Approved by: https://github.com/oulgen
2025-03-10 16:05:53 +00:00
PyTorch MergeBot
92beda54c8
Revert "[fx] Move map_aggregate to C++ ( #148243 )"
...
This reverts commit edaff88f69 .
Reverted https://github.com/pytorch/pytorch/pull/148243 on behalf of https://github.com/jovianjaison due to breaking internal builds [T216910920] ([comment](https://github.com/pytorch/pytorch/pull/148243#issuecomment-2698724058 ))
2025-03-04 19:40:21 +00:00
PyTorch MergeBot
611b0e9bc4
Revert "[fx] Optimizations for node name generation ( #148288 )"
...
This reverts commit 5eb0337cfd .
Reverted https://github.com/pytorch/pytorch/pull/148288 on behalf of https://github.com/clee2000 due to something in this stack broke some dynamo and higher order ops tests like higher_order_ops/test_invoke_subgraph.py::TestInvokeSubgraphCompile::test_dedupe [GH job link](https://github.com/pytorch/pytorch/actions/runs/13645082540/job/38149882002 ) [HUD commit link](8531d247ba ). dynamo/test_graph_deduplication did run on the PR but the higher_order_ops one didn't, probably combo of landrace and bad TD ([comment](https://github.com/pytorch/pytorch/pull/148288#issuecomment-2698365172 ))
2025-03-04 17:10:12 +00:00
PyTorch MergeBot
ed9055c303
Revert "[fx] Optimize TracerBase.create_arg and Graph._gen_python_code ( #148292 )"
...
This reverts commit 8531d247ba .
Reverted https://github.com/pytorch/pytorch/pull/148292 on behalf of https://github.com/clee2000 due to something in this stack broke some dynamo and higher order ops tests like higher_order_ops/test_invoke_subgraph.py::TestInvokeSubgraphCompile::test_dedupe [GH job link](https://github.com/pytorch/pytorch/actions/runs/13645082540/job/38149882002 ) [HUD commit link](8531d247ba ). dynamo/test_graph_deduplication did run on the PR but the higher_order_ops one didn't, probably combo of landrace and bad TD ([comment](https://github.com/pytorch/pytorch/pull/148288#issuecomment-2698365172 ))
2025-03-04 17:10:12 +00:00
Jason Ansel
8531d247ba
[fx] Optimize TracerBase.create_arg and Graph._gen_python_code ( #148292 )
...
Before: 19502951 function calls (18702776 primitive calls) in 8.533 seconds
After: 16402551 function calls (15602452 primitive calls) in 7.701 seconds
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148292
Approved by: https://github.com/oulgen
ghstack dependencies: #148243 , #148260 , #148261 , #148303 , #148288
2025-03-04 02:42:23 +00:00
Jason Ansel
5eb0337cfd
[fx] Optimizations for node name generation ( #148288 )
...
Before/after profiling screenshots (images not preserved in this log).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148288
Approved by: https://github.com/oulgen
ghstack dependencies: #148243 , #148260 , #148261 , #148303
2025-03-04 02:42:23 +00:00
Jason Ansel
edaff88f69
[fx] Move map_aggregate to C++ ( #148243 )
...
Microbenchmarking `fx.symbolic_trace(lambda x: functools.reduce(operator.add, [x, *range(100000)]))`, before:
```
30603618 function calls (29403419 primitive calls) in 13.744 seconds
```
after:
```
25203549 function calls (24403352 primitive calls) in 12.090 seconds
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148243
Approved by: https://github.com/oulgen
2025-03-02 22:42:31 +00:00
Aaron Orenstein
db4ce78d46
PEP585: More UP006 fixes ( #146392 )
...
This should be the final PR before we can enable RUFF UP006.
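`UP006` rewrites `typing` generics into the builtin PEP 585 forms; a hypothetical before/after:

```python
# Before (flagged by UP006):
#   def counts(words: List[str]) -> Dict[str, int]: ...
# After, using builtin generics:
def counts(words: list[str]) -> dict[str, int]:
    out: dict[str, int] = {}
    for w in words:
        out[w] = out.get(w, 0) + 1
    return out
```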
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146392
Approved by: https://github.com/justinchuby , https://github.com/albanD , https://github.com/Skylion007
2025-02-20 06:18:13 +00:00
Aaron Orenstein
1f8ff94d4f
PEP585: Add noqa to necessary tests ( #146391 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146391
Approved by: https://github.com/justinchuby , https://github.com/Skylion007
2025-02-12 15:29:50 +00:00
Aaron Orenstein
35c8c31f11
Fix for failure in D68425364 ( #145304 )
...
Summary: Back out change from #145166 which causes an internal model to fail.
Differential Revision: D68459095
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145304
Approved by: https://github.com/izaitsevfb
2025-01-22 23:33:02 +00:00
Aaron Orenstein
0b2a3687b9
PEP585 update - torch/fx ( #145166 )
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145166
Approved by: https://github.com/bobrenjc93
2025-01-20 18:11:54 +00:00
Aaron Gokaslan
cb66146f2b
[BE]: Update literal typing for torch/fx/graph nodelist ( #144650 )
...
Mentioned in discussion for #144631
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144650
Approved by: https://github.com/jansel
2025-01-12 21:02:13 +00:00
Jason Ansel
fd382f1269
Micro-optimization in Graph.nodes.__iter__ ( #144631 )
...
This generates slightly better code (removing a generator frame) and
drops a redundant assert.
```py
>>> import timeit
>>> def a():
... yield from range(3)
...
>>> def b():
... return range(3)
...
>>> timeit.timeit(lambda: [*a()])
0.2714634328149259
>>> timeit.timeit(lambda: [*b()])
0.12076826114207506
>>>
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144631
Approved by: https://github.com/oulgen , https://github.com/Skylion007
2025-01-12 17:46:46 +00:00
Marvin Kim
b1b0afb8e8
[BE] Add type annotation to eliminate_dead_code ( #142251 )
...
Test Plan: CI
Reviewed By: evanleed
D-ifferential Revision: D66887283
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142251
Approved by: https://github.com/ezyang , https://github.com/Skylion007
2024-12-10 17:09:21 +00:00
PyTorch MergeBot
75530885ba
Revert "[BE] Add type annotation to eliminate_dead_code ( #142251 )"
...
This reverts commit 3d04de6b2f .
Reverted https://github.com/pytorch/pytorch/pull/142251 on behalf of https://github.com/jeanschmidt due to checking if reverting will fix 'FAILED [5.0221s] test_dataloader.py::TestIndividualWorkerQueue::test_ind_worker_queue' on windows ([comment](https://github.com/pytorch/pytorch/pull/142251#issuecomment-2531706362 ))
2024-12-10 13:57:00 +00:00
Marvin Kim
3d04de6b2f
[BE] Add type annotation to eliminate_dead_code ( #142251 )
...
Test Plan: CI
Reviewed By: evanleed
Differential Revision: D66887283
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142251
Approved by: https://github.com/ezyang , https://github.com/Skylion007
2024-12-10 09:27:29 +00:00
Avik Chaudhuri
74eb92ed6e
fix deep copy of empty graph ( #141660 )
...
Differential Revision: D66532131
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141660
Approved by: https://github.com/ezyang
2024-12-02 22:03:13 +00:00
Sherlock Huang
071d48c56e
Add output_node util function to fx.Graph ( #139770 )
...
Summary: A util function for accessing the output node of an FX graph
Test Plan: OSS CI
Differential Revision: D65486457
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139770
Approved by: https://github.com/ezyang , https://github.com/Chillee
2024-11-07 18:54:59 +00:00
Matthew Francis-Landau
a7f49de485
Fixes issue with enums in a tuple for dynamo ( #133123 )
...
Currently, when tuple values are encountered in dynamo, they are encoded using `repr(arg)`. This causes an issue if one of the values inside the tuple cannot be properly encoded: if an enum is contained inside a tuple, invalid Python code is generated.
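The failure mode can be reproduced with a stock `enum` (names hypothetical):

```python
import enum

class Color(enum.Enum):
    RED = 1

# repr() of a tuple containing an enum member is not valid Python source:
src = repr((Color.RED, 2))  # "(<Color.RED: 1>, 2)"
try:
    eval(src)
    raised = False
except SyntaxError:
    raised = True
assert raised  # round-tripping through repr breaks codegen here
```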
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133123
Approved by: https://github.com/jansel
2024-10-21 23:45:11 +00:00
Xuehai Pan
abbd71d29d
[BE][Easy] enable PYFMT for torch.fx ( #138443 )
...
Reproduce command:
```bash
ghstack checkout https://github.com/pytorch/pytorch/pull/138443
git checkout HEAD~1 torch/
lintrunner -a --take "PYFMT" --all-files
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138443
Approved by: https://github.com/ezyang
2024-10-21 19:15:49 +00:00
Tom Ritchford
c0582fd0f8
Remove unused Python variables in torch/[b-z]* ( #136963 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136963
Approved by: https://github.com/ezyang
2024-10-19 16:45:22 +00:00
Jason Ansel
1f15c0c7a5
[fx] Replace _snake_case with a regexp ( #135822 )
...
~2x speedup on this function, though saves <0.5s overall
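A sketch of the single-regexp approach (the actual pattern in `torch/fx/graph.py` may differ in edge cases):

```python
import re

# Insert "_" at lower->upper boundaries and before the last capital of an
# acronym run (e.g. "HTTPServer" -> "HTTP_Server"), then lowercase.
_CAMEL_BOUNDARY = re.compile(r"(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])")

def snake_case(name: str) -> str:
    return _CAMEL_BOUNDARY.sub("_", name).lower()
```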
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135822
Approved by: https://github.com/oulgen
ghstack dependencies: #135787 , #135788 , #135820 , #135821
2024-09-13 00:18:41 +00:00
Jason Ansel
a72124add9
[fx] Minor optimization in create_arg ( #135821 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135821
Approved by: https://github.com/oulgen
ghstack dependencies: #135787 , #135788 , #135820
2024-09-13 00:18:41 +00:00
Jason Ansel
86335e9135
[reland 3/3][fx] Bypass custom __setattr__ in Node.__init__ ( #135735 )
...
Relands #135079 which was reverted by #135562.
I broke this up into three parts to test internally.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135735
Approved by: https://github.com/oulgen
2024-09-12 05:50:39 +00:00
Ivan Zaitsev
440f8f57af
Revert "[fx] Bypass custom __setattr__ in Node.__init__ ( #135079 )" ( #135562 )
...
This reverts commit 66da3b3b2a .
#135079 breaks internal tests and needs to be reverted. Revert with mergebot doesn't work as this PR is technically part of the stack, but, according to @jansel, it should be possible to revert it individually.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135562
Approved by: https://github.com/jansel , https://github.com/seemethere
2024-09-10 18:07:11 +00:00