ZhongYingMatrix
1c40ce4f19
handle SymInt shape/input when debugging in dynamic shape ( #96645 )
...
Handle SymInt shapes/inputs when debugging with dynamic shapes. Fixes #96272
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96645
Approved by: https://github.com/bdhirsh
2023-03-20 18:19:03 +00:00
Michael Lazos
203890e1e0
Properly show buck target to run ( #96089 )
...
Summary: Makes the debug dir location configurable with TORCH_COMPILE_DEBUG_DIR env var
Test Plan: TORCH_COMPILE_DEBUG_DIR="." buck2 run mode/dev-nosan //caffe2/test/inductor:minifier_smoke
Reviewed By: bertmaher
Differential Revision: D43639955
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96089
Approved by: https://github.com/bertmaher
2023-03-07 22:52:27 +00:00
Edward Z. Yang
20dfce591c
Add support for Inductor + symbolic shapes + training ( #93059 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93059
Approved by: https://github.com/ezyang
2023-02-28 22:44:31 +00:00
Kazuaki Ishizaki
46385b3e48
Fix typos under torch/_dynamo directory ( #95599 )
...
This PR fixes typos in comments and messages of `.py` files under the `torch/_dynamo` directory
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95599
Approved by: https://github.com/ezyang
2023-02-28 03:44:24 +00:00
Jason Ansel
e071d72f3c
Tag dynamo backends as debug/experimental ( #93878 )
...
Hides debug/experimental backends by default.
Before:
```
torch._dynamo.list_backends()
['aot_eager', 'aot_eager_decomp_partition', 'aot_torchxla_trace_once', 'aot_torchxla_trivial', 'aot_ts', 'aot_ts_nvfuser', 'cudagraphs', 'dynamo_accuracy_minifier_backend', 'dynamo_minifier_backend', 'eager', 'inductor', 'ipex', 'nvprims_aten', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'torchxla_trace_once', 'torchxla_trivial', 'ts', 'tvm']
```
After:
```
torch._dynamo.list_backends()
['aot_ts_nvfuser', 'cudagraphs', 'inductor', 'ipex', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'tvm']
```
Fixes https://github.com/pytorch/pytorch/issues/93733
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93878
Approved by: https://github.com/voznesenskym
2023-02-04 00:50:51 +00:00
Jason Ansel
ee2729890c
Refactor dynamo register_backend/BACKENDS ( #93389 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93389
Approved by: https://github.com/voznesenskym
2023-02-02 19:41:48 +00:00
Edward Z. Yang
6c93c3b58a
Save and restore functorch configuration in minified scripts ( #93853 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93853
Approved by: https://github.com/williamwen42
2023-02-02 03:09:46 +00:00
Edward Z. Yang
ca9ebf9e2b
Delete dynamo_import and inductor_import ( #93851 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93851
Approved by: https://github.com/albanD , https://github.com/jansel
2023-02-02 01:51:29 +00:00
Edward Z. Yang
207399cf5f
Add repro_forward_only for inference debugging ( #93856 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93856
Approved by: https://github.com/williamwen42
2023-02-01 22:03:13 +00:00
Edward Z. Yang
66fd99cc09
Use symbolic tracing_mode for aot repro with dynamic_shapes ( #93393 )
...
This is by no means a complete fix for broken aot symbolic
tracing, but it is definitely better than what we have right now.
More context: https://github.com/pytorch/pytorch/issues/93367
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93393
Approved by: https://github.com/SherlockNoMad , https://github.com/bdhirsh
2023-02-01 11:51:00 +00:00
Edward Z. Yang
08041c5264
Configurable repro_tolerance for same_two_models ( #93398 )
...
Fixes https://github.com/pytorch/pytorch/issues/93293
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93398
Approved by: https://github.com/SherlockNoMad
2023-02-01 01:41:48 +00:00
Edward Z. Yang
3bae5484d0
Typofix ( #93402 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93402
Approved by: https://github.com/albanD
2023-02-01 01:39:49 +00:00
Edward Z. Yang
76b683b008
Correctly propagate compiler kwargs to aot minifier ( #93308 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93308
Approved by: https://github.com/Chillee , https://github.com/voznesenskym
2023-01-31 20:25:27 +00:00
Edward Z. Yang
2a31c3589b
Report suppressed exception in minifier ( #93368 )
...
Suppressing exceptions is bad! If you're debugging PyTorch itself
you want to see the exception so you can do something about it.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93368
Approved by: https://github.com/Skylion007 , https://github.com/mlazos , https://github.com/bdhirsh
2023-01-31 19:31:50 +00:00
Jason Ansel
bbce4184be
Refactor inductor to use standard BACKENDS dict ( #92187 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92187
Approved by: https://github.com/desertfire
2023-01-17 04:05:43 +00:00
Bin Bao
55f0ed6dcd
[inductor] Fix an issue causing "Could not generate fp64 outputs" ( #92036 )
...
Summary: Fix an issue where the fp64 version of a model fails to run when convert_element_type
appears in the model. The failure can cause numerical differences to be
recognized as accuracy errors, since the fp64 baseline result is not
available, and thus distracts the Minifier from finding the real culprit for
an accuracy error.
See the discussion in https://github.com/pytorch/torchdynamo/issues/1812
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92036
Approved by: https://github.com/ngimel
2023-01-14 17:03:27 +00:00
Mark Saroufim
c7f32613ec
Find other temp directory for code cache if no /tmp ( #91701 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/2004
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91701
Approved by: https://github.com/anijain2305 , https://github.com/wconstab
2023-01-05 02:29:52 +00:00
Animesh Jain
a32916190d
buck-related minifier work ( #91215 )
...
Summary: Extending the minifier to generate a buck target
Test Plan: N/A
Differential Revision: D42173893
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91215
Approved by: https://github.com/bertmaher , https://github.com/ngimel
2022-12-22 19:33:50 +00:00
William Wen
289f06434c
[dynamo] check buffers when checking accuracy ( #91037 )
...
Tested by running `python benchmarks/dynamo/torchbench.py --accuracy --float32 -dcuda --output=inductor_torchbench_float32_training_cuda_performance.csv --training --inductor --no-skip --dashboard --only mobilenet_v2 --cold_start_latency` and breakpointing after the changes to inspect buffers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91037
Approved by: https://github.com/anijain2305
2022-12-20 13:57:25 +00:00
William Wen
86269852de
Serialize dynamo/inductor config for minifier ( #90501 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1965
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90501
Approved by: https://github.com/mlazos
2022-12-14 23:44:06 +00:00
Bert Maher
c318de4274
[dynamo] Get GPU names without calling nvidia-smi ( #90474 )
...
Believe it or not, inductor can sometimes be used on machines that
have CUDA GPUs but no nvidia-smi. Let's use torch APIs instead of subprocess.
Differential Revision: [D41841930](https://our.internmc.facebook.com/intern/diff/D41841930/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90474
Approved by: https://github.com/voznesenskym , https://github.com/anijain2305
2022-12-12 05:31:50 +00:00
William Wen
eb5b4c21e1
Deepcopy GraphModule in minifier ( #90401 )
...
Fixes https://github.com/pytorch/pytorch/issues/90397 . Remove deepcopy calls in minifier tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90401
Approved by: https://github.com/anijain2305 , https://github.com/mlazos
2022-12-08 23:59:05 +00:00
Ram Rachum
351d73b97f
Fix exception causes all over the codebase ( #90271 )
...
This is the continuation to #90134 and hopefully the final PR in this series.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90271
Approved by: https://github.com/kit1980
2022-12-07 04:29:00 +00:00
Michael Voznesensky
41c3b41b92
Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] ( #90039 )
...
After all of the preparatory commits, this is a subset of the
changes in https://github.com/pytorch/pytorch/pull/89392 that actually
change us to propagating fake tensors to backends.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
This is the merger of Ed's PR #89672 , which is a rewrite of an older PR of mine (#89392 ), with CI Fixes on top of it (#89773 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90039
Approved by: https://github.com/ezyang
2022-12-05 01:56:50 +00:00
PyTorch MergeBot
4648baa911
Revert "Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] ( #90039 )"
...
This reverts commit ef0c7ec958 .
Reverted https://github.com/pytorch/pytorch/pull/90039 on behalf of https://github.com/clee2000 due to broke xla tests ef0c7ec958 https://github.com/pytorch/pytorch/actions/runs/3606308473/jobs/6077646142
2022-12-04 21:57:30 +00:00
Richard Zou
4068c5467d
[Reland] Move functorch/_src to torch/_functorch ( #88756 ) ( #90091 )
...
This will be the last disruptive functorch internals change.
Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.
Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times
Test Plan:
- wait for tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90091
Approved by: https://github.com/anijain2305 , https://github.com/ezyang
2022-12-03 14:17:15 +00:00
Michael Voznesensky
ef0c7ec958
Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] ( #90039 )
...
After all of the preparatory commits, this is a subset of the
changes in https://github.com/pytorch/pytorch/pull/89392 that actually
change us to propagating fake tensors to backends.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
This is the merger of Ed's PR #89672 , which is a rewrite of an older PR of mine (#89392 ), with CI Fixes on top of it (#89773 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90039
Approved by: https://github.com/ezyang
2022-12-03 01:19:55 +00:00
William Wen
f4707ae004
Add arguments to collect_results ( #89611 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1901 . Test script:
```python
import copy
import torch
import torch._dynamo as dynamo
import torch._dynamo.config
dynamo.config.repro_after = "dynamo"
dynamo.config.repro_level = 4
def custom_backend(gm: torch.fx.GraphModule, example_inputs):
gm = copy.deepcopy(gm)
for node in gm.graph.nodes:
if len(node.args) > 1:
node.target = torch.add
node.args = (node.args[0], 0)
gm.recompile()
return gm
inp = torch.ones(5)
inp.requires_grad_(True)
@dynamo.optimize(custom_backend)
def foo(x):
x = x * x
return x.sum()
y = foo(inp)
print(y)
y.backward()
print(inp.grad)
```
Before, the script would finish but output an incorrect gradient. After the change, the accuracy minifier is triggered.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89611
Approved by: https://github.com/ezyang
2022-11-30 04:25:33 +00:00
PyTorch MergeBot
218d9c6e09
Revert "Move functorch/_src to torch/_functorch ( #88756 )"
...
This reverts commit 52bc5c1cfe .
Reverted https://github.com/pytorch/pytorch/pull/88756 on behalf of https://github.com/clee2000 due to broke imports in tests 52bc5c1cfe https://github.com/pytorch/pytorch/actions/runs/3574742513/jobs/6010814968 probably a landrace
2022-11-29 17:17:11 +00:00
Richard Zou
52bc5c1cfe
Move functorch/_src to torch/_functorch ( #88756 )
...
This will be the last disruptive functorch internals change.
Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.
Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times
Test Plan:
- wait for tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88756
Approved by: https://github.com/ezyang
2022-11-29 13:55:42 +00:00
Animesh Jain
8226a5d383
[minifier] Continue on assertion for accuracy minification ( #89739 )
...
During accuracy minification, the minifier can create graphs which cause assertion failures. This PR catches such assertions and lets the minifier move on, instead of getting stuck minifying this issue.
It is possible that such graphs point to some real, although unrelated, issue. So, we print the assertion to flag it and allow debugging if needed.
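The catch-and-continue behavior described above can be sketched roughly as follows (a minimal, hypothetical illustration; `try_candidate` and `run_candidate` are stand-in names, not the actual minifier API):

```python
def try_candidate(candidate, run_candidate):
    """Run one minification candidate, tolerating assertion failures.

    Hypothetical sketch: instead of letting an AssertionError abort the
    whole minification run, report it and move on to the next candidate.
    """
    try:
        return run_candidate(candidate)
    except AssertionError as exc:
        # Surface the assertion so a real-although-unrelated issue can
        # still be flagged and debugged later.
        print(f"Skipping candidate {candidate!r}: {exc}")
        return None
```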
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89739
Approved by: https://github.com/mlazos
2022-11-29 07:49:07 +00:00
Animesh Jain
2b522670d2
[dynamo] Minifier fixes for reproducing segfault ( #89712 )
...
Helped with minifying the segfault in https://github.com/pytorch/torchdynamo/issues/1928
Tests not really needed. It improves quality of life, as a segfault can occur anywhere (when CUDA_LAUNCH_BLOCKING is off)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89712
Approved by: https://github.com/mlazos , https://github.com/ngimel
2022-11-29 04:29:42 +00:00
Animesh Jain
30d9fb9157
[dynamo][reland] API Support for nn.Module ( #89113 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89113
Approved by: https://github.com/ezyang
2022-11-17 02:03:48 +00:00
William Wen
f5e2cb5249
Add comprehensive minifier tests ( #88022 )
...
Adds tests for https://github.com/pytorch/torchdynamo/issues/1241 .
To run: `pytest test/dynamo/test_minifier.py`.
Actually runs minifier launcher script and repro scripts, rather than just checking for existence of the minifier launcher script.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88022
Approved by: https://github.com/mlazos , https://github.com/anijain2305
2022-11-17 02:02:29 +00:00
PyTorch MergeBot
98bcb4acb6
Revert "[reland][dynamo] Better support for nn.Module ( #88959 )"
...
This reverts commit e950afc395 .
Reverted https://github.com/pytorch/pytorch/pull/88959 on behalf of https://github.com/malfet due to Broke `test_accuracy_issue1`
2022-11-13 16:21:14 +00:00
Animesh Jain
e950afc395
[reland][dynamo] Better support for nn.Module ( #88959 )
...
Relanding https://github.com/pytorch/pytorch/pull/88629
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88959
Approved by: https://github.com/msaroufim
2022-11-13 08:19:45 +00:00
PyTorch MergeBot
ae2c668cc0
Revert "[dynamo][api] Better support of torch.nn.Module ( #88629 )"
...
This reverts commit c83348597b .
Reverted https://github.com/pytorch/pytorch/pull/88629 on behalf of https://github.com/anijain2305 due to job failing on master https://github.com/pytorch/pytorch/actions/runs/3449914495/jobs/5758267231
2022-11-12 07:52:56 +00:00
PyTorch MergeBot
34641c4384
Revert "Add comprehensive minifier tests ( #88022 )"
...
This reverts commit 5ff600aa6e .
Reverted https://github.com/pytorch/pytorch/pull/88022 on behalf of https://github.com/wconstab due to Seems to be causing CI failures relating to minifier test and some /tmp/ path not existing
2022-11-12 05:16:41 +00:00
Animesh Jain
c83348597b
[dynamo][api] Better support of torch.nn.Module ( #88629 )
...
This is an API change, so please review carefully.
With this PR, torchdynamo returns an `OptimizedModule` class object, a subclass of `torch.nn.Module`, when asked to optimize a `nn.Module` object. Most of the methods are redirected to the original `nn.Module`, which is installed as `_mod` in the `OptimizedModule`.
This is helpful for many cases
```
mod = MockModule()
opt_mod = torch._dynamo.optimize()(mod)
print(opt_mod) # Works
opt_mod = opt_mod.to(device="cuda")
print(opt_mod) # Works
opt_mod(input) # Triggers recompile if necessary, earlier we were shedding the TorchDynamo wrapper
opt_mod.parameters() # Refers to the original module
```
Topics unclear to me
* I have overridden many methods to raise NotImplementedError. A careful review of those will be good.
* hooks
* For the optimized forward, should we call torchdynamo optimization on `__call__` or `forward`
* What else to test
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88629
Approved by: https://github.com/Chillee , https://github.com/jansel , https://github.com/msaroufim
2022-11-12 04:45:17 +00:00
William Wen
5ff600aa6e
Add comprehensive minifier tests ( #88022 )
...
Adds tests for https://github.com/pytorch/torchdynamo/issues/1241 .
To run: `pytest test/dynamo/test_minifier.py`.
Actually runs minifier launcher script and repro scripts, rather than just checking for existence of the minifier launcher script.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88022
Approved by: https://github.com/mlazos , https://github.com/anijain2305
2022-11-12 00:22:25 +00:00
Michael Lazos
5220d07d2c
Fix minifier accuracy msg ( #88515 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1809
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88515
Approved by: https://github.com/yanboliang , https://github.com/williamwen42
2022-11-04 23:26:44 +00:00
Ivan Yashchuk
0eea05b11e
Remove "prims_nvfuser" backend for TorchDynamo ( #88083 )
...
Removing "prims_nvfuser" backend according to the discussion in https://github.com/pytorch/torchdynamo/pull/1281#discussion_r979468355 .
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88083
Approved by: https://github.com/ezyang
2022-11-01 03:09:37 +00:00
Mengchi Zhang
9109ecf914
Even "nvcc not found" should be commented out ( #87959 )
...
Summary: Even "nvcc not found" should be commented out in minifier_launcher.py, because there could be a case where PyTorch/the minifier can find the CUDA path but nvcc is not explicitly included in an env variable like PATH.
Differential Revision: D40790023
cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87959
Approved by: https://github.com/anijain2305 , https://github.com/jianyuh
2022-10-30 18:22:17 +00:00
Michael Lazos
9691ba2dbd
Remove excess exception logging for minifier, cleanup backend failure exception format ( #87537 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1376
Ensures exceptions are printed only in one place, once.
implements some of the ideas from https://github.com/pytorch/torchdynamo/issues/1754
- Attaches a field to the exception which indicates that it's minified, a usage message is printed if this field is present
cc @jansel @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @lezcano @fdrocha
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87537
Approved by: https://github.com/anijain2305
2022-10-28 21:33:55 +00:00
Michael Lazos
44d7ba7efb
Fix debug dir bugs and minifier output directories ( #87682 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1758 , https://github.com/pytorch/torchdynamo/issues/1752
- minifier_launcher.py now dumps checkpoints to `<cwd>/checkpoints` when run
- a single debug directory is created per script invocation; assertion failures due to a missing directory will no longer occur
- torchinductor debug tracing will now correctly dump to the debug directory since no prior setup is needed (the directory was previously only initialized during dynamo tracing)
cc @jansel @lezcano @fdrocha @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87682
Approved by: https://github.com/ezyang
2022-10-25 21:55:28 +00:00
Edward Z. Yang
181b615b4e
Fix accuracy minifier ( #87606 )
...
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang @penguinwu
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87606
Approved by: https://github.com/anjali411 , https://github.com/anijain2305 , https://github.com/albanD , https://github.com/soumith , https://github.com/malfet
2022-10-24 17:27:17 +00:00
Michael Lazos
8461460d55
Unified debug directory for dynamo/inductor tools ( #87438 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1705
Fixes https://github.com/pytorch/torchdynamo/issues/1383
Adds a debug directory by default called `torchdynamo_debug` in the current working directory.
In the debug directory, for each run of dynamo (an enter and exit of optimize), a folder run_<timestamp> is created which contains any minifier/inductor/torchdynamo artifacts under respective folders.
Updated the minifier, record/replay, and inductor tracing to use this directory
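Under the conventions described above, the resulting layout might look something like this (a hypothetical sketch; the timestamp format and exact subfolder names are illustrative, not taken from the PR):

```
torchdynamo_debug/
└── run_2022_10_21_12_00_00/
    ├── minifier/
    ├── inductor/
    └── torchdynamo/
```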
cc @jansel @lezcano @fdrocha @soumith @voznesenskym @yanboliang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87438
Approved by: https://github.com/soumith
2022-10-22 03:43:11 +00:00
Edward Z. Yang
96691865b9
[dynamo] Unify raise_on_* config to suppress_errors and raise by default ( #87440 )
...
I noticed that a lot of bugs are being suppressed by torchdynamo's default
error suppression, and worse yet, there's no way to unsuppress them. After
discussion with voz and soumith, we decided that we will unify error suppression
into a single option (suppress_errors) and default suppression to False.
If your model used to work and no longer works, try TORCHDYNAMO_SUPPRESS_ERRORS=1
to bring back the old suppression behavior.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87440
Approved by: https://github.com/voznesenskym , https://github.com/albanD
2022-10-21 17:03:29 +00:00
Yanbo Liang
a91abedf0d
[Inductor] TorchInductor tracing fx_graph.py should import overrides ( #87271 )
...
Running the generated script would fail if there are ops like `philox_rand_like`.
cc @jansel @lezcano @fdrocha
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87271
Approved by: https://github.com/jansel
2022-10-20 21:59:12 +00:00
Zachary DeVito
1e4a274248
[dynamo] avoid popen.communicate() ( #87335 )
...
It seems like when popen.communicate() is used, it waits for all the
descendants of popen to close the stdin/stderr. However, if we
have worker processes running in the child, and the child segfaults,
those processes will stay alive until someone waitpid's the child.
Since those children have open handles to the stdin/stderr pipe,
communicate never returns.
This change just writes the output to temp files and directly calls
wait() on the child, which returns as soon as it dies.
cc @jansel @lezcano @fdrocha
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87335
Approved by: https://github.com/anijain2305 , https://github.com/voznesenskym
2022-10-20 17:28:27 +00:00