Commit Graph

73 Commits

Author SHA1 Message Date
Edward Z. Yang
17be65381d Do not use pickle to output config entries in repro scripts (#100354)
New output looks like:

```
torch._dynamo.config.dynamic_shapes = True
torch._dynamo.config.assume_static_by_default = False
torch._inductor.config.fallback_random = True
torch._inductor.config.triton.cudagraphs = True
```

instead of an unreadable pickle.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100354
Approved by: https://github.com/voznesenskym
2023-05-02 11:44:01 +00:00
Edward Z. Yang
2d8deffc1e Refactor repro/minifier into CLI; add analyze (#100226)
This is a two-part PR; I can split it if you really want me to.

The first part refactors the after-AOT repro/minifier scripts to come with a command-line interface. I maintain exact BC with the previous interface (so, e.g., you still get a repro.py and a run_minifier.py that do the same things as before), but each of these scripts now also takes command-line arguments which you can use to customize what actually happens. Check `run_repro` for full documentation on the arguments.

The second part of this is an implementation of `analyze` subcommand on the new CLI for any repro.

<img width="1277" alt="image" src="https://user-images.githubusercontent.com/13564/235045677-8545aab7-5e83-4813-bbec-47783dc60122.png">

This facility is oriented towards accuracy debugging. It does several things:

1. It will run your model twice and check for nondeterminism in inductor/float64, *even* on intermediate values (our benchmarking nondeterminism test only checks for nondeterminism on the final output). This makes it easy to localize which operator is nondeterministic.
2. It will run your compiled model side-by-side with eager and float64 variants, and then report when things diverge too far, as measured by RMSE delta from float64 (sketched below).

Importantly, it does all this without requiring every intermediate to be held in memory (which would cause an OOM on large repros, such as the one I tested this on).
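
A minimal sketch of the divergence test in point 2; the names and the threshold factor are illustrative, not the actual implementation:

```python
import torch

def rmse(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # root-mean-square error, computed in float64 to avoid precision artifacts
    return ((a.double() - b.double()) ** 2).mean().sqrt()

def diverges(compiled_out, eager_out, fp64_out, factor=4.0) -> bool:
    # flag the compiled result when its error against the float64 reference
    # is much larger than eager's own error against that reference
    return bool(rmse(compiled_out, fp64_out) > factor * rmse(eager_out, fp64_out))
```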

Some other minor improvements:

* MinifierTestBase now has an easy-to-comment-out spot that you can use to retain the temporary directory; good for debugging
* We print "running minifier" and "running repro" in MinifierTestBase to make it easier to tell where logs are coming from
* `same` takes an optional `log_error` argument which you can use to reroute the error logs when things mismatch
* counters["inductor"]["intermediate_hooks"] tracks the number of intermediate hooks we've codegen'ed; good for populating the tqdm interface
* torch.fx.Interpreter gets an official `boxed_run` interface which uses the boxed-arguments calling convention and doesn't retain inputs for longer than necessary (see the sketch after this list)
* torch.utils._content_store gets compute_tensor_metadata/read_tensor_metadata helper functions for computing tensor information without serializing it
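
A minimal sketch of the boxed calling convention on `torch.fx.Interpreter.boxed_run`; the toy function is illustrative:

```python
import torch
import torch.fx

def f(x):
    return x.relu() + 1

gm = torch.fx.symbolic_trace(f)
args = [torch.randn(4)]  # boxed convention: inputs arrive in a mutable list
out = torch.fx.Interpreter(gm).boxed_run(args)
# the interpreter empties the list, so inputs can be freed as soon as they
# are consumed rather than being kept alive for the whole run
assert len(args) == 0
```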

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100226
Approved by: https://github.com/bertmaher, https://github.com/bdhirsh, https://github.com/anijain2305
2023-05-01 11:12:38 +00:00
Animesh Jain
5f138a6b65 [minifier][after dynamo] clone inputs while retaining gradness (#100066)
Helps with minifying one failure in https://github.com/pytorch/pytorch/issues/98561
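
For context, the usual pattern for cloning an input while retaining its grad-ness looks roughly like this (a sketch, not necessarily the exact code in this PR):

```python
import torch

def clone_input(x: torch.Tensor) -> torch.Tensor:
    # detach-clone gives the repro a fresh leaf tensor; restoring
    # requires_grad keeps the backward pass exercisable when minifying
    y = x.detach().clone()
    y.requires_grad_(x.requires_grad)
    return y
```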

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100066
Approved by: https://github.com/ezyang
2023-04-26 21:31:18 +00:00
Jason Ansel
220712f4de Fix torch.compile() on a skipped module (#98894)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98894
Approved by: https://github.com/xw285cornell
2023-04-22 16:10:55 +00:00
Edward Z. Yang
881c57230d Move more stuff to after_aot (#99557)
Not sure why this didn't work first time around. Second time's a charm.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99557
Approved by: https://github.com/anijain2305
2023-04-21 16:20:40 +00:00
Edward Z. Yang
805a6dc8d2 Add an expect test for test_save_graph_repro (#99538)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99538
Approved by: https://github.com/anijain2305
2023-04-20 00:00:40 +00:00
Edward Z. Yang
b01edf45f8 Add typing to debug_utils and repro (#99452)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99452
Approved by: https://github.com/anijain2305
2023-04-19 16:00:19 +00:00
Edward Z. Yang
2e25fb5d55 Refactor debug_utils into after_aot and after_dynamo modules (#99450)
There are no code changes but I did take the opportunity to
reorder and group the functions once they were placed in their
respective modules.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99450
Approved by: https://github.com/anijain2305
2023-04-19 16:00:19 +00:00
Edward Z. Yang
c67c16bcd2 Switch calling convention back to real tensors (#99320)
Months ago, in order to get dynamic shapes working through to Dynamo backends, we changed the calling convention to pass fake tensors rather than real tensors as example inputs to backends. The motivation at the time was, well, backends shouldn't really be peeking at the real tensors when they are doing compilation, and so it would make more sense to hide the real tensors from backends. But there were a bunch of problems:

* This interacted poorly with our accuracy minifier design: accuracy minifier needs access to the real inputs in order to run the model and figure out what happens!
* The TensorRT backend required real inputs and we never figured out how to fix it.
* In practice, all the backends needed to detect if they were passed real tensors, and fakeify them anyway (certainly AOTAutograd does this)
* Parameters and inputs are treated non-uniformly: parameters had to be passed as real tensors, because CUDA graphs require knowing what the actual tensors are

Furthermore, there were some more problems discovered after the fact:

* Backends may want to optimize based on aspects of tensors which you cannot determine without having real tensors; e.g., the alignment of the data pointer

So, this PR decides that changing the calling convention was a bad idea, and switches back to passing real tensors. There is a problem though: AOTAutograd will perform fakeification, which means that in practice backends are still going to end up with fake tensors in the end anyway. I want to change this, but this will require some work with bdhirsh's upcoming AOTAutograd export refactor.
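
With real tensors restored as the calling convention, a backend that still wants fake tensors can fakeify its example inputs itself. A rough sketch, with the module path assumed from this era of the codebase:

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

def my_backend(gm: torch.fx.GraphModule, example_inputs):
    # fakeify the real example inputs so compilation logic never touches
    # real data, while callers (e.g. the accuracy minifier) keep the real ones
    fake_mode = FakeTensorMode()
    fake_inputs = [fake_mode.from_tensor(t) for t in example_inputs]
    # ... a real backend would analyze/compile gm against fake_inputs here ...
    return gm.forward  # this sketch just runs the graph eagerly

compiled_fn = torch.compile(lambda x: x * 2, backend=my_backend)
print(compiled_fn(torch.randn(3)))
```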

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99320
Approved by: https://github.com/voznesenskym
2023-04-19 12:15:52 +00:00
PyTorch MergeBot
ea50d4f146 Revert "Switch calling convention back to real tensors (#99320)"
This reverts commit 780922c24e.

Reverted https://github.com/pytorch/pytorch/pull/99320 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-04-19 09:44:06 +00:00
Edward Z. Yang
780922c24e Switch calling convention back to real tensors (#99320)
Months ago, in order to get dynamic shapes working through to Dynamo backends, we changed the calling convention to pass fake tensors rather than real tensors as example inputs to backends. The motivation at the time was, well, backends shouldn't really be peeking at the real tensors when they are doing compilation, and so it would make more sense to hide the real tensors from backends. But there were a bunch of problems:

* This interacted poorly with our accuracy minifier design: accuracy minifier needs access to the real inputs in order to run the model and figure out what happens!
* The TensorRT backend required real inputs and we never figured out how to fix it.
* In practice, all the backends needed to detect if they were passed real tensors, and fakeify them anyway (certainly AOTAutograd does this)
* Parameters and inputs are treated non-uniformly: parameters had to be passed as real tensors, because CUDA graphs require knowing what the actual tensors are

Furthermore, there were some more problems discovered after the fact:

* Backends may want to optimize based on aspects of tensors which you cannot determine without having real tensors; e.g., the alignment of the data pointer

So, this PR decides that changing the calling convention was a bad idea, and switches back to passing real tensors. There is a problem though: AOTAutograd will perform fakeification, which means that in practice backends are still going to end up with fake tensors in the end anyway. I want to change this, but this will require some work with bdhirsh's upcoming AOTAutograd export refactor.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99320
Approved by: https://github.com/voznesenskym
2023-04-18 02:09:57 +00:00
Edward Z. Yang
2471eac618 Make run_fwd_maybe_bwd work with int inputs (#99365)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99365
Approved by: https://github.com/voznesenskym
2023-04-18 02:05:26 +00:00
PyTorch MergeBot
629377ea8b Revert "Replace _dynamo.config with an object instead of module (#96455)"
This reverts commit 420104a886.

Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Han Qi
420104a886 Replace _dynamo.config with an object instead of module (#96455)
Summary: Replace _dynamo.config with an object instead of a module.

Current usage patterns of setting and reading fields on config work unchanged.

Only two changes are needed going forward (see the sketch after this list):

1. `import torch._dynamo.config` will no longer work. However, just doing `import torch._dynamo` is sufficient to access dynamo config as `torch._dynamo.config`.

2. Files inside the `_dynamo` folder need to access config via `from torch._dynamo.config_util import config` instead of `from torch._dynamo import config`, because `_dynamo/__init__.py` imports some of those files and the old import would be circular.
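
A sketch of the two patterns above; `config_util` is the module name given in this message and may not match later releases, and `verbose` is just an example field:

```python
import torch._dynamo                 # enough to reach dynamo config

torch._dynamo.config.verbose = True  # setting fields works unchanged
print(torch._dynamo.config.verbose)  # reading fields works unchanged

# inside files under torch/_dynamo/, per point 2:
# from torch._dynamo.config_util import config
```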


Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Edward Z. Yang
b09722f540 Convert logging f-strings to use % format, part two (#98700)
This hits multi-line logging strings
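
For context, the shape of the transformation (a hypothetical logging call; the codemod applied this pattern across the codebase):

```python
import logging

log = logging.getLogger(__name__)
n = 3

# before: the f-string is formatted even when DEBUG logging is disabled
log.debug(f"compiled graph with {n} inputs")

# after: %-style arguments are only interpolated if the record is emitted
log.debug("compiled graph with %d inputs", n)
```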

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98700
Approved by: https://github.com/voznesenskym
2023-04-10 12:19:31 +00:00
Edward Z. Yang
9a8f71f23e Convert logging f-strings to use % format (#98697)
Codemod done with
https://gist.github.com/ezyang/2e8b0463cdc6be278478495b23ff0530 with
assistance from ChatGPT.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98697
Approved by: https://github.com/voznesenskym
2023-04-10 12:19:31 +00:00
Animesh Jain
cdb32dad3d [minifier] cuda.synchronize to better detect IMA (#97962)
Sometimes an IMA (illegal memory access) can trigger much later than the kernel invocation that caused it, and so escape the minifier. Calling cuda.synchronize fixes this issue.
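
A sketch of the idea; the helper is hypothetical, and the real change lives in the minifier's run paths:

```python
import torch

def run_and_sync(mod, args):
    out = mod(*args)
    # CUDA kernel launches are asynchronous, so an illegal memory access may
    # only surface during some later call; synchronizing forces the error to
    # be raised while the minifier is still testing the offending node
    torch.cuda.synchronize()
    return out
```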

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97962
Approved by: https://github.com/mlazos
2023-03-30 15:46:52 +00:00
Yanbo Liang
b23cfe5465 [Inductor] Remove fb custom ops dependency (#97907)
As it conflicts with other dependencies

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97907
Approved by: https://github.com/anijain2305
2023-03-29 23:53:21 +00:00
Michael Lazos
f6bafcde6f Added current buck target as minifier dep (#97183)
Summary: Have minifier include the current buck target as a dependency to make sure all deps are included.

Test Plan: TORCH_COMPILE_DEBUG_DIR="." buck2 run mode/dev-nosan //caffe2/test/inductor:minifier_smoke

Differential Revision: D44231209

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97183
Approved by: https://github.com/anijain2305
2023-03-22 08:30:53 +00:00
ZhongYingMatrix
1c40ce4f19 handle SymInt shape/input when debugging in dynamic shape (#96645)
Handle SymInt shapes/inputs when debugging with dynamic shapes. Fixes #96272

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96645
Approved by: https://github.com/bdhirsh
2023-03-20 18:19:03 +00:00
Michael Lazos
203890e1e0 Properly show buck target to run (#96089)
Summary: Makes the debug dir location configurable with the TORCH_COMPILE_DEBUG_DIR env var

Test Plan: TORCH_COMPILE_DEBUG_DIR="." buck2 run mode/dev-nosan //caffe2/test/inductor:minifier_smoke
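
A sketch of the intended usage; the `torch_compile_debug` output directory name is an assumption about where the artifacts land:

```python
import os

# set before anything is compiled so debug artifacts land under "."
os.environ["TORCH_COMPILE_DEBUG_DIR"] = "."

import torch  # subsequent torch.compile debug output should now go under
              # ./torch_compile_debug/ (directory name assumed)
```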

Reviewed By: bertmaher

Differential Revision: D43639955

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96089
Approved by: https://github.com/bertmaher
2023-03-07 22:52:27 +00:00
Edward Z. Yang
20dfce591c Add support for Inductor + symbolic shapes + training (#93059)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93059
Approved by: https://github.com/ezyang
2023-02-28 22:44:31 +00:00
Kazuaki Ishizaki
46385b3e48 Fix typos under torch/_dynamo directory (#95599)
This PR fixes typos in comments and messages of `.py` files under `torch/_dynamo` directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95599
Approved by: https://github.com/ezyang
2023-02-28 03:44:24 +00:00
Jason Ansel
e071d72f3c Tag dynamo backends as debug/experimental (#93878)
Hides debug/experimental backends by default.

Before:
```
torch._dynamo.list_backends()
['aot_eager', 'aot_eager_decomp_partition', 'aot_torchxla_trace_once', 'aot_torchxla_trivial', 'aot_ts', 'aot_ts_nvfuser', 'cudagraphs', 'dynamo_accuracy_minifier_backend', 'dynamo_minifier_backend', 'eager', 'inductor', 'ipex', 'nvprims_aten', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'torchxla_trace_once', 'torchxla_trivial', 'ts', 'tvm']
```

After:
```
torch._dynamo.list_backends()
['aot_ts_nvfuser', 'cudagraphs', 'inductor', 'ipex', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'tvm']
```
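
The hiding is tag-based. A sketch of opting back in, where the `exclude_tags` keyword is an assumption from the accompanying registry refactor:

```python
import torch._dynamo

# default call hides backends tagged "debug" or "experimental"
print(torch._dynamo.list_backends())

# passing an empty tuple should list everything again (assumed keyword)
print(torch._dynamo.list_backends(exclude_tags=()))
```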

Fixes https://github.com/pytorch/pytorch/issues/93733

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93878
Approved by: https://github.com/voznesenskym
2023-02-04 00:50:51 +00:00
Jason Ansel
ee2729890c Refactor dynamo register_backend/BACKENDS (#93389)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93389
Approved by: https://github.com/voznesenskym
2023-02-02 19:41:48 +00:00
Edward Z. Yang
6c93c3b58a Save and restore functorch configuration in minified scripts (#93853)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93853
Approved by: https://github.com/williamwen42
2023-02-02 03:09:46 +00:00
Edward Z. Yang
ca9ebf9e2b Delete dynamo_import and inductor_import (#93851)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93851
Approved by: https://github.com/albanD, https://github.com/jansel
2023-02-02 01:51:29 +00:00
Edward Z. Yang
207399cf5f Add repro_forward_only for inference debugging (#93856)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93856
Approved by: https://github.com/williamwen42
2023-02-01 22:03:13 +00:00
Edward Z. Yang
66fd99cc09 Use symbolic tracing_mode for aot repro with dynamic_shapes (#93393)
This is by no means a complete fix for broken aot symbolic
tracing, but it is definitely better than what we have right now.

More context: https://github.com/pytorch/pytorch/issues/93367

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93393
Approved by: https://github.com/SherlockNoMad, https://github.com/bdhirsh
2023-02-01 11:51:00 +00:00
Edward Z. Yang
08041c5264 Configurable repro_tolerance for same_two_models (#93398)
Fixes https://github.com/pytorch/pytorch/issues/93293

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93398
Approved by: https://github.com/SherlockNoMad
2023-02-01 01:41:48 +00:00
Edward Z. Yang
3bae5484d0 Typofix (#93402)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93402
Approved by: https://github.com/albanD
2023-02-01 01:39:49 +00:00
Edward Z. Yang
76b683b008 Correctly propagate compiler kwargs to aot minifier (#93308)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93308
Approved by: https://github.com/Chillee, https://github.com/voznesenskym
2023-01-31 20:25:27 +00:00
Edward Z. Yang
2a31c3589b Report suppressed exception in minifier (#93368)
Suppressing exceptions is bad!  If you're debugging PyTorch itself
you want to see the exception so you can do something about it.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93368
Approved by: https://github.com/Skylion007, https://github.com/mlazos, https://github.com/bdhirsh
2023-01-31 19:31:50 +00:00
Jason Ansel
bbce4184be Refactor inductor to use standard BACKENDS dict (#92187)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92187
Approved by: https://github.com/desertfire
2023-01-17 04:05:43 +00:00
Bin Bao
55f0ed6dcd [inductor] Fix an issue causing "Could not generate fp64 outputs" (#92036)
Summary: Fix an issue where the fp64 version of a model fails to run ("Could not generate fp64 outputs") when convert_element_type appears in the model. When the fp64 baseline result is unavailable, ordinary numerical differences can be recognized as accuracy errors, which distracts the minifier from finding the real culprit for an accuracy error.

See the discussion in https://github.com/pytorch/torchdynamo/issues/1812
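
For context, the fp64 baseline comes from running a double-precision copy of the model; roughly (a sketch, not the actual helper):

```python
import copy
import torch

def fp64_reference(gm, inputs):
    # run a float64 copy of the graph to get a high-precision baseline
    gm64 = copy.deepcopy(gm).to(torch.float64)
    inputs64 = [
        x.to(torch.float64)
        if isinstance(x, torch.Tensor) and x.is_floating_point()
        else x
        for x in inputs
    ]
    # if this call fails, there is no baseline and ordinary float32 noise
    # can be misclassified as an accuracy error
    return gm64(*inputs64)
```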

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92036
Approved by: https://github.com/ngimel
2023-01-14 17:03:27 +00:00
Mark Saroufim
c7f32613ec Find other temp directory for code cache if no /tmp (#91701)
Fixes https://github.com/pytorch/torchdynamo/issues/2004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91701
Approved by: https://github.com/anijain2305, https://github.com/wconstab
2023-01-05 02:29:52 +00:00
Animesh Jain
a32916190d buck-related minifier work (#91215)
Summary: Extending the minifier to generate a buck target

Test Plan: N/A

Differential Revision: D42173893

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91215
Approved by: https://github.com/bertmaher, https://github.com/ngimel
2022-12-22 19:33:50 +00:00
William Wen
289f06434c [dynamo] check buffers when checking accuracy (#91037)
Tested by running `python benchmarks/dynamo/torchbench.py --accuracy --float32 -dcuda --output=inductor_torchbench_float32_training_cuda_performance.csv --training --inductor --no-skip --dashboard --only mobilenet_v2 --cold_start_latency` and breakpointing after the changes to inspect buffers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91037
Approved by: https://github.com/anijain2305
2022-12-20 13:57:25 +00:00
William Wen
86269852de Serialize dynamo/inductor config for minifier (#90501)
Fixes https://github.com/pytorch/torchdynamo/issues/1965

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90501
Approved by: https://github.com/mlazos
2022-12-14 23:44:06 +00:00
Bert Maher
c318de4274 [dynamo] Get GPU names without calling nvidia-smi (#90474)
Believe it or not, inductor can sometimes be used on machines that
have CUDA GPUs but no nvidia-smi.  Let's use torch APIs instead of subprocess.
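
The torch-API route looks roughly like this:

```python
import torch

if torch.cuda.is_available():
    gpu_names = [
        torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())
    ]
    print(gpu_names)
```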

Differential Revision: [D41841930](https://our.internmc.facebook.com/intern/diff/D41841930)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90474
Approved by: https://github.com/voznesenskym, https://github.com/anijain2305
2022-12-12 05:31:50 +00:00
William Wen
eb5b4c21e1 Deepcopy GraphModule in minifier (#90401)
Fixes https://github.com/pytorch/pytorch/issues/90397. Remove deepcopy calls in minifier tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90401
Approved by: https://github.com/anijain2305, https://github.com/mlazos
2022-12-08 23:59:05 +00:00
Ram Rachum
351d73b97f Fix exception causes all over the codebase (#90271)
This is the continuation to #90134 and hopefully the final PR in this series.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90271
Approved by: https://github.com/kit1980
2022-12-07 04:29:00 +00:00
Michael Voznesensky
41c3b41b92 Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] (#90039)
After all of the preparatory commits, this is a subset of the
changes in https://github.com/pytorch/pytorch/pull/89392 that actually
switches us to propagating fake tensors to backends.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

This is the merger of Ed's PR #89672, which is a rewrite of an older PR of mine (#89392), with CI Fixes on top of it (#89773)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90039
Approved by: https://github.com/ezyang
2022-12-05 01:56:50 +00:00
PyTorch MergeBot
4648baa911 Revert "Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] (#90039)"
This reverts commit ef0c7ec958.

Reverted https://github.com/pytorch/pytorch/pull/90039 on behalf of https://github.com/clee2000 due to broke xla tests ef0c7ec958 https://github.com/pytorch/pytorch/actions/runs/3606308473/jobs/6077646142
2022-12-04 21:57:30 +00:00
Richard Zou
4068c5467d [Reland] Move functorch/_src to torch/_functorch (#88756) (#90091)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90091
Approved by: https://github.com/anijain2305, https://github.com/ezyang
2022-12-03 14:17:15 +00:00
Michael Voznesensky
ef0c7ec958 Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] (#90039)
After all of the preparatory commits, this is a subset of the
changes in https://github.com/pytorch/pytorch/pull/89392 that actually
switches us to propagating fake tensors to backends.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

This is the merger of Ed's PR #89672, which is a rewrite of an older PR of mine (#89392), with CI Fixes on top of it (#89773)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90039
Approved by: https://github.com/ezyang
2022-12-03 01:19:55 +00:00
William Wen
f4707ae004 Add arguments to collect_results (#89611)
Fixes https://github.com/pytorch/torchdynamo/issues/1901. Test script:
```python
import copy

import torch
import torch._dynamo as dynamo
import torch._dynamo.config

dynamo.config.repro_after = "dynamo"
dynamo.config.repro_level = 4

def custom_backend(gm: torch.fx.GraphModule, example_inputs):
    gm = copy.deepcopy(gm)
    for node in gm.graph.nodes:
        if len(node.args) > 1:
            node.target = torch.add
            node.args = (node.args[0], 0)
    gm.recompile()
    return gm

inp = torch.ones(5)
inp.requires_grad_(True)

@dynamo.optimize(custom_backend)
def foo(x):
    x = x * x
    return x.sum()

y = foo(inp)
print(y)
y.backward()
print(inp.grad)
```
Before this change, the script finishes but outputs an incorrect gradient; after it, the accuracy minifier is triggered.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89611
Approved by: https://github.com/ezyang
2022-11-30 04:25:33 +00:00
PyTorch MergeBot
218d9c6e09 Revert "Move functorch/_src to torch/_functorch (#88756)"
This reverts commit 52bc5c1cfe.

Reverted https://github.com/pytorch/pytorch/pull/88756 on behalf of https://github.com/clee2000 due to broke imports in tests 52bc5c1cfe https://github.com/pytorch/pytorch/actions/runs/3574742513/jobs/6010814968 probably a landrace
2022-11-29 17:17:11 +00:00
Richard Zou
52bc5c1cfe Move functorch/_src to torch/_functorch (#88756)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88756
Approved by: https://github.com/ezyang
2022-11-29 13:55:42 +00:00
Animesh Jain
8226a5d383 [minifier] Continue on assertion for accuracy minification (#89739)
During accuracy minification, the minifier can create graphs that cause assertion failures. This PR catches such assertions and lets the minifier move on, instead of getting stuck minifying this issue.

It is possible that such graphs point to some real, albeit unrelated, issue, so we print the assertion to flag it for debugging if needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89739
Approved by: https://github.com/mlazos
2022-11-29 07:49:07 +00:00