Commit Graph

60 Commits

Author SHA1 Message Date
PyTorch MergeBot
75462fd870 Revert "[1/N] Dynamo skipfiles refactor (#109567)"
This reverts commit f8e0ebec8c.

Reverted https://github.com/pytorch/pytorch/pull/109567 on behalf of https://github.com/huydhn due to Many jobs are failing in trunk after this with FILENAME_ALLOWLIST is not defined error f8e0ebec8c. This looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/109567#issuecomment-1738344950))
2023-09-28 02:22:22 +00:00
Yanbo Liang
f8e0ebec8c [1/N] Dynamo skipfiles refactor (#109567)
This is 1/N of the dynamo skipfiles/allowed_functions refactor; the major changes in this PR include:
* Refactor & define the [skipfiles rules](https://github.com/pytorch/pytorch/pull/109567/files#diff-5aa3ce9db729bf0901ea97a5d3cc51924cc8575d9c516c1c8f572a35de92544aR56) and interface.
* For every ```skipfiles.check```, we return both the check result and the skip/inline reason, and log them for debugging (see the sketch below).
* We found several latent issues/bugs and incorrect implementations in the codebase; I'm planning to fix them in follow-up PRs to keep the refactor decoupled from the bug fixes.
* More details in the inline comments.
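A minimal illustrative sketch of the check-plus-reason idea; the ```SkipResult``` type, ```check_file``` helper, and prefix lists below are hypothetical and are not the actual ```torch._dynamo.skipfiles``` interface.

```
from dataclasses import dataclass

@dataclass
class SkipResult:
    skipped: bool
    reason: str

# hypothetical rule tables; the real rules live in torch/_dynamo/skipfiles.py
FORCE_INLINE_PREFIXES = ("torch/_dynamo/polyfill",)
FORCE_SKIP_PREFIXES = ("torch/_dynamo/",)

def check_file(filename: str) -> SkipResult:
    """Return both the skip decision and the reason so callers can log it."""
    if filename.startswith(FORCE_INLINE_PREFIXES):
        return SkipResult(False, "inlined: matched FORCE_INLINE_PREFIXES")
    if filename.startswith(FORCE_SKIP_PREFIXES):
        return SkipResult(True, "skipped: matched FORCE_SKIP_PREFIXES")
    return SkipResult(False, "inlined by default")

print(check_file("torch/_dynamo/eval_frame.py"))
```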

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109567
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/anijain2305
2023-09-28 01:21:59 +00:00
Michael Voznesensky
b123fd168a Higher order op for preserving leaf functions through trace, particularly for getting user defined hooks to compiled autograd (#109690)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109690
Approved by: https://github.com/ezyang
2023-09-27 20:47:15 +00:00
Kimish Patel
eb67c452c8 [Quant] Add DQ duplication pass (#107900)
Summary:
During the convert step, observers are first replaced by a Q-DQ pair. In some
scenarios, like the following, the output DQ has a fan-out.

                 ---> OP2 -> Q -> DQ
                /
OP -> Q -> DQ -
                \
                 ---> OP3 -> Q -> DQ

If either OP2 or OP3 is configured to be quantized, then its input is
expected to be quantized. In that case, the quantized equivalent of a
pattern that the quantizer asked to be quantized should look like
[DQ -> {pattern} -> Q]. However, in a scenario like the above, where the DQ
node is shared between multiple "quantized" patterns, the boundary of each
"quantized" pattern is not clear, because the DQ now belongs to multiple
quantized patterns.

This poses challenges for:
- Porting metadata: it is unclear which "quantized" partition this DQ node belongs to.
- Quantized representation: it likewise needs to identify a self-contained
quantized pattern that can be replaced by an equivalent pattern capturing
the compute in the quantized precision.

This pass resolves the ambiguity by duplicating the shared DQ node so that
each consuming pattern gets its own copy (see the sketch below).
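A hedged sketch of the duplication idea on a torch.fx graph, assuming DQ nodes are ```call_function``` nodes whose target name contains "dequantize"; this illustrates the approach rather than the actual pass.

```
from torch.fx import GraphModule

def duplicate_shared_dq(gm: GraphModule) -> GraphModule:
    for node in list(gm.graph.nodes):
        is_dq = node.op == "call_function" and "dequantize" in str(node.target)
        users = list(node.users)
        if not is_dq or len(users) <= 1:
            continue
        # keep the original DQ for the first user; give every other user its own copy
        for user in users[1:]:
            with gm.graph.inserting_after(node):
                dq_copy = gm.graph.node_copy(node)
            user.replace_input_with(node, dq_copy)
    gm.graph.lint()
    gm.recompile()
    return gm
```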

Test Plan:
test_duplicate_dq_pass

Differential Revision: [D48663147](https://our.internmc.facebook.com/intern/diff/D48663147)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107900
Approved by: https://github.com/jerryzh168, https://github.com/andrewor14, https://github.com/leslie-fang-intel
ghstack dependencies: #107105, #107106, #107899
2023-09-02 06:20:03 +00:00
Wanchao Liang
a29b9101fa [dynamo] fix dynamo + DTensor to work with 2d (#108329)
Pair-debugged with @wconstab; we found issues on both the dynamo side and the
TP's FSDP extension side. This PR fixes the dynamo + DTensor integration
so that the current graph-break FSDP can work with tensor parallel, by moving
the torch.compile call to after FSDP wrapping.
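A hedged single-process sketch of the ordering this relies on; the FSDP/TP wrapping itself is elided because it needs an initialized process group, so the snippet only shows that ```torch.compile``` is applied to the already-wrapped module.

```
import torch
import torch.nn as nn

model = nn.Linear(8, 8)
# 1) apply the parallelism wrappers here (tensor parallel, then FSDP) to the eager module
# 2) only afterwards hand the wrapped module to torch.compile
compiled = torch.compile(model)
print(compiled(torch.randn(2, 8)).shape)
```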

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108329
Approved by: https://github.com/Skylion007, https://github.com/wconstab
2023-08-31 22:46:26 +00:00
ydwu4
49e964cad6 Automatically turn on dynamo in cond (#108028)
A replacement of https://github.com/pytorch/pytorch/pull/107932.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108028
Approved by: https://github.com/zou3519
ghstack dependencies: #108025, #108026, #108027
2023-08-28 10:16:41 +00:00
Tugsbayasgalan Manlaibaatar
20c5add133 [export] Refactor constrain_as_value and constrain_as_size (#106591)
Some notable changes:
1. `constrain_as_size` now allows the min value to be less than 2, since it will unconditionally assume min >= 2 for compiler purposes. Instead, we add an additional check to make sure the max value is always greater than 2.
2. Previously, we would runtime-assert on the unbacked symint's value range, which was always [2, max]. I modified this logic to assert on [0, max] unless the user explicitly specifies the min range.
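A hedged usage sketch of what the new range means for a caller; the import path below is an assumption (these helpers have moved across releases), so treat the snippet as illustrative only.

```
import torch
from torch._export.constraints import constrain_as_size  # assumed path for this era

def make_buffer(lengths: torch.Tensor) -> torch.Tensor:
    n = int(lengths.max().item())          # data-dependent value; unbacked symint under export
    constrain_as_size(n, min=0, max=1024)  # runtime assert now covers [0, max]
    return torch.zeros(n)

print(make_buffer(torch.tensor([3, 7, 5])).shape)
```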

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106591
Approved by: https://github.com/gmagogsfm, https://github.com/ezyang
2023-08-15 05:41:43 +00:00
Zhengxu Chen
547ccae0db [export] Support preserving calling convention to some modules. (#106798)
Summary: APS uses this feature to swap out some submodules after unflattening.

Test Plan: test_export_preserve_signature

Differential Revision: D48154341

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106798
Approved by: https://github.com/tugsbayasgalan
2023-08-11 21:17:45 +00:00
PyTorch MergeBot
745d29b0cc Revert "[export] Refactor constrain_as_value and constrain_as_size (#106591)"
This reverts commit 18989890bf.

Reverted https://github.com/pytorch/pytorch/pull/106591 on behalf of https://github.com/izaitsevfb due to Breaks inductor test on trunk ([comment](https://github.com/pytorch/pytorch/pull/106591#issuecomment-1675069091))
2023-08-11 16:37:47 +00:00
Tugsbayasgalan Manlaibaatar
18989890bf [export] Refactor constrain_as_value and constrain_as_size (#106591)
Some notable changes:
1. `constrain_as_size` now allows the min value to be less than 2, since it will unconditionally assume min >= 2 for compiler purposes. Instead, we add an additional check to make sure the max value is always greater than 2.
2. Previously, we would runtime-assert on the unbacked symint's value range, which was always [2, max]. I modified this logic to assert on [0, max] unless the user explicitly specifies the min range.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106591
Approved by: https://github.com/gmagogsfm, https://github.com/ezyang
2023-08-11 05:29:22 +00:00
kshitij12345
cce2c52b0b [pt2] support vmap (#101707)
Teach dynamo about `vmap`
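
A small hedged example of the kind of program this enables: torch.compile tracing through `torch.func.vmap` instead of graph-breaking on it.

```
import torch
from torch.func import vmap

@torch.compile
def batched_dot(x, y):
    return vmap(torch.dot)(x, y)

print(batched_dot(torch.randn(4, 3), torch.randn(4, 3)).shape)  # torch.Size([4])
```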

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101707
Approved by: https://github.com/zou3519
2023-08-09 03:39:33 +00:00
Kshiteej K
af78e139a8 [functorch] fix dynamo support for functorch.grad (#106610)
Ref: https://github.com/pytorch/pytorch/pull/106475#discussion_r1282384503

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106610
Approved by: https://github.com/zou3519
2023-08-07 17:44:49 +00:00
Michael Voznesensky
8549abc347 Grab bag of DTensor enablement stuff (Enable whole graph capture for DTensor) (#105787)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105787
Approved by: https://github.com/ezyang
2023-07-30 00:17:45 +00:00
Jerry Zhang
3a77f9aaaf [quant][api] Move torch.ao.quantization.pt2e.quantizer to torch.ao.quantization.quantizer (#105885)
Summary: Move quantizer to torch.ao.quantization to make it a public API, since pt2e is a folder for implementations.

Test Plan:
CIs

sanity check: "buck test //executorch/backends/xnnpack/test:test_xnnpack_quantized_models -- test_resnet18"

Differential Revision: D47727838

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105885
Approved by: https://github.com/andrewor14
2023-07-26 18:20:09 +00:00
PyTorch MergeBot
6dd4b99ec2 Revert "Disable torchrec/sparse from top-level Dynamo tracing (#105733)"
This reverts commit 60d5efdb15.

Reverted https://github.com/pytorch/pytorch/pull/105733 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/105733#issuecomment-1650931609))
2023-07-26 03:44:47 +00:00
Edward Z. Yang
60d5efdb15 Disable torchrec/sparse from top-level Dynamo tracing (#105733)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105733
Approved by: https://github.com/voznesenskym
2023-07-22 02:00:36 +00:00
Jerry Zhang
dff4e034b8 [quant][pt2e][be] Rename qnnpack quantizer to xnnpack quantizer (#105551)
Summary: att

Test Plan: sandcastle CI and OSS CI

Reviewed By: andrewor14

Differential Revision: D47422894

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105551
Approved by: https://github.com/andrewor14
2023-07-20 03:52:40 +00:00
Jerry Zhang
7b4d080496 [quant][pt2e] Rename _pt2e to pt2e (#104668)
Summary:
X-link: https://github.com/pytorch/executorch/pull/3

att

Test Plan: Imported from OSS

Differential Revision: D47202807

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104668
Approved by: https://github.com/andrewor14
2023-07-15 06:34:17 +00:00
Andrew Or
4b29829ece [quant][pt2] Fix QAT convert for mobilenetv2 (#104110)
Summary:
QAT convert for mobilenetv2 was previously not working
because we incorrectly applied dropout during eval as well as
training. This is because, for exported models, model.eval() does
not change the behavior of dropout, unlike models with torch ops.
This commit simulates the effects of model.eval() for exported
models as well by replacing the aten dropout pattern before eval.
As of this commit, end-to-end QAT numerics now match for
mobilenetv2 between FX and PT2.
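
A hedged sketch of the idea on an exported aten-level graph: flip the training flag on the aten dropout call by hand (the real pass uses the subgraph rewriter and handles the full pattern; the helper name here is illustrative).

```
import torch
from torch.fx import GraphModule

def move_exported_dropout_to_eval(gm: GraphModule) -> GraphModule:
    for node in gm.graph.nodes:
        # aten.dropout(input, p, train) -> force train=False to get eval numerics
        if node.op == "call_function" and node.target == torch.ops.aten.dropout.default:
            args = list(node.args)
            if len(args) == 3:
                args[2] = False
                node.args = tuple(args)
    gm.recompile()
    return gm
```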

Test Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2

Differential Revision: D46750343

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104110
Approved by: https://github.com/jerryzh168
2023-07-11 18:42:42 +00:00
Jerry Zhang
c98896b76f [quant][pt2e] Add more precise representation for quantized add (#104130)
Summary:
The planned e2e for quantization in pytorch 2.0 export is the following:

float_model -> prepare_pt2e -> calibration -> convert_pt2e -> ...

inside convert_pt2e, we will first produce a q/dq representation of the quantized model, similar to the previous output of
convert_to_reference_fx in fx graph mode quantization:

```
torch.ops.quantized_decomposed.dequantize_per_tensor -> torch.ops.aten.add -> torch.ops.quantized_decomposed.quantize_per_tensor
torch.ops.quantized_decomposed.dequantize_per_tensor   /
```

Then we'll rewrite the above into a more precise representation that expresses the intent directly: here we actually
want to do int8 addition, instead of simulating it with fp32 operations. The representation for
quantized add is:

```
def quantized_add(x_i8, x_scale, x_zero_point, y_i8, y_scale, y_zero_point, out_scale, out_zero_point):
    # rescale both int8 inputs into the output quantization domain
    x = (x_scale / out_scale) * x_i8
    y = (y_scale / out_scale) * y_i8
    out = x + y
    # remove both input zero-point contributions, then shift by the output zero point
    out -= (x_zero_point * x_scale + y_zero_point * y_scale) / out_scale
    out += out_zero_point
    return out
```

Test Plan:
```
buck2 test caffe2/test:quantization_pt2e -- --exact 'caffe2/test:quantization_pt2e - test_representation_add (quantization.pt2e.test_quantize_pt2e.TestQuantizePT2E)'
```

Reviewed By: kimishpatel

Differential Revision: D45628032

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104130
Approved by: https://github.com/kimishpatel
2023-06-27 20:11:30 +00:00
Animesh Jain
75dab587ef [dynamo] FSDP + AC + torch.compile (#103953)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103953
Approved by: https://github.com/wanchaol
2023-06-24 01:40:56 +00:00
kshitij12345
d552c271db [pt2] grad support (#102264)
Teach dynamo about grad
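
A small hedged example of what this enables: torch.compile tracing through `torch.func.grad` instead of graph-breaking on it.

```
import torch
from torch.func import grad

@torch.compile
def dsin(x):
    return grad(torch.sin)(x)

print(dsin(torch.tensor(0.0)))  # cos(0) = 1
```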

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102264
Approved by: https://github.com/zou3519
2023-06-21 10:13:09 +00:00
PyTorch MergeBot
e737a8486f Revert "[pt2] grad support (#102264)"
This reverts commit 85b83954c8.

Reverted https://github.com/pytorch/pytorch/pull/102264 on behalf of https://github.com/huydhn due to This is failing in trunk 85b83954c8 and looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/102264#issuecomment-1600001309))
2023-06-21 03:02:55 +00:00
kshitij12345
85b83954c8 [pt2] grad support (#102264)
Teach dynamo about grad

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102264
Approved by: https://github.com/zou3519
2023-06-21 01:37:08 +00:00
Zhengxu Chen
26bf8894b6 [export] Replicate exportdb examples and tests in oss. (#102769)
Summary: Initial work to copy source to OSS for exportdb and make sure tests can run properly.

Test Plan: test_export

Differential Revision: D46369152

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102769
Approved by: https://github.com/angelayi
2023-06-04 20:01:57 +00:00
Michael Lazos
c75e064dd6 Disallow _foreach_utils.py, but allow it to be inlined (#102221)
This file should not be allowed, but it should be inlineable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102221
Approved by: https://github.com/anijain2305
2023-06-02 05:14:09 +00:00
PyTorch MergeBot
8aa48315de Revert "Disallow _foreach_utils.py, but allow it to be inlined (#102221)"
This reverts commit 552299c42c.

Reverted https://github.com/pytorch/pytorch/pull/102221 on behalf of https://github.com/huydhn due to Sorry for reverting your PR. It starts to break dynamo jobs in trunk 552299c42c and it looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/102221#issuecomment-1563694599))
2023-05-26 01:27:19 +00:00
Michael Lazos
552299c42c Disallow _foreach_utils.py, but allow it to be inlined (#102221)
This file should not be allowed, but it should be inlineable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102221
Approved by: https://github.com/anijain2305
2023-05-25 23:48:36 +00:00
Kimish Patel
24e9b8f5f4 [PT2E][Quant] Use subgraph matcher annotate linear pattern (#100566)
This diff adds a subgraph matcher for pattern matching. Furthermore, we also move
annotations for the matched subgraph such that only the input and output nodes
of the matched subgraph carry valid quantization-related annotations.
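
A hedged toy sketch of the matching mechanism involved, using the generic FX utility ```torch.fx.passes.utils.matcher_utils.SubgraphMatcher``` on a small linear pattern; the actual annotation logic is omitted.

```
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace
from torch.fx.passes.utils.matcher_utils import SubgraphMatcher

def linear_pattern(x, weight, bias):
    return F.linear(x, weight, bias)

class M(torch.nn.Module):
    def forward(self, x, w, b):
        return torch.relu(F.linear(x, w, b))

matcher = SubgraphMatcher(symbolic_trace(linear_pattern).graph,
                          match_output=False, match_placeholder=False)
matches = matcher.match(symbolic_trace(M()).graph)
# each match exposes the matched nodes, so a pass can annotate only the
# inputs and outputs of the matched region
print(len(matches))
```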

Differential Revision: [D45535539](https://our.internmc.facebook.com/intern/diff/D45535539/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100566
Approved by: https://github.com/jerryzh168
2023-05-04 21:31:59 +00:00
andrewor14
9cda7b9e47 [hotfix] Do not import torch.ao.quantization._pt2e from dynamo (#100194)
Summary: Importing torch.ao.quantization._pt2e from dynamo led to
internal test failures related to memory profiling. For now,
let's express the path using a simple string instead.

Reviewers: jerryzh168, kimishpatel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100194
Approved by: https://github.com/jerryzh168
2023-04-28 01:32:23 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
02f059c2b7 Add private _export API (#99992)
Differential Revision: D45279206

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99992
Approved by: https://github.com/angelayi, https://github.com/gmagogsfm
2023-04-27 16:24:14 +00:00
Edward Z. Yang
3a5427baf4 Add torch.utils._content_store (#99809)
Implements a simple content-addressable store for storages (with tensors implemented as cheap references on top), enabling incremental serialization of tensors to disk, which I intend to use in the accuracy repro extractor.  Check the comment at the top of torch/utils/_content_store.py for more details on the intended use case.

One major piece of this PR is implementing the content hash for tensors. For our prospective use case, we may need to repeatedly hash up to 80 GB of tensor data every time we snapshot (and we may snapshot multiple times). Using a conventional cryptographic hash and hashing each snapshot would likely take on the order of minutes, which seemed too slow to me. So instead, I implemented a crappy hash function that can be run on GPU. It is at least somewhat theoretically grounded: using random parameters generated by Philox, we use the standard shift-multiply and xor-sum universal hash family. The hash function is a bit dorky though; instead of properly doing 160-bit math, it just runs the 32-bit hash five times and cats the results together. By the way, this sets the first precedent for a kernel in the PyTorch library which MUST be torch.compile'd to be run (in fact, this kernel does not run in eager mode because of the use of xor_sum, which doesn't actually exist in ATen).

I had to add a few more primitives to inductor, namely randint (over the entire int range) and xor_sum.  Fortunately, these primitives are natively supported by Triton/C++, and so they were very easy to plumb through.  xor_sum is exposed as a prim, while randint special cases on when low/high span the entire 32-bit signed integer range.
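
A toy CPU sketch of the shift-multiply-and-xor-fold idea on 32-bit words; this illustrates the hash family only and is not the actual GPU kernel (the parameters here come from a seeded CPU generator rather than Philox).

```
import torch

def toy_xor_sum_hash(x: torch.Tensor, seed: int = 0) -> int:
    g = torch.Generator().manual_seed(seed)
    # reinterpret the float32 storage as 32-bit words, widen to int64, mask to unsigned range
    words = x.contiguous().view(torch.int32).to(torch.int64) & 0xFFFFFFFF
    a = torch.randint(0, 2**31, words.shape, generator=g, dtype=torch.int64) | 1  # random odd multipliers
    b = torch.randint(0, 2**31, words.shape, generator=g, dtype=torch.int64)
    h = ((a * words + b) & 0xFFFFFFFF) >> 16                                      # shift-multiply hash per word
    out = 0
    for v in h.flatten().tolist():                                                # xor-sum fold
        out ^= int(v)
    return out

print(toy_xor_sum_hash(torch.randn(16)))
```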

Thanks to Jeff Johnson for letting me bounce ideas off him on a Saturday morning lol.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99809
Approved by: https://github.com/voznesenskym
2023-04-26 18:02:59 +00:00
Will Constable
6427b849a3 Allow in graph einops operators (#99631)
Coordinating with arogozhnikov from the einops team: allowing specific operators in the dynamo graph avoids dynamo tracing problems, provided the operators are screened for safety - they must not bake in unintended constants or take data-dependent control-flow paths.
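
A hedged sketch of the underlying mechanism with a toy function; ```torch._dynamo.allow_in_graph``` is the generic hook, while the actual PR wires up the einops operators.

```
import torch
import torch._dynamo as dynamo

def my_screened_op(x):
    # safe to keep in the graph: no baked-in constants, no data-dependent control flow
    return x * 2 + 1

dynamo.allow_in_graph(my_screened_op)

@torch.compile
def f(x):
    return my_screened_op(x).sum()

print(f(torch.randn(4)))
```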

Fixes #99031

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99631
Approved by: https://github.com/jansel
2023-04-21 03:14:38 +00:00
andrewor14
22af604e1b [quant][pt2] Add Conv + BN fusion for prepare QAT (#98568)
**Summary:** This commit adds the `prepare_qat_pt2e` API and the
fusion logic for Conv + BN. We use the subgraph rewriter to
match and replace the pattern with the existing logic in
`nniqat.ConvBn2d`. Note this is not the end-to-end flow yet.
In particular, the convert flow needs to swap the new subgraph
with another one that merges the batchnorm stats back into conv.

The Conv + BN fusion is implemented in the following steps:

1. Annotate all nodes in the pattern `[conv - bn - getitem]`

2. Match and replace this pattern with the fused QAT pattern
   (note that this is a larger subgraph than the original one)

3. Copy over metadata from the original nodes to the
   corresponding nodes in the new subgraph, to ensure the
   stack traces and dtype annotations are preserved

4. Prepare will insert fake quantizes in the right places
   based on the annotations
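
A hedged toy sketch of the match-and-replace mechanism the fusion builds on, using ```torch.fx.subgraph_rewriter.replace_pattern``` on a small stand-in pattern; the real conv-bn-getitem patterns and the metadata copying in steps 3-4 are much more involved.

```
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace
from torch.fx.subgraph_rewriter import replace_pattern

def pattern(x, w, b):
    return F.relu(F.linear(x, w, b))

def replacement(x, w, b):
    # stand-in for the larger fused QAT subgraph that replaces the matched pattern
    return F.relu6(F.linear(x, w, b))

class M(torch.nn.Module):
    def forward(self, x, w, b):
        return F.relu(F.linear(x, w, b)) + 1

gm = symbolic_trace(M())
matches = replace_pattern(gm, pattern, replacement)
print(len(matches))  # one pattern matched and replaced
```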

**Test Plan:**
python test/test_quantization.py TestQuantizePT2E.test_qat_conv_bn_fusion

**Reviewers:** jerryzh168, kimishpatel, yanboliang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98568
Approved by: https://github.com/kimishpatel
2023-04-20 20:15:28 +00:00
PyTorch MergeBot
96a262d666 Revert "Allow in graph einops operators (#99478)"
This reverts commit 309b7edfe1.

Reverted https://github.com/pytorch/pytorch/pull/99478 on behalf of https://github.com/kit1980 due to dynamo/test_after_aot.py::TestAfterAot::test_save_graph_repro - AssertionError, see https://github.com/pytorch/pytorch/actions/runs/4750274195/jobs/8438535867
2023-04-20 06:42:35 +00:00
Will Constable
309b7edfe1 Allow in graph einops operators (#99478)
Coordinating with @arogozhnikov from the einops team: allowing specific operators in the dynamo graph avoids dynamo tracing problems, provided the operators are screened for safety - they must not bake in unintended constants or take data-dependent control-flow paths.

Fixes #99031

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99478
Approved by: https://github.com/jansel
2023-04-20 03:40:50 +00:00
PyTorch MergeBot
ab08284225 Revert "Disable dynamo tracing torchrec.distributed (#97824)"
This reverts commit df216b5736.

Reverted https://github.com/pytorch/pytorch/pull/97824 on behalf of https://github.com/izaitsevfb due to back out diff that doubles memory consumption for multitask FAIM flows. See D44978317
2023-04-17 20:34:01 +00:00
Jason Ansel
9ab5fdff81 Remove obsolete HAS_PRIMS_REFS (#99252)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99252
Approved by: https://github.com/ngimel
2023-04-17 00:27:37 +00:00
Angela Yi
1d077f28ed [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported after torch._assert support is implemented. Then we can convert the constrain_range calls to torch._asserts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-13 21:20:10 +00:00
PyTorch MergeBot
ab761605ae Revert "[export] Constraints API (#98433)"
This reverts commit 1510eb4072.

Reverted https://github.com/pytorch/pytorch/pull/98433 on behalf of https://github.com/izaitsevfb due to Breaks internal tests, asked by author to revert
2023-04-12 23:37:19 +00:00
PyTorch MergeBot
629377ea8b Revert "Replace _dynamo.config with an object instead of module (#96455)"
This reverts commit 420104a886.

Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Angela Yi
1510eb4072 [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported after torch._assert support is implemented. Then we can convert the constrain_range calls to torch._asserts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-12 01:32:44 +00:00
Han Qi
420104a886 Replace _dynamo.config with an object instead of module (#96455)
Summary:
    Replace _dynamo.config with an object instead of a module.

    Current usage patterns of setting and reading fields on config will work
    unchanged.

    Only changes needed going forward:
    1. import torch._dynamo.config will not work. However, just doing
       import torch._dynamo is sufficient to access the dynamo config
       as torch._dynamo.config.

    2. Files inside the _dynamo folder need to access config via
       from torch._dynamo.config_util import config instead of
       from torch._dynamo import config, because _dynamo/__init__.py
       imports some of these files, which would create a circular import.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Jason Ansel
a7892802b9 [dynamo] Add einops to skipfiles (#98661)
This was causing failures in a torchbench model

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98661
Approved by: https://github.com/yanboliang
2023-04-11 03:21:36 +00:00
Yanbo Liang
a9c7e882ac [Dynamo] Support skip fbcode modules (#98192)
Fixes a Meta-internal use case:
* We are going to skip tracing ```torchrec.distributed```; however, in fbcode the structure is a bit different from OSS torchrec.
* Meta internally uses ```torch.package```, so we should support skipping tracing of files like ```<torch_package_0>.torchrec/distributed/...```.
* We put the logic behind an ```is_fbcode``` flag to avoid misuse.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98192
Approved by: https://github.com/yf225
2023-04-04 06:33:55 +00:00
Yanbo Liang
df216b5736 Disable dynamo tracing torchrec.distributed (#97824)
This was added to unblock Meta internal use cases where ```torchrec.distributed``` is used; however, it can't be traced by dynamo properly right now.
We sent the same fix (#90087) several months ago, but it was reverted due to ```fbgemm``` conflicts. This PR catches ```Exception``` rather than ```ImportError```, which handles the conflicts.
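
A hedged sketch of the import-guard shape this describes, catching ```Exception``` rather than ```ImportError``` so that an fbgemm conflict raised during import doesn't crash the skip logic; the flag name is illustrative.

```
try:
    import torchrec.distributed  # noqa: F401
    HAS_TORCHREC = True
except Exception:
    # not just ImportError: a conflicting fbgemm build can raise other errors here
    HAS_TORCHREC = False

print(HAS_TORCHREC)
```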

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97824
Approved by: https://github.com/wconstab
2023-04-01 00:39:59 +00:00
PyTorch MergeBot
7868e4b45b Revert "Disable dynamo tracing torchrec.distributed (#97824)"
This reverts commit 9d1d95099b.

Reverted https://github.com/pytorch/pytorch/pull/97824 on behalf of https://github.com/yanboliang due to need to catch more exception
2023-03-30 20:43:00 +00:00
Yanbo Liang
9d1d95099b Disable dynamo tracing torchrec.distributed (#97824)
This was added to unblock Meta internal use cases where ```torchrec.distributed``` is used; however, it can't be traced by dynamo properly right now.
We sent the same fix (#90087) several months ago, but it was reverted due to ```fbgemm``` conflicts. This PR catches ```Exception``` rather than ```ImportError```, which handles the conflicts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97824
Approved by: https://github.com/wconstab
2023-03-29 04:29:51 +00:00
Aaron Gokaslan
3d82d8d0ed [BE] Enable more flake8-comprehensions checks (#94601)
I applied some flake8 fixes and enabled checking for them in the linter. I also enabled some checks for my previous comprehensions PR.

This is a follow-up to #94323 where I enabled the flake8 checkers for the fixes I made and fixed a few more of them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94601
Approved by: https://github.com/ezyang
2023-02-10 23:40:29 +00:00
Edward Z. Yang
ca9ebf9e2b Delete dynamo_import and inductor_import (#93851)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93851
Approved by: https://github.com/albanD, https://github.com/jansel
2023-02-02 01:51:29 +00:00