Commit Graph

23 Commits

Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.jit accept simple generator expressions, which allows us to enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.
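
A minimal illustration of the rule (mine, not code from the PR): `any`/`all` over a list comprehension materializes the whole list before checking, whereas a generator expression lets them short-circuit on the first hit.

```python
values = range(10**6)

# Before: builds the full million-element list even when the first element matches.
found = any([v >= 0 for v in values])

# After: any() stops consuming the generator at the first True.
found = any(v >= 0 for v in values)
```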

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
Aaron Gokaslan
67d9790985 [BE] Apply almost all remaining flake8-comprehension checks (#94676)
Applies the remaining flake8-comprehensions fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into just the set call.
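
For illustration, two of the patterns this kind of check rewrites (examples mine, not taken from the diff):

```python
b = [3, 1, 2, 1]

# A useless generator inside a set call ...
s = set(a for a in b)
# ... resolves into just the set call:
s = set(b)

# An unnecessary generator passed to list() becomes a list comprehension:
squares = list(x * x for x in b)  # before
squares = [x * x for x in b]      # after
```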

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
2023-02-12 01:01:25 +00:00
Xuehai Pan
a229b4526f [BE] Prefer dash over underscore in command-line options (#94505)
Prefer dashes over underscores in command-line options. Add `--command-arg-name` variants to the argument parsers; the old underscore arguments (`--command_arg_name`) are kept for backward compatibility.

Both dashes and underscores are used in the PyTorch codebase. Some argument parsers only have dashes, or only underscores, in their arguments. For example, the `torchrun` utility for distributed training only accepts underscore arguments (e.g., `--master_port`). Dashes are more common in other command-line tools, and they appear to be the default choice in the Python standard library:

`argparse.BooleanOptionalAction`: 4a9dff0e5a/Lib/argparse.py (L893-L895)

```python
class BooleanOptionalAction(Action):
    def __init__(self, option_strings, dest, ...):  # other parameters elided
        _option_strings = []
        for option_string in option_strings:
            _option_strings.append(option_string)

            if option_string.startswith('--'):
                option_string = '--no-' + option_string[2:]
                _option_strings.append(option_string)
```

It adds `--no-argname`, not `--no_argname`. Also, typing `_` requires pressing the Shift (or Caps Lock) key, unlike `-`.
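
A minimal sketch of the backward-compatible aliasing described above (illustrative only; `--master-port` stands in for any renamed option):

```python
import argparse

parser = argparse.ArgumentParser()
# The dashed spelling is primary; the underscore spelling is kept as an
# alias for backward compatibility. argparse derives the attribute name
# (dest) from the first long option, converting dashes to underscores.
parser.add_argument("--master-port", "--master_port", type=int, default=29500)

args = parser.parse_args(["--master_port", "1234"])
assert args.master_port == 1234
```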

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94505
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-09 20:16:49 +00:00
Mor Tzur
6575174dcb [fx2ait] fixes for AITSplitter (#87805)
Summary: Propagate lowering settings to the AITSplitter settings.

Reviewed By: yinghai, qxy11

Differential Revision: D40568216

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87805
Approved by: https://github.com/yinghai
2022-11-04 20:18:08 +00:00
Shiyan Deng
fb1586fbcb Make a copy of the submodule inputs (#87899)
Summary: There might be in-place ops in the model that would change the saved inputs. To avoid that, we save a deep-copied version.
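
A toy demonstration of the failure mode (assumed shape of the problem, not the actual splitter code):

```python
import copy
import torch

x = torch.tensor([-1.0, 2.0])

saved_ref = (x,)                  # saving a reference: aliased with the model input
saved_copy = copy.deepcopy((x,))  # saving a deep copy: independent storage

x.relu_()  # an in-place op later in the model mutates x

print(saved_ref[0])   # tensor([0., 2.])  -- silently corrupted
print(saved_copy[0])  # tensor([-1., 2.]) -- still the original input
```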

Test Plan: CI

Differential Revision: D40771290

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87899
Approved by: https://github.com/houseroad
2022-11-01 22:42:04 +00:00
Nan Xiao
c47e0450f8 [fbia] Keep Track of full qualified name before and after remote sharding (#83889)
Summary: Track qualname changes in embedding sharding & FX split, and compose the target qualname at the end of the FBIA transform stage, so we can use the qualname mapping in the XL materialize stage.

Test Plan:
CI/CD

with DISABLE_XLEBB_MATERIALIZATION = True
https://fburl.com/fblearner/a8yljbux

with DISABLE_XLEBB_MATERIALIZATION = False
https://fburl.com/fblearner/2nvi0dam

Reviewed By: lliu315gt

Differential Revision: D38772525

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83889
Approved by: https://github.com/houseroad
2022-08-24 01:15:25 +00:00
Shirong Wu
4ae40d74ac Back out "Add an op_lowering_disallow_list in fx splitter base class. (#82288)" (#82750)
Summary: Revert, since this breaks a BC test.
More context:
Failing test: https://www.internalfb.com/.../fblearner/details/361780349/
Issue report thread: https://fb.workplace.com/groups/2211200152361974/permalink/2303690223112966/

Test Plan: All unit tests

Differential Revision: D38399966

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82750
Approved by: https://github.com/yinghai
2022-08-05 02:15:00 +00:00
Ying Zhang
a71d0e882c Add an op_lowering_disallow_list in fx splitter base class. (#82288)
Summary: As titled, so that we can control which specific ops are not lowered.
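
A hypothetical sketch of how such a disallow list can gate op support in a splitter (names illustrative, not the PR's exact code):

```python
class DisallowListOperatorSupport:
    def __init__(self, op_lowering_disallow_list=None):
        self._disallow = set(op_lowering_disallow_list or [])

    def is_node_supported(self, node) -> bool:
        # A node whose target is on the disallow list is never lowered,
        # even if the backend has a converter for it.
        if node.target in self._disallow:
            return False
        return True  # the real code would defer to the usual converter lookup
```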

Test Plan: Tested together with the next diff in stack.

Differential Revision: D38188836

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82288
Approved by: https://github.com/mikeiovine, https://github.com/khabinov
2022-07-28 05:19:33 +00:00
anjali411
3bcc19b29a Add __all__ to various submodules in torch.fx, distributions, distributed, package (#80367)
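For context (generic Python, not from the PR): `__all__` pins a submodule's public API, which is what wildcard imports and API/BC tooling key off of.

```python
# mymodule.py
__all__ = ["public_fn"]

def public_fn():
    return "part of the public API"

def _helper():
    return "implementation detail"

# Elsewhere: `from mymodule import *` now binds only public_fn.
```
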
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80367
Approved by: https://github.com/albanD
2022-06-27 21:27:30 +00:00
Oleg Khabinov
848af37209 Debug small ACC subgraphs elimination (#80117)
Reviewed By: yinghai

Differential Revision: D37368729

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80117
Approved by: https://github.com/yinghai, https://github.com/houseroad
2022-06-23 18:45:24 +00:00
Chao Gu
bdf468b94d [FX] Fix type of argument min_acc_module_size (#74891)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74891

As titled; otherwise the error below is thrown:
```
TypeError: '>=' not supported between instances of 'int' and 'str'
```
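
This error is the classic symptom of an argparse argument missing `type=int`, leaving the parsed value a string. A minimal sketch of that class of fix (assumed, since the PR diff is not shown here):

```python
import argparse

parser = argparse.ArgumentParser()
# Without type=int, args.min_acc_module_size would be the string "10",
# and a comparison like `num_nodes >= args.min_acc_module_size` raises
# the TypeError above.
parser.add_argument("--min_acc_module_size", type=int, default=1)

args = parser.parse_args(["--min_acc_module_size", "10"])
assert args.min_acc_module_size >= 1
```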

Test Plan: easy

Reviewed By: jackm321

Differential Revision: D35206473

fbshipit-source-id: 200c83b9a19b6aae6f0da03abe99121e55893fd3
(cherry picked from commit 20744d2ce59ea07ecdb2570929dd5344c65b751a)
2022-03-29 17:48:32 +00:00
Shiyan Deng
2afed243b5 [fx2trt] remove split.py (#71933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71933

Add the functionalities provided by split.py to splitter_base.
- Propagate submodule inputs
- Create SplitResult to hold the split results (sketched below).

Then removed split.py; to me this makes navigating the lowering code a bit easier.

Added default split and trace functions for use.

Next step is to add better error handling for each stage during lowering, and to create unit tests for each stage. I'll probably make some bootcamp tasks for the unit tests.
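
A sketch of the shape such a result container could take (field names are assumptions for illustration, not necessarily the PR's exact definition):

```python
from typing import Any, Dict, NamedTuple

import torch.fx

class SplitResult(NamedTuple):
    # Top-level module produced by the split; its children are the
    # acc and non-acc submodules.
    split_module: torch.fx.GraphModule
    # Propagated sample inputs for each submodule, keyed by submodule name.
    submodule_inputs: Dict[str, Any]
    # Name prefix identifying submodules that are not lowered.
    non_acc_submodule_prefix: str
```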

Test Plan: CI

Reviewed By: frank-wei, wushirong

Differential Revision: D33794322

fbshipit-source-id: f991893047a3701177f54cf22d9a6e48e0529472
(cherry picked from commit 1f3e13efba)
2022-02-08 03:31:25 +00:00
Shirong Wu
7d38768d84 Rename splitter result (#68303)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68303

The result of the splitter runs either on an accelerator or directly on the GPU; rename the GPU part of the graph to `run_on_gpu`.

Test Plan: buck test mode/opt caffe2/test:trt_tools_test

Reviewed By: 842974287

Differential Revision: D32392492

fbshipit-source-id: b085376c00c1097752e856e22c631d74a0fbc38f
2021-11-18 09:04:30 -08:00
Shirong Wu
799ebce3aa Add algo recorder/replayer to lower.py (#68194)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68194

Add algorithm recorder/replayer to lower.py

Reviewed By: yinghai

Differential Revision: D31909575

fbshipit-source-id: 552f2ba4fbd6ea646316f6412d55416a76e1f69a
2021-11-11 21:22:22 -08:00
Shirong Wu
69adbc8778 Fix splitter_base and add unit test for trt splitter (#67569)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67569

splitter_base assumes that the first subgraph after the split must be a CPU subgraph whenever a CPU node exists. This is wrong; the starting subgraph should be determined by which subgraph contains the 0-dep node.
Also adds a unit test for the splitter.
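
A hypothetical sketch of the corrected rule (names illustrative):

```python
def find_start_subgraph(subgraphs, deps):
    """Pick the starting subgraph: the one holding a 0-dep node.

    `deps` maps each node to the set of nodes it depends on; the buggy
    code instead always started with the CPU subgraph whenever any CPU
    node existed.
    """
    for subgraph in subgraphs:
        if any(not deps[node] for node in subgraph.nodes):
            return subgraph
    raise RuntimeError("no subgraph contains a 0-dep node")
```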

Reviewed By: yinghai

Differential Revision: D32012549

fbshipit-source-id: e2639ccd7774b4295ca05c2ddbefff9726702b3f
2021-10-29 18:51:59 -07:00
Kefei Lu
d4d3bb91f9 Refactor OperatorSupport related code and fix TRT not supporting int64 dtype (#65848)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65848

This diff includes:

* [fix]: The initialization of `OperatorSupport._support_dict` makes it a class variable, so we move its initialization into the constructor (see the sketch after this list).
* Add an abstract class (more of an interface) `OperatorSupportBase`, since `OperatorSupport`'s purpose is too specific.
* [refactor]: What `TRTOperatorSupport` really does is populate an `OperatorSupport._support_dict`, so there is no reason for subclassing. Remove it, and instead instantiate an `OperatorSupport` with a properly populated `_support_dict`.
* Add a framework for defining simple, basic op-support logic and composing it into more complex logic:
    1. `create_op_support` wraps a function into an `OperatorSupportBase` instance
    2. `chain` combines several simple `OperatorSupportBase` instances into more complex ones
    3. `OpSupports` provides a set of pre-defined, simple `OperatorSupportBase` instances that can be composed together using `chain`.
        1. Currently the only pre-defined one is `decline_if_input_dtype(..)`, which declares a node unsupported if its args are of a user-specified dtype
* Fix `TRTOperatorSupport` so that it not only looks for registered converters, but also declines a node if its arg is of int64
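
A minimal sketch of the class-variable pitfall behind the first fix, plus a `chain`-style combinator (both illustrative; the real classes carry much more machinery):

```python
class Buggy:
    _support_dict = {}  # evaluated once at class creation: shared by all instances

class Fixed:
    def __init__(self, support_dict=None):
        self._support_dict = dict(support_dict or {})  # fresh dict per instance

a, b = Buggy(), Buggy()
a._support_dict["op"] = "supported"
assert "op" in b._support_dict      # the bug: b observes a's mutation

c, d = Fixed(), Fixed()
c._support_dict["op"] = "supported"
assert "op" not in d._support_dict  # fixed: instances are independent

def chain(*supports):
    # Combine simple predicates: a node counts as supported only if every
    # member agrees (assumed semantics, consistent with the description above).
    def combined(node):
        return all(is_supported(node) for is_supported in supports)
    return combined
```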

Test Plan: linter and CI

Reviewed By: 842974287

Differential Revision: D31275525

fbshipit-source-id: bbc02f7ccf4902a7912bb98ba5be2c2fbd53b606
2021-09-30 13:36:26 -07:00
James Reed
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
James Reed
cf7409e184 [FX] Move graph_manipulation and param_fetch out of experimental and into passes (#65183)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65183

ghstack-source-id: 138309655

Test Plan: waitforsadcastle

Reviewed By: protonu

Differential Revision: D31007630

fbshipit-source-id: 77d14b284737aabbe2b9e6394177a0c2e40aafba
2021-09-17 09:32:40 -07:00
Kefei Lu
adbcc819cd Fix fx2trt SplitterBase non_tensor_input logic (#64286)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64286

During graph splitting, `_SplitterBase` supports taking into consideration whether the subnet boundary nodes
produce "supported" outputs that will cross the acc/non-acc boundary. Specifically, if the backend only
supports Tensor-based data passing across the boundary, then we cannot split the graph at a place where the node
output is a non-Tensor type (e.g., `Tuple[Tensor]`).

There is currently a bug in this logic: it does not correctly detect the output type of a Node. Instead of
using `Node.meta['tensor_meta']`, we should check `Node.meta['type']`.

`Node.meta['tensor_meta']` is not appropriate because this key will exist if the node output is an iterable
and one of the elements is of type `Tensor`. So `Tuple[Tensor]` would be wrongly considered "supported".
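
A sketch of the distinction (assuming the usual torch.fx metadata conventions, where shape propagation records the output's Python type under `meta['type']`):

```python
import torch

def outputs_plain_tensor(node) -> bool:
    # Wrong: 'tensor_meta' is also populated for containers holding a
    # Tensor, so a Tuple[Tensor] output would slip through this check.
    # return "tensor_meta" in node.meta

    # Right: check the recorded output type itself.
    return node.meta.get("type") is torch.Tensor
```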

Test Plan:
arc lint
run CI tests

Reviewed By: yinghai, 842974287

Differential Revision: D30617147

fbshipit-source-id: e8ba70dfaddc05cafb8037d58fca73b7ccbb1a49
2021-09-07 04:02:29 -07:00
Oleg Khabinov
a0c1c7e5d4 Fixing the case when starter nodes depend on get_attr node (#62234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62234

There was a typo that we did not catch until recently, hence this fix.

Reviewed By: 842974287

Differential Revision: D29924190

fbshipit-source-id: ee6259fcd41358aefe9680b419acc87c0c2821cb
2021-07-27 10:29:53 -07:00
Shiyan Deng
9d56176034 Fix splitter and add a unittest (#58075)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58075

Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1687

Reviewed By: mikekgfb

Differential Revision: D28357724

fbshipit-source-id: 36c2d211576a90107bc75468a39408ffecaeed43
2021-05-12 10:40:37 -07:00
Shiyan Deng
d896d1f4ce [fx splitter] Fix fusion group utility (#57280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57280

We've found an issue where a fusion group can result in a circular dependency. For example:
```
a -> b -> c -> d
|              ^
+--------------+
```

Only `a` has non-tensor output, and currently we would create a fusion group (a, b, d). This results in a circular dependency, because the fusion group now depends on c while c depends on the fusion group as well.

This diff implements the solution discussed before: when we add a node to a fusion group, we also add all the nodes that sit in the middle, between the fusion group and the newly added node.

The same logic is used in the minimizer to build fusion groups.
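
A hypothetical sketch of that rule (illustrative; `deps[n]` is the set of nodes `n` transitively depends on):

```python
def add_to_fusion_group(group, node, deps):
    """Add `node` to `group`, pulling in every node 'in the middle'."""
    group.add(node)
    changed = True
    while changed:
        changed = False
        for mid in deps[node]:
            # mid sits between the group and the new node if the new node
            # depends on it and it depends on something already in the group.
            if mid not in group and deps[mid] & group:
                group.add(mid)
                changed = True

# On the example above: with group = {"a", "b"} and deps
# {"a": set(), "b": {"a"}, "c": {"a", "b"}, "d": {"a", "b", "c"}},
# adding "d" also pulls in "c", so no cycle is created.
```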

Test Plan: split_tests and net_min_tests

Reviewed By: khabinov

Differential Revision: D27917432

fbshipit-source-id: a3d99fe5929dbc9f8eb0f45bccd83fd7b173795a
2021-04-30 10:18:01 -07:00
Shiyan Deng
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer into superclasses `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00