Commit Graph

17 Commits

Wanchao Liang
7afba50508 [dtensor] delete unused `__torch_function__` (#90449)
`__torch_function__` is not actually used yet today, so delete
it for now; we can revisit once we really need it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90449
Approved by: https://github.com/fduwjj
2022-12-10 01:29:02 +00:00
Sergii Dymchenko
f51f6aa387 Fix non-existing parameters in docstrings (#90505)
Continuation after https://github.com/pytorch/pytorch/pull/90163.

Here is a script I used to find all the non-existing arguments in the docstrings (the script can give false positives in the presence of `*args`/`**kwargs` or decorators):

_Edit:_
In the version I originally posted, the indentation of the last `break` was wrong, so the script only reported a function if its first docstring argument was wrong. The script below has the corrected indentation; I'll create a separate PR if the corrected script finds more issues.

``` python
import ast
import os

import docstring_parser

for root, dirs, files in os.walk('.'):
    for name in files:
        if root.startswith("./.git/") or root.startswith("./third_party/"):
            continue
        if name.endswith(".py"):
            full_name = os.path.join(root, name)
            with open(full_name, "r") as source:
                tree = ast.parse(source.read())
                for node in ast.walk(tree):
                    if isinstance(node, ast.FunctionDef):
                        # Copy the list so we don't mutate the AST node itself.
                        all_node_args = list(node.args.args)
                        if node.args.vararg is not None:
                            all_node_args.append(node.args.vararg)
                        if node.args.kwarg is not None:
                            all_node_args.append(node.args.kwarg)
                        # posonlyargs and kwonlyargs are always lists (3.8+).
                        all_node_args.extend(node.args.posonlyargs)
                        all_node_args.extend(node.args.kwonlyargs)
                        args = [a.arg for a in all_node_args]
                        # ast.get_docstring returns None for undocumented
                        # functions; parse an empty string in that case.
                        docstring = docstring_parser.parse(ast.get_docstring(node) or "")
                        doc_args = [a.arg_name for a in docstring.params]
                        # Keep only identifier characters of each documented name.
                        clean_doc_args = []
                        for a in doc_args:
                            clean_a = ""
                            for c in a.split()[0]:
                                if c.isalnum() or c == '_':
                                    clean_a += c
                            if clean_a:
                                clean_doc_args.append(clean_a)
                        doc_args = clean_doc_args
                        for a in doc_args:
                            if a not in args:
                                print(full_name, node.lineno, args, doc_args)
                                break  # report each function at most once

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90505
Approved by: https://github.com/malfet, https://github.com/ZainRizvi
2022-12-09 21:43:09 +00:00
Wanchao Liang
9e314bd822 [dtensor] handle the case where output of op is Optional[Tensor] (#90241)
As observed by @aazzolini, some ops may have Optional[Tensor] returns
where they return None (e.g. native_layer_norm_backward). This is a
mismatch between the C++ aten op signature and Python's None, and we
need to handle it on the Python side.
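For illustration, a minimal sketch of this kind of Python-side handling; the names `wrap_outputs` and `wrap_as_dtensor` are hypothetical stand-ins, not the actual DTensor dispatch code:
``` python
from typing import Any, List, Optional, Sequence

def wrap_as_dtensor(tensor: Any, spec: Any) -> Any:
    # Hypothetical stand-in for the real DTensor construction logic.
    return (tensor, spec)

def wrap_outputs(outputs: Sequence[Optional[Any]], spec: Any) -> List[Optional[Any]]:
    # Ops like native_layer_norm_backward declare Optional[Tensor] (Tensor?)
    # returns, so an output entry may be Python None at runtime; pass None
    # through instead of trying to wrap it as a DTensor.
    return [None if out is None else wrap_as_dtensor(out, spec) for out in outputs]

# A None in the middle of the outputs survives unwrapped.
print(wrap_outputs(["grad_input", None, "grad_bias"], spec="dummy_spec"))
```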
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90241
Approved by: https://github.com/aazzolini
2022-12-06 18:17:20 +00:00
Wanchao Liang
2c2cce73d4 [dtensor] remove torchgen function schema and parse manually (#90106)
This PR gets rid of torchgen FunctionSchema parsing and parses the
schema manually. This should resolve the torchgen packaging issue and
also provide some perf wins when running DTensor eagerly.
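Roughly, manual parsing of an op schema string can be done with plain string splitting. The sketch below is illustrative only, not the actual DTensor parser, and it ignores edge cases such as nested parentheses and defaults containing commas:
``` python
def parse_schema(schema: str):
    # Split "aten::add.Tensor(Tensor self, Tensor other) -> Tensor" into
    # (op name, argument strings, return annotation) without torchgen.
    name, rest = schema.split("(", 1)
    args_str, returns = rest.rsplit(") -> ", 1)
    args = [a.strip() for a in args_str.split(",") if a.strip()]
    return name.strip(), args, returns.strip()

print(parse_schema("aten::add.Tensor(Tensor self, Tensor other) -> Tensor"))
# ('aten::add.Tensor', ['Tensor self', 'Tensor other'], 'Tensor')
```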
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90106
Approved by: https://github.com/awgu
2022-12-06 05:45:00 +00:00
jiaruifang
29ea1c9c8e [doc] update dtensor readme (#89991)
I fixed some import errors in the DTensor README.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89991
Approved by: https://github.com/wanchaol
2022-12-01 22:16:39 +00:00
Wanchao Liang
bf23e0bdbd [dtensor] ufmt distributed._tensor (#89967)
cmd: `ufmt format torch/distributed/_tensor`

Copied from Andrew:

Notes for VSCode users:

- Install ufmt: https://pypi.org/project/ufmt/
- Install the VSCode ufmt extension: https://marketplace.visualstudio.com/items?itemName=omnilib.ufmt
- Include in `settings.json`:
``` jsonc
{
    "[python]": {
        "editor.defaultFormatter": "omnilib.ufmt",
        "editor.formatOnSave": true,
    },
}
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89967
Approved by: https://github.com/fduwjj
2022-12-01 20:58:13 +00:00
Wanchao Liang
4451eb24e6 Move tensor_parallel out to distributed.tensor folder (#89878)
This PR moves tensor parallel from torch.distributed._tensor.parallel
to torch.distributed.tensor.parallel, to prepare for the beta release.
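In user code the change is just the import path; a hedged example, using `parallelize_module` as one representative symbol from the package:
``` python
# Old (private) namespace:
#   from torch.distributed._tensor.parallel import parallelize_module
# New namespace after this PR:
from torch.distributed.tensor.parallel import parallelize_module
```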
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89878
Approved by: https://github.com/fduwjj
2022-11-30 22:13:10 +00:00
fduwjj
009dd3c4af [PT-D][Tensor Parallel] Add more test cases when we use use_orig_params for FSDP wrapping (#89779)
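For context, `use_orig_params` is a flag on the FSDP constructor that keeps the original parameter objects visible instead of only the flattened ones; a minimal hedged sketch of the wrapping under test (the model and process-group setup are illustrative):
``` python
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Illustrative only: assumes torch.distributed.init_process_group() has run.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
fsdp_model = FSDP(model, use_orig_params=True)  # keep original param objects
```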
Differential Revision: [D41600656](https://our.internmc.facebook.com/intern/diff/D41600656)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89779
Approved by: https://github.com/wanchaol
2022-11-30 06:34:58 +00:00
Wanchao Liang
12f98f85bc [dtensor] update README (#89800)
This PR updates the README to include the RFC details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89800
Approved by: https://github.com/mrshenli
2022-11-30 04:35:32 +00:00
fduwjj
de0dee30d0 [PT-D][3/N] Sync TP API change to Pytorch (#89535)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89535
Approved by: https://github.com/wanchaol
2022-11-23 16:13:49 +00:00
fduwjj
00b9473ad6 [PT-D][Tensor Parallelism][2/N] Sync TP API change to PT prod (#89467)
This is part of the TP Beta Release efforts.
ref: https://github.com/pytorch/tau/issues/576
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89467
Approved by: https://github.com/wanchaol
2022-11-22 03:05:53 +00:00
fduwjj
6afe341276 [PT-D][1/N] Sync TP Beta change to prod (#89242)
This is part of the TP Beta Release efforts.

ref: https://github.com/pytorch/tau/issues/576

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89242
Approved by: https://github.com/wanchaol
2022-11-19 18:01:25 +00:00
Wanchao Liang
f20b3f2e57 [dtensor] PART 8: move tensor parallel api and tests to core distributed (#88180)
This PR moves the tensor/parallel folder and tests to torch.distributed.

part of https://github.com/pytorch/pytorch/issues/88838
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88180
Approved by: https://github.com/aazzolini
2022-11-16 08:07:50 +00:00
Wanchao Liang
1b88476320 [dtensor] PART 4: move remaining DTensor ops to core distributed (#88550)
This PR moves the view-related DTensor ops to core distributed;
tests will be added in follow-up PRs.

part of https://github.com/pytorch/pytorch/issues/88838
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88550
Approved by: https://github.com/fduwjj
2022-11-16 08:07:44 +00:00
Wanchao Liang
2dcf0978a2 [dtensor] PART 3: move most DTensor ops to core distributed (#88177)
This PR moves most DTensor ops to torch.distributed._tensor. We will
add all tests in the following PRs.

part of https://github.com/pytorch/pytorch/issues/88838
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88177
Approved by: https://github.com/fduwjj
2022-11-16 08:07:42 +00:00
Wanchao Liang
4b945967de [dtensor] PART 2: move DTensor abstraction and APIs to core distributed (#88176)
This PR moves the core DTensor abstraction and high-level APIs to
the torch.distributed._tensor folder, which includes the following:
1. DTensor class
2. high-level APIs (distribute_tensor/distribute_module); a usage sketch follows below
3. dispatching logic
4. redistribute logic

part of https://github.com/pytorch/pytorch/issues/88838
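A hedged usage sketch of the moved high-level API (assumes an initialized 4-rank job; the tensor shape is illustrative):
``` python
import torch
from torch.distributed._tensor import DeviceMesh, DTensor, Shard, distribute_tensor

# Illustrative only: assumes torch.distributed is initialized with 4 ranks.
mesh = DeviceMesh("cuda", list(range(4)))
big = torch.randn(8, 8)
dtensor = distribute_tensor(big, mesh, placements=[Shard(0)])  # shard dim 0 across the mesh
assert isinstance(dtensor, DTensor)
```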
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88176
Approved by: https://github.com/fduwjj
2022-11-16 08:07:41 +00:00
Wanchao Liang
370fc5cb42 [dtensor] PART 1: move DeviceMesh and placement to core distributed (#88549)
This PR creates the `torch.distributed._tensor` package and moves
DeviceMesh and the placement types to it.

part of https://github.com/pytorch/pytorch/issues/88838
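For reference, a minimal hedged example of the two pieces being moved (mesh ranks are illustrative; assumes an initialized job):
``` python
from torch.distributed._tensor import DeviceMesh, Replicate, Shard

# DeviceMesh: an n-d array of ranks describing the device layout (here 2x2).
mesh = DeviceMesh("cuda", [[0, 1], [2, 3]])

# Placement types: how a tensor maps onto each mesh dimension.
placements = [Shard(0), Replicate()]  # shard dim 0 on mesh dim 0, replicate on mesh dim 1
```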
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88549
Approved by: https://github.com/fduwjj
2022-11-16 08:07:39 +00:00