Commit Graph

64 Commits

Author SHA1 Message Date
Scott Wolchok
b87d3fa432 [PyTorch][jit] Don't allow create() on singleton types (#56807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56807

If I understand correctly, there's no reason to create your own instance of these global singleton types.
ghstack-source-id: 127312270

Test Plan: CI

Reviewed By: SplitInfinity

Differential Revision: D27973447

fbshipit-source-id: f12df69d185f1baaa45f2ac6eac70570a7a65912
2021-04-30 10:28:50 -07:00
James Reed
71a5314591 Fix ScriptMethod dispatch on __torch_function__ (#56103)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56103

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D27784142

Pulled By: jamesr66a

fbshipit-source-id: 555dcb7c3a98b8fb9e9ca9b499cafad54e819aa7
2021-04-15 08:46:43 -07:00
Nikita Shulga
6a39613f35 [BE] Make torch/csrc/jit/tensorexpr/ clang-tidy clean (#55628)
Summary:
Mostly auto-generated changes using
```
 python3 tools/clang_tidy.py -c build -x torch/csrc/jit/tensorexpr/eval.cpp -s
```
With the following common patterns manually fixed:
- Use ` = default` instead of `{}`
- Deleted methods should be public
- Use pass-by-value + std::move instead of pass-by-reference + copy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55628

Reviewed By: walterddr

Differential Revision: D27655378

Pulled By: malfet

fbshipit-source-id: 92be87a08113435d820711103ea9b0364182c71a
2021-04-08 19:44:14 -07:00
Meghan Lele
6866c033d5 [JIT] Add recursive scripting for class type module attributes (#55124)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55124

**Summary**
This commit modifies type inference (used by the module scripting code)
so that it tries to script the type of any class instances that it
encounters. This enables recursive, automatic scripting of class type
module attributes.
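
For illustration, a minimal sketch of what this enables (class and method names are illustrative, not from the PR), assuming the attribute's class is itself scriptable:

```python
import torch

class Scaler:  # plain Python class; scripted automatically when encountered
    def __init__(self, scale: float):
        self.scale = scale

    def apply(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Class-type attribute; recursive scripting infers and scripts Scaler,
        # no manual @torch.jit.script decoration needed.
        self.scaler = Scaler(2.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scaler.apply(x)

scripted = torch.jit.script(M())
print(scripted(torch.ones(3)))  # tensor([2., 2., 2.])
```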

**Test Plan**
This commit adds a test case for this to `TestClassType`.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D23971883

Pulled By: SplitInfinity

fbshipit-source-id: 7a5a2e7c12ee68cbdeb0a07e6aaf98734a79cb06
2021-04-02 12:16:21 -07:00
Rohan Varma
a37fbf9b45 [Futures] Bump log verbosity when ignoring cb errors in python future. (#54476)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54476

Per title. For `add_done_callback`, we log but swallow exceptions in order to stay consistent with what the concurrent.futures Python library does; see the discussion in https://github.com/pytorch/pytorch/pull/45675.

However, it would be good to improve the verbosity here, since this can be a source of confusion if users are setting a different future via `add_done_callback` and an error is hit, resulting in an unexpected hang (see https://github.com/pytorch/pytorch/issues/52132 for more details on how this can happen).
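
For illustration, a hedged sketch of the confusing case described above (the callback body is deliberately buggy, and the variable names are illustrative):

```python
import torch

src_fut = torch.futures.Future()
ret_fut = torch.futures.Future()

def cb(fut):
    # Any exception raised here is logged and swallowed by add_done_callback,
    # so ret_fut is never completed and waiting on it can hang.
    ret_fut.set_result(fut.wait() + not_defined)  # NameError on purpose

src_fut.add_done_callback(cb)
src_fut.set_result(1)
# ret_fut.wait() would block forever because cb's error was swallowed (and logged)
```
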
ghstack-source-id: 125300389

Test Plan: CI

Reviewed By: lw

Differential Revision: D27253004

fbshipit-source-id: 72ed21c8fb6d27de5797c17fc46b762f893e6fea
2021-03-31 15:17:06 -07:00
Michael Suo
8a170fbacd [package] fix mangling issues with TorchScript (#54915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54915

TorchScript and torch.package have different mangling schemes. To avoid
them interfering with each other, we should undo the torch.package
mangling before processing anything with TorchScript (since TS
independently makes sure that no names collide).
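
For context, a rough sketch of the interop this fix targets (hedged: the torch.package API of this era may require different intern/extern declarations, and the file name is illustrative):

```python
import torch
from torch.package import PackageExporter, PackageImporter

model = torch.nn.Linear(4, 4)

# Save the model into a package; torch.package mangles module names on import.
with PackageExporter("model_pkg.pt") as exporter:
    exporter.extern("torch.**")  # keep torch itself outside the package
    exporter.save_pickle("model", "model.pkl", model)

loaded = PackageImporter("model_pkg.pt").load_pickle("model", "model.pkl")

# Scripting the loaded instance is where TorchScript must undo the package
# mangling before applying its own scheme.
scripted = torch.jit.script(loaded)
```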

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D27410472

Pulled By: suo

fbshipit-source-id: d1cc013c532d9abb7fb9615122bc465ded4785bb
2021-03-31 00:58:05 -07:00
Christian Puhrsch
2668149b8c Export torch::jit::toIValue (#54449)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54448

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54449

Reviewed By: SplitInfinity

Differential Revision: D27243154

Pulled By: cpuhrsch

fbshipit-source-id: fc21d6ce251b868356ad8ea13ae891fb56e311ce
2021-03-22 17:17:18 -07:00
James Reed
1fe6a6507e [WIP][FX] Fix tracing support for torchbind (#52884)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52884

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D26675801

Pulled By: jamesr66a

fbshipit-source-id: 8e5100bcea17589a53163abf6ab991658e11fa3a
2021-03-05 23:40:16 -08:00
Rohan Varma
c941730b96 [JIT/Futures] support set_exception api (#50983)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50983

There is currently no way to handle/propagate errors with the Python-based futures API (they are raised correctly if set with an error, but this is only possible from C++).

This diff allows the Future's `unwrap_func` to be optionally set from Python, so users can complete futures with an exception and have the error raised as expected. This is mostly to support the following use case in the next diff:

```
def unwrap(python_result):
    # throw exception if needed
    if isinstance(python_result, Exception):
        raise python_result
    return python_result

ret_fut = torch.futures.Future(unwrap_func=unwrap)

rpc_fut = rpc.rpc_async(...)  # RPC future that times out
# Goal is to propagate the RPC error to this future
def callback(res):
    # Note that ret_fut.set_result(res.wait()) won't propagate the error
    try:
        ret_fut.set_result(res.wait())
    except Exception as e:
        ret_fut.set_result(e)

rpc_fut.add_done_callback(callback)
```
ghstack-source-id: 121021434

Test Plan:
unittest
```
buck test mode/dev-nosan mode/no-gpu //caffe2/test:futures -- test_unwrap --print-passing-details
```

Reviewed By: mrshenli

Differential Revision: D25950304

fbshipit-source-id: 7ee61e98fcd783b3f515706fa141d538e6d2174d
2021-02-04 20:22:19 -08:00
Thomas Viehmann
86861095fa Graceful invalidation of Python Node/Value/Block when C++ object is deleted (#50326)
Summary:
Previously we might have gotten segfaults; now it raises an exception.
Thread safety hasn't been an objective.

I have a follow-up to expand the Python interface for the API.

Fixes https://github.com/pytorch/pytorch/issues/49969.

wanchaol

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50326

Reviewed By: pbelevich

Differential Revision: D26096234

Pulled By: gmagogsfm

fbshipit-source-id: 5425772002eb4deb3830ed51eaa3964f22505840
2021-02-04 01:34:46 -08:00
anjali411
18a7ec7d7d Update the JIT complex type name to be consistent with Python (#51476)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51476

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D26179237

Pulled By: anjali411

fbshipit-source-id: 6a5c60c8545eb42416583836b8038ceffd3f3244
2021-02-03 09:59:08 -08:00
anjali411
f9f22c8b5c Add serialization logic for complex numbers (#51287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51287

This reverts commit dfdb1547b9.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26131165

Pulled By: anjali411

fbshipit-source-id: 047167fac594ddb670c5e169446e90e74991679a
2021-01-28 17:25:35 -08:00
Meghan Lele
88baf470d1 [JIT] Provide more info when attribute fails to convert (#50870)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50870

**Summary**
Module attributes whose types cannot be determined from annotations or
inferred from their values at script time are added to the concrete type
of the corresponding module as "failed attributes". Any attempt to access
them in scripted code produces an error with a message explaining that the
attribute could not be converted to a corresponding attribute on the
TorchScript module. However, the error is no more specific than that.

This commit modifies `infer_type` in `_recursive.py` so that it returns
`c10::InferredType` instead, which allows more information about typing
failures to be communicated to the caller through the `reason()` method
on this class. This information is appended to the hint added to the
module concrete type for failed attributes.
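
For illustration, a hedged sketch of the kind of failure that now comes with a reason attached (the exact wording of the message will differ):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # A plain Python list of Modules cannot be typed as a TorchScript
        # attribute (an nn.ModuleList would be), so it becomes a "failed attribute".
        self.layers = [torch.nn.Linear(2, 2), torch.nn.ReLU()]

    def forward(self, x):
        return self.layers  # touching the failed attribute triggers the error

try:
    torch.jit.script(M())
except RuntimeError as e:
    # The hint now also carries the reason reported via InferredType.reason()
    print(e)
```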

**Testing**
This commit adds a unit test to `test_module_containers.py` that checks
that extra information is provided about the reason for the failure
when a module attribute consisting of a list of `torch.nn.Module` fails to convert.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26091472

Pulled By: SplitInfinity

fbshipit-source-id: fcad6588b937520f250587f3d9e005662eb9af0d
2021-01-27 20:37:10 -08:00
Mike Ruberry
dfdb1547b9 Revert D26094906: Add serialization logic for complex numbers
Test Plan: revert-hammer

Differential Revision:
D26094906 (2de4ecd4eb)

Original commit changeset: 7b2614f3ee4a

fbshipit-source-id: 6f32a9fc6bb2a904ca1a282bbc6b2df0aee50068
2021-01-27 19:44:26 -08:00
anjali411
2de4ecd4eb Add serialization logic for complex numbers (#50885)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50885

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26094906

Pulled By: anjali411

fbshipit-source-id: 7b2614f3ee4a30c4b4cf04aaa3432988b38a0721
2021-01-27 15:19:36 -08:00
Anjali Chourdia
b6eaca9f1f Add type annotation logic for complex numbers (#50884)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50884

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D26086963

fbshipit-source-id: f103f7f529d63d701c4f17862e30eafbab7d0c68
2021-01-26 19:39:35 -08:00
Shen Li
c480eebf95 Completely remove FutureMessage type (#50029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50029

Test Plan:
buck run mode/opt -c=python.package_style=inplace //caffe2/torch/fb/training_toolkit/examples:ctr_mbl_feed_april_2020 -- local-preset --flow-entitlement pytorch_ftw_gpu --secure-group oncall_pytorch_distributed

Before:

```
...

I0107 11:03:10.434000 3831111 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|total_examples 14000.0
I0107 11:03:10.434000 3831111 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|window_qps 74.60101318359375
I0107 11:03:10.434000 3831111 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|lifetime_qps 74.60101318359375

...

I0107 11:05:12.132000 3831111 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|total_examples 20000.0
I0107 11:05:12.132000 3831111 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|window_qps 64.0
I0107 11:05:12.132000 3831111 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|lifetime_qps 64.64917755126953

...
```

After:

```
...

I0107 11:53:03.858000 53693 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|total_examples 14000.0
I0107 11:53:03.858000 53693 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|window_qps 72.56404876708984
I0107 11:53:03.858000 53693 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|lifetime_qps 72.56404876708984

...

I0107 11:54:24.612000 53693 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|total_examples 20000.0
I0107 11:54:24.612000 53693 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|window_qps 73.07617950439453
I0107 11:54:24.612000 53693 print_publisher.py:23  master      ] Publishing batch metrics: qps-qps|lifetime_qps 73.07617950439453

...
```

Reviewed By: lw

Differential Revision: D25774915

Pulled By: mrshenli

fbshipit-source-id: 1128c3c2df9d76e36beaf171557da86e82043eb9
2021-01-07 19:50:57 -08:00
Luca Wehrstedt
1ac05cfe01 Remove DataPtr extractor from CUDAFuture (#48840)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48840

The CUDAFuture class needs to inspect the values it contains in order to extract its tensors (in fact, the DataPtrs backing those). These are needed first to determine what CUDA devices back those tensors, so that an event for each such device can be recorded; and later to record these DataPtrs with the CUDA caching allocator if they are used in other streams.

This became complicated when Python was added to the mix, because to inspect a Python object we need to acquire the GIL, but we couldn't do so from code that was supposed to also work in C++-only mode. The solution was for users to provide a custom way to extract DataPtrs, so that the PythonFutureWrapper could install such a custom Python-aware one. This was the DataPtr extractor.

In https://github.com/pytorch/pytorch/pull/48502 a different suggestion was proposed. At its root, it consists of adding support for IValues of type PyObject to the visit() and getSubValues() methods. In order to deal with the GIL, we do this through a virtual method: PyObjectHolder, which is the base class, is available also in C++-only mode, and thus declares this method but leaves it unimplemented; ConcretePyObjectHolder, which is the subclass, is only included in Python mode, and thus it can implement that method, acquire the GIL, and do what it's supposed to.

In my opinion, this approach is just brilliant! Thanks to wanchaol for proposing it! It hides the complexity of dealing with Python inside getSubValues(), where it can be handled properly, thus enormously simplifying the CUDAFuture and PythonFutureWrapper classes.
ghstack-source-id: 118704935

Test Plan: Unit tests

Reviewed By: wanchaol

Differential Revision: D25334355

fbshipit-source-id: 3f1d3bf6e6e8505a114c877fb9a6fcc3f68d91d3
2020-12-19 11:03:45 -08:00
Rohan Varma
a727bf2851 Refactor RPC matchBuiltInOp to get rid of exception swallowing (#49009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49009

As per the title, we should generally not have exception swallowing, and
this commit makes it so that if there is a true error in JIT operator
resolution, it is propagated back to the RPC callee, and we don't silently
swallow any other exceptions that may happen. Swallowing the exceptions
previously resulted in hard-to-debug issues, such as unexpected ops showing up
in the profiler, and flaky tests that were fixed by
https://github.com/pytorch/pytorch/pull/41287

Added a unittest that validates the error that comes from `jit/pybind_utils.h`.
ghstack-source-id: 118794661

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D25392905

fbshipit-source-id: 6f93251635740bcf902824548b2bc6f9249be5f0
2020-12-17 11:37:21 -08:00
Sebastian Messmer
4431731c68 Making ops c10-full: Storage arguments (#49146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49146

Add support for Storage arguments to IValue and the JIT typing system, and make ops that were blocked on that c10-full.
ghstack-source-id: 118710665

(Note: this ignores all push blocking failures!)

Test Plan: waitforsandcastle

Reviewed By: ezyang

Differential Revision: D25456799

fbshipit-source-id: da14f125af352de5fcf05a83a69ad5a69d5a3b45
2020-12-16 14:00:34 -08:00
James Reed
76d41c801e [JIT] Fix toIValue handling of AttributeError when casting ClassType (#49188)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49188

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25476573

Pulled By: jamesr66a

fbshipit-source-id: cec296fae71cc0cdf36bde60417d7d3b1aa84198
2020-12-11 17:54:16 -08:00
Luca Wehrstedt
a6778989d1 Support wider range of types in FutureNCCL (#48502)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48502

This commit is part of a stack that reworks FutureNCCL in order to extract a generic CUDA-aware Future subclass. The stack deliberately breaks up this transition into elementary changes, to make it easier to verify that the behavior is preserved (or to highlight how it gets changed).

 ---

FutureNCCL restricted its values to tensors, (singleton) lists of tensors, or Python objects that could be converted to either of those types. We need a CUDA future that can handle more generic types, though.

The main challenge is extracting all DataPtrs from an arbitrary object. I think I found some ways of doing so, but I'd like some JIT experts to look into this and tell me whether there are better ways. I'll add inline comments where their input would be appreciated.
ghstack-source-id: 118180026

Test Plan: Unit tests (I should probably add new ones)

Reviewed By: wanchaol

Differential Revision: D25177562

fbshipit-source-id: 1ef18e67bf44543c70abb4ca152f1610dea4e533
2020-12-10 03:54:15 -08:00
Luca Wehrstedt
b7f5aa9890 Remove NCCL dependency from PythonFutureWrapper (#48495)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48495

This commit is part of a stack that reworks FutureNCCL in order to extract a generic CUDA-aware Future subclass. The stack deliberately breaks up this transition into elementary changes, to make it easier to verify that the behavior is preserved (or to highlight how it gets changed).

 ---

PythonFutureWrapper needs to provide a GIL-aware way to extract tensors from an IValue of type PyObject. Since this was only used by FutureNCCL it was guarded by #ifdef USE_C10D_NCCL. However, we will need to use it with CUDA-aware futures other than the NCCL one. This might have been achieved simply by replacing USE_C10D_NCCL with USE_CUDA, but I wanted to clean this up better.

We're dealing with two independent dimensions: C++-vs-Python and CPU-vs-CUDA. To make the code more modular, the two dimensions should be dealt with by orthogonal solutions: the user setting a custom callback to handle Python, and the subclass being CUDA-aware. Mixing these two axes makes it more complicated.

Another reason for changing how this works is that later on, when we introduce multi-device support, we'll need to extract DataPtrs for other reasons too (rather than just recording streams with the caching allocator), namely to inspect the value and determine which devices it resides on.
ghstack-source-id: 118180038

Test Plan: Unit tests

Reviewed By: mrshenli

Differential Revision: D25177560

fbshipit-source-id: 3a424610c1ea191e8371ffee0a26d62639895884
2020-12-10 03:53:44 -08:00
Meghan Lele
18eccfbe42 [JIT] Fix clang-tidy warnings in jit/python (#47985)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47985

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D25258644

Pulled By: SplitInfinity

fbshipit-source-id: dfc15dc62c148f79f4e99fd058a6bf2d071ccbb5
2020-12-02 12:35:36 -08:00
Elias Ellison
4380934b9b [JIT] Dont use specialized tensor type (#46130)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/46122

For `Any`, we infer the type of the ivalue to set the ivalue's type tag. When we saw a Tensor, we would use a specialized Tensor type, so when `Dict[str, Tensor]` was passed in as an `Any` arg it would be inferred as `Dict[str, Float(2, 2, 2, 2)]`, which breaks runtime `isinstance` checking.
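
For illustration, a hedged sketch of the runtime check this fixes (function name is illustrative):

```python
import torch
from typing import Any, Dict

@torch.jit.script
def is_str_tensor_dict(x: Any) -> bool:
    # With the fix, Tensor values are tagged with the generic Tensor type,
    # so this runtime isinstance check matches as expected.
    return isinstance(x, Dict[str, torch.Tensor])

print(is_str_tensor_dict({"w": torch.rand(2, 2, 2, 2)}))  # True
```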

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46130

Reviewed By: glaringlee

Differential Revision: D24261447

Pulled By: eellison

fbshipit-source-id: 8a2bb26ce5b6c56c8dcd8db79e420f4b5ed83ed5
2020-11-13 18:34:40 -08:00
Wanchao Liang
fa560ceb9c [reland] make intrusive_ptr as a pybind holder type (#47586)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47586

Relanding PR https://github.com/pytorch/pytorch/pull/44492, with additional Capsule-related wrapping added to ensure pybind11 still resolves Capsule as the correct type, torch._C.CapsuleType.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24822519

Pulled By: wanchaol

fbshipit-source-id: eaaea446fb54b56ed3b0d04c31481c64096e9459
2020-11-10 10:09:08 -08:00
Yi Wang
98aad933b6 [pytorch][PR] Record FutureNCCL callback stream on CUDA caching allocator (#45318)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45318

When calling `then()` from WorkNCCL, record the input data pointers in futureNCCLCallbackStream_ before the execution of the input callback.

Note that the recording cannot be directly added to the lambda used by addCallback in ProcessGroupNCCL.hpp. This is because the type of the future value in that context is a pyobject rather than a TensorList, and a type cast would require pybind and introduce a Python dependency, which should not be allowed in the c10d library.

I have considered creating a util function in a separate file to support this type casting, and then placing it under torch/csrc directory where python dependency is allowed. However, torch/csrc has a dependency on c10d, so this will create a circular dependency.

Finally, a `record_stream_cb_` member is added to FutureNCCL, with a default value of nullptr. A default `record_stream_cb_` implementation is added to `PythonFutureWrapper`, where a Python dependency is allowed.

In addition, a few lines are reformatted by lint.
caffe2/torch/csrc/distributed/c10d/init.cpp is only reformatted.

#Closes: https://github.com/pytorch/pytorch/issues/44203

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- ProcessGroupNCCLTest
buck test mode/dev-nosan caffe2/test/distributed:c10d  -- test_accumulate_gradients_no_sync_allreduce_with_then_hook
buck test mode/dev-nosan caffe2/test/distributed:c10d  -- test_ddp_comm_hook_allreduce_with_then_hook_nccl

Reviewed By: pritamdamania87

Differential Revision: D23910257

fbshipit-source-id: 66920746c41f3a27a3689f22e2a2d9709d0faa15
2020-10-22 01:49:47 -07:00
chengjun
5741de883a Define the record_stream method in native_functions.yaml (#44301)
Summary:
The record_stream method was hard-coded for the CUDA device. Defining record_stream in native_functions.yaml enables dynamic dispatch to different end devices.
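
For illustration, a hedged usage sketch of the dispatched method (requires a CUDA device; not taken from the PR itself):

```python
import torch

side_stream = torch.cuda.Stream()
x = torch.empty(1 << 20, device="cuda")  # allocated on the current stream

with torch.cuda.stream(side_stream):
    y = x * 2  # x is consumed on a different stream

# Tell the caching allocator that x is in use on side_stream, so its memory is
# not reused until the work queued on that stream completes.
x.record_stream(side_stream)
```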

Fixes https://github.com/pytorch/pytorch/issues/36556

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44301

Reviewed By: glaringlee

Differential Revision: D23763954

Pulled By: ezyang

fbshipit-source-id: e6d24f5e7892b56101fa858a6cad2abc5cdc4293
2020-10-13 09:15:22 -07:00
Brian Hirsh
a3caa719af fix #45552 - adding add_done_callback(fn) to torch.futures.Future (#45675)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45675

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24055353

Pulled By: bdhirsh

fbshipit-source-id: 9233c8e17acc878f0fecbe740a4397fb55cf722f
2020-10-13 07:47:36 -07:00
gunandrose4u
f07ac6a004 Fix Windows build failure after DDP PR merged (#45335)
Summary:
Fixes #{issue number}
This is a resubmit of PR https://github.com/pytorch/pytorch/issues/42897, together with a fix for the Windows build issue introduced by PR https://github.com/pytorch/pytorch/issues/44344.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45335

Reviewed By: zou3519

Differential Revision: D23931471

Pulled By: mrshenli

fbshipit-source-id: f49b5a114944c1450b32934b3292170be064f494
2020-09-25 12:37:50 -07:00
Mike Ruberry
103fa3894a Revert D23841786: [pytorch][PR] Enable distributed package on windows, Gloo backend supported only
Test Plan: revert-hammer

Differential Revision:
D23841786 (0122299f9b)

Original commit changeset: 334ba1ed73ef

fbshipit-source-id: ec95432f9957df56a5a04e52661f5db920b7f57f
2020-09-24 22:44:33 -07:00
gunandrose4u
0122299f9b Enable distributed package on windows, Gloo backend supported only (#42897)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42095

Test cases will be committed to this PR later.

mrshenli, please help to review

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42897

Reviewed By: osalpekar

Differential Revision: D23841786

Pulled By: mrshenli

fbshipit-source-id: 334ba1ed73eff2f668857390fc32d1bc7f08e5f3
2020-09-24 21:13:55 -07:00
Yanan Cao
0bd35de30e Add Enum convert back to Python object support (#43121)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43121

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D23222628

Pulled By: gmagogsfm

fbshipit-source-id: 6850c56ced5b52943a47f627b2d1963cc9239408
2020-08-21 10:36:51 -07:00
Sinan Nasir
6e1127ea3f [NCCL] Changed FutureNCCL's then callback logic for better efficiency. (#42869)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42869

We realized that when we invoke a simple callback that divides the tensors by `world_size` after `allreduce`, performance in terms of QPS was almost 50% lower than in the case where a simple `allreduce` hook is used with no `then` callback.

The main problem was that, since we call `work.wait()` before invoking the `then` callback, we were synchronizing `work`'s stream with the default PyTorch stream inside [`runHook`](https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp#L609) and stalling the backward computation.

In this PR, we ensure that FutureNCCL's `then` callback does not stall the backward computation. Assuming single-process single-device, `FutureNCCL` gets a new stream from the device's pool using `at::cuda::getStreamFromPool` to run the `callback` and, before invoking the `callback` inline, it synchronizes `WorkNCCL`'s stream with the callback's stream rather than the default stream.
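
For illustration, a rough sketch of the pattern whose overhead this addresses (hedged: assumes a process group already initialized with the NCCL backend, and an illustrative gradient tensor):

```python
import torch
import torch.distributed as dist

grad = torch.ones(1024, device="cuda")
work = dist.all_reduce(grad, async_op=True)
fut = work.get_future()

# The chained callback now runs on a stream taken from the device's pool,
# synchronized with the allreduce stream, instead of stalling the default stream.
averaged_fut = fut.then(lambda f: f.value()[0] / dist.get_world_size())
result = averaged_fut.wait()
```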

ghstack-source-id: 110208431

Test Plan: Run performance benchmark tests to validate performance issue is resolved. Also, `python test/distributed/test_c10d.py` to avoid any odd issues.

Reviewed By: pritamdamania87

Differential Revision: D23055807

fbshipit-source-id: 60e50993f1ed97497514eac5cb1018579ed2a4c5
2020-08-19 19:42:22 -07:00
Basil Hosmer
feeb515ad5 add Quantizer support to IValue (#42438)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42438

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D22894190

Pulled By: bhosmer

fbshipit-source-id: b2d08abd6f582f29daa6cc7ebf05bb1a99f7514b
2020-08-05 12:56:18 -07:00
Will Constable
6d1e43c5a6 Release the GIL before invokeOperator (#42341)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41865

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42341

Reviewed By: ezyang

Differential Revision: D22928622

Pulled By: wconstab

fbshipit-source-id: 8fa41277c9465f816342db6ec0e6cd4b30095c5c
2020-08-05 11:51:39 -07:00
Yanan Cao
655f376460 Implement Enum sugared value and Enum constant support (#42085)
Summary:
[3/N] Implement Enum JIT support

* Add support for Enum values as constants
* Add sugared value for EnumClass

Supported:
Enum-typed function arguments
Using Enum types and comparing them
Getting name/value attrs of enums
Using Enum values as constants
(a minimal sketch follows the TODO list)

TODO:
Add Python sugared value for Enum
Support Enum-typed return values
Support serialization and deserialization
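
The sketch referenced above (hedged: scripting only, since serialization is still TODO at this point; enum and function names are illustrative):

```python
import torch
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

@torch.jit.script
def describe(c: Color) -> str:
    # Enum-typed argument, comparison against an enum constant,
    # and access to .name / .value inside TorchScript.
    if c == Color.RED:
        return "red:" + str(c.value)
    return c.name

print(describe(Color.RED))    # red:1
print(describe(Color.GREEN))  # GREEN
```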

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42085

Reviewed By: eellison

Differential Revision: D22758042

Pulled By: gmagogsfm

fbshipit-source-id: 5c6e571686c0b60d7fbad59503f5f94b3b3cd125
2020-07-31 17:29:55 -07:00
Shen Li
d4736ef95f Add done() API to Future (#42013)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42013

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D22729596

Pulled By: mrshenli

fbshipit-source-id: ed31021a35af6e2c3393b9b14e4572cf51013bc0
2020-07-24 14:13:41 -07:00
Yanan Cao
4a3aad354a [1/N] Implement Enum JIT support (#41390)
Summary:
* Add EnumType and AnyEnumType as first-class JIT types
* Add Enum-typed IValue
* Enhance aten::eq to support Enum

Supported:
Enum-typed function arguments
Using Enum types and comparing them

TODO:
Add Python sugared value for Enum
Support getting name/value attrs of enums
Support Enum-typed return values
Support enum values of different types in the same Enum class
Support serialization and deserialization

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41390

Reviewed By: eellison

Differential Revision: D22524388

Pulled By: gmagogsfm

fbshipit-source-id: 1627154a64e752d8457cd53270f3d14aea4b1150
2020-07-18 22:15:06 -07:00
Michael Suo
ca1b8ebbcb move misc implementation out of jit/__init__.py (#41154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41154

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22445213

Pulled By: suo

fbshipit-source-id: 200545715c5ef13beb1437f49e01efb21498ddb7
2020-07-13 16:59:55 -07:00
Michael Suo
c93e96fbd9 [jit] move script-related implementation out of torch/jit/__init__.py (#40902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40902

See the bottom of this stack for context.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22360210

Pulled By: suo

fbshipit-source-id: 4275127173a36982ce9ad357aa344435b98e1faf
2020-07-08 11:38:34 -07:00
Sebastian Messmer
53af9df557 Unify boxed function signature between jit and c10 (#37034)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37034

c10 takes a Stack* in boxed functions while JIT took Stack&.
c10 doesn't return anything while JIT returns an int which is always zero.

This changes JIT to follow the c10 behavior.
ghstack-source-id: 106834069

Test Plan: unit tests

Differential Revision: D20567950

fbshipit-source-id: 1a7aea291023afc52ae706957e9a5ca576fbb53b
2020-06-29 19:24:26 -07:00
Lu Fang
8315bb2359 Back out "[pytorch][PR] [JIT] Infer NamedTuple type attributes of nn.Modules correctly" (#40270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40270

Original commit changeset: 1227e243ab94

D22082806 (1e03d603c6) broke model generation for PyPer models, which trace a namedtuple as input. To unblock the development of the PyPer project, let's revert the diff first.

Sorry about the inconvenience, SplitInfinity
ghstack-source-id: 106217609

Test Plan: buck run dper3/dper3_models/experimental/pytorch/feed:feed_generation_script -- --model_files_dir=/tmp/

Reviewed By: alyssawangqq

Differential Revision: D22132960

fbshipit-source-id: ce9278c8462602a341e231ea890e46f74e743ddf
2020-06-19 02:58:31 -07:00
Meghan Lele
1e03d603c6 [JIT] Infer NamedTuple type attributes of nn.Modules correctly (#39116)
Summary:
**Summary**
This commit modifies type inference for `nn.Module` instance attributes
such that the type of a `NamedTuple` attribute is inferred correctly and
such that the field names of this `NamedTuple` instance can be used in
scripted methods. At present, the type of this attribute is inferred to be
`Tuple[T, U, ..., V]`, so the field must be referred to by index and
cannot be referred to by name.
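
For illustration, a hedged sketch of the behavior this enables (type and module names are illustrative):

```python
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.p = Point(torch.zeros(1), torch.ones(1))

    def forward(self) -> torch.Tensor:
        # Field access by name rather than by tuple index
        return self.p.x + self.p.y

print(torch.jit.script(M())())  # tensor([1.])
```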

**Test Plan**
This commit adds a unit test to test that a field of a `NamedTuple`
attribute can be referred to by name in a scripted method.

**Fixes**
This commit fixes https://github.com/pytorch/pytorch/issues/37668.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39116

Differential Revision: D22082806

Pulled By: SplitInfinity

fbshipit-source-id: 1227e243ab941376cd5e382fb093751e88dc8846
2020-06-17 13:58:15 -07:00
Yanan Cao
c22bbb2124 [JIT] Add Type::repr_str to return human-readable str (#39544)
Summary:
Clearly expressing that a type was inferred by PyTorch rather than explicitly annotated by the user makes many error messages more user-friendly.

Currently, Type has two string conversion methods: str() for IR printing and python_str() for serialization and error message generation. If we want to include more information in type printing while maintaining serialization/deserialization correctness, we need to split python_str() into annotation_str() and repr_str().

annotation_str() is solely responsible for serialization; it strictly matches the format of Python type annotations. repr_str() is responsible for generating a human-readable error message that includes information like "this type is inferred, not explicitly annotated".

Closes https://github.com/pytorch/pytorch/issues/39449
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39544

Differential Revision: D21978759

Pulled By: gmagogsfm

fbshipit-source-id: 733566f5a62e748b5ca4bb3c5943ebb6d5b664d0
2020-06-10 12:01:24 -07:00
Elias Ellison
49b69b2ade [JIT] fix broadcasting lists of ints (#39481)
Summary:
Previously, on conversion from Python to C++, the list was cast to a double list through bad copy-paste. It's pretty unusual for someone to script a broadcasting-list function directly since it's an internal API, so this was unlikely to affect anyone.
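
For illustration, a hedged sketch of the internal annotation involved (`BroadcastingList2` is not a public API, and the function name is illustrative):

```python
import torch
from torch._jit_internal import BroadcastingList2

@torch.jit.script
def kernel_area(size: BroadcastingList2[int]) -> int:
    # A single int broadcasts to [int, int]; with the fix the elements
    # stay ints instead of being silently cast to floats.
    return size[0] * size[1]

print(kernel_area(3))       # 9
print(kernel_area([2, 5]))  # 10
```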

Fix for https://github.com/pytorch/pytorch/issues/39450
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39481

Reviewed By: jamesr66a

Differential Revision: D21870557

Pulled By: eellison

fbshipit-source-id: e704e5e87d2702a270b7d65c4df444246a134480
2020-06-04 12:16:41 -07:00
Shen Li
bb0377bb24 Expose torch.futures.Future (#39008)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39008

This commit adds a `torch.futures.Future` type and exposes its ctor,
`wait`, `then`, and `set_result` APIs. This type is currently a
wrapper of `c10::ivalue::Future` and mainly used by RPC for now. Later,
we could revamp c10d APIs to return this `Future` type as well. More
utils will be added into `torch.futures` package in followup PRs.
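
For illustration, a minimal usage sketch of the exposed APIs mentioned above:

```python
import torch

fut = torch.futures.Future()

# then() returns a new Future that completes with the callback's return value
chained = fut.then(lambda f: f.wait() + 1)

fut.set_result(41)
print(chained.wait())  # 42
```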

Test Plan: Imported from OSS

Differential Revision: D21723022

Pulled By: mrshenli

fbshipit-source-id: 92e56160544e9bf00d11db3e8347a1b9707882c9
2020-06-02 10:12:56 -07:00
Michael Suo
0d220ef381 [torchbind] Better error message when missing init. (#37474)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37474

Previously we would segfault

Test Plan: Imported from OSS

Differential Revision: D21297542

Pulled By: suo

fbshipit-source-id: c7e2f828a250c490ec23fb51c6a4a642d3370e52
2020-05-13 17:38:31 -07:00
Shen Li
2e9d6d99be Explicitly decref py::object in ConcretePyObjectHolder and PythonFunctionGuard (#38364)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38364

Test Plan: Imported from OSS

Differential Revision: D21537611

Pulled By: mrshenli

fbshipit-source-id: e22d1f1360cf71bec526841b5014013b11316f8d
2020-05-12 20:55:53 -07:00
Shen Li
dad552666e Add then(callback)->Future API to ivalue::Future (#37311)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37311

Test Plan: Imported from OSS

Differential Revision: D21247827

Pulled By: mrshenli

fbshipit-source-id: f8fe0617ccb957aa747a78554a000ce2c4a58495
2020-05-11 21:58:56 -07:00