Commit Graph

12304 Commits

Author SHA1 Message Date
Nikitha Malgi
616503ac19 Merge branch 'cuda_stream_jit' of https://github.com/pytorch/pytorch into cuda_stream_jit 2020-11-13 08:14:10 -08:00
Nikitha Malgi
91142f77fe WIP changes 2020-10-26 09:30:51 -07:00
Nikitha Malgi
fa3801a44e WIP changes 2020-10-26 08:41:58 -07:00
Nikitha Malgi
68eb21b72a Adding JIT CUDA Stream support 2020-10-23 11:00:41 -07:00
Luca Wehrstedt
f230245c06 Revert D24422354: [pytorch][PR] fix-process-group-counter
Test Plan: revert-hammer

Differential Revision:
D24422354 (caed29a069)

Original commit changeset: 32493cc2001d

fbshipit-source-id: 9b633f738ea555f45031056689f780dde8eda859
2020-10-23 08:04:37 -07:00
Ansley Ussery
6c5f634657 Fix grammar and spelling errors (#46713)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46713

Test Plan: Imported from OSS

Reviewed By: Lilyjjo

Differential Revision: D24477771

Pulled By: ansley

fbshipit-source-id: bc39b63ab2158a5233e48b89bfaa97a4cfb1f7a1
2020-10-23 01:31:17 -07:00
Christian Hudon
511f89eaa9 Add nvtx.range() context manager (#42925)
Summary:
Small quality-of-life improvement to the NVTX Python bindings, which we're using internally and which would be useful to other folks using NVTX annotations via PyTorch. (And my first potential PyTorch contribution.)

Instead of needing to be careful with try/finally to make sure every `range_push` is matched by a `range_pop`:

```
nvtx.range_push("Some event")
try:
    # Code here...
finally:
    nvtx.range_pop()
```

you can simply do:

```
with nvtx.range("Some event"):
    # Code here...
```

or even use it as a decorator:

```
class MyModel(nn.Module):

    # Other methods here...

    @nvtx.range("MyModel.forward()")
    def forward(self, *input):
        # Forward pass code here...
```

A couple small open questions:

1. I also added the ability to call `msg.format()` inside `range()`, with the intention that, if nothing is listening to NVTX events, we skip the string formatting to lower the overhead in that case (a sketch follows this list). If you like that idea, I could add the actual "skip string formatting if nobody is listening" logic; we can also just leave it as is, or I can remove it if you folks don't like it. (In the first two cases, should we add the same to `range_push()` and `mark()` too?) Just let me know which option it is, and I'll update the pull request.

2. I don't think there are many places for bugs to hide in that function, but I can certainly add a quick test, if you folks want.
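A minimal sketch of the lazy-formatting idea in point 1, assuming a hypothetical `_nvtx_listening()` check (the real bindings only expose `range_push`/`range_pop`/`mark`):

```python
import torch
from contextlib import contextmanager

def _nvtx_listening():
    # Placeholder: a real implementation would query whether an NVTX
    # collector (e.g. Nsight Systems) is currently attached.
    return False

@contextmanager
def range(msg, *args, **kwargs):
    pushed = _nvtx_listening()
    if pushed:
        # Defer msg.format() until we know someone is listening, so
        # disabled profiling pays no string-formatting cost.
        torch.cuda.nvtx.range_push(msg.format(*args, **kwargs))
    try:
        yield
    finally:
        if pushed:
            torch.cuda.nvtx.range_pop()
```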

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42925

Reviewed By: gchanan

Differential Revision: D24476977

Pulled By: ezyang

fbshipit-source-id: 874882818d958e167e624052e42d52fae3c4abf1
2020-10-22 19:46:16 -07:00
albanD
27e2ea4cea Make add_relu an internal function (#46676)
Summary:
Cleanup for 1.7

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46676

Reviewed By: gchanan

Differential Revision: D24458565

Pulled By: albanD

fbshipit-source-id: b1e4b4630233d3f1a4bac20e3077411d1ae17f7b
2020-10-22 18:08:15 -07:00
lixinyu
870a5a0d6d Enable DataParallel to run zero input Module (#46565)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46565

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D24405275

Pulled By: glaringlee

fbshipit-source-id: a8baaf4cf227f7f21fc3b080a446f92f0effe18e
2020-10-22 18:04:33 -07:00
Supriya Rao
842494af77 [quant][fx] EmbeddingBag quantization support (#46678)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46678

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_qembedding_bag_module

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24463306

fbshipit-source-id: 175e77f4450344fbf63409be35338b0c29afd585
2020-10-22 18:04:31 -07:00
Supriya Rao
e34c825b77 [quant][fx] Embedding quantization support (#46677)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46677

Add support for weight-only embedding quantization

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_qembedding_module

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24463305

fbshipit-source-id: 2dba49d8a77cf237a8e6da2efdd83b1ebdc432d6
2020-10-22 17:59:52 -07:00
Jeff Daily
ce5bca5502 ProcessGroupNCCL::alltoall_base needs to call recordStream (#46603)
Summary:
For reasons similar to those documented in the `[Sync Streams]` note. For a current example, `ProcessGroupNCCL::allgather` must also call `recordStream`, and it already does so.

The output tensor is created on the default stream (by the application).  NCCL/RCCL internally uses another stream (i.e., ncclStream).  If we do not record the output tensor on the ncclStream, there is a chance that the output tensor might be deallocated while NCCL/RCCL is using it.

The application is not aware of the ncclStream since it's internal to ProcessGroupNCCL.  So, the application cannot record the output tensor on the ncclStream.

Patch originally developed by sarunyap.
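The same caching-allocator hazard can be sketched from the Python side (a minimal illustration, assuming a CUDA device; the side stream stands in for the internal ncclStream):

```python
import torch

side_stream = torch.cuda.Stream()

# Allocated on the current (default) stream by the application.
out = torch.empty(1024, device="cuda")

with torch.cuda.stream(side_stream):
    out.add_(1)  # the collective's work happens on the side stream

# Tell the caching allocator that `out` is in use on side_stream; without
# this, once `out` is freed its memory could be handed to a new allocation
# while the side stream is still writing to it.
out.record_stream(side_stream)
```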

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46603

Reviewed By: srinivas212

Differential Revision: D24458530

fbshipit-source-id: b02e74d1c3a176ea1b9bbdd7dc671b221fcadaef
2020-10-22 15:53:19 -07:00
Jerry Zhang
bd90379df5 [quant][graphmode][fx] Add support for additional_fuse_method_mapping (#46345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46345

Allow users to add more fusion mappings

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317439

fbshipit-source-id: 3b144bbc305e41efbdf3e9fb25dbbeaad9e86c6a
2020-10-22 15:15:31 -07:00
Hao Lu
d6519d4e9f [pt][static_runtime] Add option enable_out_variant (#46690)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46690

- Add option enable_out_variant to Static Runtime
- Add gflags --pt_cleanup_activations and --pt_enable_out_variant to the benchmark script

Reviewed By: yinghai, houseroad

Differential Revision: D24438107

fbshipit-source-id: c1185c0fee93edc0118542b2faa8bc4ffdd19075
2020-10-22 15:00:23 -07:00
Jerry Zhang
23fad9111e [quant][graphmode][fx] Add additional_qat_module_mapping (#46344)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46344

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317438

fbshipit-source-id: f9e73aeb4c7a107c8df0bae8319464e7d5d7275b
2020-10-22 13:11:26 -07:00
James Reed
9ccf85b7b4 [FX] Make wrapped functions traceable (#46692)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46692

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D24465958

Pulled By: jamesr66a

fbshipit-source-id: 8c04aa3f59d1371d730ded7abd8f0c6c047e76b6
2020-10-22 12:00:02 -07:00
James Reed
2700932ef2 [FX] Fix recursion depth issue on Graph deepcopy (#46669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46669

Make `Graph`'s deepcopy behavior iterative rather than recursive. This prevents stack-overflow issues with very large `Graph`s.
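A generic illustration of the recursion-to-iteration rewrite on a toy linked structure (not the FX implementation):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def copy_chain(head):
    # Walk and copy with an explicit loop; a naive recursive copy of a
    # 10_000-node chain would overflow Python's default ~1000-frame stack.
    copies = []
    node = head
    while node is not None:
        copies.append(Node(node.value))
        node = node.next
    for a, b in zip(copies, copies[1:]):
        a.next = b
    return copies[0] if copies else None

head = None
for v in reversed(range(10_000)):
    head = Node(v, head)
assert copy_chain(head).value == 0
```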

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D24455120

Pulled By: jamesr66a

fbshipit-source-id: 5c37db5acabe313b9a7a464bebe2a82c59e4e2e9
2020-10-22 11:55:23 -07:00
Pritam Damania
06d50b5eb0 Pull in fairscale.nn.Pipe into PyTorch. (#44090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44090

This is an initial commit pulling in the torchgpipe fork at
https://github.com/facebookresearch/fairscale.

The purpose of this commit is to just pull in the code and ensure all tests and
builds work fine. We will slowly modify this to match our intended API
mentioned in https://fb.quip.com/txurAV3zIFox#RPZACAfAKMq. Follow-up PRs will
address further changes needed on top of the initial commit.

We're pulling the code into the `torch.distributed._pipeline.sync` package. The
package is private on purpose since there is a lot of work (ex: docs, API
changes etc.) that needs to go in before we can actually officially support
this.
ghstack-source-id: 114864254

Test Plan:
1) waitforbuildbot
2) Ran all tests on my devgpu

Reviewed By: mrshenli

Differential Revision: D23493316

fbshipit-source-id: fe3c8b7dadeeb86abdc00e8a8652491b0b16743a
2020-10-22 10:59:02 -07:00
Alexander Grund
93719440b8 Replace map(lambda constructs (#46462)
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal

Makes them more readable and possibly faster. Care has to be taken when choosing the replacement: a list comprehension `[f(x) for x in xs]` builds the whole list immediately, while a generator expression `(f(x) for x in xs)` is evaluated lazily (as Python 3's `map` is). The lazy form is a benefit in cases where the full list of values never needs to exist in memory (e.g. when passing to `tuple` or `extend` or `join`).
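For instance, a rewrite of this kind might look like the following (illustrative, not a diff from the PR):

```python
xs = [1, 2, 3]

# Before: map with a lambda.
squares = list(map(lambda x: x * x, xs))

# After: a list comprehension builds the whole list up front...
squares = [x * x for x in xs]

# ...while a generator expression stays lazy, so no intermediate list is
# materialized when the consumer only iterates once.
print(",".join(str(x * x) for x in xs))
```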

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462

Reviewed By: zou3519

Differential Revision: D24422343

Pulled By: ezyang

fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
2020-10-22 09:50:22 -07:00
Rohan Varma
25dc0056f2 [RPC] print exception message on workers that run python functions (#46372)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46372

Currently, in `_run_function`, we catch an exception from the Python
function being run and report it back to the master. However, in some
large-scale training jobs, it would be valuable to also log the error on the
trainer itself for faster debugging.
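A sketch of the pattern (signature and error wrapping simplified relative to the real `_run_function`):

```python
import logging

logger = logging.getLogger(__name__)

def _run_function(python_udf):
    try:
        return python_udf.func(*python_udf.args, **python_udf.kwargs)
    except Exception:
        # Log locally on the worker before the error is reported back to
        # the master, so large jobs can be debugged from either side.
        logger.exception("Exception while running %r", python_udf.func)
        raise
```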

Test Plan: Added unittest.

Reviewed By: pritamdamania87

Differential Revision: D24324578

fbshipit-source-id: 88460d7599ea69d2c38fd9c10eb6471f7edd4100
2020-10-22 09:44:15 -07:00
Ivan Kobzarev
3112e23428 [py][vulkan][reland] Add is_vulkan to py api, add vulkan to device type parsing (#46655)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46655

Test Plan: Imported from OSS

Pulled By: IvanKobzarev

Reviewed By: mrshenli

Differential Revision: D24448984

fbshipit-source-id: 5000846a06077f7a5a06dd51da422d2a42f70820
2020-10-22 09:35:50 -07:00
Rohan Varma
7245d2c939 Avoid scatter for single-device case in DDP (#46304)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46304

In the case that a single process operates only on one GPU, we can
avoid this scatter and instead replace it with a recursive version of `to`
which transfers the input tensors to the correct device.

The implementation of `_recursive_to` is modeled after `scatter` in https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/scatter_gather.py, in order to keep parity with the previous conventions (i.e. custom types not having their tensors moved).
ghstack-source-id: 114896677
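A simplified sketch of what such a recursive `to` might look like (hypothetical helper, not the actual `_recursive_to`; per the scatter conventions above, custom types pass through with their tensors unmoved):

```python
import torch

def recursive_to(obj, device):
    if isinstance(obj, torch.Tensor):
        return obj.to(device)
    if isinstance(obj, (list, tuple)):
        return type(obj)(recursive_to(o, device) for o in obj)
    if isinstance(obj, dict):
        return {k: recursive_to(v, device) for k, v in obj.items()}
    return obj  # custom types are left untouched

batch = {"x": torch.zeros(2), "ids": ["a", "b"]}
batch = recursive_to(batch, torch.device("cpu"))
```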

Test Plan: Added unittest, and CI

Reviewed By: pritamdamania87

Differential Revision: D24296377

fbshipit-source-id: 536242da05ecabfcd36dffe14168b1f2cf58ca1d
2020-10-22 08:29:37 -07:00
albanD
143d1fd9f5 Namespace cleanup for 1.7 Part 2 (#46673)
Summary:
Make `valgrind_toggle` and `valgrind_supported_platform` private functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46673

Reviewed By: gchanan

Differential Revision: D24458133

Pulled By: albanD

fbshipit-source-id: 6f3fad9931d73223085edbd3cd3b7830c569570c
2020-10-22 07:57:51 -07:00
albanD
16c5b7b3f2 Avoid leaking has_torch_function and handle_torch_function in torch namespace (#46680)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46680

Reviewed By: zou3519

Differential Revision: D24459823

Pulled By: albanD

fbshipit-source-id: 4ff6925afcf14214dc45921bca0d2f33ca1944a1
2020-10-22 07:48:36 -07:00
Pearu Peterson
905ed3c840 Revised sparse tensor documentation. (#45400)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/44635.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45400

Reviewed By: ezyang

Differential Revision: D24359410

Pulled By: mruberry

fbshipit-source-id: 37c691a49a7b0042c7a298e0ed1226702b097c8b
2020-10-22 02:07:54 -07:00
kshitij12345
8e13fe6c44 [numpy] torch.sin : support and promote integer inputs to float (#45733)
Summary:
References https://github.com/pytorch/pytorch/issues/42515

> Enable integer -> float unary type promotion for ops like sin

Will follow up on other such ops once this PR is merged.
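With this change, integer inputs are promoted to the default floating-point dtype:

```python
import torch

t = torch.arange(4)          # dtype: torch.int64
print(torch.sin(t).dtype)    # torch.float32 (the default float dtype)
```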

cc: mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45733

Reviewed By: zou3519

Differential Revision: D24431194

Pulled By: mruberry

fbshipit-source-id: db600bc5de0e535b538d2aa301c3526b7c75ed17
2020-10-22 01:58:57 -07:00
Yi Wang
98aad933b6 [pytorch][PR] Record FutureNCCL callback stream on CUDA caching allocator (#45318)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45318

When calling `then()` from WorkNCCL, record the input data pointers in futureNCCLCallbackStream_ before the execution of the input callback.

Note that the recording cannot be added directly to the lambda used by addCallback in ProcessGroupNCCL.hpp, because the future value in that context is a PyObject rather than a TensorList, and casting it would require pybind and introduce a Python dependency, which should not be allowed in the c10d library.

I have considered creating a util function in a separate file to support this type casting, and then placing it under torch/csrc directory where python dependency is allowed. However, torch/csrc has a dependency on c10d, so this will create a circular dependency.

Finally, a `record_stream_cb_` member is added to FutureNCCL, with a default value of nullptr. A default `record_stream_cb_` implementation is added to `PythonFutureWrapper`, where a Python dependency is allowed.

In addition, a few lines are reformatted by lint; caffe2/torch/csrc/distributed/c10d/init.cpp is reformatted only.

Closes: https://github.com/pytorch/pytorch/issues/44203

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- ProcessGroupNCCLTest
buck test mode/dev-nosan caffe2/test/distributed:c10d  -- test_accumulate_gradients_no_sync_allreduce_with_then_hook
buck test mode/dev-nosan caffe2/test/distributed:c10d  -- test_ddp_comm_hook_allreduce_with_then_hook_nccl

Reviewed By: pritamdamania87

Differential Revision: D23910257

fbshipit-source-id: 66920746c41f3a27a3689f22e2a2d9709d0faa15
2020-10-22 01:49:47 -07:00
Jerry Zhang
ab28bd528d [quant][graphmode][fx] Support quantizing FloatFunctional (#46634)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46634

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24438227

fbshipit-source-id: f33439d51112e13f59ee4292e804495d38fa3899
2020-10-22 01:21:17 -07:00
Xiao Wang
fe4f90c40b Cusolver inverse check info (#46625)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46557

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46625

Reviewed By: zou3519

Differential Revision: D24438577

Pulled By: ngimel

fbshipit-source-id: d00e6eb2eae4aa39ca6ecf5914fe9cf37c24b906
2020-10-21 21:46:33 -07:00
Yi Wang
adffd8eb6b Add const to the first arg 'grad' of Reducer::copy_grad_to_bucket (#46501)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46501

Gradients in this method will not be modified.
ghstack-source-id: 114851646

Test Plan: waitforbuildbot

Reviewed By: pritamdamania87

Differential Revision: D24374300

fbshipit-source-id: a2941891008f9f197a5234b50260218932d2d37d
2020-10-21 21:34:31 -07:00
Brian Hirsh
db83ddcb86 small doc fix (#46599)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46599

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D24426181

Pulled By: bdhirsh

fbshipit-source-id: d0900d5c43574c80f1bf614824eafd21ba6a9caf
2020-10-21 20:17:31 -07:00
Rahul Nambiar
adbb50ea67 Enabling alias annotation checks for all operations during autograd tests (#46601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46601

* Except for excluded tests and magic methods.

https://github.com/pytorch/pytorch/issues/38731

Previously, we only ran these tests for inplace operations. Since this covers a lot more tests, fixed the following issues that came up when running them:
- Updated the schema of conj() to reflect existing behaviour.
- Updated the deepEquals method in check_alias_annotation.cpp to reuse the overloaded == operator. The previous implementation did not cover all types of IValues.
- Corrected the order in which inputs are passed during autograd testing of 'view' & 'reshape'.
- Substituted aten::ger with the function it's aliased to, aten::outer, for testing. The alias annotation checking code doesn't handle aliased operators properly.
ghstack-source-id: 114830903

Test Plan: Ran all tests in test:jit and verified they pass.

Reviewed By: eellison

Differential Revision: D24424955

fbshipit-source-id: 382d7e2585911b81b1573f21fff1d54a5e9a2054
2020-10-21 20:01:57 -07:00
Jerry Zhang
13decddae2 [reland][quant] Add FixedQParamsFakeQuantize module (#45538) (#46657)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46657

This is used to simulate the fake quantize operation for ops with fixed quantization parameters, e.g. hardsigmoid.
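In spirit this is an ordinary quantize/dequantize round-trip whose scale and zero point are constants rather than observed values; a rough sketch (the quint8 parameters below are assumptions for a hardsigmoid-style [0, 1) output range):

```python
import torch

def fixed_qparams_fake_quant(x, scale=1.0 / 256.0, zero_point=0,
                             quant_min=0, quant_max=255):
    # Quantize with fixed qparams, then dequantize back to float.
    q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    return (q - zero_point) * scale

x = torch.nn.functional.hardsigmoid(torch.randn(4))
print(fixed_qparams_fake_quant(x))
```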

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24451406

fbshipit-source-id: 26cc140c00f12bdec9a8f9dc880f4c425f4d4074
2020-10-21 16:47:11 -07:00
Jerry Zhang
746febdeac [quant][graphmode][fx] Add additional_object_mapping argument to convert (#46338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46338

Should we merge quantized module and quantized operator configurations?

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317435

fbshipit-source-id: 3575251fe9d80a6628b8c3243c2ed92ea5e921e3
2020-10-21 16:39:07 -07:00
Ansley Ussery
475b4e30e6 Allow for source code comments at any level of indentation (#46548)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46548

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24434778

Pulled By: ansley

fbshipit-source-id: e24ed73d497381e02ef1155622641027ae34770a
2020-10-21 13:49:42 -07:00
Joel Lamy-Poirier
caed29a069 fix-process-group-counter (#46563)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46561

A minimal fix to issue https://github.com/pytorch/pytorch/issues/46561. Increment the global variable `_group_count` at the same time as the others so the global state remains consistent in case of a failure.
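A generic illustration of the principle (hypothetical names, not the c10d source): update related module-level state together so an exception partway through cannot leave it inconsistent.

```python
_group_count = 0
_group_names = {}

def _new_group(create_backend):
    global _group_count
    # Increment alongside the other bookkeeping rather than after the
    # fallible call, so every attempt sees a fresh, consistent counter.
    _group_count += 1
    name = f"group_{_group_count}"
    _group_names[name] = create_backend()
    return name
```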

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46563

Reviewed By: zou3519

Differential Revision: D24422354

Pulled By: mrshenli

fbshipit-source-id: 32493cc2001d21ad366c396d16c303936959434e
2020-10-21 13:03:53 -07:00
Hugo van Kemenade
09896eda14 Fix version comparisons for Python 3.6, 3.10 and 4 (#32389)
Summary:
There's some code which uses `six.PY3`, similar to:

```python
if six.PY3:
    print("Python 3+ code")
else:
    print "Python 2 code"
```

Where:

```python
PY3 = sys.version_info[0] == 3
```

When run on Python 4, this will run the Python 2 code! Instead, use `six.PY2` and avoid `six.PY3`.

 ---

Similarly, there are some `sys.version_info[0] == 3` checks, better written as `sys.version_info[0] >= 3`.

 ---

Also, it's better to avoid comparing the `sys.version` string, as it makes assumptions that each version component is exactly one character long, which will break in Python 3.10:

```pycon
>>> sys.version
'3.8.1 (v3.8.1:1b293b6006, Dec 18 2019, 14:08:53) \n[Clang 6.0 (clang-600.0.57)]'
>>> sys.version < "3.3"
False
>>> fake_v3_10 = '3.10.1 (v3.8.1:1b293b6006, Dec 18 2019, 14:08:53) \n[Clang 6.0 (clang-600.0.57)]'
>>> fake_v3_10 < "3.3"
True
```

 ---

Finally, I think the intention here is to skip when the Python version is < 3.6:

```python
unittest.skipIf(sys.version_info[0] < 3 and sys.version_info[1] < 6, "dict not ordered")
```

However, it will really skip for Python 0.0-0.5, 1.0-1.5 and 2.0-2.5. It's best to compare to the `sys.version_info` tuple and not `sys.version_info[1]`:

```python
    unittest.skipIf(sys.version_info < (3, 6), "dict not ordered")
```

 ---

Found using https://github.com/asottile/flake8-2020:
```console
$ pip install -U flake8-2020
$ flake8 --select YTT
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/32389

Reviewed By: zou3519

Differential Revision: D24424662

Pulled By: ezyang

fbshipit-source-id: 1266c4dbcc8ae4d2e2e9b1d7357cba854562177c
2020-10-21 11:52:50 -07:00
Jithun Nair
65da50c099 Apply hip vs hipcc compilation flags correctly for building extensions (#46273)
Summary:
Fixes issues when building certain PyTorch extensions where the cpp files do NOT compile if flags such as `__HIP_NO_HALF_CONVERSIONS__` are defined.
cc jeffdaily

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46273

Reviewed By: zou3519

Differential Revision: D24422463

Pulled By: ezyang

fbshipit-source-id: 7a43d1f7d59c95589963532ef3bd3c68cb8262be
2020-10-21 11:40:40 -07:00
Ollin Boer Bohan
ac4ee0ef5d Fix typo in docs for interpolate (#46589)
Summary:
Removes a spurious backtick in [the docs for `torch.nn.functional.interpolate`](https://pytorch.org/docs/stable/nn.functional.html?highlight=grid_sample#torch.nn.functional.interpolate)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46589

Reviewed By: zou3519

Differential Revision: D24422550

Pulled By: ezyang

fbshipit-source-id: c1e6b7de4584b2a3f68b458801a33b3fc71c1944
2020-10-21 11:31:53 -07:00
Negin Raoof
96bc7faa50 [ONNX] Export var, var_mean and std_mean ops (#45678)
Summary:
Adding export for var, var_mean and std_mean ops

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45678

Reviewed By: houseroad

Differential Revision: D24398811

Pulled By: bzinodev

fbshipit-source-id: bf51422a9e035d521156c0fa6e77898aac83a380
2020-10-21 11:23:54 -07:00
Ivan Yashchuk
6de619e4a4 Allow converting parameters of nn.Module to complex dtypes (#44788)
Summary:
This PR makes it possible to cast the parameters of nn.Module to complex dtypes.
The following code works with the proposed changes.
```python
In [1]: import torch
In [2]: lin = torch.nn.Linear(5, 1).to(torch.complex64)
In [3]: lin(torch.zeros(3, 5, dtype=torch.complex64))
Out[3]:
tensor([[-0.1739+0.j],
        [-0.1739+0.j],
        [-0.1739+0.j]], grad_fn=<AddmmBackward>)
```
Fixes https://github.com/pytorch/pytorch/issues/43477.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44788

Reviewed By: zou3519

Differential Revision: D24307225

Pulled By: anjali411

fbshipit-source-id: dacc4f5c8c9a99303f74d1f5d807cd657b3b69b5
2020-10-21 08:54:59 -07:00
Howard Huang
611f028168 Add Batch-Updating Parameter Server Example to CI Tests (#46510)
Summary:
Resolves one item in https://github.com/pytorch/pytorch/issues/46321

This PR sets up DistExamplesTest, which will be used as the class for implementing future example tests and runs as part of the CI tests. It also creates a dist_examples folder and includes the [batch server example](https://github.com/pytorch/examples/blob/master/distributed/rpc/batch/parameter_server.py), slightly modified so it can be tested.

Run test:
pytest test/distributed/rpc/test_tensorpipe_agent.py -k test_batch_updating_parameter_server -vs
pytest test/distributed/rpc/test_process_group_agent.py -k test_batch_updating_parameter_server -vs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46510

Reviewed By: mrshenli

Differential Revision: D24379296

Pulled By: H-Huang

fbshipit-source-id: 1c102041e338b022b7a659a51894422addc0e06f
2020-10-21 08:46:46 -07:00
Shen Li
cebe87fe3a Revert D24379422: [py][vulkan] Add is_vulkan to py api, add vulkan to device type parsing
Test Plan: revert-hammer

Differential Revision:
D24379422 (e8fbe54cf5)

Original commit changeset: afab89bb9e17

fbshipit-source-id: 743c77e453239f10c155c67490cba5a42ab42f58
2020-10-21 08:23:05 -07:00
Mehdi Mirzazadeh
8357e2edc3 Back out "Revert D24269034: [fx] Refactor Tracer so that find_module and root args creation could be overridden by implementations" (#46573)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46573

Original commit changeset: 7dd709b585f8
ghstack-source-id: 114730143

Test Plan: Verified on circleci that previously broken test is fixed.

Reviewed By: zdevito

Differential Revision: D24413096

fbshipit-source-id: 439568c631c4556b8ed6af20fcaa4b1375e554cf
2020-10-20 22:17:36 -07:00
Ivan Kobzarev
e8fbe54cf5 [py][vulkan] Add is_vulkan to py api, add vulkan to device type parsing (#46511)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46511

Test Plan: Imported from OSS

Reviewed By: AshkanAliabadi

Differential Revision: D24379422

Pulled By: IvanKobzarev

fbshipit-source-id: afab89bb9e17c50934083598262bbe14ea82e893
2020-10-20 20:04:24 -07:00
Ashkan Aliabadi
2181449068 Revert D24004795: [quant] Add FixedQParamsFakeQuantize module
Test Plan: revert-hammer

Differential Revision:
D24004795 (253918ec55)

Original commit changeset: fc4797f80842

fbshipit-source-id: 663169e90a2f58e5a89e4d382291ae41c24d0fee
2020-10-20 19:40:21 -07:00
Pritam Damania
cb3c1d17e4 Promote -Wcast-function-type to an error in builds. (#46356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46356

Adding the flag `-Werror=cast-function-type` to ensure we don't allow
any invalid casts (e.g., PyCFunction casts).

For more details see: https://github.com/pytorch/pytorch/issues/45419
ghstack-source-id: 114632980

Test Plan: waitforbuildbot

Reviewed By: albanD

Differential Revision: D24319759

fbshipit-source-id: 26ce4650c220e8e9dd3550245f214c7e6c21a5dc
2020-10-20 18:09:06 -07:00
Yanan Cao
42a70dc5a8 Implement all communication APIs in DistributedC10d new frontend (#46053)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46053

Reviewed By: wanchaol

Differential Revision: D24300487

Pulled By: gmagogsfm

fbshipit-source-id: 0d0b01c4f9d9e1d59dd17d7606ce47d54d61951d
2020-10-20 17:52:07 -07:00
Jerry Zhang
253918ec55 [quant] Add FixedQParamsFakeQuantize module (#45538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45538

This is used to simulate the fake quantize operation for ops with fixed quantization parameters, e.g. hardsigmoid.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24004795

fbshipit-source-id: fc4797f80842daacd3b3584c5b72035774634edd
2020-10-20 17:43:25 -07:00
Lillian Johnson
f83cf2dab3 [JIT] adding torch.jit.isinstance support (#46062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46062

Adds support for torch.jit.isinstance in both eager and script mode

Example use:

```
import torch
from typing import Any, List

class TestModule(torch.nn.Module):
    def __init__(self):
        super(TestModule, self).__init__()

    def call(self, input1: str, input2: str) -> str:
        return input1

    def forward(self, input: Any) -> None:
        if torch.jit.isinstance(input, List[str]):
            for el in input:
                print(el)

TestModule().forward(["1","2"])
scripted_module = torch.jit.script(TestModule())
scripted_module(["1", "2"])
```

Test Plan: Imported from OSS

Reviewed By: bertmaher, zou3519

Differential Revision: D24264415

Pulled By: Lilyjjo

fbshipit-source-id: 039c95bddd854c414027ac8332832e6bc830b5b9
2020-10-20 16:47:49 -07:00