Commit Graph

8526 Commits

Author SHA1 Message Date
Anjali Chourdia
5687ee1d85 added a serialize function in SGD class to utilize the existing macro for serialization/deserialization calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30739

Differential Revision: D18842908

Pulled By: anjali411

fbshipit-source-id: 7dc13ff9c4fc126790b88b1b4b5d03425c349d38
2019-12-06 08:38:07 -08:00
Seiya Tokui
1d7b40f1c4 Fix reading __cuda_array_interface__ without strides (#24947)
Summary:
When converting a contiguous CuPy ndarray to a Tensor via `__cuda_array_interface__`, an error occurs due to incorrect handling of default strides. This PR fixes the problem so that `torch.tensor(cupy_ndarray)` works for contiguous inputs.
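A minimal usage sketch of the fixed path (assumes CuPy and a CUDA device are available; not taken from the PR itself):
```python
import cupy
import torch

# Contiguous CuPy array exposing __cuda_array_interface__ with default
# (implicit) strides; torch.tensor copies it into a CUDA tensor.
x = cupy.arange(6, dtype=cupy.float32).reshape(2, 3)
t = torch.tensor(x)
print(t.device, t.shape)
```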
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24947

Differential Revision: D18838986

Pulled By: ezyang

fbshipit-source-id: 2d827578f54ea22836037fe9ea8735b99f2efb42
2019-12-06 07:36:27 -08:00
Xintao Chen
9a858aba5f Moving checks related to options.aliasAnalysis and schema.hasAliasInfo to read callsite (#30671)
Summary:
**Context:**
In D18530964, we allow aliasAnalysis to be left unset in an earlier registration call and then updated to the correct value in a following registration call.

But it does not work end-to-end due to those existing checks.

So we want to remove or delay those TORCH_CHECKs.

Here are the three existing callsites for operator.aliasAnalysisKind():
https://our.intern.facebook.com/intern/diffusion/FBS/browse/master/fbcode/caffe2/torch/csrc/jit/ir.cpp?lines=994%2C995%2C996%2C1001%2C1004

https://our.intern.facebook.com/intern/diffusion/FBS/browse/master/fbcode/caffe2/torch/csrc/jit/operator.cpp?lines=147%2C155

https://our.intern.facebook.com/intern/diffusion/FBS/browse/master/fbcode/caffe2/torch/csrc/jit/passes/alias_analysis.cpp?lines=260%2C277%2C380

**Things to check**
1. Those two checks are different. But since the original op_registration code converts the schema to a FunctionSchema type when options.schemaOrName_->is_right() is FALSE, the read callsites only need to check the following: options.aliasAnalysisKind_ == AliasAnalysisKind::FROM_SCHEMA || !schema.hasAnyAliasInfo()

2. Whether the three callsites above are indeed needed for those checks.

3. Here we assume that reads from the JIT or other places always happen after all registration calls are done. We are trying to make sure this is a valid assumption.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30671

Test Plan: Will update and refactor the tests soon.

Differential Revision: D18784623

Pulled By: charliechen0401

fbshipit-source-id: 75edea140d0ae3e54820e1aeef010c81fe26416a
2019-12-06 01:36:22 -08:00
Shen Li
619e2ffe23 Replace deprecated AT_* with TORCH_* to reduce warnings in c10d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30795

Test Plan: Imported from OSS

Differential Revision: D18826310

Pulled By: mrshenli

fbshipit-source-id: 0041ac2e5788e874e0a566abd57a8a90e658da9b
2019-12-06 01:28:30 -08:00
Shen Li
b0cba8ceae Replace deprecated AT_ERROR with TORCH_CHECK to reduce warnings in rpc
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30794

Test Plan: Imported from OSS

Differential Revision: D18826311

Pulled By: mrshenli

fbshipit-source-id: bfd58d30f386bbe9535264b2afce4acbe7ac5b0e
2019-12-06 01:28:26 -08:00
Satendra Gera
d32aec5ad6 Add get_metrics and get_debug_info to rpc agent (#30833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30833

[rpc] Add get_metrics and get_debug_info to rpc agent

Test Plan: UT and builds

Reviewed By: mrshenli

Differential Revision: D18835068

fbshipit-source-id: f552cf196bb6d54ccd38a44ba981e7d5b15513f0
2019-12-05 23:52:42 -08:00
Jerry Zhang
f1755d9aea Insert GetAttr for quantization parameters instead of Constant (#30551)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30551

To enable quantizing with shared types, we need to insert GetAttr nodes for
quantization parameters, since the code might be shared by multiple module instances
and we'd like quantized module instances to also share the same code but with
different attribute values.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D18818652

fbshipit-source-id: fc95623cac59dcedd9e3f95397524eae515e7a11
2019-12-05 22:52:45 -08:00
Jerry Zhang
a7406516d1 Refactor bias and weight check and add aten::linear pattern (#30474)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30474

There are some common parts in `isBiasOfConvOrLinear` and `isWeightOfConvOrLinear` that we can factor
out; the refactor will allow for easier extension to new patterns.

Test Plan:
python test/test_jit.py
python test/test_quantization.py

Imported from OSS

Differential Revision: D18795725

fbshipit-source-id: 446463da5e3fa8464db441ed0d9651930487b3b7
2019-12-05 21:00:39 -08:00
Supriya Rao
a51c5f5cbf Add JIT pass to insert permutes for conv ops (#30679)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30679

Caffe2 expects quantized ops to be in NHWC format, while PyTorch inputs are in NCHW.
Add a JIT pass that inserts an nchw2nhwc permute before each conv op and an nhwc2nchw permute after it.
A graph rewriter is then used to find consecutive redundant permutes and remove them from the graph.
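For reference, the two permute orders the pass inserts (a hand-written sketch, not the pass itself):
```python
import torch

x = torch.randn(1, 3, 8, 8)        # NCHW input
nhwc = x.permute(0, 2, 3, 1)       # nchw2nhwc, inserted before the conv
nchw = nhwc.permute(0, 3, 1, 2)    # nhwc2nchw, inserted after the conv
assert nchw.shape == x.shape
```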

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps

Imported from OSS

Differential Revision: D18790518

fbshipit-source-id: 4dd39cf0b31b21f5586c0edfdce2260d4e245112
2019-12-05 18:51:16 -08:00
peterjc123
6486bdfb90 Fix os.register_at_fork not defined on Windows (#30809)
Summary:
According to https://docs.python.org/3.8/library/os.html#os.register_at_fork, this function is only available on Unix platforms.
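A small sketch of the kind of guard this fix implies (my illustration, not the patch itself):
```python
import os

# os.register_at_fork exists only on Unix (Python 3.7+), so guard its use.
if hasattr(os, "register_at_fork"):
    os.register_at_fork(after_in_child=lambda: print("running in forked child"))
```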
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30809

Differential Revision: D18828777

Pulled By: bddppq

fbshipit-source-id: 3325a984da488bb0a80a5c27131553fbcf78921f
2019-12-05 13:36:53 -08:00
Will Feng
244b0bd1a5 Add docs for how we expose declarations in at:: to torch:: (#30760)
Summary:
This PR adds docs for how we expose declarations in `at::` to `torch::`, to make the semantics more clear.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30760

Differential Revision: D18833081

Pulled By: yf225

fbshipit-source-id: eff4d8815c67f681ce3a930ce99771cf2e55dbd9
2019-12-05 13:05:28 -08:00
Nathan Goldbaum
f531815526 Deprecate tensor.type() (#30281)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/29161.

I looked a bit at the code changes related to this and think I have all of the use cases of `DeprecatedTypeProperties` covered in the message, but suggestions from someone with more context on this would be very much appreciated :)
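For illustration, the kind of replacements the deprecation points users toward (a sketch based on the public Tensor API; the exact deprecation message may differ):
```python
import torch

x = torch.randn(3)

# Instead of the deprecated x.type() (which returns e.g. 'torch.FloatTensor'),
# query dtype/device directly, or convert with .to():
print(x.dtype, x.device)        # torch.float32 cpu
y = x.to(torch.float64)         # replaces x.type(torch.DoubleTensor)
```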
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30281

Differential Revision: D18830818

Pulled By: ezyang

fbshipit-source-id: 1a7fcee15354ae09e6644577e7fa33bd26acfe20
2019-12-05 10:55:34 -08:00
Heungsub Hans Lee
fa251cfd97 Fully deprecate variadic inputs of checkpoint_sequential (#25985)
Summary:
Support for variadic inputs of `checkpoint_sequential` was deprecated in https://github.com/pytorch/pytorch/issues/21006. This case warned with a `DeprecationWarning` in PyTorch 1.2, but since PyTorch 1.3 it should simply fail with a `TypeError`. This patch removes the PyTorch 1.2 `DeprecationWarning`.
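A sketch of the supported calling convention after this change (illustrative only):
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 10))
x = torch.randn(4, 10, requires_grad=True)

out = checkpoint_sequential(model, 2, x)   # single input tensor: supported
# checkpoint_sequential(model, 2, x, x)    # variadic inputs: now a TypeError
```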
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25985

Differential Revision: D18809875

Pulled By: albanD

fbshipit-source-id: e84dd8629c04979c4b2dc63e8ada94292e8cedd0
2019-12-05 09:23:28 -08:00
Jerry Zhang
1d20c32bf1 Make InsertQuantDeQuantHelper global (#30550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30550

Right now we have an `InsertQuantDeQuantHelper` for each module, but we need
it to be global because we need to know which graphs have been quantized before;
based on this information we can decide how to handle the module instance.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D18818651

fbshipit-source-id: bfcaf37094ce20a257171a0c99b05b9348ebc13d
2019-12-04 20:03:00 -08:00
Jerry Zhang
c4c2e23385 Supporting making submodules unique (#30037)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30037

Support quantization for modules with reused submodules, e.g. relu (automatically make them unique).
We first do a pass on the graph to find all duplicate uses of the same module and record the `Value`s of the
module instance; for each of these values we create a new module and change the access to point to that module.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18821483

fbshipit-source-id: 1698b981e9e9f0c728d9f03fcbcfbd260151f679
2019-12-04 19:26:56 -08:00
Zachary DeVito
7a2889b014 Stop producing op_version_set version numbers.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28122

Test Plan: Imported from OSS

Differential Revision: D17959565

Pulled By: zdevito

fbshipit-source-id: 701101bd870700eb0c9882c69e2cfdd2524b555e
2019-12-04 19:14:43 -08:00
Jerry Zhang
3c1bb21cf5 Invoke more passes in insertObservers (#30473)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30473

Invoke the `ConstantPooling` and `FuseLinear` passes before
`insertObservers`.
`ConstantPooling` cleans up the traced graph, e.g. when we
have two constant nodes with the same value, this pass merges them,
which allows us to match fewer quantization patterns.
`FuseLinear` merges the exploded linear function into `aten::linear` so
that we can quantize this function properly. We need to fuse it because right now
the way we recognize weight and bias is by matching the argument position in certain function
calls, e.g. the 1st argument of aten::conv2d is the weight. Therefore we have to preserve
the boundary of the linear function to recognize the weight of linear, since in the exploded
linear code the input of addmm is the transposed weight rather than the original weight of the linear.
ghstack-source-id: 94887831

Test Plan:
This is needed for quantizing traced model tests to pass

Imported from OSS

Differential Revision: D18795722

fbshipit-source-id: 192d9d1e56307e2e1d90e30dce0502e31cb4f829
2019-12-04 18:45:04 -08:00
Wanchao Liang
569ea63f3b fix anynonzero op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29423

Test Plan: Imported from OSS

Differential Revision: D18820523

fbshipit-source-id: 55c7a1911121f0aed008bd684b448151bbbf0a8a
2019-12-04 16:40:43 -08:00
Jerry Zhang
1707774417 AddConstant and findConstant for ClassType (#29217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29217

We want to preserve constant information in ClassType so that
users can access the constants in the module by name.
This is also used later for freezing some attributes (converting
attributes to constants).

Test Plan:
tbd

Imported from OSS

Differential Revision: D18799955

fbshipit-source-id: fbfbcd5d3f7f560368b96e2a87e270c822a3d03a
2019-12-04 14:17:13 -08:00
davidriazati
2308a0ec1b Improve documentation around builtin functions (#30347)
Summary:
This breaks the builtins page into some more sections and adds details about Python built-in functions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30347

Pulled By: driazati

Reviewed By: wanchaol

Differential Revision: D18718166

fbshipit-source-id: bf43260ab7bcf92cccef684a5ce68cb16020771d
2019-12-04 13:50:40 -08:00
Nathan Goldbaum
9d3402e4cb Add the __torch_function__ API override mechanism (#30730)
Summary:
This is a re-do of https://github.com/pytorch/pytorch/issues/27064, which was reverted (b8792c0438). It originally landed at the same time as other work that added new operators to the `torch` namespace, so the check that the `torch` namespace is exhaustively covered for overridability was triggering test failures.

I've temporarily disabled that check and added an explanatory comment that the check will be re-enabled in a future PR that will be merged during a time when the commit velocity on PyTorch is lower.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30730

Differential Revision: D18813270

Pulled By: ezyang

fbshipit-source-id: 70477c4656dca8fea6e7bc59259555041fcfbf68
2019-12-04 13:19:07 -08:00
Elias Ellison
d38f9117fd Cache compilation of free functions (#30503)
Summary:
We don't have to recompile free functions if we've already compiled them.

Improved compilation of resnet18 by 27%.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30503

Differential Revision: D18796501

Pulled By: eellison

fbshipit-source-id: 2dee0fc5fcf9adc5b92213f8cb813730d71b376f
2019-12-04 12:45:35 -08:00
Jerry Zhang
756f279d95 Rename QuantizeHelper to InsertQuantDeQuantHelper (#30549)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30549

Preparing for later refactoring

Test Plan:
.

Imported from OSS

Differential Revision: D18802464

fbshipit-source-id: 0b5afb143549d93eed4c429125d3d5fd253093a9
2019-12-04 10:40:22 -08:00
Jerry Zhang
f73cd28082 InsertObservers for shared class types (#30548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30548

ClassTypes can be shared among different module instances, but previously we assumed
they would be unique. This PR enables the insert_observers pass to work with shared class types.

Test Plan:
python test/test_jit.py
python test/test_quantization.py

Imported from OSS

Differential Revision: D18802465

fbshipit-source-id: b782e71e44a043af45577ac2b5c83e695155bb8b
2019-12-04 09:34:47 -08:00
Edward Yang
a55f125e3b Check the error return of nvrtcGetProgramLogSize and nvrtcGetProgramLog (#30663)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30663

Yes they can fail.  See https://github.com/ROCm-Developer-Tools/HIP/issues/1706

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18810088

Pulled By: ezyang

fbshipit-source-id: 96186e71c9a195bdbbed811e7ba8dc40bec09eae
2019-12-04 08:37:43 -08:00
Edward Yang
38986e1dea Split libtorch.so back into libtorch_{cpu,cuda,hip} (#30315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30315

The new structure is that libtorch_cpu contains the bulk of our
code, and libtorch depends on libtorch_cpu and libtorch_cuda.
This is a reland of https://github.com/pytorch/pytorch/pull/29731 but
I've extracted all of the prep work into separate PRs which can be
landed before this one.

Some things of note:

* torch/csrc/cuda/nccl.cpp was added to the wrong list of SRCS, now fixed (this didn't matter before because previously they were all in the same library)
* The dummy file for libtorch was brought back from the dead; it was previously deleted in #20774
* In an initial version of the patch, I forgot to make torch_cuda explicitly depend on torch_cpu. This led to some very odd errors, most notably "bin/blob_test: hidden symbol `_ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infom' in lib/libprotobuf.a(arena.cc.o) is referenced by DSO"
* A number of places in Android/iOS builds have to add torch_cuda explicitly as a library, as they do not have transitive dependency calculation working correctly
* I had to make torch_cpu/torch_cuda caffe2_interface_library so that they get whole-archive linked into torch when you statically link. And I had to do this in an *exported* fashion because torch needs to depend on torch_cpu_library. In the end I exported everything and removed the redefinition in Caffe2Config.cmake. I am not too sure why the old code did it this way in the first place, but it doesn't seem to have broken anything to switch it.
* There's some use of `__HIP_PLATFORM_HCC__` still in `torch_cpu` code, so I had to apply it to that library too (UGH). This manifests as a failure when trying to run the CUDA fuser. It doesn't really matter substantively right now because we still in-place HIPify, but it would be good to fix eventually. This was a bit difficult to debug because of an unrelated HIP bug, see https://github.com/ROCm-Developer-Tools/HIP/issues/1706

Fixes #27215 (as our libraries are smaller), and executes on
part of the plan in #29235.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18790941

Pulled By: ezyang

fbshipit-source-id: 01296f6089d3de5e8365251b490c51e694f2d6c7
2019-12-04 08:04:57 -08:00
Will Price
1189595875 Fix Tensor.argsort -> torch.argsort documentation link
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30464

Differential Revision: D18717657

Pulled By: zou3519

fbshipit-source-id: 9894f63c6cb1b5311117441e78805230d1bc09f3
2019-12-04 07:49:38 -08:00
Edward Yang
b8792c0438 Revert D18645954: add __torch_function__ API override mechanism
Test Plan: revert-hammer

Differential Revision:
D18645954

Original commit changeset: 54b5e4344d7a

fbshipit-source-id: 4a7aebb483e6b001130d6f384ccc53c5a808ab13
2019-12-04 07:41:47 -08:00
Tongzhou Wang
a68b790293 fix ref to nonexistent torch.repeat
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30614

Differential Revision: D18808517

Pulled By: ezyang

fbshipit-source-id: 27f9bda6fbbd1c3c751a0e96fdc336bf724c0b31
2019-12-04 07:27:01 -08:00
Tongzhou Wang
ec7bb9de1c format tri[lu]_indices doc better
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30377

Differential Revision: D18689152

Pulled By: zou3519

fbshipit-source-id: 7fab1e39ecd39ef6a3869befcbe217f8d3b6a87e
2019-12-04 07:16:34 -08:00
Tongzhou Wang
d6ca93b353 add doc for F.softplus
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30055

Differential Revision: D18762624

Pulled By: zou3519

fbshipit-source-id: 61da88cbb8cd0f37ac26b0fb8aaacdbe85c724ba
2019-12-04 07:16:30 -08:00
Prasun Anand
d12786b24f add __torch_function__ API override mechanism (#27064)
Summary:
Closes https://github.com/pytorch/pytorch/issues/24015 (see description of that issue for more details).

For a toy example, see the `DiagonalTensor` and `SubDiagonalTensor` classes in test/test_overrides.py.

This PR currently contains:

* tests for `__torch_function__` behavior
* modifications to `gen_python_functions` and `parse` so that function signatures are parsed and calls are dispatched to the correct overloaded argument.

This feature is inspired by and analogous to NumPy's `__array_function__` protocol ([see NumPy Enhancement Proposal 18](https://numpy.org/neps/nep-0018-array-function-protocol.html#trying-array-function-methods-until-the-right-one-works)).
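A minimal sketch of how the protocol is used (written against today's public API; details such as the classmethod form may differ from this revision):
```python
import torch

class LoggingTensorLike:
    """Toy wrapper that intercepts torch.* calls via __torch_function__."""

    def __init__(self, data):
        self.data = data

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Unwrap our wrapper objects, log the call, then defer to torch.
        unwrapped = [a.data if isinstance(a, cls) else a for a in args]
        print(f"intercepted torch.{func.__name__}")
        return func(*unwrapped, **kwargs)

t = LoggingTensorLike(torch.ones(3))
print(torch.sum(t))   # prints the interception message, then tensor(3.)
```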

### Benchmarks:
See Nathan's comment below: https://github.com/pytorch/pytorch/pull/27064#issuecomment-554601189
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27064

Differential Revision: D18645954

Pulled By: ezyang

fbshipit-source-id: 54b5e4344d7afdbcf996bb57191b0bdadc7b1767
2019-12-04 05:56:46 -08:00
Martin Yuan
b26401f965 Dump operator names of a script module (#30467)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30467

Introduce the function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use is for a mobile custom build to link only the operators in the returned list, to save mobile binary size.

Example:
```python
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))
```

The outputs are in alphabetical order:
```
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']
```

Test Plan: Imported from OSS

Differential Revision: D18801619

Pulled By: iseeyuan

fbshipit-source-id: f9b198d3e82b095daf704ee595d8026ad889bb13
2019-12-03 20:20:33 -08:00
Shen Li
63a1542ed2 Adding Debug Info for RRef Context
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30610

Test Plan: Imported from OSS

Differential Revision: D18763592

Pulled By: mrshenli

fbshipit-source-id: ad8854bdb6250c29eaa0f582d66cfd31394312e5
2019-12-03 19:16:31 -08:00
Shen Li
6dda241ab8 Add RRef.__str__() API
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30609

Test Plan: Imported from OSS

Differential Revision: D18763593

Pulled By: mrshenli

fbshipit-source-id: 20f1eea2d6cfe9ab2a27a9677d97dde07c1dca9b
2019-12-03 19:16:26 -08:00
Hong Xu
bb5dcaf24f Add logical_and and logical_or (#30521)
Summary:
This relands the change with the CI failure caused in 8bbafa0b32 fixed (incorrect return type of the lambdas in the CUDA kernels).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30521

Differential Revision: D18770151

Pulled By: ailzhang

fbshipit-source-id: 02f0fe1d5718c34d24da6dbb5884ee8b247ce39a
2019-12-03 18:24:54 -08:00
Prasun Anand
3cf8382984 detect_anomaly() for SparseTensors (#29803)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/28649

1. Modified detect_anomaly() to use isnan()
2. isnan() for SparseTensors returns a bool Tensor of _values.
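A usage sketch of anomaly detection with a sparse gradient path (my example, relying on this fix):
```python
import torch

# detect_anomaly now also works when isnan() is applied to sparse gradients,
# e.g. those produced by nn.Embedding(..., sparse=True).
with torch.autograd.detect_anomaly():
    emb = torch.nn.Embedding(10, 3, sparse=True)
    loss = emb(torch.tensor([1, 2, 3])).sum()
    loss.backward()
```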
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29803

Differential Revision: D18594299

Pulled By: ezyang

fbshipit-source-id: 3f4190c569f53219be330584fc604ca43c4a6c7a
2019-12-03 15:42:51 -08:00
Rohan Varma
fef4360536 remove default constructor in futureInfo (#30197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30197

This default constructor was added because std::map's operator[]
requires a default constructor. However, instead of using operator[], we can
use emplace and remove the constructor, to ensure that the FutureInfo struct
doesn't get constructed with garbage values.
ghstack-source-id: 94802453

Test Plan: Unit tests pass.

Differential Revision: D18627675

fbshipit-source-id: c4cb000e60081478c0fd7308e17103ebbc4dc554
2019-12-03 15:36:22 -08:00
Tristan Rice
59151d3e43 autograd/profiler: support merging FunctionEventAvg (#30677)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30677

Currently you can only add FunctionEvents to FunctionEventAvg. This makes it so you can add multiple FunctionEventAvg objects together. This is useful for merging multiple profiles together such as when dealing with distributed training.

Test Plan:
added unit test

  buck test //caffe2/test:autograd -- test_profiler

Reviewed By: bddppq

Differential Revision: D18785578

fbshipit-source-id: 567a441dec885db7b0bd8f6e0ac9a60b18092278
2019-12-03 15:28:58 -08:00
Peter Bell
dcd1216efe Force early initialization of OpenMP in forked children (#29006)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/28389

Intel's OpenMP implementation sets the thread affinity on the first call to an OpenMP function after a fork. By adding an atfork handler we can force this to happen before a user tries to set the affinity in their own DataLoader `worker_init_fn`.
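A sketch of the scenario this protects (illustrative; `os.sched_setaffinity` is Linux-only):
```python
import os
import torch
from torch.utils.data import DataLoader, TensorDataset

def worker_init_fn(worker_id):
    # Pin each DataLoader worker to one core. Before this fix, Intel OpenMP
    # could override this affinity on its first OpenMP call after fork.
    os.sched_setaffinity(0, {worker_id % os.cpu_count()})

loader = DataLoader(TensorDataset(torch.randn(64, 4)),
                    num_workers=2, worker_init_fn=worker_init_fn)
batch = next(iter(loader))   # spawning the workers triggers worker_init_fn
```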
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29006

Differential Revision: D18782456

Pulled By: ezyang

fbshipit-source-id: ce0b515256da0cf18ceb125e0cdec99a3311bbd3
2019-12-03 15:23:31 -08:00
Nikolay Korovaiko
d4c25add45 make sure the counter stays correct in between bailout transitions (#30186)
Summary:
This fixes the second issue reported in https://github.com/pytorch/pytorch/issues/29909, namely that a loop counter is assigned the wrong values after transitioning to a bailout graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30186

Differential Revision: D18646845

Pulled By: Krovatkin

fbshipit-source-id: 1f7c601dd9f35892979385ffa132fb0886a4f203
2019-12-03 14:59:08 -08:00
Will Feng
03a73cb9ac Remove namespace F = torch::nn::functional from torch/nn/modules/batchhnorm.h (#30684)
Summary:
This PR removes `namespace F = torch::nn::functional` from `torch/nn/modules/batchnorm.h`, so that people don't have to define `torch::nn::functional` as `F` if they don't want to.

Fixes https://github.com/pytorch/pytorch/issues/30682.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30684

Differential Revision: D18795717

Pulled By: yf225

fbshipit-source-id: c9feffbeb632cc6b4ce3e6c22c0a78533bab69ad
2019-12-03 14:52:23 -08:00
Brian Vaughan
604a27361f remove tuple_parser (#30659)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30659

I could only find one usage of TupleParser and it doesn't seem worth maintaining just for that one usage.

Test Plan: Imported from OSS

Differential Revision: D18795979

Pulled By: nairbv

fbshipit-source-id: 6e50d65fc8fade0944f36ab20d00f1539a3d4cb8
2019-12-03 14:49:59 -08:00
Supriya Rao
980aead1f8 Add support for quantized slice conversion (#30498)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30498

Updated Int8SliceOp to accept dim, start and end index, similar to PyTorch.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_slice

Imported from OSS

Differential Revision: D18740519

fbshipit-source-id: 2313f37a4936edb150ce04911b241e591e191801
2019-12-03 14:37:59 -08:00
Sebastian Messmer
bc2e6d10fa Back out "Revert D17908478: Switch PyTorch/Caffe2 to C++14"
Summary: Original commit changeset: 775d2e29be0b

Test Plan: CI

Reviewed By: mruberry

Differential Revision: D18775520

fbshipit-source-id: a350b3f86b66d97241f208786ee67e9a51172eac
2019-12-03 14:33:43 -08:00
Yanli Zhao
40146eb48e Skip ProcessGroupGlooAyncTest if there is no CUDA available (#30345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30345

Skip ProcessGroupGlooAsyncTest if CUDA is not available; otherwise, on Sandcastle non-GPU hosts the test will abort because it fails to load the CUDA library.
ghstack-source-id: 94771241

Test Plan: test skipped on non GPU host

Differential Revision: D18665322

fbshipit-source-id: 8c7b89aeecc6ec007bee12d864a6058384254e61
2019-12-03 13:27:34 -08:00
Jerry Zhang
19cd90d303 Globally record observer nodes (#30547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30547

att

Test Plan:
test_jit.py test_quantization.py

Imported from OSS

Differential Revision: D18784752

fbshipit-source-id: 000e140aa86ff12a240d98da71871a5a5053401f
2019-12-03 12:16:00 -08:00
Jerry Zhang
7023e13fbb Fix mapping white list (#30636)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30636

Currently DeQuantStub is still in the whitelist because set union has
lower precedence than set difference.
Fixes issue: https://github.com/pytorch/pytorch/issues/29646

Test Plan:
verified locally that we don't attach qconfig for DeQuantStub

Imported from OSS

Differential Revision: D18775275

fbshipit-source-id: 8da07e40963555671b3d4326c9291706103f858e
2019-12-03 11:34:28 -08:00
Ailing Zhang
a997f224ac Add torch.multiprocessing.create_processes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28493

Differential Revision: D18766066

Pulled By: ailzhang

fbshipit-source-id: 7f424c8fae3012be2416cf9bc72ee2dde40c1f89
2019-12-03 10:38:19 -08:00
Lara
4d30415f12 Add ONNX Scripting Conv Support (#30618)
Summary:
Convolution nodes are traced as aten::_convolution, which is currently supported in ONNX export.
Scripted convolution uses aten::conv<1,2,3>d, which is currently not supported in ONNX.
This PR adds the symbolics for aten::conv<1,2,3>d and aten::conv_transpose<1,2,3>d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30618

Reviewed By: hl475

Differential Revision: D18778145

Pulled By: houseroad

fbshipit-source-id: 4af0379f29974a1ce8443024d1d87b3eb8d2dd36
2019-12-03 10:28:38 -08:00
Jerry Zhang
89be1a22d4 split getInvokedMethods (#30546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30546

factor out this function for later support of quantizing shared types

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D18776304

fbshipit-source-id: f5a736b0f69019cefe17ec4517da1ae5462f78e1
2019-12-03 10:11:57 -08:00
Rohan Varma
5a484245d9 Change test_invalid_names test to only test constructor of WorkerInfo (#30620)
Summary:
This test seems to only check that we throw exceptions in the `WorkerInfo` constructor when invalid names are passed in, so I don't think we need to complicate it by initializing RPC and exposing ourselves to potential flakiness.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30620

Differential Revision: D18766955

Pulled By: rohan-varma

fbshipit-source-id: 11643de4d57431e5f46e096c7766de3ab0b9b05a
2019-12-03 09:07:10 -08:00
Shen Li
f9f54201d3 Remove deprecated fromIvalue in RRefForkData
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30646

Test Plan: Imported from OSS

Differential Revision: D18777610

Pulled By: mrshenli

fbshipit-source-id: 7a749c1035e36bbb464332d3829fd53e2c6cf727
2019-12-03 09:01:40 -08:00
Brian Vaughan
e5b947a3a8 Raise an error for is_signed on quantized types (#30527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30527

When we introduced dtype.is_signed we allowed for support of
quantized types, but we're not sure what the correct result should be.

See discussion at https://github.com/pytorch/pytorch/pull/29511

Test Plan: Imported from OSS

Differential Revision: D18765410

Pulled By: nairbv

fbshipit-source-id: c87cfe999b604cfcbbafa561e04d0d5cdbf41e6d
2019-12-03 06:34:53 -08:00
Will Feng
18ec4632b3 Exclude undefined tensors in the result of Module::parameters() / named_paramters() / buffers() / named_buffers() (#30626)
Summary:
PR https://github.com/pytorch/pytorch/pull/30523 attempted to fix https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462, but the fix wasn't complete. This PR makes the following improvements:
1. Fixes https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462 properly by excluding undefined tensors in the result of `Module::parameters()` / `named_parameters()` / `buffers()` / `named_buffers()`, which mirrors the Python API behavior.
2. Audits all use sites of `Module::parameters_` / `buffers_` and change them to `Module::named_parameters(/*recurse=*/false)` / `named_buffers(/*recurse=*/false)` when appropriate, so that use sites of module parameters / buffers never need to worry about undefined tensors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30626

Differential Revision: D18777507

Pulled By: yf225

fbshipit-source-id: 55b64b69779e1186342efd3c44857f416334ed6b
2019-12-02 21:59:58 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Jianyu Huang
0bebfe2143 Add the explicit per-tensor/per-channel quant info when we print the module (#30591)
Summary:
As Title says. We would like to explicitly distinguish per-tensor/per-channel scheme when we print the module.

Here is an example for Lenet after applying the per-channel dynamic quantization:

Before this PR:
```
FloatModel(
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
  (fc1): DynamicQuantizedLinear(
    in_features=800, out_features=500
    (_packed_params): LinearPackedParams()
  )
  (fc2): DynamicQuantizedLinear(
    in_features=500, out_features=10
    (_packed_params): LinearPackedParams()
  )
)
```

After this PR:
```
FloatModel(
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
  (fc1): DynamicQuantizedLinear(
    in_features=800, out_features=500, qscheme=torch.per_channel_affine
    (_packed_params): LinearPackedParams()
  )
  (fc2): DynamicQuantizedLinear(
    in_features=500, out_features=10, qscheme=torch.per_channel_affine
    (_packed_params): LinearPackedParams()
  )
)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30591

Differential Revision: D18764366

Pulled By: jianyuh

fbshipit-source-id: e897ab42ace6b82b2a90729ba788313c7873de1a
2019-12-02 20:14:46 -08:00
Jeremy Lilley
4dab29a2bd Fix serialization memory lifetime issue. (#30603)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30603

The Pickler object needs to be kept in scope until the data is written out to the
final serialized string; tensorData in particular is a reference to memory
owned by the descoped Pickler object.

Noticed this by inspection. In practice, this potential read-after-free here
is limited to non-cpu tensors, and any such use was very soon after free.
ghstack-source-id: 94756036

Test Plan: existing test suite at buck test mode/dev-nosan caffe2/test:rpc_fork

Differential Revision: D18760463

fbshipit-source-id: 9de890d66626aa48f13ca376dd9bd50b92e0cb00
2019-12-02 20:10:28 -08:00
Pritam Damania
db81e13d6b Fix TCPStoreTest and improve tcputils::connect() (#30354)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30354

TCPStoreTest would time out since the TCPStore constructor for the
server would block the main thread waiting for workers. The workers themselves
were spawned later on, once the server store was created. As a result, this test
would always time out.

To fix the test, I moved the server store to a thread so that the workers can
register with the server in parallel.

In addition to this, I made a few improvements to tcputils::connect(). When
tcputils::connect() encountered an exception, it always looked at `errno` for
the error code. In some cases `errno` could be overwritten and the real error
code would be stored in `std::system_error`. As a result, I've modified the
code to look at the error code in `std::system_error` if we catch an exception
of that type.
ghstack-source-id: 94758939

Test Plan: waitforbuildbot

Differential Revision: D18668454

fbshipit-source-id: d5a3c57b066b094bfecda9a79d9d31bfa32e17f0
2019-12-02 19:52:34 -08:00
Supriya Rao
968c0d4a46 Add support for converting quantized AvgPool2d and Reshape operations (#30490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30490

Add symbolic mapping to Int8AvgPool2d and Int8Reshape op in C2

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps

Imported from OSS

Differential Revision: D18740520

fbshipit-source-id: 1606125500c4b549fbc984e7929b7fd5204396a0
2019-12-02 18:15:01 -08:00
davidriazati
9c02b88791 Add pickler support for Device (#30131)
Summary:
This PR adds (un)pickling support for `c10::Device`. It also adds `torch.device` as a type annotation for device attributes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30131

Pulled By: driazati

Differential Revision: D18664421

fbshipit-source-id: 64378fb42b2d1bbe2bd86259e5ed10f24b5d1e49
2019-12-02 17:43:08 -08:00
Mingbo Wan
3636cb0364 windows build (#30556)
Summary:
based on https://github.com/pytorch/pytorch/pull/28677
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30556

Differential Revision: D18764040

Pulled By: mingbowan

fbshipit-source-id: 53104636800f5887b74a82c154bc5e9603de9322
2019-12-02 14:54:22 -08:00
Edward Yang
1111a6b810 Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL (#30274)
Summary:
Reland of https://github.com/pytorch/pytorch/pull/29095
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30274

Differential Revision: D18762293

Pulled By: ezyang

fbshipit-source-id: d3d50c2dd12bcb678ab25fa708eb6587cc4b66f9
2019-12-02 12:19:58 -08:00
Shen Li
dd52f50fc8 Add examples to RRef doc
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30516

Test Plan: Imported from OSS

Differential Revision: D18728183

Pulled By: mrshenli

fbshipit-source-id: af472ebed0e6dd0a85653b080abd3ac4d482bd26
2019-11-28 15:34:26 -08:00
Shen Li
30d70d5378 Make doc source format consistent in rpc/init.cpp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30515

Test Plan: Imported from OSS

Differential Revision: D18728184

Pulled By: mrshenli

fbshipit-source-id: 7b643c7f8225943113fbd7130ff6aadb30c1d4e9
2019-11-28 15:34:22 -08:00
Jeremy Lilley
f4e7e9039d Improve process_group_agent() serialization speed (#29785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29785

TLDR: This change improves process_group's serialization speed:
  Serialize_Tensor64:     12.38us ->   1.99us  (~-84%)
  Deserialize_Tensor64:   33.89us ->   5.62us  (~-84%)
  Serialize_Tensor1M:    525.74us -> 285.43us  (~-45%)
  Deserialize_Tensor1M:  892.61us -> 273.68us  (~-70%)

After speaking with the jit team, we had consensus that torch::save()/load()
are somewhat high-overhead for RPC serialization, mostly intended for
persistent disk data.

(Particularly, for large tensors, 35% of the time is spent in CRC checking, even
with the fb-side changes to substitute 40x faster SSE-accelerated CRC checking;
also, for small tensors, the zip container overhead is considerable, as is the
overhead of lexing/parsing an embedded text Python program for each RPC.)

The jit team encouraged us to use jit::pickler, with the WriteableTensorData
way of outputting result tensors (not the default side-tensor table, or
with pickling the actual tensors). This ends up just pickling some tensor
metadata, and giving us some tensor blobs that we can mindlessly
blit over the wire (they copy to cpu memory if needed).

There is as yet no standardized container format for the pickled data
(there is jit::pickle_save() checked in, but it's experimental and
no load function is yet provided), but they encouraged us to just use
something sensible for this, and possibly revisit later. For now, I made
the directory headers slightly HTTP-inspired.

Note that serialization is just one component of the pipeline, but that
said, we also see reasonable reductions in end-to-end echo times (noisier):
   ProcessGroupAgent_Echo(Tensor_Small)   855.25us -> 492.65us  (~-42%)
   ProcessGroupAgent_Echo(Tensor_1M)       10.82ms -> 6.94ms    (~-35%)
   ProcessGroupAgent_Echo(Small_NoTensor) 688.82us -> 301.72us  (~-56%)
   ProcessGroupAgent_Echo(1MB_NoTensor)     4.65ms -> 3.71ms    (~-20%)

I moved the "wire serialization" logic to a separate file to assist with
unittesting.
ghstack-source-id: 94694682

Test Plan:
buck test mode/dev-nosan caffe2/test/cpp/api:serialize
  buck test mode/dev-nosan caffe2/test/...

Differential Revision: D18493938

fbshipit-source-id: 07ddfe87dbe56472bc944f7d070627052c94a8f4
2019-11-28 09:57:52 -08:00
Rohan Varma
1350b99de4 Add local shutdown to process group agent (#30330)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30330

This is now possible due to previous changes made in `gloo` and `ProcessGroupGloo`. We `abort` the listener thread that is waiting for a message, and join all other threads. The API is changed so that the previous `wait_all_workers` does not destroy the agent, and this is now done in a new `shutdown` method. All callsites are updated appropriately.

ghstack-source-id: 94673884
ghstack-source-id: 94673884

Test Plan: Unit tests pass.

Reviewed By: mrshenli

Differential Revision: D18661775

fbshipit-source-id: 5aaa7c14603e18253394224994f6cd43234301c2
2019-11-27 22:34:08 -08:00
Will Feng
7ac8efa689 Skip undefined tensors when moving torch::nn module to a different device (#30523)
Summary:
This fixes high-pri issues such as https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30523

Differential Revision: D18732904

Pulled By: yf225

fbshipit-source-id: fe5a7a43838000f5803bd9c01ecfba0c3f02df5d
2019-11-27 21:21:02 -08:00
Sebastian Messmer
a2ed50c920 Revert D17908478: Switch PyTorch/Caffe2 to C++14
Test Plan: revert-hammer

Differential Revision:
D17908478

Original commit changeset: 6e340024591e

fbshipit-source-id: 775d2e29be0bc3a0db64f164c8960c44d4877d5d
2019-11-27 14:57:05 -08:00
Tao Xu
a69be8123a Use gettimeofday on iOS (#30361)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30361

### Summary

By default, the compiler will choose `clock_gettime` for the iOS build. However, that API is not available until iOS 10. Since the Facebook app still supports iOS 9.0,  we have to use `gettimeofday` instead.

```shell
xplat/caffe2/torch/csrc/autograd/profiler.h:86:3: error: 'clock_gettime' is only available on iOS 10.0 or newer [-Werror,-Wunguarded-availability]

xplat/caffe2/torch/csrc/autograd/profiler.h:86:17: error: '_CLOCK_MONOTONIC' is only available on iOS 10.0 or newer [-Werror,-Wunguarded-availability]
```

P.S. the open-sourced version is iOS 12.0 and above, so we don't have this problem.

### Test Plan

- buck build works
- Don't break CIs

Test Plan: Imported from OSS

Differential Revision: D18730262

Pulled By: xta0

fbshipit-source-id: fe6d954b8d3c23cbc9d1e25a2e72e0b0c1d4eaa9
2019-11-27 11:48:41 -08:00
Sebastian Messmer
d0acc9c085 Switch PyTorch/Caffe2 to C++14 (#30406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30406

ghstack-source-id: 94642238

Test Plan: waitforsandcastle

Differential Revision: D17908478

fbshipit-source-id: 6e340024591ec2c69521668022999df4a33b4ddb
2019-11-27 10:47:31 -08:00
Richard Zou
ec5c08de74 Revert D18580867: Add logical_and and logical_or
Test Plan: revert-hammer

Differential Revision:
D18580867

Original commit changeset: 7e4d7c37da4d

fbshipit-source-id: 81fb604c7aef8d847f518f5faa016e7bd0423016
2019-11-27 09:27:00 -08:00
Bowen Bao
1e8ed021c6 Support logsoftmax with dim != -1 (#30433)
Summary:
PyTorch dim and ONNX axis have different meanings.
ONNX only supports log_softmax with dim = -1. Transpose must be added before and after log_softmax to support other cases.
This requires input rank to be known at export time.
Fixes https://github.com/pytorch/pytorch/issues/17918
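A sketch of the pattern this enables (illustrative module and file name):
```python
import torch
import torch.nn.functional as F

class LogSoftmaxDim1(torch.nn.Module):
    def forward(self, x):
        return F.log_softmax(x, dim=1)   # dim != -1

# The exporter now wraps ONNX LogSoftmax with Transpose ops for non-last dims;
# the input rank must be known at export time.
torch.onnx.export(LogSoftmaxDim1(), torch.randn(2, 3, 4), "logsoftmax.onnx")
```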
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30433

Reviewed By: hl475

Differential Revision: D18723520

Pulled By: houseroad

fbshipit-source-id: d0ed3b3f051d08d46495a7abfa854edd120dca3a
2019-11-27 08:34:38 -08:00
Pieter Noordhuis
0282c5ae69 Add helper to aggregate multiple process groups (#25768)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25768

The round robin process group can be constructed from multiple other
process groups. Every collective call against this new process group
is delegated to the specified process groups in a round robin fashion.

Doing so may benefit performance when calling into multiple NCCL
process groups. Instead of adding support for round-robin usage of
NCCL communicators, we achieve the same without changing the NCCL
process group and adding this wrapper class.

The API to create this round robin process group is a bit harsh. If we
find it adds significant benefit we can revisit and make this a first
class citizen in the torch.distributed module.
ghstack-source-id: 94578376

Test Plan: The newly added test passes.

Reviewed By: chenyangyu1988

Differential Revision: D17226323

fbshipit-source-id: ec9f754b66f33b983fee30bfb86a1c4c5d74767d
2019-11-27 08:34:34 -08:00
Pieter Noordhuis
1d3f3a1a0c Add pybind11 trampoline class for c10d.Store (#30415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30415

This enables subclassing of c10d.Store and implementing its interface in Python.
ghstack-source-id: 94586627

Test Plan: New tests passes.

Reviewed By: vladbelous

Differential Revision: D18693018

fbshipit-source-id: fa1eba4bd11cc09a3d6bf3f35369c885033c63c0
2019-11-27 08:34:29 -08:00
neginraoof
512c2a2df5 Enable constant folding (#29834)
Summary:
Set default do_constant_folding = True
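A sketch of what the new default means for torch.onnx.export (illustrative model and file name):
```python
import torch

model = torch.nn.Linear(4, 2)
dummy = torch.randn(1, 4)

# do_constant_folding now defaults to True, so these two calls are equivalent:
torch.onnx.export(model, dummy, "linear.onnx")
torch.onnx.export(model, dummy, "linear.onnx", do_constant_folding=True)
# Pass do_constant_folding=False to keep the previous behavior.
```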
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29834

Reviewed By: hl475

Differential Revision: D18588037

Pulled By: houseroad

fbshipit-source-id: b35c06161321629c886e177ea666eff31cebf06a
2019-11-27 08:34:20 -08:00
Junjie Bai
c1c8105de0 Make the warning of using SparseTensor in JIT less noisy
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30499

Test Plan: waitforsandcastle

Reviewed By: wanchaol

Differential Revision: D18705553

fbshipit-source-id: d6e16e3285a74a1c031a5312f7a690f1baf392f8
2019-11-27 08:34:16 -08:00
Daya Khudia
2d6b2f39e9 Fix docs so that the example works (#30120)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30120

The example given for functional conv2d didn't work. This diff fixes the example in docs so that it works.

Fixes https://github.com/pytorch/pytorch/issues/29649
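For reference, a working shape combination for the functional conv2d example (a sketch along the lines of the fixed docs):
```python
import torch
import torch.nn.functional as F

filters = torch.randn(8, 4, 3, 3)   # (out_channels, in_channels, kH, kW)
inputs = torch.randn(1, 4, 5, 5)    # (batch, in_channels, H, W)
out = F.conv2d(inputs, filters, padding=1)
print(out.shape)                     # torch.Size([1, 8, 5, 5])
```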
ghstack-source-id: 94601559

Test Plan: Tried the example locally

Differential Revision: D18604606

fbshipit-source-id: ff1a4f903e2843efe30d962d4ff00e5065cd1d7e
2019-11-26 17:38:40 -08:00
Pavel Belevich
6bd8937aee FunctionParameter::set_default_str replace || with &&
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30471

Test Plan: Imported from OSS

Differential Revision: D18710958

Pulled By: pbelevich

fbshipit-source-id: 7e5339175c7e16cd975a90bf6b123df728045e4d
2019-11-26 17:38:31 -08:00
Hong Xu
8bbafa0b32 Add logical_and and logical_or (#28162)
Summary:
Superseding https://github.com/pytorch/pytorch/issues/24379 as type promotion has been implemented.

Closes https://github.com/pytorch/pytorch/issues/24379
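A quick usage sketch of the two new ops:
```python
import torch

a = torch.tensor([True, False, True])
b = torch.tensor([True, True, False])
print(torch.logical_and(a, b))   # tensor([ True, False, False])
print(torch.logical_or(a, b))    # tensor([ True,  True,  True])
```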
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28162

Differential Revision: D18580867

Pulled By: ailzhang

fbshipit-source-id: 7e4d7c37da4dc8df87314bd4f1f6a7539e46586a
2019-11-26 17:38:22 -08:00
James Reed
05a1644ce3 Fix BC for quantized linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30481

Test Plan: Imported from OSS

Differential Revision: D18714602

Pulled By: jamesr66a

fbshipit-source-id: d51206c22cf2446e98053446789c6324c0481321
2019-11-26 17:38:09 -08:00
Elias Ellison
634f370c63 Add comment to ops bound at python layer
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30419

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D18714000

Pulled By: eellison

fbshipit-source-id: 22ccb941b2db24031921f378c600e68fe70e1346
2019-11-26 17:37:59 -08:00
albanD
b0871f211b Make all optimizers consistent so that they don't change gradients inplace
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30257

Test Plan: Imported from OSS

Differential Revision: D18665461

Pulled By: albanD

fbshipit-source-id: cfdafef919468a41007881b82fd288b7128baf95
2019-11-26 12:16:25 -08:00
vishwakftw
dcd9f49809 Specify ordering on singular values and eigenvalues output from torch… (#30389)
Summary:
….svd/symeig respectively

Changelog:
- Adds a note to docstrings of the both functions specifying the ordering

Fixes https://github.com/pytorch/pytorch/issues/30301
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30389

Differential Revision: D18707608

Pulled By: zou3519

fbshipit-source-id: b0f73631578f39a24fae9af4997c6491de8be9a8
2019-11-26 10:23:47 -08:00
BowenBao
0febff36ac Export dynamic unbind/split and __getitem__ (#29136)
Summary:
In ONNX opset 11, a series of sequence ops were added. Operators that are related to Tensor[] in PyTorch can be exported using these sequence ops.
In this PR, unbind/split, which produce Tensor[], and __getitem__, which takes Tensor[] as input, are exported correctly to ONNX opset 11.
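A sketch of the pattern this covers (illustrative module and file name; requires opset_version=11):
```python
import torch

class TakeSecondRow(torch.nn.Module):
    def forward(self, x):
        rows = torch.unbind(x, dim=0)   # produces Tensor[]
        return rows[1]                  # __getitem__ on the tensor list

torch.onnx.export(TakeSecondRow(), torch.randn(3, 4), "unbind.onnx",
                  opset_version=11)
```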
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29136

Reviewed By: hl475

Differential Revision: D18309222

Pulled By: houseroad

fbshipit-source-id: be12c96bf8d0a56900683ef579f1c808c0a1af21
2019-11-26 06:54:06 -08:00
Supriya Rao
2599b9b551 Add output_size argument to caffe2 Int8ResizeNearest (#30202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30202

The PyTorch Upsample operator has output_size as an argument.
For quantized tensor inputs we cannot get the input_size to calculate the width and height scale factor.
Instead we pass the output_size directly to caffe2 to calculate the scale factors.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_upsample

Imported from OSS

Differential Revision: D18631478

fbshipit-source-id: 38a39129bc863f4ecf2293acc068e40ab7edc825
2019-11-26 06:54:02 -08:00
Shen Li
efe1859ad9 By default ignore RRef leaks during shutdown (#30217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30217

Before this commit, RRefContext throws an error if it detects any
RRef leak during shutdown. However, this requires applications to
make sure that they have freed all references to RRefs in application
code, which can be a bad debugging experience for large
applications. Besides, this also relies on Python GC to free things
up in time, which might not always be true. After this commit,
RRefContext ignores leaking RRefs during shutdown, as shutdown
is called when the application has finished training and no longer
cares about local state. Hence, it should be OK to just ignore
those leaks and destroy the OwnerRRefs. If an application would like to
enforce no leaks, set torch.distributed.rpc.api._ignore_rref_leak
to False.
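For illustration (the flag is private and named only in this commit; not a stable API):
```python
import torch.distributed.rpc as rpc

# Opt back into strict RRef leak checking during shutdown.
rpc.api._ignore_rref_leak = False
```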

Test Plan: Imported from OSS

Differential Revision: D18632546

Pulled By: mrshenli

fbshipit-source-id: 2744b2401dafdd16de0e0a76cf8e07777bed0f38
2019-11-26 06:53:58 -08:00
Spandan Tiwari
06db5ad707 Provide names for operator nodes in ONNX exported graph. (#27342)
Summary:
The PyTorch exporter does not add any name to the ONNX operators in the exported graph. A common request is to add names to op nodes by default. This helps the readability of the graph in visualization tools such as Netron, or when the ONNX graph is printed as a string. It also helps with the debuggability of the ONNX graph.

Therefore this PR adds names to operators in the exporter. The names follow a simple format, <op_type>_<index>. Expect files for tests in `test/onnx/test_operators.py` have been updated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27342

Reviewed By: hl475

Differential Revision: D17790979

Pulled By: houseroad

fbshipit-source-id: 1eaae88b5f51f152735a2ff96e22827837e34d9d
2019-11-26 06:53:53 -08:00
BowenBao
584be86c3f Try exporting ONNX with force_outplace=False (#29466)
Summary:
This should resolve https://github.com/pytorch/pytorch/issues/29008. This flag has two effects on the tracer.
- Remove the trailing underscore for in-place operators, e.g. index_put_ ==> index_put. This is handled separately in utils.py as well.
- Add out as input for backward computation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29466

Reviewed By: hl475

Differential Revision: D18422815

Pulled By: houseroad

fbshipit-source-id: 317b6a3c8a5751fe6fe49d7543e429d281ed0d6d
2019-11-26 06:53:49 -08:00
Raghuraman Krishnamoorthi
eccf42fd15 Bug fix: Handle missing keys in observer state dict during load (#30357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30357

Fix issue https://github.com/pytorch/pytorch/issues/29032 in loading from state dict for observers and fake quant.
ghstack-source-id: 94468814

Test Plan: Ensures that load/save of fake quant and observers with missing keys works correctly.

Differential Revision: D18668517

fbshipit-source-id: 0eda6f47c39102e55977fc548b9a03664f123ad7
2019-11-26 06:53:45 -08:00
Jonathan Reynolds
085dde5965 Fix for when PyTorch model trace has RecursiveScriptModules (#30430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30430

When a module isn't a TracedModule, attempt to get name information from the `original_name` property on the module, and default to 'Module' when no such property exists.

Test Plan:
### Change child module to scripted module:
```
model = torchvision.models.alexnet()
model.classifier = torch.jit.script(model.classifier)
```
### Add graph
```
w = SummaryWriter()
w.add_graph(model, torch.rand((2, 3, 224, 224)))
w.close()
```
### No errors
However, the graph is disconnected in parts and hard to understand.
{F223327878}

Reviewed By: sanekmelnikov

Differential Revision: D18690836

fbshipit-source-id: 42295d06b7c1d48d5401776dca1e0d12cd64b49d
2019-11-26 06:53:35 -08:00
Jerry Zhang
661a6c8ef2 Add get_qparams and revert the changes to calculate_qparams (#30262)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30262

`get_qparams` returns all parameters that are needed to call the quantize function

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18645047

fbshipit-source-id: e57c11a66dac2d589778d412a996796ad5b6f86a
2019-11-26 06:53:26 -08:00
Zhang Zhi
ab2ec4d835 Fix inexistent parameter in document (#24335)
Summary:
There is no `out` argument to `argsort` according to the source code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24335

Differential Revision: D16829134

Pulled By: vincentqb

fbshipit-source-id: 8f91154984cd4a753ba1d6105fb8a9bfa0da22b3
2019-11-26 06:53:17 -08:00
Jerry Zhang
0b71e7e1fd Refactor QAT Conv module for better extensibility (#30362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30362

Right now the QAT modules (qat.ConvBn2d, qat.ConvBnReLU2d, qat.Conv2d)
are not convenient for supporting other dimensions of Conv; this PR refactors
these modules so that we can support Conv1d/Conv3d better.

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18691152

fbshipit-source-id: 5b561e6b054eadd31b98cabdf1ac67a61ee9b805
2019-11-26 06:53:12 -08:00
Lingyi Liu
b8f50d9cc8 Support to add dequant for each use of Value (#30145)
Summary:
In this PR, we mainly handle the case where there are multiple usages of a Value when inserting the quant-dequant pair. This change adds one dequant for each usage of the Value.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30145

Differential Revision: D18671600

Pulled By: lly-zero-one

fbshipit-source-id: 61324a98861da85b80dcf7e930381311118ae53b
2019-11-25 14:52:58 -08:00
Rohan Varma
5c6705e62c add default arg for init_method (#30208)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30208

Adds default arg for init_method so users don't have to pass this in,
and moves it to `RpcBackendOptions` struct. Removes `init_method` arg from rpc.init_rpc. Also fixes some docs.
ghstack-source-id: 94500475

Test Plan: Unit tests pass.

Reviewed By: mrshenli

Differential Revision: D18630074

fbshipit-source-id: 04b7dd7ec96f4c4da311b71d250233f1f262135a
2019-11-25 14:52:48 -08:00
Xiaomeng Yang
c12f9a12a8 Fix quantized ConvReLU3d test (#30266)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30266

Fix quantized ConvReLU3d test

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: hl475

Differential Revision: D18645717

fbshipit-source-id: bbe93f9daf5046f2aa05363efc7d0e59eaff37bf
2019-11-25 14:52:32 -08:00
Sebastian Messmer
aa2862b843 Hide the OperatorKernel* argument from the stack based kernel API (#29337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29337

This argument is needed by boxing wrappers so they're able to get a pointer to the corresponding unboxed kernel and call into it.
But if a kernel is registered in a boxed way, we don't need it and should hide this from the API.
This is especially needed for the backend fallback API where users would only be left wondering why this argument is there and what it does.
Also, hiding it allows us to potentially totally remove it in a future refactoring if we find some way to do so.
ghstack-source-id: 94481316

Test Plan: unit tests

Differential Revision: D18361991

fbshipit-source-id: 5cef26c896fe3f2a5db730d3bc79dcd62e7ef492
2019-11-23 15:25:01 -08:00
Sebastian Messmer
583c288232 Add a OperatorHandle argument to boxed kernels (#29201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29201

This is required for boxed backend fallback kernels (e.g. lazy, AMP) because they need to know which op was actually called.
ghstack-source-id: 94481313

Test Plan: I will add unit tests in a diff stacked on top

Differential Revision: D18282746

fbshipit-source-id: 339a1bbabd6aff31a587b98f095c75104dfc6f99
2019-11-23 15:24:49 -08:00
Chris Gottbrath
7c4b9042ab Updates to quantization documentation (#30288)
Summary:
This pull request includes fixes for six quantization doc bugs.

https://github.com/pytorch/pytorch/issues/30283 - Rendering issue on QConfig
https://github.com/pytorch/pytorch/issues/26305 - Minor doc issue on fuse_modules()
https://github.com/pytorch/pytorch/issues/27451 - Issues with ConvReLU2d, ConvReLU3d, and LinearReLU doc issues
https://github.com/pytorch/pytorch/issues/26899 - Missing docstrings in torch.nn.intrinsic fused functions
https://github.com/pytorch/pytorch/issues/29735 - add discussion of QNNPack to quantization doc page
https://github.com/pytorch/pytorch/issues/27938 - some of the quantized functions lack documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30288

Differential Revision: D18653368

Pulled By: gottbrath

fbshipit-source-id: 410b3dd81ff10909a7f1a7736ca42d7cabf0beb1
2019-11-23 09:29:30 -08:00
Lingyi Liu
59ca9b7430 Graph-mode quantization for convolution from traced model (#30245)
Summary:
In this PR, we enhance graph-mode quantization for aten::_convolution, which can be generated from the tracing path.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30245

Differential Revision: D18671597

Pulled By: lly-zero-one

fbshipit-source-id: 78a2470fbb0fe0def55d63c6bda7cbb5c89f7848
2019-11-23 01:24:50 -08:00
davidriazati
2a7a39c1af (de)serialization of values between C++ and Python (#30108)
Summary:
This PR updates `torch::pickle_save` to use the new zipfile format introduced in #29232 and adds `torch::pickle_load` which can decode the zipfile format. Now that `torch.save/load` use this format as well (if the `_use_new_zipfile_serialization` flag is `True`), raw values saved in Python can be loaded in C++ and vice versa.

Fixes #20356
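
A minimal sketch of the Python half of this round trip (the file name is hypothetical, and the matching `torch::pickle_load` call on the C++ side is not shown):

```
import torch

# Save plain Python values with the new zipfile container so that the
# C++ side can read the same archive.
values = {"weights": torch.randn(3, 3), "step": 7}
torch.save(values, "values.pt", _use_new_zipfile_serialization=True)

# The archive also loads back in Python.
reloaded = torch.load("values.pt")
print(reloaded["step"])  # 7
```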
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30108

Pulled By: driazati

Differential Revision: D18607087

fbshipit-source-id: 067cdd5b1cf9c30ddc7e2e5021a8cceee62d8a14
2019-11-23 00:06:07 -08:00
Lingyi Liu
328ec5460f refactor the observer removal and quantize tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30360

Differential Revision: D18670373

Pulled By: lly-zero-one

fbshipit-source-id: 1481d6e4d5ce40376577b8deb0a0f74d5559076e
2019-11-22 21:25:23 -08:00
Shihao Xu
6a00191fc2 Add RpcAgent::getWorkerInfos() (#30241)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30241

We need an API to get all worker infos. This will be used by backend-agnostic `rpc.wait_all_workers()` API.
ghstack-source-id: 94454935

Test Plan:
# Unit tests

```
buck test mode/dev-nosan //caffe2/test:rpc_fork -- test_get_worker_infos

buck-out/gen/caffe2/test/rpc_fork\#binary.par -r test_get_worker_infos
```

```
buck test mode/dev-nosan //caffe2/test:rpc_fork_thrift -- test_get_worker_infos

buck-out/gen/caffe2/test/rpc_fork_thrift\#binary.par -r test_get_worker_infos
```

Differential Revision: D5693412

fbshipit-source-id: 5123c8248b6d44fd36b8a5f381dbabb2660e6f0f
2019-11-22 18:26:30 -08:00
Hongyi Jia
c7f988b8c6 transport open registration (#30167)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30167

Pull Request resolved: https://github.com/pytorch/pytorch/pull/29164

- Created GlooDeviceFactory to hide device creation details
- Added a transport option on the Python interface

The reason for making the factory class is to make it easier to extend the gloo transport in the future.

Test Plan: Imported from OSS

Reviewed By: satgera, d4l3k

Differential Revision: D18596527

fbshipit-source-id: e8114162ee8d841c0e0769315b48356b37d6ca0a
2019-11-22 17:41:52 -08:00
Sebastian Messmer
ac103a5d78 Remove variable wrapping from register_c10_ops (#29207)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29207

The logic calling c10 ops from JIT did some variable wrapping to make sure all results are always variables.
Thanks to ezyang, this is not needed anymore because everything is a variable now.
ghstack-source-id: 93345590

Test Plan: waitforsandcastle

Differential Revision: D18327507

fbshipit-source-id: 86512c5e19d6972d70f125feae172461c25e3cb6
2019-11-22 15:32:55 -08:00
David Riazati
8c6f0c0587 Detect TorchScript archives in torch.load (#29339)
Summary:
This PR looks for a `constants.pkl` file at the top level in a zip file
in `torch.load`. If found, it calls `torch.jit.load` instead and issues
a warning to call `torch.jit.load` directly
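
Roughly the behavior this adds, sketched with a hypothetical file name:

```
import torch
import torch.nn as nn

torch.jit.script(nn.Linear(4, 4)).save("model.pt")   # TorchScript zip archive containing constants.pkl

m = torch.load("model.pt")       # now detected as a TorchScript archive: warns and delegates to torch.jit.load
m = torch.jit.load("model.pt")   # the call users are pointed to
```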
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29339

Differential Revision: D18611095

Pulled By: driazati

fbshipit-source-id: f070a02f6b5509054fc3876b3e8356bbbcc183e1
2019-11-22 12:30:30 -08:00
James Reed
97fae401f0 Use LinearPackedParams everywhere
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30198

Test Plan: Imported from OSS

Differential Revision: D18628003

Pulled By: jamesr66a

fbshipit-source-id: 76ff0248fd859e805a15cde555d26dd2138636fa
2019-11-22 11:31:17 -08:00
James Reed
1cc321deed Memoize parseIR calls in graph mode quantization
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30188

Test Plan: Imported from OSS

Differential Revision: D18625743

Pulled By: jamesr66a

fbshipit-source-id: 88f9da8e79324ba91e3550a8fc1a05e85bb83a86
2019-11-22 11:31:13 -08:00
James Reed
65f465050b Dont use SubgraphRewriter in FoldQuantizeCallIntoBuffer
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30264

Test Plan: Imported from OSS

Differential Revision: D18645531

Pulled By: jamesr66a

fbshipit-source-id: 44fc0f0a3c8cabe62924baae0d556e43bbf637ec
2019-11-22 11:31:08 -08:00
Shen Li
a9f3f48f88 Revert D5578006: Add local shutdown to process group agent
Test Plan: revert-hammer

Differential Revision:
D5578006

Original commit changeset: 6258879fb44c

fbshipit-source-id: 11b893b3a280a8383eeb20a0548626811616dca1
2019-11-22 11:31:04 -08:00
Christian Puhrsch
7903fb118f Move qkv_same, kv_same into branch (#30142)
Summary:
Perf improvements to multi_head_attention_forward

- qkv_same and kv_same were not used outside of that branch. Further, kv_same was calculated even though it is not needed when qkv_same is true.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30142

Differential Revision: D18610938

Pulled By: cpuhrsch

fbshipit-source-id: 19b7456f20aef90032b0f42d7da8c8a2d5563ee3
2019-11-22 10:40:02 -08:00
Rohan Varma
c478a92b93 Add local shutdown to process group agent (#30020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30020
This is now possible due to previous changes made in `gloo` and `ProcessGroupGloo`. We `abort` the listener thread that is waiting for a message, and join all other threads. The destructor calls this same `localShutdown` method, but we ensure this is not called multiple times.

ghstack-source-id: 94415336

Test Plan: Unit tests pass.

Differential Revision: D5578006

fbshipit-source-id: 6258879fb44c9fca97fdfad64468c1488c16ac02
2019-11-22 10:03:00 -08:00
Martin Yuan
559b3b5a7a Use unboxed registration for most of operators used in lite interpreter. (#30239)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30239

Use unboxed registration per smessmer's request. For some ops with optional args or tensor lists, where unboxed registration is not supported, we still use boxed registration.

Test Plan: Imported from OSS

Differential Revision: D18653846

Pulled By: iseeyuan

fbshipit-source-id: c22ce8111dfff0ba63316a9bcfe2b712b2d31fc1
2019-11-22 10:00:30 -08:00
Rohan Varma
f41422121e default construct rpc agent options based on the backend type (#30201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30201

Provide a default constructor so that users don't have to construct
RPC agent options themselves. Also rename this to RpcBackendOptions as suggested.
ghstack-source-id: 94411768

Test Plan: Unit tests pass.

Differential Revision: D18628698

fbshipit-source-id: 81fb45f124ad1006e628f6045162308093c9d446
2019-11-22 08:18:06 -08:00
Luke Yeager
183aa1534f Add --no_python flag (#29144)
Summary:
Allows you to use a bash script wrapper in-between launch and your
training script. e.g.
```
python -m torch.distributed.launch --nproc_per_node=8 --no_python --use_env \
    bash -c 'exec numactl --cpunodebind=$(( LOCAL_RANK / 4 )) "$@"' -- \
    python train.py ...
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29144

Differential Revision: D18345647

Pulled By: pietern

fbshipit-source-id: f05849c38c82de782988d07d300e00cf9f37253a
2019-11-22 06:05:41 -08:00
Pieter Noordhuis
a074080d57 Mark c10d::~NCCLUtils as noexcept (#29118)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29118

It's never a good idea to throw from a destructor and per #28288 we
can't use `std::make_shared` on a class with a `noexcept(false)`
destructor.

To fix this, we `abort` instead of throw from the `NCCLComm` destructor.

Closes #28288.
ghstack-source-id: 93182910

Test Plan: ProcessGroupNCCLErrorsTest runs successfully.

Reviewed By: pritamdamania87

Differential Revision: D18298271

fbshipit-source-id: ccac37753fef64fb63cb304433f4f97dc5621379
2019-11-22 04:06:12 -08:00
Natalia Lunova
23650671a8 add_hparams() NoneType error (#30286)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30286

add_hparams() in torch.utils.tensorboard.writer produced the following error
python3.7/site-packages/torch/utils/tensorboard/writer.py", line 294, in add_hparams
    with SummaryWriter(log_dir=os.path.join(self.file_writer.get_logdir(), str(time.time()))) as w_hp:
AttributeError: 'NoneType' object has no attribute 'get_logdir'
Other methods such as add_scalar() and add_histogram() use self._get_file_writer() instead of self.file_writer directly.

Test Plan:
```
writer = summary_writer()
writer.add_hparams({"a": 0, "b": 0}, {"hparam/test_accuracy": 0.5}))
writer.flush()
writer.close()
```

Reviewed By: J0Nreynolds, sanekmelnikov

Differential Revision: D18650610

fbshipit-source-id: 1039dd2067d37913a8a131c8b372491a63154899
2019-11-21 23:25:26 -08:00
neginraoof
a822a1d2a8 Avoid overwriting output type in onnx graph (#25906)
Summary:
When creating the onnx graph, we overwrite the output type with the output type of the PT graph.
In some special cases, when using scripting, the PT graph does not have type information. We want to avoid overwriting the output type in these cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25906

Reviewed By: hl475

Differential Revision: D18645903

Pulled By: houseroad

fbshipit-source-id: 56acc43e0c15c74ac8ebd689e04f7371054e362e
2019-11-21 21:30:12 -08:00
Jonathan Reynolds
0c04763d59 Changes to get inlined graph and proper names after JIT updates (#30244)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30244

This makes several small changes to the tensorboard graph parsing methods to address the recent changes to the PyTorch JIT trace/graph.
- Inline graph to get information for all nodes
- Assign and propagate scope names to GetAttr nodes
- Prune all useless GetAttr nodes (any with a ClassType output type - tensors and primitives are kept)
- Create output nodes so output tensor shape can be examined

Reviewed By: sanekmelnikov

Differential Revision: D18556323

fbshipit-source-id: b73a809bacfa554c3fe9c4ae3563525f57539874
2019-11-21 16:59:28 -08:00
Shen Li
fea963d3ae Fix BackendType repr in doc (#30243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30243

Before this commit, rpc docs shows init_rpc as the following:

```
torch.distributed.rpc.init_rpc(
   name,
   backend=<BackendType.PROCESS_GROUP: BackendValue(
     construct_rpc_agent_options_handler=<function _process_group_construct_rpc_agent_options_handler>,
     init_backend_handler=<function _process_group_init_backend_handler>)>,
   init_method=None,
   rank=-1,
   world_size=None,
   rpc_agent_options=None
)
```

It unnecessarily leaks implementation details. This commit adds a
__repr__ function to the BackendType Enum class to address this problem.

closes #29905
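
A minimal sketch of the idea on a stand-in Enum (not the actual rpc module code):

```
from enum import Enum


class BackendType(Enum):
    # Stand-in value; the real enum carries handler functions as its value.
    PROCESS_GROUP = "PROCESS_GROUP"

    def __repr__(self):
        # Hide the value (an implementation detail) and show only the member name.
        return "BackendType.{}".format(self.name)


print(repr(BackendType.PROCESS_GROUP))  # BackendType.PROCESS_GROUP
```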

Test Plan: Imported from OSS

Differential Revision: D18641559

Pulled By: mrshenli

fbshipit-source-id: 19bf8a2d21c8207f026d097d8e3f077578d53106
2019-11-21 16:22:43 -08:00
Junjie Bai
352731bd6e Revert D18632773: Split libtorch.so back into libtorch_{cpu,cuda,hip}
Test Plan: revert-hammer

Differential Revision:
D18632773

Original commit changeset: ea717c81e0d7

fbshipit-source-id: 18601439f9f81c9f389020e5a0e4e04adb21772d
2019-11-21 15:01:09 -08:00
Mike Ruberry
eff4c4d7c1 Revert D18301806: Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL
Test Plan: revert-hammer

Differential Revision:
D18301806

Original commit changeset: 03da6a26c41e

fbshipit-source-id: c1324ee8d154e7e16f5dd4f1cf3625aaa566cd39
2019-11-21 14:50:07 -08:00
Alan Du
f4b9690f2d Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL (#29095)
Summary:
Given that pybind11 implements these GIL functions, I don't think it makes sense for PyTorch to have its own bespoke versions.

Fixes https://github.com/pytorch/pytorch/issues/29065
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29095

Differential Revision: D18301806

Pulled By: ezyang

fbshipit-source-id: 03da6a26c41ee65aaadf7b67b9f0b14d2def2a5a
2019-11-21 13:44:40 -08:00
Jerry Zhang
1bba0eb35b Add clone_instance for Module (#30168)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30168

The previous implementation of `clone` in `script::Module` copies both the module instance and the
class type. After we enabled type sharing in https://github.com/pytorch/pytorch/pull/26666, we also
need a function that clones only the instance and shares the underlying class type.

Test Plan:
tbd

Imported from OSS

Differential Revision: D18631324

fbshipit-source-id: dbadcf19695faee0f755f45093b24618c047b9d1
2019-11-21 13:00:34 -08:00
Mikhail Zolotukhin
2c1c6de122 Represent the original python name the same way in traced and scripted modules.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29912

Test Plan: Imported from OSS

Differential Revision: D18533135

Pulled By: ZolotukhinM

fbshipit-source-id: 080dbafa5dcd8c1fb12fec0c956e52fceec430e7
2019-11-21 11:55:40 -08:00
Edward Yang
ec30d9028a Split libtorch.so back into libtorch_{cpu,cuda,hip} (#29731)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29731

The new structure is that libtorch_cpu contains the bulk of our
code, and libtorch depends on libtorch_cpu and libtorch_cuda.

Some subtleties about the patch:
- There were a few functions that crossed CPU-CUDA boundary without API macros. I just added them, easy enough. An inverse situation was aten/src/THC/THCTensorRandom.cu where we weren't supposed to put API macros directly in a cpp file.
- DispatchStub wasn't getting all of its symbols related to static members on DispatchStub exported properly. I tried a few fixes but in the end I just moved everyone off using DispatchStub to dispatch CUDA/HIP (so they just use normal dispatch for those cases.) Additionally, there were some mistakes where people incorrectly were failing to actually import the declaration of the dispatch stub, so added includes for those cases.
- torch/csrc/cuda/nccl.cpp was added to the wrong list of SRCS, now fixed (this didn't matter before because previously they were all in the same library)
- The dummy file for libtorch was brought back from the dead; it was previously deleted in #20774
- In an initial version of the patch, I forgot to make torch_cuda explicitly depend on torch_cpu. This lead to some very odd errors, most notably "bin/blob_test: hidden symbol `_ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infom' in lib/l
ibprotobuf.a(arena.cc.o) is referenced by DSO"
- A number of places in Android/iOS builds have to add torch_cuda explicitly as a library, as they do not have transitive dependency calculation working correctly. This situation also happens with custom C++ extensions.
- There's a ROCm compiler bug where extern "C" on functions is not respected. There's a little workaround to handle this.
- Because I was too lazy to check if HIPify was converting TORCH_CUDA_API into TORCH_HIP_API, I just made it so HIP build also triggers the TORCH_CUDA_API macro. Eventually, we should translate and keep the nature of TORCH_CUDA_API constant in all cases.

Fixes #27215 (as our libraries are smaller), and executes on
part of the plan in #29235.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18632773

Pulled By: ezyang

fbshipit-source-id: ea717c81e0d7554ede1dc404108603455a81da82
2019-11-21 11:27:33 -08:00
Lingyi Liu
7d3afc4186 enable the per channel dynamic quantization (#30122)
Summary:
This PR enables per-channel (row-wise) dynamic quantization for the linear operator. Given that we have seen some accuracy drop with per-tensor quantization, we expect per-channel quantization to help improve accuracy.
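
For context, a hedged sketch of the user-facing entry point this work feeds into (the model here is just an example); whether the weights get per-tensor or per-channel qparams is decided inside the quantized linear implementation:

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))

# Quantize the Linear modules dynamically: weights are quantized ahead of time,
# activations are quantized on the fly at inference.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(qmodel(torch.randn(1, 64)).shape)  # torch.Size([1, 8])
```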
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30122

Differential Revision: D18630541

Pulled By: lly-zero-one

fbshipit-source-id: d52685deec5e7de46cd686ae649a8c8765b9cacf
2019-11-21 10:12:05 -08:00
Will Feng
3ba1456aee Fix clip_grad_norm_ / clip_grad_value_ to take input by value instead of by non-const ref (#30216)
Summary:
The original design of `torch::nn::utils::clip_grad_norm_` / `clip_grad_value_` takes input by non-const reference, which prevents users from passing rvalue reference as input into the functions. This PR changes the functions to take input by value, which matches the Python version's semantics, and also adheres to the C++ API convention that if a function modifies its input in-place, it should take that input by value.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30216

Differential Revision: D18632543

Pulled By: yf225

fbshipit-source-id: 97a09d6467f982fe9c8120f483a9c07fcf13699e
2019-11-21 10:07:00 -08:00
Wen Zhang
6e4c23b02f Add RPC internal helper that overrides the default pickler. (#30185)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30185

To enable share_memory over RPC, add an internal helper that overrides the default RPC pickler.
Replace D18598974
ghstack-source-id: 94299660

Test Plan:
`python test/test_rpc_spawn RpcTestWithSpawn.test_use_rpc_pickler`

`buck test mode/dev-nosan //caffe2/test:rpc_spawn -- test_use_rpc_pickler`

Reviewed By: mrshenli

Differential Revision: D18621372

fbshipit-source-id: c680ef711b2c42524c47a5266e911fa8e0cd45ae
2019-11-21 10:01:02 -08:00
Nikolay Korovaiko
e3334723b2 fix a crash in nested bailouts (#30097)
Summary:
A prim::BailOut also needs to capture max trip counts as for some graphs they aren't constants and they are used in continuation graphs to figure out the remaining number of iterations to run.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30097

Differential Revision: D18624446

Pulled By: Krovatkin

fbshipit-source-id: 085d25981c6669f65848996cd2d50066cc252048
2019-11-21 09:53:12 -08:00
Edward Yang
9e81616343 Merge Tensor and Variable types. (#28287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28287

This PR eliminates the static distinction between
Tensor and Variable.  Every Variable is a Tensor, no need to static_cast
or call the Variable constructor.

To do this, I need Tensor to have API parity with Variable. I have already
moved most of the methods I don't want in Tensor off Variable.
These implementations are all placed in Tensor.cpp.

One API difference is that all Variable methods now have const, so we no longer
have faux const-correctness (see https://github.com/zdevito/ATen/issues/27 for
back story)

This diff is BC breaking in a few ways:
- Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for
  `torch::autograd` functions no longer works, you have to explicitly qualify
  them with `torch::autograd` (examples: `torch/nn/parallel/data_parallel.h`)
- Because Variable and Tensor are now the same type, code which assumes that
  they are different types (e.g., for the purposes of templating, or enable_if checks)
  will not work until you delete the (now) redundant overload/specialization.
  (examples: `torch/nn/modules/container/any.h`, `torch/csrc/utils/pybind.h`)

Some other notes:
- I'm not sure what was going with the old template implementation of `extract_vars`,
  but I couldn't get the sfinae version to work. Replacing it with an overloading based version
  made it work.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18571426

Pulled By: ezyang

fbshipit-source-id: 2ea8151e5f1d8512cdebf1345399642e68b707b8
2019-11-21 09:26:39 -08:00
Wanchao Liang
f7b12a9858 fix aten::grad to return optional list (#29577)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29577

`torch.autograd.grad` can return None if one of the inputs is not in the
autograd graph or does not require grad. This change fixes the schema so that it
returns a list of optional tensors instead of a list of tensors.

This might have a BC issue unfortunately, but I think it's rare both
internally and externally (only training uses it, and most training code
calls backward instead of autograd.grad), so whitelist it.
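
On the Python eager side this corresponds to the `allow_unused` case, where some returned gradients are `None`; a small sketch:

```
import torch

a = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
out = (a * 2).sum()          # b never enters the graph

grads = torch.autograd.grad(out, (a, b), allow_unused=True)
print(grads[0])              # gradient w.r.t. a
print(grads[1])              # None, hence the Optional[Tensor] list in the schema
```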

Test Plan: Imported from OSS

Differential Revision: D18491642

fbshipit-source-id: d32b2b3446cf9e8b9a98f6d203a21a75643d8991
2019-11-20 22:19:10 -08:00
Rohan Varma
cc16819028 Add abort API in gloo ProcessGroup Send/Recv Work (#29928)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29928

Original author: Shihao Xu
- Add abort to `c10d::ProcessGroup::Work`.
- Change the return type of `c10d::ProcessGroup::Work::wait()` to boolean to indicate if the work is aborted after waiting.
- Add unit test for the correctness of abort.
ghstack-source-id: 94305515

Differential Revision: D5685727

fbshipit-source-id: 6e682bb563c2393a5c303c877331140417d3f607
2019-11-20 20:18:54 -08:00
lsrock1
0a77c090d5 C++ parity, convert_parameters (#29267)
Summary:
yf225 https://github.com/pytorch/pytorch/issues/25883
Updates parameters_to_vector and vector_to_parameters for C++ API parity.
Please check!
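
For reference, a minimal sketch of what the Python helpers do; the C++ parity functions are expected to mirror this behavior:

```
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

model = nn.Linear(3, 2)

# Flatten every parameter into one 1-D tensor ...
vec = parameters_to_vector(model.parameters())
print(vec.shape)  # torch.Size([8]): 6 weights + 2 biases

# ... and copy a flat vector back into the parameters in place.
vector_to_parameters(vec * 0, model.parameters())
```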
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29267

Differential Revision: D18628571

Pulled By: yf225

fbshipit-source-id: 03783e6b0f8183dd97ae48f3da4acb1d07083555
2019-11-20 19:59:11 -08:00
Lara
bbb3c415c9 ONNX Hardtanh Opset 11 Support (#30169)
Summary:
Add support for hardtanh that was blacklisted in opset 11.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30169

Reviewed By: hl475

Differential Revision: D18619552

Pulled By: houseroad

fbshipit-source-id: 0c1bfb0a53d1dd2327c5db7afd03a90482abb9fe
2019-11-20 18:59:00 -08:00
James Reed
449828378d Serialize ClassType as its qualname
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30058

Test Plan: Imported from OSS

Differential Revision: D18584269

Pulled By: jamesr66a

fbshipit-source-id: 5f1d0142bd7cd94eecbd2ed9250a0de47639040b
2019-11-20 16:17:26 -08:00
Rohan Varma
de05114618 polish examples in docstrings and update docs to reflect correct use of (#30052)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30052

Some of the examples provided in `rpc/api.py` were not updated along
with the code changes; this PR updates them. It also removes the
`dist.ProcessGroup` information since `init_rpc` now initializes a default
process group.
ghstack-source-id: 94273004

Test Plan: Unit tests pass

Differential Revision: D18582596

fbshipit-source-id: a637683f0221f9600f7e50b74e9f7e5a1d331d8f
2019-11-20 15:30:38 -08:00
Jeremy Lilley
bebed492cf Make RRefContext singleton leaky, deal with module destruct order race. (#30172)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30172

RRefContext is a conventional singleton, used by rref.cpp. At module teardown
time, it's not defined whether rref_context.cpp or rref.cpp will be destroyed first.

We were observing a SIGSEGV because RRefContext is destroyed before a dangling
~UserRRef() call is able to execute. Particularly, the underlying
ctx.agent()->getWorkerInfo(ownerId_) call failed.

This change just avoids the SIGSEGV by forcing an intentional leak, though we still
need to deal with why there's a dangling UserRRef at module destruction time.
ghstack-source-id: 94287441

Test Plan:
existing test suite
       test_elastic_averaging in context of D18511430, where the segfault reproed reliable.

Differential Revision: D18620786

fbshipit-source-id: 17b6ccc0eb1724b579a68615e4afb8e9672b0662
2019-11-20 15:12:51 -08:00
Wanchao Liang
36aaa299f8 shut up clang-tidy on ir.h/cpp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30118

Test Plan: Imported from OSS

Differential Revision: D18620239

fbshipit-source-id: 5734d9d1f38a9b38ac4a1fc121fb246b783fa262
2019-11-20 13:19:25 -08:00
James Reed
c2b7b2cbf8 Make observed values actually flow through observers (#30140)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30140

This seems more semantically correct to me, and makes it so we don't have to iterate over Uses of observed values

Test Plan: Imported from OSS

Differential Revision: D18610676

Pulled By: jamesr66a

fbshipit-source-id: f835266f148bd8198b05cd9df95276e1112dd250
2019-11-20 12:48:16 -08:00
James Reed
2d534abb39 Modernize graph mode IR API calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30130

Test Plan: Imported from OSS

Differential Revision: D18608004

Pulled By: jamesr66a

fbshipit-source-id: 42e946ec96b1d26a364abe0a7eb71aa0aecc52ed
2019-11-20 12:48:12 -08:00
Rohan Varma
f304bd5062 rename join_rpc to wait_all_workers in public api (#30050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30050

Renames this API to wait_all_workers as discussed.
ghstack-source-id: 94273005

Test Plan: Unit tests pass

Differential Revision: D18581466

fbshipit-source-id: 4ff5d5fb2d528f17252d5b5f30c3047d2efb92bf
2019-11-20 12:38:35 -08:00
Will Feng
a460c856dd Fix naming for kl_div and binary_cross_entropy functional options (#30146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30146

This PR fixes naming for kl_div and binary_cross_entropy functional options, to be more consistent with the naming scheme of other functional options.

Test Plan: Imported from OSS

Differential Revision: D18618971

Pulled By: yf225

fbshipit-source-id: 2af62c1a0ace2cd0c36c2f1071639bf131d8fe61
2019-11-20 12:23:50 -08:00
Raghuraman Krishnamoorthi
67b77afcdf Fast histogram observer
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29790

Test Plan:
import torch
import time
import numpy as np
from torch.quantization.observer import HistogramObserver

X = torch.randn(1,1,224,224)

obs = HistogramObserver(2048)
acc_time = 0
for i in range(100):
   X = torch.randn(10,1,320,320)
   start = time.time()
   obs(X)
   #obs.forward_new(X)
   acc_time = acc_time + time.time()-start
print(acc_time)

Imported from OSS

Differential Revision: D18508562

fbshipit-source-id: 456e82360ce1b3f9d8b6e1832d23f1339655011a
2019-11-20 11:14:41 -08:00
Jerry Zhang
f2b851a9e5 Returning axis from calculate_qparams (#29494)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29494

`calculate_qparams` for per-channel quantization should return the axis. This
PR adds that and also adds corresponding support in graph mode.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18580905

fbshipit-source-id: f9691c1f043f8bca39f81716a4d0b10f60a65396
2019-11-20 11:06:48 -08:00
David Reiss
fbcb88e8b3 Split module.cpp and export.cpp to support saving on mobile (#29881)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29881

Breaking these into separate files allows us to have three different builds:
- Mobile inference-only.
- Mobile with module saving.
- Server with module saving and other export functions like ONNX.

And this can be accomplished just by selecting which cpp files to compile,
without setting any preprocessor flags.

Test Plan: CI.  Local mobile+saving build.

Reviewed By: smessmer

Differential Revision: D18509296

fbshipit-source-id: 9438273bac4624df5c7f035b2bacb901cce43053
2019-11-20 10:47:21 -08:00
Will Feng
72bc7bf37b Revert D18612158: Fix naming for kl_div and binary_cross_entropy functional options
Test Plan: revert-hammer

Differential Revision:
D18612158

Original commit changeset: 8c403fa1c2a0

fbshipit-source-id: f22d7c4664119d4e7397fc017bacecf3e318af11
2019-11-20 10:26:31 -08:00
Will Feng
e84fcc1fd1 Fix naming for kl_div and binary_cross_entropy functional options (#30146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30146

This PR fixes naming for kl_div and binary_cross_entropy functional options, to be more consistent with the naming scheme of other functional options.

Test Plan: Imported from OSS

Differential Revision: D18612158

Pulled By: yf225

fbshipit-source-id: 8c403fa1c2a0a65734a3ec2387cc0937c46cab24
2019-11-20 09:44:21 -08:00
xiaobing.zhang
c2c835dd95 Port sigmoid backward to Aten(CPU+CUDA) (#29185)
Summary:
VitalyFedyunin, this PR ports sigmoid backward to ATen:
**Test script:**
```
import torch
import torch.nn as nn
import time

torch.manual_seed(0)

def _time():
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.time()

device = "cpu"
if torch.cuda.is_available():
    device = "cuda"

#warm up
for n in [100, 10000]:
    input = torch.randn(128, n, requires_grad=True, device=device)
    for i in range(1000):
        output = input.sigmoid().sum()
        output.backward()

#get running time
for n in [100, 10000]:
    bwd_t = 0
    input = torch.randn(128, n, requires_grad=True, device=device)
    for i in range(10000):
        output = input.sigmoid().sum()
        t1 = _time()
        output.backward()
        t2 = _time()
        bwd_t = bwd_t + (t2 - t1)
    bwd_avg = bwd_t / 10000 * 1000
    print("input size(128, %d), backwad avg time is %.2f (ms)." % (n, bwd_avg))
```
Test Device: CPU: skx-8280, GPU: Tesla P40

**Performance**:
Before:
```
GPU:
input size(128, 100), backwad avg time is 0.14 (ms).
input size(128, 10000), backwad avg time is 0.17 (ms).
CPU:
OMP_NUM_THREADS=56
input size(128, 100), backwad avg time is 0.06 (ms).
input size(128, 10000), backwad avg time is 4.21 (ms).
OMP_NUM_THREADS=1
input size(128, 100), backwad avg time is 0.06 (ms).
input size(128, 10000), backwad avg time is 2.30 (ms).
```
After:
```
GPU:
input size(128, 100), backwad avg time is 0.14 (ms).
input size(128, 10000), backwad avg time is 0.17 (ms).
CPU:
OMP_NUM_THREADS=56
input size(128, 100), backwad avg time is 0.05 (ms).
input size(128, 10000), backwad avg time is 0.48 (ms).
OMP_NUM_THREADS=1
input size(128, 100), backwad avg time is 0.04 (ms).
input size(128, 10000), backwad avg time is 0.86 (ms).
```
How to set the number of threads? Use the following script:
```
num_threads=$1
script=$2
last_core=`expr $num_threads - 1`
echo "using $num_threads OMP threads"
echo "bind cores to 0~$last_core"
export OMP_NUM_THREADS=$num_threads
export KMP_AFFINITY=granularity=fine,compact,1,0
numactl --physcpubind=0-$last_core --membind=0 python $script
```
and run **./run.sh num_threads test.py**.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29185

Differential Revision: D18587352

Pulled By: VitalyFedyunin

fbshipit-source-id: 8167ca261960399f795d35a83fa8c4be365bc4da
2019-11-20 07:31:42 -08:00
albanD
c0104a1c89 Fix typo in comment in cpp_extension (#30028)
Summary:
From https://github.com/pytorch/pytorch/issues/26614
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30028

Differential Revision: D18597666

Pulled By: albanD

fbshipit-source-id: 93bf0e4ee34a63df4b544d44f630a9c0fc95fd83
2019-11-20 07:16:48 -08:00
Pavel Belevich
f8e7f3fca4 C++ API parity: BCEWithLogitsLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28783

Test Plan: Imported from OSS

Differential Revision: D18202435

Pulled By: pbelevich

fbshipit-source-id: 011b028bbb2a091e98d3548616b99d7b4569c239
2019-11-20 06:46:38 -08:00
Michael Suo
93db2b86d1 Fix type sharing on loaded ScriptModules (#29826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29826

After save/load, we lose concrete type information. So if you tried to
script something that contained a loaded ScriptModule as a submodule,
the following sequence happened:
1. During ConcreteType inference, the loaded submodule got a new
inferred type.
2. But it already has a type! So there was a type mismatch.

To fix this, we should generate a ConcreteType directly from the loaded
submodule type (similar to what we do for interfaces). This makes sense
too--the ConcreteModuleType should be empty, since all the "sugaredness"
was stripped out during the save/load process.
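
The previously failing scenario, roughly (the file name is hypothetical):

```
import torch
import torch.nn as nn

torch.jit.script(nn.Linear(4, 4)).save("sub.pt")
loaded = torch.jit.load("sub.pt")        # loaded module: the "sugaredness" is gone


class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = loaded                # loaded ScriptModule used as a submodule

    def forward(self, x):
        return self.sub(x)


# Before this fix, scripting the wrapper inferred a fresh type for `self.sub`
# and hit a mismatch with the type it already carries.
scripted = torch.jit.script(Wrapper())
```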

Test Plan: Imported from OSS

Differential Revision: D18575009

Pulled By: suo

fbshipit-source-id: 4d329b7e9b7e7624f459e50092e35ab0ab813791
2019-11-20 01:13:09 -08:00
Michael Suo
558a777615 Re-unify module and interface in ConcreteModuleType (#29825)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29825

We made `ModuleInfo` a union initially to represent the idea that a
submodule could either be a regular module or a module interface.

This PR represents module interfaces as a ConcreteModuleType with no
info (e.g.  no "sugaredness"), and with the interface type as the
underlying `jitType_`. This has the effect of reducing the special
casing around adding/maintaining module info.

Test Plan: Imported from OSS

Differential Revision: D18575011

Pulled By: suo

fbshipit-source-id: 53e297b39aa1a03bcdadd795ff225aa68fec9d70
2019-11-20 01:13:06 -08:00
Michael Suo
63e66fd267 Split ConcreteModuleType into two types (#29824)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29824

We have two distinct phases/uses for ConcreteModuleType:
1. We are building it up and using it to check whether we can
reuse JIT types. (RawConcreteModuleType)
2. We are using it to satisfy ModuleValue::attr queries.
(ConcreteModuleType)

These types share an underlying `ConcreteModuleTypeData` which
actually stores the relevant info.

Previously they were the same type because I was lazy, but it's been the
source of a bug. So split them to formalize the differing invariants for
the two phases.

Test Plan: Imported from OSS

Differential Revision: D18575010

Pulled By: suo

fbshipit-source-id: 3e4ebcd36e78b947150d8f0dbb74ecccad23e7c4
2019-11-20 01:13:02 -08:00
Pritam Damania
c06f9023e5 Polish rpc docstring. (#30069)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30069

1) Fix rpc docstrings
2) Fix some links
ghstack-source-id: 94250890

Test Plan: waitforbuildbot

Differential Revision: D18588231

fbshipit-source-id: 33846ace1afa94d25f34b0370437abf6d9408f06
2019-11-19 23:10:14 -08:00
Yanli Zhao
b410d864c9 make python remote exception to rethrow when using remote reference to itself (#29930)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29930
Right now, rethrowing of Python remote-call exceptions is coupled with deserialization.
For an owner ref, setValue() and getValue() do not use serialization and deserialization, so when a user creates a ref to itself and calls ownerRef.to_here(), the Python remote-call exception will not be rethrown.

This diff moves remote exception rethrowing out of deserialization, so the exception can be handled for ownerRef.localValue() or ownerRef.to_here().

close #29924
ghstack-source-id: 94210894

Test Plan: unit tests

Differential Revision: D18541916

fbshipit-source-id: 7cda93f623d52c740b3c1b1fa9a442f866984340
2019-11-19 21:33:21 -08:00
Pavel Belevich
cc81769e10 C++ API parity: isfinite
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30083

Test Plan: Imported from OSS

Differential Revision: D18594723

Pulled By: pbelevich

fbshipit-source-id: 5970e0aa6ef8994e9c4a741784fd053383aaceb7
2019-11-19 20:00:05 -08:00
Jerry Zhang
b2291d4600 Make PerChannelMinMaxObserver scriptable using torch.jit.ignore (#29416)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29416

att

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18580906

fbshipit-source-id: 5370300b89e26c2b4662b17e51284e8708cb5843
2019-11-19 19:12:55 -08:00
Shihao Xu
80e3f17301 Resubmit "Add RpcAgentOptions struct type, which bundles different required arguments for different RpcAgents" (#30093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30093

https://github.com/pytorch/pytorch/pull/28226 introduced a `worker_to_id` arg to the `init_rpc` function for other `RpcAgent`s, but it's not really used by `ProcessGroupAgent`. Cleanup is wanted for this, as described in https://github.com/pytorch/pytorch/issues/29031.

To accommodate the differences between `RpcAgent`s, this adds an `RpcAgentOptions` base class, which allows leveraging inheritance to add extra fields.
ghstack-source-id: 94197295

Test Plan:
### OSS RPC + RRef tests

```
buck test mode/dev-nosan //caffe2/test:rpc_fork
```

```
buck test mode/dev-nosan caffe2/torch/fb/distributed/thriftRpcBackend/test:thrift_rpc_fork_test -- test_sync_rpc
```

### Prototype RRef tests

```
buck test mode/dev-nosan caffe2/torch/fb/distributed/pytorch/tests:test_rpc
```

```
buck test mode/dev-nosan //caffe2/torch/fb/distributed/pytorch/tests:test_rpc_thrift_rpc_agent
```

### Dist autograd

```
buck test mode/dev-nosan caffe2/test:dist_autograd_fork
```

```
buck test mode/dev-nosan caffe2/torch/fb/distributed/thriftRpcBackend/test:thrift_dist_autograd_fork_test
```

Differential Revision: D18595578

fbshipit-source-id: 616fca3b844c171ed5277bbc6a2b1693bc3a8065
2019-11-19 18:52:30 -08:00
Guanheng Zhang
15bc41a8aa Overwrite __setstate__ func in MultiheadAttention (#29001)
Summary:
Overwrite the `__setstate__` function in nn.MultiheadAttention and add the `self._qkv_same_embed_dim` attribute to the `dict`. Current users should not be affected by the change.

The changes have been tested by loading a MultiheadAttention model trained with PyTorch 1.1. If users have an old MultiheadAttention model, please use `torch.load` to load the old model for inference under v1.4.0 and above.

```
import torch
model = torch.load('old_v1.1.0_MultiheadAttention.pt') # model works for torch 1.4
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29001

Differential Revision: D18257671

Pulled By: zhangguanheng66

fbshipit-source-id: fa41b85f6d53034dc9f445af60f2ad9636e9abf7
2019-11-19 18:32:44 -08:00
Alisson Gusatti Azzolini
07e14c7cd0 DistributedOptimizer: wait for all workers to finish _LocalOptimizer constructor (#30062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30062

This allows to catch exceptions during optimizer creation.
ghstack-source-id: 94232436

Test Plan: new unit test.

Differential Revision: D18586108

fbshipit-source-id: 71cfdf337fe803dbea8787b4c68e5a52b70a1f68
2019-11-19 18:30:00 -08:00
Tao Xu
2367e71f55 Disable ProfilingGraphExecutorImpl for mobile (#30067)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30067

### Summary

The mobile build has been broken since last week due to a runtime error caused by a missing operator in JIT:

```shell
libc++abi.dylib: terminating with uncaught exception of type torch::jit::script::ErrorReport:
Unknown builtin op: aten::_adaptive_avg_pool2d_backward.
Could not find any similar ops to aten::_adaptive_avg_pool2d_backward. This op may not exist or may not be currently supported in TorchScript.
:
at <string>:9:28
                grad_self = grad.expand(self.size()) / (self_size[-1] * self_size[-2])
            else:
                grad_self = torch._adaptive_avg_pool2d_backward(grad, self)
                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

            return grad_self
```
### How this happens

Since we've disabled autograd for the open-sourced version, the `backward` ops won't get registered by JIT.

When `forward` runs, a `GraphExecutor` is created according to the value of `executor_mode`. In the mobile case, this was set to true, which gives us the `ProfilingGraphExecutorImpl` object. It seems this executor eventually tries to emit IR for autograd schemas, which causes the error.

### Fix

There are two ways to fix it.

1. Add a macro to disable `profiling_mode` as well as `executor_mode` on mobile. Like what `FBCODE_CAFFE2` does [here](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/profiling_graph_executor_impl.cpp#L22).
2. Disable the two modes at runtime, by calling `torch::jit::getExecutorMode() = false;` before calling forward.

(IMO, the second fix is sort of a workaround, as it doesn't make sense from a user perspective (why would I need to do this?). But the upside is that we don't have to introduce yet another macro.)

Feel free to drop comments if there is a better way to fix it.

### How this was not detected by our mobile CI

We're working on adding runtime tests to our mobile build to prevent issues like this.

### Test Plan

- The error above disappears
- Don't break CI

cc AshkanAliabadi

Test Plan: Imported from OSS

Differential Revision: D18605998

Pulled By: xta0

fbshipit-source-id: 11fa85c2b44d54bc28a9c45731af0f5d17d5804c
2019-11-19 18:04:57 -08:00
Mikhail Zolotukhin
2c8dce915c Show full call stack in TorchScript exception even when calls were inlined.
Summary:
This uses newly added InlinedCallStack to print the original call stack
even if the real call stack is shallower because of inlining.
This change also makes torchscript stacktraces look like python ones.

Example:
```
torch.jit.script
def baz(c, b):
    return c + b

torch.jit.script
def foo(c, b):
    return baz(c, b)

torch.jit.script
def bar(c, b):
    return foo(c, b)

bar(torch.rand(10), torch.rand(9))
```

Output before:
```
Traceback (most recent call last):
  File "fail.py", line 25, in <module>
    bar(torch.rand(10), torch.rand(9))
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
The above operation failed in interpreter, with the following stack trace:
at fail.py:15:11
torch.jit.script
def baz(c, b):
    return c + b
           ~~~~~ <--- HERE
```

Output after:
```
Traceback (most recent call last):
  File "fail.py", line 41, in <module>
    bar(torch.rand(10), torch.rand(9))
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
The above operation failed in interpreter.
Traceback (most recent call last):
  File "fail.py", line 33
torch.jit.script
def bar(c, b):
    return foo(c, b)
           ~~~ <--- HERE
  File "fail.py", line 29, in foo
torch.jit.script
def foo(c, b):
    return baz(c, b)
           ~~~ <--- HERE
  File "fail.py", line 25, in baz
torch.jit.script
def baz(c, b):
    return c + b
           ~~~~~ <--- HERE
```

Output of non-scripted python code:
```
Traceback (most recent call last):
  File "fail.py", line 36, in <module>
    bar(torch.rand(10), torch.rand(9))
  File "fail.py", line 21, in bar
    return foo(c, b)
  File "fail.py", line 18, in foo
    return baz(c, b)
  File "fail.py", line 15, in baz
    return c + b
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
```

Differential Revision: D18532812

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: e7e5ba5e4a8f1c7086406271d0f1685d9db8541a
2019-11-19 17:58:55 -08:00
Mikhail Zolotukhin
a9d1465c82 Add logging to inliner. (#27922)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27922

gh-metadata: pytorch pytorch 27922 gh/ZolotukhinM/140/head

Differential Revision: D17914135

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: d75bdf1efbfdc877f10017b16046bdbdc97e2dd6
2019-11-19 17:58:51 -08:00
Mikhail Zolotukhin
59eb682ce3 Add InlinedCallStack class. (#27921)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27921

InlinedCallStack serves a similar purpose to Scope, but instead of storing
string names of the functions it stores pointers to the Function objects
themselves. Currently, scopes are used in tracing and callstacks are
used in scripting; hopefully I will be able to merge them in the future.

gh-metadata: pytorch pytorch 27921 gh/ZolotukhinM/139/head

Differential Revision: D17914132

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: b1daa6700199ee1a97a7f49a6fced9ac0dc13051
2019-11-19 17:58:46 -08:00
Mikhail Zolotukhin
12263cfa98 Make inlineCallTo to take Function instead of Graph as the callee argument. (#27920)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27920

gh-metadata: pytorch pytorch 27920 gh/ZolotukhinM/138/head

Differential Revision: D17914133

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: 6aec2a71ed5718fecab81a107e37b26088b94c65
2019-11-19 17:58:42 -08:00
Mikhail Zolotukhin
0eb8c3dbfb Add a variant of insertGraph that fills values map. (#27919)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27919

gh-metadata: pytorch pytorch 27919 gh/ZolotukhinM/137/head

Differential Revision: D17914134

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: ecc85c97b497eaf82e25e9c6b4477f6b1103bf69
2019-11-19 17:58:37 -08:00
Will Feng
bb1d9b238d torch::nn::FractionalMaxPool{2,3}d module and functional
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29933

Test Plan: Imported from OSS

Differential Revision: D18548174

Pulled By: yf225

fbshipit-source-id: 070776db6e8b7ad94d9b7cbd82b3d6966f061a46
2019-11-19 17:24:07 -08:00
Divyansh Singhvi
ec52d911bd InstanceNorm{1,2,3}d (#28790)
Summary:
Hi yf225,

I have a few doubts related to the implementation:
1) What tests do I have to write?
2) What does _load_state_from_dict do?
3) Do I need to override the reset() function? I cannot see its utility.
4) InstanceNormOptions could be merged with BatchNormOptions, but I find that
`track_running_status` is not defined; instead, `stateful` is defined.

InstanceNorm{1,2,3}d https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28790

Differential Revision: D18588666

Pulled By: yf225

fbshipit-source-id: bb9b81f01f62c3fc8765fa0ba0716768087ee155
2019-11-19 16:57:01 -08:00
Will Feng
99c59d73a7 Remove input_channels / output_channels / with_bias from ConvOptions (#29838)
Summary:
Since torchvision is not using input_channels / output_channels / with_bias in ConvOptions anymore (https://github.com/pytorch/vision/pull/1576), we can remove the bridges now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29838

Differential Revision: D18597943

Pulled By: yf225

fbshipit-source-id: 59101437f032f042574998eb90eaf0be09352364
2019-11-19 16:28:54 -08:00
Vitaly Fedyunin
877c96cddf explicitly provide memory format when calling to *_like operators
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30008

Test Plan: Imported from OSS

Differential Revision: D18575981

Pulled By: VitalyFedyunin

fbshipit-source-id: ec3418257089ad57913932be1a8608cd20ce054c
2019-11-19 16:19:29 -08:00
Vitaly Fedyunin
e46babb637 explicitly provide memory format when calling to *_like operators
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30007

Test Plan: Imported from OSS

Differential Revision: D18575982

Pulled By: VitalyFedyunin

fbshipit-source-id: 83be0857fe1080216cd09547a2b3d34455a0cce4
2019-11-19 16:19:24 -08:00
Vitaly Fedyunin
04018ba865 explicitly provide memory format when calling to *_like operators (Redo of 81bf7364)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30006

Test Plan: Imported from OSS

Differential Revision: D18575984

Pulled By: VitalyFedyunin

fbshipit-source-id: b72ea0404f0363001c94f39567c0aeae71cb1f67
2019-11-19 16:19:20 -08:00
Vitaly Fedyunin
66913fe5c1 explicitly provide memory format when calling to *_like operators (Redo of cc1c01)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30005

Test Plan: Imported from OSS

Differential Revision: D18575976

Pulled By: VitalyFedyunin

fbshipit-source-id: 94cc213f42f9bd50eaa096872f38c4563e5c9ba1
2019-11-19 16:19:16 -08:00
Will Feng
05a7aaa742 Pass Tensor instead of Tensor& to torch::nn functionals that can change input in place (#30112)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30112

Currently, we have torch::nn functionals that take `input` as `Tensor&` in order to be able to change `input`'s value in place. We likely shouldn't do this because it prevents the following use case:
```cpp
F::elu(torch::tensor(1), F::ELUFuncOptions().inplace(true))
```
The solution is to change the type of `input` to `Tensor`, so that we can pass an rvalue into the functional.

Test Plan: Imported from OSS

Differential Revision: D18601580

Pulled By: yf225

fbshipit-source-id: 639a86eb62f6c986b0f20bf7e201983e83126e73
2019-11-19 16:11:39 -08:00
nuka137
a75b669b0f C++ API: torch::nn::ConvTranspose{1,2,3}d (#29721)
Summary:
Add torch::nn::ConvTranspose{1,2,3}d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29721

Differential Revision: D18588943

Pulled By: yf225

fbshipit-source-id: d4dbb091389367e70459399d5cda3778325c2120
2019-11-19 16:04:12 -08:00
Jerry Zhang
c2e576e74b Per channel quantization support in insert_prepack_unpack (#29701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29701

att

Test Plan:
python test/test_jit.py 'TestJit.test_insert_prepack_unpack'

Imported from OSS

Differential Revision: D18580908

fbshipit-source-id: 2d1ce9b6279586198cb53a7fd2a35325fa20bf20
2019-11-19 15:53:04 -08:00
Pritam Damania
63c957cd94 Use std::shared_ptr for DistAutogradContext. (#29770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29770

We were passing around const and non-const references for
DistAutogradContext from DistAutogradContainer. This wasn't safe since the
context could be deleted from the container and a thread might still be using
the reference. This usually would happen when a backward pass fails on the node
driving the backward pass (resulting in delete context messages being sent to
all nodes) but other nodes are still executing code related to that autograd
context.

This was also the reason why `test_backward_autograd_engine_error` was flaky.

Using a std::shared_ptr everywhere ensures we're safe and never crash.

Closes #28928
Closes #26922
ghstack-source-id: 94201446

Differential Revision: D18494814

fbshipit-source-id: 0c925fdbd5755f6d876dad56885e2cbaf41fc5f0
2019-11-19 15:50:42 -08:00
Jerry Zhang
a689e3a0c4 Support per channel quantization in insert_quant_dequant and fold_prepack (#29492)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29492

Previously graph mode quantization only worked for per-tensor quantization.
This PR adds support for per-channel quantization as well; changes include:
- insert per-channel quantization calls (insert_quant_dequant)
- add support for folding prepacked per-channel quantized weights (fold_prepack)

Test Plan:
Testing is not possible until we can script PerChannelObserver, which comes in https://github.com/pytorch/pytorch/pull/29416;
we'll add tests in a separate PR after that.

Imported from OSS

Differential Revision: D18580444

fbshipit-source-id: 347c07f201648ec49f070523642a9170278f8aa4
2019-11-19 12:25:28 -08:00
David Riazati
dca123e76d Add zipfile serialization (#29232)
Summary:
Stacked PRs
 * https://github.com/pytorch/pytorch/issues/29244 - Use custom CRC
 * **https://github.com/pytorch/pytorch/issues/29232 - Add zipfile serialization**

This adds a serialization method that uses a zipfile (https://github.com/pytorch/pytorch/issues/26567). Right now it is
guarded behind a flag `_use_new_zipfile_serialization`. In release mode it seems to have performance about the same / slightly better than the current serialization in some simple benchmarks for large/small tensors.

Follow ups:
* Flip the `_use_new_zipfile_serialization` flag
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29232

Differential Revision: D18332036

Pulled By: driazati

fbshipit-source-id: 1bac0847c4d599612cba905f2cac8248783be2f4
2019-11-19 10:17:32 -08:00
Suyash458
e88d096321 C++/Python API Parity: add AlphaDropout (#28424)
Summary:
- add `AlphaDropoutImpl` to `modules/dropout.h` and `modules/dropout.cpp`
 - add `functional/dropout.h` containing the `alpha_dropout` function
 - include `functional/dropout.h` in `nn/functional.h`
 - add functional and module tests
-  related issue https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28424

Differential Revision: D18589162

Pulled By: yf225

fbshipit-source-id: c85734e02431a6c052515e26b11ca30ad7303644
2019-11-19 10:05:51 -08:00
Peter Bell
37ca5a8a64 convert_sync_batchnorm should not convert _InstanceNorm instances (#29985)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/29187

This introduces a new class `_NormBase` that `_InstanceNorm` and `_BatchNorm` inherit from separately. This means the `isinstance(module, _BatchNorm)` check won't falsely pass for `_InstanceNorm`.

The suggested fix of adding `and not isinstance(module, _InstanceNorm)` works as well, but requires introducing a cyclic dependency between `instancenorm.py` and `batchnorm.py`.
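
A minimal sketch of the behavior after this change, using the usual conversion entry point:

```
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.InstanceNorm2d(8),   # must stay an InstanceNorm2d
    nn.BatchNorm2d(8),      # should become a SyncBatchNorm
)

converted = nn.SyncBatchNorm.convert_sync_batchnorm(model)
assert isinstance(converted[1], nn.InstanceNorm2d)   # no longer falsely converted
assert isinstance(converted[2], nn.SyncBatchNorm)
```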
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29985

Differential Revision: D18588104

Pulled By: yf225

fbshipit-source-id: f599da3b902ad9c56836db4d429bfc462ed51338
2019-11-19 09:39:36 -08:00
Lara Haidar
45024e7a35 Support Exporting Bitshift to ONNX (#28210)
Summary:
Support exporting left/right bitshifts to ONNX for all opset versions.

ONNX has a bitshift operator in opset 11, but it only supports unsigned ints, so it can't be used in PyTorch (since uint8 is the only unsigned integer type).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28210

Reviewed By: hl475

Differential Revision: D18575512

Pulled By: houseroad

fbshipit-source-id: 74161db67f599996a0614981edcc171af6780d21
2019-11-19 09:25:50 -08:00
Edward Yang
1dda8186ae Revert D18549919: Add RpcAgentOptions struct type, which bundles different required arguments for different RpcAgents
Test Plan: revert-hammer

Differential Revision:
D18549919

Original commit changeset: b9f3f1a41d1f

fbshipit-source-id: 2d5e578d18c0725b59eb99a0e942fbf7fe3341ee
2019-11-19 08:14:40 -08:00
Rohan Varma
83513506c3 poll for timed out futures in process group agent (#29601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29601

Follow up from https://github.com/pytorch/pytorch/pull/28392. Adds a background thread to `ProcessGroupAgent` that polls for timed out RPCs at a pre-set interval, and marks them as completed with a timeout exception if they have timed out. Also deletes the futures from the corresponding maps `futures_` and `futureTimeouts`. Unit tests are added to ensure that timed out RPCs are appropriately cleaned up.

Also adds a `shutdown` variable to the process group agent to control shutting down this background thread, which can eventually be extended to control a clean shutdown of the process group agent.
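
Not the actual agent code, but a minimal Python sketch of the polling pattern described above (all names and the interval are made up; the tracked future only needs a `set_exception` method, e.g. `concurrent.futures.Future`):

```
import threading
import time


class TimeoutPoller:
    """Background thread that periodically fails futures whose deadline has passed."""

    def __init__(self, poll_interval=1.0):
        self._poll_interval = poll_interval
        self._futures = {}                 # id -> (future, deadline)
        self._lock = threading.Lock()
        self._shutdown = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def track(self, fut_id, future, timeout):
        with self._lock:
            self._futures[fut_id] = (future, time.time() + timeout)

    def _run(self):
        while not self._shutdown.is_set():
            now = time.time()
            with self._lock:
                expired = [i for i, (_, dl) in self._futures.items() if dl <= now]
                for i in expired:
                    fut, _ = self._futures.pop(i)
                    fut.set_exception(TimeoutError("RPC {} timed out".format(i)))
            time.sleep(self._poll_interval)

    def shutdown(self):
        # Stop the poller exactly once and join the background thread.
        self._shutdown.set()
        self._thread.join()
```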
ghstack-source-id: 94175131

Test Plan: Added unit tests

Differential Revision: D18434215

fbshipit-source-id: c48abdb8759fe1447200ec66bb9d4b1c50ec4535
2019-11-19 06:42:04 -08:00
Shihao Xu
21dc1d4543 Add RpcAgentOptions struct type, which bundles different required arguments for different RpcAgents (#29972)
Summary:
https://github.com/pytorch/pytorch/pull/28226 introduced a `worker_to_id` arg to the `init_rpc` function for other `RpcAgent`s, but it's not really used by `ProcessGroupAgent`. Cleanup is wanted for this, as described in https://github.com/pytorch/pytorch/issues/29031.

To accommodate the differences between `RpcAgent`s, this adds an `RpcAgentOptions` base class, which allows leveraging inheritance to add extra fields.

closes https://github.com/pytorch/pytorch/issues/29031
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29972

Differential Revision: D18549919

Pulled By: xush6528

fbshipit-source-id: b9f3f1a41d1ff18498734081870820b055d56f5b
2019-11-19 01:00:08 -08:00
Martin Yuan
c272758b43 Mobile module forward() pass input by value. (#30060)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30060

Mobile forward() passed inputs by reference, which is different from JIT's script::module. To make it consistent, change it to pass by value.

Test Plan: Imported from OSS

Differential Revision: D18587786

Pulled By: iseeyuan

fbshipit-source-id: fa398124fd0a5168f708733ff88f0ba327726f43
2019-11-18 22:33:38 -08:00
neginraoof
267fd4a06c Fix for batch norm 2D with affine=False (#29458)
Summary:
This is a fix for batch norm 2D with affine=False.
Repro: https://github.com/pytorch/pytorch/issues/29271
The error occurs because the output of the unsqueeze op does not have scalar type information, so the references to scalar type were moved after the unsqueeze line.
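
A minimal repro-style sketch of the now-working export (the output path is illustrative):
```python
import torch

model = torch.nn.BatchNorm2d(3, affine=False).eval()
x = torch.randn(1, 3, 8, 8)
# With the scalar-type fix, this export no longer fails.
torch.onnx.export(model, x, "bn_no_affine.onnx")
```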
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29458

Reviewed By: hl475

Differential Revision: D18400975

Pulled By: houseroad

fbshipit-source-id: f5c5633857c584edcef3b9e9946861dcfccccd75
2019-11-18 21:52:11 -08:00
Vitaly Fedyunin
a4f60b64dc explicitly provide memory format when calling to *_like operators
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29391

Test Plan: Imported from OSS

Differential Revision: D18429726

Pulled By: VitalyFedyunin

fbshipit-source-id: 07dfff568ad776cf792122913530566d53be55fa
2019-11-18 21:47:52 -08:00
Vitaly Fedyunin
2dba553990 explicitly provide memory format when calling to *_like operators
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29390

Test Plan: Imported from OSS

Differential Revision: D18429722

Pulled By: VitalyFedyunin

fbshipit-source-id: e5f40da1550b4316e9c4725adbdf557c832b7563
2019-11-18 21:47:47 -08:00
Alisson Gusatti Azzolini
97156f548d Add hash and equality operators for WorkerInfo (#29958)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29958

DistributedOptimizer relies on hashing WorkerInfo in order to coalesce fan-out RPCs. This will likely be a very common use case (EASGD will do the same, for example).
ghstack-source-id: 94169198

Test Plan: unit test.

Differential Revision: D18548257

fbshipit-source-id: 7d67d4e1b9bc60403c372164982a75ae8c1d8389
2019-11-18 20:47:13 -08:00
Will Feng
3bd0f476d4 Revert D18233037: C++ API parity: isfinite
Test Plan: revert-hammer

Differential Revision:
D18233037

Original commit changeset: c76b9467bbc1

fbshipit-source-id: 97d2cfa9de767a8c3a0ca919f9d768e959fa484e
2019-11-18 20:26:19 -08:00
Pritam Damania
63f4b607aa Ensure initializedContextIds_ map is cleaned up appropriately in DistEngine. (#29787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29787

The initializedContextIds_ map was never cleaned up in DistEngine and
kept on growing as we continue to run backward passes. To fix this, in this PR
we ensure that the context id is cleaned up from this map once we are done with
the backward pass.

Closes #29083
ghstack-source-id: 94161770

Test Plan: waitforbuildbot

Differential Revision: D18498937

fbshipit-source-id: 8d31fc066f6994627766f2b6ca36efa1bef89840
2019-11-18 20:11:18 -08:00
Pavel Belevich
8df5e10ee9 C++ API parity: isfinite
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28918

Test Plan: Imported from OSS

Differential Revision: D18233037

Pulled By: pbelevich

fbshipit-source-id: c76b9467bbc1fbb2c9bf49855895c98438b36c12
2019-11-18 19:06:57 -08:00
Pritam Damania
5d69bc1eda Add docs for distributed optimizer. (#29971)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29971

ghstack-source-id: 94132160

Test Plan: waitforbuildbot

Differential Revision: D18554631

fbshipit-source-id: c4485f7cff5159f423d0f35d1caf71074b62dc28
2019-11-18 18:51:26 -08:00
Pritam Damania
ab93b3df60 Polish distributed autograd docs. (#29942)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29942

1) Added links to the design.
2) Fixed function signatures.
3) Expanded examples
ghstack-source-id: 94162372

Test Plan: waitforbuildbot

Differential Revision: D18547103

fbshipit-source-id: 067ba166c107ed14085af8ee3306d3f8a9dcebe7
2019-11-18 18:13:08 -08:00
Pritam Damania
df6a1c0437 Remove rpc.sync_rpc from the public API. (#30033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30033

Removing this API for now since we don't have a concrete use-case for
this yet and as a result exposing this as a public API might result in users
depending on this API.

We can always add some variant of this API back if needed later.
ghstack-source-id: 94138302

Test Plan: waitforbuildbot

Differential Revision: D18578056

fbshipit-source-id: 078c62331725e03bd5702624afc16b1cdcdf26a4
2019-11-18 18:02:07 -08:00
jiej
9c7e604c60 SyncBatchNorm Update on input dimension checks (#29626)
Summary:
Update the requirements on input dimensions for `torch.nn.SyncBatchNorm`:
1. 2D inputs are now permissible, https://github.com/pytorch/pytorch/issues/20204 ;
2. at least two elements are required along the normalization plane (matching BatchNorm behavior).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29626

Differential Revision: D18492531

Pulled By: albanD

fbshipit-source-id: f008e46a2d520d73c3c2730890a7424eba2ede9e
2019-11-18 16:09:51 -08:00
Jerry Zhang
64cdc648da fix submodule traversal in FoldPrepackedWeightIntoModule (#29925)
Summary:
similar to https://github.com/pytorch/pytorch/pull/29914
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29925

Differential Revision: D18548029

Pulled By: jerryzh168

fbshipit-source-id: 7b36133454c5190be19380bf125203807ea0b129
2019-11-18 13:34:45 -08:00
Supriya Rao
91c6d2e51c Add support for quantized operator conversion from PT to C2 via ONNX (#29694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29694

This PR adds preliminary support required to be able to run quantized pytorch models on a C2 backend.
For quantized ops we use a custom domain name 'caffe2' to register the ops if they are in the "quantized" namespace.
The change also adds JIT pass to unpack the quantized weights and insert the unpacked values into the graph.
The actual tensor values are looked up from the params dict.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2.py TestQuantizedOps

Imported from OSS

Reviewed By: houseroad

Differential Revision: D18467130

fbshipit-source-id: 53ebd8c43935f7d7e74305dad6c231a2247df176
2019-11-18 12:12:40 -08:00
Will Feng
82682b3e96 Revert D18531481: Remove input_channels / output_channels / with_bias from ConvOptions
Test Plan: revert-hammer

Differential Revision:
D18531481

Original commit changeset: e48d9e8cf110

fbshipit-source-id: a233425cc10278552674c48b6b577ef53fca0632
2019-11-18 09:10:54 -08:00
Edward Yang
f6cadad174 Delete redefinitions of methods in Variable already present on Tensor. (#29667)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29667

Some previous implementations are defined in native_functions.yaml.
In this case, I don't define them explicitly in Tensor; instead
they are placed in VariableTypeManual.cpp. Doing this would have
deleted the documentation; instead, that documentation was moved
to native_functions.yaml.

This also replaces `current_version` with just `_version`.

This is a carved out portion of #28287, rebased past Tensor-Variable
merge.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18504934

Pulled By: ezyang

fbshipit-source-id: be7adf45b637daffe2b0b1631eb31d967525fc31
2019-11-18 08:12:16 -08:00
Edward Yang
1ab2f043ba Move most methods off Variable into torch::autograd::impl functions. (#29665)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29665

Our intention is to merge the static distinction between Tensor and
Variable.  Ordinarily, this would entail merging the methods of Tensor
and Variable.  But there are a lot of "private"-ish methods on Variable
that we don't actually want to dump onto the Tensor class.  So, as prep
work, we move all of those methods off of Variable and into
the torch::autograd::impl namespace (impl as in: please don't use this,
end users).  This ends up being a fairly large patch because all of
the call sites have to play ball too.

While I was on the topic, I also moved any of the touched functions into
the C++ file, so that modifying them would not trigger a recompilation of
all of torch.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18496169

Pulled By: ezyang

fbshipit-source-id: afb203252620ec274be596b3e7b1d84d321bad3a
2019-11-18 08:12:12 -08:00
SsnL
38340f59fd randint accept generator=None (#29748)
Summary:
This PR fixes the inconsistent behavior of `randint`'s `generator=` kwarg. Previously it did not accept `None`, unlike other random functions:
```
In [12]: torch.randint(0, 4, size=(2,3), generator=torch.Generator())
Out[12]:
tensor([[2, 0, 1],
        [0, 1, 3]])

In [13]: torch.randint(0, 4, size=(2,3), generator=None)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-13-a6bc6525a1e1> in <module>
----> 1 torch.randint(0, 4, size=(2,3), generator=None)

TypeError: randint() received an invalid combination of arguments - got (int, int, generator=NoneType, size=tuple), but expected one of:
 * (int high, tuple of ints size, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
 * (int high, tuple of ints size, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
 * (int low, int high, tuple of ints size, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
 * (int low, int high, tuple of ints size, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
```

Other random functions work fine:
```
In [9]: torch.bernoulli(torch.ones(3))
Out[9]: tensor([1., 1., 1.])

In [10]: torch.bernoulli(torch.ones(3), generator=None)
Out[10]: tensor([1., 1., 1.])
```

This PR also documents the `generator=` kwarg, and fixes https://github.com/pytorch/pytorch/issues/29683 since it's a related easy fix.
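
After the fix, the `None` case behaves like the other random functions (a quick check):
```python
import torch

# generator=None now falls back to the default generator instead of raising.
print(torch.randint(0, 4, size=(2, 3), generator=None))
```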
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29748

Differential Revision: D18529951

Pulled By: ezyang

fbshipit-source-id: e956cc989decc94e9483fd4a30f9255240d7c07e
2019-11-18 08:07:29 -08:00
MrTsepa
94016b153a Fix typo in documentation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29755

Differential Revision: D18529963

Pulled By: ezyang

fbshipit-source-id: 8d9100f00c46238fa3210944864b1d178717499f
2019-11-18 07:44:12 -08:00
Rohan Varma
639133d6d1 rename init_model_parallel to init_rpc (#29762)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29762

Rename this API as discussed, since its use cases extend beyond model parallelism.
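
A minimal single-process sketch of the renamed API (the env-based rendezvous settings are assumptions for local use):
```python
import os
import torch.distributed.rpc as rpc

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")

# Previously spelled init_model_parallel(...); now:
rpc.init_rpc("worker0", rank=0, world_size=1)
print(rpc.get_worker_info("worker0"))
rpc.shutdown()
```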
ghstack-source-id: 94020627

Test Plan: Unit tests pass

Differential Revision: D18491743

fbshipit-source-id: d07676bb14f072c64da0ce99ee818bcc582efc57
2019-11-18 06:07:44 -08:00
Vitaly Fedyunin
5f510374e7 Add torch.memory_format support to the TorchScript
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28544

Test Plan: Imported from OSS

Differential Revision: D18093801

Pulled By: VitalyFedyunin

fbshipit-source-id: 2c82a1508da50a24825b44939434d86546cf1e19
2019-11-18 05:35:49 -08:00
Vitaly Fedyunin
cb43170dcb Add memory format support to the resize_ op. (#28292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28292

Allows to simplify patterns like:

1.
	output.resize_({sizeB, sizeC, osizeH, osizeW}).as_strided_({sizeB, sizeC, osizeH, osizeW}, {sizeC*osizeH*osizeW, 1, osizeW*sizeC, sizeC});

2.
	output.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
	indices.resize_({nbatch, nInputPlane, outputHeight, outputWidth});
	output.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
	indices.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);

3.
	gradInput.resize_as_(input);
  	gradInput.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);

Test Plan: Imported from OSS

Differential Revision: D18044978

Pulled By: VitalyFedyunin

fbshipit-source-id: bbf67c25f9cf88bc6e949089a3b247df50f86dc4
2019-11-18 05:35:44 -08:00
Vitaly Fedyunin
b80c4f60fb Add channels last support to cuda.comm.scatter and gather
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28077

Test Plan: Imported from OSS

Differential Revision: D17980305

Pulled By: VitalyFedyunin

fbshipit-source-id: e4741194baac3d93f2d53724582dc4c38f82ee84
2019-11-18 05:35:35 -08:00
Vitaly Fedyunin
9f3b347874 Add memory format support to resize_as_ operator (#27979)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27979

Adds memory_format keyword argument (positional for cpp).

'Preserve' behavior now follows these rules:
1) If the tensor is non-overlapping and dense, the output tensor will have the same strides as the input tensor.
2) If not (1) and the tensor is stored in the channels-last format, the output tensor will have the channels-last format.
3) The output tensor is contiguous in all other cases.

 ---
A dense tensor is one that stores values in a contiguous block of memory. A non-overlapping tensor is one in which each element occupies its own distinct memory location. A short sketch of the 'preserve' rule follows.
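
A short sketch of the 'preserve' rule, assuming the Python keyword spelling of the new argument:
```python
import torch

src = torch.randn(2, 3, 4, 5).contiguous(memory_format=torch.channels_last)
out = torch.empty(0)
# src is non-overlapping and dense, so 'preserve' copies its strides to out.
out.resize_as_(src, memory_format=torch.preserve_format)
print(out.is_contiguous(memory_format=torch.channels_last))  # True
```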

Test Plan: Imported from OSS

Differential Revision: D17980311

Pulled By: VitalyFedyunin

fbshipit-source-id: 12d013521091fcc9c045833577f6dc78d7b1e68f
2019-11-18 05:35:23 -08:00
James Reed
18bdf97dbb Factor Module into Object and Module
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29500

Test Plan: Imported from OSS

Differential Revision: D18463064

Pulled By: jamesr66a

fbshipit-source-id: d37bef242a8626593d4b8754042152cfc0f0acb2
2019-11-17 22:58:50 -08:00
Martin Yuan
b011461c9f Add missing operators for pytext, v2 (#29970)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29970

Add the operators and the JMP instruction used in the PyText model to the lite interpreter.

Test Plan: Imported from OSS

Differential Revision: D18555483

fbshipit-source-id: e5124d908762f78fb548505aecf33be8c8503275
2019-11-16 23:59:12 -08:00
Martin Yuan
6980cb2519 Add overload name to JIT prim operators, version 2 (#29960)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29960

Overload names are required for mobile operators that share a name but have different schemas. Since the overload name is not used in JIT, it's safe to add overload names to JIT operators.

Test Plan: Imported from OSS

Differential Revision: D18555484

fbshipit-source-id: b451379af24e255d8b0c61b964ae32fd1a64ed34
2019-11-16 23:59:07 -08:00
Will Feng
689b4bea7b torch::nn::GLU and F::glu (#29922)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29922

* #29920 [C++ API] torch::nn::GroupNorm and F::group_norm

Test Plan: Imported from OSS

Differential Revision: D18558818

Pulled By: yf225

fbshipit-source-id: ff80d634309fcb55f53db8dcf86eb9cf8161b37e
2019-11-16 21:03:38 -08:00
Will Feng
d5bf51b684 torch::nn::GroupNorm and F::group_norm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29920

Test Plan: Imported from OSS

Differential Revision: D18539314

Pulled By: yf225

fbshipit-source-id: dabbbaac31796fe7bfde02487737971bde699c1c
2019-11-16 19:22:11 -08:00
PyExtreme
e1d13f4f8b C++ API parity: NLLLoss & CrossEntropyLoss (#29812)
Summary:
Hi yf225, I have added **NLLLoss and CrossEntropyLoss.**

Also, while using log_softmax in cross_entropy_loss, I am getting an error:
```
../caffe2/../torch/csrc/api/include/torch/nn/functional/loss.h:537:63: error: no matching function for call to  log_softmax(const at::Tensor&)’
     const Tensor& log_softmax_input = torch::log_softmax(input);

aten/src/ATen/Functions.h:5551:22: note: candidate: at::Tensor at::log_softmax(const at::Tensor&, int64_t, c10::optional<c10::ScalarType>)
 static inline Tensor log_softmax(const Tensor & self, int64_t dim, c10::optional<ScalarType> dtype) {
                      ^~~~~~~~~~~
aten/src/ATen/Functions.h:5551:22: note:   candidate expects 3 arguments, 1 provided
```

I think the other two parameters should be optional, as in the Python frontend (shown in the documentation at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.log_softmax). Otherwise, there were no errors in the build and the tests have passed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29812

Differential Revision: D18548249

Pulled By: yf225

fbshipit-source-id: 2ab350abd2a6f498d4dba2345f51ad87471f3038
2019-11-16 10:49:09 -08:00
Will Feng
890a3f8b8d Remove input_channels / output_channels / with_bias from ConvOptions (#29838)
Summary:
Since torchvision is not using input_channels / output_channels / with_bias in ConvOptions anymore (https://github.com/pytorch/vision/pull/1576), we can remove the bridges now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29838

Differential Revision: D18531481

Pulled By: yf225

fbshipit-source-id: e48d9e8cf110095f83d9ed18b9fec020ec725f3e
2019-11-16 10:46:50 -08:00
Mikhail Zolotukhin
4553d5e69b Fix submodule traversal in insertPackUnpack pass. (#29914)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29914

Currently we're visiting all submodules every time we're visiting a
method of a module.

Test Plan: Imported from OSS

Differential Revision: D18534602

Pulled By: ZolotukhinM

fbshipit-source-id: 38c5b0ab0bdd27599fd0a6af0eaa3603c68a97a8
2019-11-15 20:43:43 -08:00
Pavel Belevich
27afac2134 C++ API parity: Dropout, Dropout2d, Dropout3d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29761

Test Plan: Imported from OSS

Differential Revision: D18530820

Pulled By: pbelevich

fbshipit-source-id: 9d351561692f7de099d7c6aaf2ecb930b5c867e9
2019-11-15 20:32:06 -08:00
BowenBao
fbabf72829 Add ONNX support for Logdet (#29767)
Summary:
Exported as a combination of ONNX::Log and ONNX::Det.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29767

Reviewed By: hl475

Differential Revision: D18499762

Pulled By: houseroad

fbshipit-source-id: e6f2298635a995f01b2913d8958b5e1ca9d04058
2019-11-15 20:27:43 -08:00
Zachary DeVito
a5b4d78c6d Revert D18499600: Add overload name to JIT prim operators.
Test Plan: revert-hammer

Differential Revision:
D18499600

Original commit changeset: a1b49e64c908

fbshipit-source-id: 73e27b72f53799c0133850d2352ae8cd8a82d87c
2019-11-15 18:36:17 -08:00
Zachary DeVito
2a442f5dca Revert D18499601: Add missing operators for PyText model.
Test Plan: revert-hammer

Differential Revision:
D18499601

Original commit changeset: 8a38d3d809ee

fbshipit-source-id: 4f28f291bd7020f1fc9fc313bc766b5dbf5b1b90
2019-11-15 18:36:11 -08:00
Zachary DeVito
f1860aea83 fix missing lock in profiling graph compilation (#29886)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29886

Fixes https://github.com/pytorch/pytorch/issues/29764

Test Plan: Imported from OSS

Differential Revision: D18523903

Pulled By: zdevito

fbshipit-source-id: 4e2b04102ee9f6312e4a7b48536392454e6c1b79
2019-11-15 17:51:46 -08:00
Martin Yuan
6c39e5033c Add missing operators for PyText model.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29664

Test Plan: Imported from OSS

Differential Revision: D18499601

fbshipit-source-id: 8a38d3d809ee5ef5b73b5a5ce1db612aea680e75
2019-11-15 16:22:52 -08:00
Martin Yuan
ff4e782e79 Add overload name to JIT prim operators.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29656

Test Plan: Imported from OSS

Differential Revision: D18499600

fbshipit-source-id: a1b49e64c908d16d40a6ddb048182d7bbe80bcd6
2019-11-15 16:22:47 -08:00
Martin Yuan
3003c5f91b OPN ops TupleConstruct/Unpack and format. (#29635)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29635

TupleConstruct/Unpack as OPN ops.

Test Plan: Imported from OSS

Differential Revision: D18499602

fbshipit-source-id: 389b21d3ea532ef6fa729d67ce34214d86700cd2
2019-11-15 16:22:42 -08:00
vishwakftw
69e343f2cc Expose is_signed for dtype (#29511)
Summary:
Changelog:
- Expose is_signed for torch.dtype by modifying torch/csrc/Dtype.cpp
- Allow half, bfloat16 and bool to also be "known" by the isSignedType function (a quick usage sketch follows)
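
A quick usage sketch:
```python
import torch

# is_signed is now exposed directly on torch.dtype objects.
print(torch.half.is_signed)      # True
print(torch.bfloat16.is_signed)  # True
print(torch.bool.is_signed)      # False
print(torch.uint8.is_signed)     # False
```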
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29511

Test Plan:
- Add tests in test/test_torch.py

Closes https://github.com/pytorch/pytorch/issues/29475

Differential Revision: D18439030

Pulled By: albanD

fbshipit-source-id: 4b1f9da70c1c8dfd0a5bc028b6936acd1c64af47
2019-11-15 11:16:45 -08:00
David Reiss
0108f473ad Use c10::to_string in more places (#29839)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29839

std::to_string isn't reliably available on Android.  Use c10::to_string
instead in some more files that we want to add to some Android builds.

Test Plan: CI

Reviewed By: linbinyu

Differential Revision: D18509295

fbshipit-source-id: 678af1abbea05777310499634ab01afbe21134d8
2019-11-15 09:22:59 -08:00
Xiaomeng Yang
510ef4b63a Add nn.quantized.Conv3d (#29813)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29813

Add nn.quantized.Conv3d

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: jianyuh

Differential Revision: D18467749

fbshipit-source-id: 892f708179e9e836ad902851ac1838847009da15
2019-11-15 04:33:40 -08:00
Shen Li
e1a309a647 Always include autograd context id in rpc/remote requests (#29781)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29781

Even though the request might not contain any requires_grad tensor,
the return value could. Therefore, we should always include the
autograd context id in the request.

closes #28819

Test Plan: Imported from OSS

Differential Revision: D18496709

Pulled By: mrshenli

fbshipit-source-id: 2f870c410291a1300952895b7488ea07e5574228
2019-11-14 23:02:11 -08:00
Will Feng
893105b79e Add reset_parameters to torch::nn modules (#29832)
Summary:
This PR adds `reset_parameters` to the torch::nn modules whose Python version also has `reset_parameters` defined, so that there is better parity between the Python and C++ versions.
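
For reference, a minimal sketch of the Python behavior the C++ modules now mirror:
```python
import torch.nn as nn

layer = nn.Linear(4, 2)
layer.reset_parameters()  # re-initializes weight and bias in place
```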
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29832

Differential Revision: D18515939

Pulled By: yf225

fbshipit-source-id: 5aa23e5c7ce1026787c04ffeb6c7f167620dd491
2019-11-14 20:58:32 -08:00
Rohan Varma
371da6acef move get_rpc_timeout to pybind (#29765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29765

Instead of wrapping this C++ function with Python, which causes
unnecessary overhead, we can move this to pybind and use the `DefaultRpcAgent`
to get the timeout.
ghstack-source-id: 93879236

Test Plan: unit tests pass

Differential Revision: D18493195

fbshipit-source-id: fd0f1f13ee15acb5ea1ae7c696925c9b54304f6d
2019-11-14 19:39:22 -08:00
Elias Ellison
902c1f9ef1 Check for mutable default parameters (#29833)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/21545

We were silently giving wrong semantics previously:

Python behavior:
```
def test(x=[]):
   x.append(1)
   return len(x)

print(test()) # 1
print(test()) # 2
```

By checking at the Python layer, we prevent any new models from serializing this behavior but do not break existing serialized models.
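
A minimal sketch of the new check (the exact error type and message are assumptions):
```python
import torch

try:
    @torch.jit.script
    def totals(x=[]):   # mutable default argument
        x.append(1)
        return len(x)
except Exception as e:
    print("rejected:", e)
```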
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29833

Differential Revision: D18513168

Pulled By: eellison

fbshipit-source-id: 6fe73f28e1f9d39dedeaf67a04718089d14401a1
2019-11-14 18:28:48 -08:00
Pritam Damania
77bb41c965 Rename dist_autograd_context and dist_autograd_container. (#29696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29696

The paths distributed/autograd/context/dist_autograd_context.h and
distributed/autograd/context/dist_autograd_container.h were repetitive.

Therefore renaming these to distributed/autograd/context/context.h and
distributed/autograd/context/container.h
ghstack-source-id: 93850266

Test Plan: waitforbuildbot

Differential Revision: D18467624

fbshipit-source-id: bbf3905396f553006851af296c880c1bd106ec47
2019-11-14 14:49:34 -08:00
Rohan Varma
06ef4a757d Add docs for RPC, dist autograd, and RRef modules (#29276)
Summary:
Closes https://github.com/pytorch/pytorch/issues/28983. Documentation for `torch.distributed.rpc` and `torch.distributed.autograd` modules. Also fixes/tidies up some of the docstrings in rpc/autograd, and moves some functions to be private so they don't show up in the documentation.

Note: Much of the text describing/explaining the RPC/RRef layers is taken from the following RFCs: https://github.com/pytorch/pytorch/issues/23110, https://github.com/pytorch/pytorch/issues/26759
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29276

Differential Revision: D18478754

Pulled By: rohan-varma

fbshipit-source-id: e9a7089baf5275304e5408d319eb9bf98e53fff8
2019-11-14 14:32:03 -08:00
Your Name
bfedace5e3 Expose miniz to Python (#29228)
Summary:
Stacked PRs
 * https://github.com/pytorch/pytorch/issues/29232 - Add zipfile serialization
 * https://github.com/pytorch/pytorch/issues/29244 - Use custom CRC
 * **https://github.com/pytorch/pytorch/issues/29228 - Expose miniz to Python**

This adds the miniz wrapper to Python along with some functionality so that it can operate on both files and buffers. Python's `zipfile` module is pretty slow (see https://github.com/pytorch/pytorch/issues/26573), but miniz solves most of the perf issues.
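
A hedged sketch, assuming the wrapper is the one surfaced as `torch._C.PyTorchFileWriter`/`PyTorchFileReader` (names and exact signatures are assumptions):
```python
import torch

data = b"hello"
writer = torch._C.PyTorchFileWriter("archive.zip")
writer.write_record("payload", data, len(data))
writer.write_end_of_file()

reader = torch._C.PyTorchFileReader("archive.zip")
print(bytes(reader.get_record("payload")))  # b'hello'
```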
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29228

Differential Revision: D18330945

Pulled By: driazati

fbshipit-source-id: 455a19bcb23b871d56e4233edbf897134b2c2f1d
2019-11-14 13:37:31 -08:00
Edward Yang
65bb34d885 Remove TensorImpl::is_variable, deprecate Tensor::is_variable (#29653)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29653

I didn't remove is_variable from Tensor for BC reasons, but I did
remove as many uses as I could from the codebase.
at::impl::variable_excluded_from_dispatch got moved to TensorBody.h
so that it's more widely accessible.

This diff is NOT semantics preserving.  Here are the major differences:

- In a number of native operator implementations, we tested that arguments
  are not variable.  I replaced these with asserts that variable is
  excluded from dispatch.  I actually don't think these asserts are really
  necessary now (they should certainly be true, but it's hard to get
  it wrong), but I've kept them for old time's sake.  At least, they'll detect
  if you call these functions before you've processed variable (indicating
  a bug in your kernel.)

- There are a number of places where we do a per-tensor test for being a
  variable, for better error reporting when someone commits Tensor/Variable
  confusion.  Although these tests are substantively the same as the
  tests above, in these cases I decided to *delete* the test entirely.
  The reasoning is that in these cases, we didn't really care about
  dispatch (also, see above; I'm not too sure we really need the dispatch
  asserts), we cared about Tensor/Variable confusion.  Since Tensor/Variable
  confusion is impossible now, we don't need the tests.  One of the key
  factors which pushed me one way or another was whether or not a function
  was doing per-tensor validation; if I kept the assert in such functions,
  I'd repeatedly access the TLS.  Even if we want to bring back the asserts,
  they would have to go somewhere else.

  Another similar idiom is the number of places we do !x.defined() ||
  x.is_variable(); I treated this equivalently.

- nuclear_norm's computation of compute_uv is a bit weird, but I think
  it's OK to just delete the is_variable case (I *suspect* that it is
  always the case that self.is_variable(), but it doesn't really matter.)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18496168

Pulled By: ezyang

fbshipit-source-id: 5a1ded931e0c10a6b758ba64a8380d34110e0c3e
2019-11-14 11:41:02 -08:00
James Reed
8d23f7a3a8 Only print original SourceRange on highlight
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29708

Test Plan: Imported from OSS

Differential Revision: D18472089

Pulled By: jamesr66a

fbshipit-source-id: 89cbe8edf4e3c90d3795a1f3ea55cb234e2682e0
2019-11-14 11:38:02 -08:00
James Reed
90ac35b7bd Fix tracing of autograd functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29791

Test Plan: Imported from OSS

Differential Revision: D18499142

Pulled By: jamesr66a

fbshipit-source-id: 6c2864dfbfa0419c8c888d55e082a619d058b3ee
2019-11-14 11:18:07 -08:00
Xiaomeng Yang
bf80664515 Add quantized conv3d function (#29686)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29686

Add quantized conv3d function

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: hl475

Differential Revision: D18463090

fbshipit-source-id: f9c3d2920c3fc015bbb2b6a583a582c9f8397b08
2019-11-14 03:04:51 -08:00
Shen Li
4a1fcc0b83 Allow rpc.remote to create RRef on self (#29634)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29634

This implementation supports rpc.remote to self by doing the
following steps:

1. Create an owner RRef.
2. Add the owner RRef to owners_ in RRefContext, and keep it alive
   by using the RRefId as the ForkId.
3. Go through serde and insert the message into the caller's thread pool.
4. When the response message gets processed, remove the fork created in
   step 2 from the RRef fork map.

A minimal single-process usage sketch follows.
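
A minimal single-process sketch (the env-based rendezvous settings are assumptions for local use):
```python
import os
import torch
import torch.distributed.rpc as rpc

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker0", rank=0, world_size=1)

# remote() to the calling worker now returns an owner RRef.
rref = rpc.remote("worker0", torch.add, args=(torch.ones(2), 1))
print(rref.to_here())  # tensor([2., 2.])
rpc.shutdown()
```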

Test Plan: Imported from OSS

Differential Revision: D18445812

Pulled By: mrshenli

fbshipit-source-id: e3b9aa98962c388acbc2ce294101a236d5cb2da6
2019-11-14 00:10:24 -08:00
Will Feng
a68c52494c Use F::*FuncOptions for embedding/embeddingbag functionals (#29673)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29673

Following https://github.com/pytorch/pytorch/pull/29364 and https://github.com/pytorch/pytorch/pull/29404, this PR makes `F::EmbeddingFuncOptions` and `F::EmbeddingBagFuncOptions` separate classes from `torch::nn::EmbeddingOptions` and `torch::nn::EmbeddingBagOptions`, so that it's easier to enforce that arguments such as `num_embeddings` and `embedding_dim` are required for `torch::nn::EmbeddingOptions` and `torch::nn::EmbeddingBagOptions`.

Test Plan: Imported from OSS

Differential Revision: D18462540

Pulled By: yf225

fbshipit-source-id: f2abf431e48675b0a9d7f6f398cdb90ff9037c35
2019-11-13 18:47:22 -08:00
Elias Ellison
681b610f35 use new overload mechanism for rnns (#29614)
Summary:
Uses the new overload mechanism for RNNs, making it so that Python and TorchScript go through the same path, and uses an API that is in line with the one specified
in https://docs.python.org/3/library/typing.html#typing.overload

This brings the TorchScriptable rnns closer to the base implementation; unifying them should be done in a follow up PR but there are still a few limitations that make it difficult to do so.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614

Differential Revision: D18486982

Pulled By: eellison

fbshipit-source-id: aaaea66a4a7f12d2e46199ca254f9e8f7475500e
2019-11-13 15:44:25 -08:00
Will Feng
2bcac59a30 Use default dtype for torch::tensor(floating_point_values) and torch::tensor(empty braced-init-list) when dtype is not specified (#29632)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29632

This PR is BC-breaking in the following way:

Previously, C++ `torch::tensor` with a floating-point literal with no suffix (e.g. `torch::tensor(1.1)`) or a (nested) braced-init-list of
floating-point literals with no suffix (e.g. `torch::tensor({{1.1, 2.2}})`) produced a tensor with dtype `at::kDouble`. After this PR, it produces a tensor with dtype `torch::get_default_dtype()`, matching Python `torch.tensor` behavior.
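
For comparison, the Python `torch.tensor` behavior that C++ now matches:
```python
import torch

torch.set_default_dtype(torch.float32)
print(torch.tensor(1.1).dtype)           # torch.float32
torch.set_default_dtype(torch.float64)
print(torch.tensor([[1.1, 2.2]]).dtype)  # torch.float64
```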

Test Plan: Imported from OSS

Differential Revision: D18465819

Pulled By: yf225

fbshipit-source-id: 6834fe50335c677bc3832f2a5e9cf8d1ede9f665
2019-11-13 15:17:11 -08:00
Rohan Varma
3fb9bbc99b refactor and move createException function (#29605)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29605

Adds a wrapper around the existing createException function that
allows passing an error string instead of a regular C++ exception. This
allows us to create exceptions for errors that aren't necessarily C++
exceptions. This function is used by
https://github.com/pytorch/pytorch/pull/29601 and
https://github.com/pytorch/pytorch/pull/26336.
ghstack-source-id: 93819039

Test Plan: Unit tests pass

Differential Revision: D18439216

fbshipit-source-id: 70b6a2e4f107304e322cdd2630847ad0071bc0c1
2019-11-13 14:53:22 -08:00
Will Feng
b37c235d86 C++/Python API parity for Conv{1,2,3}d layers, and add F::conv{1,2,3}d functionals (#28917)
Summary:
This PR changes the implementation of C++ Conv{1,2,3}d layers to exactly match the Python version, and add F::conv{1,2,3}d functionals. For more thorough testing, I will rely on the parity test mechanism which uses values from `common_nn.py` to generate the inputs and options that we are interested in testing.

This PR is BC-breaking in the following way:

In `Conv{1,2,3}dOptions` (the renamed options mirror the Python constructor arguments, as sketched after this list):
- `with_bias` is renamed to `bias`.
- `input_channels` is renamed to `in_channels`.
- `output_channels` is renamed to `out_channels`.
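
A Python sketch of the corresponding constructor argument names:
```python
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, bias=False)
```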
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28917

Differential Revision: D18471526

Pulled By: yf225

fbshipit-source-id: 7a33f60654ad93cc2e043245e7ff9e0ef9da15b3
2019-11-13 12:53:31 -08:00
Edward Yang
0c91ebb694 Delete all trivial uses of make_variable. (#29213)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29213

A trivial use of make_variable is one where requires_grad=False.  This
transformation is not technically semantics preserving, as make_variable
will create a shallow copy of the tensor in question; however, I
am guessing that we have the invariant that we don't actually make
use of this shallow copy in a nontrivial way.

There were some cases where the surrounding code expected a Variable proper
to be returned; I retained those sites.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18353503

Pulled By: ezyang

fbshipit-source-id: 57fe34d82e009c0cc852266fb0b79d6d9c62bb03
2019-11-13 07:43:41 -08:00
Edward Yang
30092df15e Rename getNonVariableDeprecatedTypeProperties to getDeprecatedTypeProperties (#29203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29203

There is no more Variable/Tensor distinction, so fix the misleading name.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18353505

Pulled By: ezyang

fbshipit-source-id: dadc394d533ab7746f70bc186c6645441a784518
2019-11-13 07:43:32 -08:00
Edward Yang
715e951e3c Revert D18458751: use new overload mechanism for rnns
Test Plan: revert-hammer

Differential Revision:
D18458751

Original commit changeset: 07c71838f21c

fbshipit-source-id: 86acb02f3e022e93ea6c1ef23fe39c80ad43978f
2019-11-13 07:21:31 -08:00