Commit Graph

893 Commits

Author SHA1 Message Date
Meghan Lele
d58b8222b7 [JIT] Add support for with statements (#34705)
Summary:
**Summary**
This commit adds support for with statements to PyTorch JIT. Each
of the with items in a with statement is represented in the JIT IR
as a pair of `prim::Enter` and `prim::Exit` nodes that call the
`__enter__` and `__exit__` methods defined on the context manager objects
returned by the expressions in the with item.
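
Below is a minimal, plain-Python sketch (not taken from this commit) of the pattern being lowered: each with item's context manager exposes `__enter__`/`__exit__`, which the emitted `prim::Enter`/`prim::Exit` nodes call.

```
import torch

class Scaler(object):
    def __init__(self, factor: float):
        self.factor = factor

    def __enter__(self):
        return self.factor

    def __exit__(self, exc_type, exc_value, traceback):
        # Returning a falsy value lets exceptions from the body propagate,
        # which is the case exercised by test_with_exceptions below.
        return None

def scale(x: torch.Tensor) -> torch.Tensor:
    with Scaler(2.0) as f:  # a named with item (`with ... as f`)
        return x * f

print(scale(torch.ones(3)))  # tensor([2., 2., 2.])
```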

**Testing**
This commit adds unit tests for with statements with named with items,
nameless with items, and with statements that encounter exceptions.
```
$ python test/test_jit.py TestWith.test_with_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.430s

OK
```

```
$ python test/test_jit.py TestWith.test_with_no_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.264s

OK
```

```
$ python test/test_jit.py TestWith.test_with_exceptions
Fail to import hypothesis in common_utils, tests are not derandomized
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 1.053s

OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34705

Differential Revision: D22095945

Pulled By: SplitInfinity

fbshipit-source-id: f661565a834786725259b8ea014b4d7532f9419d
2020-06-18 16:57:18 -07:00
Shihao Xu
f3f30d4354 [JIT x RPC] Consolidate RRef type class and RRef impl class (#35694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35694

close https://github.com/pytorch/pytorch/issues/35110

Differential Revision: D7881729

fbshipit-source-id: eedda8f1b7510491886d469efeed4e002bb8b991
2020-06-18 07:46:38 -07:00
Shen Li
74142f76fa Adding torch.futures to API docs (#40051)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40051

Test Plan: Imported from OSS

Differential Revision: D22055031

Pulled By: mrshenli

fbshipit-source-id: ce8a79ba4ffdc7dbed6d4c62b1c33b96764c89e7
2020-06-17 17:55:48 -07:00
Alban Desmaison
08227fea4f Revert D22079377: [pytorch][PR] [RELAND] Change AccumulateGrad to yield .grads that match weights' memory layout
Test Plan: revert-hammer

Differential Revision:
D22079377

Original commit changeset: 9bd2b7e0c34f

fbshipit-source-id: c22cc349d790caa574eace0d63980854c33e5a59
2020-06-17 10:17:27 -07:00
Michael Carilli
1ec8ece2b9 [RELAND] Change AccumulateGrad to yield .grads that match weights' memory layout (#40129)
Summary:
https://github.com/pytorch/pytorch/pull/34904 was reverted because it had a misconfigured 4 GPU test that for some reason wasn't caught by external CI ([example failure](https://app.circleci.com/pipelines/github/pytorch/pytorch/181719/workflows/cfb37cd9-9a0c-4738-898b-d683934cd308/jobs/5868948/steps)).

This PR reverts the revert, and adds diffs that should repair the misconfigured test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40129

Differential Revision: D22079377

Pulled By: albanD

fbshipit-source-id: 9bd2b7e0c34fdaf887497b52037cfe82cba709c1
2020-06-17 09:02:54 -07:00
Mike Ruberry
9d588f7ce2 Removes dunder div (#39151)
Summary:
BC-breaking note:

If a user is using one of these dunders directly, they will no longer be available. Users should update to Python 3-compatible dunders.

Original PR note:

`__div__` (and `__idiv__` and `__rdiv__`) are no longer special dunders in Python 3. This PR replaces them with the `__truediv__` (`__itruediv__`, `__rtruediv__`) dunders, since we no longer support Python 2.
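
As a quick illustration (not part of the commit), Python 3 routes the `/` operator through `__truediv__` only:

```
import torch

x = torch.tensor([4.0])
y = torch.tensor([2.0])

print(x / y)             # tensor([2.]) -- dispatched via __truediv__
print(x.__truediv__(y))  # equivalent explicit call
# x.__div__(y)           # assumed to raise AttributeError after this change
```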
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39151

Differential Revision: D22075713

Pulled By: mruberry

fbshipit-source-id: d318b47b51f7cc4c3728b1606a34d81e49ba0fa1
2020-06-16 23:02:20 -07:00
Alban Desmaison
f1e575a0bf Revert D20496044: [pytorch][PR] Change AccumulateGrad to yield .grads that match weights' memory layout
Test Plan: revert-hammer

Differential Revision:
D20496044

Original commit changeset: 248d680f4b1b

fbshipit-source-id: 6462b25e3fb9c8596c1da443389089f09c32df4d
2020-06-16 10:38:40 -07:00
mattip
dd581b4512 DOC: fix rpc reference in top-level index (#40077)
Summary:
Fixes gh-40046

PR gh-37419 refactored the content of `docs/source/rpc/index.rst` into `docs/source/rpc.rst` but did not link to the latter from `docs/source/index.rst`, so the top-level RPC documentation is missing from https://pytorch.org/docs/master/.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40077

Differential Revision: D22068128

Pulled By: mrshenli

fbshipit-source-id: 394433f98f86509e0c9cb6d910a86fb8a2932683
2020-06-16 10:26:03 -07:00
Michael Carilli
2beb9690c3 Change AccumulateGrad to yield .grads that match weights' memory layout (#34904)
Summary:
Currently, whether `AccumulateGrad`  [steals](67cb018462/torch/csrc/autograd/functions/accumulate_grad.h (L42)) or [clones](67cb018462/torch/csrc/autograd/functions/accumulate_grad.h (L80)) an incoming gradient, the gradient ends up rowmajor contiguous, regardless of its param's layout.  If the param's layout is channels last, or otherwise not rowmajor contiguous, later kernels that apply gradients to params are forced into an uncoalesced memory access pattern for either the param or the gradient.  This may not sound like a big deal, but for any binary op on large tensors it's a >3X increase in gmem traffic => 3X slowdown.

The present PR changes `AccumulateGrad` to prefer, where possible, stashing gradients that match their params' layouts (["Gradient Layout Contract"](https://github.com/pytorch/pytorch/pull/34904/files#diff-ef1a56d24f66b280dcdb401502d6a796R29-R38)).

Allowing `AccumulateGrad` to stash non-rowmajor-contiguous grads means DDP allreduces and DP reduces must allow non-rowmajor-contiguous grads.  This PR extends DDP and DP to allow gradients with non-rowmajor-contiguous strides as long as their layout is nonoverlapping and dense.

For good measure, I include changes that allow all five nccl primitives (allreduce, reduce, broadcast, allgather, reducescatter) to act on non-rowmajor-contiguous tensors (again as long as each input's layout is nonoverlapping and dense, and as long as all tensors participating in a given collective have the same layout).  The primitive comm changes aren't necessary to enable the DDP changes, but I wasn't sure this would end up true until I had written both sets of changes.  I think primitive comm enablement is reasonable to keep in the PR, especially since the code for it is simple.

Channels last params will be a major beneficiary of this PR, but I don't see it as a channels-last-specific fix.  The spirit is layout matching in general:
- Grads should be stashed with memory layouts matching their params.
- Src and dst tensors on opposite ends of collectives should have matching dense layouts.

This PR also updates autograd docs to describe potential BC-breaking changes below.

## BC notes
ngimel albanD gchanan

#### BC-breaking
In the common case where the user lets AccumulateGrad decide grad layouts, strides for grads of dense but non-rowmajor-contiguous params will change.  Any user code that was accustomed to `view(-1)`ing these grads will break.

Also, the circumstances under which a grad can be stolen directly from the backward function that created it, as opposed to deep-copied by AccumulateGrad, have changed.  In most cases we expect silent performance improvement, because we expect channels-last-aware backward kernels will create channels last gradients for channels last params.  Now those can be stolen, whereas before this PR they were cloned and made rowmajor contiguous.  IMO this is a mild BC breakage.  Param backward hooks still see grads come in with whatever format the backward kernel gave them.  The only BC breakage potential I see is if user code relies somehow on a grad in a hook having or not having the same underlying memory as the eventual `param.grad`.  Any such users hopefully know they're off the edge of the map and understand how to update their expectations.

#### BC escape hatches
At albanD's recommendation, this PR's changes to AccumulateGrad do not alter the pre-PR code's decisions about whether grad is accumulated in or out of place.  Accumulations of new grads onto an existing `.grad` attribute were (usually) in-place before this PR and remain in-place after this PR, keeping the existing `.grad`'s layout.  After this PR, if the user wants to force accumulation into a grad with a particular layout, they can preset `param.grad` to a zeroed tensor with the desired strides or call `grad.contiguous(desired format)`.  This likely won't be as performant as letting AccumulateGrad establish grad layouts by cloning or stealing grads with contract-compliant strides, but at least users have a control point.
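
A minimal sketch of the escape hatch described above (assumed usage, not code from this PR): presetting `param.grad` so that in-place accumulation keeps the desired layout.

```
import torch

param = torch.randn(8, 3, 32, 32, requires_grad=True)

# Preset .grad to a zeroed channels-last tensor with the desired strides.
param.grad = torch.zeros_like(param, memory_format=torch.channels_last)

loss = (param * 2).sum()
loss.backward()

# Accumulation happened in place into the preset grad, so its layout is kept.
print(param.grad.is_contiguous(memory_format=torch.channels_last))  # expected: True
```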

One limitation (present before this PR and unchanged by this PR):  Presetting `param.grad` does not ensure in-place accumulation all the time.  For example, if `create_graph=True`, or if incoming `new_grad` is dense and existing `variable_grad` is sparse, accumulation occurs out of place, and the out-of-place result may not match the existing grad's strides.

----------------------------
I also noticed some potential DDP improvements that I considered out of scope but want to mention for visibility:
1. make sure Reducer's ops sync with AccumulateGrad streams
2. ~to reduce CPU overhead and incur fewer kernel launches, lazily create flat `contents` tensors by a single `cat` kernel only when a bucket is full, instead of `copy_`ing grads into `contents` individually as soon as they are received.~  PR includes a [minor change](https://github.com/pytorch/pytorch/pull/34904/files#diff-c269190a925a4b0df49eda8a8f6c5bd3R312-R315) to divide grads while copying them into flat buffers, instead of copying them in, then dividing separately.  Without cat+div fusion, div-while-copying is the best we can do.
3. https://github.com/pytorch/pytorch/issues/38942
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34904

Differential Revision: D20496044

Pulled By: albanD

fbshipit-source-id: 248d680f4b1bf77b0a986451844ec6e254469217
2020-06-16 08:43:31 -07:00
Shawn Zhong
96870181c6 Remove duplicated entries in random.rst (#39725)
Summary:
In the current master doc, every function under [`torch.random`](https://pytorch.org/docs/master/random.html) appears twice because the function docs are generated by both `automodule` and `autofunction`.

This PR removes the parts generated by `autofunction`.

See changed docs at https://5751500-65600975-gh.circle-artifacts.com/0/docs/random.html:

![image](https://user-images.githubusercontent.com/6421097/84165823-bf720580-aa39-11ea-9149-c428d43260f8.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39725

Differential Revision: D21983701

Pulled By: ngimel

fbshipit-source-id: 5f515d7fd8034687e754e2c7b2ea9e154b3ea9b9
2020-06-10 16:51:15 -07:00
lixinyu
7cb4eae8b1 correct some cpp extension code usages and documents (#39766)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39766

Test Plan: Imported from OSS

Differential Revision: D21967284

Pulled By: glaringlee

fbshipit-source-id: 8597916bee247cb5f8c82ed8297119d2f3a72170
2020-06-10 08:31:22 -07:00
kshitij12345
9733390998 Add torch.flip{lr, ud} (#38599)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/38349

TODO:
* [x] Add Tests
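
Illustrative usage of the new ops (not part of the PR); they mirror NumPy's `fliplr`/`flipud`:

```
import torch

x = torch.arange(6).reshape(2, 3)
print(torch.fliplr(x))  # reverses along dim 1: [[2, 1, 0], [5, 4, 3]]
print(torch.flipud(x))  # reverses along dim 0: [[3, 4, 5], [0, 1, 2]]
```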
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38599

Differential Revision: D21941884

Pulled By: mruberry

fbshipit-source-id: 7a442ff11051c2c868cf8e3c04e4bba0f1a1d426
2020-06-09 07:19:37 -07:00
krshrimali
335e4a1e3b Add arcosh, arcsinh and arctanh to unary ops (#38388)
Summary:
This PR aims to add `arcosh`, `arcsinh` and `arctanh` support. Please see issue https://github.com/pytorch/pytorch/issues/38349 for more details.

**TODOs:**

* [x] Add test cases for `arcosh`, `arcsinh` and `arctanh`. (need help)
* [x] Overload ops if `std::op` does not work with `thrust::complex` types (like for `sinh`, `cosh`).

Note: `std::acosh, std::asinh, std::atanh` do not support `thrust::complex` types. Added support for complex types for these 3 ops (`arccosh, arcsinh, arctanh`)
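
Illustrative usage (not from the PR), written with the `torch.acosh`/`asinh`/`atanh` spellings these inverse hyperbolic ops correspond to:

```
import torch

print(torch.acosh(torch.tensor([1.5, 2.0])))
print(torch.asinh(torch.tensor([0.5, -1.0])))
print(torch.atanh(torch.tensor([0.25 + 0.25j])))  # complex input, per the note above
```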

cc: mruberry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38388

Differential Revision: D21882055

Pulled By: mruberry

fbshipit-source-id: d334590b47c5a89e491a002c3e41e6ffa89000e3
2020-06-04 11:40:55 -07:00
mattip
ada2652ca6 Restore docs coverage test via sphinx (#39331)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39331

Fixes gh-37590

Adds an extra `make coverage` to document building, which uses the built-in facility in sphinx to check docstring coverage. Also fixes a failure to import `torch/jit/supported_ops.py` which broke the [Torchscript Builtins](https://pytorch.org/docs/stable/jit_builtin_functions.html) page.

This also adds the required `SPHINXOPTS` to turn warnings into errors, but this is commented out. Note that since documentation of `torchvision` is merged in here, failures there would cause failures here if this is made active. Some thought might be needed about pinning the torchvision version merged into the documentation.

The first commit should fail, since the "ScriptModule" class is commented out. I did that in order to check that a CI failure is properly reported.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38244

Differential Revision: D21640589

Pulled By: ezyang

fbshipit-source-id: 1e240d81669b5f21404d596de4a27d192dc9fd8a
2020-06-04 10:49:38 -07:00
Aayush Naik
0829cadca3 Implement rad2deg, deg2rad (#38852)
Summary:
Resolves https://github.com/pytorch/pytorch/issues/38372.

cc mruberry
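
Quick illustration of the new ops (not from the PR):

```
import math
import torch

print(torch.deg2rad(torch.tensor([0.0, 90.0, 180.0])))           # -> [0, pi/2, pi]
print(torch.rad2deg(torch.tensor([0.0, math.pi / 2, math.pi])))  # -> [0, 90, 180]
```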
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38852

Differential Revision: D21868935

Pulled By: mruberry

fbshipit-source-id: ae6ded11b743c9d1cdc032984b4abe0a115290d6
2020-06-03 22:21:54 -07:00
neginraoof
4d597cb794 [ONNX] Update pytoch/onnx doc (#39480)
Summary:
Updated docs for operator_export_types and recently added op symbolics.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39480

Reviewed By: hl475

Differential Revision: D21877364

Pulled By: houseroad

fbshipit-source-id: 9831fe5776629da897db6d7943f830528cb916d2
2020-06-03 22:15:30 -07:00
Shen Li
a05ef17e46 Add rpc.functions.async_execution decorator for rpc_sync/rpc_async (#39216)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39216

The `rpc.functions.async_execution` decorator specifies that the
wrapped function is guaranteed to return a `torch.futures.Future`.
The decorator adds a `_wrapped_async_rpc_function` attribute to
the wrapper function. The caller retrieves this information and
then sets the `isAsyncFunction` argument accordingly, which is later
added to the PythonCall RPC message as a field. On the callee side,
if the PythonCall carries an asynchronous function, it will cast
the function's return value to a jit::PythonFutureWrapper object,
and then install response creation and communication as a callback
on that jit::PythonFutureWrapper.

For applications, this feature is useful when a function needs to
wait for IO or additional signaling. In those cases, marking the
user function as `rpc.functions.async_execution` will prevent it
from blocking one thread on the callee for too long.
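
A minimal sketch of the decorator in use (assumed usage, not code from this commit; it needs an initialized RPC worker group to actually run):

```
import torch
import torch.distributed.rpc as rpc

@rpc.functions.async_execution
def async_add(to: str, x: torch.Tensor, y: torch.Tensor):
    # Return a torch.futures.Future instead of a value; the callee chains the
    # response onto it rather than blocking a thread until the add finishes.
    return rpc.rpc_async(to, torch.add, args=(x, y)).then(
        lambda fut: fut.wait() + 1
    )

# Caller side, on some worker:
#   ret = rpc.rpc_sync("worker1", async_add, args=("worker0", t1, t2))
```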

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D21779962

fbshipit-source-id: 6b6aa698bf6f91dad6ed2a7ee433df429b59e941
2020-06-02 23:21:25 -07:00
Xiang Gao
ebd4125e7e [JIT] Make torch.unique_consecutive compatible (#39339)
Summary:
A `unique_consecutive` version of https://github.com/pytorch/pytorch/pull/38156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39339

Differential Revision: D21823997

Pulled By: eellison

fbshipit-source-id: d14596a36ba36497e296da5a344e0376cef56f1b
2020-06-02 14:54:29 -07:00
Alban Desmaison
c6720f0d6b nit on functional autograd (#35493)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35493

Test Plan: Imported from OSS

Differential Revision: D21843416

Pulled By: albanD

fbshipit-source-id: af4d017ff4559237dd31e2ccaa1e3a967f7497ba
2020-06-02 14:49:16 -07:00
mattip
dc4fd0409f DOC: remove java documentation (#38920)
Summary:
Continuation of issue gh-36064 and PR gh-38042, which removed the unmaintained javasphinx extension. The unknown sphinx directives cause warnings when building the documentation.

Edit: link to PR as well as issue
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38920

Differential Revision: D21818297

Pulled By: ezyang

fbshipit-source-id: 2c1d007a7689b26653d7dee081b0b969b8a731a2
2020-06-01 07:32:00 -07:00
anjali411
a50d781c03 Added real and imag views as tensor attributes (#39033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39033

Added `real` and `imag` views as tensor attributes. Right now, tensor.imag is disabled for real tensors. This is because if we returned a new tensor of zeros, the user would be able to update the tensor returned by tensor.imag, which should not be allowed: NumPy returns a read-only array here, and PyTorch doesn't support read-only tensors yet.

TODO in follow-up PRs:
1. add a setter for `real` and `imag`
2. add special case in codegen for `real` and `imag` backward functions.
3. remove `copy_real` and `copy_imag` methods.
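
Illustrative usage of the attributes described above (not from the PR):

```
import torch

z = torch.tensor([1.0 + 2.0j, 3.0 - 4.0j])
print(z.real)  # tensor([1., 3.]) -- a view, no copy
print(z.imag)  # tensor([ 2., -4.])

x = torch.tensor([1.0, 2.0])
print(x.real)  # real tensors return themselves
# x.imag       # raises for real tensors, as described above
```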

Test Plan: Imported from OSS

Differential Revision: D21767542

Pulled By: anjali411

fbshipit-source-id: 539febf01f01ff055e3fbc7e9ff01fd3fe729056
2020-05-29 12:31:51 -07:00
Cloud Han
05f097b5bb Implement logaddexp (#38384)
Summary:
Resolve https://github.com/pytorch/pytorch/issues/38377
Related https://github.com/pytorch/pytorch/issues/38349

This op should be disambiguated from `logsumexp`, which does a reduction on a tensor over a specific axis.
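
Quick illustration (not from the PR):

```
import torch

a = torch.tensor([-1.0, -100.0, 10.0])
b = torch.tensor([-2.0, -100.0, 11.0])

# Elementwise, numerically stable log(exp(a) + exp(b)); no axis is reduced,
# unlike torch.logsumexp.
print(torch.logaddexp(a, b))
```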
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38384

Differential Revision: D21737336

Pulled By: mruberry

fbshipit-source-id: 7864d04ca304c0fb2937bb083583e3e3d6ef205d
2020-05-27 20:27:31 -07:00
Jessica Lin
b12a879184 Correct Javadoc link to master (#39038)
Summary:
Correct Javadoc link to match the 1.4 version: https://github.com/pytorch/pytorch/blob/release/1.4/docs/source/index.rst
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39038

Differential Revision: D21747969

Pulled By: jlin27

fbshipit-source-id: 941b61204e9be53e15a6351eff6f4935e6a16d24
2020-05-27 16:21:30 -07:00
Martin Valgur
de8c888232 Fix torch.hub.hub_dir inconsistencies (#38969)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/38401

* `torch.hub.load_state_dict_from_url()` now also downloads to `$TORCH_HOME/hub/checkpoints` instead of `$TORCH_HOME/checkpoints` like `torch.hub.load()` and others.
* Make `hub_dir` private, add and use `get_dir()` instead.

Also updated docs. Did not see a need for additional unit tests.
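
Illustrative usage of the new accessor (not from the PR):

```
import torch

# Public accessor replacing the private hub_dir; downloads from
# torch.hub.load_state_dict_from_url() now land under <hub dir>/checkpoints.
print(torch.hub.get_dir())

# The location can still be overridden explicitly.
torch.hub.set_dir("/tmp/torch_hub")
print(torch.hub.get_dir())  # /tmp/torch_hub
```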
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38969

Differential Revision: D21725880

Pulled By: ailzhang

fbshipit-source-id: 58cc6b32ddbda91e58c1c1433cc3916223556ea1
2020-05-26 21:06:52 -07:00
Shawn Zhong
ba3893e736 Rename torch._C.Generator to torch.Generator (#38773)
Summary:
Fix https://github.com/pytorch/pytorch/issues/26528
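
Quick illustration (not from the PR): the class is now reachable as `torch.Generator`.

```
import torch

g = torch.Generator()
g.manual_seed(0)

print(isinstance(g, torch.Generator))  # True
print(torch.randn(3, generator=g))     # reproducible draw using g's state
```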
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38773

Differential Revision: D21701053

Pulled By: pbelevich

fbshipit-source-id: 57632ca9ce430ec30dc8e40739194ee2b5860f71
2020-05-26 08:29:46 -07:00
kshitij12345
3487744821 Add torch.logcumsumexp (#36308)
Summary:
Creating a new PR as I am unable to push to pandeykartikey's branch since I don't have the permissions.

Closes https://github.com/pytorch/pytorch/issues/26411

Based on https://github.com/pytorch/pytorch/issues/32876. Thanks pandeykartikey for starting this out.

Have addressed the comments.

anjali411 agadetsky albanD
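
Quick illustration of the new op (not from the PR):

```
import torch

x = torch.zeros(4)
# log of the running sum of exponentials along dim 0:
# [log 1, log 2, log 3, log 4]
print(torch.logcumsumexp(x, dim=0))
```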
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36308

Differential Revision: D21648573

Pulled By: albanD

fbshipit-source-id: bc1a8fc4ab474a1148298117a1549b0e46f7c3ff
2020-05-21 09:12:31 -07:00
Alban Desmaison
b88b7d552f Prevent custom Functions from creating non differentiable type that requires grad (#38326)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38326

Test Plan: Imported from OSS

Differential Revision: D21668740

Pulled By: albanD

fbshipit-source-id: f452f65e76003492055311523a652937b1300183
2020-05-21 08:30:14 -07:00
Supriya Rao
530d48e93a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38749

Test Plan: python test/test_quantization.py TestFused

Differential Revision: D21654659

Pulled By: supriyar

fbshipit-source-id: 301be24083e794f4e71ff1d6d842e1aaefa640f0
2020-05-19 22:48:05 -07:00
Natalia Gimelshein
b995540a01 Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules
Test Plan: revert-hammer

Differential Revision:
D21632878

Original commit changeset: 0d73398b95d7

fbshipit-source-id: c4dd18a4220d175237f31f741a782f2596228009
2020-05-19 15:22:16 -07:00
Supriya Rao
7d38db0f9a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38452

Test Plan:
python test/test_quantization.py TestFused

Imported from OSS

Differential Revision: D21632878

fbshipit-source-id: 0d73398b95d72a0a23b42ef36f3ede1bfcc35eda
2020-05-19 09:53:56 -07:00
Xinyu Li
52e9953faf use version number instead of 'master' in html header title (#38149)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38149

This is for (#21290) (#31894)

Instead of putting "PyTorch master documentation" in the header's HTML title, we now use "PyTorch 1.x.x documentation"; this is similar to the TensorFlow and NumPy doc pages.

In Google search results, we will get
"PyTorch Documentation - PyTorch 1.x.x Documentation" instead.

Test Plan: Imported from OSS

Differential Revision: D21586559

Pulled By: glaringlee

fbshipit-source-id: 2995709ac3c22dbb0183b5b4abfde7d795f1f8eb
2020-05-15 08:32:32 -07:00
Supriya Rao
de7025fbdb [quant] Support for functional quantized::conv1d (#38449)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38449

Also update docs to reflect conv1d op support
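
A rough sketch of the functional op (modeled on the quantized conv2d docs; the parameter values are arbitrary and not from this change):

```
import torch
from torch.nn.quantized import functional as qF

inputs  = torch.randn(1, 4, 16)
filters = torch.randn(8, 4, 3)
bias    = torch.randn(8)

q_inputs  = torch.quantize_per_tensor(inputs, 1.0, 0, torch.quint8)
q_filters = torch.quantize_per_tensor(filters, 1.0, 0, torch.qint8)

out = qF.conv1d(q_inputs, q_filters, bias, scale=1.0, zero_point=0)
print(out.shape)  # torch.Size([1, 8, 14])
```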

Test Plan:
python test/test_quantization.py TestQuantizedFunctional.test_conv1d_api

Imported from OSS

Differential Revision: D21575921

fbshipit-source-id: 21c9f6b49ad456cd9d93e97f17cf5b8d87f0da6b
2020-05-14 16:09:51 -07:00
Jessica Lin
8b6bf2a457 Add C++ Landing Page (#38450)
Summary:
* Add cpp_index.rst for landing page to match 1.5 (https://github.com/pytorch/pytorch/blob/release/1.5/docs/source/cpp_index.rst)
* Link to new cpp landing page was added to the docs table of contents in this PR: https://github.com/pytorch/pytorch/pull/38350
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38450

Differential Revision: D21580939

Pulled By: jlin27

fbshipit-source-id: 021c43f207a100d554266e4e16cb6752ca9c56a0
2020-05-14 16:02:01 -07:00
Bharat123rox
15da26f8aa DOC: Add documentation for Tensor.is_nonzero (#37845)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/37438 by adding documentation for `Tensor.is_nonzero`
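
For reference (illustrative, not part of the PR), the documented behavior:

```
import torch

print(torch.tensor(1.5).is_nonzero())    # True
print(torch.tensor(0).is_nonzero())      # False
print(torch.tensor([0.0]).is_nonzero())  # False (single-element tensors only)
# torch.tensor([1, 2]).is_nonzero()      # RuntimeError: more than one element
```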
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37845

Differential Revision: D21494422

Pulled By: mruberry

fbshipit-source-id: ee4f5979922d7c8100b5031d770ccdf59fe1c1a1
2020-05-14 04:46:55 -07:00
Supriya Rao
f6626aaf43 [quant] Add support for Quantized Conv1d and ConvRELU1d (#38283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38283

Adds support for the modules and tests

Test Plan:
python test/test_quantization.py TestStaticQuantizedModule.test_conv1d_api

Imported from OSS

Differential Revision: D21553665

fbshipit-source-id: 7ea28da024bdf59f87f300d616c266f2b41f0bcd
2020-05-13 16:59:13 -07:00
Jessica Lin
33977ca769 Update Cpp, rpc docs and Libraries section to match 1.5 (#38350)
Summary:
* Link cpp docs to the cpp landing page
* Link to rpc.rst landing page
* Update Libraries to match 1.5 (https://github.com/pytorch/pytorch/blob/release/1.5/docs/source/index.rst)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38350

Differential Revision: D21554435

Pulled By: jlin27

fbshipit-source-id: d1c9d5a86f84910225cbd0a57074ae95c8a9a450
2020-05-13 15:20:35 -07:00
Elias Ellison
eb3e9872c9 [JIT] make torch.unique compilable (#38156)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986

Follows the stack in https://github.com/pytorch/pytorch/pull/33783 to make functions in `torch/functional.py` resolve to their Python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
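
A minimal sketch of the resulting behavior (assumed usage, not code from the PR): with the flags passed as literals, boolean_dispatch can resolve an overload whose return type is fixed at compile time.

```
import torch

@torch.jit.script
def uniq(x: torch.Tensor):
    values, inverse = torch.unique(x, sorted=True, return_inverse=True)
    return values, inverse

print(uniq(torch.tensor([3, 1, 2, 1])))
```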
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156

Differential Revision: D21504449

Pulled By: eellison

fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
2020-05-12 18:37:53 -07:00
Donna Choi
4c99a9b672 Add documentation for hardswish (#37989)
Summary:
Fix issue https://github.com/pytorch/pytorch/issues/37431.
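
Illustrative usage of the op being documented (not from the PR):

```
import torch
import torch.nn.functional as F

x = torch.linspace(-4.0, 4.0, steps=9)
# hardswish(x) = x * relu6(x + 3) / 6
print(F.hardswish(x))
print(torch.nn.Hardswish()(x))  # module form
```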
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37989

Differential Revision: D21502182

Pulled By: zou3519

fbshipit-source-id: 245586fb555f7f1d9ec8d87269035b6fe626b47b
2020-05-12 06:48:51 -07:00
Xinyu Li
dcf1861f88 add document for bucktization (#38119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38119

This is for (#37435).
Demo is here:
https://glaringlee.github.io/generated/torch.searchsorted.html
https://glaringlee.github.io/generated/torch.bucketize.html
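
Illustrative usage of the documented ops (not from the PR):

```
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
values = torch.tensor([3, 6, 9])

print(torch.bucketize(values, boundaries))     # tensor([1, 3, 4])
print(torch.searchsorted(boundaries, values))  # tensor([1, 3, 4])
```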

Test Plan: Imported from OSS

Differential Revision: D21517392

Pulled By: glaringlee

fbshipit-source-id: b35795c7f07e9ae4c4806c528eb51fd4ca14d499
2020-05-11 21:54:19 -07:00
Ilia Cherniavskii
43dd8760d7 Move ThreadLocalDebugInfo to c10 (#37774)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37774

Move ThreadLocalDebugInfo from ATen to C10

Test Plan: Imported from OSS

Differential Revision: D21384249

Pulled By: ilia-cher

fbshipit-source-id: f9b5089a868f84a2ee013695a481fcc883d3c6b2
2020-05-11 19:27:41 -07:00
毛毛
19d6e32e9a fix sample code (#38002)
Summary:
Make the Linear layer sample code work correctly when bias is False.
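
Illustrative check (not the docstring change itself):

```
import torch
import torch.nn as nn

m = nn.Linear(20, 30, bias=False)
x = torch.randn(128, 20)

print(m(x).size())  # torch.Size([128, 30])
print(m.bias)       # None when bias=False
```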
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38002

Differential Revision: D21509679

Pulled By: malfet

fbshipit-source-id: c7077992cf414ecc557b39e5ed1e39ef01c8b347
2020-05-11 15:34:09 -07:00
Shawn Zhong
5f9b9036c1 Add instance methods tensor.isnan(), tensor.isinf(), tensor.isfinite() (#37942)
Summary:
Fix https://github.com/pytorch/pytorch/issues/37736
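
Quick illustration of the new instance methods (not from the PR):

```
import torch

x = torch.tensor([1.0, float('inf'), float('nan')])
print(x.isnan())     # tensor([False, False,  True])
print(x.isinf())     # tensor([False,  True, False])
print(x.isfinite())  # tensor([ True, False, False])
```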
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37942

Differential Revision: D21503150

Pulled By: soumith

fbshipit-source-id: cf6bf57ca67013efe119543f3d9a698473960dec
2020-05-11 13:56:59 -07:00
mattip
c31913671c DOC: add BFloat16 dtype and BFloat16Tensor (#37051)
Summary:
Related to gh-36318

Mention `bfloat16` dtype and `BFloat16Tensor` in documentation. The real fix would be to implement cpu operations on 16-bit float `half`, and I couldn't help but notice that `torch.finfo(torch.bfloat16).xxx` crashes for `xxx in ['max', 'min', 'eps']`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37051

Differential Revision: D21476851

Pulled By: ngimel

fbshipit-source-id: fef601d3116d130d67cd3a5654077f31b699409b
2020-05-11 12:44:46 -07:00
Donna Choi
ca2206d071 Add documentation for FeatureAlphaDropout (#36295)
Summary:
These changes add documentation for FeatureAlphaDropout, based on a need raised in an issue by SsnL (Issue https://github.com/pytorch/pytorch/issues/9886).
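
Illustrative usage (not from the PR): the module randomly masks whole channels, analogous to Dropout2d, using alpha-dropout semantics.

```
import torch
import torch.nn as nn

m = nn.FeatureAlphaDropout(p=0.2)
x = torch.randn(4, 16, 8, 8)
print(m(x).shape)  # torch.Size([4, 16, 8, 8])
```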
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36295

Differential Revision: D21478591

Pulled By: zou3519

fbshipit-source-id: a73c40bf1c7e3b1f301dc3347cef7b32e9842320
2020-05-08 15:09:01 -07:00
Richard Zou
172bcdb8c8 Add documentation for nn.Hardsigmoid and nn.functional.hardsigmoid. (#38120)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38120

Test Plan: build docs locally and attach a screenshot to this PR.

Differential Revision: D21477815

Pulled By: zou3519

fbshipit-source-id: 420bbcfcbd191d1a8e33cdf4a90c95bf00a5d226
2020-05-08 13:56:45 -07:00
Edward Yang
f8c93c5d3e Get rid of javasphinx dependency. (#38042)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38042

Fixes https://github.com/pytorch/pytorch/issues/36064

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21460484

Pulled By: ezyang

fbshipit-source-id: 553cbacc4365cfd84ff4a468a7366b12eade6fe0
2020-05-07 19:52:31 -07:00
Ilia Cherniavskii
2d708cefcc Move RecordFunction into ATen (#37548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37548

Moving RecordFunction from torch::autograd::profiler into at namespace

Test Plan:
CI

Imported from OSS

Differential Revision: D21315852

fbshipit-source-id: 4a4dbabf116c162f9aef0da8606590ec3f3847aa
2020-05-07 14:52:39 -07:00
Ilia Cherniavskii
c24c5f9684 Make RecordFunction callbacks thread local and modernize interface (#37491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37491

This PR modernizes RecordFunction API and adds thread local callbacks
in addition to the global ones

Changes:
 - support for TLS callbacks; this is going to be the foundation of the profiler and other tools
 - modernize the interface around a simple set of functions, (add|remove|has|clear)(Global|ThreadLocal)(Callback), and add RecordFunctionCallback to easily construct callbacks to be passed
 - add `.setShouldRun` to the callback interface to support cases when simple uniform sampling is not enough
 - to properly support add/remove, introduce the idea of a callback handle returned by add
 - the internal implementation still uses SmallVector to store intermediate state (as before) - in this case these are vectors of handles of the callbacks that were picked to run
 - to speed up the runtime we keep these vectors sorted; this way we can quickly enumerate the callbacks that need to be run
 - added tests for new functionality

Test Plan:
BUILD_BINARY=1 USE_BLAS=MKL USE_MKLDNN=0 USE_CUDA=0 python setup.py
develop install
./build/bin/test_jit
CI

record_function_benchmark: https://gist.github.com/ilia-cher/f1e094dae47fe23e55e7672ac4dcda2f

Imported from OSS

Differential Revision: D21300448

fbshipit-source-id: 6d55c26dbf20b33d35c3f1604dcc07bb063c8c43
2020-05-07 14:51:02 -07:00
Shawn Zhong
ec7fd0caef [docs] Fix broken links in contribution_guide.rst and governance.rst (#37820)
Summary:
Fix https://github.com/pytorch/pytorch/issues/37716

Fix three broken links in the documentation:
- [PyTorch Governance](https://pytorch.org/docs/source/community/governance.rst) in the [Contribution Guide page](https://pytorch.org/docs/master/community/contribution_guide.html#the-pytorch-contribution-process)
- [PyTorch Governance | Persons of Interest](https://pytorch.org/docs/source/community/persons_of_interest.rst) under the [Core Developer section](https://pytorch.org/docs/master/community/governance.html#core-developers)
- [PyTorch Contributor Guide](https://pytorch.org/docs/source/community/contribution_guide.rst) under the [FAQ session of the Governance Page](https://pytorch.org/docs/master/community/governance.html#faq)

The old link leads to the `.rst` source file, which does not exist on the server.

It's now fixed using the [document cross-referencing syntax](https://www.sphinx-doc.org/en/1.8/usage/restructuredtext/roles.html#cross-referencing-documents)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37820

Differential Revision: D21414579

Pulled By: mruberry

fbshipit-source-id: ecf6de9317ce93f70205cbfe97a3bdd54e635fe5
2020-05-06 10:33:33 -07:00
Edward Yang
4fef3763dd Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings" (#37778)
Summary:
Original PR: https://github.com/pytorch/pytorch/pull/37419

cc mattip suo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37778

Differential Revision: D21385774

Pulled By: ezyang

fbshipit-source-id: 5de532faab8bae132736b6b5189e0ee2ac9935be
2020-05-04 14:32:35 -07:00