Commit Graph

901 Commits

Author SHA1 Message Date
anjali411
8ec2ae9a9f Add view_as_real, view_as_complex for complex tensors (#39099)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39099
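A quick usage sketch of the new ops (illustrative; based on the names in the title):

```python
import torch

z = torch.tensor([1 + 2j, 3 + 4j], dtype=torch.complex64)

r = torch.view_as_real(z)      # float view, shape (2, 2): last dim holds (real, imag)
z2 = torch.view_as_complex(r)  # complex view over the same storage
print(torch.equal(z, z2))      # True: the two ops round-trip
```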

Test Plan: Imported from OSS

Differential Revision: D22057886

Pulled By: anjali411

fbshipit-source-id: bad5ba7097ba0dd13f2c549b2463094dee9afa14
2020-06-22 15:15:27 -07:00
Edward Yang
e4766fb4d9 Meta tensors, but without code deduplication (#38490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38490

A meta tensor is a tensor that is a lot like a normal tensor,
except it doesn't actually have any data associated with it.
You can use them to carry out shape/dtype computations without
actually having to run the actual code; for example, this could
be used to do shape inference in a JIT analysis pass.
Check out the description in DispatchKey.h for more information.

Meta tensors are part of a larger project to rationalize how we
write kernels so that we don't have to duplicate shape logic
in CPU kernel, CUDA kernel and meta kernel (this PR makes the
duplication problem worse!)  However, that infrastructure can
be built on top of this proof of concept, which just shows how
you can start writing meta kernels today even without this
infrastructure.
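As a rough sketch of the idea (assuming the device is spelled 'meta' as in the dispatch key; per the list below, only torch.add has a meta kernel in this PR):

```python
import torch

# Tensors that carry shape/dtype/device but no actual data.
x = torch.empty(2, 3, device='meta')
y = torch.empty(2, 3, device='meta')

z = torch.add(x, y)  # runs only the meta kernel: computes output metadata, no math
print(z.shape, z.dtype, z.device)  # torch.Size([2, 3]) torch.float32 meta
```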

There are a lot of things that don't work:
- I special cased printing for dense tensors only; if you try to
  allocate a meta sparse / quantized tensor things aren't going
  to work.
- The printing formula implies that torch.tensor() can take an
  ellipsis, but I didn't add this.
- I wrote an example formula for binary operators, but it isn't
  even right!  (It doesn't do type promotion or memory layout
  correctly.)  The most future-proof way to do it right is to
  factor the relevant computation out of TensorIterator,
  as it is quite involved.
- Nothing besides torch.add works right now
- Meta functions are ALWAYS included in mobile builds (selective
  build doesn't work on them).  This isn't a big deal for now
  but will become more pressing as more meta functions are added.

One reason I'm putting up this PR now is to check with Yinghai Lu
if we can unblock shape inference for accelerators, while we are
still working on a long term plan for how to unify all shape
computation across our kernels.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21935609

Pulled By: ezyang

fbshipit-source-id: f7d8636eeb8516b6bc296db99a16e56029972eee
2020-06-22 09:18:33 -07:00
Jerry Zhang
59ca1d31ca [quant][graphmode] docstrings for top level APIs (#40328)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40328

Test Plan: Imported from OSS

Differential Revision: D22149708

fbshipit-source-id: 63a1cd229d9e4668fba0ef3977e894cb8984318b
2020-06-19 22:20:23 -07:00
Mike Ruberry
4f761f325c Back out "[pytorch][PR] Removes dunder div"
Summary: NVIDIA's Apex is updating to no longer rely on this behavior, but we're reverting this Python2->Python3 update to unblock internal apex users.

Test Plan: Sandcastle + OSS CI.

Reviewed By: ngimel

Differential Revision: D22146782

fbshipit-source-id: f9483d2cbf9dc3a469ad48a6c863edea3ae51070
2020-06-19 18:31:20 -07:00
Shen Li
3ca05500fa Improve RPC documents (#40296)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40296

1. Added a link to parameter server tutorial
2. Explained the current state of TorchScript support

Test Plan: Imported from OSS

Differential Revision: D22142647

Pulled By: mrshenli

fbshipit-source-id: ffd697dd64a3aa874cf3f3488122ed805903370d
2020-06-19 15:34:49 -07:00
James Reed
c73095e78f Add note to serialization docs about zipfile format (#40288)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40288

Test Plan: Imported from OSS

Differential Revision: D22140324

Pulled By: jamesr66a

fbshipit-source-id: 01d7aa642ed2f4e4bdac4b7f3223bf4d7e62fd4d
2020-06-19 13:40:08 -07:00
Negin Raoof
73a156e81f [ONNX] Update pytorch/onnx docs for new export API args (#39802)
Summary:
Update pytorch/onnx docs for new export API args:
Use external data format and Training args.
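For illustration, a hedged sketch of the two new arguments (names as documented here; the tiny model is just a stand-in):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
dummy = torch.randn(1, 3, 32, 32)

torch.onnx.export(
    model, dummy, "model.onnx",
    training=torch.onnx.TrainingMode.EVAL,  # export in eval mode (TRAINING/PRESERVE also exist)
    use_external_data_format=False,         # True stores weights in side files, for >2GB models
)
```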
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39802

Reviewed By: hl475

Differential Revision: D22139664

Pulled By: houseroad

fbshipit-source-id: 7d6dcf75129cb88987f8c37b7d9d48ca594c0f38
2020-06-19 13:38:47 -07:00
Luca Wehrstedt
2393bab036 [TensorPipe] Update documentation (#40222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40222

Mention the TensorPipe agent in the RPC docs and give users the information they need to choose which agent to use.
ghstack-source-id: 106225711

Test Plan: Export to GitHub, build locally and try out the docs.

Differential Revision: D22116494

fbshipit-source-id: 30703ba8410c40f64e785f60d71dfd9faa8de4a1
2020-06-19 04:26:49 -07:00
Meghan Lele
d58b8222b7 [JIT] Add support for with statements (#34705)
Summary:
**Summary**
This commit adds support for with statements to PyTorch JIT. Each
of the with items in a with statement is represented in the JIT IR
as a pair of `prim::Enter` and `prim::Exit` nodes that call the
`__enter__` and `__exit__` methods defined on the context manager objects
returned by the expressions in the with item.
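For example, a scripted function can now use a user-defined context manager (a minimal sketch modeled on the tests; the class and `__exit__` signature are illustrative):

```python
from typing import Any

import torch

@torch.jit.script
class Guard(object):
    def __init__(self):
        self.depth = 0

    def __enter__(self):
        self.depth += 1
        return self.depth

    def __exit__(self, type: Any, value: Any, tb: Any):
        self.depth -= 1

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    g = Guard()
    with g as d:  # lowered to prim::Enter / prim::Exit in the IR
        x = x + d
    return x
```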

**Testing**
This commit adds unit tests for with statements with named with items,
nameless with items, and with statements that encounter exceptions.
```
$ python test/test_jit.py TestWith.test_with_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.430s

OK
```

```
$ python test/test_jit.py TestWith.test_with_no_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.264s

OK
```

```
$ python test/test_jit.py TestWith.test_with_exceptions
Fail to import hypothesis in common_utils, tests are not derandomized
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 1.053s

OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34705

Differential Revision: D22095945

Pulled By: SplitInfinity

fbshipit-source-id: f661565a834786725259b8ea014b4d7532f9419d
2020-06-18 16:57:18 -07:00
Shihao Xu
f3f30d4354 [JIT x RPC] Consolidate RRef type class and RRef impl class (#35694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35694

Closes https://github.com/pytorch/pytorch/issues/35110

Differential Revision: D7881729

fbshipit-source-id: eedda8f1b7510491886d469efeed4e002bb8b991
2020-06-18 07:46:38 -07:00
Shen Li
74142f76fa Adding torch.futures to API docs (#40051)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40051

Test Plan: Imported from OSS

Differential Revision: D22055031

Pulled By: mrshenli

fbshipit-source-id: ce8a79ba4ffdc7dbed6d4c62b1c33b96764c89e7
2020-06-17 17:55:48 -07:00
Alban Desmaison
08227fea4f Revert D22079377: [pytorch][PR] [RELAND] Change AccumulateGrad to yield .grads that match weights' memory layout
Test Plan: revert-hammer

Differential Revision:
D22079377

Original commit changeset: 9bd2b7e0c34f

fbshipit-source-id: c22cc349d790caa574eace0d63980854c33e5a59
2020-06-17 10:17:27 -07:00
Michael Carilli
1ec8ece2b9 [RELAND] Change AccumulateGrad to yield .grads that match weights' memory layout (#40129)
Summary:
https://github.com/pytorch/pytorch/pull/34904 was reverted because it had a misconfigured 4 GPU test that for some reason wasn't caught by external CI ([example failure](https://app.circleci.com/pipelines/github/pytorch/pytorch/181719/workflows/cfb37cd9-9a0c-4738-898b-d683934cd308/jobs/5868948/steps)).

This PR reverts the revert, and adds diffs that should repair the misconfigured test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40129

Differential Revision: D22079377

Pulled By: albanD

fbshipit-source-id: 9bd2b7e0c34fdaf887497b52037cfe82cba709c1
2020-06-17 09:02:54 -07:00
Mike Ruberry
9d588f7ce2 Removes dunder div (#39151)
Summary:
BC-breaking note:

If a user was calling one of these dunders directly, they are no longer available. Users should update to the Python3-compatible dunders.

Original PR note:

`__div__` (and `__idiv__` and `__rdiv__`) are no longer special dunders in Python3. This PR replaces them with the `__truediv__` (`__itruediv__`, `__rtruediv__`) dunders, since we no longer support Python2.
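A before/after sketch:

```python
import torch

t = torch.ones(3)

# Python2-era spelling, removed by this PR:
# t.__div__(2)

# Python3 spellings that keep working:
t / 2             # dispatches to __truediv__
t.__truediv__(2)
```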
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39151

Differential Revision: D22075713

Pulled By: mruberry

fbshipit-source-id: d318b47b51f7cc4c3728b1606a34d81e49ba0fa1
2020-06-16 23:02:20 -07:00
Alban Desmaison
f1e575a0bf Revert D20496044: [pytorch][PR] Change AccumulateGrad to yield .grads that match weights' memory layout
Test Plan: revert-hammer

Differential Revision:
D20496044

Original commit changeset: 248d680f4b1b

fbshipit-source-id: 6462b25e3fb9c8596c1da443389089f09c32df4d
2020-06-16 10:38:40 -07:00
mattip
dd581b4512 DOC: fix rpc reference in top-level index (#40077)
Summary:
Fixes gh-40046

PR gh-37419 refactored the content of `docs/source/rpc/index.rst` into `docs/source/rpc.rst` but did not link to the latter from `doc/source/index.rst` so the top-level RPC documentation is missing from https://pytorch.org/docs/master/.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40077

Differential Revision: D22068128

Pulled By: mrshenli

fbshipit-source-id: 394433f98f86509e0c9cb6d910a86fb8a2932683
2020-06-16 10:26:03 -07:00
Michael Carilli
2beb9690c3 Change AccumulateGrad to yield .grads that match weights' memory layout (#34904)
Summary:
Currently, whether `AccumulateGrad`  [steals](67cb018462/torch/csrc/autograd/functions/accumulate_grad.h (L42)) or [clones](67cb018462/torch/csrc/autograd/functions/accumulate_grad.h (L80)) an incoming gradient, the gradient ends up rowmajor contiguous, regardless of its param's layout.  If the param's layout is channels last, or otherwise not rowmajor contiguous, later kernels that apply gradients to params are forced into an uncoalesced memory access pattern for either the param or the gradient.  This may not sound like a big deal but for any binary op on large tensors it's a >3X increase in gmem traffic => 3X slowdown.

The present PR changes `AccumulateGrad` to prefer, where possible, stashing gradients that match their params' layouts (["Gradient Layout Contract"](https://github.com/pytorch/pytorch/pull/34904/files#diff-ef1a56d24f66b280dcdb401502d6a796R29-R38)).

Allowing `AccumulateGrad` to stash non-rowmajor-contiguous grads means DDP allreduces and DP reduces must allow non-rowmajor-contiguous grads.  This PR extends DDP and DP to allow gradients with non-rowmajor-contiguous strides as long as their layout is nonoverlapping and dense.

For good measure, I include changes that allow all five nccl primitives (allreduce, reduce, broadcast, allgather, reducescatter) to act on non-rowmajor-contiguous tensors (again as long as each input's layout is nonoverlapping and dense, and as long as all tensors participating in a given collective have the same layout).  The primitive comm changes aren't necessary to enable the DDP changes, but I wasn't sure this would end up true until I had written both sets of changes.  I think primitive comm enablement is reasonable to keep in the PR, especially since the code for it is simple.

Channels last params will be a major beneficiary of this PR, but I don't see it as channels-last-specific fix.  The spirit is layout matching in general:
- Grads should be stashed with memory layouts matching their params.
- Src and dst tensors on opposite ends of collectives should have matching dense layouts.

This PR also updates autograd docs to describe potential BC-breaking changes below.

## BC notes
ngimel albanD gchanan

#### BC-breaking
In the common case where the user lets AccumulateGrad decide grad layouts, strides for grads of dense but non-rowmajor-contiguous params will change.  Any user code that was accustomed to `view(-1)`ing these grads will break.

Also, the circumstances under which a grad can be stolen directly from the backward function that created it, as opposed to deep-copied by AccumulateGrad, have changed.  In most cases we expect silent performance improvement, because we expect channels-last-aware backward kernels will create channels last gradients for channels last params.  Now those can be stolen, whereas before this PR they were cloned and made rowmajor contiguous.  IMO this is a mild BC breakage.  Param backward hooks still see grads come in with whatever format the backward kernel gave them.  The only BC breakage potential I see is if user code relies somehow on a grad in a hook having or not having the same deep memory as the eventual `param.grad`.  Any such users hopefully know they're off the edge of the map and understand how to update their expectations.

#### BC escape hatches
At alband's recommendation, this PR's changes to AccumulateGrad do not alter the pre-PR code's decisions about whether grad is accumulated in or out of place.  Accumulations of new grads onto an existing `.grad` attribute were (usually) in-place before this PR and remain in-place after this PR, keeping the existing `.grad`'s layout.  After this PR, if the user wants to force accumulation into a grad with a particular layout, they can preset `param.grad` to a zeroed tensor with the desired strides or call `grad.contiguous(desired format)`.  This likely won't be as performant as letting AccumulateGrad establish grad layouts by cloning or stealing grads with contract-compliant strides, but at least users have a control point.

One limitation (present before this PR and unchanged by this PR):  Presetting `param.grad` does not ensure in-place accumulation all the time.  For example, if `create_graph=True`, or if incoming `new_grad` is dense and existing `variable_grad` is sparse, accumulation occurs out of place, and the out-of-place result may not match the existing grad's strides.
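A minimal sketch of the escape hatch described above (assuming a channels-last weight; the exact strides are illustrative):

```python
import torch

w = torch.randn(8, 3, 4, 4).contiguous(memory_format=torch.channels_last).requires_grad_()

# Preset .grad so later accumulations happen in place with these strides.
w.grad = torch.zeros_like(w)  # preserve_format keeps w's channels-last layout

(w * 2).sum().backward()
print(w.grad.is_contiguous(memory_format=torch.channels_last))  # True
```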

----------------------------
I also noticed some potential DDP improvements that I considered out of scope but want to mention for visibility:
1. make sure Reducer's ops sync with AccumulateGrad streams
2. ~to reduce CPU overhead and incur fewer kernel launches, lazily create flat `contents` tensors by a single `cat` kernel only when a bucket is full, instead of `copy_`ing grads into `contents` individually as soon as they are received.~  PR includes a [minor change](https://github.com/pytorch/pytorch/pull/34904/files#diff-c269190a925a4b0df49eda8a8f6c5bd3R312-R315) to divide grads while copying them into flat buffers, instead of copying them in, then dividing separately.  Without cat+div fusion, div-while-copying is the best we can do.
3. https://github.com/pytorch/pytorch/issues/38942
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34904

Differential Revision: D20496044

Pulled By: albanD

fbshipit-source-id: 248d680f4b1bf77b0a986451844ec6e254469217
2020-06-16 08:43:31 -07:00
Shawn Zhong
96870181c6 Remove duplicated entries in random.rst (#39725)
Summary:
In the current master doc, every function under [`torch.random`](https://pytorch.org/docs/master/random.html) appears twice because the function docs are generated by both `automodule` and `autofunction`.

This PR removes the parts generated by `autofunction`.

See changed docs at https://5751500-65600975-gh.circle-artifacts.com/0/docs/random.html:

![image](https://user-images.githubusercontent.com/6421097/84165823-bf720580-aa39-11ea-9149-c428d43260f8.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39725

Differential Revision: D21983701

Pulled By: ngimel

fbshipit-source-id: 5f515d7fd8034687e754e2c7b2ea9e154b3ea9b9
2020-06-10 16:51:15 -07:00
lixinyu
7cb4eae8b1 correct some cpp extension code usages and documents (#39766)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39766

Test Plan: Imported from OSS

Differential Revision: D21967284

Pulled By: glaringlee

fbshipit-source-id: 8597916bee247cb5f8c82ed8297119d2f3a72170
2020-06-10 08:31:22 -07:00
kshitij12345
9733390998 Add torch.flip{lr, ud} (#38599)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/38349

TODO:
* [x] Add Tests
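A quick usage sketch of the two new ops:

```python
import torch

x = torch.arange(4).reshape(2, 2)  # [[0, 1], [2, 3]]

torch.fliplr(x)  # flip left/right (columns): [[1, 0], [3, 2]]
torch.flipud(x)  # flip up/down (rows):       [[2, 3], [0, 1]]
```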
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38599

Differential Revision: D21941884

Pulled By: mruberry

fbshipit-source-id: 7a442ff11051c2c868cf8e3c04e4bba0f1a1d426
2020-06-09 07:19:37 -07:00
krshrimali
335e4a1e3b Add arcosh, arcsinh and arctanh to unary ops (#38388)
Summary:
This PR aims to add `arcosh`, `arcsinh` and `arctanh` support. Please see issue https://github.com/pytorch/pytorch/issues/38349 for more details.

**TODOs:**

* [x] Add test cases for `arcosh`, `arcsinh` and `arctanh`. (need help)
* [x] Overload ops if `std::op` does not work with `thrust::complex` types (like for `sinh`, `cosh`).

Note: `std::acosh, std::asinh, std::atanh` do not support `thrust::complex` types. Added support for complex types for these 3 ops (`arccosh, arcsinh, arctanh`)
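A usage sketch (assuming the ops land under the short spellings used elsewhere in the codebase):

```python
import torch

torch.acosh(torch.tensor([1.5, 2.0]))   # inverse hyperbolic cosine, needs x >= 1
torch.asinh(torch.tensor([-1.0, 1.0]))  # inverse hyperbolic sine
torch.atanh(torch.tensor([0.5, -0.5]))  # inverse hyperbolic tangent, needs |x| < 1
```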

cc: mruberry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38388

Differential Revision: D21882055

Pulled By: mruberry

fbshipit-source-id: d334590b47c5a89e491a002c3e41e6ffa89000e3
2020-06-04 11:40:55 -07:00
mattip
ada2652ca6 Restore docs coverage test via sphinx (#39331)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39331

Fixes gh-37590

Adds an extra `make coverage` to document building, which uses the built-in facility in sphinx to check docstring coverage. Also fixes a failure to import `torch/jit/supported_ops.py` which broke the [Torchscript Builtins](https://pytorch.org/docs/stable/jit_builtin_functions.html) page.

This also adds the required `SPHINXOPTS` to turn warnings into errors, but this is commented out. Note that since documentation of `torchvision` is merged in here, failures there would cause failures here if this is made active. Some thought might be needed about pinning the torchvision version merged into documentation.

The first commit should fail, since the "ScriptModule" class is commented out. I did that in order to check that a CI failure is properly reported.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38244

Differential Revision: D21640589

Pulled By: ezyang

fbshipit-source-id: 1e240d81669b5f21404d596de4a27d192dc9fd8a
2020-06-04 10:49:38 -07:00
Aayush Naik
0829cadca3 Implement rad2deg, deg2rad (#38852)
Summary:
Resolves https://github.com/pytorch/pytorch/issues/38372.
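Usage sketch:

```python
import math

import torch

torch.rad2deg(torch.tensor([math.pi, math.pi / 2]))  # tensor([180.,  90.])
torch.deg2rad(torch.tensor([180.0, 90.0]))           # tensor([3.1416, 1.5708])
```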

cc mruberry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38852

Differential Revision: D21868935

Pulled By: mruberry

fbshipit-source-id: ae6ded11b743c9d1cdc032984b4abe0a115290d6
2020-06-03 22:21:54 -07:00
neginraoof
4d597cb794 [ONNX] Update pytorch/onnx doc (#39480)
Summary:
Updated docs for operator_export_types and recently added op symbolics.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39480

Reviewed By: hl475

Differential Revision: D21877364

Pulled By: houseroad

fbshipit-source-id: 9831fe5776629da897db6d7943f830528cb916d2
2020-06-03 22:15:30 -07:00
Shen Li
a05ef17e46 Add rpc.functions.async_execution decorator for rpc_sync/rpc_async (#39216)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39216

The `rpc.functions.async_execution` decorator specifies that the
wrapped function is guaranteed to return a `torch.futures.Future`.
The decorator adds a `_wrapped_async_rpc_function` attribute to
the wrapper function. The caller retrieves this information and
then sets the `isAsyncFunction` argument accordingly, which is later
added to the PythonCall RPC message as a field. On the callee side,
if the PythonCall carries an asynchronous function, it will cast
the function's return value to a jit::PythonFutureWrapper object,
and then install response creation and communication as a callback
on that jit::PythonFutureWrapper.

For applications, this feature is useful when a function needs to
wait for IO or additional signaling. In those cases, marking the
user function with `rpc.functions.async_execution` will prevent it
from blocking one thread on the callee for too long.
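A sketch of the pattern (assumes a working RPC setup; the worker names are illustrative):

```python
import torch
import torch.distributed.rpc as rpc

@rpc.functions.async_execution
def async_add(to: str, x: torch.Tensor, y: torch.Tensor):
    # Return a Future immediately; the callee thread is freed while
    # the nested RPC is in flight.
    return rpc.rpc_async(to, torch.add, args=(x, y))

# On the caller:
# ret = rpc.rpc_sync("worker1", async_add, args=("worker2", torch.ones(2), torch.ones(2)))
```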

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D21779962

fbshipit-source-id: 6b6aa698bf6f91dad6ed2a7ee433df429b59e941
2020-06-02 23:21:25 -07:00
Xiang Gao
ebd4125e7e [JIT] Make torch.unique_consecutive compatible (#39339)
Summary:
A `unique_consecutive` version of https://github.com/pytorch/pytorch/pull/38156
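A sketch of what now compiles:

```python
import torch

@torch.jit.script
def dedup(x: torch.Tensor):
    # The flags are compile-time constants, so the return type is fixed.
    return torch.unique_consecutive(x, return_counts=True)

print(dedup(torch.tensor([1, 1, 2, 2, 3, 1])))
# (tensor([1, 2, 3, 1]), tensor([2, 2, 1, 1]))
```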
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39339

Differential Revision: D21823997

Pulled By: eellison

fbshipit-source-id: d14596a36ba36497e296da5a344e0376cef56f1b
2020-06-02 14:54:29 -07:00
Alban Desmaison
c6720f0d6b nit on functional autograd (#35493)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35493

Test Plan: Imported from OSS

Differential Revision: D21843416

Pulled By: albanD

fbshipit-source-id: af4d017ff4559237dd31e2ccaa1e3a967f7497ba
2020-06-02 14:49:16 -07:00
mattip
dc4fd0409f DOC: remove java documentation (#38920)
Summary:
Continuation of issue gh-36064 and PR gh-38042, which removed the unmaintained javasphinx extension. The unknown sphinx directives cause warnings when building documentation.

Edit: link to PR as well as issue
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38920

Differential Revision: D21818297

Pulled By: ezyang

fbshipit-source-id: 2c1d007a7689b26653d7dee081b0b969b8a731a2
2020-06-01 07:32:00 -07:00
anjali411
a50d781c03 Added real and imag views as tensor attributes (#39033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39033

Added `real` and `imag` views as tensor attributes. Right now, tensor.imag is disabled for real tensors, because if we returned a new tensor of zeros, the user would be able to update it; that should not be allowed, since numpy returns a read-only array here and pytorch doesn't support read-only tensors yet.

TODO in follow-up PRs:
1. add a setter for `real` and `imag`
2. add special case in codegen for `real` and `imag` backward functions.
3. remove `copy_real` and `copy_imag` methods.
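A read-only usage sketch (complex literals assumed to construct a complex tensor):

```python
import torch

z = torch.tensor([1 + 2j, 3 - 4j])
z.real  # tensor([1., 3.]) -- a view into z's storage, not a copy
z.imag  # tensor([ 2., -4.])
```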

Test Plan: Imported from OSS

Differential Revision: D21767542

Pulled By: anjali411

fbshipit-source-id: 539febf01f01ff055e3fbc7e9ff01fd3fe729056
2020-05-29 12:31:51 -07:00
Cloud Han
05f097b5bb Implement logaddexp (#38384)
Summary:
Resolve https://github.com/pytorch/pytorch/issues/38377
Related https://github.com/pytorch/pytorch/issues/38349

This op should not be confused with `logsumexp`, which does a reduction on a tensor over a specific axis.
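Usage sketch, showing the numerical-stability motivation:

```python
import torch

a = torch.tensor([-1000.0, -1.0])
b = torch.tensor([-1000.0, -2.0])

torch.log(torch.exp(a) + torch.exp(b))  # underflows: tensor([-inf, -0.6867])
torch.logaddexp(a, b)                   # stable: tensor([-999.3069, -0.6867])
```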
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38384

Differential Revision: D21737336

Pulled By: mruberry

fbshipit-source-id: 7864d04ca304c0fb2937bb083583e3e3d6ef205d
2020-05-27 20:27:31 -07:00
Jessica Lin
b12a879184 Correct Javadoc link to master (#39038)
Summary:
Correct Javadoc link to match the 1.4 version: https://github.com/pytorch/pytorch/blob/release/1.4/docs/source/index.rst
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39038

Differential Revision: D21747969

Pulled By: jlin27

fbshipit-source-id: 941b61204e9be53e15a6351eff6f4935e6a16d24
2020-05-27 16:21:30 -07:00
Martin Valgur
de8c888232 Fix torch.hub.hub_dir inconsistencies (#38969)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/38401

* `torch.hub.load_state_dict_from_url()` now also downloads to `$TORCH_HOME/hub/checkpoints` instead of `$TORCH_HOME/checkpoints` like `torch.hub.load()` and others.
* Make `hub_dir` private, add and use `get_dir()` instead.

Also updated docs. Did not see a need for additional unit tests.
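Sketch of the new accessor alongside the existing setter:

```python
import torch

print(torch.hub.get_dir())     # default: $TORCH_HOME/hub (~/.cache/torch/hub)
torch.hub.set_dir('/tmp/hub')  # override; checkpoints go under <hub_dir>/checkpoints
```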
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38969

Differential Revision: D21725880

Pulled By: ailzhang

fbshipit-source-id: 58cc6b32ddbda91e58c1c1433cc3916223556ea1
2020-05-26 21:06:52 -07:00
Shawn Zhong
ba3893e736 Rename torch._C.Generator to torch.Generator (#38773)
Summary:
Fix https://github.com/pytorch/pytorch/issues/26528
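With the rename, the public spelling works directly:

```python
import torch

g = torch.Generator()  # previously only reachable as torch._C.Generator
g.manual_seed(42)
torch.randn(2, generator=g)  # reproducible draws from this generator
```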
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38773

Differential Revision: D21701053

Pulled By: pbelevich

fbshipit-source-id: 57632ca9ce430ec30dc8e40739194ee2b5860f71
2020-05-26 08:29:46 -07:00
kshitij12345
3487744821 Add torch.logcumsumexp (#36308)
Summary:
Creating a new PR as I am unable to push to pandeykartikey's branch, since I don't have the permissions.

Closes https://github.com/pytorch/pytorch/issues/26411

Based on https://github.com/pytorch/pytorch/issues/32876 Thanks pandeykartikey for starting this out.

Have addressed the comments.
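Usage sketch:

```python
import torch

x = torch.tensor([0.0, 1.0, 2.0])
torch.logcumsumexp(x, dim=0)  # log of the cumulative sum of exp(x) along dim
# equivalent but less stable: torch.log(torch.cumsum(torch.exp(x), dim=0))
```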

anjali411 agadetsky albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36308

Differential Revision: D21648573

Pulled By: albanD

fbshipit-source-id: bc1a8fc4ab474a1148298117a1549b0e46f7c3ff
2020-05-21 09:12:31 -07:00
Alban Desmaison
b88b7d552f Prevent custom Functions from creating non differentiable type that requires grad (#38326)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38326

Test Plan: Imported from OSS

Differential Revision: D21668740

Pulled By: albanD

fbshipit-source-id: f452f65e76003492055311523a652937b1300183
2020-05-21 08:30:14 -07:00
Supriya Rao
530d48e93a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38749

Test Plan: python test/test_quantization.py TestFused

Differential Revision: D21654659

Pulled By: supriyar

fbshipit-source-id: 301be24083e794f4e71ff1d6d842e1aaefa640f0
2020-05-19 22:48:05 -07:00
Natalia Gimelshein
b995540a01 Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules
Test Plan: revert-hammer

Differential Revision:
D21632878

Original commit changeset: 0d73398b95d7

fbshipit-source-id: c4dd18a4220d175237f31f741a782f2596228009
2020-05-19 15:22:16 -07:00
Supriya Rao
7d38db0f9a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38452

Test Plan:
python test/test_quantization.py TestFused

Imported from OSS

Differential Revision: D21632878

fbshipit-source-id: 0d73398b95d72a0a23b42ef36f3ede1bfcc35eda
2020-05-19 09:53:56 -07:00
Xinyu Li
52e9953faf use version number instead of 'master' in html header title (#38149)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38149

This is for (#21290) (#31894)

Instead of putting "PyTorch master documentation" in the header's HTML title, we now use "PyTorch 1.x.x documentation"; this is similar to the TensorFlow and NumPy doc pages.

In Google search, we will get
"PyTorch Documentation - PyTorch 1.x.x Documentation" instead.

Test Plan: Imported from OSS

Differential Revision: D21586559

Pulled By: glaringlee

fbshipit-source-id: 2995709ac3c22dbb0183b5b4abfde7d795f1f8eb
2020-05-15 08:32:32 -07:00
Supriya Rao
de7025fbdb [quant] Support for functional quantized::conv1d (#38449)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38449

Also update docs to reflect conv1d op support
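A hedged sketch of the functional op (signature assumed to mirror the conv2d counterpart; needs a quantized engine such as fbgemm):

```python
import torch
from torch.nn.quantized import functional as qF

x = torch.quantize_per_tensor(torch.randn(1, 4, 8), scale=0.1, zero_point=0,
                              dtype=torch.quint8)
w = torch.quantize_per_tensor(torch.randn(2, 4, 3), scale=0.1, zero_point=0,
                              dtype=torch.qint8)
b = torch.zeros(2)

y = qF.conv1d(x, w, b, scale=0.2, zero_point=0)  # quantized output, shape (1, 2, 6)
```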

Test Plan:
python test/test_quantization.py TestQuantizedFunctional.test_conv1d_api

Imported from OSS

Differential Revision: D21575921

fbshipit-source-id: 21c9f6b49ad456cd9d93e97f17cf5b8d87f0da6b
2020-05-14 16:09:51 -07:00
Jessica Lin
8b6bf2a457 Add C++ Landing Page (#38450)
Summary:
* Add cpp_index.rst for landing page to match 1.5 (https://github.com/pytorch/pytorch/blob/release/1.5/docs/source/cpp_index.rst)
* Link to new cpp landing page was added to the docs table of contents in this PR: https://github.com/pytorch/pytorch/pull/38350
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38450

Differential Revision: D21580939

Pulled By: jlin27

fbshipit-source-id: 021c43f207a100d554266e4e16cb6752ca9c56a0
2020-05-14 16:02:01 -07:00
Bharat123rox
15da26f8aa DOC: Add documentation for Tensor.is_nonzero (#37845)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/37438 by adding documentation for `Tensor.is_nonzero`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37845

Differential Revision: D21494422

Pulled By: mruberry

fbshipit-source-id: ee4f5979922d7c8100b5031d770ccdf59fe1c1a1
2020-05-14 04:46:55 -07:00
Supriya Rao
f6626aaf43 [quant] Add support for Quantized Conv1d and ConvRELU1d (#38283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38283

Adds support for the modules and tests

Test Plan:
python test/test_quantization.py TestStaticQuantizedModule.test_conv1d_api

Imported from OSS

Differential Revision: D21553665

fbshipit-source-id: 7ea28da024bdf59f87f300d616c266f2b41f0bcd
2020-05-13 16:59:13 -07:00
Jessica Lin
33977ca769 Update Cpp, rpc docs and Libraries section to match 1.5 (#38350)
Summary:
* Link cpp docs to the cpp landing page
* Link to rpc.rst landing page
* Update Libraries to match 1.5 (https://github.com/pytorch/pytorch/blob/release/1.5/docs/source/index.rst)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38350

Differential Revision: D21554435

Pulled By: jlin27

fbshipit-source-id: d1c9d5a86f84910225cbd0a57074ae95c8a9a450
2020-05-13 15:20:35 -07:00
Elias Ellison
eb3e9872c9 [JIT] make torch.unique compilable (#38156)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986

Follows the stack in https://github.com/pytorch/pytorch/pull/33783 to make functions in `torch/functional.py` resolve to their python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
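A sketch of what now compiles (the flags must be compile-time constants so boolean_dispatch can pick the overload):

```python
import torch

@torch.jit.script
def uniq(x: torch.Tensor):
    out, inverse = torch.unique(x, sorted=True, return_inverse=True)
    return out, inverse
```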
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156

Differential Revision: D21504449

Pulled By: eellison

fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
2020-05-12 18:37:53 -07:00
Donna Choi
4c99a9b672 Add documentation for hardswish (#37989)
Summary:
Fix issue https://github.com/pytorch/pytorch/issues/37431.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37989

Differential Revision: D21502182

Pulled By: zou3519

fbshipit-source-id: 245586fb555f7f1d9ec8d87269035b6fe626b47b
2020-05-12 06:48:51 -07:00
Xinyu Li
dcf1861f88 add document for bucketization (#38119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38119

This is for (#37435).
Demo is here:
https://glaringlee.github.io/generated/torch.searchsorted.html
https://glaringlee.github.io/generated/torch.bucketize.html
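Usage sketch of the two documented ops:

```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
torch.bucketize(torch.tensor([3, 6, 9]), boundaries)      # tensor([1, 3, 4])

# searchsorted generalizes to batched sorted sequences:
sorted_seq = torch.tensor([[1, 3, 5], [2, 4, 6]])
torch.searchsorted(sorted_seq, torch.tensor([[4], [3]]))  # tensor([[2], [1]])
```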

Test Plan: Imported from OSS

Differential Revision: D21517392

Pulled By: glaringlee

fbshipit-source-id: b35795c7f07e9ae4c4806c528eb51fd4ca14d499
2020-05-11 21:54:19 -07:00
Ilia Cherniavskii
43dd8760d7 Move ThreadLocalDebugInfo to c10 (#37774)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37774

Move ThreadLocalDebugInfo from ATen to C10

Test Plan: Imported from OSS

Differential Revision: D21384249

Pulled By: ilia-cher

fbshipit-source-id: f9b5089a868f84a2ee013695a481fcc883d3c6b2
2020-05-11 19:27:41 -07:00
毛毛
19d6e32e9a fix sample code (#38002)
Summary:
Make the Linear layer work correctly when bias is False
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38002

Differential Revision: D21509679

Pulled By: malfet

fbshipit-source-id: c7077992cf414ecc557b39e5ed1e39ef01c8b347
2020-05-11 15:34:09 -07:00
Shawn Zhong
5f9b9036c1 Add instance methods tensor.isnan(), tensor.isinf(), tensor.isfinite() (#37942)
Summary:
Fix https://github.com/pytorch/pytorch/issues/37736
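Usage sketch of the new instance methods:

```python
import torch

x = torch.tensor([1.0, float('inf'), float('nan')])
x.isnan()     # tensor([False, False,  True])
x.isinf()     # tensor([False,  True, False])
x.isfinite()  # tensor([ True, False, False])
```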
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37942

Differential Revision: D21503150

Pulled By: soumith

fbshipit-source-id: cf6bf57ca67013efe119543f3d9a698473960dec
2020-05-11 13:56:59 -07:00