Commit Graph

880 Commits

Author SHA1 Message Date
Rohan Varma
639133d6d1 rename init_model_parallel to init_rpc (#29762)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29762

Rename this API as discussed, since its use cases extend beyond
model parallelism alone.
ghstack-source-id: 94020627

Test Plan: Unit tests pass

Differential Revision: D18491743

fbshipit-source-id: d07676bb14f072c64da0ce99ee818bcc582efc57
2019-11-18 06:07:44 -08:00
Rohan Varma
455b5c1a7d minor updates to rpc docs (#29857)
Summary:
Small fixes to rpc docs:
- mark as experimental and subject to change
- Reference the distributed autograd design document in pytorch notes page.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29857

Differential Revision: D18526252

Pulled By: rohan-varma

fbshipit-source-id: e09757fa60a9f8fe9c76a868a418a1cd1c300eae
2019-11-15 22:28:08 -08:00
Pritam Damania
eb29276623 Update distributed autograd design doc with appropriate links. (#29927)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29927

With the docs page now up, we can update the links in the design doc
to point to the docs page.
ghstack-source-id: 94055423

Test Plan: waitforbuildbot

Differential Revision: D18541878

fbshipit-source-id: f44702d9a8296ccc0a5d58d56c3b6dc8a822b520
2019-11-15 21:10:53 -08:00
Xiaomeng Yang
510ef4b63a Add nn.quantized.Conv3d (#29813)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29813

Add nn.quantized.Conv3d

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: jianyuh

Differential Revision: D18467749

fbshipit-source-id: 892f708179e9e836ad902851ac1838847009da15
2019-11-15 04:33:40 -08:00
Rohan Varma
06ef4a757d Add docs for RPC, dist autograd, and RRef modules (#29276)
Summary:
Closes https://github.com/pytorch/pytorch/issues/28983. Documentation for `torch.distributed.rpc` and `torch.distributed.autograd` modules. Also fixes/tidies up some of the docstrings in rpc/autograd, and moves some functions to be private so they don't show up in the documentation.

Note: Much of the text describing the RPC/RRef layers is taken from the following RFCs: https://github.com/pytorch/pytorch/issues/23110, https://github.com/pytorch/pytorch/issues/26759
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29276

Differential Revision: D18478754

Pulled By: rohan-varma

fbshipit-source-id: e9a7089baf5275304e5408d319eb9bf98e53fff8
2019-11-14 14:32:03 -08:00
Hong Xu
bd0394d473 Add op bitwise_xor to replace __xor__ and __ixor__ (#25665)
Summary:
We define `bitwise_xor` instead of
`__xor__` and `__ixor__` because (a) it is not idiomatic to call
functions whose names start and end with double underscores, (b) the
kinds of arguments we can add to dunder methods are limited (e.g., no `out`), and (c) the name is consistent with `bitwise_not` and with NumPy.

Fix https://github.com/pytorch/pytorch/issues/24513,  Fix https://github.com/pytorch/pytorch/issues/24517, Fix https://github.com/pytorch/pytorch/issues/24660, Fix https://github.com/pytorch/pytorch/issues/24664
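A minimal sketch of the named operator described above (assuming a PyTorch build that includes `torch.bitwise_xor`):

```python
import torch

a = torch.tensor([0b1010, 0b1111], dtype=torch.int32)
b = torch.tensor([0b0110, 0b0101], dtype=torch.int32)

# The named function replaces the dunder-only spelling and supports `out=`.
out = torch.empty_like(a)
torch.bitwise_xor(a, b, out=out)
print(out)  # tensor([12, 10], dtype=torch.int32)
```

The `^` operator still works and now dispatches to the same implementation.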
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25665

Differential Revision: D17577143

Pulled By: VitalyFedyunin

fbshipit-source-id: 042f6385f9305bd66d50a8ce82e28f40a23a7266
2019-11-12 16:14:04 -08:00
Pritam Damania
c3b2c2e353 Design doc for distributed autograd. (#29175)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29175

Updates our docs to include a design doc for distributed autograd.
Currently, this doc only covers the FAST mode algorithm. The Smart mode
algorithm section just refers to the original RFC.

There is a section for Distributed Optimizer that we can complete once we've
finalized the API for the same.
ghstack-source-id: 93701129

Test Plan: look at docs.

Differential Revision: D18318949

fbshipit-source-id: 670ea1b6bb84692f07facee26946bbc6ce8c650c
2019-11-12 15:04:23 -08:00
Anjali Chourdia
eeb7199ccc updated name_inference doc for cumsum and cumprod (#29453)
Summary:
cumsum/cumprod perform their respective operations over a given dimension, but they do not reduce that dimension away, i.e. they are not reduction operations; hence they simply keep the input names of the tensor they operate on.
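A short sketch of the rule, using the (experimental) named-tensor API as it exists in current PyTorch:

```python
import torch

t = torch.ones(2, 3, names=('N', 'C'))
s = t.cumsum(1)   # cumulative sum along 'C'; no dimension is reduced away
print(s.names)    # ('N', 'C') -- the input names are kept
print(s.shape)    # torch.Size([2, 3])
```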
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29453

Differential Revision: D18455683

Pulled By: anjali411

fbshipit-source-id: 9e250d3077ff3d8f3405d20331f4b6ff05151a28
2019-11-12 13:43:47 -08:00
Michela Paganini
8e8a5e0664 Pruning Functionality (#24076)
Summary:
Provides implementation for feature request issue https://github.com/pytorch/pytorch/issues/20402.

Adds pruning functionalities (structured and unstructured, local and global, as well as pruning from user-provided mask).

Associated tutorial here: https://github.com/pytorch/tutorials/pull/605

cc: soumith
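A minimal sketch of the local unstructured pruning API this PR adds (`torch.nn.utils.prune` in current PyTorch):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(4, 4)
# L1 unstructured pruning: zero out the 50% of weights with smallest |w|.
prune.l1_unstructured(layer, name="weight", amount=0.5)

print(hasattr(layer, "weight_mask"))   # True: a mask buffer was registered
print(int((layer.weight == 0).sum()))  # 8 of the 16 weights are zeroed
```

`prune.remove(layer, "weight")` can then make the pruning permanent by folding the mask into the parameter.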
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24076

Differential Revision: D18400431

Pulled By: mickypaganini

fbshipit-source-id: a97bd6ca61f8600ae411da9ff6533c232aae1a51
2019-11-08 19:38:00 -08:00
Prasun Anand
c99cdfeb7d link to documentation for RNNBase.flatten_parameters() (#29196)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/28658

I have added the link to the docs for `flatten_parameters`.

RNNBase is a superclass of the RNN, LSTM, and GRU classes. Should I add a link to `flatten_parameters()` in those sections as well?
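For context, a sketch of how the inherited method is called in practice:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)
# flatten_parameters() is defined on RNNBase, so RNN/LSTM/GRU all inherit it.
# It compacts the weights into one contiguous chunk (a cuDNN requirement);
# on CPU it is effectively a no-op.
lstm.flatten_parameters()

out, (h, c) = lstm(torch.randn(5, 3, 8))
print(out.shape)  # torch.Size([5, 3, 16])
```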
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29196

Differential Revision: D18326815

Pulled By: ezyang

fbshipit-source-id: 4239019112e77753a0820aea95c981a2c868f5b0
2019-11-05 09:45:21 -08:00
Edward Yang
93acd1998f Revert D18249048: Moved VonMises distribution with sampling upstream from Pyro.
Test Plan: revert-hammer

Differential Revision:
D18249048

Original commit changeset: 3e6df9006c7b

fbshipit-source-id: 001666e4b5b9879d36147bacfc761ea661ded900
2019-11-04 09:50:50 -08:00
Ahmad Salim Al-Sibahi
0f97e08a36 Moved VonMises distribution with sampling upstream from Pyro. (#17168)
Summary:
At the encouragement of Pyro developers and https://github.com/pytorch/pytorch/issues/13811, I have opened this PR to move the (2D) von Mises distribution upstream.
CC: fritzo neerajprad
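A sketch of the distribution as it appears in current PyTorch (this commit was reverted below and later re-landed, so the API shown is the one that ultimately shipped):

```python
import math
import torch
from torch.distributions import VonMises

# Von Mises distribution on the circle: mean direction `loc`,
# concentration `concentration` (kappa).
d = VonMises(loc=torch.tensor(0.0), concentration=torch.tensor(4.0))
samples = d.sample((1000,))
print(samples.shape)                       # torch.Size([1000])
print(float(samples.abs().max()) <= math.pi)  # samples lie in [-pi, pi]
```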
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17168

Differential Revision: D18249048

Pulled By: ezyang

fbshipit-source-id: 3e6df9006c7b85da7c4f55307c5bfd54c2e254e6
2019-11-04 08:44:11 -08:00
Xiaomeng Yang
2460dced8f Add torch.nn.GELU for GELU activation (#28944)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28944

Add torch.nn.GELU for GELU activation
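A minimal usage sketch of the new module:

```python
import torch
import torch.nn as nn

gelu = nn.GELU()
x = torch.tensor([-3.0, 0.0, 3.0])
y = gelu(x)
# GELU(x) = x * Phi(x): ~0 for large negative x, near-identity for large positive x.
print(y)
```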

Test Plan: buck test mode/dev-nosan //caffe2/test:nn -- "GELU"

Reviewed By: hl475, houseroad

Differential Revision: D18240946

fbshipit-source-id: 6284b30def9bd4c12bf7fb2ed08b1b2f0310bb78
2019-11-03 21:55:05 -08:00
Alban Desmaison
f5edb62a7f Clean extending autograd doc for output size 1 (#28860)
Summary:
Fix https://github.com/pytorch/pytorch/issues/28583
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28860

Differential Revision: D18224497

Pulled By: albanD

fbshipit-source-id: 0fa4eacce6f6092d555e509dc23bd75206f78d41
2019-10-30 13:57:10 -07:00
Prasun Anand
4230132baf Added docs for context method mixins. Fixes issue #27365 (#28643)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/27365 .

This PR:
1. Makes Context method docs available.
2. Links [Extending torch autograd](https://pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd) notes to Context method docs.
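A sketch of the Context (`ctx`) methods these docs cover, in a custom `torch.autograd.Function`:

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # ctx.save_for_backward is one of the Context methods being documented.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x

x = torch.tensor(3.0, requires_grad=True)
Square.apply(x).backward()
print(x.grad)  # tensor(6.)
```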
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28643

Differential Revision: D18170089

Pulled By: albanD

fbshipit-source-id: a1119ea8e2f8a71f0d1aadf416f2f98343aa9b7b
2019-10-28 08:31:35 -07:00
Vincent Quenneville-Belair
e4f40bf3b2 Add multiplicative lr. (#27254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27254

`MultiplicativeLR` consumes a function providing the multiplicative factor at each epoch. It mimics `LambdaLR` in its syntax.
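A short sketch of the scheduler (the lambda returns the factor the current lr is multiplied by at each epoch, unlike `LambdaLR`, where it returns the factor applied to the base lr):

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiplicativeLR

params = [torch.zeros(1, requires_grad=True)]
opt = SGD(params, lr=0.1)
sched = MultiplicativeLR(opt, lr_lambda=lambda epoch: 0.95)

for _ in range(2):
    opt.step()
    sched.step()
print(opt.param_groups[0]["lr"])  # 0.1 * 0.95 * 0.95 = 0.09025 (approximately)
```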

Test Plan: Imported from OSS

Differential Revision: D17728088

Pulled By: vincentqb

fbshipit-source-id: 1c4a8e19a4f24c87b5efccda01630c8a970dc5c9
2019-10-23 11:38:45 -07:00
Jessica Lin
c813503f05 Update hyperlink syntax for XLA, torchaudio, torchtext, and C++ API (#28019)
Summary:
Tested locally. Should render as such:

![image](https://user-images.githubusercontent.com/8042156/66861657-4373fc00-ef44-11e9-8a5b-52abc3ddcd51.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28019

Differential Revision: D18012303

Pulled By: brianjo

fbshipit-source-id: 4b3bd9f63f5d94d474ab13bb06220a112185e924
2019-10-18 12:15:17 -07:00
davidriazati
618cb40e30 Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17859654

fbshipit-source-id: f3a116cddb5393bdfbef670c56efb2ee62ccf252
2019-10-17 11:12:35 -07:00
Zafar Takhirov
dc8785a022 Refactoring names for consistency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27670

Test Plan: Imported from OSS

Differential Revision: D17846269

Pulled By: z-a-f

fbshipit-source-id: ed3c7441c185bf11b2e62879aa3ecbc654aa2d4e
2019-10-16 12:18:26 -07:00
Richard Zou
817cb4182e Fix Sphinx warning about '_images' not existing (#27927)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27927

This fixes
`WARNING: html_static_path entry '_images' does not exist`
by removing '_images' from conf.py. As far as I can tell, '_images' in
`html_static_path` is only necessary if images already exist in the
`_images` folder; otherwise, sphinx is able to auto-generate _images
into the build directory and populate it correctly.

Test Plan: - build and view the docs locally.

Differential Revision: D17915109

Pulled By: zou3519

fbshipit-source-id: ebcc1f331475f52c0ceadd3e97c3a4a0d606e14b
2019-10-15 07:50:26 -07:00
zou3519
e5d6b75319 Bag of documentation fixes; fix more sphinx warnings (#27850)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850

Many of these are real problems in the documentation (i.e., link or
bullet point doesn't display correctly).

Test Plan: - built and viewed the documentation for each change locally.

Differential Revision: D17908123

Pulled By: zou3519

fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
2019-10-15 07:31:14 -07:00
vishwakftw
ad47788647 Add Polygamma to the docs (#27696)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25347
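A quick sketch of the newly documented function:

```python
import math
import torch

# polygamma(n, x) is the (n+1)-th derivative of log(gamma(x)).
# polygamma(1, 1) is the trigamma function at 1, which equals pi^2 / 6.
val = torch.polygamma(1, torch.tensor(1.0))
print(val)                # ~1.6449
print(math.pi ** 2 / 6)   # 1.6449...
```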
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27696

Differential Revision: D17916790

Pulled By: ezyang

fbshipit-source-id: ac2635a300b1ef0ab437e3ffac152239754fe828
2019-10-15 07:00:57 -07:00
Dmytro Dzhulgakov
169327f557 Add note that cuda quantization is not supported (#27829)
Summary:
People get confused with partial support otherwise: https://github.com/pytorch/pytorch/issues/27811 #27729

Suggestions on where else to put warnings are welcome (probably in tutorials - cc SethHWeidman )
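To illustrate the supported path, a sketch of quantization on CPU (the only backend currently supported; the same call fails for CUDA tensors):

```python
import torch

x = torch.tensor([-1.0, 0.0, 1.0, 2.0])
# Affine per-tensor quantization on CPU.
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=64, dtype=torch.quint8)
print(q.is_quantized)   # True
print(q.dequantize())   # tensor([-1., 0., 1., 2.]) up to quantization error
```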
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27829

Differential Revision: D17910931

Pulled By: dzhulgakov

fbshipit-source-id: 37a169a4bef01b94be59fe62a8f641c3ec5e9b7c
2019-10-14 11:25:51 -07:00
StandbyMe
a23edd6b9c Fix Type Errors in Examples about Named Tensor (#27828)
Summary:
`names` should be a `tuple`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27828

Differential Revision: D17908112

Pulled By: zou3519

fbshipit-source-id: bd1454c5d6e6b690955f49380e34c4b0ddaf879b
2019-10-14 09:24:45 -07:00
vishwakftw
82a69a690f Add documentation for torch.lgamma (#27812)
Summary:
Changelog:
- Add doc string in _torch_docs.py, _tensor_docs.py
- Expose in docs/source/torch.rst, docs/source/tensors.rst
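A sketch of the documented functions (including the in-place variant mentioned in the test plan):

```python
import math
import torch

# lgamma(x) = log(Gamma(x)); for positive integer n, Gamma(n) = (n-1)!
x = torch.tensor([1.0, 5.0])
y = torch.lgamma(x)
print(y)             # ~tensor([0.0000, 3.1781])  (log 0! and log 4!)
print(math.log(24))  # 3.1780...
x.lgamma_()          # in-place Tensor variant
```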
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27812

Test Plan:
- Remove `lgamma`, `lgamma_` from the blacklist

Fixes https://github.com/pytorch/pytorch/issues/27783

Differential Revision: D17907630

Pulled By: ezyang

fbshipit-source-id: 14e662a4e5262126889a437e5c4bfb21936730e8
2019-10-14 08:47:04 -07:00
zou3519
23bffc4f14 Fix most documentation warnings (#27782)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27782

Warnings show up when running `make html` to build documentation. All of
the warnings are very reasonable and point to bugs in our docs. This PR
attempts to fix most of those warnings.

In the future we will add something to the CI that asserts that there
are no warnings in our docs.

Test Plan: - build and view changes locally

Differential Revision: D17887067

Pulled By: zou3519

fbshipit-source-id: 6bf4d08764759133b20983d6cd7f5d27e5ee3166
2019-10-13 10:34:01 -07:00
BowenBao
907ce80321 Update onnx landing page for 1.3 (#27581)
Summary:
* Update supported operator list.
* Update FAQ on implicit scalar casting. Traced models are now more robust.

cc spandantiwari lara-hdr neginraoof Please feel free to add any missing points. Thank you!

cc houseroad for review.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27581

Reviewed By: hl475

Differential Revision: D17882147

Pulled By: houseroad

fbshipit-source-id: c1d745ca647fce2daf897bbb6d1ff8c283f18839
2019-10-11 20:53:50 -07:00
Chris Gottbrath
f35d7d4614 Pr v130 doc changes oct10 take2 (#27721)
Summary:
resolves issues:
https://github.com/pytorch/pytorch/issues/27703

Updates to index for v1.3.0
* add javasphinx to the required sphinx plugins
* Update "Package Reference" to "Python API"
* Add in torchaudio and torchtext reference links so they show up across all docs not just the main page
* Add "Other Languages" section, add in C++ docs, add in Javadocs
* Add link to XLA docs under Notes: http://pytorch.org/xla/

this includes changes to:
docs/source/conf.py
docs/source/index.rst
docs/source/nn.rst
docs/requirements.txt
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27721

Differential Revision: D17881973

Pulled By: jlin27

fbshipit-source-id: ccc1e9e4da17837ad99d25df997772613f76aea8
2019-10-11 11:49:14 -07:00
Elias Ellison
5d495a11cb add unused and is_scripting to docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27630
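A sketch of the two newly documented TorchScript utilities:

```python
import torch

@torch.jit.unused
def debug_only(x):
    # Excluded from compilation; calling it from scripted code raises at runtime.
    return x.tolist()

def f(x):
    if torch.jit.is_scripting():
        return x + 1   # branch taken under torch.jit.script
    return x + 2       # branch taken in eager mode

scripted = torch.jit.script(f)
print(f(torch.tensor(0)))         # tensor(2) in eager mode
print(scripted(torch.tensor(0)))  # tensor(1) when scripted
```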

Differential Revision: D17868856

Pulled By: eellison

fbshipit-source-id: 7cf183d5c0d5436fbaa549a02e6b8fd47fa15b67
2019-10-10 17:02:17 -07:00
Michael Suo
9bc8fb8dfd Revert D17850696: [pytorch][PR] Updates to quantization related files, index.rst, and javadocs
Test Plan: revert-hammer

Differential Revision:
D17850696

Original commit changeset: 3de146f06522

fbshipit-source-id: 565fef87fcf6021362ec3e540be78641d47ef9a7
2019-10-10 09:23:33 -07:00
Edward Yang
9d925c1d6f Revert D17851047: [pytorch][PR] Add javasphinx extension
Test Plan: revert-hammer

Differential Revision:
D17851047

Original commit changeset: 8ed7e3c44f20

fbshipit-source-id: 9021436a7c84f7582c3d4d3e29fb5f7b0887e88c
2019-10-10 07:36:42 -07:00
Dmytro Dzhulgakov
d931c8bf75 substantially restructure all quantized docs to group logically (#27677)
Summary:
Make everything clickable
Organize APIs logically in subsections
Fix many typos
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27677

Differential Revision: D17850650

Pulled By: dzhulgakov

fbshipit-source-id: 060f6ed988d1c4beecba6bc8daf55626961fac98
2019-10-10 00:50:02 -07:00
Jessica Lin
91959aa3d3 Add javasphinx extension
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27681

Differential Revision: D17851047

Pulled By: brianjo

fbshipit-source-id: 8ed7e3c44f2055d2b8577686aff1d13548f45688
2019-10-09 23:20:33 -07:00
Jessica Lin
1118ea5866 Updates to quantization related files, index.rst, and javadocs (#27676)
Summary:
- Update torch.rst to remove certain autofunction calls
- Add reference to Quantization Functions section in nn.rst
- Update javadocs for v1.3.0
- Update index.rst:
  - Update "Package Reference" to "Python API"
  - Add in torchaudio and torchtext reference links so they show up across all docs not just the main page
  - Add "Other Languages" section, add in C++ docs, add in Javadocs
  - Add link to XLA docs under Notes: http://pytorch.org/xla/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27676

Differential Revision: D17850696

Pulled By: brianjo

fbshipit-source-id: 3de146f065222d1acd9a33aae3b543927a63532a
2019-10-09 22:52:19 -07:00
Michael Suo
17a54e1b3d Revert D17840343: [pytorch][PR] changes to the documentation in support of quantization
Test Plan: revert-hammer

Differential Revision:
D17840343

Original commit changeset: 06bf3da6012b

fbshipit-source-id: 35f96fac299a0f9dd8ad864f475f606317c46823
2019-10-09 19:20:44 -07:00
Michael Suo
971f773886 Revert D17750005: [jit] Add doc copy-edits from review
Test Plan: revert-hammer

Differential Revision:
D17750005

Original commit changeset: 230d1d33efb0

fbshipit-source-id: 12d22567b99286a8c4f719c3a384cb3665f7ba54
2019-10-09 19:12:58 -07:00
Jessica Lin
18d5210de9 changes to the documentation in support of quantization (#27603)
Summary:
this includes changes to

docs/source/conf.py
docs/source/index.rst
docs/source/nn.rst
docs/source/torch.rst
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27603

Differential Revision: D17840343

Pulled By: gottbrath

fbshipit-source-id: 06bf3da6012b334e3246a6a2cad42358462e2630
2019-10-09 17:13:34 -07:00
Chris Gottbrath
e049e0b027 adding quantization.rst file for quantization feature (#27559)
Summary:
This was written by Raghu, Jessica, Dmytro and myself.

This PR will accumulate additional changes (there are a few more things we need to add to this actual rst file). I'll probably add the related image files to this PR as well.

I'm breaking draft PR https://github.com/pytorch/pytorch/pull/27553 into more easily digestible pieces.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27559

Differential Revision: D17843414

Pulled By: gottbrath

fbshipit-source-id: 434689f255ac1449884acf81f10e0148d0d8d302
2019-10-09 16:45:09 -07:00
Jessica Lin
0eccd05ab4 Add javadoc rst files
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27646

Differential Revision: D17844860

Pulled By: brianjo

fbshipit-source-id: 9b3ddf8dab2f63345b73436aeb245eea1686c350
2019-10-09 16:40:02 -07:00
davidriazati
e7c9c8098a Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17750005

fbshipit-source-id: 230d1d33efb015e40327373a05a1d3eced7c5c00
2019-10-09 14:16:48 -07:00
Dylan Bespalko
7c472ec597 Vectorized complex unary and binary op support. (#26500)
Summary:
Added Complex support with AVX to unary ops and binary ops.

I need to add nan propagation to minimum() and maximum() in the future.
In-tree changes to pytorch to support complex numbers are being submitted here.
Out-of-tree support for complex numbers is here: pytorch-cpu-strided-complex extension

Preliminary Benchmarks are here.

I tried the rrii and riri memory layouts and found that riri is better in most situations.
Divide is very slow because you can't reduce 1/(x+y)
Sqrt is also very slow.
Reciprocal could be sped up after I add conj()
Everything else is typically within 20% of the real number performance.
Questions:

Why does macOS not support MKL? (`#if AT_MKL_ENABLED() && !defined(__APPLE__)` in vml.h.) MKL does support some complex operations like Abs, so I was curious about trying it.
Is MKL just calling AVX?
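For reference, a sketch of the complex unary/binary ops this work feeds into, as exposed in current PyTorch (the commit itself targets the vectorized CPU kernels underneath):

```python
import torch

z = torch.tensor([3 + 4j, 1 - 1j])
print(torch.abs(z))   # tensor([5.0000, 1.4142])
print(torch.sqrt(z))
print(z * z.conj())   # squared magnitudes as complex numbers: 25+0j, 2+0j
```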
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26500

Differential Revision: D17835431

Pulled By: ezyang

fbshipit-source-id: 6746209168fbeb567af340c22bf34af28286bd54
2019-10-09 12:49:21 -07:00
Hong Xu
987e37b9c2 Enable EXE001 flake8 check. (#27560)
Summary:
According to https://github.com/pytorch/pytorch/issues/27285 , seems we do not intend to use shebang as an indication of Python version, thus
we enable EXE001 flake8 check.
For violations, we either remove shebang from non-executable Python scripts or grant them executable permission.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27560

Differential Revision: D17831782

Pulled By: ezyang

fbshipit-source-id: 6282fd3617b25676a6d959af0d318faf05c09b26
2019-10-09 09:15:29 -07:00
zou3519
59b14a7620 Documentation for named tensors (#27173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27173

`docs/source/named_tensor.rst` is the entry point; most users will land
either here or the named tensor tutorial when looking to use named
tensors. We should strive to make this as readable, concise, and understandable
as possible.

`docs/source/name_inference.rst` lists all of the name inference rules.
It should be clear but it's hard to make it concise.

Please let me know if anything doesn't make sense and please propose
alternative wordings and/or restructuring to improve the documentation.
This should ultimately get cherry-picked into the 1.3 branch as one
monolithic commit so it would be good to get all necessary changes made
in this PR and not have any follow ups.
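A small sketch of what the entry-point document covers (experimental API; a `UserWarning` about named tensors may be emitted):

```python
import torch

imgs = torch.randn(2, 3, 4, 4, names=('N', 'C', 'H', 'W'))
print(imgs.names)    # ('N', 'C', 'H', 'W')

# Reduce over a dimension by name instead of by position.
pooled = imgs.mean('W')
print(pooled.names)  # ('N', 'C', 'H')

# Rearrange dimensions by name.
nhwc = imgs.align_to('N', 'H', 'W', 'C')
print(nhwc.shape)    # torch.Size([2, 4, 4, 3])
```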

Test Plan: - built and reviewed locally with `cd docs/ && make html`.

Differential Revision: D17763046

Pulled By: zou3519

fbshipit-source-id: c7872184fc4b189d405b18dad77cad6899ae1522
2019-10-08 22:22:30 -07:00
Jerry Ma
1610ea8ef8 Comprehensive-ish instrumentation for CUDA memory allocator (#27361)
Summary:
Adds comprehensive memory instrumentation to the CUDA caching memory allocator.

# Counters

Added comprehensive instrumentation for the following stats:
  - Allocation requests (`allocation`)
  - Allocated memory (`allocated_bytes`)
  - Reserved segments from cudaMalloc (`segment`)
  - Reserved memory (`reserved_bytes`)
  - Active memory blocks (`active`)
  - Active memory (`active_bytes`)
  - Inactive, non-releasable blocks (`inactive_split`)
  - Inactive, non-releasable memory (`inactive_split_bytes`)
  - Number of failed cudaMalloc calls that result in a cache flush and retry (`cuda_malloc_retries`)
  - Number of OOMs (`num_ooms`)

Except for the last two, these stats are segmented between all memory, large blocks, and small blocks. Along with the current value of each stat, historical counts of allocs/frees as well as peak usage are tracked by the allocator.

# Snapshots

Added the capability to get a "memory snapshot" – that is, to generate a complete dump of the allocator block/segment state.

# Implementation: major changes

- Added `torch.cuda.memory_stats()` (and associated C++ changes) which returns all instrumented stats as a dictionary.
- Added `torch.cuda.snapshot()` (and associated C++ changes) which returns a complete dump of the allocator block/segment state as a list of segments.
- Added memory summary generator in `torch.cuda.memory_summary()` for ease of client access to the instrumentation stats. Potentially useful to dump when catching OOMs. Sample output here: https://pastebin.com/uKZjtupq

# Implementation: minor changes

- Add error-checking helper functions for Python dicts and lists in `torch/csrc/utils/`.
- Existing memory management functions in `torch.cuda` moved from `__init__.py` to `memory.py` and star-imported to the main CUDA module.
- Add various helper functions to `torch.cuda` to return individual items from `torch.cuda.memory_stats()`.
- `torch.cuda.reset_max_memory_cached()` and `torch.cuda.reset_max_memory_allocated()` are deprecated in favor of `reset_peak_stats`. It's a bit difficult to think of a case where only one of those stats should be reset, and IMO this makes the peak stats collectively more consistent.
- `torch.cuda.memory_cached()` and `torch.cuda.max_memory_cached()` are deprecated in favor of `*memory_reserved()`.
- Style (add access modifiers in the allocator class, random nit fixes, etc.)

# Testing

- Added consistency check for stats in `test_cuda.py`. This verifies that the data from `memory_stats()` is faithful to the data from `snapshot()`.
- Ran on various basic workflows (toy example, CIFAR)

# Performance

Running the following speed benchmark: https://pastebin.com/UNndQg50

- Before this PR: 45.98 microseconds per tensor creation
- After this PR: 46.65 microseconds per tensor creation
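A sketch of the client-side API added here (guarded, since the instrumentation only applies when CUDA is available):

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    stats = torch.cuda.memory_stats()
    # Keys follow "<stat>.<pool>.<metric>", e.g. "allocated_bytes.all.current".
    print(stats["allocation.all.current"])
    print(stats["allocated_bytes.all.peak"])
    print(torch.cuda.memory_summary())  # human-readable table of the same stats
else:
    print("CUDA not available; the allocator instrumentation is CUDA-only")
```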
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27361

Differential Revision: D17758747

Pulled By: jma127

fbshipit-source-id: 5a84e82d696c40c505646b9a1b4e0c3bba38aeb6
2019-10-08 15:42:48 -07:00
davidriazati
a6bb8b52d4 Reduce error context from 10 -> 3 (#26765)
Summary:
10 lines of error context (on both sides) is overkill, especially now
that we have line numbers. With a compilation stack of a couple
functions, it becomes a pain to scroll to the top of the stack to see
the real error every time.

This also fixes class names in the compilation stack to use a format of
`ClassName.method_name` instead of the fully qualified name
Old output
```
clip_boxes_to_image(Tensor boxes, (int, int) size) -> (Tensor):
Expected a value of type 'Tuple[int, int]' for argument 'size' but instead found type 'Tuple[int, int, int]'.
:
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:365:20
        top_n_idx = self._get_top_n_idx(objectness, num_anchors_per_level)
        batch_idx = torch.arange(num_images, device=device)[:, None]
        objectness = objectness[batch_idx, top_n_idx]
        levels = levels[batch_idx, top_n_idx]
        proposals = proposals[batch_idx, top_n_idx]

        final_boxes = []
        final_scores = []
        for boxes, scores, lvl, img_shape in zip(proposals, objectness, levels, image_shapes):
            boxes = box_ops.clip_boxes_to_image(boxes, img_shape)
                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            keep = box_ops.remove_small_boxes(boxes, self.min_size)
            boxes, scores, lvl = boxes[keep], scores[keep], lvl[keep]
            # non-maximum suppression, independently done per level
            keep = box_ops.batched_nms(boxes, scores, lvl, self.nms_thresh)
            # keep only topk scoring predictions
            keep = keep[:self.post_nms_top_n]
            boxes, scores = boxes[keep], scores[keep]
            final_boxes.append(boxes)
            final_scores.append(scores)
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:446:8
        num_images = len(anchors)
        num_anchors_per_level = [o[0].numel() for o in objectness]
        objectness, pred_bbox_deltas = \
            concat_box_prediction_layers(objectness, pred_bbox_deltas)
        # apply pred_bbox_deltas to anchors to obtain the decoded proposals
        # note that we detach the deltas because Faster R-CNN do not backprop through
        # the proposals
        proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
        proposals = proposals.view(num_images, -1, 4)
        boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

        losses = {}
        if self.training:
            assert targets is not None
            labels, matched_gt_boxes = self.assign_targets_to_anchors(anchors, targets)
            regression_targets = self.box_coder.encode(matched_gt_boxes, anchors)
            loss_objectness, loss_rpn_box_reg = self.compute_loss(
                objectness, pred_bbox_deltas, labels, regression_targets)
            losses = {
'RegionProposalNetwork.forward' is being compiled since it was called from 'MaskRCNN.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/generalized_rcnn.py:53:8
        """
        if self.training and targets is None:
            raise ValueError("In training mode, targets should be passed")
        original_image_sizes = [(img.shape[-2], img.shape[-3])  for img in images]

        images, targets = self.transform(images, targets)
        features = self.backbone(images.tensors)
        if isinstance(features, torch.Tensor):
            features = OrderedDict([(0, features)])
        proposals, proposal_losses = self.rpn(images, features, targets)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
        detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)

        losses = {}
        losses.update(detector_losses)
        losses.update(proposal_losses)

        # TODO: multiple return types??
        # if self.training:
```

New output

```
RuntimeError:

clip_boxes_to_image(Tensor boxes, (int, int) size) -> (Tensor):
Expected a value of type 'Tuple[int, int]' for argument 'size' but instead found type 'Tuple[int, int, int]'.
:
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:365:20
        final_scores = []
        for boxes, scores, lvl, img_shape in zip(proposals, objectness, levels, image_shapes):
            boxes = box_ops.clip_boxes_to_image(boxes, img_shape)
                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            keep = box_ops.remove_small_boxes(boxes, self.min_size)
            boxes, scores, lvl = boxes[keep], scores[keep], lvl[keep]
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:446:8
        proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
        proposals = proposals.view(num_images, -1, 4)
        boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

        losses = {}
'RegionProposalNetwork.forward' is being compiled since it was called from 'MaskRCNN.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/generalized_rcnn.py:53:8
        if isinstance(features, torch.Tensor):
            features = OrderedDict([(0, features)])
        proposals, proposal_losses = self.rpn(images, features, targets)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
        detections = self.transform.postprocess
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26765

Pulled By: driazati

Differential Revision: D17560963

fbshipit-source-id: e463548744b505ca17f0158079b80e08fda47d49
2019-10-04 11:24:52 -07:00
Orion Reblitz-Richardson
cc964765a5 Add method add_hparams to API doc (#27344)
Summary:
Adds the method `add_hparams` to `torch.utils.tensorboard` API docs. Will want to have this in PyTorch 1.3 release.

cc sanekmelnikov lanpa natalialunova
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27344

Differential Revision: D17753689

Pulled By: orionr

fbshipit-source-id: cc8636e0bdcf3f434444cd29471c62105491039d
2019-10-03 17:07:45 -07:00
Brian Vaughan
0c6a18de8d Add torch.promote_types function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26655
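A quick sketch of the new function:

```python
import torch

# Returns the smallest dtype that both inputs can be safely cast to.
print(torch.promote_types(torch.int32, torch.int64))      # torch.int64
print(torch.promote_types(torch.float32, torch.float64))  # torch.float64
print(torch.promote_types(torch.uint8, torch.int8))       # torch.int16
```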

Test Plan: Imported from OSS

Differential Revision: D17556196

Pulled By: nairbv

fbshipit-source-id: eeebce8968bfb2ffd25c066595bc19e5dee6ea6f
2019-09-27 16:48:38 -07:00
Brian Vaughan
2a43b74196 Add torch.can_cast(from, to) function (#26805)
Summary:
https://github.com/pytorch/pytorch/issues/25472
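A quick sketch of the new function:

```python
import torch

# True if `from` can be cast to `to` under the type-promotion rules:
# a cast may not drop the "category" (bool < integer < floating point).
print(torch.can_cast(torch.int32, torch.float32))    # True  (int -> float: ok)
print(torch.can_cast(torch.float32, torch.int32))    # False (float -> int: lossy)
print(torch.can_cast(torch.float64, torch.float32))  # True  (same category)
```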
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26805

Differential Revision: D17628434

Pulled By: nairbv

fbshipit-source-id: 6af8031ac3afda1505d338075c0637ad043f8b7e
2019-09-27 08:40:34 -07:00
Ailing Zhang
0f1fbc0eb2 Hub improvements (#26723)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/25980.
Our old serialization format was tar (e.g. `resnet18-5c106cde.pth` was in this format), so let's only support automatic unzipping if checkpoints are zipfiles.
We can still get it to work with tarfile, but let's delay that until there's an ask.
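
As a rough illustration of the zip-only behavior, here is a minimal sketch in plain Python (a hypothetical helper, not the actual `torch.hub` code; the real implementation differs in detail):

```python
import os
import tempfile
import zipfile

def maybe_extract_state_dict(path, dest):
    """Auto-extract only when the checkpoint is a zipfile; legacy tar
    checkpoints (like resnet18-5c106cde.pth) are returned untouched."""
    if not zipfile.is_zipfile(path):
        return path  # not a zip: let the caller load the file as-is
    with zipfile.ZipFile(path) as zf:
        zf.extractall(dest)
        # assume the archive contains a single serialized file
        return os.path.join(dest, zf.namelist()[0])

# demo: build a tiny zip "checkpoint" and run it through the helper
tmp = tempfile.mkdtemp()
ckpt = os.path.join(tmp, "model.zip")
with zipfile.ZipFile(ckpt, "w") as zf:
    zf.writestr("weights.bin", b"\x00\x01")
extracted = maybe_extract_state_dict(ckpt, tmp)
```

A non-zip file passes straight through, which is why tar support can be deferred without breaking loading of legacy files.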
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26723

Differential Revision: D17551795

Pulled By: ailzhang

fbshipit-source-id: 00b4e7621f1e753ca9aa07b1fe356278c6693a1e
2019-09-25 08:21:50 -07:00
Brian Vaughan
002c250139 Expose a torch.result_type and simplify tensor iterator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26012

Test Plan: Imported from OSS

Differential Revision: D17556197

Pulled By: nairbv

fbshipit-source-id: c0be3ac9e99fecc26a181e301defc1942bc6708c
2019-09-25 06:52:23 -07:00
Karl Ostmo
839e636fa1 Revert D17495679: [pytorch][PR] A few hub improvements
Test Plan: revert-hammer

Differential Revision:
D17495679

Original commit changeset: 695df3e803ad

fbshipit-source-id: 6c85bc980991971b08714f05155dd23147eed233
2019-09-23 23:38:19 -07:00
Ailing Zhang
1eaaf8b68b A few hub improvements (#25980)
Summary:
This PR does a few small improvements to hub:
- add support for a `verbose` option in `torch.hub.load`. Note that this mutes the hitting-cache message but keeps the first-download message, as suggested. fixes https://github.com/pytorch/pytorch/issues/24791
- add support loading state dict from tar file or zip file in `torch.hub.load_state_dict_from_url`.
- add `torch.hub.download_url_to_file` as public API, and add BC bit for `_download_url_to_file`.
- make the hash check in the filename optional through `check_hash`; many users don't have control over the naming, and relaxing this constraint could avoid duplicating download code on the user's end.
- move pytorch CI off `pytorch/vision` and use `ailzhang/torchhub_example` as a dedicated test repo. fixes https://github.com/pytorch/pytorch/issues/25865
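
A minimal sketch of what an optional filename hash check might look like (hypothetical names; the actual `torch.hub` helper differs in detail):

```python
import hashlib
import re

# Verify that a file's contents match the SHA-256 prefix embedded in its
# name, e.g. "resnet18-5c106cde.pth"; skip the check when the caller opts
# out or the filename carries no hash.
HASH_REGEX = re.compile(r'-([a-f0-9]+)\.')

def check_file_hash(path, data, check_hash=True):
    """Return True if the hash check passes (or is skipped)."""
    if not check_hash:
        return True  # caller has no control over naming
    match = HASH_REGEX.search(path)
    if match is None:
        return True  # no hash embedded in the filename
    prefix = match.group(1)
    digest = hashlib.sha256(data).hexdigest()
    return digest.startswith(prefix)
```

With `check_hash=False`, arbitrarily named checkpoints can be reused without users re-implementing the download path.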
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25980

Differential Revision: D17495679

Pulled By: ailzhang

fbshipit-source-id: 695df3e803ad5f9ca33cfbcf62f1a4f8cde0dbbe
2019-09-23 17:24:19 -07:00
vishwakftw
15b506068b Remove deprecated torch.gels (#26480)
Summary:
Changelog:
- Remove `torch.gels` which was deprecated in v1.2.0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26480

Test Plan: - No tests were changed, and all callsites for `torch.gels` were modified to `torch.lstsq` when `torch.lstsq` was introduced

Differential Revision: D17527207

Pulled By: zou3519

fbshipit-source-id: 28e2fa3a3bf30eb6b9029bb5aab198c4d570a950
2019-09-23 07:15:39 -07:00
Dmytro Dzhulgakov
8c1354c31b Implement more support for per-channel quantization (#26240)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26240

In particular adds support for empty/empty_like which is needed for memory layouts to work.

Test Plan: Imported from OSS

Differential Revision: D17443220

Pulled By: dzhulgakov

fbshipit-source-id: 9c9e25981999c0edaf40be104a5741e9c62a1333
2019-09-19 13:39:17 -07:00
Elias Ellison
7ab4ad7b6d add torch.jit.is_scripting() api (#25263)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25263

This adds an API that returns true in script mode and false in eager mode, which, together with `ignore`, allows guarding of not-yet-supported JIT features. Bikeshedding requested please.

cc zou3519

```
def foo():
    if not torch.jit.is_scripting():
        return torch.linear(...)
    else:
        return addmm(...)
```

Test Plan: Imported from OSS

Differential Revision: D17272443

Pulled By: eellison

fbshipit-source-id: de0f769c7eaae91de0007b98969183df93a91f42
2019-09-09 20:24:36 -07:00
M. Doosti Lakhani
1777eb2ed9 fix typo: toDense --> to_dense #25706 (#25832)
Summary:
Only fixes a minor typo in [torch.sparse.FloatTensor docs](https://pytorch.org/docs/stable/sparse.html#torch.sparse.FloatTensor.toDense).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25832

Differential Revision: D17276700

Pulled By: soumith

fbshipit-source-id: cf3d550d5756b000a4e864170ecd4b31826b40f8
2019-09-09 18:27:03 -07:00
TortillasAlfred
38e4766349 Add CosineAnnealingWarmRestarts to optim documentation (#25421)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/20028.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25421

Differential Revision: D17221542

Pulled By: soumith

fbshipit-source-id: 9c83c9ad6bf34ba59713c61485e4ef4b782a2792
2019-09-05 19:06:18 -07:00
Brian Vaughan
88e4cee3e7 Improve handling of mixed-type tensor operations (#22273)
Summary:
Improve handling of mixed-type tensor operations.

This PR affects the arithmetic (add, sub, mul, and div) operators implemented via TensorIterator (so dense but not sparse tensor ops).

For these operators, we will now promote to reasonable types where possible, following the rules defined in https://github.com/pytorch/pytorch/issues/9515, and error in cases where the cast would require floating point -> integral or non-boolean to boolean downcasts.

The details of the promotion rules are described here:
https://github.com/nairbv/pytorch/blob/promote_types_strict/docs/source/tensor_attributes.rst

Some specific backwards incompatible examples:
* now `int_tensor * float` will result in a float tensor, whereas previously the floating point operand was first cast to an int. Previously `torch.tensor(10) * 1.9` => `tensor(10)` because the 1.9 was downcast to `1`. Now the result will be the more intuitive `tensor(19)`
* Now `int_tensor *= float` will error, since the floating point result of this operation can't be cast into the in-place integral type result.

See more examples/detail in the original issue (https://github.com/pytorch/pytorch/issues/9515), in the above linked tensor_attributes.rst doc, or in the test_type_promotion.py tests added in this PR:
https://github.com/nairbv/pytorch/blob/promote_types_strict/test/test_type_promotion.py
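
The promotion rules above can be sketched with plain-Python stand-ins for dtype categories (bool < integral < floating); this is illustrative only, not the real dispatch logic:

```python
# Hypothetical stand-ins for dtype categories, ordered by "width".
CATEGORY = {'bool': 0, 'int': 1, 'float': 2}

def promote(cat_a, cat_b):
    """Result category of a mixed-type binary op: the higher category wins."""
    return cat_a if CATEGORY[cat_a] >= CATEGORY[cat_b] else cat_b

def check_inplace(dest_cat, operand_cat):
    """In-place ops may not downcast: the result must fit the destination."""
    result = promote(dest_cat, operand_cat)
    if CATEGORY[result] > CATEGORY[dest_cat]:
        raise TypeError(
            "result type %s can't be cast to %s" % (result, dest_cat))
    return dest_cat
```

For example, `promote('int', 'float')` is `'float'` (so `int_tensor * float` yields a float tensor), while `check_inplace('int', 'float')` raises, mirroring the `int_tensor *= float` error.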
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22273

Reviewed By: gchanan

Differential Revision: D16582230

Pulled By: nairbv

fbshipit-source-id: 4029cca891908cdbf4253e4513c617bba7306cb3
2019-09-05 18:26:09 -07:00
davidriazati
0be29ee2ba Finish testing code examples in the docs (#25668)
Summary:
All of the code examples should now run as unit tests, save for those
that require interaction (e.g. those that show `pdb` usage) and those that use
CUDA.

`save` had to be moved before `load` in `jit/__init__.py` so `load`
could use the file generated by `save`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25668

Pulled By: driazati

Differential Revision: D17192417

fbshipit-source-id: 931b310ae0c3d2cc6affeabccae5296f53fe42bc
2019-09-05 16:13:37 -07:00
Michael Suo
11eb8ac2a9 Revert D17199043: [JIT] preserve ignored function return value type
Test Plan: revert-hammer

Differential Revision:
D17199043

Original commit changeset: 1196fd94c207

fbshipit-source-id: 49789ae1f128262bc40a9d5b0d2b7bfbbf0b7e1e
2019-09-05 15:51:06 -07:00
Elias Ellison
df043cd49d preserve ignored function return value type (#25262)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25262

Preserve the type of ignored functions on serialization. Currently we compile an ignored function with its annotated type when first compiling, but do not preserve that type. This is important for being able to compile models with not-yet-supported features in JIT.

```
@torch.jit.ignore
def unsupported(x):
    return x

def foo():
    if not torch.jit._is_scripting():
        return torch.linear(...)
    else:
        return unsupported(...)
```

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D17199043

Pulled By: eellison

fbshipit-source-id: 1196fd94c207b9fbee1087e4b2ef7d4656a6647f
2019-09-05 11:21:55 -07:00
Jessica Lin
0cc8ac75c9 Alphabetize Package Reference section in Docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25666

Differential Revision: D17190766

Pulled By: soumith

fbshipit-source-id: 836305062b0195b2f11be069447e05008c128d21
2019-09-04 14:31:16 -07:00
Igor Fedan
896cd1c510 Documentation for cdist (#25221)
Summary:
https://github.com/pytorch/pytorch/issues/21730
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25221

Differential Revision: D17073908

Pulled By: ifedan

fbshipit-source-id: 19e2534183d6a2a7e9cdfcee4734cff1b124e05a
2019-09-03 14:16:07 -07:00
Brian Johnson
832c72a2d6 Update index.rst (#24245)
Summary:
Adds links to torchaudio and torchtext to docs index. We should eventually evolve this to bring the audio and text docs builds in like torchvision.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24245

Differential Revision: D17163539

Pulled By: soumith

fbshipit-source-id: 5754bdf7579208e291e53970b40f73ef119b758f
2019-09-03 09:28:19 -07:00
Horace He
71c97d3747 Fixed flatten docs (I think) (#25544)
Summary:
I think...

I'm having issues building the site, but it appears to get rid of the error.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25544

Differential Revision: D17157327

Pulled By: ezyang

fbshipit-source-id: 170235c52008ca78ff0d8740b2d7f5b67397b614
2019-09-02 11:34:56 -07:00
BowenBao
bbf84c1a9f Fix dead link and syntax in ONNX landing page
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25126

Differential Revision: D17129237

Pulled By: dzhulgakov

fbshipit-source-id: 80fab457387d357ddcfc23710cb4493ce94cab5e
2019-08-29 23:58:34 -07:00
davidriazati
fe922a2e84 Fix item() call in docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25404

Pulled By: driazati

Differential Revision: D17116098

fbshipit-source-id: e365f254f38a3134898817d75201dd9ae009ecb4
2019-08-29 13:50:04 -07:00
Vincent Quenneville-Belair
05f1fed693 Add OneCycleLR (#25324)
Summary:
Squash rebase of https://github.com/pytorch/pytorch/issues/21258

ghstack-source-id: 7d3ce522ac4dd3050bc6c6bbda1eaaeb8bc4b2c1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25324
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25325

Differential Revision: D17095722

Pulled By: vincentqb

fbshipit-source-id: 7fe69b210924ee3b39223dd78122aea61267234a
2019-08-28 16:59:40 -07:00
Tzu-Wei Huang
cd14518ee8 hyperparameter plugin (#23134)
Summary:
closes https://github.com/pytorch/pytorch/issues/16838

example usage:
```python
writer.add_hparams(hparam_dict={'lr': 0.1, 'bsize': 12}, metric_dict={'accuracy': 0.987, 'loss': 10})

```
cc orionr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23134

Reviewed By: orionr

Differential Revision: D16807300

Pulled By: sanekmelnikov

fbshipit-source-id: 4072c529076f423b34b00b68be2d6eec444423fe
2019-08-26 10:40:34 -07:00
Daya Khudia
12ea1d74f0 Add missing functions and methods for channelwise quantization (#24934)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24934

1) Functions and methods to get scales and zero_points for channelwise quantization were missing. Adding these.
2) Correctly print quantized tensors for channelwise quantization.
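
The per-channel scheme can be sketched in plain Python (illustrative stand-ins only; real quantized tensors store the per-channel scales and zero points natively):

```python
def quantize_per_channel(rows, scales, zero_points, qmin=-128, qmax=127):
    """Quantize each channel (row) with its own scale and zero point."""
    out = []
    for row, scale, zp in zip(rows, scales, zero_points):
        out.append([max(qmin, min(qmax, round(r / scale) + zp)) for r in row])
    return out

def dequantize_per_channel(qrows, scales, zero_points):
    """Recover approximate real values: (q - zero_point) * scale."""
    return [[(q - zp) * scale for q in row]
            for row, scale, zp in zip(qrows, scales, zero_points)]
```

This mirrors the printed example below, where each output channel carries its own `scale` and `zero_point` entry.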
ghstack-source-id: 88868339

Test Plan:
buck test mode/dev caffe2/test:quantized -- 'test_qtensor\ \(test_quantized_tensor.TestQuantizedTensor\)'  --print-passing-details

```
Running 1 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/1970324844629541
      ✓ caffe2/test:quantized - test_qtensor (test_quantized_tensor.TestQuantizedTensor) 0.161 1/1 (passed)
Test output:
> test_qtensor (test_quantized_tensor.TestQuantizedTensor) ... ok
>
> ----------------------------------------------------------------------
> Ran 1 test in 0.161s
>
> OK
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/1970324844629541
Summary (total time 6.61s):
  PASS: 1
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```
To be added in a followup diff.
Current output for printing qtensors:
print(W_q.int_repr())
print(W_q)

```
> tensor([[[[-3,  0,  0],
>           [ 4, -2, -4],
>           [-1, -3, -2]],
>
>          [[-3,  1,  3],
>           [-3, -3,  3],
>           [-3, -5, -1]]],
>
>
>         [[[ 4, -3, -4],
>           [ 4, -3, -3],
>           [ 4, -1, -1]],
>
>          [[ 2, -3,  0],
>           [ 3,  1,  1],
>           [ 2, -4,  0]]]], dtype=torch.int8)
> tensor([[[[-0.9273, -0.2318, -0.2318],
>           [ 0.6955, -0.6955, -1.1592],
>           [-0.4637, -0.9273, -0.6955]],
>
>          [[-0.9273,  0.0000,  0.4637],
>           [-0.9273, -0.9273,  0.4637],
>           [-0.9273, -1.3910, -0.4637]]],
>
>
>         [[[ 0.3938, -0.1575, -0.2363],
>           [ 0.3938, -0.1575, -0.1575],
>           [ 0.3938,  0.0000,  0.0000]],
>
>          [[ 0.2363, -0.1575,  0.0788],
>           [ 0.3150,  0.1575,  0.1575],
>           [ 0.2363, -0.2363,  0.0788]]]], size=(2, 2, 3, 3), dtype=torch.qint8,
>        quantization_scheme=torch.per_channel_affine,
>        scale=tensor([0.2318, 0.0788]), zero_point=tensor([ 1, -1]))
```

Differential Revision: D16659715

fbshipit-source-id: f8d3eeaff8f618aa0cca4fd076db73318e6df946
2019-08-23 15:44:16 -07:00
davidriazati
1c4495d8ac Clean up after running doc tests (#25036)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25036

Pulled By: driazati

Differential Revision: D16965612

fbshipit-source-id: 494a734c27c1330ea0917397efbad6bc4f40be73
2019-08-23 12:52:48 -07:00
bnehoran
0ae030f87e Typo correction in cuda_deterministic_backward.rst (#25011)
Summary:
I presume this is what was intended.
cc t-vi
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25011

Differential Revision: D16980939

Pulled By: soumith

fbshipit-source-id: c55b22e119f3894bd124eb1dce4f92a719ac047a
2019-08-22 21:19:39 -07:00
Elias Ellison
e8ea44796e add support for multiple assignment statements (#24477)
Summary:
add support for: `a = b, c = (1, 2)`

partial fix for https://github.com/pytorch/pytorch/issues/24256
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24477

Differential Revision: D16963413

Pulled By: eellison

fbshipit-source-id: 0433a1e759b3aa719ef1b766bb5160f2ca814205
2019-08-22 10:17:14 -07:00
davidriazati
6dca147946 Misc doc updates #2 (#24445)
Summary:
Another pass over the docs, this covers most of the remaining stuff

* content updates for new API
* adds links to functions instead of just names
* removes some useless indentations
* some more code examples + `testcode`s
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24445

Pulled By: driazati

Differential Revision: D16847964

fbshipit-source-id: cd0b403fe4a89802ce79289f7cf54ee0cea45073
2019-08-21 16:45:19 -07:00
davidriazati
1d53d07566 Add docs to CI (#24435)
Summary:
Stacked PRs
 * #24445 - [jit] Misc doc updates #2
 * **#24435 - [jit] Add docs to CI**

This integrates the [doctest](http://www.sphinx-doc.org/en/master/usage/extensions/doctest.html) module into `jit.rst` so that we can run our code examples as unit tests. They're added to `test_jit.py` under the `TestDocs` class (which takes about 30s to run). This should help prevent things like #24429 from happening in the future. They can be run manually by doing `cd docs && make doctest`.

* The test setup requires a hack since `doctest` defines everything in the `builtins` module which upsets `inspect`
* There are several places where the code wasn't testable (i.e. it threw an exception on purpose). This may be resolvable, but I'd prefer to leave that for a follow up. For now there are `TODO` comments littered around.
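
The idea of running embedded examples as tests can be sketched with the standard `doctest` module (a toy function, not the actual test harness):

```python
import doctest

def scale(x, factor=2):
    """Multiply ``x`` by ``factor``.

    >>> scale(3)
    6
    >>> scale(3, factor=10)
    30
    """
    return x * factor

# Parse and run the examples embedded in the docstring -- the same idea
# `make doctest` applies to the code examples in the rst docs.
parser = doctest.DocTestParser()
test = parser.get_doctest(scale.__doc__, {"scale": scale}, "scale", None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
results = runner.summarize(verbose=False)
```

A failing example shows up in `results.failed`, which is how broken snippets get caught in CI instead of in the published docs.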
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24435

Pulled By: driazati

Differential Revision: D16840882

fbshipit-source-id: c4b26e7c374cd224a5a4a2d523163d7b997280ed
2019-08-20 21:40:44 -07:00
hacker_itchy
c0a796d95d Update docs for softmax in onnx supported operators (#24832)
Summary:
Update the softmax entry in the ONNX supported operators from `softmax (only dim=-1 supported)` to `softmax`, as all cases of the `dim` option are supported in:
[https://github.com/pytorch/pytorch/issues/18482](https://github.com/pytorch/pytorch/pull/18482): ONNX Export All Cases of Softmax
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24832

Differential Revision: D16896538

Pulled By: bddppq

fbshipit-source-id: 284039ffa42f09b0043e95cfe9f17e1afde53814
2019-08-19 10:13:41 -07:00
Heungsub Hans Lee
e166811598 Documentation for Tensor.record_stream() (#24078)
Summary:
This patch writes documentation for `Tensor.record_stream()`, which is not a documented API currently. I've discussed publishing it with colesbury in https://github.com/pytorch/pytorch/issues/23729.

The documentation is based on [the introduction at `CUDACachingAllocator.cpp`](25d1496d58/c10/cuda/CUDACachingAllocator.cpp (L47-L50)). ~~I didn't explain full details of the life cycle of memory blocks or stream awareness of the allocator for the consistent level of details with other documentations.~~ I explained about the stream awareness in a note block.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24078

Differential Revision: D16743526

Pulled By: zou3519

fbshipit-source-id: 05819c3cc96733e2ba93c0a7c0ca06933acb22f3
2019-08-16 08:07:33 -07:00
davidriazati
b59fa077b3 Misc doc updates / fixes (#24371)
Summary:
This is a bunch of changes to the docs for stylistic changes,
correctness, and updates to the new script API / recent TorchScript
changes (i.e. namedtuple)

For reviewers, ping me to see a link of the rendered output.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24371

Pulled By: driazati

Differential Revision: D16832417

fbshipit-source-id: a28e748cf1b590964ca0ae2dfb5d8259c766a203
2019-08-15 11:31:24 -07:00
Hong Xu
338f9c860f Add logical_xor operator (#23847)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23847

Related to #23836

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23847

Test Plan: Imported from OSS

Differential Revision: D16678300

Pulled By: gchanan

fbshipit-source-id: 67020aca5830b6bec2f561105954e0a8c2ee37e0
2019-08-15 08:40:25 -07:00
Hong Xu
1f4c73618c Add logical_not operator. (#23839)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23839

Close #23836

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23839

Test Plan: Imported from OSS

Differential Revision: D16678301

Pulled By: gchanan

fbshipit-source-id: 54e7b3f3b04c577e239b88493247e1c036266774
2019-08-15 08:40:21 -07:00
Hong Xu
d9d5d9a913 Sanity fixes for bitwise_not (#24296)
Summary:
(intentionally left blank)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24296

Differential Revision: D16809598

Pulled By: ezyang

fbshipit-source-id: 00718faf1ece06b6af0160763ac22d9cb10c2575
2019-08-14 21:07:26 -07:00
davidriazati
9fe4052b6c Add trace_module to docs (#24258)
Summary:
Stacked PRs
 * **#24258 - [jit] Add `trace_module` to docs**
 * #24208 - [jit] Cleanup documentation around `script` and `trace`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/24258

Pulled By: driazati

Differential Revision: D16811342

fbshipit-source-id: 893be85a711ab180319b790ed1c72b93022373c1
2019-08-14 14:04:14 -07:00
davidriazati
716abd8705 Cleanup documentation around script and trace (#24208)
Summary:
Stacked PRs
 * #24258 - [jit] Add `trace_module` to docs
 * **#24208 - [jit] Cleanup documentation around `script` and `trace`**

Examples / info was duplicated between `ScriptModule`, `script`, and
`trace`, so this PR consolidates it and moves some things around to make
the docs more clear.

For reviewers, if you want to see the rendered output, ping me for a
link
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24208

Pulled By: driazati

Differential Revision: D16746236

fbshipit-source-id: fac3c6e762a31c897b132b8421baa8d4d61f694c
2019-08-14 14:04:10 -07:00
Tongzhou Wang
98d3d1659e Document benchmarking practice for CUDA
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23910

Differential Revision: D16732365

Pulled By: ezyang

fbshipit-source-id: 24e055602d479293da3e00a7143bba8f92bb7c4a
2019-08-13 15:07:23 -07:00
Stephen Roller
1daac9c0a2 Update tensorboard.rst (#22026)
Summary:
**Patch Description**:
Update the docs to reflect one no longer needs to install tensorboard nightly, as Tensorboard 1.14.0 was [released last week](https://github.com/tensorflow/tensorboard/releases/tag/1.14.0).

**Testing**:
Haven't actually tested pytorch with tensorboard 1.14 yet. I'll update this PR once I have.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22026

Differential Revision: D16772136

Pulled By: orionr

fbshipit-source-id: 2e1e17300f304f50026837abbbc6ffb25704aac0
2019-08-12 15:02:26 -07:00
Ilia Cherniavskii
936632b120 Thread local debug info
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22365

Test Plan:
USE_CUDA=0 python setup.py develop
./build/bin/test_jit

Imported from OSS

Reviewed By: ajyu

Differential Revision: D16065129

Pulled By: ilia-cher

fbshipit-source-id: f300985459a83c2c1049ed8c4fefd23a3144047a
2019-08-12 14:53:57 -07:00
davidriazati
be5eb6782b Fix builtin function reference (#24056)
Summary:
This was previously buggy and not being displayed on master. This fixes
the issues with the script to generate the builtin function schemas and
moves it to its own page (it's 6000+ lines of schemas)

Sphinx looks like it will just keep going if it hits errors when importing modules; we should find out how to turn that off and put it in the CI

This also includes some other small fixes:
* removing internal only args from `script()` and `trace()` docs, this also requires manually keeping these argument lists up to date but I think the cleanliness is worth it
* removes outdated note about early returns
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24056

Pulled By: driazati

Differential Revision: D16742406

fbshipit-source-id: 9102ba14215995ffef5aaafcb66a6441113fad59
2019-08-09 15:58:15 -07:00
davidriazati
bdf15311a3 Migration doc fixes (#24033)
Summary:
This time I built the docs to make sure everything looks right
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24033

Pulled By: driazati

Differential Revision: D16719435

fbshipit-source-id: 290c6431e7577ef9fbd595d9ac206df867366937
2019-08-08 16:32:45 -07:00
Brian Johnson
b8b86de89b Adds torch.random to docs/toc (#23553)
Summary:
Fix for   https://github.com/pytorch/pytorch.github.io/issues/162
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23553

Differential Revision: D16700003

Pulled By: soumith

fbshipit-source-id: 0d988985fee9aeadd01f9caba24987f960ce2470
2019-08-07 16:31:32 -07:00
davidriazati
b0a27278bd Recursive script migration guide
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23892

Pulled By: driazati

Differential Revision: D16677532

fbshipit-source-id: 40f506b1c770e60309c0628d4745047996a05295
2019-08-06 21:43:28 -07:00
Iurii Zdebskyi
19c675178f Updated docs and added deprecation warnings to acknowledge a bool tensor (#22261)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22261
ghimport-source-id: 1611d62d056a04c0ad15ef662e594a3d206a78e2

Test Plan: Imported from OSS

Differential Revision: D16005990

Pulled By: izdeby

fbshipit-source-id: 2413824aa75a0755719e4df11acd21e6607e5a85
2019-08-05 07:42:34 -07:00
vishwakftw
8e2b9de860 Document empty_strided (#23735)
Summary:
Changelog:
- Add doc string for torch.empty_strided
- Remove empty file named `python` in test/

Fixes https://github.com/pytorch/pytorch/issues/23688
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23735

Differential Revision: D16623438

Pulled By: ailzhang

fbshipit-source-id: acd5a47da9220243467ccc6bff92edd209cca709
2019-08-02 20:02:44 -07:00
Dmytro Dzhulgakov
acc5cedf6a Adjust maintainers list (#23693)
Summary:
Adds new people and reorders sections to make more sense
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23693

Differential Revision: D16618230

Pulled By: dzhulgakov

fbshipit-source-id: 74191b50c6603309a9e6d14960b7c666eec6abdd
2019-08-01 22:59:02 -07:00
Tongzhou Wang
336c9be7f4 Slightly improve dataloader docs on when auto-batching is disabled (#23671)
Summary:
cc gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23671

Differential Revision: D16604387

Pulled By: soumith

fbshipit-source-id: 0ebc120bcaa0f6fa09158b1d0459a72ab11a53d6
2019-08-01 12:10:17 -07:00
Edward Yang
5b66062f99 Use prerendered KaTeX in docs. (#23376)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23376

This uses master version of sphinxcontrib-katex as it only
recently got prerender support.

Fixes #20984

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D16582064

Pulled By: ezyang

fbshipit-source-id: 9ef24c5788c19572515ded2db2e8ebfb7a5ed44d
2019-07-31 10:01:28 -07:00
Prasun Anand
be3d27589f Added torch.autograd.profiler.record_function() as context manager. (#23428)
Summary:
Added torch.autograd.profiler.record_function() as a context manager to annotate a block of Python code during profiling.

Fixes https://github.com/pytorch/pytorch/issues/19422.
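
The annotation pattern can be sketched with a plain-Python stand-in (the real `record_function` feeds the profiler's event records; the names below are illustrative):

```python
import time
from contextlib import contextmanager

events = []  # stand-in for the profiler's event list

@contextmanager
def record_function(name):
    """Annotate a block of code with a label and its wall-clock duration."""
    start = time.perf_counter()
    try:
        yield
    finally:
        events.append((name, time.perf_counter() - start))

# usage: everything inside the block is attributed to "my_block"
with record_function("my_block"):
    total = sum(range(1000))
```

The `try/finally` ensures the event is recorded even if the annotated block raises.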
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23428

Differential Revision: D16560771

Pulled By: soumith

fbshipit-source-id: 3923130f7647a36a84dbbe28cc59d216d395d3f9
2019-07-30 11:10:01 -07:00
vishwakftw
b3a9a7a9b9 Rename gels to lstsq (#23460)
Summary:
Changelog:
- Rename `gels` to `lstsq`
- Fix all callsites
- Rename all tests
- Create a tentative alias for `lstsq` under the name `gels` and add a deprecation warning to not promote usage.
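
A deprecation alias of this shape can be sketched with pure-Python stand-ins (not the actual `torch` code):

```python
import warnings

def lstsq(b, a):
    """Stand-in for the renamed function."""
    return "lstsq-result"

def gels(b, a):
    """Deprecated alias kept temporarily for backward compatibility."""
    warnings.warn(
        "gels is deprecated in favour of lstsq and will be removed "
        "in a future release.",
        UserWarning)
    return lstsq(b, a)
```

Old callsites keep working but see a warning, which discourages new usage while the rename propagates.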
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23460

Test Plan: - All tests should pass to confirm that the patch is correct

Differential Revision: D16547834

Pulled By: colesbury

fbshipit-source-id: b3bdb8f4c5d14c7716c3d9528e40324cc544e496
2019-07-30 09:56:04 -07:00
davidriazati
696642ae8d Change docs to use recursive script API (#21612)
Summary:
Use the recursive script API in the existing docs

TODO:
* Migration guide for 1.1 -> 1.2
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21612

Pulled By: driazati

Differential Revision: D16553734

fbshipit-source-id: fb6be81a950224390bd5d19b9b3de2d97b3dc515
2019-07-29 17:51:22 -07:00
Ilia Cherniavskii
41dfe7204b Threading and CPU Inference note
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23417

Test Plan:
cd docs; make html

Imported from OSS

Differential Revision: D16523781

Pulled By: ilia-cher

fbshipit-source-id: d6c09e8a85d39e6185bbdc4b312fea44fcdfff06
2019-07-29 15:45:49 -07:00
Soumith Chintala
a356276d79 add note to Contribution Guide around recently released research (#23513)
Summary:
Thanks adefazio for the feedback, adding a note to the Contribution guide so that folks don't start working on code without checking with the maintainers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23513

Differential Revision: D16546685

Pulled By: soumith

fbshipit-source-id: 1ee8ade963703c88374aedecb8c9e5ed39d7722d
2019-07-29 12:24:59 -07:00
Hong Xu
64e4152064 Clarify that torch.device without an index will always represent the current device (#23468)
Summary:
Per discussion in https://github.com/pytorch/pytorch/issues/23448
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23468

Differential Revision: D16532950

Pulled By: soumith

fbshipit-source-id: 48c97060aaf55f1d7589afab42c6cd623d71a9a7
2019-07-27 06:49:52 -07:00
Yuxin Wu
23f963e4a8 Update distributed.rst (#23289)
Summary:
Different backends are supported since https://github.com/pytorch/pytorch/pull/18595
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23289

Differential Revision: D16528229

Pulled By: soumith

fbshipit-source-id: 57753e84c015817661ba30835278ee3a899aa2d0
2019-07-26 16:55:52 -07:00
BowenBao
46224ef89e Update ONNX docs (#23185)
Summary:
This is still work in progress.

There are several more items to add to complete this doc, including

- [x] LHS indexing, index assignments.
- [x] Tensor List.
- [x] ~Shape/Type propagation.~
- [x] FAQs

Please review and share your thoughts; feel free to add anything that you think should be included as well. houseroad spandantiwari lara-hdr neginraoof
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23185

Differential Revision: D16459647

Pulled By: houseroad

fbshipit-source-id: b401c005f848d957541ba3b00e00c93ac2f4609b
2019-07-26 00:14:54 -07:00
Horace He
1c00e0fc3f Added a flatten module (#22245)
Summary:
https://github.com/pytorch/pytorch/issues/2118

I'm not sure I'm doing it correctly, so I'll add tests if we decide that it's roughly correct.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22245

Differential Revision: D16508957

Pulled By: Chillee

fbshipit-source-id: a8dc7af999ba698c921006889f71cb1bc5a59d50
2019-07-25 22:48:52 -07:00
Pavel Belevich
dd79d45c5a Added torch.bitwise_not docstr (#23397)
Summary:
Fixing https://github.com/pytorch/pytorch/issues/23311
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23397

Differential Revision: D16505107

Pulled By: pbelevich

fbshipit-source-id: 8d515fc27e253469393941c8da23d8e0510e64df
2019-07-25 18:32:58 -07:00
Kexuan Sun
ba6f65cf33 Add document of functions nn.init.ones_/zeros_ (#23145)
Summary:
Functions `nn.init.ones_` and `nn.init.zeros_` were not documented. As mentioned in https://github.com/pytorch/pytorch/issues/9886
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23145

Differential Revision: D16427108

Pulled By: soumith

fbshipit-source-id: 4fac31e79717a436411ef5e107a829b403e576c9
2019-07-25 06:09:50 -07:00
Pieter Noordhuis
95e822622b Enhance interpretation of GLOO_SOCKET_IFNAME (#22978)
Summary:
With this change you can now list multiple interfaces separated by
comma. ProcessGroupGloo creates a single Gloo context for every device
in the list (a context represents a connection to every other
rank). For every collective that is called, it will select the context
in a round robin fashion. The number of worker threads responsible for
executing the collectives is set to be twice the number of devices.

If you have a single physical interface, and wish to employ increased
parallelism, you can also specify
`GLOO_SOCKET_IFNAME=eth0,eth0,eth0,eth0`.  This makes ProcessGroupGloo
use 4 connections per rank, 4 I/O threads, and 8 worker threads
responsible for executing the collectives.
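
The round-robin context selection can be sketched in plain Python (hypothetical names, not the actual ProcessGroupGloo code):

```python
from itertools import cycle

def make_contexts(ifname_env):
    """One Gloo-style context per device listed in GLOO_SOCKET_IFNAME."""
    devices = ifname_env.split(",")
    return devices, cycle(range(len(devices)))

devices, selector = make_contexts("eth0,eth0,eth0,eth0")

def next_context():
    """Pick the context for the next collective in round-robin order."""
    return next(selector)

picks = [next_context() for _ in range(6)]
```

With `eth0,eth0,eth0,eth0`, four contexts are created and successive collectives cycle through indices 0, 1, 2, 3, 0, 1, …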

Pull Request resolved: https://github.com/pytorch/pytorch/pull/22978
ghstack-source-id: 87006270

Differential Revision: D16339962

fbshipit-source-id: 9aa1dc93d8e131c1714db349b0cbe57e9e7266f1
2019-07-25 04:52:38 -07:00
Dmytro Dzhulgakov
d6dcec37b6 Add docs about prod ecosystem features (#23010)
Summary:
Covering fleet-wide profiling, api logging, etc.

It's my first time writing rst, so suggestions are definitely welcomed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23010

Differential Revision: D16456721

Pulled By: dzhulgakov

fbshipit-source-id: 3d3018f41499d04db0dca865bb3a9652d8cdf90a
2019-07-24 14:15:33 -07:00
Edward Yang
895e79adf1 Revert D16441000: Switch from KaTeX to imgmath for documentation rendering.
Differential Revision:
D16441000

Original commit changeset: c1ab557cb816

fbshipit-source-id: cbfec2ca648b614b291debd6b3e215db9fbeb57b
2019-07-24 11:43:17 -07:00
Will Feng
3ed79f4b6c Fix argument names in torch doc (#22973)
Summary:
I manually went through all functions in `torch.*` and corrected any mismatch between the arguments mentioned in doc and the ones actually taken by the function. This fixes https://github.com/pytorch/pytorch/issues/8698.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22973

Differential Revision: D16419602

Pulled By: yf225

fbshipit-source-id: 5562c9b0b95a0759abee41f967c45efacf2267c2
2019-07-24 11:22:45 -07:00
Edward Yang
174f7a586f Switch from KaTeX to imgmath for documentation rendering.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23025

Test Plan: Imported from OSS

Differential Revision: D16441000

Pulled By: ezyang

fbshipit-source-id: c1ab557cb8163e9c69585c32d237c076582a6d73
2019-07-23 09:44:37 -07:00
Kexuan Sun
45d3f495ef Add document of function torch.as_strided (#22842)
Summary:
Documentation of `torch.as_strided` and `Tensor.as_strided` is missing. As mentioned in https://github.com/pytorch/pytorch/issues/9886
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22842

Differential Revision: D16254106

Pulled By: soumith

fbshipit-source-id: dee142483fb9ef7bea84bd44a970b6eccdcdc471
2019-07-23 06:06:00 -07:00
Orion Reblitz-Richardson
858d4a6a04 Cleanup API and remove 'experimental' warning (#23000)
Summary:
This fixes ASAN test issues with https://github.com/pytorch/pytorch/pull/21786 seen at https://circleci.com/api/v1.1/project/github/pytorch/pytorch/2212325/output/105/0?file=true and lands it again.

This cleans up the `torch.utils.tensorboard` API to remove all kwargs usage (which isn't clear to the  user) and removes the "experimental" warning in prep for our 1.2 release.

We also don't need the additional PyTorch version checks now that we are in the codebase itself.

cc yf225, lanpa, natalialunova
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23000

Reviewed By: sanekmelnikov

Differential Revision: D16349734

Pulled By: orionr

fbshipit-source-id: 604a9cad56868a55e08b509a0c6f42b84f68de95
2019-07-22 12:10:05 -07:00
Elias Ellison
2ee0f0bc3a add break continue to docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23091

Differential Revision: D16382604

Pulled By: eellison

fbshipit-source-id: 47432d844c811ecd87ad97155e835b07ae8056cc
2019-07-19 12:17:00 -07:00
vishwakftw
6dfecc7e01 Remove deprecated linear algebra functions (and methods) (#22841)
Summary:
Changelog:
- Removed the following linear algebra functions in PyTorch in favor of the renamed operations
  - `btrifact` (use `lu` instead)
  - `btrifact_with_info` (use `lu` with `get_infos=True` instead)
  - `btrisolve` (use `lu_solve` instead)
  - `btriunpack` (use `lu_unpack` instead)
  - `gesv` (use `solve` instead)
  - `pstrf` (use `cholesky` instead)
  - `potrf` (use `cholesky` instead)
  - `potri` (use `cholesky_inverse` instead)
  - `potrs` (use `cholesky_solve` instead)
  - `trtrs` (use `triangular_solve` instead)

- Removed dead code after the removal of `pstrf`
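As a migration sketch, a Cholesky-based solve written against the renamed ops (the matrix here is illustrative; in general `torch.cholesky` replaces `potrf` for computing the factor):

```python
import torch

A = 2.0 * torch.eye(3)            # a simple SPD matrix
b = torch.ones(3, 1)

# Old: u = torch.potrf(A); x = torch.potrs(b, u)
L = torch.eye(3) * (2.0 ** 0.5)   # Cholesky factor of A (lower triangular)
x = torch.cholesky_solve(b, L)    # renamed from potrs

print(torch.allclose(A @ x, b))
```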
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22841

Test Plan:
- All existing tests should pass to verify that the removal is clean

Closes https://github.com/pytorch/pytorch/issues/22832

Differential Revision: D16346184

Pulled By: zou3519

fbshipit-source-id: f748d16ed7609c028de6adcbc28684d5a1af0678
2019-07-19 11:43:06 -07:00
Shen Li
84d892b645 Remove DistributedDataParallelCPU as DDP now supports CPU models (#22864)
Summary:
cc ailzhang aazzolini yifuwang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22864

Differential Revision: D16358011

Pulled By: mrshenli

fbshipit-source-id: 8db2dc035dea03f07a32c749e754f625fda1bf28
2019-07-18 12:50:45 -07:00
Orion Reblitz-Richardson
e24f18cea0 Revert D15854892: [pytorch][PR] [tensorboard] Cleanup API and remove 'experimental' warning
Differential Revision:
D15854892

Original commit changeset: 06b849882694

fbshipit-source-id: 588edc4616d020a23645f8c8181782c8412c4c6e
2019-07-17 16:45:54 -07:00
George Guanheng Zhang
3c0814ffeb add docs to onnx APIs (#22938)
Summary:
Add docs to onnx APIs, including
  - export
  - export_to_pretty_string
  - is_in_onnx_export

Fix https://github.com/pytorch/pytorch/issues/14698
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22938

Differential Revision: D16296182

Pulled By: zhangguanheng66

fbshipit-source-id: 1a1fa769b430db6428e6dfafba5447e6e2a75517
2019-07-17 10:50:41 -07:00
Orion Reblitz-Richardson
4861527446 Cleanup API and remove 'experimental' warning (#21786)
Summary:
This cleans up the `torch.utils.tensorboard` API to remove all kwargs usage (which isn't clear to the  user) and removes the "experimental" warning in prep for our 1.2 release.

We also don't need the additional PyTorch version checks now that we are in the codebase itself.

cc ezyang lanpa natalialunova
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21786

Reviewed By: natalialunova

Differential Revision: D15854892

Pulled By: orionr

fbshipit-source-id: 06b8498826946e578824d4b15c910edb3c2c20c6
2019-07-17 10:34:00 -07:00
Iurii Zdebskyi
bd88fd0793 Added .bfloat16() (#22852)
Summary:
Add conversion method for bfloat16
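A minimal sketch of the new conversion method:

```python
import torch

t = torch.randn(3).bfloat16()   # convert to bfloat16, like .half() / .float()
print(t.dtype)                  # torch.bfloat16
print(t.float().dtype)          # convert back for ops without bfloat16 kernels
```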
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22852

Differential Revision: D16256760

Pulled By: izdeby

fbshipit-source-id: 01d75495f9df513a0cdf78791c3eb013ab92bd95
2019-07-15 09:32:18 -07:00
shihongzhi
45cf33a731 add fill_diagonal function (#21892)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/21796
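A minimal sketch, assuming the function lands as the in-place tensor method `fill_diagonal_`:

```python
import torch

x = torch.zeros(3, 3)
x.fill_diagonal_(5.0)   # in-place; only the diagonal entries are written
print(x)
```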
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21892

Differential Revision: D16164678

Pulled By: colesbury

fbshipit-source-id: 85df8ae9b7a6a91b6023fe7295b3a8124e4526ea
2019-07-11 09:20:44 -07:00
Hong Xu
e2dc1fc715 Add a bitwise NOT operator for integer and Boolean types (CPU).
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22283

Test Plan: Imported from OSS

Differential Revision: D16183576

Pulled By: colesbury

fbshipit-source-id: 2e539fab8ff885dddb9bff334d1d784b28d65b8f
2019-07-10 12:17:44 -07:00
Arul
43d36415b9 torch.utils.data.Dataloader: documentation about RNG state consumption (#22540)
Summary:
This is the outcome of the PyTorch forum thread: https://discuss.pytorch.org/t/dataloader-problem-problem-arises-when-shuffle-true/45631

The discussion is here: https://github.com/pytorch/pytorch/pull/20749
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22540

Differential Revision: D16131777

Pulled By: ezyang

fbshipit-source-id: 566deda1b44dc7fae54250e9b508d120851a2848
2019-07-08 08:22:04 -07:00
Michael Acar
a4b2f3e213 Implement AdamW optimizer (#21250)
Summary:
# What is this?
This is an implementation of the AdamW optimizer as implemented in [the fastai library](803894051b/fastai/callback.py) and as initially introduced in the paper [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101). It decouples the weight decay regularization step from the optimization step during training.

There have already been several abortive attempts to push this into pytorch in some form or fashion: https://github.com/pytorch/pytorch/pull/17468, https://github.com/pytorch/pytorch/pull/10866, https://github.com/pytorch/pytorch/pull/3740, https://github.com/pytorch/pytorch/pull/4429. Hopefully this one goes through.
# Why is this important?
Via a simple reparameterization, it can be shown that L2 regularization has a weight decay effect in the case of SGD optimization. Because of this, L2 regularization became synonymous with the concept of weight decay. However, it can be shown that the equivalence of L2 regularization and weight decay breaks down for more complex adaptive optimization schemes. It was shown in the paper [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101) that this is the reason why models trained with SGD achieve better generalization than those trained with Adam. Weight decay is a very effective regularizer. L2 regularization, in and of itself, is much less effective. By explicitly decaying the weights, we can achieve state-of-the-art results while also taking advantage of the quick convergence properties that adaptive optimization schemes have.
# How was this tested?
There were test cases added to `test_optim.py` and I also ran a [little experiment](https://gist.github.com/mjacar/0c9809b96513daff84fe3d9938f08638) to validate that this implementation is equivalent to the fastai implementation.
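A minimal usage sketch, assuming the optimizer is exposed as `torch.optim.AdamW` with the usual `Optimizer` interface:

```python
import torch

model = torch.nn.Linear(4, 2)
# weight_decay here is true decoupled weight decay, not L2 regularization
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

loss = model(torch.randn(8, 4)).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()   # Adam update, then weights decayed directly by lr * weight_decay
```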
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21250

Differential Revision: D16060339

Pulled By: vincentqb

fbshipit-source-id: ded7cc9cfd3fde81f655b9ffb3e3d6b3543a4709
2019-07-02 09:09:10 -07:00
Pieter Noordhuis
6ff0c6ca3f Remove THD (#22065)
Summary:
It's been ~9 months since moving THD to the `torch.distributed.deprecated` namespace (see https://github.com/pytorch/pytorch/issues/11405) and we haven't seen issues related to it, so it's time to remove it.

Closes https://github.com/pytorch/pytorch/issues/18967.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22065

Reviewed By: mrshenli

Differential Revision: D15983669

Pulled By: pietern

fbshipit-source-id: 2a2f5866f9a63040bc7cef3956d5fd215aba7165
2019-06-25 12:19:13 -07:00
Hong Xu
a45898931c Document the Boolean tensor type.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21601

Differential Revision: D15971573

Pulled By: gchanan

fbshipit-source-id: c07c57f989980149cb1307dcca6ba64dce52d0ef
2019-06-24 14:16:36 -07:00
byronhe
6edaa11e5a fix broken link
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22064

Differential Revision: D15951107

Pulled By: mrshenli

fbshipit-source-id: 0b8f97bd2bbac26855cd2889e1fc619770974ee2
2019-06-24 07:34:16 -07:00
Tongzhou Wang
058beae411 Add IterableDataset (#19228)
Summary:
This is a modified version of https://github.com/pytorch/pytorch/pull/14705 since commit structure for that PR is quite messy.

1. Add `IterableDataset`.
2. So we now have two data loading modes: `Iterable` and `Map`.

    1. `Iterable` if the `dataset` is an instance of `IterableDataset`
    2. `Map` otherwise.

3. Add better support for non-batch loading (i.e., `batch_size=None` and `batch_sampler=None`). This is useful for things like bulk loading.
4. Refactor `DataLoaderIter` into two classes, `_SingleProcessDataLoaderIter` and `_MultiProcessingDataLoaderIter`. Rename some methods to be more generic, e.g., `get_batch` -> `get_data`.
5. Add `torch.utils.data.get_worker_info`, which returns worker information in a worker process (e.g., worker id, dataset obj copy, etc.) and can be used in `IterableDataset.__iter__` and `worker_init_fn` to do per-worker configuration.
6. Add `ChainDataset`, which is the analog of `ConcatDataset` for `IterableDataset`.
7. Import torch.utils.data in `torch/__init__.py`.
8. Add data loader examples and documentation.
9. Use `get_worker_info` to detect whether we are in a worker process in `default_collate`.
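The new iterable-style loading and per-worker splitting can be sketched as follows (the dataset class and the splitting logic here are illustrative, not code from the PR):

```python
import torch
from torch.utils.data import DataLoader, IterableDataset, get_worker_info

class RangeStream(IterableDataset):
    """Streams integers in [start, end); splits the range across workers."""
    def __init__(self, start, end):
        self.start, self.end = start, end

    def __iter__(self):
        info = get_worker_info()
        if info is None:  # single-process data loading: yield everything
            return iter(range(self.start, self.end))
        # in a worker process: take this worker's shard of the range
        per_worker = (self.end - self.start) // info.num_workers
        lo = self.start + info.id * per_worker
        hi = self.end if info.id == info.num_workers - 1 else lo + per_worker
        return iter(range(lo, hi))

# batch_size=None enables the new non-batch ("bulk") loading path
loader = DataLoader(RangeStream(0, 8), batch_size=None)
print(list(loader))  # 0..7 in order with the default num_workers=0
```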

Closes https://github.com/pytorch/pytorch/issues/17909, https://github.com/pytorch/pytorch/issues/18096, https://github.com/pytorch/pytorch/issues/19946, and some of https://github.com/pytorch/pytorch/issues/13023
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19228

Reviewed By: bddppq

Differential Revision: D15058152

fbshipit-source-id: 9e081a901a071d7e4502b88054a34b450ab5ddde
2019-06-20 20:12:44 -07:00
Syed Tousif Ahmed
effcc398c4 Refactor Random Number Generators in ATen (#21555)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21555
ghimport-source-id: dd900a8c3e1ef9ef1e011b8bb5476626d18cc462

Test Plan: Imported from OSS

Differential Revision: D15875780

Pulled By: ezyang

fbshipit-source-id: 6e04e90af62ab9c9593d74f344a3a084aaaf6f43
2019-06-19 13:54:09 -07:00
Jerry Zhang
94f903654c Add qscheme() method (#20608)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20608

Expose QScheme in Python as objects like `torch.qscheme.per_tensor_affine` etc.

Reviewed By: zafartahirov

Differential Revision: D15364354

fbshipit-source-id: 4d6a96d67e9ead051cf4a8f934553a8c7232fdb7
2019-06-14 16:29:29 -07:00
Ailing Zhang
16b4a12ed8 better example for local weights (#21685)
Summary:
fixes https://github.com/pytorch/hub/issues/29
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21685

Differential Revision: D15817774

Pulled By: ailzhang

fbshipit-source-id: d2f615e5d431186d45a21d8300fb9ba3c37b246c
2019-06-13 17:56:25 -07:00
Edward Yang
b858f42e16 Document that no_grad is thread local. (#21755)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21755
ghimport-source-id: dfb53759024d9ba9d104fdb2a8151ab996e55234

Differential Revision: D15811172

Pulled By: ezyang

fbshipit-source-id: c8c7c1c15277d8fe8cc513e20af449257d7ff15c
2019-06-13 13:47:09 -07:00
Guanheng Zhang
756a20de93 Add/edit docs for nn.transformer (#21746)
Summary:
Add docs for TransformerEncoder and TransformerDecoder, plus minor edits.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21746

Differential Revision: D15807498

Pulled By: zhangguanheng66

fbshipit-source-id: 388efb5821c4c3d25865cecea70902e9b2bf5d15
2019-06-13 12:27:26 -07:00
Brennan Vincent
699de487db numerical integration "trapz" function. (#21610)
Summary:
This is intended to match [numpy.trapz](https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html): numerical integration based on the trapezoid rule.
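A short usage sketch (mirroring `numpy.trapz`; the two-argument form assumes `torch.trapz(y, x)` as in numpy):

```python
import torch

y = torch.tensor([1.0, 2.0, 3.0])
print(torch.trapz(y))              # unit spacing: 0.5*(1+2) + 0.5*(2+3) = 4.0

x = torch.tensor([0.0, 1.0, 3.0])  # non-uniform sample points
print(torch.trapz(y, x))           # 0.5*1*(1+2) + 0.5*2*(2+3) = 6.5
```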
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21610

Differential Revision: D15747618

Pulled By: umanwizard

fbshipit-source-id: 8eadb2e75c9877b07592d875ca0b2cca6cb72297
2019-06-12 15:30:13 -07:00
Syed Tousif Ahmed
ae342fd076 Refactor Random Number Generators in ATen (#21364)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21364
ghimport-source-id: ca7d37e10190ba46dc8512f437404ca9216d3369

Differential Revision: D15696497

Pulled By: ezyang

fbshipit-source-id: 2e713b8566ae915e175b5a79ac1dd9b86cc2a23d
2019-06-12 13:01:30 -07:00
Guanheng Zhang
83cec5f3ee nn.Transformer (#20170)
Summary:
Accidentally rebased the old PR and make it too messy. Find it here (https://github.com/pytorch/pytorch/pull/19274)

Create a PR for comments. The model is still WIP but I want to have some feedbacks before moving too far. The transformer model depends on several modules, like MultiheadAttention (landed).

Transformer is implemented based on the paper (https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf). Users have the flexibility to build a transformer with self-defined and/or built-in components (i.e., encoder, decoder, encoder_layer, decoder_layer). Users can use the Transformer class to build a standard transformer model and modify sub-layers as needed.

Add a few unit tests for the transformer module, as follows:
TestNN.test_Transformer_cell
TestNN.test_transformerencoderlayer
TestNN.test_transformerdecoderlayer
TestNN.test_transformer_args_check
TestScript.test_scriptmodule_transformer_cuda

There is another demonstration example for applying transformer module on the word language problem. https://github.com/pytorch/examples/pull/555
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20170

Differential Revision: D15417983

Pulled By: zhangguanheng66

fbshipit-source-id: 7ce771a7e27715acd9a23d60bf44917a90d1d572
2019-06-12 12:22:12 -07:00
Sergey Zagoruyko
dd439bc39e Rename hubconf.conf to hubconf.py in docs (#21631)
Summary:
It's a typo I guess. cc fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21631

Differential Revision: D15764909

Pulled By: soumith

fbshipit-source-id: 5ffc7bde181c13e151332e7de3c0da36505b495e
2019-06-11 12:22:43 -07:00
Brian Johnson
4610347fdf Breaks up NN module in docs so it loads faster.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21291

Differential Revision: D15760935

Pulled By: ezyang

fbshipit-source-id: 114da4c52b78949e631e9adcae4eb620546124fb
2019-06-11 09:38:41 -07:00
Ailing Zhang
1e6c99a6e0 update hub doc (#21568)
Summary:
update doc as pointed out in https://github.com/pytorch/hub/pull/22
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21568

Differential Revision: D15732927

Pulled By: ailzhang

fbshipit-source-id: 78ab026539e5ee59e7c3a8144e2c9fcbbc225733
2019-06-10 09:39:35 -07:00
Brennan Vincent
e268fc97c3 Re-add Tensor.T (#21175)
Summary:
Something flaky is going on with `test_inplace_view_saved_output` on Windows.

With my PR #20598 applied, the test fails, even though there is no obvious reason it should be related, so the PR was reverted.

Based on commenting out various parts of my change and re-building, I think the problem is with the name -- renaming everything from `T` to `asdf` seems to make the test stop failing. I can't be sure that this is actually the case though, since I could just be seeing patterns in non-deterministic build output...

I spoke with colesbury offline and we agreed that it is okay to just disable this test on Windows for now and not block landing the main change. He will look into why it is failing.

**Test Plan:** I will wait to make sure the Windows CI suite passes before landing this.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21175

Differential Revision: D15566970

Pulled By: umanwizard

fbshipit-source-id: edf223375d41faaab0a3a14dca50841f08030da3
2019-06-04 17:38:25 -07:00
Brennan Vincent
77c2f5dd75 fix copyright notice in docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21372

Differential Revision: D15631889

Pulled By: umanwizard

fbshipit-source-id: cf764432c27cb1b01d8137ed60ec7de361450d0e
2019-06-04 14:53:45 -07:00
Bram Vanroy
38d68ad803 Update randomness.rst (#21337)
Summary:
Following [this question on the forums](https://discuss.pytorch.org/t/reproducibility-and-performance/46504), I propose the following doc change. It clarifies that 'performance reduction' concerns the processing speed (and not the training accuracy).

Related website commit: https://github.com/pytorch/pytorch.github.io/pull/211
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21337

Differential Revision: D15622151

Pulled By: soumith

fbshipit-source-id: f0edeb20049f2ee715c400e7c57abb966864d621
2019-06-04 07:38:00 -07:00
Ailing Zhang
7c823312d3 hub doc improvements
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21307

Differential Revision: D15610441

Pulled By: ailzhang

fbshipit-source-id: 2b2a28ed808936cf7c93db31afc6b5ea888ab1b1
2019-06-03 16:29:39 -07:00
Xiaomeng Yang
93ae040ff0 Add gelu activation in pytorch (#20665)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20665

Add gelu activation forward on CPU in pytorch

Compared to the current Python implementation of gelu used in BERT-style models, e.g.

  def gelu(self, x):
      return x * 0.5 * (1.0 + torch.erf(x / self.sqrt_two))

The torch.nn.functional.gelu function can reduce the forward time from 333ms to 109ms (with MKL) / 112ms (without MKL) for input size = [64, 128, 56, 56] on a devvm.
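A quick numerical sanity check of the new functional against the erf formula above (a sketch; the tolerance is a guess):

```python
import math
import torch

x = torch.randn(1024)
reference = x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
out = torch.nn.functional.gelu(x)
print(torch.allclose(out, reference, atol=1e-6))
```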

Reviewed By: zheng-xq

Differential Revision: D15400974

fbshipit-source-id: f606b43d1dd64e3c42a12c4991411d47551a8121
2019-06-02 09:08:47 -07:00
Sovvo
5bc7c1f83d fix contribution and governance links (#21243)
Summary:
Updated web links on contribution_guide and governance documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21243

Differential Revision: D15591065

Pulled By: soumith

fbshipit-source-id: fdcfc518605a08a2ac35a10c146122d7d0a3f609
2019-05-31 21:02:13 -07:00
Jerry Zhang
7f960a9c01 remove quantize_linear from Tensor method (#21196)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21196

we'll add `quantize(quantizer)` as a tensor method later when we expose `quantizer` in Python frontend
Python
```
torch.quantize_linear(t, ...)
```
C++
```
at::quantize_linear(t, ...)
```

Differential Revision: D15577123

fbshipit-source-id: d0abeea488418fa9ab212f84b0b97ee237124240
2019-05-31 12:01:10 -07:00
Edward Yang
e161360b62 Revert D15558784: [reland][pt1][quant] remove quantize_linear from Tensor method
Differential Revision:
D15558784

Original commit changeset: 0b194750c423

fbshipit-source-id: d180a7f76bb05ad7470f17bc3d2bd614fab16529
2019-05-31 06:20:05 -07:00
Jerry Zhang
f91f24764e remove quantize_linear from Tensor method (#21156)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21156

we'll add `quantize(quantizer)` as a tensor method later when we expose `quantizer` in Python frontend
Python
```
torch.quantize_linear(t, ...)
```
C++
```
at::quantize_linear(t, ...)
```

Differential Revision: D15558784

fbshipit-source-id: 0b194750c423f51ad1ad5e9387a12b4d58d969a9
2019-05-30 22:02:12 -07:00
Edward Yang
c4a90ca18e Revert D15477933: [pt1][quant] remove quantize_linear and dequantize from Tensor method
Differential Revision:
D15477933

Original commit changeset: c8aa81f681e0

fbshipit-source-id: ec494fbbab72e20da262bdd8657887e1fdd173cb
2019-05-30 05:04:12 -07:00
Jerry Zhang
67291ba74f remove quantize_linear and dequantize from Tensor method (#20874)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20874

A criterion for what should be a Tensor method is whether numpy has it; numpy does not have this one,
so we are removing it as a Tensor method. We can still call it as a function.
Python
```
torch.quantize_linear(t, ...), torch.dequantize(t)
```
C++
```
at::quantize_linear(t, ...), at::dequantize(t)
```

Reviewed By: dzhulgakov

Differential Revision: D15477933

fbshipit-source-id: c8aa81f681e02f038d72e44f0c700632f1af8437
2019-05-29 19:17:16 -07:00
vfdev
449a2c3555 Fixes #20124 (#20203)
Summary:
Fixes #20124

Description:
The code wraps the `optimizer.step()` method to detect whether the user is following the new pattern or the old one. If the old pattern is detected, a UserWarning is raised. Documentation is also updated to reflect the change:

![Screen Shot 2019-05-07 at 11 05 17](https://user-images.githubusercontent.com/2459423/57287527-04e63580-70b8-11e9-9ddd-5d159ef0ed2f.png)
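The new calling pattern can be sketched as follows (the model and schedule here are placeholders):

```python
import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for epoch in range(3):
    # ... forward/backward over the epoch's batches ...
    optimizer.step()   # update the weights first
    scheduler.step()   # then advance the learning-rate schedule
print(optimizer.param_groups[0]["lr"])  # 0.1 * 0.5**3 = 0.0125
```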

cc SsnL, bado-lee
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20203

Differential Revision: D15543060

Pulled By: ezyang

fbshipit-source-id: 3605e1afdb6ffc2dfd5e75e92e01b967c4d065b5
2019-05-29 14:15:01 -07:00
Edward Yang
0544a491d5 Revert D15499749: [pytorch][PR] Add Tensor.T attribute to reverse dimensions
Differential Revision:
D15499749

Original commit changeset: f3306b496667

fbshipit-source-id: 7f50431d2ea37bc41bfed62f386ddedea1412878
2019-05-29 04:29:48 -07:00
Brennan Vincent
9294de8c9f Add Tensor.T attribute to reverse dimensions (#20598)
Summary:
For compatibility with numpy
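A minimal sketch of the attribute (matching `numpy.ndarray.T`):

```python
import torch

x = torch.randn(2, 3)
print(x.T.shape)   # (3, 2); for an n-d tensor, .T reverses all dimensions
```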
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20598

Differential Revision: D15499749

Pulled By: umanwizard

fbshipit-source-id: f3306b496667f20169e9b28db3150d12183703bc
2019-05-28 16:59:06 -07:00
Nishant Pandit
9d9751f634 Convert dequantize_linear to an internal function _dequantize_linear (#20938)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20938

`dequantize_linear` need not be exposed to front-end users.
It will only be used by the JIT passes for Q-DQ insertion and op
substitution.

Differential Revision: D15446097

fbshipit-source-id: a5fbcf2bb72115122c9653e5089d014e2a2e891d
2019-05-27 15:40:21 -07:00
Tongzhou Wang
bb89827e1d Update cuda pinned memory note to include tensor.to (#20977)
Summary:
separate bits of changes from #19228
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20977

Differential Revision: D15511919

Pulled By: soumith

fbshipit-source-id: 5015a29cdac6d6e160388c493182c330f0da63ec
2019-05-26 22:22:06 -07:00
Tongzhou Wang
83fe92870d Update multiprocessing note now that shared CUDA tensors are refcounted (#19904)
Summary:
The mp notes are not updated after https://github.com/pytorch/pytorch/pull/16854. (The torch.multiprocessing page is.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19904

Differential Revision: D15509661

Pulled By: soumith

fbshipit-source-id: 7c11e14a6c804498dda3adbf19710e63e6a564a0
2019-05-25 17:40:42 -07:00
Tomasz Wrona
2fb665a9df Add warning about memory overhead when using multiple tiny tensors (#20801)
Summary:
added note in docs regarding #19408
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20801

Differential Revision: D15503351

Pulled By: mrshenli

fbshipit-source-id: 7ab371a7992233fb867aadd4bb6b74fccd232c33
2019-05-24 21:45:51 -07:00
Dávid Komorowicz
b5a5e296aa Support 3D mesh/point cloud (#20413)
Summary:
I started adding support for the new **[mesh/point cloud](https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/g3doc/tensorboard.md)** data type introduced to TensorBoard recently.

I created the functions to add the data, created the appropriate summaries.
This new data type however requires a **Merged** summary containing the data for the vertices, colors and faces.

I got stuck at this stage. Maybe someone can help. lanpa?

I converted the example code by Google to PyTorch:
```python
import numpy as np
import trimesh

import torch
from torch.utils.tensorboard import SummaryWriter

sample_mesh = 'https://storage.googleapis.com/tensorflow-graphics/tensorboard/test_data/ShortDance07_a175_00001.ply'
log_dir = 'runs/torch'
batch_size = 1

# Camera and scene configuration.
config_dict = {
    'camera': {'cls': 'PerspectiveCamera', 'fov': 75},
    'lights': [
        {
            'cls': 'AmbientLight',
            'color': '#ffffff',
            'intensity': 0.75,
        }, {
            'cls': 'DirectionalLight',
            'color': '#ffffff',
            'intensity': 0.75,
            'position': [0, -1, 2],
        }],
    'material': {
        'cls': 'MeshStandardMaterial',
        'roughness': 1,
        'metalness': 0
    }
}

# Read all sample PLY files.
mesh = trimesh.load_remote(sample_mesh)
vertices = np.array(mesh.vertices)
# Currently only supports RGB colors.
colors = np.array(mesh.visual.vertex_colors[:, :3])
faces = np.array(mesh.faces)

# Add batch dimension, so our data will be of shape BxNxC.
vertices = np.expand_dims(vertices, 0)
colors = np.expand_dims(colors, 0)
faces = np.expand_dims(faces, 0)

# Create data placeholders of the same shape as data itself.
vertices_tensor = torch.as_tensor(vertices)
faces_tensor = torch.as_tensor(faces)
colors_tensor = torch.as_tensor(colors)

writer = SummaryWriter(log_dir)

writer.add_mesh('mesh_color_tensor', vertices=vertices_tensor, faces=faces_tensor,
                colors=colors_tensor, config_dict=config_dict)

writer.close()
```

I tried adding only the vertex summary, since the others are supposed to be optional.
I got the following error from TensorBoard and it also didn't display the points:
```
Traceback (most recent call last):
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 302, in run_wsgi
    execute(self.server.app)
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 290, in execute
    application_iter = app(environ, start_response)
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/tensorboard/backend/application.py", line 309, in __call__
    return self.data_applications[clean_path](environ, start_response)
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/werkzeug/wrappers/base_request.py", line 235, in application
    resp = f(*args[:-2] + (request,))
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/tensorboard/plugins/mesh/mesh_plugin.py", line 252, in _serve_mesh_metadata
    tensor_events = self._collect_tensor_events(request)
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/tensorboard/plugins/mesh/mesh_plugin.py", line 188, in _collect_tensor_events
    tensors = self._multiplexer.Tensors(run, instance_tag)
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/tensorboard/backend/event_processing/plugin_event_multiplexer.py", line 400, in Tensors
    return accumulator.Tensors(tag)
  File "/home/dawars/workspace/pytorch/venv/lib/python3.6/site-packages/tensorboard/backend/event_processing/plugin_event_accumulator.py", line 437, in Tensors
    return self.tensors_by_tag[tag].Items(_TENSOR_RESERVOIR_KEY)
KeyError: 'mesh_color_tensor_COLOR'
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20413

Differential Revision: D15500737

Pulled By: orionr

fbshipit-source-id: 426e8b966037d08c065bce5198fd485fd80a2b67
2019-05-24 14:30:58 -07:00
Elias Ellison
fa20327618 Update Refinement Docs (#20912)
Summary:
To say that we don't do refinement on module attributes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20912

Differential Revision: D15496453

Pulled By: eellison

fbshipit-source-id: a1ab9fb0157a30fa1bb71d0793fcc9b1670c4926
2019-05-24 10:17:55 -07:00
PgLoLo
ec45baf4dd tensor_illustration with correct numbers and better fonts for README file (#20751)
Summary:
Fix for the README tensor image (issue #20641):
numbers are corrected and symbols are made more readable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20751

Differential Revision: D15495706

Pulled By: ezyang

fbshipit-source-id: b6013574d16253ec681fc57143efe3d53952fbe9
2019-05-24 09:18:18 -07:00
Sergii Dymchenko
a5c90aaf47 Use "length of the RNN input" instead of "length of the RNN"
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20873

Differential Revision: D15495570

Pulled By: ezyang

fbshipit-source-id: e3b4cd67ccf97d0053ac053c3bcb74415b928c0a
2019-05-24 09:03:50 -07:00
Tzu-Wei Huang
cfc98ae714 fix add_histogram_raw (#20688)
Summary:
This is a porting of the fix from:
https://github.com/lanpa/tensorboardX/issues/421

cc orionr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20688

Reviewed By: NarineK

Differential Revision: D15415093

Pulled By: orionr

fbshipit-source-id: d32a6298218fbc6fe315aa0f18b57e0c8ef92627
2019-05-22 14:06:21 -07:00
Jerry Zhang
cca923c481 Add dequantize_linear for JIT pass (#20107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20107

As titled.

Reviewed By: nishantpdce

Differential Revision: D15202187

fbshipit-source-id: 7d6274a67fcca695c0425587f35046fecbc2ccdc
2019-05-21 12:26:48 -07:00
Brennan Vincent
987f1ccf49 Add "ndim" property to tensor (#20565)
Summary:
For compatibility with numpy.
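A minimal sketch, mirroring `numpy.ndarray.ndim`:

```python
import torch

print(torch.zeros(2, 3).ndim)   # 2
print(torch.tensor(1.0).ndim)   # 0 for a scalar tensor
```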
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20565

Differential Revision: D15374390

Pulled By: umanwizard

fbshipit-source-id: 4ab209a5fb27d8ba27ee7eb6b67b858ce2480594
2019-05-20 16:10:50 -07:00
Ilia Cherniavskii
409200df59 Move inter-op settings into ATen/Parallel (#20050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20050
ghimport-source-id: cc102bab8abf3e56c099245976786317ed63ea14

Differential Revision: D15248576

Pulled By: ilia-cher

fbshipit-source-id: 55ddcb7af387ddfc68a42ac7167de07ea648e249
2019-05-17 03:12:02 -07:00
Igor Fedan
4c23c34e79 Computing var/stddev and mean at the same time (#18731)
Summary:
The current variance kernels compute mean at the same time. Many times we want both statistics together, so it seems reasonable to have a kwarg/function that allows us to get both values without launching an extra kernel.
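A usage sketch, assuming the combined op is exposed as `torch.var_mean`:

```python
import torch

x = torch.randn(1000)
var, mean = torch.var_mean(x)   # both statistics without an extra kernel launch
print(torch.allclose(var, x.var()), torch.allclose(mean, x.mean()))
```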
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18731

Differential Revision: D14726082

Pulled By: ifedan

fbshipit-source-id: 473cba0227b69eb2240dca5e61a8f4366df0e029
2019-05-15 16:42:38 -07:00
Lara Haidar
f4d9bfaa4d Support Exports to Multiple ONNX Opset (#19294)
Summary:
Support exporting multiple ONNX opsets (more specifically opset 10 for now), following the proposal in https://gist.github.com/spandantiwari/99700e60919c43bd167838038d20f353.
And add support for custom ops (merge with https://github.com/pytorch/pytorch/pull/18297).

This PR will be followed by another PR containing the changes related to testing the ops for different opsets.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19294

Reviewed By: zrphercule

Differential Revision: D15043951

Pulled By: houseroad

fbshipit-source-id: d336fc35b8827145639137bc348ae07e3c14bb1c
2019-05-10 18:37:12 -07:00
Nick Korovaiko
ed25b8a667 Add a FAQ entry to explain "Cannot insert a Tensor that requires grad as a constant"
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20181

Differential Revision: D15296816

Pulled By: Krovatkin

fbshipit-source-id: 382e3a0aa982774771e98050cbc65d144cbd959e
2019-05-10 10:55:33 -07:00
Tzu-Wei Huang
7edf9a25e8 Clarify API and add examples for all methods (#20008)
Summary:
As part of supporting writing data in a TensorBoard-readable format, we show more examples of how to use the functions in addition to the API docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20008

Reviewed By: natalialunova

Differential Revision: D15261502

Pulled By: orionr

fbshipit-source-id: 16611695a27e74bfcdf311e7cad40196e0947038
2019-05-08 14:06:10 -07:00
Ilia Cherniavskii
481b6d0268 Allow a non-OpenMP based build (#19749)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19749
ghimport-source-id: a6636c0acddbdc5fd5b0dcb20b9f80cbdb9159b9

Differential Revision: D15141993

Pulled By: ilia-cher

fbshipit-source-id: 96085608398b2a4c97c68b2948f5184d07f9ad3d
2019-05-06 19:34:48 -07:00
Sunyeop Lee
3a318074e5 Fix examples in jit#user-defined-types documentation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20003

Differential Revision: D15218172

Pulled By: ezyang

fbshipit-source-id: 07dfebafa337252c2733c018ecaabb0d9902e07d
2019-05-06 08:20:00 -07:00
davidriazati
947fd9c3f5 More doc edits (#19929)
Summary:
* document `torch.jit.Attribute`
* add JIT one-liner to `README.md`
* [misc clarity edits](https://our.intern.facebook.com/intern/diff/15152418/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19929

Pulled By: driazati

Differential Revision: D15152418

fbshipit-source-id: dfee03f0a17300aaf453fcf17f418463288f66c2
2019-04-30 13:52:07 -07:00
iurii zdebskyi
aa6403bae6 Added .bool() method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19928

Differential Revision: D15131923

Pulled By: izdeby

fbshipit-source-id: 3909cf4623fe85e98ceaf57fbb57745919899445
2019-04-30 10:34:31 -07:00
Orion Reblitz-Richardson
62a1640666 Improve torch.utils.tensorboard docs (#19915)
Summary:
This adds method details and corrects an example on the page that didn't run properly. I've now confirmed that it runs in Colab with nightly.

For those with internal access the rendered result can be seen at https://home.fburl.com/~orionr/pytorch-docs/tensorboard.html

cc lanpa, soumith, ezyang, brianjo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19915

Differential Revision: D15137430

Pulled By: orionr

fbshipit-source-id: 833368fb90f9d75231b8243b43de594b475b2cb1
2019-04-29 16:06:27 -07:00
peter
39b885cbbf Add magma for CUDA 10.1 to Windows docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19914

Differential Revision: D15123029

Pulled By: soumith

fbshipit-source-id: a3d4b498a763e1a9829896d44211be00400ec39d
2019-04-29 10:13:21 -07:00
davidriazati
dc67d9f3b9 Cleanup documentation (#19584)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19584 [jit] Cleanup documentation**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19584

Pulled By: driazati

Differential Revision: D15104801

fbshipit-source-id: 87391fd62ee92b615e680469f8bd9a1ac654be7e
2019-04-26 15:43:07 -07:00
Tzu-Wei Huang
98e312cf96 TensorBoard support within PyTorch (#16196)
Summary:
This PR adds TensorBoard logging support natively within PyTorch. It is based on the tensorboardX  code developed by lanpa and relies on changes inside the tensorflow/tensorboard repo landing at https://github.com/tensorflow/tensorboard/pull/2065.

With  these changes users can simply `pip install tensorboard; pip install torch` and then log PyTorch data directly to the TensorBoard protobuf format using

```
import torch
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
s1 = torch.rand(1)
writer.add_scalar('data/scalar1', s1[0], 0)
writer.close()
```

Design:
- `EventFileWriter` and `RecordWriter` from tensorboardX now live in tensorflow/tensorboard
- `SummaryWriter` and PyTorch-specific conversion from tensors, nn modules, etc. now live in pytorch/pytorch. We also support Caffe2 blobs and nets.

Action items:
- [x] `from torch.utils.tensorboard import SummaryWriter`
- [x] rename functions
- [x] unittests
- [x] move actual writing function to tensorflow/tensorboard in https://github.com/tensorflow/tensorboard/pull/2065

Review:
- Please review for PyTorch standard formatting, code usage, etc.
- Please verify unittest usage is correct and executing in CI

Any significant changes made here will likely be synced back to github.com/lanpa/tensorboardX/ in the future.

cc orionr, ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16196

Differential Revision: D15062901

Pulled By: orionr

fbshipit-source-id: 3812eb6aa07a2811979c5c7b70810261f9ea169e
2019-04-25 21:30:23 -07:00
Seungwon Park
6c7135decb fix typo: pytoch -> pytorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19719

Differential Revision: D15080095

Pulled By: ezyang

fbshipit-source-id: b731a0fde87d25c63c1e3d4b9a9c2244e5ad84af
2019-04-25 06:40:40 -07:00
vishwakftw
c30224ad21 Rename potri to cholesky_inverse (#19498)
Summary:
Changelog:
- Rename `potri` to `cholesky_inverse` to remain consistent with names of `cholesky` methods (`cholesky`, `cholesky_solve`)
- Fix all callsites
- Rename all tests
- Create a tentative alias for `cholesky_inverse` under the name `potri` and add a deprecation warning to not promote usage
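The "tentative alias plus deprecation warning" pattern used here (and in the other rename PRs in this series) can be sketched in pure Python; `cholesky_inverse` below is a placeholder body, not the real kernel:

```python
import warnings

def cholesky_inverse(u):
    # Placeholder for the real kernel; only the alias pattern matters here.
    return u

def potri(u):
    # Tentative alias: old call sites keep working, but users are nudged
    # toward the new name.
    warnings.warn(
        "potri is deprecated in favor of cholesky_inverse",
        DeprecationWarning,
    )
    return cholesky_inverse(u)
```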
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19498

Differential Revision: D15029901

Pulled By: ezyang

fbshipit-source-id: 2074286dc93d8744cdc9a45d54644fe57df3a57a
2019-04-22 08:18:39 -07:00
Tongzhou Wang
6d307db5b4 Move cuFFT plan cache note outside Best Practices (#19538)
Summary:
I mistakenly put it there.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19538

Differential Revision: D15026500

Pulled By: soumith

fbshipit-source-id: 0c13499571fdfd789c3bd1c4b58abd870725d422
2019-04-20 21:39:59 -07:00
davidriazati
405c7bcea0 Fix bad annotation in docs (#19501)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19501 [jit] Fix bad annotation in docs**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19501

Pulled By: driazati

Differential Revision: D15016062

fbshipit-source-id: 3dcd0481eb48b84e98ffe8c5df2cbc9c2abf99f9
2019-04-19 12:42:26 -07:00
MilesCranmer
30292d994f Add an identity module (#19249)
Summary:
This is a simple yet useful addition to the torch.nn modules: an identity module. This is a first draft - please let me know what you think and I will edit my PR.

There is no identity module; nn.Sequential() can be used, but it is argument-sensitive, so it can't be used interchangeably with any other module. This adds nn.Identity(...), which can be swapped with any module because it accepts (and ignores) dummy arguments. It's also more understandable than seeing an empty Sequential inside a model.

See discussion on #9160. The current solution is to use nn.Sequential(). However, this won't work in a case like the following:

```python
batch_norm = nn.BatchNorm2d
if dont_use_batch_norm:
    batch_norm = Identity
```

Then in your network, you have:

```python
nn.Sequential(
    ...
    batch_norm(N, momentum=0.05),
    ...
)
```

If you try to simply set `Identity = nn.Sequential`, this will fail since `nn.Sequential` expects modules as arguments. Of course there are many ways to get around this, including:

- Conditionally adding modules to an existing Sequential module
- Not using Sequential but writing the usual `forward` function with an if statement
- ...

**However, I think that an identity module is the most pythonic strategy,** assuming you want to use nn.Sequential.

Using the very simple class (this isn't the same as the one in my commit):

```python
class Identity(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()
    def forward(self, x):
        return x
```

we can get around the problem while still using nn.Sequential, and `batch_norm(N, momentum=0.05)` will work. There are of course other situations where this would be useful.

Thank you.
Best,
Miles
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19249

Differential Revision: D15012969

Pulled By: ezyang

fbshipit-source-id: 9f47e252137a1679e306fd4c169dca832eb82c0c
2019-04-19 10:12:18 -07:00
Ailing Zhang
4c93be0fa0 fix hub doc formatting issues (#19434)
Summary:
minor fixes for doc
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19434

Differential Revision: D15003903

Pulled By: ailzhang

fbshipit-source-id: 400768d9a5ee24f9183faeec9762b688c48c531b
2019-04-18 16:02:19 -07:00
Tongzhou Wang
973d51079b Add device-specific cuFFT plan caches (#19300)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/19224
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19300

Differential Revision: D14986967

Pulled By: soumith

fbshipit-source-id: 8c31237db50d6924bba1472434c10326610d9255
2019-04-18 06:39:35 -07:00
Ailing Zhang
a1174dbc50 fix hub doc format
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19396

Differential Revision: D14993859

Pulled By: ailzhang

fbshipit-source-id: bdf94e54ec35477cfc34019752233452d84b6288
2019-04-17 23:43:56 -07:00
Ailing Zhang
2787f1d8ed hub minor fixes (#19247)
Summary:
A few improvements made while working on the BERT model
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19247

Differential Revision: D14989345

Pulled By: ailzhang

fbshipit-source-id: f4846813f62b6d497fbe74e8552c9714bd8dc3c7
2019-04-17 21:04:33 -07:00
Bharat123rox
3fcee4875c Add rst entry for nn.MultiheadAttention (#19346)
Summary:
Fix #19259 by adding the missing `autoclass` entry for `nn.MultiheadAttention` from [here](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/activation.py#L676)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19346

Differential Revision: D14971426

Pulled By: soumith

fbshipit-source-id: ceaaa8ea4618c38fa2bff139e7fa0d6c9ea193ea
2019-04-17 04:40:28 -07:00
Jerry Zhang
06c28d8a12 Add slicing and int_repr() to QTensor (#19296)
Summary:
Stack:
- #19296 [pt1][quant] Add slicing and int_repr() to QTensor (D14756833)
- #18960 [pt1][quant] Add empty_quantized (D14810261)
- #19312 Use the QTensor with QReLU (D14819460)
- #19319 [RFC] Quantized SumRelu (D14866442)

Methods added to the PyTorch Python frontend:
- int_repr() returns a CPU byte Tensor which copies the data of the QTensor.
- Added as_strided for QTensorImpl, which provides support for slicing a QTensor (see test_torch.py).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19296

Differential Revision: D14756833

Pulled By: jerryzh168

fbshipit-source-id: 6f4c92393330e725c4351d6ff5f5fe9ac7c768bf
2019-04-16 20:17:21 -07:00
Ailing Zhang
e54cb03a51 add/move a few apis in torch.hub (#18758)
Summary:
* `torch.hub.list('pytorch/vision')` - show all available hub models in `pytorch/vision`
* `torch.hub.show('pytorch/vision', 'resnet18')` - show docstring & example for `resnet18` in `pytorch/vision`
* Moved `torch.utils.model_zoo.load_url` to `torch.hub.load_state_dict_from_url` and deprecate `torch.utils.model_zoo`
* We have too many env variables controlling where the cache dir is; they're not really necessary. I actually want to unify `TORCH_HUB_DIR`, `TORCH_HOME` and `TORCH_MODEL_ZOO`, but haven't done it yet. (More suggestions are welcome!)
* Simplify the `pytorch/vision` example in the doc; it was used to show how a hub entrypoint can be written, so it had some confusing, unnecessary args.

An example of hub usage is shown below
```

In [1]: import torch

In [2]: torch.hub.list('pytorch/vision', force_reload=True)
Downloading: "https://github.com/pytorch/vision/archive/master.zip" to /private/home/ailzhang/.torch/hub/master.zip
Out[2]: ['resnet18', 'resnet50']

In [3]: torch.hub.show('pytorch/vision', 'resnet18')
Using cache found in /private/home/ailzhang/.torch/hub/vision_master

    Resnet18 model
    pretrained (bool): a recommended kwargs for all entrypoints
    args & kwargs are arguments for the function

In [4]: model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
Using cache found in /private/home/ailzhang/.torch/hub/vision_master
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18758

Differential Revision: D14883651

Pulled By: ailzhang

fbshipit-source-id: 6db6ab708a74121782a9154c44b0e190b23e8309
2019-04-10 23:10:39 -07:00
Xiang Gao
ea2405c7dc Add torch.unique_consecutive (#19060)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/19045

Please review: VitalyFedyunin ngimel

This is independent of the #18649 series. It will cause merge conflicts in the #18649 series, but please merge this first, and I will resolve the conflicts there.

The new feature is exposed in `_unique2_temporary_will_remove_soon` and `_unique_dim2_temporary_will_remove_soon`, but not at `torch.unique` yet. I will take care of the API after the #18649 series is merged completely.
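Conceptually, unique-consecutive collapses runs of equal adjacent elements, much like `itertools.groupby`; here is a pure-Python sketch (not the CPU/CUDA kernels, and `return_counts` here mirrors the kwarg only loosely):

```python
from itertools import groupby

def unique_consecutive(xs, return_counts=False):
    # Collapse each run of equal adjacent elements to a single value,
    # optionally also reporting the length of each run.
    values, counts = [], []
    for key, run in groupby(xs):
        values.append(key)
        counts.append(sum(1 for _ in run))
    return (values, counts) if return_counts else values

print(unique_consecutive([1, 1, 2, 2, 3, 1, 1, 2]))  # [1, 2, 3, 1, 2]
```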

Benchmark on a tensor of shape `torch.Size([15320, 2])`:

```python
print(torch.__version__)
print()
a = tensor.sort().values.to('cpu')
print('cpu, sorted_input=False:')
%timeit torch._unique2_temporary_will_remove_soon(a)
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True)
%timeit torch._unique2_temporary_will_remove_soon(a, return_counts=True)
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True, return_counts=True)
print()
print('cpu, sorted_input=True:')
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True)
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True)
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_counts=True)
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True, return_counts=True)
print()
a = a.to('cuda')
print('cuda, sorted_input=False:')
%timeit torch._unique2_temporary_will_remove_soon(a); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True, return_counts=True); torch.cuda.synchronize()
print()
print('cuda, sorted_input=True:')
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True, return_counts=True); torch.cuda.synchronize()
```

```
1.1.0a0+2addccc

cpu, sorted_input=False:
340 µs ± 5.88 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
717 µs ± 14.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
52.3 ms ± 2.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
52.3 ms ± 1.79 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

cpu, sorted_input=True:
32.8 µs ± 285 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
49.9 µs ± 557 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
51.6 µs ± 1.08 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
78 µs ± 782 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

cuda, sorted_input=False:
213 µs ± 1.52 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
291 µs ± 3.81 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
250 µs ± 1.05 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
321 µs ± 1.59 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

cuda, sorted_input=True:
45.6 µs ± 2.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
110 µs ± 2.47 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
82 µs ± 857 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
143 µs ± 409 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```

```python
print(torch.__version__)
print()
a1, a2 = tensor.unbind(1)
indices = (a1 * tensor.max() + a2).sort().indices
a = tensor.index_select(0, indices).to('cpu')
print('cpu, sorted_input=False:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_counts=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True, return_counts=True)
print()
print('cpu, sorted_input=True:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_counts=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True, return_counts=True)
print()
a = a.to('cuda')
print('cuda, sorted_input=False:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True, return_counts=True); torch.cuda.synchronize()
print()
print('cuda, sorted_input=True:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True, return_counts=True); torch.cuda.synchronize()
```

```
cpu, sorted_input=False:
55.4 ms ± 1.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.8 ms ± 616 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.2 ms ± 402 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.1 ms ± 725 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

cpu, sorted_input=True:
54.7 ms ± 585 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.2 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
54.5 ms ± 865 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
54.9 ms ± 577 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

cuda, sorted_input=False:
171 µs ± 783 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
220 µs ± 1.65 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
203 µs ± 2.95 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
251 µs ± 2.83 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

cuda, sorted_input=True:
59.6 µs ± 757 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
113 µs ± 431 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
93.2 µs ± 2.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
147 µs ± 2.81 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
The CPU implementation of `unique_dim` is very slow (see https://github.com/pytorch/pytorch/issues/18987), but this PR will not address that issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19060

Differential Revision: D14866909

Pulled By: ezyang

fbshipit-source-id: d20012cec68c37b05cf770a6f4d6524f910b950f
2019-04-10 07:36:08 -07:00
Vishwak Srinivasan
487388d8ad Rename btrisolve to lu_solve (#18726)
Summary:
Changelog:
- Rename `btrisolve` to `lu_solve` to remain consistent with names of solve methods (`cholesky_solve`, `triangular_solve`, `solve`)
- Fix all callsites
- Rename all tests
- Create a tentative alias for `lu_solve` under the name `btrisolve` and add a deprecation warning to not promote usage
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18726

Differential Revision: D14726237

Pulled By: zou3519

fbshipit-source-id: bf25f6c79062183a4153015e0ec7ebab2c8b986b
2019-04-09 15:21:24 -07:00
Edward Yang
29ea08616b Add torch.__config__.show(), reporting detailed version of all libraries. (#18579)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18579
ghimport-source-id: 65124c95e49423de4ad1008c65e75057fea09b94

Differential Revision: D14778507

Pulled By: ezyang

fbshipit-source-id: 1e4bb79f4800a116ce8fb7af2fefbd34da8d102c
2019-04-09 11:13:24 -07:00
Edward Yang
48a35135fb Convert all tabs to spaces, add CI. (#18959)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18959
ghimport-source-id: a934163fa34cb2019732d5f49dc7290c376bf156

Differential Revision: D14831246

Pulled By: ezyang

fbshipit-source-id: beb92dc4ee8c82f4c8259c081dd72e477fe7a9d0
2019-04-09 08:12:26 -07:00
jgong5
3ad710b837 Add MKL-DNN Tensor (#17748)
Summary:
This is a minimalist PR to add MKL-DNN tensor per discussion from Github issue: https://github.com/pytorch/pytorch/issues/16038

Ops on MKL-DNN tensors will be supported in follow-up PRs to speed up the imperative path.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17748

Reviewed By: dzhulgakov

Differential Revision: D14614640

Pulled By: bddppq

fbshipit-source-id: c58de98e244b0c63ae11e10d752a8e8ed920c533
2019-04-08 21:41:38 -07:00
Soumith Chintala
7b5b1486c9 add SyncBatchNorm to docs (#18988)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/18983
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18988

Differential Revision: D14820042

Pulled By: soumith

fbshipit-source-id: 356169f554a42303b266d700d3379a5288f9671d
2019-04-06 11:43:20 -07:00
Gao, Xiang
8c9caf185b Add numpy like repeat as torch.repeat_interleave (#18395)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/14093
cc: SsnL
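The numpy-style semantics being matched can be sketched in a couple of lines of pure Python (scalar `repeats` only; the real op also accepts a per-element repeats tensor and a `dim` argument):

```python
def repeat_interleave(xs, repeats):
    # numpy.repeat-style: each element is repeated `repeats` times in place,
    # unlike Tensor.repeat, which tiles the whole sequence.
    return [x for x in xs for _ in range(repeats)]

print(repeat_interleave([1, 2, 3], 2))  # [1, 1, 2, 2, 3, 3]
```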
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18395

Differential Revision: D14599509

Pulled By: umanwizard

fbshipit-source-id: 2391a1cc135fe5bab38475f1c8ed87c4a96222f3
2019-04-05 18:16:25 -07:00
Tongzhou Wang
4f5e72600e fix lint in optim doc
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18883

Differential Revision: D14793365

Pulled By: ezyang

fbshipit-source-id: c1b46c98e3319badec3e0e772d0ddea24cbf9c89
2019-04-04 19:08:13 -07:00
Wanwannodao
8ca9ba17da Fix typo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18802

Differential Revision: D14781874

Pulled By: ezyang

fbshipit-source-id: 0f94c40bd84c84558ea3329117580f6c749c019f
2019-04-04 12:46:39 -07:00
Elias Ellison
b80a4fa201 Allow ints, floats, and tensors in conditional (#18755)
Summary:
Per our offline discussion, allow Tensors, ints, and floats to be cast to bool when used in a conditional.

Fix for https://github.com/pytorch/pytorch/issues/18381
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18755

Reviewed By: driazati

Differential Revision: D14752476

Pulled By: eellison

fbshipit-source-id: 149960c92afcf7e4cc4997bccc57f4e911118ff1
2019-04-03 17:12:17 -07:00
Jerry Zhang
dfcd7b0185 QTensor (#18230)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18230

Implementing a minimal QTensor API to unblock other quantization workstreams

Changes:
- Added Quantizer which represents different quantization schemes
- Added qint8 as a data type for QTensor
- Added a new ScalarType QInt8
- Added QTensorImpl for QTensor
- Added the following user-facing APIs
  - quantize_linear(scale, zero_point)
  - dequantize()
  - q_scale()
  - q_zero_point()
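The affine quantization scheme behind these APIs can be sketched for a single value (a hypothetical free-function version with an assumed int8 range; the real APIs are tensor methods):

```python
def quantize_linear(x, scale, zero_point):
    # Affine quantization: q = round(x / scale) + zero_point,
    # clamped to the (assumed) signed 8-bit range.
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    # Approximate inverse: x ~ (q - zero_point) * scale.
    return (q - zero_point) * scale

print(quantize_linear(1.0, 0.5, 0))  # 2
print(dequantize(2, 0.5, 0))         # 1.0
```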

Reviewed By: dzhulgakov

Differential Revision: D14524641

fbshipit-source-id: c1c0ae0978fb500d47cdb23fb15b747773429e6c
2019-04-03 13:17:11 -07:00
Edward Yang
6841537933 Delete duplicated technical content from contribution_guide.rst (#18628)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18628
ghimport-source-id: d94b81a6f303883d97beaae25344fd591e13ce52

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18629 Provide flake8 install instructions.
* **#18628 Delete duplicated technical content from contribution_guide.rst**

There's a useful guide in contribution_guide.rst, but the
technical bits were copy-pasted verbatim from CONTRIBUTING.md,
and I don't think it makes sense to break the CONTRIBUTING.md
link. Instead, I deleted the duplicated bits and added a cross
reference to the rst document.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14701003

fbshipit-source-id: 3bbb102fae225cbda27628a59138bba769bfa288
2019-03-31 19:13:22 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
Vishwak Srinivasan
e73be58ff7 Rename btriunpack to lu_unpack (#18529)
Summary:
Changelog:
- Renames `btriunpack` to `lu_unpack` to remain consistent with the `lu` function interface.
- Rename all relevant tests, fix callsites
- Create a tentative alias for `lu_unpack` under the name `btriunpack` and add a deprecation warning to not promote usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18529

Differential Revision: D14683161

Pulled By: soumith

fbshipit-source-id: 994287eaa15c50fd74c2f1c7646edfc61e8099b1
2019-03-29 13:01:30 -07:00
Vishwak Srinivasan
d859031ebf Rename btrifact* to lu (#18435)
Summary:
Changelog:

- Renames `btrifact` and `btrifact_with_info` to `lu`to remain consistent with other factorization methods (`qr` and `svd`).
- Now, we will only have one function and method named `lu`, which performs the LU decomposition. This function takes a `get_infos` kwarg which, when set to True, includes an infos tensor in the returned tuple.
- Rename all tests, fix callsites
- Create a tentative alias for `lu` under the name `btrifact` and `btrifact_with_info`, and add a deprecation warning to not promote usage.
- Add the single batch version for `lu` so that users don't have to unsqueeze and squeeze for a single square matrix (see changes in determinant computation in `LinearAlgebra.cpp`)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18435

Differential Revision: D14680352

Pulled By: soumith

fbshipit-source-id: af58dfc11fa53d9e8e0318c720beaf5502978cd8
2019-03-29 00:34:30 -07:00
Sam Pepose
8635078d9e Adds Cyclical Learning Rate and Momentum (#18001)
Summary:
This implements a cyclical learning rate (CLR) schedule with an optional inverse cyclical momentum. More info about CLR: https://github.com/bckenstler/CLR

This is finishing what #2016 started. Resolves #1909.
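The triangular CLR policy can be sketched in a few lines (the shape of the schedule only, under assumed parameter names; not the actual scheduler class):

```python
def triangular_clr(step, base_lr, max_lr, step_size):
    # Triangular policy: the LR ramps linearly base_lr -> max_lr over
    # `step_size` steps, then back down, repeating every 2 * step_size steps.
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

print(triangular_clr(0, 0.1, 1.0, 100))    # 0.1 (start of cycle)
print(triangular_clr(100, 0.1, 1.0, 100))  # 1.0 (peak)
```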
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18001

Differential Revision: D14451845

Pulled By: sampepose

fbshipit-source-id: 8f682e0c3dee3a73bd2b14cc93fcf5f0e836b8c9
2019-03-27 19:56:04 -07:00
Paul O’Shannessy
defe67caf2 Generate sphinx docs with secure content. (#18508)
Summary:
There are a number of pages in the docs that serve insecure content. AFAICT this is the sole source of that.

I wasn't sure if docs get regenerated for old versions as part of the automation, or if those would need to be manually done.

cf. https://github.com/pytorch/pytorch.github.io/pull/177
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18508

Differential Revision: D14645665

Pulled By: zpao

fbshipit-source-id: 003563b06048485d4f539feb1675fc80bab47c1b
2019-03-27 11:01:48 -07:00
Edward Yang
81e030d9a6 Upgrade flake8-bugbear to master, fix the new lints. (#18507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18507
ghimport-source-id: 1c3642befad2da78a7e5f39d6d58732b85c76267

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18507 Upgrade flake8-bugbear to master, fix the new lints.**

It turns out Facebook is internally using the unreleased master
flake8-bugbear, so upgrading it grabs a few more lints that Phabricator
was complaining about but we didn't get in open source.

A few of the getattr sites that I fixed look very suspicious (they're
written as if Python were a lazy language), but I didn't look more
closely into the matter.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14633682

fbshipit-source-id: fc3f97c87dca40bbda943a1d1061953490dbacf8
2019-03-27 08:07:41 -07:00
James Reed
f447b63ed0 Add section about .code to docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18493

Differential Revision: D14634677

Pulled By: jamesr66a

fbshipit-source-id: 9ee065f6ce4218f725b93deb4c64b4ef55926145
2019-03-26 20:52:31 -07:00
Xiang Gao
2ba41c5550 Add some missing docs for tensor methods and attributes, new unittest to enforce tensors.rst no longer miss anything (#16057)
Summary:
This depends on https://github.com/pytorch/pytorch/pull/16039

This prevents people (reviewers, PR authors) from forgetting to add things to `tensors.rst`.

When something new is added to `_tensor_doc.py` or `tensor.py` but intentionally left out of `tensors.rst`, people should manually whitelist it in `test_docs_coverage.py`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16057

Differential Revision: D14619550

Pulled By: ezyang

fbshipit-source-id: e1c6dd6761142e2e48ec499e118df399e3949fcc
2019-03-26 18:05:56 -07:00
Pat Mellon
fa0bfa03ed Add Google tag (#17690)
Summary:
This PR adds a Global Site Tag to the site.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17690

Differential Revision: D14620816

Pulled By: zou3519

fbshipit-source-id: c02407881ce08340289123f5508f92381744e8e3
2019-03-26 10:35:24 -07:00
vishwakftw
5e462a3ed6 Introduce SobolEngine (#10505)
Summary:
`SobolEngine` is a quasi-random sampler used to sample points evenly in the interval [0, 1]. Here we use direction numbers to generate these samples. The maximum supported dimension for the sampler is 1111.

Documentation has been added, and tests have been added based on Balandat's references. The implementation is an optimized / tensorized version of Balandat's Cython implementation as provided in #9332.

This closes #9332 .
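As a simpler illustration of the quasi-random (low-discrepancy) idea — not Sobol itself, which uses direction numbers per dimension — the base-2 van der Corput sequence fills [0, 1) far more evenly than i.i.d. uniform draws:

```python
def van_der_corput(n, base=2):
    # Radical inverse: reflect the base-b digits of n about the radix point,
    # producing a low-discrepancy sequence in [0, 1).
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

print([van_der_corput(i) for i in range(4)])  # [0.0, 0.5, 0.25, 0.75]
```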

cc: soumith Balandat
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10505

Reviewed By: zou3519

Differential Revision: D9330179

Pulled By: ezyang

fbshipit-source-id: 01d5588e765b33b06febe99348f14d1e7fe8e55d
2019-03-26 07:53:07 -07:00
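A minimal usage sketch of the sampler this commit adds (not from the PR itself; it assumes the current `torch.quasirandom` import location):

```python
import torch
from torch.quasirandom import SobolEngine

# Draw 8 quasi-random points in the 3-dimensional unit cube [0, 1)^3.
engine = SobolEngine(dimension=3, scramble=False)
points = engine.draw(8)

assert points.shape == (8, 3)
assert bool(((points >= 0) & (points < 1)).all())
```

Unlike `torch.rand`, consecutive draws from one engine form a low-discrepancy sequence, so the points cover the cube more evenly.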
Vitaly Fedyunin
5653a914f7 Implement reference counting for shared IPC CUDA tensors (#16854)
Summary:
This is to fix #16141 and similar issues.

The idea is to track a reference to every shared CUDA Storage and deallocate memory only after a consumer process deallocates received Storage.

ezyang Done with cleanup. Same (insignificantly better) performance as in the file-per-share solution, but handles millions of shared tensors easily. Note: documentation in progress.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16854

Differential Revision: D13994490

Pulled By: VitalyFedyunin

fbshipit-source-id: 565148ec3ac4fafb32d37fde0486b325bed6fbd1
2019-03-25 10:24:38 -07:00
David Riazati
f79eac2c7a Cleanup TorchScript rst docs (#18234)
Summary:
* Adds more headers for easier scanning
* Adds some line breaks so things are displayed correctly
* Minor copy/spelling stuff
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18234

Reviewed By: ezyang

Differential Revision: D14567737

Pulled By: driazati

fbshipit-source-id: 046d991f7aab8e00e9887edb745968cb79a29441
2019-03-21 20:19:17 -07:00
vishwakftw
291746f110 Rename trtrs to triangular_solve (#18213)
Summary:
Changelog:
- Renames `trtrs` to `triangular_solve` to remain consistent with `cholesky_solve` and `solve`.
- Rename all tests, fix callsites
- Create a tentative alias for `triangular_solve` under the name `trtrs`, and add a deprecation warning to discourage usage.
- Move `isnan` to `_torch_docs.py`
- Remove unnecessary imports
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18213

Differential Revision: D14566902

Pulled By: ezyang

fbshipit-source-id: 544f57c29477df391bacd5de700bed1add456d3f
2019-03-21 14:27:21 -07:00
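A short sketch of solving a triangular system as this rename describes. `torch.triangular_solve` was itself later superseded, so the example uses the current `torch.linalg.solve_triangular` equivalent rather than the API exactly as it stood in this commit:

```python
import torch

A = torch.tensor([[2.0, 1.0],
                  [0.0, 3.0]])          # upper-triangular coefficient matrix
B = torch.tensor([[5.0], [6.0]])

# Back-substitution: solves A @ X = B without a general factorization.
X = torch.linalg.solve_triangular(A, B, upper=True)
assert torch.allclose(A @ X, B, atol=1e-6)
```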
kshitij12345
1c671c56c1 Fix contribution_guide docs (#18237)
Summary:
Fixes Typo and a Link in the `docs/source/community/contribution_guide.rst`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18237

Differential Revision: D14566907

Pulled By: ezyang

fbshipit-source-id: 3a75797ab6b27d28dd5566d9b189d80395024eaf
2019-03-21 13:20:57 -07:00
Vishwak Srinivasan
421b508d55 Rename gesv to solve (#18060)
Summary:
Changelog:

- Renames `gesv` to `solve` to remain consistent with `cholesky_solve`.
- Rename all tests, fix callsites
- Create a tentative alias for `solve` under the name `gesv`, and add a deprecation warning to discourage usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18060

Differential Revision: D14503117

Pulled By: zou3519

fbshipit-source-id: 99c16d94e5970a19d7584b5915f051c030d49ff5
2019-03-18 16:04:24 -07:00
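A sketch of the linear-system solve this rename is about. Note that `torch.solve` was itself later moved again, to `torch.linalg.solve`, which is what current builds expose and what the example uses:

```python
import torch

# Solve A @ x = b for x:  3x + y = 9,  x + 2y = 8  ->  x = 2, y = 3.
A = torch.tensor([[3.0, 1.0],
                  [1.0, 2.0]])
b = torch.tensor([[9.0], [8.0]])

x = torch.linalg.solve(A, b)
assert torch.allclose(A @ x, b, atol=1e-6)
```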
Johannes M Dieterich
8362177bcf Fix typo (#17949)
Summary:
Fix a very common typo in my name.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17949

Differential Revision: D14475162

Pulled By: ezyang

fbshipit-source-id: 91c2c364c56ecbbda0bd530e806a821107881480
2019-03-14 20:11:33 -07:00
peter
9af6564060 Add magma debug version for Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18008

Differential Revision: D14455117

Pulled By: soumith

fbshipit-source-id: 29d9a2e0b36d72bece0bb1870bbdc740c4d1f9d6
2019-03-14 10:15:57 -07:00
livc
ecc5e623a2 fix punctuation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17973

Differential Revision: D14438725

Pulled By: zou3519

fbshipit-source-id: 30a5485b508b4ae028057e0b66a8abb2b163d66b
2019-03-13 08:14:30 -07:00
Eric Nakagawa
b9e8f56daa Add PyTorch Governance, Contributor Guide, and List of Persons of Interest
Summary: Adding new documents to the PyTorch website to describe how PyTorch is governed, how to contribute to the project, and lists persons of interest.

Reviewed By: orionr

Differential Revision: D14394573

fbshipit-source-id: ad98b807850c51de0b741e3acbbc3c699e97b27f
2019-03-11 10:36:41 -07:00
Tongzhou Wang
b6313d74e1 fix faq typo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17851

Differential Revision: D14401791

Pulled By: soumith

fbshipit-source-id: ed6d64d6f5985e7ce76dca1e9e376782736b90f9
2019-03-10 15:33:52 -07:00
peter
c78da0c6ed Enable using CMD when building cpp extensions on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17706

Differential Revision: D14346482

Pulled By: ezyang

fbshipit-source-id: 7c85e51c701f6c0947ad324ef19fafda40ae1cb9
2019-03-06 14:45:31 -08:00
Tongzhou Wang
39f94619ec fix typo in hub doc
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17705

Differential Revision: D14338380

Pulled By: ailzhang

fbshipit-source-id: d53eece30bede88a642e718ee6f829ba29c7d1c4
2019-03-05 23:19:30 -08:00
peter
81f2bdf9c2 Update magma to 2.5.0 for Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17607

Differential Revision: D14281291

Pulled By: yf225

fbshipit-source-id: 51209c5540932871e45e54ba6d61b3b7d264aa8c
2019-03-01 09:53:56 -08:00
Michael Suo
2f840ba6d6 upload alias tracker graph for docs (#17476)
Summary:
as title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17476

Differential Revision: D14218312

Pulled By: suo

fbshipit-source-id: 64df096a3431a6f25cd2373f0959d415591fed15
2019-02-25 16:58:43 -08:00
vfdev
e984244828 fix code block typo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17421

Differential Revision: D14194877

Pulled By: soumith

fbshipit-source-id: 6173835d833ce9e9c02ac7bd507cd424a20f2738
2019-02-22 16:34:12 -08:00
David Riazati
1fea60be25 Add dict to docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16640

Differential Revision: D14178270

Pulled By: driazati

fbshipit-source-id: 581040abd0b7f8636c53fd97c7365df99a2446cf
2019-02-21 17:45:24 -08:00
Xiang Gao
4fcab92d6c Move outplace ops to ATen (#16788)
Summary:
Based on https://github.com/pytorch/pytorch/pull/12413, with the following additional changes:

-  Inside `native_functions.yml` move those outplace operators right next to everyone's corresponding inplace operators for convenience of checking if they match when reviewing
- `matches_jit_signature: True` for them
- Add missing `scatter` with Scalar source
- Add missing `masked_fill` and `index_fill` with Tensor source.
- Add missing test for `scatter` with Scalar source
- Add missing test for `masked_fill` and `index_fill` with Tensor source by checking the gradient w.r.t source
- Add missing docs to `tensor.rst`

Differential Revision: D14069925

Pulled By: ezyang

fbshipit-source-id: bb3f0cb51cf6b756788dc4955667fead6e8796e5
2019-02-15 15:58:10 -08:00
Krishna
b892f69440 one_hot docs missing (#17142)
Summary:
one_hot docs is missing [here](https://pytorch.org/docs/master/nn.html#one-hot).

I dug around and could not find a way to get this working properly.

Differential Revision: D14104414

Pulled By: zou3519

fbshipit-source-id: 3f45c8a0878409d218da167f13b253772f5cc963
2019-02-15 10:48:18 -08:00
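For reference, a minimal example of the `one_hot` function whose docs were missing (not from the PR itself):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 2, 1])
encoded = F.one_hot(labels, num_classes=3)

# Each row has a single 1 at the index given by the label.
assert encoded.tolist() == [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```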
Xiang Gao
07b5782ff7 Add some missing docs to torch.rst, new unittest to enforce torch.rst no longer miss anything (#16039)
Summary:
This prevents people (reviewers, PR authors) from forgetting to add things to `torch.rst`.

When something new is added to `_torch_docs.py` or `functional.py` but intentionally left out of `torch.rst`, it should be manually whitelisted in `test_docs_coverage.py`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16039

Differential Revision: D14070903

Pulled By: ezyang

fbshipit-source-id: 60f2a42eb5efe81be073ed64e54525d143eb643e
2019-02-15 07:02:31 -08:00
Jiren Jin
a9f1d2e371 Fix the error in the note about torch.device documentation. (#16839)
Summary:
This PR is a simple fix for the mistake in the first note for `torch.device` in the "tensor attributes" doc.
![image](https://user-images.githubusercontent.com/8536399/52399611-1becaa00-2b00-11e9-85bf-cac04b29842d.png)

```
>>> # You can substitute the torch.device with a string
>>> torch.randn((2,3), 'cuda:1')
```
Above code will cause error like below:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-53-abdfafb67ab1> in <module>()
----> 1 torch.randn((2,3), 'cuda:1')

TypeError: randn() received an invalid combination of arguments - got (tuple, str), but expected one of:
 * (tuple of ints size, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
 * (tuple of ints size, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
```

Simply adding the argument name `device` solves the problem: `torch.randn((2,3), device='cuda:1')`.

However, another concern is that this note seems redundant as **there is already another note covering this usage**:
![image](https://user-images.githubusercontent.com/8536399/52399583-0ecfbb00-2b00-11e9-914f-e95da4edecd1.png)

So maybe it's better to just remove this note?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16839

Reviewed By: ezyang

Differential Revision: D13989209

Pulled By: gchanan

fbshipit-source-id: ac255d52528da053ebfed18125ee6b857865ccaf
2019-02-09 20:18:58 -08:00
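The fix above boils down to: a device string substitutes for a `torch.device`, but only via the `device=` keyword, not positionally. A small sketch (not from the PR):

```python
import torch

# Positional `torch.randn((2, 3), 'cuda:1')` raises a TypeError;
# the keyword form accepts either a string or a torch.device.
x = torch.randn((2, 3), device='cpu')
y = torch.randn((2, 3), device=torch.device('cpu'))

assert x.device == y.device == torch.device('cpu')
```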
Michael Suo
96369506c4 doc updates for TorchScript (#16866)
Summary:
Some batched updates:
1. bool is a type now
2. Early returns are allowed now
3. The beginning of an FAQ section with some guidance on the best way to do GPU training + CPU inference
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16866

Differential Revision: D13996729

Pulled By: suo

fbshipit-source-id: 3b884fd3a4c9632c9697d8f1a5a0e768fc918916
2019-02-07 18:03:57 -08:00
Ailing Zhang
3b91df3744 add example multiprocess code (#16345)
Summary: fixes #16141

Differential Revision: D13868539

Pulled By: ailzhang

fbshipit-source-id: 03e858d0aff7804c5e9e03a8666f42fd12836ef2
2019-01-30 09:35:58 -08:00
Elias Ellison
956cabd887 Update Documentation for Optionals (#16380)
Summary:
Now that https://github.com/pytorch/pytorch/pull/15587 has landed, updating docs.

Will close https://github.com/pytorch/pytorch/issues/15278
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16380

Differential Revision: D13825221

Pulled By: eellison

fbshipit-source-id: c5a7a7fbb40ba7be46a80760862468f2c9967169
2019-01-25 15:14:16 -08:00
Sidney Zhang
fdda533eb1 Update docs to include variable annotation example (#16324)
Summary:
Relates to this issue https://github.com/pytorch/pytorch/issues/16288
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16324

Reviewed By: ezyang

Differential Revision: D13805412

Pulled By: suo

fbshipit-source-id: 8b80f988262da2c717452a71142327bbc23d1b8f
2019-01-24 13:10:56 -08:00
Derek Kim
36e27aa092 Typos and broken RSTs fixed in torch.distribution (#16136)
Summary:
- probabilty -> probability
- make long lines break
- Add LogitRelaxedBernoulli in distribution.rst
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16136

Differential Revision: D13780406

Pulled By: soumith

fbshipit-source-id: 54beb975eb18c7d67779a9631dacf7d1461a6b32
2019-01-23 03:03:10 -08:00
SsnL
300dcc3b96 Add cuda.reset_max_memory_* (#15985)
Summary:
Addresses #15968
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15985

Differential Revision: D13649916

Pulled By: soumith

fbshipit-source-id: a207aea5709a79dba7a6fc541d0a70103f49efff
2019-01-14 07:31:51 -08:00
vishwakftw
95febdfacc Add is_floating_point to docs (#15704)
Summary:
Fixes #15700 .

Changelog:

- Expose torch.*.is_floating_point to docs

Differential Revision: D13580734

Pulled By: zou3519

fbshipit-source-id: 76edb4af666c08237091a2cebf53d9ba5e6c8909
2019-01-07 10:43:22 -08:00
Alexander Rodin
a0d22b6965 Fix typo in documentation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15628

Differential Revision: D13562685

Pulled By: soumith

fbshipit-source-id: 1621fcff465b029142313f717035e935e9159513
2018-12-30 18:07:57 -08:00
SsnL
e4477feb15 Update cuda.get/set_rng_state doc (#14324)
Summary:
Now that `cuda.get/set_rng_state` accept `device` objects, the default value should be a device object, and the doc should mention so.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14324

Reviewed By: ezyang

Differential Revision: D13528707

Pulled By: soumith

fbshipit-source-id: 32fdac467dfea6d5b96b7e2a42dc8cfd42ba11ee
2018-12-27 14:09:25 -08:00
Soumith Chintala
2fe5c29d81 remove legacy from docs (#15112)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/15062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15112

Differential Revision: D13547845

Pulled By: soumith

fbshipit-source-id: 61e3e6c6b0f6b6b3d571bee02db2938ea9698c99
2018-12-25 21:57:54 -08:00
SsnL
521894c490 Allow converting char tensor to numpy; add [fi]info.min (#15046)
Summary:
https://github.com/pytorch/pytorch/pull/14710 with test fixed.

Also added `finfo.min` and `iinfo.min` to get castable tensors.

cc soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15046

Reviewed By: soumith

Differential Revision: D13429388

Pulled By: SsnL

fbshipit-source-id: 9a08004419c83bc5ef51d03b6df3961a9f5dbf47
2018-12-24 09:11:24 -08:00
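A quick sketch of the `finfo.min` / `iinfo.min` attributes this commit adds, plus the char (`int8`) tensor → NumPy conversion it enables (not from the PR itself):

```python
import torch

# Integer and floating-point type limits.
assert torch.iinfo(torch.int8).min == -128
assert torch.iinfo(torch.int8).max == 127
assert torch.finfo(torch.float32).min == -torch.finfo(torch.float32).max

# Char tensors can now round-trip through NumPy.
arr = torch.tensor([1, -2], dtype=torch.int8).numpy()
assert arr.dtype.name == 'int8'
```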
Gao, Xiang
4a716250cc Add torch.rot90 to torch.rst
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15512

Differential Revision: D13545775

Pulled By: soumith

fbshipit-source-id: 2a8896571745630cff4aaf3d5469ef646bdcddb4
2018-12-23 14:31:11 -08:00
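A minimal illustration of the `torch.rot90` behavior being documented (not from the PR itself):

```python
import torch

t = torch.tensor([[1, 2],
                  [3, 4]])

# k > 0 rotates counter-clockwise in the plane given by dims [0, 1].
assert torch.rot90(t, 1, [0, 1]).tolist() == [[2, 4], [1, 3]]
# k < 0 rotates clockwise.
assert torch.rot90(t, -1, [0, 1]).tolist() == [[3, 1], [4, 2]]
```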
Gao, Xiang
a47749cb28 Add at::one_hot (#15208)
Summary: Closes: https://github.com/pytorch/pytorch/issues/15060

Differential Revision: D13528014

Pulled By: ezyang

fbshipit-source-id: 5a18689a4c5638d92f9390c91517f741e5396293
2018-12-20 14:24:58 -08:00
vishwakftw
41e7e1bc40 Rename potrs to cholesky_solve (#15334)
Summary:
Changelog:
- Renames `potrs` to `cholesky_solve` to remain consistent with TensorFlow and SciPy (not exactly — they call their function `chol_solve`)
- The default for `upper` in `cholesky_solve` is False. This allows a seamless interface between `cholesky` and `cholesky_solve`, since the `upper` argument means the same thing in both functions.
- Rename all tests
- Create a tentative alias for `cholesky_solve` under the name `potrs`, and add a deprecation warning to discourage usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15334

Differential Revision: D13507724

Pulled By: soumith

fbshipit-source-id: b826996541e49d2e2bcd061b72a38c39450c76d0
2018-12-19 12:31:24 -08:00
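A sketch of the `cholesky` → `cholesky_solve` interface the changelog describes. The factorization here uses the current `torch.linalg.cholesky` location, which is an assumption beyond this commit:

```python
import torch

A = torch.tensor([[4.0, 2.0],
                  [2.0, 3.0]])          # symmetric positive-definite
b = torch.tensor([[8.0], [7.0]])

L = torch.linalg.cholesky(A)            # lower-triangular factor, A = L @ L.T
x = torch.cholesky_solve(b, L)          # upper=False is the default, matching cholesky

assert torch.allclose(A @ x, b, atol=1e-5)
```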
Krishna Kalyan
c51c825efe Delete ffi documentation (#15220)
Summary: Deleting the FFI documentation since it's deprecated.

Differential Revision: D13477329

Pulled By: soumith

fbshipit-source-id: 0b3d485eb7cef1f05b6b397dff50f21a49d6409e
2018-12-15 09:49:02 -08:00
David Riazati
e9fb4d1f11 Fix jit doc codeblocks and tables (#15227)
Summary:
Some of the codeblocks were showing up as normal text and the "unsupported modules" table was formatted incorrectly
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15227

Differential Revision: D13468847

Pulled By: driazati

fbshipit-source-id: eb7375710d4f6eca1d0f44dfc43c7c506300cb1e
2018-12-14 14:27:56 -08:00
David Riazati
5837320b70 Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.

* Adds `my_script_module._get_method('forward').schema()` method to get function schema from a `ScriptModule`
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective it looks like they work.
* Allow printing of `IValue::Device` by getting its string representation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912

Differential Revision: D13385928

Pulled By: driazati

fbshipit-source-id: e391691b2f87dba6e13be05d4aa3ed2f004e31da
2018-12-12 12:30:13 -08:00
Michael Carilli
5d3a347685 Stashing checkpointing RNG states based on devices of arg tensors (#14518)
Summary:
This PR intends to address apaszke's concerns in https://github.com/pytorch/pytorch/pull/14253#issuecomment-441740016.  Preserving the rng state is now controlled by a kwarg rather than a global state, hopefully in a python 2.7-compatible way.

Additionally, the checkpointing function stashes and restores the RNG states of
1. devices associated with all input tensor args to run_fn as well as
2. the current device.

I could easily change this to only save and restore the RNG states associated 1. alone.  This would simplify the logic to create a [deduplicated, ordered](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R37) list of devices considered active.

I'm wondering if the [get_device_states](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R32) and [set_device_states](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R47) functions are general enough to reside elsewhere (presumably torch/random.py).  I'm also wondering if the check on [torch.cuda._initialized](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R47) would be better placed within `get_device_states`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14518

Differential Revision: D13356210

Pulled By: ezyang

fbshipit-source-id: afa4cc21ce7862142d5cb1dec3750018df222039
2018-12-11 09:48:45 -08:00
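A sketch of the kwarg-controlled RNG stashing described above. The `use_reentrant` argument is a later addition required by current builds, not part of this commit:

```python
import torch
from torch.utils.checkpoint import checkpoint

def noisy_block(x):
    # Dropout consumes RNG state; checkpointing re-runs this in backward,
    # and preserve_rng_state=True restores the stashed state so the
    # recomputed dropout mask matches the forward pass.
    return torch.nn.functional.dropout(x, p=0.5, training=True)

x = torch.randn(4, 4, requires_grad=True)
out = checkpoint(noisy_block, x, use_reentrant=False, preserve_rng_state=True)
out.sum().backward()

assert x.grad is not None and x.grad.shape == x.shape
```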
Michael Suo
25144c8a09 s/Torch Script/TorchScript/g (#15011)
Summary:
pls
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15011

Differential Revision: D13404158

Pulled By: suo

fbshipit-source-id: e906281463d65c86e4e9073eb0c0a26f4f29e307
2018-12-10 13:48:24 -08:00