Commit Graph

343 Commits

Author SHA1 Message Date
Tongzhou Wang
e8536c08a1 Update extension docs, fix Fold/Unfold docs (#9239)
Summary:
Commits:
1. In the extension docs, get rid of all references to `Variable`s (Closes #6947)
    + also added minor improvements
    + also added a section with links to the cpp extension docs :) goldsborough
    + removed mentions of `autograd.Function.requires_grad`, as it isn't used anywhere and is hardcoded to return `Py_True`.
2. Fix several sphinx warnings
3. Change `*` in equations in `module/conv.py` to `\times`
4. Fix docs for `Fold` and `Unfold`.
    + Added a better shape check for `Fold` (it could previously give bogus results when there are not enough blocks). Added tests for the checks.
5. Fix the doc that says `trtrs` is not available for CUDA (#9247)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9239
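
As an illustration of the Fold/Unfold shape semantics the fixed docs describe, here is a minimal sketch (sizes are illustrative, not from the PR); with stride equal to kernel size the blocks do not overlap, so Fold exactly inverts Unfold:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)                   # (N, C, H, W)
unfold = nn.Unfold(kernel_size=2, stride=2)   # extract non-overlapping 2x2 blocks
blocks = unfold(x)                            # (1, C*2*2, L) = (1, 12, 16)
fold = nn.Fold(output_size=(8, 8), kernel_size=2, stride=2)
y = fold(blocks)                              # exact inverse since blocks don't overlap
assert torch.equal(x, y)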

Reviewed By: soumith

Differential Revision: D8762492

Pulled By: SsnL

fbshipit-source-id: 13cd91128981a94493d5efdf250c40465f84346a
2018-07-08 19:09:39 -07:00
Ailing Zhang
227c8f2654 Implement nn.functional.interpolate based on upsample. (#8591)
Summary:
This PR addresses #5823.

* fix docstring: upsample doesn't support LongTensor

* Enable float scale up & down sampling for linear/bilinear/trilinear modes. (following SsnL's commit)

* Enable float scale up & down sampling for nearest mode. Note that our implementation differs slightly from TF in that there is no "align_corners" concept in this mode.

* Add a new interpolate function API to replace upsample, and add a deprecation warning to upsample.

* Add an "area" mode, which is essentially adaptive average pooling, to resize_image.

* Add test cases for interpolate in test_nn.py

* Add a few comments to help understand the *linear interpolation code.

* Only the "*cubic" mode is missing from the resize_images API; it is pretty useful in practice and is labeled as a HackAMonth task in #1552. I discussed with SsnL that we probably want to implement all new ops in ATen instead of THNN/THCUNN. Depending on the priority, I could either put it in my queue or leave it for a HAMer.

* After the change, the files named *Upsampling*.c work for both up- and downsampling. I could rename the files if needed.
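
A minimal sketch of the new API described above (sizes are illustrative, not from the PR):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 10, 10)
# float scale factors now work for both up- and downsampling
up = F.interpolate(x, scale_factor=2.5, mode='bilinear', align_corners=False)  # (1, 3, 25, 25)
down = F.interpolate(x, scale_factor=0.5, mode='nearest')                      # (1, 3, 5, 5)
pooled = F.interpolate(x, size=(4, 4), mode='area')                            # adaptive-avg-pool-like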

Differential Revision: D8729635

Pulled By: ailzhang

fbshipit-source-id: a98dc5e1f587fce17606b5764db695366a6bb56b
2018-07-06 15:28:11 -07:00
Vishwak Srinivasan
14cbd9adb8 Implement torch.pinverse : Pseudo-inverse (#9052)
Summary:
1. Used SVD to compute the pseudo-inverse.
2. Tests in test_autograd, test_cuda and test_torch
3. Doc strings in _torch_docs.py and _tensor_docs.py

Closes #6187
Closes https://github.com/pytorch/pytorch/pull/9052
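
A sketch of the SVD relationship (assumes a full-rank input; the real pinverse also handles tiny singular values):

import torch

A = torch.randn(5, 3, dtype=torch.double)
P = torch.pinverse(A)                        # (3, 5)
U, S, V = torch.svd(A)                       # A = U @ diag(S) @ V.t()
P_manual = V @ torch.diag(1.0 / S) @ U.t()   # pinv(A) = V diag(1/S) U^T when full rank
assert torch.allclose(P, P_manual)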

Reviewed By: soumith

Differential Revision: D8714628

Pulled By: SsnL

fbshipit-source-id: 7e006c9d138b9f49e703bd0ffdabe6253be78dd9
2018-07-05 09:11:24 -07:00
vishwakftw
08daed40f7 Fix bug in flip() (#9156)
Summary:
Closes #9147
Added a test to prevent regression in test_torch
Added entries in docs

cc ezyang weiyangfb
Closes https://github.com/pytorch/pytorch/pull/9156
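
For reference, a quick torch.flip usage sketch (not taken from the regression test):

import torch

x = torch.arange(8).view(2, 2, 2)
y = torch.flip(x, [0, 1])                  # reverse along dims 0 and 1 in one call
assert torch.equal(y, x.flip(0).flip(1))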

Differential Revision: D8732095

Pulled By: soumith

fbshipit-source-id: 7a6892853cfc0ccb0142b4fd25015818849adf61
2018-07-04 07:24:01 -07:00
vishwakftw
49f88ac956 Add grid lines for activation images, fixes #9130 (#9134)
Summary:
1. Add dashed light blue lines for asymptotes.
2. RReLU was missing its activation image.
3. `make clean` in docs now removes the activation images too.

Sample image:
![image](https://user-images.githubusercontent.com/23639302/42224142-5d66bd0a-7ea7-11e8-8b0a-26918df12f7c.png)
Closes https://github.com/pytorch/pytorch/pull/9134

Differential Revision: D8726880

Pulled By: ezyang

fbshipit-source-id: 35f00ee08a34864ec15ffd6228097a9efbc8dd62
2018-07-03 19:10:00 -07:00
vishwakftw
4643269eb5 Document get_device, fixes #8857 (#8859)
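
A usage sketch of the now-documented method (assumes a machine with at least two GPUs):

import torch

x = torch.randn(2, 3, device='cuda:1')
x.get_device()   # 1: the ordinal of the CUDA device holding the tensor
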
Differential Revision: D8677690

Pulled By: ezyang

fbshipit-source-id: 0167672d1d2659d9fc7d68530760639ba35ed7d8
2018-06-28 22:11:08 -07:00
Tongzhou Wang
be3d65a7e2 i2h<->h2h in gif (#8750)
* i2h<->h2h

* should have 11 frames
2018-06-21 14:46:47 -04:00
Richard Zou
b4cd9f2fc9
Clarify mp note about sharing a tensor's grad field. (#8688)
* Clarify mp note about sharing a tensor's grad field.

* Address comments

* Address comments
2018-06-20 14:22:38 -04:00
Thomas Viehmann
0ae8b6c027 add fold example and add nn.Fold/nn.Unfold and F.fold/F.unfold to doc (#8600)
* add fold example and add nn.Fold/nn.Unfold and F.fold/F.unfold to doc

and a few drive-by doc fixes

* typo
2018-06-18 09:36:42 -04:00
Du Phan
9d88ff7d0d Add half cauchy, half normal distributions (#8411) 2018-06-14 10:28:42 +02:00
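
A sketch of the new distributions (parameters are illustrative):

import torch
from torch.distributions import HalfCauchy, HalfNormal

hn = HalfNormal(scale=torch.tensor([1.0]))   # |X| for X ~ Normal(0, scale)
hc = HalfCauchy(scale=torch.tensor([1.0]))   # |X| for X ~ Cauchy(0, scale)
samples = hn.sample((5,))                    # (5, 1), all non-negative
log_p = hc.log_prob(samples)
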
Vishwak Srinivasan
61f61de270 Expose logsumexp docs and mark log_sum_exp in distributions for internal use (#8428) 2018-06-13 12:27:58 -04:00
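
The newly exposed reduction, sketched (input is illustrative):

import torch

x = torch.randn(3, 4)
y = torch.logsumexp(x, dim=1)   # numerically stable log(exp(x).sum(dim=1)), shape (3,)
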
Richard Xue
c6db1bc952 Add gt lt ge le to the supported operators list (#8375)
Add gt lt ge le to the supported operators list
2018-06-12 15:28:34 -04:00
albanD
78e3259bbe Add autograd automatic anomaly detection (#7677)
* add autograd automatic anomaly detection

* python 3 string support

* Fix non python build

* fix typo in doc

* better test and naming fix

* fix no python build and python object handling

* fix missing checks

* clean NO_PYTHON build

* Remove unwanted changes
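
A minimal sketch of the context-manager API this PR adds; the forward here is healthy, but a NaN produced in backward would raise with a traceback pointing at the forward op that created it:

import torch

with torch.autograd.detect_anomaly():
    x = torch.randn(2, 2, requires_grad=True)
    out = (x * x).sum()
    out.backward()
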
2018-06-11 21:26:17 -04:00
Seth Hendrickson
94888106a9 Add docstring for torch.sparse_coo_tensor (#8152)
* add sparse_coo_tensor docstring

* update empty tensor example

* whitespace

* whitespace again
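
A sketch of the documented constructor (indices and values are illustrative):

import torch

i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])       # 2 x nnz COO indices
v = torch.tensor([3.0, 4.0, 5.0])   # nnz values
s = torch.sparse_coo_tensor(i, v, (2, 3))
s.to_dense()
# tensor([[0., 0., 3.],
#         [4., 0., 5.]])
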
2018-06-11 00:03:51 -04:00
Kaiyu Shi
0169ac5936 Fix sample code for cuda stream (#8319) 2018-06-10 11:41:50 -04:00
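
A hedged sketch of the side-stream pattern the fixed sample illustrates (requires a CUDA device; not the commit's exact snippet):

import torch

s = torch.cuda.Stream()
A = torch.empty(100, 100, device='cuda').normal_()
torch.cuda.current_stream().synchronize()   # make sure normal_() has finished
with torch.cuda.stream(s):                  # work below is queued on stream s
    B = torch.sum(A)
s.synchronize()                             # wait for s before using B elsewhere
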
Tongzhou Wang
742912512c Move signal window functions to ATen; add Blackman window (#8130)
* Move signal window functions to ATen; add Blackman window

* fix cuda test not checking scipy
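
The new window function, sketched:

import torch

w = torch.blackman_window(128)                       # periodic, suited to FFT use
w_sym = torch.blackman_window(128, periodic=False)   # symmetric, like scipy.signal.blackman
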
2018-06-08 11:37:46 -04:00
Tongzhou Wang
9af3a80cff
Docs for gradcheck and gradgradcheck; expose gradgradcheck (#8166)
* Docs for gradcheck and gradgradcheck; expose gradgradcheck

* address comments
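
A minimal sketch of the now-documented checkers (double-precision inputs are needed for the finite-difference comparison):

import torch
from torch.autograd import gradcheck, gradgradcheck

inp = torch.randn(3, 4, dtype=torch.double, requires_grad=True)
assert gradcheck(torch.sigmoid, (inp,))       # analytic vs. numeric first derivatives
assert gradgradcheck(torch.sigmoid, (inp,))   # and second derivatives
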
2018-06-06 13:59:55 -04:00
Ir1dXD
c719c8032c docs: add canonical_url and fix redirect link (#8155)
* docs: enable redirect link to work for each specific page

* docs: add canonical_url for search engines

closes #7222

* docs: update redirect link to canonical_url
2018-06-05 10:29:55 -04:00
Marcin Elantkowski
c2046c1e5e Implement adaptive softmax (#5287)
* Implement adaptive softmax

* fix test for python 2

* add return_logprob flag

* add a test for cross-entropy path

* address review comments

* Fix docs

* pytorch 0.4 fixes

* address review comments

* don't use no_grad when computing log-probs

* add predict method

* add test for predict

* change methods order

* get rid of hardcoded int values

* Add an optional bias term to the head of AdaptiveSoftmax
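
A sketch of the module this PR adds, with illustrative sizes (a 20000-class vocabulary split into frequency buckets by the cutoffs):

import torch
import torch.nn as nn

asm = nn.AdaptiveLogSoftmaxWithLoss(in_features=64, n_classes=20000,
                                    cutoffs=[100, 1000, 10000], head_bias=True)
hidden = torch.randn(128, 64)
target = torch.randint(0, 20000, (128,), dtype=torch.long)
out, loss = asm(hidden, target)    # per-example target log-probs and mean loss
log_probs = asm.log_prob(hidden)   # full (128, 20000) log-probabilities
preds = asm.predict(hidden)        # argmax class per example
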
2018-06-04 12:12:03 -04:00
Xingdong Zuo
8be17723cb Update nn.rst (#8029) 2018-06-01 09:37:18 -04:00
Tongzhou Wang
f9926e4ce5 Fix EmbeddingBag max_norm option (#7959)
* fix EmbeddingBag max_norm option

* flake8

* add warning to the embedding bag arg change
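
For context, a sketch of the option being fixed (sizes are illustrative): with max_norm set, looked-up rows are renormalized so their norm does not exceed it.

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(10, 3, max_norm=1.0)
inp = torch.tensor([1, 2, 4, 5], dtype=torch.long)
offsets = torch.tensor([0, 2], dtype=torch.long)   # two bags: rows [1, 2] and [4, 5]
out = bag(inp, offsets)                            # (2, 3), mean of renormalized rows
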
2018-05-31 09:42:56 -04:00
peterjc123
108fb1c2c9 Fix the import part of the windows doc (#7979) 2018-05-30 21:51:30 -04:00
peterjc123
267fc43a96 Fix Windows doc for import error (#7704)
* Fix Windows doc for import error

* Fix doc again

* Fix wrong format
2018-05-29 22:07:00 +01:00
Sebastian Meßmer
a0480adc79 Fix file extension (#7852) 2018-05-29 15:52:31 -04:00
Gao, Xiang
d7c32df67f move Subset, random_split to data, use sequence at some places. (#7816) 2018-05-25 12:50:50 +02:00
braincodercn
5ee5537b98 Fix typo in document (#7725) 2018-05-21 11:10:24 -04:00
Gao, Xiang
42e5e12750 make BatchSampler subclass of Sampler, and expose (#7707) 2018-05-19 21:29:03 +02:00
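
A sketch of the newly exposed sampler:

from torch.utils.data import BatchSampler, SequentialSampler

batches = list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
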
Richard Zou
e37da05bd5 Expose documentation for random_split (#7676)
Fixes #7640
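
A sketch of the documented helper (dataset is illustrative; the lengths must sum to len(dataset)):

import torch
from torch.utils.data import TensorDataset, random_split

ds = TensorDataset(torch.arange(10.0))
train, val = random_split(ds, [8, 2])   # randomly chosen, non-overlapping indices
len(train), len(val)                    # (8, 2)
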
2018-05-18 17:16:25 +02:00
Soumith Chintala
d4f6c84041
fix nccl distributed documentation 2018-05-17 18:03:54 -04:00
Thomas Viehmann
1ce5431aaf Documentation improvements (#7537)
- improve scatter documentation (fixes #7518)
- refine KLDivLoss documentation (fixes #7464)
- fix some sphinx-build warnings

Thank you, Hugh Perkins, for reporting!
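
For reference, the scatter_ semantics the improved docs spell out, sketched with illustrative values (for dim=0: out[index[i][j]][j] = src[i][j]):

import torch

src = torch.tensor([[1.0, 2.0, 3.0, 4.0, 5.0]])
index = torch.tensor([[0, 1, 2, 0, 0]])
out = torch.zeros(3, 5).scatter_(0, index, src)
# tensor([[1., 0., 0., 4., 5.],
#         [0., 2., 0., 0., 0.],
#         [0., 0., 3., 0., 0.]])
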
2018-05-13 15:44:24 -04:00
James Reed
d9c74f727c
Fix ONNX tutorial specification for input names (#7433)
* Fix ONNX tutorial specification for input names

* Some more updates
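
A minimal export sketch using the named-inputs convention the tutorial fix covers (model, file name, and names are illustrative):

import torch
import torch.onnx

model = torch.nn.Linear(3, 2)
dummy = torch.randn(1, 3)
torch.onnx.export(model, dummy, 'linear.onnx',
                  input_names=['input'], output_names=['output'])
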
2018-05-09 13:01:53 -07:00
Tongzhou Wang
55b8317f1d
Update gif with new logo (#7301)
* Update gif with new logo

* add requires_grad=True
2018-05-04 16:47:08 -04:00
Richard Zou
24681a8e49
Update unstable docs logo to new logo. (#7305)
Fixes #7302
2018-05-04 16:44:58 -04:00
Tongzhou Wang
371cc1e2db update the gif for 0.4 (#7262) 2018-05-03 14:23:08 -07:00
Soumith Chintala
1904058370
update logos (#7184) 2018-05-02 10:56:20 -07:00
gchanan
8031da5479
Implement torch.as_tensor, similar to numpy.asarray. (#7109)
* Implement torch.as_tensor, similar to numpy.asarray.
torch.as_tensor behaves like torch.tensor except that it avoids copies if possible, so it is also somewhat like tensor.new but without the size overloads.
I didn't add a requires_grad field because we haven't decided on the semantics, such as as_param.

* Remove requires_grad for doc.
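
A sketch of the copy-avoiding behavior (values are illustrative):

import numpy as np
import torch

a = np.array([1, 2, 3])
t = torch.as_tensor(a)        # shares memory with `a` (same dtype, CPU)
t[0] = -1
assert a[0] == -1             # no copy was made
f = torch.as_tensor(a, dtype=torch.float32)   # dtype conversion forces a copy
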
2018-05-01 12:54:43 -04:00
Masaki Kozuki
ba046331e8 add spectral normalization [pytorch] (#6929)
* initial commit for spectral norm

* fix comment

* edit rst

* fix doc

* remove redundant empty line

* fix nit mistakes in doc

* replace l2normalize with F.normalize

* fix chained `by`

* fix docs

fix typos
add comments related to power iteration and epsilon
update link to the paper
make some comments specific

* fix typo
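
A minimal sketch of the added utility (sizes are illustrative):

import torch
from torch.nn.utils import spectral_norm

m = spectral_norm(torch.nn.Linear(20, 40))   # weight divided by its largest singular value
y = m(torch.randn(2, 20))                    # each forward updates the power-iteration estimate
m.weight_u.shape                             # torch.Size([40]): left singular vector buffer
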
2018-05-01 17:00:30 +08:00
Peter Goldsborough
b70b7a80d4 Inline JIT C++ Extensions (#7059)
Adds the ability to JIT-compile C++ extensions from strings:

>>> from torch.utils.cpp_extension import load_inline
>>> source = '''
... at::Tensor sin_add(at::Tensor x, at::Tensor y) {
...   return x.sin() + y.sin();
... }
... '''
>>> module = load_inline(name='inline_extension', cpp_sources=source, functions='sin_add')
Fixes #7012

* Inline JIT C++ Extensions

* jit_compile_sources -> jit_compile

* Split up test into CUDA and non-CUDA parts

* Documentation fixes

* Implement prologue and epilogue generation

* Remove extra newline

* Only create the CUDA source file when cuda_sources is passed
2018-04-30 11:48:44 -04:00
Soumith Chintala
6a55d86234 GroupNorm docs (#7086) 2018-04-30 09:40:34 +02:00
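
The documented module, sketched (sizes are illustrative; channels must divide evenly into groups):

import torch
import torch.nn as nn

gn = nn.GroupNorm(num_groups=4, num_channels=8)   # normalizes each group of 2 channels
y = gn(torch.randn(2, 8, 5, 5))                   # batch-size independent, unlike BatchNorm
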
Thomas Viehmann
1b0ad8678b import *Sampler to utils.data (Better fix than #6982) (#7007) 2018-04-27 10:18:29 +02:00
Richard Zou
9dd73aa7eb Fix stable link to always be /stable/ (#6907) 2018-04-24 15:42:46 -04:00
Richard Zou
0430bfe40b
[docs] Update broadcasting and cuda semantics notes (#6904)
* [docs] Update broadcasting and cuda semantics notes

* Update multiprocessing.rst

* address comments

* Address comments
2018-04-24 13:41:24 -04:00
Richard Zou
82a33c32aa Update device docs (#6887)
Tell users that a plain string can be used anywhere a torch.device is expected
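
That is (an illustrative sketch):

import torch

x = torch.randn(2, device=torch.device('cpu'))
y = torch.randn(2, device='cpu')   # the string form is equivalent
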
2018-04-23 19:04:20 -04:00
Tongzhou Wang
1ee009599c Add torch.get_default_dtype doc (#6872)
* add torch.get_default_dtype doc

* address comments
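
The documented accessor, sketched alongside its setter:

import torch

torch.get_default_dtype()                # torch.float32 out of the box
torch.set_default_dtype(torch.float64)
torch.tensor([1.5]).dtype                # now torch.float64
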
2018-04-23 18:58:01 -04:00
peterjc123
a4dbd37403 [doc] Minor fixes for Windows docs (#6853) 2018-04-23 13:15:33 +02:00
peterjc123
56567fe47d Add documents for Windows (#6653)
* Add Windows doc

* some minor fixes

* Fix typo

* more minor fixes

* Fixes on dataloader
2018-04-22 15:18:02 -04:00
li-roy
d564ecb4a5 Update docs with new tensor repr (#6454)
* Update docs with new tensor repr

* remove cuda in dtype

* remove changes to gloo submodule

* [docs] document tensor.new_* ctor

* [docs] Add docs for tensor.to(), tensor.float(), etc

* [docs] Moar examples for docs.

* [docs] Warning for tensor ctor copy behavior

* Quick fix

* [docs] Document requires_grad_()

* [docs] Add example for requires_grad_()

* update slogdet and *fft

* update tensor rst

* small fixes

* update some docs

* additional doc changes

* update torch and tensor docs

* finish changing tensor docs

* fix flake8

* slogdet with negative det

* Update functional.py tensor ctors

* Fix nll_loss docs

* reorder to move device up

* torch.LongTensor -> torch.tensor or torch.empty in docs

* update tensor constructors in docs

* change tensor constructors

* change constructors

* change more Tensor() to tensor()

* Show requires_grad_ docs

* Fix set_default_dtype docs

* Link to torch.no_grad, etc, from torch doc

* Add dtype aliases to table

* regen docs again

* Tensor attributes stub page

* link to inplace sampling

* Link torch.dtype, device, and layout

* fix dots after nonfinite floats

* better layout docs
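
The constructor style the updated docs converge on, sketched with illustrative values:

import torch

t = torch.tensor([1.0, 2.0])   # copies data; the docs now warn about this
e = torch.empty(2, 3)          # uninitialized, preferred over torch.Tensor(2, 3)
z = t.new_zeros(5)             # new_* ctors match t's dtype and device
d = t.to(torch.float64)        # dtype (or device) conversion
g = t.requires_grad_()         # in-place requires_grad toggle, now documented
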
2018-04-21 07:35:37 -04:00
Richard Zou
2acc247517
[docs] Update autograd notes (#6769) 2018-04-19 13:34:14 -04:00
Richard Zou
47bd4be4d3
[docs] More factory functions (#6709)
* More factory functions

Changes:
- Added the remaining factory and factory-like functions
- Better argument reuse via string templates
- Link under torch.rst's Creation Ops to the randomized creation ops

* Add double backticks around False

* fix flake8

* Fix False

* Clarify comment: hopefully it is clearer now
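
A sketch of the factory vs. factory-like split the new docs organize (values are illustrative):

import torch

a = torch.full((2, 3), 7.0)   # factory: size given explicitly
b = torch.full_like(a, 0.0)   # factory-like: size/dtype/device taken from `a`
r = torch.rand_like(a)        # randomized creation op, linked from Creation Ops
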
2018-04-19 13:16:07 -04:00
Richard Zou
cc3284cad3
[docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763) 2018-04-19 13:15:27 -04:00