Commit Graph

303 Commits

Author SHA1 Message Date
Richard Zou
9dd73aa7eb Fix stable link to always be /stable/ (#6907) 2018-04-24 15:42:46 -04:00
Richard Zou
0430bfe40b
[docs] Update broadcasting and cuda semantics notes (#6904)
* [docs] Update broadcasting and cuda semantics notes

* Update multiprocessing.rst

* address comments

* Address comments
2018-04-24 13:41:24 -04:00
Richard Zou
82a33c32aa Update device docs (#6887)
Tell users that one can substitute torch.device with a string
2018-04-23 19:04:20 -04:00
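
A brief illustration of the behaviour the commit above documents: wherever a `torch.device` is accepted, a plain string can usually be passed instead. The shapes and device names below are only illustrative.

```python
import torch

# These two calls are equivalent: the string is converted to a torch.device.
a = torch.randn(2, 3, device=torch.device("cpu"))
b = torch.randn(2, 3, device="cpu")

# A CUDA device can likewise be named by string, e.g. "cuda:0",
# instead of constructing torch.device("cuda", 0) explicitly.
if torch.cuda.is_available():
    c = torch.randn(2, 3, device="cuda:0")
```
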
Tongzhou Wang
1ee009599c Add torch.get_default_dtype doc (#6872)
* add torch.get_default_dtype doc

* address comments
2018-04-23 18:58:01 -04:00
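
A minimal example of the API documented above; the initial value shown assumes a fresh interpreter where the default dtype has not been changed.

```python
import torch

print(torch.get_default_dtype())       # torch.float32 on a fresh start
torch.set_default_dtype(torch.float64)
print(torch.get_default_dtype())       # torch.float64
print(torch.tensor([1.5]).dtype)       # float literals now default to float64
```
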
peterjc123
a4dbd37403 [doc] Minor fixes for Windows docs (#6853) 2018-04-23 13:15:33 +02:00
peterjc123
56567fe47d Add documents for Windows (#6653)
* Add Windows doc

* some minor fixes

* Fix typo

* more minor fixes

* Fixes on dataloader
2018-04-22 15:18:02 -04:00
li-roy
d564ecb4a5 Update docs with new tensor repr (#6454)
* Update docs with new tensor repr

* remove cuda in dtype

* remove changes to gloo submodule

* [docs] document tensor.new_* ctor

* [docs] Add docs for tensor.to(), tensor.float(), etc

* [docs] Moar examples for docs.

* [docs] Warning for tensor ctor copy behavior

* Quick fix

* [docs] Document requires_grad_()

* [docs] Add example for requires_grad_()

* update slogdet and *fft

* update tensor rst

* small fixes

* update some docs

* additional doc changes

* update torch and tensor docs

* finish changing tensor docs

* fix flake8

* slogdet with negative det

* Update functional.py tensor ctors

* Fix nll_loss docs

* reorder to move device up

* torch.LongTensor -> torch.tensor or torch.empty in docs

* update tensor constructors in docs

* change tensor constructors

* change constructors

* change more Tensor() to tensor()

* Show requires_grads_ docs

* Fix set_default_dtype docs

* Link to torch.no_grad, etc, from torch doc

* Add dtype aliases to table

* regen docs again

* Tensor attributes stub page

* link to inplace sampling

* Link torch.dtype, device, and layout

* fix dots after nonfinite floats

* better layout docs
2018-04-21 07:35:37 -04:00
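
A short sketch of the 0.4-style idioms this documentation pass covers (tensor construction, `new_*` constructors, `.to()` conversions, and `requires_grad_()`); the shapes and dtypes are just examples.

```python
import torch

x = torch.tensor([[1., 2.], [3., 4.]])     # data-copying constructor
y = x.new_zeros(2, 2)                       # new_* keeps x's dtype and device
z = x.to(torch.float64)                     # dtype conversion via .to()
w = torch.empty(2, 2).requires_grad_()      # in-place flag setter
print(x)                                    # printed with the new tensor repr
```
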
Richard Zou
2acc247517
[docs] Update autograd notes (#6769) 2018-04-19 13:34:14 -04:00
Richard Zou
47bd4be4d3
[docs] More factory functions (#6709)
* More factory functions

Changes:
- Added the remaining factory and factory-like functions
- Better argument reuse via string templates
- Link under torch.rst's Creation Ops to the randomized creation ops

* Add double tick around False

* fix flake8

* Fix False

* Clarify comment: hopefully it is clearer now
2018-04-19 13:16:07 -04:00
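
To make "factory and factory-like functions" concrete, here is a small sample of the creation ops the commit documents; the sizes and fill values are arbitrary.

```python
import torch

a = torch.zeros(2, 3)                       # factory: new tensor of a given size
b = torch.full((2, 3), 7.0)                 # factory with an explicit fill value
c = torch.ones_like(a)                      # factory-like: mirrors a's size/dtype/device
d = torch.rand(2, 3, dtype=torch.float64)   # randomized creation op with a dtype
```
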
Richard Zou
cc3284cad3
[docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763) 2018-04-19 13:15:27 -04:00
Richard Zou
9c682f02b7
[docs] Fix some sphinx warnings (#6764)
These aren't important, but too many of them can obscure real warnings
in the docs build.
2018-04-19 12:37:42 -04:00
Richard Zou
2d09799950
[docs] Document CUDA profiling gotchas in bottleneck docs (#6715) 2018-04-18 16:55:13 -04:00
Thomas Viehmann
bd0cc7d364 Implement torch.einsum (fixes #1889) (#6307)
* start at generic trilinear

* Implement einsum (fixes #1889)

This provides a simple implementation of einsum. It is built on
top of the work for computing bilinear (#6110).
It uses a naive left-to-right resolution at the moment.
Autograd is able to differentiate it by itself.
The obvious unsupported feature is taking diagonals (einsum('ii->i', (a,))).

* add tests and docs

* fix flake8

* clean diff

* rebase on current master to resolve conflicting String wrapping

* clean up after rebase

* better commentary in einsum and sumproduct_pair

* don't say fixme if it's fixed and rename num_outputs to num_output_dims

* adapt python wrapper to use std::string instead of String to avoid typedef at::String

* typos and some vector to array conversion

* fix accidental python<->python3 change

* really fix bad rebase
2018-04-18 13:41:27 +02:00
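
A couple of usage examples for the `torch.einsum` introduced above; note the commit itself states that diagonal extraction (`'ii->i'`) is not supported in this initial version.

```python
import torch

A = torch.randn(3, 4)
B = torch.randn(4, 5)
v = torch.randn(4)

mm = torch.einsum('ij,jk->ik', (A, B))    # matrix multiplication
mv = torch.einsum('ij,j->i', (A, v))      # matrix-vector product
outer = torch.einsum('i,j->ij', (v, v))   # outer product
```
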
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Richard Zou
7de61c3b8c
Update tensors.rst Tensor introduction (#6670)
Changes:
- Deleted docs for old constructor. Add link to new `torch.tensor` ctor
- Add docs for `torch.tensor`
- Add some info on dtypes to the top of `tensors.rst`.
2018-04-17 16:52:22 -04:00
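
An illustrative contrast between the constructors described above; the dtypes shown follow the usual inference rules.

```python
import torch

a = torch.tensor([1, 2, 3])      # infers an integer dtype (torch.int64)
b = torch.tensor([1., 2., 3.])   # infers the default float dtype (torch.float32)
c = torch.tensor([1, 2, 3], dtype=torch.float64, device="cpu")
# The legacy torch.Tensor(...) constructor that the commit de-emphasizes always
# produced the default (float) tensor type regardless of the input data.
```
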
Richard Zou
1f2829dd2a
Update tensor factory method docs (#6640)
* Update tensor factory method docs

Also add new docs for `torch.empty`.

* Add full; some refactoring to make docs nicer
2018-04-17 14:30:46 -04:00
Tony Beltramelli
7fcaf3b49e Update torch.nn.init and torch.nn.utils.clip_grad (#6173)
Introducing two updates.

1. Add param to He initialization scheme in torch.nn.init
Problem solved:
The function calculate_gain can take an argument to specify the type of non-linearity used. However, it wasn't possible to pass this argument directly to the He / Kaiming weight initialization function.

2. Add util to clip gradient value in torch.nn.utils.clip_grad
Problem solved:
DL libraries typically provide users with easy access to functions for clipping the gradients both by norm and by a fixed value. However, the clip_grad.py utils module only had a function to clip the gradient norm.

* add param to He initialization scheme in torch.nn.init

* add util to clip gradient value in torch/nn/utils/clip_grad.py

* update doc in torch.nn.utils.clip_grad

* update and add test for torch.nn.utils.clip_grad

* update function signature in torch.nn.utils.clip_grad to match suffix_ convention

* ensure backward compatibility in torch.nn.utils.clip_grad

* remove DeprecationWarning in torch.nn.utils.clip_grad

* extend test and implementation of torch.nn.utils.clip_grad

* update test and implementation torch.nn.utils.clip_grad
2018-04-17 11:32:32 -04:00
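
A hedged sketch of the two additions described above, using the post-0.4 underscored names (`kaiming_normal_`, `clip_grad_value_`); substitute the non-underscored names on older releases.

```python
import torch
import torch.nn as nn

layer = nn.Linear(128, 64)

# 1. Pass the nonlinearity through to the He/Kaiming initializer.
nn.init.kaiming_normal_(layer.weight, nonlinearity='leaky_relu')

# 2. Clip gradients by value (in addition to the existing norm-based utility).
out = layer(torch.randn(8, 128)).sum()
out.backward()
nn.utils.clip_grad_value_(layer.parameters(), clip_value=0.5)
```
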
gchanan
c77fca570c Add device docs; match constructor parameter names with attribute names. (#6633)
* Add device docs; match constructor parameter names with attribute names.

* Use double quotes for strings.

* Update printing.

* Separate device ordinal-only construction into a separate note.

* Use current device.
2018-04-17 09:55:44 -04:00
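
A few concrete forms of `torch.device` construction mentioned in the commit (constructor parameter names matching the `.type`/`.index` attributes, plus the ordinal-only form); this is illustration, not an exhaustive list.

```python
import torch

d1 = torch.device('cuda:0')     # from a string
d2 = torch.device('cuda', 0)    # type and index as separate arguments
d3 = torch.device('cpu')

print(d2.type, d2.index)        # 'cuda' 0 -- attribute names match the ctor args
d4 = torch.device(0)            # ordinal-only construction (treated as a CUDA ordinal)
```
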
Semion Sidorenko
639dd0e324 Fix an error in the tensor docs. (#6658)
The docs incorrectly stated that there were seven CPU tensor types and
eight GPU tensor types, before listing eight types for both CPU and GPU.
2018-04-17 09:54:19 -04:00
gchanan
d7cb78478f Split set_default_tensor_type(dtype) into set_default_dtype(dtype). (#6599)
* Split set_default_tensor_type(dtype) into set_default_dtype(dtype).

* Fix flake8.

The difference between this one and set_default_tensor_type is that it only sets the scalar type. What determines the type + device of a tensor returned from a factory function with defaults is the default tensor type + the current device (if the default tensor type is cuda). This just changes the scalar type of the default tensor type.

We do eventually want to deprecate set_default_tensor_type; it is not clear how to do that in a sensible and backwards-compatible way.
2018-04-16 13:49:00 -04:00
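
A short example of the split described above; `set_default_dtype` changes only the scalar type used by factory functions with defaults, not the device.

```python
import torch

torch.set_default_dtype(torch.float64)
print(torch.ones(3).dtype)     # torch.float64
print(torch.ones(3).device)    # still the CPU default device

# The older, broader knob remains available for now:
# torch.set_default_tensor_type(torch.cuda.FloatTensor)
```
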
Fritz Obermeyer
76ca037069 [distributions] Implement Independent distribution (#6615)
* Implement Independent distribution

* Add docs for Independent distribution
2018-04-16 11:42:12 -04:00
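
A minimal usage sketch of the new `Independent` wrapper: it reinterprets some batch dimensions of a base distribution as event dimensions, which changes what `log_prob` sums over.

```python
import torch
from torch.distributions import Normal, Independent

base = Normal(torch.zeros(5, 3), torch.ones(5, 3))   # batch_shape = (5, 3)
diag = Independent(base, 1)                          # batch_shape = (5,), event_shape = (3,)

x = torch.randn(5, 3)
print(base.log_prob(x).shape)   # torch.Size([5, 3])
print(diag.log_prob(x).shape)   # torch.Size([5]) -- summed over the event dim
```
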
Teng Li
f5beff334b Added distributed docs on NCCL2 backend/functions and launch module (#6579) 2018-04-15 21:53:10 -04:00
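
For context on the commit above, a hedged sketch of how the NCCL backend and the launch utility are typically used together; the script name and process counts are placeholders.

```python
# Launched with something like (one process per GPU; flags are illustrative):
#   python -m torch.distributed.launch --nproc_per_node=4 train.py

import torch.distributed as dist

# The launch module sets the environment variables (rank, world size, master
# address) that init_process_group reads when initializing the NCCL backend.
dist.init_process_group(backend='nccl')
```
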
Richard Zou
99cfb56698 Add docs for torch.randn_like (#6565)
* Add docs for torch.randn_like

* Address comments

* Address comments

* Address comments
2018-04-13 11:33:56 -04:00
Richard Zou
16704249cb Add docs for tensor.index_put_ (#6563) 2018-04-12 17:00:02 -04:00
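
A small example of the newly documented in-place op; the indices are given as a tuple of index tensors, one per indexed dimension, mirroring `t[idx] = values` as an explicit method.

```python
import torch

t = torch.zeros(3, 4)
rows = torch.tensor([0, 2])
cols = torch.tensor([1, 3])
t.index_put_((rows, cols), torch.tensor([10., 20.]))   # t[0, 1] = 10, t[2, 3] = 20
```
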
Tongzhou Wang
6b7ec95abb Link relevant FAQ section in DataLoader docs (#6476)
* Link FAQ section on workers returning same random numbers in DataLoader docs

* explicitly mention section names
2018-04-11 13:41:46 -04:00
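
The FAQ section being linked concerns workers that inherit the same NumPy random state; a common workaround (not part of this commit) is a `worker_init_fn` that reseeds each worker, sketched below.

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def worker_init_fn(worker_id):
    # Derive a distinct NumPy seed per worker from the torch base seed.
    np.random.seed((torch.initial_seed() + worker_id) % 2**32)

ds = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
loader = DataLoader(ds, batch_size=10, num_workers=2, worker_init_fn=worker_init_fn)
```
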
Xingdong Zuo
494aaab00e Add docs for item() (#6508) 2018-04-11 12:40:01 -04:00
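
The one-liner documented above: `item()` extracts a standard Python number from a single-element tensor.

```python
import torch

loss = torch.tensor([0.25]).sum()
print(loss.item())              # 0.25 as a plain Python float
print(torch.tensor(3).item())   # 3 as a plain Python int
```
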
Naman Jain
1e5611014d Adding autofunction entry for torch.randint (#6507)
* added randint function in the ATen yaml as well as TensorFactories.cpp

* corrected randint

* randint with overloading complete, getting tuple of ints behaviour though

* done randintlike and randint_out

Left: adding docs and tests, and removing the bug on size = (5)

* Removed my error messages, ThRandomTensor will handle all exceptions

* added docs and tests, corrected a mistake

Tested with manual seeds in some test cases as well. Seems fine to me (check documentation though)

* corrected indentation to spaces, and improved sizes argument description

* made documentation argument description shorter

* added whitespace after ',' in torch docs

* added spaces in documentation

* added more tests (including bounds and overloading features)

* added whitespaces in test_torch

* removed trailing whitespaces

* removed whitespace from a blank line

* removed positive requirement from docs. Added dtype argument and gave an example

* made randint over randn in all files

* changed to data type for dtype in docs for randint

* added autofunction entry for randint in torch.rst
2018-04-11 12:34:25 -04:00
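
Usage examples matching the documented behaviour of `torch.randint` (the high bound is exclusive, and the low bound may be omitted); exact outputs depend on the RNG state.

```python
import torch

a = torch.randint(10, (2, 3))                       # values in [0, 10)
b = torch.randint(3, 10, (2, 2))                    # values in [3, 10)
c = torch.randint(0, 2, (4,), dtype=torch.int64)    # explicit dtype
```
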
Tongzhou Wang
d9345aa60f
add checkpoint to index.rst (#6498) 2018-04-11 02:50:01 -04:00
Tongzhou Wang
0dff2b5e35
[fft] [3 of 3] Implements backward of fft ifft rfft irfft (#5537)
* change irfft signal_sizes arg to be the last

* add docs for fft, ifft, rfft, irfft; update doc for stft

* fix typo in window function docs

* improve gradcheck error message

* implement backward of fft, ifft, rfft, irfft

* add grad tests for fft, ifft, rfft, irfft

* fix nits and typos from #6118

* address comments
2018-04-10 22:09:36 -04:00
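
A hedged sketch of the 0.4-era spectral API whose backward passes this PR implements (these `torch.rfft`/`torch.irfft` functions were later replaced by the `torch.fft` module, so this only applies to that era); `signal_sizes` is the argument the first bullet moves to the end.

```python
import torch

x = torch.randn(4, 8, requires_grad=True)

X = torch.rfft(x, 2)                            # 2-D real-to-complex FFT
y = torch.irfft(X, 2, signal_sizes=x.shape)     # inverse, recovering the original size

y.sum().backward()                              # backward now supported
print(x.grad.shape)                             # torch.Size([4, 8])
```
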
Priya Goyal
e3196e0ea8
[Re-checkpointing] Autograd container for trading compute for memory (#6467)
* Autograd container for trading compute for memory

* add a unit test for checkpoint

* address comments

* address review comments

* adding some docs for the checkpoint api

* more comments

* more comments

* repro bug

* Fix a subtle bug/apply some review comments

* Update checkpoint.py

* Run everything in grad mode

* fix flake and chunk=1

* use imperative backward as per discussion

* remove Variable and also add models and test for models

* Add a simple thread local variable to check for autograd grad mode

* remove models and models test after debugging

* address review comments

* address more comments

* address more comments
2018-04-10 15:26:24 -04:00
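
A minimal sketch of the checkpointing API added here: intermediate activations inside the checkpointed segments are not stored and are recomputed during backward, trading compute for memory.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
x = torch.randn(16, 64, requires_grad=True)

out = checkpoint_sequential(model, 2, x)   # split into 2 chunks, recompute on backward
out.sum().backward()
```
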
Richard Zou
04c215b445 Add link in docs menu to stable docs (#6475)
Part of #5738. Warns users that they're not viewing the latest stable
release docs.

We should remember to delete this when cutting the 0.4.0 release docs. (We'd just delete the div in pytorch.github.io.)
2018-04-10 14:53:04 -04:00
Tongzhou Wang
59bda9a8c4
Fix reflection padding boundary checks (#6438)
* Fix Reflection padding boundary checks

* Improve padding docs

* fix lint
2018-04-10 10:37:01 -04:00
Richard Zou
265e1a97ec Add different logo for master docs (#6446) 2018-04-09 18:48:53 -04:00
Richard Zou
1b3a5a4e7d bottleneck supports better user-provided arguments (#6425)
Fixes #6312.

Changed bottleneck's arg parser to use argparse.REMAINDER. This lets
the user specify args as `python -m torch.utils.bottleneck script.py
[args]` (previously, a -- was needed after `bottleneck` and before
`script.py`).
2018-04-09 13:57:26 -04:00
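
An illustrative (not verbatim) sketch of the parsing approach the fix describes: `argparse.REMAINDER` collects everything after the script path, so no `--` separator is needed.

```python
import argparse

parser = argparse.ArgumentParser(description='bottleneck-style CLI sketch')
parser.add_argument('scriptfile', type=str)
parser.add_argument('args', nargs=argparse.REMAINDER)

ns = parser.parse_args(['script.py', '--lr', '0.1', 'data/train.csv'])
print(ns.scriptfile)   # 'script.py'
print(ns.args)         # ['--lr', '0.1', 'data/train.csv']
```
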
Tongzhou Wang
4d15442ebc
Add total_length option to pad_packed_sequence (#6327)
* add total_length to pad_packed_sequence; add example on how to use pack->rnn->unpack with DP

* address comments

* fix typo
2018-04-08 20:25:48 -04:00
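
A condensed version of the pack → RNN → unpack pattern the commit documents; `total_length` ensures the unpacked output is padded back to a fixed length (useful when the batch was split, e.g. by DataParallel). The sizes are illustrative.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

T, B, F = 10, 4, 8                      # max length, batch, features
x = torch.randn(T, B, F)
lengths = [10, 7, 5, 3]                 # sorted descending, as required in this era

rnn = nn.GRU(F, 16)
packed = pack_padded_sequence(x, lengths)
out_packed, _ = rnn(packed)
out, out_lengths = pad_packed_sequence(out_packed, total_length=T)
print(out.shape)                        # torch.Size([10, 4, 16])
```
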
Tongzhou Wang
e0f3e5dc77 fix activation images not showing up on official website (#6367) 2018-04-07 11:06:24 -04:00
Kento NOZAWA
c00ee6da8f Fix typos (#6348)
* Fix typo

* Fix typo

* Update faq.rst
2018-04-06 11:06:42 -04:00
Masaki Kozuki
a093ec997f fix typo (#6329) 2018-04-05 21:36:16 -04:00
Vishwak Srinivasan
0aa35780bf [ready] Implement log2 and log10 in PyTorch (#6272)
* Implemented log2 and log10

* Re-add incorrectly removed files

* Fix minor bugs

* Fix log1p docs

* Add a try-except for python2 math module in log2 test

* Revert changes made to aten/doc/*

* Fix docstring errors

* Fix windows build
2018-04-05 14:28:37 -04:00
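
The two new element-wise ops in use; the values follow directly from the math.

```python
import torch

x = torch.tensor([1., 2., 8., 100.])
print(torch.log2(x))    # tensor([0.0000, 1.0000, 3.0000, 6.6439])
print(torch.log10(x))   # tensor([0.0000, 0.3010, 0.9031, 2.0000])
```
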
Peter Goldsborough
9ba70856a1 Add max_values and argmax convenience functions to ATen (#6201)
* Add max_values and argmax convenience functions to ATen

* Add documentation for torch.argmax/argmin and skip max_values

* Add tests for argmax/argmin

* Dont default the dim argument

* Use dim=0 in test_torch.py for argmax tests

* Implement argmin()  and argmax() without dim

* Call .contiguous() before .view(-1)
2018-04-04 15:53:26 -04:00
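
Small examples of the convenience functions documented here; with `dim` they return indices along that dimension, without it they index into the flattened tensor.

```python
import torch

x = torch.tensor([[1., 5., 3.],
                  [9., 2., 4.]])
print(torch.argmax(x))          # tensor(3) -- flattened index of 9.
print(torch.argmax(x, dim=1))   # tensor([1, 0])
print(torch.argmin(x, dim=0))   # tensor([0, 1, 0])
```
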
Kaiyu Shi
605307f8f3 Add support for printing extra information in Module and refactor redundant codes (#5936)
This PR enables users to print extra information about their subclassed nn.Module.
For now, I simply insert the user-defined string at the end of the module name; the exact placement should be discussed in this PR.

Before this PR, users had to redefine __repr__ and copy & paste the source code from Module.

* Add support for extra information on Module

* Rewrite the repr method of Module

* Fix flake8

* Change the __repr__ to get_extra_repr in Linear

* Fix extra new-line for empty line

* Add test for __repr__ method

* Fix bug of block string indent

* Add indent for multi-line repr test.

* Address review comments

* Update tutorial for creating nn.Module

* Fix flake8, add extra_repr of bilinear

* Refactor DropoutNd

* Change to extra_repr in some Modules

* Fix flake8

* Refactor padding modules

* Refactor pooling module

* Fix typo

* Change to extra_repr

* Fix bug for GroupNorm

* Fix bug for LayerNorm
2018-04-02 13:52:33 -04:00
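
A minimal sketch of the mechanism this PR introduces, using the final `extra_repr` name (an earlier revision called it `get_extra_repr`): subclasses override it instead of rewriting `__repr__`.

```python
import torch.nn as nn

class MyLinear(nn.Module):
    def __init__(self, in_features, out_features, bias=True):
        super(MyLinear, self).__init__()
        self.linear = nn.Linear(in_features, out_features, bias=bias)
        self.in_features, self.out_features, self.use_bias = in_features, out_features, bias

    def forward(self, x):
        return self.linear(x)

    def extra_repr(self):
        # Extra information shown inside this module's repr line.
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.use_bias)

print(MyLinear(4, 2))
```
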
Thomas Viehmann
32ba2ca203 add documentation for diagflat and diagonal (#6161) 2018-03-31 18:03:21 +02:00
Richard Zou
1449c9f754 Update autograd docs (#5907)
* Update autograd docs

* Deprecate 'grad_variables' in backward().

Advise to replace with 'grad_tensors'.

* Resolve saved_variables/saved_tensors

* Tensor section

* Address comments

* Address comments

* Address comments
2018-03-30 15:33:11 -04:00
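
A short before/after for the deprecation mentioned in the bullets: the keyword to `torch.autograd.backward` is now spelled `grad_tensors`.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2

# Deprecated spelling: torch.autograd.backward([y], grad_variables=[torch.ones(3)])
torch.autograd.backward([y], grad_tensors=[torch.ones(3)])
print(x.grad)   # tensor([2., 2., 2.])
```
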
Tongzhou Wang
4f05cb710e Add underscore to nn.init.* and deprecate the original ones (#6093)
Fixes #5946.

* add underscore to nn.init.* and deprecate the original ones

* add a test for deprecation
2018-03-29 13:26:12 -04:00
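
The renaming in practice: the in-place initializers now carry a trailing underscore, and the old names emit a deprecation warning.

```python
import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.xavier_uniform_(w)     # new, underscored name
# nn.init.xavier_uniform(w)    # old name: still works but warns about deprecation
```
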
Peter Goldsborough
47f31cb1e6 Update FAQ to make more sense after tensor/variable merge (#6017) 2018-03-27 07:48:25 -07:00
Richard Zou
5d628db0a2 Deprecate ctx.saved_variables via python warning. (#5923)
* Deprecate ctx.saved_variables via python warning.

Advises replacing saved_variables with saved_tensors.
Also replaces all instances of ctx.saved_variables with ctx.saved_tensors in the
codebase.

Test by running:
```
import torch
from torch.autograd import Function

class MyFunction(Function):
    @staticmethod
    def forward(ctx, tensor1, tensor2):
        ctx.save_for_backward(tensor1, tensor2)
        return tensor1 + tensor2

    @staticmethod
    def backward(ctx, grad_output):
        var1, var2 = ctx.saved_variables
        return (grad_output, grad_output)

x = torch.randn((3, 3), requires_grad=True)
y = torch.randn((3, 3), requires_grad=True)
model = MyFunction()
model.apply(x, y).sum().backward()
```
and assert the warning shows up.

* Address comments

* Add deprecation test for saved_variables
2018-03-26 14:13:45 -04:00
Fritz Obermeyer
03a6952ac9 [distributions] Fix scalar bugs in torch.distributions.transforms etc. (#5931) 2018-03-25 13:33:31 +02:00
Richard Zou
feb2785c5c Implement torch.util.bottleneck (#5216)
* Implement torch.util.bottleneck

This is a tool that is intended to be used as initial exploratory
debugging of bottlenecks in user scripts. Run it with

    python -m torch.utils.bottleneck /path/to/source/script.py

* Refactor and address comments

* Fix tests

* Allow passing of args to the profiled script

* Replace Variable
2018-03-23 17:27:35 -04:00
Vishwak Srinivasan
8cf521b522 fix mvn docs (#5967) 2018-03-23 14:26:55 -04:00
Brooks
1936753708 Added an implementation of a multivariate normal distribution (#4950) 2018-03-19 23:22:46 +01:00
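
A usage sketch for the distribution added in this commit; the parameters are illustrative.

```python
import torch
from torch.distributions import MultivariateNormal

loc = torch.zeros(2)
cov = torch.tensor([[1.0, 0.3],
                    [0.3, 2.0]])

mvn = MultivariateNormal(loc, covariance_matrix=cov)
sample = mvn.sample()
print(mvn.log_prob(sample))
```
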