Commit Graph

592 Commits

Richard Zou
2d09799950
[docs] Document CUDA profiling gotchas in bottleneck docs (#6715) 2018-04-18 16:55:13 -04:00
Thomas Viehmann
bd0cc7d364 Implement torch.einsum (fixes #1889) (#6307)
* start at generic trilinear

* Implement einsum (fixes #1889)

This provides a simple implementation of einsum. It is built on
top of the work for computing bilinear (#6110).
It uses a naive left-to-right resolution at the moment.
Autograd is able to differentiate by itself.
The obvious unsupported feature is taking diagonals (einsum('ii->i', (a,))).

* add tests and docs

* fix flake8

* clean diff

* rebase on current master to resolve conflicting String wrapping

* clean up after rebase

* better commentary in einsum and sumproduct_pair

* don't say fixme if it's fixed and rename num_outputs to num_output_dims

* adapt python wrapper to use std::string instead of String to avoid typedef at::String

* typos and some vector to array conversion

* fix accidental python<->python3 change

* really fix bad rebase
2018-04-18 13:41:27 +02:00
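
A minimal sketch of the einsum call this commit enables, assuming the 0.4-era style (equation string plus a tuple of operands):

```
import torch

a = torch.randn(2, 3, requires_grad=True)
b = torch.randn(3, 4)

# matrix multiplication written as an einsum: contract over the shared index j
c = torch.einsum('ij,jk->ik', (a, b))
print(c.shape)  # torch.Size([2, 4])

# autograd differentiates through the naive left-to-right contraction by itself
c.sum().backward()
```
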
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Richard Zou
7de61c3b8c
Update tensors.rst Tensor introduction (#6670)
Changes:
- Delete docs for the old constructor; add a link to the new `torch.tensor` ctor
- Add docs for `torch.tensor`
- Add some info on dtypes to the top of `tensors.rst`.
2018-04-17 16:52:22 -04:00
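
A hedged one-liner sketch of the new `torch.tensor` ctor the docs now point to (it copies its data and infers a dtype unless one is given):

```
import torch

t = torch.tensor([[1., -1.], [1., -1.]])        # dtype inferred (float)
i = torch.tensor([1, 2, 3], dtype=torch.int64)  # dtype given explicitly
```
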
Richard Zou
1f2829dd2a
Update tensor factory method docs (#6640)
* Update tensor factory method docs

Also add new docs for `torch.empty`.

* Add full; some refactoring to make docs nicer
2018-04-17 14:30:46 -04:00
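
A short sketch of the factory functions the updated docs cover, assuming the 0.4 signatures:

```
import torch

x = torch.empty(2, 3)         # allocated but uninitialized values
y = torch.full((2, 3), 3.14)  # every element set to the fill value
```
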
Tony Beltramelli
7fcaf3b49e Update torch.nn.init and torch.nn.utils.clip_grad (#6173)
Introducing two updates.

1. Add param to He initialization scheme in torch.nn.init
Problem solved:
The function calculate_gain can take an argument to specify the type of non-linearity used. However, it wasn't possible to pass this argument directly to the He / Kaiming weight initialization function.

2. Add util to clip gradient value in torch.nn.utils.clip_grad
Problem solved:
DL libraries typically provide users with easy access to functions for clipping the gradients both using the norm and a fixed value. However, the utils clip_grad.py only had a function to clip the gradient norm.

* add param to He initialization scheme in torch.nn.init

* add util to clip gradient value in torch/nn/utils/clip_grad.py

* update doc in torch.nn.utils.clip_grad

* update and add test for torch.nn.utils.clip_grad

* update function signature in torch.nn.utils.clip_grad to match suffix_ convention

* ensure backward compatibility in torch.nn.utils.clip_grad

* remove DeprecationWarning in torch.nn.utils.clip_grad

* extend test and implementation of torch.nn.utils.clip_grad

* update test and implementation torch.nn.utils.clip_grad
2018-04-17 11:32:32 -04:00
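
A minimal sketch of both additions, assuming the 0.4-era names (`kaiming_normal_` with its new `nonlinearity` argument, and the new `clip_grad_value_`):

```
import torch
import torch.nn as nn

layer = nn.Linear(10, 10)

# 1. He/Kaiming init can now be told which non-linearity follows the layer,
#    instead of routing calculate_gain's result in by hand.
nn.init.kaiming_normal_(layer.weight, nonlinearity='relu')

# 2. Clip each gradient element to [-0.5, 0.5] by value rather than by norm.
layer(torch.randn(4, 10)).sum().backward()
nn.utils.clip_grad_value_(layer.parameters(), clip_value=0.5)
```
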
gchanan
c77fca570c Add device docs; match constructor parameter names with attribute names. (#6633)
* Add device docs; match constructor parameter names with attribute names.

* Use double quotes for strings.

* Update printing.

* Separate device ordinal-only construction into a separate note.

* Use current device.
2018-04-17 09:55:44 -04:00
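
A hedged sketch of the `torch.device` constructors and attributes the new docs describe:

```
import torch

cpu = torch.device('cpu')
gpu = torch.device('cuda:0')  # equivalent to torch.device('cuda', 0)
print(gpu.type, gpu.index)    # 'cuda' 0
```
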
Semion Sidorenko
639dd0e324 Fix an error in the tensor docs. (#6658)
The docs incorrectly stated that there was seven CPU tensor types and
eight GPU tensor types, before listing eight types for both CPU and GPU.
2018-04-17 09:54:19 -04:00
gchanan
d7cb78478f Split set_default_tensor_type(dtype) into set_default_dtype(dtype). (#6599)
* Split set_default_tensor_type(dtype) into set_default_dtype(dtype).

* Fix flake8.

The difference between this and set_default_tensor_type is that it only sets the scalar type. What determines the type + device of a tensor returned from a factory function with defaults is the default tensor type + the current device (if the default tensor type is cuda); this just changes the scalar type of that default tensor type.

We do eventually want to deprecate set_default_tensor_type; it is not clear how to do that in a sensible and backwards-compatible way.
2018-04-16 13:49:00 -04:00
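
A sketch of the distinction drawn above, assuming the 0.4 API: `set_default_dtype` changes only the scalar type of the default tensor type, not its device class.

```
import torch

torch.set_default_dtype(torch.float64)
print(torch.ones(2).dtype)  # torch.float64, but still a CPU tensor

# by contrast, set_default_tensor_type swaps the whole default type:
# torch.set_default_tensor_type(torch.cuda.FloatTensor)
```
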
Fritz Obermeyer
76ca037069 [distributions] Implement Independent distribution (#6615)
* Implement Independent distribution

* Add docs for Independent distribution
2018-04-16 11:42:12 -04:00
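
A minimal sketch of `Independent`, which reinterprets trailing batch dimensions of a base distribution as event dimensions (hedged against the 0.4 distributions API):

```
import torch
from torch.distributions import Independent, Normal

base = Normal(torch.zeros(3), torch.ones(3))  # batch_shape=[3], event_shape=[]
diag = Independent(base, 1)                   # batch_shape=[],  event_shape=[3]
print(diag.log_prob(torch.zeros(3)).shape)    # torch.Size([]) -- summed over the event
```
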
Teng Li
f5beff334b Added distributed docs on NCCL2 backend/functions and launch module (#6579) 2018-04-15 21:53:10 -04:00
Richard Zou
99cfb56698 Add docs for torch.randn_like (#6565)
* Add docs for torch.randn_like

* Address comments

* Address comments

* Address comments
2018-04-13 11:33:56 -04:00
Richard Zou
16704249cb Add docs for tensor.index_put_ (#6563) 2018-04-12 17:00:02 -04:00
Tongzhou Wang
6b7ec95abb Link relevant FAQ section in DataLoader docs (#6476)
* Link FAQ section on workers returning same random numbers in DataLoader docs

* explicitly mention section names
2018-04-11 13:41:46 -04:00
Xingdong Zuo
494aaab00e Add docs for item() (#6508) 2018-04-11 12:40:01 -04:00
Naman Jain
1e5611014d Adding autofunction entry for torch.randint (#6507)
* added randint function in ATEN yaml as well as Tensorfactories.cpp

* corrected randint

* randint with overloading complete, getting tuple-of-ints behaviour though

* done randint_like and randint_out

Left: adding docs and tests, and removing the bug on size = (5)

* Removed my error messages, ThRandomTensor will handle all exceptions

* added docs and tests, corrected a mistake

Tested with manual seeds in some test cases as well. Seems fine to me (check documentation though)

* corrected indentation to spaces, and improved sizes argument description

* made documentation argument description shorter

* added whitespace after ',' in torch docs

* added spaces in documentation

* added more tests (including bounds and overloading features)

* added whitespaces in test_torch

* removed trailing whitespaces

* removed whitespace from a blank line

* removed positive requirement from docs. Added dtype argument and gave an example

* made randint over randn in all files

* changed to data type for dtype in docs for randint

* added autofunction entry for randint in torch.rst
2018-04-11 12:34:25 -04:00
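
A short sketch of the overloads described above (low defaults to 0; a dtype keyword is accepted):

```
import torch

a = torch.randint(10, (2, 2))                     # values in [0, 10)
b = torch.randint(3, 10, (2, 2))                  # values in [3, 10)
c = torch.randint(10, (2, 2), dtype=torch.int64)  # explicit dtype
```
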
Tongzhou Wang
d9345aa60f
add checkpoint to index.rst (#6498) 2018-04-11 02:50:01 -04:00
Tongzhou Wang
0dff2b5e35
[fft] [3 of 3] Implements backward of fft ifft rfft irfft (#5537)
* change irfft signal_sizes arg to be the last

* add docs for fft, ifft, rfft, irfft; update doc for stft

* fix typo in window function docs

* improve gradcheck error message

* implement backward of fft, ifft, rfft, irfft

* add grad tests for fft, ifft, rfft, irfft

* fix nits and typos from #6118

* address comments
2018-04-10 22:09:36 -04:00
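
A hedged round-trip sketch of the real-FFT pair whose backward this commit implements, assuming the 0.4-era signatures (these were later replaced by the torch.fft module):

```
import torch

x = torch.randn(4, 5, requires_grad=True)

spec = torch.rfft(x, 1)  # real-to-complex FFT over the last 1 dimension

# signal_sizes (now the last argument) recovers the original length
y = torch.irfft(spec, 1, signal_sizes=x.shape[-1:])

y.sum().backward()       # the new backward works through the pair
```
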
Priya Goyal
e3196e0ea8
[Re-checkpointing] Autograd container for trading compute for memory (#6467)
* Autograd container for trading compute for memory

* add a unit test for checkpoint

* address comments

* address review comments

* adding some docs for the checkpoint api

* more comments

* more comments

* repro bug

* Fix a subtle bug/apply some review comments

* Update checkpoint.py

* Run everything in grad mode

* fix flake and chunk=1

* use imperative backward as per discussion

* remove Variable and also add models and test for models

* Add a simple thread local variable to check for autograd grad mode

* remove models and models test after debugging

* address review comments

* address more comments

* address more comments
2018-04-10 15:26:24 -04:00
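
A minimal sketch of the resulting API (hedged): `torch.utils.checkpoint.checkpoint` drops the intermediate activations of the wrapped forward and recomputes them during backward, trading compute for memory.

```
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Linear(100, 100), nn.ReLU(), nn.Linear(100, 100))
x = torch.randn(8, 100, requires_grad=True)

y = checkpoint(block, x)  # activations inside `block` are recomputed on backward
y.sum().backward()
```
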
Richard Zou
04c215b445 Add link in docs menu to stable docs (#6475)
Part of #5738. Warns users that they're not viewing the latest stable
release docs.

We should remember to delete this when cutting the 0.4.0 release docs (we'd just delete the div in pytorch.github.io).
2018-04-10 14:53:04 -04:00
Tongzhou Wang
59bda9a8c4
Fix reflection padding boundary checks (#6438)
* Fix Reflection padding boundary checks

* Improve padding docs

* fix lint
2018-04-10 10:37:01 -04:00
Richard Zou
265e1a97ec Add different logo for master docs (#6446) 2018-04-09 18:48:53 -04:00
Richard Zou
1b3a5a4e7d bottleneck supports better user-provided arguments (#6425)
Fixes #6312.

Changed bottleneck's arg parser to use argparse.REMAINDER. This lets
the user specify args as `python -m torch.utils.bottleneck script.py
[args]` (previously, a `--` was needed after `bottleneck` and before
`script.py`).
2018-04-09 13:57:26 -04:00
Tongzhou Wang
4d15442ebc
Add total_length option to pad_packed_sequence (#6327)
* add total_length to pad_packed_sequence; add example on how to use pack->rnn->unpack with DP

* address comments

* fix typo
2018-04-08 20:25:48 -04:00
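
A sketch of the new option, assuming the 0.4 packed-sequence API: `total_length` pads back to a fixed length so the chunks produced by DataParallel line up again.

```
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

T = 10                    # the max length shared across all DP replicas
x = torch.randn(2, 7, 5)  # batch=2, longest sequence on this replica=7
packed = pack_padded_sequence(x, [7, 4], batch_first=True)

out, _ = nn.GRU(5, 6, batch_first=True)(packed)

# without total_length the output would only be padded to length 7
unpacked, _ = pad_packed_sequence(out, batch_first=True, total_length=T)
print(unpacked.shape)     # torch.Size([2, 10, 6])
```
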
Tongzhou Wang
e0f3e5dc77 fix activation images not showing up on official website (#6367) 2018-04-07 11:06:24 -04:00
Kento NOZAWA
c00ee6da8f Fix typos (#6348)
* Fix typo

* Fix typo

* Update faq.rst
2018-04-06 11:06:42 -04:00
Masaki Kozuki
a093ec997f fix typo (#6329) 2018-04-05 21:36:16 -04:00
Vishwak Srinivasan
0aa35780bf [ready] Implement log2 and log10 in PyTorch (#6272)
* Implemented log2 and log10

* Re-add incorrectly removed files

* Fix minor bugs

* Fix log1p docs

* Add a try-except for python2 math module in log2 test

* Revert changes made to aten/doc/*

* Fix docstring errors

* Fix windows build
2018-04-05 14:28:37 -04:00
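
The new ops in a trivial, hedged sketch:

```
import torch

x = torch.tensor([1., 2., 10., 100.])
print(torch.log2(x))   # [0.0000, 1.0000, 3.3219, 6.6439]
print(torch.log10(x))  # [0.0000, 0.3010, 1.0000, 2.0000]
```
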
Peter Goldsborough
9ba70856a1 Add max_values and argmax convenience functions to ATen (#6201)
* Add max_values and argmax convenience functions to ATen

* Add documentation for torch.argmax/argmin and skip max_values

* Add tests for argmax/argmin

* Dont default the dim argument

* Use dim=0 in test_torch.py for argmax tests

* Implement argmin()  and argmax() without dim

* Call .contiguous() before .view(-1)
2018-04-04 15:53:26 -04:00
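
A short sketch of the convenience functions as documented (dim is optional; without it the tensor is treated as flattened):

```
import torch

x = torch.randn(3, 4)
print(torch.argmax(x))         # index into the flattened tensor
print(torch.argmax(x, dim=1))  # one index per row
print(torch.argmin(x, dim=0))  # one index per column
```
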
Kaiyu Shi
605307f8f3 Add support for printing extra information in Module and refactor redundant codes (#5936)
This PR enables users to print extra information from their subclassed nn.Module.
For now I simply insert the user-defined string at the end of the module name, which should be discussed in this PR.

Before this PR, users had to redefine __repr__ and copy-paste the source code from Module.

* Add support for extra information on Module

* Rewrite the repr method of Module

* Fix flake8

* Change the __repr__ to get_extra_repr in Linear

* Fix extra new-line for empty line

* Add test for __repr__ method

* Fix bug of block string indent

* Add indent for multi-line repr test.

* Address review comments

* Update tutorial for creating nn.Module

* Fix flake8, add extra_repr of bilinear

* Refactor DropoutNd

* Change to extra_repr in some Modules

* Fix flake8

* Refactor padding modules

* Refactor pooling module

* Fix typo

* Change to extra_repr

* Fix bug for GroupNorm

* Fix bug for LayerNorm
2018-04-02 13:52:33 -04:00
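
A hedged sketch of the final API: subclasses override `extra_repr` rather than copy-pasting `__repr__` from Module.

```
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self, factor):
        super(Scale, self).__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    def extra_repr(self):
        # spliced into the printout built by Module.__repr__
        return 'factor={}'.format(self.factor)

print(Scale(2.0))  # Scale(factor=2.0)
```
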
Thomas Viehmann
32ba2ca203 add documentation for diagflat and diagonal (#6161) 2018-03-31 18:03:21 +02:00
Richard Zou
1449c9f754 Update autograd docs (#5907)
* Update autograd docs

* Deprecate 'grad_variables' in backward().

Advises replacing it with 'grad_tensors'.

* Resolve saved_variables/saved_tensors

* Tensor section

* Address comments

* Address comments

* Address comments
2018-03-30 15:33:11 -04:00
Tongzhou Wang
4f05cb710e Add underscore to nn.init.* and deprecate the original ones (#6093)
Fixes #5946.

* add underscore to nn.init.* and deprecate the original ones

* add a test for deprecation
2018-03-29 13:26:12 -04:00
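
The rename in one hedged line: the trailing underscore marks in-place mutation, and the old names now warn.

```
import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.xavier_uniform_(w)   # new spelling, fills w in place
# nn.init.xavier_uniform(w)  # deprecated: same behavior plus a warning
```
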
Peter Goldsborough
47f31cb1e6 Update FAQ to make more sense after tensor/variable merge (#6017) 2018-03-27 07:48:25 -07:00
Richard Zou
5d628db0a2 Deprecate ctx.saved_variables via python warning. (#5923)
* Deprecate ctx.saved_variables via python warning.

Advises replacing saved_variables with saved_tensors.
Also replaces all instances of ctx.saved_variables with ctx.saved_tensors in the
codebase.

Test by running:
```
import torch
from torch.autograd import Function

class MyFunction(Function):
    @staticmethod
    def forward(ctx, tensor1, tensor2):
        ctx.save_for_backward(tensor1, tensor2)
        return tensor1 + tensor2

    @staticmethod
    def backward(ctx, grad_output):
        var1, var2 = ctx.saved_variables  # deprecated spelling; triggers the warning
        return (grad_output, grad_output)

x = torch.randn((3, 3), requires_grad=True)
y = torch.randn((3, 3), requires_grad=True)
# new-style (staticmethod) Functions are invoked via the class-level apply()
MyFunction.apply(x, y).sum().backward()
```
and assert the warning shows up.

* Address comments

* Add deprecation test for saved_variables
2018-03-26 14:13:45 -04:00
Fritz Obermeyer
03a6952ac9 [distributions] Fix scalar bugs in torch.distributions.transforms etc. (#5931) 2018-03-25 13:33:31 +02:00
Richard Zou
feb2785c5c Implement torch.util.bottleneck (#5216)
* Implement torch.util.bottleneck

This is a tool that is intended to be used as initial exploratory
debugging of bottlenecks in user scripts. Run it with

    python -m torch.utils.bottleneck /path/to/source/script.py

* Refactor and address comments

* Fix tests

* Allow passing of args to the profiled script

* Replace Variable
2018-03-23 17:27:35 -04:00
Vishwak Srinivasan
8cf521b522 fix mvn docs (#5967) 2018-03-23 14:26:55 -04:00
Brooks
1936753708 Added an implementation of a multivariate normal distribution (#4950) 2018-03-19 23:22:46 +01:00
Tongzhou Wang
24fca0efb2 fix some methods not showing up in doc (#5882) 2018-03-19 14:48:15 -04:00
Tongzhou Wang
00cc962670 typo (#5847) 2018-03-17 10:26:00 -04:00
Calvin Lee
f69fb3829a Add documentation for LPPool1D (#5730) 2018-03-13 04:37:25 -04:00
Sam Gross
a2641500bf Implement torch.reshape and Tensor.reshape (#5575)
* Implement torch.reshape and Tensor.reshape

This implements reshape which has similar semantics to numpy.reshape. It
will return a view of the source tensor if possible. Otherwise, it
returns a copy.

* Remove in-place reshape_ that was an alias for resize_

* Update documentation
2018-03-12 16:20:40 -04:00
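
A sketch of the view-or-copy semantics described above (hedged):

```
import torch

x = torch.arange(8.)
v = x.reshape(2, 4)  # contiguous source: returns a view sharing storage
v[0, 0] = 42.
print(x[0])          # 42. -- the view aliased x

c = x.reshape(2, 4).t().reshape(-1)  # non-viewable intermediate: returns a copy
```
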
Thomas Viehmann
a33aeed1dc Add set_grad_enabled as context manager and function (#5555) 2018-03-09 11:36:56 +01:00
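
Both usages named in the title, sketched:

```
import torch

x = torch.ones(1, requires_grad=True)

with torch.set_grad_enabled(False):  # as a context manager
    y = x * 2
print(y.requires_grad)               # False

torch.set_grad_enabled(True)         # as a plain function
```
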
Tongzhou Wang
71d73211f4 [ready] torch.* doc update for Variable/Tensor merge, and other improvements (#5443)
* 1. Update doc to reflect changes in Variable/Tensor merge, and new printing style
2. Remove functions in torch/functional.py that are already implemented with native_function
3. Add set_default_tensor_type doc

* fix torch.split

* py2 unicode string fix

* update torch.gels doc

* address @fmassa 's comments

* double-colon
2018-03-08 23:02:38 -05:00
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs

* Add more changes
1. Modify all torch.Tensor wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clear up prints in the build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
theweiho
c2721ab503 Add per-element unique op for CPU (#5503)
Questions/possible future work:

How to template-ize to extend support beyond LongTensor?
How to check if autograd works (and if not, how to add explicit gradient)?
CUDA support?
Testing command:
DEBUG=1 NO_CUDA=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build && DEBUG=1 NO_CUDA=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py develop && python3 test/test_torch.py

Partially fixes #2031

* Initial commit for unique op

* Working unique with test

* Make inverse indices shape conform to input

* flake8 whitespace removal

* address review comment nits

* Expose fn and add docs. Explicitly declare no gradients

* Trial generic dispatch implementation

* Add tests for generics

* flake8 whitespace

* Add basic CUDA error throwing and templateize set

* Explicit contiguous and AT_DISPATCH_ALL_TYPES return

* Remove extraneous numpy conversion

* Refactor out .data calls

* Refactored to a variable-return-length API with a wrapper fn, as opposed to returning a 0-length tensor, per offline reviewer comments

* Remove A

* Don't use hidden torch._unique() in test

* Fix documentation
2018-03-07 18:16:51 -05:00
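
A hedged sketch of the exposed op (CPU-only at this point, per the commit):

```
import torch

x = torch.LongTensor([1, 3, 2, 3, 1])
values = torch.unique(x)
values, inverse = torch.unique(x, return_inverse=True)
# `inverse` matches x's shape and maps each element to its slot in `values`
```
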
Peter Goldsborough
792daeb422 Enable documentation for C++ extensions on the website (#5597) 2018-03-07 14:07:26 +01:00
Dillon Laird
8376e63738 fixed softmax support documentation (#5557) 2018-03-05 08:59:06 -05:00
Fritz Obermeyer
66547ca061 Fix links in distribution docs (#5531) 2018-03-04 21:33:07 +01:00
Adam Paszke
b1dec4a74f
Fix doc-push (#5494) 2018-03-01 17:37:30 +01:00
Piotr Mitros
7b33ef4cff Documentation cleanup for activation functions (#5457) 2018-03-01 14:53:11 +01:00
Tongzhou Wang
392fc8885c add faq on cuda memory management and dataloder (#5378) 2018-02-27 18:35:30 -05:00
Yinghai Lu
9f2975e2cf Remove onnx-caffe2 (#5425)
* Remove onnx-caffe2

* Comments
2018-02-27 03:15:49 -05:00
Tongzhou Wang
8c18220a59 Fix layer_norm initialization and nn.Module docs (#5422)
* Fix LN initialization; Support single int normalized_shape

* disable docstring inheritance

* fix sphinx warnings
2018-02-26 19:32:08 -05:00
Tongzhou Wang
1848cad108 [ready] Layer Normalization (#4922)
* at::maybe_data_ptr and Check.h => TensorUtils.h

* THNN support for optional BN running_*

* ATen support for optional BN running_*

* Python nn.* support for optional BN running_*; Improve IN and BN doc

* Add tests for IN and BN new option

* Layer Norm

* Fix LRN doc

* functional interface for LN and IN

* Layer norm tests

* fix BN double backward returning undefined tensors

* fix jit test using wrong dim inputs for BN

* add/improve BN, IN and LN GPU tests with half type

* Update docs to be consistent with Conv notation
Fix ONNX
Clarified ONNX symbolic wrapper

* fix typo

* Address comments
2018-02-22 11:56:41 -05:00
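
A minimal sketch of the new module (hedged): per-sample normalization over the trailing `normalized_shape` dimensions.

```
import torch
import torch.nn as nn

x = torch.randn(20, 5, 10)
ln = nn.LayerNorm(10)  # a single int is accepted, per the commit
y = ln(x)              # normalized over the last dimension
```
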
Junior Rojas
642e4d0762 Fix typos (#5340) 2018-02-21 16:27:12 -05:00
brett koonce
596470011b minor sp, underlyhing->underlying (#5304) 2018-02-19 22:28:17 -05:00
Choongwoo Han
cf71385ec9 Implement torch.isnan (#5273)
* Implement torch.isnan

* Simple python implementation

* Fix typo
2018-02-19 19:46:35 -05:00
Choongwoo Han
fae6c67121 Configurable flushing denormal numbers on CPU (#5294)
* Configurable flushing denormal numbers on CPU

* Formatting

* Update docs

* Minor doc changes
2018-02-19 19:23:43 -05:00
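
A tiny hedged sketch; the call returns whether the platform supports flushing at all:

```
import torch

supported = torch.set_flush_denormal(True)
print(supported)                     # False if the CPU/build lacks support
print(torch.DoubleTensor([1e-323]))  # denormal input flushed to 0 when enabled
```
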
Edward Z. Yang
e411525f2c
Add a FAQ, for now just 'out of memory' advice. (#5251)
* Add a FAQ, for now just 'out of memory' advice.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Updates based on comments.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* minor copyedit
2018-02-15 17:38:55 -08:00
Martin Drawitsch
1fdb3929c9 Fixes for docstrings/sphinx rendering of CosineAnnealingLR and Local Response Normalization (#5254)
* Fix LaTeX rendering in CosineAnnealingLR

Backslashes were interpreted by Python as escapes in the string, so \frac
turned into frac, which is not a valid LaTeX command.
This could be fixed with double backslashes, but the easiest solution is to
just use a raw (r) docstring.

* Fix sphinx warnings for LRN doc headings

* Move LRN docstring from __init__ to class level

The docstring was not rendered by sphinx at
http://pytorch.org/docs/master/nn.html#torch.nn.LocalResponseNorm
because it was in the constructor.

* Remove superfluous backticks from LRN formula
2018-02-15 10:29:02 -05:00
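
The escaping pitfall in miniature (a hedged illustration, not the actual docstring):

```
broken = "\frac{1}{2}"  # "\f" is a form-feed escape: the string is "\x0crac{1}{2}"
fixed = r"\frac{1}{2}"  # raw string: the backslash survives for Sphinx/LaTeX
```
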
Thibault FEVRY
e39e86f119 Remove deprecated references to volatile (#5193) 2018-02-12 21:08:27 +01:00
anderspapitto
315ee107f6 document input_names, output_names feature of onnx export (#5189) 2018-02-12 19:56:02 +01:00
Vishwak Srinivasan
1eaa10b32e Update torch.distributions documentation (#5050)
* Add a small paragraph for pathwise estimator

* Add differentiability as well

* Add small snippet and clear some grammatical errors

* Update documentation to reflect has_rsample

* Add a fix for ExponentialFamily docs

* Update __init__.py
2018-02-05 13:57:38 -05:00
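
A sketch of the pathwise (reparameterized) estimator the docs now describe, hedged against the 0.4 distributions API:

```
import torch
from torch.distributions import Normal

mu = torch.zeros(1, requires_grad=True)
dist = Normal(mu, torch.ones(1))
print(dist.has_rsample)  # True: pathwise gradients are available

sample = dist.rsample()  # differentiable w.r.t. mu, unlike sample()
sample.sum().backward()
print(mu.grad)           # 1 -- d(mu + sigma * eps) / d(mu)
```
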
Vishwak Srinivasan
85a7e0fc41 Addition of ExponentialFamily (#4876) 2018-02-04 12:18:28 +01:00
Peter Goldsborough
65353f1342 Remove volatile section from autograd notes 2018-02-01 00:26:36 +01:00
Fritz Obermeyer
8f273dea09 Implement constraint registry 2018-01-31 00:13:28 +01:00
Alican Bozkurt
967bceb16b Implement Transforms (#4771) 2018-01-28 21:17:16 +01:00
Tongzhou Wang
6420c6b224 Improve torch.cuda.empty_cache documentation (#4879)
* add doc noting that empty_cache won't increase the amount of memory available

* typo
2018-01-27 04:54:25 -05:00
Rachit Singh
aaa0288aed Implemented Poisson in Distributions.cu and Distributions.cpp 2018-01-25 10:28:29 +01:00
Yongjik Kim
dd5c195646 More documentation for CUDA stream functions. (#4756) 2018-01-21 12:58:51 +01:00
Vishwak Srinivasan
f033dd60cd Implementation of the Fisher-Snedecor Distribution (#4706) 2018-01-20 21:49:09 +01:00
Alican Bozkurt
f72d86e0d3 Implement geometric distribution (#4708) 2018-01-19 21:45:14 +01:00
Sam Gross
f1c616418d
Fix Python docs for broadcast and broadcast_coalesced (#4727) 2018-01-19 10:57:20 -05:00
Alican Bozkurt
3254eca8c8 Implement binomial distribution (#4658) 2018-01-16 21:39:05 +01:00
Kai Arulkumaran
9f893dda5f Add LocalResponseNorm to docs (#4681) 2018-01-16 11:12:50 -05:00
HE, Tao
b42f163835 [ONNX] Export sum, prod, sqrt; improve log_softmax. (#4579)
* ONNX: export sum, prod, sqrt; improve log_softmax; and fix a typo in doc.

Signed-off-by: HE, Tao <sighingnow@gmail.com>

* Add new exported op to doc.

Signed-off-by: HE, Tao <sighingnow@gmail.com>

* Double quotes.

Signed-off-by: HE, Tao <sighingnow@gmail.com>

* Update trace log of log_softmax.

Signed-off-by: HE, Tao <sighingnow@gmail.com>

* Improve export when dim is None and axes_i should be a list of ints.

Signed-off-by: HE, Tao <sighingnow@gmail.com>

* Fix prod when no dim given.

Signed-off-by: HE, Tao <sighingnow@gmail.com>

* Update line ends in test expected file.

Signed-off-by: HE, Tao <sighingnow@gmail.com>
2018-01-12 07:44:56 -05:00
Tongzhou Wang
5918243b0c Methods for checking CUDA memory usage (#4511)
* gpu mem allocated

* add test

* addressed some of @apaszke's comments

* cache stats

* add more comments about test
2018-01-09 11:47:48 -05:00
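
A hedged sketch of the new introspection calls (names as added here; `memory_cached` was renamed much later):

```
import torch

if torch.cuda.is_available():
    x = torch.cuda.FloatTensor(1024, 1024)
    print(torch.cuda.memory_allocated())      # bytes currently held by tensors
    print(torch.cuda.max_memory_allocated())  # high-water mark
    print(torch.cuda.memory_cached())         # bytes held by the caching allocator
```
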
Fritz Obermeyer
3a335427b0 Start framework for kl_divergence(-,-) in torch.distributions (#4525) 2018-01-09 09:44:59 +01:00
Vishwak Srinivasan
5d6a5cf3a7 Implementation of Gumbel Distribution (#4517) 2018-01-08 23:21:27 +01:00
Alican Bozkurt
c9bc6c2bc3 Implement Student's t-distribution (#4510) 2018-01-08 10:23:48 +01:00
Vishwak Srinivasan
1e76ade9dc Implementation of Pareto Distribution (#4459) 2018-01-04 22:57:47 +01:00
Richard Zou
35c4d73bdb Deprecate nn.NLLLoss2d (#4238)
* Deprecate nn.NLLLoss2d

* Fix legacy tests

* Fix tests

* Remove NLLLoss2d from docs, add deprecation warning instead of error

* fix lint

* Add more to docs
2018-01-04 12:38:04 -05:00
Alican Bozkurt
02e7eba309 Implement Chi2 distribution (#4425)
* add chi2

* add tests for chi2

* add randomized test comments
2018-01-01 19:41:18 -05:00
Neeraj Pradhan
fa8de6b4f3 Adding the Cauchy distribution to torch.distributions 2017-12-29 11:57:21 +01:00
Fritz Obermeyer
5c33400dd3 Implement OneHotCategorical distribution (#4357) 2017-12-28 16:54:55 +01:00
Vishwak Srinivasan
e519ef5337 Adding torch.expm1() and its inplace function (#4350) 2017-12-28 18:56:03 +09:00
SsnL
9a48f8d7c3 add tests for btrifact_with_info and doc for btriunpack 2017-12-24 03:08:28 +08:00
SsnL
658d4c7ea8 allow optional int tensor 2017-12-24 03:08:28 +08:00
Sherin Thomas
492e26fbcd Pad sequences and Pack sequences (#3875) 2017-12-22 16:14:09 +01:00
gchanan
41c9959ef7
Enable functional torch.where. (#4298) 2017-12-21 13:55:57 -05:00
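
The functional form in a hedged sketch (condition tensors of this era were ByteTensors):

```
import torch

cond = torch.ByteTensor([1, 0, 1])
x = torch.Tensor([1., 2., 3.])
y = torch.Tensor([-1., -2., -3.])
print(torch.where(cond, x, y))  # [ 1., -2.,  3.]
```
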
Fritz Obermeyer
ee98e7a82e Implement Dirichlet and Beta distributions (#4117) 2017-12-18 19:11:37 +01:00
Tongzhou Wang
d8b2e5d091 Add python only default init expression; Implement stft, hann/hamming/bartlett window. (#4095)
* implement stft

* addressed comments; implemented window functions; added support for python only default initialization
2017-12-18 12:28:23 -05:00
Kai Arulkumaran
e9ef20eab5 Add Cosine Annealing LR Scheduler (#3311)
* Add Cosine Annealing LR Scheduler

* Update eta_min in tests to prevent numerical mistakes

* Use non-zero min_eta in test_cos_anneal_lr
2017-12-18 02:43:08 -05:00
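
A minimal usage sketch (hedged; `T_max` sets the annealing half-period in epochs):

```
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

params = [torch.zeros(3, requires_grad=True)]
optimizer = SGD(params, lr=0.1)
scheduler = CosineAnnealingLR(optimizer, T_max=10, eta_min=0.001)

for epoch in range(20):
    scheduler.step()  # lr follows a cosine from 0.1 down to eta_min
    # ... train one epoch, calling optimizer.step() per batch ...
```
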
Lu Fang
dde10e1d4b Add docs talking about how to adding symbolic for unsupported ops (#3741) 2017-12-15 09:37:09 -05:00
Tongzhou Wang
fe12ac57a4 Improve docs for torch and torch.Tensor (#3969)
* doc overhaul

* update split doc
2017-12-01 14:56:48 -05:00
Tongzhou Wang
c681b03d37 Add determinant function on variable; Add backward on svd (#3816)
* determinant on variable

* svd bwd
2017-12-01 13:22:46 -05:00
Edward Z. Yang
754ae49f65 Documentation updates for ONNX.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-11-30 23:09:45 -05:00
Fritz Obermeyer
1f64c2ef91 Rename torch.distributions.Multinomial -> .Categorical (#3766)
* Rename distributions.Multinomial -> distributions.Categorical

* Rename Multinomial -> Categorical

* Update docs

* Update variable.py

* Update distributions.py

* Update variable.py
2017-11-18 16:10:07 -05:00
Sam Gross
9cb8b43778
Split off in-place NN functions (#3683)
For example, this splits threshold into threshold(), which is now
never in-place, and threshold_() which is always in-place.

This simplifies the in-place vs. non-in-place logic in
gen_variable_type.py, which was bug-prone.
2017-11-14 12:59:06 -05:00
Adam Paszke
02450fff38 Expand autograd profiler docs (#3621) 2017-11-10 08:58:45 -05:00
SsnL
bb1b826cdc Exposing emptyCache from allocator (#3518)
* Add empty_cache binding

* cuda.empty_cache document

* update docs
2017-11-07 17:00:38 -05:00
SsnL
e2f33eb6a2 add doc for sparse_adam (#3519) 2017-11-06 18:37:15 -05:00
Kaixhin
5de7f9e731 Tidy up CUDA notes 2017-11-05 14:42:06 +01:00
Lu Fang
66d24c5067 Update the ONNX doc 2017-11-01 15:43:08 -04:00
Sam Gross
7c0b16c140 Add torch.take and Tensor.put_ (#3263)
* Add torch.take and Tensor.put_

These are similar to numpy.take and numpy.put. The take function allows
you to linearly index into a tensor without viewing it as a 1D tensor
first. The output has the same shape as the indices. The put function
copies values into a tensor, also using linear indices.
2017-11-01 06:04:44 -04:00
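
A brief hedged sketch of the linear-indexing pair:

```
import torch

src = torch.Tensor([[4., 3.], [2., 1.]])
idx = torch.LongTensor([0, 3])

print(torch.take(src, idx))  # [4., 1.] -- linear indices; output shaped like idx

dst = torch.zeros(2, 2)
dst.put_(idx, torch.Tensor([9., 9.]))  # writes into flat positions 0 and 3
```
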
Richard Zou
2be8bd1880 Add docs for ByteTensor any()/all() 2017-10-30 16:00:48 -04:00
Kai Arulkumaran
a7c5be1d45 Document CUDA best practices (#3227) 2017-10-25 22:38:17 +02:00
Adam Paszke
76abc06b1f Fix nvprof mode in autograd profiler 2017-10-20 10:22:54 -04:00
Sam Gross
d9b89a352c Replace StochasticFunctions v2 (#3165)
This removes the StochasticFunctions for bernoulli, multinomial, and
normal and replaces them with classes in the torch.distributions
package. Each distribution supports the differentiable log_prob function
that returns the log of the pdf/pmf of the samples.

The current StochasticFunction implementation has a few problems: it can
be painful to use when there are multiple stochastic outputs which need
to be back-propagated through. It also requires that we store grad_fns
on Variables that have requires_grad=False in order to find stochastic
nodes.
2017-10-19 15:05:07 -04:00
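
A hedged sketch of the replacement pattern: sample, then score the sample with the differentiable `log_prob` (REINFORCE-style), instead of calling into a StochasticFunction.

```
import torch
from torch.distributions import Categorical

probs = torch.tensor([0.25, 0.25, 0.5], requires_grad=True)
dist = Categorical(probs=probs)

action = dist.sample()                  # non-differentiable draw
reward = 1.0                            # stand-in for an environment reward
loss = -dist.log_prob(action) * reward  # differentiable surrogate loss
loss.backward()
```
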
Sam Gross
f1f64c8d07 Generate autograd functions for NN / more refactors (#3136)
Generate autograd functions for NN and implement more derivatives in derivatives.yaml

A big refactor of gen_variable_type.py
2017-10-19 15:03:26 -04:00
Sasank Chilamkurthy
a0ac72e84e Use template instead of sphinx-contrib for google analytics 2017-10-15 18:40:05 +02:00
Edward Z. Yang
94c1fdd254 Typofix
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-13 01:31:22 +02:00
SsnL
9260f0e5ee Fix a typo in optim.rst (#3069) 2017-10-11 16:47:14 +02:00
Sasank Chilamkurthy
169ed0cd4b remove torchvision docs from pytorch repo. Moved to vision repo (#3024) 2017-10-10 23:59:55 -04:00
SsnL
828048f578 Add document on how Module.cuda() and optims should work together (#3056) 2017-10-10 22:55:23 -04:00
Edward Z. Yang
a0831219cf SqueezeNet ceil_mode not yet supported.
Fixes #2898.

Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>
2017-10-09 11:07:11 -04:00
SsnL
d5a7e304fa added volumetric adaptive max pooling 2017-09-30 16:57:51 -04:00
Edward Z. Yang
db298618e4 Minor typofix.
Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>
2017-09-30 16:18:03 -04:00
Adam Paszke
411e1469e0 Add tools for autograd profiling 2017-09-25 23:21:30 -04:00
SsnL
6a4ec4f9a8 VolumetricAdaptiveAveragePool 2017-09-25 15:12:44 -04:00
Soumith Chintala
cf7e28de8e add CUDA RNG docs 2017-09-21 19:36:41 -04:00
IraKorshunova
2b9765ad02 Erf and erfinv (#2799) 2017-09-20 21:23:45 -04:00
Scott Sievert
3821fca0c6 DOC: i{send, recv} message order with MPI backend 2017-09-14 20:38:11 -04:00
Brett Koonce
08b4770adf minor spelling, intialize->initialize 2017-09-14 15:13:01 -04:00
Soumith Chintala
253d48c815 add in-place random sampling ops 2017-09-14 10:03:17 -04:00
Soumith Chintala
ce4932f8a4 add softmax2d docs 2017-09-14 09:41:04 -04:00
Soumith Chintala
4fec5f658b add Bilinear to docs, fix reference 2017-09-11 20:12:27 -04:00
Soumith Chintala
1794e76800 add missing bilinear docs entry 2017-09-11 20:06:44 -04:00
Soumith Chintala
8e4a889c8f Add onnx to the documentation index. 2017-09-07 09:43:37 -07:00
Edward Z. Yang
9cdef6c33b Update for latest ToffeeIR changes. (#2662)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-07 11:47:54 -04:00
Edward Z. Yang
7838840084 Detailed install instructions for ONNX. (#2654)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-07 08:48:37 -04:00
Edward Z. Yang
fbb8f13499 Docs now finally run with ToffeeIR master.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-06 21:35:50 -04:00
Edward Z. Yang
eb11cab272 Misc doc improvements.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-06 21:35:50 -04:00
Edward Z. Yang
7ea9de051e Code review comments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-06 21:35:50 -04:00
Zach DeVito
af649c19a2 ONNXIR -> to ONNX 2017-09-06 13:45:39 -04:00
Zach DeVito
6d8d5bab4c Codemod Toffee -> ONNX, toffee -> onnx. Change file names to match 2017-09-06 13:45:39 -04:00
Edward Z. Yang
4fc54af010 Code review comments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
1b792d3e57 Doc updates.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
b5833551f3 Documentation, and inplace support.
This adds the PyTorch API user documentation for Toffee.
To make the example work, I also converted all "inplace"
ops to export out-of-place in Toffee.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
57eb8bd288 Frontend refactor, and some documentation.
- BC BREAKING: export now also takes a mandatory file-ish argument, specifying
  the file to export the protobuf to.  I rewrote the tests to use BytesIO to
  get out the string so they could parse it again.

- BC BREAKING: export no longer returns the tensors that were computed.  To
  get these, use the internal _export function.

- Multiple inputs to models are now supported by passing a tuple to input.
  (Old API of a single Variable still works.)

- Keyword arguments to models are now supported via kwargs keyword arg.

- Renamed embed_params to export_params, and it now defaults to True.

- Toffee tests now live in their own test_toffee.py file.  I had to
  rename a pile of expect files for this.

- Removed defunct torch.toffee imports from autograd to solve module import
  cycle.

- Helper function _with_file_like to abstract over opening file-ish arguments,
  taken from torch.save()

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Sang-gil Lee
42448cf07f Fix to make the sample code executable as-is in "Extending PyTorch" (#2621) 2017-09-05 10:19:49 -04:00
Gabriel Bianconi
cdae579c22 Fix typos in "Extending PyTorch" (#2558) 2017-08-29 09:39:29 -04:00
Soumith Chintala
4cca286d9e add google analytics to docs 2017-08-27 20:58:33 -04:00
Alykhan Tejani
eb58740651 add ones_like and zeros_like 2017-08-25 14:11:04 -04:00
Soumith Chintala
b079469af0 self -> ctx in Extending note 2017-08-25 07:19:20 -04:00
jekbradbury
7aa6bc516f add "Basics" section to distributed docs (#2433) 2017-08-24 17:07:20 -04:00
Kai Arulkumaran
11a14fd0fd Clarifications on setting up torch.distributed (#2475) 2017-08-18 09:21:04 -04:00
Edward Z. Yang
b09d7c890e Copy-edit sparse constructor docs for clarity.
Basically, it's easy to confuse the dimensions of the index tensor.
This adds some more text which should hopefully clarify the situation.

Fixes #2416.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-08-15 13:36:30 -04:00
jekbradbury
5e088da5ba Add DistributedDataParallel to docs
DataParallel was included twice.
2017-08-15 10:01:36 +05:30
Benoit Rostykus
641e582f31 Fix typo (#2378) 2017-08-11 20:57:26 -04:00
Sasank Chilamkurthy
5caa42b538 Add ConcatDataset to docs (#2337) 2017-08-08 07:16:04 -04:00
Adam Paszke
4599c0c7df Update autograd notes (#2295) 2017-08-05 05:18:05 +05:30
Sasank Chilamkurthy
e548580f31 Add missing models to torchvision documentation (#2204) 2017-07-26 01:58:18 +05:30
ngimel
66bbe5d75a .creator -> .grad_fn in the code example (#2171) 2017-07-21 14:43:16 -04:00
Adam Paszke
4f035f14de Add a support matrix for distributed backends 2017-07-21 14:19:46 -04:00
Aron Barreira Bordin
11f3ccf98f Add missing Modules to nn.functional (#1801)
* add dropout2d and dropout3d to functional

added some loss functions to functional

added tests

using dropout from backend

added docs

fixes

* edited loss modules to call functional
2017-07-19 15:55:21 -04:00
brett koonce
16dd997239 Spelling tweaks for documentation (#2114) 2017-07-15 13:16:32 -07:00
yunjey
52a9367fa7 Fix minor typo (#2100)
Fixed minor typo in Autograd mechanics docs.
2017-07-14 10:20:13 -04:00
Soumith Chintala
37183e91de add normalize docs to sphinx 2017-07-13 02:31:57 -04:00
Soumith Chintala
58e4caf80f add missing docs 2017-07-13 01:01:04 -04:00
Soumith Chintala
81fd2bf2d0 fix some language / typos 2017-07-12 14:47:36 -04:00
Adam Paszke
8915e2710c Refactor scatter/gather and add distributed docs 2017-07-12 14:47:36 -04:00
Hungryof
73128f7b08 fix minor typos (#2051)
* Update extending.rst

fix typo

* Update cuda.rst

fix typo
2017-07-11 11:01:41 -04:00
Naren Dasan
49f679d0e9 Acknowledge the existence of cpu HalfTensor (#2018) 2017-07-08 10:03:36 -04:00
Sam Gross
2c038f2074 Add weight normalization implementation (#1945)
* Add weight normalization implementation

This adds forward "pre-hooks" which get called before the module's
forward() method. Weight norm is implemented as a hook which calculates
the weight variable from the weight_g and weight_v every iteration.

Based on @rtqichen implementation.

* Specify return type
2017-06-30 15:41:40 -04:00
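
A sketch of the hook-based API (hedged): the wrapped module gains `weight_g` and `weight_v`, and the forward pre-hook rebuilds `weight` from them on every call.

```
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

layer = weight_norm(nn.Linear(20, 40), name='weight')
print(hasattr(layer, 'weight_g'), hasattr(layer, 'weight_v'))  # True True
out = layer(torch.randn(8, 20))  # weight recomputed from g and v in the pre-hook
```
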
Soumith Chintala
b3e500c522 fix docs generation warnings 2017-06-30 14:39:21 -04:00
Leonid Vlasenkov
ae61f3ff42 adds poisson NLL loss (#1779) 2017-06-27 10:04:54 -04:00
Soumith Chintala
1f391a42f7 fix warnings for docs generation 2017-06-27 00:18:32 -04:00
Soumith Chintala
ee1b7b50b3 fix docs for broadcast warning 2017-06-26 14:50:57 -04:00
Alykhan Tejani
67968cb60b Add numerically stable BCELoss which takes logits as input (#1792) 2017-06-19 22:05:51 -04:00
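
In brief (hedged): the new loss fuses the sigmoid into the loss computation, which is more numerically stable than sigmoid followed by BCELoss.

```
import torch
import torch.nn as nn

logits = torch.randn(4)  # raw scores; no sigmoid applied by the caller
target = torch.Tensor([0., 1., 1., 0.])
loss = nn.BCEWithLogitsLoss()(logits, target)
```
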
Francisco Massa
76ee014d10 Add documentation to SELU and AlphaDropout 2017-06-19 18:18:01 -04:00
Soumith Chintala
f61ec2495e nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean) 2017-06-15 12:32:47 -04:00
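
A minimal hedged sketch: one fused lookup-and-reduce in place of Embedding followed by sum or mean.

```
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(10, 3, mode='mean')
words = torch.LongTensor([1, 2, 4, 5, 4, 3])  # two bags, flattened together
offsets = torch.LongTensor([0, 3])            # where each bag starts in `words`
print(bag(words, offsets).shape)              # torch.Size([2, 3])
```
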
Aron Barreira Bordin
909f31764f Add nn.padding to docs fixes #1127 (#1808)
* exposed nn.padding modules

* using functional
2017-06-15 07:41:38 -04:00
Sam Gross
9c53c6dcb9 Fix errors and warnings when building docs (#1806) 2017-06-14 13:50:14 -04:00
gchanan
4e356528b4 Add torch.matmul function. (#1780)
* Add torch.matmul function.

Includes test_torch, test_autograd and docs changes.

* Add __all__ to functional so imports aren't accidentally re-exported.

* Include unbind in __all__.

* Add matmul case for when one argument is 1-dimensional and the other
at least 3-dimensional.

* Add squeeze_ to Variable.

* Use squeeze_ instead of squeeze for matmul.
2017-06-14 08:14:53 -04:00
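
A sketch of the dispatch rules `torch.matmul` adds (hedged):

```
import torch

v = torch.randn(3)
m = torch.randn(3, 4)
b = torch.randn(5, 3, 4)

print(torch.matmul(v, m).shape)      # torch.Size([4])       vector @ matrix
print(torch.matmul(b, m.t()).shape)  # torch.Size([5, 3, 3]) batched + broadcast
```
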
Adam Paszke
12813b88f6 Add DistributedDataParallel 2017-06-12 22:00:22 -04:00
Gregory Chanan
1ef4cc1591 Incorporate review comments:
1) Line up trailing dimensions in broadcast docs.
2) remove unnecessary expand_as in common_nn test.
3) use view in tensor_str instead of resize_.
4) newExpand remove raiseErrors change.
5) clarify expandedSizes/expandedStrides parameters in inferExpandGeometry.
6) simplify inferSize2/inferSizeN implementations.
7) use new-style classes for warning.
2017-06-11 05:37:59 -04:00
Gregory Chanan
deec86cc05 Clarify a number of comments. 2017-06-11 05:37:59 -04:00
Gregory Chanan
5e1a714386 Add backwards incompatibility docs. 2017-06-11 05:37:59 -04:00
Gregory Chanan
cd35091d9b Include simple broadcasting example and demonstrate lining up trailing dimensions. 2017-06-11 05:37:59 -04:00
Gregory Chanan
471dfe9791 Add documentation including links to numpy broadcasting semantics. 2017-06-11 05:37:59 -04:00
Edward Z. Yang
ba690d5607 Add support for NVTX functions. (#1748) 2017-06-10 18:26:58 +02:00
Soumith Chintala
2a49353d5e minor fix for docs of Upsample 2017-06-07 11:42:52 -04:00
Luca Antiga
b9ab26765e Add 3D upsampling (nearest and trilinear) with tests 2017-06-07 11:29:27 -04:00
Aron Barreira Bordin
d7db75c10f added CosineSimilarity to nn.distance and updated docs (#1672)
* added CosineSimilarity to nn.distance and updated docs
2017-06-06 22:53:21 -04:00
Adam Paszke
a53cde09b5 Rename masked_copy_ to masked_scatter_ 2017-06-06 01:06:14 -04:00
Bubble
2ce5875a4d Modify the sample code of extending autograd (#1720)
The original input cannot be used as input to Linear(), because forward() takes at least 3 arguments (2 given).
2017-06-05 23:36:58 -04:00
Sasank Chilamkurthy
24aecaa2c8 Cleanup torch vision docs (#1699)
* Modify torchvision documentation following https://github.com/pytorch/vision/pull/179

* Add new datasets to docs

* Fix wording in torch.datasets

* Small clarification
2017-06-05 11:52:41 -04:00
Soumith Chintala
460b8715a8 display version number in docs 2017-06-02 11:56:48 -04:00
Bubble
447fe953e5 Modify the sample code of volatile (#1694)
The original two inputs (torch.randn(5,5)) cannot be used as input to resnet, which expects (batch, channels, height, width)
2017-06-01 09:46:04 -04:00
Jiaming Liu
630af4d7d8 add learning rate schedulers (#1370) 2017-05-25 16:21:43 -04:00
Edward Z. Yang
2f4bf4ab39 Rewrite 'How autograd encodes the history' to accurately describe current setup. (#1580)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-17 19:21:20 -04:00
Adam Paszke
6b84dc26f0 Add F.cosine_similarity (#1502) 2017-05-15 11:12:54 -06:00
Edward Z. Yang
743e4894d2 Prefix values/indices/sparse_mask/nnz with underscore (#1457)
As discussed in #1441.

I also added some docs giving clear guidance about how to coalescing
in sparse tensors.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-03 11:14:10 -04:00
Soumith Chintala
ecd51f8510 docs fixes 2017-05-02 15:42:33 -04:00
Edward Z. Yang
181cb15c72 Fix formatting error in docs.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-01 21:47:22 -04:00
Adam Paszke
5c7453447f Fix bugs, rename differentiate to grad, make it more flexible 2017-05-01 16:44:56 -04:00
Adam Paszke
e5db8f98be Add torch.autograd.differentiate 2017-05-01 16:44:56 -04:00
Edward Z. Yang
4624278b1d Make sparse documentation title consistent with others. (#1420)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-01 11:48:00 -04:00
Kai Arulkumaran
48a7869b23 Doc fixes (#1409) 2017-04-30 08:28:19 -04:00
Edward Z. Yang
9c01f5d6b2 Document hybrid sparse tensors.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-28 23:53:01 +02:00
Kai Arulkumaran
cbb9f08b71 Add new init methods gain, eye and dirac (#1172) 2017-04-28 17:16:40 -04:00
Edward Z. Yang
b39a2f2cbb Documentation for sparse tensors. (#1366) 2017-04-26 21:43:05 +02:00
Dmitry Ulyanov
fa4f363b93 Instance norm (#1283)
* instance norm

* fix whitespaces

* whitespaces

* docs

* "C" letter was cyrillic in docs, fixed

* remove force_eval, fix non-contiguous case
2017-04-23 14:49:15 +02:00
Sam Gross
1375694853 Document torchvision members 2017-04-21 12:50:36 -07:00
Lucas Beyer
9150e33765 Add support for creating docsets. (#1276)
Docsets are an offline documentation format introduced by Dash.app and
supported by Zeal and some other open-source clones.
2017-04-17 16:35:02 -04:00
Lucas Beyer
e4478804ce Fix patched_make_field for newer Sphinx versions. (#1275)
Not sure since which version this change is needed, but I'm using v1.5.5 here.
2017-04-17 16:17:58 -04:00
Quan Vuong
f6fef3718e fix typo in autograd.rst (#1219) 2017-04-10 01:16:59 -04:00
Soumith Chintala
aa506fa4d7 fix docs typo 2017-04-05 23:42:02 -04:00
Adam Paszke
91c4ba7980 Add torch.arange and deprecate torch.range 2017-04-03 10:38:58 -04:00
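
The difference in one hedged line: `range` included its endpoint, while `arange` follows the Python/NumPy half-open convention.

```
import torch

print(torch.arange(0, 5))  # 0, 1, 2, 3, 4 -- endpoint excluded
# torch.range(0, 5)        # deprecated: 0 .. 5 inclusive, 6 elements
```
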
Soumith Chintala
2979f4b989 add more functions to docs 2017-03-29 01:29:17 -04:00
Soumith Chintala
22b3600f19 add samplers to documentation 2017-03-29 00:33:07 -04:00
Soumith Chintala
2fd4d088ff add Adaptive pooling methods to docs 2017-03-26 22:43:46 -04:00
Soumith Chintala
2d750b9da5 fix typo 2017-03-23 09:40:06 -04:00
Du Phan
86e40ed875 Fix a typo in docs about pinned memory buffers (#1023)
* remove misleading guide for BCELoss

* fix docs about pinned memory buffers
2017-03-17 05:08:03 -04:00
Soumith Chintala
13b1580613 add F.pad to docs 2017-03-15 00:09:14 -04:00
Jonathan Tremblay
9004652c7b updated the documentation to remove the unnecessary copy grads when using multiprocessing 2017-03-13 19:04:17 -04:00
Alykhan Tejani
01650ac9de add torch.nn.init docs to the source folder (#979) 2017-03-11 10:11:30 -05:00
Alexis David Jacq
2b1cd919ce Update extending.rst (#933) 2017-03-06 23:23:14 -05:00
Soumith Chintala
8e46a15605 add docs for set_printoptions to sphinx (#945) 2017-03-06 21:52:37 -05:00
Li Dong
761d6799be code syntax error in document (serialization.rst) (#937) 2017-03-06 10:06:04 -05:00
Sri Krishna
0d179aa8db Updated datasets.rst, combined all commits (#931)
Added MNIST in the docs

Updated incomplete cifar doc

Updated the datasets.rst to include all datasets
2017-03-05 17:38:28 -05:00
Yiran Mao
7d58765cee docs: Fixed example code bug in extending module doc. 2017-03-05 12:09:08 -05:00
Adam Paszke
da725830c2 Add support for variable length sequences in RNNs (#873) 2017-03-01 17:36:32 +01:00
Eli Stevens
88275da5e8 CUDA documentation tweaks (#858) 2017-02-26 20:37:43 +01:00
Adam Paszke
b3d41a5f96 Add docs for ModuleList and ParameterList 2017-02-26 20:02:42 +01:00
Eli Stevens
b87c113cf4 CUDA documentation enhancement and docs versioning (#848)
* Add more detail to CUDA documentation

Also adds better cross-linking to the pages that discuss relevant topics.

* Adds recommendation to torch.save docs

* Make the version numbers for the docs dynamic

Might need tweaks for beta, 1.0, etc.
2017-02-26 08:33:26 -05:00
Soumith Chintala
38c8520adf adding unsqueeze to docs 2017-02-23 12:13:25 -05:00
Adam Paszke
c2c1710047 Add clip_grad_norm 2017-02-20 23:28:31 -08:00
Sasank Chilamkurthy
49295ebe54 Add sequential to documentation 2017-02-18 08:42:43 +05:30
Adam Paszke
8f3da5b51d set_index -> _set_index 2017-02-01 21:48:11 +01:00
Adam Paszke
825e919eb8 Add torch.unbind 2017-02-01 21:48:11 +01:00
tvn
44196955e2 ByteTensor should be unsigned (#664)
2017-01-31 21:43:39 -05:00
Soumith Chintala
d4c9a3782b billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix (#617)
* billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix
2017-01-30 05:08:48 +05:30
Adam Paszke
57373c7c29 Fix docs 2017-01-28 01:16:04 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Alfredo Canziani
ec4602a973 Fix bad code alignment (#612)
forward *is* a method of the Linear class
2017-01-27 20:16:49 +01:00
Alfredo Canziani
a38749d15f Fix cuda notes
Target GPU *is* consistent with source GPU
2017-01-27 19:30:49 +01:00
Soumith Chintala
0bc4246425 adding NLLLoss2d to docs 2017-01-24 09:22:51 -05:00
Adam Paszke
07ebbcbcb3 Add Parameter docs 2017-01-22 18:32:51 -05:00
Adam Paszke
f8ae34706e Port L-BFGS from Lua optim 2017-01-22 18:02:40 -05:00
Sheng Zhang
c28575a4eb Fix typo in documentation for autograd 2017-01-20 21:59:33 +01:00
Adam Paszke
58a88d1ac0 Fix doc search and warnings 2017-01-20 11:36:41 +01:00
Adam Paszke
ee4c77c59f Docs improvements (#512)
* Always compile .numpy() for all types

* Add torch.nn.functional docs and hidden headers

* Use sphinx to generate torchvision docs

* Remove unused import in ffi utils
2017-01-19 17:28:49 -05:00
Sam Gross
c414bf0aaf Fix handling of unicode in torch._C._add_docstr (#487) 2017-01-18 17:22:30 -05:00
Adam Paszke
be45231ccb Improve ffi utils (#479)
* Improve ffi utils
2017-01-18 11:17:01 -05:00
Sam Gross
2082ccbf59 More Tensor docs (#470) 2017-01-18 00:42:41 -05:00
Sam Gross
a09f653f52 Begin to document TensorBase methods (#466) 2017-01-17 21:44:12 -05:00
Sam Gross
517fb2f410 Remove free() and retain() from Tensor (#464) 2017-01-17 18:15:11 -05:00
Sam Gross
35c2821d71 Add documentation for methods defined in TensorBase (#462) 2017-01-17 17:40:54 -05:00
Adam Paszke
4cc11066b2 Add torch.utils.data docs and improve notes (#460)
* Add torch.utils.data docs and improve notes
2017-01-17 14:51:05 -05:00
Sam Gross
db7948d7d5 Add torchvision reference to docs
Some documentation is just copied from the GitHub readme for now.
2017-01-17 11:40:33 -08:00
Francisco Massa
8d9f6c2583 Minor fixes to docs 2017-01-17 10:19:14 -05:00
Soumith Chintala
ac32d8b706 fix docs 2017-01-16 21:08:14 -05:00
Adam Paszke
15c1dad340 Minor fixes and torch.cuda docs 2017-01-16 20:38:14 -05:00
Adam Paszke
6d8baf7c30 Fix Sphinx warnings 2017-01-16 20:38:14 -05:00
Adam Paszke
7ced682ff5 Add notes 2017-01-16 20:38:14 -05:00
Soumith Chintala
a0afb79898 add pic to readme 2017-01-16 20:15:19 -05:00
Adam Paszke
f91bb96071 Remove cmin, cmax and cinv 2017-01-16 19:07:37 -05:00
Soumith Chintala
652b468ec2 Readme improvements 2017-01-16 18:05:26 -05:00
Soumith Chintala
af110d37f2 remove old docs 2017-01-16 15:06:08 -05:00
Adam Paszke
df79631a72 Fix a mistake in autograd docs 2017-01-16 12:59:47 -05:00
Adam Paszke
77136e4c13 Add anything in torch.legacy docs 2017-01-16 12:59:47 -05:00
Adam Paszke
604e13775f Add optim docs 2017-01-16 12:59:47 -05:00
Adam Paszke
02380a74e3 Add warnings to multiprocessing docs 2017-01-16 12:59:47 -05:00
Sam Gross
3a07228509 Add ConvTranspose1d module (#449) 2017-01-13 15:22:57 -05:00
Sam Gross
24a2f2e3a0 Add MaxUnpool1d module (#447) 2017-01-13 14:36:25 -05:00
Sam Gross
cc32de8ef9 Fix typos etc. in docs
- replace "long" with the Python type "int"
- remove "reshape" from torch.rst since torch.reshape is not implemented
2017-01-12 21:25:50 -08:00
Sam Gross
d5e45b2278 Add AvgPool1d which just uses AvgPool2d implementation (#439) 2017-01-12 15:07:11 -05:00
Sam Gross
c9ec7fad52 Add model_zoo utility to torch.utils (#424)
This was originally part of a torchvision PR, but I think it will be
useful outside vision, such as for distributing word embeddings.
2017-01-09 13:16:58 -05:00
Soumith Chintala
87f1959be7 adding proper categories to torch.rst 2017-01-04 23:20:57 -05:00
Soumith Chintala
42f131c09f fixing nn.Conv* documentation for rst and adding nn docs to sphinx 2017-01-04 02:11:27 -05:00
Adam Paszke
89dca6ffdc Add a patch to stop Sphinx from cross-referencing ivar tags 2017-01-03 18:31:08 -05:00
Adam Paszke
b7f36f93d5 Expand autograd docs and add sections 2017-01-03 18:31:08 -05:00
Adam Paszke
58320d5082 Add multiprocessing docs 2017-01-03 18:31:08 -05:00
Soumith Chintala
a461804a65 adding docs for more torch.* functions 2017-01-03 18:29:50 -05:00
Soumith Chintala
817f6cc59d adding linspace, logspace, neg and range 2017-01-03 18:29:50 -05:00
Soumith Chintala
108936169c implement more torch.* docs, remove zero, cauchy, log_normal from torch.* docs as they are not stateless 2017-01-03 18:29:50 -05:00
Soumith Chintala
4f479a98d4 fix indentation issue for all examples, add doc for add 2017-01-03 18:29:50 -05:00
Soumith Chintala
6b4ed52f10 adding docs for some torch.* functions, removing all, any stateless methods 2017-01-03 18:29:50 -05:00
Sam Gross
dcf5f8671c Add __pow__ to Tensor and list additional undocumented functions (#398) 2017-01-03 13:38:44 -05:00
Adam Paszke
b277df6705 Doc css fixes for mobile and large screens (#389) 2016-12-31 12:01:01 -05:00
Adam Paszke
d2ef49384e Add custom docs stylesheet (#387) 2016-12-31 10:32:00 -05:00
Sam Gross
849794cd2c Remove deprecated and unimplemented functions (#383) 2016-12-30 18:37:44 -05:00
Sam Gross
ab5776449c Add documentation for some torch.xxx functions (#382) 2016-12-30 17:01:47 -05:00
Sam Gross
be98c5d12d Start documenting torch.Tensor (#377) 2016-12-30 01:21:34 -05:00
Adam Paszke
bc6a71b1f5 Add Function docs 2016-12-30 00:15:06 -05:00
Adam Paszke
26f1e2ca9c Add basic autograd docs 2016-12-30 00:15:06 -05:00
Adam Paszke
f4870ca5c6 Fix nn docs 2016-12-30 00:15:06 -05:00
Sam Gross
126a1cc398 Add Sphinx docs 2016-12-28 00:03:39 +01:00