Commit Graph

316 Commits

Author SHA1 Message Date
Richard Zou
3853d5da97 Add reduce keyword to NLLLoss and NLLLoss2d (#3080)
* API changes

* Implement reduce for THNN ClassNLLCriterion

* Implement reduce keyword for THCUNN ClassNLLCriterion

* Implement reduce for THNN SpatialClassNLLCriterion

* Implement reduce for THCUNN SpatialClassNLLCriterion

* Make legacy NLLLoss work

* Docs for NLLLoss reduce

* reduce keyword for double backwards NLLLoss

* reduce=False tests

* Addressed comments

* Fix trailing whitespace

* Fix test failures in legacy nn

* Rebase: add reduce keyword to aten declarations of NLLLoss

* Add reference functions for all NLLLoss and NLLLoss2d test cases

* Replaced slow get/set fns. Don't use int64_t in kernels.

* Use TH_INDEX_BASE in NLLLoss for consistency

* Fix legacy ClassNLLCriterion tests
2017-10-26 13:54:19 -04:00
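
A minimal sketch of the reduce keyword this commit adds to NLLLoss (illustrative only; written against the modern tensor API, and later releases replace reduce=False with reduction='none'):

    import torch
    import torch.nn.functional as F

    log_probs = F.log_softmax(torch.randn(4, 10), dim=1)       # (batch, classes)
    target = torch.tensor([1, 0, 3, 9])

    loss = F.nll_loss(log_probs, target)                       # default: one averaged scalar
    per_sample = F.nll_loss(log_probs, target, reduce=False)   # shape (4,), one loss per sample
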
Sam Gross
67839ce7bc Delete unused Softmax code (#3220)
Softmax and LogSoftmax are automatically bound and dispatched through
VariableType.
2017-10-21 20:51:27 +02:00
Sam Gross
5989b05ecc Enable ATen implementation of some NN functions and Variable methods 2017-10-20 15:38:01 -04:00
Adam Paszke
98e67448fa Large Softmax and LogSoftmax refactor
- Cleaned up THNN and THCUNN code and kernels
- Improved THCUNN kernel performance 5x, making it match cuDNN performance
- Added support for computing softmax over arbitrary dims
  NOTE: The default dim for 3D inputs is now 1 (used to be 0)
- Both functions now accept inputs with arbitrarily many dimensions
- Autograd functions no longer save the input (it's unnecessary)
- Added cuDNN bindings for softmax, but they are unused as THCUNN
  matches or even exceeds cuDNN performance
2017-10-19 19:51:10 +02:00
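
To illustrate the arbitrary-dim support described above, a small sketch (assuming the dim keyword as exposed by current torch.nn.functional; note the changed default for 3D inputs):

    import torch
    import torch.nn.functional as F

    x = torch.randn(2, 5, 7)     # 3D input
    p = F.softmax(x, dim=1)      # normalize over dim 1, the new 3D default
    assert torch.allclose(p.sum(dim=1), torch.ones(2, 7))
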
Marcin Elantkowski
57ffe64cbe Embedding related fixes (#3128)
* Fix docs for nn.Embedding and F.embedding.
  - add description of 'sparse' argument (#3104)
  - fix F.embedding example (resulted in RuntimeError)
* Make EmbeddingBag a New Style Function.
* Add a functional interface for EmbeddingBag
* Fix failing tests: add max_norm and norm_type to context,
and fix typo in backend call.
* Docfix: remove torch.manual_seed from example code.
* Add a note about using sparse keyword in Embedding function.
2017-10-18 23:38:07 +02:00
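
For reference, a short sketch of the functional embedding interface touched by this commit (argument order as in current torch; treat it as illustrative):

    import torch
    import torch.nn.functional as F

    weight = torch.randn(10, 3)                    # 10 vocabulary entries, 3-dim vectors
    indices = torch.tensor([[1, 2, 4], [4, 3, 9]])

    out = F.embedding(indices, weight)             # shape (2, 3, 3): one row of weight per index
    # sparse=True makes the gradient w.r.t. weight a sparse tensor (see the note added above)
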
Arthur Crippa Búrigo
17d68f824d Fix typo. (#3140) 2017-10-17 00:50:33 +02:00
SsnL
6dc67aef17 doc (#3110) 2017-10-14 10:44:35 +02:00
Sam Gross
9437644f66 Replace softmin and softsign with simple differentiable expressions 2017-10-10 16:57:47 -04:00
Priya Goyal
2443fcac0b Deterministic cudnn algorithms 2017-10-10 10:53:34 -04:00
SsnL
0eec332e14 assert reflection padding in range (#3008) 2017-10-06 17:59:01 -04:00
Richard Zou
898c732293 Introduce a reduce keyword argument for MSELoss (#2878)
* Add reduce keyword to MSECriterion API

* Move gradOutput usage from py to backend

* Implement reduce keyword for THNN MSECriterion

* Implement reduce keyword for THCUNN MSECriterion

* Implement reduce keyword for MSE double backwards

* Tests for MSECriterion with reduce keyword

* Documentation for reduce for MSELoss

* Make legacy nn work with reduce keyword by ignoring it

* Apply linter suggestions

* Address comments (small changes)

* Revert "Tests for MSECriterion with reduce keyword"

This reverts commit 1c0be0defa49d336d023d7d9795db4037c92b6fe.

* Undo changes to legacy nn tests

* Reuse module test for MSELoss by creating a wrapper class for MSELoss

* Address comments: refactor MSECriterion.cu to be nicer

* Fix lint & build errors
2017-10-06 10:57:22 -04:00
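
As with NLLLoss above, a compact sketch of what reduce=False means for MSELoss: the loss keeps the input's shape instead of being averaged to a scalar (illustrative, modern API; later superseded by reduction='none'):

    import torch
    import torch.nn.functional as F

    x, y = torch.randn(3, 5), torch.randn(3, 5)

    F.mse_loss(x, y)                 # scalar: mean squared error over all elements
    F.mse_loss(x, y, reduce=False)   # shape (3, 5): per-element squared errors
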
SsnL
ba766ef39a Fix BN size check in eval mode (#2977) 2017-10-04 16:03:20 -04:00
SsnL
faa6fdfa18 Raise error when each channel only has 1 value in batch norm (#2961)
* add error when each channel only has 1 value
2017-10-03 17:56:15 -04:00
SsnL
d5a7e304fa added volumetric adaptive max pooling 2017-09-30 16:57:51 -04:00
Edward Z. Yang
9be8d0a9d2 Add a docstring for functional.linear.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-26 12:29:07 -04:00
SsnL
6a4ec4f9a8 VolumetricAdaptiveAveragePool 2017-09-25 15:12:44 -04:00
Luca Antiga
c580352aee Adding 1d upsampling (#2846) 2017-09-24 16:50:24 -04:00
Emanuel Jöbstl
39434ee2e4 Added LPPool1d. (#2783) 2017-09-20 09:19:29 -04:00
David Pollack
c6ea6ed8ff Add Nd Padding, Pad1d functions and ConstantPad3d (#2657) 2017-09-18 14:48:49 -04:00
Gregory Chanan
d910a94b2b Support AdaptiveMaxPool1d/2d double backwards. 2017-09-13 12:28:43 -04:00
Lu Fang
5294017d9f Adding implicit padding for 3d average pooling 2017-08-26 14:45:19 -04:00
yunjey
153c9b0714 Add examples in functional.py and loss.py (#2371)
* Add examples in functional.py

Added examples for F.cross_entropy, F.binary_cross_entropy and F.binary_cross_entropy_with_logits.

* Add backticks for PyTorch docs

Added backticks for PyTorch docs.

* Add examples in loss.py

Added examples for nn.BCELoss and nn.BCEWithLogitsLoss.
2017-08-25 09:44:36 -04:00
Alykhan Tejani
30baba7d15 fix typo in docstring 2017-08-16 17:55:39 -04:00
Gregory Chanan
c92f229aa2 CosineEmbeddingLoss as a new style function. 2017-08-14 16:19:10 -04:00
Gregory Chanan
9bcb9658d5 MarginRankingLoss as new style function. 2017-08-14 16:19:10 -04:00
Gregory Chanan
7aeb837895 Implement HingeEmbeddingLoss double backwards. 2017-08-14 16:19:10 -04:00
Gregory Chanan
9a243abe5c Implement Softmin double backwards. 2017-08-14 16:19:10 -04:00
Gregory Chanan
a6cccc8701 Implement RReLU double backwards. 2017-08-14 16:19:10 -04:00
Luca Antiga
cd5275e79f Convert upsampling Functions to new style (#2372) 2017-08-11 21:03:58 -04:00
Soumith Chintala
42328b70f7 fix another is_same_size call 2017-08-02 19:53:39 -04:00
Soumith Chintala
b3ca3da4b6 fix type mismatch 2017-08-02 10:18:03 -04:00
yunjey
e1ca722988 Add comments for default value (#2248)
Added comments for default value in nn.functional
2017-08-01 14:27:46 +05:30
Alykhan Tejani
643f8d12ff [bugfix] in bce_with_logits logsumexp calculation (#2221)
* fix bug in bce_with_logits logsumexp calculation

* flake8 fix
2017-07-27 05:58:56 +05:30
Gregory Chanan
bcea678e7b Update rebased functions to call apply. 2017-07-25 07:37:25 +05:30
Gregory Chanan
1a52ca02ef Always return indices from MaxPool autograd functions to simplify implementation;
the callers (in functional.py) will filter out the return instead.
2017-07-25 07:37:25 +05:30
Gregory Chanan
291369ff1b Convert pooling functions to new-style, once_differentiable functions. 2017-07-25 07:37:25 +05:30
Gregory Chanan
9608e37969 Implement double backwards for PReLU. 2017-07-25 07:37:25 +05:30
Gregory Chanan
ec7c510557 Implement Softsign double backwards. 2017-07-25 07:37:25 +05:30
Gregory Chanan
852dd5f011 Convert _WeightedLoss functions to new style autograd functions. 2017-07-25 07:37:25 +05:30
Gregory Chanan
085abee444 Rebase kl_div changes. 2017-07-25 07:37:25 +05:30
Gregory Chanan
45ce4df74c Convert auto nn Functions (non-criterion) to new style. 2017-07-25 07:37:25 +05:30
Alykhan Tejani
112728cbe9 reformulate bce_with_logits to not use abs (#2195)
* reformulate bce_with_logits to not use abs

* flake8 fixes
2017-07-25 03:46:27 +05:30
Alykhan Tejani
35757af6f7 Add broadcasting of weights to bce/bce_with_logits (#2161)
* added tests + removed explicit expand of weight in bce with logits

* add auto broadcasting of weight to BCELoss

* remove the need for _BCELoss

* formatting of warning

* remove TODO

* move across assert from _functions/thnn/loss.py

* flake8 fixes
2017-07-21 16:02:07 -04:00
yunjey
ea607afd06 Add comments in nn.Upsample (#2175) 2017-07-21 14:34:58 -04:00
Edward Z. Yang
f3f478960e Convert Embedding to new style. (#1916)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-07-20 02:35:21 -04:00
Hugh Perkins
e537023147 add functional embedding (#1987) 2017-07-20 01:53:37 -04:00
Aron Barreira Bordin
11f3ccf98f Add missing Modules to nn.functional (#1801)
* add dropout2d and dropout3d to functional

added some loss functions to functional

added tests

using dropout from backend

added docs

fixes

* edited loss modules to call functional
2017-07-19 15:55:21 -04:00
Fisher Yu
d6bc2642e7 Add ignore_index to NLLLoss2d 2017-07-13 23:22:48 -04:00
Soumith Chintala
58e4caf80f add missing docs 2017-07-13 01:01:04 -04:00
Soumith Chintala
169ca67a4e Adding Spatial Transformers w/CuDNN support 2017-07-12 14:32:06 -04:00
yunjey
1ef1dd9cad Add comments for readability (#2005) 2017-07-10 23:02:56 -07:00
Leonid Vlasenkov
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
Gregory Chanan
f6578c1b24 Implement double backwards for Dropout and FeatureDropout. 2017-07-03 18:51:22 -04:00
Gregory Chanan
daa84e7663 Implement bilinear double backward. 2017-07-03 18:51:22 -04:00
Gregory Chanan
1aa145dbac Implement ConstantPad2d double backwards. 2017-07-03 18:51:22 -04:00
Alykhan Tejani
457587088a Fix broadcasting issues in binary_cross_entropy_with_logits (#1944)
* don't re-seed cuda device if in bad fork

* avoid broadcasting in binary_cross_entropy_with_logits

* assert input sizes for BCEWithLogitsLoss

* added check that BCEWithLogitsLoss == Sigmoid + BCELoss

* fix flake8 issues

* rename test_bce_with_logits_gives_same_result_as_bce_and_sigmoid -> test_bce_with_logits_gives_same_result_as_sigmooid_and_bce_loss

* add warning in BCELoss about input shapes

* fix lint
2017-07-01 23:06:36 -04:00
Sam Gross
da0fad8a7a Use torch.matmul in nn.Linear (#1935)
This takes advantage of the broadcasting behavior of torch.matmul to
support inputs with more than two dimensions. The extra dimensions are
treated like part of the batch dimension, much like nn.Bottle in Lua
Torch.

There are a few related small performance changes:

 * Addmm computes the gradient in column-major for inputs in
   column-major format
 * Variable.mm calls Addmm in-place with the desired output buffer
2017-06-30 16:53:26 -04:00
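
A quick sketch of the broadcasting behaviour this enables in nn.Linear (illustrative):

    import torch
    import torch.nn as nn

    linear = nn.Linear(16, 8)
    x = torch.randn(4, 10, 16)   # extra leading dims are treated as batch dims
    y = linear(x)                # shape (4, 10, 8)
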
Sam Gross
4d5075add2 Add ignore_index to nll_loss and cross_entropy (#1937) 2017-06-29 00:10:13 -04:00
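
A minimal sketch of the ignore_index argument (illustrative; -100 is just a placeholder index):

    import torch
    import torch.nn.functional as F

    logits = torch.randn(3, 5)
    target = torch.tensor([1, -100, 4])   # entries equal to ignore_index are skipped

    loss = F.cross_entropy(logits, target, ignore_index=-100)
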
Leonid Vlasenkov
ae61f3ff42 adds poisson NLL loss (#1779) 2017-06-27 10:04:54 -04:00
Alykhan Tejani
67968cb60b Add numerically stable BCELoss which takes logits as input (#1792) 2017-06-19 22:05:51 -04:00
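
A sketch of why the logits-based variant matters: it fuses the sigmoid with the loss so large-magnitude logits do not saturate before the log (illustrative, modern API):

    import torch
    import torch.nn.functional as F

    logits = 20 * torch.randn(4, 3)       # large-magnitude logits
    target = torch.rand(4, 3)

    stable = F.binary_cross_entropy_with_logits(logits, target)
    # Mathematically equivalent, but the explicit sigmoid can saturate to exactly 0/1:
    naive = F.binary_cross_entropy(torch.sigmoid(logits), target)
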
Francisco Massa
76ee014d10 Add documentation to SELU and AlphaDropout 2017-06-19 18:18:01 -04:00
Francisco Massa
f619ac6ac9 Quickfix for AlphaDropout on CUDA 2017-06-19 18:18:01 -04:00
Sam Gross
38b9598685 Added GLU (gated linear unit)
From https://arxiv.org/abs/1612.08083
2017-06-13 20:48:19 -04:00
Francisco Massa
6626881e7a Add Alpha Dropout (#1775) 2017-06-13 00:39:49 +02:00
Francisco Massa
a24db91a38 Add SELU activation function (#1769)
* Add SELU activation function

* Remove unnecessary case

* Add Function for SELU + tests and fix RReLU inplace

* Fix extra line in doc

* Fix tests

Remove in-place tests for RReLU. For some reason they fail on legacy nn, but pass on nn

* SELU in new-style Function

It also supports double backprop, verified with gradgradcheck

* Fix flake8
2017-06-11 10:07:48 +03:00
Luca Antiga
b9ab26765e Add 3D upsampling (nearest and trilinear) with tests 2017-06-07 11:29:27 -04:00
Soumith Chintala
df7c47142d fix for THNN NLLLoss signature change 2017-06-07 00:18:11 -04:00
Aron Barreira Bordin
d7db75c10f added CosineSimilarity to nn.distance and updated docs (#1672)
* added CosineSimilarity to nn.distance and updated docs
2017-06-06 22:53:21 -04:00
Marvin Cao
174c3cc399 Add support for double backward of LeakyReLU (#1714) 2017-06-05 11:53:27 -04:00
Alykhan Tejani
f1c57ace1b added input dim checks to convxD and conv_transposedxd (#1695)
* add input dim check for conv2d

* add None check to conv2d

* added input dim checks to convxD and conv_transposedxd

* flake8 fixes
2017-06-02 11:58:19 -04:00
Thomas Viehmann
6107d15d14 Twice differentiability of pointwise functions (#1531) 2017-05-15 12:00:59 -06:00
Adam Paszke
6b84dc26f0 Add F.cosine_similarity (#1502) 2017-05-15 11:12:54 -06:00
Marvin Cao
0ba20435ce Add high order grad support for Some operator (#1507) 2017-05-14 23:02:04 +02:00
Gregory Chanan
171638a451 Fix test_normalize NN test. 2017-05-09 14:25:06 -07:00
Gregory Chanan
ae2b2cbbec Make keepdim work with autograd. 2017-05-09 14:15:59 -07:00
Sergey Zagoruyko
6d693fe413 Add F.normalize (#1467) 2017-05-07 13:54:16 +02:00
Marvin CAO
e3f41a4962 Add high order gradient support for Sigmoid (#1496) 2017-05-07 13:00:20 +02:00
Ankit Vani
4e18d89791 added twice differentiation for a bunch of ops (#1426) 2017-05-04 06:47:14 -04:00
andrew giessel
2e7635b929 Add flexible bilinear upsampling aspect ratio redux (#1317) 2017-05-03 08:46:28 -04:00
Soumith Chintala
ecd51f8510 docs fixes 2017-05-02 15:42:33 -04:00
Soumith Chintala
7dd8571bc6 fix avg_pool docs in nn.functional 2017-04-30 08:44:43 -04:00
Adam Paszke
457d78a7d9 Use THCUNN backward kernels for Tanh and Sigmoid in Autograd (#1399) 2017-04-29 09:07:03 -04:00
Uridah Sami Ahmed
75f1989bec Add nn.Bilinear and tests 2017-04-28 10:11:30 -04:00
Shubham Jain
a35f507532 Update functional.py (#1298) 2017-04-19 11:07:12 -04:00
Edward Z. Yang
34546f022a Expose dilated convolutions.
Fixes #1225.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
Edward Z. Yang
ab77742f6e Add some missing documentation for arguments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
Christian Sarofeen
e9ff57176b Fused pointwise kernels for GRU/LSTM 2017-04-11 13:42:06 -07:00
Christian Sarofeen
0b50f794e9 Use thnn version of Tanh/Sigmoid instead of autograd. (#1234) 2017-04-11 12:49:57 -07:00
Edgar Riba
9504246c32 add triplet margin loss (#1165) 2017-04-05 22:17:58 -04:00
Soumith Chintala
2979f4b989 add more functions to docs 2017-03-29 01:29:17 -04:00
Jason Kuen
f2c1071c33 Adaptive max and average pooling (1D & 2D) (#1084) 2017-03-26 17:09:28 +02:00
Edgar Riba
63f6c0d692 add Pairwise distance (#835) 2017-03-24 11:29:40 -04:00
ngimel
b3ab4b1094 Check torch.backends.cudnn.enabled, padding, and output_padding (#996)
* Check torch.backends.cudnn.enabled
* Don't allow negative padding and output_padding values
2017-03-22 19:42:11 -04:00
Kentaro Wada
7654b3f49e Add function to compute cross_entropy for 2D image (#802) 2017-03-16 17:34:04 +01:00
Soumith Chintala
13b1580613 add F.pad to docs 2017-03-15 00:09:14 -04:00
Sam Gross
34ce58c909 Parallelize backwards 2017-03-03 11:26:00 -08:00
Sergey Zagoruyko
12efd53dba ConstantPad2d and F.pad (#856) 2017-03-01 19:39:44 +01:00
Ofir Press
5e1d6a3691 Update functional.py (#862)
Fixed documentation error in conv3d
2017-02-27 10:42:02 -05:00
陈云
838842d4b2 fix documentation error. [issue #790](https://github.com/pytorch/pytorch/issues/790) (#831) 2017-02-23 08:59:29 +01:00
Joo-Kyung Kim
336eeee895 kernel_size as the default stride for avg_pool1d (#744)
Following the documentation, let stride be kernel_size if stride is not provided.
2017-02-15 13:12:18 +05:30
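
The behaviour in question, as a short sketch (illustrative):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 2, 12)                      # (batch, channels, length)
    a = F.avg_pool1d(x, kernel_size=3)             # stride defaults to kernel_size -> length 4
    b = F.avg_pool1d(x, kernel_size=3, stride=3)   # explicit equivalent
    assert torch.equal(a, b)
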
Soumith Chintala
d4c9a3782b billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix (#617)
* billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix
2017-01-30 05:08:48 +05:30
Luke Yeager
3ed720079e [pep8] Fix most remaining lint manually 2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
f8d4f980b3 Add upsampling modules and functions 2017-01-24 17:30:50 -05:00
Alykhan Tejani
f8e89fbe11 fix docs for torch.nn.functional.conv1d (#536) 2017-01-21 10:41:52 -05:00
Adam Paszke
ee4c77c59f Docs improvements (#512)
* Always compile .numpy() for all types

* Add torch.nn.functional docs and hidden headers

* Use sphinx to generate torchvision docs

* Remove unused import in ffi utils
2017-01-19 17:28:49 -05:00
Sergey Zagoruyko
9c218b419f kl_div and docs (#429) 2017-01-17 19:24:01 -05:00
Adam Paszke
1dbf44c00d Add SmoothL1Loss to functional 2017-01-16 12:59:47 -05:00
Sam Gross
3a07228509 Add ConvTranspose1d module (#449) 2017-01-13 15:22:57 -05:00
Sam Gross
24a2f2e3a0 Add MaxUnpool1d module (#447) 2017-01-13 14:36:25 -05:00
Sam Gross
d5e45b2278 Add AvgPool1d which just uses AvgPool2d implementation (#439) 2017-01-12 15:07:11 -05:00
Sam Gross
fd92470e23 Add cuDNN bindings for BatchNorm (#421) 2017-01-07 15:35:24 -05:00
Adam Paszke
483490cc25 Move PixelShuffle implementation to functional 2016-12-30 23:02:57 +01:00
Adam Paszke
8d60e39fdc Rename torch.nn.functions to torch.nn._functions 2016-12-30 23:02:57 +01:00
Sam Gross
c367e0b64e Support dilated 1d and 3d convolutions (#372)
Fixes #367
2016-12-29 18:20:32 -05:00
Sergey Zagoruyko
62af45d99f Basic functional interface (#354) 2016-12-29 22:53:57 +01:00