Commit Graph

179 Commits

Author SHA1 Message Date
Sam Gross
9cb8b43778
Split off in-place NN functions (#3683)
For example, this splits threshold into threshold(), which is now
never in-place, and threshold_(), which is always in-place.

This simplifies the in-place vs. non-in-place logic in
gen_variable_type.py, which was bug-prone.
2017-11-14 12:59:06 -05:00
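
As a sketch of the split this commit describes — assuming today's tensor API rather than the Variable-era signatures:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4)

# Out-of-place: always returns a new tensor; x is untouched.
y = F.threshold(x, threshold=0.5, value=0.0)

# In-place: the trailing underscore mutates x directly.
F.threshold_(x, 0.5, 0.0)
```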
josecabjim
e33df2b88a Add border-padding for grid_sampler (#3599)
* adds border padding to spatial grid sampler

* fixes flake8

* adds docs
2017-11-12 18:46:49 -05:00
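
A sketch of what border padding changes, in today's F.grid_sample spelling (the align_corners flag is a later addition and an assumption here):

```python
import torch
import torch.nn.functional as F

img = torch.randn(1, 1, 4, 4)

# Identity affine transform; grid coordinates live in [-1, 1].
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=False)

# padding_mode='border' clamps out-of-range sample locations to the
# edge pixels instead of reading zeros.
out = F.grid_sample(img, grid, padding_mode='border', align_corners=False)
```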
Edward Z. Yang
19515520bb Make prelu an ATen op.
This operator is a warmup I was doing before tackling convolution, as it
has many properties that make it a "first" for implementing things.  In
particular, it is the first operator whose backwards has multiple
returns; this means its double backwards is the first backwards for a
function with multiple differentiable outputs.  This exercises new code
for output_mask and set_flags.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-11-10 09:58:40 +08:00
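
In practice, "double backwards for a function with multiple differentiable outputs" is what gradgradcheck exercises: it differentiates through prelu's backward, which yields gradients for both input and weight. A minimal sketch:

```python
import torch
import torch.nn.functional as F
from torch.autograd import gradgradcheck

x = torch.randn(3, 4, dtype=torch.double, requires_grad=True)
w = torch.rand(4, dtype=torch.double, requires_grad=True)  # one weight per channel

# prelu's backward returns grads w.r.t. both x and w, so checking
# second derivatives walks a multi-output backward graph.
assert gradgradcheck(F.prelu, (x, w))
```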
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
Richard Zou
77ddd5130b Add reduce keyword for KLDivLoss (#3330) 2017-11-07 08:57:11 -05:00
Hugh Perkins
b043a74919 fix softmax doc (#3337) 2017-11-01 08:47:51 -04:00
Gökçen Eraslan
638f0b5d78 Prevent numerical issues with poisson_nll_loss when log_input=False (#3336)
* Prevent numerical issues with poisson_nll_loss when log_input=False

Evaluation of the logarithm of the input variable in the Poisson negative log likelihood leads to a NaN loss if the variable being evaluated is zero. A small epsilon is added to prevent this. See the equivalent Keras epsilon here: https://github.com/fchollet/keras/blob/master/keras/losses.py#L68

* PEP8 fix

* Add epsilon support to PoissonNLLLoss in nn.modules.loss
2017-11-01 08:47:19 -04:00
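
A minimal sketch of the failure mode and the fix, assuming the standard Poisson NLL formula `loss = input - target * log(input)` for log_input=False:

```python
import torch
import torch.nn.functional as F

rate = torch.tensor([0.0, 0.5, 3.0])    # raw (non-log) rates
target = torch.tensor([0.0, 1.0, 2.0])

# Naive: log(0) = -inf, and 0 * -inf = nan poisons the first element.
naive = rate - target * torch.log(rate)

# Fixed: a tiny epsilon inside the log keeps everything finite.
eps = 1e-8
stable = rate - target * torch.log(rate + eps)

# The built-in exposes the same knob.
loss = F.poisson_nll_loss(rate, target, log_input=False, eps=eps)
```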
Richard Zou
6214487fa7 Add reduce keyword to L1Loss (#3366)
* Add reduce keyword to L1Loss

* Fix legacy test for abscriterion

* Address comments
2017-11-01 06:33:18 -04:00
Richard Zou
eac0942f6d Add more nn docs (#3374) 2017-10-30 18:37:36 -04:00
Ozan Caglayan
28f3d50f9d doc: Replace nclasses with C 2017-10-30 12:06:20 -04:00
John Chiotellis
a0ce84e476 fix triplet margin loss documentation (#3339) 2017-10-28 17:15:58 +02:00
SsnL
de1f4e69dd raw text (#3327) 2017-10-28 01:24:02 +05:30
Richard Zou
d8f3c601e4 Add reduce keyword to CrossEntropyLoss 2017-10-27 19:19:52 +02:00
Richard Zou
3853d5da97 Add reduce keyword to NLLLoss and NLLLoss2d (#3080)
* API changes

* Implement reduce for THNN ClassNLLCriterion

* Implement reduce keyword for THCUNN ClassNLLCriterion

* Implement reduce for THNN SpatialClassNLLCriterion

* Implement reduce for THCUNN SpatialClassNLLCriterion

* Make legacy NLLLoss work

* Docs for NLLLoss reduce

* reduce keyword for double backwards NLLLoss

* reduce=False tests

* Addressed comments

* Fix trailing whitespace

* Fix test failures in legacy nn

* Rebase: add reduce keyword to aten declarations of NLLLoss

* Add reference functions for all NLLLoss and NLLLoss2d test cases

* Replaced slow get/set fns. Don't use int64_t in kernels.

* Use TH_INDEX_BASE in NLLLoss for consistency

* Fix legacy ClassNLLCriterion tests
2017-10-26 13:54:19 -04:00
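
The reduce keyword added across this family of losses (MSELoss, L1Loss, KLDivLoss, NLLLoss, CrossEntropyLoss) switches off the final averaging or summing. A sketch; note that later PyTorch releases fold this into reduction='none':

```python
import torch
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(3, 5), dim=1)
target = torch.tensor([1, 0, 4])

# Default: losses are reduced to a single scalar.
scalar = F.nll_loss(log_probs, target)

# reduce=False, as introduced here: one loss per sample, shape (3,).
per_sample = F.nll_loss(log_probs, target, reduce=False)
```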
Sam Gross
67839ce7bc Delete unused Softmax code (#3220)
Softmax and LogSoftmax are automatically bound and dispatched through
VariableType.
2017-10-21 20:51:27 +02:00
Sam Gross
5989b05ecc Enable ATen implementation of some NN functions and Variable methods 2017-10-20 15:38:01 -04:00
Adam Paszke
98e67448fa Large Softmax and LogSoftmax refactor
- Cleaned up THNN and THCUNN code and kernels
- Improved THCUNN kernel performance 5x, making it match cuDNN performance
- Added support for computing softmax over arbitrary dims
  NOTE: The default dim for 3D inputs is now 1 (used to be 0)
- Both functions now accept inputs with arbitrarily many dimensions
- Autograd functions no longer save the input (it's unnecessary)
- Added cuDNN bindings for softmax, but they are unused as THCUNN
  matches or even exceeds cuDNN performance
2017-10-19 19:51:10 +02:00
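
The headline change for users is the explicit dim argument; a quick sketch:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 4)

# Softmax over an explicitly chosen dimension.
p = F.softmax(x, dim=1)    # the new default for 3D inputs
assert torch.allclose(p.sum(dim=1), torch.ones(2, 4))

p0 = F.softmax(x, dim=0)   # the old implicit 3D behavior
```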
Marcin Elantkowski
57ffe64cbe Embedding related fixes (#3128)
* Fix docs for nn.Embedding and F.embedding.
  - add description of 'sparse' argument (#3104)
  - fix F.embedding example (resulted in RuntimeError)
* Make EmbeddingBag a New Style Function.
* Add a functional interface for EmbeddingBag
* Fix failing tests: add max_norm and norm_type to context,
and fix typo in backend call.
* Docfix: remove torch.manual_seed from example code.
* Add a note about using sparse keyword in Embedding function.
2017-10-18 23:38:07 +02:00
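
For reference, a minimal working F.embedding call in the modern argument order (indices first, weight second):

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 3)              # vocabulary of 10, 3-dim vectors
indices = torch.tensor([[1, 2, 4, 5]])   # a batch of token ids

out = F.embedding(indices, weight)       # shape (1, 4, 3)
```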
Arthur Crippa Búrigo
17d68f824d Fix typo. (#3140) 2017-10-17 00:50:33 +02:00
SsnL
6dc67aef17 doc (#3110) 2017-10-14 10:44:35 +02:00
Sam Gross
9437644f66 Replace softmin and softsign with simple differentiable expressions 2017-10-10 16:57:47 -04:00
Priya Goyal
2443fcac0b Deterministic cudnn algorithms 2017-10-10 10:53:34 -04:00
SsnL
0eec332e14 assert reflection padding in range (#3008) 2017-10-06 17:59:01 -04:00
Richard Zou
898c732293 Introduce a reduce keyword argument for MSELoss (#2878)
* Add reduce keyword to MSECriterion API

* Move gradOutput usage from py to backend

* Implement reduce keyword for THNN MSECriterion

* Implement reduce keyword for THCUNN MSECriterion

* Implement reduce keyword for MSE double backwards

* Tests for MSECriterion with reduce keyword

* Documentation for reduce for MSELoss

* Make legacy nn work with reduce keyword by ignoring it

* Apply linter suggestions

* Address comments (small changes)

* Revert "Tests for MSECriterion with reduce keyword"

This reverts commit 1c0be0defa49d336d023d7d9795db4037c92b6fe.

* Undo changes to legacy nn tests

* Reuse module test for MSELoss by creating a wrapper class for MSELoss

* Address comments: refactor MSECriterion.cu to be nicer

* Fix lint & build errors
2017-10-06 10:57:22 -04:00
SsnL
ba766ef39a Fix BN size check in eval mode (#2977) 2017-10-04 16:03:20 -04:00
SsnL
faa6fdfa18 Raise error when each channel only has 1 value in batch norm (#2961)
* add error when each channel only has 1 value
2017-10-03 17:56:15 -04:00
SsnL
d5a7e304fa added volumetric adaptive max pooling 2017-09-30 16:57:51 -04:00
Edward Z. Yang
9be8d0a9d2 Add a docstring for functional.linear.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-26 12:29:07 -04:00
SsnL
6a4ec4f9a8 VolumetricAdaptiveAveragePool 2017-09-25 15:12:44 -04:00
Luca Antiga
c580352aee Adding 1d upsampling (#2846) 2017-09-24 16:50:24 -04:00
Emanuel Jöbstl
39434ee2e4 Added LPPool1d. (#2783) 2017-09-20 09:19:29 -04:00
David Pollack
c6ea6ed8ff Add Nd Padding, Pad1d functions and ConstantPad3d (#2657) 2017-09-18 14:48:49 -04:00
Gregory Chanan
d910a94b2b Support AdaptiveMaxPool1d/2d double backwards. 2017-09-13 12:28:43 -04:00
Lu Fang
5294017d9f Adding implicit padding for 3d average pooling 2017-08-26 14:45:19 -04:00
yunjey
153c9b0714 Add examples in functional.py and loss.py (#2371)
* Add examples in functional.py

Added examples for F.cross_entropy, F.binary_cross_entropy and F.binary_cross_entropy_with_logits.

* Add backticks for PyTorch docs

* Add examples in loss.py

Added examples for nn.BCELoss and nn.BCEWithLogitsLoss.
2017-08-25 09:44:36 -04:00
Alykhan Tejani
30baba7d15 fix typo in docstring 2017-08-16 17:55:39 -04:00
Gregory Chanan
c92f229aa2 CosineEmbeddingLoss as a new style function. 2017-08-14 16:19:10 -04:00
Gregory Chanan
9bcb9658d5 MarginRankingLoss as new style function. 2017-08-14 16:19:10 -04:00
Gregory Chanan
7aeb837895 Implement HingeEmbeddingLoss double backwards. 2017-08-14 16:19:10 -04:00
Gregory Chanan
9a243abe5c Implement Softmin double backwards. 2017-08-14 16:19:10 -04:00
Gregory Chanan
a6cccc8701 Implement RReLU double backwards. 2017-08-14 16:19:10 -04:00
Luca Antiga
cd5275e79f Convert upsampling Functions to new style (#2372) 2017-08-11 21:03:58 -04:00
Soumith Chintala
42328b70f7 fix another is_same_size call 2017-08-02 19:53:39 -04:00
Soumith Chintala
b3ca3da4b6 fix type mismatch 2017-08-02 10:18:03 -04:00
yunjey
e1ca722988 Add comments for default value (#2248)
Added comments for default value in nn.functional
2017-08-01 14:27:46 +05:30
Alykhan Tejani
643f8d12ff [bugfix] in bce_with_logits log-sum-exp calculation (#2221)
* fix bug in bce_with_logits logsumexp calculation

* flake8 fix
2017-07-27 05:58:56 +05:30
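
For context, a sketch of the stabilized log-sum-exp form that this fix and the "no abs" reformulation below converge on — an illustration, not the exact PR code:

```python
import torch

def bce_with_logits_stable(input, target):
    # log(1 + exp(-x)) via a log-sum-exp over {0, -x}, shifted by
    # max(0, -x) so neither exponent can overflow.
    max_val = (-input).clamp(min=0)
    return (input - input * target + max_val
            + ((-max_val).exp() + (-input - max_val).exp()).log())

x = torch.tensor([100.0, -100.0, 0.0])
z = torch.tensor([1.0, 0.0, 1.0])
print(bce_with_logits_stable(x, z))  # finite everywhere; the naive
                                     # log(1 + exp(-x)) overflows at -100
```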
Gregory Chanan
bcea678e7b Update rebased functions to call apply. 2017-07-25 07:37:25 +05:30
Gregory Chanan
1a52ca02ef Always return indices from MaxPool autograd functions to simplify the implementation;
the callers (in functional.py) will filter out the return instead.
2017-07-25 07:37:25 +05:30
Gregory Chanan
291369ff1b Convert pooling functions to new-style, once_differentiable functions. 2017-07-25 07:37:25 +05:30
Gregory Chanan
9608e37969 Implement double backwards for PReLU. 2017-07-25 07:37:25 +05:30
Gregory Chanan
ec7c510557 Implement Softsign double backwards. 2017-07-25 07:37:25 +05:30
Gregory Chanan
852dd5f011 Convert _WeightedLoss functions to new style autograd functions. 2017-07-25 07:37:25 +05:30
Gregory Chanan
085abee444 Rebase kl_div changes. 2017-07-25 07:37:25 +05:30
Gregory Chanan
45ce4df74c Convert auto nn Functions (non-criterion) to new style. 2017-07-25 07:37:25 +05:30
Alykhan Tejani
112728cbe9 reformulate bce_with_logits to not use abs (#2195)
* reformulate bce_with_logits to not use abs

* flake8 fixes
2017-07-25 03:46:27 +05:30
Alykhan Tejani
35757af6f7 Add broadcasting of weights to bce/bce_with_logits (#2161)
* added tests + removed explicit expand of weight in bce with logits

* add auto broadcasting of weight to BCELoss

* remove the need for _BCELoss

* formatting of warning

* remove TODO

* move across assert from _functions/thnn/loss.py

* flake8 fixes
2017-07-21 16:02:07 -04:00
yunjey
ea607afd06 Add comments in nn.Upsample (#2175) 2017-07-21 14:34:58 -04:00
Edward Z. Yang
f3f478960e Convert Embedding to new style. (#1916)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-07-20 02:35:21 -04:00
Hugh Perkins
e537023147 add functional embedding (#1987) 2017-07-20 01:53:37 -04:00
Aron Barreira Bordin
11f3ccf98f Add missing Modules to nn.functional (#1801)
* add dropout2d and dropout3d to functional

added some loss functions to functional

added tests

using dropout from backend

added docs

fixes

* edited loss modules to call functional
2017-07-19 15:55:21 -04:00
Fisher Yu
d6bc2642e7 Add ignore_index to NLLLoss2d 2017-07-13 23:22:48 -04:00
Soumith Chintala
58e4caf80f add missing docs 2017-07-13 01:01:04 -04:00
Soumith Chintala
169ca67a4e Adding Spatial Transformers w/CuDNN support 2017-07-12 14:32:06 -04:00
yunjey
1ef1dd9cad Add comments for readability (#2005) 2017-07-10 23:02:56 -07:00
Leonid Vlasenkov
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
Gregory Chanan
f6578c1b24 Implement double backwards for Dropout and FeatureDropout. 2017-07-03 18:51:22 -04:00
Gregory Chanan
daa84e7663 Implement bilinear double backward. 2017-07-03 18:51:22 -04:00
Gregory Chanan
1aa145dbac Implement ConstantPad2d double backwards. 2017-07-03 18:51:22 -04:00
Alykhan Tejani
457587088a Fix broadcasting issues in binary_cross_entropy_with_logits (#1944)
* don't re-seed cuda device if in bad fork

* avoid broadcasting in binary_cross_entropy_with_logits

* assert input sizes for BCEWithLogitsLoss

* added check that BCEWithLogitsLoss == Sigmoid + BCELoss

* fix flake8 issues

* rename test_bce_with_logits_gives_same_result_as_bce_and_sigmoid -> test_bce_with_logits_gives_same_result_as_sigmooid_and_bce_loss

* add warning in BCELoss about input shapes

* fix lint
2017-07-01 23:06:36 -04:00
Sam Gross
da0fad8a7a Use torch.matmul in nn.Linear (#1935)
This takes advantage of the broadcasting behavior of torch.matmul to
support inputs with more than two dimensions. The extra dimensions are
treated like part of the batch dimension, much like nn.Bottle in Lua
Torch.

There are a few related small performance changes:

 * Addmm computes the gradient in column-major for inputs in
   column-major format
 * Variable.mm calls Addmm in-place with the desired output buffer
2017-06-30 16:53:26 -04:00
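
A sketch of the resulting behavior:

```python
import torch
import torch.nn as nn

linear = nn.Linear(16, 8)

# All leading dimensions are treated as batch dims, courtesy of
# torch.matmul's broadcasting.
x = torch.randn(5, 10, 16)   # (batch, seq, features)
y = linear(x)                # (5, 10, 8)
```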
Sam Gross
4d5075add2 Add ignore_index to nll_loss and cross_entropy (#1937) 2017-06-29 00:10:13 -04:00
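
A sketch of the new keyword (the -100 sentinel is the eventual default; treat it as an assumption here):

```python
import torch
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(4, 3), dim=1)
target = torch.tensor([0, 2, -100, 1])   # -100 marks a padding row

# Rows whose target equals ignore_index contribute neither to the
# loss nor to the gradient.
loss = F.nll_loss(log_probs, target, ignore_index=-100)
```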
Leonid Vlasenkov
ae61f3ff42 adds poisson NLL loss (#1779) 2017-06-27 10:04:54 -04:00
Alykhan Tejani
67968cb60b Add numerically stable BCELoss which takes logits as input (#1792) 2017-06-19 22:05:51 -04:00
Francisco Massa
76ee014d10 Add documentation to SELU and AlphaDropout 2017-06-19 18:18:01 -04:00
Francisco Massa
f619ac6ac9 Quickfix for AlphaDropout on CUDA 2017-06-19 18:18:01 -04:00
Sam Gross
38b9598685 Added GLU (gated linear unit)
From https://arxiv.org/abs/1612.08083
2017-06-13 20:48:19 -04:00
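
A minimal usage sketch:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 8)

# GLU halves the input along `dim` and gates one half with the
# sigmoid of the other: glu([a, b]) = a * sigmoid(b).
y = F.glu(x, dim=-1)   # shape (2, 4)
```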
Francisco Massa
6626881e7a Add Alpha Dropout (#1775) 2017-06-13 00:39:49 +02:00
Francisco Massa
a24db91a38 Add SELU activation function (#1769)
* Add SELU activation function

* Remove unnecessary case

* Add Function for SELU + tests and fix RReLU inplace

* Fix extra line in doc

* Fix tests

Remove in-place tests for RReLU. For some reason they fail on legacy nn, but pass on nn

* SELU in new-style Function

It also supports double backprop, verified with gradgradcheck

* Fix flake8
2017-06-11 10:07:48 +03:00
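
A sketch of SELU against its closed form, using the constants from the self-normalizing networks paper:

```python
import torch
import torch.nn.functional as F

x = torch.randn(5)
y = F.selu(x)

# selu(x) = scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
alpha, scale = 1.6732632423543772, 1.0507009873554805
manual = scale * (x.clamp(min=0) + alpha * x.clamp(max=0).expm1())
assert torch.allclose(y, manual)
```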
Luca Antiga
b9ab26765e Add 3D upsampling (nearest and trilinear) with tests 2017-06-07 11:29:27 -04:00
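
A quick sketch in today's spelling (the functional at the time was F.upsample; F.interpolate is the later name):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 4, 4, 4)   # (N, C, D, H, W)

up_nearest = F.interpolate(x, scale_factor=2, mode='nearest')
up_trilinear = F.interpolate(x, scale_factor=2, mode='trilinear',
                             align_corners=False)
```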
Soumith Chintala
df7c47142d fix for THNN NLLLoss signature change 2017-06-07 00:18:11 -04:00
Aron Barreira Bordin
d7db75c10f added CosineSimilarity to nn.distance and updated docs (#1672)
* added CosineSimilarity to nn.distance and updated docs
2017-06-06 22:53:21 -04:00
Marvin Cao
174c3cc399 Add support for double backward of LeakyReLU (#1714) 2017-06-05 11:53:27 -04:00
Alykhan Tejani
f1c57ace1b added input dim checks to convXd and conv_transposeXd (#1695)
* add input dim check for conv2d

* add None check to conv2d

* added input dim checks to convXd and conv_transposeXd

* flake8 fixes
2017-06-02 11:58:19 -04:00
Thomas Viehmann
6107d15d14 Twice differentiability of pointwise functions (#1531) 2017-05-15 12:00:59 -06:00
Adam Paszke
6b84dc26f0 Add F.cosine_similarity (#1502) 2017-05-15 11:12:54 -06:00
Marvin Cao
0ba20435ce Add high order grad support for Some operator (#1507) 2017-05-14 23:02:04 +02:00
Gregory Chanan
171638a451 Fix test_normalize NN test. 2017-05-09 14:25:06 -07:00
Gregory Chanan
ae2b2cbbec Make keepdim work with autograd. 2017-05-09 14:15:59 -07:00
Sergey Zagoruyko
6d693fe413 Add F.normalize (#1467) 2017-05-07 13:54:16 +02:00
Marvin CAO
e3f41a4962 Add high order gradient support for Sigmoid (#1496) 2017-05-07 13:00:20 +02:00
Ankit Vani
4e18d89791 added twice differentiation for a bunch of ops (#1426) 2017-05-04 06:47:14 -04:00
andrew giessel
2e7635b929 Add flexible bilinear upsampling aspect ratio redux (#1317) 2017-05-03 08:46:28 -04:00
Soumith Chintala
ecd51f8510 docs fixes 2017-05-02 15:42:33 -04:00
Soumith Chintala
7dd8571bc6 fix avg_pool docs in nn.functional 2017-04-30 08:44:43 -04:00
Adam Paszke
457d78a7d9 Use THCUNN backward kernels for Tanh and Sigmoid in Autograd (#1399) 2017-04-29 09:07:03 -04:00
Uridah Sami Ahmed
75f1989bec Add nn.Bilinear and tests 2017-04-28 10:11:30 -04:00
Shubham Jain
a35f507532 Update functional.py (#1298) 2017-04-19 11:07:12 -04:00
Edward Z. Yang
34546f022a Expose dilated convolutions.
Fixes #1225.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
Edward Z. Yang
ab77742f6e Add some missing documentation for arguments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
Christian Sarofeen
e9ff57176b Fused pointwise kernels for GRU/LSTM 2017-04-11 13:42:06 -07:00