Luca Antiga
cd5275e79f
Convert upsampling Functions to new style ( #2372 )
2017-08-11 21:03:58 -04:00
Soumith Chintala
42328b70f7
fix another is_same_size call
2017-08-02 19:53:39 -04:00
Soumith Chintala
b3ca3da4b6
fix type mismatch
2017-08-02 10:18:03 -04:00
yunjey
e1ca722988
Add comments for default value ( #2248 )
Added comments for default values in nn.functional
2017-08-01 14:27:46 +05:30
Alykhan Tejani
643f8d12ff
[bugfix] in bce_with_logits logsumexp calculation ( #2221 )
* fix bug in bce_with_logits logsumexp calculation
* flake8 fix
2017-07-27 05:58:56 +05:30
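For context, the stable formulation whose log-sum-exp term this fixes can be sketched as follows; this is an illustration of the technique, not the literal code from the commit, and the variable names are my own:

    import torch

    def bce_with_logits_sketch(input, target):
        # BCE on logits: loss = x - x*z + log(1 + exp(-x)), with the log term
        # computed as a shifted log-sum-exp so exp() can never overflow.
        max_val = (-input).clamp(min=0)
        loss = input - input * target + max_val \
            + ((-max_val).exp() + (-input - max_val).exp()).log()
        return loss.mean()

    x = torch.randn(4) * 100   # large-magnitude logits stay finite
    z = torch.rand(4).round()  # 0/1 targets
    print(bce_with_logits_sketch(x, z))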
Gregory Chanan
bcea678e7b
Update rebased functions to call apply.
2017-07-25 07:37:25 +05:30
Gregory Chanan
1a52ca02ef
Always return indices from MaxPool autograd functions to simplify implementation;
The callers (in functional.py) will filter out the return instead.
2017-07-25 07:37:25 +05:30
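The filtering pattern described here looks roughly like the sketch below; _MaxPool2dFn is a hypothetical name standing in for the actual autograd Function:

    def max_pool2d_sketch(input, kernel_size, return_indices=False):
        # The autograd Function always returns (output, indices); the
        # functional.py wrapper drops the indices unless the caller asked.
        output, indices = _MaxPool2dFn.apply(input, kernel_size)  # hypothetical Function
        return (output, indices) if return_indices else output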
Gregory Chanan
291369ff1b
Convert pooling functions to new-style, once_differentiable functions.
2017-07-25 07:37:25 +05:30
Gregory Chanan
9608e37969
Implement double backwards for PReLU.
2017-07-25 07:37:25 +05:30
Gregory Chanan
ec7c510557
Implement Softsign double backwards.
2017-07-25 07:37:25 +05:30
Gregory Chanan
852dd5f011
Convert _WeightedLoss functions to new style autograd functions.
2017-07-25 07:37:25 +05:30
Gregory Chanan
085abee444
Rebase kl_div changes.
2017-07-25 07:37:25 +05:30
Gregory Chanan
45ce4df74c
Convert auto nn Functions (non-criterion) to new style.
2017-07-25 07:37:25 +05:30
Alykhan Tejani
112728cbe9
reformulate bce_with_logits to not use abs ( #2195 )
* reformulate bce_with_logits to not use abs
* flake8 fixes
2017-07-25 03:46:27 +05:30
Alykhan Tejani
35757af6f7
Add broadcasting of weights to bce/bce_with_logits ( #2161 )
* added tests + removed explicit expand of weight in bce with logits
* add auto broadcasting of weight to BCELoss
* remove the need for _BCELoss
* formatting of warning
* remove TODO
* move across assert from _functions/thnn/loss.py
* flake8 fixes
2017-07-21 16:02:07 -04:00
yunjey
ea607afd06
Add comments in nn.Upsample ( #2175 )
2017-07-21 14:34:58 -04:00
Edward Z. Yang
f3f478960e
Convert Embedding to new style. ( #1916 )
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-07-20 02:35:21 -04:00
Hugh Perkins
e537023147
add functional embedding ( #1987 )
2017-07-20 01:53:37 -04:00
Aron Barreira Bordin
11f3ccf98f
Add missing Modules to nn.functional ( #1801 )
* add dropout2d and dropout3d to functional
added some loss functions to functional
added tests
using dropout from backend
added docs
fixes
* edited loss modules to call functional
2017-07-19 15:55:21 -04:00
Fisher Yu
d6bc2642e7
Add ignore_index to NLLLoss2d
2017-07-13 23:22:48 -04:00
Soumith Chintala
58e4caf80f
add missing docs
2017-07-13 01:01:04 -04:00
Soumith Chintala
169ca67a4e
Adding Spatial Transformers w/CuDNN support
2017-07-12 14:32:06 -04:00
yunjey
1ef1dd9cad
Add comments for readability ( #2005 )
2017-07-10 23:02:56 -07:00
Leonid Vlasenkov
46a868dab7
[Ready] Limit docs line length ( #1900 )
* some docs are ready
* docs
* docs
* fix some more
* fix some more
2017-07-10 10:24:54 -04:00
Gregory Chanan
f6578c1b24
Implement double backwards for Dropout and FeatureDropout.
2017-07-03 18:51:22 -04:00
Gregory Chanan
daa84e7663
Implement bilinear double backward.
2017-07-03 18:51:22 -04:00
Gregory Chanan
1aa145dbac
Implement ConstantPad2d double backwards.
2017-07-03 18:51:22 -04:00
Alykhan Tejani
457587088a
Fix broadcasting issues in binary_cross_entropy_with_logits ( #1944 )
* don't re-seed cuda device if in bad fork
* avoid broadcasting in binary_cross_entropy_with_logits
* assert input sizes for BCEWithLogitLoss
* added check that BCEWithLogitsLoss == Sigmoid + BCELoss
* fix flake8 issues
* rename test_bce_with_logits_gives_same_result_as_bce_and_sigmoid -> test_bce_with_logits_gives_same_result_as_sigmoid_and_bce_loss
* add warning in BCELoss about input shapes
* fix lint
2017-07-01 23:06:36 -04:00
Sam Gross
da0fad8a7a
Use torch.matmul in nn.Linear ( #1935 )
This takes advantage of the broadcasting behavior of torch.matmul to
support inputs with more than two dimensions. The extra dimensions are
treated like part of the batch dimension, much like nn.Bottle in Lua
Torch.
There are a few related small performance changes:
* Addmm computes the gradient in column-major for inputs in
column-major format
* Variable.mm calls Addmm in-place with the desired output buffer
2017-06-30 16:53:26 -04:00
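A quick illustration of the broadcasting behavior this enables; the shapes are illustrative:

    import torch
    import torch.nn as nn

    linear = nn.Linear(8, 4)
    x = torch.randn(10, 3, 8)  # extra leading dims are treated as batch dims
    y = linear(x)              # matmul broadcasts over the (10, 3) batch
    print(y.shape)             # torch.Size([10, 3, 4])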
Sam Gross
4d5075add2
Add ignore_index to nll_loss and cross_entropy ( #1937 )
2017-06-29 00:10:13 -04:00
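Usage is along these lines; the -1 sentinel here is just an illustration:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(5, 10)               # 5 samples, 10 classes
    target = torch.tensor([1, 2, -1, 4, -1])  # -1 marks entries to skip
    loss = F.cross_entropy(logits, target, ignore_index=-1)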
Leonid Vlasenkov
ae61f3ff42
adds poisson NLL loss ( #1779 )
2017-06-27 10:04:54 -04:00
Alykhan Tejani
67968cb60b
Add numerically stable BCELoss which takes logits as input ( #1792 )
2017-06-19 22:05:51 -04:00
Francisco Massa
76ee014d10
Add documentation to SELU and AlphaDropout
2017-06-19 18:18:01 -04:00
Francisco Massa
f619ac6ac9
Quickfix for AlphaDropout on CUDA
2017-06-19 18:18:01 -04:00
Sam Gross
38b9598685
Added GLU (gated linear unit)
From https://arxiv.org/abs/1612.08083
2017-06-13 20:48:19 -04:00
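GLU splits its input in half along a dimension and gates one half with the sigmoid of the other, GLU(a, b) = a * sigmoid(b); a minimal example:

    import torch
    import torch.nn.functional as F

    x = torch.randn(2, 8)
    y = F.glu(x, dim=1)  # halves a, b of size (2, 4); returns a * sigmoid(b)
    print(y.shape)       # torch.Size([2, 4])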
Francisco Massa
6626881e7a
Add Alpha Dropout ( #1775 )
2017-06-13 00:39:49 +02:00
Francisco Massa
a24db91a38
Add SELU activation function ( #1769 )
* Add SELU activation function
* Remove unnecessary case
* Add Function for SELU + tests and fix RReLU inplace
* Fix extra line in doc
* Fix tests
Remove in-place tests for RReLU. For some reason they fail on legacy nn, but pass on nn
* SELU in new-style Function
It also supports double backprop, verified with gradgradcheck
* Fix flake8
2017-06-11 10:07:48 +03:00
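For reference, SELU is scale * (max(0, x) + min(0, alpha * (exp(x) - 1))) with fixed constants from the paper; a minimal sketch:

    import torch

    def selu_sketch(x):
        # Constants from Klambauer et al., "Self-Normalizing Neural Networks"
        alpha, scale = 1.6732632423543772, 1.0507009873554805
        return scale * torch.where(x > 0, x, alpha * (x.exp() - 1))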
Luca Antiga
b9ab26765e
Add 3D upsampling (nearest and trilinear) with tests
2017-06-07 11:29:27 -04:00
Soumith Chintala
df7c47142d
fix for THNN NLLLoss signature change
2017-06-07 00:18:11 -04:00
Aron Barreira Bordin
d7db75c10f
added CosineSimilarity to nn.distance and updated docs ( #1672 )
* added CosineSimilarity to nn.distance and updated docs
2017-06-06 22:53:21 -04:00
Marvin Cao
174c3cc399
Add support for double backward of LeakyReLU ( #1714 )
2017-06-05 11:53:27 -04:00
Alykhan Tejani
f1c57ace1b
added input dim checks to convXd and conv_transposeXd ( #1695 )
* add input dim check for conv2d
* add None check to conv2d
* added input dim checks to convXd and conv_transposeXd
* flake8 fixes
2017-06-02 11:58:19 -04:00
Thomas Viehmann
6107d15d14
Twice differentiability of pointwise functions ( #1531 )
2017-05-15 12:00:59 -06:00
Adam Paszke
6b84dc26f0
Add F.cosine_similarity ( #1502 )
2017-05-15 11:12:54 -06:00
Marvin Cao
0ba20435ce
Add higher-order grad support for some operators ( #1507 )
2017-05-14 23:02:04 +02:00
Gregory Chanan
171638a451
Fix test_normalize NN test.
2017-05-09 14:25:06 -07:00
Gregory Chanan
ae2b2cbbec
Make keepdim work with autograd.
2017-05-09 14:15:59 -07:00
Sergey Zagoruyko
6d693fe413
Add F.normalize ( #1467 )
2017-05-07 13:54:16 +02:00
Marvin CAO
e3f41a4962
Add higher-order gradient support for Sigmoid ( #1496 )
2017-05-07 13:00:20 +02:00
Ankit Vani
4e18d89791
added twice differentiation for a bunch of ops ( #1426 )
2017-05-04 06:47:14 -04:00
andrew giessel
2e7635b929
Add flexible bilinear upsampling aspect ratio redux ( #1317 )
2017-05-03 08:46:28 -04:00
Soumith Chintala
ecd51f8510
docs fixes
2017-05-02 15:42:33 -04:00
Soumith Chintala
7dd8571bc6
fix avg_pool docs in nn.functional
2017-04-30 08:44:43 -04:00
Adam Paszke
457d78a7d9
Use THCUNN backward kernels for Tanh and Sigmoid in Autograd ( #1399 )
2017-04-29 09:07:03 -04:00
Uridah Sami Ahmed
75f1989bec
Add nn.Bilinear and tests
2017-04-28 10:11:30 -04:00
Shubham Jain
a35f507532
Update functional.py ( #1298 )
2017-04-19 11:07:12 -04:00
Edward Z. Yang
34546f022a
Expose dilated convolutions.
Fixes #1225.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
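With dilation exposed, the functional interface takes a dilation argument, e.g.:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 32, 32)
    w = torch.randn(8, 3, 3, 3)
    y = F.conv2d(x, w, dilation=2)  # 3x3 kernel with gaps, 5x5 receptive field
    print(y.shape)                  # torch.Size([1, 8, 28, 28])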
Edward Z. Yang
ab77742f6e
Add some missing documentation for arguments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
Christian Sarofeen
e9ff57176b
Fused pointwise kernels for GRU/LSTM
2017-04-11 13:42:06 -07:00
Christian Sarofeen
0b50f794e9
Use thnn version of Tanh/Sigmoid instead of autograd. ( #1234 )
2017-04-11 12:49:57 -07:00
Edgar Riba
9504246c32
add triplet margin loss ( #1165 )
2017-04-05 22:17:58 -04:00
Soumith Chintala
2979f4b989
add more functions to docs
2017-03-29 01:29:17 -04:00
Jason Kuen
f2c1071c33
Adaptive max and average pooling (1D & 2D) ( #1084 )
2017-03-26 17:09:28 +02:00
Edgar Riba
63f6c0d692
add Pairwise distance ( #835 )
2017-03-24 11:29:40 -04:00
ngimel
b3ab4b1094
Check torch.backends.cudnn.enabled, padding, and output_padding ( #996 )
* Check torch.backends.cudnn.enabled
* Don't allow negative padding and output_padding values
2017-03-22 19:42:11 -04:00
Kentaro Wada
7654b3f49e
Add function to compute cross_entropy for 2D image ( #802 )
2017-03-16 17:34:04 +01:00
Soumith Chintala
13b1580613
add F.pad to docs
2017-03-15 00:09:14 -04:00
Sam Gross
34ce58c909
Parallelize backwards
2017-03-03 11:26:00 -08:00
Sergey Zagoruyko
12efd53dba
ConstantPad2d and F.pad ( #856 )
2017-03-01 19:39:44 +01:00
Ofir Press
5e1d6a3691
Update functional.py ( #862 )
Fixed documentation error in conv3d
2017-02-27 10:42:02 -05:00
陈云
838842d4b2
fix documentation error. [issue #790 ]( https://github.com/pytorch/pytorch/issues/790 ) ( #831 )
2017-02-23 08:59:29 +01:00
Joo-Kyung Kim
336eeee895
kernel_size as the default stride for avg_pool1d ( #744 )
Following the documentation, stride defaults to kernel_size if it is not provided.
2017-02-15 13:12:18 +05:30
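With that default in place, the pooling windows do not overlap, e.g.:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 8)
    y = F.avg_pool1d(x, kernel_size=2)  # stride defaults to kernel_size
    print(y.shape)                      # torch.Size([1, 1, 4])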
Soumith Chintala
d4c9a3782b
billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix ( #617 )
* billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix
2017-01-30 05:08:48 +05:30
Luke Yeager
3ed720079e
[pep8] Fix most remaining lint manually
2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3
[pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
f8d4f980b3
Add upsampling modules and functions
2017-01-24 17:30:50 -05:00
Alykhan Tejani
f8e89fbe11
fix docs for torch.nn.functional.conv1d ( #536 )
2017-01-21 10:41:52 -05:00
Adam Paszke
ee4c77c59f
Docs improvements ( #512 )
* Always compile .numpy() for all types
* Add torch.nn.functional docs and hidden headers
* Use sphinx to generate torchvision docs
* Remove unused import in ffi utils
2017-01-19 17:28:49 -05:00
Sergey Zagoruyko
9c218b419f
kl_div and docs ( #429 )
2017-01-17 19:24:01 -05:00
Adam Paszke
1dbf44c00d
Add SmoothL1Loss to functional
2017-01-16 12:59:47 -05:00
Sam Gross
3a07228509
Add ConvTranspose1d module ( #449 )
2017-01-13 15:22:57 -05:00
Sam Gross
24a2f2e3a0
Add MaxUnpool1d module ( #447 )
2017-01-13 14:36:25 -05:00
Sam Gross
d5e45b2278
Add AvgPool1d which just uses AvgPool2d implementation ( #439 )
2017-01-12 15:07:11 -05:00
Sam Gross
fd92470e23
Add cuDNN bindings for BatchNorm ( #421 )
2017-01-07 15:35:24 -05:00
Adam Paszke
483490cc25
Move PixelShuffle implementation to functional
2016-12-30 23:02:57 +01:00
Adam Paszke
8d60e39fdc
Rename torch.nn.functions to torch.nn._functions
2016-12-30 23:02:57 +01:00
Sam Gross
c367e0b64e
Support dilated 1d and 3d convolutions ( #372 )
Fixes #367
2016-12-29 18:20:32 -05:00
Sergey Zagoruyko
62af45d99f
Basic functional interface ( #354 )
2016-12-29 22:53:57 +01:00