Gregory Chanan
171638a451
Fix test_normalize NN test.
2017-05-09 14:25:06 -07:00
Gregory Chanan
ae2b2cbbec
Make keepdim work with autograd.
2017-05-09 14:15:59 -07:00
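A minimal sketch of a keepdim reduction under autograd, the combination this commit enables (current tensor API assumed; the original code used Variables):

    import torch

    x = torch.randn(3, 4, requires_grad=True)
    # keepdim=True keeps the reduced dimension as size 1 ...
    y = x.sum(dim=1, keepdim=True)   # shape (3, 1) rather than (3,)
    # ... and gradients still flow back through the reduction.
    y.backward(torch.ones_like(y))
    print(x.grad.shape)              # torch.Size([3, 4]), all ones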
Sergey Zagoruyko
6d693fe413
Add F.normalize (#1467)
2017-05-07 13:54:16 +02:00
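A minimal usage sketch of the F.normalize added here (current PyTorch API assumed):

    import torch
    import torch.nn.functional as F

    x = torch.randn(4, 10)
    # L2-normalize each row to unit Euclidean norm.
    y = F.normalize(x, p=2, dim=1)
    print(y.norm(p=2, dim=1))   # all ones, up to floating-point error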
Marvin CAO
e3f41a4962
Add high order gradient support for Sigmoid (#1496)
2017-05-07 13:00:20 +02:00
Ankit Vani
4e18d89791
added twice-differentiation (double backward) for a bunch of ops (#1426)
2017-05-04 06:47:14 -04:00
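A minimal sketch of the double backward that this commit and the Sigmoid one above enable; create_graph=True is what keeps the first backward differentiable (current torch.autograd.grad API assumed):

    import torch

    x = torch.randn(5, requires_grad=True)
    y = torch.sigmoid(x).sum()
    # Keep the graph of the first backward so the gradient itself
    # can be differentiated again.
    (g,) = torch.autograd.grad(y, x, create_graph=True)
    (g2,) = torch.autograd.grad(g.sum(), x)
    # For s = sigmoid(x): g = s*(1 - s), g2 = s*(1 - s)*(1 - 2*s)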
andrew giessel
2e7635b929
Add flexible bilinear upsampling aspect ratio redux (#1317)
2017-05-03 08:46:28 -04:00
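A sketch of bilinear upsampling with a different scale per axis, the flexibility this commit adds. F.interpolate is the modern successor of the upsampling functions touched here, so its use below is an assumption about today's API rather than this commit's:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 12)
    # An explicit (height, width) target allows a non-uniform aspect ratio.
    y = F.interpolate(x, size=(16, 36), mode='bilinear', align_corners=False)
    print(y.shape)   # torch.Size([1, 3, 16, 36])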
Soumith Chintala
ecd51f8510
docs fixes
2017-05-02 15:42:33 -04:00
Soumith Chintala
7dd8571bc6
fix avg_pool docs in nn.functional
2017-04-30 08:44:43 -04:00
Adam Paszke
457d78a7d9
Use THCUNN backward kernels for Tanh and Sigmoid in Autograd (#1399)
2017-04-29 09:07:03 -04:00
Uridah Sami Ahmed
75f1989bec
Add nn.Bilinear and tests
2017-04-28 10:11:30 -04:00
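A minimal usage sketch of the nn.Bilinear added here, which computes y = x1^T W x2 + b per output feature (current API assumed):

    import torch
    import torch.nn as nn

    m = nn.Bilinear(in1_features=20, in2_features=30, out_features=40)
    x1 = torch.randn(8, 20)
    x2 = torch.randn(8, 30)
    # One bilinear form per output feature, applied batch-wise.
    print(m(x1, x2).shape)   # torch.Size([8, 40])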
Shubham Jain
a35f507532
Update functional.py (#1298)
2017-04-19 11:07:12 -04:00
Edward Z. Yang
34546f022a
Expose dilated convolutions.
Fixes #1225.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
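A minimal sketch of the dilation argument this commit exposes on the functional conv interface (current F.conv2d signature assumed):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 32, 32)
    w = torch.randn(8, 3, 3, 3)   # (out_channels, in_channels, kH, kW)
    # dilation=2 spaces the kernel taps apart, enlarging the receptive
    # field of a 3x3 kernel to 5x5 without extra parameters.
    y = F.conv2d(x, w, dilation=2)
    print(y.shape)   # torch.Size([1, 8, 28, 28])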
Edward Z. Yang
ab77742f6e
Add some missing documentation for arguments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-18 17:13:02 -04:00
Christian Sarofeen
e9ff57176b
Fused pointwise kernels for GRU/LSTM
2017-04-11 13:42:06 -07:00
Christian Sarofeen
0b50f794e9
Use thnn version of Tanh/Sigmoid instead of autograd (#1234)
2017-04-11 12:49:57 -07:00
Edgar Riba
9504246c32
add triplet margin loss (#1165)
2017-04-05 22:17:58 -04:00
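A minimal usage sketch of F.triplet_margin_loss (current signature assumed):

    import torch
    import torch.nn.functional as F

    anchor   = torch.randn(16, 128)
    positive = torch.randn(16, 128)
    negative = torch.randn(16, 128)
    # Pushes d(anchor, positive) below d(anchor, negative) by the margin.
    loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2)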
Soumith Chintala
2979f4b989
add more functions to docs
2017-03-29 01:29:17 -04:00
Jason Kuen
f2c1071c33
Adaptive max and average pooling (1D & 2D) (#1084)
2017-03-26 17:09:28 +02:00
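A minimal sketch of adaptive pooling, which picks kernel and stride internally so any input size maps to the requested output size (current API assumed):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 64, 37, 53)   # arbitrary spatial size
    y = F.adaptive_avg_pool2d(x, output_size=(7, 7))
    print(y.shape)   # torch.Size([1, 64, 7, 7])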
Edgar Riba
63f6c0d692
add Pairwise distance (#835)
2017-03-24 11:29:40 -04:00
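A minimal usage sketch of F.pairwise_distance (current signature assumed):

    import torch
    import torch.nn.functional as F

    x1 = torch.randn(10, 5)
    x2 = torch.randn(10, 5)
    # One p-norm distance per row pair: d_i = ||x1_i - x2_i||_p
    d = F.pairwise_distance(x1, x2, p=2)
    print(d.shape)   # torch.Size([10])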
ngimel
b3ab4b1094
Check torch.backends.cudnn.enabled, padding, and output_padding (#996)
* Check torch.backends.cudnn.enabled
* Don't allow negative padding and output_padding values
2017-03-22 19:42:11 -04:00
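The flag this commit starts respecting is a global toggle; a one-line sketch:

    import torch

    # Disable cuDNN globally; convolutions fall back to the non-cuDNN path.
    torch.backends.cudnn.enabled = False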
Kentaro Wada
7654b3f49e
Add function to compute cross_entropy for 2D image (#802)
2017-03-16 17:34:04 +01:00
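A minimal sketch of per-pixel cross entropy for dense prediction, the use case this commit addresses (current F.cross_entropy accepts 4D scores with 3D index targets directly):

    import torch
    import torch.nn.functional as F

    logits = torch.randn(2, 21, 64, 64)          # (N, C, H, W) class scores
    target = torch.randint(0, 21, (2, 64, 64))   # (N, H, W) class indices
    # Cross entropy averaged over every pixel, e.g. for segmentation.
    loss = F.cross_entropy(logits, target)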
Soumith Chintala
13b1580613
add F.pad to docs
2017-03-15 00:09:14 -04:00
Sam Gross
34ce58c909
Parallelize backwards
2017-03-03 11:26:00 -08:00
Sergey Zagoruyko
12efd53dba
ConstantPad2d and F.pad (#856)
2017-03-01 19:39:44 +01:00
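A minimal usage sketch of F.pad (current signature assumed; the pad tuple is ordered last dimension first):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 5, 5)
    # (left, right, top, bottom) for the last two dimensions.
    y = F.pad(x, (1, 1, 2, 2), mode='constant', value=0)
    print(y.shape)   # torch.Size([1, 3, 9, 7])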
Ofir Press
5e1d6a3691
Update functional.py (#862)
Fixed documentation error in conv3d
2017-02-27 10:42:02 -05:00
陈云
838842d4b2
fix documentation error [issue #790](https://github.com/pytorch/pytorch/issues/790) (#831)
2017-02-23 08:59:29 +01:00
Joo-Kyung Kim
336eeee895
kernel_size as the default stride for avg_pool1d (#744)
Following the documentation, stride defaults to kernel_size when it is not provided.
2017-02-15 13:12:18 +05:30
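A minimal sketch of the defaulting behavior described above:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 12)
    # With no stride given, stride == kernel_size: non-overlapping windows.
    y = F.avg_pool1d(x, kernel_size=3)   # same as stride=3
    print(y.shape)   # torch.Size([1, 1, 4])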
Soumith Chintala
d4c9a3782b
billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix (#617)
2017-01-30 05:08:48 +05:30
Luke Yeager
3ed720079e
[pep8] Fix most remaining lint manually
2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3
[pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):
    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything it can handle safely, and to disable any rules that
are tricky or controversial to address. We may want to come back and
re-enable some of these rules later, but I'm trying to keep this patch
as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
f8d4f980b3
Add upsampling modules and functions
2017-01-24 17:30:50 -05:00
Alykhan Tejani
f8e89fbe11
fix docs for torch.nn.functional.conv1d (#536)
2017-01-21 10:41:52 -05:00
Adam Paszke
ee4c77c59f
Docs improvements (#512)
* Always compile .numpy() for all types
* Add torch.nn.functional docs and hidden headers
* Use sphinx to generate torchvision docs
* Remove unused import in ffi utils
2017-01-19 17:28:49 -05:00
Sergey Zagoruyko
9c218b419f
kl_div and docs (#429)
2017-01-17 19:24:01 -05:00
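A minimal usage sketch of F.kl_div; the input must be log-probabilities and the target probabilities (the reduction argument is today's API, an assumption beyond this commit):

    import torch
    import torch.nn.functional as F

    log_q = F.log_softmax(torch.randn(4, 10), dim=1)   # log-probabilities
    p = F.softmax(torch.randn(4, 10), dim=1)           # probabilities
    loss = F.kl_div(log_q, p, reduction='batchmean')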
Adam Paszke
1dbf44c00d
Add SmoothL1Loss to functional
2017-01-16 12:59:47 -05:00
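A minimal usage sketch of F.smooth_l1_loss (current signature assumed):

    import torch
    import torch.nn.functional as F

    pred = torch.randn(8, 4)
    target = torch.randn(8, 4)
    # Quadratic for small errors, linear for large ones, so it is less
    # sensitive to outliers than MSE.
    loss = F.smooth_l1_loss(pred, target)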
Sam Gross
3a07228509
Add ConvTranspose1d module (#449)
2017-01-13 15:22:57 -05:00
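A minimal usage sketch of the ConvTranspose1d module added here (current API assumed):

    import torch
    import torch.nn as nn

    # Transposed ("deconvolution") 1d conv roughly inverts Conv1d's shape map:
    # L_out = (L_in - 1)*stride - 2*padding + kernel_size
    m = nn.ConvTranspose1d(16, 8, kernel_size=4, stride=2, padding=1)
    x = torch.randn(1, 16, 10)
    print(m(x).shape)   # torch.Size([1, 8, 20])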
Sam Gross
24a2f2e3a0
Add MaxUnpool1d module (#447)
2017-01-13 14:36:25 -05:00
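A minimal sketch of the unpooling round trip: pooling must be asked for indices so unpooling can scatter values back (current functional API assumed):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 8)
    # return_indices=True records which positions held the maxima ...
    y, idx = F.max_pool1d(x, kernel_size=2, return_indices=True)
    # ... so unpooling can restore them and zero-fill everywhere else.
    z = F.max_unpool1d(y, idx, kernel_size=2)
    print(z.shape)   # torch.Size([1, 1, 8])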
Sam Gross
d5e45b2278
Add AvgPool1d, which just reuses the AvgPool2d implementation (#439)
2017-01-12 15:07:11 -05:00
Sam Gross
fd92470e23
Add cuDNN bindings for BatchNorm (#421)
2017-01-07 15:35:24 -05:00
Adam Paszke
483490cc25
Move PixelShuffle implementation to functional
2016-12-30 23:02:57 +01:00
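A minimal sketch of pixel shuffle, the sub-pixel rearrangement (N, C*r^2, H, W) -> (N, C, H*r, W*r) moved into functional here (current API assumed):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 9, 4, 4)   # 9 = 1 * 3**2 channels
    y = F.pixel_shuffle(x, upscale_factor=3)
    print(y.shape)   # torch.Size([1, 1, 12, 12])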
Adam Paszke
8d60e39fdc
Rename torch.nn.functions to torch.nn._functions
2016-12-30 23:02:57 +01:00
Sam Gross
c367e0b64e
Support dilated 1d and 3d convolutions (#372)
Fixes #367
2016-12-29 18:20:32 -05:00
Sergey Zagoruyko
62af45d99f
Basic functional interface (#354)
2016-12-29 22:53:57 +01:00