Cosmo Stérin
45524ec33c
Fix indices bug in MM.py (#1613) (#1617)
2017-05-22 16:47:51 -04:00
Gregory Chanan
c9d8e0a43a
Change all legacy/nn modules to use keepdim=True (even if tests don't fail).
We shouldn't be introducing changes in legacy modules if we can avoid it.
2017-05-09 14:16:31 -07:00
DigiDigi
fc19473501
Corrections in legacy modules. (#1286)
2017-04-18 17:13:53 -04:00
Adam Paszke
91c4ba7980
Add torch.arange and deprecate torch.range
2017-04-03 10:38:58 -04:00
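The commit above deprecated torch.range in favor of torch.arange because the two disagree on whether the endpoint is included: the old torch.range included `end`, while torch.arange excludes it, matching Python's built-in range. A minimal pure-Python sketch of that semantic difference (illustrative only — plain lists, not the actual torch implementation):

```python
def arange(start, end, step=1):
    # like torch.arange / Python's range: `end` is excluded
    out, x = [], start
    while (step > 0 and x < end) or (step < 0 and x > end):
        out.append(x)
        x += step
    return out

def legacy_range(start, end, step=1):
    # like the deprecated torch.range: `end` is included
    out, x = [], start
    while (step > 0 and x <= end) or (step < 0 and x >= end):
        out.append(x)
        x += step
    return out

assert arange(0, 3) == [0, 1, 2]
assert legacy_range(0, 3) == [0, 1, 2, 3]
```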
Adam Paszke
1e8cb82a2d
Break only after the update in L-BFGS
2017-03-22 18:58:42 -04:00
Marko Vitez
937ba581d7
Improve nn.legacy compatibility with Torch7 (#738)
2017-02-16 21:17:12 +05:30
Adam Paszke
63edca44f2
Add tests for non-contiguous inputs and gradients
2017-02-14 21:28:50 +01:00
Adam Lerer
518864a7e0
Fix bug in legacy NN updateGradParameters (#714)
2017-02-11 11:04:18 +05:30
Soumith Chintala
d4c9a3782b
billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix (#617)
2017-01-30 05:08:48 +05:30
Luke Yeager
3ed720079e
[pep8] Fix most remaining lint manually
2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3
[pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
f8ae34706e
Port L-BFGS from Lua optim
2017-01-22 18:02:40 -05:00
glample
99f4864674
fixed RMSprop initialization (#485)
2017-01-18 17:05:53 -05:00
Adam Paszke
f91bb96071
Remove cmin, cmax and cinv
2017-01-16 19:07:37 -05:00
Adam Paszke
77136e4c13
Add anything in torch.legacy docs
2017-01-16 12:59:47 -05:00
Adam Paszke
7a162dd97a
Fix outputs of torch.* comparison functions
2016-12-30 23:02:57 +01:00
Sergey Zagoruyko
101950ce92
fix repr in legacy.nn.linear
2016-12-29 17:30:46 -05:00
Sergey Zagoruyko
55e850d825
test if modules can be printed with fixes
2016-12-29 17:30:46 -05:00
Adam Paszke
9b7eceddc8
Accept outputs in out argument
2016-12-29 12:25:59 +01:00
Adam Paszke
cd82b2b869
Implement comparison and logical operators for tensors
2016-12-28 00:04:08 +01:00
soumith
281e34d1b7
fixes for changes in THNN API
2016-12-13 18:10:07 -08:00
Sergey Zagoruyko
1031d671fb
legacy fixes (#287)
2016-12-11 20:13:48 +01:00
Adam Paszke
0580f5a928
Add __len__ for tensors
2016-12-01 23:14:41 +01:00
Adam Paszke
bcfa2d6c79
Add .t7 file reader
2016-11-25 00:41:55 +01:00
Adam Paszke
ae6f2dd11c
Adapt nn code to changes in THNN and THCUNN
2016-11-15 23:02:14 +01:00
Adam Paszke
df59b89fbb
Add more optimizers
2016-11-07 22:50:56 +01:00
Adam Paszke
55d32de331
Fix bugs in torch.legacy.nn and add regression tests
2016-11-05 22:48:52 +01:00
Sam Gross
f2d7e94948
Use torch.Size for Tensor sizes and tuple for strides
See issue #20
The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
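The commit body above describes the trick: because torch.Size subclasses tuple, a constructor can tell "this tuple is a shape" apart from "this tuple is data". A minimal sketch of that dispatch idea, assuming a hypothetical make_tensor helper (not the actual torch.Size or torch.Tensor source):

```python
class Size(tuple):
    """Tuple subclass so constructors can distinguish sizes from data tuples."""
    def __repr__(self):
        return f"Size({list(self)})"

def make_tensor(arg):
    # Hypothetical constructor: a Size means "allocate with this shape",
    # while a plain tuple/list means "build a tensor from this data".
    if isinstance(arg, Size):
        return ("alloc", tuple(arg))
    return ("from_data", tuple(arg))

assert make_tensor(Size((2, 3))) == ("alloc", (2, 3))
assert make_tensor((2, 3)) == ("from_data", (2, 3))
```

Since Size is still a tuple, existing code that indexes or unpacks a size keeps working unchanged.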
Adam Paszke
71cf8e14cb
Fixes in torch.legacy.nn
2016-10-24 22:29:43 +02:00
Brandon Amos
12de115305
Fix Lua->Python logic in legacy.optim
2016-10-24 20:04:23 +02:00
Adam Paszke
966adc6291
Simplify torch.cat
2016-10-10 20:51:15 -07:00
Adam Paszke
c8a4734b97
Add RReLU to both nn packages
2016-09-29 11:33:34 -07:00
Sam Gross
cb5d4e836f
Lazy load CUDA and THNN modules (#64)
2016-09-28 19:29:53 -04:00
soumith
5107f23126
fix ClassNLLCriterion targets in tests and legacy nn
2016-09-26 18:56:12 -07:00
Adam Paszke
e71204b52f
Improve error messages in storage and tensor C functions
2016-09-23 17:17:35 -07:00
Adam Paszke
8fdec15a55
Codemod to remove camel case method naming
2016-09-20 08:40:28 -07:00
Adam Paszke
fb39971464
Add more modules to nn
2016-09-14 11:05:56 -07:00
soumith
ccf7a3043f
fixing MaxPooling for changed THNN interface
2016-09-13 11:44:17 -07:00
Adam Paszke
f646391f26
Bug fixes and test improvements
Fixed:
* tensor and storage printing
* legacy.nn module printing
* SpatialCrossMapLRN tests
Also, all fixed bugs have regression tests now.
2016-09-08 19:07:05 -07:00
Adam Paszke
78a958ab61
Update and fix bugs in legacy nn
2016-08-19 14:23:59 -07:00
Adam Paszke
9fff8e7392
Fixes for changes in libs
2016-08-12 22:02:57 -07:00
Adam Paszke
ef7364b80e
Fix Python 2.7 compatibility
2016-08-12 18:26:10 -07:00
soumith
624dd3e10c
fixing optim (tests pass)
2016-08-12 16:44:25 -07:00
Adam Paszke
1e905eb4d5
copy -> copy_
2016-08-12 09:26:33 -07:00
Adam Paszke
fa6e5c5bff
Update tests and fix CosineEmbeddingCriterion
2016-08-11 13:10:54 -07:00
Adam Paszke
ff00cdd728
Add cunn tests
2016-08-11 08:56:30 -07:00
Adam Paszke
e9f9fd3727
Major refactor
2016-08-10 09:24:53 -07:00
Adam Paszke
6df0ae5d35
Add cunn
2016-08-02 09:20:18 -07:00
Adam Paszke
2f342af22f
Move optim to legacy
2016-08-01 12:01:46 -04:00
Adam Paszke
5c9bfe8c02
Fixes in nn
2016-08-01 11:58:54 -04:00
Adam Paszke
d12a358435
Add converted nn modules
2016-07-29 16:54:28 -04:00
Adam Paszke
27bbaf633b
New tests and container modules + bug fixes
2016-07-28 10:06:30 -04:00
Adam Paszke
a4f544ca14
Converted nn modules
2016-07-26 13:36:15 -04:00
Adam Paszke
ae40bcd58c
Base for nn conversion
2016-07-22 22:21:29 -04:00