Commit Graph

50 Commits

Author SHA1 Message Date
David Riazati
10c4b98ade Remove weak script (#22212)
Summary:
* Deletes all weak script decorators / associated data structures / methods
   * In order to keep supporting the standard library in script, this enables recursive script on any function defined in `torch.nn`
   * Most changes in `torch/nn` are the result of `ag -Q "weak" torch/nn/ -l | xargs sed -i '/weak/d'`; only `rnn.py` needed manual editing, using `ignore` and `export` to continue supporting the overloaded `forward` methods (see the sketch below)
* `Sequential`/`ModuleList` no longer need to be added to constants since they are compiled on demand

This should also fix https://github.com/pytorch/pytorch/issues/22212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22212
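
As a rough illustration of the pattern mentioned above (a sketch, not code from this PR): with recursive scripting, `torch.nn` code is compiled on demand, while `@torch.jit.ignore` and `@torch.jit.export` mark methods the compiler should skip or additionally compile:

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)   # compiled recursively, no decorator needed

    @torch.jit.ignore
    def debug_stats(self, x):
        # left as a plain Python call; the compiler does not try to script it
        return float(x.mean())

    @torch.jit.export
    def infer(self, x):
        # exported so it is compiled even though forward() never calls it
        return self.linear(x).relu()

    def forward(self, x):
        return self.linear(x)

scripted = torch.jit.script(MyModule())
print(scripted(torch.randn(2, 4)).shape)   # torch.Size([2, 4])
```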

Differential Revision: D15988346

Pulled By: driazati

fbshipit-source-id: af223e3ad0580be895377312949997a70e988e4f
2019-07-03 17:28:25 -07:00
davidriazati
736bf7b46c Fix __constants__ for some nn modules (#21071)
Summary:
A bunch of modules were missing entries in `__constants__`, which was breaking their `__repr__`s. Others had `__constants__` entries that were unnecessary since they were already provided by a parent class.

Fixes #20978
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21071
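
For context, a minimal sketch (not code from this PR) of how `__constants__` and `__repr__` interact — the repr reads the same attributes that the JIT treats as constants:

```python
import torch
import torch.nn as nn

class Clamp(nn.Module):
    __constants__ = ['threshold', 'value']   # exposed to the JIT as constants

    def __init__(self, threshold, value):
        super().__init__()
        self.threshold = threshold
        self.value = value

    def forward(self, x):
        return torch.where(x > self.threshold, x, torch.full_like(x, self.value))

    def extra_repr(self):
        # repr reads the same attributes, so they must actually exist on the module
        return 'threshold={}, value={}'.format(self.threshold, self.value)

m = Clamp(0.1, 0.0)
print(m)                       # Clamp(threshold=0.1, value=0.0)
print(torch.jit.script(m))     # scripting relies on the declared constants
```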

Pulled By: driazati

Differential Revision: D15539518

fbshipit-source-id: 24bdd1ef41ef636eefd5d2bad4ab2d79646ed4f0
2019-05-29 13:55:53 -07:00
davidriazati
3a39ce0f41 Fix reflection on weak modules, copy attributes (#20190)
Summary:
* Constructs a new type at runtime so that `isinstance` checks work for
weak modules assigned to `ScriptModule`s
* Fix some extraneous names in `__constants__`
* Add `in_features` and `out_features` to `nn.Linear` `__constants__`

Fixes #19363
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20190
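
A quick hedged sketch of the `__constants__` part of this change: with `in_features` and `out_features` listed as constants, a scripted `nn.Linear` still exposes them for inspection (the `isinstance` fix itself is internal to the old weak-module machinery):

```python
import torch
import torch.nn as nn

lin = nn.Linear(3, 2)
scripted = torch.jit.script(lin)

# Because in_features/out_features are listed in Linear.__constants__,
# they survive scripting and can still be read off the compiled module.
print(scripted.in_features, scripted.out_features)   # 3 2
print(scripted(torch.randn(5, 3)).shape)             # torch.Size([5, 2])
```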

Pulled By: driazati

Differential Revision: D15302350

fbshipit-source-id: 1d4d21ed44ab9578a4bc2a72396a82e9bbcd387c
2019-05-10 17:14:49 -07:00
Richard Zou
2a2007e5ac EmbeddingBag CPU forward with per_sample_weights. (#18735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18735
ghimport-source-id: d81bef54dafd7167d2451250d7be478d3c013920
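
The commit message itself only carries metadata; as a hedged usage sketch of the feature named in the title, `per_sample_weights` lets each looked-up index contribute with its own scalar weight and is only supported together with `mode='sum'`:

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 3)                     # embedding table: 10 rows, dim 3
input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])                  # two bags: indices [0:4] and [4:8]
per_sample_weights = torch.rand(8)              # one scalar weight per looked-up index

# per_sample_weights requires mode='sum' (a weighted sum instead of a plain sum)
out = F.embedding_bag(input, weight, offsets,
                      mode='sum', per_sample_weights=per_sample_weights)
print(out.shape)   # torch.Size([2, 3])
```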

Reviewed By: cpuhrsch

Differential Revision: D14851415

Pulled By: zou3519

fbshipit-source-id: cea6039e760ad571b90f0a536e420498f34be325
2019-04-09 18:12:55 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.
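
A small illustrative snippet (not from this commit) of the two noqa patterns described above:

```python
# flake8's F401 flags imports that are never used at runtime.
# Two common places it fires, as described above:
#
# 1) a re-export in an __init__.py, silenced with noqa (or declared via __all__):
#        from .linear import Linear   # noqa: F401
#
# 2) an import referenced only from a `# type:` comment; flake8-3 reports it
#    as unused, so it gets a noqa as well:
from typing import Optional  # noqa: F401

def clamp_none(x):
    # type: (Optional[int]) -> int
    return 0 if x is None else x

print(clamp_none(None), clamp_none(5))
```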

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
Thomas Viehmann
5360984fbd Remove TH(CU)NN Sparse Linear (#17610)
Summary:
Sparse Linear in TH(CU)NN implements sparse linear layers without
using sparse matrices.
It is currently not documented in PyTorch and there is no functional or
module interface. This means it is unused from a PyTorch point of view.

The reason for removing it is twofold:
- The module uses sort, which I would like to move to ATen.
- When we implement a SparseLinear layer, we would want to do it
  using sparse tensors, so it's not all that useful, anyway.

I checked this on Slack with soumith; I hope the above is an accurate
representation. All bad ideas are my own.

This is part of the ongoing work to move
sort/topk/mode/median/kthvalue to ATen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17610

Differential Revision: D14280663

Pulled By: gchanan

fbshipit-source-id: 289231d2c20626855ce2ceecd4f204b460c32378
2019-03-01 12:36:52 -08:00
ZhuBaohe
8852e21245 Correct recurrent/linear/dropout/sparse layers docstrings
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17238

Differential Revision: D14130811

Pulled By: soumith

fbshipit-source-id: d3998ca7da46aec5a59220c6af489f71f3d60735
2019-02-19 05:23:04 -08:00
Sasha Rush
dbe6a7a9ff Unify the shape notation for all of the pytorch modules (#15741)
Summary:
PR to update the shape notation for all of the torch.nn modules to take a unified form. The goal is to make these definitions machine-readable and machine-checkable by unifying the style across all of the different modules.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15741
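
For illustration only (the exact final notation is defined in the PR), the unified docstring shape block looks roughly like this:

```python
import torch

class Linear(torch.nn.Module):
    r"""Docstring excerpt only -- illustrating the unified ``Shape:`` block.

    Shape:
        - Input: :math:`(N, *, H_{in})` where :math:`*` means any number of
          additional dimensions and :math:`H_{in} = \text{in\_features}`
        - Output: :math:`(N, *, H_{out})` where :math:`H_{out} = \text{out\_features}`
    """
```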

Differential Revision: D13709601

Pulled By: ezyang

fbshipit-source-id: fb89a03903fdf0cd0dcf76f3e469b8582b2f3634
2019-01-17 10:32:14 -08:00
David Pollack
cdb8edce75 add from_pretrained method to EmbeddingBag (#15273)
Summary:
The `EmbeddingBag` module does not include a `from_pretrained` method like the `Embedding` module.  I added it for consistency between the two modules.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15273
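
A hedged usage sketch of the added method, mirroring `Embedding.from_pretrained`:

```python
import torch
import torch.nn as nn

pretrained = torch.tensor([[1.0, 2.0, 3.0],
                           [4.0, 5.0, 6.0]])

# Same calling convention as Embedding.from_pretrained; freeze=True keeps
# the copied weights fixed during training.
bag = nn.EmbeddingBag.from_pretrained(pretrained, freeze=True)

input = torch.tensor([0, 1, 1])
offsets = torch.tensor([0, 1])   # bags: [0] and [1, 1]
print(bag(input, offsets))       # mean of each bag by default
```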

Differential Revision: D13547842

Pulled By: soumith

fbshipit-source-id: 8ffde51ff0c1e8fc8310263b6f375da88089ff7d
2018-12-26 08:35:39 -08:00
Elias Ellison
6d63e9dbff Support Embedding + EmbeddingBag in Script + (Ignore flakey test) (#14509)
Summary:
Resubmitting PR #14415

The tests added for Embedding + EmbeddingBag used random numbers as input, which affected the random number generator and caused the flaky test to break.

Everything but the last two commits have already been accepted
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14509

Differential Revision: D13247917

Pulled By: eellison

fbshipit-source-id: ea6963c47f666c07687787e2fa82020cddc6aa15
2018-11-28 19:16:38 -08:00
Edward Yang
5f07b33857 Revert D13219647: [pytorch][PR] Support Embedding + EmbeddingBag in Script
Differential Revision:
D13219647

Original commit changeset: c90706aa6fbd

fbshipit-source-id: d189e717ba0773de43d633876bc3a688830a9303
2018-11-28 13:38:58 -08:00
Elias Ellison
7749804099 Support Embedding + EmbeddingBag in Script (#14415)
Summary:
Add support for Embedding and EmbeddingBag in script. Both functions require `with torch.no_grad()`, which we don't have any plans to support in the near future. To work around this, I added an `embedding_renorm` function without derivatives.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14415
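
As rough context for the workaround (an eager-mode sketch, not the code added in this PR): the renorm that `max_norm` triggers mutates the weight in place and is done outside of autograd, which is why script needed a derivative-free op:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4, max_norm=1.0)
idx = torch.tensor([1, 3, 3, 7])

# Conceptually, the renorm step looks like this and runs outside autograd.
with torch.no_grad():
    rows = emb.weight[idx]
    norms = rows.norm(dim=1, keepdim=True)
    emb.weight[idx] = torch.where(norms > 1.0, rows / norms, rows)

out = emb(idx)   # the real layer performs the same renorm internally
```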

Reviewed By: wanchaol

Differential Revision: D13219647

Pulled By: eellison

fbshipit-source-id: c90706aa6fbd48686eb10f3efdb65844be7b8717
2018-11-28 10:52:30 -08:00
verhoek
0db505bf27 Made docstrings for Embedding more accurate. (#13310)
Summary:
Made the previous description for max_norm more precise, avoiding 'this' and describing what actually happens in the code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13310
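
To make the documented behaviour concrete (a hedged sketch, not the docstring wording itself): rows that are looked up are renormalized in place whenever their norm exceeds `max_norm`:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(5, 8, max_norm=1.0)

before = emb.weight[2].norm().item()
emb(torch.tensor([2]))                    # the lookup triggers the in-place renorm
after = emb.weight[2].norm().item()

print(before, '->', after)                # after is clamped to at most ~1.0
print(emb.weight[3].norm().item())        # rows that were not looked up keep their norm
```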

Differential Revision: D12840813

Pulled By: SsnL

fbshipit-source-id: 98090c884267a62ce93cd85da84252d46926dfa5
2018-10-30 12:25:38 -07:00
vishwakftw
f9a99d5504 Specify default initialization schemes for modules in docs (#9038)
Summary: This closes #6906.

Reviewed By: ezyang

Differential Revision: D8698632

Pulled By: weiyangfb

fbshipit-source-id: 259c1dbdc264a8e9f83e196fa72d135babd97d48
2018-07-24 11:58:08 -07:00
Tongzhou Wang
f9926e4ce5 Fix EmbeddingBag max_norm option (#7959)
* fix EmbeddingBag max_norm option

* flake8

* add warning to the embedding bag arg change
2018-05-31 09:42:56 -04:00
Barlas Oguz
5f96a2d26a Add sparse gradient option to pretrained embedding (#7492)
* Add sparse gradient option to pretrained embedding

* Add sparse gradient option to pretrained embedding

* Trailing white space
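
A hedged sketch of what the new option enables (parameter names as on current releases):

```python
import torch
import torch.nn as nn

weights = torch.randn(1000, 16)

# sparse=True makes the backward pass produce sparse gradients for the
# embedding table, which matters for large vocabularies.
emb = nn.Embedding.from_pretrained(weights, freeze=False, sparse=True)

out = emb(torch.tensor([3, 7, 42])).sum()
out.backward()
print(emb.weight.grad.is_sparse)   # True
```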
2018-05-11 08:44:53 -04:00
Ethan Steinberg
ee00a8049a Add max pooling support to EmbeddingBag (#5725)
* Add max mode support to EmbeddingBag

* Lint fix

* Fix compilation issue on other platforms

* Rebase + don't waste memory when not in max mode

* Oops, missed a spot

* Fix whitespace from merge

* less precision

* Lower precision to avoid spurious failures

* Minor typo

* Switch to size()
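
A hedged usage sketch of the new mode:

```python
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(10, 4, mode='max')   # element-wise max over each bag

input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])             # two bags of four indices each
print(bag(input, offsets).shape)           # torch.Size([2, 4])
```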
2018-04-29 16:48:11 -04:00
li-roy
d564ecb4a5 Update docs with new tensor repr (#6454)
* Update docs with new tensor repr

* remove cuda in dtype

* remove changes to gloo submodule

* [docs] document tensor.new_* ctor

* [docs] Add docs for tensor.to(), tensor.float(), etc

* [docs] Moar examples for docs.

* [docs] Warning for tensor ctor copy behavior

* Quick fix

* [docs] Document requires_grad_()

* [docs] Add example for requires_grad_()

* update slogdet and *fft

* update tensor rst

* small fixes

* update some docs

* additional doc changes

* update torch and tensor docs

* finish changing tensor docs

* fix flake8

* slogdet with negative det

* Update functional.py tensor ctors

* Fix nll_loss docs

* reorder to move device up

* torch.LongTensor -> torch.tensor or torch.empty in docs

* update tensor constructors in docs

* change tensor constructors

* change constructors

* change more Tensor() to tensor()

* Show requires_grads_ docs

* Fix set_default_dtype docs

* Link to torch.no_grad, etc, from torch doc

* Add dtype aliases to table

* regen docs again

* Tensor attributes stub page

* link to inplace sampling

* Link torch.dtype, device, and layout

* fix dots after nonfinite floats

* better layout docs
2018-04-21 07:35:37 -04:00
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Tongzhou Wang
c05acd3840
Clarify Embedding padding_idx arg (#6430)
* Clarify Embedding padding_idx arg

* add a sentence about gradient being zero
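
To make the added sentence concrete (a hedged sketch): the row at `padding_idx` receives no gradient during training:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(6, 3, padding_idx=0)

out = emb(torch.tensor([0, 2, 0, 4])).sum()
out.backward()

print(emb.weight.grad[0])   # tensor([0., 0., 0.]) -- the padding row gets no gradient
print(emb.weight.grad[2])   # non-zero
```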
2018-04-09 23:06:00 -04:00
Tongzhou Wang
dfcd90783c fix sparse embedding backward when input contains only padding_idx (#6211) 2018-04-03 15:53:43 -04:00
Kaiyu Shi
605307f8f3 Add support for printing extra information in Module and refactor redundant codes (#5936)
This PR enables users to print extra information about their subclassed nn.Module.
For now, the user-defined string is simply appended to the module name; exactly where it goes is open for discussion in this PR (a minimal sketch follows the change list below).

Before this PR, users had to redefine `__repr__` and copy & paste the source code from `Module`.

* Add support for extra information on Module

* Rewrite the repr method of Module

* Fix flake8

* Change the __repr__ to get_extra_repr in Linear

* Fix extra new-line for empty line

* Add test for __repr__ method

* Fix bug of block string indent

* Add indent for multi-line repr test.

* Address review comments

* Update tutorial for creating nn.Module

* Fix flake8, add extra_repr of bilinear

* Refactor DropoutNd

* Change to extra_repr in some Modules

* Fix flake8

* Refactor padding modules

* Refactor pooling module

* Fix typo

* Change to extra_repr

* Fix bug for GroupNorm

* Fix bug for LayerNorm
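
A minimal hedged example of the mechanism this PR introduces (the hook eventually landed as `extra_repr`, per the change list above):

```python
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self, factor):
        super().__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    def extra_repr(self):
        # the returned string is inserted into the module's default repr
        return 'factor={}'.format(self.factor)

print(Scale(0.5))                                    # Scale(factor=0.5)
print(nn.Sequential(Scale(2.0), nn.Linear(3, 3)))    # nested modules indent automatically
```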
2018-04-02 13:52:33 -04:00
Vishwak Srinivasan
76a283db40 [ready] General Documentation Improvements - 2 (#5685)
* Fix some minor errors in existing docs.

* Fix Convolution and Pooling docs in torch.nn.functional

* Cleaned up torch.nn.functional docs

* Address @SsnL 's comments

* Add multiplication sign missing in docs

* Fix more typos, and clear some warnings

* Change infinity symbol in LPPool2d

* Revert some changes in torch.nn.functional

* Few more minor changes
2018-03-13 09:47:43 -04:00
Tongzhou Wang
27265503ad nn.* doc update after Variable/Tensor merge (#5459)
The nn.* counterpart of #5443. Mostly removed the Variable wrapper. Also added docs for nn.RReLU.

Notice that torch.randn(*, requires_grad=True) isn't documented until #5462 is done.
2018-03-01 18:11:39 -05:00
Andrzej Sołtysik
fc9837899d Embedding.load_pretrained method (#5350) 2018-02-26 17:46:25 +01:00
cpuhrsch
07be53b57f Move EmbeddingBag into ATen (#4856)
This diff creates code related to EmbeddingBag in ATen. It also allows sparse gradients.
2018-02-12 14:20:32 -05:00
Shagun Sodhani
d452291a72 updated documentation for Embedding layer. Fixes #4682 (#4684) 2018-01-16 13:18:30 -05:00
Riddhiman Dasgupta
f99c7d9429 Padding_idx in Embedding supports negative indexing (#4496) 2018-01-09 12:04:11 +01:00
Sam Gross
20b5e82155
Implement embedding in ATen (#4322)
Implements nn.Embedding (lookup table) in ATen.

Breaking change: new optional argument padding_idx in F.embedding to
match nn.Embedding.

Note that there are a few bugs in Embedding that are inherited from the
previous code:

 - CUDA renorm has race conditions if index contains duplicate entries
 - sparse gradient doesn't work with scale_grad_by_freq
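
A hedged sketch of the functional form with the new argument:

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 3)
input = torch.tensor([[1, 2, 4, 5],
                      [4, 3, 2, 9]])

# padding_idx in F.embedding mirrors nn.Embedding: that row contributes no gradient.
out = F.embedding(input, weight, padding_idx=2)
print(out.shape)   # torch.Size([2, 4, 3])
```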
2018-01-02 15:44:46 -05:00
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
Marcin Elantkowski
57ffe64cbe Embedding related fixes (#3128)
* Fix docs for nn.Embedding and F.embedding.
  - add description of 'sparse' argument (#3104)
  - fix F.embedding example (resulted in RuntimeError)
* Make EmbeddingBag a New Style Function.
* Add a functional interface for EmbeddingBag
* Fix failing tests: add max_norm and norm_type to context,
and fix typo in backend call.
* Docfix: remove torch.manual_seed from example code.
* Add a note about using sparse keyword in Embedding function.
2017-10-18 23:38:07 +02:00
Allen Ye
977b1f988c Fix EmbeddingBag doc (#2679) 2017-09-09 00:05:12 -04:00
Ryuichi Yamamoto
7fa7a101af Fix embedding doc formatting (#2605) 2017-09-03 11:27:11 -04:00
Adam Fisch
27bd3df71b Patching EmbeddingBag to accept 2D input (#2429)
* Patching EmbeddingBag to accept 2D input

* fix for CUDA inputs

* fix lint
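
A hedged sketch of the 2D calling convention: each row of a 2D `input` is treated as one bag and `offsets` is omitted:

```python
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(10, 4, mode='mean')

# 2D input: each of the 3 rows is a fixed-size bag of 2 indices; no offsets are passed.
input = torch.tensor([[1, 2],
                      [4, 5],
                      [7, 9]])
print(bag(input).shape)   # torch.Size([3, 4])
```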
2017-08-23 07:12:21 -04:00
Edward Z. Yang
f3f478960e Convert Embedding to new style. (#1916)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-07-20 02:35:21 -04:00
Soumith Chintala
20ce45b0c3 fix EmbeddingSum offsets initialization 2017-07-13 02:57:25 -04:00
Soumith Chintala
f61ec2495e nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean) 2017-06-15 12:32:47 -04:00
Soumith Chintala
2ef7331007 Update sparse.py 2017-04-27 02:25:00 +02:00
Luke Yeager
79f5bf84e5 [pep8] Potentially breaking docstring changes 2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Lerer
3152be5fb3 Add repr to RNNs and Embedding (#428) 2017-01-13 15:53:52 -05:00
Zeming Lin
59d66e6963 Sparse Library (#333) 2017-01-05 00:43:41 +01:00
Soumith Chintala
7fa60b2e44 fixing docs of activations, pixelshuffle, sparse for rst 2017-01-04 18:40:51 -05:00
Sam Gross
ffcc38cf05 Deterministic ordering of parameters and buffers. (#317)
Uses the assignment syntax to get deterministic ordering of parameters.
The ordering of parameters using the constructor syntax is
non-deterministic because kwargs use dict() in Python 3.5 and earlier.
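
A hedged illustration of the assignment syntax referred to above (the constructor-kwargs style was the older `nn.Container(name=module, ...)` form, not shown):

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # assignment syntax: registration order follows these statements,
        # so parameters() is deterministic
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(self.fc1(x))

print([name for name, _ in Net().named_parameters()])
# ['fc1.weight', 'fc1.bias', 'fc2.weight', 'fc2.bias']
```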
2016-12-16 14:45:56 -05:00
soumith
59c628803a fixing padding_idx option 2016-10-14 15:05:21 -07:00
Adam Paszke
3cbe66ba8c Change requires_grad default to False 2016-10-05 08:46:34 -07:00
Adam Paszke
f9d25e8e72 Refactor nn (require specifying parameters explicitly) 2016-09-27 15:22:26 -07:00
Adam Paszke
eec0420eb3 Initialize nn modules' parameters with a default tensor type 2016-09-23 18:06:26 -07:00
Soumith Chintala
5114d94ad9 docstrings for conv, dropout, linear, pooling and sparse functions 2016-09-19 00:31:22 -04:00
Adam Paszke
fb39971464 Add more modules to nn 2016-09-14 11:05:56 -07:00