Commit Graph

36 Commits

Xuehai Pan
dff6342a0b [BE][Easy] enable UFMT for torch/nn/parallel (#128596)
Part of #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128596
Approved by: https://github.com/mikaylagawarecki
2024-06-17 16:29:22 +00:00
Aaron Orenstein
27f9d3b0a1 Flip default value for mypy disallow_untyped_defs [8/11] (#127845)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127845
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843, #127844
2024-06-08 18:49:56 +00:00
loganthomas
c848a777e8 DOC: Various typo fixes (#97095)
Various typos found while browsing documentation/source code.

Thank you for a wonderful deep-learning library!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97095
Approved by: https://github.com/mikaylagawarecki, https://github.com/kit1980
2023-03-20 20:46:04 +00:00
Stanislau Hlebik
b774ce54f8 remediation of S205607
fbshipit-source-id: 798decc90db4f13770e97cdce3c0df7d5421b2a3
2020-07-17 17:19:47 -07:00
Stanislau Hlebik
8fdea489af remediation of S205607
fbshipit-source-id: 5113fe0c527595e4227ff827253b7414abbdf7ac
2020-07-17 17:17:03 -07:00
Gregory Chanan
23fde77d3d Remove Module._backend as it's not used anymore.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25342

Test Plan: Imported from OSS

Differential Revision: D17101571

Pulled By: gchanan

fbshipit-source-id: 2cda46fe197e26a1cacb5e912f535809973d306e
2019-08-29 15:43:49 -07:00
Aapo Kyrola
d9eec4ef0d backend.py: __getattr__ must raise AttributeError (#21763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21763

Custom __getattr__ functions may only raise AttributeError. This code threw NotImplementedError, which caused upstream trouble when hasattr() was called.

Differential Revision: D15815176

fbshipit-source-id: 0982e2382de4578d3fc05c5d2a63f624d6b4765e
2019-06-13 23:17:57 -07:00
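
This matters because hasattr() treats only AttributeError as "attribute missing"; any other exception escapes to the caller. A minimal standalone sketch of the failure mode (illustrative; not the actual backend.py code):

    class Brittle:
        def __getattr__(self, name):
            # Wrong: NotImplementedError escapes hasattr() and crashes callers.
            raise NotImplementedError(name)

    class Correct:
        def __getattr__(self, name):
            # Right: hasattr() swallows AttributeError and returns False.
            raise AttributeError(name)

    hasattr(Correct(), "missing")  # False
    hasattr(Brittle(), "missing")  # raises NotImplementedError
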
Aapo Kyrola
aa6887e6ef add error message to missing function backend (#21742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21742

Add an error message to the NotImplementedError so we know which function it refers to.

Reviewed By: bddppq

Differential Revision: D15806379

fbshipit-source-id: 14eab9d03aa5b44ab95c5caeadc0e01d51f22188
2019-06-13 13:10:48 -07:00
Adam Paszke
86363e1d8e Move RNN implementations to C++ (#10481)
Summary:
This is the first of two changes that are supposed to improve how we handle RNNs in the JIT. They still get traced as `PythonOp`s, but now it will be much easier to actually expose them to the JIT as e.g. `aten::lstm`, and ignore the Python interpreter entirely. This needs some symbolic adjustments that will be part of a second PR.

Even once we fix the symbolics, there will still be a bit of a problem with the statefulness of the cuDNN API (we need a mutable cache for the dropout state, but our IR has no way of representing that).

zdevito ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10481

Reviewed By: ezyang

Differential Revision: D9341113

Pulled By: apaszke

fbshipit-source-id: 0ae30ead72a1b12044b7c12369d11e5ca8ec30b5
2018-08-15 13:25:41 -07:00
Adam Paszke
adbcb3c1dc Move dropout and alpha dropout to ATen (#10384)
Summary:
zdevito ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10384

Reviewed By: ezyang

Differential Revision: D9272583

Pulled By: apaszke

fbshipit-source-id: ed5d37b28ce9ff25800bbaa0daf066cfbf1f9921
2018-08-10 14:55:28 -07:00
li-roy
e4eee7c2cf Implement MarginRankingLoss as a native function and add reduce=True arg to it (#5346)
* add reduce=True arg to MarginRankingLoss

* make the default margin arg match the legacy version

* remove accidentally added test

* fix test

* fix native_functions.yaml alphabetical order
2018-03-21 15:40:58 -04:00
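
For reference, the per-element loss being made native here is max(0, -y * (x1 - x2) + margin); a sketch of what the reduce flag controls (illustrative, not the code from this PR):

    import torch

    x1 = torch.randn(4)
    x2 = torch.randn(4)
    y = torch.tensor([1., -1., 1., -1.])
    margin = 0.0

    # Elementwise margin ranking loss: max(0, -y * (x1 - x2) + margin)
    per_element = torch.clamp(-y * (x1 - x2) + margin, min=0)
    loss = per_element.mean()  # reduce=True (the default) averages; reduce=False keeps per_element
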
li-roy
4c4a42b3f9 implement CosineEmbeddingLoss as a native function and add reduce arg (#5646)
* implement CosineEmbeddingLoss as a native function and add reduce=True arg to it

* fix flake8

* address comments

* add reference function to tests

* fix flake8
2018-03-08 17:54:24 -05:00
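
The standard definition of this loss is 1 - cos(x1, x2) for positive pairs (y == 1) and max(0, cos(x1, x2) - margin) for negative ones; a sketch of a reference function along those lines (not necessarily the one added in this PR):

    import torch
    import torch.nn.functional as F

    def cosine_embedding_loss_ref(x1, x2, y, margin=0.0):
        # y == 1 pulls pairs together; y == -1 pushes them apart beyond margin.
        cos = F.cosine_similarity(x1, x2, dim=1)
        per_pair = torch.where(y == 1, 1 - cos, torch.clamp(cos - margin, min=0))
        return per_pair.mean()
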
Edward Z. Yang
9de922991c Revert "implement CosineEmbeddingLoss as a native function and add reduce arg" (#5640)
* Revert "implement CosineEmbeddingLoss as a native function and add reduce arg (#5447)"

This reverts commit c16478fe3f.
2018-03-08 14:07:17 -05:00
li-roy
c16478fe3f implement CosineEmbeddingLoss as a native function and add reduce arg (#5447)
forward (new) [1.1905965859768912, 1.160144692985341, 1.1558120870031416]
backward (new) [1.9150976981036365, 1.9792822760064155, 1.8779143309220672]
double backward (new) [3.6898688060464337, 3.5784677929477766, 3.569505032035522]

forward (old) [3.2359962839400396, 3.275224728975445, 3.3409753759624436]
backward (old) [5.668679727939889, 5.722980880062096, 5.585088661056943]
double backward (old) N/A

* implement CosineEmbeddingLoss as a native function and add reduce=True arg to it

* fix flake8

* address comments

* add reference function to tests

* fix flake8
2018-03-08 13:15:12 -05:00
gchanan
fcccd07cc0 Implement hinge_embedding_loss as a native function. (#5080)
2018-02-06 14:43:36 -05:00
Sam Gross
9437644f66 Replace softmin and softsign with simple differentiable expressions 2017-10-10 16:57:47 -04:00
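
The "simple differentiable expressions" are presumably the standard closed forms softmin(x) = softmax(-x) and softsign(x) = x / (1 + |x|); a sketch:

    import torch

    def softmin(x, dim=-1):
        # softmin is softmax over the negated input.
        return torch.softmax(-x, dim=dim)

    def softsign(x):
        # Smooth, bounded approximation to sign(x).
        return x / (1 + x.abs())
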
Trevor Killeen
58b7d1c764 remove python convnd function 2017-08-30 15:46:35 -04:00
Gregory Chanan
7aeb837895 Implement HingeEmbeddingLoss double backwards. 2017-08-14 16:19:10 -04:00
Sam Gross
da0fad8a7a Use torch.matmul in nn.Linear (#1935)
This takes advantage of the broadcasting behavior of torch.matmul to
support inputs with more than two dimensions. The extra dimensions are
treated like part of the batch dimension, much like nn.Bottle in Lua
Torch.

There are a few related small performance changes:

 * Addmm computes the gradient in column-major order for inputs in
   column-major format
 * Variable.mm calls Addmm in-place with the desired output buffer
2017-06-30 16:53:26 -04:00
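
The broadcasting behavior described above is easy to see in isolation (a sketch of the matmul formulation, not necessarily the exact nn.Linear code):

    import torch

    x = torch.randn(8, 5, 20)  # extra leading dims act like batch dims
    w = torch.randn(30, 20)    # (out_features, in_features)
    b = torch.randn(30)

    # matmul broadcasts over the leading dimensions of x:
    y = torch.matmul(x, w.t()) + b
    print(y.shape)  # torch.Size([8, 5, 30])
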
soumith
e48db02e10 remove unused python-level BatchNorm.py 2017-04-07 16:27:16 -04:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
1259a0648b Make nn containers copyable 2017-01-16 12:59:47 -05:00
Sam Gross
fd92470e23 Add cuDNN bindings for BatchNorm (#421) 2017-01-07 15:35:24 -05:00
Adam Paszke
8d60e39fdc Rename torch.nn.functions to torch.nn._functions 2016-12-30 23:02:57 +01:00
Sam Gross
c367e0b64e Support dilated 1d and 3d convolutions (#372)
Fixes #367
2016-12-29 18:20:32 -05:00
Sam Gross
8a29338837 Use cuDNN for Conv3d and ConvTranspose3d (#359)
I've also updated test_nn.py to run marked tests twice: once with cuDNN
enabled and once with it disabled.
2016-12-28 16:14:47 -05:00
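
In current PyTorch, running the same test with cuDNN on and off can be done with the torch.backends.cudnn.flags context manager (a sketch; not necessarily how test_nn.py marks its tests):

    import torch

    def run_both_ways(test_fn):
        # Run once with cuDNN enabled and once with it disabled.
        for enabled in (True, False):
            with torch.backends.cudnn.flags(enabled=enabled):
                test_fn()
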
Adam Paszke
e51d0bef97 Add cuDNN bindings for 2D transposed convolution 2016-11-17 14:34:40 -08:00
Adam Lerer
86288265ad Adding rnn cell library 2016-10-23 20:23:48 -07:00
Adam Lerer
d58b627b98 CUDNN RNN bindings 2016-10-23 20:23:48 -07:00
Sam Gross
779a460030 Add cuDNN support for convolutions (#36) 2016-09-27 17:55:04 -04:00
Adam Paszke
d1fda539b7 Fix nn serialization errors 2016-09-15 19:28:34 -07:00
Adam Paszke
fb39971464 Add more modules to nn 2016-09-14 11:05:56 -07:00
Sam Gross
ec22828169 Add torch.nn.AvgPool2d 2016-08-30 12:16:40 -07:00
Adam Paszke
cc645de37b MaxPooling2d -> MaxPool2d 2016-08-24 10:11:00 -07:00
Adam Paszke
ea93fb7ac0 Add more nn modules 2016-08-23 19:15:21 -07:00
Adam Paszke
e055ffbdc7 Add nn 2016-08-19 14:56:55 -07:00