Commit Graph

173 Commits

M.L. Croci
8eb90d4865 Add Gaussian NLL Loss (#50886)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48520.

cc albanD (This is a clean retry of PR https://github.com/pytorch/pytorch/issues/49807)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50886

Reviewed By: ejguan

Differential Revision: D26007435

Pulled By: albanD

fbshipit-source-id: 88fe91b40dea6f72e093e6301f0f04fcc842d2f0
2021-01-22 06:56:49 -08:00
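A minimal usage sketch of the module this commit adds (assuming the `torch.nn.GaussianNLLLoss` API as it landed; shapes are illustrative):

```python
import torch
import torch.nn as nn

loss_fn = nn.GaussianNLLLoss()                 # negative log-likelihood of N(mean, var)
mean = torch.randn(5, 2, requires_grad=True)   # predicted means
var = torch.ones(5, 2, requires_grad=True)     # predicted variances (must be positive)
target = torch.randn(5, 2)                     # observed values
loss = loss_fn(mean, target, var)
loss.backward()
```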
Jeffrey Wan
6e3e57095c Add complex support for torch.nn.L1Loss (#49912)
Summary:
Building on top of the work of anjali411 (https://github.com/pytorch/pytorch/issues/46640)

Things added in this PR:
1. Modify backward and double-backward formulas
2. Add complex support for `new module tests` and criterion tests (and add complex tests for L1)
3. Modify some existing tests to support complex

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49912

Reviewed By: zhangguanheng66

Differential Revision: D25853036

Pulled By: soulitzer

fbshipit-source-id: df619f1b71c450ab2818eb17804e0c55990aa8ad
2021-01-15 15:53:15 -08:00
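A short sketch of what the added complex support enables; the dtypes and shapes here are illustrative. The loss itself is real-valued, so `backward` works as usual:

```python
import torch
import torch.nn as nn

loss_fn = nn.L1Loss()
input = torch.randn(3, dtype=torch.cfloat, requires_grad=True)
target = torch.randn(3, dtype=torch.cfloat)
loss = loss_fn(input, target)  # mean of |input - target|, a real scalar
loss.backward()                # gradients flow back to the complex input
```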
Tongzhou Wang
d2e96fcf17 Update loss module doc (#48596)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48596

Reviewed By: izdeby

Differential Revision: D25889748

Pulled By: zou3519

fbshipit-source-id: 9f6e77ba2af4030c8b9ae4afcea6d002a4dae423
2021-01-13 10:41:20 -08:00
Guilherme Leobas
ca8b9437ab Add type annotations for a few torch.nn.modules (#46013)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46012

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46013

Reviewed By: gchanan

Differential Revision: D25012419

Pulled By: ezyang

fbshipit-source-id: 9fd8ad9fa3122efa294a08010171cb7ddf752778
2020-11-18 07:44:59 -08:00
Jeffrey Wan
9858b012ec Fix TripletMarginWithDistanceLoss example code (#46853)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45210

Removes `requires_grad=True` from all the `randint` calls.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46853

Reviewed By: izdeby

Differential Revision: D24549483

Pulled By: soulitzer

fbshipit-source-id: c03576571ed0b2dbb281870f29a28eb6f6209c65
2020-10-26 21:02:54 -07:00
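The reason for the fix, as a small sketch (values are illustrative): integer tensors cannot require gradients, so `requires_grad=True` on `randint` raises an error.

```python
import torch

targets = torch.randint(0, 10, (4,))  # fine: a plain int64 tensor
# torch.randint(0, 10, (4,), requires_grad=True)
# -> RuntimeError: only floating-point and complex dtypes can require gradients
```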
Brian Hirsh
869b2ca048 some documentation and style fixes to smooth_l1_loss (#45587)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45587

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D24024313

Pulled By: bdhirsh

fbshipit-source-id: c50efb2934d7b9d3b090e92678319cde42c0df45
2020-10-02 07:47:31 -07:00
Richard Zou
381f6d32a7 [docs] Fix hyperlinks for nn.CrossEntropyLoss (#45660)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45460. This PR makes it so that LogSoftmax and NLLLoss are correctly linked from the nn.CrossEntropyLoss documentation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45660

Test Plan:
- built and viewed docs locally

![image](https://user-images.githubusercontent.com/5652049/94816513-ee85fb80-03c9-11eb-8289-56642c133e11.png)

Reviewed By: glaringlee

Differential Revision: D24049009

Pulled By: zou3519

fbshipit-source-id: 3bd0660acb8575d753cefd2d0f1e523ca58a25b6
2020-10-01 12:18:43 -07:00
Richard Zou
1efdbfabcc [docs] Fix back quote rendering in loss modules docs (#45662)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42855. Previously, back quotes weren't rendering correctly in
equations because we were quoting things like `'mean'`. To render an opening
quote properly in LaTeX text mode, it must be written as a back-tick.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45662

Test Plan:
- built docs locally and viewed the changes.

For NLLLoss (which is not the original module mentioned in the issue, but it has the same problem), we can see how the back quotes now render properly:

![image](https://user-images.githubusercontent.com/5652049/94819862-c5676a00-03cd-11eb-9e92-01380ee52bd6.png)

Reviewed By: glaringlee

Differential Revision: D24049880

Pulled By: zou3519

fbshipit-source-id: 61a1257994144549eb8f29f19d639aea962dfec0
2020-10-01 11:52:27 -07:00
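A minimal LaTeX sketch of the quoting rule this commit describes (illustrative, not the exact doc source):

```latex
% Wrong: a straight quote as the opening quote renders as a closing quote
$\text{reduction} = \text{'mean'}$
% Right: in text mode the opening quote is written as a back-tick
$\text{reduction} = \text{`mean'}$
```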
Xinyu Li
c9bb990707 [c++] Distance-agnostic triplet margin loss (#45377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45377

This PR adds a C++ implementation of the TripletMarginWithDistanceLoss, for which the Python implementation was introduced in PR #43680.  It's based on PR #44072, but I'm resubmitting this to unlink it from Phabricator.

Test Plan: Imported from OSS

Reviewed By: izdeby

Differential Revision: D24003973

fbshipit-source-id: 2d9ada7260a6f27425ff2fdbbf623dad0fb79405
2020-09-30 12:37:35 -07:00
Brian Hirsh
6c4aa2a79c Revert D24002415: Some fixes to smooth_l1_loss
Test Plan: revert-hammer

Differential Revision:
D24002415 (fdbed7118e)

Original commit changeset: 980c141019ec

fbshipit-source-id: 8981b5f6d982ed66c670122e437540444cb5f39c
2020-09-30 10:00:17 -07:00
Brian Hirsh
fdbed7118e Some fixes to smooth_l1_loss (#45532)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45532

- updated documentation
- explicitly not supporting negative values for beta (previously the
result was incorrect)
- Removing default value for beta in the backwards function, since it's
only used internally by autograd (as per convention)

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D24002415

Pulled By: bdhirsh

fbshipit-source-id: 980c141019ec2d437b771ee11fc1cec4b1fcfb48
2020-09-30 07:28:44 -07:00
Brian Hirsh
439930c81b adding a beta parameter to the smooth_l1 loss fn (#44433)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44433

Not entirely sure why, but changing the type of beta from `float` to `double` in autocast_mode.cpp and FunctionsManual.h fixes my compiler errors; the build then fails at link time instead

fixing some type errors, updated fn signature in a few more files

removing my usage of Scalar, making beta a double everywhere instead

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D23636720

Pulled By: bdhirsh

fbshipit-source-id: caea2a1f8dd72b3b5fd1d72dd886b2fcd690af6d
2020-09-25 16:36:28 -07:00
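A usage sketch of the new keyword (the `beta` argument on `F.smooth_l1_loss` as added here; the value is illustrative):

```python
import torch
import torch.nn.functional as F

input = torch.randn(8, requires_grad=True)
target = torch.randn(8)
# beta is the threshold where the loss switches from quadratic to linear
loss = F.smooth_l1_loss(input, target, beta=0.5)
loss.backward()
```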
Mike Ruberry
ef885c10d8 [pytorch] Add triplet margin loss with custom distance (#43680)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43680

As discussed [here](https://github.com/pytorch/pytorch/issues/43342),
adding in a Python-only implementation of the triplet-margin loss that takes a
custom distance function.  Still discussing whether this is necessary to add to
PyTorch Core.

Test Plan:
python test/run_tests.py

Imported from OSS

Reviewed By: albanD

Differential Revision: D23363898

fbshipit-source-id: 1cafc05abecdbe7812b41deaa1e50ea11239d0cb
2020-09-22 11:35:52 -07:00
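A usage sketch of the loss this stack introduces (`nn.TripletMarginWithDistanceLoss`); the cosine-distance helper is an illustrative assumption, not part of the PR:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cosine_distance(x, y):
    # illustrative custom distance: 1 - cosine similarity
    return 1.0 - F.cosine_similarity(x, y)

loss_fn = nn.TripletMarginWithDistanceLoss(
    distance_function=cosine_distance, margin=0.5)
anchor = torch.randn(8, 16, requires_grad=True)
positive = torch.randn(8, 16, requires_grad=True)
negative = torch.randn(8, 16, requires_grad=True)
loss = loss_fn(anchor, positive, negative)
loss.backward()
```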
Vincent QB
fab012aa28 Revert "Added support for Huber Loss (#37599)" (#43351)
Summary:
This reverts commit 11e5174926 due to [comment](https://github.com/pytorch/pytorch/pull/37599#pullrequestreview-471950192).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43351

Reviewed By: pbelevich, seemethere

Differential Revision: D23249511

Pulled By: vincentqb

fbshipit-source-id: 18b8b346f00eaf0ef7376b06579d404a84add4de
2020-09-01 06:34:26 -07:00
mariosasko
0744dd6166 Fix shapes in the MarginRankingLoss docs (#43131)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42884

I did some additional research. Considering the first few lines of the docs (`Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1`) and the provided tests, this loss is meant to be used primarily with 1-D tensors. More advanced users (who may use this loss in non-standard ways) can easily check the source and see that the definition accepts inputs/targets of arbitrary dimension as long as they match in shape or are broadcastable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43131

Reviewed By: colesbury

Differential Revision: D23192011

Pulled By: mrshenli

fbshipit-source-id: c412c28daf9845c0142ea33b35d4287e5b65fbb9
2020-08-18 13:44:16 -07:00
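A sketch of the documented 1-D usage (shapes and margin are illustrative):

```python
import torch
import torch.nn as nn

loss_fn = nn.MarginRankingLoss(margin=0.1)
x1 = torch.randn(4, requires_grad=True)  # 1-D mini-batch
x2 = torch.randn(4, requires_grad=True)
y = torch.tensor([1., -1., 1., -1.])     # label: which input should rank higher
loss = loss_fn(x1, x2, y)                # mean of max(0, -y*(x1 - x2) + margin)
loss.backward()
```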
alexandrosstergiou
11e5174926 Added support for Huber Loss (#37599)
Summary:
Current losses in PyTorch only include a (partial) implementation of Huber loss through `smooth l1` based on Fast RCNN, which essentially uses a delta value of 1. Renaming and refactoring [`_smooth_l1_loss()`](3e1859959a/torch/nn/functional.py (L2487)) to include delta makes it possible to use the actual function.

Supplementary to this, I have also made functional and criterion versions for anyone who wants to set the delta explicitly, based on the functional `smooth_l1_loss()` and the criterion `Smooth_L1_Loss()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37599

Differential Revision: D21559311

Pulled By: vincentqb

fbshipit-source-id: 34b2a5a237462e119920d6f55ba5ab9b8e086a8c
2020-07-27 10:42:30 -07:00
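Since this commit was later reverted (see fab012aa28 above), the sketch below only illustrates the Huber loss being described; the helper name and the mean reduction are assumptions, not the PR's API:

```python
import torch

def huber_loss(input, target, delta=1.0):
    # quadratic for |error| < delta, linear beyond; delta=1 recovers smooth L1
    err = input - target
    abs_err = err.abs()
    quadratic = 0.5 * err ** 2
    linear = delta * (abs_err - 0.5 * delta)
    return torch.where(abs_err < delta, quadratic, linear).mean()
```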
mattip
b7bda236d1 DOC: split quantization.rst into smaller pieces (#41321)
Summary:
xref gh-38010 and gh-38011.

After this PR, there should be only two warnings:
```
pytorch/docs/source/index.rst:65: WARNING: toctree contains reference to nonexisting \
      document 'torchvision/index'
WARNING: autodoc: failed to import class 'tensorboard.writer.SummaryWriter' from module \
     'torch.utils'; the following exception was raised:
No module named 'tensorboard'
```

If tensorboard and torchvision are prerequisites to building docs, they should be added to the `requirements.txt`.

As for breaking up quantization into smaller pieces: I split out the list of supported operations and the list of modules to separate documents. I think this makes the page flow better, makes it much "lighter" in terms of page cost, and also removes some warnings since the same class names appear in multiple sub-modules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41321

Reviewed By: ngimel

Differential Revision: D22753099

Pulled By: mruberry

fbshipit-source-id: d504787fcf1104a0b6e3d1c12747ec53450841da
2020-07-25 23:59:40 -07:00
SsnL
1922f2212a Make IterableDataset dataloader.__len__ warning clearer (#41175)
Summary:
Based on discussion with jlucier (https://github.com/pytorch/pytorch/pull/38925#issuecomment-655859195). The `batch_size` change isn't made because the data loader only has the notion of a `batch_sampler`, not a batch size. If `batch_size`-dependent sharding is needed, users can still access it from their own code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41175

Differential Revision: D22456525

Pulled By: zou3519

fbshipit-source-id: 5281fcf14807f219de06e32107d5fe7d5b6a8623
2020-07-09 13:49:29 -07:00
Peter Bell
35bd2b3c8b DOC: Clarify that CrossEntropyLoss mean is weighted (#40991)
Summary:
Closes https://github.com/pytorch/pytorch/issues/40560

This adds the equation for the weighted mean to `CrossEntropyLoss`'s docs, and the `reduction` argument for `CrossEntropyLoss` and `NLLLoss` no longer describes a non-weighted mean of the outputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40991

Differential Revision: D22395805

Pulled By: ezyang

fbshipit-source-id: a623b6dd2aab17220fe0bf706bd9b62d6ba531fd
2020-07-06 15:05:31 -07:00
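For reference, the weighted mean being documented (reconstructed from the loss docs, where l_n is the per-sample loss and w_{y_n} the weight of sample n's target class):

```latex
\ell(x, y) = \frac{\sum_{n=1}^{N} w_{y_n} \, l_n}{\sum_{n=1}^{N} w_{y_n}}
```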
Jiayu Liu
0203d70c63 [nit] fix some typo within documentation (#40692)
Summary:
Apologies if this seems trivial, but I'd like to fix these typos as I read through the source code. Thanks!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40692

Differential Revision: D22284651

Pulled By: mrshenli

fbshipit-source-id: 4259d1808aa4d15a02cfd486cfb44dd75fdc58f8
2020-06-30 19:24:44 -07:00
Nicolas Hug
a4dec0674c [doc] fix typo in formula of MarginRankingLoss (#40285)
Summary:
This is just a minor doc fix:

the `MarginRankingLoss` takes 2 input samples `x1` and `x2`, not just `x`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40285

Reviewed By: ezyang

Differential Revision: D22195069

Pulled By: zou3519

fbshipit-source-id: 909f491c94dca329a37216524f4088e9096e0bc6
2020-06-24 07:24:51 -07:00
Edward Yang
eace053398 Move all torch.nn.modules type annotations inline (#38211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38211

Just because the annotations are inline doesn't mean the files type
check; most of the newly annotated files have type errors and I
added exclusions for them in mypy.ini. The payoff of moving
all of these modules inline is I can delete the relevant code
generation logic for the pyi files (which added ignore
annotations that weren't actually relevant anymore).

For the most part the translation was completely mechanical, but there
were two hairy issues. First, I needed to work around a Python 3.6 and
earlier bug where Generic has a nontrivial metaclass. This fix is in
torch/jit/__init__.py. Second, in module.py, we need to apply the same
fix for avoiding contravariance checks that the pyi file used to have;
this is done by declaring forward as a variable (rather than a
function), which appears to be sufficient to get mypy to not
contravariantly check input arguments.

Because we aren't actually typechecking these modules in most
cases, it is inevitable that some of these type annotations are wrong.
I slavishly copied the old annotations from the pyi files unless there
was an obvious correction I could make. These annotations will probably
need fixing up later.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21497397

Pulled By: ezyang

fbshipit-source-id: 2b08bacc152c48f074e7edc4ee5dce1b77d83702
2020-06-11 15:59:57 -07:00
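A minimal sketch of the contravariance workaround described above; the class and signatures are illustrative, not the actual module.py source:

```python
from typing import Any, Callable

class Module:
    # Declaring `forward` as a Callable attribute rather than a method keeps
    # mypy from contravariantly checking argument types when subclasses
    # narrow them.
    forward: Callable[..., Any]

class Linear(Module):
    def forward(self, x: float) -> float:  # narrower signature, no mypy error
        return x * 2.0
```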
MLgdg
a86176dee2 CTC target un-pad example (#38393)
Summary:
CTC target un-pad example
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38393

Differential Revision: D21620042

Pulled By: ezyang

fbshipit-source-id: 532d77c92fe1742c6f6b4a1b61c281f042cf8374
2020-05-18 10:31:14 -07:00
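For context, a sketch of the pattern such an example illustrates (all sizes are assumptions): a padded 2-D `targets` tensor plus `target_lengths` telling `CTCLoss` how much of each row is real.

```python
import torch
import torch.nn as nn

T, C, N, S = 50, 20, 2, 30  # input length, classes (incl. blank), batch, padded target length
ctc = nn.CTCLoss(blank=0)
log_probs = torch.randn(T, N, C).log_softmax(2).requires_grad_()
targets = torch.randint(1, C, (N, S), dtype=torch.long)    # padded; blank excluded
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.tensor([25, 30], dtype=torch.long)  # real length per sample
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```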
Edward Yang
4fef3763dd Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings" (#37778)
Summary:
Original PR: https://github.com/pytorch/pytorch/pull/37419

cc mattip suo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37778

Differential Revision: D21385774

Pulled By: ezyang

fbshipit-source-id: 5de532faab8bae132736b6b5189e0ee2ac9935be
2020-05-04 14:32:35 -07:00
Michael Suo
20f7e62b1d Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings
Test Plan: revert-hammer

Differential Revision:
D21337640

Original commit changeset: d4ad198780c3

fbshipit-source-id: fa9ba6ac542173a50bdb45bfa12f3fec0ed704fb
2020-05-04 10:57:55 -07:00
mattip
f10fbcc820 Split up documentation into subpages and clean up some warnings (#37419)
Summary:
xref gh-32838, gh-34032

This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature, which will build out `autofunction` and `autoclass` stub files and link to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like tables of contents for the actual single-class or single-function documentation pages.

Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is to add names to `__all__` when adding them to `globals()` in `torch.__init__.py`

I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419

Differential Revision: D21337640

Pulled By: ezyang

fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
2020-05-04 09:39:22 -07:00
Nik Ved
075b732f26 doc fix for KLDivLoss (#36137)
Summary:
Fixes doc for KLDivLoss as per [this comment](https://github.com/pytorch/pytorch/pull/34586#discussion_r404442741).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36137

Differential Revision: D20932395

Pulled By: gchanan

fbshipit-source-id: ecc395e6bc689fbf758e2cdca946049de8963856
2020-04-09 07:54:56 -07:00
Nik Ved
35cdb78522 Make kl_div accept target in log space (#34586)
Summary:
Fixes [32520](https://github.com/pytorch/pytorch/issues/32520), implements [34536](https://github.com/pytorch/pytorch/issues/34536).

Here are some benchmarks:
```python
import torch
import torch.nn.functional as F
from IPython import get_ipython

ipython = get_ipython()

torch.set_num_threads(1)

for d in [5, 10, 20, 50, 100, 1000]:
    i = torch.rand(d, d)
    t = torch.rand(d, d)
    print(f"Size: {d}x{d}")
    ipython.magic("timeit F.kl_div(i, t, reduction='none', log_target=False)")
    ipython.magic("timeit F.kl_div(i, t.log(), reduction='none', log_target=True)")
```
Output:
```
Size: 5x5
16 µs ± 33 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
8.24 µs ± 17.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 10x10
16.7 µs ± 17.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
8.7 µs ± 20.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 20x20
17.7 µs ± 47.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
9.7 µs ± 28.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 50x50
23.6 µs ± 60.1 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
15 µs ± 33.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Size: 100x100
42.8 µs ± 223 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
34 µs ± 17.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Size: 1000x1000
3.9 ms ± 1.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.45 ms ± 364 ns per loop (mean ± std. dev. of 7 runs, 100 loops each)

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34586

Differential Revision: D20652726

Pulled By: ezyang

fbshipit-source-id: 480697b4cd01341bbeee7514a8b812705a0600ea
2020-04-01 12:26:58 -07:00
hirano
a57a7b4c29 Change input value in examples of BCEWithLogitsLoss (#34053)
Summary:
In the examples of `BCEWithLogitsLoss`, `0.999` is passed as the prediction value. The value `0.999` looks like a probability, but it actually isn't. I think it's better to pass a value greater than 1, so as not to confuse readers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34053

Differential Revision: D20195456

Pulled By: ezyang

fbshipit-source-id: 3abbda6232ee1ab141d202d0ce1177526ad59c53
2020-03-02 14:35:56 -08:00
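The point of the change, as a sketch: the first argument is a raw score (logit), not a probability, so values outside [0, 1] are perfectly valid (values here are illustrative):

```python
import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss()
logits = torch.tensor([1.5, -0.3, 2.0])  # raw scores; sigmoid is applied internally
target = torch.tensor([1., 0., 1.])
loss = loss_fn(logits, target)
```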
Simón Sepúlveda Osses
746e5218e7 Mistake in MSELoss documentation (#33836)
Summary:
Replaced `sum` with `mean` in [line 392](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/loss.py#L392)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33836

Differential Revision: D20142053

Pulled By: ailzhang

fbshipit-source-id: 2bfe19944ffc5534902dd9087023e70ddf5746c3
2020-02-27 15:34:46 -08:00
Kurt Mohler
3cfea39968 Document how BCELoss avoids infinite results (#33160)
Summary:
Issue https://github.com/pytorch/pytorch/issues/31453
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33160

Differential Revision: D19835527

Pulled By: albanD

fbshipit-source-id: 82fd2dd46ffbc87e90ca8e100db411b6ff6bfe32
2020-02-12 07:56:19 -08:00
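A sketch of the safeguard the documentation now describes (the clamp constant follows the docs; this is not the actual kernel code): the log terms are clamped so that a prediction of exactly 0 or 1 still yields a finite loss.

```python
import torch

p = torch.tensor([0.0, 1.0])   # pathological predictions
t = torch.tensor([1.0, 0.0])
log_p = p.log().clamp(min=-100)          # -inf -> -100
log_1mp = (1 - p).log().clamp(min=-100)  # -inf -> -100
loss = -(t * log_p + (1 - t) * log_1mp)  # finite instead of inf/nan
```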
Michael Suo
3552be1090 [jit] fix the NoneType param/buffer hack (#32745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32745

Some parameters (like `bias` in conv) are optional. To achieve this
previously, you had to add `bias` as a constant, which would invoke some
pretty weird behavior in the frontend, summarized as:
```
if bias is not None:
  add it as a parameter normally
else: # bias is None
  add it as a constant with the value None
```

There are several things bad about this:
1. Bias is not a constant. Marking it as one in `__constants__` is confusing.
2. It basically relies on an implementation detail (the frontend
processes parameters before constants) to work.

Okay, whatever. I don't even know why we did this originally, but
getting rid of it doesn't break anything, so I assume improved NoneType
refinement has made this a non-issue.

Note on perf: this will make no difference; if bias was `None`, it's still
folded out today, and if bias is a Tensor, it is added as a parameter
both before and after this change.

Test Plan: Imported from OSS

Differential Revision: D19628634

Pulled By: suo

fbshipit-source-id: d9128a09c5d096b938fcf567b8c23b09ac9ab37f
2020-01-29 17:04:39 -08:00
anjali411
49fe7a7401 Updated documentation for NLLLoss to explain what x, y and w refer to (#31488)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/31385

In the current documentation for NLLLoss, it's unclear what `y` refers to in the math section of the loss description. There was an issue (https://github.com/pytorch/pytorch/issues/31295) filed earlier expressing confusion about whether the loss returned for reduction='mean' is correct, perhaps because the formula's symbols are not clearly described in the current documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31488

Differential Revision: D19181391

Pulled By: anjali411

fbshipit-source-id: 8b75f97aef93c92c26ecbce55b3faf2cd01d3e74
2019-12-19 12:28:16 -08:00
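For reference, the per-sample term the updated docs spell out (reconstructed from the NLLLoss documentation): x is the log-probability input, y holds the target class indices, and w is the per-class weight vector.

```latex
l_n = -\, w_{y_n} \, x_{n,\, y_n}
```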
David Riazati
10c4b98ade Remove weak script (#22212)
Summary:
* Deletes all weak script decorators / associated data structures / methods
   * In order to keep supporting the standard library in script, this enables recursive script on any function defined in `torch.nn`
   * Most changes in `torch/nn` are the result of `ag -Q "weak" torch/nn/ -l | xargs sed -i '/weak/d'`; only `rnn.py` needed manual editing, using `ignore` and `export` to continue supporting the overloaded `forward` methods
* `Sequential`/`ModuleList` no longer need to be added to constants since they are compiled on demand

This should also fix https://github.com/pytorch/pytorch/issues/22212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22212

Differential Revision: D15988346

Pulled By: driazati

fbshipit-source-id: af223e3ad0580be895377312949997a70e988e4f
2019-07-03 17:28:25 -07:00
Thomas Viehmann
aa42742df0 ctc_loss: fix backward when 2d target tensor is larger than max_target_length (#20971)
Summary:
Previously, the backward pass didn't work when 2d target tensors had extra columns at the end. Now we just ignore those.
Also fix the confusion in the doc example regarding the number of classes.

Thank you, ypw-rich, for the report with a reproducing example.

Fixes: #20522
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20971

Differential Revision: D15535481

Pulled By: ezyang

fbshipit-source-id: 397e44e20165fc4fa2547bee9390d4c0b688df93
2019-05-29 05:13:00 -07:00
zhiqiang
cfb87c1022 Update documentation for CTCLoss (#20422)
Summary:
Change `Inputs` to `Shape` to unify the format of the CTCLoss class docs, and add the type of `Output` under `Shape`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20422

Differential Revision: D15393484

Pulled By: ezyang

fbshipit-source-id: 5b49647f9740de77db49a566fa2de74fcecd9110
2019-05-17 09:02:25 -07:00
PetroSokirniy
71d23ebc13 #20143 TripletMarginLoss example isn't clear (#20145)
Summary:
Pull Request for "TripletMarginLoss example isn't clear" (#20143).
cc SsnL
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20145

Differential Revision: D15217491

Pulled By: ezyang

fbshipit-source-id: 46e30496e9f0dd830a2eec42c5775209a773cc48
2019-05-06 06:32:30 -07:00
zhiqiang
ecdeef37df Fix math rendering of CTC loss docs and fix error of lint in test_torch.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19662

Differential Revision: D15063326

Pulled By: soumith

fbshipit-source-id: 71e79dee19c6259e27393d6b3d5ca3f8edfd6afa
2019-05-03 22:30:41 -07:00
Vadim Velicodnii
7a4189696f Fix the documentation for BCEWithLogitsLoss (#17218, #16804) (#19212)
Summary:
I fixed a mistake in the explanation of the `pos_weight` argument in `BCEWithLogitsLoss` and added an example.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19212

Differential Revision: D14923431

Pulled By: ezyang

fbshipit-source-id: 15696c67d56789102ac72afbe9bdd7b667eae5a0
2019-04-23 09:54:24 -07:00
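A sketch of the corrected `pos_weight` semantics (the 300/100 ratio follows the fixed docs; shapes are illustrative): with p positives and n negatives for a class, `pos_weight = n / p` makes positives and negatives contribute equally.

```python
import torch
import torch.nn as nn

# e.g. 100 positive vs. 300 negative examples of one class:
pos_weight = torch.tensor([300.0 / 100.0])  # = 3, one entry per class
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
logits = torch.randn(4, 1)
target = torch.empty(4, 1).random_(2)
loss = loss_fn(logits, target)
```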
zhiqiang
88f78c719a Fix math formatting of PairwiseDistance and CosineSimilarity docs and fix math formatting of CTC loss docs.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19534

Differential Revision: D15034011

Pulled By: ezyang

fbshipit-source-id: 60b81c970c919508a57c86fb23edc9f64973117c
2019-04-23 07:24:07 -07:00
Soumith Chintala
8638634a6e fixes link in TripletMarginLoss (#19417)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/19245
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19417

Differential Revision: D15001610

Pulled By: soumith

fbshipit-source-id: 1b85ebe196eb5a3af5eb83d914dafa83b9b35b31
2019-04-18 22:13:48 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
ryan
96456bfa4c Update documentation for CTCLoss (#18415)
Summary:
This is meant to resolve #18249, where I pointed out a few things that could improve the CTCLoss docs.

My main goal was to clarify:
- Target sequences are sequences of class indices, excluding the blank index
- Lengths of `target` and `input` are needed for masking unequal-length sequences, and do not necessarily equal S, the length of the longest sequence in the batch.

I thought about Thomas's suggestion to link the distill.pub article, but I'm not sure about it. I think that should be up to y'all to decide.

I have no experience with .rst, so it might not render as expected :)

t-vi ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18415

Differential Revision: D14691969

Pulled By: soumith

fbshipit-source-id: 381a2d52307174661c58053ae9dfae6e40cbfd46
2019-03-30 01:26:34 -07:00
ZhuBaohe
8c3285bf11 Fix loss functions doc (#18420)
Summary:
Corrects a docstring display error on the web page caused by my previous PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18420

Differential Revision: D14642467

Pulled By: soumith

fbshipit-source-id: 16fdd3301a4c5bad27fbcd8686f7fbfcc1e908ee
2019-03-27 10:23:24 -07:00
Bharat123Rox
ebc9f75895 Added the exception of ignore_index (#18117)
Summary:
Fix #17801 by adding a note on the `ignore_index` exception to the documentation for `torch.nn.CrossEntropyLoss` and `torch.nn.NLLLoss`

If any other files/functions are hit, I'd be glad to incorporate the changes there too! 😊
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18117

Differential Revision: D14542079

Pulled By: ezyang

fbshipit-source-id: 7b918ac61f441dde7d3d6782d080c500cf2097f1
2019-03-20 16:03:34 -07:00
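A sketch of the exception now documented (the index value is illustrative): targets equal to `ignore_index` contribute neither to the loss nor to the gradient.

```python
import torch
import torch.nn as nn

loss_fn = nn.NLLLoss(ignore_index=-1)
log_probs = torch.log_softmax(torch.randn(3, 5), dim=1)
target = torch.tensor([1, -1, 4])  # the middle sample is ignored
loss = loss_fn(log_probs, target)  # averaged over the two non-ignored samples
```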
Bharat Raghunathan
8ed2b88bf1 Corrected type of 'swap' in torch.nn.TripletMarginLoss (#18115)
Summary:
Fix #16428 by correcting type of 'swap' from `float` to `bool`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18115

Differential Revision: D14516615

Pulled By: ezyang

fbshipit-source-id: c61a45d533f3a443edf3c31c1ef3d9742bf46d2b
2019-03-19 07:09:15 -07:00
joy
9ecee93a16 Fix minor grammatical mistakes in torch/nn/modules/loss.py (#17892)
Summary:
Fixes some minor grammatical mistakes in the doc of `loss.py`.

I think in the doc:
>  Note that for some losses, there multiple elements per sample.

the "are" is lost between "there" and "multiple".

This mistake appears in all 17 descriptions of the parameter `size_average`.
It's minor, but it polishes the doc, I think. 😁
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17892

Differential Revision: D14418177

Pulled By: ezyang

fbshipit-source-id: 412759f2f9b215819463bf8452ab0e0513218cd6
2019-03-12 08:42:50 -07:00
ZhuBaohe
75f88d4da6 Correct loss docstrings (#17300)
Summary:
In the loss doc descriptions, replace the deprecated 'reduce' and 'size_average' parameters with the 'reduction' parameter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17300

Differential Revision: D14195789

Pulled By: soumith

fbshipit-source-id: 625e650ec20f13b2d22153a4a535656cf9c8f0eb
2019-03-10 11:56:41 -07:00
Bharat123Rox
6feded880e Fix #17218 by updating documentation (#17258)
Summary:
Fix Issue #17218 by updating the corresponding documentation in [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17258

Differential Revision: D14157336

Pulled By: ezyang

fbshipit-source-id: fb474d866464faeaae560ab58214cccaa8630f08
2019-02-21 14:17:35 -08:00
Igor Macedo Quintanilha
94a95a0c7f Fixing docstring in CTCLoss (#17307)
Summary:
The argument `zero_infinity` is in the wrong place! :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17307

Differential Revision: D14154850

Pulled By: ezyang

fbshipit-source-id: 7a9fe537483b23041f21ba1b80375b7f44265538
2019-02-21 08:13:28 -08:00