Commit Graph

12 Commits

Author SHA1 Message Date
Ram Rachum
351d73b97f Fix exception causes all over the codebase (#90271)
This is the continuation of #90134 and, hopefully, the final PR in this series.
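The pattern applied throughout the codebase is Python's explicit exception chaining (`raise ... from err`); a minimal sketch, with an illustrative function not taken from the PR:

```python
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # "raise ... from err" records the original exception as __cause__,
        # so the traceback reads "The above exception was the direct cause
        # of the following exception" instead of the misleading implicit
        # "During handling of the above exception..." context.
        raise RuntimeError(f"could not load config from {path}") from err
```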

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90271
Approved by: https://github.com/kit1980
2022-12-07 04:29:00 +00:00
vfdev-5
6593d293f7 Added functorch to functional_autograd_benchmark
Description:

- Following https://github.com/pytorch/functorch/issues/497, this adds an option to run the benchmarks with functorch and compare the results against the original functional autograd results.

Running the benchmark produces the table below:

<details>
<summary>
Table
</summary>

```
| model | task | mean | var |
| -- | -- | -- | -- |
| resnet18 | vjp | 0.03826599195599556 | 4.3332115637895186e-06 |
| resnet18 | functorch vjp | 0.037201929837465286 | 6.139693198292662e-09 |
| resnet18 | vhp | 0.2202976644039154 | 2.8687209052691287e-08 |
| resnet18 | functorch vhp | 0.22117868065834045 | 4.108771278765744e-08 |
| resnet18 | jvp | 0.18679651618003845 | 1.832455254202614e-08 |
| resnet18 | functorch jvp | 0.05305683612823486 | 1.6690266946284282e-08 |
| fcn_resnet | vjp | 0.6071907877922058 | 7.436695454998699e-07 |
| fcn_resnet | functorch vjp | 0.6115708947181702 | 1.121692207561864e-06 |
| fcn_resnet | vhp | 3.419469118118286 | 0.020633839070796967 |
| fcn_resnet | jvp | 2.5421929359436035 | 3.1765587209520163e-06 |
| fcn_resnet | functorch jvp | 0.7628333568572998 | 1.4555752159139956e-07 |
| detr | vjp | 0.19494840502738953 | 1.9122715457342565e-05 |
| detr | vhp | 1.1664292812347412 | 0.000948643428273499 |
| detr | jvp | 0.9990308880805969 | 1.0214127541985363e-05 |
| ppl_simple_reg | vjp | 0.0007535457843914628 | 6.024204690646684e-09 |
| ppl_simple_reg | functorch vjp | 0.0016954183811321855 | 1.160151974488599e-08 |
| ppl_simple_reg | vhp | 0.0011888503795489669 | 5.93119386937957e-10 |
| ppl_simple_reg | functorch vhp | 0.0026826143730431795 | 1.6787025103326414e-08 |
| ppl_simple_reg | jvp | 0.001067900680936873 | 7.409912128331086e-10 |
| ppl_simple_reg | functorch jvp | 0.002065300941467285 | 9.710328185974504e-08 |
| ppl_simple_reg | hvp | 0.001212477684020996 | 1.974137298077494e-09 |
| ppl_simple_reg | functorch hvp | 0.00482442369684577 | 2.327668653379078e-07 |
| ppl_simple_reg | jacobian | 0.0009108781814575195 | 3.489469158068914e-09 |
| ppl_simple_reg | functorch jacobian | 0.0019866942893713713 | 1.938326299466553e-08 |
| ppl_simple_reg | hessian | 0.005053090862929821 | 3.370298600202659e-07 |
| ppl_simple_reg | functorch hessian | 0.006374978926032782 | 7.556796077778927e-08 |
| ppl_simple_reg | hessian_fwdrev | 0.0036706924438476562 | 1.996075527088692e-09 |
| ppl_simple_reg | functorch hessian_fwdrev | 0.0058908225037157536 | 7.548283775804521e-08 |
| ppl_simple_reg | hessian_revrev | 0.0015769004821777344 | 1.5754418214442012e-08 |
| ppl_simple_reg | functorch hessian_revrev | 0.0041002752259373665 | 6.713568723171193e-08 |
| ppl_simple_reg | jacfwd | 0.0018048763740807772 | 2.7375660849315864e-08 |
| ppl_simple_reg | functorch jacfwd | 0.002047991845756769 | 2.432247070416338e-09 |
| ppl_simple_reg | jacrev | 0.0009733677143231034 | 1.0078769818733235e-08 |
| ppl_simple_reg | functorch jacrev | 0.0021971464157104492 | 1.2729884701911942e-08 |
| ppl_robust_reg | vjp | 0.005820560269057751 | 8.582588151284654e-08 |
| ppl_robust_reg | functorch vjp | 0.00796132069081068 | 9.663100541956737e-09 |
| ppl_robust_reg | vhp | 0.009825301356613636 | 2.0081762386325863e-07 |
| ppl_robust_reg | functorch vhp | 0.014890861697494984 | 4.558066279969353e-07 |
| ppl_robust_reg | jvp | 0.008297419175505638 | 2.9454400873873965e-07 |
| ppl_robust_reg | functorch jvp | 0.008052706718444824 | 7.120377176761394e-08 |
| ppl_robust_reg | hvp | 0.015414690598845482 | 7.42123745567369e-07 |
| ppl_robust_reg | functorch hvp | 0.02699306048452854 | 1.4650488537881756e-06 |
| ppl_robust_reg | jacobian | 0.006207776255905628 | 1.7068457225377642e-07 |
| ppl_robust_reg | functorch jacobian | 0.009173822589218616 | 1.2214455580306094e-07 |
| ppl_robust_reg | hessian | 0.04670915752649307 | 1.4299343092716299e-05 |
| ppl_robust_reg | functorch hessian | 0.02337808534502983 | 3.0397418413485866e-06 |
| ppl_robust_reg | hessian_fwdrev | 0.024229884147644043 | 2.0425247839739313e-06 |
| ppl_robust_reg | functorch hessian_fwdrev | 0.022021746262907982 | 3.512146236062108e-07 |
| ppl_robust_reg | hessian_revrev | 0.012355780228972435 | 7.090877147675201e-07 |
| ppl_robust_reg | functorch hessian_revrev | 0.013960313983261585 | 6.326549737423193e-07 |
| ppl_robust_reg | jacfwd | 0.008112502284348011 | 2.88503088086145e-08 |
| ppl_robust_reg | functorch jacfwd | 0.008947920985519886 | 4.2070990247111695e-08 |
| ppl_robust_reg | jacrev | 0.00635871896520257 | 1.3403841592207755e-07 |
| ppl_robust_reg | functorch jacrev | 0.009123563766479492 | 2.677554675756255e-07 |
| wav2letter | vjp | 0.02078995667397976 | 2.1110793113621185e-06 |
| wav2letter | functorch vjp | 0.019202351570129395 | 9.210506135559626e-09 |
| wav2letter | vhp | 0.05997290462255478 | 8.558587616391833e-09 |
| wav2letter | functorch vhp | 0.06035261228680611 | 1.6448565842708263e-09 |
| wav2letter | jvp | 0.04507789760828018 | 1.5771547401399744e-09 |
| wav2letter | functorch jvp | 0.013057494536042213 | 3.804750292601966e-09 |
| deepspeech | vjp | 0.3648746609687805 | 1.5359055396402255e-05 |
| transformer | vjp | 0.05496881157159805 | 1.242562319703211e-08 |
| transformer | functorch vjp | 0.057835936546325684 | 2.6113376350167528e-08 |
| transformer | vhp | 0.18313491344451904 | 7.226336151688884e-08 |
| transformer | jvp | 0.13924935460090637 | 1.6989159234981344e-07 |
| multiheadattn | vjp | 0.0014708995586261153 | 3.710916729460223e-08 |
| multiheadattn | functorch vjp | 0.002404856728389859 | 2.1910574687922235e-08 |
| multiheadattn | vhp | 0.003382015274837613 | 5.3098595742540056e-08 |
| multiheadattn | functorch vhp | 0.005340623669326305 | 5.897558708056749e-08 |
| multiheadattn | jvp | 0.0027526854537427425 | 3.508620949332908e-08 |
| multiheadattn | functorch jvp | 0.0022981404326856136 | 1.327894807445773e-07 |

```

</details>

<details>
<summary>
Stdout
</summary>

```
Found functorch: 0.2.0a0+386a541
Results for model resnet18 on task vjp: 0.03826599195599556s (var: 4.3332115637895186e-06)
Results for model resnet18 on task vjp using Functorch: 0.037201929837465286s (var: 6.139693198292662e-09)
Results for model resnet18 on task vhp: 0.2202976644039154s (var: 2.8687209052691287e-08)
Results for model resnet18 on task vhp using Functorch: 0.22117868065834045s (var: 4.108771278765744e-08)
Results for model resnet18 on task jvp: 0.18679651618003845s (var: 1.832455254202614e-08)
Results for model resnet18 on task jvp using Functorch: 0.05305683612823486s (var: 1.6690266946284282e-08)
Results for model fcn_resnet on task vjp: 0.6071907877922058s (var: 7.436695454998699e-07)
Results for model fcn_resnet on task vjp using Functorch: 0.6115708947181702s (var: 1.121692207561864e-06)
Results for model fcn_resnet on task vhp: 3.419469118118286s (var: 0.020633839070796967)
Failed model using Functorch: fcn_resnet, task: vhp, Error message:
	 CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 47.46 GiB total capacity; 45.62 GiB already allocated; 5.31 MiB free; 46.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Results for model fcn_resnet on task jvp: 2.5421929359436035s (var: 3.1765587209520163e-06)
Results for model fcn_resnet on task jvp using Functorch: 0.7628333568572998s (var: 1.4555752159139956e-07)
Results for model detr on task vjp: 0.19494840502738953s (var: 1.9122715457342565e-05)
Failed model using Functorch: detr, task: vjp, Error message:
	 Cannot access data pointer of Tensor that doesn't have storage
Results for model detr on task vhp: 1.1664292812347412s (var: 0.000948643428273499)
Failed model using Functorch: detr, task: vhp, Error message:
	 Cannot access data pointer of Tensor that doesn't have storage
Results for model detr on task jvp: 0.9990308880805969s (var: 1.0214127541985363e-05)
Failed model using Functorch: detr, task: jvp, Error message:
	 Trying to use forward AD with _cdist_forward that does not support it because it has not been implemented yet.
Please file an issue to PyTorch at https://github.com/pytorch/pytorch/issues/new?template=feature-request.yml so that we can prioritize its implementation.
Results for model ppl_simple_reg on task vjp: 0.0007535457843914628s (var: 6.024204690646684e-09)
Results for model ppl_simple_reg on task vjp using Functorch: 0.0016954183811321855s (var: 1.160151974488599e-08)
Results for model ppl_simple_reg on task vhp: 0.0011888503795489669s (var: 5.93119386937957e-10)
Results for model ppl_simple_reg on task vhp using Functorch: 0.0026826143730431795s (var: 1.6787025103326414e-08)
Results for model ppl_simple_reg on task jvp: 0.001067900680936873s (var: 7.409912128331086e-10)
Results for model ppl_simple_reg on task jvp using Functorch: 0.002065300941467285s (var: 9.710328185974504e-08)
Results for model ppl_simple_reg on task hvp: 0.001212477684020996s (var: 1.974137298077494e-09)
Results for model ppl_simple_reg on task hvp using Functorch: 0.00482442369684577s (var: 2.327668653379078e-07)
Results for model ppl_simple_reg on task jacobian: 0.0009108781814575195s (var: 3.489469158068914e-09)
Results for model ppl_simple_reg on task jacobian using Functorch: 0.0019866942893713713s (var: 1.938326299466553e-08)
Results for model ppl_simple_reg on task hessian: 0.005053090862929821s (var: 3.370298600202659e-07)
Results for model ppl_simple_reg on task hessian using Functorch: 0.006374978926032782s (var: 7.556796077778927e-08)
Results for model ppl_simple_reg on task hessian_fwdrev: 0.0036706924438476562s (var: 1.996075527088692e-09)
Results for model ppl_simple_reg on task hessian_fwdrev using Functorch: 0.0058908225037157536s (var: 7.548283775804521e-08)
Results for model ppl_simple_reg on task hessian_revrev: 0.0015769004821777344s (var: 1.5754418214442012e-08)
Results for model ppl_simple_reg on task hessian_revrev using Functorch: 0.0041002752259373665s (var: 6.713568723171193e-08)
Results for model ppl_simple_reg on task jacfwd: 0.0018048763740807772s (var: 2.7375660849315864e-08)
Results for model ppl_simple_reg on task jacfwd using Functorch: 0.002047991845756769s (var: 2.432247070416338e-09)
Results for model ppl_simple_reg on task jacrev: 0.0009733677143231034s (var: 1.0078769818733235e-08)
Results for model ppl_simple_reg on task jacrev using Functorch: 0.0021971464157104492s (var: 1.2729884701911942e-08)
Results for model ppl_robust_reg on task vjp: 0.005820560269057751s (var: 8.582588151284654e-08)
Results for model ppl_robust_reg on task vjp using Functorch: 0.00796132069081068s (var: 9.663100541956737e-09)
Results for model ppl_robust_reg on task vhp: 0.009825301356613636s (var: 2.0081762386325863e-07)
Results for model ppl_robust_reg on task vhp using Functorch: 0.014890861697494984s (var: 4.558066279969353e-07)
Results for model ppl_robust_reg on task jvp: 0.008297419175505638s (var: 2.9454400873873965e-07)
Results for model ppl_robust_reg on task jvp using Functorch: 0.008052706718444824s (var: 7.120377176761394e-08)
Results for model ppl_robust_reg on task hvp: 0.015414690598845482s (var: 7.42123745567369e-07)
Results for model ppl_robust_reg on task hvp using Functorch: 0.02699306048452854s (var: 1.4650488537881756e-06)
Results for model ppl_robust_reg on task jacobian: 0.006207776255905628s (var: 1.7068457225377642e-07)
Results for model ppl_robust_reg on task jacobian using Functorch: 0.009173822589218616s (var: 1.2214455580306094e-07)
Results for model ppl_robust_reg on task hessian: 0.04670915752649307s (var: 1.4299343092716299e-05)
Results for model ppl_robust_reg on task hessian using Functorch: 0.02337808534502983s (var: 3.0397418413485866e-06)
Results for model ppl_robust_reg on task hessian_fwdrev: 0.024229884147644043s (var: 2.0425247839739313e-06)
Results for model ppl_robust_reg on task hessian_fwdrev using Functorch: 0.022021746262907982s (var: 3.512146236062108e-07)
Results for model ppl_robust_reg on task hessian_revrev: 0.012355780228972435s (var: 7.090877147675201e-07)
Results for model ppl_robust_reg on task hessian_revrev using Functorch: 0.013960313983261585s (var: 6.326549737423193e-07)
Results for model ppl_robust_reg on task jacfwd: 0.008112502284348011s (var: 2.88503088086145e-08)
Results for model ppl_robust_reg on task jacfwd using Functorch: 0.008947920985519886s (var: 4.2070990247111695e-08)
Results for model ppl_robust_reg on task jacrev: 0.00635871896520257s (var: 1.3403841592207755e-07)
Results for model ppl_robust_reg on task jacrev using Functorch: 0.009123563766479492s (var: 2.677554675756255e-07)
Results for model wav2letter on task vjp: 0.02078995667397976s (var: 2.1110793113621185e-06)
Results for model wav2letter on task vjp using Functorch: 0.019202351570129395s (var: 9.210506135559626e-09)
Results for model wav2letter on task vhp: 0.05997290462255478s (var: 8.558587616391833e-09)
Results for model wav2letter on task vhp using Functorch: 0.06035261228680611s (var: 1.6448565842708263e-09)
Results for model wav2letter on task jvp: 0.04507789760828018s (var: 1.5771547401399744e-09)
Results for model wav2letter on task jvp using Functorch: 0.013057494536042213s (var: 3.804750292601966e-09)
Results for model deepspeech on task vjp: 0.3648746609687805s (var: 1.5359055396402255e-05)
Failed model using Functorch: deepspeech, task: vjp, Error message:
	 Cannot access storage of TensorWrapper
Results for model transformer on task vjp: 0.05496881157159805s (var: 1.242562319703211e-08)
Results for model transformer on task vjp using Functorch: 0.057835936546325684s (var: 2.6113376350167528e-08)
Results for model transformer on task vhp: 0.18313491344451904s (var: 7.226336151688884e-08)
Failed model using Functorch: transformer, task: vhp, Error message:
	 bad optional access
Results for model transformer on task jvp: 0.13924935460090637s (var: 1.6989159234981344e-07)
Failed model using Functorch: transformer, task: jvp, Error message:
	 Trying to use forward AD with embedding that does not support it because it has not been implemented yet.
Please file an issue to PyTorch at https://github.com/pytorch/pytorch/issues/new?template=feature-request.yml so that we can prioritize its implementation.
Results for model multiheadattn on task vjp: 0.0014708995586261153s (var: 3.710916729460223e-08)
Results for model multiheadattn on task vjp using Functorch: 0.002404856728389859s (var: 2.1910574687922235e-08)
Results for model multiheadattn on task vhp: 0.003382015274837613s (var: 5.3098595742540056e-08)
Results for model multiheadattn on task vhp using Functorch: 0.005340623669326305s (var: 5.897558708056749e-08)
Results for model multiheadattn on task jvp: 0.0027526854537427425s (var: 3.508620949332908e-08)
Results for model multiheadattn on task jvp using Functorch: 0.0022981404326856136s (var: 1.327894807445773e-07)
```

</details>

All functorch errors are reported in its repository.
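The two APIs being compared above differ in shape; functorch has since been upstreamed into core PyTorch as `torch.func`, so a minimal sketch of a single vjp comparison (assuming a recent PyTorch with `torch.func` available) looks like:

```python
import torch
from torch.autograd.functional import vjp as autograd_vjp
from torch.func import vjp as func_vjp  # functorch's API, now in core

def f(x):
    return (x ** 2).sum()  # R^n -> R

x = torch.randn(3)
v = torch.ones(())  # cotangent matching f's scalar output

# Classic autograd.functional API: one call returns (output, vjp value).
out_a, grad_a = autograd_vjp(f, x, v)

# functorch-style API: returns (output, vjp_fn); apply vjp_fn to the cotangent.
out_b, vjp_fn = func_vjp(f, x)
(grad_b,) = vjp_fn(v)

assert torch.allclose(grad_a, 2 * x) and torch.allclose(grad_a, grad_b)
```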

cc @zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75689
Approved by: https://github.com/zou3519
2022-04-22 14:04:26 +00:00
soulitzer
21c6de9fdc Extend autograd functional benchmarking to run vectorized tasks (#67045)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67045

To run: `python benchmarks/functional_autograd_benchmark/functional_autograd_benchmark.py --gpu -1 --model-filter=ppl_robust_reg --num-iter 100`

```
Results for model ppl_robust_reg on task vjp: 0.0012262486852705479s (var: 2.2107682351446556e-10)
Results for model ppl_robust_reg on task vhp: 0.002099371049553156s (var: 6.906406557760647e-10)
Results for model ppl_robust_reg on task jvp: 0.001860950025729835s (var: 1.1251884146634694e-10)
Results for model ppl_robust_reg on task hvp: 0.003481731517240405s (var: 2.2713633751614282e-10)
Results for model ppl_robust_reg on task jacobian: 0.0012128615053370595s (var: 1.3687526667638394e-09)
Results for model ppl_robust_reg on task hessian: 0.009885427542030811s (var: 9.366265096844018e-09)
Results for model ppl_robust_reg on task hessian_fwdrev: 0.005268776323646307s (var: 2.4293791422991262e-09)
Results for model ppl_robust_reg on task hessian_revrev: 0.002561321249231696s (var: 7.557877101938004e-10)
Results for model ppl_robust_reg on task jacfwd: 0.002619938924908638s (var: 5.109343503839625e-10)
Results for model ppl_robust_reg on task jacrev: 0.0013469004770740867s (var: 3.1857563254078514e-09)
```
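The mean/var pairs in this output come from repeated timing runs; a minimal pure-Python sketch of such a measurement loop (names and the choice of population variance are illustrative, not taken from the benchmark script):

```python
import time
from statistics import mean, pvariance

def time_task(fn, num_iter=100):
    # Warm up once, then time num_iter runs and report mean and variance,
    # matching the "...s (var: ...)" format printed above.
    fn()
    samples = []
    for _ in range(num_iter):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return mean(samples), pvariance(samples)

m, v = time_task(lambda: sum(i * i for i in range(1000)))
```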
Notes:
 - We go through the batched fallback for both
 - ppl_robust_reg takes 3 tensor inputs and returns a single scalar output
   - this means that the jacobian is equivalent to doing a vjp, so vmap would not help us
   - we expect jacfwd to be slower than jacrev
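The last two notes can be checked directly: for a scalar output the Jacobian is a single row, so reverse mode gets it in one vjp while forward mode needs one jvp per input element. A small sketch using `torch.func` (the upstreamed functorch API; the toy loss stands in for ppl_robust_reg):

```python
import torch
from torch.func import jacrev, jacfwd

def f(x):              # R^n -> R, a scalar loss like ppl_robust_reg's
    return (x ** 3).sum()

x = torch.randn(5)
# Both modes agree on the value; jacrev needs one backward pass,
# jacfwd needs x.numel() forward passes for a scalar output.
assert torch.allclose(jacrev(f)(x), jacfwd(f)(x))
assert torch.allclose(jacrev(f)(x), 3 * x ** 2)
```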

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D33265947

Pulled By: soulitzer

fbshipit-source-id: 14f537a1376dea7e5afbe0c8e97f94731479b018
2021-12-21 17:20:29 -08:00
lezcano
0974215c4d Prefer mT and mH over transpose(-2, -1) and transpose(-2, -1).conj() (#64181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64181

This PR replaces all calls to:
- `transpose(-2, -1)` or `transpose(-1, -2)` with `mT()` in C++ and `mT` in Python
- `conj().transpose(-2, -1)`, `transpose(-2, -1).conj()`, `conj().transpose(-1, -2)`, or `transpose(-1, -2).conj()` with `mH()` in C++ and `mH` in Python.

It also simplifies two pieces of code and fixes one bug where a pair
of parentheses was missing in the function `make_symmetric_matrices`.
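The equivalences behind the substitution, sketched on the Python side:

```python
import torch

a = torch.randn(2, 3, 4, dtype=torch.complex64)

# .mT is the batched matrix transpose: it swaps the last two dims ...
assert torch.equal(a.mT, a.transpose(-2, -1))
# ... and .mH is the conjugate (Hermitian) transpose.
assert torch.equal(a.mH, a.transpose(-2, -1).conj())
```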

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D31692896

Pulled By: anjali411

fbshipit-source-id: e9112c42343663d442dc5bd53ff2b492094b434a
2021-10-18 13:02:25 -07:00
Basil Hosmer
cab926b2c0 faster generate_square_subsequent_mask in nn.Transformer (#60631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60631

Per #48360, speed up `Transformer.generate_square_subsequent_mask`. The new impl is informally ~5x faster, though the absolute difference is probably small.

The PR includes Python and C++ versions, and also updates a couple of places where the previous impl had been copied around.
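A sketch of the single-kernel triu-based approach (the exact code may differ from the PR; the old impl built a boolean mask and applied two `masked_fill` calls):

```python
import torch

def square_subsequent_mask(sz: int) -> torch.Tensor:
    # Strictly-upper triangle is -inf (future positions masked out);
    # the diagonal and below are 0, so position i may attend to j <= i.
    return torch.triu(torch.full((sz, sz), float('-inf')), diagonal=1)

m = square_subsequent_mask(3)
assert m[0, 1] == float('-inf') and m[2, 0] == 0 and m[1, 1] == 0
```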

Test Plan: Imported from OSS

Reviewed By: jbschlosser, albanD

Differential Revision: D29356673

Pulled By: bhosmer

fbshipit-source-id: 4c062ba0ead61a445aeef451c78777bf0b3a631e
2021-06-25 16:07:01 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.
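The distinction the lint enforces, in a minimal sketch (the annotations are harmless at runtime; mypy is what reads the comments):

```python
from typing import List

# Unqualified ignore: silences *every* mypy error on the line (now linted against).
a: List[int] = "oops"  # type: ignore

# Qualified ignore: silences only the named error code, so unrelated
# new errors on this line still get reported.
b: List[int] = "oops"  # type: ignore[assignment]
```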

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
Ikko Ashimine
7e39a40300 Fix typo in torchvision_models.py (#53968)
Summary:
accross -> across

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53968

Reviewed By: jbschlosser

Differential Revision: D27035761

Pulled By: ngimel

fbshipit-source-id: 94fac6f2e27648e70652fd29f7800e60b211acd5
2021-03-15 11:02:06 -07:00
Fritz Obermeyer
093aca082e Enable distribution validation if __debug__ (#48743)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47123
Follows https://github.com/pyro-ppl/pyro/pull/2701

This turns on `Distribution` validation by default. The motivation is to favor beginners by providing helpful error messages. Advanced users focused on speed can disable validation by calling
```py
torch.distributions.Distribution.set_default_validate_args(False)
```
or by disabling individual distribution validation via `MyDistribution(..., validate_args=False)`.

In practice I have found many beginners forget or do not know about validation. Therefore I have [enabled it by default](https://github.com/pyro-ppl/pyro/pull/2701) in Pyro. I believe PyTorch could also benefit from this change. Indeed validation caught a number of bugs in `.icdf()` methods, in tests, and in PPL benchmarks, all of which have been fixed in this PR.
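A small sketch of what the new default catches, using the `Categorical.log_prob` case mentioned below (assuming default validation is on, as this PR makes it):

```python
import torch
from torch.distributions import Categorical

probs = torch.tensor([0.3, 0.7])

# With validation on (the new default), out-of-support values raise:
try:
    Categorical(probs).log_prob(torch.tensor(0.5))  # 0.5 is not in {0, 1}
    raised = False
except ValueError:
    raised = True
assert raised

# Advanced users can still opt out per distribution:
Categorical(probs, validate_args=False).log_prob(torch.tensor(0.0))
```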

## Release concerns
- This may slightly slow down some models. Concerned users may disable validation.
- This may cause new `ValueErrors` in models that rely on unsupported behavior, e.g. `Categorical.log_prob()` applied to continuous-valued tensors (only {0,1}-valued tensors are supported).

We should clearly note this change in release notes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48743

Reviewed By: heitorschueroff

Differential Revision: D25304247

Pulled By: neerajprad

fbshipit-source-id: 8d50f28441321ae691f848c55f71aa80cb356b41
2021-01-05 13:59:10 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
albanD
e08e93f946 Reland of benchmark code (#43428)
Summary:
Reland of the benchmark code that broke the slow tests because the GPUs were running out of memory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43428

Reviewed By: ngimel

Differential Revision: D23296136

Pulled By: albanD

fbshipit-source-id: 0002ae23dc82f401604e33d0905d6b9eedebc851
2020-08-24 13:27:26 -07:00
Alban Desmaison
74781ab5b8 Revert D23242101: [pytorch][PR] Implement first draft of autograd benchmark.
Test Plan: revert-hammer

Differential Revision:
D23242101 (c2511bdfa4)

Original commit changeset: a2b92d5a4341

fbshipit-source-id: bda562d15565f074b448022d180ec8f959c6ecc9
2020-08-21 12:22:57 -07:00
albanD
c2511bdfa4 Implement first draft of autograd benchmark. (#40586)
Summary:
It is quite a lot of code because I pulled some code from torchaudio and torchvision to avoid issues getting the latest versions with PyTorch built from source, since I can't build those libs from source (a dependency is missing for torchaudio).

The compare script generates a table as follows:
| model | task | speedup | mean (before) | var (before) | mean (after) | var (after) |
| -- | -- | -- | -- | -- | -- | -- |
| resnet18 | vjp | 1.021151844124464 | 1.5627719163894653 | 0.005164200905710459 | 1.5304011106491089 | 0.003979875706136227 |
| resnet18 | vhp | 0.9919114430761606 | 6.8089728355407715 | 0.019538333639502525 | 6.86449670791626 | 0.014775685034692287 |
| resnet18 | jvp | 0.9715963084255123 | 5.720699310302734 | 0.08197150379419327 | 5.887938499450684 | 0.018408503383398056 |
| ppl_simple_reg | vjp | 0.9529183269165618 | 0.000362396240234375 | 7.526952949810095e-10 | 0.00038030146970413625 | 7.726220357939795e-11 |
| ppl_simple_reg | vhp | 0.9317708619586977 | 0.00048058031825348735 | 5.035701855504726e-10 | 0.0005157709238119423 | 3.250243477137538e-11 |
| ppl_simple_reg | jvp | 0.8609755877018406 | 0.00045447348384186625 | 9.646707044286273e-11 | 0.0005278587341308594 | 1.4493808930815533e-10 |
| ppl_simple_reg | hvp | 0.9764100147808232 | 0.0005881547695025802 | 7.618464747949361e-10 | 0.0006023645401000977 | 6.370915461850757e-10 |
| ppl_simple_reg | jacobian | 1.0019173715134297 | 0.0003612995205912739 | 2.2979899233499523e-11 | 0.0003606081008911133 | 1.2609764794835332e-11 |
| ppl_simple_reg | hessian | 1.0358429970264393 | 0.00206911563873291 | 2.590938796842579e-09 | 0.0019975185859948397 | 2.8916853356264482e-09 |
| ppl_robust_reg | vjp | 1.0669910916521521 | 0.0017304659122601151 | 3.1047047155396967e-09 | 0.0016218185191974044 | 4.926861585374809e-09 |
| ppl_robust_reg | vhp | 1.0181130455462972 | 0.0029563189018517733 | 2.6359153082466946e-08 | 0.0029037236236035824 | 1.020585038702393e-08 |
| ppl_robust_reg | jvp | 0.9818360373406179 | 0.0026934861671179533 | 6.981357714153091e-09 | 0.00274331565015018 | 3.589908459389335e-08 |
| ppl_robust_reg | hvp | 1.0270848910527002 | 0.005576515104621649 | 3.2798087801211295e-08 | 0.005429458804428577 | 6.438724398094564e-08 |
| ppl_robust_reg | jacobian | 1.0543611284155785 | 0.00167675013653934 | 2.3236829349571053e-08 | 0.001590299652889371 | 1.2011492245278532e-08 |
| ppl_robust_reg | hessian | 1.0535378727082656 | 0.01643357239663601 | 1.8450685956850066e-06 | 0.015598463825881481 | 2.1876705602608126e-07 |
| wav2letter | vjp | 1.0060408105086573 | 0.3516994118690491 | 1.4463969819189515e-05 | 0.349587619304657 | 9.897866402752697e-05 |
| wav2letter | vhp | 0.9873655295086051 | 1.1196287870407104 | 0.00474404776468873 | 1.133955717086792 | 0.009759620763361454 |
| wav2letter | jvp | 0.9741820317882822 | 0.7888165712356567 | 0.0017476462526246905 | 0.8097219467163086 | 0.0018235758179798722 |
| transfo | vjp | 0.9883954031921641 | 2.8865864276885986 | 0.008410997688770294 | 2.9204773902893066 | 0.006901870481669903 |
| transfo | vhp | 1.0111290842971339 | 8.374398231506348 | 0.014904373325407505 | 8.282224655151367 | 0.04449500888586044 |
| transfo | jvp | 1.0080534543381963 | 6.293097972869873 | 0.03796082362532616 | 6.24282169342041 | 0.010179692879319191 |
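The speedup column is simply the ratio of the means, which can be checked against the first row of the table:

```python
# speedup = mean(before) / mean(after); a value > 1 means the new code is faster.
before, after = 1.5627719163894653, 1.5304011106491089   # resnet18 vjp row
speedup = before / after
assert abs(speedup - 1.021151844124464) < 1e-6
```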

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40586

Reviewed By: pbelevich

Differential Revision: D23242101

Pulled By: albanD

fbshipit-source-id: a2b92d5a4341fe1472711a685ca425ec257d6384
2020-08-21 07:36:26 -07:00