Commit Graph

27 Commits

Author SHA1 Message Date
Edward Z. Yang
f70844bec7 Enable UFMT on a bunch of low traffic Python files outside of main files (#106052)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106052
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-07-27 01:01:17 +00:00
ydwu4
6abb8c382c [export] add kwargs support for export. (#105337)
Solving #105242.

During export, the exported function's signature changes multiple times. Suppose we'd like to export f as shown in the following example:
```python
def f(arg1, arg2, kw1, kw2):
  pass

args = (arg1, arg2)
kwargs =  {"kw2":arg3, "kw1":arg4}

torch.export(f, args, kwargs)
```
The signature changes multiple times during the export process, in the following order:
1. **gm_torch_level = dynamo.export(f, *args, \*\*kwargs)**. In this step, we turn all kinds of parameters, such as **positional_only**, **var_positional**, **kw_only**, and **var_kwargs**, into **positional_or_kw**. It also preserves the positional and keyword argument names of the original function (i.e. f in this example) [here](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/export.py#L546C13-L546C27). The order of kwargs will be the **key order** of the kwargs dict (since Python 3.6, this is the insertion order of the keys) instead of the order in the original function signature, and this order is baked into the _orig_args variable of gm_torch_level's pytree info. So we'll have:
```python
def gm_torch_level(arg1, arg2, kw2, kw1)
```
This difference is acceptable as it's transparent to users of export.

2. **gm_aot_export = aot_export_module(gm_torch_level, pos_or_kw_args)**. In this step, we need to turn kwargs into positional args in the order that gm_torch_level expects, which is stored in _orig_args. The returned gm_aot_export has the graph signature of the flattened args:
```python
flat_args, _ = pytree.tree_flatten(pos_or_kw_args)
def gm_aot_export(*flat_args)
```

3. **exported_program(*args, \*\*kwargs)**. The exported artifact is exported_program, which is a wrapper over gm_aot_export and has the same calling convention as the original function "f". To do this, we need to 1. specialize the order of kwargs into pos_or_kw_args and 2. flatten the pos_or_kw_args into what gm_aot_export expects. We can combine the two steps into one with:
```python
_, in_spec = pytree.tree_flatten((args, kwargs))

# Then during exported_program.__call__(*args, **kwargs)
flat_args = fx_pytree.tree_flatten_spec((args, kwargs), in_spec)
```
Here, kwargs is treated as a normal pytree whose key order is preserved in in_spec.

Implementation-wise, we treat _orig_args in the dynamo-exported graph module as the single source of truth, and kwargs are ordered following it.
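
A minimal sketch of that ordering rule, using torch.utils._pytree directly (the values here are placeholders; export's internal helpers may differ):

```python
import torch.utils._pytree as pytree

# kwargs are flattened as an ordinary pytree, so the dict's insertion
# order -- not f's signature order -- determines the flat argument order.
args = (1, 2)
kwargs = {"kw2": 3, "kw1": 4}

flat_args, in_spec = pytree.tree_flatten((args, kwargs))
print(flat_args)  # [1, 2, 3, 4] -- kw2's value comes before kw1's
```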

Test plan:
See added tests in test_export.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105337
Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
2023-07-20 19:53:08 +00:00
Justin Chu
14d87bb5ff [BE] Enable ruff's UP rules and autoformat tools and scripts (#105428)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105428
Approved by: https://github.com/albanD, https://github.com/soulitzer, https://github.com/malfet
2023-07-19 01:24:44 +00:00
angelayi
828b275740 [exportdb] Setup website (#104288)
<img width="1109" alt="image" src="https://github.com/pytorch/pytorch/assets/10901756/e67ff8a9-adb1-466f-8285-fb7d3653d139">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104288
Approved by: https://github.com/zhxchen17
2023-07-01 01:03:56 +00:00
Sherlock Huang
a6ac922eab Rename Canonical Aten IR to Core Aten IR (#92904)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92904
Approved by: https://github.com/bdhirsh
2023-01-25 05:12:23 +00:00
Sherlock Huang
b4b8a56589 Doc for Canonical Aten and Prims IR (#90644)
as title.

Sample output: https://docs-preview.pytorch.org/90644/ir.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90644
Approved by: https://github.com/ezyang
2022-12-13 21:30:47 +00:00
BowenBao
0581331963 [ONNX] Document ONNX diagnostics (#88371)
Reference pages:
- Landing page: https://docs-preview.pytorch.org/88371/onnx_diagnostics.html
- Individual rule: https://docs-preview.pytorch.org/88371/generated/onnx_diagnostics_rules/POE0004%3Aoperator-supported-in-newer-opset-version.html

An initial PR to setup the document generation for ONNX diagnostics.
* Add document page for ONNX diagnostics.
* Add document generation for diagnostics rules from `rules.yaml`.
* Add dependency on `myst-parser` for markdown to rst parsing.
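
A rough sketch of the generation step (the file layout and rule fields below are assumptions, not the PR's actual schema):

```python
import os

import yaml

os.makedirs("generated/onnx_diagnostics_rules", exist_ok=True)

with open("rules.yaml") as f:
    rules = yaml.safe_load(f)["rules"]  # assumed top-level key

# One generated page per diagnostic rule; myst-parser handles the
# markdown -> rst conversion on the Sphinx side.
for rule in rules:
    path = f"generated/onnx_diagnostics_rules/{rule['id']}.md"
    with open(path, "w") as out:
        out.write(f"# {rule['id']}: {rule['name']}\n\n")
        out.write(rule.get("full_description", "") + "\n")
```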

More content to be added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88371
Approved by: https://github.com/abock, https://github.com/justinchuby, https://github.com/malfet, https://github.com/kit1980
2022-11-16 19:21:46 +00:00
Justin Chu
d6c2080eb4 [ONNX] Update ONNX documentation to include unsupported operators (#84496)
- Update ONNX documentation to include unsupported operators
- Include aten, quantized and other namespaces
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84496
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao, https://github.com/kit1980
2022-09-16 23:48:37 +00:00
Justin Chu
da33c93169 [ONNX] Clean up onnx_supported_ops (#79424)
- Hide the module from the `torch.onnx` public namespace because it is for internal use
- Remove unused variables
- Fix lint errors
- Reformat
- Create `onnx` folder under docs/scripts and add it to the onnx merge rule
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79424
Approved by: https://github.com/thiagocrepaldi, https://github.com/garymm, https://github.com/kit1980, https://github.com/malfet
2022-06-23 20:44:51 +00:00
Shawn Zhong
243dd7e74f Fix LeakyReLU image (#78508)
Fixes #56363, Fixes #78243

| [Before](https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html) | [After](https://docs-preview.pytorch.org/78508/generated/torch.nn.LeakyReLU.html)  |
| --- | --- |
| ![image](https://user-images.githubusercontent.com/6421097/171110542-4a9e8ff3-015d-4f3c-88da-171d17dad42e.png) | ![LeakyReLU](https://user-images.githubusercontent.com/6421097/171110505-ba4bca24-2138-47c3-9ebd-35b75a7fe351.png) |

- Plot `LeakyReLU` with `negative_slope=0.1` instead of `negative_slope=0.01`
- Changed the title from `"{function_name} activation function"` to the name returned by `_get_name()` (with parameter info). The full list is attached at the end, followed by a short sketch of how the titles are derived.
- Modernized the script and ran black on `docs/source/scripts/build_activation_images.py`. Apologies for the ugly diff.

```
ELU(alpha=1.0)
Hardshrink(0.5)
Hardtanh(min_val=-1.0, max_val=1.0)
Hardsigmoid()
Hardswish()
LeakyReLU(negative_slope=0.1)
LogSigmoid()
PReLU(num_parameters=1)
ReLU()
ReLU6()
RReLU(lower=0.125, upper=0.3333333333333333)
SELU()
SiLU()
Mish()
CELU(alpha=1.0)
GELU(approximate=none)
Sigmoid()
Softplus(beta=1, threshold=20)
Softshrink(0.5)
Softsign()
Tanh()
Tanhshrink()
```
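
The titles above are each module's repr, which combines `_get_name()` with `extra_repr()`; a minimal sketch:

```python
import torch.nn as nn

# str(module) joins _get_name() with extra_repr(), producing the
# parameterized titles listed above.
print(nn.LeakyReLU(negative_slope=0.1))  # LeakyReLU(negative_slope=0.1)
print(nn.Softplus())                     # Softplus(beta=1, threshold=20)
```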

cc @brianjo @mruberry @svekars @holly1238
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78508
Approved by: https://github.com/jbschlosser
2022-06-07 16:32:45 +00:00
Vasiliy Kuznetsov
c15fca1137 quant doc: improve rendered documentation for backend_config_dict
Summary:

This improves the documentation page for backend_config_dict to render
the configurations in a human-readable format, such as

```
{
  'pattern': torch.nn.modules.pooling.AdaptiveAvgPool1d,
  'dtype_configs': [
    {
      'input_dtype': torch.quint8,
      'output_dtype': torch.quint8,
    },
    {
      'input_dtype': torch.float16,
      'weight_dtype': torch.float16,
      'bias_dtype': torch.float16,
      'output_dtype': torch.float16,
    },
  ],
  'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
```

The results are also now sorted alphabetically by the normalized name of
the root op in the pattern.

A couple of utility functions are created to help with this. If in the future
we convert backend_config_dict to use typed objects, we can move this logic
to the objects at that time.
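
A hedged sketch of the rendering and sorting idea (the helper names below are stand-ins, not the PR's actual utilities):

```python
def pattern_sort_key(entry):
    # Normalize the root op of the pattern to a lowercase name so entries
    # sort alphabetically whether the pattern is a class, function, or str.
    pattern = entry["pattern"]
    name = pattern if isinstance(pattern, str) else getattr(pattern, "__name__", str(pattern))
    return name.lower()

def render_entry(entry, indent="  "):
    # Emit one config as indented, human-readable pseudo-dict text.
    lines = ["{", f"{indent}'pattern': {entry['pattern']},", f"{indent}'dtype_configs': ["]
    for cfg in entry["dtype_configs"]:
        lines.append(f"{indent * 2}{{")
        lines.extend(f"{indent * 3}'{k}': {v}," for k, v in cfg.items())
        lines.append(f"{indent * 2}}},")
    lines += [f"{indent}],", f"{indent}'observation_type': {entry['observation_type']},", "},"]
    return "\n".join(lines)
```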

Test plan:

```
cd docs
make html
cd build
python -m http.server
// renders correctly, example: https://gist.github.com/vkuzo/76adfc7c89e119c59813a733fa2cd56f
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77535

Approved by: https://github.com/andrewor14
2022-05-18 11:46:07 +00:00
Jerry Zhang
74454bdb46 [quant][fx] Move backend_config folder to torch.ao.quantization
Summary:
Following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md, we implemented
the backend configuration for the fbgemm/qnnpack backend. Currently it lives under the fx folder, but we'd like to use it for all the
different workflows, including eager, fx graph, and define-by-run quantization, so this PR moves it to the torch.ao.quantization
namespace where it can be shared by the different workflows.
This PR also moves some fx-specific utility functions to fx/backend_config_utils.py; some files are kept in the fx folder (quantize_handler.py and fuse_handler.py).

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestQuantizeFxModels
python test/test_quantization.py TestAOMigrationQuantization
python test/test_quantization.py TestAOMigrationQuantizationFx

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75823

Approved by: https://github.com/vkuzo
2022-04-19 15:38:57 +00:00
Thiago Crepaldi
89e79f844d Add list of supported ATen ops by ONNX converter into torch.onnx page
This PR introduces a new documentation page with a list of supported ATen operators by the ONNX converter.

When `make html` (or similar) is called, a python script will generate a temporary CSV file inside the doc build folder with a list of operators/opsets currently supported by the PyTorch ONNX exporter. That CSV is used by Sphinx to build an HTML table using the same theme as the rest of the documentation.

That page is linked to the existing `onnx.rst`, including its table of contents.
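
A minimal sketch of the build-time CSV step (the registry accessor below is a hypothetical stand-in for how the script actually cross-references the exporter's symbolics):

```python
import csv

def supported_aten_ops():
    # Hypothetical stand-in: the real script derives this by crossing
    # the ONNX symbolic registry with the ATen op list from the JIT API.
    return {"aten::add": "9, 13", "aten::relu": "9"}

with open("aten_ops_supported.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Operator", "Supported opsets"])
    for op, opsets in sorted(supported_aten_ops().items()):
        writer.writerow([op, opsets])
# Sphinx then renders the CSV via a csv-table directive in the docs.
```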

@BowenBao @shubhambhokare1 Feel free to add more details on how the script cross-references ONNX symbolics and the ATen operator list from the torch JIT API.

Below is the workflow for the changed pages:

The initial torch.onnx page was modified to add a link to the list of supported aten operators
![image](https://user-images.githubusercontent.com/5469809/159046387-c459bffc-c9b2-4fcb-8468-8181fdddf911.png)

The screen below highlights the text structure changes to the `ATen operators` section
![image](https://user-images.githubusercontent.com/5469809/159046730-ccd1e594-c8e6-4b8d-a9ec-8bf6ad58a435.png)

Finally the new page with the list of supported operators is shown below
![image](https://user-images.githubusercontent.com/5469809/159046872-0d99b769-8b95-4c2b-99a9-a8cfdd0b6ecf.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74397
Approved by: https://github.com/garymm, https://github.com/malfet
2022-04-07 00:05:44 +00:00
Vasiliy Kuznetsov
74b23b2066 quantization: autogenerate quantization backend configs for documentation (#75126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75126

Quantization has a high volume of configurations for how to quantize an
op for a reference model representation, which is useful for a backend's
lowering step. An example of this is

```
{'dtype_configs': [{'input_dtype': torch.quint8,
                    'output_dtype': torch.quint8}],
 'observation_type': <ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT: 0>,
 'pattern': <class 'torch.nn.modules.conv.ConvTranspose1d'>},

These configs are checked into master, and they are created with Python functions.
Therefore, there is no easy way for the user to see what the configs actually
are without running some Python code.

This PR is one approach to document these configs. Here is what this is doing:
1. during documentation build, write a text file of the configs
2. render that text file on a quantization page, with some additional context

In the future, this could be extended to autogenerate better looking tables
such as: op support per backend and dtype, op support per valid quantization settings per backend,
etc.
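
A sketch of step 1 under assumed names (the config-producing function is a stand-in; only the shape of the approach is from the PR):

```python
import pprint

def get_backend_config_dicts():
    # Stand-in for the Python functions that actually create the configs.
    return [{"pattern": "ConvTranspose1d",
             "dtype_configs": [{"input_dtype": "torch.quint8",
                                "output_dtype": "torch.quint8"}]}]

# Step 1: during the docs build, dump the configs to a text file that
# the quantization page then renders with added context (step 2).
with open("quantization_backend_configs.txt", "w") as f:
    for cfg in get_backend_config_dicts():
        f.write(pprint.pformat(cfg, width=80) + "\n\n")
```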

Test Plan:
```
cd docs
make html
cd html
python -m http.server 8000
// render http://[::]:8000/quantization-backend-configuration.html
// it renders correctly
```

Reviewed By: ejguan

Differential Revision: D35365461

Pulled By: vkuzo

fbshipit-source-id: d60f776ccb57da9db3d09550e4b27bd5e725635a
(cherry picked from commit 14865c0e23bc080120342c8f9278f0fae8eb8fbd)
2022-04-04 22:22:30 +00:00
Rodrigo Berriel
11ca641491 [docs] Add images to some activation functions (#65415)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/65368. See discussion in the issue.

cc mruberry SsnL jbschlosser soulitzer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65415

Reviewed By: soulitzer

Differential Revision: D31093303

Pulled By: albanD

fbshipit-source-id: 621c74c7a2aceee95e3d3b708c7f1a1d59e59b93
2021-09-22 11:05:29 -07:00
Rodrigo Berriel
f0ada4bd54 [docs] Remove .data from some docs (#65358)
Summary:
Related to https://github.com/pytorch/pytorch/issues/30987. Fix the following task:

- [ ] Remove the use of `.data` in all our internal code:
  - [ ] ...
  - [x] `docs/source/scripts/build_activation_images.py` and `docs/source/notes/extending.rst`

In `docs/source/scripts/build_activation_images.py`, I used `nn.init` because the snippet already assumes `nn` is available (the class inherits from `nn.Module`).
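
An illustrative before/after (reconstructed, not the exact diff from the PR):

```python
import torch.nn as nn

layer = nn.Linear(4, 4)

# Before: mutating the parameter through .data, bypassing autograd.
# layer.weight.data.fill_(0.5)

# After: nn.init operates in-place under torch.no_grad() instead.
nn.init.constant_(layer.weight, 0.5)
```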

cc albanD

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65358

Reviewed By: malfet

Differential Revision: D31061790

Pulled By: albanD

fbshipit-source-id: be936c2035f0bdd49986351026fe3e932a5b4032
2021-09-21 06:32:31 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Adnios
09a8f22bf9 Add mish activation function (#58648)
Summary:
See issue: https://github.com/pytorch/pytorch/issues/58375
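
Usage sketch of the op this commit adds (Mish(x) = x * tanh(softplus(x))):

```python
import torch

m = torch.nn.Mish()
out = m(torch.randn(4))
# functional form: torch.nn.functional.mish
```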

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4
2021-05-25 10:36:21 -07:00
Xiaomeng Yang
4ae832e106 Optimize SiLU (Swish) op in PyTorch (#42976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42976

Optimize SiLU (Swish) op in PyTorch.

Some benchmark results:

input = torch.rand(1024, 32768, dtype=torch.float, device="cpu")
forward: 221ms -> 133ms
backward: 600ms -> 170ms

input = torch.rand(1024, 32768, dtype=torch.double, device="cpu")
forward: 479ms -> 297ms
backward: 1438ms -> 387ms

input = torch.rand(8192, 32768, dtype=torch.float, device="cuda")
forward: 24.34ms -> 9.83ms
backward: 97.05ms -> 29.03ms

input = torch.rand(4096, 32768, dtype=torch.double, device="cuda")
forward: 44.24ms -> 30.15ms
backward: 126.21ms -> 49.68ms
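
A rough reproduction sketch of the CPU forward timing above (the PR used its own benchmark harness; this timing loop is an assumption):

```python
import time

import torch

x = torch.rand(1024, 32768, dtype=torch.float, device="cpu")

# Warm up once, then average a few iterations of the forward pass.
torch.nn.functional.silu(x)
start = time.perf_counter()
for _ in range(10):
    torch.nn.functional.silu(x)
print(f"forward: {(time.perf_counter() - start) / 10 * 1e3:.1f} ms")
```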

Test Plan: buck test mode/dev-nosan //caffe2/test:nn -- "SiLU"

Reviewed By: houseroad

Differential Revision: D23093593

fbshipit-source-id: 1ba7b95d5926c4527216ed211a5ff1cefa3d3bfd
2020-08-16 13:21:57 -07:00
Xiaomeng Yang
2460dced8f Add torch.nn.GELU for GELU activation (#28944)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28944

Add torch.nn.GELU for GELU activation
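
Usage sketch; GELU(x) = x * Φ(x), where Φ is the standard Gaussian CDF:

```python
import torch

m = torch.nn.GELU()
y = m(torch.randn(3))
# functional form: torch.nn.functional.gelu
```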

Test Plan: buck test mode/dev-nosan //caffe2/test:nn -- "GELU"

Reviewed By: hl475, houseroad

Differential Revision: D18240946

fbshipit-source-id: 6284b30def9bd4c12bf7fb2ed08b1b2f0310bb78
2019-11-03 21:55:05 -08:00
Xiang Gao
6fc75eadf0 Add CELU activation to pytorch (#8551)
Summary:
Also fuse input scale multiplication into ELU

Paper:
https://arxiv.org/pdf/1704.07483.pdf
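
Usage sketch; from the paper, CELU(x) = max(0, x) + min(0, α·(exp(x/α) − 1)):

```python
import torch

m = torch.nn.CELU(alpha=1.0)
y = m(torch.randn(3))
# per this commit, CELU reuses the ELU kernel with the
# input-scale multiplication fused in
```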
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8551

Differential Revision: D9088477

Pulled By: SsnL

fbshipit-source-id: 877771bee251b27154058f2b67d747c9812c696b
2018-08-01 07:54:44 -07:00
vishwakftw
49f88ac956 Add grid lines for activation images, fixes #9130 (#9134)
Summary:
1. Add dashed light blue line for asymptotes.
2. RReLU was missing the activation image.
3. `make clean` in docs will remove the activation images too.

Sample image:
![image](https://user-images.githubusercontent.com/23639302/42224142-5d66bd0a-7ea7-11e8-8b0a-26918df12f7c.png)
Closes https://github.com/pytorch/pytorch/pull/9134

Differential Revision: D8726880

Pulled By: ezyang

fbshipit-source-id: 35f00ee08a34864ec15ffd6228097a9efbc8dd62
2018-07-03 19:10:00 -07:00
Tongzhou Wang
e0f3e5dc77 fix activation images not showing up on official website (#6367) 2018-04-07 11:06:24 -04:00
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs

* Add more changes
1. Modify all torch.Tensor wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clear up prints in the build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
Adam Paszke
b1dec4a74f
Fix doc-push (#5494) 2018-03-01 17:37:30 +01:00
Piotr Mitros
7b33ef4cff Documentation cleanup for activation functions (#5457) 2018-03-01 14:53:11 +01:00