Commit Graph

37 Commits

Author SHA1 Message Date
Xuehai Pan
c73a92fbf5 [BE][CI] bump ruff to 0.9.2: multiline assert statements (#144546)
Reference: https://docs.astral.sh/ruff/formatter/black/#assert-statements

> Unlike Black, Ruff prefers breaking the message over breaking the assertion, similar to how both Ruff and Black prefer breaking the assignment value over breaking the assignment target:
>
> ```python
> # Input
> assert (
>     len(policy_types) >= priority + num_duplicates
> ), f"This tests needs at least {priority+num_duplicates} many types."
>
>
> # Black
> assert (
>     len(policy_types) >= priority + num_duplicates
> ), f"This tests needs at least {priority+num_duplicates} many types."
>
> # Ruff
> assert len(policy_types) >= priority + num_duplicates, (
>     f"This tests needs at least {priority + num_duplicates} many types."
> )
> ```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144546
Approved by: https://github.com/malfet
2025-02-27 20:46:16 +00:00
cyy
db81a3f31c [TorchGen] remove remove_non_owning_ref_types from valuetype_type (#142449)
It is not used
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142449
Approved by: https://github.com/ezyang
2024-12-12 00:15:44 +00:00
cyy
e5f08c0cbf [TorchGen] Remove cpp_type_registration_declarations (#142452)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/142452
Approved by: https://github.com/ezyang
2024-12-11 19:01:36 +00:00
cyy
aa95618268 [2/N] Apply py39 ruff fixes (#141938)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141938
Approved by: https://github.com/ezyang
2024-12-05 06:26:06 +00:00
cyy
55250b324d [1/N] Apply py39 ruff fixes (#138578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138578
Approved by: https://github.com/Skylion007
2024-12-02 21:46:18 +00:00
cyy
7624d625c0 [Reland][7/N] Fix Wextra-semi warning (#140342)
Reland of #140225 to fix a change in FBCODE_CAFFE2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140342
Approved by: https://github.com/kit1980
2024-11-12 18:55:31 +00:00
PyTorch MergeBot
dbb55b448b Revert "[7/N] Fix Wextra-semi warning (#140225)"
This reverts commit ffb979032d.

Reverted https://github.com/pytorch/pytorch/pull/140225 on behalf of https://github.com/kit1980 due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/140225#issuecomment-2469312229))
2024-11-12 00:02:06 +00:00
cyy
ffb979032d [7/N] Fix Wextra-semi warning (#140225)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140225
Approved by: https://github.com/ezyang
2024-11-10 14:28:10 +00:00
Richard Barnes
068f7e7a78 torch::optional -> std::optional (#138987)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138987
Approved by: https://github.com/Skylion007
2024-10-28 19:09:46 +00:00
Xuehai Pan
267f82b860 [BE] Format .ci/ / .github/ / benchmarks/ / functorch/ / tools/ / torchgen/ with ruff format (#132577)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132577
Approved by: https://github.com/malfet
2024-10-11 18:30:26 +00:00
Manuel Candales
caa04e0cae [ET] codegen: bool array as array ref (#134886)
Test Plan: CI

Differential Revision: D62046959

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134886
Approved by: https://github.com/larryliu0820
2024-09-01 01:33:43 +00:00
cyy
b9cb1abf65 [12/N] Use std::optional (#132361)
Follows #132396

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132361
Approved by: https://github.com/eqy
2024-08-02 13:46:46 +00:00
Xuehai Pan
f6838d521a [BE][Easy][5/19] enforce style for empty lines in import segments in tools/ and torchgen/ (#129756)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
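
For reference, a hypothetical snippet showing the enforced import-segment style (the specific imports are illustrative, not taken from the diff): the `__future__`, standard-library, third-party, and first-party segments are each separated by exactly one blank line.

```python
from __future__ import annotations

# Standard-library segment
import os
import sys

# Third-party segment
import yaml

# First-party segment
from torchgen.model import NativeFunction
from torchgen.utils import FileManager
```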

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129756
Approved by: https://github.com/ezyang
2024-07-17 06:44:35 +00:00
Xuehai Pan
9120992c72 [BE][Easy] enable postponed annotations in torchgen (#129376)
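For context, a minimal sketch of what postponed annotations (PEP 563) change; the class below is hypothetical and not taken from torchgen:

```python
from __future__ import annotations  # annotations are stored as strings, not evaluated at runtime


class KernelIndex:  # hypothetical class, for illustration only
    # Without the __future__ import, this forward reference to the enclosing
    # class would need to be quoted ("KernelIndex"); with postponed
    # annotations it can be written directly.
    def merge(self, other: KernelIndex) -> KernelIndex:
        return self
```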
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129376
Approved by: https://github.com/ezyang
ghstack dependencies: #129375
2024-06-29 09:23:39 +00:00
PyTorch MergeBot
6063bb9d45 Revert "[BE][Easy] enable postponed annotations in torchgen (#129376)"
This reverts commit 494057d6d4.

Reverted https://github.com/pytorch/pytorch/pull/129376 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I need to revert to cleanly revert https://github.com/pytorch/pytorch/pull/129374, please do a rebase and reland this ([comment](https://github.com/pytorch/pytorch/pull/129375#issuecomment-2197800541))
2024-06-29 00:44:25 +00:00
Xuehai Pan
494057d6d4 [BE][Easy] enable postponed annotations in torchgen (#129376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129376
Approved by: https://github.com/ezyang
ghstack dependencies: #129375
2024-06-28 15:37:57 +00:00
Xuehai Pan
b697808056 [BE][Easy] eliminate relative import in torchgen (#128872)
Fix generated by:

```bash
ruff check --config 'lint.flake8-tidy-imports.ban-relative-imports="all"' --fix --select=TID $(fd '.pyi?$' torchgen)
```
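
As a hedged illustration of what the TID fixes look like (the torchgen module and symbol names are real, but this exact file is hypothetical):

```python
# Before: relative imports, flagged by flake8-tidy-imports (TID252)
#   from .model import NativeFunction
#   from .utils import FileManager

# After: absolute imports, as the --fix run rewrites them
from torchgen.model import NativeFunction
from torchgen.utils import FileManager
```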

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128872
Approved by: https://github.com/zou3519
2024-06-21 14:11:46 +00:00
Aaron Gokaslan
c5fafe9f48 [BE]: TRY002 - Ban raising vanilla exceptions (#124570)
Adds a ruff lint rule to ban raising raw exceptions. Most of these should at the very least be runtime errors, value errors, type errors, or some other more specific error. There are hundreds of instances of these bad exception types already in the codebase, so I have noqa'd most of them. Hopefully this error code will get committers to rethink what exception type they should raise when they submit a PR.

I also encourage people to gradually fix the existing noqas that have been added so they can be removed over time and our exception typing can be improved.
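
A small illustration of the rule (not taken from the codebase):

```python
def load_kernel(name: str, registry: dict[str, object]) -> object:
    if name not in registry:
        # Flagged by TRY002: a vanilla Exception hides intent and forces
        # callers to catch everything.
        #     raise Exception(f"unknown kernel {name}")  # noqa: TRY002
        # Preferred: a specific built-in (or custom) exception type.
        raise KeyError(f"unknown kernel {name}")
    return registry[name]
```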

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124570
Approved by: https://github.com/ezyang
2024-04-21 22:26:40 +00:00
cyy
fb90b4d4b2 [TorchGen] Use std::optional in generated code (#121454)
This PR changes TorchGen to generate std::optional.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121454
Approved by: https://github.com/ezyang
2024-03-29 14:11:09 +00:00
Mengwei Liu
898554a3a3 [torchgen] Add logic in custom ops to return empty tensor (#114143)
Summary: Add two pieces of logic:

1. If the custom op returns a `Tensor` but doesn't take an out tensor as input, return an empty tensor.
2. If the custom op returns more than one Tensor and the number of out tensors doesn't match the number of returned Tensors, return a tuple of empty tensors (see the sketch below).
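
A rough sketch of the fallback described above, in plain Python; the helper name and the emitted expressions are hypothetical, not the actual torchgen codegen:

```python
from typing import Optional


def fallback_return_expr(num_returned_tensors: int, num_out_tensors: int) -> Optional[str]:
    """Pick the C++ return expression for a custom op wrapper, per the two rules above."""
    if num_returned_tensors == 1 and num_out_tensors == 0:
        # Rule 1: single Tensor return, no out tensor input -> empty tensor.
        return "Tensor()"
    if num_returned_tensors > 1 and num_out_tensors != num_returned_tensors:
        # Rule 2: multiple Tensor returns, mismatched out count -> tuple of empty tensors.
        return "std::make_tuple(" + ", ".join(["Tensor()"] * num_returned_tensors) + ")"
    # Otherwise the normal path (returning the out tensors) applies.
    return None
```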

Test Plan: Rely on new unit tests

Differential Revision: D51471651

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114143
Approved by: https://github.com/cccclai
2023-12-08 17:03:44 +00:00
Kazuaki Ishizaki
ac48c11ab7 Fix typo under torchgen directory (#111154)
This PR fixes typos in comments and messages in files under the `torchgen` directory.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111154
Approved by: https://github.com/rajveer43, https://github.com/Skylion007
2023-10-13 16:43:46 +00:00
Justin Chu
964d29f312 [BE] Enable ruff's UP rules and autoformat torchgen/ (#105423)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105423
Approved by: https://github.com/Skylion007
2023-07-18 06:44:20 +00:00
Dave Bort
d06e1df1aa [torchgen] Rename executorch's RuntimeContext to KernelRuntimeContext (#104892)
Rename the context type to match changes in executorch.

Differential Revision: [D46977359](https://our.internmc.facebook.com/intern/diff/D46977359/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104892
Approved by: https://github.com/larryliu0820
2023-07-14 21:15:50 +00:00
Jack Khuu
18dacf7e79 [Specialized Kernel] Update yaml syntax to use kernel instead of dispatch (#104070)
Based on this [code search](https://fburl.com/code/gjcnw8ly) (*.yaml with `dispatch: CPU:`), update all files found to use

```
kernels:
    - arg_meta: None
      kernel_name:
```
instead of
```
dispatch:
    CPU:
```
---
## Code changes:

- `fbcode/executorch/codegen/tools/gen_oplist.py`
  - Strip ET-specific fields prior to calling parse_native_yaml_struct
---
## Files edited that are not `*functions.yaml` or `custom_ops.yaml`

- fbcode/executorch/kernels/optimized/optimized.yaml
- fbcode/executorch/kernels/quantized/quantized.yaml
- fbcode/executorch/kernels/test/custom_kernel_example/my_functions.yaml

---
## Found Files that were not edited

**Dispatched to more than just CPU**
- fbcode/caffe2/aten/src/ATen/native/native_functions.yaml
- xplat/caffe2/aten/src/ATen/native/native_functions.yaml
- xros/third-party/caffe2/caffe2/aten/src/ATen/native/native_functions.yaml

**Grouped ops.yaml path**
- fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/ops.yaml

---
**Design Doc:** https://docs.google.com/document/d/1gq4Wz2R6verKJ2EFseLyPdAF0wqomnCrVDDJpRkYsRw/edit?kh_source=GDOCS#heading=h.8raqyft9y50

Differential Revision: [D46952067](https://our.internmc.facebook.com/intern/diff/D46952067/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D46952067/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104070
Approved by: https://github.com/larryliu0820
2023-06-27 09:53:20 +00:00
Jack Khuu
d1c367470b [Specialized Kernel] Remove requirement for type_alias and dim_order_alias to be present (#104006)
These fields are not required when the provided kernels do not use aliases (e.g., when only a default kernel is specified).

Differential Revision: [D46916099](https://our.internmc.facebook.com/intern/diff/D46916099/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104006
Approved by: https://github.com/larryliu0820
2023-06-23 16:49:57 +00:00
Jack Khuu
e9674d146c [Specialized Kernel] Propagate Specialized Kernel Support through ComputeCodegenUnboxedKernels (#103113)
Updating ComputeCodegenUnboxedKernels to accept and write out kernel information to RegisterCodegenUnboxedKernels.cpp

Differential Revision: [D46486195](https://our.internmc.facebook.com/intern/diff/D46486195/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103113
Approved by: https://github.com/larryliu0820, https://github.com/kirklandsign
2023-06-14 10:18:16 +00:00
Jack Khuu
d0c0e13b69 [Specialized Kernel] Translate Kernel Assignment Logic from function.yaml to native_functions.yaml (#102576)
Updating `gen_executorch.translate_native_yaml()` to translate kernel assignments when converting `functions.yaml` to `native_functions.yaml`
---
Functions.yaml format:
```
- func: add.out
  type_alias:
    T0: [<Type>, <Type>]
    T1: [<Type>]
  dim_order_alias:
    D0: [0, 1, 2, 3]
    D1: [0, 3, 2, 1]
  kernels:
    - arg_meta: null
      kernel_name: default_impl
    - arg_meta:
        self: [T0, D0]
        other: [T0, D0]
        out: [T0, D0]
      kernel_name: test_impl
```

native_functions.yaml format
```
func: add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)
  kernel:
    default: default_impl
    v<Version>/<TYPE Enum>;<DIM Order>|<TYPE Enum>;<DIM Order>|<TYPE Enum>;<DIM Order>: test_impl
```
Example: **'v1/6;0,1,2,3|3;0,1,2,3|6;0,1,2,3' : 'test_impl'**
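
A rough sketch of how such a key string could be assembled from per-argument (dtype enum, dim order) pairs; the function is illustrative, not the actual torchgen implementation:

```python
def make_kernel_key(arg_specs: list[tuple[int, list[int]]], version: int = 1) -> str:
    # Each specialized argument contributes "<TYPE Enum>;<DIM Order>"; segments are
    # joined with "|" and prefixed with "v<Version>/".
    segments = [f"{dtype};{','.join(str(d) for d in dims)}" for dtype, dims in arg_specs]
    return f"v{version}/" + "|".join(segments)


# Reproduces the example key above for (self, other, out):
assert make_kernel_key(
    [(6, [0, 1, 2, 3]), (3, [0, 1, 2, 3]), (6, [0, 1, 2, 3])]
) == "v1/6;0,1,2,3|3;0,1,2,3|6;0,1,2,3"
```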

## Note:
- If a "kernels" field is not present in functions.yaml (as it currently is), the output is unaffected
---
Design Doc: https://docs.google.com/document/d/1gq4Wz2R6verKJ2EFseLyPdAF0wqomnCrVDDJpRkYsRw/edit?kh_source=GDOCS#

Differential Revision: [D45971107](https://our.internmc.facebook.com/intern/diff/D45971107/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102576
Approved by: https://github.com/larryliu0820
2023-06-08 23:42:24 +00:00
Mengwei Liu
eebe0ee141 [Executorch][codegen] Add ETKernelIndex for aggregating all kernels for kernel (#102874)
Summary:
Add ETKernelIndex for aggregating all kernels for kernel keys, and change codegen to take ETKernelIndex.

We are adding support for dtype and dim order specialized kernel registration. This requires us to reorganize `BackendIndex` (which is a `Dict[DispatchKey, Dict[OperatorName, BackendMetadata]]`) to be `Dict[OperatorName, Dict[ETKernelKey, BackendMetadata]]`. This PR adds new data structures in order to support this change:

* `ETKernelKey` to retrieve a certain kernel from the registry.
* `ETKernelIndex`, the dictionary from operator name to kernel key to kernel mapping.

Note that the codegen logic is not changed yet; we need subsequent diffs to actually generate code for different kernel keys.
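
In Python type-hint form, the reorganization looks roughly like this (the alias names below are illustrative):

```python
from typing import Dict

# Old shape: BackendIndex, keyed by dispatch key and then operator name.
BackendIndexLike = Dict["DispatchKey", Dict["OperatorName", "BackendMetadata"]]

# New shape: ETKernelIndex, keyed by operator name and then ET kernel key
# (dtype / dim-order specialization), so one operator can map to many kernels.
ETKernelIndexLike = Dict["OperatorName", Dict["ETKernelKey", "BackendMetadata"]]
```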

Test Plan: Added tests

Reviewed By: Jack-Khuu

Differential Revision: D46407096

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102874
Approved by: https://github.com/Jack-Khuu, https://github.com/kirklandsign
2023-06-03 17:23:42 +00:00
Nikita Shulga
fb0729054b Revert "[Executorch][codegen] Add ETKernelIndex for aggregating all kernels for kernel (#102565)"
This reverts commit 019c38624c /
https://github.com/pytorch/pytorch/pull/102565 as it breaks
ExecutorchBuilds.
2023-06-01 12:35:23 -07:00
Larry Liu
019c38624c [Executorch][codegen] Add ETKernelIndex for aggregating all kernels for kernel (#102565)
Add ETKernelIndex for aggregating all kernels for kernel keys, and change codegen to take ETKernelIndex.

We are adding support for dtype and dim order specialized kernel registration. This requires us to reorganize `BackendIndex` (which is a `Dict[DispatchKey, Dict[OperatorName, BackendMetadata]]`) to be `Dict[OperatorName, Dict[ETKernelKey, BackendMetadata]]`. This PR adds new data structures in order to support this change:

* `ETKernelKey` to retrieve a certain kernel from the registry.
* `ETKernelIndex`, the dictionary from operator name to kernel key to kernel mapping.

Note that the codegen logic is not changed yet; we need subsequent diffs to actually generate code for different kernel keys.

Differential Revision: [D46206339](https://our.internmc.facebook.com/intern/diff/D46206339/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102565
Approved by: https://github.com/Jack-Khuu
2023-05-31 09:41:36 +00:00
Mengwei Liu
41865bd8ed [executorch] Add RuntimeContext to generated C++ API Signature (#94570)
Summary:
Pass the runtime context all the way down to the kernel level.

RegisterCodegenUnboxedKernels.cpp:

```
static Operator operators_to_register[] = {
    Operator(
        "aten::add.out",
        [](torch::executor::RuntimeContext & context, EValue** stack) {
            EValue& self = *stack[0];
            EValue& other = *stack[1];
            EValue& alpha = *stack[2];
            EValue& out = *stack[3];
            const torch::executor::Tensor & self_base = self.to<torch::executor::Tensor>();
            const torch::executor::Tensor & other_base = other.to<torch::executor::Tensor>();
            const torch::executor::Scalar & alpha_base = alpha.to<torch::executor::Scalar>();
            torch::executor::Tensor & out_base = out.to<torch::executor::Tensor>();

            EXECUTORCH_SCOPE_PROF("native_call_add.out");
            torch::executor::aten::add_outf(context, self_base, other_base, alpha_base, out_base);
        }
    ),
};
```

Functions.h
```

// aten::add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)
TORCH_API inline at::Tensor & add_outf(torch::executor::RuntimeContext & context, const at::Tensor & self, const at::Tensor & other, const at::Scalar & alpha, at::Tensor & out) {
    return torch::executor::native::add_out(self, other, alpha, out);
}

```

Test Plan: TBD

Differential Revision: D41325633

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94570
Approved by: https://github.com/cccclai
2023-02-16 02:43:18 +00:00
Larry Liu
7568484d54 [torchgen] Add CI job to cover custom ops registration for Executorch (#91291)
As titled. To register a custom op into Executorch, we need:

* `custom_ops.yaml`, defines the operator schema and the corresponding native function.
* `custom_ops.cpp`, defines the kernel.
* `RegisterDispatchKeyCustomOps.cpp`, a template to register operator into PyTorch.

Added a new test for custom ops. The custom op `custom::add_3.out` takes 3 tensors and adds them together. The test makes sure it is registered correctly and then verifies the outcome is correct.

Differential Revision: [D42204263](https://our.internmc.facebook.com/intern/diff/D42204263/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91291
Approved by: https://github.com/ezyang
2023-01-14 02:30:54 +00:00
Larry Liu
679da8bd89 [torchgen] Move Executorch custom ops logic into torchgen (#90099)
## Logic to handle custom ops
We generate files for custom ops, so that they can be registered into PyTorch.

Generated files:
* `Register{dispatch_key}CustomOps.cpp` (dispatch_key = CPU), it's basically the same as vanilla PyTorch `RegisterCPU.cpp`. The only difference is that we bind to native functions directly.
* `Register{dispatch_key}Stub.cpp` (dispatch_key = CPU), register placeholder kernels for custom ops. Only used when there's no custom op kernel available.

As an example:
```cpp
namespace {

at::Tensor & wrapper_out_unsqueeze_out(const at::Tensor & self, int64_t dim, at::Tensor & out) {
  // No device check
  // DeviceGuard omitted
  return torch::executor::native::unsqueeze_out(self, dim, out);
}

} // anonymous namespace

TORCH_LIBRARY_IMPL(aten, CPU, m) {
  m.impl("unsqueeze.out", TORCH_FN(wrapper_out_unsqueeze_out));
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90099
Approved by: https://github.com/ezyang
2022-12-19 21:58:43 +00:00
Larry Liu
ca52f63fc0 [torchgen] Move Executorch unboxing logic into torchgen (#90098)
This PR adds `unboxing.py`, which converts an `EValue` (similar to `IValue`) to its corresponding C++ type, based on the `ExecutorchCppSignature`.

Added unit tests in `test_executorch_unboxing.py`. Note that this unboxing logic should work for both ATen types and Executorch types, hence the unit tests are parametrized.
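
As a purely illustrative sketch (not the actual `unboxing.py` API), the core idea is mapping each argument to the `EValue` accessor expression emitted in generated code, matching the `.to<...>()` calls in the RuntimeContext example above:

```python
def unboxing_expr(cpp_type: str, evalue_expr: str) -> str:
    # Hypothetical helper: emit the C++ expression that converts an EValue
    # into the concrete type a kernel expects, e.g.
    #   unboxing_expr("torch::executor::Tensor", "*stack[0]")
    #   -> "(*stack[0]).to<torch::executor::Tensor>()"
    return f"({evalue_expr}).to<{cpp_type}>()"
```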

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90098
Approved by: https://github.com/ezyang
2022-12-19 21:58:43 +00:00
Larry Liu
f3393b7ea7 [torchgen] Introduce Executorch types and signatures (#90781)
Retry of #90591, which is a retry of #89595. The earlier attempt was reverted because a dependency PR broke internal fbcode.

## Forked BaseCppType
Created a module for Executorch: `torchgen.executorch`.

## In `torchgen.executorch.api.types.types`:

* Define `BaseCppType` with `torch::executor` namespace.
## In `torchgen.executorch.api.et_cpp`:

* Help generate `NamedCType` for `ExecutorchCppSignature` arguments.
## In `torchgen.executorch.api.types.signatures`:

* Define the signature using these types. (`ExecutorchCppSignature`)
## In `torchgen.executorch.api.types.__init__`:

* Suppress flake8 error for `import *`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90781
Approved by: https://github.com/ezyang
2022-12-14 20:13:04 +00:00
PyTorch MergeBot
b3e6a6dc0b Revert "[torchgen] Introduce Executorch types and signatures (#90591)"
This reverts commit ddf00c803b.

Reverted https://github.com/pytorch/pytorch/pull/90591 on behalf of https://github.com/seemethere due to Part of a stack that causes internal failures, see https://www.internalfb.com/intern/sandcastle/job/4503600464398605/insights
2022-12-13 03:36:31 +00:00
Larry Liu
ddf00c803b [torchgen] Introduce Executorch types and signatures (#90591)
Retry of #89595. Accidentally closed.

## Forked `BaseCppType`

Created a module for Executorch: `torchgen.executorch`.

In `torchgen.executorch.api.types.types`:
* Define `BaseCppType` with `torch::executor` namespace.

In `torchgen.executorch.api.et_cpp`:
* Help generate `NamedCType` for `ExecutorchCppSignature` arguments.

In `torchgen.executorch.api.types.signatures`:
* Define the signature using these types. (`ExecutorchCppSignature`)

In `torchgen.executorch.api.types.__init__`:
* Suppress flake8 error for `import *`.

Differential Revision: [D41501836](https://our.internmc.facebook.com/intern/diff/D41501836/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90591
Approved by: https://github.com/iseeyuan
2022-12-10 04:34:02 +00:00