Added check for unsupported dispatch key in codegen (#67961)

Summary:
Added a check that the dispatch keys listed in native_functions.yaml are part of the fixed set of supported dispatch keys; if one is not, codegen signals an error. I also removed two dispatch keys (SparseHIP, SparseXPU) from the copy_ function schema, because they are not supported.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67961

Test Plan:
For example, this function schema in native_functions.yaml
```
- func: native_norm(Tensor self, Scalar p=2) -> Tensor
  dispatch:
    SparseCPU, SparseCUDA, SparseHIP: norm_sparse
```
now generates this error during codegen: `AssertionError: SparseHIP is not a supported dispatch key.`
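
For illustration, the same membership check can be sketched outside the codegen. `SUPPORTED_KEYS` and `check_dispatch_entry` below are hypothetical names, not part of the real tools/codegen API; the key set and the assertion message mirror the ones added by this change.

```python
# Hypothetical standalone sketch of the new check (not the real codegen API):
# every key named in a dispatch entry must belong to a fixed supported set.
SUPPORTED_KEYS = {
    "CPU", "SparseCPU", "SparseCsrCPU", "MkldnnCPU",
    "CUDA", "SparseCUDA", "SparseCsrCUDA",
    "QuantizedCPU", "QuantizedCUDA",
    "CompositeImplicitAutograd", "CompositeExplicitAutograd",
    "Meta", "ZeroTensor",
}

def check_dispatch_entry(entry: str, kernel: str) -> None:
    # Dispatch entries are comma-separated key lists, e.g.
    # "SparseCPU, SparseCUDA, SparseHIP".
    for key in (k.strip() for k in entry.split(",")):
        assert key in SUPPORTED_KEYS, (
            f"Dispatch key {key} of kernel {kernel} is not a supported dispatch key."
        )

try:
    check_dispatch_entry("SparseCPU, SparseCUDA, SparseHIP", "norm_sparse")
except AssertionError as err:
    print(err)  # Dispatch key SparseHIP of kernel norm_sparse is not a supported dispatch key.
```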

Fixes https://github.com/pytorch/pytorch/issues/66190

Reviewed By: albanD

Differential Revision: D34327853

Pulled By: ezyang

fbshipit-source-id: 6959d14a7752aefd025baa482d56547b4ed69b4c
Author: francescocastelli (2022-02-22 14:09:18 -08:00), committed by Facebook GitHub Bot
parent d43b5a7ed6
commit 26bea380af
3 changed files with 24 additions and 18 deletions


@@ -1366,7 +1366,7 @@
   device_guard: False
   dispatch:
     MkldnnCPU: copy_mkldnn_
-    SparseCPU, SparseCUDA, SparseHIP: copy_sparse_wrapper_
+    SparseCPU, SparseCUDA: copy_sparse_wrapper_
     CompositeExplicitAutograd: copy_
     SparseCsrCPU, SparseCsrCUDA: copy_sparse_csr_


@@ -1612,23 +1612,8 @@ def main() -> None:
 #include <ATen/hip/HIPDevice.h>
 #include <ATen/hip/HIPContext.h>'''
-    dispatch_keys = [
-        DispatchKey.CPU,
-        DispatchKey.SparseCPU,
-        DispatchKey.SparseCsrCPU,
-        DispatchKey.MkldnnCPU,
-        DispatchKey.CUDA,
-        DispatchKey.SparseCUDA,
-        DispatchKey.SparseCsrCUDA,
-        DispatchKey.QuantizedCPU,
-        DispatchKey.QuantizedCUDA,
-        DispatchKey.CompositeImplicitAutograd,
-        DispatchKey.CompositeExplicitAutograd,
-        # Meta is a magic key: it is automatically generated for structured
-        # kernels
-        DispatchKey.Meta,
-        DispatchKey.ZeroTensor,
-    ]
+    from tools.codegen.model import dispatch_keys
     # Only a limited set of dispatch keys get CPUFunctions.h headers generated
     # for them; this is the set
     functions_keys = {


@@ -126,6 +126,25 @@ class DispatchKey(Enum):
 STRUCTURED_DISPATCH_KEYS = {DispatchKey.CUDA, DispatchKey.CPU}
+# Set of supported dispatch keys
+dispatch_keys = [
+    DispatchKey.CPU,
+    DispatchKey.SparseCPU,
+    DispatchKey.SparseCsrCPU,
+    DispatchKey.MkldnnCPU,
+    DispatchKey.CUDA,
+    DispatchKey.SparseCUDA,
+    DispatchKey.SparseCsrCUDA,
+    DispatchKey.QuantizedCPU,
+    DispatchKey.QuantizedCUDA,
+    DispatchKey.CompositeImplicitAutograd,
+    DispatchKey.CompositeExplicitAutograd,
+    # Meta is a magic key: it is automatically generated for structured
+    # kernels
+    DispatchKey.Meta,
+    DispatchKey.ZeroTensor,
+]
 # Dispatch keys that "support all backends". These codegen slightly differently
 # then backend specific keys.
 def is_generic_dispatch_key(dk: DispatchKey) -> bool:
@@ -367,6 +386,8 @@ class NativeFunction:
                 assert isinstance(ks, str), e
                 for k in ks.split(","):
                     dispatch_key = DispatchKey.parse(k.strip())
+                    assert dispatch_key in dispatch_keys, f"Dispatch key {dispatch_key} of kernel {v} " \
+                        "is not a supported dispatch key."
                     # Why is 'structured' included? External backends (e.g.
                     # XLA) opt into which ops are structured independently
                     # of which in-tree ops are structured
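
As a rough, self-contained picture of how the pieces fit together: the supported-key list now lives next to the DispatchKey enum, so gen.py imports it rather than keeping its own copy, and the yaml parser asserts membership while resolving each key. The sketch below uses simplified, illustrative names (a trimmed enum and a hypothetical `parse_dispatch_entry` helper), not the actual tools/codegen modules.

```python
# Simplified, illustrative sketch (not the actual tools/codegen code) of the
# pattern introduced here: one canonical list of supported dispatch keys next
# to the enum, checked while parsing a dispatch entry.
from enum import Enum, auto
from typing import List

class DispatchKey(Enum):
    CPU = auto()
    CUDA = auto()
    SparseCPU = auto()
    SparseCUDA = auto()
    SparseHIP = auto()  # exists as a key name, but deliberately left unsupported

# Canonical supported list (trimmed for the sketch); defining it beside the
# enum means the generator and the parser can never disagree about it.
dispatch_keys = [DispatchKey.CPU, DispatchKey.CUDA,
                 DispatchKey.SparseCPU, DispatchKey.SparseCUDA]

def parse_dispatch_entry(ks: str, kernel: str) -> List[DispatchKey]:
    parsed = []
    for k in ks.split(","):
        dispatch_key = DispatchKey[k.strip()]  # KeyError on a completely unknown name
        assert dispatch_key in dispatch_keys, (
            f"Dispatch key {dispatch_key} of kernel {kernel} is not a supported dispatch key."
        )
        parsed.append(dispatch_key)
    return parsed

print(parse_dispatch_entry("SparseCPU, SparseCUDA", "copy_sparse_wrapper_"))  # OK
# parse_dispatch_entry("SparseCPU, SparseHIP", "copy_sparse_wrapper_")  # AssertionError
```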