pytorch/test/test_gen_backend_stubs.py
Brian Hirsh 9354a68e7d [codegen] split out backend-specific information from NativeFunction in the model (#57361)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57361

Data model change in the codegen, which splits backend-specific information out of `NativeFunction`

### Overview
Currently in the codegen, the backend-specific information that native_functions.yaml carries about each operator is encoded directly into the data model, in the `NativeFunction` object. That's reasonable, since native_functions.yaml is the source of truth for information about an operator, and the data model encodes that information into types.

Now that external backends can use the codegen, though, that information is technically incomplete/inaccurate. In another PR, I tried patching the `NativeFunction` object with the additional external information, updating the `dispatch` entry to contain the external backend's kernel name and dispatch key.

Instead, this PR tries to split out that information. The `NativeFunction` class contains all information about an operator from native_functions.yaml that's backend-independent and is known never to change regardless of what extra information backends provide. We also build up a backend "index", which is basically a mapping from [backend] -> [backend-specific-metadata]. Reading in an external backend yaml just involves updating that index with the new backend.
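Schematically, the index works something like this (names here are illustrative; a sketch of `BackendIndex` itself appears under the data model changes below):

```python
# Schematic only: the codegen builds one index per backend, keyed by backend
# name; NativeFunction objects themselves are left untouched.
from typing import Dict

backend_indices: Dict[str, 'BackendIndex'] = {}

def register_backend(name: str, index: 'BackendIndex') -> None:
    # Reading in an external backend's yaml just adds one more entry here.
    backend_indices[name] = index
```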

There were a few places where `NativeFunction` used the dispatch table directly; I encoded those as properties on the `NativeFunction` object (e.g. `is_abstract`). They were mostly about whether or not the operator has a composite kernel, which isn't something that will change for any external backend.

This has a few advantages:
- We can more easily re-use the existing logic in `native_function.py` and `register_dispatch_key.py` for both native and external backends, since they both involve a NativeFunction + a particular backend index
- The data in the data model will be the same regardless of how the codegen is run. Running the codegen with a new external backend doesn't change the data inside of NativeFunction or an existing backend index. It just adds a new index for that backend.
- Several codegen areas don't care about backend-specific information at all: mostly the tracing and autograd codegen. We can reason about the codegen there more easily, knowing that backend-specific info is entirely uninvolved.

An alternative to this split would be to augment the `NativeFunction` objects with external backend information at the time we create them. The external codegen could then read both native_functions.yaml and the external backend's yaml at the same time, and construct a `NativeFunction` object with a full dispatch table (including the XLA entry) and the correct setting of `structured` (taking into account both yamls). One disadvantage of this approach is that `NativeFunction` objects would then contain different contents depending on how the codegen was run, and any change to the codegen would have to handle all the different variants.

### Data Model Changes
Removed three classes that were used by the external codegen:
- ExternalBackendFunction
- ExternalBackendFunctionsGroup
- ExternalBackendMetadata

And added two new ones:
- BackendIndex
- BackendMetadata

`BackendIndex` contains any info that's specific to that backend, plus a mapping from operator names to backend specific metadata about the operator. One example of backend-specific info that's not operator-dependent is the fact that XLA prefers to implement functional kernels instead of out kernels (and so when they eventually mark an op as structured, they're going to mark the functional op and not the out op).

`BackendMetadata` contains info specific to an (operator, backend) pair. Right now, that's just (a) the name of the kernel, and (b) whether or not that operator is structured.
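To make the shape of the new model concrete, here's a minimal sketch of what the two classes could look like. The field names are illustrative assumptions (and the real index is keyed on the codegen's `OperatorName` type rather than plain strings), not necessarily the PR's exact definitions:

```python
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class BackendMetadata:
    # Info specific to an (operator, backend) pair.
    kernel: str       # name of the C++ kernel this backend registers for the op
    structured: bool  # has this backend opted the op into structured kernels?


@dataclass(frozen=True)
class BackendIndex:
    # Info specific to a backend, plus its per-operator metadata.
    dispatch_key: str  # e.g. 'CPU', 'CUDA', 'XLA'
    external: bool     # is this an out-of-tree backend?
    # Backend-wide, operator-independent preference: e.g. XLA implements
    # functional kernels rather than out kernels.
    use_out_as_primary: bool
    # operator name -> metadata for that (operator, backend) pair
    index: Dict[str, BackendMetadata]
```

With this in place, reading in an external backend's yaml just constructs one more `BackendIndex` and adds it to the [backend] -> [backend-specific-metadata] mapping.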

### Questions
I wanted to get this PR up early so I could get feedback, but there are a few things I want to call out:

**Dealing with `structured`.**
This PR separates out the notion of `structured` into two bits of information:
- Does [operator] have a meta() function? This is backend-agnostic, and is represented by the `structured` property on `NativeFunction`, same as before. It's used, e.g., to decide which signatures to add to `MetaFunctions.h`.
- Does [operator, backend] have an impl() function? This is backend-dependent: even though all in-tree backends are effectively forced to write impl() functions for an operator when we port the op to structured in native_functions.yaml, out-of-tree backends can opt in independently. This is represented as a property on `BackendMetadata`, and it's used in most other cases, e.g. in `RegisterDispatchKey` when deciding whether to generate a structured or unstructured wrapper (sketched below).
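A hypothetical pair of call sites, to make the two questions concrete (the helper names and the `index` field follow the sketch above and are assumptions, not the PR's exact API):

```python
# Hypothetical call sites; the exact names in the codegen may differ.

def needs_meta_declaration(f: 'NativeFunction') -> bool:
    # Backend-agnostic question: does the operator have a meta() function?
    # Drives, e.g., which signatures are emitted into MetaFunctions.h.
    return f.structured


def emits_structured_wrapper(op_name: str, backend_index: 'BackendIndex') -> bool:
    # Backend-dependent question: does this (operator, backend) pair have an
    # impl() function? Drives, e.g., whether RegisterDispatchKey generates a
    # structured or an unstructured wrapper for the kernel.
    metadata = backend_index.index.get(op_name)
    return metadata is not None and metadata.structured
```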

I also baked `is_structured_dispatch_key` directly into each BackendIndex. So for operators marked "structured" in native_functions.yaml, their corresponding CPU/CUDA BackendIndex entries will be marked structured, and all others (except for potentially external backends) will not.

I ended up trying to deal with `structured` in this change since it's technically backend-dependent (XLA can opt kernels into structured separately from in-tree ops), but that may have been too ambitious: it's not actually relevant until we add support for structured external kernels. If it's not clear that this is the right path for dealing with structured and we want to push it off, I'm fine with backing out the bits of this PR that make `structured` backend-dependent. I don't see anything *too* controversial related to structured in the change, but I tried to call out any such areas in the comments.

**Localizing the fact that external backends follow Dispatcher convention.**
Another thing that's somewhat backend-specific, which I didn't totally address in this PR, is the fact that in-tree backends follow the Native API while external backends follow the Dispatcher API. I papered over that in `native_functions.py` by adding a helper, `kernel_signature`, that takes a native function and gives you the "correct" signature for the specified backend: `NativeSignature` for in-tree backends, and `DispatcherSignature` for out-of-tree backends. To make that fully usable, though, we'll need `NativeSignature` and `DispatcherSignature` to have matching interfaces. I didn't tackle that in this PR, which is why `gen_external_aten_fallbacks.py` still has a bunch of direct references to the dispatcher API. I'm thinking of adding it in a later PR, but wanted to see if anyone has other opinions.
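A sketch of that helper's shape, assuming the `NativeSignature`/`DispatcherSignature` types from `tools.codegen.api.types` and the `external` flag from the earlier sketch (the actual implementation in the PR may differ):

```python
from typing import Union

from tools.codegen.api.types import DispatcherSignature, NativeSignature


def kernel_signature(
    f: 'NativeFunction', backend_index: 'BackendIndex'
) -> Union[NativeSignature, DispatcherSignature]:
    if backend_index.external:
        # Out-of-tree backends (e.g. XLA) write kernels against the Dispatcher API.
        return DispatcherSignature.from_schema(f.func)
    # In-tree backends write kernels against the Native API.
    return NativeSignature(f.func)
```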

Maybe `is_external()` shouldn't even be a property on the BackendMetadata, and anything the codegen does that requires asking for that information should just be better abstracted away.

**Thoughts on the `BackendIndex` / `BackendMetadata` breakdown.**
One thing that's annoying right now is that to query for various pieces of metadata, you call helper functions like `backend_index.structured(f)`, which queries that particular backend and tells you whether that specific `NativeFunctionsGroup` is structured for that backend. It has to return an `Optional[bool]`, though, since you have to handle the case where the operator doesn't have a kernel for that backend at all. So users of those helpers end up with a bunch of Optionals to unpack, even when they know at some point that the result isn't None (sketched below). I think it would be easier to just store the `NativeFunction` object as a field directly on the `BackendMetadata`. Curious whether there are other opinions on a better way to model it, though.
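Concretely, call sites end up following a pattern something like this (a hypothetical sketch, not code from the PR):

```python
from typing import Optional


def wrapper_kind(g: 'NativeFunctionsGroup', backend_index: 'BackendIndex') -> str:
    # Optional[bool]: None means this backend has no kernel for the op at all.
    structured: Optional[bool] = backend_index.structured(g)
    if structured is None:
        return 'skip'  # nothing registered for this backend
    # Even when the caller already knows the kernel exists, the type checker
    # still forces this unpacking step, which is the annoyance described above.
    return 'structured' if structured else 'unstructured'
```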

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D28474362

Pulled By: bdhirsh

fbshipit-source-id: 41a00821acf172467d764cb41e771e096542f661
2021-05-17 12:25:35 -07:00


import os
import tempfile

from torch.testing._internal.common_utils import TestCase, run_tests

import tools.codegen.gen_backend_stubs

path = os.path.dirname(os.path.realpath(__file__))
gen_backend_stubs_path = os.path.join(path, '../tools/codegen/gen_backend_stubs.py')


# gen_backend_stubs.py is an integration point that is called directly by external backends.
# The tests here are to confirm that badly formed inputs result in reasonable error messages.
class TestGenBackendStubs(TestCase):

    def assert_success_from_gen_backend_stubs(self, yaml_str: str) -> None:
        with tempfile.NamedTemporaryFile(mode='w') as fp:
            fp.write(yaml_str)
            fp.flush()
            tools.codegen.gen_backend_stubs.run(fp.name, '', True)

    def get_errors_from_gen_backend_stubs(self, yaml_str: str) -> str:
        with tempfile.NamedTemporaryFile(mode='w') as fp:
            fp.write(yaml_str)
            fp.flush()
            try:
                tools.codegen.gen_backend_stubs.run(fp.name, '', True)
            except AssertionError as e:
                # Scrub out the temp file name from any error messages to simplify assertions.
                return str(e).replace(fp.name, '')
            self.fail('Expected gen_backend_stubs to raise an AssertionError, but it did not.')

    def test_valid_single_op(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_xla
supported:
- abs'''
        self.assert_success_from_gen_backend_stubs(yaml_str)

    def test_valid_multiple_ops(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_xla
supported:
- add.Tensor
- abs'''
        self.assert_success_from_gen_backend_stubs(yaml_str)

    def test_valid_zero_ops(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_xla
supported:'''
        self.assert_success_from_gen_backend_stubs(yaml_str)

    def test_valid_zero_ops_doesnt_require_backend_dispatch_key(self):
        yaml_str = '''\
backend: BAD_XLA
cpp_namespace: torch_xla
supported:'''
        # External codegen on a yaml file with no operators is effectively a no-op,
        # so there's no reason to parse the backend
        self.assert_success_from_gen_backend_stubs(yaml_str)

    def test_valid_with_autograd_ops(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_xla
supported:
- abs
autograd:
- add.Tensor'''
        self.assert_success_from_gen_backend_stubs(yaml_str)

    def test_missing_backend(self):
        yaml_str = '''\
cpp_namespace: torch_xla
supported:
- abs'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''You must provide a value for "backend"''')

    def test_empty_backend(self):
        yaml_str = '''\
backend:
cpp_namespace: torch_xla
supported:
- abs'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''You must provide a value for "backend"''')

    def test_backend_invalid_dispatch_key(self):
        yaml_str = '''\
backend: NOT_XLA
cpp_namespace: torch_xla
supported:
- abs'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''\
unknown dispatch key NOT_XLA
The provided value for "backend" must be a valid DispatchKey, but got NOT_XLA.''') # noqa: B950

    def test_missing_cpp_namespace(self):
        yaml_str = '''\
backend: XLA
supported:
- abs'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''You must provide a value for "cpp_namespace"''')

    def test_whitespace_cpp_namespace(self):
        yaml_str = '''\
backend: XLA
cpp_namespace:\t
supported:
- abs'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''You must provide a value for "cpp_namespace"''')

    # supported is a single item (it should be a list)
    def test_nonlist_supported(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_xla
supported: abs'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''expected "supported" to be a list, but got: abs (of type <class 'str'>)''')

    # supported contains an op that isn't in native_functions.yaml
    def test_supported_invalid_op(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_xla
supported:
- abs_BAD'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''Found an invalid operator name: abs_BAD''')

    # The backend is valid, but doesn't have a valid autograd key. They can't override autograd kernels in that case.
    # Only using MSNPU here because it has a valid backend key but not an autograd key; if this changes we can update the test.
    def test_backend_has_no_autograd_key_but_provides_entries(self):
        yaml_str = '''\
backend: MSNPU
cpp_namespace: torch_msnpu
supported:
- add
autograd:
- sub'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''Found an invalid operator name: add''') # noqa: B950

    # In an operator group, all operators must currently be registered to either the backend key or the backend's autograd key.
    # Here, the functional and out variants mismatch.
    def test_backend_autograd_kernel_mismatch_out_functional(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_msnpu
supported:
- add.Tensor
autograd:
- add.out'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''Currently, all variants of an op must either be registered to a backend key, or to a backend's autograd key. They can not be mix and matched. If this is something you need, feel free to create an issue! add is listed under "supported", but add_out is listed under "autograd".''') # noqa: B950

    # In an operator group, all operators must currently be registered to either the backend key or the backend's autograd key.
    # Here, the functional and inplace variants mismatch.
    def test_backend_autograd_kernel_mismatch_functional_inplace(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_msnpu
supported:
- add.Tensor
autograd:
- add_.Tensor'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''Currently, all variants of an op must either be registered to a backend key, or to a backend's autograd key. They can not be mix and matched. If this is something you need, feel free to create an issue! add is listed under "supported", but add_ is listed under "autograd".''') # noqa: B950

    # Currently, the same operator can't be listed under both 'supported' and 'autograd', which would
    # involve registering the same kernel to both the XLA and AutogradXLA keys.
    # If we need that functionality in the future, we'll need to augment the codegen.
    def test_op_appears_in_supported_and_autograd_lists(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_msnpu
supported:
- add.Tensor
autograd:
- add.Tensor'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, '''Currently, all variants of an op must either be registered to a backend key, or to a backend's autograd key. They can not be mix and matched. If this is something you need, feel free to create an issue! add is listed under "supported", but add is listed under "autograd".''') # noqa: B950

    # unrecognized extra yaml key
    def test_unrecognized_key(self):
        yaml_str = '''\
backend: XLA
cpp_namespace: torch_xla
supported:
- abs
invalid_key: invalid_val'''
        output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
        self.assertExpectedInline(output_error, ''' contains unexpected keys: invalid_key. Only the following keys are supported: backend, cpp_namespace, supported, autograd''') # noqa: B950


if __name__ == '__main__':
    run_tests()