Fixes #104484
For Python >= 3.10, we use `inspect.get_annotations` instead of `getattr(.., "__annotations__")`. The [docs](https://docs.python.org/3/library/inspect.html#inspect.get_annotations) say that get_annotations() "Ignores inherited annotations on classes. If a class doesn’t have its own annotations dict, returns an empty dict." In practice, though, this doesn't always hold: until you call inspect.getmembers, you can still see inherited annotations. As a result, if you script the same type twice, scripting may succeed the first time but fail the second.
This PR handles get_annotations more comprehensively by recursively reading the annotations of the base classes. (TorchScript doesn't officially support inherited annotations, but since they worked in <3.10, the behavior change is now breaking internal code as Python gets upgraded to 3.10.)
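A minimal sketch of the recursive idea, assuming we simply merge annotations across the MRO (the helper name is illustrative, not the actual TorchScript internals):

```python
# Illustrative sketch only; the real TorchScript change differs in detail.
import inspect
import sys


def get_annotations_including_bases(cls):
    """Merge annotations from cls and its base classes (MRO order),
    letting the most-derived class win on name collisions."""
    merged = {}
    for klass in cls.__mro__:
        if sys.version_info >= (3, 10):
            own = inspect.get_annotations(klass)
        else:
            own = klass.__dict__.get("__annotations__", {})
        for name, annotation in own.items():
            merged.setdefault(name, annotation)
    return merged
```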
Verified in #104486 that the test does actually fail before the changes in this PR were added.
Differential Revision: [D47163891](https://our.internmc.facebook.com/intern/diff/D47163891)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104485
Approved by: https://github.com/eellison
For the most part, PrimTorch refs have the same signature as their
ATen equivalents. I modify most PrimTorch refs to register themselves
as decompositions, using the name of the prim they wrap to find the
aten name (except for a few cases where the prim/aten names don't
match); a usage sketch follows the list below. There are some
exclusions, falling into one of two categories:
- The torch equivalent was already implemented as a CompositeImplicitAutograd
decomposition in C++
- The ref doesn't support enough features (e.g., the real deal has more
kwargs / overloads than are currently implemented)
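As a rough illustration of the registration pattern (the op chosen and the import path are assumptions, not taken from this PR):

```python
# Illustrative only: a PrimTorch-style ref registering itself as the
# decomposition for the aten op that shares its name.
import torch
import torch._prims as prims
from torch._decomp import register_decomposition  # assumed import path


@register_decomposition(torch.ops.aten.sinh)  # packet looked up from the ref's name
def sinh(a: torch.Tensor) -> torch.Tensor:
    # One ref body serves every aten.sinh overload (see the sketch further below).
    return prims.sinh(a)
```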
PrimTorch refs are written as a single function that supports all
overloads. This style is convenient when we have a bundle of overloads
for what is morally a single overload with a Union type on an argument
(which we ought to have supported in native_functions.yaml, but don't).
To support registering a single decomp for all the overloads, we modify
register_decomposition to register to ALL overloads if you pass it an
overload packet. This is technically BC-breaking, but no tests started
failing because of it.
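A hedged sketch of the overload-packet expansion (simplified; the actual register_decomposition in this PR does more):

```python
# Simplified sketch, not the actual implementation from this PR.
import torch

decomposition_table = {}


def register_decomposition(aten_op):
    def decorator(fn):
        # An OverloadPacket (e.g. torch.ops.aten.add) expands to every one
        # of its overloads; a single OpOverload registers only itself.
        if isinstance(aten_op, torch._ops.OpOverloadPacket):
            ops = [getattr(aten_op, name) for name in aten_op.overloads()]
        else:
            ops = [aten_op]
        for op in ops:
            decomposition_table[op] = fn
        return fn

    return decorator
```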
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76835
Approved by: https://github.com/Chillee, https://github.com/mruberry
Summary:
This PR replaces the https://github.com/pytorch/pytorch/pull/53180 PR stack, which contains all of the review discussions. The replacement was needed because of a messy Sandcastle issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64234
Reviewed By: gmagogsfm
Differential Revision: D30656444
Pulled By: ansley
fbshipit-source-id: 77536c8bcc88162e2c72636026ca3c16891d669a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57137
This PR corrects and expands our typing algorithm for unannotated, non-empty dicts and lists. Previously, to verify type correctness for an unannotated, non-empty container, we took the type of the first element in the container, then checked whether each following element was a subtype of that type. That's too restrictive: what if the first element is a subtype of the second element? Instead, we should type the container by computing the smallest common supertype of all the given elements.
We need slightly different rules for keys and values in dicts, though: because the set of key types is restricted, finding two key types that cannot be unified should cause an error. On the other hand, the set of value types is not restricted, so we should be able to use `Any` as a valid supertype. We need to keep the set of keys restricted since the keys are used to generate and match schemas.
This does not break backwards compatibility, because the default element type is the smallest supertype of all the given types. So, if someone creates an unannotated dict where the keys are all `str` and the values are all `torch.Tensor`, the dict will be inferred as `Dict[str, Tensor]`, just as it was before. Empty lists are still typed as `List[torch.Tensor]`, and empty dicts are still typed as `Dict[str, Tensor]`.
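A Python-level sketch of the key/value asymmetry described above (hypothetical helpers that only handle exact matches and Optional; the real unification logic lives in the TorchScript compiler):

```python
# Hypothetical illustration only; not the compiler's actual implementation.
from typing import Any, Optional


def unify(a, b):
    """Return the smallest common supertype of two element types, or None
    if they cannot be unified (Any is handled by the caller)."""
    if a == b:
        return a
    # NoneType unifies with T into Optional[T].
    if a is type(None):
        return Optional[b]
    if b is type(None):
        return Optional[a]
    return None


def infer_dict_value_type(values):
    # Value types are unrestricted, so fall back to Any when unification fails.
    inferred = type(values[0])
    for v in values[1:]:
        unified = unify(inferred, type(v))
        inferred = unified if unified is not None else Any
    return inferred


def infer_dict_key_type(keys):
    # Key types are restricted (they participate in schema generation and
    # matching), so failure to unify is an error rather than Any.
    inferred = type(keys[0])
    for k in keys[1:]:
        unified = unify(inferred, type(k))
        if unified is None:
            raise TypeError(f"cannot unify dict key types {inferred} and {type(k)}")
        inferred = unified
    return inferred
```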
This PR unblocks three engineers on an FB-internal team and improves FX-TorchScript compatibility.
Test Plan: Imported from OSS
Reviewed By: gmagogsfm
Differential Revision: D28231839
Pulled By: ansley
fbshipit-source-id: 7297bf239749daa54895add708185c75e6ca5999