pytorch/.github/label_to_label.yml
Richard Zou b6a4236e5d [label_to_label] minor updates (#166172)
vllm-compile now implies "module: vllm" and "oncall: pt2".
The Flex -> HigherOrderOperators auto-labeling is too noisy, and a
different set of folks looks at each label, so I'm removing that rule.
We can still manually label flex attention issues as higher order
operator issues.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166172
Approved by: https://github.com/angelayi
2025-10-24 22:47:23 +00:00


# Use this to auto apply labels based on other labels. Applies to both PRs and
# issues. Currently only supports any and all
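#
# Sketch of the rule shape: each top-level item pairs a trigger ("any" or
# "all" of the listed labels) with the labels under "then" to apply.
# Hypothetical "all" example for illustration only; no rule below uses
# "all", and the label pairing here is not a real rule:
#
# - all:
#     - "module: dynamo"
#     - "module: inductor"
#   then:
#     - "oncall: pt2"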
- any:
    - "module: opcheck"
  then:
    - "module: custom-operators"
- any:
    - "module: custom-operators"
    - "module: functionalization"
    - "module: aotdispatch"
    - "module: higher order operators"
    - "module: fakeTensor"
    - "module: ProxyTensor"
    - "module: library"
    - "module: reinplacing"
  then:
    - "module: pt2-dispatcher"
- any:
    - "vllm-compile"
  then:
    - "module: vllm"
    - "oncall: pt2"
- any:
    - "module: vmap"
  then:
    - "module: functorch"
- any:
    - "module: reinplacing"
  then:
    - "module: inductor"
- any:
    - "module: pt2 optimizer"
  then:
    - "module: dynamo"
- any:
    - "module: aotinductor"
  then:
    - "oncall: export"
- any:
    - "module: dynamo"
    - "module: pt2-dispatcher"
    - "module: inductor"
    - "module: aotinductor"
    - "module: cudagraphs"
    - "oncall: export"
    - "module: compile-time"
    - "module: compiled autograd"
    - "module: flex attention"
    - "module: dynamic shapes"
  then:
    - "oncall: pt2"
- any:
    - "release notes: distributed (c10d)"
    - "release notes: distributed (symm_mem)"
    - "release notes: distributed (pipeline)"
    - "release notes: distributed (fsdp)"
    - "release notes: distributed (dtensor)"
    - "oncall: distributed"
  then:
    - "ciflow/h100-distributed"