pytorch/benchmarks/transformer
Driss Guessous 1d9e1fca97 Update sdp dispatch logic to enable fused backward (#89154)
# Summary
Reorganizes how the SDP dispatch logic is done in order to enable the backward pass for the fused kernels; see the sketch below.
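
A minimal sketch of the path this change enables (illustrative shapes and reduction, not code from this PR); it assumes a CUDA device with fp16 support, since the fused SDPA kernels target CUDA:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, num_heads, seq_len, head_dim)
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16, requires_grad=True)
k = torch.randn_like(q, requires_grad=True)
v = torch.randn_like(q, requires_grad=True)

# Forward may dispatch to a fused attention kernel.
out = F.scaled_dot_product_attention(q, k, v)

# With this change, the backward pass can also take the fused path
# instead of falling back to the unfused math implementation.
out.sum().backward()
```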

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89154
Approved by: https://github.com/cpuhrsch
2022-11-21 20:02:09 +00:00
better_transformer_vs_mha_functional.py Use scaled_dot_product_attention within attention.cpp (#87312) 2022-10-31 04:06:31 +00:00
sdp_backwards.py Update sdp dispatch logic to enable fused backward (#89154) 2022-11-21 20:02:09 +00:00
sdp.py Support non-contiguous NestedTensors for elementwise ops (#87888) 2022-10-28 11:26:17 +00:00