Revert "[pt2-bench] fix accuracy failure for beit_base_patch16_224 during training (#130005)"

This reverts commit 0af8c8a981.

Reverted https://github.com/pytorch/pytorch/pull/130005 on behalf of https://github.com/jeanschmidt due to Seems to have introduced breakages in main cuda12 focal jobs ([comment](https://github.com/pytorch/pytorch/pull/129996#issuecomment-2209175516))
PyTorch MergeBot 2024-07-04 14:55:38 +00:00
parent 57d05f2616
commit 54da35a2e0
3 changed files with 2 additions and 3 deletions


@@ -6,7 +6,7 @@ adv_inception_v3,pass,6
-beit_base_patch16_224,pass,7
+beit_base_patch16_224,fail_accuracy,7



@@ -6,7 +6,7 @@ adv_inception_v3,pass,6
-beit_base_patch16_224,pass,7
+beit_base_patch16_224,fail_accuracy,7



@@ -83,7 +83,6 @@ REQUIRE_HIGHER_TOLERANCE = {
 REQUIRE_EVEN_HIGHER_TOLERANCE = {
     "levit_128",
     "sebotnet33ts_256",
-    "beit_base_patch16_224",
 }
# These models need higher tolerance in MaxAutotune mode
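For context, sets like `REQUIRE_EVEN_HIGHER_TOLERANCE` are typically consulted when the benchmark harness compares eager and compiled outputs, loosening the accuracy threshold for listed models. The sketch below is a hypothetical illustration of that pattern, not the actual PyTorch benchmark code; the function names and tolerance values are assumptions for illustration only.

```python
# Hypothetical sketch: models in the "even higher tolerance" set get a looser
# accuracy threshold; everything else falls through to progressively tighter
# defaults. The names and values here are illustrative, not PyTorch's.
REQUIRE_HIGHER_TOLERANCE = {"levit_128"}
REQUIRE_EVEN_HIGHER_TOLERANCE = {"sebotnet33ts_256", "beit_base_patch16_224"}


def pick_tolerance(model_name: str) -> float:
    """Return the absolute tolerance to use when comparing outputs."""
    if model_name in REQUIRE_EVEN_HIGHER_TOLERANCE:
        return 1e-2
    if model_name in REQUIRE_HIGHER_TOLERANCE:
        return 4e-3
    return 1e-3


def check_accuracy(model_name: str, eager_out, compiled_out) -> str:
    """Compare two flat lists of floats; mirror the pass/fail_accuracy labels."""
    tol = pick_tolerance(model_name)
    max_err = max(abs(a - b) for a, b in zip(eager_out, compiled_out))
    return "pass" if max_err <= tol else "fail_accuracy"
```

Under this scheme, removing `beit_base_patch16_224` from the set (as the reverted PR did) tightens its threshold, which is consistent with its expected result flipping between `pass` and `fail_accuracy` in the CSVs above.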