Fixes https://github.com/pytorch/torchdynamo/issues/1839

Should I do this for all backends or just inductor?

## Test

On a V100 I got from AWS:

```python
from torch._dynamo import optimize
import torch

def fn(x, y):
    a = torch.cos(x)
    b = torch.sin(y)
    return a + b

new_fn = optimize("inductor")(fn)
a = new_fn(torch.Tensor(1), torch.Tensor(1))
print(a)
```

## New logs

```
(sourcetorch) ubuntu@ip-172-31-31-152:~/test$ python test.py
/home/ubuntu/pytorch/torch/_dynamo/eval_frame.py:318: UserWarning: Tensor cores are available but not enabled. Consider setting torch.backends.cuda.matmul.allow_tf32 == True in your python script for speedups
  warnings.warn(
tensor([1.3717])
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88844
Approved by: https://github.com/ngimel, https://github.com/mlazos, https://github.com/anijain2305
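For context, here is a minimal sketch of the kind of check this change adds. This is an illustration, not the actual code in `eval_frame.py`: `warn_if_tensor_cores_unused` is a hypothetical name, and the `(7, 0)` compute-capability threshold is an assumption (tensor cores shipped with Volta, and the V100 used above is sm_70).

```python
# Hypothetical sketch of the warning this PR describes, NOT the exact
# implementation. The function name and the (7, 0) threshold are assumptions.
import warnings

import torch


def warn_if_tensor_cores_unused() -> None:
    # Warn when the GPU has tensor cores (Volta, sm_70, or newer) but
    # TF32 matmuls have not been enabled by the user.
    if (
        torch.cuda.is_available()
        and torch.cuda.get_device_capability() >= (7, 0)
        and not torch.backends.cuda.matmul.allow_tf32
    ):
        warnings.warn(
            "Tensor cores are available but not enabled. Consider setting "
            "torch.backends.cuda.matmul.allow_tf32 == True in your python "
            "script for speedups"
        )
```

Enabling TF32 trades some float32 matmul precision for throughput, which is presumably why this is surfaced as a warning rather than flipped on by default.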