Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
Summary: We used to skip tensor.to() during tracing when the source and target devices are the same. This brings some performance improvement in eager mode, but it makes graph capture lose the semantics of the original model. In this diff, we add an additional condition: the fast path is skipped when the tensor holds no actual data, which is the case when we're using FakeTensor / FunctionalTensor to trace the model. This has no perf impact on existing eager models while ensuring we can capture the _to_copy() node in the graph.

Test Plan: buck test mode/opt caffe2/test:test_export -- -r device_to

Differential Revision: D55969674

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123732

Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
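The condition change described above can be sketched in plain Python. This is an illustrative model only, assuming a simplified notion of "has actual data"; the names (`Tensor`, `has_data`, `should_skip_to_copy`) are hypothetical and do not reflect the real PyTorch internals, where the check involves FakeTensor / FunctionalTensor subclasses in the dispatcher.

```python
# Hedged sketch of the fast-path condition for tensor.to() described in the
# summary. All names here are illustrative, not actual PyTorch APIs.

class Tensor:
    def __init__(self, device, has_data):
        self.device = device
        # has_data is False for tracing tensors (FakeTensor / FunctionalTensor)
        self.has_data = has_data

def should_skip_to_copy(tensor, target_device):
    # Old behavior: skip whenever devices match, which drops the _to_copy()
    # node from traced graphs.
    # New behavior: additionally require that the tensor holds real data, so
    # tracing with data-less tensors still records _to_copy().
    return tensor.device == target_device and tensor.has_data

real = Tensor("cpu", has_data=True)   # ordinary eager tensor
fake = Tensor("cpu", has_data=False)  # stands in for a FakeTensor

assert should_skip_to_copy(real, "cpu") is True   # eager fast path preserved
assert should_skip_to_copy(fake, "cpu") is False  # trace emits _to_copy()
```

The key point is that the eager fast path is untouched: only tensors without backing data (i.e. during graph capture) take the slow path that records the copy node.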
Files:

- __init__.py
- fake_impls.py
- fake_tensor.py
- fake_utils.py
- functional_tensor.py
- meta_utils.py
- schema_check_mode.py