pytorch/torch/_subclasses
Zhengxu Chen 02ed2992d9 [export] Capture tensor.to() under export. (#123732)
Summary: We used to skip tensor.to() during tracing when the source and target devices are the same. This brings some performance improvement in eager mode, but it makes graph capture lose the semantics of the original model. In this diff, we add an additional condition so that the fast path is bypassed when the tensor holds no actual data, which is the case when we're tracing the model with FakeTensor / FunctionalTensor. This has no perf impact on existing eager models while making sure we capture the _to_copy() node in the graph.
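A minimal sketch (not part of the commit itself) illustrating the behavior the summary describes: after this change, a same-device tensor.to() inside an exported module should show up as an aten._to_copy node in the exported graph rather than being optimized away. The module M here is a hypothetical example.

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        # Same-device .to(): the eager fast path returns x unchanged,
        # but export (tracing with FakeTensor) should now record the op.
        return x.to(torch.device("cpu"))

ep = torch.export.export(M(), (torch.randn(2, 2),))
print(ep.graph)  # expect an aten._to_copy call to appear in the graph
```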

Test Plan: buck test mode/opt caffe2/test:test_export -- -r device_to

Differential Revision: D55969674

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123732
Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
2024-04-24 23:12:19 +00:00
__init__.py Add Fake Cross Ref Mode, migrate sparse to it (#85382) 2022-09-21 17:15:47 +00:00
fake_impls.py Add fake impl for aten.unique2 (#124306) 2024-04-17 22:55:27 +00:00
fake_tensor.py We should not be in kernel invocation before we restore fake mode (#124762) 2024-04-24 20:32:59 +00:00
fake_utils.py Revert "[fake_impls] Fix seed/offset device for attention kernels (#120839)" (#121447) 2024-03-08 01:48:23 +00:00
functional_tensor.py [export] Capture tensor.to() under export. (#123732) 2024-04-24 23:12:19 +00:00
meta_utils.py Fakeifying views shouldnt create symbols when dynamic=False (#123348) 2024-04-12 01:12:23 +00:00
schema_check_mode.py Replace follow_imports = silent with normal (#118414) 2024-01-27 02:44:11 +00:00