I think this makes sense to do? Otherwise, if you call `backward()` in your traced function, you can't get gradients out of any tensors that should have been leaves.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77474
Approved by: https://github.com/ezyang
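For context, here is a minimal sketch of the scenario the commit message describes: a traced function that calls `backward()` internally and reads gradients off its input. It assumes `make_fx` from the `proxy_tensor.py` module listed below; the exact tracing semantics may vary by PyTorch version, so treat this as an illustration rather than a definitive reproduction of the change.

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

# A function that calls backward() inside the traced region; the gradient
# should accumulate on `x`, which is a leaf tensor with requires_grad=True.
def loss_and_grad(x):
    loss = (x * x).sum()
    loss.backward()
    return x.grad

x = torch.randn(3, requires_grad=True)

# Trace the function. Per the commit message, without this change the traced
# inputs were not treated as leaves, so x.grad could not be retrieved here.
gm = make_fx(loss_and_grad)(x)
print(gm.graph)
```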
- unification
- __init__.py
- accelerator_partitioner.py
- const_fold.py
- debug.py
- graph_gradual_typechecker.py
- merge_matmul.py
- meta_tracer.py
- normalize.py
- optimization.py
- partitioner_utils.py
- proxy_tensor.py
- refinement_types.py
- rewriter.py
- schema_type_annotation.py
- unify_refinements.py