pytorch/docs/source
Edward Z. Yang 9692b954c6 FakeTensorProp works with unbacked bindings (#124310)
This is a partial revert of https://github.com/pytorch/pytorch/pull/124059

As in #124297, profiling has revealed that testing equality on *every* output is fairly expensive, so we only test equality when we know there is an unbacked binding. This is the same playbook as the previous PR, just applied to FakeTensorProp instead of PropagateUnbackedSymInts. Note that we also need to populate `unbacked_bindings` in proxy_tensor.py, since FakeTensorProp generates an entirely new graph in that case.
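
To make the "only check where there is a binding" idea concrete, here is a minimal, self-contained sketch in plain Python. Ordinary dicts stand in for FX `node.meta`, and the function name and data layout are illustrative assumptions, not the actual FakeTensorProp code:

```python
# Sketch: skip nodes that carry no unbacked bindings, so the common case pays
# no per-output equality/rebinding work. `unbacked_bindings` maps a symbol
# name to the position of the value it binds in the node's output.
# (All names here are illustrative, not PyTorch internals.)

def rebind_outputs(nodes):
    new_env = {}
    for node in nodes:
        bindings = node.get("unbacked_bindings")
        if not bindings:
            continue  # no unbacked symbols were allocated here; nothing to check
        for symbol, index in bindings.items():
            # Pull the freshly computed value out of the node's output and
            # record it as the new binding for this symbol.
            new_env[symbol] = node["output"][index]
    return new_env

nodes = [
    {"output": (3, 7)},                                   # skipped: no bindings
    {"output": (5, 12), "unbacked_bindings": {"u0": 1}},  # binds u0 -> 12
]
print(rebind_outputs(nodes))  # {'u0': 12}
```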

We now have enough propagation that we are able to trigger a bug related to divisibility replacement. In https://github.com/pytorch/pytorch/pull/113165 we allowed `u0` to be replaced with `u1 * c` for some constant c, once we have determined that u0 is divisible by c. However, where does the binding for u1 come from? In practice, there is some node that is supposed to have bound u1, but which actually produces `u1 * c` in its output. So, to get u1, we must divide out c. Fortunately, under the divisibility condition this is always possible (but remember, we must test divisibility at runtime!).
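
The recovery step can be shown with a small self-contained sketch. Plain Python ints stand in for unbacked SymInts, and `recover_unbacked_value` is a hypothetical helper rather than a PyTorch API; the point is only that dividing out c recovers u1, and that the divisibility assumption has to be re-checked at runtime:

```python
def recover_unbacked_value(observed: int, divisor: int) -> int:
    """Given an observed output value that symbolically equals u1 * c,
    recover u1 by dividing out c. The replacement u0 -> u1 * c was made
    under a divisibility assumption, so re-check it against the concrete
    value we actually got at runtime."""
    if observed % divisor != 0:
        raise RuntimeError(
            f"expected {observed} to be divisible by {divisor}; the "
            "divisibility assumption made during tracing does not hold"
        )
    return observed // divisor

# Example: tracing decided u0 = u1 * 4. At runtime the node that was supposed
# to bind u1 produces 20, so the binding for u1 is 20 // 4 == 5.
assert recover_unbacked_value(20, 4) == 5
```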

Because we have tightened up the asserts, it is now an error to allocate unbacked SymInts and then fail to track them under unbacked_bindings. In torch/_dynamo/eval_frame.py and torch/_functorch/_aot_autograd/collect_metadata_analysis.py there are benign cases where we repropagate fake tensors but then immediately throw away the results. In these cases it is not appropriate to rebind, since we are still using the old FX graph with all of the old symbols, so we just manually clear the bindings. It is possible that other cases will need to be updated, so this PR is "risky" from the perspective of hitting fbcode.
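
For illustration, here is a minimal sketch of the "just clear them" fix, again with plain dicts standing in for FX nodes and a hypothetical helper name. It simply drops any freshly recorded `unbacked_bindings` so that no stale rebinding is attempted against the old graph:

```python
def clear_unbacked_bindings(nodes):
    """Drop unbacked_bindings metadata recorded during a repropagation whose
    results are being discarded (illustrative helper, not PyTorch code)."""
    for node in nodes:
        # Each node stands in for an FX node; meta is its metadata dict.
        node.get("meta", {}).pop("unbacked_bindings", None)

graph = [{"meta": {"unbacked_bindings": {"u2": 0}}}, {"meta": {}}]
clear_unbacked_bindings(graph)
print(graph)  # [{'meta': {}}, {'meta': {}}]
```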

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124310
Approved by: https://github.com/lezcano
2024-04-25 02:08:51 +00:00
_static [docs] Update PT2+Profiler docs (#122272) 2024-03-28 17:52:28 +00:00
_templates Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689) 2024-01-24 22:28:04 +00:00
community Fix typo on Contribution Guide (#119428) 2024-02-08 01:07:27 +00:00
elastic [Torch][Timer] Adding debug info logging interface for expired timers (#123883) 2024-04-25 01:15:52 +00:00
notes Graph-Safe RNG State Exchange for Tensor Parallelism (#114068) 2024-03-27 01:14:38 +00:00
rpc
scripts Fixed typo in build_activation_images.py (#117458) 2024-01-15 03:27:40 +00:00
amp.rst add GradScaler on CPU (#109993) 2024-01-29 23:42:35 +00:00
autograd.rst Add torch.library.register_autograd (#124071) 2024-04-18 12:47:59 +00:00
backends.rst preferred blas library; cublaslt gemm implementation (#122106) 2024-04-22 15:38:22 +00:00
benchmark_utils.rst
bottleneck.rst
checkpoint.rst Add missing words to torch.utils.checkpoint doc (#120196) 2024-02-20 20:18:42 +00:00
complex_numbers.rst Document complex optimizer semantic behavior (#121667) 2024-03-16 00:43:47 +00:00
cond.rst Fix typo under docs directory (#119657) 2024-02-15 21:14:34 +00:00
conf.py Ignore some known duplicated modules in doc build config script (#123425) 2024-04-05 21:12:14 +00:00
config_mod.rst
cpp_extension.rst
cpp_index.rst
cpu.rst Add current_device() to torch.cpu (#110987) 2023-10-11 05:13:10 +00:00
cuda_environment_variables.rst Add doc page for environment variables that effect PyTorch Runtime (#119087) 2024-02-15 21:41:38 +00:00
cuda._sanitizer.rst
cuda.rst [Doc][NVTX] Add documentation for nvtx.range (#121699) 2024-03-15 20:26:44 +00:00
cudnn_persistent_rnn.rst
cudnn_rnn_determinism.rst
data.rst Revert "reseed all Generators in Dataloader's _worker_loop() -- via GC (#107131)" 2023-08-23 17:08:07 +00:00
ddp_comm_hooks.rst
debugging_environment_variables.rst Add doc page for environment variables that effect PyTorch Runtime (#119087) 2024-02-15 21:41:38 +00:00
deploy.rst
deterministic.rst Add torch.utils.deterministic.fill_uninitialized_memory flag (#111377) 2023-11-01 16:10:09 +00:00
distributed.algorithms.join.rst
distributed.checkpoint.rst [DCP] DCP logger (#121352) 2024-04-05 17:50:50 +00:00
distributed.elastic.rst [Torch Elastic][Draft] Refactor SubprocessHandler to separate module for easier subclass (#120373) 2024-03-08 01:37:34 +00:00
distributed.optim.rst
distributed.rst [dtensor] add support for loss parallel (#119877) 2024-03-02 05:06:26 +00:00
distributed.tensor.parallel.rst [tp] doc fixes (#121431) 2024-03-08 17:46:44 +00:00
distributions.rst Add inverse gamma distribution and fix sign bug in PowerTransform. (#104501) 2023-11-01 02:26:25 +00:00
dlpack.rst
docutils.conf
export.ir_spec.rst [export] Remove torch._export.export (#119095) 2024-02-08 21:22:04 +00:00
export.rst Fix links rendering when surrounding code in Dynamo deepdive (#123427) 2024-04-13 04:55:15 +00:00
fft.rst
fsdp.rst [FSDP][state_dict] Expose optimizer state_dict config (#105949) 2023-08-21 07:29:49 +00:00
func.api.rst
func.batch_norm.rst
func.migrating.rst
func.rst
func.ux_limitations.rst
func.whirlwind_tour.rst
future_mod.rst Add swap_tensors path to nn.Module._apply (#117167) 2024-02-07 18:55:44 +00:00
futures.rst
fx.experimental.rst FakeTensorProp works with unbacked bindings (#124310) 2024-04-25 02:08:51 +00:00
fx.rst [Export] Add runtime assert to non-strict export (#123681) 2024-04-18 16:13:27 +00:00
hub.rst
index.rst torch.mtia module for MTIA device backend (#123612) 2024-04-24 20:51:20 +00:00
jit_builtin_functions.rst
jit_language_reference_v2.rst
jit_language_reference.rst
jit_python_reference.rst
jit_unsupported.rst Add support for torch.Generator type in TorchScript (#110413) 2023-11-21 23:07:21 +00:00
jit_utils.rst
jit.rst Doc test non packages (#110568) 2023-10-06 14:16:01 +00:00
library.rst Add torch.library.opcheck (#124496) 2024-04-23 21:48:00 +00:00
linalg.rst
logging.rst Change classification to beta for TORCH_LOGS (#118682) 2024-01-31 21:50:55 +00:00
masked.rst Doc test non packages (#110568) 2023-10-06 14:16:01 +00:00
math-quantizer-equation.png
meta.rst Add documentation for meta device (#119119) 2024-02-04 01:05:22 +00:00
miscellaneous_environment_variables.rst Add doc page for environment variables that effect PyTorch Runtime (#119087) 2024-02-15 21:41:38 +00:00
mobile_optimizer.rst
model_zoo.rst
monitor.rst
mps.rst Conform torch.mps to device module interface (#124676) 2024-04-23 18:38:48 +00:00
mtia.rst torch.mtia module for MTIA device backend (#123612) 2024-04-24 20:51:20 +00:00
multiprocessing.rst Doc test non packages (#110568) 2023-10-06 14:16:01 +00:00
name_inference.rst [docs] Properly link register_post_accumulate_grad_hook docs (#108157) 2023-08-29 22:13:33 +00:00
named_tensor.rst fixing named tensor unflatten example (#106921) 2023-08-22 18:00:10 +00:00
nested.rst
nn.attention.bias.rst Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689) 2024-01-24 22:28:04 +00:00
nn.attention.rst Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689) 2024-01-24 22:28:04 +00:00
nn.functional.rst Add RMSNorm module (#121364) 2024-03-29 18:05:28 +00:00
nn.init.rst
nn.rst Cleanup some duplicated placeholder py:module docs (#123244) 2024-04-05 03:18:53 +00:00
onnx_dynamo_onnxruntime_backend.rst Follow-up #108379 (#108905) 2023-09-09 01:38:36 +00:00
onnx_dynamo.rst [ez][doc] Fix sample code in onnx_dynamo.rst (#114770) 2023-11-29 19:27:52 +00:00
onnx_torchscript_supported_aten_ops.rst Refactor torch.onnx documentation (#108379) 2023-09-08 18:23:48 +00:00
onnx_torchscript.rst Follow-up #108379 (#108905) 2023-09-09 01:38:36 +00:00
onnx.rst fix pytorch version for onnx in doc (#124182) 2024-04-17 18:05:15 +00:00
optim.rst Added example regarding weight_decay distinction with per-parameter API (#117436) 2024-01-22 21:26:02 +00:00
package.rst Doc test non packages (#110568) 2023-10-06 14:16:01 +00:00
pipeline.rst [c10d] Deprecate torch.distributed.pipeline (#121464) 2024-03-08 19:55:02 +00:00
profiler.rst Doc test non packages (#110568) 2023-10-06 14:16:01 +00:00
quantization-accuracy-debugging.rst
quantization-backend-configuration.rst
quantization-support.rst [quant][pt2e] Add model_is_exported util function (#119726) 2024-02-16 19:29:36 +00:00
quantization.rst Cleanup some duplicated placeholder py:module docs (#123244) 2024-04-05 03:18:53 +00:00
random.rst
rpc.rst [BE] RPC is missing RRef docs (#106902) 2023-08-10 16:26:27 +00:00
signal.rst
size.rst Added a docstring for torch.Size.numel. (#124186) 2024-04-19 09:23:02 +00:00
sparse.rst Fix typo in sparse.rst (#121826) 2024-03-19 00:17:19 +00:00
special.rst
storage.rst
tensor_attributes.rst Include the scalar tensor auto-transfer in the doc (#119967) 2024-02-15 22:37:39 +00:00
tensor_view.rst
tensorboard.rst
tensors.rst Integrate swap_tensors into nn.Module.load_state_dict (#117913) 2024-02-09 22:32:29 +00:00
testing.rst
threading_environment_variables.rst Add doc page for environment variables that effect PyTorch Runtime (#119087) 2024-02-15 21:41:38 +00:00
torch_cuda_memory.rst Fix typo under docs directory (#110359) 2023-10-03 16:36:05 +00:00
torch_environment_variables.rst Add doc page for environment variables that effect PyTorch Runtime (#119087) 2024-02-15 21:41:38 +00:00
torch.ao.ns._numeric_suite_fx.rst
torch.ao.ns._numeric_suite.rst
torch.compiler_aot_inductor.rst Fix aoti doc to avoid cannot bind non-const lvalue reference error (#121672) 2024-03-12 23:43:40 +00:00
torch.compiler_api.rst [torch.export] Support is_compiling() flag for non-strict mode (#119602) 2024-02-29 05:52:51 +00:00
torch.compiler_best_practices_for_backends.rst Restructure torch.compile docs (#105376) 2023-07-28 20:58:57 +00:00
torch.compiler_cudagraph_trees.rst [docs] add mode="reduce-overhead" into torch.compile to enable cuda g… (#116529) 2024-01-05 22:54:20 +00:00
torch.compiler_custom_backends.rst Fix torch.compile links (#121824) 2024-03-15 19:49:37 +00:00
torch.compiler_dynamic_shapes.rst feat: Add min, max ranges to mark_dynamic API (#119737) 2024-03-07 23:26:03 +00:00
torch.compiler_dynamo_deepdive.rst Fix links rendering when surrounding code in Dynamo deepdive (#123427) 2024-04-13 04:55:15 +00:00
torch.compiler_dynamo_overview.rst Fix links rendering when surrounding code in Dynamo deepdive (#123427) 2024-04-13 04:55:15 +00:00
torch.compiler_fake_tensor.rst Restructure torch.compile docs (#105376) 2023-07-28 20:58:57 +00:00
torch.compiler_faq.rst Update torch.compile_faq w.r.t to functorch (#122213) 2024-04-05 03:29:11 +00:00
torch.compiler_fine_grain_apis.rst [torch.export] Support is_compiling() flag for non-strict mode (#119602) 2024-02-29 05:52:51 +00:00
torch.compiler_get_started.rst [Reland2] [inductor][BE] split triton_meta and inductor_meta (#112351) 2023-11-02 00:40:12 +00:00
torch.compiler_inductor_profiling.rst Restructure torch.compile docs (#105376) 2023-07-28 20:58:57 +00:00
torch.compiler_ir.rst [export] torch.export landing page (#108783) 2023-09-10 01:40:42 +00:00
torch.compiler_nn_module.rst Revert "Reland 3rd try [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#109323)" + Forward fixes + test (#110964) 2023-10-11 05:16:47 +00:00
torch.compiler_performance_dashboard.rst Restructure torch.compile docs (#105376) 2023-07-28 20:58:57 +00:00
torch.compiler_profiling_torch_compile.rst [docs] Update PT2+Profiler docs (#122272) 2024-03-28 17:52:28 +00:00
torch.compiler_transformations.rst Fix typo under docs directory (#110359) 2023-10-03 16:36:05 +00:00
torch.compiler_troubleshooting.rst Add a Dynamo deepdive to documentation (#122305) 2024-04-02 15:08:08 +00:00
torch.compiler.rst Fix links rendering when surrounding code in Dynamo deepdive (#123427) 2024-04-13 04:55:15 +00:00
torch.overrides.rst Doc test non packages (#110568) 2023-10-06 14:16:01 +00:00
torch.rst torch.mtia module for MTIA device backend (#123612) 2024-04-24 20:51:20 +00:00
type_info.rst
utils.rst New swap function (#111747) 2023-12-08 18:49:35 +00:00
xpu.rst [2/2] Intel GPU Runtime Upstreaming for Generator (#118613) 2024-02-28 05:28:11 +00:00