pytorch/torch
Philip Meier d64bc8f0f8 use sourceless builder for builtin getattr (#113340)
In TorchVision we use the following (simplified) dispatch mechanism:

```python
import torch

def kernel1(tensor):
    return tensor + 2

def dispatcher1(input):
    kernel = get_kernel(dispatcher1, type(input))
    return kernel(input)

def kernel2(tensor):
    return tensor - 2

def dispatcher2(input):
    kernel = get_kernel(dispatcher2, type(input))
    return kernel(input)

# We actually use the function and type as keys rather than their names.
# However, that is currently not supported; it should be easy to add after
# https://github.com/pytorch/pytorch/pull/111196
REGISTRY = {
    "dispatcher1": {"Tensor": kernel1},
    "dispatcher2": {"Tensor": kernel2},
}

def get_kernel(dispatcher, input_type):
    dispatcher_registry = REGISTRY[dispatcher.__name__]
    # Walk the MRO and use the kernel registered for the most specific type.
    for cls in input_type.__mro__:
        if cls.__name__ in dispatcher_registry:
            return dispatcher_registry[cls.__name__]
    raise TypeError(f"no kernel registered for {input_type.__name__}")
```
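For reference, here is a quick eager-mode sanity check of the dispatch mechanism above (the asserts are an illustration added here, not part of the original snippet):

```python
# Eager-mode check: kernel1 adds 2, kernel2 subtracts 2.
assert dispatcher1(torch.tensor(3)).item() == 5
assert dispatcher2(torch.tensor(3)).item() == 1
```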

This can be compiled without graph breaks:

```python
cfn = torch.compile(dispatcher1, fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 5)

cfn = torch.compile(dispatcher2, fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 1)
```

However, if we start chaining these calls, we hit some issues:

```python
class Pipeline(torch.nn.Module):
    def forward(self, input):
        input = dispatcher1(input)
        input = dispatcher2(input)
        return input

cfn = torch.compile(Pipeline(), fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 3)
```

```
Can't access members of type(obj) for a generated custom object. Please use __class__ instead
```

The error message is not really helpful here. What happens is the following: when `dispatcher1` is compiled, `get_kernel` gets inlined, and its result is a freshly generated tensor without a source. That means that when we hit `dispatcher2`, the `type` call no longer happens on an input with a source. Thus, for the first dispatcher we hit the top branch, while for the second we hit the bottom one:

addb8e29cd/torch/_dynamo/variables/builtin.py (L1264-L1268)
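Paraphrasing the shape of those lines (a rough sketch with approximated identifiers, not the verbatim source):

```python
# Rough sketch of the referenced branch; names are approximated, not
# copied verbatim from builtin.py.
if obj.source is not None:
    # Top branch: the input has a source, so its type can be rebuilt
    # through a VariableBuilder with a guardable TypeSource.
    return VariableBuilder(tx, TypeSource(obj.source))(py_type)
else:
    # Bottom branch: no source, so the type is wrapped as a constant,
    # which later surfaces as the unhelpful error above.
    return ConstantVariable(py_type)
```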

And the error message quoted above originates from the type being treated as a constant. This PR replaces that handling with a `SourcelessBuilder`.

With that fix in place, we hit another error, this time pointing to `input_type.__mro__`:

```
AssertionError: Consider SourcelessBuilder for ephemeral objects, usually objects created locally.
```

The fix is similar: instead of always using a `VariableBuilder` here, we use a `SourcelessBuilder` in case there is no `source`:

addb8e29cd/torch/_dynamo/variables/builtin.py (L1167-L1168)
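Sketching the pattern of that fix as well (again with approximated identifiers, mirroring the description rather than the actual diff):

```python
# Approximate shape of the fix: fall back to a SourcelessBuilder for
# ephemeral, locally created values that have no source.
if source is not None:
    var = VariableBuilder(tx, source)(value)
else:
    var = SourcelessBuilder()(tx, value)
```

With both changes in place, the chained example from above compiles without graph breaks:

```python
cfn = torch.compile(Pipeline(), fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 3)  # (3 + 2) - 2
```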

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113340
Approved by: https://github.com/peterbell10, https://github.com/lezcano
2023-11-13 14:29:17 +00:00
_awaits
_C [inductor] Make graph.py pass follow_imports typechecking (#113518) 2023-11-11 22:15:46 +00:00
_C_flatbuffer
_custom_op Use pytree.tree_leaves everywhere (#112324) 2023-10-30 03:39:04 +00:00
_decomp [decomp] Fix _scaled_dot_product_flash_attention decomposition bug (#113102) 2023-11-08 21:47:37 +00:00
_dispatch
_dynamo use sourceless builder for builtin getattr (#113340) 2023-11-13 14:29:17 +00:00
_export Revert "[pytree] register pytree node type in both C++ pytree and Python pytree (#112111)" 2023-11-10 17:24:40 +00:00
_functorch Revert "[pytree] register pytree node type in both C++ pytree and Python pytree (#112111)" 2023-11-10 17:24:40 +00:00
_higher_order_ops [Inductor] Add Dynamic shape support to user defined triton kernels (#112523) 2023-11-02 23:58:50 +00:00
_inductor [inductor] Handle variance corrections larger than number of data points (#113284) 2023-11-13 11:16:17 +00:00
_lazy
_library torch.library: Create helper function is_functional_schema (#111660) 2023-10-27 15:20:25 +00:00
_logging Fix logging exception/stacks from logging (#113394) 2023-11-10 01:17:29 +00:00
_numpy Avoid calling as_tensor twice (#112866) 2023-11-07 16:10:59 +00:00
_prims Grandfather in some more pytorch ops to be pt2_compliant (#113050) 2023-11-09 02:35:33 +00:00
_prims_common Allow inferring divisibility on unbacked SymInts and do replacement trick (#113165) 2023-11-10 21:28:02 +00:00
_refs Remove TODOs to add docstrings (#113197) 2023-11-08 00:34:26 +00:00
_subclasses Prefer e.is_number over not e.free_symbols in SymPy (#112688) 2023-11-06 20:05:13 +00:00
amp
ao [quant][pt2e] Add transform_for_annotation method in Quantizer (#113115) 2023-11-09 20:23:29 +00:00
autograd Fix docstring errors in reductions.py, spawn.py, pool.py, parameter.py, cpp.py, grad.py, __init__.py, profiler.py, queue.py, graph.py (#113052) 2023-11-10 21:19:17 +00:00
backends docs: fix docstring errors in quantized modules and others (#112695) 2023-11-07 23:52:16 +00:00
compiler Fix torch.compiler.cudagraph_mark_step_begin example (#112807) 2023-11-07 04:15:31 +00:00
contrib Fixed docstring errors in _fuser.py, _state.py, __init__.py, _freeze.py, _async.py, _recursive.py, _tensorboard_vis.py, _trace.py, _await.py, _check.py, _serialization.py, _script.py, annotations.py, _monkeytype_config.py (#113371) 2023-11-12 03:19:02 +00:00
cpu [Dist] Enable FSDP on CPU (#112145) 2023-11-07 01:37:02 +00:00
csrc [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404) 2023-11-11 15:08:07 +00:00
cuda [pytorch] Remove dot if no suffix (#113273) 2023-11-12 15:41:27 +00:00
distributed [state_dict][11/N] Implement cpu_offload and full_state_dict for get_state_dict (#112837) 2023-11-13 10:03:06 +00:00
distributions Add inverse gamma distribution and fix sign bug in PowerTransform. (#104501) 2023-11-01 02:26:25 +00:00
export [pytree] align function signature between C++ and Python pytree (#112482) 2023-11-10 02:37:48 +00:00
fft
func
futures
fx [Dynamo] Match closures by code ID (#109427) 2023-11-12 08:20:14 +00:00
jit Fixed docstring errors in _fuser.py, _state.py, __init__.py, _freeze.py, _async.py, _recursive.py, _tensorboard_vis.py, _trace.py, _await.py, _check.py, _serialization.py, _script.py, annotations.py, _monkeytype_config.py (#113371) 2023-11-12 03:19:02 +00:00
legacy
lib
linalg [Docs] fix typo in example of torch.linalg.solve_triangular (#112361) 2023-10-30 10:33:14 +00:00
masked docs: Add docstring for torch.masked._ops.logaddexp (#113206) 2023-11-08 22:45:35 +00:00
monitor
mps
multiprocessing Fix docstring errors in reductions.py, spawn.py, pool.py, parameter.py, cpp.py, grad.py, __init__.py, profiler.py, queue.py, graph.py (#113052) 2023-11-10 21:19:17 +00:00
nested [nested tensor]add split and layer_norm_backward operations (#113108) 2023-11-08 07:44:35 +00:00
nn Fix docstring errors in reductions.py, spawn.py, pool.py, parameter.py, cpp.py, grad.py, __init__.py, profiler.py, queue.py, graph.py (#113052) 2023-11-10 21:19:17 +00:00
onnx [ONNX] Refactor MaxPool to support dynamic inputs (#113318) 2023-11-10 23:23:49 +00:00
optim Deprecated verbose parameter in LR schedulers (#111302) 2023-11-10 23:17:27 +00:00
package Add file name and size to the serialization metadata logging (#113077) 2023-11-09 11:14:24 +00:00
profiler [Profiler][Easy] Make timestamps in memory timelines be in microseconds (us) (#112772) 2023-11-03 00:41:41 +00:00
quantization
signal
sparse Fixed docstring errors in _fuser.py, _state.py, __init__.py, _freeze.py, _async.py, _recursive.py, _tensorboard_vis.py, _trace.py, _await.py, _check.py, _serialization.py, _script.py, annotations.py, _monkeytype_config.py (#113371) 2023-11-12 03:19:02 +00:00
special
testing [dynamo] Add run_inductor_tests entrypoint (#113278) 2023-11-11 08:54:43 +00:00
utils [BE] Remove stale CUDA version check from cpp_extension.py (#113447) 2023-11-11 00:20:08 +00:00
__config__.py
__future__.py
__init__.py Make dynamo configs more amenable to static type checking (#112130) 2023-11-08 21:17:45 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py [inductor] Make {output_graph,pad_mm}.py pass follow_imports typechecking (#113413) 2023-11-11 22:15:46 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Register SymInt-aware meta function for mm out, symintify resize (#113202) 2023-11-10 14:27:05 +00:00
_namedtensor_internals.py
_ops.py Update impl_abstract_pystub to be less boilerplatey (#113182) 2023-11-08 00:39:00 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py Document torch.from_file and fix UntypedStorage.from_file docs (#111688) 2023-10-25 19:28:11 +00:00
_streambase.py [dynamo][stream]support device-agnostic stream in dynamo and capture stream/event method in fx graph (#108312) 2023-10-22 13:22:58 +00:00
_tensor_docs.py Rewrite docs so that it is OK to use record_stream before uses (#113282) 2023-11-08 21:24:50 +00:00
_tensor_str.py
_tensor.py [dynamo] Make {testing,debug_utils,utils}.py pass follow_imports typechecking (#113519) 2023-11-11 22:15:46 +00:00
_torch_docs.py Add torch.utils.deterministic.fill_uninitialized_memory flag (#111377) 2023-11-01 16:10:09 +00:00
_utils_internal.py Update impl_abstract_pystub to be less boilerplatey (#113182) 2023-11-08 00:39:00 +00:00
_utils.py Fix torch.load(..., weights_only=True) for NT (#112516) 2023-11-02 14:41:04 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Fix torch.load(..., weights_only=True) for NT (#112516) 2023-11-02 14:41:04 +00:00
abi-check.cpp
CMakeLists.txt Revert "[BE] [cuDNN] Always build assuming cuDNN >= 8.0 (#95722)" 2023-11-10 17:26:36 +00:00
custom_class_detail.h
custom_class.h
extension.h
functional.py Improve torch.unique docs (#113424) 2023-11-10 16:36:30 +00:00
hub.py
library.h [fbgemm_gpu] add pt2_compliant tag to some ops (#113201) 2023-11-10 00:32:30 +00:00
library.py Update impl_abstract_pystub to be less boilerplatey (#113182) 2023-11-08 00:39:00 +00:00
overrides.py Revert "Add support for torch.Generator type in TorchScript (#110413)" 2023-11-07 15:53:32 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py added 'weights_only' param in torch.load examples (#112860) 2023-11-06 21:17:36 +00:00
storage.py Clarify difference between share_memory and from_file (#111856) 2023-11-01 03:25:09 +00:00
torch_version.py
types.py Unify torch.SymInt and torch.types.SymInt (#110573) 2023-10-24 16:17:23 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.