pytorch/torch
Chirag Pandya fd90991790 [rfc] opentelemetry in pytorch (#122999)
1. Add the current latest release of opentelemetry-cpp (v1.14.2) to the PyTorch library as a submodule.
Steps:
```
$cd pytorch
$git submodule add https://github.com/open-telemetry/opentelemetry-cpp.git third_party/opentelemetry-cpp
$cd third_party/opentelemetry-cpp
$git checkout v1.14.2
$cd ../..
$git add third_party/opentelemetry-cpp .gitmodules
$git commit
```
Expected change in checkout size:
```
(/home/cpio/local/a/pytorch-env) [cpio@devvm17556.vll0 ~/local/pytorch (gh/c-p-i-o/otel)]$ git count-objects -vH
count: 654
size: 3.59 MiB
in-pack: 1229701
packs: 17
size-pack: 1.17 GiB
prune-packable: 76
garbage: 0
size-garbage: 0 bytes
```
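
The submodule-add flow above can be rehearsed end to end in a throwaway directory before touching the real checkout. In this sketch a local repo with a `v1.14.2` tag stands in for the opentelemetry-cpp remote; `dep`, `app`, and the identities are stand-ins, not the actual PyTorch layout:

```shell
set -e
tmp=$(mktemp -d)

# Stand-in "upstream" repo carrying a v1.14.2 tag
git init -q "$tmp/dep"
git -C "$tmp/dep" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
git -C "$tmp/dep" tag v1.14.2

# The superproject side: add the submodule, pin it, commit the pin
git init -q "$tmp/app"
cd "$tmp/app"
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
# newer git refuses local-path submodules unless explicitly allowed
git -c protocol.file.allow=always submodule --quiet add "$tmp/dep" third_party/dep
git -C third_party/dep checkout -q v1.14.2
git add .gitmodules third_party/dep
git -c user.email=you@example.com -c user.name=you \
    commit -q -m "pin dep at v1.14.2"
git submodule status third_party/dep   # prints the pinned SHA with (v1.14.2)
```

The key detail the rehearsal makes visible is that the superproject records only a gitlink (the pinned SHA) plus the `.gitmodules` entry, which is why the direct checkout size grows by just a few MiB while `size-pack` carries the submodule history.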

2.

TODO
- [x] Figure out how dynamic linking works. App builders will need to `target_include` opentelemetry-cpp when building against it.
- [ ] Add examples of how to use opentelemetry + pytorch.
- [ ] Add tests + documentation (e.g. using a null opentelemetry implementation).
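
The "null opentelemetry implementation" in the last item is an instance of the null-object pattern: instrumented code always calls a tracer, and when no backend is configured the tracer is a no-op, so the instrumentation costs almost nothing. A minimal Python sketch of the idea (the names here are illustrative, not the opentelemetry-cpp API):

```python
class NoopSpan:
    """Span that records nothing; safe to use as a context manager."""
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False  # never swallow exceptions

    def set_attribute(self, key, value):
        pass  # deliberately a no-op


class NoopTracer:
    """Stands in when no telemetry backend is configured."""
    def start_span(self, name):
        return NoopSpan()


_tracer = NoopTracer()  # swapped for a real tracer when telemetry is enabled


def instrumented_op(x):
    # Instrumentation site: identical whether the tracer is real or null.
    with _tracer.start_span("instrumented_op") as span:
        span.set_attribute("x", x)
        return x * 2
```

Because the instrumented call path is identical either way, tests can exercise instrumented code without a collector running.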

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122999
Approved by: https://github.com/ezyang
2024-04-21 15:20:21 +00:00
_awaits
_C Revert "Build device generic torch.Stream and torch.Event based on c10::Stream/Event (#123611)" 2024-04-19 22:44:26 +00:00
_C_flatbuffer
_custom_op Move schema inference to torch._library (#124199) 2024-04-19 17:56:30 +00:00
_decomp Refactored implementation for upsample_nearest decompostions (#122783) 2024-04-17 23:05:40 +00:00
_dispatch
_dynamo [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
_export [export] handle Dim.lower = 0, 1 for ep.run_decompositions() (#123602) 2024-04-19 21:29:36 +00:00
_functorch [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
_higher_order_ops [inductor] for UserDefinedTritonKernels don't mark all inputs as mutating (#124425) 2024-04-21 06:00:14 +00:00
_inductor [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
_lazy
_library [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
_logging [TORCH_TRACE] Record stack when no compile context is available (#122644) 2024-03-26 19:30:52 +00:00
_numpy [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_prims [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
_prims_common [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
_refs [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
_subclasses Avoid cuda init to FakeTensorMode (#124413) 2024-04-19 02:39:35 +00:00
_vendor
amp [BE][Ez]: Fix minor potential perf regression from #123960 (#124013) 2024-04-15 16:51:45 +00:00
ao [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
autograd [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
backends [BE]: Optimize min/max/sum comprehensions C419 (#123960) 2024-04-12 23:54:15 +00:00
compiler
contrib
cpu
csrc [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
cuda [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
distributed [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
distributions [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
export [export] handle Dim.lower = 0, 1 for ep.run_decompositions() (#123602) 2024-04-19 21:29:36 +00:00
fft
func
futures
fx [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
jit [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
legacy
lib [codemod][lowrisk] Remove unused exception parameter from caffe2/caffe2/image/image_input_op.h (#123056) 2024-04-04 17:24:43 +00:00
linalg
masked
monitor
mps
multiprocessing
nested [NT] Fix typo in declared strides variable (#123856) 2024-04-13 19:55:57 +00:00
nn [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
onnx [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
optim Better Error Message in ChainedScheduler and SequentialLR (#121633) 2024-04-19 13:37:41 +00:00
package
profiler [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
quantization
signal
sparse [sparse] Add fast semi-structured spasification kernels (#122350) 2024-04-19 13:31:58 +00:00
special
testing [BE]: Update ruff to 0.4.1 (#124549) 2024-04-21 14:06:23 +00:00
utils [BE]: FURB142 - Remove set mutations. Use set update (#124551) 2024-04-21 14:12:33 +00:00
xpu Support gpu trace on XPU (#121795) 2024-03-30 13:07:53 +00:00
__config__.py
__future__.py
__init__.py Revert "torch.mtia module for MTIA device backend (#123612)" 2024-04-19 22:44:26 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_deploy.py
_guards.py [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_jit_internal.py Adjust logging content for TS usage logging (#123133) 2024-04-03 18:54:26 +00:00
_linalg_utils.py
_lobpcg.py
_lowrank.py Fix svd_lowrank parameter M (#122681) 2024-03-29 18:06:38 +00:00
_meta_registrations.py Extend int[48]mm ops to float32 input (#124287) 2024-04-17 23:10:49 +00:00
_namedtensor_internals.py
_ops.py Support aot_export torchbind op (#123370) 2024-04-19 17:17:27 +00:00
_python_dispatcher.py
_size_docs.py Added a docstring for torch.Size.numel. (#124186) 2024-04-19 09:23:02 +00:00
_sources.py
_storage_docs.py
_streambase.py [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_tensor_docs.py Fix doc example of masked_scatter (#123664) 2024-04-09 22:15:12 +00:00
_tensor_str.py
_tensor.py Disallow {FakeTensor,FunctionalTensor}.data_ptr (#122514) 2024-03-26 23:55:42 +00:00
_torch_docs.py Graph-Safe RNG State Exchange for Tensor Parallelism (#114068) 2024-03-27 01:14:38 +00:00
_utils_internal.py rename sl to strobelight (#124455) 2024-04-19 22:50:13 +00:00
_utils.py Revert "torch.mtia module for MTIA device backend (#123612)" 2024-04-19 22:44:26 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Add testing and fix issues for weights_only load for LRScheduler (#123775) 2024-04-16 20:29:27 +00:00
abi-check.cpp
CMakeLists.txt [rfc] opentelemetry in pytorch (#122999) 2024-04-21 15:20:21 +00:00
custom_class_detail.h
custom_class.h
extension.h
functional.py
hub.py
library.h Rename impl_abstract to register_fake, part 1/2 (#123937) 2024-04-17 12:46:01 +00:00
library.py Change register_autograd to reflect ordering of setup_context and backward (#124403) 2024-04-19 17:56:30 +00:00
overrides.py Revert "torch.mtia module for MTIA device backend (#123612)" 2024-04-19 22:44:26 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py
storage.py
torch_version.py Enable UFMT on torch_version.py and types.py (#123131) 2024-04-09 15:03:17 +00:00
types.py Enable UFMT on torch_version.py and types.py (#123131) 2024-04-09 15:03:17 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.