pytorch/caffe2
Jeffrey Dunn 25d657c701 Fix possible naming collision issue (#107743)
Summary: As pointed out in https://github.com/pytorch/pytorch/pull/107479, using a set prevents collisions like "a" => "a", "a" => "a_1", "a_1" => "a_1" (which should instead go to "a_1_1"). We can combine counters and a set to avoid this problem: the counters still give the performance benefit when names collide, at a very minor cost in the no-collision case.
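
For contrast, here is a minimal sketch of a counter-only scheme (an illustrative simplification with made-up names `_counters`/`next_name`, not the exact code from #107479) that reproduces the collision described above:
```
# Illustrative counter-only naming (assumed simplification): each basename gets a
# monotonically increasing suffix, but generated names are never checked against
# names that were requested directly.
from typing import Dict

_counters: Dict[str, int] = {}

def next_name(basename: str) -> str:
    idx = _counters.get(basename, 0)
    _counters[basename] = idx + 1
    return basename if idx == 0 else f"{basename}_{idx}"

print(next_name("a"))    # a
print(next_name("a"))    # a_1
print(next_name("a_1"))  # a_1  <- collides with the generated name above;
                         #        should have been a_1_1
```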

Test Plan:
Extract this code and run it:
```
# New version
from typing import Dict, Set

class Net:
    _net_names_used_counters: Dict[str, int] = {}
    _net_names_used: Set[str] = set()

    @staticmethod
    def current_prefix():
        return "test_prefix"

    @staticmethod
    def _get_next_net_name(basename):
        basename = "/".join(x for x in [Net.current_prefix(), basename] if x)
        # Start from the cached counter for this basename, then probe the set to
        # skip any names already handed out (including explicitly requested ones
        # such as "basename_1").
        idx = Net._net_names_used_counters.get(basename, 0)
        while (name := basename if idx == 0 else f"{basename}_{idx}") in Net._net_names_used:
            idx += 1
        Net._net_names_used_counters[basename] = idx + 1
        Net._net_names_used.add(name)
        return name

print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("x_basename"))
print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("x_basename"))
print(Net._get_next_net_name("basename_1"))

> test_prefix/basename
> test_prefix/x_basename
> test_prefix/basename_1
> test_prefix/basename_2
> test_prefix/x_basename_1
> test_prefix/basename_1_1
```

Differential Revision: D48576516

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107743
Approved by: https://github.com/zdevito
2023-09-08 17:39:27 +00:00
contrib Revert "Add _foreach_clamp (#106574)" 2023-08-11 21:05:04 +00:00
core [caffe2] Replace CAFFE_ prefixes in static_tracepoint.h macros with TORCH_ (#106380) 2023-08-03 21:51:36 +00:00
cuda_rtc
db
distributed [BE] Enforce missing override keyword (#104032) 2023-06-24 02:34:24 +00:00
experiments [BE] Remove dependency on six and future (#94709) 2023-02-14 09:14:14 +00:00
ideep [ONEDNN][BC-breaking] update onednn from v2.7.3 to v3.1.1 (#97957) 2023-08-25 12:13:18 +00:00
image
mobile Fix typos under caffe2 directory (#87840) 2022-10-28 04:53:36 +00:00
mpi [BE] Enforce missing override keyword (#104032) 2023-06-24 02:34:24 +00:00
observers
onnx
operators [caffe2] Add enforce inside ScatterAssignOp (#106882) 2023-08-10 21:46:13 +00:00
opt fix some typos (#106018) 2023-07-26 18:14:44 +00:00
perfkernels Revert "Use missing-prototypes in torch_cpu (#103725)" 2023-06-22 18:30:31 +00:00
predictor
proto extract torch.proto to its own library (#97614) 2023-03-30 10:35:03 +00:00
python Fix possible naming collision issue (#107743) 2023-09-08 17:39:27 +00:00
quantization [BE]: Apply PYI autofixes to various types (#107521) 2023-08-20 02:42:21 +00:00
queue [caffe2] Replace CAFFE_ prefixes in static_tracepoint.h macros with TORCH_ (#106380) 2023-08-03 21:51:36 +00:00
serialize fix inline_container.cc inplace loading (#108573) 2023-09-06 00:02:42 +00:00
sgd [CUDA] Drop CUDA 10 support (#89582) 2023-01-05 05:11:53 +00:00
share Revert "Use missing-prototypes in torch_cpu (#103725)" 2023-06-22 18:30:31 +00:00
test
transforms Revert "Use missing-prototypes in torch_cpu (#103725)" 2023-06-22 18:30:31 +00:00
utils [ROCm] use hipblas instead of rocblas (#105881) 2023-07-31 20:42:55 +00:00
video [codemod][llvm15] LLVM-15 fixes for caffe2/caffe2/video/video_decoder.cc (#89937) 2022-12-01 03:46:22 +00:00
__init__.py
.clang-format
BUILD_MODE.bzl
CMakeLists.txt Fix finding Intel MKL on Windows, as well as LAPACK, cuDNN and cuSPARSELt (#108040) 2023-09-08 14:41:00 +00:00
README.md Update README.md (#85534) 2022-11-17 01:06:15 +00:00
release-notes.md Fix typos under caffe2 directory (#87840) 2022-10-28 04:53:36 +00:00
requirements.txt
unexported_symbols.lds
VERSION_NUMBER
version_script.lds

Caffe2

Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.

Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.

Further Resources on Caffe2.ai