pytorch/caffe2
Edward Z. Yang 9bce208dfb Replace follow_imports = silent with normal (#118414)
This is a lot of files changed! Don't panic! Here's how it works:

* Previously, we set `follow_imports = silent` in our mypy.ini configuration. Per https://mypy.readthedocs.io/en/stable/running_mypy.html#follow-imports, this means that whenever a checked file imports a module that is not itself listed as a file to be typechecked, mypy still typechecks that module as normal but suppresses all errors that occur in it.
* When mypy is run inside lintrunner, the list of files is precisely the set of files matched by the include globs in .lintrunner.toml, minus the files in the exclude list.
* The top-level directive `# mypy: ignore-errors` instructs mypy to typecheck the file as normal, but ignore all errors.
* Therefore, setting `follow_imports = normal` should be equivalent, provided we put `# mypy: ignore-errors` at the top of every file that was previously excluded from the file list (see the sketch after this list).
* Having done this, we can remove the exclude list from .lintrunner.toml, since excluding a file from typechecking is baked into the files themselves.
* torch/_dynamo and torch/_inductor were previously in the exclude list, because they were covered by MYPYINDUCTOR. It is not OK to mark these as `# mypy: ignore-errors` as this will impede typechecking on the alternate configuration. So they are temporarily being checked twice, but I am suppressing the errors in these files as the configurations are not quite the same. I plan to unify the configurations so this is only a temporary state.
* There were some straggler type errors after these changes somehow, so I fixed them as needed. There weren't that many.
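
To make that equivalence concrete, here is a minimal sketch (the module contents are made up for illustration) of what the top of a previously excluded file looks like after the codemod:

```python
# mypy: ignore-errors
# With `follow_imports = normal`, mypy still analyzes this module whenever a
# checked file imports it, but the file-level directive above suppresses every
# error it would otherwise report, which matches the old behavior of excluding
# the file under `follow_imports = silent`.


def legacy_helper(flag: bool) -> int:
    # Normally an error (returning str from a function annotated to return int);
    # silenced by the directive at the top of the file.
    return "yes" if flag else 0
```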

In the future, to start type checking a file, just remove the ignore-errors directive from the top of the file.
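
As a rough sketch only (this helper is not part of the PR; the script name and path are illustrative), undoing the codemod for a single file could look like:

```python
import sys

DIRECTIVE = "# mypy: ignore-errors\n"

# Hypothetical usage: python strip_ignore_errors.py torch/some_module.py
path = sys.argv[1]
with open(path, "r+") as f:
    lines = f.readlines()
    if lines and lines[0] == DIRECTIVE:
        rest = lines[1:]
        # The codemod below also inserted a blank line after the directive.
        if rest and rest[0] == "\n":
            rest = rest[1:]
        f.seek(0)
        f.writelines(rest)
        f.truncate()
```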

The codemod was done with this script authored by GPT-4:

```python
import glob

# The glob patterns that were previously in the exclude list (elided here).
exclude_patterns = [
    ...
]

for pattern in exclude_patterns:
    for filepath in glob.glob(pattern, recursive=True):
        if filepath.endswith('.py'):
            # Prepend the directive, leaving the rest of the file untouched.
            with open(filepath, 'r+') as f:
                content = f.read()
                f.seek(0, 0)
                f.write('# mypy: ignore-errors\n\n' + content)
```

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118414
Approved by: https://github.com/thiagocrepaldi, https://github.com/albanD
2024-01-27 02:44:11 +00:00
| Name | Last commit | Date |
|------|-------------|------|
| contrib | Replace follow_imports = silent with normal (#118414) | 2024-01-27 02:44:11 +00:00 |
| core | Add function to materialize COW storages (#117053) | 2024-01-10 15:34:16 +00:00 |
| cuda_rtc | | |
| db | | |
| distributed | [4/N] Add -Wdeprecated and related fixes (#110204) | 2023-10-07 19:46:08 +00:00 |
| experiments | [BE] Remove dependency on six and future (#94709) | 2023-02-14 09:14:14 +00:00 |
| ideep | [ONEDNN][BC-breaking] update onednn from v2.7.3 to v3.1.1 (#97957) | 2023-08-25 12:13:18 +00:00 |
| image | | |
| mobile | | |
| mpi | [BE] Enforce missing override keyword (#104032) | 2023-06-24 02:34:24 +00:00 |
| observers | | |
| onnx | [codemod][highrisk] Fix shadowed variable in caffe2/caffe2/onnx/onnx_exporter.cc (#117996) | 2024-01-22 22:57:06 +00:00 |
| operators | [codemod] Fix shadows in PyTorch (#117562) | 2024-01-17 20:33:50 +00:00 |
| opt | [codemod][lowrisk] Remove extra semi colon from caffe2/caffe2/opt/optimizer.cc (#115018) | 2023-12-13 23:11:33 +00:00 |
| perfkernels | Revert "Use missing-prototypes in torch_cpu (#103725)" | 2023-06-22 18:30:31 +00:00 |
| predictor | | |
| proto | extract torch.proto to its own library (#97614) | 2023-03-30 10:35:03 +00:00 |
| python | Revert "[Reland2] Update NVTX to NVTX3 (#109843)" | 2023-12-05 16:10:20 +00:00 |
| quantization | [BE]: Apply PYI autofixes to various types (#107521) | 2023-08-20 02:42:21 +00:00 |
| queue | [caffe2] Replace CAFFE_ prefixes in static_tracepoint.h macros with TORCH_ (#106380) | 2023-08-03 21:51:36 +00:00 |
| serialize | Reduce single reader check time for inline_container (#113328) | 2023-11-09 22:02:28 +00:00 |
| sgd | [CUDA] Drop CUDA 10 support (#89582) | 2023-01-05 05:11:53 +00:00 |
| share | Revert "Use missing-prototypes in torch_cpu (#103725)" | 2023-06-22 18:30:31 +00:00 |
| test | | |
| transforms | [1/N] Enable Wunused-result and Wunused-variable in torch targets (#110722) | 2023-10-08 23:43:45 +00:00 |
| utils | [codemod][lowrisk] Remove extra semi colon from caffe2/c10/util/Float8_e5m2.h (#115761) | 2024-01-04 02:02:26 +00:00 |
| video | | |
| __init__.py | | |
| .clang-format | | |
| BUILD_MODE.bzl | | |
| CMakeLists.txt | Add missing cuda libraries for context_gpu_test (#117493) | 2024-01-25 18:04:23 +00:00 |
| README.md | | |
| release-notes.md | | |
| requirements.txt | | |
| unexported_symbols.lds | | |
| VERSION_NUMBER | | |
| version_script.lds | | |

Caffe2

Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.

Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.

Further Resources on Caffe2.ai