This PR enables `-Winconsistent-missing-destructor-override` and `-Winconsistent-missing-override`
and fixes violations.
### <samp>🤖 Generated by Copilot at 47e904e</samp>
This pull request updates various classes and operators in the `caffe2` and `aten` subdirectories to use the `override` specifier instead of the `virtual` keyword for destructors and other virtual functions that override a base-class function. This improves code readability, quality, and consistency with C++ best practices. It also modifies `./CMakeLists.txt` to enable warnings for these specifiers, but not to treat them as errors.
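As a minimal illustration (not code from this PR), the kind of change these warnings drive looks like:

```cpp
#include <cassert>

struct Base {
  virtual ~Base() = default;
  virtual int run() { return 0; }
};

// Before the cleanup: `virtual ~Derived();` alongside an `override`d member
// triggers -Winconsistent-missing-destructor-override.
struct Derived : Base {
  ~Derived() override = default;    // `override` instead of `virtual`
  int run() override { return 1; }  // compiler now verifies the signature
};
```

With `override`, a signature mismatch against the base class becomes a compile error instead of a silent new virtual function.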
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104032
Approved by: https://github.com/malfet
Summary:
This directory is opted-in to clang-format but is not format-clean. This blocks continuous formatting from being enabled on fbcode, and causes hassle for other codemods that leave inconsistent formatting. This diff runs clang-format, which is widely used and considered safe.
If you are unhappy with the formatting of a particular block, please *accept this diff* and then in a stacked commit undo the change and wrap that code in `// clang-format off` and `// clang-format on`, or `/* clang-format off */` and `/* clang-format on */`.
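For example, a protected region looks like this (illustrative snippet, not from this diff):

```cpp
// clang-format will not touch the hand-aligned region between the markers:
// clang-format off
const int kIdentity[3][3] = {
    {1, 0, 0},
    {0, 1, 0},
    {0, 0, 1},
};
// clang-format on
```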
drop-conflicts
Test Plan: sandcastle
Reviewed By: jerryzh168
Differential Revision: D22311706
fbshipit-source-id: 1ca59a82e96156a4a5dfad70ba3e64d44c5e762a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15126
I want to make people stop manufacturing StreamId from thin air,
and a first step is to make people use the default stream.
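A toy sketch of the intent (names hypothetical, not the actual c10/caffe2 API): callers ask an accessor for the default stream instead of manufacturing a StreamId themselves:

```cpp
// Hypothetical sketch: a stream is (device, StreamId), and the one blessed
// way to get a stream when you don't need a custom one is the accessor.
using StreamId = int;

struct Stream {
  int device;
  StreamId id;
};

Stream getDefaultStream(int device) {
  return Stream{device, /*id=*/0};  // default stream is id 0 by convention
}
```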
Reviewed By: dzhulgakov
Differential Revision: D13432922
fbshipit-source-id: 9f0d8d70646c50d979bde5ba3c3addeebac48a3d
Summary:
We may not want to run the operator in a prefetch manner if we don't need any prefetching.
The option allows any operator, without modification, to run in a normal fashion.
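A rough sketch of how such an option might look (argument name and classes hypothetical, not the actual operator code):

```cpp
#include <map>
#include <string>

// Hypothetical stand-in for an operator definition carrying arguments.
struct OpDef {
  std::map<std::string, bool> args;
  bool GetBoolArg(const std::string& name, bool def) const {
    auto it = args.find(name);
    return it == args.end() ? def : it->second;
  }
};

// A "dont_prefetch" argument (name assumed) lets the same op run
// synchronously instead of through the prefetch thread.
struct PrefetchOp {
  bool no_prefetch;
  explicit PrefetchOp(const OpDef& def)
      : no_prefetch(def.GetBoolArg("dont_prefetch", false)) {}

  bool Run() {
    if (no_prefetch) {
      return Prefetch() && CopyPrefetched();  // synchronous path
    }
    return RunWithPrefetchThread();           // normal prefetching path
  }
  bool Prefetch() { return true; }
  bool CopyPrefetched() { return true; }
  bool RunWithPrefetchThread() { return true; }
};
```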
Differential Revision: D6717720
fbshipit-source-id: 10114d68edd95258b823603d8532360120421649
Summary:
We shouldn't LOG(FATAL) in Caffe2 code under any conditions as it's a library.
The case where it failed was a bug in SparseAdaGrad that failed on empty input trying to launch 0-sized CUDA kernel.
Also, the trend for C2 core is to move from bools to exceptions, so I just moved CAFFE_ENFORCE directly into FinishDeviceComputation. Most call sites were already doing that or ignoring the return value (bad!).
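A simplified sketch of the bool-to-exception move (the macro and types here are stand-ins, not the real CAFFE_ENFORCE machinery):

```cpp
#include <stdexcept>

struct EnforceNotMet : std::runtime_error {
  using std::runtime_error::runtime_error;
};

// Stand-in for CAFFE_ENFORCE: throw instead of returning false.
#define SKETCH_ENFORCE(cond, msg) \
  do { if (!(cond)) throw EnforceNotMet(msg); } while (0)

struct Context {
  bool device_ok = true;
  void FinishDeviceComputation() {
    // Previously: `return device_ok;`, which callers often ignored (bad!).
    SKETCH_ENFORCE(device_ok, "device computation failed");
  }
};
```

The point is that a library must not be able to abort the host process; failures surface as catchable exceptions instead.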
Reviewed By: akyrola
Differential Revision: D5495913
fbshipit-source-id: 66f382369417a262da69d54470f720e7d04a5cdf
Summary: This uses `clang-tidy` to comment out unused parameters (in functions, methods and lambdas) in fbcode. Cases that the tool failed to handle are fixed manually.
Reviewed By: igorsugak
Differential Revision: D5454343
fbshipit-source-id: 5dee339b4334e25e963891b519a5aa81fbf627b2
Summary: In https://github.com/caffe2/caffe2/pull/802, slayton58 fixed an issue in ImageInputOp where the std and mean blobs were allocated on the wrong GPU (GPU 0). This fails when there is no P2P memory access. The fundamental reason was that ImageInputOp's constructor did not call SwitchToDevice. Operator's constructor does, but ImageInputOp inherits from PrefetchOp -> OperatorBase, neither of which does the switch. So I made PrefetchOperator do the switch (OperatorBase does not have a context, so it cannot).
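A stripped-down sketch of the fix (mock classes, not the real operator hierarchy; the global stands in for the active CUDA device):

```cpp
static int g_current_device = 0;  // stand-in for the active CUDA device

struct Context {
  int device;
  void SwitchToDevice() { g_current_device = device; }
};

struct PrefetchOperator {
  Context context_;
  explicit PrefetchOperator(int device) : context_{device} {
    context_.SwitchToDevice();  // the fix: do the switch before any
                                // derived-class member allocates memory
  }
};

struct ImageInputOp : PrefetchOperator {
  int mean_blob_device;
  explicit ImageInputOp(int device)
      : PrefetchOperator(device),
        mean_blob_device(g_current_device) {}  // lands on the right GPU now
};
```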
Reviewed By: asaadaldien
Differential Revision: D5258729
fbshipit-source-id: c615c60eb2047ad26249c5bcba57ab0ef21d00e4
Summary:
aaronmarkham this solves your Windows build issue. Basically:
(1) VS 2017 does not have CUDA support yet; we will be waiting on NVIDIA for that.
(2) VS 2015 and 2017 need different cmake generator strings.
This PR shows how to determine those, and also updates appveyor to run contbuild guards for the following 3 settings:
- VS2015 without cuda
- VS2017 without cuda
- VS2015 with cuda
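For reference, the corresponding CMake generator strings differ like this (64-bit invocations shown as an assumed example; the CUDA build is only expected on VS2015 per point (1)):

```shell
# VS 2015 and VS 2017 use different generator names:
cmake .. -G "Visual Studio 14 2015 Win64"   # VS2015
cmake .. -G "Visual Studio 15 2017 Win64"   # VS2017
```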
Closes https://github.com/caffe2/caffe2/pull/210
Differential Revision: D4745007
Pulled By: Yangqing
fbshipit-source-id: 50952552843abd0eb6f4145d9f132daeee3a6794
Summary:
This diff introduces a new net type 'singlethread_async' which is based on my investigation of DPER/hogwild MLP bottlenecks.
It uses only one CPU thread, but multiple CUDA streams on each GPU. This is implemented by having each net submit its list of operators to a central GPU-specific executor queue, with a thread that executes them asynchronously. This executor takes all tasks in the queue, executes them on separate CUDA streams, and then waits on them at the end. This solution can achieve >95% GPU utilization on 8 GPUs when a sufficient number of workers is used.
FYI: I also tried fancier solutions, such as using cudaStreamCallbacks(), but they did not have as good performance.
Improved the dper bench by adding the MomentumSGDUpdate operations and adding speed-test capabilities. During my testing I also noticed that the startup costs for initializing CUDA streams and contexts are high, so it is important to do a warm-up.
Reviewed By: Yangqing
Differential Revision: D4553941
fbshipit-source-id: bb00524bef653d75de026dd64097b8d9b7a0acb3
Summary:
Countless hours were spent debugging why ImageInputOp failed with a cryptic exception P56967302. It turns out that the assertion happened in the PrefetchOp destructor, triggered when an assertion failed in the ImageInputOp constructor. Because of this, the underlying problem was shadowed. I fixed this by not asserting on finalize_ if there is no prefetch thread running, and now the error is clean:
[enforce fail at image_input_op.h:105] scale_ > 0. -1 vs 0. Must provide the scaling factor.
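A minimal sketch of the failure mode and the fix (mock classes, not the real PrefetchOp): when a derived constructor throws, the base destructor still runs, and an unconditional assertion there would shadow the real error.

```cpp
#include <exception>
#include <stdexcept>

struct PrefetchOp {
  bool thread_started = false;
  bool finalized = false;
  void Finalize() { finalized = true; }
  ~PrefetchOp() {
    // The fix: only require finalization if prefetching actually began.
    if (thread_started && !finalized) {
      std::terminate();  // stand-in for the old unconditional assertion
    }
  }
};

struct ImageInputOp : PrefetchOp {
  explicit ImageInputOp(int scale) {
    if (scale <= 0) {
      // Constructor fails before any prefetch thread exists; the base
      // destructor must not assert, so this message reaches the user.
      throw std::runtime_error("scale_ > 0. Must provide the scaling factor.");
    }
    thread_started = true;  // would spawn the prefetch thread here
  }
};
```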
Reviewed By: Yangqing
Differential Revision: D4435105
fbshipit-source-id: 52f85a9fd30eea396c9faca54b6d946fa847b7ff
(1) various bugfixes.
(2) Tensor is now a class independent of its data type. This allows us
to write type-independent operators more easily.
(3) code conventions change a bit: dtype -> T, Tensor<*Context> -> Tensor* alias.
(4) ParallelNet -> DAGNet to be more consistent with what it does.
(5) Caffe's own flags library instead of gflags.
(6) Caffe's own logging library instead of glog, but glog can be chosen with
compile-time definition -DCAFFE2_USE_GOOGLE_GLOG. As a result, glog macros
like CHECK, DCHECK now have prefix CAFFE_, and LOG(*) now becomes
CAFFE_LOG_*.
(7) an optional protobuf inclusion, which can be chosen with USE_SYSTEM_PROTOBUF
in build_env.py.
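Regarding (2), a toy illustration of a dtype-independent tensor (mock class, not the real caffe2 Tensor): the tensor stores untyped bytes plus a runtime type tag, and operators template only on T where they must.

```cpp
#include <cassert>
#include <cstddef>
#include <typeinfo>
#include <vector>

class Tensor {
 public:
  template <typename T>
  T* mutable_data(std::size_t n) {
    bytes_.resize(n * sizeof(T));  // one storage for every dtype
    type_ = &typeid(T);
    size_ = n;
    return reinterpret_cast<T*>(bytes_.data());
  }
  template <typename T>
  const T* data() const {
    assert(type_ && *type_ == typeid(T));  // dtype checked at runtime
    return reinterpret_cast<const T*>(bytes_.data());
  }
  std::size_t size() const { return size_; }

 private:
  std::vector<char> bytes_;
  const std::type_info* type_ = nullptr;
  std::size_t size_ = 0;
};

// A type-independent operator body, instantiated per dtype T.
template <typename T>
void Scale(Tensor& t, T factor) {
  T* p = t.mutable_data<T>(t.size());
  for (std::size_t i = 0; i < t.size(); ++i) p[i] *= factor;
}
```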