Commit Graph

12 Commits

Benny Chen
f25322fb97 Fix issues under caffe round 1
Summary: Some automation to fix uninitialized members in caffe2 code. Ran a canary to make sure there are no regressions in prod, but not sure how to test comprehensively for caffe2

Reviewed By: ezyang

Differential Revision: D13776185

fbshipit-source-id: fb2a479971cc0276d8784be1c44f01252410bd24
2019-01-23 19:04:59 -08:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Victor Gao
34be12353b comment out unused parameters
Summary: This uses `clang-tidy` to comment out unused parameters (in functions, methods and lambdas) in fbcode. Cases that the tool failed to handle are fixed manually.

Reviewed By: igorsugak

Differential Revision: D5454343

fbshipit-source-id: 5dee339b4334e25e963891b519a5aa81fbf627b2
2017-07-21 15:14:43 -07:00
Aapo Kyrola
f795bf0b2a Revert D5273337: [caffe2] Pare down on excessive futex() syscalls from the DAGNet executor
Summary: This reverts commit 67d50f9d838e9a9ef3682d9a3b5ba59c7d33350d

Differential Revision: D5273337

fbshipit-source-id: 85e2f3ef228871beed2afef569407474c8f8acb9
2017-06-21 01:48:24 -07:00
James Reed
956e40f0ea Pare down on excessive futex() syscalls from the DAGNet executor
Summary:
For our CNN training runs I noticed an excessive number of futex() syscalls. Using strace I narrowed this down to excessive calls to std::condition_variable member functions.

1) I added a PushBulk member function to SimpleQueue that pushes all items in a vector onto the queue and issues a single std::condition_variable::notify_all() call, rather than a separate notify_one() call per item.
2) In DAGNet::WorkerFunction, we were calling std::condition_variable::notify_one() after every single op chain completed, even though it should only be called when the number of remaining operators drops to 0 or the execution fails. I added a conditional check around this call to further cut down on unnecessary syscalls.

Reviewed By: pietern

Differential Revision: D5273337

fbshipit-source-id: 67d50f9d838e9a9ef3682d9a3b5ba59c7d33350d
2017-06-19 14:19:39 -07:00
Aapo Kyrola
ba1d592b5f New 40% faster net-type for MLP on GPUs
Summary:
This diff introduces a new net type, 'singlethread_async', based on my investigation of DPER/hogwild MLP bottlenecks. It uses only one CPU thread, but multiple CUDA streams on each GPU. This is implemented by having each Net submit its list of operators to a central GPU-specific executor queue, with a thread that executes them asynchronously. This executor takes all tasks in the queue, executes them on separate CUDA streams, and then waits on them at the end. This solution can achieve >95% GPU utilization on 8 GPUs when a sufficient number of workers is used.

FYI: I also tried fancier solutions such as using cudaStreamCallbacks(), but they did not perform as well.

Improved the dper bench by adding the MomentumSGDUpdate operations and adding speed-test capabilities. During my testing I also noticed that the startup costs for initializing CUDA streams and contexts are high, so it is important to do a warm-up.

Reviewed By: Yangqing

Differential Revision: D4553941

fbshipit-source-id: bb00524bef653d75de026dd64097b8d9b7a0acb3
2017-02-21 21:40:15 -08:00
Yangqing Jia
589398950f fbsync at f5a877 2016-11-18 15:41:06 -08:00
Yangqing Jia
6463eebc7b chunky sync - build scripts to be written 2016-07-21 10:16:42 -07:00
Yangqing Jia
648d1b101a A consolidation of a couple of random pieces of weekend work.
(1) various bugfixes.
(2) Tensor is now a class independent from its data type. This allows us
    to write easier type-independent operators.
(3) code convention changes a bit: dtype -> T, Tensor<*Context> -> Tensor* alias.
(4) ParallelNet -> DAGNet to be more consistent with what it does.
(5) Caffe's own flags library instead of gflags.
(6) Caffe's own logging library instead of glog, but glog can be chosen with
    compile-time definition -DCAFFE2_USE_GOOGLE_GLOG. As a result, glog macros
    like CHECK, DCHECK now have prefix CAFFE_, and LOG(*) now becomes
    CAFFE_LOG_*.
(7) an optional protobuf inclusion, which can be chosen with USE_SYSTEM_PROTOBUF
    in build_env.py.
2015-10-11 23:14:06 -07:00
Yangqing Jia
1e7730800f bottlefeeding. 2015-06-30 09:26:56 -07:00
Yangqing Jia
2ed1077a83 A clean init for Caffe2, removing my earlier hacky commits.
2015-06-25 16:26:01 -07:00