ETRI Knowledge Sharing Platform

ShmCaffe: A Distributed Deep Learning Platform with Shared Memory Buffer for HPC Architecture
Cited 19 times in Scopus
Authors
Shinyoung Ahn, Joongheon Kim, Eunji Lim, Wan Choi, Aziz Mohaisen, Sungwon Kang
Issue Date
2018-07
Citation
International Conference on Distributed Computing Systems (ICDCS) 2018, pp.1118-1128
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICDCS.2018.00111
Abstract
One of the reasons behind the tremendous success of deep learning theory and applications in recent years is the advance of distributed and parallel high-performance computing (HPC). This paper proposes a new distributed deep learning platform, named ShmCaffe, which utilizes remote shared memory to reduce the communication overhead of sharing massive deep neural network training parameters. ShmCaffe is designed on top of Soft Memory Box (SMB), a virtual shared memory framework. In the SMB framework, the remote shared memory is used as a shared buffer for asynchronous sharing of massive parameters among many distributed deep learning processes. Moreover, a hybrid method that combines asynchronous and synchronous parameter sharing is also discussed in this paper to improve scalability. As a result, ShmCaffe is 10.1 times faster than Caffe and 2.8 times faster than Caffe-MPI for deep neural network training when Inception-v1 is trained with 16 GPUs. We verify the convergence of Inception-v1 model training using ShmCaffe-A and ShmCaffe-H while varying the number of workers. Furthermore, we evaluate the scalability of ShmCaffe by analyzing the computation and communication times per iteration of deep learning training for four convolutional neural network (CNN) models.
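
Illustrative sketch: the abstract's central idea is that workers share model parameters asynchronously through a shared memory buffer instead of exchanging messages. The Python sketch below is not the SMB or ShmCaffe API; it only mimics that pattern on a single node with POSIX shared memory. All names and constants (PARAM_COUNT, STEPS_PER_WORKER, LEARNING_RATE, worker) are hypothetical stand-ins, and the "gradients" are random noise rather than real training.

import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

PARAM_COUNT = 1_000          # hypothetical model size
STEPS_PER_WORKER = 100       # hypothetical local iterations per worker
LEARNING_RATE = 0.01

def worker(shm_name: str, seed: int) -> None:
    # Attach to the shared parameter buffer, then repeatedly apply a
    # locally computed (here: fake) gradient update without any barrier,
    # i.e., asynchronous parameter sharing through the shared buffer.
    shm = SharedMemory(name=shm_name)
    params = np.ndarray((PARAM_COUNT,), dtype=np.float64, buffer=shm.buf)
    rng = np.random.default_rng(seed)
    for _ in range(STEPS_PER_WORKER):
        grad = rng.standard_normal(PARAM_COUNT)   # stand-in for a real gradient
        params -= LEARNING_RATE * grad            # in-place update in shared memory
    shm.close()

if __name__ == "__main__":
    # Create the shared parameter buffer and initialize it.
    shm = SharedMemory(create=True, size=PARAM_COUNT * 8)
    params = np.ndarray((PARAM_COUNT,), dtype=np.float64, buffer=shm.buf)
    params[:] = 0.0

    # Launch several workers that update the shared parameters concurrently.
    workers = [Process(target=worker, args=(shm.name, i)) for i in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()

    print("parameter norm after asynchronous updates:", np.linalg.norm(params))
    shm.close()
    shm.unlink()

A synchronous variant would add a barrier and an averaging step after each iteration; the hybrid scheme mentioned in the abstract mixes both modes, but its exact schedule is described in the paper, not here.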
KSP Keywords
Communication overhead, Convolution neural network(CNN), Deep neural network(DNN), High Performance Computing, Learning platform, Learning training, Neural network training, Shared buffer, Virtual Shared Memory, deep learning(DL), hybrid method