ETRI Knowledge Sharing Platform


Detailed Information

Journal Article
Soft Memory Box: A Virtual Shared Memory Framework for Fast Deep Neural Network Training in Distributed High Performance Computing
Cited 11 times in Scopus; downloaded 13 times
Authors
안신영, 김중헌, 임은지, 강성원
Publication Date
2018-05
Source
IEEE Access, v.6, pp.26493-26504
ISSN
2169-3536
Publisher
IEEE
DOI
https://dx.doi.org/10.1109/ACCESS.2018.2834146
Funded Project
18HS2700, Development of an HPC System for High-Speed Processing of Large-Scale Deep Learning, 최완
Abstract
Deep learning is one of the most promising machine learning methodologies and is widely used in various application domains, e.g., image recognition, voice recognition, and natural language processing. To improve learning accuracy, deep neural networks have evolved by: 1) increasing the number of layers and 2) increasing the number of parameters in massive models. This implies that distributed deep learning platforms need to evolve to: 1) deal with huge/complex deep neural networks and 2) process massive training data with high-performance computing resources. This paper proposes a new virtual shared memory framework, called Soft Memory Box (SMB), which enables distributed processes to share the memory of a remote node, improving communication performance through parameter sharing. According to data-intensive performance evaluation results, the communication time of deep learning using the proposed SMB is 2.1 times faster than that using the message passing interface (MPI). In addition, the communication time of the SMB-based asynchronous parameter update becomes 2-7 times faster than that using the MPI, depending on the deep learning model and the number of deep learning workers.
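The asynchronous parameter update the abstract describes can be illustrated with a minimal sketch using Python's standard multiprocessing module. This is not the authors' Soft Memory Box API (which shares memory across remote nodes in an HPC cluster); it only shows the general pattern SMB is contrasted with MPI on: workers apply gradient steps directly to a shared parameter buffer instead of exchanging explicit messages. The names `worker` and `run_demo` are hypothetical.

```python
# A minimal single-node sketch (standard-library multiprocessing, NOT the
# authors' Soft Memory Box framework) of the shared-memory parameter-update
# pattern: workers write gradient steps directly into a shared buffer
# rather than sending/receiving MPI messages.
from multiprocessing import Array, Process


def worker(params, grad):
    # Apply a gradient step in place on the shared parameter buffer;
    # the buffer's built-in lock serializes concurrent updates.
    with params.get_lock():
        for i in range(len(params)):
            params[i] += grad


def run_demo():
    # Four shared double-precision parameters, initialized to zero.
    params = Array("d", [0.0] * 4)
    workers = [Process(target=worker, args=(params, g)) for g in (0.5, 1.5)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    return list(params)


if __name__ == "__main__":
    print(run_demo())  # each parameter accumulates 0.5 + 1.5
```

In the paper's setting, the shared buffer lives in the memory of a remote node rather than on the local machine, so workers on different nodes can read and write the same parameters without the per-message overhead of MPI.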
Keywords
deep neural network, distributed computing, distributed deep learning, High performance computing, shared memory, soft memory box
KSP Suggested Keywords
Communication performance, Communication time, Computing resources, Deep neural network(DNN), Distributed processes, High Performance Computing, Learning platform, Message passing interface, Massive models, Natural Language Processing, Neural network training