ETRI Knowledge Sharing Platform

Details

Conference Paper: Accelerating Training of DNN in Distributed Machine Learning System with Shared Memory
Cited 5 times in Scopus
Authors
임은지, 안신영, 최완
Publication Date
October 2017
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2017, pp.1210-1213
DOI
https://dx.doi.org/10.1109/ICTC.2017.8190900
Project
17HS1900, Development of an HPC System for High-Speed Processing of Large-Scale Deep Learning, 최완
Abstract
In distributed DNN training, the speed of reading and updating model parameters greatly affects model training time. In this paper, we investigate the performance of deep neural network training with parameter sharing based on shared memory for distributed machine learning. We propose a shared memory-based modification of a deep learning framework, in which remote shared memory is used to maintain the global shared parameters of parallel deep learning workers. Our framework accelerates DNN training by speeding up parameter sharing in every training iteration of distributed model training. We evaluated the proposed framework by training three different deep learning models. The experimental results show that our framework reduces training time for deep learning models in a distributed system.
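The core idea in the abstract, keeping the global model parameters in a shared-memory region that every worker reads and updates directly each iteration, can be illustrated with a minimal sketch. This is not the authors' framework: it uses Python's standard `multiprocessing.shared_memory` module in place of the paper's remote shared memory, and the parameter count, function names, and learning rate are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): global parameters live
# in one shared-memory block; each worker attaches by name and applies its
# gradient update in place, so no parameter tensors are copied between workers.
from multiprocessing import shared_memory

PARAM_COUNT = 4  # hypothetical model size (number of float64 parameters)

# "Parameter server" side: allocate the shared block and zero-initialize it.
shm = shared_memory.SharedMemory(create=True, size=PARAM_COUNT * 8)
params = shm.buf.cast('d')  # view the raw buffer as float64 values
for i in range(PARAM_COUNT):
    params[i] = 0.0

def worker_update(block_name, grads, lr=0.1):
    """A worker attaches to the shared block and applies one SGD step."""
    local = shared_memory.SharedMemory(name=block_name)
    view = local.buf.cast('d')
    for i, g in enumerate(grads):
        view[i] -= lr * g  # in-place update, immediately visible to all workers
    view.release()
    local.close()

# Two "workers" each push a gradient; both operate on the same parameters,
# so the updates accumulate in the shared block.
worker_update(shm.name, [1.0] * PARAM_COUNT)
worker_update(shm.name, [1.0] * PARAM_COUNT)

result = [params[i] for i in range(PARAM_COUNT)]
print(result)  # each parameter has received both workers' updates

params.release()
shm.close()
shm.unlink()
```

In a real multi-node setting the workers would be separate processes (or hosts with remote shared memory, as in the paper) synchronizing their reads and writes; here the two calls run sequentially in one process purely to show the shared-update mechanics.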
Keywords
deep learning, distributed machine learning, Machine learning, shared memory
KSP Suggested Keywords
Deep learning framework, Deep neural network(DNN), Distributed System(DS), Distributed machine learning, Experiment results, Machine learning system, Memory-based, Model parameter, Neural network training, Shared Memory, Shared parameters