ETRI Knowledge Sharing Platform

Detailed Information

Conference Paper: Deep Learning Framework using Scalable Shared Memory Buffer Framework
Authors
임은지, 안신영
Publication Date
February 2021
Source
International Conference on Electronics, Information and Communication (ICEIC) 2021, pp.542-544
DOI
https://dx.doi.org/10.1109/ICEIC51217.2021.9369801
Research Project
20JS1800, Development of Highly-Integrated Computing Node and System Based on Massively Parallel Processors, 한우종
Abstract
Communication overhead among distributed training workers can be a performance bottleneck in large-scale deep neural network (DNN) training. This overhead hinders the rapid development of high-performance DNNs, so distributed deep learning frameworks should provide efficient parameter sharing techniques. In previous work, we proposed TFSM, a distributed deep learning framework based on the remote shared memory framework (SMB). In this paper, we propose an upgraded TFSM based on SMB2. SMB2 is a scalable shared memory buffer framework that provides memory-server scalability, a lock function, and a user-level implementation. SMB2-based TFSM extends the parameter I/O bandwidth and shared memory capacity. It also uses a modified asynchronous parameter update method that exploits the lock function of SMB2. We verified that SMB2-based TFSM outperforms the previous TFSM and TensorFlow by measuring the training throughput of large-scale DNNs during distributed training.
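
The abstract does not show the SMB2 API, so the following is only a minimal sketch of the idea of lock-protected asynchronous parameter updates over a shared memory buffer. Python's multiprocessing shared memory and Lock stand in for the remote shared memory buffer and its lock function; the parameter size, learning rate, worker count, and step count are hypothetical. In the paper, the lock and buffer are provided by SMB2 over remote memory servers rather than local OS shared memory.

import numpy as np
from multiprocessing import Process, Lock
from multiprocessing.shared_memory import SharedMemory

PARAM_COUNT = 1024  # hypothetical size of the shared parameter vector

def worker(shm_name, lock, steps):
    # Each training worker attaches to the shared parameter buffer, computes a
    # simulated gradient, and applies the update under the lock so concurrent
    # asynchronous updates never interleave mid read-modify-write.
    shm = SharedMemory(name=shm_name)
    params = np.ndarray((PARAM_COUNT,), dtype=np.float32, buffer=shm.buf)
    rng = np.random.default_rng()
    for _ in range(steps):
        grad = rng.standard_normal(PARAM_COUNT).astype(np.float32)  # stand-in gradient
        with lock:                 # lock-protected asynchronous parameter update
            params -= 0.01 * grad
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=PARAM_COUNT * 4)   # float32 buffer
    params = np.ndarray((PARAM_COUNT,), dtype=np.float32, buffer=shm.buf)
    params[:] = 0.0                                          # initial parameters
    lock = Lock()
    workers = [Process(target=worker, args=(shm.name, lock, 100)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    print("parameter L2 norm after updates:", float(np.linalg.norm(params)))
    shm.close()
    shm.unlink()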
KSP Suggested Keywords
Communication overhead, Deep learning framework, Deep neural network(DNN), Distributed training, High performance, I/O bandwidth, Rapid development, Shared Memory, deep learning(DL), large-scale, memory capacity