ETRI Knowledge Sharing Platform


Journal Article: 딥러닝 모델 병렬 처리 (Parallel Processing of Deep Learning Models)
Authors
박유미, 안신영, 임은지, 최용석, 우영춘, 최완
Issue Date
2018-08
Citation
전자통신동향분석 (Electronics and Telecommunications Trends), v.33, no.4, pp.1-13
ISSN
1225-6455
Publisher
Electronics and Telecommunications Research Institute (ETRI)
Language
Korean
Type
Journal Article
DOI
https://dx.doi.org/10.22648/ETRI.2018.J.330401
Abstract
Deep learning (DL) models have been widely applied to AI applications such as image recognition and language translation with big data. Recently, DL models have become larger and more complicated, and have been merged together. To accelerate the training of large-scale deep learning models, a few distributed deep learning frameworks provide model parallelism, which partitions the model parameters across multiple machines for non-shared parallel access and updates. As a training acceleration method, however, model parallelism is not as commonly used as data parallelism owing to the difficulty of implementing it efficiently. This paper provides a comprehensive survey of the state of the art in model parallelism by comparing the implementation technologies of several deep learning frameworks that support it, and suggests future research directions for improving model parallelism technology.
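To illustrate the partitioning the abstract describes, the following is a minimal sketch of model parallelism in PyTorch, not taken from the paper: the class, layer sizes, and two-GPU assignment are illustrative assumptions. Each partition of the parameters lives on, and is updated by, only one device, while activations cross the device boundary in the forward pass.

import torch
import torch.nn as nn

class ModelParallelMLP(nn.Module):
    """Hypothetical MLP whose parameters are partitioned across two GPUs.

    Each partition is accessed and updated only by the device that owns
    it (non-shared parallel access), which is the core idea of model
    parallelism as described in the abstract.
    """
    def __init__(self, in_dim=1024, hidden=4096, out_dim=10):
        super().__init__()
        # Partition 1 of the parameters is placed on GPU 0 ...
        self.part1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()).to('cuda:0')
        # ... and partition 2 is placed on GPU 1.
        self.part2 = nn.Linear(hidden, out_dim).to('cuda:1')

    def forward(self, x):
        # Activations, not parameters, move between devices.
        h = self.part1(x.to('cuda:0'))
        return self.part2(h.to('cuda:1'))

model = ModelParallelMLP()
out = model(torch.randn(32, 1024))  # output tensor lives on cuda:1
loss = out.sum()
loss.backward()                     # gradients flow back across the device boundary

In this sketch the two partitions run sequentially, which hints at why efficient model parallelism is hard: without pipelining or careful partitioning, one device idles while the other computes.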
KSP Keywords
AI Applications, Acceleration method, Big Data, Deep learning framework, Future research directions, Language Translation, Learning model, Model parameter, data parallelism, deep learning (DL), efficient model