ETRI-Knowledge Sharing Platform


Crossover-SGD: A gossip-based communication in distributed deep learning for alleviating large mini-batch problem and enhancing scalability
Cited 1 time in Scopus
Authors
Sangho Yeo, Minho Bae, Minjoong Jeong, Oh-Kyoung Kwon, Sangyoon Oh
Issue Date
2023-07
Citation
Concurrency and Computation: Practice and Experience, v.35, no.15, pp.1-16
ISSN
1532-0626
Publisher
John Wiley & Sons Inc.
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1002/cpe.7508
Abstract
Distributed deep learning is an effective way to reduce training time for large datasets and complex models. However, the network overhead of synchronizing the parameters of all workers limits scalability, and gossip-based methods, which show stable scalability regardless of the number of workers, have been proposed as an alternative. To use gossip-based methods in general settings, however, their validation accuracy under large mini-batches must be verified. We therefore first study the behavior of gossip methods on the large mini-batch problem empirically and observe that they preserve higher validation accuracy than AllReduce-SGD (stochastic gradient descent) when the batch size is increased and the number of workers is fixed. However, the delayed propagation of parameters in gossip-based models decreases validation accuracy at large node scales. To address this problem, we propose Crossover-SGD, which alleviates the delayed propagation of weight parameters through segment-wise communication and a random network topology with fair peer selection. We also adapt hierarchical communication to limit the number of workers participating in gossip-based communication. To validate the effectiveness of our method, we conduct empirical experiments and observe that Crossover-SGD shows higher node scalability than stochastic gradient push.
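
To make the gossip-based averaging idea concrete, the following is a minimal single-process sketch in Python/NumPy, not the authors' implementation: it simulates workers that average each parameter segment with a randomly paired peer, where every worker is selected exactly once per pairing (a simple stand-in for fair peer selection on a random topology). The worker count, segment count, and helper names (fair_peer_assignment, gossip_round) are illustrative assumptions.

# Minimal single-process simulation of a gossip-style averaging step with
# segment-wise exchange. NUM_WORKERS, NUM_SEGMENTS, and the helper names
# are illustrative assumptions, not the paper's actual implementation.
import numpy as np

NUM_WORKERS = 8      # number of simulated workers
PARAM_DIM = 32       # flattened parameter vector length per worker
NUM_SEGMENTS = 4     # parameters are split into segments and exchanged per segment

rng = np.random.default_rng(0)
# each worker starts from a different random parameter vector
params = [rng.normal(size=PARAM_DIM) for _ in range(NUM_WORKERS)]

def fair_peer_assignment(num_workers, rng):
    """Return a random pairing in which every worker is selected exactly once
    (a simple stand-in for 'fair peer selection' on a random topology)."""
    order = rng.permutation(num_workers)
    # pair consecutive workers in the shuffled order: (order[0], order[1]), ...
    return {order[i]: order[i ^ 1] for i in range(num_workers)}

def gossip_round(params, rng):
    """One communication round: each segment is averaged with the segment
    held by the assigned peer, so weights propagate segment by segment."""
    new_params = [p.copy() for p in params]
    segments = np.array_split(np.arange(PARAM_DIM), NUM_SEGMENTS)
    for seg in segments:
        pairs = fair_peer_assignment(len(params), rng)  # fresh pairing per segment
        for worker, peer in pairs.items():
            new_params[worker][seg] = 0.5 * (params[worker][seg] + params[peer][seg])
    return new_params

# run a few rounds and watch the workers' parameters move toward consensus
for r in range(5):
    params = gossip_round(params, rng)
    spread = np.mean([np.linalg.norm(p - np.mean(params, axis=0)) for p in params])
    print(f"round {r}: mean distance to consensus = {spread:.4f}")

Running the loop prints each round's mean distance between worker parameters and their consensus average, which shrinks as segments propagate across the random pairings; the delayed-propagation issue the abstract describes corresponds to how slowly this distance decays when the number of workers grows.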
KSP Keywords
Complex models, Empirical experiments, Gossip methods, Hierarchical Communication, Large datasets, Peer selection, Stochastic Gradient Descent, Training time, deep learning (DL), network topology, random network