ETRI Knowledge Sharing Platform





Journal Article: Accelerating On-Device Learning with Layer-Wise Processor Selection Method on Unified Memory
Cited 0 times in Scopus · Downloaded 7 times
하동휘, 김무섭, 문경덕, 정치윤
Sensors, v.21 no.7, pp.1-19
21ZS1200, Core Technology Research on Human-Centered Autonomous Intelligent Systems, 최정단
Recent studies have brought the superior performance of deep learning to mobile devices, enabling deep learning models to run on hardware with limited computing power. However, a deep learning model deployed on mobile devices suffers performance degradation because each device carries different sensors. Solving this issue requires training a network model specific to each mobile device. We therefore propose an acceleration method for on-device learning that mitigates this device heterogeneity. The proposed method efficiently utilizes unified memory to reduce the latency of data transfer during network model training. In addition, we propose a layer-wise processor selection method that accounts for the latency incurred when different processors perform the forward-propagation and backpropagation steps of the same layer. Experiments were performed on an ODROID-XU4 with the ResNet-18 model, and the results indicate that the proposed method reduces latency by up to 28.4% compared to the central processing unit (CPU) and by up to 21.8% compared to the graphics processing unit (GPU). Through experiments measuring average power consumption across various batch sizes, we confirmed that device heterogeneity is alleviated by performing on-device learning with the proposed method.
Keywords: Acoustic scene classification, Deep learning acceleration, Mobile devices, On-device learning, Processor selection algorithm
KSP Suggested Keywords
Acceleration method, Acoustic Scene Classification, Average power consumption, Computing power, Data transfer, Device heterogeneity, Forward Propagation, Graphics Processing Unit (GPU), Learning model, Mobile devices, Network model
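The abstract describes choosing a processor (CPU or GPU) per layer while accounting for the extra latency when consecutive steps run on different processors. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch, where all latency figures, the fixed `transfer_cost` penalty, and the function name `select_processors` are hypothetical assumptions.

```python
# Illustrative sketch of a layer-wise processor selection idea.
# NOT the paper's algorithm: latencies, the uniform transfer penalty,
# and all names here are hypothetical.

def select_processors(layers, cpu_latency, gpu_latency, transfer_cost):
    """Greedily pick CPU or GPU per layer, charging a transfer penalty
    whenever consecutive layers run on different processors."""
    plan = []
    prev = None
    for layer in layers:
        costs = {}
        for proc, lat in (("cpu", cpu_latency[layer]),
                          ("gpu", gpu_latency[layer])):
            # Penalize switching processors between adjacent layers;
            # unified memory, as in the paper, would shrink this cost.
            penalty = transfer_cost if (prev is not None and proc != prev) else 0.0
            costs[proc] = lat + penalty
        choice = min(costs, key=costs.get)
        plan.append((layer, choice))
        prev = choice
    return plan

# Hypothetical per-layer latencies in milliseconds.
cpu = {"conv1": 5.0, "conv2": 8.0, "fc": 1.0}
gpu = {"conv1": 2.0, "conv2": 3.0, "fc": 4.0}
print(select_processors(["conv1", "conv2", "fc"], cpu, gpu, transfer_cost=0.5))
# → [('conv1', 'gpu'), ('conv2', 'gpu'), ('fc', 'cpu')]
```

The greedy per-layer choice is only one possible design; a dynamic-programming pass over (layer, processor) states would find the globally optimal assignment, since each layer's cost depends only on the previous layer's processor.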