ETRI Knowledge Sharing Platform

Detailed Information

Conference Paper: Evaluating the Performance of Deep Learning Inference Service on Edge Platform
Authors
최현화, 차재근, 윤성현, 김대원, 장수민, 김선욱
Publication Date
October 2021
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2021, pp.1789-1793
DOI
https://dx.doi.org/10.1109/ICTC52510.2021.9620870
Research Project
21HS4400, Development of core technologies for an ultra-low-latency intelligent cloud edge SW platform that guarantees a service response time of under 10 msec, 김선욱
Abstract
Deep learning inference requires a tremendous amount of computation and is typically offloaded to the cloud for execution. Recently, edge computing, which processes and stores data at the edge of the Internet closest to mobile devices or sensors, has been considered as a new computing paradigm. We have studied the performance of deep neural network (DNN) inference services under different configurations of the resources assigned to a container. In this work, we measured and analyzed a real-world edge service on a containerization platform. The edge service, named A!Eye, is an application with various DNN inferences; it has both CPU-friendly and GPU-friendly tasks, and its CPU tasks account for more than half of the service's latency. Our analyses reveal interesting findings about running a DNN inference service on a container-based execution platform: (a) the latency of DNN inference-based edge services is affected by the performance of CPU-based operations; (b) pinning CPUs can reduce the latency of an edge service; and (c) to improve the performance of an edge service, it is very important to avoid the PCIe bottleneck shared by resources such as CPUs, GPUs, and NICs.
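Finding (b) refers to CPU pinning: binding a service's processes to fixed cores so the scheduler does not migrate them. The paper does not publish code; the following Python fragment is an illustrative sketch only, assuming a Linux host and a hypothetical core layout. It pins the current process to cores 0-3, the process-level analogue of a container option such as Docker's --cpuset-cpus.

    import os

    def pin_to_cpus(cpu_ids):
        # Bind the calling process (pid 0 = self) to the given CPU cores so
        # the scheduler does not migrate it; Linux-only stdlib call.
        os.sched_setaffinity(0, set(cpu_ids))

    if __name__ == "__main__":
        # Hypothetical layout: reserve cores 0-3 for the CPU-side pre/post-
        # processing tasks, comparable to `docker run --cpuset-cpus="0-3"`.
        pin_to_cpus(range(4))
        print("pinned to CPUs:", sorted(os.sched_getaffinity(0)))

On a container platform, the same effect is usually achieved declaratively (for example, Docker's --cpuset-cpus option) rather than inside the application process.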
KSP Suggested Keywords
Deep neural network(DNN), Edge services, Mobile devices, Operation performance, Real-world, deep learning(DL), edge computing