ETRI Knowledge Sharing Platform





Conference Paper: Structure of Deep Learning Inference Engines for Embedded Systems
Cited 2 times in Scopus, downloaded 8 times
유승목, 조창식, 이경희, 박재복, 윤석진, 이영운, 김병규
International Conference on Information and Communication Technology Convergence (ICTC) 2019, pp.920-922
Project 19HS1400, Development of a driving decision engine supporting SAE Level 4 autonomous driving in general road environments based on imitation of driver driving experience, 최정단
For the last several years, various types of deep learning applications have been introduced. Most deep-learning-related research and development has been done on servers or PCs with GPUs. Recently there have been a number of moves to apply those applications to the industrial sector. When deep learning techniques are applied to actual targets, we face spatial and environmental constraints absent from the laboratory environment. In this paper, we describe the requirements for running deep learning applications on embedded systems. We introduce our ongoing project on developing a deep learning framework for embedded systems, especially automotive vehicles. Generally, the deep learning application development process can be divided into two steps: training a data model with a big data set, and executing the data model on actual data. In our framework, we focus on the execution step. We design an inference engine that satisfies the operational requirements of embedded systems, describe its design direction and structure, and present preliminary evaluation results.
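The two-step split the abstract describes, training a model offline and then executing only the frozen model on the embedded target, can be sketched as follows. This is a minimal illustrative example, not ETRI's actual engine; the model layout and function names are assumptions for demonstration only.

```python
import numpy as np

def load_model():
    """Stand-in for deserializing a pre-trained data model.

    On a real embedded target this would load weights produced by the
    offline training step; random weights are used here only so the
    sketch is self-contained and runnable.
    """
    rng = np.random.default_rng(0)
    return {
        "W1": rng.standard_normal((4, 8)), "b1": np.zeros(8),
        "W2": rng.standard_normal((8, 2)), "b2": np.zeros(2),
    }

def infer(model, x):
    """Execution step only: a forward pass with no training state.

    An embedded inference engine carries none of the optimizer or
    gradient machinery needed during training, which is one way it
    meets the memory constraints the paper targets.
    """
    h = np.maximum(x @ model["W1"] + model["b1"], 0.0)  # ReLU hidden layer
    logits = h @ model["W2"] + model["b2"]
    return int(np.argmax(logits))  # predicted class index

model = load_model()
sample = np.array([0.5, -1.0, 0.25, 2.0])  # "actual data", e.g. sensor input
print(infer(model, sample))
```

The point of the separation is that the execution path is small and deterministic: the engine only needs the weight tensors and the forward-pass operators, which is what makes deployment on resource-constrained automotive hardware feasible.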
Keywords: deep learning neural network, embedded system
KSP Suggested Keywords
Big Data, Data Model, Data sets, Deep learning application, Deep learning framework, Embedded system, Environmental constraints, Inference Engine, Learning neural network, Preliminary evaluation, application development