ETRI Knowledge Sharing Platform

Detailed Information

Journal Article: Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning
Cited 122 times in Scopus · Downloaded 257 times
Authors
정승은, 임지연, 노경주, 김가규, 정현태
Publication Date
April 2019
Source
Sensors, v.19 no.7, pp.1-20
ISSN
1424-8220
Publisher
MDPI
DOI
https://dx.doi.org/10.3390/s19071716
Funded Project
19ZS1100, Core Technology Research for Self-Improving AI, 송화전
Abstract
In this paper, we perform a systematic study of on-body sensor positioning and data acquisition details for Human Activity Recognition (HAR) systems. We build a testbed that consists of eight body-worn Inertial Measurement Unit (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to support training of a deep learning model on human activity data, which is acquired in both real-world and controlled environments. From the experiment results, we identify that activity data sampled at rates as low as 10 Hz from four sensors, placed on both wrists, the right ankle, and the waist, is sufficient for recognizing Activities of Daily Living (ADLs), including eating and driving. We adopt a two-level ensemble model to combine class probabilities from multiple sensor modalities, and demonstrate that a classifier-level sensor fusion technique can improve classification performance. By analyzing the accuracy of each sensor on different types of activity, we derive custom weights for multimodal sensor fusion that reflect the characteristics of individual activities.
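The abstract describes two components that lend themselves to a short illustration: a per-sensor LSTM classifier over IMU windows, and classifier-level fusion that combines the class-probability outputs of several sensors with custom weights. The sketch below is not the authors' released implementation; the channel count, window length, hidden size, number of classes, and fusion weights are illustrative assumptions.

```python
# Minimal sketch of per-sensor LSTM classification plus weighted
# classifier-level fusion, assuming hypothetical shapes: windows of
# 6-channel IMU data (3-axis accelerometer + 3-axis gyroscope) at 10 Hz.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorLSTM(nn.Module):
    """Per-sensor LSTM classifier that outputs class probabilities."""
    def __init__(self, n_channels=6, hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)             # out: (batch, time, hidden)
        logits = self.fc(out[:, -1, :])   # last time step summarizes the window
        return F.softmax(logits, dim=1)   # per-class probabilities

def fuse(probs_per_sensor, weights):
    """Classifier-level fusion: weighted average of the class-probability
    vectors from each sensor; weights encode per-sensor reliability."""
    stacked = torch.stack(probs_per_sensor)      # (n_sensors, batch, classes)
    w = torch.tensor(weights).view(-1, 1, 1)     # broadcast over batch/classes
    fused = (w * stacked).sum(dim=0) / w.sum()
    return fused.argmax(dim=1)                   # predicted activity class

# Hypothetical usage: four sensor streams (left/right wrist, right ankle,
# waist), each a batch of 8 five-second windows sampled at 10 Hz.
models = [SensorLSTM() for _ in range(4)]
windows = [torch.randn(8, 50, 6) for _ in range(4)]
probs = [m(x) for m, x in zip(models, windows)]
pred = fuse(probs, weights=[1.0, 1.0, 0.8, 0.6])  # illustrative weights
```

The per-activity weighting the paper proposes would replace the single scalar per sensor with weights chosen from each sensor's measured accuracy on each activity type.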
KSP Suggested Keywords
Activities of Daily Living(ADLs), Android mobile device, Classification Performance, Data Acquisition(DAQ), Data Collection, Ensemble models, Experiment results, Human activity recognition(HAR), Inertial measurement units(IMUs), Learning model, Long-short term memory(LSTM)
This work is available under the Creative Commons Attribution (CC BY) license.