ETRI Knowledge Sharing Platform

Multimodal Sensor Data Fusion and Ensemble Modeling for Human Locomotion Activity Recognition
Cited 3 times in Scopus
Authors
Se Won Oh, Hyuntae Jeong, Seungeun Chung, Jeong Mook Lim, Kyoung Ju Noh
Issue Date
2023-10
Citation
International Conference on Pervasive and Ubiquitous Computing (UbiComp) 2023, pp. 546-550
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1145/3594739.3610753
Abstract
The primary research objective of this study is to develop an algorithm pipeline for recognizing human locomotion activities using multimodal sensor data from smartphones, while minimizing prediction errors caused by data differences between individuals. The multimodal sensor data provided for the 2023 SHL recognition challenge comprises three types of motion data and two types of radio sensor data. Our team, ‘HELP,’ presents an approach that aligns all the multimodal data into a single vector of 106 features, and then blends predictions from multiple learning models trained with different numbers of features. The proposed neural network models, trained solely on data from a specific individual, yield F1 scores of up to 0.8 in recognizing the locomotion activities of other users. Through post-processing operations, including ensembling multiple learning models, we expect to achieve a performance improvement of 10% or greater in F1 score.
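
The ensemble step described in the abstract, blending predictions from several models trained with different numbers of features, can be sketched as soft voting over class probabilities. The Python snippet below is a minimal sketch under that assumption, not the authors' implementation; the function name blend_predictions, the equal default weights, and the example class count are hypothetical.

    # Minimal soft-voting sketch (illustration only, not the authors' code).
    import numpy as np

    def blend_predictions(prob_list, weights=None):
        # prob_list: list of (n_samples, n_classes) class-probability arrays,
        # one per trained model (e.g., models trained on different feature subsets).
        probs = np.stack(prob_list)                     # (n_models, n_samples, n_classes)
        if weights is None:
            # Equal weighting is an assumption; weights could instead reflect
            # each model's validation F1 score.
            weights = np.full(len(prob_list), 1.0 / len(prob_list))
        blended = np.tensordot(weights, probs, axes=1)  # weighted mean over models
        return blended.argmax(axis=1)                   # predicted label per sample

    # Example: three hypothetical models voting over eight activity classes.
    rng = np.random.default_rng(0)
    outputs = [rng.dirichlet(np.ones(8), size=4) for _ in range(3)]
    print(blend_predictions(outputs))                   # four predicted class indices

Averaging probabilities rather than hard labels lets a model that is confidently right outvote several that are marginally wrong, which is one plausible way a model ensemble could yield the reported post-processing gain.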
KSP Keywords
Activity Recognition, Ensemble modeling, Feature Vector, Learning model, Motion Data, Multimodal sensor, Post-Processing, Prediction error, Sensor data fusion, human locomotion, multimodal data