ETRI Knowledge Sharing Platform

Assembling three one-camera images for three-camera intersection classification
Authors: Marcella Astrid, Seung-Ik Lee
Issue Date: 2023-10
Citation: ETRI Journal, v.45, no.5, pp.862-873
ISSN: 1225-6463
Publisher: Electronics and Telecommunications Research Institute (ETRI)
Language: English
Type: Journal Article
DOI: https://dx.doi.org/10.4218/etrij.2023-0100
Abstract
Determining whether an autonomous self-driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras positioned at the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three-camera model, which enables us to more easily compile a variety of training data and endow our model with improved generalizability. In this work, we provide three fusion methods (feature, early, and late) for combining the information from the three cameras. Extensive pedestrian-view intersection classification experiments show that our feature fusion model provides an area under the curve and F1-score of 82.00 and 46.48, respectively, which considerably outperforms contemporary three- and one-camera models.
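
The abstract names three ways of fusing the front, left, and right views. As a rough illustration only, below is a minimal PyTorch sketch of early, feature, and late fusion for a three-view intersection classifier; the backbone, feature sizes, and single-logit head are placeholder assumptions and do not reflect the paper's actual architecture.

```python
# Hypothetical sketch of early, feature, and late fusion of three camera views.
# All layer choices here are assumptions for illustration, not the paper's model.
import torch
import torch.nn as nn


def make_backbone(in_channels: int) -> nn.Sequential:
    """Tiny placeholder feature extractor returning a 32-d vector per image."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )


class EarlyFusion(nn.Module):
    """Concatenate the three RGB views channel-wise, then run one backbone."""
    def __init__(self):
        super().__init__()
        self.backbone = make_backbone(9)  # 3 views x 3 channels
        self.head = nn.Linear(32, 1)

    def forward(self, front, left, right):
        x = torch.cat([front, left, right], dim=1)  # (B, 9, H, W)
        return self.head(self.backbone(x))


class FeatureFusion(nn.Module):
    """One shared backbone per view; concatenate features before the classifier."""
    def __init__(self):
        super().__init__()
        self.backbone = make_backbone(3)
        self.head = nn.Linear(32 * 3, 1)

    def forward(self, front, left, right):
        feats = [self.backbone(v) for v in (front, left, right)]
        return self.head(torch.cat(feats, dim=1))


class LateFusion(nn.Module):
    """Independent per-view logits from a shared backbone, averaged at the output."""
    def __init__(self):
        super().__init__()
        self.backbone = make_backbone(3)
        self.head = nn.Linear(32, 1)

    def forward(self, front, left, right):
        logits = [self.head(self.backbone(v)) for v in (front, left, right)]
        return torch.stack(logits, dim=0).mean(dim=0)


if __name__ == "__main__":
    views = [torch.randn(2, 3, 224, 224) for _ in range(3)]  # front, left, right
    for model in (EarlyFusion(), FeatureFusion(), LateFusion()):
        print(type(model).__name__, model(*views).shape)  # (2, 1) intersection logit
```

In the feature and late fusion sketches, a single backbone is shared across views, which is one plausible way a model could be trained on one-camera data and then applied to three cameras; whether this matches the paper's actual training scheme is an assumption here, not a claim from the source.
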
KSP Keywords
Camera model, Data collected, Feature Fusion, Fusion model, Visual input, area under the curve (AUC), fusion method, single camera, training data
This work is distributed under the terms of the Korea Open Government License (KOGL) Type 4 (Type 1 + Commercial Use Prohibition + Change Prohibition).