ETRI Knowledge Sharing Platform


Journal Article TGSNet: Multi-Field Feature Fusion for Glass Region Segmentation Using Transformers
Cited 6 times in Scopus · Downloaded 128 times
Authors
Xiaohang Hu, Rui Gao, Seungjun Yang, Kyungeun Cho
Issue Date
2023-02
Citation
MATHEMATICS, v.11, no.4, pp.1-21
ISSN
2227-7390
Publisher
MDPI
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.3390/math11040843
Abstract
Glass is a common object in living environments, but detecting it can be difficult because of the reflection and refraction of various colors of light in different environments; even humans are sometimes unable to detect glass. Many methods currently exist for detecting glass, but most rely on additional sensors, which are costly and make data collection difficult. This study addresses the problem of detecting glass regions in a single RGB image by concatenating contextual features from multiple receptive fields and proposing a new enhanced feature fusion algorithm. To do this, we first construct a contextual attention module that extracts backbone features through a self-attention approach. We then propose a ViT-based deep semantic segmentation architecture called MFT, which associates multilevel receptive field features and retains the feature information captured at each level. Experiments show that our proposed method outperforms several state-of-the-art glass detection and transparent object detection methods on existing glass detection datasets, demonstrating the superior performance of TGSNet.
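The abstract describes two building blocks: a contextual attention module based on self-attention over backbone features, and fusion of features drawn from multiple receptive fields. A minimal NumPy sketch of both ideas follows; the shapes, projection weights, and nearest-neighbor upsampling used here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, wq, wk, wv):
    """Single-head self-attention over flattened spatial positions.

    feat: (N, C) array of N spatial positions with C channels.
    wq, wk, wv: (C, C) projection matrices (assumed learned elsewhere).
    """
    q, k, v = feat @ wq, feat @ wk, feat @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return scores @ v  # each position attends to all others

def fuse_multiscale(features, target_hw):
    """Upsample each multi-receptive-field feature map to a common
    resolution (nearest neighbor) and concatenate along channels,
    so information from every level is retained.

    features: list of (H_i, W_i, C_i) arrays at different scales.
    """
    th, tw = target_hw
    fused = []
    for f in features:
        h, w, _ = f.shape
        rows = np.arange(th) * h // th
        cols = np.arange(tw) * w // tw
        fused.append(f[rows][:, cols])
    return np.concatenate(fused, axis=-1)

# Illustrative usage with random features standing in for backbone outputs.
rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 8))                       # 16 positions, 8 channels
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
attended = self_attention(feat, wq, wk, wv)           # shape (16, 8)

coarse = rng.normal(size=(4, 4, 6))                   # small receptive field map
fine = rng.normal(size=(8, 8, 4))                     # larger, finer map
fused = fuse_multiscale([fine, coarse], (8, 8))       # shape (8, 8, 10)
```

In the real architecture the projections are learned and the fusion is followed by further transformer layers; this sketch only shows the data flow the abstract outlines.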
KSP Keywords
Collecting data, Contextual features, Detection Method, Feature fusion, Feature information, Fusion Algorithm, Living environment, Multiple receptive fields, RGB image, Reflection and refraction, Semantic segmentation
This work is distributed under the terms of the Creative Commons License (CC BY).