ETRI Knowledge Sharing Platform

Adaptive FOA Region Extraction for Saliency-Based Visual Attention
Cited 2 times in Scopus
Authors
Hyungjik Lee, Changseok Bae, Janghan Lee, Sungwon Sohn
Issue Date
2012-07
Citation
International Journal of Information Processing and Management, v.3, no.3, pp.36-43
ISSN
2093-4009
Publisher
Advanced Institute of Convergence Information Technology (AICIT)
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.4156/ijipm.vol3.issue3.5
Abstract
This paper describes an adaptive extraction of the focus of attention (FOA) region for saliency-based visual attention. The saliency map model identifies the most salient and significant location in a visual scene. The human brain exhibits an inhibition-of-return property, by which the currently attended location is prevented from being attended again. Implementing both the focus of attention and the inhibition-of-return function therefore requires an appropriate mask for the salient region, and a shape-based mask is likely more suitable than other masks. In contrast to the existing fixed-size FOA, we propose an adaptive, shape-based FOA region derived from the most salient region of the saliency map. We determine the most salient point by checking every value in the saliency map, expand the neighborhood of that point until the average value of the neighborhood falls below 75% of the most salient point's value, and then find the contour of that neighborhood. The resulting adaptive FOA closely follows the shape of the attended object, which makes it useful for object recognition and other computer vision tasks.
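A minimal Python sketch of the procedure described in the abstract (peak selection, neighborhood growth until the mean drops below 75% of the peak value, then contour extraction), assuming a 2-D NumPy saliency map. The function name adaptive_foa_region, the drop_ratio parameter, and the 4-connected growth strategy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def adaptive_foa_region(saliency_map, drop_ratio=0.75):
    """Grow an adaptive, shape-based FOA mask around the most salient point.

    Expansion stops once the mean saliency of the grown region falls below
    drop_ratio (75%) of the peak value, following the rule in the abstract.
    """
    h, w = saliency_map.shape

    # Most salient point: found by checking every value in the saliency map.
    peak = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    peak_val = saliency_map[peak]

    mask = np.zeros((h, w), dtype=bool)
    mask[peak] = True
    frontier = [peak]

    while frontier:
        # Collect 4-connected neighbours of the current frontier.
        candidates = set()
        for y, x in frontier:
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    candidates.add((ny, nx))
        if not candidates:
            break

        # Tentatively add the ring and apply the 75%-of-peak stopping rule.
        trial = mask.copy()
        for p in candidates:
            trial[p] = True
        if saliency_map[trial].mean() < drop_ratio * peak_val:
            break

        mask = trial
        frontier = list(candidates)

    # The FOA contour (e.g. for an inhibition-of-return mask) can then be
    # taken from the boundary of this mask, for instance with cv2.findContours.
    return mask
```

For example, calling adaptive_foa_region on a normalized 2-D saliency map returns a boolean region that approximates the shape of the attended object, rather than a fixed-size circle.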
KSP Keywords
Attention region, Computer Vision(CV), Focus of attention, Human brain, Inhibition of return, Object recognition, Saliency map model, Visual attention, saliency-based, salient point, salient region