ETRI Knowledge Sharing Platform

Detailed Information

Journal Article
Adaptive FOA Region Extraction for Saliency-Based Visual Attention
Cited 2 times in Scopus · Downloaded 2 times
Authors
이형직, 배창석, 이장한, 손승원
Publication Date
July 2012
Source
International Journal of Information Processing and Management, v.3 no.3, pp.36-43
ISSN
2093-4009
Publisher
AICIT (Advanced Institute of Convergence Information Technology)
DOI
https://dx.doi.org/10.4156/ijipm.vol3.issue3.5
Project
11SC1700, Interaction Technology Based on a New-Concept Human-Empathic UI, 손승원
Abstract
This paper describes an adaptive extraction of the focus of attention (FOA) region for saliency-based visual attention. The saliency map model identifies the most salient and significant location in a visual scene. The human brain exhibits an inhibition-of-return property, by which the currently attended point is prevented from being attended again. Both the focus of attention and the inhibition-of-return function therefore require an appropriate mask for the salient region, and a shape-based mask is likely more suitable than other masks. In contrast to the conventional fixed-size FOA, we propose an adaptive, shape-based FOA region derived from the most salient region of the saliency map. We determine the most salient point by examining every value in the saliency map, expand the neighborhood of that point until the average value of the neighborhood falls below 75% of the most salient point's value, and then find the contour of the neighborhood. The resulting adaptive FOA closely follows the shape of the attended object and is effective for object recognition and other computer vision applications.
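
The region-growing procedure summarized above can be sketched as follows. This is a minimal illustration in Python, assuming a precomputed 2-D saliency map given as a NumPy array and using OpenCV for dilation and contour extraction; the function name adaptive_foa_region and the dilation-based neighborhood expansion are assumptions made for the sketch, while the 75% stopping criterion is taken from the abstract.

import numpy as np
import cv2

def adaptive_foa_region(saliency, stop_ratio=0.75):
    # Locate the most salient point and its value in the saliency map.
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)
    peak_val = saliency[peak_y, peak_x]

    # Seed mask containing only the most salient pixel.
    mask = np.zeros(saliency.shape, dtype=np.uint8)
    mask[peak_y, peak_x] = 1
    kernel = np.ones((3, 3), dtype=np.uint8)

    # Expand the neighborhood one dilation step at a time; stop when the
    # average saliency inside it falls below 75% of the peak value.
    while True:
        grown = cv2.dilate(mask, kernel, iterations=1)
        if saliency[grown.astype(bool)].mean() < stop_ratio * peak_val:
            break
        mask = grown
        if mask.all():  # the region already covers the whole map
            break

    # The contour of the grown neighborhood approximates the shape of the
    # attended object, i.e. the adaptive, shape-based FOA region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours[0] if contours else None

In this reading, the returned contour can serve both as the FOA region handed to a recognizer and as the inhibition-of-return mask applied before the next attention shift.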
KSP Suggested Keywords
Attention region, Computer Vision(CV), Focus of attention, Inhibition of return, Object Recognition, Saliency map model, human brain, saliency-based, salient point, salient region, visual attention