ETRI Knowledge Sharing Platform

Detailed Information

Journal Article: Occluded Pedestrian-Attribute Recognition for Video Sensors Using Group Sparsity
Cited 1 time in Scopus, downloaded 35 times
Authors
이건우, 윤기민, 조정찬
Publication Date
September 2022
Source
Sensors, v.22 no.17, pp.1-16
ISSN
1424-8220
Publisher
MDPI
DOI
https://dx.doi.org/10.3390/s22176626
Research Project
21HS4600, (DeepView Task 1) Development of a High-Performance Visual Discovery Platform for Real-Time Understanding and Prediction of Large-Scale Video Data, 배유석
Abstract
Pedestrians are often obstructed by other objects or people in real-world vision sensors. These obstacles make pedestrian-attribute recognition (PAR) difficult; hence, occlusion processing for visual sensing is a key issue in PAR. To address this problem, we first formulate the identification of non-occluded frames as temporal attention based on the sparsity of a crowded video. In other words, a model for PAR is guided to avoid attending to occluded frames. However, we deduced that this approach cannot capture the correlation between attributes when occlusion occurs. For example, "boots" and "shoe color" cannot be recognized simultaneously when the foot is invisible. To address this uncorrelated-attention issue, we propose a novel temporal-attention module based on group sparsity. Group sparsity is applied across the attention weights of correlated attributes. Accordingly, physically adjacent pedestrian attributes are grouped, and the attention weights of a group are forced to focus on the same frames. Experimental results indicate that the proposed method achieved 1.18% and 6.21% higher F1-scores than the advanced baseline method on the occlusion samples of the DukeMTMC-VideoReID and MARS video-based PAR datasets, respectively.
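The core idea above — forcing the temporal-attention weights of physically adjacent attributes (e.g., "boots" and "shoe color") to select the same frames — can be expressed as an ℓ2,1-style group-sparsity penalty over an attributes-by-frames attention matrix. The sketch below is a minimal illustration of that penalty, not the authors' implementation; the function name, the NumPy formulation, and the example groups are assumptions for demonstration.

```python
import numpy as np

def group_sparsity_penalty(attn: np.ndarray, groups: list[list[int]]) -> float:
    """ℓ2,1-style group-sparsity penalty on temporal attention.

    attn   : (K, T) array — attention weight of attribute k on frame t.
    groups : lists of attribute indices that are physically adjacent
             (hypothetical grouping for illustration).

    For each group, take the ℓ2 norm across the group's attributes at
    every frame, then sum (ℓ1) over frames. Minimizing this term is
    small when a group's attributes concentrate attention on the SAME
    few frames, and larger when they spread over different frames.
    """
    total = 0.0
    for g in groups:
        sub = attn[g, :]                                  # (|g|, T)
        total += np.sqrt((sub ** 2).sum(axis=0)).sum()    # sum_t ||a_{g,t}||_2
    return float(total)

# Illustration: two foot-related attributes attending over T = 3 frames.
aligned = np.array([[1.0, 0.0, 0.0],     # both attributes pick frame 0
                    [1.0, 0.0, 0.0]])
spread  = np.array([[1.0, 0.0, 0.0],     # same total mass, different frames
                    [0.0, 1.0, 0.0]])
foot_group = [[0, 1]]
# aligned attention yields a lower penalty than spread attention,
# so the regularizer pushes grouped attributes toward shared frames.
```

In training, this penalty would be added (with a weighting coefficient) to the PAR classification loss, so that gradient descent trades off per-attribute accuracy against frame agreement within each group.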
KSP Suggested Keywords
Attribute recognition, Group sparsity, Real-world, Vision sensor, Occlusion processing, Video-based, Visual sensing
This work is available under the Creative Commons Attribution (CC BY) license.