ETRI Knowledge Sharing Platform

Details

Journal Article
Deep Neural Networks with a Set of Node-wise Varying Activation Functions
Cited 0 times in Scopus
Authors
장진혁, 조현중, 김재홍, 이재연, 양승준
Publication Date
June 2020
Source
Neural Networks, v.126, pp.118-131
ISSN
0893-6080
Publisher
Elsevier
DOI
https://dx.doi.org/10.1016/j.neunet.2020.03.004
Project
19HS6200, Development of Real-Environment Human-Care Robot Technology for Responding to an Aging Society, 이재연
Abstract
© 2020 Elsevier Ltd. In this study, we present deep neural networks with a set of node-wise varying activation functions. The feature-learning abilities of the nodes are affected by the selected activation functions, where the nodes with smaller indices become increasingly more sensitive during training. As a result, the features learned by the nodes are sorted by the node indices in order of their importance, such that more sensitive nodes correspond to more important features. The proposed networks learn not only the input features but also the importance of those features. Nodes with lower importance can be pruned to reduce the complexity of the networks, and the pruned networks can be retrained without incurring performance losses. We validated the feature-sorting property of the proposed method using both shallow and deep networks, as well as deep networks transferred from existing networks.
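The core idea in the abstract — per-node activation functions whose sensitivity decreases with node index, so earlier nodes end up carrying more important features — can be illustrated with a minimal NumPy sketch. The tanh-with-per-node-slope form and the slope schedule below are illustrative assumptions, not the exact functions from the paper:

```python
import numpy as np

def nodewise_activations(z, scales):
    """Apply a per-node activation to pre-activations z.

    z: pre-activations, shape (batch, n_nodes)
    scales: per-node slope factors; a larger slope makes that node's
    activation more sensitive to its input. (Illustrative assumption,
    not the functional form used in the paper.)
    """
    return np.tanh(z * scales)

# Assumed schedule: slopes decrease with node index, so nodes with
# smaller indices respond more strongly to the same input change.
n_nodes = 8
scales = np.linspace(1.5, 0.25, n_nodes)

z = np.full((1, n_nodes), 0.5)          # identical pre-activation at every node
out = nodewise_activations(z, scales)

# Local sensitivity d(out)/dz = scale * (1 - tanh(scale * z)^2):
# it decreases across node indices, mirroring the feature-sorting idea
# that lower-index nodes are the more sensitive (more important) ones.
grads = scales * (1.0 - out**2)
```

Under this sketch, pruning would drop the trailing (least sensitive) nodes first, which is the complexity-reduction step the abstract describes.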
Keywords
Deep network, Principal component analysis, Pruning, Varying activation
KSP Suggested Keywords
Activation function, Deep neural network (DNN), Input features, Principal component analysis, Deep networks