ETRI Knowledge Sharing Platform



Detailed Information

Conference Paper
An Efficient Pruning and Weight Sharing Method for Neural Network
Cited 5 times in Scopus
Authors
김진규, 이미영, 김주엽, 김병조, 이주현
Publication Date
October 2016
Source
International Conference on Consumer Electronics (ICCE) 2016 : Asia, pp.472-473
DOI
https://dx.doi.org/10.1109/ICCE-Asia.2016.7804738
Project
16HB1100, Development of Neuromorphic Cognitive Mobile Computing Intelligent Semiconductor Technology, 이주현
Abstract
This paper presents a compression method that reduces the number of parameters in convolutional neural networks (CNNs). Although neural networks achieve excellent recognition performance in computer vision applications, they require large memory to store the parameters as well as a high-speed computational block. We therefore propose two compression schemes, pruning and weight sharing, applied to the LeNet network model on the MNIST dataset. The proposed schemes reduced the number of parameters of LeNet from 430,500 to 32, excluding the index buffer size.
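For illustration, the two schemes named in the abstract can be sketched as follows: magnitude pruning zeroes out small weights, and weight sharing quantizes the surviving weights to a small set of shared values (here via a simple 1-D k-means, a common choice). The threshold, cluster count, and function names below are hypothetical and not taken from the paper.

```python
import numpy as np

def prune(weights, threshold):
    """Magnitude pruning: zero out weights below the threshold (hypothetical value)."""
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def share_weights(weights, mask, n_clusters):
    """Weight sharing: quantize surviving weights to n_clusters shared values
    using a simple 1-D k-means (Lloyd's algorithm)."""
    surviving = weights[mask]
    # Initialize centroids evenly across the surviving weight range.
    centroids = np.linspace(surviving.min(), surviving.max(), n_clusters)
    for _ in range(20):  # fixed number of refinement iterations
        assign = np.argmin(np.abs(surviving[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = surviving[assign == k].mean()
    shared = weights.copy()
    shared[mask] = centroids[assign]  # each weight now points to a shared value
    return shared, centroids

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64,))       # toy weight vector
pruned, mask = prune(w, threshold=0.05)
shared, centroids = share_weights(pruned, mask, n_clusters=4)
# After both steps, only n_clusters distinct nonzero values remain,
# so each weight can be stored as a small index into the centroid table.
```

In a real network the index buffer that maps each weight position to its shared value adds storage of its own, which is why the abstract reports the parameter count "excluding index buffer size".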
Keywords
Convolutional neural network, Deep learning, Network compression
KSP Suggested Keywords
Buffer Size, Compression method, Computer Vision(CV), Convolution neural network(CNN), High Speed, Large memory, MNIST Dataset, Network model, Vision application, deep learning(DL), need for