ETRI Knowledge Sharing Platform





Conference Paper: Improvement in Deep Networks for Optimization Using eXplainable Artificial Intelligence
Cited 5 times in Scopus · Downloaded 23 times
이진하, 신익희, 정상구, 이승익, 무함마드, 서범수
International Conference on Information and Communication Technology Convergence (ICTC) 2019, pp.525-530
19HR1200, Development of ICT-Based Core Technologies for Safe Unmanned Vehicles, 안재영
With recent advances in explainable artificial intelligence (XAI), which turns deep network architectures from black boxes into comprehensible structures, it has become easier to understand what goes on inside a network when it predicts an output. Many researchers have successfully shown the 'thought process' behind a network's decision making. However, this rich and interesting information has not been utilized beyond visualization once training finishes. In this work, a novel idea is proposed: to use this insight into the network as a training parameter. Layer-wise Relevance Propagation (LRP), which measures the contribution of each neuron to the output of the whole network, is used as a parameter, along with the learning rate and network weights, to optimize training. Various intuitive formulations are proposed, and the results of experiments on the MNIST and CIFAR-10 datasets are reported in this paper. The proposed methodologies show better or comparable performance against conventional optimization algorithms. This opens a new dimension of research: exploring the use of XAI in optimizing the training of neural networks.
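The abstract describes using LRP relevance scores, alongside the learning rate, to steer weight updates. The paper's exact formulations are not given here, so the following is a minimal illustrative sketch under one plausible assumption: epsilon-LRP relevances for a single linear unit scale each weight's effective learning rate in an SGD step. The function names and the normalization rule are assumptions for illustration, not the authors' method.

```python
import numpy as np

def lrp_epsilon(w, x, eps=1e-6):
    """Epsilon-LRP relevances for a single linear unit y = w . x.
    Each connection's contribution w_i * x_i is redistributed so that
    the relevances sum (approximately) to the output y."""
    z = w * x                                  # per-connection contributions
    y = z.sum()
    return z * y / (y + eps * np.sign(y))      # R_i

def relevance_scaled_sgd_step(w, x, t, lr=0.1):
    """One SGD step on squared error (y - t)^2 where each weight's
    learning rate is scaled by its normalized absolute relevance
    (an assumed scheme: the most relevant weight learns fastest)."""
    y = w @ x
    grad = 2.0 * (y - t) * x                   # gradient of (y - t)^2 w.r.t. w
    r = np.abs(lrp_epsilon(w, x))
    scale = r / (r.max() + 1e-12)              # per-weight factors in [0, 1]
    return w - lr * scale * grad
```

For example, a single step on a toy target reduces the squared error while updating high-relevance weights more aggressively than low-relevance ones.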
KSP Suggested Keywords
CIFAR-10, Learning rate, Network Architecture, Optimization algorithm, Thought process, artificial intelligence, decision making, deep networks, neural network