ETRI Knowledge Sharing Platform

Conference Paper: Empirical Evaluation of Activation Functions and Kernel Initializers on Deep Reinforcement Learning
Cited 5 times in Scopus
Authors
장수영, 손영성
Publication Date
October 2019
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2019, pp.1140-1142
DOI
https://dx.doi.org/10.1109/ICTC46691.2019.8939854
Project
19ZH1100, Core Source Technologies of Distributed Intelligence in Hyper-connected Spaces for the Organic Connection of Things, People, and Spaces, 박준희
Abstract
Hyperparameter optimization has a considerable impact on system performance. However, many researchers use the default hyperparameter configurations provided by their machine learning frameworks because tuning requires a great deal of effort, time, and computing resources. In this paper, we analyze the effect of the activation function and the kernel initializer on the performance of deep reinforcement learning. Specifically, we compared the performance of 42 combinations of 7 activation functions and 6 kernel initializers on two network types: the multi-layer perceptron (MLP) and the convolutional neural network (CNN). The results reveal not only that the optimal combinations differ, but also that, even with the same hyperparameter, the performance difference was dramatic depending on the network type. This phenomenon was most prominent for the kernel initializer.
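The evaluation grid described in the abstract (7 activation functions x 6 kernel initializers = 42 combinations, run on two network types) can be sketched as follows. The specific activation and initializer names below are illustrative assumptions, since the abstract does not list them, and `train_and_evaluate` is a hypothetical placeholder for the training loop:

```python
from itertools import product

# Assumed lists: common activations and initializers in deep learning
# frameworks; the paper's exact choices are not given in the abstract.
activations = ["relu", "tanh", "sigmoid", "elu", "selu",
               "softplus", "leaky_relu"]
initializers = ["glorot_uniform", "glorot_normal", "he_uniform",
                "he_normal", "lecun_uniform", "lecun_normal"]

# Full grid: every activation paired with every initializer.
combinations = list(product(activations, initializers))
print(len(combinations))  # 7 * 6 = 42 combinations

# Each combination is evaluated separately on both network types.
for network_type in ("mlp", "cnn"):
    for activation, initializer in combinations:
        # train_and_evaluate(network_type, activation, initializer)
        pass
```

Because the grid is exhaustive, the cost grows multiplicatively with each new hyperparameter axis, which is the effort the abstract notes most researchers avoid by using framework defaults.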
KSP Suggested Keywords
Activation function, Computing resources, Convolutional neural network (CNN), Deep reinforcement learning, Empirical evaluation, Hyperparameter optimization, Performance difference, Reinforcement learning (RL), System performance, Machine learning, Multi-layer perceptron