ETRI Knowledge Sharing Platform


An Empirical Investigation of Visual Reinforcement Learning for 3D Continuous Control
Cited 0 times in Scopus
Authors
Samyeul Noh, Seonghyun Kim, Ingook Jang, Donghun Lee
Issue Date
2023-10
Citation
International Conference on Information and Communication Technology Convergence (ICTC) 2023, pp.1699-1702
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICTC58733.2023.10393822
Abstract
Sample-efficient reinforcement learning (RL) methods that can learn directly from raw sensory data will open up real-world applications in robotics and control. Recent breakthroughs in visual RL have shown that incorporating a latent representation alongside traditional RL techniques bridges the gap between state-based and image-based training paradigms. In this paper, we conduct an empirical investigation of visual RL, which can be trained end-to-end directly from image pixels, to address 3D continuous control problems. To this end, we evaluate three recent visual RL algorithms (CURL, SAC+AE, and DrQ-v2) with respect to sample efficiency and task performance on two 3D locomotion tasks ('quadruped-walk' and 'quadruped-run') from the DeepMind Control Suite. We find that using data augmentation, rather than contrastive learning or an auto-encoder, plays an important role in improving sample efficiency and task performance in image-based training.
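The data augmentation highlighted in the abstract refers to the random-shift augmentation popularized by the DrQ family of algorithms: each image observation is padded at the edges and then randomly cropped back to its original size, which regularizes the learned representation at low cost. The following is a minimal sketch of that idea, assuming NumPy, an 84x84 RGB observation (a common DeepMind Control Suite rendering size), and an illustrative pad width of 4 pixels; it is not the authors' implementation.

```python
import numpy as np

def random_shift(img, pad=4, rng=None):
    """DrQ-style random-shift augmentation.

    Pads the image by `pad` pixels on each side (replicating edge
    pixels), then crops a random window back to the original size,
    producing a small random translation of the observation.
    """
    rng = rng or np.random.default_rng()
    h, w, c = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w, :]

# Example: augment an 84x84 RGB observation; shape and dtype are preserved.
obs = np.zeros((84, 84, 3), dtype=np.uint8)
aug = random_shift(obs)
```

In practice such an augmentation is applied to both the observations sampled from the replay buffer and their next-step counterparts before the critic update, so the value function becomes approximately invariant to small translations.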
KSP Keywords
3D locomotion, Auto-Encoder(AE), Continuous control, Control problems, Control sample, Data Augmentation, Empirical investigation, End to End(E2E), Image-based, Real-world applications, Reinforcement Learning(RL)