ETRI Knowledge Sharing Platform

Exploring Visual Reinforcement Learning for Sample-Efficient Robotic Manipulation
Cited 0 times in Scopus
Authors
Samyeul Noh, Seonghyun Kim, Ingook Jang
Issue Date
2024-10
Citation
International Conference on Information and Communication Technology Convergence (ICTC) 2024, pp.207-210
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICTC62082.2024.10827272
Abstract
Sample-efficient reinforcement learning (RL) from high-dimensional sensor readings holds significant potential for real physical robot applications. Recent developments in visual RL, which learns directly from image observations, have narrowed the gap between image-based and state-based training, especially for continuous-action tasks. In this paper, we explore the potential of visual RL for achieving sample-efficient robotic manipulation. We evaluate four state-of-the-art visual RL algorithms (CURL, DrQ-v2, TACO, and DrM) on three robotic manipulation tasks from the DeepMind Control manipulation suite: 'reach-duplo,' 'push-box,' and 'lift-box.' Our findings reveal that while visual RL shows promising sample efficiency for robotic manipulation, it still lags behind state-based RL methods that learn from low-dimensional state vectors.
KSP Keywords
High-dimensional, Image-based, Recent developments, Reinforcement learning (RL), Robot applications, Robotic manipulation, Low-dimensional, State-based, State-of-the-art