ETRI Knowledge Sharing Platform


Sample-efficient and occlusion-robust reinforcement learning for robotic manipulation via multimodal fusion dualization and representation normalization
Cited 1 time in Scopus
Authors
Samyeul Noh, Wooju Lee, Hyun Myung
Issue Date
2025-05
Citation
Neural Networks, v.185, pp.1-14
ISSN
0893-6080
Publisher
Elsevier Ltd.
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1016/j.neunet.2025.107202
Abstract
Recent advances in visual reinforcement learning (visual RL), which learns from high-dimensional image observations, have narrowed the gap between state-based and image-based training. However, visual RL continues to face significant challenges in robotic manipulation tasks involving occlusions, such as lifting obscured objects. Although high-resolution tactile sensors have shown promise in addressing these occlusion issues through visuotactile manipulation, their high cost and complexity limit widespread adoption. In this paper, we propose a novel RL approach that introduces multimodal fusion dualization and representation normalization to enhance sample efficiency and robustness in robotic manipulation tasks involving occlusions — without relying on tactile feedback. Our multimodal fusion dualization technique separates the fusion process into two distinct modules, each optimized individually for the actor and the critic, resulting in tailored representations for each network. Additionally, representation normalization techniques, including LayerNorm and SimplexNorm, are incorporated into the representation learning process to stabilize training and prevent issues such as gradient explosion. We demonstrate that our method not only effectively tackles challenging robotic manipulation tasks involving occlusions but also outperforms state-of-the-art visual RL and state-based RL methods in both sample efficiency and task performance. Notably, this is achieved without relying on tactile sensors or prior knowledge, such as predefined low-dimensional coordinate states or pre-trained representations, making our approach both cost-effective and scalable for real-world robotic applications.
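To make the two ideas in the abstract concrete, the sketch below illustrates multimodal fusion dualization (separate fusion modules, one for the actor and one for the critic) together with representation normalization. This is not the authors' implementation: the layer sizes, the linear fusion map, and in particular the reading of SimplexNorm as a softmax-style projection onto the probability simplex are all assumptions made for illustration only.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # LayerNorm (without learnable affine parameters): normalize each
    # feature vector to zero mean and unit variance to stabilize training.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def simplex_norm(x):
    # Hypothetical stand-in for SimplexNorm: a softmax projects features onto
    # the probability simplex (non-negative, summing to 1), which bounds the
    # representation scale and can help prevent gradient explosion.
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class FusionEncoder:
    """One fusion module: concatenate modalities, apply a linear map, normalize."""
    def __init__(self, in_dim, out_dim, norm, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=in_dim ** -0.5, size=(in_dim, out_dim))
        self.norm = norm

    def __call__(self, image_feat, proprio_feat):
        fused = np.concatenate([image_feat, proprio_feat], axis=-1)
        return self.norm(fused @ self.W)

# Dualization: the actor and the critic each own a separate fusion module,
# optimized individually, so each network gets a representation tailored to it.
img, prop = np.ones((2, 32)), np.ones((2, 8))          # batch of 2 observations
actor_enc = FusionEncoder(40, 16, simplex_norm, seed=1)
critic_enc = FusionEncoder(40, 16, layer_norm, seed=2)
z_actor, z_critic = actor_enc(img, prop), critic_enc(img, prop)
```

In this reading, the actor and critic never share fusion weights, and each normalized representation feeds only its own network's loss.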
KSP Keywords
Fusion process, High resolution, High-dimensional image, Image-based, Learning Process, Real-world, Reinforcement learning(RL), Representation learning, Robotic Manipulation, Tactile Feedback, cost-effective