ETRI-Knowledge Sharing Platform

Do LLMs Need Inherent Reasoning Before Reinforcement Learning? A Study in Korean Self-Correction
Authors
Hongjin Kim, Jaewook Lee, Kiyoung Lee, Jong-hun Shin, Soojong Lim, Oh-Woog Kwon
Issue Date
2025-12
Citation
International Joint Conference on Natural Language Processing and Asia-Pacific Chapter of the Association for Computational Linguistics (IJCNLP-AACL) 2025, pp.527-542
Publisher
AFNLP
Language
English
Type
Conference Paper
Abstract
Large Language Models (LLMs) demonstrate strong reasoning and self-correction abilities in high-resource languages like English, but their performance remains limited in low-resource languages such as Korean. In this study, we investigate whether reinforcement learning (RL) can enhance Korean reasoning abilities to a degree comparable to English. Our findings reveal that RL alone yields limited improvements when applied to models lacking inherent Korean reasoning capabilities. To address this, we explore several fine-tuning strategies and show that aligning the model’s internal reasoning processes with Korean inputs—particularly by tuning Korean-specific neurons in early layers—is key to unlocking RL’s effectiveness. We introduce a self-correction code-switching dataset to facilitate this alignment and observe significant performance gains in both mathematical reasoning and self-correction tasks. Ultimately, we conclude that the crucial factor in multilingual reasoning enhancement is not injecting new linguistic knowledge, but effectively eliciting and aligning existing reasoning capabilities. Our study provides a new perspective on how internal translation and neuron-level tuning contribute to multilingual reasoning alignment in LLMs.
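The abstract attributes the gains to tuning Korean-specific neurons in early layers rather than injecting new knowledge. The paper's actual selection and training procedure is not given here, so the following is only a minimal sketch of the general idea, selective neuron-level fine-tuning, in which all parameters are frozen except chosen rows of early-layer MLP weights. The toy model, the layer cutoff, and the neuron indices (`korean_neuron_ids`) are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not the authors' code) of neuron-level tuning:
# freeze every parameter, then allow gradient updates only for a small
# set of "language-specific" neuron rows in the early-layer MLPs.
import torch
import torch.nn as nn

HIDDEN, FFN, NUM_LAYERS, EARLY_LAYERS = 64, 256, 8, 2
korean_neuron_ids = torch.tensor([3, 17, 42, 101])  # hypothetical neuron indices


class ToyBlock(nn.Module):
    """A toy residual MLP block standing in for a transformer layer's FFN."""

    def __init__(self):
        super().__init__()
        self.up = nn.Linear(HIDDEN, FFN)    # each row of up.weight is one FFN neuron
        self.down = nn.Linear(FFN, HIDDEN)

    def forward(self, x):
        return x + self.down(torch.relu(self.up(x)))


model = nn.Sequential(*[ToyBlock() for _ in range(NUM_LAYERS)])

# Freeze everything first.
for p in model.parameters():
    p.requires_grad_(False)


def keep_rows(rows):
    """Return a gradient hook that zeroes all gradients except the given rows."""
    def hook(grad):
        mask = torch.zeros_like(grad)
        mask[rows] = 1.0
        return grad * mask
    return hook


# Re-enable only the early-layer MLP "up" projections and mask their
# gradients so that just the selected neuron rows receive updates.
for block in list(model)[:EARLY_LAYERS]:
    block.up.weight.requires_grad_(True)
    block.up.weight.register_hook(keep_rows(korean_neuron_ids))
    block.up.bias.requires_grad_(True)
    block.up.bias.register_hook(keep_rows(korean_neuron_ids))

# One illustrative update step on random data.
opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-4)
x, target = torch.randn(4, HIDDEN), torch.randn(4, HIDDEN)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
opt.step()
```

This only illustrates the gradient-masking mechanics; how Korean-specific neurons are identified and how the self-correction code-switching data and RL stage are combined is described in the paper itself.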
KSP Keywords
Code-switching, Fine-tuning strategies, Linguistic knowledge, Low-Resource, Reinforcement learning (RL), language models, self-correction