In the rapidly advancing field of Reinforcement Learning (RL), Multi-Agent Reinforcement Learning (MARL) has emerged as a key approach to solving complex real-world challenges. A pivotal development in this area is the mixing network, which represents a significant step forward in the capabilities of multi-agent systems. Drawing inspiration from the COMA and VDN methodologies, the mixing network overcomes limitations in extracting combined Q-values from joint state-action interactions. Earlier approaches such as COMA and VDN could not fully exploit the state information available during training, which limited their effectiveness. QMIX and QVMinMax addressed this issue by employing neural networks that convert the centralized state into the weights of a second neural network, akin to hypernetworks. However, these solutions introduced challenges of their own, such as computational intensity and susceptibility to local minima. To overcome these hurdles, our proposed methodology makes three key contributions. First, we introduce the state-fusion network, an alternative to traditional mixing based on a self-attention mechanism. Second, to address the local-optima problem in MARL algorithms, we leverage the Grey Wolf Optimizer for weight and bias selection, adding a stochastic element for improved optimization. Finally, we comprehensively compare our method with QMIX, evaluating performance under two optimization methods: gradient descent and a stochastic optimizer. Using the StarCraft II Learning Environment (SC2LE) as our experimental platform, our results demonstrate that our methodology outperforms QMIX, QVMinMax, and QSOD in absolute performance, particularly when operating under resource constraints. The proposed methodology contributes to the ongoing evolution of MARL techniques, advancing attention mechanisms and optimization strategies for enhanced multi-agent system capabilities.
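The Grey Wolf Optimizer mentioned above searches for good weight and bias values by moving a population of candidates toward its three best members (the "alpha", "beta", and "delta" wolves). The following is a minimal sketch of that standard GWO update, not the authors' implementation; the function names and parameters (`fitness`, `n_wolves`, `iters`) are illustrative assumptions.

```python
import random

def gwo_minimize(fitness, dim, n_wolves=8, iters=50, lo=-1.0, hi=1.0):
    """Standard Grey Wolf Optimizer sketch: minimise `fitness` over R^dim."""
    # Initialise a pack of candidate weight vectors.
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        # Rank the pack; alpha, beta, delta are copies of the three best.
        wolves.sort(key=fitness)
        alpha, beta, delta = wolves[0][:], wolves[1][:], wolves[2][:]
        a = 2.0 - 2.0 * t / iters  # control parameter decays from 2 to 0
        for w in wolves:
            for j in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    d = abs(C * leader[j] - w[j])
                    x += leader[j] - A * d
                # New position: average pull toward the three leaders,
                # clamped to the search bounds.
                w[j] = min(hi, max(lo, x / 3.0))
    return min(wolves, key=fitness)

# Example: select 4 "weights" minimising a simple quadratic loss.
best = gwo_minimize(lambda w: sum(x * x for x in w), dim=4)
```

In the MARL setting described in the abstract, `fitness` would score a candidate set of network weights (e.g. by TD error), injecting the stochastic exploration that plain gradient descent lacks.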
KSP Keywords
Attention mechanism, Grey Wolf optimizer, Local minima, Multi-agent system (MAS), Optimization methods, Optimization strategies, Q-value, Real-world, Reinforcement learning (RL), StarCraft II, Stochastic Gradient Descent
This work is distributed under the terms of the Creative Commons License (CC BY).