ETRI Knowledge Sharing Platform


Small Changes, Big Impact: How Manipulating a Few Neurons Can Drastically Alter LLM Aggression
Cited 0 times in Scopus · Downloaded 99 times
Authors
Jaewook Lee, Junseo Jang, Oh-Woog Kwon, Harksoo Kim
Issue Date
2025-07
Citation
Annual Meeting of the Association for Computational Linguistics (ACL) 2025, pp.23478-23505
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.18653/v1/2025.acl-long.1144
Abstract
Recent remarkable advances in Large Language Models (LLMs) have led to innovations in various domains such as education, healthcare, and finance, while also raising serious concerns that they can be easily misused for malicious purposes. Most previous research has focused primarily on observing how jailbreak attack techniques bypass safety mechanisms such as Reinforcement Learning from Human Feedback (RLHF). However, whether there are neurons within LLMs that directly govern aggression has not been sufficiently investigated. To fill this gap, this study identifies specific neurons ("aggression neurons") closely related to the expression of aggression and systematically analyzes how manipulating them affects the model's overall aggression. Specifically, using a large-scale synthetic text corpus (aggressive and non-aggressive), we measure the activation frequency of each neuron, then apply masking and activation techniques to quantitatively evaluate changes in aggression by layer and by manipulation ratio. Experimental results show that, in all models, manipulating only a small number of neurons can increase aggression by up to 33%, and the effect is even more extreme when aggression neurons are concentrated in certain layers. Moreover, even models of the same scale exhibit nonlinear changes in aggression patterns, suggesting that simple external safety measures alone may not be sufficient for complete defense.
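The abstract's selection criterion — compare each neuron's activation frequency on aggressive versus non-aggressive text, then mask the most differential neurons — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function names, the toy activation data, the firing threshold, and the top-k selection rule are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_frequency(acts, threshold=0.0):
    """Fraction of tokens on which each neuron's activation exceeds the threshold."""
    return (acts > threshold).mean(axis=0)

# Toy stand-ins for one layer's activations, shape (num_tokens, num_neurons).
# Neurons 0-3 are planted to fire far more often on the "aggressive" corpus.
aggr_acts = rng.normal(0.2, 1.0, size=(1000, 64))
neutral_acts = rng.normal(0.0, 1.0, size=(1000, 64))
aggr_acts[:, :4] += 1.5

# Candidate "aggression neurons": largest gap in firing frequency between corpora.
freq_diff = activation_frequency(aggr_acts) - activation_frequency(neutral_acts)
top_k = 4
aggression_neurons = np.argsort(freq_diff)[-top_k:]

def mask_neurons(acts, neuron_ids):
    """Masking manipulation: zero the selected neurons' activations."""
    masked = acts.copy()
    masked[:, neuron_ids] = 0.0
    return masked

masked = mask_neurons(aggr_acts, aggression_neurons)
print(sorted(aggression_neurons.tolist()))  # → [0, 1, 2, 3]
```

In a real model, the masking step would be applied inside the forward pass (e.g. via a layer hook) rather than to a stored activation matrix, and the "activation" manipulation the abstract mentions would scale the selected neurons up instead of zeroing them.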
KSP Keywords
Attack techniques, Reinforcement learning (RL), Safety measures, Human feedback, Language models, Large-scale, Text corpus
This work is distributed under the terms of the Creative Commons License (CC BY).