ETRI Knowledge Sharing Platform


Learning to Rewind via Iterative Prediction of Past Weights for Practical Unlearning
Cited 0 times in Scopus
Authors
Jinhyeok Jang, Jaehong Kim, Chan-Hyun Youn
Issue Date
2025-02
Citation
The Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI) 2025, pp.26248-26255
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1609/aaai.v39i25.34822
Abstract
In artificial intelligence (AI), many legal conflicts have arisen, especially concerning privacy and copyright associated with training data. When an AI model's training data raises privacy concerns, it becomes imperative to develop a new model devoid of influences from such contentious data. However, retraining from scratch is often not viable due to the extensive data requirements and heavy computational costs. Machine unlearning presents a promising solution by enabling the selective erasure of specific knowledge from models. Despite its potential, many existing approaches to machine unlearning are based on scenarios that are either impractical or could lead to unintended degradation of model performance. We utilize the concept of weight prediction to approximate the less-learned weights based on observations of further training. By repeating 1) finetuning on the specific data and 2) weight prediction, our method gradually eliminates knowledge of that data. We verify its ability to eliminate side effects caused by problematic data and show its effectiveness across various architectures, datasets, and tasks.
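The alternation the abstract describes — finetune on the forget data, then predict "less-learned" past weights from the observed update — might be sketched as below. This is a minimal illustrative sketch only: the names `finetune_step` and `predict_past_weights`, the linear backward extrapolation rule, and all hyperparameters are assumptions for exposition, not the paper's actual prediction method.

```python
import numpy as np

def finetune_step(w, grad_fn, lr=0.1, steps=5):
    """Plain gradient descent on the forget-set loss (stand-in for finetuning)."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

def predict_past_weights(w_before, w_after, alpha=1.0):
    """Extrapolate backwards: the finetuning delta (w_after - w_before) points
    toward 'more learned' weights, so step in the opposite direction to
    approximate an earlier, less-learned state (assumed linear rule)."""
    return w_before - alpha * (w_after - w_before)

def unlearn(w, grad_fn, rounds=3):
    """Repeat 1) finetuning on the forget data and 2) weight prediction."""
    for _ in range(rounds):
        w_ft = finetune_step(w, grad_fn)
        w = predict_past_weights(w, w_ft)
    return w

# Toy usage: the 'forget-set loss' pulls w toward a memorized target t;
# each unlearning round should push w further away from t.
t = np.array([1.0])
grad_fn = lambda w: 2.0 * (w - t)   # gradient of ||w - t||^2
w_unlearned = unlearn(np.array([0.0]), grad_fn)
```

In this toy setting, each round's backward extrapolation increases the distance between the weights and the memorized target, mimicking the gradual elimination of knowledge about the forget data.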
KSP Keywords
Existing Approaches, Machine Unlearning, Model performance, New model, Privacy concerns, Selective erasure, Side effects, Weight Prediction, artificial intelligence, computational cost, data requirements