ETRI Knowledge Sharing Platform

Conference Paper
Tailored ViT Slimming: Budget-Aware Multi-Dimensional Sparsity Regularization for Vision Transformers Pruning (Student Abstract)
Authors
Suwoong Lee, Seungjae Lee, Yunho Jeon, Junmo Kim
Issue Date
2026-01
Citation
The Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI) 2026, v.40, no.48, pp.41255-41257
Publisher
Association for the Advancement of Artificial Intelligence
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1609/aaai.v40i48.42233
Abstract
We propose Tailored ViT Slimming (TVS), a budget-aware multi-dimensional pruning framework for Vision Transformers. TVS injects learnable masks into MHSA and MLP modules and applies adaptive non-convex sparsity regularization to achieve maximal utilization of parameters under strict module-wise budgets. In addition, by retaining scaled masks after pruning, TVS avoids abrupt accuracy drops and provides stable initialization for fine-tuning. On ImageNet-1k with DeiT-S and DeiT-B, TVS consistently outperforms prior ViT compression methods. This result empirically shows that the non-convex sparsity regularizer is effective not only in CNNs but also in ViTs.
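The two core ideas in the abstract — a non-convex (L_p, p < 1) sparsity penalty on learnable masks, and budget-aware pruning that retains the scaled mask values for stable fine-tuning initialization — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names, the choice p = 0.5, and the toy mask values are illustrative assumptions.

```python
import numpy as np

def nonconvex_sparsity_penalty(masks, p=0.5):
    """L_p penalty with p < 1: non-convex, so it pushes already-small
    mask values toward zero harder than an L1 penalty would."""
    return float(np.sum(np.abs(masks) ** p))

def budget_prune(masks, budget):
    """Keep the `budget` largest-magnitude mask entries of one module.
    Retained entries keep their learned scale (not reset to 1), which is
    the 'retain scaled masks' idea for a stable fine-tuning init."""
    keep = np.argsort(np.abs(masks))[-budget:]
    pruned = np.zeros_like(masks)
    pruned[keep] = masks[keep]  # zero out the rest, keep learned scales
    return pruned

# Toy learned masks for six attention heads of one MHSA module,
# pruned under a strict module-wise budget of 3 heads.
heads = np.array([0.9, 0.05, 0.7, 0.01, 0.4, 0.6])
pruned = budget_prune(heads, budget=3)
```

After pruning, the penalty drops (fewer nonzero masks), and the surviving heads enter fine-tuning at their learned scales instead of an abrupt hard-binarized state.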
KSP Keywords
Compression method, Fine-tuning, Sparsity regularization, multi-dimensional, non-convex