ETRI Knowledge Sharing Platform


POaaS: Minimal-Edit Prompt Optimization as a Service to Lift Accuracy and Cut Hallucinations on On-Device sLLMs
Authors: Jungwoo Shim, Dae Won Kim, Sunwook Kim, Sooyoung Kim, Myungcheol Lee, Jaegeun Cha, Hyunhwa Choi
Issue Date: 2026-03
Citation: European Chapter of the Association for Computational Linguistics (EACL) 2026 Workshop: Fact Extraction and Verification (FEVER) 2026, pp.13-27
Publisher: Association for Computational Linguistics
Language: English
Type: Conference Paper
DOI: https://dx.doi.org/10.18653/v1/2026.fever-1.2
Abstract
Small language models (sLLMs) are increasingly deployed on-device, where imperfect user prompts (typos, unclear intent, or missing context) can trigger factual errors and hallucinations. Existing automatic prompt optimization (APO) methods were designed for large cloud LLMs and rely on search that often produces long, structured instructions; when executed under an on-device constraint where the same small model must act as optimizer and solver, these pipelines can waste context and even hurt accuracy. We propose POaaS, a minimal-edit prompt optimization layer that routes each query to lightweight specialists (Cleaner, Paraphraser, Fact-Adder) and merges their outputs under strict drift and length constraints, with a conservative skip policy for well-formed prompts. Under a strict fixed-model setting with Llama-3.2-3B-Instruct and Llama-3.1-8B-Instruct, POaaS improves both task accuracy and factuality while representative APO baselines degrade them, and POaaS recovers up to +7.4% under token deletion and mixup. Overall, per-query conservative optimization is a practical alternative to search-heavy APO for on-device sLLMs.
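
The abstract outlines a route-and-merge design: lightweight specialists propose edits to the prompt, each edit is kept only if it respects drift and length budgets, and well-formed prompts are skipped outright. The sketch below illustrates that control flow only; the specialist bodies, the `looks_well_formed` heuristic, and the `max_drift`/`max_len_ratio` thresholds are placeholder assumptions for demonstration, not details from the paper (the actual specialists are sLLM-backed).

```python
# Illustrative sketch of a POaaS-style minimal-edit optimizer (assumptions, not
# the paper's implementation). Specialists here are trivial stand-ins for the
# sLLM-backed Cleaner / Paraphraser / Fact-Adder described in the abstract.
from difflib import SequenceMatcher


def drift(original: str, edited: str) -> float:
    """Fraction of the prompt changed; enforces the minimal-edit constraint."""
    return 1.0 - SequenceMatcher(None, original, edited).ratio()


def looks_well_formed(prompt: str) -> bool:
    """Hypothetical skip heuristic: leave apparently clean prompts untouched."""
    return prompt.strip().endswith("?") and len(prompt.split()) >= 5


# Placeholder specialists; real ones would call the on-device sLLM.
def cleaner(p: str) -> str:        # normalize whitespace / fix surface noise
    return " ".join(p.split())

def paraphraser(p: str) -> str:    # clarify intent (identity stub here)
    return p

def fact_adder(p: str) -> str:     # supply missing context (identity stub here)
    return p

SPECIALISTS = [cleaner, paraphraser, fact_adder]


def optimize(prompt: str, max_drift: float = 0.3, max_len_ratio: float = 1.5) -> str:
    """Route the prompt through specialists, accepting each edit only while it
    stays within the drift and length budgets relative to the original."""
    if looks_well_formed(prompt):
        return prompt  # conservative skip policy
    current = prompt
    for specialist in SPECIALISTS:
        candidate = specialist(current)
        within_drift = drift(prompt, candidate) <= max_drift
        within_len = len(candidate) <= max_len_ratio * len(prompt)
        if within_drift and within_len:
            current = candidate  # merge the constrained edit
    return current


if __name__ == "__main__":
    print(optimize("wat   is teh capitol of france"))
```

Measuring drift against the original prompt, rather than the previous edit, is what keeps the pipeline "minimal-edit": cumulative changes can never exceed the budget, unlike search-heavy APO that may rewrite the prompt into a long structured instruction.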
KSP Keywords: Length constraints, language models
This work is distributed under the terms of the Creative Commons License (CC BY).