MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA

Research output: Contribution to journal › Conference article › peer-review

34 Scopus citations

Abstract

Large language models (LLMs) have shown great success in various Natural Language Processing (NLP) tasks, yet they still need updates after deployment to fix errors or keep pace with the changing knowledge in the world. Researchers formulate this problem as Model Editing and have developed various editors focusing on different axes of editing properties. However, current editors can hardly support all properties and rely on heavy computational resources. In this paper, we propose a plug-in Model Editing method based on neuron-indexed dynamic LoRA (MELO), which alters the behavior of language models by dynamically activating certain LoRA blocks according to the index built in an inner vector database. Our method satisfies various editing properties with high efficiency and can be easily integrated into multiple LLM backbones. Experimental results show that our proposed MELO achieves state-of-the-art editing performance on three sequential editing tasks (document classification, question answering and hallucination correction), while requiring the fewest trainable parameters and the lowest computational cost.
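The abstract describes routing queries through a small vector database of edit keys so that only the matching LoRA block is activated. Below is a minimal sketch of that idea, not the authors' implementation: the class name, the `radius` threshold, and the key-matching rule are illustrative assumptions for exposition only.

```python
# Minimal sketch (assumed, not the MELO codebase) of dynamic LoRA routing:
# a vector database maps edit keys to LoRA blocks; a query activates a block
# only when it falls inside an edited region, otherwise the frozen base output
# is returned unchanged.
import numpy as np

class DynamicLoRARouter:
    def __init__(self, hidden_dim, rank, radius=0.75):
        self.hidden_dim = hidden_dim
        self.rank = rank
        self.radius = radius      # similarity threshold for activating an edit (assumed)
        self.keys = []            # cached key vectors (the "inner vector database")
        self.lora_blocks = []     # one (A, B) low-rank pair per edit

    def add_edit(self, key_vector):
        """Register a new edit: store its key and allocate a fresh LoRA block."""
        A = np.random.randn(self.hidden_dim, self.rank) * 0.01
        B = np.zeros((self.rank, self.hidden_dim))  # zero-init; B would be trained on the edit example
        self.keys.append(key_vector / np.linalg.norm(key_vector))
        self.lora_blocks.append((A, B))
        return len(self.lora_blocks) - 1             # index of the new block

    def forward(self, h, base_weight):
        """Apply the frozen base projection, adding a LoRA delta only if h matches a stored key."""
        out = h @ base_weight
        if not self.keys:
            return out
        q = h / np.linalg.norm(h)
        sims = np.array([q @ k for k in self.keys])
        best = int(np.argmax(sims))
        if sims[best] >= self.radius:                # query lies in an edited region
            A, B = self.lora_blocks[best]
            out = out + (h @ A) @ B
        return out

# Toy usage: a query near the stored key activates its LoRA block; others pass through.
router = DynamicLoRARouter(hidden_dim=16, rank=2)
router.add_edit(np.ones(16))
w = np.eye(16)
print(router.forward(np.ones(16), w).shape)          # matches the edit key
print(router.forward(np.random.randn(16), w).shape)  # typically unedited
```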

Original language: English
Pages (from-to): 19449-19457
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 17
DOIs
State: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 - 27 Feb 2024
