Guiding LLMs to decode text via aligning semantics in EEG signals and language

  • Huanran Zheng
  • Yuanbin Wu
  • Tianwen Qian
  • Wenjing Yue
  • Xiaoling Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the rapid development of brain-computer interfaces (BCIs) in recent years, the electroencephalography-to-text (EEG-to-text) task has drawn increasing attention. This task aims to generate natural text from EEG signals to assist individuals who have lost the ability to communicate. Previous methods have framed it as a sequence-to-sequence translation task. However, their models were trained with a teacher-forcing strategy, which introduced language bias and could not effectively exploit the EEG signals. To address this issue, we propose a novel framework that treats EEG-to-text as a fine-grained controllable text generation task. Specifically, since large language models (LLMs) have strong text generation capabilities, we guide an LLM to generate the desired sentence step by step by re-ranking its predicted candidate words according to their semantic similarity with the EEG segment representations. Our approach therefore focuses on training a word-level EEG representation model that effectively extracts information from EEG signals and aligns EEG representations with word semantics, without relying on teacher forcing. Extensive experiments on the ZuCo benchmark demonstrate the effectiveness of our approach, which achieves state-of-the-art performance in both multi-subject and single-subject settings. Furthermore, results in cross-subject scenarios verify that our method generalizes well and can be applied to unseen subjects.
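The re-ranking step described in the abstract can be sketched as follows: at each decoding step, the LLM proposes candidate next words with log-probabilities, and each candidate is re-scored by mixing that log-probability with the cosine similarity between the candidate's embedding and the current EEG segment representation in the shared space. This is a minimal illustrative sketch, not the paper's exact scoring rule; the interpolation weight `lam` and the function names are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rerank_candidates(candidates, eeg_embedding, word_embeddings, lam=0.5):
    """Re-rank LLM candidate words for one decoding step.

    candidates: list of (word, log_prob) pairs proposed by the LLM.
    eeg_embedding: representation of the current EEG segment.
    word_embeddings: word -> vector in the shared EEG/word space
        (produced by the aligned word-level EEG representation model).
    lam: interpolation weight between LM score and EEG similarity
        (hypothetical; the paper may combine the two differently).
    Returns candidate words sorted from best to worst combined score.
    """
    scored = []
    for word, log_prob in candidates:
        sim = cosine(eeg_embedding, word_embeddings[word])
        scored.append((lam * log_prob + (1 - lam) * sim, word))
    scored.sort(reverse=True)
    return [word for _, word in scored]

# Toy example with 2-D embeddings: the EEG segment points along
# the same direction as "world", so it outranks the LM favorite.
cands = [("hello", -0.1), ("world", -0.5)]
embs = {"hello": [0.0, 1.0], "world": [1.0, 0.0]}
eeg = [1.0, 0.0]
print(rerank_candidates(cands, eeg, embs, lam=0.5))  # → ['world', 'hello']
```

Because only the ranking is adjusted, the LLM's generation ability is untouched; the EEG model just steers which candidate is emitted at each step.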

Original language: English
Journal: Expert Systems with Applications
Volume: 299
State: Published - 1 Mar 2026

Keywords

  • Brain-computer interface
  • Brain-to-Text
  • Contrastive learning
  • Electroencephalography
  • Large language models
