Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model

  • Xiao Wang
  • Weikang Zhou
  • Qi Zhang*
  • Jie Zhou
  • Songyang Gao
  • Junzhe Wang
  • Menghan Zhang
  • Xiang Gao
  • Yunwen Chen
  • Tao Gui*

  *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

Pretrained language models have achieved remarkable success on a wide range of natural language processing tasks. However, pretraining has recently shifted toward larger models and larger data, which incurs substantial computational and energy costs. In this paper, we propose Influential Subset Selection (ISS) for language models, which explicitly uses end-task knowledge to select a tiny subset of the pretraining corpus. Specifically, ISS selects the samples that will exert the most positive influence on end-task performance. Furthermore, we design a gradient-matching-based influence estimation method, which drastically reduces the time needed to compute influence scores. With only 0.45% of the data and a computational cost three orders of magnitude lower, ISS outperformed pretrained models (e.g., RoBERTa) on eight datasets covering four domains.
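The abstract describes the method only at a high level. Below is a minimal, hypothetical sketch (not the authors' implementation) of gradient-matching-based influence scoring in the spirit of the paper: each pretraining sample is scored by how well its gradient aligns with the end-task gradient, and the highest-scoring samples are kept. All function and variable names (e.g., `score_pretraining_samples`, `flat_grad`) are illustrative assumptions.

```python
# Hypothetical sketch of gradient-matching influence scoring (PyTorch).
# Not the paper's code; it illustrates scoring pretraining samples by the
# dot product of their gradients with the end-task gradient.
import torch
import torch.nn as nn

def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def score_pretraining_samples(model, loss_fn, end_task_batch, pretrain_samples):
    """Score each pretraining sample by gradient alignment with the end task.

    Higher score = gradient points in the same direction as the end-task
    gradient, i.e., training on the sample is expected to help the end task.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # End-task gradient, computed once and reused for every sample.
    x_t, y_t = end_task_batch
    g_task = flat_grad(loss_fn(model(x_t), y_t), params)

    scores = []
    for x, y in pretrain_samples:
        g = flat_grad(loss_fn(model(x), y), params)
        scores.append(torch.dot(g, g_task).item())
    return scores

# Toy usage: keep the top-k most influential samples for a tiny regressor.
model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
end_task_batch = (torch.randn(8, 4), torch.randn(8, 1))
pretrain = [(torch.randn(2, 4), torch.randn(2, 1)) for _ in range(100)]

scores = score_pretraining_samples(model, loss_fn, end_task_batch, pretrain)
top_k = sorted(range(len(pretrain)), key=lambda i: scores[i], reverse=True)[:5]
print("selected sample indices:", top_k)
```

In this sketch, scoring every sample requires one backward pass per sample; the paper's contribution is precisely an estimation method that makes this kind of influence computation drastically cheaper at scale.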

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics, ACL 2023
Publisher: Association for Computational Linguistics (ACL)
Pages: 555-568
Number of pages: 14
ISBN (Electronic): 9781959429623
DOIs
State: Published - 2023
Externally published: Yes
Event: Findings of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 Jul 2023 – 14 Jul 2023

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: Findings of the Association for Computational Linguistics, ACL 2023
Country/Territory: Canada
City: Toronto
Period: 9/07/23 – 14/07/23
