A Linearly Convergent Optimization Framework for Learning Graphs from Smooth Signals

  • Xiaolu Wang*
  • Chaorui Yao
  • Anthony Man-Cho So

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Learning graph structures from a collection of smooth graph signals is a fundamental problem in data analysis and has attracted much interest in recent years. Although various optimization formulations of the problem have been proposed in the literature, existing methods for solving them either are not practically efficient or lack strong convergence guarantees. In this article, we consider a unified graph learning formulation that captures a wide range of static and time-varying graph learning models and develop a first-order method for solving it. By showing that the set of Karush-Kuhn-Tucker points of the formulation possesses a so-called error bound property, we establish the linear convergence of our proposed method. Moreover, through extensive numerical experiments on both synthetic and real data, we show that our method exhibits sharp linear convergence and can be substantially faster than a host of other existing methods. To the best of our knowledge, our work is the first to develop a first-order method that not only is practically efficient but also enjoys a linear convergence guarantee when applied to a large class of graph learning models.
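The paper's specific formulation and proximal-ADMM solver are not reproduced on this page, but the notion of "smooth graph signals" underlying this line of work can be illustrated with the standard Laplacian quadratic form, which is small when connected nodes carry similar signal values. The sketch below is a generic illustration of that criterion, not the authors' method; the path graph and signals are hypothetical examples.

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def smoothness(X, W):
    """Laplacian quadratic form tr(X^T L X), equal to
    (1/2) * sum_{i,j} W[i, j] * ||X[i] - X[j]||^2.
    Rows of X index graph nodes; columns index signals."""
    return float(np.trace(X.T @ laplacian(W) @ X))

# Hypothetical path graph on 3 nodes: 0 -- 1 -- 2
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# A constant signal is perfectly smooth on any connected graph:
print(smoothness(np.ones((3, 1)), W))              # → 0.0
# An oscillating signal incurs a large smoothness penalty:
print(smoothness(np.array([[1.], [-1.], [1.]]), W))  # → 8.0
```

Graph learning formulations of the kind the abstract describes treat W (or L) as the optimization variable and seek weights that make the observed signals X smooth, typically alongside regularizers that control sparsity and rule out the trivial all-zero graph.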

Original language: English
Pages (from-to): 490-504
Number of pages: 15
Journal: IEEE Transactions on Signal and Information Processing over Networks
Volume: 9
DOIs
State: Published - 2023
Externally published: Yes

Keywords

  • Graph learning
  • error bound
  • graph signal processing
  • linear convergence
  • proximal ADMM
