Efficient second-order optimization with predictions in differential games

Deliang Wei, Peng Chen, Fang Li, Xiangyun Zhang

Research output: Contribution to journal › Review article › peer-review

Abstract

A growing number of training methods for generative adversarial networks (GANs) are formulated as differential games. Unlike convex optimization of a single objective, gradient descent on multiple objectives may fail to converge to stable fixed points (SFPs). To improve learning dynamics in such games, many recently proposed methods exploit second-order information about the game, such as the Hessian matrix. Unfortunately, these methods often suffer from the enormous computational cost of the Hessian, which hinders their wider application. In this paper, we present efficient second-order optimization (ESO), in which only part of the Hessian is updated at each iteration, and we derive the resulting algorithm. Furthermore, we establish the local convergence of the method under reasonable assumptions. To further speed up the training of GANs, we propose efficient second-order optimization with predictions (ESOP), which uses a novel accelerator. Basic experiments show that the proposed learning methods are faster than some state-of-the-art methods for GANs, while remaining applicable to many other n-player differential games with a local convergence guarantee.
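The abstract does not spell out ESO's update rule, but the setting it describes, simultaneous gradient play corrected by Hessian terms with only part of the second-order information refreshed per step, can be illustrated on a toy two-player game. The numpy sketch below is a hypothetical stand-in, not the authors' method: the game matrix A, the consensus-style correction J(z)^T v(z), and the alternating coordinate mask are all illustrative assumptions.

```python
# Minimal sketch of second-order game dynamics. This is NOT the paper's
# ESO/ESOP algorithm (not given in the abstract); it only illustrates
# correcting simultaneous gradient descent with Hessian-type information
# in a two-player differential game, refreshing part of that term per step.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.05]])  # mostly rotational game Jacobian

def v(z):
    """Simultaneous-gradient vector field v(z) = A z of a toy bilinear game."""
    return A @ z

def correction(z):
    """Consensus-style second-order term J(z)^T v(z); here J is the constant A."""
    return A.T @ v(z)

z = np.array([1.0, 1.0])
lr, gamma = 0.1, 0.5
for t in range(200):
    # As a stand-in for "updating only part of the Hessian", apply the
    # second-order correction to one coordinate block per iteration.
    mask = np.zeros_like(z)
    mask[t % 2] = 1.0
    z = z - lr * (v(z) + gamma * mask * correction(z))
print("final iterate:", z)  # decays toward the stable fixed point at the origin
```

On this game, plain simultaneous gradient descent slowly spirals outward, while the (even partially applied) correction makes the iteration contractive, which is the qualitative behavior second-order game methods aim for.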

Original language: English
Pages (from-to): 861-886
Number of pages: 26
Journal: Optimization Methods and Software
Volume: 38
Issue number: 5
State: Published - 2023

Keywords

  • GANs
  • adaptive accelerator
  • differential games
  • efficient second-order optimization with predictions
  • local convergence guarantee
  • stable fixed point
