Generative Adversarial Networks with Joint Distribution Moment Matching

Yi Ying Zhang, Chao Min Shen, Hao Feng, Preston Thomas Fletcher, Gui Xu Zhang

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Generative adversarial networks (GANs) have shown impressive power in the field of machine learning. Traditional GANs have focused on unsupervised learning tasks. In recent years, conditional GANs that can generate data with labels have been proposed for semi-supervised learning and have achieved better image quality than traditional GANs. Conditional GANs, however, generally only minimize the difference between the marginal distributions of real and generated data, neglecting the difference with respect to each class of the data. To address this challenge, we propose the GAN with joint distribution moment matching (JDMM-GAN), which matches the joint distribution based on maximum mean discrepancy and thereby minimizes the differences of both the marginal and conditional distributions. The learning procedure is conducted iteratively by stochastic gradient descent and back-propagation. We evaluate JDMM-GAN on several benchmark datasets, including MNIST, CIFAR-10 and the Extended Yale Face dataset. Compared with state-of-the-art GANs, JDMM-GAN generates more realistic images and achieves the best inception score on the CIFAR-10 dataset.
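The abstract gives no implementation details, but the core idea of matching both the marginal and the per-class (conditional) distributions with maximum mean discrepancy (MMD) can be illustrated with a short sketch. The PyTorch snippet below is a hypothetical illustration, not the authors' code: the Gaussian-kernel bandwidth `sigma`, the conditional weighting `lambda_cond`, and all helper names are assumptions made for this example.

```python
# Minimal sketch of an MMD loss that penalizes the marginal gap plus the
# per-class gaps between real and generated samples. Hyperparameters and
# helper names are illustrative assumptions, not values from the paper.
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel: k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    dist2 = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-dist2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased empirical estimate of the squared MMD between samples x and y.
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

def joint_mmd_loss(real, fake, real_labels, fake_labels,
                   num_classes, sigma=1.0, lambda_cond=1.0):
    # Marginal term: match the overall real and generated distributions.
    loss = mmd2(real, fake, sigma)
    # Conditional terms: also match the distributions within each class,
    # so generated samples of class c resemble real samples of class c.
    for c in range(num_classes):
        r_c = real[real_labels == c]
        f_c = fake[fake_labels == c]
        if len(r_c) > 1 and len(f_c) > 1:
            loss = loss + lambda_cond * mmd2(r_c, f_c, sigma)
    return loss

# Toy usage with random features standing in for real and generated data.
real = torch.randn(64, 128)
fake = torch.randn(64, 128, requires_grad=True)
real_labels = torch.randint(0, 10, (64,))
fake_labels = torch.randint(0, 10, (64,))
loss = joint_mmd_loss(real, fake, real_labels, fake_labels, num_classes=10)
loss.backward()  # gradients reach the generator's outputs via back-propagation
```

In such a setup the generator would typically be trained by minimizing this loss over minibatches with stochastic gradient descent, which is consistent with the training procedure the abstract describes.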

Original language: English
Pages (from-to): 579-597
Number of pages: 19
Journal: Journal of the Operations Research Society of China
Volume: 7
Issue number: 4
DOIs
State: Published - 1 Dec 2019

Keywords

  • Generative adversarial networks
  • Joint distribution moment matching
  • Maximum mean discrepancy
