Improving MMD-GaN training with repulsive loss function

Wei Wang, Yuan Sun, Saman Halgamuge

Research output: Contribution to conference › Poster › peer-review

25 Citations (Scopus)

Abstract

Generative adversarial nets (GANs) are widely used to learn the data sampling process, and their performance may heavily depend on the loss function, given a limited computational budget. This study revisits MMD-GAN, which uses the maximum mean discrepancy (MMD) as the loss function for GAN, and makes two contributions. First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data. To address this issue, we propose a repulsive loss function to actively learn the differences among real data by simply rearranging the terms in MMD. Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function. The proposed methods are applied to unsupervised image generation tasks on the CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets. Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions. The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization.
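The "rearranging the terms in MMD" idea from the abstract can be made concrete with a small numeric sketch. The squared MMD estimate decomposes into a real-real kernel term, a cross term, and a fake-fake term; the repulsive discriminator loss keeps only the real-real and fake-fake terms with signs chosen so that minimizing it spreads, rather than contracts, the embeddings of real data. The sketch below is illustrative, assuming a plain Gaussian kernel on (stand-in) discriminator outputs and biased estimators; the function names, the bandwidth `sigma`, and the stand-in arrays are not from the paper.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between real samples x and generated
    # samples y: E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')].
    kxx = gaussian_kernel(x, x, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    return kxx - 2.0 * kxy + kyy

def repulsive_d_loss(x, y, sigma=1.0):
    # Repulsive discriminator loss sketch: the same kernel terms rearranged as
    # E[k(x,x')] - E[k(y,y')], so minimizing it pushes real embeddings apart
    # instead of contracting them.
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    return kxx - kyy

# Toy stand-ins for discriminator outputs of real and generated batches.
rng = np.random.default_rng(0)
real = rng.normal(size=(8, 3))
fake = rng.normal(loc=2.0, size=(8, 3))
print(mmd2(real, fake), repulsive_d_loss(real, fake))
```

Note that the rearrangement reuses the kernel matrices already needed for the MMD estimate, which is consistent with the abstract's claim of no additional computational cost.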

Original language: English
Number of pages: 11
Publication status: Published - 2019
Externally published: Yes
Event: International Conference on Learning Representations 2019 - Ernest N. Morial Convention Center, New Orleans, United States of America
Duration: 6 May 2019 - 9 May 2019
Conference number: 7th
https://iclr.cc/Conferences/2019
https://openreview.net/group?id=ICLR.cc/2019/Conference (Proceedings)

Conference

Conference: International Conference on Learning Representations 2019
Abbreviated title: ICLR 2019
Country/Territory: United States of America
City: New Orleans
Period: 6/05/19 - 9/05/19
Other: The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning.

ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
