MGAN: training generative adversarial nets with multiple generators

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

We propose in this paper a new approach to training Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators instead of a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation is established among a classifier, a discriminator, and a set of generators, in a similar spirit to the original GAN. The generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by the generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created by multiple generators, one of which is then randomly selected to produce the final output, similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop a theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of the generators’ distributions and the empirical data distribution is minimal, whilst the JSD among the generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN and thus can efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over the latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects with its generators.
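
For readers who want a concrete picture of the minimax formulation the abstract refers to, the display below is a minimal LaTeX sketch of such an objective over K generators G_1, ..., G_K, a discriminator D, and a classifier C. The mixing weights π_k, the diversity coefficient β, and the classifier output C_k(x) (the probability that x was produced by generator k) are our own shorthand for the components named in the abstract, not necessarily the paper's exact notation.

\[
\min_{G_{1:K},\,C}\ \max_{D}\ \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{x \sim p_{\mathrm{model}}}\big[\log\big(1 - D(x)\big)\big]
- \beta \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{x \sim p_{G_k}}\big[\log C_k(x)\big],
\qquad
p_{\mathrm{model}}(x) = \sum_{k=1}^{K} \pi_k\, p_{G_k}(x).
\]

The first two terms are the usual GAN game played against the mixture distribution p_model, which at the optimum drives the JSD between p_model and the data distribution to its minimum; the classifier term rewards generators whose samples are mutually distinguishable, which pushes the JSD among the generators' distributions toward its maximum, consistent with the equilibrium result stated above.
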
Original language: English
Title of host publication: 6th International Conference on Learning Representations, ICLR 2018
Editors: Iain Murray, Marc’Aurelio Ranzato, Oriol Vinyals
Place of Publication: Amherst MA USA
Publisher: OpenReview
Number of pages: 24
Publication status: Published - 2018
Externally published: Yes
Event: International Conference on Learning Representations 2018 - Vancouver, Canada
Duration: 30 Apr 2018 – 3 May 2018
https://iclr.cc/Conferences/2018

Conference

Conference: International Conference on Learning Representations 2018
Abbreviated title: ICLR 2018
Country: Canada
City: Vancouver
Period: 30/04/18 – 3/05/18
Internet address: https://iclr.cc/Conferences/2018

Keywords

  • GANs
  • Mode Collapse
  • Mixture
  • Jensen-Shannon Divergence
  • Inception Score
  • Generator
  • Discriminator
  • CIFAR-10
  • STL-10
  • ImageNet

Cite this

Hoang, Q., Nguyen, T. D., Le, T., & Phung, D. (2018). MGAN: training generative adversarial nets with multiple generators. In I. Murray, M.-A. Ranzato, & O. Vinyals (Eds.), 6th International Conference on Learning Representations, ICLR 2018. Amherst MA USA: OpenReview.
Scopus record: http://www.scopus.com/inward/record.url?scp=85056787962&partnerID=8YFLogxK
