Dirichlet belief networks for topic structure learning

He Zhao, Lan Du, Wray Buntine, Mingyuan Zhou

Research output: Chapter in Book/Report/Conference proceeding › Chapter (Book) › Research › peer-review

Abstract

Recently, considerable research effort has been devoted to developing deep architectures for topic models to learn topic structures. Although several deep models have been proposed to learn better topic proportions of documents, how to leverage the benefits of deep structures for learning word distributions of topics has not yet been rigorously studied. Here we propose a new multi-layer generative process on word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics of the layer above. As the topics in all layers can be directly interpreted by words, the proposed model is able to discover interpretable topic hierarchies. As a self-contained module, our model can be flexibly adapted to different kinds of topic models to improve their modelling accuracy and interpretability. Extensive experiments on text corpora demonstrate the advantages of the proposed model.
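The layered construction described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration of the generative process only, not the authors' model or inference code: it assumes gamma-distributed mixing weights and a symmetric Dirichlet prior on the top layer, and simply propagates word distributions downward, with each topic drawn from a Dirichlet whose parameter is a weighted mixture of the topics in the layer above.

```python
import numpy as np

rng = np.random.default_rng(0)

V = 50                    # vocabulary size (illustrative)
layer_sizes = [3, 5, 8]   # number of topics per layer, top to bottom

# Top-layer topics: each word distribution drawn from a symmetric Dirichlet.
topics = [rng.dirichlet(np.full(V, 0.5), size=layer_sizes[0])]

for size in layer_sizes[1:]:
    above = topics[-1]                               # (K_above, V)
    # Positive mixing weights connecting each new topic to every topic above
    # (gamma-distributed weights are an assumption of this sketch).
    weights = rng.gamma(1.0, 1.0, size=(size, above.shape[0]))
    # Dirichlet parameter for each topic = weighted mixture of parent topics;
    # a small floor keeps the parameter numerically safe.
    alpha = weights @ above + 1e-3
    layer = np.vstack([rng.dirichlet(a) for a in alpha])
    topics.append(layer)

# Every topic in every layer is itself a distribution over words,
# so each layer can be read and interpreted directly.
for t in topics:
    assert np.allclose(t.sum(axis=1), 1.0)
```

Because every layer lives on the word simplex, the hierarchy can be inspected by listing the top words of any topic at any depth, which is what makes the learned structure interpretable.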
Original language: English
Title of host publication: NIPS Proceedings
Subtitle of host publication: Advances in Neural Information Processing Systems 31 (NIPS 2018)
Editors: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett
Place of Publication: San Diego CA USA
Publisher: Neural Information Processing Systems (NIPS)
Pages: 7966-7977
Number of pages: 12
Publication status: Published - 2018
Event: Advances in Neural Information Processing Systems 2018 - Montreal, Canada
Duration: 2 Dec 2018 - 8 Dec 2018
Conference number: 31st
https://nips.cc/Conferences/2018

Conference

Conference: Advances in Neural Information Processing Systems 2018
Abbreviated title: NIPS 2018
Country: Canada
City: Montreal
Period: 2/12/18 - 8/12/18
Internet address: https://nips.cc/Conferences/2018

Cite this

Zhao, H., Du, L., Buntine, W., & Zhou, M. (2018). Dirichlet belief networks for topic structure learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), NIPS Proceedings: Advances in Neural Information Processing Systems 31 (NIPS 2018) (pp. 7966-7977). San Diego CA USA: Neural Information Processing Systems (NIPS).
Zhao, He; Du, Lan; Buntine, Wray; Zhou, Mingyuan. / Dirichlet belief networks for topic structure learning. NIPS Proceedings: Advances in Neural Information Processing Systems 31 (NIPS 2018). editor / S. Bengio; H. Wallach; H. Larochelle; K. Grauman; N. Cesa-Bianchi; R. Garnett. San Diego CA USA: Neural Information Processing Systems (NIPS), 2018. pp. 7966-7977
@inbook{06694ddbe8054e1790014c01e4049ba7,
title = "Dirichlet belief networks for topic structure learning",
abstract = "Recently, considerable research effort has been devoted to developing deep architectures for topic models to learn topic structures. Although several deep models have been proposed to learn better topic proportions of documents, how to leverage the benefits of deep structures for learning word distributions of topics has not yet been rigorously studied. Here we propose a new multi-layer generative process on word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics of the layer above. As the topics in all layers can be directly interpreted by words, the proposed model is able to discover interpretable topic hierarchies. As a self-contained module, our model can be flexibly adapted to different kinds of topic models to improve their modelling accuracy and interpretability. Extensive experiments on text corpora demonstrate the advantages of the proposed model.",
author = "He Zhao and Lan Du and Wray Buntine and Mingyuan Zhou",
year = "2018",
language = "English",
pages = "7966--7977",
editor = "S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett",
booktitle = "NIPS Proceedings",
publisher = "Neural Information Processing Systems (NIPS)",
}

Zhao, H, Du, L, Buntine, W & Zhou, M 2018, Dirichlet belief networks for topic structure learning. in S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi & R Garnett (eds), NIPS Proceedings: Advances in Neural Information Processing Systems 31 (NIPS 2018). Neural Information Processing Systems (NIPS), San Diego CA USA, pp. 7966-7977, Advances in Neural Information Processing Systems 2018, Montreal, Canada, 2/12/18.

Dirichlet belief networks for topic structure learning. / Zhao, He; Du, Lan; Buntine, Wray; Zhou, Mingyuan.

NIPS Proceedings: Advances in Neural Information Processing Systems 31 (NIPS 2018). ed. / S. Bengio; H. Wallach; H. Larochelle; K. Grauman; N. Cesa-Bianchi; R. Garnett. San Diego CA USA: Neural Information Processing Systems (NIPS), 2018. p. 7966-7977.


TY - CHAP

T1 - Dirichlet belief networks for topic structure learning

AU - Zhao, He

AU - Du, Lan

AU - Buntine, Wray

AU - Zhou, Mingyuan

PY - 2018

Y1 - 2018

N2 - Recently, considerable research effort has been devoted to developing deep architectures for topic models to learn topic structures. Although several deep models have been proposed to learn better topic proportions of documents, how to leverage the benefits of deep structures for learning word distributions of topics has not yet been rigorously studied. Here we propose a new multi-layer generative process on word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics of the layer above. As the topics in all layers can be directly interpreted by words, the proposed model is able to discover interpretable topic hierarchies. As a self-contained module, our model can be flexibly adapted to different kinds of topic models to improve their modelling accuracy and interpretability. Extensive experiments on text corpora demonstrate the advantages of the proposed model.

AB - Recently, considerable research effort has been devoted to developing deep architectures for topic models to learn topic structures. Although several deep models have been proposed to learn better topic proportions of documents, how to leverage the benefits of deep structures for learning word distributions of topics has not yet been rigorously studied. Here we propose a new multi-layer generative process on word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics of the layer above. As the topics in all layers can be directly interpreted by words, the proposed model is able to discover interpretable topic hierarchies. As a self-contained module, our model can be flexibly adapted to different kinds of topic models to improve their modelling accuracy and interpretability. Extensive experiments on text corpora demonstrate the advantages of the proposed model.

M3 - Chapter (Book)

SP - 7966

EP - 7977

BT - NIPS Proceedings

A2 - Bengio, S.

A2 - Wallach, H.

A2 - Larochelle, H.

A2 - Grauman, K.

A2 - Cesa-Bianchi, N.

A2 - Garnett, R.

PB - Neural Information Processing Systems (NIPS)

CY - San Diego CA USA

ER -

Zhao H, Du L, Buntine W, Zhou M. Dirichlet belief networks for topic structure learning. In Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R, editors, NIPS Proceedings: Advances in Neural Information Processing Systems 31 (NIPS 2018). San Diego CA USA: Neural Information Processing Systems (NIPS). 2018. p. 7966-7977