Recently, considerable research effort has been devoted to developing deep architectures for topic models that learn topic structures. Although several deep models have been proposed to learn better topic proportions of documents, how to leverage the benefits of deep structures for learning the word distributions of topics has not yet been rigorously studied. Here we propose a new multi-layer generative process on the word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics of the layer above. As the topics in all layers can be directly interpreted by words, the proposed model is able to discover interpretable topic hierarchies. As a self-contained module, our model can be flexibly adapted to different kinds of topic models to improve their modelling accuracy and interpretability. Extensive experiments on text corpora demonstrate the advantages of the proposed model.
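The following is a minimal sketch of the kind of layer-wise generative process the abstract describes: top-layer topics are drawn from a symmetric Dirichlet over the vocabulary, and each topic in a lower layer is drawn from a Dirichlet whose parameter is a positive mixture of the topics in the layer above. The layer sizes, the gamma-distributed mixture weights, and the numerical floor are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

V = 1000                     # vocabulary size
layer_sizes = [5, 10, 20]    # number of topics per layer, top to bottom (assumed)
eta = 0.1                    # symmetric Dirichlet prior for the top layer (assumed)

# Top layer: topics drawn from a symmetric Dirichlet over the vocabulary.
topics = [rng.dirichlet(np.full(V, eta), size=layer_sizes[0])]

# Each lower layer: a topic's Dirichlet parameter is a mixture of the topics
# in the layer above. The mixture weights here are gamma draws purely for
# illustration; any positive weighting scheme fits the same structure.
for K_lower in layer_sizes[1:]:
    upper = topics[-1]                                   # shape (K_upper, V)
    K_upper = upper.shape[0]
    weights = rng.gamma(shape=1.0, scale=1.0, size=(K_lower, K_upper))
    dir_params = weights @ upper + 1e-6                  # small floor for numerical stability
    lower = np.stack([rng.dirichlet(p) for p in dir_params])
    topics.append(lower)

for layer, phi in enumerate(topics):
    print(f"layer {layer}: {phi.shape[0]} topics over {phi.shape[1]} words")
```

Because every topic in every layer is a distribution over the same vocabulary, each node of the hierarchy can be read off directly as a ranked word list, which is what makes the learned hierarchy interpretable.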
| Field | Value |
| --- | --- |
| Title of host publication | NIPS Proceedings |
| Subtitle of host publication | Advances in Neural Information Processing Systems 31 (NIPS 2018) |
| Editors | S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett |
| Place of publication | San Diego CA USA |
| Publisher | Neural Information Processing Systems (NIPS) |
| Number of pages | 12 |
| Publication status | Published - 2018 |
| Event | Advances in Neural Information Processing Systems 2018 - Montreal, Canada |
| Duration | 2 Dec 2018 → 8 Dec 2018 |
| Conference number | 31st |
| Conference | Advances in Neural Information Processing Systems 2018 |
| Abbreviated title | NIPS 2018 |
| Period | 2/12/18 → 8/12/18 |
Zhao, H., Du, L., Buntine, W., & Zhou, M. (2018). Dirichlet belief networks for topic structure learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), NIPS Proceedings: Advances in Neural Information Processing Systems 31 (NIPS 2018) (pp. 7966-7977). San Diego CA USA: Neural Information Processing Systems (NIPS).