Leveraging external information in topic modelling

Research output: Contribution to journal › Article › Research › peer-review


Abstract

Besides their text content, documents usually come with rich meta-information, such as document categories and the semantic/syntactic features of words encoded in word embeddings. Incorporating such meta-information directly into the generative process of topic models can improve modelling accuracy and topic quality, especially when the word-occurrence information in the training data is insufficient. In this article, we present a topic model called MetaLDA, which can leverage document meta-information, word meta-information, or both jointly in the generative process. With two data augmentation techniques, we derive an efficient Gibbs sampling algorithm that benefits from the full local conjugacy of the model. Moreover, the algorithm exploits the sparsity of the meta-information. Extensive experiments on several real-world datasets demonstrate that our model achieves superior performance in terms of both perplexity and topic quality, particularly on sparse texts. In addition, our model runs significantly faster than other models that use meta-information.
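The central idea the abstract describes is letting meta-information shape the model's Dirichlet priors. As a rough illustration only (not the authors' implementation), the sketch below runs collapsed Gibbs sampling for LDA in which each document's topic prior alpha is composed from per-label weights, so documents sharing labels share prior mass over topics; the `lam` weights are drawn at random here, whereas MetaLDA learns them, and all names are hypothetical:

```python
import numpy as np

def gibbs_lda_with_doc_meta(docs, doc_labels, n_topics, vocab_size,
                            n_iters=50, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA where each document's
    Dirichlet prior over topics is modulated by its meta-information.

    docs:       list of documents, each a list of word ids
    doc_labels: list of label-id lists, one per document
    Simplified: alpha[d, k] = product over the document's labels of lam[l, k].
    """
    rng = np.random.default_rng(seed)
    n_docs = len(docs)
    n_labels = max(l for ls in doc_labels for l in ls) + 1
    # Per-label positive weights on each topic (fixed here; learned in MetaLDA).
    lam = rng.gamma(1.0, 1.0, size=(n_labels, n_topics))
    # Document-specific prior built from that document's labels.
    alpha = np.array([lam[ls].prod(axis=0) for ls in doc_labels])

    # Count matrices: doc-topic, topic-word, topic totals.
    ndk = np.zeros((n_docs, n_topics))
    nkw = np.zeros((n_topics, vocab_size))
    nk = np.zeros(n_topics)
    z = []
    for d, doc in enumerate(docs):
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment, then resample it.
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha[d]) * (nkw[:, w] + beta) \
                    / (nk + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    # Posterior mean of the document-topic distributions.
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    return theta
```

Because `alpha` factors through the labels, two documents with the same labels start from the same prior over topics; word meta-information can analogously shape the topic-word prior in place of the symmetric `beta` used here.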

Original language: English
Pages (from-to): 661-693
Number of pages: 33
Journal: Knowledge and Information Systems
Volume: 61
Issue number: 2
DOIs
Publication status: Published - 2019

Keywords

  • Data augmentation
  • Gibbs sampling
  • Latent Dirichlet allocation
  • Side information
