TY - JOUR
T1 - Improved parsimonious topic modeling based on the Bayesian information criterion
AU - Wang, Hang
AU - Miller, David
N1 - Publisher Copyright:
© 2020 by the authors.
PY - 2020/3/1
Y1 - 2020/3/1
N2 - In a previous work, a parsimonious topic model (PTM) was proposed for text corpora. In that work, unlike LDA, the modeling determined a subset of salient words for each topic, with topic-specific probabilities, with the rest of the words in the dictionary explained by a universal shared model. Further, in LDA all topics are in principle present in every document. In contrast, PTM gives a sparse topic representation, determining the (small) subset of relevant topics for each document. A customized Bayesian information criterion (BIC) was derived, balancing model complexity and goodness of fit, with the BIC minimized to jointly determine the entire model (the topic-specific words, document-specific topics, all model parameter values, and the total number of topics) in a wholly unsupervised fashion. In the present work, several important modeling and algorithm (parameter learning) extensions of PTM are proposed. First, we modify the BIC objective function using a lossless coding scheme with low modeling cost for describing words that are non-salient for all topics; such words are essentially identified as wholly noisy/uninformative. This approach increases the PTM's model sparsity, which also allows model selection of more topics, with lower BIC cost than the original PTM. Second, in the original PTM model learning strategy, word switches were updated sequentially, which is myopic and susceptible to finding poor locally optimal solutions. Here, instead, we jointly optimize all the switches that correspond to the same word (across topics). This approach jointly optimizes many more parameters at each step than the original PTM, which in principle should be less susceptible to finding poor local minima. Results on several document data sets show that our proposed method outperformed the original PTM model with respect to multiple performance measures, and gave a sparser topic model representation than the original PTM.
UR - http://www.scopus.com/inward/record.url?scp=85082707220&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082707220&partnerID=8YFLogxK
U2 - 10.3390/e22030326
DO - 10.3390/e22030326
M3 - Article
C2 - 33286100
AN - SCOPUS:85082707220
VL - 22
JO - Entropy
JF - Entropy
SN - 1099-4300
IS - 3
M1 - 326
ER -