Parsimonious topic models with salient word discovery

Hossein Soleimani, David Jonathan Miller

    Research output: Contribution to journal › Article

    9 Citations (Scopus)

    Abstract

    We propose a parsimonious topic model for text corpora. In related models such as Latent Dirichlet Allocation (LDA), all words are modeled topic-specifically, even though many words occur with similar frequencies across different topics. Our modeling determines salient words for each topic, which have topic-specific probabilities, with the rest explained by a universal shared model. Further, in LDA all topics are in principle present in every document. By contrast, our model gives sparse topic representation, determining the (small) subset of relevant topics for each document. We derive a Bayesian Information Criterion (BIC), balancing model complexity and goodness of fit. Here, interestingly, we identify an effective sample size and corresponding penalty specific to each parameter type in our model. We minimize BIC to jointly determine our entire model - the topic-specific words, document-specific topics, all model parameter values, and the total number of topics - in a wholly unsupervised fashion. Results on three text corpora and an image dataset show that our model achieves higher test set likelihood and better agreement with ground-truth class labels, compared to LDA and to a model designed to incorporate sparsity.
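
    To make the model-selection step concrete, the following is a minimal illustrative sketch, not the paper's exact derivation, of a BIC-style objective that trades off goodness of fit against per-parameter-type complexity penalties; the parameter-type counts d_t and effective sample sizes N_t below are placeholders for the quantities the paper identifies.

        % Illustrative BIC-style objective (a hedged sketch; the paper derives its own
        % penalty, with an effective sample size specific to each parameter type).
        % \hat{\theta}: estimated model parameters; D: the training corpus;
        % d_t: number of free parameters of type t; N_t: effective sample size for type t.
        \begin{equation}
          \mathrm{BIC}(\hat{\theta}) \;=\; -2 \log p(D \mid \hat{\theta})
          \;+\; \sum_{t} d_t \log N_t
        \end{equation}

    Minimizing such an objective over candidate structures (salient-word sets, document-level topic subsets, and the number of topics) selects the configuration whose improvement in fit justifies its additional parameters.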

    Original language: English (US)
    Article number: 6871387
    Pages (from-to): 824-837
    Number of pages: 14
    Journal: IEEE Transactions on Knowledge and Data Engineering
    Volume: 27
    Issue number: 3
    DOIs: 10.1109/TKDE.2014.2345378
    State: Published - Mar 1 2015

    All Science Journal Classification (ASJC) codes

    • Information Systems
    • Computer Science Applications
    • Computational Theory and Mathematics

    Cite this

    @article{93318bc8a23f490a95d49812c7e04f72,
    title = "Parsimonious topic models with salient word discovery",
    author = "Hossein Soleimani and Miller, {David Jonathan}",
    year = "2015",
    month = "3",
    day = "1",
    doi = "10.1109/TKDE.2014.2345378",
    language = "English (US)",
    volume = "27",
    pages = "824--837",
    journal = "IEEE Transactions on Knowledge and Data Engineering",
    issn = "1041-4347",
    publisher = "IEEE Computer Society",
    number = "3",
    }

    Parsimonious topic models with salient word discovery. / Soleimani, Hossein; Miller, David Jonathan.

    In: IEEE Transactions on Knowledge and Data Engineering, Vol. 27, No. 3, 6871387, 01.03.2015, p. 824-837.

    Research output: Contribution to journal › Article

    TY - JOUR

    T1 - Parsimonious topic models with salient word discovery

    AU - Soleimani, Hossein

    AU - Miller, David Jonathan

    PY - 2015/3/1

    Y1 - 2015/3/1

    UR - http://www.scopus.com/inward/record.url?scp=84922879149&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=84922879149&partnerID=8YFLogxK

    U2 - 10.1109/TKDE.2014.2345378

    DO - 10.1109/TKDE.2014.2345378

    M3 - Article

    AN - SCOPUS:84922879149

    VL - 27

    SP - 824

    EP - 837

    JO - IEEE Transactions on Knowledge and Data Engineering

    JF - IEEE Transactions on Knowledge and Data Engineering

    SN - 1041-4347

    IS - 3

    M1 - 6871387

    ER -