We propose a mixture of class-conditioned topic models for classifying text documents using both labeled and unlabeled training documents in a semi-supervised fashion. Most topic models incorporate documents' class labels by generating them after generating the word space. In such models, the training class labels have a relatively small effect on the estimated topics, because the likelihood function is dominated by the word space, whose size dwarfs the single class label per document. In this paper, we propose to increase the influence of class labels on model parameters by generating the word space of each document conditioned on its class label. We show that this generative process improves classification performance while preserving the model's ability to discover topics from the word space. Within our framework, we also provide a principled mechanism to control the relative contributions of the class labels and the word space to the likelihood function. Experimental results show that our approach achieves better classification performance than standard semi-supervised and supervised topic models. Code to replicate our experiments is available at https://github.com/hsoleimani/MCCTM.
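To make the class-conditioning idea concrete, the following is a minimal toy sketch of a generative process in which the class label is drawn first and every word is then generated conditioned on it, via class-specific topic mixtures. All distributions, dimensions, and parameter values here are illustrative placeholders, not the paper's actual model or parameterization.

```python
import random

def sample_dirichlet(alpha, rng):
    # Dirichlet sample via normalized Gamma draws
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

def sample_categorical(probs, rng):
    # inverse-CDF draw from a discrete distribution
    u = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if u <= acc:
            return i
    return len(probs) - 1

def generate_document(n_words, class_prior, topic_alphas, topic_word, rng):
    """Generate (class label, word ids) with the word space conditioned on the class.

    topic_alphas[c]  : Dirichlet prior over topics for class c (hypothetical)
    topic_word[c][k] : word distribution of topic k under class c (hypothetical)
    """
    y = sample_categorical(class_prior, rng)            # class label drawn first
    theta = sample_dirichlet(topic_alphas[y], rng)      # topic proportions given class
    words = []
    for _ in range(n_words):
        z = sample_categorical(theta, rng)              # topic assignment
        w = sample_categorical(topic_word[y][z], rng)   # word given class and topic
        words.append(w)
    return y, words

rng = random.Random(0)
class_prior = [0.5, 0.5]                                # 2 classes
topic_alphas = [[1.0, 1.0], [1.0, 1.0]]                 # 2 topics per class
topic_word = [
    [[0.7, 0.2, 0.1, 0.0], [0.1, 0.6, 0.2, 0.1]],       # class-0 topics over 4 words
    [[0.0, 0.1, 0.2, 0.7], [0.1, 0.2, 0.6, 0.1]],       # class-1 topics over 4 words
]
y, words = generate_document(20, class_prior, topic_alphas, topic_word, rng)
print(y, words)
```

Because every word's probability depends on the class label through the class-specific topics, the label influences the entire word-space likelihood rather than contributing a single term per document.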