This paper introduces a new generative semisupervised (transductive) mixture model with a finer-grained class label generation mechanism than that of previous works. Our approach effectively combines the advantages of standard semisupervised mixtures, which achieve label extrapolation over a mixture component when there are few labeled samples, and nearest-neighbor (NN) classification, which achieves accurate classification in the local vicinity of labeled samples. Toward this end, we propose a two-stage stochastic data generation mechanism, in which the unlabeled samples are produced first and the labeled samples are then generated conditioned on both the unlabeled data and their components of origin. This nested data generation entails a more complicated (albeit still closed-form) E-step evaluation than that of standard mixtures. Our model is advantageous, compared with previous semisupervised mixtures, when mixture components model data from more than one class and when within-component class proportions are not constant over the feature-space region "owned" by a component. Experiments demonstrate gains in classification accuracy over both the previous semisupervised mixture-of-experts model and K-NN classification on data sets from the UC Irvine Repository.
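The two-stage generation mechanism described above can be illustrated with a toy sketch. This is not the paper's exact model: the Gaussian components, the local perturbation of labeled samples around unlabeled "parents", and the logistic location-dependent label probability are all illustrative assumptions, chosen only to show unlabeled data drawn first and labeled data then generated conditioned on the unlabeled samples and their components of origin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D mixture: two Gaussian components, each of which may span both classes.
means = np.array([-2.0, 2.0])
sigmas = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

# Stage 1: generate unlabeled samples, recording each sample's component of origin.
n_unlabeled = 200
comps = rng.choice(2, size=n_unlabeled, p=weights)
x_unlabeled = rng.normal(means[comps], sigmas[comps])

# Stage 2: generate labeled samples conditioned on the unlabeled data and their
# components: pick an unlabeled "parent", perturb it locally, and draw a class
# label whose probability varies with position within the parent's component
# (an assumed logistic dependence, standing in for a non-constant
# within-component class proportion).
n_labeled = 20
parents = rng.choice(n_unlabeled, size=n_labeled)
x_labeled = x_unlabeled[parents] + rng.normal(0.0, 0.1, size=n_labeled)
p_class1 = 1.0 / (1.0 + np.exp(-(x_labeled - means[comps[parents]])))
y_labeled = rng.random(n_labeled) < p_class1
```

Because the label probability depends on where a labeled sample falls within its component, a single component can plausibly generate samples from more than one class, which is the regime in which the proposed model is claimed to help.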