Model-based compressive sensing (CS) exploits the structure inherent in sparse signals to design better signal recovery algorithms. This structure is often captured by a prior on the sparse coefficients, the Laplacian being the most common choice (leading to l1-norm minimization). The seminal contribution by Wright et al. exploits the discriminative capability of sparse representations for image classification, specifically face recognition; their approach employs the analytical framework of CS with class-specific dictionaries. Our contribution is a logical extension of these ideas to structured sparsity for classification: we use class-specific dictionaries in conjunction with discriminative class-specific priors, specifically the spike-and-slab prior widely applied in Bayesian regression. Significantly, the proposed framework reduces the demand for the abundant training image samples on which sparsity-based classification schemes typically rely.
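The two priors contrasted above can be made concrete with a small sketch. The snippet below (an illustration, not the paper's method; all parameter values are hypothetical) draws a sparse signal from a spike-and-slab prior and then recovers it via l1-norm minimization, i.e., the MAP estimate under a Laplacian prior, implemented here with a basic iterative soft-thresholding (ISTA) loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spike-and-slab draw: each coefficient is zero with probability 1 - gamma
# (the "spike") and Gaussian otherwise (the "slab"). Values are illustrative.
n, m, gamma = 100, 50, 0.05
slab = rng.random(n) < gamma              # binary support indicators
x_true = slab * rng.normal(0.0, 1.0, n)   # sparse coefficient vector

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # random sensing matrix
y = A @ x_true                                  # compressive measurements

def ista(A, y, lam=0.01, steps=500):
    """l1-regularized least squares (Laplacian-prior MAP) via ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        v = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
```

Under a Laplacian prior, all coefficients are shrunk uniformly; a spike-and-slab prior instead models the support explicitly, which is what makes it attractive for encoding class-specific structure.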