We introduce a multi-stage ensemble framework, Error-Driven Generalist+Expert (Edge), for improved classification on large-scale text categorization problems. Edge first trains a generalist, capable of classifying under all classes, to deliver a reasonably accurate initial category ranking for a given instance. Edge then computes a confusion graph for the generalist and allocates the remaining learning resources to train experts on relatively small groups of classes that the generalist tends to systematically confuse with one another. When invoked on a given instance, the experts' votes yield a reranking of the classes, thereby correcting the generalist's errors. Our evaluations demonstrate improved classification and ranking performance on several large-scale text categorization datasets. Edge is particularly efficient when the underlying learners are efficient. Our study of confusion graphs is also of independent interest.
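The confusion-graph step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it builds a graph whose nodes are classes and whose edge weights count how often the generalist confuses two classes on held-out data, then extracts connected components of heavily confused classes as candidate groups on which experts would be trained. The function names, the symmetric treatment of confusions, and the `threshold` parameter are all assumptions for illustration.

```python
from collections import defaultdict

def build_confusion_graph(y_true, y_pred):
    # Count how often each unordered pair of classes is confused
    # by the generalist on held-out predictions (assumed input).
    edges = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t != p:
            edges[frozenset((t, p))] += 1
    return edges

def confusion_groups(edges, classes, threshold):
    # Keep only edges whose confusion count meets the threshold,
    # then return connected components with more than one class;
    # singleton classes need no dedicated expert.
    adj = {c: set() for c in classes}
    for pair, w in edges.items():
        if w >= threshold:
            a, b = tuple(pair)
            adj[a].add(b)
            adj[b].add(a)
    seen, groups = set(), []
    for c in classes:
        if c in seen:
            continue
        stack, comp = [c], set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u] - seen)
        if len(comp) > 1:
            groups.append(comp)
    return groups

# Toy usage: "cat" and "dog" are repeatedly confused, so they form
# one group; "bird"/"fish" confusion is below threshold.
y_true = ["cat", "dog", "cat", "bird", "fish", "fish"]
y_pred = ["dog", "cat", "dog", "bird", "bird", "fish"]
groups = confusion_groups(
    build_confusion_graph(y_true, y_pred),
    ["cat", "dog", "bird", "fish"],
    threshold=2,
)
```

An expert would then be trained on each returned group, and at inference its vote would rerank only the classes inside that group.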