Among ensemble learning methods, stacking with a meta-level classifier is frequently adopted to fuse the outputs of multiple base-level classifiers and generate a final score. The labeled data is usually split into base-training and meta-training sets, so that the meta-level learning is not biased by the over-fitting of base-level classifiers on their own training data. We propose a novel knowledge-transfer framework that reuses the base-training data for learning the meta-level classifier without such negative consequences. By recycling the knowledge obtained during the base-classifier training stage, we make the most efficient use of all available information and achieve better fusion, and thus better overall performance. Through extensive experiments on the challenging task of video event detection, where training data is scarce, we demonstrate the improved performance of our framework over alternative approaches.
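For readers unfamiliar with the setup, the sketch below illustrates the conventional stacking pipeline referred to above, in which the labeled data is split into disjoint base-training and meta-training sets. It is a minimal illustration only, assuming scikit-learn; the specific classifiers, split ratio, and synthetic data are arbitrary choices and do not reflect the proposed knowledge-transfer framework or the paper's experimental configuration.

```python
# Illustrative sketch of conventional stacking (not the proposed framework):
# base-level classifiers and the meta-level classifier are trained on
# disjoint splits of the labeled data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic labeled data for illustration.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Split labeled data: one part for base-training, the other for meta-training.
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.5, random_state=0)

# Train base-level classifiers on the base-training split only.
base_clfs = [SVC(probability=True, random_state=0),
             RandomForestClassifier(random_state=0)]
for clf in base_clfs:
    clf.fit(X_base, y_base)

def base_scores(X_in):
    """Stack the base classifiers' scores into meta-level features."""
    return np.column_stack([clf.predict_proba(X_in)[:, 1] for clf in base_clfs])

# The meta-level classifier is trained on base-classifier scores computed on
# the held-out meta-training split, so it is not misled by the base
# classifiers' over-fitting to their own training data.
meta_clf = LogisticRegression().fit(base_scores(X_meta), y_meta)

# Fused prediction for new examples: base scores -> meta-level classifier.
print(meta_clf.predict(base_scores(X_meta[:5])))
```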