TY - GEN
T1 - Distractor generation with generative adversarial nets for automatically creating fill-in-the-blank questions
AU - Liang, Chen
AU - Yang, Xiao
AU - Wham, Drew
AU - Pursel, Bart
AU - Passonneau, Rebecca
AU - Giles, C. Lee
N1 - Funding Information:
We gratefully acknowledge partial support from the Penn State Center for Online Innovation in Learning.
Publisher Copyright:
© 2017 Copyright held by the owner/author(s).
PY - 2017/12/4
Y1 - 2017/12/4
N2 - Distractor generation is a crucial step for fill-in-the-blank question generation. We propose a generative model learned from training generative adversarial nets (GANs) to create useful distractors. Our method utilizes only context information and does not use the correct answer, which is completely different from previous ontology-based or similarity-based approaches. Trained on the Wikipedia corpus, the proposed model is able to predict Wiki entities as distractors. Our method is evaluated on two biology question datasets collected from Wikipedia and actual college-level exams. Experimental results show that our context-based method achieves comparable performance to a frequently used word2vec-based method for the Wiki dataset. In addition, we propose a second-stage learner to combine the strengths of the two methods, which further improves the performance on both datasets, with 51.7% and 48.4% of generated distractors being acceptable.
AB - Distractor generation is a crucial step for fill-in-the-blank question generation. We propose a generative model learned from training generative adversarial nets (GANs) to create useful distractors. Our method utilizes only context information and does not use the correct answer, which is completely different from previous ontology-based or similarity-based approaches. Trained on the Wikipedia corpus, the proposed model is able to predict Wiki entities as distractors. Our method is evaluated on two biology question datasets collected from Wikipedia and actual college-level exams. Experimental results show that our context-based method achieves comparable performance to a frequently used word2vec-based method for the Wiki dataset. In addition, we propose a second-stage learner to combine the strengths of the two methods, which further improves the performance on both datasets, with 51.7% and 48.4% of generated distractors being acceptable.
UR - http://www.scopus.com/inward/record.url?scp=85040581664&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85040581664&partnerID=8YFLogxK
U2 - 10.1145/3148011.3154463
DO - 10.1145/3148011.3154463
M3 - Conference contribution
AN - SCOPUS:85040581664
T3 - Proceedings of the Knowledge Capture Conference, K-CAP 2017
BT - Proceedings of the Knowledge Capture Conference, K-CAP 2017
PB - Association for Computing Machinery, Inc
T2 - 9th International Conference on Knowledge Capture, K-CAP 2017
Y2 - 4 December 2017 through 6 December 2017
ER -