Automated pyramid summarization evaluation

Yanjun Gao, Chen Sun, Rebecca J. Passonneau

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

16 Scopus citations

Abstract

Pyramid evaluation was developed to assess the content of paragraph-length summaries of source texts. A pyramid lists the distinct units of content found in several reference summaries, weights content units by how many reference summaries they occur in, and produces three scores based on the weighted content of new summaries. We present an automated method that is more efficient, more transparent, and more complete than previous automated pyramid methods. It is tested on a new dataset of student summaries, and on historical NIST data from extractive summarizers.
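
For readers unfamiliar with pyramid scoring, the sketch below illustrates the general idea the abstract describes: content units (SCUs) are weighted by how many reference summaries contain them, and a candidate summary is scored by the weight it recovers. The function names, SCU labels, and the normalization used here are illustrative assumptions, not the paper's own definitions of its three scores.

    from collections import Counter

    def build_pyramid(reference_scus):
        """Weight each content unit (SCU) by the number of reference
        summaries it appears in, as described in the abstract."""
        weights = Counter()
        for scus in reference_scus:          # one set of SCU labels per reference summary
            for scu in set(scus):
                weights[scu] += 1
        return weights

    def pyramid_score(candidate_scus, weights):
        """Raw score: total weight of pyramid SCUs the candidate expresses.
        Normalized score: raw score divided by the best score attainable
        with the same number of SCUs (an illustrative normalization, not
        necessarily the paper's exact formulation)."""
        matched = set(candidate_scus)
        raw = sum(weights.get(scu, 0) for scu in matched)
        ideal = sum(sorted(weights.values(), reverse=True)[:len(matched)])
        return raw, (raw / ideal if ideal else 0.0)

    # Hypothetical SCU labels: three reference summaries and one candidate.
    refs = [{"A", "B", "C"}, {"A", "B"}, {"A", "C", "D"}]
    weights = build_pyramid(refs)                 # A:3, B:2, C:2, D:1
    print(pyramid_score({"A", "D"}, weights))     # (4, 0.8): raw 3+1, ideal 3+2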

Original language: English (US)
Title of host publication: CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference
Publisher: Association for Computational Linguistics
Pages: 404-418
Number of pages: 15
ISBN (Electronic): 9781950737727
State: Published - 2019
Event: 23rd Conference on Computational Natural Language Learning, CoNLL 2019 - Hong Kong, China
Duration: Nov 3, 2019 - Nov 4, 2019

Publication series

Name: CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference

Conference

Conference: 23rd Conference on Computational Natural Language Learning, CoNLL 2019
Country/Territory: China
City: Hong Kong
Period: 11/3/19 - 11/4/19

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Information Systems
  • Computational Theory and Mathematics
