An evaluation framework for natural language understanding in spoken dialogue systems

Joshua B. Gordon, Rebecca J. Passonneau

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

We present an evaluation framework to enable developers of information-seeking, transaction-based spoken dialogue systems to compare the robustness of natural language understanding (NLU) approaches across varying levels of word error rate and contrasting domains. We develop statistical and semantic-parsing-based approaches to dialogue act identification and concept retrieval. Voice search is used in each approach to ultimately query the database. The framework includes a method for developers to bootstrap a representative pseudo-corpus, which is used to estimate NLU performance in a new domain. We illustrate the relative merits of these NLU techniques by contrasting our statistical NLU approach with a semantic parsing method on two contrasting applications, our CheckItOut library system and the deployed Let's Go Public! system, across four levels of word error rate. We find that, with respect to both dialogue act identification and concept retrieval, our statistical NLU approach is more likely to robustly accommodate the freer-form, less constrained utterances of CheckItOut at higher word error rates than is possible with semantic parsing.

Original language: English (US)
Title of host publication: Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010
Editors: Daniel Tapias, Irene Russo, Olivier Hamon, Stelios Piperidis, Nicoletta Calzolari, Khalid Choukri, Joseph Mariani, Helene Mazo, Bente Maegaard, Jan Odijk, Mike Rosner
Publisher: European Language Resources Association (ELRA)
Pages: 72-77
Number of pages: 6
ISBN (Electronic): 2951740867, 9782951740860
State: Published - Jan 1 2010
Event: 7th International Conference on Language Resources and Evaluation, LREC 2010 - Valletta, Malta
Duration: May 17 2010 - May 23 2010

Publication series

Name: Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010

Other

Other: 7th International Conference on Language Resources and Evaluation, LREC 2010
Country: Malta
City: Valletta
Period: 5/17/10 - 5/23/10

All Science Journal Classification (ASJC) codes

  • Education
  • Library and Information Sciences
  • Linguistics and Language
  • Language and Linguistics

Cite this

Gordon, J. B., & Passonneau, R. J. (2010). An evaluation framework for natural language understanding in spoken dialogue systems. In D. Tapias, I. Russo, O. Hamon, S. Piperidis, N. Calzolari, K. Choukri, J. Mariani, H. Mazo, B. Maegaard, J. Odijk, ... M. Rosner (Eds.), Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010 (pp. 72-77). (Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010). European Language Resources Association (ELRA).
Gordon, Joshua B. ; Passonneau, Rebecca J. / An evaluation framework for natural language understanding in spoken dialogue systems. Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010. editor / Daniel Tapias ; Irene Russo ; Olivier Hamon ; Stelios Piperidis ; Nicoletta Calzolari ; Khalid Choukri ; Joseph Mariani ; Helene Mazo ; Bente Maegaard ; Jan Odijk ; Mike Rosner. European Language Resources Association (ELRA), 2010. pp. 72-77 (Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010).
@inproceedings{41c0cc5801af41358ae8038ec5791e66,
title = "An evaluation framework for natural language understanding in spoken dialogue systems",
abstract = "We present an evaluation framework to enable developers of information seeking, transaction based spoken dialogue systems to compare the robustness of natural language understanding (NLU) approaches across varying levels of word error rate and contrasting domains. We develop statistical and semantic parsing based approaches to dialogue act identification and concept retrieval. Voice search is used in each approach to ultimately query the database. Included in the framework is a method for developers to bootstrap a representative pseudo-corpus, which is used to estimate NLU performance in a new domain. We illustrate the relative merits of these NLU techniques by contrasting our statistical NLU approach with a semantic parsing method over two contrasting applications, our CheckItOut library system and the deployed Let's Go Public! system, across four levels of word error rate. We find that with respect to both dialogue act identification and concept retrieval, our statistical NLU approach is more likely to robustly accommodate the freer form, less constrained utterances of CheckItOut at higher word error rates than is possible with semantic parsing.",
author = "Gordon, {Joshua B.} and Passonneau, {Rebecca J.}",
year = "2010",
month = jan,
day = "1",
language = "English (US)",
isbn = "9782951740860",
series = "Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010",
publisher = "European Language Resources Association (ELRA)",
pages = "72--77",
editor = "Daniel Tapias and Irene Russo and Olivier Hamon and Stelios Piperidis and Nicoletta Calzolari and Khalid Choukri and Joseph Mariani and Helene Mazo and Bente Maegaard and Jan Odijk and Mike Rosner",
booktitle = "Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010",
}

Gordon, JB & Passonneau, RJ 2010, An evaluation framework for natural language understanding in spoken dialogue systems. in D Tapias, I Russo, O Hamon, S Piperidis, N Calzolari, K Choukri, J Mariani, H Mazo, B Maegaard, J Odijk & M Rosner (eds), Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010. Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010, European Language Resources Association (ELRA), pp. 72-77, 7th International Conference on Language Resources and Evaluation, LREC 2010, Valletta, Malta, 5/17/10.

An evaluation framework for natural language understanding in spoken dialogue systems. / Gordon, Joshua B.; Passonneau, Rebecca J.

Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010. ed. / Daniel Tapias; Irene Russo; Olivier Hamon; Stelios Piperidis; Nicoletta Calzolari; Khalid Choukri; Joseph Mariani; Helene Mazo; Bente Maegaard; Jan Odijk; Mike Rosner. European Language Resources Association (ELRA), 2010. p. 72-77 (Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - An evaluation framework for natural language understanding in spoken dialogue systems

AU - Gordon, Joshua B.

AU - Passonneau, Rebecca J.

PY - 2010/1/1

Y1 - 2010/1/1

N2 - We present an evaluation framework to enable developers of information seeking, transaction based spoken dialogue systems to compare the robustness of natural language understanding (NLU) approaches across varying levels of word error rate and contrasting domains. We develop statistical and semantic parsing based approaches to dialogue act identification and concept retrieval. Voice search is used in each approach to ultimately query the database. Included in the framework is a method for developers to bootstrap a representative pseudo-corpus, which is used to estimate NLU performance in a new domain. We illustrate the relative merits of these NLU techniques by contrasting our statistical NLU approach with a semantic parsing method over two contrasting applications, our CheckItOut library system and the deployed Let's Go Public! system, across four levels of word error rate. We find that with respect to both dialogue act identification and concept retrieval, our statistical NLU approach is more likely to robustly accommodate the freer form, less constrained utterances of CheckItOut at higher word error rates than is possible with semantic parsing.

AB - We present an evaluation framework to enable developers of information seeking, transaction based spoken dialogue systems to compare the robustness of natural language understanding (NLU) approaches across varying levels of word error rate and contrasting domains. We develop statistical and semantic parsing based approaches to dialogue act identification and concept retrieval. Voice search is used in each approach to ultimately query the database. Included in the framework is a method for developers to bootstrap a representative pseudo-corpus, which is used to estimate NLU performance in a new domain. We illustrate the relative merits of these NLU techniques by contrasting our statistical NLU approach with a semantic parsing method over two contrasting applications, our CheckItOut library system and the deployed Let's Go Public! system, across four levels of word error rate. We find that with respect to both dialogue act identification and concept retrieval, our statistical NLU approach is more likely to robustly accommodate the freer form, less constrained utterances of CheckItOut at higher word error rates than is possible with semantic parsing.

UR - http://www.scopus.com/inward/record.url?scp=84858375480&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84858375480&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84858375480

T3 - Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010

SP - 72

EP - 77

BT - Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010

A2 - Tapias, Daniel

A2 - Russo, Irene

A2 - Hamon, Olivier

A2 - Piperidis, Stelios

A2 - Calzolari, Nicoletta

A2 - Choukri, Khalid

A2 - Mariani, Joseph

A2 - Mazo, Helene

A2 - Maegaard, Bente

A2 - Odijk, Jan

A2 - Rosner, Mike

PB - European Language Resources Association (ELRA)

ER -

Gordon JB, Passonneau RJ. An evaluation framework for natural language understanding in spoken dialogue systems. In Tapias D, Russo I, Hamon O, Piperidis S, Calzolari N, Choukri K, Mariani J, Mazo H, Maegaard B, Odijk J, Rosner M, editors, Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010. European Language Resources Association (ELRA). 2010. p. 72-77. (Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010).