Audiovisual perceptual learning with multiple speakers

Aaron D. Mitchel, Chip Gerfen, Daniel J. Weiss

Research output: Contribution to journal › Article › peer-review


Abstract

One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants to an audiovisual continuum between /aba/ and /ada/. During familiarization, the "b-face" mouthed /aba/ when an ambiguous token was played, while the "d-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "b-face" than with an image of the "d-face." This was not the case in the control condition, in which the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.

Original language: English (US)
Pages (from-to): 66-74
Number of pages: 9
Journal: Journal of Phonetics
Volume: 56
DOIs
State: Published - May 1 2016

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Linguistics and Language
  • Speech and Hearing
