Multimodal integration in statistical learning: Evidence from the McGurk illusion

Aaron D. Mitchel, Morten H. Christiansen, Daniel J. Weiss

Research output: Contribution to journal › Article › peer-review

14 Scopus citations

Abstract

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.

Original language: English (US)
Article number: 407
Journal: Frontiers in Psychology
Volume: 5
Issue number: MAY
DOIs
State: Published - 2014

All Science Journal Classification (ASJC) codes

  • Psychology (all)
