TY - JOUR
T1 - Do I look like I'm sure?
T2 - Partial metacognitive access to the low-level aspects of one's own facial expressions
AU - Ciston, Anthony B.
AU - Forster, Carina
AU - Brick, Timothy R.
AU - Kühn, Simone
AU - Verrel, Julius
AU - Filevich, Elisa
N1 - Funding Information:
We thank student assistants for help in data collection in Experiment 1, and Manuel Zellhöfer for help in programming the experimental paradigm. We thank Soledad Galli for assistance with the ML models and Nathan Faivre for comments on an earlier version of this manuscript. ABC, CF and EF were supported by a Freigeist Fellowship to EF from the Volkswagen Foundation (grant number 91620). This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 337619223/RTG2386 and the Max-Planck Society. The funders had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript.
Publisher Copyright:
© 2022
PY - 2022/8
Y1 - 2022/8
N2 - As humans we communicate important information through fine nuances in our facial expressions, but because conscious motor representations are noisy, we might not be able to report these fine movements. Here we measured the precision of the explicit metacognitive information that young adults have about their own facial expressions. Participants imitated pictures of themselves making facial expressions and triggered a camera to take a picture of them while doing so. They then rated how well they thought they imitated each expression. We defined metacognitive access to facial expressions as the relationship between objective performance (how well the two pictures matched) and subjective performance ratings. As a group, participants' metacognitive confidence ratings were only about four times less precise than their own similarity ratings. In turn, machine learning analyses revealed that participants' performance ratings were based on idiosyncratic subsets of features. We conclude that metacognitive access to one's own facial expressions is only partial.
UR - http://www.scopus.com/inward/record.url?scp=85129711332&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85129711332&partnerID=8YFLogxK
U2 - 10.1016/j.cognition.2022.105155
DO - 10.1016/j.cognition.2022.105155
M3 - Article
C2 - 35537345
AN - SCOPUS:85129711332
SN - 0010-0277
VL - 225
JO - Cognition
JF - Cognition
M1 - 105155
ER -