
Incongruence between observers’ and observed facial muscle activation reduces recognition of emotional facial expressions from video stimuli

According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers, which entails simulations of sensory, motor, and contextual experiences. In line with this, published research has found that viewing others’ facial emotion elicits automatic, matched facial muscle activation, which in turn facilitates emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. Conversely, conflicting sensory information, i.e., incongruent facial muscle activity, might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) conditions (a) and (b) would result in greater facial muscle activity than (c), (2) condition (a) would increase emotion recognition accuracy from others’ faces compared to (c), and (3) condition (b) would lower recognition accuracy for expressions with a salient facial feature in the lower, but not the upper, face area compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The order of the experimental conditions was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.
Metadata
Author:Tanja S. H. Wingenbach, Mark Brosnan, Monique Christine Pfaltz, Michael M. Plichta, Chris Ashwin
URN:urn:nbn:de:hebis:30:3-468528
DOI:https://doi.org/10.3389/fpsyg.2018.00864
ISSN:1664-1078
Pubmed Id:https://pubmed.ncbi.nlm.nih.gov/29928240
Parent Title (English):Frontiers in psychology
Publisher:Frontiers Research Foundation
Place of publication:Lausanne
Contributor(s):Eva Krumhuber
Document Type:Article
Language:English
Year of Completion:2018
Date of first Publication:2018/06/06
Publishing Institution:Universitätsbibliothek Johann Christian Senckenberg
Release Date:2018/07/05
Tag:dynamic stimuli; embodiment; facial EMG; facial emotion recognition; facial expressions of emotion; facial muscle activity; imitation; videos
Volume:9
Issue:Art. 864
Page Number:12
First Page:1
Last Page:12
Note:
Copyright © 2018 Wingenbach, Brosnan, Pfaltz, Plichta and Ashwin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
HeBIS-PPN:43566994X
Institutes:Medicine / Medicine
Dewey Decimal Classification:6 Technology, medicine, applied sciences / 61 Medicine and health / 610 Medicine and health
Collections:University publications
Licence:Creative Commons - Attribution 4.0 (CC BY)