Progression of pupil dilation (PD) in response to visual stimuli may indicate distinct internal processes. No study has examined PD progression during a social cognition task. Here, we describe PD progression during the Movie for the Assessment of Social Cognition (MASC) test in n = 23 adolescents with Autism Spectrum Disorder (ASD) and n = 24 age-, IQ-, and sex-matched neurotypical controls (NTC). The MASC consists of 43 video sequences depicting human social interactions, each followed by a multiple-choice question concerning the characters' mental states. PD progression data were extracted by eye tracking and controlled for fixation behavior. Principal component analysis of PD progression during the video sequences revealed three sequential PD components. Compared with NTC, participants with ASD showed a distinct PD progression, with increased constriction amplitude, increased dilation latency, and increased dilation amplitude that correlated with the PD progression components. These components predicted social cognition performance: the first and second PD components correlated positively with MASC behavioral performance in ASD but negatively in NTC. These PD components may be interpreted as indicators of sensory-perceptual processing and attention function. In ASD, aberrant sensory-perceptual processing and attention function could contribute to attenuated social cognition performance; this needs to be tested in additional studies combining the respective cognitive tests with the PD progression analysis outlined here. Phasic activity of the locus coeruleus–norepinephrine system is discussed as a putatively shared underlying mechanism.
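The abstract mentions segmenting PD progression by principal component analysis. As a minimal, dependency-free sketch of the general idea (not the authors' actual pipeline), the snippet below extracts the first principal component from a small matrix of synthetic pupil-diameter traces via power iteration on the covariance matrix; all data and function names here are illustrative assumptions.

```python
import random

def first_principal_component(traces, iters=200):
    """Top PCA component of the rows in `traces` (each row = one
    pupil-diameter time course), via power iteration on the
    time-point-by-time-point covariance matrix. Pure-Python sketch."""
    n, t = len(traces), len(traces[0])
    # Center each time point across traces
    means = [sum(row[j] for row in traces) / n for j in range(t)]
    centered = [[row[j] - means[j] for j in range(t)] for row in traces]
    # Covariance matrix (t x t)
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(t)] for a in range(t)]
    # Power iteration: repeatedly apply cov and renormalize
    v = [random.random() for _ in range(t)]
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(t)) for a in range(t)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

In practice one would use an established PCA implementation and keep several components (the study reports three sequential ones); this sketch only illustrates how a dominant temporal pattern can be recovered from a set of traces.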
When mapping eye-movement behavior to the visual information presented to an observer, Areas of Interest (AOIs) are commonly employed. For static stimuli (screens without moving elements), this requires that one AOI set be constructed for each stimulus, which is possible in most eye-tracker manufacturers' software. For moving stimuli (screens with moving elements), however, AOI construction is often time-consuming, as AOIs have to be constructed for each video frame. A popular use case for such moving AOIs is studying gaze behavior toward moving faces. Although it is technically possible to construct AOIs automatically, the standard in this field is still manual AOI construction, likely because automatic AOI-construction methods are (1) technically complex or (2) not effective enough for empirical research. To aid researchers in this field, we present and validate a method that automates AOI construction for videos containing a face. The fully automatic method uses an open-source toolbox for facial landmark detection and a Voronoi-based AOI-construction method. We compared the positions of AOIs obtained with our new method, and the eye-tracking measures derived from them, to a recently published semi-automatic method. The differences between the two methods were negligible. The presented method is therefore both effective (as effective as previous methods) and efficient: no researcher time is needed for AOI construction. The software is freely available from https://osf.io/zgmch/.
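A Voronoi-based AOI assignment reduces to nearest-seed lookup: a gaze sample belongs to the AOI whose landmark is closest. The sketch below illustrates that principle only; the landmark coordinates, AOI names, and functions are hypothetical placeholders, not the toolbox or coordinates used in the published method (where landmarks would come from a facial-landmark detector, per frame).

```python
import math

# Hypothetical landmark centers for three facial AOIs (x, y in pixels).
# In the actual method these would be produced per video frame by an
# open-source facial-landmark detector.
LANDMARKS = {
    "left_eye": (220.0, 180.0),
    "right_eye": (320.0, 180.0),
    "mouth": (270.0, 300.0),
}

def assign_aoi(gaze_point, landmarks=LANDMARKS):
    """Voronoi-cell membership: a point lies in the cell of the
    nearest seed, so return the AOI with the closest landmark."""
    gx, gy = gaze_point
    return min(landmarks,
               key=lambda name: math.hypot(gx - landmarks[name][0],
                                           gy - landmarks[name][1]))

def aoi_dwell_counts(gaze_samples, landmarks=LANDMARKS):
    """Count gaze samples per AOI, e.g. as a basis for dwell-time
    measures on a given frame."""
    counts = {name: 0 for name in landmarks}
    for sample in gaze_samples:
        counts[assign_aoi(sample, landmarks)] += 1
    return counts
```

Because Voronoi cells partition the whole frame, every gaze sample is assigned to exactly one AOI; a real pipeline would repeat this per frame with the detected landmarks for that frame.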
Understanding a sentence and integrating it into the discourse depend upon the identification of its focus, which, in spoken German, is marked by accentuation. In the case of written language, which lacks explicit cues to accent, readers have to draw on other kinds of information to determine the focus. We study the joint or interactive effects of two kinds of information that have no direct representation in print but have each been shown to influence the reader's text comprehension: (i) the (low-level) rhythmic-prosodic structure that is based on the distribution of lexically stressed syllables, and (ii) the (high-level) discourse context that is grounded in the memory of previous linguistic content. Systematically manipulating these factors, we examine the way readers resolve a syntactic ambiguity involving the scopally ambiguous focus operator auch (English "too") in both oral (Experiment 1) and silent reading (Experiment 2). The results of both experiments show that discourse context and local linguistic rhythm conspire to guide the syntactic and, concomitantly, the focus-structural analysis of ambiguous sentences. We argue that reading comprehension requires the (implicit) assignment of accents according to the focus structure and that, by establishing a prominence profile, the implicit prosodic rhythm directly affects accent assignment.