Visual search in natural scenes is a complex task that relies on peripheral vision to detect potential targets and on central vision to verify them. This segregation of the visual fields has been established primarily through on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the roles attributed to central and peripheral vision translate to more natural settings. Using everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that could not be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene-semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene negatively impacted search measures. Our results diverge from on-screen studies in that search performance was only slightly affected by the loss of central vision. In particular, a central mask did not affect verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes appears to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
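A minimal sketch of the gaze-contingent masking idea described above, assuming a per-frame gaze sample in pixels and a fixed pixels-per-degree conversion; the names `PIX_PER_DEG`, `apply_central_mask`, and `gaze_px` are illustrative, not the study's implementation, which ran inside a VR engine:

```python
import numpy as np

PIX_PER_DEG = 40.0      # assumed display scale: pixels per degree of visual angle
MASK_RADIUS_DEG = 6.0   # mask radius matching the study's 6-degree eccentricity

def apply_central_mask(frame: np.ndarray, gaze_px: tuple[float, float]) -> np.ndarray:
    """Black out everything within MASK_RADIUS_DEG of the current gaze position.

    frame   : H x W x 3 image for the current render pass
    gaze_px : (x, y) gaze position in pixels, from the eye tracker
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Eccentricity of every pixel relative to gaze, converted to degrees.
    ecc_deg = np.hypot(xs - gaze_px[0], ys - gaze_px[1]) / PIX_PER_DEG
    masked = frame.copy()
    masked[ecc_deg <= MASK_RADIUS_DEG] = 0  # invert the comparison for a peripheral mask
    return masked
```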
Background: Visual exploration in autism spectrum disorder (ASD) is characterized by attenuated social attention. The underlying oculomotor function during visual exploration is understudied, although oculomotor function during restricted viewing has suggested saccade dysmetria in ASD via altered pontocerebellar motor modulation. Methods: Oculomotor function was recorded using remote eye tracking in 142 ASD participants and 142 matched neurotypical controls during free viewing of naturalistic videos with and without human content. The sample was heterogeneous with respect to age (6–30 years), cognitive ability (60–140 IQ), and male/female ratio (3:1). Oculomotor function was defined as saccade, fixation, and pupil-dilation features, which were compared between groups in linear mixed models. Oculomotor function was investigated as an ASD classifier, and features were correlated with clinical measures. Results: We observed decreased saccade duration (∆M = −0.50, CI [−0.78, −0.21]) and amplitude (∆M = −0.42, CI [−0.72, −0.12]), independent of human video content. We observed null findings for fixation and pupil-dilation features (power = .81). Oculomotor function was a valid ASD classifier, comparable to social attention in discriminative power. Within ASD, saccade features correlated with measures of restricted and repetitive behavior. Conclusions: We conclude that saccade dysmetria is an ASD oculomotor phenotype relevant to visual exploration. Decreased saccade amplitude and duration indicate spatially clustered fixations that attenuate visual exploration and emphasize endogenous over exogenous attention. We propose altered pontocerebellar motor modulation as an underlying mechanism that contributes to atypical (oculo-)motor coordination and attention function in ASD.
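The group comparison reads as a standard mixed-model contrast. Below is a hedged sketch using statsmodels on synthetic long-format data; the column names (`saccade_amplitude`, `group`, `human_content`, `subject`) and the simulated effect size are assumptions for illustration only, not the study's variables or results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial, random intercept per subject.
rng = np.random.default_rng(0)
n_subj, n_trials = 40, 30
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "group": np.repeat(["ASD", "NT"], (n_subj // 2) * n_trials),
    "human_content": rng.integers(0, 2, n_subj * n_trials),
})
# Simulated outcome with a small group effect (illustrative only).
df["saccade_amplitude"] = (
    5.0 - 0.4 * (df["group"] == "ASD") + rng.normal(0, 1, len(df))
)

# Linear mixed model: fixed effects of group and video content,
# random intercept per participant (groups=).
model = smf.mixedlm(
    "saccade_amplitude ~ group * human_content", df, groups=df["subject"]
)
result = model.fit()
print(result.summary())
```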
The central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations in which the scene would be omnidirectional and the entire field of view could be of use. In this study, participants searched for objects in simulated everyday rooms in virtual reality. Using a gaze-contingent protocol, we masked central or peripheral vision (masks of 6 deg radius) during trials. We analyzed the impact of vision loss on visuomotor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we separated eye, head, and overall gaze movements in our analyses. Additionally, we studied these measures after dividing trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the respective roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades that serve exploration, and the verification phase by long fixations and short saccades that serve analysis. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve a higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings clarify how knowledge gathered on-screen may transfer to more natural conditions and attest to the experimental usefulness of eye tracking in virtual reality.
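A small sketch of the two-phase split, assuming fixations have already been detected and labeled with whether they landed on the target; the `Fixation` record and `on_target` flag are hypothetical stand-ins for the study's event data:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    duration_ms: float      # fixation duration
    saccade_amp_deg: float  # amplitude of the saccade preceding this fixation
    on_target: bool         # did this fixation land on the search target?

def split_phases(trial: list[Fixation]):
    """Scanning = everything before the first target fixation;
    verification = the first target fixation onward."""
    idx = next((i for i, f in enumerate(trial) if f.on_target), len(trial))
    return trial[:idx], trial[idx:]

def summarize(phase: list[Fixation]):
    """Per-phase means of the measures discussed above."""
    if not phase:
        return None
    return {
        "mean_fix_dur_ms": sum(f.duration_ms for f in phase) / len(phase),
        "mean_sacc_amp_deg": sum(f.saccade_amp_deg for f in phase) / len(phase),
    }
```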
Reaction times to previously ignored information are often delayed, a phenomenon referred to as negative priming (NP). Rothermund et al. (2005) proposed that NP is caused by the retrieval of incidental stimulus-response associations when consecutive displays share visual features but require different responses. In two experiments, we examined whether the features (color, shape) that reappear in consecutive displays, or their level of processing (early perceptual vs. late semantic), moderate the likelihood that stimulus-response associations are retrieved. In a perceptual-matching task (Experiment 1), NP occurred independently of whether responses were repeated or switched. Only in a semantic-matching task (Experiment 2) was NP determined by response repetition, as predicted by response-retrieval theory. The results can be explained in terms of a task-dependent temporal discrimination process (Milliken et al., 1998): response-relevant features are encoded more strongly and/or are more likely to be retrieved than irrelevant features.
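The response-retrieval prediction tested in Experiment 2 boils down to a simple contrast: the NP effect (ignored-repetition RT minus control RT) should emerge when the response switches and vanish when it repeats. A toy computation with made-up cell means; all numbers are illustrative, not data from the experiments:

```python
# Illustrative cell means (ms); not data from the experiments.
rt = {
    ("ignored_repetition", "response_switch"): 620.0,
    ("control",            "response_switch"): 590.0,
    ("ignored_repetition", "response_repeat"): 585.0,
    ("control",            "response_repeat"): 590.0,
}

for resp in ("response_switch", "response_repeat"):
    np_effect = rt[("ignored_repetition", resp)] - rt[("control", resp)]
    print(f"{resp}: NP effect = {np_effect:+.0f} ms")
# Response-retrieval theory predicts a positive NP effect only for
# response switches, since retrieving the old response then conflicts
# with the currently required one.
```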