Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and on central vision to verify them. This segregation of the visual fields has been established primarily through on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the presumed roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that could not be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene-semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene affected search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not affect verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes appears to be minor, whereas the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
Repeated search studies are a hallmark of research into the interplay between memory and attention. Because results are usually averaged, the substantial decrease in response times between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times across a series of sequential searches, yet the nature of this initial lack of search efficiency has so far remained unexplored. We tested the hypothesis that the activation of spatial priors produces this search efficiency profile. Before searching repeatedly through scenes in virtual reality, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than in search initiation or decision time, and that it goes beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation begins; this takes a toll on search time at first, but once activated, the priors can be used to guide subsequent searches.
The central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations in which the scene is omnidirectional and the entire field of view can be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. Using a gaze-contingent protocol, we masked central or peripheral vision (masks of 6 degrees of radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we separated eye, head, and overall gaze movements in our analyses. Additionally, we examined these measures after dividing trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the respective roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades that serve exploration, and the verification phase by long fixations and short saccades that serve analysis. One finding indicates that eye movements are strongly driven by visual stimulation, whereas head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide further insight into how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
Building on neurofeedback (NF) training as a neurocognitive treatment for attention-deficit/hyperactivity disorder (ADHD), we designed a randomized, controlled functional near-infrared spectroscopy (fNIRS) NF intervention embedded in an immersive virtual reality classroom, in which participants learned to control overhead lighting with their dorsolateral prefrontal brain activation. We tested the efficacy of the intervention in healthy adults displaying high impulsivity, a sub-clinical population sharing common features with ADHD. Twenty participants, 10 in an experimental group and 10 in a shoulder-muscle-based electromyography control group, underwent eight training sessions over 2 weeks. Training was bookended by a pre- and post-test including go/no-go, n-back, and stop-signal tasks (SST). Results indicated a significant reduction in commission errors on the no-go task, with a simultaneous increase in prefrontal oxygenated hemoglobin concentration, for the experimental group but not for the control group. Furthermore, participants' ability to gain control over the feedback parameter correlated strongly with the reduction in commission errors for the experimental but not the control group, indicating the potential importance of learned feedback control in moderating behavioral outcomes. In addition, participants in the fNIRS group showed a reduction in reaction time variability on the SST. These results indicate a clear effect of our NF intervention in reducing impulsive behavior, possibly via a strengthening of frontal lobe functioning. Virtual reality additions to conventional NF may be one way to improve the ecological validity and symptom relevance of the training situation, thereby promoting transfer of acquired skills to real life.