Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations, where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6 deg. radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we separated eye, head, and overall gaze movements in our analyses. Additionally, we studied these measures after splitting trials into two search phases (scanning and verification). Our results generally replicate past on-screen findings and shed light on the respective roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades used to explore, and the verification phase by long fixations and short saccades used to analyze. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision had a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings clarify how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
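The gaze-contingent masking described above amounts to a per-frame eccentricity test: a scene direction is occluded if it falls inside (central-loss condition) or outside (peripheral-loss condition) a 6 deg. radius around the current gaze direction. The sketch below illustrates that test only; the function names and vector representation are illustrative and are not taken from the paper's Unity3D implementation.

```python
import math

def angle_between(gaze_dir, point_dir):
    """Angular distance in degrees between two unit direction vectors."""
    dot = sum(g * p for g, p in zip(gaze_dir, point_dir))
    dot = max(-1.0, min(1.0, dot))  # clamp to guard against rounding error
    return math.degrees(math.acos(dot))

def is_masked(gaze_dir, point_dir, mask_radius_deg=6.0, mode="central"):
    """Decide whether a scene direction is occluded by the gaze-contingent mask.

    mode="central" hides everything within mask_radius_deg of the gaze;
    mode="peripheral" hides everything outside that radius.
    """
    ecc = angle_between(gaze_dir, point_dir)
    if mode == "central":
        return ecc <= mask_radius_deg
    return ecc > mask_radius_deg
```

In an actual gaze-contingent display this test (or an equivalent shader) would run every frame, which is why the end-to-end latency between an eye movement and the mask update, discussed in the correction below, matters.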
We wish to make the following correction to the published paper 'Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment'. We have identified a flaw in the implementation of a latency mitigation strategy for our gaze-contingent protocol written in Unity3D. As a result, the maximum latency is now estimated to be 30 ms instead of 15 ms. This should not affect any of the results originally published, but should be noted for future reference.
The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role that different types of objects hold in a scene. In the current work, we seek to unveil an important building block of scenes: anchor objects. Anchors hold specific spatial predictions regarding the likely position of other objects in an environment. In a series of three eye tracking experiments, we tested what role anchor objects occupy during visual search. In all of the experiments, participants searched through scenes for an object that was cued at the beginning of each trial. Critically, in half of the scenes a target-relevant anchor was swapped for an irrelevant, albeit semantically consistent, object. We found that relevant anchor objects can guide visual search, leading to faster reaction times, less scene coverage, and less time between fixating the anchor and the target. The choice of anchor objects was confirmed through an independent large image database, which allowed us to identify key attributes of anchors. Anchor objects seem to play a unique role in the spatial layout of scenes and need to be considered for understanding the efficiency of visual search in realistic stimuli.
Reading is not only "cold" information processing, but involves affective and aesthetic processes that go far beyond what current models of word recognition, sentence processing, or text comprehension can explain. To investigate such "hot" reading processes, standardized instruments that quantify both psycholinguistic and emotional variables at the sublexical, lexical, inter-, and supralexical levels (e.g., phonological iconicity, word valence, arousal span, or passage suspense) are necessary. One such instrument, the Berlin Affective Word List (BAWL), has been used in over 50 published studies demonstrating effects of lexical emotional variables on all relevant processing levels (experiential, behavioral, neuronal). In this paper, we first present new data from several BAWL studies. Together, these studies examine various views on affective effects in reading arising from dimensional (e.g., valence) and discrete emotion features (e.g., happiness), or embodied cognition features like smelling. Second, we extend our investigation of the complex issue of affective word processing to words characterized by a mixture of affects. These words entail positive and negative valence, and/or features making them beautiful or ugly. Finally, we discuss tentative neurocognitive models of affective word processing in the light of the present results, raising new issues for future studies.