Reading is not only "cold" information processing, but involves affective and aesthetic processes that go far beyond what current models of word recognition, sentence processing, or text comprehension can explain. To investigate such "hot" reading processes, standardized instruments that quantify both psycholinguistic and emotional variables at the sublexical, lexical, inter-, and supralexical levels (e.g., phonological iconicity, word valence, arousal-span, or passage suspense) are necessary. One such instrument, the Berlin Affective Word List (BAWL), has been used in over 50 published studies demonstrating effects of lexical emotional variables at all relevant processing levels (experiential, behavioral, neuronal). In this paper, we first present new data from several BAWL studies. Together, these studies examine various views on affective effects in reading arising from dimensional (e.g., valence) and discrete emotion features (e.g., happiness), or embodied cognition features like smelling. Second, we extend our investigation of the complex issue of affective word processing to words characterized by a mixture of affects. These words entail positive and negative valence, and/or features making them beautiful or ugly. Finally, we discuss tentative neurocognitive models of affective word processing in the light of the present results, raising new issues for future studies.
The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role that different types of objects hold in a scene. In the current work, we seek to unveil an important building block of scenes: anchor objects. Anchors hold specific spatial predictions regarding the likely position of other objects in an environment. In a series of three eye tracking experiments, we tested what role anchor objects occupy during visual search. In all of the experiments, participants searched through scenes for an object that was cued at the beginning of each trial. Critically, in half of the scenes a target-relevant anchor was swapped for an irrelevant, albeit semantically consistent, object. We found that relevant anchor objects can guide visual search, leading to faster reaction times, less scene coverage, and less time between fixating the anchor and the target. The choice of anchor objects was confirmed through an independent large image database, which allowed us to identify key attributes of anchors. Anchor objects seem to play a unique role in the spatial layout of scenes and need to be considered for understanding the efficiency of visual search in realistic stimuli.
We wish to make the following correction to the published paper 'Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment'. We have identified a flaw in the implementation of a latency mitigation strategy for our gaze-contingent protocol written in Unity3D. As a result, the maximum latency is now estimated to be 30 ms instead of 15 ms, which should not affect any of the originally published results but should be noted for future reference.
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we analyzed eye, head, and overall gaze movements separately. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and are informative about the roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades that serve exploration, and the verification phase by long fixations and short saccades that serve analysis. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information on how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
Estimating power in (generalized) linear mixed models: An open introduction and tutorial in R
(2021)
Mixed-effects models are a powerful tool for modeling fixed and random effects simultaneously, but they do not offer a feasible analytic solution for estimating the probability that a test correctly rejects the null hypothesis. Being able to estimate this probability, however, is critical for sample size planning, as power is closely linked to the reliability and replicability of empirical findings. Simulation-based power analyses are a flexible and intuitive alternative to analytic power solutions. Although various tools for conducting simulation-based power analyses for mixed-effects models are available, there is a lack of guidance on how to use them appropriately. In this tutorial, we discuss how to estimate power for mixed-effects models in different use cases: first, how to use models that were fit on available (e.g., published) data to determine sample size; second, how to determine the number of stimuli required for sufficient power; and finally, how to conduct sample size planning without available data. Our examples cover both linear and generalized linear models, and we provide code and resources for performing simulation-based power analyses on openly accessible data sets. The present work therefore helps researchers to navigate sound research design when using mixed-effects models, by summarizing resources, collating available knowledge, providing solutions and tools, and applying them to real-world problems in sample size planning where mixed-effects models are the planned inferential procedure.
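At its core, the simulation-based approach described here amounts to repeatedly simulating new responses from a fitted model, refitting, and counting significant results. The sketch below illustrates that logic with lme4 in R; the model formula and the variable names (rt, condition, subject) are hypothetical placeholders, not taken from the tutorial's data sets.

```r
# Minimal sketch of a simulation-based power analysis with lme4.
# Assumes a fitted lmer model whose second fixed effect is the one of interest.
library(lme4)

estimate_power <- function(model, n_sim = 1000, t_crit = 2) {
  hits <- 0
  for (i in seq_len(n_sim)) {
    y_sim <- simulate(model)[[1]]              # simulate responses from the fitted model
    fit   <- refit(model, newresp = y_sim)     # refit the same model to the simulated data
    t_val <- coef(summary(fit))[2, "t value"]  # t value of the fixed effect of interest
    if (abs(t_val) >= t_crit) hits <- hits + 1
  }
  hits / n_sim  # proportion of significant refits = estimated power
}

# Hypothetical usage:
# m <- lmer(rt ~ condition + (1 | subject), data = dat)
# estimate_power(m)
```

The |t| >= 2 criterion is a common heuristic for declaring a fixed effect significant in linear mixed models; dedicated packages such as simr or mixedpower wrap this simulate-and-refit loop with more elaborate options.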
People know surprisingly little about their own visual behavior, which can be problematic when learning or executing complex visual tasks such as the search of medical images. We investigated whether providing observers with online information about their eye position during search would help them recall their own fixations immediately afterwards. Seventeen observers searched for various objects in "Where's Waldo" images for 3 s. On two-thirds of the trials, observers made target present/absent responses. On the other third (critical trials), they were asked to click on twelve locations in the scene where they thought they had just fixated. On half of the trials, a gaze-contingent window showed observers their current eye position as a "spotlight" 7.5° in diameter. The spotlight "illuminated" everything fixated, while the rest of the display remained visible but dimmer. Performance was quantified as the overlap of circles centered on the actual fixations with circles centered on the reported fixations. Replicating prior work, this overlap was quite low (26%), far from ceiling (66%), and quite close to chance performance (21%). Performance was only slightly better in the spotlight condition (28%, p = 0.03). Giving observers information about their fixation locations by dimming the periphery improved memory for those fixations modestly, at best.
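For readers who want to reproduce an overlap score of this kind, the intersection of two equal-radius circles is standard lens geometry. The R sketch below computes proportional overlap for a given centre distance; it is only one plausible instantiation of the measure described above, and the 3.75° radius is an assumption derived from the 7.5° spotlight diameter.

```r
# Proportional overlap of two circles with equal radius r whose centres
# are d apart (lens area divided by the area of one circle).
circle_overlap <- function(d, r = 3.75) {
  if (d >= 2 * r) return(0)                # circles do not intersect
  lens <- 2 * r^2 * acos(d / (2 * r)) -    # area of the lens-shaped
          (d / 2) * sqrt(4 * r^2 - d^2)    # intersection region
  lens / (pi * r^2)
}

circle_overlap(0)      # identical centres: 1 (perfect recall)
circle_overlap(3.75)   # centres one radius apart: ~0.39
```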
Human observers can quickly and accurately categorize scenes. This remarkable ability is related to the use of information at different spatial frequencies (SFs) following a coarse-to-fine pattern: low SFs, conveying coarse layout information, are thought to be used earlier than high SFs, which represent more fine-grained information. Alternatives to this pattern have rarely been considered. Here, we probed all possible SF usage strategies randomly, with high resolution in both the SF and time dimensions, at two categorization levels. We show that correct basic-level categorizations of indoor scenes are linked to the sampling of relatively high SFs, whereas correct outdoor scene categorizations are predicted by an early use of high SFs and a later use of low SFs (a fine-to-coarse pattern of SF usage). Superordinate-level categorizations (indoor vs. outdoor scenes) rely on lower SFs early on, followed by a shift to higher SFs and a subsequent shift back to lower SFs in late stages. In summary, our results show no consistent pattern of SF usage across tasks and only partially replicate the diagnostic SFs found in previous studies. We therefore propose that the SF sampling strategies of observers differ with varying stimulus and task characteristics, thus favouring the notion of flexible SF usage.
Repeated search studies are a hallmark of the investigation of the interplay between memory and attention. Because response times are usually averaged across searches, the substantial decrease occurring between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches. The nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors leads to this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the immediate-search condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than in search initiation or decision time, and extends beyond effects of object learning or incidental memory. Our study suggests that, upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated; this takes a toll on search time at first, but once the priors are activated they can be used to guide subsequent searches.
Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search
(2017)
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar, and its effects on behavior and memory, have received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp. 1) or repeated search (Exp. 2) task. Participants' construction behavior showed strategic use of larger, static objects to anchor the locations of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.