Open Access

Refine

Author

  • Võ, Melissa Lê-Hoa (13)
  • Draschkow, Dejan (5)
  • Beitner, Julia (4)
  • David, Erwan (2)
  • Willenbockel, Verena (2)
  • Aizenman, Avi M. (1)
  • Boettcher, Sage E. P. (1)
  • Braun, Mario (1)
  • Briesemeister, Benny B. (1)
  • Caplette, Laurent (1)

Year of publication

  • 2021 (4)
  • 2015 (2)
  • 2017 (2)
  • 2020 (2)
  • 2018 (1)
  • 2019 (1)
  • 2022 (1)

Document Type

  • Article (12)
  • Preprint (1)

Language

  • English (12)
  • German (1)

Has Fulltext

  • yes (13)

Is part of the Bibliography

  • no (13)

Keywords

  • Human behaviour (3)
  • virtual reality (3)
  • visual search (3)
  • gaze-contingent protocol (2)
  • visual attention (2)
  • Berlin Affective Word List (BAWL) (1)
  • Electroencephalography – EEG (1)
  • Long-term memory (1)
  • Object recognition (1)
  • Object vision (1)

Institute

  • Psychologie (10)
  • Psychologie und Sportwissenschaften (2)
  • Präsidium (1)

13 search hits

Heureka!: How did a flash of inspiration strike you in your research? (2015)
Leppin, Hartmut ; Võ, Melissa Lê-Hoa ; Đikić, Ivan ; Kemmers, Fleur ; Jussen, Bernhard ; Forst, Rainer
10 years of BAWLing into affective and aesthetic processes in reading: what are the echoes? (2015)
Jacobs, Arthur M. ; Võ, Melissa Lê-Hoa ; Briesemeister, Benny B. ; Conrad, Markus ; Hofmann, Markus J. ; Kuchinke, Lars ; Lüdtke, Jana ; Braun, Mario
Reading is not only "cold" information processing, but also involves affective and aesthetic processes that go far beyond what current models of word recognition, sentence processing, or text comprehension can explain. To investigate such "hot" reading processes, standardized instruments that quantify both psycholinguistic and emotional variables at the sublexical, lexical, inter-, and supralexical levels (e.g., phonological iconicity, word valence, arousal-span, or passage suspense) are necessary. One such instrument, the Berlin Affective Word List (BAWL), has been used in over 50 published studies demonstrating effects of lexical emotional variables at all relevant processing levels (experiential, behavioral, neuronal). In this paper, we first present new data from several BAWL studies. Together, these studies examine various views on affective effects in reading arising from dimensional (e.g., valence) and discrete emotion features (e.g., happiness), or embodied cognition features like smelling. Second, we extend our investigation of the complex issue of affective word processing to words characterized by a mixture of affects. These words entail positive and negative valence, and/or features that make them beautiful or ugly. Finally, we discuss tentative neurocognitive models of affective word processing in the light of the present results, raising new issues for future studies.
The role of scene summary statistics in object recognition (2018)
Lauer, Tim ; Cornelissen, Tim H. W. ; Draschkow, Dejan ; Willenbockel, Verena ; Võ, Melissa Lê-Hoa
Objects that are semantically related to the visual scene context are typically better recognized than unrelated objects. While context effects on object recognition are well studied, the question of which particular visual information in an object’s surroundings modulates its semantic processing remains unresolved. Typically, one would expect contextual influences to arise from high-level, semantic components of a scene, but what if even low-level features could modulate object processing? Here, we generated seemingly meaningless textures of real-world scenes, which preserved similar summary statistics but discarded spatial layout information. In Experiment 1, participants categorized such textures better than colour controls that lacked higher-order scene statistics, while original scenes yielded the highest performance. In Experiment 2, participants recognized briefly presented consistent objects on scenes significantly better than inconsistent objects, whereas on textures, consistent objects were recognized only slightly more accurately. In Experiment 3, we recorded event-related potentials and observed a pronounced mid-central negativity in the N300/N400 time windows for inconsistent relative to consistent objects on scenes. Critically, inconsistent objects on textures also triggered N300/N400 effects with a comparable time course, though less pronounced. Our results suggest that a scene’s low-level features contribute to the effective processing of objects in complex real-world environments.
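The textures described above were produced with a texture-synthesis model; as a loose, minimal illustration of the underlying idea (preserving low-level summary statistics while discarding spatial layout), the following Python sketch phase-scrambles an image. Phase scrambling is a stand-in here, not the authors' method, and the random input image is a placeholder.

```python
# Minimal sketch: keep one low-level summary statistic (the Fourier
# amplitude spectrum) while destroying spatial layout information.
import numpy as np

def phase_scramble(image, rng):
    """Swap an image's phase spectrum for that of white noise."""
    amplitude = np.abs(np.fft.fft2(image))                   # preserved statistic
    phase = np.angle(np.fft.fft2(rng.random(image.shape)))   # randomized layout
    return np.fft.ifft2(amplitude * np.exp(1j * phase)).real

rng = np.random.default_rng(0)
scene = rng.random((256, 256))        # placeholder for a grayscale scene image
texture = phase_scramble(scene, rng)  # same amplitude spectrum, no layout
```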
Get your guidance going: investigating the activation of spatial priors for efficient search in virtual reality (2021)
Beitner, Julia ; Helbing, Jason ; Draschkow, Dejan ; Võ, Melissa Lê-Hoa
Repeated search studies are a hallmark of research into the interplay between memory and attention. Because response times are usually averaged across searches, the substantial decrease that occurs between the first and second search through the same environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches, yet the nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors produces this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than search initiation or decision time, and that it goes beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated; this process takes a toll on search time at first, but once complete, the priors can be used to guide subsequent searches.
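The averaging pitfall described above is easy to make concrete. A minimal Python sketch with fabricated response times: the grand mean collapses across repetitions and hides the first-to-second-search drop that per-repetition means expose.

```python
import numpy as np

# rts[i, j]: response time (s) of participant i on the j-th search through
# the same scene; all numbers are invented for illustration.
rng = np.random.default_rng(1)
rts = np.column_stack([
    rng.normal(4.0, 0.5, 20),  # 1st search: slow, priors not yet activated
    rng.normal(2.2, 0.4, 20),  # 2nd search: large drop (initiation effect)
    rng.normal(2.0, 0.4, 20),  # later searches: only small further gains
    rng.normal(1.9, 0.4, 20),
])

print("grand mean (hides the effect):", rts.mean().round(2))
print("per-repetition means:", rts.mean(axis=0).round(2))
print("first-to-second drop:", (rts[:, 0] - rts[:, 1]).mean().round(2), "s")
```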
Correction: David et al. Effects of transient loss of vision on head and eye movements during visual search in a virtual environment. Brain Sci. 2020, 10, 841 (2021)
David, Erwan ; Beitner, Julia ; Võ, Melissa Lê-Hoa
We wish to make the following correction to the published paper 'Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment'. We have identified a flaw in the implementation of a latency mitigation strategy for our gaze-contingent protocol written in Unity3D. As a result, the maximum latency is now estimated to be 30 ms instead of 15 ms, which should not affect any of the originally published results but should be noted for future reference.
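For orientation, the latency at issue in a gaze-contingent protocol is roughly the eye tracker's sample interval plus however many display frames elapse before the updated mask is drawn. A hypothetical back-of-the-envelope sketch (not the authors' code; the 120 Hz tracker and 90 Hz headset rates are assumptions, and the numbers do not reconstruct the paper's estimates):

```python
tracker_interval_ms = 1000 / 120  # hypothetical 120 Hz eye tracker
frame_ms = 1000 / 90              # hypothetical 90 Hz VR headset

# Mask drawn on the next frame vs. delayed by one extra frame,
# e.g. because a latency mitigation step misfires:
best_case = tracker_interval_ms + frame_ms
worst_case = tracker_interval_ms + 2 * frame_ms
print(f"~{best_case:.0f} ms vs ~{worst_case:.0f} ms worst case")
```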
Viewpoint-dependence and scene context effects generalize to depth rotated 3D objects (2022)
Kallmayer, Aylin ; Võ, Melissa Lê-Hoa ; Draschkow, Dejan
Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "accidental" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string-side of a guitar), this effect is reduced by meaningful scene context information. In the present study we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and to establish accidental and canonical viewpoints empirically. In Experiment 1, we presented 3D models of objects from six different viewpoints (0°, 60°, 120°, 180°, 240°, 300°) in colour (1a) and grayscaled (1b) in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on performance in Experiments 1a and 1b, we determined canonical (0°-rotation) and non-canonical (120°-rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task; however, the objects were now paired with scene backgrounds that were either consistent (e.g., a cup in the kitchen) or inconsistent (e.g., a guitar in the bathroom) with the object. Viewpoint interacted significantly with scene consistency in that object recognition was less affected by viewpoint when consistent, compared to inconsistent, scene information was provided. Our results show that viewpoint-dependence and scene context effects generalise to depth-rotated 3D objects. This supports the important role object-scene processing plays in object constancy.
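A minimal sketch of how canonical and non-canonical viewpoints can be read off Experiment-1-style performance data. The accuracies below are invented (the study also considered response times), chosen so that the best and worst rotations match the 0° and 120° viewpoints named above.

```python
# Invented matching accuracies per depth rotation (degrees).
accuracy = {0: 0.95, 60: 0.90, 120: 0.78, 180: 0.84, 240: 0.80, 300: 0.91}

canonical = max(accuracy, key=accuracy.get)      # best-recognized -> 0
non_canonical = min(accuracy, key=accuracy.get)  # worst-recognized -> 120
print(canonical, non_canonical)
```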
Effects of transient loss of vision on head and eye movements during visual search in a virtual environment (2020)
David, Erwan ; Beitner, Julia ; Võ, Melissa Lê-Hoa
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we segregated eye, head, and general gaze movements in our analyses. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades to explore, and the verification phase by long fixations and short saccades to analyze. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information about how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
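A minimal sketch (not the study's analysis pipeline) of the kind of segregation the abstract mentions: a gaze shift in VR can be split into a head-in-world component and an eye-in-head component by comparing direction vectors across samples. All vectors here are invented.

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two direction vectors, in degrees."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# Hypothetical samples at two time points: head-forward vectors in world
# coordinates, eye-in-head vectors in head coordinates.
head_t0, head_t1 = np.array([0.0, 0.0, 1.0]), np.array([0.26, 0.0, 0.97])
eye_t0, eye_t1 = np.array([0.0, 0.0, 1.0]), np.array([-0.10, 0.0, 0.99])

print(f"head contribution: {angle_deg(head_t0, head_t1):.1f} deg")
print(f"eye-in-head contribution: {angle_deg(eye_t0, eye_t1):.1f} deg")
```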
Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search (2017)
Draschkow, Dejan ; Võ, Melissa Lê-Hoa
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one’s scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants’ memory for the arrangements was probed using a surprise recall (Experiment 1) or repeated search (Experiment 2) task. Participants’ construction behavior showed strategic use of larger, static objects to anchor the locations of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Anchoring visual search in scenes: assessing the role of anchor objects on eye movements during visual search (2019)
Boettcher, Sage E. P. ; Draschkow, Dejan ; Dienhart, Eric ; Võ, Melissa Lê-Hoa
The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role different types of objects play in a scene. In the current work, we seek to unveil an important building block of scenes: anchor objects. Anchors carry specific spatial predictions regarding the likely positions of other objects in an environment. In a series of three eye-tracking experiments, we tested what role anchor objects occupy during visual search. In all of the experiments, participants searched through scenes for an object that was cued at the beginning of each trial. Critically, in half of the scenes a target-relevant anchor was swapped for an irrelevant, albeit semantically consistent, object. We found that relevant anchor objects can guide visual search, leading to faster reaction times, less scene coverage, and less time between fixating the anchor and the target. The choice of anchor objects was confirmed through an independent large image database, which allowed us to identify key attributes of anchors. Anchor objects seem to play a unique role in the spatial layout of scenes and need to be considered for understanding the efficiency of visual search in realistic stimuli.
Even if I showed you where you looked, remembering where you just looked is hard (2017)
Kok, Ellen M. ; Aizenman, Avi M. ; Võ, Melissa Lê-Hoa ; Wolfe, Jeremy M.
People know surprisingly little about their own visual behavior, which can be problematic when learning or executing complex visual tasks such as search of medical images. We investigated whether providing observers with online information about their eye position during search would help them recall their own fixations immediately afterwards. Seventeen observers searched for various objects in “Where's Waldo” images for 3 s. On two-thirds of trials, observers made target present/absent responses. On the other third (critical trials), they were asked to click twelve locations in the scene where they thought they had just fixated. On half of the trials, a gaze-contingent window showed observers their current eye position as a 7.5° diameter “spotlight.” The spotlight “illuminated” everything fixated, while the rest of the display was still visible but dimmer. Performance was quantified as the overlap of circles centered on the actual fixations and centered on the reported fixations. Replicating prior work, this overlap was quite low (26%), far from ceiling (66%) and quite close to chance performance (21%). Performance was only slightly better in the spotlight condition (28%, p = 0.03). Giving observers information about their fixation locations by dimming the periphery improved memory for those fixations modestly, at best.
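The overlap score described above can be made concrete with an intersection-over-union style computation on a pixel grid. This is a hedged sketch of one plausible implementation (the paper's exact quantification may differ), and all coordinates and radii are invented.

```python
import numpy as np

def coverage(points, shape, radius):
    """Boolean mask of pixels within `radius` of any (x, y) point."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for x, y in points:
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    return mask

actual = [(40, 50), (120, 80), (200, 160)]    # invented fixation centers
reported = [(45, 60), (180, 150), (60, 200)]  # invented reported clicks
a = coverage(actual, (240, 320), radius=25)
r = coverage(reported, (240, 320), radius=25)
print(f"overlap: {(a & r).sum() / (a | r).sum():.0%}")
```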