In 1957, Craig Mooney published a set of human face stimuli to study perceptual closure: the formation of a coherent percept on the basis of minimal visual information. Images of this type, now known as “Mooney faces”, are widely used in cognitive psychology and neuroscience because they offer a means of inducing variable perception with constant visuo-spatial characteristics (they are often not perceived as faces if viewed upside down). Mooney’s original set of 40 stimuli has been employed in several studies. However, it is often necessary to use a much larger stimulus set. We created a new set of over 500 Mooney faces and tested them on a cohort of human observers. We present the results of our tests here, and make the stimuli freely available via the internet. Our test results can be used to select subsets of the stimuli that are most suited for a given experimental purpose.
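Mooney-style two-tone images are conventionally produced by smoothing a grayscale photograph and then binarizing it. A minimal sketch of that classic procedure follows; the `sigma` and threshold choices are illustrative, and this is not necessarily the exact pipeline used to create the stimulus set described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mooney(image, sigma=4.0, threshold=None):
    """Two-tone ('Mooney-style') image: smooth a grayscale photo, then binarize.

    image: 2-D array of grayscale intensities. The threshold defaults to the
    median of the smoothed image, which yields roughly balanced black and
    white regions.
    """
    smoothed = gaussian_filter(image.astype(float), sigma)
    if threshold is None:
        threshold = np.median(smoothed)
    return (smoothed > threshold).astype(np.uint8)  # 1 = white, 0 = black
```

The smoothing step is what removes the intermediate gray levels, so that only coarse light/dark structure survives the thresholding.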
Purpose: There is ongoing controversy as to whether saccades change with age. This cross-sectional study aims to characterize reflexive saccades at various ages and to establish a normative cohort in a standardized set-up. A second objective is to investigate the feasibility of saccadometry in daily ophthalmological practice.
Methods: One hundred healthy participants aged between 6 and 76 years underwent an ophthalmologic examination and saccadometry, using an infrared video-oculography device sampling at 220 Hz. The reflexive saccades were evoked in four directions with three target displacements each (5°/15°/30° horizontally and 5°/10°/20° vertically). Saccadic peak velocity, gain (amplitude/target displacement), and latency were measured.
Results: Mean peak velocity of saccades was 213°/s (± 29°/s), 352°/s (± 50°/s) and 455°/s (± 67°/s) to a target position 5°, 15° and 30° horizontally, respectively, and 208°/s (± 36°/s), 303°/s (± 50°/s) and 391°/s (± 71°/s) to a target position 5°, 10° and 20° vertically. The association between peak velocity and eccentricity proved to be present at any age in all four directions. We found no relevant effect of age on peak velocity, gain and latency in a fitted linear mixed model. However, latency becomes shorter during childhood and adolescence, while in adulthood it is relatively stable with a slight trend to increase in the elderly. Saccades are more precise when the target displacement is small. Isometric saccades are most common, followed by hypometric ones. All children and elderly participants were able to perform good-quality saccadometry in a recording time of approximately 10 minutes.
Conclusion: The presented data may serve as a normative control for further studies using such a video-oculography device for saccadometry. Mean peak velocity and gain can be used independently of age, provided the target displacement is taken into account. Latency is susceptible to age.
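The three reported measures can be illustrated with a minimal sketch of how they are derived from an eye-position trace sampled at 220 Hz. The function and its velocity criterion are hypothetical simplifications, not the device's actual algorithm.

```python
import numpy as np

def saccade_metrics(eye_deg, target_onset_idx, target_displacement_deg, fs=220):
    """Compute latency, peak velocity, and gain for one reflexive saccade.

    eye_deg: 1-D eye-position trace in degrees, sampled at fs Hz.
    target_onset_idx: sample index of the target step.
    target_displacement_deg: size of the target step in degrees.
    """
    velocity = np.gradient(eye_deg) * fs          # deg/s
    # Saccade onset: first sample after target onset exceeding 30 deg/s
    # (an illustrative velocity criterion).
    moving = np.abs(velocity[target_onset_idx:]) > 30.0
    onset = target_onset_idx + int(np.argmax(moving))
    latency_ms = (onset - target_onset_idx) / fs * 1000.0
    peak_velocity = float(np.max(np.abs(velocity[onset:])))
    # Sketch simplification: amplitude = end-of-trace minus onset position.
    amplitude = float(eye_deg[-1] - eye_deg[onset])
    gain = amplitude / target_displacement_deg    # amplitude / target displacement
    return latency_ms, peak_velocity, gain
```

A gain of 1.0 corresponds to an isometric saccade; values below 1.0 are hypometric.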
Auditory and visual percepts are integrated even when they are not perfectly temporally aligned with each other, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three different whole-body actions with natural action-induced sounds: hurdling, tap dancing, and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed by a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected auditory and visual signals to be integrated over a wider temporal window for actions creating sounds intentionally (tap dancing) than for actions creating sounds incidentally (hurdling). While the percentages of perceived synchrony differed in the expected way, we identified two further factors, high event density and low rhythmicity, that also induced higher synchrony ratings. We therefore systematically varied event density and rhythmicity in Study 2, this time using drumming stimuli to exert full control over these variables, with the same simultaneity judgment task. Results suggest that high event density leads to a bias to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker, effect when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body actions, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected synchrony of sound and sight was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchrony judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
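A temporal integration window of the kind described above is commonly estimated by fitting a Gaussian to the proportion of "synchronous" responses as a function of stimulus onset asynchrony (SOA). The sketch below uses entirely hypothetical group data, not results from these studies; negative SOAs denote visual-first presentations.

```python
import numpy as np
from scipy.optimize import curve_fit

def synchrony_curve(soa, center, width, peak):
    """Proportion of 'synchronous' responses modeled as a Gaussian over SOA.

    center: point of subjective simultaneity (ms); width: spread of the
    temporal integration window (ms); peak: maximum synchrony proportion.
    """
    return peak * np.exp(-0.5 * ((soa - center) / width) ** 2)

# Hypothetical group data: SOAs in ms (negative = visual first) and the
# proportion of trials judged synchronous.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sync = np.array([0.15, 0.35, 0.70, 0.92, 0.95, 0.80, 0.45, 0.20, 0.08])

params, _ = curve_fit(synchrony_curve, soas, p_sync, p0=(-50.0, 150.0, 0.9))
center, width, peak = params
```

A negative fitted `center` captures the visual-first tolerance noted in the abstract, and a larger `width` corresponds to a wider integration window.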
Abstract: The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
Author Summary: Human visual perception is a complex cognitive feat known to be mediated by distinct cortical regions of the brain. However, the exact function of these regions remains unknown, and thus it remains unclear how those regions together orchestrate visual perception. Here, we apply an AI-driven brain mapping approach to reveal visual brain function. This approach integrates multiple artificial deep neural networks trained on a diverse set of functions with functional recordings of the whole human brain. Our results reveal a systematic tiling of visual cortex by mapping regions to particular functions of the deep networks. Together this constitutes a comprehensive account of the functions of the distinct cortical regions of the brain that mediate human visual perception.
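A common way to relate DNN activations to fMRI responses, and one consistent with the variance-explained framing above, is a cross-validated linear encoding model. The sketch below uses synthetic stand-in arrays and closed-form ridge regression; it is an illustration of the general technique, not the authors' exact analysis.

```python
import numpy as np

def encoding_r2(features, voxels, alpha=1.0, n_train=80):
    """Ridge-regression encoding model: predict voxel responses from DNN features.

    features: (n_images, n_features) DNN activations for each scene image.
    voxels:   (n_images, n_voxels) fMRI responses in one brain region.
    Returns the mean R^2 over voxels on a held-out split.
    """
    Xtr, Xte = features[:n_train], features[n_train:]
    Ytr, Yte = voxels[:n_train], voxels[n_train:]
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    d = Xtr.shape[1]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ Ytr)
    pred = Xte @ W
    ss_res = ((Yte - pred) ** 2).sum(axis=0)
    ss_tot = ((Yte - Yte.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))
```

Repeating this fit for each task-specific DNN and each region, and comparing the resulting R^2 values against a noise ceiling, yields a task-by-region map of the kind the abstract describes.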
Autism spectrum disorders (ASD) have been associated with sensory hypersensitivity. A recent study reported visual acuity (VA) in ASD in the range reported for birds of prey. The validity of the results was subsequently doubted. This study examined VA in 34 individuals with ASD, 16 with schizophrenia (SCH), and 26 typically developing (TYP). Participants with ASD did not show higher VA than those with SCH and TYP. There were no substantial correlations of VA with clinical severity in ASD or SCH. This study could not confirm the eagle-eyed acuity hypothesis of ASD, or find evidence for a connection of VA and clinical phenotypes. Research needs to further address the origins and circumstances associated with altered sensory or perceptual processing in ASD.
Full reconstruction of large lobula plate tangential cells in Drosophila from a 3D EM dataset
(2018)
With the advent of neurogenetic methods, the neural basis of behavior is presently being analyzed in more and more detail. This is particularly true for visually driven behavior of Drosophila melanogaster, where cell-specific driver lines exist that, depending on the combination with appropriate effector genes, allow for targeted recording, silencing and optogenetic stimulation of individual cell types. Together with detailed connectomic data of large parts of the fly optic lobe, this has recently led to much progress in our understanding of the neural circuits underlying local motion detection. However, how such local information is combined by optic flow sensitive large-field neurons is still incompletely understood. Here, we aim to fill this gap with a dense reconstruction of the tangential cells of the fly lobula plate. These neurons collect input from many hundreds of local motion-sensing T4/T5 neurons and connect them to descending neurons or central brain areas. We confirm all basic features of HS and VS cells as published previously from light microscopy. In addition, we identified the dorsal and the ventral centrifugal horizontal cells, dCH and vCH, as well as three VS-like cells, including their distinct dendritic and axonal projection areas.
Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a wide variety of response patterns and captures group differences between healthy controls and patients with schizophrenia.
At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to potential underlying mechanisms.
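The core mechanism, an activity difference between two competing populations modeled as a Brownian motion with drift, can be sketched in a few lines. The parameters below are arbitrary toy values, not the fitted HBM, and the sketch covers only the percept-switching level, not the full hierarchy.

```python
import numpy as np

def simulate_switches(n_steps=200_000, dt=0.001, drift=2.0, sigma=1.0,
                      threshold=1.0, seed=0):
    """Percept switches driven by a Brownian motion with drift (toy sketch).

    x is the activity difference between two competing neuronal populations.
    The drift pushes x toward the currently suppressed percept's threshold;
    a threshold crossing flips the dominant percept and resets the difference.
    Returns the switch times in seconds.
    """
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    x, percept, t = 0.0, 1, 0.0
    switch_times = []
    for n in noise:
        t += dt
        x += percept * drift * dt + n
        if percept * x >= threshold:   # crossed the active percept's threshold
            percept, x = -percept, 0.0
            switch_times.append(t)
    return np.array(switch_times)
```

Differencing the returned switch times gives the dominance durations; with a single absorbing threshold their mean is threshold/drift (0.5 s for the toy values above), and varying the drift or noise level changes the shape of the duration distribution, which is what allows such a model to capture different response patterns.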