Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated by a data-driven analysis.
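The core idea of the ICA step described above, unmixing observed signals into statistically independent components, can be sketched in simplified single-subject form with scikit-learn's FastICA. All signals and the mixing matrix below are hypothetical toy data, not the study's fMRI time courses:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Three hypothetical source time courses standing in for component signals
s1 = np.sin(2 * t)                  # sinusoidal source
s2 = np.sign(np.sin(3 * t))        # square-wave source
s3 = rng.laplace(size=t.size)      # noise-like source
S = np.c_[s1, s2, s3]

# Toy mixing matrix: each "observed channel" is a weighted sum of sources
A = np.array([[1.0, 1.0, 1.0],
              [0.5, 2.0, 1.0],
              [1.5, 1.0, 2.0]])
X = S @ A.T                        # observed mixed signals

# Recover independent components from the mixtures alone
ica = FastICA(n_components=3, whiten="unit-variance", random_state=0)
S_est = ica.fit_transform(X)
print(S_est.shape)                 # one column per recovered component
```

In the study's group-level variant, the "channels" are voxel time courses concatenated across participants, and the recovered spatial maps are then labeled (A, V, AV) by their layouts and time courses.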
Auditory and visual percepts are integrated even when they are not perfectly temporally aligned, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three different whole-body actions with natural action-induced sounds: hurdling, tap dancing and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed by a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected auditory and visual signals to be integrated over a wider temporal window for actions creating sounds intentionally (tap dancing) than for actions creating sounds incidentally (hurdling). While percentages of perceived synchrony differed in the expected way, we identified two further factors, high event density and low rhythmicity, that also induced higher synchrony ratings. We therefore systematically varied event density and rhythmicity in Study 2, this time using drumming stimuli to exert full control over these variables, with the same simultaneity judgment task. Results suggest that high event density leads to a bias to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker, effect when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body actions, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected, that is, the synchrony of sound and sight, was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchrony judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
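A simple way to quantify the temporal integration window from simultaneity-judgment data like those above is to find the range of stimulus onset asynchronies (SOAs) at which perceived synchrony exceeds a criterion. The response proportions below are hypothetical illustrations, not the study's results; note the wider window on the visual-first (negative SOA) side, mirroring the asymmetry the abstract describes:

```python
import numpy as np

# Hypothetical simultaneity-judgment data: SOAs in ms (negative = visual
# leads) and the proportion of "synchronous" responses at each SOA.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
p_sync = np.array([0.35, 0.55, 0.80, 0.95, 0.97, 0.85, 0.60, 0.30, 0.15])

# Integration window: SOA range where perceived synchrony is >= 50%,
# with linear interpolation at the two criterion crossings.
crit = 0.5
idx = np.where(p_sync >= crit)[0]
lo, hi = idx[0], idx[-1]

def cross(i, j):
    # Interpolate the SOA at which p_sync crosses the criterion between i and j
    return soas[i] + (crit - p_sync[i]) * (soas[j] - soas[i]) / (p_sync[j] - p_sync[i])

left = cross(lo - 1, lo) if lo > 0 else soas[0]
right = cross(hi, hi + 1) if hi < len(soas) - 1 else soas[-1]
window = right - left
print(round(left), round(right), round(window))
```

With these toy numbers the window spans roughly -325 ms to +233 ms, i.e. it extends further into visual-first asynchronies.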
The timing of feedback to early visual cortex in the perception of long-range apparent motion
(2008)
When two visual stimuli are presented one after another in different locations, they are often perceived as a single moving object. Feedback from the human motion complex hMT/V5+ to V1 has been hypothesized to play an important role in this illusory perception of motion. We measured event-related responses to illusory motion stimuli of varying apparent motion (AM) content and retinal location using electroencephalography (EEG). Detectable cortical stimulus processing started around 60 ms post-stimulus in area V1. This component was insensitive to AM content and sequential stimulus presentation. Sensitivity to AM content was observed starting around 90 ms after the second stimulus of a sequence and most likely originated in area hMT/V5+. This AM-sensitive response was insensitive to retinal stimulus position. The stimulus-sequence-related response became sensitive to retinal stimulus position only at a longer latency of 110 ms. We interpret our findings as evidence for feedback from area hMT/V5+, or a related motion-processing area, to early visual cortices (V1, V2, V3).
Species distributed across vast continental areas and across major biomes provide unique model systems for studies of biotic diversification, yet also constitute daunting financial, logistic and political challenges for data collection across such regions. The tree frog Dendropsophus minutus (Anura: Hylidae) is a nominal species, continentally distributed in South America, that may represent a complex of multiple species, each with a more limited distribution. To understand the spatial pattern of molecular diversity throughout the range of this species complex, we obtained DNA sequence data from two mitochondrial genes, cytochrome oxidase I (COI) and the 16S ribosomal gene (16S), for 407 samples of D. minutus and closely related species distributed across eleven countries, effectively comprising the entire range of the group. We performed phylogenetic and spatially explicit phylogeographic analyses to assess the genetic structure of lineages and infer ancestral areas. We found 43 statistically supported, deep mitochondrial lineages, several of which may represent currently unrecognized distinct species. One major clade, containing 25 divergent lineages, includes samples from the type locality of D. minutus. We defined that clade as the D. minutus complex. The remaining lineages together with the D. minutus complex constitute the D. minutus species group. Historical analyses support an Amazonian origin for the D. minutus species group with a subsequent dispersal to eastern Brazil, where the D. minutus complex originated. According to our dataset, a total of eight mtDNA lineages have ranges >100,000 km², one of which occupies an area of almost one million km² encompassing multiple biomes. Our results, at a spatial scale and resolution unprecedented for a Neotropical vertebrate, confirm that widespread amphibian species occur in lowland South America, yet at the same time a large proportion of cryptic diversity still remains to be discovered.
Interpreter and creative gap-filler: how optical illusions arise in the cerebral cortex
(2005)
Optical illusions are not merely curious examples of how easily our perceptual apparatus can be "tricked"; psychologists and cognition researchers have long used them to investigate the visual system and its neurophysiological principles. Apparent motion is one such illusion: it arises from the rapid alternation of static images. Using functional magnetic resonance imaging, scientists at the Max-Planck-Institut für Hirnforschung in Frankfurt were able to show how the brain creates the illusion of motion even though the presented stimulus consisted only of adjacent, alternately flashing squares. This not only reveals the constructive principle by which the visual system operates; beyond that, the cerebral cortex acts as a "creative gap-filler" that actively completes missing sensory data into "plausible" overall impressions.
While prediction errors (PE) have been established to drive learning through adaptation of internal models, the role of model-compliant events in predictive processing is less clear. Checkpoints (CP) were recently introduced as points in time where expected sensory input resolved ambiguity regarding the validity of the internal model. Conceivably, these events serve as online reference points for model evaluation, particularly in uncertain contexts. Evidence from fMRI has shown functional similarities of CP and PE to be independent of event-related surprise, raising the important question of how these event classes relate to one another. Consequently, the aim of the present study was to characterise the functional relationship of checkpoints and prediction errors in a serial pattern detection task using electroencephalography (EEG). Specifically, we first hypothesised a joint P3b component of both event classes to index recourse to the internal model (compared to non-informative standards, STD). Second, we assumed the mismatch signal of PE to be reflected in an N400 component when compared to CP. Event-related findings supported these hypotheses. We suggest that while model adaptation is instigated by prediction errors, checkpoints are similarly used for model evaluation. Intriguingly, behavioural subgroup analyses showed that the exploitation of potentially informative reference points may depend on initial cue learning: strict reliance on cue-based predictions may result in less attentive processing of these reference points, thus impeding the upregulation of response gain that would prompt flexible model adaptation. Overall, the present results highlight the role of checkpoints as model-compliant, informative reference points and stimulate important research questions about their processing as a function of learning and uncertainty.
In binocular rivalry, presentation of different images to the separate eyes leads to conscious perception alternating between the two possible interpretations every few seconds. During perceptual transitions, a stimulus emerging into dominance can spread in a wave-like manner across the visual field. These traveling waves of rivalry dominance have been successfully related to the cortical magnification properties and functional activity of early visual areas, including the primary visual cortex (V1). Curiously, however, these traveling waves undergo a delay when passing from one hemifield to another. In the current study, we used diffusion tensor imaging (DTI) to investigate whether the strength of interhemispheric connections between the left and right visual cortex might be related to the delay of traveling waves across hemifields. We measured the delay in traveling wave times (ΔTWT) in 19 participants and repeated this test 6 weeks later to evaluate the reliability of our behavioral measures. We found large interindividual variability but also good test–retest reliability for individual measures of ΔTWT. Using DTI in combination with fiber tractography, we identified parts of the corpus callosum connecting functionally defined visual areas V1–V3. We found that individual differences in ΔTWT were reliably predicted by the diffusion properties of transcallosal fibers connecting left and right V1, but observed no such effect for neighboring transcallosal visual fibers connecting V2 and V3. Our results demonstrate that the anatomical characteristics of topographically specific transcallosal connections predict the individual delay of interhemispheric traveling waves, providing further evidence that V1 is an important site for neural processes underlying binocular rivalry.
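Both analyses named above, test-retest reliability of ΔTWT and the prediction of ΔTWT from diffusion properties, rest on correlating per-participant measures. A minimal sketch with hypothetical numbers (not the study's data; the variable names and effect sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 19  # number of participants, as in the study

# Hypothetical per-participant measures: traveling-wave delay (ms) at test
# and retest, and a toy diffusion metric (e.g. fractional anisotropy) of
# transcallosal V1 fibers that co-varies with the delay.
twt_test = rng.normal(120, 30, n)
twt_retest = twt_test + rng.normal(0, 10, n)          # reliable measure
fa_v1 = 0.6 - 0.001 * twt_test + rng.normal(0, 0.01, n)

def pearson_r(x, y):
    # Pearson correlation coefficient, underlying both the reliability
    # estimate and the structure-behavior prediction
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

r_rel = pearson_r(twt_test, twt_retest)
r_pred = pearson_r(twt_test, fa_v1)
print(f"test-retest r = {r_rel:.2f}")
print(f"FA(V1) vs dTWT r = {r_pred:.2f}")
```

With these toy parameters the test-retest correlation is high and the diffusion metric correlates negatively with the delay, mimicking the qualitative pattern reported, stronger interhemispheric connectivity going with shorter delays.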
In this study, it is demonstrated that moving sounds affect the direction in which visual stimuli are seen to move. In the main experiment, sounds were presented consecutively at four speaker locations, inducing leftward or rightward auditory apparent motion. On the path of the auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that participants perceived ambiguous visual apparent motion stimuli (equally likely to be perceived as moving leftward or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. In the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that, when the visual motion direction is insufficiently determinate, auditory motion can capture the visual motion percept without affecting eye movements.