MPI für empirische Ästhetik
Speech perception is mediated by both left and right auditory cortices but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex (AAC). We presented short acoustic transients to noninvasively estimate the dynamical properties of multiple functional regions along the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with evoked activity composed of dynamics in the theta (around 4–8 Hz) and beta–gamma (around 15–40 Hz) ranges. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (6/40 Hz) activity prevailing in the left. This asymmetry is also present during syllable presentation, but the evoked responses in AAC are more heterogeneous, with the co-occurrence of alpha (around 10 Hz) and gamma (>25 Hz) activity bilaterally. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
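The bimodal spectral profile described above can be illustrated with a toy simulation: build an "evoked" response with one damped component in the theta range and one in the beta–gamma range, then recover both peaks from its amplitude spectrum. A minimal sketch, assuming illustrative parameters (sampling rate, component frequencies, decay constants) that are not taken from the study:

```python
import numpy as np

fs = 1000                         # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)     # 1 s of simulated evoked activity

# Damped oscillations standing in for theta (~6 Hz) and beta-gamma (~25 Hz)
# dynamics; amplitudes and decays are invented for illustration.
response = (np.sin(2 * np.pi * 6 * t) * np.exp(-3 * t)
            + 0.6 * np.sin(2 * np.pi * 25 * t) * np.exp(-5 * t))

spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(len(response), 1 / fs)

# Locate the strongest peak within each band of interest
theta_band = (freqs >= 4) & (freqs <= 8)
gamma_band = (freqs >= 15) & (freqs <= 40)
theta_peak = freqs[theta_band][np.argmax(spectrum[theta_band])]
gamma_peak = freqs[gamma_band][np.argmax(spectrum[gamma_band])]
print(theta_peak, gamma_peak)     # two peaks, one per band
```

With a 1 s window the frequency resolution is 1 Hz, so the two peaks land in the bins nearest the simulated component frequencies; real intracranial spectra would of course be noisier and broader.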
A body of research demonstrates convincingly a role for synchronization of auditory cortex to rhythmic structure in sounds including speech and music. Some studies hypothesize that an oscillator in auditory cortex could underlie important temporal processes such as segmentation and prediction. An important critique of these findings raises the plausible concern that what is measured is perhaps not an oscillator but is instead a sequence of evoked responses. The two distinct mechanisms could look very similar in the case of rhythmic input, but an oscillator might better provide the computational roles mentioned above (i.e., segmentation and prediction). We advance an approach to adjudicate between the two models: analyzing the phase lag between stimulus and neural signal across different stimulation rates. We ran numerical simulations of evoked and oscillatory computational models, showing that in the evoked case, phase lag is heavily rate-dependent, while the oscillatory model displays marked phase concentration across stimulation rates. Next, we compared these model predictions with magnetoencephalography data recorded while participants listened to music of varying note rates. Our results show that the phase concentration of the experimental data is more in line with the oscillatory model than with the evoked model. This finding supports an auditory cortical signal that (i) contains components of both bottom-up evoked responses and internal oscillatory synchronization whose strengths are weighted by their appropriateness for particular stimulus types and (ii) cannot be explained by evoked responses alone.
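The phase-concentration logic can be sketched in a few lines: estimate a stimulus-to-response phase lag at each stimulation rate, then quantify how concentrated those lags are with the circular mean resultant length (1 = identical lag at every rate, 0 = uniformly scattered lags). The toy models below are illustrative stand-ins, not the paper's simulations: the "evoked" model has a fixed latency, which maps to a rate-dependent phase lag, while the "oscillator" model entrains at a near-constant lag.

```python
import numpy as np

def resultant_length(phases):
    """Circular concentration of a set of phase angles (radians)."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rates = np.array([1.0, 2.0, 4.0, 8.0])   # stimulation rates in Hz (assumed)
latency = 0.05                            # assumed fixed evoked latency, 50 ms

# Evoked model: fixed latency -> phase lag = 2*pi*rate*latency, so the
# lag spreads out as rate changes.
evoked_lags = 2 * np.pi * rates * latency

# Oscillator model: roughly constant phase lag across rates, plus jitter.
rng = np.random.default_rng(0)
oscillator_lags = 0.5 + rng.normal(0, 0.05, size=rates.size)

print(resultant_length(evoked_lags))      # lower: lag varies with rate
print(resultant_length(oscillator_lags))  # higher: near-constant lag
```

In this caricature the oscillator's resultant length sits near 1 while the evoked model's is markedly lower, mirroring the qualitative prediction the abstract describes.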
Previous magnetoencephalography (MEG) studies have revealed gamma-band activity at sensors over parietal and fronto-temporal cortex during the delay phase of auditory spatial and non-spatial match-to-sample tasks, respectively. While this activity was interpreted as reflecting the memory maintenance of sound features, we noted that task-related activation differences might have been present already prior to the onset of the sample stimulus. The present study focused on the interval between a visual cue indicating which sound feature was to be memorized (lateralization or pitch) and sample sound presentation to test for task-related activation differences preceding stimulus encoding. MEG spectral activity was analyzed with cluster randomization tests (N = 15). Whereas there were no differences in frequencies below 40 Hz, gamma-band spectral amplitude (about 50–65 and 90–100 Hz) was higher for the lateralization than the pitch task. This activity was localized at right posterior and central sensors and present for several hundred ms after task cue offset. Activity at 50–65 Hz was also increased throughout the delay phase for the lateralization compared with the pitch task. Apparently, cortical networks related to auditory spatial processing were activated after participants had been informed about the task.
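The randomization logic behind such tests can be sketched in simplified form (omitting the clustering step over sensors and frequencies, which controls for multiple comparisons): within-subject condition differences are sign-flipped at random to build a null distribution for the observed mean difference. All data below are simulated; N = 15 matches the abstract, but the effect size is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 15

# Simulated per-subject gamma-band amplitudes (arbitrary units); the
# lateralization advantage is an assumed effect, not the study's data.
pitch = rng.normal(1.0, 0.2, n_subjects)
lateralization = pitch + rng.normal(0.15, 0.1, n_subjects)

diff = lateralization - pitch
observed = diff.mean()

# Null distribution: randomly relabel conditions within each subject,
# i.e. flip the sign of each subject's difference.
n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1, 1], size=n_subjects)
    null[i] = (signs * diff).mean()

p_value = np.mean(np.abs(null) >= abs(observed))
print(round(observed, 3), p_value)
```

The full cluster randomization procedure applies the same sign-flipping idea to cluster-level statistics rather than a single mean, but the permutation machinery is the same.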
Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on those different scales to extract behaviorally relevant information. However, this multi-scale process in the auditory system is not widely investigated in the literature, and existing models of temporal integration are mainly built upon detection or recognition tasks on a single timescale. Here we use a paradigm requiring processing on relatively ‘local’ and ‘global’ scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and uses long temporal windows to abstract global acoustic patterns. Behavioral task performance that requires processing fine-detail information does not improve with longer stimulus length, contrary to predictions of previous temporal integration models such as the multiple-looks and the spectro-temporal excitation pattern model. Moreover, the perceptual construction of putatively ‘unitary’ auditory events requires more than several hundred milliseconds. These findings support the hypothesis of a dual-scale processing likely implemented in the auditory cortex.
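The multiple-looks prediction that the abstract argues against has a simple quantitative form: if a listener optimally combines n independent, equally informative "looks" at the stimulus, combined sensitivity grows as d′·√n, so performance should keep improving with stimulus length. A minimal sketch, with an assumed per-window d′ and window counts chosen purely for illustration:

```python
import numpy as np

def multiple_looks_dprime(d_single, n_looks):
    """Combined sensitivity from n independent, equally informative looks:
    d'_combined = d'_single * sqrt(n)."""
    return d_single * np.sqrt(n_looks)

d_single = 0.8                   # assumed per-window sensitivity
for n in [1, 2, 4, 8]:           # stimulus length in number of windows
    print(n, round(multiple_looks_dprime(d_single, n), 2))
```

The abstract's behavioral result, that fine-detail performance does not improve with longer stimuli, is exactly what this monotonic √n growth fails to predict.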
In the later stages of addiction, automatized processes play a prominent role in guiding drug-seeking and drug-taking behavior. However, little is known about the neural correlates of automatized drug-taking skills and drug-related action knowledge in humans. We employed functional magnetic resonance imaging (fMRI) while smokers and non-smokers performed an orientation affordance task, where compatibility between the hand used for a behavioral response and the spatial orientation of a priming stimulus leads to shorter reaction times resulting from activation of the corresponding motor representations. While non-smokers exhibited this behavioral effect only for control objects, smokers showed the affordance effect for both control and smoking-related objects. Furthermore, smokers exhibited reduced fMRI activation for smoking-related as compared to control objects for compatible stimulus-response pairings in a sensorimotor brain network consisting of the right primary motor cortex, supplementary motor area, middle occipital gyrus, left fusiform gyrus and bilateral cingulate gyrus. In the incompatible condition, we found higher fMRI activation in smokers for smoking-related as compared to control objects in the right primary motor cortex, cingulate gyrus, and left fusiform gyrus. This suggests that the activation and performance of deeply embedded, automatized drug-taking schemata employ fewer brain resources. This might reduce the threshold for relapsing in individuals trying to abstain from smoking. In contrast, the interruption or modification of already triggered automatized action representations requires increased neural resources.