In the cochlea of the mustached bat, cochlear resonance produces extremely sharp frequency tuning to the dominant frequency of the echolocation calls, around 61 kHz. Such high frequency resolution in the cochlea is accomplished at the expense of temporal resolution because of cochlear ringing, an effect observable not only in the cochlea but also in the cochlear nucleus. In the midbrain, the duration of sounds is thought to be analyzed by duration-tuned neurons (DTNs), which are selective to both stimulus duration and frequency. We recorded from 57 DTNs in the auditory midbrain of the mustached bat to assess whether a spectral-temporal trade-off is present. Such a trade-off is known to occur when sharp tuning in the frequency domain results in poorer resolution in the time domain, and vice versa. We found that a specialized sub-population of midbrain DTNs tuned to the bat’s mechanical cochlear resonance frequency escapes the cochlear spectral-temporal trade-off. We also present evidence pointing towards an underlying neuronal inhibition that appears to be specific to the resonance frequency.
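The spectral-temporal trade-off described above can be illustrated with a simple resonator model (not taken from the paper): a filter with quality factor Q centered at f0 has bandwidth f0/Q and an impulse response whose envelope decays with time constant Q/(pi*f0), so sharper frequency tuning necessarily means longer ringing. The numbers below are illustrative, not measured cochlear values.

```python
import numpy as np

def ring_time(f0_hz, q, decay_db=20.0):
    """Time for a resonator's impulse response to decay by `decay_db`.

    The envelope of h(t) is exp(-pi * f0 * t / Q), so the decay time
    grows linearly with Q while the bandwidth f0/Q shrinks.
    """
    tau = q / (np.pi * f0_hz)  # amplitude time constant in seconds
    return tau * np.log(10.0 ** (decay_db / 20.0))

f0 = 61_000.0  # approximate mustached-bat resonance frequency (~61 kHz)
for q in (10, 100, 1000):
    bandwidth = f0 / q
    print(f"Q={q:5d}  bandwidth={bandwidth:8.1f} Hz  "
          f"ring time={ring_time(f0, q) * 1e3:6.3f} ms")
```

A hundredfold increase in Q narrows the bandwidth a hundredfold but lengthens the ringing by the same factor, which is the trade-off the resonance-tuned DTNs appear to escape.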
Precise temporal coding is necessary for proper acoustic analysis. However, at the cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bat cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats’ behavior. We show that cortical neurons tuned to target distance actually profit from the forward suppression induced by natural echolocation sequences. These neurons extract target-distance information more precisely when stimulated with natural echolocation sequences than when stimulated with isolated call-echo pairs. We conclude that forward suppression does for time-domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. In cortical processing, suppression should be seen as a mechanistic tool rather than a limiting element.
The mechanisms by which the mammalian brain copes with information from natural vocalization streams remain poorly understood. This article shows that in highly vocal animals, such as the bat species Carollia perspicillata, the spike activity of auditory cortex neurons does not track the temporal information flow carried by fast time-varying vocalization streams emitted by conspecifics. For example, leading syllables of so-called distress sequences (produced by bats subjected to duress) suppress cortical spiking to lagging syllables. Local field potentials (LFPs) recorded simultaneously with cortical spiking evoked by distress sequences carry multiplexed information, with response suppression occurring in low-frequency LFPs (i.e. 2–15 Hz) and steady-state LFPs occurring at frequencies that match the rate of energy fluctuations in the incoming sound streams (i.e. >50 Hz). Such steady-state LFPs could reflect underlying synaptic activity that does not necessarily lead to cortical spiking in response to natural fast time-varying vocal sequences.
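The multiplexing claim above rests on comparing LFP power in a low-frequency band (2–15 Hz) against a high-frequency band (>50 Hz). A minimal sketch of such a band-power comparison, using a synthetic signal rather than any real recording, might look as follows; the frequencies and amplitudes are hypothetical.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean FFT power of `signal` within the [lo, hi] Hz band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.mean(np.abs(spectrum[mask]) ** 2)

# Synthetic "LFP": a slow 5 Hz component plus a weaker 70 Hz
# steady-state component following a fast amplitude-modulation rate.
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 70 * t)

low = band_power(lfp, fs, 2, 15)     # low-frequency band from the abstract
high = band_power(lfp, fs, 50, 100)  # steady-state following band
```

In an actual analysis one would compute such band powers per trial and compare leading versus lagging syllables, but the band-splitting step itself is no more than this.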
In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differ in their spectro-temporal properties. To accomplish this aim, we analyzed multi-unit activity (MUA) elicited by simple pure tones and complex communication calls, as well as local field potential (LFP) and current source density (CSD) waveforms, at the single-layer and columnar level in the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of “call specificity” in the evoked activity. The results showed that whole laminar profiles segregated 1.8–2.6 times better across calls than single-layer activity. Laminar LFP and CSD profiles also segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were most pronounced at mid and late latencies in the granular and infragranular layers, and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.
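The multi-dimensional scaling step used to quantify call specificity can be sketched with classical (Torgerson) MDS: given a matrix of pairwise dissimilarities between response profiles, it embeds the profiles in a low-dimensional space where Euclidean distances approximate those dissimilarities. The toy "profiles" below are hypothetical placeholders, not data from the study.

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS: embed points so that Euclidean
    distances in the embedding approximate the given dissimilarities."""
    n = dissim.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (dissim ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Hypothetical response profiles for three calls: two similar, one distinct.
profiles = np.array([[1.0, 0.0],
                     [1.1, 0.1],
                     [5.0, 5.0]])
dissim = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)
embedding = classical_mds(dissim, n_dims=2)
```

In the study's terms, "segregating better" corresponds to across-call distances in such an embedding being large relative to within-call variability; laminar profiles simply yield longer dissimilarity vectors than single-layer responses.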