Frontal areas of the mammalian cortex are thought to be important for cognitive control and complex behaviour. These areas have been studied mostly in humans, non-human primates and rodents. In this article, we present a quantitative characterization of the response properties of a sound-responsive frontal area in the brain of Carollia perspicillata, the frontal auditory field (FAF). Bats are highly vocal animals, and they constitute an important experimental model for studying the auditory system. We combined electrophysiology experiments and computational simulations to compare the responses of auditory neurons in the bat FAF and auditory cortex (AC) to simple sounds (pure tones). Anatomical studies have shown that the AC provides feedforward inputs to the FAF. Our results show that bat FAF neurons are responsive to sounds; however, compared to AC neurons, they exhibited sparser, less temporally precise spiking and longer-lasting responses. Based on the results of an integrate-and-fire neuronal model, we suggest that slow, subthreshold synaptic dynamics can account for the activity pattern of neurons in the FAF. These properties reflect the general function of the frontal cortex and likely result from its connections with multiple brain regions, including cortico-cortical projections from the AC to the FAF.
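As a rough illustration of the kind of integrate-and-fire comparison described in this abstract, the sketch below drives a leaky integrate-and-fire neuron through an exponential synapse whose time constant is either fast (AC-like) or slow (FAF-like). All parameter values, the stimulus timing and the input rates are illustrative assumptions, not the published model settings.

```python
import numpy as np

def simulate_lif(tau_syn_ms, t_stop_ms=300.0, dt=0.1, seed=0):
    """Leaky integrate-and-fire neuron driven through an exponential synapse.

    A brief 'tone-evoked' input burst is filtered by the synapse; slower
    synaptic time constants (tau_syn_ms) smear the input, producing later,
    sparser and longer-lasting spiking (all parameters are illustrative
    assumptions, not fitted values from the study).
    """
    rng = np.random.default_rng(seed)
    n = int(t_stop_ms / dt)
    tau_m, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0  # ms, mV
    v, i_syn = v_rest, 0.0
    spikes = []
    for k in range(n):
        t = k * dt
        # input spike train: high rate during a 50-ms "tone", low otherwise
        rate_hz = 400.0 if 50.0 <= t < 100.0 else 20.0
        presyn_spike = rng.random() < rate_hz * dt * 1e-3
        i_syn += -i_syn * dt / tau_syn_ms + (1.0 if presyn_spike else 0.0)
        # charge per input spike is normalised so only the time course differs
        v += dt / tau_m * (v_rest - v) + 15.0 * i_syn * dt / tau_syn_ms
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return np.asarray(spikes)

fast = simulate_lif(tau_syn_ms=5.0)    # AC-like: fast synaptic dynamics
slow = simulate_lif(tau_syn_ms=60.0)   # FAF-like: slow subthreshold dynamics
print(f"fast synapse: {fast.size} spikes, first at {fast[0] if fast.size else None} ms")
print(f"slow synapse: {slow.size} spikes, first at {slow[0] if slow.size else None} ms")
```

With the slow synapse the same input produces delayed, less precisely timed spikes that outlast the stimulus, in line with the FAF response pattern described above.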
Sound discrimination is essential in many species for communicating and foraging. Bats, for example, use sounds for echolocation and communication. In the bat auditory cortex there are neurons that process both sound categories, but how these neurons respond to acoustic transitions, that is, echolocation streams followed by a communication sound, remains unknown. Here, we show that the acoustic context, a leading sound sequence followed by a target sound, changes neuronal discriminability of echolocation versus communication calls in the cortex of awake bats of both sexes. Nonselective neurons that fire equally well to both echolocation and communication calls in the absence of context become category selective when leading context is present. Conversely, neurons that prefer communication sounds in the absence of context become nonselective when context is added. The presence of context leads to an overall response suppression, but the strength of this suppression is stimulus specific. Suppression is strongest when context and target sounds belong to the same category, e.g., echolocation followed by echolocation. A neuron model of stimulus-specific adaptation replicated our results in silico. The model predicts selectivity to communication and echolocation sounds in the inputs arriving to the auditory cortex, as well as two forms of adaptation: presynaptic frequency-specific adaptation acting on cortical inputs and stimulus-unspecific postsynaptic adaptation. In addition, the model predicts that context effects can last up to 1.5 s after context offset and that synaptic inputs tuned to low-frequency sounds (communication signals) have the shortest decay constant of presynaptic adaptation. SIGNIFICANCE STATEMENT: We studied cortical responses to isolated calls and call mixtures in awake bats and show that (1) two neuronal populations coexist in the bat cortex, including neurons that discriminate social from echolocation sounds well and neurons that are equally driven by these two ethologically different sound types; (2) acoustic context (i.e., other natural sounds preceding the target sound) affects natural sound selectivity in a manner that could not be predicted based on responses to isolated sounds; and (3) a computational model similar to those used for explaining stimulus-specific adaptation in rodents can account for the responses observed in the bat cortex to natural sounds. This model depends on segregated feedforward inputs, synaptic depression, and postsynaptic neuronal adaptation.
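The two adaptation stages named in this abstract (frequency-specific presynaptic depression on segregated input channels, plus stimulus-unspecific postsynaptic adaptation) can be caricatured with a toy rate model. The sketch below is illustrative only: the channel layout, depression factors and time constants are assumptions, not the fitted parameters of the published model.

```python
import numpy as np

def context_response(sequence, tau_pre=(0.8, 1.5), tau_post=0.5, dt=0.01):
    """Toy two-channel adaptation model.

    `sequence` is a list of (time_s, channel) sound events; channel 0 stands
    for a low-frequency/communication input and channel 1 for a
    high-frequency/echolocation input.  Each channel has its own presynaptic
    resource that depresses when that channel is driven (frequency-specific
    adaptation); a single postsynaptic variable suppresses responses
    regardless of channel (stimulus-unspecific adaptation).
    """
    t_stop = max(t for t, _ in sequence) + 1.0
    x = np.ones(2)          # presynaptic resources per input channel
    a = 0.0                 # postsynaptic adaptation
    events = {round(t / dt): ch for t, ch in sequence}
    responses = []
    for k in range(int(t_stop / dt)):
        x += (1.0 - x) * dt / np.asarray(tau_pre)   # resource recovery
        a += -a * dt / tau_post                     # adaptation decay
        if k in events:
            ch = events[k]
            r = max(x[ch] - a, 0.0)                 # evoked response
            responses.append((k * dt, ch, r))
            x[ch] *= 0.4                            # channel-specific depression
            a += 0.3 * r                            # unspecific adaptation
    return responses

# target echolocation sound (channel 1) preceded by an echolocation context
same = context_response([(0.1, 1), (0.2, 1), (0.3, 1), (0.6, 1)])
# same target preceded by a communication context (channel 0)
cross = context_response([(0.1, 0), (0.2, 0), (0.3, 0), (0.6, 1)])
print("target response, same-category context :", round(same[-1][-1], 3))
print("target response, cross-category context:", round(cross[-1][-1], 3))
```

Running the sketch gives a smaller target response when context and target belong to the same category, reproducing in miniature the stimulus-specific suppression described above.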
The mammalian frontal and auditory cortices are important for vocal behavior. Here, using local-field potential recordings, we demonstrate that the timing and spatial patterns of oscillations in the fronto-auditory network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominant top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depend on the behavioral role of the vocalization and on the timing relative to vocal onset. We observed the emergence of predominant bottom-up (auditory-to-frontal) information transfer during the post-vocal period specifically after echolocation pulse emission, which produces self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to sounds in the auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
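The directionality claims above rest on transfer entropy. The sketch below is a didactic plug-in estimator with history length 1 on coarsely discretised signals; the embedding, binning and bias correction of the actual analysis are not reproduced, and the synthetic "frontal drives auditory" data are an assumption made only for the demonstration.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=4):
    """Transfer entropy TE(source -> target) in bits, history length 1.

    Both signals are discretised into `bins` amplitude bins; TE measures how
    much the past of `source` reduces uncertainty about the next `target`
    sample beyond what the target's own past already explains.
    """
    s = np.digitize(source, np.quantile(source, np.linspace(0, 1, bins + 1)[1:-1]))
    t = np.digitize(target, np.quantile(target, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(t[1:], t[:-1], s[:-1]))
    pairs_tt = Counter(zip(t[1:], t[:-1]))
    pairs_ts = Counter(zip(t[:-1], s[:-1]))
    singles = Counter(t[:-1])
    n = len(t) - 1
    te = 0.0
    for (t1, t0, s0), c in triples.items():
        # p(t1 | t0, s0) / p(t1 | t0), weighted by the joint probability
        te += (c / n) * np.log2(
            (c / pairs_ts[(t0, s0)]) / (pairs_tt[(t1, t0)] / singles[t0])
        )
    return te

rng = np.random.default_rng(1)
frontal = rng.standard_normal(5000)
auditory = np.roll(frontal, 1) + 0.5 * rng.standard_normal(5000)  # frontal's past drives auditory
print("TE frontal -> auditory:", round(transfer_entropy(frontal, auditory), 3))
print("TE auditory -> frontal:", round(transfer_entropy(auditory, frontal), 3))
```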
Most mammals rely on the extraction of acoustic information from the environment in order to survive. However, the mechanisms that support sound representation in auditory neural networks involving sensory and association brain areas remain underexplored. In this study, we address the functional connectivity between an auditory region in frontal cortex (the frontal auditory field, FAF) and the auditory cortex (AC) in the bat Carollia perspicillata. The AC is a classic sensory area central for the processing of acoustic information. On the other hand, the FAF belongs to the frontal lobe, a brain region involved in the integration of sensory inputs, the modulation of cognitive states, and the coordination of behavioral outputs. The FAF-AC network was examined in terms of oscillatory coherence (local-field potentials, LFPs), and within an information-theoretic framework linking FAF and AC spiking activity. We show that in the absence of acoustic stimulation, simultaneously recorded LFPs from FAF and AC are coherent in low frequencies (1–12 Hz). This “default” coupling was strongest in deep AC layers and was unaltered by acoustic stimulation. However, presenting auditory stimuli did trigger the emergence of coherent auditory-evoked gamma-band activity (>25 Hz) between the FAF and AC. In terms of spiking, our results suggest that FAF and AC engage in distinct coding strategies for representing artificial and natural sounds. Taken together, our findings shed light on the neuronal coding strategies and functional coupling mechanisms that enable sound representation at the network level in the mammalian brain.
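A minimal sketch of the coherence measurement, assuming nothing about the actual recordings: two synthetic LFP-like traces share a slow component and independent noise, and scipy.signal.coherence quantifies their coupling per frequency band. Sampling rate, segment length and the shared delta/theta signal are illustrative choices.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                      # Hz, assumed LFP sampling rate
t = np.arange(0, 60, 1 / fs)   # 60 s of "spontaneous" activity
rng = np.random.default_rng(2)

# shared slow (delta/theta-range) component plus independent noise,
# mimicking low-frequency FAF-AC coupling in the absence of stimulation
shared_slow = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)
lfp_faf = shared_slow + rng.standard_normal(t.size)
lfp_ac = shared_slow + rng.standard_normal(t.size)

f, cxy = coherence(lfp_faf, lfp_ac, fs=fs, nperseg=2048)
low = (f >= 1) & (f <= 12)
gamma = (f >= 25) & (f <= 100)
print(f"mean coherence 1-12 Hz  : {cxy[low].mean():.2f}")
print(f"mean coherence 25-100 Hz: {cxy[gamma].mean():.2f}")
```

In this toy case only the low-frequency band is coherent; evoked gamma-band coherence, as reported above, would require adding a stimulus-locked fast component to both traces.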
Neural oscillations are at the core of important computations in the mammalian brain. Interactions between oscillatory activities in different frequency bands, such as delta (1–4 Hz), theta (4–8 Hz) or gamma (>30 Hz), are a powerful mechanism for binding fundamentally distinct spatiotemporal scales of neural processing. Phase-amplitude coupling (PAC) is one such plausible and well-described interaction, but much is yet to be uncovered regarding how PAC dynamics contribute to sensory representations. In particular, although PAC appears to have a major role in audition, the characteristics of coupling profiles in sensory and integration (i.e., frontal) cortical areas remain obscure. Here, we address this question by studying PAC dynamics in the frontal auditory field (FAF; an auditory area in the bat frontal cortex) and the auditory cortex (AC) of the bat Carollia perspicillata. By means of simultaneous electrophysiological recordings of local-field potentials (LFPs) in frontal and auditory cortices, we show that the amplitude of gamma-band activity couples with the phase of low-frequency LFPs in both structures. Our results demonstrate that the coupling in the FAF occurs most prominently between delta and high-gamma frequencies (1–4/75–100 Hz), whereas in the AC the coupling is strongest in the delta-theta/low-gamma (2–8/25–55 Hz) range. We argue that distinct PAC profiles may represent different mechanisms for neuronal processing in frontal and auditory cortices, and might complement oscillatory interactions for sensory processing in the fronto-auditory network.
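To make the coupling measure concrete, the sketch below computes a mean-vector-length phase-amplitude index (one common PAC estimator; the study may use a different index) on a synthetic LFP in which high-gamma amplitude is locked to delta phase. Band edges, filter order and signal construction are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pac_mvl(lfp, fs, phase_band, amp_band):
    """Mean-vector-length phase-amplitude coupling index.

    The LFP is band-pass filtered twice: the phase of the low band and the
    envelope of the high band are combined into a single complex mean whose
    magnitude grows with coupling strength.
    """
    def bandpass(x, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    phase = np.angle(hilbert(bandpass(lfp, *phase_band)))
    amp = np.abs(hilbert(bandpass(lfp, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 1000
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(3)
slow = np.sin(2 * np.pi * 3 * t)                    # delta-band carrier
gamma = (1 + slow) * np.sin(2 * np.pi * 80 * t)     # high-gamma amplitude locked to delta phase
lfp = slow + 0.3 * gamma + 0.5 * rng.standard_normal(t.size)

print("delta/high-gamma MVL:", round(pac_mvl(lfp, fs, (1, 4), (75, 100)), 3))
print("theta/low-gamma MVL :", round(pac_mvl(lfp, fs, (4, 8), (25, 55)), 3))
```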
Identifying unexpected acoustic inputs, which allows animals to react appropriately to new situations, is of major importance. Neural deviance detection describes a change in neural response strength to a stimulus caused solely by the stimulus' probability of occurrence. In the present study, we searched for correlates of deviance detection in auditory brainstem responses obtained in anaesthetised bats (Carollia perspicillata). In an oddball paradigm, we used two pure tone stimuli that represented the main frequencies used by the animal during echolocation (60 kHz) and communication (20 kHz). For both stimuli, we could demonstrate significant differences in response strength between deviant and standard responses in slow and fast components of the auditory brainstem response. The data suggest the presence of correlates of deviance detection in brain stations below the inferior colliculus (IC), at the level of the cochlear nucleus and lateral lemniscus. Additionally, our results suggest that deviance detection is mainly driven by repetition suppression in the echolocation frequency band, while in the communication band a deviant-related enhancement of the response plays a more important role. This finding suggests a contextual dependence of the mechanisms underlying subcortical deviance detection. The present study demonstrates the value of auditory brainstem responses for studying deviance detection and suggests that auditory specialists, such as bats, use different frequency-specific strategies to ensure an appropriate perception of unexpected sounds.
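The oddball logic can be sketched in a few lines: generate a two-tone sequence with a rare deviant and contrast mean deviant versus standard responses with a normalised index. The probabilities, the number of trials and the toy single-trial amplitudes below are assumptions for illustration, not recorded ABR data.

```python
import numpy as np

def oddball_sequence(n_trials=400, deviant_prob=0.1, seed=4):
    """Oddball sequence of the two pure tones named above (60 and 20 kHz).

    In the actual paradigm the roles of standard and deviant are swapped
    between blocks; here 60 kHz is simply treated as the rare stimulus.
    """
    rng = np.random.default_rng(seed)
    return np.where(rng.random(n_trials) < deviant_prob, 60, 20)  # kHz labels

def deviance_index(resp_deviant, resp_standard):
    """Normalised contrast between mean deviant and standard responses."""
    d, s = np.mean(resp_deviant), np.mean(resp_standard)
    return (d - s) / (d + s)

seq = oddball_sequence()
rng = np.random.default_rng(5)
# toy single-trial "ABR amplitudes": deviants drawn slightly larger on average
amps = np.where(seq == 60, rng.normal(1.2, 0.2, seq.size), rng.normal(1.0, 0.2, seq.size))
print("deviance index:", round(deviance_index(amps[seq == 60], amps[seq == 20]), 3))
```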
Substantial progress in the field of neuroscience has been made from anaesthetized preparations. Ketamine is one of the most commonly used drugs in electrophysiology studies, but how ketamine affects neuronal responses is poorly understood. Here, we used in vivo electrophysiology and computational modelling to study how the auditory cortex of bats responds to vocalisations under anaesthesia and in wakefulness. In wakefulness, acoustic context increases neuronal discrimination of natural sounds. Neuron models predicted that ketamine affects the contextual discrimination of sounds regardless of the type of context heard by the animals (echolocation or communication sounds). However, empirical evidence showed that the predicted effect of ketamine occurs only if the acoustic context consists of low-pitched sounds (e.g., communication calls in bats). Using the empirical data, we updated the naïve models to show that differential effects of ketamine on cortical responses can be mediated by unbalanced changes in the firing rate of feedforward inputs to the cortex and by changes in the depression of thalamo-cortical synaptic receptors. Combined, our findings obtained in vivo and in silico reveal the effects and mechanisms by which ketamine affects cortical responses to vocalisations.
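The two candidate mechanisms named above (altered input firing rates and altered synaptic depression) can be caricatured with a depressing, Tsodyks-Markram-style synapse: lowering the input rate and slowing resource recovery both reduce the mean drive delivered to the cortical neuron. All values below are illustrative assumptions, not fitted ketamine parameters.

```python
import numpy as np

def depressing_synapse_drive(rate_hz, tau_rec_s, u=0.5, dur_s=2.0, dt=1e-3, seed=6):
    """Mean drive delivered by a depressing synapse under Poisson input.

    Each presynaptic spike consumes a fraction `u` of the available resource
    `x`; `x` recovers with time constant `tau_rec_s`.  Changing `rate_hz`
    and `tau_rec_s` crudely mimics the two manipulations suggested for the
    ketamine condition.
    """
    rng = np.random.default_rng(seed)
    x, drive = 1.0, 0.0
    for _ in range(int(dur_s / dt)):
        x += (1.0 - x) * dt / tau_rec_s
        if rng.random() < rate_hz * dt:
            drive += u * x
            x -= u * x
    return drive / dur_s            # delivered resource per second

awake = depressing_synapse_drive(rate_hz=40.0, tau_rec_s=0.3)
ketamine = depressing_synapse_drive(rate_hz=25.0, tau_rec_s=0.8)
print(f"mean synaptic drive, awake-like   : {awake:.2f}")
print(f"mean synaptic drive, ketamine-like: {ketamine:.2f}")
```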
The auditory midbrain (inferior colliculus, IC) plays an important role in sound processing, acting as a hub for acoustic information extraction and for the implementation of fast audio-motor behaviors. IC neurons are topographically organized according to their sound frequency preference: dorsal IC regions encode low frequencies while ventral areas respond best to high frequencies, a type of sensory map known as tonotopy. Tonotopic maps have been studied extensively using artificial stimuli (pure tones), but our knowledge of how these maps represent information about sequences of natural, spectro-temporally rich sounds is sparse. We studied this question by conducting simultaneous extracellular recordings across IC depths in awake bats (Carollia perspicillata) that listened to sequences of natural communication and echolocation sounds. The hypothesis was that information about these two types of sound streams is represented at different IC depths, since they exhibit large differences in spectral composition, i.e., echolocation covers the high-frequency portion of the bat soundscape (>45 kHz), while communication sounds are broadband and carry most power at low frequencies (20–25 kHz). Our results showed that mutual information between neuronal responses and acoustic stimuli, as well as response redundancy in pairs of simultaneously recorded neurons, increases exponentially with IC depth. The latter occurs regardless of the sound type presented to the bats (echolocation or communication). Taken together, our results indicate the existence of mutual information and redundancy maps at the midbrain level which cannot be predicted based on the frequency composition of natural sounds and classic neuronal tuning curves.
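The two information quantities used above can be sketched with a plug-in estimator: mutual information between discrete stimulus labels and discretised responses, and pair redundancy as I(s;r1) + I(s;r2) - I(s;r1,r2). The toy stimulus-driven responses and the absence of bias correction are simplifications relative to the actual analysis.

```python
import numpy as np
from collections import Counter

def mutual_information(stimuli, responses):
    """Plug-in mutual information (bits) between discrete stimulus labels
    and (possibly tuple-valued) discrete responses."""
    n = len(stimuli)
    p_s = Counter(stimuli)
    p_r = Counter(responses)
    p_sr = Counter(zip(stimuli, responses))
    return sum(
        (c / n) * np.log2(c * n / (p_s[s] * p_r[r]))
        for (s, r), c in p_sr.items()
    )

rng = np.random.default_rng(7)
stim = rng.integers(0, 2, 2000)          # 0 = communication trial, 1 = echolocation trial
r1 = stim + rng.integers(0, 2, 2000)     # two toy neurons, partly stimulus-driven
r2 = stim + rng.integers(0, 2, 2000)
joint = list(zip(r1, r2))

i1 = mutual_information(list(stim), list(r1))
i2 = mutual_information(list(stim), list(r2))
i12 = mutual_information(list(stim), joint)
print(f"I(s;r1) = {i1:.3f} bits, I(s;r2) = {i2:.3f} bits")
print(f"pair redundancy = I(s;r1) + I(s;r2) - I(s;r1,r2) = {i1 + i2 - i12:.3f} bits")
```

Positive redundancy indicates that the two neurons carry overlapping stimulus information, the quantity reported above to grow with IC depth.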
Deviance detection describes an increase of neural response strength caused by a stimulus with a low probability of occurrence. This ubiquitous phenomenon has been reported for multiple species, from subthalamic areas to the auditory cortex. While cortical deviance detection has been well characterised by a range of studies covering neural activity at the population level (mismatch negativity, MMN) as well as at the cellular level (stimulus-specific adaptation, SSA), subcortical deviance detection has been studied mainly at the cellular level in the form of SSA. Here, we aim to bridge this gap by using noninvasively recorded auditory brainstem responses (ABRs) to investigate deviance detection at the population level in the lower stations of the auditory system of a hearing specialist: the bat Carollia perspicillata. Our present approach uses behaviourally relevant vocalisation stimuli that are closer to the animals' natural soundscape than the artificial stimuli used in previous studies focussed on subcortical areas. We show that deviance detection in ABRs is significantly stronger for echolocation pulses than for social communication calls or artificial sounds, indicating that subthalamic deviance detection depends on the behavioural meaning of a stimulus. Additionally, complex physical sound features such as frequency and amplitude modulation affected the strength of deviance detection in the ABR. In summary, our results suggest that, at the population level, the bat brain can detect different types of deviants as early as the brainstem. This shows that subthalamic brain structures exhibit more advanced forms of deviance detection than previously known.