Deviance detection describes an increase in neural response strength caused by a stimulus with a low probability of occurrence. This ubiquitous phenomenon has been reported for multiple species, from subthalamic areas to the auditory cortex. While cortical deviance detection has been well characterised by a range of studies covering neural activity at the population level (mismatch negativity, MMN) as well as at the cellular level (stimulus-specific adaptation, SSA), subcortical deviance detection has been studied mainly at the cellular level in the form of SSA. Here, we aim to bridge this gap by using noninvasively recorded auditory brainstem responses (ABRs) to investigate deviance detection at the population level in the lower stations of the auditory system of a hearing specialist: the bat Carollia perspicillata. Our approach uses behaviourally relevant vocalisation stimuli that are closer to the animals' natural soundscape than the artificial stimuli used in previous studies of subcortical areas. We show that deviance detection in ABRs is significantly stronger for echolocation pulses than for social communication calls or artificial sounds, indicating that subthalamic deviance detection depends on the behavioural meaning of a stimulus. Additionally, complex physical sound features such as frequency and amplitude modulation affected the strength of deviance detection in the ABR. In summary, our results suggest that, at the population level, the bat brain can detect different types of deviants as early as the brainstem. This shows that subthalamic brain structures exhibit more advanced forms of deviance detection than previously known.
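Deviance-detection strength of the kind compared above is commonly quantified with a contrast index between deviant and standard responses, as in the SSA literature. The sketch below is a minimal, hypothetical illustration of such an index, not the exact metric of this study; the amplitude values are invented:

```python
import numpy as np

def deviance_index(deviant_resp, standard_resp):
    """Contrast index in [-1, 1]: positive values mean stronger responses
    to the rare (deviant) stimulus than to the common (standard) one."""
    d, s = np.mean(deviant_resp), np.mean(standard_resp)
    return (d - s) / (d + s)

# Hypothetical ABR amplitudes (arbitrary units) for the same sound
# presented as deviant vs. as standard in an oddball sequence.
deviant = [1.8, 2.1, 1.9, 2.0]
standard = [1.1, 0.9, 1.0, 1.0]
print(round(deviance_index(deviant, standard), 2))  # 0.32
```

An index near 0 indicates no deviance detection; comparing indices across stimulus classes (echolocation pulses vs. communication calls) is one way to express the effect reported above.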
In natural environments, background noise can degrade the integrity of acoustic signals, posing a problem for animals that rely on their vocalizations for communication and navigation. A simple behavioral strategy to combat acoustic interference would be to restrict call emissions to periods of low-amplitude or no noise. Using audio playback and computational tools for the automated detection of over 2.5 million vocalizations from groups of freely vocalizing bats, we show that bats (Carollia perspicillata) can dynamically adapt the timing of their calls to avoid acoustic jamming in both predictably and unpredictably patterned noise. This study demonstrates that bats spontaneously seek out temporal windows of opportunity for vocalizing in acoustically crowded environments, providing a mechanism for efficient echolocation and communication in cluttered acoustic landscapes.
One Sentence Summary: Bats avoid acoustic interference by rapidly adjusting the timing of vocalizations to the temporal pattern of varying noise.
The auditory midbrain (inferior colliculus, IC) plays an important role in sound processing, acting as a hub for acoustic information extraction and for the implementation of fast audio-motor behaviors. IC neurons are topographically organized according to their sound frequency preference: dorsal IC regions encode low frequencies while ventral areas respond best to high frequencies, a type of sensory map known as tonotopy. Tonotopic maps have been studied extensively using artificial stimuli (pure tones), but our knowledge of how these maps represent information about sequences of natural, spectro-temporally rich sounds is sparse. We studied this question by conducting simultaneous extracellular recordings across IC depths in awake bats (Carollia perspicillata) that listened to sequences of natural communication and echolocation sounds. Our hypothesis was that information about these two types of sound streams is represented at different IC depths, since they exhibit large differences in spectral composition: echolocation covers the high-frequency portion of the bat soundscape (> 45 kHz), while communication sounds are broadband and carry most power at low frequencies (20–25 kHz). Our results showed that mutual information between neuronal responses and acoustic stimuli, as well as response redundancy in pairs of simultaneously recorded neurons, increases exponentially with IC depth. The latter holds regardless of the sound type presented to the bats (echolocation or communication). Taken together, our results indicate the existence of mutual information and redundancy maps at the midbrain level whose responses cannot be predicted from the frequency composition of natural sounds and classic neuronal tuning curves.
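Mutual information between a discrete stimulus set and neuronal responses can be illustrated with a simple plug-in estimator over per-trial spike counts. This is a minimal sketch of the quantity itself, not the study's estimator (which would involve binning choices and bias correction); the toy data are invented:

```python
import numpy as np

def mutual_information(stim_ids, spike_counts):
    """Plug-in estimate of I(stimulus; response) in bits from paired
    per-trial stimulus labels and discretised spike counts."""
    stim_ids = np.asarray(stim_ids)
    spike_counts = np.asarray(spike_counts)
    n_bits = 0.0
    for s in np.unique(stim_ids):
        p_s = np.mean(stim_ids == s)
        for r in np.unique(spike_counts):
            p_r = np.mean(spike_counts == r)
            p_sr = np.mean((stim_ids == s) & (spike_counts == r))
            if p_sr > 0:
                n_bits += p_sr * np.log2(p_sr / (p_s * p_r))
    return n_bits

# Toy case: each of three stimuli evokes a unique spike count,
# so the response carries the full stimulus entropy, log2(3) bits.
stims = [0, 0, 1, 1, 2, 2]
counts = [5, 5, 1, 1, 9, 9]
print(round(mutual_information(stims, counts), 3))  # 1.585
```

Response redundancy in neuron pairs is typically built from the same ingredient, comparing the information in the joint response against the sum of the single-neuron informations.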
Vocal communication is essential to coordinate social interactions in mammals, and it requires fine discrimination of communication sounds. Auditory neurons can exhibit selectivity for specific calls, but how this selectivity is affected by preceding sounds is still debated. We tackled this question using ethologically relevant vocalizations in a highly vocal mammalian species: Seba’s short-tailed bat. We show that cortical neurons present several degrees of selectivity for echolocation and distress calls. Embedding vocalizations within natural acoustic streams leads to stimulus-specific suppression of neuronal responses that changes sound selectivity in disparate manners: selectivity increases in neurons with poor discriminability in silence and decreases in neurons that are selective in silent settings. A computational model indicates that the observed effects arise from two forms of adaptation: presynaptic frequency-specific adaptation acting on cortical inputs and stimulus-unspecific postsynaptic adaptation. These results shed light on how acoustic context modulates natural sound discriminability in the mammalian cortex.
Although new advances in neuroscience allow the study of vocal communication in awake animals, much of our understanding of vocalization processing comes from anaesthetized preparations. Understanding how anaesthetics affect neuronal responses is therefore of paramount importance. Here, we used electrophysiological recordings and computational modelling to study how the auditory cortex of bats responds to vocalizations under anaesthesia and in wakefulness. We found that multifunctional neurons that process echolocation and communication sounds were affected by ketamine anaesthesia in a manner that could not be predicted from known anaesthetic effects. In wakefulness, acoustic contexts (preceding echolocation or communication sequences) led to stimulus-specific suppression of responses to lagging sounds, accentuating neuronal responses to sound transitions. Under anaesthesia, however, communication contexts (but not echolocation contexts) led to a global suppression of responses to lagging sounds. This asymmetric effect depended on the frequency composition of the contexts, not on their temporal patterns. We constructed a neuron model that replicates the data obtained in vivo. In the model, anaesthesia modulates spiking activity in a channel-specific manner, decreasing the responses of cortical inputs tuned to high-frequency sounds and increasing adaptation in the respective cortical synapses. Combined, our in vivo and in silico findings reveal that ketamine anaesthesia does not uniformly reduce neurons’ responsiveness to low- and high-frequency sounds. The effect depends on combined mechanisms that unbalance cortical inputs and ultimately affect how auditory cortex neurons respond to natural sounds in anaesthetized preparations.
Communication sounds are ubiquitous in the animal kingdom, where they advertise physiological states and/or socio-contextual scenarios. Distress sounds, for example, are typically uttered in distressful scenarios such as agonistic interactions. Here, we report the occurrence of superfast temporal periodicities in distress calls emitted by bats (Carollia perspicillata). Distress vocalizations uttered by this bat species are temporally modulated at frequencies close to 1.7 kHz, that is, ∼17 times faster than the modulation rates observed in human screams. Fast temporal periodicities are represented in the bats’ brain by means of frequency-following responses, and temporally periodic sounds are more effective in boosting the heart rate of awake bats than their demodulated versions. Altogether, our data suggest that bats, an animal group classically regarded as ultrasonic, can exploit the low-frequency portion of the soundscape during distress calling to create spectro-temporally complex, arousing sounds.
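A temporal periodicity like the ~1.7-kHz modulation described above can be recovered from a call's amplitude envelope. The sketch below does this for a synthetic AM tone; the 192-kHz sampling rate, 23-kHz carrier, and modulation depth are hypothetical stand-ins (real distress calls are spectrally far richer):

```python
import numpy as np
from scipy.signal import hilbert

fs = 192_000                        # hypothetical sampling rate (Hz)
t = np.arange(0, 0.05, 1 / fs)      # 50-ms synthetic "call"
carrier_hz, am_hz = 23_000, 1_700   # hypothetical carrier and AM rates
call = (1 + 0.8 * np.sin(2 * np.pi * am_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

# Amplitude envelope via the analytic signal, then its dominant frequency.
envelope = np.abs(hilbert(call))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
am_estimate = freqs[np.argmax(spectrum)]
print(f"estimated AM rate: {am_estimate:.0f} Hz")  # 1700 Hz
```

The same envelope-spectrum logic underlies demodulating a call (flattening the envelope) for playback controls like those mentioned above.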
Neural oscillations are at the core of important computations in the mammalian brain. Interactions between oscillatory activities in different frequency bands, such as delta (1-4 Hz), theta (4-8 Hz), or gamma (>30 Hz), are a powerful mechanism for binding fundamentally distinct spatiotemporal scales of neural processing. Phase-amplitude coupling (PAC) is one such plausible and well-described interaction, but much is yet to be uncovered regarding how PAC dynamics contribute to sensory representations. In particular, although PAC appears to have a major role in audition, the characteristics of coupling profiles in sensory and integration (i.e. frontal) cortical areas remain obscure. Here, we address this question by studying PAC dynamics in the frontal-auditory field (FAF; an auditory area in the bat frontal cortex) and the auditory cortex (AC) of the bat Carollia perspicillata. By means of simultaneous electrophysiological recordings in frontal and auditory cortices examining local-field potentials (LFPs), we show that the amplitude of gamma-band activity couples with the phase of low-frequency LFPs in both structures. Our results demonstrate that the coupling in FAF occurs most prominently in delta/high-gamma frequencies (1-4/75-100 Hz), whereas in the AC the coupling is strongest in the theta/low-gamma (2-8/25-55 Hz) range. We argue that distinct PAC profiles may represent different mechanisms for neuronal processing in frontal and auditory cortices, and might complement oscillatory interactions for sensory processing in the frontal-auditory cortex network.
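Phase-amplitude coupling of the kind described above is often quantified with a mean-vector-length index (Canolty-style). A minimal sketch on surrogate data, assuming the low-frequency phase and gamma-band amplitude have already been extracted from the LFP (e.g., by band-pass filtering plus a Hilbert transform); all signal parameters here are invented:

```python
import numpy as np

def pac_mvl(phase, amplitude):
    """Mean-vector-length PAC index: magnitude of the amplitude-weighted
    mean unit phase vector, normalised by mean amplitude (0 = no coupling)."""
    return np.abs(np.mean(amplitude * np.exp(1j * phase))) / np.mean(amplitude)

rng = np.random.default_rng(0)
fs, dur = 1_000, 10                                        # hypothetical LFP rate, seconds
t = np.arange(0, dur, 1 / fs)
theta_phase = (2 * np.pi * 4 * t) % (2 * np.pi) - np.pi    # instantaneous 4-Hz phase

# Coupled surrogate: gamma amplitude peaks at a fixed low-frequency phase.
gamma_amp = 1 + 0.9 * np.cos(theta_phase)
coupled = pac_mvl(theta_phase, gamma_amp)
shuffled = pac_mvl(theta_phase, rng.permutation(gamma_amp))  # destroys coupling
print(f"coupled: {coupled:.2f}, shuffled: {shuffled:.2f}")   # 0.45 vs ~0.01
```

Scanning such an index over a grid of phase-frequency and amplitude-frequency bands yields the comodulograms from which coupling profiles like delta/high-gamma vs. theta/low-gamma are read off.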
In humans, screams have strong amplitude modulations (AMs) at 30 to 150 Hz. These AMs are the acoustic correlate of perceptual roughness. In bats, distress calls can carry AMs, which elicit heart-rate increases in playback experiments. Whether amplitude modulation occurs in fearful vocalisations of animal species beyond humans and bats remains unknown. Here we analysed the AM pattern of rats’ 22-kHz ultrasonic vocalisations emitted in a fear conditioning task. We found that the number of vocalisations decreases during the presentation of conditioned stimuli. We also observed that AMs do occur in rat 22-kHz vocalisations. AMs are stronger during the presentation of conditioned stimuli and during escape behaviour compared to freezing. Our results suggest that the presence of AMs in emitted vocalisations could reflect the animal’s internal state of fear related to avoidance behaviour.
The mammalian frontal and auditory cortices are important for vocal behavior. Here, using local-field potential recordings, we demonstrate that the timing and spatial patterns of oscillations in the fronto-auditory network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominant top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depend on the behavioral role of the vocalization and on the timing relative to vocal onset. We observed the emergence of predominant bottom-up (auditory-to-frontal) information transfer during the post-vocal period specific to echolocation pulse emission, leading to self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to sounds in auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
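Transfer entropy, used above to assess directed information flow, can be illustrated with a plug-in estimator on discretised signals with history length 1; real LFP analyses use longer histories, delay scans, and bias correction. The toy sequences below are invented to show the directionality the measure captures:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for two discrete
    sequences, using a single time step of history."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_past, x_past)
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        # p(y1|y0,x0) / p(y1|y0), written with raw counts
        te += (c / n) * np.log2((c * singles_y[y0]) / (pairs_yy[(y1, y0)] * pairs_yx[(y0, x0)]))
    return te

# Toy directional coupling: y copies x with a one-step lag; x is random.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000).tolist()
y = [0] + x[:-1]
print(round(transfer_entropy(x, y), 2), round(transfer_entropy(y, x), 2))  # ~1.0 vs ~0.0
```

The asymmetry of the two values is what licenses statements like "predominantly top-down information flow": TE(frontal → auditory) exceeding TE(auditory → frontal) in a given epoch.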