Frontal areas of the mammalian cortex are thought to be important for cognitive control and complex behaviour. These areas have been studied mostly in humans, non-human primates and rodents. In this article, we present a quantitative characterization of the response properties of a sound-responsive frontal area in the brain of Carollia perspicillata, the frontal auditory field (FAF). Bats are highly vocal animals, and they constitute an important experimental model for studying the auditory system. We combined electrophysiology experiments and computational simulations to compare the responses of auditory neurons in the bat FAF and auditory cortex (AC) to simple sounds (pure tones); anatomical studies have shown that the AC provides feedforward inputs to the FAF. Our results show that bat FAF neurons are responsive to sounds; however, compared to AC neurons, they presented sparser, less temporally precise spiking and longer-lasting responses. Based on the results of an integrate-and-fire neuronal model, we suggest that slow, subthreshold synaptic dynamics can account for the activity pattern of FAF neurons. These properties reflect the general function of the frontal cortex and likely result from its connections with multiple brain regions, including cortico-cortical projections from the AC to the FAF.
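The role of slow subthreshold synaptic dynamics can be illustrated with a minimal leaky integrate-and-fire sketch. All parameter values below are illustrative, not taken from the study: the only point is that lengthening the synaptic time constant alone turns early, precise spiking (AC-like) into delayed, longer-lasting spiking (FAF-like).

```python
import numpy as np

def lif_spikes(drive, tau_syn, tau_m=10.0, dt=0.1, gain=30.0,
               v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron (times in ms, voltages in mV).

    The input drive is low-pass filtered with time constant tau_syn
    before reaching the membrane; a large tau_syn mimics slow
    subthreshold synaptic dynamics. Returns spike times in ms.
    """
    v, s, spikes = v_rest, 0.0, []
    for i, x in enumerate(drive):
        s += dt * (x - s) / tau_syn                 # filtered synaptic drive
        v += dt * (-(v - v_rest) + gain * s) / tau_m
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

# 50 ms "tone" inside a 150 ms window (dt = 0.1 ms)
drive = np.zeros(1500)
drive[0:500] = 1.0

ac_like = lif_spikes(drive, tau_syn=2.0)    # fast synapses: early, dense spiking
faf_like = lif_spikes(drive, tau_syn=30.0)  # slow synapses: late, sparse spiking
```

With these illustrative constants, the slow-synapse neuron fires its first spike tens of milliseconds after the fast-synapse neuron and produces far fewer spikes, qualitatively matching the sparser, less precise FAF responses described above.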
Low-frequency spike-field coherence is a fingerprint of periodicity coding in the auditory cortex
(2018)
The extraction of temporal information from sensory input streams is of paramount importance in the auditory system. In this study, amplitude-modulated sounds were used as stimuli to drive auditory cortex (AC) neurons of the bat species Carollia perspicillata, to assess the interactions between cortical spikes and local-field potentials (LFPs) for the processing of temporal acoustic cues. We observed that neurons in the AC capable of eliciting synchronized spiking to periodic acoustic envelopes were significantly more coherent to theta- and alpha-band LFPs than their non-synchronized counterparts. These differences occurred independently of the modulation rate tested and could not be explained by power or phase modulations of the field potentials. We argue that the coupling between neuronal spiking and the phase of low-frequency LFPs might be important for orchestrating the coding of temporal acoustic structures in the AC.
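Spike-LFP coherence of the kind described above is commonly quantified with phase-locking measures. A minimal sketch on synthetic data (not the study's analysis pipeline): extract the LFP phase from the analytic signal and take the length of the mean unit phase vector at spike times.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def spike_lfp_plv(lfp, spike_idx):
    """Phase-locking value: |mean unit phase vector| at spike times."""
    phase = np.angle(analytic_signal(lfp))
    return np.abs(np.mean(np.exp(1j * phase[spike_idx])))

fs = 1000                              # sampling rate, Hz
t = np.arange(2000) / fs               # 2 s of data
lfp = np.sin(2 * np.pi * 5 * t)        # 5 Hz "theta" LFP

locked = np.arange(50, 2000, 200)      # one spike per cycle, at the peak
rng = np.random.default_rng(0)
random = rng.integers(0, 2000, 200)    # phase-indifferent spikes

plv_locked = spike_lfp_plv(lfp, locked)
plv_random = spike_lfp_plv(lfp, random)
```

Synchronized ("locked") spiking yields a PLV near 1, while phase-indifferent spiking yields a value near 0, mirroring the contrast between synchronized and non-synchronized neurons reported in the abstract.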
The mechanisms by which the mammalian brain copes with information from natural vocalization streams remain poorly understood. This article shows that in highly vocal animals, such as the bat species Carollia perspicillata, the spike activity of auditory cortex neurons does not track the temporal information flow enclosed in fast time-varying vocalization streams emitted by conspecifics. For example, leading syllables of so-called distress sequences (produced by bats subjected to duress) suppress cortical spiking to lagging syllables. Local field potentials (LFPs) recorded simultaneously with cortical spiking evoked by distress sequences carry multiplexed information, with response suppression occurring in low-frequency LFPs (i.e. 2–15 Hz) and steady-state LFPs occurring at frequencies that match the rate of energy fluctuations in the incoming sound streams (i.e. >50 Hz). Such steady-state LFPs could reflect underlying synaptic activity that does not necessarily lead to cortical spiking in response to natural fast time-varying vocal sequences.
Precise temporal coding is necessary for proper acoustic analysis. However, at the cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bat cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats' behavior. We show that cortical neurons tuned to target distance actually profit from forward suppression induced by natural echolocation sequences. These neurons extract target-distance information more precisely when stimulated with natural echolocation sequences than with isolated call-echo pairs. We conclude that forward suppression does for time-domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. In cortical processing, suppression should therefore be seen as a mechanistic tool rather than a limiting element.
In the cochlea of the mustached bat, cochlear resonance produces extremely sharp frequency tuning to the dominant frequency of the echolocation calls, around 61 kHz. Such high frequency resolution in the cochlea is accomplished at the expense of temporal resolution because of cochlear ringing, an effect that is observable not only in the cochlea but also in the cochlear nucleus. In the midbrain, the duration of sounds is thought to be analyzed by duration-tuned neurons (DTNs), which are selective to both stimulus duration and frequency. We recorded from 57 DTNs in the auditory midbrain of the mustached bat to assess whether a spectral-temporal trade-off is present. In such a trade-off, sharp tuning in the frequency domain results in poorer resolution in the time domain, and vice versa. We found that a specialized sub-population of midbrain DTNs tuned to the bat's mechanical cochlear resonance frequency escapes the cochlear spectral-temporal trade-off. We also present evidence pointing towards an underlying neuronal inhibition that appears to be specific to the resonance frequency.
Communication sounds are ubiquitous in the animal kingdom, where they play a role in advertising physiological states and/or socio-contextual scenarios. Human screams, for example, are typically uttered in fearful contexts and they have a distinctive feature termed "roughness", which describes amplitude fluctuations at rates of 30–150 Hz. In this article, we report that the occurrence of fast acoustic periodicities in harsh-sounding vocalizations is not unique to humans. A roughness-like structure is also present in vocalizations emitted by bats (species Carollia perspicillata) in distressful contexts. We report that 47.7% of distress calls produced by bats carry amplitude fluctuations at rates of ~1.7 kHz (>10 times faster than the temporal modulations found in human screams). In bats, rough-like vocalizations entrain brain potentials and are more effective in accelerating the bats' heart rate than slow amplitude-modulated sounds. Our results are consistent with a putative role of fast amplitude modulations (roughness in humans) for grabbing the listener's attention in situations in which the emitter is in a distressful, potentially dangerous context.
The ability to vocalize is ubiquitous in vertebrates, but neural networks underlying vocal control remain poorly understood. Here, we performed simultaneous neuronal recordings in the frontal cortex and dorsal striatum (caudate nucleus, CN) during the production of echolocation pulses and communication calls in bats. This approach allowed us to assess the general aspects underlying vocal production in mammals and the unique evolutionary adaptations of bat echolocation. Our data indicate that before vocalization, a distinctive change in high-gamma and beta oscillations (50–80 Hz and 12–30 Hz, respectively) takes place in the bat frontal cortex and dorsal striatum. Such precise fine-tuning of neural oscillations could allow animals to selectively activate motor programs required for the production of either echolocation or communication vocalizations. Moreover, the functional coupling between frontal and striatal areas, occurring in the theta oscillatory band (4–8 Hz), differs markedly at the millisecond level, depending on whether the animals are in a navigational mode (that is, emitting echolocation pulses) or in a social communication mode (emitting communication calls). Overall, this study indicates that fronto-striatal oscillations could provide a neural correlate for vocal control in bats.
Experimental evidence indicates that cortical oscillations represent the multiscale temporal modulations present in natural stimuli, yet little is known about the processing of these multiple timescales at the neuronal level. Here, using extracellular recordings from the auditory cortex (AC) of awake bats (Carollia perspicillata), we show the existence of three neuronal types that represent different levels of the temporal structure of conspecific vocalizations, and therefore constitute direct evidence of multiscale temporal processing of naturalistic stimuli by neurons in the AC. These neuronal subpopulations synchronize differently to local-field potentials, particularly in theta and high-frequency bands, and are informative to different degrees in terms of their spike rate. Interestingly, we also observed that both low- and high-frequency cortical oscillations can be highly informative about the calls heard. Our results suggest that multiscale neuronal processing allows for the precise and non-redundant representation of natural vocalizations in the AC.
Echolocation behavior, a navigation strategy based on acoustic signals, allows scientists to explore neural processing of behaviorally relevant stimuli. For the purpose of orientation, bats broadcast echolocation calls and extract spatial information from the echoes. Because bats control call emission and thus the availability of spatial information, the behavioral relevance of these signals is indisputable. While most past neurophysiological studies used synthesized acoustic stimuli that mimic portions of the echolocation signals, recent progress has been made in understanding how naturalistic echolocation signals are encoded in the bat brain. Here, we review how stimulus history affects neural processing, how spatial information from multiple objects is represented, and how echolocation signals embedded in a naturalistic, noisy environment are processed in the bat brain. We end our review by discussing the huge potential that state-of-the-art recording techniques provide for gaining a more complete picture of the neuroethology of echolocation behavior.
In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differed in their spectro-temporal properties. To accomplish this aim, we analyzed multi-unit activity (MUA) elicited by simple pure tones and complex communication calls, as well as local field potentials (LFP) and current source density (CSD) waveforms, at the single-layer and columnar level in the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of "call-specificity" in the evoked activity. The results showed that whole laminar profiles segregated 1.8–2.6 times better across calls than single-layer activity. Also, laminar LFP and CSD profiles segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were more pronounced at mid and late latencies in the granular and infragranular layers, and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.
The mammalian frontal and auditory cortices are important for vocal behavior. Here, using local-field potential recordings, we demonstrate that the timing and spatial patterns of oscillations in the fronto-auditory network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominant top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depend on the behavioral role of the vocalization and on the timing relative to vocal onset. We observed the emergence of predominant bottom-up (auditory-to-frontal) information transfer during the post-vocal period specific to echolocation pulse emission, leading to self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to sounds in auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
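Transfer entropy, the directed information measure used in this study, can be sketched for binary time series with history length 1. This is a toy illustration, far simpler than the LFP analyses in the paper: a series y that copies a lagged series x (with some bit flips) shows strong x-to-y but negligible y-to-x transfer.

```python
import numpy as np

def transfer_entropy(x, y):
    """TE(x -> y) in bits for binary sequences, history length 1.

    Sums p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1}|y_t,x_t) / p(y_{t+1}|y_t) ]
    over all symbol combinations, estimated by plug-in frequencies.
    """
    xt, yt, y1 = x[:-1], y[:-1], y[1:]
    te = 0.0
    for a in (0, 1):          # y_{t+1}
        for b in (0, 1):      # y_t
            for c in (0, 1):  # x_t
                p_abc = np.mean((y1 == a) & (yt == b) & (xt == c))
                p_bc = np.mean((yt == b) & (xt == c))
                p_ab = np.mean((y1 == a) & (yt == b))
                p_b = np.mean(yt == b)
                if p_abc > 0 and p_bc > 0 and p_ab > 0 and p_b > 0:
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)                 # "sender" series
noise = (rng.random(5000) < 0.1).astype(int)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1] ^ noise[1:]                   # y copies x with 10% bit flips

te_xy = transfer_entropy(x, y)               # large: x drives y
te_yx = transfer_entropy(y, x)               # near zero: no feedback
```

The asymmetry between te_xy and te_yx is the same logic used in the abstract to distinguish top-down (frontal-to-auditory) from bottom-up (auditory-to-frontal) information flow.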
Most mammals rely on the extraction of acoustic information from the environment in order to survive. However, the mechanisms that support sound representation in auditory neural networks involving sensory and association brain areas remain underexplored. In this study, we address the functional connectivity between an auditory region in frontal cortex (the frontal auditory field, FAF) and the auditory cortex (AC) in the bat Carollia perspicillata. The AC is a classic sensory area central for the processing of acoustic information. On the other hand, the FAF belongs to the frontal lobe, a brain region involved in the integration of sensory inputs, modulation of cognitive states, and in the coordination of behavioral outputs. The FAF-AC network was examined in terms of oscillatory coherence (local-field potentials, LFPs), and within an information theoretical framework linking FAF and AC spiking activity. We show that in the absence of acoustic stimulation, simultaneously recorded LFPs from FAF and AC are coherent in low frequencies (1–12 Hz). This "default" coupling was strongest in deep AC layers and was unaltered by acoustic stimulation. However, presenting auditory stimuli did trigger the emergence of coherent auditory-evoked gamma-band activity (>25 Hz) between the FAF and AC. In terms of spiking, our results suggest that FAF and AC engage in distinct coding strategies for representing artificial and natural sounds. Taken together, our findings shed light on the neuronal coding strategies and functional coupling mechanisms that enable sound representation at the network level in the mammalian brain.
Neural oscillations are at the core of important computations in the mammalian brain. Interactions between oscillatory activities in different frequency bands, such as delta (1–4 Hz), theta (4–8 Hz) or gamma (>30 Hz), are a powerful mechanism for binding fundamentally distinct spatiotemporal scales of neural processing. Phase-amplitude coupling (PAC) is one such plausible and well-described interaction, but much is yet to be uncovered regarding how PAC dynamics contribute to sensory representations. In particular, although PAC appears to have a major role in audition, the characteristics of coupling profiles in sensory and integration (i.e. frontal) cortical areas remain obscure. Here, we address this question by studying PAC dynamics in the frontal-auditory field (FAF; an auditory area in the bat frontal cortex) and the auditory cortex (AC) of the bat Carollia perspicillata. By means of simultaneous electrophysiological recordings in frontal and auditory cortices examining local-field potentials (LFPs), we show that the amplitude of gamma-band activity couples with the phase of low-frequency LFPs in both structures. Our results demonstrate that the coupling in FAF occurs most prominently in delta/high-gamma frequencies (1-4/75-100 Hz), whereas in the AC the coupling is strongest in the delta-theta/low-gamma (2-8/25-55 Hz) range. We argue that distinct PAC profiles may represent different mechanisms for neuronal processing in frontal and auditory cortices, and might complement oscillatory interactions for sensory processing in the frontal-auditory cortex network.
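Phase-amplitude coupling of this kind is commonly quantified with a mean-vector-length modulation index: the amplitude envelope of the fast band is weighted by the phase of the slow band. The sketch below uses synthetic delta/gamma signals (not the study's code or frequencies per se) where the gamma envelope is deliberately tied to the delta phase.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def pac_mvl(slow, fast):
    """Mean-vector-length PAC index, normalized to [0, 1]:
    |mean(amp_fast * exp(i * phase_slow))| / mean(amp_fast)."""
    phase = np.angle(analytic_signal(slow))
    amp = np.abs(analytic_signal(fast))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

fs = 1000
t = np.arange(4000) / fs                      # 4 s of data
slow = np.cos(2 * np.pi * 4 * t)              # 4 Hz "delta" phase signal
env = 1.0 + 0.8 * np.cos(2 * np.pi * 4 * t)   # gamma envelope follows delta phase
coupled = env * np.cos(2 * np.pi * 80 * t)    # phase-amplitude coupled gamma
uncoupled = np.cos(2 * np.pi * 80 * t)        # constant-envelope gamma

mi_coupled = pac_mvl(slow, coupled)
mi_uncoupled = pac_mvl(slow, uncoupled)
```

A coupled signal yields a clearly non-zero index while an uncoupled one stays near zero; in practice the raw signals would first be band-pass filtered into the delta/theta and gamma ranges of interest.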
Identifying unexpected acoustic inputs, which allows animals to react appropriately to new situations, is of major importance. Neural deviance detection describes a change of neural response strength to a stimulus caused solely by the stimulus' probability of occurrence. In the present study, we searched for correlates of deviance detection in auditory brainstem responses obtained in anaesthetised bats (Carollia perspicillata). In an oddball paradigm, we used two pure tone stimuli that represented the main frequencies used by the animal during echolocation (60 kHz) and communication (20 kHz). For both stimuli, we could demonstrate significant differences in response strength between deviant and standard responses in slow and fast components of the auditory brainstem response. The data suggest the presence of correlates of deviance detection in brain stations below the inferior colliculus (IC), at the level of the cochlear nucleus and lateral lemniscus. Additionally, our results suggest that deviance detection is mainly driven by repetition suppression in the echolocation frequency band, while in the communication band, a deviant-related enhancement of the response plays a more important role. This finding suggests a contextual dependence of the mechanisms underlying subcortical deviance detection. The present study demonstrates the value of auditory brainstem responses for studying deviance detection and suggests that auditory specialists, such as bats, use different frequency-specific strategies to ensure appropriate perception of unexpected sounds.
Echolocating bats exhibit remarkable auditory behaviors, enabled by adaptations within and outside their auditory system. Yet, research in echolocating bats has focused mostly on brain areas that belong to the classic ascending auditory pathway. This study provides direct evidence linking the cerebellum, an evolutionarily ancient and non-classic auditory structure, to vocalization and hearing. We report that in the fruit-eating bat Carollia perspicillata, external sounds can evoke cerebellar responses with latencies below 20 ms. Such fast responses are indicative of early inputs to the bat cerebellum. In vocalizing bats, distinct spike train patterns allow predicting, with over 85% accuracy, the sound the animals are about to produce or have just produced, i.e., communication calls or echolocation pulses. Taken together, our findings provide evidence of specializations for vocalization and hearing in the cerebellum of an auditory specialist.
Vocal communication is essential to coordinate social interactions in mammals, and it requires fine discrimination of communication sounds. Auditory neurons can exhibit selectivity for specific calls, but how this selectivity is affected by preceding sounds is still debated. We tackled this using ethologically relevant vocalizations in a highly vocal mammalian species: Seba's short-tailed bat. We show that cortical neurons present several degrees of selectivity for echolocation and distress calls. Embedding vocalizations within natural acoustic streams leads to stimulus-specific suppression of neuronal responses that changes sound selectivity in opposite directions: selectivity increases in neurons with poor discriminability in silence and decreases in neurons that are selective in silent settings. A computational model indicates that the observed effects arise from two forms of adaptation: presynaptic frequency-specific adaptation acting on cortical inputs, and stimulus-unspecific postsynaptic adaptation. These results shed light on how acoustic context modulates natural sound discriminability in the mammalian cortex.
The brains of black 6 mice (Mus musculus) and Seba's short-tailed bats (Carollia perspicillata) weigh roughly the same and share the mammalian neocortical laminar architecture. Bats have highly developed sonar calls and social communication and are an excellent neuroethological animal model for auditory research. Mice are olfactory and somatosensory specialists and are used frequently in auditory neuroscience, particularly for their advantages in standardization and genetic tools. Investigating their potentially different general auditory processing principles would advance our understanding of how the ecological needs of a species shape the development and function of the mammalian nervous system. We compared two existing datasets, recorded in awake animals of both species with linear multichannel electrodes spanning the depth of the primary auditory cortex (A1), while presenting repetitive stimulus trains at different rates (∼5 and ∼40 Hz). We found that while there are similarities between cortical response profiles in bats and mice, the signal-to-noise ratio was better in bats under these conditions, which allowed for a clearer following response to stimulus trains. This was most evident for higher-frequency trains, where bats had stronger response-amplitude suppression to consecutive stimuli. Phase coherence was far stronger in bats during stimulus response, indicating less phase variability in bats across individual trials. These results show that although both species share cortical laminar organization, there are structural differences in the relative depth of layers. The better signal-to-noise ratio in bats could represent a specialization for faster temporal processing shaped by their ecological niche.
In humans, screams have strong amplitude modulations (AMs) at 30 to 150 Hz. These AMs are the acoustic correlate of perceptual roughness. In bats, distress calls can carry AMs, which elicit heart rate increases in playback experiments. Whether amplitude modulation occurs in fearful vocalisations of other animal species beyond humans and bats remains unknown. Here we analysed the AM pattern of rats' 22-kHz ultrasonic vocalisations emitted in a fear conditioning task. We found that the number of vocalisations decreases during the presentation of conditioned stimuli. We also observed that AMs do occur in rat 22-kHz vocalisations. AMs are stronger during the presentation of conditioned stimuli, and during escape behaviour compared to freezing. Our results suggest that the presence of AMs in the emitted vocalisations could reflect the animal's internal state of fear related to avoidance behaviour.
Summary statement When echolocating under demanding conditions (e.g., noisy, narrow, or cluttered environments), frugivorous bats adapt their call pattern by increasing the call rate within biosonar groups.
Abstract For orientation, echolocating bats emit biosonar calls and use echoes arising from call reflections. They often pattern their calls into groups, which increases the rate of sensory feedback over time. Insectivorous bats emit call groups at a higher rate when orienting in cluttered compared to uncluttered environments. Frugivorous bats increase the rate of call group emission when they echolocate in noisy environments. Here, calls emitted by conspecifics potentially interfere with the bat's biosonar signals and complicate the echolocation behavior. To minimize the information loss caused by signal interference, bats may profit from a temporarily increased sensory acquisition rate, as is the case for call groups. In frugivorous bats, it remains unclear whether call group emission represents an exclusive adaptation to avoid interference by signals from other bats or an adaptation that allows them to orient under demanding environmental conditions. Here, we compared the emission pattern of the frugivorous bat Carollia perspicillata when the bats were flying in noisy versus silent, narrow versus wide, or cluttered versus non-cluttered corridors. According to our results, the bats emitted larger call groups and increased the call rate within the call groups when navigating in narrow, cluttered, or noisy environments. Thus, call group emission represents an adaptive behavior when the bats orient in complex environments.
Animals extract behaviorally relevant signals from "noisy" environments. Echolocation provides a rich testbed for investigating signal extraction. For orientation, bats broadcast calls and assign each echo to the corresponding call. When orienting in acoustically enriched environments or when approaching targets, bats change their spectro-temporal call design. Thus, to assess call adjustments that are exclusively meant to facilitate signal extraction in "noisy" environments, it is necessary to control for distance-dependent call changes. By swinging bats in a pendulum, we tested the influence of acoustic playback on the echolocation behavior of Carollia perspicillata. This paradigm evokes reproducible orientation behavior and allows a precise definition of the influence of the acoustic context. Our results show that bats dynamically switch between different adaptations to cope with sound-based navigation in acoustically contaminated environments. These dynamics of echolocation behavior may explain the large variety of adaptations that have been reported in the bat literature.
Substantial progress in the field of neuroscience has been made from anaesthetized preparations. Ketamine is one of the most widely used drugs in electrophysiology studies, but how ketamine affects neuronal responses is poorly understood. Here, we used in vivo electrophysiology and computational modelling to study how the auditory cortex of bats responds to vocalisations under anaesthesia and in wakefulness. In wakefulness, acoustic context increases neuronal discrimination of natural sounds. Neuron models predicted that ketamine affects the contextual discrimination of sounds regardless of the type of context heard by the animals (echolocation or communication sounds). However, empirical evidence showed that the predicted effect of ketamine occurs only if the acoustic context consists of low-pitched sounds (e.g., communication calls in bats). Using the empirical data, we updated the naïve models to show that the differential effects of ketamine on cortical responses can be mediated by unbalanced changes in the firing rate of feedforward inputs to the cortex, and changes in the depression of thalamo-cortical synaptic receptors. Combined, our findings obtained in vivo and in silico reveal the effects and mechanisms by which ketamine affects cortical responses to vocalisations.
The ability to vocalize is ubiquitous in vertebrates, but the neural networks leading to vocalization production remain poorly understood. Here we performed simultaneous, large-scale neuronal recordings in the frontal cortex and dorsal striatum (caudate nucleus) during the production of echolocation and non-echolocation calls in bats. This approach allows us to assess the general aspects underlying vocalization production in mammals and the unique evolutionary adaptations of bat echolocation. Our findings show that distinct intra-areal brain rhythms in the beta (12-30 Hz) and gamma (30-80 Hz) bands of the local field potential can be used to predict the bats' vocal output, and that phase locking between spikes and field potentials occurs prior to vocalization production. Moreover, the fronto-striatal network is differentially coupled in the theta band during the production of echolocation and non-echolocation calls. Overall, our results present evidence for a role of fronto-striatal network oscillations in motor action prediction in mammals.
Although new advances in neuroscience allow the study of vocal communication in awake animals, much of our understanding of vocalization processing comes from anaesthetized preparations. Thus, understanding how anaesthetics affect neuronal responses is of paramount importance. Here, we used electrophysiological recordings and computational modelling to study how the auditory cortex of bats responds to vocalizations under anaesthesia and in wakefulness. We found that multifunctional neurons that process echolocation and communication sounds were affected by ketamine anaesthesia in a manner that could not be predicted from known anaesthetic effects. In wakefulness, acoustic contexts (preceding echolocation or communication sequences) led to stimulus-specific suppression of lagging sounds, accentuating neuronal responses to sound transitions. Under anaesthesia, however, communication contexts (but not echolocation contexts) led to a global suppression of responses to lagging sounds. Such an asymmetric effect depended on the frequency composition of the contexts and not on their temporal patterns. We constructed a neuron model that could replicate the data obtained in vivo. In the model, anaesthesia modulates spiking activity in a channel-specific manner, decreasing the responses of cortical inputs tuned to high-frequency sounds and increasing adaptation in the respective cortical synapses. Combined, our findings obtained in vivo and in silico reveal that ketamine anaesthesia does not uniformly reduce neuronal responsiveness to low- and high-frequency sounds. This effect depends on combined mechanisms that unbalance cortical inputs and ultimately affect how auditory cortex neurons respond to natural sounds in anaesthetized preparations.
Deviance detection describes an increase of neural response strength caused by a stimulus with a low probability of occurrence. This ubiquitous phenomenon has been reported for multiple species, from subthalamic areas to the auditory cortex. While cortical deviance detection has been well characterised by a range of studies covering neural activity at the population level (mismatch negativity, MMN) as well as at the cellular level (stimulus-specific adaptation, SSA), subcortical deviance detection has been studied mainly at the cellular level in the form of SSA. Here, we aim to bridge this gap by using noninvasively recorded auditory brainstem responses (ABRs) to investigate deviance detection at the population level in the lower stations of the auditory system of a hearing specialist: the bat Carollia perspicillata. Our approach uses behaviourally relevant vocalisation stimuli that are closer to the animals' natural soundscape than the artificial stimuli used in previous studies of subcortical areas. We show that deviance detection in ABRs is significantly stronger for echolocation pulses than for social communication calls or artificial sounds, indicating that subthalamic deviance detection depends on the behavioural meaning of a stimulus. Additionally, complex physical sound features like frequency and amplitude modulation affected the strength of deviance detection in the ABR. In summary, our results suggest that, at the population level, the bat brain can detect different types of deviants already in the brainstem. This shows that subthalamic brain structures exhibit more advanced forms of deviance detection than previously known.
Sound discrimination is essential in many species for communicating and foraging. Bats, for example, use sounds for echolocation and communication. In the bat auditory cortex there are neurons that process both sound categories, but how these neurons respond to acoustic transitions, that is, echolocation streams followed by a communication sound, remains unknown. Here, we show that acoustic context, a leading sound sequence followed by a target sound, changes neuronal discriminability of echolocation versus communication calls in the cortex of awake bats of both sexes. Nonselective neurons that fire equally well to both echolocation and communication calls in the absence of context become category selective when leading context is present. Conversely, neurons that prefer communication sounds in the absence of context become nonselective when context is added. The presence of context leads to an overall response suppression, but the strength of this suppression is stimulus specific. Suppression is strongest when context and target sounds belong to the same category, e.g., echolocation followed by echolocation. A neuron model of stimulus-specific adaptation replicated our results in silico. The model predicts selectivity to communication and echolocation sounds in the inputs arriving at the auditory cortex, as well as two forms of adaptation: presynaptic frequency-specific adaptation acting on cortical inputs and stimulus-unspecific postsynaptic adaptation.
In addition, the model predicted that context effects can last up to 1.5 s after context offset and that synaptic inputs tuned to low-frequency sounds (communication signals) have the shortest decay constant of presynaptic adaptation. SIGNIFICANCE STATEMENT: We studied cortical responses to isolated calls and call mixtures in awake bats and show that (1) two neuronal populations coexist in the bat cortex, including neurons that discriminate social from echolocation sounds well and neurons that are equally driven by these two ethologically different sound types; (2) acoustic context (i.e., other natural sounds preceding the target sound) affects natural sound selectivity in a manner that could not be predicted from responses to isolated sounds; and (3) a computational model similar to those used to explain stimulus-specific adaptation in rodents can account for the responses observed in the bat cortex to natural sounds. This model depends on segregated feedforward inputs, synaptic depression, and postsynaptic neuronal adaptation.
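The two predicted adaptation mechanisms can be caricatured with depressing synaptic resources per frequency channel. The sketch below is a toy illustration under assumed values: the channel names, recovery time constants and utilization factor are invented for the example, not taken from the paper's fitted model.

```python
import numpy as np

def respond(events, tau_rec, u=0.6):
    """Toy frequency-specific presynaptic depression: each channel has
    a synaptic resource r in [0, 1] that is partly consumed when its
    sound arrives and recovers toward 1 with a channel-specific time
    constant tau_rec (s). events: list of (time_s, channel)."""
    r = {ch: 1.0 for ch in tau_rec}
    t_prev = 0.0
    out = []
    for t, ch in events:
        for c in r:  # exponential recovery since the previous event
            r[c] += (1.0 - r[c]) * (1.0 - np.exp(-(t - t_prev) / tau_rec[c]))
        t_prev = t
        out.append(u * r[ch])  # response tracks the available resource
        r[ch] *= 1.0 - u       # depression of the channel just used
    return out

tau = {"echo": 1.5, "comm": 0.3}  # hypothetical recovery constants
# A same-channel context suppresses the target more than a cross-channel one:
same = respond([(0.1, "echo"), (0.2, "echo"), (0.3, "echo"), (0.4, "echo")], tau)
cross = respond([(0.1, "comm"), (0.2, "comm"), (0.3, "comm"), (0.4, "echo")], tau)
```

Because depression is channel specific, the echolocation target after a communication context arrives on an undepleted channel and evokes a full-sized response, mirroring the stimulus-specific suppression described in the abstract.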
The auditory midbrain (inferior colliculus, IC) plays an important role in sound processing, acting as a hub for acoustic information extraction and for the implementation of fast audio-motor behaviors. IC neurons are topographically organized according to their sound frequency preference: dorsal IC regions encode low frequencies while ventral areas respond best to high frequencies, a type of sensory map known as tonotopy. Tonotopic maps have been studied extensively using artificial stimuli (pure tones), but our knowledge of how these maps represent information about sequences of natural, spectro-temporally rich sounds is sparse. We studied this question by conducting simultaneous extracellular recordings across IC depths in awake bats (Carollia perspicillata) that listened to sequences of natural communication and echolocation sounds. The hypothesis was that information about these two types of sound streams is represented at different IC depths, since they exhibit large differences in spectral composition, i.e., echolocation covers the high-frequency portion of the bat soundscape (> 45 kHz), while communication sounds are broadband and carry most power at low frequencies (20–25 kHz). Our results showed that mutual information between neuronal responses and acoustic stimuli, as well as response redundancy in pairs of neurons recorded simultaneously, increases exponentially with IC depth. The latter occurs regardless of the sound type presented to the bats (echolocation or communication). Taken together, our results indicate the existence of mutual information and redundancy maps at the midbrain level whose response cannot be predicted from the frequency composition of natural sounds and classic neuronal tuning curves.
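The depth-wise analysis rests on estimating mutual information I(S; R) between stimuli and neuronal responses from paired observations. A minimal plug-in estimator over discrete stimulus labels and binned responses might look as follows (the data are hypothetical, and the study's actual estimator and bias corrections are not specified here):

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S; R) in bits from paired samples of
    discrete stimulus labels and discretised responses (no bias
    correction; a toy version of information-theoretic analyses)."""
    _, s_idx = np.unique(stimuli, return_inverse=True)
    _, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((s_idx.max() + 1, r_idx.max() + 1))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1              # joint histogram of (s, r) pairs
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)   # marginal p(r)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# Hypothetical example: spike counts that separate two stimuli perfectly
stims = [0, 0, 1, 1] * 25
counts = [2, 3, 7, 8] * 25
mi = mutual_information(stims, counts)   # 1 bit: stimulus fully recoverable
```

With two equiprobable stimuli, a response that never overlaps across stimuli carries exactly one bit, while a response that ignores the stimulus carries zero.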
In natural environments, background noise can degrade the integrity of acoustic signals, posing a problem for animals that rely on their vocalizations for communication and navigation. A simple behavioral strategy to combat acoustic interference would be to restrict call emissions to periods of low-amplitude or no noise. Using audio playback and computational tools for the automated detection of over 2.5 million vocalizations from groups of freely vocalizing bats, we show that bats (Carollia perspicillata) can dynamically adapt the timing of their calls to avoid acoustic jamming in both predictably and unpredictably patterned noise. This study demonstrates that bats spontaneously seek out temporal windows of opportunity for vocalizing in acoustically crowded environments, providing a mechanism for efficient echolocation and communication in cluttered acoustic landscapes.
One Sentence Summary: Bats avoid acoustic interference by rapidly adjusting the timing of vocalizations to the temporal pattern of varying noise.
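The "windows of opportunity" idea can be made concrete with a simple gap detector on a noise amplitude envelope. This is a hypothetical sketch (the threshold, minimum duration and the envelope itself are invented), not the automated detection pipeline used in the study:

```python
import numpy as np

def quiet_windows(envelope, fs, thresh, min_dur):
    """Return (start_s, end_s) intervals during which a noise amplitude
    envelope stays below thresh for at least min_dur seconds, i.e.
    candidate windows for emitting a call without being jammed."""
    below = envelope < thresh
    edges = np.diff(below.astype(int))
    starts = np.flatnonzero(edges == 1) + 1    # silence onsets
    ends = np.flatnonzero(edges == -1) + 1     # silence offsets
    if below[0]:                               # silent from the start
        starts = np.r_[0, starts]
    if below[-1]:                              # silent until the end
        ends = np.r_[ends, below.size]
    return [(s / fs, e / fs) for s, e in zip(starts, ends)
            if (e - s) / fs >= min_dur]

# Noise for 1 s, a 0.5 s quiet gap, then noise again:
fs = 1000
env = np.concatenate([np.ones(1000), np.zeros(500), np.ones(500)])
wins = quiet_windows(env, fs, thresh=0.5, min_dur=0.2)
```

A bat following this strategy would place its calls inside the single detected window between 1.0 s and 1.5 s.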
Communication sounds are ubiquitous in the animal kingdom, where they play a role in advertising physiological states and/or socio-contextual scenarios. Distress sounds, for example, are typically uttered in distressful scenarios such as agonistic interactions. Here, we report on the occurrence of superfast temporal periodicities in distress calls emitted by bats (species Carollia perspicillata). Distress vocalizations uttered by this bat species are temporally modulated at frequencies close to 1.7 kHz, that is, ∼17 times faster than modulation rates observed in human screams. Fast temporal periodicities are represented in the bats’ brain by means of frequency following responses, and temporally periodic sounds are more effective in boosting the heart rate of awake bats than their demodulated versions. Altogether, our data suggest that bats, an animal group classically regarded as ultrasonic, can exploit the low frequency portion of the soundscape during distress calling to create spectro-temporally complex, arousing sounds.
Frontal areas of the mammalian cortex are thought to be important for cognitive control and complex behaviour. These areas have been studied mostly in humans, non-human primates and rodents. In this article, we present a quantitative characterization of the response properties of a frontal auditory area responsive to sound in the bat brain, the frontal auditory field (FAF). Bats are highly vocal animals, and they constitute an important experimental model for studying the auditory system. At present, little is known about neuronal sound processing in the bat FAF. We combined electrophysiology experiments and computational simulations to compare the response properties of auditory neurons found in the bat FAF and auditory cortex (AC) to simple sounds (pure tones). Anatomical studies have shown that the latter provides feedforward inputs to the former. Our results show that bat FAF neurons are responsive to sounds; however, compared with AC neurons, they showed sparser, less temporally precise spiking and longer-lasting responses. Based on the results of an integrate-and-fire neuronal model, we speculate that slow, low-threshold synaptic dynamics could contribute to the changes in activity pattern that occur as information travels through cortico-cortical projections from the AC to the FAF.
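The intuition behind the integrate-and-fire comparison can be sketched with a single model neuron whose synaptic time constant is varied. All parameter values below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def lif_response(tau_syn, t_stim=0.05, dt=1e-4, t_max=0.5,
                 tau_m=0.01, v_thresh=1.0, w=6.0):
    """Leaky integrate-and-fire neuron driven through an exponential
    synapse that receives a single input pulse at t_stim (all times in
    seconds; parameters illustrative). Returns spike times."""
    v, s, spikes = 0.0, 0.0, []
    for i in range(int(t_max / dt)):
        t = i * dt
        if abs(t - t_stim) < dt / 2:    # brief stimulus kicks the synapse
            s += 1.0
        s -= dt * s / tau_syn           # synaptic decay
        v += dt * (-v + w * s) / tau_m  # leaky membrane integration
        if v >= v_thresh:               # spike and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Fast synaptic decay gives a brief, precisely timed response; slow decay
# yields a longer-lasting discharge, qualitatively AC-like vs. FAF-like.
fast = lif_response(tau_syn=0.005)
slow = lif_response(tau_syn=0.050)
```

With the slower synapse, subthreshold drive persists long after stimulus offset, so spiking is spread over tens of milliseconds rather than locked to the stimulus.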
Most mammals rely on the extraction of acoustic information from the environment in order to survive. However, the mechanisms that support sound representation in auditory neural networks involving sensory and association brain areas remain underexplored. In this study, we address the functional connectivity between an auditory region in the frontal cortex (the frontal auditory field, FAF) and the auditory cortex (AC) in the bat Carollia perspicillata. The AC is a classic sensory area central to the processing of acoustic information. The FAF, on the other hand, belongs to the frontal lobe, a brain region involved in the integration of sensory inputs, the modulation of cognitive states, and the coordination of behavioural outputs. The FAF-AC network was examined in terms of oscillatory coherence (local-field potentials, LFPs) and within an information-theoretical framework linking FAF and AC spiking activity. We show that in the absence of acoustic stimulation, simultaneously recorded LFPs from FAF and AC are coherent at low frequencies (1-12 Hz). This “default” coupling was strongest in deep AC layers and was unaltered by acoustic stimulation. However, presenting auditory stimuli did trigger the emergence of coherent auditory-evoked gamma-band activity (>25 Hz) between the FAF and AC. In terms of spiking, our results suggest that FAF and AC engage in distinct coding strategies for representing artificial and natural sounds. Taken together, our findings shed light on the neuronal coding strategies and functional coupling mechanisms that enable sound representation at the network level in the mammalian brain.
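Band-limited LFP coherence of the kind described can be illustrated with a toy estimate of magnitude-squared coherence between two synthetic signals that share only a theta-band component. Signals and parameters are invented for the example; the study's actual spectral methods are not specified here:

```python
import numpy as np

def band_coherence(x, y, fs, nperseg=1024):
    """Magnitude-squared coherence between two signals, estimated by
    averaging FFT cross-spectra over non-overlapping segments
    (a bare-bones Welch-style estimate, rectangular window)."""
    nseg = len(x) // nperseg
    sxx = syy = sxy = 0.0
    for k in range(nseg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        fx, fy = np.fft.rfft(x[seg]), np.fft.rfft(y[seg])
        sxx = sxx + np.abs(fx) ** 2
        syy = syy + np.abs(fy) ** 2
        sxy = sxy + fx * np.conj(fy)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, np.abs(sxy) ** 2 / (sxx * syy)

# Two noisy synthetic "LFPs" sharing only a 6 Hz (theta-band) component:
fs = 1000
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 6 * t)
lfp_a = shared + rng.normal(0, 1, t.size)
lfp_b = shared + rng.normal(0, 1, t.size)
freqs, coh = band_coherence(lfp_a, lfp_b, fs)
theta = coh[(freqs >= 4) & (freqs <= 8)].mean()
gamma = coh[(freqs >= 60) & (freqs <= 90)].mean()
```

Only the band containing the shared component shows elevated coherence; independent noise keeps coherence near its chance floor (roughly 1 over the number of averaged segments) elsewhere.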
The mammalian frontal and auditory cortices are important for vocal behaviour. Here, using local field potential recordings, we demonstrate for the first time that the timing and spatial pattern of oscillations in the fronto-auditory cortical network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominantly top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depended on the behavioural role of the vocalization and on the timing relative to vocal onset. Remarkably, we observed the emergence of predominantly bottom-up (auditory-to-frontal cortex) information transfer patterns specific to echolocation production, which generates self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to echolocation sounds in the auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
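Transfer entropy quantifies directed information flow as the reduction in uncertainty about one signal's next sample given another signal's past, beyond what the target's own past explains. A minimal plug-in version for binary sequences with history length 1 (far simpler than what continuous LFP analyses require) could look like:

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for two binary
    sequences, using a history of one sample. A bare-bones estimate;
    real LFP analyses involve embedding, binning and bias correction."""
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    p = np.zeros((2, 2, 2))                    # axes: (y_next, y_now, x_now)
    for t in range(len(y) - 1):
        p[y[t + 1], y[t], x[t]] += 1
    p /= p.sum()
    p_yx = p.sum(axis=0, keepdims=True)        # p(y_now, x_now)
    p_yy = p.sum(axis=2, keepdims=True)        # p(y_next, y_now)
    p_y = p.sum(axis=(0, 2), keepdims=True)    # p(y_now)
    nz = p > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        log_term = np.log2((p * p_y) / (p_yy * p_yx))
    return float((p[nz] * log_term[nz]).sum())

# x drives y with a one-sample lag; information should flow x -> y only
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                              # y[t] = x[t - 1]
te_fwd = transfer_entropy(x, y)
te_rev = transfer_entropy(y, x)
```

The asymmetry between the forward and reverse estimates is what licenses directional claims such as "top-down" versus "bottom-up" information flow.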
The brains of black 6 mice (Mus musculus) and Seba’s short-tailed bats (Carollia perspicillata) weigh roughly the same and share the mammalian neocortical laminar architecture. Bats have highly developed sonar calls and social communication and are an excellent neuroethological animal model for auditory research. Mice are olfactory and somatosensory specialists, used frequently in auditory neuroscience for their advantages of standardization and a wide genetic toolkit. This study presents an analytical approach to overcome the challenge of inter-species comparison with existing data. In both data sets, we recorded with linear multichannel electrodes along the depth of the primary auditory cortex (A1) while presenting repetitive stimulus trains at ~5 and ~40 Hz to awake bats and mice. We found that while there are similarities between the cortical response profiles of the two species, bats showed a better signal-to-noise ratio under these conditions, which allowed for a clearer following response to the stimulus trains. Model-fit analysis supported this, illustrating that bats had stronger suppression of response amplitude to consecutive stimuli. Additionally, continuous wavelet transforms revealed that bats had significantly stronger power and phase coherence during the stimulus response, whereas mice had stronger power in the background. The better signal-to-noise ratio and lower inter-trial phase variability in bats could represent a specialization for faster and more accurate temporal processing at lower metabolic costs. Our findings demonstrate a potentially different general auditory processing principle; investigating such differences may increase our understanding of how the ecological needs of a species shape the development and function of its nervous system.