Difficulty producing intelligible speech is a common and debilitating symptom of Parkinson’s disease (PD). Yet, both the robust evaluation of speech impairments and the identification of the affected brain systems are challenging. We examine the spectral and spatial definitions of the functional neuropathology underlying reduced speech quality in patients with PD using a new approach to characterize speech impairments and a novel brain-imaging marker. We found that the interactive scoring of speech impairments in PD (N=59) is reliable across non-expert raters, and better related to the hallmark motor and cognitive impairments of PD than automatically-extracted acoustical features. By relating these speech impairment ratings to neurophysiological deviations from healthy adults (N=65), we show that articulation impairments in patients with PD are robustly predicted from aberrant activity in the left inferior frontal cortex, and that functional connectivity of this region with somatomotor cortices mediates the influence of cognitive decline on speech deficits.
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model is able to reproduce a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
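The placement principle can be sketched in a few lines. The toy example below is a hedged illustration, not the paper's MAPStoolbox (which is in Matlab and uses t-SNE): it builds a like-to-like connectivity matrix for orientation-tuned neurons and embeds it in 2D with classical multidimensional scaling as a simple stand-in for t-SNE, so that strongly connected neurons land close together on the map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy connectivity: like-to-like wiring between orientation-tuned neurons.
n = 120
pref = rng.uniform(0, np.pi, n)                # preferred orientations
diff = np.abs(pref[:, None] - pref[None, :])
diff = np.minimum(diff, np.pi - diff)          # circular orientation distance
connectivity = np.exp(-(diff / 0.4) ** 2)

# Dissimilarity: strongly connected neurons should be placed close together.
d = 1.0 - connectivity / connectivity.max()
np.fill_diagonal(d, 0.0)

# Classical MDS as a simple stand-in for t-SNE: double-center the squared
# dissimilarities and take the top-2 eigenvectors as 2D map coordinates.
j = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * j @ (d ** 2) @ j
vals, vecs = np.linalg.eigh(b)
xy = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))
print(xy.shape)  # one 2D map position per neuron
```

Feature-ordered maps such as hypercolumns then emerge from the geometry of the embedded connectivity rather than from any explicit orientation template.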
Several studies have probed perceptual performance at different times after a self-paced motor action and found frequency-specific modulations of perceptual performance phase-locked to the action. Such action-related modulation has been reported for various frequencies and modulation strengths. In an attempt to establish a basic effect at the population level, we had a relatively large number of participants (n=50) perform a self-paced button press followed by a detection task at threshold, and we applied both fixed- and random-effects tests. The combined data of all trials and participants surprisingly did not show any significant action-related modulation. However, based on previous studies, we explored the possibility that such modulation depends on the participant’s internal state. Indeed, when we split trials based on performance in neighboring trials, then trials in periods of low performance showed an action-related modulation at ≈17 Hz. When we split trials based on the performance in the preceding trial, we found that trials following a “miss” showed an action-related modulation at ≈17 Hz. Finally, when we split participants based on their false-alarm rate, we found that participants with no false alarms showed an action-related modulation at ≈17 Hz. All these effects were significant in random-effects tests, supporting an inference on the population. Together, these findings indicate that action-related modulations are not always detectable. However, the results suggest that specific internal states such as lower attentional engagement and/or higher decision criterion are characterized by a modulation in the beta-frequency range.
Several recent studies investigated the rhythmic nature of cognitive processes that lead to perception and behavioral report. These studies used different methods, and there has not yet been an agreement on a general standard. Here, we present a way to test and quantitatively compare these methods. We simulated behavioral data from a typical experiment and analyzed these data with several methods. We applied the main methods found in the literature, namely sine-wave fitting, the Discrete Fourier Transform (DFT) and the Least Square Spectrum (LSS). DFT and LSS can be applied both on the averaged accuracy time course and on single trials. LSS is mathematically equivalent to DFT in the case of regular, but not irregular sampling - which is more common. LSS additionally offers the possibility to take into account a weighting factor which affects the strength of the rhythm, such as arousal. Statistical inferences were done either on the investigated sample (fixed-effect) or on the population (random-effect) of simulated participants. Multiple comparisons across frequencies were corrected using False-Discovery-Rate, Bonferroni, or the Max-Based approach. To perform a quantitative comparison, we calculated Sensitivity, Specificity and D-prime of the investigated analysis methods and statistical approaches. Within the investigated parameter range, single-trial methods had higher sensitivity and D-prime than the methods based on the averaged-accuracy-time-course. This effect was further increased for a simulated rhythm of higher frequency. If an additional (observable) factor influenced detection performance, adding this factor as weight in the LSS further improved Sensitivity and D-prime. For multiple comparison correction, the Max-Based approach provided the highest Specificity and D-prime, closely followed by the Bonferroni approach. 
Given a fixed total number of trials, the random-effect approach had higher D-prime when trials were distributed over a larger number of participants, even though this gave fewer trials per participant. Finally, we present the idea of using a dampened sinusoidal oscillator instead of a simple sinusoidal function, to further improve the fit to the behavioral rhythmicity observed after a reset event.
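The dampened-oscillator idea can be illustrated with a short fit. This sketch uses made-up parameter values (not those of the simulations above): it generates a noisy accuracy time course containing a decaying 8 Hz rhythm after a reset event and recovers the frequency by grid search, solving amplitude, phase, and offset by linear least squares at each grid point.

```python
import numpy as np

rng = np.random.default_rng(1)

def damped_sine(t, amp, freq, phase, decay, offset):
    """Dampened sinusoid: a behavioral rhythm that decays after a reset."""
    return offset + amp * np.exp(-t / decay) * np.sin(2 * np.pi * freq * t + phase)

# Noisy accuracy time course with a decaying 8 Hz rhythm (illustrative values).
t = np.linspace(0, 1.0, 200)
acc = damped_sine(t, 0.15, 8.0, 0.3, 0.4, 0.5) + rng.normal(0, 0.02, t.size)

# Grid search over frequency and decay; for each grid point, the sine and
# cosine amplitudes and the offset are fitted by linear least squares.
best = (np.inf, None)
for freq in np.arange(2.0, 15.1, 0.25):
    for decay in (0.2, 0.4, 0.8):
        env = np.exp(-t / decay)
        X = np.column_stack([env * np.sin(2 * np.pi * freq * t),
                             env * np.cos(2 * np.pi * freq * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, acc, rcond=None)
        sse = np.sum((acc - X @ coef) ** 2)
        if sse < best[0]:
            best = (sse, freq)

print(best[1])  # recovered frequency, expected near 8.0
```

The exponential envelope is what distinguishes this fit from a plain sine-wave fit: it lets the model capture rhythmicity that is strong immediately after the reset and fades afterwards.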
Music, like language, is characterized by hierarchically organized structure that unfolds over time. Music listening therefore requires not only the tracking of notes and beats but also internally constructing high-level musical structures or phrases and anticipating incoming contents. Unlike for language, mechanistic evidence for online musical segmentation and prediction at a structural level is sparse. We recorded neurophysiological data from participants listening to music in its original forms as well as in manipulated versions with locally or globally reversed harmonic structures. We discovered a low-frequency neural component that modulated the neural rhythms of beat tracking and reliably parsed musical phrases. We next identified phrasal phase precession, suggesting that listeners established structural predictions from ongoing listening experience to track phrasal boundaries. The data point to brain mechanisms that listeners use to segment continuous music at the phrasal level and to predict abstract structural features of music.
Dendrites display a striking variety of neuronal type-specific morphologies, but the mechanisms and principles underlying such diversity remain elusive. A major player in defining the morphology of dendrites is the neuronal cytoskeleton, including evolutionarily conserved actin-modulatory proteins (AMPs). Still, we lack a clear understanding of how AMPs might support developmental phenomena such as neuron-type specific dendrite dynamics. To address precisely this level of in vivo specificity, we concentrated on a defined neuronal type, the class III dendritic arborisation (c3da) neuron of Drosophila larvae, displaying actin-enriched short terminal branchlets (STBs). Computational modelling reveals that the main branches of c3da neurons follow a general growth model based on optimal wiring, but the STBs do not. Instead, model STBs are defined by a short reach and a high affinity to grow towards the main branches. We thus concentrated on c3da STBs and developed new methods to quantitatively describe dendrite morphology and dynamics based on in vivo time-lapse imaging of mutants lacking individual AMPs. In this way, we extrapolated the role of these AMPs in defining STB properties. We propose that dendrite diversity is supported by the combination of a common step, refined by a neuron type-specific second level. For c3da neurons, we present a molecular model of how the combined action of multiple AMPs in vivo define the properties of these second level specialisations, the STBs.
Path integration is a sensorimotor computation that can be used to infer latent dynamical states by integrating self-motion cues. We studied the influence of sensory observation (visual/vestibular) and latent control dynamics (velocity/acceleration) on human path integration using a novel motion-cueing algorithm. Sensory modality and control dynamics were both varied randomly across trials, as participants controlled a joystick to steer to a memorized target location in virtual reality. Visual and vestibular steering cues allowed comparable accuracies only when participants controlled their acceleration, suggesting that vestibular signals, on their own, fail to support accurate path integration in the absence of sustained acceleration. Nevertheless, performance in all conditions reflected a failure to fully adapt to changes in the underlying control dynamics, a result that was well explained by a bias in the dynamics estimation. This work demonstrates how an incorrect internal model of control dynamics affects navigation in volatile environments in spite of continuous sensory feedback.
Under natural conditions, the visual system often sees a given input repeatedly. This provides an opportunity to optimize processing of the repeated stimuli. Stimulus repetition has been shown to strongly modulate neuronal gamma-band synchronization, yet crucial questions remained open. Here we used magnetoencephalography in 30 human subjects and find that gamma decreases across ~10 repetitions and then increases across further repetitions, revealing plastic changes of the activated neuronal circuits. Crucially, changes induced by one stimulus did not affect responses to other stimuli, demonstrating stimulus specificity. Changes partially persisted when the inducing stimulus was repeated after 25 minutes of intervening stimuli. They were strongest in early visual cortex and increased interareal feedforward influences. Our results suggest that early visual cortex gamma synchronization enables adaptive neuronal processing of recurring stimuli. These and previously reported changes might be due to an interaction of oscillatory dynamics with established synaptic plasticity mechanisms.
Spatial attention increases both inter-areal synchronization and spike rates across the visual hierarchy. To investigate whether these attentional changes reflect distinct or common mechanisms, we performed simultaneous laminar recordings of identified cell classes in macaque V1 and V4. Enhanced V4 spike rates were expressed by both excitatory neurons and fast-spiking interneurons, and were most prominent and arose earliest in time in superficial layers, consistent with a feedback modulation. By contrast, V1-V4 gamma-synchronization reflected feedforward communication and surprisingly engaged only fast-spiking interneurons in the V4 input layer. In mouse visual cortex, we found a similar motif for optogenetically identified inhibitory-interneuron classes. Population decoding analyses further indicate that feedback-related increases in spike rates encoded attention more reliably than feedforward-related increases in synchronization. These findings reveal distinct, cell-type-specific feedforward and feedback pathways for the attentional modulation of inter-areal synchronization and spike rates, respectively.
The traditional view on coding in the cortex is that populations of neurons primarily convey stimulus information through the spike count. However, given the speed of sensory processing, it has been hypothesized that sensory encoding may rely on the spike-timing relationships among neurons. Here, we use a recently developed method based on Optimal Transport Theory called SpikeShip to study the encoding of natural movies by high-dimensional ensembles of neurons in visual cortex. SpikeShip is a generic measure of dissimilarity between spike train patterns based on the relative spike-timing relations among all neurons, with computational complexity similar to the spike count. We compared spike-count and spike-timing codes in up to N > 8000 neurons from six visual areas during natural video presentations. Using SpikeShip, we show that temporal spiking sequences convey substantially more information about natural movies than population spike-count vectors when the neural population size is larger than about 200 neurons. Remarkably, encoding through temporal sequences did not show representational drift either within or between blocks. By contrast, population firing rates showed better coding performance when there were few active neurons. Furthermore, the population firing rate showed memory across frames and formed a continuous trajectory across time. In contrast to temporal spiking sequences, population firing rates exhibited substantial drift across repetitions and between blocks. These findings suggest that spike counts and temporal sequences constitute two different coding schemes with distinct information about natural movies.
SpikeShip: a method for fast, unsupervised discovery of high-dimensional neural spiking patterns
(2023)
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure for spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory called SpikeShip, which compares multi-neuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost to make all the relative spike-timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike-timing differences. SpikeShip can be effectively computed for high-dimensional neuronal ensembles, has a low (linear) computational cost of the same order as the spike count, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike-time distribution, is not affected by firing-rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding-window approach. We compare the advantages and differences between SpikeShip and other measures such as the SPIKE and Victor-Purpura distances. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding. We show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried complementary information to conventional firing-rate codes.
SpikeShip opens new avenues for studying neural coding and memory consolidation by rapid and unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
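The rigid-translation-plus-flows decomposition can be illustrated in a minimal two-pattern example with one spike per neuron. This is a hedged sketch of the idea only, not the published SpikeShip implementation, which solves a full optimal transport problem over multiple spikes per neuron.

```python
import numpy as np

# Two spiking patterns: one spike time per neuron (simplified setting).
pattern_a = np.array([0.00, 0.05, 0.10, 0.15, 0.20])   # a fixed sequence
pattern_b = pattern_a + 0.30                           # same sequence, later
pattern_c = pattern_a[::-1]                            # reversed sequence

def rigid_and_flows(p, q):
    """Decompose per-neuron shifts into a global rigid translation
    (the L1-optimal shift is the median) plus neuron-specific flows."""
    shifts = q - p
    rigid = np.median(shifts)
    flows = shifts - rigid
    return rigid, flows

rigid_ab, flows_ab = rigid_and_flows(pattern_a, pattern_b)
rigid_ac, flows_ac = rigid_and_flows(pattern_a, pattern_c)

# A pure latency shift leaves zero residual flow; a different sequence
# (reversed order) produces large residual flows despite zero net shift.
print(round(float(rigid_ab), 2), round(float(np.abs(flows_ab).sum()), 2))  # 0.3 0.0
print(round(float(rigid_ac), 2), round(float(np.abs(flows_ac).sum()), 2))
```

In this toy setting, the residual flows are what distinguish a genuinely different spiking sequence from the same sequence at a different latency, which is the property that makes the dissimilarity insensitive to global latency shifts.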
Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, relative duration and relative distance in time of two sequentially-organized events (standard S, with constant duration, and comparison C, varying trial-by-trial) are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps counteract this serial distortion effect only if the constant S is in first position, but not if the unpredictable C is in first position. These results suggest the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortions in time perception. We simulated our behavioral results with a Bayesian model and replicated the finding that participants disproportionately expand first-position dynamic (unpredictable) short events. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.
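The Bayesian-observer intuition behind such duration distortions can be sketched in a few lines: a prior over durations pulls noisy measurements toward its mean, expanding short events and compressing long ones. All numbers below are illustrative, not the study's fitted model parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical observer: noisy duration measurements combined with a
# Gaussian prior over durations; the posterior mean shrinks estimates
# toward the prior mean (regression toward the mean).
prior_mean, prior_sd = 0.6, 0.15          # seconds (illustrative)
noise_sd = 0.10                           # measurement noise (illustrative)

def estimate(true_duration, n=10000):
    m = true_duration + rng.normal(0, noise_sd, n)     # noisy measurements
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)      # reliability weight
    return (w * m + (1 - w) * prior_mean).mean()       # mean posterior estimate

short, long_ = 0.4, 0.8
print(estimate(short) > short)   # short events are expanded toward the prior
print(estimate(long_) < long_)   # long events are compressed
```

A duration-dependent bias of this kind is the simplest mechanism by which a strong prior could distort the perceived duration of unpredictable events while sparing highly predictable ones.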
Rhythmic flicker stimulation has gained interest as a treatment for neurodegenerative diseases and a method for frequency tagging neural activity in human EEG/MEG recordings. Yet, little is known about the way in which flicker-induced synchronization propagates across cortical levels and impacts different cell types. Here, we used Neuropixels to simultaneously record from LGN, V1, and CA1 while presenting visual flicker stimuli at different frequencies. LGN neurons showed strong phase locking up to 40Hz, whereas phase locking was substantially weaker in V1 units and absent in CA1 units. Laminar analyses revealed an attenuation of phase locking at 40Hz for each processing stage, with substantially weaker phase locking in the superficial layers of V1. Gamma-rhythmic flicker predominantly entrained fast-spiking interneurons. Optotagging experiments showed that these neurons correspond to either PV+ or narrow-waveform Sst+ neurons. A computational model could explain the observed differences in phase locking based on the neurons’ capacitative low-pass filtering properties. In summary, the propagation of synchronized activity and its effect on distinct cell types strongly depend on its frequency.
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models comparing those with all 15 ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the numbers of ion channels artificially in the reduced models recovered these advantages confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
Analyzing non-invasive recordings of electroencephalography (EEG) and magnetoencephalography (MEG) directly in sensor space, using the signal from individual sensors, is a convenient and standard way of working with this type of data. However, volume conduction introduces considerable challenges for sensor space analysis. While the general idea of signal mixing due to volume conduction in EEG/MEG is recognized, the implications have not yet been clearly exemplified. Here, we illustrate how different types of activity overlap on the level of individual sensors. We show spatial mixing in the context of alpha rhythms, which are known to have generators in different areas of the brain. Using simulations with a realistic 3D head model and lead field and data analysis of a large resting-state EEG dataset, we show that electrode signals can be differentially affected by spatial mixing by computing a sensor complexity measure. While prominent occipital alpha rhythms result in less heterogeneous spatial mixing on posterior electrodes, central electrodes show a diversity of contributing rhythms. This makes the individual contributions, such as the sensorimotor mu-rhythm and temporal alpha rhythms, hard to disentangle from the dominant occipital alpha. Additionally, we show how strong occipital rhythms can contribute the majority of activity to frontal channels, potentially compromising analyses that are solely conducted in sensor space. We also outline specific consequences of signal mixing for frequently used assessments of power, power ratios and connectivity profiles in basic research and for neurofeedback applications. With this work, we hope to illustrate the effects of volume conduction in a concrete way, such that the provided practical illustrations may be of use to EEG researchers in order to evaluate whether sensor space is an appropriate choice for their topic of investigation.
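The mixing effect can be made concrete with a toy forward model. This is a hedged sketch with a made-up lead field, not the realistic 3D head model used above: three rhythmic sources project onto a few sensors, and the strong occipital source ends up dominating even a frontal channel.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: 3 sources (occipital alpha, sensorimotor mu, temporal
# alpha) project onto 4 sensors through a fixed mixing (lead-field) matrix.
fs, dur = 250, 10
t = np.arange(0, dur, 1 / fs)
occ_alpha = 2.0 * np.sin(2 * np.pi * 10 * t)           # strong 10 Hz source
mu_rhythm = 0.5 * np.sin(2 * np.pi * 11 * t + 1.0)     # weaker 11 Hz source
tmp_alpha = 0.4 * np.sin(2 * np.pi * 9 * t + 2.0)      # weaker 9 Hz source
sources = np.vstack([occ_alpha, mu_rhythm, tmp_alpha])

# Hypothetical lead field: rows = sensors, columns = sources; even a
# frontal sensor picks up the occipital source through volume conduction.
leadfield = np.array([[1.0, 0.1, 0.2],    # posterior: dominated by occ alpha
                      [0.3, 0.8, 0.3],    # central: heterogeneous mixture
                      [0.4, 0.2, 0.2],    # frontal: mostly volume-conducted
                      [0.2, 0.3, 0.9]])   # temporal
sensors = leadfield @ sources + 0.05 * rng.normal(size=(4, t.size))

# Fraction of frontal-sensor variance explained by the occipital source alone.
frontal_occ = leadfield[2, 0] * occ_alpha
share = np.var(frontal_occ) / np.var(sensors[2])
print(share > 0.5)  # the frontal channel is dominated by the occipital source
```

A frontal-channel "alpha" analysis in this toy model would largely re-measure the occipital generator, which is exactly the sensor-space pitfall described above.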
Anticipating future events is a key computational task for neuronal networks. Experimental evidence suggests that reliable temporal sequences in neural activity play a functional role in the association and anticipation of events in time. However, how neurons can differentiate and anticipate multiple spike sequences remains largely unknown. We implement a learning rule based on predictive processing, where neurons exclusively fire for the initial, unpredictable inputs in a spiking sequence, leading to an efficient representation with reduced post-synaptic firing. Combining this mechanism with inhibitory feedback leads to sparse firing in the network, enabling neurons to selectively anticipate different sequences in the input. We demonstrate that intermediate levels of inhibition are optimal to decorrelate neuronal activity and to enable the prediction of future inputs. Notably, each sequence is independently encoded in the sparse, anticipatory firing of the network. Overall, our results demonstrate that the interplay of self-supervised predictive learning rules and inhibitory feedback enables fast and efficient classification of different input sequences.
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo, and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither induction of extrinsic synaptic plasticity nor the blockage of presynaptic activity degrades the lognormal-like distribution, but both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient for the generation of the lognormal-like distribution of spine sizes, while a combination of intrinsic and extrinsic synaptic plasticity maintains it.
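How intrinsic (activity-independent) plasticity alone can generate a lognormal-like size distribution can be illustrated with a toy multiplicative-dynamics model; such dynamics are known to drive values toward lognormal shapes. The parameters below are illustrative, not those of the study's computational model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy intrinsic plasticity: each step multiplies every spine size by a
# small random factor (multiplicative, activity-independent dynamics).
n_spines, n_steps = 5000, 200
size = np.ones(n_spines)
for _ in range(n_steps):
    size *= np.exp(rng.normal(0, 0.05, n_spines))   # random growth/shrinkage
    size = np.clip(size, 1e-3, None)                # sizes stay positive

# Lognormal check: log sizes should be approximately Gaussian (skew ~ 0),
# while raw sizes are right-skewed (more small spines than large ones).
log_size = np.log(size)
raw_skew = ((size - size.mean())**3).mean() / size.std()**3
log_skew = ((log_size - log_size.mean())**3).mean() / log_size.std()**3
print(raw_skew > 1.0, abs(log_skew) < 0.2)
```

Because each update is proportional to the current size, the log sizes perform a random walk and the size distribution becomes lognormal-like without any extrinsic, activity-dependent input.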
Brookshire (2022) claims that previous analyses of periodicity in detection performance after a reset event suffer from extreme false-positive rates. Here we show that this conclusion is based on an incorrect implementation of a null hypothesis of aperiodicity, and that a correct implementation confirms low false-positive rates. Furthermore, we clarify that the previously used method of shuffling-in-time, and thereby shuffling-in-phase, cleanly implements the null hypothesis of no temporal structure after the reset, and thereby of no phase locking to the reset. Moving from a corresponding phase-locking spectrum to an inference on the periodicity of the underlying process can be accomplished by parameterizing the spectrum. This can separate periodic from non-periodic components, and quantify the strength of periodicity.
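A minimal sketch of the shuffling-in-time null is shown below, simplified relative to the published analyses and using simulated data with illustrative frequencies: shuffling the pairing between single-trial outcomes and their times after the reset destroys any phase locking, yielding a permutation distribution against which the observed spectrum is tested.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate 4000 trials: detection probability modulated at 8 Hz after reset.
n = 4000
t = rng.uniform(0, 1.0, n)                         # target time after reset
p_hit = 0.5 + 0.2 * np.sin(2 * np.pi * 8 * t)
hit = rng.uniform(size=n) < p_hit

def spectrum(outcome, times, freqs):
    """Single-trial DFT amplitude of outcome as a function of time."""
    o = outcome - outcome.mean()
    return np.abs(np.exp(-2j * np.pi * freqs[:, None] * times) @ o) / len(o)

freqs = np.arange(2, 20.0)
observed = spectrum(hit, t, freqs)

# Null hypothesis of no temporal structure after the reset: shuffle the
# pairing between outcomes and times, destroying any phase locking.
n_perm = 200
null = np.empty((n_perm, freqs.size))
for i in range(n_perm):
    null[i] = spectrum(rng.permutation(hit), t, freqs)

# Permutation p-value at 8 Hz (max-based correction would use null.max(1)).
f8 = np.where(freqs == 8)[0][0]
p = (null[:, f8] >= observed[f8]).mean()
print(p < 0.05)
```

Because only the outcome-time pairing is shuffled, the trial times and hit rate are preserved exactly, so the permutation distribution reflects the absence of phase locking rather than any change in overall performance.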
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
Speech imagery (the ability to generate internally quasi-perceptual experiences of speech) is a fundamental ability linked to cognitive functions such as inner speech, phonological working memory, and predictive processing. Speech imagery is also considered an ideal tool to test theories of overt speech. The study of speech imagery is challenging, primarily because of the absence of overt behavioral output as well as the difficulty in temporally aligning imagery events across trials and individuals. We used magnetoencephalography (MEG) paired with temporal-generalization-based neural decoding and a simple behavioral protocol to determine the processing stages underlying speech imagery. We monitored participants’ lip and jaw micromovements during mental imagery of syllable production using electromyography. Decoding participants’ imagined syllables revealed a sequence of task-elicited representations. Importantly, participants’ micromovements did not discriminate between syllables. The decoded sequence of neuronal patterns maps well onto the predictions of current computational models of overt speech motor control and provides evidence for hypothesized internal and external feedback loops for speech planning and production, respectively. Additionally, the results expose the compressed nature of representations during planning which contrasts with the natural rate at which internal productions unfold. We conjecture that the same sequence underlies the motor-based generation of sensory predictions that modulate speech perception as well as the hypothesized articulatory loop of phonological working memory. The results underscore the potential of speech imagery, based on new experimental approaches and analytical methods, and further pave the way for successful non-invasive brain-computer interfaces.
Research points to neurofunctional differences underlying fluent speech production in stutterers and non-stutterers. There has been considerably less work focusing on the processes that underlie stuttered speech, primarily due to the difficulty of reliably eliciting stuttering in the unnatural contexts associated with neuroimaging experiments. We used magnetoencephalography (MEG) to test the hypothesis that stuttering events result from global motor inhibition–a “freeze” response typically characterized by increased beta power in nodes of the action-stopping network. We leveraged a novel clinical interview to develop participant-specific stimuli in order to elicit a comparable amount of stuttered and fluent trials. Twenty-nine adult stutterers participated. The paradigm included a cue prior to a go signal, which allowed us to isolate processes associated with stuttered and fluent trials prior to speech initiation. During this pre-speech time window, stuttered trials were associated with greater beta power in the right pre-supplementary motor area, a key node in the action-stopping network, compared to fluent trials. Beta power in the right pre-supplementary area was related to a clinical measure of stuttering severity. We also found that anticipated words identified independently by participants were stuttered more often than those generated by the researchers, which were based on the participants’ reported anticipated sounds. This suggests that global motor inhibition results from stuttering anticipation. This study represents the largest comparison of stuttered and fluent speech to date. The findings provide a foundation for clinical trials that test the efficacy of neuromodulation on stuttering. Moreover, our study demonstrates the feasibility of using our approach for eliciting stuttering during MEG and functional magnetic resonance imaging experiments so that the neurobiological bases of stuttered speech can be further elucidated.
The ability to extract regularities from the environment is arguably an adaptive characteristic of intelligent systems. In the context of speech, statistical learning is thought to be an important mechanism for language acquisition. By considering individual differences in speech auditory-motor synchronization, an independent component analysis of fMRI data revealed that the neural substrates of statistical word form learning are not fully shared across individuals. While a network of auditory and superior pre/motor regions is universally activated in the process of learning, a fronto-parietal network is instead additionally and selectively engaged by some individuals, boosting their performance. Furthermore, interfering with the use of this network via articulatory suppression (producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights on language-related statistical learning and reconciles previous contrasting findings, while highlighting the need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.
Neuronal hyperexcitability is a feature of Alzheimer’s disease (AD). Three main mechanisms have been proposed to explain it: (i) dendritic degeneration leading to increased input resistance, (ii) ion channel changes leading to enhanced intrinsic excitability, and (iii) synaptic changes leading to excitation-inhibition (E/I) imbalance. However, the relative contribution of these mechanisms is not fully understood. Therefore, we performed biophysically realistic multi-compartmental modelling of excitability in reconstructed CA1 pyramidal neurons of wild-type and APP/PS1 mice, a well-established animal model of AD. We show that, for synaptic activation, the excitability-promoting effects of dendritic degeneration are cancelled out by the excitability-decreasing effects of synaptic loss. We find an interesting balance of excitability regulation with enhanced degeneration in the basal dendrites of APP/PS1 cells, potentially leading to increased excitation by the apical but decreased excitation by the basal Schaffer collateral pathway. Furthermore, our simulations reveal that three additional pathomechanistic scenarios can account for the experimentally observed increase in firing and bursting of CA1 pyramidal neurons in APP/PS1 mice: scenario 1, increased excitatory burst input; scenario 2, enhanced E/I ratio; and scenario 3, alteration of intrinsic ion channels (IAHP down-regulated; INap, INa and ICaT up-regulated) in addition to an enhanced E/I ratio. Our work supports the hypothesis that pathological network and ion channel changes are major contributors to neuronal hyperexcitability in AD. Overall, our results are in line with the concept of multi-causality and degeneracy, according to which multiple different disruptions are separately sufficient but no single disruption is necessary for neuronal hyperexcitability.
The neural mechanisms that unfold when humans form a large group defined by an overarching context, such as audiences in theater or sports, are largely unknown and unexplored. This is mainly due to the lack of a scalable system that can simultaneously record brain activity from a significantly large portion of such an audience. Although the technology for such a system has long been available, the high cost as well as the large overhead in human resources and logistic planning have prohibited its development. In recent years, however, reductions in technology cost and size have led to the emergence of low-cost, consumer-oriented EEG systems, developed primarily for recreational use. Here, by combining such a low-cost EEG system with other off-the-shelf hardware and tailor-made software, we developed a scalable EEG hyper-scanning system in the lab and tested it in a cinema. The system shows robust and stable performance and achieves accurate, unambiguous alignment of the data recorded from the different EEG headsets. These characteristics, combined with short preparation time and low cost, make it an ideal candidate for recording large portions of audiences.
Research on psychopathy has so far been largely limited to the investigation of high-level processes, such as emotion perception and regulation. In the present work, we investigate whether psychopathy has an effect on the estimation of fundamental physical parameters, which are computed in the brain during early stages of sensory processing. We employed a simple task in which participants had to estimate their interpersonal distance from a moving avatar and stop it at a given distance. The facial expressions of the avatars were positive, negative, or neutral. Participants carried out the task online on their home computers. We measured psychopathy levels via a self-report questionnaire. Regardless of the degree of psychopathy, the facial expression of the avatars had no effect on distance estimation. Our results show that individuals with a high degree of psychopathy underestimated the distance of approaching avatars significantly less (let the avatar approach them significantly closer) than did participants with a lesser degree of psychopathy. Moreover, participants who scored high in Self-Centered Impulsivity underestimated the distance to approaching avatars significantly less (let the avatar approach closer) than participants with a low score. Distance estimation is considered an automatic process performed at early stages of visual processing. Therefore, our results imply that psychopathy affects basic early sensory processes, such as feature extraction, in the visual cortex.
Moving in synchrony to external rhythmic stimuli is an elementary function that humans regularly engage in. It is termed “sensorimotor synchronization” and is governed by two main parameters, the period and the phase of the movement with respect to the external rhythm. There has been an extensive body of research on the characteristics of these parameters, primarily once the movement synchronization has reached a steady state. Particular interest has been shown in how these parameters are corrected when there are deviations from the steady-state level. However, little is known about the initial “tuning-in” interval, when one aligns the movement to the external rhythm from rest. The current work investigates this “tuning-in” period for each of the four limbs and makes several novel contributions to the understanding of sensorimotor synchronization. The results suggest that phase and period alignment are separate processes. Phase alignment involves limb-specific somatosensory memory on the order of minutes, while period alignment makes very limited use of memory. Phase alignment is the primary task at first, but the brain then switches to period alignment, where it spends most of its resources. Overall, this work suggests a central, cognitive role of period alignment and a peripheral, sensorimotor role of phase alignment.
The hippocampal formation is linked to spatial navigation, but there is little corroboration from freely-moving primates with concurrent monitoring of three-dimensional head and gaze stances. We recorded neurons and local field potentials across hippocampal regions in rhesus macaques during free foraging in an open environment while tracking their head and eye. Theta band activity was intermittently present at movement onset and modulated by saccades. Many cells were phase-locked to theta, with few showing theta phase precession. Most hippocampal neurons encoded a mixture of spatial variables beyond place fields and a negligible number showed prominent grid tuning. Spatial representations were dominated by facing location and allocentric direction, mostly in head, rather than gaze, coordinates. Importantly, eye movements strongly modulated neural activity in all regions. These findings reveal that the macaque hippocampal formation represents three-dimensional space using a multiplexed code, with head orientation and eye movement properties dominating over simple place and grid coding during free exploration.
When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioral experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of these systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (Digit Span) and higher linguistic, model-based sentence predictability – particularly so at higher speech rates and for individuals with high auditory-motor synchronization. These findings support the notion of an individual preferred auditory–motor regime that allows for optimal speech processing. The data provide evidence for a model that assigns a central role to motor-system-dependent individual flexibility in continuous speech comprehension.
In meditation practices that involve focused attention to a specific object, novice practitioners often experience moments of distraction (i.e., mind wandering). Previous studies have investigated the neural correlates of mind wandering during meditation practice through electroencephalography (EEG) using linear metrics (e.g., oscillatory power). However, their results are not fully consistent. Since the brain is known to be a chaotic/nonlinear system, it is possible that linear metrics cannot fully capture the complex dynamics present in the EEG signal. In this study, we assess whether nonlinear EEG signatures can be used to characterize mind wandering during breath focus meditation in novice practitioners. For that purpose, we adopted an experience sampling paradigm in which 25 participants were iteratively interrupted during meditation practice to report whether they were focusing on the breath or thinking about something else. We compared the complexity of EEG signals during mind wandering and breath focus states using three different algorithms: Higuchi’s fractal dimension (HFD), Lempel-Ziv complexity (LZC), and sample entropy (SampEn). Our results showed that EEG complexity was generally reduced during mind wandering relative to breath focus states. We conclude that EEG complexity metrics are appropriate to disentangle mind wandering from breath focus states in novice meditation practitioners, and that they could therefore be used in future EEG neurofeedback protocols to facilitate meditation practice.
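Of the three complexity metrics named in the abstract above, Lempel-Ziv complexity is the simplest to sketch. The toy below binarizes a signal at its median and counts phrases in an LZ78-style parse, a common simple proxy; the abstract does not specify which LZ variant the authors used, so treat the details as illustrative assumptions:

```python
import numpy as np

def lz_complexity(signal):
    """LZ78-style phrase count of a median-binarized signal.

    A simple proxy for Lempel-Ziv complexity; the exact variant used in
    the study is not specified in the abstract, so this is illustrative.
    """
    med = np.median(signal)
    bits = ''.join('1' if s > med else '0' for s in signal)
    phrases, phrase = set(), ''
    for ch in bits:
        phrase += ch
        if phrase not in phrases:   # unseen phrase: record it and restart
            phrases.add(phrase)
            phrase = ''
    # count any trailing partial phrase as one more
    return len(phrases) + (1 if phrase else 0)
```

A regular signal yields few phrases and an irregular one many; normalizing the count (e.g. by n / log2(n)) gives a rate comparable across epochs of equal length.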
Signal transfer of visual stimuli to V4 occurs in gamma-rhythmic, pulsed information packages
(2020)
Summary: Selective visual attention allows the brain to focus on behaviorally relevant information while ignoring irrelevant signals. As a possible mechanism, routing by synchronization was proposed: neural populations sending attended signals align their gamma-rhythmic activities with receiving populations, such that spikes from the senders arrive at excitability peaks of the receivers, enhancing signal transfer. Conversely, the non-attended signals arrive unaligned to the receiver’s oscillation, reducing signal transfer. Therefore, visual signals should be transferred through periodically pulsed information packages, resulting in a modulation of the stimulus content within the receiver’s activity by its gamma phase and amplitude. To test this prediction, we quantified gamma phase-specific stimulus content within neural activity from area V4 of macaques performing a visual attention task. For the attended stimulus, we find enhanced stimulus content reaching its maximum near excitability peaks, with effect magnitude increasing with oscillation amplitude, establishing a functional link between selective processing and gamma activity.
Synchronization has been implicated in neuronal communication, but causal evidence remains indirect. We used optogenetics to generate depolarizing currents in pyramidal neurons of cat visual cortex, emulating excitatory synaptic inputs under precise temporal control, while measuring spike output. Cortex transformed constant excitation into strong gamma-band synchronization, revealing the well-known cortical resonance. Increasing excitation with ramps increased the strength and frequency of synchronization. Slow, symmetric excitation profiles revealed hysteresis of power and frequency. Crucially, white-noise input sequences enabled causal analysis of network transmission, establishing that cortical resonance selectively transmits coherent input components. Models composed of recurrently coupled excitatory and inhibitory units uncovered a crucial role of feedback inhibition and suggested that hysteresis can arise through spike-frequency adaptation. The presented approach provides a powerful means to investigate the resonance properties of local circuits and probe how these properties transform input and shape transmission.
The gamma rhythm has been implicated in neuronal communication, but causal evidence remains indirect. We measured spike output of local neuronal networks and emulated their synaptic input through optogenetics. Opsins provide currents through somato-dendritic membranes, similar to synapses, yet under experimental control with high temporal precision. We expressed Channelrhodopsin-2 in excitatory neurons of cat visual cortex and recorded neuronal responses to light with different temporal characteristics. Sine waves of different frequencies entrained neuronal responses with a reliability that peaked for input frequencies in the gamma band. Crucially, we also presented white-noise sequences, because their temporal unpredictability enables analysis of causality. Neuronal spike output was caused specifically by the input’s gamma component. This gamma-specific transfer function is likely an emergent property of in-vivo networks with feedback inhibition. The method described here could reveal the transfer function between the input to any one and the output of any other neuronal group.
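The white-noise logic in the two abstracts above (temporally unpredictable input allows causal analysis of which input components drive the output) can be illustrated with a toy linear resonator standing in for the cortical network. All parameters below are assumed for illustration and are not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                  # sampling rate (Hz)
dt = 1.0 / fs
w0 = 2 * np.pi * 40.0        # resonance at 40 Hz, i.e. in the gamma band
zeta = 0.1                   # damping ratio (quality factor Q = 5)

# Drive a damped harmonic oscillator with 20 s of white noise
# (semi-implicit Euler integration, which is stable for oscillators).
drive = rng.standard_normal(20000)
out = np.empty_like(drive)
x = v = 0.0
for i, u in enumerate(drive):
    v += dt * (u - 2 * zeta * w0 * v - w0**2 * x)
    x += dt * v
    out[i] = x

# Although the input is spectrally flat, output power concentrates around
# the resonance frequency: the circuit "transmits" mainly its gamma band.
spec = np.abs(np.fft.rfft(out))**2
freqs = np.fft.rfftfreq(len(out), dt)
mask = freqs > 5.0
peak_hz = freqs[mask][np.argmax(spec[mask])]
```

In the experiments, the analogous step relates the white-noise light sequence to spiking output, which is what establishes that transmission is specific to the input's gamma component.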
Intrinsic covariation of brain activity has been studied across many levels of brain organization. Between visual areas, neuronal activity covaries primarily among portions with similar retinotopic selectivity. We hypothesized that spontaneous inter-areal co-activation is subserved by neuronal synchronization. We performed simultaneous high-density electrocorticographic recordings across several visual areas in awake monkeys to investigate spatial patterns of local and inter-areal synchronization. We show that stimulation-induced patterns of inter-areal co-activation were reactivated in the absence of stimulation. Reactivation occurred through both inter-areal co-fluctuation of local activity and inter-areal phase synchronization. Furthermore, the trial-by-trial covariance of the induced responses recapitulated the pattern of inter-areal coupling observed during stimulation, i.e. the signal correlation. Reactivation-related synchronization showed distinct peaks in the theta, alpha and gamma frequency bands. During passive states, this rhythmic reactivation was augmented by specific patterns of arrhythmic correspondence. These results suggest that networks of intrinsic covariation observed at multiple levels and with several recording techniques are related to synchronization and that behavioral state may affect the structure of intrinsic dynamics.
Individual differences in perception are widespread. Considering inter-individual variability, synesthetes experience stable additional sensations, whereas schizophrenia patients suffer perceptual deficits in, e.g., perceptual organization (alongside hallucinations and delusions). Is there a unifying principle explaining inter-individual variability in perception? There is good reason to believe perceptual experience results from inferential processes whereby sensory evidence is weighted by prior knowledge about the world. Different perceptual phenotypes may result from different precision weighting of sensory evidence and prior knowledge. We tested this hypothesis by comparing visibility thresholds in a perceptual hysteresis task across medicated schizophrenia patients, synesthetes, and controls. Participants rated the subjective visibility of stimuli embedded in noise while we parametrically manipulated the availability of sensory evidence. Additionally, precise long-term priors in synesthetes were leveraged by presenting either synesthesia-inducing or neutral stimuli. Schizophrenia patients showed increased visibility thresholds, consistent with an overreliance on sensory evidence. In contrast, synesthetes exhibited lowered thresholds exclusively for synesthesia-inducing stimuli, suggesting high-precision long-term priors. Additionally, in both synesthetes and schizophrenia patients, explicit short-term priors – introduced during the hysteresis experiment – lowered thresholds but did not normalize perception. Our results imply that distinct perceptual phenotypes might result from differences in the precision afforded to prior beliefs and sensory evidence, respectively.
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, even in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. Here, we hypothesized that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we found that attention enhanced the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Population analysis revealed that neuronal responses converged to a low dimensional subspace for natural but not for synthetic images. Critically, we determined that the attentional enhancement in stimulus decodability was captured by the dominant low dimensional subspace, suggesting an alignment between the attentional and natural stimulus variance. The alignment was pronounced for late evoked responses but not for early transient responses of V1 neurons, supporting the notion that top-down feedback was required. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. We hypothesize that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we show that attention enhances the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Attentional enhancement of stimulus decodability from population responses occurs in low dimensional spaces, as revealed by principal component analysis, suggesting an alignment between the attentional and the natural stimulus variance. Moreover, natural scenes produce stimulus-specific oscillatory responses in V1, whose power undergoes a global shift from low to high frequencies with attention. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
Neuroscience studies in non-human primates (NHP) often follow the rule of thumb that results observed in one animal must be replicated in at least one other. However, we lack a statistical justification for this rule of thumb, or an analysis of whether including three or more animals is better than including two. Yet, a formal statistical framework for experiments with few subjects would be crucial for experimental design, ethical justification, and data analysis. Also, including three or four animals in a study creates the possibility that the results observed in one animal will differ from those observed in the others: we need a statistically justified rule to resolve such situations. Here, I present a statistical framework to address these issues. This framework assumes that conducting an experiment will produce a similar result for a large proportion of the population (termed ‘representative’ animals), but will produce spurious results for a substantial proportion of animals (termed ‘outliers’); the fractions of ‘representative’ and ‘outlier’ animals are defined by a prior distribution. I propose a procedure in which experimenters collect results from M animals and accept results that are observed in at least N of them (the ‘N-out-of-M’ procedure). I show how to compute the risks α (of reaching an incorrect conclusion) and β (of failing to reach a conclusion) for any prior distribution, as a function of N and M. Strikingly, I find that the N-out-of-M model leads to a similar conclusion across a wide range of prior distributions: recording from two animals lowers the risk α and therefore ensures reliable results, but leaves a large risk β; recording from three animals and accepting results observed in two of them strikes an efficient balance between acceptable risks α and β.
This framework gives a formal justification for the rule of thumb of using at least two animals in NHP studies, suggests that recording from three animals when possible markedly improves statistical power, provides a statistical solution for situations where results are not consistent between all animals, and may apply to other types of studies involving few animals.
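Under a point-prior simplification of the framework sketched above (every animal is independently 'representative' with a fixed probability, rather than integrating over a full prior distribution), the probability that an N-out-of-M criterion is met when the effect is real is a binomial tail; one minus this quantity then approximates the risk β of failing to reach a conclusion. A hedged sketch, not the paper's actual computation:

```python
from math import comb

def p_criterion_met(n, m, p_repr):
    """P(at least n of m animals show the representative result),
    assuming each animal is independently 'representative' with
    probability p_repr (a toy point prior, not the paper's full
    prior-distribution computation)."""
    return sum(comb(m, k) * p_repr**k * (1 - p_repr)**(m - k)
               for k in range(n, m + 1))

# Example: 80% of the population is representative.
p_2_of_2 = p_criterion_met(2, 2, 0.8)   # 0.64  -> beta ~ 0.36
p_2_of_3 = p_criterion_met(2, 3, 0.8)   # 0.896 -> beta ~ 0.104
```

Consistent with the abstract's conclusion, requiring agreement in two of three animals sharply reduces the chance of failing to reach a conclusion relative to demanding agreement in both of two.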
Rhythmic neural spiking and attentional sampling arising from cortical receptive field interactions
(2018)
Summary: Growing evidence suggests that distributed spatial attention may invoke theta (3-9 Hz) rhythmic sampling processes. The neuronal basis of such attentional sampling is, however, not fully understood. Here we show, using array recordings in visual cortical area V4 of two awake macaques, that presenting separate visual stimuli to the excitatory center and suppressive surround of neuronal receptive fields elicits rhythmic multi-unit activity (MUA) at 3-6 Hz. This neuronal rhythm did not depend on small fixational eye movements. In the context of a distributed spatial attention task, during which the monkeys detected a spatially and temporally uncertain target, reaction times (RTs) exhibited similar rhythmic fluctuations. RTs were fast or slow depending on whether the target occurred during high or low MUA, resulting in rhythmic MUA-RT cross-correlations at theta frequencies. These findings suggest that theta-rhythmic neuronal activity arises from competitive receptive field interactions and that this rhythm may subserve attentional sampling.
Highlights:
* Center-surround interactions induce theta-rhythmic MUA of visual cortex neurons
* The MUA rhythm does not depend on small fixational eye movements
* Reaction time fluctuations lock to the neuronal rhythm under distributed attention
In natural environments, background noise can degrade the integrity of acoustic signals, posing a problem for animals that rely on their vocalizations for communication and navigation. A simple behavioral strategy to combat acoustic interference would be to restrict call emissions to periods of low-amplitude or no noise. Using audio playback and computational tools for the automated detection of over 2.5 million vocalizations from groups of freely vocalizing bats, we show that bats (Carollia perspicillata) can dynamically adapt the timing of their calls to avoid acoustic jamming in both predictably and unpredictably patterned noise. This study demonstrates that bats spontaneously seek out temporal windows of opportunity for vocalizing in acoustically crowded environments, providing a mechanism for efficient echolocation and communication in cluttered acoustic landscapes.
One Sentence Summary Bats avoid acoustic interference by rapidly adjusting the timing of vocalizations to the temporal pattern of varying noise.
Selective attention implements preferential routing of attended stimuli, likely through increasing the influence of the respective synaptic inputs on higher-area neurons. As the inputs of competing stimuli converge onto postsynaptic neurons, presynaptic circuits might offer the best target for attentional top-down influences. If those influences enabled presynaptic circuits to selectively entrain postsynaptic neurons, this might lead to selective routing. Indeed, when two visual stimuli induce two gamma rhythms in V1, only the gamma induced by the attended stimulus entrains gamma in V4. Here, we modeled this selective entrainment with a Dynamic Causal Model for Cross-Spectral Densities and found that it can be explained by attentional modulation of intrinsic V1 connections. Specifically, local inhibition was decreased in the granular input layer and increased in the supragranular output layer of the V1 circuit that processed the attended stimulus. Thus, presynaptic attentional influences and ensuing entrainment were sufficient to mediate selective routing.
Selective attention implements preferential routing of attended stimuli, likely through increasing the influence of the respective synaptic inputs on higher-area neurons. As the inputs of competing stimuli converge onto postsynaptic neurons, presynaptic circuits might offer the best target for attentional top-down influences. If those influences enabled presynaptic circuits to selectively entrain postsynaptic neurons, this might explain selective routing. Indeed, when two visual stimuli induce two gamma rhythms in V1, only the gamma induced by the attended stimulus entrains gamma in V4. Here, we modeled induced responses with a Dynamic Causal Model for Cross-Spectral Densities and found that selective entrainment can be explained by attentional modulation of intrinsic V1 connections. Specifically, local inhibition was decreased in the granular input layer and increased in the supragranular output layer of the V1 circuit that processed the attended stimulus. Thus, presynaptic attentional influences and ensuing entrainment were sufficient to mediate selective routing.
In a dynamic environment, the already limited information that human working memory can maintain needs to be constantly updated to optimally guide behaviour. Indeed, previous studies showed that working memory representations are continuously being transformed during delay periods leading up to a response. This goes hand-in-hand with the removal of task-irrelevant items. However, does such removal also include veridical, original stimuli, as they were prior to transformation? Here we aimed to assess the neural representation of task-relevant transformed representations, compared to the no-longer-relevant veridical representations they originated from. We applied multivariate pattern analysis to electroencephalographic data during maintenance of orientation gratings with and without mental rotation. During maintenance, we perturbed the representational network by means of a visual impulse stimulus, and were thus able to successfully decode veridical as well as imaginary, transformed orientation gratings from impulse-driven activity. On the one hand, the impulse response reflected only task-relevant (cued), but not task-irrelevant (uncued) items, suggesting that the latter were quickly discarded from working memory. By contrast, even though the original cued orientation gratings were also no longer task-relevant after mental rotation, these items continued to be represented next to the rotated ones, in different representational formats. This seemingly inefficient use of scarce working memory capacity was associated with reduced probe response times and may thus serve to increase precision and flexibility in guiding behaviour in dynamic environments.
Cognition requires the dynamic modulation of effective connectivity, i.e. the modulation of the postsynaptic neuronal response to a given input. If postsynaptic neurons are rhythmically active, this might entail rhythmic gain modulation, such that inputs synchronized to phases of high gain benefit from enhanced effective connectivity. We show that visually induced gamma-band activity in awake macaque area V4 rhythmically modulates responses to unpredictable stimulus events. This modulation exceeded a simple additive superposition of a constant response onto ongoing gamma-rhythmic firing, demonstrating the modulation of multiplicative gain. Gamma phases leading to strongest neuronal responses also led to shortest behavioral reaction times, suggesting functional relevance of the effect. Furthermore, we find that constant optogenetic stimulation of anesthetized cat area 21a produces gamma-band activity entailing a similar gain modulation. As the gamma rhythm in area 21a did not spread backwards to area 17, this suggests that postsynaptic gamma is sufficient for gain modulation.
We explore the potential of optically-pumped magnetometers (OPMs) to infer the laminar origins of neural activity non-invasively. OPM sensors can be positioned closer to the scalp than conventional cryogenic MEG sensors, opening an avenue to higher spatial resolution when combined with high-precision forward modelling. By simulating the forward model projection of single dipole sources onto OPM sensor arrays with varying sensor densities and measurement axes, and employing sparse source reconstruction approaches, we find that laminar inference with OPM arrays is possible at relatively low sensor counts at moderate to high signal-to-noise ratios (SNRs). We observe improvements in laminar inference with increasing spatial sampling densities and numbers of measurement axes. Surprisingly, moving sensors closer to the scalp is less advantageous than anticipated, and even detrimental at high SNRs. Biases towards both the superficial and deep surfaces at very low SNRs, and a notable bias towards the deep surface when combining empirical Bayesian beamformer (EBB) source reconstruction with a whole-brain analysis, pose further challenges. Adequate SNR through appropriate trial numbers and shielding, as well as precise co-registration, is crucial for reliable laminar inference with OPMs.
Abstract: Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
Temporal anticipation is a fundamental process underlying complex neural functions such as associative learning, decision-making, and motor-preparation. Here we study event anticipation in its simplest form in human participants using magnetoencephalography. We distributed events in time according to different probability density functions and presented the stimuli separately in two different sensory modalities. We found that the temporal dynamics in right parietal cortex correlate with reaction times to anticipated events. Specifically, after an event occurred, event probability was represented in right parietal activity, hinting at a functional role of event-related potential component P300 in temporal expectancy. The results are consistent across both visual and auditory modalities. The right parietal cortex seems to play a central role in the processing of event probability density. Overall, this work contributes to the understanding of the neural processes involved in the anticipation of events in time.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic: brain signals can either share common information (redundancy) or encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five common awake marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance, feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. convey information beyond what isolated signals can convey. In other words, any two signals can either share common information (redundancy) or they can encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five common awake marmosets performing two distinct auditory oddball tasks, and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. convey information beyond what isolated signals can convey. Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in three common awake marmosets and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. convey information beyond what isolated signals can convey. Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in three common awake marmosets and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics exhibit redundancy and synergy in auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
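The co-information measure described in these abstracts can be sketched numerically for discrete signals. The implementation below is a minimal illustration, not the authors' analysis pipeline; the XOR and copy examples are toy stand-ins for synergistic and redundant coding:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete sample."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from paired discrete samples."""
    xy = np.array([f"{a},{b}" for a, b in zip(x, y)])
    return entropy(x) + entropy(y) - entropy(xy)

def co_information(x, y, s):
    """I(X;Y;S) = I(X;S) + I(Y;S) - I(X,Y;S).
    Positive -> redundant coding of S; negative -> synergistic."""
    xy = np.array([f"{a},{b}" for a, b in zip(x, y)])
    return mutual_info(x, s) + mutual_info(y, s) - mutual_info(xy, s)

# Synergy toy example: S = X XOR Y. Neither signal alone carries
# information about S, but together they determine it fully.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 4000)
y = rng.integers(0, 2, 4000)
s = x ^ y
print(co_information(x, y, s))  # clearly negative (close to -1 bit)

# Redundancy toy example: both signals are copies of S.
print(co_information(s, s, s))  # clearly positive (close to +1 bit)
```

On real ERP or broadband signals the variables would first be discretised (or a continuous estimator used), but the sign convention, negative for synergy and positive for redundancy, is the same.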
A growing body of psychophysical research reports theta (3-8 Hz) rhythmic fluctuations in visual perception that are often attributed to an attentional sampling mechanism arising from theta rhythmic neural activity in mid- to high-level cortical association areas. However, it remains unclear to what extent such neuronal theta oscillations might already emerge at early sensory cortex like the primary visual cortex (V1), e.g. from the stimulus filter properties of neurons. To address this question, we recorded multi-unit neural activity from V1 of two macaque monkeys viewing a static visual stimulus with variable sizes, orientations and contrasts. We found that among the visually responsive electrode sites, more than 50 % showed a spectral peak at theta frequencies. Theta power varied with varying basic stimulus properties. Within each of these stimulus property domains (e.g. size), there was usually a single stimulus value that induced the strongest theta activity. In addition to these variations in theta power, the peak frequency of theta oscillations increased with increasing stimulus size and also changed depending on the stimulus position in the visual field. Further analysis confirmed that this neural theta rhythm was indeed stimulus-induced and did not arise from small fixational eye movements (microsaccades). When the monkeys performed a detection task of a target embedded in a theta-generating visual stimulus, reaction times also tended to fluctuate at the same theta frequency as the one observed in the neural activity. The present study shows that a highly stimulus-dependent neuronal theta oscillation can be elicited in V1 that appears to influence the temporal dynamics of visual perception.
Quantitative MRI maps of human neocortex explored using cell type-specific gene expression analysis
(2022)
Quantitative MRI (qMRI) allows extraction of reproducible and robust parameter maps. However, the connection to underlying biological substrates remains murky, especially in the complex, densely packed cortex. We investigated associations in human neocortex between qMRI parameters and neocortical cell types by comparing the spatial distribution of the qMRI parameters longitudinal relaxation rate (R1), effective transverse relaxation rate (R2∗), and magnetization transfer saturation (MTsat) to gene expression from the Allen Human Brain Atlas, then combining this with lists of genes enriched in specific cell types found in the human brain. As qMRI parameters are magnetic field strength-dependent, the analysis was performed on MRI data at 3T and 7T. All qMRI parameters significantly covaried with genes enriched in GABA- and glutamatergic neurons, i.e. they were associated with cytoarchitecture. The qMRI parameters also significantly covaried with the distribution of genes enriched in astrocytes (R2∗ at 3T, R1 at 7T), endothelial cells (R1 and MTsat at 3T), microglia (R1 and MTsat at 3T, R1 at 7T), and oligodendrocytes (R1 at 7T). These results advance the potential use of qMRI parameters as biomarkers for specific cell types.
Some pitfalls of measuring representational similarity using Representational Similarity Analysis
(2022)
A core challenge in cognitive and brain sciences is to assess whether different biological systems represent the world in a similar manner. Representational Similarity Analysis (RSA) is an innovative approach that addresses this problem by looking for a second-order isomorphism in neural activation patterns. This innovation makes it easy to compare latent representations across individuals, species and computational models, and accounts for its popularity across disciplines ranging from artificial intelligence to computational neuroscience. Despite these successes, using RSA has led to difficult-to-reconcile and contradictory findings, particularly when comparing primate visual representations with deep neural networks (DNNs): even though DNNs have been shown to learn and behave in vastly different ways from humans, comparisons based on RSA have shown striking similarities in some studies. Here, we demonstrate some pitfalls of using RSA and explain how contradictory findings can arise due to false inferences about representational similarity based on RSA-scores. In a series of studies that capture increasingly plausible training and testing scenarios, we compare neural representations in computational models, primate cortex and human cortex. These studies reveal two problematic phenomena that are ubiquitous in current research: a “mimic effect”, where confounds in stimuli can lead to high RSA-scores between provably dissimilar systems, and a “modulation effect”, where RSA-scores become dependent on stimuli used for testing. Since our results bear on a number of influential findings, such as comparisons made between human visual representations and those of primates and DNNs, we provide recommendations to avoid these pitfalls and sketch a way forward to a more solid science of representation in cognitive systems.
The pitfalls of measuring representational similarity using representational similarity analysis
(2022)
A core challenge in neuroscience is to assess whether diverse systems represent the world similarly. Representational Similarity Analysis (RSA) is an innovative approach to address this problem and has become increasingly popular across disciplines from machine learning to computational neuroscience. Despite these successes, RSA regularly uncovers difficult-to-reconcile and contradictory findings. Here we demonstrate the pitfalls of using RSA to infer representational similarity and explain how contradictory findings arise and support false inferences when left unchecked. By comparing neural representations in primate, human and computational models, we reveal two problematic phenomena that are ubiquitous in current research: a “mimic” effect, where confounds in stimuli can lead to high RSA scores between provably dissimilar systems, and a “modulation effect”, where RSA-scores become dependent on stimuli used for testing. Since our results bear on existing findings and inferences, we provide recommendations to avoid these pitfalls and sketch a way forward.
The pitfalls of measuring representational similarity using representational similarity analysis
(2022)
A core challenge in cognitive and brain sciences is to assess whether different biological systems represent the world in a similar manner. Representational Similarity Analysis (RSA) is an innovative approach to address this problem and has become increasingly popular across disciplines ranging from artificial intelligence to computational neuroscience. Despite these successes, RSA regularly uncovers difficult-to-reconcile and contradictory findings. Here, we demonstrate the pitfalls of using RSA and explain how contradictory findings arise due to false inferences about representational similarity based on RSA-scores. In a series of studies that capture increasingly plausible training and testing scenarios, we compare neural representations in computational models, primate cortex and human cortex. These studies reveal two problematic phenomena that are ubiquitous in current research: a “mimic” effect, where confounds in stimuli can lead to high RSA-scores between provably dissimilar systems, and a “modulation effect”, where RSA-scores become dependent on stimuli used for testing. Since our results bear on a number of influential findings and the inferences drawn by current practitioners in a wide range of disciplines, we provide recommendations to avoid these pitfalls and sketch a way forward to a more solid science of representation in cognitive systems.
Representational Similarity Analysis (RSA) is an innovative approach used to compare neural representations across individuals, species and computational models. Despite its popularity within neuroscience, psychology and artificial intelligence, this approach has led to difficult-to-reconcile and contradictory findings, particularly when comparing primate visual representations with deep neural networks (DNNs). Here, we demonstrate how such contradictory findings could arise due to incorrect inferences about mechanism when comparing complex systems processing high-dimensional stimuli. In a series of studies comparing computational models, primate cortex and human cortex we find two problematic phenomena: a “mimic effect”, where confounds in stimuli can lead to high RSA-scores between provably dissimilar systems, and a “modulation effect”, where RSA- scores become dependent on stimuli used for testing. Since our results bear on a number of influential findings, we provide recommendations to avoid these pitfalls and sketch a way forward to a more solid science of representation in cognitive systems.
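The RSA pipeline and the "mimic effect" these abstracts describe can be sketched with a small NumPy example. This is an illustrative toy, not the authors' stimuli or models: two systems with unrelated unit tunings both respond mostly to one shared stimulus property (a confound), and RSA nevertheless scores them as highly similar:

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns of each stimulus pair.
    activations: (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(activations)

def spearman(a, b):
    """Spearman correlation via ranks (avoids a SciPy dependency)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(a), rank(b))[0, 1]

def rsa_score(act_a, act_b):
    """Second-order comparison: correlate the RDM upper triangles."""
    iu = np.triu_indices(act_a.shape[0], k=1)
    return spearman(rdm(act_a)[iu], rdm(act_b)[iu])

# "Mimic effect" toy: responses of both systems are dominated by one
# shared stimulus-level confound, while their unit tunings are unrelated.
rng = np.random.default_rng(1)
n_stim, n_units = 20, 100
confound = rng.normal(size=(n_stim, 1))  # shared stimulus property
sys_a = confound @ rng.normal(size=(1, n_units)) + 0.1 * rng.normal(size=(n_stim, n_units))
sys_b = confound @ rng.normal(size=(1, n_units)) + 0.1 * rng.normal(size=(n_stim, n_units))
print(rsa_score(sys_a, sys_b))  # high despite dissimilar tunings
```

Because RSA only compares the second-order geometry of responses, any confound that dominates both systems' response geometry inflates the score, which is the core of the pitfall the studies above report.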
Human language relies on hierarchically structured syntax to facilitate efficient and robust communication. The correct processing of syntactic information is essential for successful communication between speakers. As an abstract level of language, syntax has often been studied separately from the physical form of the speech signal, thus often masking the interactions that can promote better syntactic processing in the human brain. We analyzed an MEG dataset to investigate how acoustic cues, specifically prosody, interact with syntactic operations. We examined whether prosody enhances the cortical encoding of syntactic representations. We decoded left-sided dependencies directly from brain activity and evaluated possible modulations of the decoding by the presence of prosodic boundaries. Our findings demonstrate that the presence of prosodic boundaries improves the representation of left-sided dependencies, indicating the facilitative role of prosodic cues in processing abstract linguistic features. This study provides neurobiological evidence that interaction with prosody enhances syntactic processing.
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable as seen in their voltage responses to current injections in the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results remain true for the entire range of possible dendritic morphologies irrespective of any particular arborisation complexity. Also, spiking models result in morphology-invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. It rather provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
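The intuition behind this length invariance can be sketched in a single passive compartment (a deliberate simplification of the paper's full cable-theory treatment; all parameter values below are illustrative): both the leak conductance and the total synaptic drive scale with membrane area, so area, and hence dendritic length, cancels from the steady-state voltage response.

```python
import numpy as np

def steady_state_voltage(length_um, diam_um, syn_density_per_um2,
                         g_syn_us=0.0005, g_leak_per_um2=5e-5, e_syn_mv=60.0):
    """Steady-state depolarisation (mV) of a passive isopotential
    compartment under distributed synaptic input at fixed density.
    Toy parameters, not fitted to any particular cell type."""
    area = np.pi * diam_um * length_um              # membrane area (um^2)
    g_leak = g_leak_per_um2 * area                  # total leak conductance
    g_syn = syn_density_per_um2 * area * g_syn_us   # total synaptic conductance
    # Area appears in both numerator and denominator and cancels:
    return g_syn * e_syn_mv / (g_leak + g_syn)

# Same synaptic density, tenfold difference in total dendritic length:
v_short = steady_state_voltage(length_um=200.0, diam_um=1.0, syn_density_per_um2=0.01)
v_long = steady_state_voltage(length_um=2000.0, diam_um=1.0, syn_density_per_um2=0.01)
print(v_short, v_long)  # identical: length drops out
```

The response instead varies with diameter and membrane conductivity (via `g_leak_per_um2`), mirroring the abstract's claim that only average diameter and intrinsic conductivity matter for a given synaptic density.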
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
Spike count correlations (SCCs) are ubiquitous in sensory cortices, are characterized by rich structure and arise from structured internal interactions. Yet, most theories of visual perception focus exclusively on the mean responses of individual neurons. Here, we argue that feedback interactions in primary visual cortex (V1) establish the context in which individual neurons process complex stimuli and that changes in visual context give rise to stimulus-dependent SCCs. Measuring V1 population responses to natural scenes in behaving macaques, we show that the fine structure of SCCs is stimulus-specific and variations in response correlations across stimuli are independent of variations in response means. Moreover, we demonstrate that stimulus-specificity of SCCs in V1 can be directly manipulated by controlling the high-order structure of synthetic stimuli. We propose that stimulus-specificity of SCCs is a natural consequence of hierarchical inference where inferences on the presence of high-level image features modulate inferences on the presence of low-level features.
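The quantity at the centre of this abstract, and the claimed independence of correlations from means, can be sketched with a toy Gaussian stand-in for spike counts (illustrative only, not the macaque data): two "stimuli" with identical mean responses but different shared-fluctuation structure yield different SCC patterns.

```python
import numpy as np

def spike_count_correlations(counts):
    """Pairwise spike count correlations (SCCs) for one stimulus.
    counts: (n_trials, n_neurons) responses over repeated presentations
    of the same stimulus; returns the upper triangle of the
    trial-to-trial Pearson correlation matrix."""
    c = np.corrcoef(counts.T)
    return c[np.triu_indices(c.shape[0], k=1)]

rng = np.random.default_rng(2)
shared = rng.normal(size=(500, 1))  # trial-wise shared fluctuation

# Both "stimuli" have the same mean response (~10) for every neuron,
# but the shared fluctuation couples the neurons differently:
stim_a = 10 + shared @ np.ones((1, 3)) + rng.normal(size=(500, 3))
stim_b = 10 + shared @ np.array([[1.0, -1.0, 1.0]]) + rng.normal(size=(500, 3))

scc_a = spike_count_correlations(stim_a)
scc_b = spike_count_correlations(stim_b)
print(scc_a.mean(), scc_b.mean())  # matched means, different SCC structure
```

Real spike counts are non-negative integers and the study's analysis is far richer, but the sketch shows how the fine structure of pairwise correlations can change across stimuli while mean responses stay fixed.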
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex
(2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with neurite densities higher than necessary to reliably ensure the possible implementation of any given connection. We show that potential contacts allow the particular implementation of connectivity at a subcellular level.
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
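The proposed normalisation is simple enough to sketch directly: divide each unit's summed input by its number of afferents, so the unit responds to the fraction of its inputs that are active rather than to their absolute count. The layer construction below is an illustrative toy, not the authors' benchmark networks:

```python
import numpy as np

rng = np.random.default_rng(3)

def sparse_binary_layer(n_in, n_out, p_connect):
    """Random sparse binary connectivity; returns the weight mask and
    each output unit's fan-in (number of afferents)."""
    mask = (rng.random((n_in, n_out)) < p_connect).astype(float)
    fan_in = np.maximum(mask.sum(axis=0), 1.0)  # avoid division by zero
    return mask, fan_in

def forward(x, weights, fan_in):
    # Fan-in normalisation: each unit's drive is the *proportion* of
    # its afferents that are active, independent of how many it has.
    return x @ weights / fan_in

x = (rng.random(1000) < 0.3).astype(float)  # 30% of inputs active

w_sparse, f_sparse = sparse_binary_layer(1000, 50, p_connect=0.05)
w_dense, f_dense = sparse_binary_layer(1000, 50, p_connect=0.5)
out_sparse = forward(x, w_sparse, f_sparse)
out_dense = forward(x, w_dense, f_dense)
print(out_sparse.mean(), out_dense.mean())  # both near 0.3: fan-in drops out
```

Without the division by `fan_in`, the denser layer's summed drive would be roughly ten times larger; with it, units at very different connection densities operate in the same output range, which is the stability property the abstract attributes to size-independent biological firing rates.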
Developmental loss of ErbB4 in PV interneurons disrupts state-dependent cortical circuit dynamics
(2020)
GABAergic inhibition plays an important role in the establishment and maintenance of cortical circuits during development. Neuregulin 1 (Nrg1) and its interneuron-specific receptor ErbB4 are key elements of a signaling pathway critical for the maturation and proper synaptic connectivity of interneurons. Using conditional deletions of the ERBB4 gene in mice, we tested the role of this signaling pathway at two developmental timepoints in parvalbumin-expressing (PV) interneurons, the largest subpopulation of cortical GABAergic cells. Loss of ErbB4 in PV interneurons during embryonic, but not late postnatal, development leads to alterations in the activity of excitatory and inhibitory cortical neurons, along with severe disruption of cortical temporal organization. These impairments emerge by the end of the second postnatal week, prior to the complete maturation of the PV interneurons themselves. Early loss of ErbB4 in PV interneurons also results in profound dysregulation of excitatory pyramidal neuron dendritic architecture and a redistribution of spine density at the apical dendritic tuft. In association with these deficits, excitatory cortical neurons exhibit normal tuning for sensory inputs, but a loss of state-dependent modulation of the gain of sensory responses. Together these data support a key role for early developmental Nrg1/ErbB4 signaling in PV interneurons as a powerful mechanism underlying the maturation of both the inhibitory and excitatory components of cortical circuits.
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can appear because of spectral correlations due to common non-stationarities that may arise in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis.
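The standard phase-amplitude coupling analysis this abstract critiques can be sketched with the widely used mean-vector-length measure. The signals below are constructed analytically to skip the filtering and Hilbert-transform steps needed on real data, and the frequencies and coupling strength are illustrative:

```python
import numpy as np

def modulation_index(slow_phase, fast_amplitude):
    """Mean-vector-length phase-amplitude coupling measure: magnitude of
    the amplitude-weighted mean phase vector. Near zero when the fast
    amplitude is unrelated to the slow phase."""
    return np.abs(np.mean(fast_amplitude * np.exp(1j * slow_phase)))

t = np.arange(0, 10, 1e-3)              # 10 s at 1 kHz
phase_theta = 2 * np.pi * 6 * t         # 6 Hz "slow" phase
rng = np.random.default_rng(4)

# Genuine coupling: high-frequency amplitude waxes with theta phase.
amp_coupled = 1 + 0.8 * np.cos(phase_theta)
# No coupling: amplitude fluctuates independently of theta phase.
amp_uncoupled = 1 + 0.8 * rng.standard_normal(t.size)

mi_coupled = modulation_index(phase_theta, amp_coupled)
mi_uncoupled = modulation_index(phase_theta, amp_uncoupled)
print(mi_coupled, mi_uncoupled)  # large vs. near zero
```

On recorded data the slow phase and fast amplitude must first be extracted by band-pass filtering and a Hilbert transform, and, as the abstract stresses, common non-stationarities such as sharp non-sinusoidal transients can inflate this index without any genuine interaction between frequency components, which is why surrogate testing and careful interpretation are essential.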