Abstract
Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary
Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead, it may reflect sampling-like inference emerging from a self-organized learning process.
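The ingredients named above can be illustrated with a minimal, self-contained sketch: binary McCulloch-Pitts units, an STDP rule, synaptic normalization, and intrinsic (threshold) plasticity, all fully deterministic given a fixed seed. This is a hypothetical toy rendering of a SORN-style update; network sizes, learning rates, and wiring are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
NE, NI = 200, 40                            # excitatory / inhibitory McCulloch-Pitts units
W_ee = rng.random((NE, NE)) * (rng.random((NE, NE)) < 0.05)   # sparse E->E weights
np.fill_diagonal(W_ee, 0)
W_ei = 0.1 * rng.random((NE, NI))           # inhibitory -> excitatory weights
W_ie = 0.1 * rng.random((NI, NE))           # excitatory -> inhibitory weights
T_e, T_i = rng.random(NE), rng.random(NI)   # firing thresholds
eta_stdp, eta_ip, h_target = 0.001, 0.001, 0.1

x = (rng.random(NE) < h_target).astype(float)
y = (rng.random(NI) < h_target).astype(float)

def step(x, y, u):
    """One deterministic update with STDP, normalization, and intrinsic plasticity."""
    global W_ee, T_e
    x_new = ((W_ee @ x - W_ei @ y + u - T_e) > 0).astype(float)
    y_new = ((W_ie @ x_new - T_i) > 0).astype(float)
    # STDP: potentiate pre-before-post pairs, depress post-before-pre pairs
    W_ee += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W_ee = np.clip(W_ee, 0.0, None)
    # Synaptic normalization: incoming excitatory weights of each unit sum to ~1
    W_ee /= W_ee.sum(axis=1, keepdims=True) + 1e-12
    # Intrinsic plasticity: thresholds track a target firing rate
    T_e += eta_ip * (x_new - h_target)
    return x_new, y_new

# Drive the network with spatio-temporally varying sparse input patterns
for t in range(200):
    u = np.zeros(NE)
    u[rng.integers(0, NE, 10)] = 1.0
    x, y = step(x, y, u)
```

Despite the deterministic dynamics, repeated presentation of the same input pattern from different network states yields variable responses, which is the kind of trial-to-trial variability the abstract discusses.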
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, even in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can appear because of spectral correlations due to common non-stationarities that may arise in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis.
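The kind of CFC analysis the abstract critiques can be made concrete with one standard phase-amplitude coupling metric, the Tort-style modulation index. The sketch below is illustrative only: as the text warns, spectral correlations from shared non-stationarities can inflate such a score in the total absence of true coupling, so surrogate testing is essential before any physiological interpretation. Band edges and parameters are assumptions, not recommendations.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_index(x, fs, phase_band, amp_band, n_bins=18):
    """Tort-style phase-amplitude modulation index (one common CFC metric).

    Caveat, per the text above: apparent coupling can arise from
    non-stationarities alone, so this score needs surrogate testing.
    """
    def bandpass(sig, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    phase = np.angle(hilbert(bandpass(x, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, *amp_band)))

    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= edges[i]) & (phase < edges[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()            # amplitude distribution over phase bins
    return np.sum(p * np.log(p * n_bins)) / np.log(n_bins)   # KL from uniform, in [0, 1]

# Synthetic check: 40 Hz amplitude locked to the phase of an 8 Hz rhythm
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 8 * t)
coupled = slow + (1 + slow) * 0.5 * np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
mi = modulation_index(coupled, fs, (6, 10), (30, 50))
```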
Moving in synchrony to external rhythmic stimuli is an elementary function that humans regularly engage in. It is termed “sensorimotor synchronization” and it is governed by two main parameters, the period and the phase of the movement with respect to the external rhythm. There has been an extensive body of research on the characteristics of these parameters, primarily once the movement synchronization has reached a steady state. Particular interest has been shown in how these parameters are corrected when there are deviations from the steady-state level. However, little is known about the initial “tuning-in” interval, when one aligns the movement to the external rhythm from rest. The current work investigates this “tuning-in” period for each of the four limbs and makes several novel contributions to the understanding of sensorimotor synchronization. The results suggest that phase and period alignment are separate processes. Phase alignment involves limb-specific somatosensory memory on the order of minutes, while period alignment makes very limited use of memory. Phase alignment is the primary initial task, but the brain then switches to period alignment, on which it spends most of its resources. Overall, this work suggests a central, cognitive role of period alignment and a peripheral, sensorimotor role of phase alignment.
Branching allows neurons to make synaptic contacts with large numbers of other neurons, facilitating the high connectivity of nervous systems. Neuronal arbors have geometric properties such as branch lengths and diameters that are optimal in that they maximize signaling speeds while minimizing construction costs. In this work, we asked whether neuronal arbors have topological properties that may also optimize their growth or function. We discovered that for a wide range of invertebrate and vertebrate neurons the distributions of their subtree sizes follow power laws, implying that they are scale invariant. The power-law exponent distinguishes different neuronal cell types. Postsynaptic spines and branchlets perturb scale invariance. Through simulations, we show that the subtree-size distribution depends on the symmetry of the branching rules governing arbor growth and that optimal morphologies are scale invariant. Thus, the subtree-size distribution is a topological property that recapitulates the functional morphology of dendrites.
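The subtree-size statistic at the heart of this study is straightforward to compute from a parent-pointer representation of an arbor, as in SWC-style morphology files. A minimal sketch, assuming nodes are topologically ordered (every parent index precedes its children's indices); this is illustrative code, not the authors' analysis pipeline:

```python
import numpy as np

def subtree_sizes(parent):
    """Sizes (node counts) of the subtree rooted at every node of an arbor.

    `parent[i]` is the parent index of node i and the root has parent -1,
    as in SWC-style morphology files. Assumes parents precede children.
    """
    n = len(parent)
    sizes = np.ones(n, dtype=int)
    for i in range(n - 1, 0, -1):      # accumulate children before parents
        sizes[parent[i]] += sizes[i]
    return sizes

# A small balanced binary arbor with 7 nodes
sizes = subtree_sizes([-1, 0, 0, 1, 1, 2, 2])   # root subtree holds all 7 nodes
```

On a real morphology one would pool the resulting subtree sizes, plot the tail of their distribution on log-log axes, and read off the power-law exponent that, per the abstract, distinguishes neuronal cell types.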
Neuroscience studies in non-human primates (NHP) often follow the rule of thumb that results observed in one animal must be replicated in at least one other. However, we lack a statistical justification for this rule of thumb, or an analysis of whether including three or more animals is better than including two. Yet, a formal statistical framework for experiments with few subjects would be crucial for experimental design, ethical justification, and data analysis. Also, including three or four animals in a study creates the possibility that the results observed in one animal will differ from those observed in the others: we need a statistically justified rule to resolve such situations. Here, I present a statistical framework to address these issues. This framework assumes that conducting an experiment will produce a similar result for a large proportion of the population (termed ‘representative’), but will produce spurious results for a substantial proportion of animals (termed ‘outliers’); the fractions of ‘representative’ and ‘outlier’ animals being defined by a prior distribution. I propose a procedure in which experimenters collect results from M animals and accept results that are observed in at least N of them (‘N-out-of-M’ procedure). I show how to compute the risks α (of reaching an incorrect conclusion) and β (of failing to reach a conclusion) for any prior distribution, and as a function of N and M. Strikingly, I find that the N-out-of-M model leads to a similar conclusion across a wide range of prior distributions: recording from two animals lowers the risk α and therefore ensures a reliable result, but leaves a large risk β; recording from three animals and accepting results observed in two of them strikes an efficient balance between acceptable risks α and β.
This framework gives a formal justification for the rule of thumb of using at least two animals in NHP studies, suggests that recording from three animals when possible markedly improves statistical power, provides a statistical solution for situations where results are not consistent between all animals, and may apply to other types of studies involving few animals.
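The risk calculation behind the N-out-of-M procedure reduces to binomial probabilities averaged over the prior. The sketch below is a simplified, hypothetical rendering of the framework (a discrete prior and a majority criterion for what counts as a correct conclusion), not the paper's exact model:

```python
from math import comb

def risks(N, M, prior):
    """Risks of the 'N-out-of-M' rule under a discrete prior.

    `prior` maps each candidate population fraction p of 'representative'
    animals to its prior probability. As a simplification, a conclusion
    counts as correct when p is a majority (p > 0.5). alpha is the risk of
    accepting a result (seen in >= N of M animals) when p is a minority;
    beta is the risk of failing to accept when p is a majority.
    """
    def p_accept(p):
        # Binomial probability that at least N of M animals show the result
        return sum(comb(M, k) * p**k * (1 - p)**(M - k) for k in range(N, M + 1))

    alpha = sum(w * p_accept(p) for p, w in prior.items() if p <= 0.5)
    beta = sum(w * (1 - p_accept(p)) for p, w in prior.items() if p > 0.5)
    return alpha, beta

# Example prior: 30% chance the effect is spurious (p = 0.1), else robust (p = 0.9)
prior = {0.1: 0.3, 0.9: 0.7}
a22, b22 = risks(2, 2, prior)   # require replication in both of two animals
a23, b23 = risks(2, 3, prior)   # accept if seen in two of three animals
```

Consistent with the abstract's conclusion, under such a prior the 2-out-of-3 rule keeps α small while markedly cutting β relative to requiring agreement in both of two animals.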
The pitfalls of measuring representational similarity using representational similarity analysis
(2022)
A core challenge in neuroscience is to assess whether diverse systems represent the world similarly. Representational Similarity Analysis (RSA) is an innovative approach to address this problem and has become increasingly popular across disciplines from machine learning to computational neuroscience. Despite these successes, RSA regularly uncovers difficult-to-reconcile and contradictory findings. Here we demonstrate the pitfalls of using RSA to infer representational similarity and explain how contradictory findings arise and support false inferences when left unchecked. By comparing neural representations in primate, human and computational models, we reveal two problematic phenomena that are ubiquitous in current research: a “mimic” effect, where confounds in stimuli can lead to high RSA scores between provably dissimilar systems, and a “modulation effect”, where RSA-scores become dependent on stimuli used for testing. Since our results bear on existing findings and inferences, we provide recommendations to avoid these pitfalls and sketch a way forward.
The pitfalls of measuring representational similarity using representational similarity analysis
(2022)
A core challenge in cognitive and brain sciences is to assess whether different biological systems represent the world in a similar manner. Representational Similarity Analysis (RSA) is an innovative approach to address this problem and has become increasingly popular across disciplines ranging from artificial intelligence to computational neuroscience. Despite these successes, RSA regularly uncovers difficult-to-reconcile and contradictory findings. Here, we demonstrate the pitfalls of using RSA and explain how contradictory findings arise due to false inferences about representational similarity based on RSA-scores. In a series of studies that capture increasingly plausible training and testing scenarios, we compare neural representations in computational models, primate cortex and human cortex. These studies reveal two problematic phenomena that are ubiquitous in current research: a “mimic” effect, where confounds in stimuli can lead to high RSA-scores between provably dissimilar systems, and a “modulation effect”, where RSA-scores become dependent on stimuli used for testing. Since our results bear on a number of influential findings and the inferences drawn by current practitioners in a wide range of disciplines, we provide recommendations to avoid these pitfalls and sketch a way forward to a more solid science of representation in cognitive systems.
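The RSA pipeline under discussion can be summarized in a few lines: build each system's representational dissimilarity matrix (RDM) over the same stimuli, then correlate the two RDMs. The toy example below also hints at how a "mimic"-style effect can arise, in that two systems that merely inherit geometry from shared stimuli through different random projections can still score highly. All sizes and names are illustrative assumptions, not the paper's stimuli or models.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(X, Y, metric="euclidean"):
    """Standard RSA: correlate two systems' representational dissimilarity matrices.

    X and Y hold responses of two systems to the same stimuli
    (n_stimuli x n_features; the feature dimensions may differ).
    `pdist` returns the condensed upper triangle of each RDM;
    correlation distance is a common alternative metric.
    """
    rho, _ = spearmanr(pdist(X, metric=metric), pdist(Y, metric=metric))
    return rho

rng = np.random.default_rng(0)
stim = rng.standard_normal((20, 50))           # 20 shared 'stimuli'
sys_a = stim @ rng.standard_normal((50, 30))   # two systems with different,
sys_b = stim @ rng.standard_normal((50, 40))   # unrelated random computations
score = rsa_score(sys_a, sys_b)                # yet a high RSA score
```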
The cytoskeleton is crucial for defining neuronal-type-specific dendrite morphologies. To explore how the complex interplay of actin-modulatory proteins (AMPs) can define neuronal types in vivo, we focused on the class III dendritic arborization (c3da) neuron of Drosophila larvae. Using computational modeling, we reveal that the main branches (MBs) of c3da neurons follow general models based on optimal wiring principles, while the actin-enriched short terminal branches (STBs) require an additional growth program. To clarify the cellular mechanisms that define this second step, we thus concentrated on STBs for an in-depth quantitative description of dendrite morphology and dynamics. Applying these methods systematically to mutants of six known and novel AMPs, we revealed the complementary roles of these individual AMPs in defining STB properties. Our data suggest that diverse dendrite arbors result from a combination of optimal-wiring-related growth and individualized growth programs that are neuron-type specific.
Dendrites display a striking variety of neuronal type-specific morphologies, but the mechanisms and principles underlying such diversity remain elusive. A major player in defining the morphology of dendrites is the neuronal cytoskeleton, including evolutionarily conserved actin-modulatory proteins (AMPs). Still, we lack a clear understanding of how AMPs might support developmental phenomena such as neuron-type specific dendrite dynamics. To address precisely this level of in vivo specificity, we concentrated on a defined neuronal type, the class III dendritic arborisation (c3da) neuron of Drosophila larvae, displaying actin-enriched short terminal branchlets (STBs). Computational modelling reveals that the main branches of c3da neurons follow a general growth model based on optimal wiring, but the STBs do not. Instead, model STBs are defined by a short reach and a high affinity to grow towards the main branches. We thus concentrated on c3da STBs and developed new methods to quantitatively describe dendrite morphology and dynamics based on in vivo time-lapse imaging of mutants lacking individual AMPs. In this way, we extrapolated the role of these AMPs in defining STB properties. We propose that dendrite diversity is supported by the combination of a common step, refined by a neuron type-specific second level. For c3da neurons, we present a molecular model of how the combined action of multiple AMPs in vivo define the properties of these second level specialisations, the STBs.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here, we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects show some persistence on the timescale of minutes. Gamma increases are specific to the presented stimulus location. Further, repetition effects on gamma and on firing rates generalize to images of natural objects. These findings support the notion that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
Under natural conditions, the visual system often sees a given input repeatedly. This provides an opportunity to optimize processing of the repeated stimuli. Stimulus repetition has been shown to strongly modulate neuronal-gamma band synchronization, yet crucial questions remained open. Here we used magnetoencephalography in 30 human subjects and find that gamma decreases across ~10 repetitions and then increases across further repetitions, revealing plastic changes of the activated neuronal circuits. Crucially, changes induced by one stimulus did not affect responses to other stimuli, demonstrating stimulus specificity. Changes partially persisted when the inducing stimulus was repeated after 25 minutes of intervening stimuli. They were strongest in early visual cortex and increased interareal feedforward influences. Our results suggest that early visual cortex gamma synchronization enables adaptive neuronal processing of recurring stimuli. These and previously reported changes might be due to an interaction of oscillatory dynamics with established synaptic plasticity mechanisms.
Intrinsic covariation of brain activity has been studied across many levels of brain organization. Between visual areas, neuronal activity covaries primarily among portions with similar retinotopic selectivity. We hypothesized that spontaneous inter-areal co-activation is subserved by neuronal synchronization. We performed simultaneous high-density electrocorticographic recordings across several visual areas in awake monkeys to investigate spatial patterns of local and inter-areal synchronization. We show that stimulation-induced patterns of inter-areal co-activation were reactivated in the absence of stimulation. Reactivation occurred through both inter-areal co-fluctuation of local activity and inter-areal phase synchronization. Furthermore, the trial-by-trial covariance of the induced responses recapitulated the pattern of inter-areal coupling observed during stimulation, i.e., the signal correlation. Reactivation-related synchronization showed distinct peaks in the theta, alpha and gamma frequency bands. During passive states, this rhythmic reactivation was augmented by specific patterns of arrhythmic correspondence. These results suggest that networks of intrinsic covariation observed at multiple levels and with several recording techniques are related to synchronization and that behavioral state may affect the structure of intrinsic dynamics.
A growing body of psychophysical research reports theta (3-8 Hz) rhythmic fluctuations in visual perception that are often attributed to an attentional sampling mechanism arising from theta rhythmic neural activity in mid- to high-level cortical association areas. However, it remains unclear to what extent such neuronal theta oscillations might already emerge in early sensory cortex like the primary visual cortex (V1), e.g., from the stimulus filter properties of neurons. To address this question, we recorded multi-unit neural activity from V1 of two macaque monkeys viewing a static visual stimulus with variable sizes, orientations and contrasts. We found that among the visually responsive electrode sites, more than 50% showed a spectral peak at theta frequencies. Theta power varied with basic stimulus properties. Within each of these stimulus property domains (e.g. size), there was usually a single stimulus value that induced the strongest theta activity. In addition to these variations in theta power, the peak frequency of theta oscillations increased with increasing stimulus size and also changed depending on the stimulus position in the visual field. Further analysis confirmed that this neural theta rhythm was indeed stimulus-induced and did not arise from small fixational eye movements (microsaccades). When the monkeys performed a detection task on a target embedded in a theta-generating visual stimulus, reaction times also tended to fluctuate at the same theta frequency as the one observed in the neural activity. The present study shows that a highly stimulus-dependent neuronal theta oscillation can be elicited in V1 that appears to influence the temporal dynamics of visual perception.
Spike count correlations (SCCs) are ubiquitous in sensory cortices, are characterized by rich structure and arise from structured internal interactions. Yet, most theories of visual perception focus exclusively on the mean responses of individual neurons. Here, we argue that feedback interactions in primary visual cortex (V1) establish the context in which individual neurons process complex stimuli and that changes in visual context give rise to stimulus-dependent SCCs. Measuring V1 population responses to natural scenes in behaving macaques, we show that the fine structure of SCCs is stimulus-specific and variations in response correlations across-stimuli are independent of variations in response means. Moreover, we demonstrate that stimulus-specificity of SCCs in V1 can be directly manipulated by controlling the high-order structure of synthetic stimuli. We propose that stimulus-specificity of SCCs is a natural consequence of hierarchical inference where inferences on the presence of high-level image features modulate inferences on the presence of low-level features.
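Spike count correlations of the kind analyzed here are trial-to-trial ("noise") correlations computed per stimulus. A minimal, hypothetical simulation in which a stimulus-dependent shared gain yields stimulus-specific SCC structure; all values and the generative model are illustrative assumptions, not the paper's data or model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 8

def noise_correlations(counts):
    """Spike count correlation (SCC) matrix across repeated trials of one stimulus."""
    return np.corrcoef(counts, rowvar=False)

def simulate_counts(coupling):
    """Toy stimulus response: a shared trial-to-trial gain, coupled into each
    neuron with stimulus-dependent strength, drives Poisson spike counts."""
    shared = rng.standard_normal((n_trials, 1))
    return rng.poisson(np.exp(1.0 + coupling * shared))

# Two 'stimuli' differ only in their coupling pattern, so the mean rates are
# matched in expectation while the SCC fine structure is stimulus-specific
scc_a = noise_correlations(simulate_counts(rng.uniform(0.1, 0.5, (1, n_neurons))))
scc_b = noise_correlations(simulate_counts(rng.uniform(0.1, 0.5, (1, n_neurons))))
```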
SpikeShip: a method for fast, unsupervised discovery of high-dimensional neural spiking patterns
(2023)
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure between spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory called SpikeShip, which compares multi-neuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost to make all the relative spike-timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike timing differences. SpikeShip can be effectively computed for high-dimensional neuronal ensembles, has a low computational cost that is linear in the number of spikes, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike time distribution, is not affected by firing rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding window approach. We compare the advantages and differences between SpikeShip and other measures such as the SPIKE and Victor-Purpura distances. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding. We show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried complementary information to conventional firing rate codes.
SpikeShip opens new avenues for studying neural coding and memory consolidation by rapid and unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
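The decomposition described above, a global latency shift plus neuron-specific flows, can be sketched for the special case of equal per-neuron spike counts, where 1-D optimal transport simply matches spikes in sorted order. This is a simplified illustration of the idea, not the published SpikeShip implementation:

```python
import numpy as np

def per_neuron_flow(pattern_a, pattern_b):
    """Optimal 1-D transport flow per neuron between two spiking patterns.

    pattern_*: dict neuron_id -> array of spike times. With equal spike
    counts per neuron, 1-D optimal transport matches spikes in sorted
    order, so the per-neuron flow is the mean signed time shift.
    """
    flows = {}
    for n in pattern_a:
        a, b = np.sort(pattern_a[n]), np.sort(pattern_b[n])
        assert len(a) == len(b), "sketch assumes equal spike counts"
        flows[n] = float(np.mean(b - a))
    return flows

def decompose(pattern_a, pattern_b):
    """Split flows into a global latency shift plus neuron-specific flows."""
    flows = per_neuron_flow(pattern_a, pattern_b)
    f = np.array(list(flows.values()))
    global_shift = float(np.median(f))   # rigid temporal translation term
    specific = f - global_shift          # neuron-specific transport flows
    return global_shift, dict(zip(flows, specific))

# Pattern b is pattern a delayed by 10 ms, except neuron 1 shifts 5 ms extra
a = {0: np.array([5.0, 20.0]), 1: np.array([12.0]), 2: np.array([30.0, 41.0])}
b = {0: np.array([15.0, 30.0]), 1: np.array([27.0]), 2: np.array([40.0, 51.0])}
shift, specific = decompose(a, b)   # shift == 10.0, specific[1] == 5.0
```

The median here plays the role of the rigid translation term (it minimizes the total absolute neuron-specific flow); the residuals capture the inter-neuronal spike-timing relationships that the measure is built on.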
Human language relies on hierarchically structured syntax to facilitate efficient and robust communication. The correct processing of syntactic information is essential for successful communication between speakers. As an abstract level of language, syntax has often been studied separately from the physical form of the speech signal, thus often masking the interactions that can promote better syntactic processing in the human brain. We analyzed an MEG dataset to investigate how acoustic cues, specifically prosody, interact with syntactic operations. We examined whether prosody enhances the cortical encoding of syntactic representations. We decoded left-sided dependencies directly from brain activity and evaluated possible modulations of the decoding by the presence of prosodic boundaries. Our findings demonstrate that the presence of prosodic boundaries improves the representation of left-sided dependencies, indicating the facilitative role of prosodic cues in processing abstract linguistic features. This study provides neurobiological evidence that interaction with prosody can boost syntactic processing.
The hippocampal formation is linked to spatial navigation, but there is little corroboration from freely-moving primates with concurrent monitoring of three-dimensional head and gaze stances. We recorded neurons and local field potentials across hippocampal regions in rhesus macaques during free foraging in an open environment while tracking their head and eye. Theta band activity was intermittently present at movement onset and modulated by saccades. Many cells were phase-locked to theta, with few showing theta phase precession. Most hippocampal neurons encoded a mixture of spatial variables beyond place fields and a negligible number showed prominent grid tuning. Spatial representations were dominated by facing location and allocentric direction, mostly in head, rather than gaze, coordinates. Importantly, eye movements strongly modulated neural activity in all regions. These findings reveal that the macaque hippocampal formation represents three-dimensional space using a multiplexed code, with head orientation and eye movement properties dominating over simple place and grid coding during free exploration.