University Publications
The pitfalls of measuring representational similarity using representational similarity analysis
(2022)
A core challenge in cognitive and brain sciences is to assess whether different biological systems represent the world in a similar manner. Representational Similarity Analysis (RSA) is an innovative approach to address this problem and has become increasingly popular across disciplines ranging from artificial intelligence to computational neuroscience. Despite these successes, RSA regularly uncovers difficult-to-reconcile and contradictory findings. Here, we demonstrate the pitfalls of using RSA and explain how contradictory findings arise due to false inferences about representational similarity based on RSA-scores. In a series of studies that capture increasingly plausible training and testing scenarios, we compare neural representations in computational models, primate cortex and human cortex. These studies reveal two problematic phenomena that are ubiquitous in current research: a “mimic” effect, where confounds in stimuli can lead to high RSA-scores between provably dissimilar systems, and a “modulation effect”, where RSA-scores become dependent on stimuli used for testing. Since our results bear on a number of influential findings and the inferences drawn by current practitioners in a wide range of disciplines, we provide recommendations to avoid these pitfalls and sketch a way forward to a more solid science of representation in cognitive systems.
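For readers unfamiliar with the method, an RSA-score is typically obtained by correlating the representational dissimilarity matrices (RDMs) of two systems over a common stimulus set. The sketch below illustrates that standard second-order comparison; the distance metric, the use of Spearman correlation, and the function names are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix, flattened to its upper triangle.

    responses: array of shape (n_stimuli, n_features), e.g. voxel patterns,
    firing rates, or unit activations, one row per stimulus.
    """
    return pdist(responses, metric="correlation")  # 1 - Pearson r per stimulus pair

def rsa_score(responses_a, responses_b):
    """Second-order similarity between two systems' RDMs (Spearman rho)."""
    rho, _ = spearmanr(rdm(responses_a), rdm(responses_b))
    return rho

# Illustrative use: compare a model layer with a (simulated) cortical recording
rng = np.random.default_rng(0)
model_layer = rng.normal(size=(50, 300))      # 50 stimuli x 300 model units
cortical_data = rng.normal(size=(50, 120))    # 50 stimuli x 120 recording channels
print(rsa_score(model_layer, cortical_data))
```

Because the score depends entirely on the stimulus set over which the RDMs are computed, stimulus confounds can inflate it between provably dissimilar systems, which is the "mimic" effect described above.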
Neuroscience studies in non-human primates (NHP) often follow the rule of thumb that results observed in one animal must be replicated in at least one other. However, we lack a statistical justification for this rule of thumb, or an analysis of whether including three or more animals is better than including two. Yet a formal statistical framework for experiments with few subjects would be crucial for experimental design, ethical justification, and data analysis. Also, including three or four animals in a study creates the possibility that the results observed in one animal will differ from those observed in the others: we need a statistically justified rule to resolve such situations. Here, I present a statistical framework to address these issues. This framework assumes that conducting an experiment will produce a similar result for a large proportion of the population (termed ‘representative’) but will produce spurious results for a substantial proportion of animals (termed ‘outliers’); the fractions of ‘representative’ and ‘outlier’ animals are defined by a prior distribution. I propose a procedure in which experimenters collect results from M animals and accept results that are observed in at least N of them (the ‘N-out-of-M’ procedure). I show how to compute the risks α (of reaching an incorrect conclusion) and β (of failing to reach a conclusion) for any prior distribution and as a function of N and M. Strikingly, I find that the N-out-of-M procedure leads to a similar conclusion across a wide range of prior distributions: recording from two animals lowers the risk α and therefore ensures a reliable result, but leaves a large risk β; recording from three animals and accepting results observed in two of them strikes an efficient balance between acceptable risks α and β. This framework gives a formal justification for the rule of thumb of using at least two animals in NHP studies, suggests that recording from three animals when possible markedly improves statistical power, provides a statistical solution for situations where results are not consistent between all animals, and may apply to other types of studies involving few animals.
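The abstract does not spell out the formulas, but the logic of the N-out-of-M procedure can be sketched under one simple reading of the model: each animal is 'representative' with probability f drawn from a prior, a representative animal reproduces a real effect, and an outlier animal shows a given spurious effect with some probability q. The prior, the value of q, and the function below are illustrative assumptions rather than the paper's actual computation.

```python
import numpy as np
from scipy.stats import binom

def risks(N, M, prior_f, prior_w, q_spurious=0.2):
    """Illustrative alpha/beta risks for an 'accept if seen in >= N of M animals' rule.

    prior_f: grid of possible fractions f of 'representative' animals
    prior_w: prior weights over that grid (sums to 1)
    q_spurious: assumed chance that an outlier animal shows a given spurious effect
    """
    # beta: a real effect (reproduced by representative animals) is seen in fewer than N of M
    beta = np.sum(prior_w * binom.cdf(N - 1, M, prior_f))
    # alpha: a spurious effect (shown independently by outliers) is seen in at least N of M
    p_spur = (1.0 - prior_f) * q_spurious
    alpha = np.sum(prior_w * binom.sf(N - 1, M, p_spur))
    return alpha, beta

# Example: uniform prior over the representative fraction f in [0.5, 1.0]
f_grid = np.linspace(0.5, 1.0, 51)
w = np.full_like(f_grid, 1.0 / f_grid.size)
for N, M in [(1, 1), (2, 2), (2, 3), (3, 3)]:
    a, b = risks(N, M, f_grid, w)
    print(f"N={N}, M={M}: alpha={a:.3f}, beta={b:.3f}")
```

Under such assumptions one can tabulate α and β over candidate (N, M) designs and look for the kind of balance the abstract reports for accepting results seen in two of three animals.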
Moving in synchrony with external rhythmic stimuli is an elementary function that humans regularly engage in. It is termed “sensorimotor synchronization” and is governed by two main parameters, the period and the phase of the movement with respect to the external rhythm. There has been an extensive body of research on the characteristics of these parameters, primarily once movement synchronization has reached a steady state. Particular interest has been shown in how these parameters are corrected when there are deviations from the steady state. However, little is known about the initial “tuning-in” interval, when one aligns the movement to the external rhythm from rest. The current work investigates this “tuning-in” period for each of the four limbs and makes several novel contributions to the understanding of sensorimotor synchronization. The results suggest that phase and period alignment are separate processes. Phase alignment involves limb-specific somatosensory memory on the order of minutes, whereas period alignment relies on very little memory. Phase alignment is the primary task, but the brain then switches to period alignment, where it spends most of its resources. Overall, this work suggests a central, cognitive role of period alignment and a peripheral, sensorimotor role of phase alignment.
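Phase and period correction are commonly formalized in this literature with a two-parameter linear error-correction model (phase-correction gain α, period-correction gain β). The sketch below implements that textbook model purely as a point of reference for the "tuning-in" phase; it is not the analysis used in this work, and all parameter values are illustrative.

```python
import numpy as np

def simulate_tapping(n_taps=50, metronome_period=500.0, alpha=0.5, beta=0.1,
                     motor_noise_sd=10.0, seed=0):
    """Standard two-process (phase + period) error-correction model of tapping.

    t_{n+1} = t_n + T_n - alpha * A_n + noise   (phase correction)
    T_{n+1} = T_n - beta * A_n                  (period correction)
    where A_n = t_n - s_n is the asynchrony to the n-th metronome onset s_n.
    """
    rng = np.random.default_rng(seed)
    onsets = np.arange(n_taps) * metronome_period     # metronome onsets s_n (ms)
    taps = np.empty(n_taps)
    taps[0] = 80.0                                    # start badly out of phase
    T = metronome_period * 1.05                       # and with a slightly wrong period
    for n in range(n_taps - 1):
        A = taps[n] - onsets[n]                       # asynchrony A_n
        taps[n + 1] = taps[n] + T - alpha * A + rng.normal(0.0, motor_noise_sd)
        T = T - beta * A
    return taps - onsets                              # asynchrony time course

# First few asynchronies show the "tuning-in" from rest toward steady state
print(np.round(simulate_tapping()[:10], 1))
```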
Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and its physiological interpretation come with fundamental problems. For example, apparent CFC can arise from spectral correlations caused by common non-stationarities, even in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis.
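The "standard CFC analysis" usually refers to a phase-amplitude coupling estimate such as a Tort-style modulation index, computed from band-passed phase and amplitude signals. A minimal sketch of that estimator follows; the frequency bands, filter settings, and synthetic test signal are illustrative choices, not the authors' recommendations.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 80), n_bins=18):
    """Tort-style phase-amplitude coupling: KL divergence of the amplitude-by-phase
    distribution from uniform, normalized by log(n_bins)."""
    phase = np.angle(hilbert(band(x, *phase_band, fs)))
    amp = np.abs(hilbert(band(x, *amp_band, fs)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return np.sum(p * np.log(p / (1.0 / n_bins))) / np.log(n_bins)

# Synthetic signal with genuine coupling: gamma amplitude locked to theta phase
fs, t = 1000, np.arange(0, 10, 1 / 1000)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 50 * t)
x = theta + 0.5 * gamma + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(modulation_index(x, fs))
```

Applied to signals containing sharp transients or common non-stationarities, such an estimator can return non-zero values even when no genuine cross-frequency interaction is present, which is exactly the interpretational pitfall discussed above.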
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, even in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
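The abstract does not name the specific projection methods or classifiers, but the kind of analysis it describes can be sketched generically: project trial-wise population responses to a few dimensions and quantify how well a simple classifier separates stimulus-specific clusters. Everything below (PCA, an LDA classifier, and the simulated data) is an illustrative assumption, not the study's pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def cluster_segregation(responses, stimulus_labels, n_components=3):
    """Project trial-wise population responses onto a few principal components and
    measure how well a simple linear classifier separates the stimulus clusters."""
    low_dim = PCA(n_components=n_components).fit_transform(responses)
    clf = LinearDiscriminantAnalysis()
    return cross_val_score(clf, low_dim, stimulus_labels, cv=5).mean()

# Illustrative use with simulated trials x neurons matrices recorded before and
# after exposure, with stimulus identity per trial.
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(4), 25)                       # 4 stimuli, 25 trials each
pre  = rng.normal(size=(100, 60)) + labels[:, None] * 0.2  # weaker stimulus separation
post = rng.normal(size=(100, 60)) + labels[:, None] * 0.6  # stronger separation
print(cluster_segregation(pre, labels), cluster_segregation(post, labels))
```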
Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free, self-organizing recurrent neural network model (SORN). It combines spike-timing-dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
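As a rough illustration of the model class described here, the sketch below implements a deterministic network of binary McCulloch-Pitts units with STDP, synaptic normalization, and intrinsic (homeostatic) plasticity. It is a minimal SORN-like toy, with illustrative parameters and sizes, not the simulation used in the study.

```python
import numpy as np

class MiniSORN:
    """Minimal, illustrative self-organizing recurrent network of binary units.

    Deterministic excitatory/inhibitory McCulloch-Pitts neurons with three
    plasticity rules on the E->E weights and excitatory thresholds:
    STDP, synaptic normalization, and intrinsic (homeostatic) plasticity.
    All parameter values are illustrative.
    """

    def __init__(self, n_e=200, n_i=40, eta_stdp=0.004, eta_ip=0.01,
                 target_rate=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Sparse random E->E weights; no self-connections
        self.w_ee = rng.random((n_e, n_e)) * (rng.random((n_e, n_e)) < 0.05)
        np.fill_diagonal(self.w_ee, 0.0)
        self.w_ei = rng.random((n_e, n_i)) * 0.5   # inhibition onto E units
        self.w_ie = rng.random((n_i, n_e)) * 0.5   # excitation onto I units
        self.t_e = rng.random(n_e) * 0.5           # excitatory thresholds
        self.t_i = rng.random(n_i) * 0.5           # inhibitory thresholds (fixed)
        self.x = np.zeros(n_e)                     # E activity (0/1)
        self.y = np.zeros(n_i)                     # I activity (0/1)
        self.eta_stdp, self.eta_ip, self.target = eta_stdp, eta_ip, target_rate

    def step(self, u):
        """One deterministic update with external binary input u (length n_e)."""
        x_new = (self.w_ee @ self.x - self.w_ei @ self.y + u > self.t_e).astype(float)
        y_new = (self.w_ie @ self.x > self.t_i).astype(float)
        # STDP: strengthen pre(t-1) -> post(t), weaken post(t-1) -> pre(t)
        self.w_ee += self.eta_stdp * (np.outer(x_new, self.x) - np.outer(self.x, x_new))
        np.clip(self.w_ee, 0.0, 1.0, out=self.w_ee)
        # Synaptic normalization: incoming E->E weights of each unit sum to one
        sums = self.w_ee.sum(axis=1, keepdims=True)
        self.w_ee /= np.where(sums > 0, sums, 1.0)
        # Intrinsic plasticity: thresholds drift to keep each unit near the target rate
        self.t_e += self.eta_ip * (x_new - self.target)
        self.x, self.y = x_new, y_new
        return x_new

# Drive the network with a repeating sequence of four input "letters"
net = MiniSORN()
letters = [np.zeros(200) for _ in range(4)]
for k, s in enumerate(letters):
    s[10 * k:10 * (k + 1)] = 1.0
rates = [net.step(letters[t % 4]).mean() for t in range(2000)]
print(np.mean(rates[-200:]))   # mean rate, pushed toward the homeostatic target
```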
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary: Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead, it may reflect sampling-like inference emerging from a self-organized learning process.