Residual connections have been proposed as an architecture-based inductive bias that mitigates the problem of exploding and vanishing gradients and increases task performance in both feed-forward and recurrent networks (RNNs) trained with the backpropagation algorithm. Yet, little is known about how residual connections in RNNs influence their dynamics and fading memory properties. Here, we introduce weakly coupled residual recurrent networks (WCRNNs) in which residual connections result in well-defined Lyapunov exponents and allow for studying properties of fading memory. We investigate how the residual connections of WCRNNs influence their performance, network dynamics, and memory properties on a set of benchmark tasks. We show that several distinct forms of residual connections yield effective inductive biases that result in increased network expressivity. In particular, these are residual connections that (i) result in network dynamics at the proximity of the edge of chaos, (ii) allow networks to capitalize on characteristic spectral properties of the data, and (iii) result in heterogeneous memory properties. In addition, we demonstrate how our results can be extended to non-linear residuals and introduce a weakly coupled residual initialization scheme that can be used for Elman RNNs.
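The core idea of a weakly coupled residual update can be illustrated with a minimal sketch (a hypothetical NumPy implementation; the function name and the coupling strength `alpha` are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def wcrnn_step(h, x, W_rec, W_in, alpha):
    """One step of a toy weakly coupled residual RNN: the residual
    (identity) path dominates, and the recurrent nonlinearity enters
    only with a small coupling strength alpha, keeping the Jacobian
    close to the identity (dynamics near the edge of chaos)."""
    return h + alpha * (np.tanh(W_rec @ h + W_in @ x) - h)

rng = np.random.default_rng(0)
n, m = 8, 4
W_rec = rng.standard_normal((n, n)) / np.sqrt(n)
W_in = rng.standard_normal((n, m)) / np.sqrt(m)

h = np.zeros(n)
for t in range(100):
    x = rng.standard_normal(m)
    h = wcrnn_step(h, x, W_rec, W_in, alpha=0.1)
print(h.shape)  # (8,)
```

Because the update interpolates between the previous state and the nonlinear recurrence, small `alpha` keeps the state-to-state map close to the identity, which is one way to obtain Lyapunov exponents near zero.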
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. whether they convey information beyond what can be obtained from the isolated signals. In other words, any two signals can either share common information (redundancy) or they can encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks, and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
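Co-information itself has a closed form over joint entropies; a minimal sketch for discrete signals is given below (a plug-in entropy estimator with illustrative variable names — the ECoG analysis in the study is far more involved):

```python
import numpy as np
from collections import Counter

def entropy(*vars):
    """Joint Shannon entropy (bits) of discrete sequences."""
    counts = Counter(zip(*vars))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def co_information(x, y, z):
    """coI(X;Y;Z) = I(X;Y) - I(X;Y|Z), expanded into entropies.
    Positive values indicate redundancy, negative values synergy."""
    H = entropy
    return (H(x) + H(y) + H(z)
            - H(x, y) - H(x, z) - H(y, z)
            + H(x, y, z))

# XOR example: z = x XOR y is purely synergistic
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10000)
y = rng.integers(0, 2, 10000)
z = x ^ y
print(round(co_information(x, y, z), 2))  # ≈ -1.0 bits (synergy)
```

A fully redundant case, such as `co_information(x, x, x)`, yields approximately +1 bit, illustrating the sign convention.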
We explore the potential of optically-pumped magnetometers (OPMs) to infer the laminar origins of neural activity non-invasively. OPM sensors can be positioned closer to the scalp than conventional cryogenic MEG sensors, opening an avenue to higher spatial resolution when combined with high-precision forward modelling. By simulating the forward model projection of single dipole sources onto OPM sensor arrays with varying sensor densities and measurement axes, and employing sparse source reconstruction approaches, we find that laminar inference with OPM arrays is possible at relatively low sensor counts at moderate to high signal-to-noise ratios (SNR). We observe improvements in laminar inference with increasing spatial sampling densities and number of measurement axes. Surprisingly, moving sensors closer to the scalp is less advantageous than anticipated, and even detrimental at high SNRs. Biases towards both the superficial and deep surfaces at very low SNRs, and a notable bias towards the deep surface when combining empirical Bayesian beamformer (EBB) source reconstruction with a whole-brain analysis, pose further challenges. Adequate SNR through appropriate trial numbers and shielding, as well as precise co-registration, is crucial for reliable laminar inference with OPMs.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic: any two brain signals can either share common information (redundancy) or encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
Convolutional neural networks (CNNs) are one of the most successful computer vision systems for object recognition. Furthermore, CNNs have major applications in understanding the nature of visual representations in the human brain. Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans. Specifically, there is a major debate about whether CNNs primarily rely on surface regularities of objects, or whether they are capable of exploiting the spatial arrangement of features, similar to humans. Here, we develop a novel feature-scrambling approach to explicitly test whether CNNs use the spatial arrangement of features (i.e. object parts) to classify objects. We combine this approach with a systematic manipulation of effective receptive field sizes of CNNs as well as minimal recognizable configurations (MIRCs) analysis. In contrast to much previous literature, we provide evidence that CNNs are in fact capable of using relatively long-range spatial relationships for object classification. Moreover, the extent to which CNNs use spatial relationships depends heavily on the dataset, e.g. texture vs. sketch. In fact, CNNs even use different strategies for different classes within heterogeneous datasets (ImageNet), suggesting that CNNs have a continuous spectrum of classification strategies. Finally, we show that CNNs learn the spatial arrangement of features only up to an intermediate level of granularity, which suggests that intermediate rather than global shape features provide the optimal trade-off between sensitivity and specificity in object classification. These results provide novel insights into the nature of CNN representations and the extent to which they rely on the spatial arrangement of features for object classification.
Anticipating future events is a key computational task for neuronal networks. Experimental evidence suggests that reliable temporal sequences in neural activity play a functional role in the association and anticipation of events in time. However, how neurons can differentiate and anticipate multiple spike sequences remains largely unknown. We implement a learning rule based on predictive processing, where neurons exclusively fire for the initial, unpredictable inputs in a spiking sequence, leading to an efficient representation with reduced post-synaptic firing. Combining this mechanism with inhibitory feedback leads to sparse firing in the network, enabling neurons to selectively anticipate different sequences in the input. We demonstrate that intermediate levels of inhibition are optimal to decorrelate neuronal activity and to enable the prediction of future inputs. Notably, each sequence is independently encoded in the sparse, anticipatory firing of the network. Overall, our results demonstrate that the interplay of self-supervised predictive learning rules and inhibitory feedback enables fast and efficient classification of different input sequences.
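The qualitative idea — neurons responding only to the unpredictable start of a learned sequence — can be sketched with a toy predictive learner (an entirely hypothetical, rate-based simplification; the study uses spiking networks with inhibitory feedback, not this toy, and all names here are ours):

```python
import numpy as np

def train_predictive_unit(seq, n_epochs=50, lr=0.5):
    """Toy predictive-processing rule: learn to predict element t+1
    of a repeating sequence from element t, so that after learning
    only the first (unpredictable) element still drives a large
    prediction-error response."""
    n = seq.max() + 1
    W = np.zeros((n, n))  # W[i, j]: learned probability that j follows i
    for _ in range(n_epochs):
        for t in range(len(seq) - 1):
            pre, post = seq[t], seq[t + 1]
            W[pre] += lr * (np.eye(n)[post] - W[pre])  # move toward observed successor
    return W

def responses(seq, W):
    """Prediction error per element: 1 for the first element (nothing
    predicts it here), then 1 minus the predicted probability."""
    err = [1.0]
    for t in range(len(seq) - 1):
        err.append(1.0 - W[seq[t], seq[t + 1]])
    return np.array(err)

seq = np.array([0, 1, 2, 3])
W = train_predictive_unit(seq)
print(np.round(responses(seq, W), 2))  # first element large, rest near 0
```

After learning, the response profile is front-loaded onto the sequence onset, mirroring the reduced post-synaptic firing for predictable inputs described in the abstract.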
Aberrant neurophysiological signaling associated with speech impairments in Parkinson’s disease
(2023)
Difficulty producing intelligible speech is a debilitating symptom of Parkinson’s disease (PD). Yet, both the robust evaluation of speech impairments and the identification of the affected brain systems are challenging. Using task-free magnetoencephalography, we examine the spectral and spatial definitions of the functional neuropathology underlying reduced speech quality in patients with PD using a new approach to characterize speech impairments and a novel brain-imaging marker. We found that the interactive scoring of speech impairments in PD (N = 59) is reliable across non-expert raters, and better related to the hallmark motor and cognitive impairments of PD than automatically-extracted acoustical features. By relating these speech impairment ratings to neurophysiological deviations from healthy adults (N = 65), we show that articulation impairments in patients with PD are associated with aberrant activity in the left inferior frontal cortex, and that functional connectivity of this region with somatomotor cortices mediates the influence of cognitive decline on speech deficits.
SpikeShip: a method for fast, unsupervised discovery of high-dimensional neural spiking patterns
(2023)
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure for spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory called SpikeShip, which compares multi-neuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost to make all the relative spike timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike timing differences. SpikeShip can be effectively computed for high-dimensional neuronal ensembles, has a low (linear) computational cost of the same order as the spike count, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike time distributions, is not affected by firing rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding window approach. We compare the advantages and differences between SpikeShip and other measures such as the SPIKE and Victor-Purpura distances. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding. We show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried complementary information to conventional firing rate codes.
SpikeShip opens new avenues for studying neural coding and memory consolidation by rapid and unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
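For intuition, the decomposition into a global latency shift plus neuron-specific flows can be sketched for the special case of exactly one spike per neuron (a drastic simplification of SpikeShip, whose actual optimal-transport formulation handles arbitrary spike counts; the function name is ours):

```python
import numpy as np

def spikeship_1spike(t_a, t_b):
    """Toy version of the SpikeShip decomposition for one spike per
    neuron: the per-neuron transport flows are the spike-time
    differences, the global translation term is the median flow, and
    the dissimilarity is the mean residual flow after removing that
    global latency shift."""
    flows = np.asarray(t_b, float) - np.asarray(t_a, float)
    shift = np.median(flows)       # global latency shift
    residual = flows - shift       # neuron-specific flows
    return np.mean(np.abs(residual)), shift

# Identical patterns shifted by 5 ms -> zero dissimilarity
t_a = np.array([10.0, 12.0, 15.0, 21.0])
d, s = spikeship_1spike(t_a, t_a + 5.0)
print(d, s)  # 0.0 5.0
```

Because the global shift is removed before averaging the flows, the measure ignores overall latency changes and responds only to changes in the relative spike timing across neurons.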
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. whether they convey information beyond what can be obtained from the isolated signals. Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in three awake common marmosets and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
Sensory processing relies on interactions between excitatory and inhibitory neurons, which are often coordinated by 30-80 Hz gamma oscillations. However, the specific contributions of distinct interneurons to gamma synchronization remain unclear. We performed high-density recordings from V1 in awake mice and used optogenetics to identify PV+ (parvalbumin) and Sst+ (somatostatin) interneurons. PV interneurons were highly phase-locked to visually-induced gamma oscillations. Sst cells were heterogeneous, with only a subset of narrow-waveform cells showing strong gamma phase-locking. Interestingly, PV interneurons consistently fired at an earlier phase in the gamma cycle (≈6 ms or 60 degrees) than Sst interneurons. Consequently, PV and Sst activity showed differential temporal relations with excitatory cells. In particular, the 1st and 2nd spikes in burst events, which were strongly gamma phase-locked, shortly preceded PV and Sst activity, respectively. These findings indicate a primary role of PV interneurons in synchronizing excitatory cells and suggest that PV and Sst interneurons control the excitability of somatic and dendritic neural compartments with precise time delays coordinated by gamma oscillations.
Rhythmic flicker stimulation has gained interest as a treatment for neurodegenerative diseases and a method for frequency tagging neural activity in human EEG/MEG recordings. Yet, little is known about the way in which flicker-induced synchronization propagates across cortical levels and impacts different cell types. Here, we used Neuropixels to simultaneously record from LGN, V1, and CA1 while presenting visual flicker stimuli at different frequencies. LGN neurons showed strong phase locking up to 40 Hz, whereas phase locking was substantially weaker in V1 units and absent in CA1 units. Laminar analyses revealed an attenuation of phase locking at 40 Hz for each processing stage, with substantially weaker phase locking in the superficial layers of V1. Gamma-rhythmic flicker predominantly entrained fast-spiking interneurons. Optotagging experiments showed that these neurons correspond to either PV+ or narrow-waveform Sst+ neurons. A computational model could explain the observed differences in phase locking based on the neurons' capacitive low-pass filtering properties. In summary, the propagation of synchronized activity and its effect on distinct cell types strongly depend on its frequency.
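The low-pass intuition can be made concrete with a first-order RC membrane filter, whose gain at frequency f is 1/sqrt(1 + (2πfτ)²) — a generic textbook model, not the study's actual model, and the 20 ms time constant below is an illustrative value:

```python
import numpy as np

def membrane_gain(f_hz, tau_s):
    """Gain of a first-order (RC) membrane low-pass filter
    at frequency f_hz for a membrane time constant tau_s."""
    return 1.0 / np.sqrt(1.0 + (2 * np.pi * f_hz * tau_s) ** 2)

tau = 0.020  # illustrative 20 ms membrane time constant
for f in (4, 10, 40):
    print(f, "Hz:", round(membrane_gain(f, tau), 3))
```

With these parameters, a 40 Hz input is attenuated several-fold relative to a 4 Hz input, consistent with the weaker phase locking to gamma-rhythmic flicker at successive processing stages.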
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither induction of extrinsic synaptic plasticity nor the blockage of presynaptic activity degrades the lognormal-like distribution, but both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient for the generation of this distribution, while a combination of intrinsic and extrinsic synaptic plasticity maintains the lognormal-like distribution of spine sizes.
When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of the systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.
In order to investigate the involvement of primary visual cortex (V1) in working memory (WM), parallel, multisite recordings of multiunit activity were obtained from monkey V1 while the animals performed a delayed match-to-sample (DMS) task. During the delay period, V1 population firing rate vectors maintained a lingering trace of the sample stimulus that could be reactivated by intervening impulse stimuli that enhanced neuronal firing. This fading trace of the sample did not require active engagement of the monkeys in the DMS task and likely reflects the intrinsic dynamics of recurrent cortical networks in lower visual areas. This renders an active, attention-dependent involvement of V1 in the maintenance of working memory contents unlikely. By contrast, population responses to the test stimulus depended on the probabilistic contingencies between sample and test stimuli. Responses to tests that matched expectations were reduced, which agrees with concepts of predictive coding.
Inter-areal coherence has been hypothesized as a mechanism for inter-areal communication. Indeed, empirical studies have observed an increase in inter-areal coherence with attention. Yet, the mechanisms underlying changes in coherence remain largely unknown. Both attention and stimulus salience are associated with shifts in the peak frequency of gamma oscillations in V1, which suggests that the frequency of oscillations may play a role in facilitating changes in inter-areal communication and coherence. In this study, we used computational modeling to investigate how the peak frequency of a sender influences inter-areal coherence. We show that changes in the magnitude of coherence are largely determined by the peak frequency of the sender. However, the pattern of coherence depends on the intrinsic properties of the receiver, specifically whether the receiver integrates or resonates with its synaptic inputs. Because resonant receivers are frequency-selective, resonance has been proposed as a mechanism for selective communication. However, the pattern of coherence changes produced by a resonant receiver is inconsistent with empirical studies. By contrast, an integrator receiver does produce the pattern of coherence changes under frequency shifts in the sender that is observed in empirical studies. These results indicate that coherence can be a misleading measure of inter-areal interactions. This led us to develop a new measure of inter-areal interactions, which we refer to as Explained Power. We show that Explained Power maps directly to the signal transmitted by the sender filtered by the receiver, and thus provides a method to quantify the true signals transmitted between the sender and receiver. Together, these findings provide a model of changes in inter-areal coherence and Granger causality as a result of frequency shifts.
Representational Similarity Analysis (RSA) is an innovative approach used to compare neural representations across individuals, species and computational models. Despite its popularity within neuroscience, psychology and artificial intelligence, this approach has led to difficult-to-reconcile and contradictory findings, particularly when comparing primate visual representations with deep neural networks (DNNs). Here, we demonstrate how such contradictory findings could arise due to incorrect inferences about mechanism when comparing complex systems processing high-dimensional stimuli. In a series of studies comparing computational models, primate cortex and human cortex, we find two problematic phenomena: a "mimic effect", where confounds in stimuli can lead to high RSA-scores between provably dissimilar systems, and a "modulation effect", where RSA-scores become dependent on the stimuli used for testing. Since our results bear on a number of influential findings, we provide recommendations to avoid these pitfalls and sketch a way forward to a more solid science of representation in cognitive systems.
The traditional view on coding in the cortex is that populations of neurons primarily convey stimulus information through the spike count. However, given the speed of sensory processing, it has been hypothesized that sensory encoding may rely on the spike-timing relationships among neurons. Here, we use a recently developed method based on optimal transport theory called SpikeShip to study the encoding of natural movies by high-dimensional ensembles of neurons in visual cortex. SpikeShip is a generic measure of dissimilarity between spike train patterns based on the relative spike-timing relations among all neurons, with computational complexity similar to the spike count. We compared spike-count and spike-timing codes in up to N > 8000 neurons from six visual areas during natural video presentations. Using SpikeShip, we show that temporal spiking sequences convey substantially more information about natural movies than population spike-count vectors when the neural population size is larger than about 200 neurons. Remarkably, encoding through temporal sequences did not show representational drift either within or between blocks. By contrast, population firing rates showed better coding performance when there were few active neurons. Furthermore, the population firing rate showed memory across frames and formed a continuous trajectory across time. In contrast to temporal spiking sequences, population firing rates exhibited substantial drift across repetitions and between blocks. These findings suggest that spike counts and temporal sequences constitute two different coding schemes with distinct information about natural movies.
The brains of black 6 mice (Mus musculus) and Seba's short-tailed bats (Carollia perspicillata) weigh roughly the same and share mammalian neocortical laminar architecture. Bats have highly developed sonar calls and social communication and are an excellent neuroethological animal model for auditory research. Mice are olfactory and somatosensory specialists, used frequently in auditory neuroscience for their advantage of standardization and wide genetic toolkit. This study presents an analytical approach to overcome the challenge of inter-species comparison with existing data. In both data sets, we recorded with linear multichannel electrodes down the depth of the primary auditory cortex (A1) while presenting repetitive stimulus trains at ~5 and ~40 Hz to awake bats and mice. We found that while there are similarities between cortical response profiles in both species, there was a better signal-to-noise ratio in bats under these conditions, which allowed for a clearer following response to stimulus trains. Model fit analysis supported this, illustrating that bats had stronger response amplitude suppression to consecutive stimuli. Additionally, continuous wavelet transform revealed that bats had significantly stronger power and phase coherence during stimulus response, whereas mice had stronger power in the background. Better signal-to-noise ratio and lower intertrial phase variability in bats could represent specialization for faster and more accurate temporal processing at lower metabolic costs. Our findings demonstrate a potentially different general auditory processing principle; investigating such differences may increase our understanding of how the ecological need of a species shapes the development and function of its nervous system.
Selective attention implements preferential routing of attended stimuli, likely through increasing the influence of the respective synaptic inputs on higher-area neurons. As the inputs of competing stimuli converge onto postsynaptic neurons, presynaptic circuits might offer the best target for attentional top-down influences. If those influences enabled presynaptic circuits to selectively entrain postsynaptic neurons, this might explain selective routing. Indeed, when two visual stimuli induce two gamma rhythms in V1, only the gamma induced by the attended stimulus entrains gamma in V4. Here, we modeled induced responses with a Dynamic Causal Model for Cross-Spectral Densities and found that selective entrainment can be explained by attentional modulation of intrinsic V1 connections. Specifically, local inhibition was decreased in the granular input layer and increased in the supragranular output layer of the V1 circuit that processed the attended stimulus. Thus, presynaptic attentional influences and ensuing entrainment were sufficient to mediate selective routing.
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors, but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through unsupervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system, it would allow for ultrafast processing of visual scenes.
Human language relies on hierarchically structured syntax to facilitate efficient and robust communication. The correct processing of syntactic information is essential for successful communication between speakers. As an abstract level of language, syntax has often been studied separately from the physical form of the speech signal, thus often masking the interactions that can promote better syntactic processing in the human brain. We analyzed an MEG dataset to investigate how acoustic cues, specifically prosody, interact with syntactic operations. We examined whether prosody enhances the cortical encoding of syntactic representations. We decoded left-sided dependencies directly from brain activity and evaluated possible modulations of the decoding by the presence of prosodic boundaries. Our findings demonstrate that the presence of prosodic boundaries improves the representation of left-sided dependencies, indicating the facilitative role of prosodic cues in processing abstract linguistic features. This study provides neurobiological evidence that interaction with prosody boosts syntactic processing.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. Here, we hypothesized that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we found that attention enhanced the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Population analysis revealed that neuronal responses converged to a low dimensional subspace for natural but not for synthetic images. Critically, we determined that the attentional enhancement in stimulus decodability was captured by the dominant low dimensional subspace, suggesting an alignment between the attentional and natural stimulus variance. The alignment was pronounced for late evoked responses but not for early transient responses of V1 neurons, supporting the notion that top-down feedback was required. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. We hypothesize that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we show that attention enhances the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Attentional enhancement of stimulus decodability from population responses occurs in low dimensional spaces, as revealed by principal component analysis, suggesting an alignment between the attentional and the natural stimulus variance. Moreover, natural scenes produce stimulus-specific oscillatory responses in V1, whose power undergoes a global shift from low to high frequencies with attention. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic: any two brain signals can either share common information (redundancy) or encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance, feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
In natural environments, background noise can degrade the integrity of acoustic signals, posing a problem for animals that rely on their vocalizations for communication and navigation. A simple behavioral strategy to combat acoustic interference would be to restrict call emissions to periods of low-amplitude or no noise. Using audio playback and computational tools for the automated detection of over 2.5 million vocalizations from groups of freely vocalizing bats, we show that bats (Carollia perspicillata) can dynamically adapt the timing of their calls to avoid acoustic jamming in both predictably and unpredictably patterned noise. This study demonstrates that bats spontaneously seek out temporal windows of opportunity for vocalizing in acoustically crowded environments, providing a mechanism for efficient echolocation and communication in cluttered acoustic landscapes.
One Sentence Summary: Bats avoid acoustic interference by rapidly adjusting the timing of vocalizations to the temporal pattern of varying noise.
Branching allows neurons to make synaptic contacts with large numbers of other neurons, facilitating the high connectivity of nervous systems. Neuronal arbors have geometric properties such as branch lengths and diameters that are optimal in that they maximize signaling speeds while minimizing construction costs. In this work, we asked whether neuronal arbors have topological properties that may also optimize their growth or function. We discovered that for a wide range of invertebrate and vertebrate neurons the distributions of their subtree sizes follow power laws, implying that they are scale invariant. The power-law exponent distinguishes different neuronal cell types. Postsynaptic spines and branchlets perturb scale invariance. Through simulations, we show that the subtree-size distribution depends on the symmetry of the branching rules governing arbor growth and that optimal morphologies are scale invariant. Thus, the subtree-size distribution is a topological property that recapitulates the functional morphology of dendrites.
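The power-law fitting described above can be illustrated with a standard estimator. Below is a minimal sketch, assuming a continuous power law with x_min = 1 and synthetic Pareto samples in place of real subtree sizes; the estimator and cutoff choices in the actual study may differ.

```python
import numpy as np

def powerlaw_mle(sizes, xmin=1.0):
    """Maximum-likelihood estimate of alpha for a continuous power law
    p(x) ~ x**(-alpha), x >= xmin."""
    x = np.asarray([s for s in sizes if s >= xmin], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Synthetic 'subtree sizes': inverse-transform samples from a Pareto
# distribution with alpha = 2 (illustrative, not real morphology data)
rng = np.random.default_rng(3)
alpha_true = 2.0
u = rng.random(20000)
samples = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # xmin = 1
alpha_hat = powerlaw_mle(samples)
```

On such data the estimator recovers the exponent closely; whether an empirical subtree-size distribution is genuinely scale invariant would then be assessed with a separate goodness-of-fit test.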
Quantitative MRI maps of human neocortex explored using cell type-specific gene expression analysis
(2022)
Quantitative magnetic resonance imaging (qMRI) allows extraction of reproducible and robust parameter maps. However, the connection to underlying biological substrates remains murky, especially in the complex, densely packed cortex. We investigated associations in human neocortex between qMRI parameters and neocortical cell types by comparing the spatial distribution of the qMRI parameters longitudinal relaxation rate (R1), effective transverse relaxation rate (R2∗), and magnetization transfer saturation (MTsat) to gene expression from the Allen Human Brain Atlas, then combining this with lists of genes enriched in specific cell types found in the human brain. As qMRI parameters are magnetic field strength-dependent, the analysis was performed on MRI data at 3T and 7T. All qMRI parameters significantly covaried with genes enriched in GABA- and glutamatergic neurons, i.e. they were associated with cytoarchitecture. The qMRI parameters also significantly covaried with the distribution of genes enriched in astrocytes (R2∗ at 3T, R1 at 7T), endothelial cells (R1 and MTsat at 3T), microglia (R1 and MTsat at 3T, R1 at 7T), and oligodendrocytes and oligodendrocyte precursor cells (R1 at 7T). These results advance the potential use of qMRI parameters as biomarkers for specific cell types.
The cytoskeleton is crucial for defining neuronal-type-specific dendrite morphologies. To explore how the complex interplay of actin-modulatory proteins (AMPs) can define neuronal types in vivo, we focused on the class III dendritic arborization (c3da) neuron of Drosophila larvae. Using computational modeling, we reveal that the main branches (MBs) of c3da neurons follow general models based on optimal wiring principles, while the actin-enriched short terminal branches (STBs) require an additional growth program. To clarify the cellular mechanisms that define this second step, we thus concentrated on STBs for an in-depth quantitative description of dendrite morphology and dynamics. Applying these methods systematically to mutants of six known and novel AMPs, we revealed the complementary roles of these individual AMPs in defining STB properties. Our data suggest that diverse dendrite arbors result from a combination of optimal-wiring-related growth and individualized growth programs that are neuron-type specific.
Highlights
• Microstimulation of visual area V4 improves visual stimulus detection
• Effects of V4 microstimulation extend to the other hemifield
• Microstimulation effects are time dependent and consistent with attention dynamics
Summary
Neuronal activity in visual area V4 is well known to be modulated by selective attention, and there are reports of V4 lesions leading to attentional deficits. However, it remains unclear whether V4 microstimulation can elicit attentional benefits. To test this possibility, we performed local microstimulation in area V4 and explored its spatial and temporal dynamics in two macaque monkeys performing a visual detection task. Microstimulation was delivered via chronically implanted multi-electrode arrays. We found that microstimulation increases average performance by 35% and reduces luminance detection thresholds by 30%. This benefit critically depends on the onset of microstimulation relative to the stimulus, consistent with known dynamics of endogenous attention. These results show that local microstimulation of V4 can improve behavior and highlight the critical role of V4 for attention.
Moving in synchrony to external rhythmic stimuli is an elementary function that humans regularly engage in. It is termed “sensorimotor synchronization” and it is governed by two main parameters, the period and the phase of the movement with respect to the external rhythm. There has been an extensive body of research on the characteristics of these parameters, primarily once movement synchronization has reached a steady state. Particular interest has been shown in how these parameters are corrected when there are deviations from the steady state. However, little is known about the initial “tuning-in” interval, when one aligns the movement to the external rhythm from rest. The current work investigates this “tuning-in” period for each of the four limbs and makes several novel contributions to the understanding of sensorimotor synchronization. The results suggest that phase and period alignment are separate processes. Phase alignment involves limb-specific somatosensory memory on the order of minutes, whereas period alignment makes very limited use of memory. Phase alignment is the primary task, but the brain then switches to period alignment, where it spends most of its resources. Overall, this work suggests a central, cognitive role for period alignment and a peripheral, sensorimotor role for phase alignment.
Temporal anticipation is a fundamental process underlying complex neural functions such as associative learning, decision-making, and motor preparation. Here we study event anticipation in its simplest form in human participants using magnetoencephalography. We distributed events in time according to different probability density functions and presented the stimuli separately in two different sensory modalities. We found that the temporal dynamics in right parietal cortex correlate with reaction times to anticipated events. Specifically, after an event occurred, event probability was represented in right parietal activity, hinting at a functional role of the event-related potential component P300 in temporal expectancy. The results are consistent across both visual and auditory modalities. The right parietal cortex seems to play a central role in the processing of event probability density. Overall, this work contributes to the understanding of the neural processes involved in the anticipation of events in time.
Brookshire (2022) claims that previous analyses of periodicity in detection performance after a reset event suffer from extreme false-positive rates. Here we show that this conclusion is based on an incorrect implementation of the null hypothesis of aperiodicity, and that a correct implementation confirms low false-positive rates. Furthermore, we clarify that the previously used method of shuffling-in-time, and thereby shuffling-in-phase, cleanly implements the null hypothesis of no temporal structure after the reset, and thereby of no phase locking to the reset. Moving from a corresponding phase-locking spectrum to an inference on the periodicity of the underlying process can be accomplished by parameterizing the spectrum. This can separate periodic from non-periodic components and quantify the strength of periodicity.
In meditation practices that involve focused attention to a specific object, novice practitioners often experience moments of distraction (i.e., mind wandering). Previous studies have investigated the neural correlates of mind wandering during meditation practice through electroencephalography (EEG) using linear metrics (e.g., oscillatory power). However, their results are not fully consistent. Since the brain is known to be a chaotic/nonlinear system, it is possible that linear metrics cannot fully capture complex dynamics present in the EEG signal. In this study, we assess whether nonlinear EEG signatures can be used to characterize mind wandering during breath focus meditation in novice practitioners. For that purpose, we adopted an experience sampling paradigm in which 25 participants were iteratively interrupted during meditation practice to report whether they were focusing on the breath or thinking about something else. We compared the complexity of EEG signals during mind wandering and breath focus states using three different algorithms: Higuchi’s fractal dimension (HFD), Lempel-Ziv complexity (LZC), and sample entropy (SampEn). Our results showed that EEG complexity was generally reduced during mind wandering relative to breath focus states. We conclude that EEG complexity metrics are appropriate to disentangle mind wandering from breath focus states in novice meditation practitioners, and therefore, they could be used in future EEG neurofeedback protocols to facilitate meditation practice.
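One of the complexity measures mentioned above, Lempel-Ziv complexity, can be computed in a few lines. The sketch below is a minimal illustration assuming LZ76-style phrase parsing of a median-binarized signal; the binarization and normalization choices in the actual analysis may differ.

```python
import numpy as np

def lempel_ziv_complexity(binary_seq):
    """Number of phrases in an LZ76-style parsing of a binary sequence
    (higher = more complex; often normalized by n / log2(n))."""
    s = "".join(map(str, binary_seq))
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while it already occurs earlier in the sequence
        while i + l <= n and s[:i + l - 1].find(s[i:i + l]) != -1:
            l += 1
        c += 1
        i += l
    return c

# Binarize an EEG-like signal around its median before computing LZC
rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)
binary = (signal > np.median(signal)).astype(int)
complexity = lempel_ziv_complexity(binary)
```

A constant sequence parses into very few phrases, a periodic one into only slightly more, while noisy signals yield many phrases, which is why reduced LZC can index a less complex brain state.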
Some pitfalls of measuring representational similarity using Representational Similarity Analysis
(2022)
A core challenge in cognitive and brain sciences is to assess whether different biological systems represent the world in a similar manner. Representational Similarity Analysis (RSA) is an innovative approach that addresses this problem by looking for a second-order isomorphism in neural activation patterns. This innovation makes it easy to compare latent representations across individuals, species and computational models, and accounts for the method's popularity across disciplines ranging from artificial intelligence to computational neuroscience. Despite these successes, using RSA has led to difficult-to-reconcile and contradictory findings, particularly when comparing primate visual representations with deep neural networks (DNNs): even though DNNs have been shown to learn and behave in vastly different ways from humans, comparisons based on RSA have shown striking similarities in some studies. Here, we demonstrate some pitfalls of using RSA and explain how contradictory findings can arise due to false inferences about representational similarity based on RSA-scores. In a series of studies that capture increasingly plausible training and testing scenarios, we compare neural representations in computational models, primate cortex and human cortex. These studies reveal two problematic phenomena that are ubiquitous in current research: a “mimic effect”, where confounds in stimuli can lead to high RSA-scores between provably dissimilar systems, and a “modulation effect”, where RSA-scores become dependent on stimuli used for testing. Since our results bear on a number of influential findings, such as comparisons made between human visual representations and those of primates and DNNs, we provide recommendations to avoid these pitfalls and sketch a way forward to a more solid science of representation in cognitive systems.
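The second-order comparison at the heart of RSA can be sketched compactly. The example below is a minimal illustration assuming correlation-distance RDMs and a Pearson correlation between RDM upper triangles; published pipelines often use Spearman correlation or other dissimilarity measures, and the two 'systems' here are hypothetical linear read-outs, not real recordings.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activation patterns (one row per stimulus condition)."""
    return 1.0 - np.corrcoef(activations)

def rsa_score(act_a, act_b):
    """Second-order similarity: correlate the upper triangles of the two
    systems' RDMs (Pearson here; Spearman is also widely used)."""
    iu = np.triu_indices(act_a.shape[0], k=1)
    return np.corrcoef(rdm(act_a)[iu], rdm(act_b)[iu])[0, 1]

# Two hypothetical 'systems' reading out the same 20 stimulus conditions
rng = np.random.default_rng(1)
stimuli = rng.standard_normal((20, 50))
system_a = stimuli @ rng.standard_normal((50, 30))
system_b = stimuli @ rng.standard_normal((50, 40))
score = rsa_score(system_a, system_b)
```

Because the score depends only on each system's pattern of pairwise dissimilarities, two internally very different systems can still obtain a high RSA-score whenever the stimulus set induces a similar dissimilarity structure in both, which is exactly the kind of confound the mimic effect exploits.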
In a dynamic environment, the already limited information that human working memory can maintain needs to be constantly updated to optimally guide behaviour. Indeed, previous studies showed that working memory representations are continuously being transformed during delay periods leading up to a response. This goes hand-in-hand with the removal of task-irrelevant items. However, does such removal also include veridical, original stimuli, as they were prior to transformation? Here we aimed to assess the neural representation of task-relevant transformed representations, compared to the no-longer-relevant veridical representations they originated from. We applied multivariate pattern analysis to electroencephalographic data during maintenance of orientation gratings with and without mental rotation. During maintenance, we perturbed the representational network by means of a visual impulse stimulus, and were thus able to successfully decode veridical as well as imaginary, transformed orientation gratings from impulse-driven activity. On the one hand, the impulse response reflected only task-relevant (cued), but not task-irrelevant (uncued) items, suggesting that the latter were quickly discarded from working memory. By contrast, even though the original cued orientation gratings were also no longer task-relevant after mental rotation, these items continued to be represented next to the rotated ones, in different representational formats. This seemingly inefficient use of scarce working memory capacity was associated with reduced probe response times and may thus serve to increase precision and flexibility in guiding behaviour in dynamic environments.
Several recent studies investigated the rhythmic nature of cognitive processes that lead to perception and behavioral report. These studies used different methods, and there has not yet been an agreement on a general standard. Here, we present a way to test and quantitatively compare these methods. We simulated behavioral data from a typical experiment and analyzed these data with several methods. We applied the main methods found in the literature, namely sine-wave fitting, the discrete Fourier transform (DFT) and the least square spectrum (LSS). DFT and LSS can be applied both on the average accuracy time course and on single trials. LSS is mathematically equivalent to DFT in the case of regular, but not irregular, sampling - which is more common. LSS additionally offers the possibility to take into account a weighting factor that affects the strength of the rhythm, such as arousal. Statistical inferences were done either on the investigated sample (fixed-effects) or on the population (random-effects) of simulated participants. Multiple comparisons across frequencies were corrected using False Discovery Rate, Bonferroni, or the Max-Based approach. To perform a quantitative comparison, we calculated sensitivity, specificity and D-prime of the investigated analysis methods and statistical approaches. Within the investigated parameter range, single-trial methods had higher sensitivity and D-prime than the methods based on the average accuracy time course. This effect was further increased for a simulated rhythm of higher frequency. If an additional (observable) factor influenced detection performance, adding this factor as a weight in the LSS further improved sensitivity and D-prime. For multiple comparison correction, the Max-Based approach provided the highest specificity and D-prime, closely followed by the Bonferroni approach.
Given a fixed total number of trials, the random-effects approach had higher D-prime when trials were distributed over a larger number of participants, even though this gave fewer trials per participant. Finally, we present the idea of using a damped sinusoidal oscillator instead of a simple sinusoidal function, to further improve the fit to behavioral rhythmicity observed after a reset event.
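The core of the DFT-on-average-accuracy approach, combined with permutation-based inference, can be sketched as follows. All parameters here (60 time bins, an 8-cycle simulated rhythm, 1000 permutations) are illustrative assumptions, not the settings used in the simulations above.

```python
import numpy as np

def accuracy_spectrum(acc):
    """Amplitude spectrum of a demeaned accuracy time course via the DFT."""
    acc = acc - acc.mean()
    return np.abs(np.fft.rfft(acc))

rng = np.random.default_rng(2)
n_bins = 60                              # time bins after the reset event
t = np.arange(n_bins)
# simulated detection accuracy carrying an 8-cycles-per-window rhythm
acc = (0.6 + 0.1 * np.sin(2 * np.pi * 8 * t / n_bins)
       + 0.02 * rng.standard_normal(n_bins))

observed = accuracy_spectrum(acc)
# shuffling-in-time null: permuting bins destroys any temporal structure
null = np.array([accuracy_spectrum(rng.permutation(acc))
                 for _ in range(1000)])
p = (null >= observed).mean(axis=0)      # per-frequency permutation p-values
```

With regularly sampled bins like these, the LSS would give the same spectrum as the DFT; the two diverge only under irregular sampling or per-trial weighting.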
The ability to extract regularities from the environment is arguably an adaptive characteristic of intelligent systems. In the context of speech, statistical learning is thought to be an important mechanism for language acquisition. By considering individual differences in speech auditory-motor synchronization, an independent component analysis of fMRI data revealed that the neural substrates of statistical word form learning are not fully shared across individuals. While a network of auditory and superior pre/motor regions is universally activated in the process of learning, a fronto-parietal network is instead additionally and selectively engaged by some individuals, boosting their performance. Furthermore, interfering with the use of this network via articulatory suppression (producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights into language-related statistical learning and reconciles previous contrasting findings, while highlighting the need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.
Neuronal hyperexcitability is a feature of Alzheimer’s disease (AD). Three main mechanisms have been proposed to explain it: (i) dendritic degeneration leading to increased input resistance, (ii) ion channel changes leading to enhanced intrinsic excitability, and (iii) synaptic changes leading to excitation-inhibition (E/I) imbalance. However, the relative contribution of these mechanisms is not fully understood. Therefore, we performed biophysically realistic multi-compartmental modelling of excitability in reconstructed CA1 pyramidal neurons of wild-type and APP/PS1 mice, a well-established animal model of AD. We show that, for synaptic activation, the excitability-promoting effects of dendritic degeneration are cancelled out by the excitability-decreasing effects of synaptic loss. We find an interesting balance of excitability regulation with enhanced degeneration in the basal dendrites of APP/PS1 cells, potentially leading to increased excitation by the apical but decreased excitation by the basal Schaffer collateral pathway. Furthermore, our simulations reveal that three additional pathomechanistic scenarios can account for the experimentally observed increase in firing and bursting of CA1 pyramidal neurons in APP/PS1 mice. Scenario 1: increased excitatory burst input; scenario 2: enhanced E/I ratio; and scenario 3: alteration of intrinsic ion channels (I_AHP down-regulated; I_NaP, I_Na and I_CaT up-regulated) in addition to an enhanced E/I ratio. Our work supports the hypothesis that pathological network and ion channel changes are major contributors to neuronal hyperexcitability in AD. Overall, our results are in line with the concept of multi-causality and degeneracy, according to which multiple different disruptions are separately sufficient but no single disruption is necessary for neuronal hyperexcitability.
When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioral experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of these systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (Digit Span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. These findings support the notion of an individual preferred auditory-motor regime that allows for optimal speech processing. The data provide evidence for a model that assigns a central role to motor-system-dependent individual flexibility in continuous speech comprehension.
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models comparing those with all 15 ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the numbers of ion channels artificially in the reduced models recovered these advantages confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
The neural mechanisms that unfold when humans form a large group defined by an overarching context, such as audiences in theater or sports, are largely unknown and unexplored. This is mainly due to the lack of a scalable system that can record brain activity from a significantly large portion of such an audience simultaneously. Although the technology for such a system has been readily available for a long time, the high cost as well as the large overhead in human resources and logistic planning have prohibited its development. In recent years, however, reductions in technology costs and size have led to the emergence of low-cost, consumer-oriented EEG systems, developed primarily for recreational use. Here, by combining such a low-cost EEG system with other off-the-shelf hardware and tailor-made software, we developed a scalable EEG hyper-scanning system in the lab and tested it in a cinema. The system has a robust and stable performance and achieves accurate, unambiguous alignment of the data recorded by the different EEG headsets. These characteristics, combined with short preparation time and low cost, make it an ideal candidate for recording large portions of audiences.
Speech imagery (the ability to generate internally quasi-perceptual experiences of speech) is a fundamental ability linked to cognitive functions such as inner speech, phonological working memory, and predictive processing. Speech imagery is also considered an ideal tool to test theories of overt speech. The study of speech imagery is challenging, primarily because of the absence of overt behavioral output as well as the difficulty in temporally aligning imagery events across trials and individuals. We used magnetoencephalography (MEG) paired with temporal-generalization-based neural decoding and a simple behavioral protocol to determine the processing stages underlying speech imagery. We monitored participants’ lip and jaw micromovements during mental imagery of syllable production using electromyography. Decoding participants’ imagined syllables revealed a sequence of task-elicited representations. Importantly, participants’ micromovements did not discriminate between syllables. The decoded sequence of neuronal patterns maps well onto the predictions of current computational models of overt speech motor control and provides evidence for hypothesized internal and external feedback loops for speech planning and production, respectively. Additionally, the results expose the compressed nature of representations during planning which contrasts with the natural rate at which internal productions unfold. We conjecture that the same sequence underlies the motor-based generation of sensory predictions that modulate speech perception as well as the hypothesized articulatory loop of phonological working memory. The results underscore the potential of speech imagery, based on new experimental approaches and analytical methods, and further pave the way for successful non-invasive brain-computer interfaces.
Neuroscience studies in non-human primates (NHP) often follow the rule of thumb that results observed in one animal must be replicated in at least one other. However, we lack a statistical justification for this rule of thumb, or an analysis of whether including three or more animals is better than including two. Yet, a formal statistical framework for experiments with few subjects would be crucial for experimental design, ethical justification, and data analysis. Also, including three or four animals in a study creates the possibility that the results observed in one animal will differ from those observed in the others: we need a statistically justified rule to resolve such situations. Here, I present a statistical framework to address these issues. This framework assumes that conducting an experiment will produce a similar result for a large proportion of the population (termed ‘representative’), but will produce spurious results for a substantial proportion of animals (termed ‘outliers’); the fractions of ‘representative’ and ‘outlier’ animals being defined by a prior distribution. I propose a procedure in which experimenters collect results from M animals and accept results that are observed in at least N of them (‘N-out-of-M’ procedure). I show how to compute the risks α (of reaching an incorrect conclusion) and β (of failing to reach a conclusion) for any prior distribution, and as a function of N and M. Strikingly, I find that the N-out-of-M model leads to a similar conclusion across a wide range of prior distributions: recording from two animals lowers the risk α and therefore ensures reliable results, but leaves a large risk β; and recording from three animals and accepting results observed in two of them strikes an efficient balance between acceptable risks α and β.
This framework gives a formal justification for the rule of thumb of using at least two animals in NHP studies, suggests that recording from three animals when possible markedly improves statistical power, provides a statistical solution for situations where results are not consistent between all animals, and may apply to other types of studies involving few animals.
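The N-out-of-M decision rule can be illustrated with a toy binomial model. Purely as a sketch (and not the paper's exact formulation), assume each animal is 'representative' with a fixed point-prior probability p rather than a full prior distribution, and that each 'outlier' animal independently happens to show the same spurious result with probability q. Under these hypothetical assumptions, the risks α and β of the N-out-of-M procedure reduce to binomial tail probabilities:

```python
from math import comb

def at_least_n(m, n, prob):
    """P(at least n successes among m independent trials, each with success prob `prob`)."""
    return sum(comb(m, k) * prob**k * (1 - prob)**(m - k) for k in range(n, m + 1))

def risks(m, n, p_representative=0.8, q_spurious_agree=0.1):
    """Toy approximation of the N-out-of-M risks.

    beta:  probability that fewer than n of m animals show the true
           (representative) result, so no conclusion is reached.
    alpha: probability that at least n of m animals show the *same*
           spurious result, so an incorrect conclusion is reached.

    Assumes a point prior p_representative and an independent outlier
    agreement probability q_spurious_agree; both parameters are
    illustrative stand-ins for the prior distribution used in the text.
    """
    beta = 1 - at_least_n(m, n, p_representative)
    alpha = at_least_n(m, n, (1 - p_representative) * q_spurious_agree)
    return alpha, beta

# Comparing 2-out-of-2 with 2-out-of-3 under the same toy prior:
alpha2, beta2 = risks(m=2, n=2)
alpha3, beta3 = risks(m=3, n=2)
```

Even in this simplified setting, moving from requiring replication in both of two animals to accepting results seen in two of three animals sharply reduces β while keeping α small, consistent with the conclusion stated above.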
Research on psychopathy has so far been largely limited to the investigation of high-level processes, such as emotion perception and regulation. In the present work, we investigate whether psychopathy has an effect on the estimation of fundamental physical parameters, which are computed in the brain during early stages of sensory processing. We employed a simple task in which participants had to estimate their interpersonal distance from a moving avatar and stop it at a given distance. The facial expressions of the avatars were positive, negative, or neutral. Participants carried out the task online on their home computers. We measured the psychopathy level via a self-report questionnaire. Regardless of the degree of psychopathy, the facial expression of the avatars had no effect on distance estimation. Our results show that individuals with a high degree of psychopathy underestimated the distance of approaching avatars significantly less (let the avatar approach them significantly closer) than participants with a lesser degree of psychopathy. Moreover, participants who scored high in Self-Centered Impulsivity underestimated the distance to approaching avatars significantly less (let the avatar approach closer) than participants with a low score. Distance estimation is considered an automatic process performed at early stages of visual processing. Therefore, our results imply that psychopathy affects basic early sensory processes, such as feature extraction, in the visual cortex.
Research points to neurofunctional differences underlying fluent speech production in stutterers and non-stutterers. There has been considerably less work focusing on the processes that underlie stuttered speech, primarily due to the difficulty of reliably eliciting stuttering in the unnatural contexts associated with neuroimaging experiments. We used magnetoencephalography (MEG) to test the hypothesis that stuttering events result from global motor inhibition, a “freeze” response typically characterized by increased beta power in nodes of the action-stopping network. We leveraged a novel clinical interview to develop participant-specific stimuli in order to elicit comparable numbers of stuttered and fluent trials. Twenty-nine adult stutterers participated. The paradigm included a cue prior to a go signal, which allowed us to isolate processes associated with stuttered and fluent trials prior to speech initiation. During this pre-speech time window, stuttered trials were associated with greater beta power in the right pre-supplementary motor area, a key node in the action-stopping network, compared to fluent trials. Beta power in the right pre-supplementary motor area was related to a clinical measure of stuttering severity. We also found that anticipated words identified independently by participants were stuttered more often than those generated by the researchers, which were based on the participants’ reported anticipated sounds. This suggests that global motor inhibition results from stuttering anticipation. This study represents the largest comparison of stuttered and fluent speech to date. The findings provide a foundation for clinical trials that test the efficacy of neuromodulation on stuttering. Moreover, our study demonstrates the feasibility of using our approach for eliciting stuttering during MEG and functional magnetic resonance imaging experiments so that the neurobiological bases of stuttered speech can be further elucidated.