MPI für Hirnforschung
Magnetoencephalography (MEG) and Electroencephalography (EEG) provide direct electrophysiological measures at an excellent temporal resolution, but the spatial resolution of source-reconstructed current activity is limited to several millimetres. Here we show, using simulations of MEG signals and Bayesian model comparison, that non-invasive myelin estimates from high-resolution quantitative magnetic resonance imaging (MRI) can enhance MEG/EEG source reconstruction. Our approach assumes that MEG/EEG signals primarily arise from the synchronised activity of pyramidal cells, and since most of the myelin in the cortical sheet originates from these cells, myelin density can predict the strength of cortical sources measured by MEG/EEG. Leveraging recent advances in quantitative MRI, we exploit this structure-function relationship and scale the leadfields of the forward model according to the local myelin density estimates from in vivo quantitative MRI to inform MEG/EEG source reconstruction. Using Bayesian model comparison and dipole localisation errors (DLEs), we demonstrate that adapting local forward fields to reflect increased local myelination at the site of a simulated source explains the simulated data better than models without such leadfield scaling. Our model comparison framework proves sensitive to myelin changes in simulations with exact coregistration and moderate-to-high sensor-level signal-to-noise ratios (≥10 dB) for the multiple sparse priors (MSP) and empirical Bayesian beamformer (EBB) approaches. Furthermore, we sought to infer the microstructure giving rise to specific functional activation patterns by comparing the myelin-informed model that was used to generate the activation with a set of test forward models incorporating different myelination patterns. We found that the direction of myelin changes, but not their magnitude, can be inferred by Bayesian model comparison.
Finally, we apply myelin-informed forward models to MEG data from a visuo-motor experiment. We demonstrate improved source reconstruction accuracy using myelin estimates from a quantitative longitudinal relaxation (R1) map and discuss the limitations of our approach.
Highlights
We use quantitative MRI to implement myelin-informed forward models for M/EEG
Local myelin density was modelled by adapting the local leadfields
Myelin-informed forward models can improve source reconstruction accuracy
We can infer the directionality of myelination patterns, but not their strength
We apply our approach to MEG data from a visuo-motor experiment
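The leadfield-scaling idea above can be sketched in a few lines: each source's forward-field column is multiplied by its relative local myelin density. This is an illustrative sketch with assumed names and shapes (`scale_leadfields`, a mean-normalised gain), not the authors' implementation.

```python
import numpy as np

def scale_leadfields(L, myelin, reference=None):
    """Scale leadfield columns by relative myelin density.

    L         : (n_sensors, n_sources) leadfield matrix
    myelin    : (n_sources,) local myelin density estimates (e.g. from an R1 map)
    reference : scalar used for normalisation; defaults to the mean density,
                so a uniform myelin map leaves the leadfields unchanged.
    """
    myelin = np.asarray(myelin, dtype=float)
    if reference is None:
        reference = myelin.mean()
    gain = myelin / reference          # relative density -> per-source gain
    return L * gain[np.newaxis, :]     # scale each column (source) independently

# Toy example: 4 sensors, 3 sources, middle source more strongly myelinated
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 3))
L_scaled = scale_leadfields(L, myelin=[1.0, 1.5, 0.5])
```

A uniform myelin map reproduces the unscaled forward model, which is the natural null model for the Bayesian comparison described above.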
During the co-translational assembly of protein complexes, a fully synthesized subunit engages with the nascent chain of a newly synthesized interaction partner. Such events are thought to contribute to productive assembly, but their exact physiological relevance remains underexplored. Here, we examine structural motifs contained in nucleoporins for their potential to facilitate co-translational assembly. We experimentally test candidate structural motifs and identify several previously unknown co-translational interactions. We demonstrate by selective ribosome profiling that domain invasion motifs of beta-propellers, coiled-coils, and short linear motifs may act as co-translational assembly domains. Such motifs are often contained in proteins that are members of multiple complexes (moonlighters) and engage with closely related paralogs. Surprisingly, moonlighters and paralogs assemble co-translationally in only some but not all of the relevant biogenesis pathways. Our results highlight the regulatory complexity of assembly pathways.
Snapshots of acetyl-CoA synthesis, the final step of CO₂ fixation in the Wood-Ljungdahl pathway
(2024)
In the ancient microbial Wood-Ljungdahl pathway, CO₂ is fixed in a multi-step process with acetyl-CoA synthesis at the bifunctional carbon monoxide dehydrogenase/acetyl-CoA synthase complex (CODH/ACS). Here, we present catalytic snapshots of the CODH/ACS from the gas-converting acetogen Clostridium autoethanogenum, characterizing the molecular choreography of the overall reaction, including electron transfer to the CODH for CO₂ reduction, methyl transfer from the corrinoid iron-sulfur protein (CoFeSP) partner to the ACS active site, and acetyl-CoA production. Unlike CODH, the multidomain ACS undergoes large conformational changes to form an internal connection to the CODH active site, accommodate the CoFeSP for methyl transfer and protect the reaction intermediates. Altogether, the structures allow us to draw a detailed reaction mechanism of this enzyme crucial for CO₂ fixation in anaerobic organisms.
The intensity and the features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity rescales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity invariant. This emergence of network-invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though (i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and (ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the nonlinear network behavior. In this study we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity. In particular, we demonstrate that an effective power-law synaptic transformation at the population level is necessary for invariance. In a range of firing rates, purely depressing short-term synapses fulfill this condition, and in this case, the network is contrast-invariant. Instead, facilitating short-term plasticity generally narrows the network selectivity. We found that facilitating and depressing short-term plasticity can be combined to approximate a power-law that leads to contrast invariance. These results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
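The role of the power-law transformation described above can be illustrated directly: if the population response is a power law of the feedforward drive, a change in stimulus intensity rescales every neuron's response by a common factor, so the normalised tuning curve (the selectivity) is unchanged. A toy sketch, with an arbitrary tuning curve and exponent chosen for illustration:

```python
import numpy as np

# Feature-dependent feedforward drive: a smooth orientation tuning curve
theta = np.linspace(-np.pi, np.pi, 181)
tuning = np.exp(np.cos(theta))

a = 2.0  # assumed power-law exponent of the population transformation

def population_response(contrast, drive, exponent):
    # r = (c * f(theta))**a = c**a * f(theta)**a : contrast factors out
    return (contrast * drive) ** exponent

low  = population_response(0.2, tuning, a)   # low-contrast stimulus
high = population_response(1.0, tuning, a)   # high-contrast stimulus

# After normalisation the two responses have identical shape:
# the representation is intensity (contrast) invariant.
assert np.allclose(low / low.max(), high / high.max())
```

Replacing the power law by, say, a saturating nonlinearity breaks this factorisation, which is the intuition behind the narrowing of selectivity under facilitating synapses.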
Owing to their morphological complexity and dense network connections, neurons modify their proteomes locally, using mRNAs and ribosomes present in the neuropil (tissue enriched for dendrites and axons). Although ribosome biogenesis largely takes place in the nucleus and perinuclear region, neuronal ribosomal protein (RP) mRNAs have been frequently detected remotely, in dendrites and axons. Here, using imaging and ribosome profiling, we directly detected the RP mRNAs and their translation in the neuropil. Combining brief metabolic labeling with mass spectrometry, we found that a group of RPs rapidly associated with translating ribosomes in the cytoplasm and that this incorporation was independent of canonical ribosome biogenesis. Moreover, the incorporation probability of some RPs was regulated by location (neurites vs. cell bodies) and changes in the cellular environment (following oxidative stress). Our results suggest new mechanisms for the local activation, repair and/or specialization of the translational machinery within neuronal processes, potentially allowing neuronal synapses a rapid means to regulate local protein synthesis.
Owing to their morphological complexity and dense network connections, neurons modify their proteomes locally, using mRNAs and ribosomes present in the neuropil (tissue enriched for dendrites and axons). Although ribosome biogenesis largely takes place in the nucleus and perinuclear region, neuronal ribosomal protein (RP) mRNAs have been frequently detected remotely, in dendrites and axons. Here, using imaging and ribosome profiling, we directly detected the RP mRNAs and their translation in the neuropil. Combining brief metabolic labeling with mass spectrometry, we found that a group of RPs quickly associated with translating ribosomes in the cytoplasm and that this incorporation was independent of canonical ribosome biogenesis. Moreover, the incorporation probability of some RPs was regulated by location (neurites vs. cell bodies) and changes in the cellular environment (in response to oxidative stress). Our results suggest new mechanisms for the local activation, repair and/or specialization of the translational machinery within neuronal processes, potentially allowing remote neuronal synapses a rapid solution to the relatively slow and energy-demanding requirement of nuclear ribosome biogenesis.
Protein turnover, the net result of protein synthesis and degradation, enables cells to remodel their proteomes in response to internal and external cues. Previously, we analyzed protein turnover rates in cultured brain cells under basal neuronal activity and found that protein turnover is influenced by subcellular localization, protein function, complex association, cell type of origin, and by the cellular environment (Dörrbaum et al., 2018). Here, we advanced our experimental approach to quantify changes in protein synthesis and degradation, as well as the resulting changes in protein turnover or abundance in rat primary hippocampal cultures during homeostatic scaling. Our data demonstrate that a large fraction of the neuronal proteome shows changes in protein synthesis and/or degradation during homeostatic up- and down-scaling. More than half of the quantified synaptic proteins were regulated, including pre- as well as postsynaptic proteins with diverse molecular functions.
EphrinB2 and GRIP1 stabilize mushroom spines during denervation-induced homeostatic plasticity
(2021)
Highlights
• Denervation induces mushroom spine loss and AMPAR redistribution to the surface
• GRIP1 and ephrinB2 mediate homeostatic mechanisms after lesion
• Stimulation with the ephrinB2 receptor EphB4 promotes a surface shift of AMPARs
• AMPAR surface shift restores impaired spine recovery after lesion in GRIP1 mutants
Summary
Despite decades of work, much remains elusive about the molecular events linking the physiological and structural changes that underlie neuronal plasticity. Here, we combined repetitive live imaging and expansion microscopy in organotypic brain slice cultures to quantitatively characterize the dynamic changes of the intracellular versus surface pools of GluA2-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs) across the different dendritic spine types and the shaft during hippocampal homeostatic plasticity. Mechanistically, we identify ephrinB2 and glutamate receptor interacting protein (GRIP) 1 as mediating AMPAR relocation to the mushroom spine surface following lesion-induced denervation. Moreover, stimulation with the ephrinB2-specific receptor EphB4 not only prevents the lesion-induced disappearance of mushroom spines but is also sufficient to shift AMPARs to the surface and rescue spine recovery in a GRIP1 dominant-negative background. Thus, our results unravel a crucial role for ephrinB2 during homeostatic plasticity and identify a potential pharmacological target to improve dendritic spine plasticity upon injury.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. brain signals can either share common information (redundancy) or they can encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance, feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
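The co-information measure mentioned above has a compact discrete form, I(X;Y;Z) = I(X;Z) + I(Y;Z) − I(X,Y;Z), where positive values indicate redundancy and negative values synergy (sign conventions vary between papers; this is one common choice). A minimal sketch on toy joint distributions:

```python
import numpy as np
from itertools import product

def entropy(p):
    """Shannon entropy (bits) of a distribution given as any-shape array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def co_information(pxyz):
    """Co-information from a 3D joint distribution over (X, Y, Z)."""
    px, py = pxyz.sum(axis=(1, 2)), pxyz.sum(axis=(0, 2))
    pz = pxyz.sum(axis=(0, 1))
    I_xz = entropy(px) + entropy(pz) - entropy(pxyz.sum(axis=1))
    I_yz = entropy(py) + entropy(pz) - entropy(pxyz.sum(axis=0))
    I_xy_z = entropy(pxyz.sum(axis=2)) + entropy(pz) - entropy(pxyz)
    return I_xz + I_yz - I_xy_z

# XOR: Z = X xor Y with uniform inputs -> purely synergistic (co-I = -1 bit)
pxyz = np.zeros((2, 2, 2))
for x, y in product(range(2), range(2)):
    pxyz[x, y, x ^ y] = 0.25
print(co_information(pxyz))
```

Applied to neural data, X and Y would be (estimates of the distributions of) two brain signals and Z the stimulus category, rather than the binary toy variables used here.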
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes.
Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC), a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so-called "contrastive analysis"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.
In order to investigate the involvement of primary visual cortex (V1) in working memory (WM), parallel, multisite recordings of multiunit activity were obtained from monkey V1 while the animals performed a delayed match-to-sample (DMS) task. During the delay period, V1 population firing rate vectors maintained a lingering trace of the sample stimulus that could be reactivated by intervening impulse stimuli that enhanced neuronal firing. This fading trace of the sample did not require active engagement of the monkeys in the DMS task and likely reflects the intrinsic dynamics of recurrent cortical networks in lower visual areas. This renders an active, attention-dependent involvement of V1 in the maintenance of working memory contents unlikely. By contrast, population responses to the test stimulus depended on the probabilistic contingencies between sample and test stimuli. Responses to tests that matched expectations were reduced, which agrees with concepts of predictive coding.
Emerging evidence indicates that protein synthesis and degradation are necessary for the remodeling of synapses. These two processes govern cellular protein turnover, are tightly regulated, and are modulated by neuronal activity in time and space. The anisotropic anatomy of the neurons presents a challenge for the study of protein turnover, but the understanding of protein turnover in neurons and its modulation in response to activity can help us to unravel the fine-tuned changes that occur at synapses in response to activity. Here we review the key experimental evidence demonstrating the role of protein synthesis and degradation in synaptic plasticity, as well as the turnover rates of specific neuronal proteins.
In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example for such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a ‘goal function’, of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. ‘edge filtering’, ‘working memory’). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon’s mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID quantifies the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to design new goal functions from first principles. Building on this, we design and analyze a novel goal function, called ‘coding with synergy’, which combines external input and prior knowledge in a synergistic manner.
We suggest that this novel goal function may be highly useful in neural information processing.
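The PID quantities discussed above (unique, shared, and synergistic information) can be computed exactly for small discrete systems. The sketch below uses the Williams and Beer I_min redundancy measure — one of several proposed PID measures, not a settled standard — on the classic XOR example, where all information about the output is synergistic:

```python
import numpy as np

def mi(pxy):
    """Mutual information (bits) from a 2D joint distribution."""
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

def specific_info(p_sx, s):
    """Williams-Beer specific information I(S=s; X) from joint p(s, x)."""
    ps, px = p_sx.sum(axis=1), p_sx.sum(axis=0)
    p_x_given_s = p_sx[s] / ps[s]
    total = 0.0
    for x in np.nonzero(p_x_given_s)[0]:
        total += p_x_given_s[x] * np.log2((p_sx[s, x] / px[x]) / ps[s])
    return total

def pid(p_sx1x2):
    """Return (redundant, unique1, unique2, synergistic) information about S."""
    ps = p_sx1x2.sum(axis=(1, 2))
    p_sx1, p_sx2 = p_sx1x2.sum(axis=2), p_sx1x2.sum(axis=1)
    red = sum(ps[s] * min(specific_info(p_sx1, s), specific_info(p_sx2, s))
              for s in np.nonzero(ps)[0])
    i1, i2 = mi(p_sx1), mi(p_sx2)
    i_joint = mi(p_sx1x2.reshape(p_sx1x2.shape[0], -1))
    return red, i1 - red, i2 - red, i_joint - i1 - i2 + red

# Output S = X1 xor X2 with uniform inputs: neither input alone is
# informative, so the full bit of information is synergistic.
p = np.zeros((2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        p[x1 ^ x2, x1, x2] = 0.25
redundancy, unique1, unique2, synergy = pid(p)
```

In the goal-function setting of the abstract, X1 and X2 would be the driving input and the contextual/prior input of a processor, and S its output.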
Gephyrin is a ubiquitously expressed protein that, in the nervous system, is essential for synaptic anchoring of glycine receptors (GlyRs) and major GABAA receptor subtypes. The binding of gephyrin to the GlyR depends on an amphipathic motif within the large intracellular loop of the GlyRβ subunit. The mouse gephyrin gene consists of 30 exons. Ten of these exons, encoding cassettes of 5–40 amino acids, are subject to alternative splicing (C1–C7, C4′–C6′). Since one of the cassettes, C5′, has recently been reported to exclude GlyRs from GABAergic synapses, we investigated which cassettes are found in gephyrin associated with the GlyR. Gephyrin variants were purified from rat spinal cord, brain, and liver by binding to the glutathione S-transferase-tagged GlyRβ loop or copurified with native GlyR from spinal cord by affinity chromatography and analyzed by mass spectrometry. In addition to C2 and C6′, already known to be prominent, C4 was found to be abundant in gephyrin from all tissues examined. The nonneuronal cassette C3 was easily detected in liver but not in GlyR-associated gephyrin from spinal cord. C5 was present in brain and spinal cord polypeptides, whereas C5′ was coisolated mainly from liver. Notably, C5′-containing gephyrin bound to the GlyRβ loop, inconsistent with its proposed selectivity for GABAA receptors. Our data show that GlyR-associated gephyrin, lacking C3, but enriched in C4 without C5, differs from other neuronal and nonneuronal gephyrin isoforms.
The inhibitory glycine receptor (GlyR) in developing spinal neurones is internalized efficiently upon antagonist inhibition. Here we used surface labeling combined with affinity purification to show that homopentameric α1 GlyRs generated in Xenopus oocytes are proteolytically nicked into fragments of 35 and 13 kDa upon prolonged incubation. Nicked GlyRs do not exist at the cell surface, indicating that proteolysis occurs exclusively in the endocytotic pathway. Consistent with this interpretation, elevation of the lysosomal pH, but not the proteasome inhibitor lactacystin, prevents GlyR cleavage. Prior to internalization, α1 GlyRs are conjugated extensively with ubiquitin in the plasma membrane. Our results are consistent with ubiquitination regulating the endocytosis and subsequent proteolysis of GlyRs residing in the plasma membrane. Ubiquitin-conjugating enzymes thus may have a crucial role in synaptic plasticity by determining postsynaptic receptor numbers.
In a dynamic environment, the already limited information that human working memory can maintain needs to be constantly updated to optimally guide behaviour. Indeed, previous studies showed that working memory representations are continuously being transformed during delay periods leading up to a response. This goes hand-in-hand with the removal of task-irrelevant items. However, does such removal also include veridical, original stimuli, as they were prior to transformation? Here we aimed to assess the neural representation of task-relevant transformed representations, compared to the no-longer-relevant veridical representations they originated from. We applied multivariate pattern analysis to electroencephalographic data during maintenance of orientation gratings with and without mental rotation. During maintenance, we perturbed the representational network by means of a visual impulse stimulus, and were thus able to successfully decode veridical as well as imaginary, transformed orientation gratings from impulse-driven activity. On the one hand, the impulse response reflected only task-relevant (cued), but not task-irrelevant (uncued) items, suggesting that the latter were quickly discarded from working memory. By contrast, even though the original cued orientation gratings were also no longer task-relevant after mental rotation, these items continued to be represented next to the rotated ones, in different representational formats. This seemingly inefficient use of scarce working memory capacity was associated with reduced probe response times and may thus serve to increase precision and flexibility in guiding behaviour in dynamic environments.
Quantitative MRI maps of human neocortex explored using cell type-specific gene expression analysis
(2022)
Quantitative magnetic resonance imaging (qMRI) allows extraction of reproducible and robust parameter maps. However, the connection to underlying biological substrates remains murky, especially in the complex, densely packed cortex. We investigated associations in human neocortex between qMRI parameters and neocortical cell types by comparing the spatial distribution of the qMRI parameters longitudinal relaxation rate (R1), effective transverse relaxation rate (R2∗), and magnetization transfer saturation (MTsat) to gene expression from the Allen Human Brain Atlas, then combining this with lists of genes enriched in specific cell types found in the human brain. As qMRI parameters are magnetic field strength-dependent, the analysis was performed on MRI data at 3T and 7T. All qMRI parameters significantly covaried with genes enriched in GABA- and glutamatergic neurons, i.e. they were associated with cytoarchitecture. The qMRI parameters also significantly covaried with the distribution of genes enriched in astrocytes (R2∗ at 3T, R1 at 7T), endothelial cells (R1 and MTsat at 3T), microglia (R1 and MTsat at 3T, R1 at 7T), and oligodendrocytes and oligodendrocyte precursor cells (R1 at 7T). These results advance the potential use of qMRI parameters as biomarkers for specific cell types.
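The kind of spatial association tested here can be sketched schematically: correlate a parcel-wise qMRI parameter map with the mean expression of a cell-type-enriched gene set across the same parcels. Everything below is synthetic toy data; the study itself uses Allen Human Brain Atlas expression and appropriate significance testing rather than this bare correlation:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of rank vectors."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
n_parcels, n_genes = 180, 50            # assumed parcellation / gene-set sizes

qmri_map = rng.random(n_parcels)        # stand-in for a parcel-wise R1 map
# Synthetic expression: partly tracks the qMRI map, plus per-gene noise
expression = qmri_map[:, None] + rng.normal(0, 0.5, (n_parcels, n_genes))
cell_type_score = expression.mean(axis=1)   # mean over the enriched gene set

rho = spearman(qmri_map, cell_type_score)
```

Rank (rather than linear) correlation is a common choice here because qMRI parameters and expression values need not be linearly related across parcels.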
We explore the potential of optically-pumped magnetometers (OPMs) to infer the laminar origins of neural activity non-invasively. OPM sensors can be positioned closer to the scalp than conventional cryogenic MEG sensors, opening an avenue to higher spatial resolution when combined with high-precision forward modelling. By simulating the forward model projection of single dipole sources onto OPM sensor arrays with varying sensor densities and measurement axes, and employing sparse source reconstruction approaches, we find that laminar inference with OPM arrays is possible at relatively low sensor counts at moderate to high signal-to-noise ratios (SNR). We observe improvements in laminar inference with increasing spatial sampling densities and number of measurement axes. Surprisingly, moving sensors closer to the scalp is less advantageous than anticipated, and even detrimental at high SNRs. Biases towards both the superficial and deep surfaces at very low SNRs and a notable bias towards the deep surface when combining empirical Bayesian beamformer (EBB) source reconstruction with a whole-brain analysis pose further challenges. Adequate SNR through appropriate trial numbers and shielding, as well as precise co-registration, is crucial for reliable laminar inference with OPMs.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. convey information beyond what can be performed by isolated signals. In other words, any two signals can either share common information (redundancy) or they can encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks, and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. convey information beyond what can be performed by isolated signals. Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in three awake common marmosets and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics exhibit redundancy and synergy in auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
Decades of work have demonstrated that messenger RNAs (mRNAs) are localized and translated within neuronal dendrites and axons to provide proteins for remodeling and maintaining growth cones or synapses. It remains unknown, however, whether specific forms of plasticity differentially regulate the dynamics and translation of individual mRNA species. To address this, we targeted three synaptically localized mRNAs (CamkIIa, β-actin, and Psd95) and used molecular beacons to track endogenous mRNA movements, and reporters together with CRISPR/Cas9 gene editing to track mRNA translation in cultured neurons. We found that the dynamic properties of these mRNAs were altered during two forms of synaptic plasticity, chemically induced long-term potentiation (cLTP) and metabotropic glutamate receptor-dependent long-term depression (mGluR-LTD). Changes in mRNA dynamics following either form of plasticity resulted in an enrichment of mRNA in the vicinity of dendritic spines. Both the reporters and the tagging of endogenous proteins revealed transcript-specific stimulation of protein synthesis following cLTP or mGluR-LTD. As such, the plasticity-induced enrichment of mRNA near synapses could be uncoupled from its translational status. The enrichment of mRNA in the proximity of spines allows localized signaling pathways to decode the plasticity milieu and stimulate a specific translational profile, resulting in a customized remodeling of the synaptic proteome.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. Here, we hypothesized that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we found that attention enhanced the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Population analysis revealed that neuronal responses converged to a low dimensional subspace for natural but not for synthetic images. Critically, we determined that the attentional enhancement in stimulus decodability was captured by the dominant low dimensional subspace, suggesting an alignment between the attentional and natural stimulus variance. The alignment was pronounced for late evoked responses but not for early transient responses of V1 neurons, supporting the notion that top-down feedback was required. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
Anticipating future events is a key computational task for neuronal networks. Experimental evidence suggests that reliable temporal sequences in neural activity play a functional role in the association and anticipation of events in time. However, how neurons can differentiate and anticipate multiple spike sequences remains largely unknown. We implement a learning rule based on predictive processing, where neurons exclusively fire for the initial, unpredictable inputs in a spiking sequence, leading to an efficient representation with reduced post-synaptic firing. Combining this mechanism with inhibitory feedback leads to sparse firing in the network, enabling neurons to selectively anticipate different sequences in the input. We demonstrate that intermediate levels of inhibition are optimal to decorrelate neuronal activity and to enable the prediction of future inputs. Notably, each sequence is independently encoded in the sparse, anticipatory firing of the network. Overall, our results demonstrate that the interplay of self-supervised predictive learning rules and inhibitory feedback enables fast and efficient classification of different input sequences.
Inter-areal coherence has been hypothesized as a mechanism for inter-areal communication. Indeed, empirical studies have observed an increase in inter-areal coherence with attention. Yet, the mechanisms underlying changes in coherence remain largely unknown. Both attention and stimulus salience are associated with shifts in the peak frequency of gamma oscillations in V1, which suggests that the frequency of oscillations may play a role in facilitating changes in inter-areal communication and coherence. In this study, we used computational modeling to investigate how the peak frequency of a sender influences inter-areal coherence. We show that changes in the magnitude of coherence are largely determined by the peak frequency of the sender. However, the pattern of coherence depends on the intrinsic properties of the receiver, specifically whether the receiver integrates or resonates with its synaptic inputs. Because resonant receivers are frequency-selective, resonance has been proposed as a mechanism for selective communication. However, the pattern of coherence changes produced by a resonant receiver is inconsistent with empirical studies. By contrast, an integrator receiver does produce the pattern of coherence with frequency shifts in the sender observed in empirical studies. These results indicate that coherence can be a misleading measure of inter-areal interactions. This led us to develop a new measure of inter-areal interactions, which we refer to as Explained Power. We show that Explained Power maps directly to the signal transmitted by the sender filtered by the receiver, and thus provides a method to quantify the true signals transmitted between the sender and receiver. Together, these findings provide a model of changes in inter-areal coherence and Granger-causality as a result of frequency shifts.
Sensory processing relies on interactions between excitatory and inhibitory neurons, which are often coordinated by 30–80 Hz gamma oscillations. However, the specific contributions of distinct interneurons to gamma synchronization remain unclear. We performed high-density recordings from V1 in awake mice and used optogenetics to identify parvalbumin-positive (PV) and somatostatin-positive (Sst) interneurons. PV interneurons were highly phase-locked to visually induced gamma oscillations. Sst cells were heterogeneous, with only a subset of narrow-waveform cells showing strong gamma phase-locking. Interestingly, PV interneurons consistently fired at an earlier phase in the gamma cycle (≈6 ms, or 60 degrees) than Sst interneurons. Consequently, PV and Sst activity showed differential temporal relations with excitatory cells. In particular, the first and second spikes in burst events, which were strongly gamma phase-locked, shortly preceded PV and Sst activity, respectively. These findings indicate a primary role of PV interneurons in synchronizing excitatory cells and suggest that PV and Sst interneurons control the excitability of somatic and dendritic neural compartments with precise time delays coordinated by gamma oscillations.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
Evading imminent predator threat is critical for survival. Effective defensive strategies can vary, even between closely related species. However, the neural basis of such species-specific behaviours is still poorly understood. Here we find that two sister species of deer mice (genus Peromyscus) show different responses to the same looming stimulus: P. maniculatus, which occupy densely vegetated habitats, predominantly dart to escape, while the open field specialist, P. polionotus, pause their movement. This difference arises from species-specific escape thresholds, is largely context-independent, and can be triggered by both visual and auditory threat stimuli. Using immunohistochemistry and electrophysiological recordings, we find that although visual threat activates the superior colliculus in both species, the role of the dorsal periaqueductal gray (dPAG) in driving behaviour differs. While dPAG activity scales with running speed and involves both excitatory and inhibitory neurons in P. maniculatus, the dPAG is largely silent in P. polionotus, even when darting is triggered. Moreover, optogenetic activation of excitatory dPAG neurons reliably elicits darting behaviour in P. maniculatus but not P. polionotus. Together, we trace the evolution of species-specific escape thresholds to a central circuit node, downstream of peripheral sensory neurons, localizing an ecologically relevant behavioural difference to a specific region of the complex mammalian brain.
Individual differences in perception are widespread. Considering inter-individual variability, synesthetes experience stable additional sensations, while schizophrenia patients suffer from perceptual deficits, e.g. in perceptual organization (alongside hallucinations and delusions). Is there a unifying principle explaining inter-individual variability in perception? There is good reason to believe that perceptual experience results from inferential processes in which sensory evidence is weighted by prior knowledge about the world. Different perceptual phenotypes may result from different precision weighting of sensory evidence and prior knowledge. We tested this hypothesis by comparing visibility thresholds in a perceptual hysteresis task across medicated schizophrenia patients, synesthetes, and controls. Participants rated the subjective visibility of stimuli embedded in noise while we parametrically manipulated the availability of sensory evidence. Additionally, the precise long-term priors of synesthetes were leveraged by presenting either synesthesia-inducing or neutral stimuli. Schizophrenia patients showed increased visibility thresholds, consistent with an overreliance on sensory evidence. In contrast, synesthetes exhibited lowered thresholds exclusively for synesthesia-inducing stimuli, suggesting high-precision long-term priors. Additionally, in both synesthetes and schizophrenia patients, explicit short-term priors introduced during the hysteresis experiment lowered thresholds but did not normalize perception. Our results imply that distinct perceptual phenotypes might result from differences in the precision afforded to prior beliefs and sensory evidence, respectively.
Ribosomes translate the genetic code into proteins. Recent technical advances have facilitated in situ structural analyses of ribosome functional states inside eukaryotic cells and the minimal bacterium Mycoplasma. However, such analyses of Gram-negative bacteria are lacking, despite their ribosomes being major antimicrobial drug targets. Here we compare two E. coli strains, the laboratory strain E. coli K-12 and the human gut isolate E. coli ED1a, for which tetracycline exhibits bacteriostatic and bactericidal action, respectively. The in situ ribosome structures upon tetracycline treatment show a virtually identical drug-binding site in both strains, yet the distribution of ribosomal complexes clearly differs. While K-12 retains ribosomes in a translation-competent state, tRNAs are lost from the vast majority of ED1a ribosomes. This differential response is also reflected in proteome-wide abundance and thermal stability measurements. Our study underlines the need to include molecular analyses and to consider gut bacteria when addressing antibiotic modes of action.
Taraxerol and 3α,7α,22α-trihydroxy-stigmastene-(5) in the leaves of the hazelnut (Corylus avellana)
(1966)
From the leaves of the hazelnut (Corylus avellana), taraxerol, β-sitosterol, and 3α,7α,22α-trihydroxy-stigmastene-(5) could be isolated. The latter had previously been detected only in the leaves of the horse chestnut (Aesculus hippocastanum). Triterpenes with the dammarane skeleton could not be found in hazelnut leaves.
From the conversion of tritium-labeled tropine (3β-T) to pseudotropine (3α-T) in brain homogenate, under the synergistic action of a spore-forming bacterium and an enterococcal strain, it could be demonstrated that this trans-cis rearrangement proceeds via the elimination and re-addition of water. The elimination of water from 3α-tropanol to tropene-(2) is reversible, as evidenced by the incorporation of tritiated water into tropine.
Under the synergistic influence of two bacterial strains, an aerobic spore-former (Bac. alvei) and an enterococcal strain (Diplococcus I), tropine is completely converted into pseudotropine. The mechanism of this trans-cis rearrangement is discussed.
Suitable chromatographic solvent systems are given for the separation of tropane alkaloids and their derivatives.
The traditional view on coding in the cortex is that populations of neurons primarily convey stimulus information through the spike count. However, given the speed of sensory processing, it has been hypothesized that sensory encoding may rely on the spike-timing relationships among neurons. Here, we use a recently developed method based on Optimal Transport Theory called SpikeShip to study the encoding of natural movies by high-dimensional ensembles of neurons in visual cortex. SpikeShip is a generic measure of dissimilarity between spike train patterns based on the relative spike-timing relations among all neurons and with computational complexity similar to the spike count. We compared spike-count and spike-timing codes in up to N > 8000 neurons from six visual areas during natural video presentations. Using SpikeShip, we show that temporal spiking sequences convey substantially more information about natural movies than population spike-count vectors when the neural population size is larger than about 200 neurons. Remarkably, encoding through temporal sequences did not show representational drift both within and between blocks. By contrast, population firing rates showed better coding performance when there were few active neurons. Furthermore, the population firing rate showed memory across frames and formed a continuous trajectory across time. In contrast to temporal spiking sequences, population firing rates exhibited substantial drift across repetitions and between blocks. These findings suggest that spike counts and temporal sequences constitute two different coding schemes with distinct information about natural movies.
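The abstract contrasts spike-count and spike-timing codes; SpikeShip itself is defined in the cited work, but the distinction can be illustrated with a minimal sketch in which counts are compared by Euclidean distance and timing by a 1D optimal-transport (earth mover's) cost. Function names and numbers below are illustrative, not the SpikeShip implementation.

```python
import numpy as np

def count_distance(trial_a, trial_b):
    """Euclidean distance between per-neuron spike-count vectors."""
    counts_a = np.array([len(st) for st in trial_a], dtype=float)
    counts_b = np.array([len(st) for st in trial_b], dtype=float)
    return float(np.linalg.norm(counts_a - counts_b))

def timing_distance(trial_a, trial_b, n_quantiles=50):
    """Toy spike-timing dissimilarity: per neuron, the 1D earth mover's
    distance between spike-time distributions, averaged over neurons."""
    dists = []
    for st_a, st_b in zip(trial_a, trial_b):
        if len(st_a) == 0 or len(st_b) == 0:
            continue  # skip neurons silent in either trial
        q = np.linspace(0.0, 1.0, n_quantiles)
        # in 1D, the optimal transport cost equals the mean absolute
        # difference between matched quantiles of the two distributions
        dists.append(np.mean(np.abs(np.quantile(st_a, q) - np.quantile(st_b, q))))
    return float(np.mean(dists)) if dists else 0.0

# two trials, two neurons: identical counts, timing shifted by 2 ms in trial_b
trial_a = [np.array([10.0, 20.0, 30.0]), np.array([15.0, 25.0])]
trial_b = [np.array([12.0, 22.0, 32.0]), np.array([17.0, 27.0])]

print(count_distance(trial_a, trial_b))   # counts identical -> 0.0
print(timing_distance(trial_a, trial_b))  # timing shifted -> positive
```

In this toy example the two trials have identical spike counts per neuron but uniformly shifted spike times, so the count-based distance is zero while the timing-based distance is not, which is the dissociation the abstract exploits at scale.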
Human language relies on hierarchically structured syntax to facilitate efficient and robust communication. The correct processing of syntactic information is essential for successful communication between speakers. As an abstract level of language, syntax has often been studied separately from the physical form of the speech signal, which can mask the interactions that promote syntactic processing in the human brain. We analyzed an MEG dataset to investigate how acoustic cues, specifically prosody, interact with syntactic operations, and examined whether prosody enhances the cortical encoding of syntactic representations. We decoded left-sided dependencies directly from brain activity and evaluated possible modulations of the decoding by the presence of prosodic boundaries. Our findings demonstrate that the presence of prosodic boundaries improves the representation of left-sided dependencies, indicating a facilitative role of prosodic cues in processing abstract linguistic features. This study provides neurobiological evidence that syntactic processing is enhanced through its interaction with prosody.
Neural computations emerge from recurrent neural circuits that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded neural activity features. However, it is challenging to predict which spiking network connectivity configurations and neural properties can generate fundamental operational states and specific experimentally reported nonlinear cortical computations. Theoretical descriptions for the computational state of cortical spiking circuits are diverse, including the balanced state where excitatory and inhibitory inputs balance almost perfectly or the inhibition stabilized state (ISN) where the excitatory part of the circuit is unstable. It remains an open question whether these states can co-exist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We established a mapping between the stabilized supralinear network (SSN) and spiking activity which allowed us to pinpoint the location in parameter space where these activity regimes occur. Notably, we found that biologically-sized spiking networks can have irregular asynchronous activity that does not require strong excitation-inhibition balance or large feedforward input and we showed that the dynamic firing rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
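The SSN-to-spiking mapping is developed in the work itself; as a self-contained reference point, the rate-level stabilized supralinear network named in the abstract can be sketched in its standard two-population form, tau * dr/dt = -r + k*[W r + h]_+^n. The parameter values below follow the commonly used Rubin et al. (2015) settings and are illustrative, not the paper's fitted networks.

```python
import numpy as np

# two-population SSN: tau * dr/dt = -r + k * [W r + h]_+^n
k, n = 0.04, 2.0
W = np.array([[1.25, -0.65],    # onto E: from E, from I
              [1.19, -0.50]])   # onto I: from E, from I
tau = np.array([0.020, 0.010])  # membrane time constants (s)

def steady_state(c, T=2.0, dt=1e-4):
    """Euler-integrate the SSN to its steady state for input strength c."""
    r = np.zeros(2)
    h = np.array([c, c])        # equal feedforward drive to E and I
    for _ in range(int(T / dt)):
        drive = np.maximum(W @ r + h, 0.0)   # rectified net input
        r = r + (dt / tau) * (-r + k * drive**n)
    return r

r_low, r_high = steady_state(10.0), steady_state(40.0)
print(r_low, r_high)  # [E, I] steady-state rates at weak and strong input
```

Because the power-law gain grows with activity, recurrent inhibition increasingly dominates at strong input, which is the mechanism behind the sublinear and supersaturating regimes the abstract discusses.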
The firing pattern of ventral midbrain dopamine neurons is controlled by afferent and intrinsic activity to generate prediction error signals that are essential for reward-based learning. Given the absence of intracellular in vivo recordings in the last three decades, the subthreshold membrane potential events that cause changes in dopamine neuron firing patterns remain unknown. By establishing stable in vivo whole-cell recordings of >100 spontaneously active midbrain dopamine neurons in anaesthetized mice, we identified the repertoire of subthreshold membrane potential signatures associated with distinct in vivo firing patterns. We demonstrate that dopamine neuron in vivo activity deviates from a single spike pacemaker pattern by eliciting transient increases in firing rate generated by at least two diametrically opposing biophysical mechanisms: a transient depolarization resulting in high frequency plateau bursts associated with a reactive, depolarizing shift in action potential threshold; and a prolonged hyperpolarization preceding slower rebound bursts characterized by a predictive, hyperpolarizing shift in action potential threshold. Our findings therefore illustrate a framework for the biophysical implementation of prediction error and sensory cue coding in dopamine neurons by tuning action potential threshold dynamics.
Several studies have probed perceptual performance at different times after a self-paced motor action and found frequency-specific modulations of perceptual performance phase-locked to the action. Such action-related modulation has been reported for various frequencies and modulation strengths. In an attempt to establish a basic effect at the population level, we had a relatively large number of participants (n=50) perform a self-paced button press followed by a detection task at threshold, and we applied both fixed- and random-effects tests. The combined data of all trials and participants surprisingly did not show any significant action-related modulation. However, based on previous studies, we explored the possibility that such modulation depends on the participant’s internal state. Indeed, when we split trials based on performance in neighboring trials, then trials in periods of low performance showed an action-related modulation at ≈17 Hz. When we split trials based on the performance in the preceding trial, we found that trials following a “miss” showed an action-related modulation at ≈17 Hz. Finally, when we split participants based on their false-alarm rate, we found that participants with no false alarms showed an action-related modulation at ≈17 Hz. All these effects were significant in random-effects tests, supporting an inference on the population. Together, these findings indicate that action-related modulations are not always detectable. However, the results suggest that specific internal states such as lower attentional engagement and/or higher decision criterion are characterized by a modulation in the beta-frequency range.
Several recent studies investigated the rhythmic nature of cognitive processes that lead to perception and behavioral report. These studies used different methods, and there has not yet been an agreement on a general standard. Here, we present a way to test and quantitatively compare these methods. We simulated behavioral data from a typical experiment and analyzed these data with several methods. We applied the main methods found in the literature, namely sine-wave fitting, the discrete Fourier transform (DFT) and the least square spectrum (LSS). DFT and LSS can be applied both on the average accuracy time course and on single trials. LSS is mathematically equivalent to DFT in the case of regular, but not irregular sampling - which is more common. LSS additionally offers the possibility to take into account a weighting factor which affects the strength of the rhythm, such as arousal. Statistical inferences were done either on the investigated sample (fixed-effects) or on the population (random-effects) of simulated participants. Multiple comparisons across frequencies were corrected using False Discovery Rate, Bonferroni, or the Max-Based approach. To perform a quantitative comparison, we calculated sensitivity, specificity and D-prime of the investigated analysis methods and statistical approaches. Within the investigated parameter range, single-trial methods had higher sensitivity and D-prime than the methods based on the average accuracy time course. This effect was further increased for a simulated rhythm of higher frequency. If an additional (observable) factor influenced detection performance, adding this factor as weight in the LSS further improved sensitivity and D-prime. For multiple comparison correction, the Max-Based approach provided the highest specificity and D-prime, closely followed by the Bonferroni approach. 
Given a fixed total number of trials, the random-effects approach had a higher D-prime when trials were distributed over a larger number of participants, even though this gave fewer trials per participant. Finally, we present the idea of using a dampened sinusoidal oscillator instead of a simple sinusoidal function, to further improve the fit to behavioral rhythmicity observed after a reset event.
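As a hedged sketch of the single-trial least square spectrum (LSS) idea described above, one can regress single-trial hit/miss outcomes on a cosine and sine of each trial's delay time, optionally with per-trial weights; the squared amplitude of the fit is the spectral estimate at the tested frequency. The variable names and simulation parameters are illustrative, not the paper's exact implementation.

```python
import numpy as np

def lss_power(times, outcomes, freq, weights=None):
    """Least square spectrum at one frequency: weighted regression of
    single-trial outcomes on cosine, sine, and an intercept."""
    if weights is None:
        weights = np.ones_like(times)
    X = np.column_stack([np.cos(2 * np.pi * freq * times),
                         np.sin(2 * np.pi * freq * times),
                         np.ones_like(times)])     # intercept absorbs mean accuracy
    sw = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], outcomes * sw, rcond=None)
    return beta[0]**2 + beta[1]**2                 # squared rhythm amplitude

# simulate a behavioral experiment: hit rate modulated at 8 Hz
rng = np.random.default_rng(0)
n_trials = 2000
times = rng.uniform(0.0, 1.0, n_trials)            # irregular delays (s)
p_hit = 0.5 + 0.2 * np.sin(2 * np.pi * 8 * times)
outcomes = (rng.uniform(0.0, 1.0, n_trials) < p_hit).astype(float)

freqs = np.arange(2, 20)
spectrum = [lss_power(times, outcomes, f) for f in freqs]
print(freqs[int(np.argmax(spectrum))])             # expect a peak near 8 Hz
```

A statistical test would then compare this spectrum against a null distribution, with correction over the tested frequencies as compared in the abstract.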
Analyzing non-invasive recordings of electroencephalography (EEG) and magnetoencephalography (MEG) directly in sensor space, using the signal from individual sensors, is a convenient and standard way of working with this type of data. However, volume conduction introduces considerable challenges for sensor space analysis. While the general idea of signal mixing due to volume conduction in EEG/MEG is recognized, the implications have not yet been clearly exemplified. Here, we illustrate how different types of activity overlap on the level of individual sensors. We show spatial mixing in the context of alpha rhythms, which are known to have generators in different areas of the brain. Using simulations with a realistic 3D head model and lead field, together with data analysis of a large resting-state EEG dataset, we compute a sensor complexity measure to show that electrode signals can be differentially affected by spatial mixing. While prominent occipital alpha rhythms result in less heterogeneous spatial mixing on posterior electrodes, central electrodes show a diversity of rhythms. This makes the individual contributions, such as the sensorimotor mu rhythm and temporal alpha rhythms, hard to disentangle from the dominant occipital alpha. Additionally, we show how strong occipital rhythms can contribute the majority of activity to frontal channels, potentially compromising analyses that are conducted solely in sensor space. We also outline specific consequences of signal mixing for frequently used assessments of power, power ratios, and connectivity profiles in basic research and for neurofeedback applications. With this work, we hope to illustrate the effects of volume conduction in a concrete way, so that the provided practical illustrations may help EEG researchers evaluate whether sensor space is an appropriate choice for their topic of investigation.
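The sensor-space mixing described above is linear: each electrode records a leadfield-weighted sum of all source time courses. A minimal sketch with two alpha-band generators and made-up mixing weights (not a real head model) shows how a strong occipital source can dominate a frontal channel.

```python
import numpy as np

t = np.arange(0.0, 10.0, 1.0 / 250.0)        # 10 s sampled at 250 Hz

# two alpha-band sources: a strong occipital and a weak frontal generator
occipital = 3.0 * np.sin(2 * np.pi * 10 * t)
frontal   = 1.0 * np.sin(2 * np.pi * 11 * t + 0.7)

# toy leadfield: rows are electrodes, columns are sources
# (weights chosen for illustration, not derived from a head model)
L = np.array([[1.0, 0.1],   # posterior electrode: mostly occipital
              [0.5, 0.4]])  # frontal electrode: mixed
sensors = L @ np.vstack([occipital, frontal])

# fraction of frontal-electrode variance contributed by the occipital source
var_occ = np.var(L[1, 0] * occipital)
var_fro = np.var(L[1, 1] * frontal)
occ_share = var_occ / (var_occ + var_fro)
print(occ_share)  # well above 0.5: occipital dominates the frontal channel
```

Even with a much smaller leadfield weight, the stronger occipital generator accounts for most of the frontal electrode's variance, which is the sensor-space pitfall the abstract illustrates with realistic models.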
Entorhinal-retrosplenial circuits for allocentric-egocentric transformation of boundary coding
(2020)
Spatial navigation requires landmark coding from two perspectives, relying on viewpoint-invariant and self-referenced representations. The brain encodes information within each reference frame, but their interactions and functional dependency remain unclear. Here we investigate the relationship between neurons in the rat's retrosplenial cortex (RSC) and medial entorhinal cortex (MEC) that increase firing near boundaries of space. Border cells in RSC specifically encode walls, but not objects, and are sensitive to the animal's direction to nearby borders. These egocentric representations are generated independently of visual or whisker sensation but are affected by inputs from MEC, which contains allocentric spatial cells. Pharmaco- and optogenetic inhibition of MEC led to a disruption of border coding in RSC, but not vice versa, indicating an allocentric-to-egocentric transformation. Finally, RSC border cells fire prospectively to the animal's next movement, unlike those in MEC, revealing the MEC-RSC pathway as an extended border coding circuit that implements coordinate transformation to guide navigation behavior.
Borders and edges are salient and behaviourally relevant features for navigating the environment. The brain forms dedicated neural representations of environmental boundaries, which are assumed to serve as a reference for spatial coding. Here we expand this border coding network to include the retrosplenial cortex (RSC) in which we identified neurons that increase their firing near all boundaries of an arena. RSC border cells specifically encode walls, but not objects, and maintain their tuning in the absence of direct sensory detection. Unlike border cells in the medial entorhinal cortex (MEC), RSC border cells are sensitive to the animal’s direction to nearby walls located contralateral to the recorded hemisphere. Pharmacogenetic inactivation of MEC led to a disruption of RSC border coding, but not vice versa, indicating network directionality. Together these data shed light on how information about distance and direction of boundaries is generated in the brain for guiding navigation behaviour.
Brookshire (2022) claims that previous analyses of periodicity in detection performance after a reset event suffer from extreme false-positive rates. Here we show that this conclusion is based on an incorrect implementation of a null hypothesis of aperiodicity, and that a correct implementation confirms low false-positive rates. Furthermore, we clarify that the previously used method of shuffling-in-time, and thereby shuffling-in-phase, cleanly implements the null hypothesis of no temporal structure after the reset, and thereby of no phase locking to the reset. Moving from a corresponding phase-locking spectrum to an inference on the periodicity of the underlying process can be accomplished by parameterizing the spectrum. This can separate periodic from non-periodic components and quantify the strength of periodicity.
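A minimal sketch of this logic on simulated detection times (all numbers are illustrative, not from the paper): phase locking to the reset is measured as the resultant length of per-event phasors at each frequency, and the null hypothesis of no temporal structure is implemented by redistributing event times uniformly within the analysis window, which is what shuffling-in-time achieves.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_locking_spectrum(event_times, freqs):
    """Resultant length of per-event phasors: phase locking to the reset at t=0."""
    t = np.asarray(event_times)[:, None]
    return np.abs(np.exp(2j * np.pi * freqs[None, :] * t).mean(axis=0))

freqs = np.linspace(2, 12, 41)

# Simulated detection times with an 8 Hz periodicity after the reset.
times = np.repeat(np.arange(1, 9) / 8.0, 30) + rng.normal(0, 0.01, 240)
obs = phase_locking_spectrum(times, freqs)

# Null of no temporal structure after the reset: redistribute the event
# times uniformly in the window (the effect of shuffling-in-time).
null = np.stack([phase_locking_spectrum(rng.uniform(0, 1, times.size), freqs)
                 for _ in range(200)])
thresh = np.quantile(null.max(axis=1), 0.95)   # max-statistic 95% threshold
```

The observed spectrum peaks at the generating frequency and clears the shuffle-based threshold, while the null distribution stays low, illustrating the low false-positive behavior of a correctly implemented shuffle.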
Cognition requires the dynamic modulation of effective connectivity, i.e., the modulation of the postsynaptic neuronal response to a given input. If postsynaptic neurons are rhythmically active, this might entail rhythmic gain modulation, such that inputs synchronized to phases of high gain benefit from enhanced effective connectivity. We show that visually induced gamma-band activity in awake macaque area V4 rhythmically modulates responses to unpredictable stimulus events. This modulation exceeded a simple additive superposition of a constant response onto ongoing gamma-rhythmic firing, demonstrating the modulation of multiplicative gain. Gamma phases leading to strongest neuronal responses also led to shortest behavioral reaction times, suggesting functional relevance of the effect. Furthermore, we find that constant optogenetic stimulation of anesthetized cat area 21a produces gamma-band activity entailing a similar gain modulation. As the gamma rhythm in area 21a did not spread backward to area 17, this suggests that postsynaptic gamma is sufficient for gain modulation.
Synchronization has been implicated in neuronal communication, but causal evidence remains indirect. We use optogenetics to generate depolarizing currents in pyramidal neurons of the cat visual cortex, emulating excitatory synaptic inputs under precise temporal control, while measuring spike output. The cortex transforms constant excitation into strong gamma-band synchronization, revealing the well-known cortical resonance. Increasing excitation with ramps increases the strength and frequency of synchronization. Slow, symmetric excitation profiles reveal hysteresis of power and frequency. White-noise input sequences enable causal analysis of network transmission, establishing that the cortical gamma-band resonance preferentially transmits coherent input components. Models composed of recurrently coupled excitatory and inhibitory units uncover a crucial role of feedback inhibition and suggest that hysteresis can arise through spike-frequency adaptation. The presented approach provides a powerful means to investigate the resonance properties of local circuits and probe how these properties transform input and shape transmission.
The gamma rhythm has been implicated in neuronal communication, but causal evidence remains indirect. We measured spike output of local neuronal networks and emulated their synaptic input through optogenetics. Opsins provide currents through somato-dendritic membranes, similar to synapses, yet under experimental control with high temporal precision. We expressed Channelrhodopsin-2 in excitatory neurons of cat visual cortex and recorded neuronal responses to light with different temporal characteristics. Sine waves of different frequencies entrained neuronal responses with a reliability that peaked for input frequencies in the gamma band. Crucially, we also presented white-noise sequences, because their temporal unpredictability enables analysis of causality. Neuronal spike output was caused specifically by the input’s gamma component. This gamma-specific transfer function is likely an emergent property of in-vivo networks with feedback inhibition. The method described here could reveal the transfer function between the input to any one and the output of any other neuronal group.
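The white-noise logic of these studies can be illustrated with a toy model. Below, the circuit is caricatured as an ideal gamma band-pass plus output noise (a strong simplification of the recurrent excitation-inhibition dynamics described above), and input-output coherence, estimated from segment-averaged spectra, recovers the transmitted band.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n, nseg = 1000, 2**14, 32

# White-noise 'synaptic input', as emulated optogenetically.
x = rng.normal(size=n)

# Caricature of the circuit as an ideal 50-70 Hz band-pass plus output
# noise (the recurrent E-I dynamics are not modeled here).
f_full = np.fft.rfftfreq(n, 1 / fs)
band = (f_full > 50) & (f_full < 70)
y = np.fft.irfft(np.fft.rfft(x) * band, n) + 0.1 * rng.normal(size=n)

def coherence(a, b, nseg):
    """Magnitude-squared coherence from segment-averaged (cross-)spectra."""
    seglen = len(a) // nseg
    A = np.fft.rfft(a[:nseg * seglen].reshape(nseg, seglen), axis=1)
    B = np.fft.rfft(b[:nseg * seglen].reshape(nseg, seglen), axis=1)
    Sab = (A * B.conj()).mean(axis=0)
    return np.abs(Sab) ** 2 / ((np.abs(A) ** 2).mean(axis=0)
                               * (np.abs(B) ** 2).mean(axis=0))

coh = coherence(x, y, nseg)
f_seg = np.fft.rfftfreq(n // nseg, 1 / fs)   # frequency axis of the estimate
```

Because the input is temporally unpredictable, high coherence restricted to the gamma band indicates that specifically the input's gamma component caused the output, mirroring the causal argument of the papers.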
Signal transfer of visual stimuli to V4 occurs in gamma-rhythmic, pulsed information packages
(2020)
Summary
Selective visual attention allows the brain to focus on behaviorally relevant information while ignoring irrelevant signals. As a possible mechanism, routing by synchronization was proposed: neural populations sending attended signals align their gamma-rhythmic activities with receiving populations, such that spikes from the senders arrive at excitability peaks of the receivers, enhancing signal transfer. Conversely, the non-attended signals arrive unaligned to the receiver’s oscillation, reducing signal transfer. Therefore, visual signals should be transferred through periodically pulsed information packages, resulting in a modulation of the stimulus content within the receiver’s activity by its gamma phase and amplitude. To test this prediction, we quantified gamma phase-specific stimulus content within neural activity from area V4 of macaques performing a visual attention task. For the attended stimulus, we find enhanced stimulus content reaching its maximum near excitability peaks, with effect magnitude increasing with oscillation amplitude, establishing a functional link between selective processing and gamma activity.
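The prediction that stimulus content is modulated by gamma phase can be sketched as a phase-binning analysis on synthetic data (the actual study decoded stimulus content from V4 activity; everything below, including the 60 Hz signal and the phase of peak content, is illustrative).

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (numpy-only equivalent of a Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

rng = np.random.default_rng(2)
fs, n = 1000, 10000
t = np.arange(n) / fs

gamma = np.sin(2 * np.pi * 60 * t)            # idealized 60 Hz gamma signal
phase = np.angle(analytic_signal(gamma))

# Toy 'stimulus content' that peaks at the excitability phase (0 rad here),
# as routing-by-synchronization predicts.
content = 1 + np.cos(phase) + 0.3 * rng.normal(size=n)

edges = np.linspace(-np.pi, np.pi, 9)          # eight phase bins
idx = np.clip(np.digitize(phase, edges) - 1, 0, 7)
binned = np.array([content[idx == k].mean() for k in range(8)])
```

Binned in this way, content is maximal in the bins around phase zero, the toy stand-in for the receiver's excitability peak.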
Afterimages result from prolonged exposure to still visual stimuli. They are best detectable when viewed against uniform backgrounds and can persist for multiple seconds. Consequently, the dynamics of afterimages appear to be slow by their very nature. On the contrary, we report here that about 50% of an afterimage's intensity can be erased rapidly, within less than a second. The prerequisite is that subjects view rich visual content to erase the afterimage; fast erasure of afterimages does not occur if subjects view a blank screen. Moreover, we find evidence that fast removal of afterimages is a skill learned with practice, as our subjects were always more effective in cleaning up afterimages in later parts of the experiment. These results can be explained by a tri-level hierarchy of adaptive mechanisms, as proposed by the theory of practopoiesis.
Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can appear because of spectral correlations due to common non-stationarities that may arise in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis.
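As a point of reference for the issues discussed, here is one standard CFC estimator, the mean-vector-length phase-amplitude modulation index, applied to synthetic signals. Note that, exactly as the abstract warns, such an estimator can also respond to non-stationarities and spectral correlations that this clean toy example does not contain.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (numpy-only Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def modulation_index(slow, fast):
    """Mean-vector-length PAC: |mean(A_fast * exp(i * phi_slow))|."""
    phi = np.angle(analytic_signal(slow))
    amp = np.abs(analytic_signal(fast))
    return np.abs(np.mean(amp * np.exp(1j * phi)))

fs, n = 1000, 20000
t = np.arange(n) / fs
theta = np.sin(2 * np.pi * 6 * t)

# Genuine coupling: gamma amplitude rides on theta phase.
gamma_coupled = (1 + 0.8 * np.sin(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 60 * t)
# No coupling: constant-amplitude gamma.
gamma_flat = np.sin(2 * np.pi * 60 * t)

mi_coupled = modulation_index(theta, gamma_coupled)
mi_flat = modulation_index(theta, gamma_flat)
```

The estimator separates the coupled from the uncoupled case here; the article's point is that in real data a non-zero index alone does not license a mechanistic interpretation without controls for non-stationarity.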
How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities, which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (< 3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast, and against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not achieve significant prediction of intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather reflected across the entire length of the brain connectivity time series.
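The cofluctuation decomposition referred to above can be sketched in a few lines (toy data, not the CMEP prediction framework itself): edge time series are products of z-scored regional signals, frames are ranked by their root-sum-square cofluctuation amplitude, and a connectivity estimate is formed from the top frames.

```python
import numpy as np

def edge_timeseries(ts):
    """Edge cofluctuation: elementwise products of z-scored region pairs.

    ts has shape (time, regions). The temporal mean of each edge series
    equals the Pearson correlation of the corresponding region pair.
    """
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    i, j = np.triu_indices(ts.shape[1], k=1)
    return z[:, i] * z[:, j]

rng = np.random.default_rng(4)
shared = rng.normal(size=(500, 1))
ts = 0.7 * shared + rng.normal(size=(500, 5))   # five correlated toy regions

E = edge_timeseries(ts)
rss = np.sqrt((E ** 2).sum(axis=1))             # cofluctuation amplitude per frame
top = np.argsort(rss)[-20:]                     # 20 highest-cofluctuation frames
fc_top = E[top].mean(axis=0)                    # FC estimate from few frames
fc_full = E.mean(axis=0)                        # full 'scan' FC
```

In this toy example the few highest-cofluctuation frames yield exaggerated positive connectivity relative to the full scan, illustrating why a handful of frames can carry network structure while not being equivalent to the full time series.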
Probing the association between resting state brain network dynamics and psychological resilience
(2021)
Abstract
This study aimed at replicating a previously reported negative correlation between node flexibility and psychological resilience, i.e., the ability to retain mental health in the face of stress and adversity. To this end, we used multiband resting-state BOLD fMRI (TR = .675 sec) from 52 participants who had filled out three psychological questionnaires assessing resilience. Time-resolved functional connectivity was calculated by performing a sliding window approach on averaged time series parcellated according to different established atlases. Multilayer modularity detection was performed to track network reconfigurations over time, and node flexibility was calculated as the number of times a node changes community assignment. In addition, node promiscuity (the fraction of communities a node participates in) and node degree (as proxy for time-varying connectivity) were calculated to extend previous work. We found no substantial correlations between resilience and node flexibility. We observed a small number of correlations between the two other brain measures and resilience scores that were, however, very inconsistently distributed across brain measures, differences in temporal sampling, and parcellation schemes. This heterogeneity calls into question the existence of previously postulated associations between resilience and brain network flexibility and highlights how results may be influenced by specific analysis choices.
Author Summary We tested the replicability and generalizability of a previously proposed negative association between dynamic brain network reconfigurations derived from multilayer modularity detection (node flexibility) and psychological resilience. Using multiband resting-state BOLD fMRI data and exploring several parcellation schemes, sliding window approaches, and temporal resolutions of the data, we could not replicate previously reported findings regarding the association between node flexibility and resilience. By extending this work to other measures of brain dynamics (node promiscuity, degree), we observe a rather inconsistent pattern of correlations with resilience that strongly varies across analysis choices. We conclude that further research is needed to understand the network neuroscience basis of mental health and discuss several reasons that may account for the variability in results.
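Node flexibility, as used in this study, has a simple operational definition: the number of times a node changes community assignment across layers, normalized by the number of layer transitions. A minimal sketch, given a community-assignment matrix such as would come out of multilayer modularity detection (which is not reimplemented here):

```python
import numpy as np

def node_flexibility(communities):
    """Fraction of consecutive layer pairs in which a node switches community.

    communities has shape (n_layers, n_nodes), e.g. community labels from
    multilayer modularity detection.
    """
    changes = communities[1:] != communities[:-1]
    return changes.mean(axis=0)

# Toy assignments over 4 layers: node 0 never switches, node 2 always does.
C = np.array([[1, 1, 1],
              [1, 2, 2],
              [1, 2, 1],
              [1, 1, 2]])
flex = node_flexibility(C)
```

Node promiscuity would instead count the number of distinct communities per column divided by the total number of communities; both are per-node summaries of the same assignment matrix.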
Probing the association between resting-state brain network dynamics and psychological resilience
(2022)
Abstract
This study aimed at replicating a previously reported negative correlation between node flexibility and psychological resilience, that is, the ability to retain mental health in the face of stress and adversity. To this end, we used multiband resting-state BOLD fMRI (TR = .675 sec) from 52 participants who had filled out three psychological questionnaires assessing resilience. Time-resolved functional connectivity was calculated by performing a sliding window approach on averaged time series parcellated according to different established atlases. Multilayer modularity detection was performed to track network reconfigurations over time, and node flexibility was calculated as the number of times a node changes community assignment. In addition, node promiscuity (the fraction of communities a node participates in) and node degree (as proxy for time-varying connectivity) were calculated to extend previous work. We found no substantial correlations between resilience and node flexibility. We observed a small number of correlations between the two other brain measures and resilience scores that were, however, very inconsistently distributed across brain measures, differences in temporal sampling, and parcellation schemes. This heterogeneity calls into question the existence of previously postulated associations between resilience and brain network flexibility and highlights how results may be influenced by specific analysis choices.
Author Summary
We tested the replicability and generalizability of a previously proposed negative association between dynamic brain network reconfigurations derived from multilayer modularity detection (node flexibility) and psychological resilience. Using multiband resting-state BOLD fMRI data and exploring several parcellation schemes, sliding window approaches, and temporal resolutions of the data, we could not replicate previously reported findings regarding the association between node flexibility and resilience. By extending this work to other measures of brain dynamics (node promiscuity, degree) we observe a rather inconsistent pattern of correlations with resilience that strongly varies across analysis choices. We conclude that further research is needed to understand the network neuroscience basis of mental health and discuss several reasons that may account for the variability in results.
Word familiarity and predictive context facilitate visual word processing, leading to faster recognition times and reduced neuronal responses. Previously, models with and without top-down connections, including lexical-semantic, pre-lexical (e.g., orthographic/phonological), and visual processing levels were successful in accounting for these facilitation effects. Here we systematically assessed context-based facilitation with a repetition priming task and explicitly dissociated pre-lexical and lexical processing levels using a pseudoword familiarization procedure. Experiment 1 investigated the temporal dynamics of neuronal facilitation effects with magnetoencephalography (MEG; N = 38 human participants), while Experiment 2 assessed behavioral facilitation effects (N = 24 human participants). Across all stimulus conditions, MEG demonstrated context-based facilitation across multiple time windows starting at 100 ms, in occipital brain areas. This finding indicates context-based facilitation at an early visual processing level. In both experiments, we furthermore found an interaction of context and lexical familiarity, such that stimuli with associated meaning showed the strongest context-dependent facilitation in brain activation and behavior. Using MEG, this facilitation effect could be localized to the left anterior temporal lobe at around 400 ms, indicating within-level (i.e., exclusively lexical-semantic) facilitation but no top-down effects on earlier processing stages. Increased pre-lexical familiarity (in pseudowords familiarized through training) did not significantly enhance or reduce context effects. We conclude that context-based facilitation is achieved within visual and lexical processing levels. Finally, by testing alternative hypotheses derived from mechanistic accounts of repetition suppression, we suggest that the facilitatory context effects found here are implemented using a predictive coding mechanism.
To characterize the role of the left-ventral occipito-temporal cortex (lvOT) during reading in a quantitatively explicit and testable manner, we propose the lexical categorization model (LCM). The LCM assumes that lvOT optimizes linguistic processing by allowing fast meaning access when words are familiar and by filtering out orthographic strings without meaning. The LCM successfully simulates benchmark results from functional brain imaging. Empirically, using functional magnetic resonance imaging, we demonstrate that quantitative LCM simulations predict lvOT activation across three studies better than alternative models. In addition, we found that word-likeness, which is assumed as input to the LCM, is represented posterior to lvOT. In contrast, a dichotomous word/non-word contrast, which is assumed as the LCM's output, could be localized to upstream frontal brain regions. Finally, we found that training lexical categorization results in more efficient reading. Thus, we propose a ventral-visual-stream processing framework for reading involving word-likeness extraction followed by lexical categorization, before meaning extraction.
To a crucial extent, the efficiency of reading results from the fact that visual word recognition is faster in predictive contexts. Predictive coding models suggest that this facilitation results from pre-activation of predictable stimulus features across multiple representational levels before stimulus onset. Still, it is not sufficiently understood which aspects of the rich set of linguistic representations that are activated during reading—visual, orthographic, phonological, and/or lexical-semantic—contribute to context-dependent facilitation. To investigate in detail which linguistic representations are pre-activated in a predictive context and how they affect subsequent stimulus processing, we combined a well-controlled repetition priming paradigm, including words and pseudowords (i.e., pronounceable nonwords), with behavioral and magnetoencephalography measurements. For statistical analysis, we used linear mixed modeling, which we found had a higher statistical power compared to conventional multivariate pattern decoding analysis. Behavioral data from 49 participants indicate that word predictability (i.e., context present vs. absent) facilitated orthographic and lexical-semantic, but not visual or phonological processes. Magnetoencephalography data from 38 participants show sustained activation of orthographic and lexical-semantic representations in the interval before processing the predicted stimulus, suggesting selective pre-activation at multiple levels of linguistic representation as proposed by predictive coding. However, we found more robust lexical-semantic representations when processing predictable in contrast to unpredictable letter strings, and pre-activation effects mainly resembled brain responses elicited when processing the expected letter string. This finding suggests that pre-activation did not result in “explaining away” predictable stimulus features, but rather in a “sharpening” of brain responses involved in word processing.
Abstract
To characterize the functional role of the left-ventral occipito-temporal cortex (lvOT) during reading in a quantitatively explicit and testable manner, we propose the lexical categorization model (LCM). The LCM assumes that lvOT optimizes linguistic processing by allowing fast meaning access when words are familiar and filtering out orthographic strings without meaning. The LCM successfully simulates benchmark results from functional brain imaging described in the literature. In a second evaluation, we empirically demonstrate that quantitative LCM simulations predict lvOT activation better than alternative models across three functional magnetic resonance imaging studies. We found that word-likeness, assumed as input into a lexical categorization process, is represented posteriorly to lvOT, whereas a dichotomous word/non-word output of the LCM could be localized to downstream frontal brain regions. Finally, training the process of lexical categorization resulted in more efficient reading. In sum, we propose that word recognition in the ventral visual stream involves word-likeness extraction followed by lexical categorization before one can access word meaning.
Author summary
Visual word recognition is a critical process for reading and relies on the human brain’s left ventral occipito-temporal (lvOT) regions. However, the lvOT's specific function in visual word recognition is not yet clear. We propose that these occipito-temporal brain systems are critical for lexical categorization, i.e., the process of determining whether an orthographic percept is a known word or not, so that further lexical and semantic processing can be restricted to those percepts that are part of our "mental lexicon". We demonstrate that a computational model implementing this process, the lexical categorization model, can explain seemingly contradictory benchmark results from the published literature. We further use functional magnetic resonance imaging to show that the lexical categorization model successfully predicts brain activation in the left ventral occipito-temporal cortex elicited during a word recognition task. It does so better than alternative models proposed so far. Finally, we provide causal evidence supporting this model by empirically demonstrating that training the process of lexical categorization improves reading performance.
Most current models assume that the perceptual and cognitive processes of visual word recognition and reading operate upon neuronally coded domain-general low-level visual representations – typically oriented line representations. We here demonstrate, consistent with neurophysiological theories of Bayesian-like predictive neural computations, that prior visual knowledge of words may be utilized to ‘explain away’ redundant and highly expected parts of the visual percept. Subsequent processing stages, accordingly, operate upon an optimized representation of the visual input, the orthographic prediction error, highlighting only the visual information relevant for word identification. We show that this optimized representation is related to orthographic word characteristics, accounts for word recognition behavior, and is processed early in the visual processing stream, i.e., in V4 and before 200 ms after word-onset. Based on these findings, we propose that prior visual-orthographic knowledge is used to optimize the representation of visually presented words, which in turn allows for highly efficient reading processes.
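The notion of an orthographic prediction error can be sketched as the difference between a percept and a knowledge-based prediction. The toy version below uses the pixel-wise mean of a set of synthetic 'word images' as the prediction; the published computation is more elaborate, so this is only an illustration of the 'explaining away' idea.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 'word images': a shared script-level structure plus ~10%
# item-specific pixels per word.
shared = rng.random((16, 16)) > 0.5
words = [(shared ^ (rng.random((16, 16)) > 0.9)).astype(float)
         for _ in range(50)]

prior = np.mean(words, axis=0)   # the redundant, expected part of the percept

def orthographic_prediction_error(img, prior):
    """Simplified oPE: the percept minus the knowledge-based prediction."""
    return img - prior

pe = orthographic_prediction_error(words[0], prior)
# The residual carries far less variance than the raw percept,
# leaving mainly the word-identifying pixels.
```

Subtracting the prediction removes the redundant, highly expected structure and concentrates the remaining signal on the pixels that distinguish this word from others, which is the optimized representation the abstract describes.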
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, where the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has developed that can account for classic EEG findings of language, i.e., the ease vs. difficulty of integrating a word with its sentence context. However, models of semantics have to account not only for context-based word processing, but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English, German) read isolated words, we encode and decode word vectors taken from the family of prediction-based word2vec algorithms. We find that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models perform better than a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
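The encoding analysis described, predicting neural activity patterns from a word's position in vector space, can be sketched with ridge regression on simulated data. The study used real EEG and word2vec vectors; every array below is synthetic, and the regularization constant is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
n_words, dim, n_chan = 200, 10, 32

vecs = rng.normal(size=(n_words, dim))        # stand-in word vectors
W_true = rng.normal(size=(dim, n_chan))       # hypothetical vector-to-ERP mapping
eeg = vecs @ W_true + 2.0 * rng.normal(size=(n_words, n_chan))  # noisy 'ERP' patterns

Xtr, Xte = vecs[:150], vecs[150:]
Ytr, Yte = eeg[:150], eeg[150:]

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
lam = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(dim), Xtr.T @ Ytr)
pred = Xte @ W

# Encoding score: correlation of predicted vs. observed activity per channel.
score = np.array([np.corrcoef(pred[:, c], Yte[:, c])[0, 1]
                  for c in range(n_chan)])
```

Above-chance correlations on held-out words are the signature that vector-space position carries information about the neural response; the decoding direction simply swaps the roles of X and Y.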
The outstanding speed of language comprehension necessitates a highly efficient implementation of cognitive-linguistic processes. The domain-general theory of Predictive Coding suggests that our brain solves this problem by continuously forming linguistic predictions about expected upcoming input. The neurophysiological implementation of these predictive linguistic processes, however, is not yet understood. Here, we use EEG (human participants, both sexes) to investigate the existence and nature of online-generated, category-level semantic representations during sentence processing. We conducted two experiments in which some nouns – embedded in a predictive spoken sentence context – were unexpectedly delayed by 1 second. Target nouns were either abstract/concrete (Experiment 1) or animate/inanimate (Experiment 2). We hypothesized that if neural prediction error signals following (temporary) omissions carry specific information about the stimulus, the semantic category of the upcoming target word is encoded in brain activity prior to its presentation. Using time-generalized multivariate pattern analysis, we demonstrate significant decoding of word category from silent periods directly preceding the target word, in both experiments. This provides direct evidence for predictive coding during sentence processing, i.e., that information about a word can be encoded in brain activity before it is perceived. While the same semantic contrast could also be decoded from EEG activity elicited by isolated words (Experiment 1), the identified neural patterns did not generalize to pre-stimulus delay period activity in sentences. Our results not only indicate that the brain processes language predictively, but also demonstrate the nature and sentence-specificity of category-level semantic predictions preactivated during sentence comprehension.
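The time-generalized multivariate pattern analysis used above trains a classifier at each time point and tests it at every other time point, so that stimulus information present before word onset shows up as off-diagonal generalization. A minimal sketch on synthetic trials, using a simple nearest-centroid classifier (all dimensions, time windows, and the injected pattern are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 32, 20
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_channels, n_times))
# Category-specific pattern from t = 10 onward (a hypothetical "prediction" window).
pattern = rng.standard_normal(n_channels)
X[y == 1, :, 10:] += pattern[None, :, None]

train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)

def centroid_score(X_tr, y_tr, X_te, y_te):
    """Nearest-centroid decoding accuracy on held-out trials."""
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    pred = np.linalg.norm(X_te - c1, axis=1) < np.linalg.norm(X_te - c0, axis=1)
    return float(np.mean(pred == y_te))

# Temporal generalization matrix: train at t_tr, test at t_te.
scores = np.zeros((n_times, n_times))
for t_tr in range(n_times):
    for t_te in range(n_times):
        scores[t_tr, t_te] = centroid_score(X[train][:, :, t_tr], y[train],
                                            X[test][:, :, t_te], y[test])
print(scores[12:, 12:].mean(), scores[:8, :8].mean())
```

In the pattern-bearing window the matrix shows a block of high accuracy that generalizes across training and testing times, while the noise-only window stays at chance.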
Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum between about 4.3 and 5.5 Hz, reflecting the production and processing of linguistic information chunks (syllables, words) every ∼200 ms. Interestingly, ∼200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we demonstrate that German readers sample written text at ∼5 Hz. A subsequent meta-analysis with 142 studies from 14 languages replicates this result, but also shows that sampling frequencies vary across languages between 3.9 Hz and 5.2 Hz, and that this variation systematically depends on the complexity of the writing systems (character-based vs. alphabetic systems, orthographic transparency). Finally, we demonstrate empirically a positive correlation between speech spectrum and eye-movement sampling in low-skilled readers. Based on this convergent evidence, we propose that during reading, our brain’s linguistic processing systems imprint a preferred processing rate, i.e., the rate of spoken language production and perception, onto the oculomotor system.
Mental imagery provides an essential simulation tool for remembering the past and planning the future, with its strength affecting both cognition and mental health. Research suggests that neural activity spanning prefrontal, parietal, temporal, and visual areas supports the generation of mental images. Exactly how this network controls the strength of visual imagery remains unknown. Here, brain imaging and transcranial magnetic phosphene data show that lower resting activity and excitability levels in early visual cortex (V1-V3) predict stronger sensory imagery. Further, electrically decreasing visual cortex excitability using tDCS increases imagery strength, demonstrating a causative role of visual cortex excitability in controlling visual imagery. Together, these data suggest a neurophysiological mechanism of cortical excitability involved in controlling the strength of mental images.
Mental imagery provides an essential simulation tool for remembering the past and planning the future, with its strength affecting both cognition and mental health. Research suggests that neural activity spanning prefrontal, parietal, temporal, and visual areas supports the generation of mental images. Exactly how this network controls the strength of visual imagery remains unknown. Here, brain imaging and transcranial magnetic phosphene data show that lower resting activity and excitability levels in early visual cortex (V1-V3) predict stronger sensory imagery. Electrically decreasing visual cortex excitability using tDCS increases imagery strength, demonstrating a causative role of visual cortex excitability in controlling visual imagery. These data suggest a neurophysiological mechanism of cortical excitability involved in controlling the strength of mental images.
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
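The competition for a shared receptor pool can be illustrated with a minimal kinetic sketch (not the paper's exact equations): each synapse binds receptors from the local pool at a rate proportional to a hypothetical number of slots s_i and releases them at a fixed rate, so at steady state bound receptor numbers scale multiplicatively with slot number while total receptor number is conserved.

```python
import numpy as np

# Illustrative parameters: binding rate, unbinding rate, slot counts of three
# nearby synapses, and the total receptor number in the local system.
k_on, k_off = 1.0, 0.5
slots = np.array([10.0, 20.0, 40.0])

def simulate(total=50.0, dt=1e-3, steps=40000):
    r = np.zeros_like(slots)          # receptors bound at each synapse
    pool = total                      # free receptors in the dendritic pool
    for _ in range(steps):
        bind = k_on * pool * slots
        unbind = k_off * r
        r = r + dt * (bind - unbind)
        pool = pool + dt * (unbind.sum() - bind.sum())
    return r, pool

r, pool = simulate()
# At steady state r_i = (k_on / k_off) * pool * s_i: efficacies scale
# multiplicatively with slot number, and receptor number is conserved.
print(r / slots, pool + r.sum())
```

Because all synapses draw on the same pool, enlarging one synapse's slot count transiently drains receptors from its neighbours, the heterosynaptic effect described in the abstract.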
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. We hypothesize that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we show that attention enhances the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Attentional enhancement of stimulus decodability from population responses occurs in low dimensional spaces, as revealed by principal component analysis, suggesting an alignment between the attentional and the natural stimulus variance. Moreover, natural scenes produce stimulus-specific oscillatory responses in V1, whose power undergoes a global shift from low to high frequencies with attention. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable, as seen in their voltage responses to current injections in the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results hold for the entire range of possible dendritic morphologies, irrespective of any particular arborisation complexity. Furthermore, spiking models produce morphology-invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. Rather, dendritic morphology provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites, and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
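The length-invariance claim can be checked numerically for a passive cable at steady state: with a uniform density of synaptic current and sealed ends, the depolarization is set by the membrane properties and input density alone, not by cable length. A minimal finite-difference sketch in arbitrary units (parameter values are illustrative):

```python
import numpy as np

def steady_state_voltage(L, n=400, r_m=1.0, r_a=1.0, i_density=1.0):
    """Steady-state voltage along a sealed-end passive cable of length L
    receiving a uniform density of synaptic current (arbitrary units)."""
    dx = L / n
    A = np.zeros((n, n))
    b = np.full(n, -i_density)
    for j in range(1, n - 1):
        A[j, j - 1] = A[j, j + 1] = 1.0 / (dx**2 * r_a)   # axial coupling
        A[j, j] = -2.0 / (dx**2 * r_a) - 1.0 / r_m        # membrane leak
    # Sealed ends: dV/dx = 0 at both boundaries.
    A[0, 0], A[0, 1], b[0] = -1.0, 1.0, 0.0
    A[-1, -2], A[-1, -1], b[-1] = 1.0, -1.0, 0.0
    return np.linalg.solve(A, b)

v_short = steady_state_voltage(L=1.0)
v_long = steady_state_voltage(L=8.0)
# Same membrane properties and synaptic density: identical depolarization
# although the cables differ eight-fold in length.
print(v_short.mean(), v_long.mean())
```

Total synaptic current and total membrane leak both grow linearly with length, so their ratio, the voltage, stays constant, which is the intuition behind the invariance.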
Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing-dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary: Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead, it may reflect sampling-like inference emerging from a self-organized learning process.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. whether they convey information beyond what isolated signals carry. Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in three awake common marmosets and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics exhibit redundancy and synergy for auditory prediction error signals. We observed multiple patterns of redundancy and synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between lower and higher areas in the frontal cortex. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
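Co-information for discrete signals can be computed directly from entropies. One common convention, used below, is I(X;Y) + I(X;Z) − I(X;(Y,Z)), with positive values indicating redundancy and negative values synergy (sign conventions vary in the literature). A minimal sketch with the classic XOR (synergy) and copy (redundancy) examples:

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable outcomes."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(xs, ys):
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def co_information(xs, ys, zs):
    """I(X;Y) + I(X;Z) - I(X;(Y,Z)): positive = redundancy, negative = synergy."""
    return (mutual_info(xs, ys) + mutual_info(xs, zs)
            - mutual_info(xs, list(zip(ys, zs))))

rng = np.random.default_rng(0)
y = list(rng.integers(0, 2, 4000))
z = list(rng.integers(0, 2, 4000))
x_xor = [a ^ b for a, b in zip(y, z)]
coi_syn = co_information(x_xor, y, z)   # XOR: purely synergistic (≈ -1 bit)
coi_red = co_information(y, y, y)       # copies: purely redundant (≈ +1 bit)
print(coi_syn, coi_red)
```

XOR is the canonical synergy case: neither source alone carries information about the target, but the pair determines it completely.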
The prevalence and specificity of local protein synthesis during neuronal synaptic plasticity
(2021)
To supply proteins to their vast volume, neurons localize mRNAs and ribosomes in dendrites and axons. While local protein synthesis is required for synaptic plasticity, the abundance and distribution of ribosomes and nascent proteins near synapses remain elusive. Here, we quantified the occurrence of local translation and visualized the range of synapses supplied by nascent proteins during basal and plastic conditions. We detected dendritic ribosomes and nascent proteins at single-molecule resolution using DNA-PAINT and metabolic labeling. Both ribosomes and nascent proteins positively correlated with synapse density. Ribosomes were detected at ~85% of synapses with ~2 translational sites per synapse; ~50% of the nascent protein was detected near synapses. The amount of locally synthesized protein detected at a synapse correlated with its spontaneous Ca2+ activity. A multifold increase in synaptic nascent protein was evident following both local and global plasticity at respective scales, albeit with substantial heterogeneity between neighboring synapses.
Background: Cognitive dysfunctions represent a core feature of schizophrenia and a predictor for clinical outcomes. One possible mechanism for cognitive impairments could involve an impairment in the experience-dependent modifications of cortical networks.
Methods: To address this issue, we employed magnetoencephalography (MEG) during a visual priming paradigm in a sample of chronic patients with schizophrenia (n = 14) and in a group of healthy controls (n = 14). MEG recordings were obtained while visual stimuli were presented three times, either consecutively or with intervening stimuli. MEG data were analyzed for event-related fields as well as spectral power in the 1–200 Hz range to examine repetition suppression and repetition enhancement. We defined regions of interest in occipital and thalamic regions and obtained virtual-channel data.
Results: Behavioral priming did not differ between groups. However, patients with schizophrenia showed markedly reduced oscillatory responses to novel stimuli in the gamma-frequency band, as well as significantly reduced repetition suppression of gamma-band activity and reduced repetition enhancement of beta-band power in occipital cortex, both for consecutive repetitions and for repetitions with intervening stimuli. Moreover, schizophrenia patients were characterized by a significant deficit in suppression of the C1m component in occipital cortex and thalamus, as well as of the late positive component (LPC) in occipital cortex.
Conclusions: These data provide novel evidence for impaired repetition suppression in cortical and subcortical circuits in schizophrenia. Although behavioral priming was preserved, patients with schizophrenia showed deficits in repetition suppression as well as repetition enhancement in thalamic and occipital regions, suggesting that experience-dependent modification of neural circuits is impaired in the disorder.
Glia, the helper cells of the brain, are essential in maintaining neural resilience across time and varying challenges: by reacting to changes in neuronal health, glia carefully balance repair or disposal of injured neurons. Malfunction of these interactions is implicated in many neurodegenerative diseases. We present a reductionist model that mimics repair-or-dispose decisions to generate a hypothesis for the cause of disease onset. The model assumes four tissue states: healthy and challenged tissue, primed tissue at risk of acute damage propagation, and chronic neurodegeneration. We discuss analogies to progression stages observed in the most common neurodegenerative conditions and to experimental observations of cellular signaling pathways of glia-neuron crosstalk. The model suggests that the onset of neurodegeneration can result from a compromise between two conflicting goals: short-term resilience to stressors versus long-term prevention of tissue damage.
Interest in time-resolved connectivity in fMRI has grown rapidly in recent years. The most widely used technique for studying connectivity changes over time utilizes a sliding-window approach. There has been some debate about the utility of shorter versus longer windows, the use of fixed versus adaptive windows, and whether observed resting-state dynamics during wakefulness may be predominantly due to changes in sleep state and subject head motion. In this work we use an independent component analysis (ICA)-based pipeline applied to concurrent EEG/fMRI data collected during wakefulness and various sleep stages and show: 1) connectivity states obtained by clustering sliding-window correlations of resting-state functional network time courses classify the sleep states obtained from EEG data well; 2) using shorter sliding windows instead of longer non-overlapping windows improves the ability to capture transition dynamics, even at windows as short as 30 s; 3) motion appears to be mostly associated with one of the states rather than spread across all of them; 4) a fixed tapered sliding-window approach outperforms an adaptive dynamic conditional correlation approach; and 5) consistent with prior EEG/fMRI work, we identify evidence of multiple states within the wakeful condition which can be classified with high accuracy. Classification of wakeful-only states suggests the presence of time-varying changes in connectivity in fMRI data beyond sleep state or motion. The results also inform advantageous technical choices, and the identification of separable clusters within wakefulness motivates further studies in this direction.
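The pipeline's core step, clustering sliding-window correlations into connectivity states, can be sketched on synthetic network time courses that switch between two covariance regimes. Window length, state durations, and the two-means initialization below are illustrative choices, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_nets, win = 600, 5, 30   # samples, network time courses, window length

# Synthetic time courses alternating between two covariance "states" every
# 150 samples: state 0 couples networks (0, 1), state 1 couples (3, 4).
cov = [np.eye(n_nets), np.eye(n_nets)]
cov[0][0, 1] = cov[0][1, 0] = 0.8
cov[1][3, 4] = cov[1][4, 3] = 0.8
labels_true = (np.arange(T) // 150) % 2
ts = np.array([rng.multivariate_normal(np.zeros(n_nets), cov[s])
               for s in labels_true])

# Sliding-window correlation matrices, vectorized as their upper triangles.
iu = np.triu_indices(n_nets, k=1)
fc = np.array([np.corrcoef(ts[t:t + win].T)[iu] for t in range(T - win)])

# Two-means clustering of windowed FC into connectivity states,
# initialized with two temporally distant windows.
centers = np.stack([fc[0], fc[200]])
for _ in range(20):
    dist = np.linalg.norm(fc[:, None, :] - centers[None], axis=2)
    states = dist.argmin(axis=1)
    centers = np.stack([fc[states == k].mean(axis=0) for k in range(2)])
print(np.bincount(states))
```

Windows straddling a state boundary receive mixed correlation estimates, which is the practical trade-off behind the shorter-versus-longer window debate mentioned above.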
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP) which depends on the interactions of three precisely-timed spikes and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures.
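The triplet STDP rule referred to here, in the form of Pfister and Gerstner's model, can be sketched event-drivenly: each synapse keeps fast and slow pre- and postsynaptic traces, and the slow traces gate a frequency-dependent potentiation that a pair-based rule cannot capture. The parameter values below follow the published minimal "visual cortex" fit and are used purely for illustration:

```python
import numpy as np

# Time constants (s) of the fast/slow pre- and postsynaptic traces,
# and pair/triplet amplitudes (minimal model: A2+ = A3- = 0).
tau_p, tau_m, tau_x, tau_y = 16.8e-3, 33.7e-3, 101e-3, 125e-3
A2p, A3p, A2m, A3m = 0.0, 6.5e-3, 7.1e-3, 0.0

def delta_w(freq, n_pairs=60, lag=10e-3):
    """Weight change for n_pairs pre-before-post pairings at a given rate."""
    events = sorted([(k / freq, 'pre') for k in range(n_pairs)] +
                    [(k / freq + lag, 'post') for k in range(n_pairs)])
    r1 = r2 = o1 = o2 = 0.0
    t_last, w = 0.0, 0.0
    for t, kind in events:
        dt = t - t_last
        r1 *= np.exp(-dt / tau_p); r2 *= np.exp(-dt / tau_x)
        o1 *= np.exp(-dt / tau_m); o2 *= np.exp(-dt / tau_y)
        if kind == 'pre':
            w -= o1 * (A2m + A3m * r2)   # depression on pre-spike arrival
            r1 += 1.0; r2 += 1.0
        else:
            w += r1 * (A2p + A3p * o2)   # potentiation gated by slow trace o2
            o1 += 1.0; o2 += 1.0
        t_last = t
    return w

print(delta_w(5.0), delta_w(40.0))  # potentiation grows with pairing frequency
```

Because the slow postsynaptic trace o2 accumulates only at high firing rates, the same pre-before-post timing potentiates far more at 40 Hz than at 5 Hz, the higher-order correlation sensitivity the abstract builds on.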
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, where the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has developed that can account for classic EEG findings of language, that is, the ease versus difficulty of integrating a word with its sentence context. However, models of semantics have to account not only for context-based word processing, but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English and German) read isolated words, we encoded and decoded word vectors taken from the family of prediction-based Word2vec algorithms. We found that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models perform better than a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
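The encoding/decoding logic, mapping between word-vector dimensions and multivariate EEG patterns with a regularized linear model, can be sketched with closed-form ridge regression on synthetic data. The linear-mixture generative assumption and all dimensions below are illustrative, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims, n_sensors = 200, 10, 64

# Hypothetical generative assumption: each word's evoked EEG pattern is a
# noisy linear mixture of its (word2vec-like) semantic dimensions.
vectors = rng.standard_normal((n_words, n_dims))
mixing = rng.standard_normal((n_dims, n_sensors))
eeg = vectors @ mixing + 2.0 * rng.standard_normal((n_words, n_sensors))

train, test = np.arange(0, n_words, 2), np.arange(1, n_words, 2)

# Closed-form ridge regression from sensors to word-vector dimensions.
alpha = 10.0
E, V = eeg[train], vectors[train]
B = np.linalg.solve(E.T @ E + alpha * np.eye(n_sensors), E.T @ V)
pred = eeg[test] @ B

# Decoding score: per-dimension correlation of predicted vs. true values.
corrs = [np.corrcoef(pred[:, d], vectors[test, d])[0, 1] for d in range(n_dims)]
print(np.mean(corrs))
```

Evaluating each latent dimension separately mirrors the abstract's third finding, that multiple semantic dimensions can be decoded from brain activity.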
Type IV pili are flexible filaments on the surface of bacteria, consisting of a helical assembly of pilin proteins. They are involved in bacterial motility (twitching), surface adhesion, biofilm formation and DNA uptake (natural transformation). Here, we use cryo-electron microscopy and mass spectrometry to show that the bacterium Thermus thermophilus produces two forms of type IV pilus ("wide" and "narrow"), differing in structure and protein composition. Wide pili are composed of the major pilin PilA4, while narrow pili are composed of a so-far uncharacterized pilin which we name PilA5. Functional experiments indicate that PilA4 is required for natural transformation, while PilA5 is important for twitching motility.
In pursuit of food, hungry animals mobilize significant energy resources and overcome exhaustion and fear. How need and motivation control the decision to continue or change behavior is not understood. Using a single-fly treadmill, we show that hungry flies persistently track a food odor and increase their effort over repeated trials in the absence of reward, suggesting that need dominates negative experience. We further show that odor tracking is regulated by two mushroom body output neurons (MBONs) connecting the MB to the lateral horn. These MBONs, together with dopaminergic neurons and Dop1R2 signaling, control behavioral persistence. Conversely, an octopaminergic neuron, VPM4, which directly innervates one of the MBONs, acts as a brake on odor tracking by connecting feeding and olfaction. Together, our data suggest a function for the MB in internal-state-dependent expression of behavior that can be suppressed by external inputs conveying a competing behavioral drive.
Rhythmic actions benefit from synchronization with external events. Auditory-paced finger-tapping studies indicate that the two cerebral hemispheres preferentially control different rhythms. It is unclear whether the left-lateralized processing of faster rhythms and the right-lateralized processing of slower rhythms rest upon hemispheric timing differences that arise in the motor or sensory system, or whether the asymmetry results from lateralized sensorimotor interactions. We measured fMRI and MEG during symmetric finger tapping, in which fast tapping was defined as auditory-motor synchronization at 2.5 Hz. Slow tapping corresponded to tapping to every fourth auditory beat (0.625 Hz). We demonstrate that the left auditory cortex preferentially represents the relatively fast rhythm in an amplitude modulation of low-beta oscillations, while the right auditory cortex additionally represents the internally generated slower rhythm. We show that coupling of auditory-motor beta oscillations supports building a metric structure. Our findings reveal a strong contribution of sensory cortices to hemispheric specialization in action control.
Spermatogonial stem cells (SSCs) are adult stem cells that are slowly cycling and self-renewing. The pool of SSCs generates very large numbers of male gametes throughout the life of the individual. SSCs can be cultured in vitro for long periods of time, and established SSC lines can be manipulated genetically. Upon transplantation into the testes of infertile mice, long-term cultured mouse SSCs can differentiate into fertile spermatozoa, which can give rise to live offspring. Here, we show that the testicular soma of mice with a conditional knockout (conKO) in the X-linked gene Tsc22d3 supports spermatogenesis and germline transmission from cultured mouse SSCs upon transplantation. Infertile males were produced by crossing homozygous Tsc22d3 floxed females with homozygous ROSA26-Cre males. We obtained 96 live offspring from six long-term cultured SSC lines with the aid of intracytoplasmic sperm injection. We advocate the further optimization of Tsc22d3-conKO males as recipients for testis transplantation of SSC lines.
Previous reports of improved oral reading performance for dyslexic children but not for regular readers when between-letter spacing was enlarged led to the proposal of a dyslexia-specific deficit in visual crowding. However, it is in this context also critical to understand how letter spacing affects visual word recognition and reading in unimpaired readers. Adopting an individual differences approach, the present study, accordingly, examined whether wider letter spacing improves reading performance also for non-impaired adults during silent reading and whether there is an association between letter spacing and crowding sensitivity. We report eye movement data of 24 German students who silently read texts presented either with normal or wider letter spacing. Foveal and parafoveal crowding sensitivity were estimated using two independent tests. Wider spacing reduced first fixation durations, gaze durations, and total fixation time for all participants, with slower readers showing stronger effects. However, wider letter spacing also reduced skipping probabilities and elicited more fixations, especially for faster readers. In terms of words read per minute, wider letter spacing did not provide a benefit, and faster readers in particular were slowed down. Neither foveal nor parafoveal crowding sensitivity correlated with the observed letter-spacing effects. In conclusion, wide letter spacing reduces single word processing time in typically developed readers during silent reading, but affects reading rates negatively since more words must be fixated. We tentatively propose that wider letter spacing reinforces serial letter processing in slower readers, but disrupts parallel processing of letter chunks in faster readers. These effects of letter spacing do not seem to be mediated by individual differences in crowding sensitivity.
Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
(2019)
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. By contrast, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed the whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T of chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of its relative connection probability between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to areas in the frontal, limbic, and subcortical areas. These findings are in line with previous studies that reported abnormalities in striatal–frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder.
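The fingerprint construction, normalizing each seed region's connection counts into relative connection probabilities and then comparing fingerprints with a similarity measure, can be sketched as follows (toy dimensions; cosine similarity is one reasonable choice, not necessarily the study's measure):

```python
import numpy as np

def fingerprints(streamline_counts):
    """Relative connection probabilities: each seed region's row of counts
    is normalized to sum to 1, making fingerprints comparable despite
    overall differences in white and gray matter."""
    counts = np.asarray(streamline_counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def fingerprint_similarity(fp_a, fp_b):
    """Cosine similarity between fingerprints of matching seed regions."""
    num = (fp_a * fp_b).sum(axis=1)
    den = np.linalg.norm(fp_a, axis=1) * np.linalg.norm(fp_b, axis=1)
    return num / den

# Toy example: two "subjects", 4 seed regions x 6 target regions.
rng = np.random.default_rng(0)
a = rng.integers(1, 100, (4, 6))
b = a.copy()
b[2] = a[2][::-1]          # perturb the connectivity profile of seed region 2
sim = fingerprint_similarity(fingerprints(a), fingerprints(b))
print(sim)                 # seed region 2 stands out as least similar
```

In the study's logic, seed regions whose fingerprints differ between groups are flagged first, and their individual relative connection probabilities are then tested.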
Synesthesia is a phenomenon in which additional perceptual experiences are elicited by sensory stimuli or cognitive concepts. Synesthetes possess a unique type of phenomenal experiences not directly triggered by sensory stimulation. Therefore, for better understanding of consciousness it is relevant to identify the mental and physiological processes that subserve synesthetic experience. In the present work we suggest several reasons why synesthesia has merit for research on consciousness. We first review the research on the dynamic and rapidly growing field of the studies of synesthesia. We particularly draw attention to the role of semantics in synesthesia, which is important for establishing synesthetic associations in the brain. We then propose that the interplay between semantics and sensory input in synesthesia can be helpful for the study of the neural correlates of consciousness, especially when making use of ambiguous stimuli for inducing synesthesia. Finally, synesthesia-related alterations of brain networks and functional connectivity can be of merit for the study of consciousness.
Following a brief review of current efforts to identify the neuronal correlates of conscious processing (NCCP), an attempt is made to bridge the gap between material neuronal processes and the immaterial dimensions of subjective experience. It is argued that this "hard problem" of consciousness research cannot be solved by considering only the neuronal underpinnings of cognition. The proposal is that the hard problem can be treated within a naturalistic framework if one considers not only the biological but also the socio-cultural dimensions of evolution. The argument is based on the following premises: perceptions are the result of a constructivist process that depends on priors. This applies both to perceptions of the outer world and to the perception of oneself. Social interactions between agents endowed with the cognitive abilities of humans generated immaterial realities, addressed as social or cultural realities. This novel class of realities assumed the role of priors for the perception of oneself and of the embedding world. A natural consequence of these extended perceptions is a dualist classification of observables into material and immaterial phenomena, nurturing the concept of ontological substance dualism. It is argued that perceptions shaped by socio-cultural priors lead to the construction of a self-model that has both a material and an immaterial dimension. As priors are implicit and not amenable to conscious recollection, the perceived immaterial dimension is experienced as veridical and as not derivable from material processes, which is the hallmark of the hard problem. These considerations let the hard problem appear as the result of cognitive constructs that are amenable to naturalistic explanations in an evolutionary framework.
In self-organized critical (SOC) systems, avalanche size distributions follow power laws. Power laws have also been observed for neural activity, and it has therefore been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. We therefore analyzed highly parallel spike recordings from awake rats and monkeys and from anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (thereby eliminating the separation of time scales, which is fundamental to SOC and to its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, rather than temporally separated ones, and that its global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next time step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing and a safety margin from super-criticality, which has been linked to epilepsy.
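The two quantities central to this analysis, avalanche sizes from binned spike counts and the branching parameter, can be sketched in a few lines. This is a simplified illustration of the standard definitions (avalanches as runs of non-empty time bins; branching parameter as the average ratio of successive bin counts), not the estimators used in the study.

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Classical avalanche definition: maximal runs of non-empty time
    bins, separated by empty bins; size = total spikes within a run."""
    sizes, current = [], 0
    for n in spike_counts:
        if n > 0:
            current += n          # extend the ongoing avalanche
        elif current > 0:
            sizes.append(current) # an empty bin terminates it
            current = 0
    if current > 0:
        sizes.append(current)     # avalanche running at the end
    return sizes

def branching_parameter(spike_counts):
    """Naive estimator: mean ratio of activity in bin t+1 to bin t,
    over bins with activity. Values just below 1 correspond to the
    slightly sub-critical regime described in the abstract."""
    x = np.asarray(spike_counts, dtype=float)
    prev, nxt = x[:-1], x[1:]
    mask = prev > 0
    return float(np.mean(nxt[mask] / prev[mask]))
```

Note that both measures depend on the temporal bin size, which is exactly the dependence the abstract exploits for model comparison.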
Information processing performed by any system can be conceptually decomposed into the transfer, storage, and modification of information, an idea dating all the way back to the work of Alan Turing. Until very recently, however, formal information-theoretic definitions were only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between the inputs to and the output of a process into unique, shared, and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. Here we review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise that PID and the analysis of information modification hold for the understanding of neural systems.
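The synergy term of a PID is easiest to see in the textbook XOR example: neither input alone carries any information about the output, yet both together determine it fully, so under a PID the entire mutual information is synergistic. The sketch below only computes the three classical mutual-information terms from the joint distribution; it is an illustration of why a decomposition is needed, not an implementation of any specific PID measure.

```python
import numpy as np
from itertools import product

def mutual_information(pxy):
    """I(X;Y) in bits from a joint probability table p[x, y]."""
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# Joint distribution of (X1, X2, Y) for Y = X1 XOR X2, uniform inputs.
p = np.zeros((2, 2, 2))
for x1, x2 in product([0, 1], repeat=2):
    p[x1, x2, x1 ^ x2] = 0.25

i_x1 = mutual_information(p.sum(axis=1))       # I(X1;Y) = 0 bits
i_x2 = mutual_information(p.sum(axis=0))       # I(X2;Y) = 0 bits
i_joint = mutual_information(p.reshape(4, 2))  # I(X1,X2;Y) = 1 bit
```

Since the single-input terms vanish while the joint term equals one bit, a PID assigns the full bit to the synergistic contribution, which is the quantity taken as the basis for information modification in the abstract.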
Inhibitory interneurons govern virtually all computations in neocortical circuits and are in turn controlled by neuromodulation. A detailed understanding of the distinct marker expression, physiology, and neuromodulator responses of different interneuron types exists for rodents, and recent studies have highlighted the role of specific interneurons in converting rapid neuromodulatory signals into altered sensory processing during locomotion, attention, and associative learning. Whether similar mechanisms exist in the human neocortex, however, remains poorly understood. Here, we use whole-cell recordings combined with agonist application, transgenic mouse lines, in situ hybridization, and unbiased clustering to directly determine these features in human layer 1 interneurons (L1-INs). Our results indicate pronounced nicotinic recruitment of all L1-INs, whereas only a small subset co-expresses the ionotropic HTR3 receptor. In addition to these human specializations, we observe two comparable, physiologically and genetically distinct L1-IN types in both species, together indicating conserved rapid neuromodulation of human neocortical circuits through layer 1.
In homeostatic scaling at central synapses, the depth and breadth of the cellular mechanisms that detect the offset from the set-point, detect the duration of the offset, and implement a cellular response are not well understood. To understand the time-dependent scaling dynamics, we treated cultured rat hippocampal cells with either TTX or bicuculline for 2 hr to induce up- or down-scaling, respectively. During the activity manipulation we metabolically labeled newly synthesized proteins using BONCAT (bio-orthogonal non-canonical amino acid tagging). We identified 168 newly synthesized proteins that exhibited significant changes in expression. To obtain a temporal trajectory of the response, we compared the proteins synthesized within 2 hr or 24 hr of the activity manipulation. Surprisingly, there was little overlap between the significantly regulated newly synthesized proteins identified in the early-response and integrated late-response datasets. There was, however, overlap in the functional categories that were modulated early and late. These data indicate that, within protein function groups, different proteomic choices can be made to effect early and late homeostatic responses that detect the duration and polarity of the activity manipulation.
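The early-versus-late comparison at the heart of this result is a simple set-overlap analysis: the same question is asked once at the level of individual proteins and once at the level of functional categories. A minimal sketch using the Jaccard index, with hypothetical protein identifiers that are not taken from the study:

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B|: 0 = disjoint sets, 1 = identical."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical significantly regulated proteins (illustrative only).
early = {"Homer1", "Arc", "Shank3", "Bdnf"}
late = {"Shank3", "GluA1", "Camk2a", "Gephyrin"}

protein_overlap = jaccard(early, late)  # low overlap at the protein level
```

Running the same comparison on the functional-category annotations of each set (e.g. via gene-ontology terms) would then reveal the higher-level overlap reported in the abstract.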