MPI für Hirnforschung
Previous reports of improved oral reading performance for dyslexic children but not for regular readers when between-letter spacing was enlarged led to the proposal of a dyslexia-specific deficit in visual crowding. However, in this context it is also critical to understand how letter spacing affects visual word recognition and reading in unimpaired readers. Adopting an individual differences approach, the present study accordingly examined whether wider letter spacing also improves reading performance for non-impaired adults during silent reading, and whether there is an association between letter spacing and crowding sensitivity. We report eye movement data of 24 German students who silently read texts presented with either normal or wider letter spacing. Foveal and parafoveal crowding sensitivity were estimated using two independent tests. Wider spacing reduced first fixation durations, gaze durations, and total fixation time for all participants, with slower readers showing stronger effects. However, wider letter spacing also reduced skipping probabilities and elicited more fixations, especially for faster readers. In terms of words read per minute, wider letter spacing did not provide a benefit, and faster readers in particular were slowed down. Neither foveal nor parafoveal crowding sensitivity correlated with the observed letter-spacing effects. In conclusion, wide letter spacing reduces single word processing time in typically developed readers during silent reading, but affects reading rates negatively since more words must be fixated. We tentatively propose that wider letter spacing reinforces serial letter processing in slower readers, but disrupts parallel processing of letter chunks in faster readers. These effects of letter spacing do not seem to be mediated by individual differences in crowding sensitivity.
Abstract: Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary: Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
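The ingredients named in the abstract can be conveyed with a small numerical sketch. This is a minimal, illustrative toy, not the authors' SORN implementation: all sizes, learning rates, and the input statistics below are assumptions. It combines binary McCulloch-Pitts units updated deterministically, STDP on the excitatory recurrent weights, and two homeostatic mechanisms (synaptic normalization and threshold-based intrinsic plasticity).

```python
import numpy as np

rng = np.random.default_rng(0)
N_E, N_I = 40, 8                      # excitatory / inhibitory unit counts (assumed)
# Sparse random excitatory recurrence; dense inhibitory loop
W_EE = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.1)
np.fill_diagonal(W_EE, 0.0)
W_EI = rng.random((N_E, N_I)) * 0.5   # inhibition onto excitatory units
W_IE = rng.random((N_I, N_E)) * 0.5   # excitation onto inhibitory units
T_E = rng.random(N_E)                 # adaptive excitatory thresholds
T_I = rng.random(N_I) * 0.5           # fixed inhibitory thresholds
eta_stdp, eta_ip, h_ip = 0.004, 0.01, 0.1   # learning rates, target rate

def step(x, y, u):
    """One fully deterministic network update; u is the external input."""
    x_new = (W_EE @ x - W_EI @ y + u - T_E > 0).astype(float)
    y_new = (W_IE @ x_new - T_I > 0).astype(float)
    return x_new, y_new

x = (rng.random(N_E) < h_ip).astype(float)
y = np.zeros(N_I)
for _ in range(200):
    u = (rng.random(N_E) < 0.05).astype(float)   # toy input sequence
    x_prev = x
    x, y = step(x, y, u)
    # STDP: potentiate pre-before-post pairs, depress post-before-pre pairs
    dw = eta_stdp * (np.outer(x, x_prev) - np.outer(x_prev, x))
    W_EE = np.clip(W_EE + dw * (W_EE > 0), 0.0, 1.0)
    # Synaptic normalization: each unit's incoming weights sum to one
    W_EE = W_EE / np.maximum(W_EE.sum(axis=1, keepdims=True), 1e-12)
    # Intrinsic plasticity: thresholds drift toward a target firing rate
    T_E = T_E + eta_ip * (x - h_ip)
```

Note that every update is deterministic given the input; any trial-to-trial variability in such a network must come from its internal state, which is the point the abstract makes.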
The neuronal transcriptome changes dynamically to adapt to stimuli from the extracellular and intracellular environment. In this study, we adapted, for the first time, a click chemistry technique to label newly synthesized RNA in cultured hippocampal neurons and in the intact larval zebrafish brain. Ethynyl uridine (EU) was incorporated into neuronal RNA in a time- and concentration-dependent manner. Newly synthesized RNA granules observed throughout the dendrites were colocalized with mRNA and rRNA markers. In zebrafish larvae, the application of EU to the swim water resulted in uptake and labeling throughout the brain. Using a GABA receptor antagonist, PTZ (pentylenetetrazol), to elevate neuronal activity, we demonstrate that the newly transcribed RNA signal increased in specific regions involved in neurogenesis.
Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can appear because of spectral correlations due to common non-stationarities that may arise in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis.
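One standard pipeline of the kind this abstract critiques is phase-amplitude coupling quantified by the mean vector length, paired with a circular-shift surrogate test. The sketch below is illustrative only: the synthetic signal, filter bands, and surrogate count are assumptions, not recommendations from the paper, and the brick-wall FFT filter is a simplification of what real analyses use.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500.0                           # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)

def bandpass(x, lo, hi, fs):
    """Brick-wall FFT band-pass filter (illustrative, not production-grade)."""
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X = np.fft.rfft(x)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

def analytic(x):
    """Analytic signal via the FFT (equivalent to a Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(x.size)
    h[0] = 1.0
    h[1:(x.size + 1) // 2] = 2.0
    if x.size % 2 == 0:
        h[x.size // 2] = 1.0
    return np.fft.ifft(X * h)

# Synthetic signal: the amplitude of a 60 Hz rhythm is modulated by a
# stochastic 6-10 Hz rhythm, plus broadband noise.
slow = bandpass(rng.standard_normal(t.size), 6, 10, fs)
slow /= np.std(slow)
fast = (1 + 0.8 * np.tanh(slow)) * np.sin(2 * np.pi * 60 * t)
signal = slow + 0.3 * fast + 0.1 * rng.standard_normal(t.size)

phase = np.angle(analytic(bandpass(signal, 6, 10, fs)))  # low-freq phase
amp = np.abs(analytic(bandpass(signal, 50, 70, fs)))     # high-freq amplitude

def mvl(phase, amp):
    """Mean vector length |mean(amp * exp(i*phase))|, a common PAC metric."""
    return np.abs(np.mean(amp * np.exp(1j * phase)))

mi = mvl(phase, amp)
# Surrogates: circularly shifting the amplitude trace preserves both spectra
# but breaks the phase-amplitude alignment -- one safeguard against apparent
# coupling that merely reflects shared non-stationarities.
shifts = rng.integers(int(fs), t.size - int(fs), size=200)
surr = np.array([mvl(phase, np.roll(amp, int(s))) for s in shifts])
z = (mi - surr.mean()) / surr.std()
```

The surrogate step matters: a raw mean vector length is sensitive to waveform shape and non-stationarity, which is exactly the class of pitfalls the abstract's recommendations address.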
The inhibitory glycine receptor (GlyR) in developing spinal neurones is internalized efficiently upon antagonist inhibition. Here we used surface labeling combined with affinity purification to show that homopentameric α1 GlyRs generated in Xenopus oocytes are proteolytically nicked into fragments of 35 and 13 kDa upon prolonged incubation. Nicked GlyRs do not exist at the cell surface, indicating that proteolysis occurs exclusively in the endocytotic pathway. Consistent with this interpretation, elevation of the lysosomal pH, but not the proteasome inhibitor lactacystin, prevents GlyR cleavage. Prior to internalization, α1 GlyRs are conjugated extensively with ubiquitin in the plasma membrane. Our results are consistent with ubiquitination regulating the endocytosis and subsequent proteolysis of GlyRs residing in the plasma membrane. Ubiquitin-conjugating enzymes thus may have a crucial role in synaptic plasticity by determining postsynaptic receptor numbers.
GABARAP belongs to an evolutionarily highly conserved gene family that has a fundamental role in autophagy. There is ample evidence for a crosstalk between autophagy and apoptosis as well as the immune response. However, the molecular details for these interactions are not fully characterized. Here, we report that the ablation of murine GABARAP, a member of the Atg8/LC3 family that is central to autophagosome formation, suppresses the incidence of tumor formation mediated by the carcinogen DMBA and results in an enhancement of the immune response through increased secretion of IL-1β, IL-6, IL-2 and IFN-γ from stimulated macrophages and lymphocytes. In contrast, TGF-β1 was significantly reduced in the serum of these knockout mice. Further, DMBA treatment of these GABARAP knockout mice reduced the cellularity of the spleen and the growth of mammary glands through the induction of apoptosis. Gene expression profiling of mammary glands revealed significantly elevated levels of Xaf1, an apoptotic inducer and tumor-suppressor gene, in knockout mice. Furthermore, DMBA treatment triggered the upregulation of pro-apoptotic (Bid, Apaf1, Bax), cell death (Tnfrsf10b, Ripk1) and cell cycle inhibitor (Cdkn1a, Cdkn2c) genes in the mammary glands. Finally, tumor growth of B16 melanoma cells after subcutaneous inoculation was inhibited in GABARAP-deficient mice. Together, these data provide strong evidence for the involvement of GABARAP in tumorigenesis in vivo by delaying cell death and its associated immune-related response.
Startle disease or hereditary hyperekplexia has been shown to result from mutations in the α1‐subunit gene of the inhibitory glycine receptor (GlyR). In hyperekplexia patients, neuromotor symptoms generally become apparent at birth, improve with age, and often disappear in adulthood. Loss‐of‐function mutations of GlyR α or β‐subunits in mice show rather severe neuromotor phenotypes. Here, we generated mutant mice with a transient neuromotor deficiency by introducing a GlyR β transgene into the spastic mouse (spa/spa), a recessive mutant carrying a transposon insertion within the GlyR β‐subunit gene. In spa/spa TG456 mice, one of three strains generated with this construct, which expressed very low levels of GlyR β transgene‐dependent mRNA and protein, the spastic phenotype was found to depend upon the transgene copy number. Notably, mice carrying two copies of the transgene showed an age‐dependent sensitivity to tremor induction, which peaked at ∼ 3–4 weeks postnatally. This closely resembles the development of symptoms in human hyperekplexia patients, where motor coordination significantly improves after adolescence. The spa/spa TG456 line thus may serve as an animal model of human startle disease.
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, where the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has developed that can account for classic EEG findings of language, that is, the ease versus difficulty of integrating a word with its sentence context. However, models of semantics have to account not only for context-based word processing, but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English and German) read isolated words, we encoded and decoded word vectors taken from the family of prediction-based Word2vec algorithms. We found that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models perform better than a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
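The encoding analysis described here can be conveyed with a small numerical sketch. Everything below is synthetic and assumed: random vectors stand in for Word2vec embeddings, and simulated per-word amplitudes stand in for the EEG; closed-form ridge regression maps embedding dimensions to channel amplitudes at each time point and is scored by held-out correlation, which is one common way such encoding models are evaluated.

```python
import numpy as np

rng = np.random.default_rng(2)
n_words, n_dims, n_channels, n_times = 200, 50, 16, 30   # all sizes assumed
X = rng.standard_normal((n_words, n_dims))   # stand-in "word2vec" vectors
# Simulated ERPs: a linear projection of the vectors plus noise
true_map = rng.standard_normal((n_dims, n_channels, n_times)) * 0.2
Y = np.einsum("wd,dct->wct", X, true_map) + rng.standard_normal(
    (n_words, n_channels, n_times))

def ridge_fit(X, Y, alpha):
    """Closed-form ridge regression: solve (X'X + alpha*I) W = X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

train, test = np.arange(150), np.arange(150, n_words)
scores = np.zeros(n_times)
for ti in range(n_times):
    Yt = Y[:, :, ti]                          # words x channels at time ti
    W = ridge_fit(X[train], Yt[train], alpha=10.0)
    pred = X[test] @ W
    # Encoding score: mean held-out correlation across channels
    r = [np.corrcoef(pred[:, c], Yt[test, c])[0, 1]
         for c in range(n_channels)]
    scores[ti] = np.mean(r)
```

In a real analysis, `scores` as a function of time is what would reveal the 300 to 500 ms window the abstract reports; the decoding direction simply swaps the roles of `X` and `Y`.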