Neuronal hyperexcitability is a feature of Alzheimer’s disease (AD). Three main mechanisms have been proposed to explain it: (i) dendritic degeneration leading to increased input resistance, (ii) ion channel changes leading to enhanced intrinsic excitability, and (iii) synaptic changes leading to excitation-inhibition (E/I) imbalance. However, the relative contribution of these mechanisms is not fully understood. We therefore performed biophysically realistic multi-compartmental modelling of excitability in reconstructed CA1 pyramidal neurons of wild-type and APP/PS1 mice, a well-established animal model of AD. We show that, for synaptic activation, the excitability-promoting effects of dendritic degeneration are cancelled out by the excitability-decreasing effects of synaptic loss. We find an interesting balance of excitability regulation: enhanced degeneration in the basal dendrites of APP/PS1 cells potentially leads to increased excitation via the apical but decreased excitation via the basal Schaffer collateral pathway. Furthermore, our simulations reveal that three additional pathomechanistic scenarios can account for the experimentally observed increase in firing and bursting of CA1 pyramidal neurons in APP/PS1 mice: scenario 1, increased excitatory burst input; scenario 2, an enhanced E/I ratio; and scenario 3, alteration of intrinsic ion channels (IAHP down-regulated; INaP, INa and ICaT up-regulated) in addition to an enhanced E/I ratio. Our work supports the hypothesis that pathological network and ion channel changes are major contributors to neuronal hyperexcitability in AD. Overall, our results are in line with the concept of multi-causality and degeneracy, according to which multiple different disruptions are separately sufficient but no single disruption is necessary for neuronal hyperexcitability.
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable, as seen in their voltage responses to current injections at the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in the dendrites is much less well understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results hold for the entire range of possible dendritic morphologies, irrespective of arborisation complexity. Moreover, spiking models produce morphology-invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. Rather, dendritic morphology provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
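The length-invariance result can be illustrated with a minimal compartmental version of the passive cable equation. This is a sketch with assumed parameter values (membrane conductance, axial resistivity, input density), not the authors' model: with sealed ends and a uniform synaptic current density per unit membrane area, the steady-state depolarisation equals the ratio of input density to membrane conductivity, whatever the cable length.

```python
import numpy as np

def steady_state_voltage(L_um, d_um=2.0, n=200,
                         g_m=5e-5,    # membrane conductance, S/cm^2 (assumed)
                         r_a=150.0,   # axial resistivity, Ohm*cm (assumed)
                         i_syn=1e-6): # distributed input, A/cm^2 (assumed)
    """Steady-state voltage along a sealed-end passive cable receiving a
    uniform synaptic current density per unit membrane area."""
    dx = L_um * 1e-4 / n                      # compartment length, cm
    d = d_um * 1e-4                           # diameter, cm
    area = np.pi * d * dx                     # membrane area per compartment
    g_ax = np.pi * d**2 / 4 / (r_a * dx)      # axial conductance, S
    # Conductance matrix: membrane leak on the diagonal, axial coupling
    # between neighbours; sealed ends need no extra terms.
    G = np.diag(np.full(n, g_m * area))
    for k in range(n - 1):
        G[k, k] += g_ax;   G[k + 1, k + 1] += g_ax
        G[k, k + 1] -= g_ax; G[k + 1, k] -= g_ax
    I = np.full(n, i_syn * area)              # input current per compartment
    return np.linalg.solve(G, I)              # voltage above rest, V

v_short = steady_state_voltage(L_um=200.0)
v_long = steady_state_voltage(L_um=1000.0)
# Uniform input density gives a flat profile V = i_syn / g_m, so the
# somatic response is identical for the short and the long cable.
print(v_short[0], v_long[0])  # both 0.02 V
```

The axial terms cancel for a spatially uniform solution, which is why only the membrane conductivity and the input density (set per membrane area, i.e. scaling with diameter) survive in the answer.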
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex
(2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with the overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with neurite densities higher than necessary to reliably ensure that any given connection can be implemented. We show that potential contacts allow connectivity to be implemented at a subcellular level.
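A geometric estimate with these scaling properties can be sketched as follows. The functional form and constant below are illustrative assumptions (the abstract specifies only the linear dependencies, not the exact equation): contacts grow linearly with each cable length and with spine reach, and fall with the shared volume.

```python
def expected_potential_contacts(l_axon_um, l_dend_um, spine_um, volume_um3):
    """Estimate of potential synapses between an axon and a dendrite whose
    cable is distributed through a shared volume. Hypothetical form for
    illustration: linear in both cable lengths and in maximum spine
    length, inversely proportional to the overlap volume."""
    return 2.0 * spine_um * l_axon_um * l_dend_um / volume_um3

# Example: 5 mm of axon and dendrite, 2 um spines, a (100 um)^3 overlap
n = expected_potential_contacts(5000.0, 5000.0, 2.0, 100.0**3)
print(n)  # -> 100.0
```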
Disordered proteins and nucleic acids can condense into droplets that resemble the membraneless organelles observed in living cells. Molecular dynamics (MD) simulations offer a unique tool to characterize the molecular interactions governing the formation of these biomolecular condensates, their physico-chemical properties, and the factors controlling their composition and size. However, biopolymer condensation depends sensitively on the balance between different energetic and entropic contributions. Here, we develop a general strategy to fine-tune the potential energy function for MD simulations of biopolymer phase separation. We rebalance protein-protein interactions against solvation and entropic contributions to match the excess free energy of transferring proteins between dilute solution and condensate. We illustrate this formalism by simulating liquid droplet formation of the FUS low complexity domain (LCD) with a rebalanced MARTINI model. By scaling the strength of the nonbonded interactions in the coarse-grained MARTINI potential energy function, we map out a phase diagram in the plane of protein concentration and interaction strength. Above a critical scaling factor of αc ≈ 0.6, FUS LCD condensation is observed, where α = 1 and α = 0 correspond to full and purely repulsive interactions in the MARTINI model, respectively. For a scaling factor α = 0.65, we recover the experimental densities of the dilute and dense phases, and thus the excess protein transfer free energy into the droplet and the saturation concentration where FUS LCD condenses. In the region of phase separation, we simulate FUS LCD droplets of four different sizes in stable equilibrium with the dilute phase and slabs of condensed FUS LCD for tens of microseconds, and over one millisecond in aggregate. We determine surface tensions in the range of 0.01 to 0.4 mN/m from the fluctuations of the droplet shape and from the capillary-wave-like broadening of the interface between the two phases.
From the dynamics of the protein end-to-end distance, we estimate shear viscosities from 0.001 to 0.02 Pa·s for the FUS LCD droplets with scaling factors α in the range of 0.625 to 0.75, where we observe liquid droplets. Significant hydration of the interior of the droplets keeps the proteins mobile and the droplets fluid.
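One simple way to realize such an interaction rescaling, consistent with α = 0 leaving a purely repulsive potential, is to scale only the attractive term of a Lennard-Jones pair potential. This is an illustrative sketch, not the actual MARTINI implementation, and the bead parameters below are assumed values:

```python
def lj_rebalanced(r_nm, eps_kj=2.0, sigma_nm=0.47, alpha=0.65):
    """Lennard-Jones pair potential (kJ/mol) with the attractive term
    scaled by alpha. alpha = 1 recovers the full interaction; alpha = 0
    leaves only the repulsive core. eps_kj and sigma_nm are illustrative,
    not actual MARTINI bead parameters."""
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * eps_kj * (sr6 ** 2 - alpha * sr6)

r_min = 2 ** (1 / 6) * 0.47          # position of the alpha = 1 minimum
print(lj_rebalanced(r_min, alpha=1.0))   # -2.0 (full well depth)
print(lj_rebalanced(r_min, alpha=0.65))  # -0.6 (weakened attraction)
print(lj_rebalanced(r_min, alpha=0.0))   # +2.0 (purely repulsive)
```

Scaling attraction rather than the whole potential preserves excluded volume, so reducing α weakens cohesion without letting beads overlap.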
The protein Atg2 has been proposed to form a membrane tether that mediates lipid transfer from the ER to the phagophore in autophagy. However, recent kinetic measurements on the human homolog ATG2A indicated a transport rate of only about one lipid per minute, which would be far too slow to deliver the millions of lipids required to form a phagophore on a physiological time scale. Here, we revisit the analysis of the fluorescence quenching experiments. We develop a detailed kinetic model of the lipid transfer between two membranes bridged by a tether that forms a conduit for lipids. The model provides an excellent fit to the fluorescence experiments, with a transfer rate of about 100 lipids per second per protein. At this rate, Atg2-mediated transfer can supply a significant fraction of the lipids required in autophagosome biogenesis. Our kinetic model is generally applicable to lipid-transfer experiments, in particular to proteins forming organelle contact sites in cells.
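The essence of such a kinetic scheme can be sketched as a two-compartment exchange model. This is a minimal illustration, not the paper's full model of the quenching assay: a tether moves a fixed number of lipids per second per protein in each direction, and labeled lipids are carried in proportion to their mole fraction in the source membrane, so the label relaxes exponentially toward an even distribution.

```python
def lipid_transfer(n_donor=3000, n_acceptor=3000, labeled0=300,
                   rate=100.0, n_protein=1, t_end=60.0, dt=0.01):
    """Minimal two-membrane lipid-exchange model (illustrative parameters).
    `rate` lipids per second per protein move in each direction; labeled
    lipids travel in proportion to their fraction in the source membrane.
    Returns labeled-lipid counts in donor and acceptor after t_end."""
    lab_d, lab_a = float(labeled0), 0.0
    t = 0.0
    while t < t_end:                    # simple forward-Euler integration
        flux = rate * n_protein * dt
        d2a = flux * lab_d / n_donor    # labeled lipids donor -> acceptor
        a2d = flux * lab_a / n_acceptor # and back
        lab_d += a2d - d2a
        lab_a += d2a - a2d
        t += dt
    return lab_d, lab_a

lab_d, lab_a = lipid_transfer()
# Relaxation time here is n/(2*rate) = 15 s, so after 60 s the label is
# nearly equilibrated between the two equally sized membranes (~150 each).
print(lab_d, lab_a)
```

At one lipid per minute the same relaxation would take on the order of months, which is the quantitative core of the rate discrepancy the abstract describes.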
Binding of the spike protein of SARS-CoV-2 to the human angiotensin converting enzyme 2 (ACE2) receptor triggers translocation of the virus into cells. Both the ACE2 receptor and the spike protein are heavily glycosylated, including at sites near their binding interface. We built fully glycosylated models of the ACE2 receptor bound to the receptor binding domain (RBD) of the SARS-CoV-2 spike protein. Using atomistic molecular dynamics (MD) simulations, we found that the glycosylation of the human ACE2 receptor contributes substantially to the binding of the virus. Interestingly, the glycans at two glycosylation sites, N90 and N322, have opposite effects on spike protein binding. The glycan at the N90 site partly covers the binding interface of the spike RBD. Therefore, this glycan can interfere with the binding of the spike protein and protect against docking of the virus to the cell. By contrast, the glycan at the N322 site interacts tightly with the RBD of the ACE2-bound spike protein and strengthens the complex. Remarkably, the N322 glycan binds into a conserved region of the spike protein identified previously as a cryptic epitope for a neutralizing antibody. By mapping the glycan binding sites, our MD simulations aid in the targeted development of neutralizing antibodies and SARS-CoV-2 fusion inhibitors.
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
Abstract Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: Trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
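The ingredients of such a network, binary McCulloch-Pitts units, STDP on the excitatory weights, synaptic normalization, and a homeostatic threshold rule, can be sketched in a few lines. This is a minimal SORN-style toy with illustrative parameters, not the authors' implementation; randomness enters only through the initial wiring, while the dynamics themselves are fully deterministic.

```python
import numpy as np

rng = np.random.default_rng(0)        # randomness only in the initial wiring
N_E, N_I = 40, 8
W_EE = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.1)
np.fill_diagonal(W_EE, 0.0)
W_EI = rng.random((N_E, N_I)) * 0.5   # inhibition onto excitatory units
W_IE = rng.random((N_I, N_E)) * 0.5   # excitation onto inhibitory units
T_E = rng.random(N_E)                 # excitatory thresholds
T_I = rng.random(N_I) * 0.5
eta_stdp, eta_ip, h_ip = 0.001, 0.001, 0.1   # learning rates, target rate

def step(x, y, u):
    """One deterministic network update followed by the plasticity rules."""
    global W_EE, T_E
    x_new = ((W_EE @ x - W_EI @ y + u - T_E) > 0).astype(float)
    y_new = ((W_IE @ x_new - T_I) > 0).astype(float)
    # STDP: strengthen pre-before-post pairs, weaken post-before-pre pairs
    W_EE += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    np.clip(W_EE, 0.0, None, out=W_EE)
    np.fill_diagonal(W_EE, 0.0)
    # synaptic normalization: incoming excitatory weights of each unit sum to 1
    s = W_EE.sum(axis=1, keepdims=True)
    W_EE /= np.where(s > 0, s, 1.0)
    # intrinsic plasticity: push each unit's firing rate toward target h_ip
    T_E += eta_ip * (x_new - h_ip)
    return x_new, y_new

x = (rng.random(N_E) < 0.1).astype(float)
y = np.zeros(N_I)
for t in range(200):
    u = np.zeros(N_E)
    u[t % 4] = 1.0                    # deterministic repeating input sequence
    x, y = step(x, y, u)
print(x.mean())                       # mean excitatory firing rate
```

Although every update above is deterministic, the interaction of the three plasticity rules with the recurrent dynamics produces irregular, trial-to-trial variable activity, which is the point the summary makes.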
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models, comparing models with all 15 ion channel types to reduced but functional counterparts containing only 5. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the number of ion channels artificially in the reduced models recovered these advantages, confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
Background Corticospinal excitability depends on the current brain state. The recent development of real-time EEG-triggered transcranial magnetic stimulation (EEG-TMS) allows studying this relationship in a causal fashion. Specifically, it has been shown that corticospinal excitability is higher during the scalp surface negative EEG peak compared to the positive peak of µ-oscillations in sensorimotor cortex, as indexed by larger motor evoked potentials (MEPs) for fixed stimulation intensity.
Objective We further characterize the effect of µ-rhythm phase on the MEP input-output (IO) curve by measuring the degree of excitability modulation across a range of stimulation intensities. We furthermore seek to optimize stimulation parameters to enable discrimination of functionally relevant EEG-defined brain states.
Methods A real-time EEG-TMS system was used to trigger MEPs during instantaneous brain-states corresponding to µ-rhythm surface positive and negative peaks with five different stimulation intensities covering an individually calibrated MEP IO curve in 15 healthy participants.
Results MEP amplitude is modulated by µ-phase across a wide range of stimulation intensities, with larger MEPs at the surface negative peak. The largest relative MEP modulation was observed at weak intensities, and the largest absolute MEP modulation at intermediate intensities. These results indicate a leftward shift of the MEP IO curve during the µ-rhythm negative peak.
Conclusion The choice of stimulation intensity influences the observed degree of corticospinal excitability modulation by µ-phase. Lower stimulation intensities enable more efficient differentiation of EEG µ-phase-defined brain states.
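The intensity dependence of the phase effect follows directly from a leftward shift of a sigmoidal IO curve. The sketch below uses a standard sigmoidal parameterisation with illustrative values (not fitted data from the study): the absolute difference between the shifted and unshifted curves peaks at intermediate intensities, while their ratio is largest at the weakest intensities.

```python
import numpy as np

def mep_io(intensity, i50=50.0, slope=5.0, mep_max=2.0):
    """Sigmoidal MEP input-output curve. Intensity, midpoint i50 and
    slope are in arbitrary illustrative units; mep_max in mV (assumed)."""
    return mep_max / (1.0 + np.exp(-(intensity - i50) / slope))

intensities = np.linspace(35.0, 65.0, 301)
pos = mep_io(intensities)                 # µ-rhythm surface positive peak
neg = mep_io(intensities, i50=47.0)       # negative peak: leftward shift

abs_mod = neg - pos                       # absolute modulation, mV
rel_mod = neg / pos                       # relative modulation, ratio
# absolute modulation peaks between the two midpoints (intermediate
# intensity); relative modulation is largest at the weakest intensity
print(intensities[np.argmax(abs_mod)], intensities[np.argmax(rel_mod)])
```

This reproduces the qualitative pattern in the Results: low intensities maximise the relative phase effect, which is why they differentiate the two brain states most efficiently.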