Branching allows neurons to make synaptic contacts with large numbers of other neurons, facilitating the high connectivity of nervous systems. Neuronal arbors have geometric properties such as branch lengths and diameters that are optimal in that they maximize signaling speeds while minimizing construction costs. In this work, we asked whether neuronal arbors have topological properties that may also optimize their growth or function. We discovered that for a wide range of invertebrate and vertebrate neurons the distributions of their subtree sizes follow power laws, implying that they are scale invariant. The power-law exponent distinguishes different neuronal cell types. Postsynaptic spines and branchlets perturb scale invariance. Through simulations, we show that the subtree-size distribution depends on the symmetry of the branching rules governing arbor growth and that optimal morphologies are scale invariant. Thus, the subtree-size distribution is a topological property that recapitulates the functional morphology of dendrites.
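The subtree-size statistic at the heart of this analysis is simple to compute. The sketch below assumes an SWC-style parent array (root marked -1, parents listed before their children) and counts the nodes in every subtree in a single reverse pass; it is a toy illustration of the quantity being measured, not the authors' analysis pipeline.

```python
def subtree_sizes(parent):
    """Number of nodes in the subtree rooted at each node.

    `parent[i]` is the parent index of node i; the root has parent -1.
    Assumes nodes are ordered so that a parent precedes its children,
    as in SWC-style neuron reconstructions.
    """
    sizes = [1] * len(parent)
    # Traverse leaves-to-root: children appear after parents, so a
    # reverse pass accumulates each subtree before its parent is read.
    for i in range(len(parent) - 1, 0, -1):
        sizes[parent[i]] += sizes[i]
    return sizes

# A small symmetric binary tree: root 0 with two balanced subtrees.
parent = [-1, 0, 0, 1, 1, 2, 2]
print(subtree_sizes(parent))  # [7, 3, 3, 1, 1, 1, 1]
```

Collecting these sizes over a reconstructed arbor and histogramming them yields the distribution whose power-law tail the abstract describes.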
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither the induction of extrinsic synaptic plasticity nor the blockage of presynaptic activity degrades the lognormal-like distribution, but each changes its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient to generate the lognormal-like distribution of spine sizes, while a combination of intrinsic and extrinsic synaptic plasticity maintains it.
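A classic route to a lognormal-like distribution is multiplicative dynamics: if each spine's size is repeatedly scaled by small random factors, the log-sizes perform a random walk and the sizes become lognormal. The toy simulation below illustrates this generic mechanism; it is a hypothetical sketch, not the intrinsic-plasticity model fitted in the paper.

```python
import math
import random

def simulate_spines(n=5000, steps=200, sigma=0.1, seed=1):
    """Toy multiplicative model: each spine's size is repeatedly
    scaled by a small random factor exp(N(0, sigma)), driving the
    population towards a lognormal-like shape."""
    rng = random.Random(seed)
    sizes = [1.0] * n
    for _ in range(steps):
        sizes = [s * math.exp(rng.gauss(0.0, sigma)) for s in sizes]
    return sizes

sizes = simulate_spines()
logs = [math.log(s) for s in sizes]
mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
# Log-sizes are approximately normal with variance close to steps * sigma**2,
# i.e. the linear-space distribution is lognormal and right-skewed.
print(round(var, 1))
```

With these parameters the mean spine size exceeds the median, the right-skew ("more small spines than large ones") the abstract refers to.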
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model is able to reproduce a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
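The core idea of neural placement, positioning neurons so that strongly connected pairs lie close together, can be sketched in a few lines. For self-containment the example below swaps t-SNE for classical multidimensional scaling on a dissimilarity derived from the connectivity matrix; it is a hypothetical stand-in, not the MAPStoolbox implementation.

```python
import numpy as np

def place_neurons(C):
    """Embed neurons in 2D so that strongly connected pairs lie close.

    Minimal stand-in for t-SNE-based neural placement: classical MDS
    on the dissimilarity D = 1 - C / C.max() built from a symmetric,
    non-negative connectivity matrix C. Illustrative only.
    """
    D = 1.0 - C / C.max()            # strong connection -> small distance
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J      # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:2]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Two toy "hypercolumns": dense connectivity within blocks, sparse between.
C = np.kron(np.eye(2), np.ones((4, 4))) + 0.1
xy = place_neurons(C)
# Neurons from the same block end up closer together than across blocks.
```

The same recipe scales to full circuit connectivity matrices, where the ordering of shared features in the embedding is what produces map-like structure.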
The cytoskeleton is crucial for defining neuronal-type-specific dendrite morphologies. To explore how the complex interplay of actin-modulatory proteins (AMPs) can define neuronal types in vivo, we focused on the class III dendritic arborization (c3da) neuron of Drosophila larvae. Using computational modeling, we reveal that the main branches (MBs) of c3da neurons follow general models based on optimal wiring principles, while the actin-enriched short terminal branches (STBs) require an additional growth program. To clarify the cellular mechanisms that define this second step, we thus concentrated on STBs for an in-depth quantitative description of dendrite morphology and dynamics. Applying these methods systematically to mutants of six known and novel AMPs, we revealed the complementary roles of these individual AMPs in defining STB properties. Our data suggest that diverse dendrite arbors result from a combination of optimal-wiring-related growth and individualized growth programs that are neuron-type specific.
Neuronal hyperexcitability is a feature of Alzheimer’s disease (AD). Three main mechanisms have been proposed to explain it: (i) dendritic degeneration leading to increased input resistance, (ii) ion channel changes leading to enhanced intrinsic excitability, and (iii) synaptic changes leading to excitation-inhibition (E/I) imbalance. However, the relative contribution of these mechanisms is not fully understood. Therefore, we performed biophysically realistic multi-compartmental modelling of excitability in reconstructed CA1 pyramidal neurons of wild-type and APP/PS1 mice, a well-established animal model of AD. We show that, for synaptic activation, the excitability-promoting effects of dendritic degeneration are cancelled out by the excitability-decreasing effects of synaptic loss. We find an interesting balance of excitability regulation with enhanced degeneration in the basal dendrites of APP/PS1 cells, potentially leading to increased excitation by the apical but decreased excitation by the basal Schaffer collateral pathway. Furthermore, our simulations reveal that three additional pathomechanistic scenarios can account for the experimentally observed increase in firing and bursting of CA1 pyramidal neurons in APP/PS1 mice. Scenario 1: increased excitatory burst input; scenario 2: enhanced E/I ratio; and scenario 3: alteration of intrinsic ion channels (I_AHP down-regulated; I_NaP, I_Na and I_CaT up-regulated) in addition to enhanced E/I ratio. Our work supports the hypothesis that pathological network and ion channel changes are major contributors to neuronal hyperexcitability in AD. Overall, our results are in line with the concept of multi-causality and degeneracy, according to which multiple different disruptions are separately sufficient but no single disruption is necessary for neuronal hyperexcitability.
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models comparing those with all 15 ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the numbers of ion channels artificially in the reduced models recovered these advantages confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
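Why more channel types make a neuron more robust can be seen in a back-of-the-envelope simulation: if a target excitability is shared across many independently jittered channel expression levels, the fluctuations average out. The sketch below is a hypothetical toy with a made-up scalar readout, not the biophysical granule cell models of the study.

```python
import random

def robustness(n_channels, trials=2000, jitter=0.5, seed=0):
    """Fraction of perturbed models whose summed conductance stays
    within ±20% of target.

    Hypothetical toy: a target total conductance of 1 is split evenly
    across `n_channels` channel types; each expression level is then
    jittered independently. More channel types average out the noise.
    """
    rng = random.Random(seed)
    target, ok = 1.0, 0
    for _ in range(trials):
        g = [(target / n_channels) * (1 + rng.uniform(-jitter, jitter))
             for _ in range(n_channels)]
        if abs(sum(g) - target) <= 0.2 * target:
            ok += 1
    return ok / trials

# Mirroring the 5- vs 15-channel comparison: the 15-channel toy model
# tolerates perturbations to expression levels more often.
print(robustness(5), robustness(15))
```

This averaging argument is of course only one ingredient; the paper's point is that it holds even in full conductance-based models with realistic channel kinetics.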
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely connected artificial neural networks, which have the potential to be more computationally efficient than their fully connected counterparts and more closely resemble the architectures of biological systems. Here we present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
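The normalisation itself is a one-line operation: divide each unit's incoming weights by its number of afferent contacts. A minimal forward-pass sketch for a masked (sparse) linear layer is given below; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def dendritic_normalised_forward(x, W, mask):
    """Forward pass of a sparse linear layer with dendritic normalisation.

    Each unit's incoming weights are divided by its number of afferent
    contacts (non-zero entries in its mask row), mirroring how passive
    dendrites scale responses to distributed synaptic inputs.
    """
    fan_in = mask.sum(axis=1, keepdims=True)   # contacts per unit
    fan_in = np.maximum(fan_in, 1)             # avoid divide-by-zero
    W_norm = (W * mask) / fan_in               # normalise by contact count
    return W_norm @ x

# A unit with twice the contacts but identical weights gives the same
# output as a unit with half the contacts: activity is fan-in invariant.
x = np.ones(4)
W = np.ones((2, 4))
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 1, 1]], dtype=float)
y = dendritic_normalised_forward(x, W, mask)
print(y)  # [1. 1.]
```

In training, the same division can be applied after each weight update, so units keep comparable output scales as connectivity is pruned or regrown.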
Dendrites display a striking variety of neuronal type-specific morphologies, but the mechanisms and principles underlying such diversity remain elusive. A major player in defining the morphology of dendrites is the neuronal cytoskeleton, including evolutionarily conserved actin-modulatory proteins (AMPs). Still, we lack a clear understanding of how AMPs might support developmental phenomena such as neuron-type specific dendrite dynamics. To address precisely this level of in vivo specificity, we concentrated on a defined neuronal type, the class III dendritic arborisation (c3da) neuron of Drosophila larvae, displaying actin-enriched short terminal branchlets (STBs). Computational modelling reveals that the main branches of c3da neurons follow a general growth model based on optimal wiring, but the STBs do not. Instead, model STBs are defined by a short reach and a high affinity to grow towards the main branches. We thus concentrated on c3da STBs and developed new methods to quantitatively describe dendrite morphology and dynamics based on in vivo time-lapse imaging of mutants lacking individual AMPs. In this way, we extrapolated the role of these AMPs in defining STB properties. We propose that dendrite diversity is supported by the combination of a common step, refined by a neuron type-specific second level. For c3da neurons, we present a molecular model of how the combined action of multiple AMPs in vivo define the properties of these second level specialisations, the STBs.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction (2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve dendritic trees that are efficient in terms of both wiring and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
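The shape-conserving stretching rule has a simple geometric reading: predicting the next time point amounts to isotropically scaling the current reconstruction about the soma, which multiplies total cable length by the stretch factor while leaving branch angles untouched. The sketch below illustrates this on a toy arbor; it is a hypothetical illustration of the growth rule, not the fitted model from the paper.

```python
import math

def stretch(points, scale):
    """Isotropic stretching about the soma at the origin: a minimal
    sketch of a conserved-shape growth rule, predicting one
    developmental time point from the previous."""
    return [(scale * x, scale * y) for x, y in points]

def total_cable(points, edges):
    """Total dendritic cable length summed over the tree's edges."""
    return sum(math.dist(points[i], points[j]) for i, j in edges)

# Toy arbor: soma at the origin with a short comb of three branches.
pts = [(0, 0), (1, 0), (1, 1), (1, -1)]
edges = [(0, 1), (1, 2), (1, 3)]
grown = stretch(pts, 2.0)
print(total_cable(grown, edges) / total_cable(pts, edges))  # 2.0
```

Constraining how fast total cable may grow then fixes the stretch factor per developmental interval, which is what lets such a model fill newly available space without re-optimising the whole tree.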