The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts each time point from the previous one and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth, based on detailed developmental data, that optimises wiring and space filling and serves as a basis for better understanding coverage and connectivity during neural circuit formation.
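As a rough illustration of the shape-conserving stretch described above, the following minimal Python sketch predicts dendrite node positions at the next developmental stage by a uniform scaling about the soma. The function name, the single scalar stretch factor and the soma-centred scaling are illustrative assumptions; the published model additionally enforces optimal wiring under a total-cable constraint.

```python
import numpy as np

def predict_next_timepoint(nodes, stretch_factor):
    """Scale all dendrite node positions away from the soma (node 0)
    by a single factor, conserving the shape of the tree."""
    soma = nodes[0]
    return soma + stretch_factor * (nodes - soma)

# Hypothetical usage: three 3D points, tissue expanding by 20%
nodes_t0 = np.array([[0.0, 0.0, 0.0],
                     [10.0, 0.0, 0.0],
                     [10.0, 5.0, 0.0]])
nodes_t1 = predict_next_timepoint(nodes_t0, 1.2)
```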
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree and are considered "atrophic", i.e., pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells, we show that denervation-induced dendritic atrophy also subserves homeostatic functions: by shortening their dendritic tree, granule cells compensate for the loss of inputs through a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e., homeostatic dendritic remodeling, operates in long-term denervated neurons to achieve functional homeostasis.
Reducing neuronal size results in less cell membrane and therefore a lower input conductance. Smaller neurons are thus more excitable, as seen in their voltage responses to current injections at the soma. However, the impact of a neuron's size and shape on its voltage responses to synaptic activation in dendrites is much less well understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron's response depends only on the average dendritic diameter and its intrinsic conductivity. These results hold for the entire range of possible dendritic morphologies, irrespective of any particular arborisation complexity. Moreover, spiking models produce morphology-invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron's excitability in response to synaptic inputs is not affected by total dendrite length. Rather, it provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
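A one-line intuition for this length-invariance, in the simplifying isopotential limit rather than the full cable-theoretic derivation used above: with leak and synaptic conductances written as densities per unit membrane area (g_L, g_s, with reversal potentials E_L, E_s), the steady-state voltage under uniform synaptic activation is

```latex
\[
  V_\infty = \frac{g_L E_L + g_s E_s}{g_L + g_s}
\]
```

Adding dendritic length at fixed synaptic density adds leak and synaptic membrane in equal proportion, leaving both densities, and hence V_infinity, unchanged. In the full cable treatment, dendritic diameter and intrinsic conductivity enter through the electrotonic structure, consistent with the dependence stated above.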
Neurons collect their inputs from other neurons by sending out arborized dendritic structures. However, the relationship between the shape of dendrites and the precise organization of synaptic inputs in the neural tissue remains unclear. Inputs could be distributed in tight clusters, entirely randomly, or in a regular grid-like manner. Here, we analyze dendritic branching structures using a regularity index R, based on average nearest-neighbor distances between branch and termination points, that characterizes their spatial distribution. We find that the distributions of these points depend strongly on cell type, indicating possible fundamental differences in synaptic input organization. Moreover, R is independent of cell size, and we find that it is only weakly correlated with other branching statistics, suggesting that it reflects features of dendritic morphology that are not captured by commonly studied branching statistics. We then use morphological models based on optimal wiring principles to study the relation between input distributions and dendritic branching structures. Using our models, we find that branch point distributions correlate more closely with the input distributions, while termination points in dendrites are generally spread out more randomly, with a close to uniform distribution. We validate these model predictions with connectome data. Finally, we find that with increasing regularity of the spatial input distributions, characteristic scaling relationships between branching features are altered significantly. In summary, we conclude that local statistics of input distributions and dendrite morphology depend on each other, leading to potentially cell-type-specific branching features.
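To make the nearest-neighbour construction concrete, here is a hedged Python sketch of a Clark-Evans-style regularity index in 2D; the paper's exact definition, dimensionality and boundary corrections may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def regularity_index(points):
    """Mean observed nearest-neighbour distance divided by its
    expectation 1 / (2 * sqrt(density)) for a uniform Poisson
    process in 2D: R ~ 1 for random points, R < 1 for clustered
    points, R > 1 for grid-like arrangements."""
    points = np.asarray(points, dtype=float)
    d, _ = cKDTree(points).query(points, k=2)   # k=2: nearest neighbour other than self
    mean_nn = d[:, 1].mean()
    extent = points.max(axis=0) - points.min(axis=0)
    density = len(points) / np.prod(extent)     # crude bounding-box density estimate
    return mean_nn / (1.0 / (2.0 * np.sqrt(density)))
```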
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation-selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers, independently of the overall network size. These results suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model reproduces a large number of the characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
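A minimal sketch of the placement idea in Python, assuming scikit-learn's t-SNE and a random binary connectivity matrix as a stand-in for the model circuit; the published model's preprocessing, distance metric and parameters may differ.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_neurons = 500
# Stand-in circuit: random 10%-dense binary connectivity
connectivity = (rng.random((n_neurons, n_neurons)) < 0.1).astype(float)

# Each neuron is described by its row of the connectivity matrix, so
# t-SNE places neurons with similar connection profiles close together.
positions = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(connectivity)
```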
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
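As a toy illustration of the two-phase scheme (all quantities and the scoring rule below are invented placeholders, not the paper's model): a stochastic growth phase proposes candidate branches, and a retraction phase then prunes those that cost the most wire per unit of functional benefit.

```python
import numpy as np

rng = np.random.default_rng(1)

n_candidates = 40
lengths = rng.exponential(scale=5.0, size=n_candidates)  # growth phase: proposed branch lengths
benefit = rng.random(n_candidates)                       # placeholder functional benefit per branch
score = benefit / lengths                                # benefit per unit of wire
keep = score > np.median(score)                          # retraction phase: prune the worst half
final_branches = lengths[keep]
```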
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models, comparing models with all 15 ion channel types to reduced but functional counterparts containing only 5. Strikingly, valid parameter combinations were more frequent in the full models and more stable in the face of perturbations to channel expression levels. Artificially scaling up the number of ion channels in the reduced models recovered these advantages, confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
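A hedged Python sketch of the population approach: sample random maximal-conductance combinations and keep the "valid" ones whose excitability measure falls in a target range. The stand-in `toy_rate` function replaces the full biophysical granule-cell simulation used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_rate(gbar):
    """Placeholder for a simulated firing-rate measurement."""
    return gbar.mean()

n_models, n_channels = 10_000, 15
gbars = rng.uniform(0.0, 1.0, size=(n_models, n_channels))
rates = np.array([toy_rate(g) for g in gbars])
valid = (rates > 0.45) & (rates < 0.55)   # models meeting the excitability target
fraction_valid = valid.mean()             # larger for more channels in this toy setup
```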
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks change with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often instead measure the mass activity of neuronal microcircuits via the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP from the standard output of point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with a "ground-truth" LFP obtained by injecting the LIF network's synaptic input currents into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network at all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates LFP generation, and thereby facilitate quantitative comparisons between computational models and experimental LFP recordings in vivo.
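A hedged Python sketch of a "weighted sum" proxy in this spirit: a fixed linear combination of the population-summed AMPA and GABA currents. The GABA weight and AMPA delay below are illustrative placeholders, not the paper's fitted coefficients.

```python
import numpy as np

def lfp_proxy(ampa, gaba, dt_ms, w_gaba=1.65, ampa_delay_ms=6.0):
    """Combine summed AMPA and GABA current traces (same length, sampled
    every dt_ms) into a single LFP estimate. The weight and delay values
    are placeholders; see the original study for fitted coefficients."""
    shift = int(round(ampa_delay_ms / dt_ms))
    ampa_delayed = np.roll(ampa, shift)
    ampa_delayed[:shift] = ampa[0]          # pad samples rolled in from the end
    return np.abs(ampa_delayed) + w_gaba * np.abs(gaba)
```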
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, in which the strength of a neuron's afferents is divided by their number, to various sparsely connected artificial networks. Learning performance increases dramatically, providing an improvement over other widely used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
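A minimal sketch of this normalisation in Python, applied to one sparsely connected layer; layer sizes, sparsity and weight scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_out, p_connect = 1000, 200, 0.05
mask = rng.random((n_in, n_out)) < p_connect          # sparse connectivity
weights = rng.normal(size=(n_in, n_out)) * mask

in_degree = np.maximum(mask.sum(axis=0), 1)           # afferent count per output unit
weights_normalised = weights / in_degree              # divide each unit's afferents by their number

# A unit's activation now tracks the fraction of its inputs that are
# active, not their absolute number.
x = (rng.random(n_in) < 0.2).astype(float)            # 20% of inputs active
activation = x @ weights_normalised
```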