Neurons collect their inputs from other neurons by sending out arborized dendritic structures. However, the relationship between the shape of dendrites and the precise organization of synaptic inputs in the neural tissue remains unclear. Inputs could be distributed in tight clusters, entirely at random, or in a regular grid-like manner. Here, we analyze dendritic branching structures using a regularity index R, based on average nearest neighbor distances between branch and termination points, to characterize their spatial distribution. We find that the distributions of these points depend strongly on cell types, indicating possible fundamental differences in synaptic input organization. Moreover, R is independent of cell size and only weakly correlated with other branching statistics, suggesting that it reflects features of dendritic morphology not captured by commonly studied branching statistics. We then use morphological models based on optimal wiring principles to study the relation between input distributions and dendritic branching structures. Using our models, we find that branch point distributions correlate more closely with the input distributions, while termination points in dendrites are generally spread out more randomly, with a close to uniform distribution. We validate these model predictions with connectome data. Finally, we find that in spatial input distributions with increasing regularity, characteristic scaling relationships between branching features are altered significantly. In summary, we conclude that local statistics of input distributions and dendrite morphology depend on each other, leading to potentially cell-type-specific branching features.
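A regularity index of this kind can be sketched numerically: the mean nearest-neighbour distance of the observed branch and termination points is compared against a baseline for points scattered uniformly in the same volume. The Monte Carlo baseline below is an illustrative assumption, not necessarily the paper's exact normalisation:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour."""
    diffs = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1).mean()

def regularity_index(points, n_shuffles=200, rng=None):
    """Ratio of the observed mean NN distance to that of uniformly
    random points in the same bounding box (Monte Carlo baseline).
    R near 1: random; R < 1: clustered; R > 1: grid-like."""
    rng = np.random.default_rng(rng)
    lo, hi = points.min(0), points.max(0)
    baseline = np.mean([
        mean_nn_distance(rng.uniform(lo, hi, size=points.shape))
        for _ in range(n_shuffles)
    ])
    return mean_nn_distance(points) / baseline
```

On a perfect grid of points this index comes out well above 1, while two tight clusters yield a value far below 1, matching the clustered/random/regular distinction drawn in the abstract.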
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
Compartmental models are the theoretical tool of choice for understanding single neuron computations. However, many models are incomplete, built ad hoc, and require tuning for each novel condition, limiting their usability. Here, we present T2N, a powerful interface to control NEURON with Matlab and the TREES toolbox, which supports generating models that are stable over a broad range of reconstructed and synthetic morphologies. We illustrate this for a novel, highly detailed active model of dentate granule cells (GCs) replicating a wide palette of experiments from various labs. By implementing known differences in ion channel composition and morphology, our model reproduces data from mouse or rat, mature or adult-born GCs, as well as pharmacological interventions and epileptic conditions. This work sets a new benchmark for detailed compartmental modeling. T2N is suitable for creating robust models useful for large-scale networks that could lead to novel predictions. We discuss possible applications of T2N in degeneracy studies.
Sholl analysis has been an important technique in dendritic anatomy for more than 60 years. The Sholl intersection profile is obtained by counting the number of dendritic branches at a given distance from the soma and is a key measure of dendritic complexity; it has applications from evaluating the changes in structure induced by pathologies to estimating the expected number of anatomical synaptic contacts. We find that the Sholl intersection profiles of most neurons can be reproduced from three basic, functional measures: the domain spanned by the dendritic arbor, the total length of the dendrite, and the angular distribution of how far dendritic segments deviate from a direct path to the soma (i.e., the root angle distribution). The first two measures are determined by axon location and hence microcircuit structure; the third arises from optimal wiring and represents a branching statistic estimating the need for conduction speed in a neuron.
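The core of the Sholl intersection profile — counting dendritic cable at a given distance from the soma — admits a minimal numerical sketch, assuming the morphology is reduced to straight cable segments (the helper's name and array layout are illustrative, not from the paper):

```python
import numpy as np

def sholl_profile(segments, soma, radii):
    """Count dendritic segments crossing each sphere of radius r
    centred on the soma. `segments` is an (N, 2, 3) array of straight
    cable pieces (start/end points), a simplified stand-in for a
    full reconstruction."""
    d0 = np.linalg.norm(segments[:, 0] - soma, axis=1)
    d1 = np.linalg.norm(segments[:, 1] - soma, axis=1)
    lo, hi = np.minimum(d0, d1), np.maximum(d0, d1)
    # a segment intersects the sphere of radius r if r lies between
    # its endpoints' radial distances
    return np.array([(lo <= r) & (r < hi) for r in radii]).sum(axis=1)
```

The finding of the paper is then that such profiles are largely determined by only three measures — arbor domain, total cable length, and the root angle distribution — rather than by the full branching structure.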
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
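The normalisation described amounts to dividing each unit's incoming weights by its number of non-zero afferent contacts. A minimal NumPy sketch (the function name and dense-matrix representation are illustrative; in practice the rescaling is applied during training of a sparse network):

```python
import numpy as np

def dendritic_normalisation(W):
    """Divide each unit's afferent weights (rows of W, shape
    (n_out, n_in)) by its number of non-zero contacts; units with
    no contacts are left unchanged."""
    n_afferents = np.count_nonzero(W, axis=1)
    return W / np.maximum(n_afferents, 1)[:, None]
```

The effect is that a unit's total drive depends on the proportion of its inputs that are active rather than on their absolute number, mirroring the biophysical observation motivating the scheme.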
Abstract: Integration of synaptic currents across an extensive dendritic tree is a prerequisite for computation in the brain. Dendritic tapering away from the soma has been suggested to both equalise contributions from synapses at different locations and maximise the current transfer to the soma. To find out how this is achieved precisely, an analytical solution for the current transfer in dendrites with arbitrary taper is required. We derive here an asymptotic approximation that accurately matches results from numerical simulations. From this we then determine the diameter profile that maximises the current transfer to the soma. We find a simple quadratic form that matches diameters obtained experimentally, indicating a fundamental architectural principle of the brain that links dendritic diameters to signal transmission.
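The problem being solved can be stated in standard passive cable theory with a spatially varying diameter d(x); the notation below (axial resistivity R_a, membrane resistivity R_m) is conventional and may differ from the paper's own symbols:

```latex
% Steady-state passive cable with varying diameter d(x):
%   axial conductance per unit length:    \pi d(x)^2 / (4 R_a)
%   membrane conductance per unit length: \pi d(x) / R_m
\frac{\mathrm{d}}{\mathrm{d}x}\!\left(\frac{\pi\, d(x)^2}{4 R_a}\,
  \frac{\mathrm{d}V}{\mathrm{d}x}\right)
  = \frac{\pi\, d(x)}{R_m}\, V(x)
```

The asymptotic approximation derived in the paper treats this equation for arbitrary d(x); the optimal profile it reports is the simple quadratic form mentioned above.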
Author Summary: Neurons take a great variety of shapes that allow them to perform their different computational roles across the brain. The most distinctive visible feature of many neurons is the extensively branched network of cable-like projections that make up their dendritic tree. A neuron receives current-inducing synaptic contacts from other cells across its dendritic tree. As in the case of botanical trees, dendritic trees are strongly tapered towards their tips. This tapering has previously been shown to offer a number of advantages over a constant width, both in terms of reduced energy requirements and the robust integration of inputs at different locations. However, in order to predict the computations that neurons perform, analytical solutions for the flow of input currents tend to assume constant dendritic diameters. Here we introduce an asymptotic approximation that accurately models the current transfer in dendritic trees with arbitrary, continuously changing, diameters. When we then determine the diameter profiles that maximise current transfer towards the cell body we find diameters similar to those observed in real neurons. We conclude that the tapering in dendritic trees to optimise signal transmission is a fundamental architectural principle of the brain.
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex
(2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding that potential connectivity increases linearly with cable length and maximum spine length and decreases with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with neurite densities higher than necessary to reliably ensure the possible implementation of any given connection. We show that potential contacts allow connectivity to be implemented specifically at a subcellular level.
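The dependencies described — linear in the cable lengths and maximum spine length, inverse in the overlapping volume — match the classical "potential synapse" estimate, sketched below. The prefactor and exact form follow that literature and are an assumption here, not necessarily the paper's equation:

```python
def expected_potential_contacts(axon_length, dendrite_length,
                                spine_length, overlap_volume):
    """Expected number of potential anatomical contacts between two
    neurons whose arbours share an overlap volume: proportional to
    both cable lengths (within the overlap) and the maximum spine
    length, inversely proportional to the overlap volume. The
    prefactor 2 follows the classical potential-synapse estimate."""
    return 2.0 * spine_length * axon_length * dendrite_length / overlap_volume
```

For example, 1000 um of axon and 500 um of dendrite sharing a 10^5 um^3 volume, with 2 um spines, gives an expectation of 20 potential contacts under this estimate.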
The true revolution in the age of digital neuroanatomy is the ability to extensively quantify anatomical structures and thus investigate structure-function relationships in great detail. Large-scale projects were recently launched with the aim of providing infrastructure for brain simulations. These projects will increase the need for a precise understanding of brain structure, e.g., through statistical analysis and models.
From the articles in this Research Topic, we identify three main themes that clearly illustrate how new quantitative approaches are helping advance our understanding of neural structure and function. First, new approaches to reconstructing neurons and circuits from empirical data are aiding neuroanatomical mapping. Second, methods are introduced to improve understanding of the underlying principles of organization. Third, by combining existing knowledge from lower levels of organization, models can be used to make testable predictions about higher levels of organization where knowledge is absent or poor. This latter approach is useful for examining statistical properties of specific network connectivity when current experimental methods cannot yet fully reconstruct whole circuits of more than a few hundred neurons.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models with tracking of dendritic branch dynamics to propose that distinct sequential phases of targeted growth and stochastic retraction produce dendritic trees that are efficient in terms of both wiring and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.