Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells we show that denervation-induced dendritic atrophy also subserves homeostatic functions: by shortening their dendritic tree, granule cells compensate for the loss of inputs by a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any given dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
Important brain functions need to be conserved throughout organisms of extremely varying sizes. Here we study the scaling properties of an essential component of computation in the brain: the single neuron. We compare morphology and signal propagation of a uniquely identifiable interneuron, the HS cell, in the blowfly (Calliphora) with its exact counterpart in the fruit fly (Drosophila), which is about four times smaller in each dimension. Anatomical features of the HS cell scale isometrically and minimise wiring costs but, by themselves, do not scale to preserve the electrotonic behaviour. However, the membrane properties are set such that dendritic and axonal delays, signal attenuation, and dendritic integration of visual information are all conserved. In conclusion, the electrotonic structure of a neuron, the HS cell in this case, is surprisingly stable over a wide range of morphological scales.
Abstract: Integration of synaptic currents across an extensive dendritic tree is a prerequisite for computation in the brain. Dendritic tapering away from the soma has been suggested to both equalise contributions from synapses at different locations and maximise the current transfer to the soma. To find out how this is achieved precisely, an analytical solution for the current transfer in dendrites with arbitrary taper is required. We derive here an asymptotic approximation that accurately matches results from numerical simulations. From this we then determine the diameter profile that maximises the current transfer to the soma. We find a simple quadratic form that matches diameters obtained experimentally, indicating a fundamental architectural principle of the brain that links dendritic diameters to signal transmission.
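The current-transfer question addressed here can also be explored numerically. The following is a minimal sketch, not the paper's asymptotic solution, of steady-state voltage transfer in a passive, unbranched cable discretized into compartments with an arbitrary diameter profile; all parameter values and the quadratic-taper expression are illustrative.

```python
import numpy as np

def soma_transfer(diams, seg_len=10e-4, Ra=100.0, Rm=10000.0):
    """Steady-state voltage transfer from the distal tip to the soma of a
    passive, unbranched cable discretized into compartments.
    diams: compartment diameters in cm, soma end first.
    seg_len in cm, axial resistivity Ra in ohm*cm, Rm in ohm*cm^2."""
    n = len(diams)
    r_axial = 4 * Ra * seg_len / (np.pi * diams**2)   # axial R per compartment
    r_mem = Rm / (np.pi * diams * seg_len)            # membrane R per compartment
    G = np.zeros((n, n))                              # node conductance matrix
    for i in range(n):
        G[i, i] += 1.0 / r_mem[i]                     # leak to rest
        if i + 1 < n:
            g = 2.0 / (r_axial[i] + r_axial[i + 1])   # coupling conductance
            G[i, i] += g
            G[i + 1, i + 1] += g
            G[i, i + 1] -= g
            G[i + 1, i] -= g
    I = np.zeros(n)
    I[-1] = 1e-9                                      # 1 nA injected at the tip
    V = np.linalg.solve(G, I)
    return V[0] / V[-1]                               # fraction reaching the soma

n = 50
x = np.linspace(0, 1, n)
uniform = np.full(n, 2e-4)                            # 2 um everywhere
quadratic = 4e-4 * (1 - 0.9 * x) ** 2 + 0.4e-4       # illustrative quadratic taper
transfer = soma_transfer(quadratic)
```

The same function can be used to compare arbitrary diameter profiles under identical total parameters, which is the kind of comparison the analytical treatment makes exact.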
Author Summary: Neurons take a great variety of shapes that allow them to perform their different computational roles across the brain. The most distinctive visible feature of many neurons is the extensively branched network of cable-like projections that make up their dendritic tree. A neuron receives current-inducing synaptic contacts from other cells across its dendritic tree. As in the case of botanical trees, dendritic trees are strongly tapered towards their tips. This tapering has previously been shown to offer a number of advantages over a constant width, both in terms of reduced energy requirements and the robust integration of inputs at different locations. However, in order to predict the computations that neurons perform, analytical solutions for the flow of input currents tend to assume constant dendritic diameters. Here we introduce an asymptotic approximation that accurately models the current transfer in dendritic trees with arbitrary, continuously changing, diameters. When we then determine the diameter profiles that maximise current transfer towards the cell body we find diameters similar to those observed in real neurons. We conclude that the tapering in dendritic trees to optimise signal transmission is a fundamental architectural principle of the brain.
Much is known about the computation in individual neurons in the cortical column. Also, the selective connectivity between many cortical neuron types has been studied in great detail. However, due to the complexity of this microcircuitry, its functional role within the cortical column remains a mystery. Some of the wiring behavior between neurons can be interpreted directly from their particular dendritic and axonal shapes. Here, I describe the dendritic density field (DDF) as one key element that remains to be better understood. I sketch an approach to relate DDFs in general to their underlying potential connectivity schemes. As an example, I show how the characteristic shape of a cortical pyramidal cell appears as a direct consequence of connecting inputs arranged in two separate parallel layers.
Dendritic morphology has been shown to have a dramatic impact on neuronal function. However, population features such as the inherent variability in dendritic morphology between cells belonging to the same neuronal type are often overlooked when studying computation in neural networks. While detailed models for morphology and electrophysiology exist for many types of single neurons, the role of detailed single cell morphology in the population has not been studied quantitatively or computationally. Here we use the structural context of the neural tissue in which dendritic trees exist to drive their generation in silico. We synthesize the entire population of dentate gyrus granule cells, the most numerous cell type in the hippocampus, by growing their dendritic trees within their characteristic dendritic fields bounded by the realistic structural context of (1) the granule cell layer that contains all somata and (2) the molecular layer that contains the dendritic forest. This process enables branching statistics to be linked to larger scale neuroanatomical features. We find large differences in dendritic total length and individual path length measures as a function of location in the dentate gyrus and of somatic depth in the granule cell layer. We also predict the number of unique granule cell dendrites invading a given volume in the molecular layer. This work enables the complete population-level study of morphological properties and provides a framework to develop complex and realistic neural network models.
Poster presentation: Twenty Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013.
Neuronal death and subsequent denervation of target areas are major features of several neurological conditions such as brain trauma, ischemia or neurodegeneration. The denervation-induced axonal loss results in reorganization of the dendritic tree of denervated neurons. Dendritic reorganization of denervated neurons has been previously studied using entorhinal cortex lesion (ECL).
ECL leads to shortening and loss of dendritic segments in the denervated outer molecular layer of the dentate gyrus [1]. However, the functional importance of these long-term dendritic alterations is not yet understood and their impact on neuronal electrical properties remains unclear. Therefore, in this study we analyzed what happens to the electrotonic structure and excitability of dentate granule cells after denervation-induced alterations of their dendritic morphology, assuming all other parameters remain equal.
To perform comparative electrotonic analysis we used computer simulations in anatomically and biophysically realistic compartmental models of 3D-reconstructed healthy and denervated granule cells. Our results show that somatofugal and somatopetal voltage attenuation due to passive cable properties was strongly reduced in denervated granule cells. In line with these predictions, the attenuation of simulated backpropagating action potentials and forward propagating EPSPs was significantly reduced in dendrites of denervated neurons. In addition, simulations of somatic and dendritic frequency-current (f-I) curves revealed increased excitability in deafferented granule cells.
Taken together, our results indicate that unless counterbalanced by a compensatory adjustment of passive and/or active membrane properties, the plastic remodeling of dendrites following lesion of entorhinal cortex inputs to granule cells will boost their electrotonic compactness and excitability.
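The link reported above between dendritic loss, input resistance and excitability can be illustrated with the textbook leaky integrate-and-fire f-I relation; this is a didactic sketch, not the authors' granule-cell model, and the resistance values are hypothetical placeholders.

```python
import math

def lif_rate(I, R, tau=0.02, v_th=0.015, t_ref=0.002):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron driven by a
    constant current I (A) through input resistance R (ohm), with membrane
    time constant tau (s), threshold v_th (V above rest, reset to rest)
    and absolute refractory period t_ref (s)."""
    if R * I <= v_th:
        return 0.0  # steady-state voltage never reaches threshold
    isi = t_ref + tau * math.log(R * I / (R * I - v_th))
    return 1.0 / isi

# Hypothetical values: a smaller dendritic tree raises input resistance,
# so the same current drives the cell to fire faster.
rate_intact = lif_rate(2e-10, 1.2e8)       # 200 pA into 120 Mohm
rate_denervated = lif_rate(2e-10, 2.0e8)   # 200 pA into 200 Mohm
```

With identical membrane parameters, the higher-resistance (smaller-tree) cell produces a left-shifted, steeper f-I curve, matching the direction of the effect described in the abstract.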
Neurogenesis of hippocampal granule cells (GCs) persists throughout mammalian life and is important for learning and memory. How newborn GCs differentiate and mature into an existing circuit during this time period is not yet fully understood. We established a method to visualize postnatally generated GCs in organotypic entorhino-hippocampal slice cultures (OTCs) using retroviral (RV) GFP-labeling and performed time-lapse imaging to study their morphological development in vitro. Using anterograde tracing we could, furthermore, demonstrate that the postnatally generated GCs in OTCs, similar to adult born GCs, grow into an existing entorhino-dentate circuitry. RV-labeled GCs were identified and individual cells were followed for up to four weeks post injection. Postnatally born GCs exhibited highly dynamic structural changes, including dendritic growth spurts but also retraction of dendrites and phases of dendritic stabilization. In contrast, older, presumably prenatally born GCs labeled with an adeno-associated virus (AAV), were far less dynamic. We propose that the high degree of structural flexibility seen in our preparations is necessary for the integration of newborn granule cells into an already existing neuronal circuit of the dentate gyrus in which they have to compete for entorhinal input with cells generated and integrated earlier.
The cytoskeleton is crucial for defining neuronal-type-specific dendrite morphologies. To explore how the complex interplay of actin-modulatory proteins (AMPs) can define neuronal types in vivo, we focused on the class III dendritic arborization (c3da) neuron of Drosophila larvae. Using computational modeling, we reveal that the main branches (MBs) of c3da neurons follow general models based on optimal wiring principles, while the actin-enriched short terminal branches (STBs) require an additional growth program. To clarify the cellular mechanisms that define this second step, we thus concentrated on STBs for an in-depth quantitative description of dendrite morphology and dynamics. Applying these methods systematically to mutants of six known and novel AMPs, we revealed the complementary roles of these individual AMPs in defining STB properties. Our data suggest that diverse dendrite arbors result from a combination of optimal-wiring-related growth and individualized growth programs that are neuron-type specific.
The true revolution in the age of digital neuroanatomy is the ability to extensively quantify anatomical structures and thus investigate structure-function relationships in great detail. Large-scale projects were recently launched with the aim of providing infrastructure for brain simulations. These projects will increase the need for a precise understanding of brain structure, e.g., through statistical analysis and models.
From articles in this Research Topic, we identify three main themes that clearly illustrate how new quantitative approaches are helping advance our understanding of neural structure and function. First, new approaches to reconstruct neurons and circuits from empirical data are aiding neuroanatomical mapping. Second, methods are introduced to improve understanding of the underlying principles of organization. Third, by combining existing knowledge from lower levels of organization, models can be used to make testable predictions about a higher-level organization where knowledge is absent or poor. This latter approach is useful for examining statistical properties of specific network connectivity when current experimental methods have not yet been able to fully reconstruct whole circuits of more than a few hundred neurons.
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
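The idea of a fixed linear combination of delayed synaptic currents can be sketched as follows; the delays (`tau_a`, `tau_g`) and the weight `alpha` are illustrative placeholders, not the fitted constants reported in the study.

```python
import numpy as np

def weighted_sum_lfp(i_ampa, i_gaba, dt=0.1, tau_a=6.0, tau_g=0.6, alpha=1.65):
    """Illustrative LFP proxy: a fixed linear combination of population
    AMPA and GABA synaptic currents, each shifted by a short delay (ms).
    dt is the sampling step (ms); delay/weight values are placeholders."""
    sa = int(round(tau_a / dt))        # AMPA delay in samples
    sg = int(round(tau_g / dt))        # GABA delay in samples
    ampa = np.roll(i_ampa, sa)
    gaba = np.roll(i_gaba, sg)
    ampa[:sa] = 0.0                    # zero-pad the wrapped-around samples
    gaba[:sg] = 0.0
    return ampa - alpha * gaba

# Toy population currents: 100 ms at 0.1 ms resolution.
t = np.arange(0, 100, 0.1)
i_ampa = np.sin(2 * np.pi * t / 25.0)
i_gaba = 0.5 * np.sin(2 * np.pi * t / 25.0 + 0.5)
lfp = weighted_sum_lfp(i_ampa, i_gaba)
```

In a real application the two input arrays would be the summed synaptic currents onto the pyramidal LIF population, and the delays and weight would be fitted against a ground-truth simulation.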
Dendrite morphology, a neuron's anatomical fingerprint, is a neuroscientist's asset in unveiling organizational principles in the brain. However, the genetic program encoding the morphological identity of a single dendrite remains a mystery. In order to obtain a formal understanding of dendritic branching, we studied distributions of morphological parameters in a group of four individually identifiable neurons of the fly visual system. We found that parameters relating to the branching topology were similar throughout all cells. Only parameters relating to the area covered by the dendrite were cell type specific. Using these spanning areas, artificial dendrites were grown based on optimization principles minimizing the amount of wiring and maximizing synaptic democracy. Although the same branching rule was used for all cells, this yielded dendritic structures virtually indistinguishable from their real counterparts. From these principles we derived a fully-automated model-based neuron reconstruction procedure validating the artificial branching rule. In conclusion, we suggest that the genetic program implementing neuronal branching could be constant in all cells whereas the one responsible for the dendrite spanning field should be cell specific.
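A minimal sketch of an optimal-wiring growth rule of this kind: a greedy minimum-spanning-tree construction in which a balancing factor trades total wiring length against path length to the root (the "synaptic democracy" term). The function name and parameter values are illustrative, and this is one common variant of such a rule, not necessarily the exact one used in the study.

```python
import numpy as np

def grow_tree(points, root, bf=0.4):
    """Greedy tree growth balancing wiring cost against conduction cost:
    each unconnected target point attaches to the tree node minimizing
    euclidean distance + bf * resulting path length to the root.
    bf = 0 reduces to pure wiring minimization; larger bf favors direct
    paths to the root. Returns a parent index per node (root: -1)."""
    pts = np.vstack([root, points]).astype(float)
    n = len(pts)
    parent = np.full(n, -1)
    path = np.zeros(n)              # path length to root for connected nodes
    connected = [0]
    todo = set(range(1, n))
    while todo:
        best = None
        for p in todo:
            for c in connected:
                d = np.linalg.norm(pts[p] - pts[c])
                cost = d + bf * (path[c] + d)
                if best is None or cost < best[0]:
                    best = (cost, p, c, d)
        _, p, c, d = best
        parent[p] = c
        path[p] = path[c] + d
        connected.append(p)
        todo.remove(p)
    return parent

rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=(30, 2))   # hypothetical input locations
parents = grow_tree(targets, root=[50.0, 0.0], bf=0.4)
```

Sweeping `bf` morphs the resulting arbor from a pure minimum spanning tree toward a star of direct connections, which is how a single scalar can interpolate between the two wiring costs.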
Compartmental models are the theoretical tool of choice for understanding single neuron computations. However, many models are incomplete, built ad hoc, and require tuning for each novel condition, limiting their usability. Here, we present T2N, a powerful interface to control NEURON with Matlab and TREES toolbox, which supports generating models stable over a broad range of reconstructed and synthetic morphologies. We illustrate this for a novel, highly detailed active model of dentate granule cells (GCs) replicating a wide palette of experiments from various labs. By implementing known differences in ion channel composition and morphology, our model reproduces data from mouse or rat, mature or adult-born GCs as well as pharmacological interventions and epileptic conditions. This work sets a new benchmark for detailed compartmental modeling. T2N is suitable for creating robust models useful for large-scale networks that could lead to novel predictions. We discuss possible T2N applications in degeneracy studies.
Neurons collect their inputs from other neurons by sending out arborized dendritic structures. However, the relationship between the shape of dendrites and the precise organization of synaptic inputs in the neural tissue remains unclear. Inputs could be distributed in tight clusters, entirely randomly or else in a regular grid-like manner. Here, we analyze dendritic branching structures using a regularity index R, based on average nearest neighbor distances between branch and termination points, characterizing their spatial distribution. We find that the distributions of these points depend strongly on cell types, indicating possible fundamental differences in synaptic input organization. Moreover, R is independent of cell size and we find that it is only weakly correlated with other branching statistics, suggesting that it might reflect features of dendritic morphology that are not captured by commonly studied branching statistics. We then use morphological models based on optimal wiring principles to study the relation between input distributions and dendritic branching structures. Using our models, we find that branch point distributions correlate more closely with the input distributions while termination points in dendrites are generally spread out more randomly with a close to uniform distribution. We validate these model predictions with connectome data. Finally, we find that in spatial input distributions with increasing regularity, characteristic scaling relationships between branching features are altered significantly. In summary, we conclude that local statistics of input distributions and dendrite morphology depend on each other leading to potentially cell type specific branching features.
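One plausible formalization of a nearest-neighbour regularity index of this kind, in the spirit of the classic Clark-Evans index for 2D point patterns (the exact definition used in the study may differ; the function name is mine):

```python
import numpy as np

def regularity_index(points):
    """Clark-Evans-style regularity index for a 2D point pattern:
    observed mean nearest-neighbour distance divided by the value
    0.5/sqrt(density) expected for a uniform random pattern of the same
    density over the pattern's bounding box.
    R ~ 1: random; R > 1: regular/grid-like; R < 1: clustered."""
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-distances
    mean_nn = d.min(axis=1).mean()
    area = np.prod(pts.max(axis=0) - pts.min(axis=0))
    expected = 0.5 / np.sqrt(n / area)
    return mean_nn / expected

rng = np.random.default_rng(1)
r_random = regularity_index(rng.uniform(0, 1, (400, 2)))   # ~1
gx, gy = np.meshgrid(np.arange(20), np.arange(20))
r_grid = regularity_index(np.c_[gx.ravel(), gy.ravel()].astype(float))  # > 1
```

Applied to branch points or termination points of a reconstructed tree, such an index quantifies where the point pattern sits on the clustered-random-regular axis, independently of cell size.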
Dendrites form predominantly binary trees that are exquisitely embedded in the networks of the brain. While neuronal computation is known to depend on the morphology of dendrites, their underlying topological blueprint remains unknown. Here, we used a centripetal branch ordering scheme originally developed to describe river networks, the Horton-Strahler order (SO), to examine hierarchical relationships of branching statistics in reconstructed and model dendritic trees. We report on a number of universal topological relationships with SO that are true for all binary trees and distinguish those from SO-sorted metric measures that appear to be cell type-specific. The latter are therefore potential new candidates for categorising dendritic tree structures. Interestingly, we find a faithful correlation of branch diameters with centripetal branch orders, indicating a possible functional importance of SO for dendritic morphology and growth. Also, simulated local voltage responses to synaptic inputs are strongly correlated with SO. In summary, our study identifies important SO-dependent measures in dendritic morphology that are relevant for neural function while at the same time it describes other relationships that are universal for all dendrites.
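The Horton-Strahler order itself is straightforward to compute; a minimal sketch for trees given as child-list mappings (the representation is an assumption for illustration):

```python
def strahler(children, node=0):
    """Horton-Strahler order of the subtree rooted at `node`, for a tree
    given as a dict mapping each node to its list of children.
    A leaf has order 1; a node where the two highest child orders are
    equal gets that order + 1; otherwise it inherits the maximum."""
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler(children, k) for k in kids), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# Small binary dendrite: root 0 branches into 1 and 2; 1 branches into 3 and 4.
tree = {0: [1, 2], 1: [3, 4]}
```

On this example the leaves have order 1, node 1 has order 2, and the root inherits order 2 because its children's orders (2 and 1) differ.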
Branching allows neurons to make synaptic contacts with large numbers of other neurons, facilitating the high connectivity of nervous systems. Neuronal arbors have geometric properties such as branch lengths and diameters that are optimal in that they maximize signaling speeds while minimizing construction costs. In this work, we asked whether neuronal arbors have topological properties that may also optimize their growth or function. We discovered that for a wide range of invertebrate and vertebrate neurons the distributions of their subtree sizes follow power laws, implying that they are scale invariant. The power-law exponent distinguishes different neuronal cell types. Postsynaptic spines and branchlets perturb scale invariance. Through simulations, we show that the subtree-size distribution depends on the symmetry of the branching rules governing arbor growth and that optimal morphologies are scale invariant. Thus, the subtree-size distribution is a topological property that recapitulates the functional morphology of dendrites.
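Subtree sizes, the quantity whose distribution is analysed above, can be computed directly from a parent-array representation of an arbor; a minimal sketch (the representation and names are assumptions for illustration):

```python
from collections import Counter

def subtree_sizes(parent):
    """Number of nodes in the subtree below each node, for a tree given
    as a list of parent indices (root marked -1)."""
    n = len(parent)
    depth = [0] * n
    for i in range(n):                 # depth of each node by walking to root
        j = i
        while parent[j] != -1:
            j = parent[j]
            depth[i] += 1
    size = [1] * n
    for i in sorted(range(n), key=lambda k: -depth[k]):
        if parent[i] != -1:            # fold deepest nodes into their parents
            size[parent[i]] += size[i]
    return size

# Full binary tree of depth 2: root 0, children 1/2, grandchildren 3-6.
sizes = subtree_sizes([-1, 0, 0, 1, 1, 2, 2])
dist = Counter(sizes)                  # subtree-size distribution
```

Plotting `dist` on log-log axes over many nodes is the kind of analysis in which a straight line would indicate the power-law (scale-invariant) behaviour described above.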
Sholl analysis has been an important technique in dendritic anatomy for more than 60 years. The Sholl intersection profile is obtained by counting the number of dendritic branches at a given distance from the soma and is a key measure of dendritic complexity; it has applications from evaluating the changes in structure induced by pathologies to estimating the expected number of anatomical synaptic contacts. We find that the Sholl intersection profiles of most neurons can be reproduced from three basic, functional measures: the domain spanned by the dendritic arbor, the total length of the dendrite, and the angular distribution of how far dendritic segments deviate from a direct path to the soma (i.e., the root angle distribution). The first two measures are determined by axon location and hence microcircuit structure; the third arises from optimal wiring and represents a branching statistic estimating the need for conduction speed in a neuron.
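A Sholl intersection profile can be approximated from reconstructed segments by counting crossings of concentric spheres around the soma; a minimal sketch (it compares only endpoint distances, which is accurate when segments are short relative to the radii):

```python
import numpy as np

def sholl_profile(segments, soma, radii):
    """Sholl intersection counts: for each radius, the number of dendritic
    segments whose two endpoints straddle the sphere of that radius
    centred on the soma. segments: shape (n, 2, 3), start/end per segment."""
    seg = np.asarray(segments, float)
    d0 = np.linalg.norm(seg[:, 0] - soma, axis=1)   # start-point distances
    d1 = np.linalg.norm(seg[:, 1] - soma, axis=1)   # end-point distances
    lo, hi = np.minimum(d0, d1), np.maximum(d0, d1)
    return [int(((lo < r) & (hi >= r)).sum()) for r in radii]

# Toy arbor: two collinear segments along x plus one side branch.
soma = np.zeros(3)
segments = [[[0, 0, 0], [10, 0, 0]],
            [[10, 0, 0], [20, 0, 0]],
            [[10, 0, 0], [10, 10, 0]]]
counts = sholl_profile(segments, soma, radii=[5, 15])
```

The same counts could then be compared against a prediction assembled from the three measures named above (spanned domain, total length, root angle distribution).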
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex (2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with densities of neurite higher than is necessary to reliably ensure the possible implementation of any given connection. We show that potential contacts allow the particular implementation of connectivity at a subcellular level.
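An estimate with the stated dependencies (linear in both cable lengths and in spine reach, inversely proportional to the overlap volume) might look as follows; this Peters'-rule-style sketch is illustrative, with a conventional prefactor, and is not necessarily the exact equation identified in the study.

```python
def expected_contacts(l_axon, l_dend, spine_reach, overlap_volume):
    """Illustrative estimate of the expected number of potential contacts
    between an axon and a dendrite sharing an overlap volume: linear in
    both cable lengths (um) and in the maximum spine reach (um), inverse
    in the shared volume (um^3). The prefactor 2 follows common
    isotropic-geometry derivations and should be treated as illustrative."""
    return 2.0 * spine_reach * l_axon * l_dend / overlap_volume

# e.g. 4 mm of axon, 3 mm of dendrite, 2 um spine reach, a (100 um)^3 overlap
n = expected_contacts(4000.0, 3000.0, 2.0, 100.0 ** 3)
```

The linear/inverse dependencies mean, for example, that doubling either cable length doubles the expected count, while doubling the shared volume halves it, which matches the trends stated in the abstract.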
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, leaving many of their parameters poorly understood. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
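The fan-in normalisation described here is simple to add to a sparsely connected layer; a minimal NumPy sketch in which the function name, mask density and shapes are illustrative assumptions:

```python
import numpy as np

def sparse_layer(x, w, mask):
    """Forward pass of a sparsely connected layer with fan-in
    normalisation: each unit's summed input is divided by its number of
    incoming connections, so its activity reflects the *fraction* of
    active afferents rather than their absolute count."""
    w_eff = w * mask                     # zero out absent connections
    fan_in = mask.sum(axis=0)            # incoming connections per unit
    fan_in = np.maximum(fan_in, 1)       # guard units with no afferents
    return (x @ w_eff) / fan_in

rng = np.random.default_rng(0)
x = rng.random((5, 100))                 # batch of 5 input vectors
w = rng.normal(size=(100, 20))
mask = rng.random((100, 20)) < 0.1       # ~10% connectivity (illustrative)
y = sparse_layer(x, w, mask)
```

Because the division uses each unit's own fan-in, units with very different numbers of afferents operate on comparable activation scales, which is the property claimed to stabilise learning in sparse networks.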
Neuronal hyperexcitability is a feature of Alzheimer’s disease (AD). Three main mechanisms have been proposed to explain it: (i) dendritic degeneration leading to increased input resistance, (ii) ion channel changes leading to enhanced intrinsic excitability, and (iii) synaptic changes leading to excitation-inhibition (E/I) imbalance. However, the relative contribution of these mechanisms is not fully understood. Therefore, we performed biophysically realistic multi-compartmental modelling of excitability in reconstructed CA1 pyramidal neurons of wild-type and APP/PS1 mice, a well-established animal model of AD. We show that, for synaptic activation, the excitability promoting effects of dendritic degeneration are cancelled out by excitability decreasing effects of synaptic loss. We find an interesting balance of excitability regulation with enhanced degeneration in the basal dendrites of APP/PS1 cells potentially leading to increased excitation by the apical but decreased excitation by the basal Schaffer collateral pathway. Furthermore, our simulations reveal that three additional pathomechanistic scenarios can account for the experimentally observed increase in firing and bursting of CA1 pyramidal neurons in APP/PS1 mice. Scenario 1: increased excitatory burst input; scenario 2: enhanced E/I ratio; and scenario 3: alteration of intrinsic ion channels (I_AHP down-regulated; I_NaP, I_Na and I_CaT up-regulated) in addition to enhanced E/I ratio. Our work supports the hypothesis that pathological network and ion channel changes are major contributors to neuronal hyperexcitability in AD. Overall, our results are in line with the concept of multi-causality and degeneracy according to which multiple different disruptions are separately sufficient but no single disruption is necessary for neuronal hyperexcitability.
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts each time point from the previous one and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth, grounded in detailed developmental experimental data, that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.