Frankfurt Institute for Advanced Studies (FIAS)

Refine

Author

  • Toia, Alberica (91)
  • Feofilov, Grigori A. (90)
  • Kurepin, Alexander (90)
  • Rybicki, Andrzej (90)
  • Röhrich, Dieter (90)
  • Selyuzhenkov, Ilya (90)
  • Kisel, Ivan (89)
  • Leeuwen, Marco van (89)
  • Pshenichnov, Igor A. (87)
  • Renfordt, Rainer Arno Ernst (87)

Year of publication

  • 2019 (53)
  • 2020 (50)
  • 2017 (42)
  • 2018 (41)
  • 2021 (35)
  • 2011 (32)
  • 2013 (27)
  • 2016 (26)
  • 2012 (25)
  • 2009 (24)

Document Type

  • Article (409)
  • Preprint (66)
  • Conference Proceeding (25)
  • Doctoral Thesis (17)
  • Part of Periodical (6)
  • Part of a Book (2)
  • Contribution to a Periodical (1)
  • Diploma Thesis (1)
  • Periodical (1)
  • Review (1)

Language

  • English (521)
  • German (8)

Has Fulltext

  • yes (528)
  • no (1)

Is part of the Bibliography

  • no (529)

Keywords

  • Heavy Ion Experiments (16)
  • Hadron-Hadron scattering (experiments) (10)
  • schizophrenia (6)
  • Heavy-ion collision (5)
  • visual cortex (5)
  • cognition (4)
  • gamma (4)
  • heavy-ion collisions (4)
  • mathematical modeling (4)
  • Black holes (3)

Institute

  • Frankfurt Institute for Advanced Studies (FIAS) (529)
  • Physics (307)
  • Computer Science (73)
  • Medicine (52)
  • Biosciences (17)
  • MPI for Brain Research (17)
  • Ernst Strüngmann Institute (12)
  • Biochemistry and Chemistry (8)
  • Helmholtz International Center for FAIR (7)
  • ELEMENTS (5)

529 search hits

  • 1 to 10
  • 10
  • 20
  • 50
  • 100

Dendritic normalisation improves learning in sparsely connected artificial neural networks (2020)
Bird, Alexander D. ; Cuntz, Hermann
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
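The normalisation described here is simple to state: each unit's incoming weights are divided by the number of afferents that unit actually receives. Below is a minimal NumPy sketch; the layer sizes, sparsity level and tanh read-out are illustrative assumptions, not taken from the paper's code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 784, 100                       # illustrative layer sizes
    mask = rng.random((n_in, n_out)) < 0.1       # sparse connectivity: ~10% of afferents present
    weights = rng.normal(0.0, 1.0, (n_in, n_out)) * mask

    # Dendritic normalisation as described in the abstract: divide the strength
    # of each unit's afferents by how many afferents it receives.
    n_afferents = mask.sum(axis=0)                        # afferent count per output unit
    norm_weights = weights / np.maximum(n_afferents, 1)   # guard against units with no afferents

    x = rng.random(n_in)                          # an arbitrary input pattern
    activation = np.tanh(x @ norm_weights)        # response of each sparsely connected unit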
A simple model for detailed visual cortex maps predicts fixed hypercolumn sizes (2022)
Weigand, Marvin ; Cuntz, Hermann
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results would suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model is able to reproduce a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
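The placement step lends itself to a short sketch: embed the rows of a connectivity matrix with t-SNE so that neurons with similar connectivity land close together. The snippet below uses scikit-learn on a random toy matrix and only illustrates the idea; it is not the authors' MAPStoolbox implementation.

    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(1)
    n_neurons = 500
    connectivity = (rng.random((n_neurons, n_neurons)) < 0.05).astype(float)  # toy circuit

    # Two-dimensional positions such that neurons with similar rows of the
    # connectivity matrix end up as neighbours on the simulated cortical sheet.
    positions = TSNE(n_components=2, perplexity=30, init="pca",
                     random_state=0).fit_transform(connectivity)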
Dendritic normalisation improves learning in sparsely connected artificial neural networks (2021)
Bird, Alex D. ; Jedlička, Peter ; Cuntz, Hermann
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
Skewed distribution of spines is independent of presynaptic transmitter release and synaptic plasticity and emerges early during adult neurogenesis (2023)
Rößler, Nina ; Jungenitz, Tassilo ; Sigler, Albrecht ; Bird, Alexander D ; Mittag, Martin ; Rhee, Jeong Seop ; Deller, Thomas ; Cuntz, Hermann ; Brose, Nils ; Schwarzacher, Stephan ; Jedlička, Peter
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither induction of extrinsic synaptic plasticity nor blockage of presynaptic activity degrades the lognormal-like distribution, although both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient to generate this skewed distribution, while a combination of intrinsic and extrinsic synaptic plasticity is needed to maintain it.
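As a generic illustration of why intrinsic plasticity alone can produce such a skewed distribution (this is textbook multiplicative-noise reasoning, not the authors' model): if every spine is repeatedly rescaled by small random factors, the population of sizes drifts towards a lognormal-like shape with many small and few large spines.

    import numpy as np

    rng = np.random.default_rng(2)
    n_spines, n_steps = 10_000, 1_000
    sizes = np.full(n_spines, 0.1)          # initial spine head sizes (arbitrary units)

    # Purely intrinsic, activity-independent dynamics: each step multiplies every
    # spine by a small random factor; multiplicative noise alone skews the distribution.
    for _ in range(n_steps):
        sizes *= np.exp(rng.normal(0.0, 0.01, n_spines))

    print(sizes.mean(), np.median(sizes))   # mean exceeds median: right-skewed, lognormal-like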
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction (2020)
Castro, André Ferreira ; Baltruschat, Lothar Gunnar ; Stürner, Tomke ; Bahrami, Amirhoushang ; Jedlička, Peter ; Tavosanis, Gaia ; Cuntz, Hermann
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of stochastic growth and retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction (2020)
Castro, André Ferreira ; Baltruschat, Lothar Gunnar ; Stürner, Tomke ; Bahrami, Amirhoushang ; Jedlička, Peter ; Tavosanis, Gaia ; Cuntz, Hermann
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
A developmental stretch-and-fill process that optimises dendritic wiring (2020)
Baltruschat, Lothar Gunnar ; Tavosanis, Gaia ; Cuntz, Hermann
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
A general principle of dendritic constancy: a neuron’s size- and shape-invariant excitability (2019)
Cuntz, Hermann ; Bird, Alexander D ; Beining, Marcel ; Schneider, Marius ; Mediavilla Santos, Laura ; Hoffmann, Felix Z. ; Deller, Thomas ; Jedlička, Peter
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable, as seen in their voltage responses to current injections in the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results remain true for the entire range of possible dendritic morphologies irrespective of any particular arborisation complexity. Also, spiking models result in morphology-invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. Rather, it provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
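The length invariance can be sketched with standard passive cable theory (generic textbook notation, assumed here rather than taken from the paper). At steady state, with synaptic current distributed uniformly over the membrane,

    \lambda^2 \frac{d^2 V}{dx^2} - V + \frac{j_{\mathrm{syn}}}{g_m} = 0, \qquad V'(0) = V'(L) = 0,

where j_syn is the synaptic current per unit membrane area and g_m the leak conductance per unit area. The constant profile V(x) = j_syn / g_m satisfies both the equation and the sealed-end boundary conditions, so the response does not depend on the cable length L. If synapses are instead counted per unit dendritic length, with current i_ℓ per length, then j_syn = i_ℓ / (π d) and V = i_ℓ / (π d g_m): only the diameter d and the intrinsic membrane conductivity remain, matching the statement above.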
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex (2019)
Bird, Alexander D. ; Deters, Lisa Hilde ; Cuntz, Hermann
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with neurite densities higher than necessary to reliably ensure that any given connection can be implemented. We show that these potential contacts allow connectivity to be implemented at specific subcellular locations.
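The dependencies named here match the classical potential-synapse estimate; one common form (stated as an assumption for illustration, since the paper's exact expression may differ by a geometric prefactor) is

    N_{\mathrm{pot}} \approx \frac{2\, s\, L_{\mathrm{axon}}\, L_{\mathrm{dend}}}{V},

with s the maximum spine length and L_axon, L_dend the axonal and dendritic cable lengths inside the shared volume V: linear in cable length and spine length, and inversely proportional to the overlapping volume, exactly as stated in the abstract.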
Visual exposure enhances stimulus encoding and persistence in primary cortex (2021)
Lazar, Andreea ; Lewis, Christopher ; Fries, Pascal ; Singer, Wolf ; Nikolić, Danko
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
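The analysis pipeline mentioned (low-dimensional projection followed by a simple classifier) can be mocked up in a few lines; the data below are random placeholders rather than the recorded responses, and serve only to show the shape of such an analysis.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_trials, n_neurons, n_stimuli = 200, 60, 4
    labels = rng.integers(n_stimuli, size=n_trials)

    # Placeholder "persistent" population responses: one firing-rate vector per
    # trial, random baseline plus a weak stimulus-specific offset.
    responses = rng.normal(size=(n_trials, n_neurons)) + 0.5 * labels[:, None]

    # Project onto a few principal components, then decode stimulus identity.
    low_dim = PCA(n_components=3).fit_transform(responses)
    scores = cross_val_score(LogisticRegression(max_iter=1000), low_dim, labels, cv=5)
    print("decoding accuracy:", scores.mean())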