Epilepsy can have many different causes, and its development (epileptogenesis) involves a bewildering complexity of interacting processes. Here, we present a first-of-its-kind computational model to better understand the role of neuroimmune interactions in the development of acquired epilepsy. Our model describes the interactions between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model is validated using data from animal models that mimic human epileptogenesis caused by infection, status epilepticus, and blood-brain barrier disruption. The mathematical model successfully explains characteristic features of epileptogenesis, such as its paradoxically long timescales (up to decades) despite short and transient injuries, and its dependence on injury intensity. Furthermore, stochasticity in the model captures the variability of epileptogenesis outcomes among individuals exposed to an identical injury. Notably, in line with the concept of degeneracy, our simulations reveal multiple routes towards epileptogenesis, with neuronal loss as a sufficient but not necessary component. We show that our framework allows for in silico predictions of therapeutic strategies, providing information on injury-specific therapeutic targets and optimal time windows for intervention.
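The abstract specifies only that the model is a system of nonlinear differential equations with a stochastic component; the equations themselves are not given here. The following is therefore a minimal illustrative sketch, not the authors' model: a hypothetical two-variable system in which a brief injury drives a fast inflammation variable that slowly raises excitability, integrated with the Euler-Maruyama scheme so that noise yields variable outcomes across runs.

```python
# Illustrative sketch only (NOT the published model): a hypothetical
# two-variable system where a transient injury drives neuroinflammation (I),
# which slowly raises circuit excitability (E); noise makes outcomes variable.
import numpy as np

rng = np.random.default_rng(0)

def injury(t):
    """Transient insult: a brief pulse of magnitude 1 between t=1 and t=2."""
    return 1.0 if 1.0 <= t < 2.0 else 0.0

# Hypothetical parameters: decay time constants, coupling, noise amplitude.
tau_I, tau_E, k, sigma = 5.0, 200.0, 0.8, 0.02
dt, T = 0.01, 1000.0
n = int(T / dt)

I, E = 0.0, 0.0
trace = np.empty((n, 2))
for step in range(n):
    t = step * dt
    # Euler-Maruyama: deterministic drift plus sqrt(dt)-scaled Gaussian noise.
    dI = (-I / tau_I + injury(t)) * dt
    dE = (-E / tau_E + k * I * (1.0 + E)) * dt  # nonlinear positive feedback
    E += dE + sigma * np.sqrt(dt) * rng.standard_normal()
    I += dI
    trace[step] = (I, E)

print("final excitability:", trace[-1, 1])
```

Because the excitability variable decays far more slowly than the inflammation variable (tau_E >> tau_I), the sketch reproduces the qualitative point of the abstract: a short, transient insult can have consequences that long outlast it, and the noise term makes those consequences vary between otherwise identical runs.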
The impact of GABAergic transmission on neuronal excitability depends on the Cl⁻ gradient across the membrane. However, Cl⁻ fluxes through GABA_A receptors alter the intracellular Cl⁻ concentration ([Cl⁻]i) and in turn attenuate GABAergic responses, a process termed ionic plasticity. Recently, it has been shown that coincident glutamatergic inputs significantly affect ionic plasticity. Yet how the [Cl⁻]i changes depend on the properties of glutamatergic inputs and on their spatiotemporal relation to GABAergic stimuli is unknown. To investigate this issue, we used compartmental biophysical models of Cl⁻ dynamics simulating either a simple ball-and-stick topology or a reconstructed CA3 neuron. These computational experiments demonstrated that glutamatergic co-stimulation enhances GABA_A receptor-mediated Cl⁻ influx at low initial [Cl⁻]i and attenuates or reverses the Cl⁻ efflux at high initial [Cl⁻]i. The magnitude of the glutamatergic influence on GABAergic Cl⁻ fluxes depends on the conductance, decay kinetics, and localization of the glutamatergic inputs. Surprisingly, the glutamatergic shift in GABAergic Cl⁻ fluxes is invariant to latencies between GABAergic and glutamatergic inputs over a substantial interval. In agreement with experimental data, simulations in a reconstructed CA3 pyramidal neuron with physiological patterns of correlated activity revealed that coincident glutamatergic synaptic inputs contribute significantly to the activity-dependent [Cl⁻]i changes. Whereas the influence of spatial correlation between distributed glutamatergic and GABAergic inputs was negligible, their temporal correlation played a significant role. In summary, our results demonstrate that glutamatergic co-stimulation has a substantial impact on the ionic plasticity of GABAergic responses, enhancing the attenuation of GABAergic inhibition in the mature nervous system but suppressing GABAergic [Cl⁻]i changes in the immature brain. Therefore, the glutamatergic shift in GABAergic Cl⁻ fluxes should be considered a relevant factor in short-term plasticity.
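The core mechanism can be captured in a few lines: the chloride flux through an open GABA_A conductance is proportional to the driving force V − E_Cl, where E_Cl follows from the Nernst equation, so a glutamatergic depolarization that raises V increases Cl⁻ influx. Below is a minimal single-compartment sketch, not the compartmental models used in the study; the conductance, volume, extrusion time constant, and the voltage step standing in for a glutamatergic input are all hypothetical values chosen to show the direction of the effect.

```python
# Minimal single-compartment sketch (illustrative; not the ball-and-stick or
# reconstructed CA3 models): chloride accumulation through a GABA_A
# conductance under an assumed depolarizing "glutamatergic" voltage step.
import numpy as np

F = 96485.0        # Faraday constant, C/mol
RT_F = 0.0267      # RT/F near body temperature, volts
CL_OUT = 130.0     # extracellular Cl-, mM
VOL = 1e-12        # compartment volume, litres (hypothetical)
G_GABA = 1e-9      # GABA_A conductance, siemens (hypothetical)
TAU_CL = 10.0      # passive Cl- recovery time constant, s (e.g. via KCC2)
CL_REST = 10.0     # resting intracellular Cl-, mM

def e_cl(cl_in):
    """Nernst potential of Cl- (valence -1): E = (RT/F) ln([Cl]i/[Cl]o), volts."""
    return RT_F * np.log(cl_in / CL_OUT)

dt, T = 1e-3, 20.0
cl_in = CL_REST
for step in range(int(T / dt)):
    t = step * dt
    # GABA_A open throughout; a coincident glutamatergic input is crudely
    # modelled as a 5 s depolarization from -70 mV to -50 mV.
    v = -0.050 if 5.0 <= t < 10.0 else -0.070
    i_cl = G_GABA * (v - e_cl(cl_in))        # amperes, outward-positive
    # For an anion, outward-positive current (V > E_Cl) means Cl- INflux.
    dcl = i_cl / (F * VOL) * 1e3             # mol/(L*s) -> mM/s
    cl_in += (dcl - (cl_in - CL_REST) / TAU_CL) * dt

print(f"final [Cl-]i = {cl_in:.2f} mM, E_Cl = {1e3 * e_cl(cl_in):.1f} mV")
```

As [Cl⁻]i accumulates, E_Cl becomes less negative and the GABAergic driving force shrinks, which is the ionic plasticity described above: the depolarizing glutamatergic input strengthens Cl⁻ influx and thereby accelerates the attenuation of inhibition.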
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely connected artificial neural networks, which have the potential to be more computationally efficient than their fully connected counterparts and to more closely resemble the architectures of biological systems. Here we present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron's afferent contacts by their number. We apply this dendritic normalisation to various sparsely connected feedforward network architectures, as well as to simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, an improvement over other widely used normalisations in sparse networks. The results are twofold: a practical advance in machine learning, and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
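The normalisation rule itself is stated directly in the abstract: divide each unit's afferent weights by the number of its afferent contacts. A minimal NumPy sketch of that rule follows; the random sparse mask, layer sizes, and ReLU nonlinearity are illustrative assumptions, not details from the paper.

```python
# Sketch of the dendritic normalisation described above: each output unit's
# incoming weights are divided by its number of afferent contacts (in-degree).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, sparsity = 784, 128, 0.9              # 90% of connections absent
mask = (rng.random((n_in, n_out)) > sparsity).astype(float)
weights = rng.standard_normal((n_in, n_out)) * mask

def dendritic_normalise(w, m):
    """Divide each output unit's afferent weights by its in-degree."""
    n_afferents = m.sum(axis=0, keepdims=True)     # contacts per output unit
    return w / np.maximum(n_afferents, 1.0)        # guard against zero degree

x = rng.random((32, n_in))                                    # a batch of inputs
h = np.maximum(x @ dendritic_normalise(weights, mask), 0.0)   # ReLU layer
print(h.shape)  # (32, 128)
```

Applying the division inside the forward pass keeps the raw weights and the connectivity mask separate, so pruning or regrowing contacts during connectivity tuning automatically rescales each unit's effective input.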