Frankfurt Institute for Advanced Studies (FIAS)
Hypofunction of the N-methyl-D-aspartate receptor (NMDAR) has been implicated as a possible mechanism underlying cognitive deficits and aberrant neuronal dynamics in schizophrenia. To test this hypothesis, we first administered a sub-anaesthetic dose of S-ketamine (0.006 mg/kg/min) or saline in a single-blind crossover design in 14 participants while magnetoencephalographic data were recorded during a visual task. Magnetoencephalographic data were additionally obtained in a sample of unmedicated first-episode psychosis patients (n = 10) and in patients with chronic schizophrenia (n = 16) to allow neuronal dynamics in clinical populations to be compared with those under acute NMDAR hypofunction. Magnetoencephalographic data were analysed at source level in the 1–90 Hz frequency range in occipital and thalamic regions of interest. Directed functional connectivity was analysed using Granger causality, and feedback and feedforward activity was investigated using a directed asymmetry index. Psychopathology was assessed with the Positive and Negative Syndrome Scale. Acute ketamine administration in healthy volunteers led to similar effects on cognition and psychopathology as observed in first-episode and chronic schizophrenia patients. However, the effects of ketamine on high-frequency oscillations and their connectivity profile were not consistent with these observations. Ketamine increased the amplitude and frequency of gamma-band activity (63–80 Hz) in occipital regions and upregulated low-frequency (5–28 Hz) activity. Moreover, ketamine disrupted feedforward and feedback signalling at high and low frequencies, leading to hypo- and hyper-connectivity in thalamo-cortical networks. In contrast, first-episode and chronic schizophrenia patients showed a different pattern of magnetoencephalographic activity, characterized by decreased task-induced high-gamma-band oscillations and predominantly increased feedforward/feedback-mediated Granger causality connectivity.
Accordingly, the current data have implications for theories of cognitive dysfunctions and circuit impairments in the disorder, suggesting that acute NMDAR hypofunction does not recreate alterations in neural oscillations during visual processing observed in schizophrenia.
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different cell types in motor cortex due to transcranial magnetic stimulation. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict detailed neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to predict activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also predicts differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
Background: Recent epidemics have prompted global discussions on revamping epidemic control and prevention approaches. A general consensus is that all sources of data should be embraced to improve epidemic preparedness. Because disease transmission is inherently governed by individual-level responses, pathogen dynamics within infected hosts hold high potential to inform population-level phenomena. We propose a multiscale approach in which individual-level dynamics reproduce population-level observations.
Methods: Using experimental data, we formulated mathematical models of pathogen infection dynamics, from which we mechanistically simulated the pathogen's transmission parameters. The models were then embedded in our implementation of an age-specific contact network that can express individual differences relevant to the transmission processes. This approach is illustrated with the example of Ebola virus (EBOV).
Results: The results showed that a within-host infection model can reproduce EBOV's transmission parameters obtained from population data. At the same time, the population's age structure, contact distribution, and contact patterns can be expressed using a network-generating algorithm. This framework opens a vast opportunity to investigate the individual roles of factors involved in epidemic processes. Estimating EBOV's reproduction number revealed a heterogeneous pattern among age groups, prompting caution about estimates unadjusted for contact patterns. Assessments of mass vaccination strategies showed that vaccination conducted in a time window from five months before to one week after the start of an epidemic appeared to strongly reduce epidemic size. Notably, compared to a non-intervention scenario, a low critical vaccination coverage of 33% cannot ensure epidemic extinction, but could reduce the number of cases by ten to a hundred times as well as lessen the case-fatality rate.
Conclusions: Experimental data on within-host infection can capture key transmission parameters of a pathogen in advance; applications of this approach will give us more time to prepare for potential epidemics. The population of interest in epidemic assessments can be modelled with an age-specific contact network without an exhaustive amount of data. Further assessments and adaptations for different pathogens and scenarios, to explore multilevel aspects of infectious disease epidemics, are underway.
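The 33% critical-coverage figure above can be contrasted with the textbook homogeneous-mixing benchmark, under which the critical coverage is V_c = 1 - 1/R0. A minimal sketch of that baseline (this is not the paper's age-structured network model, and the R0 values are generic illustrations, not estimates from this study):

```python
# Homogeneous-mixing herd-immunity benchmark, V_c = 1 - 1/R0.
# Textbook baseline only; the R0 values are illustrative placeholders,
# not estimates from the study summarized above.

def critical_coverage(r0: float) -> float:
    """Critical vaccination coverage under homogeneous mixing."""
    if r0 <= 1.0:
        return 0.0  # the epidemic cannot take off even without vaccination
    return 1.0 - 1.0 / r0

for r0 in (1.5, 2.0, 2.5):
    print(f"R0 = {r0}: V_c = {critical_coverage(r0):.0%}")
```

Under homogeneous mixing, R0 = 1.5 already implies a threshold of about 33%; the abstract's finding that 33% coverage does not guarantee extinction illustrates how contact heterogeneity shifts such estimates.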
Transmission of temporally correlated spike trains through synapses with short-term depression
(2018)
Short-term synaptic depression, caused by depletion of releasable neurotransmitter, modulates the strength of neuronal connections in a history-dependent manner. Quantifying the statistics of synaptic transmission requires stochastic models that link probabilistic neurotransmitter release with presynaptic spike-train statistics. Common approaches are to model the presynaptic spike train as either regular or a memory-less Poisson process; few analytical results are available that describe depressing synapses when the afferent spike train has more complex, temporally correlated statistics such as bursts. Here we present a series of analytical results—from vesicle release-site occupancy statistics, via neurotransmitter release, to the post-synaptic voltage mean and variance—for depressing synapses driven by correlated presynaptic spike trains. The class of presynaptic drive considered is that fully characterised by the inter-spike-interval distribution and encompasses a broad range of models used for neuronal circuit and network analyses, such as integrate-and-fire models with a complete post-spike reset and receiving sufficiently short-time correlated drive. We further demonstrate that the derived post-synaptic voltage mean and variance allow for a simple and accurate approximation of the firing rate of the post-synaptic neuron, using the exponential integrate-and-fire model as an example. These results extend the level of biological detail included in models of synaptic transmission and will allow for the incorporation of more complex and physiologically relevant firing patterns into future studies of neuronal networks.
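The last step described above, mapping a post-synaptic voltage mean and noise amplitude to an output firing rate via the exponential integrate-and-fire (EIF) model, can be illustrated with a direct simulation. This is a minimal sketch with generic textbook parameters (not the paper's values or its analytical approximation), driving an EIF neuron with Gaussian white noise of mean `mu` and amplitude `sigma`:

```python
import numpy as np

rng = np.random.default_rng(1)

def eif_rate(mu, sigma, tau=0.02, E_L=-0.065, V_T=-0.050, delta_T=0.002,
             V_reset=-0.065, V_spike=0.0, dt=1e-4, T=5.0):
    """Firing rate (Hz) of an exponential integrate-and-fire neuron
    driven by white noise with mean mu and amplitude sigma (volts)."""
    v, spikes = E_L, 0
    n = int(T / dt)
    noise_scale = sigma * np.sqrt(2.0 * dt / tau)
    xi = rng.standard_normal(n)
    for i in range(n):
        # Euler-Maruyama step: leak, exponential spike-initiation term, drive
        drift = (-(v - E_L) + delta_T * np.exp((v - V_T) / delta_T) + mu) / tau
        v += drift * dt + noise_scale * xi[i]
        if v >= V_spike:      # spike detected: register and reset the membrane
            spikes += 1
            v = V_reset
    return spikes / T

r_lo = eif_rate(0.010, 0.004)  # mean drive below threshold: sparse firing
r_hi = eif_rate(0.016, 0.004)  # stronger mean drive: higher firing rate
print(r_lo, r_hi)
```

The paper's point is that such rates can be approximated analytically from the voltage mean and variance alone; the simulation above only shows the input-output relationship being approximated.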
Recent experiments have demonstrated that visual cortex engages in spatio-temporal sequence learning and prediction. The cellular basis of this learning remains unclear, however. Here we present a spiking neural network model that explains a recent study on sequence learning in the primary visual cortex of rats. The model posits that the sequence learning and prediction abilities of cortical circuits result from the interaction of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. It also reproduces changes in stimulus-evoked multi-unit activity during learning. Furthermore, it makes precise predictions regarding how training shapes network connectivity to establish its prediction ability. Finally, it predicts that the adapted connectivity gives rise to systematic changes in spontaneous network activity. Taken together, our model establishes a new conceptual bridge between the structure and function of cortical circuits in the context of sequence learning and prediction.
The transverse momentum distributions of the strange and double-strange hyperon resonances (Σ(1385)±,Ξ(1530)0) produced in p–Pb collisions at √sNN = 5.02 TeV were measured in the rapidity range −0.5<yCMS<0 for event classes corresponding to different charged-particle multiplicity densities, ⟨dNch/dηlab⟩. The mean transverse momentum values are presented as a function of ⟨dNch/dηlab⟩, as well as a function of the particle masses and compared with previous results on hyperon production. The integrated yield ratios of excited to ground-state hyperons are constant as a function of ⟨dNch/dηlab⟩. The equivalent ratios to pions exhibit an increase with ⟨dNch/dηlab⟩, depending on their strangeness content.
A key hallmark of visual perceptual awareness is robustness to instabilities arising from unnoticeable eye and eyelid movements. In previous human intracranial (iEEG) work (Golan et al., 2016) we found that excitatory broadband high-frequency activity transients, driven by eye blinks, are suppressed in higher-level but not early visual cortex. Here, we utilized the broad anatomical coverage of iEEG recordings in 12 eye-tracked neurosurgical patients to test whether a similar stabilizing mechanism operates following small saccades. We compared saccades (1.3°−3.7°) initiated during inspection of large individual visual objects with similarly-sized external stimulus displacements. Early visual cortex sites responded with positive transients to both conditions. In contrast, in both dorsal and ventral higher-level sites the response to saccades (but not to external displacements) was suppressed. These findings indicate that early visual cortex is highly unstable compared to higher-level visual regions which apparently constitute the main target of stabilizing extra-retinal oculomotor influences.
The charged particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, it has proven to be highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods. For these methods, reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
Working memory and conscious perception are thought to share similar brain mechanisms, yet recent reports of non-conscious working memory challenge this view. Combining visual masking with magnetoencephalography, we investigate the reality of non-conscious working memory and dissect its neural mechanisms. In a spatial delayed-response task, participants reported the location of a subjectively unseen target above chance-level after several seconds. Conscious perception and conscious working memory were characterized by similar signatures: a sustained desynchronization in the alpha/beta band over frontal cortex, and a decodable representation of target location in posterior sensors. During non-conscious working memory, such activity vanished. Our findings contradict models that identify working memory with sustained neural firing, but are compatible with recent proposals of ‘activity-silent’ working memory. We present a theoretical framework and simulations showing how slowly decaying synaptic changes allow cell assemblies to go dormant during the delay, yet be retrieved above chance-level after several seconds.
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics. Such an EoS-meter is model-independent and insensitive to other simulation inputs, including the initial conditions for hydrodynamic simulations.
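The idea of reading an EoS label off particle spectra treated as (pT, azimuthal angle) images can be sketched as a convolutional forward pass. Everything below is a placeholder: the grid size, the random filter, and the two-class readout are illustrative assumptions, not the trained network from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid-mode 2-D cross-correlation of one image with one filter
    (no padding, stride 1); a stand-in for a CNN's first layer."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# Toy "spectra image": particle density on a (pT, azimuthal angle) grid.
spectra = rng.random((16, 48))
feat = np.maximum(conv2d(spectra, rng.standard_normal((3, 3))), 0.0)  # conv + ReLU
logits = np.array([feat.mean(), -feat.mean()])   # placeholder readout head
probs = np.exp(logits) / np.exp(logits).sum()    # softmax over two EoS classes
print(feat.shape, probs)
```

In the actual study, a deep network with trained filters plays the role of the random filter and readout here; the sketch only shows how a spectra-as-image input flows to a class probability.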
We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimulus categories (natural and urban scenes, web sites, fractals, pink noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
Current theories of schizophrenia (ScZ) posit that the symptoms and cognitive dysfunctions arise from a dysconnection syndrome. However, studies that have examined this hypothesis with physiological data at realistic time scales are so far scarce. The current study employed a state-of-the-art approach using magnetoencephalography (MEG) to test alterations in large-scale phase synchronization in a sample of n = 16 chronic ScZ patients (10 males) and n = 19 healthy participants (10 males) during a perceptual closure task. We identified large-scale networks from source-reconstructed MEG data using data-driven analyses of neuronal synchronization. Oscillation amplitudes and interareal phase-synchronization in the 3–120 Hz frequency range were estimated for 400 cortical parcels and correlated with clinical symptoms and neuropsychological scores. ScZ patients were characterized by a reduction in γ-band (30–120 Hz) oscillation amplitudes that was accompanied by a pronounced deficit in large-scale synchronization at γ-band frequencies. Synchronization was reduced within visual regions as well as between visual and frontal cortex, and the reduction of synchronization correlated with elevated clinical disorganization. Accordingly, these data highlight that ScZ is associated with a profound disruption of transient synchronization, providing critical support for the notion that a core aspect of the pathophysiology arises from an impairment in the coordination of distributed neural activity.
The goal of heavy ion reactions at low beam energies is to explore the QCD phase diagram at high net baryon chemical potential. To relate experimental observations with a first order phase transition or a critical endpoint, dynamical approaches for the theoretical description have to be developed. In this summary of the corresponding plenary talk, the status of the dynamical modeling including the most recent advances is presented. The remaining challenges are highlighted and promising experimental measurements are pointed out.
The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and 1H and 13C chemical shifts. Statistics of resonances from regularly Watson–Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H10/C10 chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with a size between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stem-loop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding 13C resonances, which provides an important step toward automated structure determination of RNAs.
We present results on transverse momentum (pT) and rapidity (y) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/ψ and ψ(2S) at forward rapidity (2.5 < y < 4) as well as ψ(2S)-to-J/ψ cross section ratios. These quantities are measured in pp collisions at center of mass energies √s = 5.02 and 13 TeV with the ALICE detector. Both charmonium states are reconstructed in the dimuon decay channel, using the muon spectrometer. A comprehensive comparison to inclusive charmonium cross sections measured at √s = 2.76, 7 and 8 TeV is performed. A comparison to non-relativistic quantum chromodynamics and fixed-order next-to-leading logarithm calculations, which describe prompt and non-prompt charmonium production respectively, is also presented. A good description of the data is obtained over the full pT range, provided that both contributions are summed. In particular, it is found that for pT > 15 GeV/c the non-prompt contribution reaches up to 50% of the total charmonium yield.
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network’s changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network’s sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects.
This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
Two theories address the origin of repeating patterns, such as hair follicles, limb digits, and intestinal villi, during development. The Turing reaction–diffusion system posits that interacting diffusible signals produced by static cells first define a prepattern that then induces cell rearrangements to produce an anatomical structure. The second theory, that of mesenchymal self-organisation, proposes that mobile cells can form periodic patterns of cell aggregates directly, without reference to any prepattern. Early hair follicle development is characterised by the rapid appearance of periodic arrangements of altered gene expression in the epidermis and prominent clustering of the adjacent dermal mesenchymal cells. We assess the contributions and interplay between reaction–diffusion and mesenchymal self-organisation processes in hair follicle patterning, identifying a network of fibroblast growth factor (FGF), wingless-related integration site (WNT), and bone morphogenetic protein (BMP) signalling interactions capable of spontaneously producing a periodic pattern. Using time-lapse imaging, we find that mesenchymal cell condensation at hair follicles is locally directed by an epidermal prepattern. However, imposing this prepattern’s condition of high FGF and low BMP activity across the entire skin reveals a latent dermal capacity to undergo spatially patterned self-organisation in the absence of epithelial direction. This mesenchymal self-organisation relies on restricted transforming growth factor (TGF) β signalling, which serves to drive chemotactic mesenchymal patterning when reaction–diffusion patterning is suppressed, but, in normal conditions, facilitates cell movement to locally prepatterned sources of FGF. This work illustrates a hierarchy of periodic patterning modes operating in organogenesis.
Dendrites form predominantly binary trees that are exquisitely embedded in the networks of the brain. While neuronal computation is known to depend on the morphology of dendrites, their underlying topological blueprint remains unknown. Here, we used a centripetal branch ordering scheme originally developed to describe river networks, the Horton-Strahler order (SO), to examine hierarchical relationships of branching statistics in reconstructed and model dendritic trees. We report on a number of universal topological relationships with SO that are true for all binary trees and distinguish those from SO-sorted metric measures that appear to be cell type-specific. The latter are therefore potential new candidates for categorising dendritic tree structures. Interestingly, we find a faithful correlation of branch diameters with centripetal branch orders, indicating a possible functional importance of SO for dendritic morphology and growth. Also, simulated local voltage responses to synaptic inputs are strongly correlated with SO. In summary, our study identifies important SO-dependent measures in dendritic morphology that are relevant for neural function while at the same time it describes other relationships that are universal for all dendrites.
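The centripetal ordering scheme named above has a compact recursive definition on binary trees: terminal tips get order 1, and a branch point takes the larger of its two child orders, incremented by one when the two are equal. A minimal sketch (the nested-tuple tree encoding is an assumption for illustration):

```python
def strahler(node):
    """Horton-Strahler order of a binary tree.

    A node is either None (a terminal tip, order 1) or a pair
    (left, right) of subtrees. At a branch point, equal child orders
    k merge into order k + 1; unequal ones keep the larger order."""
    if node is None:
        return 1
    left, right = node
    a, b = strahler(left), strahler(right)
    return a + 1 if a == b else max(a, b)

# Two tips merge into an order-2 branch, which then meets a single tip:
tree = ((None, None), None)
print(strahler(tree))  # 2
```

A fully balanced tree of 2^k tips has order k + 1, while a purely elongated "herringbone" tree of any size stays at order 2; this is the hierarchy that SO-sorted branch statistics capture.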
I summarize recent developments in the hard-thermal-loop approach to QCD. I first discuss a finite-temperature and -density calculation of QCD thermodynamics at NNLO from the hard-thermal-loop perturbation theory. I then discuss a generalization of the hard-thermal-loop framework to the magnetic scale g2T, from which a novel non-Abelian massless mode is uncovered.
The pA system is typically regarded in heavy ion collisions as a “cold” nuclear matter environment and thought to isolate and identify initial state effects due to the presence of multiple nucleons in the incoming nucleus. Moreover, pA collisions bridge the gap between peripheral AA collisions and the pp baseline to create a more complete understanding of underlying production mechanisms and how they evolve with multiplicity. Recent measurements at both RHIC and the LHC provide an indication, however, that the “cold” nuclear matter picture may be somewhat naïve.
Recent LHC results from the 2013 p–Pb run at √sNN = 5.02 TeV will be discussed.
Overrepresentation of bidirectional connections in local cortical networks has been repeatedly reported and is a focus of the ongoing discussion of nonrandom connectivity. Here we show in a brief mathematical analysis that in a network in which connection probabilities are symmetric in pairs, Pij = Pji, the occurrences of bidirectional connections and nonrandom structures are inherently linked; an overabundance of reciprocally connected pairs emerges necessarily when some pairs of neurons are more likely to be connected than others. Our numerical results imply that such overrepresentation can also be sustained when connection probabilities are only approximately symmetric.
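The claim above, that symmetric pair probabilities P_ij = P_ji force an overabundance of reciprocal connections whenever the probabilities vary across pairs, follows from E[p^2] > E[p]^2 and is easy to check numerically. The Beta distribution below is an arbitrary choice for generating heterogeneous probabilities, not one taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric pair-wise connection probabilities P_ij = P_ji, drawn
# heterogeneously; the two directions of each pair are then realized
# independently with the same probability.
n_pairs = 200_000
p = rng.beta(2, 8, n_pairs)       # heterogeneous probabilities, mean 0.2
a = rng.random(n_pairs) < p       # connection i -> j
b = rng.random(n_pairs) < p       # connection j -> i (same p: symmetry)

p_bar = a.mean()                  # overall connection probability
recip = (a & b).mean()            # observed bidirectional fraction
print(f"p_bar^2 = {p_bar**2:.4f}, reciprocal fraction = {recip:.4f}")
```

The reciprocal fraction estimates E[p^2] = Var(p) + E[p]^2, which exceeds the Erdős–Rényi expectation p̄² exactly when Var(p) > 0, i.e. when some pairs are more likely to be connected than others.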
Criticality meets learning: criticality signatures in a self-organizing recurrent neural network
(2017)
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
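The avalanche-based criticality signatures mentioned above start from a simple measurement: binned network activity is segmented into maximal runs of non-silent bins, and each run's total event count is one avalanche size. A minimal sketch of that segmentation (the input series is a made-up example, not SORN output):

```python
def avalanche_sizes(activity):
    """Split a spike-count time series into avalanches (maximal runs of
    non-silent bins) and return the total number of events in each run."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a          # extend the ongoing avalanche
        elif current > 0:
            sizes.append(current) # a silent bin ends the avalanche
            current = 0
    if current > 0:               # close an avalanche running to the end
        sizes.append(current)
    return sizes

print(avalanche_sizes([0, 2, 1, 0, 0, 3, 0, 1, 1]))  # [3, 3, 2]
```

Criticality analyses then ask whether the histogram of these sizes follows a power law; that distribution-fitting step is omitted here.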
BACKGROUND: The analysis of microarray time series promises a deeper insight into the dynamics of the cellular response following stimulation. A common observation in this type of data is that some genes respond with quick, transient dynamics, while other genes change their expression slowly over time. The existing methods for detecting significant expression dynamics often fail when the expression dynamics show a large heterogeneity. Moreover, these methods often cannot cope with irregular and sparse measurements.
RESULTS: The method proposed here is specifically designed for the analysis of perturbation responses. It combines different scores to capture fast and transient dynamics as well as slow expression changes, and performs well in the presence of low replicate numbers and irregular sampling times. The results are given in the form of tables including links to figures showing the expression dynamics of the respective transcript. These make it possible to quickly recognise the relevance of a detection, to identify possible false positives and to discriminate early and late changes in gene expression. An extension of the method allows the analysis of the expression dynamics of functional groups of genes, providing a quick overview of the cellular response. The performance of this package was tested on microarray data derived from lung cancer cells stimulated with epidermal growth factor (EGF).
CONCLUSION: Here we describe a new, efficient method for the analysis of sparse and heterogeneous time course data with high detection sensitivity and transparency. It is implemented as the R package TTCA (transcript time course analysis) and can be installed from the Comprehensive R Archive Network, CRAN. The source code is provided in Additional file 1.
Study of hard core repulsive interactions in an hadronic gas from a comparison with lattice QCD
(2016)
We study the influence of hard-core repulsive interactions within the Hadron-Resonance Gas model in comparison to first-principles calculations performed on the lattice. We check the effect of a bag-like parametrization of the particle eigenvolume on flavor correlators, looking for an extension of the agreement with lattice simulations up to higher temperatures, as was previously pointed out in an analysis of hadron yields measured by the ALICE experiment. Hints of a flavor-dependent eigenvolume are found.
The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on measurements of very rare probes at interaction rates up to 10 MHz with a data flow of up to 1 TB/s. The beam will be delivered as a continuous stream of particles without bunch structure. This requires full online event reconstruction and selection not only in space, but also in time, so-called 4D event building and selection.
The FLES (First-Level Event Selection) reconstruction and selection package consists of several modules: track finding, track fitting, short-lived particle finding, event building and event selection. A time-slice is reconstructed in parallel across cores within the same CPU, thus minimizing the communication between CPUs. After all tracks are found and fitted in 4D, they are collected into clusters of tracks originating from common primary vertices, which are then fitted, thus identifying 4D interaction points registered within the time-slice. Secondary tracks are associated with primary vertices according to their estimated production time. After that, short-lived particles are found and the full event building process is finished. The last stage of the FLES package is the selection of events according to the requested trigger signatures.
Neural oscillations at low- and high-frequency ranges are a fundamental feature of large-scale networks. Recent evidence has indicated that schizophrenia is associated with abnormal amplitude and synchrony of oscillatory activity, in particular, at high (beta/gamma) frequencies. These abnormalities are observed during task-related and spontaneous neuronal activity which may be important for understanding the pathophysiology of the syndrome. In this paper, we shall review the current evidence for impaired beta/gamma-band oscillations and their involvement in cognitive functions and certain symptoms of the disorder. In the first part, we will provide an update on neural oscillations during normal brain functions and discuss underlying mechanisms. This will be followed by a review of studies that have examined high-frequency oscillatory activity in schizophrenia and discuss evidence that relates abnormalities of oscillatory activity to disturbed excitatory/inhibitory (E/I) balance. Finally, we shall identify critical issues for future research in this area.
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree, and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells we show that denervation-induced dendritic atrophy also subserves homeostatic functions: By shortening their dendritic tree, granule cells compensate for the loss of inputs by a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any given dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
Abstract: Integration of synaptic currents across an extensive dendritic tree is a prerequisite for computation in the brain. Dendritic tapering away from the soma has been suggested to both equalise contributions from synapses at different locations and maximise the current transfer to the soma. To find out how this is achieved precisely, an analytical solution for the current transfer in dendrites with arbitrary taper is required. We derive here an asymptotic approximation that accurately matches results from numerical simulations. From this we then determine the diameter profile that maximises the current transfer to the soma. We find a simple quadratic form that matches diameters obtained experimentally, indicating a fundamental architectural principle of the brain that links dendritic diameters to signal transmission.
Author Summary: Neurons take a great variety of shapes that allow them to perform their different computational roles across the brain. The most distinctive visible feature of many neurons is the extensively branched network of cable-like projections that make up their dendritic tree. A neuron receives current-inducing synaptic contacts from other cells across its dendritic tree. As in the case of botanical trees, dendritic trees are strongly tapered towards their tips. This tapering has previously been shown to offer a number of advantages over a constant width, both in terms of reduced energy requirements and the robust integration of inputs at different locations. However, in order to predict the computations that neurons perform, analytical solutions for the flow of input currents tend to assume constant dendritic diameters. Here we introduce an asymptotic approximation that accurately models the current transfer in dendritic trees with arbitrary, continuously changing, diameters. When we then determine the diameter profiles that maximise current transfer towards the cell body we find diameters similar to those observed in real neurons. We conclude that the tapering in dendritic trees to optimise signal transmission is a fundamental architectural principle of the brain.
Abstract: We consider the phase structure of hadronic and hadron-quark models at finite temperature and density. The basis for the hadronic part is an extension of a flavor-SU(3) model. We study the effect on the phase diagram of adding additional hadronic resonances to the model. With the resulting equation of state we investigate heavy-ion collisions using hydrodynamical simulations. In a combined approach we include quarks and the Polyakov loop field in the calculation and study chiral symmetry restoration and the deconfinement transition.
The true revolution in the age of digital neuroanatomy is the ability to extensively quantify anatomical structures and thus investigate structure-function relationships in great detail. Large-scale projects were recently launched with the aim of providing infrastructure for brain simulations. These projects will increase the need for a precise understanding of brain structure, e.g., through statistical analysis and models.
From articles in this Research Topic, we identify three main themes that clearly illustrate how new quantitative approaches are helping advance our understanding of neural structure and function. First, new approaches to reconstruct neurons and circuits from empirical data are aiding neuroanatomical mapping. Second, methods are introduced to improve understanding of the underlying principles of organization. Third, by combining existing knowledge from lower levels of organization, models can be used to make testable predictions about a higher-level organization where knowledge is absent or poor. This latter approach is useful for examining statistical properties of specific network connectivity when current experimental methods have not yet been able to fully reconstruct whole circuits of more than a few hundred neurons.
Abstract: Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both structure and function of cortical synaptic wiring.
Author Summary: The problem of how the brain wires itself up has important implications for the understanding of both brain development and cognition. The microscopic structure of the circuits of the adult neocortex, often considered the seat of our highest cognitive abilities, is still poorly understood. Recent experiments have provided a first set of findings on the structural features of these circuits, but it is unknown how these features come about and how they are maintained. Here we present a neural network model that shows how these features might come about. It gives rise to numerous connectivity features, which have been observed in experiments, but never before simultaneously produced by a single model. Our model explains the development of these structural features as the result of a process of self-organization. The results imply that only a few simple mechanisms and constraints are required to produce, at least to the first approximation, various characteristic features of a typical fragment of brain microcircuitry. In the absence of any of these mechanisms, simultaneous production of all desired features fails, suggesting a minimal set of necessary mechanisms for their production.
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
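The combination of plasticity rules at the heart of the SORN can be sketched in a few lines. This is a simplified toy version under stated assumptions (binary units, arbitrary rates and connectivity; names like `sorn_step` are illustrative), not the published model: additive STDP on existing synapses, normalization of each unit's incoming weights, and homeostatic intrinsic plasticity of the thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                                # toy network size
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse excitatory weights
np.fill_diagonal(W, 0.0)
T = rng.random(N) * 0.5                               # per-unit firing thresholds
eta_stdp, eta_ip, h_target = 0.004, 0.01, 0.1         # learning rates, target rate

def sorn_step(x_prev, x_now, W, T):
    """One plasticity update combining simplified versions of the SORN's rules."""
    mask = W > 0
    # STDP: strengthen w_ij when pre (j) fired before post (i), weaken the reverse
    W += eta_stdp * (np.outer(x_now, x_prev) - np.outer(x_prev, x_now)) * mask
    W[W < 0.0] = 0.0
    # Synaptic normalization: incoming weights of each unit sum to one
    row = W.sum(axis=1, keepdims=True)
    W /= np.where(row > 0, row, 1.0)
    # Intrinsic plasticity: nudge thresholds toward a target firing rate
    T += eta_ip * (x_now - h_target)
    return W, T
```

The interplay matters: STDP alone is unstable, while normalization and intrinsic plasticity keep weights and rates bounded, which is what allows the self-organized dynamics described above.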
Tumour hypoxia plays a pivotal role in cancer therapy for most therapeutic approaches from radiotherapy to immunotherapy. Detailed and accurate knowledge of the oxygen distribution in a tumour is necessary in order to determine the right treatment strategy. Still, due to the limited spatial and temporal resolution of imaging methods as well as a lacking fundamental understanding of internal oxygenation dynamics in tumours, a precise oxygen distribution map is rarely available for treatment planning. We employ an agent-based in silico tumour spheroid model in order to study the complex, localized and fast oxygen dynamics in tumour micro-regions which are induced by radiotherapy. A lattice-free, 3D, agent-based approach for cell representation is coupled with a high-resolution diffusion solver that includes a tissue density-dependent diffusion coefficient. This allows us to assess the space- and time-resolved reoxygenation response of a small subvolume of tumour tissue in response to radiotherapy. In response to irradiation the tumour nodule exhibits characteristic reoxygenation and re-depletion dynamics which we resolve with high spatio-temporal resolution. The reoxygenation follows specific timings, which should be respected in treatment in order to maximise the use of oxygen enhancement effects. Oxygen dynamics within the tumour create windows of opportunity for the use of adjuvant chemotherapeutics and hypoxia-activated drugs. Overall, we show that by using modelling it is possible to follow the oxygenation dynamics beyond common resolution limits and predict beneficial strategies for therapy and in vitro verification. Models of cell cycle and oxygen dynamics in tumours should in the future be combined with imaging techniques, to allow for a systematic experimental study of possible improved schedules and to ultimately extend the reach of oxygenation monitoring available in clinical treatment.
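The core of such a diffusion solver with a density-dependent coefficient can be sketched in one dimension (the paper's solver is 3D and high-resolution; this explicit finite-difference toy, with an arithmetic-mean interface diffusivity and fixed boundaries, is only an assumed illustration of the scheme):

```python
import numpy as np

def diffuse_step(c, D, dx, dt, consumption):
    """One explicit finite-difference step of dc/dt = d/dx( D(x) dc/dx ) - q(x),
    with a spatially varying (e.g. tissue-density-dependent) diffusion
    coefficient D and a local oxygen consumption term q.
    Boundary cells are held fixed (Dirichlet-like boundaries).
    Stability requires dt < dx**2 / (2 * D.max())."""
    D_half = 0.5 * (D[1:] + D[:-1])        # diffusivity at cell interfaces
    flux = D_half * np.diff(c) / dx        # Fick's law between neighbours
    c_new = c.copy()
    c_new[1:-1] += dt * (np.diff(flux) / dx - consumption[1:-1])
    return c_new
```

Iterating this step with a consumption field tied to the local cell agents is what produces the reoxygenation and re-depletion dynamics described above.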
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively approximate and characterize the underlying generative process.
FIAS Scientific Report 2014
(2015)
FIAS Scientific Report 2013
(2014)
FIAS Scientific Report 2012
(2013)
Intrinsic motivations drive the acquisition of knowledge and skills on the basis of novel or surprising stimuli or the pleasure of learning new skills. In so doing, they are different from extrinsic motivations that are mainly linked to drives that promote survival and reproduction. Intrinsic motivations have been implicitly exploited in several psychological experiments but, due to the lack of proper paradigms, they are rarely a direct subject of investigation. This article investigates how different intrinsic motivation mechanisms can support the learning of visual skills, such as "foveate a particular object in space", using a gaze contingency paradigm. In the experiment participants could freely foveate objects shown on a computer screen. Foveating each of two “button” pictures caused different effects: one caused the appearance of a simple image (blue rectangle) in unexpected positions, while the other evoked the appearance of an always-novel picture (objects or animals). The experiment studied how two possible intrinsic motivation mechanisms might guide learning to foveate one or the other button picture. One mechanism is based on the sudden, surprising appearance of a familiar image at unpredicted locations, and a second one is based on the content novelty of the images. The results show the comparative effectiveness of the mechanism based on image novelty, whereas they do not support the operation of the mechanism based on the surprising location of the image appearance. Interestingly, these results were also obtained with participants who, according to a post-experiment questionnaire, had not understood the functions of the different buttons, suggesting that novelty-based intrinsic motivation mechanisms might operate even at an unconscious level.
We investigate the properties of QCD matter across the deconfinement phase transition. Within the scope of the parton-hadron string dynamics (PHSD) transport approach, we study the strongly interacting matter in equilibrium as well as the out-of-equilibrium dynamics of relativistic heavy-ion collisions. We present here in particular results on the electromagnetic radiation, i.e. photon and dilepton production, in relativistic heavy-ion collisions and the relevant correlator in equilibrium, i.e. the electric conductivity. By comparing our calculations for heavy-ion collisions to the available data, we determine the relative importance of the various production sources and address the possible origin of the observed strong elliptic flow v2 of direct photons.
In the presence of a minimal length, physical objects cannot collapse to an infinitely dense, singular, matter point. In this paper, we consider the possible final stage of the gravitational collapse of "thick" matter layers. The energy momentum tensor we choose to model these shell-like objects is a proper modification of the source for "noncommutative-geometry-inspired" regular black holes. By using higher moments of the Gaussian distribution to localize matter at a finite distance from the origin, we obtain new solutions of the Einstein equations which smoothly interpolate between Minkowski geometry near the center of the shell and Schwarzschild spacetime far away from the matter layer. The metric is curvature singularity free. Black hole type solutions exist only for "heavy" shells; that is, M >= Me, where Me is the mass of the extremal configuration. We determine the Hawking temperature and a modified area law taking into account the extended nature of the source.
While the existence of a strongly interacting state of matter, known as “quark-gluon plasma” (QGP), has been established in heavy ion collision experiments in the past decade, the task remains to map out the transition from the hadronic matter to the QGP. This is done by measuring the dependence of key observables (such as particle suppression and elliptic flow) on the collision energy of the heavy ions. This procedure, known as "beam energy scan", has been most recently performed at the Relativistic Heavy Ion Collider (RHIC).
Utilizing a Boltzmann+hydrodynamics hybrid model, we study the collision energy dependence of initial state eccentricities and the final state elliptic and triangular flow. This approach is well suited to investigate the relative importance of hydrodynamics and hadron transport at different collision energies.
We derive the Polyakov-loop thermodynamic potential in the perturbative approach to pure SU(3) Yang-Mills theory. The potential expressed in terms of the Polyakov loop in the fundamental representation corresponds to that of the strong-coupling expansion, of which the relevant coefficients of the gluon energy distribution are specified by characters of the SU(3) group. At high temperature, the potential exhibits the correct asymptotic behavior, whereas at low temperature, it disfavors gluons as appropriate dynamical degrees of freedom. To quantify the Yang-Mills thermodynamics in confined phase, we introduce a hybrid approach which matches the effective gluon potential to that of glueballs, constrained by the QCD trace anomaly in terms of dilaton fields.
We study the impact of nonequilibrium effects on the relevant signals within a chiral fluid dynamics model including explicit propagation of the Polyakov loop. An expanding heat bath of quarks is coupled to the Langevin dynamics of the order parameter fields. The model is able to describe relaxational processes, including critical slowing down and the enhancement of soft modes near the critical point. At the first-order phase transition we observe domain formation and phase coexistence in the sigma and Polyakov loop field leading to a significant amount of clumping in the energy density. This effect gets even more pronounced if we go to systems at finite baryon density. Here the formation of high-density clusters could provide an important observable signal for upcoming experiments at FAIR and NICA. We conclude that improving our understanding of dynamical symmetry breaking is important to give realistic estimates for experimental observables connected to the QCD phase transition.
The QGP that might be created in ultrarelativistic heavy-ion collisions is expected to emit thermal dilepton radiation. However, this thermal dilepton radiation interferes with dileptons originating from hadron decays. In the invariant mass region between the φ and the J/ψ peaks (1 GeV ≤ M(l+l-) ≤ 3 GeV) the most substantial background from hadron decays originates from correlated DD̄-meson decays. We evaluate this background using a Langevin simulation for charm quarks. As background medium we utilize the well-tested UrQMD-hybrid model. The required drag and diffusion coefficients are taken from a resonance approach. The decoupling of the charm quarks from the hot medium is performed at a temperature of 130 MeV, and as hadronization mechanism a coalescence approach is chosen. This model for charm quark interactions with the medium has already been successfully applied to the study of the medium modification and the elliptic flow at FAIR, RHIC and LHC energies. In these proceedings we present our results for the dilepton radiation from correlated DD̄ decays at RHIC energy in comparison to PHENIX measurements in the invariant mass range between 1 and 3 GeV using different interaction scenarios. These results can be utilized to estimate the thermal QGP radiation.
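The Langevin dynamics used for the charm quarks above can be sketched as a single Euler update per momentum component. This is a generic illustration under assumed constant coefficients; in the model the drag and diffusion coefficients come from the resonance approach and depend on the medium temperature.

```python
import numpy as np

def langevin_step(p, drag, diffusion, dt, rng):
    """One Euler step of the Langevin equation for a heavy-quark momentum
    component: dp = -drag * p * dt + sqrt(2 * diffusion * dt) * xi,
    with xi ~ N(0, 1). drag and diffusion are illustrative free parameters."""
    noise = rng.standard_normal(np.shape(p))
    return p - drag * p * dt + np.sqrt(2.0 * diffusion * dt) * noise
```

For constant coefficients the stationary momentum variance approaches diffusion/drag (the fluctuation-dissipation relation), which provides a quick consistency check on any implementation.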
As microscopic transport models usually have difficulties dealing with in-medium effects in heavy-ion collisions, we present an alternative approach that uses coarse-grained output from transport calculations with the UrQMD model to determine thermal dilepton emission rates. A four-dimensional space-time grid is set up to extract local baryon and energy densities and, from these, temperature and baryon chemical potential. The lepton pair emission is then calculated for each cell of the grid using thermal equilibrium rates. In the current investigation we include the medium-modified ρ spectral function by Eletsky et al., as well as contributions from the QGP and four-pion interactions at high collision energies. First dielectron invariant mass spectra for Au+Au collisions at 1.25 AGeV and for dimuons from In+In at 158 AGeV are shown. At 1.25 AGeV a clear enhancement of the total dilepton yield as compared to a pure transport result is observed. In the latter case, we compare our outcome with the NA60 dimuon excess data. Here a good agreement is achieved, but the yield in the low-mass tail is underestimated. In general the results show that the coarse-graining approach gives reasonable results and can cover a broad collision-energy range.
The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity, are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.
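The mechanism by which distance-dependent connectivity alone inflates apparent reciprocity can be demonstrated with a small sketch (parameters and function names are illustrative assumptions, not the paper's calibration): near pairs are more likely to be connected in both directions, so the pooled count of bidirectional pairs exceeds the chance level computed from the overall connection density.

```python
import numpy as np

def geometric_network(n, box, p0, lam, rng):
    """Directed network with distance-dependent connection probability
    p(d) = p0 * exp(-d / lam) between n neurons placed uniformly in a cube
    of side length `box` (microns, say). Each direction is drawn independently."""
    pos = rng.random((n, 3)) * box
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    A = rng.random((n, n)) < p0 * np.exp(-d / lam)
    np.fill_diagonal(A, False)
    return A

def reciprocity_ratio(A):
    """Observed bidirectional pairs relative to the chance level of a random
    graph with the same overall connection density."""
    n = A.shape[0]
    density = A.sum() / (n * (n - 1))
    observed = np.logical_and(A, A.T).sum() / 2
    expected = density ** 2 * n * (n - 1) / 2
    return observed / expected
```

Because the ratio equals (to first order) E[p(d)^2] / E[p(d)]^2 over sampled pair distances, any heterogeneity in p(d) pushes it above 1 even though each connection direction is drawn independently.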
We study the equilibrium properties of strongly-interacting infinite parton-hadron matter, characterized by the transport coefficients such as shear and bulk viscosity and electric conductivity, and the non-equilibrium dynamics of heavy-ion collisions within the Parton-Hadron-String Dynamics (PHSD) transport approach, which incorporates explicit partonic degrees of freedom in terms of strongly interacting quasiparticles (quarks and gluons) in line with an equation of state from lattice QCD as well as the dynamical hadronization and hadronic collision dynamics in the final reaction phase. We discuss in particular the possible origin for the strong elliptic flow v2 of direct photons observed at RHIC energies.
Dendritic morphology has been shown to have a dramatic impact on neuronal function. However, population features such as the inherent variability in dendritic morphology between cells belonging to the same neuronal type are often overlooked when studying computation in neural networks. While detailed models for morphology and electrophysiology exist for many types of single neurons, the role of detailed single cell morphology in the population has not been studied quantitatively or computationally. Here we use the structural context of the neural tissue in which dendritic trees exist to drive their generation in silico. We synthesize the entire population of dentate gyrus granule cells, the most numerous cell type in the hippocampus, by growing their dendritic trees within their characteristic dendritic fields bounded by the realistic structural context of (1) the granule cell layer that contains all somata and (2) the molecular layer that contains the dendritic forest. This process enables branching statistics to be linked to larger scale neuroanatomical features. We find large differences in dendritic total length and individual path length measures as a function of location in the dentate gyrus and of somatic depth in the granule cell layer. We also predict the number of unique granule cell dendrites invading a given volume in the molecular layer. This work enables the complete population-level study of morphological properties and provides a framework to develop complex and realistic neural network models.
Currently, little is known about how synesthesia develops and which aspects of synesthesia can be acquired through a learning process. We review the increasing evidence for the role of semantic representations in the induction of synesthesia, and argue for the thesis that synesthetic abilities are developed and modified by semantic mechanisms. That is, in certain people semantic mechanisms associate concepts with perception-like experiences—and this association occurs in an extraordinary way. This phenomenon can be referred to as “higher” synesthesia or ideasthesia. The present analysis suggests that synesthesia develops during childhood and is being enriched further throughout the synesthetes’ lifetime; for example, the already existing concurrents may be adopted by novel inducers or new concurrents may be formed. For a deeper understanding of the origin and nature of synesthesia we propose to focus future research on two aspects: (i) the similarities between synesthesia and ordinary phenomenal experiences based on concepts; and (ii) the tight entanglement of perception, cognition and the conceptualization of the world. Importantly, an explanation of how biological systems get to generate experiences, synesthetic or not, may have to involve an explanation of how semantic networks are formed in general and what their role is in the ability to be aware of the surrounding world.
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. In particular, the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data.
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
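The core idea of the ensemble method can be sketched in a few lines: probabilities are estimated by pooling over trials at a fixed time step rather than over time, so no stationarity over time is assumed. The following toy estimator (binary signals, history length 1) illustrates that idea only; it is not the estimator or GPU implementation used in the study.

```python
import numpy as np

def transfer_entropy_ensemble(x, y, t):
    """Discrete transfer entropy X -> Y at a single time step t.

    x, y: (n_trials, n_time) arrays of 0/1 values. All probabilities are
    estimated by pooling over trials (the ensemble of realizations) at
    time t, not over time -- the core idea of the ensemble method.
    """
    yn, yp, xp = y[:, t + 1], y[:, t], x[:, t]
    te = 0.0
    for a in (0, 1):          # value of y_{t+1}
        for b in (0, 1):      # value of y_t
            for c in (0, 1):  # value of x_t
                mask = (yn == a) & (yp == b) & (xp == c)
                p_abc = mask.mean()
                if p_abc == 0.0:
                    continue
                p_bc = ((yp == b) & (xp == c)).mean()
                p_ab = ((yn == a) & (yp == b)).mean()
                p_b = (yp == b).mean()
                # log ratio of p(y_{t+1}|y_t, x_t) to p(y_{t+1}|y_t)
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te
```

If y simply copies x with a one-step delay, this estimator returns roughly one bit at the corresponding time step, while independent processes yield values near zero.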
Poster presentation at The Twenty Third Annual Computational Neuroscience Meeting: CNS*2014 Québec City, Canada. 26-31 July 2014: We study random strongly heterogeneous recurrent networks of firing rate neurons, introducing the notion of cohorts: groups of co-active neurons which compete with one another for firing and whose presence depends sensitively on the structure of the input. The identities of neurons recruited to and dropped from an active cohort change smoothly with varying input features. We search for network parameter regimes in which the activation of cohorts is robust yet easily switchable by the external input and which exhibit large repertoires of different cohorts. We apply these networks to model the emergence of orientation and direction selectivity in visual cortex. We feed these random networks with a set of harmonic inputs that vary across neurons only in their temporal phase, mimicking the feedforward drive due to a moving grating stimulus. The relationship between the phases, which carries the information about the orientation of the stimulus, determines which cohort of neurons is activated. As a result, the individual neurons acquire non-monotonic orientation tuning curves which are characterized by high orientation and direction selectivity. This mechanism of emergence for direction selectivity differs from the classical motion detector scheme, which is based on the nonlinear summation of time-shifted inputs. In our model these two mechanisms coexist in the same network, but can be distinguished by their different frequency and contrast dependences. In general, the mechanism we are studying here converts a temporal phase sequence into population activity and could therefore also be used to extract and represent various other relevant stimulus features.
The YaeJ protein is a codon-independent release factor with peptidyl-tRNA hydrolysis (PTH) activity, and functions as a stalled-ribosome rescue factor in Escherichia coli. To identify residues required for YaeJ function, we performed mutational analysis for in vitro PTH activity towards rescue of ribosomes stalled on a non-stop mRNA, and for ribosome-binding efficiency. We focused on residues conserved among bacterial YaeJ proteins. Additionally, we determined the solution structure of the GGQ domain of YaeJ from E. coli using nuclear magnetic resonance spectroscopy. YaeJ and a human homolog, ICT1, had similar levels of PTH activity, despite various differences in sequence and structure. While no YaeJ-specific residues important for PTH activity occur in the structured GGQ domain, Arg118, Leu119, Lys122, Lys129 and Arg132 in the following C-terminal extension were required for PTH activity. All of these residues are completely conserved among bacteria. The equivalent residues were also found in the C-terminal extension of ICT1, allowing an appropriate sequence alignment between YaeJ and ICT1 proteins from various species. Single amino acid substitutions for each of these residues significantly decreased ribosome-binding efficiency. These biochemical findings provide clues to understanding how YaeJ enters the A-site of stalled ribosomes.
Background: Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlaps leading to significant peak-picking intensity errors, which can result in erroneous structural restraints. Precise frequencies are critical for unambiguous resonance assignments.
Results: To alleviate this problem, a more sophisticated peak decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. We produce peak shapes from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied if the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments.
Conclusions: Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890) as well as to synthetic HSQC data shows that peaks can be picked accurately also in spectral regions with strong overlap.
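As a rough illustration of the underlying technique, a basic multiplicative-update NMF (in the style of Lee and Seung) factorizes a non-negative matrix into component shapes and non-negative intensities. This generic sketch is not the tailored peak-decomposition algorithm of the paper.

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9, seed=0):
    """Basic multiplicative-update NMF: V ~= W @ H with W, H >= 0.

    For spectra, the columns of W play the role of component peak shapes
    and the rows of H their non-negative intensities across spectra.
    Multiplicative updates preserve non-negativity by construction.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update intensities
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update component shapes
    return W, H
```

Applied to a matrix built from two overlapping Gaussian "peaks" with known mixing weights, the factorization recovers a close non-negative reconstruction of the input.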
Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during its self-organization, developing into a regime where only a few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
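The perturbation-based estimate of the maximum Lyapunov exponent mentioned above can be illustrated with a standard Benettin-style procedure on a smooth random rate network. Note that SORNs combine binary units with continuous synaptic variables, which this smooth sketch does not capture; it only shows the estimation principle.

```python
import numpy as np

def lyapunov_estimate(W, x0, n_steps=200, d0=1e-8, seed=0):
    """Estimate the maximum Lyapunov exponent of x_{t+1} = tanh(W x_t).

    A tiny perturbation of size d0 is tracked alongside the reference
    trajectory and renormalised to d0 after every step; the average log
    growth rate of the separation estimates the largest exponent.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    pert = rng.standard_normal(len(x0))
    y = x + d0 * pert / np.linalg.norm(pert)
    log_growth = 0.0
    for _ in range(n_steps):
        x = np.tanh(W @ x)
        y = np.tanh(W @ y)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / d0)
        y = x + (y - x) * (d0 / d)   # renormalise the separation to d0
    return log_growth / n_steps
```

With weak coupling the exponent is negative (perturbations die out); with strong coupling the same network becomes chaotic and the exponent turns positive.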
Evidence from anatomical and functional imaging studies has highlighted major modifications of cortical circuits during adolescence. These include reductions of gray matter (GM), increases in the myelination of cortico-cortical connections and changes in the architecture of large-scale cortical networks. It is currently unclear, however, how the ongoing developmental processes impact upon the folding of the cerebral cortex and how changes in gyrification relate to the maturation of GM and white matter (WM) volume, thickness and surface area. In the current study, we acquired high-resolution (3 Tesla) magnetic resonance imaging (MRI) data from 79 healthy subjects (34 males and 45 females) between the ages of 12 and 23 years and performed whole brain analysis of cortical folding patterns with the gyrification index (GI). In addition to GI-values, we obtained estimates of cortical thickness, surface area, and GM and WM volume, which permitted correlations with changes in gyrification. Our data show pronounced and widespread reductions in GI-values during adolescence in several cortical regions which include precentral, temporal and frontal areas. Decreases in gyrification overlap only partially with changes in the thickness, volume and surface of GM and were characterized overall by a linear developmental trajectory. Our data suggest that the observed reductions in GI-values represent an additional, important modification of the cerebral cortex during late brain maturation which may be related to cognitive development.
The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness.
Tumour cells show a varying susceptibility to radiation damage as a function of the current cell cycle phase. While this sensitivity is averaged out in an unperturbed tumour due to unsynchronised cell cycle progression, external stimuli such as radiation or drug doses can induce a resynchronisation of the cell cycle and consequently induce a collective development of radiosensitivity in tumours. Although this effect has been regularly described in experiments, it is currently not exploited in clinical practice and thus a large potential for optimisation is missed. We present an agent-based model for three-dimensional tumour spheroid growth which has been combined with an irradiation damage and kinetics model. We predict the dynamic response of the overall tumour radiosensitivity to delivered radiation doses and describe corresponding time windows of increased or decreased radiation sensitivity. The degree of cell cycle resynchronisation in response to radiation delivery was identified as a main determinant of the transient periods of low and high radiosensitivity enhancement. A range of selected clinical fractionation schemes is examined and new triggered schedules are tested which aim to maximise the effect of the radiation-induced sensitivity enhancement. We find that the cell cycle resynchronisation can yield a strong increase in therapy effectiveness, if employed correctly. While the individual timing of sensitive periods will depend on the exact cell and radiation types, enhancement is a universal effect which is present in every tumour and accordingly should be the target of experimental investigation. Experimental observables which can be assessed non-invasively and with high spatio-temporal resolution have to be connected to the radiosensitivity enhancement in order to allow for a possible tumour-specific design of highly efficient treatment schedules based on induced cell cycle synchronisation.
Author Summary: The sensitivity of a cell to a dose of radiation is largely affected by its current position within the cell cycle. While under normal circumstances progression through the cell cycle will be asynchronous in a tumour mass, external influences such as chemo- or radiotherapy can induce a synchronisation. Such a common progression of the inner clock of the cancer cells makes the effectiveness of any drug or radiation dose critically dependent on the timing of its administration. We analyse the exact evolution of the radiosensitivity of a sample tumour spheroid in a computer model, which enables us to predict time windows of decreased or increased radiosensitivity. Fractionated radiotherapy schedules can be tailored to avoid periods of high resistance and exploit the induced radiosensitivity for an increase in therapy efficiency. We show that the cell cycle effects can drastically alter the outcome of fractionated irradiation schedules in a spheroid cell system. By using the correct observables and continuous monitoring, the cell cycle sensitivity effects have the potential to be integrated into treatment planning of the future and thus to be employed for a better outcome in clinical cancer therapies.
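The cell-cycle dependence of radiosensitivity described above can be made concrete with the standard linear-quadratic survival model, S = exp(-(alpha*D + beta*D^2)), using phase-dependent coefficients. The alpha/beta values below are illustrative placeholders, not fitted parameters from the study.

```python
import numpy as np

# Illustrative (alpha [1/Gy], beta [1/Gy^2]) pairs per cell cycle phase.
# Late S phase is typically the most radioresistant, mitosis the most
# sensitive; the exact numbers here are placeholders, not measured values.
phase_params = {
    "G1":   (0.30, 0.03),
    "S":    (0.15, 0.02),
    "G2/M": (0.50, 0.05),
}

def survival(phase, dose_gy):
    """Linear-quadratic surviving fraction for a given cycle phase."""
    alpha, beta = phase_params[phase]
    return np.exp(-(alpha * dose_gy + beta * dose_gy ** 2))
```

Under these placeholder parameters, a synchronised population irradiated in G2/M loses far more cells per 2 Gy fraction than one irradiated in S phase, which is exactly the timing effect the model exploits.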
Synchronous neuronal firing has been proposed as a potential neuronal code. To determine whether synchronous firing is really involved in different forms of information processing, one needs to directly compare the amount of synchronous firing due to various factors, such as different experimental or behavioral conditions. In order to address this issue, we present an extended version of the previously published method, NeuroXidence. The improved method incorporates bi- and multivariate testing to determine whether different factors result in synchronous firing occurring above the chance level. We demonstrate through the use of simulated data sets that bi- and multivariate NeuroXidence reliably and robustly detects joint-spike-events across different factors.
Network or graph theory has become a popular tool to represent and analyze large-scale interaction patterns in the brain. To derive a functional network representation from experimentally recorded neural time series one has to identify the structure of the interactions between these time series. In neuroscience, this is often done by pairwise bivariate analysis because a fully multivariate treatment is typically not possible due to limited data and excessive computational cost. Furthermore, a true multivariate analysis would consist of the analysis of the combined effects, including information theoretic synergies and redundancies, of all possible subsets of network components. Since these subsets form the power set of the network components, their number grows exponentially with the number of components, leading to a combinatorial explosion (i.e. a problem that is computationally intractable). In contrast, a pairwise bivariate analysis of interactions is typically feasible but introduces the possibility of false detection of spurious interactions between network components, especially due to cascade and common drive effects. These spurious connections in a network representation may introduce a bias to subsequently computed graph theoretical measures (e.g. clustering coefficient or centrality) as these measures depend on the reliability of the graph representation from which they are computed. Strictly speaking, graph theoretical measures are meaningful only if the underlying graph structure can be guaranteed to consist of one type of connections only, i.e. connections in the graph are guaranteed to be non-spurious. ...
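The cascade effect mentioned above is easy to demonstrate: in a chain X → Y → Z, a pairwise measure links X and Z even though no direct connection exists, while conditioning on the intermediate node Y removes the spurious edge. The toy below uses plain and partial correlation for simplicity; the abstract's argument applies equally to directed measures.

```python
import numpy as np

# Cascade X -> Y -> Z with Gaussian noise: X and Z are only connected
# through Y, yet their pairwise correlation is substantial.
rng = np.random.default_rng(1)
n = 20000
x = rng.standard_normal(n)
y = 0.8 * x + rng.standard_normal(n)
z = 0.8 * y + rng.standard_normal(n)

r_xz = np.corrcoef(x, z)[0, 1]                    # pairwise: spuriously large
rxy = np.corrcoef(x, y)[0, 1]
rzy = np.corrcoef(z, y)[0, 1]
# Partial correlation of X and Z given Y: the spurious link vanishes.
r_xz_given_y = (r_xz - rxy * rzy) / np.sqrt((1 - rxy ** 2) * (1 - rzy ** 2))
```

A graph built from the pairwise values would contain the false edge X–Z; the conditioned value is statistically indistinguishable from zero.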
When studying real world complex networks, one rarely has full access to all their components. As an example, the central nervous system of the human consists of 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single neuron activity, it can heavily distort observables which characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic for SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all the other units.
Under subsampling, the avalanche measures A depended on three main factors: First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated.
Since under subsampling, the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: When changing the number and the distance between electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even when brain activity is far from being fully sampled.
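A minimal sketch of how subsampling distorts avalanche measures: avalanche sizes are computed from a binary raster for the full population and for a sampled subset of units. The raster below is random rather than generated by an SOC model, so it illustrates only the measurement procedure and the fragmentation of avalanches under subsampling, not the models' dynamics.

```python
import numpy as np

def avalanche_sizes(raster):
    """Avalanche sizes from a binary raster (units x time bins).

    An avalanche is a maximal run of consecutive time bins with at least
    one active unit; its size is the total number of activations in the run.
    """
    active = raster.sum(axis=0)
    sizes, current = [], 0
    for a in active:
        if a > 0:
            current += int(a)
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Subsampling: observe only a subset of units, discarding the rest.
rng = np.random.default_rng(0)
raster = (rng.random((100, 1000)) < 0.05).astype(int)
full = avalanche_sizes(raster)       # all 100 units observed
sub = avalanche_sizes(raster[:10])   # only 10 of 100 units observed
```

Under subsampling, continuous activity breaks into many apparent gaps, so avalanches fragment: the subsampled raster yields many more, much smaller avalanches than the fully sampled one.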
Two generic mechanisms for emergence of direction selectivity coexist in recurrent neural networks
(2013)
Poster presentation: Twenty Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013.
In the mammalian visual cortex, the time-averaged response of many neurons is maximal for stimuli moving in a particular direction. Such a direction selective response is not found in LGN, upstream of the visual processing pathway, suggesting that cortical networks play a strong role in the generation of direction selectivity. Here we investigate the mechanisms for the emergence of direction selectivity in the recurrent networks of nonlinear firing rate neurons in layer 4 of V1 receiving the input from LGN. In the model the LGN inputs are characterized by different receptive field positions, and their relative temporal phase shifts are reversed for the stimuli moving in the opposite direction. We propose that two distinct mechanisms result in the neuronal direction selective response in these recurrent networks. The first one is a result of nonlinear feed-forward summation of several time-shifted inputs. The second mechanism is based on the competition between neurons for firing in a winner-take-all regime. Both mechanisms rely on inhibitory interactions in the connectivity matrix of lateral connections, but the second one involves inhibitory loops. Typically, the first mechanism results in lower selectivity values than the second, but the time-course of acquiring a direction selective response is faster for the first mechanism. Importantly, the two mechanisms have different input frequency tuning. The first mechanism, based on the nonlinear summation, results in a relatively narrow tuning curve around the preferred frequency of the stimulus in the case of a moving grating. In contrast, the direction selectivity arising from the second mechanism depends only weakly on the input frequency, i.e. has a broader tuning curve. These differences allow us to provide a recipe for identifying in experiments which of the two mechanisms is used by a given direction selective neuron.
We then analyze how the statistics of the connections in the random recurrent networks affect the relative contributions from these two mechanisms and determine the distributions of the direction selectivity values. We identify the motifs in the connectivity matrix which are required for each mechanism and show that the minimal conditions for both mechanisms are met in a very broad set of random recurrent networks with sufficiently strong inhibitory connections. Thus, we propose that these mechanisms coexist in generic recurrent networks with inhibition. Our results may account for the recent experimental observations that direction selectivity is present in dark-reared mice and ferrets [1,2]. It can also explain the emergence of direction selectivity in species lacking a spatially organized direction selectivity map.
Neuronal dynamics differs between wakefulness and sleep stages, as does the cognitive state. In contrast, a single attractor state, called self-organized critical (SOC), has been proposed to govern human brain dynamics because of its optimal information coding and processing capabilities. Here we address two open questions: First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas; however, the interaction between brain areas may be organized differently. Here we asked whether the interaction between brain areas is SOC. ...
The Taiwan cobra (Naja naja atra) chymotrypsin inhibitor (NACI) consists of 57 amino acids and is related to other Kunitz-type inhibitors such as bovine pancreatic trypsin inhibitor (BPTI) and Bungarus fasciatus fraction IX (BF9), another chymotrypsin inhibitor. Here we present the solution structure of NACI. We determined the NMR structure of NACI with a root-mean-square deviation of 0.37 Å for the backbone atoms and 0.73 Å for the heavy atoms on the basis of 1,075 upper distance limits derived from NOE peaks measured in its NOESY spectra. To investigate the structural characteristics of NACI, we compared the three-dimensional structure of NACI with those of BPTI and BF9. The structure of the NACI protein comprises one 310-helix, one α-helix and one double-stranded antiparallel β-sheet, which is comparable with the secondary structures in BPTI and BF9. The RMSD value between the mean structures is 1.09 Å between NACI and BPTI and 1.27 Å between NACI and BF9. In addition to similar secondary and tertiary structure, NACI might possess similar types of protein conformational fluctuations as reported in BPTI, such as Cys14–Cys38 disulfide bond isomerization, based on line broadening of resonances from residues which are mainly confined to a region around the Cys14–Cys38 disulfide bond.
Adequate digital resolution and signal sensitivity are two critical factors for protein structure determinations by solution NMR spectroscopy. The prime objective for obtaining high digital resolution is to resolve peak overlap, especially in NOESY spectra with thousands of signals where the signal analysis needs to be performed on a large scale. Achieving maximum digital resolution is usually limited by the practically available measurement time. We developed a method utilizing non-uniform sampling for balancing digital resolution and signal sensitivity, and performed a large-scale analysis of the effect of the digital resolution on the accuracy of the resulting protein structures. Structure calculations were performed as a function of digital resolution for about 400 proteins with molecular sizes ranging between 5 and 33 kDa. The structural accuracy was assessed by atomic coordinate RMSD values from the reference structures of the proteins. In addition, we monitored the number of assigned NOESY cross peaks, the average signal sensitivity, and the chemical shift spectral overlap. We show that high resolution is equally important for proteins of every molecular size. The chemical shift spectral overlap depends strongly on the corresponding spectral digital resolution. Thus, knowing the extent of overlap can be a predictor of the resulting structural accuracy. Our results show that for every molecular size a minimal digital resolution, corresponding to the natural linewidth, needs to be achieved for obtaining the highest accuracy possible for the given protein size using state-of-the-art automated NOESY assignment and structure calculation methods.
Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
Author Summary: The statistics of our visual world is dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized through evolution and throughout our lifespan for such stimuli. Yet, the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on predicted response properties of simple cells which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and predictions of the standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large quantities of such cells have been observed since new techniques (reverse correlation) came into use. Without occlusions, they are only obtained for specific settings and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, the new type of response naturally emerges as soon as occlusions are considered. In comparison with recent in vivo experiments we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
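The difference between the two superposition assumptions can be made concrete: linear models add components where they overlap, while a pointwise maximum is one common formalisation of occlusion (the nearer surface hides the other; the paper's occlusive model may differ in detail). A toy illustration with two overlapping 1-D components:

```python
import numpy as np

# Two Gaussian "image components" on a 1-D pixel grid.
x = np.linspace(0.0, 1.0, 100)
comp_a = np.exp(-((x - 0.4) / 0.1) ** 2)
comp_b = np.exp(-((x - 0.6) / 0.1) ** 2)

# Linear superposition: components sum where they overlap, so the combined
# intensity can exceed either component's maximum.
linear = comp_a + comp_b

# Occlusive superposition (max rule): at each pixel only the strongest
# component is visible, so intensities never exceed the component range.
occlusive = np.maximum(comp_a, comp_b)
```

The max rule never exceeds the intensity range of the individual components, whereas the linear rule does in the overlap region; this difference in the generative model is what drives the different predicted receptive fields.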
Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain’s activity is thought to be internally generated and, hence, quantifying stimulus response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity applied to electrophysiological data, based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor-level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction.
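For contrast with the model-free TE described above, bivariate Granger causality reduces to asking whether the past of x improves a linear prediction of y beyond y's own past. A minimal least-squares sketch with a single lag (an illustration of the principle, not the implementation used in the study):

```python
import numpy as np

def granger_log_ratio(x, y, lag=1):
    """Log variance ratio of restricted vs full linear models for y.

    Values well above zero indicate that past x helps predict y beyond
    y's own past (Granger-causal influence x -> y); near-zero values
    indicate no linear predictive improvement.
    """
    yt = y[lag:]
    ones = np.ones((len(yt), 1))
    y_past = y[:-lag, None]
    x_past = x[:-lag, None]
    # Restricted model: y_t ~ const + y_{t-1}
    A = np.hstack([ones, y_past])
    r1 = yt - A @ np.linalg.lstsq(A, yt, rcond=None)[0]
    # Full model: y_t ~ const + y_{t-1} + x_{t-1}
    B = np.hstack([ones, y_past, x_past])
    r2 = yt - B @ np.linalg.lstsq(B, yt, rcond=None)[0]
    return np.log(np.sum(r1 ** 2) / np.sum(r2 ** 2))
```

For a unidirectionally coupled pair this statistic is clearly positive in the driving direction and near zero in the reverse direction; unlike TE, however, it only captures linear dependencies.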
Background: After induction of DNA double strand breaks (DSBs), the DNA damage response (DDR) is activated. One of the earliest events in the DDR is the phosphorylation of serine 139 on the histone variant H2AX (gH2AX), catalyzed by phosphatidylinositol 3-kinase-related kinases. Despite being extensively studied, H2AX distribution across the genome and gH2AX spreading around DSB sites in the context of different chromatin compaction states or transcription are yet to be fully elucidated.
Materials and methods: gH2AX was induced in human hepatocellular carcinoma cells (HepG2) by exposure to 10 Gy X-rays (250 kV, 16 mA). Samples were incubated for 0.5, 3 or 24 hours post irradiation to investigate early, intermediate and late stages of the DDR, respectively. Chromatin immunoprecipitation was performed to select H2AX-, H3- and gH2AX-enriched chromatin fractions. Chromatin-associated DNA was then sequenced on the Illumina ChIP-Seq platform. HepG2 gene expression and histone modification (H3K36me3, H3K9me3) ChIP-Seq profiles were retrieved from the Gene Expression Omnibus (accession numbers GSE30240 and GSE26386, respectively).
Results: First, we combined G/C usage, gene content, gene expression or histone modification profiles (H3K36me3, H3K9me3) to define genomic compartments characterized by different chromatin compaction states or transcriptional activity. Next, we investigated H3, H2AX and gH2AX distributions in such defined compartments before and after exposure to ionizing radiation (IR) to study DNA repair kinetics during DDR. Our sequencing results indicate that H2AX distribution followed H3 occupancy and, thus, the nucleosome pattern. The highest H2AX and H3 enrichment was observed in transcriptionally active compartments (euchromatin) while the lowest was found in low G/C and gene-poor compartments (heterochromatin). Under physiological conditions, the body of highly and moderately transcribed genes was devoid of gH2AX, despite presenting high H2AX levels. gH2AX accumulation was observed in 5’ or 3’ flanking regions, instead. The same genes showed a prompt gH2AX accumulation during the early stage of DDR which then decreased over time as DDR proceeded.
Finally, during the late stage of DDR the residual gH2AX signal was entirely retained in heterochromatic compartments. At this stage, euchromatic compartments were completely devoid of gH2AX despite presenting high levels of non-phosphorylated H2AX.
Conclusions: We show that gH2AX distribution ultimately depends on H2AX occupancy, the latter following H3 occupancy and, thus, nucleosome pattern. Both H2AX and H3 levels were higher in actively transcribed compartments. However, gH2AX levels were remarkably low over the body of actively transcribed genes suggesting that transcription levels antagonize gH2AX spreading. Moreover, repair processes did not take place uniformly across the genome; rather, DNA repair was affected by genomic location and transcriptional activity. We propose that higher H2AX density in euchromatic compartments results in high relative gH2AX concentration soon after the activation of DDR, thus favoring the recruitment of the DNA repair machinery to those compartments. When the damage is repaired and gH2AX is removed, its residual fraction is retained in the heterochromatic compartments which are then targeted and repaired at later times.
Current theories of the pathophysiology of schizophrenia have focused on abnormal temporal coordination of neural activity. Oscillations in the gamma-band range (>25 Hz) are of particular interest as they establish synchronization with great precision in local cortical networks. However, the contribution of high gamma (>60 Hz) oscillations toward the pathophysiology is less established. To address this issue, we recorded magnetoencephalographic (MEG) data from 16 medicated patients with chronic schizophrenia and 16 controls during the perception of Mooney faces. MEG data were analysed in the 25–150 Hz frequency range. Patients showed elevated reaction times and reduced detection rates during the perception of upright Mooney faces while responses to inverted stimuli were intact. Impaired processing of Mooney faces in schizophrenia patients was accompanied by a pronounced reduction in spectral power between 60–120 Hz (effect size: d = 1.26) which was correlated with disorganized symptoms (r = −0.72). Our findings demonstrate that deficits in high gamma-band oscillations as measured by MEG are a sensitive marker for aberrant cortical functioning in schizophrenia, suggesting an important aspect of the pathophysiology of the disorder.
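The two summary statistics reported here, spectral band power and the effect size d, are straightforward to compute. The following sketch uses simulated signals with an arbitrary 80 Hz component; all parameters are illustrative and not the MEG analysis pipeline of the study:

```python
import numpy as np
from numpy.fft import rfft, rfftfreq

def band_power(signal, fs, f_lo, f_hi):
    """Mean periodogram power of `signal` in the [f_lo, f_hi] Hz band."""
    freqs = rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

def cohens_d(a, b):
    """Cohen's d for two independent groups (pooled standard deviation)."""
    na, nb = len(a), len(b)
    var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(var)

# Simulated example: one group carries a stronger 80 Hz oscillation.
rng = np.random.default_rng(1)
fs = 500
t = np.arange(0, 2, 1.0 / fs)
group_hi = [np.sin(2 * np.pi * 80 * t) + rng.normal(0, 1, t.size) for _ in range(16)]
group_lo = [0.5 * np.sin(2 * np.pi * 80 * t) + rng.normal(0, 1, t.size) for _ in range(16)]
p_hi = [band_power(s, fs, 60, 120) for s in group_hi]
p_lo = [band_power(s, fs, 60, 120) for s in group_lo]
d = cohens_d(p_hi, p_lo)   # positive: reduced 60-120 Hz power in group_lo
```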
The way we perceive the visual world depends crucially on the state of the observer. In the present study we show that what we are holding in working memory (WM) can bias the way we perceive ambiguous structure from motion stimuli. Holding in memory the percept of an unambiguously rotating sphere influenced the perceived direction of motion of an ambiguously rotating sphere presented shortly thereafter. In particular, we found a systematic difference between congruent dominance periods where the perceived direction of the ambiguous stimulus corresponded to the direction of the unambiguous one and incongruent dominance periods. Congruent dominance periods were more frequent when participants memorized the speed of the unambiguous sphere for delayed discrimination than when they performed an immediate judgment on a change in its speed. The analysis of dominance time-course showed that a sustained tendency to perceive the same direction of motion as the prior stimulus emerged only in the WM condition, whereas in the attention condition perceptual dominance dropped to chance levels at the end of the trial. The results are explained in terms of a direct involvement of early visual areas in the active representation of visual motion in WM.
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
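The core extension, evaluating transfer entropy for a delayed source state while still conditioning on the immediately previous target state, can be sketched as follows. The binary median-split discretization and the toy process with a known 3-sample delay are simplifying assumptions for illustration, not the estimator used in the study:

```python
import numpy as np

def delayed_te(x, y, u):
    """Transfer entropy x -> y with source delay u (bits), conditioning on the
    target state one step earlier (binary median-split discretization)."""
    xb = (x > np.median(x)).astype(int)
    yb = (y > np.median(y)).astype(int)
    T = len(y)
    yt1, yt, xu = yb[u:], yb[u - 1:T - 1], xb[:T - u]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((yt1 == a) & (yt == b) & (xu == c))
                if p_abc == 0.0:
                    continue
                p_bc = np.mean((yt == b) & (xu == c))
                p_ab = np.mean((yt1 == a) & (yt == b))
                p_b = np.mean(yt == b)
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

# Toy pair of processes coupled with a known delay of 3 samples.
rng = np.random.default_rng(2)
n = 50000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(3, n):
    y[t] = 0.2 * y[t - 1] + x[t - 3] + 0.5 * rng.normal()

te_by_delay = {u: delayed_te(x, y, u) for u in range(1, 7)}
recovered = max(te_by_delay, key=te_by_delay.get)   # peaks at the true delay
```

Scanning the delay parameter and taking the argmax is the delay-recovery rule the extension supports; the conditioning on the previous target state is what distinguishes it from a plain time-lagged mutual information.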
Radiation damage following the irradiation of tissue proceeds through different scenarios and mechanisms depending on the projectiles or radiation modality. We investigate the radiation damage effects due to shock waves produced by ions. We analyse the strength of the shock wave capable of directly producing DNA strand breaks and, depending on the ion's linear energy transfer, estimate the radius from the ion's path within which DNA damage by the shock wave mechanism is dominant. At much smaller values of linear energy transfer, the shock waves turn out to be instrumental in propagating reactive species formed close to the ion's path to large distances, successfully competing with diffusion.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
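The rich-get-richer dynamics with homeostatic competition can be caricatured without a full spiking network. In the sketch below, potentiation probability proportional to synaptic weight stands in for STDP, weight normalization for homeostatic plasticity, and replacement of the weakest synapse for structural turnover; all rates are illustrative assumptions, not the model of the study:

```python
import numpy as np

rng = np.random.default_rng(3)
n_syn, steps, dw = 200, 20000, 0.01
w = np.full(n_syn, 1.0 / n_syn)          # equal initial weights, total fixed to 1

for step in range(steps):
    # STDP stand-in: a synapse is potentiated with probability proportional
    # to its current weight (rich-get-richer).
    i = rng.choice(n_syn, p=w / w.sum())
    w[i] += dw
    # Structural-plasticity stand-in: occasionally replace the weakest synapse.
    if rng.random() < 0.05:
        w[np.argmin(w)] = 1.0 / n_syn
    # Homeostatic stand-in: normalization enforces competition for total weight.
    w /= w.sum()

# Long tail: a handful of synapses end up far above the mean weight.
tail_ratio = w.max() / w.mean()
```

Even though every potentiation step is purely additive, the combination of weight-dependent selection and normalization concentrates a large share of the total weight in a few synapses, mirroring the emergence of long-tailed strength distributions described above.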
In recent years, Hagedorn states have been used to explain the equilibrium and transport properties of a hadron gas close to the QCD critical temperature. These massive resonances are shown to lower η/s to near the AdS/CFT limit close to the phase transition. A comparison of the Hagedorn model to recent lattice results is made and it is found that the hadrons can reach chemical equilibrium almost immediately, well before the chemical freeze-out temperatures found in thermal fits for a hadron gas without Hagedorn states.
Perception is an active inferential process in which prior knowledge is combined with sensory input, the result of which determines the contents of awareness. Accordingly, previous experience is known to help the brain “decide” what to perceive. However, a critical aspect that has not been addressed is that previous experience can exert 2 opposing effects on perception: An attractive effect, sensitizing the brain to perceive the same again (hysteresis), or a repulsive effect, making it more likely to perceive something else (adaptation). We used functional magnetic resonance imaging and modeling to elucidate how the brain entertains these 2 opposing processes, and what determines the direction of such experience-dependent perceptual effects. We found that although affecting our perception concurrently, hysteresis and adaptation map into distinct cortical networks: a widespread network of higher-order visual and fronto-parietal areas was involved in perceptual stabilization, while adaptation was confined to early visual areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. We provide a Bayesian model that accounts for the coexistence of hysteresis and adaptation by separating their causes into 2 distinct terms: Hysteresis alters the prior, whereas adaptation changes the sensory evidence (the likelihood function).
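The separation of the two biases can be illustrated with a two-interpretation toy version of such a Bayesian model, in which hysteresis shifts the prior while adaptation scales the likelihood. The Gaussian likelihoods and all numbers below are assumptions for illustration, not the model fitted in the study:

```python
import numpy as np

def percept_posterior(evidence, prior_A, adapt_A):
    """Posterior probability of perceiving interpretation A of an ambiguous
    stimulus. hysteresis: a previous percept of A raises prior_A (attractive
    bias on the prior); adaptation: previous exposure to A scales down its
    likelihood by adapt_A (repulsive bias on the sensory evidence)."""
    like_A = np.exp(-(evidence - 1.0) ** 2) * (1.0 - adapt_A)
    like_B = np.exp(-(evidence + 1.0) ** 2)
    return prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_B)

# Fully ambiguous input (evidence = 0): the two biases pull in opposite directions.
baseline = percept_posterior(0.0, prior_A=0.5, adapt_A=0.0)    # unbiased: 0.5
hysteresis = percept_posterior(0.0, prior_A=0.7, adapt_A=0.0)  # prior bias: A favored
adaptation = percept_posterior(0.0, prior_A=0.5, adapt_A=0.3)  # likelihood bias: A disfavored
```

Because the two biases enter through different terms of Bayes' rule, they can coexist and even act simultaneously, which is the structural point the model above makes.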
LatticeQCD using OpenCL
(2011)
We study the implications for compact star properties of a soft nuclear equation of state determined from kaon production at subthreshold energies in heavy-ion collisions. On the one hand, we apply these results to study radii and moments of inertia of light neutron stars. Heavy-ion data provide constraints on nuclear matter at densities relevant for those stars and, in particular, on the density dependence of the symmetry energy of nuclear matter. On the other hand, we derive an upper limit of three solar masses for the highest allowed neutron star mass. For that purpose, we use the information on the nucleon potential obtained from the analysis of the heavy-ion data combined with causality constraints on the nuclear equation of state.
The biological effects of energetic heavy ions are attracting increasing interest for their applications in cancer therapy and protection against space radiation. The cascade of events leading to cell death or late effects starts from stochastic energy deposition on the nanometer scale and the corresponding lesions in biological molecules, primarily DNA. We have developed experimental techniques to visualize DNA nanolesions induced by heavy ions. Nanolesions appear in cells as “streaks” which can be visualized by using different DNA repair markers. We have studied the kinetics of repair of these “streaks” also with respect to the chromatin conformation. Initial steps in the modeling of the energy deposition patterns at the micrometer and nanometer scale were made with MCHIT and TRAX models, respectively.
The results of microscopic transport calculations of antiproton-nucleus interactions within the GiBUU model are presented. The dominating mechanism of hyperon production is strangeness exchange processes of the type K̄N → Yπ and πY → ΞK. The calculated rapidity spectra of Ξ hyperons are significantly shifted to forward rapidities with respect to the spectra of S = −1 hyperons. We argue that this shift should be a sensitive test of possible exotic mechanisms of antiproton-nucleus annihilation. The production of double-Λ hypernuclei by Ξ− interactions with a secondary target is also calculated.
FIAS Scientific Report
(2011)
FIAS Scientific Report 2011
(2012)
FIAS Scientific Report 2010
(2011)
In the year 2010 the Frankfurt Institute for Advanced Studies has successfully continued to follow its agenda to pursue theoretical research in the natural sciences. As stipulated in its charter, FIAS closely collaborates with extramural research institutions, such as the Max Planck Institute for Brain Research in Frankfurt and the GSI Helmholtz Center for Heavy Ion Research, Darmstadt, and with research groups at the science departments of Goethe University. The institute also engages in the training of young researchers and the education of doctoral students. This Annual Report documents how these goals have been pursued in the year 2010. Notable events in the scientific life of the Institute will be presented, e.g., teaching activities in the framework of the Frankfurt International Graduate School for Science (FIGSS), colloquium schedules, conferences organized by FIAS, and a full bibliography of publications by authors affiliated with FIAS. The main part of the Report consists of short one-page summaries describing the scientific progress reached in individual research projects in the year 2010...
FIAS Scientific Report 2009
(2010)
In this Annual Report we present some of the ongoing activities of FIAS and of the associated graduate school, the “Frankfurt International Graduate School for Science” (FIGSS), in the year 2009. The main part of the Report consists of a collection of short reports describing the research projects of scientists working at or associated with FIAS.
In the juvenile brain, the synaptic architecture of the visual cortex remains in a state of flux for months after the natural onset of vision and the initial emergence of feature selectivity in visual cortical neurons. It is an attractive hypothesis that visual cortical architecture is shaped during this extended period of juvenile plasticity by the coordinated optimization of multiple visual cortical maps such as orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we introduced a class of analytically tractable coordinated optimization models and solved representative examples, in which a spatially complex organization of the OP map is induced by interactions between the maps. We found that these solutions near symmetry breaking threshold predict a highly ordered map layout. Here we examine the time course of the convergence towards attractor states and optima of these models. In particular, we determine the timescales on which map optimization takes place and how these timescales can be compared to those of visual cortical development and plasticity. We also assess whether our models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold, when the spatial periodicities of the two maps are detuned and when considering more than 2 feature dimensions. We show that, although maps typically undergo substantial rearrangement, no other solutions than pinwheel crystals and stripes dominate in the emerging layouts. Pinwheel crystallization takes place on a rather short timescale and can also occur for detuned wavelengths of different maps. Our numerical results thus support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the architecture of the visual cortex. 
We discuss several alternative scenarios that may improve the agreement between model solutions and biological observations.
In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable. 
We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps.
At nonzero temperature, it is expected that QCD undergoes a phase transition to a deconfined, chirally symmetric phase, the Quark-Gluon Plasma (QGP). I review what we expect theoretically about this possible transition, and what we have learned from heavy ion experiments at RHIC. I argue that while there are unambiguous signals for qualitatively new behavior at RHIC versus experiments at lower energies, in detail no simple theoretical model can explain all salient features of the data.
I discuss the physics of non-Abelian plasmas which are locally anisotropic in momentum space. Such momentum-space anisotropies are generated by the rapid longitudinal expansion of the matter created in the first 1 fm/c of an ultrarelativistic heavy ion collision. In contrast to locally isotropic plasmas anisotropic plasmas have a spectrum of soft unstable modes which are characterized by exponential growth of transverse chromo-magnetic/-electric fields at short times. This instability is the QCD analogue of the Weibel instability of QED. Parametrically the chromo-Weibel instability provides the fastest method for generation of soft background fields and dominates the short-time dynamics of the system. The existence of the chromo-Weibel instability has been proven using diagrammatic methods, transport theory, and numerical solution of classical Yang-Mills fields. I review the results obtained from each of these methods and discuss the numerical techniques which are being used to determine the late-time behavior of plasmas subject to a chromo-Weibel instability.
Extraction of network topology from multi-electrode recordings : is there a small-world effect?
(2011)
The simultaneous recording of the activity of many neurons poses challenges for multivariate data analysis. Here, we propose a general scheme of reconstruction of the functional network from spike train recordings. Effective, causal interactions are estimated by fitting generalized linear models on the neural responses, incorporating effects of the neurons’ self-history, of input from other neurons in the recorded network and of modulation by an external stimulus. The coupling terms arising from synaptic input can be transformed by thresholding into a binary connectivity matrix which is directed. Each link between two neurons represents a causal influence from one neuron to the other, given the observation of all other neurons from the population. The resulting graph is analyzed with respect to small-world and scale-free properties using quantitative measures for directed networks. Such graph-theoretic analyses have been performed on many complex dynamic networks, including the connectivity structure between different brain areas. Only a few studies have attempted to look at the structure of cortical neural networks on the level of individual neurons. Here, using multi-electrode recordings from the visual system of the awake monkey, we find that cortical networks lack scale-free behavior, but show a small but significant small-world structure. Assuming a simple distance-dependent probabilistic wiring between neurons, we find that this connectivity structure can account for all of the networks’ observed small-world-ness. Moreover, for multi-electrode recordings the sampling of neurons is not uniform across the population. We show that the small-world-ness obtained by such a localized sub-sampling overestimates the strength of the true small-world structure of the network. This bias is likely to be present in all previous experiments based on multi-electrode recordings.
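The final graph-theoretic step, comparing clustering and path length of the reconstructed network against a density-matched random graph, can be sketched as follows. The distance-dependent ring wiring below stands in for the GLM-derived connectivity matrix, and all sizes and probabilities are illustrative assumptions:

```python
import numpy as np
from collections import deque

def avg_path_length(A):
    """Mean shortest directed path length over all reachable ordered pairs (BFS)."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        reached = dist > 0
        total += dist[reached].sum()
        pairs += reached.sum()
    return total / pairs

def clustering(A):
    """Average clustering coefficient of the symmetrized graph."""
    und = ((A + A.T) > 0).astype(int)
    np.fill_diagonal(und, 0)
    k = und.sum(axis=1)
    tri = np.diag(und @ und @ und)       # closed length-3 walks through each node
    ok = k > 1
    return (tri[ok] / (k[ok] * (k[ok] - 1))).mean()

# Distance-dependent wiring on a ring, as a stand-in for the estimated network.
rng = np.random.default_rng(4)
n = 120
idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)                 # ring distance between neurons
P = 0.9 * np.exp(-d / 3.0)
np.fill_diagonal(P, 0)
A = (rng.random((n, n)) < P).astype(int)

# Density-matched directed random graph as the null model.
density = A.sum() / (n * (n - 1))
R = (rng.random((n, n)) < density).astype(int)
np.fill_diagonal(R, 0)

small_world_ness = (clustering(A) / clustering(R)) / (avg_path_length(A) / avg_path_length(R))
```

A value above 1 indicates small-world structure: clustering far exceeds the random-graph expectation while path lengths stay comparably short.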
Mitochondria form a dynamic tubular reticulum within eukaryotic cells. Currently, quantitative understanding of its morphological characteristics is largely absent, despite major progress in deciphering the molecular fission and fusion machineries shaping its structure. Here we address the principles of formation and the large-scale organization of the cell-wide network of mitochondria. On the basis of experimentally determined structural features we establish the tip-to-tip and tip-to-side fission and fusion events as dominant reactions in the motility of this organelle. Subsequently, we introduce a graph-based model of the chondriome able to encompass its inherent variability in a single framework. Using both mean-field deterministic and explicit stochastic mathematical methods we establish a relationship between the chondriome structural network characteristics and underlying kinetic rate parameters. The computational analysis indicates that mitochondrial networks exhibit a percolation threshold. Intrinsic morphological instability of the mitochondrial reticulum resulting from its vicinity to the percolation transition is proposed as a novel mechanism that can be utilized by cells for optimizing their functional competence via dynamic remodeling of the chondriome. The detailed size distribution of the network components predicted by the dynamic graph representation introduces a relationship between chondriome characteristics and cell function. It forms a basis for understanding the architecture of mitochondria as a cell-wide but inhomogeneous organelle. Analysis of the reticulum adaptive configuration offers a direct clarification for its impact on numerous physiological processes strongly dependent on mitochondrial dynamics and organization, such as efficiency of cellular metabolism, tissue differentiation and aging.
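The percolation threshold of the mitochondrial network can be illustrated with the simplest possible stand-in: a random graph whose mean degree plays the role of the fusion/fission balance. The dynamic graph model of the study is considerably richer, so this is only a sketch under that simplifying assumption:

```python
import numpy as np
from collections import deque

def giant_fraction(n, mean_degree, rng):
    """Fraction of nodes in the largest connected cluster of a random graph;
    mean_degree stands in for the fusion/fission balance of the chondriome."""
    p = mean_degree / (n - 1)
    A = rng.random((n, n)) < p
    A = np.triu(A, 1)
    A = A | A.T                          # undirected adjacency matrix
    seen = np.zeros(n, dtype=bool)
    best = 0
    for s in range(n):                   # BFS over connected components
        if seen[s]:
            continue
        size = 0
        seen[s] = True
        q = deque([s])
        while q:
            u = q.popleft()
            size += 1
            for v in np.flatnonzero(A[u] & ~seen):
                seen[v] = True
                q.append(v)
        best = max(best, size)
    return best / n

rng = np.random.default_rng(5)
fragmented = giant_fraction(2000, 0.5, rng)  # below threshold: many small pieces
reticulum = giant_fraction(2000, 2.0, rng)   # above threshold: one cell-wide network
```

For Erdős–Rényi graphs the threshold sits at mean degree 1, so sweeping mean_degree across that value reproduces the abrupt appearance of a cell-spanning component, the kind of transition the vicinity of which is proposed above as a lever for dynamic remodeling.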
Saccade-related modulations of neuronal excitability support synchrony of visually elicited spikes
(2011)
During natural vision, primates perform frequent saccadic eye movements, allowing only a narrow time window for processing the visual information at each location. Individual neurons may contribute only with a few spikes to the visual processing during each fixation, suggesting precise spike timing as a relevant mechanism for information processing. We recently found in V1 of monkeys freely viewing natural images, that fixation-related spike synchronization occurs at the early phase of the rate response after fixation-onset, suggesting a specific role of the first response spikes in V1. Here, we show that there are strong local field potential (LFP) modulations locked to the onset of saccades, which continue into the successive fixation periods. Visually induced spikes, in particular the first spikes after the onset of a fixation, are locked to a specific epoch of the LFP modulation. We suggest that the modulation of neural excitability, which is reflected by the saccade-related LFP changes, serves as a corollary signal enabling precise timing of spikes in V1 and thereby providing a mechanism for spike synchronization.
Following the discovery of context-dependent synchronization of oscillatory neuronal responses in the visual system, the role of neural synchrony in cortical networks has been expanded to provide a general mechanism for the coordination of distributed neural activity patterns. In the current paper, we present an update of the status of this hypothesis through summarizing recent results from our laboratory that suggest important new insights regarding the mechanisms, function and relevance of this phenomenon. In the first part, we present recent results derived from animal experiments and mathematical simulations that provide novel explanations and mechanisms for zero and non-zero phase lag synchronization. In the second part, we shall discuss the role of neural synchrony for expectancy during perceptual organization and its role in conscious experience. This will be followed by evidence that indicates that in addition to supporting conscious cognition, neural synchrony is abnormal in major brain disorders, such as schizophrenia and autism spectrum disorders. We conclude this paper with suggestions for further research as well as with critical issues that need to be addressed in future studies.
In binocular rivalry, presentation of different images to the separate eyes leads to conscious perception alternating between the two possible interpretations every few seconds. During perceptual transitions, a stimulus emerging into dominance can spread in a wave-like manner across the visual field. These traveling waves of rivalry dominance have been successfully related to the cortical magnification properties and functional activity of early visual areas, including the primary visual cortex (V1). Curiously however, these traveling waves undergo a delay when passing from one hemifield to another. In the current study, we used diffusion tensor imaging (DTI) to investigate whether the strength of interhemispheric connections between the left and right visual cortex might be related to the delay of traveling waves across hemifields. We measured the delay in traveling wave times (ΔTWT) in 19 participants and repeated this test 6 weeks later to evaluate the reliability of our behavioral measures. We found large interindividual variability but also good test–retest reliability for individual measures of ΔTWT. Using DTI in connection with fiber tractography, we identified parts of the corpus callosum connecting functionally defined visual areas V1–V3. We found that individual differences in ΔTWT were reliably predicted by the diffusion properties of transcallosal fibers connecting left and right V1, but observed no such effect for neighboring transcallosal visual fibers connecting V2 and V3. Our results demonstrate that the anatomical characteristics of topographically specific transcallosal connections predict the individual delay of interhemispheric traveling waves, providing further evidence that V1 is an important site for neural processes underlying binocular rivalry.
Neuronal mechanisms underlying beta/gamma oscillations (20-80 Hz) are not completely understood. Here, we show that in vivo beta/gamma oscillations in the cat visual cortex sometimes exhibit remarkably stable frequency even when inputs fluctuate dramatically. Enhanced frequency stability is associated with stronger oscillations measured in individual units and larger power in the local field potential. Simulations of neuronal circuitry demonstrate that membrane properties of inhibitory interneurons strongly determine the characteristics of emergent oscillations. Exploration of networks containing either integrator or resonator inhibitory interneurons revealed that: (i) Resonance, as opposed to integration, promotes robust oscillations with large power and stable frequency via a mechanism called RING (Resonance INduced Gamma); resonance favors synchronization by reducing phase delays between interneurons and imposes bounds on oscillation cycle duration; (ii) Stability of frequency and robustness of the oscillation also depend on the relative timing of excitatory and inhibitory volleys within the oscillation cycle; (iii) RING can reproduce characteristics of both Pyramidal INterneuron Gamma (PING) and INterneuron Gamma (ING), transcending such classifications; (iv) In RING, robust gamma oscillations are promoted by slow but are impaired by fast inputs. Results suggest that interneuronal membrane resonance can be an important ingredient for generation of robust gamma oscillations having stable frequency.
The cerebral cortex presents itself as a distributed dynamical system with the characteristics of a small world network. The neuronal correlates of cognitive and executive processes often appear to consist of the coordinated activity of large assemblies of widely distributed neurons. These features require mechanisms for the selective routing of signals across densely interconnected networks, the flexible and context dependent binding of neuronal groups into functionally coherent assemblies and the task and attention dependent integration of subsystems. In order to implement these mechanisms, it is proposed that neuronal responses should convey two orthogonal messages in parallel. They should indicate (1) the presence of the feature to which they are tuned and (2) with which other neurons (specific target cells or members of a coherent assembly) they are communicating. The first message is encoded in the discharge frequency of the neurons (rate code) and it is proposed that the second message is contained in the precise timing relationships between individual spikes of distributed neurons (temporal code). It is further proposed that these precise timing relations are established either by the timing of external events (stimulus locking) or by internal timing mechanisms. The latter are assumed to consist of an oscillatory modulation of neuronal responses in different frequency bands that cover a broad frequency range from <2 Hz (delta) to >40 Hz (gamma) and ripples. These oscillations limit the communication of cells to short temporal windows whereby the duration of these windows decreases with oscillation frequency. Thus, by varying the phase relationship between oscillating groups, networks of functionally cooperating neurons can be flexibly configured within hard-wired networks.
Moreover, by synchronizing the spikes emitted by neuronal populations, the saliency of their responses can be enhanced due to the coincidence sensitivity of receiving neurons in very much the same way as can be achieved by increasing the discharge rate. Experimental evidence will be reviewed in support of the coexistence of rate and temporal codes. Evidence will also be provided that disturbances of temporal coding mechanisms are likely to be one of the pathophysiological mechanisms in schizophrenia.
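The idea that oscillations confine communication to short temporal windows, so that the phase relation between two groups gates their effective connectivity, can be sketched in a toy calculation. All numbers below (frequency, window width, one spike per cycle) are illustrative assumptions, not values from the review.

```python
import numpy as np

def transmission(phase_offset, freq=40.0, T=1.0, width=0.25):
    """Fraction of sender spikes landing in the receiver's excitable window.

    The sender emits one spike per cycle at its oscillation peak; the
    receiver accepts input only within +/- `width` cycles of its own peak.
    `phase_offset` is the receiver's phase lead, in cycles.
    """
    n_cycles = int(T * freq)
    sender_peaks = np.arange(n_cycles) / freq
    receiver_peaks = (np.arange(n_cycles) + phase_offset) / freq
    accepted = 0
    for t in sender_peaks:
        # circular distance (in cycles) to the nearest receiver peak
        d = np.min(np.abs(((t - receiver_peaks) * freq + 0.5) % 1.0 - 0.5))
        accepted += d <= width
    return accepted / n_cycles

in_phase = transmission(0.0)    # aligned oscillations
anti_phase = transmission(0.5)  # opposite phase
```

Shifting the relative phase from zero to half a cycle switches transmission from complete to none; this phase-dependent gating is the mechanism by which functionally cooperating groups could be flexibly configured within hard-wired anatomy.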
The intrinsic complexity of the brain can lead one to set aside issues related to its relationship with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect, the brain’s abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However, this formulation contains a substantial issue when sensorimotor modules are used in combination: the credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
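The credit-assignment problem described here — dividing a single global reward among the modules active on each trial — can be sketched as a toy linear estimation problem. The module count, hidden qualities, and noise level below are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
true_quality = np.array([0.9, 0.4, 0.7])   # hidden per-module performance
n_modules = len(true_quality)

# Each trial engages a random subset of modules; only the summed,
# noisy global reward is observed (the credit-assignment problem).
X = rng.integers(0, 2, size=(200, n_modules)).astype(float)
r = X @ true_quality + rng.normal(0.0, 0.01, size=200)

# Least squares over diverse task combinations recovers each module's credit.
credit, *_ = np.linalg.lstsq(X, r, rcond=None)
```

Diversity of task combinations is exactly what makes this solvable: if every trial engaged all modules, `X` would be constant and the per-module credit unidentifiable, mirroring the abstract's point that diverse task combinations help learning rather than complicate it.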
Even in V1, where neurons have well-characterized classical receptive fields (CRFs), it has been difficult to deduce which features of natural-scene stimuli they actually respond to. Forward models based upon CRF stimuli have had limited success in predicting the response of V1 neurons to natural scenes. As natural scenes exhibit complex spatial and temporal correlations, this could be due to surround effects that modulate the sensitivity of the CRF. Here, instead of attempting a forward model, we quantify the importance of the natural-scene surround for awake macaque monkeys by modeling it non-parametrically. We also quantify the influence of two forms of trial-to-trial variability. The first is related to the neuron’s own spike history. The second is related to ongoing mean-field population activity reflected by the local field potential (LFP). We find that the surround produces strong temporal modulations in the firing rate that can be both suppressive and facilitative. Further, the LFP is found to induce a precise timing in spikes, which tend to be temporally localized on sharp LFP transients in the gamma frequency range. Using the pseudo-R2 as a measure of model fit, we find that during natural scene viewing the CRF dominates, accounting for 60% of the fit, but that taken collectively the surround, spike history and LFP are almost as important, accounting for 40%. However, overall only a small proportion of V1 spiking statistics could be explained (R2 ≈ 5%), even when the full stimulus, spike history and LFP were taken into account. This suggests that under natural scene conditions, the dominant influence on V1 neurons is not the stimulus, nor the mean-field dynamics of the LFP, but the complex, incoherent dynamics of the network in which neurons are embedded.
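The pseudo-R2 used here to apportion model fit can be written down explicitly. The deviance-based definition below is a standard choice for Poisson spiking models; it is an assumption, since the abstract does not specify the exact variant used.

```python
import numpy as np

def poisson_pseudo_r2(y, rate_full, rate_null):
    """Deviance-based pseudo-R^2 for a Poisson spiking model.

    pseudo-R^2 = 1 - D(full model) / D(null model), where D is the
    Poisson deviance and the null model is the mean firing rate.
    """
    eps = 1e-12
    def deviance(y, mu):
        mu = np.maximum(mu, eps)
        # y * log(y / mu), with the convention 0 * log(0) = 0
        term = np.where(y > 0, y * np.log(np.maximum(y, eps) / mu), 0.0)
        return 2.0 * np.sum(term - (y - mu))
    return 1.0 - deviance(y, rate_full) / deviance(y, rate_null)

# Toy check: a model that knows the true modulated rate beats the null.
rng = np.random.default_rng(1)
true_rate = 0.5 + 0.4 * np.sin(np.linspace(0.0, 20.0, 2000))
y = rng.poisson(true_rate)
r2 = poisson_pseudo_r2(y, true_rate, np.full_like(true_rate, y.mean()))
```

A pseudo-R2 of a few percent, as reported in the abstract, is typical for single-trial spiking data: the deviance of a Poisson process is dominated by irreducible point-process variability even when the rate model is exactly right.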
Mitochondrial dynamics and mitophagy play a key role in ensuring mitochondrial quality control. Impairment thereof has been proposed as causative of neurodegenerative diseases, diabetes, and cancer. Accumulation of mitochondrial dysfunction has further been linked to aging. Here we applied a probabilistic modeling approach integrating our current knowledge of mitochondrial biology, allowing us to simulate mitochondrial function and quality control during aging in silico. We demonstrate that cycles of fusion and fission together with mitophagy are indeed essential for ensuring a high average quality of mitochondria, even under conditions in which random molecular damage is present. Prompted by earlier observations that mitochondrial fission itself can cause a partial drop in mitochondrial membrane potential, we tested the consequences of mitochondrial dynamics being harmful on its own. Besides directly impairing mitochondrial function, pre-existing molecular damage may be propagated and enhanced across the mitochondrial population by content mixing. In this situation, such an infection-like phenomenon impairs mitochondrial quality control progressively. However, when imposing an age-dependent deceleration of cycles of fusion and fission, we observe a delay in the loss of average quality of mitochondria. This provides a rationale for why fusion and fission rates are reduced during aging and why loss of a mitochondrial fission factor can extend life span in fungi. We propose the ‘mitochondrial infectious damage adaptation’ (MIDA) model, according to which a deceleration of fusion–fission cycles reflects a systemic adaptation increasing life span.
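The interplay of random damage, content mixing by fusion-fission, and selective mitophagy can be caricatured in a few lines. All rates and thresholds below are invented for illustration; this is not the paper's probabilistic model.

```python
import numpy as np

def simulate(seed, n=100, steps=4000, hit=0.05, mitophagy=True, thr=0.5):
    """Toy quality-control model over a population of n mitochondria."""
    rng = np.random.default_rng(seed)
    q = np.ones(n)                     # per-mitochondrion quality in [0, 1]
    for _ in range(steps):
        # random molecular damage to one mitochondrion
        k = rng.integers(n)
        q[k] = max(0.0, q[k] - hit)
        # fusion-fission cycle: a random pair mixes content (quality is
        # averaged), which also propagates damage "infectiously"
        i, j = rng.choice(n, size=2, replace=False)
        q[i] = q[j] = 0.5 * (q[i] + q[j])
        # mitophagy: degrade the worst unit if it falls below threshold,
        # replacing it by biogenesis from a random template
        if mitophagy and q.min() < thr:
            q[np.argmin(q)] = q[rng.integers(n)]
    return q.mean()

with_qc = simulate(0, mitophagy=True)
no_qc = simulate(0, mitophagy=False)
```

In this sketch, mean quality collapses without selective mitophagy, while the full cycle sustains it; slowing the fusion-fission step relative to the damage rate is how the MIDA idea of decelerated dynamics delaying quality loss could be probed.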