Frankfurt Institute for Advanced Studies (FIAS)
Neuronal dynamics differ between wakefulness and sleep stages, as does the cognitive state. In contrast, a single attractor state, termed self-organized criticality (SOC), has been proposed to govern human brain dynamics because of its optimal information coding and processing capabilities. Here we address two open questions. First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas, yet the interaction between brain areas may be organized differently; we therefore asked whether the interaction between brain areas is SOC. ...
The charged-particle community is looking for techniques that exploit proton interactions instead of X-ray absorption to create images of human tissue. Owing to multiple Coulomb scattering inside the measured object, achieving sufficient spatial resolution has proven highly non-trivial. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods, for which reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler, and the device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
Interacting with the environment to process sensory information, generate perceptions, and shape behavior engages neural networks in brain areas with highly varied representations, ranging from unimodal sensory cortices to higher-order association areas. Recent work suggests a much greater degree of commonality across areas, with distributed and modular networks present in both sensory and non-sensory areas during early development. However, it is currently unknown whether this initially common modular structure undergoes an equally common developmental trajectory, or whether such a modular functional organization persists in some areas—such as primary visual cortex—but not others. Here we examine the development of network organization across diverse cortical regions in ferrets of both sexes using in vivo widefield calcium imaging of spontaneous activity. We find that all regions examined, including both primary sensory cortices (visual, auditory, and somatosensory—V1, A1, and S1, respectively) and higher order association areas (prefrontal and posterior parietal cortices) exhibit a largely similar pattern of changes over an approximately 3 week developmental period spanning eye opening and the transition to predominantly externally-driven sensory activity. We find that both a modular functional organization and millimeter-scale correlated networks remain present across all cortical areas examined. These networks weakened over development in most cortical areas, but strengthened in V1. Overall, the conserved maintenance of modular organization across different cortical areas suggests a common pathway of network refinement, and suggests that a modular organization—known to encode functional representations in visual areas—may be similarly engaged in highly diverse brain areas.
Significance Different areas of the mature brain encode vastly different representations of the world. This study shows that a modular functional organization, in which nearby neurons participate in similar functional networks, is shared across different brain areas not only during early development, but also as the brain matures, where it remains a shared feature that shapes neural activity. The largely conserved trajectory of developmental changes across brain areas suggests that similar circuit mechanisms may drive this maturation. This implies that the large literature on developing cortical circuits, which is largely focused on sensory areas, may also apply more broadly, and that perturbations during development that impinge on any such shared mechanisms may produce deficits that extend across multiple brain systems.
We present the black hole accretion code (BHAC), a new multidimensional general-relativistic magnetohydrodynamics module for the MPI-AMRVAC framework. BHAC has been designed to solve the equations of ideal general-relativistic magnetohydrodynamics in arbitrary spacetimes and exploits adaptive mesh refinement techniques with an efficient block-based approach. Several spacetimes have already been implemented and tested. We demonstrate the validity of BHAC by means of various one-, two-, and three-dimensional test problems, as well as through a close comparison with the HARM3D code in the case of a torus accreting onto a black hole. The convergence of a turbulent accretion scenario is investigated with several diagnostics and we find accretion rates and horizon-penetrating fluxes to be convergent to within a few percent when the problem is run in three dimensions. Our analysis also involves the study of the corresponding thermal synchrotron emission, which is performed by means of a new general-relativistic radiative transfer code, BHOSS. The resulting synthetic intensity maps of accretion onto black holes are found to be convergent with increasing resolution and are anticipated to play a crucial role in the interpretation of horizon-scale images resulting from upcoming radio observations of the source at the Galactic Center.
The wave function of a spheroidal harmonic oscillator without spin-orbit interaction is expressed in terms of associated Laguerre and Hermite polynomials. The pairing gap and Fermi energy are found by solving the BCS system of two equations. Analytical relationships for the matrix elements of inertia are obtained as functions of the main quantum numbers and the potential derivative. They may be used to test the complex computer codes one has to develop in a realistic approach to fission dynamics. The results for the 240Pu nucleus are compared with a hydrodynamical model. The importance of taking into account the correction term due to the variation of the occupation number is stressed.
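The BCS system of two equations mentioned above (the gap equation and the particle-number condition) can be sketched numerically. The equidistant single-particle spectrum, pairing strength G, and particle number below are illustrative placeholders, not values from the paper:

```python
# Minimal sketch of solving the BCS two-equation system for the pairing gap
# (Delta) and Fermi energy (lambda). Levels, G, and N are hypothetical.
import numpy as np
from scipy.optimize import fsolve

eps = np.linspace(-10.0, 10.0, 40)  # hypothetical single-particle energies (MeV)
G = 0.4                             # hypothetical pairing strength (MeV)
N = 40                              # particle number (each level holds a pair)

def bcs_equations(x):
    delta, lam = x
    E = np.sqrt((eps - lam) ** 2 + delta ** 2)  # quasiparticle energies
    gap_eq = 2.0 / G - np.sum(1.0 / E)          # gap equation: 2/G = sum 1/E_k
    num_eq = np.sum(1.0 - (eps - lam) / E) - N  # particle number: N = sum 2 v_k^2
    return [gap_eq, num_eq]

delta, lam = fsolve(bcs_equations, x0=[1.0, 0.0])
print(f"Delta = {delta:.3f} MeV, lambda = {lam:.3f} MeV")
```

With a spectrum symmetric about zero and half-filling, the Fermi energy lands at the center of the spectrum and only the gap equation fixes Delta.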
Potential energy surfaces are calculated using the most advanced asymmetric two-center shell model, which allows one to obtain shell and pairing corrections that are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy-ion emission. Results for 222Ra, 232U, 236Pu, and 242Cm illustrate the main ideas and show, for the first time for a cluster emitter, a potential barrier obtained using the macroscopic-microscopic method.
Complex fission phenomena
(2004)
Complex fission phenomena are studied in a unified way. Very general reflection-asymmetric equilibrium (saddle point) nuclear shapes are obtained by solving an integro-differential equation without the need to specify a particular shape parametrization. The mass asymmetry in binary cold fission of Th and U isotopes is explained as the result of adding a phenomenological shell correction to the liquid drop model deformation energy. Applications to binary, ternary, and quaternary fission are outlined.
Sharp wave-ripples (SPW-Rs) are a hippocampal network phenomenon critical for memory consolidation and planning. SPW-Rs have been extensively studied in the adult brain, yet their developmental trajectory is poorly understood. While SPWs have been recorded in rodents shortly after birth, the time point and mechanisms of ripple emergence are still unclear. Here, we combine in vivo electrophysiology with optogenetics and chemogenetics in 4- to 12-day-old mice to address this knowledge gap. We show that ripples are robustly detected and induced by light stimulation of ChR2-transfected CA1 pyramidal neurons only from postnatal day (P) 10 onwards. Leveraging a spiking neural network model, we mechanistically link the maturation of inhibition and ripple emergence. We corroborate these findings by reducing the ripple rate upon chemogenetic silencing of CA1 interneurons. Finally, we show that early SPW-Rs elicit a more robust prefrontal cortex response than SPWs lacking ripples. Thus, the development of inhibition promotes ripple emergence.
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree, and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells we show that denervation-induced dendritic atrophy also subserves homeostatic functions: By shortening their dendritic tree, granule cells compensate for the loss of inputs by a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any given dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
At nonzero temperature, it is expected that QCD undergoes a phase transition to a deconfined, chirally symmetric phase, the Quark-Gluon Plasma (QGP). I review what we expect theoretically about this possible transition, and what we have learned from heavy ion experiments at RHIC. I argue that while there are unambiguous signals for qualitatively new behavior at RHIC relative to experiments at lower energies, in detail no simple theoretical model can explain all salient features of the data.
NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events
(2009)
Poster presentation: We present a non-parametric and computationally efficient method named NeuroXidence (see http://www.NeuroXidence.com ) that detects coordinated firing within a group of two or more neurons and tests whether the observed level of coordinated firing is significantly different from that expected by chance. NeuroXidence [1] considers the full auto-structure of the data, including the changes in the rate responses and the history dependencies in the spiking activity. We demonstrate that NeuroXidence can identify epochs with significant spike synchronisation even if these coincide with strong and fast rate modulations. We also show that the method accounts for trial-by-trial variability in the rate responses and their latencies, and that it can be applied to short data windows lasting only tens of milliseconds. Based on simulated data, we compare the performance of NeuroXidence with the UE-method [2,3] and cross-correlation analysis. An application of NeuroXidence to 42 single units (SUs) recorded in area 17 of an anesthetized cat revealed significant coincident events of high complexities, involving firing of up to 8 SUs simultaneously (5 ms window). The results were highly consistent with those obtained by traditional pair-wise measures based on cross-correlation: neuronal synchrony was strongest in stimulation conditions in which the orientation of the sinusoidal grating matched the preferred orientation of most of the SUs included in the analysis, and weakest when the neurons were stimulated least optimally. Interestingly, events of higher complexities showed stronger stimulus-specific modulation than pair-wise interactions. The results provide strong evidence for stimulus-specific synchronous firing and, therefore, support the temporal coding hypothesis in visual cortex. ...
Poster presentation: Coordinated neuronal activity across many neurons, i.e. synchronous or spatiotemporal patterns, has long been believed to be a major component of neuronal activity. However, the discussion of whether coordinated activity really exists has remained heated and controversial. A major uncertainty is that many analysis approaches either ignore the auto-structure of the spiking activity, assume a very simplified model (Poissonian firing), or change the auto-structure by spike jittering. We studied whether a statistical inference that tests whether coordinated activity occurs beyond chance can be rendered false if one ignores or changes the real auto-structure of recorded data. To this end, we investigated the distribution of coincident spikes in mutually independent spike trains modeled as renewal processes. We considered Gamma processes with different shape parameters as well as renewal processes in which the ISI distribution is log-normal. For Gamma processes of integer order, we calculated the mean number of coincident spikes, as well as the Fano factor of the coincidences, analytically. We determined how these measures depend on the bin width and also investigated how they depend on the firing rate and on the rate difference between the neurons. We used Monte-Carlo simulations to estimate the whole distribution for these parameters and also for other values of the shape parameter. Moreover, we considered the effect of dithering for both of these processes and found that while dithering does not change the average number of coincidences, it does change the shape of the coincidence distribution. Our major findings are: 1) the width of the coincidence count distribution depends very critically and in a non-trivial way on the detailed properties of the inter-spike interval distribution; 2) the dependencies of the Fano factor on the coefficient of variation (CV) of the ISI distribution are complex and mostly non-monotonic.
Moreover, the Fano factor depends on the very detailed properties of the individual point processes and cannot be predicted by the CV alone. Hence, given a recorded data set, the estimated CV of the ISI distribution is not sufficient to predict the Fano factor of the coincidence count distribution; and 3) spike jittering, even if it is as small as a fraction of the expected ISI, can falsify the inference on coordinated firing. In most of the tested cases, and especially for complex synchronous and spatiotemporal patterns across many neurons, spike jittering strongly increased the likelihood of false positive findings. Finally, we discuss a procedure [1] that considers the complete auto-structure of each individual spike train when testing whether synchronous firing occurs at chance level and therefore overcomes the danger of an increased level of false positives.
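The Monte-Carlo procedure described above can be sketched as follows; the rates, shape parameter, bin width, and duration are illustrative choices, not the values used in the study:

```python
# Sketch: generate mutually independent Gamma renewal spike trains, count
# binned coincidences, and estimate the Fano factor of the coincidence count.
import numpy as np

rng = np.random.default_rng(0)

def gamma_spike_train(rate_hz, shape, duration_s, rng):
    """Renewal process with Gamma ISIs; scale chosen so mean ISI = 1/rate."""
    scale = 1.0 / (rate_hz * shape)
    isis = rng.gamma(shape, scale, size=int(rate_hz * duration_s * 2) + 100)
    spikes = np.cumsum(isis)
    return spikes[spikes < duration_s]

def coincidence_count(train_a, train_b, bin_s, duration_s):
    """Number of bins in which both trains fire at least once."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    a, _ = np.histogram(train_a, edges)
    b, _ = np.histogram(train_b, edges)
    return int(np.sum((a > 0) & (b > 0)))

counts = np.array([coincidence_count(gamma_spike_train(20, 4.0, 10.0, rng),
                                     gamma_spike_train(20, 4.0, 10.0, rng),
                                     0.005, 10.0)
                   for _ in range(500)], dtype=float)
fano = counts.var() / counts.mean()
print(f"mean coincidences = {counts.mean():.1f}, Fano factor = {fano:.2f}")
```

Varying the shape parameter, bin width, and rate difference in this loop reproduces the kind of dependencies the poster discusses.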
Poster presentation: How can two distant neural assemblies synchronize their firing at zero lag even in the presence of non-negligible delays in the transfer of information between them? Neural synchronization stands today as one of the most promising mechanisms to counterbalance the huge anatomical and functional specialization of the different brain areas. However, although more evidence is accumulating in favor of its functional role as a binding mechanism for distributed neural responses, the physical and anatomical substrate for such dynamic and precise synchrony, especially at zero lag in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization of spiking assemblies of neurons for a wide range of temporal delays. We demonstrate that two distant neural assemblies that do not interact directly, but relay their dynamics via a third mediating neuron or population, can eventually achieve zero-lag coherent firing. Extensive numerical simulations of populations of Hodgkin-Huxley neurons interacting in such a network are analyzed. The results show that even with axonal delays as large as 15 ms, the distant neural populations can synchronize their firing at zero lag with millisecond precision after the exchange of a few spikes. The roles of noise and of a distribution of axonal delays in the synchronized dynamics of the neural populations are also studied, confirming the robustness of this synchronization mechanism. The proposed network module is densely embedded within the complex functional architecture of the brain, especially within the reciprocal thalamocortical interactions, where the role of indirect pathways mimicking direct cortico-cortical fibers has already been suggested to facilitate trans-areal cortical communication.
In summary, the robust neural synchronization mechanism presented here arises as a consequence of the relay and redistribution of the dynamics performed by a mediating neuronal population. In contrast to previous works, neither inhibition, gap junctions, nor complex network topologies need to be invoked to provide a stable mechanism for zero-phase correlated activity of neural populations in the presence of large conduction delays.
Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14–95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65–95 Hz and 14–50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.
Poster presentation: Characterizing neuronal encoding is essential for understanding information processing in the brain. Three methods are commonly used to characterize the relationship between neural spiking activity and the features of putative stimuli: Wiener-Volterra kernel methods (WVK), the spike-triggered average (STA), and, more recently, the point-process generalized linear model (GLM). We compared the performance of these three approaches in estimating receptive field properties and orientation tuning of 251 V1 neurons recorded from 2 monkeys during a fixation period in response to a moving bar. The GLM consisted of two formulations of the conditional intensity function for a point-process characterization of the spiking activity: one with a stimulus-only component and one with the stimulus and spike history. We fit the GLMs by maximum likelihood using GLMfit in Matlab. Goodness-of-fit was assessed using cross-validation with Kolmogorov-Smirnov (KS) tests based on the time-rescaling theorem to evaluate the accuracy with which each model predicts the spiking activity of individual neurons for each movement direction (4016 models in total, for 251 neurons and 16 different directions). The GLMs that considered spike history of up to 35 ms accurately predicted neuronal spiking activity (95% confidence intervals for the KS test) with a performance of 97.0% (3895/4016) for the training data and 96.5% (3876/4016) for the test data. If spike history was not considered, performance dropped to 73.1% in the training and 71.3% in the test data. In contrast, the WVK and the STA predicted spiking accurately for 24.2% and 44.5% of the test data examples, respectively. The receptive field size estimates obtained from the GLM (with and without history), WVK, and STA were comparable. Relative to the GLM, orientation tuning was underestimated on average by a factor of 0.45 by the WVK and the STA.
The main reason for using the STA and WVK approaches is their apparent simplicity. However, our analyses suggest that more accurate spike prediction, as well as more credible estimates of receptive field size and orientation tuning, can be computed easily using GLMs implemented in Matlab with standard functions such as GLMfit.
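An analogous point-process GLM with a spike-history component can be sketched in Python with standard tools (the analyses above used Matlab's GLMfit); the stimulus filter, history length, and simulation parameters below are toy assumptions for illustration:

```python
# Sketch of a Poisson point-process GLM with spike history, fit by maximum
# likelihood. All filters and parameters are toy choices, not the study's.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, H = 5000, 5                      # number of 1-ms bins, history taps
stim = rng.standard_normal(T)
b0, b_stim = -2.5, 0.8              # true baseline and stimulus weight
h_true = np.array([-1.0, -0.5, -0.3, -0.2, -0.1])  # refractory-like history filter

# Simulate spikes bin by bin (Bernoulli approximation of the point process).
spikes = np.zeros(T)
for t in range(T):
    hist = spikes[max(0, t - H):t][::-1]            # most recent bin first
    hist = np.pad(hist, (0, H - len(hist)))
    lam = np.exp(b0 + b_stim * stim[t] + hist @ h_true)
    spikes[t] = rng.random() < min(lam, 1.0)

# Design matrix: [1, stimulus, spike history at lags 1..H].
X = np.ones((T, 2 + H))
X[:, 1] = stim
for k in range(H):
    X[k + 1:, 2 + k] = spikes[:T - k - 1]

def negloglik(w):                   # Poisson log-likelihood, up to a constant
    eta = X @ w
    return np.sum(np.exp(eta)) - spikes @ eta

def grad(w):
    return X.T @ np.exp(X @ w) - X.T @ spikes

w_hat = minimize(negloglik, np.zeros(2 + H), jac=grad, method="BFGS").x
print("estimated stimulus weight:", round(w_hat[1], 2))
```

The fitted stimulus weight recovers the generating value and the history coefficients come out negative, mirroring the refractory suppression built into the simulation.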
The cumulant ratios up to fourth order of the Z distributions of the largest fragment in spectator fragmentation following 107,124Sn+Sn and 124La+Sn collisions at 600 MeV/nucleon have been investigated. They are found to exhibit the signatures of a second-order phase transition established with cubic bond percolation and previously observed in the ALADIN experimental data for fragmentation of 197Au projectiles at similar energies. The deduced pseudocritical points are found to be only weakly dependent on the A/Z ratio of the fragmenting spectator source. The same holds for the corresponding chemical freeze-out temperatures of close to 6 MeV. The experimental cumulant distributions are quantitatively reproduced with the Statistical Multifragmentation Model and the parameters used to describe the experimental fragment multiplicities, isotope distributions, and their correlations with impact-parameter-related observables in these reactions. The characteristic coincidence of the zero transition of the skewness with the minimum of the kurtosis excess appears to be a generic property of statistical models and is found to coincide with the maximum of the heat capacity in the canonical thermodynamic fragmentation model.
Self-organized complexity and Coherent Infomax from the viewpoint of Jaynes’s probability theory
(2012)
This paper discusses concepts of self-organized complexity and the theory of Coherent Infomax in the light of Jaynes’s probability theory. Coherent Infomax shows, in principle, how adaptively self-organized complexity can be preserved and improved by using probabilistic inference that is context-sensitive. The paper argues that neural systems do this by combining local reliability with flexible, holistic, context-sensitivity. Jaynes argued that the logic of probabilistic inference shows it to be based upon Bayesian and Maximum Entropy methods or special cases of them. He presented his probability theory as the logic of science; here it is considered as the logic of life. It is concluded that the theory of Coherent Infomax specifies a general objective for probabilistic inference, and that contextual interactions in neural systems perform functions required of the scientist within Jaynes’s theory.
Lattice QCD with heavy quarks reduces to a three-dimensional effective theory of Polyakov loops, which is amenable to series expansion methods. We analyse the effective theory in the cold and dense regime for a general number of colours, Nc. In particular, we investigate the transition from a hadron gas to baryon condensation. For any finite lattice spacing, we find the transition to become stronger, i.e. ultimately first-order, as Nc is made large. Moreover, in the baryon condensed regime, we find the pressure to scale as p ∼ Nc through three orders in the hopping expansion. Such a phase differs from a hadron gas with p ∼ Nc^0, or a quark gluon plasma, p ∼ Nc^2, and was termed quarkyonic in the literature, since it shows both baryon-like and quark-like aspects. A lattice filling with baryon number shows a rapid and smooth transition from condensing baryons to a crystal of saturated quark matter, due to the Pauli principle, and is consistent with this picture. For continuum physics, the continuum limit needs to be taken before the large-Nc limit, which is not yet possible in practice. However, in the controlled range of lattice spacings and Nc values, our results are stable when the limits are approached in this order. We discuss possible implications for physical QCD.
LatticeQCD using OpenCL
(2011)
The global energy system is undergoing a major transition, and in energy planning and decision-making across governments, industry and academia, models play a crucial role. Because of their policy relevance and contested nature, the transparency and open availability of energy models and data are of particular importance. Here we provide a practical how-to guide based on the collective experience of members of the Open Energy Modelling Initiative (Openmod). We discuss key steps to consider when opening code and data, including determining intellectual property ownership, choosing a licence and appropriate modelling languages, distributing code and data, and providing support and building communities. After illustrating these decisions with examples and lessons learned from the community, we conclude that even though individual researchers' choices are important, institutional changes are still also necessary for more openness and transparency in energy research.
Volatility is a widely recognized measure of market risk. As volatility is not observed, it has to be estimated from market prices, e.g., as the implied volatility from option prices. The volatility index VIX, which makes volatility a tradeable asset in its own right, is computed from near- and next-term put and call options on the S&P 500 with more than 23 days and less than 37 days to expiration and non-vanishing bid. In the present paper we quantify the information content of the constituents of the VIX about the volatility of the S&P 500 in terms of the Fisher information matrix. Assuming that observed option prices are centered on the theoretical price provided by Heston's model, perturbed by additive Gaussian noise, we relate their Fisher information matrix to the Greeks in the Heston model. We find that the prices of options contained in the VIX basket allow for reliable estimates of the volatility of the S&P 500 with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value of roughly 3%, inferences from option prices become imprecise because Vega, the derivative of a European option price with respect to volatility, and thereby the Fisher information, nearly vanishes.
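The mechanism can be illustrated with a simpler toy model: under Black-Scholes (used here only as a stand-in for Heston, with hypothetical strike, maturity, and noise level), the Vega of an out-of-the-money option collapses as volatility drops, so the Fisher information about volatility carried by its noisy price, which scales as Vega squared over the noise variance for additive Gaussian noise, nearly vanishes:

```python
# Toy illustration: Black-Scholes Vega of an OTM call collapses at low vol,
# and with it the scalar Fisher information about sigma. All parameters are
# hypothetical; the paper itself works with Heston's model.
import numpy as np

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes Vega of a European option (same for call and put)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * np.exp(-0.5 * d1**2) / np.sqrt(2 * np.pi) * np.sqrt(T)

S, K, T, r = 100.0, 110.0, 30 / 365, 0.0   # OTM call, ~30 days to expiry
noise_sd = 0.05                             # assumed Gaussian price-noise std

for sigma in (0.20, 0.10, 0.03):
    vega = bs_vega(S, K, T, r, sigma)
    fisher = (vega / noise_sd) ** 2         # Fisher information about sigma
    print(f"sigma = {sigma:.2f}: Vega = {vega:.4g}, Fisher info = {fisher:.3g}")
```

As sigma shrinks, d1 for the out-of-the-money strike diverges and Vega, hence the information about volatility, decays to essentially zero.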
The goal of heavy ion reactions at low beam energies is to explore the QCD phase diagram at high net baryon chemical potential. To relate experimental observations with a first order phase transition or a critical endpoint, dynamical approaches for the theoretical description have to be developed. In this summary of the corresponding plenary talk, the status of the dynamical modeling including the most recent advances is presented. The remaining challenges are highlighted and promising experimental measurements are pointed out.
Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
(2019)
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. Yet, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma-responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here, we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects show some persistence on the timescale of minutes. Gamma increases are specific to the presented stimulus location. Further, repetition effects on gamma and on firing rates generalize to images of natural objects. These findings support the notion that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters.
Background: The technical development of imaging techniques in life sciences has enabled the three-dimensional recording of living samples at increasing temporal resolutions. Dynamic 3D data sets of developing organisms allow for time-resolved quantitative analyses of morphogenetic changes in three dimensions, but require efficient and automatable analysis pipelines to tackle the resulting Terabytes of image data. Particle image velocimetry (PIV) is a robust and segmentation-free technique that is suitable for quantifying collective cellular migration on data sets with different labeling schemes. This paper presents the implementation of an efficient 3D PIV package using the Julia programming language—quickPIV. Our software is focused on optimizing CPU performance and ensuring the robustness of the PIV analyses on biological data.
Results: QuickPIV is three times faster than the Python implementation hosted in openPIV, both in 2D and 3D. Our software is also faster than the fastest 2D PIV package in openPIV, written in C++. The accuracy evaluation of our software on synthetic data agrees with the expected accuracies described in the literature. Additionally, by applying quickPIV to three data sets of the embryogenesis of Tribolium castaneum, we obtained vector fields that recapitulate the migration movements of gastrulation, both in nuclear and actin-labeled embryos. We show normalized squared error cross-correlation to be especially accurate in detecting translations in non-segmentable biological image data.
Conclusions: The presented software addresses the need for a fast and open-source 3D PIV package in biological research. Currently, quickPIV offers efficient 2D and 3D PIV analyses featuring zero-normalized and normalized squared error cross-correlations, sub-pixel/voxel approximation, and multi-pass. Post-processing options include filtering and averaging of the resulting vector fields, extraction of velocity, divergence and collectiveness maps, simulation of pseudo-trajectories, and unit conversion. In addition, our software includes functions to visualize the 3D vector fields in Paraview.
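The core PIV step, estimating the displacement between two interrogation windows from the peak of their cross-correlation, can be sketched in a few lines; this is a minimal 2D NumPy illustration with synthetic data, not quickPIV's Julia implementation:

```python
# Sketch of FFT-based, zero-normalized cross-correlation PIV on a synthetic
# pair of frames related by a known circular shift.
import numpy as np

rng = np.random.default_rng(2)
frame0 = rng.random((64, 64))
shift = (3, -2)                                   # known displacement to recover
frame1 = np.roll(frame0, shift, axis=(0, 1))      # frame at the next time point

def piv_displacement(win_a, win_b):
    """Displacement of win_b relative to win_a from the cross-correlation peak."""
    a = win_a - win_a.mean()                      # zero-normalize both windows
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed displacements.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

print(piv_displacement(frame0, frame1))   # recovers (3, -2)
```

A full PIV pipeline tiles each frame into interrogation windows, applies this step per window (with sub-pixel peak refinement and multi-pass, as quickPIV offers), and assembles the resulting vector field.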
This is a review of the present status of heavy-ion collisions at intermediate energies. The main goal of heavy-ion physics in this energy regime is to shed some light on the nuclear equation of state (EOS); hence we present the basic concept of the EOS in nuclear matter as well as of nuclear shock waves, which provide the key mechanism for the compression of nuclear matter. The main part of this article is devoted to the models currently used for describing heavy-ion reactions theoretically and to the observables useful for extracting information about the EOS from experiments. A detailed discussion of the flow effects with a broad comparison with the available data is presented. The many-body aspects of such reactions are investigated via the multifragmentation break-up of excited nuclear systems, and a comparison of model calculations with the most recent multifragmentation experiments is presented.
Reprogramming of tomato leaf metabolome by the activity of heat stress transcription factor HsfB1
(2020)
Plants respond to high temperatures with global changes of the transcriptome, proteome, and metabolome. Heat stress transcription factors (Hsfs) are the core regulators of transcriptome responses, as they control the reprogramming of expression of hundreds of genes. The thermotolerance-related function of Hsfs is mainly based on the regulation of many heat shock proteins (HSPs). In contrast, the Hsf-dependent reprogramming of metabolic pathways and its contribution to thermotolerance are not well described. In tomato (Solanum lycopersicum), manipulation of HsfB1, either by suppression or overexpression (OE), leads to enhanced thermotolerance and coincides with distinct profiles of metabolic routes, based on metabolome profiling of wild-type (WT) and HsfB1 transgenic plants. Leaves of HsfB1 knock-down plants show an accumulation of metabolites with a positive effect on thermotolerance, such as the sugars sucrose and glucose and the polyamine putrescine. OE of HsfB1 leads to the accumulation of products of the phenylpropanoid and flavonoid pathways, including several caffeoylquinic acid isomers. The latter is due to enhanced transcription of genes coding for key enzymes in both pathways, in some cases in both non-stressed and stressed plants. Our results show that, beyond the control of the expression of Hsfs and HSPs, HsfB1 has a wider activity range, regulating important metabolic pathways and thereby providing an important link between stress response and physiological development in tomato.
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications is largely debatable. Here, we developed a transparent mathematical modelling setting to analyse the impact of NAIs on influenza disease at the within-host and population levels. Analytical and simulation results indicate that even assuming unrealistically high efficacies for NAIs, drug intake starting at the onset of symptoms has a negligible effect on an individual's viral load and symptom score. Increasing NAI doses does not provide a better outcome, as is generally believed. Considering Tamiflu's pandemic regimen for prophylaxis, different multiscale simulation scenarios reveal modest reductions in epidemic size despite high investments in stockpiling. Our results question the use of NAIs in general to treat influenza, as well as the respective stockpiling by regulatory authorities.
Neuraminidase inhibitors in influenza treatment and prevention – is it time to call it a day?
(2018)
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs for the treatment and prevention of influenza and its complications is largely debatable due to constraints in the ability to control for confounders and to explore unobserved areas of the drug effects. For this study, we used a mathematical model of influenza infection which allowed transparent analyses. The model recreated the oseltamivir effects and indicated that: (i) the efficacy was limited by design, (ii) a 99% efficacy could be achieved by using high drug doses (however, taking high doses of drug 48 h post-infection could only yield a maximum of 1.6-day reduction in the time to symptom alleviation), and (iii) contributions of oseltamivir to epidemic control could be high, but were observed only in fragile settings. In a typical influenza infection, NAIs’ efficacy is inherently not high, and even if their efficacy is improved, the effect can be negligible in practice.
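The within-host effect of an NAI is commonly represented by scaling the viral production term of a target-cell-limited model with a drug efficacy switched on at treatment start. A minimal sketch of this idea (the model structure is standard in the influenza modelling literature, but all parameter values below are illustrative, not the fits used in the paper):

```python
import numpy as np

def simulate(eps, t_treat, days=10.0, dt=1e-3):
    """Target-cell-limited model with an NAI efficacy term:
        T' = -beta*T*V
        I' =  beta*T*V - delta*I
        V' = (1 - eps(t))*p*I - c*V,
    where eps(t) = eps for t >= t_treat and 0 before.
    Returns the peak viral load (forward-Euler integration)."""
    beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0   # illustrative values
    T, I, V = 4e8, 0.0, 10.0                        # initial state
    peak = 0.0
    for step in range(int(days / dt)):
        t = step * dt
        e = eps if t >= t_treat else 0.0
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = (1.0 - e) * p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
        peak = max(peak, V)
    return peak
```

Because the viral peak in such models occurs within the first few days, an efficacy term switched on at or after symptom onset acts largely after the peak, which is the mechanism behind the limited treatment benefit reported above; only prophylactic administration (t_treat = 0) suppresses the infection substantially.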
Adjuvanted influenza vaccines constitute a key element towards inducing neutralizing antibody responses in populations with reduced responsiveness, such as infants and elderly subjects, as well as in devising antigen-sparing strategies. In particular, squalene-containing adjuvants have been observed to induce enhanced antibody responses, as well as having an influence on cross-reactive immunity. To explore the effects of adjuvanted vaccine formulations on antibody response and their relation to protein-specific immunity, we propose different mathematical models of antibody production dynamics in response to influenza vaccination. Data from ferrets immunized with commercial H1N1pdm09 vaccine antigen alone or formulated with different adjuvants were instrumental in adjusting model parameters. While the complexity of the affinity maturation process is abridged, the proposed model is able to recapitulate the essential features of the observed dynamics. Our numerical results suggest that there exists a qualitative shift in protein-specific antibody response, with enhanced production of antibodies targeting the NA protein in adjuvanted versus non-adjuvanted formulations, in conjunction with a protein-independent boost that is over one order of magnitude larger for squalene-containing adjuvants. Furthermore, simulations predict that vaccines formulated with squalene-containing adjuvants are able to induce sustained antibody titers in a robust way, with little impact of the time interval between immunizations.
Motivation: Partial differential equations (PDEs) are a well-established and powerful tool to simulate multi-cellular biological systems. However, freely available tools for validating such models against data are not established. The PDEparams module provides flexible functionality in Python for parameter estimation in PDE models.
Results: The PDEparams module provides a flexible interface and readily accommodates different parameter analysis tools for PDE models, such as computation of likelihood profiles and parametric bootstrapping, along with direct visualisation of the results. To our knowledge, it is the first open, freely available tool for parameter fitting of PDE models.
Availability and implementation: The PDEparams module is distributed under the MIT license. The source code, usage instructions and step-by-step examples are freely available on GitHub at github.com/systemsmedicine/PDE_params.
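The workflow such a tool automates (solving the model for candidate parameters, scoring the solution against data, and minimizing the score) can be sketched generically. The example below fits the diffusion coefficient of a 1-D heat equation by least squares with SciPy; it illustrates the fitting loop only and does not use the PDEparams interface:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def solve_heat(D, u0, dx, dt, steps):
    """Explicit finite-difference solver for u_t = D * u_xx
    with zero-flux (Neumann) boundaries."""
    u = u0.copy()
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]          # zero-flux boundary
        lap[-1] = u[-2] - u[-1]
        u = u + D * dt / dx**2 * lap
    return u

def fit_D(u0, data, dx, dt, steps):
    """Least-squares estimate of D from an observed final profile."""
    sse = lambda D: np.sum((solve_heat(D, u0, dx, dt, steps) - data) ** 2)
    return minimize_scalar(sse, bounds=(1e-4, 1.0), method="bounded").x
```

Likelihood profiles and parametric bootstrapping, as provided by PDEparams, are built on top of exactly this kind of cost function by re-running the fit over a grid of fixed parameter values or over resampled data sets.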
We propose a generalized modeling framework for the kinetic mechanisms of transcriptional riboswitches. The formalism accommodates time-dependent transcription rates and changes of metabolite concentration and permits incorporation of variations in transcription rate depending on transcript length. We derive explicit analytical expressions for the fraction of transcripts that determine repression or activation of gene expression, pause site location and its slowing down of transcription for the case of the (2’dG)-sensing riboswitch from Mesoplasma florum. Our modeling challenges the current view on the exclusive importance of metabolite binding to transcripts containing only the aptamer domain. Numerical simulations of transcription proceeding in a continuous manner under time-dependent changes of metabolite concentration further suggest that rapid modulations in concentration result in a reduced dynamic range for riboswitch function regardless of transcription rate, while a combination of slow modulations and small transcription rates ensures a wide range of finely tuneable regulatory outcomes.
Criticality meets learning : criticality signatures in a self-organizing recurrent neural network
(2017)
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
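The avalanche analysis referred to above is conventionally performed on time-binned activity: an avalanche is a maximal run of consecutive nonzero bins, its size is the summed activity in the run, and the exponent of the size distribution is estimated by maximum likelihood. A minimal sketch of both steps (generic definitions, not the specific analysis pipeline of this work):

```python
import numpy as np

def avalanche_sizes(activity):
    """Sizes of avalanches in a 1-D sequence of spike counts per time bin:
    each avalanche is a maximal run of nonzero bins, its size the summed
    activity of the run."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

def powerlaw_exponent(sizes, s_min=1.0):
    """Continuous maximum-likelihood (Hill) estimate of alpha in
    P(s) ~ s^(-alpha) for s >= s_min, following Clauset et al."""
    s = np.asarray([x for x in sizes if x >= s_min], dtype=float)
    return 1.0 + len(s) / np.sum(np.log(s / s_min))
```

Criticality signatures are then assessed by comparing the fitted exponent (and the quality of the power-law fit) against the values reported experimentally, e.g. an exponent near 3/2 for avalanche sizes.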
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics. Such an EoS-meter is model-independent and insensitive to other simulation inputs, including the initial conditions for hydrodynamic simulations.
The state-of-the-art pattern recognition method in machine learning (a deep convolutional neural network) is used to identify the equation of state (EoS) employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in QCD. The EoS-meter is model-independent and insensitive to other simulation inputs, including the initial conditions and shear viscosity for hydrodynamic simulations. Through this study we demonstrate that there is a traceable encoding of the dynamical information from the phase structure that survives the evolution and exists in the final snapshot of heavy-ion collisions, and that one can exclusively and effectively decode this information from the highly complex final output with machine learning when traditional methods fail. Besides the deep neural network, the performance of traditional machine learning classifiers is also provided.
The scope of this thesis is to understand the position dependence of human visual perception. First, under the ecological assumption, i.e. the assumption that animals adapt to the statistical regularities of their environment, we study the consequences of the imaging process for the local statistics of the input to the human visual system. Second, we model efficient representations of these statistics and their contribution to shaping the properties of the eye's sensory neurons. Third, we model efficient representations of the semantic context of images and assess the correctness of different underlying geometrical assumptions on the statistics of images.
The efficient coding hypothesis posits that sensory systems are adapted to the regularities of their signal input in order to reduce redundancy in the resulting representations. It is therefore important to characterize the regularities of natural signals to gain insight into the processing of natural stimuli. While measurements of statistical regularity in vision have focused on photographic images of natural environments, it has been much less investigated how the specific imaging process embodied by the organism's eye induces statistical dependencies in the natural input to the visual system. This has allowed the convenient assumption that natural image data are homogeneous across the visual field. Here we give up this assumption and show how the imaging process in a human eye model influences the local statistics of the natural input to the visual system across the entire visual field. ...
We study the kinetic and chemical equilibration in 'infinite' parton-hadron matter within the Parton-Hadron-String Dynamics transport approach, which is based on a dynamical quasiparticle model for partons matched to reproduce lattice-QCD results – including the partonic equation of state – in thermodynamic equilibrium. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions initialized at different baryon density (or chemical potential) and energy density. The transition from initially pure partonic matter to hadronic degrees of freedom (or vice versa) occurs dynamically by interactions. Different thermodynamical distributions of the strongly-interacting quark-gluon plasma (sQGP) are addressed and discussed.
The steep rise of parton densities in the limit of small parton momentum fraction x poses a challenge for describing the observed energy dependence of the total and inelastic proton-proton cross sections σ_pp^tot/inel: considering a realistic parton spatial distribution, one obtains too strong an increase of σ_pp^tot/inel in the limit of very high energies. We discuss various mechanisms which allow one to tame such a rise, paying special attention to the role of parton-parton correlations. In addition, we investigate a potential impact on model predictions for σ_pp^tot related to dynamical higher-twist corrections to the parton-production process.
We apply the phenomenological Reggeon field theory framework to investigate rapidity gap survival (RGS) probability for diffractive dijet production in proton–proton collisions. In particular, we study in some detail rapidity gap suppression due to elastic rescatterings of intermediate partons in the underlying parton cascades, described by enhanced (Pomeron–Pomeron interaction) diagrams. We demonstrate that such contributions play a subdominant role, compared to the usual, so-called “eikonal”, rapidity gap suppression due to elastic rescatterings of constituent partons of the colliding protons. On the other hand, the overall RGS factor proves to be sensitive to color fluctuations in the proton. Hence, experimental data on diffractive dijet production can be used to constrain the respective model approaches.
I review the state of the art concerning the treatment of high energy cosmic ray interactions in the atmosphere, discussing in some detail the underlying physical concepts and the possibilities to constrain the latter by current and future measurements at the Large Hadron Collider. The relation of basic characteristics of hadronic interactions to the properties of nuclear-electromagnetic cascades induced by primary cosmic rays in the atmosphere is addressed.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
Predictions of popular cosmic ray interaction models for some basic characteristics of cosmic ray-induced extensive air showers are analyzed in view of experimental data on proton-proton collisions, obtained at the Large Hadron Collider. The differences between the results are traced down to different approaches for the treatment of hadronic interactions, implemented in those models. Potential measurements by LHC and cosmic ray experiments, which could be able to discriminate between the alternative approaches, are proposed.
We discuss in some detail the physics content of the new model, QGSJET-III-01, focusing on major problems related to the treatment of semihard processes in the very-high-energy limit. Special attention is paid to the main improvement compared to the QGSJET-II model, namely a phenomenological treatment of the leading power corrections corresponding to final parton rescattering off soft gluons. In particular, this allowed us to use a separation scale between soft and hard parton physics half as large as in the previous model version, QGSJET-II-04. Preliminary results obtained with the new model are also presented.
The COVID-19 pandemic is a major public health threat with unanswered questions regarding the role of the immune system in the severity of the disease. In this paper, based on antibody kinetic data of patients with different disease severity, topological data analysis highlights clear differences in the shape of antibody dynamics between three groups of patients (non-severe, severe, and one intermediate case of severity). Subsequently, different mathematical models were developed to quantify the dynamics between the different severity groups. The best model was the one with the lowest median value of the Akaike Information Criterion across all groups of patients. Although high IgG levels have been reported in severe patients, our findings suggest that IgG antibodies in severe patients may be less effective than those in non-severe patients, due to early B cell production and early activation of the seroconversion process from IgM to IgG antibody.
A novel method for identifying the nature of QCD transitions in heavy-ion collision experiments is introduced. PointNet-based Deep Learning (DL) models are developed to classify the equation of state (EoS) that drives the hydrodynamic evolution of the system created in Au-Au collisions at 10 AGeV. The DL models were trained and evaluated in different hypothetical experimental situations. A decreased performance is observed when more realistic experimental effects (acceptance cuts and decreased resolutions) are taken into account. It is shown that the performance can be improved by combining multiple events to make predictions. The PointNet-based models trained on the reconstructed tracks of charged particles from the CBM detector simulation discriminate a crossover transition from a first-order phase transition with an accuracy of up to 99.8%. The models were subjected to several tests to evaluate the dependence of their performance on the centrality of the collisions and on the physical parameters of the fluid-dynamic simulations. The models are shown to work over a broad range of centralities (b = 0-7 fm), although the performance improves for central collisions (b = 0-3 fm). The performance drops when the model parameters lead to a reduced duration of the fluid-dynamic evolution or when a smaller fraction of the medium undergoes the transition. These effects are due to the limitations of the underlying physics, and the DL models are shown to be superior in discrimination performance to conventional mean observables.
In this talk we presented a novel technique, based on Deep Learning, to determine the impact parameter of nuclear collisions at the CBM experiment. PointNet based Deep Learning models are trained on UrQMD followed by CBMRoot simulations of Au+Au collisions at 10 AGeV to reconstruct the impact parameter of collisions from raw experimental data such as hits of the particles in the detector planes, tracks reconstructed from the hits or their combinations. The PointNet models can perform fast, accurate, event-by-event impact parameter determination in heavy ion collision experiments. They are shown to outperform a simple model which maps the track multiplicity to the impact parameter. While conventional methods for centrality classification merely provide an expected impact parameter distribution for a given centrality class, the PointNet models predict the impact parameter from 2–14 fm on an event-by-event basis with a mean error of −0.33 to 0.22 fm.
A new method of event characterization based on Deep Learning is presented. The PointNet models can be used for fast, online event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV, which are then used to train and evaluate PointNet-based architectures. The models can be trained on features like the hit positions of particles in the CBM detector planes, tracks reconstructed from the hits, or combinations thereof. The Deep Learning models reconstruct impact parameters from 2-14 fm with a mean error varying from -0.33 to 0.22 fm. For impact parameters in the range of 5-14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4-9% and a mean error of -0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4-10% and a mean error of -0.18 to 0.22 fm. This new method of event classification is shown to be more accurate and less model-dependent than conventional methods and can utilize the performance boost of modern GPUs.
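PointNet-style models suit hit- and track-level input because they operate on unordered point clouds: a transformation shared across all points followed by a symmetric pooling operation makes the output independent of the ordering of tracks in an event. A toy NumPy sketch of this permutation invariance, with random untrained weights (illustrative only, far smaller than the models used in these studies):

```python
import numpy as np

def pointnet_features(points, w1, w2):
    """Toy PointNet encoder: a two-layer MLP applied to every point
    independently (shared weights), followed by max pooling over the
    point axis. points: (n_points, n_features), e.g. track parameters."""
    h = np.maximum(points @ w1, 0.0)   # shared layer 1 + ReLU
    h = np.maximum(h @ w2, 0.0)        # shared layer 2 + ReLU
    return h.max(axis=0)               # symmetric aggregation -> global feature

rng = np.random.default_rng(0)
w1 = rng.standard_normal((4, 32))
w2 = rng.standard_normal((32, 64))
tracks = rng.standard_normal((100, 4))      # 100 tracks, 4 parameters each
shuffled = tracks[rng.permutation(100)]     # same event, reordered tracks
```

Because the aggregation is a symmetric function (here, max), reordering the tracks leaves the global feature vector unchanged; a downstream regression head mapping this vector to the impact parameter therefore inherits the same invariance.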
In this thesis we investigate the role played by gauge fields in providing new observable signatures that can attest to the presence of color superconductivity in neutron stars. We show that thermal gluon fluctuations in color-flavor locked superconductors can substantially increase their critical temperature and also change the order of the transition, which becomes a strong first-order phase transition. Moreover, we explore the effects of strong magnetic fields on the properties of color-flavor locked superconducting matter. We find that both the energy gaps and the magnetization are oscillating functions of the magnetic field. Also, it is shown that the magnetization can be so strong that homogeneous quark matter becomes metastable for a range of parameters. This points towards the existence of magnetic domains or other types of magnetic inhomogeneities in the hypothesized quark cores of magnetars. Obviously, our results only apply if the strong magnetic fields observed on the surface of magnetars can be transmitted to their inner core. This can occur if the superconducting protons expected to exist in the outer core form a type-II superconductor. However, it has been argued that the observed long-period oscillations in isolated pulsars can only be explained if the outer core is a type-I superconductor rather than type-II. We show that this is not the only solution for the precession puzzle by demonstrating that the long-term variation in the spin of PSR 1828-11 can be explained in terms of Tkachenko oscillations within superfluid shells.
Glia, the helper cells of the brain, are essential in maintaining neural resilience across time and varying challenges: By reacting to changes in neuronal health glia carefully balance repair or disposal of injured neurons. Malfunction of these interactions is implicated in many neurodegenerative diseases. We present a reductionist model that mimics repair-or-dispose decisions to generate a hypothesis for the cause of disease onset. The model assumes four tissue states: healthy and challenged tissue, primed tissue at risk of acute damage propagation, and chronic neurodegeneration. We discuss analogies to progression stages observed in the most common neurodegenerative conditions and to experimental observations of cellular signaling pathways of glia-neuron crosstalk. The model suggests that the onset of neurodegeneration can result as a compromise between two conflicting goals: short-term resilience to stressors versus long-term prevention of tissue damage.