Frankfurt Institute for Advanced Studies (FIAS)
Volatility is a widely recognized measure of market risk. As volatility is not observed directly, it has to be estimated from market prices, i.e., as the implied volatility from option prices. The volatility index VIX, which makes volatility a tradeable asset in its own right, is computed from near- and next-term put and call options on the S&P 500 with more than 23 and less than 37 days to expiration and non-vanishing bid. In the present paper we quantify the information content of the constituents of the VIX about the volatility of the S&P 500 in terms of the Fisher information matrix. Assuming that observed option prices are centered on the theoretical price provided by Heston's model, perturbed by additive Gaussian noise, we relate their Fisher information matrix to the Greeks in the Heston model. We find that the prices of options contained in the VIX basket allow for reliable estimates of the volatility of the S&P 500 with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value of roughly 3%, inferences from option prices become imprecise because Vega, the derivative of a European option price with respect to volatility, and thereby the Fisher information nearly vanish.
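The vanishing of Vega at low volatility can be illustrated with the Black-Scholes model, a simpler stand-in for the Heston model used here purely for illustration; the strike, maturity, and rate values below are arbitrary choices, not parameters from the paper:

```python
from math import log, sqrt, exp, pi

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes Vega: sensitivity of a European option price to volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    phi = exp(-0.5 * d1**2) / sqrt(2 * pi)  # standard normal density at d1
    return S * phi * sqrt(T)

# For an out-of-the-money option, Vega (and with it the Fisher information,
# which scales like Vega squared under additive Gaussian noise) collapses
# as volatility drops towards a few percent.
for sigma in (0.20, 0.10, 0.03):
    print(sigma, bs_vega(100.0, 110.0, 30 / 365, 0.01, sigma))
```

At sigma = 0.03 the density term is evaluated more than ten standard deviations out, so Vega is numerically indistinguishable from zero: volatility estimates from such a price carry essentially no information.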
The goal of heavy-ion reactions at low beam energies is to explore the QCD phase diagram at high net-baryon chemical potential. To relate experimental observations to a first-order phase transition or a critical endpoint, dynamical approaches for the theoretical description have to be developed. In this summary of the corresponding plenary talk, the status of the dynamical modeling, including the most recent advances, is presented. The remaining challenges are highlighted and promising experimental measurements are pointed out.
Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
(2019)
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. By contrast, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here, we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects show some persistence on the timescale of minutes. Gamma increases are specific to the presented stimulus location. Further, repetition effects on gamma and on firing rates generalize to images of natural objects. These findings support the notion that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
Background: The technical development of imaging techniques in life sciences has enabled the three-dimensional recording of living samples at increasing temporal resolutions. Dynamic 3D data sets of developing organisms allow for time-resolved quantitative analyses of morphogenetic changes in three dimensions, but require efficient and automatable analysis pipelines to tackle the resulting Terabytes of image data. Particle image velocimetry (PIV) is a robust and segmentation-free technique that is suitable for quantifying collective cellular migration on data sets with different labeling schemes. This paper presents the implementation of an efficient 3D PIV package using the Julia programming language—quickPIV. Our software is focused on optimizing CPU performance and ensuring the robustness of the PIV analyses on biological data.
Results: QuickPIV is three times faster than the Python implementation hosted in openPIV, both in 2D and 3D. Our software is also faster than the fastest 2D PIV package in openPIV, written in C++. The accuracy evaluation of our software on synthetic data agrees with the expected accuracies described in the literature. Additionally, by applying quickPIV to three data sets of the embryogenesis of Tribolium castaneum, we obtained vector fields that recapitulate the migration movements of gastrulation, both in nuclear and actin-labeled embryos. We show normalized squared error cross-correlation to be especially accurate in detecting translations in non-segmentable biological image data.
Conclusions: The presented software addresses the need for a fast and open-source 3D PIV package in biological research. Currently, quickPIV offers efficient 2D and 3D PIV analyses featuring zero-normalized and normalized squared error cross-correlations, sub-pixel/voxel approximation, and multi-pass. Post-processing options include filtering and averaging of the resulting vector fields, extraction of velocity, divergence and collectiveness maps, simulation of pseudo-trajectories, and unit conversion. In addition, our software includes functions to visualize the 3D vector fields in Paraview.
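The core operation of any PIV pipeline, quickPIV included, is locating the cross-correlation peak between two interrogation windows and converting it into a displacement vector. A minimal 2D sketch in Python/NumPy (quickPIV itself is written in Julia; this is a conceptual illustration, not its API):

```python
import numpy as np

def piv_displacement(a, b):
    """Estimate the rigid displacement between two interrogation windows
    via FFT-based cross-correlation, as done in a single PIV pass."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    corr = np.fft.ifft2(A.conj() * B).real       # circular cross-correlation
    iy, ix = np.unravel_index(int(np.argmax(corr)), corr.shape)
    # Wrap peak coordinates into the signed displacement range.
    wrap = lambda p, s: p - s if p > s // 2 else p
    return (wrap(int(iy), corr.shape[0]), wrap(int(ix), corr.shape[1]))

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))   # known displacement
print(piv_displacement(frame1, frame2))  # → (3, -5)
```

A full PIV analysis tiles the images into many such windows, adds sub-pixel peak interpolation and multi-pass refinement, and, in the 3D case, replaces fft2/ifft2 with their volumetric counterparts.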
This is a review of the present status of heavy-ion collisions at intermediate energies. The main goal of heavy-ion physics in this energy regime is to shed some light on the nuclear equation of state (EOS); hence we present the basic concept of the EOS in nuclear matter as well as of nuclear shock waves, which provide the key mechanism for the compression of nuclear matter. The main part of this article is devoted to the models currently used for describing heavy-ion reactions theoretically and to the observables useful for extracting information about the EOS from experiments. A detailed discussion of the flow effects with a broad comparison with the available data is presented. The many-body aspects of such reactions are investigated via the multifragmentation break-up of excited nuclear systems, and a comparison of model calculations with the most recent multifragmentation experiments is presented.
Reprogramming of tomato leaf metabolome by the activity of heat stress transcription factor HsfB1
(2020)
Plants respond to high temperatures with global changes of the transcriptome, proteome, and metabolome. Heat stress transcription factors (Hsfs) are the core regulators of transcriptome responses as they control the reprogramming of expression of hundreds of genes. The thermotolerance-related function of Hsfs is mainly based on the regulation of many heat shock proteins (HSPs). In contrast, the Hsf-dependent reprogramming of metabolic pathways and their contribution to thermotolerance are not well described. In tomato (Solanum lycopersicum), manipulation of HsfB1, either by suppression or overexpression (OE), leads to enhanced thermotolerance and coincides with a distinct profile of metabolic routes, based on metabolome profiling of wild-type (WT) and HsfB1 transgenic plants. Leaves of HsfB1 knock-down plants show an accumulation of metabolites with a positive effect on thermotolerance, such as the sugars sucrose and glucose and the polyamine putrescine. OE of HsfB1 leads to the accumulation of products of the phenylpropanoid and flavonoid pathways, including several caffeoyl quinic acid isomers. The latter is due to the enhanced transcription of genes coding for key enzymes in both pathways, in some cases in both non-stressed and stressed plants. Our results show that, beyond the control of the expression of Hsfs and HSPs, HsfB1 has a wider activity range, regulating important metabolic pathways and providing an important link between stress response and physiological tomato development.
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications is largely debatable. Here, we developed a transparent mathematical modelling setting to analyse the impact of NAIs on influenza at the within-host and population levels. Analytical and simulation results indicate that, even assuming unrealistically high efficacies for NAIs, drug intake starting at the onset of symptoms has a negligible effect on an individual's viral load and symptom score. Increasing NAI doses does not provide a better outcome, as is generally believed. Considering Tamiflu's pandemic regimen for prophylaxis, different multiscale simulation scenarios reveal modest reductions in epidemic size despite high investments in stockpiling. Our results question the use of NAIs in general to treat influenza, as well as the respective stockpiling by regulatory authorities.
Neuraminidase inhibitors in influenza treatment and prevention – is it time to call it a day?
(2018)
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs for the treatment and prevention of influenza and its complications is largely debatable due to constraints in the ability to control for confounders and to explore unobserved areas of the drug effects. For this study, we used a mathematical model of influenza infection which allowed transparent analyses. The model recreated the oseltamivir effects and indicated that: (i) the efficacy was limited by design, (ii) a 99% efficacy could be achieved by using high drug doses (however, taking high doses of drug 48 h post-infection could only yield a maximum of 1.6-day reduction in the time to symptom alleviation), and (iii) contributions of oseltamivir to epidemic control could be high, but were observed only in fragile settings. In a typical influenza infection, NAIs’ efficacy is inherently not high, and even if their efficacy is improved, the effect can be negligible in practice.
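The qualitative finding, that treatment started at symptom onset barely changes the infection course, can be reproduced with a standard target-cell-limited model in which the drug reduces viral production by an efficacy eps. The parameter values below are illustrative textbook-scale numbers, not those fitted in the papers above:

```python
def peak_viral_load(eps=0.0, t_drug=0.0, days=8.0, dt=0.001):
    """Target-cell-limited influenza model: T (target cells), I (infected
    cells), V (free virus). From time t_drug on, the NAI reduces viral
    production by a factor (1 - eps). Forward-Euler integration."""
    T, I, V = 4e8, 0.0, 10.0
    beta, delta, p, c = 3e-5, 4.0, 0.012, 3.0   # illustrative rates (per day)
    peak = V
    for step in range(int(days / dt)):
        e = eps if step * dt >= t_drug else 0.0
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = (1 - e) * p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        peak = max(peak, V)
    return peak

no_drug = peak_viral_load()
early = peak_viral_load(eps=0.98, t_drug=0.0)   # prophylactic dosing
late = peak_viral_load(eps=0.98, t_drug=2.0)    # start at symptom onset (~day 2)
print(no_drug, early, late)
```

With these rates the within-host peak occurs well before day 2, so even a 98%-effective drug started at symptom onset leaves the peak viral load essentially unchanged; only prophylactic dosing suppresses it, matching the modest benefit reported above.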
Adjuvanted influenza vaccines constitute a key element towards inducing neutralizing antibody responses in populations with reduced responsiveness, such as infants and elderly subjects, as well as in devising antigen-sparing strategies. In particular, squalene-containing adjuvants have been observed to induce enhanced antibody responses, as well as having an influence on cross-reactive immunity. To explore the effects of adjuvanted vaccine formulations on antibody response and their relation to protein-specific immunity, we propose different mathematical models of antibody production dynamics in response to influenza vaccination. Data from ferrets immunized with commercial H1N1pdm09 vaccine antigen alone or formulated with different adjuvants was instrumental to adjust model parameters. While the affinity maturation process complexity is abridged, the proposed model is able to recapitulate the essential features of the observed dynamics. Our numerical results suggest that there exists a qualitative shift in protein-specific antibody response, with enhanced production of antibodies targeting the NA protein in adjuvanted versus non-adjuvanted formulations, in conjunction with a protein-independent boost that is over one order of magnitude larger for squalene-containing adjuvants. Furthermore, simulations predict that vaccines formulated with squalene-containing adjuvants are able to induce sustained antibody titers in a robust way, with little impact of the time interval between immunizations.
Motivation: Partial differential equations (PDEs) are a well-established and powerful tool for simulating multi-cellular biological systems. However, freely available tools for validating PDE models against data are not established. The PDEparams module provides flexible functionality in Python for parameter estimation in PDE models.
Results: The PDEparams module provides a flexible interface and readily accommodates different parameter analysis tools for PDE models, such as computation of likelihood profiles and parametric bootstrapping, along with direct visualisation of the results. To our knowledge, it is the first open, freely available tool for parameter fitting of PDE models.
Availability and implementation: The PDEparams module is distributed under the MIT license. The source code, usage instructions and step-by-step examples are freely available on GitHub at github.com/systemsmedicine/PDE_params.
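The basic loop behind such a tool, solve the PDE for candidate parameters and score the fit against data, can be sketched generically. This does not reproduce the PDEparams API; the solver, grid, and grid-search below are a minimal stand-in for illustration only:

```python
import numpy as np

def solve_diffusion(D, u0, dx, dt, steps):
    """Explicit finite-difference solver for u_t = D * u_xx
    with crude no-flux boundary conditions."""
    u = u0.copy()
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]
        u = u + dt * D * lap
    return u

x = np.linspace(0, 1, 51)
u0 = np.exp(-((x - 0.5) ** 2) / 0.005)          # initial Gaussian bump
dx, dt, steps = x[1] - x[0], 1e-4, 500
data = solve_diffusion(0.3, u0, dx, dt, steps)  # synthetic "observed" data, true D = 0.3

# Grid-search the sum-of-squared-errors objective over candidate D values;
# a real tool would wrap this in an optimizer plus likelihood profiling.
candidates = np.arange(0.05, 0.61, 0.05)
sse = [np.sum((solve_diffusion(D, u0, dx, dt, steps) - data) ** 2) for D in candidates]
best = candidates[int(np.argmin(sse))]
print(best)  # recovers a value ≈ 0.3
```

The explicit scheme is stable here because D * dt / dx**2 stays below 0.5 for all candidates; a production fitting tool would use an adaptive solver and report confidence intervals rather than a single point estimate.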
We propose a generalized modeling framework for the kinetic mechanisms of transcriptional riboswitches. The formalism accommodates time-dependent transcription rates and changes of metabolite concentration and permits incorporation of variations in transcription rate depending on transcript length. We derive explicit analytical expressions for the fraction of transcripts that determine repression or activation of gene expression, pause site location and its slowing down of transcription for the case of the (2’dG)-sensing riboswitch from Mesoplasma florum. Our modeling challenges the current view on the exclusive importance of metabolite binding to transcripts containing only the aptamer domain. Numerical simulations of transcription proceeding in a continuous manner under time-dependent changes of metabolite concentration further suggest that rapid modulations in concentration result in a reduced dynamic range for riboswitch function regardless of transcription rate, while a combination of slow modulations and small transcription rates ensures a wide range of finely tuneable regulatory outcomes.
Criticality meets learning: criticality signatures in a self-organizing recurrent neural network
(2017)
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
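The notion of power-law distributed avalanches at criticality can be illustrated with a toy branching process, unrelated to the SORN's actual plasticity mechanisms: at branching ratio m = 1 (critical) avalanche sizes become heavy-tailed, while subcritical dynamics (m < 1) produce only small avalanches.

```python
import random

def avalanche_size(m, rng, cap=10_000):
    """Size of one avalanche in a branching process with branching ratio m:
    each active unit independently activates each of 2 targets with
    probability m/2, so the mean offspring number is m."""
    size, active = 0, 1
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(2 * active) if rng.random() < m / 2)
    return size

rng = random.Random(1)
sub = [avalanche_size(0.5, rng) for _ in range(5000)]   # subcritical
crit = [avalanche_size(1.0, rng) for _ in range(5000)]  # critical
print(sum(sub) / len(sub), sum(crit) / len(crit), max(crit))
```

Subcritical avalanches have mean size 1 / (1 - m) = 2 and an exponential tail, whereas at m = 1 the size distribution follows the classic S^(-3/2) power law, so rare system-spanning avalanches dominate the mean; these are the signatures searched for in the SORN's activity.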
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such a primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in the relativistic hydrodynamic simulations of heavy ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics. Such EoS-meter is model-independent and insensitive to other simulation inputs including the initial conditions for hydrodynamic simulations.
A state-of-the-art pattern recognition method in machine learning (a deep convolutional neural network) is used to identify the equation of state (EoS) employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in QCD. The EoS-meter is model independent and insensitive to other simulation inputs, including the initial conditions and shear viscosity for hydrodynamic simulations. Through this study we demonstrate that there is a traceable encoding of the dynamical information from the phase structure that survives the evolution and exists in the final snapshot of heavy-ion collisions, and that one can exclusively and effectively decode this information from the highly complex final output with machine learning when traditional methods fail. Besides the deep neural network, the performance of traditional machine learning classifiers is also reported.
The scope of this thesis is to understand the position dependency phenomenon of human visual perception. First, under the ecological assumption, meaning under the assumption that animals adapt to the statistical regularities of their environment, we study the consequences of the imaging process for the local statistics of the input to the human visual system. Second, we model efficient representations of these statistics and their contribution to shaping the properties of eye sensory neurons. Third, we model efficient representations of the semantic context of images and the validity of different underlying geometrical assumptions about the statistics of images.
The efficient coding hypothesis posits that sensory systems are adapted to the regularities of their signal input in order to reduce redundancy in the resulting representations. It is therefore important to characterize the regularities of natural signals to gain insight into the processing of natural stimuli. While measurements of statistical regularity in vision have focused on photographic images of natural environments, it has been much less investigated how the specific imaging process embodied by the organism's eye induces statistical dependencies on the natural input to the visual system. This focus has permitted the convenient assumption that natural image data are homogeneous across the visual field. Here we give up this assumption and show how the imaging process in a human eye model influences the local statistics of the natural input to the visual system across the entire visual field. ...
We study the kinetic and chemical equilibration in 'infinite' parton-hadron matter within the Parton-Hadron-String Dynamics transport approach, which is based on a dynamical quasiparticle model for partons matched to reproduce lattice-QCD results – including the partonic equation of state – in thermodynamic equilibrium. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions initialized at different baryon density (or chemical potential) and energy density. The transition from initially pure partonic matter to hadronic degrees of freedom (or vice versa) occurs dynamically by interactions. Different thermodynamical distributions of the strongly-interacting quark-gluon plasma (sQGP) are addressed and discussed.
The steep rise of parton densities in the limit of small parton momentum fraction x poses a challenge for describing the observed energy dependence of the total and inelastic proton-proton cross sections σ_pp^(tot/inel): considering a realistic parton spatial distribution, one obtains a too strong increase of σ_pp^(tot/inel) in the limit of very high energies. We discuss various mechanisms which allow one to tame such a rise, paying special attention to the role of parton-parton correlations. In addition, we investigate a potential impact on model predictions for σ_pp^tot, related to dynamical higher twist corrections to the parton production process.
We apply the phenomenological Reggeon field theory framework to investigate rapidity gap survival (RGS) probability for diffractive dijet production in proton–proton collisions. In particular, we study in some detail rapidity gap suppression due to elastic rescatterings of intermediate partons in the underlying parton cascades, described by enhanced (Pomeron–Pomeron interaction) diagrams. We demonstrate that such contributions play a subdominant role, compared to the usual, so-called “eikonal”, rapidity gap suppression due to elastic rescatterings of constituent partons of the colliding protons. On the other hand, the overall RGS factor proves to be sensitive to color fluctuations in the proton. Hence, experimental data on diffractive dijet production can be used to constrain the respective model approaches.
I review the state-of-the-art concerning the treatment of high energy cosmic ray interactions in the atmosphere, discussing in some detail the underlying physical concepts and the possibilities to constrain the latter by current and future measurements at the Large Hadron Collider. The relation of basic characteristics of hadronic interactions to the properties of nuclear-electromagnetic cascades induced by primary cosmic rays in the atmosphere is addressed.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
Predictions of popular cosmic ray interaction models for some basic characteristics of cosmic ray-induced extensive air showers are analyzed in view of experimental data on proton-proton collisions, obtained at the Large Hadron Collider. The differences between the results are traced down to different approaches for the treatment of hadronic interactions, implemented in those models. Potential measurements by LHC and cosmic ray experiments, which could be able to discriminate between the alternative approaches, are proposed.
We discuss in some detail the physics content of the new model, QGSJET-III-01, focusing on major problems related to the treatment of semihard processes in the very high energy limit. Special attention has been paid to the main improvement compared to the QGSJET-II model, which is related to a phenomenological treatment of leading power corrections corresponding to final parton rescattering off soft gluons. In particular, this allowed us to use a separation scale between soft and hard parton physics half as large as in the previous model version, QGSJET-II-04. Preliminary results obtained with the new model are also presented.
The COVID-19 pandemic is a major public health threat with unanswered questions regarding the role of the immune system in the severity level of the disease. In this paper, based on antibody kinetic data of patients with different disease severity, topological data analysis highlights clear differences in the shape of antibody dynamics between three groups of patients: non-severe, severe, and one intermediate case of severity. Subsequently, different mathematical models were developed to quantify the dynamics between the different severity groups. The best model was the one with the lowest median value of the Akaike Information Criterion across all groups of patients. Although high IgG levels have been reported in severe patients, our findings suggest that IgG antibodies in severe patients may be less effective than in non-severe patients due to early B cell production and early activation of the seroconversion process from IgM to IgG antibody.
A novel method for identifying the nature of QCD transitions in heavy-ion collision experiments is introduced. PointNet based Deep Learning (DL) models are developed to classify the equation of state (EoS) that drives the hydrodynamic evolution of the system created in Au-Au collisions at 10 AGeV. The DL models were trained and evaluated in different hypothetical experimental situations. A decreased performance is observed when more realistic experimental effects (acceptance cuts and decreased resolutions) are taken into account. It is shown that the performance can be improved by combining multiple events to make predictions. The PointNet based models trained on the reconstructed tracks of charged particles from the CBM detector simulation discriminate a crossover transition from a first order phase transition with an accuracy of up to 99.8%. The models were subjected to several tests to evaluate the dependence of their performance on the centrality of the collisions and the physical parameters of the fluid dynamic simulations. The models are shown to work in a broad range of centralities (b=0–7 fm). However, the performance is found to improve for central collisions (b=0–3 fm). There is a drop in the performance when the model parameters lead to a reduced duration of the fluid dynamic evolution or when a smaller fraction of the medium undergoes the transition. These effects are due to the limitations of the underlying physics, and the DL models are shown to be superior in their discrimination performance in comparison to conventional mean observables.
In this talk we presented a novel technique, based on Deep Learning, to determine the impact parameter of nuclear collisions at the CBM experiment. PointNet based Deep Learning models are trained on UrQMD followed by CBMRoot simulations of Au+Au collisions at 10 AGeV to reconstruct the impact parameter of collisions from raw experimental data such as hits of the particles in the detector planes, tracks reconstructed from the hits or their combinations. The PointNet models can perform fast, accurate, event-by-event impact parameter determination in heavy ion collision experiments. They are shown to outperform a simple model which maps the track multiplicity to the impact parameter. While conventional methods for centrality classification merely provide an expected impact parameter distribution for a given centrality class, the PointNet models predict the impact parameter from 2–14 fm on an event-by-event basis with a mean error of −0.33 to 0.22 fm.
A new method of event characterization based on Deep Learning is presented. The PointNet models can be used for fast, online event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV which are then used to train and evaluate PointNet based architectures. The models can be trained on features like the hit position of particles in the CBM detector planes, tracks reconstructed from the hits or combinations thereof. The Deep Learning models reconstruct impact parameters from 2-14 fm with a mean error varying from -0.33 to 0.22 fm. For impact parameters in the range of 5-14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4-9% and a mean error of -0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4-10% and a mean error of -0.18 to 0.22 fm. This new method of event-classification is shown to be more accurate and less model dependent than conventional methods and can utilize the performance boost of modern GPU processor units.
In this thesis we investigate the role played by gauge fields in providing new observable signatures that can attest to the presence of color superconductivity in neutron stars. We show that thermal gluon fluctuations in color-flavor locked superconductors can substantially increase their critical temperature and also change the order of the transition, which becomes a strong first-order phase transition. Moreover, we explore the effects of strong magnetic fields on the properties of color-flavor locked superconducting matter. We find that both the energy gaps as well as the magnetization are oscillating functions of the magnetic field. Also, it is shown that the magnetization can be so strong that homogeneous quark matter becomes metastable for a range of parameters. This points towards the existence of magnetic domains or other types of magnetic inhomogeneities in the hypothesized quark cores of magnetars. Obviously, our results only apply if the strong magnetic fields observed on the surface of magnetars can be transmitted to their inner core. This can occur if the superconducting protons expected to exist in the outer core form a type-II superconductor. However, it has been argued that the observed long periodic oscillations in isolated pulsars can only be explained if the outer core is a type-I superconductor rather than type-II. We show that this is not the only solution for the precession puzzle by demonstrating that the long-term variation in the spin of PSR 1828-11 can be explained in terms of Tkachenko oscillations within superfluid shells.
Glia, the helper cells of the brain, are essential in maintaining neural resilience across time and varying challenges: by reacting to changes in neuronal health, glia carefully balance the repair or disposal of injured neurons. Malfunction of these interactions is implicated in many neurodegenerative diseases. We present a reductionist model that mimics repair-or-dispose decisions in order to generate a hypothesis for the cause of disease onset. The model assumes four tissue states: healthy and challenged tissue, primed tissue at risk of acute damage propagation, and chronic neurodegeneration. We discuss analogies to the progression stages observed in the most common neurodegenerative conditions and to experimental observations of the cellular signaling pathways of glia-neuron crosstalk. The model suggests that the onset of neurodegeneration can result as a compromise between two conflicting goals: short-term resilience to stressors versus long-term prevention of tissue damage.
Autophagosome biogenesis requires a localized perturbation of lipid membrane dynamics and a unique protein-lipid conjugate. Autophagy-related (ATG) proteins catalyze this biogenesis on cellular membranes, but the underlying molecular mechanism remains unclear. Focusing on the final step of the protein-lipid conjugation reaction, ATG8/LC3 lipidation, we show how membrane association of the conjugation machinery is organized and fine-tuned at the atomistic level. Amphipathic α-helices in ATG3 proteins (AHATG3) are found to have low hydrophobicity and to be less bulky. Molecular dynamics simulations reveal that AHATG3 regulates the dynamics and accessibility of the thioester bond of the ATG3∼LC3 conjugate to lipids, allowing covalent lipidation of LC3. Live cell imaging shows that the transient association of ATG3 with autophagic membranes is governed by the less bulky-hydrophobic feature of AHATG3. Collectively, the unique properties of AHATG3 facilitate protein-lipid bilayer association, leading to the remodeling of the lipid bilayer required for the formation of autophagosomes.
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification of the kind that individual neurons can achieve. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (<= ~20 ms), and persisted for several hundred ms beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs.
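The contrast drawn here — sophisticated nonlinear decoders versus a simple linear readout — can be made concrete with a minimal perceptron sketch on synthetic population rate vectors. The data below are purely illustrative stand-ins, not the recorded V1 ensembles:

```python
import random

# Minimal perceptron readout: a weighted sum plus threshold, i.e. the
# kind of linear classification a single downstream neuron could
# implement. Synthetic rate vectors stand in for recorded ensembles.

random.seed(0)

def make_trials(template, n, noise=0.3):
    """Noisy trials around a mean firing-rate pattern over 8 'neurons'."""
    return [[r + random.gauss(0, noise) for r in template] for _ in range(n)]

# Two "stimuli" evoke different mean population rate patterns.
stim_a = make_trials([1.0, 0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.2], 50)
stim_b = make_trials([0.2, 0.9, 0.1, 0.8, 0.2, 1.0, 0.3, 0.9], 50)
data = [(x, 1) for x in stim_a] + [(x, -1) for x in stim_b]

w, b = [0.0] * 8, 0.0
for _ in range(20):                       # perceptron training epochs
    for x, y in data:
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y

correct = sum(1 for x, y in data
              if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0)
```

A weighted sum plus threshold is exactly the operation a single downstream neuron could compute, which is why high linear decodability of the ensemble response is physiologically meaningful.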
We study odd-parity J=1/2 and J=3/2 Ξc resonances using a unitarized coupled-channel framework based on an SU(6)lsf×HQSS-extended Weinberg–Tomozawa baryon–meson interaction, paying special attention to the renormalization procedure. We predict a large molecular ΛcK¯ component for the Ξc(2790), with a dominant 0− light-degree-of-freedom spin configuration. We discuss the differences between the 3/2− Λc(2625) and Ξc(2815) states, and conclude that they cannot be SU(3) siblings, whereas we predict the existence of other Ξc states, one of them related to the two-pole structure of the Λc(2595). Of particular interest is a pair of J=1/2 and J=3/2 poles, which form an HQSS doublet and which we tentatively assign to the Ξc(2930) and Ξc(2970), respectively. Within this picture, the Ξc(2930) would be part of an SU(3) sextet containing either the Ωc(3090) or the Ωc(3119), which would be completed by the Σc(2800). Moreover, we identify a J=1/2 sextet with the Ξb(6227) state and the recently discovered Σb(6097). Assuming the equal-spacing rule, and in order to complete this multiplet, we predict the existence of a J=1/2 odd-parity Ωb state with a mass of 6360 MeV, which should be seen in the ΞbK¯ channel.
In this letter we present some stringy corrections to black hole spacetimes emerging from string T-duality. As a first step, we derive the static Newtonian potential by exploiting the relation between T-duality and the path integral duality. We show that the intrinsic non-perturbative nature of the stringy corrections introduces an ultraviolet cutoff known as the zero-point length in the path integral duality literature. As a result, the static potential is found to be regular. We use this result to derive a consistent black hole metric for the spherically symmetric, electrically neutral case. It turns out that the new spacetime is regular and formally equivalent to the Bardeen metric, apart from a different ultraviolet regulator. On the thermodynamics side, the Hawking temperature admits a maximum before a cooling-down phase towards a thermodynamically stable end of the black hole evaporation process. These findings support the idea of the universality of quantum black holes.
In this Letter, we propose a new scenario emerging from the conjectured presence of a minimal length ℓ in the spacetime fabric, on the one hand, and the existence of a new scale-invariant, continuous mass spectrum of un-particles on the other. We introduce the concept of the un-spectral dimension DU of a d-dimensional, Euclidean (quantum) spacetime, as the spectral dimension measured by an “un-particle” probe. We find a general expression for the un-spectral dimension DU labelling different spacetime phases: a semi-classical phase, where the ordinary spectral dimension gets a contribution from the scaling dimension dU of the un-particle probe; a critical “Planckian phase”, where the four-dimensional spacetime can be effectively considered two-dimensional when dU=1; and a “trans-Planckian phase”, accessible to un-particle probes only, where spacetime as we currently understand it loses its physical meaning.
This paper studies the geometry and thermodynamics of a holographic screen in the framework of ultraviolet self-complete quantum gravity. To achieve this goal we construct a new static, neutral, nonrotating black hole metric whose outer (event) horizon coincides with the surface of the screen. The spacetime admits an extremal configuration corresponding to the minimal holographic screen, with both mass and radius equal to the Planck units. We identify this object as the fundamental building block of spacetime, whose interior is physically inaccessible and cannot be probed even during the terminal phase of Hawking evaporation. In agreement with the holographic principle, relevant processes take place on the screen surface. The area quantization leads to a discrete mass spectrum. An analysis of the entropy shows that the minimal holographic screen can store only one byte of information, while in the thermodynamic limit the area law is corrected by a logarithmic term.
In this paper we discuss to what extent one can infer details of the interior structure of a black hole from its horizon. Recalling that black hole thermal properties are connected to the non-classical nature of gravity, we circumvent the restrictions of the no-hair theorem by postulating that the black hole interior is singularity-free due to violations of the usual energy conditions. Further, these conditions allow one to establish a one-to-one, holographic projection between Planckian areal “bits” on the horizon and “voxels” representing the gravitational degrees of freedom in the black hole interior. We illustrate the repercussions of this idea by discussing an example in which the black hole interior consists of a de Sitter core postulated to arise from the local graviton quantum vacuum energy. It is shown that the black hole entropy can emerge as the statistical entropy of a gas of voxels.
In this Letter we study the radiation measured by an accelerated detector, coupled to a scalar field, in the presence of a fundamental minimal length. The latter is implemented by means of a modified momentum space Green's function. After calibrating the detector, we find that the net flux of field quanta is negligible, and that there is no Planckian spectrum. We discuss possible interpretations of this result, and we comment on experimental implications in heavy ion collisions and atomic systems.
In the presence of a minimal length, physical objects cannot collapse to an infinitely dense, singular matter point. In this paper, we consider the possible final stage of the gravitational collapse of "thick" matter layers. The energy-momentum tensor we choose to model these shell-like objects is a proper modification of the source for "noncommutative geometry inspired," regular black holes. By using higher moments of the Gaussian distribution to localize matter at a finite distance from the origin, we obtain new solutions of the Einstein equations which smoothly interpolate between Minkowski geometry near the center of the shell and Schwarzschild spacetime far away from the matter layer. The metric is free of curvature singularities. Black hole solutions exist only for "heavy" shells, that is, M ≥ Me, where Me is the mass of the extremal configuration. We determine the Hawking temperature and a modified area law taking into account the extended nature of the source.
The Karl Schwarzschild Meeting 2017 (KSM2017) was the third instalment of the conference dedicated to the great Frankfurt scientist, who derived the first black hole solution of Einstein's equations about 100 years ago.
The event was a five-day meeting in the field of black holes, the AdS/CFT correspondence and gravitational physics. Like the two previous instalments, the conference continued to attract a stellar ensemble of participants from the world's most renowned institutions. The core of the meeting was a series of invited talks by eminent experts (keynote speakers), complemented by plenary research talks by students and junior speakers.
The conference photo and poster, sponsor and funding acknowledgments, committees, and the list of participants are available in this PDF.
We present an analysis of the role of the charge within the self-complete quantum gravity paradigm. By studying the classicalization of generic ultraviolet-improved charged black hole solutions around the Planck scale, we show that the charge introduces important differences with respect to the neutral case. First, there exists a family of black hole parameters fulfilling the particle-black hole condition. Second, there is no extremal particle-black hole solution but, at best, quasi-extremal charged particle-black holes. We show that the Hawking emission disrupts the particle-black hole condition. By analyzing the Schwinger pair production mechanism, we find that the charge is quickly shed and the particle-black hole condition can ultimately be restored in a cooling-down phase towards a zero-temperature configuration, provided non-classical effects are taken into account.
In this paper, we present an overview of some open issues in quantum gravity research. We also introduce the basic ideas that led Padmanabhan to consider a duality property in path integrals. Such a duality is consistent with the T-duality of string theory. More importantly, the path integral duality discloses a universal feature of any quantum geometry, namely the existence of a zero-point length L0. We also comment on recent developments aiming to expose effects of the zero-point length in strong electrodynamics and black holes. There are reasons to believe that the main features of the phenomenology of quantum gravity may be described by means of a single parameter like L0.
From August to November 2017, Madagascar endured an outbreak of plague. A total of 2417 cases of plague were confirmed, causing a death toll of 209. Public health intervention efforts were introduced and successfully stopped the epidemic at the end of November. The plague, however, is endemic in the region and occurs annually, posing the risk of future outbreaks. To understand plague transmission, we collected real-time data from official reports, described the outbreak's characteristics, and estimated transmission parameters using statistical and mathematical models. The pneumonic plague epidemic curve exhibited multiple peaks, coinciding with sporadic introductions of new bubonic cases. Optimal climate conditions for rat fleas to flourish were observed during the epidemic. The estimated basic reproduction number during the large wave of the epidemic was high, ranging from 5 to 7 depending on model assumptions. The incubation and infection periods were 4.3 and 3.4 days for bubonic plague and 3.8 and 2.9 days for pneumonic plague, respectively. Parameter estimation suggested that even with a small fraction of the population exposed to infected rat fleas (1/10,000) and a small probability of transition from a bubonic case to a secondary pneumonic case (3%), the high human-to-human transmission rate can still generate a large outbreak. Controlling rodents and fleas can prevent new index cases, but managing human-to-human transmission is key to preventing large-scale outbreaks.
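The qualitative claim — that a basic reproduction number well above one generates a large outbreak even from a handful of index cases — can be checked with a minimal discrete-time SIR sketch. The parameters are illustrative, not the fitted values from the Madagascar data:

```python
# Minimal discrete-time SIR sketch: with R0 well above 1, even a few
# index cases produce a large outbreak, while R0 < 1 fizzles out.
# Illustrative parameters only, not the fitted Madagascar models.

def sir_final_size(population, seeds, r0, infectious_days=3, days=365):
    beta = r0 / infectious_days          # transmission rate per day
    gamma = 1.0 / infectious_days        # recovery rate per day
    s, i, r = population - seeds, float(seeds), 0.0
    for _ in range(days):
        new_inf = beta * s * i / population
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r  # cumulative recovered = outbreak size by the end

# R0 = 5, within the 5-7 range estimated for the epidemic's large wave:
big = sir_final_size(population=100_000, seeds=5, r0=5.0)
# Sub-critical comparison, R0 < 1: only a handful of secondary cases.
small = sir_final_size(population=100_000, seeds=5, r0=0.8)
```

With R0 = 5 the sketch infects essentially the whole susceptible pool, whereas with R0 = 0.8 the chain of transmission dies out after a few dozen cases — illustrating why reducing human-to-human transmission below the critical threshold is decisive.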
Background: Recent epidemics have prompted global discussions on revamping epidemic control and prevention approaches. A general consensus is that all sources of data should be embraced to improve epidemic preparedness. As disease transmission is inherently governed by individual-level responses, pathogen dynamics within infected hosts hold high potential to inform population-level phenomena. We propose a multiscale approach showing that individual-level dynamics were able to reproduce population-level observations.
Methods: Using experimental data, we formulated mathematical models of pathogen infection dynamics from which we mechanistically simulated its transmission parameters. The models were then embedded in our implementation of an age-specific contact network that allows individual differences relevant to the transmission processes to be expressed. This approach is illustrated with the example of Ebola virus (EBOV).
Results: The results showed that a within-host infection model can reproduce EBOV’s transmission parameters obtained from population data. At the same time, population age structure as well as contact distributions and patterns can be expressed using a network-generating algorithm. This framework opens a vast opportunity to investigate the individual roles of factors involved in the epidemic processes. Estimating EBOV’s reproduction number revealed a heterogeneous pattern among age groups, prompting caution about estimates unadjusted for contact patterns. Assessments of mass vaccination strategies showed that vaccination conducted in a time window from five months before to one week after the start of an epidemic appeared to strongly reduce epidemic size. Notably, compared to a non-intervention scenario, a low critical vaccination coverage of 33% cannot ensure epidemic extinction but could reduce the number of cases by ten to a hundred times as well as lessen the case-fatality rate.
Conclusions: Experimental data on within-host infection were able to capture key transmission parameters of a pathogen upfront; applications of this approach will give us more time to prepare for potential epidemics. The population of interest in epidemic assessments can be modelled with an age-specific contact network without an exhaustive amount of data. Further assessments and adaptations for different pathogens and scenarios to explore multilevel aspects of infectious disease epidemics are underway.
Ebola virus (EBOV) infection causes a high death toll, killing a large proportion of EBOV-infected patients within 7 days. Comprehensive data on EBOV infection are fragmented, hampering efforts to develop therapeutics and vaccines against EBOV. Under these circumstances, mathematical models become valuable resources to explore potential control strategies. In this paper, we employed experimental data from EBOV-infected nonhuman primates (NHPs) to construct a mathematical framework for determining windows of opportunity for treatment and vaccination. Considering a prophylactic vaccine based on recombinant vesicular stomatitis virus expressing the EBOV glycoprotein (rVSV-EBOV), vaccination could be protective if a subject is vaccinated during a period from one week to four months before infection. For the case of a therapeutic vaccine based on monoclonal antibodies (mAbs), a single dose might resolve invasive EBOV replication even if administered as late as four days after infection. Our mathematical models can be used as building blocks for evaluating therapeutic and vaccine modalities as well as public health intervention strategies in outbreaks. Future laboratory experiments will help to validate and refine the estimates of the windows of opportunity proposed here.
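A standard starting point for such within-host frameworks is the target-cell-limited model — target cells T, infected cells I, free virus V, with dT/dt = -βTV, dI/dt = βTV - δI, dV/dt = pI - cV. The sketch below integrates it with a simple Euler scheme; all parameter values are illustrative placeholders, not the NHP-fitted estimates of the study:

```python
# Target-cell-limited within-host model integrated with Euler steps:
#   dT/dt = -b*T*V,  dI/dt = b*T*V - d*I,  dV/dt = p*I - c*V
# Parameters are illustrative placeholders, not fitted NHP values.

def viral_load_curve(beta=3e-7, delta=1.0, p=50.0, c=5.0,
                     T0=1e7, V0=1.0, days=20, dt=0.01):
    T, I, V = T0, 0.0, V0
    curve = []
    steps_per_day = int(1 / dt)
    for _ in range(days):
        for _ in range(steps_per_day):
            dT = -beta * T * V
            dI = beta * T * V - delta * I
            dV = p * I - c * V
            T += dT * dt
            I += dI * dt
            V += dV * dt
        curve.append(V)          # daily viral load samples
    return curve

curve = viral_load_curve()
# Viral load rises, peaks within the first days as target cells are
# depleted, then declines -- the window in which mAb therapy must act.
```

The time from infection to the viral-load peak in such a model is what bounds the therapeutic window: an intervention that suppresses V only helps if it arrives before replication becomes self-limiting through target-cell depletion.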
Driven by the loss of energy, isolated rotating neutron stars (pulsars) gradually slow down to lower frequencies, which increases the tremendous compression of the matter inside them. This increase in compression changes both the global properties of rotating neutron stars and their hadronic core compositions. Both effects may register themselves observationally in the thermal evolution of such stars, as demonstrated in this Letter. The rotation-driven particle process which we consider here is the direct Urca (DU) process, which is known to become operative in neutron stars if the number of protons in the stellar core exceeds a critical limit of around 11% to 15%. We find that neutron stars spinning down from moderately high rotation rates of a few hundred Hertz may create just the right conditions for the DU process to become operative, leading to an observable effect (enhanced cooling) in the temperature evolution of such neutron stars. As it turns out, the rotation-driven DU process could explain the unusual temperature evolution observed for the neutron star in Cas A, provided the mass of this neutron star lies in the range of 1.5 to 1.9 M⊙ and its rotational frequency at birth was between 40% (400 Hz) and 70% (800 Hz) of the Kepler (mass-shedding) frequency, respectively.
Background: After induction of DNA double strand breaks (DSBs), the DNA damage response (DDR) is activated. One of the earliest events in the DDR is the phosphorylation of serine 139 on the histone variant H2AX (gH2AX), catalyzed by phosphatidylinositol 3-kinase-related kinases. Despite being extensively studied, H2AX distribution[1] across the genome and gH2AX spreading around DSB sites[2] in the context of different chromatin compaction states or transcription are yet to be fully elucidated.
Materials and methods: gH2AX was induced in human hepatocellular carcinoma cells (HepG2) by exposure to 10 Gy X-rays (250 kV, 16 mA). Samples were incubated 0.5, 3 or 24 hours post irradiation to investigate early, intermediate and late stages of DDR, respectively. Chromatin immunoprecipitation was performed to select H2AX, H3 and gH2AX-enriched chromatin fractions. Chromatin-associated DNA was then sequenced by Illumina ChIP-Seq platform. HepG2 gene expression and histone modification (H3K36me3, H3K9me3) ChIP-Seq profiles were retrieved from Gene Expression Omnibus (accession numbers GSE30240 and GSE26386, respectively).
Results: First, we combined G/C usage, gene content, gene expression or histone modification profiles (H3K36me3, H3K9me3) to define genomic compartments characterized by different chromatin compaction states or transcriptional activity. Next, we investigated H3, H2AX and gH2AX distributions in such defined compartments before and after exposure to ionizing radiation (IR) to study DNA repair kinetics during DDR. Our sequencing results indicate that H2AX distribution followed H3 occupancy and, thus, the nucleosome pattern. The highest H2AX and H3 enrichment was observed in transcriptionally active compartments (euchromatin) while the lowest was found in low G/C and gene-poor compartments (heterochromatin). Under physiological conditions, the body of highly and moderately transcribed genes was devoid of gH2AX, despite presenting high H2AX levels. gH2AX accumulation was observed in 5’ or 3’ flanking regions, instead. The same genes showed a prompt gH2AX accumulation during the early stage of DDR which then decreased over time as DDR proceeded.
Finally, during the late stage of DDR the residual gH2AX signal was entirely retained in heterochromatic compartments. At this stage, euchromatic compartments were completely devoid of gH2AX despite presenting high levels of non-phosphorylated H2AX.
Conclusions: We show that gH2AX distribution ultimately depends on H2AX occupancy, the latter following H3 occupancy and, thus, nucleosome pattern. Both H2AX and H3 levels were higher in actively transcribed compartments. However, gH2AX levels were remarkably low over the body of actively transcribed genes suggesting that transcription levels antagonize gH2AX spreading. Moreover, repair processes did not take place uniformly across the genome; rather, DNA repair was affected by genomic location and transcriptional activity. We propose that higher H2AX density in euchromatic compartments results in a high relative gH2AX concentration soon after the activation of DDR, thus favoring the recruitment of the DNA repair machinery to those compartments. When the damage is repaired and gH2AX is removed, its residual fraction is retained in the heterochromatic compartments which are then targeted and repaired at later times.
We present the current status of hybrid approaches to describing heavy ion collisions, together with their future challenges and perspectives. First, we present a hybrid model combining a Boltzmann transport model of hadronic degrees of freedom in the initial and final state with an optional hydrodynamic evolution during the dense and hot phase. Second, we present a recent extension of the hydrodynamical model that includes fluctuations near the phase transition by coupling a chiral field to the hydrodynamic evolution.
Background: In this interdisciplinary project, the biological effects of heavy ions are compared to those of X-rays using tissue slice culture preparations from rodents and humans. Advantages of this biological model are the conservation of an organotypic environment and the independence from the genetic immortalization strategies used to generate cell lines. Its open access allows easy treatment and observation via live-imaging microscopy. Materials and methods: Rat brains and human brain tumor tissue are cut into 300 µm thick tissue slices. These slices are cultivated using a membrane-based culture system and kept in an incubator at 37°C until treatment. The slices are treated with X-rays at the radiation facility of the University Hospital in Frankfurt at doses of up to 40 Gy. The heavy ion irradiations were performed at the UNILAC facility at GSI with different ions of 11.4 A MeV and fluences ranging from 0.5–10 × 10⁶ particles/cm². Using 3D confocal microscopy, cell death and immune cell activation in the irradiated slices are analyzed. Planning of the irradiation experiments is done with simulation programs developed at GSI and FIAS. Results: After receiving a single application of either X-rays or heavy ions, slices were kept in culture for up to 9 days post irradiation. DNA damage was visualized using gamma-H2AX staining. Here, a dose-dependent increase and a time-dependent decrease could clearly be observed for the X-ray irradiation. Slices irradiated with heavy ions showed fewer gamma-H2AX-positive cells distributed evenly throughout the slice, even though particles were calculated to penetrate only 90–100 µm into the slice. Conclusions: Single irradiations of brain tissue, even at high doses of 40 Gy, result neither in macroscopically visible tissue damage nor in necrosis. This is in line with the view that the brain is highly radio-resistant. However, DNA damage can be detected very well in tissue slices using gamma-H2AX immunostaining.
Thus, slice cultures are an excellent tool to study radiation-induced damage and repair mechanisms in living tissues.
A considerable effort has been dedicated recently to the construction of generic equations of state (EOSs) for matter in neutron stars. The advantage of these approaches is that they can provide model-independent information on the interior structure and global properties of neutron stars. Making use of more than 10⁶ generic EOSs, we assess the validity of quasi-universal relations of neutron-star properties for a broad range of rotation rates, from slow rotation up to the mass-shedding limit. In this way, we are able to determine with unprecedented accuracy the quasi-universal maximum-mass ratio between rotating and nonrotating stars and reveal the existence of a new relation for the surface oblateness, i.e., the ratio between the polar and equatorial proper radii. We discuss the impact that our findings have on the imminent detection of new binary neutron-star mergers and how they can be used to set new and more stringent limits on the maximum mass of nonrotating neutron stars, as well as to improve the modeling of the X-ray emission from the surface of rotating stars.