Frankfurt Institute for Advanced Studies (FIAS)
ϕ-meson production in In–In collisions at Elab=158A GeV: Evidence for relics of a thermal phase
(2010)
Yields and transverse mass distributions of ϕ-mesons reconstructed in the ϕ→μ+μ− channel in In+In collisions at Elab=158A GeV are calculated within an integrated Boltzmann+hydrodynamics hybrid approach based on the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport model with an intermediate hydrodynamic stage. The analysis is performed for various centralities, and a comparison with the corresponding NA60 data in the muon channel is presented. We find that the hybrid model, which embeds an intermediate locally equilibrated phase subsequently mapped into the transport dynamics according to thermal phase-space distributions, gives a good description of the experimental data, both in yield and slope. In contrast, pure transport model calculations tend to fail to capture the general properties of ϕ-meson production: not only the yield but also the slope of the mT spectra compare poorly with the experimental observations at top SPS energies.
Recent lattice QCD results, compared to a hadron resonance gas model, have shown the need for hundreds of particles in hadronic models. These extra particles influence both the equation of state and hadronic interactions within hadron transport models. Here, we introduce the PDG21+ particle list, which contains the most up-to-date database of particles and their properties. We then convert all particle decays into two-body decays so that they are compatible with SMASH, in order to produce a more consistent description of a heavy-ion collision.
Hadron lists based on experimental studies summarized by the Particle Data Group (PDG) are a crucial input for the equation of state and thermal models used in the study of strongly-interacting matter produced in heavy-ion collisions. Modeling of these strongly-interacting systems is carried out via hydrodynamical simulations, which are followed by hadronic transport codes that also require a hadronic list as input. To remain consistent throughout the different stages of modeling of a heavy-ion collision, the same hadron list with its corresponding decays must be used at each step. It has been shown that even the most uncertain states listed in the PDG from 2016 are required to reproduce partial pressures and susceptibilities from Lattice Quantum Chromodynamics with the hadronic list known as the PDG2016+. Here, we update the hadronic list for use in heavy-ion collision modeling by including the latest experimental information for all states listed in the Particle Data Booklet in 2021. We then compare our new list, called PDG2021+, to Lattice Quantum Chromodynamics results and find that it achieves even better agreement with the first principles calculations than the PDG2016+ list. Furthermore, we develop a novel scheme based on intermediate decay channels that allows for only binary decays, such that PDG2021+ will be compatible with the hadronic transport framework SMASH. Finally, we use these results to make comparisons to experimental data and discuss the impact on particle yields and spectra.
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither induction of extrinsic synaptic plasticity nor blockage of presynaptic activity degrades the lognormal-like distribution, but both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient to generate the lognormal-like distribution, while a combination of intrinsic and extrinsic synaptic plasticity maintains it.
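The robustness of the lognormal shape under activity changes can be illustrated with a toy calculation (all parameters here are hypothetical, not the measured spine data): a global multiplicative rescaling of a lognormal sample shifts its mean and variance but leaves its skewness, and hence its shape, untouched.

```python
import random
import statistics

random.seed(0)
# Hypothetical lognormal "spine size" sample (illustrative parameters only)
sizes = [random.lognormvariate(-1.0, 0.6) for _ in range(50_000)]
# A global multiplicative rescaling, e.g. a uniform change in synaptic strength
scaled = [1.5 * s for s in sizes]

def skewness(x):
    """Sample skewness: E[(X - mu)^3] / sd^3."""
    mu = statistics.fmean(x)
    sd = statistics.pstdev(x)
    return sum((v - mu) ** 3 for v in x) / len(x) / sd ** 3

# Mean and variance change under rescaling; skewness (the shape) does not
print(statistics.fmean(sizes), statistics.fmean(scaled))
print(skewness(sizes), skewness(scaled))
```

This mirrors the abstract's observation that manipulations change the moments of the distribution without destroying its lognormal-like character.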
We investigate the effect of large magnetic fields on the (2 + 1)-dimensional reduced-magnetohydrodynamical expansion of hot and dense nuclear matter produced in √sNN = 200 GeV Au+Au collisions. For the sake of simplicity, we consider the case where the magnetic field points in the direction perpendicular to the reaction plane. We also consider this field to be external, with energy density parametrized as a two-dimensional Gaussian. The width of the Gaussian along the directions orthogonal to the beam axis varies with the centrality of the collision. The dependence of the magnetic field on proper time (τ) for the case of zero electrical conductivity of the QGP is parametrized following Deng et al. [Phys. Rev. C 85, 044907 (2012)], and for finite electrical conductivity following Tuchin [Phys. Rev. C 88, 024911 (2013)]. We solve the equations of motion of ideal hydrodynamics for such an external magnetic field. For collisions with nonzero impact parameter we observe considerable changes in the evolution of the momentum eccentricities of the fireball when comparing the case when the magnetic field decays in a conducting QGP medium and when no magnetic field is present. The elliptic-flow coefficient v2 of π− is shown to increase in the presence of an external magnetic field, and the increment in v2 is found to depend on the evolution and the initial magnitude of the magnetic field.
The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect, the brain’s abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However, this formulation contains a substantial issue when sensorimotor modules are used in combination: the credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
We estimate the temperature dependence of the bulk viscosity in a relativistic hadron gas. Employing the Green–Kubo formalism in the SMASH (Simulating Many Accelerated Strongly-interacting Hadrons) transport approach, we study different hadronic systems in increasing order of complexity. We analyze the (in)validity of the single-exponential relaxation ansatz for the bulk-channel correlation function and the strong influence of the resonances and their lifetimes. We discuss the difference between the inclusive bulk viscosity of an equilibrated, long-lived system and the effective bulk viscosity of a short-lived mixture like the hadronic phase of relativistic heavy-ion collisions, where the processes whose inverse relaxation rates are larger than the fireball duration are excluded from the analysis. This clarifies the differences between previous approaches which computed the bulk viscosity including/excluding the very slow processes in the hadron gas. We compare our final results with previous hadron gas calculations and confirm a decreasing trend of the inclusive bulk viscosity over entropy density as temperature increases, whereas the effective bulk viscosity to entropy ratio, while being lower than the inclusive one, shows no strong dependence on temperature.
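As a pointer to the formalism (the generic textbook form, not the specific SMASH implementation), the Green–Kubo bulk viscosity follows from the equilibrium autocorrelation of pressure fluctuations:

```latex
\zeta = \frac{V}{T} \int_0^{\infty} \mathrm{d}t \; C^{\mathrm{bulk}}(t),
\qquad
C^{\mathrm{bulk}}(t) = \langle \Delta P(0)\, \Delta P(t) \rangle,
\qquad
\Delta P(t) = P(t) - \langle P \rangle .
```

Under the single-exponential relaxation ansatz mentioned above, \(C^{\mathrm{bulk}}(t) \approx C^{\mathrm{bulk}}(0)\, e^{-t/\tau}\), the integral reduces to \(\zeta = V\, C^{\mathrm{bulk}}(0)\, \tau / T\), so the (in)validity of that ansatz feeds directly into the extracted viscosity.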
ALICE (A Large Ion Collider Experiment) is one of the four large-scale experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature, and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data-taking period at the LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction, including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
The influence of visual tasks on short- and long-term memory for visual features was investigated using a change-detection paradigm. Subjects completed two tasks: (a) describing objects in natural images, reporting a specific property of each object when a crosshair appeared above it, and (b) viewing a modified version of each scene and detecting which of the previously described objects had changed. When tested over short delays (seconds), no task effects were found. Over longer delays (minutes), we found the describing task influenced what types of changes were detected in a variety of explicit and incidental memory experiments. Furthermore, we found surprisingly high performance in the incidental memory experiment, suggesting that simple tasks are sufficient to instill long-lasting visual memories. Keywords: visual working memory, natural scenes, natural tasks, change detection
In the juvenile brain, the synaptic architecture of the visual cortex remains in a state of flux for months after the natural onset of vision and the initial emergence of feature selectivity in visual cortical neurons. It is an attractive hypothesis that visual cortical architecture is shaped during this extended period of juvenile plasticity by the coordinated optimization of multiple visual cortical maps such as orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we introduced a class of analytically tractable coordinated optimization models and solved representative examples, in which a spatially complex organization of the OP map is induced by interactions between the maps. We found that these solutions near symmetry breaking threshold predict a highly ordered map layout. Here we examine the time course of the convergence towards attractor states and optima of these models. In particular, we determine the timescales on which map optimization takes place and how these timescales can be compared to those of visual cortical development and plasticity. We also assess whether our models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold, when the spatial periodicities of the two maps are detuned and when considering more than two feature dimensions. We show that, although maps typically undergo substantial rearrangement, no solutions other than pinwheel crystals and stripes dominate in the emerging layouts. Pinwheel crystallization takes place on a rather short timescale and can also occur for detuned wavelengths of different maps. Our numerical results thus support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the architecture of the visual cortex. We discuss several alternative scenarios that may improve the agreement between model solutions and biological observations.
In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species-invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles affect the predicted map structure, and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps, such that inferences about the optimization principle from map layout appear viable. We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps.
Experimental data from the NA49 collaboration show an unexpectedly steep rise of the rapidity width of the ϕ meson as a function of beam energy, which has been suggested as a possible signal of novel physics. In this work we show that the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model is able to reproduce the shapes of the rapidity distributions of most measured hadrons and predicts a common linear increase of the width for all hadrons. Only when following exactly the same analysis technique and experimental acceptance of the NA49 and NA61/SHINE collaborations do we find that the extracted value of the rapidity width of the ϕ increases drastically for the highest beam energy. We conclude that the observed steep increase of the ϕ rapidity width is an artifact of the limited detector acceptance and the simplified Gaussian fit approximation.
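The generic mechanism at work here can be sketched in a few lines (a toy illustration with invented numbers, not the NA49/NA61 or UrQMD analysis): estimating the width of a Gaussian rapidity distribution from a truncated acceptance window biases the extracted value, and the direction and size of the bias depend on the acceptance and the fit procedure.

```python
import random
import statistics

random.seed(1)
true_sigma = 1.2  # hypothetical rapidity width of the full distribution
y = [random.gauss(0.0, true_sigma) for _ in range(50_000)]

full_width = statistics.pstdev(y)
# Keep only tracks inside a limited acceptance window |y| < 1
accepted = [v for v in y if abs(v) < 1.0]
acc_width = statistics.pstdev(accepted)

print(full_width, acc_width)  # the truncated sample looks narrower
```

Here the naive moment estimate on the truncated sample underestimates the width; recovering the true width requires an extrapolation (e.g. a Gaussian fit), which is exactly where the approximation criticized in the abstract enters.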
We investigate the development of the directed flow, v1, and the elliptic flow, v2, in mid-central Au+Au heavy-ion reactions at Elab=1.23A GeV. We demonstrate that the elliptic flow of hot and dense matter is initially positive (v2>0) due to the early pressure gradient. This positive v2 transfers its momentum to the spectators, which leads to the creation of the directed flow v1. In turn, the spectator shadowing of the in-plane expansion leads to a preferred decoupling of hadrons in the out-of-plane direction and results in a negative v2 for the observable final-state hadrons. We propose a measurement of v1−v2 flow correlations and of the elliptic flow of dileptons as methods to pin down this evolution pattern. The elliptic flow of the dileptons then makes it possible to determine the early-stage EoS more precisely, because it avoids the strong modifications of the momentum distribution due to shadowing seen in the protons. This opens the unique opportunity for the HADES and CBM collaborations to measure the equation of state directly at 2-3 times nuclear saturation density.
Future operation of the CBM detector requires ultra-fast analysis of the continuous stream of data from all subdetector systems. Determining the inter-system time shifts among the individual detector systems in the existing prototype experiment mCBM is an essential step for data processing and, in particular, for stable data taking. Based on the input of raw measurements from all detector systems, the corresponding time correlations can be obtained at the digital level by evaluating the differences in time stamps. If the relevant systems are stable during data taking and sufficient digital measurements are available, the distribution of time differences should display a clear peak. Up to now, the processed time differences have been stored in histograms and the maximum peak considered only after the evaluation of all timeslices of a run, leading to significant run times. The results presented here demonstrate the stability of the synchronicity of the mCBM systems. Furthermore, it is illustrated that relatively small amounts of raw measurements are sufficient to evaluate the corresponding time correlations among individual mCBM detectors, thus enabling fast online monitoring in future online data processing.
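The peak-finding principle described above can be sketched as follows (a minimal model with hypothetical numbers, not mCBM code): correlated hits in two subsystems produce a peak in the histogram of time-stamp differences at the inter-system shift, sitting on a flat background of uncorrelated pairs.

```python
import random
from collections import Counter

random.seed(2)
true_shift = 37  # hypothetical inter-system time shift (arbitrary time units)

# Correlated hits seen by two subsystems, with a small per-hit time jitter
hits_a = [random.randrange(0, 100_000) for _ in range(5_000)]
hits_b = [a + true_shift + round(random.gauss(0, 1)) for a in hits_a]

# Time-stamp differences: correlated pairs plus an uncorrelated background
diffs = [b - a for a, b in zip(hits_a, hits_b)]
diffs += [random.randrange(-100, 100) for _ in range(5_000)]

# The maximum of the difference histogram recovers the shift
peak, _ = Counter(diffs).most_common(1)[0]
print(peak)
```

Even with as few as a few thousand correlated digis the peak stands out clearly, which is consistent with the abstract's point that small amounts of raw data suffice for online monitoring.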
In this work the baryon number and strangeness susceptibilities of second and fourth order are presented. The results at zero baryon-chemical potential are obtained using a well-tested chiral effective model including all known hadronic degrees of freedom and additionally implementing quarks and gluons in a PNJL-like approach. Quark and baryon number susceptibilities are sensitive to the fundamental degrees of freedom in the model and signal the shift from massive hadrons to light quarks at the deconfinement transition through a sharp rise at the critical temperature. Furthermore, all susceptibilities are found to be largely suppressed by repulsive vector-field interactions of the particles. In the hadronic sector, the vector repulsion of baryon resonances restrains fluctuations to a large extent, and in the quark sector above Tc even small vector-field interactions of quarks quench all fluctuations unreasonably strongly. For this reason, vector-field interactions for quarks have to vanish in the deconfinement limit.
Stimulated emission depletion (STED) microscopy is a super-resolution technique that surpasses the diffraction limit and has contributed to the study of dynamic processes in living cells. However, high laser intensities induce fluorophore photobleaching and sample phototoxicity, limiting the number of fluorescence images obtainable from a living cell. Here, we address these challenges by using ultra-low irradiation intensities and a neural network for image restoration, enabling extensive imaging of single living cells. The endoplasmic reticulum (ER) was chosen as the target structure due to its dynamic nature over short and long timescales. The reduced irradiation intensity combined with denoising permitted continuous ER dynamics observation in living cells for up to 7 hours with a temporal resolution of seconds. This allowed for quantitative analysis of ER structural features over short (seconds) and long (hours) timescales within the same cell, and enabled fast 3D live-cell STED microscopy. Overall, the combination of ultra-low irradiation with image restoration enables comprehensive analysis of organelle dynamics over extended periods in living cells.
Neurogenesis of hippocampal granule cells (GCs) persists throughout mammalian life and is important for learning and memory. How newborn GCs differentiate and mature into an existing circuit during this time period is not yet fully understood. We established a method to visualize postnatally generated GCs in organotypic entorhino-hippocampal slice cultures (OTCs) using retroviral (RV) GFP-labeling and performed time-lapse imaging to study their morphological development in vitro. Using anterograde tracing we could, furthermore, demonstrate that the postnatally generated GCs in OTCs, similar to adult born GCs, grow into an existing entorhino-dentate circuitry. RV-labeled GCs were identified and individual cells were followed for up to four weeks post injection. Postnatally born GCs exhibited highly dynamic structural changes, including dendritic growth spurts but also retraction of dendrites and phases of dendritic stabilization. In contrast, older, presumably prenatally born GCs labeled with an adeno-associated virus (AAV), were far less dynamic. We propose that the high degree of structural flexibility seen in our preparations is necessary for the integration of newborn granule cells into an already existing neuronal circuit of the dentate gyrus in which they have to compete for entorhinal input with cells generated and integrated earlier.
Highlights
• We present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series.
• The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios.
• The model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33.
Abstract
High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis as it provides continuous high-frequency measurements of ground motion. This data can be used to analyze diverse parameters related to the seismic source and to assess the potential of an earthquake to produce strong ground motions at certain distances and even generate tsunamis. In this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series. The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data with 0.07 ≤ RMS ≤ 0.11. Comparable results were observed in tests using synthetic data from a different region than the training data, with RMS ≤ 0.15. Furthermore, the model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33, provided that the data from a particular group of stations had epicentral distance constraints similar to those used during the model training. The robustness of the DL model can be improved so that it works independently of the window size of the time series and the number of stations, enabling faster estimation by the model using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements.
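For readers unfamiliar with the metric quoted above, the RMS values compare predicted and reference magnitudes; a minimal sketch (the magnitude values below are invented, purely for illustration):

```python
import math

def rms_error(predicted, true):
    """Root-mean-square error between predicted and reference magnitudes."""
    return math.sqrt(
        sum((p - t) ** 2 for p, t in zip(predicted, true)) / len(predicted)
    )

# Hypothetical predicted vs. catalog magnitudes (illustrative numbers only)
print(rms_error([7.1, 6.8, 8.0], [7.0, 6.9, 8.2]))  # ≈ 0.14, inside the quoted 0.09-0.33 range
```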
For medicine to fulfill its promise of personalized treatments based on a better understanding of disease biology, computational and statistical tools must exist to analyze the increasing amount of patient data that becomes available. A particular challenge is that several types of data are being measured to cope with the complexity of the underlying systems, enhance predictive modeling and enrich molecular understanding.
Here we review a number of recent approaches that specialize in the analysis of multimodal data in the context of predictive biomedicine. We focus on methods that combine different omics measurements with image or genome variation data. Our overview shows the diversity of methods that address analysis challenges and reveals new avenues for novel developments.
As important as the intrinsic properties of an individual nerve cell is the network of neurons in which it is embedded and by virtue of which it acquires a great part of its responsiveness and functionality. In this study we have explored how the topological properties and conduction delays of several classes of neural networks affect the capacity of their constituent cells to establish well-defined temporal relations among the firing of their action potentials. This ability of a population of neurons to produce and maintain millisecond-precise coordinated firing (either evoked by external stimuli or internally generated) is central to neural codes exploiting precise spike timing for the representation and communication of information. Our results, based on extensive simulations of conductance-based model neurons in an oscillatory regime, indicate that only certain network topologies allow for coordinated firing at local and long-range scales simultaneously. Besides network architecture, axonal conduction delays are also observed to be another important factor in the generation of coherent spiking. We report that such communication latencies not only set the phase difference between the oscillatory activity of remote neural populations but also determine whether the interconnected cells can settle into any coherent firing at all. In this context, we have also investigated how the balance between the network's synchronizing effects and the dispersive drift caused by inhomogeneities in natural firing frequencies across neurons is resolved. Finally, we show that the observed roles of conduction delays and frequency dispersion are not particular to canonical networks; experimentally measured anatomical networks, such as the macaque cortical network, can display the same type of behavior.
In self-organized critical (SOC) systems avalanche size distributions follow power-laws. Power-laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
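The propagation principle stated above — one spike triggering on average slightly less than one spike — can be sketched with a minimal branching model (hypothetical rates, not the authors' fitted parameters): without external input a slightly sub-critical network dies out, while a small constant drive sustains ongoing activity with no separation of time scales.

```python
import random

random.seed(3)

def step(active, m=0.98, drive=0):
    """Each active unit triggers one follower with probability m; add external input."""
    return sum(1 for _ in range(active) if random.random() < m) + drive

# Without external input, slightly sub-critical activity eventually dies out
a = 1000
for _ in range(3000):
    a = step(a)
extinct = a
print(extinct)  # activity has died out

# A small constant drive sustains activity around drive / (1 - m) = 250 per bin
a, total, n = 0, 0.0, 0
for t in range(5000):
    a = step(a, drive=5)
    if t >= 1000:  # discard the transient before averaging
        total += a
        n += 1
print(total / n)
```

The driven regime illustrates the abstract's point: activity is ongoing rather than arriving in temporally separated avalanches, yet propagation stays just below the critical value of one.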
When studying real world complex networks, one rarely has full access to all their components. As an example, the central nervous system of the human consists of 10^11 neurons which are each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single neuron activity, it can heavily distort observables which characterize interactions between pairs or groups of neurons. Without a precise understanding how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal patterns of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic for SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all the other units.
Under subsampling the avalanche measures A depended on three main factors: First, A depended on the interaction rules of the model and its topology, thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated.
Since under subsampling, the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: When changing the number and the distance between electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even if brain activity is far from being fully sampled.
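The combination of a slightly sub-critical branching parameter and subsampling described in the abstracts above can be illustrated with a minimal sketch (all parameters here are illustrative choices, not those of the studies): a branching process with branching parameter σ = 0.9, whose avalanches are then binomially thinned to mimic recording only a small fraction of the spikes.

```python
import numpy as np

def avalanche_sizes(sigma, n_avalanches, rng):
    """Sizes of avalanches of a branching process: each spike triggers
    on average `sigma` spikes in the next step (sub-critical for
    sigma < 1, so every avalanche terminates)."""
    sizes = np.empty(n_avalanches, dtype=np.int64)
    for i in range(n_avalanches):
        size = active = 1
        while active:
            active = rng.poisson(sigma * active)  # offspring of this step
            size += active
        sizes[i] = size
    return sizes

rng = np.random.default_rng(0)
sizes = avalanche_sizes(0.9, 20_000, rng)
print(sizes.mean())   # mean avalanche size of a sub-critical process: 1/(1 - sigma) = 10

# Subsampling: each spike is observed independently with probability p,
# mimicking electrodes that cover only a small fraction of the network.
p = 0.1
observed = rng.binomial(sizes, p)
print(observed.mean())   # thinned mean: p/(1 - sigma) = 1
```

Thinning rescales the mean by the factor p but reshapes the whole avalanche size distribution, which is the sense in which subsampling distorts the observables discussed above.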
Neuronal dynamics differs between wakefulness and sleep stages, and so does the cognitive state. In contrast, a single attractor state, called self-organized critical (SOC), has been proposed to govern human brain dynamics, owing to its optimal information coding and processing capabilities. Here we address two open questions: First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas; however, the interaction between brain areas may be organized differently. Here we asked whether the interaction between brain areas is SOC. ...
The charged-particle community is looking for techniques that exploit proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, it has proven highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods. For these methods, reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
Interacting with the environment to process sensory information, generate perceptions, and shape behavior engages neural networks in brain areas with highly varied representations, ranging from unimodal sensory cortices to higher-order association areas. Recent work suggests a much greater degree of commonality across areas, with distributed and modular networks present in both sensory and non-sensory areas during early development. However, it is currently unknown whether this initially common modular structure undergoes an equally common developmental trajectory, or whether such a modular functional organization persists in some areas—such as primary visual cortex—but not others. Here we examine the development of network organization across diverse cortical regions in ferrets of both sexes using in vivo widefield calcium imaging of spontaneous activity. We find that all regions examined, including both primary sensory cortices (visual, auditory, and somatosensory—V1, A1, and S1, respectively) and higher order association areas (prefrontal and posterior parietal cortices) exhibit a largely similar pattern of changes over an approximately 3 week developmental period spanning eye opening and the transition to predominantly externally-driven sensory activity. We find that both a modular functional organization and millimeter-scale correlated networks remain present across all cortical areas examined. These networks weakened over development in most cortical areas, but strengthened in V1. Overall, the conserved maintenance of modular organization across different cortical areas suggests a common pathway of network refinement, and suggests that a modular organization—known to encode functional representations in visual areas—may be similarly engaged in highly diverse brain areas.
Significance Different areas of the mature brain encode vastly different representations of the world. This study shows that a modular functional organization, where nearby neurons participate in similar functional networks, is shared across different brain areas not only during early development but also as the brain matures, where it remains a shared feature that shapes neural activity. The largely conserved trajectory of developmental changes across brain areas suggests that similar circuit mechanisms may drive this maturation. This implies that the large literature on developing cortical circuits, which is largely focused on sensory areas, may also apply more broadly, and that perturbations during development that impinge on any such shared mechanisms may produce deficits that extend across multiple brain systems.
We present the black hole accretion code (BHAC), a new multidimensional general-relativistic magnetohydrodynamics module for the MPI-AMRVAC framework. BHAC has been designed to solve the equations of ideal general-relativistic magnetohydrodynamics in arbitrary spacetimes and exploits adaptive mesh refinement techniques with an efficient block-based approach. Several spacetimes have already been implemented and tested. We demonstrate the validity of BHAC by means of various one-, two-, and three-dimensional test problems, as well as through a close comparison with the HARM3D code in the case of a torus accreting onto a black hole. The convergence of a turbulent accretion scenario is investigated with several diagnostics and we find accretion rates and horizon-penetrating fluxes to be convergent to within a few percent when the problem is run in three dimensions. Our analysis also involves the study of the corresponding thermal synchrotron emission, which is performed by means of a new general-relativistic radiative transfer code, BHOSS. The resulting synthetic intensity maps of accretion onto black holes are found to be convergent with increasing resolution and are anticipated to play a crucial role in the interpretation of horizon-scale images resulting from upcoming radio observations of the source at the Galactic Center.
The wave function of a spheroidal harmonic oscillator without spin-orbit interaction is expressed in terms of associated Laguerre and Hermite polynomials. The pairing gap and Fermi energy are found by solving the BCS system of two equations. Analytical relationships for the matrix elements of inertia are obtained as functions of the main quantum numbers and the potential derivative. They may be used to test the complex computer codes one has to develop in a realistic approach to fission dynamics. The results given for the 240Pu nucleus are compared with a hydrodynamical model. The importance of taking into account the correction term due to the variation of the occupation number is stressed.
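As a small numerical aside (not the authors' code), the Hermite-polynomial building block of such oscillator wave functions can be evaluated directly; the sketch below constructs the one-dimensional harmonic-oscillator eigenfunction in dimensionless units (ħ = m = ω = 1) and checks its normalization:

```python
import math
import numpy as np

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term
    recurrence H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)."""
    h_prev, h = np.ones_like(x), 2 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

def ho_wavefunction(n, x):
    """n-th eigenfunction of the 1D harmonic oscillator in dimensionless
    units: psi_n = (2^n n! sqrt(pi))^(-1/2) H_n(x) exp(-x^2/2)."""
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermite(n, x) * np.exp(-0.5 * x**2)

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi3 = ho_wavefunction(3, x)
print((psi3**2).sum() * dx)   # ≈ 1, i.e. psi_3 is normalized
```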
Potential energy surfaces are calculated using the most advanced asymmetric two-center shell model, which allows one to obtain shell and pairing corrections that are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy-ion emission. Results for 222Ra, 232U, 236Pu and 242Cm illustrate the main ideas and show for the first time for a cluster emitter a potential barrier obtained by using the macroscopic-microscopic method.
Complex fission phenomena
(2004)
Complex fission phenomena are studied in a unified way. Very general reflection-asymmetric equilibrium (saddle-point) nuclear shapes are obtained by solving an integro-differential equation without the need to specify a particular shape parametrization. The mass asymmetry in binary cold fission of Th and U isotopes is explained as the result of adding a phenomenological shell correction to the liquid-drop-model deformation energy. Applications to binary, ternary, and quaternary fission are outlined.
Sharp wave-ripples (SPW-Rs) are a hippocampal network phenomenon critical for memory consolidation and planning. SPW-Rs have been extensively studied in the adult brain, yet their developmental trajectory is poorly understood. While SPWs have been recorded in rodents shortly after birth, the time point and mechanisms of ripple emergence are still unclear. Here, we combine in vivo electrophysiology with optogenetics and chemogenetics in 4- to 12-day-old mice to address this knowledge gap. We show that ripples are robustly detected and induced by light stimulation of ChR2-transfected CA1 pyramidal neurons only from postnatal day (P) 10 onwards. Leveraging a spiking neural network model, we mechanistically link the maturation of inhibition and ripple emergence. We corroborate these findings by reducing the ripple rate upon chemogenetic silencing of CA1 interneurons. Finally, we show that early SPW-Rs elicit a more robust prefrontal cortex response than SPWs lacking ripples. Thus, the development of inhibition promotes ripple emergence.
Introduction: Neuronal death and subsequent denervation of target areas are hallmarks of many neurological disorders. Denervated neurons lose part of their dendritic tree, and are considered "atrophic", i.e. pathologically altered and damaged. The functional consequences of this phenomenon are poorly understood.
Results: Using computational modelling of 3D-reconstructed granule cells we show that denervation-induced dendritic atrophy also subserves homeostatic functions: By shortening their dendritic tree, granule cells compensate for the loss of inputs by a precise adjustment of excitability. As a consequence, surviving afferents are able to activate the cells, thereby allowing information to flow again through the denervated area. In addition, action potentials backpropagating from the soma to the synapses are enhanced specifically in reorganized portions of the dendritic arbor, resulting in their increased synaptic plasticity. These two observations generalize to any given dendritic tree undergoing structural changes.
Conclusions: Structural homeostatic plasticity, i.e. homeostatic dendritic remodeling, is operating in long-term denervated neurons to achieve functional homeostasis.
At nonzero temperature, it is expected that QCD undergoes a phase transition to a deconfined, chirally symmetric phase, the Quark-Gluon Plasma (QGP). I review what we expect theoretically about this possible transition, and what we have learned from heavy ion experiments at RHIC. I argue that while there are unambiguous signals for qualitatively new behavior at RHIC compared with experiments at lower energies, in detail no simple theoretical model can explain all salient features of the data.
NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events
(2009)
Poster presentation: We present a non-parametric and computationally efficient method named NeuroXidence (see http://www.NeuroXidence.com ) that detects coordinated firing within a group of two or more neurons and tests whether the observed level of coordinated firing is significantly different from that expected by chance. NeuroXidence [1] considers the full auto-structure of the data, including the changes in the rate responses and the history dependencies in the spiking activity. We demonstrate that NeuroXidence can identify epochs with significant spike synchronisation even if these coincide with strong and fast rate modulations. We also show that the method accounts for trial-by-trial variability in the rate responses and their latencies, and that it can be applied to short data windows lasting only tens of milliseconds. Based on simulated data we compare the performance of NeuroXidence with the UE-method [2,3] and the cross-correlation analysis. An application of NeuroXidence to 42 single-units (SU) recorded in area 17 of an anesthetized cat revealed significant coincident events of high complexities, involving firing of up to 8 SUs simultaneously (5 ms window). The results were highly consistent with those obtained by traditional pair-wise measures based on cross-correlation: Neuronal synchrony was strongest in stimulation conditions in which the orientation of the sinusoidal grating matched the preferred orientation of most of the SUs included in the analysis, and was weakest when the neurons were stimulated least optimally. Interestingly, events of higher complexities showed stronger stimulus-specific modulation than pair-wise interactions. The results provide strong evidence for stimulus-specific synchronous firing and, therefore, support the temporal coding hypothesis in visual cortex. ...
Poster presentation: Coordinated neuronal activity across many neurons, i.e. synchronous or spatiotemporal patterns, has long been believed to be a major component of neuronal activity. However, the discussion of whether coordinated activity really exists has remained heated and controversial. A major uncertainty is that many analysis approaches either ignore the auto-structure of the spiking activity, assume a very simplified model (Poissonian firing), or change the auto-structure by spike jittering. We studied whether a statistical inference that tests whether coordinated activity occurs beyond chance can be rendered false if one ignores or changes the real auto-structure of recorded data. To this end, we investigated the distribution of coincident spikes in mutually independent spike trains modeled as renewal processes. We considered Gamma processes with different shape parameters as well as renewal processes in which the ISI distribution is log-normal. For Gamma processes of integer order, we calculated the mean number of coincident spikes, as well as the Fano factor of the coincidences, analytically. We determined how these measures depend on the bin width and also investigated how they depend on the firing rate, and on the rate difference between the neurons. We used Monte-Carlo simulations to estimate the whole distribution for these parameters and also for other values of the shape parameter. Moreover, we considered the effect of dithering for both of these processes and saw that while dithering does not change the average number of coincidences, it does change the shape of the coincidence distribution. Our major findings are: 1) the width of the coincidence count distribution depends very critically and in a non-trivial way on the detailed properties of the inter-spike interval distribution, 2) the dependencies of the Fano factor on the coefficient of variation of the ISI distribution are complex and mostly non-monotonic.
Moreover, the Fano factor depends on the very detailed properties of the individual point processes, and cannot be predicted from the CV alone. Hence, given a recorded data set, the estimated CV of the ISI distribution is not sufficient to predict the Fano factor of the coincidence count distribution, and 3) spike jittering, even if it is as small as a fraction of the expected ISI, can falsify the inference on coordinated firing. In most of the tested cases, and especially for complex synchronous and spatiotemporal patterns across many neurons, spike jittering strongly increased the likelihood of false-positive findings. Last, we discuss a procedure [1] that considers the complete auto-structure of each individual spike train when testing whether synchronous firing occurs by chance, and therefore overcomes the danger of an increased level of false positives.
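A Monte-Carlo setup of the kind described above can be sketched in a few lines (illustrative parameters only: one shape parameter, one rate, one bin width, whereas the study scans all of these): two independent Gamma-renewal spike trains are binned, chance coincidences counted, and the Fano factor of the coincidence count estimated across repeated trials.

```python
import numpy as np

def gamma_spike_train(rate, shape, t_max, rng):
    """Renewal spike train with Gamma(shape, scale) ISIs; the scale is
    chosen so the mean ISI is 1/rate."""
    scale = 1.0 / (rate * shape)
    isis = rng.gamma(shape, scale, size=int(3 * rate * t_max) + 100)
    spikes = np.cumsum(isis)
    return spikes[spikes < t_max]

def coincidence_count(train_a, train_b, t_max, bin_width):
    """Number of bins in which both trains fire at least once."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    a, _ = np.histogram(train_a, edges)
    b, _ = np.histogram(train_b, edges)
    return int(np.sum((a > 0) & (b > 0)))

rng = np.random.default_rng(1)
rate, shape, t_max, bw = 20.0, 2.0, 10.0, 0.005   # 20 Hz, 10 s, 5 ms bins
counts = np.array([
    coincidence_count(gamma_spike_train(rate, shape, t_max, rng),
                      gamma_spike_train(rate, shape, t_max, rng),
                      t_max, bw)
    for _ in range(200)
], dtype=float)
fano = counts.var(ddof=1) / counts.mean()
print(counts.mean(), fano)   # chance coincidences and their Fano factor
```

Rerunning this with different shape parameters and bin widths reproduces the qualitative point of the abstract: the coincidence distribution, and hence any significance test built on it, depends sensitively on the ISI statistics.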
Poster presentation: How can two distant neural assemblies synchronize their firing at zero lag even in the presence of non-negligible delays in the transfer of information between them? Neural synchronization stands today as one of the most promising mechanisms to counterbalance the huge anatomical and functional specialization of the different brain areas. However, although more evidence is accumulating in favor of its functional role as a binding mechanism of distributed neural responses, the physical and anatomical substrate for such dynamic and precise synchrony, especially at zero lag in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization of spiking assemblies of neurons for a wide range of temporal delays. We demonstrate that two distant neural assemblies that do not interact directly, but relay their dynamics via a third mediating neuron or population, eventually achieve zero-lag coherent firing. Extensive numerical simulations of populations of Hodgkin-Huxley neurons interacting in such a network are analyzed. The results show that even with axonal delays as large as 15 ms the distant neural populations can synchronize their firing at zero lag with millisecond precision after the exchange of a few spikes. The roles of noise and of a distribution of axonal delays in the synchronized dynamics of the neural populations are also studied, confirming the robustness of this synchronization mechanism. The proposed network module is densely embedded within the complex functional architecture of the brain, and especially within the reciprocal thalamocortical interactions, where the role of indirect pathways mimicking direct cortico-cortical fibers has already been suggested to facilitate trans-areal cortical communication.
In summary, the robust neural synchronization mechanism presented here arises as a consequence of the relay and redistribution of the dynamics performed by a mediating neuronal population. In contrast to previous works, neither inhibition, gap junctions, nor complex network topologies need to be invoked to provide a stable mechanism of zero-phase correlated activity of neural populations in the presence of large conduction delays.
Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14–95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65–95 Hz and 14–50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.
Poster presentation: Characterizing neuronal encoding is essential for understanding information processing in the brain. Three methods are commonly used to characterize the relationship between neural spiking activity and the features of putative stimuli: Wiener-Volterra kernel methods (WVK), the spike-triggered average (STA), and more recently, the point process generalized linear model (GLM). We compared the performance of these three approaches in estimating receptive field properties and orientation tuning of 251 V1 neurons recorded from 2 monkeys during a fixation period in response to a moving bar. The GLM consisted of two formulations of the conditional intensity function for a point process characterization of the spiking activity: one with a stimulus component only and one with both stimulus and spike history. We fit the GLMs by maximum likelihood using GLMfit in Matlab. Goodness-of-fit was assessed using cross-validation with Kolmogorov-Smirnov (KS) tests based on the time-rescaling theorem to evaluate the accuracy with which each model predicts the spiking activity of individual neurons and for each movement direction (4016 models in total, for 251 neurons and 16 different directions). The GLMs that considered spike history of up to 35 ms accurately predicted neuronal spiking activity (95% confidence intervals for KS test) with a performance of 97.0% (3895/4016) for the training data, and 96.5% (3876/4016) for the test data. If spike history was not considered, performance dropped to 73.1% in the training and 71.3% in the test data. In contrast, the WVK and the STA predicted spiking accurately for 24.2% and 44.5% of the test data examples, respectively. The receptive field size estimates obtained from the GLM (with and without history), WVK and STA were comparable. Relative to the GLM, orientation tuning was underestimated on average by a factor of 0.45 by the WVK and the STA.
The main reason for using the STA and WVK approaches is their apparent simplicity. However, our analyses suggest that more accurate spike prediction, as well as more credible estimates of receptive field size and orientation tuning, can be computed easily using GLMs implemented in Matlab with standard functions such as GLMfit.
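The central point, that a spike-history term greatly improves spike prediction, is easy to reproduce outside Matlab as well. The sketch below is a hypothetical discrete-time (Bernoulli) variant of such a GLM, fitted by iteratively reweighted least squares, the algorithm behind glmfit; the simulated neuron and all parameters are invented for illustration, with a single one-bin history term instead of the 35 ms of history used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a neuron whose spiking depends on a stimulus covariate and on
# whether it spiked in the previous bin (a minimal "spike history" term).
T = 50_000
stim = rng.standard_normal(T)
beta_true = np.array([-3.0, 1.0, -1.0])   # intercept, stimulus, history
spikes = np.zeros(T)
for t in range(1, T):
    eta = beta_true[0] + beta_true[1] * stim[t] + beta_true[2] * spikes[t - 1]
    spikes[t] = rng.random() < 1.0 / (1.0 + np.exp(-eta))

# Design matrix: intercept, stimulus, one-bin spike history.
X = np.column_stack([np.ones(T - 1), stim[1:], spikes[:-1]])
y = spikes[1:]

# Logistic GLM fitted by iteratively reweighted least squares (IRLS).
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)                       # IRLS weights
    z = X @ beta + (y - p) / w              # working response
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

print(beta)   # estimates approach beta_true
```

Dropping the history column from X and refitting gives the reduced, stimulus-only model of the comparison.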
The cumulant ratios up to fourth order of the Z distributions of the largest fragment in spectator fragmentation following 107,124Sn+Sn and 124La+Sn collisions at 600 MeV/nucleon have been investigated. They are found to exhibit the signatures of a second-order phase transition established with cubic bond percolation and previously observed in the ALADIN experimental data for fragmentation of 197Au projectiles at similar energies. The deduced pseudocritical points are found to be only weakly dependent on the A/Z ratio of the fragmenting spectator source. The same holds for the corresponding chemical freeze-out temperatures of close to 6 MeV. The experimental cumulant distributions are quantitatively reproduced with the Statistical Multifragmentation Model, using the parameters employed to describe the experimental fragment multiplicities, isotope distributions and their correlations with impact-parameter-related observables in these reactions. The characteristic coincidence of the zero transition of the skewness with the minimum of the kurtosis excess appears to be a generic property of statistical models and is found to coincide with the maximum of the heat capacity in the canonical thermodynamic fragmentation model.
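For reference (not from the paper), the cumulant ratios in question are the skewness K3/K2^(3/2) and the kurtosis excess K4/K2^2 of the largest-fragment Z distribution. The short helper below illustrates the definitions on synthetic Gaussian and exponential samples, whose ratios are known analytically:

```python
import numpy as np

def cumulant_ratios(z):
    """Skewness K3/K2^(3/2) and kurtosis excess K4/K2^2 of a sample,
    built from the central moments (K2 = mu2, K3 = mu3, K4 = mu4 - 3 mu2^2)."""
    z = np.asarray(z, dtype=float)
    d = z - z.mean()
    k2 = np.mean(d**2)
    k3 = np.mean(d**3)
    k4 = np.mean(d**4) - 3.0 * k2**2
    return k3 / k2**1.5, k4 / k2**2

rng = np.random.default_rng(3)
# A Gaussian has skewness 0 and kurtosis excess 0; an exponential has
# skewness 2 and kurtosis excess 6.
skew_g, kurt_g = cumulant_ratios(rng.normal(size=200_000))
skew_e, kurt_e = cumulant_ratios(rng.exponential(size=200_000))
print(skew_g, kurt_g)
print(skew_e, kurt_e)
```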
Self-organized complexity and Coherent Infomax from the viewpoint of Jaynes’s probability theory
(2012)
This paper discusses concepts of self-organized complexity and the theory of Coherent Infomax in the light of Jaynes’s probability theory. Coherent Infomax shows, in principle, how adaptively self-organized complexity can be preserved and improved by using probabilistic inference that is context-sensitive. The paper argues that neural systems do this by combining local reliability with flexible, holistic context-sensitivity. Jaynes argued that the logic of probabilistic inference shows it to be based upon Bayesian and Maximum Entropy methods or special cases of them. He presented his probability theory as the logic of science; here it is considered as the logic of life. It is concluded that the theory of Coherent Infomax specifies a general objective for probabilistic inference, and that contextual interactions in neural systems perform functions required of the scientist within Jaynes’s theory.
Lattice QCD with heavy quarks reduces to a three-dimensional effective theory of Polyakov loops, which is amenable to series expansion methods. We analyse the effective theory in the cold and dense regime for a general number of colours, Nc. In particular, we investigate the transition from a hadron gas to baryon condensation. For any finite lattice spacing, we find the transition to become stronger, i.e. ultimately first-order, as Nc is made large. Moreover, in the baryon condensed regime, we find the pressure to scale as p ∼ Nc through three orders in the hopping expansion. Such a phase differs from a hadron gas with p ∼ Nc^0, or a quark gluon plasma, p ∼ Nc^2, and was termed quarkyonic in the literature, since it shows both baryon-like and quark-like aspects. A lattice filling with baryon number shows a rapid and smooth transition from condensing baryons to a crystal of saturated quark matter, due to the Pauli principle, and is consistent with this picture. For continuum physics, the continuum limit needs to be taken before the large-Nc limit, which is not yet possible in practice. However, in the controlled range of lattice spacings and Nc values, our results are stable when the limits are approached in this order. We discuss possible implications for physical QCD.
LatticeQCD using OpenCL
(2011)
The global energy system is undergoing a major transition, and in energy planning and decision-making across governments, industry and academia, models play a crucial role. Because of their policy relevance and contested nature, the transparency and open availability of energy models and data are of particular importance. Here we provide a practical how-to guide based on the collective experience of members of the Open Energy Modelling Initiative (Openmod). We discuss key steps to consider when opening code and data, including determining intellectual property ownership, choosing a licence and appropriate modelling languages, distributing code and data, and providing support and building communities. After illustrating these decisions with examples and lessons learned from the community, we conclude that even though individual researchers' choices are important, institutional changes are still also necessary for more openness and transparency in energy research.
Volatility is a widely recognized measure of market risk. As volatility is not observed, it has to be estimated from market prices, i.e., as the implied volatility from option prices. The volatility index VIX, which makes volatility a tradeable asset in its own right, is computed from near- and next-term put and call options on the S&P 500 with more than 23 days and less than 37 days to expiration and non-vanishing bid. In the present paper we quantify the information content of the constituents of the VIX about the volatility of the S&P 500 in terms of the Fisher information matrix. Assuming that observed option prices are centered on the theoretical price provided by Heston's model, perturbed by additive Gaussian noise, we relate their Fisher information matrix to the Greeks in the Heston model. We find that the prices of options contained in the VIX basket allow for reliable estimates of the volatility of the S&P 500 with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value of roughly 3%, inferences from option prices become imprecise because Vega, the derivative of a European option w.r.t. volatility, and thereby the Fisher information, nearly vanish.
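The mechanism in the last sentence can be made concrete with a toy calculation (a sketch under strong simplifying assumptions: Black-Scholes Vega as a stand-in for the Heston Greeks, a hypothetical basket of out-of-the-money strikes, and i.i.d. Gaussian price noise). With additive Gaussian noise of standard deviation s, the Fisher information about σ is I(σ) = Σ_i Vega_i² / s², so the Cramér-Rao bound on the volatility estimate is 1/√I(σ):

```python
import math

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes Vega, dPrice/dSigma (a stand-in for the Heston Vega)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * math.sqrt(T) * math.exp(-0.5 * d1**2) / math.sqrt(2.0 * math.pi)

def vol_stderr(strikes, S, T, r, sigma, noise_sd):
    """Cramer-Rao bound on sigma from noisy option prices:
    I(sigma) = sum_i Vega_i^2 / noise_sd^2, stderr = 1/sqrt(I)."""
    info = sum(bs_vega(S, K, T, r, sigma) ** 2 for K in strikes) / noise_sd**2
    return 1.0 / math.sqrt(info)

S, T, r, noise_sd = 100.0, 30 / 365, 0.01, 0.05
strikes = [80.0, 90.0, 110.0, 120.0]     # hypothetical OTM-only basket
for sigma in (0.20, 0.03):
    print(sigma, vol_stderr(strikes, S, T, r, sigma, noise_sd))
```

At σ = 0.20 the out-of-the-money Vegas are sizeable and σ is pinned down to a fraction of a volatility point, while at σ = 0.03 all Vegas in the basket collapse and the bound explodes, mirroring the imprecision described in the abstract.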
The goal of heavy ion reactions at low beam energies is to explore the QCD phase diagram at high net baryon chemical potential. To relate experimental observations with a first order phase transition or a critical endpoint, dynamical approaches for the theoretical description have to be developed. In this summary of the corresponding plenary talk, the status of the dynamical modeling including the most recent advances is presented. The remaining challenges are highlighted and promising experimental measurements are pointed out.
Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
(2019)
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. Yet, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma-responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both, repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
Background: The technical development of imaging techniques in life sciences has enabled the three-dimensional recording of living samples at increasing temporal resolutions. Dynamic 3D data sets of developing organisms allow for time-resolved quantitative analyses of morphogenetic changes in three dimensions, but require efficient and automatable analysis pipelines to tackle the resulting Terabytes of image data. Particle image velocimetry (PIV) is a robust and segmentation-free technique that is suitable for quantifying collective cellular migration on data sets with different labeling schemes. This paper presents the implementation of an efficient 3D PIV package using the Julia programming language—quickPIV. Our software is focused on optimizing CPU performance and ensuring the robustness of the PIV analyses on biological data.
Results: QuickPIV is three times faster than the Python implementation hosted in openPIV, both in 2D and 3D. Our software is also faster than the fastest 2D PIV package in openPIV, written in C++. The accuracy evaluation of our software on synthetic data agrees with the expected accuracies described in the literature. Additionally, by applying quickPIV to three data sets of the embryogenesis of Tribolium castaneum, we obtained vector fields that recapitulate the migration movements of gastrulation, both in nuclear and actin-labeled embryos. We show normalized squared error cross-correlation to be especially accurate in detecting translations in non-segmentable biological image data.
Conclusions: The presented software addresses the need for a fast and open-source 3D PIV package in biological research. Currently, quickPIV offers efficient 2D and 3D PIV analyses featuring zero-normalized and normalized squared error cross-correlations, sub-pixel/voxel approximation, and multi-pass. Post-processing options include filtering and averaging of the resulting vector fields, extraction of velocity, divergence and collectiveness maps, simulation of pseudo-trajectories, and unit conversion. In addition, our software includes functions to visualize the 3D vector fields in Paraview.
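The core step of any such PIV analysis, locating the displacement of an interrogation window between two frames by zero-normalized cross-correlation (ZNCC), can be sketched compactly. The following is a minimal brute-force Python illustration of the principle, not quickPIV's (Julia) implementation; all function names and parameters are invented for the example.

```python
import math

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size 2D patches."""
    n = len(a) * len(a[0])
    fa = [a[i][j] for i in range(len(a)) for j in range(len(a[0]))]
    fb = [b[i][j] for i in range(len(b)) for j in range(len(b[0]))]
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = (math.sqrt(sum((x - ma) ** 2 for x in fa))
           * math.sqrt(sum((y - mb) ** 2 for y in fb)))
    return num / den if den else 0.0

def piv_displacement(frame1, frame2, y, x, w, max_shift):
    """Integer shift (dy, dx) of the w-by-w window at (y, x) in frame1 that
    best matches frame2, found by exhaustive ZNCC search."""
    win = [row[x:x + w] for row in frame1[y:y + w]]
    best, best_c = (0, 0), -2.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = [row[x + dx:x + dx + w] for row in frame2[y + dy:y + dy + w]]
            c = zncc(win, cand)
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best
```

A real implementation evaluates the correlation in Fourier space and adds sub-pixel/voxel interpolation and multi-pass refinement, but the matching criterion is the same.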
This is a review of the present status of heavy-ion collisions at intermediate energies. The main goal of heavy-ion physics in this energy regime is to shed light on the nuclear equation of state (EOS), hence we present the basic concept of the EOS in nuclear matter as well as of nuclear shock waves, which provide the key mechanism for the compression of nuclear matter. The main part of this article is devoted to the models currently used to describe heavy-ion reactions theoretically and to the observables useful for extracting information about the EOS from experiments. A detailed discussion of flow effects, with a broad comparison with the available data, is presented. The many-body aspects of such reactions are investigated via the multifragmentation break-up of excited nuclear systems, and a comparison of model calculations with the most recent multifragmentation experiments is presented.
Reprogramming of tomato leaf metabolome by the activity of heat stress transcription factor HsfB1
(2020)
Plants respond to high temperatures with global changes of the transcriptome, proteome, and metabolome. Heat stress transcription factors (Hsfs) are the core regulators of transcriptome responses as they control the reprogramming of expression of hundreds of genes. The thermotolerance-related function of Hsfs is mainly based on the regulation of many heat shock proteins (HSPs). In contrast, the Hsf-dependent reprogramming of metabolic pathways and its contribution to thermotolerance are not well described. In tomato (Solanum lycopersicum), manipulation of HsfB1, either by suppression or by overexpression (OE), leads to enhanced thermotolerance and coincides with distinct profiles of metabolic routes, based on metabolome profiling of wild-type (WT) and HsfB1 transgenic plants. Leaves of HsfB1 knock-down plants show an accumulation of metabolites with a positive effect on thermotolerance, such as the sugars sucrose and glucose and the polyamine putrescine. OE of HsfB1 leads to the accumulation of products of the phenylpropanoid and flavonoid pathways, including several caffeoyl quinic acid isomers. The latter is due to the enhanced transcription of genes coding for key enzymes in both pathways, in some cases in both non-stressed and stressed plants. Our results show that, beyond the control of the expression of Hsfs and HSPs, HsfB1 has a wider activity range by regulating important metabolic pathways, providing an important link between the stress response and the physiological development of tomato.
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications is largely debatable. Here, we developed a transparent mathematical modelling setting to analyse the impact of NAIs on influenza disease at the within-host and population levels. Analytical and simulation results indicate that even assuming unrealistically high efficacies for NAIs, drug intake starting at the onset of symptoms has a negligible effect on an individual's viral load and symptoms score. Contrary to general belief, increasing NAI doses does not provide a better outcome. Considering Tamiflu's pandemic regimen for prophylaxis, different multiscale simulation scenarios reveal modest reductions in epidemic size despite high investments in stockpiling. Our results question the general use of NAIs to treat influenza as well as the respective stockpiling by regulatory authorities.
Neuraminidase inhibitors in influenza treatment and prevention – is it time to call it a day?
(2018)
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications is largely debatable due to constraints in the ability to control for confounders and to explore unobserved areas of the drug effects. For this study, we used a mathematical model of influenza infection which allowed transparent analyses. The model recreated the oseltamivir effects and indicated that: (i) the efficacy was limited by design, (ii) a 99% efficacy could be achieved by using high drug doses (however, taking high doses of the drug 48 h post-infection could only yield a maximum of a 1.6-day reduction in the time to symptom alleviation), and (iii) contributions of oseltamivir to epidemic control could be high, but were observed only in fragile settings. In a typical influenza infection, NAIs' efficacy is inherently not high, and even if their efficacy is improved, the effect can be negligible in practice.
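The mechanism both abstracts rely on, an NAI reducing the release of new virions, is usually expressed by scaling the viral production term of a target-cell-limited model by the drug efficacy eps. A minimal Euler-integration sketch in Python, using illustrative parameter values typical of the influenza kinetics literature rather than the papers' fitted ones:

```python
def simulate_infection(eps=0.0, days=10.0, dt=0.0005):
    """Target-cell-limited model: dT/dt = -beta*T*V, dI/dt = beta*T*V - delta*I,
    dV/dt = (1 - eps)*p*I - c*V; eps is the NAI efficacy (0 = no drug).
    Returns the peak viral load."""
    beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0   # illustrative values only
    T, I, V = 4e8, 0.0, 10.0                        # target cells, infected, virus
    peak, t = V, 0.0
    while t < days:
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = (1.0 - eps) * p * I - c * V
        T += dT * dt; I += dI * dt; V += dV * dt    # forward Euler step
        peak = max(peak, V)
        t += dt
    return peak
```

With these numbers the basic reproductive number scales linearly with (1 - eps), so even a 90% efficacy barely pushes the infection below threshold, which is one way to see why drug intake changes the course of an established infection so little.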
Adjuvanted influenza vaccines constitute a key element towards inducing neutralizing antibody responses in populations with reduced responsiveness, such as infants and elderly subjects, as well as in devising antigen-sparing strategies. In particular, squalene-containing adjuvants have been observed to induce enhanced antibody responses, as well as to influence cross-reactive immunity. To explore the effects of adjuvanted vaccine formulations on antibody response and their relation to protein-specific immunity, we propose different mathematical models of antibody production dynamics in response to influenza vaccination. Data from ferrets immunized with commercial H1N1pdm09 vaccine antigen, alone or formulated with different adjuvants, were instrumental in adjusting model parameters. While the complexity of the affinity maturation process is abridged, the proposed model is able to recapitulate the essential features of the observed dynamics. Our numerical results suggest that there is a qualitative shift in the protein-specific antibody response, with enhanced production of antibodies targeting the NA protein in adjuvanted versus non-adjuvanted formulations, in conjunction with a protein-independent boost that is over one order of magnitude larger for squalene-containing adjuvants. Furthermore, simulations predict that vaccines formulated with squalene-containing adjuvants are able to induce sustained antibody titers in a robust way, with little impact of the time interval between immunizations.
Motivation: Partial differential equations (PDEs) are a well-established and powerful tool for simulating multi-cellular biological systems. However, freely available tools for validating PDE models against data are not well established. The PDEparams module provides flexible functionality in Python for parameter estimation in PDE models.
Results: The PDEparams module provides a flexible interface and readily accommodates different parameter-analysis tools for PDE models, such as computation of likelihood profiles and parametric bootstrapping, along with direct visualisation of the results. To our knowledge, it is the first open, freely available tool for parameter fitting of PDE models.
Availability and implementation: The PDEparams module is distributed under the MIT license. The source code, usage instructions and step-by-step examples are freely available on GitHub at github.com/systemsmedicine/PDE_params.
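What such a fitting loop does can be shown with a toy example: solve a 1D diffusion equation by explicit finite differences, then scan a grid of diffusivities for the one that minimizes the squared error against observed data (the least-squares analogue of a likelihood-profile scan). This sketch only illustrates the principle; it is not the PDEparams API.

```python
def diffuse(D, u0, dx, dt, steps):
    """Explicit finite-difference solver for u_t = D * u_xx with reflective
    ends (stable for D*dt/dx**2 <= 0.5)."""
    u = list(u0)
    for _ in range(steps):
        un = u[:]
        for i in range(1, len(u) - 1):
            un[i] = u[i] + D * dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        un[0], un[-1] = un[1], un[-2]   # zero-flux boundary conditions
        u = un
    return u

def fit_diffusivity(data, u0, dx, dt, steps, grid):
    """Return the diffusivity from `grid` whose simulated profile has the
    smallest sum of squared errors against `data`."""
    def sse(D):
        return sum((a - b) ** 2 for a, b in zip(diffuse(D, u0, dx, dt, steps), data))
    return min(grid, key=sse)
```

A likelihood profile would repeat this scan while re-optimizing the remaining parameters at each grid point; the mechanics of simulate-compare-minimize are the same.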
We propose a generalized modeling framework for the kinetic mechanisms of transcriptional riboswitches. The formalism accommodates time-dependent transcription rates and changes of metabolite concentration and permits incorporation of variations in transcription rate depending on transcript length. We derive explicit analytical expressions for the fraction of transcripts that determine repression or activation of gene expression, pause site location and its slowing down of transcription for the case of the (2’dG)-sensing riboswitch from Mesoplasma florum. Our modeling challenges the current view on the exclusive importance of metabolite binding to transcripts containing only the aptamer domain. Numerical simulations of transcription proceeding in a continuous manner under time-dependent changes of metabolite concentration further suggest that rapid modulations in concentration result in a reduced dynamic range for riboswitch function regardless of transcription rate, while a combination of slow modulations and small transcription rates ensures a wide range of finely tuneable regulatory outcomes.
Criticality meets learning: criticality signatures in a self-organizing recurrent neural network
(2017)
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it was not designed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms, and it reproduces a number of biological findings on neural variability and on the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, the onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
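The avalanche statistics referred to here are extracted from binned population activity: an avalanche is a maximal run of nonzero bins bracketed by silent bins, and its size is the total number of spikes in the run. A minimal Python version of that bookkeeping (an illustration, not the SORN code):

```python
def avalanche_sizes(activity):
    """Split a binned spike-count series into avalanche sizes: each maximal
    run of nonzero bins contributes the sum of its counts."""
    sizes, current = [], 0
    for count in activity:
        if count > 0:
            current += count        # avalanche continues
        elif current:
            sizes.append(current)   # silent bin ends the avalanche
            current = 0
    if current:
        sizes.append(current)       # avalanche running at end of recording
    return sizes
```

Criticality analyses then test whether these sizes follow a power law, typically via maximum-likelihood exponent fits and comparison against surrogate data.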
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in relativistic hydrodynamic simulations of heavy ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics. Such an EoS-meter is model-independent and insensitive to other simulation inputs, including the initial conditions for hydrodynamic simulations.
The state-of-the-art pattern recognition method in machine learning (a deep convolutional neural network) is used to identify the equation of state (EoS) employed in relativistic hydrodynamic simulations of heavy ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in QCD. The EoS-meter is model-independent and insensitive to other simulation inputs, including the initial conditions and shear viscosity of the hydrodynamic simulations. Through this study we demonstrate that there is a traceable encoding of the dynamical information from the phase structure that survives the evolution and is present in the final snapshot of heavy ion collisions, and that machine learning can effectively decode this information from the highly complex final output where traditional methods fail. Besides the deep neural network, the performance of traditional machine-learning classifiers is also reported.
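The logic of an "EoS-meter", a classifier mapping final-state spectra to the underlying equation of state, can be caricatured with a far simpler stand-in: a logistic regression trained by stochastic gradient descent on toy feature vectors. The studies above use a deep convolutional network on two-dimensional particle spectra; everything below, including the data, is invented for illustration.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=200):
    """Fit a logistic-regression decision boundary between two classes by
    stochastic gradient descent on the log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            g = 1.0 / (1.0 + math.exp(-z)) - yi   # prediction minus label
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Class label from the sign of the linear score."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
```

The deep network replaces the fixed linear score with learned hierarchical features of the spectra, but the training loop (forward pass, loss gradient, parameter update) has the same shape.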
The scope of this Thesis is to understand the position dependence of human visual perception. First, under the ecological assumption, i.e. that animals adapt to the statistical regularities of their environment, we study the consequences of the imaging process on the local statistics of the input to the human visual system. Second, we model efficient representations of these statistics and their contribution to shaping the properties of eye sensory neurons. Third, we model efficient representations of the semantic context of images and assess the correctness of different underlying geometrical assumptions about the statistics of images.
The efficient coding hypothesis posits that sensory systems are adapted to the regularities of their signal input in order to reduce redundancy in the resulting representations. It is therefore important to characterize the regularities of natural signals to gain insight into the processing of natural stimuli. While measurements of statistical regularity in vision have focused on photographic images of natural environments, much less attention has been paid to how the specific imaging process embodied by the organism's eye induces statistical dependencies in the natural input to the visual system. This gap has allowed the convenient assumption that natural image data are homogeneous across the visual field. Here we give up this assumption and show how the imaging process in a human eye model influences the local statistics of the natural input to the visual system across the entire visual field. ...
We study the kinetic and chemical equilibration in 'infinite' parton-hadron matter within the Parton-Hadron-String Dynamics transport approach, which is based on a dynamical quasiparticle model for partons matched to reproduce lattice-QCD results – including the partonic equation of state – in thermodynamic equilibrium. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions, initialized at different baryon densities (or chemical potentials) and energy densities. The transition from initially pure partonic matter to hadronic degrees of freedom (or vice versa) occurs dynamically through interactions. Different thermodynamical distributions of the strongly-interacting quark-gluon plasma (sQGP) are addressed and discussed.
The steep rise of parton densities in the limit of small parton momentum fraction x poses a challenge for describing the observed energy dependence of the total and inelastic proton-proton cross sections σtot/inelpp: considering a realistic parton spatial distribution, one obtains an excessively strong increase of σtot/inelpp in the limit of very high energies. We discuss various mechanisms that allow one to tame such a rise, paying special attention to the role of parton-parton correlations. In addition, we investigate a potential impact on model predictions for σtotpp related to dynamical higher-twist corrections to the parton-production process.
We apply the phenomenological Reggeon field theory framework to investigate rapidity gap survival (RGS) probability for diffractive dijet production in proton–proton collisions. In particular, we study in some detail rapidity gap suppression due to elastic rescatterings of intermediate partons in the underlying parton cascades, described by enhanced (Pomeron–Pomeron interaction) diagrams. We demonstrate that such contributions play a subdominant role, compared to the usual, so-called “eikonal”, rapidity gap suppression due to elastic rescatterings of constituent partons of the colliding protons. On the other hand, the overall RGS factor proves to be sensitive to color fluctuations in the proton. Hence, experimental data on diffractive dijet production can be used to constrain the respective model approaches.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
We discuss in some detail the physics content of the new model, QGSJET-III-01, focusing on major problems related to the treatment of semihard processes in the very high energy limit. Special attention has been paid to the main improvement over the QGSJET-II model, which is related to a phenomenological treatment of leading power corrections corresponding to final parton rescattering off soft gluons. In particular, this allowed us to use a separation scale between soft and hard parton physics half as large as in the previous model version, QGSJET-II-04. Preliminary results obtained with the new model are also presented.
Predictions of popular cosmic ray interaction models for some basic characteristics of cosmic ray-induced extensive air showers are analyzed in view of experimental data on proton-proton collisions, obtained at the Large Hadron Collider. The differences between the results are traced down to different approaches for the treatment of hadronic interactions, implemented in those models. Potential measurements by LHC and cosmic ray experiments, which could be able to discriminate between the alternative approaches, are proposed.
I review the state of the art concerning the treatment of high energy cosmic ray interactions in the atmosphere, discussing in some detail the underlying physical concepts and the possibilities to constrain the latter by current and future measurements at the Large Hadron Collider. The relation of basic characteristics of hadronic interactions to the properties of nuclear-electromagnetic cascades induced by primary cosmic rays in the atmosphere is addressed.
The COVID-19 pandemic is a major public health threat, with unanswered questions regarding the role of the immune system in the severity of the disease. In this paper, based on antibody kinetic data of patients with different disease severity, topological data analysis highlights clear differences in the shape of antibody dynamics between three groups of patients: non-severe, severe, and one intermediate case of severity. Subsequently, different mathematical models were developed to quantify the dynamics between the different severity groups. The best model was the one with the lowest median value of the Akaike Information Criterion across all groups of patients. Although high IgG levels have been reported in severe patients, our findings suggest that IgG antibodies in severe patients may be less effective than those in non-severe patients due to early B cell production and early activation of the seroconversion process from IgM to IgG antibodies.
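The model-selection criterion used here is mechanical once each candidate's goodness of fit is known: for least-squares fits, AIC = n ln(RSS/n) + 2k up to an additive constant, penalizing the residual sum of squares RSS by the number of parameters k. A small sketch with invented numbers, not the paper's fits:

```python
import math

def aic(rss, n, k):
    """Akaike Information Criterion for a least-squares fit with n data
    points, k parameters and residual sum of squares rss."""
    return n * math.log(rss / n) + 2 * k

def best_model(fits):
    """fits maps model name -> (rss, n, k); return the name minimizing AIC."""
    return min(fits, key=lambda name: aic(*fits[name]))
```

The 2k term is what keeps a more flexible model from winning by default: its better RSS has to outweigh the penalty for the extra parameters.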
A novel method for identifying the nature of QCD transitions in heavy-ion collision experiments is introduced. PointNet based Deep Learning (DL) models are developed to classify the equation of state (EoS) that drives the hydrodynamic evolution of the system created in Au-Au collisions at 10 AGeV. The DL models were trained and evaluated in different hypothetical experimental situations. A decreased performance is observed when more realistic experimental effects (acceptance cuts and decreased resolutions) are taken into account. It is shown that the performance can be improved by combining multiple events to make predictions. The PointNet based models trained on the reconstructed tracks of charged particles from the CBM detector simulation discriminate a crossover transition from a first-order phase transition with an accuracy of up to 99.8%. The models were subjected to several tests to evaluate the dependence of their performance on the centrality of the collisions and on the physical parameters of the fluid-dynamic simulations. The models are shown to work over a broad range of centralities (b=0–7 fm), although the performance improves for central collisions (b=0–3 fm). The performance drops when the model parameters lead to a reduced duration of the fluid-dynamic evolution or when a smaller fraction of the medium undergoes the transition. These effects reflect limitations of the underlying physics, and the DL models are shown to be superior in discrimination performance to conventional mean observables.
In this talk we presented a novel technique, based on Deep Learning, to determine the impact parameter of nuclear collisions at the CBM experiment. PointNet based Deep Learning models are trained on UrQMD followed by CBMRoot simulations of Au+Au collisions at 10 AGeV to reconstruct the impact parameter of collisions from raw experimental data such as hits of the particles in the detector planes, tracks reconstructed from the hits or their combinations. The PointNet models can perform fast, accurate, event-by-event impact parameter determination in heavy ion collision experiments. They are shown to outperform a simple model which maps the track multiplicity to the impact parameter. While conventional methods for centrality classification merely provide an expected impact parameter distribution for a given centrality class, the PointNet models predict the impact parameter from 2–14 fm on an event-by-event basis with a mean error of −0.33 to 0.22 fm.
A new method of event characterization based on Deep Learning is presented. The PointNet models can be used for fast, online event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV which are then used to train and evaluate PointNet based architectures. The models can be trained on features like the hit positions of particles in the CBM detector planes, tracks reconstructed from the hits, or combinations thereof. The Deep Learning models reconstruct impact parameters from 2-14 fm with a mean error varying from -0.33 to 0.22 fm. For impact parameters in the range of 5-14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4-9% and a mean error of -0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4-10% and a mean error of -0.18 to 0.22 fm. This new method of event classification is shown to be more accurate and less model-dependent than conventional methods and can utilize the performance boost of modern GPUs.
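The baseline these DL models are compared against, "a simple model which maps the track multiplicity to the impact parameter", amounts to a one-dimensional regression. A toy ordinary-least-squares version on invented, exactly linear data (the real multiplicity-b relation is noisy and nonlinear):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + c; returns (a, c)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx
```

Such a baseline collapses each event to a single number, which is exactly the limitation the point-cloud models avoid by consuming the full set of hits or tracks.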
In this thesis we investigate the role played by gauge fields in providing new observable signatures that can attest to the presence of color superconductivity in neutron stars. We show that thermal gluon fluctuations in color-flavor locked superconductors can substantially increase their critical temperature and also change the order of the transition, which becomes a strong first-order phase transition. Moreover, we explore the effects of strong magnetic fields on the properties of color-flavor locked superconducting matter. We find that both the energy gaps and the magnetization are oscillating functions of the magnetic field. It is also shown that the magnetization can be so strong that homogeneous quark matter becomes metastable for a range of parameters. This points towards the existence of magnetic domains or other types of magnetic inhomogeneities in the hypothesized quark cores of magnetars. Obviously, our results only apply if the strong magnetic fields observed on the surface of magnetars can be transmitted to their inner core. This can occur if the superconducting protons expected to exist in the outer core form a type-II superconductor. However, it has been argued that the observed long-period oscillations in isolated pulsars can only be explained if the outer core is a type-I superconductor rather than type-II. We show that this is not the only solution to the precession puzzle by demonstrating that the long-term variation in the spin of PSR 1828-11 can be explained in terms of Tkachenko oscillations within superfluid shells.
Glia, the helper cells of the brain, are essential in maintaining neural resilience across time and varying challenges: by reacting to changes in neuronal health, glia carefully balance the repair or disposal of injured neurons. Malfunction of these interactions is implicated in many neurodegenerative diseases. We present a reductionist model that mimics repair-or-dispose decisions to generate a hypothesis for the cause of disease onset. The model assumes four tissue states: healthy and challenged tissue, primed tissue at risk of acute damage propagation, and chronic neurodegeneration. We discuss analogies to progression stages observed in the most common neurodegenerative conditions and to experimental observations of cellular signaling pathways of glia-neuron crosstalk. The model suggests that the onset of neurodegeneration can result as a compromise between two conflicting goals: short-term resilience to stressors versus long-term prevention of tissue damage.
Autophagosome biogenesis requires a localized perturbation of lipid membrane dynamics and a unique protein-lipid conjugate. Autophagy-related (ATG) proteins catalyze this biogenesis on cellular membranes, but the underlying molecular mechanism remains unclear. Focusing on the final step of the protein-lipid conjugation reaction, ATG8/LC3 lipidation, we show how membrane association of the conjugation machinery is organized and fine-tuned at the atomistic level. Amphipathic α-helices in ATG3 proteins (AHATG3) are found to have low hydrophobicity and to be less bulky. Molecular dynamics simulations reveal that AHATG3 regulates the dynamics and accessibility of the thioester bond of the ATG3∼LC3 conjugate to lipids, allowing covalent lipidation of LC3. Live cell imaging shows that the transient association of ATG3 with autophagic membranes is governed by the less bulky-hydrophobic feature of AHATG3. Collectively, the unique properties of AHATG3 facilitate protein-lipid bilayer association, leading to the remodeling of the lipid bilayer required for the formation of autophagosomes.
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (<= ~20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs.
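A "simple linear classification such as can be achieved by individual neurons" can be as plain as a nearest-class-mean readout of ensemble spike counts, which induces a linear decision boundary. A toy Python sketch with invented data, not the decoder used in the study:

```python
def train_prototypes(X, labels):
    """Average the spike-count vectors of each stimulus class into a prototype."""
    groups = {}
    for x, lab in zip(X, labels):
        groups.setdefault(lab, []).append(x)
    return {lab: [sum(col) / len(vs) for col in zip(*vs)]
            for lab, vs in groups.items()}

def decode(prototypes, x):
    """Assign x to the class whose prototype is nearest in squared distance."""
    return min(prototypes,
               key=lambda lab: sum((p - v) ** 2
                                   for p, v in zip(prototypes[lab], x)))
```

With class covariances assumed equal and isotropic, this nearest-mean rule is equivalent to a linear discriminant, which is why it serves as a fair stand-in for what a single downstream neuron could compute.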
We study odd parity J=1/2 and J=3/2 Ξc resonances using a unitarized coupled-channel framework based on an SU(6)lsf×HQSS-extended Weinberg–Tomozawa baryon–meson interaction, paying special attention to the renormalization procedure. We predict a large molecular ΛcK¯ component for the Ξc(2790), with a dominant 0− light-degree-of-freedom spin configuration. We discuss the differences between the 3/2− Λc(2625) and Ξc(2815) states, and conclude that they cannot be SU(3) siblings, whereas we predict the existence of other Ξc states, one of them related to the two-pole structure of the Λc(2595). Of particular interest is a pair of J=1/2 and J=3/2 poles, which form an HQSS doublet and which we tentatively assign to the Ξc(2930) and Ξc(2970), respectively. Within this picture, the Ξc(2930) would be part of an SU(3) sextet containing either the Ωc(3090) or the Ωc(3119), which would be completed by the Σc(2800). Moreover, we identify a J=1/2 sextet with the Ξb(6227) state and the recently discovered Σb(6097). Assuming the equal-spacing rule, and to complete this multiplet, we predict the existence of a J=1/2 Ωb odd parity state with a mass of 6360 MeV that should be seen in the ΞbK¯ channel.
In this letter we present some stringy corrections to black hole spacetimes emerging from string T-duality. As a first step, we derive the static Newtonian potential by exploiting the relation between the T-duality and the path integral duality. We show that the intrinsic non-perturbative nature of stringy corrections introduces an ultraviolet cutoff known as zero-point length in the path integral duality literature. As a result, the static potential is found to be regular. We use this result to derive a consistent black hole metric for the spherically symmetric, electrically neutral case. It turns out that the new spacetime is regular and is formally equivalent to the Bardeen metric, apart from a different ultraviolet regulator. On the thermodynamics side, the Hawking temperature admits a maximum before a cooling down phase towards a thermodynamically stable end of the black hole evaporation process. The findings support the idea of universality of quantum black holes.
This paper studies the geometry and the thermodynamics of a holographic screen in the framework of the ultraviolet self-complete quantum gravity. To achieve this goal we construct a new static, neutral, nonrotating black hole metric, whose outer (event) horizon coincides with the surface of the screen. The spacetime admits an extremal configuration corresponding to the minimal holographic screen and having both mass and radius equalling the Planck units. We identify this object as the spacetime fundamental building block, whose interior is physically unaccessible and cannot be probed even during the Hawking evaporation terminal phase. In agreement with the holographic principle, relevant processes take place on the screen surface. The area quantization leads to a discrete mass spectrum. An analysis of the entropy shows that the minimal holographic screen can store only one byte of information, while in the thermodynamic limit the area law is corrected by a logarithmic term.
In this Letter, we propose a new scenario emerging from the conjectured presence of a minimal length ℓ in the spacetime fabric, on the one side, and the existence of a new scale-invariant, continuous mass spectrum of un-particles, on the other side. We introduce the concept of the un-spectral dimension DU of a d-dimensional, Euclidean (quantum) spacetime, as the spectral dimension measured by an “un-particle” probe. We find a general expression for the un-spectral dimension DU labelling different spacetime phases: a semi-classical phase, where the ordinary spectral dimension gets a contribution from the scaling dimension dU of the un-particle probe; a critical “Planckian phase”, where four-dimensional spacetime can be effectively considered two-dimensional when dU=1; and a “Trans-Planckian phase”, accessible to un-particle probes only, where spacetime as we currently understand it loses its physical meaning.
In this paper we discuss to what extent one can infer details of the interior structure of a black hole from its horizon. Recalling that black hole thermal properties are connected to the non-classical nature of gravity, we circumvent the restrictions of the no-hair theorem by postulating that the black hole interior is singularity-free due to violations of the usual energy conditions. Further, these conditions allow one to establish a one-to-one, holographic projection between Planckian areal “bits” on the horizon and “voxels” representing the gravitational degrees of freedom in the black hole interior. We illustrate the repercussions of this idea by discussing an example in which the black hole interior consists of a de Sitter core postulated to arise from the local graviton quantum vacuum energy. It is shown that the black hole entropy can emerge as the statistical entropy of a gas of voxels.
In this Letter we study the radiation measured by an accelerated detector, coupled to a scalar field, in the presence of a fundamental minimal length. The latter is implemented by means of a modified momentum space Green's function. After calibrating the detector, we find that the net flux of field quanta is negligible, and that there is no Planckian spectrum. We discuss possible interpretations of this result, and we comment on experimental implications in heavy ion collisions and atomic systems.
In the presence of a minimal length, physical objects cannot collapse to an infinitely dense, singular matter point. In this paper, we consider the possible final stage of the gravitational collapse of "thick" matter layers. The energy-momentum tensor we choose to model these shell-like objects is a proper modification of the source for "noncommutative geometry inspired," regular black holes. By using higher moments of a Gaussian distribution to localize matter at finite distance from the origin, we obtain new solutions of the Einstein equations which smoothly interpolate between Minkowski geometry near the center of the shell and Schwarzschild spacetime far away from the matter layer. The metric is free of curvature singularities. Black hole type solutions exist only for "heavy" shells, that is, M ≥ Me, where Me is the mass of the extremal configuration. We determine the Hawking temperature and a modified area law taking into account the extended nature of the source.
The Karl Schwarzschild Meeting 2017 (KSM2017) was the third instalment of the conference dedicated to the great Frankfurt scientist, who derived the first black hole solution of Einstein's equations about 100 years ago.
The event was a five-day meeting on black holes, the AdS/CFT correspondence and gravitational physics. Like the two previous instalments, the conference attracted a stellar ensemble of participants from the world's most renowned institutions. The core of the meeting was a series of invited talks from eminent experts (keynote speakers), complemented by plenary research talks from students and junior speakers.
The conference photo and poster, sponsor and funding acknowledgments, committees, and the list of participants are available in this PDF.
We present an analysis of the role of the charge within the self-complete quantum gravity paradigm. By studying the classicalization of generic ultraviolet-improved charged black hole solutions around the Planck scale, we show that the charge introduces important differences with respect to the neutral case. First, there exists a family of black hole parameters fulfilling the particle-black hole condition. Second, there is no extremal particle-black hole solution but, at best, quasi-extremal charged particle-black holes. We show that Hawking emission disrupts the particle-black hole condition. By analyzing the Schwinger pair-production mechanism, we find that the charge is quickly shed and the particle-black hole condition can ultimately be restored in a cooling-down phase towards a zero-temperature configuration, provided non-classical effects are taken into account.
In this paper, we present an overview of some of the open issues in quantum gravity research. We also introduce the basic ideas that led Padmanabhan to consider a duality property in path integrals. Such a duality is consistent with the T-duality of string theory. More importantly, the path integral duality discloses a universal feature of any quantum geometry, namely the existence of a zero-point length L0. We also comment on recent developments aiming to expose effects of the zero-point length in strong electrodynamics and black holes. There are reasons to believe that the main characteristics of the phenomenology of quantum gravity may be described by means of a single parameter like L0.
From August to November 2017, Madagascar endured an outbreak of plague. A total of 2417 cases of plague were confirmed, causing a death toll of 209. Public health intervention efforts were introduced and successfully stopped the epidemic at the end of November. Plague, however, is endemic in the region and occurs annually, posing the risk of future outbreaks. To understand plague transmission, we collected real-time data from official reports, described the outbreak's characteristics, and estimated transmission parameters using statistical and mathematical models. The pneumonic plague epidemic curve exhibited multiple peaks, coinciding with sporadic introductions of new bubonic cases. Optimal climate conditions for rat fleas to flourish were observed during the epidemic. Estimates of the plague basic reproduction number during the large wave of the epidemic were high, ranging from 5 to 7 depending on model assumptions. The incubation and infection periods for bubonic and pneumonic plague were 4.3 and 3.4 days and 3.8 and 2.9 days, respectively. Parameter estimation suggested that even with a small fraction of the population exposed to infected rat fleas (1/10,000) and a small probability of transition from a bubonic case to a secondary pneumonic case (3%), the high human-to-human transmission rate can still generate a large outbreak. Controlling rodents and fleas can prevent new index cases, but managing human-to-human transmission is key to preventing large-scale outbreaks.
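The interplay the abstract describes, a handful of index cases amplified by strong human-to-human transmission, can be sketched with a minimal SEIR compartment model. This is an illustrative toy, not the authors' fitted model: the population size, time step, and incubation/infectious periods below are hypothetical placeholders; only the basic reproduction number is taken from the quoted 5-7 range.

```python
# Minimal SEIR sketch of pneumonic (human-to-human) plague spread.
# Illustrative only: all parameters are hypothetical placeholders except
# the basic reproduction number, taken from the 5-7 range quoted above.
def seir(beta, incubation=4.0, infectious=3.0, n=25_000, days=120, dt=0.1):
    sigma, gamma = 1.0 / incubation, 1.0 / infectious
    s, e, i, r = n - 1.0, 0.0, 1.0, 0.0   # one index case
    for _ in range(int(days / dt)):       # explicit Euler integration
        exposure = beta * s * i / n       # S -> E
        onset = sigma * e                 # E -> I
        recovery = gamma * i              # I -> R (recovery or death)
        s -= exposure * dt
        e += (exposure - onset) * dt
        i += (onset - recovery) * dt
        r += recovery * dt
    return r                              # final outbreak size

r0 = 6.0                                  # mid-range of the 5-7 estimate
outbreak = seir(beta=r0 / 3.0)            # beta = R0 * gamma
```

With R0 near 6 the toy model infects essentially the entire susceptible pool, while the same model with R0 well below 1 produces only a handful of cases, mirroring the abstract's point that human-to-human transmission, not the flea-borne introductions alone, drives outbreak size.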
Background: Recent epidemics have entailed global discussions on revamping epidemic control and prevention approaches. A general consensus is that all sources of data should be embraced to improve epidemic preparedness. As disease transmission is inherently governed by individual-level responses, pathogen dynamics within infected hosts hold high potential to inform population-level phenomena. We propose a multiscale approach showing that individual dynamics were able to reproduce population-level observations.
Methods: Using experimental data, we formulated mathematical models of pathogen infection dynamics from which we mechanistically simulated transmission parameters. The models were then embedded in our implementation of an age-specific contact network that allows us to express individual differences relevant to the transmission processes. This approach is illustrated with an example of Ebola virus (EBOV).
Results: The results showed that a within-host infection model can reproduce EBOV's transmission parameters obtained from population data. At the same time, population age structure and contact distributions and patterns can be expressed using a network-generating algorithm. This framework opens a vast opportunity to investigate the individual roles of factors involved in the epidemic processes. Estimating EBOV's reproduction number revealed a heterogeneous pattern among age groups, prompting caution about estimates unadjusted for contact patterns. Assessments of mass vaccination strategies showed that vaccination conducted in a time window from five months before to one week after the start of an epidemic appeared to strongly reduce epidemic size. Notably, compared to a non-intervention scenario, a low critical vaccination coverage of 33% cannot ensure epidemic extinction but could reduce the number of cases by ten to a hundred times as well as lessen the case-fatality rate.
Conclusions: Experimental data on within-host infection were able to capture key transmission parameters of a pathogen up front; applications of this approach will give us more time to prepare for potential epidemics. The population of interest in epidemic assessments can be modelled with an age-specific contact network without an exhaustive amount of data. Further assessments and adaptations for different pathogens and scenarios to explore multilevel aspects of infectious disease epidemics are underway.
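The within-host building block of such a multiscale pipeline can be sketched as a target-cell-limited model, a standard starting point for fitting viral-load curves. This is a generic illustration, not the authors' EBOV model; every parameter value below is a hypothetical placeholder.

```python
# Sketch of a target-cell-limited within-host infection model:
#   dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V
# Illustrative only: all parameter values are hypothetical placeholders.
def viral_load(beta=3e-7, delta=1.0, p=100.0, c=5.0,
               t0=1e7, v0=1.0, days=20.0, dt=0.001):
    t, i, v = t0, 0.0, v0        # target cells, infected cells, virions
    peak = v
    for _ in range(int(days / dt)):   # explicit Euler integration
        dT = -beta * t * v            # cells becoming infected
        dI = beta * t * v - delta * i # infected-cell turnover
        dV = p * i - c * v            # virion production vs clearance
        t += dT * dt
        i += dI * dt
        v += dV * dt
        peak = max(peak, v)
    return peak, v                    # peak and end-of-window viral load

peak, final = viral_load()
```

Transmission-relevant quantities such as the time to peak viral load or the duration of detectable viremia can then be read off the simulated trajectory and passed to a population-level contact-network model, which is the coupling the abstract describes.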
Ebola virus (EBOV) infection causes a high death toll, killing a high proportion of EBOV-infected patients within 7 days. Comprehensive data on EBOV infection are fragmented, hampering efforts to develop therapeutics and vaccines against EBOV. Under these circumstances, mathematical models become valuable resources for exploring potential control strategies. In this paper, we employed experimental data of EBOV-infected nonhuman primates (NHPs) to construct a mathematical framework for determining windows of opportunity for treatment and vaccination. Considering a prophylactic vaccine based on recombinant vesicular stomatitis virus expressing the EBOV glycoprotein (rVSV-EBOV), vaccination could be protective if a subject is vaccinated during a period from one week to four months before infection. For the case of a therapeutic vaccine based on monoclonal antibodies (mAbs), a single dose might resolve invasive EBOV replication even if administered as late as four days after infection. Our mathematical models can be used as building blocks for evaluating therapeutic and vaccine modalities as well as public health intervention strategies in outbreaks. Future laboratory experiments will help to validate and refine the estimates of the windows of opportunity proposed here.
Driven by the loss of energy, isolated rotating neutron stars (pulsars) gradually slow down to lower frequencies, which increases the tremendous compression of the matter inside them. This increase in compression changes both the global properties of rotating neutron stars and their hadronic core compositions. Both effects may register themselves observationally in the thermal evolution of such stars, as demonstrated in this Letter. The rotation-driven particle process we consider here is the direct Urca (DU) process, which is known to become operative in neutron stars if the number of protons in the stellar core exceeds a critical limit of around 11% to 15%. We find that neutron stars spinning down from moderately high rotation rates of a few hundred Hertz may create just the right conditions for the DU process to become operative, leading to an observable effect (enhanced cooling) in the temperature evolution of such neutron stars. As it turns out, the rotation-driven DU process could explain the unusual temperature evolution observed for the neutron star in Cas A, provided the mass of this neutron star lies in the range of 1.5 to 1.9 M⊙ and its rotational frequency at birth was between 40% (400 Hz) and 70% (800 Hz) of the Kepler (mass-shedding) frequency.
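The quoted critical proton fraction of roughly 11% to 15% can be reproduced from textbook kinematics: the direct Urca process requires momentum conservation among the degenerate Fermi seas, p_Fn ≤ p_Fp + p_Fe, with each Fermi momentum scaling as the number density to the power 1/3. The sketch below solves this threshold condition under charge neutrality; it is a back-of-the-envelope check, not the Letter's calculation.

```python
# Direct-Urca threshold proton fraction from Fermi-momentum conservation:
#   (1 - x)^(1/3) = x^(1/3) + n_e^(1/3),  x = n_p / (n_n + n_p),
# with charge neutrality fixing the electron fraction n_e.
def du_threshold(muons=False):
    def excess(x):
        # Without muons n_e = n_p; in the massless-muon limit the
        # electrons carry half the proton charge, n_e = n_p / 2.
        n_e = x / 2.0 if muons else x
        return (1.0 - x) ** (1 / 3) - x ** (1 / 3) - n_e ** (1 / 3)
    lo, hi = 1e-9, 0.5            # excess > 0 below threshold
    for _ in range(200):          # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Without muons the threshold is exactly x = 1/9 ≈ 0.111; allowing massless muons raises it to ≈ 0.148, bracketing the 11-15% range quoted above.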
Background: After induction of DNA double strand breaks (DSBs), the DNA damage response (DDR) is activated. One of the earliest events in the DDR is the phosphorylation of serine 139 on the histone variant H2AX (gH2AX), catalyzed by phosphatidylinositol 3-kinase-related kinases. Despite being extensively studied, H2AX distribution across the genome and gH2AX spreading around DSB sites in the context of different chromatin compaction states or transcription are yet to be fully elucidated.
Materials and methods: gH2AX was induced in human hepatocellular carcinoma cells (HepG2) by exposure to 10 Gy X-rays (250 kV, 16 mA). Samples were incubated for 0.5, 3 or 24 hours post-irradiation to investigate early, intermediate and late stages of the DDR, respectively. Chromatin immunoprecipitation was performed to select H2AX-, H3- and gH2AX-enriched chromatin fractions. Chromatin-associated DNA was then sequenced on the Illumina ChIP-Seq platform. HepG2 gene expression and histone modification (H3K36me3, H3K9me3) ChIP-Seq profiles were retrieved from the Gene Expression Omnibus (accession numbers GSE30240 and GSE26386, respectively).
Results: First, we combined G/C usage, gene content, gene expression or histone modification profiles (H3K36me3, H3K9me3) to define genomic compartments characterized by different chromatin compaction states or transcriptional activity. Next, we investigated H3, H2AX and gH2AX distributions in such defined compartments before and after exposure to ionizing radiation (IR) to study DNA repair kinetics during DDR. Our sequencing results indicate that H2AX distribution followed H3 occupancy and, thus, the nucleosome pattern. The highest H2AX and H3 enrichment was observed in transcriptionally active compartments (euchromatin) while the lowest was found in low G/C and gene-poor compartments (heterochromatin). Under physiological conditions, the body of highly and moderately transcribed genes was devoid of gH2AX, despite presenting high H2AX levels. gH2AX accumulation was observed in 5’ or 3’ flanking regions, instead. The same genes showed a prompt gH2AX accumulation during the early stage of DDR which then decreased over time as DDR proceeded.
Finally, during the late stage of DDR the residual gH2AX signal was entirely retained in heterochromatic compartments. At this stage, euchromatic compartments were completely devoid of gH2AX despite presenting high levels of non-phosphorylated H2AX.
Conclusions: We show that gH2AX distribution ultimately depends on H2AX occupancy, the latter following H3 occupancy and, thus, the nucleosome pattern. Both H2AX and H3 levels were higher in actively transcribed compartments. However, gH2AX levels were remarkably low over the body of actively transcribed genes, suggesting that transcription antagonizes gH2AX spreading. Moreover, repair processes did not take place uniformly across the genome; rather, DNA repair was affected by genomic location and transcriptional activity. We propose that the higher H2AX density in euchromatic compartments results in a high relative gH2AX concentration soon after the activation of the DDR, thus favoring the recruitment of the DNA repair machinery to those compartments. When the damage is repaired and gH2AX is removed, its residual fraction is retained in the heterochromatic compartments, which are then targeted and repaired at later times.
We present the current status of hybrid approaches to describe heavy ion collisions and their future challenges and perspectives. First we present a hybrid model combining a Boltzmann transport model of hadronic degrees of freedom in the initial and final state with an optional hydrodynamic evolution during the dense and hot phase. Second, we present a recent extension of the hydrodynamical model to include fluctuations near the phase transition by coupling a chiral field to the hydrodynamic evolution.
Background: In this interdisciplinary project, the biological effects of heavy ions are compared to those of X-rays using tissue slice culture preparations from rodents and humans. Advantages of this biological model are the conservation of an organotypic environment and independence from the genetic immortalization strategies used to generate cell lines. Its open access allows easy treatment and observation via live-imaging microscopy.
Materials and methods: Rat brains and human brain tumor tissue are cut into 300 µm thick tissue slices. These slices are cultivated using a membrane-based culture system and kept in an incubator at 37°C until treatment. The slices are treated with X-rays at the radiation facility of the University Hospital in Frankfurt at doses of up to 40 Gy. The heavy-ion irradiations were performed at the UNILAC facility at GSI with different ions of 11.4 A MeV and fluences ranging from 0.5–10 × 10⁶ particles/cm². Using 3D confocal microscopy, cell death and immune-cell activation in the irradiated slices are analyzed. Planning of the irradiation experiments is done with simulation programs developed at GSI and FIAS.
Results: After receiving a single application of either X-rays or heavy ions, slices were kept in culture for up to 9 days post-irradiation. DNA damage was visualized using gamma-H2AX staining. Here, a dose-dependent increase and a time-dependent decrease could clearly be observed for the X-ray irradiation. Slices irradiated with heavy ions showed fewer gamma-H2AX-positive cells distributed evenly throughout the slice, even though particles were calculated to penetrate only 90–100 µm into the slice.
Conclusions: Single irradiations of brain tissue, even at high doses of 40 Gy, result neither in tissue damage visible on a macroscopic level nor in necrosis. This is in line with the view that the brain is highly radio-resistant. However, DNA damage can be detected very well in tissue slices using gamma-H2AX immunostaining. Thus, slice cultures are an excellent tool to study radiation-induced damage and repair mechanisms in living tissues.
A considerable effort has been dedicated recently to the construction of generic equations of state (EOSs) for matter in neutron stars. The advantage of these approaches is that they can provide model-independent information on the interior structure and global properties of neutron stars. Making use of more than 10⁶ generic EOSs, we assess the validity of quasi-universal relations of neutron-star properties for a broad range of rotation rates, from slow rotation up to the mass-shedding limit. In this way, we are able to determine with unprecedented accuracy the quasi-universal maximum-mass ratio between rotating and nonrotating stars and reveal the existence of a new relation for the surface oblateness, i.e., the ratio between the polar and equatorial proper radii. We discuss the impact that our findings have on the imminent detection of new binary neutron-star mergers and how they can be used to set new and more stringent limits on the maximum mass of nonrotating neutron stars, as well as to improve the modeling of the X-ray emission from the surface of rotating stars.
The effect of a non-zero strangeness chemical potential on the strong-interaction phase diagram has been studied within the framework of the SU(3) quark-hadron chiral parity-doublet model. Both the nuclear liquid-gas and the chiral/deconfinement phase transitions are modified. The first-order line of the chiral phase transition is observed to vanish completely, with the entire phase boundary becoming a crossover. These changes in the nature of the phase transitions are expected to modify various susceptibilities, the effects of which might be detectable in particle-number distributions resulting from moderate-temperature, high-density heavy-ion collision experiments.
The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.
Currently, little is known about how synesthesia develops and which aspects of synesthesia can be acquired through a learning process. We review the increasing evidence for the role of semantic representations in the induction of synesthesia, and argue for the thesis that synesthetic abilities are developed and modified by semantic mechanisms. That is, in certain people semantic mechanisms associate concepts with perception-like experiences—and this association occurs in an extraordinary way. This phenomenon can be referred to as “higher” synesthesia or ideasthesia. The present analysis suggests that synesthesia develops during childhood and is being enriched further throughout the synesthetes’ lifetime; for example, the already existing concurrents may be adopted by novel inducers or new concurrents may be formed. For a deeper understanding of the origin and nature of synesthesia we propose to focus future research on two aspects: (i) the similarities between synesthesia and ordinary phenomenal experiences based on concepts; and (ii) the tight entanglement of perception, cognition and the conceptualization of the world. Importantly, an explanation of how biological systems get to generate experiences, synesthetic or not, may have to involve an explanation of how semantic networks are formed in general and what their role is in the ability to be aware of the surrounding world.
We study in detail the nuclear aspects of a neutron-star merger in which deconfinement to quark matter takes place. For this purpose, we make use of the Chiral Mean Field (CMF) model, an effective relativistic model that includes self-consistent chiral symmetry restoration and deconfinement to quark matter and, for this reason, predicts the existence of different degrees of freedom depending on the local density/chemical potential and temperature. We then use the out-of-chemical-equilibrium finite-temperature CMF equation of state in full general-relativistic simulations to analyze which regions of different QCD phase diagrams are probed and which conditions, such as strangeness and entropy, are generated when a strong first-order phase transition appears. We also investigate the amount of electrons present in different stages of the merger and discuss how far from chemical equilibrium they can be and, finally, draw some comparisons with matter created in supernova explosions and heavy-ion collisions.
We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T < 1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.
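The baseline in this comparison is the Skellam distribution, the difference of two independent Poisson variables, matched to the measured mean and variance. A minimal sketch of that construction (with illustrative numbers, not STAR data or the quark-meson model result):

```python
# Skellam baseline for a net-baryon distribution: match the difference of
# two independent Poissons to a given mean M and variance V.
# The numbers below are illustrative, not measured values.
from scipy.stats import skellam

def skellam_baseline(mean, var):
    mu1 = 0.5 * (var + mean)   # "baryon" Poisson intensity
    mu2 = 0.5 * (var - mean)   # "antibaryon" Poisson intensity
    return lambda n: skellam.pmf(n, mu1, mu2)

# Baseline with the same mean (2) and variance (10) as a hypothetical P(N).
pmf = skellam_baseline(mean=2.0, var=10.0)
```

Ratios P(N)/P_Skellam(N) equal unity for uncorrelated Poissonian production of baryons and antibaryons, so systematic deviations of the measured ratios from one, particularly in the tails, are the kind of O(4)-related signature discussed above.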
We study the effect of chiral symmetry restoration (CSR) on heavy-ion collision observables in the energy range √sNN = 3–20 GeV within the Parton-Hadron-String Dynamics (PHSD) transport approach. The PHSD includes the deconfinement phase transition as well as essential aspects of CSR in the dense and hot hadronic medium, which are incorporated in the Schwinger mechanism for particle production. Our systematic studies show that chiral symmetry restoration plays a crucial role in the description of heavy-ion collisions at √sNN = 3–20 GeV, enhancing hadronic particle production in the strangeness sector relative to the non-strange one. Our results provide a microscopic explanation for the horn structure in the excitation function of the K+/π+ ratio: the CSR in the hadronic phase produces the steep increase of this particle ratio up to √sNN ≈ 7 GeV, while the drop at higher energies is associated with the appearance of a deconfined partonic medium. Furthermore, the appearance/disappearance of the horn structure is investigated as a function of the system size. We additionally present an analysis of strangeness production in the (T, μB) plane (as extracted from the PHSD for central Au+Au collisions) and discuss the perspectives for identifying a possible critical point in the phase diagram.
We study D and Ds mesons at finite temperature using an effective field theory based on chiral and heavy-quark spin-flavor symmetries within the imaginary-time formalism. Interactions with the light degrees of freedom are unitarized via a Bethe-Salpeter approach, and the D and Ds self-energies are calculated self-consistently. We dynamically generate the D∗0(2300) and D∗s0(2317) states, and study their possible identification as the chiral partners of the D and Ds ground states, respectively. We show the evolution of their masses and decay widths as functions of temperature, and provide an analysis of chiral-symmetry restoration in the heavy-flavor sector below the transition temperature. In particular, we analyse the very special case of the D meson, for which the chiral partner is associated with the double-pole structure of the D∗0(2300).