Frankfurt Institute for Advanced Studies (FIAS)
The production of Ξ(1321)− and Ξ̄(1321)+ hyperons in inelastic p+p interactions is studied in a fixed-target experiment at a beam momentum of 158 GeV/c. Double-differential distributions in rapidity y and transverse momentum pT are obtained from a sample of 33M inelastic events. They allow the spectra to be extrapolated to full phase space and the mean multiplicities of both Ξ− and Ξ̄+ to be determined. The rapidity and transverse momentum spectra are compared to transport model predictions. The Ξ− mean multiplicity in inelastic p+p interactions at 158 GeV/c is used to quantify the strangeness enhancement in A+A collisions at the same centre-of-mass energy per nucleon pair.
We explore some implications of our previous proposal, motivated in part by the Generalised Uncertainty Principle (GUP) and the possibility that black holes have quantum mechanical hair, that the ADM mass of a system has the form M + β M_Pl^2/(2M), where M is the bare mass, M_Pl is the Planck mass and β is a positive constant. This also suggests some connection between black holes and elementary particles and supports the suggestion that gravity is self-complete. We extend our model to charged and rotating black holes, since this is clearly relevant to elementary particles. The standard Reissner–Nordström and Kerr solutions include zero-temperature states, representing the smallest possible black holes, and already exhibit features of the GUP-modified Schwarzschild solution. However, interesting new features arise if the charged and rotating solutions are themselves GUP-modified. In particular, below some value of β there is a transition from the GUP solutions (spanning both super-Planckian and sub-Planckian regimes) to separated super-Planckian and sub-Planckian solutions. Equivalently, for a given value of β, there is a critical value of the charge and spin above which the solutions bifurcate into sub-Planckian and super-Planckian phases, separated by a mass gap in which no black holes can form.
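The mass formula quoted in the abstract is simple enough to illustrate numerically. The following sketch works in Planck units (M_Pl = 1) with an assumed value β = 1 (chosen for illustration only) and locates the minimum of the ADM mass M + β/(2M), which sits at the bare mass M = √(β/2):

```python
import math

def adm_mass(m, beta=1.0):
    """ADM mass M + beta*M_Pl^2/(2M) in Planck units (M_Pl = 1)."""
    return m + beta / (2.0 * m)

beta = 1.0
# Scan bare masses and locate the minimum of the ADM mass numerically.
masses = [0.01 * k for k in range(1, 500)]   # 0.01 .. 4.99 in Planck units
m_star = min(masses, key=lambda m: adm_mass(m, beta))

# Analytic check: d/dM (M + beta/(2M)) = 0  =>  M* = sqrt(beta/2),
# where the ADM mass takes the value sqrt(2*beta).
print(m_star, adm_mass(m_star, beta), math.sqrt(beta / 2.0))
```

The existence of this minimum is what produces the sub-Planckian branch discussed in the abstract: two bare masses (one below, one above M*) correspond to the same ADM mass.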
Constraints on the Covariant Canonical Gauge Gravity (CCGG) theory from low-redshift cosmology are studied. The formulation extends Einstein’s theory of General Relativity (GR) by a quadratic Riemann–Cartan term in the Lagrangian, controlled by a “deformation” parameter. In the Friedmann universe this leads to an additional geometrical stress energy and, due to the necessary presence of torsion, promotes the cosmological constant to a time-dependent function. The MCMC analysis of the combined data sets of Type Ia Supernovae, Cosmic Chronometers and Baryon Acoustic Oscillations yields a fit comparable to the ΛCDM results. The modifications implied by the CCGG approach turn out to be subdominant in low-redshift cosmology. However, a non-zero spatial curvature and deformation parameter are shown to be consistent with observations.
We apply the phenomenological Reggeon field theory framework to investigate rapidity gap survival (RGS) probability for diffractive dijet production in proton–proton collisions. In particular, we study in some detail rapidity gap suppression due to elastic rescatterings of intermediate partons in the underlying parton cascades, described by enhanced (Pomeron–Pomeron interaction) diagrams. We demonstrate that such contributions play a subdominant role, compared to the usual, so-called “eikonal”, rapidity gap suppression due to elastic rescatterings of constituent partons of the colliding protons. On the other hand, the overall RGS factor proves to be sensitive to color fluctuations in the proton. Hence, experimental data on diffractive dijet production can be used to constrain the respective model approaches.
Charged-particle spectra at midrapidity are measured in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair √sNN = 5.02 TeV and presented in centrality classes ranging from most central (0–5%) to most peripheral (95–100%) collisions. Possible medium effects are quantified using the nuclear modification factor (RAA) by comparing the measured spectra with those from proton–proton collisions, scaled by the number of independent nucleon–nucleon collisions obtained from a Glauber model. At large transverse momenta (8 < pT < 20 GeV/c), the average RAA is found to increase from about 0.15 in 0–5% central to a maximum value of about 0.8 in 75–85% peripheral collisions, beyond which it falls off strongly to below 0.2 for the most peripheral collisions. Furthermore, RAA initially exhibits a positive slope as a function of pT in the 8–20 GeV/c interval, while for collisions beyond the 80% class the slope is negative. To reduce uncertainties related to event selection and normalization, we also provide the ratio of RAA in adjacent centrality intervals. Our results in peripheral collisions are consistent with a PYTHIA-based model without nuclear modification, demonstrating that biases caused by the event selection and collision geometry can lead to the apparent suppression in peripheral collisions. This explains the unintuitive observation that RAA is below unity in peripheral Pb–Pb, but equal to unity in minimum-bias p–Pb collisions despite similar charged-particle multiplicities.
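The nuclear modification factor used in the abstract above has a compact definition: the AA yield divided by the pp yield scaled by the number of binary nucleon–nucleon collisions. A minimal sketch, with invented illustrative numbers (not ALICE data) chosen only to reproduce the quoted magnitudes:

```python
def r_aa(yield_aa, yield_pp, n_coll):
    """Nuclear modification factor: AA yield over Ncoll-scaled pp yield.

    R_AA = 1 means the AA spectrum is an incoherent superposition of
    independent nucleon-nucleon collisions; R_AA < 1 signals suppression.
    """
    return yield_aa / (n_coll * yield_pp)

# Hypothetical numbers for illustration only (not measured values):
print(r_aa(yield_aa=240.0, yield_pp=1.0, n_coll=1600.0))  # central: strong suppression
print(r_aa(yield_aa=4.0, yield_pp=1.0, n_coll=5.0))       # peripheral: mild suppression
```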
A measurement of dijet correlations in p–Pb collisions at √sNN = 5.02 TeV with the ALICE detector is presented. Jets are reconstructed from charged particles measured in the central tracking detectors and neutral energy deposited in the electromagnetic calorimeter. The transverse momentum of the full jet (clustered from charged and neutral constituents) and charged jet (clustered from charged particles only) is corrected event-by-event for the contribution of the underlying event, while corrections for underlying event fluctuations and finite detector resolution are applied on an inclusive basis. A projection of the dijet transverse momentum, kTy = pT,jet^ch+ne sin(ϕdijet), with ϕdijet the azimuthal angle between a full and a charged jet and pT,jet^ch+ne the transverse momentum of the full jet, is used to study nuclear matter effects in p–Pb collisions. This observable is sensitive to the acoplanarity of dijet production and its potential modification in p–Pb collisions with respect to pp collisions. Measurements of the dijet kTy as a function of the transverse momentum of the full and recoil charged jet, and of the event multiplicity, are presented. No significant modification of kTy due to nuclear matter effects in p–Pb collisions is observed with respect to the event multiplicity or to a PYTHIA8 reference.
The ALICE collaboration at the CERN LHC reports novel measurements of jet substructure in pp collisions at √s = 7 TeV and central Pb–Pb collisions at √sNN = 2.76 TeV. Jet substructure of track-based jets is explored via iterative declustering and grooming techniques. We present the measurement of the momentum sharing of two-prong substructure exposed via grooming, the zg, and its dependence on the opening angle, in both pp and Pb–Pb collisions. We also present the measurement of the distribution of the number of branches obtained in the iterative declustering of the jet, which is interpreted as the number of its hard splittings. In Pb–Pb collisions, we observe a suppression of symmetric splittings at large opening angles and an enhancement of splittings at small opening angles relative to pp collisions, with no significant modification of the number of splittings. The results are compared to predictions from various Monte Carlo event generators to test the role of important concepts in the evolution of the jet in the medium such as colour coherence.
Experimental results are presented on event-by-event net-proton fluctuation measurements in Pb–Pb collisions at √sNN = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. These measurements have as their ultimate goal an experimental test of Lattice QCD (LQCD) predictions on second and higher order cumulants of net-baryon distributions to search for critical behavior near the QCD phase boundary. Before confronting them with LQCD predictions, account has to be taken of correlations stemming from baryon number conservation as well as fluctuations of participating nucleons. Both effects influence the experimental measurements and are usually not considered in theoretical calculations. For the first time, it is shown that event-by-event baryon number conservation leads to subtle long-range correlations arising from very early interactions in the collisions.
Correlations between moments of different flow coefficients are measured in Pb–Pb collisions at √sNN = 5.02 TeV recorded with the ALICE detector. These new measurements are based on multiparticle mixed harmonic cumulants calculated using charged particles in the pseudorapidity region |η| < 0.8 with the transverse momentum range 0.2 < pT < 5.0 GeV/c. The centrality dependence of correlations between two flow coefficients, as well as the correlations between three flow coefficients, both in terms of their second moments, are shown. In addition, a collection of mixed harmonic cumulants involving higher moments of v2 and v3 is measured for the first time, where the characteristic signature of negative, positive and negative signs of four-, six- and eight-particle cumulants are observed, respectively. The measurements are compared to hydrodynamic calculations using iEBE-VISHNU with AMPT and TRENTo initial conditions. It is shown that the measurements carried out using the LHC Run 2 data in 2015 have the precision to explore the details of initial-state fluctuations and probe the nonlinear hydrodynamic response of v2 and v3 to their corresponding initial anisotropy coefficients ε2 and ε3. These new studies on correlations between three flow coefficients as well as correlations between higher moments of two different flow coefficients will pave the way to tighter constraints on initial-state models and help to extract precise information on the dynamic evolution of the hot and dense matter created in heavy-ion collisions at the LHC.
Production of pions, kaons, (anti-)protons and φ mesons in Xe–Xe collisions at √sNN = 5.44 TeV
(2021)
The first measurement of the production of pions, kaons, (anti-)protons and φ mesons at midrapidity in Xe–Xe collisions at √sNN = 5.44 TeV is presented. Transverse momentum (pT) spectra and pT-integrated yields are extracted in several centrality intervals bridging from p–Pb to mid-central Pb–Pb collisions in terms of final-state multiplicity. The study of Xe–Xe and Pb–Pb collisions allows systems at similar charged-particle multiplicities but with different initial geometrical eccentricities to be investigated. A detailed comparison of the spectral shapes in the two systems reveals an opposite behaviour for radial and elliptic flow. In particular, this study shows that the radial flow does not depend on the colliding system when compared at similar charged-particle multiplicity. In terms of hadron chemistry, the previously observed smooth evolution of particle ratios with multiplicity from small to large collision systems is also found to hold in Xe–Xe. In addition, our results confirm that two remarkable features of particle production at LHC energies are also valid in the collision of medium-sized nuclei: the lower proton-to-pion ratio with respect to the thermal model expectations and the increase of the φ-to-pion ratio with increasing final-state multiplicity.
The multiplicity dependence of the pseudorapidity density of charged particles in proton–proton (pp) collisions at centre-of-mass energies √s = 5.02, 7 and 13 TeV measured by ALICE is reported. The analysis relies on track segments measured in the midrapidity range (|η| < 1.5). Results are presented for inelastic events having at least one charged particle produced in the pseudorapidity interval |η| < 1. The multiplicity dependence of the pseudorapidity density of charged particles is measured with mid- and forward rapidity multiplicity estimators, the latter being less affected by autocorrelations. A detailed comparison with predictions from the PYTHIA 8 and EPOS LHC event generators is also presented. The results can be used to constrain models for particle production as a function of multiplicity in pp collisions.
We examine the thermodynamic behavior of a static neutral regular (non-singular) black hole enclosed in a finite isothermal cavity. The cavity enclosure allows black hole systems to be investigated in a canonical or a grand canonical ensemble. Here we derive the reduced action for the general metric of a regular black hole in a cavity by considering a canonical ensemble. The new expression for the action contains quantum corrections at short distances and reduces to the action of a singular black hole in a cavity at large distances. We apply this formalism to the noncommutative Schwarzschild black hole in order to study the phase structure of the system. We find a possible transition between stable small and large regular black holes inside the cavity, which exists neither for a classical Schwarzschild black hole in a cavity nor for the asymptotically flat regular black hole without the cavity. This phase transition appears similar to the liquid/gas transition of a Van der Waals gas.
The development of binocular vision is an active learning process comprising the development of disparity tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions with naturalistic input, the model develops disparity tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect that impaired vergence control affects the statistics of orientation tuning in visual cortical neurons.
The cosmological implications of the Covariant Canonical Gauge Theory of Gravity (CCGG) are investigated. CCGG is a Palatini theory derived from first principles using the canonical transformation formalism in the covariant Hamiltonian formulation. The Einstein-Hilbert theory is thereby extended by a quadratic Riemann-Cartan term in the Lagrangian. Moreover, the requirement of covariant conservation of the stress-energy tensor leads to the necessary presence of torsion. In the Friedmann universe this promotes the cosmological constant to a time-dependent function and gives rise to a geometrical correction with the EOS of dark radiation. The resulting cosmology, compatible with the ΛCDM parameter set, encompasses bounce and bang scenarios with graceful exits into the late dark energy era. Testing those scenarios against low-z observations shows that CCGG is a viable theory.
Multi-view microscopy techniques are used to increase the resolution along the optical axis for 3D imaging. Without this, the resolution is insufficient to resolve subcellular events. In addition, parts of the images of opaque specimens are often highly degraded or masked. Both problems motivate scientists to record the same specimen from multiple directions. The images then have to be digitally fused into a single high-quality image. Selective-plane illumination microscopy has proven to be a powerful imaging technique due to its unsurpassed acquisition speed and gentle optical sectioning. However, even in the case of multi-view imaging techniques that illuminate and image the sample from multiple directions, light scattering inside tissues often severely impairs image contrast.
Here we show that for C. elegans embryos multi-view registration can be achieved based on segmented nuclei. However, segmenting nuclei in densely packed distributions such as the C. elegans embryo is challenging. We propose a method which uses a 3D Mexican hat filter for preprocessing and 3D Gaussian curvature in the post-processing step to separate nuclei. We applied this method successfully to three data sets of C. elegans embryos in three different views. The segmentation results outperform previous methods. Moreover, we provide a simple GUI for manual correction and for adjusting the parameters to different data.
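The 3D Mexican hat (Laplacian-of-Gaussian) profile mentioned above has a positive core and a negative surround, which is what makes it respond to blob-like nuclei. A minimal, pure-Python sketch of the radial profile (normalisation omitted; the analytic form, not the paper's implementation):

```python
import math

def mexican_hat_3d(r, sigma):
    """Radial profile of the 3D Mexican hat (negative Laplacian of a
    Gaussian), up to normalisation: (3 - r^2/sigma^2) * exp(-r^2/(2 sigma^2)).

    Positive near the centre, negative in the surround,
    with a zero crossing at r = sqrt(3) * sigma.
    """
    x = (r / sigma) ** 2
    return (3.0 - x) * math.exp(-x / 2.0)

sigma = 2.0  # assumed scale, roughly the nucleus radius in voxels
print(mexican_hat_3d(0.0, sigma))                     # positive core
print(mexican_hat_3d(2.0 * sigma, sigma))             # negative surround
print(mexican_hat_3d(math.sqrt(3.0) * sigma, sigma))  # zero crossing
```

Convolving an image with this kernel amplifies bright blobs of size ~sigma while suppressing both flat background and larger structures, which is why it is a common preprocessing step before nucleus detection.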
We then propose a method that combines point and voxel registration for an accurate multi-view registration of the C. elegans embryo, which does not need any special experimental preparation. We demonstrate the performance of our approach on data acquired from fixed C. elegans embryos. This multi-step approach is evaluated successfully by comparison to different methods and also by using synthetic data. The proposed method overcomes the typically low resolution along the optical axis and enables stitching together the different parts of the embryo available through the different views. A tool for running the code and analyzing the results has been developed.
We derive the interaction of fermions with a dynamical space–time based on the postulate that the description of physics should be independent of the reference frame, which means requiring the form-invariance of the fermion action under diffeomorphisms. The derivation is worked out in the Hamiltonian formalism as a canonical transformation along the lines of non-Abelian gauge theories. This yields a closed set of field equations for fermions, unambiguously fixing their coupling to dynamical space–time. We encounter, in addition to the well-known minimal coupling, anomalous couplings to curvature and torsion. In torsion-free geometries that anomalous interaction reduces to a Pauli-type coupling with the curvature scalar via a spontaneously emerging new coupling constant with the dimension of mass. A consistent model Hamiltonian for the free gravitational field and the impact of its functional form on the structure of the dynamical space–time geometry are discussed.
We review the effective field theory associated with the superfluid phonons that we use for the study of transport properties in the core of superfluid neutron stars in their low-temperature regime. We then discuss the shear and bulk viscosities together with the thermal conductivity coming from the collisions of superfluid phonons in neutron stars. With regard to shear, bulk, and thermal transport coefficients, the phonon collisional processes are obtained in terms of the equation of state and the superfluid gap. We compare the shear coefficient due to the interaction among superfluid phonons with other dominant processes in neutron stars, such as electron collisions. We also analyze the possible consequences for the r-mode instability in neutron stars. As for the bulk viscosities, we determine that phonon collisions contribute decisively to the bulk viscosities inside neutron stars. For the thermal conductivity resulting from phonon collisions, we find that it is temperature independent well below the transition temperature. We also obtain that the thermal conductivity due to superfluid phonons dominates over the one resulting from electron-muon interactions once phonons are in the hydrodynamic regime. As the phonons couple to the Z electroweak gauge boson, we estimate the associated neutrino emissivity. We also briefly comment on how the superfluid phonon interactions are modified in the presence of a gravitational field or in a moving background.
Tailoring the spin state energetics of transition metal complexes, and even the correct prediction of the resulting spin state, is still a challenging task, both for the experimentalist and the theoretician. Apart from the complexity imposed in the solid state by packing effects, the molecular factors governing spin state ordering need to be identified and quantified on an equal footing. In this work we experimentally determine the spin states and spin-crossover (SCO) energies within an eight-member substitution series of N4O2-ligated iron(II) complexes, both in the solid state (SQUID magnetometry and single-crystal X-ray crystallography) and in solution (VT-NMR). The experimental survey is complemented
by exhaustive theoretical modelling of the molecular and electronic structure of the open-chain N4O2 family and its macrocyclic N6 congeners through density-functional theory methods. Ligand topology is identified as the leading factor defining ground-state multiplicity of the corresponding iron(II) complexes. Invariably the low-spin state is sterically trapped in the macrocycles, whereas subtle substitution effects allow for a molecular fine tuning of the spin state in the open-chain ligands. Factorization of computed relative SCO energies holds promise for directed design of future SCO systems.
In this talk we presented a novel technique, based on Deep Learning, to determine the impact parameter of nuclear collisions in the CBM experiment. PointNet-based Deep Learning models are trained on UrQMD followed by CBMRoot simulations of Au+Au collisions at 10 AGeV to reconstruct the impact parameter of collisions from raw experimental data such as hits of the particles in the detector planes, tracks reconstructed from the hits, or their combinations. The PointNet models can perform fast, accurate, event-by-event impact parameter determination in heavy ion collision experiments. They are shown to outperform a simple model which maps the track multiplicity to the impact parameter. While conventional methods for centrality classification merely provide an expected impact parameter distribution for a given centrality class, the PointNet models predict the impact parameter over the range 2–14 fm on an event-by-event basis with a mean error of −0.33 to 0.22 fm.
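The simple baseline mentioned above — mapping track multiplicity to impact parameter — can be sketched with an assumed monotonic multiplicity model. Both the functional form and the numbers below are invented for illustration; they are not the CBM parametrisation:

```python
import math

B_MAX = 14.0   # fm, maximal impact parameter considered (range from the text)
N0 = 400.0     # assumed mean multiplicity of a head-on (b = 0) collision

def mean_multiplicity(b):
    """Toy model: multiplicity falls off with the geometric overlap."""
    return N0 * max(0.0, 1.0 - (b / B_MAX) ** 2)

def impact_parameter_from_multiplicity(n):
    """Invert the toy model: the 'simple model' baseline of the text."""
    frac = min(max(n / N0, 0.0), 1.0)
    return B_MAX * math.sqrt(1.0 - frac)

# A central event (high multiplicity) maps to small b, and vice versa.
print(impact_parameter_from_multiplicity(mean_multiplicity(3.0)))   # ~3 fm
print(impact_parameter_from_multiplicity(mean_multiplicity(12.0)))  # ~12 fm
```

Such a deterministic lookup ignores event-by-event fluctuations of multiplicity at fixed b, which is precisely the limitation the PointNet models are shown to overcome.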
Cell fate clusters in ICM organoids arise from cell fate heredity and division: a modelling approach
(2020)
During the mammalian preimplantation phase, cells undergo two subsequent cell fate decisions. During the first decision, the trophectoderm and the inner cell mass are formed. Subsequently, the inner cell mass segregates into the epiblast and the primitive endoderm. Inner cell mass organoids represent an experimental model system, mimicking the second cell fate decision. It has been shown that cells of the same fate tend to cluster more strongly than expected for random cell fate decisions. Three major processes are hypothesised to contribute to the cell fate arrangements: (1) chemical signalling; (2) cell sorting; and (3) cell proliferation. In order to quantify the influence of cell proliferation on the observed cell lineage type clustering, we developed an agent-based model accounting for mechanical cell–cell interaction, i.e. adhesion and repulsion, cell division, stochastic cell fate decision and cell fate heredity. The model supports the hypothesis that initial cell fate acquisition is a stochastically driven process, taking place in the early development of inner cell mass organoids. Further, we show that the observed neighbourhood structures can emerge solely due to cell fate heredity during cell division.
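The role of cell fate heredity in clustering can be seen already in a one-dimensional toy version of such an agent-based model (a drastic simplification of the paper's 3D mechanical model, written here only to illustrate the mechanism): daughters inherit the mother's fate and stay adjacent, so the number of fate boundaries can never grow during proliferation.

```python
import random

random.seed(1)

# Start from a small population with stochastic fate decisions (two fates).
N_INITIAL, N_FINAL = 10, 200
cells = [random.choice("AB") for _ in range(N_INITIAL)]

# Proliferation with fate heredity: a daughter inherits the mother's fate
# and is inserted next to her (no sorting, no signalling in this toy model).
while len(cells) < N_FINAL:
    i = random.randrange(len(cells))
    cells.insert(i + 1, cells[i])

# Fraction of neighbour pairs sharing a fate; random fates would give ~0.5.
same = sum(a == b for a, b in zip(cells, cells[1:]))
same_fate_fraction = same / (len(cells) - 1)
print(same_fate_fraction)
```

Because heredity alone drives the same-fate neighbour fraction far above the random expectation, clustering by itself does not prove signalling or sorting — which is the point the model is used to make.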
Reprogramming of tomato leaf metabolome by the activity of heat stress transcription factor HsfB1
(2020)
Plants respond to high temperatures with global changes of the transcriptome, proteome, and metabolome. Heat stress transcription factors (Hsfs) are the core regulators of transcriptome responses as they control the reprogramming of expression of hundreds of genes. The thermotolerance-related function of Hsfs is mainly based on the regulation of many heat shock proteins (HSPs). In contrast, the Hsf-dependent reprogramming of metabolic pathways and its contribution to thermotolerance are not well described. In tomato (Solanum lycopersicum), manipulation of HsfB1, either by suppression or overexpression (OE), leads to enhanced thermotolerance and coincides with a distinct profile of metabolic routes, based on metabolome profiling of wild-type (WT) and HsfB1 transgenic plants. Leaves of HsfB1 knock-down plants show an accumulation of metabolites with a positive effect on thermotolerance, such as the sugars sucrose and glucose and the polyamine putrescine. OE of HsfB1 leads to the accumulation of products of the phenylpropanoid and flavonoid pathways, including several caffeoyl quinic acid isomers. The latter is due to the enhanced transcription of genes coding for key enzymes in both pathways, in some cases in both non-stressed and stressed plants. Our results show that beyond the control of the expression of Hsfs and HSPs, HsfB1 has a wider activity range, regulating important metabolic pathways and providing an important link between stress response and physiological tomato development.
Background: Cognitive dysfunctions represent a core feature of schizophrenia and a predictor for clinical outcomes. One possible mechanism for cognitive impairments could involve an impairment in the experience-dependent modifications of cortical networks.
Methods: To address this issue, we employed magnetoencephalography (MEG) during a visual priming paradigm in a sample of chronic patients with schizophrenia (n = 14) and in a group of healthy controls (n = 14). We obtained MEG recordings during the presentation of visual stimuli, each presented three times either consecutively or with intervening stimuli. MEG data were analyzed for event-related fields as well as spectral power in the 1–200 Hz range to examine repetition suppression and repetition enhancement. We defined regions of interest in occipital and thalamic regions and obtained virtual-channel data.
Results: Behavioral priming did not differ between groups. However, patients with schizophrenia showed prominently reduced oscillatory response to novel stimuli in the gamma-frequency band as well as significantly reduced repetition suppression of gamma-band activity and reduced repetition enhancement of beta-band power in occipital cortex to both consecutive repetitions as well as repetitions with intervening stimuli. Moreover, schizophrenia patients were characterized by a significant deficit in suppression of the C1m component in occipital cortex and thalamus as well as of the late positive component (LPC) in occipital cortex.
Conclusions: These data provide novel evidence for impaired repetition suppression in cortical and subcortical circuits in schizophrenia. Although behavioral priming was preserved, patients with schizophrenia showed deficits in repetition suppression as well as repetition enhancement in thalamic and occipital regions, suggesting that experience-dependent modification of neural circuits is impaired in the disorder.
Nodular lymphocyte predominant Hodgkin lymphoma (NLPHL) is a subtype of Hodgkin lymphoma with a preserved B‐cell phenotype and follicular T helper (TFH) cells rosetting around the tumor cells, the lymphocyte‐predominant (LP) cells. As we recently described reactivity of the B‐cell receptors of LP cells of some NLPHL cases with Moraxella spp. proteins, we hypothesized that LP cells could present peptides to rosetting T cells in a major histocompatibility complex class II (MHCII)‐bound manner. Rosetting PD1+ T cells were present in the majority of NLPHL cases, both in typical (17/20) and variant patterns (16/19). In most cases, T‐cell rosettes were CD69+ (typical NLPHL, 17/20; NLPHL variant, 14/19). Furthermore, both MHCII alpha and beta chains were expressed in the LP cells in 23/39 NLPHL. Proximity ligation assay and confocal laser imaging demonstrated interaction of the MHCII beta chain expressed by the LP cells and the T‐cell receptor alpha chain expressed by rosetting T cells. We thus conclude that rosetting T cells in NLPHL express markers that are encountered after antigenic exposure, that MHCII is expressed by the LP cells, and that LP cells interact with rosetting T cells in an immunological synapse in a subset of cases. As they likely receive growth stimulatory signals in this way, blockade of this interaction, for example, by PD1‐directed checkpoint inhibitors, could be a treatment option in a subset of cases in the future.
Volatility clustering and fat tails are prominently observed in financial markets. Here, we analyze the underlying mechanisms of three agent-based models explaining these stylized facts in terms of market instabilities and compare them on empirical grounds. To this end, we first develop a general framework for detecting tail events in stock markets. In particular, we introduce Hawkes processes to automatically identify and date onsets of market turmoils which result in increased volatility. Second, we introduce three different indicators to predict those onsets. Each of the three indicators is derived from and tailored to one of the models, namely quantifying information content, critical slowing down or market risk perception. Finally, we apply our indicators to simulated and real market data. We find that all indicators reliably predict market events on simulated data and clearly distinguish the different models. In contrast, a systematic comparison on the stocks of the Forbes 500 companies shows a markedly lower performance. Overall, predicting the onset of market turmoils appears difficult, yet, over very short time horizons high or rising volatility exhibits some predictive power.
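The Hawkes-process ingredient described above can be sketched in a few lines: with an exponential kernel, the conditional intensity is λ(t) = μ + Σ_{t_i < t} α β exp(−β(t − t_i)), and a turmoil onset can be flagged when λ crosses a threshold. A minimal sketch with invented parameters and event times (not fitted to any market data):

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.6, beta=2.0):
    """Conditional intensity of a Hawkes process with exponential kernel:
    lambda(t) = mu + sum over past events of alpha*beta*exp(-beta*(t - t_i))."""
    return mu + sum(alpha * beta * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

# A quiet stretch followed by a burst of events (times are illustrative).
events = [1.0, 5.0, 5.1, 5.2, 5.3, 5.4]

quiet = hawkes_intensity(4.9, events)    # close to the baseline mu
burst = hawkes_intensity(5.5, events)    # self-excitation piles up
print(quiet, burst)

# Flag an 'onset' when the intensity exceeds a threshold above baseline.
onset = burst > 3 * 0.2
print(onset)
```

Self-excitation (each event raising the probability of further events) is exactly the property that makes Hawkes processes a natural detector of clustered volatility.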
Neuraminidase inhibitors in influenza treatment and prevention – is it time to call it a day?
(2018)
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications remains largely debatable, owing to the limited ability to control for confounders and to explore unobserved aspects of the drug effects. For this study, we used a mathematical model of influenza infection which allowed transparent analyses. The model recreated the oseltamivir effects and indicated that: (i) the efficacy was limited by design, (ii) a 99% efficacy could be achieved by using high drug doses (however, taking high doses of the drug 48 h post-infection could only yield a maximum of 1.6-day reduction in the time to symptom alleviation), and (iii) contributions of oseltamivir to epidemic control could be high, but were observed only in fragile settings. In a typical influenza infection, NAIs’ efficacy is inherently not high, and even if their efficacy is improved, the effect can be negligible in practice.
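In within-host models of this kind, antiviral efficacy typically enters multiplicatively. A minimal sketch of a standard target-cell-limited model, where an antiviral of efficacy eps scales down virion production; the parameter values are illustrative textbook-style numbers, not those of the study:

```python
def viral_peak(eps, days=20.0, dt=0.001):
    """Euler-integrate a target-cell-limited model
        T' = -beta*T*V,  I' = beta*T*V - delta*I,  V' = (1-eps)*p*I - c*V
    and return the peak viral load. Parameters are illustrative only."""
    beta, delta, p, c = 2.7e-5, 4.0, 0.012, 3.0
    T, I, V = 4.0e8, 0.0, 1.0
    peak = V
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = (1.0 - eps) * p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
        peak = max(peak, V)
    return peak

print(viral_peak(0.0))    # untreated: viral load grows to a large peak
print(viral_peak(0.99))   # 99% efficacy keeps the infection from taking off
```

With these parameters the basic reproductive number scales as (1 − eps), so a 99% efficacy pushes it below one and the infection never grows — illustrating why very high efficacies are needed for a dramatic within-host effect.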
We construct a new equation of state for baryonic matter under an intense magnetic field within the framework of covariant density functional theory. The composition of matter includes hyperons as well as Δ-resonances. The extension of the nucleonic functional to the hypernuclear sector is constrained by experimental data on Λ- and Ξ-hypernuclei. We find that the equation of state stiffens with the inclusion of the magnetic field, which increases the maximum mass of a neutron star compared to the non-magnetic case. In addition, the strangeness fraction in the matter is enhanced. Several observables, such as the Dirac effective mass and the particle abundances, show a typical oscillatory behavior as a function of the magnetic field and/or density, which is traced back to the occupation pattern of Landau levels.
Glia, the helper cells of the brain, are essential in maintaining neural resilience across time and varying challenges: by reacting to changes in neuronal health, glia carefully balance repair or disposal of injured neurons. Malfunction of these interactions is implicated in many neurodegenerative diseases. We present a reductionist model that mimics repair-or-dispose decisions to generate a hypothesis for the cause of disease onset. The model assumes four tissue states: healthy and challenged tissue, primed tissue at risk of acute damage propagation, and chronic neurodegeneration. We discuss analogies to the progression stages observed in the most common neurodegenerative conditions and to experimental observations of the cellular signaling pathways of glia-neuron crosstalk. The model suggests that the onset of neurodegeneration can result as a compromise between two conflicting goals: short-term resilience to stressors versus long-term prevention of tissue damage.
We derive the relation between cumulants of a conserved charge measured in a subvolume of a thermal system and the corresponding grand-canonical susceptibilities, taking into account exact global conservation of that charge. The derivation is presented for an arbitrary equation of state, with the assumption that the subvolume is sufficiently large to be close to the thermodynamic limit. Our framework – the subensemble acceptance method (SAM) – quantifies the effect of global conservation laws and is an important step toward a direct comparison between cumulants of conserved charges measured in central heavy ion collisions and theoretical calculations of grand-canonical susceptibilities, such as those from lattice QCD. As an example, we apply our formalism to net-baryon fluctuations at vanishing baryon chemical potential as encountered in collisions at the LHC and RHIC.
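For orientation, the lowest-order SAM corrections reduce to simple binomial-like factors in the subvolume fraction (quoted here as the commonly cited leading-order result; higher-order cumulants acquire additional equation-of-state-dependent terms):

```latex
\frac{\kappa_2[B_1]}{\kappa_2^{\mathrm{gce}}} = 1 - \alpha, \qquad
\frac{\kappa_3[B_1]}{\kappa_3^{\mathrm{gce}}} = (1 - \alpha)(1 - 2\alpha), \qquad
\alpha \equiv V_1 / V ,
```

where $B_1$ is the charge in the subvolume $V_1$ and the superscript gce denotes the grand-canonical value.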
EEG microstate periodicity explained by rotating phase patterns of resting-state alpha oscillations
(2020)
Spatio-temporal patterns in electroencephalography (EEG) can be described by microstate analysis, a discrete approximation of the continuous electric field patterns produced by the cerebral cortex. Resting-state EEG microstates are largely determined by alpha frequencies (8-12 Hz) and we recently demonstrated that microstates occur periodically with twice the alpha frequency.
To understand the origin of microstate periodicity, we analyzed the analytic amplitude and the analytic phase of resting-state alpha oscillations independently. In continuous EEG data we found rotating phase patterns organized around a small number of phase singularities which varied in number and location. The spatial rotation of phase patterns occurred with the underlying alpha frequency. Phase rotors coincided with periodic microstate motifs involving the four canonical microstate maps. The analytic amplitude showed no oscillatory behaviour and was almost static across time intervals of 1-2 alpha cycles, resulting in the global pattern of a standing wave.
In n=23 healthy adults, time-lagged mutual information analysis of microstate sequences derived from amplitude and phase signals of awake eyes-closed EEG records showed that only the phase component contributed to the periodicity of microstate sequences. Phase sequences showed mutual information peaks at multiples of 50 ms and the group average had a main peak at 100 ms (10 Hz), whereas amplitude sequences had a slow and monotonous information decay. This result was confirmed by an independent approach combining temporal principal component analysis (tPCA) and autocorrelation analysis.
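A stdlib sketch of the time-lagged mutual information statistic used here (generic symbol labels, not the paper's EEG pipeline):

```python
from collections import Counter
from math import log2

def lagged_mutual_information(seq, lag):
    """I(X_t; X_{t+lag}) in bits for a discrete (symbolic) microstate sequence,
    estimated from the empirical joint distribution of symbol pairs."""
    pairs = list(zip(seq[:-lag], seq[lag:]))
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
        for (a, b), c in joint.items()
    )

# A strictly periodic 4-state sequence is fully predictable at any lag
# (MI = H(X) = 2 bits), whereas an i.i.d. random sequence carries almost
# no lagged information -- the contrast behind the periodicity analysis.
import random
random.seed(0)
periodic = [0, 1, 2, 3] * 250
noise = [random.randrange(4) for _ in range(1000)]
```

Real microstate sequences sit between these extremes, which is why peaks of the lagged MI at multiples of 50 ms are informative.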
We reproduced our observations in a generic model of EEG oscillations composed of coupled non-linear oscillators (Stuart-Landau model). Phase-amplitude dynamics similar to experimental EEG occurred when the oscillators underwent a supercritical Hopf bifurcation, a common feature of many computational models of the alpha rhythm.
These findings explain our previous description of periodic microstate recurrence and its relation to the time scale of alpha oscillations. Moreover, our results corroborate the predictions of computational models and connect experimentally observed EEG patterns to properties of critical oscillator networks.
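The oscillator model can be sketched with a single Stuart-Landau unit; the study uses a network of coupled oscillators, so the uncoupled normal form below, with illustrative parameters, only demonstrates the supercritical Hopf behaviour itself:

```python
def stuart_landau(z0, mu, omega=6.283, dt=1e-3, steps=30000):
    """Euler integration of the Stuart-Landau normal form
        dz/dt = (mu + i*omega) z - |z|^2 z .
    mu > 0: supercritical Hopf regime, |z| settles near sqrt(mu);
    mu < 0: oscillations decay to the fixed point z = 0."""
    z = complex(z0)
    for _ in range(steps):
        z += dt * ((mu + 1j * omega) * z - abs(z) ** 2 * z)
    return z

above = stuart_landau(0.1, mu=0.25)   # limit cycle, amplitude near sqrt(0.25) = 0.5
below = stuart_landau(0.1, mu=-0.25)  # oscillation dies out
```

Crossing the bifurcation by changing the sign of `mu` switches between damped and self-sustained alpha-like oscillations, the regime in which the phase-amplitude dynamics described above were reproduced.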
p53 regulates the cellular response to genotoxic damage and prevents carcinogenic events. Theoretical and experimental studies indicate that the p53-Mdm2 network constitutes the core module of regulatory interactions activated by cellular stress induced by a variety of signaling pathways. In this paper, a strategy to control the p53-Mdm2 network regulated by p14ARF is developed, based on the pinning control technique, which consists in applying local feedback controllers to a small number of nodes (the pinned ones) in the network. Pinned nodes are selected on the basis of their importance level in a topological hierarchy, their degree of connectivity within the network, and the biological role they perform. Two cases are considered: in the first, the oscillatory pattern under gamma-radiation is recovered; in the second, an increased expression level of p53 is targeted. For both cases, the control law is applied to p14ARF (the pinned node, based on a virtual-leader methodology), and an overexpressed Mdm2-mediated p53 degradation condition is taken as the carcinogenic initial behavior. The approach uses a computational algorithm, which opens an alternative path to understanding cellular responses to stress, making it possible to model and control the gene regulatory network dynamics in two different biological contexts. As the main result of the proposed control technique, the two desired behaviors mentioned above are obtained.
A new method of event characterization based on Deep Learning is presented. PointNet models can be used for fast, online, event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV, which are then used to train and evaluate PointNet-based architectures. The models can be trained on features such as the hit positions of particles in the CBM detector planes, tracks reconstructed from the hits, or combinations thereof. The Deep Learning models reconstruct impact parameters from 2-14 fm with a mean error varying from -0.33 to 0.22 fm. For impact parameters in the range of 5-14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4-9% and a mean error of -0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4-10% and a mean error of -0.18 to 0.22 fm. This new method of event characterization is shown to be more accurate and less model-dependent than conventional methods and can exploit the performance boost of modern GPUs.
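The key architectural idea of PointNet (a shared per-point function followed by a symmetric pooling operation, so the output does not depend on the ordering of the hits) can be sketched in plain Python with toy random weights; this is a didactic illustration, not the trained CBM models:

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(weights, v):
    # weights: n rows of m columns; v has length n; result has length m
    return [sum(v[i] * weights[i][j] for i in range(len(v)))
            for j in range(len(weights[0]))]

def pointnet_features(points, W1, W2):
    """PointNet-style encoder: a shared two-layer MLP applied to every point,
    followed by max pooling. The symmetric (max) pooling makes the feature
    vector invariant to any permutation of the input points."""
    per_point = [relu(matvec(W2, relu(matvec(W1, p)))) for p in points]
    return [max(h[j] for h in per_point) for j in range(len(per_point[0]))]

random.seed(1)
W1 = [[random.gauss(0, 1) for _ in range(16)] for _ in range(3)]
W2 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]
hits = [[random.gauss(0, 1) for _ in range(3)] for _ in range(50)]  # toy (x, y, z) hits
f1 = pointnet_features(hits, W1, W2)
f2 = pointnet_features(list(reversed(hits)), W1, W2)  # same hits, different order
```

A regression head on top of such order-invariant features is what maps a variable-length set of hits or tracks to a single impact parameter estimate.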
Summary
Wild relatives of crops thrive in habitats where environmental conditions can be restrictive for productivity and survival of cultivated species. The genetic basis of this variability, particularly for tolerance to high temperatures, is not well understood. We examined the capacity of wild and cultivated accessions to acclimate to rapid temperature elevations that cause heat stress (HS).
We investigated genotypic variation in the thermotolerance of seedlings from wild and cultivated accessions. The contribution of polymorphisms associated with thermotolerance variation was examined with respect to alterations in the function of the identified gene.
We show that tomato germplasm underwent a progressive loss of acclimation to strong temperature elevations. Sensitivity is associated with intronic polymorphisms in the HS transcription factor HsfA2 which affect the splicing efficiency of its pre‐mRNA. Intron splicing in wild species results in increased synthesis of isoform HsfA2‐II, implicated in the early stress response, at the expense of HsfA2‐I which is involved in establishing short‐term acclimation and thermotolerance.
We propose that the selection for modern HsfA2 haplotypes reduced the ability of cultivated tomatoes to rapidly acclimate to temperature elevations, but enhanced their short‐term acclimation capacity. Hence, we provide evidence that alternative splicing has a central role in the definition of plant fitness plasticity to stressful conditions.
Our primary objective is to construct a plausible, unified model of inflation, dark energy and dark matter from first principles, starting from a fundamental Lagrangian action, wherein all fundamental ingredients are systematically dynamically generated from a very simple model of modified gravity interacting with a single scalar field, employing the formalism of non-Riemannian spacetime volume-elements. The non-Riemannian volume element in the initial scalar field action leads to a hidden, nonlinear Noether symmetry which produces an energy-momentum tensor identified as the sum of a dynamically generated cosmological constant and dust-like dark matter. The non-Riemannian volume-element in the initial Einstein–Hilbert action, upon passage to the physical Einstein frame, dynamically creates a second scalar field with a non-trivial inflationary potential and with an additional interaction with the dynamically generated dark matter. The resulting Einstein-frame action describes a fully dynamically generated inflationary model coupled to dark matter. Numerical results for observables such as the scalar power spectral index and the tensor-to-scalar ratio conform to the latest 2018 Planck data.
We estimate the feeddown contributions from decays of unstable A=4 and A=5 nuclei to the final yields of protons, deuterons, tritons, 3He, and 4He produced in relativistic heavy-ion collisions at √sNN ≥ 2.4 GeV, using the statistical model. The feeddown contributions do not exceed 5% at LHC and top RHIC energies due to the large penalty factors involved, but are substantial at intermediate collision energies. We observe large feeddown contributions for tritons, 3He, and 4He at √sNN ≲ 10 GeV, where they may account for as much as 70% of the final yield at the lower end of the collision energies considered. Sizable (>10%) effects for deuteron yields are observed at √sNN ≲ 4 GeV. The results suggest that feeddown from excited nuclei cannot be neglected in ongoing and future analyses of light nuclei production at intermediate collision energies, including the HADES and CBM experiments at FAIR, NICA at JINR, the RHIC beam energy scan and fixed-target programmes, and NA61/SHINE at CERN. We further show that the freeze-out curve in the T–μB plane itself is affected significantly by the light nuclei at high baryochemical potential.
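The penalty-factor argument can be made concrete with a one-line Boltzmann estimate; the freeze-out values below are illustrative round numbers, not the fitted parameters of the analysis:

```python
import math

def penalty_factor(mu_B, T, m_N=0.938):
    """Rough thermal-model suppression per added baryon:
    exp(-(m_N - mu_B)/T), with all energies in GeV. Each extra nucleon in a
    cluster costs one such factor, so heavy clusters are rare when it is small."""
    return math.exp(-(m_N - mu_B) / T)

lhc = penalty_factor(mu_B=0.0, T=0.155)   # LHC-like freeze-out: mu_B ~ 0
low = penalty_factor(mu_B=0.78, T=0.05)   # illustrative high-mu_B, low-T freeze-out
```

At vanishing baryochemical potential each added baryon costs a factor of a few per mille, which is why A=4 and A=5 feeddown is negligible at the LHC; at large μB the suppression is far milder, consistent with the sizable feeddown found at intermediate energies.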
In this paper, we discuss the damping of density oscillations in dense nuclear matter in the temperature range relevant to neutron star mergers. This damping is due to bulk viscosity arising from the weak interaction “Urca” processes of neutron decay and electron capture. The nuclear matter is modelled in the relativistic density functional approach. The bulk viscosity reaches a resonant maximum close to the neutrino trapping temperature, then drops rapidly as temperature rises into the range where neutrinos are trapped in neutron stars. We investigate the bulk viscous dissipation timescales in a post-merger object and identify regimes where these timescales are as short as the characteristic timescale ∼10 ms, and, therefore, might affect the evolution of the post-merger object. Our analysis indicates that bulk viscous damping would be important at not too high temperatures of the order of a few MeV and densities up to a few times saturation density.
We study D and Ds mesons at finite temperature using an effective field theory based on chiral and heavy-quark spin-flavor symmetries within the imaginary-time formalism. Interactions with the light degrees of freedom are unitarized via a Bethe-Salpeter approach, and the D and Ds self-energies are calculated self-consistently. We dynamically generate the D∗0(2300) and D∗s0(2317) states, and study their possible identification as the chiral partners of the D and Ds ground states, respectively. We show the evolution of their masses and decay widths as functions of temperature, and provide an analysis of chiral-symmetry restoration in the heavy-flavor sector below the transition temperature. In particular, we analyse the very special case of the D meson, for which the chiral partner is associated with the double-pole structure of the D∗0(2300).
First, we propose a scale-invariant modified gravity interacting with a neutral scalar inflaton and a Higgs-like SU(2)×U(1) iso-doublet scalar field based on the formalism of non-Riemannian (metric-independent) spacetime volume-elements. This model describes, in the physical Einstein frame, a quintessential inflationary scenario driven by the “inflaton” together with the gravity-“inflaton” assisted dynamical spontaneous SU(2)×U(1) symmetry breaking in the post-inflationary universe, whereas the SU(2)×U(1) symmetry remains intact in the inflationary epoch. Next, we find the explicit representation of the latter quintessential inflationary model with a dynamical Higgs effect as an Eddington-type purely affine gravity.
Measurement of ϒ(1S) elliptic flow at forward rapidity in Pb-Pb collisions at √sNN = 5.02 TeV
(2019)
The first measurement of the ϒ(1S) elliptic flow coefficient (v2) is performed at forward rapidity (2.5 < y < 4) in Pb-Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC. The results are obtained with the scalar product method and are reported as a function of transverse momentum (pT) up to 15 GeV/c in the 5%-60% centrality interval. The measured ϒ(1S) v2 is consistent with zero and, within uncertainties, with the small positive values predicted by transport models. The v2 coefficient in 2 < pT < 15 GeV/c is lower than that of inclusive J/ψ mesons in the same pT interval by 2.6 standard deviations. These results, combined with earlier suppression measurements, are in agreement with a scenario in which ϒ(1S) production in Pb-Pb collisions at LHC energies is dominated by dissociation limited to the early stage of the collision, whereas in the J/ψ case there is substantial experimental evidence of an additional regeneration component.
Distillation of scalar exchange by coherent hypernucleus production in antiproton–nucleus collisions
(2017)
The total and angular differential cross sections of the coherent process p̄ + A(Z) → ΛA(Z − 1) + Λ̄ are evaluated at beam momenta of 1.5–20 GeV/c within the meson exchange model with bound proton and Λ-hyperon wave functions. It is shown that the shape of the beam-momentum dependence of the hypernucleus production cross sections for various discrete states is strongly sensitive to the presence of scalar κ-meson exchange in the p̄p → Λ̄Λ amplitude. This can be used as a clean test of the exchange of the scalar πK correlation in coherent p̄A reactions.
The study of hypernuclei in relativistic ion collisions opens new opportunities for nuclear and particle physics. The main processes leading to the production of hypernuclei in these reactions are the disintegration of large excited hyper-residues (target- and projectile-like) and the coalescence of hyperons with other baryons into light clusters. We use transport, coalescence and statistical models to describe the whole reaction and demonstrate the effectiveness of this approach: these reactions lead to the abundant production of multi-strange nuclei and new hypernuclear states. A broad distribution of predicted hypernuclei in mass and isospin allows for investigating the properties of exotic hypernuclei, as well as hypermatter at both high and low temperatures. The production of hypernuclei saturates at high energies; therefore, the optimal way to pursue this experimental research is to use accelerator facilities of intermediate energies, like FAIR (Darmstadt) and NICA (Dubna).
Formation of hypermatter and hypernuclei within transport models in relativistic ion collisions
(2015)
Within a combined approach we investigate the main features of the production of hyper-fragments in relativistic heavy-ion collisions. The formation of hyperons is modeled within the UrQMD and HSD transport codes. To describe the capture of hyperons by nucleons and nuclear residues, a coalescence of baryons (CB) model was developed. We demonstrate that the origin of hypernuclei of various masses can be explained by typical baryon interactions, and that it is similar to the processes leading to the production of conventional nuclei. At high beam energies we predict a saturation of the yields of all hyper-fragments; therefore, reactions of this kind can be studied with high yields even at accelerators of moderate relativistic energies.
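A coalescence-type capture criterion of the kind described can be sketched as a proximity test in phase space; the cut values `p0` and `r0` are illustrative placeholders, not the calibrated CB-model parameters:

```python
import math

def captured(p_hyp, x_hyp, nucleons, p0=0.2, r0=4.0):
    """Toy coalescence criterion: the hyperon is attached to a cluster if some
    nucleon lies within p0 (GeV/c) in momentum space AND within r0 (fm) in
    coordinate space at freeze-out. nucleons is a list of (momentum, position)
    3-vectors; p0 and r0 are illustrative cut values."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return any(dist(p, p_hyp) < p0 and dist(x, x_hyp) < r0
               for p, x in nucleons)

near = [((0.1, 0.0, 0.0), (1.0, 0.0, 0.0))]   # (momentum GeV/c, position fm)
far = [((1.5, 0.0, 0.0), (30.0, 0.0, 0.0))]
```

In a full transport calculation this test is applied to every hyperon against the nucleons and residues produced event by event, which is how hyper-fragment yields emerge from ordinary baryon interactions.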
Observations show that, at the beginning of their existence, neutron stars are accelerated briskly to velocities of up to a thousand kilometers per second. We argue that this remarkable effect can be explained as a manifestation of quantum anomalies on astrophysical scales. To theoretically describe the early stage in the life of neutron stars we use hydrodynamics as a systematic effective-field-theory framework. Within this framework, anomalies of the Standard Model of particle physics as underlying microscopic theory imply the presence of a particular set of transport terms, whose form is completely fixed by theoretical consistency. The resulting chiral transport effects in proto-neutron stars enhance neutrino emission along the internal magnetic field, and the recoil can explain the order of magnitude of the observed kick velocities.
Unparticle Casimir effect
(2017)
In this paper we present the un-Casimir effect, namely the study of the Casimir energy in the presence of an unparticle component in addition to the electromagnetic field contribution. The distinctive feature of the un-Casimir effect is a fractalization of metallic plates. This result emerges through a new dependence of the Casimir energy on the plate separation that scales with a continuous power controlled by the unparticle dimension. As long as the perfect conductor approximation is valid, we find bounds on the unparticle scale that are independent of the effective coupling constant between the scale invariant sector and ordinary matter. We find regions of the parameter space such that for plate distances around 5 μm and larger the un-Casimir bound wins over the other bounds.
We calculate ratios of higher-order susceptibilities quantifying fluctuations in the number of net-protons and in the net-electric charge using the Hadron Resonance Gas (HRG) model. We take into account the effect of resonance decays, the kinematic acceptance cuts in rapidity, pseudo-rapidity and transverse momentum used in the experimental analysis, as well as a randomization of the isospin of nucleons in the hadronic phase. By comparing these results to the latest experimental data from the STAR Collaboration, we determine the freeze-out conditions from net-electric charge and net-proton distributions and discuss their consistency.
Motivated by a recent finding of an exact solution of the relativistic Boltzmann equation in a Friedmann–Robertson–Walker spacetime, we implement this metric into the newly developed transport approach Simulating Many Accelerated Strongly-interacting Hadrons (SMASH). We study the numerical solution of the transport equation and compare it to this exact solution for massless particles. We also compare a different initial condition, for which the transport equation can be solved numerically in an independent way. Very good agreement is observed in both cases. Having passed these checks for the SMASH code, we study a gas of massive particles within the same spacetime, where the particle decoupling is forced by the Hubble expansion. In this simple scenario we present an analysis of the freeze-out times as a function of the masses and cross sections of the particles. The results might be of interest for their potential application to relativistic heavy-ion collisions, in particular for the characterization of the freeze-out process in terms of hadron properties.
We compare the reconstructed hadronization conditions in relativistic nuclear collisions in the nucleon–nucleon centre-of-mass energy range 4.7–2760 GeV, in terms of temperature and baryon-chemical potential, with lattice QCD calculations, by using hadronic multiplicities. We obtain hadronization temperatures and baryon-chemical potentials with a fit to measured multiplicities by correcting for the effect of post-hadronization rescattering. The post-hadronization modification factors are calculated by means of a coupled hydrodynamical-transport model simulation under the same conditions of approximate isothermal and isochemical decoupling as assumed in the statistical hadronization model fits to the data. The fit quality is considerably better than without rescattering corrections, as already found in previous work. The curvature κ of the obtained “true” hadronization pseudo-critical line is found to be 0.0048 ± 0.0026, in agreement with lattice QCD estimates; the pseudo-critical temperature at vanishing baryon-chemical potential is found to be 164.3 ± 1.8 MeV.
We discuss different models for the spin structure of the nonperturbative pomeron: scalar, vector, and rank-2 symmetric tensor. The ratio of single-helicity-flip to helicity-conserving amplitudes in polarised high-energy proton–proton elastic scattering, known as the complex r5 parameter, is calculated for these models. We compare our results to experimental data from the STAR experiment. We show that the spin-0 (scalar) pomeron model is clearly excluded by the data, while the vector pomeron is inconsistent with the rules of quantum field theory. The tensor pomeron is found to be perfectly consistent with the STAR data.
We study the production of entropy in the context of a nonequilibrium chiral phase transition. The dynamical symmetry breaking is modeled by a Langevin equation for the order parameter coupled to the Bjorken dynamics of a quark plasma. We investigate the impact of dissipation and noise on the entropy and explore the possibility of reheating for crossover and first-order phase transitions, depending on the expansion rate of the fluid. The relative increase in entropy is estimated to range from 10% for a crossover to 100% for a first-order phase transition at low beam energies, which could be detected in the pion-to-proton ratio as a function of beam energy.
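The Langevin treatment of the order parameter can be illustrated with an overdamped toy version in a static double-well potential; there is no Bjorken expansion here and all parameter values are illustrative:

```python
import math
import random

def langevin_relax(sigma0, eta=1.0, T=0.01, dt=1e-3, steps=20000, seed=0):
    """Overdamped Langevin dynamics of an order parameter sigma in the
    double-well potential V(s) = (s^2 - 1)^2 / 4:
        eta * ds/dt = -V'(s) + sqrt(2 eta T) * xi(t),
    with Gaussian white noise xi. Dissipation (eta) drags the field into a
    broken-symmetry minimum while the noise produces fluctuations around it."""
    rng = random.Random(seed)
    s = sigma0
    for _ in range(steps):
        drift = -s * (s * s - 1.0)   # -dV/ds
        s += (drift / eta) * dt + math.sqrt(2.0 * T * dt / eta) * rng.gauss(0.0, 1.0)
    return s

sigma = langevin_relax(0.1)   # relaxes close to one of the minima s = +/- 1
```

In the full problem the potential itself evolves with the expanding, cooling plasma, and the work done by dissipation and noise during relaxation is what generates the extra entropy.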
We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T < 1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.
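The Skellam baseline with matched mean and variance can be constructed directly; the moments below are illustrative, and the ratio P(N)/Skellam(N) then isolates the non-Poissonian structure discussed above:

```python
from math import exp, factorial

def skellam_pmf(k, mu1, mu2, jmax=80):
    """pmf of N = N_plus - N_minus for independent Poisson(mu1) and Poisson(mu2);
    the Skellam mean is mu1 - mu2 and the variance is mu1 + mu2."""
    return sum(
        exp(-mu1 - mu2) * mu1 ** (k + j) * mu2 ** j / (factorial(k + j) * factorial(j))
        for j in range(max(0, -k), jmax)
    )

def skellam_baseline(mean, var):
    """Skellam distribution matched to a measured mean and variance of P(N)."""
    mu1, mu2 = (var + mean) / 2.0, (var - mean) / 2.0
    return lambda k: skellam_pmf(k, mu1, mu2)

base = skellam_baseline(mean=1.0, var=5.0)   # illustrative moments
```

Dividing a measured net-baryon distribution by such a baseline, bin by bin, exposes deviations from uncorrelated Poissonian production of the kind attributed here to O(4) criticality.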
We present a systematic study of the normalized symmetric cumulants, NSC(n,m), at the eccentricity level in proton-proton interactions within a wounded hot spot approach. We focus our attention on the influence of spatial correlations between the proton constituents, in our case gluonic hot spots, on this observable. We notice that the presence of short-range repulsive correlations between the hot spots systematically decreases the values of NSC(2,3) and NSC(2,4) in mid- to ultra-central collisions while increasing them in peripheral interactions. In the case of NSC(2,3) we find that, as suggested by data, an anti-correlation of ε2 and ε3 in ultra-central collisions, i.e. NSC(2,3) < 0, is possible within the correlated scenario, while it never occurs without correlations when the number of gluonic hot spots is set to three. We attribute this fact to the decisive role of correlations in enlarging the probability of interaction topologies that reduce the value of NSC(2,3) and, eventually, make it negative. Further, we explore the dependence of our conclusions on the number of hot spots and on the values of the hot spot radius and the repulsive core distance. Our results add evidence to the idea that spatial correlations between the subnucleonic degrees of freedom of the proton may have a strong impact on the initial-state properties of proton-proton interactions [1].
In this paper we discuss to what extent one can infer details of the interior structure of a black hole based on its horizon. Recalling that black hole thermal properties are connected to the non-classical nature of gravity, we circumvent the restrictions of the no-hair theorem by postulating that the black hole interior is singularity free due to violations of the usual energy conditions. Further, these conditions allow one to establish a one-to-one, holographic projection between Planckian areal “bits” on the horizon and “voxels” representing the gravitational degrees of freedom in the black hole interior. We illustrate the repercussions of this idea by discussing an example of a black hole interior consisting of a de Sitter core, postulated to arise from the local graviton quantum vacuum energy. It is shown that the black hole entropy can emerge as the statistical entropy of a gas of voxels.
We present an analysis of the role of the charge within the self-complete quantum gravity paradigm. By studying the classicalization of generic ultraviolet-improved charged black hole solutions around the Planck scale, we show that the charge introduces important differences with respect to the neutral case. First, there exists a family of black hole parameters fulfilling the particle-black hole condition. Second, there is no extremal particle-black hole solution, but at best quasi-extremal charged particle-black holes. We show that Hawking emission disrupts the particle-black hole condition. By analyzing the Schwinger pair-production mechanism, we find that the charge is quickly shed and the particle-black hole condition can ultimately be restored in a cooling-down phase towards a zero-temperature configuration, provided non-classical effects are taken into account.
Bardeen black hole chemistry
(2019)
In the present paper we try to connect the Bardeen black hole with the recently proposed concept of black hole chemistry. We study the thermodynamic properties of this regular black hole in an anti-de Sitter background. The negative cosmological constant Λ plays the role of a positive thermodynamic pressure of the system. After studying the thermodynamic variables, we derive the corresponding equation of state and show that a neutral Bardeen-anti-de Sitter black hole exhibits a phenomenology similar to that of the Van der Waals fluid. This is equivalent to saying that the system exhibits criticality and a first-order small/large black hole phase transition reminiscent of liquid/gas coexistence.
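For orientation, the analogy invoked here compares the fluid equation of state with the AdS black hole one; the second expression is the standard charged AdS form quoted in the black hole chemistry literature, given here only for reference, since the Bardeen case modifies the charge-dependent term:

```latex
\left(P + \frac{a}{v^2}\right)\left(v - b\right) = k_B T ,
\qquad
P = \frac{T}{v} - \frac{1}{2\pi v^2} + \frac{2 Q^2}{\pi v^4}, \quad v = 2 r_h ,
```

where $v$ plays the role of a specific volume built from the horizon radius $r_h$, and $P$ is identified with $-\Lambda/8\pi$.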
The properties of the open-strange meson K1± in nuclear matter are estimated in the QCD sum rule approach. We obtain a relation between the in-medium mass and width of the K1− (K1+) in nuclear matter, and show that the mass shift can be as large as −249 (−35) MeV. The spectral modification of the K1 meson can be probed using kaon beams at J-PARC. Such a measurement, together with that of the K⁎, will shed light on how chiral symmetry is partially restored in nuclear matter.
The effect of a non-zero strangeness chemical potential on the strong-interaction phase diagram has been studied within the framework of the SU(3) quark-hadron chiral parity-doublet model. Both the nuclear liquid-gas and the chiral/deconfinement phase transitions are modified. The first-order line of the chiral phase transition is observed to vanish completely, with the entire phase boundary becoming a crossover. These changes in the nature of the phase transitions are expected to modify various susceptibilities, the effects of which might be detectable in particle-number distributions resulting from moderate-temperature, high-density heavy-ion collision experiments.
In this letter we present some stringy corrections to black hole spacetimes emerging from string T-duality. As a first step, we derive the static Newtonian potential by exploiting the relation between the T-duality and the path integral duality. We show that the intrinsic non-perturbative nature of stringy corrections introduces an ultraviolet cutoff known as zero-point length in the path integral duality literature. As a result, the static potential is found to be regular. We use this result to derive a consistent black hole metric for the spherically symmetric, electrically neutral case. It turns out that the new spacetime is regular and is formally equivalent to the Bardeen metric, apart from a different ultraviolet regulator. On the thermodynamics side, the Hawking temperature admits a maximum before a cooling down phase towards a thermodynamically stable end of the black hole evaporation process. The findings support the idea of universality of quantum black holes.
We consider a simple model of modified gravity interacting with a single scalar field ϕ with weakly coupled exponential potential within the framework of non-Riemannian spacetime volume-form formalism. The specific form of the action is fixed by the requirement of invariance under global Weyl-scale symmetry. Upon passing to the physical Einstein frame we show how the non-Riemannian volume elements create a second canonical scalar field u and dynamically generate a non-trivial two-scalar-field potential Ueff(u,ϕ) with two remarkable features: (i) it possesses a large flat region for large u describing a slow-roll inflation; (ii) it has a stable low-lying minimum w.r.t. (u,ϕ) representing the dark energy density in the “late universe”. We study the corresponding two-field slow-roll inflation and show that the pertinent slow-roll inflationary curve ϕ = ϕ(u) in the two-field space (u,ϕ) has a very small curvature, i.e., ϕ changes very little during the inflationary evolution of u on the flat region of Ueff(u,ϕ). Explicit expressions are found for the slow-roll parameters which differ from those in the single-field inflationary counterpart. Numerical solutions for the scalar spectral index and the tensor-to-scalar ratio are derived agreeing with the observational data.
Rethinking superdeterminism
(2020)
Quantum mechanics has irked physicists ever since its conception more than 100 years ago. While some of the misgivings, such as it being unintuitive, are merely aesthetic, quantum mechanics has one serious shortcoming: it lacks a physical description of the measurement process. This “measurement problem” indicates that quantum mechanics is at least an incomplete theory—good as far as it goes, but missing a piece—or, more radically, is in need of complete overhaul. Here we describe an approach which may provide this sought-for completion or replacement: Superdeterminism. A superdeterministic theory is one which violates the assumption of Statistical Independence (that distributions of hidden variables are independent of measurement settings). Intuition suggests that Statistical Independence is an essential ingredient of any theory of science (never mind physics), and for this reason Superdeterminism is typically discarded swiftly in any discussion of quantum foundations. The purpose of this paper is to explain why the existing objections to Superdeterminism are based on experience with classical physics and linear systems, but that this experience misleads us. Superdeterminism is a promising approach not only to solve the measurement problem, but also to understand the apparent non-locality of quantum physics. Most importantly, we will discuss how it may be possible to test this hypothesis in an (almost) model independent way.
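The Statistical Independence assumption referred to in the abstract has a standard formal statement in the Bell-test literature; writing λ for the hidden variables and a, b for the detector settings:

```latex
% Statistical Independence (assumed in Bell-type theorems):
\rho(\lambda \mid a, b) = \rho(\lambda)
% A superdeterministic theory violates this assumption:
\rho(\lambda \mid a, b) \neq \rho(\lambda)
```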
In this work, we discuss the dense matter equation of state (EOS) for the extreme range of conditions encountered in neutron stars and their mergers. The calculation of the properties of such an EOS involves modeling different degrees of freedom (such as nuclei, nucleons, hyperons, and quarks), taking into account different symmetries, and including finite density and temperature effects in a thermodynamically consistent manner. We begin by addressing subnuclear matter consisting of nucleons and a small admixture of light nuclei in the context of the excluded volume approach. We then turn our attention to supranuclear homogeneous matter as described by the Chiral Mean Field (CMF) formalism. Finally, we present results from realistic neutron-star-merger simulations performed using the CMF model that predict signatures for deconfinement to quark matter in gravitational wave signals.
In power systems, flow allocation (FA) methods make it possible to attribute the usage and costs of the transmission grid to each individual market participant. Based on predefined assumptions, the power flow is split into isolated generator-specific or producer-specific sub-flows. Two prominent FA methods, Marginal Participation (MP) and Equivalent Bilateral Exchanges (EBE), build upon the linearized power flow and thus on the Power Transfer Distribution Factors (PTDFs). Despite their intuitive and computationally efficient concepts, they are restricted to networks with passive transmission elements only. As soon as a significant number of controllable transmission elements, such as high-voltage direct current (HVDC) lines, operate in the system, they lose their applicability. This work reformulates the two methods in terms of Virtual Injection Patterns (VIPs), which makes it possible to efficiently introduce a shift parameter q that tunes the contributions of net sources and net sinks in the network. Major properties of and differences between the methods are pointed out, and it is shown how the MP and EBE algorithms can be applied to generic meshed AC-DC electricity grids: by introducing a pseudo-impedance ω¯, which reflects the operational state of controllable elements and allows the PTDF matrix to be extended under the assumption that the current flow in the system is known. Basic properties from graph theory are used to solve for the pseudo-impedance as a function of the position within the network. This directly enables, e.g., HVDC lines to be considered in the MP and EBE algorithms. The extended methods are applied to a low-carbon European network model (PyPSA-EUR) with a spatial resolution of 181 nodes and an 18% transmission expansion compared to today's total transmission capacity volume. The allocations of MP and EBE show that countries with high wind potentials profit most from the transmission grid expansion.
Based on the average usage of transmission system expansion, a method of distributing operational and capital expenditures is proposed. In addition, it is shown how injections from renewable resources strongly drive country-to-country allocations and thus cross-border electricity flows.
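As an illustration of the PTDF-based allocation logic that MP and EBE build on, here is a minimal sketch on a hypothetical three-bus ring network in plain Python. It implements only the plain marginal-participation decomposition, not the paper's VIP reformulation or the pseudo-impedance extension:

```python
# Toy PTDF-based flow allocation in the spirit of Marginal
# Participation (MP): line flows are split into contributions of the
# individual net injections. Hypothetical three-bus ring network with
# unit reactances; bus 0 is the slack. This is a minimal sketch, not
# the paper's VIP / pseudo-impedance method.

lines = [(0, 1), (1, 2), (0, 2)]   # transmission lines, unit susceptance

# Reduced nodal susceptance matrix (slack bus 0 removed) for the
# triangle is B_red = [[2, -1], [-1, 2]]; its inverse is hand-coded.
b_inv = [[2 / 3, 1 / 3], [1 / 3, 2 / 3]]

def angles(injection):
    """Bus voltage angles for a balanced net injection vector."""
    p = injection[1:]                        # drop the slack bus
    th = [sum(b_inv[i][j] * p[j] for j in range(2)) for i in range(2)]
    return [0.0] + th                        # slack angle fixed to zero

def ptdf():
    """Flow response of every line to a unit injection at every bus
    (withdrawn at the slack bus) -- the PTDF matrix."""
    table = []
    for (i, j) in lines:
        row = []
        for n in range(3):
            e = [0.0] * 3
            e[n] = 1.0
            th = angles(e)
            row.append(th[i] - th[j])        # unit susceptance: flow = angle difference
        table.append(row)
    return table

H = ptdf()

# Balanced dispatch: generators at buses 0 and 1, load at bus 2.
p = [1.0, 0.5, -1.5]

flows = [sum(H[l][n] * p[n] for n in range(3)) for l in range(3)]

# MP-style allocation: the flow on line l is decomposed into the
# contributions H[l][n] * p[n] of the individual net injections.
alloc = [[H[l][n] * p[n] for n in range(3)] for l in range(3)]
```

By construction, the contributions on each line sum exactly to the physical flow. Note that the slack bus receives no allocation in this naive slack-referenced form; this is the kind of source/sink asymmetry that the shift parameter q in the paper is designed to tune.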
The Karl Schwarzschild Meeting 2017 (KSM2017) was the third instalment of the conference series dedicated to the great Frankfurt scientist, who derived the first black hole solution of Einstein's equations about 100 years ago.
The event was a five-day meeting on black holes, the AdS/CFT correspondence and gravitational physics. Like the two previous instalments, the conference attracted a stellar ensemble of participants from the world's most renowned institutions. The core of the meeting was a series of invited talks by eminent experts (keynote speakers), complemented by plenary research talks by students and junior speakers.
The conference photo and poster, sponsor and funding acknowledgments, committees, and list of participants are available in this PDF.
We have built quasi-equilibrium models for uniformly rotating quark stars in general relativity. The conformal flatness approximation is employed and the Compact Object CALculator (cocal) code is extended to treat rotating stars with surface density discontinuity. In addition to the widely used MIT bag model, we have considered a strangeon star equation of state (EoS), suggested by Lai and Xu, that is based on quark clustering and results in a stiff EoS. We have investigated the maximum mass of uniformly rotating axisymmetric quark stars. We have also built triaxially deformed solutions for extremely fast rotating quark stars and studied the possible gravitational wave emission from such configurations.
The steep rise of parton densities in the limit of small parton momentum fraction x poses a challenge for describing the observed energy dependence of the total and inelastic proton-proton cross sections σtot/inelpp: assuming a realistic parton spatial distribution, one obtains too strong an increase of σtot/inelpp at very high energies. We discuss various mechanisms which allow one to tame this rise, paying special attention to the role of parton-parton correlations. In addition, we investigate the potential impact on model predictions for σtotpp of dynamical higher-twist corrections to the parton-production process.
The global energy system is undergoing a major transition, and in energy planning and decision-making across governments, industry and academia, models play a crucial role. Because of their policy relevance and contested nature, the transparency and open availability of energy models and data are of particular importance. Here we provide a practical how-to guide based on the collective experience of members of the Open Energy Modelling Initiative (Openmod). We discuss key steps to consider when opening code and data, including determining intellectual property ownership, choosing a licence and appropriate modelling languages, distributing code and data, and providing support and building communities. After illustrating these decisions with examples and lessons learned from the community, we conclude that even though individual researchers' choices are important, institutional changes are still also necessary for more openness and transparency in energy research.
In the last decades, energy modelling has supported energy planning by offering insights into the dynamics between energy access, resource use, and sustainable development. Especially in recent years, there has been an attempt to strengthen the science-policy interface and increase the involvement of society in energy planning processes. This has, both in the EU and worldwide, led to the development of open-source and transparent energy modelling practices. This paper describes the role of an open-source energy modelling tool in the energy planning process and highlights its importance for society. Specifically, it describes the existence and characteristics of the relationship between developing an open-source, freely available tool and its application, dissemination and use for policy making. Using the example of the Open Source energy Modelling System (OSeMOSYS), this work focuses on practices that were established within the community and that made the framework's development and application both relevant and scientifically grounded. Keywords: Energy system modelling tool, Open-source software, Model-based public policy, Software development practice, Outreach practice
Python for Power System Analysis (PyPSA) is a free software toolbox for simulating and optimising modern electrical power systems over multiple periods. PyPSA includes models for conventional generators with unit commitment, variable renewable generation, storage units, coupling to other energy sectors, and mixed alternating and direct current networks. It is designed to be easily extensible and to scale well with large networks and long time series. In this paper the basic functionality of PyPSA is described, including the formulation of the full power flow equations and the multi-period optimisation of operation and investment with linear power flow equations. PyPSA is positioned in the existing free software landscape as a bridge between traditional power flow analysis tools for steady-state analysis and full multi-period energy system models. The functionality is demonstrated on two open datasets of the transmission system in Germany (based on SciGRID) and Europe (based on GridKit).
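The multi-period optimisation that PyPSA performs is a linear program; as a deliberately simplified stand-in (this is not PyPSA's actual API, and the generator data is hypothetical), a merit-order dispatch over several periods can be sketched as:

```python
# Deliberately simplified multi-period merit-order dispatch, to
# illustrate the kind of problem PyPSA solves as a linear program.
# Generator data is hypothetical; this is NOT the PyPSA API, and it
# ignores network constraints, storage and unit commitment.

generators = [                  # (name, capacity in MW, marginal cost in EUR/MWh)
    ("wind", 80.0, 0.0),
    ("coal", 100.0, 30.0),
    ("gas", 60.0, 60.0),
]

load = [120.0, 90.0, 200.0]     # demand per period in MW

def dispatch(demand):
    """Dispatch generators in merit order (cheapest first) for one period."""
    remaining = demand
    out = {}
    for name, cap, _cost in sorted(generators, key=lambda g: g[2]):
        out[name] = min(cap, remaining)
        remaining -= out[name]
    assert remaining <= 1e-9, "demand exceeds total capacity"
    return out

schedule = [dispatch(d) for d in load]

marginal_cost = {name: c for name, _cap, c in generators}
total_cost = sum(out[g] * marginal_cost[g] for out in schedule for g in out)
```

In PyPSA itself, the same logic is expressed declaratively (components added to a Network object, then optimised as an LP), with the network flow equations coupling the buses.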
In energy modelling, open data and open source code can help enhance the traceability and reproducibility of modelling exercises, which in turn helps to inform controversial debates and improve policy advice. While the availability of open power plant databases has increased in recent years, these databases often differ considerably from each other, and their data quality has not yet been systematically compared to proprietary sources. Here, we introduce the python-based 'powerplantmatching' (PPM), an open source toolset for cleaning, standardizing and combining multiple power plant databases. We apply it once with open databases only and once with an additional proprietary database, in order to discuss and elaborate the issue of data quality by analysing capacities, countries, fuel types, geographic coordinates and commissioning years for conventional power plants. We find that a derived dataset based purely on open data is not yet on a par with one in which a proprietary database has been added to the matching, even though the aggregate capacity statistics agree to a large degree between the two datasets. When commissioning years are needed for modelling purposes in the final dataset, the proprietary database is crucial for increasing the quality of the derived dataset.
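A minimal sketch of the matching task that powerplantmatching automates, using only the standard library and hypothetical records; the real toolset additionally compares countries, fuel types, coordinates and commissioning years:

```python
# Toy record linkage between two (hypothetical) power plant databases:
# pair entries by fuzzy name similarity plus a relative capacity
# tolerance. Pure stdlib sketch; powerplantmatching itself is far more
# elaborate (multi-database matching, fuel types, coordinates, years).
from difflib import SequenceMatcher

db_a = [("Niederaussem", 3430.0), ("Brokdorf", 1410.0), ("Emsland", 1336.0)]
db_b = [("Niederaußem", 3400.0), ("KKW Brokdorf", 1410.0), ("Moorburg", 1654.0)]

def similar(a, b):
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(db1, db2, name_thresh=0.6, cap_tol=0.1):
    """Pair records whose names are similar enough and whose
    capacities agree within a relative tolerance."""
    pairs = []
    for name1, cap1 in db1:
        name2, cap2 = max(db2, key=lambda rec: similar(name1, rec[0]))
        if similar(name1, name2) >= name_thresh and abs(cap1 - cap2) <= cap_tol * cap1:
            pairs.append((name1, name2))
    return pairs

matched = match(db_a, db_b)
```

In this toy run, "Niederaussem"/"Niederaußem" and "Brokdorf"/"KKW Brokdorf" are linked despite spelling variants, while "Emsland" finds no sufficiently similar counterpart, mirroring the partial overlap between real databases.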
Use-dependent long-term changes of neuronal response properties must be gated to prevent irrelevant activity from inducing inappropriate modifications. Here we test the hypothesis that local network dynamics contribute to such gating. As synaptic modifications depend on temporal contiguity between presynaptic and postsynaptic activity, we examined the effect of synchronized gamma (ɣ) oscillations on stimulation-dependent modifications of orientation selectivity in adult cat visual cortex. Changes of orientation maps were induced by pairing visual stimulation with electrical activation of the mesencephalic reticular formation. Changes in orientation selectivity were assessed with optical recording of intrinsic signals and multiunit recordings. When conditioning stimuli were associated with strong ɣ-oscillations, orientation domains matching the orientation of the conditioning grating stimulus became more responsive and expanded, because neurons with preferences differing by less than 30° from the orientation of the conditioning grating shifted their orientation preference toward the conditioned orientation. When conditioning stimuli induced no or only weak ɣ-oscillations, responsiveness of neurons driven by the conditioning stimulus decreased. These differential effects depended on the power of oscillations in the low ɣ-band (20 Hz to 48 Hz) and not on differences in discharge rate of cortical neurons, because there was no correlation between the discharge rates during conditioning and the occurrence of changes in orientation preference. Thus, occurrence and polarity of use-dependent long-term changes of cortical response properties appear to depend on the occurrence of ɣ-oscillations during induction and hence on the degree of temporal coherence of the change-inducing network activity.
An incoming or outgoing hadron in a hard collision with large momentum transfer gets squeezed in the direction transverse to its momentum. In the case of nuclear targets, this leads to a reduced interaction of such hadrons with the surrounding nucleons, which is known as color transparency (CT). The identification of CT in exclusive processes on nuclear targets is of significant interest not only in itself but also because CT is a necessary condition for the applicability of factorization to the description of the corresponding elementary process. In this paper we discuss the semiexclusive processes A(e,e′π+), A(π−,l−l+) and A(γ,π−p). Since CT is closely related to the hadron formation mechanism, the reduced interaction of 'pre-hadrons' with nucleons is a common feature of generic high-energy inclusive processes on nuclear targets, such as hadron attenuation in deep inelastic scattering (DIS). We discuss a novel way to study hadron formation via slow-neutron production induced by a hard photon interaction with a nucleus. Finally, the opportunity to study hadron formation effects in heavy-ion collisions in the NICA regime is considered.
Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
(2019)
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. Yet, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma-responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
PURPOSE: The purpose of this work is to analyze whether the Monte Carlo codes penh, fluka, and geant4/topas are suitable to calculate absorbed doses and fQ/fQ0 ratios in therapeutic high-energy photon and proton beams.
METHODS: We used penh, fluka, geant4/topas, and egsnrc to calculate the absorbed dose to water in a reference water cavity and the absorbed dose to air in two air cavities representative of a plane-parallel and a cylindrical ionization chamber in a 1.25 MeV photon beam and a 150 MeV proton beam; egsnrc was only used for the photon beam calculations. The physics and transport settings in each code were adjusted to simulate the particle transport in as much detail as reasonably possible. From these absorbed doses, fQ0 factors, fQ factors, and fQ/fQ0 ratios (which are the basis of Monte Carlo calculated beam quality correction factors kQ,Q0) were calculated and compared between the codes. Additionally, we calculated the spectra of primary particles and secondary electrons in the reference water cavity, as well as the integrated depth-dose curve of 150 MeV protons in water.
RESULTS: The absorbed doses agreed within 1.4% or better between the individual codes for both the photon and proton simulations. The fQ0 and fQ factors agreed within 0.5% or better for the individual codes for both beam qualities. The resulting fQ/fQ0 ratios for 150 MeV protons agreed within 0.7% or better. For the 1.25 MeV photon beam, the spectra of photons and secondary electrons agreed almost perfectly. For the 150 MeV proton simulation, we observed differences in the spectra of secondary protons whereas the spectra of primary protons and low-energy delta electrons also agreed almost perfectly. The first 2 mm of the entrance channel of the 150 MeV proton Bragg curve agreed almost perfectly while for greater depths, the differences in the integrated dose were up to 1.5%.
CONCLUSION: penh, fluka, and geant4/topas are capable of calculating beam quality correction factors in proton beams. The differences in the fQ0 and fQ factors between the codes are 0.5% at maximum. The differences in the fQ/fQ0 ratios are 0.7% at maximum.
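The chain from Monte Carlo absorbed doses to the beam quality correction factor described in the methods can be made explicit. The dose values below are hypothetical placeholders; only the arithmetic of the f-factor chain is taken from the text:

```python
# Sketch of how beam quality correction factors are assembled from
# Monte Carlo absorbed doses: f = D_water / D_air per beam quality,
# and kQ,Q0 is based on the ratio fQ / fQ0. All dose values below are
# hypothetical, for illustration only.

def f_factor(d_water, d_air):
    """Ratio of absorbed dose to water to absorbed dose to air in the
    chamber cavity, for one beam quality."""
    return d_water / d_air

# Hypothetical Monte Carlo doses (Gy per source particle):
f_q0 = f_factor(d_water=3.50e-11, d_air=3.15e-11)  # reference quality Q0 (1.25 MeV photons)
f_q = f_factor(d_water=5.90e-11, d_air=5.28e-11)   # quality Q (150 MeV protons)

# Basis of the Monte Carlo calculated beam quality correction factor:
k_q_q0 = f_q / f_q0
```

With code-to-code agreement of 0.5% on the f factors, the propagated spread on such fQ/fQ0 ratios is consistent with the 0.7% maximum difference reported in the results.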
An overt pro-inflammatory immune response is a key factor contributing to lethal pneumococcal infection in an influenza pre-infected host and represents a potential target for therapeutic intervention. However, there is a paucity of knowledge about the level of contribution of individual cytokines. Based on the predictions of our previous mathematical modeling approach, the potential benefit of IFN-γ- and/or IL-6-specific antibody-mediated cytokine neutralization was explored in C57BL/6 mice infected with the influenza A/PR/8/34 strain, which were subsequently infected with the Streptococcus pneumoniae strain TIGR4 on day 7 post influenza. While single IL-6 neutralization had no effect on respiratory bacterial clearance, single IFN-γ neutralization enhanced local bacterial clearance in the lungs. Concomitant neutralization of IFN-γ and IL-6 significantly reduced the degree of pneumonia as well as bacteremia compared to the control group, indicating a positive effect for the host during secondary bacterial infection. The results of our model-driven experimental study reveal that the predicted therapeutic value of IFN-γ and IL-6 neutralization in secondary pneumococcal infection following influenza infection is tightly dependent on the experimental protocol while at the same time paving the way toward the development of effective immune therapies.
Classical Hodgkin lymphoma (cHL) is one of the most common malignant lymphomas in Western Europe. The nodular sclerosing subtype of cHL (NS cHL) is characterized by a proliferation of fibroblasts in the tumor microenvironment, leading to fibrotic bands surrounding the lymphoma infiltrate. Several studies have described a crosstalk between the tumour cells of cHL, the Hodgkin- and Reed-Sternberg (HRS) cells, and cancer-associated fibroblasts. However, to date a deep molecular characterization of these fibroblasts is lacking. Thus, the aim of the present study is a comprehensive characterization of these fibroblasts. Gene expression profiling and methylation profiles of fibroblasts isolated from primary lymph node suspensions revealed persistent differences between fibroblasts obtained from NS cHL and lymphadenitis. NS cHL derived fibroblasts exhibit a myofibroblastic phenotype characterized by myocardin (MYOCD) expression. Moreover, TIMP3, an inhibitor of matrix metalloproteinases, was strongly upregulated in NS cHL fibroblasts, likely contributing to the accumulation of collagen in sclerotic bands of NS cHL. As previously shown for other types of cancer-associated fibroblasts, treatment by luteolin could reverse this fibroblast phenotype and decrease TIMP3 secretion. NS cHL fibroblasts showed enhanced proliferation when they were exposed to soluble factors released from HRS cells. For HRS cells, soluble factors from fibroblasts were not sufficient to protect them from Brentuximab-Vedotin induced cell death. However, HRS cells adherent to fibroblasts were protected from Brentuximab-Vedotin induced injury. In summary, we confirm the importance of fibroblasts for HRS cell survival and identify TIMP3 which probably contributes as a major factor to the typical fibrosis observed in NS cHL.
Gravitational waves, electromagnetic radiation, and the emission of high-energy particles probe the phase structure of the equation of state of dense matter produced at the crossroads of the closely related relativistic collisions of heavy ions and mergers of binary neutron stars. 3+1-dimensional special- and general-relativistic hydrodynamic simulation studies reveal a unique window of opportunity to observe phase transitions in compressed baryonic matter, both in laboratory-based experiments and in astrophysical multimessenger observations. This article focuses on the astrophysical consequences of a hadron-quark phase transition in the interior of a compact star. In particular, a future detection of the post-merger gravitational wave emission emanating from a binary neutron star merger event would make it possible to explore the phase structure of quantum chromodynamics. The astrophysical observables of a hadron-quark phase transition in a single compact star system and in the binary hybrid star merger scenario are summarized. The FAIR facility at GSI Helmholtzzentrum allows one to study the universe in the laboratory: several astrophysical signatures of the quark-gluon plasma have been found in relativistic collisions of heavy ions and will be explored in future experiments.
The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed the whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T of chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of its relative connection probability between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to areas in the frontal, limbic, and subcortical areas. These findings are in line with previous studies that reported abnormalities in striatal–frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder.
Synesthesia is a phenomenon in which additional perceptual experiences are elicited by sensory stimuli or cognitive concepts. Synesthetes possess a unique type of phenomenal experiences not directly triggered by sensory stimulation. Therefore, for better understanding of consciousness it is relevant to identify the mental and physiological processes that subserve synesthetic experience. In the present work we suggest several reasons why synesthesia has merit for research on consciousness. We first review the research on the dynamic and rapidly growing field of the studies of synesthesia. We particularly draw attention to the role of semantics in synesthesia, which is important for establishing synesthetic associations in the brain. We then propose that the interplay between semantics and sensory input in synesthesia can be helpful for the study of the neural correlates of consciousness, especially when making use of ambiguous stimuli for inducing synesthesia. Finally, synesthesia-related alterations of brain networks and functional connectivity can be of merit for the study of consciousness.
Following a brief review of current efforts to identify the neuronal correlates of conscious processing (NCCP), an attempt is made to bridge the gap between the material neuronal processes and the immaterial dimensions of subjective experience. It is argued that this "hard problem" of consciousness research cannot be solved by considering only the neuronal underpinnings of cognition. The proposal is that the hard problem can be treated within a naturalistic framework if one considers not only the biological but also the socio-cultural dimensions of evolution. The argument is based on the following premises: perceptions are the result of a constructivist process that depends on priors. This applies both to perceptions of the outer world and to the perception of oneself. Social interactions between agents endowed with the cognitive abilities of humans generated immaterial realities, addressed as social or cultural realities. This novel class of realities assumed the role of priors for the perception of oneself and of the embedding world. A natural consequence of these extended perceptions is a dualist classification of observables into material and immaterial phenomena, nurturing the concept of ontological substance dualism. It is argued that perceptions shaped by socio-cultural priors lead to the construction of a self-model that has both a material and an immaterial dimension. As priors are implicit and not amenable to conscious recollection, the perceived immaterial dimension is experienced as veridical and not derivable from material processes, which is the hallmark of the hard problem. These considerations let the hard problem appear as the result of cognitive constructs that are amenable to naturalistic explanations in an evolutionary framework.
Simulating Many Accelerated Strongly-interacting Hadrons (SMASH) is a new hadronic transport approach designed to describe the non-equilibrium evolution of heavy-ion collisions. The production of strange particles in such systems is enhanced compared to elementary reactions (Blume and Markert 2011), providing an interesting signal to study. Two different strangeness production mechanisms are discussed: one based on resonances and another using forced canonical thermalization. Comparisons to experimental data from elementary collisions are shown.
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches, and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.
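A standard toy example behind the PID framework is the XOR gate, in which neither source alone carries any information about the target, yet the two together determine it fully, so the one bit of joint information is purely synergistic. A small calculation (plain Python, equally likely inputs) makes this concrete:

```python
# Toy calculation motivating the PID framework: for Y = X1 XOR X2 with
# uniform, independent input bits, I(X1;Y) = I(X2;Y) = 0 while
# I(X1,X2;Y) = 1 bit, so the joint information is entirely synergistic.
from collections import Counter
from itertools import product
from math import log2

samples = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]

def mutual_information(pairs):
    """I(A;B) in bits from a list of equally likely (a, b) pairs."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

mi_1 = mutual_information([(x1, y) for x1, _, y in samples])          # I(X1;Y)
mi_2 = mutual_information([(x2, y) for _, x2, y in samples])          # I(X2;Y)
mi_12 = mutual_information([((x1, x2), y) for x1, x2, y in samples])  # I(X1,X2;Y)
```

Under the original Williams-Beer decomposition, the XOR example yields zero redundant and zero unique information, with the full bit assigned to the synergistic term.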
Top-down influences on ambiguous perception: the role of stable and transient states of the observer
(2014)
The world as it appears to the viewer is the result of a complex process of inference performed by the brain. The validity of this apparently counter-intuitive assertion becomes evident whenever we face noisy, feeble or ambiguous visual stimulation: in these conditions, the state of the observer may play a decisive role in determining what is currently perceived. On this background, ambiguous perception and its amenability to top-down influences can be employed as an empirical paradigm to explore the principles of perception. Here we offer an overview of both classical and recent contributions on how stable and transient states of the observer can impact ambiguous perception. As to the influence of the stable states of the observer, we show that what is currently perceived can be influenced (1) by cognitive and affective aspects, such as meaning, prior knowledge, motivation, and emotional content; (2) by individual differences, such as gender, handedness, genetic inheritance, clinical conditions, and personality traits; and (3) by learning and conditioning. As to the impact of transient states of the observer, we outline the effects of (4) attention and (5) voluntary control, which have attracted much empirical work throughout the history of ambiguous perception. Within the large literature on the topic, we trace a distinction between the observer's ability to control dominance (i.e., the maintenance of a specific percept in visual awareness) and reversal rate (i.e., the switching between two alternative percepts). Other transient states of the observer that have more recently drawn researchers' attention concern (6) the effects of imagery and visual working memory, as well as (7) the transient effects of prior history of perceptual dominance. (8) Finally, we address the currently available computational models of ambiguous perception and how they can take into account the crucial share played by the state of the observer in perceiving ambiguous displays.
Aims: The examination of histological sections is still the gold standard in diagnostic pathology. Important histopathological diagnostic criteria are nuclear shapes and chromatin distribution as well as nucleus-cytoplasm relation and immunohistochemical properties of surface and intracellular proteins. The aim of this investigation was to evaluate the benefits and drawbacks of three-dimensional imaging of CD30+ cells in classical Hodgkin Lymphoma (cHL) in comparison to CD30+ lymphoid cells in reactive lymphoid tissues.
Materials and results: Using immunofluorescence confocal microscopy and computer-based analysis, we compared CD30+ neoplastic cells in Nodular Sclerosis cHL (NScHL) and Mixed Cellularity cHL (MCcHL) with reactive CD30+ cells in Adenoids (AD) and Lymphadenitis (LAD). We confirmed that the percentage of CD30+ cell volume can be calculated. The amount in lymphadenitis was approx. 1.5%, in adenoids around 2%, and in MCcHL up to 4.5%, whereas the values for NScHL rose to more than 8% of the total cell cytoplasm. In addition, CD30+ tumour cells (HRS cells) in cHL had larger volumes and more protrusions compared to CD30+ reactive cells. Furthermore, the formation of large cell networks turned out to be a typical characteristic of NScHL.
Conclusion: In contrast to 2D histology, 3D laser scanning offers a visualisation of complete cells, their network interactions and their spatial distribution in the tissue. The possibility to differentiate cells with regard to volume, surface, shape, and cluster formation opens a new perspective on further diagnostic and biological questions. 3D imaging also provides an increased amount of information as a basis for bioinformatic calculations.
Volatility is a widely recognized measure of market risk. As volatility is not directly observed, it has to be estimated from market prices, i.e., as the implied volatility from option prices. The volatility index VIX, which makes volatility a tradeable asset in its own right, is computed from near- and next-term put and call options on the S&P 500 with more than 23 and less than 37 days to expiration and non-vanishing bid. In the present paper we quantify the information content of the constituents of the VIX about the volatility of the S&P 500 in terms of the Fisher information matrix. Assuming that observed option prices are centered on the theoretical price provided by Heston's model and perturbed by additive Gaussian noise, we relate their Fisher information matrix to the Greeks in the Heston model. We find that the prices of options contained in the VIX basket allow for reliable estimates of the volatility of the S&P 500 with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value of roughly 3%, inferences from option prices become imprecise because Vega, the derivative of a European option with respect to volatility, and thereby the Fisher information, nearly vanish.
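For additive Gaussian observation noise, the Fisher information about volatility reduces to a sum of squared Vegas divided by the noise variance, I(v) = σ_ε⁻² Σᵢ (∂Cᵢ/∂v)². A minimal sketch of this relation, substituting the closed-form Black-Scholes Vega for the Heston Greeks used in the paper (the strikes, maturity, rate, and noise level below are illustrative assumptions):

```python
import math

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes Vega dC/dsigma (stand-in for the Heston Greeks)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    pdf = math.exp(-0.5 * d1**2) / math.sqrt(2 * math.pi)
    return S * pdf * math.sqrt(T)

def fisher_information(S, strikes, T, r, sigma, noise_sd):
    """Fisher information about sigma from option prices observed with
    additive Gaussian noise: I = sum_i Vega_i^2 / noise_sd^2."""
    return sum(bs_vega(S, K, T, r, sigma)**2 for K in strikes) / noise_sd**2

strikes = [90, 95, 100, 105, 110]          # hypothetical basket
hi = fisher_information(100, strikes, 30 / 365, 0.01, 0.20, 0.05)
lo = fisher_information(100, strikes, 30 / 365, 0.01, 0.03, 0.05)
print(hi > lo)  # at low volatility the summed information shrinks
```

At low volatility only the at-the-money option retains a non-negligible Vega, so the summed information collapses, mirroring the imprecision reported below roughly 3% volatility.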
A hypothesis regarding the development of imitation learning is presented that is rooted in intrinsic motivations. It is derived from a recently proposed form of intrinsically motivated learning (IML) for efficient coding in active perception, wherein an agent learns to perform actions with its sense organs to facilitate efficient encoding of the sensory data. To this end, actions of the sense organs that improve the encoding of the sensory data trigger an internally generated reinforcement signal. Here it is argued that the same IML mechanism might also support the development of imitation when general actions beyond those of the sense organs are considered: The learner first observes a tutor performing a behavior and learns a model of the behavior's sensory consequences. The learner then acts itself and receives an internally generated reinforcement signal reflecting how well the sensory consequences of its own behavior are encoded by the sensory model. Actions that are more similar to those of the tutor will lead to sensory signals that are easier to encode and produce a higher reinforcement signal. Through this, the learner's behavior is progressively tuned to make the sensory consequences of its actions match the learned sensory model. I discuss this mechanism in the context of human language acquisition and bird song learning, where similar ideas have been proposed. The suggested mechanism also offers an account for the development of mirror neurons and makes a number of predictions. Overall, it establishes a connection between principles of efficient coding, intrinsic motivations and imitation.
Variable renewable energy sources (VRES), such as solar photovoltaic (PV) and wind turbines (WT), are starting to play a significant role in several energy systems around the globe. To overcome the problem of their non-dispatchable and stochastic nature, several approaches have been proposed so far. This paper describes a novel mathematical model for scheduling the operation of a wind-powered pumped-storage hydroelectricity (PSH) hybrid for 25 to 48 h ahead. The model is based on mathematical programming and wind speed forecasts for the next 1 to 24 h, along with the predicted upper reservoir occupancy for the 24th hour ahead. The results indicate that by coupling a 2-MW conventional wind turbine with a PSH of energy storage capacity equal to 54 MWh it is possible to significantly reduce the intraday energy generation coefficient of variation, from 31% for a pure wind turbine to 1.15% for a wind-powered PSH. The scheduling errors calculated based on the mean absolute percentage error (MAPE) are significantly smaller for such a coupling than those seen for wind generation forecasts, at 2.39% and 27%, respectively. This is emphasized even more strongly by the fact that the errors for wind generation were calculated for forecasts made for the next 1 to 24 h, while those for scheduled generation were calculated for forecasts made for the next 25 to 48 h. The results clearly show that the proposed scheduling approach ensures the high reliability of the WT–PSH energy source.
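The two headline statistics, the intraday coefficient of variation of generation and the MAPE of the schedule, are straightforward to compute. A minimal sketch with hypothetical hourly outputs (the numbers are illustrative, not the paper's data):

```python
def coefficient_of_variation(values):
    """Population std / mean of a generation profile, in percent."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return 100 * var ** 0.5 / mean

def mape(actual, forecast):
    """Mean absolute percentage error between actual and scheduled output."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

# hypothetical hourly outputs (MW): raw wind vs. its PSH-smoothed schedule
actual   = [1.0, 1.10, 0.90, 1.0, 1.05]
schedule = [1.0, 1.05, 0.95, 1.0, 1.00]
print(round(mape(actual, schedule), 2))  # → 2.97
```

A perfectly flat profile gives a coefficient of variation of zero, which is the limit the PSH coupling pushes the hybrid's output towards.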
In self-organized critical (SOC) systems, avalanche size distributions follow power laws. Power laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy.
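The avalanche definition used in such analyses (after temporal binning, a maximal run of non-empty bins) and a simple branching-parameter estimate can be sketched as follows. The bin counts and the ratio-based estimator are illustrative assumptions, not the paper's exact pipeline:

```python
def avalanche_sizes(spike_counts):
    """Extract avalanche sizes from binned spike counts: an avalanche is a
    maximal run of non-empty bins, its size the total spike count in the run."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

def branching_parameter(spike_counts):
    """Estimate sigma as the mean ratio of activity in bin t+1 to bin t,
    over bins with non-zero activity (a simple estimator; an assumption here)."""
    ratios = [b / a for a, b in zip(spike_counts, spike_counts[1:]) if a > 0]
    return sum(ratios) / len(ratios)

counts = [0, 2, 3, 1, 0, 0, 1, 0, 4, 2, 0]      # hypothetical binned spikes
print(avalanche_sizes(counts))                  # → [6, 1, 6]
sigma = branching_parameter(counts)             # sigma < 1: sub-critical
```

A branching parameter slightly below one corresponds to the regime described above, in which one spike on average triggers a little less than one spike in the next step.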
Anaplastic large cell lymphoma (ALCL) and classical Hodgkin lymphoma (cHL) are lymphomas that contain CD30-expressing tumor cells and have numerous pathological similarities. Whereas ALCL is usually diagnosed at an advanced stage, cHL more frequently presents with localized disease. The aim of the present study was to elucidate the mechanisms underlying the different clinical presentation of ALCL and cHL. Chemokine and chemokine receptor expression were similar in primary ALCL and cHL cases, apart from the known overexpression of the chemokines CCL17 and CCL22 in the Hodgkin and Reed-Sternberg (HRS) cells of cHL. Consistent with the overexpression of these chemokines, primary cHL cases exhibited a significantly denser T cell microenvironment than ALCL. In addition to differences in the interaction with their microenvironment, cHL cell lines showed lower and less efficient intrinsic cell motility than ALCL cell lines, as assessed by time-lapse microscopy in a collagen gel and by transwell migration assays. We thus propose that the combination of impaired basal cell motility and differences in the interaction with the microenvironment hamper the dissemination of HRS cells in cHL when compared with the tumor cells of ALCL.
We present a model for the autonomous and simultaneous learning of active binocular and motion vision. The model is based on the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model learns, via sparse coding, how to efficiently encode the incoming visual signals generated by an object moving in 3-D. Simultaneously, it learns how to produce eye movements that further improve the efficiency of the sensory coding. This learning is driven by an intrinsic motivation to maximize the system's coding efficiency. We test our approach on the humanoid robot iCub using simulations. The model demonstrates self-calibration of accurate object fixation and tracking of moving objects. Our results show that the model keeps improving until it hits physical constraints such as camera or motor resolution, or limits on its internal coding capacity. Furthermore, we show that the emerging sensory tuning properties are in line with results on disparity, motion, and motion-in-depth tuning in the visual cortex of mammals. The model suggests that vergence and tracking eye movements can be viewed as fundamentally having the same objective of maximizing the coding efficiency of the visual system and that they can be learned and calibrated jointly through AEC.
We investigate charmonium production in Pb + Pb collisions at the LHC beam energy Elab = 2.76A TeV in a fixed-target experiment (√sNN = 72 GeV). Within the framework of a transport approach including cold and hot nuclear matter effects on charmonium evolution, we focus on the antishadowing effect on the nuclear modification factors RAA and rAA for the J/ψ yield and transverse momentum. The yield is more suppressed at less forward rapidity (ylab ≃ 2) than at very forward rapidity (ylab ≃ 4) due to the shadowing and antishadowing in different rapidity bins.
Physics at its core is an experimental pursuit. If one theory does not agree with experimental results, then the theory is wrong. However, it is becoming harder and harder to directly test some theories of fundamental physics at the high energy/small distance frontier exactly because this frontier is becoming technologically harder to reach. The Large Hadron Collider is getting near the limit of what we can do with present accelerator technology in terms of directly reaching the energy frontier. The motivation for this special issue was to try to collect together ideas and potential approaches to experimentally probe some of our ideas about physics at the high energy/small distance frontier. Some of the papers in this special issue directly deal with the question of what happens to spacetime at small distance scales. In the paper by A. Aurilia and E. Spallucci a picture of quantum spacetime is given based on the effects of ultrahigh velocity length contractions on the structure of the spacetime. The work of P. Nicolini et al. further pursues the idea that spacetime has a minimal length. The consequences of this minimal length are investigated in terms of the effects it would have on the gravitational collapse of a star to form a black hole. In the article by G. Amelino-Camelia et al. the quantum structure of spacetime is studied through the Fermi LAT data on the Gamma Ray Burst GRB130427A. The article by S. Hossenfelder addresses the question of whether spacetime is fundamentally continuous or discrete and postulates that, in the case when spacetime is discrete, it might have defects which would have important observational consequences. ...
This paper studies the geometry and the thermodynamics of a holographic screen in the framework of ultraviolet self-complete quantum gravity. To achieve this goal we construct a new static, neutral, nonrotating black hole metric, whose outer (event) horizon coincides with the surface of the screen. The spacetime admits an extremal configuration corresponding to the minimal holographic screen, with both mass and radius equal to the Planck units. We identify this object as the spacetime fundamental building block, whose interior is physically inaccessible and cannot be probed even during the terminal phase of Hawking evaporation. In agreement with the holographic principle, relevant processes take place on the screen surface. The area quantization leads to a discrete mass spectrum. An analysis of the entropy shows that the minimal holographic screen can store only one byte of information, while in the thermodynamic limit the area law is corrected by a logarithmic term.
The 2D azimuth and rapidity structure of the two-particle correlations in relativistic A+A collisions is altered significantly by the presence of sharp inhomogeneities in the superdense matter formed in such processes. The causality constraints force one to associate the long-range longitudinal correlations observed in a narrow angular interval, the so-called (soft) ridge, with peculiarities of the initial conditions of the collision process. This study's objective is to analyze whether multiform initial tubular structures, undergoing the subsequent hydrodynamic evolution and gradual decoupling, can form the soft ridges. Motivated by the flux-tube scenarios, the initial energy density distribution contains different numbers of high-density tube-like boost-invariant inclusions that form a bumpy structure in the transverse plane. The influence of various structures of such initial conditions in the most central A+A events on the collective evolution of matter, the resulting spectra, angular particle correlations and vn coefficients is studied in the framework of the hydrokinetic model (HKM).
A theoretical review of the latest femtoscopy results for the systems created in ultrarelativistic A+A, p+p, and p+Pb collisions is presented. The basic model, which allows one to describe the interferometry data at SPS, RHIC, and LHC, is the hydrokinetic model. The model allows one to avoid the principal problem of the particlization of the medium at non-space-like sites of transition hypersurfaces and to switch to a hadronic cascade at a space-like hypersurface with a nonequilibrated particle input. The results for pion and kaon interferometry scales in Pb+Pb and Au+Au collisions at LHC and RHIC are presented for different centralities. The new theoretical results on the femtoscopy of small sources with sizes of 1-2 fm or less are discussed. The uncertainty principle undermines the standard approach of completely chaotic sources: the emitters in such sources cannot radiate independently and incoherently. As a result, the observed femtoscopy scales are reduced, and the Bose-Einstein correlation function is suppressed. The results are applied to the femtoscopy analysis of p+p collisions at the √s=7 TeV LHC energy and p+Pb ones at √s=5.02 TeV. The dependence of the corresponding interferometry volumes on multiplicity is compared with what happens in central A+A collisions. In addition, the nonfemtoscopic two-pion correlations in proton-proton collisions at the LHC energies are considered, and a simple model that takes into account correlations induced by the conservation laws and minijets is analyzed.
The production of K∗(892)0 and ϕ(1020) mesons has been measured in p–Pb collisions at √sNN = 5.02 TeV. K∗0 and ϕ are reconstructed via their decay into charged hadrons with the ALICE detector in the rapidity range - 0.5 < y < 0. The transverse momentum spectra, measured as a function of the multiplicity, have a pT range from 0 to 15 GeV/c for K∗0 and from 0.3 to 21 GeV/c for ϕ. Integrated yields, mean transverse momenta and particle ratios are reported and compared with results in pp collisions at √s = 7 TeV and Pb–Pb collisions at √sNN = 2.76 TeV. In Pb–Pb and p–Pb collisions, K∗0 and ϕ probe the hadronic phase of the system and contribute to the study of particle formation mechanisms by comparison with other identified hadrons. For this purpose, the mean transverse momenta and the differential proton-to-ϕ ratio are discussed as a function of the multiplicity of the event. The short-lived K∗0 is measured to investigate re-scattering effects, believed to be related to the size of the system and to the lifetime of the hadronic phase.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
Spatial neuronal synchronization and the waveform of oscillations : implications for EEG and MEG
(2019)
Neuronal oscillations are ubiquitous in the human brain and are implicated in virtually all brain functions. Although they can be described by a prominent peak in the power spectrum, their waveform is not necessarily sinusoidal and shows rather complex morphology. Both frequency and temporal descriptions of such non-sinusoidal neuronal oscillations can be utilized. However, in non-invasive EEG/MEG recordings the waveform of oscillations often takes a sinusoidal shape which in turn leads to a rather oversimplified view on oscillatory processes. In this study, we show in simulations how spatial synchronization can mask non-sinusoidal features of the underlying rhythmic neuronal processes. Consequently, the degree of non-sinusoidality can serve as a measure of spatial synchronization. To confirm this empirically, we show that a mixture of EEG components is indeed associated with more sinusoidal oscillations compared to the waveform of oscillations in each constituent component. Using simulations, we also show that the spatial mixing of the non-sinusoidal neuronal signals strongly affects the amplitude ratio of the spectral harmonics constituting the waveform. Finally, our simulations show how spatial mixing can affect the strength and even the direction of the amplitude coupling between constituent neuronal harmonics at different frequencies. Validating these simulations, we also demonstrate these effects in real EEG recordings. Our findings have far reaching implications for the neurophysiological interpretation of spectral profiles, cross-frequency interactions, as well as for the unequivocal determination of oscillatory phase.
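The masking effect described above can be reproduced in a toy simulation: each source is a non-sinusoidal oscillation (fundamental plus a phase-locked second harmonic); when sources are mixed with imperfect phase synchronization, the harmonic phases spread twice as much as the fundamental phases and cancel more strongly, so the mixture looks more sinusoidal. A sketch under these assumptions (the jitter level, harmonic amplitude, and source count are arbitrary choices, not parameters from the study):

```python
import math
import random

def harmonic_ratio(signal, f, fs):
    """Amplitude ratio of the 2nd harmonic to the fundamental,
    via Fourier coefficients at f and 2*f."""
    n = len(signal)
    def amp(freq):
        re = sum(s * math.cos(2 * math.pi * freq * t / fs) for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * freq * t / fs) for t, s in enumerate(signal))
        return math.hypot(re, im) / n
    return amp(2 * f) / amp(f)

random.seed(0)
fs, f, n = 1000, 10, 1000  # 1 s of a 10 Hz rhythm at 1 kHz sampling

def source(phase):
    # non-sinusoidal oscillation: fundamental plus a harmonic locked to it
    return [math.sin(2 * math.pi * f * t / fs + phase)
            + 0.5 * math.sin(4 * math.pi * f * t / fs + 2 * phase)
            for t in range(n)]

single = source(0.0)
# imperfectly synchronized sources: fundamental phases jitter by phi,
# so harmonic phases jitter by 2*phi and cancel more in the mixture
mixture = [sum(vals) for vals in
           zip(*[source(random.gauss(0, 1.0)) for _ in range(50)])]
print(harmonic_ratio(single, f, fs), harmonic_ratio(mixture, f, fs))
```

The drop in the harmonic-to-fundamental ratio of the mixture is the sense in which spatial mixing renders the recorded waveform more sinusoidal than its constituent sources.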
The Gribov mode in hot QCD
(2017)
In this thesis I investigate the possibility that at the smallest length scale (the Planck scale) the very notion of "dimension" needs to be revisited. Due to quantum effects, spacetime might become very turbulent at these scales and properties like those of fractals might emerge, including a scale-dependent dimension. It seems that this "spontaneous dimensional reduction" and the appearance of a minimal physical length are very general effects that most approaches to quantum gravity share. The main emphasis is given to the "spectral dimension" and its calculation for strings and p-branes.