Frankfurt Institute for Advanced Studies (FIAS)
Simulating Many Accelerated Strongly-interacting Hadrons (SMASH) is a new hadronic transport approach designed to describe the non-equilibrium evolution of heavy-ion collisions. The production of strange particles in such systems is enhanced compared to elementary reactions (Blume and Markert 2011), providing an interesting signal to study. Two different strangeness production mechanisms are discussed: one based on resonances and another using forced canonical thermalization. Comparisons to experimental data from elementary collisions are shown.
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique, and synergistic (or complementary) components of the mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” in Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted, and how they have been applied in empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches; and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.
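To make the decomposition concrete, here is a hedged worked example (not taken from any article in the issue): the binary XOR gate, the canonical case of purely synergistic information, evaluated with the original Williams-Beer I_min redundancy measure. The distribution, variable names, and helper functions are invented for illustration.

```python
import numpy as np
from itertools import product

# Joint distribution p(x1, x2, y) for y = x1 XOR x2 with uniform inputs:
# the canonical example of purely synergistic information.
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def marginal(axes):
    """Marginalize the joint distribution onto the given axes (0=x1, 1=x2, 2=y)."""
    out = {}
    for outcome, prob in p.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + prob
    return out

def specific_info(src):
    """Williams-Beer specific information I(Y=y; X_src) for every value y."""
    pxy, px, py = marginal((src, 2)), marginal((src,)), marginal((2,))
    si = {}
    for (x, y), pj in pxy.items():
        si[y] = si.get(y, 0.0) + (pj / py[(y,)]) * np.log2(pj / (px[(x,)] * py[(y,)]))
    return si

# I_min redundancy: the expected minimum specific information over sources.
si1, si2 = specific_info(0), specific_info(1)
py = marginal((2,))
redundancy = sum(py[y] * min(si1[y[0]], si2[y[0]]) for y in py)

# For XOR, I(Y;X1) = I(Y;X2) = 0 while I(Y;X1,X2) = 1 bit, so the full
# decomposition is: redundancy = 0, unique = 0 for each source, synergy = 1 bit.
print(f"I_min redundancy = {redundancy:.3f} bits")  # -> 0.000
```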
Top-down influences on ambiguous perception: the role of stable and transient states of the observer
(2014)
The world as it appears to the viewer is the result of a complex process of inference performed by the brain. The validity of this apparently counter-intuitive assertion becomes evident whenever we face noisy, feeble or ambiguous visual stimulation: in these conditions, the state of the observer may play a decisive role in determining what is currently perceived. Against this background, ambiguous perception and its amenability to top-down influences can be employed as an empirical paradigm to explore the principles of perception. Here we offer an overview of both classical and recent contributions on how stable and transient states of the observer can impact ambiguous perception. As to the influence of the stable states of the observer, we show that what is currently perceived can be influenced (1) by cognitive and affective aspects, such as meaning, prior knowledge, motivation, and emotional content, (2) by individual differences, such as gender, handedness, genetic inheritance, clinical conditions, and personality traits, and (3) by learning and conditioning. As to the impact of transient states of the observer, we outline the effects of (4) attention and (5) voluntary control, which have attracted much empirical work throughout the history of ambiguous perception. Within the vast literature on the topic we draw a distinction between the observer's ability to control dominance (i.e., the maintenance of a specific percept in visual awareness) and reversal rate (i.e., the switching between two alternative percepts). Other transient states of the observer that have more recently drawn researchers' attention concern (6) the effects of imagery and visual working memory. (7) Furthermore, we describe the transient effects of prior history of perceptual dominance. (8) Finally, we address the currently available computational models of ambiguous perception and how they can take into account the crucial role played by the state of the observer in perceiving ambiguous displays.
Aims: The examination of histological sections is still the gold standard in diagnostic pathology. Important histopathological diagnostic criteria are nuclear shape and chromatin distribution, as well as the nucleus-to-cytoplasm ratio and the immunohistochemical properties of surface and intracellular proteins. The aim of this investigation was to evaluate the benefits and drawbacks of three-dimensional imaging of CD30+ cells in classical Hodgkin Lymphoma (cHL) in comparison to CD30+ lymphoid cells in reactive lymphoid tissues.
Materials and results: Using immunofluorescence confocal microscopy and computer-based analysis, we compared CD30+ neoplastic cells in Nodular Sclerosis cHL (NScHL) and Mixed Cellularity cHL (MCcHL) with reactive CD30+ cells in adenoids (AD) and lymphadenitis (LAD). We confirmed that the percentage of CD30+ cell volume can be calculated. The amount in lymphadenitis was approx. 1.5%, in adenoids around 2%, and in MCcHL up to 4.5%, whereas the values for NScHL rose to more than 8% of the total cell cytoplasm. In addition, CD30+ tumour cells (HRS cells) in cHL had larger volumes and more protrusions than CD30+ reactive cells. Furthermore, the formation of large cell networks turned out to be a typical characteristic of NScHL.
Conclusion: In contrast to 2D histology, 3D laser scanning offers a visualisation of complete cells, their network interactions, and their spatial distribution in the tissue. The possibility to differentiate cells with regard to volume, surface, shape, and cluster formation opens a new perspective on further diagnostic and biological questions. 3D imaging also provides a larger amount of information as a basis for bioinformatic analyses.
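As a rough illustration of the kind of bioinformatic calculation the conclusion alludes to, the sketch below computes a CD30+ volume percentage from a segmented confocal stack. The masks, voxel size, and thresholds are invented for the example; the paper's actual segmentation pipeline is not described here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical segmented confocal stack (z, y, x): a boolean mask of
# voxels inside cells, and a mask of CD30+ voxels within them.
cells = rng.random((40, 256, 256)) < 0.60
cd30 = cells & (rng.random((40, 256, 256)) < 0.05)

# CD30+ volume as a percentage of total cell volume -- the quantity
# reported per entity above (LAD ~1.5%, AD ~2%, MCcHL ~4.5%, NScHL >8%).
voxel_volume_um3 = 0.2 * 0.1 * 0.1  # assumed z * y * x voxel size in micrometers
percentage = cd30.sum() / cells.sum() * 100
print(f"CD30+ volume: {cd30.sum() * voxel_volume_um3:.0f} cubic micrometers "
      f"({percentage:.1f}% of total cell volume)")
```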
Volatility is a widely recognized measure of market risk. As volatility is not observed, it has to be estimated from market prices, i.e., as the implied volatility from option prices. The volatility index VIX, which makes volatility a tradeable asset in its own right, is computed from near- and next-term put and call options on the S&P 500 with more than 23 days and less than 37 days to expiration and non-vanishing bid. In the present paper we quantify the information content of the constituents of the VIX about the volatility of the S&P 500 in terms of the Fisher information matrix. Assuming that observed option prices are centered on the theoretical price provided by Heston's model, perturbed by additive Gaussian noise, we relate their Fisher information matrix to the Greeks in the Heston model. We find that the prices of options contained in the VIX basket allow for reliable estimates of the volatility of the S&P 500 with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value of roughly 3%, inferences from option prices become imprecise because Vega, the derivative of a European option w.r.t. volatility, and with it the Fisher information, nearly vanishes.
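To make the additive-Gaussian-noise argument concrete: for observation noise of standard deviation sigma_eps, the Fisher information about volatility reduces to the sum of squared Vegas divided by sigma_eps squared. The sketch below uses the Black-Scholes Vega as a simple stand-in for the Heston Greeks used in the paper; the strikes, noise level, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes Vega: dC/dsigma, the option price sensitivity to volatility."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.pdf(d1) * np.sqrt(T)

def fisher_info_vol(strikes, S, T, r, sigma, noise_sd):
    """Fisher information about sigma from a basket of option prices,
    assuming price_i = model_i(sigma) + iid Gaussian noise:
    I(sigma) = sum_i Vega_i^2 / noise_sd^2."""
    vegas = bs_vega(S, np.asarray(strikes, float), T, r, sigma)
    return np.sum(vegas**2) / noise_sd**2

# Illustrative VIX-like basket: strikes around the index level, ~30 days out.
strikes = np.arange(3000, 5000, 25)
for sigma in (0.20, 0.03):  # typical volatility vs. near the ~3% critical value
    info = fisher_info_vol(strikes, S=4000, T=30 / 365, r=0.01,
                           sigma=sigma, noise_sd=0.5)
    print(f"sigma={sigma:.2f}: I(sigma)={info:.3g}, "
          f"Cramer-Rao bound on sd >= {1 / np.sqrt(info):.2e}")
```

At low volatility only near-the-money strikes retain appreciable Vega, so the summed information collapses, which is the qualitative effect behind the imprecise inference described above.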
A hypothesis regarding the development of imitation learning is presented that is rooted in intrinsic motivations. It is derived from a recently proposed form of intrinsically motivated learning (IML) for efficient coding in active perception, wherein an agent learns to perform actions with its sense organs to facilitate efficient encoding of the sensory data. To this end, actions of the sense organs that improve the encoding of the sensory data trigger an internally generated reinforcement signal. Here it is argued that the same IML mechanism might also support the development of imitation when general actions beyond those of the sense organs are considered: The learner first observes a tutor performing a behavior and learns a model of the behavior's sensory consequences. The learner then acts itself and receives an internally generated reinforcement signal reflecting how well the sensory consequences of its own behavior are encoded by the sensory model. Actions that are more similar to those of the tutor will lead to sensory signals that are easier to encode and produce a higher reinforcement signal. Through this, the learner's behavior is progressively tuned to make the sensory consequences of its actions match the learned sensory model. I discuss this mechanism in the context of human language acquisition and bird song learning, where similar ideas have been proposed. The suggested mechanism also offers an account for the development of mirror neurons and makes a number of predictions. Overall, it establishes a connection between the principles of efficient coding, intrinsic motivations, and imitation.
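The core loop of the hypothesis can be summarized in a toy numerical sketch: a learner stores a model of the tutor's sensory consequences and then tunes its own action to maximize an internally generated reward for how well that model encodes the consequences of its own behavior. The signal family, action parametrization, and hill-climbing search below are all invented for illustration and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: the "sensory consequences" of a behavior are
# 1-D signals, parameterized by a single scalar action.
def sensory_consequences(action, t):
    """Map a scalar action parameter to a sensory signal."""
    return np.sin(2 * np.pi * action * t)

t = np.linspace(0, 1, 100)
tutor_signal = sensory_consequences(0.7, t)  # tutor's behavior, observed first

# Step 1: learn a model of the tutor's sensory consequences. Here the
# "model" is just a stored template; in the full proposal a sparse coder
# or similar efficient-coding model would play this role.
model = tutor_signal.copy()

def intrinsic_reward(signal):
    """Reward = how well the sensory model encodes the signal
    (negative reconstruction error against the learned template)."""
    return -np.mean((signal - model) ** 2)

# Step 2: the learner tunes its own action to maximize the internally
# generated reward, via simple random-search hill climbing.
action, best_r = 0.2, -np.inf
for _ in range(500):
    candidate = action + rng.normal(scale=0.05)
    r = intrinsic_reward(sensory_consequences(candidate, t))
    if r > best_r:
        action, best_r = candidate, r

print(f"learned action = {action:.3f} (tutor used 0.7), reward = {best_r:.4f}")
```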
Variable renewable energy sources (VRES), such as solar photovoltaic (PV) and wind turbines (WT), are starting to play a significant role in several energy systems around the globe. To overcome the problem of their non-dispatchable and stochastic nature, several approaches have been proposed so far. This paper describes a novel mathematical model for scheduling the operation of a wind-powered pumped-storage hydroelectricity (PSH) hybrid for 25 to 48 h ahead. The model is based on mathematical programming and wind speed forecasts for the next 1 to 24 h, along with the predicted upper reservoir occupancy for the 24th hour ahead. The results indicate that by coupling a 2-MW conventional wind turbine with a PSH of energy storage capacity equal to 54 MWh, it is possible to significantly reduce the intraday coefficient of variation of energy generation, from 31% for a pure wind turbine to 1.15% for a wind-powered PSH. The scheduling errors calculated based on the mean absolute percentage error (MAPE) are significantly smaller for such a coupling than those seen for wind generation forecasts, at 2.39% and 27%, respectively. This is emphasized even more strongly by the fact that those for wind generation were calculated for forecasts made for the next 1 to 24 h, while those for scheduled generation were calculated for forecasts made for the next 25 to 48 h. The results clearly show that the proposed scheduling approach ensures the high reliability of the WT–PSH energy source.
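For readers unfamiliar with the two statistics quoted above, the sketch below shows how the intraday coefficient of variation and the MAPE would be computed from hourly generation profiles; the profiles themselves are made up for illustration and do not reproduce the paper's numbers.

```python
import numpy as np

def coefficient_of_variation(series):
    """Intraday variability: standard deviation over mean, in percent."""
    s = np.asarray(series, float)
    return np.std(s) / np.mean(s) * 100

def mape(actual, forecast):
    """Mean absolute percentage error between actual and forecast output."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((a - f) / a)) * 100

# Hypothetical hourly generation (MW) over one day -- illustrative only.
wind_only = np.array([1.8, 0.3, 1.2, 2.0, 0.1, 0.9] * 4)
wt_psh = np.full(24, wind_only.mean())  # the PSH smooths output toward the mean

print(f"CV, wind only: {coefficient_of_variation(wind_only):.1f}%")
print(f"CV, WT-PSH:    {coefficient_of_variation(wt_psh):.1f}%")
print(f"MAPE example:  {mape(wind_only, wind_only * 1.1):.1f}%")
```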
In self-organized critical (SOC) systems, avalanche size distributions follow power laws. Power laws have also been observed for neural activity, and it has therefore been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (thereby eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches rather than temporally separated ones, and that its global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing and a safety margin from super-criticality, which has been linked to epilepsy.
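The central claim, that one spike triggers on average a little less than one spike, corresponds to a branching parameter slightly below the critical value of 1. Below is a minimal Galton-Watson sketch of this idea; it is a bare-bones stand-in, not the authors' actual models, which additionally involve subsampling and continuous drive.

```python
import numpy as np

rng = np.random.default_rng(1)

def avalanche_size(m, max_size=100_000):
    """Size of one avalanche in a branching process where each active
    unit triggers Poisson(m) units in the next step; m = 1 is critical."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(m * active)
        size += active
    return size

# Critical vs. slightly sub-critical branching, echoing the conclusion
# that in-vivo activity is best matched by m just below 1.
for m in (1.0, 0.98, 0.9):
    sizes = [avalanche_size(m) for _ in range(10_000)]
    print(f"m={m}: mean avalanche size = {np.mean(sizes):.1f}")
```

At m = 1 the size distribution is a power law with diverging mean (here truncated by the cap), while m slightly below 1 introduces the gentle exponential cutoff that, per the abstract, better matches recorded activity.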
Anaplastic large cell lymphoma (ALCL) and classical Hodgkin lymphoma (cHL) are lymphomas that contain CD30-expressing tumor cells and have numerous pathological similarities. Whereas ALCL is usually diagnosed at an advanced stage, cHL more frequently presents with localized disease. The aim of the present study was to elucidate the mechanisms underlying the different clinical presentations of ALCL and cHL. Chemokine and chemokine receptor expression were similar in primary ALCL and cHL cases apart from the known overexpression of the chemokines CCL17 and CCL22 in the Hodgkin and Reed-Sternberg (HRS) cells of cHL. Consistent with the overexpression of these chemokines, primary cHL cases encountered a significantly denser T cell microenvironment than ALCL. In addition to differences in the interaction with their microenvironment, cHL cell lines presented lower and less efficient intrinsic cell motility than ALCL cell lines, as assessed by time-lapse microscopy in a collagen gel and by transwell migration assays. We thus propose that the combination of impaired basal cell motility and differences in the interaction with the microenvironment hampers the dissemination of HRS cells in cHL when compared with the tumor cells of ALCL.
We present a model for the autonomous and simultaneous learning of active binocular and motion vision. The model is based on the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model learns, through sparse coding, how to efficiently encode the incoming visual signals generated by an object moving in 3-D. Simultaneously, it learns how to produce eye movements that further improve the efficiency of the sensory coding. This learning is driven by an intrinsic motivation to maximize the system's coding efficiency. We test our approach in simulations on the humanoid robot iCub. The model demonstrates self-calibration of accurate object fixation and tracking of moving objects. Our results show that the model keeps improving until it hits physical constraints such as camera or motor resolution, or limits on its internal coding capacity. Furthermore, we show that the emerging sensory tuning properties are in line with results on disparity, motion, and motion-in-depth tuning in the visual cortex of mammals. The model suggests that vergence and tracking eye movements can be viewed as fundamentally serving the same objective of maximizing the coding efficiency of the visual system, and that they can be learned and calibrated jointly through AEC.
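A minimal sketch of the shared objective ascribed here to vergence and tracking: an action that aligns the two eyes makes the binocular input redundant and therefore cheaper to encode. The 1-D "retinas", the correlation-based proxy for sparse-coding cost, and the exhaustive search below are crude stand-ins for the model's actual sparse coder and reinforcement learner.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D scene; the right eye sees the left image shifted by
# the true disparity, and a vergence command shifts it back.
true_disparity = 5
scene = rng.normal(size=200)
left = scene[50:150]

def right_patch(vergence):
    shift = true_disparity - vergence
    return scene[50 + shift:150 + shift]

def coding_cost(vergence):
    """Proxy for the sparse-coding cost of the joint binocular input:
    when the eyes verge correctly, left and right patches are identical,
    so a joint code only needs to represent one of them. We use
    1 - correlation as a crude stand-in for reconstruction error."""
    return 1.0 - np.corrcoef(left, right_patch(vergence))[0, 1]

# "Learning" reduced to picking the action that maximizes coding efficiency.
best = min(range(-10, 11), key=coding_cost)
print(f"selected vergence = {best} (true disparity = {true_disparity})")
```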
We investigate charmonium production in Pb + Pb collisions at the LHC beam energy Elab = 2.76A TeV in a fixed-target experiment (√sNN = 72 GeV). Within the framework of a transport approach including cold and hot nuclear matter effects on charmonium evolution, we focus on the antishadowing effect on the nuclear modification factors RAA and rAA for the J/ψ yield and transverse momentum. The yield is more suppressed at less forward rapidity (ylab ≃ 2) than at very forward rapidity (ylab ≃ 4) due to the shadowing and antishadowing in different rapidity bins.
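For reference, the nuclear modification factor RAA quoted here follows the standard convention (this is the textbook definition, not a formula specific to this paper): the yield in A+A collisions divided by the binary-collision-scaled yield in p+p,

\[
R_{AA}(p_T, y) = \frac{\mathrm{d}^2 N_{AA}/\mathrm{d}p_T\,\mathrm{d}y}{\langle N_{\mathrm{coll}}\rangle\,\mathrm{d}^2 N_{pp}/\mathrm{d}p_T\,\mathrm{d}y},
\]

so that RAA < 1 signals suppression relative to a superposition of independent nucleon-nucleon collisions.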
Physics at its core is an experimental pursuit. If a theory does not agree with experimental results, then the theory is wrong. However, it is becoming harder and harder to directly test some theories of fundamental physics at the high energy/small distance frontier, precisely because this frontier is becoming technologically harder to reach. The Large Hadron Collider is getting near the limit of what we can do with present accelerator technology in terms of directly reaching the energy frontier. The motivation for this special issue was to collect ideas and potential approaches to experimentally probe some of our ideas about physics at the high energy/small distance frontier. Some of the papers in this special issue deal directly with the question of what happens to spacetime at small distance scales. In the paper by A. Aurilia and E. Spallucci, a picture of quantum spacetime is given based on the effects of ultrahigh-velocity length contractions on the structure of spacetime. The work of P. Nicolini et al. further pursues the idea that spacetime has a minimal length. The consequences of this minimal length are investigated in terms of the effects it would have on the gravitational collapse of a star to form a black hole. In the article by G. Amelino-Camelia et al., the quantum structure of spacetime is studied through the Fermi LAT data on the gamma-ray burst GRB130427A. The article by S. Hossenfelder addresses the question of whether spacetime is fundamentally continuous or discrete and postulates that, if spacetime is discrete, it might contain defects which would have important observational consequences. ...
This paper studies the geometry and the thermodynamics of a holographic screen in the framework of ultraviolet self-complete quantum gravity. To achieve this goal we construct a new static, neutral, nonrotating black hole metric, whose outer (event) horizon coincides with the surface of the screen. The spacetime admits an extremal configuration corresponding to the minimal holographic screen, with both mass and radius equal to the Planck units. We identify this object as the fundamental building block of spacetime, whose interior is physically inaccessible and cannot be probed even during the terminal phase of Hawking evaporation. In agreement with the holographic principle, relevant processes take place on the screen surface. The area quantization leads to a discrete mass spectrum. An analysis of the entropy shows that the minimal holographic screen can store only one byte of information, while in the thermodynamic limit the area law is corrected by a logarithmic term.
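The last two statements refer to forms that can be sketched generically (the coefficient c and the exact spectrum are not given in the abstract, so the expressions below are illustrative rather than the paper's own):

\[
A_n = n\,A_{\min} \;\Rightarrow\; M_n \propto \sqrt{n}\,M_P,
\qquad
S(A) = \frac{A}{4\ell_P^2} + c\,\ln\frac{A}{4\ell_P^2} + \mathrm{const},
\]

where the square-root mass spectrum assumes a horizon radius growing linearly with mass, as for large Schwarzschild-like configurations.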
The 2D azimuth and rapidity structure of two-particle correlations in relativistic A+A collisions is altered significantly by the presence of sharp inhomogeneities in the superdense matter formed in such processes. Causality constraints force one to associate the long-range longitudinal correlations observed in a narrow angular interval, the so-called (soft) ridge, with peculiarities of the initial conditions of the collision process. This study's objective is to analyze whether multiform initial tubular structures, undergoing subsequent hydrodynamic evolution and gradual decoupling, can form the soft ridges. Motivated by flux-tube scenarios, the initial energy density distribution contains different numbers of high-density tube-like boost-invariant inclusions that form a bumpy structure in the transverse plane. The influence of various structures of such initial conditions in the most central A+A events on the collective evolution of matter, the resulting spectra, angular particle correlations, and vn coefficients is studied in the framework of the hydrokinetic model (HKM).
A theoretical review of recent femtoscopy results for the systems created in ultrarelativistic A+A, p+p, and p+Pb collisions is presented. The basic model that allows one to describe the interferometry data at SPS, RHIC, and LHC is the hydrokinetic model. The model makes it possible to avoid the principal problem of the particlization of the medium at non-space-like parts of the transition hypersurfaces and to switch to a hadronic cascade at a space-like hypersurface with a nonequilibrated particle input. The results for pion and kaon interferometry scales in Pb+Pb and Au+Au collisions at LHC and RHIC are presented for different centralities. New theoretical results on the femtoscopy of small sources with sizes of 1-2 fm or less are discussed. The uncertainty principle destroys the standard approach of completely chaotic sources: the emitters in such sources cannot radiate independently and incoherently. As a result, the observed femtoscopy scales are reduced, and the Bose-Einstein correlation function is suppressed. The results are applied to the femtoscopy analysis of p+p collisions at the LHC energy √s = 7 TeV and of p+Pb collisions at √sNN = 5.02 TeV. The dependence of the corresponding interferometry volumes on multiplicity is compared with that for central A+A collisions. In addition, the nonfemtoscopic two-pion correlations in proton-proton collisions at LHC energies are considered, and a simple model that takes into account correlations induced by the conservation laws and by minijets is analyzed.
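For orientation, the interferometry scales discussed throughout are radii extracted from the two-particle Bose-Einstein correlation function; in its simplest one-dimensional Gaussian form (the textbook parametrization, not this paper's specific fit) it reads

\[
C(q) = 1 + \lambda\, e^{-R_{\mathrm{inv}}^2 q^2},
\]

where q is the relative momentum of the pair, R_inv the interferometry radius, and λ the correlation strength; the partial coherence of small emitters described above shows up as a suppression of λ.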
The production of K∗(892)0 and ϕ(1020) mesons has been measured in p–Pb collisions at √sNN = 5.02 TeV. K∗0 and ϕ are reconstructed via their decay into charged hadrons with the ALICE detector in the rapidity range −0.5 < y < 0. The transverse momentum spectra, measured as a function of the multiplicity, cover a pT range from 0 to 15 GeV/c for K∗0 and from 0.3 to 21 GeV/c for ϕ. Integrated yields, mean transverse momenta and particle ratios are reported and compared with results in pp collisions at √s = 7 TeV and Pb–Pb collisions at √sNN = 2.76 TeV. In Pb–Pb and p–Pb collisions, K∗0 and ϕ probe the hadronic phase of the system and contribute to the study of particle formation mechanisms by comparison with other identified hadrons. For this purpose, the mean transverse momenta and the differential proton-to-ϕ ratio are discussed as a function of the event multiplicity. The short-lived K∗0 is measured to investigate re-scattering effects, which are believed to be related to the size of the system and to the lifetime of the hadronic phase.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
Spatial neuronal synchronization and the waveform of oscillations : implications for EEG and MEG
(2019)
Neuronal oscillations are ubiquitous in the human brain and are implicated in virtually all brain functions. Although they can be described by a prominent peak in the power spectrum, their waveform is not necessarily sinusoidal and shows rather complex morphology. Both frequency- and time-domain descriptions of such non-sinusoidal neuronal oscillations can be utilized. However, in non-invasive EEG/MEG recordings the waveform of oscillations often takes a sinusoidal shape, which in turn leads to a rather oversimplified view of oscillatory processes. In this study, we show in simulations how spatial synchronization can mask non-sinusoidal features of the underlying rhythmic neuronal processes. Consequently, the degree of non-sinusoidality can serve as a measure of spatial synchronization. To confirm this empirically, we show that a mixture of EEG components is indeed associated with more sinusoidal oscillations compared to the waveform of oscillations in each constituent component. Using simulations, we also show that the spatial mixing of non-sinusoidal neuronal signals strongly affects the amplitude ratio of the spectral harmonics constituting the waveform. Finally, our simulations show how spatial mixing can affect the strength and even the direction of the amplitude coupling between constituent neuronal harmonics at different frequencies. Validating these simulations, we also demonstrate these effects in real EEG recordings. Our findings have far-reaching implications for the neurophysiological interpretation of spectral profiles, cross-frequency interactions, and the unequivocal determination of oscillatory phase.
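The central simulation result, that spatial mixing masks non-sinusoidality, can be reproduced in miniature. The waveform (base frequency plus a phase-locked second harmonic) and the phase-jitter model of partial synchronization below are simplified stand-ins for the study's simulations.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur, f0 = 1000, 10, 10  # sampling rate (Hz), duration (s), base frequency (Hz)
t = np.arange(0, dur, 1 / fs)

def nonsinusoidal(phase):
    """A non-sinusoidal rhythm: base frequency plus a phase-locked second
    harmonic, giving an asymmetric, arc-like waveform."""
    return (np.sin(2 * np.pi * f0 * t + phase)
            + 0.5 * np.sin(2 * np.pi * 2 * f0 * t + 2 * phase))

def harmonic_ratio(x):
    """Spectral amplitude at the 2nd harmonic relative to the base frequency."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - 2 * f0))] / spec[np.argmin(np.abs(freqs - f0))]

single = nonsinusoidal(0.0)
# Spatial mixing: many sources share the non-sinusoidal waveform but are only
# partially synchronized (Gaussian phase jitter of 1 rad). Phase doubling at
# the harmonic makes it desynchronize faster than the base frequency, so the
# mixture looks more sinusoidal than any constituent source.
mixture = sum(nonsinusoidal(rng.normal(0.0, 1.0)) for _ in range(200))

print(f"harmonic/base ratio, single source: {harmonic_ratio(single):.2f}")   # ~0.5
print(f"harmonic/base ratio, mixture:       {harmonic_ratio(mixture):.2f}")  # much smaller
```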
The Gribov mode in hot QCD
(2017)