The advent of improved experimental and theoretical techniques has brought a lot of attention to the electric dipole (E1) response of atomic nuclei in the last decade. The extensive studies have led to the observation and interpretation of a concentration of E1 strength energetically below the Giant Dipole Resonance in many nuclei. This phenomenon is commonly denoted as Pygmy Dipole Resonance (PDR). This contribution will summarize the most important results obtained using different experimental probes, define the challenges to gain a deeper understanding of the excitations, and discuss the newest experimental developments.
We investigate charmonium production in Pb + Pb collisions at the LHC beam energy Elab = 2.76A TeV in a fixed-target configuration (√sNN = 72 GeV). Within the framework of a transport approach including cold and hot nuclear matter effects on charmonium evolution, we focus on the antishadowing effect on the nuclear modification factors RAA and rAA for the J/ψ yield and transverse momentum. The yield is more suppressed at less forward rapidity (ylab ≃ 2) than at very forward rapidity (ylab ≃ 4) due to the shadowing and antishadowing in different rapidity bins.
We compiled an NMR data set consisting of exact nuclear Overhauser enhancement (eNOE) distance limits, residual dipolar couplings (RDCs) and scalar (J) couplings for GB3, which together form one of the largest and most diverse data sets for structural characterization of a protein to date. All data have small, carefully estimated experimental errors. We use the data in the research article Vogeli et al., 2015, Complementarity and congruence between exact NOEs and traditional NMR probes for spatial decoding of protein dynamics, J. Struct. Biol., 191, 3, 306–317, doi:10.1016/j.jsb.2015.07.008 [1] for cross-validation in multiple-state structural ensemble calculation. We advocate this set as an ideal test case for molecular dynamics simulations and structure calculations.
We discuss the behavior of dynamically-generated charmed baryonic resonances in matter within a unitarized coupled-channel model consistent with heavy-quark spin symmetry. We analyze the implications for the formation of D-meson bound states in nuclei and the propagation of D mesons in heavy-ion collisions from RHIC to FAIR energies.
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, together with a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. The model assumptions of the two (non)linear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model; as a main contribution we therefore design an exact Gibbs sampler for efficient inference, which we can apply to higher-dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively approximate and characterize the meaningful generation process.
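As a sketch of the generative model this abstract describes, the toy code below samples an image from a spike-and-slab prior and combines the active dictionary elements with a pointwise max instead of a sum. All names, shapes and parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sample_image(dictionary, pi=0.3, slab_std=1.0, rng=None):
    """Sample one image from a toy spike-and-slab generative model.

    Each dictionary element (one row of shape D) is switched on with
    probability `pi` (the spike) and, if active, scaled by a Gaussian
    coefficient (the slab).  Active components are combined with a
    pointwise max instead of a linear sum, mimicking occlusion: the
    strongest component "wins" each pixel.
    """
    rng = np.random.default_rng(rng)
    H, D = dictionary.shape
    spikes = rng.random(H) < pi                  # binary on/off latents
    slabs = rng.normal(0.0, slab_std, size=H)    # continuous intensities
    coeffs = np.where(spikes, slabs, 0.0)        # exact zeros when off
    # nonlinear max combination over the components' contributions
    contributions = coeffs[:, None] * dictionary
    return contributions.max(axis=0)

# toy dictionary: two "edge"-like components on a 4-pixel image
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
img = sample_image(W, pi=0.5, rng=0)
```

Inference in the paper runs in the opposite direction (an exact Gibbs sampler over the spike and slab variables); this snippet only illustrates the forward model that makes exact zeros and occlusion-like overlaps possible.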
We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T < 1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.
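The comparison to a Skellam distribution with the same mean and variance can be illustrated with a short generic script (a sketch of the ratio construction only, not the quark-meson/functional-renormalization-group calculation). It uses the standard Skellam relations mean = μ1 − μ2 and variance = μ1 + μ2 to fix the reference parameters.

```python
import numpy as np
from scipy.stats import skellam

def skellam_ratio(counts):
    """Ratio of an empirical net-number distribution P(N) to the
    Skellam distribution with the same mean and variance.

    `counts` maps net number N -> observed probability.  Matching the
    first two moments gives mu1 = (var + mean)/2, mu2 = (var - mean)/2.
    """
    N = np.array(sorted(counts))
    p = np.array([counts[n] for n in N])
    p = p / p.sum()                         # normalize the input
    mean = (N * p).sum()
    var = ((N - mean) ** 2 * p).sum()
    mu1, mu2 = (var + mean) / 2.0, (var - mean) / 2.0
    return {int(n): pn / skellam.pmf(n, mu1, mu2) for n, pn in zip(N, p)}

# sanity check: for data that are exactly Skellam, all ratios are ~1,
# so any structure in the ratios signals non-Skellam (e.g. critical)
# contributions to the distribution
ref = skellam(mu1=3.0, mu2=2.0)
data = {n: ref.pmf(n) for n in range(-10, 16)}
ratios = skellam_ratio(data)
```

Deviations of such ratios from unity in the tails are the kind of characteristic feature the abstract attributes to O(4) criticality.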
Synesthesia is a phenomenon in which additional perceptual experiences are elicited by sensory stimuli or cognitive concepts. Synesthetes possess a unique type of phenomenal experience not directly triggered by sensory stimulation. Therefore, for a better understanding of consciousness it is relevant to identify the mental and physiological processes that subserve synesthetic experience. In the present work we suggest several reasons why synesthesia has merit for research on consciousness. We first review the dynamic and rapidly growing field of synesthesia research. We particularly draw attention to the role of semantics in synesthesia, which is important for establishing synesthetic associations in the brain. We then propose that the interplay between semantics and sensory input in synesthesia can be helpful for the study of the neural correlates of consciousness, especially when making use of ambiguous stimuli for inducing synesthesia. Finally, synesthesia-related alterations of brain networks and functional connectivity can be of merit for the study of consciousness.
Future FAIR experiments have to cope with very high input rates and large track multiplicities, and must perform full event reconstruction and selection on-line on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms optimized for parallel computation is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.
Tumour hypoxia plays a pivotal role in cancer therapy for most therapeutic approaches, from radiotherapy to immunotherapy. Detailed and accurate knowledge of the oxygen distribution in a tumour is necessary in order to determine the right treatment strategy. Still, due to the limited spatial and temporal resolution of imaging methods, as well as a lacking fundamental understanding of internal oxygenation dynamics in tumours, a precise oxygen distribution map is rarely available for treatment planning. We employ an agent-based in silico tumour spheroid model in order to study the complex, localized and fast oxygen dynamics in tumour micro-regions which are induced by radiotherapy. A lattice-free, 3D, agent-based approach for cell representation is coupled with a high-resolution diffusion solver that includes a tissue density-dependent diffusion coefficient. This allows us to assess the space- and time-resolved reoxygenation of a small subvolume of tumour tissue in response to radiotherapy. Following irradiation the tumour nodule exhibits characteristic reoxygenation and re-depletion dynamics, which we resolve with high spatio-temporal resolution. The reoxygenation follows specific timings, which should be respected in treatment in order to maximise the use of oxygen enhancement effects. Oxygen dynamics within the tumour create windows of opportunity for the use of adjuvant chemotherapeutics and hypoxia-activated drugs. Overall, we show that modelling makes it possible to follow oxygenation dynamics beyond common resolution limits and to predict beneficial strategies for therapy and in vitro verification. Models of cell cycle and oxygen dynamics in tumours should in the future be combined with imaging techniques, to allow for a systematic experimental study of possible improved schedules and to ultimately extend the reach of oxygenation monitoring available in clinical treatment.
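The core numerical ingredient, a diffusion solver with a tissue-density-dependent coefficient, can be sketched in one dimension as below. This is a minimal explicit finite-difference stand-in for the high-resolution 3D solver the abstract describes; the density law D(x) = d0/(1 + density) and all parameter values are illustrative assumptions.

```python
import numpy as np

def diffuse_oxygen(c, density, dx=10.0, dt=0.001, d0=2000.0,
                   steps=100, consumption=1.0):
    """Explicit 1D diffusion of oxygen with a density-dependent
    coefficient D(x) = d0 / (1 + density(x)): denser tissue slows
    diffusion.  Cells consume oxygen at a constant rate while any is
    left; boundary values are held fixed (oxygen supplied at the rim).
    """
    c = c.astype(float).copy()
    D = d0 / (1.0 + density)
    for _ in range(steps):
        # flux between neighbouring nodes, with D at the midpoints
        flux = 0.5 * (D[1:] + D[:-1]) * np.diff(c) / dx
        dc = np.diff(flux) / dx - consumption * (c[1:-1] > 0)
        c[1:-1] = np.maximum(c[1:-1] + dt * dc, 0.0)
    return c

# a dense nodule in the centre of a 50-node domain depletes fastest
density = np.where(np.abs(np.arange(50) - 25) < 10, 5.0, 0.5)
c0 = np.full(50, 100.0)
profile = diffuse_oxygen(c0, density)
```

For the explicit scheme to be stable the time step must satisfy dt · max(D)/dx² < 1/2, which the illustrative values above respect; a production solver would use an implicit or adaptive scheme instead.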
In heavy-ion collisions the pA system is typically regarded as a “cold” nuclear matter environment, thought to isolate and identify initial-state effects due to the presence of multiple nucleons in the incoming nucleus. Moreover, pA collisions bridge the gap between peripheral AA collisions and the pp baseline, yielding a more complete understanding of the underlying production mechanisms and how they evolve with multiplicity. Recent measurements at both RHIC and the LHC indicate, however, that the “cold” nuclear matter picture may be somewhat naïve.
Recent LHC results from the 2013 p–Pb run at √sNN = 5.02 TeV will be discussed.
Due to their penetrating nature, electromagnetic probes, i.e., lepton-antilepton pairs (dileptons) and photons, are unique tools to gain insight into the nature of the hot and dense medium of strongly-interacting particles created in relativistic heavy-ion collisions, including hints to the nature of the restoration of chiral symmetry of QCD. Of particular interest are the spectral properties of the electromagnetic current-correlation function of these particles within the dense and/or hot medium. The related theoretical investigations of the in-medium properties of the involved particles in both the partonic and hadronic parts of the QCD phase diagram underline the importance of a proper understanding of the properties of various hadron resonances in the medium.
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
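The interplay of STDP, synaptic normalization and homeostatic intrinsic plasticity in such a network can be sketched with a few lines of code. This is a deliberately simplified toy in the spirit of the SORN model class, not the published implementation; network size, learning rates and the target rate are illustrative assumptions.

```python
import numpy as np

def sorn_step(x, W, T, h_target=0.1, eta_stdp=0.001, eta_ip=0.001):
    """One update of a toy SORN-like network of excitatory threshold
    units (inhibition is absorbed into the thresholds here).

    x : binary activity vector, W : recurrent weights (rows =
    postsynaptic), T : per-unit firing thresholds.
    """
    x_new = (W @ x - T > 0).astype(float)              # threshold units
    # binary STDP: strengthen pre-before-post, weaken the reverse
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W, 0.0, None)
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # normalization
    # homeostatic intrinsic plasticity: nudge units toward target rate
    T += eta_ip * (x_new - h_target)
    return x_new, W, T

rng = np.random.default_rng(1)
n = 20
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)
T = np.full(n, 0.05)
x = (rng.random(n) < 0.1).astype(float)
for _ in range(50):
    x, W, T = sorn_step(x, W, T)
```

The key design point the abstract emphasizes survives even in this toy: the network is fully deterministic once initialized, yet its activity is shaped by the joint action of Hebbian and homeostatic rules rather than injected noise.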
Dilepton production in heavy-ion collisions at top SPS energy is investigated within a coarse-graining approach that combines an underlying microscopic evolution of the nuclear reaction with the application of medium-modified spectral functions. Extracting the local energy and baryon density on a grid of small space-time cells and boosting to each cell’s rest frame makes it possible to determine the local temperature and chemical potential by applying an equation of state. This allows for the calculation of thermal dilepton emission. We apply and compare two different spectral functions for the ρ: a hadronic many-body calculation and an approach that uses empirical scattering amplitudes. Quantitatively good agreement of the model calculations with the data from the NA60 collaboration is achieved for both spectral functions, but in detail the hadronic many-body approach gives a better description, especially of the broadening around the pole mass of the ρ and of the low-mass excess. We further show that the presence of a pion chemical potential significantly influences the dilepton yield.
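The per-cell step of such a coarse-graining scheme, inverting an equation of state to get a temperature from a local energy density, can be illustrated with a deliberately oversimplified EoS. The actual approach uses a full hadron-resonance/partonic equation of state and also determines the chemical potential; here we only invert the ultrarelativistic ideal-gas form ε = a T⁴, with an illustrative degeneracy factor a.

```python
import numpy as np

def cell_temperature(energy_density, a=12.0):
    """Toy equation-of-state inversion for a space-time cell.

    For an ultrarelativistic ideal gas in natural units,
    epsilon = a * T^4, hence T = (epsilon / a)^(1/4).  The constant
    `a` bundles the degeneracy factors and is illustrative only.
    """
    return (np.asarray(energy_density, dtype=float) / a) ** 0.25

# local energy densities of a few cells -> local "temperatures"
eps = np.array([0.1, 0.5, 2.0])
T = cell_temperature(eps)
```

In the real calculation this lookup happens in each cell's rest frame, so the densities must first be boosted out of the computational frame before the inversion.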
Two types of particles exist in the atmosphere, primary and secondary particles. While primary particles such as soot, mineral dust, sea salt particles or pollen are introduced directly as particles into the atmosphere, secondary particles are formed in the atmosphere by condensation of gases. The formation of such new aerosol particles takes place frequently and under a broad variety of atmospheric conditions and geographic locations. A considerable fraction of the atmospheric particles is formed by such nucleation processes. The newly formed particles may grow by condensation to sizes where they are large enough to act as cloud condensation nuclei and may therefore affect cloud properties. The fundamental processes of aerosol nucleation are described and typical atmospheric observations are discussed. Two recent studies are introduced that could substantially change our current understanding of atmospheric nucleation.
Intrinsic motivations drive the acquisition of knowledge and skills on the basis of novel or surprising stimuli or the pleasure of learning new skills. In so doing, they differ from extrinsic motivations, which are mainly linked to drives that promote survival and reproduction. Intrinsic motivations have been implicitly exploited in several psychological experiments but, due to the lack of proper paradigms, they are rarely a direct subject of investigation. This article investigates how different intrinsic motivation mechanisms can support the learning of visual skills, such as "foveate a particular object in space", using a gaze contingency paradigm. In the experiment participants could freely foveate objects shown on a computer screen. Foveating each of two “button” pictures caused different effects: one caused the appearance of a simple image (a blue rectangle) in unexpected positions, while the other evoked the appearance of an always-novel picture (objects or animals). The experiment studied how two possible intrinsic motivation mechanisms might guide learning to foveate one or the other button picture. One mechanism is based on the sudden, surprising appearance of a familiar image at unpredicted locations, and a second one is based on the content novelty of the images. The results show the comparative effectiveness of the mechanism based on image novelty, whereas they do not support the operation of the mechanism based on the surprising location of the image appearance. Interestingly, these results were also obtained with participants who, according to a post-experiment questionnaire, had not understood the functions of the different buttons, suggesting that novelty-based intrinsic motivation mechanisms might operate even at an unconscious level.
The dynamics of strange pseudoscalar and vector mesons in hot and dense nuclear matter is studied within a chiral unitary framework in coupled channels. Our results set up the starting point for implementations in microscopic transport approaches of heavy-ion collisions, particularly at the conditions of the forthcoming experiments at GSI/FAIR and NICA-Dubna. In the K̄ N sector we focus on the calculation of (off-shell) transition rates for the most relevant binary reactions involved in strangeness production close to threshold energies, with special attention to the excitation of sub-threshold hyperon resonances and isospin effects (e.g. K̄ p vs K̄ n). We also give an overview of recent theoretical developments regarding the dynamics of strange vector mesons (K*, K̄* and ϕ) in the nuclear medium, in connection with experimental activity from heavy-ion collisions and nuclear production reactions. We emphasize the role of hadronic decay modes and the excitation of hyperon resonances as the driving mechanisms modifying the properties of vector mesons.
The true revolution in the age of digital neuroanatomy is the ability to extensively quantify anatomical structures and thus investigate structure-function relationships in great detail. Large-scale projects were recently launched with the aim of providing infrastructure for brain simulations. These projects will increase the need for a precise understanding of brain structure, e.g., through statistical analysis and models.
From articles in this Research Topic, we identify three main themes that clearly illustrate how new quantitative approaches are helping advance our understanding of neural structure and function. First, new approaches to reconstruct neurons and circuits from empirical data are aiding neuroanatomical mapping. Second, methods are introduced to improve understanding of the underlying principles of organization. Third, by combining existing knowledge from lower levels of organization, models can be used to make testable predictions about higher levels of organization where knowledge is absent or poor. This latter approach is useful for examining the statistical properties of specific network connectivity when current experimental methods cannot yet fully reconstruct whole circuits of more than a few hundred neurons.
Formation of hypermatter and hypernuclei within transport models in relativistic ion collisions
(2015)
Within a combined approach we investigate the main features of the production of hyper-fragments in relativistic heavy-ion collisions. The formation of hyperons is modeled within the UrQMD and HSD transport codes. To describe the capture of hyperons by nucleons and nuclear residues, a coalescence of baryons (CB) model was developed. We demonstrate that the origin of hypernuclei of various masses can be explained by typical baryon interactions, similar to the processes leading to the production of conventional nuclei. At high beam energies we predict a saturation of the yields of all hyper-fragments, so that such reactions can be studied with high yields even at accelerators of moderate relativistic energies.
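The essential idea of a coalescence criterion, binding a hyperon to nucleons that are close in both coordinate and momentum space at freeze-out, can be sketched as follows. The cut values and the hard-cutoff form are illustrative assumptions, not the calibrated parameters of the CB model.

```python
import numpy as np

def coalesce(hyperon, nucleons, dr_max=3.5, dp_max=0.35):
    """Toy coalescence criterion for hyperon capture.

    hyperon  : (position fm, momentum GeV/c) of the hyperon
    nucleons : list of (position, momentum) tuples
    Returns indices of nucleons within dr_max in coordinate space
    AND dp_max in momentum space of the hyperon.
    """
    r_y, p_y = hyperon
    captured = []
    for i, (r, p) in enumerate(nucleons):
        close_r = np.linalg.norm(np.asarray(r) - r_y) < dr_max
        close_p = np.linalg.norm(np.asarray(p) - p_y) < dp_max
        if close_r and close_p:
            captured.append(i)
    return captured

# a Lambda at rest at the origin and three nucleons; only the first
# two are close in both position and momentum
lam = (np.zeros(3), np.zeros(3))
nucs = [(np.array([1.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])),
        (np.array([2.0, 1.0, 0.0]), np.array([0.0, 0.2, 0.0])),
        (np.array([10.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]))]
cluster = coalesce(lam, nucs)
```

In a transport calculation this test runs over the hyperons produced by UrQMD or HSD at the end of the dynamical evolution, and the captured clusters are then identified with hyper-fragment candidates.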
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning in mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
The transition to a future electricity system based primarily on wind and solar PV is examined for all regions in the contiguous US. We present optimized pathways for the build-up of wind and solar power, both for least backup energy needs and for least cost, obtained with a simplified, lightweight model based on long-term, high-resolution, weather-determined generation data. In the absence of storage, the pathway which achieves the best match of generation and load, and thus the least backup energy requirements, generally favors a combination of both technologies, with a wind/solar PV (photovoltaics) energy mix of about 80/20 in a fully renewable scenario. The least-cost development starts with 100% of the technology with the lowest average generation costs, but with increasing renewable installations, economically unfavorable excess generation pushes it toward the minimal-backup pathway. Surplus generation and the entailed costs can be reduced significantly by combining wind and solar power, and/or by absorbing excess generation, for example with storage or transmission, or by coupling the electricity system to other energy sectors.