One-photon and multi-photon absorption, spontaneous and stimulated photon emission, resonance Raman scattering and electron transfer are important molecular processes that commonly involve combined vibrational-electronic (vibronic) transitions. The corresponding vibronic transition profiles in the energy domain are usually determined by Franck-Condon factors (FCFs), the squared norm of overlap integrals between vibrational wavefunctions of different electronic states. FC profiles are typically highly congested for large molecular systems, and the spectra usually become poorly resolvable at elevated temperatures. The theoretical analysis of such spectra is even more difficult when vibrational mode mixing (Duschinsky) effects are significant, because contributions from different modes are in general not separable, even within the harmonic approximation. A few decades ago Doktorov, Malkin and Man'ko [1979 J. Mol. Spectrosc. 77, 178] developed a coherent state-based generating function approach and exploited the dynamical symmetry of vibrational Hamiltonians for the Duschinsky relation to describe FC transitions at zero Kelvin. Recently, the present authors extended the method to incorporate thermal, single vibronic level, non-Condon and multi-photon effects in the energy, time and probability density domains for the efficient calculation and interpretation of vibronic spectra. Herein, recent developments and corresponding generating functions are presented for single vibronic levels related to fluorescence, resonance Raman scattering and anharmonic transitions.
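For orientation, the simplest limiting case of such an FC profile has a closed form: for a single displaced harmonic mode with equal frequencies in both electronic states (no Duschinsky mixing) at 0 K, the FC factors follow a Poisson distribution in the Huang-Rhys factor S. The sketch below (plain NumPy, illustrative only; it is not the generating-function machinery discussed in the abstract) evaluates this single-mode profile.

```python
import numpy as np
from math import lgamma

def fc_profile_single_mode(S, n_max=20):
    """Franck-Condon factors |<0|n'>|^2 for one displaced harmonic mode
    at 0 K (equal frequencies, no Duschinsky mixing): a Poisson
    distribution in the Huang-Rhys factor S."""
    n = np.arange(n_max + 1)
    log_fact = np.array([lgamma(k + 1) for k in n])  # log n! avoids overflow
    return np.exp(-S + n * np.log(S) - log_fact)

fcf = fc_profile_single_mode(S=1.5)
print(fcf.sum())  # close to 1: sum rule over the vibrational progression
```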
While the existence of a strongly interacting state of matter, known as “quark-gluon plasma” (QGP), has been established in heavy-ion collision experiments in the past decade, the task remains to map out the transition from hadronic matter to the QGP. This is done by measuring the dependence of key observables (such as particle suppression and elliptic flow) on the collision energy of the heavy ions. This procedure, known as "beam energy scan", has been most recently performed at the Relativistic Heavy Ion Collider (RHIC).
Utilizing a Boltzmann+hydrodynamics hybrid model, we study the collision energy dependence of initial state eccentricities and the final state elliptic and triangular flow. This approach is well suited to investigate the relative importance of hydrodynamics and hadron transport at different collision energies.
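As an illustration of the quantities named above, here is a minimal sketch (NumPy; one common convention, simplified, and not the hybrid-model implementation itself) of the initial-state participant eccentricity eps_n and the final-state flow coefficient v_n:

```python
import numpy as np

def eccentricity(x, y, n):
    """Participant-plane eccentricity eps_n from transverse positions of the
    initial-state sources (one common definition; conventions vary)."""
    x = x - np.mean(x)
    y = y - np.mean(y)
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    return np.abs(np.mean(r**n * np.exp(1j * n * phi))) / np.mean(r**n)

def flow_coefficient(phi_particles, n, psi_n=0.0):
    """v_n = <cos n(phi - Psi_n)> over the final-state particle azimuths."""
    return np.mean(np.cos(n * (phi_particles - psi_n)))
```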
The QGP that might be created in ultrarelativistic heavy-ion collisions is expected to radiate thermal dileptons. However, this thermal dilepton radiation interferes with dileptons originating from hadron decays. In the invariant mass region between the φ and J/ψ peaks (1 GeV ≤ M(l+l-) ≤ 3 GeV), the most substantial hadron-decay background originates from correlated DD̄-meson decays. We evaluate this background using a Langevin simulation for charm quarks. As background medium we utilize the well-tested UrQMD-hybrid model. The required drag and diffusion coefficients are taken from a resonance approach. The decoupling of the charm quarks from the hot medium is performed at a temperature of 130 MeV, and as hadronization mechanism a coalescence approach is chosen. This model for charm-quark interactions with the medium has already been successfully applied to the study of medium modification and elliptic flow at FAIR, RHIC and LHC energies. In these proceedings we present our results for the dilepton radiation from correlated DD̄ decays at RHIC energy, in comparison to PHENIX measurements in the invariant mass range between 1 and 3 GeV, using different interaction scenarios. These results can be utilized to estimate the thermal QGP radiation.
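A schematic sketch of the kind of Langevin update used for such charm-quark propagation is given below (Python, pre-point Ito discretization in the local rest frame of the medium; `drag` and `kappa` are placeholder coefficient functions standing in for the resonance-approach coefficients, and the boosting and cell lookup of a full simulation are omitted):

```python
import numpy as np

def langevin_step(p, T, dt, drag, kappa, rng):
    """One schematic (pre-point Ito) Langevin update of a charm-quark
    momentum vector p. `drag(pmag, T)` and `kappa(pmag, T)` are placeholder
    drag and momentum-diffusion coefficients."""
    pmag = np.linalg.norm(p)
    A = drag(pmag, T)                 # drag coefficient at this momentum and T
    K = kappa(pmag, T)                # momentum-space diffusion coefficient
    noise = rng.normal(size=3)        # Gaussian white noise, one kick per step
    return p - A * p * dt + np.sqrt(2.0 * K * dt) * noise
```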
We study the impact of nonequilibrium effects on the relevant signals within a chiral fluid dynamics model including explicit propagation of the Polyakov loop. An expanding heat bath of quarks is coupled to the Langevin dynamics of the order parameter fields. The model is able to describe relaxational processes, including critical slowing down and the enhancement of soft modes near the critical point. At the first-order phase transition we observe domain formation and phase coexistence in the sigma and Polyakov loop field, leading to a significant amount of clumping in the energy density. This effect gets even more pronounced if we go to systems at finite baryon density. Here the formation of high-density clusters could provide an important observable signal for upcoming experiments at FAIR and NICA. We conclude that improving our understanding of dynamical symmetry breaking is important to give realistic estimates for experimental observables connected to the QCD phase transition.
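For orientation, a schematic form of the Langevin equation of motion typically used for the sigma field in such chiral fluid dynamics models is given below (white-noise limit; in the actual model the damping coefficient η and the noise correlator are temperature dependent and more involved):

```latex
\partial_\mu \partial^\mu \sigma
  + \eta\,\partial_t \sigma
  + \frac{\partial V_{\mathrm{eff}}(\sigma,\ell;T)}{\partial\sigma} = \xi ,
\qquad
\langle \xi(t,\vec{x})\,\xi(t',\vec{x}')\rangle \propto
  \eta\, T\, \delta(t-t')\,\delta^{3}(\vec{x}-\vec{x}') ,
```

where ℓ denotes the Polyakov loop entering the effective potential.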
We derive the Polyakov-loop thermodynamic potential in the perturbative approach to pure SU(3) Yang-Mills theory. The potential expressed in terms of the Polyakov loop in the fundamental representation corresponds to that of the strong-coupling expansion, of which the relevant coefficients of the gluon energy distribution are specified by characters of the SU(3) group. At high temperature, the potential exhibits the correct asymptotic behavior, whereas at low temperature, it disfavors gluons as appropriate dynamical degrees of freedom. To quantify the Yang-Mills thermodynamics in the confined phase, we introduce a hybrid approach which matches the effective gluon potential to that of glueballs, constrained by the QCD trace anomaly in terms of dilaton fields.
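As a point of reference, one widely used parametrization whose group-theoretic structure (the SU(3) Haar measure, i.e. fundamental-representation characters) is close in spirit to the strong-coupling form described here reads as follows; this is a generic textbook form, not necessarily the potential derived in this work:

```latex
\frac{U(\Phi,\bar{\Phi};T)}{T^{4}}
  = -\frac{a(T)}{2}\,\bar{\Phi}\Phi
  + b(T)\,\ln\!\left[\,1 - 6\,\bar{\Phi}\Phi
      + 4\left(\Phi^{3} + \bar{\Phi}^{3}\right)
      - 3\left(\bar{\Phi}\Phi\right)^{2}\right] ,
```

where Φ and Φ̄ are the Polyakov loop and its conjugate, and a(T), b(T) are temperature-dependent coefficients.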
We investigate the properties of the QCD matter across the deconfinement phase transition. In the scope of the parton-hadron string dynamics (PHSD) transport approach, we study the strongly interacting matter in equilibrium as well as the out-of-equilibrium dynamics of relativistic heavy-ion collisions. We present here in particular the results on the electromagnetic radiation, i.e. photon and dilepton production, in relativistic heavy-ion collisions and the relevant correlator in equilibrium, i.e. the electric conductivity. By comparing our calculations for the heavy-ion collisions to the available data, we determine the relative importance of the various production sources and address the possible origin of the observed strong elliptic flow v2 of direct photons.
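As a reference for the equilibrium correlator mentioned above, here is a minimal Green-Kubo sketch (NumPy; prefactors and conventions differ between papers, and this is not the PHSD extraction itself) estimating the electric conductivity from a sampled current time series:

```python
import numpy as np

def electric_conductivity(jx, dt, T, V):
    """Green-Kubo estimate of the electric conductivity from an equilibrium
    time series of one component of the electric current, jx:
    sigma ~ (V / T) * integral_0^inf <j_x(t) j_x(0)> dt  (natural units;
    one common convention, normalization depends on how jx is defined)."""
    jx = jx - np.mean(jx)
    n = len(jx)
    # current autocorrelation <j_x(t) j_x(0)>, averaged over time origins
    acf = np.array([np.mean(jx[:n - k] * jx[k:]) for k in range(n // 2)])
    return V / T * np.trapz(acf, dx=dt)
```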
ALICE is one of the four major LHC experiments at CERN. When the accelerator enters the Run 3 data-taking period, starting in 2021, ALICE expects almost 100 times more central Pb-Pb collisions than now, resulting in a large increase of data throughput. In order to cope with this new challenge, the collaboration had to extensively rethink the whole data processing chain, with a tighter integration between the Online and Offline computing worlds. Such a system, code-named ALICE O2, is being developed in collaboration with the FAIR experiments at GSI. It is based on the ALFA framework, which provides a generalized implementation of the ALICE High Level Trigger approach, designed around distributed software entities coordinating and communicating via message passing.
We will highlight our efforts to integrate ALFA within the ALICE O2 environment. We analyze the challenges arising from the different running environments for production and development, and derive requirements for a flexible and modular software framework. In particular, we will present the ALICE O2 Data Processing Layer, which deals with ALICE-specific requirements in terms of the data model. The main goal is to reduce the complexity of developing algorithms and managing a distributed system, thereby significantly simplifying the work of the large majority of ALICE users.
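To make the message-passing architecture concrete, below is a toy pipeline of independent "devices" exchanging messages over queues (plain Python multiprocessing; a conceptual sketch only, it does not use the actual ALFA/FairMQ or O2 Data Processing Layer APIs):

```python
# Conceptual sketch of distributed software entities communicating via
# message passing: sampler -> processor -> sink, connected by queues.
from multiprocessing import Process, Queue

def sampler(out_q, n_messages=5):
    for i in range(n_messages):
        out_q.put({"tf": i, "payload": list(range(i, i + 3))})  # fake time frame
    out_q.put(None)  # end-of-stream marker

def processor(in_q, out_q):
    while (msg := in_q.get()) is not None:
        msg["payload"] = [2 * x for x in msg["payload"]]  # stand-in for reconstruction
        out_q.put(msg)
    out_q.put(None)

def sink(in_q):
    while (msg := in_q.get()) is not None:
        print("processed time frame", msg["tf"], msg["payload"])

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    devices = [Process(target=sampler, args=(q1,)),
               Process(target=processor, args=(q1, q2)),
               Process(target=sink, args=(q2,))]
    for d in devices:
        d.start()
    for d in devices:
        d.join()
```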
Background: After induction of DNA double-strand breaks (DSBs), the DNA damage response (DDR) is activated. One of the earliest events in the DDR is the phosphorylation of serine 139 on the histone variant H2AX (γH2AX), catalyzed by phosphatidylinositol 3-kinase-related kinases. Despite being extensively studied, H2AX distribution[1] across the genome and γH2AX spreading around DSB sites[2] in the context of different chromatin compaction states or transcription are yet to be fully elucidated.
Materials and methods: γH2AX was induced in human hepatocellular carcinoma cells (HepG2) by exposure to 10 Gy X-rays (250 kV, 16 mA). Samples were incubated 0.5, 3 or 24 hours post-irradiation to investigate early, intermediate and late stages of the DDR, respectively. Chromatin immunoprecipitation was performed to select H2AX-, H3- and γH2AX-enriched chromatin fractions. Chromatin-associated DNA was then sequenced on the Illumina ChIP-Seq platform. HepG2 gene expression and histone modification (H3K36me3, H3K9me3) ChIP-Seq profiles were retrieved from Gene Expression Omnibus (accession numbers GSE30240 and GSE26386, respectively).
Results: First, we combined G/C usage, gene content, gene expression and histone modification profiles (H3K36me3, H3K9me3) to define genomic compartments characterized by different chromatin compaction states or transcriptional activity. Next, we investigated H3, H2AX and γH2AX distributions in the compartments so defined, before and after exposure to ionizing radiation (IR), to study DNA repair kinetics during the DDR. Our sequencing results indicate that H2AX distribution followed H3 occupancy and, thus, the nucleosome pattern. The highest H2AX and H3 enrichment was observed in transcriptionally active compartments (euchromatin), while the lowest was found in low-G/C, gene-poor compartments (heterochromatin). Under physiological conditions, the body of highly and moderately transcribed genes was devoid of γH2AX despite presenting high H2AX levels; γH2AX accumulated in the 5' or 3' flanking regions instead. The same genes showed a prompt γH2AX accumulation during the early stage of the DDR, which then decreased over time as the DDR proceeded.
Finally, during the late stage of the DDR the residual γH2AX signal was entirely retained in heterochromatic compartments. At this stage, euchromatic compartments were completely devoid of γH2AX despite presenting high levels of non-phosphorylated H2AX.
Conclusions: We show that γH2AX distribution ultimately depends on H2AX occupancy, the latter following H3 occupancy and, thus, the nucleosome pattern. Both H2AX and H3 levels were higher in actively transcribed compartments. However, γH2AX levels were remarkably low over the body of actively transcribed genes, suggesting that transcription antagonizes γH2AX spreading. Moreover, repair processes did not take place uniformly across the genome; rather, DNA repair was affected by genomic location and transcriptional activity. We propose that the higher H2AX density in euchromatic compartments results in a high relative γH2AX concentration soon after activation of the DDR, thus favoring the recruitment of the DNA repair machinery to those compartments. Once the damage is repaired and γH2AX is removed, its residual fraction is retained in the heterochromatic compartments, which are then targeted and repaired at later times.
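To make the enrichment analysis concrete, here is a toy sketch (NumPy; hypothetical bin counts and compartment labels, far simpler than the actual ChIP-Seq pipeline) of per-compartment enrichment of a γH2AX track over an H2AX control:

```python
import numpy as np

def compartment_enrichment(signal, control, labels):
    """Toy per-compartment enrichment of a ChIP-Seq signal (e.g. gH2AX)
    over a control track (e.g. H2AX) in fixed-size genomic bins.
    `signal` and `control` are read counts per bin; `labels` assigns each
    bin to a compartment ('eu', 'het', ...). Real pipelines add input
    correction, replicate handling and statistical testing."""
    # normalize both tracks to reads per million so they are comparable
    signal = signal / signal.sum() * 1e6
    control = control / control.sum() * 1e6
    out = {}
    for comp in np.unique(labels):
        mask = labels == comp
        out[comp] = signal[mask].mean() / max(control[mask].mean(), 1e-9)
    return out

rng = np.random.default_rng(0)
labels = np.array(["eu"] * 500 + ["het"] * 500)
print(compartment_enrichment(rng.poisson(20, 1000), rng.poisson(15, 1000), labels))
```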
Network or graph theory has become a popular tool to represent and analyze large-scale interaction patterns in the brain. To derive a functional network representation from experimentally recorded neural time series, one has to identify the structure of the interactions between these time series. In neuroscience this is often done by pairwise bivariate analysis, because a fully multivariate treatment is typically not possible due to limited data and excessive computational cost. Furthermore, a true multivariate analysis would consist of analyzing the combined effects, including information-theoretic synergies and redundancies, of all possible subsets of network components. Since the number of these subsets equals the size of the power set of the network components (2^N for N components), this leads to a combinatorial explosion, i.e. a problem that is computationally intractable. In contrast, a pairwise bivariate analysis of interactions is typically feasible, but it introduces the possibility of falsely detecting spurious interactions between network components, especially due to cascade and common-drive effects. These spurious connections in a network representation may bias subsequently computed graph-theoretical measures (e.g. clustering coefficient or centrality), as these measures depend on the reliability of the graph representation from which they are computed. Strictly speaking, graph-theoretical measures are meaningful only if the underlying graph structure can be guaranteed to consist of one type of connections only, i.e. connections in the graph are guaranteed to be non-spurious. ...
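The following toy sketch (NumPy and networkx, synthetic data, arbitrary threshold) illustrates both points: a pairwise correlation network needs only N(N-1)/2 comparisons rather than 2**N subset analyses, but a common-drive effect produces a spurious edge that in turn biases graph measures such as the clustering coefficient:

```python
import numpy as np
import networkx as nx

# Toy pairwise (bivariate) functional network from N synthetic time series.
# Node 0 drives nodes 1 and 2; the resulting 1-2 correlation is a spurious
# common-drive connection that this construction does not remove.
rng = np.random.default_rng(1)
N, T = 10, 1000
data = rng.normal(size=(N, T))
data[1] += 1.5 * data[0]          # direct coupling 0 -> 1
data[2] += 1.5 * data[0]          # common drive: 0 also drives 2

corr = np.corrcoef(data)
G = nx.Graph()
G.add_nodes_from(range(N))
for i in range(N):
    for j in range(i + 1, N):
        if abs(corr[i, j]) > 0.5:  # arbitrary threshold on pairwise correlation
            G.add_edge(i, j, weight=corr[i, j])

print("edges:", list(G.edges()))                    # includes the spurious 1-2 edge
print("average clustering:", nx.average_clustering(G))
print("degree centrality:", nx.degree_centrality(G))
```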