Frankfurt Institute for Advanced Studies (FIAS)
Using more than a million randomly generated equations of state that satisfy theoretical and observational constraints, we construct a novel, scale-independent description of the sound speed in neutron stars, where the latter is expressed in a unit cube spanning the normalized radius, r/R, and the mass normalized to the maximum one, M/M_TOV. From this generic representation, a number of interesting and surprising results can be deduced. In particular, we find that light (heavy) stars have stiff (soft) cores and soft (stiff) outer layers, or that the maximum of the sound speed is located at the center of light stars but moves to the outer layers for stars with M/M_TOV ≳ 0.7, reaching a constant value of c_s^2 = 1/2 as M → M_TOV. We also show that the sound speed decreases below the conformal limit c_s^2 = 1/3 at the center of stars with M = M_TOV. Finally, we construct an analytic expression that accurately describes the radial dependence of the sound speed as a function of the neutron-star mass, thus providing an estimate of the maximum sound speed expected in a neutron star.
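The normalization behind this representation can be sketched in a few lines (an illustration only; the paper's analytic fit for the sound speed is not reproduced here, and the stellar parameters below are hypothetical):

```python
def unit_cube_coords(r, R, M, M_tov):
    """Map a point inside a star onto the normalized (r/R, M/M_TOV) plane
    underlying the scale-independent sound-speed representation."""
    return r / R, M / M_tov

# Hypothetical star: radius 12 km, mass 1.4 Msun, with M_TOV = 2.1 Msun.
x, y = unit_cube_coords(r=6.0, R=12.0, M=1.4, M_tov=2.1)
```

Every star, regardless of its absolute radius or mass, lands in the same unit square, which is what makes the representation scale-independent.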
We have investigated the systematic differences introduced when performing a Bayesian-inference analysis of the equation of state (EOS) of neutron stars employing either variable- or constant-likelihood functions. The former has the advantage of retaining the full information on the distributions of the measurements, making exhaustive usage of the data. The latter, on the other hand, has the advantage of a much simpler implementation and reduced computational costs. In both approaches, the EOSs have identical priors and have been built using the sound-speed parameterization method so as to satisfy the constraints from X-ray and gravitational-wave observations, as well as those from chiral effective theory and perturbative quantum chromodynamics. In all cases, the two approaches lead to very similar results and the 90% confidence levels essentially overlap. Some differences do appear, but only in regions where the probability density is extremely small; they are mostly due to the sharp cutoff on the binary tidal deformability, Λ̃ ≤ 720, set in the constant-likelihood approach. Our analysis has also produced two additional results. First, an inverse correlation between the normalized central number density, n_c,TOV/n_s, and the radius of a maximally massive star, R_TOV. Second, and most importantly, it has confirmed the relation between the chirp mass and the binary tidal deformability. The importance of this result is that it relates the chirp mass, which is measured very accurately, and Λ̃, which contains important information on the EOS. Hence, when the chirp mass is measured in future detections, our relation can be used to set tight constraints on Λ̃.
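The two likelihood choices can be contrasted with a minimal sketch (the Gaussian mean and width are hypothetical placeholders; 720 is the sharp cutoff on Λ̃ used in the constant-likelihood approach):

```python
import math

def variable_likelihood(lam_tilde, mean=300.0, sigma=150.0):
    """Retains the shape of the measurement distribution
    (mean and sigma here are hypothetical placeholders)."""
    return math.exp(-0.5 * ((lam_tilde - mean) / sigma) ** 2)

def constant_likelihood(lam_tilde, cutoff=720.0):
    """Flat likelihood with a sharp cutoff: cheaper, but the
    distribution's shape is discarded."""
    return 1.0 if lam_tilde <= cutoff else 0.0
```

The sharp step at the cutoff is exactly what produces the small differences in the low-probability tails discussed above.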
A considerable effort has been dedicated recently to the construction of generic equations of state (EOSs) for matter in neutron stars. The advantage of these approaches is that they can provide model-independent information on the interior structure and global properties of neutron stars. Making use of more than 106 generic EOSs, we assess the validity of quasi-universal relations of neutron-star properties for a broad range of rotation rates, from slow rotation up to the mass-shedding limit. In this way, we are able to determine with unprecedented accuracy the quasi-universal maximum-mass ratio between rotating and nonrotating stars and reveal the existence of a new relation for the surface oblateness, i.e., the ratio between the polar and equatorial proper radii. We discuss the impact that our findings have on the imminent detection of new binary neutron-star mergers and how they can be used to set new and more stringent limits on the maximum mass of nonrotating neutron stars, as well as to improve the modeling of the X-ray emission from the surface of rotating stars.
The amplification of magnetic fields plays an important role in explaining numerous astrophysical phenomena associated with binary neutron star mergers, such as mass ejection and the powering of short gamma-ray bursts. Magnetic fields in isolated neutron stars are often assumed to be confined to a small region near the stellar surface, while they are normally taken to fill the whole star in numerical modeling of mergers. By performing high-resolution, global, and high-order general-relativistic magnetohydrodynamic simulations, we investigate the impact of a purely crustal magnetic field and contrast it with the standard configuration consisting of a dipolar magnetic field with the same magnetic energy but filling the whole star. While the crust configurations are very effective in generating strong magnetic fields during the Kelvin–Helmholtz-instability stage, they fail to achieve the same level of magnetic-field amplification of the full-star configurations. This is due to the lack of magnetized material in the neutron-star interiors to be used for further turbulent amplification and to the surface losses of highly magnetized matter in the crust configurations. Hence, the final magnetic energies in the two configurations differ by more than 1 order of magnitude. We briefly discuss the impact of these results on astrophysical observables and how they can be employed to deduce the magnetic topology in merging binaries.
Post-merger gravitational-wave signal from neutron-star binaries: a new look at an old problem
(2023)
The spectral properties of the post-merger gravitational-wave signal from a binary of neutron stars encode a variety of information about the features of the system and of the equation of state describing matter around and above nuclear saturation density. Characterizing the properties of such a signal is an "old" problem, which first emerged when a number of frequencies were shown to be related to the properties of the binary through "quasi-universal" relations. Here we take a new look at this old problem by computing the properties of the signal in terms of the Weyl scalar ψ4. In this way, and using a database of more than 100 simulations, we provide the first evidence for a new instantaneous frequency, f_0^{ψ4}, associated with the instant of quasi-time-symmetry in the dynamics, which also follows a quasi-universal relation. We also derive a new quasi-universal relation for the merger frequency f_mer^h, which provides a description of the data that is four times more accurate than previous expressions while requiring fewer fitting coefficients. Finally, consistent with the findings of numerous studies before ours, and using an enlarged ensemble of binary systems, we point out that the ℓ = 2, m = 1 gravitational-wave mode could become comparable with the traditional ℓ = 2, m = 2 mode on sufficiently long timescales, with strain amplitudes in a ratio |h21|/|h22| ∼ 0.1–1 under generic orientations of the binary; this could be measured by present detectors for signals with a large signal-to-noise ratio, or by third-generation detectors for generic signals, should no collapse occur.
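The notion of an instantaneous frequency used here can be illustrated on a synthetic signal: for any complex time series (strain h or the Weyl scalar ψ4), it is the time derivative of the unwrapped phase. This is a generic sketch, not the paper's analysis pipeline:

```python
import numpy as np

# Toy monochromatic complex signal at 5 Hz standing in for real data.
t = np.linspace(0.0, 1.0, 1001)
signal = np.exp(2j * np.pi * 5.0 * t)

# Instantaneous frequency = d(phase)/dt / (2 pi), with the phase unwrapped
# to remove 2*pi jumps before differentiating.
phase = np.unwrap(np.angle(signal))
f_inst = np.gradient(phase, t) / (2.0 * np.pi)
```

For a real post-merger signal the same quantity is time-dependent, which is what allows a frequency to be associated with a specific instant such as the quasi-time-symmetric one.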
One-photon and multi-photon absorption, spontaneous and stimulated photon emission, resonance Raman scattering and electron transfer are important molecular processes that commonly involve combined vibrational-electronic (vibronic) transitions. The corresponding vibronic transition profiles in the energy domain are usually determined by Franck-Condon factors (FCFs), the squared norm of overlap integrals between vibrational wavefunctions of different electronic states. FC profiles are typically highly congested for large molecular systems and the spectra usually become poorly resolvable at elevated temperatures. The (theoretical) analyses of such spectra are even more difficult when vibrational mode mixing (Duschinsky) effects are significant, because contributions from different modes are in general not separable, even within the harmonic approximation. A few decades ago, Doktorov, Malkin and Man'ko [1979 J. Mol. Spectrosc. 77, 178] developed a coherent-state-based generating function approach and exploited the dynamical symmetry of vibrational Hamiltonians for the Duschinsky relation to describe FC transitions at zero Kelvin. Recently, the present authors extended the method to incorporate thermal, single vibronic level, non-Condon and multi-photon effects in energy, time and probability density domains for the efficient calculation and interpretation of vibronic spectra. Herein, recent developments and corresponding generating functions are presented for single vibronic levels related to fluorescence, resonance Raman scattering and anharmonic transition.
Coarse-grained modeling has become an important tool to supplement experimental measurements, allowing access to spatio-temporal scales beyond all-atom based approaches. The GōMartini model combines structure- and physics-based coarse-grained approaches, balancing computational efficiency and accurate representation of protein dynamics with the capabilities of studying proteins in different biological environments. This paper introduces an enhanced GōMartini model, which combines a virtual-site implementation of Gō models with Martini 3. The implementation has been extensively tested by the community since the release of the new version of Martini. This work demonstrates the capabilities of the model in diverse case studies, ranging from protein-membrane binding to protein-ligand interactions and AFM force profile calculations. The model is also versatile, as it can address recent inaccuracies reported in the Martini protein model. Lastly, the paper discusses the advantages, limitations, and future perspectives of the Martini 3 protein model and its combination with Gō models.
Highlights
• Brain connectivity states identified by cofluctuation strength.
• CMEP as new method to robustly predict human traits from brain imaging data.
• Network-identifying connectivity ‘events’ are not predictive of cognitive ability.
• Sixteen temporally independent fMRI time frames allow for significant prediction.
• Neuroimaging-based assessment of cognitive ability requires sufficient scan lengths.
Abstract
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (< 1.5% of 10 min resting-state fMRI) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather reflected across the entire length of the brain connectivity time series.
Measurements of the pT-dependent flow vector fluctuations in Pb-Pb collisions at √s_NN = 5.02 TeV using azimuthal correlations with the ALICE experiment at the LHC are presented. A four-particle correlation approach [1] is used to quantify the effects of flow angle and magnitude fluctuations separately. This paper extends previous studies to additional centrality intervals and provides measurements of the pT-dependent flow vector fluctuations at √s_NN = 5.02 TeV with two-particle correlations. Significant pT-dependent fluctuations of the V_2 flow vector in Pb-Pb collisions are found across different centrality ranges, with the largest fluctuations of up to ∼15% being present in the 5% most central collisions. In parallel, no evidence of significant pT-dependent fluctuations of V_3 or V_4 is found. Additionally, evidence of flow angle and magnitude fluctuations is observed with more than 5σ significance in central collisions. These observations in Pb-Pb collisions indicate where the classical picture of hydrodynamic modeling with a common symmetry plane breaks down. This has implications for hard probes at high pT, which might be biased by pT-dependent flow angle fluctuations of at least 23% in central collisions. Given the presented results, existing theoretical models should be re-examined to improve our understanding of initial conditions, quark–gluon plasma (QGP) properties, and the dynamic evolution of the created system.
The intense photon fluxes from relativistic nuclei provide an opportunity to study photonuclear interactions in ultraperipheral collisions. The measurement of coherently photoproduced π+π−π+π− final states in ultraperipheral Pb-Pb collisions at √s_NN = 5.02 TeV is presented for the first time. The cross section, dσ/dy, times the branching ratio to π+π−π+π− is found to be 47.8 ± 2.3 (stat.) ± 7.7 (syst.) mb in the rapidity interval |y| < 0.5. The invariant mass distribution is not well described by a single Breit-Wigner resonance. The production of two interfering resonances, ρ(1450) and ρ(1700), provides a good description of the data. The values of the masses (m) and widths (Γ) of the resonances extracted from the fit are m1 = 1385 ± 14 (stat.) ± 3 (syst.) MeV/c², Γ1 = 431 ± 36 (stat.) ± 82 (syst.) MeV/c², m2 = 1663 ± 13 (stat.) ± 22 (syst.) MeV/c², and Γ2 = 357 ± 31 (stat.) ± 49 (syst.) MeV/c². The measured cross sections times the branching ratios are compared to recent theoretical predictions.
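The fit model can be sketched as a coherent sum of two Breit-Wigner amplitudes (one common convention; the relative magnitude c and phase phi below are hypothetical, while the masses and widths are the fitted central values quoted above, in GeV/c²):

```python
import numpy as np

def bw(m, m0, gamma):
    """Breit-Wigner amplitude in one common convention."""
    return (m0 * gamma) / (m0**2 - m**2 - 1j * m0 * gamma)

def intensity(m, c=0.5, phi=np.pi):
    """Coherent sum of the two resonances; interference enters through
    the complex phase phi (c and phi are hypothetical here)."""
    amp = bw(m, 1.385, 0.431) + c * np.exp(1j * phi) * bw(m, 1.663, 0.357)
    return np.abs(amp) ** 2
```

Because the amplitudes are added before squaring, the interference term can enhance or deplete the spectrum between the two peaks, which a sum of two incoherent Breit-Wigner intensities cannot reproduce.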
Measurement of beauty-quark production in pp collisions at √s = 13 TeV via non-prompt D mesons
(2024)
The pT-differential production cross sections of non-prompt D0, D+, and D+s mesons originating from beauty-hadron decays are measured in proton−proton collisions at a centre-of-mass energy √s of 13 TeV. The measurements are performed at midrapidity, |y| < 0.5, with the data sample collected by ALICE from 2016 to 2018. The results are in agreement with predictions from several perturbative QCD calculations. The fragmentation fraction of beauty quarks to strange mesons divided by the one to non-strange mesons, fs/(fu+fd), is found to be 0.114 ± 0.016 (stat.) ± 0.006 (syst.) ± 0.003 (BR) ± 0.003 (extrap.). This value is compatible with previous measurements at lower centre-of-mass energies and in different collision systems, in agreement with the assumption of universality of fragmentation functions. In addition, the dependence of the non-prompt D-meson production on the centre-of-mass energy is investigated by comparing the results obtained at √s = 5.02 and 13 TeV, showing a hardening of the non-prompt D-meson pT-differential production cross section at higher √s. Finally, the bb̄ production cross section per unit of rapidity at midrapidity is calculated from the non-prompt D0, D+, D+s, and Λ+c hadron measurements, obtaining dσ/dy = 75.2 ± 3.2 (stat.) ± 5.2 (syst.) +12.3/−3.2 (extrap.) μb.
The two-particle momentum correlation functions between charm mesons (D∗± and D±) and charged light-flavor mesons (π± and K±) in all charge-combinations are measured for the first time by the ALICE Collaboration in high-multiplicity proton–proton collisions at a center-of-mass energy of √s = 13 TeV. For DK and D∗K pairs, the experimental results are in agreement with theoretical predictions of the residual strong interaction based on quantum chromodynamics calculations on the lattice and chiral effective field theory. In the case of Dπ and D∗π pairs, tension between the calculations including strong interactions and the measurement is observed. For all particle pairs, the data can be adequately described by Coulomb interaction only, indicating a shallow interaction between charm and light-flavor mesons. Finally, the scattering lengths governing the residual strong interaction of the Dπ and D∗π systems are determined by fitting the experimental correlation functions with a model that employs a Gaussian potential. The extracted values are small and compatible with zero.
ALICE is one of the four major LHC experiments at CERN. When the accelerator enters the Run 3 data-taking period, starting in 2021, ALICE expects almost 100 times more Pb-Pb central collisions than now, resulting in a large increase of data throughput. In order to cope with this new challenge, the collaboration had to extensively rethink the whole data processing chain, with a tighter integration between the Online and Offline computing worlds. Such a system, code-named ALICE O2, is being developed in collaboration with the FAIR experiments at GSI. It is based on the ALFA framework, which provides a generalized implementation of the ALICE High Level Trigger approach, designed around distributed software entities coordinating and communicating via message passing.
We will highlight our efforts to integrate ALFA within the ALICE O2 environment. We analyze the challenges arising from the different running environments for production and development, and derive requirements for a flexible and modular software framework. In particular, we will present the ALICE O2 Data Processing Layer, which deals with ALICE-specific requirements in terms of the data model. The main goal is to reduce the complexity of developing algorithms and managing a distributed system, thereby leading to a significant simplification for the large majority of ALICE users.
Highlights
• We present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series.
• The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios.
• The model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33.
Abstract
High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis as it provides continuous high-frequency measurements of ground motion. This data can be used to analyze diverse parameters related to the seismic source and to assess the potential of an earthquake to produce strong motions at certain distances and even generate tsunamis. In this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series. The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data with 0.07 ≤ RMS ≤ 0.11. Comparable results were observed in tests using synthetic data from a different region than the training data, with RMS ≤ 0.15. Furthermore, the model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33, provided that the data from a particular group of stations had similar epicentral distance constraints to those used during the model training. The robustness of the DL model can be improved to work independently from the window size of the time series and the number of stations, enabling faster estimation by the model using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements.
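The ingredients of such a model can be sketched in a few lines of numpy (a hypothetical illustration of the model family only; the paper's actual architecture, trained weights, and preprocessing are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256)          # one toy HR-GNSS displacement series
kernel = rng.normal(size=16)      # one "learned" filter (random here)

feat = np.convolve(x, kernel, mode="valid")    # 1-D convolutional layer
pooled = np.maximum(feat, 0.0).mean()          # ReLU + global avg pooling
w, b = 0.5, 6.0                                # hypothetical linear readout
magnitude_estimate = w * pooled + b
```

A trained CNN stacks many such convolution/pooling stages and learns the filters and readout from labeled waveforms; the sketch only shows how a single time series is reduced to one scalar magnitude estimate.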
PolarCAP – A deep learning approach for first motion polarity classification of earthquake waveforms
(2022)
Highlights
• We present PolarCAP, a deep learning model that classifies waveform polarity with 98% accuracy.
• The first-motion polarity of seismograms is a useful parameter, but its manual determination can be laborious and imprecise.
• We demonstrate that in several cases the model can assign trace polarity more accurately than a human analyst.
Abstract
The polarity of first P-wave arrivals plays a significant role in the effective determination of focal mechanisms, especially for smaller earthquakes. Manual estimation of polarities is not only time-consuming but also prone to human errors. This warrants a need for an automated algorithm for first-motion polarity determination. We present a deep learning model, PolarCAP, that uses an autoencoder architecture to identify first-motion polarities of earthquake waveforms. PolarCAP is trained in a supervised fashion using more than 130,000 labelled traces from the Italian seismic dataset (INSTANCE) and is cross-validated on 22,000 traces to choose the optimal set of hyperparameters. We obtain an accuracy of 0.98 on a completely unseen test dataset of almost 33,000 traces. Furthermore, we check the model generalizability by testing it on the datasets provided by previous works and show that our model achieves a higher recall on both positive and negative polarities.
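Per-class recall, the metric used for that comparison, can be sketched as follows (toy labels, +1 for positive and −1 for negative polarity):

```python
def recall(y_true, y_pred, cls):
    """Fraction of true members of class `cls` that were predicted as such."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

y_true = [1, 1, 1, -1, -1]   # toy ground-truth polarities
y_pred = [1, 1, -1, -1, 1]   # toy model predictions
```

Reporting recall separately for each polarity class guards against a model that scores high overall accuracy by favoring the majority class.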
G protein-coupled receptors (GPCRs) play a crucial role in modulating physiological responses and serve as the main drug target. Specifically, salmeterol and salbutamol, which are used for the treatment of pulmonary diseases, exert their effects by activating the GPCR β2-adrenergic receptor (β2AR). In our study, we employed coarse-grained molecular dynamics simulations with the Martini 3 force field to investigate the dynamics of drug molecules in membranes in the presence and absence of β2AR. Our simulations reveal that in more than 50% of the flip-flop events the drug molecules use the β2AR surface to permeate the membrane. The pathway along the GPCR surface is significantly more energetically favorable for the drug molecules, which was revealed by umbrella sampling simulations along spontaneous flip-flop pathways. Furthermore, we assessed the behavior of drugs with intracellular targets, such as kinase inhibitors, whose therapeutic efficacy could benefit from this observation. In summary, our results show that β2AR surface interactions can significantly enhance membrane permeation of drugs, emphasizing their potential for consideration in future drug development strategies.
Hadron lists based on experimental studies summarized by the Particle Data Group (PDG) are a crucial input for the equation of state and thermal models used in the study of strongly-interacting matter produced in heavy-ion collisions. Modeling of these strongly-interacting systems is carried out via hydrodynamical simulations, which are followed by hadronic transport codes that also require a hadronic list as input. To remain consistent throughout the different stages of modeling of a heavy-ion collision, the same hadron list with its corresponding decays must be used at each step. It has been shown that even the most uncertain states listed in the PDG from 2016 are required to reproduce partial pressures and susceptibilities from Lattice Quantum Chromodynamics with the hadronic list known as the PDG2016+. Here, we update the hadronic list for use in heavy-ion collision modeling by including the latest experimental information for all states listed in the Particle Data Booklet in 2021. We then compare our new list, called PDG2021+, to Lattice Quantum Chromodynamics results and find that it achieves even better agreement with the first principles calculations than the PDG2016+ list. Furthermore, we develop a novel scheme based on intermediate decay channels that allows for only binary decays, such that PDG2021+ will be compatible with the hadronic transport framework SMASH. Finally, we use these results to make comparisons to experimental data and discuss the impact on particle yields and spectra.
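The binary-reduction idea can be sketched as a recursive rewrite of an n-body decay into a chain of two-body decays through intermediate states (the data structures here are hypothetical and not SMASH's actual input format):

```python
def binarize(parent, products):
    """Rewrite an n-body decay parent -> a b c ... as a chain of two-body
    decays, introducing a named intermediate state for the remainder."""
    if len(products) <= 2:
        return [(parent, tuple(products))]
    first, rest = products[0], products[1:]
    inter = "X(" + "+".join(rest) + ")"       # hypothetical intermediate
    return [(parent, (first, inter))] + binarize(inter, rest)

# Example: a 3-body decay becomes two sequential 2-body decays.
chain = binarize("N(1440)", ["pi", "pi", "N"])
```

In the actual scheme the intermediate channels must also carry consistent quantum numbers and branching ratios so that the full decay chain reproduces the original n-body rates; the sketch only shows the combinatorial rewrite.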
Recent lattice QCD results, compared to a hadron resonance gas model, have shown the need for hundreds of particles in hadronic models. These extra particles influence both the equation of state and hadronic interactions within hadron transport models. Here, we introduce the PDG21+ particle list, which contains the most up-to-date database of particles and their properties. We then convert all particle decays into two-body decays so that they are compatible with SMASH, in order to produce a more consistent description of a heavy-ion collision.
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes.
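Rank-order decoding can be sketched with toy latencies: each response is summarized by the rank of its units' latencies, and orderings are compared across trials with Spearman's rank correlation (1 for identical orderings, −1 for fully reversed ones):

```python
import numpy as np

def ranks(latencies):
    """Rank of each unit's response latency (0 = earliest)."""
    return np.argsort(np.argsort(latencies))

def spearman(a, b):
    """Spearman's rho for untied ranks."""
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    return 1.0 - 6.0 * np.sum((ra - rb) ** 2) / (n * (n * n - 1))

trial = [0.1, 0.3, 0.2, 0.5]   # toy latencies for four units
```

Because only the order matters, such a decoder is insensitive to global shifts in absolute response timing, consistent with the observation that the rank order is maintained when stimulus parameters rescale the latencies.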
Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC) a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so called "contrastive analysis"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.
In order to investigate the involvement of primary visual cortex (V1) in working memory (WM), parallel, multisite recordings of multiunit activity were obtained from monkey V1 while the animals performed a delayed match-to-sample (DMS) task. During the delay period, V1 population firing rate vectors maintained a lingering trace of the sample stimulus that could be reactivated by intervening impulse stimuli that enhanced neuronal firing. This fading trace of the sample did not require active engagement of the monkeys in the DMS task and likely reflects the intrinsic dynamics of recurrent cortical networks in lower visual areas. This renders an active, attention-dependent involvement of V1 in the maintenance of working memory contents unlikely. By contrast, population responses to the test stimulus depended on the probabilistic contingencies between sample and test stimuli. Responses to tests that matched expectations were reduced which agrees with concepts of predictive coding.
We compiled an NMR data set consisting of exact nuclear Overhauser enhancement (eNOE) distance limits, residual dipolar couplings (RDCs) and scalar (J) couplings for GB3, which together form one of the largest and most diverse data sets for structural characterization of a protein to date. All data have small experimental errors, which are carefully estimated. We use the data in the research article Vogeli et al., 2015, Complementarity and congruence between exact NOEs and traditional NMR probes for spatial decoding of protein dynamics, J. Struct. Biol., 191, 3, 306–317, doi:10.1016/j.jsb.2015.07.008 [1] for cross-validation in multiple-state structural ensemble calculation. We advocate this set to be an ideal test case for molecular dynamics simulations and structure calculations.
The human growth factor receptor MET is a receptor tyrosine kinase involved in cell proliferation, migration, and survival. MET is also hijacked by the intracellular pathogen Listeria monocytogenes. Its invasion protein, internalin B (InlB), binds to MET and promotes the formation of a signaling dimer that triggers the internalization of the pathogen. Here, we use a combination of structural biology, modeling, molecular dynamics simulations, and in situ single-molecule Förster resonance energy transfer (smFRET) experiments to elucidate the early events in MET activation by Listeria. Simulations show that InlB binding stabilizes MET in a conformation that promotes dimer formation. smFRET identifies the organization of the in situ signaling dimer. Further MD simulations of the dimer model are in quantitative agreement with smFRET. We accurately describe the structural dynamics underpinning an important cellular event and introduce a powerful methodological pipeline applicable to studying the activation of other plasma membrane receptors.
Structural rearrangements play a central role in the organization and function of complex biomolecular systems. In principle, Molecular Dynamics (MD) simulations enable us to investigate these thermally activated processes with an atomic level of resolution. In practice, an exponentially large fraction of computational resources must be invested to simulate thermal fluctuations in metastable states. Path sampling methods focus the computational power on sampling the rare transitions between states. One of their outstanding limitations is the difficulty of efficiently generating paths that visit significantly different regions of the conformational space. To overcome this issue, we introduce a new algorithm for MD simulations that integrates machine learning and quantum computing. First, using functional integral methods, we derive a rigorous low-resolution spatially coarse-grained representation of the system’s dynamics, based on a small set of molecular configurations explored with machine learning. Then, we use a quantum annealer to sample the transition paths of this low-resolution theory. We provide a proof-of-concept application by simulating a benchmark conformational transition with all-atom resolution on the D-Wave quantum computer. By exploiting the unique features of quantum annealing, we generate uncorrelated trajectories at every iteration, thus addressing one of the challenges of path sampling. Once larger quantum machines become available, the interplay between quantum and classical resources may emerge as a new paradigm of high-performance scientific computing. In this work, we provide a platform to implement this integrated scheme in the field of molecular simulations.
Determining the structure and mechanisms of all individual functional modules of cells at high molecular detail has often been seen as equal to understanding how cells work. Recent technical advances have led to a flood of high-resolution structures of various macromolecular machines, but despite this wealth of detailed information, our understanding of cellular function remains incomplete. Here, we discuss present-day limitations of structural biology and highlight novel technologies that may enable us to analyze molecular functions directly inside cells. We predict that the progression toward structural cell biology will involve a shift toward conceptualizing a 4D virtual reality of cells using digital twins. These will capture cellular segments in highly enriched molecular detail, include dynamic changes, and facilitate simulations of molecular processes, leading to novel and experimentally testable predictions. Transferring biological questions into algorithms that learn from the existing wealth of data and explore novel solutions may ultimately unveil how cells work.
The hippocampal-dependent memory system and striatal-dependent memory system modulate reinforcement learning depending on feedback timing in adults, but their contributions during development remain unclear. In a 2-year longitudinal study, 6-to-7-year-old children performed a reinforcement learning task in which they received feedback immediately or with a short delay following their response. Children’s learning was found to be sensitive to feedback timing modulations in their reaction time and inverse temperature parameter, which quantifies value-guided decision-making. They showed longitudinal improvements towards more optimal value-based learning, and their hippocampal volume showed protracted maturation. Better delayed model-derived learning covaried with larger hippocampal volume longitudinally, in line with the adult literature. In contrast, a larger striatal volume in children was associated with both better immediate and delayed model-derived learning longitudinally. These findings show, for the first time, an early hippocampal contribution to the dynamic development of reinforcement learning in middle childhood, with neurally less differentiated and more cooperative memory systems than in adults.
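The role of the inverse temperature parameter in value-guided decision-making can be illustrated with a minimal Rescorla–Wagner learner under a softmax choice rule. This is a generic sketch, not the authors' fitted model; the reward probabilities, learning rate, and trial counts below are illustrative assumptions.

```python
import math
import random

def simulate(beta, alpha=0.3, trials=2000, seed=0):
    """Two-armed bandit with a Rescorla-Wagner update and softmax choice.

    beta is the inverse temperature: higher beta means choices follow the
    learned values more deterministically (all values here are illustrative).
    Returns the fraction of trials on which the better option was chosen.
    """
    rng = random.Random(seed)
    p_reward = [0.8, 0.2]   # assumed reward probabilities per option
    q = [0.0, 0.0]          # learned action values
    correct = 0
    for _ in range(trials):
        # softmax probability of choosing option 0
        w0, w1 = math.exp(beta * q[0]), math.exp(beta * q[1])
        a = 0 if rng.random() < w0 / (w0 + w1) else 1
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        q[a] += alpha * (r - q[a])  # prediction-error update
        correct += (a == 0)
    return correct / trials
```

A higher inverse temperature makes the simulated learner exploit its value estimates more reliably, which is why the parameter serves as an index of value-guided (rather than random) choice.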
Residual connections have been proposed as an architecture-based inductive bias to mitigate the problem of exploding and vanishing gradients and to increase task performance in both feed-forward and recurrent networks (RNNs) when trained with the backpropagation algorithm. Yet, little is known about how residual connections in RNNs influence their dynamics and fading memory properties. Here, we introduce weakly coupled residual recurrent networks (WCRNNs) in which residual connections result in well-defined Lyapunov exponents and allow for studying properties of fading memory. We investigate how the residual connections of WCRNNs influence their performance, network dynamics, and memory properties on a set of benchmark tasks. We show that several distinct forms of residual connections yield effective inductive biases that result in increased network expressivity. In particular, those are residual connections that (i) result in network dynamics at the proximity of the edge of chaos, (ii) allow networks to capitalize on characteristic spectral properties of the data, and (iii) result in heterogeneous memory properties. In addition, we demonstrate how our results can be extended to non-linear residuals and introduce a weakly coupled residual initialization scheme that can be used for Elman RNNs.
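The link between residual coupling and network dynamics can be probed numerically. The sketch below estimates the largest Lyapunov exponent of a generic residual Elman-style map h ← αh + (1−α)·tanh(Wh) by repeatedly renormalizing a small perturbation; this is an illustrative toy, not the WCRNN architecture of the paper, and the gain and coupling values are assumptions.

```python
import numpy as np

def largest_lyapunov(g=3.0, alpha=0.0, n=100, steps=3000, seed=0):
    """Estimate the largest Lyapunov exponent of the residual map
    h <- alpha*h + (1 - alpha)*tanh(W h), with W ~ N(0, g^2/n).

    A perturbation of fixed size eps is applied each step and the average
    log growth rate is accumulated (Benettin-style renormalization).
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, g / np.sqrt(n), (n, n))
    h = rng.normal(0.0, 1.0, n)
    d = rng.normal(0.0, 1.0, n)
    d /= np.linalg.norm(d)
    eps, acc = 1e-7, 0.0
    for _ in range(steps):
        h2 = h + eps * d                       # perturbed copy
        h = alpha * h + (1 - alpha) * np.tanh(W @ h)
        h2 = alpha * h2 + (1 - alpha) * np.tanh(W @ h2)
        diff = h2 - h
        norm = np.linalg.norm(diff)
        acc += np.log(norm / eps)              # per-step expansion rate
        d = diff / norm                        # renormalize the perturbation
    return acc / steps
```

With strong recurrence (g above 1) and no residual path (α = 0) the map is chaotic (positive exponent); pushing α toward 1 pulls the dynamics toward the identity map, illustrating how residual coupling can place a network near the edge of chaos.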
From August to November 2017, Madagascar endured an outbreak of plague. A total of 2417 cases of plague were confirmed, causing a death toll of 209. Public health intervention efforts were introduced and successfully stopped the epidemic at the end of November. The plague, however, is endemic in the region and occurs annually, posing the risk of future outbreaks. To understand plague transmission, we collected real-time data from official reports, described the outbreak's characteristics, and estimated transmission parameters using statistical and mathematical models. The pneumonic plague epidemic curve exhibited multiple peaks, coinciding with sporadic introductions of new bubonic cases. Optimal climate conditions for rat fleas to flourish were observed during the epidemic. The estimated basic reproduction number during the large wave of the epidemic was high, ranging from 5 to 7 depending on model assumptions. The incubation and infection periods for bubonic and pneumonic plague were 4.3 and 3.4 days and 3.8 and 2.9 days, respectively. Parameter estimation suggested that even with a small fraction of the population exposed to infected rat fleas (1/10,000) and a small probability of transition from a bubonic case to a secondary pneumonic case (3%), the high human-to-human transmission rate can still generate a large outbreak. Controlling rodents and fleas can prevent new index cases, but managing human-to-human transmission is key to preventing large-scale outbreaks.
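The interplay of these estimates can be illustrated with a toy SEIR model. The sketch below is a deterministic Euler integration using the abstract's pneumonic-plague figures (incubation ≈ 3.4 d, infectious period ≈ 2.9 d, R0 ≈ 6); it is not the authors' statistical model, and the population size and seeding are assumptions.

```python
def seir_outbreak(r0=6.0, incubation=3.4, infectious=2.9,
                  n=1_000_000, e0=1.0, days=120, dt=0.1):
    """Toy SEIR epidemic, Euler-integrated.

    Returns (peak number infectious, final attack rate). Parameter values
    other than r0/incubation/infectious are illustrative assumptions.
    """
    beta = r0 / (infectious * n)   # transmission coefficient implied by R0
    s, e, i, r = n - e0, e0, 0.0, 0.0
    peak = 0.0
    for _ in range(int(days / dt)):
        new_exp = beta * s * i           # new exposures per unit time
        ds = -new_exp
        de = new_exp - e / incubation    # exposed become infectious
        di = e / incubation - i / infectious
        dr = i / infectious              # recovery/removal
        s += ds * dt; e += de * dt; i += di * dt; r += dr * dt
        peak = max(peak, i)
    return peak, r / n
```

With R0 around 6 the toy epidemic infects nearly the whole population unless interrupted, while a sub-critical R0 (below 1) fizzles out — consistent with the abstract's conclusion that curbing human-to-human transmission is the key lever.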
Ebola virus (EBOV) infection is highly lethal, killing a large proportion of EBOV-infected patients within 7 days. Comprehensive data on EBOV infection are fragmented, hampering efforts in developing therapeutics and vaccines against EBOV. Under these circumstances, mathematical models become valuable resources to explore potential controlling strategies. In this paper, we employed experimental data of EBOV-infected nonhuman primates (NHPs) to construct a mathematical framework for determining windows of opportunity for treatment and vaccination. Considering a prophylactic vaccine based on recombinant vesicular stomatitis virus expressing the EBOV glycoprotein (rVSV-EBOV), vaccination could be protective if a subject is vaccinated during a period from one week to four months before infection. For the case of a therapeutic vaccine based on monoclonal antibodies (mAbs), a single dose might resolve the invasive EBOV replication even if it was administered as late as four days after infection. Our mathematical models can be used as building blocks for evaluating therapeutic and vaccine modalities as well as for evaluating public health intervention strategies in outbreaks. Future laboratory experiments will help to validate and refine the estimates of the windows of opportunity proposed here.
The search for materials with topological properties is an ongoing effort. In this article we propose a systematic statistical method, supported by machine learning techniques, that is capable of constructing topological models for a generic lattice without prior knowledge of the phase diagram. By sampling tight-binding parameter vectors from a random distribution, we obtain data sets that we label with the corresponding topological index. This labeled data is then analyzed to extract those parameters most relevant for the topological classification and to find their most likely values. We find that the marginal distributions of the parameters already define a topological model. Additional information is hidden in correlations between parameters. Here we present as a proof of concept the prediction of the Haldane model as the prototypical topological insulator for the honeycomb lattice in Altland-Zirnbauer (AZ) class A. The algorithm is straightforwardly applicable to any other AZ class or lattice, and could be generalized to interacting systems.
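The labeled-data-generation step described above can be sketched end to end for the honeycomb case: the snippet below builds the Haldane Bloch Hamiltonian in a periodic gauge and labels a tight-binding parameter vector with the lower band's Chern number via the standard Fukui–Hatsugai link-variable method. This is a minimal reimplementation, not the authors' code, and the parameter values are illustrative.

```python
import numpy as np

# Honeycomb geometry: Bravais primitive vectors and a chiral triple of
# next-nearest-neighbour vectors (all Bravais vectors, so the Bloch
# Hamiltonian below is exactly periodic, H(k + G) = H(k)).
C1 = np.array([1.5, np.sqrt(3) / 2])
C2 = np.array([1.5, -np.sqrt(3) / 2])
NNN = [C1, C2 - C1, -C2]
# Reciprocal vectors with G_i . C_j = 2*pi*delta_ij
G1 = 2 * np.pi * np.array([1 / 3, 1 / np.sqrt(3)])
G2 = 2 * np.pi * np.array([1 / 3, -1 / np.sqrt(3)])

def haldane_bloch(k, t1=1.0, t2=0.2, phi=np.pi / 2, m=0.0):
    """Two-band Bloch Hamiltonian of the Haldane model in a periodic gauge."""
    f = t1 * (1 + np.exp(1j * k @ C1) + np.exp(1j * k @ C2))
    dz = m - 2 * t2 * np.sin(phi) * sum(np.sin(k @ b) for b in NNN)
    d0 = 2 * t2 * np.cos(phi) * sum(np.cos(k @ b) for b in NNN)
    return np.array([[d0 + dz, f], [np.conj(f), d0 - dz]])

def chern_number(n=24, **params):
    """Lower-band Chern number via the Fukui-Hatsugai link-variable method."""
    u = np.empty((n, n, 2), dtype=complex)
    for i in range(n):
        for j in range(n):
            k = (i / n) * G1 + (j / n) * G2
            _, vecs = np.linalg.eigh(haldane_bloch(k, **params))
            u[i, j] = vecs[:, 0]               # lower-band eigenvector
    flux = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n  # wraparound is valid in this gauge
            plaq = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(plaq)             # Berry flux through the plaquette
    return int(round(flux / (2 * np.pi)))
```

Sampling (t2, phi, m) from a random distribution and recording `chern_number(...)` for each draw reproduces, in miniature, the labeling step on which the statistical analysis in the abstract is built.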
The ALICE collaboration at the LHC reports a measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y| < 0.8 and transverse momentum 1 < pT < 10 GeV/c, in pp collisions at √s = 2.76 TeV. Electrons not originating from semi-electronic decays of beauty hadrons are suppressed using the impact parameter of the corresponding tracks. The production cross section of beauty-decay electrons is compared to the result obtained with an alternative method which uses the distribution of the azimuthal angle between heavy-flavour decay electrons and charged hadrons. Perturbative QCD calculations agree with the measured cross section within the experimental and theoretical uncertainties. The integrated visible cross section, σ(b→e) = 3.47 ± 0.40 (stat) +1.12/−1.33 (sys) ± 0.07 (norm) μb, was extrapolated to full phase space using Fixed Order plus Next-to-Leading Log (FONLL) predictions to obtain the total bb̄ production cross section, σ(bb̄) = 130 ± 15.1 (stat) +42.1/−49.8 (sys) +3.4/−3.1 (extr) ± 2.5 (norm) ± 4.4 (BR) μb.
The ALICE Collaboration reports a differential measurement of inclusive jet suppression using pp and Pb−Pb collision data at a center-of-mass energy per nucleon-nucleon collision √sNN = 5.02 TeV. Charged-particle jets are reconstructed using the anti-kT algorithm with resolution parameters R = 0.2, 0.3, 0.4, 0.5, and 0.6 in pp collisions and R = 0.2, 0.4, 0.6 in central (0−10%), semi-central (30−50%), and peripheral (60−80%) Pb−Pb collisions. A novel approach based on machine learning is employed to mitigate the influence of jet background. This enables measurements of inclusive jet suppression in new regions of phase space, including down to the lowest jet pT ≥ 40 GeV/c at R = 0.6 in central Pb−Pb collisions. This is an important step for discriminating different models of jet quenching in the quark-gluon plasma. The transverse momentum spectra, nuclear modification factors, derived cross section, and nuclear modification factor ratios for different jet resolution parameters of charged-particle jets are presented and compared to model predictions. A mild dependence of the nuclear modification factor ratios on collision centrality and resolution parameter is observed. The results are compared to a variety of jet-quenching models with varying levels of agreement.
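The anti-kT clustering used here can be illustrated with a self-contained toy implementation: an O(N³) sketch operating on (pT, η, φ) triplets with pT-weighted recombination. Real analyses use FastJet with full four-momentum recombination; this is only a didactic stand-in.

```python
import math

def anti_kt(particles, R=0.4):
    """Toy anti-kT jet clustering.

    particles: list of (pt, eta, phi) tuples. Returns jets in the same
    format. Hardest particles cluster first because distances scale
    with the inverse squared pT.
    """
    objs = [list(p) for p in particles]
    jets = []

    def pair_dist(a, b):
        dphi = math.pi - abs(abs(a[2] - b[2]) - math.pi)   # wrapped azimuth
        dr2 = (a[1] - b[1]) ** 2 + dphi ** 2
        return min(a[0] ** -2, b[0] ** -2) * dr2 / R ** 2

    while objs:
        best, pick = float("inf"), None
        for i, a in enumerate(objs):
            if a[0] ** -2 < best:                  # beam distance d_iB
                best, pick = a[0] ** -2, (i, None)
            for j in range(i + 1, len(objs)):
                d = pair_dist(a, objs[j])
                if d < best:
                    best, pick = d, (i, j)
        i, j = pick
        if j is None:
            jets.append(tuple(objs.pop(i)))        # promote to a final jet
        else:
            a, b = objs[i], objs[j]
            pt = a[0] + b[0]                       # pT-weighted recombination
            eta = (a[0] * a[1] + b[0] * b[1]) / pt
            phi = (a[0] * a[2] + b[0] * b[2]) / pt  # naive, fine away from ±pi
            objs.pop(j); objs.pop(i)               # j > i, so pop j first
            objs.append([pt, eta, phi])
    return jets
```

Two well-separated collimated sprays (ΔR between them much larger than R) come out as two jets whose pT is the sum of their constituents, which is the behaviour the resolution parameter R controls.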
The production of the Λ(1520) baryonic resonance has been measured at midrapidity in inelastic pp collisions at √s = 7 TeV and in p-Pb collisions at √sNN = 5.02 TeV for non-single diffractive events and in multiplicity classes. The resonance is reconstructed through its hadronic decay channel Λ(1520) → pK− and the charge conjugate with the ALICE detector. The integrated yields and mean transverse momenta are calculated from the measured transverse momentum distributions in pp and p-Pb collisions. The mean transverse momenta follow mass ordering as previously observed for other hyperons in the same collision systems. A Blast-Wave function constrained by other light hadrons (π, K, K0S, p, Λ) describes the shape of the Λ(1520) transverse momentum distribution up to 3.5 GeV/c in p-Pb collisions. In the framework of this model, this observation suggests that the Λ(1520) resonance participates in the same collective radial flow as other light hadrons. The ratio of the yield of Λ(1520) to the yield of the ground state particle Λ remains constant as a function of charged-particle multiplicity, suggesting that there is no net effect of the hadronic phase in p-Pb collisions on the Λ(1520) yield.
The elliptic flow (v2) of D0 mesons from beauty-hadron decays (non-prompt D0) was measured in mid-central (30-50%) Pb-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 5.02 TeV with the ALICE detector at the LHC. The D0 mesons were reconstructed at midrapidity (|y| < 0.8) from their hadronic decay D0 → K−π+, in the transverse momentum interval 2 < pT < 12 GeV/c. The result indicates a positive v2 for non-prompt D0 mesons with a significance of 2.7σ. The non-prompt D0-meson v2 is lower than that of prompt non-strange D mesons with 3.2σ significance in 2 < pT < 8 GeV/c, and compatible with the v2 of beauty-decay electrons. Theoretical calculations of beauty-quark transport in a hydrodynamically expanding medium describe the measurement within uncertainties.
We present the charged-particle multiplicity distributions over a wide pseudorapidity range (−3.4 < η < 5.0) for pp collisions at √s = 0.9, 7, and 8 TeV at the LHC. Results are based on information from the Silicon Pixel Detector and the Forward Multiplicity Detector of ALICE, extending the pseudorapidity coverage of the earlier publications and the high-multiplicity reach. The measurements are compared to results from the CMS experiment and to PYTHIA, PHOJET and EPOS LHC event generators, as well as IP-Glasma calculations.
We report on the properties of the underlying event measured with ALICE at the LHC in pp and p−Pb collisions at √sNN = 5.02 TeV. The event activity, quantified by charged-particle number and summed-pT densities, is measured as a function of the leading-particle transverse momentum (pT,trig). These quantities are studied in three azimuthal-angle regions relative to the leading particle in the event: toward, away, and transverse. Results are presented for three different pT thresholds (0.15, 0.5, and 1 GeV/c) at mid-pseudorapidity (|η| < 0.8). The event activity in the transverse region, which is the most sensitive to the underlying event, exhibits similar behaviour in both pp and p−Pb collisions, namely, a steep increase with pT,trig for low pT,trig, followed by a saturation at pT,trig ≈ 5 GeV/c. The results from pp collisions are compared with existing measurements at other centre-of-mass energies. The quantities in the toward and away regions are also analyzed after the subtraction of the contribution measured in the transverse region. The remaining jet-like particle densities are consistent in pp and p−Pb collisions for pT,trig > 10 GeV/c, whereas for lower pT,trig values the event activity is slightly higher in p−Pb than in pp collisions. The measurements are compared with predictions from the PYTHIA 8 and EPOS LHC Monte Carlo event generators.
The first measurement of the e+e− pair production at low lepton pair transverse momentum (pT,ee) and low invariant mass (mee) in non-central Pb−Pb collisions at √sNN = 5.02 TeV at the LHC is presented. The dielectron production is studied with the ALICE detector at midrapidity (|ηe| < 0.8) as a function of invariant mass (0.4 ≤ mee < 2.7 GeV/c2) in the 50−70% and 70−90% centrality classes for pT,ee < 0.1 GeV/c, and as a function of pT,ee in three mee intervals in the most peripheral Pb−Pb collisions. Below a pT,ee of 0.1 GeV/c, a clear excess of e+e− pairs is found compared to the expectations from known hadronic sources and predictions of thermal radiation from the medium. The mee excess spectra are reproduced, within uncertainties, by different predictions of the photon−photon production of dielectrons, where the photons originate from the extremely strong electromagnetic fields generated by the highly Lorentz-contracted Pb nuclei. Lowest-order quantum electrodynamic (QED) calculations, as well as a model that takes into account the impact-parameter dependence of the average transverse momentum of the photons, also provide a good description of the pT,ee spectra. The measured √⟨pT,ee²⟩ of the excess pT,ee spectrum in peripheral Pb−Pb collisions is found to be comparable to the values observed previously at RHIC in a similar phase-space region.
The azimuthal (Δφ) correlation distributions between heavy-flavor decay electrons and associated charged particles are measured in pp and p−Pb collisions at √sNN = 5.02 TeV. Results are reported for electrons with transverse momentum 4 < pT < 16 GeV/c and pseudorapidity |η| < 0.6. The associated charged particles are selected with transverse momentum 1 < pT < 7 GeV/c, and relative pseudorapidity separation with the leading electron |Δη| < 1. The correlation measurements are performed to study and characterize the fragmentation and hadronization of heavy quarks. The correlation structures are fitted with a constant and two von Mises functions to obtain the baseline and the near- and away-side peaks, respectively. The results from p−Pb collisions are compared with those from pp collisions to study the effects of cold nuclear matter. In the measured trigger electron and associated particle kinematic regions, the two collision systems give consistent results. The Δφ distribution and the peak observables in pp and p−Pb collisions are compared with calculations from various Monte Carlo event generators.
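The fit described here — a constant baseline plus near-side and away-side von Mises peaks — can be sketched on synthetic data. This is a generic scipy reconstruction of the fit function; the yields, concentrations, and noise level below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import i0

def dphi_model(x, base, y_ns, k_ns, y_as, k_as):
    """Constant baseline plus von Mises peaks at 0 (near side) and pi (away side).

    y_* are peak yields, k_* are concentration parameters (larger = narrower).
    Each peak is a normalized von Mises density scaled by its yield.
    """
    near = y_ns * np.exp(k_ns * np.cos(x)) / (2 * np.pi * i0(k_ns))
    away = y_as * np.exp(k_as * np.cos(x - np.pi)) / (2 * np.pi * i0(k_as))
    return base + near + away

# Synthetic Delta-phi distribution with invented true parameters
rng = np.random.default_rng(42)
x = np.linspace(-0.5 * np.pi, 1.5 * np.pi, 72)
true_pars = (2.0, 1.5, 4.0, 1.0, 2.0)
y = dphi_model(x, *true_pars) + rng.normal(0.0, 0.01, x.size)

# Least-squares fit of baseline, yields, and peak widths
fit_pars, _ = curve_fit(dphi_model, x, y, p0=(1.0, 1.0, 3.0, 1.0, 3.0))
```

The fitted yields and concentrations are the kind of "peak observables" the abstract compares between pp and p−Pb collisions and against event generators.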