Heavy flavour decay muon production at forward rapidity in proton–proton collisions at √s=7 TeV
(2012)
The production of muons from heavy flavour decays is measured at forward rapidity in proton–proton collisions at √s=7 TeV collected with the ALICE experiment at the LHC. The analysis is carried out on a data sample corresponding to an integrated luminosity Lint=16.5 nb−1. The transverse momentum and rapidity differential production cross sections of muons from heavy flavour decays are measured in the rapidity range 2.5<y<4, over the transverse momentum range 2<pt<12 GeV/c. The results are compared to predictions based on perturbative QCD calculations.
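The quoted cross sections follow from efficiency-corrected yields normalized to the integrated luminosity. A minimal Python sketch of that arithmetic, with invented counts, efficiencies, and bin widths (only Lint is taken from the abstract; the real analysis also involves background subtraction and unfolding not shown here):

```python
# Sketch: differential cross section from corrected yields and luminosity.
# All per-bin numbers below are illustrative placeholders, not ALICE data.
L_int = 16.5  # integrated luminosity in nb^-1 (value quoted in the abstract)

# hypothetical per-bin raw counts, efficiencies and pt bin widths (GeV/c)
bins = [
    {"pt_lo": 2.0, "pt_hi": 3.0, "n_raw": 12000, "eff": 0.80},
    {"pt_lo": 3.0, "pt_hi": 4.0, "n_raw": 5200,  "eff": 0.82},
]

for b in bins:
    dpt = b["pt_hi"] - b["pt_lo"]
    n_corr = b["n_raw"] / b["eff"]          # efficiency-corrected yield
    dsigma_dpt = n_corr / (L_int * dpt)     # nb per (GeV/c)
    print(f"{b['pt_lo']}-{b['pt_hi']} GeV/c: dsigma/dpt ~ {dsigma_dpt:.1f} nb/(GeV/c)")
```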
Harmonic decomposition of two particle angular correlations in Pb–Pb collisions at √sNN=2.76 TeV
(2012)
Angular correlations between unidentified charged trigger (t) and associated (a) particles are measured by the ALICE experiment in Pb–Pb collisions at √sNN=2.76 TeV for transverse momenta 0.25<pTt,a<15 GeV/c, where pTt>pTa. The shapes of the pair correlation distributions are studied in a variety of collision centrality classes between 0 and 50% of the total hadronic cross section for particles in the pseudorapidity interval |η|<1.0. Distributions in relative azimuth Δϕ≡ϕt−ϕa are analyzed for |Δη|≡|ηt−ηa|>0.8, and are referred to as “long-range correlations”. Fourier components VnΔ≡〈cos(nΔϕ)〉 are extracted from the long-range azimuthal correlation functions. If particle pairs are correlated to one another through their individual correlation to a common symmetry plane, then the pair anisotropy VnΔ(pTt,pTa) is fully described in terms of single-particle anisotropies vn(pT) as VnΔ(pTt,pTa)=vn(pTt)vn(pTa). This expectation is tested for 1⩽n⩽5 by applying a global fit of all VnΔ(pTt,pTa) to obtain the best values vn{GF}(pT). It is found that for 2⩽n⩽5, the fit agrees well with data up to pTa∼3–4 GeV/c, with a trend of increasing deviation as pTt and pTa are increased or as collisions become more peripheral. This suggests that no pair correlation harmonic can be described over the full 0.25<pT<15 GeV/c range using a single vn(pT) curve; such a description is however approximately possible for 2⩽n⩽5 when pTa<4 GeV/c. For the n=1 harmonic, however, a single v1(pT) curve is not obtained even within the reduced range pTa<4 GeV/c.
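The global-fit step can be pictured as a least-squares fit of the measured pair matrix by an outer product of a single-particle curve. A sketch on synthetic data (the vn values and uncertainties are invented; the real analysis fits each harmonic n separately and respects pTt > pTa):

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the global fit V_nDelta(pTt, pTa) ~ v_n(pTt) * v_n(pTa).
# Synthetic input: a 4x4 matrix built from an assumed v_n curve plus noise.
rng = np.random.default_rng(0)
v_true = np.array([0.05, 0.09, 0.12, 0.10])        # hypothetical v_n(pT) values
V_meas = np.outer(v_true, v_true) + rng.normal(0, 1e-4, (4, 4))
V_err = np.full((4, 4), 1e-4)

def residuals(v):
    return ((np.outer(v, v) - V_meas) / V_err).ravel()

fit = least_squares(residuals, x0=np.full(4, 0.05))
v_gf = fit.x                                        # best-fit v_n{GF}(pT)
print("v_n{GF} =", np.round(v_gf, 4))
# Deviations of V_meas from np.outer(v_gf, v_gf) quantify where the
# factorization hypothesis breaks down (e.g. at high pT or for n = 1).
```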
The ALICE Collaboration reports the measurement of the relative J/ψ yield as a function of charged particle pseudorapidity density dNch/dη in pp collisions at √s=7 TeV at the LHC. J/ψ particles are detected for pt>0, in the rapidity interval |y|<0.9 via decay into e+e−, and in the interval 2.5<y<4.0 via decay into μ+μ− pairs. An approximately linear increase of the J/ψ yields normalized to their event average (dNJ/ψ/dy)/〈dNJ/ψ/dy〉 with (dNch/dη)/〈dNch/dη〉 is observed in both rapidity ranges, where dNch/dη is measured within |η|<1 and pt>0. In the highest multiplicity interval with 〈dNch/dη(bin)〉=24.1, corresponding to four times the minimum bias multiplicity density, an enhancement relative to the minimum bias J/ψ yield by a factor of about 5 at 2.5<y<4 (8 at |y|<0.9) is observed.
A measurement of the multi-strange Ξ− and Ω− baryons and their antiparticles by the ALICE experiment at the CERN Large Hadron Collider (LHC) is presented for inelastic proton–proton collisions at a centre-of-mass energy of 7 TeV. The transverse momentum (pT) distributions were studied at mid-rapidity (|y|<0.5) in the range 0.6<pT<8.5 GeV/c for Ξ− and Ξ¯+ baryons, and in the range 0.8<pT<5 GeV/c for Ω− and Ω¯+. Baryons and antibaryons were measured as separate particles, and we find that the baryon-to-antibaryon ratio of both particle species is consistent with unity over the entire range of the measurement. The statistical precision of the current data has allowed us to measure a difference between the mean pT of Ξ− (Ξ¯+) and Ω− (Ω¯+). Particle yields, mean pT, and the spectra in the intermediate pT range are not well described by the PYTHIA Perugia 2011 tune of the Monte Carlo event generator, which was tuned to reproduce the early LHC data. The discrepancy is largest for Ω− (Ω¯+). This PYTHIA tune approaches the pT spectra of the Ξ− and Ξ¯+ baryons below 0.85 GeV/c and describes the Ξ− and Ξ¯+ spectra above 6.0 GeV/c. We also illustrate the difference between the experimental data and the model by comparing the corresponding ratios of (Ω−+Ω¯+)/(Ξ−+Ξ¯+) as a function of transverse mass.
The ALICE Zero Degree Calorimeter system (ZDC) is composed of two identical sets of calorimeters placed on opposite sides of the interaction point, 114 meters away from it, complemented by two small forward electromagnetic calorimeters (ZEM). Each set of detectors consists of a neutron (ZN) and a proton (ZP) ZDC. They are placed at zero degrees with respect to the LHC axis and allow the detection of particles emitted close to the beam direction, in particular neutrons and protons emerging from hadronic heavy-ion collisions (spectator nucleons) and those emitted in electromagnetic processes. For neutrons emitted by these two processes, the ZN calorimeters have nearly 100% acceptance.
During the √sNN = 2.76 TeV Pb–Pb data taking, the ALICE Collaboration studied forward neutron emission with a dedicated trigger requiring a minimum energy deposition in at least one of the two ZN. By also exploiting the information from the two ZEM calorimeters, it has been possible to separate the contributions of electromagnetic and hadronic processes and to study single neutron vs. multiple neutron emission.
The measured cross sections of single and mutual electromagnetic dissociation (EMD) of Pb nuclei at √sNN = 2.76 TeV, with neutron emission, are σsingle EMD = 187.4 ± 0.2 (stat.) +13.2 −11.2 (syst.) b and σmutual EMD = 5.7 ± 0.1 (stat.) ± 0.4 (syst.) b, respectively [1]. This is the first measurement of electromagnetic dissociation of 208Pb nuclei at LHC energies, allowing a test of electromagnetic dissociation theory in a new energy regime. The experimental results are compared to the predictions of a relativistic electromagnetic dissociation model.
The first measurement of two-pion Bose–Einstein correlations in central Pb–Pb collisions at √sNN=2.76 TeV at the Large Hadron Collider is presented. We observe a growing trend with energy now not only for the longitudinal and the outward but also for the sideward pion source radius. The pion homogeneity volume and the decoupling time are significantly larger than those measured at RHIC.
Inclusive transverse momentum spectra of primary charged particles in Pb–Pb collisions at √sNN=2.76 TeV have been measured by the ALICE Collaboration at the LHC. The data are presented for central and peripheral collisions, corresponding to 0–5% and 70–80% of the hadronic Pb–Pb cross section. The measured charged particle spectra in |η|<0.8 and 0.3<pT<20 GeV/c are compared to the expectation in pp collisions at the same √sNN, scaled by the number of underlying nucleon–nucleon collisions. The comparison is expressed in terms of the nuclear modification factor RAA. The result indicates only weak medium effects (RAA≈0.7) in peripheral collisions. In central collisions, RAA reaches a minimum of about 0.14 at pT=6–7 GeV/c and increases significantly at larger pT. The measured suppression of high-pT particles is stronger than that observed at lower collision energies, indicating that a very dense medium is formed in central Pb–Pb collisions at the LHC.
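RAA here is the per-event Pb–Pb yield divided by the ⟨Ncoll⟩-scaled pp reference. A small sketch of that ratio with placeholder spectra (none of the numbers are ALICE data):

```python
import numpy as np

# Sketch: nuclear modification factor R_AA(pT).
# The spectra and <N_coll> below are invented placeholders, not ALICE data.
pt = np.array([1.0, 3.0, 6.5, 15.0])               # GeV/c bin centres
dN_AA = np.array([80.0, 4.0, 0.20, 0.004])         # per-event Pb-Pb yield per bin
dN_pp = np.array([0.06, 0.004, 0.00088, 2.4e-5])   # pp reference yield per bin
N_coll = 1600.0                                    # hypothetical <N_coll>, central

R_AA = dN_AA / (N_coll * dN_pp)
for p, r in zip(pt, R_AA):
    print(f"pT = {p:5.1f} GeV/c  ->  R_AA ~ {r:.2f}")
# R_AA ~ 1 means no medium effect; values well below 1, as measured,
# indicate suppression of high-pT particles in the medium.
```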
Simple Summary: The incidence of brain metastases from breast cancer is increasing, and their treatment remains a major challenge. Several scores have been developed in order to estimate the prognosis of patients with brain metastases by objective criteria. Here, we validated all three published graded-prognostic-assessment (GPA) scores in a subcohort of 882 breast cancer patients with brain metastases in the Brain Metastases in the German Breast Cancer (BMBC) registry. Although all three available GPA scores were associated with OS, they all show limitations, mainly in predicting short-term (below 3 months) survival but also in predicting long-term (above 12 months) survival. We discuss the test performance of all scores and provide guidance on how physicians can use them as a tool to select patients for different treatment options.
Abstract: Several scores have been developed in order to estimate the prognosis of patients with brain metastases (BM) by objective criteria. The aim of this analysis was to validate all three published graded-prognostic-assessment (GPA) scores in a subcohort of 882 breast cancer (BC) patients with BM in the Brain Metastases in the German Breast Cancer (BMBC) registry. The median age at diagnosis of BM was 57 years. In total, 22.3% of patients (n = 197) had triple-negative, 33.4% (n = 295) luminal A-like, 25.1% (n = 221) luminal B/HER2-enriched-like and 19.2% (n = 169) HER2-positive-like BC. Age ≥60 years, evidence of extracranial metastases (ECM), a higher number of BM, the triple-negative subtype and a low Karnofsky Performance Status (KPS) were all associated with worse overall survival (OS) in univariate analysis (p < 0.001 each). All three GPA scores were associated with OS. The breast-GPA showed the highest probability of classifying patients with survival above 12 months in the best prognostic group (specificity 68.7%, compared with 48.1% for the updated breast-GPA and 21.8% for the original GPA). Sensitivities for predicting 3-month survival were very low for all scores. In this analysis, all GPA scores showed only moderate diagnostic accuracy in predicting the OS of BC patients with BM.
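The specificity and sensitivity figures follow the standard 2×2-table definitions, sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP). A minimal sketch with hypothetical counts (not registry data):

```python
# Sketch: sensitivity and specificity of a prognostic score cut-off.
# Counts are hypothetical, not taken from the BMBC registry.
TP, FN = 60, 40    # long-term survivors classified into / out of the best group
TN, FP = 550, 250  # non-long-term survivors classified out of / into the best group

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```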
Characteristics and clinical outcome of breast cancer patients with asymptomatic brain metastases
(2020)
Simple Summary: The prognosis for patients with breast cancer that has spread to the brain is poor, and survival for these women hasn’t improved over the last few decades. We do not currently test for asymptomatic brain metastases in breast cancer patients, although this does happen in some other types of cancer. In this study we wanted to find out more about breast cancer that has spread to the brain and in particular to see whether there might be any advantage to spotting brain metastases before the development of neurological symptoms. Overall, our results suggest that women could be better off if their brain metastases are diagnosed before they begin to cause symptoms. We now need to carry out a clinical trial to see what happens if we screen high-risk breast cancer patients for brain metastases. This will verify whether doing so could increase survival, symptom control or quality of life.
Abstract: Background: Brain metastases (BM) have become a major challenge in patients with metastatic breast cancer. Methods: The aim of this analysis was to characterize patients with asymptomatic BM (n = 580) in the overall cohort of 2589 patients with BM from our Brain Metastases in Breast Cancer Network Germany (BMBC) registry. Results: Compared to symptomatic patients, asymptomatic patients were slightly younger at diagnosis (median age: 55.5 vs. 57.0 years, p = 0.01), had a better performance status at diagnosis (Karnofsky index 80–100%: 68.4% vs. 57%, p < 0.001), a lower number of BM (>1 BM: 56% vs. 70%, p = 0.027), and a slightly smaller diameter of BM (median: 1.5 vs. 2.2 cm, p < 0.001). Asymptomatic patients were more likely to have extracranial metastases (86.7% vs. 81.5%, p = 0.003) but were less likely to have leptomeningeal metastasis (6.3% vs. 10.9%, p < 0.001). Asymptomatic patients underwent less intensive BM therapy but had a longer median overall survival (statistically significant for a cohort of HER2-positive patients) compared to symptomatic patients (10.4 vs. 6.9 months, p < 0.001). Conclusions: These analyses show a trend that asymptomatic patients have less severe metastatic brain disease and despite less intensive local BM therapy still have a better outcome (statistically significant for a cohort of HER2-positive patients) than patients who present with symptomatic BM, although a lead time bias of the earlier diagnosis cannot be ruled out. Our analysis is of clinical relevance in the context of potential trials examining the benefit of early detection and treatment of BM.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus’ semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as nondeterminism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe’s proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
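As a rough illustration of pre-evaluation, the following toy sketch (not the paper's calculus) collects the set of answers a closed expression may converge to within a bounded number of steps, for a tiny language with values, a non-deterministic binary choice, and thunks:

```python
# Toy sketch of "pre-evaluation" in a non-deterministic setting.
# This is a deliberately tiny stand-in for the calculus in the paper.
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Val:
    v: int

@dataclass
class Choice:           # non-deterministic choice between two subexpressions
    left: "Expr"
    right: "Expr"

@dataclass
class Thunk:            # suspended computation; forcing it is one step
    force: Callable[[], "Expr"]

Expr = Union[Val, Choice, Thunk]

def answers(e: Expr, depth: int) -> set:
    """Set of values e may converge to within `depth` steps."""
    if depth < 0:
        return set()
    if isinstance(e, Val):
        return {e.v}
    if isinstance(e, Choice):   # explore both non-deterministic branches
        return answers(e.left, depth - 1) | answers(e.right, depth - 1)
    return answers(e.force(), depth - 1)

# may-convergence: a choice between a value and a diverging expression
loop = Thunk(lambda: loop)                        # diverges
print(answers(Choice(Val(1), loop), depth=10))    # {1}
```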
This paper shows equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, which is also an identity on letrec-free expressions.
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees, which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
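The idea behind the second translation can be illustrated outside the lambda calculus: a multi-fixpoint construction ties the knots of several mutually recursive letrec bindings at once. A Python sketch of this (a simplification; the paper works with families of multi-fixpoint combinators inside the calculus itself):

```python
# Sketch: encoding a mutually recursive letrec
#   letrec even = \n. n == 0 or odd(n - 1)
#          odd  = \n. n != 0 and even(n - 1)
#   in even 10
# via a multi-fixpoint construction that ties both knots at once.

def multi_fix(*fs):
    """Return functions (g1, ..., gn) satisfying gi = fi(g1, ..., gn)."""
    gs = {}
    # wrappers defer the lookup of gs[j] until call time, which is what
    # lets the definitions refer to each other before they exist
    wrappers = [lambda *a, j=j: gs[j](*a) for j in range(len(fs))]
    for i, f in enumerate(fs):
        gs[i] = f(*wrappers)
    return tuple(gs[i] for i in range(len(fs)))

even, odd = multi_fix(
    lambda even, odd: lambda n: n == 0 or odd(n - 1),
    lambda even, odd: lambda n: n != 0 and even(n - 1),
)
print(even(10), odd(10))  # True False
```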
Our motivation is the question whether the lazy lambda calculus, a pure lambda calculus with the leftmost outermost rewriting strategy, considered under observational semantics, or extensions thereof, are an adequate model for semantic equivalences in real-world purely functional programming languages, in particular for a pure core language of Haskell. We explore several extensions of the lazy lambda calculus: addition of a seq-operator, addition of data constructors and case-expressions, and their combination, focusing on conservativity of these extensions. In addition to untyped calculi, we study their monomorphically and polymorphically typed versions. For most of the extensions we obtain non-conservativity which we prove by providing counterexamples. However, we prove conservativity of the extension by data constructors and case in the monomorphically typed scenario.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
Background: The diagnostic accuracy of the Elecsys® HCV Duo antigen-antibody combination immunoassay (Roche Diagnostics GmbH) was evaluated for the detection of hepatitis C virus (HCV) infection, versus commercially available comparators.
Methods: This multicenter study (August 2020–March 2021) assessed the specificity of the Elecsys HCV Duo immunoassay and comparator assays in blood donor and routine clinical laboratory samples; sensitivity was determined in confirmed HCV-positive samples and seroconversion panels. The Elecsys HCV Duo immunoassay was compared with the Monolisa HCV Ag-Ab ULTRA V2, Murex HCV Ag/Ab Combination and ARCHITECT HCV Ag assays, as well as nucleic acid testing (NAT). The antibody (anti-HCV) module of the Elecsys HCV Duo immunoassay was compared with the Elecsys Anti-HCV II, Alinity s Anti-HCV, ARCHITECT Anti-HCV and RIBA HCV 3.0 SIA assays.
Results: The specificity of the Elecsys HCV Duo immunoassay was 99.94% (95% confidence interval [CI], 99.89–99.97) and 99.92% (95% CI, 99.71–99.99) in blood donor (n = 20,634) and routine clinical laboratory samples (n = 2531), respectively. The specificity of the Elecsys HCV Duo immunoassay was similar to or better than that of the comparator assays. The sensitivity of the Elecsys HCV Duo immunoassay in confirmed HCV-positive samples (n = 257) was 99.6%. In seroconversion panels, the Elecsys HCV Duo immunoassay detected infections earlier (2.2–21.9 days) than all but one of the comparator assays and detected HCV 1.8 days later than NAT.
Conclusions: The Elecsys HCV Duo immunoassay shows high diagnostic accuracy, reduces the diagnostic window, and could be used when NAT is not possible.
Phase transitions in a non-perturbative regime can be studied by ab initio Lattice Field Theory methods. The status and future research directions for LFT investigations of Quantum Chromodynamics (QCD) under extreme conditions are reviewed, including properties of hadrons and of the hypothesized QCD axion as inferred from QCD topology in different phases. We discuss phase transitions in strong interactions in an extended parameter space, and the possibility of model building for Dark Matter and Electroweak Symmetry Breaking. Methodological challenges are addressed as well, including new developments in Artificial Intelligence geared towards the identification of different phases and transitions.
The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and 1H and 13C chemical shifts. Statistics of resonances from regularly Watson–Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees, and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H10/C10 chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with a size between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stem-loop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding 13C resonances, which provides an important step toward automated structure determination of RNAs.
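The statistics-driven core of the manual approach can be pictured as nearest-cluster assignment of an observed (1H, 13C) cross-peak. A sketch with invented cluster centres (the published statistics are not reproduced here):

```python
import math

# Sketch: nearest-cluster assignment of an aromatic (1H, 13C) cross-peak.
# Cluster centres (ppm) are illustrative placeholders, not the paper's values.
clusters = {
    "H2/C2": (7.7, 153.0),
    "H5/C5": (5.5, 103.0),
    "H6/C6": (7.6, 140.0),
    "H8/C8": (7.9, 137.0),
}

def assign(h_ppm, c_ppm):
    # scale the 13C axis by ~0.1 so both dimensions contribute comparably
    def dist(name):
        ch, cc = clusters[name]
        return math.hypot(h_ppm - ch, 0.1 * (c_ppm - cc))
    return min(clusters, key=dist)

print(assign(7.65, 139.2))  # -> "H6/C6" for this made-up peak
```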
In order to quantitatively analyse the chemical and dynamical evolution of the polar vortex it has proven extremely useful to work with coordinate systems that follow the vortex flow. We propose here a two-dimensional quasi-Lagrangian coordinate system {χi, Δχi}, based on the mixing ratio χi of a long-lived stratospheric trace gas i, and its systematic use with i = N2O, in order to describe the structure of a well-developed Antarctic polar vortex. In the coordinate system {χi, Δχi} the mixing ratio χi is the vertical coordinate and Δχi = χi(θ) − χi_vort(θ) is the meridional coordinate (χi_vort(θ) being a vertical reference profile in the vortex core). The quasi-Lagrangian coordinates {χi, Δχi} persist for a much longer time than the standard isentropic coordinates, potential temperature θ and equivalent latitude Φe, do not require explicit reference to geographic space, and can be derived directly from high-resolution in situ measurements. They are therefore well suited for studying the evolution of the Antarctic polar vortex throughout the polar winter with respect to the relevant chemical and microphysical processes. Using the introduced coordinate system {χN2O, ΔχN2O} we analyse the well-developed Antarctic vortex investigated during the APE-GAIA (Airborne Polar Experiment – Geophysica Aircraft in Antarctica – 1999) campaign (Carli et al., 2000). A criterion, which uses the local in-situ measurements of χi = χi(θ) and attributes the inner vortex edge to a rapid change (δ-step) in the meridional profile of the mixing ratio χi, is developed to determine the (Antarctic) inner vortex edge. In turn, we suggest that the outer vortex edge of a well-developed Antarctic vortex can be attributed to the position of a local minimum of the χH2O gradient in the polar vortex area. For a well-developed Antarctic vortex, the ΔχN2O-parametrization of tracer–tracer relationships makes it possible to distinguish the tracer inter-relationships in the vortex core, the vortex boundary region and the surf zone, and to examine their meridional variation throughout these regions. This is illustrated by analysing the tracer–tracer relationships χi : χN2O obtained from the in-situ data of the APE-GAIA campaign for i = CFC-11, CFC-12, H-1211 and SF6. A number of solitary anomalous points in the CFC-11 : N2O correlation, observed in the Antarctic vortex core, are interpreted in terms of small-scale cross-isentropic dispersion.
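In code, the coordinate transformation reduces to interpolating the vortex-core reference profile at the potential temperature of each observation and subtracting. A numpy sketch with synthetic profiles (all values illustrative):

```python
import numpy as np

# Sketch: quasi-Lagrangian coordinates (chi, delta_chi) from in-situ data.
# chi is the N2O mixing ratio; delta_chi = chi(theta) - chi_vort(theta).
# The reference profile below is a synthetic placeholder.
theta_ref = np.array([400.0, 450.0, 500.0, 550.0])    # potential temperature (K)
chi_vort_ref = np.array([180.0, 120.0, 70.0, 40.0])   # vortex-core N2O (ppbv)

# hypothetical in-situ measurements along a flight track
theta_obs = np.array([430.0, 470.0, 520.0])
chi_obs = np.array([150.0, 115.0, 75.0])

chi_vort = np.interp(theta_obs, theta_ref, chi_vort_ref)  # chi_vort(theta)
delta_chi = chi_obs - chi_vort
print(delta_chi)  # > 0 outside the vortex core, ~0 inside
```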
Lattice strains of appropriate symmetry have served as an excellent tool to explore the interaction of superconductivity in the iron-based superconductors with nematic and stripe spin-density wave (SSDW) order, which are both closely tied to an orthorhombic distortion. In this work, we contribute to a broader understanding of the coupling of strain to superconductivity and competing normal-state orders by studying CaKFe4As4 under large, in-plane strains of B1g and B2g symmetry. In contrast to the majority of iron-based superconductors, pure CaKFe4As4 exhibits superconductivity with a relatively high transition temperature of Tc∼35 K in proximity to a non-collinear, tetragonal, hedgehog spin-vortex crystal (SVC) order. Through experiments, we demonstrate an anisotropic in-plane strain response of Tc, which is reminiscent of the behavior of other pnictides with nematicity. However, our calculations suggest that in CaKFe4As4 this anisotropic response correlates with that of the SVC fluctuations, highlighting the close interrelation of magnetism and high-Tc superconductivity. By suggesting moderate B2g strains as an effective parameter to change the stability of SVC and SSDW order, we outline a pathway to a unified phase diagram of iron-based superconductivity.
Aims: Carotid intima media thickness (CIMT) predicts cardiovascular (CVD) events, but the predictive value of CIMT change is debated. We assessed the relation between CIMT change and events in individuals at high cardiovascular risk.
Methods and results: From 31 cohorts with two CIMT scans (total n = 89,070) on average 3.6 years apart and clinical follow-up, subcohorts were drawn: (A) individuals with at least 3 cardiovascular risk factors without previous CVD events, (B) individuals with carotid plaques without previous CVD events, and (C) individuals with previous CVD events. Cox regression models were fit to estimate the hazard ratio (HR) of the combined endpoint (myocardial infarction, stroke or vascular death) per standard deviation (SD) of CIMT change, adjusted for CVD risk factors. These HRs were pooled across studies.
In groups A, B and C we observed 3483, 2845 and 1165 endpoint events, respectively. Average common CIMT was 0.79 mm (SD 0.16 mm), and annual common CIMT change was 0.01 mm (SD 0.07 mm), both in group A. The pooled HR per SD of annual common CIMT change (0.02 to 0.43 mm) was 0.99 (95% confidence interval: 0.95–1.02) in group A, 0.98 (0.93–1.04) in group B, and 0.95 (0.89–1.04) in group C. The HR per SD of common CIMT (average of the first and the second CIMT scan, 0.09 to 0.75 mm) was 1.15 (1.07–1.23) in group A, 1.13 (1.05–1.22) in group B, and 1.12 (1.05–1.20) in group C.
Conclusions: We confirm that common CIMT is associated with future CVD events in individuals at high risk. CIMT change does not relate to future event risk in high-risk individuals.
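The pooling across studies is standard inverse-variance meta-analysis on the log hazard-ratio scale. A sketch with hypothetical per-study HRs and 95% confidence intervals (not the study's cohorts):

```python
import math

# Sketch: inverse-variance pooling of per-study hazard ratios (per SD of
# annual CIMT change). Inputs are hypothetical, not the study's data.
studies = [  # (HR, 95% CI lower, 95% CI upper)
    (0.97, 0.90, 1.05),
    (1.02, 0.95, 1.10),
    (0.98, 0.88, 1.09),
]

num = den = 0.0
for hr, lo, hi in studies:
    log_hr = math.log(hr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1.0 / se**2                                  # inverse-variance weight
    num += w * log_hr
    den += w

pooled_log = num / den
se_pooled = math.sqrt(1.0 / den)
lo, hi = (math.exp(pooled_log - 1.96 * se_pooled),
          math.exp(pooled_log + 1.96 * se_pooled))
print(f"pooled HR = {math.exp(pooled_log):.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```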
The vector U bosons, or so-called “dark photons,” are one of the possible candidates for dark matter mediators. They are supposed to interact with standard matter via a “vector portal” due to the U(1)−U(1)′ symmetry group mixing, which might make them visible in particle and heavy-ion experiments. While there is no confirmed observation of dark photons, the detailed analysis of different experimental data allows one to estimate the upper limit for the kinetic mixing parameter ϵ² depending on the mass MU of the U bosons, which is also unknown. In this study we present theoretical constraints on the upper limit of ϵ²(MU) in the mass range MU ≤ 0.6 GeV from the comparison of the calculated dilepton spectra with the experimental data from the HADES collaboration at SIS18 energies, where dark photons are not observed. Our analysis is based on the microscopic Parton-Hadron-String Dynamics (PHSD) transport approach, which reproduces well the measured dilepton spectra in p + p, p + A and A + A collisions. In addition to the different dilepton channels originating from interactions and decays of ordinary matter particles (mesons and baryons), we incorporate the decay of hypothetical U bosons to dileptons, U → e+e−, where the U bosons themselves are produced by the Dalitz decays of pions, π0 → γU, of η mesons, η → γU, and of Delta resonances, Δ → NU. Our analysis can help to estimate the required accuracy for future experimental searches of “light” dark photons in dilepton experiments.
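The ϵ² dependence enters through the Dalitz-type production branching ratios; for the pion channel the relation commonly used in the dark-photon literature is BR(π0 → γU) ≈ 2ϵ²(1 − MU²/mπ0²)³ · BR(π0 → γγ). A small sketch of this scaling (the formula is quoted from the literature, not from this abstract):

```python
# Sketch: epsilon^2 scaling of dark-photon production in pi0 Dalitz decay,
# using BR(pi0 -> gamma U) ~ 2*eps^2 * (1 - M_U^2/m_pi0^2)^3 * BR(pi0 -> gamma gamma),
# a relation commonly used in the dark-photon literature.
M_PI0 = 0.1350       # pi0 mass in GeV
BR_PI0_GG = 0.988    # BR(pi0 -> gamma gamma)

def br_pi0_gamma_u(eps2, m_u):
    if m_u >= M_PI0:
        return 0.0   # channel kinematically closed
    return 2.0 * eps2 * (1.0 - m_u**2 / M_PI0**2) ** 3 * BR_PI0_GG

# example: scan over M_U for a fixed mixing eps^2 (illustrative value)
for m_u in (0.02, 0.05, 0.10):
    print(f"M_U = {m_u} GeV: BR ~ {br_pi0_gamma_u(eps2=1e-6, m_u=m_u):.2e}")
```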