University Publications
Recent lattice QCD results, compared to a hadron resonance gas model, have shown the need for hundreds of particles in hadronic models. These extra particles influence both the equation of state and hadronic interactions within hadron transport models. Here, we introduce the PDG21+ particle list, which contains the most up-to-date database of particles and their properties. We then convert all particle decays into two-body decays so that they are compatible with SMASH, in order to produce a more consistent description of a heavy-ion collision.
We study threshold testing, an elementary probing model with the goal of choosing a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once against some threshold t_i, and the test returns binary feedback on whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results of the tests for each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler's problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 − o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 − o(1).
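The non-adaptive, single-threshold setting described above can be illustrated with a small Monte Carlo sketch. This is an illustration only, not the paper's algorithm: the exponential distribution, the threshold value, and the trial counts are arbitrary choices for the example.

```python
import random

def threshold_test_value(n, t, trials=20000, rng=random.Random(0)):
    """Simulate non-adaptive threshold testing with one common threshold t
    on n i.i.d. Exp(1) variables. The algorithm only learns, per variable,
    whether X_i >= t. If at least one variable passed, it selects one of
    them (all passing variables have the same conditional expectation);
    otherwise it must settle for a variable known to be below t.
    Returns the average value actually obtained."""
    total = 0.0
    for _ in range(trials):
        xs = [rng.expovariate(1.0) for _ in range(n)]
        passed = [x for x in xs if x >= t]
        if passed:
            total += rng.choice(passed)   # realized value of a passing variable
        else:
            total += rng.choice(xs)       # all tests failed: any variable, value < t
    return total / trials

def expected_max(n, trials=20000, rng=random.Random(1)):
    """Monte Carlo estimate of the prophet's benchmark E[max X_i]."""
    return sum(max(rng.expovariate(1.0) for _ in range(n))
               for _ in range(trials)) / trials

n, t = 20, 2.5
ratio = threshold_test_value(n, t) / expected_max(n)
print(f"competitive ratio of this single-threshold strategy ≈ {ratio:.3f}")
```

For these (arbitrary) parameters the simple strategy already recovers a large fraction of the prophet's value; the paper's point is that adaptive threshold choices can push the guarantee strictly above the 0.745 i.i.d. prophet constant.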
Highlights
• Transparency of design, reference frames and support for action were found to support students' sense-making of LA dashboards.
• The higher the overall SRL score, the more relevant the three factors were perceived by learners.
• Learner goals affect how relevant students find reference frames.
• The SRL effect on the perceived relevance of transparency depends on learner goals.
Abstract
Unequal stakeholder engagement is a common pitfall of learning analytics adoption approaches in higher education, leading to lower buy-in and flawed tools that fail to meet the needs of their target groups. With each design decision, we make assumptions about how learners will make sense of the visualisations, but we know very little about how students make sense of dashboards and which aspects influence their sense-making. We investigated how learner goals and self-regulated learning (SRL) skills influence dashboard sense-making, following a mixed-methods research methodology: a qualitative pre-study followed up by an extensive quantitative study with 247 university students. We uncovered three latent variables for sense-making: transparency of design, reference frames, and support for action. SRL skills are predictors of how relevant students find these constructs. Learner goals have a significant effect only on the perceived relevance of reference frames. Knowing which factors influence students' sense-making will lead to more inclusive and flexible designs that cater to the needs of both novice and expert learners.
Current deep learning methods are regarded as favorable if they empirically perform well on dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving data is investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten. However, comparison of individual methods is nevertheless performed in isolation from the real world by monitoring accumulated benchmark test set performance. The closed world assumption remains predominant, i.e. models are evaluated on data that is guaranteed to originate from the same distribution as used for training. This poses a massive challenge as neural networks are well known to provide overconfident false predictions on unknown and corrupted instances. In this work we critically survey the literature and argue that notable lessons from open set recognition, identifying unknown examples outside of the observed set, and the adjacent field of active learning, querying data to maximize the expected performance gain, are frequently overlooked in the deep learning era. Hence, we propose a consolidated view to bridge continual learning, active learning and open set recognition in deep neural networks. Finally, the established synergies are supported empirically, showing joint improvement in alleviating catastrophic forgetting, querying data, selecting task orders, while exhibiting robust open world application.
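A minimal baseline from open-set recognition, rejecting inputs whose maximum softmax probability falls below a threshold, can be sketched as follows. This is an illustrative toy, not the consolidated view proposed in the work, and the threshold value is an arbitrary assumption; dedicated open-set methods go well beyond this simple rule.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_open_set(logits, threshold=0.9):
    """Reject-or-classify: if the maximum softmax probability is below the
    threshold, flag the input as unknown instead of forcing a closed-world
    label. Returns the class index, or None for a rejected input."""
    probs = softmax(logits)
    conf = max(probs)
    if conf < threshold:
        return None        # treated as open-set / unknown
    return probs.index(conf)

print(predict_open_set([8.0, 0.5, 0.3]))   # confident -> class 0
print(predict_open_set([1.1, 1.0, 0.9]))   # ambiguous -> None (rejected)
```

The survey's criticism applies directly to this baseline: deep networks often assign high softmax confidence to unknown or corrupted inputs, which is why combining continual learning with stronger open-set and active-learning machinery is argued for.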
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips of visual events across ten human subjects. We use the videos’ extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
The motivation for this master's thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident. The results obtained demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach can be regarded as a valuable methodology, it was not possible to investigate the underlying reasons for potential unfairness across clusters: within the test data, too few instances remained per subgroup for further fairness or feature-relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
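The comparative subgroup analysis described in the thesis can be sketched in a few lines. This is a hypothetical illustration, not the thesis code: the labels, predictions, subgroup names, and the minimum-size cutoff are invented for the example.

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups, min_size=2):
    """Compare prediction quality across subgroups. Subgroups with fewer
    than min_size test instances are reported as None, mirroring the
    limitation noted in the thesis: too few instances per subgroup make
    further fairness analysis unreliable."""
    buckets = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        buckets[g].append(t == p)
    return {
        g: (sum(hits) / len(hits) if len(hits) >= min_size else None)
        for g, hits in buckets.items()
    }

# Toy example: mortality labels, model predictions, cluster assignments.
y_true = [1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "C"]
print(subgroup_accuracy(y_true, y_pred, groups))
```

A gap in accuracy between subgroups would then be the starting point for a fairness investigation, provided each subgroup retains enough test instances.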
The intense photon fluxes from relativistic nuclei provide an opportunity to study photonuclear interactions in ultraperipheral collisions. The measurement of coherently photoproduced π+π−π+π− final states in ultraperipheral Pb−Pb collisions at √sNN = 5.02 TeV is presented for the first time. The cross section, dσ/dy, times the branching ratio (ρ → π+π+π−π−) is found to be 47.8 ± 2.3 (stat.) ± 7.7 (syst.) mb in the rapidity interval |y| < 0.5. The invariant mass distribution is not well described by a single Breit-Wigner resonance. The production of two interfering resonances, ρ(1450) and ρ(1700), provides a good description of the data. The values of the masses (m) and widths (Γ) of the resonances extracted from the fit are m1 = 1385 ± 14 (stat.) ± 3 (syst.) MeV/c2, Γ1 = 431 ± 36 (stat.) ± 82 (syst.) MeV/c2, m2 = 1663 ± 13 (stat.) ± 22 (syst.) MeV/c2, and Γ2 = 357 ± 31 (stat.) ± 49 (syst.) MeV/c2, respectively. The measured cross sections times the branching ratios are compared to recent theoretical predictions.
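For reference, a common parameterization for fitting two interfering resonances of this kind is a coherent sum of Breit-Wigner amplitudes; the generic form below is an illustration of the technique, not necessarily the exact model used in the analysis:

```latex
\frac{\mathrm{d}N}{\mathrm{d}m} \;\propto\;
\left| A_{1}\,\mathrm{BW}_{\rho(1450)}(m)
     + A_{2}\,e^{i\phi}\,\mathrm{BW}_{\rho(1700)}(m) \right|^{2},
\qquad
\mathrm{BW}_{R}(m) = \frac{\sqrt{m\,M_{R}\,\Gamma_{R}}}{M_{R}^{2} - m^{2} - i\,M_{R}\,\Gamma_{R}},
```

with the relative amplitude A2/A1 and phase φ entering as additional fit parameters; the interference term is what allows such a fit to succeed where a single Breit-Wigner fails.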
Measurements of the pT-dependent flow vector fluctuations in Pb−Pb collisions at √sNN = 5.02 TeV using azimuthal correlations with the ALICE experiment at the LHC are presented. A four-particle correlation approach [1] is used to quantify the effects of flow angle and magnitude fluctuations separately. This paper extends previous studies to additional centrality intervals and provides measurements of the pT-dependent flow vector fluctuations at √sNN = 5.02 TeV with two-particle correlations. Significant pT-dependent fluctuations of the V⃗2 flow vector in Pb−Pb collisions are found across different centrality ranges, with the largest fluctuations of up to ∼15% being present in the 5% most central collisions. In parallel, no evidence of significant pT-dependent fluctuations of V⃗3 or V⃗4 is found. Additionally, evidence of flow angle and magnitude fluctuations is observed with more than 5σ significance in central collisions. These observations in Pb−Pb collisions indicate where the classical picture of hydrodynamic modeling with a common symmetry plane breaks down. This has implications for hard probes at high pT, which might be biased by pT-dependent flow angle fluctuations of at least 23% in central collisions. Given the presented results, existing theoretical models should be re-examined to improve our understanding of initial conditions, quark–gluon plasma (QGP) properties, and the dynamic evolution of the created system.
The pT-differential production cross sections of non-prompt D0, D+, and D+s mesons originating from beauty-hadron decays are measured in proton−proton collisions at a centre-of-mass energy √s of 13 TeV. The measurements are performed at midrapidity, |y| < 0.5, with the data sample collected by ALICE from 2016 to 2018. The results are in agreement with predictions from several perturbative QCD calculations. The fragmentation fraction of beauty quarks to strange mesons divided by the one to non-strange mesons, fs/(fu+fd), is found to be 0.114 ± 0.016 (stat.) ± 0.006 (syst.) ± 0.003 (BR) ± 0.003 (extrap.). This value is compatible with previous measurements at lower centre-of-mass energies and in different collision systems, in agreement with the assumption of universality of fragmentation functions. In addition, the dependence of the non-prompt D-meson production on the centre-of-mass energy is investigated by comparing the results obtained at √s = 5.02 and 13 TeV, showing a hardening of the non-prompt D-meson pT-differential production cross section at higher √s. Finally, the bb̄ production cross section per unit of rapidity at midrapidity is calculated from the non-prompt D0, D+, D+s, and Λ+c hadron measurements, obtaining dσ/dy = 75.2 ± 3.2 (stat.) ± 5.2 (syst.) +12.3/−3.2 (extrap.) μb.
The two-particle momentum correlation functions between charm mesons (D∗± and D±) and charged light-flavor mesons (π± and K±) in all charge-combinations are measured for the first time by the ALICE Collaboration in high-multiplicity proton–proton collisions at a center-of-mass energy of √s = 13 TeV. For DK and D∗K pairs, the experimental results are in agreement with theoretical predictions of the residual strong interaction based on quantum chromodynamics calculations on the lattice and chiral effective field theory. In the case of Dπ and D∗π pairs, tension between the calculations including strong interactions and the measurement is observed. For all particle pairs, the data can be adequately described by Coulomb interaction only, indicating a shallow interaction between charm and light-flavor mesons. Finally, the scattering lengths governing the residual strong interaction of the Dπ and D∗π systems are determined by fitting the experimental correlation functions with a model that employs a Gaussian potential. The extracted values are small and compatible with zero.
Assessing communicative accommodation in the context of large language models : a semiotic approach
Recently, significant strides have been made in the ability of transformer-based chatbots to hold natural conversations. However, despite a growing societal and scientific relevance, there are few frameworks systematically deriving what it means for a chatbot conversation to be natural. The present work approaches this question through the phenomenon of communicative accommodation/interactive alignment. While there is existing research suggesting that humans adapt communicatively to technologies, the aim of this work is to explore the accommodation of AI chatbots to an interlocutor. Its research interest is twofold. Firstly, the structural ability of the transformer architecture to support accommodative behavior is assessed using a frame constructed in accordance with existing accommodation theories. This results in hypotheses to be tested empirically. Secondly, since effective accommodation produces the same outcomes regardless of technical implementation, a behavioral experiment is proposed. Existing quantifications of accommodation are reconciled, extended, and modified to apply them to non-human interlocutors. Thus, a measurement scheme is suggested which evaluates textual data from text-only, double-blind interactions between chatbots and humans, chatbots and chatbots, and humans and humans. Using the generated human-to-human convergence data as a reference, the degree of artificial accommodation can be evaluated. Accommodation as a central facet of artificial interactivity can thus be evaluated directly against its theoretical paradigm, i.e. human interaction. If subsequent examinations show that chatbots effectively do not accommodate, there may be a new form of algorithmic bias, emerging from the aggregate accommodation towards chatbots but not towards humans. Thus, existing, hegemonic semantics could be cemented through chatbot learning. Meanwhile, the ability to effectively accommodate would render chatbots vastly more susceptible to misuse.
This Letter presents the first measurement of event-by-event fluctuations of the net number (difference between the particle and antiparticle multiplicities) of multistrange hadrons Ξ− and Ξ̄+ and its correlation with the net-kaon number, using the data collected by the ALICE Collaboration in pp, p−Pb, and Pb−Pb collisions at a center-of-mass energy per nucleon pair √sNN = 5.02 TeV. The statistical hadronization model with a correlation over three units of rapidity between hadrons having the same and opposite strangeness content successfully describes the results. On the other hand, string-fragmentation models that mainly correlate strange hadrons with opposite strange quark content over a small rapidity range fail to describe the data.
The first measurement of ³ΛH and ³Λ̄H̄ differential production with respect to transverse momentum and centrality in Pb−Pb collisions at √sNN = 5.02 TeV is presented. The ³ΛH has been reconstructed via its two-charged-body decay channel, i.e., ³ΛH → ³He + π−. A Blast-Wave model fit of the pT-differential spectra of all nuclear species measured by the ALICE collaboration suggests that the ³ΛH kinetic freeze-out surface is consistent with that of other nuclei. The ratio between the integrated yields of ³ΛH and ³He is compared to predictions from the statistical hadronisation model and the coalescence model, with the latter being favoured by the presented measurements.
First measurements of hadron(h)−Λ azimuthal angular correlations in p−Pb collisions at √sNN = 5.02 TeV using the ALICE detector at the LHC are presented. These correlations are used to separate the production of associated Λ baryons into three different kinematic regions, namely those produced in the direction of the trigger particle (near-side), those produced in the opposite direction (away-side), and those whose production is uncorrelated with the jet axis (underlying event). The per-trigger associated Λ yields in these regions are extracted, along with the near- and away-side azimuthal peak widths, and the results are studied as a function of associated particle pT and event multiplicity. Comparisons with the DPMJET event generator and previous measurements of the ϕ(1020) meson are also made. The final results indicate that strangeness production in the highest multiplicity p−Pb collisions is enhanced relative to low multiplicity collisions in the jet-like regions, as well as the underlying event. The production of Λ relative to charged hadrons is also enhanced in the underlying event when compared to the jet-like regions. Additionally, the results hint that strange quark production in the away-side of the jet is modified by soft interactions with the underlying event.
Measurements of (anti)deuteron and (anti)³He production in the rapidity range |y| < 0.5 as a function of the transverse momentum and event multiplicity in Xe−Xe collisions at a center-of-mass energy per nucleon−nucleon pair of √sNN = 5.44 TeV are presented. The coalescence parameters B2 and B3 are measured as a function of the transverse momentum per nucleon. The ratios between (anti)deuteron and (anti)³He yields and those of (anti)protons and pions are reported as a function of the mean charged-particle multiplicity density, and compared with two implementations of the statistical hadronization model (SHM) and with coalescence predictions. The elliptic flow of (anti)deuterons is measured for the first time in Xe−Xe collisions and shows features similar to those already observed in Pb−Pb collisions, i.e., the mass ordering at low transverse momentum and the meson−baryon grouping at intermediate transverse momentum. The production of nuclei is particularly sensitive to the chemical freeze-out temperature of the system created in the collision, which is extracted from a grand-canonical-ensemble-based thermal fit, performed for the first time including light nuclei along with light-flavor hadrons in Xe−Xe collisions. The extracted chemical freeze-out temperature Tchem = (154.2 ± 1.1) MeV in Xe−Xe collisions is similar to that observed in Pb−Pb collisions and close to the crossover temperature predicted by lattice QCD calculations.
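For context, the coalescence parameters B2 and B3 quoted above are conventionally defined by relating the invariant yield of a nucleus with mass number A to the A-th power of the proton yield at the same momentum per nucleon; for the deuteron (A = 2):

```latex
E_{d}\,\frac{\mathrm{d}^{3}N_{d}}{\mathrm{d}p_{d}^{3}}
= B_{2} \left( E_{p}\,\frac{\mathrm{d}^{3}N_{p}}{\mathrm{d}p_{p}^{3}} \right)^{2}
\Bigg|_{\vec{p}_{p}\, =\, \vec{p}_{d}/2},
```

and analogously B3 relates the ³He yield to the cube of the proton yield evaluated at one third of the ³He momentum.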
The transverse momentum (pT) differential production cross section of the promptly-produced charm-strange baryon Ξ0c (and its charge conjugate Ξ̄0c) is measured at midrapidity via its hadronic decay into π+Ξ− in p−Pb collisions at a centre-of-mass energy per nucleon−nucleon collision √sNN = 5.02 TeV with the ALICE detector at the LHC. The Ξ0c nuclear modification factor (RpPb), calculated from the cross sections in pp and p−Pb collisions, is presented and compared with the RpPb of Λ+c baryons. The ratios between the pT-differential production cross section of Ξ0c baryons and those of D0 mesons and Λ+c baryons are also reported and compared with results at forward and backward rapidity from the LHCb Collaboration. The measurements of the production cross section of prompt Ξ0c baryons are compared with a model based on perturbative QCD calculations of charm-quark production cross sections, which includes only cold nuclear matter effects in p−Pb collisions, and underestimates the measurement by a factor of about 50. This discrepancy is reduced when the data are compared with a model in which hadronisation is implemented via quark coalescence. The pT-integrated cross section of prompt Ξ0c-baryon production at midrapidity extrapolated down to pT = 0 is also reported. These measurements offer insights and constraints for theoretical calculations of the hadronisation process. Additionally, they provide inputs for the calculation of the charm production cross section in p−Pb collisions at midrapidity.
Investigating strangeness enhancement with multiplicity in pp collisions using angular correlations
A study of strange hadron production associated with hard scattering processes and with the underlying event is conducted to investigate the origin of the enhanced production of strange hadrons in small collision systems characterised by large charged-particle multiplicities. For this purpose, the production of the single-strange meson K0S and the double-strange baryon Ξ± is measured, in each event, in the azimuthal direction of the highest-pT particle ("trigger" particle), related to hard scattering processes, and in the direction transverse to it in azimuth, associated with the underlying event, in pp collisions at √s = 5.02 TeV and √s = 13 TeV using the ALICE detector at the LHC. The per-trigger yields of K0S and Ξ± are dominated by the transverse-to-leading production (i.e., in the direction transverse to the trigger particle), whose contribution relative to the toward-leading production is observed to increase with the event charged-particle multiplicity. The transverse-to-leading and the toward-leading Ξ±/K0S yield ratios increase with the multiplicity of charged particles, suggesting that strangeness enhancement with multiplicity is associated with both hard scattering processes and the underlying event. The relative production of Ξ± with respect to K0S is higher in transverse-to-leading processes over the whole multiplicity interval covered by the measurement. The K0S and Ξ± per-trigger yields and yield ratios are compared with predictions of three different phenomenological models, namely PYTHIA 8.2 with the Monash tune, PYTHIA 8.2 with ropes, and EPOS LHC. The comparison shows that none of them can quantitatively describe either the transverse-to-leading or the toward-leading yields of K0S and Ξ±.
The first measurement of the impact-parameter dependent angular anisotropy in the decay of coherently photoproduced ρ0 mesons is presented. The ρ0 mesons are reconstructed through their decay into a pion pair. The measured anisotropy corresponds to the amplitude of the cos(2ϕ) modulation, where ϕ is the angle between the two vectors formed by the sum and the difference of the transverse momenta of the pions, respectively. The measurement was performed by the ALICE Collaboration at the LHC using data from ultraperipheral Pb−Pb collisions at a center-of-mass energy of √sNN = 5.02 TeV per nucleon pair. Different impact-parameter regions are selected by classifying the events in nuclear-breakup classes. The amplitude of the cos(2ϕ) modulation is found to increase by about one order of magnitude from large to small impact parameters. Theoretical calculations, which describe the measurement, explain the cos(2ϕ) anisotropy as the result of a quantum interference effect at the femtometer scale that arises from the ambiguity as to which of the nuclei is the source of the photon in the interaction.
The production of the K∗(892)± meson resonance is measured at midrapidity (|y| < 0.5) in Pb−Pb collisions at √sNN = 5.02 TeV using the ALICE detector at the LHC. The resonance is reconstructed via its hadronic decay channel K∗(892)± → K0Sπ±. The transverse momentum distributions are obtained for various centrality intervals in the pT range of 0.4−16 GeV/c. The reported measurements of integrated yields, mean transverse momenta, and particle yield ratios are consistent with previous ALICE measurements for K∗(892)0. The pT-integrated yield ratio 2K∗(892)±/(K++K−) in central Pb−Pb collisions shows a significant suppression (9.3σ) relative to pp collisions. Thermal model calculations overpredict the particle yield ratio. Although both the HRG-PCE and MUSIC+SMASH simulations consider the hadronic phase, only HRG-PCE accurately represents the measurements, whereas MUSIC+SMASH tends to overpredict them. These observations, along with the kinetic freeze-out temperatures extracted from the yields of light-flavored hadrons using the HRG-PCE model, indicate a finite hadronic phase lifetime, which increases towards central collisions. The pT-differential yield ratios 2K∗(892)±/(K++K−) and 2K∗(892)±/(π++π−) are suppressed by up to a factor of five at pT < 2 GeV/c in central Pb−Pb collisions compared to pp collisions at √s = 5.02 TeV. Both particle ratios are qualitatively consistent with expectations for rescattering effects in the hadronic phase. The nuclear modification factor shows a smooth evolution with centrality and is below unity at pT > 8 GeV/c, consistent with measurements for other light-flavored hadrons. The smallest values are observed in the most central collisions, indicating larger energy loss of partons traversing the dense medium.
The production of the K∗(892)± meson resonance is measured at midrapidity (|y| < 0.5) in Pb−Pb collisions at √sNN = 5.02 TeV using the ALICE detector at the CERN Large Hadron Collider. The resonance is reconstructed via its hadronic decay channel K∗(892)± → K0Sπ±. The transverse momentum distributions are obtained for various centrality intervals in the pT range of 0.4−16 GeV/c. Measurements of integrated yields, mean transverse momenta, and particle yield ratios are reported and found to be consistent with previous ALICE measurements for K∗(892)0 within uncertainties. The pT-integrated yield ratio 2K∗(892)±/(K++K−) in central Pb−Pb collisions shows a significant suppression at a level of 9.3σ relative to pp collisions. Thermal model calculations result in an overprediction of the particle yield ratio. Although both hadron resonance gas in partial chemical equilibrium (HRG-PCE) and MUSIC+SMASH simulations consider the hadronic phase, only HRG-PCE accurately represents the measurements, whereas MUSIC+SMASH simulations tend to overpredict the particle yield ratio. These observations, along with the kinetic freeze-out temperatures extracted from the yields measured for light-flavored hadrons using the HRG-PCE model, indicate a finite hadronic phase lifetime, which decreases with increasing collision centrality percentile. The pT-differential yield ratios 2K∗(892)±/(K++K−) and 2K∗(892)±/(π++π−) are presented and compared with measurements in pp collisions at √s = 5.02 TeV. Both particle ratios are found to be suppressed by up to a factor of five at pT < 2.0 GeV/c in central Pb−Pb collisions and are qualitatively consistent with expectations for rescattering effects in the hadronic phase. The nuclear modification factor (RAA) shows a smooth evolution with centrality and is found to be below unity at pT > 8 GeV/c, consistent with measurements for other light-flavored hadrons. The smallest values are observed in the most central collisions, indicating larger energy loss of partons traversing the dense medium.
A new, more precise measurement of the Λ hyperon lifetime is performed using a large data sample of Pb−Pb collisions at √sNN = 5.02 TeV with ALICE. The Λ and Λ̄ hyperons are reconstructed at midrapidity using their two-body weak decay channels Λ → p + π− and Λ̄ → p̄ + π+. The measured value of the Λ lifetime is τΛ = [261.07 ± 0.37 (stat.) ± 0.72 (syst.)] ps. The relative difference between the lifetime of Λ and Λ̄, which represents an important test of CPT invariance in the strangeness sector, is also measured. The obtained value (τΛ − τΛ̄)/τΛ = 0.0013 ± 0.0028 (stat.) ± 0.0021 (syst.) is consistent with zero within the uncertainties. Both measurements of the Λ hyperon lifetime and of the relative difference between τΛ and τΛ̄ are in agreement with the corresponding world averages of the Particle Data Group and about a factor of three more precise.
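The consistency-with-zero statement for the CPT test can be checked with a quick calculation. Assuming the statistical and systematic uncertainties are independent and combine in quadrature (an assumption of this sketch, not stated in the abstract), the deviation from zero is well below one standard deviation:

```python
import math

# Values quoted in the abstract
delta = 0.0013            # (tau_Lambda - tau_Lambdabar) / tau_Lambda
stat, syst = 0.0028, 0.0021

# Combine stat. and syst. uncertainties in quadrature (independence assumed)
total = math.hypot(stat, syst)
print(f"total uncertainty ≈ {total:.4f}")
print(f"deviation from zero ≈ {delta / total:.2f} sigma")
```

With these numbers the measured asymmetry sits at roughly 0.4σ from zero, which is what "consistent with zero within the uncertainties" quantifies.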
The production of prompt Λ+c baryons has been measured at midrapidity in the transverse momentum interval 0 < pT < 1 GeV/c for the first time, in pp and p−Pb collisions at a center-of-mass energy per nucleon−nucleon collision √sNN = 5.02 TeV. The measurement was performed in the decay channel Λ+c → pK0S by applying new decay reconstruction techniques using a Kalman-filter vertexing algorithm and adopting a machine-learning approach for the candidate selection. The pT-integrated Λ+c production cross sections in both collision systems were determined and used along with the measured yields in Pb−Pb collisions to compute the pT-integrated nuclear modification factors RpPb and RAA of Λ+c baryons, which are compared to model calculations that consider nuclear modification of the parton distribution functions. The Λ+c/D0 baryon-to-meson yield ratio is reported for pp and p−Pb collisions. Comparisons with models that include modified hadronization processes are presented, and the implications of the results for the understanding of charm hadronization in hadronic collisions are discussed. A significant (3.7σ) modification of the mean transverse momentum of Λ+c baryons is seen in p−Pb collisions with respect to pp collisions, while the pT-integrated Λ+c/D0 yield ratio was found to be consistent between the two collision systems within the uncertainties.
Long- and short-range correlations for pairs of charged particles are studied via two-particle angular correlations in pp collisions at √s = 13 TeV and p−Pb collisions at √sNN = 5.02 TeV. The correlation functions are measured as a function of relative azimuthal angle Δφ and pseudorapidity separation Δη for pairs of primary charged particles within the pseudorapidity interval |η| < 0.9 and the transverse-momentum interval 1 < pT < 4 GeV/c. Flow coefficients are extracted for the long-range correlations (1.6 < |Δη| < 1.8) in various high-multiplicity event classes using the low-multiplicity template fit method. The method is used to subtract the enhanced yield of away-side jet fragments in high-multiplicity events. These results show decreasing flow signals toward lower multiplicity events. Furthermore, the flow coefficients for events with hard probes, such as jets or leading particles, do not exhibit any significant changes compared to those obtained from high-multiplicity events without any specific event selection criteria. The results are compared with hydrodynamic-model calculations, and it is found that a better understanding of the initial conditions is necessary to describe the results, particularly for low-multiplicity events.
The human immune system is determined by the functionality of the human lymph node. With the use of high-throughput techniques in clinical diagnostics, large amounts of data are currently being collected. The new data on the spatiotemporal organization of cells offer new possibilities to build a mathematical model of the human lymph node - a virtual lymph node. The virtual lymph node can be applied to simulate drug responses and may be used in clinical diagnosis. Here, we review mathematical models of the human lymph node from the viewpoint of cellular processes. Starting with classical methods, such as systems of differential equations, we discuss the value of different levels of abstraction and survey methods ranging from these classical formalisms to artificial-intelligence techniques.
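The classical differential-equation formalism mentioned above can be illustrated with a minimal sketch. The two-compartment model below (naive and activated cell populations, with illustrative influx, activation, and decay rates) is a hypothetical toy example in the spirit of such models, not one of the reviewed lymph-node models:

```python
# A minimal toy example of the classical ODE approach: a two-compartment
# model with naive (N) and activated (A) cell populations. All rate
# constants are illustrative placeholders, not parameters of any
# published lymph-node model.

def simulate(influx=10.0, activation=0.05, decay_n=0.01, decay_a=0.1,
             dt=0.01, steps=10_000):
    """Forward-Euler integration of
        dN/dt = influx - (activation + decay_n) * N
        dA/dt = activation * N - decay_a * A
    starting from an empty compartment (N = A = 0)."""
    n = a = 0.0
    for _ in range(steps):
        dn = influx - (activation + decay_n) * n
        da = activation * n - decay_a * a
        n, a = n + dt * dn, a + dt * da
    return n, a

# The system relaxes towards the steady state
# N* = influx / (activation + decay_n),  A* = activation * N* / decay_a.
n_final, a_final = simulate()
```

Even such a toy system already exhibits the level-of-abstraction trade-off the review discusses: individual cells are averaged into population densities, which is what finer-grained (e.g. agent-based or AI-driven) approaches relax.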
The inclusive production of the charm-strange baryon Ω0c is measured for the first time via its semileptonic decay into Ω−e+νe at midrapidity (|y| < 0.8) in proton–proton (pp) collisions at the centre-of-mass energy √s = 13 TeV with the ALICE detector at the LHC. The transverse momentum (pT) differential cross section multiplied by the branching ratio is presented in the interval 2 < pT < 12 GeV/c. The branching-fraction ratio BR(Ω0c → Ω−e+νe)/BR(Ω0c → Ω−π+) is measured to be 1.12 ± 0.22 (stat.) ± 0.27 (syst.). Comparisons with other experimental measurements, as well as with theoretical calculations, are presented.
The measurement of the production of deuterons, tritons and 3He and their antiparticles in Pb-Pb collisions at √sNN = 5.02 TeV is presented in this article. The measurements are carried out at midrapidity (|y| < 0.5) as a function of collision centrality using the ALICE detector. The pT-integrated yields, the coalescence parameters and the ratios to protons and antiprotons are reported and compared with nucleosynthesis models. The comparison of these results in different collision systems at different center-of-mass collision energies reveals a suppression of nucleus production in small systems. In the Statistical Hadronisation Model framework, this can be explained by a small correlation volume where the baryon number is conserved, as already shown in previous fluctuation analyses. However, a different size of the correlation volume is required to describe the proton yields in the same data sets. The coalescence model can describe this suppression by the fact that the wave functions of the nuclei are large and the fireball size starts to become comparable and even much smaller than the actual nucleus at low multiplicities.
The knowledge of the material budget with a high precision is fundamental for measurements of direct photon production using the photon conversion method due to its direct impact on the total systematic uncertainty. Moreover, it influences many aspects of the charged-particle reconstruction performance. In this article, two procedures to determine data-driven corrections to the material-budget description in ALICE simulation software are developed. One is based on the precise knowledge of the gas composition in the Time Projection Chamber. The other is based on the robustness of the ratio between the produced number of photons and charged particles, to a large extent due to the approximate isospin symmetry in the number of produced neutral and charged pions. Both methods are applied to ALICE data allowing for a reduction of the overall material budget systematic uncertainty from 4.5% down to 2.5%. Using these methods, a locally correct material budget is also achieved. The two proposed methods are generic and can be applied to any experiment in a similar fashion.
The free energy of TAP solutions for the SK model of mean-field spin glasses can be expressed as a nonlinear functional of local terms: we exploit this feature to contrive abstract REM-like models, which we then solve by a classical large-deviations treatment. This allows us to identify the origin of the physically unsettling quadratic (in the inverse temperature) correction to the Parisi free energy for the SK model, and formalizes the true cavity dynamics which acts on TAP space, i.e. on the space of TAP solutions. From a non-spin-glass point of view, this work is the first in a series of refinements which addresses the stability of hierarchical structures in models of evolving populations.
The total charm-quark production cross section per unit of rapidity dσ(cc̄)/dy, and the fragmentation fractions of charm quarks to different charm-hadron species f(c → hc), are measured for the first time in p–Pb collisions at √sNN = 5.02 TeV at midrapidity (−0.96 < y < 0.04 in the centre-of-mass frame) using data collected by ALICE at the CERN LHC. The results are obtained based on all the available measurements of prompt production of ground-state charm-hadron species: D0, D+, D+s, and J/ψ mesons, and Λ+c and Ξ0c baryons. The resulting cross section is dσ(cc̄)/dy = 219.6 ± 6.3 (stat.) +10.5 −11.8 (syst.) +7.6 −2.9 (extr.) ± 5.4 (BR) ± 4.6 (lumi.) ± 19.5 (rapidity shape) +15.0 (Ω0c) mb, which is consistent with a binary scaling of pQCD calculations from pp collisions. The measured fragmentation fractions are compatible with those measured in pp collisions at √s = 5.02 and 13 TeV, showing an increase in the relative production rates of charm baryons with respect to charm mesons in pp and p–Pb collisions compared with e+e− and e−p collisions. The pT-integrated nuclear modification factor of charm quarks, RpPb(cc̄) = 0.91 ± 0.04 (stat.) +0.08 −0.09 (syst.) +0.04 −0.03 (extr.) ± 0.03 (lumi.), is found to be consistent with unity and with theoretical predictions including nuclear modifications of the parton distribution functions.
This work aims to differentiate strangeness produced from hard processes (jet-like) and softer processes (underlying event) by measuring the angular correlation between a high-momentum trigger hadron (h) acting as a jet proxy and a produced strange hadron (φ(1020) meson). Measuring h–φ correlations at midrapidity in p–Pb collisions at √sNN = 5.02 TeV as a function of event multiplicity provides insight into the microscopic origin of strangeness enhancement in small collision systems. The jet-like and the underlying-event-like strangeness production are investigated as a function of event multiplicity. They are also compared between a lower and higher momentum region. The evolution of the per-trigger yields within the near-side (aligned with the trigger hadron) and away-side (in the opposite direction of the trigger hadron) jet is studied separately, allowing for the characterization of two distinct jet-like production regimes. Furthermore, the h–φ correlations within the underlying event give access to a production regime dominated by soft production processes, which can be compared directly to the in-jet production. Comparisons between h–φ and dihadron correlations show that the observed strangeness enhancement is largely driven by the underlying event, where the φ/h ratio is significantly larger than within the jet regions. As multiplicity increases, the fraction of the total φ(1020) yield coming from jets decreases compared to the underlying-event production, leading to high-multiplicity events being dominated by the increased strangeness production from the underlying event.
Interacting with the environment to process sensory information, generate perceptions, and shape behavior engages neural networks in brain areas with highly varied representations, ranging from unimodal sensory cortices to higher-order association areas. Recent work suggests a much greater degree of commonality across areas, with distributed and modular networks present in both sensory and non-sensory areas during early development. However, it is currently unknown whether this initially common modular structure undergoes an equally common developmental trajectory, or whether such a modular functional organization persists in some areas—such as primary visual cortex—but not others. Here we examine the development of network organization across diverse cortical regions in ferrets of both sexes using in vivo widefield calcium imaging of spontaneous activity. We find that all regions examined, including both primary sensory cortices (visual, auditory, and somatosensory—V1, A1, and S1, respectively) and higher order association areas (prefrontal and posterior parietal cortices) exhibit a largely similar pattern of changes over an approximately 3 week developmental period spanning eye opening and the transition to predominantly externally-driven sensory activity. We find that both a modular functional organization and millimeter-scale correlated networks remain present across all cortical areas examined. These networks weakened over development in most cortical areas, but strengthened in V1. Overall, the conserved maintenance of modular organization across different cortical areas suggests a common pathway of network refinement, and suggests that a modular organization—known to encode functional representations in visual areas—may be similarly engaged in highly diverse brain areas.
Significance: Different areas of the mature brain encode vastly different representations of the world. This study shows that a modular functional organization, where nearby neurons participate in similar functional networks, is shared across different brain areas not only during early development, but also as the brain matures, where it remains a shared feature that shapes neural activity. The largely conserved trajectory of developmental changes across brain areas suggests that similar circuit mechanisms may drive this maturation. This implies that the large literature on developing cortical circuits, which is largely focused on sensory areas, may also apply more broadly, and that perturbations during development that impinge on any such shared mechanisms may produce deficits that extend across multiple brain systems.
Uniform sampling from the set G(d) of graphs with a given degree-sequence d=(d1,…,dn)∈Nn is a classical problem in the study of random graphs. We consider an analogue for temporal graphs in which the edges are labeled with integer timestamps. The input to this generation problem is a tuple D=(d,T)∈Nn×N>0 and the task is to output a uniform random sample from the set G(D) of temporal graphs with degree-sequence d and timestamps in the interval [1,T]. By allowing repeated edges with distinct timestamps, G(D) can be non-empty even if G(d) is empty, and as a consequence, existing algorithms are difficult to apply.
We describe an algorithm for this generation problem which runs in expected time O(M) if Δ^(2+ε) = O(M) for some constant ε > 0 and T − Δ = Ω(T), where M = ∑i di and Δ = maxi di. Our algorithm applies the switching method of McKay and Wormald [1] to temporal graphs: we first generate a random temporal multigraph and then remove self-loops and duplicated edges with switching operations which rewire the edges in a degree-preserving manner.
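The two-stage approach described above can be sketched as follows. This is an illustrative simplification (uniform stub pairing plus a single degree-preserving switch operation); the actual algorithm performs the switchings in a carefully controlled way to guarantee uniform samples and the stated expected running time:

```python
import random

def random_temporal_multigraph(degrees, T, rng=random):
    """Stage 1: pair up degree stubs uniformly at random (configuration
    model) and give each edge an independent uniform timestamp in [1, T].
    The result may contain self-loops and duplicate (u, v, t) triples."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1], rng.randint(1, T))
            for i in range(0, len(stubs), 2)]

def switch(edges, i, j):
    """Stage 2 building block: a degree-preserving switch replacing edges
    (a, b, s) and (c, d, t) with (a, d, s) and (c, b, t). Every vertex
    keeps its degree, so repeated switches can remove self-loops and
    duplicated edges without leaving the target degree sequence."""
    (a, b, s), (c, d, t) = edges[i], edges[j]
    edges[i], edges[j] = (a, d, s), (c, b, t)

# Degree sequence d = (2, 2, 1, 1) with timestamps in [1, 5].
edges = random_temporal_multigraph([2, 2, 1, 1], T=5)
```

Note that a switch preserves both the degree sequence and the multiset of timestamps, which is what makes it a suitable repair operation for this generation problem.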
The production cross section of inclusive isolated photons has been measured by the ALICE experiment at the CERN LHC in pp collisions at a centre-of-momentum energy of √s = 13 TeV collected during the LHC Run 2 data-taking period. The measurement is performed by combining the measurements of the electromagnetic calorimeter EMCal and the central tracking detectors ITS and TPC, covering a pseudorapidity range of |ηγ| < 0.67 and a transverse momentum range of 7 < pγT < 200 GeV/c. The result reaches lower pγT and xγT = 2pγT/√s ranges, the lowest xγT of any isolated-photon measurement to date, extending significantly those measured by the ATLAS and CMS experiments towards lower pγT at the same collision energy, with a small overlap between the measurements. The measurement is compared with next-to-leading-order perturbative QCD calculations and with the results from the ATLAS and CMS experiments, as well as with measurements at other collision energies. The measurement and the theory prediction agree within the experimental and theoretical uncertainties.
Graph4Med: a web application and a graph database for visualizing and analyzing medical databases
Background: Medical databases normally contain large amounts of data in a variety of forms. Although they grant significant insights into diagnosis and treatment, implementing data exploration on current medical databases is challenging, since these are often based on a relational schema that cannot easily be used to extract information for cohort analysis and visualization. As a consequence, valuable information regarding cohort distribution or patient similarity may be missed. With the rapid advancement of biomedical technologies, new forms of data from methods such as Next Generation Sequencing (NGS) or chromosome microarray (array CGH) are constantly being generated; hence it can be expected that the amount and complexity of medical data will rise and bring relational database systems to their limits.
Description: We present Graph4Med, a web application that relies on a graph database obtained by transforming a relational database. Graph4Med provides a straightforward visualization and analysis of a selected patient cohort. Our use case is a database of pediatric Acute Lymphoblastic Leukemia (ALL). Alongside routine patient health records, it also contains results from the latest technologies, such as NGS data. We developed a suitable graph data schema to convert the relational data into a graph data structure and store it in Neo4j. We used NeoDash to build a dashboard for querying and displaying patient-cohort analyses. This way our tool (1) quickly displays an overview of patient-cohort information such as the distributions of gender, age, mutations (fusions), and diagnoses; (2) provides mutation-based (fusion-based) similarity search and display in a maneuverable graph; (3) generates an interactive graph of any selected patient and facilitates the identification of interesting patterns among patients.
Conclusion: We demonstrate the feasibility and advantages of a graph database for storing and querying medical databases. Our dashboard allows fast and interactive analysis and visualization of complex medical data. It is especially useful for patient-similarity search based on mutations (fusions), of which vast amounts of data have been generated by NGS in recent years. It can reveal relationships and patterns in patient cohorts that are normally hard to grasp. Expanding Graph4Med to more medical databases will bring novel insights into diagnostics and research.
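The relational-to-graph conversion at the heart of the tool can be sketched in a few lines. The table layout, node labels, and the HAS_FUSION relationship name below are hypothetical illustrations, not the actual Graph4Med schema; the real application stores the resulting property graph in Neo4j:

```python
# Illustrative sketch of a relational-to-graph schema conversion of the
# kind described above. Field names and edge types are hypothetical.

def to_graph(patients, mutations):
    """patients: rows of (patient_id, diagnosis);
    mutations: rows of (patient_id, fusion).
    Returns (nodes, edges) of a property graph in which patients sharing
    a fusion become reachable from each other in two hops - the basis of
    a fusion-based patient-similarity search."""
    nodes = [("Patient", pid, {"diagnosis": dx}) for pid, dx in patients]
    nodes += [("Fusion", f, {}) for f in sorted({f for _, f in mutations})]
    edges = [(pid, "HAS_FUSION", f) for pid, f in mutations]
    return nodes, edges

patients = [(1, "ALL"), (2, "ALL")]
mutations = [(1, "ETV6-RUNX1"), (2, "ETV6-RUNX1")]
nodes, edges = to_graph(patients, mutations)
# Both Patient nodes link to one shared Fusion node.
```

Deduplicating the fusion values into shared nodes is the key design choice: similarity queries then become short graph traversals instead of relational self-joins.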
Natural Language Processing (NLP) for big data requires an efficient and sophisticated infrastructure to complete tasks both fast and correctly. Providing an intuitive and lightweight interaction with a framework that abstracts and simplifies complex tasks assists in reaching this goal. This bachelor thesis extends the NLP framework Docker Unified UIMA Interface (DUUI) by an API and a web-based graphical user interface to control and manage pipelines for automated analysis of large quantities of natural language. The extension aims to reduce the entry barrier into the field as well as to accelerate the creation and management of pipelines according to UIMA standards. Pipelines can be executed in the browser or using the web API directly and then monitored on a document level. The evaluation in usability and user experience indicates that the implementation benefits the framework by making its usage more user friendly, lightweight, and intuitive while also making the management of pipelines more efficient.
In this paper we study invasion probabilities and invasion times of cooperative parasites spreading in spatially structured host populations. The spatial structure of the host population is given by a random geometric graph on [0,1]^n, n∈N, with a Poisson(N)-distributed number of vertices, in which vertices are connected by an edge when they are at distance at most rN ∈ Θ(N^((β−1)/n)) for some 0<β<1 and N→∞. At a host infection many parasites are generated, and parasites move along edges to neighbouring hosts. We assume that parasites have to cooperate to infect hosts, in the sense that at least two parasites need to attack a host simultaneously. We find lower and upper bounds on the invasion probability of the parasites in terms of survival probabilities of branching processes with cooperation. Furthermore, we characterize the asymptotic invasion time.
An important ingredient of the proofs is a comparison with infection dynamics of cooperative parasites in host populations structured according to a complete graph, i.e. in well-mixed host populations. For these infection processes we can show that invasion probabilities are asymptotically equal to survival probabilities of branching processes with cooperation.
Furthermore, we build in the proofs on techniques developed in [BP22], where an analogous invasion process has been studied for host populations structured according to a configuration model.
We substantiate our results with simulations.
A Large Ion Collider Experiment (ALICE) is a high-energy physics experiment designed to study heavy-ion collisions at the European Organization for Nuclear Research (CERN) Large Hadron Collider (LHC). ALICE is built to study the fundamental properties of matter as it existed shortly after the Big Bang. This requires reading out millions of sensors at high frequency, enabling high statistics for physics analysis and resulting in a considerable computing demand concerning network throughput and processing power. With the ALICE Run 3 upgrade [14], requirements for a High Throughput Computing
(HTC) online processing cluster increased significantly, due to more than an order of magnitude more data than in Run 2, resulting in a processing input rate of up to 900 GB/s. Online (real-time) event reconstruction allows for the compression of the data stream to 130 GB/s, which is stored on disk for physics analysis.
This thesis presents the implementation of the ALICE Event Processing Node (EPN) compute farm built to cope with the Run 3 online computing challenges: constructing a data centre tailored to ALICE requirements for the Run 3 and Run 4 EPN farm, and providing the operational conditions for the dynamic compute environment of a High Performance Computing (HPC) cluster, which undergoes significant load changes in a short time span when a data-taking run is started or stopped. EPN servers provide the computing resources required for online reconstruction and data compression. The farm includes network connectivity towards the First Level Processors (FLPs), requiring a reliable throughput of 900 GB/s between FLPs and EPNs, and connectivity from the internal InfiniBand network to the CERN Exabyte Object Storage (EOS) Ethernet network at more than 100 GB/s.
The results of operating the EPN computing infrastructure during the first year of Run 3 LHC collisions are described in the context of the ALICE experiment. The EPN farm delivered the expected performance for ALICE data-taking. Data centre environmental conditions have remained stable for more than two years, in particular while starting and stopping runs, which entail significant changes in IT load. Several unforeseen external circumstances led to increased demands on the Online-Offline (O2) system. Higher data rates than anticipated required the network performance to exceed the initial design specifications for the throughput between FLPs and EPNs. In particular, the high throughput from the internal EPN InfiniBand network towards the storage Ethernet network was one of the challenges to overcome.
Heavy quarks are useful probes to investigate the properties of the Quark-Gluon Plasma (QGP) produced in heavy-ion collisions at the LHC, since they are produced in initial hard scattering processes. To single out the signals that are characteristic of the QGP, it is nevertheless crucial to understand the primordial heavy-quark production in vacuum, and to disentangle hot from cold nuclear matter effects. Moreover, observations of collective effects in high-multiplicity pp and p-Pb collisions show surprising similarities with those in heavy-ion collisions. Heavy-flavour production in such collisions could give further insight into the underlying processes. The heavy-flavour production can be studied with e+e− pairs from correlated semileptonic decays of heavy-flavour hadrons. Compared to single heavy-flavour measurements, the dielectron yield contains information about the initial kinematical correlations between the charm and anti-charm quarks, which is otherwise not accessible, and is sensitive to soft heavy-flavour production. We report results on correlated e+e− pairs in pp collisions recorded by the ALICE detector at different collision energies. The production of heavy quarks is discussed by comparing the yield of dielectrons from heavy-flavour hadron decays as a function of invariant mass, pair transverse momentum and distance of closest approach to the primary vertex with different Monte Carlo event generators. The heavy-flavour production cross sections are also presented. Results from high-multiplicity pp collisions at √s=13 TeV and the status of the p-Pb analysis at √sNN=5.02 TeV are reported as well.
Particle production as a function of charged-particle flattenicity in pp collisions at √s = 13 TeV
This paper reports the first measurement of the transverse momentum (pT) spectra of primary charged pions, kaons, (anti)protons, and unidentified particles as a function of the charged-particle flattenicity in pp collisions at √s = 13 TeV. Flattenicity is a novel event shape observable that is measured in the pseudorapidity intervals covered by the V0 detector, 2.8 < η < 5.1 and −3.7 < η < −1.7. According to QCD-inspired phenomenological models, it shows sensitivity to multiparton interactions and is less affected by biases towards larger pT due to local multiplicity fluctuations in the V0 acceptance than multiplicity. The analysis is performed in minimum-bias (MB) as well as in high-multiplicity events up to pT = 20 GeV/c. The event selection requires at least one charged particle produced in the pseudorapidity interval |η| < 1. The measured pT distributions, average pT, kaon-to-pion and proton-to-pion particle ratios, presented in this paper, are compared to model calculations using PYTHIA 8 based on color strings and EPOS LHC. The modification of the pT-spectral shapes in low-flattenicity events that have large event activity with respect to those measured in MB events develops a pronounced peak at intermediate pT (2 < pT < 8 GeV/c), and approaches the vicinity of unity at higher pT. The results are qualitatively described by PYTHIA, and they show different behavior than those measured as a function of charged-particle multiplicity based on the V0M estimator.
Measurement of beauty production via non-prompt charm hadrons in p-Pb collisions at √sNN = 5.02 TeV
The production cross sections of D0, D+, and Λ+c hadrons originating from beauty-hadron decays (i.e. non-prompt) were measured for the first time at midrapidity in proton−lead (p−Pb) collisions at the center-of-mass energy per nucleon pair of √sNN=5.02 TeV. Nuclear modification factors (RpPb) of non-prompt D0, D+, and Λ+c are calculated as a function of the transverse momentum (pT) to investigate the modification of the momentum spectra measured in p−Pb collisions with respect to those measured in proton−proton (pp) collisions at the same energy. The RpPb measurements are compatible with unity and with the measurements in the prompt charm sector, and do not show a significant pT dependence. The pT-integrated cross sections and pT-integrated RpPb of non-prompt D0 and D+ mesons are also computed by extrapolating the visible cross sections down to pT = 0. The non-prompt D-meson RpPb integrated over pT is compatible with unity and with model calculations implementing modification of the parton distribution functions of nucleons bound in nuclei with respect to free nucleons. The non-prompt Λ+c/D0 and D+/D0 production ratios are computed to investigate hadronisation mechanisms of beauty quarks into mesons and baryons. The measured ratios as a function of pT display a similar trend to that measured for charm hadrons in the same collision system.
The production yields of antideuterons and antiprotons are measured in pp collisions at a center-of-mass energy of √s=13 TeV, as a function of transverse momentum (pT) and rapidity (y), for the first time up to |y|=0.7. The measured spectra are used to study the pT and rapidity dependence of the coalescence parameter B2, which quantifies the coalescence probability of antideuterons. The pT and rapidity dependence of the obtained B2 is extrapolated for pT>1.7 GeV/c and |y|>0.7 using the phenomenological antideuteron production model implemented in PYTHIA 8.3 as well as a baryon coalescence afterburner model based on EPOS 3. Such measurements are of interest to the astrophysics community, since they can be used for the calculation of the flux of antinuclei from cosmic rays, in combination with coalescence models.
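For reference, the coalescence parameter B2 reported above relates the invariant (anti)deuteron yield to the square of the (anti)proton yield evaluated at half the deuteron momentum, in the standard coalescence convention:

```latex
E_{\mathrm{d}}\,\frac{\mathrm{d}^{3}N_{\mathrm{d}}}{\mathrm{d}p_{\mathrm{d}}^{3}}
  = B_{2}\,\left( E_{\mathrm{p}}\,\frac{\mathrm{d}^{3}N_{\mathrm{p}}}{\mathrm{d}p_{\mathrm{p}}^{3}} \right)^{2}
  \Bigg|_{\vec{p}_{\mathrm{p}} = \vec{p}_{\mathrm{d}}/2}
```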
Stimulated emission depletion (STED) microscopy is a super-resolution technique that surpasses the diffraction limit and has contributed to the study of dynamic processes in living cells. However, high laser intensities induce fluorophore photobleaching and sample phototoxicity, limiting the number of fluorescence images obtainable from a living cell. Here, we address these challenges by using ultra-low irradiation intensities and a neural network for image restoration, enabling extensive imaging of single living cells. The endoplasmic reticulum (ER) was chosen as the target structure due to its dynamic nature over short and long timescales. The reduced irradiation intensity combined with denoising permitted continuous ER dynamics observation in living cells for up to 7 hours with a temporal resolution of seconds. This allowed for quantitative analysis of ER structural features over short (seconds) and long (hours) timescales within the same cell, and enabled fast 3D live-cell STED microscopy. Overall, the combination of ultra-low irradiation with image restoration enables comprehensive analysis of organelle dynamics over extended periods in living cells.
The uniform sampling of simple graphs matching a prescribed degree sequence is an important tool in network science, e.g. to construct graph generators or null-models. Here, the Edge Switching Markov Chain (ES-MC) is a common choice. Given an arbitrary simple graph with the required degree sequence, ES-MC carries out a large number of small changes, called edge switches, to eventually obtain a uniform sample. In practice, reasonably short runs efficiently yield approximate uniform samples.
In this work, we study the problem of executing edge switches in parallel. We discuss parallelizations of ES-MC, but find that this approach suffers from complex dependencies between edge switches. For this reason, we propose the Global Edge Switching Markov Chain (G-ES-MC), an ES-MC variant with simpler dependencies. We show that G-ES-MC converges to the uniform distribution and design shared-memory parallel algorithms for ES-MC and G-ES-MC. In an empirical evaluation, we provide evidence that G-ES-MC requires no more switches than ES-MC (and often fewer), and demonstrate the efficiency and scalability of our parallel G-ES-MC implementation.
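A single edge switch of the kind ES-MC iterates can be sketched as follows. This minimal sequential version (pick two edges uniformly, reject any switch that would create a self-loop or multi-edge) illustrates the mechanism; it is not the paper's parallel algorithm, and it omits the bookkeeping needed for formal uniformity guarantees:

```python
import random

def edge_switch_step(edges, edge_set, rng=random):
    """One proposal of the Edge Switching Markov Chain: pick two edges
    uniformly at random and try to swap their endpoints. The switch is
    rejected (graph left unchanged) if it would create a self-loop or a
    multi-edge, so the graph stays simple and every degree is preserved."""
    i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
    if i == j:
        return False
    (a, b), (c, d) = edges[i], edges[j]
    if a == d or c == b:                      # would create a self-loop
        return False
    new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
    if new1 in edge_set or new2 in edge_set:  # would create a multi-edge
        return False
    edge_set.discard(tuple(sorted((a, b))))
    edge_set.discard(tuple(sorted((c, d))))
    edge_set.update((new1, new2))
    edges[i], edges[j] = (a, d), (c, b)
    return True

# Run the chain on a 4-cycle; all degrees stay 2 throughout.
random.seed(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
edge_set = {tuple(sorted(e)) for e in edges}
accepted = sum(edge_switch_step(edges, edge_set) for _ in range(1000))
```

The dependency problem the paper addresses is visible here: two concurrent calls touching overlapping edge slots or colliding entries in edge_set cannot simply run in parallel, which is what motivates the G-ES-MC variant.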