University Publications
Year of publication
- 2023 (182)
- 2019 (159)
- 2016 (133)
- 2021 (130)
- 2020 (116)
- 2022 (115)
- 2015 (96)
- 2017 (91)
- 2018 (82)
- 2024 (36)
- 2012 (34)
- 2011 (31)
- 2013 (31)
- 2010 (29)
- 2014 (29)
- 2009 (24)
- 2007 (11)
- 2008 (10)
- 2006 (7)
- 1998 (5)
- 2002 (5)
- 2004 (5)
- 1996 (3)
- 1999 (3)
- 1995 (2)
- 2003 (2)
- 1987 (1)
- 1989 (1)
- 1991 (1)
- 1993 (1)
- 1994 (1)
- 2000 (1)
- 2001 (1)
- 2005 (1)
Document Type
- Preprint (729)
- Article (370)
- Working Paper (68)
- Doctoral Thesis (64)
- Book (36)
- Bachelor Thesis (35)
- Conference Proceeding (23)
- Diploma Thesis (19)
- Part of a Book (11)
- Contribution to a Periodical (10)
Has Fulltext
- yes (1380)
Is part of the Bibliography
- no (1380)
Keywords
- Heavy Ion Experiments (19)
- Lambda calculus (12)
- Hadron-Hadron Scattering (11)
- Formal semantics (10)
- Hadron-Hadron scattering (experiments) (10)
- LHC (8)
- Heavy-ion collision (7)
- concurrency (6)
- functional programming (6)
- Operational semantics (5)
Institute
- Informatik (1380)
Recent lattice QCD results, compared to a hadron resonance gas model, have shown the need for hundreds of particles in hadronic models. These extra particles influence both the equation of state and hadronic interactions within hadron transport models. Here, we introduce the PDG21+ particle list, which contains the most up-to-date database of particles and their properties. We then convert all particle decays into two-body decays so that they are compatible with SMASH, in order to produce a more consistent description of a heavy-ion collision.
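The decomposition into two-body decays rests on standard relativistic kinematics: in the rest frame of a particle of mass M, the momentum of the two daughters is fixed by the Källén (triangle) function. A minimal sketch of that kinematic building block (the masses in the example are rounded illustrative values, not entries from the PDG21+ list):

```python
import math

def two_body_momentum(M, m1, m2):
    """Daughter momentum |p*| in the rest frame of a particle of mass M
    decaying into daughters of masses m1 and m2 (natural units, GeV)."""
    if M < m1 + m2:
        raise ValueError("decay is kinematically forbidden")
    # Källén function λ(M², m1², m2²) in its factorized form:
    # λ = (M² − (m1 + m2)²) · (M² − (m1 − m2)²)
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

# Example: rho(770) → π⁺π⁻ with rounded masses in GeV
p_star = two_body_momentum(0.775, 0.140, 0.140)  # ≈ 0.36 GeV
```

Chaining such two-body steps through intermediate resonances is the usual way a many-body decay is rewritten for a transport code.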
We study threshold testing, an elementary probing model with the goal of choosing a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once against some threshold t_i, and the test returns binary feedback indicating whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the test results for each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler's problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality, with a ratio of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 − o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 − o(1).
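As a toy illustration of the non-adaptive setting described above, one can simulate i.i.d. uniform variables, test each against a single common threshold, and compare the value of the selected variable to the expected maximum. The distribution, threshold, and sample sizes here are illustrative only; the paper's 0.745 and 0.869 guarantees are worst-case asymptotic bounds and are not reproduced by this sketch:

```python
import random

def simulate(n, t, trials=20000, rng=random.Random(1)):
    """Non-adaptive threshold testing on n i.i.d. Uniform(0,1) variables:
    every variable is tested against the same threshold t, and we select
    one that passed (conditional mean (t+1)/2) if any did, otherwise an
    arbitrary one. Returns (E[selected value], E[max])."""
    sel_sum = max_sum = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        passed = [x for x in xs if x >= t]
        # All passers share the same conditional expectation, so any one will do.
        sel_sum += rng.choice(passed) if passed else rng.choice(xs)
        max_sum += max(xs)
    return sel_sum / trials, max_sum / trials

sel, mx = simulate(n=10, t=0.8)
ratio = sel / mx
```

For Uniform(0,1) with n = 10 and t = 0.8 the empirical ratio comes out around 0.93, well above the worst-case bound, since the uniform distribution is particularly benign for a single common threshold.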
Highlights
• Transparency of design, reference frames and support for action were found to support students' sense-making of LA dashboards.
• The higher the overall SRL score, the more relevant the three factors were perceived by learners.
• Learner goals affect how relevant students find reference frames.
• The SRL effect on the perceived relevance of transparency depends on learner goals.
Abstract
Unequal stakeholder engagement is a common pitfall of learning analytics adoption approaches in higher education, leading to lower buy-in and flawed tools that fail to meet the needs of their target groups. With each design decision, we make assumptions about how learners will make sense of the visualisations, but we know very little about how students make sense of dashboards and which aspects influence their sense-making. We investigated how learner goals and self-regulated learning (SRL) skills influence dashboard sense-making, following a mixed-methods research methodology: a qualitative pre-study followed up by an extensive quantitative study with 247 university students. We uncovered three latent variables for sense-making: transparency of design, reference frames and support for action. SRL skills are predictors of how relevant students find these constructs. Learner goals have a significant effect only on the perceived relevance of reference frames. Knowing which factors influence students' sense-making will lead to more inclusive and flexible designs that cater to the needs of both novice and expert learners.
Current deep learning methods are regarded as favorable if they empirically perform well on dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving data is investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten. However, comparison of individual methods is nevertheless performed in isolation from the real world by monitoring accumulated benchmark test set performance. The closed world assumption remains predominant, i.e. models are evaluated on data that is guaranteed to originate from the same distribution as used for training. This poses a massive challenge, as neural networks are well known to provide overconfident false predictions on unknown and corrupted instances. In this work we critically survey the literature and argue that notable lessons from open set recognition, identifying unknown examples outside of the observed set, and the adjacent field of active learning, querying data to maximize the expected performance gain, are frequently overlooked in the deep learning era. Hence, we propose a consolidated view to bridge continual learning, active learning and open set recognition in deep neural networks. Finally, the established synergies are supported empirically, showing joint improvement in alleviating catastrophic forgetting, querying data, and selecting task orders, while exhibiting robust open-world application.
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips of visual events across ten human subjects. We use the videos’ extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
The motivation for this master’s thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident. The results obtained demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach can be regarded as a valuable methodology, it was not possible to investigate the underlying reasons for potential unfairness across clusters: within the test data, too few instances remained per subgroup for further fairness or feature-relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
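The actual pipeline lives in the thesis' GitHub repository; the following is only a minimal sketch of the general workflow it describes (train/test split, random-forest mortality classifier, AUC evaluation, feature importances) on fully synthetic stand-in data. The feature names, cohort generation, and risk model below are fabricated for illustration and have no relation to MIMIC-III:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2655  # same cohort size as the stroke subset, but entirely synthetic content
age = rng.uniform(40, 90, n)
lactate = rng.gamma(2.0, 1.0, n)
gcs = rng.integers(3, 16, n)
# Fabricated mortality signal: older age, higher lactate, lower GCS -> higher risk
logit = 0.04 * (age - 65) + 0.5 * (lactate - 2) - 0.2 * (gcs - 9)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([age, lactate, gcs])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
importances = dict(zip(["age", "lactate", "gcs"], clf.feature_importances_))
```

Subgroup analysis as described above would then repeat the AUC evaluation per cluster of the test set, which is exactly where small per-subgroup counts become limiting.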
The intense photon fluxes from relativistic nuclei provide an opportunity to study photonuclear interactions in ultraperipheral collisions. The measurement of coherently photoproduced π+π−π+π− final states in ultraperipheral Pb-Pb collisions at √sNN = 5.02 TeV is presented for the first time. The cross section, dσ/dy, times the branching ratio BR(ρ → π+π+π−π−) is found to be 47.8 ± 2.3 (stat.) ± 7.7 (syst.) mb in the rapidity interval |y| < 0.5. The invariant mass distribution is not well described with a single Breit-Wigner resonance. The production of two interfering resonances, ρ(1450) and ρ(1700), provides a good description of the data. The values of the masses (m) and widths (Γ) of the resonances extracted from the fit are m1 = 1385 ± 14 (stat.) ± 3 (syst.) MeV/c², Γ1 = 431 ± 36 (stat.) ± 82 (syst.) MeV/c², m2 = 1663 ± 13 (stat.) ± 22 (syst.) MeV/c² and Γ2 = 357 ± 31 (stat.) ± 49 (syst.) MeV/c², respectively. The measured cross sections times the branching ratios are compared to recent theoretical predictions.
Measurements of the pT-dependent flow vector fluctuations in Pb-Pb collisions at √sNN = 5.02 TeV using azimuthal correlations with the ALICE experiment at the LHC are presented. A four-particle correlation approach [1] is used to quantify the effects of flow angle and magnitude fluctuations separately. This paper extends previous studies to additional centrality intervals and provides measurements of the pT-dependent flow vector fluctuations at √sNN = 5.02 TeV with two-particle correlations. Significant pT-dependent fluctuations of the V⃗2 flow vector in Pb-Pb collisions are found across different centrality ranges, with the largest fluctuations of up to ∼15% being present in the 5% most central collisions. In parallel, no evidence of significant pT-dependent fluctuations of V⃗3 or V⃗4 is found. Additionally, evidence of flow angle and magnitude fluctuations is observed with more than 5σ significance in central collisions. These observations in Pb-Pb collisions indicate where the classical picture of hydrodynamic modeling with a common symmetry plane breaks down. This has implications for hard probes at high pT, which might be biased by pT-dependent flow angle fluctuations of at least 23% in central collisions. Given the presented results, existing theoretical models should be re-examined to improve our understanding of initial conditions, quark-gluon plasma (QGP) properties, and the dynamic evolution of the created system.
The pT-differential production cross sections of non-prompt D0, D+, and D+s mesons originating from beauty-hadron decays are measured in proton−proton collisions at a centre-of-mass energy √s = 13 TeV. The measurements are performed at midrapidity, |y| < 0.5, with the data sample collected by ALICE from 2016 to 2018. The results are in agreement with predictions from several perturbative QCD calculations. The fragmentation fraction of beauty quarks to strange mesons divided by the one to non-strange mesons, fs/(fu+fd), is found to be 0.114 ± 0.016 (stat.) ± 0.006 (syst.) ± 0.003 (BR) ± 0.003 (extrap.). This value is compatible with previous measurements at lower centre-of-mass energies and in different collision systems, in agreement with the assumption of universality of fragmentation functions. In addition, the dependence of the non-prompt D meson production on the centre-of-mass energy is investigated by comparing the results obtained at √s = 5.02 and 13 TeV, showing a hardening of the non-prompt D-meson pT-differential production cross section at higher √s. Finally, the bb̄ production cross section per unit of rapidity at midrapidity is calculated from the non-prompt D0, D+, D+s, and Λ+c hadron measurements, obtaining dσ/dy = 75.2 ± 3.2 (stat.) ± 5.2 (syst.) +12.3/−3.2 (extrap.) μb.
The two-particle momentum correlation functions between charm mesons (D∗± and D±) and charged light-flavor mesons (π± and K±) in all charge combinations are measured for the first time by the ALICE Collaboration in high-multiplicity proton–proton collisions at a center-of-mass energy of √s = 13 TeV. For DK and D∗K pairs, the experimental results are in agreement with theoretical predictions of the residual strong interaction based on quantum chromodynamics calculations on the lattice and chiral effective field theory. In the case of Dπ and D∗π pairs, tension between the calculations including strong interactions and the measurement is observed. For all particle pairs, the data can be adequately described by Coulomb interaction only, indicating a shallow interaction between charm and light-flavor mesons. Finally, the scattering lengths governing the residual strong interaction of the Dπ and D∗π systems are determined by fitting the experimental correlation functions with a model that employs a Gaussian potential. The extracted values are small and compatible with zero.
A new, more precise measurement of the Λ hyperon lifetime is performed using a large data sample of Pb–Pb collisions at √sNN = 5.02 TeV with ALICE. The Λ and Λ̄ hyperons are reconstructed at midrapidity using their two-body weak decay channels Λ → p + π− and Λ̄ → p̄ + π+. The measured value of the Λ lifetime is τΛ = [261.07 ± 0.37 (stat.) ± 0.72 (syst.)] ps. The relative difference between the lifetime of Λ and Λ̄, which represents an important test of CPT invariance in the strangeness sector, is also measured. The obtained value (τΛ − τΛ̄)/τΛ = 0.0013 ± 0.0028 (stat.) ± 0.0021 (syst.) is consistent with zero within the uncertainties. Both measurements of the Λ hyperon lifetime and of the relative difference between τΛ and τΛ̄ are in agreement with the corresponding world averages of the Particle Data Group and about a factor of three more precise.
The production of prompt Λ+c baryons has been measured at midrapidity in the transverse momentum interval 0 < pT < 1 GeV/c for the first time, in pp and p–Pb collisions at a center-of-mass energy per nucleon–nucleon collision √sNN = 5.02 TeV. The measurement was performed in the decay channel Λ+c → pK0S by applying new decay reconstruction techniques using a Kalman-filter vertexing algorithm and adopting a machine-learning approach for the candidate selection. The pT-integrated Λ+c production cross sections in both collision systems were determined and used along with the measured yields in Pb–Pb collisions to compute the pT-integrated nuclear modification factors RpPb and RAA of Λ+c baryons, which are compared to model calculations that consider nuclear modification of the parton distribution functions. The Λ+c/D0 baryon-to-meson yield ratio is reported for pp and p–Pb collisions. Comparisons with models that include modified hadronization processes are presented, and the implications of the results on the understanding of charm hadronization in hadronic collisions are discussed. A significant (3.7σ) modification of the mean transverse momentum of Λ+c baryons is seen in p–Pb collisions with respect to pp collisions, while the pT-integrated Λ+c/D0 yield ratio was found to be consistent between the two collision systems within the uncertainties.
Long- and short-range correlations for pairs of charged particles are studied via two-particle angular correlations in pp collisions at √s = 13 TeV and p−Pb collisions at √sNN = 5.02 TeV. The correlation functions are measured as a function of relative azimuthal angle Δφ and pseudorapidity separation Δη for pairs of primary charged particles within the pseudorapidity interval |η| < 0.9 and the transverse-momentum interval 1 < pT < 4 GeV/c. Flow coefficients are extracted for the long-range correlations (1.6 < |Δη| < 1.8) in various high-multiplicity event classes using the low-multiplicity template fit method. The method is used to subtract the enhanced yield of away-side jet fragments in high-multiplicity events. These results show decreasing flow signals toward lower multiplicity events. Furthermore, the flow coefficients for events with hard probes, such as jets or leading particles, do not exhibit any significant changes compared to those obtained from high-multiplicity events without any specific event selection criteria. The results are compared with hydrodynamic-model calculations, and it is found that a better understanding of the initial conditions is necessary to describe the results, particularly for low-multiplicity events.
The human immune system is determined by the functionality of the human lymph node. With the use of high-throughput techniques in clinical diagnostics, large amounts of data are currently being collected. The new data on the spatiotemporal organization of cells offer new possibilities to build a mathematical model of the human lymph node: a virtual lymph node. The virtual lymph node can be applied to simulate drug responses and may be used in clinical diagnosis. Here, we review mathematical models of the human lymph node from the viewpoint of cellular processes. Starting with classical methods, such as systems of differential equations, we discuss the value of different levels of abstraction and of methods ranging up to artificial intelligence formalisms.
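As a minimal illustration of the "classical methods" mentioned above, one can write a toy system of differential equations for cell populations in a lymph node and integrate it with forward Euler. The compartments, rates, and parameter values below are invented for illustration and are not taken from any reviewed model:

```python
def step(naive, activated, dt, influx=100.0, act_rate=0.01,
         prolif=0.2, egress=0.3):
    """One forward-Euler step of a toy two-compartment lymphocyte model:
       d(naive)/dt     = influx − act_rate · naive
       d(activated)/dt = act_rate · naive + (prolif − egress) · activated
    All rates are fabricated for illustration."""
    d_naive = influx - act_rate * naive
    d_act = act_rate * naive + (prolif - egress) * activated
    return naive + dt * d_naive, activated + dt * d_act

naive, activated = 5000.0, 0.0
for _ in range(1000):          # simulate 100 time units with dt = 0.1
    naive, activated = step(naive, activated, dt=0.1)
```

With these parameters the naive pool relaxes toward its steady state influx/act_rate = 10000, and the activated pool tracks a quasi-steady level set by the activation and egress rates; richer models replace such toy terms with spatially resolved or agent-based dynamics.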
The inclusive production of the charm-strange baryon Ω0c is measured for the first time via its semileptonic decay into Ω−e+νe at midrapidity (|y| < 0.8) in proton–proton (pp) collisions at the centre-of-mass energy √s = 13 TeV with the ALICE detector at the LHC. The transverse momentum (pT) differential cross section multiplied by the branching ratio is presented in the interval 2 < pT < 12 GeV/c. The branching-fraction ratio BR(Ω0c → Ω−e+νe)/BR(Ω0c → Ω−π+) is measured to be 1.12 ± 0.22 (stat.) ± 0.27 (syst.). Comparisons with other experimental measurements, as well as with theoretical calculations, are presented.
The measurement of the production of deuterons, tritons and 3He and their antiparticles in Pb-Pb collisions at √sNN = 5.02 TeV is presented in this article. The measurements are carried out at midrapidity (|y| < 0.5) as a function of collision centrality using the ALICE detector. The pT-integrated yields, the coalescence parameters and the ratios to protons and antiprotons are reported and compared with nucleosynthesis models. The comparison of these results in different collision systems at different center-of-mass collision energies reveals a suppression of nucleus production in small systems. In the Statistical Hadronisation Model framework, this can be explained by a small correlation volume where the baryon number is conserved, as already shown in previous fluctuation analyses. However, a different size of the correlation volume is required to describe the proton yields in the same data sets. The coalescence model can describe this suppression by the fact that the wave functions of the nuclei are large and the fireball size becomes comparable to, or even much smaller than, the actual nucleus at low multiplicities.
Precise knowledge of the material budget is fundamental for measurements of direct photon production using the photon conversion method, due to its direct impact on the total systematic uncertainty. Moreover, it influences many aspects of the charged-particle reconstruction performance. In this article, two procedures to determine data-driven corrections to the material-budget description in the ALICE simulation software are developed. One is based on the precise knowledge of the gas composition in the Time Projection Chamber. The other is based on the robustness of the ratio between the produced number of photons and charged particles, which holds to a large extent due to the approximate isospin symmetry in the number of produced neutral and charged pions. Both methods are applied to ALICE data, allowing for a reduction of the overall material-budget systematic uncertainty from 4.5% down to 2.5%. Using these methods, a locally correct material budget is also achieved. The two proposed methods are generic and can be applied to any experiment in a similar fashion.
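The second procedure can be caricatured as a double ratio: since the photon conversion probability is, to first order, proportional to the traversed material budget, comparing the conversion-to-charged-particle ratio in data and in simulation estimates how the simulated budget must be scaled. A minimal sketch with made-up counts (not ALICE data, and ignoring the efficiency and purity corrections a real analysis would need):

```python
def material_budget_scale(n_conv_data, n_ch_data, n_conv_mc, n_ch_mc):
    """Double ratio of reconstructed photon conversions to charged
    particles, data over simulation. With the conversion probability
    proportional to the material budget, a value different from 1
    indicates a mis-modelled budget. All counts here are fabricated."""
    return (n_conv_data / n_ch_data) / (n_conv_mc / n_ch_mc)

scale = material_budget_scale(12000, 1.0e6, 13000, 1.05e6)  # ≈ 0.969
```

A scale below 1, as in this made-up example, would mean the simulation overestimates the material budget by about 3%.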