Highlights
• Transparency of design, reference frames and support for action were found to aid students' sense-making of LA dashboards.
• The higher the overall SRL score, the more relevant the three factors were perceived by learners.
• Learner goals affect how relevant students find reference frames.
• The SRL effect on the perceived relevance of transparency depends on learner goals.
Abstract
Unequal stakeholder engagement is a common pitfall of adoption approaches of learning analytics in higher education, leading to lower buy-in and flawed tools that fail to meet the needs of their target groups. With each design decision, we make assumptions about how learners will make sense of the visualisations, but we know very little about how students make sense of dashboards and which aspects influence their sense-making. We investigated how learner goals and self-regulated learning (SRL) skills influence dashboard sense-making following a mixed-methods research methodology: a qualitative pre-study followed up by an extensive quantitative study with 247 university students. We uncovered three latent variables for sense-making: transparency of design, reference frames and support for action. SRL skills are predictors for how relevant students find these constructs. Learner goals have a significant effect only on the perceived relevance of reference frames. Knowing which factors influence students' sense-making will lead to more inclusive and flexible designs that cater to the needs of both novice and expert learners.
Measurements of the pT-dependent flow vector fluctuations in Pb-Pb collisions at √sNN = 5.02 TeV using azimuthal correlations with the ALICE experiment at the LHC are presented. A four-particle correlation approach [1] is used to quantify the effects of flow angle and magnitude fluctuations separately. This paper extends previous studies to additional centrality intervals and provides measurements of the pT-dependent flow vector fluctuations at √sNN = 5.02 TeV with two-particle correlations. Significant pT-dependent fluctuations of the V⃗2 flow vector in Pb-Pb collisions are found across different centrality ranges, with the largest fluctuations of up to ∼15% present in the 5% most central collisions. In parallel, no evidence of significant pT-dependent fluctuations of V⃗3 or V⃗4 is found. Additionally, evidence of flow angle and magnitude fluctuations is observed with more than 5σ significance in central collisions. These observations in Pb-Pb collisions indicate where the classical picture of hydrodynamic modeling with a common symmetry plane breaks down. This has implications for hard probes at high pT, which might be biased by pT-dependent flow angle fluctuations of at least 23% in central collisions. Given the presented results, existing theoretical models should be re-examined to improve our understanding of initial conditions, quark–gluon plasma (QGP) properties, and the dynamic evolution of the created system.
The intense photon fluxes from relativistic nuclei provide an opportunity to study photonuclear interactions in ultraperipheral collisions. The measurement of coherently photoproduced π+π−π+π− final states in ultraperipheral Pb-Pb collisions at √sNN = 5.02 TeV is presented for the first time. The cross section, dσ/dy, times the branching ratio (ρ → π+π−π+π−) is found to be 47.8 ± 2.3 (stat.) ± 7.7 (syst.) mb in the rapidity interval |y| < 0.5. The invariant mass distribution is not well described with a single Breit-Wigner resonance. The production of two interfering resonances, ρ(1450) and ρ(1700), provides a good description of the data. The values of the masses (m) and widths (Γ) of the resonances extracted from the fit are m1 = 1385 ± 14 (stat.) ± 3 (syst.) MeV/c2, Γ1 = 431 ± 36 (stat.) ± 82 (syst.) MeV/c2, m2 = 1663 ± 13 (stat.) ± 22 (syst.) MeV/c2 and Γ2 = 357 ± 31 (stat.) ± 49 (syst.) MeV/c2, respectively. The measured cross sections times the branching ratios are compared to recent theoretical predictions.
Measurement of beauty-quark production in pp collisions at √s = 13 TeV via non-prompt D mesons
(2024)
The pT-differential production cross sections of non-prompt D0, D+, and D+s mesons originating from beauty-hadron decays are measured in proton–proton collisions at a centre-of-mass energy √s of 13 TeV. The measurements are performed at midrapidity, |y| < 0.5, with the data sample collected by ALICE from 2016 to 2018. The results are in agreement with predictions from several perturbative QCD calculations. The fragmentation fraction of beauty quarks to strange mesons divided by the one to non-strange mesons, fs/(fu+fd), is found to be 0.114 ± 0.016 (stat.) ± 0.006 (syst.) ± 0.003 (BR) ± 0.003 (extrap.). This value is compatible with previous measurements at lower centre-of-mass energies and in different collision systems, in agreement with the assumption of universality of fragmentation functions. In addition, the dependence of the non-prompt D meson production on the centre-of-mass energy is investigated by comparing the results obtained at √s = 5.02 and 13 TeV, showing a hardening of the non-prompt D-meson pT-differential production cross section at higher √s. Finally, the bb̄ production cross section per unit of rapidity at midrapidity is calculated from the non-prompt D0, D+, D+s, and Λ+c hadron measurements, obtaining dσ/dy = 75.2 ± 3.2 (stat.) ± 5.2 (syst.) +12.3/−3.2 (extrap.) μb.
The two-particle momentum correlation functions between charm mesons (D∗± and D±) and charged light-flavor mesons (π± and K±) in all charge combinations are measured for the first time by the ALICE Collaboration in high-multiplicity proton–proton collisions at a center-of-mass energy of √s = 13 TeV. For DK and D∗K pairs, the experimental results are in agreement with theoretical predictions of the residual strong interaction based on quantum chromodynamics calculations on the lattice and chiral effective field theory. In the case of Dπ and D∗π pairs, tension between the calculations including strong interactions and the measurement is observed. For all particle pairs, the data can be adequately described by Coulomb interaction only, indicating a shallow interaction between charm and light-flavor mesons. Finally, the scattering lengths governing the residual strong interaction of the Dπ and D∗π systems are determined by fitting the experimental correlation functions with a model that employs a Gaussian potential. The extracted values are small and compatible with zero.
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master's thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for building prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance of the developed models did not match that reported in related research, their potential became evident: the results demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to compare prediction performance across them. While this approach can be regarded as a valuable methodology, it was not possible to investigate underlying reasons for potential unfairness across clusters: too few instances per subgroup remained in the test data for further fairness or feature-relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
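The per-subgroup comparison described above can be sketched with a small stdlib-only helper. Note that `subgroup_metrics` and its record format are hypothetical illustrations, not the thesis's actual code (which is in the linked GitHub repository):

```python
from collections import defaultdict

def subgroup_metrics(records):
    """Compute accuracy and support per subgroup.

    records: iterable of (subgroup_label, y_true, y_pred) tuples,
    e.g. one tuple per test-set patient with its assigned cluster.
    Returns {subgroup: {"accuracy": float, "support": int}}, the
    basis for comparing prediction quality across subgroups.
    """
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
    for group, y_true, y_pred in records:
        counts[group][0] += int(y_true == y_pred)
        counts[group][1] += 1
    return {g: {"accuracy": c[0] / c[1], "support": c[1]}
            for g, c in counts.items()}
```

The `support` count matters here precisely because of the limitation noted above: subgroups with very few test instances yield accuracy estimates too noisy for a meaningful fairness comparison.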
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips of visual events across ten human subjects. We use the videos’ extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
We study threshold testing, an elementary probing model with the goal to choose a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results for the tests of each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler's problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
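As a rough illustration of the non-adaptive setting, the following Monte Carlo sketch tests i.i.d. Uniform(0,1) variables against a single fixed threshold and compares the selected value with the realized maximum. The function names, the choice of distribution, and the threshold value are illustrative assumptions, not the paper's construction:

```python
import random

def run_trial(n, t, rng):
    """One trial: draw n i.i.d. Uniform(0,1) values and test each once
    against the same threshold t. Return (selected value, true maximum)."""
    xs = [rng.random() for _ in range(n)]
    passed = [x for x in xs if x >= t]
    # For Uniform(0,1), E[X | X >= t] = (1+t)/2 exceeds E[X | X < t] = t/2,
    # so the conditional-expectation rule selects any variable that passed;
    # ties are broken arbitrarily (here: the first passer).
    chosen = passed[0] if passed else xs[0]
    return chosen, max(xs)

def empirical_ratio(n=5, t=0.7, trials=20000, seed=0):
    """Estimate E[algorithm] / E[max] by simulation."""
    rng = random.Random(seed)
    algo_sum = max_sum = 0.0
    for _ in range(trials):
        chosen, best = run_trial(n, t, rng)
        algo_sum += chosen
        max_sum += best
    return algo_sum / max_sum
```

For this toy instance the ratio lands well above the worst case, since a single uniform distribution is far from the hard instances behind the 0.745 bound; the adaptive strategies in the paper instead adjust later thresholds based on earlier test outcomes.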
Recent lattice QCD results, compared to a hadron resonance gas model, have shown the need for hundreds of particles in hadronic models. These extra particles influence both the equation of state and hadronic interactions within hadron transport models. Here, we introduce the PDG21+ particle list, which contains the most up-to-date database of particles and their properties. We then convert all particle decays into two-body decays so that they are compatible with SMASH, in order to produce a more consistent description of a heavy-ion collision.
Current deep learning methods are regarded as favorable if they empirically perform well on dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving data is investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten. However, comparison of individual methods is nevertheless performed in isolation from the real world by monitoring accumulated benchmark test set performance. The closed world assumption remains predominant, i.e. models are evaluated on data that is guaranteed to originate from the same distribution as used for training. This poses a massive challenge, as neural networks are well known to provide overconfident false predictions on unknown and corrupted instances. In this work we critically survey the literature and argue that notable lessons from open set recognition, identifying unknown examples outside of the observed set, and the adjacent field of active learning, querying data to maximize the expected performance gain, are frequently overlooked in the deep learning era. Hence, we propose a consolidated view to bridge continual learning, active learning and open set recognition in deep neural networks. Finally, the established synergies are supported empirically, showing joint improvement in alleviating catastrophic forgetting, querying data, and selecting task orders, while exhibiting robust open-world application.