Risk evaluations for agricultural chemicals are necessary to preserve healthy populations of honey bee colonies. Field studies on whole colonies are limited in behavioural research, while results from lab studies allow only restricted conclusions on whole-colony impacts. Methods for automated long-term investigation of behaviours within comb cells, such as brood care, were hitherto missing. In the present study, we demonstrate an innovative video method that enables within-cell analysis in honey bee (Apis mellifera) observation hives to detect chronic sublethal neonicotinoid effects of clothianidin (1 and 10 ppb) and thiacloprid (200 ppb) on worker behaviour and development. In May and June, colonies that were fed 10 ppb clothianidin and 200 ppb thiacloprid in syrup over three weeks showed reduced feeding visits and durations throughout various larval development days (LDDs). On LDD 6 (capping day), total feeding duration did not differ between treatments. Nurses in the treatment groups adapted behaviourally to the retarded larval development by increasing the overall feeding timespan. Using our machine learning algorithm, we demonstrate a novel method for detecting behaviours in an intact hive that can be applied in a versatile manner to conduct impact analyses of chemicals, pests and other stressors.
50 years of amino acid hydrophobicity scales: revisiting the capacity for peptide classification
(2016)
Background: Physicochemical properties are frequently analyzed to characterize protein sequences of known and unknown function. In particular, the hydrophobicity of amino acids is often used for structural prediction or for the detection of membrane-associated or embedded β-sheets and α-helices. For this purpose, many scales classifying amino acids according to their physicochemical properties have been defined over the past decades. In parallel, several hydrophobicity parameters have been defined for the calculation of peptide properties. We analyzed the performance of separating sequence pools using 98 hydrophobicity scales and five different hydrophobicity parameters, namely the overall hydrophobicity, the hydrophobic moment for the detection of α-helical and of β-sheet membrane segments, the alternating hydrophobicity, and the exact β-strand score.
Results: Most of the scales are capable of discriminating between transmembrane α-helices and transmembrane β-sheets, but assignment of peptides to pools of soluble peptides of different secondary structures is not achieved with the same quality. The separation capacity, as a measure of the discrimination between different structural elements, is best when all five hydrophobicity parameters are used, although adding the alternating hydrophobicity does not provide a large benefit. An in silico evolutionary approach shows that scales are generally limited to a maximal separation capacity of about 0.6. We observed that scales derived from the evolutionary approach performed best in separating the different peptide pools when the values for arginine and tyrosine were largely distinct from the value for glutamate. Finally, the separation of secondary structure pools via hydrophobicity can be supported by specific detectable patterns of four amino acids.
Conclusion: The quality of a scale's separation capacity appears to depend on the spacing of the hydrophobicity values of certain amino acids. Irrespective of the wealth of hydrophobicity scales, no scale exists that separates all different kinds of secondary structures or distinguishes soluble from transmembrane peptides, reflecting that properties other than hydrophobicity affect secondary structure formation as well. Nevertheless, applying hydrophobicity scales allows distinguishing between peptides with transmembrane α-helices and those with transmembrane β-sheets. Furthermore, the overall separation capacity score of 0.6 obtained with different hydrophobicity parameters can be assisted by pattern search at the protein sequence level for specific peptides with a length of four amino acids.
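The hydrophobic moment parameter discussed above can be sketched as a periodicity-weighted vector sum of per-residue hydrophobicities. This is an illustrative implementation, not the paper's code; the scale values are the widely used Kyte-Doolittle numbers, and only the residues needed for the example are listed.

```python
import math

# Kyte-Doolittle hydrophobicity values (a common published scale;
# only a handful of residues are listed here for illustration).
KD = {"A": 1.8, "L": 3.8, "K": -3.9, "F": 2.8, "E": -3.5, "I": 4.5}

def mean_hydrophobicity(seq):
    """Overall hydrophobicity: average scale value over the peptide."""
    return sum(KD[a] for a in seq) / len(seq)

def hydrophobic_moment(seq, angle_deg=100.0):
    """Magnitude of the vector sum of per-residue hydrophobicities,
    each rotated by the structural periodicity angle: ~100 degrees per
    residue for an alpha-helix, 180 degrees for a beta-strand."""
    angle = math.radians(angle_deg)
    sin_sum = sum(KD[a] * math.sin(i * angle) for i, a in enumerate(seq))
    cos_sum = sum(KD[a] * math.cos(i * angle) for i, a in enumerate(seq))
    return math.hypot(sin_sum, cos_sum)
```

A perfectly alternating hydrophobic/charged peptide such as LKLKLK scores a much larger moment at the β-strand periodicity (180°) than at the helical one (100°), which is exactly the signal these parameters exploit for pool separation.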
The cortical networks that underlie behavior exhibit an orderly functional organization at local and global scales, which is readily evident in the visual cortex of carnivores and primates [1-6]. Here, neighboring columns of neurons represent the full range of stimulus orientations and contribute to distributed networks spanning several millimeters [2,7-11]. However, the principles governing functional interactions that bridge this fine-scale functional architecture and distant network elements are unclear, and the emergence of these network interactions during development remains unexplored. Here, by using in vivo wide-field and 2-photon calcium imaging of spontaneous activity patterns in mature ferret visual cortex, we find widespread and specific modular correlation patterns that accurately predict the local structure of visually evoked orientation columns from the spontaneous activity of neurons that lie several millimeters away. The large-scale networks revealed by correlated spontaneous activity show abrupt ‘fractures’ in continuity that are in tight register with evoked orientation pinwheels. Chronic in vivo imaging demonstrates that these large-scale modular correlation patterns and fractures are already present at early stages of cortical development and predictive of the mature network structure. Silencing feed-forward drive through either retinal or thalamic blockade does not affect network structure, suggesting a cortical origin for this large-scale correlated activity, despite the immaturity of long-range horizontal network connections in the early cortex. Using a circuit model containing only local connections, we demonstrate that such a circuit is sufficient to generate large-scale correlated activity, while also producing correlated networks showing strong fractures, a reduced dimensionality, and an elongated local correlation structure, all in close agreement with our empirical data.
These results demonstrate the precise local and global organization of cortical networks revealed through correlated spontaneous activity and suggest that local connections in early cortical circuits may generate structured long-range network correlations that underlie the subsequent formation of visually-evoked distributed functional networks.
The objective of this thesis is to develop new methodologies for the formal verification of nonlinear analog circuits. To this end, new approaches to discrete modeling of analog circuits, to the specification of analog circuit properties, and to formal verification algorithms are introduced. Formal approaches to the verification of analog circuits have not yet been introduced into industrial design flows and are still subject to research. Formal verification proves specification conformance for all possible input conditions and all possible internal states of a circuit. Automatically proving that a model of the circuit satisfies a declarative, machine-readable property specification is referred to as model checking; equivalence checking proves the equivalence of two circuit implementations. Starting from the state of the art in modeling analog circuits for simulation-based verification, this thesis motivates discrete modeling of analog circuits for state space-based formal verification methodologies. In order to improve the discrete modeling of analog circuits, a new trajectory-directed partitioning algorithm was developed in the scope of this thesis. This new approach determines the partitioning of the state space parallel or orthogonal to the trajectories of the state space dynamics. It thereby achieves a high accuracy of the successor relation while requiring fewer states for a discrete model of equal accuracy than the state-of-the-art hyperbox approach. The mapping of the partitioning to a discrete analog transition structure (DATS) enables the application of formal verification algorithms. By analyzing digital specification concepts and the existing approaches to analog property specification, the requirements for a new specification language for analog properties are discussed in this thesis. On the one hand, it shall meet the requirements for formal specification of verification approaches applied to DATS models.
On the other hand, the language syntax shall be oriented towards natural-language phrases. From these requirements, the analog specification language (ASL) was developed in the scope of this thesis. The model checking algorithms that were developed in combination with ASL, for application to DATS models generated with the new trajectory-directed approach, offer a significant enhancement over the state of the art. In order to prepare the transition from signal-based to state space-based verification methodologies, an approach was developed in the scope of this thesis to transfer transient simulation results from non-formal test bench simulation flows into a partial state space representation in the form of a DATS. As demonstrated by examples, the same ASL specification that was developed for formal model checking on complete discrete models could be evaluated without modification on transient simulation waveforms. An approach to counterexample generation for the formal ASL model checking methodology makes it possible to generate transition sequences from a defined starting state to a specification-violating state for inspection in transient simulation environments. Based on this counterexample generation, a new formal verification methodology using complete state space-covering input stimuli was developed. By conducting a transient simulation with these complete state space-covering input stimuli, the circuit adopts every state and transition that was visited during stimulus generation. An alternative formal verification methodology retransfers the transient simulation responses to a DATS model and applies the ASL verification algorithms in combination with an ASL property specification. Moreover, the complete state space-covering input stimuli can be applied to develop a formal equivalence checking methodology.
With these stimuli, the equivalence of two implementations can be proven for every inner state of both systems by comparing their transient simulation responses to the complete-coverage stimuli. In order to visually inspect the results of the newly introduced verification methodologies, an approach to dynamic state space visualization using multi-parallel particle simulation was developed. Because the particles are randomly distributed over the complete state space and move according to the state space dynamics, this provides another perspective on the system's behavior that covers the state space and hence offers formal results. The prototypic implementations of the formal verification methodologies developed in the scope of this thesis have been applied to several example circuits. The results acquired for the new approaches to discrete modeling, specification and verification algorithms all demonstrate that the new verification methodologies can be applied to complex circuit blocks and their properties.
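The counterexample generation described in this abstract can be sketched on a toy discrete analog transition structure. The state names, the successor map, and the violating set below are hypothetical and only illustrate the core idea: a breadth-first search from a defined starting state to the first reachable specification-violating state, returning the transition sequence.

```python
from collections import deque

# Toy DATS: states label state-space regions, edges form the successor
# relation. Names and topology are illustrative, not a real circuit.
successors = {
    "s0": ["s1"],
    "s1": ["s2", "s3"],
    "s2": ["s0"],
    "s3": ["s3"],  # e.g. a region that violates the specification
}

def counterexample(successors, start, violating):
    """BFS from `start`; return a shortest transition sequence to a
    specification-violating state, or None if no such state is reachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state in violating:
            path = []
            while state is not None:     # walk parents back to the start
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors.get(state, []):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None
```

Such a path can then be replayed as a stimulus in a transient simulation environment, which is the inspection use-case the thesis describes.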
This thesis has explored how structural techniques can be applied to the problem of formal verification for sequential circuits. Algorithms for formal verification that operate on non-canonical gate netlist representations of digital circuits have certain advantages over traditional techniques based on canonical representations such as BDDs. They make it possible to exploit problem-specific knowledge because they can take into account structural properties of the designs being analyzed. This allows us to break the problem down into sub-problems which are (hopefully) easier to solve. In the past, however, the main application of such structural techniques was in the field of combinational equivalence checking. One reason for this is that the behaviour of a sequential system depends not only on its inputs but also on its internal states, and no concepts had been developed to date that allow structural methods to deal with large sets of states. An important goal of this research was therefore to develop structural, non-canonical forms of representing the reachable states of a finite state machine and to develop methods for reachability analysis based on such representations. In order to reach this goal, two steps were taken. First, a framework for manipulating Boolean functions represented as gate netlists was established. Second, using this framework, a structural method for FSM traversal was developed, serving as the basis for an equivalence checking algorithm for sequential circuits. The framework for manipulating Boolean functions represented as multi-level combinational networks is based on a new concept of an implicant in a multi-level network and on an AND/OR-type enumeration technique which allows us to derive such implicants. This concept extends the classical notion of an implicant in two-level circuits to the multi-level case. Using this notion, arbitrary transformations in multi-level combinational networks can be performed.
The multi-level network implicants can be determined from AND/OR reasoning graphs, which are associated with an AND/OR reasoning technique operating directly on the gate netlist description of a multi-level circuit. This reasoning technique has the important property that it is complete, i.e. the associated AND/OR trees contain all prime implicants of a Boolean function at an arbitrary node in a combinational circuit. In other words, AND/OR graphs constructed for a network function serve as a representation of this function. A great advantage over BDDs is that AND/OR graphs, besides representing the logic function, also represent some structural properties of the analyzed circuitry. This permits the development of heuristics that are specially tailored for certain applications such as logic optimization or verification. Another advantage, which is especially useful for logic optimization, is that the proposed AND/OR enumeration scheme is not restricted to a specific logic alphabet such as B3 = {0, 1, X}. By using Roth's D-calculus based on B5 = {0, 1, X, D, D̄}, permissible implicants can be determined. Transformations based on permissible implicants exploit observability don't-care conditions in logic synthesis by creating permissible functions at internal network nodes. In order to evaluate the new structural framework for manipulating Boolean functions represented as gate netlists, several experiments with implicant-based optimization of multi-level circuits were performed. The results show that implicant-based circuit transformations lead to significantly better optimization results than traditional synthesis techniques. Next, based on the proposed structural methods for Boolean function manipulation, techniques for representing and manipulating the set of states of a sequential circuit were developed.
The concept of a “stub circuit” was introduced, which implicitly represents a set of state vectors as the range of a multi-output function given as a gate netlist. The stub circuit is the result of an existential quantification operation obtained by functional decomposition using implicant-based netlist transformations and a network cutting procedure. Using this existential quantification operation, a new structural FSM traversal algorithm was formulated which performs a fixed point iteration on the set of reachable states represented by the stub circuit. The proposed approach performs a reachability analysis of the states of a sequential circuit. It operates on gate netlists and naturally allows structural properties of the design under consideration to be incorporated into the reasoning. Structural FSM traversal is therefore an interesting alternative to traditional symbolic FSM traversal, especially in those applications of formal verification where structural properties can be exploited. Structural FSM traversal was applied to the problem of sequential equivalence checking. Here, structural similarities between the designs to be compared can effectively reduce the complexity of the verification task. The FSM to be traversed is a special product machine called a sequential miter. The special structural properties of this product machine have made it possible to formulate an approximate algorithm for structural FSM traversal, called record and play(). This algorithm uses an approximation of the reachable state set represented by the stub circuit, which is very beneficial for performance. Instead of calculating the stub circuit using the exact algorithm, implicant-based transformations directly exploiting structural design similarities are performed. These transformations, together with the existential quantification implemented by the cutting procedure, lead to an over-approximation of the reachable state set.
Through this over-approximation, only such unreachable product states are added to the set of states represented by the stub circuit which are unreachable at the current point in time but are nevertheless equivalent. More product states are thus added to the set of reachable states, sometimes leading to a drastic acceleration of the traversal, i.e. the fixed point is reached in far fewer steps. The algorithm record and play() was applied to the problem of checking the equivalence of a circuit with its optimized and retimed version. Retiming is a form of sequential circuit optimization which can radically alter the state encoding of a circuit. Traditional FSM traversal techniques often fail because the BDDs needed to represent the reachable state set and the transition relation of the product machine become too large. Experiments were conducted to evaluate the performance of record and play() on a standard set of sequential benchmark circuits. The algorithm was capable of proving the equivalence of optimized and retimed circuits with their original versions, some of which (to our knowledge) had never before been verified using traditional techniques such as symbolic FSM traversal. The experimental results are very promising. Future research will therefore explore how structural FSM traversal can be applied to model checking.
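The fixed point iteration at the heart of FSM traversal can be sketched with explicit state sets. The thesis represents the reachable set implicitly as a stub circuit; the explicit sets and the modulo-8 counter image function below are purely illustrative.

```python
def fsm_traversal(initial, image):
    """Least fixed point of R = R ∪ image(R): iterate until no new
    states appear. `image` maps a set of states to their successors."""
    reached = set(initial)
    frontier = set(initial)
    steps = 0
    while frontier:
        frontier = image(frontier) - reached  # newly discovered states
        reached |= frontier
        steps += 1
    return reached, steps

# Example: a 3-bit counter needs one iteration per new state, while an
# over-approximating image (the idea behind record and play()) adds
# extra states early and converges in far fewer iterations.
counter_image = lambda states: {(s + 1) % 8 for s in states}
over_image = lambda states: set(range(8))
```

Running `fsm_traversal({0}, counter_image)` takes eight iterations to converge, whereas the over-approximating image converges in two, which mirrors the acceleration the abstract describes.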
We present a theoretical analysis of structural FSM traversal, which is the basis for the sequential equivalence checking algorithm Record & Play presented earlier. We compare the convergence behaviour of exact and approximate structural FSM traversal with that of standard BDD-based FSM traversal. We show that for most circuits encountered in practice, exact structural FSM traversal reaches the fixed point as fast as symbolic FSM traversal, while approximation can significantly reduce the number of iterations needed. Our experiments confirm these results.
One of the most severe shortcomings of currently available equivalence checkers is their inability to verify integer multipliers. In this paper, we present a bit-level reverse-engineering technique that can be integrated into standard equivalence checking flows. We propose a Boolean mapping algorithm that extracts a network of half adders from the gate netlist of an addition circuit. Once the arithmetic bit-level representation of the circuit is obtained, equivalence checking can be performed using simple arithmetic operations. Experimental results show the promise of our approach.
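The bit-level pattern at the core of such a mapping can be sketched directly: a half adder appears in a gate netlist as an XOR (sum) and an AND (carry) driven by the same pair of signals. The netlist encoding below is an assumption for illustration; the paper's algorithm generalizes this to full arithmetic bit-level networks.

```python
# Toy gate netlist: net name -> (gate type, input nets).
netlist = {
    "s0": ("XOR", ("a0", "b0")),
    "c0": ("AND", ("a0", "b0")),
    "n1": ("OR",  ("a1", "b1")),
}

def extract_half_adders(netlist):
    """Return (sum_net, carry_net, inputs) for every XOR/AND pair that
    shares an identical input set -- the half adder recognition step."""
    by_inputs = {}
    for name, (gate, inputs) in netlist.items():
        by_inputs.setdefault(frozenset(inputs), {})[gate] = name
    return [(gates["XOR"], gates["AND"], tuple(sorted(ins)))
            for ins, gates in by_inputs.items()
            if "XOR" in gates and "AND" in gates]
```

On the toy netlist this recognizes exactly one half adder over (a0, b0); the OR gate over (a1, b1) has no matching partner and is left alone.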
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency. In this paper, two approaches are presented which generalize the verification of coindexing constraints to deficient descriptions. First, a partly heuristic method is described, which has been implemented. Second, a provably complete method is specified. It provides the means to exploit the results of anaphor resolution for further structural disambiguation. By rendering a parallel processing model possible, this method exhibits, in a general sense, a higher degree of robustness. As a practically optimal solution, a combination of the two approaches is suggested.
In recent years, much effort has gone into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques to derive strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with that of its manually designed ancestor ROSANA.
An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately in an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
Assessing enhanced knowledge discovery systems (eKDSs) constitutes an intricate issue that is as yet only partially understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute, the user in the exploration of the data. A qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions of future research are outlined.
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency.
This paper describes the ROSANA approach, which generalizes the verification of coindexing restrictions in order to make it applicable to the deficient syntactic descriptions that are provided by a robust state-of-the-art parser. By a formal evaluation on two corpora that differ with respect to text genre and domain, it is shown that ROSANA achieves high-quality robust coreference resolution. Moreover, an in-depth analysis proves that the robust implementation of syntactic disjoint reference is nearly optimal. The study reveals that, compared with approaches that rely on shallow preprocessing, the largely nonheuristic disjoint reference algorithmization opens up the possibility for a slight improvement. Furthermore, it is shown that more significant gains are to be expected elsewhere, particularly from a text-genre-specific choice of preference strategies.
The performance study of the ROSANA system crucially rests on an enhanced evaluation methodology for coreference resolution systems, the development of which constitutes the second major contribution of the paper. As a supplement to the model-theoretic scoring scheme that was developed for the Message Understanding Conference (MUC) evaluations, additional evaluation measures are defined that, on the one hand, support the developer of anaphor resolution systems and, on the other hand, shed light on application aspects of pronoun interpretation.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
In the last decade, much effort has gone into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with and, hence, recognize the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision robust knowledge-poor anaphor resolution are designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
Channel routing is an NP-complete problem. Therefore, it is likely that there is no efficient algorithm solving this problem exactly. In this paper, we show that channel routing is a fixed-parameter tractable problem and that we can find a solution in linear time for a fixed channel width. We implemented our approach for the restricted layer model. The algorithm finds an optimal route for channels with up to 13 tracks within minutes, or up to 11 tracks within seconds. Such narrow channels occur, for example, as a leaf problem of hierarchical routers or within standard cell generators.
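The notion of channel width can be illustrated with the classic left-edge algorithm, a much simpler relative of the paper's fixed-parameter algorithm: it greedily packs the horizontal segments of nets (intervals) into tracks and ignores the vertical constraints that make full channel routing NP-complete.

```python
def left_edge(intervals):
    """Assign each (left, right) interval to the lowest-numbered track
    whose previous interval ends before the new one begins; the number
    of tracks opened is the resulting channel width."""
    tracks = []       # per track: right end of the last placed interval
    assignment = {}
    for left, right in sorted(intervals):   # sweep by left edge
        for t, end in enumerate(tracks):
            if end < left:                  # fits after current content
                tracks[t] = right
                assignment[(left, right)] = t
                break
        else:                               # no track fits: open a new one
            tracks.append(right)
            assignment[(left, right)] = len(tracks) - 1
    return assignment, len(tracks)
```

For intervals (1,3), (2,5), (4,7) this yields width 2: the first and third share a track, the overlapping second one opens a new track.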
We present the FPGA implementation of an algorithm [4] that computes implications between signal values in a Boolean network. The research was performed as a master's thesis [5] at the University of Frankfurt. The recursive algorithm is rather complex for a hardware realization, and the FPGA implementation is therefore an interesting example of the potential of reconfigurable computing beyond systolic algorithms. A circuit generator was written that transforms a Boolean network into a network of small processing elements and a global control logic which together implement the algorithm. The resulting circuit performs the computation two orders of magnitude faster than a software implementation run on a conventional workstation.
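The kind of implication computation being accelerated can be sketched in software. The netlist encoding and the restriction to AND gates below are simplifications for illustration only; the actual algorithm [4] is recursive and handles a full gate library.

```python
# Toy netlist: output net -> (gate type, input nets).
netlist = {"y": ("AND", ("a", "b")), "z": ("AND", ("y", "c"))}

def implications(netlist, signal, value):
    """Return all signal assignments implied by setting `signal` to
    `value`, by repeated forward/backward propagation until fixpoint."""
    implied = {signal: value}
    changed = True
    while changed:
        changed = False
        for out, (gate, ins) in netlist.items():
            if gate != "AND":
                continue
            updates = {}
            if implied.get(out) == 1:                     # AND=1 => inputs all 1
                updates.update({i: 1 for i in ins})
            if all(implied.get(i) == 1 for i in ins):     # inputs all 1 => AND=1
                updates[out] = 1
            if any(implied.get(i) == 0 for i in ins):     # any input 0 => AND=0
                updates[out] = 0
            for sig, val in updates.items():
                if implied.get(sig) != val:
                    implied[sig] = val
                    changed = True
    return implied
```

Setting z = 1 in the toy netlist transitively forces y, c, a, and b to 1; the FPGA design performs this propagation in parallel across many small processing elements.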
In this dissertation, the formal abstraction and verification of analog circuits is examined. An approach is introduced that automatically abstracts a transistor-level circuit with full Spice accuracy into a hybrid automaton (HA) in various output languages. The generated behavioral model exhibits a significant simulation speed-up compared to the original netlist while maintaining acceptable accuracy, and can therefore be used in various verification and validation routines. On top of that, the generated models can be formally verified against their Spice netlists, making the obtained models correct by construction.
The generated abstract models can be extended to enclose modeling- as well as technology-dependent parameter variations with little over-approximation. As these models enclose the various behaviors of the sampled netlists, they are of significant importance: they can replace several simulations with just a single reachability analysis or symbolic simulation. Moreover, these models can also be used in different verification routines, as demonstrated in this dissertation.
As the obtained models are described by HAs with linear behaviors in the locations, the abstract models can also be compositionally linked, thereby allowing the abstraction of complex analog circuits.
Depending on the specified modeling settings, including for example the number of locations of the HA and the description of the system behavior, the accuracy, speed-up, and various additional properties of the HA can be influenced. This is examined in detail in this dissertation. The underlying abstraction process is first covered in detail. Several extensions are then handled, including the modeling of HAs with parameter variations. The obtained models are then verified using various verification methodologies. The accuracy and speed-up of the abstraction methodology are finally evaluated on several transistor-level circuits, ranging from simple operational amplifiers up to complex circuits.
Students of computer science enter university education with very different competencies, experience and knowledge. 145 datasets on freshman computer science students, collected by learning management systems and relating exam outcomes to learning dispositions data (e.g. student dispositions, previous experiences and attitudes measured through self-reported surveys), have been exploited to identify indicators that predict academic success and hence to make effective interventions for an extremely heterogeneous group of students.
Background: Although mortality after cardiac surgery has significantly decreased in the last decade, patients still experience clinically relevant postoperative complications. Among others, atrial fibrillation (AF) is a common consequence of cardiac surgery, which is associated with prolonged hospitalization and increased mortality.
Methods: We retrospectively analyzed data from patients who underwent coronary artery bypass grafting, valve surgery or a combination of both at the University Hospital Muenster between April 2014 and July 2015. We evaluated the incidence of new onset and intermittent/permanent AF (patients with pre- and postoperative AF). Furthermore, we investigated the impact of postoperative AF on clinical outcomes and evaluated potential risk factors.
Results: In total, 999 patients were included in the analysis. New onset AF occurred in 24.9% of the patients and the incidence of intermittent/permanent AF was 59.5%. Both types of postoperative AF were associated with prolonged ICU length of stay (median increase approx. 2 days) and duration of mechanical ventilation (median increase 1 h). Additionally, new onset AF patients had a higher rate of dialysis and hospital mortality and more positive fluid balance on the day of surgery and postoperative days 1 and 2. In a multiple logistic regression model, advanced age (odds ratio (OR) = 1.448 per decade increase, p < 0.0001), a combination of CABG and valve surgery (OR = 1.711, p = 0.047), higher C-reactive protein (OR = 1.06 per unit increase, p < 0.0001) and creatinine plasma concentration (OR = 1.287 per unit increase, p = 0.032) significantly predicted new onset AF. Higher Horowitz index values were associated with a reduced risk (OR = 0.996 per unit increase, p = 0.012). In a separate model, higher plasma creatinine concentration (OR = 2.125 per unit increase, p = 0.022) was a significant risk factor for intermittent/permanent AF whereas higher plasma phosphate concentration (OR = 0.522 per unit increase, p = 0.003) indicated reduced occurrence of this arrhythmia.
Conclusions: New onset and intermittent/permanent AF are associated with adverse clinical outcomes of elective cardiac surgery patients. Different risk factors implicated in postoperative AF suggest different mechanisms might be involved in its pathogenesis. Customized clinical management protocols seem to be warranted for a higher success rate of prevention and treatment of postoperative AF.
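The reported effect sizes follow the standard relationship between logistic-regression coefficients and odds ratios, OR = exp(β·Δ). A minimal sketch, using the per-decade age odds ratio from the results above purely as an illustrative value (`beta_per_year` is a hypothetical back-calculated coefficient, not a quantity from the study):

```python
import math

# Back out a per-year coefficient from the reported per-decade OR of 1.448.
beta_per_decade = math.log(1.448)
beta_per_year = beta_per_decade / 10

def odds_ratio(beta, delta):
    """Odds ratio for a `delta`-unit increase in the predictor."""
    return math.exp(beta * delta)

print(round(odds_ratio(beta_per_year, 10), 3))  # recovers the per-decade OR
print(round(odds_ratio(beta_per_year, 20), 3))  # two decades: the OR squared
```

This also illustrates why the unit of the predictor (per year vs. per decade, per unit of creatinine) must be stated alongside every odds ratio.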
The archaeological data dealt with in our database solution Antike Fundmünzen in Europa (AFE), which records finds of ancient coins, is entered by humans. Based on the Linked Open Data (LOD) approach, we link our data to Nomisma.org concepts, as well as to other resources like Online Coins of the Roman Empire (OCRE). Since information such as denomination, material, etc. is recorded for each single coin, this information should be identical for coins of the same type. Unfortunately, this is not always the case, mostly due to human error. Based on rules that we implemented, we were able to make use of this redundant information to detect possible errors within AFE, and we were even able to correct errors in Nomisma.org. However, the approach had the weakness that it was necessary to transform the data into an internal data model. In a second step, we therefore developed our rules within the Linked Open Data world. The rules can now be applied to datasets following the Nomisma.org modelling approach, as we demonstrated with data held by Corpus Nummorum Thracorum (CNT). We believe that the use of such methods to increase the data quality of individual databases, as well as across different data sources and up to the higher levels of OCRE and Nomisma.org, is mandatory in order to increase trust in them.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
Heterologously expressed genes require adaptation to the host organism to ensure adequate levels of protein synthesis, which is typically approached by replacing codons by the target organism’s preferred codons. In view of frequently encountered suboptimal outcomes we introduce the codon-specific elongation model (COSEM) as an alternative concept. COSEM simulates ribosome dynamics during mRNA translation and informs about protein synthesis rates per mRNA in an organism- and context-dependent way. Protein synthesis rates from COSEM are integrated with further relevant covariates such as translation accuracy into a protein expression score that we use for codon optimization. The scoring algorithm further enables fine-tuning of protein expression including deoptimization and is implemented in the software OCTOPOS. The protein expression score produces competitive predictions on proteomic data from prokaryotic, eukaryotic, and human expression systems. In addition, we optimized and tested heterologous expression of manA and ova genes in Salmonella enterica serovar Typhimurium. Superiority over standard methodology was demonstrated by a threefold increase in protein yield compared to wildtype and commercially optimized sequences.
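As a point of reference, the conventional "preferred codon" replacement that COSEM is contrasted with can be sketched in a few lines. The codon-usage excerpt below is hypothetical, not drawn from the paper or from S. Typhimurium data:

```python
# Standard codon optimization: replace every codon by the host organism's
# most frequent synonymous codon. Tables below are a small hypothetical
# excerpt for illustration only.
PREFERRED = {  # amino acid -> assumed preferred codon in the host
    "M": "ATG", "K": "AAA", "L": "CTG", "S": "AGC", "*": "TAA",
}
CODON_TO_AA = {
    "ATG": "M", "AAA": "K", "AAG": "K",
    "CTG": "L", "TTA": "L", "CTT": "L",
    "AGC": "S", "TCT": "S", "TAA": "*", "TGA": "*",
}

def recode(cds: str) -> str:
    """Replace each codon of a coding sequence by the preferred synonym."""
    assert len(cds) % 3 == 0, "CDS length must be a multiple of 3"
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(PREFERRED[CODON_TO_AA[c]] for c in codons)

print(recode("ATGAAGTTATCTTGA"))  # -> ATGAAACTGAGCTAA
```

COSEM departs from this per-codon substitution by scoring whole sequences via simulated ribosome dynamics, which is why the two approaches can produce different optima.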
Correction to: Scientific Reports https://doi.org/10.1038/s41598-019-43857-5, published online 17 May 2019. In the original version of this Article, Jan-Hendrik Trösemeier was incorrectly affiliated with ‘Division of Allergology, Paul Ehrlich Institut, Langen, Germany’. The correct affiliations are listed below...
Event-related potentials (ERPs) are widely used in basic neuroscience and in clinical diagnostic procedures. In contrast, neurophysiological insights from ERPs have been limited, as several different mechanisms lead to ERPs. Apart from stereotypically repeated responses (additive evoked responses), these mechanisms are asymmetric amplitude modulations and phase-resetting of ongoing oscillatory activity. Therefore, a method is needed that differentiates between these mechanisms and moreover quantifies the stability of a response. We propose a constrained subspace independent component analysis that exploits the multivariate information present in the all-to-all relationship of recordings over trials. Our method identifies additive evoked activity and quantifies its stability over trials. We evaluate identification performance for biologically plausible simulation data and two neurophysiological test cases: Local field potential (LFP) recordings from a visuo-motor-integration task in the awake behaving macaque and magnetoencephalography (MEG) recordings of steady-state visual evoked fields (SSVEFs). In the LFPs we find additive evoked response contributions in visual areas V2/4 but not in primary motor cortex A4, although visually triggered ERPs were also observed in area A4. MEG-SSVEFs were mainly created by additive evoked response contributions. Our results demonstrate that the identification of additive evoked response contributions is possible both in invasive and in non-invasive electrophysiological recordings.
Local protein synthesis has re-defined our ideas on the basic cellular mechanisms that underlie synaptic plasticity and memory formation. The population of messenger RNAs that are localised to dendrites, however, remains sparsely identified. Furthermore, neuronal morphological complexity and spatial compartmentalisation require efficient mechanisms for messenger RNA localisation and control over translational efficiency or transcript stability. 3’ untranslated regions (3’UTRs), downstream of stop codons, are recognised for providing binding platforms for many regulatory units, thereby encoding the control of these processes. The hippocampus, a part of the brain involved in the formation, organisation and storage of memories, provides a natural platform to investigate patterns of RNA localisation. The hippocampus comprises tissue layers, which naturally separate the principal neuronal cell bodies from their processes (axons and dendrites). Identifying the full complement of localised transcripts and associated 3’UTR isoforms is of great importance to understand both basic neuronal functions and principles of synaptic plasticity. These findings can be used to study the properties of neuronal networks as well as to understand how these networks malfunction in neuronal diseases.
Here, deep sequencing is used to identify the mRNAs resident in the synaptic neuropil in the hippocampus. Analysis of a neuropil data set yields a list of 8,379 transcripts of which 2,550 are localised in dendrites and/or axons. Using a fluorescent barcode strategy to label individual mRNAs shows that the relative abundance of different mRNAs in the neuropil varies over 5 orders of magnitude. High-resolution in situ hybridisation validated the presence of mRNAs in both cultured neurons and hippocampal slices. Among the many mRNAs identified, a large fraction of known synaptic proteins including signaling molecules, scaffolds and receptors is discovered. These results reveal a previously unappreciated enormous potential for the local protein synthesis machinery to supply, maintain and modify the dendritic and synaptic proteome.
Using advances in library preparation for next generation sequencing experiments, the diversity of 3’UTR isoforms present in localised transcripts from the rat hippocampus is examined. The obtained results indicate that there is an increase in 3’UTR heterogeneity and 3’UTR length in neuronal tissue. The evolutionary importance of this 3’UTR diversity and its correlation with changes in species, tissue and cell complexity is investigated. The conducted analysis reveals the population of 3’UTR isoforms required for transcript localisation in the overall neuronal transcriptome, as well as the regulatory elements and binding sites specific to neuronal compartments. The configuration of poly(A) signals is correlated with gene function and can be further exploited to determine similar mechanisms for alternative polyadenylation.
Usage of custom-specified methods for next-generation sequencing, as well as novel approaches for RNA quantification and visualisation, necessitates the development and implementation of new downstream analytic methods. A library of methods for data-mining transcript annotations, expression and ontology relations is provided. Usage of a specialised search engine targeting key features of previous experiments is proposed. A processing pipeline for NanoString technology, assessing experimental quality and providing methods for data normalisation, is developed. High-resolution in situ images are analysed by a custom application, showing a correlation between RNA quantity and spatial distribution. The vast variety of bioinformatic methods included in this work indicates the importance of downstream analysis to reach biological conclusions. Maintaining the integrability and modularity of our implementations is of great priority, as the dynamic nature of many experimental techniques requires constant improvement in computational analysis.
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master’s thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBOOST, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident. The results obtained demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBOOST model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach can be regarded as a valuable methodology, it was not possible to investigate underlying reasons for potential unfairness across clusters: within the test data, too few instances remained per subgroup for further fairness or feature-relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
Computing the diameter of a graph is a fundamental part of network analysis. Even if the data fits into main memory, the best known algorithm needs O(n²) time [3] with high probability to compute the exact diameter. In practice this is usually too costly. Therefore, heuristics have been developed to approximate the diameter much faster. The heuristic “double sweep lower bound” (dslb) gives reasonably good results and needs only two Breadth-First Searches (BFS). Hence, dslb has a complexity of O(n+m). If the data does not fit into main memory, an external-memory algorithm is needed. In this thesis the I/O model by Vitter and Shriver [4] is used; it is widely accepted and has produced suitable results in the past. The best known external-memory BFS implementation has an I/O complexity of Ω(√(n·m/B) + sort(n+m)) for sparse graphs [5]. This is still very expensive compared to the I/O complexity of sorting, O(N/B · log_{M/B}(N/B)). While there is no improvement for external-memory BFS yet, Meyer published a different approach called the “Parallel clustering growing approach” (PAR_APPROX) that trades off I/O complexity against the approximation guarantee [6].
In this thesis different existing approaches will be evaluated. Also, PAR_APPROX will be implemented and analyzed to determine whether it is viable in practice. One main result will be that it is difficult to choose the parameters such that PAR_APPROX is reasonably fast for every graph class without using the semi-external-memory Single Source Shortest Path (SSSP) implementation of [1]; however, using this approach, the gain over external-memory BFS is small. Therefore, the approach PAR_APPROX_R will be developed. Furthermore, a lower bound on the expected error of PAR_APPROX_R will be proved on a carefully chosen difficult input class. With PAR_APPROX_R the desired gain will be reached.
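The double sweep lower bound heuristic mentioned above is compact enough to sketch directly (an in-memory illustration, not the external-memory implementation discussed in the thesis): run one BFS from an arbitrary start node, then a second BFS from the farthest node found; the eccentricity of that node lower-bounds the diameter.

```python
from collections import deque

def bfs_ecc(adj, s):
    """BFS from s; return the farthest node and its distance (eccentricity)."""
    dist = {s: 0}
    q = deque([s])
    far, far_d = s, 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > far_d:
                    far, far_d = v, dist[v]
                q.append(v)
    return far, far_d

def double_sweep_lower_bound(adj, start):
    """dslb: sweep 1 finds a peripheral node u, sweep 2 measures its eccentricity."""
    u, _ = bfs_ecc(adj, start)
    _, lb = bfs_ecc(adj, u)
    return lb  # a lower bound on the diameter, exact on paths and trees

# Path graph 0-1-2-3-4: starting mid-path, dslb still recovers the diameter 4.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
print(double_sweep_lower_bound(path, 2))  # -> 4
```

Each sweep is one BFS, giving the O(n+m) bound stated above.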
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to simultaneously image large numbers of dendritic spines in 3D volumes of neural tissue. Exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines; however, no existing automated method for spine detection comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized on the spine detection task. This transfer-learning approach is data-efficient while achieving a high detection precision. To train and validate the model we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near-human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities, which strongly rely on the interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (< 1.5% of 10 min resting-state fMRI) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals’ network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from a few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather is reflected across the entire length of the brain connectivity time series.
FIFO is the most prominent queueing strategy due to its simplicity and the fact that it works with local information only. Its analysis within adversarial queueing theory, however, has shown that there are networks that are not stable under the FIFO protocol, even at arbitrarily low rate. On the other hand, there are networks that are universally stable, i.e., they are stable under every greedy protocol at any rate r < 1. The question as to which networks are stable under the FIFO protocol arises naturally. We offer the first polynomial-time algorithm for deciding FIFO stability and simple-path FIFO stability of a directed network, answering an open question posed in [1, 4]. It turns out that there are networks that are FIFO stable but not universally stable; hence FIFO is not a worst-case protocol in this sense. Our characterization of FIFO stability is constructive and disproves an open characterization in [4].
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time keeping information like a packet’s age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.
The amyloid precursor protein (APP) was discovered in the 1980s as the precursor protein of the amyloid A4 peptide. The amyloid A4 peptide, also known as A-beta (Aβ), is the main constituent of senile plaques implicated in Alzheimer’s disease (AD). In association with the amyloid deposits, increasing impairments in learning and memory as well as the degeneration of neurons especially in the hippocampus formation are hallmarks of the pathogenesis of AD. Within the last decades much effort has been expended into understanding the pathogenesis of AD. However, little is known about the physiological role of APP within the central nervous system (CNS). Allocating APP to the proteome of the highly dynamic presynaptic active zone (PAZ) identified APP as a novel player within this neuronal communication and signaling network. The analysis of the hippocampal PAZ proteome derived from APP-mutant mice demonstrates that APP is tightly embedded in the underlying protein network. Strikingly, APP deletion accounts for major dysregulation within the PAZ proteome network. Ca2+-homeostasis, neurotransmitter release and mitochondrial function are affected and resemble the outcome during the pathogenesis of AD. The observed changes in protein abundance that occur in the absence of APP as well as in AD suggest that APP is a structural and functional regulator within the hippocampal PAZ proteome. Within this review article, we intend to introduce APP as an important player within the hippocampal PAZ proteome and to outline the impact of APP deletion on individual PAZ proteome subcommunities.
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly talk about models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that, I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results on various forms of tasks and setups that show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretical optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform similarly to the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also talk about neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First, I will talk about the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced as well as different algorithmic implementations. Secondly, a short review of the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors.
I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also deal with more complex environments in which it has to handle multiple possible underlying generating models (i.e., perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations to the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging. The approximation is attractive because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach addresses the neuronal substrates of the learning process for cue integration.
I again use a reward based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction section to discuss recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented together with an analysis of the influence of different plasticity mechanisms on it. Again benefits and shortcomings and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
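The reward-based learning scheme described above can be illustrated with a deliberately simplified toy (our construction, not the thesis implementation): an agent orients to one of N positions given a noisy cue, the reward signals whether the chosen position matched the true source, and reward-prediction errors update the action values, as in model-free RL.

```python
import random

random.seed(0)
N, ALPHA, EPS = 5, 0.2, 0.1
Q = [[0.0] * N for _ in range(N)]     # Q[cue][action]: learned reward predictions

def noisy_cue(true_pos):
    """The cue equals the true position 80% of the time, a neighbour otherwise."""
    if random.random() < 0.8:
        return true_pos
    return max(0, min(N - 1, true_pos + random.choice([-1, 1])))

for _ in range(5000):
    true_pos = random.randrange(N)
    cue = noisy_cue(true_pos)
    if random.random() < EPS:                 # epsilon-greedy exploration
        action = random.randrange(N)
    else:
        action = max(range(N), key=lambda a: Q[cue][a])
    reward = 1.0 if action == true_pos else 0.0
    Q[cue][action] += ALPHA * (reward - Q[cue][action])  # prediction-error update

policy = [max(range(N), key=lambda a: Q[c][a]) for c in range(N)]
print(policy)  # the learned policy orients toward the cued position
```

No probability distributions are represented explicitly; the mapping from cue to orienting action emerges from reward prediction alone, which is the point made about the model in the chapter summary above.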
Various concurrency primitives have been added to functional programming languages in different ways. In Haskell such a primitive is the MVar, joins are described in JoCaml, and AliceML uses futures to provide concurrent behaviour. Although these concurrency libraries seem to behave well, their mutual equivalence has not yet been proven; an expressive formal system is needed. In their paper "On proving the equivalence of concurrency primitives", Jan Schwinghammer, David Sabel, Joachim Niehren, and Manfred Schmidt-Schauß define a universal calculus for concurrency primitives known as the typed lambda calculus with futures. There, equivalence of processes was proved, and an encoding of simple one-place buffers was worked out. This bachelor’s thesis is about encoding more complex concurrency abstractions in the lambda calculus with futures and proving the correctness of their operational semantics. Given the new abstractions, we discuss program equivalence between them. Finally, we present a library written in Haskell that exposes futures and our concurrency abstractions as a proof of concept.
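To make the one-place buffer mentioned above concrete, here is a sketch of its intended semantics outside the calculus: `take` blocks while the buffer is empty and `put` blocks while it is full, mirroring Haskell's MVar. This is our illustration in Python with a condition variable, not the thesis's encoding in the lambda calculus with futures.

```python
import threading

class OnePlaceBuffer:
    """A one-place buffer with MVar-like blocking put/take semantics."""

    def __init__(self):
        self._lock = threading.Condition()
        self._full = False
        self._value = None

    def put(self, value):
        with self._lock:
            while self._full:          # block until the buffer is empty
                self._lock.wait()
            self._value, self._full = value, True
            self._lock.notify_all()

    def take(self):
        with self._lock:
            while not self._full:      # block until the buffer is full
                self._lock.wait()
            value, self._full = self._value, False
            self._lock.notify_all()
            return value

buf = OnePlaceBuffer()
producer = threading.Thread(target=lambda: [buf.put(i) for i in range(3)])
producer.start()
items = [buf.take() for _ in range(3)]
producer.join()
print(items)  # -> [0, 1, 2]
```

Because the buffer holds one value, the producer and consumer are forced into lock-step, which is exactly the synchronisation behaviour the encodings must preserve.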
Recent advances in artificial neural networks enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts so as to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceive versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we will introduce RL and present recent advances in RL applied to robotics. We will review Intrinsically Motivated (IM) learning, a special form of RL, and we will apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also propose an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality-specific information.
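The core claim, that generic lossy compression preferentially keeps what the modalities share, can be demonstrated with a toy construction (ours, not the paper's architecture): two "modalities" are a common latent signal plus private noise, and compressing their concatenation to one dimension via PCA recovers the shared part.

```python
import numpy as np

rng = np.random.default_rng(0)
shared = rng.standard_normal(2000)              # latent signal common to both
x1 = shared + 0.5 * rng.standard_normal(2000)   # modality 1: shared + private noise
x2 = shared + 0.5 * rng.standard_normal(2000)   # modality 2: shared + private noise

# Lossy compression to one dimension: top principal component via SVD.
X = np.column_stack([x1, x2])
X -= X.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
code = X @ vt[0]                                # 1-D compressed code

r_shared = abs(np.corrcoef(code, shared)[0, 1])   # code vs shared signal
r_private = abs(np.corrcoef(code, x1 - x2)[0, 1]) # code vs modality-specific part
print(r_shared > 0.9 and r_private < 0.2)         # code tracks the shared signal
```

The shared direction carries the largest variance, so the rate-limited code spends its capacity there and the modality-specific noise is discarded, which is the intuition behind the proposed architecture.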
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking the multivariate nature of interactions: it is neglected that investigating the effect of one source on a target necessitates to take all other sources as potential nuisance variables into account; also combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post-hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm’s performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios. 
Our approach is a tractable and data-efficient way of reconstructing approximative networks of multivariate interactions. It is preferable if available data are limited or if fully multivariate approaches are computationally infeasible.
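The timing-signature idea can be illustrated with a minimal, hypothetical sketch (a simplification in Python, not the reference MATLAB implementation): a directed edge a→c is tagged as potentially spurious when an alternative two-edge path a→b→c exists whose summed reconstructed interaction delays match the direct delay within a tolerance.

```python
def tag_spurious_edges(edges, tol=1):
    """Tag potentially spurious directed edges by their timing signatures.

    edges: dict mapping (source, target) -> reconstructed interaction delay
           (e.g. in samples).
    Returns the set of edges (a, c) for which some intermediate node b exists
    with delay(a, b) + delay(b, c) ~ delay(a, c) within the tolerance `tol`.
    """
    tagged = set()
    for (a, c), d_ac in edges.items():
        for (x, b), d_ab in edges.items():
            if x != a or b == c:   # only paths starting at a via a node b != c
                continue
            d_bc = edges.get((b, c))
            if d_bc is not None and abs(d_ab + d_bc - d_ac) <= tol:
                tagged.add((a, c))  # consistent cascade timing: possibly spurious
    return tagged
```

Pruning the tagged edges then yields the conservative network approximation described above; edges whose timing is inconsistent with any cascade are kept.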
Human lymph nodes play a central role in immune defense against infectious agents and tumor cells. Lymphoid follicles are spherical compartments of the lymph node, mainly filled with B cells. B cells are cellular components of the adaptive immune system. In the course of a specific immune response, lymphoid follicles pass through different morphological differentiation stages. The morphology and the spatial distribution of lymphoid follicles can sometimes be associated with a particular causative agent and the development stage of a disease. We report a new approach for the automatic detection of follicular regions in histological whole slide images of tissue sections immuno-stained with actin. The method is divided into two phases: (1) shock filter-based detection of transition points and (2) segmentation of follicular regions. Follicular regions in 10 whole slide images were manually annotated by visual inspection, and sample surveys were conducted by an expert pathologist. The results of our method were validated by comparison with the manual annotation. On average, we achieved a Zijdenbos similarity index of 0.71, with a standard deviation of 0.07.
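The similarity index used for this validation (Zijdenbos et al.) is, to our understanding, equivalent to the Dice overlap between the automatic and the manual segmentation; a minimal sketch on pixel sets, assuming regions are given as sets of pixel coordinates:

```python
def zijdenbos_similarity(region_a, region_b):
    """Zijdenbos similarity index between two pixel sets:
    2|A ∩ B| / (|A| + |B|); 1.0 for perfect overlap, 0.0 for disjoint regions."""
    a, b = set(region_a), set(region_b)
    if not a and not b:
        return 1.0  # convention: two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A value of 0.71, as reported above, thus means that roughly 71% overlap (in this 2|A∩B|/(|A|+|B|) sense) was reached between automatic and manual follicle regions.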
Co-design of a trustworthy AI system in healthcare: deep learning based skin lesion classifier
(2021)
This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for the early-phase development of similar AI tools.
Conceptual design of an ALICE Tier-2 centre integrated into a multi-purpose computing facility
(2012)
This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for integrating the storage resources. A solution for preserving the experiment-specific software stack in a shared environment is presented along with its effects on the user workload performance. A proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of cloud computing technology and the 'Infrastructure as Code' concept completes the thesis. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.
Modern experiments in heavy-ion collisions operate with huge data rates that cannot be fully stored on currently available storage devices. The data flow therefore has to be reduced by selecting those collisions that potentially carry information of physics interest. The future CBM experiment will have no simple criteria for selecting such collisions and requires full online reconstruction of the collision topology, including reconstruction of short-lived particles.
In this work the KF Particle Finder package for online reconstruction and selection of short-lived particles is proposed and developed. It reconstructs more than 70 decays, covering signals from all the physics cases of the CBM experiment: strange particles, strange resonances, hypernuclei, low mass vector mesons, charmonium, and open-charm particles.
The package is based on the Kalman filter method, providing a full set of particle parameters together with their errors, including position, momentum, mass, energy, and lifetime. It delivers high quality of the reconstructed particles, high efficiencies, and high signal-to-background ratios.
The KF Particle Finder is extremely fast, achieving a reconstruction time of 1.5 ms per minimum-bias Au-Au collision at 25 AGeV beam energy on a single CPU core. It is fully vectorized and parallelized and shows strong linear scalability on many-core architectures with up to 80 cores. Within the First Level Event Selection package it also scales on many-core clusters of up to 3200 cores.
The developed KF Particle Finder package is a universal platform for short-lived particle reconstruction, physics analysis, and online selection.
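At its core, reconstructing a short-lived mother particle means summing the four-momenta of its daughter candidates and computing the invariant mass. A minimal, hypothetical sketch of this step (the actual KF Particle Finder additionally propagates track parameters and full covariance matrices with the Kalman filter):

```python
import math

def invariant_mass(daughters):
    """daughters: iterable of (E, px, py, pz) four-momenta in GeV.
    Returns the invariant mass of the mother candidate via m^2 = E^2 - |p|^2."""
    E  = sum(d[0] for d in daughters)
    px = sum(d[1] for d in daughters)
    py = sum(d[2] for d in daughters)
    pz = sum(d[3] for d in daughters)
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))  # guard against small negative m^2 from resolution
```

Candidates whose invariant mass falls in a window around the expected particle mass would then be kept for the physics selection; the error propagation that gives the mass uncertainty is omitted here.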
At present, there are no quantitative, objective methods for diagnosing Parkinson's disease. Existing methods of quantitative analysis based on myograms suffer from inaccuracy and put strain on the patient; analysis with electronic tablets is limited to the visible drawing and does not capture writing forces and hand movements. In this paper we show how handwriting analysis can be performed with a new electronic pen and new features derived from the recorded signals, yielding good diagnostic results.
Keywords: Parkinson diagnosis, electronic pen, automatic handwriting analysis
G-CSC Report 2010
(2011)
The present report gives a short summary of the research of the Goethe Center for Scientific Computing (G-CSC) of the Goethe University Frankfurt. The G-CSC aims at developing and applying methods and tools for modelling and numerical simulation of problems from empirical science and technology. In particular, fast solvers for partial differential equations (PDEs), such as robust, parallel, and adaptive multigrid methods, as well as numerical methods for stochastic differential equations, are developed. These methods are highly advanced and allow complex problems to be solved.
The G-CSC is organised in departments and interdisciplinary research groups. Departments are localised directly at the G-CSC, while the task of interdisciplinary research groups is to bridge disciplines and to bring scientists from different departments together. Currently, the G-CSC consists of the department Simulation and Modelling and the interdisciplinary research group Computational Finance.
The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). ...
This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), held February 19-23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for presentation and discussion of all topics in and around termination; in this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). WST 2012 welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted. Each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosted three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.
This volume contains the papers presented at the First International Workshop on Rewriting Techniques for Program Transformations and Evaluation (WPTE 2014) which was held on July 13, 2014 in Vienna, Austria during the Vienna Summer of Logic 2014 (VSL 2014) as a workshop of the Sixth Federated Logic Conference (FLoC 2014). WPTE 2014 was affiliated with the 25th International Conference on Rewriting Techniques and Applications joined with the 12th International Conference on Typed Lambda Calculi and Applications (RTA/TLCA 2014).
Measurement of ϒ(1S) elliptic flow at forward rapidity in Pb-Pb collisions at √sNN = 5.02 TeV
(2019)
The first measurement of the ϒ(1S) elliptic flow coefficient (v2) is performed at forward rapidity (2.5 < y < 4) in Pb–Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC. The results are obtained with the scalar product method and are reported as a function of transverse momentum (pT) up to 15 GeV/c in the 5%–60% centrality interval. The measured ϒ(1S) v2 is consistent with 0 and with the small positive values predicted by transport models within uncertainties. The v2 coefficient in 2 < pT < 15 GeV/c is lower than that of inclusive J/ψ mesons in the same pT interval by 2.6 standard deviations. These results, combined with earlier suppression measurements, are in agreement with a scenario in which the ϒ(1S) production in Pb–Pb collisions at LHC energies is dominated by dissociation limited to the early stage of the collision, whereas in the J/ψ case there is substantial experimental evidence of an additional regeneration component.
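For context, the elliptic flow coefficient v2 is the second harmonic of the azimuthal particle distribution; a common two-subevent form of the scalar product estimator (the standard textbook definition, not necessarily the exact ALICE implementation) reads

```latex
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2\sum_{n\geq 1} v_n \cos\!\big(n(\varphi - \Psi_n)\big),
\qquad
v_n\{\mathrm{SP}\} = \frac{\big\langle u_n\, Q_n^{*} \big\rangle}{\sqrt{\big\langle Q_n^{A}\, Q_n^{B*} \big\rangle}},
```

where u_n = e^{inφ} is the unit flow vector of the particle of interest, Q_n^{A,B} are reference flow vectors built from subevents separated in pseudorapidity, and Ψ_n is the n-th harmonic symmetry plane.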
Contents:
Yuki Chiba, Santiago Escobar, Naoki Nishida, David Sabel, and Manfred Schmidt-Schauß: Preface
The Collection of all Abstracts of the Talks at WPTE 2015
Brigitte Pientka: Mechanizing Meta-Theory in Beluga
Giulio Guerrieri: Head reduction and normalization in a call-by-value lambda-calculus
Adrián Palacios and Germán Vidal: Towards Modelling Actor-Based Concurrency in Term Rewriting
David Sabel and Manfred Schmidt-Schauß: Observing Success in the Pi-Calculus
Sjaak Smetsers, Ken Madlener, and Marko van Eekelen: Formalizing Bialgebraic Semantics in PVS 6.0
We report on the properties of the underlying event measured with ALICE at the LHC in pp and p-Pb collisions at √sNN = 5.02 TeV. The event activity, quantified by charged-particle number and summed-pT densities, is measured as a function of the leading-particle transverse momentum (pT,trig). These quantities are studied in three azimuthal-angle regions relative to the leading particle in the event: toward, away, and transverse. Results are presented for three different pT thresholds (0.15, 0.5, and 1 GeV/c) at mid-pseudorapidity (|η|<0.8). The event activity in the transverse region, which is the most sensitive to the underlying event, exhibits similar behaviour in both pp and p-Pb collisions, namely, a steep increase with pT,trig for low pT,trig, followed by a saturation at pT,trig≈5 GeV/c. The results from pp collisions are compared with existing measurements at other centre-of-mass energies. The quantities in the toward and away regions are also analyzed after the subtraction of the contribution measured in the transverse region. The remaining jet-like particle densities are consistent in pp and p-Pb collisions for pT,trig>10 GeV/c, whereas for lower pT,trig values the event activity is slightly higher in p-Pb than in pp collisions. The measurements are compared with predictions from the PYTHIA 8 and EPOS LHC Monte Carlo event generators.
The ALICE collaboration at the LHC reports the measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp collisions at √s = 2.76 TeV. Electrons not originating from semi-electronic decays of beauty hadrons are suppressed using the impact parameter of the corresponding tracks. The production cross section of beauty decay electrons is compared to the result obtained with an alternative method which uses the distribution of the azimuthal angle between heavy-flavour decay electrons and charged hadrons. Perturbative QCD calculations agree with the measured cross section within the experimental and theoretical uncertainties. The integrated visible cross section, σ(b→e) = 3.47 ± 0.40 (stat) +1.12/−1.33 (sys) ± 0.07 (norm) μb, was extrapolated to full phase space using Fixed Order plus Next-to-Leading Log (FONLL) predictions to obtain the total bb̄ production cross section, σ(bb̄) = 130 ± 15.1 (stat) +42.1/−49.8 (sys) +3.4/−3.1 (extr) ± 2.5 (norm) ± 4.4 (BR) μb.
The azimuthal (Δφ) correlation distributions between heavy-flavor decay electrons and associated charged particles are measured in pp and p-Pb collisions at √sNN = 5.02 TeV. Results are reported for electrons with transverse momentum 4<pT<16 GeV/c and pseudorapidity |η|<0.6. The associated charged particles are selected with transverse momentum 1<pT<7 GeV/c, and relative pseudorapidity separation with the leading electron |Δη|<1. The correlation measurements are performed to study and characterize the fragmentation and hadronization of heavy quarks. The correlation structures are fitted with a constant and two von Mises functions to obtain the baseline and the near- and away-side peaks, respectively. The results from p-Pb collisions are compared with those from pp collisions to study the effects of cold nuclear matter. In the measured trigger electron and associated particle kinematic regions, the two collision systems give consistent results. The Δφ distribution and the peak observables in pp and p-Pb collisions are compared with calculations from various Monte Carlo event generators.
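The fit model described here (constant baseline plus two von Mises peaks at Δφ = 0 and Δφ = π) can be sketched as follows; this is a minimal illustration assuming NumPy, not the collaboration's actual fit code:

```python
import numpy as np

def von_mises(dphi, mu, kappa):
    """Von Mises density in Δφ, normalized to unity over one full period;
    kappa is the concentration (large kappa -> narrow peak)."""
    return np.exp(kappa * np.cos(dphi - mu)) / (2.0 * np.pi * np.i0(kappa))

def correlation_model(dphi, baseline, n_near, k_near, n_away, k_away):
    """Constant baseline plus a near-side (mu = 0) and an away-side (mu = pi)
    peak; n_near/n_away are the peak yields, k_near/k_away the concentrations."""
    return (baseline
            + n_near * von_mises(dphi, 0.0, k_near)
            + n_away * von_mises(dphi, np.pi, k_away))
```

Such a model could be fitted to a measured Δφ distribution with, e.g., scipy.optimize.curve_fit; the fitted n_near and n_away then play the role of the near- and away-side per-trigger yields.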
The first measurement of the e+e− pair production at low lepton pair transverse momentum (pT,ee) and low invariant mass (mee) in non-central Pb-Pb collisions at √sNN = 5.02 TeV at the LHC is presented. The dielectron production is studied with the ALICE detector at midrapidity (|ηe|<0.8) as a function of invariant mass (0.4≤mee<2.7 GeV/c2) in the 50-70% and 70-90% centrality classes for pT,ee<0.1 GeV/c, and as a function of pT,ee in three mee intervals in the most peripheral Pb-Pb collisions. Below a pT,ee of 0.1 GeV/c, a clear excess of e+e− pairs is found compared to the expectations from known hadronic sources and predictions of thermal radiation from the medium. The mee excess spectra are reproduced, within uncertainties, by different predictions of the photon-photon production of dielectrons, where the photons originate from the extremely strong electromagnetic fields generated by the highly Lorentz-contracted Pb nuclei. Lowest-order quantum electrodynamic (QED) calculations, as well as a model that takes into account the impact-parameter dependence of the average transverse momentum of the photons, also provide a good description of the pT,ee spectra. The measured √⟨p²T,ee⟩ of the excess pT,ee spectrum in peripheral Pb-Pb collisions is found to be comparable to the values observed previously at RHIC in a similar phase-space region.
The measurement of the production of charm jets, identified by the presence of a D0 meson in the jet constituents, is presented in proton-proton collisions at centre-of-mass energies of √s = 5.02 and 13 TeV with the ALICE detector at the CERN LHC. The D0 mesons were reconstructed from their hadronic decay D0→K−π+ and the respective charge conjugate. Jets were reconstructed from D0-meson candidates and charged particles using the anti-kT algorithm, in the jet transverse momentum range 5<pT,ch jet<50 GeV/c, pseudorapidity |ηjet|<0.9−R, and with the jet resolution parameters R = 0.2, 0.4, 0.6. The distribution of the jet momentum fraction carried by a D0 meson along the jet axis (z||ch) was measured in the range 0.4<z||ch<1.0 in four ranges of the jet transverse momentum. Comparisons of results for different collision energies and jet resolution parameters are also presented. The measurements are compared to predictions from Monte Carlo event generators based on leading-order and next-to-leading-order perturbative quantum chromodynamics calculations. A generally good description of the main features of the data is obtained in spite of a few discrepancies at low pT,ch jet. Measurements were also done for R = 0.3 at √s = 5.02 TeV and are shown along with their comparisons to theoretical predictions in an appendix to this paper.
A newly developed observable for correlations between symmetry planes, which characterize the direction of the anisotropic emission of produced particles, is measured in Pb-Pb collisions at √sNN = 2.76 TeV with ALICE. This so-called Gaussian Estimator allows for the first time the study of these quantities without the influence of correlations between different flow amplitudes. The centrality dependence of various correlations between two, three and four symmetry planes is presented. The ordering of magnitude between these symmetry plane correlations is discussed and the results of the Gaussian Estimator are compared with measurements of previously used estimators. The results utilizing the new estimator lead to significantly smaller correlations than reported by studies using the Scalar Product method. Furthermore, the obtained symmetry plane correlations are compared to state-of-the-art hydrodynamic model calculations for the evolution of heavy-ion collisions. While the model predictions provide a qualitative description of the data, quantitative agreement is not always observed, particularly for correlators with significant non-linear response of the medium to initial state anisotropies of the collision system. As these results provide unique and independent information, their usage in future Bayesian analysis can further constrain our knowledge on the properties of the QCD matter produced in ultrarelativistic heavy-ion collisions.
Measurements of the elliptic flow coefficient relative to the collision plane defined by the spectator neutrons v2{ΨSP} in collisions of Pb ions at center-of-mass energy per nucleon-nucleon pair √sNN = 2.76 TeV and Xe ions at √sNN = 5.44 TeV are reported. The results are presented for charged particles produced at midrapidity as a function of centrality and transverse momentum. The ratio between v2{ΨSP} and the elliptic flow coefficient relative to the participant plane v2{4}, estimated using four-particle correlations, deviates by up to 20% from unity depending on centrality. This observation differs strongly from the magnitude of the corresponding eccentricity ratios predicted by the TRENTo and the elliptic power models of initial state fluctuations that are tuned to describe the participant plane anisotropies. The differences can be interpreted as a decorrelation of the neutron spectator plane and the reaction plane because of fragmentation of the remnants from the colliding nuclei, which points to an incompleteness of current models of initial state fluctuations. A significant transverse momentum dependence of the ratio v2{ΨSP}/v2{4} is observed in all but the most central collisions, which may help to understand whether momentum anisotropies at low and intermediate transverse momentum have a common origin in initial state fluctuations. The ratios of v2{ΨSP} and v2{4} to the corresponding initial state eccentricities for Xe-Xe and Pb-Pb collisions at similar initial entropy density show a difference of (7.0±0.9)% with an additional variation of +1.8% when including RHIC data in the TRENTo parameter extraction. These observations provide new experimental constraints for viscous effects in the hydrodynamic modeling of the expanding quark-gluon plasma.
This article presents new measurements of the fragmentation properties of jets in both proton-proton (pp) and heavy-ion collisions with the ALICE experiment at the Large Hadron Collider (LHC). We report distributions of the fraction zr of transverse momentum carried by subjets of radius r within jets of radius R. Charged-particle jets are reconstructed at midrapidity using the anti-kT algorithm with jet radius R=0.4, and subjets are reconstructed by reclustering the jet constituents using the anti-kT algorithm with radii r=0.1 and r=0.2. In proton-proton collisions, we measure both the inclusive and leading subjet distributions. We compare these measurements to perturbative calculations at next-to-leading logarithmic accuracy, which suggest a large impact of threshold resummation and hadronization effects on the zr distribution. In heavy-ion collisions, we measure the leading subjet distributions, which allow access to a region of harder jet fragmentation than has been probed by previous measurements of jet quenching via hadron fragmentation distributions. The zr distributions enable extraction of the parton-to-subjet fragmentation function and allow for tests of the universality of jet fragmentation functions in the quark-gluon plasma (QGP). We find indications that there is a turnover in the ratio between the distributions in Pb-Pb and pp collisions as zr→1, exposing qualitatively new possibilities to disentangle competing jet quenching mechanisms. By comparing our results to theoretical calculations based on an independent extraction of the parton-to-jet fragmentation function, we find consistency with the universality of jet fragmentation and no indication of factorization breaking in the QGP.
The production yields of non-prompt Ds+ mesons, namely Ds+ mesons from beauty-hadron decays, were measured for the first time as a function of the transverse momentum (pT) at midrapidity (|y|<0.5) in central and semi-central Pb-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 5.02 TeV with the ALICE experiment at the LHC. The Ds+ mesons and their charge conjugates were reconstructed from the hadronic decay channel Ds+→ϕπ+, with ϕ→K−K+, in the 4<pT<36 GeV/c and 2<pT<24 GeV/c intervals for the 0-10% and 30-50% centrality classes, respectively. The measured yields of non-prompt Ds+ mesons are compared to those of prompt Ds+ and non-prompt D0 mesons by calculating the ratios of the production yields in Pb-Pb collisions and the nuclear modification factor RAA. The ratio between the RAA of non-prompt Ds+ and prompt Ds+ mesons, and that between the RAA of non-prompt Ds+ and non-prompt D0 mesons in central Pb-Pb collisions are found to be on average higher than unity in the 4<pT<12 GeV/c interval with a statistical significance of about 1.6σ and 1.7σ, respectively. The measured RAA ratios are compared with the predictions of theoretical models of heavy-quark transport in a hydrodynamically expanding QGP that incorporate hadronisation via quark recombination.
Charmonium production in pp collisions at center-of-mass energy of √s = 13 TeV and p-Pb collisions at center-of-mass energy per nucleon pair of √sNN = 8.16 TeV is studied as a function of charged-particle pseudorapidity density with ALICE. Ground and excited charmonium states (J/ψ, ψ(2S)) are measured from their dimuon decays in the interval of rapidity in the center-of-mass frame 2.5<ycms<4.0 for pp collisions, and 2.03<ycms<3.53 and −4.46<ycms<−2.96 for p-Pb collisions. The charged-particle pseudorapidity density is measured around midrapidity (|η|<1.0). In pp collisions, the measured charged-particle multiplicity extends to about six times the average value, while in p-Pb collisions at forward (backward) rapidity a multiplicity corresponding to about three (four) times the average is reached. The ψ(2S) yield increases with the charged-particle pseudorapidity density. The ratio of ψ(2S) over J/ψ yield does not show a significant multiplicity dependence in either colliding system, suggesting a similar behavior of J/ψ and ψ(2S) yields with respect to charged-particle pseudorapidity density. The results are also compared with model calculations.
The first experimental information on the strong interaction between Λ and Ξ− strange baryons is presented in this Letter. The correlation function of Λ−Ξ− and Λ̄−Ξ̄+ pairs produced in high-multiplicity proton-proton (pp) collisions at √s = 13 TeV at the LHC is measured as a function of the relative momentum of the pair. The femtoscopy method is used to calculate the correlation function, which is then compared with theoretical expectations obtained using a meson exchange model, chiral effective field theory, and Lattice QCD calculations close to the physical point. Data support predictions of small scattering parameters while discarding versions with large ones, thus suggesting a weak Λ−Ξ− interaction. The limited statistical significance of the data does not yet allow one to constrain the effects of coupled channels like Σ−Ξ and N−Ω.
Two-particle transverse momentum differential correlators, recently measured in Pb-Pb collisions at energies available at the CERN Large Hadron Collider (LHC), provide an additional tool to gain insights into particle production mechanisms and infer transport properties, such as the ratio of shear viscosity to entropy density, of the medium created in Pb-Pb collisions. The longitudinal long-range correlations and the large azimuthal anisotropy measured at low transverse momenta in small collision systems, namely pp and p-Pb, at LHC energies resemble manifestations of collective behaviour. This suggests that locally equilibrated matter may be produced in these small collision systems, similar to what is observed in Pb-Pb collisions. In this work, the same two-particle transverse momentum differential correlators are exploited in pp and p-Pb collisions at √s = 7 TeV and √sNN = 5.02 TeV, respectively, to seek evidence for viscous effects. Specifically, the strength and shape of the correlators are studied as a function of the produced particle multiplicity to identify evidence for longitudinal broadening that might reveal the presence of viscous effects in these smaller systems. The measured correlators and their evolution from pp and p-Pb to Pb-Pb collisions are additionally compared to predictions from Monte Carlo event generators, and the potential presence of viscous effects is discussed.
Jet-like correlations with respect to K0S and Λ (Λ‾) in pp and Pb–Pb collisions at √sNN = 5.02 TeV
(2022)
Two-particle correlations with K0S, Λ/Λ̄, and charged hadrons as trigger particles in the transverse momentum range 8<pT,trig<16 GeV/c, and associated charged particles within 1<pT,assoc<8 GeV/c, are studied at mid-rapidity in pp and central Pb-Pb collisions at a centre-of-mass energy per nucleon-nucleon collision √sNN = 5.02 TeV with the ALICE detector at the LHC. After subtracting the contributions of the flow background, the per-trigger yields are extracted on both the near and away sides, and the ratio in Pb-Pb collisions with respect to pp collisions (IAA) is computed. The per-trigger yield in Pb-Pb collisions on the away side is strongly suppressed to the level of IAA≈0.6 for pT,assoc>3 GeV/c as expected from strong in-medium energy loss, while an enhancement develops at low pT,assoc on both the near and away sides, reaching IAA≈1.8 and 2.7 respectively. These findings are in good agreement with previous ALICE measurements from two-particle correlations triggered by neutral pions (π0-h) and charged hadrons (h-h) in Pb-Pb collisions at √sNN = 2.76 TeV. Moreover, the correlations with K0S mesons and Λ/Λ̄ baryons as trigger particles are compared to those of inclusive charged hadrons. The results are compared with the predictions of Monte Carlo models.
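The quantity IAA used above follows the standard definition as the ratio of per-trigger associated yields in the two collision systems:

```latex
I_{AA}(p_{T,\mathrm{assoc}})
= \frac{Y_{\mathrm{Pb\text{-}Pb}}(p_{T,\mathrm{assoc}})}{Y_{\mathrm{pp}}(p_{T,\mathrm{assoc}})},
\qquad
Y = \frac{1}{N_{\mathrm{trig}}}\,\frac{\mathrm{d}N_{\mathrm{assoc}}}{\mathrm{d}p_{T,\mathrm{assoc}}},
```

where Y is evaluated separately in the near- and away-side peak regions after background subtraction, so that IAA < 1 indicates suppression and IAA > 1 enhancement relative to pp collisions.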
The measurement of the production of deuterons, tritons and 3He and their antiparticles in Pb-Pb collisions at √sNN = 5.02 TeV is presented in this article. The measurements are carried out at midrapidity (|y|< 0.5) as a function of collision centrality using the ALICE detector. The pT-integrated yields, the coalescence parameters and the ratios to protons and antiprotons are reported and compared with nucleosynthesis models. The comparison of these results in different collision systems at different centre-of-mass collision energies reveals a suppression of nucleus production in small systems. In the Statistical Hadronisation Model framework, this can be explained by a small correlation volume where the baryon number is conserved, as already shown in previous fluctuation analyses. However, a different size of the correlation volume is required to describe the proton yields in the same data sets. The coalescence model can describe this suppression by the fact that the wave functions of the nuclei are large and the fireball size starts to become comparable and even much smaller than the actual nucleus at low multiplicities.
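The coalescence parameters referred to above are conventionally defined (standard definition from the coalescence-model literature) by relating the invariant yield of a nucleus with mass number A to the A-th power of the proton yield at the scaled momentum:

```latex
B_A = \left.
\frac{E_A\, \mathrm{d}^3 N_A / \mathrm{d}p_A^3}
     {\big( E_p\, \mathrm{d}^3 N_p / \mathrm{d}p_p^3 \big)^{A}}
\right|_{\vec{p}_p = \vec{p}_A / A},
```

so that, e.g., B2 for deuterons compares the deuteron yield at momentum p to the square of the proton yield at p/2; in simple coalescence pictures B_A scales inversely with a power of the source volume.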
The pseudorapidity density of charged particles with minimum transverse momentum (pT) thresholds of 0.15, 0.5, 1, and 2 GeV/c was measured in pp collisions at centre-of-mass energies of √s = 5.02 and 13 TeV with the ALICE detector. The study is carried out for inelastic collisions with at least one primary charged particle having a pseudorapidity (η) within ±0.8 and pT larger than the corresponding threshold. The measurements were also performed for inelastic and non-single-diffractive events as well as for inelastic events with at least one charged particle having |η| < 1 in pp collisions at √s = 5.02 TeV for the first time at the LHC. The measurements are compared to the PYTHIA 6, PYTHIA 8, and EPOS-LHC models. In general, the models describe the pseudorapidity dependence of particle production well; however, discrepancies are observed for event classes including diffractive events and for the highest transverse momentum threshold (pT > 2 GeV/c), highlighting the importance of such measurements for tuning event generators. The new measurements agree within uncertainties with results from the ATLAS and CMS experiments.
First measurement of Λ+c production down to pT = 0 in pp and p–Pb collisions at √sNN = 5.02 TeV
(2023)
The production of prompt Λ+c baryons has been measured at midrapidity in the transverse momentum interval 0<pT<1 GeV/c for the first time, in pp and p-Pb collisions at a centre-of-mass energy per nucleon-nucleon collision √sNN = 5.02 TeV. The measurement was performed in the decay channel Λ+c→pK0S by applying new decay reconstruction techniques using a Kalman-Filter vertexing algorithm and adopting a machine-learning approach for the candidate selection. The pT-integrated Λ+c production cross sections in both collision systems were determined and used along with the measured yields in Pb-Pb collisions to compute the pT-integrated nuclear modification factors RpPb and RAA of Λ+c baryons, which are compared to model calculations that consider nuclear modification of the parton distribution functions. The Λ+c/D0 baryon-to-meson yield ratio is reported for pp and p-Pb collisions. Comparisons with models that include modified hadronisation processes are presented, and the implications of the results on the understanding of charm hadronisation in hadronic collisions are discussed. A significant (3.7σ) modification of the mean transverse momentum of Λ+c baryons is seen in p-Pb collisions with respect to pp collisions, while the pT-integrated Λ+c/D0 yield ratio was found to be consistent between the two collision systems within the uncertainties.
The first measurement of event-by-event antideuteron number fluctuations in high energy heavy-ion collisions is presented. The measurements are carried out at midrapidity (|η|<0.8) as a function of collision centrality in Pb−Pb collisions at √sNN = 5.02 TeV using the ALICE detector. A significant negative correlation between the produced antiprotons and antideuterons is observed in all collision centralities. The results are compared with coalescence calculations, which fail to describe the measurement, in particular if a correlated production of protons and neutrons is assumed. Thermal-statistical model calculations describe the data within uncertainties only for correlation volumes that are different with respect to those describing proton yields and a similar measurement of net-proton number fluctuations.
The transverse-momentum (pT) spectra of K∗(892)0 and ϕ(1020) measured with the ALICE detector up to pT = 16 GeV/c in the rapidity range −1.2<y<0.3, in p-Pb collisions at a center-of-mass energy per nucleon-nucleon collision √sNN = 5.02 TeV are presented as a function of charged-particle multiplicity and rapidity. The measured pT distributions show a dependence on both multiplicity and rapidity at low pT, whereas no significant dependence is observed at high pT. A rapidity dependence is observed in the pT-integrated yield (dN/dy), whereas the mean transverse momentum (⟨pT⟩) shows a flat behaviour as a function of rapidity. The rapidity asymmetry (Yasym) at low pT (< 5 GeV/c) is more significant for higher multiplicity classes. At high pT, no significant rapidity asymmetry is observed in any of the multiplicity classes. Both K∗(892)0 and ϕ(1020) show similar Yasym. The nuclear modification factor (QCP) as a function of pT shows a Cronin-like enhancement at intermediate pT, which is more prominent at higher rapidities (Pb-going direction) and in higher multiplicity classes. At high pT (> 5 GeV/c), the QCP values are greater than unity and no significant rapidity dependence is observed.
Measurement of the J/ψ polarization with respect to the event plane in Pb–Pb collisions at the LHC
(2022)
The polarization of inclusive J/ψ produced in Pb-Pb collisions at √sNN = 5.02 TeV at the LHC was studied by ALICE in the dimuon channel, via the measurement of the angular distribution of its decay products. The study was performed in the rapidity region 2.5<y<4, for three transverse momentum intervals (2<pT<4, 4<pT<6, 6<pT<10 GeV/c) and as a function of the centrality of the collision for 2<pT<6 GeV/c. For the first time, the polarization was measured with respect to the event plane of the collision, by considering the angle between the positive-charge decay muon in the J/ψ rest frame and the axis perpendicular to the event-plane vector in the laboratory system. A small transverse polarization is measured, with a significance reaching 3.9σ at low pT and for intermediate centrality values. The polarization could be connected with the existence of a strong magnetic field in the early stage of quark-gluon plasma formation in Pb-Pb collisions, as well as with its behaviour as a rotating fluid with large vorticity.
Investigation of K+K− interactions via femtoscopy in Pb−Pb collisions at √sNN = 2.76 TeV at the LHC
(2022)
Femtoscopic correlations of non-identical charged kaons (K+K−) are studied in Pb−Pb collisions at a center-of-mass energy per nucleon−nucleon collision √sNN = 2.76 TeV by ALICE at the LHC. One-dimensional K+K− correlation functions are analyzed in three centrality classes and eight intervals of particle-pair transverse momentum. The Lednický and Luboshitz interaction model used in the K+K− analysis includes the final-state Coulomb interactions between kaons and the final-state interaction (FSI) through the a0(980) and f0(980) resonances. The mass and coupling of the f0(980) were extracted from the fit to the K+K− correlation functions using the femtoscopic technique for the first time. The measured mass and width of the f0(980) resonance are consistent with other published measurements. The height of the ϕ(1020) meson peak present in the K+K− correlation function rapidly decreases with increasing source radius, qualitatively in agreement with an inverse volume dependence. A phenomenological fit to this trend suggests that the ϕ(1020) meson yield is dominated by particles produced directly from the hadronization of the system. The small fraction subsequently produced by FSI could not be precisely quantified with the data presented in this paper and will be assessed in future work.
This article reports measurements of the angle between differently defined jet axes in pp collisions at √s = 5.02 TeV carried out by the ALICE Collaboration. Charged particles at midrapidity are clustered into jets with resolution parameters R=0.2 and 0.4. The jet axis, before and after Soft Drop grooming, is compared to the jet axis from the Winner-Takes-All (WTA) recombination scheme. The angle between these axes, ΔRaxis, probes a wide phase space of the jet formation and evolution, ranging from the initial high-momentum-transfer scattering to the hadronization process. The ΔRaxis observable is presented for 20<pT,ch jet<100 GeV/c, and compared to predictions from the PYTHIA 8 and Herwig 7 event generators. The distributions can also be calculated analytically with a leading hadronization correction related to the non-perturbative component of the Collins−Soper−Sterman (CSS) evolution kernel. Comparisons to analytical predictions at next-to-leading-logarithmic accuracy with leading hadronization correction implemented from experimental extractions of the CSS kernel in Drell−Yan measurements are presented. The analytical predictions describe the measured data within 20% in the perturbative regime, with surprising agreement in the non-perturbative regime as well. These results are compatible with the universality of the CSS kernel in the context of jet substructure.
This letter reports measurements which characterize the underlying event associated with hard scatterings at mid-pseudorapidity (|η|<0.8) in pp, p−Pb and Pb−Pb collisions at centre-of-mass energy per nucleon pair, √sNN = 5.02 TeV. The measurements are performed with ALICE at the LHC. Different multiplicity classes are defined based on the event activity measured at forward rapidities. The hard scatterings are identified by the leading particle defined as the charged particle with the largest transverse momentum (pT) in the collision and having 8<pT<15 GeV/c. The pT spectra of associated particles (0.5≤pT<6 GeV/c) are measured in different azimuthal regions defined with respect to the leading particle direction: toward, transverse, and away. The associated charged particle yields in the transverse region are subtracted from those of the away and toward regions. The remaining jet-like yields are reported as a function of the multiplicity measured in the transverse region. The measurements show a suppression of the jet-like yield in the away region and an enhancement of high-pT associated particles in the toward region in central Pb−Pb collisions, as compared to minimum-bias pp collisions. These observations are consistent with previous measurements that used two-particle correlations, and with an interpretation in terms of parton energy loss in a high-density quark gluon plasma. These yield modifications vanish in peripheral Pb−Pb collisions and are not observed in either high-multiplicity pp or p−Pb collisions.
This article presents measurements of the groomed jet radius and momentum splitting fraction in pp collisions at √s = 5.02 TeV with the ALICE detector at the Large Hadron Collider. Inclusive charged-particle jets are reconstructed at midrapidity using the anti-kT algorithm for transverse momentum 60<pT,ch jet<80 GeV/c. We report results using two different grooming algorithms: soft drop and, for the first time, dynamical grooming. For each grooming algorithm, a variety of grooming settings are used in order to explore the impact of collinear radiation on these jet substructure observables. These results are compared to perturbative calculations that include resummation of large logarithms at all orders in the strong coupling constant. We find good agreement of the theoretical predictions with the data for all grooming settings considered.
We present the first systematic comparison of the charged-particle pseudorapidity densities for three widely different collision systems, pp, p-Pb, and Pb-Pb, at the top energy of the Large Hadron Collider (√sNN = 5.02 TeV), measured over a wide pseudorapidity range (−3.5<η<5), the widest possible among the four experiments at that facility. The systematic uncertainties are minimised since the measurements are recorded by the same experimental apparatus (ALICE). The distributions for p-Pb and Pb-Pb collisions are determined as a function of the centrality of the collisions, while results from pp collisions are reported for inelastic events with at least one charged particle at midrapidity. The charged-particle pseudorapidity densities are, under simple and robust assumptions, transformed to charged-particle rapidity densities. This allows for the calculation and the presentation of the evolution of the width of the rapidity distributions and of a lower bound on the Bjorken energy density, as a function of the number of participants in all three collision systems. We find a decreasing width of the particle production and a roughly smooth tenfold increase in the energy density as the system size grows, consistent with a gradually denser phase of matter.
The inclusive production of the charm-strange baryon Ω0c is measured for the first time via its hadronic decay into Ω−π+ at midrapidity (|y|<0.5) in proton-proton (pp) collisions at the centre-of-mass energy √s = 13 TeV with the ALICE detector at the LHC. The transverse momentum (pT) differential cross section multiplied by the branching ratio is presented in the interval 2<pT<12 GeV/c. The pT dependence of the Ω0c-baryon production relative to the prompt D0-meson and to the prompt Ξ0c-baryon production is compared to various models that take different hadronisation mechanisms into consideration. In the measured pT interval, the ratio of the pT-integrated cross sections of Ω0c and prompt Λ+c baryons multiplied by the Ω−π+ branching ratio is found to be larger by a factor of about 20 with a significance of about 4σ when compared to e+e− collisions.
Hadronic resonances are used to probe the hadron gas produced in the late stage of heavy-ion collisions since they decay on the same timescale, of the order of 1 to 10 fm/c, as the decoupling time of the system. In the hadron gas, (pseudo)elastic scatterings among the products of resonances that decayed before the kinetic freeze-out and regeneration processes counteract each other, the net effect depending on the resonance lifetime, the duration of the hadronic phase, and the hadronic cross sections at play. In this context, the Σ(1385)± particle is of particular interest as models predict that regeneration dominates over rescattering despite its relatively short lifetime of about 5.5 fm/c. The first measurement of the Σ(1385)± resonance production at midrapidity in Pb-Pb collisions at √sNN = 5.02 TeV with the ALICE detector is presented in this Letter. The resonances are reconstructed via their hadronic decay channel, Λπ, as a function of the transverse momentum (pT) and the collision centrality. The results are discussed in comparison with the measured yield of pions and with expectations from the statistical hadronization model as well as commonly employed event generators, including PYTHIA8/Angantyr and EPOS3 coupled to the UrQMD hadronic cascade afterburner. None of the models can describe the data. For Σ(1385)±, a behaviour similar to that of K∗(892)0 is observed in the data, unlike the predictions of EPOS3 with the afterburner.
This Letter reports on the first measurements of transverse momentum dependent flow angle Ψn and flow magnitude vn fluctuations, determined using new four-particle correlators. The measurements are performed for various centralities in Pb-Pb collisions at a centre-of-mass energy per nucleon pair of √sNN = 5.02 TeV with ALICE at the CERN Large Hadron Collider. Both flow angle and flow magnitude fluctuations are observed in the presented centrality ranges and are strongest in the most central collisions and for a transverse momentum pT>2 GeV/c. Comparisons with theoretical models, including iEBE-VISHNU, MUSIC, and AMPT, show that the measurements exhibit unique sensitivities to the initial state of heavy-ion collisions.
Three-body nuclear forces play an important role in the structure of nuclei and hypernuclei and are also incorporated in models to describe the dynamics of dense baryonic matter, such as in neutron stars. So far, only indirect measurements anchored to the binding energies of nuclei can be used to constrain the three-nucleon force, and if hyperons are considered, the scarce data on hypernuclei impose only weak constraints on the three-body forces. In this work, we present the first direct measurement of the p−p−p and p−p−Λ systems in terms of three-particle mixed moments carried out for pp collisions at √s = 13 TeV. Three-particle cumulants are extracted from the normalised mixed moments by applying the Kubo formalism, where the three-particle interaction contribution to these moments can be isolated after subtracting the known two-body interaction terms. A negative cumulant is found for the p−p−p system, hinting at the presence of a residual three-body effect, while for p−p−Λ the cumulant is consistent with zero. This measurement demonstrates the accessibility of three-baryon correlations at the LHC.
Fluctuation measurements are important sources of information on the mechanism of particle production at LHC energies. This article reports the first experimental results on third-order cumulants of the net-proton distributions in Pb−Pb collisions at a center-of-mass energy √sNN = 5.02 TeV recorded by the ALICE detector. The results on the second-order cumulants of net-proton distributions at √sNN = 2.76 and 5.02 TeV are also discussed in view of effects due to the global and local baryon number conservation. The results demonstrate the presence of long-range rapidity correlations between protons and antiprotons. Such correlations originate from the early phase of the collision. The experimental results are compared with HIJING and EPOS model calculations, and the dependence of the fluctuation measurements on the phase-space coverage is examined in the context of lattice quantum chromodynamics (LQCD) and hadron resonance gas (HRG) model estimations. The measured third-order cumulants are consistent with zero within experimental uncertainties of about 4% and are described well by LQCD and HRG predictions.
The interaction of K− with protons is characterised by the presence of several coupled channels, systems like K̄0n and πΣ with a similar mass and the same quantum numbers as the K−p state. The strengths of these couplings to the K−p system are of crucial importance for the understanding of the nature of the Λ(1405) resonance and of the attractive K−p strong interaction. In this article, we present measurements of the K−p correlation functions in relative momentum space obtained in pp collisions at √s = 13 TeV, in p-Pb collisions at √sNN = 5.02 TeV, and (semi)peripheral Pb-Pb collisions at √sNN = 5.02 TeV. The emitting source size, composed of a core radius anchored to the K+p correlation and of a resonance halo specific to each particle pair, varies between 1 and 2 fm in these collision systems. The strength and the effects of the K̄0n and πΣ inelastic channels on the measured K−p correlation function are investigated in the different colliding systems by comparing the data with state-of-the-art models of chiral potentials. A novel approach to determine the conversion weights ω, necessary to quantify the amount of produced inelastic channels in the correlation function, is presented. In this method, particle yields are estimated from thermal model predictions, and their kinematic distribution from blast-wave fits to measured data. The comparison of chiral potentials to the measured K−p interaction indicates that, while the πΣ−K−p dynamics is well reproduced by the model, the coupling to the K̄0n channel in the model is currently underestimated.
The measurement of the production of f0(980) in inelastic pp collisions at √s = 5.02 TeV is presented. This is the first reported measurement of inclusive f0(980) production at LHC energies. The production is measured at midrapidity, |y|<0.5, in a wide transverse momentum range, 0<pT<16 GeV/c, by reconstructing the resonance in the f0(980)→π+π− hadronic decay channel using the ALICE detector. The pT-differential yields are compared to those of pions, protons and ϕ mesons as well as to predictions from the HERWIG 7.2 QCD-inspired Monte Carlo event generator and calculations from a coalescence model that uses the AMPT model as an input. The ratio of the pT-integrated yield of f0(980) relative to pions is compared to measurements in e+e− and pp collisions at lower energies and predictions from statistical hadronisation models and HERWIG 7.2. A mild collision energy dependence of the f0(980) to pion production is observed in pp collisions from SPS to LHC energies. All considered models underpredict the pT-integrated f0(980)/(π++π−) ratio. The prediction from the γs-CSM model assuming a zero total strangeness content of f0(980) is consistent with the data within 1.9σ and is the closest to the data. The results provide an essential reference for future measurements of the particle yield and nuclear modification in p−Pb and Pb−Pb collisions, which have been proposed to be instrumental to probe the elusive nature and quark composition of the f0(980) scalar meson.
Anisotropic flow and flow fluctuations of identified hadrons in Pb–Pb collisions at √sNN = 5.02 TeV
(2022)
The first measurements of elliptic flow of π±, K±, p+p̄, K0S, Λ+Λ̄, ϕ, Ξ−+Ξ̄+, and Ω−+Ω̄+ using multiparticle cumulants in Pb−Pb collisions at √sNN = 5.02 TeV are presented. Results obtained with two- (v2{2}) and four-particle cumulants (v2{4}) are shown as a function of transverse momentum, pT, for various collision centrality intervals. Combining the data for both v2{2} and v2{4} also allows us to report the first measurements of the mean elliptic flow, elliptic flow fluctuations, and relative elliptic flow fluctuations for various hadron species. These observables probe the event-by-event eccentricity fluctuations in the initial state and the contributions from the dynamic evolution of the expanding quark-gluon plasma. The characteristic features observed in previous pT-differential anisotropic flow measurements for identified hadrons with two-particle correlations, namely the mass ordering at low pT and the approximate scaling with the number of constituent quarks at intermediate pT, are similarly present in the four-particle correlations and the combinations of v2{2} and v2{4}. In addition, a particle species dependence of flow fluctuations is observed that could indicate a significant contribution from final state hadronic interactions. The comparison between experimental measurements and CoLBT model calculations, which combine the various physics processes of hydrodynamics, quark coalescence, and jet fragmentation, illustrates their importance over a wide pT range.
The azimuthal anisotropy of inclusive muons produced in p-Pb collisions at √sNN = 8.16 TeV is studied using the ALICE detector at the LHC. The measurement of the second-order Fourier coefficient of the particle azimuthal distribution, v2, is performed as a function of transverse momentum pT in the 0-20% high-multiplicity interval at both forward (2.03<yCMS<3.53) and backward (−4.46<yCMS<−2.96) rapidities over a wide pT range, 0.5<pT<10 GeV/c, in which a dominant contribution of muons from heavy-flavour hadron decays is expected at pT>2 GeV/c. The v2 coefficient of inclusive muons is extracted using two different techniques, namely two-particle cumulants, used for the first time for heavy-flavour measurements, and forward-central two-particle correlations. Both techniques give compatible results. A positive v2 is measured at both forward and backward rapidities with a significance larger than 4.7σ and 7.6σ, respectively, in the interval 2<pT<6 GeV/c. Comparisons with previous measurements in p-Pb collisions at √sNN = 5.02 TeV, and with AMPT and CGC-based theoretical calculations are discussed. The findings impose new constraints on the theoretical interpretations of the origin of the collective behaviour in small collision systems.
In ultraperipheral collisions (UPCs) of relativistic nuclei without overlap of nuclear densities, the two nuclei are excited by the Lorentz-contracted Coulomb fields of their collision partners. In these UPCs, the typical nuclear excitation energy is below a few tens of MeV, and a small number of nucleons are emitted in electromagnetic dissociation (EMD) of primary nuclei, in contrast to complete nuclear fragmentation in hadronic interactions. The cross sections of emission of given numbers of neutrons in UPCs of 208Pb nuclei at √sNN = 5.02 TeV were measured with the neutron zero degree calorimeters (ZDCs) of the ALICE detector at the LHC, exploiting a similar technique to that used in previous studies performed at √sNN = 2.76 TeV. In addition, the cross sections for the exclusive emission of one, two, three, four, and five forward neutrons in the EMD, not accompanied by the emission of forward protons, and thus mostly corresponding to the production of 207,206,205,204,203Pb, respectively, were measured for the first time. The predictions from the available models describe the measured cross sections well. These cross sections can be used for evaluating the impact of secondary nuclei on the LHC components, in particular, on superconducting magnets, and also provide useful input for the design of the Future Circular Collider (FCC-hh).
W±-boson production in p–Pb collisions at √sNN = 8.16 TeV and Pb–Pb collisions at √sNN = 5.02 TeV
(2022)
The production of W± bosons measured in p−Pb collisions at a centre-of-mass energy per nucleon−nucleon collision √sNN = 8.16 TeV and Pb−Pb collisions at √sNN = 5.02 TeV with ALICE at the LHC is presented. The W± bosons are measured via their muonic decay channel, with the muon reconstructed in the pseudorapidity region −4<ημlab<−2.5 with transverse momentum pμT>10 GeV/c. While in Pb−Pb collisions the measurements are performed in the forward (2.5<yμcms<4) rapidity region, in p−Pb collisions, where the centre-of-mass frame is boosted with respect to the laboratory frame, the measurements are performed in the backward (−4.46<yμcms<−2.96) and forward (2.03<yμcms<3.53) rapidity regions. The W− and W+ production cross sections, lepton-charge asymmetry, and nuclear modification factors are evaluated as a function of the muon rapidity. In order to study the production as a function of the p−Pb collision centrality, the production cross sections of the W− and W+ bosons are combined and normalised to the average number of binary nucleon−nucleon collisions, ⟨Ncoll⟩. In Pb−Pb collisions, the same measurements are presented as a function of the collision centrality. A study of the binary scaling of the W±-boson cross sections in p−Pb and Pb−Pb collisions is also reported. The results are compared with perturbative QCD (pQCD) calculations, with and without nuclear modifications of the parton distribution functions (PDFs), as well as with available data at the LHC. Significant deviations from the theory expectations are found in the two collision systems, indicating that the measurements can provide additional constraints for the determination of nuclear PDFs (nPDFs), in particular of the light-quark distributions.
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum (pT) in minimum-bias p–Pb collisions at √sNN = 5.02 TeV using the ALICE detector at the LHC. The measurement covers the pT interval 0.5<pT<12 GeV/c and the rapidity range −1.065<ycms<0.135 in the centre-of-mass reference frame. The contribution of electrons from background sources was subtracted using an invariant mass approach. The nuclear modification factor RpPb was calculated by comparing the pT-differential invariant cross section in p–Pb collisions to a pp reference at the same centre-of-mass energy, which was obtained by interpolating measurements at √s = 2.76 TeV and √s = 7 TeV. The RpPb is consistent with unity within uncertainties of about 25%, which become larger for pT below 1 GeV/c. The measurement shows that heavy-flavour production is consistent with binary scaling, so that a suppression in the high-pT yield in Pb–Pb collisions has to be attributed to effects induced by the hot medium produced in the final state. The data in p–Pb collisions are described by recent model calculations that include cold nuclear matter effects.
Inclusive photon production at forward rapidities in pp and p–Pb collisions at √sNN = 5.02 TeV
(2023)
A study of multiplicity and pseudorapidity distributions of inclusive photons measured in pp and p−Pb collisions at a center-of-mass energy per nucleon−nucleon collision of √sNN = 5.02 TeV using the ALICE detector in the forward pseudorapidity region 2.3<ηlab<3.9 is presented. Measurements in p−Pb collisions are reported for two beam configurations in which the directions of the proton and lead ion beam were reversed. The pseudorapidity distributions in p−Pb collisions are obtained for seven centrality classes which are defined based on different event activity estimators, i.e., the charged-particle multiplicity measured at midrapidity as well as the energy deposited in a calorimeter at beam rapidity. The inclusive photon multiplicity distributions for both pp and p−Pb collisions are described by double negative binomial distributions. The pseudorapidity distributions of inclusive photons are compared to those of charged particles at midrapidity in pp collisions and for different centrality classes in p−Pb collisions. The results are compared to predictions from various Monte Carlo event generators. None of the generators considered in this paper reproduces the inclusive photon multiplicity distributions in the reported multiplicity range. The pseudorapidity distributions are, however, better described by the same generators.
The transverse-momentum (pT) spectra and coalescence parameters B2 of (anti)deuterons are measured in pp collisions at √s = 13 TeV in and out of jets. In this measurement, the direction of the leading particle with the highest pT in the event (pT,lead > 5 GeV/c) is used as an approximation for the jet axis. The event is consequently divided into three azimuthal regions, and the jet signal is obtained as the difference between the Toward region, which contains jet fragmentation products in addition to the underlying event (UE), and the Transverse region, which is dominated by the UE. The coalescence parameter in the jet is found to be approximately a factor of 10 larger than that in the underlying event. This experimental observation is consistent with the coalescence picture and can be attributed to the smaller average phase-space distance between nucleons inside the jet cone as compared to the underlying event. The results presented in this Letter are compared to predictions from a simple nucleon coalescence model, where the phase-space distributions of nucleons are generated using PYTHIA 8 with the Monash 2013 tune, and to predictions from a deuteron production model based on ordinary nuclear reactions with parametrized energy-dependent cross sections tuned on data. The latter model is implemented in PYTHIA 8.3. Both models reproduce the observed large difference between the in-jet and out-of-jet coalescence parameters.
A new, more precise measurement of the Λ hyperon lifetime is performed using a large data sample of Pb−Pb collisions at √sNN = 5.02 TeV with ALICE. The Λ and Λ̄ hyperons are reconstructed at midrapidity using their two-body weak decay channels Λ → p + π− and Λ̄ → p̄ + π+. The measured value of the Λ lifetime is τΛ = [261.07 ± 0.37 (stat.) ± 0.72 (syst.)] ps. The relative difference between the lifetimes of Λ and Λ̄, which represents an important test of CPT invariance in the strangeness sector, is also measured. The obtained value (τΛ − τΛ̄)/τΛ = 0.0013 ± 0.0028 (stat.) ± 0.0021 (syst.) is consistent with zero within the uncertainties. Both the measurement of the Λ hyperon lifetime and that of the relative difference between τΛ and τΛ̄ are in agreement with the corresponding world averages of the Particle Data Group and are about a factor of three more precise.
Multiplicity (Nch) distributions and transverse momentum (pT) spectra of inclusive primary charged particles in the kinematic range |η| < 0.8 and 0.15 GeV/c < pT < 10 GeV/c are reported for pp, p-Pb, Xe-Xe and Pb-Pb collisions at centre-of-mass energies per nucleon pair ranging from √sNN = 2.76 TeV up to 13 TeV. A sequential two-dimensional unfolding procedure is used to extract the correlation between the transverse momentum of primary charged particles and the charged-particle multiplicity of the corresponding collision. This correlation sharply characterises important features of the final state of a collision and, therefore, can be used as a stringent test of theoretical models. The multiplicity distributions, as well as the mean and standard deviation derived from the pT spectra, are compared to state-of-the-art model predictions.
This Letter reports the first measurement of spin alignment, with respect to the helicity axis, for D∗+ vector mesons and their charge conjugates from charm-quark hadronisation (prompt) and from beauty-meson decays (non-prompt) in hadron collisions. The measurements were performed at midrapidity (|y| < 0.8) as a function of transverse momentum (pT) in proton−proton (pp) collisions collected by ALICE at the centre-of-mass energy √s = 13 TeV. The diagonal spin density matrix element ρ00 of D∗+ mesons was measured from the angular distribution of the D∗+ → D0(→ K−π+)π+ decay products, in the D∗+ rest frame, with respect to the D∗+ momentum direction in the pp centre-of-mass frame. The ρ00 value for prompt D∗+ mesons is consistent with 1/3, which implies no spin alignment. For non-prompt D∗+ mesons, however, evidence of ρ00 larger than 1/3 is found. The measured value of the spin density matrix element is ρ00 = 0.455 ± 0.022 (stat.) ± 0.035 (syst.) in the interval 5 < pT < 20 GeV/c, which is consistent with a PYTHIA 8 Monte Carlo simulation coupled with the EVTGEN package, which implements helicity conservation in the decay of D∗+ mesons from beauty mesons. In non-central heavy-ion collisions, the spin of D∗+ mesons may be globally aligned with the direction of the initial angular momentum and magnetic field. Based on the results for pp collisions reported in this Letter, it is shown that alignment of non-prompt D∗+ mesons due to helicity conservation, coupled to the collective anisotropic expansion, may mimic the signal of global spin alignment in heavy-ion collisions.
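For context, the diagonal spin density matrix element ρ00 of a vector meson is conventionally extracted from the polar-angle distribution of its decay products with respect to the chosen quantization axis (here the helicity axis); the standard form, not reproduced in the abstract itself, is:

```latex
% Polar-angle distribution of the decay products of a vector meson
% with respect to the quantization axis; theta* is measured in the
% meson rest frame. rho00 = 1/3 yields an isotropic distribution,
% i.e. no spin alignment.
\frac{\mathrm{d}N}{\mathrm{d}\cos\theta^{*}}
  \propto (1-\rho_{00}) + (3\rho_{00}-1)\cos^{2}\theta^{*}
```

Thus ρ00 = 1/3 corresponds to isotropic decay, while the measured ρ00 > 1/3 for non-prompt D∗+ indicates that the decay products are preferentially emitted along the quantization axis.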
An excess of the J/ψ yield at very low transverse momentum (pT < 0.3 GeV/c), originating from coherent photoproduction, is observed in peripheral and semicentral hadronic Pb−Pb collisions at a center-of-mass energy per nucleon pair of √sNN = 5.02 TeV. The measurement is performed with the ALICE detector via the dimuon decay channel at forward rapidity (2.5 < y < 4). The nuclear modification factor at very low pT and the coherent photoproduction cross section are measured as a function of centrality down to the 10% most central collisions. These results extend the previous study at √sNN = 2.76 TeV, confirming the clear excess over hadronic production in the pT range 0−0.3 GeV/c and the centrality range 70−90%, and establishing an excess with a significance greater than 5σ also in the 50−70% and 30−50% centrality ranges. The results are compared with earlier measurements at √sNN = 2.76 TeV and with different theoretical predictions that aim to describe how coherent photoproduction occurs in hadronic interactions with nuclear overlap.
The production of inclusive, prompt, and non-prompt J/ψ was studied for the first time at midrapidity (−1.37 < ycms < 0.43) in p−Pb collisions at √sNN = 8.16 TeV with the ALICE detector at the LHC. The inclusive J/ψ mesons were reconstructed in the dielectron decay channel in the transverse momentum (pT) interval 0 < pT < 14 GeV/c, and the prompt and non-prompt contributions were separated on a statistical basis for pT > 2 GeV/c. For the first time in ALICE, the study of J/ψ mesons in the dielectron channel used online single-electron triggers from the Transition Radiation Detector, providing a data sample corresponding to an integrated luminosity of 689 ± 13 μb−1. The proton−proton reference cross section for inclusive J/ψ was obtained from interpolations of measured data at different centre-of-mass energies and a universal function describing the pT-differential J/ψ production cross sections. The pT-differential nuclear modification factors RpPb of inclusive, prompt, and non-prompt J/ψ are consistent with unity and are described by theoretical models implementing only nuclear shadowing.
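For reference, the nuclear modification factor RpPb quoted in this abstract is the standard ratio of the p−Pb production cross section to the binary-scaled pp reference (a conventional definition, not spelled out in the abstract itself):

```latex
% Standard definition of the nuclear modification factor in p-Pb:
% A = 208 is the lead mass number; RpPb = 1 means production in
% p-Pb scales with the number of nucleons, i.e. no nuclear effects.
R_{p\mathrm{Pb}}(p_{\mathrm{T}})
  = \frac{\mathrm{d}\sigma_{p\mathrm{Pb}}/\mathrm{d}p_{\mathrm{T}}}
         {A\;\mathrm{d}\sigma_{pp}/\mathrm{d}p_{\mathrm{T}}}
```

Values consistent with unity, as reported above, therefore indicate that J/ψ production in p−Pb collisions is compatible with a superposition of independent nucleon−nucleon collisions modified only by nuclear shadowing.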