Transverse momentum (pT) spectra of charged particles at mid-pseudorapidity in Xe–Xe collisions at √sNN = 5.44 TeV measured with the ALICE apparatus at the Large Hadron Collider are reported. The kinematic range 0.15 < pT < 50 GeV/c and |η| < 0.8 is covered. Results are presented in nine classes of collision centrality in the 0–80% range. For comparison, a pp reference at the collision energy of √s = 5.44 TeV is obtained by interpolating between existing pp measurements at √s = 5.02 and 7 TeV. The nuclear modification factors in central Xe–Xe collisions and Pb–Pb collisions at a similar center-of-mass energy of √sNN = 5.02 TeV, and in addition at 2.76 TeV, at analogous ranges of charged-particle multiplicity density ⟨dNch/dη⟩ show a remarkable similarity at pT > 10 GeV/c. The comparison of the measured RAA values in the two colliding systems could provide insight into the path-length dependence of medium-induced parton energy loss. The centrality dependence of the ratio of the average transverse momentum ⟨pT⟩ in Xe–Xe collisions over Pb–Pb collisions at √sNN = 5.02 TeV is compared to hydrodynamical model calculations.
We report the measured transverse momentum (pT) spectra of primary charged particles from pp, p-Pb and Pb-Pb collisions at a center-of-mass energy √sNN = 5.02 TeV in the kinematic range of 0.15 < pT < 50 GeV/c and |η| < 0.8. A significant improvement of systematic uncertainties motivated the reanalysis of data in pp and Pb-Pb collisions at √sNN = 2.76 TeV, as well as in p-Pb collisions at √sNN = 5.02 TeV, which is also presented. Spectra from Pb-Pb collisions are presented in nine centrality intervals and are compared to a reference spectrum from pp collisions scaled by the number of binary nucleon-nucleon collisions. For central collisions, the pT spectra are suppressed by more than a factor of 7 around 6–7 GeV/c with a significant reduction in suppression towards higher momenta up to 30 GeV/c. The nuclear modification factor RpPb, constructed from the pp and p-Pb spectra measured at the same collision energy, is consistent with unity above 8 GeV/c. While the spectra in both pp and Pb-Pb collisions are substantially harder at √sNN = 5.02 TeV compared to 2.76 TeV, the nuclear modification factors show no significant collision energy dependence. The obtained results should provide further constraints on the parton energy loss calculations to determine the transport properties of the hot and dense QCD matter.
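As background for the nuclear modification factors quoted in these abstracts, the conventional definition (standard notation, not taken verbatim from the abstracts) compares the per-event yield in nucleus-nucleus collisions to the pp yield scaled by the average number of binary nucleon-nucleon collisions:

R_{\mathrm{AA}}(p_{\mathrm{T}}) = \frac{\mathrm{d}N_{\mathrm{AA}}/\mathrm{d}p_{\mathrm{T}}}{\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N_{\mathrm{pp}}/\mathrm{d}p_{\mathrm{T}}}

R_AA = 1 corresponds to the absence of nuclear effects, while the values well below unity reported above indicate suppression; R_pPb is defined analogously from the p-Pb and pp yields.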
The production of prompt charmed mesons D0, D+ and D∗+, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the centre-of-mass energy per nucleon pair, √sNN, of 2.76 TeV. The production yields for rapidity |y| < 0.5 are presented as a function of transverse momentum, pT, in the interval 1-36 GeV/c for the centrality class 0-10% and in the interval 1-16 GeV/c for the centrality class 30-50%. The nuclear modification factor RAA was computed using a proton-proton reference at √s = 2.76 TeV, based on measurements at √s = 7 TeV and on theoretical calculations. A maximum suppression by a factor of 5-6 with respect to binary-scaled pp yields is observed for the most central collisions at pT of about 10 GeV/c. A suppression by a factor of about 2-3 persists at the highest pT covered by the measurements. At low pT (1-3 GeV/c), the RAA has large uncertainties that span the range 0.35 (factor of about 3 suppression) to 1 (no suppression). In all pT intervals, the RAA is larger in the 30-50% centrality class compared to central collisions. The D-meson RAA is also compared with that of charged pions and, at large pT, charged hadrons, and with model calculations.
Three-body nuclear forces play an important role in the structure of nuclei and hypernuclei and are also incorporated in models to describe the dynamics of dense baryonic matter, such as in neutron stars. So far, only indirect measurements anchored to the binding energies of nuclei can be used to constrain the three-nucleon force, and if hyperons are considered, the scarce data on hypernuclei impose only weak constraints on the three-body forces. In this work, we present the first direct measurement of the p−p−p and p−p−Λ systems in terms of three-particle mixed moments carried out for pp collisions at √s = 13 TeV. Three-particle cumulants are extracted from the normalised mixed moments by applying the Kubo formalism, where the three-particle interaction contribution to these moments can be isolated after subtracting the known two-body interaction terms. A negative cumulant is found for the p−p−p system, hinting at the presence of a residual three-body effect, while for p−p−Λ the cumulant is consistent with zero. This measurement demonstrates the accessibility of three-baryon correlations at the LHC.
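For reference, the cumulant extraction mentioned above relies on the standard third-order (Kubo) cumulant, which removes all lower-order contributions from a mixed three-variable moment; in generic form (standard statistics, independent of the specific ALICE observables):

\kappa_3(x_1, x_2, x_3) = \langle x_1 x_2 x_3 \rangle - \langle x_1 \rangle \langle x_2 x_3 \rangle - \langle x_2 \rangle \langle x_1 x_3 \rangle - \langle x_3 \rangle \langle x_1 x_2 \rangle + 2\, \langle x_1 \rangle \langle x_2 \rangle \langle x_3 \rangle

Applied to the normalised mixed moments of particle triplets, a vanishing cumulant means the three-particle distribution is fully accounted for by the two-body correlations, whereas a non-zero value signals a genuine three-body effect.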
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called lazy computation systems. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we will show how the original work of Howe can be extended to cope with sharing. Moreover, we will demonstrate the application of our approach to the call-by-need lambda-calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND-calculus.
Towards correctness of program transformations through unification and critical pair computation
(2011)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda-calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study of an application we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding-chains in let- and letrec-environments and by context-classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
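Both abstracts build on first-order unification as the engine behind the overlap (critical pair) computation. The following is a minimal sketch of plain syntactic unification with occurs check, deliberately omitting the many-sorted signatures, context variables, left-commutativity theory, and binding chains that the papers actually handle; the term representation and names are illustrative only:

```python
# Minimal syntactic first-order unification (illustrative only).
# Variables are strings starting with an uppercase letter; compound
# terms are tuples (function_symbol, arg1, ..., argN).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until an unbound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier (as a dict) or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Overlapping the left-hand sides of two rules amounts to unifying a
# subterm of one with the other, e.g.
# unify(("app", ("lam", "X"), "Y"), ("app", "Z", ("const",)))
# yields {"Z": ("lam", "X"), "Y": ("const",)}.
```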
Assessing enhanced knowledge discovery systems (eKDSs) constitutes an intricate issue that is, as yet, only partially understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute for, the user in the exploration of the data; a qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best-practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions of future research are outlined.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TeX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It has been implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered a status report on the implementation as well as a reference manual for the resulting system.
In this contribution we present algorithms for model checking of analog circuits that enable the specification of time constraints. Furthermore, a methodology for defining time-based specifications is introduced. An already known method for model checking of integrated analog circuits has been extended to take time constraints into account. The method is demonstrated on three industrial circuits, and the results of model checking are compared to verification by simulation.
Retiming is a widely investigated technique for performance optimization. In general, it performs extensive modifications on a circuit netlist, leaving it unclear whether the achieved performance improvement will still be valid after placement has been performed. This paper presents an approach for integrating retiming into a timing-driven placement environment. The experimental results show the benefit of the proposed approach on circuit performance in comparison with design flows using retiming only as a pre- or post-placement optimization method.
We study threshold testing, an elementary probing model with the goal of choosing a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results of the tests for each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler's problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each variable can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
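To make the probing model concrete, the following toy simulation assumes Exp(1) variables, a single fixed quantile threshold shared by all variables, and uniform tie-breaking; this naive non-adaptive rule is purely illustrative and is not the strategy analysed in the paper:

```python
# Illustrative simulation of non-adaptive threshold testing (toy example).
# Each of n i.i.d. Exp(1) variables is tested once against the same fixed
# threshold t; the algorithm then picks a variable with the highest
# conditional expectation given its test outcome. By memorylessness,
# E[X | X >= t] = t + 1 > E[X | X < t], so any variable that passed its
# test is preferred; ties are broken uniformly at random.
import math
import random

def simulate(n=20, trials=20000, q=0.8):
    t = -math.log(1.0 - q)          # q-quantile of Exp(1)
    algo_total, max_total = 0.0, 0.0
    for _ in range(trials):
        xs = [random.expovariate(1.0) for _ in range(n)]
        passed = [x for x in xs if x >= t]
        chosen = random.choice(passed) if passed else random.choice(xs)
        algo_total += chosen
        max_total += max(xs)
    return algo_total / trials, max_total / trials

algo, prophet = simulate()
print(f"algorithm: {algo:.3f}  expected maximum: {prophet:.3f}  ratio: {algo / prophet:.3f}")
```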
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with, and hence recognize, the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision robust knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to a corpus-based evaluation, there is no unique best approach; which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
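As an illustration of the salience threshold idea and of trading recall for precision, a resolver might score candidate antecedents and abstain whenever the best score stays below a threshold; the factors and weights below are invented for illustration and are not taken from the systems evaluated in the paper:

```python
# Hedged sketch of a salience-threshold resolver: resolve a pronoun only if
# the best candidate's salience clears an absolute threshold, otherwise
# abstain (trading recall for precision). Weights are illustrative only.
SALIENCE_WEIGHTS = {
    "subject": 80,          # candidate is a grammatical subject
    "recent_sentence": 50,  # candidate occurs in the current/previous sentence
    "agreement": 40,        # number/gender agreement with the pronoun
}

def salience(candidate_features):
    return sum(SALIENCE_WEIGHTS[f] for f in candidate_features)

def resolve(pronoun, candidates, threshold=120):
    """candidates: list of (mention, feature_set) pairs; returns a mention or None."""
    scored = sorted(((salience(f), m) for m, f in candidates), reverse=True)
    if not scored:
        return None
    best_score, best_mention = scored[0]
    # Abstain on low-confidence cases instead of guessing.
    return best_mention if best_score >= threshold else None

# Example: only the agreeing subject candidate is confident enough to resolve.
cands = [("the report", {"recent_sentence"}),
         ("Mary", {"subject", "recent_sentence", "agreement"})]
print(resolve("she", cands))  # -> "Mary"
```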
The present work can be placed in the field of Data Science. Data Science uses methods from computer science, algorithms from mathematics and statistics, and domain knowledge to analyse large amounts of data and gain new insights. This work draws on several research areas from these fields, including data analysis in the context of big data (social networks, short messages from Twitter), opinion mining (analysis of opinions based on a lexicon of opinion-bearing phrases), and topic detection...
Result 1: Sentiment Phrase List (SePL)
In the research area of opinion mining, lists of opinion-bearing words play an essential role in the analysis of opinion statements. The procedure developed in this work for the automated generation of such a list makes an important research contribution in this area. The novel approach makes it possible, on the one hand, to include phrases consisting of several words (including negations, intensifying and attenuating particles) as well as idioms; on the other hand, the sentiment values of all phrases are computed automatically on the basis of a suitable corpus. The Sentiment Phrase List and the procedure have been published and can be used by the research community [121, 123]. The construction is based on a textual rating combined with a numerical rating, as typically found in customer reviews (for example the title and the star rating of Amazon customer reviews). Further data sources that provide such ratings can be used as well. Based on about 1.5 million German customer reviews, several versions of the SePL were created and published [120].
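As a deliberately simplified stand-in for the described construction (not the published SePL procedure), a phrase's sentiment value could be estimated from the star ratings of the reviews that contain it; the data, phrases, and rating mapping below are illustrative only:

```python
# Simplified stand-in: derive a phrase's sentiment value as the average
# polarity of the reviews containing it (not the published SePL method).
# Star ratings 1..5 are mapped linearly to [-1, +1].
from collections import defaultdict

def build_phrase_values(reviews, candidate_phrases):
    """reviews: list of (text, stars) pairs with stars in 1..5."""
    sums, counts = defaultdict(float), defaultdict(int)
    for text, stars in reviews:
        polarity = (stars - 3) / 2.0
        for phrase in candidate_phrases:
            if phrase in text.lower():
                sums[phrase] += polarity
                counts[phrase] += 1
    return {p: sums[p] / counts[p] for p in counts}

reviews = [("Der Service war nicht schön.", 2),
           ("Alles sehr schön und schnell.", 5),
           ("Schön verarbeitet, gerne wieder.", 4)]
print(build_phrase_values(reviews, ["nicht schön", "sehr schön", "schön"]))
```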
Result 2: Algorithm based on the SePL
With the help of the SePL and the opinion-bearing phrases it contains, lexicon-based methods for the analysis of opinion statements can be improved. Phrases in a text are frequently separated by other words, so the phrases first have to be identified. The algorithm for a lexicon-based opinion analysis has been published [176]. It is based on opinion-bearing phrases consisting of one or more words. Since individual phrases carry distinct sentiment values, a more precise assessment than with previous approaches is possible: opinion-bearing phrases are extracted from the text and scored in a differentiated way using the entries contained in the SePL. Previous approaches often use single opinion-bearing words, so the sentiment value of, for example, a negation has to be derived by a generic rule; in current methods, the value of an opinion-bearing word is usually simply inverted in the presence of a negation, which frequently yields wrong results. In the best case, the list contains a sentiment value both for the single word and for its negation (e.g. "schön" and "nicht schön").
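To illustrate the general idea (not the published algorithm), a lexicon-based scorer that always prefers the longest matching phrase, so that an entry such as "nicht schön" overrides the value stored for "schön", could look as follows; the lexicon and its values are invented, and the sketch ignores the handling of phrases interrupted by other words that the thesis addresses:

```python
# Hedged sketch of longest-match, phrase-based lexicon scoring.
# The lexicon and its sentiment values are illustrative, not SePL entries.
LEXICON = {
    ("schön",): 0.8,
    ("nicht", "schön"): -0.4,
    ("sehr", "schön"): 1.0,
}
MAX_PHRASE_LEN = max(len(p) for p in LEXICON)

def score(tokens):
    """Sum sentiment values, always preferring the longest matching phrase."""
    total, i = 0.0, 0
    while i < len(tokens):
        for length in range(min(MAX_PHRASE_LEN, len(tokens) - i), 0, -1):
            phrase = tuple(tokens[i:i + length])
            if phrase in LEXICON:
                total += LEXICON[phrase]
                i += length
                break
        else:
            i += 1  # token is not opinion-bearing
    return total

print(score("das ist nicht schön".split()))  # -0.4, not +0.8
print(score("das ist sehr schön".split()))   # 1.0
```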
Result 3: Evaluation of the application of the SePL
The algorithm from Result 2 was evaluated with reviews from the rating platform Ciao in the domain of car insurance. In the process, the main sources of error were identified [176], which enables corresponding improvements. Furthermore, an evaluation with the SePL was carried out using a machine learning method based on a support vector machine, in which several existing lexical resources were compared with the SePL and their use in different domains was investigated. The results were published in [115].
Result 4: Research project PoliTwi - detection of top political topics
Within the research project PoliTwi, the required data were collected from Twitter, and current top political topics are continuously made available to the general public via several channels. For the evaluation of the intended improvements in topic detection combined with opinion analysis, the required data from the political domain are available for a period of, so far, three years. Topic detection was carried out on the basis of these data, and the computed topics were compared with other systems such as Google Trends or Tagesschau Meta (see Chapter 5.3). It could be shown that opinion analysis can improve topic detection. The results of the project were published in [124]. In addition, a service (among others via the Twitter channel at https://twitter.com/politwi) is provided to the public, and in particular to journalists and politicians, informing them about current top topics. News portals such as FOCUS Online have used this service in their reporting (see Chapter 4.3.6.1). The top topics have been determined since mid-2013 and can also be retrieved from the project website [119].
Result 5: Extension of lexical resources at the concept level
The still young research field of concept-level sentiment analysis tries to improve previous approaches to opinion analysis by analysing opinion statements at the concept level. A prerequisite are lists of opinion-bearing words that allow differentiated assessments depending on the context. Based on the top topics and their context, a procedure was developed that enables the creation or extension of such lists. It was shown how opinions can be assessed in a differentiated way in different contexts and how this information can be included in lexical resources, which can be exploited in concept-level sentiment analysis. The procedure was published in [124].