Two-particle transverse momentum differential correlators, recently measured in Pb-Pb collisions at energies available at the CERN Large Hadron Collider (LHC), provide an additional tool to gain insights into particle production mechanisms and infer transport properties, such as the ratio of shear viscosity to entropy density, of the medium created in Pb-Pb collisions. The longitudinal long-range correlations and the large azimuthal anisotropy measured at low transverse momenta in small collision systems, namely pp and p-Pb, at LHC energies resemble manifestations of collective behaviour. This suggests that locally equilibrated matter may be produced in these small collision systems, similar to what is observed in Pb-Pb collisions. In this work, the same two-particle transverse momentum differential correlators are exploited in pp and p-Pb collisions at √s = 7 TeV and √sNN = 5.02 TeV, respectively, to seek evidence for viscous effects. Specifically, the strength and shape of the correlators are studied as a function of the produced particle multiplicity to identify evidence for longitudinal broadening that might reveal the presence of viscous effects in these smaller systems. The measured correlators and their evolution from pp and p-Pb to Pb-Pb collisions are additionally compared to predictions from Monte Carlo event generators, and the potential presence of viscous effects is discussed.
We present measurements of two-particle differential number correlation functions R2 and transverse momentum correlation functions P2, obtained from p-Pb collisions at 5.02 TeV and Pb-Pb collisions at 2.76 TeV. The results are obtained using charged particles in the pseudorapidity range |η|< 1.0, and transverse momentum range 0.2<pT<2.0 GeV/c as a function of pair separation in pseudorapidity, |Δη|, azimuthal angle, Δφ, and for several charged-particle multiplicity classes. Measurements are carried out for like-sign and unlike-sign charged-particle pairs separately and combined to obtain charge-independent and charge-dependent correlation functions. We study the evolution of the width of the near-side peak of these correlation functions with collision centrality. Additionally, we study Fourier decompositions of the correlators in Δφ as a function of the pair separation |Δη|. Significant differences in the dependence of their harmonic coefficients on multiplicity classes are found. These differences can be exploited, in theoretical models, to obtain further insight into charged-particle production and transport in heavy-ion collisions. Moreover, an upper limit of non-flow contributions to flow coefficients vn measured in Pb-Pb collisions based on the relative strength of Fourier coefficients measured in p-Pb interactions is estimated.
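For reference, a generic form of the two-particle Fourier decomposition in Δφ referred to above (the paper's exact normalization conventions may differ) is C(Δφ) ∝ 1 + 2 Σn VnΔ cos(nΔφ), where the harmonic coefficients VnΔ reduce to the product vn(a)·vn(b) of single-particle flow coefficients if factorization holds; comparing the relative strength of these coefficients in p-Pb and Pb-Pb collisions, as done above, is what constrains the non-flow contribution to the measured vn.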
We introduce tree-width for first order formulae φ, fotw(φ). We show that computing fotw is fixed-parameter tractable with parameter fotw. Moreover, we show that on classes of formulae of bounded fotw, model checking is fixed-parameter tractable, with parameter the length of the formula. This is done by translating a formula φ with fotw(φ)<k into a formula of the k-variable fragment Lk of first order logic. For fixed k, the question whether a given first order formula is equivalent to an Lk formula is undecidable. In contrast, the classes of first order formulae with bounded fotw are fragments of first order logic for which the equivalence is decidable. Our notion of tree-width generalises tree-width of conjunctive queries to arbitrary formulae of first order logic by taking into account the quantifier interaction in a formula. Moreover, it is more powerful than the notion of elimination-width of quantified constraint formulae, defined by Chen and Dalmau (CSL 2005): for quantified constraint formulae, both bounded elimination-width and bounded fotw allow for model checking in polynomial time. We prove that fotw of a quantified constraint formula φ is bounded by the elimination-width of φ, and we exhibit a class of quantified constraint formulae with bounded fotw that has unbounded elimination-width. A similar comparison holds for strict tree-width of non-recursive stratified datalog as defined by Flum, Frick, and Grohe (JACM 49, 2002). Finally, we show that fotw has a characterization in terms of a cops and robbers game without monotonicity cost.
The elliptic and triangular flow coefficients v2 and v3 of prompt D0, D+, and D∗+ mesons were measured at midrapidity (|y|<0.8) in Pb-Pb collisions at the centre-of-mass energy per nucleon pair of √sNN = 5.02 TeV with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decays in the transverse momentum interval 1<pT<36 GeV/c in central (0-10%) and semi-central (30-50%) collisions. Compared to pions, protons, and J/ψ mesons, the average D-meson vn harmonics are compatible within uncertainties with a mass hierarchy for pT≲3 GeV/c, and are similar to those of charged pions for higher pT. The coupling of the charm quark to the light quarks in the underlying medium is further investigated with the application of the event-shape engineering (ESE) technique to the D-meson v2 and pT-differential yields. The D-meson v2 is correlated with average bulk elliptic flow in both central and semi-central collisions. Within the current precision, the ratios of per-event D-meson yields in the ESE-selected and unbiased samples are found to be compatible with unity. All the measurements are found to be reasonably well described by theoretical calculations including the effects of charm-quark transport and the recombination of charm quarks with light quarks in a hydrodynamically expanding medium.
The elliptic and triangular flow coefficients v2 and v3 of prompt D0, D+, and D∗+ mesons were measured at midrapidity (|y|<0.8) in Pb-Pb collisions at the centre-of-mass energy per nucleon pair of √sNN = 5.02 TeV with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decays in the transverse momentum interval 1<pT<36 GeV/c in central (0-10%) and semi-central (30-50%) collisions. Compared to pions, protons, and J/ψ mesons, the average D-meson vn harmonics are found to follow a mass ordering for pT<3 GeV/c, and to be similar to those of charged pions for higher pT. The coupling of the charm quark to the light quarks in the underlying medium is further investigated with the application of the event-shape engineering (ESE) technique to the D-meson v2 and pT-differential yields. The D-meson v2 is correlated with average bulk elliptic flow in both central and semi-central collisions. Within the current precision, the ratios of per-event D-meson yields in the ESE-selected and unbiased samples are found to be compatible with unity. All the measurements are found to be reasonably well described by theoretical calculations including the effects of charm-quark transport and the recombination of charm quarks with light quarks in a hydrodynamically expanding medium.
The inclusive charged particle transverse momentum distribution is measured in proton–proton collisions at √s = 900 GeV at the LHC using the ALICE detector. The measurement is performed in the central pseudorapidity region (|η|<0.8) over the transverse momentum range 0.15<pT<10 GeV/c. The correlation between transverse momentum and particle multiplicity is also studied. Results are presented for inelastic (INEL) and non-single-diffractive (NSD) events. The average transverse momentum for |η|<0.8 is 〈pT〉INEL=0.483±0.001 (stat.)±0.007 (syst.) GeV/c and 〈pT〉NSD=0.489±0.001 (stat.)±0.007 (syst.) GeV/c, respectively. The data exhibit a slightly larger 〈pT〉 than measurements in wider pseudorapidity intervals. The results are compared to simulations with the Monte Carlo event generators PYTHIA and PHOJET.
Transverse momentum (pT) spectra of charged particles at mid-pseudorapidity in Xe-Xe collisions at √sNN = 5.44 TeV measured with the ALICE apparatus at the Large Hadron Collider are reported. The kinematic range 0.15 < pT < 50 GeV/c and |η| < 0.8 is covered. Results are presented in nine classes of collision centrality in the 0-80% range. For comparison, a pp reference at the collision energy of √s = 5.44 TeV is obtained by interpolating between existing pp measurements at √s = 5.02 and 7 TeV. The nuclear modification factors in central Xe-Xe collisions and Pb-Pb collisions at a similar center-of-mass energy of √sNN = 5.02 TeV, and in addition at 2.76 TeV, at analogous ranges of charged particle multiplicity density ⟨dNch/dη⟩ show a remarkable similarity at pT > 10 GeV/c. The comparison of the measured RAA values in the two colliding systems could provide insight on the path length dependence of medium-induced parton energy loss. The centrality dependence of the ratio of the average transverse momentum ⟨pT⟩ in Xe-Xe collisions over Pb-Pb collisions at √s = 5.02 TeV is compared to hydrodynamical model calculations.
We report the measured transverse momentum (pT) spectra of primary charged particles from pp, p-Pb and Pb-Pb collisions at a center-of-mass energy sNN−−−√=5.02 TeV in the kinematic range of 0.15 < pT< 50 GeV/c and |η| < 0.8. A significant improvement of systematic uncertainties motivated the reanalysis of data in pp and Pb-Pb collisions at sNN−−−√=2.76 TeV, as well as in p-Pb collisions at sNN−−−√=5.02 TeV, which is also presented. Spectra from Pb-Pb collisions are presented in nine centrality intervals and are compared to a reference spectrum from pp collisions scaled by the number of binary nucleon-nucleon collisions. For central collisions, the pT spectra are suppressed by more than a factor of 7 around 6–7 GeV/c with a significant reduction in suppression towards higher momenta up to 30 GeV/c. The nuclear modification factor RpPb, constructed from the pp and p-Pb spectra measured at the same collision energy, is consistent with unity above 8 GeV/c. While the spectra in both pp and Pb-Pb collisions are substantially harder at sNN−−−√=5.02 TeV compared to 2.76 TeV, the nuclear modification factors show no significant collision energy dependence. The obtained results should provide further constraints on the parton energy loss calculations to determine the transport properties of the hot and dense QCD matter.
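For reference, the nuclear modification factor quoted above is commonly defined as RAA(pT) = (dNAA/dpT) / (⟨Ncoll⟩ · dNpp/dpT), i.e. the per-event yield in nucleus-nucleus collisions divided by the pp yield scaled by the average number of binary nucleon-nucleon collisions; RpPb is constructed analogously from the p-Pb and pp spectra, so a value of unity corresponds to the absence of nuclear effects.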
The production of prompt charmed mesons D0, D+ and D∗+, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the centre-of-mass energy per nucleon pair, √sNN, of 2.76 TeV. The production yields for rapidity |y|<0.5 are presented as a function of transverse momentum, pT, in the interval 1-36 GeV/c for the centrality class 0-10% and in the interval 1-16 GeV/c for the centrality class 30-50%. The nuclear modification factor RAA was computed using a proton-proton reference at √s = 2.76 TeV, based on measurements at √s = 7 TeV and on theoretical calculations. A maximum suppression by a factor of 5-6 with respect to binary-scaled pp yields is observed for the most central collisions at pT of about 10 GeV/c. A suppression by a factor of about 2-3 persists at the highest pT covered by the measurements. At low pT (1-3 GeV/c), the RAA has large uncertainties that span the range 0.35 (factor of about 3 suppression) to 1 (no suppression). In all pT intervals, the RAA is larger in the 30-50% centrality class compared to central collisions. The D-meson RAA is also compared with that of charged pions and, at large pT, charged hadrons, and with model calculations.
Three-body nuclear forces play an important role in the structure of nuclei and hypernuclei and are also incorporated in models to describe the dynamics of dense baryonic matter, such as in neutron stars. So far, only indirect measurements anchored to the binding energies of nuclei can be used to constrain the three-nucleon force, and if hyperons are considered, the scarce data on hypernuclei impose only weak constraints on the three-body forces. In this work, we present the first direct measurement of the p−p−p and p−p−Λ systems in terms of three-particle mixed moments carried out for pp collisions at √s = 13 TeV. Three-particle cumulants are extracted from the normalised mixed moments by applying the Kubo formalism, where the three-particle interaction contribution to these moments can be isolated after subtracting the known two-body interaction terms. A negative cumulant is found for the p−p−p system, hinting at the presence of a residual three-body effect, while for p−p−Λ the cumulant is consistent with zero. This measurement demonstrates the accessibility of three-baryon correlations at the LHC.
Three-body nuclear forces play an important role in the structure of nuclei and hypernuclei and are also incorporated in models to describe the dynamics of dense baryonic matter, such as in neutron stars. So far, only indirect measurements anchored to the binding energies of nuclei can be used to constrain the three-nucleon force, and if hyperons are considered, the scarce data on hypernuclei impose only weak constraints on the three-body forces. In this work, we present the first direct measurement of the p−p−p and p−p−Λ systems in terms of three-particle correlation functions carried out for pp collisions at √s = 13 TeV. Three-particle cumulants are extracted from the correlation functions by applying the Kubo formalism, where the three-particle interaction contribution to these correlations can be isolated after subtracting the known two-body interaction terms. A negative cumulant is found for the p−p−p system, hinting at the presence of a residual three-body effect, while for p−p−Λ the cumulant is consistent with zero. This measurement demonstrates the accessibility of three-baryon correlations at the LHC.
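For reference, the generic third-order (Kubo) cumulant of three observables A, B, and C is κ3(A,B,C) = ⟨ABC⟩ − ⟨AB⟩⟨C⟩ − ⟨AC⟩⟨B⟩ − ⟨BC⟩⟨A⟩ + 2⟨A⟩⟨B⟩⟨C⟩; the measurements above apply this kind of decomposition to normalised mixed moments of particle triplets and pairs, so that the genuine three-body contribution remains after the known two-body terms are subtracted.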
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called lazy computation systems. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how the original work of Howe can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda-calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND-calculus.
Towards correctness of program transformations through unification and critical pair computation
(2011)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive let-expressions.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda-calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study of an application we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding-chains in let- and letrec-environments and by context-classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
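To make the role of unification in this approach concrete, the following minimal Python sketch implements only plain first-order syntactic unification (Robinson-style); the algorithms described above additionally handle many-sorted terms, the equational theory of left-commutativity, context variables of different kinds, and binding chains in let/letrec environments, none of which are covered by this baseline.

# Terms: a variable is a string starting with an uppercase letter,
# a constant is a lowercase string, and a compound term is a tuple
# ("function_symbol", arg1, ..., argN).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings already recorded in the substitution.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier of s and t as a dict, or None on failure."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Example: unify f(X, g(a)) with f(g(Y), g(Y)).
print(unify(("f", "X", ("g", "a")), ("f", ("g", "Y"), ("g", "Y"))))
# -> {'X': ('g', 'Y'), 'Y': 'a'}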
Assessing enhanced knowledge discovery systems (eKDSs) is an intricate issue that is, so far, only partially understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute, the user in the exploration of the data; a qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are performed, and directions of future research are outlined.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi file, just as TeX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It was implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered as a status report of the implementation as well as a reference manual for the resulting system.
In this contribution we present algorithms for model checking of analog circuits that enable the specification of time constraints. Furthermore, a methodology for defining time-based specifications is introduced. An already known method for model checking of integrated analog circuits has been extended to take time constraints into account. The method will be presented using three industrial circuits. The results of model checking will be compared to verification by simulation.
Retiming is a widely investigated technique for performance optimization. In general, it performs extensive modifications on a circuit netlist, leaving it unclear whether the achieved performance improvement will still be valid after placement has been performed. This paper presents an approach for integrating retiming into a timing-driven placement environment. The experimental results show the benefit of the proposed approach on circuit performance in comparison with design flows using retiming only as a pre- or post-placement optimization method.
We study threshold testing, an elementary probing model with the goal of choosing a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results for the tests of each variable, we then select the variable with highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler's problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
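The probing model can be made concrete with a small Monte Carlo sketch in Python; the uniform distribution, the fixed common threshold, and the trial count are illustrative choices and not taken from the paper.

import random

def simulate(n=10, threshold=0.8, trials=200_000):
    # Non-adaptive strategy with one common threshold for all variables:
    # after testing, every passing variable has the same (and the highest)
    # conditional expectation, so the algorithm may pick any of them.
    algo_total, prophet_total = 0.0, 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        passed = [x for x in xs if x >= threshold]
        if passed:
            algo_total += passed[0]      # one of the passing variables
        else:
            algo_total += xs[0]          # no pass: fall back to an arbitrary one
        prophet_total += max(xs)         # benchmark: the expected maximum
    return algo_total / trials, prophet_total / trials

algo, prophet = simulate()
print(f"algorithm: {algo:.3f}, prophet: {prophet:.3f}, ratio: {algo / prophet:.3f}")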
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with and, hence, recognize the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision robust knowledge-poor anaphor resolution are designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
The work presented here belongs to the field of data science. Data science uses methods from computer science, algorithms from mathematics and statistics, and domain knowledge in order to analyse large amounts of data and gain new insights. This thesis draws on several research areas from these fields. They comprise data analysis in the area of big data (social networks, short messages from Twitter), opinion mining (analysis of opinions on the basis of a lexicon of opinion-bearing phrases), and topic detection. ...
Result 1: Sentiment Phrase List (SePL)
In the research field of opinion mining, lists of opinion-bearing words play an essential role in the analysis of opinion expressions. The procedure developed in this thesis for the automated generation of such a list makes an important research contribution in this area. The novel approach makes it possible, on the one hand, to include phrases consisting of several words (including negations, intensifying and attenuating particles) as well as idioms; on the other hand, the opinion values of all phrases are computed automatically on the basis of a suitable corpus. The Sentiment Phrase List and the procedure have been published and can be used by the research community [121, 123]. Its construction is based on a textual and an additional numerical rating, as typically used in customer reviews (for example the title and the star rating of Amazon customer reviews). Other data sources that provide such a rating can be used as well. Based on about 1.5 million German customer reviews, several versions of the SePL were created and published [120].
Result 2: Algorithm based on the SePL
With the help of the SePL and the opinion-bearing phrases it contains, lexicon-based methods for the analysis of opinion expressions can be improved. In texts, phrases are frequently separated by other words, which makes an identification of the phrases necessary. The algorithm for a lexicon-based opinion analysis has been published [176]. It is based on opinion-bearing phrases consisting of one or more words. Since individual phrases carry different opinion values, a more precise rating is possible than with previous approaches: opinion-bearing phrases are extracted from the text and rated in a differentiated way using the entries contained in the SePL. Previous approaches frequently use single opinion-bearing words, and the opinion value of, for example, a negation then has to be derived by a generic rule; in current methods, the value of an opinion-bearing word is usually inverted when a negation is present, which often yields wrong results. In the best case, the list contains an opinion value both for the single word and for its negation (e.g. "schön" and "nicht schön").
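The following minimal Python sketch illustrates the kind of lexicon-based scoring with multi-word phrase entries described above; the phrase list and scores are made-up examples, and real SePL entries, proper tokenisation, and the handling of phrases interrupted by other words are not reproduced here.

# Toy phrase lexicon (SePL-style): entries may span several words,
# including negated forms with their own opinion values.
SEPL = {
    ("schön",): 0.8,
    ("nicht", "schön"): -0.4,
    ("sehr", "schön"): 0.9,
}
MAX_PHRASE_LEN = max(len(p) for p in SEPL)

def score(text):
    tokens = text.lower().split()
    total, i = 0.0, 0
    while i < len(tokens):
        # Longest-match lookup, so "nicht schön" wins over "schön".
        for length in range(min(MAX_PHRASE_LEN, len(tokens) - i), 0, -1):
            phrase = tuple(tokens[i:i + length])
            if phrase in SEPL:
                total += SEPL[phrase]
                i += length
                break
        else:
            i += 1
    return total

print(score("das auto ist nicht schön"))   # -0.4
print(score("das auto ist sehr schön"))    # 0.9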
Result 3: Evaluation of the application of the SePL
The algorithm from Result 2 was evaluated with reviews from the rating platform Ciao in the domain of car insurance. In the process, major sources of error were identified [176], which enable corresponding improvements. Furthermore, the SePL was evaluated with a machine learning approach based on a support vector machine. Here, several existing lexical resources were compared with the SePL, and their use in different domains was examined. The results were published in [115].
Result 4: Research project PoliTwi - detection of top political topics
Within the research project PoliTwi, on the one hand, the required data were collected from Twitter; on the other hand, current top political topics are continuously made available to the general public through various channels. For the evaluation of the intended improvements in topic detection combined with opinion analysis, the required data from the domain of politics are available for a period of, so far, three years. Topic detection was carried out on the basis of these data. The computed topics were compared with other systems such as Google Trends or Tagesschau Meta (see Chapter 5.3). It could be shown that opinion analysis can improve topic detection. The results of the project were published in [124]. In addition, a service (among others via the Twitter channel at https://twitter.com/politwi) is provided to the public, and in particular to journalists and politicians, informing them about current top topics. News portals such as FOCUS Online used this service in their reporting (see Chapter 4.3.6.1). The top topics have been determined since mid-2013 and can also be retrieved from the project website [119].
Result 5: Extending lexical resources at the concept level
The still young research field of concept-level sentiment analysis attempts to improve existing approaches to opinion analysis by analysing opinion expressions at the concept level. One prerequisite are lists of opinion-bearing words that allow differentiated treatment depending on the context. Based on the top topics and their context, a procedure was developed that allows such lists to be created or extended. It was shown how opinions can be rated differently in different contexts and how this information can be incorporated into lexical resources, which can be exploited in the field of concept-level sentiment analysis. The procedure was published in [124].
Succinctness is a natural measure for comparing the strength of different logics. Intuitively, a logic L_1 is more succinct than another logic L_2 if all properties that can be expressed in L_2 can be expressed in L_1 by formulas of (approximately) the same size, but some properties can be expressed in L_1 by (significantly) smaller formulas.
We study the succinctness of logics on linear orders. Our first theorem is concerned with the finite variable fragments of first-order logic. We prove that:
(i) Up to a polynomial factor, the 2- and the 3-variable fragments of first-order logic on linear orders have the same succinctness. (ii) The 4-variable fragment is exponentially more succinct than the 3-variable fragment. Our second main result compares the succinctness of first-order logic on linear orders with that of monadic second-order logic. We prove that the fragment of monadic second-order logic that has the same expressiveness as first-order logic on linear orders is non-elementarily more succinct than first-order logic.
The SU(3) spin model with chemical potential corresponds to a simplified version of QCD with static quarks in the strong coupling regime. It has been studied previously as a testing ground for new methods aiming to overcome the sign problem of lattice QCD. In this work we show that the equation of state and the phase structure of the model can be fully determined to reasonable accuracy by a linked cluster expansion. In particular, we compute the free energy to 14th order in the nearest-neighbour coupling. The resulting predictions for the equation of state and the location of the critical end points agree with numerical determinations to O(1%) and O(10%), respectively. While the accuracy for the critical couplings is still limited at the current series depth, the approach is equally applicable at zero and non-zero imaginary or real chemical potential, as well as to effective QCD Hamiltonians obtained by strong coupling and hopping expansions.
We review the representation problem based on factoring and show that this problem gives rise to alternative solutions for many cryptographic protocols in the literature. While the solutions so far usually either rely on the RSA problem or on the intractability of factoring integers of a special form (e.g., Blum integers), the solutions here work with the most general factoring assumption. Protocols we discuss include identification schemes secure against parallel attacks, secure signatures, blind signatures and (non-malleable) commitments.
This paper proposes a new approach for the encoding of images by only a few important components. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has found strong interest in the neural network community. Applied to images, we aim for the most important source patterns, i.e. those with the highest occurrence probability or highest information, called principal independent components (PIC). For the example of a synthetic image composed of characters this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error since no a-priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
Classically, encoding of images by only a few important components is done by Principal Component Analysis (PCA). Recently, a data analysis tool called Independent Component Analysis (ICA) for the separation of independent influences in signals has found strong interest in the neural network community. This approach has also been applied to images. Whereas that approach assumes continuous source channels mixed into the same number of channels by a mixing matrix, we assume that images are composed of only a few image primitives, which means that for images we have fewer sources than pixels. Additionally, in order to reduce unimportant information, we aim only for the most important source patterns with the highest occurrence probabilities or highest information, called "principal independent components (PIC)". For the example of a synthetic picture composed of characters this idea gives us the most important ones. Nevertheless, for natural images, where no a-priori probabilities can be computed, this does not lead to an acceptable reproduction error. Combining the traditional principal component criteria of PCA with the independence property of ICA we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
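An illustrative sketch of the two decompositions contrasted above, using scikit-learn on synthetic "image patches": it demonstrates PCA reconstruction from a few components and ICA source estimation, not the principal-independent-component selection proposed in the papers; the patch data and component counts are made up for the example.

import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# 500 synthetic 64-pixel patches built from 4 fixed "primitives" plus noise.
primitives = rng.normal(size=(4, 64))
weights = rng.exponential(size=(500, 4))
patches = weights @ primitives + 0.05 * rng.normal(size=(500, 64))

# PCA: keep the 4 directions of largest variance and reconstruct.
pca = PCA(n_components=4).fit(patches)
recon = pca.inverse_transform(pca.transform(patches))
print("PCA reconstruction MSE:", np.mean((patches - recon) ** 2))

# ICA: estimate 4 statistically independent source patterns.
ica = FastICA(n_components=4, random_state=0)
activations = ica.fit_transform(patches)               # per-patch source activations
print("ICA mixing matrix shape:", ica.mixing_.shape)    # (64, 4) source patterns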
The dynamics of many systems are described by ordinary differential equations (ODEs). Solving ODEs with standard methods (i.e. numerical integration) requires a large amount of computing time but only a small amount of storage memory. For some applications, e.g. short-time weather forecasts or real-time robot control, long computation times are prohibitive. Is there a method that uses less computing time (but has drawbacks in other aspects, e.g. memory), so that the computation of ODEs gets faster? We discuss this question under the assumption that the alternative computation method is a neural network that was trained on the ODE dynamics, and we compare both methods at the same approximation error. This comparison is done with two different errors. First, we use the standard error that measures the difference between the approximation and the solution of the ODE, which is hard to characterize. But in many cases, as for physics engines used in computer games, the shape of the approximation curve is important and not the exact values of the approximation. Therefore, we introduce a subjective error based on the Total Least Square Error (TLSE), which gives more consistent results. For the final performance comparison, we calculate the optimal resource usage for the neural network and evaluate it depending on the resolution of the interpolation points and the inter-point distance. Our conclusion gives a method to evaluate where neural nets are advantageous over numerical ODE integration and where this is not the case. Index Terms: ODE, neural nets, Euler method, approximation complexity, storage optimization.
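A sketch of the "standard method" baseline referred to above: explicit Euler integration of dx/dt = -x, whose exact solution exp(-t) is known, showing the time/accuracy trade-off of numerical integration; the neural-network surrogate and the TLSE-based subjective error from the paper are not reproduced here.

import math

def euler(f, x0, t_end, steps):
    # Fixed-step explicit Euler integration, returning the final state.
    x, t = x0, 0.0
    h = t_end / steps
    for _ in range(steps):
        x += h * f(t, x)
        t += h
    return x

f = lambda t, x: -x
exact = math.exp(-5.0)
for steps in (10, 100, 1000, 10000):
    approx = euler(f, x0=1.0, t_end=5.0, steps=steps)
    print(f"{steps:6d} steps: error {abs(approx - exact):.2e}")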
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies, we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time-keeping information like a packet's age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
The impact of columnar file formats on SQL-on-Hadoop engine performance: a study on ORC and Parquet
(2019)
Columnar file formats provide an efficient way to store data to be queried by SQL-on-Hadoop engines. Related works consider the performance of processing engine and file format together, which makes it impossible to predict their individual impact. In this work, we propose an alternative approach: by executing each file format on the same processing engine, we compare the different file formats as well as their different parameter settings. We apply our strategy to two processing engines, Hive and SparkSQL, and evaluate the performance of two columnar file formats, ORC and Parquet. We use BigBench (TPCx-BB), a standardized application-level benchmark for Big Data scenarios. Our experiments confirm that the file format selection and its configuration significantly affect the overall performance. We show that ORC generally performs better on Hive, whereas Parquet achieves best performance with SparkSQL. Using ZLIB compression brings up to 60.2% improvement with ORC, while Parquet achieves up to 7% improvement with Snappy. Exceptions are the queries involving text processing, which do not benefit from using any compression.
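A minimal PySpark sketch of the kind of setup discussed above, writing the same DataFrame as Parquet with Snappy and as ORC with ZLIB; the table contents and output paths are placeholders, not the BigBench (TPCx-BB) data or the benchmark harness.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file-format-comparison").getOrCreate()

df = spark.createDataFrame(
    [(1, "alpha"), (2, "beta"), (3, "gamma")],
    ["id", "label"],
)

# Columnar formats with the compression codecs found most effective above.
df.write.mode("overwrite").option("compression", "snappy").parquet("/tmp/demo_parquet")
df.write.mode("overwrite").option("compression", "zlib").orc("/tmp/demo_orc")

# Either copy can now be queried through SparkSQL and timed.
spark.read.parquet("/tmp/demo_parquet").createOrReplaceTempView("demo")
spark.sql("SELECT label, COUNT(*) AS n FROM demo GROUP BY label").show()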
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree and can therefore be used e.g. in high-speed, real-time process control. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for a satisfactory operation, which are critical issues in VLSI and computer implementations of non-trivial tasks. In this paper the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal approximation error by the absolute amount and the relative distribution of information contained in the network. We define the error-bounded network descriptional complexity as the minimal number of bits for a class of approximation networks which show a certain approximation error, and we achieve the conditions for this goal by the new principle of optimal information distribution. For two examples, a simple linear approximation of a non-linear, quadratic function and a non-linear approximation of the inverse kinematic transformation used in robot manipulator control, the principle of optimal information distribution gives the optimal number of neurons and the resolutions of the variables, i.e. the minimal amount of storage for the neural net. Keywords: Kolmogorov complexity, ε-entropy, rate-distortion theory, approximation networks, information distribution, weight resolutions, Kohonen mapping, robot control.
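As a toy illustration of treating weight accuracy as a system parameter (not the paper's optimal-information-distribution principle), the following Python sketch fits the first example mentioned above, a linear approximation of a quadratic function, and reports how the maximal error grows when both weights are uniformly quantized to b bits; the quantization range is an assumption of the example.

import numpy as np

x = np.linspace(0.0, 1.0, 201)
f = x ** 2

# Unquantized least-squares linear approximation f(x) ~ w1*x + w0.
A = np.vstack([x, np.ones_like(x)]).T
w1, w0 = np.linalg.lstsq(A, f, rcond=None)[0]

def quantize(w, bits, w_range=2.0):
    # Uniform quantization of a weight to `bits` bits over [-w_range, w_range].
    step = 2.0 * w_range / (2 ** bits)
    return np.round(w / step) * step

for bits in (2, 4, 6, 8, 16):
    q1, q0 = quantize(w1, bits), quantize(w0, bits)
    err = np.max(np.abs(f - (q1 * x + q0)))
    print(f"{bits:2d} bits per weight -> max error {err:.4f}")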
We study the effect of randomness in the adversarial queueing model. All proofs of instability for deterministic queueing strategies exploit a fine-spun strategy of insertions by an adversary. If the local queueing decisions in the network are subject to randomness, it is far from obvious that an adversary can still trick the network into instability. We show that uniform queueing is unstable even against an oblivious adversary. Consequently, randomizing the queueing decisions made to operate a network is not in itself a suitable fix for poor network performance due to packet pileups.
We study the approximability of the following NP-complete (in their feasibility recognition forms) number theoretic optimization problems: 1. Given n numbers a1, …, an ∈ Z, find a minimum gcd set for a1, …, an, i.e., a subset S ⊆ {a1, …, an} with minimum cardinality satisfying gcd(S) = gcd(a1, …, an). 2. Given n numbers a1, …, an ∈ Z, find a 1-minimum gcd multiplier for a1, …, an, i.e., a vector x ∈ Z^n with minimum max1≤i≤n |xi| satisfying x1a1 + … + xnan = gcd(a1, …, an) ...
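A brute-force Python illustration of the first problem above (minimum gcd set); since the problem is NP-complete, exhaustive subset search is only viable for tiny inputs and is not the approximation algorithm studied in the paper.

from itertools import combinations
from math import gcd
from functools import reduce

def minimum_gcd_set(numbers):
    # Smallest subset whose gcd equals the gcd of all input numbers.
    target = reduce(gcd, numbers)
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if reduce(gcd, subset) == target:
                return list(subset)

print(minimum_gcd_set([12, 18, 30, 35]))   # e.g. [12, 35], whose gcd is 1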
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD also contributes significantly to the track reconstruction and calibration in the central barrel of ALICE. In this paper the design, construction, operation, and performance of this detector are discussed. A pion rejection factor of up to 410 is achieved at a momentum of 1 GeV/c in p-Pb collisions, and the resolution at high transverse momentum improves by about 40% when including the TRD information in track reconstruction. The triggering capability is demonstrated for jet, light-nuclei, and electron selection.
The subject of the work presented here is an application for virtual reality (VR) that is able to visualize the structure of an arbitrary text as a walkable, interactive city. In addition, the program offers a special kind of text search that cannot be found in this form in conventional text processing programs. Thanks to the structural analysis and the use of several exceptional analysis tools of the TextImager [2], text2City not only allows searching for particular text patterns but also, for example, determining the text level (word, sentence, paragraph, etc.) and more. A further feature is the communication link between the TextAnnotator service [1] and text2City, which gives the user the ability to annotate, and which can also immediately display annotations made by other people. Running the program requires one of the two VR headsets, Oculus Rift or HTC Vive, a VR-capable PC, and the Unity software.
Nowadays many applications are developed as web applications because they can be brought to market faster. New methods have been developed to streamline the software development process in order to release a product even faster and more often. These methods make the work of manual testers considerably harder: they now have to test even faster and even more often.
To counter this predicament, test automation mechanisms and test automation tools have been developed. In this thesis I wanted to show that test automation can still be added to existing projects after the fact, and that it can improve the quality of the product.
In this thesis I automated 70% of the test case catalogue for the product "Email4Tablet" of Deutsche Telekom AG using the test tool Selenium.
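A hypothetical sketch of a single automated Selenium test case in Python, of the kind used to replace a manual test step; the URL, element IDs, credentials, and assertion below are placeholders, not taken from the Email4Tablet test catalogue.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.invalid/login")               # placeholder URL
    driver.find_element(By.ID, "username").send_keys("testuser")   # placeholder IDs
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # A simple assertion turns the manual check into an automated one.
    assert "Inbox" in driver.title
finally:
    driver.quit()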