Succinctness is a natural measure for comparing the strength of different logics. Intuitively, a logic L_1 is more succinct than another logic L_2 if all properties that can be expressed in L_2 can be expressed in L_1 by formulas of (approximately) the same size, but some properties can be expressed in L_1 by (significantly) smaller formulas.
We study the succinctness of logics on linear orders. Our first theorem is concerned with the finite variable fragments of first-order logic. We prove that:
(i) Up to a polynomial factor, the 2- and the 3-variable fragments of first-order logic on linear orders have the same succinctness. (ii) The 4-variable fragment is exponentially more succinct than the 3-variable fragment. Our second main result compares the succinctness of first-order logic on linear orders with that of monadic second-order logic. We prove that the fragment of monadic second-order logic that has the same expressiveness as first-order logic on linear orders is non-elementarily more succinct than first-order logic.
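The informal comparison above can be stated precisely. The following LaTeX sketch is one standard way to formalize it; the exact form of the bound is our paraphrase, not quoted from the abstract:

```latex
% L_1 is at most polynomially less succinct than L_2 if there is a
% polynomial p such that every L_2-formula has an equivalent
% L_1-formula of at most polynomially larger size:
\forall \varphi \in L_2 \;\exists \psi \in L_1 :\;
    \psi \equiv \varphi \;\wedge\; |\psi| \le p(|\varphi|).
% L_1 is (strictly) more succinct than L_2 if, in addition, there is a
% family of L_1-formulas whose smallest L_2-equivalents grow
% super-polynomially (e.g. exponentially) in the formula size.
```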
The SU(3) spin model with chemical potential corresponds to a simplified version of QCD with static quarks in the strong coupling regime. It has been studied previously as a testing ground for new methods aiming to overcome the sign problem of lattice QCD. In this work we show that the equation of state and the phase structure of the model can be fully determined to reasonable accuracy by a linked cluster expansion. In particular, we compute the free energy to 14th order in the nearest-neighbour coupling. The resulting predictions for the equation of state and the location of the critical end points agree with numerical determinations to O(1%) and O(10%), respectively. While the accuracy for the critical couplings is still limited at the current series depth, the approach is equally applicable at zero and non-zero imaginary or real chemical potential, as well as to effective QCD Hamiltonians obtained by strong coupling and hopping expansions.
We review the representation problem based on factoring and show that this problem gives rise to alternative solutions for many cryptographic protocols in the literature. While previous solutions usually rely either on the RSA problem or on the intractability of factoring integers of a special form (e.g., Blum integers), the solutions presented here work under the most general factoring assumption. Protocols we discuss include identification schemes secure against parallel attacks, secure signatures, blind signatures, and (non-malleable) commitments.
This paper proposes a new approach for the encoding of images by only a few important components. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has found strong interest in the neural network community. Applied to images, we aim for the most important source patterns with the highest occurrence probability or highest information, called principal independent components (PIC). For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a-priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate-distortion theory.
Classically, the encoding of images by only a few important components is done by Principal Component Analysis (PCA). Recently, a data analysis tool called Independent Component Analysis (ICA) for the separation of independent influences in signals has found strong interest in the neural network community. This approach has also been applied to images. Whereas that approach assumes continuous source channels mixed into the same number of channels by a mixing matrix, we assume that images are composed of only a few image primitives; that is, for images we have fewer sources than pixels. Additionally, in order to reduce unimportant information, we aim only for the most important source patterns with the highest occurrence probabilities or highest information, called "Principal Independent Components (PIC)". For the example of a synthetic picture composed of characters, this idea yields the most important ones. Nevertheless, for natural images, where no a-priori probabilities can be computed, this does not lead to an acceptable reproduction error. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate-distortion theory.
The dynamics of many systems are described by ordinary differential equations (ODE). Solving ODEs with standard methods (i.e. numerical integration) requires a large amount of computing time but only a small amount of storage. For some applications, e.g. short-term weather forecasting or real-time robot control, long computation times are prohibitive. Is there a method that uses less computing time (at a cost in other respects, e.g. memory), so that the computation of ODEs becomes faster? We discuss this question under the assumption that the alternative computation method is a neural network trained on the ODE dynamics, and we compare both methods at the same approximation error. This comparison is done with two different errors. First, we use the standard error, which measures the difference between the approximation and the solution of the ODE but is hard to characterize. In many cases, as for physics engines used in computer games, the shape of the approximation curve matters rather than the exact values of the approximation. Therefore, we introduce a subjective error based on the Total Least Square Error (TLSE), which gives more consistent results. For the final performance comparison, we calculate the optimal resource usage for the neural network and evaluate it depending on the resolution of the interpolation points and the inter-point distance. Our conclusion yields a method to evaluate where neural nets are advantageous over numerical ODE integration and where they are not. Index Terms—ODE, neural nets, Euler method, approximation complexity, storage optimization.
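As a point of reference for the numerical side of this comparison, the following minimal Python sketch shows fixed-step Euler integration, the baseline method named in the index terms. The function name and the test problem y' = y are illustrative choices of ours, not taken from the paper:

```python
import math

def euler(f, y0, t0, t1, n):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0, using n Euler steps."""
    h = (t1 - t0) / n          # fixed step size: computing time grows with n,
    t, y = t0, y0              # but almost no storage is needed
    for _ in range(n):
        y += h * f(t, y)       # explicit Euler update
        t += h
    return y

# Test problem y' = y, y(0) = 1 has the exact solution y(1) = e.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 1000)
error = abs(approx - math.e)   # global error shrinks roughly like 1/n
```

With n = 1000 steps the error is on the order of 10^-3, illustrating the time/accuracy trade-off the abstract compares against a trained network.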
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time keeping information like a packet’s age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
The impact of columnar file formats on SQL‐on‐hadoop engine performance: a study on ORC and Parquet
(2019)
Columnar file formats provide an efficient way to store data to be queried by SQL‐on‐Hadoop engines. Related works consider the performance of processing engine and file format together, which makes it impossible to predict their individual impact. In this work, we propose an alternative approach: by executing each file format on the same processing engine, we compare the different file formats as well as their different parameter settings. We apply our strategy to two processing engines, Hive and SparkSQL, and evaluate the performance of two columnar file formats, ORC and Parquet. We use BigBench (TPCx‐BB), a standardized application‐level benchmark for Big Data scenarios. Our experiments confirm that the file format selection and its configuration significantly affect the overall performance. We show that ORC generally performs better on Hive, whereas Parquet achieves best performance with SparkSQL. Using ZLIB compression brings up to 60.2% improvement with ORC, while Parquet achieves up to 7% improvement with Snappy. Exceptions are the queries involving text processing, which do not benefit from using any compression.
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree of accuracy, and can therefore be used, e.g., in high-speed, real-time process control. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation, which are critical issues in VLSI and computer implementations of non-trivial tasks. In this paper the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal approximation error through the absolute amount and the relative distribution of information contained in the network. We define the error-bounded network descriptional complexity as the minimal number of bits for a class of approximation networks that exhibit a certain approximation error, and we achieve the conditions for this goal by the new principle of optimal information distribution. For two examples, a simple linear approximation of a non-linear, quadratic function and a non-linear approximation of the inverse kinematic transformation used in robot manipulator control, the principle of optimal information distribution gives the optimal number of neurons and the resolutions of the variables, i.e. the minimal amount of storage for the neural net. Keywords: Kolmogorov complexity, ε-entropy, rate-distortion theory, approximation networks, information distribution, weight resolutions, Kohonen mapping, robot control.
We study the effect of randomness in the adversarial queueing model. All proofs of instability for deterministic queueing strategies exploit a fine-spun strategy of insertions by an adversary. If the local queueing decisions in the network are subject to randomness, it is far from obvious that an adversary can still trick the network into instability. We show that uniform queueing is unstable even against an oblivious adversary. Consequently, randomizing the queueing decisions made to operate a network is not in itself a suitable fix for poor network performance due to packet pileups.
We study the approximability of the following number-theoretic optimization problems, which are NP-complete in their feasibility recognition forms: 1. Given n numbers a_1, ..., a_n ∈ ℤ, find a minimum gcd set for a_1, ..., a_n, i.e., a subset S ⊆ {a_1, ..., a_n} of minimum cardinality satisfying gcd(S) = gcd(a_1, ..., a_n). 2. Given n numbers a_1, ..., a_n ∈ ℤ, find a 1-minimum gcd multiplier for a_1, ..., a_n, i.e., a vector x ∈ ℤ^n with minimum max_{1≤i≤n} |x_i| satisfying ∑ ...
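To make problem 1 concrete, here is a small brute-force Python sketch of ours. It is exponential in n and therefore only illustrative; the paper is about the approximability of this NP-complete problem, not about solving it exactly:

```python
from functools import reduce
from itertools import combinations
from math import gcd

def min_gcd_set(nums):
    """Return a minimum-cardinality subset S of nums with gcd(S) = gcd(nums).

    Subsets are tried in order of increasing size, so the first hit is
    guaranteed to be minimum; feasible only for small n.
    """
    target = reduce(gcd, nums)
    for k in range(1, len(nums) + 1):
        for subset in combinations(nums, k):
            if reduce(gcd, subset) == target:
                return list(subset)

# gcd(4, 6, 9) = 1, and already gcd(4, 9) = 1, so a 2-element set suffices.
small = min_gcd_set([4, 6, 9])
```

For [6, 10, 15], by contrast, every pair still has gcd greater than 1, so the minimum gcd set is the whole input.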
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD also contributes significantly to the track reconstruction and calibration in the central barrel of ALICE. In this paper the design, construction, operation, and performance of this detector are discussed. A pion rejection factor of up to 410 is achieved at a momentum of 1 GeV/c in p-Pb collisions and the resolution at high transverse momentum improves by about 40% when including the TRD information in track reconstruction. The triggering capability is demonstrated both for jet, light nuclei, and electron selection.
The subject of the work presented here is a virtual reality (VR) application that is able to visualize the structure of an arbitrary text as a walkable, interactive city. In addition, the program offers a special text search that is not found in other conventional word processing software. Thanks to the structural analysis and the use of several exceptional analysis tools of the TextImager [2], text2City enables not only the search for specific text patterns but also, for example, the determination of the text level (word, sentence, paragraph, etc.), and more. A further feature is the communication link between the TextAnnotator service [1] and text2City, which gives the user the ability to annotate but can also immediately display annotations made by other people. Running the program requires one of the two VR headsets, Oculus Rift or HTC Vive, a VR-capable PC, and the Unity software.
Nowadays many applications are developed as web applications because they can be brought to market faster. New methods have been developed to streamline the software development process in order to release products even faster and more frequently. These methods make the work of manual testers considerably harder: they now have to test even faster and even more often.
To counter this predicament, test automation mechanisms and test automation tools have been developed. In this thesis I set out to show that test automation can be retrofitted into existing projects, and that it can improve the quality of the product.
In this thesis I automated 70% of the test case catalogue for the product "Email4Tablet" of Deutsche Telekom AG using the test tool Selenium.
Automatic termination proofs for functional programming languages are a frequently tackled problem. Most work in this area is done on strict languages, where orderings for the arguments of recursive calls are generated. In lazily evaluated languages, the arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point where sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order polymorphic typed language, and termination of higher-order functions can be proved as well. The calculus can be used to derive information on a wide range of different notions of termination.
We present techniques to prove termination of cycle rewriting, that is, string rewriting on cycles, which are strings in which the start and end are connected. Our main technique is to transform cycle rewriting into string rewriting and then apply state-of-the-art techniques to prove termination of the string rewrite system. We present three such transformations and prove for all of them that they are sound and complete. In this way, not only does termination of string rewriting of the transformed system imply termination of the original cycle rewrite system; a similar conclusion can be drawn for non-termination. Apart from this transformational approach, we present a uniform framework of matrix interpretations, covering most of the earlier approaches to automatically proving termination of cycle rewriting. All our techniques serve both for proving termination and relative termination. We present several experiments showing the power of our techniques.
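To illustrate what a single cycle-rewriting step looks like, the following Python helper is our own illustrative sketch (not code from the paper): unlike plain string rewriting, the left-hand side of a rule may match across the wrap-around point of the cyclic string.

```python
def cycle_rewrite_step(cycle, lhs, rhs):
    """Apply the rule lhs -> rhs at some position of the cyclic string
    `cycle`, allowing the match to wrap around the end. Returns one
    possible successor cycle, or None if the rule does not apply."""
    n = len(cycle)
    if len(lhs) > n:
        return None
    doubled = cycle + cycle          # every rotation of `cycle` is a substring
    for i in range(n):
        if doubled[i:i + len(lhs)] == lhs:
            # keep the remainder of the cycle after the matched occurrence
            rest = doubled[i + len(lhs):i + n]
            return rhs + rest
    return None

# "ab" occurs in the cycle "ba" only across the wrap-around point.
successor = cycle_rewrite_step("ba", "ab", "c")
```

The returned string represents the successor cycle up to rotation, which is exactly the equivalence under which cycle rewriting is defined.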
With the rise of digitalization and ubiquity of media use, both opportunities and challenges emerge for academic learning. One prevalent challenge is media multitasking, which can become distracting and hinder learning success. This thesis investigates two facets of this issue: the enhancement of data tracking, and the exploration of digital interventions that support self-control.
The first paper focuses on digital tracking of media use, as a comprehensive understanding of digital distractions requires careful data collection to avoid misinterpretations. The paper presents a tracking system where media use is linked to learning activities. An annotation dashboard enabled the enrichment of the log data with self-reports. The efficacy of this system was evaluated in a 14-day online course taken by 177 students, with results confirming the initial assumptions about media tracking.
The second paper tackles the recognition of whether a text was thoroughly read, an issue brought on by the tendency of students to skip lengthy and demanding texts. A method utilizing scroll data and time series classification algorithms is presented and tested, showing promising results for early recognition and intervention.
The third paper presents the results of a systematic literature review on the effectiveness of digital self-control tools (DSCTs) in academic learning. The paper identifies gaps in existing research and outlines a roadmap for further research on self-control tools.
The fourth paper shares findings from a survey of 273 students, exploring the practical use and perceived helpfulness of DSCTs. The study highlights the challenge of balancing between too restrictive and too lenient DSCTs, particularly for platforms offering both learning content and entertainment. The results also show a special role of media use that is highly habitual.
The fifth paper of this work investigates facets of app-based habit building. In a study over 27 days, 106 school-aged children used the specially developed PROMPT-app. The children carried out one of three digital activities each day, each of which was supposed to promote a deeper or more superficial processing of plans. Significant differences regarding the processing of plans emerged between the three activities, and the results suggest that a child-friendly planning application needs to be personalized to be effective.
Overall, this work offers a comprehensive insight into the complexity and potentials of dealing with distracting media usage and shows ways for future research and interventions in this fascinating and ever more important field.
Despite an extensive range of literature and guides in the field of project management, many IT projects still fail today. The causes are often problems within the project team or misjudgements in the planning of the project and the monitoring of the project status. Work practices that have emerged through new technologies and globalisation, such as the virtual team, are particularly affected. This thesis addresses the question of what virtual teams are and which problems burden the work of virtual teams. To this end, currently existing tools from the Web 2.0 domain are analysed, and avoidable weaknesses of these helpers are identified from the state of the tools on offer. Subsequently, based on a requirements analysis and a concept that uses new methods for displaying project status and linking it with documentation and communication, the tool "TeamVision" is built, which attempts to manage virtual teams as efficiently as possible, to detect problems quickly, and thus to speed up the work within the team. In particular, use is made of the finding of the analysis that many tools perform individual management tasks separately: the user has to gather information from the various charts, lists, and other views and associate it on their own. The prototype implementation of TeamVision tries to make the flow of information manageable by combining overviews in a project tree that aims to ease information gathering by means of zoom functions and visual aids such as colour coding.
Interest in becoming a data scientist or entering related professions in the data science domain is growing rapidly. To meet such demand, we propose a novel educational service that aims to provide tailored learning paths for data science. Our target user is one who aims to be an expert in data science. Our approach is to analyze the background of the practitioner and match the learning units accordingly. A critical feature is that we use gamification to reinforce practitioner engagement. We believe that our work provides a practical guideline for those who want to learn data science.
Measurements of the pT-dependent flow vector fluctuations in Pb-Pb collisions at √sNN = 5.02 TeV using azimuthal correlations with the ALICE experiment at the LHC are presented. A four-particle correlation approach [1] is used to quantify the effects of flow angle and magnitude fluctuations separately. This paper extends previous studies to additional centrality intervals and provides measurements of the pT-dependent flow vector fluctuations at √sNN = 5.02 TeV with two-particle correlations. Significant pT-dependent fluctuations of the V2 flow vector in Pb-Pb collisions are found across different centrality ranges, with the largest fluctuations of up to ∼15% being present in the 5% most central collisions. In parallel, no evidence of significant pT-dependent fluctuations of V3 or V4 is found. Additionally, evidence of flow angle and magnitude fluctuations is observed with more than 5σ significance in central collisions. These observations in Pb-Pb collisions indicate where the classical picture of hydrodynamic modeling with a common symmetry plane breaks down. This has implications for hard probes at high pT, which might be biased by pT-dependent flow angle fluctuations of at least 23% in central collisions. Given the presented results, existing theoretical models should be re-examined to improve our understanding of initial conditions, quark-gluon plasma (QGP) properties, and the dynamic evolution of the created system.
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The results are reported in terms of multiparticle correlation observables dubbed Symmetric Cumulants. These observables are robust against biases originating from nonflow effects. The centrality dependence of correlations between the higher order harmonics (the quadrangular v4 and pentagonal v5 flow) and the lower order harmonics (the elliptic v2 and triangular v3 flow) is presented. The transverse momentum dependence of correlations between v3 and v2 and between v4 and v2 is also reported. The results are compared to calculations from viscous hydrodynamics and A Multi-Phase Transport (AMPT) model calculations. The comparisons to viscous hydrodynamic models demonstrate that the different order harmonic correlations respond differently to the initial conditions and the temperature dependence of the ratio of shear viscosity to entropy density (η/s). A small average value of η/s is favored independent of the specific choice of initial conditions in the models. The calculations with the AMPT initial conditions yield results closest to the measurements. Correlations between the magnitudes of v2, v3 and v4 show moderate pT dependence in mid-central collisions. Together with existing measurements of individual flow harmonics, the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
We present the first systematic comparison of the charged-particle pseudorapidity densities for three widely different collision systems, pp, p-Pb, and Pb-Pb, at the top energy of the Large Hadron Collider (√sNN = 5.02 TeV), measured over a wide pseudorapidity range (−3.5 < η < 5), the widest possible among the four experiments at that facility. The systematic uncertainties are minimised since the measurements are recorded by the same experimental apparatus (ALICE). The distributions for p-Pb and Pb-Pb collisions are determined as a function of the centrality of the collisions, while results from pp collisions are reported for inelastic events with at least one charged particle at midrapidity. The charged-particle pseudorapidity densities are, under simple and robust assumptions, transformed to charged-particle rapidity densities. This allows for the calculation and presentation of the evolution of the width of the rapidity distributions and of a lower bound on the Bjorken energy density, as a function of the number of participants in all three collision systems. We find a decreasing width of the particle production, and roughly a smooth tenfold increase in the energy density, as the system size grows, which is consistent with an increasingly dense phase of matter.
We present the first systematic comparison of the charged-particle pseudorapidity densities for three widely different collision systems, pp, p-Pb, and Pb-Pb, at the top energy of the Large Hadron Collider (sNN−−−√=5.02 TeV) measured over a wide pseudorapidity range (−3.5<η<5), the widest possible among the four experiments at that facility. The systematic uncertainties are minimised since the measurements are recorded by the same experimental apparatus (ALICE). The distributions for p-Pb and Pb-Pb collisions are determined as a function of the centrality of the collisions, while results from pp collisions are reported for inelastic events with at least one charged particle at midrapidity. The charged-particle pseudorapidity densities are, under simple and robust assumptions, transformed to charged-particle rapidity densities. This allows for the calculation and the presentation of the evolution of the width of the rapidity distributions and of a lower bound on the Bjorken energy density, as a function of the number of participants in all three collision systems. We find a decreasing width of the particle production, and roughly a smooth ten fold increase in the energy density, as the system size grows, which is consistent with a gradually higher dense phase of matter.
The first measurements of K∗(892)0 resonance production as a function of charged-particle multiplicity in Xe−Xe collisions at √sNN = 5.44 TeV and pp collisions at √s = 5.02 TeV using the ALICE detector are presented. The resonance is reconstructed at midrapidity (|y| < 0.5) using the hadronic decay channel K∗0→K±π∓. Measurements of the transverse-momentum integrated yield, mean transverse momentum, nuclear modification factor of K∗0, and yield ratios of resonance to stable hadron (K∗0/K) are compared across different collision systems (pp, p−Pb, Xe−Xe, and Pb−Pb) at similar collision energies to investigate how the production of K∗0 resonances depends on the size of the system formed in these collisions. The hadronic rescattering effect is found to be independent of the size of the colliding systems and mainly driven by the produced charged-particle multiplicity, which is a proxy for the volume of produced matter at the chemical freeze-out. In addition, the production yields of K∗0 in Xe−Xe collisions are utilized to constrain the dependence of the kinetic freeze-out temperature on the system size using the HRG-PCE model.
To truly appreciate the myriad of events which relate synaptic function and vesicle dynamics, simulations should be done in a spatially realistic environment. This holds true in particular for explaining both the rather astonishing motor patterns underlying peristaltic contractions, which we observed in in vivo recordings, and the shape of the EPSPs under different forms of long-term stimulation, both presented here for a well-characterized synapse, the neuromuscular junction (NMJ) of the Drosophila larva (cf. Figure 1). To this end, we have employed a reductionist approach and generated three-dimensional models of single presynaptic boutons at the Drosophila larval NMJ. Vesicle dynamics are described by diffusion-like partial differential equations which are solved numerically on unstructured grids using the uG platform. In our model we varied parameters such as bouton size, vesicle output probability (Po), stimulation frequency and number of synapses, to observe how altering these parameters affected bouton function. We demonstrate that the morphological and physiological specialization may be a convergent evolutionary adaptation to regulate the trade-off between sustained, low-output and short-term, high-output synaptic signals. There seems to be a biologically meaningful explanation for the co-existence of the two different bouton types previously observed at the NMJ (characterized especially by the relation between size and Po): assigning them two different tasks with respect to short- and long-term behaviour could allow for an optimized interplay of different synapse types. We present astonishingly similar experimental and simulation results, obtained in particular without any data fitting, based only on biophysical values taken from independent experimental results.
As a side product, we demonstrate how advanced methods from numerical mathematics could in future help to resolve other difficult experimental neurobiological questions.
Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. To truly appreciate the myriad of events which relate synaptic function and vesicle dynamics, simulations should be done in a spatially realistic environment. This holds true in particular for explaining the rather astonishing motor patterns presented here, which we observed in in vivo recordings and which underlie peristaltic contractions at a well-characterized synapse, the neuromuscular junction (NMJ) of the Drosophila larva. To this end, we have employed a reductionist approach and generated three-dimensional models of single presynaptic boutons at the Drosophila larval NMJ. Vesicle dynamics are described by diffusion-like partial differential equations which are solved numerically on unstructured grids using the uG platform. In our model we varied parameters such as bouton size, vesicle output probability (Po), stimulation frequency and number of synapses, to observe how altering these parameters affected bouton function. We demonstrate that the morphological and physiological specialization may be a convergent evolutionary adaptation to regulate the trade-off between sustained, low-output and short-term, high-output synaptic signals. There seems to be a biologically meaningful explanation for the co-existence of the two different bouton types previously observed at the NMJ (characterized especially by the relation between size and Po): assigning them two different tasks with respect to short- and long-term behaviour could allow for an optimized interplay of different synapse types. As a side product, we demonstrate how advanced methods from numerical mathematics could in future help to resolve other difficult experimental neurobiological questions.
The morphology of presynaptic specializations can vary greatly ranging from classical single-release-site boutons in the central nervous system to boutons of various sizes harboring multiple vesicle release sites. Multi-release-site boutons can be found in several neural contexts, for example at the neuromuscular junction (NMJ) of body wall muscles of Drosophila larvae. These NMJs are built by two motor neurons forming two types of glutamatergic multi-release-site boutons with two typical diameters. However, it is unknown why these distinct nerve terminal configurations are used on the same postsynaptic muscle fiber. To systematically dissect the biophysical properties of these boutons we developed a full three-dimensional model of such boutons, their release sites and transmitter-harboring vesicles and analyzed the local vesicle dynamics of various configurations during stimulation. Here we show that the rate of transmission of a bouton is primarily limited by diffusion-based vesicle movements and that the probability of vesicle release and the size of a bouton affect bouton-performance in distinct temporal domains allowing for an optimal transmission of the neural signals at different time scales. A comparison of our in silico simulations with in vivo recordings of the natural motor pattern of both neurons revealed that the bouton properties resemble a well-tuned cooperation of the parameters release probability and bouton size, enabling a reliable transmission of the prevailing firing-pattern at diffusion-limited boutons. Our findings indicate that the prevailing firing-pattern of a neuron may determine the physiological and morphological parameters required for its synaptic terminals.
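The diffusion-limited vesicle movements invoked in these abstracts are governed by diffusion-like PDEs solved on 3D unstructured grids. As a conceptual illustration only, the sketch below solves the one-dimensional analogue du/dt = D d²u/dx² with an explicit finite-difference scheme; the geometry, diffusion constant, and grid are toy assumptions, not the parameters of the actual bouton models.

```python
# Minimal 1D analogue of a diffusion-like vesicle-dynamics PDE,
# du/dt = D * d2u/dx2, solved with an explicit FTCS scheme on a
# periodic grid. All parameters are toy assumptions.

import numpy as np

def diffuse_1d(u0, d_coeff, dx, dt, steps):
    """Evolve the concentration profile u0 for `steps` time steps."""
    r = d_coeff * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for D*dt/dx^2 > 0.5"
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        # second-difference stencil with periodic boundary conditions
        u = u + r * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))
    return u

# A vesicle pool initially concentrated at the bouton centre spreads
# out while the total vesicle content is conserved:
u0 = np.zeros(51)
u0[25] = 1.0
u = diffuse_1d(u0, d_coeff=1.0, dx=1.0, dt=0.25, steps=200)
print(round(float(u.sum()), 6))  # -> 1.0 (mass conservation)
```

The stability bound r = D·dt/dx² ≤ 1/2 is the standard constraint for this explicit scheme; implicit or finite-volume methods, as used on unstructured 3D grids, avoid it at higher cost per step.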
A newly developed observable for correlations between symmetry planes, which characterize the direction of the anisotropic emission of produced particles, is measured in Pb-Pb collisions at √sNN = 2.76 TeV with ALICE. This so-called Gaussian Estimator allows for the first time the study of these quantities without the influence of correlations between different flow amplitudes. The centrality dependence of various correlations between two, three and four symmetry planes is presented. The ordering in magnitude of these symmetry-plane correlations is discussed and the results of the Gaussian Estimator are compared with measurements of previously used estimators. The results utilizing the new estimator lead to significantly smaller correlations than reported by studies using the Scalar Product method. Furthermore, the obtained symmetry-plane correlations are compared to state-of-the-art hydrodynamic model calculations for the evolution of heavy-ion collisions. While the model predictions provide a qualitative description of the data, quantitative agreement is not always observed, particularly for correlators with significant non-linear response of the medium to initial-state anisotropies of the collision system. As these results provide unique and independent information, their usage in future Bayesian analyses can further constrain our knowledge of the properties of the QCD matter produced in ultrarelativistic heavy-ion collisions.
The production yield of the Λ(1520) baryon resonance is measured at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The measurement is performed in the Λ(1520)→pK− (and charge conjugate) hadronic decay channel as a function of the transverse momentum (pT) and collision centrality. The pT-integrated production rate of Λ(1520) relative to Λ in central collisions is suppressed by about a factor of 2 with respect to peripheral collisions. This is the first observation of the suppression of a baryonic resonance at the LHC and the first evidence of Λ(1520) suppression in heavy-ion collisions. The measured Λ(1520)/Λ ratio in central collisions is smaller than the value predicted by the statistical hadronisation model calculations. The shape of the measured pT distribution and the centrality dependence of the suppression are reproduced by the EPOS3 Monte Carlo event generator. The measurement adds further support to the formation of a dense hadronic phase in the final stages of the evolution of the fireball created in heavy-ion collisions, lasting long enough to cause a significant reduction in the observable yield of short-lived resonances.
Inclusive transverse momentum spectra of primary charged particles in Pb–Pb collisions at √sNN = 2.76 TeV have been measured by the ALICE Collaboration at the LHC. The data are presented for central and peripheral collisions, corresponding to 0–5% and 70–80% of the hadronic Pb–Pb cross section. The measured charged-particle spectra in |η| < 0.8 and 0.3 < pT < 20 GeV/c are compared to the expectation in pp collisions at the same √sNN, scaled by the number of underlying nucleon–nucleon collisions. The comparison is expressed in terms of the nuclear modification factor RAA. The result indicates only weak medium effects (RAA ≈ 0.7) in peripheral collisions. In central collisions, RAA reaches a minimum of about 0.14 at pT = 6–7 GeV/c and increases significantly at larger pT. The measured suppression of high-pT particles is stronger than that observed at lower collision energies, indicating that a very dense medium is formed in central Pb–Pb collisions at the LHC.
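The nuclear modification factor used above is the bin-by-bin ratio R_AA(pT) = (dN_AA/dpT) / (⟨N_coll⟩ · dN_pp/dpT). A minimal sketch of that ratio, using invented toy spectra (not ALICE data) shaped like the suppression pattern described in the abstract:

```python
# Toy computation of the nuclear modification factor
#   R_AA(pT) = (dN_AA/dpT) / (<N_coll> * dN_pp/dpT)
# from binned spectra. All numbers are invented placeholders.

import numpy as np

def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    """Per-pT-bin R_AA from binned charged-particle yields."""
    yield_aa = np.asarray(yield_aa, dtype=float)
    yield_pp = np.asarray(yield_pp, dtype=float)
    return yield_aa / (n_coll * yield_pp)

# Assumed pp reference spectrum and an AA spectrum built to show
# weak modification at low pT, strong suppression at intermediate
# pT, and a partial rise at higher pT:
pp_spectrum = np.array([100.0, 10.0, 1.0, 0.1])
aa_spectrum = np.array([112000.0, 3200.0, 224.0, 48.0])
raa = nuclear_modification_factor(aa_spectrum, pp_spectrum, n_coll=1600)
print(raa)  # -> [0.7  0.2  0.14 0.3 ]
```

R_AA = 1 would mean the AA collision behaves like an incoherent superposition of ⟨N_coll⟩ nucleon–nucleon collisions; values well below 1, as in the minimum quoted in the abstract, signal medium-induced energy loss.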
Search in the Semantic Web: extending the VRP with an intuitive, RQL-based query interface
(2003)
The flood of data on the World Wide Web is a problem for every Internet user. Classical Internet search engines are overwhelmed and deliver useful results ever more rarely. The Semantic Web, based largely on RDF, promises a way out. The Semantic Web will presumably first reach the public in specialized information portals, so-called infomediaries. Visitors to such portals need a query language that is as easy to use as an ordinary Internet search engine. No such query language currently exists for RDF. This thesis presents a novel query language that meets this requirement: eRQL. Part of this work is the eRQL processor eRqlEngine, implemented in Java, which can be obtained at http://www.wleklinski.de/rdf/ and http://www.dbis.informatik.uni-frankfurt.de/~tolle/RDF/eRQL/.
Iterative arrays (IAs) are a parallel computational model with sequential processing of the input. They are one-dimensional arrays of interacting identical deterministic finite automata. In this note, real-time IAs with sublinear space bounds are used to accept formal languages. The existence of a proper hierarchy of space complexity classes between logarithmic and linear space bounds is proved. Furthermore, an optimal space lower bound for non-regular language recognition is shown. Keywords: iterative arrays, cellular automata, space-bounded computations, decidability questions, formal languages, theory of computation
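The "one-dimensional array of interacting identical finite automata" underlying iterative arrays is closely related to cellular automata, as the keywords note. As a conceptual toy only, the sketch below synchronously updates an elementary cellular automaton (rule 90); real iterative arrays additionally feed the input word sequentially into a distinguished communication cell, which this sketch omits.

```python
# Toy synchronous update of a 1D array of identical finite automata:
# an elementary cellular automaton with binary states. This is a
# simplified illustration, not an iterative-array acceptor.

def step(cells, rule):
    """One synchronous update; cells outside the array are read as 0."""
    padded = [0] + cells + [0]
    # Each cell's next state is bit (4*left + 2*self + right) of `rule`.
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Rule 90 (next state = left XOR right) from a single seed cell
# produces the familiar Sierpinski-triangle pattern:
cells = [0] * 7 + [1] + [0] * 7
for _ in range(3):
    cells = step(cells, rule=90)
print(cells)  # -> [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0]
```

Space-bounded IA computations restrict how many cells may leave a designated quiescent state, which is the resource measured by the hierarchy results in the abstract.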
The two-particle momentum correlation functions between charm mesons (D∗± and D±) and charged light-flavor mesons (π± and K±) in all charge combinations are measured for the first time by the ALICE Collaboration in high-multiplicity proton–proton collisions at a center-of-mass energy of √s = 13 TeV. For DK and D∗K pairs, the experimental results are in agreement with theoretical predictions of the residual strong interaction based on quantum chromodynamics calculations on the lattice and chiral effective field theory. In the case of Dπ and D∗π pairs, tension between the calculations including strong interactions and the measurement is observed. For all particle pairs, the data can be adequately described by Coulomb interaction only, indicating a shallow interaction between charm and light-flavor mesons. Finally, the scattering lengths governing the residual strong interaction of the Dπ and D∗π systems are determined by fitting the experimental correlation functions with a model that employs a Gaussian potential. The extracted values are small and compatible with zero.
Studying strangeness and baryon production mechanisms through angular correlations between charged Ξ baryons and identified hadrons
(2023)
The angular correlations between charged Ξ baryons and associated identified hadrons (pions, kaons, protons, Λ baryons, and Ξ baryons) are measured in pp collisions at √s = 13 TeV with the ALICE detector to give insight into the particle production mechanisms and the balancing of quantum numbers on the microscopic level. In particular, the distribution of strangeness is investigated in the correlations between the doubly-strange Ξ baryon and mesons and baryons that contain a single strange quark, K and Λ. As a reference, the results are compared to Ξπ and Ξp correlations, where the associated mesons and baryons do not contain a strange valence quark. These measurements are expected to be sensitive to whether strangeness is produced through string breaking or in a thermal production scenario. Furthermore, the multiplicity dependence of the correlation functions is measured to look for the turn-on of additional particle production mechanisms with event activity. The results are compared to predictions from the string-breaking model PYTHIA 8, including tunes with baryon junctions and rope hadronisation enabled, and a cluster hadronisation model. While individual features of the data are described quantitatively or qualitatively by the Monte Carlo models, no one model can match all features of the data. These results provide stringent constraints on the strangeness and baryon number production mechanisms in pp collisions.
The very forward energy is a powerful tool for characterising the proton fragmentation in pp and p-Pb collisions and, studied in correlation with particle production at midrapidity, provides direct insights into the initial stages and the subsequent evolution of the collision. Furthermore, the correlation between the forward energy and the production of particles with large transverse momenta at midrapidity provides information complementary to the measurements of the underlying event, which are usually interpreted in the framework of models implementing centrality-dependent multiple parton interactions. Results on the very forward energy, measured by the ALICE zero degree calorimeters (ZDC), and its dependence on the activity measured at midrapidity in pp collisions at √s = 13 TeV and in p-Pb collisions at √sNN = 8.16 TeV are presented and discussed. The measurements performed in pp collisions are compared with the expectations of three hadronic interaction event generators: PYTHIA 6 (Perugia 2011 tune), PYTHIA 8 (Monash tune), and EPOS LHC. These results provide new constraints on the validity of models in describing the beam remnants at very forward rapidities, where perturbative QCD cannot be used.