Refine
Year of publication
Document Type
- Preprint (771)
- Article (416)
- Working Paper (119)
- Doctoral Thesis (93)
- Diploma Thesis (47)
- Conference Proceeding (41)
- Book (37)
- Bachelor Thesis (36)
- Diploma Thesis (28)
- Report (25)
Has Fulltext
- yes (1645)
Is part of the Bibliography
- no (1645)
Keywords
Institute
- Informatik (1645)
The azimuthal (Δφ) correlation distributions between heavy-flavor decay electrons and associated charged particles are measured in pp and p−Pb collisions at √sNN = 5.02 TeV. Results are reported for electrons with transverse momentum 4<pT<16 GeV/c and pseudorapidity |η|<0.6. The associated charged particles are selected with transverse momentum 1<pT<7 GeV/c, and relative pseudorapidity separation with the leading electron |Δη|<1. The correlation measurements are performed to study and characterize the fragmentation and hadronization of heavy quarks. The correlation structures are fitted with a constant and two von Mises functions to obtain the baseline and the near- and away-side peaks, respectively. The results from p−Pb collisions are compared with those from pp collisions to study the effects of cold nuclear matter. In the measured trigger electron and associated particle kinematic regions, the two collision systems give consistent results. The Δφ distribution and the peak observables in pp and p−Pb collisions are compared with calculations from various Monte Carlo event generators.
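The quoted fit ansatz can be written out concretely. The sketch below, using synthetic stand-in data rather than the measured distribution, fits a constant baseline plus two von Mises peaks centered at Δφ = 0 (near side) and Δφ = π (away side):

```python
import numpy as np
from scipy.special import i0
from scipy.optimize import curve_fit

def correlation_model(dphi, baseline, y_near, kappa_near, y_away, kappa_away):
    # Constant baseline plus two normalized von Mises densities,
    # each scaled by a yield parameter; kappa controls the peak width.
    near = y_near * np.exp(kappa_near * np.cos(dphi)) / (2 * np.pi * i0(kappa_near))
    away = y_away * np.exp(kappa_away * np.cos(dphi - np.pi)) / (2 * np.pi * i0(kappa_away))
    return baseline + near + away

# Synthetic stand-in for a binned per-trigger yield vs. delta-phi:
dphi = np.linspace(-np.pi / 2, 3 * np.pi / 2, 32)
yields = correlation_model(dphi, 1.0, 0.8, 2.0, 0.5, 1.2)
params, cov = curve_fit(correlation_model, dphi, yields, p0=[1, 1, 1, 1, 1])
```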
Angular correlations between heavy-flavour decay electrons and charged particles at mid-rapidity (|η|<0.8) are measured in p-Pb collisions at √sNN = 5.02 TeV. The analysis is carried out for the 0-20% (high) and 60-100% (low) multiplicity ranges. The jet contribution in the correlation distribution from high-multiplicity events is removed by subtracting the distribution from low-multiplicity events. An azimuthal modulation remains after removing the jet contribution, similar to previous observations in two-particle angular correlation measurements for light-flavour hadrons. A Fourier decomposition of the modulation results in a positive second-order coefficient (v2) for heavy-flavour decay electrons in the transverse momentum interval 1.5<pT<4 GeV/c in high-multiplicity events, with a significance larger than 5σ. The results are compared with those of charged particles at mid-rapidity and of inclusive muons at forward rapidity. The v2 measurement of open heavy-flavour particles at mid-rapidity in small collision systems could provide crucial information to help interpret the anisotropies observed in such systems.
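For reference, the Fourier decomposition of such an azimuthal correlation is conventionally written as below (the standard two-particle form with the factorization assumption, not quoted from the paper):

```latex
\frac{1}{N_{\mathrm{trig}}}\frac{\mathrm{d}N^{\mathrm{assoc}}}{\mathrm{d}\Delta\varphi}
  \propto 1 + \sum_{n} 2\,V_{n\Delta}\cos(n\,\Delta\varphi),
\qquad
v_n^{\mathrm{trig}} = \frac{V_{n\Delta}^{\mathrm{trig,assoc}}}{\sqrt{V_{n\Delta}^{\mathrm{assoc,assoc}}}},
```

where the second relation extracts the single-particle coefficient under the assumption that the two-particle coefficients V_{nΔ} factorize.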
We present measurements of the azimuthal dependence of charged jet production in central and semi-central √sNN = 2.76 TeV Pb–Pb collisions with respect to the second harmonic event plane, quantified as v2^{ch jet}. Jet finding is performed employing the anti-kT algorithm with a resolution parameter R=0.2 using charged tracks from the ALICE tracking system. The contribution of the azimuthal anisotropy of the underlying event is taken into account event-by-event. The remaining (statistical) region-to-region fluctuations are removed on an ensemble basis by unfolding the jet spectra for different event plane orientations independently. Significant non-zero v2^{ch jet} is observed in semi-central collisions (30–50% centrality) for 20 < pT^{ch jet} < 90 GeV/c. The azimuthal dependence of the charged jet production is similar to the dependence observed for jets comprising both charged and neutral fragments, and compatible with measurements of the v2 of single charged particles at high pT. Good agreement between the data and predictions from JEWEL, an event generator simulating parton shower evolution in the presence of a dense QCD medium, is found in semi-central collisions.
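For orientation, the azimuthal dependence with respect to the second harmonic event plane Ψ_{EP,2} is conventionally parameterized as below (the textbook event-plane form, not quoted from the paper), with the observed coefficient corrected for the finite event plane resolution R2:

```latex
\frac{\mathrm{d}N^{\mathrm{jet}}}{\mathrm{d}\varphi}
  \propto 1 + 2\,v_2^{\mathrm{ch\,jet,\,obs}}\cos\!\bigl(2(\varphi-\Psi_{\mathrm{EP},2})\bigr),
\qquad
v_2^{\mathrm{ch\,jet}} = \frac{v_2^{\mathrm{ch\,jet,\,obs}}}{R_2}.
```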
Recent advances in artificial neural networks enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts so as to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task, and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceiving versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we will introduce RL and present recent advances in RL applied to robotics. We will review Intrinsically Motivated (IM) learning, a special form of RL, and we will apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also propose an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
The goal of this thesis is to automatically analyze whether a text describes buildings and, if so, to visualize them. For this purpose, a prototype was developed that uses NLP software based on a UIMA pipeline to scan a text for building data and then visualizes the findings as 3D models on a map. To assess the quality of the project, an evaluation was conducted in which the task was to match paragraphs to their corresponding 3D models. The results showed a recognition rate of 88.67%. However, weaknesses were also revealed in the parameter standardization procedure and in the one-sided approach to visualization. Finally, the thesis outlines how these weaknesses can be remedied with the help of an ontological model and how the project can be developed further.
Our recently developed LRSX Tool implements a technique to automatically prove the correctness of program transformations in higher-order program calculi which may permit recursive let-bindings as they occur in functional programming languages. A program transformation is correct if it preserves the observational semantics of programs. In our tool, the so-called diagram method is automated by combining unification, matching, and reasoning on alpha-renamings in the higher-order metalanguage, and by automating induction proofs via an encoding into termination problems of term rewrite systems. We explain the techniques, illustrate the usage of the tool, and report on experiments.
To accommodate the growth of the software industry, programming languages are getting increasingly easy to use. The latest trend in the simplification of the software development process is the usage of visual programming environments. To make visual programming effective, the graph-like representation of the source code must be clearly arranged. This thesis details some of the difficulties in automatic layout generation and proposes an interface as well as two different implementations of automatic layout generators to integrate into the VWorkflows visual programming framework.
This paper describes context analysis, an extension of strictness analysis for lazy functional languages. In particular, it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus is presented, based on abstract reduction, which, given the abstract value of the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
The goal of this thesis is to achieve authentic occlusion of embedded virtual 3D objects in augmented image-based worlds using only a small number of photos within the image world. Occlusion between the real and virtual parts of an augmented reality scene requires depth information. This usually comes from a 3D reconstruction, which in turn requires a large number of input images. In contrast, this thesis develops a system that bypasses a full 3D reconstruction. It is based on a direct image-based rendering approach which achieves high image quality with respect to authentic occlusion even with incomplete depth information. This opens up new fields of application, such as the automated visualization of 3D planning data and 3D product presentations in images or image worlds, since sufficiently large image sets are often not available in these areas. Precisely in these application areas, authentic occlusions are important for user acceptance of the augmentation. Authentic occlusion is understood as the visually correct overlay, in accordance with human perception, of virtual objects and individual image regions of one or more photos. The result is presented in the form of an image world (an image-based 3D world that spatially arranges the photos according to their content) extended with virtual objects. This thesis is therefore situated in the field of augmented reality. Within this work, a method for image-based rendering with authentic occlusions based on incomplete depth information was developed, together with several methods for computing the required depth information, which are compared against each other. The Sliced-Image-Rendering method uses incomplete depth information to render an image without 3D geometry as a three-dimensional representation and thereby realizes authentic occlusion. Computing the required depth information of a 2D image poses a particular challenge, since the image world provides only sparse and incomplete 3D information about the depicted scene. Consequently, a high-quality 3D reconstruction cannot be performed. The question is therefore how individual depth values can be computed and subsequently assigned to larger image regions. For this depth assignment, three different methods were designed in this thesis, which differ in the data they use and how they process it. The Segment-Depth-Matching method assigns a depth to segments of an image using the 3D scene information of the image world; it requires segmented images as input and yields a depth map for each photo. To enable depth assignment without a preceding segmentation, the Key-Point-Depth-Matching method was developed: the 3D scene information of the image world is projected onto the image plane as circular sprites, with the distance to the camera used as the depth value of each sprite; all projected sprites of a camera form the depth map. Both methods yield regions with depth information, but no pixel-accurate depth maps. To generate pixel-accurate depth maps, the Geometry-Depth-Matching method was developed.
In this method, a scene geometry of the depicted scene section is generated, producing a pixel-accurate depth map. This requires a semi-automatic sketching step. The generated scene geometry is not a complete 3D reconstruction of the image-world scene, since only a scene section is reconstructed from the viewpoint of a single camera. The conceptual methods were validated by a technical implementation. The resulting outputs were evaluated on various image-world scenes with different properties (outdoor and indoor scenes, rich and sparse in detail, different image set sizes). The evaluation of Sliced-Image-Rendering shows that, using the incomplete depth information produced by the developed depth-matching methods and in compliance with the stated requirements (few input photos, small scenes, no 3D reconstruction), authentic occlusion of embedded virtual 3D objects in image worlds can be realized. With the developed system, image-based applications can realize augmentations with high image quality with respect to authentic occlusion even with small photo sets.
Augmented Reality (AR) is a technology that alters or extends the perception of the real environment through computer-generated sensory stimuli. To create this "enriched reality", virtual information such as 3D objects, graphics, and videos is rendered in real time into images of the real environment. These augmentations help the user carry out tasks in the real world by providing information that could not be perceived directly without AR. The objective is to give the user the impression that the real environment and the virtual objects coexist and merge with one another. Numerous potential fields of application exist for AR, but several problems have so far prevented the spread of this technology. One obstacle to broad adoption of AR applications is that their creation places high programming demands on developers. To mitigate this problem, it is desirable to enable users without programming skills (authors) to develop AR applications. In addition, there are technological problems with the tracking methods that are essential for registering the virtual objects. Furthermore, existing AR applications in general, and those created with author-oriented systems in particular, show deficits in the authenticity of their renderings: incorrect occlusions and unrealistic shadows of the virtual objects are mainly responsible for the loss of the impression of coexistence. Taking the tracking problems into account and based on analyses that determine the most important authenticity criteria, this thesis develops and presents a concept for the authentic integration of virtual objects into AR applications. Based on this integration process, concepts are derived for tools with graphical user interfaces that enable authors to create AR applications with high authenticity of presentation. On the one hand, AR applications created with these tools feature improved registration of the virtual objects. On the other hand, the tools provide solutions so that the virtual objects of the AR applications exhibit correct occlusions and have shadows and shading effects that match the actual lighting conditions of the real environment. All of these authoring tools are based on a principle presented in this thesis, in which authentic integration takes place through easily understandable, low-complexity work steps and on the basis of an image sequence of the real target environment. The concepts of this thesis are validated by implementing the authoring tools, which shows that the concepts are technically feasible. The evaluation is based on a catalog of requirements developed in this thesis and demonstrates the suitability of the integration process and of the authoring tool concepts derived from it. The authoring tools are integrated into an existing, freely available AR authoring environment.
Learning platforms are e-learning systems whose core functionality is the management and distribution of learning materials over the World Wide Web. This thesis investigated how the tracking, analysis, and visualization of learning activities in learning platforms can improve the quality of learning. The starting point was to present information about learning activities to teachers and learners in a suitable way, so that they can draw conclusions and optimize learning processes on their own. Many learning platforms already follow this approach and therefore provide corresponding functionality.
Two essential questions had to be answered:
1. What do learners and teachers need to know about completed learning activities?
2. How can learning activities be presented in a suitable way?
These questions were answered by examining existing learning platforms (state of the art) and by interviewing experts. To answer the second question, general principles of data analysis and visualization were also used, as well as (to a small extent) analysis and visualization techniques of systems that are not learning platforms. Particular attention was also paid to data privacy.
Based on the insights gained, a concept for an analysis and visualization system was then developed that improves on the state of the art in several respects.
Parts of the concept were finally implemented as a prototype for the web-based software system LernBar, which provides a large part of the functionality of a learning platform. The implementation is intended to make it possible to evaluate the concept in practical use, which was not feasible within the scope of this thesis.
The following guide explains various methods for accessing the resource management system developed by the Text Technology Lab (AG Texttechnologie). The resource management is identical for all applications. As an example, reading data from the resource management of the project "PHI Picturing Atlas" is explained. All operations are performed via RESTful calls. The API documentation can be found at http://phi.resources.hucompute.org.
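As a minimal illustration of such a RESTful read access (the endpoint path and query parameter below are hypothetical; consult the API documentation linked above for the actual routes), one could use Python:

```python
import requests

BASE_URL = "http://phi.resources.hucompute.org"

# Hypothetical endpoint and query parameter for listing the resources
# of a project; the real routes are given in the API documentation.
response = requests.get(f"{BASE_URL}/resources",
                        params={"project": "PHI Picturing Atlas"})
response.raise_for_status()
for resource in response.json():
    print(resource)
```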
In this diploma thesis, we examine the security of the ring-based public-key cryptosystem NTRU, proposed in 1996 by J. Hoffstein, J. Pipher, and J.H. Silverman. This cryptosystem offers fast encryption and decryption in running time O(n^2) with a small security parameter n. The security of the system rests on a polynomial factorization problem (PFP) in the polynomial ring Zq[X]/(X^n - 1). Coppersmith and Shamir reduced the PFP to a shortest vector problem in the lattice Lcs. The new results of this thesis build on the lattice Lcs. We examine the shortcomings of Lcs and construct improved lattice bases for attacking the NTRU cryptosystem, exploiting structures of the polynomial ring Zq[X]/(X^n - 1) and of the secret keys. The new lattice bases increase the ratio between the length of the second-shortest and the length of the shortest lattice vector. Since we use approximation algorithms to find a shortest vector, this speeds up the attacks. We present several methods for reducing the dimension of the lattice bases. The improved lattice attacks yield a cryptanalysis of the NTRU system at the proposed medium security level. If breaking a public key using the Coppersmith/Shamir basis takes one month, the combined use of the new lattice bases reduces the running time to about 5 hours on one machine and, with parallelization, to about 1 hour 20 minutes on 4 machines. We expect the new methods to break NTRU at the high security level n = 167, although so far only "weak" keys have been broken for this n. Despite significant improvements, the experimental results indicate exponential running time as the security parameter n grows. The runtime exponent can, however, be lowered, so that n must be chosen larger to achieve security against the new attacks. Even if the NTRU cryptosystem is not completely broken, it loses its greatest advantage over other public-key cryptosystems: efficient encryption and decryption with a small security parameter n.
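For context, the Coppersmith/Shamir lattice mentioned above is commonly presented in the following standard form (a textbook presentation, not quoted from the thesis): with H the circulant matrix of the public key h and λ a balancing constant, a secret key pair (f, g) with f·h ≡ g (mod q) corresponds to the short lattice vector (λf, g) in

```latex
L_{\mathrm{CS}} \;=\; \operatorname{rowspace}
\begin{pmatrix}
  \lambda I_n & H \\
  0 & q\,I_n
\end{pmatrix}.
```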
We introduce algorithms for lattice basis reduction that are improvements of the famous L3-algorithm. If a random L3-reduced lattice basis b1, b2, ..., bn is given such that the vector of reduced Gram-Schmidt coefficients (µi,j)1≤j<i≤n is uniformly distributed in [0,1)^(n(n-1)/2), then the pruned enumeration finds a shortest lattice vector with positive probability. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151, and by breaking Damgård's hash function.
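The subset sum application can be made concrete with the classical lattice embedding. The sketch below builds a CJLOSS-style basis with fpylll and runs plain LLL on it; it illustrates the embedding only, not the paper's pruned-enumeration algorithm, and the weights are toy values:

```python
from fpylll import IntegerMatrix, LLL

def subset_sum_basis(weights, target, scale=1000):
    # CJLOSS-style embedding: a 0/1 solution x of
    # sum_i x_i * weights[i] == target yields the short vector
    # (2*x - 1, 0) in this lattice (up to sign).
    n = len(weights)
    B = IntegerMatrix(n + 1, n + 1)
    for i in range(n):
        B[i, i] = 2
        B[i, n] = scale * weights[i]
        B[n, i] = 1
    B[n, n] = scale * target
    return B

B = subset_sum_basis([3, 7, 11, 20], 30)   # toy instance: 3 + 7 + 20 = 30
LLL.reduction(B)                           # reduce in place, then inspect rows
print(B)
```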
Assessing communicative accommodation in the context of large language models: a semiotic approach
(2023)
Recently, significant strides have been made in the ability of transformer-based chatbots to hold natural conversations. However, despite a growing societal and scientific relevancy, there are few frameworks systematically deriving what it means for a chatbot conversation to be natural. The present work approaches this question through the phenomenon of communicative accommodation/interactive alignment. While there is existing research suggesting that humans adapt communicatively to technologies, the aim of this work is to explore the accommodation of AI chatbots to an interlocutor. Its research interest is twofold. Firstly, the structural ability of the transformer architecture to support accommodative behavior is assessed using a frame constructed in accordance with existing accommodation theories. This results in hypotheses to be tested empirically. Secondly, since effective accommodation produces the same outcomes regardless of technical implementation, a behavioral experiment is proposed. Existing quantifications of accommodation are reconciled, extended, and modified to apply them to nonhuman interlocutors. Thus, a measurement scheme is suggested which evaluates textual data from text-only, double-blind interactions between chatbots and humans, chatbots and chatbots, and humans and humans. Using the generated human-to-human convergence data as a reference, the degree of artificial accommodation can be evaluated. Accommodation, as a central facet of artificial interactivity, can thus be evaluated directly against its theoretical paradigm, i.e. human interaction. Should subsequent examinations show that chatbots effectively do not accommodate, a new form of algorithmic bias may emerge from the aggregate accommodation towards chatbots but not towards humans; existing, hegemonic semantics could thus be cemented through chatbot learning. Meanwhile, the ability to accommodate effectively would render chatbots vastly more susceptible to misuse.
In computational linguistics, the automatic generation of scenes from text written in natural language has been an important part of research for many decades, with applications in art, teaching, and robotics. New technologies in the field of artificial intelligence (AI) enable new developments that simplify such generation, but also promote opaque decisions made internally by the model.
The goal of the proposed solution, "ARES: Annotation von Relationen und Eigenschaften zur Szenengenerierung" (annotation of relations and properties for scene generation), is to design a modular system in which the individual processes remain understandable to the user. In addition, it should be possible to incorporate new entities and relations, provided through the text analysis, into the scene generation in three-dimensional space without requiring any code.
The focus is on the syntactically correct placement of the elements in space. Semantic correctness, by contrast, can be increased through further manual adjustments, which are saved for later generations. Finally, the number of annotations required for rendering should remain as small as possible, and new scene-related annotations should be added through the implemented annotation tools.
The amyloid precursor protein (APP) was discovered in the 1980s as the precursor protein of the amyloid A4 peptide. The amyloid A4 peptide, also known as A-beta (Aβ), is the main constituent of senile plaques implicated in Alzheimer's disease (AD). In association with the amyloid deposits, increasing impairments in learning and memory as well as the degeneration of neurons, especially in the hippocampal formation, are hallmarks of the pathogenesis of AD. Over the last decades, much effort has been expended on understanding the pathogenesis of AD. However, little is known about the physiological role of APP within the central nervous system (CNS). Allocating APP to the proteome of the highly dynamic presynaptic active zone (PAZ) identified APP as a novel player within this neuronal communication and signaling network. The analysis of the hippocampal PAZ proteome derived from APP-mutant mice demonstrates that APP is tightly embedded in the underlying protein network. Strikingly, APP deletion accounts for major dysregulation within the PAZ proteome network: Ca2+ homeostasis, neurotransmitter release, and mitochondrial function are affected, resembling the outcome during the pathogenesis of AD. The observed changes in protein abundance that occur in the absence of APP as well as in AD suggest that APP is a structural and functional regulator within the hippocampal PAZ proteome. Within this review article, we introduce APP as an important player within the hippocampal PAZ proteome and outline the impact of APP deletion on individual PAZ proteome subcommunities.
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture, the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) by the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
Given a real vector α = (α1, ..., αd) and a real number ε > 0, a good Diophantine approximation to α is a number Q such that ‖Qα mod Z‖∞ ≤ ε, where ‖·‖∞ denotes the maximum norm ‖x‖∞ := max_{1≤i≤d} |xi| for x = (x1, ..., xd). Lagarias [12] proved the NP-completeness of the corresponding decision problem, i.e., given a vector α ∈ Q^d, a rational number ε > 0 and a number N ∈ N+, decide whether there exists a number Q with 1 ≤ Q ≤ N and ‖Qα mod Z‖∞ ≤ ε. We prove that, unless ...
Motivated by the question of whether sound and expressive applicative similarities exist for program calculi with should-convergence, this paper investigates expressive applicative similarities for the untyped call-by-value lambda-calculus extended with McCarthy's ambiguous choice operator amb. Soundness of the applicative similarities w.r.t. contextual equivalence based on may- and should-convergence is proved by adapting Howe's method to should-convergence. As usual for nondeterministic calculi, similarity is not complete w.r.t. contextual equivalence, which requires a rather complex counterexample as a witness. Also the call-by-value lambda-calculus with the weaker nondeterministic construct erratic choice is analyzed and sound applicative similarities are provided. This justifies the expectation that also for more expressive and call-by-need higher-order calculi there are sound and powerful similarities for should-convergence.
Visual perception has grown increasingly important during the last decades in the robotics domain. Mobile robots have to localize themselves in known environments and carry out complex navigation tasks. This thesis presents an appearance-based or view-based approach to robot self-localization and robot navigation using holistic, spherical views obtained by cameras with large fields of view. For view-based methods, it is crucial to have a compressed image representation where different views can be stored and compared efficiently. Our approach relies on the spherical Fourier transform, which transforms a signal defined on the sphere to a small set of coefficients, approximating the original signal by a weighted sum of orthonormal basis functions, the so-called spherical harmonics. The truncated low-order expansion of the image signal makes it possible to compare input images efficiently, and the mathematical properties of spherical harmonics also allow for estimating the rotation between two views, even in 3D. Since no geometrical measurements need to be made, modest quality of the vision system is sufficient. All experiments shown in this thesis are purely based on visual information to show the applicability of the approach. The research presented on robot self-localization focused on demonstrating the usability of the compressed spherical harmonics representation to solve the well-known kidnapped robot problem. To address this problem, the basic idea is to compare the current view to a set of images from a known environment to obtain a likelihood of robot positions. To localize the robot, one could choose the most probable position from the likelihood map; however, it is more beneficial to apply standard methods to integrate information over time while the robot moves, that is, particle or Kalman filters. The first step was to design a fast expansion method to obtain coefficient vectors directly in image space. This was achieved by back-projecting basis functions on the input image. The next steps were to develop a dissimilarity measure, an estimator for rotations between coefficient vectors, and a rotation-invariant dissimilarity measure, all of them purely based on the compact signal representation. With all these techniques at hand, generating likelihood maps is straightforward, but first experiments indicated a strong dependence on illumination conditions. This is obviously a challenge for all holistic methods, in particular for a spherical harmonics approach, since local changes usually affect each single element of the coefficient vector. To cope with illumination changes, we investigated preprocessing steps leading to feature images (e.g. edge images, depth images), which bring together our holistic approach and classical feature-based methods. Furthermore, we concentrated on building a statistical model for typical changes of the coefficient vectors in the presence of changes in illumination. This task is more demanding but leads to even better results. The second major topic of this thesis is appearance-based robot navigation. I present a view-based approach called Optical Rails (ORails), which leads a robot along a prerecorded track. The robot navigates in a network of known locations which are denoted as waypoints. At each waypoint, we store a compressed view representation. A visual servoing method is used to reach a current target waypoint based on the appearance and the current camera image.
Navigating in a network of views is achieved by reaching a sequence of stopover locations, one after another. The main contribution of this work is a model which makes it possible to deduce the best driving direction of the robot based purely on the coefficient vectors of the current and the target image. It is based on image registration, as in the classical method by Lucas and Kanade, but has been transferred to the spectral domain, which allows for a great speedup. ORails also includes a waypoint selection strategy and a module for steering our nonholonomic robot. As for our self-localization algorithm, dependence on illumination changes is also problematic in ORails. Furthermore, occlusions have to be handled for ORails to work properly. I present a solution based on the optimal expansion, which is able to deal with incomplete image signals. To handle dynamic occlusions, i.e. objects appearing in an arbitrary region of the image, we use the linearity of the expansion process and cut the image into segments. These segments can be treated separately, and finally we merge the results. At this point, we can decide to disregard certain segments. Slicing the view allows for local illumination compensation, which is inherently non-robust if applied to the whole view. In conclusion, this approach handles the most important criticisms of holistic view-based approaches, that is, occlusions and illumination changes, and consequently improves the performance of Optical Rails.
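To make the representation concrete, here is a minimal sketch (under simplifying assumptions: an equiangular (θ, φ) grid and naive quadrature, not the thesis' optimized image-space back-projection) of expanding a spherical signal into spherical harmonics coefficients and comparing two views with a rotation-invariant dissimilarity based on per-degree energies:

```python
import numpy as np
from scipy.special import sph_harm

def expand(signal, theta, phi, l_max):
    # Project a signal sampled on an equiangular (theta, phi) meshgrid
    # onto spherical harmonics up to degree l_max via naive quadrature.
    d_omega = np.sin(theta) * (np.pi / theta.shape[0]) * (2 * np.pi / theta.shape[1])
    coeffs = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # scipy's argument order: sph_harm(m, l, azimuthal, polar)
            Y = sph_harm(m, l, phi, theta)
            coeffs[(l, m)] = np.sum(signal * np.conj(Y) * d_omega)
    return coeffs

def rotation_invariant_dissimilarity(c1, c2, l_max):
    # The per-degree energy E_l = sum_m |c_lm|^2 is invariant under 3D
    # rotations of the signal, so comparing energy spectra ignores rotation.
    e1 = np.array([sum(abs(c1[l, m]) ** 2 for m in range(-l, l + 1)) for l in range(l_max + 1)])
    e2 = np.array([sum(abs(c2[l, m]) ** 2 for m in range(-l, l + 1)) for l in range(l_max + 1)])
    return float(np.linalg.norm(np.sqrt(e1) - np.sqrt(e2)))
```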
Software updates are a critical success factor in mobile app ecosystems. Through publishing regular updates, platform providers enhance their operating systems for the benefit of both end users and third-party developers. It is also a way of attracting new customers. However, this platform evolution poses the risk of inadvertently introducing software problems, which can severely disturb the ecosystem’s balance by compromising its foundational technologies. So far, little to no research has addressed this issue from a user-centered perspective. The thesis at hand draws on IS post-adoption literature to investigate the potential negative influences of operating system updates on mobile app users. The release of Apple’s iOS 13 update serves as research object. Based on over half a million user reviews from the AppStore, data mining techniques are applied to study the impact of the new platform version. The results show that iOS 13 caused complications with a large number of popular apps, leading to a significant decline in user ratings and an uptrend in negative sentiment. Feature requests, functional complaints, and device compatibility are identified as the three major issue categories. These issue types are compared in terms of their quantifiable negative effect on users’ continuance intention. In essence, the findings contribute to IS research on post-adoption behavior and provide guidance to ecosystem participants in dealing with update-induced platform issues.
Synaptic release sites are characterized by exocytosis-competent synaptic vesicles tightly anchored to the presynaptic active zone (PAZ), whose proteome orchestrates the fast signaling events involved in the synaptic vesicle cycle and plasticity. Allocation of the amyloid precursor protein (APP) to the PAZ proteome implicated a functional impact of APP on neuronal communication. In this study, we combined state-of-the-art proteomics, electrophysiology and bioinformatics to address protein abundance and functional changes at the native hippocampal PAZ in young and old APP-KO mice. We evaluated whether APP deletion has an impact on the metabolic activity of presynaptic mitochondria. Furthermore, we quantified differences in the phosphorylation status after long-term potentiation (LTP) induction at the purified native PAZ. We observed an increase in the phosphorylation of the signaling enzyme calmodulin-dependent kinase II (CaMKII) only in old APP-KO mice. During aging, APP deletion is accompanied by a severe decrease in metabolic activity and hyperphosphorylation of CaMKII. This attributes an essential functional role to APP at the hippocampal PAZ and suggests putative molecular mechanisms underlying the age-dependent impairments in learning and memory in APP-KO mice.
Alternative polyadenylation (APA) is a widespread mechanism that contributes to the sophisticated dynamics of gene regulation. Approximately 50% of all protein-coding human genes harbor multiple polyadenylation (PA) sites; their selective and combinatorial use gives rise to transcript variants with differing length of their 3' untranslated region (3'UTR). Shortened variants escape UTR-mediated regulation by microRNAs (miRNAs), especially in cancer, where global 3'UTR shortening accelerates disease progression, dedifferentiation and proliferation. Here we present APADB, a database of vertebrate PA sites determined by 3' end sequencing, using massive analysis of complementary DNA ends. APADB provides (A)PA sites for coding and non-coding transcripts of human, mouse and chicken genes. For human and mouse, several tissue types, including different cancer specimens, are available. APADB records the loss of predicted miRNA binding sites and visualizes next-generation sequencing reads that support each PA site in a genome browser. The database tables can either be browsed according to organism and tissue or alternatively searched for a gene of interest. APADB is the largest database of APA in human, chicken and mouse. The stored information provides experimental evidence for thousands of PA sites and APA events. APADB combines 3' end sequencing data with prediction algorithms of miRNA binding sites, allowing these algorithms to be further improved. Current databases lack correct information about 3'UTR lengths, especially for chicken, and APADB provides the necessary information to close this gap. Database URL: http://tools.genxpro.net/apadb/
This thesis presented the implementation of a JMX-compliant management infrastructure for the agent system AMETAS. Building on this, control mechanisms for mobile agents in AMETAS were examined in the context of fault management, and a solution for locating AMETAS agents was designed and implemented. The essential background for AMETAS management is as follows: considering application and infrastructure management with regard to the management hierarchy puts the openness and interoperability of the intended management solution in the foreground. These properties enable the integration of the management solutions already existing in an enterprise, the goal being cost-effective and efficient management. A management architecture is described and modeled in terms of information-, organization-, communication-, and function-related aspects. Based on these aspects, CORBA, DMTF, WBEM, and JMX were analyzed and their suitability for AMETAS management evaluated. Besides the general criteria, their submodels, their support for decentralized and dynamic management, and their ability to be integrated into AMETAS were central points. It turns out that JMX offers the best options for AMETAS management. The OSI functional model classifies management tasks and functions into five areas, often referred to as FCAPS: fault, configuration, accounting, performance, and security management. This classification is orthogonal to any other and provides a suitable framework for dividing up management tasks and functions. The AMETAS management recommended in this thesis follows the OSI model with respect to this division of management tasks. JMX offers powerful tools for instrumenting all kinds of resources. Its Java foundation is a substantial simplification for the agent system, and the open architecture of JMX enables AMETAS management to cooperate with other management standards. AMETAS management exploits the advantages of mobile agents, particularly in the areas of configuration and fault management. The following properties characterize AMETAS management: 1) Use of the agent infrastructure for management, which is itself implemented as an AMETAS service and can use all capabilities and services of the agent infrastructure. 2) Use of AMETAS agents and services as management tools. 3) Self-management of the system: the management service is equipped with sufficient intelligence for this purpose; it exploits the mechanisms of the agent infrastructure and carries out various management tasks autonomously, with the AMETAS event system playing an important role. The analysis of the control mechanisms of MASIF, the Aglets Workbench, and Mole with regard to their suitability for locating agents in AMETAS yields the following result: the examined approaches are partly generally applicable. Non-deterministic approaches such as advertising and the energy concept are distinguished from those that record certain traces of agents in a suitable way. In this respect, the path concept turned out to be interesting: with this concept, information about an agent's migration path can be stored in a suitable manner, either for a limited time or indefinitely. Another alternative is the registration method.
With this method, an agent is registered at a central location, storing the agent's unique identity and the current site where the agent resides. Against the background of this analysis, a kind of path concept is recommended as the basis for locating AMETAS agents: the traces of the agents are recorded by a management service. To locate a specific agent or a group of agents, the decentrally stored information is evaluated within a consistent cut (snapshot). The snapshot method is recommended for locating agents in AMETAS in accordance with the properties required of a localization mechanism at the beginning of this thesis: it allows reliable localization of the sought agents while respecting their autonomy. The cost-performance ratio is favorable, since unnecessary data and agent traffic is avoided, as is the maintenance of extensive centralized databases.
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a pT region inaccessible by direct jet identification. In these measurements pseudorapidity (Δη) and azimuthal (Δφ) differences are used to extract the shape of the near-side peak formed by particles associated to a higher pT trigger particle (1<pT,trig< 8 GeV/c). A combined fit of the near-side peak and long-range correlations is applied to the data allowing the extraction of the centrality evolution of the peak shape in Pb-Pb collisions at √sNN = 2.76 TeV. A significant broadening of the peak in the Δη direction at low pT is found from peripheral to central collisions, which vanishes above 4 GeV/c, while in the Δφ direction the peak is almost independent of centrality. For the 10% most central collisions and 1<pT,assoc< 2 GeV/c, 1<pT,trig< 3 GeV/c a novel feature is observed: a depletion develops around the centre of the peak. The results are compared to pp collisions at the same centre of mass energy and to AMPT model simulations. The comparison to the investigated models suggests that the broadening and the development of the depletion is connected to the strength of radial and longitudinal flow.
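Schematically, the combined fit referred to above separates the correlation into long-range (Δη-independent) flow harmonics plus a localized near-side peak (a generic form of such a decomposition, not quoted from the paper):

```latex
C(\Delta\eta, \Delta\varphi) \;=\;
  \underbrace{B\Bigl(1 + \sum_{n} 2\,V_{n\Delta}\cos(n\,\Delta\varphi)\Bigr)}_{\text{long-range correlations}}
  \;+\; P(\Delta\eta, \Delta\varphi),
```

where the widths of the peak term P along Δη and Δφ are the shape observables whose centrality evolution is studied.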
This paper describes work on the morphological and syntactic annotation of Sumerian cuneiform as a model for low resource languages in general. Cuneiform texts are invaluable sources for the study of history, languages, economy, and cultures of Ancient Mesopotamia and its surrounding regions. Assyriology, the discipline dedicated to their study, has vast research potential, but lacks the modern means for computational processing and analysis. Our project, Machine Translation and Automated Analysis of Cuneiform Languages, aims to fill this gap by bringing together corpus data, lexical data, linguistic annotations and object metadata. The project’s main goal is to build a pipeline for machine translation and annotation of Sumerian Ur III administrative texts. The rich and structured data is then to be made accessible in the form of (Linguistic) Linked Open Data (LLOD), which should open them to a larger research community. Our contribution is two-fold: in terms of language technology, our work represents the first attempt to develop an integrative infrastructure for the annotation of morphology and syntax on the basis of RDF technologies and LLOD resources. With respect to Assyriology, we work towards producing the first syntactically annotated corpus of Sumerian.
The elliptic (v2), triangular (v3), and quadrangular (v4) flow coefficients of π±, K±, p+p̄, Λ+Λ̄, K0S, and the ϕ-meson are measured in Pb-Pb collisions at √sNN = 5.02 TeV. Results obtained with the scalar product method are reported for the rapidity range |y|<0.5 as a function of transverse momentum, pT, at different collision centrality intervals between 0-70%, including ultra-central (0-1%) collisions for π±, K±, and p+p̄. For pT<3 GeV/c, the flow coefficients exhibit a particle mass dependence. At intermediate transverse momenta (3<pT≲8-10 GeV/c), particles show an approximate grouping according to their type (i.e., mesons and baryons). The ϕ-meson v2, which tests both particle mass dependence and type scaling, follows p+p̄ v2 at low pT and π± v2 at intermediate pT. The evolution of the shape of vn(pT) as a function of centrality and harmonic number n is studied for the various particle species. Flow coefficients of π±, K±, and p+p̄ for pT<3 GeV/c are compared to iEBE-VISHNU and MUSIC hydrodynamical calculations coupled to a hadronic cascade model (UrQMD). The iEBE-VISHNU calculations describe the results fairly well for pT<2.5 GeV/c, while MUSIC calculations reproduce the measurements for pT<1 GeV/c. A comparison to vn coefficients measured in Pb-Pb collisions at √sNN = 2.76 TeV is also provided.
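For reference, the scalar product method quoted above determines the coefficients from flow vectors as below (the standard definition with two reference sub-events A and B, not quoted from the paper):

```latex
v_n\{\mathrm{SP}\} \;=\;
\frac{\bigl\langle \mathbf{u}_n \cdot \mathbf{Q}_n^{*}\bigr\rangle}
     {\sqrt{\bigl\langle \mathbf{Q}_n^{A} \cdot \mathbf{Q}_n^{B*}\bigr\rangle}},
\qquad
\mathbf{u}_n = e^{\,i n \varphi},
\quad
\mathbf{Q}_n = \sum_{k} e^{\,i n \varphi_k},
```

where u_n is the unit flow vector of the particle of interest and Q_n is a reference flow vector (normalized by multiplicity in practice).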
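For orientation, the flow coefficients vn measured in this and the following abstracts are the standard Fourier coefficients of the azimuthal particle distribution relative to the symmetry planes Ψn; this definition is not spelled out in the abstracts themselves:

```latex
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\big(n(\varphi - \Psi_n)\big)
```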
Measurements of elliptic (v2) and triangular (v3) flow coefficients of π±, K±, p+p̄, K0S, and Λ+Λ̄ obtained with the scalar product method in Xe-Xe collisions at √sNN = 5.44 TeV are presented. The results are obtained in the rapidity range |y| < 0.5 and reported as a function of transverse momentum, pT, for several collision centrality classes. The flow coefficients exhibit a particle mass dependence for pT < 3 GeV/c, while a grouping according to particle type (i.e., meson and baryon) is found at intermediate transverse momenta (3 < pT < 8 GeV/c). The magnitude of the baryon v2 is larger than that of mesons up to pT = 6 GeV/c. The centrality dependence of the shape evolution of the pT-differential v2 is studied for the various hadron species. The v2 coefficients of π±, K±, and p+p̄ are reproduced by MUSIC hydrodynamic calculations coupled to a hadronic cascade model (UrQMD) for pT < 1 GeV/c. A comparison with vn measurements in the corresponding centrality intervals in Pb-Pb collisions at √sNN = 5.02 TeV yields an enhanced v2 in central collisions and a diminished value in semicentral collisions.
We report the first results of elliptic (v2), triangular (v3) and quadrangular flow (v4) of charged particles in Pb-Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the CERN Large Hadron Collider. The measurements are performed in the central pseudorapidity region |η| < 0.8 and for the transverse momentum range 0.2 < pT < 5 GeV/c. The anisotropic flow is measured using two-particle correlations with a pseudorapidity gap greater than one unit and with the multi-particle cumulant method. Compared to results from Pb-Pb collisions at √sNN = 2.76 TeV, the anisotropic flow coefficients v2, v3 and v4 are found to increase by (3.0±0.6)%, (4.3±1.4)% and (10.2±3.8)%, respectively, in the centrality range 0-50%. This increase can be attributed mostly to an increase of the average transverse momentum between the two energies. The measurements are found to be compatible with hydrodynamic model calculations. This comparison provides a unique opportunity to test the validity of the hydrodynamic picture and the power to further discriminate between various possibilities for the temperature dependence of the shear viscosity to entropy density ratio of the matter produced in heavy-ion collisions at the highest energies.
The elliptic, v2, triangular, v3, and quadrangular, v4, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at √sNN=2.76 TeV with the ALICE detector at the Large Hadron Collider. Results obtained with the event plane and four-particle cumulant methods are reported for the pseudo-rapidity range |η|<0.8 at different collision centralities and as a function of transverse momentum, pT, out to pT=20 GeV/c. The observed non-zero elliptic and triangular flow depends only weakly on transverse momentum for pT>8 GeV/c. The small pT dependence of the difference between elliptic flow results obtained from the event plane and four-particle cumulant methods suggests a common origin of flow fluctuations up to pT=8 GeV/c. The magnitude of the (anti-)proton elliptic and triangular flow is larger than that of pions out to at least pT=8 GeV/c indicating that the particle type dependence persists out to high pT.
Anisotropic flow and flow fluctuations of identified hadrons in Pb–Pb collisions at √sNN = 5.02 TeV
(2023)
The first measurements of elliptic flow of π±, K±, p+p̄, K0S, Λ+Λ̄, ϕ, Ξ−+Ξ̄+, and Ω−+Ω̄+ using multiparticle cumulants in Pb–Pb collisions at √sNN = 5.02 TeV are presented. Results obtained with two- (v2{2}) and four-particle cumulants (v2{4}) are shown as a function of transverse momentum, pT, for various collision centrality intervals. Combining the data for both v2{2} and v2{4} also allows us to report the first measurements of the mean elliptic flow, elliptic flow fluctuations, and relative elliptic flow fluctuations for various hadron species. These observables probe the event-by-event eccentricity fluctuations in the initial state and the contributions from the dynamic evolution of the expanding quark-gluon plasma. The characteristic features observed in previous pT-differential anisotropic flow measurements for identified hadrons with two-particle correlations, namely the mass ordering at low pT and the approximate scaling with the number of constituent quarks at intermediate pT, are similarly present in the four-particle correlations and the combinations of v2{2} and v2{4}. In addition, a particle species dependence of flow fluctuations is observed that could indicate a significant contribution from final state hadronic interactions. The comparison between experimental measurements and CoLBT model calculations, which combine the various physics processes of hydrodynamics, quark coalescence, and jet fragmentation, illustrates their importance over a wide pT range.
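The mean elliptic flow and its fluctuations mentioned above are conventionally obtained by combining the two- and four-particle cumulant results; under the assumption of Gaussian flow fluctuations, the standard relations read:

```latex
\langle v_2 \rangle \approx \sqrt{\frac{v_2\{2\}^2 + v_2\{4\}^2}{2}}, \qquad
\sigma_{v_2} \approx \sqrt{\frac{v_2\{2\}^2 - v_2\{4\}^2}{2}}
```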
We consider the lattice-based cryptosystem of Goldreich, Goldwasser, and Halevi (GGH) proposed at Crypto '97 [11]. The authors published challenges for the security parameters 200, 250, 300, 350, and 400 [12]. Each challenge consists of the public key and a ciphertext. For the attack we develop numerically stable lattice reduction algorithms that make it possible to attack the system in these dimensions. Methods for orthogonalization, namely Householder reflections and Givens rotations, are treated, and a practical floating-point version of the LLL algorithm of Lenstra, Lenstra, and Lovász [16] is given. We develop and analyze the LLL block algorithm, which organizes lattice reduction in blocks. The floating-point version of the LLL block algorithm is applied experimentally to the GGH scheme and compared with LLL reduction in dimensions 100 to 400. Besides its better numerical stability, LLL block reduction is faster than ordinary LLL reduction by a factor of 10 to 18. The GGH cryptosystem was also attacked by Nguyen [22], who reconstructed the original messages up to dimension 350. We present further attacks on the cryptosystem, showing that the public parameters can be exploited for successful attacks. The private key in dimension 200 is reconstructed after about 10 hours, and ciphertext attacks succeed up to dimension 300.
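For orientation, a minimal textbook LLL sketch (delta = 0.75) illustrating the kind of reduction the thesis builds on. It uses classical Gram-Schmidt, not the Householder/Givens orthogonalization or the block organization the thesis develops, and recomputes the orthogonalization naively; numpy is assumed.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt vectors Bs and coefficients mu for the rows of B."""
    n = B.shape[0]
    Bs = B.astype(float).copy()
    mu = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
            Bs[i] = Bs[i] - mu[i, j] * Bs[j]
    return Bs, mu

def lll(B, delta=0.75):
    """Reduce the integer row basis B in place (textbook LLL)."""
    B = B.copy()
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # size-reduce b_k against b_{k-1}, ..., b_0
        for j in range(k - 1, -1, -1):
            q = int(round(mu[k, j]))
            if q:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        # Lovász condition: accept or swap and backtrack
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

print(lll(np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])))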
An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately in an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
Monitoring is an indispensable tool for the operation of any large installation of grid or cluster computing, be it high energy physics or elsewhere. Usually, monitoring is configured to collect a small amount of data, just enough to enable detection of abnormal conditions. Once detected, the abnormal condition is handled by gathering all information from the affected components. This data is processed by querying it in a manner similar to a database.
This contribution shows how the metaphor of a debugger (for software applications) can be transferred to a compute cluster. The concepts of variables, assertions and breakpoints that are used in debugging can be applied to monitoring by defining variables as the quantities recorded by monitoring and breakpoints as invariants formulated via these variables. It is found that embedding fragments of a data extracting and reporting tool such as the UNIX tool awk facilitates concise notations for commonly used variables, since tools like awk are designed to process large event streams (in textual representations) with bounded memory. A functional notation, similar to both the pipe notation used in the UNIX shell and the point-free style used in functional programming, simplifies the combination of variables that commonly occurs when formulating breakpoints.
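A hypothetical sketch of this debugger metaphor: a "variable" is a quantity derived awk-style from one event line of a monitoring stream, and a "breakpoint" is an invariant over that variable. The field position, log format, and threshold are illustrative assumptions, not the notation described in the text.

```python
import sys

def load_average(line):
    # variable: extract one numeric field from an event line, awk-style
    fields = line.split()
    return float(fields[2]) if len(fields) > 2 else 0.0

def breakpoint_hit(value, threshold=10.0):
    # breakpoint: an invariant over the variable; firing signals an abnormal condition
    return value > threshold

for line in sys.stdin:              # bounded memory: one event at a time
    v = load_average(line)
    if breakpoint_hit(v):
        print(f"breakpoint: load average {v} exceeds threshold")
```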
The economic success of the World Wide Web makes it a highly competitive environment for web businesses. For this reason, it is crucial for web business owners to learn what their customers want. This thesis provides a conceptual framework and an implementation of a system that helps to better understand the behavior and potential interests of web site visitors by accounting for both explicit and implicit feedback. This thesis is divided into two parts.
The first part is rooted in computer science and information systems and uses graph theory and an extended click-stream analysis to define a framework and a system tool that is useful for analyzing web user behavior by calculating the interests of the users.
The second part is rooted in behavioral economics, mathematics, and psychology and investigates factors that influence different types of web user choices. In detail, a model of the cognitive process of rating products on the Web is defined, and an importance hierarchy of the influencing factors is derived.
Both parts make use of techniques from a variety of research fields and, therefore, contribute to the area of Web Science.
Charged-particle spectra at midrapidity are measured in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair √sNN = 5.02 TeV and presented in centrality classes ranging from most central (0–5%) to most peripheral (95–100%) collisions. Possible medium effects are quantified using the nuclear modification factor (RAA) by comparing the measured spectra with those from proton–proton collisions, scaled by the number of independent nucleon–nucleon collisions obtained from a Glauber model. At large transverse momenta (8 < pT < 20 GeV/c), the average RAA is found to increase from about 0.15 in 0–5% central to a maximum value of about 0.8 in 75–85% peripheral collisions, beyond which it falls off strongly to below 0.2 for the most peripheral collisions. Furthermore, RAA initially exhibits a positive slope as a function of pT in the 8–20 GeV/c interval, while for collisions beyond the 80% class the slope is negative. To reduce uncertainties related to event selection and normalization, we also provide the ratio of RAA in adjacent centrality intervals. Our results in peripheral collisions are consistent with a PYTHIA-based model without nuclear modification, demonstrating that biases caused by the event selection and collision geometry can lead to the apparent suppression in peripheral collisions. This explains the unintuitive observation that RAA is below unity in peripheral Pb–Pb, but equal to unity in minimum-bias p–Pb collisions despite similar charged-particle multiplicities.
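For reference, the nuclear modification factor used here is the standard ratio of the measured Pb–Pb spectrum to the pp spectrum scaled by the mean number of binary nucleon–nucleon collisions ⟨Ncoll⟩ from the Glauber model, as described above:

```latex
R_{AA}(p_T) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\langle N_{\mathrm{coll}}\rangle \; \mathrm{d}N_{pp}/\mathrm{d}p_T}
```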
Virtual machines are for the most part not used inside high-energy physics (HEP) environments. Even though they provide a high degree of isolation, the performance overhead they introduce is too great for them to be used. With the rising number of container technologies and their increasing separation capabilities, HEP environments are evaluating whether they can utilize the technology. Container images are small and self-contained, which allows them to be easily distributed throughout the global environment. They also offer near-native performance while providing an often acceptable level of isolation. Only the needed services and libraries are packed into an image and executed directly by the host kernel. This work compared the performance impact of the three container technologies Docker, rkt and Singularity. The host kernel was additionally hardened with grsecurity and PaX to strengthen its security and make exploitation from inside a container harder. The execution time of a physics simulation was used as a benchmark. The results show that the different container technologies have different impacts on performance. The performance loss on a stock kernel is small; in some cases the containerized runs were even faster than running without a container. Docker showed the best overall performance on a stock kernel. The difference on a hardened kernel was bigger than on a stock kernel, but in favor of the container technologies; there, rkt performed better than the others in almost all cases.
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master’s thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
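A hedged sketch of the kind of mortality-prediction setup described: the file name, column names, and preprocessing are hypothetical; only the model families (Random Forest, XGBoost) follow the text. The xgboost and scikit-learn packages are assumed.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_csv("mimic_stroke_cohort.csv")       # hypothetical cohort extract
X = df.drop(columns=["hospital_expire_flag"])     # hypothetical label column
y = df["hospital_expire_flag"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("random forest", RandomForestClassifier(n_estimators=300)),
                    ("xgboost", XGBClassifier(n_estimators=300, eval_metric="logloss"))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```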
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident: the results demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach is a valuable methodology, the underlying reasons for potential unfairness across clusters could not be investigated, because too few test instances remained per subgroup for further fairness or feature-relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
We present a biologically-inspired system for real-time, feed-forward object recognition in cluttered scenes. Our system utilizes a vocabulary of very sparse features that are shared between and within different object models. To detect objects in a novel scene, these features are located in the image, and each detected feature votes for all objects that are consistent with its presence. Due to the sharing of features between object models our approach is more scalable to large object databases than traditional methods. To demonstrate the utility of this approach, we train our system to recognize any of 50 objects in everyday cluttered scenes with substantial occlusion. Without further optimization we also demonstrate near-perfect recognition on a standard 3-D recognition problem. Our system has an interpretation as a sparsely connected feed-forward neural network, making it a viable model for fast, feed-forward object recognition in the primate visual system.
Students entering computer science studies bring very different competencies, experiences, and knowledge to university education. 145 datasets on computer science freshmen, collected by learning management systems and related to exam outcomes and learning dispositions data (e.g., student dispositions, previous experiences, and attitudes measured through self-reported surveys), have been exploited to identify indicators that predict academic success and hence to enable effective interventions for an extremely heterogeneous group of students.
Analysis of Heuristics (Analyse von Heuristiken)
(2006)
Heuristics appear in particular in connection with optimization problems, that is, problems in which not just any solution is to be found but, among several possible solutions, a best one in an objective sense. For the shortest superstring problem, heuristics are used because exact algorithms cannot be expected in view of the APX-completeness of the problem. Given a set S of strings, a string s is sought such that every string in S is a substring of s; the length of s is to be minimized. The most prominent heuristic for the shortest superstring problem is the greedy heuristic (a sketch follows after this abstract), whose approximation factor can currently only be bounded unsatisfactorily. It is conjectured (the so-called greedy conjecture) that the approximation factor is exactly 2, but it can only be proven that it lies neither below 2 nor above 3.5. The greedy conjecture is the central topic of the second chapter. The main results are: * By considering greedy orders, conditional linear inequalities can be made usable. This approach enables the use of linear programming to find interesting instances and deepens the understanding of such hard instances. The approach is introduced and an interpretation of the dual problem is presented. * For the nontrivial, large subclass of bilinear greedy orders it is shown that the length of the superstring found by the greedy heuristic and that of the optimal superstring differ by at most the size of an optimal cycle cover of the strings. Since an optimal cycle cover of a set of strings is always at most as large as an optimal superstring (close a superstring into a single cycle), this result is stronger for the considered subclass of greedy orders than the classical greedy conjecture. * A new conditional linear inequality on strings -- the triple inequality -- is proven, which is essential for the main result just mentioned. * Finally, it is shown that the conditional inequalities used to establish the upper bound of 3.5 on the approximation factor (such as the Monge inequality) are inherently too weak to prove the greedy conjecture even for linear greedy orders; the new triple inequality is therefore also necessary. Lastly, it is shown that the system of conditional linear inequalities extended by the triple inequality is inherently too weak to prove the classical greedy conjecture for arbitrary greedy orders. With the analysis of queueing strategies in the adversarial queueing model, a case is also considered in which heuristics are employed due to application-specific requirements such as an online setting and locality. Packets are to be routed in a network in which every node has only limited information about the state of the network. Classes of queueing strategies are examined, investigating in particular on what information queueing strategies should base their local decisions in order to achieve a certain quality criterion. The results obtained here are: * Every queueing strategy that works without timestamps can be forced into an exponentially large queue and thus an exponentially large delay (in the diameter and the number of nodes of the network).
Previously, this was known only for specific prominent strategies. * A new technique for establishing the stability of queueing strategies without timekeeping is presented, the so-called layering cycles (Aufschichtungskreise). With their help, known stability proofs for prominent strategies can be unified and further stability results can be obtained. * For the large subclass of distance-based queueing strategies, a complete classification of all 1-stable and universally stable strategies is achieved.
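A minimal sketch of the greedy heuristic analyzed above: repeatedly merge the pair of strings with the largest overlap until one string remains. The quadratic overlap search is for illustration only; the example input is invented.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    # drop strings that are substrings of other input strings
    s = [x for x in strings if not any(x != y and x in y for y in strings)]
    while len(s) > 1:
        # pick the pair with maximum overlap and merge it
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(s)
                      for j, b in enumerate(s) if i != j)
        merged = s[i] + s[j][k:]
        s = [x for idx, x in enumerate(s) if idx not in (i, j)] + [merged]
    return s[0] if s else ""

print(greedy_superstring(["ate", "half", "lethal", "alpha", "alfalfa"]))
```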
In today's payment transactions, credit card payments play an increasingly decisive role. Along with the spread of this form of payment, misuse of this cashless means of payment is also increasing. To contain as far as possible the losses incurred by the credit card institute, attempts are made to detect fraudulent transactions during the authorization of the payment request. The goal of this diploma thesis is to determine to what extent it is possible to uncover illegal transactions among the set of authorization requests with the help of adaptive algorithms, using methods both from data mining and from the field of neural networks. Fraud analysis is complicated by the fact that the assessment of each transaction must be completed within fractions of a second in order to handle the high number of authorization requests and thus to optimize customer service for both cardholder and merchant. Furthermore, a large portion of the records available for the analysis are symbolic data, i.e., alphanumerically coded values that stand for various properties. Only few of the transaction data are analog in nature, exhibiting a linearity that allows "neighborhoods" between data points to be determined. A pure analysis based on neural networks is therefore ruled out; this problem led, among other things, to the approach pursued here. The analysis is based on known fraudulent transactions from a time interval of roughly one year which, due to their large number, cannot all be compared directly with incoming transactions, since a sequential comparison would take too much time. Moreover, a simple comparison would only detect already-known fraud; an abstraction of the insights gained from past fraud would not be possible. For this reason, these fraudulent transactions are generalized using data mining methods and thereby reduced to a minimum, as far as the reliability of the records permits. This is followed by an analysis of the analog data not considered up to this point, in order to extract the maximum information contained in the transaction data; for this, modern methods from the field of neural networks, so-called radial basis function networks, are used. Since a fraud analysis would be incomplete without a corresponding profile analysis, a profile evaluation and time-dependent analysis was finally implemented on the underlying data with the available means, following the methodology used so far. With the model implemented in this way, an attempt was made to classify behavior and transaction patterns in a general manner and to let them factor into the fraud decision. From the analysis methods presented, several classification models were developed that lead to good results on the simulation data. It can be shown that fraud detection is best performed by a combined application of symbolic and analog evaluation.
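A hedged sketch of a radial basis function network of the kind applied above to the analog transaction features: k-means centers, Gaussian activations, and a linear readout. All parameters and the use of scikit-learn are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class RBFNet:
    def __init__(self, n_centers=20, width=1.0):
        self.km = KMeans(n_clusters=n_centers, n_init=10, random_state=0)
        self.width = width
        self.readout = LogisticRegression(max_iter=1000)

    def _phi(self, X):
        # Gaussian activation of each sample with respect to each center
        d = np.linalg.norm(X[:, None, :] - self.km.cluster_centers_[None, :, :], axis=2)
        return np.exp(-(d ** 2) / (2 * self.width ** 2))

    def fit(self, X, y):
        self.km.fit(X)                     # place centers on the analog features
        self.readout.fit(self._phi(X), y)  # linear readout on RBF activations
        return self

    def predict_proba(self, X):
        return self.readout.predict_proba(self._phi(X))
```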
The biggest problem in creating MR (mixed reality) applications is that they are usually created through programming: an author needs specialized knowledge of MR technology and at least general programming skills to build an MR application. MR authoring systems, which currently exist on the market and are being developed in research, are meant to simplify this creation process. This was one reason why this thesis set out to examine to what extent the creation of MR applications is simplified by using MR authoring systems. A further main goal was the creation of a representative MR application, referred to in this thesis as the MR reference application. It should above all be able to serve as a template for further developments and be created on the basis of standardized process models such as the waterfall model. It was also important, within the scope of this work, to confirm that standardized process models are transferable to MR applications. To achieve these goals, many steps were followed, each of which can be regarded as a subgoal. The MR reference application created in this thesis was to be implemented with the help of an MR authoring system. To select the right authoring system, optional and mandatory requirements for MR authoring systems were defined in an analysis, which also identified functions such a system should provide. Offering a preview is one example of these functions, which can play an essential role in the creation of MR applications. The mandatory requirements are those that every software system should fulfill, while the optional ones pursue the goal of improving authoring systems. Based on the analysis, a comparison of well-known MR authoring systems was drawn, which identified AMIRE as an MR authoring system suitable for the goals of this work. For the MR reference application, which was to exhibit functions similar to other typical MR applications, the functions, use cases, and interface design were specified. This specification was carried out independently of the selected authoring system in order to focus, analogously to software engineering, on domain rather than technical aspects. To reach the goal, the MR reference application was realized with AMIRE; first, however, its specification had to be transferred to this authoring system. During this transfer the realization was considered from a technical perspective, i.e., various preparations were carried out, such as selecting the required components, planning the application logic, and dividing the application into different states. After the successful realization and exemplary documentation of the MR reference application, the work could be evaluated by comparing the achieved results with its goals. The results confirm that with AMIRE the development of an MR application is possible without specialized knowledge, and that this work achieved all of its goals within the defined time frame.
Context unification is a variant of second-order unification. It can also be seen as a generalization of string unification to tree unification. Currently it is not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable; however, the previously known algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: i. For every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle. ii. The exponent of periodicity is O(2^n), which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
We analyse a continued fraction algorithm (abbreviated CFA) for arbitrary dimension n, showing that it produces simultaneous diophantine approximations which are, up to the factor 2^((n+2)/4), best possible. Given a real vector x = (x_1, ..., x_{n-1}, 1) in R^n, this CFA generates a sequence of vectors (p_1^(k), ..., p_{n-1}^(k), q^(k)) in Z^n, k = 1, 2, ..., with increasing integers |q^(k)| satisfying, for i = 1, ..., n-1,
| x_i - p_i^(k)/q^(k) | <= 2^((n+2)/4) * sqrt(1+x_i^2) / |q^(k)|^(1+1/(n-1)).
By a theorem of Dirichlet this bound is best possible in that the exponent 1+1/(n-1) can in general not be increased.
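For orientation only, a sketch of the classical one-dimensional continued fraction algorithm (the case n = 2 in the notation above), whose convergents p/q satisfy the Dirichlet-type bound |x - p/q| <= 1/q^2; the paper's n-dimensional CFA is not reproduced here.

```python
from math import floor, sqrt

def convergents(x, steps=10):
    """Yield continued fraction convergents (p, q) of a real number x."""
    p_prev, q_prev, p, q = 1, 0, floor(x), 1
    yield p, q
    frac = x - floor(x)
    for _ in range(steps - 1):
        if frac == 0:
            return
        x = 1 / frac
        a = floor(x)
        frac = x - a
        p_prev, p = p, a * p + p_prev   # standard convergent recurrence
        q_prev, q = q, a * q + q_prev
        yield p, q

x = sqrt(2)
for p, q in convergents(x, 8):
    print(p, q, abs(x - p / q) * q * q)   # stays below 1, as Dirichlet predicts
```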
High-energy physics experiments aim to deepen our understanding of the fundamental structure of matter and the governing forces. One of the most challenging aspects of the design of new experiments is data management and event selection. The search for increasingly rare and intricate physics events asks for high-statistics measurements and sophisticated event analysis. With progressively complex event signatures, traditional hardware-based trigger systems reach the limits of realizable latency and complexity. The Compressed Baryonic Matter experiment (CBM) employs a novel approach for data readout and event selection to address these challenges. Self-triggered, free-streaming detectors push all data to a central compute cluster, called First-level Event Selector (FLES), for software-based event analysis and selection. While this concept solves many issues present in classical architectures, it also sets new challenges for the design of the detector readout systems and online event selection.
This thesis presents an efficient solution to the data management challenges presented by self-triggered, free-streaming particle detectors. The FLES must receive asynchronously streamed data from a heterogeneous detector setup at rates of up to 1 TB/s. The real-time processing environment implies that all components have to deliver high performance and reliability to record as much valuable data as possible. The thesis introduces a time-based data model to partition the input streams into containers of fixed length in experiment time for efficient data management. These containers provide all necessary metadata to enable generic, detector-subsystem-agnostic data distribution across the entire cluster. An analysis shows that the introduced data overhead is well below 1 % for a wide range of system parameters.
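A schematic sketch (not the FLES implementation) of the time-based data model described above: asynchronous input streams are partitioned into containers of fixed length in experiment time, so any node can process a container independently of the detector subsystem the data came from. The container length and tuple format are assumptions.

```python
from collections import defaultdict

CONTAINER_NS = 1_000_000  # fixed container length in experiment time (assumed)

def build_containers(stream):
    """stream: iterable of (timestamp_ns, subsystem_id, payload) tuples."""
    containers = defaultdict(list)
    for ts, subsystem, payload in stream:
        index = ts // CONTAINER_NS          # container index in experiment time
        containers[index].append((ts, subsystem, payload))
    # each container carries the metadata needed for generic distribution
    return [{"index": i, "start_ns": i * CONTAINER_NS, "items": items}
            for i, items in sorted(containers.items())]
```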
Furthermore, a concept and the implementation of a detector data input interface for the CBM FLES, optimized for resource-efficient data transport, are presented. The central element of the architecture is an FPGA-based PCIe extension card for the FLES entry nodes. The hardware designs developed in the thesis enable interfacing with a diverse set of detector systems. A custom, high-throughput DMA design structures data in a way that enables low-overhead access and efficient software processing. The ability to share the host DMA buffers with other devices, such as an InfiniBand HCA, allows for true zero-copy data distribution between the cluster nodes. The discussed FLES input interface is fully implemented and has already proven its reliability in production operation in various physics experiments.
We empirically investigate algorithms for solving Connected Components in the external memory model. In particular, we study whether the randomized O(Sort(E)) algorithm by Karger, Klein, and Tarjan can be implemented to compete with practically promising and simpler algorithms having only slightly worse theoretical cost, namely Borůvka’s algorithm and the algorithm by Sibeyn and collaborators. For all algorithms, we develop and test a number of tuning options. Our experiments are executed on a large set of different graph classes including random graphs, grids, geometric graphs, and hyperbolic graphs. Among our findings are: The Sibeyn algorithm is a very strong contender due to its simplicity and due to an added degree of freedom in its internal workings when used in the Connected Components setting. With the right tunings, the Karger-Klein-Tarjan algorithm can be implemented to be competitive in many cases. Higher graph density seems to benefit Karger-Klein-Tarjan relative to Sibeyn. Borůvka’s algorithm is not competitive with the two others.
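A compact in-memory Borůvka sketch for Connected Components, for orientation only; the experiments above concern external memory implementations, which this sketch does not model.

```python
def boruvka_cc(n, edges):
    """n vertices 0..n-1, edges as (u, v) pairs; returns component labels."""
    comp = list(range(n))

    def find(x):                       # path-halving union-find
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x

    changed = True
    while changed:                     # each round merges linked components
        changed = False
        choice = {}                    # one outgoing edge per component
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                choice.setdefault(ru, rv)
                choice.setdefault(rv, ru)
        for r, s in choice.items():
            if find(r) != find(s):
                comp[find(r)] = find(s)
                changed = True
    return [find(v) for v in range(n)]

print(boruvka_cc(5, [(0, 1), (1, 2), (3, 4)]))  # two components
```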
Channel routing is an NP-complete problem. Therefore, it is likely that there is no efficient algorithm solving this problem exactly. In this paper, we show that channel routing is a fixed-parameter tractable problem and that we can find a solution in linear time for a fixed channel width. We implemented our approach for the restricted layer model. The algorithm finds an optimal route for channels with up to 13 tracks within minutes, or up to 11 tracks within seconds. Such narrow channels occur, for example, as a leaf problem of hierarchical routers or within standard cell generators.
Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and composed of a high number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes utilized to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieve failure tolerance is to calculate the block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for the ever-growing distributed systems. The utilization of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map very well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version. This arithmetically intense algorithm is better suited to the wide SIMD units of currently available general-purpose processors, but also displays significant benefits when used with modern many-core accelerator devices (for instance, the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code which shows a significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtain the optimized generator matrices are elaborated and their implications are discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
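A sketch of polynomial (table-free) multiplication in GF(2^8), the kind of finite field arithmetic discussed above, reduced here modulo the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11b) as an illustrative choice; the thesis's SIMD/GPU formulation is not reproduced.

```python
def gf256_mul(a, b, poly=0x11b):
    """Multiply two GF(2^8) elements without lookup tables."""
    result = 0
    while b:
        if b & 1:                # add (XOR) the shifted copy of a
            result ^= a
        a <<= 1
        if a & 0x100:            # reduce modulo the field polynomial
            a ^= poly
        b >>= 1
    return result

assert gf256_mul(0x53, 0xCA) == 0x01   # a known inverse pair in the AES field
```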
A novel method for identifying the nature of QCD transitions in heavy-ion collision experiments is introduced. PointNet based Deep Learning (DL) models are developed to classify the equation of state (EoS) that drives the hydrodynamic evolution of the system created in Au-Au collisions at 10 AGeV. The DL models were trained and evaluated in different hypothetical experimental situations. A decreased performance is observed when more realistic experimental effects (acceptance cuts and decreased resolutions) are taken into account. It is shown that the performance can be improved by combining multiple events to make predictions. The PointNet based models trained on the reconstructed tracks of charged particles from the CBM detector simulation discriminate a crossover transition from a first-order phase transition with an accuracy of up to 99.8%. The models were subjected to several tests to evaluate the dependence of their performance on the centrality of the collisions and the physical parameters of the fluid dynamic simulations. The models are shown to work in a broad range of centralities (b = 0–7 fm); however, the performance improves for central collisions (b = 0–3 fm). There is a drop in performance when the model parameters lead to a reduced duration of the fluid dynamic evolution or when a smaller fraction of the medium undergoes the transition. These effects are due to the limitations of the underlying physics, and the DL models are shown to be superior in their discrimination performance in comparison to conventional mean observables.
We present an implementation of an interpreter LRPi for the call-by-need calculus LRP, based on a variant of Sestoft's abstract machine Mark 1, extended with an eager garbage collector. It is used as a tool for exact space usage analyses as a support for our investigations into space improvements of call-by-need calculi.
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem and then decided using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, and hence that D-matching is NP-complete.
We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed by first adding monadic IO-actions to the machine and then in a second step we add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF, w.r.t. may- and should-convergence.
Ambiguity and communication
(2009)
The ambiguity of a nondeterministic finite automaton (NFA) N for input size n is the maximal number of accepting computations of N for an input of size n. For all k, r ∈ N we construct languages L_{r,k} which can be recognized by NFAs with size k·poly(r) and ambiguity O(n^k), but L_{r,k} has only NFAs of exponential size if ambiguity o(n^k) is required. In particular, a hierarchy for polynomial ambiguity is obtained, solving a long-standing open problem (Ravikumar and Ibarra, 1989; Leung, 1998).
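A small sketch of the ambiguity notion used above: counting the accepting computations of an NFA on a given input by dynamic programming over (position, state) pairs. The automaton below is an invented example with two accepting runs on "ab".

```python
def count_accepting_runs(states, delta, start, accepting, word):
    """delta: dict mapping (state, symbol) -> set of successor states."""
    runs = {q: (1 if q == start else 0) for q in states}
    for symbol in word:
        nxt = {q: 0 for q in states}
        for q, count in runs.items():
            for r in delta.get((q, symbol), ()):
                nxt[r] += count          # one continuation per transition
        runs = nxt
    return sum(runs[q] for q in accepting)

delta = {("s", "a"): {"p", "q"}, ("p", "b"): {"f"}, ("q", "b"): {"f"}}
print(count_accepting_runs({"s", "p", "q", "f"}, delta, "s", {"f"}, "ab"))  # 2
```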
The subject of this work was the analysis of the complexity of cost and revenue accounting systems and its effect on the selection of suitable instruments for the computer-supported realization of these systems, with particular attention to previous approaches to database and knowledge-based support for cost and revenue accounting. The second chapter analyses the complexity of the cost and revenue accounting systems most widespread in Germany. The examination of the fundamental design features of cost and revenue accounting systems for their relevance to complexity showed that some features influence complexity very strongly, while others hardly do, among them features as essential to the business-economics discussion as the cost concept employed. The greatest influence on the complexity of cost and revenue accounting systems is exerted by the structuring of costs and revenues as well as by the types, methods, and contents of processing. A comparison of flexible standard costing (Grenzplankostenrechnung) after Kilger and Plaut, representative of predominantly single-purpose costing systems, and Riebel's direct costing (Einzelkostenrechnung) as a multi-purpose cost and revenue accounting system with respect to the complexity-relevant features revealed clear differences between these systems. While flexible standard costing exhibits polynomial space and function complexities of low degree (predominantly quadratic, and cubic only in internal activity allocation), exponential complexities occur at several decisive points in direct costing. The complexity analysis of these two systems shows a clear connection between versatile evaluability and the complexity of a system, which must be taken into account when assessing cost and revenue accounting systems. For the design of such systems this means a fundamental choice between systems of limited evaluability and low complexity and systems with greater evaluation versatility but markedly higher complexity. The complexity of cost and revenue accounting systems should, however, not be regarded as a consequence of the choice of an accounting system; it ultimately results from the complexity of an enterprise and its environment, which can be modeled at different levels of detail. Since these complexities will, if anything, increase in the future, a trend towards more universal and more complex systems is to be expected. The extension of flexible standard costing towards greater complexity as well as the development of newer approaches such as activity-based costing both confirm this trend. For the further investigation it is assumed that flexible standard costing and direct costing form the opposite ends of a complexity spectrum of cost and revenue accounting systems and therefore also bound the spectrum of requirements on the instruments for their computer-based implementation. From a number of newer developments in information technology, two concepts were therefore selected which are suitable for addressing different aspects of complexity: database systems for handling space complexity and knowledge-based systems for handling function complexity.
In the following, the experience gained in realizing database and knowledge-based systems for cost and revenue accounting is assessed from the viewpoint of the complexity of cost and revenue accounting systems. When considering database systems it must be taken into account that over time two different application types have emerged: conventional database applications, which correspond to the traditional paradigms of database systems, and newer database applications, which in part impose considerably higher requirements and thus necessitated the development of new database systems. Both cost and revenue accounting systems are fundamentally suitable as database applications, i.e., they justify the use of database systems for managing their data volumes. While flexible standard costing is to be counted among the conventional database applications, direct costing already exhibits essential features of newer database applications. In contrast to database systems, the requirements on knowledge-based systems and their properties are formulated very imprecisely, in part even contradictorily. On the basis of the usual property catalogues, cost and revenue accounting does not appear to be a typical knowledge-system application. Nevertheless, several knowledge-based systems for cost and revenue accounting problems (variance analysis, operating result analysis, determination of price floors, design-accompanying costing, and subproblems of activity-based costing) have already been realized, each of which fulfills some of the suitability criteria for knowledge-system applications. The discussed examples of knowledge-based systems within cost and revenue accounting are predominantly based on flexible standard costing. It can therefore be assumed that direct costing, owing to its higher complexity, contains further application problems for knowledge-based systems. Overall, however, the differences between flexible standard costing and direct costing with regard to the use of knowledge-based systems are considerably less pronounced than was the case for the use of database systems. Since both cost and revenue accounting systems are suitable as database applications and also present application problems for knowledge-based systems, the combination of knowledge-based and database systems must also be considered. The respective advantages and disadvantages of database and knowledge-based systems were therefore contrasted. The advantages of database systems lie at the levels closer to the machine, where provisions for data protection, data backup, smooth multi-user operation, and efficient execution of operations are made. The advantages of knowledge-based systems lie in the greater power of the problem-solving component, the knowledge-acquisition component, and the explanation component. A more recent example of cooperation between database and knowledge-based systems is the evaluation of a data warehouse, created specifically for such purposes, through data mining and other analysis systems. A data warehouse agrees in essential features with the base accounting records (Grundrechnung) of direct costing and shows that such base records are realizable on the basis of today's IT systems. Special analysis systems are necessary for evaluating a database of this size.
For standardized evaluations of a data warehouse, OLAP systems were developed, whose operations are generalizations of multidimensional contribution margin accounting. For evaluations that cannot be standardized, on the other hand, the use of knowledge-based systems is advisable, of which data mining provides an example. This combination of a database system with conventional and AI-based evaluations appears very well suited for use in cost and revenue accounting. The fourth chapter deals with approaches to structuring data and knowledge bases, which are called data models in database systems and knowledge representation techniques in knowledge-based systems. Following the subdivision of the third chapter, a distinction was made between conventional and newer data models and knowledge representation techniques. The examination of the relational model as a representative of the conventional data models showed that it is entirely sufficient for Grenzplankostenrechnung. Experience with realizing a Grundrechnung on the basis of the relational model, by contrast, has shown that its syntactic and semantic deficiencies force far-reaching simplifications in schema design, which in turn unnecessarily complicate the operations of the evaluation computations. From the multitude of semantic and object-oriented data models developed for newer database applications, a number of concepts have crystallized that, despite differences in detail, are common to most of these data models. With the help of these concepts, the problems that arose when using the relational model can be avoided. Essentially, therefore, almost all semantic and object-oriented design models are suitable for modeling a Grundrechnung. It is important, however, that the Grundrechnung is also realized with a database system based on one of these data models, since in a transformation to a relational data model essential design considerations, and with them the greater part of the advantage offered by semantic and object-oriented design models, are lost. Object-relational database systems appear best suited for realizing a Grundrechnung, since on the one hand they combine object-oriented concepts with powerful and convenient query languages, and on the other hand they are upward compatible with the widespread relational database systems. Since the object-oriented data models proved suitable for modeling a Grundrechnung, only object-oriented knowledge representation techniques were considered from the perspective of combining database and knowledge-based systems. There is extensive agreement between semantic and object-oriented data models on the one hand and object-oriented knowledge representation techniques, above all semantic networks and frames, on the other. Frame-based knowledge systems, for example, can therefore be implemented directly on top of object-oriented database systems. Meanwhile, object-oriented programming languages such as C++ or Smalltalk are also used to implement knowledge-based systems; of these, C++ appears the most suitable, since most object-oriented and object-relational database systems provide a C++ interface.
In conclusion, it can be stated that the object-orientation paradigm, which has exerted a substantial influence on design languages, data models, knowledge representation techniques, and programming languages, offers essential advantages for realizing the database-supported Grundrechnung of a multi-purpose cost and revenue accounting system such as Einzelkostenrechnung, as well as the evaluation computations built on top of it, some of which are realized as knowledge-based systems. Beyond the more adequate modeling of the structures, the use of object-oriented techniques for the design and implementation of all system parts yields a system that is as homogeneous as possible and that does not, on top of the inherent complexity, create further problems through unsuitable representation concepts or poor coordination.
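To make the cubic bound mentioned above concrete: the only cubic step identified for Grenzplankostenrechnung, the reciprocal allocation of internal service costs, amounts to solving a dense linear system, which costs O(n³) with Gaussian elimination. The following minimal Python sketch illustrates this; the cost-center numbers are invented toy data, not taken from the thesis.

```python
import numpy as np

# Reciprocal allocation of internal service costs: cost center i has
# primary costs p[i] and consumes a fraction A[i, j] of center j's output.
# The settled costs c satisfy c = p + A @ c, i.e. (I - A) @ c = p.
# Solving this dense system with Gaussian elimination is O(n^3), which is
# the cubic complexity referred to above.

def reciprocal_allocation(primary_costs, exchange_matrix):
    """Return the settled cost of each internal service cost center."""
    identity = np.eye(len(primary_costs))
    return np.linalg.solve(identity - exchange_matrix, primary_costs)

# Toy example: three service cost centers supplying each other.
p = np.array([1000.0, 500.0, 800.0])      # primary costs
A = np.array([[0.0, 0.2, 0.1],            # A[i, j]: share of j used by i
              [0.1, 0.0, 0.3],
              [0.2, 0.1, 0.0]])
print(reciprocal_allocation(p, A))
```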
Motivated by tools for automated deduction on functional programming languages and programs, we propose a formalism to symbolically represent $\alpha$-renamings for meta-expressions. The formalism is an extension of the usual higher-order meta-syntax which allows all valid ground instances of a meta-expression to be $\alpha$-renamed so as to fulfill the distinct variable convention. The renaming mechanism may be helpful for several reasoning tasks in deduction systems. We present our approach for a meta-language which uses higher-order abstract syntax and a meta-notation for recursive let-bindings, contexts, and environments. It is used in the LRSX Tool -- a tool for reasoning about the correctness of program transformations in higher-order program calculi with respect to their operational semantics. Besides introducing a formalism to represent symbolic $\alpha$-renamings, we present and analyze algorithms for simplification of $\alpha$-renamings, matching, rewriting, and checking $\alpha$-equivalence of symbolically $\alpha$-renamed meta-expressions.
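The formalism itself operates symbolically on meta-expressions; on ground expressions, the distinct variable convention it targets can be established by a straightforward recursive renaming. A minimal Python sketch of that ground-level operation follows, on a hypothetical tiny lambda syntax rather than the LRSX meta-language.

```python
import itertools

# Ground-level illustration of the distinct variable convention: rename all
# binders so that every bound variable is pairwise distinct (and distinct
# from the free ones). Expressions are encoded as tuples:
# ("var", x) | ("app", e1, e2) | ("lam", x, body).

fresh_names = (f"x{i}" for i in itertools.count())

def rename(expr, env=None):
    """Return an alpha-equivalent expression with pairwise distinct binders."""
    env = env or {}
    kind = expr[0]
    if kind == "var":
        return ("var", env.get(expr[1], expr[1]))   # free vars stay as-is
    if kind == "app":
        return ("app", rename(expr[1], env), rename(expr[2], env))
    if kind == "lam":
        fresh = next(fresh_names)                   # fresh binder per lambda
        return ("lam", fresh, rename(expr[2], {**env, expr[1]: fresh}))
    raise ValueError(f"unknown expression: {expr!r}")

# (\x. x (\x. x)) y  ->  binders become distinct: (\x0. x0 (\x1. x1)) y
e = ("app",
     ("lam", "x", ("app", ("var", "x"), ("lam", "x", ("var", "x")))),
     ("var", "y"))
print(rename(e))
```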
Magnetoencephalography (MEG) measures neural activity non-invasively and at an excellent temporal resolution. Since its invention (Cohen, 1968, 1972), MEG has proven a most valuable tool in neurocognitive (Salmelin et al., 1994) and clinical research (Stufflebeam et al., 2009; Van ’t Ent et al., 2003). MEG is able to measure rapid changes in electrophysiological neural signals related to sensory and cognitive processes. The magnetic fields measured outside the head by MEG directly reflect the cortical currents generated by the synchronised activity of thousands of neuronal sources. This distinguishes MEG from functional magnetic resonance imaging (fMRI), where measurements are only indirectly related to electrophysiological activity through neurovascular coupling...
Seminar: 10501 - Advances and Applications of Automata on Words and Trees. The aim of the seminar was to discuss and systematize the recent rapid progress in automata theory and to identify important directions for future research. To this end, the seminar brought together more than 40 researchers from automata theory and related fields of application. We had 19 talks of 30 minutes and 5 one-hour lectures, leaving ample room for discussions. In the following we describe the topics in more detail.
From 12.12.2010 to 17.12.2010, the Dagstuhl Seminar 10501 "Advances and Applications of Automata on Words and Trees" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
The hepatitis C virus (HCV) RNA replication cycle is a dynamic intracellular process occurring in three-dimensional space (3D), which is difficult both to capture experimentally and to visualize conceptually. HCV-generated replication factories are housed within virus-induced intracellular structures termed membranous webs (MW), which are derived from the endoplasmic reticulum (ER). Recently, we published 3D spatiotemporally resolved diffusion–reaction models of the HCV RNA replication cycle by means of surface partial differential equation (sPDE) descriptions. We distinguished between the basic components of the HCV RNA replication cycle, namely HCV RNA, non-structural viral proteins (NSPs), and a host factor. In particular, we evaluated the sPDE models upon realistically reconstructed intracellular compartments (ER/MW). In this paper, we propose a significant extension of the model based upon two additional features: different aggregate states of HCV RNA and NSPs, and population-dynamics-inspired diffusion and reaction coefficients instead of multilinear ones. The combination of both aspects enables realistic modeling of viral replication at all scales. Specifically, we describe a replication complex state consisting of HCV RNA together with a defined amount of NSPs. As a result of the combination of spatial resolution and different aggregate states, the new model mimics a cis requirement for HCV RNA replication. We used heuristic parameters for our simulations, which were run only on a subsection of the ER. Nevertheless, this was sufficient to allow the fitting of core aspects of virus reproduction, at least qualitatively. Our findings should help stimulate new model approaches and experimental directions for virology.
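For orientation, a generic single-species sPDE with a logistic (population-dynamics) reaction term has the form below; this is an illustrative template for the kind of coefficients the extension introduces, not the authors' exact multi-species system, and all symbols are generic.

```latex
% Generic surface reaction-diffusion equation (sPDE):
% u(x,t) is a species concentration on the ER surface \Gamma,
% \Delta_\Gamma the Laplace-Beltrami operator, D a diffusion coefficient,
% r a growth rate, K a carrying capacity (all symbols illustrative).
\partial_t u \;=\; D\,\Delta_\Gamma u \;+\; r\,u\left(1 - \frac{u}{K}\right)
\qquad \text{on } \Gamma \times (0, T]
```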
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
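The adequacy condition can be stated compactly. Writing $\le$ for the contextual preorder induced by may- and must-convergence (standard notation; the paper's own symbols may differ):

```latex
% A translation \tau : L_1 \to L_2 is adequate iff it reflects the
% contextual preorder \le (defined via may- and must-convergence in
% all program contexts):
\tau(p_1) \le_{L_2} \tau(p_2) \;\Longrightarrow\; p_1 \le_{L_1} p_2
% Reflection of the preorder then yields reflection of program
% equivalence: \tau(p_1) \sim \tau(p_2) \Rightarrow p_1 \sim p_2.
```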
In the present work, a clinical alarm system for septic shock patients was developed. Metric physiological variables were used for this purpose, since analyses have shown that metric data are better suited for alarm generation than symbolic data. The data from the last days of the intensive care stay were used to train the adaptive neuro-fuzzy system, since good classification performance was achieved in this period, in contrast to the first days. The resulting alarm histories provide the intensive care physician with reliable indications of particularly critical patients. This work makes it possible to replace the medical SOFA score, which is composed of 10 variables, with the simpler combination "systolic blood pressure / diastolic blood pressure / platelet count" at a performance that is at least as good. By adding further variables it is possible to surpass the performance of the SOFA score, even though the SOFA score already achieved the best classification performance among the tested scores. The generated rules were able to substantiate the classification decisions meaningfully. In contrast to automatic rule generation, physicians were not able to formulate similarly meaningful formal rules.
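As a rough illustration of the kind of fuzzy rule such a three-variable combination could yield, consider the sketch below; the membership breakpoints and the rule itself are invented for illustration and are not the thesis's learned rules.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def alarm_degree(sys_bp, dia_bp, platelets):
    """Degree to which 'patient critical' fires (illustrative breakpoints)."""
    low_sys = trapezoid(sys_bp, 70, 80, 90, 100)      # mmHg, invented
    low_dia = trapezoid(dia_bp, 40, 45, 55, 65)       # mmHg, invented
    low_plt = trapezoid(platelets, 20, 50, 100, 150)  # 10^3/microliter, invented
    # Rule: IF systolic low AND diastolic low AND platelets low THEN alarm
    return min(low_sys, low_dia, low_plt)             # min as fuzzy AND

print(alarm_degree(sys_bp=85, dia_bp=50, platelets=60))  # -> 1.0 (full alarm)
```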
This paper describes the problems and an adaptive solution for process control in the rubber industry. We show that the human and economic benefits of an adaptive solution for the approximation of process parameters are very attractive. The industrial problem is modeled by means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production, our method shows good results even when using only a few training samples.
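A minimal sketch of how such an approximation can be set up with a small multilayer perceptron follows; the feature names and data are synthetic placeholders, not the paper's process records or network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder process data: a few samples of (compound temperature,
# screw speed, line speed) -> target process parameter. In the paper's
# setting these would be measured extrusion records.
rng = np.random.default_rng(0)
X = rng.uniform([60, 10, 5], [110, 60, 30], size=(25, 3))  # few samples
y = 0.05 * X[:, 0] - 0.02 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 25)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[90, 30, 20]]))  # predicted parameter for a new setting
```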
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of these equations to the available data; an approach that learns the parameters systematically is therefore necessary. In this paper, a network is constructed for the small but important example of inflammation modeling, and different learning algorithms are proposed. It turned out that, due to the nonlinear dynamics, evolutionary approaches are necessary to fit the parameters to the sparse given data. Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence - ICTAI 2003. Keywords: model parameter adaptation, septic shock, coupled differential equations, genetic algorithm.
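The core idea, fitting ODE rate constants to sparse observations with an evolutionary search, can be sketched in a few lines; the toy model and data below are illustrative stand-ins, not the paper's inflammation network.

```python
import numpy as np

# Toy setting: fit the rate constants (a, b) of a small nonlinear system
#   x' = a*x - x*y,   y' = x*y - b*y
# to sparse observations using a simple (mu + lambda) evolution strategy.

def simulate(params, x0=1.0, y0=0.5, dt=0.01, steps=500):
    a, b = params
    x, y = x0, y0
    traj = []
    for i in range(steps):                        # explicit Euler integration
        x, y = x + dt * (a * x - x * y), y + dt * (x * y - b * y)
        if i % 100 == 0:
            traj.append(x)                        # sparse observation of x only
    return np.array(traj)

rng = np.random.default_rng(1)
data = simulate((1.2, 0.8))                       # synthetic "measurements"

def fitness(params):
    return -np.sum((simulate(params) - data) ** 2)  # negative squared error

pop = rng.uniform(0.1, 2.0, size=(20, 2))         # random initial parameters
for _ in range(100):
    pop = pop[np.argsort([fitness(p) for p in pop])[::-1]]   # best first
    pop[10:] = pop[:10] + rng.normal(0, 0.05, size=(10, 2))  # mutate elites
print(pop[0])                                     # should approach (1.2, 0.8)
```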
The Internet, the biggest human library ever assembled, keeps on growing. Although all kinds of information carriers (e.g. audio/video/hybrid file formats) are available, text-based documents dominate. It is estimated that about 80% of all information stored electronically worldwide exists in (or can be converted into) text form. More and more, documents of all kinds are generated by means of a text processing system and are therefore available electronically. Nowadays, many printed journals are also published online and may even cease to appear in print tomorrow. This development has many convincing advantages: the documents are available both faster (cf. prepress services) and cheaper, they can be searched more easily, their physical storage needs only a fraction of the space previously necessary, and the medium does not age. For most people, fast and easy access is the most interesting feature of the new age; computer-aided search for specific documents or Web pages becomes the basic tool for information-oriented work. But this tool has problems. The current keyword-based search machines available on the Internet are not really appropriate for such a task: either (way) too many documents matching the specified keywords are presented, or none at all. The problem lies in the fact that it is often very difficult to choose appropriate terms describing the desired topic in the first place. This contribution discusses current state-of-the-art techniques in content-based searching (along with common visualization/browsing approaches) and proposes a particular adaptive solution for intuitive Internet document navigation, which not only enables the user to provide full texts instead of manually selected keywords (if available), but also allows him/her to explore the whole database.
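The paper's adaptive navigation approach is not reproduced here; the sketch below only shows the basic idea of content-based ranking with a full text as the query, using TF-IDF vectors and cosine similarity on a placeholder corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Rank documents by similarity to a full text used as the query,
# instead of hand-picked keywords. Corpus and query are placeholders.
docs = [
    "neural networks learn document representations for retrieval",
    "keyword search engines match query terms against an index",
    "rubber extrusion process control in tire production",
]
query = "searching large text collections with learned representations"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)        # TF-IDF per document
query_vector = vectorizer.transform([query])        # same vocabulary
scores = cosine_similarity(query_vector, doc_vectors)[0]
ranking = scores.argsort()[::-1]                    # most similar first
print([(docs[i], round(scores[i], 3)) for i in ranking])
```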
Efficient algorithms for object recognition are crucial for new robotics and computer vision applications that demand real-time and online methods. Examples include autonomous systems, navigating robots, and autonomous driving. In this work, we focus on efficient semantic segmentation, which is the problem of labeling each pixel of an image with a semantic class.
Our aim is to speed up all parts of the semantic segmentation pipeline. We also aim at delivering a labeling solution within a time budget that can be decided on the fly. For this purpose, we analyze all the components of the semantic segmentation pipeline and identify the computational bottleneck of each. The components of the pipeline are over-segmenting the image into local regions, extracting features and classifying the local regions, and the final inference of the image labeling with semantic classes. We focus on each of these steps.
First, we introduce a new superpixel algorithm to over-segment the image. Our superpixel method runs in real-time and can deliver a solution at any time budget. Then, for feature extraction, we focus on the framework that computes descriptors and encodes them, followed by a pooling step. We see that the encoding step is the bottleneck, both for computational efficiency and for performance. We present a novel assignment-based encoding formulation that allows the design of a new, very efficient encoding. Finally, the image labeling output is obtained by modeling the dependencies with a Conditional Random Field (CRF). In semantic image segmentation, the computational cost of instantiating the potentials is much higher than that of MAP inference. We introduce Active MAP inference to select on the fly a subset of potentials to be instantiated in the energy function, leaving the rest unknown, and to estimate the MAP labeling from such an incomplete energy function.
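The thesis's own encoding is more refined; as a reference point, the simplest member of the assignment-based family (hard assignment to a codebook plus average pooling, i.e. a bag of visual words) looks like this, with a random placeholder codebook and descriptors.

```python
import numpy as np

# Minimal hard-assignment encoding with average pooling, the simplest
# instance of assignment-based encodings. Data are random placeholders.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 128))      # 64 codewords, 128-D descriptors
descriptors = rng.normal(size=(500, 128))  # descriptors of one local region

# Assign each descriptor to its nearest codeword ...
dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
assignment = dists.argmin(axis=1)

# ... and pool the one-hot assignments into a region-level histogram.
encoding = np.bincount(assignment, minlength=len(codebook)) / len(descriptors)
print(encoding.shape, encoding.sum())      # (64,) 1.0
```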
We perform experiments on all proposed methods for the different parts of the semantic segmentation pipeline. We show that our superpixel extraction achieves higher accuracy than the state-of-the-art on a standard superpixel benchmark while running in real-time. We test our feature encoding on standard image classification and segmentation benchmarks, and show that our method achieves results competitive with the state-of-the-art while requiring less time and memory. Finally, results on a semantic segmentation benchmark show that Active MAP inference achieves similar levels of accuracy with major efficiency gains.
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here, we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a formulation of the active efficient coding theory, which proposes that eye movements as well as stimulus encoding are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
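In typical active efficient coding models, both learning processes optimize a single coding-efficiency objective of roughly the following form; the symbols here are generic placeholders, not necessarily the paper's notation.

```latex
% Sparse coding of a binocular input patch x with basis \Phi and
% coefficients a; the reconstruction error
E(t) \;=\; \lVert x(t) - \Phi\, a(t) \rVert^2
% is minimized by both adaptations: the basis \Phi via gradient descent
% on E, and the vergence/accommodation policy via reinforcement
% learning with reward r(t) = -E(t).
```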
The interaction between Λ baryons and kaons/antikaons is a crucial ingredient for the strangeness S=0 and S=−2 sectors of the meson–baryon interaction at low energies. In particular, the ΛK̄ system might help in understanding the origin of states such as the Ξ(1620), whose nature and properties are still under debate. Experimental data on Λ−K and Λ−K̄ systems are scarce, leading to large uncertainties and tension between the available theoretical predictions constrained by such data. In this Letter we present measurements of Λ−K+ ⊕ Λ̄−K− and Λ−K− ⊕ Λ̄−K+ correlations obtained in the high-multiplicity triggered data sample in pp collisions at √s = 13 TeV recorded by ALICE at the LHC. The correlation function for both pairs is modeled using the Lednický–Lyuboshits analytical formula, and the corresponding scattering parameters are extracted. The Λ−K− ⊕ Λ̄−K+ correlations show the presence of several structures at relative momenta k* above 200 MeV/c, compatible with the Ω baryon and the Ξ(1690) and Ξ(1820) resonances decaying into Λ−K− pairs. The low-k* region in Λ−K− ⊕ Λ̄−K+ also exhibits the presence of the Ξ(1620) state, expected to couple strongly to the measured pair. The presented data give access to the ΛK+ and ΛK− strong interactions with unprecedented precision and deliver the first experimental observation of the Ξ(1620) decaying into ΛK−.
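For reference, the Lednický–Lyuboshits parametrization in its commonly used single-channel form, assuming a Gaussian source of radius r₀ (standard femtoscopy notation; the analysis details of this measurement may differ):

```latex
% Lednicky-Lyuboshits correlation function for a Gaussian source:
C(k^*) = 1
  + \frac{|f(k^*)|^2}{2 r_0^2}\left(1 - \frac{d_0}{2\sqrt{\pi}\, r_0}\right)
  + \frac{2\,\mathrm{Re}\, f(k^*)}{\sqrt{\pi}\, r_0}\, F_1(2 k^* r_0)
  - \frac{\mathrm{Im}\, f(k^*)}{r_0}\, F_2(2 k^* r_0),
% with the scattering amplitude in the effective-range expansion
f(k^*) = \left(\frac{1}{f_0} + \frac{1}{2} d_0\, k^{*2} - i k^*\right)^{-1},
\quad
F_1(z) = \frac{1}{z}\int_0^z e^{x^2 - z^2}\,dx,
\quad
F_2(z) = \frac{1 - e^{-z^2}}{z}.
% f_0: scattering length, d_0: effective range (the extracted parameters).
```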