Augmented reality (AR) is a technology that alters or extends the perception of the real environment by means of computer-generated sensory stimuli. To augment this "enriched reality", virtual information such as 3D objects, graphics, and videos is rendered in real time into images of the real environment. These augmentations help users carry out tasks in the real world by providing information that they could not perceive directly without AR. The goal is to give users the impression that the real environment and the virtual objects merge and coexist seamlessly. Numerous potential application areas exist for AR, but several problems have so far prevented the technology's widespread adoption. One obstacle to broad use is that creating AR applications places high programming demands on developers. To mitigate this, it is desirable to enable users without programming skills (authors) to develop AR applications. In addition, there are technological problems with the tracking methods that are essential for registering virtual objects. Furthermore, existing AR applications in general, and those created with author-oriented systems in particular, exhibit deficits in the authenticity of their renderings. Incorrect occlusions and unrealistic shadows of the virtual objects are the main causes of the loss of the impression of coexistence. Taking the tracking problems into account and based on analyses that determine the most important authenticity criteria, this thesis develops and presents a concept for the authentic integration of virtual objects into AR applications.
Based on this integration process, concepts for tools with graphical user interfaces are derived that enable authors to create AR applications with high rendering authenticity. On the one hand, AR applications created with these tools feature improved registration of the virtual objects. On the other hand, the tools provide solutions so that the virtual objects exhibit correct occlusions and have shadows and shading effects consistent with the actual lighting conditions of the real environment. All of these authoring tools are based on a principle presented in this thesis, in which authentic integration is achieved through easily understandable, low-complexity work steps and on the basis of an image sequence of the real target environment. The concepts of this thesis are validated by implementing the authoring tools, which shows that the concepts are technically feasible. The evaluation is based on a comparison against a requirements catalogue developed in this thesis and demonstrates the suitability of the integration process and of the authoring-tool concepts derived from it. The authoring tools are integrated into an existing, freely available AR authoring environment.
Learning platforms are e-learning systems whose core functionality is the management and distribution of learning materials over the World Wide Web. This thesis investigates how the quality of learning can be improved by tracking, analyzing, and visualizing learning activities in learning platforms. The starting point was to present information about learning activities to teachers and learners in a suitable way, so that they can draw conclusions and optimize learning processes on their own. Many learning platforms already follow this approach and therefore provide corresponding functionality.
Two key questions had to be answered:
1. What do learners and teachers need to know about past learning activities?
2. How should learning activities be presented in a suitable way?
These questions were answered by reviewing existing learning platforms (state of the art) and by interviewing experts. To answer the second question, general principles of data analysis and visualization were also drawn upon, as well as (to a small extent) analysis and visualization techniques from systems other than learning platforms. Particular attention was also paid to data privacy.
Based on the insights gained, a concept for an analysis and visualization system was then developed that improves on the state of the art in several respects.
Finally, parts of the concept were implemented as a prototype for the web-based software system LernBar, which provides a large part of the functionality of a learning platform. The implementation is intended to make it possible to evaluate the concept in practical use, which was not possible within the scope of this thesis.
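The tracking-and-aggregation idea described above can be illustrated with a minimal sketch. The event format and indicator names here are assumptions for illustration, not the LernBar data model:

```python
from collections import defaultdict

# Hypothetical tracked events: (learner, module, action, ISO timestamp);
# this format is an assumption, not the LernBar data model.
events = [
    ("alice", "unit-1", "page_view", "2024-01-10T10:00:00"),
    ("alice", "unit-1", "quiz_submit", "2024-01-10T10:20:00"),
    ("alice", "unit-2", "page_view", "2024-01-11T09:00:00"),
    ("bob",   "unit-1", "page_view", "2024-01-10T11:00:00"),
]

def activity_summary(events):
    """Aggregate raw events into simple per-learner indicators."""
    acc = defaultdict(lambda: {"actions": 0, "modules": set()})
    for learner, module, _action, _ts in events:
        acc[learner]["actions"] += 1
        acc[learner]["modules"].add(module)
    # Presentable form: counts only, suitable for a dashboard view.
    return {k: {"actions": v["actions"], "modules_touched": len(v["modules"])}
            for k, v in acc.items()}

print(activity_summary(events))
```

Indicators like these are what a visualization layer would then chart per learner or per module.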
The following guide explains various methods for accessing the resource management developed by the Text Technology Lab (AG Texttechnologie). The resource management is identical for all applications. The guide describes how to read out the resource management of the project "PHI Picturing Atlas". All operations are performed via RESTful calls. The API documentation can be found at http://phi.resources.hucompute.org.
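A RESTful call of the kind described above can be sketched as follows. Only the base URL comes from the guide; the endpoint path and response shape below are hypothetical examples, not part of the documented API:

```python
import json
from urllib.parse import urljoin

# Base URL from the guide; the endpoint path "/projects" below is a
# hypothetical example, not a documented route.
BASE = "http://phi.resources.hucompute.org"

def resource_url(base, path):
    """Build the request URL for a RESTful call against the resource management."""
    return urljoin(base + "/", path.lstrip("/"))

# A GET against such a URL (e.g. via urllib.request or the `requests`
# library) would return JSON; parsing an invented sample response:
sample_response = '{"project": "PHI Picturing Atlas", "resources": [{"id": 1}]}'
data = json.loads(sample_response)
print(resource_url(BASE, "/projects"), data["project"])
```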
In this diploma thesis we consider the security of the ring-based public-key cryptosystem NTRU, proposed in 1996 by J. Hoffstein, J. Pipher, and J. H. Silverman. The cryptosystem offers fast encryption and decryption in time O(n^2) with a small security parameter n. Its security rests on a polynomial factorization problem (PFP) in the polynomial ring Z_q[X]/(X^n - 1). Coppersmith and Shamir reduced the PFP to a shortest vector problem in the lattice L_cs. The new results of this thesis build on the lattice L_cs. We examine the drawbacks of L_cs and construct improved lattice bases for attacking the NTRU cryptosystem, exploiting the structure of the polynomial ring Z_q[X]/(X^n - 1) and of the secret keys. The new lattice bases increase the ratio between the length of the second-shortest and the length of the shortest lattice vector. Since we use approximation algorithms to find a shortest vector, this speeds up the attacks. We present several methods for reducing the dimension of the lattice bases. With the improved lattice attacks we obtain a cryptanalysis of NTRU at the proposed medium security level. If breaking a public key using the Coppersmith/Shamir basis takes one month, the combined use of the new lattice bases reduces the running time to about 5 hours on one machine, or about 1 hour 20 minutes on 4 machines in parallel. We expect the new methods to break NTRU at the high security level n = 167, although for this n only "weak" keys have been broken so far. Despite significant improvements, the experimental results indicate exponential running time as the security parameter n grows.
The running-time exponent can, however, be lowered, so that n must be chosen larger to achieve security against the new attacks. Even if the NTRU cryptosystem is not completely broken, it loses its greatest advantage over other public-key cryptosystems: efficient encryption and decryption with a small security parameter n.
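For orientation, the Coppersmith–Shamir lattice referred to above can be sketched using the standard construction from the literature; the notation here may differ from the thesis:

```latex
% NTRU public key h; the 2n-dimensional Coppersmith–Shamir lattice L_{cs}
% is generated by the rows of
B_{cs} =
\begin{pmatrix}
\lambda I_n & H \\
0 & q I_n
\end{pmatrix},
% where H is the circulant matrix of h in Z_q[X]/(X^n - 1) and \lambda is
% a balancing constant. The secret key pair (f, g), with f * h \equiv g
% \pmod{q}, yields the short vector (\lambda f, g) \in L_{cs}, so finding
% a shortest vector recovers the key.
```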
We introduce algorithms for lattice basis reduction that are improvements of the famous L3-algorithm. If a random L3-reduced lattice basis b_1, b_2, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients (µ_{i,j})_{1 <= j < i <= n} is uniformly distributed in [0,1)^{n(n-1)/2}, then pruned enumeration finds a shortest lattice vector with positive probability. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151, and by breaking Damgård's hash function.
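The subset-sum application can be made concrete with the standard lattice encoding (a Lagarias–Odlyzko-type construction; the details here are illustrative and may differ from the paper's variant):

```latex
% Subset sum: given weights a_1, \dots, a_n and target s, find
% x_i \in \{0,1\} with \sum_i a_i x_i = s. A solution appears as a short
% vector in the lattice spanned by the rows of
B =
\begin{pmatrix}
1      & 0      & \cdots & 0      & N a_1 \\
0      & 1      & \cdots & 0      & N a_2 \\
\vdots &        & \ddots &        & \vdots \\
0      & 0      & \cdots & 1      & N a_n \\
0      & 0      & \cdots & 0      & N s
\end{pmatrix}
% for a large constant N: the combination \sum_i x_i b_i - b_{n+1} equals
% (x_1, \dots, x_n, N(\sum_i a_i x_i - s)), which is short exactly when
% x solves the instance, and lattice reduction searches for such vectors.
```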
Assessing communicative accommodation in the context of large language models: a semiotic approach
(2023)
Recently, significant strides have been made in the ability of transformer-based chatbots to hold natural conversations. However, despite growing societal and scientific relevance, there are few frameworks that systematically derive what it means for a chatbot conversation to be natural. The present work approaches this question through the phenomenon of communicative accommodation/interactive alignment. While existing research suggests that humans adapt communicatively to technologies, the aim of this work is to explore the accommodation of AI chatbots to an interlocutor. Its research interest is twofold: firstly, the structural ability of the transformer architecture to support accommodative behavior is assessed using a frame constructed in accordance with existing accommodation theories.
This results in hypotheses to be tested empirically. Secondly, since effective accommodation produces the same outcomes regardless of technical implementation, a behavioral experiment is proposed. Existing quantifications of accommodation are reconciled, extended, and modified to apply them to non-human interlocutors. Thus, a measurement scheme is suggested which evaluates textual data from text-only, double-blind interactions between chatbots and humans, chatbots and chatbots, and humans and humans. Using the generated human-to-human convergence data as a reference, the degree of artificial accommodation can be evaluated. Accommodation, as a central facet of artificial interactivity, can thus be evaluated directly against its theoretical paradigm, i.e. human interaction. If subsequent examinations show that chatbots effectively do not accommodate, a new form of algorithmic bias may emerge from the aggregate accommodation towards chatbots but not towards humans; existing, hegemonic semantics could thus be cemented through chatbot learning. Meanwhile, the ability to accommodate effectively would render chatbots vastly more susceptible to misuse.
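One simple instance of the kind of quantification discussed above is turn-by-turn vocabulary overlap between interlocutors. This is a hypothetical lexical measure for illustration, not the measurement scheme proposed in the work itself:

```python
def lexical_convergence(turns_a, turns_b):
    """Jaccard overlap of the two speakers' vocabularies, per turn pair.

    A rising sequence suggests lexical accommodation; a flat or falling
    one suggests its absence. Purely illustrative.
    """
    scores = []
    for a, b in zip(turns_a, turns_b):
        va, vb = set(a.lower().split()), set(b.lower().split())
        scores.append(len(va & vb) / len(va | vb))
    return scores

human = ["i saw a lovely sunset", "the sunset colours were lovely"]
bot = ["that sounds nice", "lovely sunset colours indeed"]
print(lexical_convergence(human, bot))
```

Comparing such curves for human-to-human and human-to-chatbot dialogues is the kind of contrast the proposed experiment would draw.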
In computational linguistics, the automatic generation of scenes from text written in natural language has been an important research topic for many decades, with applications in art, education, and robotics. New technologies in the field of artificial intelligence (AI) enable new developments that simplify such generation, but they also encourage opaque internal decisions made by the model.
The goal of the proposed solution "ARES: Annotation von Relationen und Eigenschaften zur Szenengenerierung" (annotation of relations and properties for scene generation) is to design a modular system whose individual processes remain understandable to the user. In addition, it should be possible to feed new entities and relations, provided by the text analysis, into the three-dimensional scene generation without code being strictly necessary.
The focus is on the syntactically correct placement of elements in space. Semantic correctness, in contrast, can be improved through further manual adjustments, which are stored for later generations. Finally, the number of annotations required for rendering should remain as small as possible, and new scene-related annotations can be added with the implemented annotation tools.
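The mapping from annotated entities and relations to 3D placements can be sketched as follows. The entity names, relation vocabulary, and offsets are hypothetical, not ARES's actual data model:

```python
# Hypothetical relation vocabulary: each spatial relation maps to an
# offset applied to the landmark object's position (x, y, z).
RELATION_OFFSETS = {
    "on":          (0.0, 1.0, 0.0),
    "left_of":     (-1.5, 0.0, 0.0),
    "in_front_of": (0.0, 0.0, 1.5),
}

def place_entities(annotations):
    """Resolve annotated (entity, relation, landmark) triples to positions."""
    positions = {}
    for entity, relation, landmark in annotations:
        # Landmarks without a position yet are anchored at the origin.
        base = positions.setdefault(landmark, (0.0, 0.0, 0.0))
        dx, dy, dz = RELATION_OFFSETS[relation]
        positions[entity] = (base[0] + dx, base[1] + dy, base[2] + dz)
    return positions

scene = place_entities([("cup", "on", "table"), ("chair", "left_of", "table")])
print(scene)
```

Extending the relation vocabulary in a table like this, rather than in code, is the kind of no-code extensibility the abstract describes.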
The amyloid precursor protein (APP) was discovered in the 1980s as the precursor protein of the amyloid A4 peptide. The amyloid A4 peptide, also known as A-beta (Aβ), is the main constituent of the senile plaques implicated in Alzheimer's disease (AD). In association with the amyloid deposits, increasing impairments in learning and memory as well as the degeneration of neurons, especially in the hippocampal formation, are hallmarks of the pathogenesis of AD. Within the last decades, much effort has been expended on understanding the pathogenesis of AD. However, little is known about the physiological role of APP within the central nervous system (CNS). Allocating APP to the proteome of the highly dynamic presynaptic active zone (PAZ) identified APP as a novel player within this neuronal communication and signaling network. The analysis of the hippocampal PAZ proteome derived from APP-mutant mice demonstrates that APP is tightly embedded in the underlying protein network. Strikingly, APP deletion accounts for major dysregulation within the PAZ proteome network: Ca2+ homeostasis, neurotransmitter release, and mitochondrial function are affected, resembling the outcome during the pathogenesis of AD. The observed changes in protein abundance that occur in the absence of APP as well as in AD suggest that APP is a structural and functional regulator within the hippocampal PAZ proteome. In this review article, we introduce APP as an important player within the hippocampal PAZ proteome and outline the impact of APP deletion on individual PAZ proteome subcommunities.
It is well known that artificial neural nets can approximate any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture, the non-trivial task remains of determining the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) through the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution yields the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
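The instructive example from the abstract can be reproduced numerically. This is only a sketch of the setting, not the paper's information-theoretic optimization: the best uniform linear approximation of f(x) = x^2 on [0, 1] is x - 1/8 with maximal error 1/8, and quantizing its weights to few bits adds to that error:

```python
def quantize(v, bits, lo=-1.0, hi=1.0):
    """Round v to one of 2**bits levels uniformly spaced in [lo, hi]."""
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    return lo + round((v - lo) / step) * step

def max_error(slope, intercept, n=10001):
    """Maximal deviation of slope*x + intercept from x^2 on [0, 1]."""
    return max(abs(x * x - (slope * x + intercept))
               for x in (i / (n - 1) for i in range(n)))

err_exact = max_error(1.0, -0.125)   # Chebyshev optimum: error 1/8
err_4bit = max_error(quantize(1.0, 4), quantize(-0.125, 4))
print(err_exact, err_4bit)
```

The trade-off between the number of bits per weight and the achievable error is exactly the kind of system parameter the proposed principle optimizes.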
Given a real vector α = (α_1, ..., α_d) and a real number ε > 0, a good Diophantine approximation to α is a number Q such that ‖Qα mod Z‖_∞ ≤ ε, where ‖·‖_∞ denotes the sup-norm ‖x‖_∞ := max_{1 ≤ i ≤ d} |x_i| for x = (x_1, ..., x_d). Lagarias [12] proved the NP-completeness of the corresponding decision problem, i.e., given a vector α ∈ Q^d, a rational number ε > 0 and a number N ∈ N_+, decide whether there exists a number Q with 1 ≤ Q ≤ N and ‖Qα mod Z‖_∞ ≤ ε. We prove that, unless ...
Motivated by the question whether sound and expressive applicative similarities exist for program calculi with should-convergence, this paper investigates expressive applicative similarities for the untyped call-by-value lambda calculus extended with McCarthy's ambiguous choice operator amb. Soundness of the applicative similarities w.r.t. contextual equivalence based on may- and should-convergence is proved by adapting Howe's method to should-convergence. As usual for nondeterministic calculi, similarity is not complete w.r.t. contextual equivalence, which requires a rather complex counterexample as a witness. The call-by-value lambda calculus with the weaker nondeterministic construct erratic choice is also analyzed, and sound applicative similarities are provided. This justifies the expectation that sound and powerful similarities for should-convergence also exist for more expressive and call-by-need higher-order calculi.
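The two convergence notions can be made concrete on a toy representation of nondeterministic programs. This is a sketch of the definitions only, not the paper's calculus: a program either is a value, diverges, or makes an erratic choice between two subprograms:

```python
# Toy programs: ("val", v) is a value, ("bot",) diverges,
# ("choice", p, q) erratically picks one branch.
def may_converge(p):
    """Some evaluation path reaches a value."""
    tag = p[0]
    if tag == "val":
        return True
    if tag == "bot":
        return False
    return may_converge(p[1]) or may_converge(p[2])

def should_converge(p):
    """Every evaluation path reaches a value."""
    tag = p[0]
    if tag == "val":
        return True
    if tag == "bot":
        return False
    return should_converge(p[1]) and should_converge(p[2])

# A program that may converge but should not:
p = ("choice", ("val", 1), ("bot",))
print(may_converge(p), should_converge(p))
```

Contextual equivalence then compares programs by both observations in all program contexts, which is what the applicative similarities approximate.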
Visual perception has grown increasingly important in the robotics domain during the last decades. Mobile robots have to localize themselves in known environments and carry out complex navigation tasks. This thesis presents an appearance-based or view-based approach to robot self-localization and robot navigation using holistic, spherical views obtained by cameras with large fields of view. For view-based methods, it is crucial to have a compressed image representation in which different views can be stored and compared efficiently. Our approach relies on the spherical Fourier transform, which transforms a signal defined on the sphere into a small set of coefficients, approximating the original signal by a weighted sum of orthonormal basis functions, the so-called spherical harmonics. The truncated low-order expansion of the image signal allows input images to be compared efficiently, and the mathematical properties of spherical harmonics also allow the rotation between two views to be estimated, even in 3D. Since no geometrical measurements need to be made, modest quality of the vision system is sufficient. All experiments shown in this thesis are based purely on visual information to show the applicability of the approach. The research presented on robot self-localization focused on demonstrating the usability of the compressed spherical harmonics representation to solve the well-known kidnapped-robot problem. To address this problem, the basic idea is to compare the current view to a set of images from a known environment to obtain a likelihood of robot positions. To localize the robot, one could choose the most probable position from the likelihood map; however, it is more beneficial to apply standard methods to integrate information over time while the robot moves, that is, particle or Kalman filters. The first step was to design a fast expansion method to obtain coefficient vectors directly in image space.
This was achieved by back-projecting basis functions onto the input image. The next steps were to develop a dissimilarity measure, an estimator for rotations between coefficient vectors, and a rotation-invariant dissimilarity measure, all of them based purely on the compact signal representation. With all these techniques at hand, generating likelihood maps is straightforward, but first experiments indicated a strong dependence on illumination conditions. This is obviously a challenge for all holistic methods, in particular for a spherical harmonics approach, since local changes usually affect every single element of the coefficient vector. To cope with illumination changes, we investigated preprocessing steps leading to feature images (e.g. edge images, depth images), which bring together our holistic approach and classical feature-based methods. Furthermore, we concentrated on building a statistical model for typical changes of the coefficient vectors in the presence of illumination changes. This task is more demanding but leads to even better results. The second major topic of this thesis is appearance-based robot navigation. I present a view-based approach called Optical Rails (ORails), which leads a robot along a prerecorded track. The robot navigates in a network of known locations denoted as waypoints. At each waypoint, we store a compressed view representation. A visual servoing method is used to reach the current target waypoint based on its appearance and the current camera image. Navigating in a network of views is achieved by reaching a sequence of stopover locations, one after another. The main contribution of this work is a model which allows the best driving direction of the robot to be deduced purely from the coefficient vectors of the current and the target image. It is based on image registration, as in the classical Lucas-Kanade method, but has been transferred to the spectral domain, which allows for a great speedup.
ORails also includes a waypoint selection strategy and a module for steering our nonholonomic robot. As for our self-localization algorithm, dependence on illumination changes is also problematic in ORails. Furthermore, occlusions have to be handled for ORails to work properly. I present a solution based on the optimal expansion, which is able to deal with incomplete image signals. To handle dynamic occlusions, i.e. objects appearing in an arbitrary region of the image, we use the linearity of the expansion process and cut the image into segments. These segments can be treated separately, and finally we merge the results. At this point, we can decide to disregard certain segments. Slicing the view allows for local illumination compensation, which is inherently non-robust if applied to the whole view. In conclusion, this approach addresses the most important criticisms of holistic view-based approaches, namely occlusions and illumination changes, and consequently improves the performance of Optical Rails.
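The core idea of comparing views via a truncated spherical harmonics expansion can be sketched in a few lines. This is a minimal illustration with a hand-coded basis up to degree l = 1 and plain midpoint quadrature, not the thesis's fast image-space expansion:

```python
import math

# Real spherical harmonics up to degree l = 1 (orthonormal on the sphere).
def sh_basis(theta, phi):
    """theta: polar angle in [0, pi]; phi: azimuth in [0, 2*pi)."""
    c = math.sqrt(3.0 / (4.0 * math.pi))
    return [
        0.5 * math.sqrt(1.0 / math.pi),          # Y_0^0
        c * math.sin(theta) * math.sin(phi),     # Y_1^-1
        c * math.cos(theta),                     # Y_1^0
        c * math.sin(theta) * math.cos(phi),     # Y_1^1
    ]

def expand(signal, n_theta=64, n_phi=128):
    """Low-order SH coefficients of a spherical signal by quadrature."""
    coeffs = [0.0] * 4
    dt, dp = math.pi / n_theta, 2.0 * math.pi / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        w = math.sin(theta) * dt * dp        # surface element on the sphere
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            f = signal(theta, phi)
            for k, y in enumerate(sh_basis(theta, phi)):
                coeffs[k] += f * y * w
    return coeffs

def dissimilarity(c1, c2):
    """Euclidean distance between coefficient vectors of two views."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

c = expand(lambda theta, phi: 1.0)   # a constant "view"
print(c, dissimilarity(c, [0.0] * 4))
```

A real system would truncate at a higher degree and, as described above, derive rotation estimates and rotation-invariant distances from the same coefficient vectors.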
Software updates are a critical success factor in mobile app ecosystems. Through publishing regular updates, platform providers enhance their operating systems for the benefit of both end users and third-party developers. It is also a way of attracting new customers. However, this platform evolution poses the risk of inadvertently introducing software problems, which can severely disturb the ecosystem's balance by compromising its foundational technologies. So far, little to no research has addressed this issue from a user-centered perspective. The thesis at hand draws on IS post-adoption literature to investigate the potential negative influences of operating system updates on mobile app users. The release of Apple's iOS 13 update serves as the research object. Based on over half a million user reviews from the App Store, data-mining techniques are applied to study the impact of the new platform version. The results show that iOS 13 caused complications with a large number of popular apps, leading to a significant decline in user ratings and an uptrend in negative sentiment. Feature requests, functional complaints, and device compatibility are identified as the three major issue categories. These issue types are compared in terms of their quantifiable negative effect on users' continuance intention. In essence, the findings contribute to IS research on post-adoption behavior and provide guidance to ecosystem participants in dealing with update-induced platform issues.
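The categorization step described above can be illustrated with a minimal keyword-based classifier. The three category names come from the thesis; the keyword lists and reviews are invented examples, and the thesis itself applies data-mining techniques rather than fixed keyword lists:

```python
# Issue categories identified in the thesis; keyword lists are illustrative.
CATEGORIES = {
    "feature_request": ("please add", "would be great", "wish"),
    "functional_complaint": ("crash", "broken", "doesn't work", "bug"),
    "device_compatibility": ("ipad", "iphone", "not compatible", "ios 13"),
}

def categorize(review):
    """Return all issue categories whose keywords occur in the review."""
    text = review.lower()
    return [cat for cat, keys in CATEGORIES.items()
            if any(k in text for k in keys)]

reviews = [
    "App keeps crashing since the iOS 13 update, totally broken",
    "Please add a dark mode",
]
print([categorize(r) for r in reviews])
```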
Synaptic release sites are characterized by exocytosis-competent synaptic vesicles tightly anchored to the presynaptic active zone (PAZ), whose proteome orchestrates the fast signaling events involved in the synaptic vesicle cycle and plasticity. Allocation of the amyloid precursor protein (APP) to the PAZ proteome implicated a functional impact of APP on neuronal communication. In this study, we combined state-of-the-art proteomics, electrophysiology, and bioinformatics to address protein abundance and functional changes at the native hippocampal PAZ in young and old APP-KO mice. We evaluated whether APP deletion has an impact on the metabolic activity of presynaptic mitochondria. Furthermore, we quantified differences in the phosphorylation status after long-term potentiation (LTP) induction at the purified native PAZ. We observed an increase in the phosphorylation of the signaling enzyme calmodulin-dependent kinase II (CaMKII) only in old APP-KO mice. During aging, APP deletion is accompanied by a severe decrease in metabolic activity and hyperphosphorylation of CaMKII. This attributes an essential functional role to APP at the hippocampal PAZ and suggests putative molecular mechanisms underlying the age-dependent impairments in learning and memory in APP-KO mice.
Alternative polyadenylation (APA) is a widespread mechanism that contributes to the sophisticated dynamics of gene regulation. Approximately 50% of all protein-coding human genes harbor multiple polyadenylation (PA) sites; their selective and combinatorial use gives rise to transcript variants with differing lengths of their 3' untranslated region (3'UTR). Shortened variants escape UTR-mediated regulation by microRNAs (miRNAs), especially in cancer, where global 3'UTR shortening accelerates disease progression, dedifferentiation, and proliferation. Here we present APADB, a database of vertebrate PA sites determined by 3' end sequencing, using massive analysis of complementary DNA ends. APADB provides (A)PA sites for coding and non-coding transcripts of human, mouse, and chicken genes. For human and mouse, several tissue types, including different cancer specimens, are available. APADB records the loss of predicted miRNA binding sites and visualizes the next-generation sequencing reads that support each PA site in a genome browser. The database tables can be browsed by organism and tissue, or alternatively searched for a gene of interest. APADB is the largest database of APA in human, chicken, and mouse. The stored information provides experimental evidence for thousands of PA sites and APA events. APADB combines 3' end sequencing data with prediction algorithms for miRNA binding sites, allowing these prediction algorithms to be further improved. Current databases lack correct information about 3'UTR lengths, especially for chicken, and APADB provides the necessary information to close this gap. Database URL: http://tools.genxpro.net/apadb/
This thesis presented the implementation of a JMX-compliant management infrastructure for the agent system AMETAS. Building on this, control mechanisms for mobile agents in AMETAS were examined in the context of fault management, and a solution for locating AMETAS agents was designed and implemented. The essential background for AMETAS management is as follows: considering application and infrastructure management with regard to the management hierarchy puts the openness and interoperability of the intended management solution in the foreground. These properties allow the integration of management solutions already existing in an enterprise; the goal is cost-effective and efficient management. A management architecture is described and modeled in terms of information-, organization-, communication-, and function-related aspects. Based on these aspects, CORBA, DMTF, WBEM, and JMX were analyzed and their suitability for AMETAS management assessed. Besides the general criteria, their submodels, their support for decentralized and dynamic management, and their ability to integrate with AMETAS were central points. It turns out that JMX offers the best options for AMETAS management. The OSI functional model classifies management tasks and functions into five areas, often referred to as FCAPS: fault, configuration, accounting, performance, and security management. This classification is orthogonal to any other and provides a suitable framework for dividing up management tasks and functions. The AMETAS management recommended in this thesis follows the OSI model with respect to the division of management tasks. JMX provides powerful tools for instrumenting all kinds of resources, and its Java foundation is a substantial simplification for the agent system.
The open architecture of JMX enables the cooperation of AMETAS management with other management standards. AMETAS management exploits the advantages of mobile agents, particularly in configuration and fault management. The following properties characterize AMETAS management: 1) use of the agent infrastructure for management itself, which is implemented as an AMETAS service and can use all capabilities and services of the agent infrastructure; 2) use of AMETAS agents and services as management tools; 3) self-management of the system, for which the management service is equipped with sufficient intelligence: it exploits the mechanisms of the agent infrastructure and performs various management tasks autonomously, with the AMETAS event system playing an important role. The analysis of the control mechanisms of MASIF, Aglets Workbench, and Mole with regard to their suitability for locating agents in AMETAS yields the following result: the examined approaches are partly generally applicable. Non-deterministic approaches such as advertising and the energy concept are distinguished from those that record specific traces of agents in a suitable form. In this respect, the path concept turned out to be interesting: here, information about an agent's migration path can be stored in a suitable manner, with or without a time limit. Another alternative is the registration method, in which an agent is registered at a central location, storing the agent's unique identity and the place where it currently resides. Against the background of this analysis, a variant of the path concept is recommended as the basis for locating AMETAS agents: the agents' traces are recorded by a management service.
To locate a specific agent or a group of agents, the decentrally stored information is evaluated within a consistent cut (snapshot). The snapshot method is recommended for locating agents in AMETAS according to the properties required of a localization mechanism at the beginning of this thesis: it allows reliable localization of the agents sought while respecting their autonomy. The cost-benefit ratio is favorable, since unnecessary data and agent traffic is avoided, as is the maintenance of large, centralized databases.
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a pT region inaccessible to direct jet identification. In these measurements, pseudorapidity (Δη) and azimuthal (Δφ) differences are used to extract the shape of the near-side peak formed by particles associated with a higher-pT trigger particle (1 < pT,trig < 8 GeV/c). A combined fit of the near-side peak and long-range correlations is applied to the data, allowing the extraction of the centrality evolution of the peak shape in Pb-Pb collisions at √sNN = 2.76 TeV. A significant broadening of the peak in the Δη direction at low pT is found from peripheral to central collisions, which vanishes above 4 GeV/c, while in the Δφ direction the peak is almost independent of centrality. For the 10% most central collisions and 1 < pT,assoc < 2 GeV/c, 1 < pT,trig < 3 GeV/c, a novel feature is observed: a depletion develops around the centre of the peak. The results are compared to pp collisions at the same centre-of-mass energy and to AMPT model simulations. The comparison to the investigated models suggests that the broadening and the development of the depletion are connected to the strength of radial and longitudinal flow.
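The peak-shape extraction can be illustrated in simplified form. This is a sketch using second moments of synthetic pair data; the analysis itself uses a combined fit of the peak and the long-range correlations, not plain moments:

```python
import math
import random

def peak_widths(pairs):
    """RMS widths of the near-side peak in the deta and dphi directions."""
    n = len(pairs)
    m_eta = sum(d for d, _ in pairs) / n
    m_phi = sum(p for _, p in pairs) / n
    s_eta = math.sqrt(sum((d - m_eta) ** 2 for d, _ in pairs) / n)
    s_phi = math.sqrt(sum((p - m_phi) ** 2 for _, p in pairs) / n)
    return s_eta, s_phi

random.seed(1)
# Synthetic near-side peak, broader in deta than in dphi, as reported
# for central collisions at low pT.
pairs = [(random.gauss(0.0, 0.6), random.gauss(0.0, 0.3)) for _ in range(20000)]
s_eta, s_phi = peak_widths(pairs)
print(s_eta, s_phi)
```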
This paper describes work on the morphological and syntactic annotation of Sumerian cuneiform as a model for low resource languages in general. Cuneiform texts are invaluable sources for the study of history, languages, economy, and cultures of Ancient Mesopotamia and its surrounding regions. Assyriology, the discipline dedicated to their study, has vast research potential, but lacks the modern means for computational processing and analysis. Our project, Machine Translation and Automated Analysis of Cuneiform Languages, aims to fill this gap by bringing together corpus data, lexical data, linguistic annotations and object metadata. The project’s main goal is to build a pipeline for machine translation and annotation of Sumerian Ur III administrative texts. The rich and structured data is then to be made accessible in the form of (Linguistic) Linked Open Data (LLOD), which should open them to a larger research community. Our contribution is two-fold: in terms of language technology, our work represents the first attempt to develop an integrative infrastructure for the annotation of morphology and syntax on the basis of RDF technologies and LLOD resources. With respect to Assyriology, we work towards producing the first syntactically annotated corpus of Sumerian.
The elliptic (v2), triangular (v3), and quadrangular (v4) flow coefficients of π±, K±, p+p̄, Λ+Λ̄, K0S, and the ϕ-meson are measured in Pb-Pb collisions at √sNN = 5.02 TeV. Results obtained with the scalar product method are reported for the rapidity range |y| < 0.5 as a function of transverse momentum, pT, at different collision centrality intervals between 0–70%, including ultra-central (0–1%) collisions for π±, K±, and p+p̄. For pT < 3 GeV/c, the flow coefficients exhibit a particle mass dependence. At intermediate transverse momenta (3 < pT < 8–10 GeV/c), particles show an approximate grouping according to their type (i.e., mesons and baryons). The ϕ-meson v2, which tests both particle mass dependence and type scaling, follows p+p̄ v2 at low pT and π± v2 at intermediate pT. The evolution of the shape of vn(pT) as a function of centrality and harmonic number n is studied for the various particle species. Flow coefficients of π±, K±, and p+p̄ for pT < 3 GeV/c are compared to iEBE-VISHNU and MUSIC hydrodynamical calculations coupled to a hadronic cascade model (UrQMD). The iEBE-VISHNU calculations describe the results fairly well for pT < 2.5 GeV/c, while MUSIC calculations reproduce the measurements for pT < 1 GeV/c. A comparison to vn coefficients measured in Pb-Pb collisions at √sNN = 2.76 TeV is also provided.
Measurements of elliptic (v2) and triangular (v3) flow coefficients of π±, K±, p+p̄, K0S, and Λ+Λ̄ obtained with the scalar product method in Xe-Xe collisions at √sNN = 5.44 TeV are presented. The results are obtained in the rapidity range |y| < 0.5 and reported as a function of transverse momentum, pT, for several collision centrality classes. The flow coefficients exhibit a particle mass dependence for pT < 3 GeV/c, while a grouping according to particle type (i.e., meson and baryon) is found at intermediate transverse momenta (3 < pT < 8 GeV/c). The magnitude of the baryon v2 is larger than that of mesons up to pT = 6 GeV/c. The centrality dependence of the shape evolution of the pT-differential v2 is studied for the various hadron species. The v2 coefficients of π±, K±, and p+p̄ are reproduced by MUSIC hydrodynamic calculations coupled to a hadronic cascade model (UrQMD) for pT < 1 GeV/c. A comparison with vn measurements in the corresponding centrality intervals in Pb-Pb collisions at √sNN = 5.02 TeV yields an enhanced v2 in central collisions and a diminished value in semicentral collisions.
We report the first results of elliptic (v2), triangular (v3) and quadrangular (v4) flow of charged particles in Pb-Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the CERN Large Hadron Collider. The measurements are performed in the central pseudorapidity region |η| < 0.8 and for the transverse momentum range 0.2 < pT < 5 GeV/c. The anisotropic flow is measured using two-particle correlations with a pseudorapidity gap greater than one unit and with the multi-particle cumulant method. Compared to results from Pb-Pb collisions at √sNN = 2.76 TeV, the anisotropic flow coefficients v2, v3 and v4 are found to increase by (3.0±0.6)%, (4.3±1.4)% and (10.2±3.8)%, respectively, in the centrality range 0–50%. This increase can be attributed mostly to an increase of the average transverse momentum between the two energies. The measurements are found to be compatible with hydrodynamic model calculations. This comparison provides a unique opportunity to test the validity of the hydrodynamic picture and the power to further discriminate between various possibilities for the temperature dependence of the shear viscosity to entropy density ratio of the matter produced in heavy-ion collisions at the highest energies.
The elliptic, v2, triangular, v3, and quadrangular, v4, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at √sNN=2.76 TeV with the ALICE detector at the Large Hadron Collider. Results obtained with the event plane and four-particle cumulant methods are reported for the pseudo-rapidity range |η|<0.8 at different collision centralities and as a function of transverse momentum, pT, out to pT=20 GeV/c. The observed non-zero elliptic and triangular flow depends only weakly on transverse momentum for pT>8 GeV/c. The small pT dependence of the difference between elliptic flow results obtained from the event plane and four-particle cumulant methods suggests a common origin of flow fluctuations up to pT=8 GeV/c. The magnitude of the (anti-)proton elliptic and triangular flow is larger than that of pions out to at least pT=8 GeV/c indicating that the particle type dependence persists out to high pT.
Anisotropic flow and flow fluctuations of identified hadrons in Pb–Pb collisions at √sNN = 5.02 TeV
(2023)
The first measurements of elliptic flow of π±, K±, p+p̄, K0S, Λ+Λ̄, ϕ, Ξ−+Ξ̄+, and Ω−+Ω̄+ using multiparticle cumulants in Pb-Pb collisions at √sNN = 5.02 TeV are presented. Results obtained with two-particle (v2{2}) and four-particle cumulants (v2{4}) are shown as a function of transverse momentum, pT, for various collision centrality intervals. Combining the data for both v2{2} and v2{4} also allows us to report the first measurements of the mean elliptic flow, elliptic flow fluctuations, and relative elliptic flow fluctuations for various hadron species. These observables probe the event-by-event eccentricity fluctuations in the initial state and the contributions from the dynamic evolution of the expanding quark-gluon plasma. The characteristic features observed in previous pT-differential anisotropic flow measurements for identified hadrons with two-particle correlations, namely the mass ordering at low pT and the approximate scaling with the number of constituent quarks at intermediate pT, are similarly present in the four-particle correlations and the combinations of v2{2} and v2{4}. In addition, a particle species dependence of flow fluctuations is observed that could indicate a significant contribution from final state hadronic interactions. The comparison between experimental measurements and CoLBT model calculations, which combine the various physics processes of hydrodynamics, quark coalescence, and jet fragmentation, illustrates their importance over a wide pT range.
We consider the lattice-based cryptosystem of Goldreich, Goldwasser, and Halevi (GGH) [11], proposed at Crypto '97. The authors published challenges for the security parameters 200, 250, 300, 350, and 400 [12]. Each challenge consists of the public key and a ciphertext. For the attack, we develop numerically stable lattice reduction algorithms that make it possible to attack the system in these dimensions. Methods for orthogonalization, the so-called Householder reflections and Givens rotations, are treated, and a practical floating-point version of the LLL algorithm of Lenstra, Lenstra, and Lovász [16] is given. We develop and analyze the LLL block algorithm, which organizes the lattice reduction in blocks. The floating-point version of the LLL block algorithm is applied experimentally to the GGH scheme and compared with LLL reduction in dimensions 100 to 400. Besides its better numerical stability, the LLL block reduction is 10 to 18 times faster than ordinary LLL reduction. The GGH cryptosystem was also attacked by Nguyen [22], and the original messages were reconstructed up to dimension 350. We present further attacks on the cryptosystem. It turns out that the public parameters can be used for successful attacks. The private key in dimension 200 is reconstructed after about 10 hours, and ciphertext attacks are successful up to dimension 300.
An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately in an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
Monitoring is an indispensable tool for the operation of any large installation of grid or cluster computing, be it high energy physics or elsewhere. Usually, monitoring is configured to collect a small amount of data, just enough to enable detection of abnormal conditions. Once detected, the abnormal condition is handled by gathering all information from the affected components. This data is processed by querying it in a manner similar to a database.
This contribution shows how the metaphor of a debugger (for software applications) can be transferred to a compute cluster. The concepts of variables, assertions and breakpoints that are used in debugging can be applied to monitoring by defining variables as the quantities recorded by monitoring and breakpoints as invariants formulated via these variables. It is found that embedding fragments of a data extracting and reporting tool such as the UNIX tool awk facilitates concise notations for commonly used variables, since tools like awk are designed to process large event streams (in textual representations) with bounded memory. A functional notation, similar to both the pipe notation used in the UNIX shell and the point-free style used in functional programming, simplifies the combination of variables that commonly occurs when formulating breakpoints.
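The debugger metaphor above can be made concrete with a minimal sketch: a "variable" is a function over the textual monitoring event stream (awk-style field extraction), and a "breakpoint" is an invariant over such variables. The function names and the event format below are illustrative assumptions, not taken from the contribution itself.

```python
# Hypothetical sketch of the debugger metaphor for cluster monitoring:
# variables extract quantities from a textual event stream, breakpoints
# are invariants formulated over those variables.

def load_average(events):
    """Variable: latest load average per host, awk-style field extraction."""
    latest = {}
    for line in events:
        fields = line.split()          # like awk's $1, $2, ...
        host, load = fields[0], float(fields[1])
        latest[host] = load            # bounded memory: one value per host
    return latest

def breakpoint_hit(variables, invariant):
    """Breakpoint: report hosts whose monitored state violates the invariant."""
    return [host for host, value in variables.items() if not invariant(value)]

# Usage: a small event stream in textual representation.
events = ["node01 0.7", "node02 5.3", "node01 1.1"]
violations = breakpoint_hit(load_average(events), lambda load: load < 4.0)
# violations == ["node02"]
```

The composition of `load_average` with `breakpoint_hit` mirrors the pipe-like, point-free combination of variables mentioned above.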
The economic success of the World Wide Web makes it a highly competitive environment for web businesses. For this reason, it is crucial for web business owners to learn what their customers want. This thesis provides a conceptual framework and an implementation of a system that helps to better understand the behavior and potential interests of web site visitors by accounting for both explicit and implicit feedback. This thesis is divided into two parts.
The first part is rooted in computer science and information systems and uses graph theory and an extended click-stream analysis to define a framework and a system tool that is useful for analyzing web user behavior by calculating the interests of the users.
The second part is rooted in behavioral economics, mathematics, and psychology and investigates influencing factors on different types of web user choices. In detail, a model for the cognitive process of rating products on the Web is defined and an importance hierarchy of the influencing factors is discovered.
Both parts make use of techniques from a variety of research fields and, therefore, contribute to the area of Web Science.
Charged-particle spectra at midrapidity are measured in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair √sNN = 5.02 TeV and presented in centrality classes ranging from most central (0–5%) to most peripheral (95–100%) collisions. Possible medium effects are quantified using the nuclear modification factor (RAA) by comparing the measured spectra with those from proton–proton collisions, scaled by the number of independent nucleon–nucleon collisions obtained from a Glauber model. At large transverse momenta (8 < pT < 20 GeV/c), the average RAA is found to increase from about 0.15 in 0–5% central to a maximum value of about 0.8 in 75–85% peripheral collisions, beyond which it falls off strongly to below 0.2 for the most peripheral collisions. Furthermore, RAA initially exhibits a positive slope as a function of pT in the 8–20 GeV/c interval, while for collisions beyond the 80% class the slope is negative. To reduce uncertainties related to event selection and normalization, we also provide the ratio of RAA in adjacent centrality intervals. Our results in peripheral collisions are consistent with a PYTHIA-based model without nuclear modification, demonstrating that biases caused by the event selection and collision geometry can lead to the apparent suppression in peripheral collisions. This explains the unintuitive observation that RAA is below unity in peripheral Pb–Pb, but equal to unity in minimum-bias p–Pb collisions despite similar charged-particle multiplicities.
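The nuclear modification factor used above has a simple definition that a few lines make explicit: the measured Pb–Pb yield divided by the pp yield scaled by the number of independent nucleon–nucleon collisions from a Glauber model. The numbers below are invented for illustration only.

```python
# Illustrative sketch of the nuclear modification factor R_AA as defined
# in the text.  R_AA = 1 would indicate the absence of medium effects.

def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    """R_AA = (Pb-Pb yield) / (N_coll * pp yield)."""
    return yield_aa / (n_coll * yield_pp)

# A hypothetical central collision: the yield lies far below the
# N_coll-scaled pp expectation, signalling suppression.
r_aa_central = nuclear_modification_factor(yield_aa=240.0, yield_pp=1.0, n_coll=1600)
# r_aa_central == 0.15, the scale quoted above for 0-5% central collisions
```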
Virtual machines are for the most part not used inside of high-energy physics (HEP) environments. Even though they provide a high degree of isolation, the performance overhead they introduce is too great for them to be used. With the rising number of container technologies and their increasing separation capabilities, HEP environments are evaluating whether they could utilize the technology. Container images are small and self-contained, which allows them to be easily distributed throughout the global environment. They also offer near-native performance while providing an often acceptable level of isolation. Only the needed services and libraries are packed into an image and executed directly by the host kernel. This work compares the performance impact of the three container technologies Docker, rkt and Singularity. The host kernel was additionally hardened with grsecurity and PaX to strengthen its security and make exploitation from inside a container harder. The execution time of a physics simulation was used as a benchmark. The results show that the container technologies differ in their impact on performance. The performance loss on a stock kernel is small; in some cases the containers were even faster than running without one. Docker showed the best overall performance on a stock kernel. The difference on a hardened kernel was bigger than on a stock kernel, but in favor of the container technologies. rkt performed better than all the others in almost all cases.
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master’s thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBOOST, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident. The results obtained demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBOOST model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach can be regarded as a valuable methodology, it was not possible to investigate underlying reasons for potential unfairness across clusters. Inside the test data, not enough instances remained per subgroup for further fairness or feature relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
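The subgroup comparison described above can be sketched in a few lines: given model predictions and true outcomes, compute a prediction-quality metric (here plain accuracy) separately per subgroup. The subgroup labels and data below are invented; the thesis's actual models (Random Forest, XGBOOST, deep networks) and MIMIC-III features are not reproduced.

```python
# Hedged sketch of per-subgroup evaluation of prediction quality.
from collections import defaultdict

def accuracy_per_subgroup(y_true, y_pred, subgroups):
    """Return {subgroup: accuracy} over aligned outcome/prediction lists."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, subgroups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Usage: compare quality across two hypothetical patient subgroups.
scores = accuracy_per_subgroup(
    y_true=[1, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 1, 0, 1],
    subgroups=["A", "A", "A", "B", "B", "B"],
)
# scores == {"A": 2/3, "B": 2/3}
```

As the abstract notes, such a comparison only flags quality differences; with few instances per subgroup in the test data, the underlying causes cannot be investigated further.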
We present a biologically-inspired system for real-time, feed-forward object recognition in cluttered scenes. Our system utilizes a vocabulary of very sparse features that are shared between and within different object models. To detect objects in a novel scene, these features are located in the image, and each detected feature votes for all objects that are consistent with its presence. Due to the sharing of features between object models our approach is more scalable to large object databases than traditional methods. To demonstrate the utility of this approach, we train our system to recognize any of 50 objects in everyday cluttered scenes with substantial occlusion. Without further optimization we also demonstrate near-perfect recognition on a standard 3-D recognition problem. Our system has an interpretation as a sparsely connected feed-forward neural network, making it a viable model for fast, feed-forward object recognition in the primate visual system.
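The voting scheme described above admits a minimal sketch: shared sparse features are detected in a scene, each detected feature votes for every object model consistent with its presence, and objects with enough votes are reported. The feature-to-object table is an invented toy vocabulary; the biological feature learning itself is not modelled.

```python
# Minimal sketch of feature voting for object recognition with a shared
# feature vocabulary (toy data, hypothetical names).
from collections import Counter

# Which object models each shared feature is consistent with.
FEATURE_TO_OBJECTS = {
    "handle": ["mug", "drawer"],
    "rim":    ["mug", "plate"],
    "hinge":  ["drawer", "door"],
}

def recognize(detected_features, threshold=2):
    """Tally votes from detected features; return objects above threshold."""
    votes = Counter()
    for feature in detected_features:
        for obj in FEATURE_TO_OBJECTS.get(feature, []):
            votes[obj] += 1
    return [obj for obj, count in votes.items() if count >= threshold]

# A scene containing a mug: "handle" and "rim" both vote for it.
result = recognize(["handle", "rim"])
# result == ["mug"] -- only the mug collects two consistent votes
```

Because features are shared between object models, adding a new model only extends the vote table rather than requiring a separate detector, which is the scalability argument made above.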
Students of computer science enter university education with very different competencies, experience, and knowledge. 145 datasets of first-year computer science students, collected by learning management systems and combined with exam outcomes and learning-dispositions data (e.g. student dispositions, previous experiences, and attitudes measured through self-reported surveys), were exploited to identify indicators that predict academic success and hence to enable effective interventions for an extremely heterogeneous group of students.
Analysis of Heuristics
(2006)
Heuristics arise in particular in connection with optimization problems, i.e. problems in which not merely some solution is to be found, but in which, among several feasible solutions, a best solution in an objective sense is to be identified. For the shortest superstring problem, heuristics are employed because, given the APX-completeness of the problem, exact algorithms cannot be expected. Given a set S of strings, the task is to find a string s such that every string in S is a substring of s, while minimizing the length of s. The most prominent heuristic for the shortest superstring problem is the greedy heuristic, whose approximation factor can currently only be bounded insufficiently. It is conjectured (the so-called greedy conjecture) that the approximation factor is exactly 2, but all that can be proved is that it lies between 2 and 3.5. The greedy conjecture is the central topic of the second chapter. The main results are: * By considering greedy orders, conditional linear inequalities can be exploited. This approach enables the use of linear programming to find interesting instances and deepens the understanding of such hard instances. The approach is introduced and an interpretation of the dual problem is presented. * For the nontrivial, large subclass of bilinear greedy orders it is shown that the length of the superstring found by the greedy heuristic and the length of the optimal superstring differ by at most the size of an optimal cycle cover of the strings.
Since an optimal cycle cover of a set of strings is always at most as large as an optimal superstring (close a superstring into a single cycle), this result is, for the considered subclass of greedy orders, stronger than the classical greedy conjecture. * A new conditional linear inequality on strings -- the triple inequality -- is proved, which is essential for the main result just mentioned. * Finally, it is shown that the conditional inequalities used to establish the upper bound of 3.5 on the approximation factor (such as the Monge inequality) are inherently too weak to prove the greedy conjecture even for linear greedy orders; the new triple inequality is therefore also necessary. Lastly, it is shown that even the system of conditional linear inequalities extended by the triple inequality is inherently too weak to prove the classical greedy conjecture for arbitrary greedy orders. The analysis of queueing strategies in the adversarial queueing model covers a further case in which heuristics are employed due to application-specific requirements such as an online setting and locality. Packets are to be routed through a network in which each node has only limited information about the state of the network. Classes of queueing strategies are examined, in particular the question of which local information queueing strategies should base their decisions on in order to achieve a given quality criterion. The results obtained here are: * Every queueing strategy that works without time stamps can be forced into an exponentially large queue and hence an exponentially large delay (in the diameter and the number of nodes of the network). Previously this was known only for specific prominent strategies. * A new technique for establishing the stability of queueing strategies without timekeeping is presented: layering cycles.
With their help, known stability proofs for prominent strategies can be unified and further stability results can be obtained. * For the large subclass of distance-based queueing strategies, a complete classification of all 1-stable and universally stable strategies is achieved.
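The greedy heuristic central to the first part of this abstract can be sketched directly: repeatedly merge the pair of strings with maximal overlap until one superstring remains. This is a minimal illustration of the standard heuristic, not the thesis's analysis machinery; the tie-breaking order is an arbitrary assumption:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Greedy heuristic for the shortest superstring problem.

    Repeatedly merges the ordered pair with maximal overlap. Conjectured
    approximation factor 2 (the greedy conjecture); proved to lie
    between 2 and 3.5, as discussed in the abstract above.
    """
    # Drop strings already contained in others; they add nothing.
    s = [x for x in strings if not any(x != y and x in y for y in strings)]
    while len(s) > 1:
        best = (-1, None, None)
        for i, a in enumerate(s):
            for j, b in enumerate(s):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = s[i] + s[j][k:]  # glue b onto a, skipping the overlap
        s = [x for n, x in enumerate(s) if n not in (i, j)] + [merged]
    return s[0]

print(greedy_superstring(["abc", "bcd", "cde"]))  # → "abcde"
```

The order in which pairs are merged is exactly a greedy order in the sense of the abstract; the conditional linear inequalities studied there constrain the overlaps such an order can realize.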
In today's payment systems, credit card payments play an increasingly decisive role. With the spread of this form of payment, abuse of this cashless payment instrument is growing as well. To contain as far as possible the losses the credit card institution incurs in this way, attempts are made to detect fraudulent transactions during the authorization of the payment request. The goal of this diploma thesis is to determine to what extent illegal transactions can be uncovered among the set of authorization requests with the help of adaptive algorithms, using methods both from data mining and from the field of neural networks. Fraud analysis is complicated by the fact that the assessment of each transaction must be completed within fractions of a second in order to process the high number of authorization requests and thereby optimize the service for cardholder and merchant alike. Furthermore, the majority of the records available for the analysis consist of symbolic data, i.e. alphanumerically coded values standing for various properties. Only a few of the transaction attributes are of an analog nature, i.e. exhibit a linearity that allows "neighborhoods" between data points to be determined. A purely neural-network-based analysis is therefore ruled out; this problem, among others, led to the approach pursued here. The analysis is based on known fraudulent transactions from a period of roughly one year, which, due to their large number, cannot all be compared directly against incoming transactions, since a sequential comparison would take too much time.
Moreover, a simple comparison would only detect fraud that is already known; no abstraction from past fraud experience would be possible. For this reason, the fraudulent transactions are generalized using data-mining methods and thereby reduced to a minimum, as far as the reliability of these records permits. This is followed by an analysis of the analog data not considered up to that point, in order to extract the maximum information contained in the transaction data. For this purpose, modern methods from the field of neural networks, so-called radial basis function networks, are used. Since a fraud analysis would be incomplete without a corresponding profile analysis, a profile evaluation and time-dependent analysis was finally implemented on the underlying data with the available means, following the methodology used so far. With the model implemented in this way, an attempt was made to classify behavior and transaction patterns in a general way and to incorporate them into the fraud decision. From the analysis methods presented, several classification models were developed that yield good results on the simulation data. It can be shown that fraud detection is best performed by a combined application of symbolic and analog evaluation.
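The radial basis function networks mentioned in the abstract compute a weighted sum of Gaussian responses around prototype centers. The following is a minimal sketch of that computation only; the thesis's actual architecture, features, and trained parameters are not specified here, and the centers, weights, and threshold below are hypothetical:

```python
import math

def rbf_output(x, centers, weights, sigma=1.0):
    """Output of a radial basis function network.

    Each prototype center contributes a Gaussian response that decays
    with squared Euclidean distance from the input; the weighted sum of
    these responses is the network output. (Minimal sketch; all
    parameters here are hypothetical, not the thesis's model.)
    """
    out = 0.0
    for c, w in zip(centers, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2 * sigma ** 2))
    return out

# Hypothetical prototypes for "normal" vs. "fraudulent" transaction features
centers = [(0.0, 0.0), (5.0, 5.0)]
weights = [-1.0, 1.0]  # negative pulls toward "normal", positive toward "fraud"
score = rbf_output((4.8, 5.1), centers, weights)
print("fraud" if score > 0 else "normal")  # → fraud
```

The locality of the Gaussian responses is why such networks suit the analog transaction attributes singled out in the abstract: only attributes with a meaningful notion of "neighborhood" benefit from distance-based activation.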