Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called lazy computation systems. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we will show how the original work of Howe can be extended to cope with sharing. Moreover, we will demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
Towards correctness of program transformations through unification and critical pair computation
(2011)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive let-expressions.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda-calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study of an application we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding-chains in let- and letrec-environments and by context-classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
Assessing enhanced knowledge discovery systems (eKDSs) is an intricate issue that is as yet only partially understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute, the user in the exploration of the data; a qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best-practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions for future research are outlined.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TEX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It has been implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered a status report of the implementation as well as a reference manual for the resulting system.
In this contribution we present algorithms for model checking of analog circuits enabling the specification of time constraints. Furthermore, a methodology for defining time-based specifications is introduced. An already known method for model checking of integrated analog circuits has been extended to take into account time constraints. The method will be presented using three industrial circuits. The results of model checking will be compared to verification by simulation.
Retiming is a widely investigated technique for performance optimization. In general, it performs extensive modifications on a circuit netlist, leaving it unclear, whether the achieved performance improvement will still be valid after placement has been performed. This paper presents an approach for integrating retiming into a timing-driven placement environment. The experimental results show the benefit of the proposed approach on circuit performance in comparison with design flows using retiming only as a pre- or postplacement optimization method.
We study threshold testing, an elementary probing model with the goal to choose a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results for the tests of each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler’s problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
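As an illustration of the probing model (a minimal Monte-Carlo sketch of my own, assuming i.i.d. Uniform(0,1) variables and a single common non-adaptive threshold, which is only a special case of the setting above):

import random

def threshold_testing_ratio(n=10, t=0.8, trials=100_000):
    # Sketch of non-adaptive threshold testing: every X_i ~ Uniform(0,1) is tested
    # against the same threshold t; the feedback is only whether X_i >= t.
    got, best = 0.0, 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        passed = [i for i, x in enumerate(xs) if x >= t]
        # For uniforms, E[X | X >= t] = (1+t)/2 and E[X | X < t] = t/2, so any
        # variable that passed the test has the highest conditional expectation.
        chosen = passed[0] if passed else 0
        got += xs[chosen]
        best += max(xs)
    return got / best          # empirical ratio E[value picked] / E[maximum]

print(threshold_testing_ratio())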
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with and, hence, recognize the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision robust knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
This thesis falls within the field of Data Science. Data Science uses methods from computer science, algorithms from mathematics and statistics, and domain knowledge in order to analyse large volumes of data and gain new insights. This thesis draws on several research areas from these fields. They comprise data analysis in the context of Big Data (social networks, short messages from Twitter), opinion mining (the analysis of opinions based on a lexicon of opinion-bearing phrases), and topic detection. ...
Result 1: Sentiment Phrase List (SePL)
In the research field of opinion mining, lists of opinion-bearing words play an essential role in the analysis of opinion statements. The procedure developed in this thesis for the automated generation of such a list makes an important research contribution in this area. The novel approach makes it possible, on the one hand, to include phrases consisting of several words (including negations, intensifying and attenuating particles) as well as idioms; on the other hand, the opinion values of all phrases are computed automatically on the basis of a suitable corpus. The Sentiment Phrase List and the procedure have been published and can be used by the research community [121, 123]. The construction is based on a textual rating combined with an additional numerical rating, as typically found in customer reviews (for example the title and the star rating of Amazon customer reviews). Further data sources that provide such a rating can be used as well. Based on roughly 1.5 million German customer reviews, several versions of the SePL were created and published [120].
Result 2: Algorithm based on the SePL
With the help of the SePL and the opinion-bearing phrases it contains, lexicon-based methods for analysing opinion statements can be improved. In running text, phrases are often separated by other words, so the phrases first have to be identified. The algorithm for lexicon-based opinion analysis has been published [176]. It is based on opinion-bearing phrases consisting of one or more words. Since individual phrases carry different opinion values, a more precise assessment than with previous approaches is possible: opinion-bearing phrases can be extracted from the text and rated in a differentiated way using the entries contained in the SePL. Previous approaches often use single opinion-bearing words. With the SePL, the opinion value of, for example, a negation does not have to be determined by a general procedure: in current methods, the value of an opinion-bearing word is usually inverted whenever a negation is present, which frequently yields wrong results. In the best case, the list contains an opinion value both for the single word and for its negation (e.g. "schön" and "nicht schön").
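As a gloss on the phrase-matching idea (a minimal sketch of my own, with hypothetical opinion values, not the published algorithm [176]): longest-match lookup against SePL-style entries rates "nicht schön" with its own value instead of naively inverting the value of "schön".

# Hypothetical SePL-style entries; the real list is generated automatically from review corpora.
sepl = {("schön",): 0.8, ("nicht", "schön"): -0.4}

def score(tokens, lexicon, max_len=3):
    total, i = 0.0, 0
    while i < len(tokens):
        for length in range(max_len, 0, -1):          # prefer the longest matching phrase
            phrase = tuple(tokens[i:i + length])
            if phrase in lexicon:
                total += lexicon[phrase]
                i += len(phrase)
                break
        else:
            i += 1                                    # no entry starts here, move on
    return total

print(score("das ist nicht schön".split(), sepl))     # -> -0.4 rather than an inverted +0.8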
Result 3: Evaluation of the application of the SePL
The algorithm from Result 2 was evaluated with reviews from the review platform Ciao in the domain of car insurance. In the process, the main sources of error were identified [176], which enables corresponding improvements. Furthermore, an evaluation with the SePL was carried out using a machine learning method based on a support vector machine. Several existing lexical resources were compared with the SePL, and their use in different domains was examined. The results were published in [115].
Result 4: Research project PoliTwi - detection of top political topics
With the research project PoliTwi, on the one hand, the required data were collected from Twitter; on the other hand, current top political topics are continuously made available to the general public via various channels. For the evaluation of the intended improvements in topic detection in combination with opinion analysis, the required data from the political domain are available for a period of, so far, three years. Topic detection was carried out on the basis of these data. The computed topics were compared with other systems such as Google Trends or Tagesschau Meta (see Chapter 5.3). It could be shown that opinion analysis can improve topic detection. The results of the project were published in [124]. In addition, the public, and in particular journalists and politicians, are provided with a service (among other things via the Twitter channel at https://twitter.com/politwi) through which they are informed about current top topics. News portals such as FOCUS Online have used this service in their reporting (see Chapter 4.3.6.1). The top topics have been determined since mid-2013 and can also be retrieved from the project website [119].
Result 5: Extension of lexical resources at the concept level
The still young research field of concept-level sentiment analysis tries to improve previous approaches to opinion analysis by analysing opinion statements at the concept level. One prerequisite is lists of opinion-bearing words that allow differentiated assessments depending on the context. Based on the top topics and their context, a procedure was developed that enables the creation and extension of such lists. It was shown how opinions can be rated in a differentiated way in different contexts and how this information can be incorporated into lexical resources, which can be exploited in concept-level sentiment analysis. The procedure was published in [124].
Succinctness is a natural measure for comparing the strength of different logics. Intuitively, a logic L_1 is more succinct than another logic L_2 if all properties that can be expressed in L_2 can be expressed in L_1 by formulas of (approximately) the same size, but some properties can be expressed in L_1 by (significantly) smaller formulas.
We study the succinctness of logics on linear orders. Our first theorem is concerned with the finite variable fragments of first-order logic. We prove that:
(i) Up to a polynomial factor, the 2- and the 3-variable fragments of first-order logic on linear orders have the same succinctness. (ii) The 4-variable fragment is exponentially more succinct than the 3-variable fragment. Our second main result compares the succinctness of first-order logic on linear orders with that of monadic second-order logic. We prove that the fragment of monadic second-order logic that has the same expressiveness as first-order logic on linear orders is non-elementarily more succinct than first-order logic.
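As a gloss (a standard formalisation of this comparison, added for concreteness rather than quoted from the text):

\[ L_1 \text{ is at least as succinct as } L_2 \iff \exists\,\text{polynomial } p\ \ \forall\,\varphi \in L_2\ \ \exists\,\psi \in L_1 :\ \psi \equiv \varphi \ \wedge\ |\psi| \le p(|\varphi|). \]

L_1 is exponentially (or non-elementarily) more succinct than L_2 if, in addition, some properties expressible in both logics only admit L_2-formulas of exponentially (or non-elementarily) larger size.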
The SU(3) spin model with chemical potential corresponds to a simplified version of QCD with static quarks in the strong coupling regime. It has been studied previously as a testing ground for new methods aiming to overcome the sign problem of lattice QCD. In this work we show that the equation of state and the phase structure of the model can be fully determined to reasonable accuracy by a linked cluster expansion. In particular, we compute the free energy to 14th order in the nearest-neighbour coupling. The resulting predictions for the equation of state and the location of the critical end points agree with numerical determinations to O(1%) and O(10%), respectively. While the accuracy for the critical couplings is still limited at the current series depth, the approach is equally applicable at zero and non-zero imaginary or real chemical potential, as well as to effective QCD Hamiltonians obtained by strong coupling and hopping expansions.
We review the representation problem based on factoring and show that this problem gives rise to alternative solutions for many cryptographic protocols in the literature. While the solutions so far usually rely either on the RSA problem or on the intractability of factoring integers of a special form (e.g., Blum integers), the solutions here work with the most general factoring assumption. Protocols we discuss include identification schemes secure against parallel attacks, secure signatures, blind signatures and (non-malleable) commitments.
This paper proposes a new approach for the encoding of images by only a few important components. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has found strong interest in the neural network community. Applied to images, we aim for the most important source patterns with the highest occurrence probability or highest information, called principal independent components (PIC). For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error since no a-priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon’s rate distortion theory.
Classically, encoding of images by only a few important components is done by Principal Component Analysis (PCA). Recently, a data analysis tool called Independent Component Analysis (ICA) for the separation of independent influences in signals has found strong interest in the neural network community. This approach has also been applied to images. Whereas that approach assumes continuous source channels mixed up into the same number of channels by a mixing matrix, we assume that images are composed of only a few image primitives. This means that for images we have fewer sources than pixels. Additionally, in order to reduce unimportant information, we aim only for the most important source patterns with the highest occurrence probabilities or highest information, called "Principal Independent Components (PIC)". For the example of a synthetic picture composed of characters this idea gives us the most important ones. Nevertheless, for natural images, where no a-priori probabilities can be computed, this does not lead to an acceptable reproduction error. Combining the traditional principal component criteria of PCA with the independence property of ICA we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon’s rate distortion theory.
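A minimal sketch of the kind of comparison discussed above (my own illustration using scikit-learn, with random stand-in patches rather than real image data): encode the data with a few PCA components and with a few ICA components, then compare the reconstruction error.

import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
patches = rng.random((1000, 64))      # hypothetical 8x8 image patches, one per row
k = 8                                 # number of retained components

for name, model in (("PCA", PCA(n_components=k)),
                    ("ICA", FastICA(n_components=k, max_iter=1000, random_state=0))):
    codes = model.fit_transform(patches)      # encode each patch with k components
    recon = model.inverse_transform(codes)    # decode back to patch space
    print(name, "mean squared reconstruction error:",
          float(np.mean((patches - recon) ** 2)))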
The dynamics of many systems are described by ordinary differential equations (ODE). Solving ODEs with standard methods (i.e. numerical integration) needs a high amount of computing time but only a small amount of storage memory. For some applications, e.g. short time weather forecast or real time robot control, long computation times are prohibitive. Is there a method which uses less computing time (but has drawbacks in other aspects, e.g. memory), so that the computation of ODEs gets faster? We will try to discuss this question for the assumption that the alternative computation method is a neural network which was trained on ODE dynamics and compare both methods using the same approximation error. This comparison is done with two different errors. First, we use the standard error that measures the difference between the approximation and the solution of the ODE which is hard to characterize. But in many cases, as for physics engines used in computer games, the shape of the approximation curve is important and not the exact values of the approximation. Therefore, we introduce a subjective error based on the Total Least Square Error (TLSE) which gives more consistent results. For the final performance comparison, we calculate the optimal resource usage for the neural network and evaluate it depending on the resolution of the interpolation points and the inter-point distance. Our conclusion gives a method to evaluate where neural nets are advantageous over numerical ODE integration and where this is not the case. Index Terms—ODE, neural nets, Euler method, approximation complexity, storage optimization.
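For reference, the numerical-integration baseline that the trained network is compared against is an explicit solver such as Euler's method; a minimal sketch (with a toy ODE y' = -y chosen by me, not taken from the paper) and the standard error measure:

import math

def euler(f, y0, t0, t1, steps):
    # Explicit Euler integration of y' = f(t, y): little memory, but cost grows with the step count.
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: -y, y0=1.0, t0=0.0, t1=1.0, steps=100)
exact = math.exp(-1.0)
print(abs(approx - exact))            # standard (absolute) approximation error at t = 1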
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time keeping information like a packet’s age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
The impact of columnar file formats on SQL‐on‐hadoop engine performance: a study on ORC and Parquet
(2019)
Columnar file formats provide an efficient way to store data to be queried by SQL‐on‐Hadoop engines. Related works consider the performance of processing engine and file format together, which makes it impossible to predict their individual impact. In this work, we propose an alternative approach: by executing each file format on the same processing engine, we compare the different file formats as well as their different parameter settings. We apply our strategy to two processing engines, Hive and SparkSQL, and evaluate the performance of two columnar file formats, ORC and Parquet. We use BigBench (TPCx‐BB), a standardized application‐level benchmark for Big Data scenarios. Our experiments confirm that the file format selection and its configuration significantly affect the overall performance. We show that ORC generally performs better on Hive, whereas Parquet achieves best performance with SparkSQL. Using ZLIB compression brings up to 60.2% improvement with ORC, while Parquet achieves up to 7% improvement with Snappy. Exceptions are the queries involving text processing, which do not benefit from using any compression.
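For illustration, the format and compression choices compared in the study can be selected in SparkSQL roughly as follows (a minimal PySpark sketch, not the benchmark setup itself; the input and output paths are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orc-vs-parquet-sketch").getOrCreate()
df = spark.read.csv("input.csv", header=True, inferSchema=True)   # placeholder input

# Parquet with Snappy compression (the combination reported to perform best with SparkSQL).
df.write.mode("overwrite").option("compression", "snappy").parquet("out/parquet_snappy")

# ORC with ZLIB compression (the combination reported to perform best with Hive).
df.write.mode("overwrite").option("compression", "zlib").orc("out/orc_zlib")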
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree and can therefore be used e.g. in high-speed, real-time process control. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for a satisfactory operation, which are critical issues in VLSI and computer implementations of non-trivial tasks. In this paper the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal approximation error by the absolute amount and the relative distribution of information contained in the network. We define as the error-bounded network descriptional complexity the minimal number of bits for a class of approximation networks which show a certain approximation error, and we achieve the conditions for this goal by the new principle of optimal information distribution. For two examples, a simple linear approximation of a non-linear, quadratic function and a non-linear approximation of the inverse kinematic transformation used in robot manipulator control, the principle of optimal information distribution gives the optimal number of neurons and the resolutions of the variables, i.e. the minimal amount of storage for the neural net. Keywords: Kolmogorov complexity, ε-entropy, rate-distortion theory, approximation networks, information distribution, weight resolutions, Kohonen mapping, robot control.
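To make the role of weight resolution concrete (a toy sketch of my own, not the optimal information distribution principle itself): uniformly quantising the weights of a fixed linear map to b bits and recording the resulting maximal approximation error shows how accuracy per weight trades off against storage.

import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=32)                   # weights of a toy linear "network" y = w . x
x = rng.normal(size=(1000, 32))
y = x @ w

for bits in (2, 4, 8, 16):
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    # Uniform quantisation of each weight to the given number of bits.
    w_q = np.round((w - lo) / (hi - lo) * (levels - 1)) / (levels - 1) * (hi - lo) + lo
    err = np.max(np.abs(x @ w_q - y))     # maximal approximation error at this resolution
    print(bits, "bits per weight -> max error", float(err))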
We study the effect of randomness in the adversarial queueing model. All proofs of instability for deterministic queueing strategies exploit a finespun strategy of insertions by an adversary. If the local queueing decisions in the network are subject to randomness, it is far from obvious that an adversary can still trick the network into instability. We show that uniform queueing is unstable even against an oblivious adversary. Consequently, randomizing the queueing decisions made to operate a network is not in itself a suitable fix for poor network performance due to packet pileups.
We study the approximability of the following NP-complete (in their feasibility recognition forms) number theoretic optimization problems: 1. Given n numbers a_1, ..., a_n ∈ Z, find a minimum gcd set for a_1, ..., a_n, i.e., a subset S ⊆ {a_1, ..., a_n} with minimum cardinality satisfying gcd(S) = gcd(a_1, ..., a_n). 2. Given n numbers a_1, ..., a_n ∈ Z, find a 1-minimum gcd multiplier for a_1, ..., a_n, i.e., a vector x ∈ Z^n with minimum max_{1≤i≤n} |x_i| satisfying Σ_{i=1}^{n} ...
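To make problem 1 concrete (an exponential brute-force sketch of my own, for illustration only; the paper is about approximability, not this naive search):

from math import gcd
from functools import reduce
from itertools import combinations

def minimum_gcd_set(nums):
    # Smallest subset S of nums with gcd(S) = gcd(nums); exhaustive, exponential search.
    target = reduce(gcd, nums)
    for size in range(1, len(nums) + 1):
        for subset in combinations(nums, size):
            if reduce(gcd, subset) == target:
                return subset

print(minimum_gcd_set([4, 6, 9]))   # gcd(4, 6, 9) = 1 is already achieved by the pair (4, 9)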
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD also contributes significantly to the track reconstruction and calibration in the central barrel of ALICE. In this paper the design, construction, operation, and performance of this detector are discussed. A pion rejection factor of up to 410 is achieved at a momentum of 1 GeV/c in p-Pb collisions and the resolution at high transverse momentum improves by about 40% when including the TRD information in track reconstruction. The triggering capability is demonstrated both for jet, light nuclei, and electron selection.
The work presented here is an application for virtual reality (VR) that is able to visualise the structure of an arbitrary text as a walkable, interactive city. Beyond that, the program offers a special text search that cannot be found in this form in other conventional text processing programs. Thanks to the structural analysis and the use of some exceptional analysis tools of the TextImager [2], text2City not only allows searching for particular text patterns, but also, for example, determining the text level (word, sentence, paragraph, etc.) and more. Another feature is the communication link between the TextAnnotator service [1] and text2City, which gives the user the possibility to annotate, but can also immediately make visible annotations carried out by other people. To run the program, one of the two VR headsets Oculus Rift or HTC Vive, a VR-capable PC, and the software Unity are required.
Nowadays, many applications are developed as web applications because they can be brought to market faster. New methods have been developed to streamline the software development process in order to release a product even faster and more often. These methods make the work of manual testers considerably harder: they now have to test even faster and even more often.
To counter this predicament, test automation mechanisms and test automation tools have been developed. In this thesis I wanted to show that test automation can still be added to existing projects retroactively, and that it can improve the quality of the product.
In this thesis I automated 70% of the test case catalogue for the product "Email4Tablet" of Deutsche Telekom AG using the test tool Selenium.
Automatic termination proofs for functional programming languages are an often-tackled problem. Most work in this area is done on strict languages, where orderings for the arguments of recursive calls are generated. In lazily evaluated languages, the arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point where sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order, polymorphically typed language, and termination of higher-order functions can be proved as well. The calculus can be used to derive information on a wide range of different notions of termination.
We present techniques to prove termination of cycle rewriting, that is, string rewriting on cycles, which are strings in which the start and the end are connected. Our main technique is to transform cycle rewriting into string rewriting and then apply state-of-the-art techniques to prove termination of the string rewrite system. We present three such transformations and prove for all of them that they are sound and complete. In this way, not only does termination of string rewriting of the transformed system imply termination of the original cycle rewrite system; a similar conclusion can also be drawn for non-termination. Apart from this transformational approach, we present a uniform framework of matrix interpretations, covering most of the earlier approaches to automatically proving termination of cycle rewriting. All our techniques serve both for proving termination and relative termination. We present several experiments showing the power of our techniques.
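A toy illustration of the difference between the two notions (my own example, not taken from the paper): the single rule ab -> ba terminates as ordinary string rewriting, since every step removes one a-before-b inversion, but never terminates as cycle rewriting, because any cycle containing both letters has an occurrence of ab somewhere around the circle.

def cycle_step(s, lhs="ab", rhs="ba"):
    # One cycle-rewriting step: look for the left-hand side on some rotation of the cycle.
    for i in range(len(s)):
        rotation = s[i:] + s[:i]
        if rotation.startswith(lhs):
            return rhs + rotation[len(lhs):]
    return None                              # no redex anywhere: the cycle is a normal form

cycle = "aabb"
for _ in range(6):                           # a redex is always available; we stop after 6 steps
    cycle = cycle_step(cycle)
    print(cycle)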
With the rise of digitalization and ubiquity of media use, both opportunities and challenges emerge for academic learning. One prevalent challenge is media multitasking, which can become distracting and hinder learning success. This thesis investigates two facets of this issue: the enhancement of data tracking, and the exploration of digital interventions that support self-control.
The first paper focuses on digital tracking of media use, as a comprehensive understanding of digital distractions requires careful data collection to avoid misinterpretations. The paper presents a tracking system where media use is linked to learning activities. An annotation dashboard enabled the enrichment of the log data with self-reports. The efficacy of this system was evaluated in a 14-day online course taken by 177 students, with results confirming the initial assumptions about media tracking.
The second paper tackles the recognition of whether a text was thoroughly read, an issue brought on by the tendency of students to skip lengthy and demanding texts. A method utilizing scroll data and time series classification algorithms is presented and tested, showing promising results for early recognition and intervention.
The third paper presents the results of a systematic literature review on the effectiveness of digital self-control tools (DSCTs) in academic learning. The paper identifies gaps in existing research and outlines a roadmap for further research on self-control tools.
The fourth paper shares findings from a survey of 273 students, exploring the practical use and perceived helpfulness of DSCTs. The study highlights the challenge of balancing between too restrictive and too lenient DSCTs, particularly for platforms offering both learning content and entertainment. The results also show a special role of media use that is highly habitual.
The fifth paper of this work investigates facets of app-based habit building. In a study over 27 days, 106 school-aged children used the specially developed PROMPT-app. The children carried out one of three digital activities each day, each of which was supposed to promote a deeper or more superficial processing of plans. Significant differences regarding the processing of plans emerged between the three activities, and the results suggest that a child-friendly planning application needs to be personalized to be effective.
Overall, this work offers a comprehensive insight into the complexity and potentials of dealing with distracting media usage and shows ways for future research and interventions in this fascinating and ever more important field.
Despite an extensive range of literature and guides in the field of project management, many IT projects still fail today. The causes are often problems within the project team or misjudgements in the planning of the project and in monitoring the project status. Ways of working that have emerged through new technologies and globalisation, such as the virtual team, are particularly affected. This thesis addresses the questions of what virtual teams are and which problems burden the work of virtual teams. To this end, currently existing tools from the Web 2.0 area are analysed, and avoidable weaknesses of these helpers are identified from the state of the tools on offer. Subsequently, by means of a requirements analysis and a concept that uses new methods for presenting the project status and for linking it with documentation and communication, the tool "TeamVision" is built, which attempts to manage virtual teams as efficiently as possible, to detect problems quickly and thus to speed up the work within the team. In particular, this builds on the result of the analysis that many tools handle individual administrative tasks separately: users have to gather information themselves from the various charts, lists or other representations and associate it on their own. The prototypical implementation of TeamVision tries to make the flow of information manageable by combining overviews in a project tree that uses zoom functions and visual aids such as colour coding to facilitate information retrieval.
Interest to become a data scientist or related professions in data science domain is rapidly growing. To meet such a demand, we propose a novel educational service that aims to provide tailored learning paths for data science. Our target user is one who aims to be an expert in data science. Our approach is to analyze the background of the practitioner and match the learning units. A critical feature is that we use gamification to reinforce the practitioner engagement. We believe that our work provides a practical guideline for those who want to learn data science.
Measurements of the pT-dependent flow vector fluctuations in Pb-Pb collisions at √sNN = 5.02 TeV using azimuthal correlations with the ALICE experiment at the LHC are presented. A four-particle correlation approach [1] is used to quantify the effects of flow angle and magnitude fluctuations separately. This paper extends previous studies to additional centrality intervals and provides measurements of the pT-dependent flow vector fluctuations at √sNN = 5.02 TeV with two-particle correlations. Significant pT-dependent fluctuations of the V2 flow vector in Pb-Pb collisions are found across different centrality ranges, with the largest fluctuations of up to ∼15% being present in the 5% most central collisions. In parallel, no evidence of significant pT-dependent fluctuations of V3 or V4 is found. Additionally, evidence of flow angle and magnitude fluctuations is observed with more than 5σ significance in central collisions. These observations in Pb-Pb collisions indicate where the classical picture of hydrodynamic modeling with a common symmetry plane breaks down. This has implications for hard probes at high pT, which might be biased by pT-dependent flow angle fluctuations of at least 23% in central collisions. Given the presented results, existing theoretical models should be re-examined to improve our understanding of initial conditions, quark-gluon plasma (QGP) properties, and the dynamic evolution of the created system.
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The results are reported in terms of multiparticle correlation observables dubbed Symmetric Cumulants. These observables are robust against biases originating from nonflow effects. The centrality dependence of correlations between the higher order harmonics (the quadrangular v4 and pentagonal v5 flow) and the lower order harmonics (the elliptic v2 and triangular v3 flow) is presented. The transverse momentum dependence of correlations between v3 and v2 and between v4 and v2 is also reported. The results are compared to calculations from viscous hydrodynamics and A Multi-Phase Transport (AMPT) model calculations. The comparisons to viscous hydrodynamic models demonstrate that the different order harmonic correlations respond differently to the initial conditions and the temperature dependence of the ratio of shear viscosity to entropy density (η/s). A small average value of η/s is favored independent of the specific choice of initial conditions in the models. The calculations with the AMPT initial conditions yield results closest to the measurements. Correlations between the magnitudes of v2, v3 and v4 show moderate pT dependence in mid-central collisions. Together with existing measurements of individual flow harmonics, the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the Large Hadron Collider. The results are reported in terms of multiparticle correlation observables dubbed Symmetric Cumulants. These observables are robust against biases originating from nonflow effects. The centrality dependence of correlations between the higher order harmonics (the quadrangular v4 and pentagonal v5 flow) and the lower order harmonics (the elliptic v2 and triangular v3 flow) is presented. The transverse momentum dependences of correlations between v3 and v2 and between v4 and v2 are also reported. The results are compared to calculations from viscous hydrodynamics and A Multi-Phase Transport (AMPT) model calculations. The comparisons to viscous hydrodynamic models demonstrate that the different order harmonic correlations respond differently to the initial conditions and the temperature dependence of the ratio of shear viscosity to entropy density (η/s). A small average value of η/s is favored independent of the specific choice of initial conditions in the models. The calculations with the AMPT initial conditions yield results closest to the measurements. Correlations between the magnitudes of v2, v3 and v4 show moderate pT dependence in mid-central collisions. This might be an indication of possible viscous corrections to the equilibrium distribution at hadronic freeze-out, which might help to understand the possible contribution of bulk viscosity in the hadronic phase of the system. Together with existing measurements of individual flow harmonics, the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
We present the first systematic comparison of the charged-particle pseudorapidity densities for three widely different collision systems, pp, pPb, and PbPb, at the top energy of the Large Hadron Collider (√sNN = 5.02 TeV) measured over a wide pseudorapidity range (−3.5 < η < 5), the widest possible among the four experiments at that facility. The systematic uncertainties are minimised since the measurements are recorded by the same experimental apparatus (ALICE). The distributions for pPb and PbPb collisions are determined as a function of the centrality of the collisions, while results from pp collisions are reported for inelastic events with at least one charged particle at midrapidity. The charged-particle pseudorapidity densities are, under simple and robust assumptions, transformed to charged-particle rapidity densities. This allows for the calculation and the presentation of the evolution of the width of the rapidity distributions and of a lower bound on the Bjorken energy density, as a function of the number of participants in all three collision systems. We find a decreasing width of the particle production, and roughly a smooth tenfold increase in the energy density, as the system size grows, which is consistent with a gradually higher dense phase of matter.
We present the first systematic comparison of the charged-particle pseudorapidity densities for three widely different collision systems, pp, p-Pb, and Pb-Pb, at the top energy of the Large Hadron Collider (√sNN = 5.02 TeV) measured over a wide pseudorapidity range (−3.5 < η < 5), the widest possible among the four experiments at that facility. The systematic uncertainties are minimised since the measurements are recorded by the same experimental apparatus (ALICE). The distributions for p-Pb and Pb-Pb collisions are determined as a function of the centrality of the collisions, while results from pp collisions are reported for inelastic events with at least one charged particle at midrapidity. The charged-particle pseudorapidity densities are, under simple and robust assumptions, transformed to charged-particle rapidity densities. This allows for the calculation and the presentation of the evolution of the width of the rapidity distributions and of a lower bound on the Bjorken energy density, as a function of the number of participants in all three collision systems. We find a decreasing width of the particle production, and roughly a smooth tenfold increase in the energy density, as the system size grows, which is consistent with a gradually higher dense phase of matter.
The first measurements of K∗(892)0 resonance production as a function of charged-particle multiplicity in Xe−Xe collisions at √sNN = 5.44 TeV and pp collisions at √s = 5.02 TeV using the ALICE detector are presented. The resonance is reconstructed at midrapidity (|y| < 0.5) using the hadronic decay channel K∗0 → K±π∓. Measurements of the transverse-momentum integrated yield, the mean transverse momentum, the nuclear modification factor of K∗0, and the yield ratios of resonance to stable hadron (K∗0/K) are compared across different collision systems (pp, p−Pb, Xe−Xe, and Pb−Pb) at similar collision energies to investigate how the production of K∗0 resonances depends on the size of the system formed in these collisions. The hadronic rescattering effect is found to be independent of the size of the colliding systems and mainly driven by the produced charged-particle multiplicity, which is a proxy for the volume of the produced matter at chemical freeze-out. In addition, the production yields of K∗0 in Xe−Xe collisions are utilized to constrain the dependence of the kinetic freeze-out temperature on the system size using the HRG-PCE model.
To truly appreciate the myriad of events which relate synaptic function and vesicle dynamics, simulations should be done in a spatially realistic environment. This holds true in particular for explaining both the rather astonishing motor patterns underlying peristaltic contractions, which we observed in in vivo recordings, and the shape of the EPSPs under different forms of long-term stimulation, both presented here for a well-characterized synapse, the neuromuscular junction (NMJ) of the Drosophila larva (cf. Figure 1). To this end, we have employed a reductionist approach and generated three-dimensional models of single presynaptic boutons at the Drosophila larval NMJ. Vesicle dynamics are described by diffusion-like partial differential equations which are solved numerically on unstructured grids using the uG platform. In our model we varied parameters such as bouton size, vesicle output probability (Po), stimulation frequency and number of synapses, to observe how altering these parameters affected bouton function. Hence we demonstrate that the morphologic and physiologic specialization may be a convergent evolutionary adaptation to regulate the trade-off between sustained, low-output, and short-term, high-output, synaptic signals. There seems to be a biologically meaningful explanation for the co-existence of the two different bouton types as previously observed at the NMJ (characterized especially by the relation between size and Po): assigning two different tasks with respect to short- and long-time behaviour could allow for an optimized interplay of different synapse types. We present astonishingly similar results of experimental and simulation data, obtained in particular without any data fitting, based only on biophysical values taken from different experimental results. As a side product, we demonstrate how advanced methods from numerical mathematics could help in future to resolve other difficult experimental neurobiological issues as well.
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. To truly appreciate the myriad of events which relate synaptic function and vesicle dynamics, simulations should be done in a spatially realistic environment. This holds true in particular for explaining the rather astonishing motor patterns presented here, which we observed in in vivo recordings and which underlie peristaltic contractions, at a well-characterized synapse, the neuromuscular junction (NMJ) of the Drosophila larva. To this end, we have employed a reductionist approach and generated three-dimensional models of single presynaptic boutons at the Drosophila larval NMJ. Vesicle dynamics are described by diffusion-like partial differential equations which are solved numerically on unstructured grids using the uG platform. In our model we varied parameters such as bouton size, vesicle output probability (Po), stimulation frequency and number of synapses, to observe how altering these parameters affected bouton function. Hence we demonstrate that the morphologic and physiologic specialization may be a convergent evolutionary adaptation to regulate the trade-off between sustained, low-output, and short-term, high-output, synaptic signals. There seems to be a biologically meaningful explanation for the co-existence of the two different bouton types as previously observed at the NMJ (characterized especially by the relation between size and Po): assigning two different tasks with respect to short- and long-time behaviour could allow for an optimized interplay of different synapse types. As a side product, we demonstrate how advanced methods from numerical mathematics could help in future to resolve other difficult experimental neurobiological issues as well.
The morphology of presynaptic specializations can vary greatly ranging from classical single-release-site boutons in the central nervous system to boutons of various sizes harboring multiple vesicle release sites. Multi-release-site boutons can be found in several neural contexts, for example at the neuromuscular junction (NMJ) of body wall muscles of Drosophila larvae. These NMJs are built by two motor neurons forming two types of glutamatergic multi-release-site boutons with two typical diameters. However, it is unknown why these distinct nerve terminal configurations are used on the same postsynaptic muscle fiber. To systematically dissect the biophysical properties of these boutons we developed a full three-dimensional model of such boutons, their release sites and transmitter-harboring vesicles and analyzed the local vesicle dynamics of various configurations during stimulation. Here we show that the rate of transmission of a bouton is primarily limited by diffusion-based vesicle movements and that the probability of vesicle release and the size of a bouton affect bouton-performance in distinct temporal domains allowing for an optimal transmission of the neural signals at different time scales. A comparison of our in silico simulations with in vivo recordings of the natural motor pattern of both neurons revealed that the bouton properties resemble a well-tuned cooperation of the parameters release probability and bouton size, enabling a reliable transmission of the prevailing firing-pattern at diffusion-limited boutons. Our findings indicate that the prevailing firing-pattern of a neuron may determine the physiological and morphological parameters required for its synaptic terminals.
A newly developed observable for correlations between symmetry planes, which characterize the direction of the anisotropic emission of produced particles, is measured in Pb-Pb collisions at √sNN = 2.76 TeV with ALICE. This so-called Gaussian Estimator allows for the first time the study of these quantities without the influence of correlations between different flow amplitudes. The centrality dependence of various correlations between two, three and four symmetry planes is presented. The ordering of magnitude between these symmetry plane correlations is discussed and the results of the Gaussian Estimator are compared with measurements of previously used estimators. The results utilizing the new estimator lead to significantly smaller correlations than reported by studies using the Scalar Product method. Furthermore, the obtained symmetry plane correlations are compared to state-of-the-art hydrodynamic model calculations for the evolution of heavy-ion collisions. While the model predictions provide a qualitative description of the data, quantitative agreement is not always observed, particularly for correlators with significant non-linear response of the medium to initial state anisotropies of the collision system. As these results provide unique and independent information, their usage in future Bayesian analysis can further constrain our knowledge on the properties of the QCD matter produced in ultrarelativistic heavy-ion collisions.