000 Informatik, Informationswissenschaft, allgemeine Werke
Starting from the fundamental narratological significance of events in their constitutive function for narrative texts, the project "Evaluating Events in Narrative Theory" (EvENT) developed an approach with which events can be modelled on the text surface and thus in machine-readable form. This contribution outlines the project's work steps to date and selected results, which include the generation of narrativity graphs.
Wissenschaft
(2022)
Georg Toepfer turns to ideals of wholeness in science and its theory, from antiquity to the contemporary history of knowledge. Wholeness can come into play on different levels: science deals with its inner unity on a methodological level; it asks about the wholeness or holism of its objects (for example in the context of overcoming mind-body dualism); it can hold out the prospect of a sum of all sciences as a whole; or it can attempt to promote a unity or wholeness of humanity as a whole. Overall, he notes, demands for a unity of science tend to arise at times when, epistemologically as well as practically, the exact opposite can be observed. Moreover, the rhetoric of wholeness in science is "by no means integrative" but, on the contrary, often works "with massive exclusions". The problem of wholeness has also frequently occupied scientific self-reflection with regard to its own visualisability. This is reflected in early modern diagram types such as the tree, the house, or the mine, in the two-dimensional map of the 'Encyclopédie', or in a circular figure still elaborated by Jean Piaget in the twentieth century. Such forms of the whole reveal the "simplification" of the epistemological approach and often have to exclude entire fields of knowledge; in Piaget's case, for instance, the humanities in their entirety.
OPEN SCIENCE, VERSION 3.0: Breaking down barriers for equitable and efficient research communication
(2022)
Only a few studies on the nocturnal behavior of African ungulates exist so far, with mostly small sample sizes. For a comprehensive understanding of nocturnal behavior, the data basis needs to be expanded. Results obtained by observing zoo animals can provide clues for the study of wild animals and furthermore contribute to a better understanding of animal welfare and better husbandry conditions in zoos. The current contribution reduces the lack of data in two ways. First, we present a stand-alone open-source software package based on deep learning techniques, named Behavioral Observations by Videos and Images using Deep-Learning Software (BOVIDS). It can be used to identify ungulates in their enclosure and to determine the three behavioral poses “Standing,” “Lying—head up,” and “Lying—head down” on 11,411 h of video material with an accuracy of 99.4%. Second, BOVIDS is used to conduct a case study on 25 common elands (Tragelaphus oryx) out of 5 EAZA zoos with a total of 822 nights, yielding the first detailed description of the nightly behavior of common elands. Our results indicate that age and sex are influencing factors on the nocturnal activity budget, the length of behavioral phases as well as the number of phases per behavioral state during the night while the keeping zoo has no significant influence. It is found that males spend more time in REM sleep posture than females while young animals spend more time in this position than adult ones. Finally, the results suggest a rhythm between the Standing and Lying phases among common elands that opens future research directions.
In the European Union, a disease counts as a rare disease (RD) if it affects no more than 5 in 10,000 people. With more than 6,000 RDs, there currently exists a large and heterogeneous set of distinct clinical pictures whose symptoms are complex, multi-layered and therefore difficult to classify in everyday medical practice. This complicates diagnosis and treatment as well as finding a suitable point of contact, since there are only a few experts for each individual RD. The medical care atlas for rare diseases, www.se-atlas.de, allows users to search by disease name for care facilities and patient organisations for specific RDs and displays the search results geographically. It also provides an overview of all German centres for rare diseases, which serve as a point of contact for affected persons without a clear diagnosis. The se-atlas serves as a compass through the heterogeneous mass of information on care facilities for RDs and provides low-threshold information for a broad group of users, from affected persons to members of the medical care team.
Artificial Intelligence (AI) and Machine Learning (ML) are currently hot topics in industry and business practice, while management-oriented research disciplines seem reluctant to adopt these sophisticated data analytics methods as research instruments. Even the Information Systems (IS) discipline with its close connections to Computer Science seems to be conservative when conducting empirical research endeavors. To assess the magnitude of the problem and to understand its causes, we conducted a bibliographic review on publications in high-level IS journals. We reviewed 1,838 articles that matched corresponding keyword-queries in journals from the AIS senior scholar basket, Electronic Markets and Decision Support Systems (Ranked B). In addition, we conducted a survey among IS researchers (N = 110). Based on the findings from our sample we evaluate different potential causes that could explain why ML methods are rather underrepresented in top-tier journals and discuss how the IS discipline could successfully incorporate ML methods in research undertakings.
Modern-day science is under great pressure. A potent mix of increasing expectations, limited resources, tensions between competition and cooperation, and the need for evidence-based funding is creating major change in how science is conducted and perceived. Amidst this 'perfect storm' is the allure of 'research excellence', a concept that drives decisions made by universities and funders, and defines scientists' research strategies and career trajectories. But what is 'excellent' science? And how to recognise it? After decades of inquiry and debate there is still no satisfactory answer. Are we asking the wrong question? Is reality more complex, and 'excellence in science' more elusive, than many are willing to admit? And how should excellence be defined in different parts of the world, particularly in lower-income countries of the 'Global South' where science is expected to contribute to pressing development issues, despite often scarce resources? Many wonder whether the Global South is importing, with or without consent, flawed tools for research evaluation from North America and Europe that are not fit for purpose. This book takes a critical view of these questions, touching on conceptual issues and practical problems that inevitably emerge when 'excellence' is at the center of science systems. Emerging from the capacity-building work of the Science Granting Councils Initiative in sub-Saharan Africa, it speaks to scholars, as well as to managers and funders of research around the world. Confronting sticky problems and uncomfortable truths, the chapters contain insights and recommendations that point towards new solutions - both for the Global South and the Global North.
In natural languages such as Turkish, German and English, a sentence basically consists of a subject and a predicate. Similarly, in formal languages a sentence consists of a predicate and an argument. Predicates are denoted by capital letters such as P, Q, R, and arguments by lowercase letters such as x, y, z. For example, an affirmative sentence can be expressed as P(x) and a negative sentence as -P(x). Sometimes, however, it is not entirely clear whether a sentence is affirmative or negative, and in such cases ambiguities can arise in the existing symbolic notation. If affirmative sentences are assigned the mathematical value 1 and negative sentences the value 0, then sentences whose affirmative or negative status is indeterminate can only take a value between these two. In other words, a sentence written as P(x) can be expressed as P1(x) and a sentence written as -P(x) as P0(x); but since sentences whose affirmativeness is not certain cannot be represented by these values, another form of expression is needed, because the degree to which the action, occurrence or movement in such sentences is realised is neither 0 nor 1 but a value between 0 and 1. This study aims to propose a way of expressing such sentences in formal languages and to grade affirmativeness using fuzzy set theory. To this end, the proposed approach is applied to a number of example sentences, which are expressed in a fuzzy symbolic notation.
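The graded notation sketched above (P1(x) for a clearly affirmative sentence, P0(x) for a clearly negative one, and intermediate degrees in between) can be illustrated with a minimal fuzzy-predicate sketch in Python. The predicate, arguments, and membership degrees below are illustrative assumptions, not examples taken from the study:

```python
# Minimal sketch of graded (fuzzy) predicates: a predicate maps an
# argument to a truth degree in [0, 1] instead of {0, 1}.
# All membership values below are illustrative assumptions.

def predicate(degrees):
    """Build a fuzzy predicate from a mapping argument -> degree."""
    return lambda x: degrees.get(x, 0.0)

def neg(p):
    """Fuzzy negation: the degree of -P(x) is 1 minus the degree of P(x)."""
    return lambda x: 1.0 - p(x)

# P = "has finished the work"; degrees express partial completion.
P = predicate({"ali": 1.0,    # clearly affirmative: P1(ali)
               "veli": 0.0,   # clearly negative:    P0(veli)
               "ayse": 0.6})  # indeterminate:       P0.6(ayse)

for x in ("ali", "veli", "ayse"):
    print(f"P_{P(x):.1f}({x}), negation degree {neg(P)(x):.1f}")
```

A crisp sentence is recovered as the special case where the degree is exactly 0 or 1.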
Why do we need to communicate science? Is science, with its highly specialised language and its arcane methods, too distant to be understood by the public? Is it really possible for citizens to participate meaningfully in scientific research projects and debate? Should scientists be mandated to engage with the public to facilitate better understanding of science? How can they best communicate their special knowledge to be intelligible? These and a plethora of related questions are being raised by researchers and politicians alike as they have become convinced that science and society need to draw nearer to one another. Once the conviction took hold that science should open up to the public and these questions were raised, it became clear that coming up with satisfactory answers would be a complex challenge. The inaccessibility of scientific language and methods, due to ever increasing specialisation, is at the base of its very success. Thus, translating specialised knowledge to become understandable, interesting and relevant to various publics creates particular perils. This is exacerbated by the ongoing disruption of the public discourse through the digitisation of communication platforms. For example, the availability of medical knowledge on the internet and the immense opportunities to inform oneself about health risks via social media are undermined by the manipulable nature of this technology that does not allow its users to distinguish between credible content and misinformation. In countries around the world, scientists, policy-makers and the public have high hopes for science communication: that it may elevate its populations educationally, that it may raise the level of sound decision-making for people in their daily lives, and that it may contribute to innovation and economic well-being.
This collection of current reflections gives an insight into the issues that have to be addressed by research to reach these noble goals, for South Africa and by South Africans in particular.
Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental and stochastic traits. Although many theories have been established during the last decades, none of them is able to fully describe the complex mechanisms which lead to ageing. Generally, biological processes and environmental factors lead to molecular damage and an accumulation of impaired cellular components. In contrast, counteracting surveillance systems are at work, including the repair, remodelling and degradation of damaged or impaired components. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become affected by impairing effects. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic, systems-biological investigations are required. Hence, the processes which lead to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To achieve a general overview of the different biochemical processes that are affected during ageing in P. anserina, an initial comprehensive investigation was carried out, which aimed to reveal genes significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Sophisticated and comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and to characterise the autophagy pathway, its associated single components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified which correspond to the different sub-pathways of autophagy. Owing to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, several proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
The presented biological network provides a systems-biological view on autophagy and enables further studies which aim to analyse the relationship between autophagy and ageing. Furthermore, it allows potential methods to be investigated for intervening in the ageing process and extending the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
We present an implementation of an interpreter LRPi for the call-by-need calculus LRP, based on a variant of Sestoft's abstract machine Mark 1, extended with an eager garbage collector. It is used as a tool for exact space usage analyses as a support for our investigations into space improvements of call-by-need calculi.
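The call-by-need discipline that LRP models can be illustrated with a minimal memoising thunk in Python. This is a generic sketch of lazy evaluation with sharing, not the LRPi machine itself; the class name and example are assumptions for illustration:

```python
# Sketch of call-by-need evaluation: a suspended computation (thunk)
# is evaluated at most once; later demands reuse the memoised value.
# Dropping the closure after evaluation mimics an eager reclamation
# of the now-unreachable suspension.

class Thunk:
    """A call-by-need suspension with memoisation."""
    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:
            self.value = self.compute()
            self.compute = None      # free the closure: eager reclamation
            self.evaluated = True
        return self.value

calls = []
t = Thunk(lambda: calls.append("eval") or 42)
print(t.force(), t.force())   # the body runs once; second force reuses the memo
print(len(calls))             # the computation was performed exactly once
```

Space usage analyses of the kind mentioned above hinge on exactly when such suspensions and their closures become unreachable.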
Already today, modern driver assistance systems contribute more and more to making individual mobility in road traffic safer and more comfortable. For this purpose, modern vehicles are equipped with a multitude of sensors and actuators which perceive and interpret the environment of the vehicle and react to it. In order to reach the next set of goals along this path, for example to be able to assist the driver in increasingly complex situations or to reach a higher degree of autonomy of driver assistance systems, a detailed understanding of the vehicle environment and especially of other moving traffic participants is necessary.
It is known that motion information plays a key role for human object recognition [Spelke, 1990]. However, full 3D motion information is mostly not taken into account for stereo-vision-based object segmentation in the literature. In this thesis, novel approaches for motion-based object segmentation of stereo image sequences are proposed from which a generic environmental model is derived that contributes to a more precise analysis and understanding of the respective traffic scene. The aim of the environmental model is to yield a minimal scene description in terms of a few moving objects and stationary background such as houses, crash barriers or parked vehicles. A minimal scene description aggregates as much information as possible and is characterized by its stability, precision and efficiency.
Instead of dense stereo and optical flow information, the proposed object segmentation builds on the so-called Stixel World, an efficient superpixel-like representation of space-time stereo data. As it turns out, this step substantially increases the stability of the segmentation and reduces the computational time by several orders of magnitude, thus making real-time automotive use possible in the first place. Besides the efficient, real-time capable optimization, the object segmentation has to be able to cope with significant noise, which is due to the measurement principle of the stereo camera system used. For that reason, in order to obtain an optimal solution under the given extreme conditions, the segmentation task is formulated as a Bayesian optimization problem, which allows regularizing prior knowledge and redundancies to be incorporated into the object segmentation.
Object segmentation as it is discussed here means unsupervised segmentation since typically the number of objects in the scene and their individual object parameters are not known in advance. This information has to be estimated from the input data as well.
For inference, two approaches with their individual pros and cons are proposed, evaluated and compared. The first approach is based on dynamic programming. The key advantage of this approach is the possibility to take into account non-local priors such as shape or object size information, which is impossible or prohibitively expensive with more local, conventional graph optimization approaches such as graph cut or belief propagation.
In the first instance, the Dynamic Programming approach is limited to one-dimensional data structures, in this case to the first Stixel row. A possible extension to capture multiple Stixel rows is discussed at the end of this thesis.
Further novel contributions include a special outlier concept to handle gross stereo errors associated with so-called stereo tear-off edges. Additionally, object-object interactions are taken into account by explicitly modeling object occlusions. These extensions prove to be dramatic improvements in practice.
This first approach is compared with a second approach that is based on an alternating optimization of the Stixel segmentation and of the relevant object parameters in an expectation maximization (EM) sense. The labeling step is performed by means of the α-expansion graph-cut algorithm; the parameter estimation step is done via one-dimensional sampling and multidimensional gradient descent. By using the Stixel World and due to an efficient implementation, one step of the optimization only takes about one millisecond on a standard single CPU core. To the knowledge of the author, at the time of development there was no faster global optimization in a demonstrator car.
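The alternating labeling/parameter-estimation scheme described above can be sketched as a generic hard-assignment EM-style loop on one-dimensional data. This is a toy stand-in (nearest-model labeling plus mean re-estimation), not the thesis's Stixel formulation; the data and initial parameters are assumptions:

```python
# Toy alternating optimization in the spirit of hard-assignment EM:
# a labeling step (assign each measurement to the model that explains
# it best) and a parameter step (re-estimate each model from its
# members) alternate until the labeling stops changing.

def alternate(data, params, iters=100):
    labels = [0] * len(data)
    for _ in range(iters):
        # Labeling step: nearest model under squared error.
        new_labels = [min(range(len(params)),
                          key=lambda k: (x - params[k]) ** 2)
                      for x in data]
        # Parameter step: each model becomes the mean of its members.
        for k in range(len(params)):
            members = [x for x, l in zip(data, new_labels) if l == k]
            if members:
                params[k] = sum(members) / len(members)
        if new_labels == labels:   # converged
            break
        labels = new_labels
    return labels, params

data = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]   # two underlying "objects"
labels, params = alternate(data, [0.0, 10.0])
print(labels, params)
```

In the thesis's setting the labeling step is a graph-cut problem over Stixels and the parameter step estimates object motion, but the alternation skeleton is the same.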
For both approaches, various testing scenarios have been carefully selected that allow the proposed methods to be examined thoroughly under different real-world conditions, even with only limited ground truth at hand. As an additional innovative application, the first approach was successfully implemented in a demonstrator car that drove the so-called Bertha Benz Memorial Route from Mannheim to Pforzheim autonomously in real traffic.
At the end of this thesis, the limits of the proposed systems are discussed and an outlook on possible future work is given.
This thesis presents an organic task-processing system that implements the reliable management and processing of tasks on multi-core-based SoC architectures. Due to the increasing integration density, planar semiconductor manufacturing increasingly exhibits side effects that lead to faults and component failures during system operation, progressively impairing the reliability of SoCs. Already at feature sizes below 100 nm, a drastic increase in electromigration and radiation sensitivity can be observed. At the same time, complexity (application requirements) continues to grow, with the current trend aiming at an ever stronger networking of devices (ubiquitous systems). To master these challenges autonomously, this thesis presents a biologically inspired system concept. It draws on the properties and techniques of the human endocrine hormone system and implements a fully decentralised operating principle with self-X properties from the field of Organic Computing. This organic operating principle is carried out in two separate control loops, which jointly take over the decentralised management and processing of tasks. The first control loop is realised by the artificial hormone system (AHS) and distributes all tasks to the available cores. The distribution takes place with the participation of all cores and takes their local suitability and current state into account. This is followed by synchronisation with the second control loop, realised by hormone-controlled task processing (HTP), which performs a dynamic task transfer according to the current distribution.
The states of tasks available in the network are also taken into account, resulting in a complete processing path: from the initial task assignment, via the transfer of the task components, to the creation of the local task instance, up to the start of the associated task process on the respective core. The system implementation consists of modular hardware and software components, so the system can be operated entirely in hardware, entirely in software, or in hybrid form. Using an FPGA-based prototype, the formally proven time bounds were confirmed by measurements in a real system environment. The measurement results show excellent time bounds with respect to the self-X properties. Furthermore, a quantitative comparison with other systems shows that the decentralised control approach chosen here is clearly superior in terms of fault tolerance, area, and computational overhead.
We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy-channel model of Brill and Moore for spelling error correction. We also consider elementary combination techniques for our models such as language-model-weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.
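The ‘k-best decoding plus dictionary lookup’ idea can be sketched with plain edit distance as the scoring model: score every dictionary word against the misspelling and keep the k best candidates. This is a simplified stand-in for the trained string-to-string models of the paper, and the toy lexicon is an assumption:

```python
import heapq

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def k_best_corrections(word, dictionary, k=3):
    """Rank dictionary entries by edit distance to 'word'; keep the k best."""
    return heapq.nsmallest(k, dictionary,
                           key=lambda w: edit_distance(word, w))

# Toy lexicon, illustrative only.
lexicon = ["dominus", "domus", "donum", "modus", "bonus"]
print(k_best_corrections("domxs", lexicon, k=2))
```

A weighted variant replaces the unit costs with confusion-dependent costs, and a language model can re-rank the resulting k-best list.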
The behaviour of electronic circuits is influenced by ageing effects. Modelling the behaviour of circuits is a standard approach for the design of faster, smaller, more reliable and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model, which is based purely on the behavioural specification of a system. For a given specification, simulation can reveal whether a system does not comply with the specification, and thus provide a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify the impact on specification violations, failures and robustness. We study ageing effects in the context of analogue circuits. Here, models must factor in infinitely many circuit states. Ageing effects have a cause and an impact, both of which require models. On both these ends, the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are not valid in many cases, because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account when modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms, as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible by means of a simple structure.
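A decay-process ageing state driven by a time-varying stress level, including a relaxation mechanism, can be sketched as follows. The rate constants, the saturation behaviour, and the stress trace are illustrative assumptions, not a calibrated device model from the thesis:

```python
import math

# Minimal sketch of a decay-type ageing state under time-varying stress:
# under stress the state saturates exponentially towards the stress level;
# without stress it partially relaxes back towards zero, more slowly.
# Rates (k_age, k_relax) and the stress trace are illustrative assumptions.

def step(state, stress, dt, k_age=0.5, k_relax=0.1):
    """Advance the ageing state by dt under a stress level in [0, 1]."""
    if stress > 0:
        target, rate = stress, k_age * stress   # saturation towards stress
    else:
        target, rate = 0.0, k_relax             # partial relaxation
    return target + (state - target) * math.exp(-rate * dt)

state = 0.0
trace = [1.0] * 50 + [0.0] * 50   # stress phase, then relaxation phase
for s in trace:
    state = step(state, s, dt=0.1)
print(f"residual ageing state: {state:.3f}")
```

The state is monotone during each phase and retains memory of past stress after relaxation, two of the properties listed above; a locally equivalent stress level for such a model is the constant stress that would produce the same state change over the interval.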
Modelling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and accordingly ageing models predominantly focus on digital applications. In the digital domain, signal levels are either on or off, or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing effect models for analogue components often require a more fine-grained characterization of the physical processes; here, signal levels can take arbitrary values to begin with. Such fine-grained, physically inspired ageing models do not scale to larger applications and are hard to simulate in reasonable time. To close the gap between physical processes and system-level ageing simulation, we propose a data-based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, a learning algorithm can find a complete model that is consistent with the data set. These ageing effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel model for the BTI effect, an ageing effect that permits relaxation. Building on this, a transistor-level ageing model for BTI that targets analogue circuits is proposed.
Similarly, we demonstrate how ageing data from analogue transistor-level circuit models can be lifted to purely behavioural block models. With this, we are the first to present a data-based hierarchical ageing modelling scheme. An ageing simulator for circuits or system-level models computes long-term transients, i.e. solutions of a differential equation. Long-term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long-term simulation becomes practical. We describe an adaptive two-time simulation algorithm that essentially skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through skipped frames. This involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long-term transients by exploiting the structure of the proposed ageing models. These models permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level. This level can be computed efficiently and also gives rise to a dynamic step-control mechanism. Ageing simulation has a wide range of applications. This thesis vastly improves the applicability of ageing simulation for analogue circuits in terms of modelling and efficiency. An ageing effect model that is part of a circuit component model accounts for parametric drift that is directly related to the operation mode. For example, asymmetric load on a comparator or power stage may lead to offset drift, which is not an empirical effect. Monitor circuits can report such effects during operation, when they become significant. Simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can restore broken components to a functional state.
We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable. Efficient ageing simulation allows various operation modes to be factored in and helps refine the selection. Using long-term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties for various use cases. This procedure enables the most ageing-resilient implementation to be selected automatically.
We study Gaifman locality and Hanf locality of an extension of first-order logic with modulo-p counting quantifiers (FO+MODp, for short) with arbitrary numerical predicates. We require that the validity of formulas is independent of the particular interpretation of the numerical predicates and refer to such formulas as arb-invariant formulas. This paper gives a detailed picture of locality and non-locality properties of arb-invariant FO+MODp. For example, on the class of all finite structures, for any p ≥ 2, arb-invariant FO+MODp is neither Hanf nor Gaifman local with respect to a sublinear locality radius. However, in case that p is an odd prime power, it is weakly Gaifman local with a polylogarithmic locality radius. And when restricting attention to the class of string structures, for odd prime powers p, arb-invariant FO+MODp is both Hanf and Gaifman local with a polylogarithmic locality radius. Our negative results build on examples of order-invariant FO+MODp formulas presented in Niemistö's PhD thesis. Our positive results make use of the close connection between FO+MODp and Boolean circuits built from NOT gates and AND, OR, and MODp gates of arbitrary fan-in.