In this dissertation a non-deterministic lambda calculus with call-by-need evaluation is treated. Call-by-need means that subexpressions are evaluated at most once, and only if their value must be known to compute the overall result. This technique, also called "sharing", is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. In addition, the calculus has function application, lambda abstraction, sequential evaluation, and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or side-effecting input/output. In this work, non-determinism additionally makes it visible when sharing is broken. Based on the bisimulation method, this work develops a notion of equality that respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 is a new concept, insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus; therefore, a further calculus, lambda-Approx, has to be defined in section 3.2. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND and lambda-Approx coincides. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. Using this extended method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained within the contextual preorder of the lambda-ND calculus.
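The role of sharing described above can be pictured with a small Python sketch (an illustration, not part of the thesis): a memoizing thunk models a let-bound expression that is forced at most once, and a non-deterministic `pick` makes it observable when that sharing is broken.

```python
import random

class Thunk:
    """Call-by-need: the suspended expression is evaluated at most once,
    and its result is shared by every later use (a let-bound variable)."""
    def __init__(self, expr):
        self.expr, self.done, self.value = expr, False, None

    def force(self):
        if not self.done:
            self.value, self.done = self.expr(), True
        return self.value

def pick():
    """Non-deterministic choice between 0 and 1 (the calculus' pick)."""
    return random.choice([0, 1])

# let x = pick() in (x, x) -- with sharing, both uses see the same value.
x = Thunk(pick)
shared = (x.force(), x.force())

# Without sharing (call-by-name), each use re-evaluates pick(), so the
# two components may differ: non-determinism exposes broken sharing.
unshared = (pick(), pick())
```

With a deterministic expression the two evaluation strategies are indistinguishable from the outside; it is precisely `pick` that turns the presence or absence of sharing into an observable difference.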
However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity. Finally, chapter 6 discusses possible extensions of the base calculus, such as recursive bindings or case and constructors. As a fundamental study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without major hurdles. Recursive bindings, however, cannot be represented simply by a fixed-point combinator such as Y, so further investigations are necessary.
This thesis has explored how structural techniques can be applied to the problem of formal verification for sequential circuits. Algorithms for formal verification that operate on non-canonical gate-netlist representations of digital circuits have certain advantages over traditional techniques based on canonical representations such as BDDs. They make it possible to exploit problem-specific knowledge because they can take into account structural properties of the designs being analyzed. This allows us to break the problem down into sub-problems which are (hopefully) easier to solve. In the past, however, the main application of such structural techniques was in the field of combinational equivalence checking. One reason for this is that the behaviour of a sequential system depends not only on its inputs but also on its internal states, and no concepts had been developed to date that allow structural methods to deal with large sets of states. An important goal of this research was therefore to develop structural, non-canonical forms of representing the reachable states of a finite state machine and to develop methods for reachability analysis based on such representations. To reach this goal, two steps were taken. First, a framework for manipulating Boolean functions represented as gate netlists was established. Second, using this framework, a structural method for FSM traversal was developed, serving as the basis for an equivalence-checking algorithm for sequential circuits. The framework for manipulating Boolean functions represented as multi-level combinational networks is based on a new concept of an implicant in a multi-level network and on an AND/OR-type enumeration technique which allows us to derive such implicants. This concept extends the classical notion of an implicant in two-level circuits to the multi-level case. Using this notion, arbitrary transformations in multi-level combinational networks can be performed.
The multi-level network implicants can be determined from AND/OR reasoning graphs, which are associated with an AND/OR reasoning technique operating directly on the gate-netlist description of a multi-level circuit. This reasoning technique has the important property that it is complete, i.e. the associated AND/OR trees contain all prime implicants of a Boolean function at an arbitrary node in a combinational circuit. In other words, AND/OR graphs constructed for a network function serve as a representation of this function. A great advantage over BDDs is that AND/OR graphs, besides representing the logic function, also represent some structural properties of the analyzed circuitry. This permits the development of heuristics that are specially tailored for certain applications such as logic optimization or verification. Another advantage, especially useful for logic optimization, is that the proposed AND/OR enumeration scheme is not restricted to a specific logic alphabet such as B3 = {0, 1, X}. By using Roth's D-calculus based on B5 = {0, 1, X, D, D̄}, permissible implicants can be determined. Transformations based on permissible implicants exploit observability don't-care conditions in logic synthesis by creating permissible functions at internal network nodes. To evaluate the new structural framework for manipulating Boolean functions represented as gate netlists, several experiments with implicant-based optimization of multi-level circuits were performed. The results show that implicant-based circuit transformations lead to significantly better optimization results than traditional synthesis techniques. Next, based on the proposed structural methods for Boolean function manipulation, techniques for representing and manipulating the set of states of a sequential circuit were developed.
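The flavour of AND/OR-type enumeration can be conveyed with a simplified Python sketch (not the thesis' algorithm, and the netlist below is hypothetical): to justify a value at a gate, an AND gate at 1 (or an OR at 0) constrains all fanins, while an AND at 0 (or an OR at 1) opens one branch per fanin; the consistent input assignments collected this way are implicants of the node function.

```python
# Hypothetical gate netlist: name -> (op, fanins); primary inputs have no entry.
NETLIST = {
    "g1":  ("AND", ["a", "b"]),
    "g2":  ("OR",  ["b", "c"]),
    "out": ("AND", ["g1", "g2"]),
}

def merge(c, d):
    """Combine two cubes (partial assignments); None on conflicting literals."""
    r = dict(c)
    for k, v in d.items():
        if r.get(k, v) != v:
            return None
        r[k] = v
    return r

def justify(node, value, netlist):
    """Enumerate cubes of input literals that force `node` to `value`."""
    if node not in netlist:            # primary input: a single-literal cube
        return [{node: value}]
    op, ins = netlist[node]
    if op == "NOT":
        return justify(ins[0], 1 - value, netlist)
    if (op == "AND" and value == 1) or (op == "OR" and value == 0):
        cubes = [{}]                   # AND branch: constrain every fanin
        for fanin in ins:
            new = []
            for c in cubes:
                for d in justify(fanin, value, netlist):
                    m = merge(c, d)
                    if m is not None:
                        new.append(m)
            cubes = new
        return cubes
    # OR branch: any single fanin at `value` suffices
    return [cube for fanin in ins for cube in justify(fanin, value, netlist)]
```

Here `justify("out", 1, NETLIST)` enumerates cubes forcing the output to 1, such as {a: 1, b: 1}; primality is not enforced in this sketch, so subsumed cubes may also appear.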
The concept of a “stub circuit” was introduced, which implicitly represents a set of state vectors as the range of a multi-output function given as a gate netlist. The stub circuit is the result of an existential quantification operation, obtained by functional decomposition using implicant-based netlist transformations and a network-cutting procedure. Using this existential quantification operation, a new structural FSM traversal algorithm was formulated which performs a fixed-point iteration on the set of reachable states represented by the stub circuit. The proposed approach performs a reachability analysis of the states of a sequential circuit. It operates on gate netlists and naturally allows structural properties of the design under consideration to be incorporated into the reasoning. Structural FSM traversal is therefore an interesting alternative to traditional symbolic FSM traversal, especially in those applications of formal verification where structural properties can be exploited. Structural FSM traversal was applied to the problem of sequential equivalence checking. Here, structural similarities between the designs to be compared can effectively reduce the complexity of the verification task. The FSM to be traversed is a special product machine called a sequential miter. The special structural properties of this product machine have made it possible to formulate an approximate algorithm for structural FSM traversal, called record and play(). This algorithm uses an approximation of the reachable state set represented by the stub circuit, which is very beneficial for performance. Instead of calculating the stub circuit with the exact algorithm, implicant-based transformations directly exploiting structural design similarities are performed. These transformations, together with existential quantification implemented by the cutting procedure, lead to an over-approximation of the reachable state set.
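The fixed-point iteration at the core of FSM traversal can be pictured with an explicit-state analogue (a toy sketch; the thesis' algorithm works symbolically on the stub circuit rather than on enumerated states):

```python
from itertools import product

def reachable_states(init, next_state, n_inputs):
    """Least fixed point of the image operation: repeatedly add the
    successors of newly found states until nothing changes."""
    reached = set(init)
    frontier = set(init)
    while frontier:
        image = {next_state(s, i)
                 for s in frontier
                 for i in product((0, 1), repeat=n_inputs)}
        frontier = image - reached     # only genuinely new states
        reached |= frontier
    return reached

# Toy FSM: a 2-bit counter that increments only while its enable input is 1.
count_up = lambda s, i: (s + 1) % 4 if i == (1,) else s
```

Calling `reachable_states({0}, count_up, 1)` converges once the image adds no new states; symbolic and structural traversal perform the same iteration on set representations instead of individual states.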
By this over-approximation, only such unreachable product states are added to the set represented by the stub circuit as are unreachable at the current point in time but nevertheless equivalent. Since more product states are added to the set of reachable states, the traversal is sometimes accelerated drastically, i.e. the fixed point is reached in far fewer steps. The algorithm record and play() was applied to the problem of checking the equivalence of a circuit with its optimized and retimed version. Retiming is a form of sequential circuit optimization that can radically alter the state encoding of a circuit. Traditional FSM traversal techniques often fail because the BDDs needed to represent the reachable state set and the transition relation of the product machine become too large. Experiments were conducted to evaluate the performance of record and play() on a standard set of sequential benchmark circuits. The algorithm was capable of proving the equivalence of optimized and retimed circuits with their original versions, some of which (to our knowledge) had never before been verified using traditional techniques such as symbolic FSM traversal. The experimental results are very promising. Future research will therefore explore how structural FSM traversal can be applied to model checking.
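A toy explicit-state version of the sequential-miter idea (a sketch, not record and play() itself): traverse the product of the two machines and require equal outputs in every reachable product state. The toggle flip-flop and its re-encoded twin below stand in for an original design and a retimed/re-encoded one.

```python
def check_equivalence(initA, nextA, outA, initB, nextB, outB, inputs):
    """Explicit-state sequential-miter traversal: enumerate the reachable
    states of the product machine and require that both machines produce
    the same output in every reachable product state."""
    reached = {(initA, initB)}
    frontier = [(initA, initB)]
    while frontier:
        sA, sB = frontier.pop()
        for i in inputs:
            if outA(sA, i) != outB(sB, i):   # miter output raised: not equivalent
                return False
            succ = (nextA(sA, i), nextB(sB, i))
            if succ not in reached:
                reached.add(succ)
                frontier.append(succ)
    return True

# Toggle flip-flop vs. the same behaviour under an inverted state encoding
# (t encodes 1 - s), mimicking a design whose state encoding was altered.
eq = check_equivalence(
    0, lambda s, i: s ^ i, lambda s, i: s,
    1, lambda t, i: 1 - ((1 - t) ^ i), lambda t, i: 1 - t,
    inputs=(0, 1),
)
```

Adding extra product states whose two components are equivalent, as record and play() does, cannot flip the verdict: such states never raise the miter output, but they can let the fixed point be reached sooner.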
In this dissertation the formal abstraction and verification of analog circuits is examined. An approach is introduced that automatically abstracts a transistor-level circuit with full Spice accuracy into a hybrid automaton (HA) in various output languages. The generated behavioral model exhibits a significant simulation speed-up compared to the original netlist while maintaining acceptable accuracy, and can therefore be used in various verification and validation routines. On top of that, the generated models can be formally verified against their Spice netlists, making the obtained models correct by construction.
The generated abstract models can be extended to enclose modeling-related as well as technology-dependent parameter variations with little over-approximation. As these models enclose the various behaviors of the sampled netlists, they are of significant importance, since they can replace several simulations with just a single reachability analysis or symbolic simulation. Moreover, these models can also be used in different verification routines, as demonstrated in this dissertation.
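How a model can enclose parameter variations may be sketched with interval arithmetic (a toy example, unrelated to the thesis' tooling): propagating interval bounds through Euler steps of dx/dt = a*x + b covers every trajectory whose parameter a lies anywhere in a tolerance band, so one enclosure stands in for many individual simulations.

```python
def interval_step(x, a, b, dt):
    """One forward-Euler step of dx/dt = a*x + b with interval-valued
    x, a, b given as (lo, hi) pairs.  The update x + dt*(a*x + b) is
    multilinear in (x, a, b), so its extrema over the box lie at corner
    points; min/max over all corners yields a conservative enclosure."""
    corners = [xv + dt * (av * xv + bv)
               for xv in x for av in a for bv in b]
    return (min(corners), max(corners))

# Enclose a decaying node voltage whose decay rate varies by +/-10 %
# (a hypothetical technology-dependent parameter variation).
x = (1.0, 1.0)
for _ in range(1000):
    x = interval_step(x, a=(-1.1, -0.9), b=(0.0, 0.0), dt=1e-3)
```

After 1000 steps (t = 1) the interval brackets the family of exponentials between roughly e^-1.1 and e^-0.9; repeated boxing does introduce some over-approximation over time, mirroring the "little over-approximation" trade-off mentioned above.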
As the obtained models are described by HAs with linear behaviors in the locations, the abstract models can also be linked compositionally, thereby allowing the abstraction of complex analog circuits.
Depending on the specified modeling settings, including for example the number of locations of the HA and the description of the system behavior, the accuracy, speed-up, and various additional properties of the HA can be influenced. This is examined in detail in this dissertation. The underlying abstraction process is first covered in detail. Several extensions are then handled, including the modeling of HAs with parameter variations. The obtained models are then verified using various verification methodologies. Finally, the accuracy and speed-up of the abstraction methodology are evaluated on several transistor-level circuits, ranging from simple operational amplifiers up to complex circuits.
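A minimal sketch of what such a generated model looks like operationally (hypothetical values, not from the thesis): each location carries linear dynamics dx/dt = a*x + b, and a guard triggers the discrete transition to the next location, here abstracting a relaxation-oscillator-like circuit.

```python
# Hypothetical two-location hybrid automaton with linear flow per location.
LOCATIONS = {
    "charge":    {"a": -1.0, "b": 1.0, "guard": lambda x: x >= 0.7, "to": "discharge"},
    "discharge": {"a": -1.0, "b": 0.0, "guard": lambda x: x <= 0.3, "to": "charge"},
}

def simulate(loc, x, t_end, dt=1e-3):
    """Forward-Euler simulation of the HA; returns a trace of (t, loc, x)."""
    trace = []
    t = 0.0
    while t < t_end:
        d = LOCATIONS[loc]
        if d["guard"](x):                 # discrete transition when the guard holds
            loc = d["to"]
            d = LOCATIONS[loc]
        x += dt * (d["a"] * x + d["b"])   # continuous (linear) flow in the location
        t += dt
        trace.append((t, loc, x))
    return trace
```

Because each location is linear, such models simulate far faster than the transistor-level netlist they abstract, and they are exactly the shape that reachability tools for linear hybrid systems consume.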