This thesis has explored how structural techniques can be applied to the problem of formal verification for sequential circuits. Algorithms for formal verification which operate on non-canonical gate netlist representations of digital circuits have certain advantages over the traditional techniques based on canonical representations such as BDDs. They make it possible to exploit problem-specific knowledge because they can take into account structural properties of the designs being analyzed. This allows us to break the problem down into sub-problems which are (hopefully) easier to solve. However, in the past, the main application of such structural techniques was in the field of combinational equivalence checking. One reason for this is that the behaviour of a sequential system depends not only on its inputs but also on its internal states, and no concepts had been developed to date that allow structural methods to deal with large sets of states. An important goal of this research was therefore to develop structural, non-canonical forms of representing the reachable states of a finite state machine and to develop methods for reachability analysis based on such representations. In order to reach this goal, two steps were taken. Firstly, a framework for manipulating Boolean functions represented as gate netlists was established. Secondly, using this framework, a structural method for FSM traversal was developed, serving as the basis for an equivalence checking algorithm for sequential circuits.
The framework for manipulating Boolean functions represented as multi-level combinational networks is based on a new concept of an implicant in a multi-level network and on an AND/OR-type enumeration technique which allows us to derive such implicants. This concept extends the classical notion of an implicant in two-level circuits to the multi-level case. Using this notion, arbitrary transformations in multi-level combinational networks can be performed. The multi-level network implicants can be determined from AND/OR reasoning graphs, which are associated with an AND/OR reasoning technique operating directly on the gate netlist description of a multi-level circuit. This reasoning technique has the important property that it is complete, i.e. the associated AND/OR trees contain all prime implicants of a Boolean function at an arbitrary node in a combinational circuit. In other words, AND/OR graphs constructed for a network function serve as a representation of this function. A great advantage over BDDs is that AND/OR graphs, besides representing the logic function, also capture some structural properties of the analyzed circuitry. This makes it possible to develop heuristics that are specially tailored for certain applications such as logic optimization or verification. Another advantage, which is especially useful for logic optimization, is the fact that the proposed AND/OR enumeration scheme is not restricted to the use of a specific logic alphabet such as B3 = {0, 1, X}. By using Roth's D-calculus based on B5 = {0, 1, X, D, D̄}, permissible implicants can be determined. Transformations based on permissible implicants exploit observability don't-care conditions in logic synthesis by creating permissible functions at internal network nodes. In order to evaluate the new structural framework for manipulating Boolean functions represented as gate netlists, several experiments with implicant-based optimization of multi-level circuits were performed.
The results show that implicant-based circuit transformations lead to significantly better optimization results than traditional synthesis techniques.
Next, based on the proposed structural methods for Boolean function manipulation, techniques for representing and manipulating the set of states of a sequential circuit have been developed. The concept of a "stub circuit" was introduced, which implicitly represents a set of state vectors as the range of a multi-output function given as a gate netlist. The stub circuit is the result of an existential quantification operation which is obtained by functional decomposition using implicant-based netlist transformations and a network cutting procedure. Using this existential quantification operation, a new structural FSM traversal algorithm was formulated which performs a fixed point iteration on the set of reachable states represented by the stub circuit. The proposed approach performs a reachability analysis of the states of a sequential circuit. It operates on gate netlists and naturally allows structural properties of the design under consideration to be incorporated into the reasoning. Therefore, structural FSM traversal is an interesting alternative to traditional symbolic FSM traversal, especially in those applications of formal verification where structural properties can be exploited.
Structural FSM traversal was applied to the problem of sequential equivalence checking. Here, structural similarities between the designs to be compared can effectively reduce the complexity of the verification task. The FSM to be traversed is a special product machine called a sequential miter. The special structural properties of this product machine have made it possible to formulate an approximate algorithm for structural FSM traversal, called record_and_play(). This algorithm uses an approximation of the reachable state set represented by the stub circuit which is very beneficial for performance. Instead of calculating the stub circuit using the exact algorithm, implicant-based transformations directly exploiting structural design similarities are performed. These transformations, together with existential quantification implemented by the cutting procedure, lead to an over-approximation of the reachable state set. This over-approximation adds only those product states to the set represented by the stub circuit which are unreachable at the current point in time but are nevertheless equivalent. Therefore, more product states are added to the set of reachable states, sometimes leading to a drastic acceleration of the traversal, i.e. the fixed point is reached in far fewer steps.
The algorithm record_and_play() was applied to the problem of checking the equivalence of a circuit with its optimized and retimed version. Retiming is a form of sequential circuit optimization which can radically alter the state encoding of a circuit. Traditional FSM traversal techniques often fail because the BDDs needed to represent the reachable state set and the transition relation of the product machine become too large. Experiments were conducted to evaluate the performance of record_and_play() on a standard set of sequential benchmark circuits. The algorithm was capable of proving the equivalence of optimized and retimed circuits with their original versions, some of which (to our knowledge) had never before been verified using traditional techniques like symbolic FSM traversal. The experimental results are very promising.
Future research will therefore explore how structural FSM traversal can be applied to model checking.
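As an illustration of the fixed-point iteration that any FSM reachability analysis performs (shown here in a deliberately simplified, explicit-state form rather than on the stub-circuit netlist representations used in the thesis), a minimal Python sketch might look as follows; the function names and the toy counter example are illustrative assumptions.

    def reachable_states(initial_states, next_states):
        """Breadth-first fixed-point iteration: repeatedly compute the image of
        the current frontier under the transition function until no new states
        appear."""
        reached = set(initial_states)
        frontier = set(initial_states)
        while frontier:                          # fixed point: frontier is empty
            image = set()
            for s in frontier:
                image |= next_states(s)          # successors of the frontier states
            frontier = image - reached           # keep only genuinely new states
            reached |= frontier
        return reached

    # Toy usage: a counter modulo 5 starting in state 0.
    assert reachable_states({0}, lambda s: {(s + 1) % 5}) == {0, 1, 2, 3, 4}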
This dissertation treats a non-deterministic lambda calculus with call-by-need evaluation. Call-by-need means that subexpressions are evaluated at most once and only if their value must be known to compute the overall result. Also called "sharing", this technique is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. In addition, the calculus has function application, lambda abstraction, sequential evaluation and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or input/output with side effects. In this work, non-determinism additionally makes it visible when sharing is broken.
Based on the bisimulation method, this work develops a notion of equality which respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 represents a new concept, insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus. So in section 3.2 a further calculus, lambda-Approx, has to be defined. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND agrees with evaluation in lambda-Approx. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. Using this (extended) method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained within the contextual preorder of the lambda-ND calculus. However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity.
Finally, chapter 6 discusses possible extensions of the base calculus such as recursive bindings or case and constructors. As a foundational study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without major hurdles. However, recursive bindings cannot be represented simply by a fixed point combinator like Y, so further investigations are necessary.
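The way non-determinism makes broken sharing observable can be illustrated with a small Python analogy (this is an informal sketch, not the lambda-ND calculus itself): a memoizing thunk plays the role of a let-bound, shared subexpression, and pick is modelled as a random choice. The class and function names are illustrative.

    import random

    class Thunk:
        """Call-by-need: the suspended expression is evaluated at most once."""
        def __init__(self, compute):
            self._compute = compute
            self._done = False
            self._value = None

        def force(self):
            if not self._done:
                self._value = self._compute()
                self._done = True
            return self._value

    def pick(a, b):
        """Non-deterministic choice, in the spirit of the pick construct."""
        return a if random.random() < 0.5 else b

    # let x = pick 0 1 in x + x  -- with sharing the choice is made only once:
    x = Thunk(lambda: pick(0, 1))
    shared = x.force() + x.force()       # always 0 or 2
    # Without sharing, pick is evaluated twice and the result may be 1,
    # i.e. non-determinism makes broken sharing visible:
    unshared = pick(0, 1) + pick(0, 1)   # 0, 1 or 2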
Driving can be dangerous. Humans become inattentive when performing a monotonous task like driving. Multi-tasking, such as using a cellular phone while driving, can also break the driver's concentration and increase the risk of accidents. Other factors like exhaustion, nervousness and excitement affect the driver's performance and response time. Consequently, car manufacturers have developed systems over the last decades which assist the driver under various circumstances. These systems are called driver assistance systems. Driver assistance systems are meant to support the task of driving, and their field of action varies from alerting the driver, with acoustical or optical warnings, to taking control of the car, such as keeping the vehicle in the traffic lane until the driver resumes control. For this purpose, the vehicle is equipped with on-board sensors which allow the perception of the environment and/or the state of the vehicle. Cameras are sensors which extract useful information about the visual appearance of the environment. Additionally, a binocular system allows the extraction of 3D information. One of the main requirements for most camera-based driver assistance systems is accurate knowledge of the motion of the vehicle. Some sources of information, like velocimeters and GPS, are in common use in vehicles today. Nevertheless, the resolution and accuracy usually achieved with these systems are not sufficient for many real-time applications. The computation of ego-motion from sequences of stereo images for the implementation of intelligent driving systems, like autonomous navigation or collision avoidance, constitutes the core of this thesis.
This dissertation proposes a framework for the simultaneous computation of the 6 degrees of freedom of ego-motion (rotation and translation in 3D Euclidean space), the estimation of the scene structure and the detection and estimation of independently moving objects. The input is provided exclusively by a binocular system, and the framework does not call for any particular data acquisition strategy, i.e. the stereo images are processed just as they are provided. Stereo allows one to establish correspondences between left and right images, estimating 3D points of the environment via triangulation. Likewise, feature tracking establishes correspondences between the images acquired at different time instants. When both are used together for a large number of points, the result is a set of clouds of 3D points with point-to-point correspondences between clouds.
The apparent motion of the 3D points between consecutive frames is caused by a variety of reasons. The most dominant motion for most of the points in the clouds is caused by the ego-motion of the vehicle: as the vehicle moves and images are acquired, the relative position of the world points with respect to the vehicle changes. Motion is also caused by objects moving in the environment. They move independently of the vehicle motion, so the observed motion for these points is the sum of the ego-vehicle motion and the independent motion of the object. A third reason, of paramount importance in vision applications, is correspondence problems, i.e. the incorrect spatial or temporal assignment of the point-to-point correspondence. Furthermore, all the points in the clouds are actually noisy measurements of the real unknown 3D points of the environment.
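For a rectified stereo pair with an ideal pinhole model, the triangulation of a 3D point from a left/right correspondence reduces to a few lines. The sketch below is the generic textbook formulation (focal length f in pixels, baseline B, principal point (cx, cy)) and not the exact processing chain of the thesis; the names are illustrative.

    def triangulate(u_left, u_right, v, f, B, cx, cy):
        """Recover (X, Y, Z) in the left camera frame from a correspondence
        with disparity d = u_left - u_right (rectified images, d assumed nonzero)."""
        d = u_left - u_right        # disparity in pixels
        Z = f * B / d               # depth from disparity
        X = (u_left - cx) * Z / f
        Y = (v - cy) * Z / f
        return X, Y, Z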
Solving ego-motion and scene structure from the clouds of points requires a prior analysis of the noise involved in the imaging process and of how it propagates as the data is processed. Therefore, this dissertation analyzes the noise properties of the 3D points obtained through stereo triangulation. This leads to the detection of a bias in the estimation of 3D position, which is corrected with a reformulation of the projection equation. Ego-motion is obtained by finding the rotation and translation between the two clouds of points. This problem is known as absolute orientation, and many solutions based on least squares have been proposed in the literature. This thesis reviews the available closed-form solutions to the problem.
The proposed framework is divided into three main blocks: 1) stereo and feature tracking computation, 2) ego-motion estimation and 3) estimation of 3D point position and 3D velocity. The first block solves the correspondence problem, providing the clouds of points as output. No special implementation of this block is required in this thesis. The ego-motion block computes the motion of the cameras by finding the absolute orientation between the clouds of static points in the environment. Since the cloud of points might contain independently moving objects and outliers generated by false correspondences, directly computing the least-squares solution might lead to an erroneous result. The first contribution of this thesis is an effective rejection rule that detects outliers based on the distance between predicted and measured quantities, and reduces the effects of noisy measurements by assigning appropriate weights to the data. This method is called the Smoothness Motion Constraint (SMC). The ego-motion of the camera between two frames is obtained by finding the absolute orientation between consecutive clouds of weighted 3D points. The complete ego-motion since initialization is obtained by concatenating the individual motion estimates. This leads to a super-linear propagation of the error, since noise is integrated. A second contribution of this dissertation is a predictor/corrector iterative method which integrates the clouds of 3D points of multiple time instants into the computation of ego-motion. The presented method considerably reduces the accumulation of errors in the estimated ego-position of the camera. Another contribution of this dissertation is a method which recursively estimates the 3D world position of a point and its velocity by fusing stereo, feature tracking and the estimated ego-motion in a Kalman filter system. An improved estimation of point position is obtained this way, which is used in the subsequent system cycle, resulting in an improved computation of ego-motion. The general contribution of this dissertation is a single framework for the real-time computation of scene structure, independently moving objects and ego-motion for automotive applications.
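A closed-form, SVD-based solution to the weighted absolute orientation problem (the least-squares step at the heart of the ego-motion block) can be sketched as follows. The weights stand in for a down-weighting scheme such as the Smoothness Motion Constraint; the function name and structure are illustrative assumptions rather than the thesis' implementation.

    import numpy as np

    def absolute_orientation(P, Q, w):
        """Find rotation R and translation t minimizing
        sum_i w[i] * ||R @ P[i] + t - Q[i]||^2 for corresponding 3D points."""
        w = w / w.sum()
        p_mean = (w[:, None] * P).sum(axis=0)        # weighted centroids
        q_mean = (w[:, None] * Q).sum(axis=0)
        Pc, Qc = P - p_mean, Q - q_mean
        H = (w[:, None] * Pc).T @ Qc                 # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                           # the sign term avoids reflections
        t = q_mean - R @ p_mean
        return R, t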
In the context of information theory, the term mutual information was first formulated by Claude Elwood Shannon. Information theory is the consistent mathematical description of technical communication systems. To this day, it is the basis of numerous applications in modern communications engineering and has become indispensable in this field. This work is concerned with the development of a concept for nonlinear feature selection from scalar, multivariate data on the basis of the mutual information.
From the viewpoint of modelling, the successful construction of a realistic model depends highly on the quality of the employed data. In the ideal case, high-quality data simply consists of the features relevant for deriving the model. In this context, it is important to possess a suitable method for measuring the degree of the mostly nonlinear dependencies between input and output variables. By means of such a measure, the relevant features can be selected specifically. During the course of this work, it will become evident that the mutual information is a valuable and feasible measure for this task and hence the method of choice for practical applications. Basically, and without any claim to being exhaustive, there are two typical situations that recommend the application of feature selection. On the one hand, feature selection plays an important role if the computability of a derived system model cannot be guaranteed due to a multitude of available features. On the other hand, the existence of very few data points with a significant number of features also recommends the employment of feature selection. The latter situation is closely related to the so-called "Curse of Dimensionality". The actual statement behind this is the necessity to reduce the dimensionality to obtain an adequate coverage of the data space. In other words, it is important to reduce the dimensionality of the data, since, for a constant number of data points, the coverage of the data space decreases exponentially with the dimensionality of the available data. In the context of mapping between input and output space, this goal is ideally reached by selecting only the relevant features from the available data set.
The basic idea for this work has its origin in the rather practical field of automotive engineering. It was motivated by the goals of a complex research project in which the nonlinear, dynamic dependencies among a multitude of sensor signals were to be identified. The final goal of these activities was to derive so-called virtual sensors from the identified dependencies among the installed automotive sensors. This enables the real-time computation of the required variable without the expense of additional hardware. The prospect of doing without additional computing hardware is a strong driving force, particularly in automotive engineering. In this context, the major problem was to find a feasible method to capture the linear as well as the nonlinear dependencies. As mentioned before, the goal of this work is the development of a flexibly applicable system for nonlinear feature selection. The important point here is to guarantee the practical computability of the developed method even for high-dimensional data spaces, which are rather realistic in technical environments. The measure employed for the feature selection process is based on the sophisticated concept of mutual information.
The high sensitivity and specificity of the mutual information to linear and nonlinear statistical dependencies makes it the method of choice for the development of a highly flexible, nonlinear feature selection framework. In addition to the mere selection of relevant features, the developed framework is also applicable to the nonlinear analysis of the temporal influences of the selected features. Hence, a subsequent dynamic modelling can be performed more efficiently, since the proposed feature selection algorithm additionally provides information about the temporal dependencies between input and output variables. In contrast to feature extraction techniques, the feature selection algorithm developed in this work has another considerable advantage: in the case of cost-intensive measurements, the variables with the highest information content can be selected in a prior feasibility study. Hence, the developed method can also be employed to avoid redundancy in the acquired data and thus prevent additional costs.
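A minimal sketch of ranking features by a histogram-based estimate of the mutual information could look as follows; the binning, the estimator and the function names are illustrative assumptions, not the estimator developed in the thesis.

    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of I(X;Y) in nats for two scalar variables."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)          # marginal of X
        py = pxy.sum(axis=0, keepdims=True)          # marginal of Y
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def rank_features(X, y, bins=16):
        """Rank the columns of X by their (possibly nonlinear) dependence on y."""
        scores = [mutual_information(X[:, j], y, bins) for j in range(X.shape[1])]
        return np.argsort(scores)[::-1], scores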
Algorithms and data structures constitute the theoretical foundations of computer science and are an integral part of any classical computer science curriculum. Due to their high level of abstraction, the understanding of algorithms is of crucial concern to the vast majority of novice students. To facilitate the understanding and teaching of algorithms, a new research field termed "algorithm visualisation" evolved in the early 1980s. This field is concerned with developing innovative techniques and concepts for the creation of effective algorithm visualisations for teaching, study, and research purposes. Due to the large number of requirements that high-quality algorithm visualisations need to meet, developing and deploying effective algorithm visualisations from scratch is often deemed an arduous, time-consuming task which necessitates high-level skills in didactics, design, programming and evaluation. A substantial part of this thesis is devoted to the problems and solutions related to the automation of three-dimensional visual simulation of algorithms. The scientific contribution of the research presented in this work lies in addressing three concerns:
- Identifying and investigating the issues related to the full automation of visual simulations.
- Developing an automation-based approach to minimising the effort required for creating effective visual simulations.
- Designing and implementing a rich environment for the visualisation of arbitrary algorithms and data structures in 3D.
The research presented in this thesis is of considerable interest to (1) researchers eager to facilitate the development process of algorithm visualisations, (2) educators concerned with adopting algorithm visualisations as a teaching aid and (3) students interested in developing their own algorithm animations.
Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms shape recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural models. In the present thesis, we introduce several recurrent network models of threshold units that combine spike-timing-dependent plasticity with homeostatic plasticity mechanisms like intrinsic plasticity or synaptic normalization. We investigate how these different forms of plasticity shape the dynamics and computational properties of recurrent networks. The networks receive input sequences composed of different symbols and learn the structure embedded in these sequences in an unsupervised manner. Information is encoded in the form of trajectories through a high-dimensional state space, reminiscent of recent biological findings on cortical coding. We find that these self-organizing plastic networks are able to represent and "understand" the spatio-temporal patterns in their inputs while maintaining their dynamics in a healthy regime suitable for learning. The emergent properties are not easily predictable on the basis of the individual plasticity mechanisms at work. Our results underscore the importance of studying the interaction of different forms of plasticity on network behavior.
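One update step of such a plastic recurrent network of binary threshold units, combining STDP with synaptic normalization and intrinsic plasticity, can be sketched as follows; all rates, sizes and names are illustrative assumptions rather than the exact model of the thesis.

    import numpy as np

    def network_step(x, u, W, T, eta_stdp=0.001, eta_ip=0.01, target_rate=0.1):
        """One step of recurrent dynamics followed by plasticity updates
        (x: current binary state, u: external input, W: weights, T: thresholds)."""
        x_new = (W @ x + u - T > 0).astype(float)        # threshold unit dynamics
        # STDP: strengthen pre-before-post pairs, weaken the reverse ordering
        W = np.clip(W + eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new)), 0.0, None)
        # Synaptic normalization: each unit's incoming weights sum to one
        W = W / (W.sum(axis=1, keepdims=True) + 1e-12)
        # Intrinsic plasticity: adapt thresholds towards a target firing rate
        T = T + eta_ip * (x_new - target_rate)
        return x_new, W, T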
A framework for the analysis and visualization of multielectrode spike trains / by Ovidiu F. Jurjut
(2009)
The brain is a highly distributed system of constantly interacting neurons. Understanding how it gives rise to our subjective experiences and perceptions depends largely on understanding the neuronal mechanisms of information processing. These mechanisms are still poorly understood, and the timescale on which the coding process evolves remains a matter of ongoing debate. Recently, multielectrode recordings of neuronal activity have begun to contribute substantially to elucidating how information coding is implemented in brain circuits. Unfortunately, the analysis and interpretation of multielectrode data are often difficult because of their complexity and large volume. Here we propose a framework that enables the efficient analysis and visualization of multielectrode spiking data. First, using self-organizing maps, we identified reoccurring multi-neuronal spike patterns that evolve on various timescales. Second, we developed a color-based visualization technique for these patterns. They were mapped onto a three-dimensional color space based on their reciprocal similarities, i.e., similar patterns were assigned similar colors. This innovative representation enables a quick and comprehensive inspection of spiking data and provides a qualitative description of pattern distribution across entire datasets. Third, we quantified the observed pattern expression motifs and investigated their contribution to the encoding of stimulus-related information. Emphasis was placed on the timescale on which patterns evolve, covering the temporal scales from synchrony up to mean firing rate. Using our multi-neuronal analysis framework, we investigated data recorded from the primary visual cortex of anesthetized cats. We found that cortical responses to dynamic stimuli are best described as successions of multi-neuronal activation patterns, i.e., trajectories in a multidimensional pattern space. Patterns that encode stimulus-specific information are not confined to a single timescale but can span a broad range of timescales, which are tightly related to the temporal dynamics of the stimuli. Therefore, the strict separation between synchrony and mean firing rate is somewhat artificial, as these two represent only extreme cases of a continuum of timescales expressed in cortical dynamics. The results also indicate that timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (~10-20 ms) appear to play a particularly salient role in coding, as patterns evolving on these timescales seem to be involved in the representation of stimuli with both slow and fast temporal dynamics.
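The first step of the framework, identifying reoccurring multi-neuronal patterns with a self-organizing map, can be illustrated with a compact numpy sketch; the grid size, learning schedules and names are illustrative assumptions, not the exact configuration used in the thesis.

    import numpy as np

    def train_som(patterns, grid=(8, 8), epochs=10, lr0=0.5, sigma0=3.0):
        """Fit a self-organizing map to binned multi-neuronal spike patterns
        (each row of `patterns` is one pattern, e.g. spike counts per neuron)."""
        rng = np.random.default_rng(0)
        n_units = grid[0] * grid[1]
        weights = rng.random((n_units, patterns.shape[1]))
        coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
        n_steps = epochs * len(patterns)
        step = 0
        for _ in range(epochs):
            for p in rng.permutation(patterns):
                frac = step / n_steps
                lr = lr0 * (1.0 - frac)                  # decaying learning rate
                sigma = sigma0 * (1.0 - frac) + 0.5      # shrinking neighbourhood
                bmu = np.argmin(((weights - p) ** 2).sum(axis=1))   # best matching unit
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                weights += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (p - weights)
                step += 1
        return weights

Each pattern can then be assigned to its best-matching unit, and nearby units given similar colors for visualization, in the spirit of the color-based technique described above.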
Relational data exchange deals with translating relational data according to a given specification. This problem is one of the many tasks that arise in data integration, for example, in data restructuring, in ETL (Extract-Transform-Load) processes used for updating data warehouses, or in data exchange between different, possibly independently created, applications. Systems for relational data exchange have existed for several decades now. Motivated by their experiences with one of those systems, Fagin, Kolaitis, Miller, and Popa (2003) studied fundamental and algorithmic issues arising in relational data exchange. One of these issues is how to answer queries that are posed against the target schema (i.e., against the result of the data exchange) so that the answers are consistent with the source data. For monotonic queries, the certain answers semantics proposed by Fagin, Kolaitis, Miller, and Popa (2003) is appropriate. For many non-monotonic queries, however, the certain answers semantics was shown to yield counter-intuitive results. This thesis deals with computing the certain answers for monotonic queries on the one hand, and, on the other hand, with the issue of which semantics are appropriate for answering non-monotonic queries and how hard it is to evaluate non-monotonic queries under these semantics.
As shown by Fagin, Kolaitis, Miller, and Popa (2003), computing the certain answers for unions of conjunctive queries - a subclass of the monotonic queries - basically reduces to computing universal solutions, provided the data transformation is specified by a set of tgds (tuple-generating dependencies) and egds (equality-generating dependencies). If M is such a specification and S is a source database, then T is called a solution for S under M if T is a possible result of translating S according to M. Intuitively, universal solutions are most general solutions. Since the above-mentioned work by Fagin, Kolaitis, Miller, and Popa, it was unknown whether it is decidable if a source database has a universal solution under a given data exchange specification. In this thesis, we show that this problem is undecidable. More precisely, we construct a specification M that consists of tgds only such that it is undecidable whether a given source database has a universal solution under M. From the proof it also follows that it is undecidable whether the chase procedure - by which universal models can be obtained - terminates on a given source database and the set of tgds in M. These results in particular strengthen results of Deutsch, Nash, and Remmel (2008).
Concerning the issue of which semantics are appropriate for answering non-monotonic queries, we study several semantics for answering such queries. All of these semantics are based on the closed world assumption (CWA). First, the CWA-semantics of Libkin (2006) are extended so that they can be applied to specifications consisting of tgds and egds. The key is to extend the concept of CWA-solution, on which the CWA-semantics are based. CWA-solutions are characterized as universal solutions that are derivable from the source database using a suitably controlled version of the chase procedure. In particular, if CWA-solutions exist, then there is a minimal CWA-solution that is unique up to isomorphism: the core of the universal solutions introduced by Fagin, Kolaitis, and Popa (2003). We show that evaluation of a query under some of the CWA-semantics reduces to computing the certain answers to the query on the minimal CWA-solution.
The CWA-semantics resolve some of the known problems with answering non-monotonic queries. There are, however, two natural properties that the CWA-semantics do not possess. On the one hand, queries may be answered differently with respect to data exchange specifications that are logically equivalent. On the other hand, there are queries whose answer under the CWA-semantics intuitively contradicts the information derivable from the source database and the data exchange specification. To find an alternative semantics, we first test several CWA-based semantics from the area of deductive databases for their suitability for non-monotonic query answering in relational data exchange. More precisely, we focus on the CWA-semantics by Reiter (1978), the GCWA-semantics (Minker 1982), the EGCWA-semantics (Yahya, Henschen 1985) and the PWS-semantics (Chan 1993). It turns out that these semantics are either too weak or too strong, or do not possess the desired properties. Finally, based on the GCWA-semantics, we develop the GCWA*-semantics, which intuitively possesses the desired properties. For monotonic queries, some of the CWA-semantics as well as the GCWA*-semantics coincide with the certain answers semantics, so that results obtained for the certain answers semantics carry over to those semantics.
When studying the complexity of evaluating non-monotonic queries under the above-mentioned semantics, we focus on the data complexity, that is, the complexity when the data exchange specification and the query are fixed. We show that in many cases, evaluating non-monotonic queries is hard: co-NP- or NP-complete, or even undecidable. For example, evaluating conjunctive queries with at least one negative literal under simple specifications may be co-NP-hard. Notice, however, that this result only says that there is such a query and such a specification for which the problem is hard, not that the problem is hard for all such queries and specifications. On the other hand, we identify a broad class of queries - the class of universal queries - which can be evaluated in polynomial time under the GCWA*-semantics, provided the data exchange specification is suitably restricted. More precisely, we show that universal queries can be evaluated on the core of the universal solutions, independently of the source database and the specification.
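A naive sketch of the chase procedure for tgds, which repeatedly fires dependencies whose body matches the current instance and whose head is not yet satisfied, introducing fresh labelled nulls for existential variables, might look as follows. This is a simplified illustration without egds; all names are illustrative, and, as discussed above, termination is not guaranteed in general.

    import itertools

    # Facts are pairs (relation, tuple_of_values); variables start with '?'.
    def match(atoms, facts, subst=None):
        """Yield substitutions that map the pattern `atoms` into `facts`."""
        subst = dict(subst or {})
        if not atoms:
            yield subst
            return
        (rel, args), rest = atoms[0], atoms[1:]
        for frel, fargs in facts:
            if frel != rel or len(fargs) != len(args):
                continue
            s, ok = dict(subst), True
            for a, v in zip(args, fargs):
                if a.startswith('?'):
                    ok = ok and s.setdefault(a, v) == v
                else:
                    ok = ok and a == v
            if ok:
                yield from match(rest, facts, s)

    def chase(facts, tgds, max_rounds=100):
        """Standard chase: fire each tgd whose body matches and whose head is
        not yet satisfiable; existential head variables become fresh nulls."""
        facts, nulls = set(facts), itertools.count()
        for _ in range(max_rounds):
            new = set()
            for body, head in tgds:
                for s in match(body, facts):
                    if any(True for _ in match(head, facts, s)):
                        continue                      # head already satisfied
                    s = dict(s)
                    for rel, args in head:
                        for a in args:
                            if a.startswith('?') and a not in s:
                                s[a] = 'N%d' % next(nulls)   # fresh labelled null
                        new.add((rel, tuple(s.get(a, a) for a in args)))
            if not new - facts:
                return facts                          # fixed point reached
            facts |= new
        return facts

    # Example: the tgd  S(x, y) -> exists z: T(x, z)  applied to {S(a, b)}
    # yields an additional fact T(a, N0) with a fresh labelled null N0.
    solution = chase({('S', ('a', 'b'))},
                     [([('S', ('?x', '?y'))], [('T', ('?x', '?z'))])])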
Plasticity supports the remarkable adaptability and robustness of cortical processing. It allows the brain to learn and remember patterns in the sensory world, to refine motor control, to predict and obtain reward, or to recover function after injury. Behind this great flexibility hides a range of plasticity mechanisms, affecting different aspects of neuronal communication. However, little is known about the precise computational roles of some of these mechanisms. Here, we show that the interaction between spike-timing-dependent plasticity (STDP), intrinsic plasticity and synaptic scaling enables neurons to learn efficient representations of their inputs. In the context of reward-dependent learning, the same mechanisms allow a neural network to solve a working memory task. Moreover, although we make no a priori assumptions about the encoding used for representing inputs, the network activity resembles that of brain regions known to be associated with working memory, suggesting that reward-dependent learning may be a central force in working memory development. Lastly, we investigated some of the clinical implications of synaptic scaling and showed that, paradoxically, there are situations in which the very mechanisms that are normally required to preserve the balance of the system may act as a destabilizing factor and lead to seizures. Our model offers a novel explanation for the increased incidence of seizures following chronic inflammation.
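The reward-dependent learning setting can be illustrated by a generic reward-modulated plasticity rule with an eligibility trace; this is a standard textbook construction rather than the thesis' exact model, and the parameters and names are illustrative.

    def reward_modulated_step(w, pre, post, reward, trace, tau=0.9, lr=0.01):
        """One update of a single synapse: coincident pre/post activity builds a
        decaying eligibility trace, and a later reward signal converts the trace
        into an actual weight change."""
        trace = tau * trace + pre * post     # eligibility trace
        w = w + lr * reward * trace          # reward gates the weight update
        return w, trace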