Call-by-need lambda calculi with letrec provide a rewriting-based operational semantics for (lazy) call-by-name functional languages. These calculi model the sharing behavior during evaluation more closely than let-based calculi that use a fixpoint combinator. In a previous paper we showed that the copy-transformation is correct for the small calculus LR-Lambda. In this paper we demonstrate that the proof method based on a calculus on infinite trees for showing correctness of instantiation operations can be extended to the calculus LRCC-Lambda with case and constructors, and show that copying at compile-time can be done without restrictions. We also show that the call-by-need and call-by-name strategies are equivalent w.r.t. contextual equivalence. A consequence is the correctness of all copy-related transformations, such as instantiation, inlining, specialization and common subexpression elimination, in LRCC-Lambda. We are confident that the method scales up for proving correctness of copy-related transformations in non-deterministic lambda calculi if restricted to "deterministic" subterms.
This paper extends the internal Frank report 28 as follows: it is shown that for a call-by-need lambda calculus LRCCP-Lambda, which extends the calculus LRCC-Lambda by por, i.e., a lambda calculus with letrec, case, constructors, seq and por, copying can be done without restrictions, and also that the call-by-need and call-by-name strategies are equivalent w.r.t. contextual equivalence.
We develop a proof method to show that in a (deterministic) lambda calculus with letrec and equipped with contextual equivalence, the call-by-name and the call-by-need evaluation are equivalent, and also that the unrestricted copy-operation is correct. Given a let-binding x = t, the copy-operation replaces an occurrence of the variable x by the expression t, regardless of the form of t. This answers unresolved problems in several papers, adds a strong method to the tool set for reasoning about contextual equivalence in higher-order calculi with letrec, and enables a class of transformations that can be used as optimizations. The method can be used in different kinds of lambda calculi with cyclic sharing. It can probably also be used in non-deterministic lambda calculi if the variable x is "deterministic", i.e., has no interference with non-deterministic executions. The main technical idea is to use a restricted variant of the infinitary lambda calculus, whose objects are the expressions unrolled w.r.t. let, to define the infinite developments as a reduction calculus on the infinite trees, and to show a standardization theorem.
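To make the copy-operation concrete, here is a minimal Haskell sketch on a toy letrec language (the datatype and all names are ours, not the paper's); it replaces the free occurrences of a let-bound variable by its right-hand side, assuming the distinct variable convention so that no capture can occur:

  data Expr = V String                      -- variable
            | Lam String Expr               -- abstraction
            | App Expr Expr                 -- application
            | LetRec [(String, Expr)] Expr  -- letrec x1=t1,... in s

  -- copy x t e: given a binding x = t, replace free occurrences of x in e by t.
  copy :: String -> Expr -> Expr -> Expr
  copy x t e = go e
    where
      go (V y) | y == x    = t
               | otherwise = V y
      go (Lam y b) | y == x    = Lam y b        -- x is shadowed here
                   | otherwise = Lam y (go b)
      go (App a b) = App (go a) (go b)
      go (LetRec bs s)
        | x `elem` map fst bs = LetRec bs s     -- x is rebound by this letrec
        | otherwise = LetRec [(y, go r) | (y, r) <- bs] (go s)

The point of the paper is precisely that performing such a replacement preserves contextual equivalence, regardless of the form of t.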
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e., terms are equal if their termination behaviour is the same when plugged into any program context, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering <=c is also analysed: Ω is not a least element, and <=c already implies contextual equivalence w.r.t. may-convergence.
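For reference, the contextual preorder combining may- and must-convergence can be restated as follows (our notation, with \downarrow for may- and \Downarrow for must-convergence):

  s \le_c t :\Leftrightarrow \forall C:\; (C[s]\downarrow \Rightarrow C[t]\downarrow) \,\wedge\, (C[s]\Downarrow \Rightarrow C[t]\Downarrow)
  s \sim_c t :\Leftrightarrow s \le_c t \,\wedge\, t \le_c s

Contextual equivalence \sim_c is the symmetrization of \le_c; the claim above that <=c already implies contextual equivalence w.r.t. may-convergence concerns exactly this ordering.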
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
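The implementability of futures via unsafeInterleaveIO mentioned above is commonly written along the following lines (a sketch; the function name future is ours): the producing action is forked off, and the lazily interleaved read blocks only when the future's value is actually demanded.

  import Control.Concurrent (forkIO, newEmptyMVar, putMVar, readMVar)
  import System.IO.Unsafe (unsafeInterleaveIO)

  -- future act: run act concurrently and return its result as a lazy value.
  future :: IO a -> IO a
  future act = do
    box <- newEmptyMVar
    _   <- forkIO (act >>= putMVar box)   -- producer fills the MVar once
    unsafeInterleaveIO (readMVar box)     -- consumer blocks only on demand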
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.
The pi-calculus is a well-analyzed model for mobile processes and mobile computations.
While many other process and lambda calculi that serve as core languages of higher-order concurrent and/or functional programming languages use a contextual semantics observing the termination behavior of programs in all program contexts, the traditional program equivalences in the pi-calculus are bisimulations and barbed testing equivalences, which observe the communication capabilities of processes under reduction and in contexts.
There is a distance between these two approaches to program equivalence which makes it hard to compare the pi-calculus with other languages. In this paper we contribute to bridging this gap by investigating a contextual semantics of the synchronous pi-calculus with replication and without sums.
To transfer contextual equivalence to the pi-calculus we add a constant process Stop, which indicates success, and use it as the basis to define and analyze the contextual equivalence that observes may- and should-convergence of processes.
We show as a main result that contextual equivalence in the pi-calculus with Stop conservatively extends barbed testing equivalence in the (Stop-free) pi-calculus. This implies that results on contextual equivalence can be directly transferred to the (Stop-free) pi-calculus with barbed testing equivalence.
We analyze the contextual ordering, prove some nontrivial process equivalences, and provide proof tools for showing contextual equivalences. Among them are a context lemma, and new notions of sound applicative similarities for may- and should-convergence.
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented, which is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are: Meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning on nonterminating expressions and functions, mutual recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefix could be allowed.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. It is designed to perform automatic induction proofs and can also deal with partial functions. This paper provides a reconstruction of the corresponding logic and semantics using the standard treatment of undefinedness which adapts and improves the VeriFun-logic by allowing reasoning on nonterminating expressions and functions. Equality of expressions is defined as contextual equivalence based on observing termination in all closing contexts. The reconstruction shows that several restrictions of the VeriFun framework can easily be removed, by natural generalizations: mutual recursive functions, abstractions in the data values, and formulas with arbitrary quantifier prefix can be formulated. The main results of this paper are: an extended set of deduction rules usable in VeriFun under the adapted semantics is proved to be correct, i.e. they respect the observational equivalence in all extensions of a program. We also show that certain classes of theorems are conservative under extensions, like universally quantified equations. Also other special classes of theorems are analyzed for conservativity.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. This paper provides a reconstruction of the corresponding logic when partial functions are permitted. Typing is polymorphic for the definition of functions but monomorphic for terms in formulas. Equality of terms is defined as contextual equivalence based on observing termination in all contexts. The reconstruction also allows several generalizations of the functional language, like mutually recursive functions and abstractions in the data values. The main results are: correctness of several program transformations for all extensions of a program, which have potential uses in a deduction system. We also prove that universally quantified equations are conservative, i.e., if a universally quantified equation is valid w.r.t. a program P, then it remains valid if the program is extended by new functions and/or new data types.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb, which is locally bottom-avoiding. We use a small-step operational semantics in the form of a normal order reduction. As equational theory we use contextual equivalence, i.e., terms are equal if their termination behaviour is the same when plugged into an arbitrary program context. We use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We develop different proof tools for proving correctness of program transformations. We provide a context lemma for may- as well as must-convergence which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. In contrast to other approaches, our syntax as well as our semantics does not make use of a heap for sharing expressions. Instead we represent these expressions explicitly via letrec-bindings.
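As an operational intuition for amb (only a sketch: the paper's amb is an operator of a pure calculus with a non-deterministic reduction strategy, not an IO function), bottom-avoidance can be mimicked at the IO level by racing both arguments and returning whichever converges first:

  import Control.Concurrent (forkIO, killThread, newEmptyMVar, putMVar, takeMVar)
  import Control.Exception (evaluate)

  -- amb x y: race the evaluation (to weak head normal form) of x and y
  -- and return the first one that converges.
  amb :: a -> a -> IO a
  amb x y = do
    box <- newEmptyMVar
    t1  <- forkIO (evaluate x >>= putMVar box)
    t2  <- forkIO (evaluate y >>= putMVar box)
    r   <- takeMVar box                -- first converging argument wins
    mapM_ killThread [t1, t2]
    return r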
In this paper we demonstrate how to relate the semantics given by the non-deterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO-actions in Haskell. We sketch the investigations of [Sab03b], where a lot of the program transformations performed by the compiler have been shown to be correct w.r.t. the FUNDIO semantics. This enabled us to obtain a FUNDIO-compatible Haskell compiler by turning off the not yet investigated transformations and the small set of incompatible transformations. With this compiler, Haskell programs which use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed by first adding monadic IO-actions to the machine and then in a second step we add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF, w.r.t. may- and should-convergence.
Our recently developed LRSX Tool implements a technique to automatically prove the correctness of program transformations in higher-order program calculi which may permit recursive let-bindings as they occur in functional programming languages. A program transformation is correct if it preserves the observational semantics of programs. In our tool the so-called diagram method is automated by combining unification, matching, and reasoning on alpha-renamings on the higher-order meta-language, and by automating induction proofs via an encoding into termination problems of term rewrite systems. We explain the techniques, illustrate the usage of the tool, and report on experiments.
Motivated by tools for automated deduction on functional programming languages and programs, we propose a formalism to symbolically represent $\alpha$-renamings for meta-expressions. The formalism is an extension of the usual higher-order meta-syntax which allows all valid ground instances of a meta-expression to be $\alpha$-renamed so as to fulfill the distinct variable convention. The renaming mechanism may be helpful for several reasoning tasks in deduction systems. We present our approach for a meta-language which uses higher-order abstract syntax and a meta-notation for recursive let-bindings, contexts, and environments. It is used in the LRSX Tool -- a tool to reason about the correctness of program transformations in higher-order program calculi with respect to their operational semantics. Besides introducing a formalism to represent symbolic $\alpha$-renamings, we present and analyze algorithms for simplification of $\alpha$-renamings, matching, rewriting, and checking $\alpha$-equivalence of symbolically $\alpha$-renamed meta-expressions.
We introduce rewriting of meta-expressions which stem from a meta-language that uses higher-order abstract syntax augmented by meta-notation for recursive let, contexts, sets of bindings, and chain variables. Additionally, three kinds of constraints can be added to meta-expressions to express usual constraints on evaluation rules and program transformations. Rewriting of meta-expressions is required for automated reasoning on programs and their properties. A concrete application is a procedure to automatically prove correctness of program transformations in higher-order program calculi which may permit recursive let-bindings as they occur in functional programming languages. Rewriting on meta-expressions can be performed by solving the so-called letrec matching problem which we introduce. We provide a matching algorithm to solve it. We show that the letrec matching problem is NP-complete, that our matching algorithm is sound and complete, and that it runs in non-deterministic polynomial time.
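The first-order core of such a matching procedure, stripped of binders, contexts, environments and chain variables, is easy to state; the following Haskell sketch (all names ours) matches a meta-expression containing meta-variables against a ground term and is only meant to show the basic recursion that the letrec matching problem extends:

  import qualified Data.Map as M

  data Term = Meta String        -- meta-variable
            | Fun String [Term]  -- function symbol applied to arguments
            deriving (Eq, Show)

  -- match pattern term sub: extend substitution sub so that it maps
  -- the pattern onto the term, or fail.
  match :: Term -> Term -> M.Map String Term -> Maybe (M.Map String Term)
  match (Meta x) t sub = case M.lookup x sub of
    Nothing -> Just (M.insert x t sub)
    Just t' -> if t' == t then Just sub else Nothing
  match (Fun f ts) (Fun g us) sub
    | f == g && length ts == length us =
        foldl (\acc (p, u) -> acc >>= match p u) (Just sub) (zip ts us)
    | otherwise = Nothing
  match _ _ _ = Nothing

This fragment runs in polynomial time; the NP-completeness shown in the paper stems from the additional letrec-specific constructs such as environments and chain variables.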
Towards correctness of program transformations through unification and critical pair computation
(2011)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions.
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We describe an effective unification algorithm to determine all overlaps of transformations with reduction rules for the lambda calculus LR, which comprises recursive let-expressions, constructor applications, case expressions and a seq construct for strict evaluation. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modeling multi-sets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions. As a result the algorithm computes a finite set of overlaps for the reduction rules of the calculus LR that serve as a starting point for automating the analysis of program transformations.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study of an application, we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e., every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding chains in let- and letrec-environments and by context classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec, which we use as a case study.
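The starting point of such an algorithm is ordinary syntactic first-order unification; the sketch below (all names ours) shows this baseline, which the papers above extend with the left-commutative equational theory, context variables and binding chains:

  import qualified Data.Map as M

  data Tm = TV String | TF String [Tm] deriving (Eq, Show)

  type Subst = M.Map String Tm

  -- apply a (triangular) substitution, following variable chains
  apply :: Subst -> Tm -> Tm
  apply s (TV x)    = maybe (TV x) (apply s) (M.lookup x s)
  apply s (TF f ts) = TF f (map (apply s) ts)

  occurs :: String -> Tm -> Bool
  occurs x (TV y)    = x == y
  occurs x (TF _ ts) = any (occurs x) ts

  unify :: Tm -> Tm -> Subst -> Maybe Subst
  unify a b s = go (apply s a) (apply s b)
    where
      go (TV x) (TV y) | x == y = Just s
      go (TV x) t | not (occurs x t) = Just (M.insert x t s)
      go t (TV x) | not (occurs x t) = Just (M.insert x t s)
      go (TF f ts) (TF g us)
        | f == g && length ts == length us =
            foldl (\acc (u, v) -> acc >>= unify u v) (Just s) (zip ts us)
      go _ _ = Nothing              -- clash or occurs-check failure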
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TeX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It was implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered a status report of the implementation as well as a reference manual for the resulting system.
Automatic termination proofs for functional programming languages are an often-challenged problem. Most work in this area is done on strict languages, where orderings for the arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point when sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. Analysis is performed on a higher-order polymorphic typed language, and termination of higher-order functions can be proved too. The calculus can be used to derive information on a wide range of different notions of termination.
Reasoning about the correctness of program transformations requires a notion of program equivalence. We present an observational semantics for the concurrent lambda calculus with futures Lambda(fut), which formalizes the operational semantics of the programming language Alice ML. We show that natural program optimizations, as well as partial evaluation with respect to deterministic rules, are correct for Lambda(fut). This relies on a number of fundamental properties that we establish for our observational semantics.
This document describes an application called Stolperwege, which is intended to serve as a prototypical communication technology for a mobile Public History of the Holocaust, starting from the example of the art project Stolpersteine by Gunter Demnig. In this way, a central challenge in conveying the history of the Holocaust is to be addressed, namely connecting it to the latest developments in communication media. The Stolperwege app is aimed at school students, residents, historians and, more generally, at visitors to a city who want to trace the Holocaust on site in order to participate actively in writing a Public History of the Holocaust.
Extending the method of Howe, we establish a large class of untyped higher-order calculi, in particular ones with call-by-need evaluation, where similarity, also called applicative simulation, can be used as a proof tool for showing contextual preorder. The paper also demonstrates that Mann's approach using an intermediate "approximation" calculus scales up well from a basic call-by-need non-deterministic lambda calculus to more expressive lambda calculi. That is, it is demonstrated that after transferring the contextual preorder of a non-deterministic call-by-need lambda calculus to its corresponding approximation calculus, it is possible to apply Howe's method to show that similarity is a precongruence. The transfer is not treated in this paper. The paper also proposes an optimization of the similarity test by cutting off redundant computations. Our results also apply to deterministic or non-deterministic call-by-value lambda calculi, and improve upon previous work insofar as it is proved that only closed values are required as arguments for similarity testing instead of all closed expressions.
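As background, applicative similarity for such calculi is co-inductively defined along the following lines (a standard formulation in our notation, restricted to abstractions, with \Downarrow for evaluation to a value):

  s \preceq t :\Leftrightarrow \big( s \Downarrow \lambda x.s' \;\Rightarrow\; t \Downarrow \lambda x.t' \,\wedge\, \forall r \text{ closed}:\; s'[r/x] \preceq t'[r/x] \big)

The result above sharpens this test by requiring only closed values, rather than all closed expressions, as the arguments r.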
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called lazy computation systems. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we will show how the original work of Howe can be extended to cope with sharing. Moreover, we will demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
Iterative arrays (IAs) are a parallel computational model with sequential processing of the input. They are one-dimensional arrays of interacting identical deterministic finite automata. In this note, realtime-IAs with sublinear space bounds are used to accept formal languages. The existence of a proper hierarchy of space complexity classes between logarithmic and linear space bounds is proved. Furthermore, an optimal space lower bound for non-regular language recognition is shown. Keywords: iterative arrays, cellular automata, space bounded computations, decidability questions, formal languages, theory of computation
It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted to an equivalent DFA M' which has a minimal number of states, and the minimization can be done efficiently [6]. On the other hand, it is known that unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can be algorithmically minimized too, but their minimization problems turn out to be NP-complete and PSPACE-complete, respectively [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they only allow a small amount of nondeterminism to be used. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in computationally intractable minimization problems.
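For contrast with the hard nondeterministic cases, the efficient DFA minimization referred to above can be sketched by Moore-style partition refinement (names and this simple quadratic formulation are ours; Hopcroft's algorithm is faster):

  import qualified Data.Map as M

  type State = Int

  -- Compute the classes of equivalent states of a total DFA by refining
  -- the accepting/non-accepting partition until it is stable.
  minimizeClasses :: [State] -> [Char] -> M.Map (State, Char) State
                  -> [State] -> [[State]]
  minimizeClasses states sigma delta finals = go start
    where
      start = filter (not . null)
        [ [q | q <- states, q `elem` finals]
        , [q | q <- states, q `notElem` finals] ]
      classOf part q = head [i | (i, c) <- zip [0 :: Int ..] part, q `elem` c]
      sig part q = [classOf part (delta M.! (q, a)) | a <- sigma]  -- successor classes
      refine part = concat
        [ M.elems (M.fromListWith (++) [(sig part q, [q]) | q <- c]) | c <- part ]
      go part = let part' = refine part
                in if length part' == length part then part else go part'

Each class of the result becomes one state of the minimal DFA.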
We study the descriptional complexity of cellular automata (CAs), a parallel model of computation. We show that between one of the simplest cellular models, the realtime-OCA, and "classical" models like deterministic finite automata (DFAs) or pushdown automata (PDAs), there are savings concerning the size of description not bounded by any recursive function, a so-called non-recursive trade-off. Furthermore, non-recursive trade-offs are shown between some restricted classes of cellular automata. The set of valid computations of a Turing machine can be recognized by a realtime-OCA. This implies that many decidability questions are not even semidecidable for cellular automata. There is no pumping lemma and no minimization algorithm for cellular automata.
We investigate a restricted one-way cellular automaton (OCA) model where the number of cells is bounded by a constant number k, so-called kC-OCAs. In contrast to the general model, the generative capacity of the restricted model is reduced to the set of regular languages. A kC-OCA can be algorithmically converted to a deterministic finite automaton (DFA). The blow-up in the number of states is bounded by a polynomial of degree k. We can exhibit a family of unary languages which shows that this upper bound is tight in order of magnitude. We then study upper and lower bounds for the trade-off when converting DFAs to kC-OCAs. We show that there are regular languages where the use of kC-OCAs cannot reduce the number of states when compared to DFAs. We then investigate trade-offs between kC-OCAs with different numbers of cells and finally treat the problem of minimizing a given kC-OCA.
The effect of adding two-way communication to k cells one-way cellular automata (kC-OCAs) on their size of description is studied. kC-OCAs are a parallel model for the regular languages that consists of an array of k identical deterministic finite automata (DFAs), called cells, operating in parallel. Each cell gets information from its right neighbor only. In this paper, two models with different amounts of two-way communication are investigated. Both models always achieve quadratic savings when compared to DFAs. When compared to a one-way cellular model, the result is that minimum two-way communication can achieve at most quadratic savings whereas maximum two-way communication may provide savings bounded by a polynomial of degree k.
The descriptional complexity of iterative arrays (IAs) is studied. Iterative arrays are a parallel computational model with sequential processing of the input. It is shown that IAs, when compared to deterministic finite automata or pushdown automata, may provide savings in size which are not bounded by any recursive function, so-called non-recursive trade-offs. Additional non-recursive trade-offs are proven to exist between IAs working in linear time and IAs working in real time. Furthermore, the descriptional complexity of IAs is compared with cellular automata (CAs) and non-recursive trade-offs are proven between two restricted classes. Finally, it is shown that many decidability questions for IAs are undecidable and not semidecidable.
It is shown that between one-turn pushdown automata (1-turn PDAs) and deterministic finite automata (DFAs) there are savings concerning the size of description not bounded by any recursive function, so-called non-recursive trade-offs. Considering the number of turns of the stack height as a consumable resource of PDAs, we can show the existence of non-recursive trade-offs between PDAs performing k+1 turns and k turns for k >= 1. Furthermore, non-recursive trade-offs are shown between arbitrary PDAs and PDAs which perform only a finite number of turns. Finally, several decidability questions are shown to be undecidable and not semidecidable.
We study the effect of randomness in the adversarial queueing model. All proofs of instability for deterministic queueing strategies exploit a fine-spun strategy of insertions by an adversary. If the local queueing decisions in the network are subject to randomness, it is far from obvious that an adversary can still trick the network into instability. We show that uniform queueing is unstable even against an oblivious adversary. Consequently, randomizing the queueing decisions made to operate a network is not in itself a suitable fix for poor network performance due to packet pileups.
We consider matching, rewriting, critical pairs and the Knuth-Bendix confluence test on rewrite rules in a nominal setting extended by atom-variables. Computing critical pairs is done using nominal unification, and rewriting using nominal matching. We utilise atom-variables to formulate rewrite rules, which is an improvement over previous approaches that use usual nominal unification, nominal matching and nominal equivalence of expressions coupled with a freshness constraint. We determine the complexity of several problems in a quantified freshness logic. In particular we show that nominal matching is $\Pi_2^p$-complete. We prove that the adapted Knuth-Bendix confluence test is applicable to a nominal rewrite system with atom-variables and thus that there is a decidable test for whether confluence of the ground instances of the abstract rewrite system holds. We apply the nominal Knuth-Bendix confluence criterion to the theory of monads, and compute a convergent nominal rewrite system modulo alpha-equivalence.
We propose a variation of online paging in two-level memory systems where pages in the fast cache get modified and therefore have to be explicitly written back to the slow memory upon evictions. For increased performance, up to alpha arbitrary pages can be moved from the cache to the slow memory within a single joint eviction, whereas fetching pages from the slow memory is still performed on a one-by-one basis. The main objective in this new alpha-paging scenario is to bound the number of evictions. After providing experimental evidence that alpha-paging can adequately model flash-memory devices in the context of translation layers we turn to the theoretical connections between alpha-paging and standard paging. We give lower bounds for deterministic and randomized alpha-paging algorithms. For deterministic algorithms, we show that an adaptation of LRU is strongly competitive, while for the randomized case we show that by adapting the classical Mark algorithm we get an algorithm with a competitive ratio larger than the lower bound by a multiplicative factor of approximately 1.7.
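A minimal sketch of the LRU adaptation for alpha-paging (the list-based cache and all names are ours, for illustration): on a miss with a full cache, the alpha least-recently-used pages are dropped in one joint eviction, so the eviction counter grows once per joint eviction rather than once per page.

  type Page = Int

  -- The cache is kept in recency order, most recently used first.
  -- access returns the new cache and the number of (joint) evictions.
  access :: Int -> Int -> [Page] -> Page -> ([Page], Int)
  access capacity alpha cache p
    | p `elem` cache          = (p : filter (/= p) cache, 0)  -- hit: move to front
    | length cache < capacity = (p : cache, 0)                -- miss, cache not full
    | otherwise               =
        let kept = take (capacity - alpha) cache              -- drop alpha LRU pages
        in (p : kept, 1)                                      -- one joint eviction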
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities for many dimensions from only a few samples is treated by using the Rényi mutual information of order two as the computational base. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced and, for real-world applications, the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for the example of time series. As an example, a small blackbox-glassbox example shows how the relevant features and their time lags are determined in the time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. Through this computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
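For orientation, the quantities involved can be written as follows (standard definitions in our notation; \Theta is the Heaviside step function and N the number of samples):

  H_2(X) = -\log \sum_i p_i^2, \qquad I_2(X;Y) = H_2(X) + H_2(Y) - H_2(X,Y)

  \hat H_2 \approx -\log C(\varepsilon), \quad C(\varepsilon) = \frac{2}{N(N-1)} \sum_{i<j} \Theta\big(\varepsilon - \lVert x_i - x_j \rVert\big)

The correlation integral C(\varepsilon) counts sample pairs closer than \varepsilon and thereby estimates \sum_i p_i^2 without an explicit high-dimensional histogram, which is what makes the order-two Rényi variant attractive when only few samples are available in many dimensions.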
Detection of critical states of patients diagnosed with "septic shock" using an RBF network
(2000)
It was shown that the physician can be supported by the growing RBF network through the output of reliable warnings. However, as explained in the cluster analysis, the results suffer from the small number of patients and from the imprecise temporal recording of the data. Since each patient assumes very individual states, a larger patient collective is necessary in order to learn a comprehensive knowledge base. A medical post-processing of the knowledge base through an analysis of the cases could be expected to improve the result further; in this way, unknown correlations learned from examples could be combined with medical expert knowledge. More abstract features that are less dependent on individual states could improve the classification even further. One starting point is, for example, the deviation of the measured values from the moving average. This measure is less sensitive to the individual operating points of the patients and also forms the basis of relative dependencies between two variables, which were likewise used as features in a further step. Although the use of the relative dependencies between two variables as a feature could not produce clearer or more frequent warnings, the cluster analysis does indicate a better distribution of the patients: some clusters are better suited for prediction than can be achieved with a clustering based on the states. This result is also supported by the larger difference between the confidences of incorrect and correct classifications. Besides the features investigated so far, those variables also appear interesting for which it was found that they are difficult to stabilize despite medication and adequate treatment. The attending physician usually keeps these values within a certain range; if the pair medication/physiological parameter is no longer in a sensible relation, this can be an important indicator. After establishing the basic functionality of the methods investigated here, the search for suitable features as input for a neural network is an essential part of future work. Apart from the generally ambitious endeavour of obtaining clear indications of the mortality of septic-shock patients from clinical data, the essential problems lie in the size and the measurement frequency of the Frankfurt preliminary study, so that applying the classification methods to the more comprehensive patient collective of the MEDAN multicenter study can be expected to yield clearer results. A further analysis option that is interesting for medical applications is rule generation, which is currently being worked on in another subproject of the MEDAN working group. For metric data, this can provide additional insights into the performance of a pure classification method, with the advantage of an explicit rule output. Moreover, this subproject also employs rule-generation methods that can evaluate ordinal and nominal variables such as diagnoses, operations, therapies and medication data (binary, without exact dose), which will be available in the multicenter data. By coupling the rule generators for metric data on the one hand and for discrete variables on the other, there is reasonable hope of achieving better results.
Since the rule generator for metric data is based on the RBF-DDA (short for Dynamic Decay Adjustment) network [BERTHOLD and DIAMOND, 1995], a comparison with the network type used here (not yet carried out) suggests itself within the MEDAN project. This comparison is, however, only of principal interest and, on the data considered here, cannot deliver a fundamentally better result than the analyses performed so far; but it can contribute to a comprehensive evaluation of the results.
Performance and storage requirements of topology-conserving maps for robot manipulator control
(1989)
A new programming paradigm for the control of a robot manipulator by learning the mapping between the Cartesian space and the joint space (inverse kinematics) is discussed. It is based on Kohonen's neural network model of optimal mapping between two high-dimensional spaces. This paper describes the approach and presents the optimal mapping, based on the principle of maximal information gain. It is shown that Kohonen's mapping in the 2-dimensional case is optimal in this sense. Furthermore, the principal control error made by the learned mapping is evaluated for the example of the commonly used PUMA robot, the trade-off between storage resources and positional error is discussed, and an optimal position encoding resolution is proposed.
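The underlying Kohonen learning rule has the standard form (our notation, not taken verbatim from the paper): for an input x the winner neuron j^* = \arg\min_j \lVert x - w_j \rVert is selected and all weights are updated by

  w_j \leftarrow w_j + \eta \, h(j, j^*) \, (x - w_j),

where \eta is the learning rate and h a neighbourhood kernel decaying with the grid distance between j and j^*; it is this topology-conserving neighbourhood that makes the map suitable for encoding the inverse kinematics.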
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree of accuracy. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for a satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) by the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
The efficient management of large multimedia databases requires the development of new techniques to process, characterize, and search for multimedia objects. Especially in the case of image data, the rapidly growing amount of documents prohibits a manual description of the images' content. Instead, automated characterization is highly desirable to support annotation and retrieval of digital images. However, this is a very complex and still unsolved task. To contribute to a solution of this problem, we have developed a mechanism for recognizing objects in images based on the query-by-example paradigm. To this end, the most salient image features of an example image representing the searched object are extracted to obtain a scale-invariant object model. The use of this model provides an efficient and robust strategy for recognizing objects in images independently of their size. Further applications of the mechanism are classical recognition tasks such as scene decomposition or object tracking in video sequences.
Classically, encoding of images by only a few important components is done by Principal Component Analysis (PCA). Recently, a data analysis tool called Independent Component Analysis (ICA) for the separation of independent influences in signals has found strong interest in the neural network community. This approach has also been applied to images. Whereas that approach assumes continuous source channels mixed up into the same number of channels by a mixing matrix, we assume that images are composed of only a few image primitives. This means that for images we have fewer sources than pixels. Additionally, in order to reduce unimportant information, we aim only for the most important source patterns with the highest occurrence probabilities or biggest information, called "Principal Independent Components (PIC)". For the example of a synthetic picture composed of characters, this idea gives us the most important ones. Nevertheless, for natural images where no a-priori probabilities can be computed, this does not lead to an acceptable reproduction error. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
The following guide explains various methods for accessing the resource management developed by the AG Texttechnologie. The resource management is identical for all applications. We explain how to read out the resource management of the project "PHI Picturing Atlas". All requests are made via RESTful calls. The API documentation can be found at http://phi.resources.hucompute.org.