In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables, as in Concurrent Haskell. To ensure declarativeness of concurrent programming we extend the language with implicit, monadic, and concurrent futures. As a semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell's seq-operator, which for instance justifies the use of the do-notation.
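As an illustration of the ingredients mentioned above (threads, monadic IO and synchronizing variables), the following minimal sketch encodes a concurrent future in plain Concurrent Haskell using forkIO and an MVar. The names future and force are illustrative and not taken from the paper; the sketch is not the CHF calculus itself.

{- A minimal sketch (not the CHF calculus) of a concurrent future in plain
   Concurrent Haskell: forkIO for the thread, an MVar as the synchronizing
   variable that carries the future's value. -}
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)

-- | Start the computation in its own thread; the MVar acts as the future.
future :: IO a -> IO (MVar a)
future act = do
  v <- newEmptyMVar
  _ <- forkIO (act >>= putMVar v)
  return v

-- | Demanding the value blocks until the producing thread has filled it.
force :: MVar a -> IO a
force = readMVar

main :: IO ()
main = do
  f <- future (return (sum [1 .. 1000000 :: Integer]))
  r <- force f
  print r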
The well-known proof of termination of reduction in simply typed calculi is adapted to a monomorphically typed lambda-calculus with case, constructors, and recursive data types. The proof differs in several places from the standard proof. It may prove useful and extendable to more complex calculi.
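For concreteness, the following Haskell data types render the kind of syntax such a calculus has (case, constructors, and a recursive data type of lists); this representation is an illustrative rendering only, not the paper's notation.

{- Illustrative syntax of a monomorphically typed lambda calculus with
   constructors, case expressions and a recursive data type of lists. -}
data Ty
  = TBase            -- a base type
  | TArrow Ty Ty     -- function types
  | TList Ty         -- a recursive data type: lists over a type
  deriving (Eq, Show)

data Term
  = Var String
  | Lam String Ty Term
  | App Term Term
  | Nil Ty                               -- constructor: empty list
  | Cons Term Term                       -- constructor: non-empty list
  | CaseList Term Term (String, String, Term)
    -- case scrutinee of { Nil -> e1; Cons x xs -> e2 }
  deriving Show

-- Example: \x :: TList TBase . case x of { Nil -> x ; Cons y ys -> ys }
exampleTerm :: Term
exampleTerm =
  Lam "x" (TList TBase)
      (CaseList (Var "x") (Var "x") ("y", "ys", Var "ys"))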
Performance and storage requirements of topology-conserving maps for robot manipulator control
(1989)
A new programming paradigm for the control of a robot manipulator by learning the mapping between the Cartesian space and the joint space (inverse kinematics) is discussed. It is based on Kohonen's neural network model of an optimal mapping between two high-dimensional spaces. This paper describes the approach and presents the optimal mapping, based on the principle of maximal information gain. It is shown that Kohonen's mapping in the 2-dimensional case is optimal in this sense. Furthermore, the principal control error made by the learned mapping is evaluated for the example of the commonly used PUMA robot, the trade-off between storage resources and positional error is discussed, and an optimal position encoding resolution is proposed.
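As a rough illustration of the underlying learning rule, the sketch below performs one Kohonen-style update step for a one-dimensional map of weight vectors. The learning rate, neighbourhood width and winner selection follow the standard self-organizing map formulation and are not necessarily the exact setup analyzed in the paper.

{- One illustrative Kohonen (self-organizing map) update step in one
   dimension: find the winning unit by squared Euclidean distance and pull
   it and its neighbours towards the input, weighted by a Gaussian
   neighbourhood function. -}
import Data.List (minimumBy)
import Data.Ord (comparing)

type Vec = [Double]

-- Squared Euclidean distance.
dist :: Vec -> Vec -> Double
dist x y = sum (zipWith (\a b -> (a - b) ^ (2 :: Int)) x y)

-- | One training step with learning rate eta and neighbourhood width sigma.
somStep :: Double -> Double -> [Vec] -> Vec -> [Vec]
somStep eta sigma ws x = zipWith update [0 ..] ws
  where
    winner = fst (minimumBy (comparing snd) (zip [0 :: Int ..] (map (dist x) ws)))
    update i w =
      let h = exp (negate (fromIntegral ((i - winner) ^ (2 :: Int))) / (2 * sigma * sigma))
      in zipWith (\wj xj -> wj + eta * h * (xj - wj)) w x

main :: IO ()
main = print (somStep 0.1 1.0 [[0, 0], [1, 1], [2, 2]] [1.5, 0.5])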
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities over many dimensions from only a few samples is treated by using the Rényi mutual information of order two as the computational basis. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced and, for real-world applications, the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for the example of time series. A small blackbox-glassbox example shows how the relevant features and their time lags are determined in the time series, even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. Owing to the computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
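The sketch below is a rough illustration of the general idea: an order-2 Rényi entropy is estimated from a correlation integral (the fraction of sample pairs lying within a hypercube of side eps), and the entropies are combined into a mutual-information score of the common form I2 = H2(X) + H2(Y) - H2(X,Y). The fixed eps stands in for the adaptive hypercube-size procedure of the paper, and the exact estimator used there may differ.

{- Order-2 Rényi mutual information estimated from simple correlation
   integrals; eps is a fixed hypercube size for illustration. -}

-- Fraction of ordered pairs of samples whose maximum coordinate distance
-- is below eps (a simple correlation integral).
correlationIntegral :: Double -> [[Double]] -> Double
correlationIntegral eps xs =
  let n         = length xs
      close u v = maximum (zipWith (\a b -> abs (a - b)) u v) < eps
      pairs     = [ () | u <- xs, v <- xs, close u v ]
  in fromIntegral (length pairs) / fromIntegral (n * n)

-- Order-2 Rényi entropy estimate: H2 = -log C(eps).
renyiH2 :: Double -> [[Double]] -> Double
renyiH2 eps xs = negate (log (correlationIntegral eps xs))

-- Order-2 mutual information between a feature column and the target.
renyiMI :: Double -> [Double] -> [Double] -> Double
renyiMI eps xs ys =
  renyiH2 eps (map (: []) xs)
    + renyiH2 eps (map (: []) ys)
    - renyiH2 eps (zipWith (\x y -> [x, y]) xs ys)

main :: IO ()
main =
  let xs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
      ys = map (\x -> x * x) xs
  in print (renyiMI 0.15 xs ys)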
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) by the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda-calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study of an application we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding-chains in let- and letrec-environments and by context-classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
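The algorithm described above handles left-commutativity, sorts, context variables and letrec binding chains; the sketch below only shows a plain first-order syntactic unification core, as the flavour of algorithm on which such extensions build. The term representation and function names are illustrative, not the paper's.

{- A plain Robinson-style first-order syntactic unification core.  The
   extensions from the paper (left-commutativity, many-sortedness, context
   variables, binding chains) are not reproduced here. -}
import qualified Data.Map as M

data Term = V String | F String [Term] deriving (Eq, Show)

type Subst = M.Map String Term

applySubst :: Subst -> Term -> Term
applySubst s t@(V x)  = maybe t (applySubst s) (M.lookup x s)
applySubst s (F f ts) = F f (map (applySubst s) ts)

occurs :: String -> Term -> Bool
occurs x (V y)    = x == y
occurs x (F _ ts) = any (occurs x) ts

unify :: Term -> Term -> Maybe Subst
unify t1 t2 = go [(t1, t2)] M.empty
  where
    go [] s = Just s
    go ((a, b) : rest) s =
      case (applySubst s a, applySubst s b) of
        (V x, V y) | x == y              -> go rest s
        (V x, t)   | not (occurs x t)    -> go rest (M.insert x t s)
        (t, V x)   | not (occurs x t)    -> go rest (M.insert x t s)
        (F f as, F g bs)
          | f == g && length as == length bs -> go (zip as bs ++ rest) s
        _ -> Nothing   -- clash or occurs-check failure

-- e.g. unify (F "f" [V "x", F "a" []]) (F "f" [F "b" [], V "y"])
main :: IO ()
main = print (unify (F "f" [V "x", F "a" []]) (F "f" [F "b" [], V "y"]))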
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented; it is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning on non-terminating expressions and functions, mutually recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefixes could be allowed.
Iterative arrays (IAs) are a parallel computational model with a sequential processing of the input. They are one-dimensional arrays of interacting identical deterministic finite automata. In this note, realtime-IAs with sublinear space bounds are used to accept formal languages. The existence of a proper hierarchy of space complexity classes between logarithmic and linear space bounds is proved. Furthermore, an optimal space lower bound for non-regular language recognition is shown. Keywords: iterative arrays, cellular automata, space-bounded computations, decidability questions, formal languages, theory of computation
It is shown that between one-turn pushdown automata (1-turn PDAs) and deterministic finite automata (DFAs) there are savings concerning the size of description that are not bounded by any recursive function, so-called non-recursive trade-offs. Considering the number of turns of the stack height as a consumable resource of PDAs, we can show the existence of non-recursive trade-offs between PDAs performing k+1 turns and k turns for k >= 1. Furthermore, non-recursive trade-offs are shown between arbitrary PDAs and PDAs which perform only a finite number of turns. Finally, several decidability questions are shown to be undecidable and not semidecidable.
We investigate a restricted one-way cellular automaton (OCA) model where the number of cells is bounded by a constant number k, so-called kC-OCAs. In contrast to the general model, the generative capacity of the restricted model is reduced to the set of regular languages. A kC-OCA can be algorithmically converted to a deterministic finite automaton (DFA). The blow-up in the number of states is bounded by a polynomial of degree k. We can exhibit a family of unary languages which shows that this upper bound is tight in order of magnitude. We then study upper and lower bounds for the trade-off when converting DFAs to kC-OCAs. We show that there are regular languages where the use of kC-OCAs cannot reduce the number of states when compared to DFAs. We then investigate trade-offs between kC-OCAs with different numbers of cells and finally treat the problem of minimizing a given kC-OCA.
The effect of adding two-way communication to k-cell one-way cellular automata (kC-OCAs) on their size of description is studied. kC-OCAs are a parallel model for the regular languages consisting of an array of k identical deterministic finite automata (DFAs), called cells, operating in parallel. Each cell gets information from its right neighbor only. In this paper, two models with different amounts of two-way communication are investigated. Both models always achieve quadratic savings when compared to DFAs. When compared to the one-way cellular model, the result is that minimum two-way communication can achieve at most quadratic savings, whereas maximum two-way communication may provide savings bounded by a polynomial of degree k.
The descriptional complexity of iterative arrays (IAs) is studied. Iterative arrays are a parallel computational model with a sequential processing of the input. It is shown that IAs, when compared to deterministic finite automata or pushdown automata, may provide savings in size which are not bounded by any recursive function, so-called non-recursive trade-offs. Additional non-recursive trade-offs are proven to exist between IAs working in linear time and IAs working in real time. Furthermore, the descriptional complexity of IAs is compared with cellular automata (CAs) and non-recursive trade-offs are proven between two restricted classes. Finally, it is shown that many decidability questions for IAs are undecidable and not semidecidable.
It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted to an equivalent DFA M' which has a minimal number of states. The minimization can be done efficiently [6]. On the other hand, it is known that unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can be algorithmically minimized too, but their minimization problems turn out to be NP-complete and PSPACE-complete [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they only allow a small amount of nondeterminism to be used. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in computationally intractable minimization problems.
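To contrast with the hardness results above, the sketch below shows the efficiently solvable deterministic case: a standard Moore-style partition refinement that computes the equivalence classes of a DFA's states. The representation (total transition function, integer states) is illustrative and not taken from the cited works.

{- Moore-style partition refinement: repeatedly split state classes by the
   classes their successors fall into, until the partition is stable. -}
import Data.List (nub, sort)

type State = Int
data DFA = DFA
  { states :: [State]
  , sigma  :: [Char]
  , delta  :: State -> Char -> State
  , finals :: [State]
  }

-- Equivalence classes of states of a (total) DFA.
minimizeClasses :: DFA -> [[State]]
minimizeClasses dfa = go initial
  where
    initial = filter (not . null)
                [ filter (`elem` finals dfa) (states dfa)
                , filter (`notElem` finals dfa) (states dfa) ]
    classOf part q   = head [ i | (i, c) <- zip [0 :: Int ..] part, q `elem` c ]
    signature part q = map (classOf part . delta dfa q) (sigma dfa)
    groupBySignature part c =
      [ [ q | q <- c, signature part q == s ]
      | s <- nub (map (signature part) c) ]
    refine part = concat [ groupBySignature part c | c <- part ]
    go part =
      let part' = refine part
      in if sort (map sort part') == sort (map sort part) then part else go part'

main :: IO ()
main =
  let d = DFA [0, 1, 2, 3] "ab" trans [2, 3]
      trans 0 'a' = 1
      trans 1 'a' = 2
      trans 2 'a' = 3
      trans 3 'a' = 3
      trans _ _   = 0
  in print (minimizeClasses d)   -- states 2 and 3 end up in one class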
We study the descriptional complexity of cellular automata (CA), a parallel model of computation. We show that between one of the simplest cellular models, the realtime-OCA, and "classical" models like deterministic finite automata (DFA) or pushdown automata (PDA), there are savings concerning the size of description not bounded by any recursive function, so-called non-recursive trade-offs. Furthermore, non-recursive trade-offs are shown between some restricted classes of cellular automata. The set of valid computations of a Turing machine can be recognized by a realtime-OCA. This implies that many decidability questions are not even semidecidable for cellular automata. There is no pumping lemma and no minimization algorithm for cellular automata.
We propose a variation of online paging in two-level memory systems where pages in the fast cache get modified and therefore have to be explicitly written back to the slow memory upon evictions. For increased performance, up to alpha arbitrary pages can be moved from the cache to the slow memory within a single joint eviction, whereas fetching pages from the slow memory is still performed on a one-by-one basis. The main objective in this new alpha-paging scenario is to bound the number of evictions. After providing experimental evidence that alpha-paging can adequately model flash-memory devices in the context of translation layers we turn to the theoretical connections between alpha-paging and standard paging. We give lower bounds for deterministic and randomized alpha-paging algorithms. For deterministic algorithms, we show that an adaptation of LRU is strongly competitive, while for the randomized case we show that by adapting the classical Mark algorithm we get an algorithm with a competitive ratio larger than the lower bound by a multiplicative factor of approximately 1.7.
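The sketch below is a small, illustrative LRU-style policy for the alpha-paging setting described above: on a miss with a full cache, the alpha least recently used pages are written back in one joint eviction before the requested page is fetched. The cache size k, batch size alpha and the list-based recency order are illustrative choices, not the paper's notation or its strongly competitive algorithm.

{- Illustrative alpha-paging LRU: cache kept in most-recently-used-first
   order; a miss with a full cache triggers one joint eviction of the
   alpha least recently used pages. -}
type Page = Int

-- Returns the final cache contents and the number of joint evictions.
alphaLRU :: Int -> Int -> [Page] -> ([Page], Int)
alphaLRU k alpha = foldl step ([], 0)
  where
    step (cache, evs) p
      | p `elem` cache   = (p : filter (/= p) cache, evs)   -- hit: move to front
      | length cache < k = (p : cache, evs)                 -- miss, free slot
      | otherwise        =
          -- joint eviction of the alpha least recently used pages
          let kept = take (max 0 (k - alpha)) cache
          in (p : kept, evs + 1)

main :: IO ()
main = print (alphaLRU 4 2 [1, 2, 3, 4, 5, 1, 2, 6, 7, 3])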
FIFO is the most prominent queueing strategy due to its simplicity and the fact that it only works with local information. Its analysis within adversarial queueing theory, however, has shown that there are networks that are not stable under the FIFO protocol, even at arbitrarily low rates. On the other hand there are networks that are universally stable, i.e., they are stable under every greedy protocol at any rate r < 1. The question as to which networks are stable under the FIFO protocol arises naturally. We offer the first polynomial-time algorithm for deciding FIFO stability and simple-path FIFO stability of a directed network, answering an open question posed in [1, 4]. It turns out that there are networks that are FIFO stable but not universally stable; hence FIFO is not a worst-case protocol in this sense. Our characterization of FIFO stability is constructive and disproves an open characterization given in [4].
The efficient management of large multimedia databases requires the development of new techniques to process, characterize, and search for multimedia objects. Especially in the case of image data, the rapidly growing amount of documents prohibits a manual description of the images' content. Instead, automated characterization is highly desirable to support annotation and retrieval of digital images. However, this is a very complex and still unsolved task. To contribute to a solution of this problem, we have developed a mechanism for recognizing objects in images based on the query-by-example paradigm. To this end, the most salient image features of an example image representing the searched-for object are extracted to obtain a scale-invariant object model. The use of this model provides an efficient and robust strategy for recognizing objects in images independently of their size. Further applications of the mechanism are classical recognition tasks such as scene decomposition or object tracking in video sequences.
Classically, encoding of images by only a few, important components is done by Principal Component Analysis (PCA). Recently, a data analysis tool called Independent Component Analysis (ICA) for the separation of independent influences in signals has found strong interest in the neural network community. This approach has also been applied to images. Whereas that approach assumes continuous source channels mixed up to the same number of channels by a mixing matrix, we assume that images are composed of only a few image primitives. This means that for images we have fewer sources than pixels. Additionally, in order to reduce unimportant information, we aim only for the most important source patterns with the highest occurrence probabilities or the largest information content, called "Principal Independent Components (PIC)". For the example of a synthetic picture composed of characters, this idea gives us the most important ones. Nevertheless, for natural images where no a-priori probabilities can be computed, this does not lead to an acceptable reproduction error. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.