Informatik
Refine
Year of publication
- 2009 (21)
Document Type
- Working Paper (9)
- Article (5)
- Doctoral Thesis (3)
- Conference Proceeding (2)
- Bachelor Thesis (1)
- Book (1)
Language
- English (21)
Has Fulltext
- yes (21)
Is part of the Bibliography
- no (21)
Keywords
- Lambda-Kalkül (6)
- Nebenläufigkeit (3)
- Funktionale Programmiersprache (2)
- Operationale Semantik (2)
- Programmiersprache (2)
- Pufferspeicher (2)
- Algorithmus (1)
- Flash Memories (1)
- Formale Semantik (1)
- Haskell 98 (1)
- Kontextuelle Gleichheit (1)
- Nichtdeterminismus (1)
- Online Algorithms (1)
- Paging Algorithms (1)
- Rényi mutual information (1)
- Seitenersetzungsstrategie (1)
- Takens-Grassberger correlation integral (1)
- ambiguity (1)
- automata (1)
- classification (1)
- clustering (1)
- communication complexity (1)
- concurrency (1)
- contextual equivalence (1)
- data streams (1)
- feature selection (1)
- futures (1)
- haskell (1)
- lambda calculus (1)
- lower bounds (1)
- machine models (1)
- non-determinism (1)
- nondeterministic finite automata (1)
- process approximation (1)
- programming languages design (1)
- semantics (1)
- the set disjointness problem (1)
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence, which as a consequence also yields soundness of the encodings with respect to a contextually defined notion of program equivalence. While these translations encode blocking into queuing and waiting, we also describe an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore, we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
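The busy-waiting alternative mentioned in the abstract can be illustrated with a small sketch. This is not the calculus encoding from the paper, merely a hypothetical Python analogue of a one-place buffer whose operations spin (rather than block) until the cell is in the required state:

```python
import threading
import time

class BusyWaitBuffer:
    """One-place buffer using busy-waiting instead of blocking:
    put/get spin until the cell is empty/full, respectively.
    Illustrative sketch only, not the paper's encoding."""

    def __init__(self):
        self._lock = threading.Lock()
        self._full = False
        self._value = None

    def put(self, v):
        while True:                       # busy-wait until the cell is empty
            with self._lock:
                if not self._full:
                    self._value, self._full = v, True
                    return
            time.sleep(0)                 # yield to other threads

    def get(self):
        while True:                       # busy-wait until the cell is full
            with self._lock:
                if self._full:
                    v = self._value
                    self._value, self._full = None, False
                    return v
            time.sleep(0)
```

With one producer and one consumer thread, values pass through the single cell in order; the cost is that waiting threads repeatedly poll the lock instead of being suspended.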
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus’ semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as nondeterminism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe’s proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules; and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which enables the proof tool of simulation. Unfortunately, a direct proof appears to be impossible.
The correctness proof is by defining another calculus L comprising the complex variants of copy, case-reduction and seq-reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and allows a proof of correctness of transformations, and that the simple calculus LS, the calculus L, and the calculus LA all have an equivalent contextual preorder.
The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). ...
Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural models. In the present thesis, we introduce several recurrent network models of threshold units that combine spike timing dependent plasticity with homeostatic plasticity mechanisms like intrinsic plasticity or synaptic normalization. We investigate how these different forms of plasticity shape the dynamics and computational properties of recurrent networks. The networks receive input sequences composed of different symbols and learn the structure embedded in these sequences in an unsupervised manner. Information is encoded in the form of trajectories through a high-dimensional state space reminiscent of recent biological findings on cortical coding. We find that these self-organizing plastic networks are able to represent and "understand" the spatio-temporal patterns in their inputs while maintaining their dynamics in a healthy regime suitable for learning. The emergent properties are not easily predictable on the basis of the individual plasticity mechanisms at work. Our results underscore the importance of studying the interaction of different forms of plasticity on network behavior.
A framework for the analysis and visualization of multielectrode spike trains / von Ovidiu F. Jurjut
(2009)
The brain is a highly distributed system of constantly interacting neurons. Understanding how it gives rise to our subjective experiences and perceptions depends largely on understanding the neuronal mechanisms of information processing. These mechanisms are still poorly understood and a matter of ongoing debate remains the timescale on which the coding process evolves. Recently, multielectrode recordings of neuronal activity have begun to contribute substantially to elucidating how information coding is implemented in brain circuits. Unfortunately, analysis and interpretation of multielectrode data is often difficult because of their complexity and large volume. Here we propose a framework that enables the efficient analysis and visualization of multielectrode spiking data. First, using self-organizing maps, we identified reoccurring multi-neuronal spike patterns that evolve on various timescales. Second, we developed a color-based visualization technique for these patterns. They were mapped onto a three-dimensional color space based on their reciprocal similarities, i.e., similar patterns were assigned similar colors. This innovative representation enables a quick and comprehensive inspection of spiking data and provides a qualitative description of pattern distribution across entire datasets. Third, we quantified the observed pattern expression motifs and we investigated their contribution to the encoding of stimulus-related information. An emphasis was on the timescale on which patterns evolve, covering the temporal scales from synchrony up to mean firing rate. Using our multi-neuronal analysis framework, we investigated data recorded from the primary visual cortex of anesthetized cats. We found that cortical responses to dynamic stimuli are best described as successions of multi-neuronal activation patterns, i.e., trajectories in a multidimensional pattern space. 
Patterns that encode stimulus-specific information are not confined to a single timescale but can span a broad range of timescales, which are tightly related to the temporal dynamics of the stimuli. Therefore, the strict separation between synchrony and mean firing rate is somewhat artificial as these two represent only extreme cases of a continuum of timescales that are expressed in cortical dynamics. Results also indicate that timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (~10-20 ms) appear to play a particularly salient role in coding, as patterns evolving on these timescales seem to be involved in the representation of stimuli with both slow and fast temporal dynamics.
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities for many dimensions from only a few samples is treated by using the Rényi mutual information of order two as the computational base. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced, and for real-world applications the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for time series. As an example, a small blackbox-glassbox case shows how the relevant features and their time lags are determined in the time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. With this computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
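The estimation route described here, order-2 Rényi entropies obtained from Grassberger-Takens correlation sums, can be sketched as follows. This is a simplified illustration with a fixed hypercube size eps (the paper's adaptive procedure, missing-value handling and ranking-based acceleration are omitted, and all names are hypothetical):

```python
import numpy as np

def correlation_sum(data, eps):
    """Grassberger-Takens correlation sum: fraction of distinct sample
    pairs whose maximum-norm distance is below the hypercube size eps."""
    data = np.asarray(data, dtype=float)
    data = data.reshape(len(data), -1)          # (n samples, d features)
    n = len(data)
    dist = np.abs(data[:, None, :] - data[None, :, :]).max(axis=2)
    pairs = np.count_nonzero(dist < eps) - n    # drop self-pairs
    return pairs / (n * (n - 1))

def renyi_mi2(x, y, eps=0.25):
    """Order-2 Renyi mutual information estimate via
    I2(X;Y) = H2(X) + H2(Y) - H2(X,Y), with H2 = -log C(eps).
    eps is a fixed, illustrative choice (no adaptive hypercube size)."""
    h2 = lambda s: -np.log(correlation_sum(s, eps))
    return h2(x) + h2(y) - h2(np.column_stack([x, y]))
```

For strongly dependent variables the joint correlation sum stays close to the marginal ones, so the estimate is large; for independent variables it factorizes and the estimate is near zero.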
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
Ambiguity and communication
(2009)
The ambiguity of a nondeterministic finite automaton (NFA) N for input size n is the maximal number of accepting computations of N for an input of size n. For all k, r ∈ ℕ we construct languages L_{r,k} which can be recognized by NFAs of size k·poly(r) with ambiguity O(n^k), but for which every NFA with ambiguity o(n^k) requires exponential size. In particular, a hierarchy for polynomial ambiguity is obtained, solving a long-standing open problem (Ravikumar and Ibarra, 1989; Leung, 1998).
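The notion of ambiguity can be made concrete with a short sketch: counting the accepting computations of an NFA by dynamic programming over the number of runs reaching each state. This is a generic illustration, not the construction of the languages L_{r,k} from the paper:

```python
from collections import defaultdict

def count_accepting_runs(delta, start, accept, word):
    """Count accepting computations of an NFA on `word`.
    delta maps (state, symbol) to a set of successor states."""
    runs = {start: 1}                     # state -> number of runs reaching it
    for symbol in word:
        nxt = defaultdict(int)
        for state, n in runs.items():
            for q in delta.get((state, symbol), ()):
                nxt[q] += n               # each transition extends every run
        runs = nxt
    return sum(n for q, n in runs.items() if q in accept)
```

For example, the NFA over {a} with transitions 0 -a-> {0, 1} and 1 -a-> {1}, start state 0 and accepting state 1 has exactly n accepting runs on a^n (one per position where the run switches from 0 to 1), i.e., linear ambiguity.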
Algorithms and data structures constitute the theoretical foundations of computer science and are an integral part of any classical computer science curriculum. Due to their high level of abstraction, the understanding of algorithms is of crucial concern to the vast majority of novice students. To facilitate the understanding and teaching of algorithms, a new research field termed "algorithm visualisation" evolved in the early 1980s. This field is concerned with innovative techniques and concepts for the development of effective algorithm visualisations for teaching, study, and research purposes. Due to the large number of requirements that high-quality algorithm visualisations need to meet, developing and deploying effective algorithm visualisations from scratch is often deemed to be an arduous, time-consuming task, which necessitates high-level skills in didactics, design, programming and evaluation. A substantial part of this thesis is devoted to the problems and solutions related to the automation of three-dimensional visual simulation of algorithms. The scientific contribution of the research presented in this work lies in addressing three concerns: - Identifying and investigating the issues related to the full automation of visual simulations. - Developing an automation-based approach to minimising the effort required for creating effective visual simulations. - Designing and implementing a rich environment for the visualisation of arbitrary algorithms and data structures in 3D. The research presented in this thesis is of considerable interest to (1) researchers seeking to facilitate the development process of algorithm visualisations, (2) educators concerned with adopting algorithm visualisations as a teaching aid and (3) students interested in developing their own algorithm animations.
Various concurrency primitives have been added to functional programming languages in different ways. In Haskell such a primitive is the MVar, joins are described in JoCaml, and Alice ML uses futures to provide concurrent behaviour. Although these concurrency libraries seem to behave well, their equivalence to each other has not yet been proven; an expressive formal system is needed. In their paper "On proving the equivalence of concurrency primitives", Jan Schwinghammer, David Sabel, Joachim Niehren, and Manfred Schmidt-Schauß define a universal calculus for concurrency primitives known as the typed lambda calculus with futures, in which equivalence of processes has been proved and an encoding of simple one-place buffers has been worked out. This bachelor's thesis is about encoding more complex concurrency abstractions in the lambda calculus with futures and proving the correctness of their operational semantics. Given the new abstractions, we discuss program equivalence between them. Finally, we present a library written in Haskell that exposes futures and our concurrency abstractions as a proof of concept.
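The MVar-style one-place buffer that this line of work starts from can be sketched as follows, purely as an illustration in Python rather than in Haskell or in the lambda calculus with futures (all names hypothetical): put blocks while the buffer is full, take blocks while it is empty.

```python
import threading

class OnePlaceBuffer:
    """MVar-like one-place buffer (illustrative sketch, not the
    thesis' encoding): put blocks while full, take blocks while empty."""

    def __init__(self):
        self._cond = threading.Condition()
        self._full = False
        self._value = None

    def put(self, v):
        with self._cond:
            while self._full:             # block until the cell is empty
                self._cond.wait()
            self._value, self._full = v, True
            self._cond.notify_all()

    def take(self):
        with self._cond:
            while not self._full:         # block until the cell is full
                self._cond.wait()
            v, self._full = self._value, False
            self._cond.notify_all()
            return v
```

A producer thread repeatedly calling put and a consumer calling take synchronize through the single cell, so values are handed over strictly in order.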
This paper gives a brief overview of computation models for data stream processing, and it introduces a new model for multi-pass processing of multiple streams, the so-called mp2s-automata. Two algorithms for solving the set disjointness problem with these automata are presented. The main technical contribution of this paper is the proof of a lower bound on the size of memory and the number of heads that are required for solving the set disjointness problem with mp2s-automata.
We propose a variation of online paging in two-level memory systems where pages in the fast cache get modified and therefore have to be explicitly written back to the slow memory upon evictions. For increased performance, up to alpha arbitrary pages can be moved from the cache to the slow memory within a single joint eviction, whereas fetching pages from the slow memory is still performed on a one-by-one basis. The main objective in this new alpha-paging scenario is to bound the number of evictions. After providing experimental evidence that alpha-paging can adequately model flash-memory devices in the context of translation layers we turn to the theoretical connections between alpha-paging and standard paging. We give lower bounds for deterministic and randomized alpha-paging algorithms. For deterministic algorithms, we show that an adaptation of LRU is strongly competitive, while for the randomized case we show that by adapting the classical Mark algorithm we get an algorithm with a competitive ratio larger than the lower bound by a multiplicative factor of approximately 1.7.
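The eviction-counting objective can be illustrated with a small sketch of LRU adapted to alpha-paging: on a miss with a full cache, the alpha least-recently-used pages are written back in one joint eviction, while fetches remain one page at a time. This is an illustrative simplification, not the strongly competitive algorithm analyzed in the paper:

```python
from collections import OrderedDict

def alpha_lru(requests, cache_size, alpha):
    """alpha-paging with an LRU policy (illustrative sketch).
    Returns (number of joint evictions, number of misses)."""
    cache = OrderedDict()                 # pages ordered from LRU to MRU
    evictions = misses = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)       # refresh recency on a hit
            continue
        misses += 1
        if len(cache) >= cache_size:
            evictions += 1                # one joint eviction of up to alpha pages
            for _ in range(min(alpha, len(cache))):
                cache.popitem(last=False)
        cache[page] = True
    return evictions, misses
```

With alpha = 1 this degenerates to ordinary LRU paging, where every miss on a full cache costs one eviction; larger alpha trades extra misses for fewer (batched) write-backs, which is what makes the model attractive for flash memory.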
Poster presentation: The analysis of neuronal processes distributed across multiple cortical areas aims at the identification of interactions between signals recorded at different sites. Such interactions can be described by measuring the stability of phase angles in the case of oscillatory signals or other forms of signal dependencies for less regular signals. Before, however, any form of interaction can be analyzed at a given time and frequency, it is necessary to assess whether all potentially contributing signals are present. We have developed a new statistical procedure for the detection of coincident power in multiple simultaneously recorded analog signals, allowing the classification of events as 'non-accidental co-activation'. This method can effectively operate on single trials, each lasting only for a few seconds. Signals need to be transformed into time-frequency space, e.g. by applying a short-time Fourier transformation using a Gaussian window. The discrete wavelet transform (DWT) is used in order to weight the resulting power patterns according to their frequency. Subsequently, the weighted power patterns are binarized via applying a threshold. At this final stage, significant power coincidence is determined across all subgroups of channel combinations for individual frequencies by selecting the maximum ratio between observed and expected duration of co-activation as test statistic. The null hypothesis that the activity in each channel is independent from the activity in every other channel is simulated by independent, random rotation of the respective activity patterns. We applied this procedure to single trials of multiple simultaneously sampled local field potentials (LFPs) obtained from occipital, parietal, central and precentral areas of three macaque monkeys. Since their task was to use visual cues to perform a precise arm movement, co-activation of numerous cortical sites was expected. 
In a data set with 17 channels analyzed, up to 13 sites expressed simultaneous power in the range between 5 and 240 Hz. On average, more than 50% of active channels participated at least once in a significant power co-activation pattern (PCP). Because the significance of such PCPs can be evaluated at the level of single trials, we are confident that this procedure is useful to study single trial variability with sufficient accuracy that much of the behavioral variability can be explained by the dynamics of the underlying distributed neuronal processes.
Poster presentation: Introduction The ability of neurons to emit different firing patterns is considered relevant for neuronal information processing. In dopaminergic neurons, prominent patterns include highly regular pacemakers with separate spikes and stereotyped intervals, processes with repetitive bursts and partial regularity, and irregular spike trains with nonstationary properties. In order to model and quantify these processes and the variability of their patterns with respect to pharmacological and cellular properties, we aim to describe the two dimensions of burstiness and regularity in a single model framework. Methods We present a stochastic spike train model in which the degree of burstiness and the regularity of the oscillation are described independently and with two simple parameters. In this model, a background oscillation with independent and normally distributed intervals gives rise to Poissonian spike packets with a Gaussian firing intensity. The variability of inter-burst intervals and the average number of spikes in each burst indicate regularity and burstiness, respectively. These parameters can be estimated by fitting the model to the autocorrelograms. This makes it possible to assign every spike train a position in the two-dimensional space described by regularity and burstiness and thus to investigate the dependence of the firing patterns on different experimental conditions. Finally, burst detection in single spike trains is possible within the model because the parameter estimates determine the appropriate bandwidth that should be used for burst identification. Results and Discussion We applied the model to a sample data set obtained from dopaminergic substantia nigra and ventral tegmental area neurons recorded extracellularly in vivo and studied differences between the firing activity of dopaminergic neurons in wildtype and K-ATP channel knock-out mice.
The model is able to represent a variety of discharge patterns and to describe changes induced pharmacologically. It provides a simple and objective classification scheme for the observed spike trains into pacemaker, irregular and bursty processes. In addition to the simple classification, changes in the parameters can be studied quantitatively, also including the properties related to bursting behavior. Interestingly, the proposed algorithm for burst detection may be applicable also to spike trains with nonstationary firing rates if the remaining parameters are unaffected. Thus, the proposed model and its burst detection algorithm can be useful for the description and investigation of neuronal firing patterns and their variability with cellular and experimental conditions.
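The two-parameter model described above, a background oscillation with independent normal intervals (regularity) emitting Poissonian spike packets with Gaussian jitter (burstiness), can be sketched as a simple generator. Parameter names and values are illustrative, not fitted to the recorded data:

```python
import numpy as np

def simulate_spike_train(n_bursts=200, interval_mean=1.0, interval_sd=0.1,
                         spikes_per_burst=3.0, burst_sd=0.02, seed=0):
    """Sketch of the two-parameter spike train model:
    interval_sd controls regularity of the background oscillation,
    spikes_per_burst controls burstiness of the Poissonian packets."""
    rng = np.random.default_rng(seed)
    # background oscillation: independent, normally distributed intervals
    burst_times = np.cumsum(rng.normal(interval_mean, interval_sd, n_bursts))
    spikes = []
    for t in burst_times:
        k = rng.poisson(spikes_per_burst)        # packet size
        spikes.extend(t + rng.normal(0.0, burst_sd, k))  # Gaussian jitter
    return np.sort(np.array(spikes))
```

Setting spikes_per_burst near 1 and interval_sd small yields a pacemaker-like train; increasing either parameter moves the train toward the bursty or irregular corners of the two-dimensional pattern space.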
Poster presentation: An important challenge in neuroscience is understanding how networks of neurons go about processing information. Synapses are thought to play an essential role in cellular information processing; however, quantitative and mathematical models of the underlying physiologic processes that occur at synaptic active zones are lacking. We are generating mathematical models of synaptic vesicle dynamics at a well-characterized model synapse, the Drosophila larval neuromuscular junction. This synapse's simplicity, accessibility to various electrophysiological recording and imaging techniques, and the genetic malleability intrinsic to the Drosophila system make it ideal for computational and mathematical studies. We have employed a reductionist approach and started by modeling single presynaptic boutons. Synaptic vesicles can be divided into different pools; however, a quantitative understanding of their dynamics at the Drosophila neuromuscular junction is lacking [4]. We performed biologically realistic simulations of high and low release probability boutons [3] using partial differential equations (PDE), taking into account not only the evolution in time but also the spatial structure in two dimensions (the extension to three dimensions will be implemented soon). The PDEs are solved using UG, a program library for the calculation of multi-dimensional PDEs with a finite volume approach and implicit time-stepping methods, leading to large linear equation systems that are solved with multi-grid methods [3,4]. Numerical calculations are done on multi-processor computers for fast calculations using different parameters in order to assess the biological feasibility of different models. In preliminary simulations, we modeled vesicle dynamics as a diffusion process describing exocytosis as Neumann streams at synaptic active zones. The initial results obtained with these models are consistent with experimental data. However, this should be regarded as a work in progress.
Further refinements will be implemented, including simulations using morphologically realistic geometries which were generated from confocal scans of the neuromuscular junction using NeuRA (a Neuron Reconstruction Algorithm). Other parameters such as glutamate diffusion and reuptake dynamics, as well as postsynaptic receptor kinetics will be incorporated as well.
Poster presentation: Introduction Dopaminergic neurons in the midbrain show a variety of firing patterns, ranging from very regularly firing pacemaker cells to bursty and irregular neurons. The effects of different experimental conditions (like pharmacological treatment or genetic manipulations) on these neuronal discharge patterns may be subtle. Applying a stochastic model is a quantitative approach to reveal these changes. ...
Background Although current molecular clock methods offer greater flexibility in modelling historical evolutionary events, calibration of the clock with dates from the fossil record is still problematic for many groups. Here we implement several new approaches in molecular dating to estimate evolutionary ages of Lacertidae, an Old World family of lizards with a poor fossil record and uncertain phylogeny. Four different models of rate variation are tested in a new program for Bayesian phylogenetic analysis called TreeTime, based on a combination of mitochondrial and nuclear gene sequences. We incorporate paleontological uncertainty into divergence estimates by expressing multiple calibration dates as a range of probabilistic distributions. We also test the reliability of our proposed calibrations by exploring effects of individual priors on posterior estimates. Results According to the most reliable model, as indicated by Bayes factor comparison, modern lacertids arose shortly after the K/T transition and entered Africa about 45 million years ago, with the majority of their African radiation occurring in the Eocene and Oligocene. Our findings indicate much earlier origins for these clades than previously reported, and we discuss our results in light of paleogeographic trends during the Cenozoic. Conclusions This study represents the first attempt to estimate evolutionary ages of a specific group of reptiles exhibiting uncertain phylogenetic relationships, molecular rate variation and a poor fossil record. Our results emphasize the sensitivity of molecular divergence dates to fossil calibrations, and support the use of combined molecular data sets and multiple, well-spaced dates from the fossil record as minimum node constraints. The bioinformatics program used here, TreeTime, is publicly available, and we recommend its use for molecular dating of taxa faced with similar challenges.
This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their typed variant, where our specific calculus has letrec, recursive types and is nondeterministic. An addition of a type label to every subexpression is all that is needed, together with some natural constraints for the consistency of the type labels and well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained and that untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a nice interaction between reduction and typing, some reduction rules have to be accompanied with a type modification by generalizing or instantiating types.
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness wrt. program semantics, since the encodings are adequate translations wrt. contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.