Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and composed of a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieve failure tolerance is to calculate the block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for ever-growing distributed systems. The utilization of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version. This arithmetically intensive algorithm is better suited to the wide SIMD units of currently available general-purpose processors, but also displays significant benefits when used with modern many-core accelerator devices (for instance, the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code which shows a significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtain the optimized generator matrices are elaborated and their implications are discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
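The key idea behind the table-free approach is that multiplication in a binary extension field is carry-less polynomial multiplication followed by reduction modulo the field polynomial, which maps onto shifts and XORs. As a rough illustration only, here is a scalar Python sketch for GF(2^8) with the reduction polynomial 0x11B; the thesis's actual field size, polynomial, and SIMD/CUDA data layout are not reproduced here.

```python
def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """Multiply two GF(2^8) elements as polynomials over GF(2).
    Assumption (not from the thesis): the field GF(2^8) with the
    reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    result = 0
    while b:
        if b & 1:            # add (XOR) a shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:        # reduce modulo the field polynomial
            a ^= poly
    return result

# Known test vector for this field: {57} * {83} = {C1}
assert gf256_mul(0x57, 0x83) == 0xC1
```

A SIMD or GPU version would apply the same shift-and-XOR pattern to many field elements in parallel, which is what makes the polynomial formulation attractive compared with per-element table lookups.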
Paging is one of the most prominent problems in the field of online algorithms. We have to serve a sequence of page requests using a cache that can hold up to k pages. If the currently requested page is in cache we have a cache hit; otherwise we say that a cache miss occurs, and the requested page needs to be loaded into the cache. The goal is to minimize the number of cache misses by providing a good page-replacement strategy. This problem arises in memory management when data is stored in a two-level memory hierarchy, more precisely a small and fast memory (cache) and a slow but large memory (disk). The most important application area is the virtual memory management of operating systems. Accessed pages are either already in the RAM or need to be loaded from the hard disk into the RAM using expensive I/O. The time needed to access the RAM is insignificant compared to an I/O operation, which takes several milliseconds.
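As a toy illustration of this model (the request trace and cache size are made up, not from the thesis), counting the misses of the LRU replacement strategy can be sketched as:

```python
from collections import OrderedDict

def count_misses_lru(requests, k):
    """Serve a request sequence with a cache of k pages under LRU and
    count the cache misses (illustrative sketch only)."""
    cache = OrderedDict()                 # keys = cached pages, ordered by recency
    misses = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)       # cache hit: refresh recency
        else:
            misses += 1                   # cache miss: load the page
            if len(cache) >= k:
                cache.popitem(last=False) # evict the least recently used page
            cache[page] = True
    return misses

# Example: a 3-page cache on a short hypothetical trace
print(count_misses_lru([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], k=3))
```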
The traditional evaluation framework for online algorithms is competitive analysis, where the online algorithm is compared to the optimal offline solution. A shortcoming of competitive analysis is its overly pessimistic worst-case guarantees. For example, LRU has a theoretical competitive ratio of k, but in practice this ratio rarely exceeds the value 4.
Reducing the gap between theory and practice has been an active research topic in recent years. More recent evaluation models have been used to prove that LRU is an optimal online algorithm, or belongs to a class of optimal algorithms, which was motivated by the assumption that LRU is one of the best algorithms in practice. Most of the newer models make LRU-friendly assumptions regarding the input, thus not leaving much room for new algorithms.
Only a few works in the field of online paging have introduced new algorithms that can compete with LRU with respect to the number of cache misses.
In the first part of this thesis we study strongly competitive randomized paging algorithms, i.e. algorithms with optimal competitive guarantees. Although the tight bound for the competitive ratio has been known for decades, current algorithms matching this bound are complex and have high running times and memory requirements. We propose the algorithm OnlineMin which processes a page request in O(log k/log log k) time in the worst case. The best previously known solution requires O(k^2) time.
Usually the memory requirement of a paging algorithm is measured by the maximum number of pages that the algorithm keeps track of. Any algorithm stores information about the k pages in the cache. In addition, it can also store information about pages not in the cache, called bookmarks. We answer the open question of Bein et al. '07 whether strongly competitive randomized paging algorithms using only o(k) bookmarks exist. To do so we modify the Partition algorithm of McGeoch and Sleator '85, which has an unbounded bookmark complexity, and obtain Partition2, which uses O(k/log k) bookmarks.
In the second part we extract ideas from theoretical analysis of randomized paging algorithms in order to design deterministic algorithms that perform well in practice. We refine competitive analysis by introducing the attack rate parameter r, which ranges between 1 and k. We show that r is a tight bound on the competitive ratio of deterministic algorithms.
We give empirical evidence that r is usually much smaller than k and thus r-competitive algorithms have a reasonable performance on real-world traces. By introducing the r-competitive priority-based algorithm class OnOPT we obtain a collection of promising algorithms to beat the LRU standard. We single out the new algorithm RDM and show that it outperforms LRU and some of its variants on a wide range of real-world traces.
Since RDM is more complex than LRU, one might think at first sight that the gain in terms of a lower number of cache misses is ruined by the high runtime for processing pages. We engineer a fast implementation of RDM and compare it to LRU and the very fast FIFO algorithm in an overall evaluation scheme, where we measure the runtime of the algorithms and add penalties for each cache miss. Experimental results show that for realistic penalties RDM still outperforms these two algorithms, even if we grant the competitors an idealistic runtime of 0.
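A minimal sketch of such an overall evaluation scheme might look like the following; the penalty value, trace, and algorithm interface are hypothetical placeholders, not the thesis's actual setup.

```python
import time

def overall_cost(algorithm, requests, k, miss_penalty):
    """Overall score = measured runtime of the replacement policy plus a
    fixed penalty per cache miss (hypothetical framing of the evaluation
    scheme; 'algorithm' could be count_misses_lru from the sketch above)."""
    start = time.perf_counter()
    misses = algorithm(requests, k)
    runtime = time.perf_counter() - start
    return runtime + misses * miss_penalty   # penalty in seconds per miss
```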
We introduce tree-width for first order formulae φ, fotw(φ). We show that computing fotw is fixed-parameter tractable with parameter fotw. Moreover, we show that on classes of formulae of bounded fotw, model checking is fixed-parameter tractable with the length of the formula as parameter. This is done by translating a formula φ with fotw(φ) < k into a formula of the k-variable fragment Lk of first order logic. For fixed k, the question whether a given first order formula is equivalent to an Lk formula is undecidable. In contrast, the classes of first order formulae with bounded fotw are fragments of first order logic for which this equivalence is decidable. Our notion of tree-width generalises tree-width of conjunctive queries to arbitrary formulae of first order logic by taking into account the quantifier interaction in a formula. Moreover, it is more powerful than the notion of elimination-width of quantified constraint formulae, defined by Chen and Dalmau (CSL 2005): for quantified constraint formulae, both bounded elimination-width and bounded fotw allow for model checking in polynomial time. We prove that the fotw of a quantified constraint formula φ is bounded by the elimination-width of φ, and we exhibit a class of quantified constraint formulae with bounded fotw that has unbounded elimination-width. A similar comparison holds for strict tree-width of non-recursive stratified datalog as defined by Flum, Frick, and Grohe (JACM 49, 2002). Finally, we show that fotw has a characterization in terms of a cops and robbers game without monotonicity cost.
Poster presentation: Twenty-Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013.
The synaptic cleft is an extracellular domain that is capable of relaying a presynaptically received electrical signal by diffusive neurotransmitters to the postsynaptic membrane. The cleft is trans-synaptically bridged by ring-like shaped clusters of pre- and postsynaptically localized calcium-dependent adhesion proteins of the N-Cadherin type and is possibly the smallest intercircuit in nervous systems [1]. The strength of association between the pre- and postsynaptic membranes can account for synaptic plasticity such as long-term potentiation [2]. Through neuronal activity, the intra- and extracellular calcium levels are modulated through calcium exchangers embedded in the pre- and postsynaptic membrane. Variations of the concentration of cleft calcium induce changes in the N-Cadherin zipper, which in synaptic resting states is rigid and tightly connects the pre- and postsynaptic domain. During synaptic activity, calcium concentrations are hypothesized to drop below critical thresholds, which loosens the N-Cadherin connections and subsequently "unzips" the Cadherin-mediated connection. These processes may result in changes in synaptic strength [2]. In order to investigate the calcium-mediated N-Cadherin dynamics at the synaptic cleft, we developed a three-dimensional model including the cleft morphology and all prominent calcium exchangers and corresponding density distributions [3-6]. The necessity for a fully three-dimensional model becomes apparent when investigating the effects of the spatial architecture of the synapse [7], [8]. Our data show that the localization of calcium channels with respect to the N-Cadherin ring has substantial effects on the time scales on which the Cadherin zipper switches between states, ranging from seconds to minutes. This will have significant effects on synaptic signaling. Furthermore, we see that high-frequency action potential firing can only be relayed to the calcium/N-Cadherin system at a synapse under precise spatial synaptic reorganization.
To truly appreciate the myriad of events which relate synaptic function and vesicle dynamics, simulations should be done in a spatially realistic environment. This holds true in particular in order to explain both the rather astonishing motor patterns which we observed in in vivo recordings and which underlie peristaltic contractions, as well as the shape of the EPSPs at different forms of long-term stimulation, both presented here, at a well-characterized synapse, the neuromuscular junction (NMJ) of the Drosophila larva (cf. Figure 1). To this end, we have employed a reductionist approach and generated three-dimensional models of single presynaptic boutons at the Drosophila larval NMJ. Vesicle dynamics are described by diffusion-like partial differential equations which are solved numerically on unstructured grids using the uG platform. In our model we varied parameters such as bouton size, vesicle output probability (Po), stimulation frequency and number of synapses, to observe how altering these parameters affected bouton function. Hence we demonstrate that the morphological and physiological specialization may be a convergent evolutionary adaptation to regulate the trade-off between sustained, low-output and short-term, high-output synaptic signals. There seems to be a biologically meaningful explanation for the co-existence of the two different bouton types as previously observed at the NMJ (characterized especially by the relation between size and Po); assigning two different tasks with respect to short- and long-term behaviour could allow for an optimized interplay of different synapse types. We present astonishingly similar results of experimental and simulation data, which could be obtained in particular without any data fitting, based only on biophysical values taken from different experimental results. As a side product, we demonstrate how advanced methods from numerical mathematics could help in the future to resolve other difficult experimental neurobiological issues.
Functional modules of metabolic networks are essential for understanding the metabolism of an organism as a whole. With the vast amount of experimental data and the construction of complex and large-scale, often genome-wide, models, the computer-aided identification of functional modules becomes more and more important. Since steady states play a key role in biology, many methods have been developed in that context, for example, elementary flux modes, extreme pathways, transition invariants and place invariants. Metabolic networks can also be studied from the point of view of graph theory, and algorithms for graph decomposition have been applied for the identification of functional modules. A prominent and currently intensively discussed field of methods in graph theory addresses Q-modularity. In this paper, we recall known concepts of module detection based on the steady-state assumption, focusing on transition invariants (elementary modes) and their computation as minimal solutions of systems of Diophantine equations. We present the Fourier-Motzkin algorithm in detail. Afterwards, we introduce Q-modularity as an example of a useful non-steady-state method and its application to metabolic networks. To illustrate and discuss the concepts of invariants and Q-modularity, we use a part of the central carbon metabolism in potato tubers (Solanum tuberosum) as a running example. The intention of the paper is to give a compact presentation of known steady-state concepts from a graph-theoretical viewpoint in the context of network decomposition and reduction, and to introduce the application of Q-modularity to metabolic Petri net models.
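For Petri net invariants, the Fourier-Motzkin (Farkas) style elimination can be sketched as follows; this is a generic textbook formulation for nonnegative integer solutions of C·x = 0, with a made-up toy incidence matrix, not the paper's exact presentation or example network.

```python
from math import gcd
from functools import reduce

def t_invariants(C):
    """Nonnegative integer solutions of C*x = 0 (T-invariants of a Petri net
    with incidence matrix C, places x transitions) via a Fourier-Motzkin /
    Farkas style elimination.  Sketch only: resulting invariants are not
    filtered for minimal support."""
    m, n = len(C), len(C[0])
    # Working table: one row per transition, (equation part | identity part).
    rows = [[C[p][t] for p in range(m)] + [int(j == t) for j in range(n)]
            for t in range(n)]
    for col in range(m):                       # eliminate one equation at a time
        pos = [r for r in rows if r[col] > 0]
        neg = [r for r in rows if r[col] < 0]
        rows = [r for r in rows if r[col] == 0]
        for rp in pos:
            for rn in neg:
                # combine a positive and a negative row so the column cancels
                comb = [abs(rn[col]) * x + abs(rp[col]) * y for x, y in zip(rp, rn)]
                g = reduce(gcd, (abs(v) for v in comb), 0) or 1
                rows.append([v // g for v in comb])
    # The identity part of every remaining row is an invariant.
    return [r[m:] for r in rows if any(r[m:])]

# Toy net (made up): t1 fills place p1, t2 moves a token from p1 to p2,
# t3 empties p2.  Firing t1, t2, t3 once each reproduces the marking.
C = [[1, -1, 0],
     [0, 1, -1]]
print(t_invariants(C))   # [[1, 1, 1]]
```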
Finding motifs in biological, social, technological, and other types of networks has become a widespread method to gain more knowledge about these networks' structure and function. However, this task is computationally very demanding, because it is closely tied to the graph isomorphism problem, which is in NP but not known to be in P or to be NP-complete. Accordingly, this research endeavours to decrease the need to call the NAUTY isomorphism detection method, which is the most time-consuming step in many existing algorithms. The work provides an extremely fast motif detection algorithm called QuateXelero, which has a quaternary tree data structure at its heart. The proposed algorithm is based on the well-known ESU (FANMOD) motif detection algorithm. The results of experiments on some standard model networks confirm the overall superiority of the proposed algorithm, QuateXelero, compared with two of the fastest existing algorithms, G-Tries and Kavosh. QuateXelero is especially fast in constructing the central data structure of the algorithm from scratch based on the input network.
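The ESU enumeration scheme that FANMOD and QuateXelero build on can be sketched as follows; the adjacency-dict graph representation and the toy 4-cycle are illustrative only, and the subsequent isomorphism classification (the step NAUTY or the quaternary tree accelerates) is omitted.

```python
def esu(graph, k):
    """Enumerate all connected subgraphs of size k with the ESU scheme.
    'graph' is an adjacency dict {vertex: set(neighbours)} with comparable
    vertex labels (sketch only)."""
    found = []

    def extend(sub, extension, v):
        if len(sub) == k:
            found.append(frozenset(sub))
            return
        ext = set(extension)
        while ext:
            w = ext.pop()
            # exclusive neighbourhood of w: vertices larger than v, not in
            # the subgraph, and not adjacent to any subgraph vertex
            excl = {u for u in graph[w]
                    if u > v and u not in sub
                    and all(u not in graph[s] for s in sub)}
            extend(sub | {w}, ext | excl, v)

    for v in graph:
        extend({v}, {u for u in graph[v] if u > v}, v)
    return found

# Toy example: a 4-cycle 0-1-2-3 has exactly four connected 3-vertex subgraphs
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(esu(G, 3))
```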
Integer point sets minimizing average pairwise L1 distance: What is the optimal shape of a town?
(2010)
An n-town, n ∈ ℕ, is a group of n buildings, each occupying a distinct position on a 2-dimensional integer grid. If we measure the distance between two buildings along the axis-parallel street grid, then an n-town has optimal shape if the sum of all pairwise Manhattan distances is minimized. This problem has been studied for cities, i.e., the limiting case of very large n. For cities, it is known that the optimal shape can be described by a differential equation, for which no closed-form solution is known. We show that optimal n-towns can be computed in O(n^7.5) time. This is also practically useful, as it allows us to compute optimal solutions up to n=80.
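The objective being minimized is easy to state; a brute-force check of the cost of a candidate n-town (hypothetical example layouts, not the paper's algorithm) looks like this:

```python
from itertools import combinations

def town_cost(positions):
    """Sum of pairwise Manhattan (L1) distances of a set of grid points,
    i.e. the objective an optimal n-town minimizes (brute-force check only;
    the paper's O(n^7.5) algorithm is far more involved)."""
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in combinations(positions, 2))

# Two hypothetical 4-towns: a 2x2 square versus a 1x4 row
print(town_cost([(0, 0), (0, 1), (1, 0), (1, 1)]))   # 8
print(town_cost([(0, 0), (1, 0), (2, 0), (3, 0)]))   # 10
```

The comparison illustrates why compact, roughly round shapes beat elongated ones.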
Paging is one of the prominent problems in the field of online algorithms. While in the deterministic setting there exist simple and efficient strongly competitive algorithms, in the randomized setting a tradeoff between competitiveness and memory is still not settled. Bein et al. [4] conjectured that there exist strongly competitive randomized paging algorithms using o(k) bookmarks, i.e. pages not in cache that the algorithm keeps track of. Also in [4], the first algorithm using O(k) bookmarks (more precisely 2k), Equitable2, was introduced, proving a conjecture in [7] in the affirmative.
We prove tighter bounds for Equitable2, showing that it requires less than k bookmarks, more precisely ≈ 0.62k. We then give a lower bound for Equitable2 showing that it cannot both be strongly competitive and use o(k) bookmarks. Nonetheless, we show that it can trade competitiveness for space. More precisely, if its competitive ratio is allowed to be (Hk + t), then it requires k/(1 + t) bookmarks.
Our main result proves the conjecture that there exist strongly competitive paging algorithms using o(k) bookmarks. We propose an algorithm, denoted Partition2, which is a variant of the Partition algorithm by McGeoch and Sleator [13]. While classical Partition is unbounded in its space requirements, Partition2 uses Θ(k/log k) bookmarks. Furthermore, we show that this result is asymptotically tight when the forgiveness steps are deterministic.
With the increasing heterogeneity of modern hardware, different requirements for 3d applications arise. Although real-time rendering of photo-realistic images is possible using today's graphics cards, it still requires large computational effort. Furthermore, smart-phones or computers with older, less powerful graphics cards may not be able to reproduce these results. To retain interactive rendering, usually the detail of a scene is reduced, so that less data needs to be processed. This removal of data, however, may introduce errors, so-called artifacts. These artifacts may be distracting for a human spectator gazing at the display. Thus, the visual quality of the presented scene is reduced. This is counteracted by identifying features of an object that can be removed without introducing artifacts. Most methods utilize geometrical properties, such as distance or shape, to rate the quality of the performed reduction. This information is used to generate so-called Levels of Detail (LODs), which are made available to the rendering system. This reduces the detail of an object using the precalculated LODs, e.g. when it is moved into the back of the scene. The appropriate LOD is selected using a metric, and it replaces the currently displayed version. This exchange must be made smoothly, requiring both LOD versions to be drawn simultaneously during a transition. Otherwise, the exchange will introduce discontinuities, which are easily discovered by a human spectator. After completion of the transition, only the newly introduced LOD version is drawn and the previous overhead is removed. These LOD methods usually operate with discrete levels and exploit limitations of both the display and the spectator: the human.
Humans are limited in their vision. This ranges from being unable to distinguish colors in varying illumination scenarios to the limitation of focusing on only one location at a time. Researchers have developed many applications that exploit these limitations to increase the quality of an applied compression. Some popular methods of vision-based compression are MPEG and JPEG. For example, JPEG compression exploits the reduced sensitivity of humans regarding color and so encodes colors with a lower resolution. Other fields, such as auditory perception, also allow the exploitation of human limitations. MP3 compression, for example, reduces the quality of stored frequencies if other frequencies mask them. Various computer models of perception exist. In our rendering scenario, a model that cannot be influenced by a human spectator is advantageous, such as visual salience, or saliency.
Saliency is a notion from psychophysics that determines how an object "pops out" from its surroundings. These outstanding objects (or features) are important for human vision and are directly evaluated by our Human Visual System (HVS). Saliency combines multiple parts of the HVS and allows an identification of regions where humans are likely to look. In applications, saliency-based methods have been used to control recursive or progressive rendering methods. Especially expensive display methods, such as path tracing or global illumination calculations, benefit from a perceptual representation, as recursions or calculations can be aborted if only small or imperceptible errors are expected to occur. Yet, saliency is commonly applied to 2d images, and an extension towards 3d objects has only partially been presented. Some issues need to be addressed to accomplish a complete transfer.
In this work, we present a smart rendering system that not only utilizes a 3d visual salience model but also applies the reduction in detail directly during rendering. As opposed to normal LOD methods, this detail reduction is not limited to a predefined set of levels; rather, a dynamic and continuous LOD is created. Furthermore, to apply this reduction in a human-oriented way, a universal function to compute the saliency of a 3d object is presented. The definition of this function makes it possible to precalculate and store object-related visual salience information. This stored data is then applicable in any illumination scenario and allows regions of interest on the surface of a 3d object to be identified. Unlike preprocessed methods, which generate a view-independent LOD, this identification includes information of the scene as well. Thus, we are able to define a perception-based, view-specific LOD. Performance measures of a prototypical implementation on computers with modern graphics cards achieved interactive frame rates, and several tests have proven the validity of the reduction.
The adaptation of an object is performed with a dynamic data structure, the TreeCut. It is designed to operate on hierarchical representations, which define a multi-resolution object. In such a hierarchy, the leaf nodes contain the highest detail while inner nodes are approximations of their respective subtrees. As opposed to classical hierarchical rendering methods, a cut is stored and re-traversal of the tree during rendering is avoided. Due to the explicit cut representation, the TreeCut can be altered using only two core operations: refine and coarse. The refine operation increases detail by replacing a node of the tree with its children, while the coarse operation removes the node along with its siblings and replaces them with their parent node. These operations do not rely on external information and can be performed in a local manner; they only require direct successor or predecessor information. Different strategies to evolve the TreeCut are presented, which adapt the representation using only information given by the current cut. These evaluate the cut by assigning either a priority or a target level (or bucket) to each cut node. The former is modelled as an optimization problem that increases the average priority of a cut while being restricted in some way, e.g. in size. The latter evolves the cut to match a certain distribution. This is applied in cases where a prioritization of nodes is not applicable. Both evaluation strategies operate with linear time complexity with respect to the size of the current TreeCut.
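A minimal sketch of the two cut operations described above (class and attribute names are illustrative, not the thesis implementation):

```python
class TreeNode:
    """Node of a multi-resolution hierarchy: leaves carry the full detail,
    inner nodes approximate their subtree (illustrative structure only)."""
    def __init__(self, payload, children=()):
        self.payload = payload
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

class TreeCut:
    """Explicit cut through the hierarchy; only the cut nodes are rendered
    and the tree is never re-traversed (sketch of the two core operations)."""
    def __init__(self, root):
        self.nodes = {root}                    # coarsest possible cut

    def refine(self, node):
        """Increase detail: replace a cut node by its children."""
        if node in self.nodes and node.children:
            self.nodes.remove(node)
            self.nodes.update(node.children)

    def coarse(self, node):
        """Decrease detail: replace a node and its siblings by their parent."""
        parent = node.parent
        if parent and all(c in self.nodes for c in parent.children):
            self.nodes.difference_update(parent.children)
            self.nodes.add(parent)
```

Both operations touch only a node and its direct parent or children, which matches the local, cut-only evaluation described above.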
The data layout is chosen to separate rendering data and hierarchy in order to enable multi-threaded evaluation and display. The object is adapted over multiple frames while the rendering is not interrupted by the evaluation strategy in use. Therefore, we separate the representation of the hierarchy from the rendering data. Due to its design, the overhead imposed on the TreeCut data structure does not influence rendering performance, and a linear time complexity for rendering is retained. The TreeCut is not limited to altering the geometrical detail of an object. It has also successfully been applied to create a non-photo-realistic stippling display, which draws the object with equally sized points in varying density. In this case the bucket-based evaluation strategy is utilized, which determines the distribution of the cut based on local illumination information. As an alternative, an attention-drawing mechanism is proposed, which applies the TreeCut evaluation strategies to define the display style of a notification icon. A combination of external priorities is used to derive the appropriate icon version. An application for this mechanism is a messaging system that accounts for the current user situation.
When optimizing an object or scene, perceptual methods make it possible to account for or exploit human limitations. To this end, visual salience approaches derive a saliency map, which encodes regions of interest in a 2d map. Rendering algorithms extract importance from such a map and adapt the rendering accordingly, e.g. abort a recursion when the current location is unsalient. The visual salience depends on multiple factors including the view and the illumination of the scene. We extend the existing definition of 2d saliency and propose a universal function for 3d visual salience: the Bidirectional Saliency Weight Distribution Function (BSWDF). Instead of extracting the saliency from a 2d image and approximated 3d information, we compute this information directly from the 3d data. We derive a list of equivalent features for the 3d scenario and add them to the BSWDF. As the BSWDF is universal, 2d images are also covered by the BSWDF, and the calculation of the important regions within images is possible.
To extract the individual features that contribute to visual salience, capabilities of modern graphics cards in combination with an accumulation method for rendering are utilized. Inspired by point-based rendering methods, local features are summed up in a single surface element (surfel) and are compared with their surround to determine whether they "pop out". These operations are performed with a shader program that is executed on the Graphics Processing Unit (GPU) and has direct access to the 3d data. This increases processing speed because no transfer of the data is required. After computation, each of these object-specific features can be combined to derive a saliency map for this object. Surface-specific information, e.g. color or curvature, can be preprocessed and stored on disk. We define a sampling scheme to determine the views that need to be evaluated for each object. With these schemes, the features can be interpolated for any view that occurs during rendering, and the corresponding surface data is reconstructed. These sampling schemes compose a set of images in the form of a lookup table. This is similar to existing rendering techniques, which extract illumination information from a lookup. The size of the lookup table increases only with the number of samples or the image size used for creation, as the images are of equal size. Thus, the quality of the saliency data is independent of the object's geometrical complexity. The computation of a BSWDF can be performed either on a Central Processing Unit (CPU) or a GPU, and an implementation requires only a few instructions when using a shader program. If the surface features have been stored during a preprocess, a reprojection of the data is performed and combined with the current information of the object. Once the data is available, the computation of the saliency values is done using a specialized illumination model, and a priority for each primitive is extracted. If the GPU is used, the calculated data has to be transferred from the graphics card. We therefore use the "transform feedback" capabilities, which allow high transfer rates and preserve the order of processed primitives. Thus, an identification of regions of interest based on the currently used primitives is achieved. The TreeCut evaluation strategies are then able to optimize the representation in a perception-based manner.
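The per-surfel "pop out" test can be read as a center-surround contrast. A much-simplified CPU-side sketch (feature values and neighbourhood structure are hypothetical placeholders, and the GPU accumulation via shaders is omitted):

```python
def surfel_saliency(features, neighbours):
    """Center-surround contrast per surface element: a feature value that
    differs strongly from the average of its surround "pops out".
    'features' maps surfel id -> feature value (e.g. curvature or luminance),
    'neighbours' maps surfel id -> ids in its surround (both hypothetical)."""
    saliency = {}
    for s, value in features.items():
        surround = [features[n] for n in neighbours[s]] or [value]
        saliency[s] = abs(value - sum(surround) / len(surround))
    return saliency
```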
As the adaptation utilizes information of the current scene, each change to an object can result in new visual salience information. Thus, a self-optimizing system is defined: the Feedback System. The output generated by this system converges towards a perception-optimized solution. To show that the saliency information is useful, user tests have been performed with the results generated by the proposed Feedback System. We compared a saliency-enhanced object compression to a purely geometrical approach, common for LOD generation. One result of the tests is that saliency information allows compression to be increased even further than is possible with the purely geometrical methods. The participants were not able to distinguish between objects even if the saliency-based compression had only 60% of the size of the geometrically reduced object. If the size ratio is greater, saliency-based compression is rated, on average, with a higher score, and these results are highly significant according to statistical tests. The Feedback System extends a 3d object with the capability of self-optimization. Not only geometrical detail but also other properties can be limited and optimized using the TreeCut in combination with a BSWDF. We present a dynamic animation, which utilizes a Software Development Kit (SDK) for physical simulations. This was chosen, on the one hand, to show the universal applicability of the proposed system, and on the other hand, to focus on the connection between the TreeCut and the SDK. We adapt the existing framework and include the SDK within our design. In this case, the TreeCut operations alter not only geometrical but also simulation detail. This increases calculation performance because both the rendering and the SDK operate on less data after the reduction has been completed.
The selected simulation type is a soft-body simulation. Soft bodies are deformable to a certain degree but retain their internal connection. An example is a piece of cloth that smoothly fits the underlying surface without tearing apart. Other types are rigid bodies, i.e. idealized objects that cannot be deformed, and fluids or gaseous materials, which are well suited for point-based simulations. Any of these simulations scales with the number of simulation nodes used, and a reduction of detail increases performance significantly. We define a specialized BSWDF to evaluate simulation-specific features, such as motion. The Feedback System then increases detail in highly salient regions, e.g. those with large motion, and saves computation time by reducing detail in static parts of the simulation. Thus, detail of the simulation is preserved while fewer nodes are simulated.
The incorporation of perception in real-time rendering is an important part of recent research. Today, the HVS is well understood, and valid computer models have been derived. These models are frequently used in commercial and free software, e.g. JPEG compression. Within this thesis, the TreeCut is presented to change the LOD of an object in a dynamic and continuous manner. No definition of the individual levels in advance is required, and the transitions are performed locally. Furthermore, in combination with an identification of important regions by the BSWDF, a perceptual evaluation of a 3d object is achieved. As opposed to existing methods, which approximate data from 2d images, the perceptual information is directly acquired from 3d data. Some of this data can be preprocessed if necessary, to defer additional computations during rendering. The Feedback System, created by the TreeCut and the BSWDF, optimizes the representation and is not limited to visual data alone. We have shown with our prototype that interactive frame rates can be achieved with modern hardware, and we have proven the validity of the reductions by performing several user tests. However, the presented system only focuses on specific aspects, and more research is required to capture even more capabilities that a perception-based rendering system can provide.
This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The 12th International Workshop on Termination welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted. Each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.
The diagram-based method to prove correctness of program transformations consists of computing a complete set of (forking and commuting) diagrams, acting on sequences of standard reductions and program transformations. In many cases, the only missing step for proving correctness of a program transformation is to show the termination of the rearrangement of the sequences. Therefore we encode complete sets of diagrams as term rewriting systems and use an automated tool to show termination, which provides a further step in the automation of the inductive step in correctness proofs.
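As a hypothetical illustration of this encoding (not a diagram from the paper): a commuting diagram stating that a transformation step T followed by one standard-reduction step sr can always be rearranged into two standard-reduction steps followed by T would be encoded as the rewrite rule T(sr(x)) -> sr(sr(T(x))). If an automated tool proves the resulting term rewriting system terminating, then the rearrangement of any finite sequence of reductions and transformations terminates as well.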
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting infinite trees, which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
Poster presentation: Calcium plays a pivotal role in relaying electrical signals of the cell to subcellular compartments, such as the nucleus. Since this one ion type is used by the cell for many processes, a neuron needs to establish finely tuned calcium pathways in order to be able to differentiate multiple tasks [1-3].
While it is known that neurons can actively change their shape upon neuronal activity [4-7], we here present novel findings of activity-regulated nuclear morphology [8,9]. With the help of an experimental and computational modeling approach, we show that hippocampal neurons can change the previously spherical shape of their nuclei to complex and infolded morphologies. This change in morphology is demonstrated to be regulated by NMDA-receptor-gated calcium, while synaptic and extra-synaptic NMDA-receptors elicit opposing effects on nuclear morphology [8].
The structural alterations of the cell nucleus have significant effects on nuclear calcium dynamics. Compartmentalization of the nucleus, due to membrane infoldings, changes calcium frequencies, amplitudes and spatial distributions [8,10]. Since these parameters have been shown to control downstream events towards gene transcription [11,12], the results elucidate the cellular control of nuclear function with the help of morphology modulation. With respect to processes downstream of calcium, we show that histone H3 phosphorylation is closely linked to nuclear morphology. Investigating the nuclear morphologies of hippocampal neurons, two major classes were identified [9,10]. One class contains non-infolded nuclei that function as calcium signal integrators, while the other class contains highly infolded nuclei, which function as frequency detectors of nuclear calcium [10].
Extending this interdisciplinary approach of investigating structure/function relationships in neurons, the effects of cellular morphology – as well as the morphology of the endoplasmic reticulum and other organelles – on neuronal calcium signals are currently being investigated. This endeavor makes use of highly detailed, three-dimensional models of neuronal calcium dynamics, including the three-dimensional morphology of the cell and its organelles.
Conceptual design of an ALICE Tier-2 centre integrated into a multi-purpose computing facility
(2012)
This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by the utilization of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for selecting how to integrate the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented along with its effects on the user workload performance. The proposal of a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of cloud computing technology and the 'Infrastructure as Code' concept completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly talk about models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results on various forms of tasks and setups, which show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretical optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments that could show that children do not perform similarly to the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also talk about neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First I will talk about the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced as well as different algorithmic implementations. Secondly, a short review of the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors. I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which will be adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also deal with more complex environments in which it has to handle multiple possible underlying generating models (perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations to the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging.
The approximation is handy because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and are often not true for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach tries to address the neuronal substrates of the learning process for cue integration. I again use a reward-based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction section to discuss recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented together with an analysis of the influence of different plasticity mechanisms on it. Again benefits and shortcomings and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
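Reliability-weighted averaging combines cue estimates by weighting each with its inverse variance; a generic sketch of this standard formula (not the thesis code, example numbers made up):

```python
def reliability_weighted_average(estimates, variances):
    """Combine cue estimates by inverse-variance weighting, the standard
    approximation to the Bayesian (maximum-likelihood) fused estimate for
    independent Gaussian cues."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused, fused_variance

# Example: an auditory and a visual estimate of a source direction (degrees);
# the more reliable visual cue dominates the fused estimate.
print(reliability_weighted_average([10.0, 4.0], [9.0, 1.0]))
```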
The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). ...
From 12.12.2010 to 17.12.2010, the Dagstuhl Seminar 10501 "Advances and Applications of Automata on Words and Trees" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Seminar: 10501 - Advances and Applications of Automata on Words and Trees. The aim of the seminar was to discuss and systematize the recent fast progress in automata theory and to identify important directions for future research. For this, the seminar brought together more than 40 researchers from automata theory and related fields of application. We had 19 talks of 30 minutes and 5 one-hour lectures, leaving ample room for discussions. In the following we describe the topics in more detail.