Retiming is a widely investigated technique for performance optimization. In general, it performs extensive modifications on a circuit netlist, leaving it unclear whether the achieved performance improvement will still be valid after placement has been performed. This paper presents an approach for integrating retiming into a timing-driven placement environment. The experimental results show the benefit of the proposed approach on circuit performance in comparison with design flows using retiming only as a pre- or post-placement optimization method.
We study threshold testing, an elementary probing model with the goal to choose a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results for the tests of each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler’s problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
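As a rough illustration of the probing model described above, the following sketch simulates one simple non-adaptive strategy (a single common threshold for all variables) on i.i.d. uniform values and compares it against the expected maximum. The threshold value, the uniform distribution, and the simulation itself are illustrative assumptions; this is not the paper's adaptive 0.869-competitive algorithm.

```python
# Minimal Monte Carlo sketch of the threshold-testing model, assuming
# i.i.d. Uniform(0, 1) variables and a single non-adaptive threshold.
import random

def simulate(n=20, threshold=0.8, trials=50_000):
    alg_sum, max_sum = 0.0, 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        # Test every X_i once against the same non-adaptive threshold t_i.
        passed = [i for i, x in enumerate(xs) if x >= threshold]
        # All passing variables share the same conditional expectation
        # (threshold + 1) / 2, so any of them may be selected; if no test
        # passed, every variable has conditional expectation threshold / 2.
        choice = passed[0] if passed else 0
        alg_sum += xs[choice]
        max_sum += max(xs)
    return alg_sum / max_sum

if __name__ == "__main__":
    print(f"empirical ratio vs. expected maximum: {simulate():.3f}")
```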
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with and, hence, recognize the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision, robust, knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
This paper proposes a new approach for the encoding of images by only a few important components. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has found strong interest in the neural network community. Applied to images, we aim for the most important source patterns, those with the highest occurrence probability or highest information, called principal independent components (PIC). For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon’s rate distortion theory.
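To make the classical baseline concrete, here is a minimal sketch of PCA-based patch encoding and its reproduction error. The patch dimension, the random stand-in data, and the function name pca_encode are illustrative assumptions; the ICA-based PIC criterion of the paper is not reproduced here.

```python
# Minimal sketch of the classical PCA baseline: encode image patches by a few
# principal components and measure the mean squared reproduction error.
import numpy as np

def pca_encode(patches, k):
    """patches: (num_patches, dim) array; keep the k leading components."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Principal directions via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # (k, dim) principal directions
    codes = centered @ basis.T          # low-dimensional codes
    reconstruction = codes @ basis + mean
    error = np.mean((patches - reconstruction) ** 2)
    return codes, reconstruction, error

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.normal(size=(500, 64))   # stand-in for 8x8 image patches
    _, _, err = pca_encode(patches, k=8)
    print(f"mean squared reproduction error with 8 components: {err:.4f}")
```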
Interest in becoming a data scientist or entering related professions in the data science domain is growing rapidly. To meet this demand, we propose a novel educational service that aims to provide tailored learning paths for data science. Our target user is one who aims to become an expert in data science. Our approach is to analyze the practitioner's background and match it to suitable learning units. A critical feature is that we use gamification to reinforce practitioner engagement. We believe that our work provides a practical guideline for those who want to learn data science.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky’s lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky’s lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen. 1998 ACM Subject Classification: F.4.2, F.3.2, F.3.3, F.4.1. Key words and phrases: semantics, contextual equivalence, bisimulation, lambda calculus, call-by-need, letrec.
In order to promote the accessibility of biodiversity data in historic and contemporary literature, we introduce a new interdisciplinary project called BIOfid (FID = Fachinformationsdienst, a service for providing specialized information). The project aims at a mobilization of data available only in print by combining the digitization of scientific biodiversity literature with the development of innovative text mining tools for complex, ultimately semantic, searches throughout the complete text corpus. A major prerequisite for the development of such search tools is the provision of sophisticated anatomy ontologies on the one hand, and of complete lists of species names (currently considered valid as well as all synonyms) at a global scale on the other hand. In the initial stage, we chose examples from German publications of the past 250 years dealing with the geographic distribution and ecology of vascular plants (Tracheophyta), birds (Aves), as well as moths and butterflies (Lepidoptera) in Germany. These taxa have been prioritized according to current demands of German research groups (about 50 sites) aiming at analyses and modeling of distribution patterns and their changes through time. In the long term, we aim at providing data and open source software applicable to any taxon and geographic region. For this purpose, a platform for open access journals for the long-term availability of professional e-journals will be established. All generated data will also be made accessible through GFBio (German Federation for Biological Data). BIOfid is supported by the LIS (Scientific Library Services and Information Systems) program of the German Research Foundation (DFG).
This paper describes the ongoing efforts of the authors to present ancient Greek and Roman numismatic data on the public internet, with an emphasis on efforts to integrate information from multiple sources using Linked Data and Semantic Web techniques. By way of a very modern metaphor, it is useful to think of coins as intentionally created packages of 'named entities'. Each coin was struck by a particular authority, often at a known site, and coins often make reference to familiar concepts such as deities, historical events, or symbols that were widely recognized in the ancient world. The institutions represented among the authors have deployed search interfaces that allow users to take advantage of this aspect of numismatic databases. The American Numismatic Society's database provides faceted search of its collection of over 550,000 objects. The Portable Antiquities Scheme (PAS) in the UK presents individual finds (and hoards) recorded throughout the country. The Römisch-Germanische Kommission and the University of Frankfurt (DBIS) are developing a prototype metaportal (INTERFACE) that accesses national databases of coin finds held in Frankfurt, Vienna and Utrecht. Each of these resources is beginning to explore Semantic Web/Linked Data approaches, so the role of numismatic standards is immediately coming to the fore. DBIS and INTERFACE are developing a numismatic ontology. At the ANS and PAS, the public databases already present RDF serializations based on Dublin Core. Together, the authors have begun to explore standardization of conceptual names on the basis of the vocabulary presented at the site http://nomisma.org. Nomisma.org is a collaborative effort to provide stable digital representations of numismatic concepts and entities. It provides URIs for such basic concepts as 'coin', 'mint', 'axis'. All of these are defined within the scope of numismatics but are already being linked to other stable resources where available. This is particularly the case for mints. For example, the URI http://nomisma.org/id/corinth is intended to represent that ancient city in its role as a minter/issuer of coins. The URI is linked via the SKOS ontology to the Pleiades Gazetteer of ancient places. This allows Nomisma to be the basis for a common representation of the concept that an object is a coin minted at Corinth. The ANS has already deployed such relationships in its public database. The work of all these projects is very much in progress, so this paper hopes to generate discussion on how multiple large projects can move forward in their own work while encouraging sufficient commonality to support large-scale research questions undertaken by diverse audiences.
Exhaustive, automatic testing of dataflow (esp. map-reduce) programs has emerged as an important challenge. Past work demonstrated effective ways to generate small example data sets that exercise operators in the Pig platform, which is used to generate Hadoop map-reduce programs. Although such prior techniques attempt to cover all cases of operator use, in practice they often fail. Our SEDGE system addresses these completeness problems: for every dataflow operator, we produce data aiming to cover all cases that arise in the dataflow program (e.g., both passing and failing a filter). SEDGE relies on transforming the program into symbolic constraints and solving the constraints using a symbolic reasoning engine (a powerful SMT solver), while using input data as concrete aids in the solution process. The approach resembles dynamic-symbolic (a.k.a. "concolic") execution in a conventional programming language, adapted to the unique features of the dataflow domain.
In third-party benchmarks, SEDGE achieves higher coverage than past techniques for 5 out of 20 PigMix benchmarks and 7 out of 11 SDSS benchmarks (with equal coverage for the rest of the benchmarks). We also show that our targeting of the high-level dataflow language pays off: for complex programs, state-of-the-art dynamic-symbolic execution at the level of the generated map-reduce code (instead of the original dataflow program) requires many more test cases or achieves much lower coverage than our approach.
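A tiny sketch of the constraint-solving idea behind this approach: for a single hypothetical filter operator (age > 18), ask an SMT solver for one record that passes and one that fails, so both outcomes of the operator are covered. It uses the Z3 Python bindings; the predicate and field name are invented, and this is not SEDGE's actual implementation or one of the benchmark programs.

```python
# Generate covering inputs for one made-up dataflow filter via the Z3 SMT solver.
from z3 import Int, Solver, sat

def cover_filter():
    age = Int("age")
    examples = {}
    for label, constraint in [("pass", age > 18), ("fail", age <= 18)]:
        solver = Solver()
        solver.add(constraint)           # symbolic constraint for this branch
        if solver.check() == sat:
            examples[label] = solver.model()[age]
    return examples

if __name__ == "__main__":
    print(cover_filter())   # e.g. one record passing and one failing the filter
```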
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency. In this paper, two approaches are presented which generalize the verification of coindexing constraints to deficient descriptions. First, a partly heuristic method, which has been implemented, is described. Second, a provably complete method is specified. It provides the means to exploit the results of anaphor resolution for a further structural disambiguation. By rendering possible a parallel processing model, this method exhibits, in a general sense, a higher degree of robustness. As a practically optimal solution, a combination of the two approaches is suggested.
This paper considers the logic FOcard, i.e., first-order logic with cardinality predicates that can specify the size of a structure modulo some number. We study the expressive power of FOcard on the class of languages of ranked, finite, labelled trees with successor relations. Our first main result characterises the class of FOcard-definable tree languages in terms of algebraic closure properties of the tree languages. As it can be effectively checked whether the language of a given tree automaton satisfies these closure properties, we obtain a decidable characterisation of the class of regular tree languages definable in FOcard. Our second main result considers first-order logic with unary relations, successor relations, and two additional designated symbols < and + that must be interpreted as a linear order and its associated addition. Such a formula is called addition-invariant if, for each fixed interpretation of the unary relations and successor relations, its result is independent of the particular interpretation of < and +. We show that the FOcard-definable tree languages are exactly the regular tree languages definable in addition-invariant first-order logic. Our proof techniques involve tools from algebraic automata theory, reasoning with locality arguments, and the use of logical interpretations. We combine and extend methods developed by Benedikt and Segoufin (ACM ToCL, 2009) and Schweikardt and Segoufin (LICS, 2010).
Retiming is a widely investigated technique for performance optimization. It performs powerful modifications on a circuit netlist. However, it is often not clear whether the predicted performance improvement will still be valid after placement has been performed. This paper presents a new retiming algorithm using a highly accurate timing model taking into account the effect of retiming on capacitive loads of single wires as well as fanout systems. We propose the integration of retiming into a timing-driven standard cell placement environment based on simulated annealing. Retiming is used as an optimization technique throughout the whole placement process. The experimental results show the benefit of the proposed approach. In comparison with the conventional design flow based on standard FEAS, our approach achieved an improvement in cycle time of up to 34%, and of 17% on average.
This paper is a contribution to exploring and analyzing space-improvements in concurrent programming languages, in particular in the functional process-calculus CHF. Space-improvements are defined as a generalization of the corresponding notion in deterministic pure functional languages. The main part of the paper is the O(n · log n) algorithm SPOPTN for offline space optimization of several parallel independent processes. Applications of this algorithm are: (i) affirmation of space-improving transformations for particular classes of program transformations; (ii) support of an interpreter-based method for refuting space-improvements; and (iii) use as a stand-alone offline optimizer for space (or similar resources) of parallel processes.
Given x ∈ ℝ^n, an integer relation for x is a non-trivial vector m ∈ ℤ^n with inner product ⟨m, x⟩ = 0. In this paper we prove the following: unless every NP language is recognizable in deterministic quasi-polynomial time, i.e., in time O(n^{poly(log n)}), the ℓ_∞-shortest integer relation for a given vector x ∈ ℚ^n cannot be approximated in polynomial time within a factor of 2^{log^{0.5−γ} n}, where γ is an arbitrarily small positive constant. This result is quasi-complementary to positive results derived from lattice basis reduction. A variant of the well-known L³-algorithm approximates, for a vector x ∈ ℚ^n, the ℓ_2-shortest integer relation within a factor of 2^{n/2} in polynomial time. Our proof relies on recent advances in the theory of probabilistically checkable proofs, in particular on a reduction from 2-prover 1-round interactive proof systems. The same inapproximability result is valid for finding the ℓ_∞-shortest integer solution for a homogeneous linear system of equations over ℚ.
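For readers unfamiliar with the notion, here is a small worked example of an integer relation, following the definition above (not taken from the paper):

```latex
% Worked examples of integer relations (illustrative only).
\[
  x = (2,\, 3,\, 7) \in \mathbb{Q}^3, \qquad
  m = (2,\, 1,\, -1) \in \mathbb{Z}^3, \qquad
  \langle m, x\rangle = 2\cdot 2 + 1\cdot 3 - 1\cdot 7 = 0 .
\]
\[
  \text{For } x = (\varphi^2,\, \varphi,\, 1) \in \mathbb{R}^3
  \text{ with } \varphi = \tfrac{1+\sqrt{5}}{2}:\quad
  m = (1,\, -1,\, -1), \qquad
  \langle m, x\rangle = \varphi^2 - \varphi - 1 = 0 .
\]
```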
In its first part, this contribution briefly reviews the application of neural network methods to medical problems and characterizes their advantages and problems in the context of the medical background. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic systems. Then, the paradigm of neural networks is briefly introduced, and the main problems of medical data bases and the basic approaches for training and testing a network with medical data are described. Additionally, the problem of interfacing the network and its results is discussed, and the neuro-fuzzy approach is presented. Finally, as a case study of neural rule-based diagnosis, septic shock diagnosis is described, on the one hand by a growing neural network and on the other hand by a rule-based system. Keywords: Statistical Classification, Adaptive Prediction, Neural Networks, Neurofuzzy, Medical Systems
In recent years, much effort went into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with the performance of its manually designed ancestor ROSANA.
This paper gives a brief overview of computation models for data stream processing, and it introduces a new model for multi-pass processing of multiple streams, the so-called mp2s-automata. Two algorithms for solving the set disjointness problem with these automata are presented. The main technical contribution of this paper is the proof of a lower bound on the size of memory and the number of heads that are required for solving the set disjointness problem with mp2s-automata.
We present a novel practical algorithm that, given a lattice basis b_1, ..., b_n, finds in O(n² (k/6)^{k/4}) average time a shorter vector than b_1, provided that b_1 is (k/6)^{n/(2k)} times longer than the length of the shortest non-zero lattice vector. We assume that the given basis b_1, ..., b_n has an orthogonal basis that is typical for worst-case lattice bases. The new reduction method samples short lattice vectors in high-dimensional sublattices and advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.
The archaeological data dealt with in our database solution Antike Fundmünzen in Europa (AFE), which records finds of ancient coins, is entered by humans. Based on the Linked Open Data (LOD) approach, we link our data to Nomisma.org concepts, as well as to other resources like Online Coins of the Roman Empire (OCRE). Since information such as denomination, material, etc. is recorded for each single coin, this information should be identical for coins of the same type. Unfortunately, this is not always the case, mostly due to human errors. Based on rules that we implemented, we were able to make use of this redundant information in order to detect possible errors within AFE, and were even able to correct errors in Nomisma.org. However, the approach had the weakness that it was necessary to transform the data into an internal data model. In a second step, we therefore developed our rules within the Linked Open Data world. The rules can now be applied to datasets following the Nomisma.org modelling approach, as we demonstrated with data held by Corpus Nummorum Thracorum (CNT). We believe that the use of methods like this to increase the data quality of individual databases, as well as across different data sources and up to the higher levels of OCRE and Nomisma.org, is mandatory in order to increase trust in them.
Random graph models, originally conceived to study the structure of networks and the emergence of their properties, have become an indispensable tool for experimental algorithmics. Amongst them, hyperbolic random graphs form a well-accepted family, yielding realistic complex networks while being both mathematically and algorithmically tractable. We introduce two generators MemGen and HyperGen for the G_{alpha,C}(n) model, which distributes n random points within a hyperbolic plane and produces m=n*d/2 undirected edges for all point pairs close by; the expected average degree d and exponent 2*alpha+1 of the power-law degree distribution are controlled by alpha>1/2 and C. Both algorithms emit a stream of edges which they do not have to store. MemGen keeps O(n) items in internal memory and has a time complexity of O(n*log(log n) + m), which is optimal for networks with an average degree of d=Omega(log(log n)). For realistic values of d=o(n / log^{1/alpha}(n)), HyperGen reduces the memory footprint to O([n^{1-alpha}*d^alpha + log(n)]*log(n)). In an experimental evaluation, we compare HyperGen with four generators among which it is consistently the fastest. For small d=10 we measure a speed-up of 4.0 compared to the fastest publicly available generator, increasing to 29.6 for d=1000. On commodity hardware, HyperGen produces 3.7e8 edges per second for graphs with 1e6 < m < 1e12 and alpha=1, utilising less than 600MB of RAM. We demonstrate nearly linear scalability on an Intel Xeon Phi.
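The following naive quadratic-time sketch illustrates the underlying threshold model (points in a hyperbolic disk of radius R, an edge whenever the hyperbolic distance is at most R). The parameter choices are arbitrary assumptions, and none of MemGen's or HyperGen's streaming and memory optimizations are reproduced here.

```python
# Naive O(n^2) reference generator for threshold hyperbolic random graphs.
import math
import random

def hyperbolic_threshold_graph(n=1000, alpha=1.0, R=12.0, seed=0):
    rng = random.Random(seed)
    # Radial coordinates with density ~ alpha * sinh(alpha * r), angles uniform.
    pts = []
    for _ in range(n):
        u = rng.random()
        r = math.acosh(1.0 + (math.cosh(alpha * R) - 1.0) * u) / alpha
        theta = rng.random() * 2.0 * math.pi
        pts.append((r, theta))
    edges = []
    for i in range(n):
        r1, t1 = pts[i]
        for j in range(i + 1, n):
            r2, t2 = pts[j]
            dt = math.pi - abs(math.pi - abs(t1 - t2))   # angle difference
            cosh_d = (math.cosh(r1) * math.cosh(r2)
                      - math.sinh(r1) * math.sinh(r2) * math.cos(dt))
            if math.acosh(max(cosh_d, 1.0)) <= R:        # hyperbolic distance <= R
                edges.append((i, j))
    return edges

if __name__ == "__main__":
    edges = hyperbolic_threshold_graph()
    print(f"{len(edges)} edges, average degree {2 * len(edges) / 1000:.1f}")
```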
The diagram-based method to prove correctness of program transformations consists of computing a complete set of (forking and commuting) diagrams, acting on sequences of standard reductions and program transformations. In many cases, the only missing step for proving correctness of a program transformation is to show the termination of the rearrangement of the sequences. Therefore we encode complete sets of diagrams as term rewriting systems and use an automated tool to show termination, which provides a further step in the automation of the inductive step in correctness proofs.
We present new concepts to integrate logic synthesis and physical design. Our methodology uses general Boolean transformations as known from technology-independent synthesis, and a recursive bi-partitioning placement algorithm. In each partitioning step, the precision of the layout data increases. This allows effective guidance of the logic synthesis operations for cycle time optimization. An additional advantage of our approach is that no complicated layout corrections are needed when the netlist is changed.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
Poster presentation: Calcium plays a pivotal role in relaying electrical signals of the cell to subcellular compartments, such as the nucleus. Since this one ion type is used by the cell for many processes, a neuron needs to establish finely tuned calcium pathways in order to be able to differentiate multiple tasks, [1-3].
While it is known that neurons can actively change their shape upon neuronal activity, [4-7], we here present novel findings of activity-regulated nuclear morphology, [8,9]. With the help of an experimental and computational modeling approach, we show that hippocampal neurons can change the previously spherical shape of their nuclei to complex and infolded morphologies. This change in morphology is demonstrated to be regulated by NMDA-receptor-gated calcium, while synaptic and extra-synaptic NMDA receptors elicit opposing effects on nuclear morphology, [8].
The structural alterations of the cell nucleus have significant effects on nuclear calcium dynamics. Compartmentalization of the nucleus, due to membrane infoldings, changes calcium frequencies, amplitudes and spatial distributions, [8,10]. Since these parameters have been shown to control downstream events towards gene transcription, [11,12], the results elucidate the cellular control of nuclear function with the help of morphology modulation. With respect to processes downstream of calcium, we show that histone H3 phosphorylation is closely linked to nuclear morphology. Investigating the nuclear morphologies of hippocampal neurons, two major classes were identified [9,10]. One class contains non-infolded nuclei that have the function of calcium signal integrators, while the other class contains highly infolded nuclei, which function as frequency detectors of nuclear calcium, [10].
Extending this interdisciplinary approach of investigating structure/function relationships in neurons, the effects of cellular morphology – as well as the morphology of the endoplasmic reticulum and other organelles – on neuronal calcium signals are currently being investigated. This endeavor makes use of highly detailed, three-dimensional models of neuronal calcium dynamics, including the three-dimensional morphology of the cell and its organelles.
This paper presents a new timing driven approach for cell replication tailored to the practical needs of standard cell layout design. Cell replication methods have been studied extensively in the context of generic partitioning problems. However, until now it has remained unclear what practical benefit can be obtained from this concept in a realistic environment for timing driven layout synthesis. Therefore, this paper presents a timing driven cell replication procedure, demonstrates its incorporation into a standard cell placement and routing tool and examines its benefit on the final circuit performance in comparison with conventional gate or transistor sizing techniques. Furthermore, we demonstrate that cell replication can deteriorate the stuck-at fault testability of circuits and show that stuck-at redundancy elimination must be integrated into the placement procedure. Experimental results demonstrate the usefulness of the proposed methodology and suggest that cell replication should be an integral part of the physical design flow complementing traditional gate sizing techniques.
The Specialized Information Service Biodiversity Research (BIOfid) has been launched to mobilize valuable biological data from printed literature that has been hidden in German libraries over the past 250 years. In this project, we annotate German texts converted by OCR from historical scientific literature on the biodiversity of plants, birds, moths and butterflies. Our work enables the automatic extraction of biological information previously buried in the mass of papers and volumes. For this purpose, we generated training data for the tasks of Named Entity Recognition (NER) and Taxa Recognition (TR) in biological documents. We use this data to train a number of leading machine learning tools and create a gold standard for TR in biodiversity literature. More specifically, we perform a practical analysis of our newly generated BIOfid dataset through various downstream-task evaluations and establish a new state of the art for TR with an F-score of 80.23%. In this sense, our paper lays the foundations for future work in the field of information extraction in biology texts.
An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately in an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
We empirically investigate algorithms for solving Connected Components in the external memory model. In particular, we study whether the randomized O(Sort(E)) algorithm by Karger, Klein, and Tarjan can be implemented to compete with practically promising and simpler algorithms having only slightly worse theoretical cost, namely Borůvka’s algorithm and the algorithm by Sibeyn and collaborators. For all algorithms, we develop and test a number of tuning options. Our experiments are executed on a large set of different graph classes including random graphs, grids, geometric graphs, and hyperbolic graphs. Among our findings are: The Sibeyn algorithm is a very strong contender due to its simplicity and due to an added degree of freedom in its internal workings when used in the Connected Components setting. With the right tunings, the Karger-Klein-Tarjan algorithm can be implemented to be competitive in many cases. Higher graph density seems to benefit Karger-Klein-Tarjan relative to Sibeyn. Borůvka’s algorithm is not competitive with the two others.
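For orientation, here is a minimal in-memory union-find baseline for Connected Components. It ignores I/O cost entirely and is not one of the external-memory algorithms evaluated above, but such a baseline is handy as a correctness reference in experiments of this kind.

```python
# Minimal in-memory Connected Components via union-find with path compression.
def connected_components(num_nodes, edges):
    parent = list(range(num_nodes))

    def find(x):
        # Path compression keeps the trees flat.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv        # union the two components

    # Map every node to a canonical representative of its component.
    return [find(x) for x in range(num_nodes)]

if __name__ == "__main__":
    comp = connected_components(6, [(0, 1), (1, 2), (4, 5)])
    print(comp)   # nodes 0-2 share a label, node 3 is alone, nodes 4-5 share one
```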
Ambiguity and communication
(2009)
The ambiguity of a nondeterministic finite automaton (NFA) N for input size n is the maximal number of accepting computations of N for an input of size n. For all k, r ∈ ℕ we construct languages L_{r,k} which can be recognized by NFAs of size k · poly(r) and ambiguity O(n^k), but L_{r,k} has only NFAs of exponential size if ambiguity o(n^k) is required. In particular, a hierarchy for polynomial ambiguity is obtained, solving a long-standing open problem (Ravikumar and Ibarra, 1989; Leung, 1998).
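To illustrate the ambiguity measure, the following sketch counts the accepting computations of an NFA on a word by dynamic programming over path counts. The toy automaton is a made-up example, not one of the languages L_{r,k} from the paper.

```python
# Count accepting computations of an NFA on a word (the quantity whose growth
# in the input length defines the automaton's ambiguity).
def accepting_computations(delta, start, finals, word):
    # delta: dict mapping (state, symbol) -> list of successor states.
    counts = {start: 1}
    for symbol in word:
        nxt = {}
        for state, c in counts.items():
            for succ in delta.get((state, symbol), []):
                nxt[succ] = nxt.get(succ, 0) + c
        counts = nxt
    return sum(c for state, c in counts.items() if state in finals)

if __name__ == "__main__":
    delta = {
        ("q0", "a"): ["q0", "q1"],   # nondeterministic branch on 'a'
        ("q0", "b"): ["q0"],
        ("q1", "a"): ["q1"],
        ("q1", "b"): ["q1"],
    }
    print(accepting_computations(delta, "q0", {"q1"}, "aba"))  # -> 2
```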
Seminar: 10501 - Advances and Applications of Automata on Words and Trees. The aim of the seminar was to discuss and systematize the recent rapid progress in automata theory and to identify important directions for future research. To this end, the seminar brought together more than 40 researchers from automata theory and related fields of applications. We had 19 talks of 30 minutes and 5 one-hour lectures, leaving ample room for discussions. In the following we describe the topics in more detail.
From 12.12.2010 to 17.12.2010, the Dagstuhl Seminar 10501 "Advances and Applications of Automata on Words and Trees" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
The early prediction of mortality is one of the unresolved tasks in intensive care medicine. This contribution models medical symptoms as observations caused by transitions between hidden Markov states. Learning the underlying state transition probabilities results in a prediction success probability of about 91%. The results are discussed and put in relation to the model used. Finally, the rationales for using the model are reflected upon: are there states in the septic shock data?
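A minimal sketch of the hidden-Markov idea behind this contribution: the forward algorithm computes the likelihood of an observation sequence under given state-transition and emission probabilities. All states, observations, and probabilities below are invented placeholders, not parameters learned from septic-shock data.

```python
# Forward algorithm for a tiny hidden Markov model with placeholder parameters.
def forward(obs, init, trans, emit):
    """init[s], trans[s][t], emit[s][o]: probabilities over hidden states s, t."""
    alpha = {s: init[s] * emit[s][obs[0]] for s in init}
    for o in obs[1:]:
        alpha = {t: sum(alpha[s] * trans[s][t] for s in alpha) * emit[t][o]
                 for t in init}
    return sum(alpha.values())          # P(observation sequence | model)

if __name__ == "__main__":
    init = {"stable": 0.8, "critical": 0.2}
    trans = {"stable": {"stable": 0.9, "critical": 0.1},
             "critical": {"stable": 0.3, "critical": 0.7}}
    emit = {"stable": {"normal": 0.7, "alarming": 0.3},
            "critical": {"normal": 0.2, "alarming": 0.8}}
    print(forward(["normal", "alarming", "alarming"], init, trans, emit))
```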
We introduce a new method for representing and solving a general class of non-preemptive resource-constrained project scheduling problems. The new approach is to represent scheduling problems as descriptions (activity terms) in a language called RSV, which allows nested expressions using pll, seq, and xor. The activity terms of RSV are similar to concepts in a description logic. The language RSV generalizes previous approaches to scheduling with variants insofar as it permits xors not only of atomic activities but also of arbitrary activity terms. A specific semantics that assigns to activity terms their set of active schedules shows the correctness of a calculus that normalizes RSV activity terms, similar to propositional DNF computation. Based on RSV, this paper describes a diagram-based algorithm for the RSV problem which uses a scan-line principle. The scan-line principle is used for determining and resolving the occurring resource conflicts and leads to a non-redundant generation of all active schedules and thus to a computation of the optimal schedule.
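As a small illustration of the scan-line principle mentioned above, the sketch below sweeps over the start/end events of already placed activities and reports the time points at which resource demand exceeds capacity. The activity data are invented, and the RSV activity terms and their normalization are not modelled here.

```python
# Scan-line sweep over activity start/end events to detect resource conflicts.
def resource_conflicts(activities, capacity):
    """activities: list of (start, end, demand); returns conflicting time points."""
    events = []
    for start, end, demand in activities:
        events.append((start, demand))    # demand becomes active
        events.append((end, -demand))     # demand is released
    events.sort()
    conflicts, load = [], 0
    for time, delta in events:
        load += delta
        if load > capacity:
            conflicts.append(time)
    return conflicts

if __name__ == "__main__":
    acts = [(0, 4, 2), (1, 3, 2), (2, 5, 1)]      # (start, end, demand)
    print(resource_conflicts(acts, capacity=3))   # conflicts while all three overlap
```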
This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for the presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination analysis (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted. Each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.