Poster presentation: The analysis of neuronal processes distributed across multiple cortical areas aims at the identification of interactions between signals recorded at different sites. Such interactions can be described by measuring the stability of phase angles in the case of oscillatory signals, or other forms of signal dependencies for less regular signals. Before any form of interaction can be analyzed at a given time and frequency, however, it is necessary to assess whether all potentially contributing signals are present. We have developed a new statistical procedure for the detection of coincident power in multiple simultaneously recorded analog signals, allowing the classification of events as 'non-accidental co-activation'. The method operates effectively on single trials, each lasting only a few seconds. Signals need to be transformed into time-frequency space, e.g. by applying a short-time Fourier transformation using a Gaussian window. The discrete wavelet transform (DWT) is used to weight the resulting power patterns according to their frequency. Subsequently, the weighted power patterns are binarized by applying a threshold. At this final stage, significant power coincidence is determined across all subgroups of channel combinations for individual frequencies by selecting the maximum ratio between observed and expected duration of co-activation as the test statistic. The null hypothesis that the activity in each channel is independent of the activity in every other channel is simulated by independent, random rotation of the respective activity patterns. We applied this procedure to single trials of multiple simultaneously sampled local field potentials (LFPs) obtained from occipital, parietal, central and precentral areas of three macaque monkeys. Since their task was to use visual cues to perform a precise arm movement, co-activation of numerous cortical sites was expected. In a data set with 17 channels analyzed, up to 13 sites expressed simultaneous power in the range between 5 and 240 Hz. On average, more than 50% of active channels participated at least once in a significant power co-activation pattern (PCP). Because the significance of such PCPs can be evaluated at the level of single trials, we are confident that this procedure is useful for studying single-trial variability with sufficient accuracy that much of the behavioral variability can be explained by the dynamics of the underlying distributed neuronal processes.
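The following minimal sketch illustrates the kind of single-trial test described above. It assumes a Gaussian-window STFT via scipy, a simple frequency weighting standing in for the paper's DWT-based weighting, and a fixed power quantile as threshold; all parameter choices are illustrative, not those of the original method.

```python
# Minimal sketch of the power-coincidence test on one trial (assumptions noted above).
import numpy as np
from scipy.signal import stft

def binarized_power(signals, fs, threshold_q=0.9):
    """STFT each channel, weight power by frequency, binarize by quantile."""
    patterns = []
    for x in signals:                        # signals: (n_channels, n_samples)
        f, t, Z = stft(x, fs=fs, window=('gaussian', 64), nperseg=256)
        power = np.abs(Z) ** 2
        power *= (1.0 + f)[:, None]          # placeholder for the DWT-based weighting
        thr = np.quantile(power, threshold_q, axis=1, keepdims=True)
        patterns.append(power > thr)         # binary activity pattern (freq x time)
    return np.array(patterns), f

def coincidence_ratio(patterns):
    """Observed/expected duration of joint activity, per frequency."""
    joint = patterns.all(axis=0).mean(axis=1)        # observed co-activation
    expected = patterns.mean(axis=2).prod(axis=0)    # expectation under independence
    return np.divide(joint, expected, out=np.zeros_like(joint), where=expected > 0)

def coincidence_test(signals, fs, n_null=1000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    patterns, freqs = binarized_power(signals, fs)
    stat = coincidence_ratio(patterns).max()         # test statistic
    null = np.empty(n_null)
    for i in range(n_null):                          # null hypothesis: independent,
        rolled = np.array([np.roll(p, rng.integers(p.shape[1]), axis=1)
                           for p in patterns])       # random rotation of each pattern
        null[i] = coincidence_ratio(rolled).max()
    return stat, (null >= stat).mean()               # statistic and p-value
```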
Human lymph nodes play a central part in the immune defense against infectious agents and tumor cells. Lymphoid follicles are spherical compartments of the lymph node which are mainly filled with B cells. B cells are cellular components of the adaptive immune system. In the course of a specific immune response, lymphoid follicles pass through different morphological differentiation stages. The morphology and the spatial distribution of lymphoid follicles can sometimes be associated with a particular causative agent and the development stage of a disease. We report our new approach for the automatic detection of follicular regions in histological whole slide images of tissue sections immuno-stained with actin. The method is divided into two phases: (1) shock filter-based detection of transition points and (2) segmentation of follicular regions. Follicular regions in 10 whole slide images were manually annotated by visual inspection, and sample surveys were conducted by an expert pathologist. The results of our method were validated by comparison with the manual annotation. On average, we achieved a Zijdenbos similarity index of 0.71, with a standard deviation of 0.07.
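For reference, the Zijdenbos similarity index of two binary segmentations coincides with the Dice coefficient: twice the overlap divided by the total size of both regions. A minimal sketch (function and argument names are ours, not from the paper):

```python
import numpy as np

def zijdenbos_index(auto: np.ndarray, manual: np.ndarray) -> float:
    """ZSI = 2|A n M| / (|A| + |M|) for binary masks A (automatic), M (manual)."""
    a, m = auto.astype(bool), manual.astype(bool)
    denom = a.sum() + m.sum()
    return 2.0 * np.logical_and(a, m).sum() / denom if denom else 1.0
```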
Paging is one of the most prominent problems in the field of online algorithms. We have to serve a sequence of page requests using a cache that can hold up to k pages. If the currently requested page is in the cache we have a cache hit; otherwise a cache miss occurs, and the requested page needs to be loaded into the cache. The goal is to minimize the number of cache misses by providing a good page-replacement strategy. This problem arises in memory management when data is stored in a two-level memory hierarchy, consisting of a small, fast memory (cache) and a large, slow memory (disk). The most important application area is the virtual memory management of operating systems: accessed pages are either already in the RAM or need to be loaded from the hard disk into the RAM using expensive I/O. The time needed to access the RAM is insignificant compared to an I/O operation, which takes several milliseconds.
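As a baseline for the strategies discussed below, here is a minimal sketch of the classical LRU rule (a textbook implementation, not code from the thesis):

```python
from collections import OrderedDict

def count_misses_lru(requests, k):
    """Serve a request sequence with an LRU cache of capacity k; return misses."""
    cache, misses = OrderedDict(), 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # cache hit: mark as most recently used
        else:
            misses += 1                    # cache miss: load the page
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return misses
```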
The traditional evaluation framework for online algorithms is competitive analysis, where the online algorithm is compared to the optimal offline solution. A shortcoming of competitive analysis is its overly pessimistic worst-case guarantees. For example, LRU has a theoretical competitive ratio of k, but in practice this ratio rarely exceeds the value 4.
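For reference, the standard definition underlying this framework (a textbook formulation, not quoted from the thesis): an online algorithm ALG is c-competitive if there is a constant b such that for every request sequence σ

```latex
\mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + b,
```

and the competitive ratio of ALG is the smallest such c.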
Reducing the gap between theory and practice has been an active research topic in recent years. More recent evaluation models have been used to prove that LRU is an optimal online algorithm, or belongs to a class of optimal algorithms, motivated by the assumption that LRU is one of the best algorithms in practice. Most of the newer models make LRU-friendly assumptions regarding the input, thus not leaving much room for new algorithms.
Only a few works in the field of online paging have introduced new algorithms that can compete with LRU with respect to the number of cache misses.
In the first part of this thesis we study strongly competitive randomized paging algorithms, i.e. algorithms with optimal competitive guarantees. Although the tight bound for the competitive ratio has been known for decades, current algorithms matching this bound are complex and have high running times and memory requirements. We propose the algorithm OnlineMin which processes a page request in O(log k/log log k) time in the worst case. The best previously known solution requires O(k^2) time.
Usually the memory requirement of a paging algorithm is measured by the maximum number of pages that the algorithm keeps track of. Any algorithm stores information about the k pages in the cache. In addition it can also store information about pages not in cache, denoted bookmarks. We answer the open question of Bein et al. '07 whether strongly competitive randomized paging algorithms using only o(k) bookmarks exist or not. To do so we modify the Partition algorithm of McGeoch and Sleator '85 which has an unbounded bookmark complexity, and obtain Partition2 which uses O(k/log k) bookmarks.
In the second part we extract ideas from theoretical analysis of randomized paging algorithms in order to design deterministic algorithms that perform well in practice. We refine competitive analysis by introducing the attack rate parameter r, which ranges between 1 and k. We show that r is a tight bound on the competitive ratio of deterministic algorithms.
We give empirical evidence that r is usually much smaller than k and thus r-competitive algorithms have reasonable performance on real-world traces. By introducing the r-competitive priority-based algorithm class OnOPT we obtain a collection of promising algorithms to beat the LRU standard. We single out the new algorithm RDM and show that it outperforms LRU and some of its variants on a wide range of real-world traces.
Since RDM is more complex than LRU, one might think at first sight that the gain from lowering the number of cache misses is cancelled out by a high runtime for processing pages. We engineer a fast implementation of RDM and compare it to LRU and the very fast FIFO algorithm in an overall evaluation scheme, where we measure the runtime of the algorithms and add penalties for each cache miss. Experimental results show that for realistic penalties RDM still outperforms these two algorithms, even if we grant the competitors an idealistic runtime of 0.
The thesis in general deals with CORBA, the Common Object Request Broker Architecture. More specifically, it takes a look at the server side, where object adapters exist to aid the developer in implementing objects and in dealing with request processing. The new Portable Object Adapter (POA) was recently added to the CORBA 2.2 standard. My task was the implementation of the POA in MICO and the examination of (a) whether the POA specification is sensible and (b) in which areas it improves on the old Basic Object Adapter. After introducing distributed platforms in general and CORBA in particular, the thesis's two main chapters give a detailed abstract examination of the POA design ("Design") and of its realization ("Implementation"), highlighting the potential trouble spots, persistence and collocation.
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency.
This paper describes the ROSANA approach, which generalizes the verification of coindexing restrictions in order to make it applicable to the deficient syntactic descriptions that are provided by a robust state-of-the-art parser. By a formal evaluation on two corpora that differ with respect to text genre and domain, it is shown that ROSANA achieves high-quality robust coreference resolution. Moreover, by an in-depth analysis, it is proven that the robust implementation of syntactic disjoint reference is nearly optimal. The study reveals that, compared with approaches that rely on shallow preprocessing, the largely nonheuristic disjoint reference algorithmization opens up the possibility of a slight improvement. Furthermore, it is shown that more significant gains are to be expected elsewhere, particularly from a text-genre-specific choice of preference strategies.
The performance study of the ROSANA system crucially rests on an enhanced evaluation methodology for coreference resolution systems, the development of which constitutes the second major contribution of the paper. As a supplement to the model-theoretic scoring scheme that was developed for the Message Understanding Conference (MUC) evaluations, additional evaluation measures are defined that, on the one hand, support the developer of anaphor resolution systems and, on the other hand, shed light on application aspects of pronoun interpretation.
We study the descriptional complexity of cellular automata (CA), a parallel model of computation. We show that between one of the simplest cellular models, the realtime-OCA, and "classical" models like deterministic finite automata (DFA) or pushdown automata (PDA), there are savings concerning the size of description not bounded by any recursive function, a so-called nonrecursive trade-off. Furthermore, nonrecursive trade-offs are shown between some restricted classes of cellular automata. The set of valid computations of a Turing machine can be recognized by a realtime-OCA. This implies that many decidability questions are not even semi-decidable for cellular automata. There is no pumping lemma and no minimization algorithm for cellular automata.
With the Smart Learning infrastructure, a novel didactic concept for continuing-education courses has been developed. This infrastructure can be applied in many ways. First analyses of courses show that participants who worked through all exercises correctly achieve a grade better than the average. This contribution describes a concept for a gamification module that uses playful elements to encourage participants, as early as possible, to work through all exercises of a course correctly and thoughtfully.
Like a "doctrine of salvation", the term "digitalization" pervades almost all areas of life, including, of course, education. We computer scientists in particular are called upon to help shape these paths of educational transformation. Together with educational scientists and psychologists, we must identify, demonstrate, and exemplarily implement what is sensible and possible. We are the ones who must investigate and point out the conditions for success as well as the dead ends. Digitalization mania is something we do not need!
The 16th annual conference DeLFI 2018 of the eLearning special interest group of the Gesellschaft für Informatik e. V. takes place from 10 to 13 September 2018 at the Johann Wolfgang Goethe-Universität, Frankfurt am Main, together with HDI 2018, the 8th conference on university-level computer science education. ...
Algorithms for the Maximum Cardinality Matching Problem which greedily add edges to the solution enjoy great popularity. We systematically study strengths and limitations of such algorithms, in particular of those which consider node degree information to select the next edge. Concentrating on nodes of small degree is a promising approach: it was shown, experimentally and analytically, that very good approximate solutions are obtained for restricted classes of random graphs. Results achieved under these idealized conditions, however, remained unsupported by statements which depend on less optimistic assumptions.
The KarpSipser algorithm and 1-2-Greedy, which is a simplified variant of the well-known MinGreedy algorithm, proceed as follows. In each step, if a node of degree one (resp. at most two) exists, then an edge incident with a minimum degree node is picked, otherwise an arbitrary edge is added to the solution.
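A minimal sketch of the 1-2-Greedy rule just described (our own illustration, not code from the thesis; KarpSipser is obtained by replacing the degree bound 2 with 1):

```python
def one_two_greedy(adj):
    """adj: undirected graph as {node: iterable of neighbors}. Returns a matching."""
    adj = {u: set(vs) for u, vs in adj.items()}         # working copy
    matching = []
    while any(adj.values()):
        low = [u for u, vs in adj.items() if 0 < len(vs) <= 2]
        if low:
            u = min(low, key=lambda x: len(adj[x]))     # node of minimum degree
        else:
            u = next(x for x, vs in adj.items() if vs)  # otherwise: arbitrary edge
        v = next(iter(adj[u]))
        matching.append((u, v))
        for w in (u, v):                                # remove both matched nodes
            for n in adj[w]:
                adj[n].discard(w)
            adj[w] = set()
    return matching

# Usage: on the path a-b-c-d, the rule matches (a,b) and then (c,d),
# a maximum matching.
print(one_two_greedy({'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}))
```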
We analyze the approximation ratio of both algorithms on graphs of degree at most D. Families of graphs are known for which the expected approximation ratio converges to 1/2 as D grows to infinity, even if randomization against the worst case is used. If randomization is not allowed, then we show the following convergence to 1/2: the 1-2-Greedy algorithm achieves approximation ratio (D-1)/(2D-3); if the graph is bipartite, then the more restricted KarpSipser algorithm achieves the even stronger factor D/(2D-2). These guarantees set both algorithms apart from other famous matching heuristics such as Greedy or MRG: these algorithms depend on randomization to break the 1/2-barrier even for paths with D=2. Moreover, for any D our guarantees are strictly larger than the best known bounds on the expected performance of the randomized variants of Greedy and MRG.
To investigate whether KarpSipser or 1-2-Greedy can be refined to achieve better performance, or be simplified without loss of approximation quality, we systematically study entire classes of deterministic greedy-like algorithms for matching. To this end we employ the adaptive priority algorithm framework by Borodin, Nielsen, and Rackoff: in each round, an adaptive priority algorithm requests one or more edges by formulating their properties---such as "is incident with a node of minimum degree"---and adds the received edges to the solution. No constraints on time and space usage are imposed, hence an adaptive priority algorithm is restricted only by its nature of picking edges in a greedy-like fashion. If an adaptive priority algorithm requests edges by processing degree information, then we show that it does not surpass the performance of KarpSipser: our D/(2D-2)-guarantee for bipartite graphs is tight, and KarpSipser is optimal among all such "degree-sensitive" algorithms even though it uses degree information merely to detect degree-1 nodes. Moreover, we show that if the degrees of both nodes of an edge may be processed, as the Double-MinGreedy algorithm does, then the performance of KarpSipser can be increased only marginally, if at all. Of special interest is the capability of requesting edges not only by specifying the degree of a node but additionally its set of neighbors. This enables an adaptive priority algorithm to "traverse" the input graph. We show that on general degree-bounded graphs no such algorithm can beat factor (D-1)/(2D-3). Hence our bound for 1-2-Greedy is tight and this algorithm performs optimally even though it ignores neighbor information. Furthermore, we show that an adaptive priority algorithm deteriorates to approximation ratio exactly 1/2 if it does not request small degree nodes. This tremendous decline of approximation quality happens for graphs on which 1-2-Greedy and KarpSipser perform optimally, namely paths with D=2. Consequently, requesting small degree nodes is vital to beat factor 1/2.
Summarizing, our results show that 1-2-Greedy and KarpSipser stand out from known and hypothetical algorithms as an intriguing combination of both approximation quality and conceptual simplicity.
In this talk we presented a novel technique, based on Deep Learning, to determine the impact parameter of nuclear collisions at the CBM experiment. PointNet based Deep Learning models are trained on UrQMD followed by CBMRoot simulations of Au+Au collisions at 10 AGeV to reconstruct the impact parameter of collisions from raw experimental data such as hits of the particles in the detector planes, tracks reconstructed from the hits or their combinations. The PointNet models can perform fast, accurate, event-by-event impact parameter determination in heavy ion collision experiments. They are shown to outperform a simple model which maps the track multiplicity to the impact parameter. While conventional methods for centrality classification merely provide an expected impact parameter distribution for a given centrality class, the PointNet models predict the impact parameter from 2–14 fm on an event-by-event basis with a mean error of −0.33 to 0.22 fm.
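A schematic of a PointNet-style regressor of the kind described in the talk: a shared per-point MLP, an order-invariant max pooling over hits, and a regression head producing the impact parameter. Layer widths and the choice of per-hit input features are illustrative assumptions, not the actual CBM models.

```python
import torch
import torch.nn as nn

class ImpactParameterPointNet(nn.Module):
    """Sketch: per-hit features (e.g. x, y, z, station id) -> impact parameter."""
    def __init__(self, in_features=4):
        super().__init__()
        self.per_point = nn.Sequential(          # shared MLP applied to every hit
            nn.Linear(in_features, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(               # regression head on pooled feature
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, points):                   # points: (batch, n_hits, features)
        h = self.per_point(points)
        pooled, _ = h.max(dim=1)                 # symmetric, order-invariant pooling
        return self.head(pooled).squeeze(-1)     # predicted impact parameter (fm)
```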
FIFO is the most prominent queueing strategy due to its simplicity and the fact that it works with local information only. Its analysis within adversarial queueing theory, however, has shown that there are networks that are not stable under the FIFO protocol, even at arbitrarily low rates. On the other hand, there are networks that are universally stable, i.e., they are stable under every greedy protocol at any rate r < 1. The question which networks are stable under the FIFO protocol thus arises naturally. We offer the first polynomial time algorithm for deciding FIFO stability and simple-path FIFO stability of a directed network, answering an open question posed in [1, 4]. It turns out that there are networks that are FIFO stable but not universally stable; hence FIFO is not a worst-case protocol in this sense. Our characterization of FIFO stability is constructive and disproves an open characterization in [4].
Various static analyses of functional programming languages that permit infinite data structures make use of set constants like Top, Inf, and Bot, denoting all terms, all lists not eventually ending in Nil, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest fixpoint semantics in the set of all, also infinite, computable trees, where all term constructors are non-strict. This internal report proves decidability, in particular DEXPTIME-completeness, of inclusion of co-inductively defined sets by using algorithms and results from tree automata and set constraints, and contains detailed proofs. The test for set inclusion is required by certain strictness analysis algorithms in lazy functional programming languages and could also be the basis for further set-based analyses.
Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf, and Bot denoting all expressions, all lists without a last Nil as tail, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of subset relationship of co-inductively defined sets by using algorithms and results from tree automata. This shows decidability of the test for set inclusion, which is required by certain strictness analysis algorithms in lazy functional programming languages.
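Read co-inductively (with greatest fixpoint semantics), the set constants can be written as recursive equations. The following rendering is our illustration for a list signature with constructors cons and Nil, not notation taken verbatim from the papers:

```latex
% Top: all terms over the signature Sigma; Inf: read as the greatest fixpoint
% of its equation, the lists that never end in Nil.
\mathit{Top} \;=\; \bigcup_{c\,\in\,\Sigma} c(\mathit{Top},\dots,\mathit{Top})
\qquad
\mathit{Inf} \;=\; \mathit{cons}(\mathit{Top},\,\mathit{Inf})
```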
It is well known that first order unification is decidable, whereas second order and higher order unification are undecidable. Bounded second order unification (BSOU) is second order unification under the restriction that only a bounded number of holes in the instantiating terms for second order variables is permitted; the size of the instantiation, however, is not restricted. In this paper, a decision algorithm for bounded second order unification is described. This is the first non-trivial decidability result for second order unification where the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second order unification (MSOU), a specialization of BSOU, is in Σ₂^p. Since MSOU is related to word unification, this compares favourably to the best known upper bound NEXPTIME (and also to the announced upper bound PSPACE) for word unification. This supports the claim that bounded second order unification is easier than context unification, whose decidability is currently an open question.
Precise knowledge of the material budget is fundamental for measurements of direct photon production using the photon conversion method, due to its direct impact on the total systematic uncertainty. Moreover, it influences many aspects of the charged-particle reconstruction performance. In this article, two procedures to determine data-driven corrections to the material-budget description in the ALICE simulation software are developed. One is based on the precise knowledge of the gas composition in the Time Projection Chamber. The other is based on the robustness of the ratio between the produced number of photons and charged particles, which holds to a large extent due to the approximate isospin symmetry in the number of produced neutral and charged pions. Both methods are applied to ALICE data, allowing for a reduction of the overall material-budget systematic uncertainty from 4.5% down to 2.5%. Using these methods, a locally correct material budget is also achieved. The two proposed methods are generic and can be applied to any experiment in a similar fashion.
Data structures and advanced models of computation on big data : report from Dagstuhl seminar 14091
(2014)
This report documents the program and the outcomes of Dagstuhl Seminar 14091 "Data Structures and Advanced Models of Computation on Big Data". In today's computing environment vast amounts of data are processed, exchanged and analyzed. The manner in which information is stored profoundly influences the efficiency of these operations over the data. In spite of the maturity of the field, many data structuring problems are still open, while new ones arise due to technological advances.
The seminar covered both recent advances in the "classical" data structuring topics and new models of computation adapted to modern architectures, scientific studies that reveal the need for such models, applications where large data sets play a central role, modern computing platforms for very large data, and new data structures for large data in modern architectures.
The extended abstracts included in this report both present recent state-of-the-art advances and lay the foundation for new directions within data structures research.
Data driven automatic model selection and parameter adaptation – a case study for septic shock
(2004)
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the large number of parameters of the equations to the available data. Here, an approach that learns the parameters systematically is necessary. This paper proposes as model selection criterion the least complex description of the observed data by the model, the minimum description length. For the small but important example of inflammation modeling, the performance of the approach is evaluated.
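A minimal sketch of two-part minimum-description-length model selection in the spirit described above, using the classic (k/2) log n approximation for the parameter cost; the helper names and the scoring interface are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mdl_score(neg_log_likelihood, n_params, n_samples):
    """Two-part MDL: code length for the parameters plus code length for the data."""
    model_cost = 0.5 * n_params * np.log(n_samples)  # asymptotic parameter cost
    return model_cost + neg_log_likelihood           # plus data cost given the model

def select_model(models, data):
    """models: iterable of (name, fit) where fit(data) -> (neg_log_likelihood, n_params)."""
    scored = [(mdl_score(*fit(data), len(data)), name) for name, fit in models]
    return min(scored)[1]    # the least complex description of the observed data
```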
Dancing is an activity that enhances people's mood; it consists of feeling the music and expressing it in rhythmic movements of the body. Learning how to dance can be challenging because it requires proper coordination and an understanding of rhythm and beat. In this paper, we present the first implementation of the Dancing Coach (DC), a generic system designed to support the practice of dancing steps, which in its current state supports the practice of basic salsa dancing steps. However, the DC has been designed to allow the addition of more dance styles. We also present the first user evaluation of the DC, which consists of user tests with 25 participants. Results from the user tests show that participants stated they had learned the basic salsa steps, how to move to the beat, and body coordination, in a fun way. Results also point out some directions for improving future versions of the DC.
Background: In the context of the investigation of the quark gluon plasma produced in heavy-ion collisions, hadrons containing heavy (charm or beauty) quarks play a special role in the characterization of the hot and dense medium created in the interaction. The measurement of the production of charm and beauty hadrons in proton–proton collisions, besides providing the necessary reference for the studies in heavy-ion reactions, constitutes an important test of perturbative quantum chromodynamics (pQCD) calculations. Heavy-flavor production in proton–nucleus collisions is sensitive to the various effects related to the presence of nuclei in the colliding system, commonly denoted cold-nuclear-matter effects. Most of these effects are expected to modify open-charm production at low transverse momenta (pT) and, so far, no measurement of D-meson production down to zero transverse momentum was available at mid-rapidity at the energies attained at the CERN Large Hadron Collider (LHC).
Purpose: The measurements of the production cross sections of promptly produced charmed mesons in p-Pb collisions at the LHC down to pT=0 and the comparison to the results from pp interactions are aimed at the assessment of cold-nuclear-matter effects on open-charm production, which is crucial for the interpretation of the results from Pb-Pb collisions.
Methods: The prompt charmed mesons D0,D+,D*+, and D+s were measured at mid-rapidity in p-Pb collisions at a center-of-mass energy per nucleon pair √sNN=5.02 TeV with the ALICE detector at the LHC. D mesons were reconstructed from their decays D0→K−π+,D+→K−π+π+, D*+→D0π+,D+s→ϕπ+→K−K+π+, and their charge conjugates, using an analysis method based on the selection of decay topologies displaced from the interaction vertex. In addition, the prompt D0 production cross section was measured in pp collisions at √s=7 TeV and p-Pb collisions at √sNN=5.02 TeV down to pT=0 using an analysis technique that is based on the estimation and subtraction of the combinatorial background, without reconstruction of the D0 decay vertex.
Results: The production cross section in pp collisions is described within uncertainties by different implementations of pQCD calculations down to pT=0. This also allowed a determination of the total cc̄ production cross section in pp collisions, which is more precise than previous ALICE measurements because it is not affected by uncertainties owing to the extrapolation to pT=0. The nuclear modification factor RpPb(pT), defined as the ratio of the pT-differential D-meson cross section in p-Pb collisions and that in pp collisions scaled by the mass number of the Pb nucleus, was calculated for the four D-meson species and found to be compatible with unity within uncertainties. The results are compared to theoretical calculations that include cold-nuclear-matter effects and to transport model calculations incorporating the interactions of charm quarks with an expanding deconfined medium.
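In symbols, the nuclear modification factor defined above reads (A = 208 being the mass number of the Pb nucleus):

```latex
R_{\mathrm{pPb}}(p_{\mathrm{T}}) \;=\;
  \frac{\mathrm{d}\sigma^{\mathrm{pPb}}/\mathrm{d}p_{\mathrm{T}}}
       {A \cdot \mathrm{d}\sigma^{\mathrm{pp}}/\mathrm{d}p_{\mathrm{T}}}
```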
Conclusions: These measurements add experimental evidence that the modification of the D-meson transverse momentum distributions observed in Pb–Pb collisions with respect to pp interactions is due to strong final-state effects induced by the interactions of the charm quarks with the hot and dense partonic medium created in ultrarelativistic heavy-ion collisions. The current precision of the measurement does not allow us to draw conclusions on the role of the different cold-nuclear-matter effects and on the possible presence of additional hot-medium effects in p-Pb collisions. However, the analysis technique without decay-vertex reconstruction, applied on future larger data samples, should provide access to the physics-rich range down to pT=0.
The production cross sections of the prompt charmed mesons D0, D+, D*+ and D+s were measured at mid-rapidity in p-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 5.02 TeV with the ALICE detector at the LHC. D mesons were reconstructed from their decays D0→K−π+, D+→K−π+π+, D*+→D0π+, D+s→ϕπ+→K−K+π+, and their charge conjugates. The pT-differential production cross sections were measured at mid-rapidity in the interval 1<pT<24 GeV/c for D0, D+ and D*+ mesons and in 2<pT<12 GeV/c for D+s mesons, using an analysis method based on the selection of decay topologies displaced from the interaction vertex. The production cross sections of the D0, D+ and D*+ mesons were also measured in three pT intervals as a function of the rapidity ycms in the centre-of-mass system in −1.26<ycms<0.34. In addition, the prompt D0 cross section was measured in pp collisions at √s = 7 TeV and p-Pb collisions at √sNN = 5.02 TeV down to pT=0 using an analysis technique that is based on the estimation and subtraction of the combinatorial background, without reconstruction of the D0 decay vertex. The nuclear modification factor RpPb(pT), defined as the ratio of the pT-differential D-meson cross section in p-Pb collisions and that in pp collisions scaled by the mass number of the Pb nucleus, was calculated for the four D-meson species and found to be compatible with unity within experimental uncertainties. The results are compared to theoretical calculations that include cold-nuclear-matter effects and to transport model calculations incorporating the interactions of charm quarks with an expanding deconfined medium.
The azimuthal anisotropy coefficient v2 of prompt D0, D+, D*+ and D+s mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 5.02 TeV, with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decays at mid-rapidity, |y|<0.8, in the transverse momentum interval 1<pT<24 GeV/c. The measured D-meson v2 has values similar to those of charged pions. The D+s v2, measured for the first time, is found to be compatible with that of non-strange D mesons. The measurements are compared with theoretical calculations of charm-quark transport in a hydrodynamically expanding medium and have the potential to constrain medium parameters.
We present new concepts to integrate logic synthesis and physical design. Our methodology uses general Boolean transformations as known from technology-independent synthesis, and a recursive bi-partitioning placement algorithm. In each partitioning step, the precision of the layout data increases. This allows effective guidance of the logic synthesis operations for cycle time optimization. An additional advantage of our approach is that no complicated layout corrections are needed when the netlist is changed.
CRFVoter : gene and protein related object recognition using a conglomerate of CRF-based tools
(2019)
Background: Gene and protein related objects are an important class of entities in biomedical research, whose identification and extraction from scientific articles is attracting increasing interest. In this work, we describe an approach to the BioCreative V.5 challenge regarding the recognition and classification of gene and protein related objects. For this purpose, we transform the task as posed by BioCreative V.5 into a sequence labeling problem. We present a series of sequence labeling systems that we used and adapted in our experiments for solving this task. Our experiments show how to optimize the hyperparameters of the classifiers involved. To this end, we utilize various algorithms for hyperparameter optimization. Finally, we present CRFVoter, a two-stage application of Conditional Random Field (CRF) that integrates the optimized sequence labelers from our study into one ensemble classifier.
Results: We analyze the impact of hyperparameter optimization regarding named entity recognition in biomedical research and show that this optimization results in a performance increase of up to 60%. In our evaluation, our ensemble classifier based on multiple sequence labelers, called CRFVoter, outperforms each individual extractor’s performance. For the blinded test set provided by the BioCreative organizers, CRFVoter achieves an F-score of 75%, a recall of 71% and a precision of 80%. For the GPRO type 1 evaluation, CRFVoter achieves an F-Score of 73%, a recall of 70% and achieved the best precision (77%) among all task participants.
Conclusion: CRFVoter is effective when multiple sequence labeling systems are to be used, and it performs better than the individual systems that it combines.
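To illustrate the ensemble idea, here is a minimal sketch of weighted voting over sequence labelers; note that in CRFVoter the second stage is itself a CRF, so the simple vote below (with fixed weights and hypothetical names) only stands in for that combination step:

```python
from collections import defaultdict

def vote(token_predictions, weights):
    """token_predictions: one label sequence per labeler; returns the combined labels."""
    final = []
    for i in range(len(token_predictions[0])):
        scores = defaultdict(float)
        for labels, w in zip(token_predictions, weights):
            scores[labels[i]] += w            # accumulate weighted votes per token
        final.append(max(scores, key=scores.get))
    return final

# Usage: three labelers tag a 4-token sentence with BIO labels.
preds = [["B-GENE", "I-GENE", "O", "O"],
         ["B-GENE", "O", "O", "O"],
         ["B-GENE", "I-GENE", "O", "B-GENE"]]
print(vote(preds, weights=[0.5, 0.2, 0.3]))   # ['B-GENE', 'I-GENE', 'O', 'O']
```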
The prevention of credit card fraud is an important application for prediction techniques. One major obstacle to using neural network training techniques is the high diagnostic quality required: since only one financial transaction in a thousand is invalid, no prediction success below 99.9% is acceptable. Due to these credit card transaction proportions, completely new concepts had to be developed and tested on real credit card data. This paper shows how advanced data mining techniques and a neural network algorithm can be combined successfully to obtain a high fraud coverage combined with a low false alarm rate.
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
Correlated event-by-event fluctuations of flow harmonics in Pb–Pb collisions at √sNN = 2.76 TeV
(2016)
We report the measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus-nucleus collisions, obtained for the first time using a new analysis method based on multiparticle cumulants in mixed harmonics. This novel method is robust against systematic biases originating from non-flow effects, and by construction any dependence on symmetry planes is eliminated. We demonstrate that correlations of flow harmonics exhibit a better sensitivity to medium properties than the individual flow harmonics. The new measurements are performed in Pb-Pb collisions at the centre-of-mass energy per nucleon pair of √sNN = 2.76 TeV by the ALICE experiment at the Large Hadron Collider (LHC). The centrality dependence of the correlation between event-by-event fluctuations of the elliptic, v2, and quadrangular, v4, flow harmonics, as well as of the anti-correlation between v2 and triangular, v3, flow harmonics are presented. The results cover two different regimes of the initial state configurations: geometry-dominated (in mid-central collisions) and fluctuation-dominated (in the most central collisions). Comparisons are made to predictions from MC-Glauber, viscous hydrodynamics, AMPT and HIJING models. Together with the existing measurements of individual flow harmonics, the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
This paper extends the internal Frank report 28 as follows: it is shown that for the call-by-need lambda calculus LRCCP-Lambda, which extends the calculus LRCC-Lambda by por, i.e., a lambda calculus with letrec, case, constructors, seq and por, copying can be done without restrictions, and also that the call-by-need and call-by-name strategies are equivalent w.r.t. contextual equivalence.
Call-by-need lambda calculi with letrec provide a rewriting-based operational semantics for (lazy) call-by-name functional languages. These calculi model the sharing behavior during evaluation more closely than let-based calculi that use a fixpoint combinator. In a previous paper we showed that the copy-transformation is correct for the small calculus LR-Lambda. In this paper we demonstrate that the proof method based on a calculus on infinite trees for showing correctness of instantiation operations can be extended to the calculus LRCC-Lambda with case and constructors, and show that copying at compile-time can be done without restrictions. We also show that the call-by-need and call-by-name strategies are equivalent w.r.t. contextual equivalence. A consequence is correctness of all the transformations like instantiation, inlining, specialization and common subexpression elimination in LRCC-Lambda. We are confident that the method scales up for proving correctness of copy-related transformations in non-deterministic lambda calculi if restricted to "deterministic" subterms.
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
We present a theoretical analysis of structural FSM traversal, which is the basis for the sequential equivalence checking algorithm Record & Play presented earlier. We compare the convergence behaviour of exact and approximative structural FSM traversal with that of standard BDD-based FSM traversal. We show that for most circuits encountered in practice exact structural FSM traversal reaches the fixed point as fast as symbolic FSM traversal, while approximation can significantly reduce the number of iterations needed. Our experiments confirm these results.
This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their typed variants, where our specific calculus has letrec, recursive types and is nondeterministic. The addition of a type label to every subexpression is all that is needed, together with some natural constraints for the consistency of the type labels and well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained and that untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a nice interaction between reduction and typing, some reduction rules have to be accompanied by a type modification that generalizes or instantiates types.
The pi-calculus is a well-analyzed model for mobile processes and mobile computations.
While a lot of other process and lambda calculi that are core languages of higher-order concurrent and/or functional programming languages use a contextual semantics observing the termination behavior of programs in all program contexts, traditional program equivalences in the pi-calculus are bisimulations and barbed testing equivalences, which observe the communication capabilities of processes under reduction and in contexts.
There is a distance between these two approaches to program equivalence which makes it hard to compare the pi-calculus with other languages. In this paper we contribute to bridging this gap by investigating a contextual semantics of the synchronous pi-calculus with replication and without sums.
To transfer contextual equivalence to the pi-calculus we add a process Stop as a constant which indicates success and is used as the base to define and analyze the contextual equivalence which observes may- and should-convergence of processes.
We show as a main result that contextual equivalence in the pi-calculus with Stop conservatively extends barbed testing equivalence in the (Stop-free) pi-calculus. This implies that results on contextual equivalence can be directly transferred to the (Stop-free) pi-calculus with barbed testing equivalence.
We analyze the contextual ordering, prove some nontrivial process equivalences, and provide proof tools for showing contextual equivalences. Among them are a context lemma, and new notions of sound applicative similarities for may- and should-convergence.
The ALICE Collaboration reports the measurement of semi-inclusive distributions of charged-particle jets recoiling from a high-transverse-momentum trigger hadron in p–Pb collisions at √sNN = 5.02 TeV. Jets are reconstructed from charged-particle tracks using the anti-kT algorithm with resolution parameter R = 0.2 and 0.4. A data-driven statistical approach is used to correct the uncorrelated background jet yield. Recoil jet distributions are reported for jet transverse momentum 15 < pT,jet^ch < 50 GeV/c and are compared in various intervals of p–Pb event activity, based on charged-particle multiplicity and zero-degree neutral energy in the forward (Pb-going) direction. The semi-inclusive observable is self-normalized and such comparisons do not require the interpretation of p–Pb event activity in terms of collision geometry, in contrast to inclusive jet observables. These measurements provide new constraints on the magnitude of jet quenching in small systems at the LHC. In p–Pb collisions with high event activity, the average medium-induced out-of-cone energy transport for jets with R = 0.4 and 15 < pT,jet^ch < 50 GeV/c is measured to be less than 0.4 GeV/c at 90% confidence, which is over an order of magnitude smaller than a similar measurement for central Pb–Pb collisions at √sNN = 2.76 TeV. Comparison is made to theoretical calculations of jet quenching in small systems, and to inclusive jet measurements in p–Pb collisions selected by event activity at the LHC and in d–Au collisions at RHIC.
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow v2 reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality but different initial geometry. This selection technique, Event Shape Engineering, has been used in the analysis of charge-dependent two- and three-particle correlations in Pb–Pb collisions at √sNN=2.76 TeV. The two-particle correlator 〈cos(φα−φβ)〉, calculated for different combinations of charges α and β, is almost independent of v2 (for a given centrality), while the three-particle correlator 〈cos(φα+φβ−2Ψ2)〉 scales almost linearly both with the event v2 and charged-particle pseudorapidity density. The charge dependence of the three-particle correlator is often interpreted as evidence for the Chiral Magnetic Effect (CME), a parity violating effect of the strong interaction. However, its measured dependence on v2 points to a large non-CME contribution to the correlator. Comparing the results with Monte Carlo calculations including a magnetic field due to the spectators, the upper limit of the CME signal contribution to the three-particle correlator in the 10–50% centrality interval is found to be 26–33% at 95% confidence level.
The interaction of K− with protons is characterised by the presence of several coupled channels, systems like K̄0n and πΣ with a similar mass and the same quantum numbers as the K−p state. The strengths of these couplings to the K−p system are of crucial importance for the understanding of the nature of the Λ(1405) resonance and of the attractive K−p strong interaction. In this article, we present measurements of the K−p correlation functions in relative momentum space obtained in pp collisions at √s = 13 TeV, in p-Pb collisions at √sNN = 5.02 TeV, and (semi)peripheral Pb-Pb collisions at √sNN = 5.02 TeV. The emitting source size, composed of a core radius anchored to the K+p correlation and of a resonance halo specific to each particle pair, varies between 1 and 2 fm in these collision systems. The strength and the effects of the K̄0n and πΣ inelastic channels on the measured K−p correlation function are investigated in the different colliding systems by comparing the data with state-of-the-art models of chiral potentials. A novel approach to determine the conversion weights ω, necessary to quantify the amount of produced inelastic channels in the correlation function, is presented. In this method, particle yields are estimated from thermal model predictions, and their kinematic distributions from blast-wave fits to measured data. The comparison of chiral potentials to the measured K−p interaction indicates that, while the πΣ−K−p dynamics is well reproduced by the model, the coupling to the K̄0n channel in the model is currently underestimated.