In dyadic communication, both interlocutors adapt to each other linguistically, that is, they align interpersonally. In this article, we develop a framework for modeling interpersonal alignment in terms of the structural similarity of the interlocutors' dialog lexica. This is done by means of so-called two-layer time-aligned network series, that is, a time-adjusted graph model. The graph model is partitioned into two layers, so that the interlocutors' lexica are captured as subgraphs of an encompassing dialog graph. Each constituent network of the series is updated utterance-wise. Thus, both the inherent bipartition of dyadic conversations and their gradual development are modeled. The notion of alignment is then operationalized within a quantitative model of structure formation based on the mutual information of the subgraphs that represent the interlocutors' dialog lexica. By adapting and further developing several models of complex network theory, we show that dialog lexica evolve as a novel class of graphs that have not been considered before in the area of complex (linguistic) networks. Additionally, we show that our framework allows for classifying dialogs according to their alignment status. To the best of our knowledge, this is the first approach to measuring alignment in communication that explores the similarities of graph-like cognitive representations.
Keywords: alignment in communication; structural coupling; linguistic networks; graph distance measures; mutual information of graphs; quantitative network analysis
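To make the construction concrete, here is a minimal sketch assuming a networkx-style graph API: each speaker's lexicon is one layer (a subgraph) of a shared dialog graph, both are updated utterance by utterance, and alignment is scored as the mutual information between the layers' degree statistics. The tokenization, word-adjacency edge rule, and MI estimator are simplified stand-ins, not the article's exact operationalization.

```python
# Sketch of a two-layer, utterance-wise updated dialog graph.
# Assumption: word-adjacency edges and degree-based MI are simplified
# stand-ins for the article's operationalization.
import math
from collections import Counter

import networkx as nx

dialog = nx.Graph()                          # encompassing dialog graph
layers = {"A": nx.Graph(), "B": nx.Graph()}  # one lexicon layer per speaker

def add_utterance(speaker, tokens):
    """Update the dialog graph and the speaker's layer with one utterance."""
    for u, v in zip(tokens, tokens[1:]):
        dialog.add_edge(u, v)
        layers[speaker].add_edge(u, v)

def mutual_information(g_a, g_b):
    """MI between the degree profiles of the two layers on shared nodes."""
    shared = set(g_a) & set(g_b)
    if not shared:
        return 0.0
    joint = Counter((g_a.degree(v), g_b.degree(v)) for v in shared)
    n = sum(joint.values())
    p_joint = {k: c / n for k, c in joint.items()}
    p_a, p_b = Counter(), Counter()
    for (da, db), p in p_joint.items():
        p_a[da] += p
        p_b[db] += p
    return sum(p * math.log2(p / (p_a[da] * p_b[db]))
               for (da, db), p in p_joint.items())

add_utterance("A", "the red block on the left".split())
add_utterance("B", "yes the red block".split())
print(round(mutual_information(layers["A"], layers["B"]), 3))
```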
In contrast to the symbolic approach, neural networks are seldom designed to explain what they have learned. This is a major obstacle to their use in everyday life. With the appearance of neuro-fuzzy systems, which use vague, human-like categories, the situation has changed. Based on the well-known learning mechanisms of RBF networks, a special neuro-fuzzy interface is proposed in this paper. It is especially useful in medical applications, as it adopts the notation and habits of physicians and other medically trained people. As an example, a liver disease diagnosis system is presented.
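As a rough illustration of the idea, not the paper's actual system: each hidden unit of a Gaussian RBF network can be read out as one fuzzy rule whose centers and widths define vague categories such as "elevated". The feature names, parameter values, and weights below are invented for this toy example.

```python
# Toy sketch: reading RBF hidden units as linguistic fuzzy rules.
# All feature names and numbers are hypothetical illustrations.
import numpy as np

def gaussian(x, center, width):
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

# One hidden unit == one fuzzy rule; its centers/widths double as the
# parameters of vague, human-readable categories.
rules = [
    {"label": "bilirubin elevated AND enzymes high -> hepatitis suspected",
     "centers": np.array([2.5, 180.0]), "widths": np.array([0.8, 40.0]),
     "weight": 1.0},
    {"label": "bilirubin normal AND enzymes normal -> no finding",
     "centers": np.array([0.8, 60.0]), "widths": np.array([0.3, 20.0]),
     "weight": -1.0},
]

def activate(x):
    # Product over dimensions = fuzzy AND of the per-feature memberships.
    return [float(np.prod(gaussian(x, r["centers"], r["widths"]))) for r in rules]

x = np.array([2.2, 170.0])   # [bilirubin mg/dl, ALT U/l], hypothetical patient
acts = activate(x)
for a, r in zip(acts, rules):
    print(f"{r['label']}: firing strength {a:.2f}")
print("network output:", round(sum(a * r["weight"] for a, r in zip(acts, rules)), 2))
```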
Relying on the theory of Saward (2010) and Disch (2015), we study political representation through the lens of representative claim-making. We identify a gap between the theoretical concept of claim-making and the empirical (quantitative) assessment of representative claims made in real-world representative contexts. We therefore develop a new approach to map and quantify representative claims and subsequently measure the reception and validation of those claims by the audience. To test our method, we analyse all debates of the German parliament concerned with the introduction of the gender quota for German supervisory boards from 2013 to 2017 in a two-step process. First, we assess which constituencies the MPs claim to represent and how they justify their stance; drawing on multiple correspondence analysis, we identify distinct claim patterns. Second, using natural language processing techniques and logistic regression on social media data, we measure whether and how the claims asserted in the parliamentary debates are received and validated by the respective audience. We conclude that the constituency as the ultimate judge of legitimacy has not yet been comprehensively conceptualized.
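A minimal sketch of what the second, audience-facing step could look like, assuming a scikit-learn pipeline; the texts and labels are toy stand-ins for illustration only, not the authors' data, features, or model specification.

```python
# Toy sketch: classify social-media replies as validating a claim or not,
# via bag-of-words logistic regression. Texts/labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the quota finally gives women a fair share of board seats",
    "parliament does not speak for businesses like ours",
    "good that MPs act for working women",
    "this law ignores the companies it claims to help",
]
labels = [1, 0, 1, 0]   # 1 = claim validated, 0 = claim rejected (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["a fair share for women at last"]))
```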
In this paper we present a non-deterministic call-by-need (untyped) lambda calculus, lambda-nd, with a constant choice and a let-syntax that models sharing. Our main result is that lambda-nd has the nice operational properties of the standard lambda calculus: confluence on sets of expressions, and normal-order reduction suffices to reach head normal form. Using a strong contextual equivalence, we show the correctness of several program transformations, in particular of lambda-lifting using deterministic maximal free expressions. These results show that lambda-nd is a new and natural combination of non-determinism and the lambda calculus, one that offers many opportunities for parallel evaluation. An intended application of lambda-nd is as a foundation for compiling lazy functional programming languages with I/O based on direct calls. The set of correct program transformations can be rigorously distinguished from incorrect ones. All program transformations are permitted, with the slight restriction that for transformations like common subexpression elimination and lambda-lifting with maximal free expressions, the involved subexpressions have to be deterministic.
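The interaction of sharing with non-determinism, the crux of the calculus, can be illustrated with a toy Python analogue (purely illustrative; Python's evaluate-once binding stands in for call-by-need's share-once behaviour):

```python
# Toy analogue, not the calculus itself: a let-bound non-deterministic
# choice is evaluated once and shared, whereas naive substitution
# duplicates the choice and can yield mixed results.
import random

def choice(a, b):
    return random.choice([a, b])

# let x = choice 0 1 in (x, x): one choice, shared by both occurrences.
x = choice(0, 1)
shared = (x, x)                          # only (0, 0) or (1, 1)

# Substituting the choice for each occurrence duplicates it.
copied = (choice(0, 1), choice(0, 1))    # (0, 1) and (1, 0) also possible

print("shared:", shared, "copied:", copied)
```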
This article shows that there exist two particular linear orders such that first-order logic with these two linear orders has the same expressive power as first-order logic with the Bit predicate, FO(Bit). As a corollary, we obtain that there also exists a built-in permutation such that first-order logic with a linear order and this permutation is as expressive as FO(Bit).
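For readers unfamiliar with the Bit predicate, a small sketch of one common convention (that Bit(i, j) holds iff the i-th bit of the binary expansion of j is 1; conventions on argument order vary in the literature):

```python
# One common reading of the Bit predicate (argument order is a convention).
def bit(i: int, j: int) -> bool:
    """Bit(i, j): the i-th bit of the binary expansion of j is 1."""
    return (j >> i) & 1 == 1

# 13 = 0b1101, so exactly bits 0, 2 and 3 are set.
print([i for i in range(4) if bit(i, 13)])   # [0, 2, 3]
```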
Poster presentation. Introduction: Dopaminergic neurons in the midbrain show a variety of firing patterns, ranging from very regularly firing pacemaker cells to bursty and irregular neurons. The effects of different experimental conditions (such as pharmacological treatment or genetic manipulations) on these neuronal discharge patterns may be subtle. Applying a stochastic model is a quantitative approach to revealing such changes. ...
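One standard stochastic model of this kind, offered here only as an illustration of the approach (the poster's actual model is not specified in the excerpt), is a gamma renewal process: its shape parameter separates regular pacemaker-like firing from irregular, Poisson-like firing at the same mean rate.

```python
# Sketch: interspike intervals from a gamma renewal process. Large shape k
# gives regular pacemaker-like firing, k near 1 gives irregular firing;
# the mean rate is fixed so only the pattern changes.
import numpy as np

rng = np.random.default_rng(0)

def spike_train(rate_hz, shape, n_spikes):
    """Spike times of a gamma renewal process with mean rate rate_hz."""
    scale = 1.0 / (rate_hz * shape)      # mean ISI = 1 / rate_hz
    isis = rng.gamma(shape, scale, n_spikes)
    return np.cumsum(isis)

for label, k in [("pacemaker-like", 20.0), ("irregular", 1.0)]:
    isis = np.diff(spike_train(5.0, k, 1000))
    cv = isis.std() / isis.mean()        # coefficient of variation of ISIs
    print(f"{label}: CV of ISIs = {cv:.2f}")
```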
After a short introduction to traditional image transform coding, multirate systems, and multiscale signal coding, the paper focuses on image encoding by a neural network. Taking noise into account as well, a network model is proposed that not only learns the optimal localized basis functions for the transform but also learns to implement a whitening filter through multi-resolution encoding. A simulation demonstrating the multi-resolution capabilities concludes the contribution.
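As a point of reference for the whitening part, here is a minimal PCA-whitening sketch on synthetic patch data; this is the classical linear construction that a network can learn to approximate, not the paper's model.

```python
# Classical PCA whitening on synthetic "patches"; a learned network can
# approximate this transform, which is all this sketch illustrates.
import numpy as np

rng = np.random.default_rng(0)
mixing = rng.normal(size=(64, 64))              # introduces correlations
patches = rng.normal(size=(1000, 64)) @ mixing  # 1000 flattened 8x8 patches
patches -= patches.mean(axis=0)

cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
whiten = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-8)) @ eigvecs.T

whitened = patches @ whiten
cov_after = whitened.T @ whitened / len(whitened)
print("max deviation from identity:", float(abs(cov_after - np.eye(64)).max()))
```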
Current deep learning methods are regarded as favorable if they empirically perform well on dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving data is investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten. However, comparison of individual methods is nevertheless performed in isolation from the real world by monitoring accumulated benchmark test set performance. The closed-world assumption remains predominant, i.e., models are evaluated on data that is guaranteed to originate from the same distribution as used for training. This poses a massive challenge, as neural networks are well known to produce overconfident false predictions on unknown and corrupted instances. In this work we critically survey the literature and argue that notable lessons from open set recognition, which identifies unknown examples outside of the observed set, and the adjacent field of active learning, which queries data to maximize the expected performance gain, are frequently overlooked in the deep learning era. Hence, we propose a consolidated view to bridge continual learning, active learning, and open set recognition in deep neural networks. Finally, the established synergies are supported empirically, showing joint improvement in alleviating catastrophic forgetting, querying data, and selecting task orders, while exhibiting robust open-world application.
We present the FPGA implementation of an algorithm [4] that computes implications between signal values in a Boolean network. The research was performed as a master's thesis [5] at the University of Frankfurt. The recursive algorithm is rather complex for a hardware realization, and the FPGA implementation is therefore an interesting example of the potential of reconfigurable computing beyond systolic algorithms. A circuit generator was written that transforms a Boolean network into a network of small processing elements and a global control logic which together implement the algorithm. The resulting circuit performs the computation two orders of magnitude faster than a software implementation run on a conventional workstation.
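To clarify the underlying task (not the recursive algorithm [4] itself), here is a brute-force sketch: an implication between signal values holds if no input assignment of the network violates it. The two-gate netlist is a made-up example.

```python
# Brute-force implication check on a tiny Boolean network by exhaustive
# simulation; the paper's recursive algorithm goes far beyond this.
from itertools import product

# gates: name -> (op, inputs); a, b, c are primary inputs.
GATES = {
    "n1": ("AND", ("a", "b")),
    "n2": ("OR",  ("n1", "c")),
}

def evaluate(assignment):
    """Evaluate all gate outputs given values for the primary inputs."""
    values = dict(assignment)
    def val(sig):
        if sig not in values:
            op, ins = GATES[sig]
            xs = [val(i) for i in ins]
            values[sig] = all(xs) if op == "AND" else any(xs)
        return values[sig]
    for g in GATES:
        val(g)
    return values

def implies(sig_a, val_a, sig_b, val_b):
    """Does sig_a = val_a force sig_b = val_b in every assignment?"""
    for bits in product([False, True], repeat=3):
        values = evaluate(dict(zip("abc", bits)))
        if values[sig_a] == val_a and values[sig_b] != val_b:
            return False
    return True

print(implies("n1", True, "n2", True))   # True: n1 = 1 forces n2 = 1
```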
The interaction between Λ baryons and kaons/antikaons is a crucial ingredient for the strangeness S=0 and S=−2 sectors of the meson–baryon interaction at low energies. In particular, the ΛK‾ interaction might help in understanding the origin of states such as the Ξ(1620), whose nature and properties are still under debate. Experimental data on Λ–K and Λ–K‾ systems are scarce, leading to large uncertainties and tension between the available theoretical predictions constrained by such data. In this Letter we present measurements of Λ–K+ ⊕ Λ‾–K− and Λ–K− ⊕ Λ‾–K+ correlations obtained in the high-multiplicity triggered data sample in pp collisions at √s = 13 TeV recorded by ALICE at the LHC. The correlation function for both pairs is modeled using the Lednický–Lyuboshits analytical formula, and the corresponding scattering parameters are extracted. The Λ–K− ⊕ Λ‾–K+ correlations show the presence of several structures at relative momenta k* above 200 MeV/c, compatible with the Ω baryon and with the Ξ(1690) and Ξ(1820) resonances decaying into Λ–K− pairs. The low-k* region of the Λ–K− ⊕ Λ‾–K+ correlation also exhibits the presence of the Ξ(1620) state, expected to couple strongly to the measured pair. The presented data give access to the ΛK+ and ΛK− strong interactions with unprecedented precision and deliver the first experimental observation of the Ξ(1620) decaying into ΛK−.
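For reference, the Lednický–Lyuboshits correlation function in its commonly quoted form for a Gaussian source of radius r_0, with the scattering amplitude in the effective-range approximation (quoted from the standard femtoscopy literature, not from this Letter; spin and coupled-channel factors are omitted):

```latex
C(k^*) \simeq 1
  + \frac{|f(k^*)|^2}{2 r_0^2}\left(1 - \frac{d_0}{2\sqrt{\pi}\, r_0}\right)
  + \frac{2\,\Re f(k^*)}{\sqrt{\pi}\, r_0}\, F_1(2 k^* r_0)
  - \frac{\Im f(k^*)}{r_0}\, F_2(2 k^* r_0),
\qquad
f(k^*) = \left(\frac{1}{f_0} + \frac{d_0\, k^{*2}}{2} - i k^*\right)^{-1},
```

where f_0 is the scattering length, d_0 the effective range, F_1(z) = (1/z) ∫_0^z e^{x²−z²} dx, and F_2(z) = (1 − e^{−z²})/z.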