In this talk we presented a novel technique, based on Deep Learning, to determine the impact parameter of nuclear collisions at the CBM experiment. PointNet based Deep Learning models are trained on UrQMD followed by CBMRoot simulations of Au+Au collisions at 10 AGeV to reconstruct the impact parameter of collisions from raw experimental data such as hits of the particles in the detector planes, tracks reconstructed from the hits or their combinations. The PointNet models can perform fast, accurate, event-by-event impact parameter determination in heavy ion collision experiments. They are shown to outperform a simple model which maps the track multiplicity to the impact parameter. While conventional methods for centrality classification merely provide an expected impact parameter distribution for a given centrality class, the PointNet models predict the impact parameter from 2–14 fm on an event-by-event basis with a mean error of −0.33 to 0.22 fm.
The ongoing digitalization of educational resources and the use of the internet lead to a steady increase of potentially available learning media. However, many of the media which are used for educational purposes have not been designed specifically for teaching and learning. Usually, linguistic criteria of readability and comprehensibility as well as content-related criteria are used independently to assess and compare the quality of educational media. This also holds true for educational media used in economics. This article aims to improve the analysis of textual learning media used in economic education by drawing on threshold concepts. Threshold concepts are key terms in knowledge acquisition within a domain. From a linguistic perspective, however, threshold concepts are instances of specialized vocabularies, exhibiting particular linguistic features. In three kinds of (German) resources, namely in textbooks, in newspapers, and on Wikipedia, we investigate the distributive profiles of 63 threshold concepts identified in economics education (which have been collected from threshold concept research). We looked at the threshold concepts' frequency distribution, their compound distribution, and their network structure within the three kinds of resources. The two main findings of our analysis show that firstly, the three kinds of resources can indeed be distinguished in terms of their threshold concepts' profiles. Secondly, Wikipedia definitely shows stronger associative connections between economic threshold concepts than the other sources. We discuss the findings in relation to adequate media use for teaching and learning—not only in economic education.
Learning to solve graph tasks is one of the key prerequisites of acquiring domain-specific knowledge in most study domains. Analyses of graph understanding often use eye-tracking and focus on analyzing how much time students spend gazing at particular areas of a graph—Areas of Interest (AOIs). To gain a deeper insight into students’ task-solving process, we argue that the gaze shifts between students’ fixations on different AOIs (so-called transitions) also need to be included in holistic analyses of graph understanding that consider the importance of transitions for the task-solving process. Thus, we introduced Epistemic Network Analysis (ENA) as a novel approach to analyze eye-tracking data of 23 university students who solved eight multiple-choice graph tasks in physics and economics. ENA is a method for quantifying, visualizing, and interpreting network data, allowing a weighted analysis of the gaze patterns of both correct and incorrect graph task solvers considering the interrelations between fixations and transitions. After an analysis of the differences in the number of fixations and the number of single transitions between correct and incorrect solvers, we conducted an ENA for each task. We demonstrate that an isolated analysis of fixations and transitions provides only a limited insight into graph solving behavior. In contrast, ENA identifies differences between the gaze patterns of students who solved the graph tasks correctly and incorrectly across the multiple graph tasks. For instance, incorrect solvers shifted their gaze from the graph to the x-axis and from the question to the graph comparatively more often than correct solvers. The results indicate that incorrect solvers often have problems transferring textual information into graphical information and rely more on partly irrelevant parts of a graph. Finally, we discuss how the findings can be used to design experimental studies and for innovative instructional procedures in higher education.
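The fixation and transition counts that feed into such an analysis can be illustrated with a toy sketch (the AOI names and the gaze sequence below are invented for illustration, not data from the study):

```python
from collections import Counter

# Toy example: derive fixation counts and AOI-to-AOI transition counts from
# a sequence of fixated AOIs -- the raw quantities entering an epistemic
# network analysis. AOI labels here are hypothetical.
gaze = ["question", "graph", "x-axis", "graph", "question", "graph"]

fixations = Counter(gaze)                       # how often each AOI was fixated
transitions = Counter(zip(gaze, gaze[1:]))      # ordered gaze shifts between AOIs

print(fixations["graph"])                  # 3
print(transitions[("question", "graph")])  # 2
```

An ENA would then weight and compare such transition networks between correct and incorrect solvers rather than inspecting the counts in isolation.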
Volatility clustering and fat tails are prominently observed in financial markets. Here, we analyze the underlying mechanisms of three agent-based models explaining these stylized facts in terms of market instabilities and compare them on empirical grounds. To this end, we first develop a general framework for detecting tail events in stock markets. In particular, we introduce Hawkes processes to automatically identify and date onsets of market turmoils which result in increased volatility. Second, we introduce three different indicators to predict those onsets. Each of the three indicators is derived from and tailored to one of the models, namely quantifying information content, critical slowing down or market risk perception. Finally, we apply our indicators to simulated and real market data. We find that all indicators reliably predict market events on simulated data and clearly distinguish the different models. In contrast, a systematic comparison on the stocks of the Forbes 500 companies shows a markedly lower performance. Overall, predicting the onset of market turmoils appears difficult, yet, over very short time horizons high or rising volatility exhibits some predictive power.
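The self-exciting point processes mentioned above can be sketched in a few lines. This is a minimal univariate Hawkes intensity with an exponential kernel; the parameter values are illustrative, not those fitted in the study:

```python
import math

# Conditional intensity of a univariate Hawkes process,
#   lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)),
# where past events excite future activity. mu, alpha, beta are made-up
# example values, not the calibrated parameters from the paper.
def hawkes_intensity(t, event_times, mu=0.5, alpha=0.8, beta=1.2):
    return mu + alpha * sum(
        math.exp(-beta * (t - ti)) for ti in event_times if ti < t
    )

# A burst of events raises the intensity; onsets of market turmoil can be
# dated by flagging times where the intensity crosses a threshold.
events = [1.0, 1.1, 1.2, 1.3]
quiet = hawkes_intensity(0.9, events)   # before the burst: baseline mu
busy = hawkes_intensity(1.35, events)   # just after the burst
print(quiet < busy)  # True
```
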
Monitoring is an indispensable tool for the operation of any large installation of grid or cluster computing, be it high energy physics or elsewhere. Usually, monitoring is configured to collect a small amount of data, just enough to enable detection of abnormal conditions. Once detected, the abnormal condition is handled by gathering all information from the affected components. This data is processed by querying it in a manner similar to a database.
This contribution shows how the metaphor of a debugger (for software applications) can be transferred to a compute cluster. The concepts of variables, assertions and breakpoints that are used in debugging can be applied to monitoring by defining variables as the quantities recorded by monitoring and breakpoints as invariants formulated via these variables. It is found that embedding fragments of a data extracting and reporting tool such as the UNIX tool awk facilitates concise notations for commonly used variables since tools like awk are designed to process large event streams (in textual representations) with bounded memory. A functional notation similar to both the pipe notation used in the UNIX shell and the point-free style used in functional programming simplifies the combination of variables that commonly occur when formulating breakpoints.
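The variable/breakpoint scheme can be sketched as follows. This is a hypothetical illustration in Python, with plain functions over an event list standing in for awk-style stream processing; the metric names and thresholds are invented:

```python
# Sketch of the debugger metaphor for cluster monitoring (illustrative only).
# A "variable" is a quantity computed from the monitoring event stream;
# a "breakpoint" is an invariant over such variables that fires when violated.

# Variables: stream-derived quantities (an awk fragment would compute these
# with bounded memory over a textual event stream).
def load_avg(events):
    return sum(e["load"] for e in events) / len(events)

def max_temp(events):
    return max(e["temp"] for e in events)

def breakpoint_(name, invariant):
    """Wrap an invariant; return the breakpoint's name when it is violated."""
    def check(events):
        return None if invariant(events) else name
    return check

overheating = breakpoint_("overheating", lambda ev: max_temp(ev) < 90.0)
overload = breakpoint_("overload", lambda ev: load_avg(ev) < 8.0)

events = [{"load": 2.0, "temp": 55.0}, {"load": 3.0, "temp": 61.0}]
fired = [bp(events) for bp in (overheating, overload)]
print([f for f in fired if f])  # [] -- no invariant violated
```

The point-free/pipe-style combination described in the abstract would correspond to composing such variable functions before wrapping them in a breakpoint.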
We test the hypothesis that the extent to which one obtains information on a given topic through Wikipedia depends on the language in which it is consulted. Controlling for the size factor, we investigate this hypothesis for 25 subject areas. Since Wikipedia is a central part of the web-based information landscape, such a dependence would indicate a language-related, linguistic bias. The article therefore deals with the question of whether Wikipedia exhibits this kind of linguistic relativity or not. From the perspective of educational science, the article develops a computational model of the information landscape from which multiple texts are drawn as typical input of web-based reading. For this purpose, it develops a hybrid model of intra- and intertextual similarity of different parts of the information landscape and tests this model on the example of 35 languages and corresponding Wikipedias. In the way it measures the similarities of hypertexts, the article goes beyond existing approaches by examining their structural and semantic aspects intra- and intertextually. In this way it builds a bridge between reading research, educational science, Wikipedia research and computational linguistics.
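One ingredient of any such similarity model is a pairwise text-similarity measure. As a toy illustration only (the study combines structural and semantic measures; the snippets below are invented), here is cosine similarity over bag-of-words vectors:

```python
from collections import Counter
import math

# Toy text-similarity sketch: cosine similarity over bag-of-words vectors.
# Real intertextual-similarity models are far richer; this only illustrates
# the basic pairwise-comparison step.
def cosine(a, b):
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

s1 = "supply and demand determine the price"
s2 = "the price is set by supply and demand"
s3 = "photosynthesis converts light into energy"
print(cosine(s1, s2) > cosine(s1, s3))  # True: related texts score higher
```
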
A new method of event characterization based on Deep Learning is presented. The PointNet models can be used for fast, online event-by-event impact parameter determination at the CBM experiment. For this study, UrQMD and the CBM detector simulation are used to generate Au+Au collision events at 10 AGeV which are then used to train and evaluate PointNet based architectures. The models can be trained on features like the hit position of particles in the CBM detector planes, tracks reconstructed from the hits or combinations thereof. The Deep Learning models reconstruct impact parameters from 2-14 fm with a mean error varying from -0.33 to 0.22 fm. For impact parameters in the range of 5-14 fm, a model which uses the combination of hit and track information of particles has a relative precision of 4-9% and a mean error of -0.33 to 0.13 fm. In the same range of impact parameters, a model with only track information has a relative precision of 4-10% and a mean error of -0.18 to 0.22 fm. This new method of event-classification is shown to be more accurate and less model dependent than conventional methods and can utilize the performance boost of modern GPU processor units.
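The defining property PointNet brings to this task is permutation invariance: a collision event is an unordered set of hits or tracks, so the prediction must not depend on the order in which they are listed. A toy, untrained sketch of that architecture pattern (random weights, pure Python; not the models trained in the study):

```python
import random

# Schematic PointNet-style regressor: a shared per-point function maps each
# input point (e.g. a detector hit position), then a symmetric max-pooling
# aggregates over all points, making the output invariant to point order.
# Weights are random here -- this only demonstrates the architecture pattern.
random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]

def shared_mlp(point):
    """Per-point feature extractor, shared across all points (one ReLU layer)."""
    return [max(0.0, sum(w * x for w, x in zip(row, point))) for row in W]

def pointnet_predict(points):
    feats = [shared_mlp(p) for p in points]
    pooled = [max(f[j] for f in feats) for j in range(8)]  # symmetric pooling
    return sum(pooled) / len(pooled)  # stand-in for the regression head

hits = [(0.1, 0.2, 0.3), (0.5, -0.4, 0.2), (-0.3, 0.8, 0.1)]
b1 = pointnet_predict(hits)
b2 = pointnet_predict(list(reversed(hits)))  # same hits, different order
print(abs(b1 - b2) < 1e-12)  # True: the prediction is order-invariant
```

In the actual models, the regression head is trained so that this pooled representation maps to the impact parameter in fm.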
The impact of columnar file formats on SQL‐on‐hadoop engine performance: a study on ORC and Parquet
(2019)
Columnar file formats provide an efficient way to store data to be queried by SQL‐on‐Hadoop engines. Related works consider the performance of processing engine and file format together, which makes it impossible to predict their individual impact. In this work, we propose an alternative approach: by executing each file format on the same processing engine, we compare the different file formats as well as their different parameter settings. We apply our strategy to two processing engines, Hive and SparkSQL, and evaluate the performance of two columnar file formats, ORC and Parquet. We use BigBench (TPCx‐BB), a standardized application‐level benchmark for Big Data scenarios. Our experiments confirm that the file format selection and its configuration significantly affect the overall performance. We show that ORC generally performs better on Hive, whereas Parquet achieves best performance with SparkSQL. Using ZLIB compression brings up to 60.2% improvement with ORC, while Parquet achieves up to 7% improvement with Snappy. Exceptions are the queries involving text processing, which do not benefit from using any compression.
The specific temporal evolution of bacterial and phage population sizes, in particular bacterial depletion and the emergence of a resistant bacterial population, can be seen as a kinetic fingerprint that depends on the manifold interactions of the specific phage–host pair during the course of infection. We have elaborated such a kinetic fingerprint for a human urinary tract Klebsiella pneumoniae isolate and its phage vB_KpnP_Lessing by a modeling approach based on data from in vitro co-culture. We found a faster depletion of the initially sensitive bacterial population than expected from simple mass action kinetics. A possible explanation for the rapid decline of the bacterial population is a synergistic interaction of phages which can be a favorable feature for phage therapies. In addition to this interaction characteristic, analysis of the kinetic fingerprint of this bacteria and phage combination revealed several relevant aspects of their population dynamics: A reduction of the bacterial concentration can be achieved only at high multiplicity of infection whereas bacterial extinction is hardly accomplished. Furthermore the binding affinity of the phage to bacteria is identified as one of the most crucial parameters for the reduction of the bacterial population size. Thus, kinetic fingerprinting can be used to infer phage–host interactions and to explore emergent dynamics which facilitates a rational design of phage therapies.
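The simple mass-action baseline against which the observed kinetics are compared can be sketched as a pair of coupled ODEs integrated with forward Euler. All rate constants below are made-up illustrative values, not the parameters fitted to the co-culture data:

```python
# Minimal mass-action sketch of phage-bacteria population dynamics:
#   dB/dt = r*B - k*B*P                      (growth minus infection)
#   dP/dt = burst*k*B*P - k*B*P              (burst release minus adsorption)
# integrated with forward Euler. Parameter values are illustrative only.
def simulate(b0=1e6, p0=1e7, r=0.5, k=1e-8, burst=50, dt=0.01, steps=2000):
    b, p = b0, p0
    for _ in range(steps):
        infect = k * b * p          # mass-action infection term
        b += (r * b - infect) * dt
        p += (burst * infect - infect) * dt
        b = max(b, 0.0)             # populations cannot go negative
    return b, p

b_end, p_end = simulate()
print(b_end < 1e6)  # True: the sensitive bacterial population is depleted
```

Fitting such a model to measured time courses, and noting where the data decline faster than the model allows, is what motivates the synergy and binding-affinity conclusions in the abstract.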
BIOfid is a specialized information service currently being developed to mobilize biodiversity data dormant in printed historical and modern literature and to offer a platform for open access journals on the science of biodiversity. Our team of librarians, computer scientists and biologists produce high-quality text digitizations, develop new text-mining tools and generate detailed ontologies enabling semantic text analysis and semantic search by means of user-specific queries. In a pilot project we focus on German publications on the distribution and ecology of vascular plants, birds, moths and butterflies extending back to the Linnaeus period about 250 years ago. The three organism groups have been selected according to current demands of the relevant research community in Germany. The text corpus defined for this purpose comprises over 400 volumes with more than 100,000 pages to be digitized and will be complemented by journals from other digitization projects, copyright-free and project-related literature. With TextImager (Natural Language Processing & Text Visualization) and TextAnnotator (Discourse Semantic Annotation) we have already extended and launched tools that focus on the text-analytical section of our project. Furthermore, taxonomic and anatomical ontologies elaborated by us for the taxa prioritized by the project’s target group - German institutions and scientists active in biodiversity research - are constantly improved and expanded to maximize scientific data output. Our poster describes the general workflow of our project ranging from literature acquisition via software development, to data availability on the BIOfid web portal (http://biofid.de/), and the implementation into existing platforms which serve to promote global accessibility of biodiversity data.
Risk evaluations for agricultural chemicals are necessary to preserve healthy populations of honey bee colonies. Field studies on whole colonies are limited in behavioural research, while results from lab studies allow only restricted conclusions on whole colony impacts. Methods for automated long-term investigations of behaviours within comb cells, such as brood care, were hitherto missing. In the present study, we demonstrate an innovative video method that enables within-cell analysis in honey bee (Apis mellifera) observation hives to detect chronic sublethal neonicotinoid effects of clothianidin (1 and 10 ppb) and thiacloprid (200 ppb) on worker behaviour and development. In May and June, colonies which were fed 10 ppb clothianidin and 200 ppb thiacloprid in syrup over three weeks showed reduced feeding visits and duration throughout various larval development days (LDDs). On LDD 6 (capping day) total feeding duration did not differ between treatments. Behavioural adaptation was exhibited by nurses in the treatment groups in response to retarded larval development by increasing the overall feeding timespan. Using our machine learning algorithm, we demonstrate a novel method for detecting behaviours in an intact hive that can be applied in a versatile manner to conduct impact analyses of chemicals, pests and other stressors.
Measurement of ϒ(1S) elliptic flow at forward rapidity in Pb-Pb collisions at √sNN = 5.02 TeV
(2019)
The first measurement of the Υ(1S) elliptic flow coefficient (v2) is performed at forward rapidity (2.5 < y < 4) in Pb–Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC. The results are obtained with the scalar product method and are reported as a function of transverse momentum (pT) up to 15 GeV/c in the 5%–60% centrality interval. The measured Υ(1S) v2 is consistent with 0 and with the small positive values predicted by transport models within uncertainties. The v2 coefficient in 2 < pT < 15 GeV/c is lower than that of inclusive J/ψ mesons in the same pT interval by 2.6 standard deviations. These results, combined with earlier suppression measurements, are in agreement with a scenario in which the Υ(1S) production in Pb–Pb collisions at LHC energies is dominated by dissociation limited to the early stage of the collision, whereas in the J/ψ case there is substantial experimental evidence of an additional regeneration component.
Dancing is an activity that positively enhances people's mood; it consists of feeling the music and expressing it in rhythmic movements with the body. Learning how to dance can be challenging because it requires proper coordination and an understanding of rhythm and beat. In this paper, we present the first implementation of the Dancing Coach (DC), a generic system designed to support the practice of dancing steps, which in its current state supports the practice of basic salsa dancing steps. However, the DC has been designed to allow the addition of more dance styles. We also present the first user evaluation of the DC, consisting of user tests with 25 participants. Results from the user tests show that participants stated they had learned the basic salsa dancing steps, to move to the beat, and body coordination in a fun way. Results also point out some directions for improving future versions of the DC.
Iconographic representations on ancient artifacts are described in many existing databases and literature as human-readable text. We applied Natural Language Processing (NLP) approaches in order to extract the semantics out of these textual descriptions and in this way enable semantic searches over them. This allows more sophisticated requests compared to the common existing keyword searches. As we show in our experiments based on numismatic datasets, the approach is generic in the sense that once the system is trained on one dataset, it can be applied without any further manual work also to datasets that have similar content. Of course, additional adaptations would further improve the results. Since the approach requires manual work only during the training phase, it can easily be applied to huge datasets and therefore incurs no major extra costs. In fact, in our experience bigger datasets generate even better results because there is more data for training. Since our approach is not bound to a certain domain and the numismatic datasets are just an example, it could serve as a blueprint for many other areas. It could also help to build bridges between disciplines since textual iconographic descriptions are to be found also for pottery, sculpture and elsewhere.
Correction to: Scientific Reports https://doi.org/10.1038/s41598-019-43857-5, published online 17 May 2019. In the original version of this Article, Jan-Hendrik Trösemeier was incorrectly affiliated with ‘Division of Allergology, Paul Ehrlich Institut, Langen, Germany’. The correct affiliations are listed below...
Though the range of invariance in recognition of novel objects is a basic aspect of human vision, its characterization has remained surprisingly elusive. Here we report tolerance to scale and position changes in one-shot learning by measuring recognition accuracy of Korean letters presented in a flash to non-Korean subjects who had no previous experience with Korean letters. We found that humans have significant scale-invariance after only a single exposure to a novel object. The range of translation-invariance is limited, depending on the size and position of presented objects. To understand the underlying brain computation associated with the invariance properties, we compared experimental data with computational modeling results. Our results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance, by encoding different scale channels as well as eccentricity-dependent representations captured by neurons’ receptive field sizes and sampling density that change with eccentricity. Our psychophysical experiments and related simulations strongly suggest that the human visual system uses a computational strategy that differs in some key aspects from current deep learning architectures, being more data efficient and relying more critically on eye-movements.
In pathology, tissue images are evaluated using a light microscope, relying on the expertise and experience of pathologists. There is a great need for computational methods to quantify and standardize histological observations, and such quantification methods are becoming more and more essential for evaluating tissue images. In particular, the distribution of tumor cells and their microenvironment is of special interest. Here, we systematically investigated tumor cell properties and their spatial neighborhood relations through a new application of statistical analysis to whole slide images of Hodgkin lymphoma, a tumor arising in lymph nodes, and of inflammation of lymph nodes, called lymphadenitis. We considered properties of more than 400,000 immunohistochemically stained, CD30-positive cells in 35 whole slide images of tissue sections from the classical Hodgkin lymphoma subtypes nodular sclerosis and mixed cellularity, as well as from lymphadenitis. We found that cells exhibited significantly favored and unfavored spatial neighborhood relations depending on their morphology. This information is important for evaluating differences between lymph nodes infiltrated by tumor cells (Hodgkin lymphoma) and inflamed lymph nodes with respect to the neighborhood relations and sizes of cells. The quantification of neighborhood relations revealed new insights into the relations of CD30-positive cells in different diagnostic cases. The approach is general and can easily be applied to whole slide image analysis of other tumor types.
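The basic building block of such a spatial neighborhood analysis is counting, for each cell of one morphology class, how many cells of another class fall within a given radius. A hypothetical sketch (coordinates, class names and the radius are invented example values, not data from the study):

```python
import math

# Illustrative neighborhood-relation count: for each cell of class A, count
# the class-B cells within a fixed radius. Observed counts would then be
# compared against a random-placement null model to call a relation
# "favored" or "unfavored". All values below are made up.
def neighbors_within(a_cells, b_cells, radius):
    counts = []
    for ax, ay in a_cells:
        n = sum(1 for bx, by in b_cells
                if math.hypot(ax - bx, ay - by) <= radius)
        counts.append(n)
    return counts

large_cells = [(0.0, 0.0), (10.0, 0.0)]   # hypothetical class-A positions
small_cells = [(1.0, 0.0), (2.0, 0.0), (11.0, 0.0)]  # hypothetical class B
print(neighbors_within(large_cells, small_cells, radius=3.0))  # [2, 1]
```
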
The morphology of presynaptic specializations can vary greatly ranging from classical single-release-site boutons in the central nervous system to boutons of various sizes harboring multiple vesicle release sites. Multi-release-site boutons can be found in several neural contexts, for example at the neuromuscular junction (NMJ) of body wall muscles of Drosophila larvae. These NMJs are built by two motor neurons forming two types of glutamatergic multi-release-site boutons with two typical diameters. However, it is unknown why these distinct nerve terminal configurations are used on the same postsynaptic muscle fiber. To systematically dissect the biophysical properties of these boutons we developed a full three-dimensional model of such boutons, their release sites and transmitter-harboring vesicles and analyzed the local vesicle dynamics of various configurations during stimulation. Here we show that the rate of transmission of a bouton is primarily limited by diffusion-based vesicle movements and that the probability of vesicle release and the size of a bouton affect bouton-performance in distinct temporal domains allowing for an optimal transmission of the neural signals at different time scales. A comparison of our in silico simulations with in vivo recordings of the natural motor pattern of both neurons revealed that the bouton properties resemble a well-tuned cooperation of the parameters release probability and bouton size, enabling a reliable transmission of the prevailing firing-pattern at diffusion-limited boutons. Our findings indicate that the prevailing firing-pattern of a neuron may determine the physiological and morphological parameters required for its synaptic terminals.
Relying on the theory of Saward (2010) and Disch (2015), we study political representation through the lens of representative claim-making. We identify a gap between the theoretical concept of claim-making and the empirical (quantitative) assessment of representative claims made in real-world representative contexts. Therefore, we develop a new approach to map and quantify representative claims in order to subsequently measure the reception and validation of the claims by the audience. To test our method, we analyse all the debates of the German parliament concerned with the introduction of the gender quota in German supervisory boards from 2013 to 2017 in a two-step process. At first, we assess which constituencies the MPs claim to represent and how they justify their stance. Drawing on multiple correspondence analysis, we identify different claim patterns. Second, making use of natural language processing techniques and logistic regression on social media data, we measure whether and how the asserted claims in the parliamentary debates are received and validated by the respective audience. We come to the conclusion that the constituency as the ultimate judge of legitimacy has not been comprehensively conceptualized yet.
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches, and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.