This paper describes work on the morphological and syntactic annotation of Sumerian cuneiform as a model for low-resource languages in general. Cuneiform texts are invaluable sources for the study of the history, languages, economy, and cultures of Ancient Mesopotamia and its surrounding regions. Assyriology, the discipline dedicated to their study, has vast research potential, but lacks modern means for computational processing and analysis. Our project, Machine Translation and Automated Analysis of Cuneiform Languages, aims to fill this gap by bringing together corpus data, lexical data, linguistic annotations and object metadata. The project's main goal is to build a pipeline for machine translation and annotation of Sumerian Ur III administrative texts. The resulting rich, structured data are then to be made accessible in the form of (Linguistic) Linked Open Data (LLOD), opening them up to a larger research community. Our contribution is twofold: in terms of language technology, our work represents the first attempt to develop an integrative infrastructure for the annotation of morphology and syntax on the basis of RDF technologies and LLOD resources. With respect to Assyriology, we work towards producing the first syntactically annotated corpus of Sumerian.
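To make the RDF-based annotation idea concrete, the following is a minimal sketch of how a single Sumerian token's morphological annotation might be expressed as RDF triples and serialized to Turtle, in the spirit of CoNLL-RDF/NIF-style token modelling. All prefixes, URIs, the serializer, and the example token are illustrative assumptions, not the project's actual vocabulary or tooling.

```python
# Sketch: a Sumerian token's morphological annotation as RDF triples,
# serialized to Turtle. Prefixes and property names are illustrative
# placeholders, not the project's actual vocabulary.

PREFIXES = {
    "ex": "http://example.org/urIII/",
    "conll": "http://ufal.mff.cuni.cz/conll2009-st/task-description.html#",
}

def to_turtle(triples, prefixes=PREFIXES):
    """Serialize (subject, predicate, object) triples to a Turtle string.
    Objects without a prefix (no ':') are emitted as string literals."""
    lines = [f"@prefix {p}: <{iri}> ." for p, iri in prefixes.items()]
    lines.append("")
    for s, p, o in triples:
        obj = f'"{o}"' if ":" not in o else o
        lines.append(f"{s} {p} {obj} .")
    return "\n".join(lines)

# A single token with surface form, lemma, part of speech and morphology.
triples = [
    ("ex:word_1", "conll:WORD", "lugal"),     # surface form
    ("ex:word_1", "conll:LEMMA", "lugal"),    # lemma ("king")
    ("ex:word_1", "conll:POS", "N"),          # part of speech
    ("ex:word_1", "conll:MORPH", "N1=STEM"),  # morphological analysis
]

print(to_turtle(triples))
```

In a full pipeline such triples would typically be managed with an RDF library and queried via SPARQL rather than assembled by hand; the point here is only the shape of the data.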
"Forecasts are difficult, especially when they concern the future," as the saying goes. The last financial crisis is a good example: hardly any analysts or economic experts saw it coming. Since financial crises are fortunately rare, however, it is difficult to develop models that warn of a crash in time.
Iconographic representations on ancient artifacts are described in many existing databases and in the literature as human-readable text. We applied Natural Language Processing (NLP) approaches to extract the semantics from these textual descriptions and thereby enable semantic searches over them, allowing more sophisticated queries than the common keyword searches. As our experiments on numismatic datasets show, the approach is generic in the sense that once the system is trained on one dataset, it can be applied without further manual work to datasets with similar content; additional adaptation would, of course, further improve the results. Since manual work is required only during the training phase, the approach scales to huge datasets without major extra cost. In fact, in our experience bigger datasets yield even better results because more data is available for training. Since our approach is not bound to a particular domain, the numismatic datasets being just one example, it could serve as a blueprint for many other areas. It could also help build bridges between disciplines, since textual iconographic descriptions are also found for pottery, sculpture and elsewhere.
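The core idea of turning a free-text iconographic description into searchable semantics can be illustrated with a toy gazetteer-based extractor. This is a hand-written stand-in for the trained NLP pipeline described above; the concept lists, class labels and example description are illustrative assumptions, not the actual training data.

```python
# Toy sketch: gazetteer-based semantic extraction from an iconographic
# description. A real system would learn such mappings from annotated
# training data; the gazetteers below are illustrative placeholders.
import re

# Tiny "gazetteers" mapping surface terms to semantic classes.
GAZETTEERS = {
    "PERSON": {"athena", "zeus", "apollo"},
    "OBJECT": {"helmet", "spear", "owl", "wreath"},
    "DIRECTION": {"left", "right"},
}

def extract_concepts(description):
    """Return (term, class) pairs found in a free-text description."""
    tokens = re.findall(r"[a-z]+", description.lower())
    found = []
    for token in tokens:
        for cls, terms in GAZETTEERS.items():
            if token in terms:
                found.append((token, cls))
    return found

desc = "Head of Athena right, wearing a crested helmet"
print(extract_concepts(desc))
# [('athena', 'PERSON'), ('right', 'DIRECTION'), ('helmet', 'OBJECT')]
```

Once descriptions are reduced to such (term, class) structures, a semantic search can match on classes (e.g. "all coins depicting a deity") rather than on literal keywords.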
In this paper, we study the limit of compactness, a graph index originally introduced for measuring structural characteristics of hypermedia. Applying compactness to large-scale small-world graphs, Mehler (2008) observed its limit behaviour to equal 1. The striking question raised by this finding was whether the limit behaviour resulted from the specifics of small-world graphs or was simply an artefact. In this paper, we determine the necessary and sufficient conditions for any sequence of connected graphs to result in a limit value of CB = 1, which can be generalized, with some care, to the case of disconnected graph classes (Theorem 3). This result can be applied to many well-known classes of connected graphs; here, we illustrate it with four examples. In fact, our proof-theoretical approach allows the limit value of compactness to be obtained quickly for many graph classes, sparing computational costs.
Künstliche Intelligenz (KI), also intelligente Software, führt heutzutage Aufgaben aus, die man einst nur Menschen zutraute. Schon heute ist sie in vielen Bereichen unserer Gesellschaft angekommen – man denke an selbstfahrende Fahrzeuge, medizinische Diagnostik, Übersetzungsprogramme, persönliche Gesprächsassistenten, Suchfunktionen und Robotik. Doch wie weit können wir KI-Systemen vertrauen?
Exploring biophysical properties of virus-encoded components and their requirement for virus replication is an exciting new area of interdisciplinary virological research. To date, spatial resolution has only rarely been analyzed in computational/biophysical descriptions of virus replication dynamics. However, it is widely acknowledged that intracellular spatial dependence is a crucial component of virus life cycles. The hepatitis C virus-encoded NS5A protein is an endoplasmic reticulum (ER)-anchored viral protein and an essential component of the virus replication machinery. We therefore simulate NS5A dynamics on realistically reconstructed, curved ER surfaces by means of surface partial differential equations (sPDE) on unstructured grids. We match the in silico NS5A diffusion constant such that the sPDE simulation data reproduce experimental NS5A fluorescence recovery after photobleaching (FRAP) time-series data. This parameter estimation yields the NS5A diffusion constant. Such parameters are needed for spatial models of HCV dynamics, which we are developing in parallel but which remain qualitative at this stage. Our present study thus likely provides the first quantitative biophysical description of the movement of a viral component. Our spatio-temporally resolved ansatz paves new ways for understanding the intricate, spatially defined processes central to specific aspects of virus life cycles.
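The matching idea — simulate diffusion, then compare the simulated recovery to FRAP data — can be illustrated in one dimension with an explicit finite-difference scheme: a region is "bleached" to zero intensity and diffusion gradually refills it. All numbers below (domain size, D, time step, bleach window) are illustrative assumptions; the actual study solves surface PDEs on curved 2D ER reconstructions.

```python
# 1D stand-in for FRAP recovery: explicit finite differences for the
# diffusion equation u_t = D u_xx, with a central "bleached" window.
# Parameters are illustrative, not fitted values from the study.

def simulate_frap(n=100, D=1.0, dt=0.1, dx=1.0, steps=500,
                  bleach=range(40, 60)):
    u = [1.0] * n          # pre-bleach fluorescence, normalized to 1
    for i in bleach:
        u[i] = 0.0         # photobleach the central window
    r = D * dt / dx ** 2   # explicit scheme is stable for r <= 0.5
    for _ in range(steps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = new
    # mean recovered intensity inside the bleached window
    return sum(u[i] for i in bleach) / len(bleach)

print(simulate_frap(steps=0))    # 0.0 immediately after bleaching
print(simulate_frap(steps=500))  # partially recovered, between 0 and 1
```

Parameter estimation then amounts to varying D until the simulated recovery curve (mean intensity versus time) matches the experimental FRAP time series.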
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of the mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue of Entropy on "Information Decomposition of Target Effects from Multi-Source Interactions", we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed and how they have been interpreted and applied in empirical investigations. We then introduce the articles included in the special issue one by one, categorising them similarly into (i) proposals of new measures; (ii) theoretical investigations into the properties and interpretations of such approaches; and (iii) applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.
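The original Williams–Beer redundancy measure I_min can be illustrated on a purely synergistic system, the XOR gate, where two binary sources individually tell the target nothing but jointly determine it. The sketch below is a toy illustration for two sources and a binary target, not a general PID implementation.

```python
# Toy sketch of the Williams-Beer redundancy measure I_min for two
# sources and one target, applied to T = S1 XOR S2 (purely synergistic).
from math import log2
from collections import defaultdict

# Joint distribution p(s1, s2, t) for T = S1 XOR S2, uniform sources.
p = {(s1, s2, s1 ^ s2): 0.25 for s1 in (0, 1) for s2 in (0, 1)}

def marginal(joint, idx):
    m = defaultdict(float)
    for k, v in joint.items():
        m[tuple(k[i] for i in idx)] += v
    return m

def specific_info(joint, src_idx, t):
    """Specific information I(T = t; S) = sum_s p(s|t) log2(p(t|s)/p(t))."""
    p_t = marginal(joint, (2,))
    p_s = marginal(joint, (src_idx,))
    p_st = marginal(joint, (src_idx, 2))
    total = 0.0
    for (s,), ps in p_s.items():
        pst = p_st[(s, t)]
        if pst > 0:
            total += (pst / p_t[(t,)]) * log2((pst / ps) / p_t[(t,)])
    return total

def i_min(joint, sources=(0, 1)):
    """Redundancy: expected minimum specific information over the sources."""
    p_t = marginal(joint, (2,))
    return sum(pt * min(specific_info(joint, s, t) for s in sources)
               for (t,), pt in p_t.items())

print(i_min(p))  # 0.0 -- XOR carries no redundant information
```

For XOR, I_min = 0 and each source's mutual information with the target is also 0, so the full 1 bit of joint mutual information is attributed to synergy; for a fully redundant system (both sources copies of the target), I_min would instead be 1 bit.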
We report measurements of the production of prompt D0, D+, D*+ and Ds+ mesons in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair √sNN = 5.02 TeV, in the centrality classes 0–10%, 30–50% and 60–80%. The D-meson production yields are measured at mid-rapidity (|y| < 0.5) as a function of transverse momentum (pT). The pT intervals covered in central collisions are: 1 < pT < 50 GeV/c for D0, 2 < pT < 50 GeV/c for D+, 3 < pT < 50 GeV/c for D*+, and 4 < pT < 16 GeV/c for Ds+ mesons. The nuclear modification factors (RAA) for non-strange D mesons (D0, D+, D*+) show minimum values of about 0.2 for pT = 6–10 GeV/c in the most central collisions and are compatible within uncertainties with those measured at √sNN = 2.76 TeV. For Ds+ mesons, the values of RAA are larger than those of non-strange D mesons, but compatible within uncertainties. In central collisions the average RAA of non-strange D mesons is compatible with that of charged particles for pT > 8 GeV/c, while it is larger at lower pT. The nuclear modification factors for strange and non-strange D mesons are also compared to theoretical models with different implementations of in-medium energy loss.
We report the measured transverse momentum (pT) spectra of primary charged particles from pp, p-Pb and Pb-Pb collisions at a centre-of-mass energy √sNN = 5.02 TeV in the kinematic range 0.15 < pT < 50 GeV/c and |η| < 0.8. A significant improvement of systematic uncertainties motivated the reanalysis of data in pp and Pb-Pb collisions at √sNN = 2.76 TeV, as well as in p-Pb collisions at √sNN = 5.02 TeV, which is also presented. Spectra from Pb-Pb collisions are presented in nine centrality intervals and are compared to a reference spectrum from pp collisions scaled by the number of binary nucleon-nucleon collisions. For central collisions, the pT spectra are suppressed by more than a factor of 7 around 6–7 GeV/c, with a significant reduction in suppression towards higher momenta up to 30 GeV/c. The nuclear modification factor RpPb, constructed from the pp and p-Pb spectra measured at the same collision energy, is consistent with unity above 8 GeV/c. While the spectra in both pp and Pb-Pb collisions are substantially harder at √sNN = 5.02 TeV compared to 2.76 TeV, the nuclear modification factors show no significant collision-energy dependence. The obtained results should provide further constraints on parton energy loss calculations to determine the transport properties of the hot and dense QCD matter.
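The construction of a nuclear modification factor such as RAA or RpPb — the nuclear-collision yield divided by the pp reference scaled by the number of binary nucleon-nucleon collisions — can be sketched bin by bin. The spectra and the <Ncoll> value below are made-up numbers for illustration only.

```python
# Toy illustration of a nuclear modification factor:
# R_AA(pT) = (dN_AA/dpT) / (<Ncoll> * dN_pp/dpT), evaluated per pT bin.
# All numbers are illustrative, not measured values.

def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    """Bin-by-bin ratio of the AA yield to the Ncoll-scaled pp reference."""
    return [aa / (n_coll * pp) for aa, pp in zip(yield_aa, yield_pp)]

n_coll = 1600.0                   # illustrative <Ncoll> for central collisions
yield_pp = [10.0, 4.0, 1.0, 0.2]  # toy pp spectrum per pT bin

# An unmodified spectrum (pp scaled by Ncoll) gives R_AA = 1 in every bin:
unmodified = [n_coll * y for y in yield_pp]
print(nuclear_modification_factor(unmodified, yield_pp, n_coll))
# [1.0, 1.0, 1.0, 1.0]

# Suppression by a factor of ~7, as reported around 6-7 GeV/c,
# shows up as R_AA ~ 0.14:
suppressed = [n_coll * y / 7.0 for y in yield_pp]
print(nuclear_modification_factor(suppressed, yield_pp, n_coll))
```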
BIOfid is a specialized information service currently being developed to mobilize biodiversity data dormant in printed historical and modern literature and to offer a platform for open-access journals on the science of biodiversity. Our team of librarians, computer scientists and biologists produces high-quality text digitizations, develops new text-mining tools and generates detailed ontologies enabling semantic text analysis and semantic search by means of user-specific queries. In a pilot project we focus on German publications on the distribution and ecology of vascular plants, birds, moths and butterflies, extending back to the Linnaean period about 250 years ago. The three organism groups have been selected according to current demands of the relevant research community in Germany. The text corpus defined for this purpose comprises over 400 volumes with more than 100,000 pages to be digitized, and will be complemented by journals from other digitization projects as well as copyright-free and project-related literature. With TextImager (natural language processing and text visualization) and TextAnnotator (discourse-semantic annotation) we have already extended and launched tools covering the text-analytical part of our project. Furthermore, the taxonomic and anatomical ontologies we have elaborated for the taxa prioritized by the project's target group, German institutions and scientists active in biodiversity research, are constantly improved and expanded to maximize scientific data output. Our poster describes the general workflow of our project, ranging from literature acquisition via software development to data availability on the BIOfid web portal (http://biofid.de/), and the implementation into existing platforms which serve to promote global accessibility of biodiversity data.