The archaeological data in our database solution Antike Fundmünzen in Europa (AFE), which records finds of ancient coins, is entered by humans. Following the Linked Open Data (LOD) approach, we link our data to Nomisma.org concepts as well as to other resources such as Online Coins of the Roman Empire (OCRE). Since information such as denomination and material is recorded for each individual coin, it should be identical for coins of the same type. Unfortunately, this is not always the case, mostly due to human error. Using a set of rules we implemented, we were able to exploit this redundant information to detect possible errors within AFE, and even to correct errors in Nomisma.org. However, this approach had the weakness that the data first had to be transformed into an internal data model. In a second step, we therefore reimplemented our rules within the Linked Open Data world. The rules can now be applied to any dataset following the Nomisma.org modelling approach, as we demonstrated with data held by Corpus Nummorum Thracorum (CNT). We believe that methods like this, which increase the data quality of individual databases as well as across different data sources and up to the aggregation levels of OCRE and Nomisma.org, are essential for increasing trust in them.
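The core idea of the redundancy-based check can be illustrated with a minimal sketch. The record layout and the type identifier below are hypothetical, not the actual AFE schema or rule engine: coins are grouped by their type, and any attribute that varies within a group is flagged as a likely entry error.

```python
# Sketch: rule-based consistency check over coin records, assuming a
# hypothetical list-of-dicts representation (not the actual AFE schema).
from collections import defaultdict

def find_type_inconsistencies(coins, fields=("denomination", "material")):
    """Group coins by type identifier and flag fields whose values
    disagree across coins of the same type."""
    by_type = defaultdict(list)
    for coin in coins:
        by_type[coin["type"]].append(coin)
    errors = []
    for type_id, group in by_type.items():
        for field in fields:
            values = {c[field] for c in group}
            if len(values) > 1:  # same type, conflicting values -> likely entry error
                errors.append((type_id, field, sorted(values)))
    return errors

# Hypothetical example: two records of the same coin type disagree on material.
coins = [
    {"type": "ric.1(2).aug.1a", "denomination": "denarius", "material": "silver"},
    {"type": "ric.1(2).aug.1a", "denomination": "denarius", "material": "bronze"},
]
print(find_type_inconsistencies(coins))
# → [('ric.1(2).aug.1a', 'material', ['bronze', 'silver'])]
```

The same grouping logic can be expressed directly over LOD triples (e.g. as a SPARQL query against Nomisma.org-modelled data), which is essentially what the second, model-independent version of the rules does.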
The Gribov mode in hot QCD
(2017)
The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.
I review a number of recent developments in the physics of compact stars containing deconfined quark matter, including (a) their cooling with a possible phase transition from a fully gapped to a gapless phase of QCD at low temperatures and large isospin; (b) the transport coefficients of the 2SC phase and the role played by the Aharonov-Bohm interactions between flux tubes and unpaired fermions; (c) rapidly rotating compact stars and spin-down- and spin-up-induced phase transitions between hadronic and QCD matter as well as between different phases of QCD.
Quantitative text analysis is often confused with empirical literary studies or belittled as mere word counting. Yet even in the early days of Romance philology, when linguistics and literary studies were far more closely intertwined, the textual analysis of literary works already worked with concordance tables and other surface-level structural features of texts. Today, in the context of the Digital Humanities, literary scholars attempt to apply insights from forensic linguistics and authorship attribution to discussions of literary style and genre. For this, the method of stylometry mainly relies on the easily accessible tool stylo for the statistics environment R, developed by the Computational Stylistics Group.
The workshop consists of the following parts:
1. Introduction to quantitative text analysis in the context of the Digital Humanities
2. How stylometry works: mathematical distance measures and statistical distributions
3. A worked example using stylo for R
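The distance measures covered in part 2 can be made concrete with a minimal sketch of Burrows' Delta, the classic stylometric distance (one of the measures stylo implements). This is Python with invented toy frequencies, not stylo's R interface: relative frequencies of the most frequent words are z-scored against the whole corpus, and Delta is the mean absolute difference of the z-scores of two texts.

```python
# Minimal sketch of Burrows' Delta on toy word-frequency data;
# illustrative only, not the stylo package's API.
from statistics import mean, stdev

def burrows_delta(freq_a, freq_b, corpus_freqs, words):
    """Mean absolute difference of z-scored relative word frequencies.
    corpus_freqs: one frequency dict per corpus text."""
    delta_terms = []
    for w in words:
        col = [f.get(w, 0.0) for f in corpus_freqs]
        mu, sigma = mean(col), stdev(col)
        if sigma == 0:
            continue  # word does not discriminate within this corpus
        z_a = (freq_a.get(w, 0.0) - mu) / sigma
        z_b = (freq_b.get(w, 0.0) - mu) / sigma
        delta_terms.append(abs(z_a - z_b))
    return mean(delta_terms)

# Toy relative frequencies of two very frequent German function words
# in three texts (invented numbers for illustration).
texts = [{"der": 0.030, "und": 0.025},
         {"der": 0.034, "und": 0.020},
         {"der": 0.028, "und": 0.027}]
print(burrows_delta(texts[0], texts[1], texts, ["der", "und"]))
```

A smaller Delta suggests more similar word-usage profiles; in authorship attribution, a disputed text is assigned to the candidate author with the lowest Delta.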
The huge neutron fluxes offer the possibility of using research reactors to produce isotopes of interest, which can be investigated afterwards. An example is the determination of the half-lives of long-lived isotopes such as 129I. A direct use of reactor neutrons in the astrophysical energy regime is only possible if the corresponding ions are not at rest in the laboratory frame. The combination of an ion storage ring with a reactor and a neutron guide could open the path to direct measurements of neutron-induced cross sections on short-lived radioactive isotopes in the astrophysically interesting energy regime.
Random graph models, originally conceived to study the structure of networks and the emergence of their properties, have become an indispensable tool for experimental algorithmics. Amongst them, hyperbolic random graphs form a well-accepted family, yielding realistic complex networks while being both mathematically and algorithmically tractable. We introduce two generators MemGen and HyperGen for the G_{alpha,C}(n) model, which distributes n random points within a hyperbolic plane and produces m=n*d/2 undirected edges for all point pairs close by; the expected average degree d and exponent 2*alpha+1 of the power-law degree distribution are controlled by alpha>1/2 and C. Both algorithms emit a stream of edges which they do not have to store. MemGen keeps O(n) items in internal memory and has a time complexity of O(n*log(log n) + m), which is optimal for networks with an average degree of d=Omega(log(log n)). For realistic values of d=o(n / log^{1/alpha}(n)), HyperGen reduces the memory footprint to O([n^{1-alpha}*d^alpha + log(n)]*log(n)). In an experimental evaluation, we compare HyperGen with four generators among which it is consistently the fastest. For small d=10 we measure a speed-up of 4.0 compared to the fastest publicly available generator increasing to 29.6 for d=1000. On commodity hardware, HyperGen produces 3.7e8 edges per second for graphs with 1e6 < m < 1e12 and alpha=1, utilising less than 600MB of RAM. We demonstrate nearly linear scalability on an Intel Xeon Phi.
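The underlying threshold model can be sketched naively: distribute points in a hyperbolic disk with the radial density controlled by alpha, then connect every pair within hyperbolic distance R. The O(n^2) sketch below is for reference only; it is neither MemGen nor HyperGen (which stream edges without pairwise comparison), and it takes the disk radius R as a free parameter rather than deriving it from C.

```python
# Naive O(n^2) sketch of a threshold hyperbolic random graph;
# a reference implementation of the model, not MemGen/HyperGen.
import math
import random

def hyperbolic_random_graph(n, alpha, R, seed=0):
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        # Radius with density ~ sinh(alpha*r) on [0, R], via inverse CDF:
        # F(r) = (cosh(alpha*r) - 1) / (cosh(alpha*R) - 1)
        u = rng.random()
        r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
        pts.append((r, theta))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            r1, t1 = pts[i]
            r2, t2 = pts[j]
            # Hyperbolic distance in the native (polar) representation;
            # max(1.0, ...) guards acosh against floating-point round-off.
            d = math.acosh(max(1.0, math.cosh(r1) * math.cosh(r2)
                           - math.sinh(r1) * math.sinh(r2) * math.cos(t1 - t2)))
            if d <= R:
                edges.append((i, j))
    return pts, edges

pts, edges = hyperbolic_random_graph(n=200, alpha=0.75, R=8.0)
print(len(pts), len(edges))
```

Points accumulate near the disk boundary for larger alpha, which is what produces the power-law degree distribution; the generators in the paper avoid the quadratic pairwise loop by only comparing geometrically nearby candidates.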
In order to promote the accessibility of biodiversity data in historic and contemporary literature, we introduce a new interdisciplinary project called BIOfid (FID=Fachinformationsdienst, a service for providing specialized information). The project aims at a mobilization of data available in print only by combining digitization of scientific biodiversity literature with the development of innovative text mining tools for complex, eventually semantic searches throughout the complete text corpus. A major prerequisite for the development of such search tools is the provision of sophisticated anatomy ontologies on the one hand, and of complete lists of species names (currently considered valid as well as all synonyms) at a global scale on the other hand. In the initial stage, we chose examples from German publications of the past 250 years dealing with the geographic distribution and ecology of vascular plants (Tracheophyta), birds (Aves), as well as moths and butterflies (Lepidoptera) in Germany. These taxa have been prioritized according to current demands of German research groups (about 50 sites) aiming at analyses and modeling of distribution patterns and their changes through time. In the long term, we aim at providing data and open source software applicable for any taxon and geographic region. For this purpose, a platform for open access journals for long-term availability of professional e-journals will be established. All generated data will also be made accessible through GFBio (German Federation for Biological Data). BIOfid is supported by the LIS-Scientific Library Services and Information Systems program of the German Research Foundation (DFG).
Biodiversity research heavily relies on recent and older literature, and the data contained therein. Despite great effort, large parts of the literature and the data it holds are still not available in appropriate formats needed for efficient compilation and analysis. As a part of the current funding strategy of the German Research Council (Deutsche Forschungsgemeinschaft, DFG), and resulting from an extensive dialogue with the scientific community in Germany, a "Specialised Information Service" (Fachinformationsdienst, FID) for Biodiversity Research will be established with the objective of making further segments of literature about biodiversity available in up-to-date formats. This project, starting 2017, is conducted by the University Library Johann Christian Senckenberg (Frankfurt/Main, Germany) together with the Senckenberg Gesellschaft für Naturforschung and the Text Technology Lab of the Goethe University (Frankfurt/Main).
The new Specialised Information Service for Biodiversity Research (FID Biodiversitätsforschung) comprises four core elements: (A) a text mining approach which encompasses advanced text technologies and a large body of 20th century literature; (B) the digitisation of selected German biodiversity literature; (C) a platform for Open Access journals; and (D) the acquisition of specialised print literature.