Due to the massively parallel operation modes at the GSI accelerators, a large amount of accelerator setup and re-adjustment has to be performed by operators during a beam time. This is typically done manually using potentiometers and is very time-consuming. With the FAIR project, the complexity of the accelerator facility will increase further, and for efficiency reasons it is advisable to establish a high level of automation for future operation. Modern accelerator control systems allow fast access to both accelerator settings and beam diagnostics data. This provides the opportunity to implement algorithms for the automated adjustment of, e.g., magnet settings to maximize transmission and optimize the required beam parameters. The fast-switching magnets in the GSI beamlines are an optimal basis for an automatic exploration of the parameter space. The optimization of the parameters for the SIS18 multi-turn injection using a genetic algorithm has already been simulated*. The first results of our automated online parameter optimization at the CRYRING@ESR injector are presented here.
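The genetic-algorithm approach can be sketched as follows. This is a generic illustration, not the actual CRYRING@ESR implementation; in particular, the `transmission` function is a hypothetical stand-in for a real beam-transmission measurement.

```python
import random

def transmission(settings):
    """Hypothetical stand-in for a beam-transmission measurement:
    highest when every magnet setting is at its (here hidden) optimum."""
    optimum = [0.3, -0.7, 0.5, 0.1]
    return 1.0 / (1.0 + sum((s - o) ** 2 for s, o in zip(settings, optimum)))

def genetic_optimize(fitness, n_params=4, pop_size=32, generations=80,
                     sigma=0.1, seed=7):
    """Minimal elitist genetic algorithm over continuous parameters."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]          # keep the best quarter unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params)  # one-point crossover
            child = [g + rng.gauss(0.0, sigma)  # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_optimize(transmission)
```

In an online setting, each fitness evaluation would be one measurement cycle on the machine, which is why keeping the population and generation counts small matters.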
The thermodynamics of Quantum Chromodynamics (QCD) in external (electro-)magnetic fields shows some unexpected features, such as inverse magnetic catalysis, which have been revealed mainly through lattice studies. Many effective descriptions, on the other hand, use Landau levels or approximate the system by just the lowest Landau level (LLL). Analyzing lattice configurations, we ask whether such a picture is justified. We find the LLL to be separated from the rest of the spectrum by a spectral gap in the two-dimensional Dirac operator and analyze the corresponding LLL signature in four dimensions. We determine to what extent the quark condensate is LLL-dominated at strong magnetic fields.
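For orientation, the spectrum behind the LLL picture is that of a massless two-dimensional Dirac operator in a constant magnetic field $B$, a standard textbook result (quoted here for context, not a result of this work):

\[
\lambda_n = \sqrt{2 n \,|qB|}\,, \qquad n = 0, 1, 2, \ldots
\]

The lowest level, $\lambda_0 = 0$, is separated from the first excited level by a gap of size $\sqrt{2|qB|}$, which grows with the field strength; the question addressed above is whether an analogous separation survives in the interacting lattice system.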
Web archives created by the Internet Archive (IA) (https://archive.org), national libraries, and other archiving services contain large amounts of information collected over a period of more than twenty years. These archives constitute a valuable source for research in many disciplines, including the digital humanities and the historical sciences, by offering a unique possibility to look into past events and their representation on the Web.
Most Web archive services aim to capture the entire Web (IA) or national top-level domains and are therefore broad in scope and diverse regarding the topics they contain and the time intervals they cover. Due to their large size and broad scope, it is difficult for interested researchers to locate relevant information in the archives, as search facilities are very limited. Many users are more interested in studying smaller, topically coherent, event-centric collections of documents contained in a Web archive [1,2]. Such collections can reflect specific events such as elections or natural disasters, e.g. the Fukushima nuclear disaster (2011) or the German federal elections.
We study simulated animats in the form of wheeled robots with the simplest neural controller possible: a single neuron per actuator. The system is fully self-organized in the sense that the controlling neuron receives only the current angle of the wheel as input. Non-trivial locomotion results in structured environments, with the robot determining the direction of movement autonomously (time-reversal symmetry is spontaneously broken). Our controller, which mimics the mechanism used to transmit power in steam locomotives, abstracts from the body plan of the animat, working robustly in the presence of noise and for chains of individual two-wheeled cars. Being fully compliant, our controller may also be used, in the spirit of morphological computation, as a basic unit for higher-level evolutionary algorithms.
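The core effect can be illustrated with a toy model (our own simplified formulation, not the authors' equations): the neuron low-pass filters the wheel angle, and the motor torque is proportional to the sine of the phase by which the wheel leads this filtered reference, loosely analogous to a locomotive's crank. Standing still is then unstable, and a small nudge grows into sustained rotation in the nudged direction.

```python
import math

def simulate_wheel(omega0, k=2.0, gamma=0.5, tau=1.0, dt=0.01, steps=20000):
    """Toy single-neuron wheel controller (illustrative only).
    The neuron state (sn, cs) is a leaky-integrated copy of sin/cos of
    the wheel angle; the torque reinforces whatever rotation is ongoing."""
    phi, omega = 0.0, omega0
    sn, cs = math.sin(phi), math.cos(phi)       # neuron state: filtered angle
    for _ in range(steps):
        sn += dt / tau * (math.sin(phi) - sn)   # leaky integration
        cs += dt / tau * (math.cos(phi) - cs)
        ref = math.atan2(sn, cs)                # lags behind phi when rotating
        torque = k * math.sin(phi - ref)        # positive feedback on rotation
        omega += dt * (torque - gamma * omega)  # wheel with viscous friction
        phi += dt * omega
    return omega                                # final angular velocity
```

Opposite initial nudges settle into mirrored rotation speeds, i.e. the direction of movement is chosen by the dynamics rather than by the controller.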
Biodiversity research relies heavily on recent and older literature and the data contained therein. Despite great effort, large parts of the literature, and of the data it holds, are still not available in the appropriate formats needed for efficient compilation and analysis. As part of the current funding strategy of the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), and resulting from an extensive dialogue with the scientific community in Germany, a "Specialised Information Service" (Fachinformationsdienst, FID) for Biodiversity Research will be established with the objective of making further segments of the biodiversity literature available in up-to-date formats. This project, starting in 2017, is conducted by the University Library Johann Christian Senckenberg (Frankfurt/Main, Germany) together with the Senckenberg Gesellschaft für Naturforschung and the Text Technology Lab of the Goethe University (Frankfurt/Main).
The new Specialised Information Service for Biodiversity Research (FID Biodiversitätsforschung) comprises four core elements: (A) a text-mining approach which encompasses advanced text technologies and a large body of 20th-century literature; (B) the digitisation of selected German biodiversity literature; (C) a platform for Open Access journals; and (D) the acquisition of specialised print literature.
In order to promote the accessibility of biodiversity data in historic and contemporary literature, we introduce a new interdisciplinary project called BIOfid (FID = Fachinformationsdienst, a service for providing specialized information). The project aims at mobilizing data available only in print by combining the digitization of scientific biodiversity literature with the development of innovative text-mining tools for complex, ultimately semantic, searches throughout the complete text corpus. A major prerequisite for the development of such search tools is the provision of sophisticated anatomy ontologies on the one hand, and of complete lists of species names (those currently considered valid as well as all synonyms) at a global scale on the other. In the initial stage, we chose examples from German publications of the past 250 years dealing with the geographic distribution and ecology of vascular plants (Tracheophyta), birds (Aves), and moths and butterflies (Lepidoptera) in Germany. These taxa have been prioritized according to the current demands of German research groups (about 50 sites) aiming at analyses and modeling of distribution patterns and their changes through time. In the long term, we aim at providing data and open-source software applicable to any taxon and geographic region. For this purpose, a platform for open access journals ensuring the long-term availability of professional e-journals will be established. All generated data will also be made accessible through GFBio (German Federation for Biological Data). BIOfid is supported by the LIS (Scientific Library Services and Information Systems) programme of the German Research Foundation (DFG).
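The synonym handling mentioned above can be illustrated with a miniature example (hypothetical two-entry data set; the real project relies on complete global name lists): every known name variant, accepted or historical, maps to the currently accepted species name, so a search for one name also retrieves texts that use the other.

```python
import re

# Hypothetical miniature synonym table. Papilio brassicae is Linnaeus'
# original combination for the large white, today Pieris brassicae.
ACCEPTED_NAME = {
    "Pieris brassicae": "Pieris brassicae",   # currently accepted name
    "Papilio brassicae": "Pieris brassicae",  # historical synonym
}

def find_taxa(text):
    """Return the accepted names of all known taxon names occurring in text."""
    hits = {accepted for name, accepted in ACCEPTED_NAME.items()
            if re.search(r"\b" + re.escape(name) + r"\b", text)}
    return sorted(hits)
```

A historical source mentioning "Papilio brassicae" is thus found by a modern query for Pieris brassicae, which is the behaviour the text-mining tools need at corpus scale.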
Random graph models, originally conceived to study the structure of networks and the emergence of their properties, have become an indispensable tool for experimental algorithmics. Amongst them, hyperbolic random graphs form a well-accepted family, yielding realistic complex networks while being both mathematically and algorithmically tractable. We introduce two generators, MemGen and HyperGen, for the G_{alpha,C}(n) model, which distributes n random points within a hyperbolic plane and produces m = n*d/2 undirected edges between all point pairs close to each other; the expected average degree d and the exponent 2*alpha+1 of the power-law degree distribution are controlled by alpha > 1/2 and C. Both algorithms emit a stream of edges which they do not have to store. MemGen keeps O(n) items in internal memory and has a time complexity of O(n*log(log n) + m), which is optimal for networks with an average degree of d = Omega(log(log n)). For realistic values of d = o(n / log^{1/alpha}(n)), HyperGen reduces the memory footprint to O([n^{1-alpha}*d^alpha + log(n)]*log(n)). In an experimental evaluation, we compare HyperGen with four generators, among which it is consistently the fastest: for a small average degree of d=10 we measure a speed-up of 4.0 over the fastest publicly available generator, increasing to 29.6 for d=1000. On commodity hardware, HyperGen produces 3.7e8 edges per second for graphs with 1e6 < m < 1e12 and alpha=1, utilising less than 600 MB of RAM. We demonstrate nearly linear scalability on an Intel Xeon Phi.
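For reference, the underlying model can be sketched with a naive quadratic-time generator. This is an illustrative baseline only: MemGen and HyperGen are streaming generators with far better time and memory behaviour, and the exact parametrisation of G_{alpha,C}(n) may differ in detail from this sketch.

```python
import math
import random

def naive_hyperbolic_graph(n, alpha, R, seed=0):
    """Quadratic-time reference generator for hyperbolic random graphs.
    n points are sampled in a hyperbolic disk of radius R with radial
    density proportional to sinh(alpha*r); two points are joined when
    their hyperbolic distance is at most R, which yields a power-law
    degree distribution with exponent 2*alpha + 1."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        # inverse-CDF sampling of the radial coordinate
        r = math.acosh(1.0 + rng.random() * (math.cosh(alpha * R) - 1.0)) / alpha
        pts.append((r, theta))
    cosh_R = math.cosh(R)
    edges = []
    for i in range(n):
        r1, t1 = pts[i]
        for j in range(i + 1, n):
            r2, t2 = pts[j]
            dtheta = math.pi - abs(math.pi - abs(t1 - t2))  # angular distance
            cosh_d = (math.cosh(r1) * math.cosh(r2)
                      - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
            if cosh_d <= cosh_R:        # hyperbolic distance d <= R
                edges.append((i, j))
    return edges
```

The all-pairs distance test makes this O(n^2); the point of the streaming generators above is to emit the same kind of edge stream without materialising either the point set comparisons or the edge list in memory.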