The broad class of U(N) and SU(N) Polyakov loop models on the lattice is solved exactly in the combined large-N, large-Nf limit, where N is the number of colors and Nf the number of quark flavors, in any dimension. In this ’t Hooft-Veneziano limit the ratio N/Nf is kept fixed. We calculate both the free energy and various correlation functions. The critical behavior of the models is described in detail at finite temperature and non-zero baryon chemical potential. Furthermore, we prove that the calculation of the N-point (baryon) correlation function reduces to the geometric median problem in the confinement phase. In the deconfinement phase we establish the existence of complex masses and an oscillating decay of correlations in a certain region of parameters.
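The geometric median mentioned above has no closed form but is efficiently computable; a minimal sketch of Weiszfeld's fixed-point iteration (the input points are illustrative, not actual baryon positions on a lattice):

```python
import math

def geometric_median(points, tol=1e-9, max_iter=1000):
    """Weiszfeld's fixed-point iteration for the geometric median:
    the point minimizing the sum of Euclidean distances to `points`."""
    # start from the centroid
    x = [sum(p[i] for p in points) / len(points) for i in range(len(points[0]))]
    for _ in range(max_iter):
        num = [0.0] * len(x)
        den = 0.0
        for p in points:
            d = math.dist(p, x)
            if d < tol:          # iterate landed exactly on a data point
                return list(p)
            w = 1.0 / d
            den += w
            for i in range(len(x)):
                num[i] += w * p[i]
        x_new = [n / den for n in num]
        if math.dist(x_new, x) < tol:
            return x_new
        x = x_new
    return x

# four points at the corners of a square: the median is the centre
print(geometric_median([(0, 0), (2, 0), (0, 2), (2, 2)]))  # [1.0, 1.0]
```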
Recently, an approximate SU(4) chiral spin-flavour symmetry was observed in multiplet patterns of QCD meson correlation functions, in a temperature range above the chiral crossover. This symmetry is larger than the chiral symmetry of massless QCD, and can only arise effectively when colour-electric quark-gluon interactions dynamically dominate the quantum effective action. At temperatures about three times the crossover temperature, these patterns disappear again, indicating the screening of colour-electric interactions, and the expected chiral symmetry is recovered. In this contribution we collect independent evidence for such an intermediate temperature range, based on screening masses and the pion spectral function. Both kinds of observables behave non-perturbatively in this window, with resonance-like peaks for the pion and its first excitation disappearing gradually with temperature. Using symmetry arguments and the known behaviour of screening masses at small densities, we discuss how this chiral spin symmetric band continues into the QCD phase diagram.
We study the high-temperature transition in pure SU(3) gauge theory and in full QCD with 3D convolutional neural networks trained as parts of either unsupervised or semi-supervised learning problems. Pure gauge configurations are obtained with the MILC public code, and full QCD configurations come from simulations of Nf=2+1+1 Wilson fermions at maximal twist. We discuss the capability of the different approaches to identify the phases using Polyakov loop configurations as input. To better expose fluctuations, a standardized version of the Polyakov loop is also considered.
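The standardization of Polyakov loops can plausibly be read as a z-score transformation; a minimal sketch under that assumption (the loop values below are synthetic):

```python
import math, random

def standardize(values):
    """Z-score standardization: subtract the mean and divide by the
    standard deviation, exposing fluctuations around the mean."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

random.seed(0)
# synthetic per-site Polyakov loop values, not real lattice data
loops = [0.3 + 0.05 * random.gauss(0, 1) for _ in range(1000)]
z = standardize(loops)
m = sum(z) / len(z)
v = sum(t * t for t in z) / len(z)
print(abs(m) < 1e-9, abs(v - 1.0) < 1e-9)  # True True
```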
According to perturbation theory predictions, QCD matter in the zero-temperature, high-density limit of QCD at nonzero isospin chemical potential is expected to be in a superfluid Bardeen-Cooper-Schrieffer (BCS) phase of u and d¯ Cooper pairs. It is also expected, on symmetry grounds, that this phase connects via an analytical crossover to the phase with Bose-Einstein condensation (BEC) of charged pions at μI≥mπ/2. With lattice results showing some indications that the deconfinement crossover also smoothly penetrates the BEC phase, it has been conjectured that the former connects continuously to the BEC-BCS crossover. We compute the spectrum of the Dirac operator and use generalized Banks-Casher relations to test this conjecture and to identify signatures of the superfluid BCS phase.
In this joint contribution we announce the formation of the "OPEN LATtice initiative", this https URL, to study Stabilised Wilson Fermions (SWF). They are a new avenue for QCD calculations with Wilson-type fermions, and we report results from our continued study of this framework: tuning the clover improvement coefficient and extending the reach of lattice spacings to a=0.12 fm. We fix the flavor symmetric point mπ=mK=412 MeV at a=0.055,0.064,0.077,0.094,0.12 fm and define the trajectories to the physical point by fixing the trace of the quark mass matrix. Currently our pion mass range extends down to mπ∼200 MeV. We outline our tuning goals and strategy as well as our planned future ensembles. First scaling studies are performed on fπ and mπ. Additionally, results of a preliminary continuum extrapolation of mN at the flavor symmetric point are presented. Going further, a first determination of the chiral dependence of the light and strange hadron spectrum is shown, which serves to check the quality of the action for precision measurements. We also investigate other quantities, such as flowed gauge observables, to study how the continuum limit is approached. Taken together, we observe that the SWF enable us to perform stable lattice simulations across a large range of parameters in mass, volume and lattice spacing. Pooling resources in our new initiative has made the reported progress possible, and through it we will share the generated gauge ensembles under an open science philosophy.
Effective three-dimensional Polyakov loop theories derived from QCD by strong coupling and hopping expansions are valid for heavy quarks and can also be applied to finite chemical potential μ, due to their considerably milder sign problem. We apply the Monte-Carlo method to the Nf=1,2 effective theories up to O(κ4) in the hopping parameter at μ=0 to determine the critical quark mass, at which the first-order deconfinement phase transition terminates. The critical end point obtained from the effective theory to order O(κ2) agrees well with 4-dimensional QCD simulations with a hopping expanded determinant by the WHOT-QCD collaboration. We also compare with full QCD simulations and thus obtain a measure for the validity of both the strong coupling and the hopping expansion in this regime.
The global center symmetry of quenched QCD at zero baryonic chemical potential is broken spontaneously at a critical temperature Tc leading to a first-order phase transition. Including heavy dynamical quarks breaks the center symmetry explicitly and weakens the first-order phase transition for decreasing quark masses until it turns into a smooth crossover at a Z(2)-critical point. We investigate the Z(2)-critical quark mass value towards the continuum limit for Nf=2 flavors using lattice QCD in the staggered formulation. As part of a continued study, we present results from Monte-Carlo simulations on Nτ=8,10 lattices. Several aspect ratios and quark mass values were simulated in order to obtain the critical mass from a fit of the Polyakov loop to a kurtosis finite size scaling formula. Moreover, the possibility to develop a Ginzburg-Landau effective theory around the Z(2)-critical point is explored.
The so-called Columbia plot summarises the order of the QCD thermal transition as a function of the number of quark flavours and their masses. Recently, it was demonstrated that the first-order chiral transition region, as seen for Nf∈[3,6] on coarse lattices, exhibits tricritical scaling while extrapolating to zero on sufficiently fine lattices. Here we extend these studies to imaginary baryon chemical potential. A similar shrinking of the first-order region is observed with decreasing lattice spacing, which again appears compatible with a tricritical extrapolation to zero.
The order of the chiral phase transition of lattice QCD with unimproved staggered fermions is known to depend on the number of quark flavours, their masses and the lattice spacing. Previous studies in the literature for Nf∈{3,4} show first-order transitions, which weaken with decreasing lattice spacing. Here we investigate what happens when lattices are made coarser to establish contact to the strong coupling region. For Nf∈{4,8} we find a drastic weakening of the transition when going from Nτ=4 to Nτ=2, which is consistent with a second-order chiral transition reported in the literature for Nf=4 in the strong coupling limit. This implies a non-monotonic behaviour of the critical quark or pseudo-scalar meson mass, which separates first-order transitions from crossover behaviour, as a function of lattice spacing.
For the exploration of the QCD phase diagram, effective Polyakov loop theories derived from lattice QCD provide a valuable tool in the heavy quark mass regime. In practice, the evaluation of these theories is complicated by the appearance of long-range and multipoint interaction terms. On the other hand, it is well known that for theories with such kinds of interactions mean-field approximations can be expected to yield reliable results. Here, we apply this framework to the critical endpoint of the deconfinement transition and compare the results to the literature. This treatment can also be used to investigate the phase diagram at non-zero baryon and isospin chemical potential.
In the strong coupling and heavy quark mass regime, lattice QCD dimensionally reduces to effective theories of Polyakov loops depending on the parameters of the original Wilson action β,κ and Nτ. We apply coarse graining techniques to such theories in 1d and 2d, corresponding to lattice QCD at finite temperature and non-zero chemical potential in 1+1d and 2+1d, respectively. In 1d the method is applied to the effective theories up to O(κ4). Using the transfer matrix, the recursion relations are solved analytically. The thermodynamic limit is taken for some observables. Afterwards, continuum extrapolation is performed numerically and results are discussed. In 2d the coarse graining method is applied in the pure gauge and static quark limit. Running couplings are obtained and the fixed points of the transformations are discussed. Finally, the critical coupling of the deconfinement transition is determined in both limits. Agreement to about 12% with Monte Carlo results of 2+1d Yang-Mills theory from the literature is observed.
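The 1d coarse-graining step can be illustrated with a two-state toy model: decimating every second site squares the transfer matrix, and the free energy per site, obtained from the largest eigenvalue, is invariant under the transformation. The matrix entries below are illustrative, not the actual O(κ4) effective couplings:

```python
import math

def matmul2(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def largest_eigenvalue2(m):
    """Largest eigenvalue of a 2x2 matrix via trace and determinant."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return tr / 2 + math.sqrt(tr * tr / 4 - det)

# toy two-state transfer matrix (couplings illustrative)
T = [[math.e, 1.0], [1.0, math.e]]

# one decimation step: integrate out every second site, T -> T^2
T2 = matmul2(T, T)

# free energy per site is invariant: log lmax(T^2) / 2 == log lmax(T)
f  = math.log(largest_eigenvalue2(T))
f2 = math.log(largest_eigenvalue2(T2)) / 2
print(abs(f - f2) < 1e-12)  # True
```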
Approaching the continuum limit of the deconfinement critical point for Nf=2 staggered fermions
(2022)
Quenched QCD at zero baryonic chemical potential undergoes a first-order deconfinement phase transition at a critical temperature Tc, which is related to the spontaneous breaking of the global center symmetry. The center symmetry is broken explicitly by including dynamical quarks, which weaken the first-order phase transition for decreasing quark masses. At a certain critical quark mass, which corresponds to the Z(2)-critical point, the first-order phase transition turns into a smooth crossover. We investigate the Z(2)-critical quark mass for Nf=2 staggered fermions on Nτ=8,10 lattices, where larger Nτ correspond to finer lattices. Monte-Carlo simulations are performed for several quark mass values and aspect ratios in order to extrapolate to the thermodynamic limit. We present final results for Nτ=8 and preliminary results for Nτ=10 for the critical mass, which are obtained from fitting to a kurtosis finite size scaling formula of the absolute value of the Polyakov loop.
Phase transitions in a non-perturbative regime can be studied by ab initio Lattice Field Theory methods. The status and future research directions for LFT investigations of Quantum Chromo-Dynamics under extreme conditions are reviewed, including properties of hadrons and of the hypothesized QCD axion as inferred from QCD topology in different phases. We discuss phase transitions in strong interactions in an extended parameter space, and the possibility of model building for Dark Matter and Electro-Weak Symmetry Breaking. Methodological challenges are addressed as well, including new developments in Artificial Intelligence geared towards the identification of different phases and transitions.
Quenched QCD at zero baryonic chemical potential undergoes a first-order deconfinement phase transition at a critical temperature Tc, which is related to the spontaneous breaking of the global center symmetry. Including heavy, dynamical quarks breaks the center symmetry explicitly and weakens the first-order phase transition. For decreasing quark masses the first-order phase transition turns into a smooth crossover at a Z2-critical point. The critical quark mass corresponding to this point has been examined with Nf=2 Wilson fermions for several Nτ in a recent study within our group. For comparison, we also locate the critical point with Nf=2 staggered fermions on Nτ=8 lattices. For this purpose we perform Monte Carlo simulations for several quark mass values and various aspect ratios in order to extrapolate to the thermodynamic limit. The critical mass is obtained by fitting to a finite size scaling formula of the kurtosis of the Polyakov loop. Our results indicate large discretization effects, requiring simulations on lattices with Nτ>8.
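The kurtosis entering such finite size scaling fits is the Binder-type ratio B4 = <(x-<x>)^4>/<(x-<x>)^2>^2, which takes characteristic values on the first-order, crossover and Z(2)-critical sides. A minimal estimator on synthetic samples (real input would be per-configuration values of |Polyakov loop|):

```python
import random

def kurtosis(samples):
    """B4 = <(x - <x>)^4> / <(x - <x>)^2>^2 for a list of samples."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    return m4 / m2 ** 2

random.seed(1)
# Gaussian fluctuations (crossover side) give B4 close to 3
gaussian = [random.gauss(0.0, 1.0) for _ in range(100000)]
print(round(kurtosis(gaussian), 1))  # ≈ 3
```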
In an ideal world, the extraction of machine-readable data and knowledge from natural-language biodiversity literature would be done automatically, but this is not currently possible. The BIOfid project has developed tools that can help with important parts of this highly demanding task, while certain parts of the workflow cannot be automated yet. BIOfid focuses on the 20th-century legacy literature, a large part of which is only available in printed form. In this workshop, we will present the current state of the art in mobilising data from our corpus, as well as some of the challenges ahead of us. Together with the participants, we will exercise or explain the following tasks (some of which can be performed by the participants themselves, while other tasks currently require execution by our specialists with special equipment): preparation of text files as input; pre-processing with TextImager/TextAnnotator; semi-automated annotation and linking of named entities; generation of output in various formats; and evaluation of the output. The workshop will also provide an outlook on further developments regarding the extraction of statements from natural-language literature, with the long-term aim of producing machine-readable data from literature that can extend biodiversity databases and knowledge graphs.
The archaeological data dealt with in our database solution Antike Fundmünzen in Europa (AFE), which records finds of ancient coins, is entered by humans. Based on the Linked Open Data (LOD) approach, we link our data to Nomisma.org concepts, as well as to other resources like Online Coins of the Roman Empire (OCRE). Since information such as denomination, material, etc. is recorded for each single coin, this information should be identical for coins of the same type. Unfortunately, this is not always the case, mostly due to human error. Based on rules that we implemented, we were able to use this redundant information to detect possible errors within AFE, and were even able to correct errors in Nomisma.org. However, this approach had the weakness that it was necessary to transform the data into an internal data model. In a second step, we therefore developed our rules within the Linked Open Data world. The rules can now be applied to datasets following the Nomisma.org modelling approach, as we demonstrated with data held by Corpus Nummorum Thracorum (CNT). We believe that the use of methods like this to increase the data quality of individual databases, as well as across different data sources and up to the higher levels of OCRE and Nomisma.org, is mandatory in order to increase trust in them.
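The underlying rule idea, that per-coin attributes must agree across all coins of the same type, can be sketched as a simple consistency check; the field names and type identifier below are hypothetical, not the actual AFE or Nomisma.org schema:

```python
from collections import defaultdict

def find_inconsistencies(coins, fields=("denomination", "material")):
    """Group coin records by type id and flag fields whose recorded
    values disagree within a type (likely human entry errors)."""
    by_type = defaultdict(list)
    for coin in coins:
        by_type[coin["type_id"]].append(coin)
    errors = []
    for type_id, group in by_type.items():
        for field in fields:
            values = {c[field] for c in group}
            if len(values) > 1:
                errors.append((type_id, field, sorted(values)))
    return errors

# hypothetical records: two coins of the same type with conflicting material
coins = [
    {"type_id": "type-001", "denomination": "denarius", "material": "silver"},
    {"type_id": "type-001", "denomination": "denarius", "material": "gold"},
]
print(find_inconsistencies(coins))  # [('type-001', 'material', ['gold', 'silver'])]
```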
OPEN SCIENCE, VERSION 3.0: Breaking down barriers for equitable and efficient research communication
(2022)
The recognition of pharmacological substances, compounds and proteins is an essential preliminary work for the recognition of relations between chemicals and other biomedically relevant units. In this paper, we describe an approach to Task 1 of the PharmaCoNER Challenge, which involves the recognition of mentions of chemicals and drugs in Spanish medical texts. We train a state-of-the-art BiLSTM-CRF sequence tagger with stacked Pooled Contextualized Embeddings, word and sub-word embeddings using the open-source framework FLAIR. We present a new corpus composed of articles and papers from Spanish health science journals, termed the Spanish Health Corpus, and use it to train domain-specific embeddings which we incorporate in our model training. We achieve a result of 89.76% F1-score using pre-trained embeddings and are able to improve these results to 90.52% F1-score using specialized embeddings.
Despite the great importance of the Latin language in the past, there are relatively few resources available today to develop modern NLP tools for this language. Therefore, the EvaLatin Shared Task for Lemmatization and Part-of-Speech (POS) tagging was published in the LT4HALA workshop. In our work, we dealt with the second EvaLatin task, that is, POS tagging. Since most of the available Latin word embeddings were trained on either few or inaccurate data, we trained several embeddings on better data in the first step. Based on these embeddings, we trained several state-of-the-art taggers and used them as input for an ensemble classifier called LSTMVoter. We were able to achieve the best results for both the cross-genre and the cross-time task (90.64% and 87.00%) without using additional annotated data (closed modality). In the meantime, we further improved the system and achieved even better results (96.91% on classical, 90.87% on cross-genre and 87.35% on cross-time).
The annotation of texts and other material in the field of digital humanities and Natural Language Processing (NLP) is a common task of research projects. At the same time, the annotation of corpora is certainly the most time- and cost-intensive component in research projects and often requires a high level of expertise according to the research interest. However, for the annotation of texts, a wide range of tools is available, both for automatic and manual annotation. Since the automatic pre-processing methods are not error-free and there is an increasing demand for the generation of training data, also with regard to machine learning, suitable annotation tools are required. This paper defines criteria of flexibility and efficiency of complex annotations for the assessment of existing annotation tools. To extend this list of tools, the paper describes TextAnnotator, a browser-based, multi-annotation system, which has been developed to perform platform-independent multimodal annotations and annotate complex textual structures. The paper illustrates the current state of development of TextAnnotator and demonstrates its ability to evaluate annotation quality (inter-annotator agreement) at runtime. In addition, it will be shown how annotations of different users can be performed simultaneously and collaboratively on the same document from different platforms using UIMA as the basis for annotation.
The Specialized Information Service Biodiversity Research (BIOfid) has been launched to mobilize valuable biological data from printed literature that has accumulated in German libraries over the past 250 years. In this project, we annotate German texts converted by OCR from historical scientific literature on the biodiversity of plants, birds, moths and butterflies. Our work enables the automatic extraction of biological information previously buried in the mass of papers and volumes. For this purpose, we generated training data for the tasks of Named Entity Recognition (NER) and Taxa Recognition (TR) in biological documents. We use this data to train a number of leading machine learning tools and create a gold standard for TR in biodiversity literature. More specifically, we perform a practical analysis of our newly generated BIOfid dataset through various downstream-task evaluations and establish a new state of the art for TR with 80.23% F-score. In this sense, our paper lays the foundations for future work in the field of information extraction in biology texts.
Challenges of FAIR phase 0
(2018)
After a two-year shutdown, the GSI accelerators, together with the latest addition of the storage ring CRYRING, will be back in operation in 2018 as FAIR phase 0, with the goal of fulfilling the needs of the scientific community and of FAIR accelerator and detector development. Even though GSI is well known for operating a variety of ion beams ranging from protons up to uranium for multiple research areas such as nuclear physics, astrophysics, biophysics and material science, the upcoming beam time faces a number of challenges: re-commissioning the existing circular accelerators with a brand-new control system and upgraded beam instrumentation, as well as rising failures of dated components and systems. The cycling synchrotron SIS18 has been undergoing a set of upgrade measures for future FAIR operation, many of which will also be commissioned during the upcoming beam time. This paper presents the highlights of these challenges, such as re-establishing high-intensity heavy-ion operation and the parallel operation mode for serving multiple users. The status of preparation, including commissioning results, is also reported.
The GSI High Energy Beam Transfer lines (HEST) link the SIS18 synchrotron with two storage rings (the Experimental Storage Ring and CRYRING) and six experimental caves. Recent upgrades to the HEST beam instrumentation enable precise measurements of beam properties along the lines and allow for faster and more precise beam setup on targets. Preliminary results of some of the measurements performed during the 2018 and 2019 runs are presented here. The focus is on response matrix measurements and quadrupole scans performed on the HADES beam line. Errors and future improvements are discussed.
An automated beam-setting optimization application has been implemented on top of FAIR’s control system software stack based on CERN’s LSA framework. The optimization functionality is built using the Jenetics software library implemented in Java. Tests of the software with beam have been performed at the CRYRING@ESR ion storage ring.
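A genetic optimization loop of the kind provided by Jenetics can be sketched as follows; this is a Python stand-in, not the actual Java/LSA implementation, and the beam-quality objective is a mock function:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60, seed=42):
    """Minimal real-valued genetic algorithm: elitist selection, blend
    crossover and Gaussian mutation; minimizes `fitness` within `bounds`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        # keep the better half, refill with mutated averages of elite pairs
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            child = [min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# mock objective: minimal beam loss at magnet settings (1.0, -0.5)
loss = lambda s: (s[0] - 1.0) ** 2 + (s[1] + 0.5) ** 2
best = genetic_optimize(loss, bounds=[(-2, 2), (-2, 2)])
print(best)  # close to [1.0, -0.5]
```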
The present study focuses on optimizing the beam line from the heavy-ion synchrotron SIS18 to the HADES experiment. BOBYQA (Bound Optimization BY Quadratic Approximation) solves bound-constrained optimization problems without using derivatives of the objective function. Bayesian optimization is another strategy for the global optimization of costly, noisy functions without derivatives. A Python programming interface to MADX allows the use of the Python implementations of BOBYQA and the Bayesian method. This made it possible to use tracking simulations with MADX to determine the loss budget for each lattice setting during the optimization and to compare both optimization methods.
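Derivative-free bound-constrained optimization of the kind BOBYQA performs can be illustrated with a simple compass (pattern) search; this is a conceptual stand-in, not the actual BOBYQA algorithm or its MADX coupling, and the loss function is a mock:

```python
def pattern_search(f, x0, bounds, step=0.5, tol=1e-6):
    """Derivative-free compass search with bound constraints: probe
    each coordinate direction, accept improvements, shrink the step.
    A simple stand-in for BOBYQA, not the Py-BOBYQA package."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] = min(max(x[i] + delta, bounds[i][0]), bounds[i][1])
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2
    return x, fx

# mock beam-loss budget as a function of two quadrupole strengths
loss = lambda k: (k[0] - 0.3) ** 2 + (k[1] - 0.7) ** 2
x, fx = pattern_search(loss, [0.0, 0.0], [(-1, 1), (-1, 1)])
print(round(x[0], 3), round(x[1], 3))  # 0.3 0.7
```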
Due to the massively parallel operation modes at the GSI accelerators, a lot of accelerator setup and re-adjustment has to be done by operators during a beam time. This is typically done manually using potentiometers and is very time-consuming. With the FAIR project the complexity of the accelerator facility increases further, and for efficiency reasons it is advisable to establish a high level of automation for future operation. Modern accelerator control systems allow fast access to both accelerator settings and beam diagnostics data. This provides the opportunity to implement algorithms for the automated adjustment of, e.g., magnet settings to maximize transmission and optimize required beam parameters. The fast-switching magnets in the GSI beam lines are an optimal basis for an automatic exploration of the parameter space. The optimization of the parameters for the SIS18 multi-turn injection using a genetic algorithm has already been simulated*. The first results of our automated online parameter optimization at the CRYRING@ESR injector are presented here.
Over the last 20 years, modern heuristic algorithms and machine learning have been increasingly used for several purposes in accelerator technology and physics. Since computing power has become less and less of a limiting factor, these tools have become part of the physics community's standard toolkit [1-5]. This paper describes the construction of an algorithm that generates an optimised lattice design for transfer lines under the restrictions that usually limit design options in practice. The developed algorithm has been applied to the existing SIS18-to-HADES transfer line at GSI.
We empirically investigate algorithms for solving Connected Components in the external memory model. In particular, we study whether the randomized O(Sort(E)) algorithm by Karger, Klein, and Tarjan can be implemented to compete with practically promising and simpler algorithms having only slightly worse theoretical cost, namely Borůvka’s algorithm and the algorithm by Sibeyn and collaborators. For all algorithms, we develop and test a number of tuning options. Our experiments are executed on a large set of different graph classes including random graphs, grids, geometric graphs, and hyperbolic graphs. Among our findings are: The Sibeyn algorithm is a very strong contender due to its simplicity and due to an added degree of freedom in its internal workings when used in the Connected Components setting. With the right tunings, the Karger-Klein-Tarjan algorithm can be implemented to be competitive in many cases. Higher graph density seems to benefit Karger-Klein-Tarjan relative to Sibeyn. Borůvka’s algorithm is not competitive with the two others.
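Borůvka's round structure, in which every component selects one incident edge and merges along it, can be sketched for the Connected Components setting (in-memory, ignoring the external-memory I/O machinery the paper is about):

```python
class DSU:
    """Disjoint-set union with path halving, for component merging."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def boruvka_components(n, edges):
    """Borůvka-style contraction for Connected Components: in each
    round every component picks one incident inter-component edge and
    merges, so the number of non-trivial components at least halves."""
    dsu, components = DSU(n), n
    while True:
        pick = {}
        for u, v in edges:
            ru, rv = dsu.find(u), dsu.find(v)
            if ru != rv:
                pick.setdefault(ru, (u, v))
                pick.setdefault(rv, (u, v))
        if not pick:
            return components
        for u, v in pick.values():
            if dsu.union(u, v):
                components -= 1

# a path 0-1-2, an edge 3-4, and the isolated vertex 5
print(boruvka_components(6, [(0, 1), (1, 2), (3, 4)]))  # 3
```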
We investigate privacy concerns and the privacy behavior of users of the AR smartphone game Pokémon Go. Pokémon Go accesses several functionalities of the smartphone and, in turn, collects a plethora of data of its users. For assessing the privacy concerns, we conduct an online study in Germany with 683 users of the game. The results indicate that the majority of the active players are concerned about the privacy practices of companies. This result hints towards the existence of a cognitive dissonance, i.e. the privacy paradox. Since this result is common in the privacy literature, we complement the first study with a second one with 199 users, which aims to assess the behavior of users with regard to which measures they undertake for protecting their privacy. The results are highly mixed and dependent on the measure, i.e. relatively many participants use privacy-preserving measures when interacting with their smartphone. This implies that many users know about risks and might take actions to protect their privacy, but deliberately trade-off their information privacy for the utility generated by playing the game.
Privacy and its protection are an important part of the culture in the USA and Europe. The literature in this field lacks empirical data from Japan, so it is difficult, especially for foreign researchers, to understand the situation there. To gain a deeper understanding, we examined the perception of a topic closely related to privacy: the perceived benefits of sharing data, and the willingness to share with respect to the benefits for oneself, others and companies. We found a significant effect of gender on each of the six analysed constructs.
This paper provides an assessment framework for privacy policies of Internet of Things services based on particular GDPR requirements. The objective of the framework is to serve as a supportive tool for users to make informed privacy-related decisions. For example, when buying a new fitness tracker, users could compare different models with respect to privacy friendliness, or with respect to more particular aspects of the framework, such as whether data is passed on to a third party. The framework consists of 16 parameters with one to four yes-or-no questions each, and allows users to bring in their own weights for the different parameters. We assessed 110 devices covered by 94 different policies. Furthermore, we carried out a legal assessment of the parameters to handle the case that a policy makes no statement at all regarding a certain parameter. The results of this comparative study show that most of the examined privacy policies of IoT devices and services are insufficient to address the particular GDPR requirements and beyond. We also found a correlation between the length of a policy and its privacy transparency score.
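The weighted aggregation over parameters can be sketched as follows; the parameter names and scoring convention are illustrative assumptions, not the paper's actual 16 parameters:

```python
def privacy_score(answers, weights=None):
    """Weighted privacy-friendliness score for one policy.
    `answers` maps a parameter name to the fraction of its yes-or-no
    questions answered privacy-friendly (0..1); `weights` lets the
    user emphasize parameters they care about (default: equal)."""
    weights = weights or {k: 1.0 for k in answers}
    total = sum(weights.values())
    return sum(weights[k] * answers[k] for k in answers) / total

# hypothetical parameters, not the framework's real ones
answers = {"third_party_sharing": 0.0, "data_deletion": 1.0, "encryption": 0.5}
print(privacy_score(answers))                                  # 0.5 (equal weights)
print(privacy_score(answers, {"third_party_sharing": 3.0,
                              "data_deletion": 1.0, "encryption": 1.0}))  # 0.3
```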
The Ceboruco is a 2280 m high stratovolcano located in Nayarit State, Mexico. Although its last eruption occurred in 1870, it is the most active volcano in the area, showing volcanic-earthquake activity together with ongoing vapor emissions. The magnetotelluric survey was carried out in November 2016. It was part of a geothermal project (CeMIEGeo-P24) and focused on the determination of the electrical conductivity distribution in the subsurface of the volcano.
The Magnetotelluric Apparent Resistivity Tensor, as introduced by Brown (2016), can be decomposed into an amplitude and a phase tensor. The fundamental physics behind those new tensors were presented in Hering et al. (2019), using canonical models in 1-D (isotropic and anisotropic) and 2-D resistivity environments. Here, the tensors are introduced for a high-quality data set, where their interpretational benefits become very obvious. Additionally, results from an isotropic 3-D inversion are presented and compared to an alternative 3-D anisotropic forward model.
In November 2016, magnetotelluric (MT) data were collected at the Ceboruco Volcano in cooperation with the Centro de Sismología y Volcanología de Occidente (SisVoc, Universidad de Guadalajara, Mexico). The Ceboruco is a 2280 m high stratovolcano, located in Nayarit State, Mexico. It lies in the central part of the Tepic-Zacoalco Rift (TZR), which constitutes the north-western end of the Trans-Mexican Volcanic Belt. Together with Chapala and Colima (in the Jalisco Block), it forms the triple rift system that developed as a consequence of the ongoing subduction of the Rivera and Cocos oceanic plates beneath the North American continental crust. Although its last eruption occurred in 1870, it is the most active volcano in the area, showing volcanic-earthquake activity together with ongoing vapor emissions. The survey was part of a geothermal project (CeMIEGeo-P24) and focused on the determination of electrical conductivity properties to characterize the deep structure and the geothermal potential of the volcano. Frequency-dependent magnetotelluric response functions were calculated from 25 broadband MT stations, which covered an area of 10 x 10 km2 including its crater, calderas and foreland. The results were interpreted using anisotropic 3-D forward modelling and isotropic 3-D inversion approaches, considering strong topographical effects. The final resistivity model implies a highly conductive layer, reaching from near-surface to approximately 2 km depth, which might be related to a hydrothermal system. Here, mineralized fluids and clay minerals can cause high conductivities around 1 S/m. For longer periods, the principal axes of the MT response tensors (phase tensor, apparent resistivity tensor) are in good agreement with the strike direction of the underlying rift system. However, they are not rendered by the isotropic inversion.
Thus the data suggest an anisotropic electrical conductivity at greater depth with its principal axis determined by the response tensors.
Impairment in past tense production, as well as interaction between tense and aspect, has been found in both fluent and non-fluent aphasia (e.g. Dragoy & Bastiaanse, 2013). Inflection has been found to be relatively preserved in semantic dementia (SD) (Thompson et al., 2012). The aims of the present study are a) to compare the morphosyntactic abilities of patients with aphasia and SD in tense and aspect marking and b) to explore the interaction of lexical (+/- telic) and grammatical (perfective/imperfective) aspect in aphasia and SD. A sentence completion task was administered to 30 native speakers of Greek: 10 patients with aphasia (6 anomic, 2 Wernicke and 2 agrammatic), 10 age- and education-matched controls, 5 patients with SD and 5 controls. The material consisted of unergative, unaccusative and transitive verbs (12 of each verb class), and the participants had to apply present (imperfective) and past (perfective) tense. Unergative and unaccusative verbs differ in terms of their aspectual properties, with unergatives being [-telic] and unaccusatives [+telic]. Transitive verbs vary. A principal distinction between the tested conditions was the standard unmarked combination ([+telic] verbs in past perfective and [-telic] verbs in present imperfective) vs. the marked one ([+telic] verbs in present imperfective and [-telic] verbs in past perfective). Both control groups performed at ceiling in all conditions. Aphasic participants were significantly more impaired than the control group in all conditions. SD participants were significantly more impaired than the controls only in the production of present tense (M-W U= 1.5, p= 0.024).
There was no difference between past perfective and present imperfective for either group, but there was an interaction between verb class and tense for the aphasic participants: performance on unaccusative verbs in past perfective (unmarked condition) was significantly better than on unergatives in past perfective (marked condition) (Z=2.512, p=0.012), while performance on unaccusatives in present imperfective (marked condition) was significantly worse than on unergatives in present imperfective (unmarked condition) (Z=2.680, p=0.004). In sum, aphasic participants performed significantly better in the unmarked than in the marked conditions. Such an interaction was not found for the SD group. Aphasic participants performed significantly worse than the SD subjects in past perfective tense overall (M-W U= 7.5, p=0.029), and the difference was significant only for unaccusative verbs (M-W U= 6.5, p=0.021), although both groups performed very well in this condition. There was no difference in the present tense, either for each verb class separately or for the total score. A general past tense deficit cannot be upheld for either group. Rather, SD participants appear relatively impaired in producing present tense. We argue for slight morphosyntactic impairment in SD, although with a different underlying cause than in aphasia. Moreover, our data suggest an effect of aspectual markedness in aphasia but not in SD. We discuss this finding in the light of the different neuropathology of the two populations.
We study simulated animats in the form of wheeled robots with the simplest possible neural controller – a single neuron per actuator. The system is fully self-organized in the sense that the controlling neuron receives only the current angle of the wheel as input. Non-trivial locomotion results in structured environments, with the robot determining autonomously the direction of movement (time-reversal symmetry is spontaneously broken). Our controller, which mimics the mechanism used to transmit power in steam locomotives, abstracts from the body plan of the animat, working without problems also in the presence of noise and for chains of individual two-wheeled cars. Being fully compliant, our controller may also be used, in the spirit of morphological computation, as a basic unit for higher-level evolutionary algorithms.
The scientific innovation process embraces the steps from problem definition through the development and evaluation of innovative solutions to their successful exploitation. The challenges imposed by this process can be answered by the creation of a powerful and flexible next-generation e-Science infrastructure, which exploits leading-edge information and knowledge technologies and enables comprehensive and intelligent support of this process. This paper describes our vision of a knowledge-based e-Science infrastructure, which is based on the results of an in-depth study of researchers' requirements. Furthermore, it introduces the Fraunhofer e-Science Cockpit as a first implementation of our vision.
The correspondence between the terminology used for querying and the terminology used in the content objects to be retrieved is a crucial prerequisite for effective retrieval technology. However, as terminology evolves over time, a growing gap opens up between older documents in (long-term) archives and the active language used for querying such archives. Thus, technologies for detecting and systematically handling terminology evolution are required to ensure "semantic" accessibility of (Web) archive content in the long run. As a starting point for dealing with terminology evolution, this paper formalizes the problem and discusses issues, first ideas and relevant technologies.
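As a toy illustration of why handling terminology evolution matters for retrieval, a query over an archive could be expanded with historical variants of its terms. The mapping and function below are hand-made assumptions for demonstration only; the paper's concern is precisely how such mappings could be detected systematically rather than curated by hand.

```python
# Hypothetical table of terminology evolution: a modern query term mapped to
# older variants that appear in archived documents.  In a real system this
# mapping would be mined automatically, e.g. from diachronic corpora.
TERM_EVOLUTION = {
    "mp3 player": ["walkman", "discman"],
    "laptop": ["portable computer", "notebook computer"],
}

def expand_query(terms):
    """Return the query terms together with their historical variants,
    so that a lexical search also matches older archive language."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(TERM_EVOLUTION.get(term, []))
    return expanded
```

A query for "laptop" would then also retrieve documents that only use the older phrase "portable computer".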
Web archives created by the Internet Archive (IA) (https://archive.org), national libraries and other archiving services contain large amounts of information collected for a time period of over twenty years. These archives constitute a valuable source for research in many disciplines, including the digital humanities and the historical sciences by offering a unique possibility to look into past events and their representation on the Web.
Most Web archive services aim to capture the entire Web (IA) or national top-level domains and are therefore broad in their scope, diverse regarding the topics they contain and the time intervals they cover. Due to their large size and broad scope, it is difficult for interested researchers to locate relevant information in the archives, as search facilities are very limited. Many users are more interested in studying smaller and topically coherent event-centric collections of documents contained in a Web archive [1,2]. Such collections can reflect specific events such as elections or natural disasters, e.g. the German federal elections or the Fukushima nuclear disaster (2011).
The Specialised Information Service Performing Arts (SIS PA) is part of a funding programme by the German Research Foundation that enables libraries to develop tailor-made services for individual disciplines in order to provide researchers with direct access to relevant materials and resources from their field. For the field of performing arts, the SIS PA aggregates metadata about theater and dance resources from, currently, mostly German-speaking cultural heritage institutions in a VuFind-based search portal.
In this article, we focus on metadata quality and its impact on the aggregation workflow by describing the different, possibly data provider-specific, process stages of improving data quality in order to achieve a searchable, interlinked knowledge base. We also describe lessons learned and limitations of the process.
[Conference report] Making finance sustainable: Ten years equator principles – success or letdown?
(2013)
In 2003, a number of banks adopted the Equator Principles (EPs), a voluntary code of conduct based on the International Finance Corporation's (IFC) performance standards, to ensure the ecological and social sustainability of project finance. These so-called Equator Principles Financial Institutions (EPFIs) commit to requiring their borrowers to adopt sustainable management plans for the environmental and social risks associated with their projects. The Principles apply to the project finance business segment of the banks and cover projects with a total cost of US $10 million or more. While developing countries long relied on the World Bank and other public assistance to finance infrastructure projects, there has been a shift in recent years to private funding. NGOs were frustrated by this shift in project finance, as they had spent their resources exerting pressure on public financial institutions to incorporate environmental and social standards into their project finance activities. However, after NGO pressure shifted to private financial institutions, the latter adopted the EPs for fear of reputational risks. NGOs had laid down their own, more ambitious ideas about sustainable finance in the Collevecchio Declaration on Financial Institutions and Sustainability. Legally speaking, the EPs are a self-regulatory soft law instrument. However, they have a hard law dimension, as the Equator Banks require their borrowers to comply with the EPs through covenants in the loan contracts that may trigger a default in case of violation. ...
This paper is a contribution to exploring and analyzing space-improvements in concurrent programming languages, in particular in the functional process-calculus CHF. Space-improvements are defined as a generalization of the corresponding notion in deterministic pure functional languages. The main part of the paper is the O(n · log n) algorithm SPOPTN for offline space optimization of several parallel independent processes. Applications of this algorithm are: (i) affirmation of space improving transformations for particular classes of program transformations; (ii) support of an interpreter-based method for refuting space-improvements; and (iii) as a stand-alone offline-optimizer for space (or similar resources) of parallel processes.
Augmented reality (AR) gained much public attention since the success of Pokémon Go in 2016. Technology companies like Apple or Google are currently focusing primarily on mobile AR (MAR) technologies, i.e. applications on mobile devices, like smartphones or tablets. Associated privacy issues have to be investigated early to foster market adoption. This is especially relevant since past research found several threats associated with the use of smartphone applications. Thus, we investigate two of the main privacy risks for MAR application users based on a sample of 19 of the most downloaded MAR applications for Android. First, we assess threats arising from bad privacy policies based on a machine-learning approach. Second, we investigate which smartphone data resources are accessed by the MAR applications. Third, we combine both approaches to evaluate whether privacy policies cover certain data accesses or not. We provide theoretical and practical implications and recommendations based on our results.
This publication documents a graduate student symposium of the same name held in July 2017. The goal of the symposium was to discuss problems in current film culture, with a focus on filmic heritage and innovative projects in the field of film education. Think Film! is a compilation of most of the talks given at the symposium. As conferences and workshops quite often go undocumented, the goal of the publication is, on the one hand, to preserve the results for other scholars and make them accessible to the general public, and, on the other hand, to give the panelists a designated space to present their research.
The authors discuss questions of film heritage and digitization, funding, film festivals, film museums and local film culture with a focus on the conditions in Germany, Czech Republic and India as well as relate their findings to the changes film and media studies have undergone in recent years.
We discuss the diffusion currents occurring in a dilute system and show that the charge currents depend not only on gradients in the corresponding charge density, but also on the other conserved charges in the system; the diffusion currents are therefore coupled. Gradients in one charge thus generate dissipative currents in a different charge. In this approach, we model the Navier-Stokes term of the generated currents with a diffusion coefficient matrix, in which the diagonal entries are the usual diffusion coefficients and the off-diagonal entries correspond to the coupling of different diffusion currents. We evaluate the complete diffusion matrix for a specific hadron gas and for a simplified quark-gluon gas, including baryon, electric and strangeness charge. We find that the off-diagonal entries can be of the same order of magnitude as the diagonal ones.
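The coupled Navier-Stokes ansatz described above can be written compactly. The notation below (conserved charges q, q' ∈ {B, Q, S} driven by gradients of the thermal potentials μ/T) follows a common convention and is assumed here, not quoted from the contribution:

```latex
% Coupled diffusion currents: each conserved charge q responds to gradients
% of all thermal potentials \mu_{q'}/T via a diffusion coefficient matrix
% \kappa_{qq'}; diagonal entries are the usual diffusion coefficients, the
% off-diagonal entries couple the currents of different charges.
\vec{j}_q \;=\; \sum_{q' \in \{B,\,Q,\,S\}} \kappa_{qq'}\,
                \vec{\nabla}\,\frac{\mu_{q'}}{T}
```

In this form, a gradient in, say, strangeness (q' = S) generates a baryon diffusion current whenever the off-diagonal entry κ_BS is non-zero, which is the coupling effect the abstract emphasizes.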
The changing shape of the rapidity spectrum of net protons over the SPS energy range is still lacking theoretical understanding. In this work, a model for string excitation and string fragmentation is implemented for the description of high energy collisions within a hadronic transport approach. The free parameters of the string model are tuned to reproduce the experimentally measured particle production in proton-proton collisions. With the fixed parameters we advance to calculations for heavy ion collisions, where the shape of the proton rapidity spectrum changes from a single peak to a double peak structure with increasing beam energy in the experiment. We present calculations of proton rapidity spectra at different SPS energies in heavy ion collisions. Qualitatively, a good agreement with the experimental findings is obtained. In a future work, the formation process of string fragments will be studied in detail aiming to quantitatively reproduce the measurement.
Einstein’s theory of general relativity is often regarded as the best theory of gravity that we know. Yet, this theory often manifests itself under conditions where no symmetry is present and nonlinear dynamics dominates. I will discuss how these conditions are systematically accompanied by the restoration of some degree of symmetry. Hence, despite gravity appearing often under conditions devoid of symmetry, asymptotic solutions tend to restore symmetry.
The huge neutron fluxes at research reactors offer the possibility to produce isotopes of interest, which can be investigated afterwards. An example is the measurement of the half-lives of long-lived isotopes like 129I. A direct use of reactor neutrons in the astrophysical energy regime is only possible if the corresponding ions are not at rest in the laboratory frame. The combination of an ion storage ring with a reactor and a neutron guide could open the path to direct measurements of neutron-induced cross sections on short-lived radioactive isotopes in the astrophysically interesting energy regime.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
A 3d regional density-driven flow model of a heterogeneous aquifer system at the German North Sea Coast is set up within the joint project NAWAK (“Development of sustainable adaption strategies for the water supply and distribution infrastructure on condition of climatic and demographic change”). The development of the freshwater-saltwater interface is simulated for three climate and demographic scenarios.
Groundwater flow simulations are performed with the finite volume code d3f++ (distributed density driven flow) that has been developed with a view to the modelling of large, complex, strongly density-influenced aquifer systems over long time periods.
I review a number of recent developments in the physics of compact stars containing deconfined quark matter, including (a) their cooling with a possible phase transition from a fully gapped to a gapless phase of QCD at low temperatures and large isospin; (b) the transport coefficients of the 2SC phase and the role played by the Aharonov-Bohm interactions between flux-tubes and unpaired fermions; (c) rapidly rotating compact stars and spin-down and spin-up induced phase transitions between hadronic and QCD matter as well as between different phases of QCD.
The Gribov mode in hot QCD
(2017)
The physics of EPOS
(2013)
We describe two independent frameworks which provide unambiguous determinations of the deconfinement and the decoupling conditions of a relativistic gas at finite temperature. First, we use the Polyakov-Nambu-Jona-Lasinio model to compute meson and baryon masses at finite temperature and determine their melting temperature as a function of their strangeness content. Second, we analyze a simple expanding gas within a Friedmann-Robertson-Walker metric, which admits a well-defined decoupling mechanism. We examine the decoupling time as a function of the particle mass and cross section. We find evidence of an inherent dependence of the hadronization and freeze-out conditions on flavor, and on mass and cross section, respectively.
The High-Acceptance DiElectron Spectrometer (HADES) operates in the 1 - 2A GeV energy regime in fixed target experiments to explore baryon-rich strongly interacting matter in heavy-ion collisions at moderate temperatures with rare and penetrating probes. We present results on the production of strange hadrons below their respective NN threshold energy in Au+Au collisions at 1.23A GeV (√sNN = 2.4 GeV). Special emphasis is put on the enhanced feed-down contribution of ϕ mesons to the inclusive yield of K- and its implication on the measured spectral shape of K-. Furthermore, we investigate global properties of the system, confronting the measured hadron yields and transverse mass spectra with a Statistical Hadronization Model (SHM) and a blastwave parameterization, respectively. These supplement the world data of the chemical and kinetic freeze-out temperatures.
The Projectile Spectator Detector (PSD) of the CBM experiment at the future FAIR facility is a compensating lead-scintillator calorimeter designed to measure the energy distribution of the forward-going projectile nucleons and nuclear fragments (reaction spectators) produced close to the beam rapidity. The detector performance for the centrality and reaction plane determination is reviewed based on Monte-Carlo simulations of gold-gold collisions by means of four different heavy-ion event generators. The PSD energy resolution and the linearity of the response measured at CERN PS for the PSD supermodule consisting of 9 modules are presented. Predictions of the calorimeter radiation conditions at CBM and response measurements of one PSD module equipped with neutron-irradiated MPPCs used for the light readout are discussed.
We study the properties of the survival probability of an unstable quantum state described by a Lee Hamiltonian. This theoretical approach closely resembles Quantum Field Theory (QFT): one can introduce in a rather simple framework the concept of propagator and Feynman rules. Within this context, we re-derive (in a detailed and didactical way) the well-known result according to which the amplitude of the survival probability is the Fourier transform of the energy distribution (or spectral function) of the unstable state (in turn, the energy distribution is proportional to the imaginary part of the propagator of the unstable state). Typically, the survival probability amplitude is the starting point of many studies of non-exponential decays. This work represents a further step toward the evaluation of the survival probability amplitude in genuine relativistic QFT. However, although many similarities exist, QFT presents some differences with respect to the Lee Hamiltonian which should be studied in the future.
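The well-known result quoted above can be stated compactly. The symbols below (survival amplitude a(t), energy distribution d(E), propagator G(E), threshold energy E_th) follow common usage in this context and are assumed, not taken verbatim from the contribution:

```latex
% Survival amplitude as the Fourier transform of the energy distribution
% d(E) of the unstable state |\psi\rangle, with d(E) proportional to the
% imaginary part of the propagator G(E); P(t) is the survival probability.
a(t) \;=\; \langle \psi \,|\, e^{-iHt} \,|\, \psi \rangle
     \;=\; \int_{E_{\rm th}}^{\infty} \! dE \; d(E)\, e^{-iEt},
\qquad
d(E) \;=\; -\frac{1}{\pi}\,\operatorname{Im} G(E + i\varepsilon),
\qquad
P(t) \;=\; |a(t)|^2 .
```

Deviations of d(E) from a pure Breit-Wigner shape, e.g. through the threshold E_th, are what generate the non-exponential decay behaviour that such studies start from.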
Exotic nuclear matter
(2016)
Recent developments in nuclear structure theory for exotic nuclei are addressed. The inclusion of hyperons and nucleon resonances is discussed. Nuclear multipole response functions, hyperon interactions in infinite matter and in neutron stars, and theoretical aspects of excitations of nucleon resonances in nuclei are reviewed.
The thermodynamics of Quantum Chromodynamics (QCD) in external (electro-)magnetic fields shows some unexpected features like inverse magnetic catalysis, which have been revealed mainly through lattice studies. Many effective descriptions, on the other hand, use Landau levels or approximate the system by just the lowest Landau level (LLL). Analyzing lattice configurations we ask whether such a picture is justified. We find the LLL to be separated from the rest by a spectral gap in the two-dimensional Dirac operator and analyze the corresponding LLL signature in four dimensions. We determine to what extent the quark condensate is LLL dominated at strong magnetic fields.
We present an overview on the resonance dynamics within the microscopic parton-hadron-string dynamics (PHSD) approach which incorporates explicit partonic degrees-of-freedom in terms of strongly interacting quasiparticles (quarks and gluons) in line with an equation-of-state from lattice QCD as well as the dynamical hadronization and hadronic collision dynamics in the final reaction phase. We discuss how the vector meson resonances can be used as a probe of the in-medium effects and demonstrate that the low mass dilepton spectra show visible in-medium effects from dynamical vector-meson spectral functions from SIS to SPS energies, whereas at RHIC and LHC energies such medium effects become more moderate. We also show that the intermediate mass spectra are dominated by the radiation from the partonic degrees of freedom at RHIC and LHC energies.
In this contribution we lay down a lattice setup that allows for the nonperturbative study of a field theoretical model where a SU(2) fermion doublet, subjected to non-Abelian gauge interactions, is also coupled to a complex scalar field doublet via a Yukawa and an “irrelevant” Wilson-like term. Using naive fermions in the quenched approximation and based on the renormalized Ward identities induced by purely fermionic chiral transformations, lattice observables are discussed that enable: a) in the Wigner phase, the determination of the critical Yukawa coupling value where the purely fermionic chiral transformations become a symmetry up to lattice artifacts; b) in the Nambu-Goldstone phase of the resulting critical theory, a stringent test of the actual generation of a fermion mass term of non-perturbative origin. A soft twisted fermion mass term is introduced to circumvent the problem of exceptional configurations, and observables are then calculated in the limit of vanishing twisted mass.
An overview is given on the experimental study of physics with relativistic heavy-ion collisions, with emphasis on recent measurements at the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC). The focus here is laid on p–Pb collisions at the LHC and the corresponding d–Au measurements at RHIC. The topics covered are “collectivity and approach to equilibrium”, “high pT and jets”, “heavy flavour and electroweak bosons” and “search for exotic objects”.
We investigate the properties of QCD at finite isospin chemical potential at zero and non-zero temperatures. This theory is not affected by the sign problem and can be simulated using Monte-Carlo techniques. With increasing isospin chemical potential and temperatures below the deconfinement transition the system changes into a phase where charged pions condense, accompanied by an accumulation of low modes of the Dirac operator. The simulations are enabled by the introduction of a pionic source into the action, acting as an infrared regulator for the theory, and physical results are obtained by removing the regulator via an extrapolation. We present an update of our study concerning the associated phase diagram using 2+1 flavours of staggered fermions with physical quark masses and the comparison to Taylor expansion. We also present first results for our determination of the equation of state at finite isospin chemical potential and give an example for a cosmological application. The results can also be used to gain information about QCD at small baryon chemical potentials using reweighting with respect to the pionic source parameter and the chemical potential and we present first steps in this direction.
We show the first results for parton distribution functions within the proton at the physical pion mass, employing the method of quasi-distributions. In particular, we present the matrix elements for the iso-vector combination of the unpolarized, helicity and transversity quasi-distributions, obtained with Nf = 2 twisted mass clover-improved fermions and a proton boosted with momentum 0.83 GeV. The momentum smearing technique has been applied to improve the overlap with the boosted proton state. Moreover, we present the renormalized helicity matrix elements in the RI’ scheme, following the non-perturbative renormalization prescription recently developed by our group.
In this proceeding we review our recent work using supervised learning with a deep convolutional neural network (CNN) to identify the QCD equation of state (EoS) employed in hydrodynamic modeling of heavy-ion collisions given only final-state particle spectra ρ(pT, Φ). We showed that there is a traceable encoder of the dynamical information from phase structure (EoS) that survives the evolution and exists in the final snapshot, which enables the trained CNN to act as an effective “EoS-meter” in detecting the nature of the QCD transition.
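The idea of an “EoS-meter” can be sketched as the forward pass of a tiny convolutional classifier over binned spectra ρ(pT, Φ). Everything below (bin counts, filter sizes, the two-class output distinguishing two hypothetical EoS labels, the untrained random weights) is an illustrative assumption, not the architecture or data of the reviewed work:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # 'valid' 2-D cross-correlation of a single-channel image with one kernel
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def eos_meter(spectrum, kernels, w_out):
    # ReLU after each convolution, global average pooling per filter,
    # then a dense layer with softmax over two hypothetical EoS classes
    # (e.g. crossover vs. first-order transition).
    feats = np.array([np.maximum(conv2d(spectrum, k), 0.0).mean() for k in kernels])
    logits = feats @ w_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Illustrative binning: 15 pT bins x 48 azimuthal bins, random untrained weights.
spectrum = rng.random((15, 48))
kernels = [rng.standard_normal((3, 3)) for _ in range(8)]
w_out = rng.standard_normal((8, 2))
p = eos_meter(spectrum, kernels, w_out)  # class probabilities for the two EoS labels
```

Training such a network on many hydrodynamic events with known EoS labels is what turns this forward pass into the “EoS-meter” described in the abstract.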
The standard implementation of the hadron resonance gas (HRG) model has been shown to be unable to describe all the available data on QCD matter. Here we show the balance of repulsive and attractive hadronic interactions on QCD thermodynamics through observables both calculated by lattice simulations and measured in experiment. Attractive interactions are mediated by resonance formation, which is here implemented through extra states predicted by the Quark Model, while repulsive interactions are modelled by means of Excluded Volume (EV) effects. Information on flavour-dependent effective sizes is extracted. It is found that EV effects are present in lattice QCD thermodynamics, and are essential for a comprehensive description of higher order fluctuations of conserved charges.
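One common way to implement the excluded-volume repulsion referred to above is the transcendental pressure equation below. The specific form (diagonal eigenvolumes v_i per species i, ideal-gas pressures p_i^id) is a standard convention and assumed here, not necessarily the exact prescription of this work:

```latex
% Excluded-volume HRG: the EV pressure solves a self-consistent equation in
% which the chemical potential of each species i is shifted by its
% eigenvolume v_i times the total pressure, suppressing the density at
% high temperature relative to the point-like (ideal) HRG.
p^{\rm EV}(T, \{\mu_i\}) \;=\; \sum_i p_i^{\rm id}\!\left(T,\; \mu_i - v_i\, p^{\rm EV}\right)
```

Because the shift grows with the pressure itself, the equation must be solved iteratively; flavour-dependent effective sizes enter through the choice of v_i.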
This paper traces the military role of Tibnīn and its rulers in the Latin East against the Muslims until 1187/583. Tibnīn played a key role in overcoming the Muslims in Tyre and controlling it in 1124. It also played a vital role in the conflict between Damascus and the Kingdom of Jerusalem. Tibnīn participated in defending Antioch, Banyas, Hebron and Transjordan several times. Furthermore, its soldiers and knights joined the army of the Kingdom of Jerusalem to capture Ascalon in 1153, and joined the campaigns of Amaury I, King of Jerusalem, against Egypt from 1164 to 1169. The military situation of Tibnīn under the rule of the royal house until its fall to the Muslims in 1187/583 will be studied as well.
We will discuss the issue of Landau levels of quarks in lattice QCD in an external magnetic field. We will show that in the two-dimensional case the lowest Landau level can be identified unambiguously even if the strong interactions are turned on. Starting from this observation, we will then show how one can define a “lowest Landau level” in the four-dimensional case, and discuss how much of the observed effects of a magnetic field can be explained in terms of it. Our results can be used to test the validity of low-energy models of QCD that make use of the lowest-Landau-level approximation.
The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.
We discuss the effects of the final hadronic state, in ultra-relativistic nuclear collisions, on hadronic resonance properties and measurable production rates. In particular we will compare our results with recent ALICE data on resonance production. We show that the hadronic phase of the system evolution has a considerable impact on the measured resonance ratios and pT spectra. We also discuss some of the remaining uncertainties in the model and how they may be addressed in future studies.
In this talk we discuss the effects of the hadronic rescattering on final state observables in high energy nuclear collisions. We do so by employing the UrQMD transport model for a realistic description of the hadronic decoupling process. The rescattering of hadrons modifies every hadronic bulk observable. For example, the apparent multiplicity of resonances is suppressed compared to a chemical equilibrium freeze-out model. Stable and unstable particles change their momentum distribution by more than 30% through rescattering. The hadronic rescattering also leads to a substantial decorrelation of the conserved charge distributions. These findings show that it is far from trivial to infer from final state observables the properties of the system at an earlier time, when it may have been in or close to local equilibrium.
The dynamics of strange vector meson resonances (K* and K̄*) is investigated within the Parton-Hadron-String Dynamics (PHSD) transport approach. We present the time evolution of the production of K*− resonances from the QGP phase by quark fusion as well as from hadronic sources. We also give a brief overview of the modification of the K* through Kπ decay and K*N interaction in a hot and dense nuclear medium.
A full session was organized in memory of Helmut Oeschler during the 2017 edition of the Strangeness in Quark Matter Conference. It was heart-warming to discuss with the audience his main achievements and share anecdotes about this exceptionally praised and appreciated colleague, who was also a great friend for many at the conference. A brief summary of the session is provided with these proceedings.
Observations of long-range azimuthal correlations in small collision systems (p+p/A) have triggered enormous excitement in the heavy-ion community. However, it is presently unclear to what extent the experimentally observed correlations should be attributed to initial state momentum correlations and/or the final state response to the initial state geometry. We discuss how a consistent theoretical description of the nonequilibrium dynamics is important to address both effects within a unified framework and present first results from weakly coupled non-equilibrium simulations in [1] to quantify the relative importance of initial state and final state effects based on theoretical calculations.
We compute hybrid static potentials in SU(3) lattice gauge theory. We present a method to automatically generate a large set of suitable creation operators with defined quantum numbers from elementary building blocks. We show preliminary results for several channels and discuss which structures of the gluonic flux tube seem to be realized by the ground states in these channels.
We discuss the current developments by the European Twisted Mass Collaboration in extracting parton distribution functions from the quasi-PDF approach. We concentrate on the non-perturbative renormalization prescription recently developed by us, using the RI′ scheme. We show results for the renormalization functions of matrix elements needed for the computation of quasi-PDFs, including the conversion to the MS scheme, and for renormalized matrix elements. We discuss the systematic effects present in the Z-factors and the possible ways of addressing them in the future.
We report on the status of ongoing investigations aiming at locating the deconfinement critical point with standard Wilson fermions and Nf = 2 flavors towards the continuum limit (standard Columbia plot); locating the tricritical masses at imaginary chemical potential with unimproved staggered fermions at Nf = 2 (extended Columbia plot); identifying the order of the chiral phase transition at μ = 0 for Nf = 2 via extrapolation from non-integer Nf (alternative Columbia plot).
Targeting rare observables, the CBM experiment will operate at high interaction rates of up to 10 MHz, unprecedented in heavy-ion experiments so far. This requires a novel free-streaming readout system and a new concept of data processing. The huge data rates of the CBM experiment will be reduced online to a recordable rate before the data are saved to mass storage. Full collision reconstruction and selection will be performed online in a dedicated processor farm. In order to make an efficient online event selection, a clean sample of particles has to be provided by the reconstruction package, called First Level Event Selection (FLES).
The FLES reconstruction and selection package consists of several modules: track finding, track fitting, event building, short-lived particle finding, and event selection. Since detector measurements also contain time information, the event building is done at all stages of the reconstruction process. The input data are distributed within the FLES farm in the form of time-slices. A time-slice is reconstructed in parallel across processor cores. After all tracks of the whole time-slice are found and fitted, they are collected into clusters of tracks originating from common primary vertices, which are then fitted, thus identifying the interaction points. Secondary tracks are associated with primary vertices according to their estimated production time. After that, short-lived particles are found and the full event building process is finished. The last stage of the FLES package is a selection of events according to the requested trigger signatures. The event reconstruction procedure and the results of its application to simulated collisions in the CBM detector setup are presented and discussed in detail.
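The event-building step in a free-streaming readout, grouping detector signals by time rather than by a hardware trigger, can be illustrated with a toy sketch. The function name, the gap criterion and the numbers below are assumptions for demonstration only, not the FLES implementation:

```python
def build_events(hit_times, gap=50.0):
    """Group time-ordered hits into event candidates: whenever the gap
    between consecutive hits exceeds `gap` (toy units, e.g. ns), a new
    event candidate is started.  This mimics, in a very simplified way,
    event building in a triggerless, free-streaming readout, where the
    collisions have to be delimited in the data stream itself."""
    events, current = [], []
    for t in sorted(hit_times):
        if current and t - current[-1] > gap:
            events.append(current)
            current = []
        current.append(t)
    if current:
        events.append(current)
    return events

# Two bursts of hits separated by a long quiet interval yield two
# event candidates.
hits = [0.0, 5.0, 12.0, 1000.0, 1003.0]
```

In the real package this grouping is done on reconstructed tracks and vertices within a time-slice, and each time-slice is processed in parallel, but the gap logic conveys the basic idea.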
Professor Walter Greiner, our mentor, colleague, and friend, passed away at the age of eighty. During his lifetime, the search for elements beyond uranium started, and elements up to the so far heaviest one with atomic number 118 were discovered. In this talk I will present a short history from early searches for ‘trans-uraniums’ up to the production and safe identification of shell-stabilized ‘Super-Heavy Nuclei’ (SHN). The nuclear shell model reveals that these nuclei should be located in a region with closed shells for the protons at Z = 114, 120 or 126 and for the neutrons at N = 184. The outstanding aim of experimental investigations is the exploration of this region of spherical SHN. Systematic studies of heavy-ion reactions for the synthesis of SHN revealed production cross-sections reaching values down to one picobarn and even below for the heaviest species. The systematics of measured cross-sections can be understood only on the basis of relatively high fission barriers, as predicted for nuclei in and around the island of SHN. The synthesis of isotopes of element 120 plays a key role in answering some of the open questions. Attempts to synthesize this element at the velocity filter SHIP will be reported.
We study tetraquark resonances with lattice QCD potentials computed for two static quarks and two dynamical quarks, the Born-Oppenheimer approximation and the emergent wave method of scattering theory. As a proof of concept we focus on systems with isospin I = 0, but consider different relative angular momenta l of the heavy b quarks. We compute the phase shifts and search for S and T matrix poles in the second Riemann sheet. We predict a new tetraquark resonance for l = 1, decaying into two B mesons, with quantum numbers I(JP) = 0(1−), mass MeV and decay width MeV.
We investigate the well-known vector state ψ(4040) in the framework of a quantum field theoretical model. In particular, we study its spectral function and search for the pole(s) in the complex plane. Quite interestingly, the spectral function has a non-standard shape and two poles are present. The role of the meson-meson quantum loops (in particular DD* ones) is crucial and could also explain the not yet confirmed “state” Y(4008).
As a first step, a simple and pedagogical recall of the η-η′ system is presented, in which the role of the axial anomaly, related to the heterochiral nature of the multiplet of (pseudo)scalar states, is underlined. As a consequence, η is close to the octet and η′ to the singlet configuration. On the contrary, for vector and tensor states, which belong to homochiral multiplets, no anomalous contribution to masses and mixing is present. Then, the isoscalar physical states are, to a very good approximation, nonstrange and strange, respectively. Finally, for pseudotensor states, which are part of a heterochiral multiplet (just as pseudoscalar ones), a sizable anomalous term is expected: η2(1645) roughly corresponds to the octet and η2(1870) to the singlet.
Charmonia with different transverse momentum pT usually come from different mechanisms in relativistic heavy-ion collisions. This work reviews theoretical studies on quarkonium evolution in the deconfined medium produced in p-Pb and Pb-Pb collisions. Charmonia with high pT mainly come from the initial hadronic collisions and are therefore sensitive to the initial energy density of the bulk medium. Charmonia within 0.1 < pT < 5 GeV/c at Large Hadron Collider (LHC) energies are mainly produced by the recombination of charm and anti-charm quarks in the medium. At extremely low pT ∼ 1/RA (RA is the nuclear radius), an additional contribution from the coherent interaction between the electromagnetic fields generated by one nucleus and the target nucleus plays a non-negligible role in J/ψ production, even in semi-central Pb-Pb collisions.
With the ongoing loss of global biodiversity, long-term records of species distribution patterns are becoming increasingly important for investigating the causes and consequences of their change. Therefore, the digitization of scientific literature, both modern and historical, has been attracting growing attention in recent years. To meet this growing demand, the Specialised Information Service for Biodiversity Research (BIOfid) was launched in 2017 with the aim of increasing the availability and accessibility of biodiversity information. Closely tied to the research community, the interdisciplinary BIOfid team is digitizing biodiversity-related data sources and provides a modern and professional infrastructure for hosting and sharing them. As a pilot project, German publications on the distribution and ecology of vascular plants, birds, moths and butterflies covering the past 250 years are prioritized. Large parts of the text corpus, defined in accordance with the needs of the relevant German research community, have already been transferred to a machine-readable format and will be publicly accessible soon. Software tools for text mining, semantic annotation and analysis, informed by current trends in machine learning, are being developed to maximize bioscientific data output through user-specific queries that can be created via the BIOfid web portal (https://www.biofid.de/). To boost knowledge discovery, specific ontologies focusing on morphological traits and taxonomy are being prepared and will be continuously extended to keep up with an ever-expanding volume of literature sources.