Automatic termination proving for functional programming languages is a frequently tackled problem. Most work in this area is done on strict languages, where orderings on the arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form, and it is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point where sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order, polymorphically typed language, and termination of higher-order functions can be proved as well. The calculus can be used to derive information on a wide range of different notions of termination.
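The central difficulty described above, a terminating program whose subexpressions have no normal form, can be illustrated with a small sketch. The snippet below uses Python generators to mimic lazy evaluation; it only illustrates the phenomenon and is not the proposed abstract-reduction calculus.

    from itertools import count, islice

    def naturals():
        # An infinite stream: reducing it fully to a value would never terminate.
        return count(0)

    def take(n, stream):
        # Demands only the first n elements; the rest of the stream is never evaluated.
        return list(islice(stream, n))

    # The whole expression terminates even though the subexpression naturals() has no normal form.
    print(take(5, naturals()))  # [0, 1, 2, 3, 4]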
We present techniques to prove termination of cycle rewriting, that is, string rewriting on cycles, which are strings in which the start and end are connected. Our main technique is to transform cycle rewriting into string rewriting and then apply state-of-the-art techniques to prove termination of the resulting string rewrite system. We present three such transformations and prove that all of them are sound and complete. In this way, not only does termination of string rewriting of the transformed system imply termination of the original cycle rewrite system, but a similar conclusion can also be drawn for non-termination. Apart from this transformational approach, we present a uniform framework of matrix interpretations, covering most of the earlier approaches to automatically proving termination of cycle rewriting. All our techniques serve both for proving termination and relative termination. We present several experiments showing the power of our techniques.
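To make the notion of cycle rewriting concrete, the sketch below (a simplified illustration, not one of the paper's three transformations) treats a cycle as a string considered up to rotation: a rule applies if its left-hand side occurs in some rotation. The example rule ba -> ab terminates as ordinary string rewriting but can be applied to the cycle "ab" forever, which is exactly why cycle termination needs dedicated techniques.

    def rotations(s):
        # All rotations of the cyclic string s.
        return [s[i:] + s[:i] for i in range(len(s))]

    def cycle_rewrite_step(cycle, lhs, rhs):
        # Apply the rule lhs -> rhs to the cycle, i.e. to any rotation of it.
        # Returns one possible successor cycle, or None if the rule does not apply.
        for rot in rotations(cycle):
            idx = rot.find(lhs)
            if idx != -1:
                return rot[:idx] + rhs + rot[idx + len(lhs):]
        return None

    # "ab" contains no "ba", but its rotation "ba" does, so the cycle can be rewritten,
    # and the resulting cycle "ab" can be rewritten again, ad infinitum.
    print(cycle_rewrite_step("ab", "ba", "ab"))  # -> "ab"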
With the rise of digitalization and ubiquity of media use, both opportunities and challenges emerge for academic learning. One prevalent challenge is media multitasking, which can become distracting and hinder learning success. This thesis investigates two facets of this issue: the enhancement of data tracking, and the exploration of digital interventions that support self-control.
The first paper focuses on digital tracking of media use, as a comprehensive understanding of digital distractions requires careful data collection to avoid misinterpretations. The paper presents a tracking system where media use is linked to learning activities. An annotation dashboard enabled the enrichment of the log data with self-reports. The efficacy of this system was evaluated in a 14-day online course taken by 177 students, with results confirming the initial assumptions about media tracking.
The second paper tackles the recognition of whether a text was thoroughly read, an issue brought on by the tendency of students to skip lengthy and demanding texts. A method utilizing scroll data and time series classification algorithms is presented and tested, showing promising results for early recognition and intervention.
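As a rough sketch of how scroll traces can be turned into a "thoroughly read vs. skimmed" classifier, the snippet below summarises each session with a few hand-picked features and fits an off-the-shelf classifier. The feature choices and the random forest are illustrative assumptions; the paper itself evaluates dedicated time series classification algorithms on the raw scroll data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def scroll_features(positions, timestamps):
        # Summarise one reading session's scroll trace (positions scaled to [0, 1],
        # timestamps in seconds) into a fixed-length feature vector.
        positions = np.asarray(positions, dtype=float)
        timestamps = np.asarray(timestamps, dtype=float)
        dt = np.maximum(np.diff(timestamps), 1e-6)
        dp = np.abs(np.diff(positions))
        return np.array([
            timestamps[-1] - timestamps[0],  # total time spent on the text
            positions.max(),                 # deepest point reached in the document
            dp.sum(),                        # total scroll distance, back-and-forth included
            (dp / dt).mean(),                # mean scroll speed
        ])

    def train_reading_classifier(sessions, labels):
        # sessions: list of (positions, timestamps); labels: 1 = thoroughly read, 0 = skimmed.
        X = np.vstack([scroll_features(p, t) for p, t in sessions])
        y = np.asarray(labels)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
        return clf.fit(X, y)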
The third paper presents the results of a systematic literature review on the effectiveness of digital self-control tools (DSCTs) in academic learning. The paper identifies gaps in existing research and outlines a roadmap for further research on self-control tools.
The fourth paper shares findings from a survey of 273 students, exploring the practical use and perceived helpfulness of DSCTs. The study highlights the challenge of balancing between too restrictive and too lenient DSCTs, particularly for platforms offering both learning content and entertainment. The results also show a special role of media use that is highly habitual.
The fifth paper of this work investigates facets of app-based habit building. In a study over 27 days, 106 school-aged children used the specially developed PROMPT-app. The children carried out one of three digital activities each day, each of which was supposed to promote a deeper or more superficial processing of plans. Significant differences regarding the processing of plans emerged between the three activities, and the results suggest that a child-friendly planning application needs to be personalized to be effective.
Overall, this work offers comprehensive insight into the complexity and potential of dealing with distracting media use and points out directions for future research and interventions in this fascinating and increasingly important field.
Despite an extensive body of literature and guidebooks on project management, many IT projects still fail today. The causes are often problems within the project team or misjudgements in planning the project and monitoring its status. Ways of working that have emerged through new technologies and globalisation, such as virtual teams, are particularly affected. This thesis addresses the questions of what virtual teams are and which problems burden their work. To this end, currently available tools from the Web 2.0 domain are analysed, and avoidable weaknesses of these aids are identified from the state of the tools on offer. Subsequently, based on a requirements analysis and a concept that uses new methods for visualising project status and linking it with documentation and communication, the tool "TeamVision" is built, which attempts to manage virtual teams as efficiently as possible, to detect problems quickly, and thus to speed up the work within the team. In particular, it builds on the finding of the analysis that many tools handle individual management tasks separately: users have to collect information themselves from various charts, lists, or other views and associate it on their own. The prototype implementation of TeamVision tries to make the flow of information manageable by consolidating overviews into a project tree that uses zoom functions and visual aids such as colour coding to ease information retrieval.
Interest in becoming a data scientist or entering related professions in the data science domain is growing rapidly. To meet this demand, we propose a novel educational service that aims to provide tailored learning paths for data science. Our target user is someone who aims to become an expert in data science. Our approach is to analyse the background of the practitioner and match suitable learning units. A key feature is that we use gamification to reinforce practitioner engagement. We believe that our work provides a practical guideline for those who want to learn data science.
Measurements of the pT-dependent flow vector fluctuations in Pb-Pb collisions at √sNN = 5.02 TeV using azimuthal correlations with the ALICE experiment at the LHC are presented. A four-particle correlation approach [1] is used to quantify the effects of flow angle and magnitude fluctuations separately. This paper extends previous studies to additional centrality intervals and provides measurements of the pT-dependent flow vector fluctuations at √sNN = 5.02 TeV with two-particle correlations. Significant pT-dependent fluctuations of the V⃗2 flow vector in Pb-Pb collisions are found across different centrality ranges, with the largest fluctuations of up to ∼15% being present in the 5% most central collisions. In parallel, no evidence of significant pT-dependent fluctuations of V⃗3 or V⃗4 is found. Additionally, evidence of flow angle and magnitude fluctuations is observed with more than 5σ significance in central collisions. These observations in Pb-Pb collisions indicate where the classical picture of hydrodynamic modeling with a common symmetry plane breaks down. This has implications for hard probes at high pT, which might be biased by pT-dependent flow angle fluctuations of at least 23% in central collisions. Given the presented results, existing theoretical models should be re-examined to improve our understanding of initial conditions, quark–gluon plasma (QGP) properties, and the dynamic evolution of the created system.
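For readers outside heavy-ion physics: the flow vector studied above is conventionally defined, per harmonic order n, from the Fourier expansion of the azimuthal particle distribution; its magnitude is the flow coefficient v_n and its phase the symmetry-plane (flow) angle Ψ_n,

    \frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2 \sum_{n=1}^{\infty} v_n \cos\bigl[ n(\varphi - \Psi_n) \bigr],
    \qquad
    \vec{V}_n \equiv v_n \, e^{i n \Psi_n} .

pT-dependent flow vector fluctuations then mean that v_n and Ψ_n measured in a narrow pT interval can differ, event by event, from their pT-integrated values.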
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The results are reported in terms of multiparticle correlation observables dubbed Symmetric Cumulants. These observables are robust against biases originating from nonflow effects. The centrality dependence of correlations between the higher order harmonics (the quadrangular v4 and pentagonal v5 flow) and the lower order harmonics (the elliptic v2 and triangular v3 flow) is presented. The transverse momentum dependence of correlations between v3 and v2 and between v4 and v2 is also reported. The results are compared to calculations from viscous hydrodynamics and A Multi-Phase Transport (AMPT) model calculations. The comparisons to viscous hydrodynamic models demonstrate that the different order harmonic correlations respond differently to the initial conditions and the temperature dependence of the ratio of shear viscosity to entropy density (η/s). A small average value of η/s is favored independent of the specific choice of initial conditions in the models. The calculations with the AMPT initial conditions yield results closest to the measurements. Correlations between the magnitudes of v2, v3 and v4 show moderate pT dependence in mid-central collisions. Together with existing measurements of individual flow harmonics, the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the Large Hadron Collider. The results are reported in terms of multiparticle correlation observables dubbed Symmetric Cumulants. These observables are robust against biases originating from nonflow effects. The centrality dependence of correlations between the higher order harmonics (the quadrangular v4 and pentagonal v5 flow) and the lower order harmonics (the elliptic v2 and triangular v3 flow) is presented. The transverse momentum dependences of correlations between v3 and v2 and between v4 and v2 are also reported. The results are compared to calculations from viscous hydrodynamics and A Multi-Phase Transport (AMPT) model calculations. The comparisons to viscous hydrodynamic models demonstrate that the different order harmonic correlations respond differently to the initial conditions and the temperature dependence of the ratio of shear viscosity to entropy density (η/s). A small average value of η/s is favored independent of the specific choice of initial conditions in the models. The calculations with the AMPT initial conditions yield results closest to the measurements. Correlations between the magnitudes of v2, v3 and v4 show moderate pT dependence in mid-central collisions. This might be an indication of possible viscous corrections to the equilibrium distribution at hadronic freeze-out, which might help to understand the possible contribution of bulk viscosity in the hadronic phase of the system. Together with existing measurements of individual flow harmonics, the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
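The Symmetric Cumulants used in the entries above are, in the conventional notation, the covariance of the squared flow amplitudes of two different harmonics,

    SC(m, n) = \langle v_m^2 \, v_n^2 \rangle - \langle v_m^2 \rangle \langle v_n^2 \rangle ,

so that a positive (negative) value indicates that events with a larger v_m tend to have a larger (smaller) v_n, independently of any symmetry-plane correlation.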
We present the first systematic comparison of the charged-particle pseudorapidity densities for three widely different collision systems, pp, pPb, and PbPb, at the top energy of the Large Hadron Collider (√sNN = 5.02 TeV) measured over a wide pseudorapidity range (−3.5 < η < 5), the widest possible among the four experiments at that facility. The systematic uncertainties are minimised since the measurements are recorded by the same experimental apparatus (ALICE). The distributions for pPb and PbPb collisions are determined as a function of the centrality of the collisions, while results from pp collisions are reported for inelastic events with at least one charged particle at midrapidity. The charged-particle pseudorapidity densities are, under simple and robust assumptions, transformed to charged-particle rapidity densities. This allows for the calculation and the presentation of the evolution of the width of the rapidity distributions and of a lower bound on the Bjorken energy density, as a function of the number of participants in all three collision systems. We find a decreasing width of the particle production, and roughly a smooth tenfold increase in the energy density, as the system size grows, which is consistent with a gradually denser phase of matter.
We present the first systematic comparison of the charged-particle pseudorapidity densities for three widely different collision systems, pp, p-Pb, and Pb-Pb, at the top energy of the Large Hadron Collider (√sNN = 5.02 TeV) measured over a wide pseudorapidity range (−3.5 < η < 5), the widest possible among the four experiments at that facility. The systematic uncertainties are minimised since the measurements are recorded by the same experimental apparatus (ALICE). The distributions for p-Pb and Pb-Pb collisions are determined as a function of the centrality of the collisions, while results from pp collisions are reported for inelastic events with at least one charged particle at midrapidity. The charged-particle pseudorapidity densities are, under simple and robust assumptions, transformed to charged-particle rapidity densities. This allows for the calculation and the presentation of the evolution of the width of the rapidity distributions and of a lower bound on the Bjorken energy density, as a function of the number of participants in all three collision systems. We find a decreasing width of the particle production, and roughly a smooth tenfold increase in the energy density, as the system size grows, which is consistent with a gradually denser phase of matter.
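The Bjorken estimate referred to above bounds the initial energy density from the measured transverse energy per unit rapidity; in its standard form (with the formation time τ0 and the transverse overlap area A_T both having to be assumed or modelled),

    \varepsilon_{\mathrm{Bj}} = \frac{1}{\tau_0 \, A_T} \, \frac{\mathrm{d}E_T}{\mathrm{d}y} ,

so the roughly tenfold rise quoted above reflects the growth of dE_T/dy relative to the overlap area as the system size increases.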
The first measurements of K∗(892)0 resonance production as a function of charged-particle multiplicity in Xe-Xe collisions at √sNN = 5.44 TeV and pp collisions at √s = 5.02 TeV using the ALICE detector are presented. The resonance is reconstructed at midrapidity (|y| < 0.5) using the hadronic decay channel K∗0 → K±π∓. Measurements of the transverse-momentum-integrated yield, mean transverse momentum, nuclear modification factor of K∗0, and yield ratios of the resonance to the stable hadron (K∗0/K) are compared across different collision systems (pp, p-Pb, Xe-Xe, and Pb-Pb) at similar collision energies to investigate how the production of K∗0 resonances depends on the size of the system formed in these collisions. The hadronic rescattering effect is found to be independent of the size of the colliding systems and mainly driven by the produced charged-particle multiplicity, which is a proxy for the volume of produced matter at the chemical freeze-out. In addition, the production yields of K∗0 in Xe-Xe collisions are utilized to constrain the dependence of the kinetic freeze-out temperature on the system size using the HRG-PCE model.
To truly appreciate the myriad of events that relate synaptic function and vesicle dynamics, simulations should be done in a spatially realistic environment. This holds true in particular in order to explain both the rather astonishing motor patterns underlying peristaltic contractions, which we observed in in vivo recordings, and the shape of the EPSPs under different forms of long-term stimulation, both presented here for a well characterized synapse, the neuromuscular junction (NMJ) of the Drosophila larva (cf. Figure 1). To this end, we have employed a reductionist approach and generated three-dimensional models of single presynaptic boutons at the Drosophila larval NMJ. Vesicle dynamics are described by diffusion-like partial differential equations which are solved numerically on unstructured grids using the uG platform. In our model we varied parameters such as bouton size, vesicle output probability (Po), stimulation frequency and number of synapses, to observe how altering these parameters affected bouton function. We demonstrate that the morphologic and physiologic specialization may be a convergent evolutionary adaptation to regulate the trade-off between sustained, low-output and short-term, high-output synaptic signals. There seems to be a biologically meaningful explanation for the co-existence of the two different bouton types previously observed at the NMJ (characterized especially by the relation between size and Po): assigning two different tasks with respect to short- and long-term behaviour could allow for an optimized interplay of different synapse types. We present astonishingly similar experimental and simulation results, obtained in particular without any data fitting and based only on biophysical values taken from different experimental results. As a side product, we demonstrate how advanced methods from numerical mathematics could help in the future to resolve other difficult experimental neurobiological issues.
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. To truly appreciate the myriad of events that relate synaptic function and vesicle dynamics, simulations should be done in a spatially realistic environment. This holds true in particular in order to explain the rather astonishing motor patterns presented here, which we observed in in vivo recordings and which underlie peristaltic contractions at a well characterized synapse, the neuromuscular junction (NMJ) of the Drosophila larva. To this end, we have employed a reductionist approach and generated three-dimensional models of single presynaptic boutons at the Drosophila larval NMJ. Vesicle dynamics are described by diffusion-like partial differential equations which are solved numerically on unstructured grids using the uG platform. In our model we varied parameters such as bouton size, vesicle output probability (Po), stimulation frequency and number of synapses, to observe how altering these parameters affected bouton function. We demonstrate that the morphologic and physiologic specialization may be a convergent evolutionary adaptation to regulate the trade-off between sustained, low-output and short-term, high-output synaptic signals. There seems to be a biologically meaningful explanation for the co-existence of the two different bouton types previously observed at the NMJ (characterized especially by the relation between size and Po): assigning two different tasks with respect to short- and long-term behaviour could allow for an optimized interplay of different synapse types. As a side product, we demonstrate how advanced methods from numerical mathematics could help in the future to resolve other difficult experimental neurobiological issues.
The morphology of presynaptic specializations can vary greatly ranging from classical single-release-site boutons in the central nervous system to boutons of various sizes harboring multiple vesicle release sites. Multi-release-site boutons can be found in several neural contexts, for example at the neuromuscular junction (NMJ) of body wall muscles of Drosophila larvae. These NMJs are built by two motor neurons forming two types of glutamatergic multi-release-site boutons with two typical diameters. However, it is unknown why these distinct nerve terminal configurations are used on the same postsynaptic muscle fiber. To systematically dissect the biophysical properties of these boutons we developed a full three-dimensional model of such boutons, their release sites and transmitter-harboring vesicles and analyzed the local vesicle dynamics of various configurations during stimulation. Here we show that the rate of transmission of a bouton is primarily limited by diffusion-based vesicle movements and that the probability of vesicle release and the size of a bouton affect bouton-performance in distinct temporal domains allowing for an optimal transmission of the neural signals at different time scales. A comparison of our in silico simulations with in vivo recordings of the natural motor pattern of both neurons revealed that the bouton properties resemble a well-tuned cooperation of the parameters release probability and bouton size, enabling a reliable transmission of the prevailing firing-pattern at diffusion-limited boutons. Our findings indicate that the prevailing firing-pattern of a neuron may determine the physiological and morphological parameters required for its synaptic terminals.
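The "diffusion-like partial differential equations" mentioned in the entries above are, in their generic form, of reaction-diffusion type for the vesicle concentration c(x, t) inside the bouton volume (the concrete coefficients, source terms and boundary conditions at the release sites are model-specific and not reproduced here):

    \frac{\partial c}{\partial t} = \nabla \cdot \bigl( D \, \nabla c \bigr) + R(c) ,

with diffusion coefficient D and a reaction term R accounting for vesicle release and recycling; on unstructured grids, such equations are typically discretized with finite-volume or finite-element methods of the kind provided by the uG platform.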
A newly developed observable for correlations between symmetry planes, which characterize the direction of the anisotropic emission of produced particles, is measured in Pb-Pb collisions at √sNN = 2.76 TeV with ALICE. This so-called Gaussian Estimator allows for the first time the study of these quantities without the influence of correlations between different flow amplitudes. The centrality dependence of various correlations between two, three and four symmetry planes is presented. The ordering of magnitude between these symmetry plane correlations is discussed and the results of the Gaussian Estimator are compared with measurements of previously used estimators. The results utilizing the new estimator lead to significantly smaller correlations than reported by studies using the Scalar Product method. Furthermore, the obtained symmetry plane correlations are compared to state-of-the-art hydrodynamic model calculations for the evolution of heavy-ion collisions. While the model predictions provide a qualitative description of the data, quantitative agreement is not always observed, particularly for correlators with significant non-linear response of the medium to initial state anisotropies of the collision system. As these results provide unique and independent information, their usage in future Bayesian analyses can further constrain our knowledge of the properties of the QCD matter produced in ultrarelativistic heavy-ion collisions.
The production yield of the Λ(1520) baryon resonance is measured at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The measurement is performed in the Λ(1520)→pK− (and charge conjugate) hadronic decay channel as a function of the transverse momentum (pT) and collision centrality. The pT-integrated production rate of Λ(1520) relative to Λ in central collisions is suppressed by about a factor of 2 with respect to peripheral collisions. This is the first observation of the suppression of a baryonic resonance at the LHC and the first evidence of Λ(1520) suppression in heavy-ion collisions. The measured Λ(1520)/Λ ratio in central collisions is smaller than the value predicted by the statistical hadronisation model calculations. The shape of the measured pT distribution and the centrality dependence of the suppression are reproduced by the EPOS3 Monte Carlo event generator. The measurement adds further support to the formation of a dense hadronic phase in the final stages of the evolution of the fireball created in heavy-ion collisions, lasting long enough to cause a significant reduction in the observable yield of short-lived resonances.
The production yield of the Λ(1520) baryon resonance is measured at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The measurement is performed in the Λ(1520)→pK− (and charge conjugate) hadronic decay channel as a function of the transverse momentum (pT) and collision centrality. The pT-integrated production rate of Λ(1520) relative to Λ in central collisions is suppressed by about a factor of 2 with respect to peripheral collisions. This is the first observation of the suppression of a baryonic resonance at the LHC and the first 3σ evidence of Λ(1520) suppression within a single collision system. The measured Λ(1520)/Λ ratio in central collisions is smaller than the value predicted by the statistical hadronisation model calculations. The shape of the measured pT distribution and the centrality dependence of the suppression are reproduced by the EPOS3 Monte Carlo event generator. The measurement adds further support to the formation of a dense hadronic phase in the final stages of the evolution of the fireball created in heavy-ion collisions, lasting long enough to cause a significant reduction in the observable yield of short-lived resonances.
Inclusive transverse momentum spectra of primary charged particles in Pb–Pb collisions at √sNN=2.76 TeV have been measured by the ALICE Collaboration at the LHC. The data are presented for central and peripheral collisions, corresponding to 0–5% and 70–80% of the hadronic Pb–Pb cross section. The measured charged particle spectra in |η|<0.8 and 0.3<pT<20 GeV/c are compared to the expectation in pp collisions at the same sNN, scaled by the number of underlying nucleon–nucleon collisions. The comparison is expressed in terms of the nuclear modification factor RAA. The result indicates only weak medium effects (RAA≈0.7) in peripheral collisions. In central collisions, RAA reaches a minimum of about 0.14 at pT=6–7 GeV/c and increases significantly at larger pT. The measured suppression of high-pT particles is stronger than that observed at lower collision energies, indicating that a very dense medium is formed in central Pb–Pb collisions at the LHC.
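The nuclear modification factor quoted above compares the per-collision yield in Pb-Pb to the pp expectation scaled by the average number of binary nucleon-nucleon collisions ⟨N_coll⟩; in standard notation,

    R_{AA}(p_T) = \frac{ \mathrm{d}^2 N_{AA} / \mathrm{d}p_T \, \mathrm{d}\eta }{ \langle N_{\mathrm{coll}} \rangle \, \mathrm{d}^2 N_{pp} / \mathrm{d}p_T \, \mathrm{d}\eta } ,

so that R_AA = 1 corresponds to the absence of nuclear effects, while values well below unity, as reported here, signal strong suppression.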
Searching the Semantic Web: extending the VRP with an intuitive, RQL-based query interface
(2003)
The flood of data on the World Wide Web is a problem for every Internet user. Classical Internet search engines are overwhelmed and deliver useful results less and less often. The Semantic Web promises hope, based largely on RDF. The Semantic Web will presumably first reach the public in specialised information portals, so-called infomediaries. Visitors to such information portals need a query language that is as easy to use as an ordinary Internet search engine. No such query language currently exists for RDF. This thesis introduces a novel query language that meets this requirement: eRQL. Part of this work is the eRQL processor eRqlEngine, implemented in Java, which can be obtained from http://www.wleklinski.de/rdf/ and http://www.dbis.informatik.uni-frankfurt.de/~tolle/RDF/eRQL/.
Iterative arrays (IAs) are a parallel computational model with sequential processing of the input. They are one-dimensional arrays of interacting identical deterministic finite automata. In this note, realtime IAs with sublinear space bounds are used to accept formal languages. The existence of a proper hierarchy of space complexity classes between logarithmic and linear space bounds is proved. Furthermore, an optimal space lower bound for non-regular language recognition is shown. Keywords: iterative arrays, cellular automata, space bounded computations, decidability questions, formal languages, theory of computation
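A minimal sketch of the machine model described above: a one-dimensional array of identical finite automata in which a distinguished communication cell reads one input symbol per step, and acceptance is decided in real time, i.e. after as many steps as there are input symbols. The transition function delta and the accepting set are placeholders for a concrete automaton, not a construction from the paper.

    def run_iterative_array(word, n_cells, delta, accepting, q0="q0", border="#"):
        # All cells are identical finite automata; cell 0 is the communication cell.
        states = [q0] * n_cells
        for symbol in word:                       # real time: one step per input symbol
            new_states = []
            for i, q in enumerate(states):
                left = states[i - 1] if i > 0 else border
                right = states[i + 1] if i < n_cells - 1 else border
                inp = symbol if i == 0 else None  # only the communication cell sees the input
                new_states.append(delta(left, q, right, inp))
            states = new_states
        return states[0] in accepting             # acceptance is decided by the communication cell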
The two-particle momentum correlation functions between charm mesons (D∗± and D±) and charged light-flavor mesons (π± and K±) in all charge combinations are measured for the first time by the ALICE Collaboration in high-multiplicity proton-proton collisions at a center-of-mass energy of √s = 13 TeV. For DK and D∗K pairs, the experimental results are in agreement with theoretical predictions of the residual strong interaction based on quantum chromodynamics calculations on the lattice and chiral effective field theory. In the case of Dπ and D∗π pairs, tension between the calculations including strong interactions and the measurement is observed. For all particle pairs, the data can be adequately described by Coulomb interaction only, indicating a shallow interaction between charm and light-flavor mesons. Finally, the scattering lengths governing the residual strong interaction of the Dπ and D∗π systems are determined by fitting the experimental correlation functions with a model that employs a Gaussian potential. The extracted values are small and compatible with zero.
The two-particle momentum correlation functions between charm mesons (D∗± and D±) and charged light-flavor mesons (π± and K±) in all charge-combinations are measured for the first time by the ALICE Collaboration in high-multiplicity proton–proton collisions at a center-of-mass energy of √s = 13 TeV. For DK and D∗K pairs, the experimental results are in agreement with theoretical predictions of the residual strong interaction based on quantum chromodynamics calculations on the lattice and chiral effective field theory. In the case of Dπ and D∗π pairs, tension between the calculations including strong interactions and the measurement is observed. For all particle pairs, the data can be adequately described by Coulomb interaction only, indicating a shallow interaction between charm and light-flavor mesons. Finally, the scattering lengths governing the residual strong interaction of the Dπ and D∗π systems are determined by fitting the experimental correlation functions with a model that employs a Gaussian potential. The extracted values are small and compatible with zero.
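As background for the femtoscopic correlation functions used above: in the standard (Koonin-Pratt) picture, the measured two-particle correlation as a function of the relative momentum k* folds the emission source S with the two-particle wave function, so that a known source turns C(k*) into a probe of the final-state interaction,

    C(k^*) = \int \mathrm{d}^3 r \; S(\vec{r}\,) \, \bigl| \psi(\vec{k}^*, \vec{r}\,) \bigr|^2 ,

with C(k*) tending to 1 at large k*; deviations from unity at small k* encode the interaction, here modelled with a Gaussian potential for the Dπ and D∗π systems.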
Studying strangeness and baryon production mechanisms through angular correlations between charged Ξ baryons and identified hadrons in pp collisions at √s = 13 TeV
(2023)
The angular correlations between charged Ξ baryons and associated identified hadrons (pions, kaons, protons, Λ baryons, and Ξ baryons) are measured in pp collisions at √s = 13 TeV with the ALICE detector to give insight into the particle production mechanisms and the balancing of quantum numbers on the microscopic level. In particular, the distribution of strangeness is investigated in the correlations between the doubly-strange Ξ baryon and mesons and baryons that contain a single strange quark, K and Λ. As a reference, the results are compared to Ξπ and Ξp correlations, where the associated mesons and baryons do not contain a strange valence quark. These measurements are expected to be sensitive to whether strangeness is produced through string breaking or in a thermal production scenario. Furthermore, the multiplicity dependence of the correlation functions is measured to look for the turn-on of additional particle production mechanisms with event activity. The results are compared to predictions from the string-breaking model PYTHIA 8, including tunes with baryon junctions and rope hadronisation enabled, as well as a cluster hadronisation model. While some features of the data are described quantitatively or qualitatively by the Monte Carlo models, no one model can match all features of the data. These results provide stringent constraints on the strangeness and baryon number production mechanisms in pp collisions.
The very forward energy is a powerful tool for characterising the proton fragmentation in pp and p-Pb collisions and, studied in correlation with particle production at midrapidity, provides direct insights into the initial stages and the subsequent evolution of the collision. Furthermore, the correlation between the forward energy and the production of particles with large transverse momenta at midrapidity provides information complementary to measurements of the underlying event, which are usually interpreted in the framework of models implementing centrality-dependent multiple parton interactions. Results on the very forward energy, measured by the ALICE zero degree calorimeters (ZDC), and its dependence on the activity measured at midrapidity in pp collisions at √s = 13 TeV and in p-Pb collisions at √sNN = 8.16 TeV are presented and discussed. The measurements performed in pp collisions are compared with the expectations of three hadronic interaction event generators: PYTHIA 6 (Perugia 2011 tune), PYTHIA 8 (Monash tune), and EPOS LHC. These results provide new constraints on the validity of models in describing the beam remnants at very forward rapidities, where perturbative QCD cannot be used.
Study of the Λ–Λ interaction with femtoscopy correlations in pp and p–Pb collisions at the LHC
(2019)
This work presents new constraints on the existence and the binding energy of a possible Λ-Λ bound state, the H-dibaryon, derived from Λ-Λ femtoscopic measurements by the ALICE collaboration. The results are obtained from a new measurement using the femtoscopy technique in pp collisions at √s = 13 TeV and p-Pb collisions at √sNN = 5.02 TeV, combined with previously published results from pp collisions at √s = 7 TeV. The Λ-Λ scattering parameter space, spanned by the inverse scattering length f0^-1 and the effective range d0, is constrained by comparing the measured Λ-Λ correlation function with calculations obtained within the Lednicky model. The data are compatible with hypernuclei results and lattice computations, both predicting a shallow attractive interaction, and make it possible to test different theoretical approaches describing the Λ-Λ interaction. The region in the (f0^-1, d0) plane which would accommodate a Λ-Λ bound state is substantially restricted compared to previous studies. The binding energy of the possible Λ-Λ bound state is estimated within an effective-range expansion approach and is found to be BΛΛ = 3.2 +1.6/−2.4 (stat) +1.8/−1.0 (syst) MeV.
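The scattering parameters quoted above enter through the usual effective-range expansion of the s-wave scattering amplitude, which in the femtoscopy (Lednicky) convention reads

    f(k) = \left( \frac{1}{f_0} + \frac{1}{2} d_0 k^2 - i k \right)^{-1} ,

so the correlation function constrains the pair (f0^-1, d0), and a bound state corresponds to a pole of f(k) at imaginary k, from which the binding energy is estimated.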
The interactions of kaons (K) and antikaons (K̄) with few nucleons (N) have so far been studied using kaonic atom data and measurements of kaon production and interaction yields in nuclei. Some details of the three-body KNN and K̄NN dynamics are still not well understood, mainly due to the overlap with multi-nucleon interactions in nuclei. An alternative method to probe the dynamics of three-body systems with kaons is to study the final-state interaction within triplets of particles emitted in pp collisions at the Large Hadron Collider, which are free from effects due to the presence of bound nucleons. This Letter reports the first femtoscopic study of p−p−K+ and p−p−K− correlations measured in high-multiplicity pp collisions at √s = 13 TeV by the ALICE Collaboration. The analysis shows that the measured p−p−K+ and p−p−K− correlation functions can be interpreted in terms of pairwise interactions in the triplets, indicating that the dynamics of such systems is dominated by the two-body interactions without significant contributions from three-body effects or bound states.
The second (v2) and third (v3) flow harmonic coefficients of J/ψ mesons are measured at forward rapidity (2.5 < y < 4.0) in Pb-Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC. Results are obtained with the scalar product method and reported as a function of transverse momentum, pT, for various collision centralities. A positive value of J/ψ v3 is observed with 3.7σ significance. The measurements, compared to those of prompt D0 mesons and charged particles at mid-rapidity, indicate an ordering with vn(J/ψ) < vn(D0) < vn(h±) (n = 2, 3) at low and intermediate pT up to 6 GeV/c and a convergence with v2(J/ψ) ≈ v2(D0) ≈ v2(h±) at high pT above 6-8 GeV/c. In semi-central collisions (5-40% and 10-50% centrality intervals) at intermediate pT between 2 and 6 GeV/c, the ratio v3/v2 of J/ψ mesons is found to be significantly lower (4.6σ) with respect to that of charged particles. In addition, the comparison to the prompt D0-meson ratio in the same pT interval suggests an ordering similar to that of the v2 and v3 coefficients. The J/ψ v2 coefficient is further studied using the Event Shape Engineering technique. The obtained results are found to be compatible with the expected variations of the eccentricity of the initial-state geometry.
Study of flavor dependence of the baryon-to-meson ratio in proton–proton collisions at √s= 13 TeV
(2023)
The production cross sections of D0 and Λc+ hadrons originating from beauty-hadron decays (i.e. non-prompt) were measured for the first time at midrapidity (|y| < 0.5) by the ALICE Collaboration in proton-proton collisions at a center-of-mass energy of √s = 13 TeV. They are described within uncertainties by perturbative QCD calculations employing the fragmentation fractions of beauty quarks to baryons measured at forward rapidity by the LHCb Collaboration. The bb̄ production cross section per unit of rapidity at midrapidity, estimated from these measurements, is dσ(bb̄)/dy||y|<0.5 = 83.1 ± 3.5 (stat.) ± 5.4 (syst.) +12.3/−3.2 (extrap.) μb. The baryon-to-meson ratios are computed to investigate the hadronization mechanism of beauty quarks. The non-prompt Λc+/D0 production ratio has a similar trend to the one measured for the promptly produced charmed particles and to the p/π+ and Λ/KS0 ratios, suggesting a similar baryon-formation mechanism among light, strange, charm, and beauty hadrons. The pT-integrated non-prompt Λc+/D0 ratio is found to be significantly higher than the one measured in e+e− collisions.
This letter reports measurements which characterize the underlying event associated with hard scatterings at mid-pseudorapidity (|η| < 0.8) in pp, p-Pb and Pb-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 5.02 TeV. The measurements are performed with ALICE at the LHC. Different multiplicity classes are defined based on the event activity measured at forward rapidities. The hard scatterings are identified by the leading particle, defined as the charged particle with the largest transverse momentum (pT) in the collision and having 8 < pT < 15 GeV/c. The pT spectra of associated particles (0.5 ≤ pT < 6 GeV/c) are measured in different azimuthal regions defined with respect to the leading particle direction: toward, transverse, and away. The associated charged-particle yields in the transverse region are subtracted from those of the away and toward regions. The remaining jet-like yields are reported as a function of the multiplicity measured in the transverse region. The measurements show a suppression of the jet-like yield in the away region and an enhancement of high-pT associated particles in the toward region in central Pb-Pb collisions, as compared to minimum-bias pp collisions. These observations are consistent with previous measurements that used two-particle correlations, and with an interpretation in terms of parton energy loss in a high-density quark-gluon plasma. These yield modifications vanish in peripheral Pb-Pb collisions and are not observed in either high-multiplicity pp or p-Pb collisions.
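The toward, transverse and away regions used above are conventionally defined by the azimuthal angle difference Δφ between an associated particle and the leading particle; the usual underlying-event boundaries (stated here as an assumption, since the abstract does not spell them out) are

    |\Delta\varphi| < \tfrac{\pi}{3} \ (\text{toward}), \qquad \tfrac{\pi}{3} \le |\Delta\varphi| < \tfrac{2\pi}{3} \ (\text{transverse}), \qquad |\Delta\varphi| \ge \tfrac{2\pi}{3} \ (\text{away}) ,

so the transverse region is dominated by the underlying event and can be subtracted from the jet-dominated toward and away regions.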
The inclusive J/ψ production in Pb-Pb collisions at the center-of-mass energy per nucleon pair √sNN = 5.02 TeV, measured with the ALICE detector at the CERN LHC, is reported. The J/ψ meson is reconstructed via the dimuon decay channel at forward rapidity (2.5 < y < 4) down to zero transverse momentum. The suppression of the J/ψ yield in Pb-Pb collisions with respect to binary-scaled pp collisions is quantified by the nuclear modification factor (RAA). The RAA at √sNN = 5.02 TeV is presented and compared with previous measurements at √sNN = 2.76 TeV as a function of the centrality of the collision, and of the J/ψ transverse momentum and rapidity. The inclusive J/ψ RAA shows a suppression increasing toward higher pT, with a steeper dependence for central collisions. The modification of the J/ψ average pT and average pT² is also studied. Comparisons with the results of models based on a transport equation and on statistical hadronization are also carried out.
The inclusive J/ψ production in Pb–Pb collisions at the center-of-mass energy per nucleon pair √sNN = 5.02 TeV, measured with the ALICE detector at the CERN LHC, is reported. The J/ψ meson is reconstructed via the dimuon decay channel at forward rapidity (2.5 < y < 4) down to zero transverse momentum. The suppression of the J/ψ yield in Pb–Pb collisions with respect to binary-scaled pp collisions is quantified by the nuclear modification factor (RAA). The RAA at √sNN = 5.02 TeV is presented and compared with previous measurements at √sNN = 2.76 TeV as a function of the centrality of the collision, and of the J/ψ transverse momentum and rapidity. The inclusive J/ψ RAA shows a suppression increasing toward higher transverse momentum, with a steeper dependence for central collisions. The modification of the J/ψ average transverse momentum and average squared transverse momentum is also studied. Comparisons with the results of models based on a transport equation and on statistical hadronization are carried out.
The expectations of prospective students often differ considerably from the actual contents and requirements of a degree programme. One reason is that many do not gain enough clarity about which of their own strengths and weaknesses are "actually" relevant for success in their studies and careers. A school-leaver with good grades in mathematics and physics and mediocre grades in German and English might, for example, still conclude that "the sciences suit him better". Whether his scientific understanding is sufficiently developed for a successful computer science degree, however, is not so easy to infer. It is even harder for prospective students to judge how well developed their "soft skills" are, that is, the personality traits that are not systematically assessed at school but are highly predictive of long-term success in studies and careers. Changing subjects at the beginning of one's studies frequently prolongs the time to degree. Even if such an "orientation phase" is often considered normal and important, practical experience shows that, when positions are filled, graduates with a short time to degree tend to be preferred over those who studied longer. Employers often interpret a longer study period as a sign of a lack of determination or of missing professional motivation, which can reduce the chances of career starters. It is likewise in the universities' interest to keep the number of subject changes and drop-outs as low as possible, not least for economic reasons. For this reason, the University of Frankfurt offers prospective students concrete decision aids in the form of a self-assessment, initially in the subjects computer science and psychology. The approach aims to give school-leavers, as early as possible and with reasonable effort, the opportunity to check for themselves to what extent their expectations of a degree programme match its actual contents and requirements. The concept for creating a self-assessment, presented here using the computer science programme as an example, was developed, not by chance, in close cooperation with the Institute of Psychology (Prof. Dr. Helfried Moosbrugger, Dr. Siegbert Reiß, Ewa Jonkisz), because besides subject-specific qualifications, personal characteristics such as commitment and persistence also determine success in one's studies. The evaluation of the anonymously taken self-assessment also uncovers gaps in the prospective students' knowledge, so that targeted preparation for the degree programme becomes possible. For example, the Department of Mathematics and Computer Science offers dedicated preparatory courses for first-year students in programming and mathematics. Revision and preparatory courses are also offered during the semester breaks, all financed from study fees. In this way, the level of knowledge, especially among first-semester students, can become more homogeneous. The goal is thereby also to soften the "first-semester shock". The online advisory service thus contributes directly to an improvement of the learning and teaching situation.
The simulation of flow in fractured porous media is of crucial importance for many hydrogeological applications, such as preventing groundwater contamination near a landfill or a repository for radioactive waste, extracting fossil fuels, or storing carbon dioxide underground. Because of their nature, and in particular the high permeability inside the fractures, fractures form preferential transport paths and can decisively influence the flow profile. However, the anisotropic geometry of the fractures, combined with enormous jumps in parameters such as permeability over very small distances, places high demands on the numerical methods.
This thesis therefore pursues two approaches to modelling the fractures: a lower-dimensional approach, motivated by the anisotropic geometry with very small aperture and very large extent of the fractures, and a full-dimensional approach that resolves all processes inside the fracture. The results of these approaches are examined for benchmark problems, with the outcome that only for very thin fractures does the numerically cheaper lower-dimensional approach yield satisfactory results. Furthermore, a criterion is introduced that indicates at runtime, based on properties of the fracture and on flow parameters, whether the lower-dimensional approach is sufficiently valid. A dimension-adaptive approach is presented that switches to the full-dimensional model according to this criterion. The results show that considerably more accurate results can be achieved in this way, without requiring a full resolution in every case and over the entire simulation time.
The implementation of strictness analysis carried out in the course of this work constitutes an efficient approximation of abstract reduction with path analysis. The G#-machine, a new machine model based on the G-machine, lays out the method systematically. The close similarity to the G-machine, which could be preserved in our implementation, shows how naturally the method matches reduction in functional programming languages. Although the implementation emphasizes traceability over efficiency, it demonstrates that abstract reduction with path analysis is perfectly practical even in a functional implementation and finds strictness information that implementations of other methods do not find. There is room for optimization, among other things in the parts of the program that are executed for every simulated G#-machine instruction. A cautious estimate suggests that the runtime could be halved with reasonable effort.
This document describes an application called Stolperwege, intended to serve as a prototypical communication technology for a mobile Public History of the Holocaust, taking as its starting point the art project Stolpersteine by Gunter Demnig. It thereby addresses a central challenge in communicating the history of the Holocaust, namely connecting to the latest developments in communication media. The Stolperwege app is aimed at pupils, residents, historians and, more generally, visitors to a city who want to follow the traces of the Holocaust on site and actively participate in writing a Public History of the Holocaust.
Already today, modern driver assistance systems contribute more and more to making individual mobility in road traffic safer and more comfortable. For this purpose, modern vehicles are equipped with a multitude of sensors and actuators which perceive, interpret and react to the environment of the vehicle. In order to reach the next set of goals along this path, for example to assist the driver in increasingly complex situations or to reach a higher degree of autonomy of driver assistance systems, a detailed understanding of the vehicle environment and especially of other moving traffic participants is necessary.
It is known that motion information plays a key role in human object recognition [Spelke, 1990]. However, full 3D motion information is mostly not taken into account for stereo-vision-based object segmentation in the literature. In this thesis, novel approaches for motion-based object segmentation of stereo image sequences are proposed, from which a generic environmental model is derived that contributes to a more precise analysis and understanding of the respective traffic scene. The aim of the environmental model is to yield a minimal scene description in terms of a few moving objects and stationary background such as houses, crash barriers or parking vehicles. A minimal scene description aggregates as much information as possible and is characterized by its stability, precision and efficiency.
Instead of dense stereo and optical flow information, the proposed object segmentation builds on the so-called Stixel World, an efficient superpixel-like representation of space-time stereo data. As it turns out, this step substantially increases the stability of the segmentation and reduces the computational time by several orders of magnitude, thus enabling real-time automotive use in the first place. Besides the efficient, real-time capable optimization, the object segmentation has to be able to cope with significant noise caused by the measurement principle of the stereo camera system used. For that reason, in order to obtain an optimal solution under the given extreme conditions, the segmentation task is formulated as a Bayesian optimization problem, which makes it possible to incorporate regularizing prior knowledge and redundancies into the object segmentation.
Object segmentation as it is discussed here means unsupervised segmentation since typically the number of objects in the scene and their individual object parameters are not known in advance. This information has to be estimated from the input data as well.
For inference, two approaches with their individual pros and cons are proposed, evaluated and compared. The first approach is based on dynamic programming. The key advantage of this approach is the possibility of taking into account non-local priors such as shape or object size information, which is impossible or prohibitively expensive with more local, conventional graph optimization approaches such as graphcut or belief propagation.
In the first instance, the Dynamic Programming approach is limited to one-dimensional data structures, in this case to the first Stixel row. A possible extension to capture multiple Stixel rows is discussed at the end of this thesis.
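The following minimal sketch (Python/NumPy) illustrates the kind of one-dimensional dynamic program referred to above: it labels a single row of Stixels by minimizing per-Stixel costs plus a simple label-switch penalty. The costs, the Potts-style penalty and the two-label setup are invented for illustration and stand in for the richer unary terms and non-local priors of the actual thesis model.

import numpy as np

def segment_row(unary, switch_penalty=1.0):
    # unary: (n, k) array of per-Stixel costs for k labels (e.g. background vs. objects)
    # switch_penalty: cost for changing the label between neighbouring Stixels
    n, k = unary.shape
    cost = unary[0].copy()                 # best cost ending in each label
    back = np.zeros((n, k), dtype=int)     # backpointers for reconstruction
    for i in range(1, n):
        stay = cost                        # keep the previous label
        switch = cost.min() + switch_penalty   # switch from the cheapest label
        prev = np.where(stay <= switch, np.arange(k), cost.argmin())
        cost = np.minimum(stay, switch) + unary[i]
        back[i] = prev
    labels = np.empty(n, dtype=int)
    labels[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):          # backtrack the optimal label sequence
        labels[i - 1] = back[i, labels[i]]
    return labels

# toy example: 6 Stixels, 2 labels (0 = background, 1 = moving object)
unary = np.array([[0.1, 1.0], [0.2, 0.9], [1.1, 0.1],
                  [1.0, 0.2], [0.9, 0.3], [0.1, 1.2]])
print(segment_row(unary))   # -> [0 0 1 1 1 0]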
Further novel contributions include a special outlier concept to handle gross stereo errors associated with so-called stereo tear-off edges. Additionally, object-object interactions are taken into account by explicitly modeling object occlusions. These extensions prove to be dramatic improvements in practice.
This first approach is compared with a second approach that is based on an alternating optimization of the Stixel segmentation and of the relevant object parameters in an expectation maximization (EM) sense. The labeling step is performed by means of the α-expansion graphcut algorithm, the parameter estimation step is done via one-dimensional sampling and multidimensional gradient descent. By using the Stixel World and due to an efficient implementation, one step of the optimization only takes about one millisecond on a standard single CPU core. To the knowledge of the author, at the time of development there was no faster global optimization in a demonstrator car.
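To illustrate the alternating structure of this second approach, here is a deliberately simplified stand-in (Python/NumPy): Stixels are assigned to the object hypothesis that currently explains their measured velocity best, and each object's velocity parameter is then re-estimated from its assigned Stixels. This reduces to a k-means-like loop and omits the graphcut labeling, the gradient descent and the full object models used in the thesis.

import numpy as np

def alternate(stixel_velocities, n_objects=2, iterations=10):
    # initialize object velocity hypotheses spread over the observed range
    params = np.linspace(stixel_velocities.min(), stixel_velocities.max(), n_objects)
    for _ in range(iterations):
        # labeling step: assign each Stixel to the nearest object hypothesis
        labels = np.argmin(np.abs(stixel_velocities[:, None] - params[None, :]), axis=1)
        # parameter step: refit each object's velocity from its assigned Stixels
        for k in range(n_objects):
            if np.any(labels == k):
                params[k] = stixel_velocities[labels == k].mean()
    return labels, params

v = np.array([0.1, 0.0, 0.2, 9.8, 10.1, 10.0, 0.05])   # two motion groups
print(alternate(v))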
For both approaches, various testing scenarios have been carefully selected that allow the proposed methods to be examined thoroughly under different real-world conditions with limited ground truth at hand. As an additional innovative application, the first approach was successfully implemented in a demonstrator car that drove the so-called Bertha Benz Memorial Route from Mannheim to Pforzheim autonomously in real traffic.
At the end of this thesis, the limits of the proposed systems are discussed and a prospect on possible future work is given.
Frankfurt, on an ordinary morning around 8:00 a.m.: numerous commuters stream in from the east along the A 66 towards the city centre. By "Am Erlenbruch" at the latest, traffic jams and stop-and-go traffic build up. Computer scientists in Frankfurt can predict these jams with a simulation system. More than that: they compute the emission of pollutants and their distribution across the urban area. The goal is to optimize traffic control strategies.
Spam detection in wikis
(2012)
Through their collaborative nature, wikis have contributed decisively to the emergence of Web 2.0: the cooperation of many users has made it possible to prepare large amounts of data and compile them in a structured way. A treasure of data has thus grown that is valuable for the machine processing of text: using text-mining techniques, a great deal of information can be extracted from wikis. For this purpose it is useful first to download their contents and store them locally.
Often there are no access restrictions on editing pages. This enables the accumulation of information mentioned above, since many users can contribute. However, it carries the risk that wikis become polluted with spam, which is a hindrance to their use as a knowledge base.
Common anti-spam measures take place online and rely, among other things, on monitoring by the users or on blacklists for web links. In contrast, this work takes the following approach: a locally stored wiki is inventoried and examined in its entirety. Only the contents of the pages are considered. Spam detection is based on a combination of decision rules and word probabilities. This yielded good results.
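As a rough illustration of the word-probability part of this approach (the decision rules are omitted, and the tokenization and smoothing choices here are assumptions, not the thesis implementation), a naive-Bayes-style scorer over page contents could look like this in Python:

import math
from collections import Counter

def train(spam_pages, ham_pages):
    # Laplace-smoothed log word probabilities for the spam and non-spam classes
    spam_counts = Counter(w for page in spam_pages for w in page.lower().split())
    ham_counts = Counter(w for page in ham_pages for w in page.lower().split())
    vocab = set(spam_counts) | set(ham_counts)
    n_spam, n_ham = sum(spam_counts.values()), sum(ham_counts.values())
    return {w: (math.log((spam_counts[w] + 1) / (n_spam + len(vocab))),
                math.log((ham_counts[w] + 1) / (n_ham + len(vocab))))
            for w in vocab}

def spam_score(page, model):
    # sum of per-word log likelihood ratios; > 0 suggests spam
    score = 0.0
    for w in page.lower().split():
        if w in model:
            log_p_spam, log_p_ham = model[w]
            score += log_p_spam - log_p_ham
    return score

model = train(["cheap pills buy now", "buy cheap pills"],
              ["wiki article about lambda calculus", "history of frankfurt"])
print(spam_score("buy cheap pills now", model) > 0)   # True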
The focus of this paper is space improvements of programs, which are transformations that do not worsen the space requirement during evaluation. A realistic theoretical treatment must take the garbage collection method into account. We investigate space improvements under the assumption of an optimal garbage collector. Such a garbage collector is not implementable, but there is an advantage: the investigations are independent of potential changes in an implementable garbage collector, and our results show that evaluation and other similar transformations are space improvements.
We explore space improvements in LRP, a polymorphically typed call-by-need functional core language. A relaxed space measure is chosen for the maximal size usage during an evaluation. It abstracts from the details of the implementation via abstract machines, but it takes garbage collection into account and thus can be seen as a realistic approximation of space usage. The results are: a context lemma for space-improving translations and for space equivalences; all but one reduction rule of the calculus are shown to be space improvements, and the exceptional one, the copy rule, is shown to increase space only moderately.
Several further program transformations are shown to be space improvements or space equivalences; in particular, the translation into machine expressions is a space equivalence. These results are a step forward in making predictions about the change in runtime space behavior of optimizing transformations in call-by-need functional languages.
A measurement of dielectron production in proton-proton (pp) collisions at √s = 13 TeV, recorded with the ALICE detector at the CERN LHC, is presented in this Letter. The data set was recorded with a reduced magnetic solenoid field. This enables the investigation of a kinematic domain at low dielectron invariant mass mee and pair transverse momentum pT,ee that was previously inaccessible at the LHC. The cross section for dielectron production is studied as a function of mee, pT,ee, and event multiplicity dNch/dη. The expected dielectron rate from hadron decays, called hadronic cocktail, utilizes a parametrization of the measured η/π0 ratio in pp and proton-nucleus (p-A) collisions, assuming that this ratio shows no strong dependence on collision energy at low transverse momentum. Comparison of the measured dielectron yield to the hadronic cocktail at 0.15<mee<0.6 GeV/c2 and for pT,ee<0.4 GeV/c indicates an enhancement of soft dielectrons, reminiscent of the 'anomalous' soft-photon and -dilepton excess in hadron-hadron collisions reported by several experiments under different experimental conditions. The enhancement factor over the hadronic cocktail amounts to 1.61±0.13(stat.)±0.17(syst.,data)±0.34(syst.,cocktail) in the ALICE acceptance. Acceptance-corrected excess spectra in mee and pT,ee are extracted and compared with calculations of dielectron production from hadronic bremsstrahlung and thermal radiation within a hadronic many-body approach.
A measurement of dielectron production in proton-proton (pp) collisions at √s = 13 TeV, recorded with the ALICE detector at the CERN LHC, is presented in this Letter. The data set was recorded with a reduced magnetic solenoid field. This enables the investigation of a kinematic domain at low dielectron invariant mass mee and pair transverse momentum pT,ee that was previously inaccessible at the LHC. The cross section for dielectron production is studied as a function of mee, pT,ee, and event multiplicity dNch/dη. The expected dielectron rate from hadron decays, called hadronic cocktail, utilizes a parametrization of the measured η/π0 ratio in pp and proton-nucleus (p-A) collisions, assuming that this ratio shows no strong dependence on collision energy at low transverse momentum. Comparison of the measured dielectron yield to the hadronic cocktail at 0.15<mee<0.6 GeV/c2 and for pT,ee<0.4 GeV/c indicates an enhancement of soft dielectrons, reminiscent of the 'anomalous' soft-photon and -dilepton excess in hadron-hadron collisions reported by several experiments under different experimental conditions. The enhancement factor over the hadronic cocktail amounts to 1.69±0.14(stat.)±0.18(syst.,data)±0.36(syst.,cocktail) in the ALICE acceptance. Acceptance-corrected excess spectra in mee and pT,ee are extracted and compared with calculations of dielectron production from hadronic bremsstrahlung and thermal radiation within a hadronic many-body approach.
The first measurements of skewness and kurtosis of mean transverse momentum (⟨pT⟩) fluctuations are reported in Pb–Pb collisions at √sNN = 5.02 TeV, Xe–Xe collisions at √sNN = 5.44 TeV and pp collisions at √s = 5.02 TeV using the ALICE detector. The measurements are carried out as a function of system size ⟨dNch/dη⟩^(1/3), with dNch/dη measured at |η| < 0.5, using charged particles with transverse momentum (pT) and pseudorapidity (η) in the ranges 0.2 < pT < 3.0 GeV/c and |η| < 0.8, respectively. In Pb–Pb and Xe–Xe collisions, positive skewness is observed in the fluctuations of ⟨pT⟩ for all centralities, which is significantly larger than what would be expected in the scenario of independent particle emission. This positive skewness is considered a crucial consequence of the hydrodynamic evolution of the hot and dense nuclear matter created in heavy-ion collisions. Furthermore, similar observations of positive skewness for minimum bias pp collisions are also reported here. The kurtosis of ⟨pT⟩ fluctuations is found to be in good agreement with the kurtosis of a Gaussian distribution for the most central Pb–Pb collisions. Hydrodynamic model calculations with MUSIC using Monte Carlo Glauber initial conditions are able to explain the measurements of both skewness and kurtosis qualitatively from semicentral to central collisions in the Pb–Pb system. The color reconnection mechanism in the PYTHIA8 model seems to play a pivotal role in capturing the qualitative behavior of the same measurements in pp collisions.
When performing transfer learning in Computer Vision, normally a pretrained model (source model) that is trained on a specific task and a large dataset like ImageNet is used. The learned representation of that source model is then used to perform a transfer to a target task. Performing transfer learning in this way has had a great impact on Computer Vision because it works seamlessly, especially on tasks that are related to each other. Recent research has investigated the relationship between different tasks and their impact on transfer learning by developing similarity methods. What these similarity methods have in common is that they do not perform the transfers themselves but instead predict transfer learning rankings, so that the best possible source model can be selected from a range of different source models. However, these methods have focused only on single-source transfers and have not paid attention to multi-source transfers. Multi-source transfers promise even better results than single-source transfers as they combine information from multiple source tasks, all of which are useful to the target task. We fill this gap and propose a many-to-one task similarity method called MOTS that predicts both single-source and multi-source transfers to a specific target task. We do that by using linear regression and the source representations of the source models to predict the target representation. We show that we achieve at least results on par with related state-of-the-art methods when focusing only on single-source transfers using the Pascal VOC and Taskonomy benchmarks. We show that we even outperform all of them when using single- and multi-source transfers together (0.9 vs. 0.8) on the Taskonomy benchmark. We additionally investigate the performance of MOTS in conjunction with a multi-task learning architecture. The task-decoder heads of a multi-task learning architecture are used in different variations to perform multi-source transfers, since this promises efficiency over multiple single-task architectures and incurs less computational cost. Results show that our proposed method accurately predicts transfer learning rankings on the NYUD dataset, and the best transfer learning results are always achieved when using more than one source task. Additionally, it is shown that even using just one task-decoder head from the multi-task learning architecture promises better transfer learning results than using a single-task architecture for the same task, which is due to the information from different tasks shared in the previous layers of the multi-task learning architecture. Since the MOTS rankings for selecting the MTI-Net task-decoder head with the highest transfer learning performance were very accurate for the NYUD dataset but not satisfying for the Pascal VOC dataset, further experiments need to verify the generalizability of MOTS rankings for selecting the optimal task-decoder head from a multi-task architecture.
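A minimal sketch of the regression idea described above (Python, scikit-learn): each candidate source representation, or a concatenation of several of them, is used to linearly predict the target representation, and transfers are ranked by the resulting prediction error. The feature names, dimensions and the in-sample error measure are invented for illustration; the actual MOTS procedure and its evaluation protocol are more involved.

import numpy as np
from sklearn.linear_model import LinearRegression

def rank_sources(source_feats, target_feats):
    # source_feats: dict name -> (n_samples, d_source) arrays over the same samples
    # target_feats: (n_samples, d_target) array of target-task representations
    scores = {}
    for name, X in source_feats.items():
        reg = LinearRegression().fit(X, target_feats)
        scores[name] = np.mean((reg.predict(X) - target_feats) ** 2)
    # multi-source variant: concatenate two source representations
    names = list(source_feats)
    if len(names) >= 2:
        X = np.hstack([source_feats[names[0]], source_feats[names[1]]])
        reg = LinearRegression().fit(X, target_feats)
        scores[names[0] + "+" + names[1]] = np.mean((reg.predict(X) - target_feats) ** 2)
    return sorted(scores.items(), key=lambda kv: kv[1])   # lower error = better transfer

rng = np.random.default_rng(0)
target = rng.normal(size=(100, 8))
sources = {"edges": target @ rng.normal(size=(8, 16)),   # linearly related to the target
           "noise": rng.normal(size=(100, 16))}          # unrelated
print(rank_sources(sources, target))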
Simulation of examination regulations and degree programmes using constraint logic programming
(2006)
This thesis attempted to simulate the examination regulations of the Bachelor programme in computer science with the help of a constraint logic programming language, more precisely the programming language ECLiPSe, and to check them for logical errors. To this end the two declarative programming paradigms, logic programming and constraint programming, were explained separately, since they employ different techniques for handling problems. First, predicate logic (Chapter 2) was worked out as a foundation and presented in detail. Then logic programming was explained using Prolog, a programming language based on predicate logic (Chapter 3). After the logical part, constraint programming (Chapter 4) was discussed in depth. This created a basis for explaining constraint logic programming. Constraint logic programming (Chapter 5) was presented as an extension of logic programming by constraints and their handling. First a general approach to constraint logic programming (the CLP paradigm) was explained. With the introduction of the programming language ECLiPSe, all tools needed for the simulation were covered. Finally, the modelling of the problem and its implementation in ECLiPSe were discussed in more detail (Chapter 6). The basic idea of the simulation was to formulate the rules of the examination regulations as constraints so that they could be processed formally. Two kinds of tests were carried out: • Constraint satisfaction problem: the first test searched for a solution in which all constraints are satisfied. • Constraint optimization problem: here an optimal solution was sought among several candidates in which all constraints are satisfied. Conclusion: constraint logic programming is a promising field, since it provides a means of handling combinatorial problems. Such problems occur in many different professional fields and can otherwise only be tackled with great effort. When such problems arise, a conceptual model can be created quickly and converted very easily into an executable program (design model). Program modification is considerably easier than in procedural programming languages.
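The following small sketch conveys the basic idea of expressing regulation rules as constraints and testing satisfiability. It uses Python with the python-constraint package rather than ECLiPSe, and the two rules (180 credit points in total, at least 60 after four semesters) are made-up examples, not the actual Bachelor regulations modelled in the thesis.

from constraint import Problem

problem = Problem()
semesters = range(1, 7)
# credit points earned per semester, 0..40 in steps of 5
problem.addVariables([f"cp_{s}" for s in semesters], list(range(0, 45, 5)))

# rule 1 (hypothetical): the programme requires 180 credit points in total
problem.addConstraint(lambda *cp: sum(cp) == 180,
                      [f"cp_{s}" for s in semesters])
# rule 2 (hypothetical): at least 60 credit points after the first four semesters
problem.addConstraint(lambda a, b, c, d: a + b + c + d >= 60,
                      ["cp_1", "cp_2", "cp_3", "cp_4"])

# constraint satisfaction test: any solution shows the rules are consistent
print(problem.getSolution())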
Thanks to the development of ever more efficient algorithms and more powerful hardware, the computer rendering of photorealistic scenes has made enormous progress in recent years. Deceptively realistic simulated special effects have become indispensable in virtually every Hollywood feature film and are in some cases very difficult to recognize as computer-generated images. Because of the complexity of living organisms, however, there is still no flawless method capable of realistically simulating a complete creature on the computer, whether static or in motion. Impressive results have been achieved in the area of animation, since the skeleton of a human or vertebrate can be simulated by suitable methods and movements can thus be reproduced deceptively well on the computer. The difficulty in achieving a completely realistic visualization of a creature, however, lies in representing further structures of an organism that, while not directly visible, nevertheless influence the visible regions. These structures are the layers of muscle and fatty tissue. The surface of figures is visibly changed by muscles, both in motion and in static poses. So far this effect has received only insufficient attention in the visualization of living beings, which leads to the not fully realistic results mentioned above. For the simulation of muscles, various muscle models have been developed that describe a muscle as a whole very well with respect to its basic physical properties, such as force development or contraction velocity. Many effects of the muscle that mainly take place at a deeper level have not yet been researched, which consequently also rules out a corresponding computer simulation. The different muscle types (skeletal, smooth and cardiac muscle) and muscle shapes (fusiform, singly/doubly pennate, etc.) are described. Furthermore, the different muscle fibre types (FTO, STO, etc.) with their properties and functions are discussed. Further topics are the structural composition of a skeletal muscle, the contraction mechanism and the activation by nerve stimuli. In the field of biomechanics, i.e. research into the physical processes within the muscle, the complexity of the structure and functioning of a muscle has led to an extensive variety of research work. Numerous effects that can be observed in a working muscle have not been explained to this day. The findings relevant to this work, however, have been researched to a sufficient extent and can be represented by corresponding mathematical models. The mechanics underlying a muscle are described on the basis of these models. Besides the quantities used in the model presented later, other properties relevant for biomechanical investigations are also discussed. Furthermore, it is shown how various contractions (single twitch, tetanus) work mechanically. For muscle work and muscle power, various diagrams are presented that show the relationships between the physical quantities force, velocity, work and power.
After presenting the ISOFIT method for determining muscle-tendon properties, mathematical formulas and equations describing force-velocity and force-length relationships, the series elastic component and muscle activation, which lead to the equation of motion, are given. Further mathematical functions follow that describe the activation processes of different muscle contractions, as well as the Hill muscle model, which has provided a suitable basis for research in biomechanics for many years. Regarding computer graphics, a brief outline is given of how artificial humans are modelled and animated. The state-of-the-art survey gives an overview of various methods for representing the surface of bodies and their deformation under the influence of muscles. Besides the surface models (rigid-body deformation, local surface operators, skinning, contour deformation, deformation by key shapes), volume models (body representation by primitives, iso-surfaces) and multi-layer models (3-layer model, 4-layer model) are presented and their advantages and disadvantages worked out. A suitable representation of the surface that incorporates deformations caused by muscle activity was found by using pneus coupled with the quaoaring technique. This method, which is based on observations from biology and is used to represent organic bodies, is exceptionally well suited to graphically representing a muscle-tendon apparatus, which is after all also an organic structure. To connect the two sub-models, simulation and visualization, the action line known from biomechanics lends itself, representing an imaginary line of force in the muscle and the tendon. The centerline used in the quaoaring method, which is the basis for modelling the volume-constant body, can be extended into such an action line by coupling it to the physical processes. Changes in the length and course of the action line, for example through muscle contraction, thereby directly affect the shape of the muscle, and the connection to the visualization is established.
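For context, the Hill muscle model mentioned above is built around the classical hyperbolic force-velocity relation (standard textbook form, not a result of this thesis):

(F + a)\,(v + b) \;=\; (F_0 + a)\,b \;=\; \text{const}

where F is the muscle force, v the contraction velocity, F_0 the maximal isometric force, and a, b are empirical Hill constants.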
This paper shows equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, which is also an identity on letrec-free expressions.
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees, which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
The calculus LRP is a polymorphically typed call-by-need lambda calculus extended by data constructors, case-expressions, seq-expressions and type abstraction and type application. This report is devoted to the extension LRPw of LRP by scoped sharing decorations. The extension cannot be properly encoded into LRP if improvements are defined w.r.t. the number of lbeta, case, and seq-reductions, which makes it necessary to reconsider the claims and proofs of properties. We show correctness of improvement properties of reduction and transformation rules and also of computation rules for decorations in the extended calculus LRPw. We conjecture that conservativity of the embedding of LRP in LRPw holds.
This report documents the extension LRPw of LRP by sharing decorations. We show correctness of improvement properties of reduction and transformation rules and also of computation rules for decorations in the extended calculus LRPw. We conjecture that conservativity of the embedding of LRP in LRPw holds.
Shaders for image processing
(2009)
Graphics cards have undergone a major transformation in recent years. Initially only the display of precomputed primitives was possible; by now, vertex and pixel shaders can be programmed completely freely. The specialization in rendering has let GPUs (Graphics Processing Units) grow into massively parallel processors which, when exploited optimally, achieve many times the computing power of current CPUs. Programmable shaders have recently turned graphics cards more and more into an additional processor for general-purpose programming.
Current image editing programs show that the trend is moving towards the GPU, and this work likewise harnesses the enormous computing power of the GPU for image processing. Image filters can be realized as pixel shaders and thus executed directly on the GPU. The framework presented here, SForge, was developed with the goal of being compatible with an existing framework; AForge was chosen for this purpose. With SForge, existing and custom image filters can be executed directly on the GPU, and conversions between color spaces and color systems have also been implemented. The framework works with floating-point data, so HDR data can also be processed, for example to apply tone mapping. Filters with parameters can be adjusted interactively via an optional dialog, modifying the result in real time.
In order to promote the accessibility of biodiversity data in historic and contemporary literature, we introduce a new interdisciplinary project called BIOfid (FID=Fachinformationsdienst, a service for providing specialized information). The project aims at a mobilization of data available in print only by combining digitization of scientific biodiversity literature with the development of innovative text mining tools for complex, eventually semantic searches throughout the complete text corpus. A major prerequisite for the development of such search tools is the provision of sophisticated anatomy ontologies on the one hand, and of complete lists of species names (currently considered valid as well as all synonyms) at a global scale on the other hand. In the initial stage, we chose examples from German publications of the past 250 years dealing with the geographic distribution and ecology of vascular plants (Tracheophyta), birds (Aves), as well as moths and butterflies (Lepidoptera) in Germany. These taxa have been prioritized according to current demands of German research groups (about 50 sites) aiming at analyses and modeling of distribution patterns and their changes through time. In the long term, we aim at providing data and open source software applicable for any taxon and geographic region. For this purpose, a platform for open access journals for long-term availability of professional e-journals will be established. All generated data will also be made accessible through GFBio (German Federation for Biological Data). BIOfid is supported by the LIS-Scientific Library Services and Information Systems program of the German Research Foundation (DFG).
In intensive care units physicians are aware of a high lethality rate among septic shock patients. In this contribution we present typical problems and results of a retrospective, data-driven analysis based on two neural network methods applied to the data of two clinical studies. Our approach includes the necessary steps of data mining, i.e. building up a database, cleaning and preprocessing the data and finally choosing an adequate analysis for the medical patient data. We chose two architectures based on supervised neural networks. The patient data is classified into two classes (survived and deceased) by a diagnosis based either on the black-box approach of a growing RBF network or on a second network which can explain its diagnosis by human-understandable diagnostic rules. The advantages and drawbacks of these classification methods for an early warning system are discussed.
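As a rough stand-in for the first, black-box classifier (not the growing RBF network of the study, whose growth heuristic and the clinical data are not reproduced here), an RBF-style classifier can be sketched in Python with radial basis features around cluster centres and a linear readout:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centers, width):
    # Gaussian radial basis activations of each sample w.r.t. each centre
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * width ** 2))

# toy stand-in for patient data: X = measurements, y = 0 (survived) / 1 (deceased)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
clf = LogisticRegression(max_iter=1000).fit(rbf_features(X, centers, 1.0), y)
print(clf.score(rbf_features(X, centers, 1.0), y))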
The endoplasmic reticulum–mitochondria encounter structure (ERMES) connects the mitochondrial outer membrane with the ER. Multiple functions have been linked to ERMES, including maintenance of mitochondrial morphology, protein assembly and phospholipid homeostasis. Since the mitochondrial distribution and morphology protein Mdm10 is present in both ERMES and the mitochondrial sorting and assembly machinery (SAM), it is unknown how the ERMES functions are connected on a molecular level. Here we report that conserved surface areas on opposite sides of the Mdm10 β-barrel interact with SAM and ERMES, respectively. We generated point mutants to separate protein assembly (SAM) from morphology and phospholipid homeostasis (ERMES). Our study reveals that the β-barrel channel of Mdm10 serves different functions. Mdm10 promotes the biogenesis of α-helical and β-barrel proteins at SAM and functions as integral membrane anchor of ERMES, demonstrating that SAM-mediated protein assembly is distinct from ER-mitochondria contact sites.
The paper focuses on the division of the sensor field into subsets of sensor events and proposes the linear transformation with the smallest achievable reproduction error: the transform coding approach using principal component analysis (PCA). For the implementation of the PCA, this paper introduces a new symmetrical, laterally inhibited neural network model, proposes an objective function for it and deduces the corresponding learning rules. The necessary conditions for the learning rate and the inhibition parameter for balancing the cross-correlations vs. the autocorrelations are computed. The simulation reveals that increasing inhibition can slightly speed up the convergence process in the beginning. In the remainder of the paper, the application of the network to picture encoding is discussed. Here, the use of non-completely connected networks for the self-organized formation of templates in cellular neural networks is shown. It turns out that the self-organizing Kohonen map is just the non-linear, first-order approximation of a general self-organizing scheme. In this way, classical transform picture coding is changed into a parallel, local model of linear transformation by locally changing sets of self-organized eigenvector projections with overlapping input receptive fields. This approach favors an effective, cheap implementation of sensor encoding directly on the sensor chip. Keywords: Transform coding, Principal component analysis, Lateral inhibited network, Cellular neural network, Kohonen map, Self-organized eigenvector jets.
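The transform coding idea itself can be sketched in a few lines of Python/NumPy: learn a PCA basis from image blocks and keep only the leading coefficients per block. This is plain batch PCA on non-overlapping 8x8 blocks with invented parameters, not the laterally inhibited network or the overlapping receptive fields described in the paper.

import numpy as np

def pca_transform_code(image, block=8, keep=8):
    # cut the image into non-overlapping blocks and flatten them
    h, w = image.shape
    blocks = np.array([image[i:i+block, j:j+block].ravel()
                       for i in range(0, h - block + 1, block)
                       for j in range(0, w - block + 1, block)])
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # eigenvectors of the block covariance matrix = PCA basis
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, ::-1][:, :keep]          # leading 'keep' components
    codes = centered @ basis                    # compressed representation
    reconstruction = codes @ basis.T + mean     # decoder side
    return codes, reconstruction

img = np.random.default_rng(0).normal(size=(64, 64))
codes, rec = pca_transform_code(img)
print(codes.shape)   # (64, 8): 8 coefficients per 8x8 block instead of 64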
Algorithms and data structures constitute the theoretical foundations of computer science and are an integral part of any classical computer science curriculum. Due to their high level of abstraction, understanding algorithms is a crucial concern for the vast majority of novice students. To facilitate the understanding and teaching of algorithms, a new research field termed "algorithm visualisation" evolved in the early 1980s. This field is concerned with developing innovative techniques and concepts for effective algorithm visualisations for teaching, study, and research purposes. Due to the large number of requirements that high-quality algorithm visualisations need to meet, developing and deploying effective algorithm visualisations from scratch is often deemed an arduous, time-consuming task which necessitates high-level skills in didactics, design, programming and evaluation. A substantial part of this thesis is devoted to the problems and solutions related to the automation of three-dimensional visual simulation of algorithms. The scientific contribution of the research presented in this work lies in addressing three concerns: - Identifying and investigating the issues related to the full automation of visual simulations. - Developing an automation-based approach to minimising the effort required for creating effective visual simulations. - Designing and implementing a rich environment for the visualisation of arbitrary algorithms and data structures in 3D. The research presented in this thesis is of considerable interest to (1) researchers seeking to facilitate the development process of algorithm visualisations, (2) educators concerned with adopting algorithm visualisations as a teaching aid and (3) students interested in developing their own algorithm animations.
The Semantic Web is meant to enable machines to understand metadata. This holds enormous potential and could fundamentally change how today's internet is used. The Semantic Web is, however, still in its infancy; several open and contentious issues remain to be resolved. Its foundation is formed by the Resource Description Framework (RDF), on which this thesis concentrates. The main goal of my work was to improve the functionality and usability of RDF storage and query systems, with general use in an information portal or an internet search engine in the foreground. My considerations were implemented in the storage system RDF-Source related Storage System (RDF-S3) and the query language easy RDF Query Language (eRQL) built on top of it. In particular, the following key points were taken into account: • General usability of the query language, so that even inexperienced users can formulate queries quickly and easily. To be usable by inexperienced users, a complex syntax, as found in most existing query languages, could not be used. The design therefore follows the query languages of existing search engines. Accordingly, so-called one-word queries, corresponding to search terms, play an important role. To formulate more targeted queries, however, the schema information of the stored data is very important; here the RDF Query Language (RQL) already offers many helpful shorthand notations, on which eRQL draws. • Provision of credible metadata, so that query results can be trusted. The Semantic Web is a distributed system in which no control can be exercised over the data sources. The data therefore cannot simply be trusted. This is different for metadata generated by one's own systems: one knows how they were produced and can trust them accordingly. A clear separation between the data and the metadata about them is important, since otherwise a deliberate imitation of the metadata from outside (search engine spamming) could undermine the system. For the credibility of query results, the origin of the data and its currency are decisive above all. The developments implemented for this thesis therefore concentrate on this information. In RDF-S3 the link between an RDF statement and its provenance data is represented in the storage model, which allows this data to be exploited specifically in eRQL queries. Through the so-called document mode, eRQL offers the possibility of limiting queries to a group of sources or of excluding particular untrustworthy sources. The provenance data can also extend the query result and thereby increase the understanding of and confidence in the result. • Query results can be extended by their neighbourhood so that they can be understood better. eRQL queries can take the neighbourhood of the hits (RDF statements) into account and display it in the result. This increases the understanding of the results. Furthermore, new possibilities arise, such as finding paths between partial results of a query. • Support and combination of data and schema queries. eRQL supports both query types and allows them to be combined sensibly.
Including the neighbourhood opens up new possibilities for the combination of data and schema queries. Both data and schema queries (and their combination) are optimally supported by the storage model of RDF-S3. Further noteworthy properties of RDF-S3 and eRQL are: • Because individual sources can be selectively removed or updated, RDF-S3 offers good maintainability of the stored data. • RDF-S3 and eRQL are implemented 100% in Java, so they can be used independently of the operating system. • Database access is via JDBC, and no special features of the RDBMS used are required, which ensures high portability. RDF-S3 and eRQL were developed as example implementations. For productive use, the systems should be adapted to the given hardware environment and use case. Chapter 6 lists extensions and possible changes that should be considered depending on the situation. A remaining problem for productive use on large data sets is the expensive computation of the neighbourhoods of query results. Computing neighbourhoods in advance could be a solution here, but this is made more difficult by the possibility of restricting queries to trustworthy sources.
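The core idea of keeping every RDF statement linked to its source can be illustrated independently of RDF-S3, for example with named graphs in Python's rdflib (this is only an analogy; it is neither the RDF-S3 storage model nor eRQL syntax, and the example triples are invented):

from rdflib import Dataset, URIRef, Literal, Namespace

EX = Namespace("http://example.org/")
ds = Dataset()

# statements from two different sources, kept in separate named graphs
trusted = ds.graph(URIRef("http://example.org/source/trusted"))
unknown = ds.graph(URIRef("http://example.org/source/unknown"))
trusted.add((EX.Frankfurt, EX.population, Literal(730000)))
unknown.add((EX.Frankfurt, EX.population, Literal(99)))

# "document mode" analogue: restrict a query to statements from trusted sources
for s, p, o in trusted:
    print(s, p, o)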
This paper describes the ongoing efforts of the authors to present ancient Greek and Roman numismatic data on the public internet, with an emphasis on efforts to integrate information from multiple sources using Linked Data and Semantic Web techniques. By way of very modern metaphor, it is useful to think of coins as intentionally created packages of 'named entities'. Each coin was struck by a particular authority, often at a known site, and coins often make reference to familiar concepts such as deities, historical events, or symbols that were widely recognized in the ancient world. The institutions represented among the authors have deployed search interfaces that allow users to take advantage of this aspect of numismatic databases. The American Numismatic Society's database provides faceted search to its collection of over 550,000 objects. The Portable Antiquities Scheme (PAS) in the UK presents individual finds (and hoards) recorded throughout the country. The Römisch-Germanische Kommission and the University of Frankfurt (DBIS) are developing a prototype metaportal (INTERFACE) that accesses national databases of coin finds held in Frankfurt, Vienna and Utrecht. Each of these resources is beginning to explore Semantic Web/Linked Data approaches so that the role of numismatic standards is immediately coming to the fore. DBIS and INTERFACE are developing a numismatic ontology. At the ANS and PAS, the public database already presents RDF serializations based on Dublin Core. Together, the authors have begun to explore standardization of conceptual names on the basis of the vocabulary presented at the site http://nomisma.org . Nomisma.org is a collaborative effort to provide stable digital representations of numismatic concepts and entities. It provides URIs for such basic concepts as 'coin', 'mint', 'axis'. All of these are defined within the scope of numismatics but are already being linked to other stable resources where available. This is particularly the case for mints. For example, the URI http://nomisma.org/id/corinth is intended to represent that ancient city in its role as a minter/issuer of coins. The URI is linked via the SKOS ontology to the Pleiades Gazetteer of ancient places. This allows Nomisma to be the basis for a common representation of the concept that an object is a coin minted at Corinth. The ANS has already deployed such relationships in its public database. The work of all these projects is very much in progress, so this paper hopes to generate discussion on how multiple large projects can move forward in their own work while encouraging sufficient commonality to support large-scale research questions undertaken by diverse audiences.
RDF is widely used to catalogue the chaos of data across the internet. But these descriptions must be stored, evaluated, analyzed and verified, which creates the need for an environment that supports these tasks and strengthens RDF's influence. InterSystems' post-relational database Caché exposes many features that are similar to RDF and provides persistence with a semantic component. Some RDF models for relational databases exist, but these lack features such as object-oriented data structures and multidimensional variables. The aim of this thesis is to develop an RDF model for Caché that stores RDF data in an object-oriented form. Furthermore, an interface for importing RDF data is presented and implemented.
Iconographic representations on ancient artifacts are described in many existing databases and in the literature as human-readable text. We applied Natural Language Processing (NLP) approaches in order to extract the semantics from these textual descriptions and thereby enable semantic searches over them. This allows more sophisticated requests than the common existing keyword searches. As we show in our experiments based on numismatic datasets, the approach is generic in the sense that once the system is trained on one dataset, it can be applied without any further manual work to datasets with similar content. Of course, additional adaptations would further improve the results. Since the approach requires manual work only during the training phase, it can easily be applied to huge datasets without major extra costs. In fact, in our experience bigger datasets generate even better results because there is more data for training. Since our approach is not bound to a certain domain and the numismatic datasets are just an example, it could serve as a blueprint for many other areas. It could also help to build bridges between disciplines, since textual iconographic descriptions are also found for pottery, sculpture and elsewhere.
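As a rough illustration of the general idea of training once on annotated descriptions and then applying the model to unseen but similar texts, the following sketch uses spaCy-style named entity recognition. The toolkit, the model name, the entity labels and the example sentence are all assumptions; this is not the system described in the paper.

```python
# Minimal sketch: apply a trained NER model to an iconographic description
# and turn the recognized entities into structured, searchable facts.
# Assumptions: spaCy as the NLP toolkit, a hypothetical model "icono_ner_model",
# and invented labels (DEITY, OBJECT, POSTURE).
import spacy

nlp = spacy.load("icono_ner_model")  # hypothetical model trained on annotated descriptions

description = "Athena standing left, holding spear and shield; owl at her feet."
doc = nlp(description)

facts = [(ent.label_, ent.text) for ent in doc.ents]
print(facts)
# e.g. [("DEITY", "Athena"), ("POSTURE", "standing left"),
#       ("OBJECT", "spear"), ("OBJECT", "shield"), ("OBJECT", "owl")]

# Such structured facts can then back a semantic search,
# e.g. "all coins showing a deity holding a spear".
```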
Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms are shaping recurrent networks into effective information processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural models. In the present thesis, we introduce several recurrent network models of threshold units that combine spike timing dependent plasticity with homeostatic plasticity mechanisms like intrinsic plasticity or synaptic normalization. We investigate how these different forms of plasticity shape the dynamics and computational properties of recurrent networks. The networks receive input sequences composed of different symbols and learn the structure embedded in these sequences in an unsupervised manner. Information is encoded in the form of trajectories through a high-dimensional state space reminiscent of recent biological findings on cortical coding. We find that these self-organizing plastic networks are able to represent and "understand" the spatio-temporal patterns in their inputs while maintaining their dynamics in a healthy regime suitable for learning. The emergent properties are not easily predictable on the basis of the individual plasticity mechanisms at work. Our results underscore the importance of studying the interaction of different forms of plasticity on network behavior.
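A toy version of such a network, with binary threshold units, a simple spike-timing-dependent weight update, synaptic normalization and an intrinsic-plasticity-like threshold adaptation driven by a symbol sequence, might look as follows. This is a schematic sketch with invented parameters, not one of the models analyzed in the thesis.

```python
# Schematic sketch of a recurrent network of binary threshold units with
# STDP-like updates, synaptic normalization and intrinsic plasticity.
# All parameters and the symbol encoding are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, n_symbols, steps = 100, 4, 1000
eta_stdp, eta_ip, target_rate = 0.001, 0.01, 0.1

W = rng.random((N, N)) * 0.1               # recurrent weights
np.fill_diagonal(W, 0.0)
W_in = rng.random((N, n_symbols)) * 0.5    # input weights, one column per symbol
T = np.full(N, 0.5)                        # per-unit thresholds (intrinsic plasticity)

x = np.zeros(N)                            # current binary network state
sequence = rng.integers(0, n_symbols, size=steps)  # random symbol sequence

for s in sequence:
    u = np.zeros(n_symbols); u[s] = 1.0
    x_new = ((W @ x + W_in @ u - T) > 0).astype(float)

    # STDP-like update: strengthen W[i, j] when unit j was active just before unit i
    # fires, weaken it for the reverse temporal order.
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, None)

    # Synaptic normalization: keep each unit's incoming weights summing to one.
    W /= W.sum(axis=1, keepdims=True) + 1e-12

    # Intrinsic plasticity: adapt thresholds so units fire near a target rate.
    T += eta_ip * (x_new - target_rate)

    x = x_new
```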
We present a framework for the self-organized formation of high-level learning by statistical preprocessing of features. The paper focuses first on the formation of the features in the context of layers of feature processing units, as a kind of resource-restricted associative multiresolution learning. We claim that such an architecture must reach maturity by basic statistical proportions, optimizing the information processing capabilities of each layer. The final symbolic output is learned by pure association of features of different levels and kinds of sensory input. Finally, we show that common error-correction learning for motor skills can also be accomplished by non-specific associative learning. Keywords: feedforward network layers, maximal information gain, restricted Hebbian learning, cellular neural nets, evolutionary associative learning
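The keyword "restricted Hebbian learning" for layer-wise feature formation can be illustrated with the classic Oja-style update, in which a unit learns a normalized feature direction from its input statistics. This is a generic textbook sketch with invented data, not the architecture of the paper.

```python
# Generic sketch of Hebbian feature learning with built-in weight normalization
# (Oja's rule); parameters and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
eta, dim, steps = 0.01, 10, 5000
w = rng.normal(size=dim); w /= np.linalg.norm(w)

cov_dir = rng.normal(size=dim)  # dominant direction of the synthetic input statistics
for _ in range(steps):
    x = cov_dir * rng.normal() + 0.1 * rng.normal(size=dim)  # correlated input sample
    y = w @ x                          # unit activation
    w += eta * y * (x - y * w)         # Hebbian growth with normalizing decay

# w converges (up to sign) towards the dominant input direction,
# i.e. the feature carrying the most variance.
print(np.abs(w @ cov_dir) / np.linalg.norm(cov_dir))
```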
This thesis investigates the distribution of time-dependent tasks in a distributed system from the perspective of Organic Computing. It contributes to the theory of scheduling and to the self-organizing distribution of such dependent tasks under real-time conditions. The thesis is divided into two parts: In the first part, tasks are modelled as so-called paths, which consist of a fixed sequence of jobs. A path must be executed on a resource without interruption, and the order of its jobs must be respected. Naturally, there can also be temporal dependencies between jobs of different paths. This raises the question of whether a given system S of paths with its dependencies is executable at all: this is the case exactly when the relation <A resulting from the dependencies between the jobs is irreflexive. Furthermore, for an executable system of paths it must be clarified what a concrete execution plan looks like. For this purpose, a further relation < is introduced on the paths. If < is irreflexive on them, a total order on the paths can be generated, which yields an execution plan. Otherwise there exist cycles of paths with respect to the relation <. The thesis further investigates how these cycles can be isolated and how a total order, and hence an execution plan, can be constructed on a transformed path system. The size of the cycles of paths with respect to < is the most important parameter for the number of resources needed to execute a system. Therefore the thesis also studies in detail whether and how cycles can be arranged in order to reduce the number of resources and thus optimize the resource requirements. Two ideas are pursued: First, a library can be built containing generic cycles together with their optimizations. The second idea applies when no suitable entries can be found in the library: here the arrangement is chosen randomly or based on a heuristic, with the goal of optimizing the resource requirements. Based on the theoretical considerations, algorithms are developed and time bounds for their execution are given. Since the execution time of a path system is also important, two recursions are given and analyzed; they estimate the total execution time under the condition that no disturbances can occur at the resources. The distribution of paths onto resources is investigated in the second part of the thesis. First, an artificial hormone system (KHS) is presented that performs a distribution respecting the properties of Organic Computing. Two alternatives are examined: In the first approach, the single-level KHS, the paths of a system are distributed directly by the KHS onto the resources for execution. In addition, mechanisms for limiting how often paths are taken over by resources and a termination mechanism are developed. In the second approach, the two-level KHS, the KHS first reserves resources exclusively for classes of paths. The paths of the system are then assigned exactly to the reserved resources, so that execution without interference between paths of different classes is possible. Here, too, methods for limiting take-over frequencies and for termination are provided.
Time bounds can be given for the distribution and termination of paths by the single-level or two-level KHS, so that even hard real-time deadlines can be met. Finally, both approaches are evaluated with various benchmarks and their performance is demonstrated. It turns out that the first approach is easier for a user to handle, since the required parameters can be computed very easily. The second approach is well suited when only a small number of resources is available and paths of different classes should run as independently of each other as possible. Conclusion: The results of this thesis make it possible to analyze the executability of time-dependent tasks with real-time-capable algorithms and to optimize the resource requirements for their execution. Furthermore, two different variants of an artificial hormone system for allocating such tasks in a distributed system are provided; each unfolds its strengths under different boundary conditions, and together they cover a broad range of applications. Bounds can be given for the computational effort of both approaches, which qualifies them for use in real-time systems.
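The executability test and plan construction described in the first part, namely that a system of paths is executable exactly when the induced dependency relation is acyclic and that a total order of the paths then yields an execution plan, can be sketched as a standard cycle check plus topological sort. The data structures below (paths as identifiers, dependencies as pairs) are invented for illustration and are not the thesis's formal model.

```python
# Minimal sketch: check that the dependency relation between paths is acyclic
# and, if so, derive an execution plan (a total order of the paths).
# The representation of paths and dependencies is invented for illustration.
from collections import defaultdict, deque

def execution_plan(paths, dependencies):
    """paths: list of path ids; dependencies: list of (p, q) meaning p must run before q.
    Returns a total order of the paths, or None if the relation contains a cycle."""
    succ = defaultdict(list)
    indeg = {p: 0 for p in paths}
    for p, q in dependencies:
        succ[p].append(q)
        indeg[q] += 1

    ready = deque(p for p in paths if indeg[p] == 0)
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for q in succ[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                ready.append(q)

    return order if len(order) == len(paths) else None  # None: cyclic, no direct plan

# Example: three paths, P1 before P2, P2 before P3.
print(execution_plan(["P1", "P2", "P3"], [("P1", "P2"), ("P2", "P3")]))  # ['P1', 'P2', 'P3']
print(execution_plan(["P1", "P2"], [("P1", "P2"), ("P2", "P1")]))        # None (cycle)
```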
Embedded systems are computer systems that are embedded in a technical environment and perform their work there. Present and future embedded systems are characterized by appearing in ever greater numbers in industry, in households and offices, in trains and aircraft, and in many other environments. They are often highly interconnected and must be highly dependable in order to avoid accidents and thus protect operators and users from harm. Mastering these embedded systems is usually highly complex, since networking makes a multitude of components cooperate, which makes it hard for the user to keep an overview. With regard to dependability, it is important that these systems react to errors and unforeseen situations within defined time bounds in order to avoid damage.
Self-organization is nowadays regarded as a suitable means to master the challenges that arise from the commissioning, use and maintenance of complex embedded systems. The contribution of this thesis is an investigation of self-organizing embedded systems:
The first part gives an overview of the current state of research on embedded systems and on self-organization for embedded systems. It describes the idea of Organic Computing, which deals with self-organization principles in IT systems, and outlines current research trends in this area.
The second part of the thesis presents the author's own work in the field of self-organizing embedded systems. It covers various aspects of an artificial hormone system (KHS) that can be used for the self-organized distribution of tasks onto a set of networked processors. On the one hand, fundamental definitions of Organic Computing are examined and assessed with respect to the KHS. On the other hand, new learning techniques for the KHS, inspired by machine learning, are investigated. In addition, a multi-level KHS is developed and evaluated in order to enable the assignment of a very large number of tasks (≥ 1000) onto a very large number of processors (≥ 10000).
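The hormone-based allocation idea behind the KHS can be caricatured as follows: each processor bids for a task with an eager value that is lowered by suppressor hormones for load it already carries and raised by accelerator hormones for related tasks it already holds, and the highest bidder takes the task. The rule set and all parameters below are invented simplifications, not the KHS evaluated in the thesis.

```python
# Highly simplified sketch of hormone-style self-organized task allocation:
# processors bid with an eager value that is reduced by suppressors for load
# already taken and increased by accelerators for related tasks held locally.
# All names, rules and parameters are invented for illustration.
import random

random.seed(0)
processors = ["P0", "P1", "P2"]
tasks = [f"T{i}" for i in range(9)]
related = {("T0", "T1"), ("T1", "T2")}  # tasks that benefit from co-location

suitability = {(p, t): random.uniform(0.5, 1.0) for p in processors for t in tasks}
assignment = {p: [] for p in processors}

def eager_value(p, t):
    suppressor = 0.1 * len(assignment[p])                  # load already taken lowers the bid
    accelerator = 0.2 * sum(1 for u in assignment[p]
                            if (u, t) in related or (t, u) in related)
    return suitability[(p, t)] - suppressor + accelerator

for t in tasks:
    winner = max(processors, key=lambda p: eager_value(p, t))  # highest hormone level takes the task
    assignment[winner].append(t)

print(assignment)
```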
We present an efficient variant of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82]. We organize LLL-reduction in segments of size k. Local LLL-reduction of segments is done using local coordinates of dimension 2k. Strong segment LLL-reduction yields bases of the same quality as LLL-reduction, but the reduction is n times faster for lattices of dimension n. We extend segment LLL-reduction to iterated subsegments. The resulting reduction algorithm runs in O(n^3 log n) arithmetic steps for integer lattices of dimension n with basis vectors of length 2^{O(n)}, compared to O(n^5) steps for LLL-reduction.
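For reference, the classical conditions for an LLL-reduced basis, which the segment variant is measured against, can be stated as follows. This is standard textbook background (with the usual choice δ = 3/4), not the segment-specific machinery of the paper.

```latex
% A basis b_1,...,b_n with Gram-Schmidt orthogonalization b_1^*,...,b_n^* and
% coefficients \mu_{i,j} = \langle b_i, b_j^*\rangle / \langle b_j^*, b_j^*\rangle
% is LLL-reduced (for a parameter \delta, usually 3/4) if
\begin{align*}
  |\mu_{i,j}| &\le \tfrac{1}{2} && \text{for all } 1 \le j < i \le n \quad\text{(size-reduced)},\\
  \delta\,\|b_{k-1}^*\|^2 &\le \|b_k^*\|^2 + \mu_{k,k-1}^2\,\|b_{k-1}^*\|^2 && \text{for } k = 2,\dots,n \quad\text{(Lov\'asz condition)}.
\end{align*}
```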
Exhaustive, automatic testing of dataflow (esp. mapreduce) programs has emerged as an important challenge. Past work demonstrated effective ways to generate small example data sets that exercise operators in the Pig platform, used to generate Hadoop map-reduce programs. Although such prior techniques attempt to cover all cases of operator use, in practice they often fail. Our SEDGE system addresses these completeness problems: for every dataflow operator, we produce data aiming to cover all cases that arise in the dataflow program (e.g., both passing and failing a filter). SEDGE relies on transforming the program into symbolic constraints, and solving the constraints using a symbolic reasoning engine (a powerful SMT solver), while using input data as concrete aids in the solution process. The approach resembles dynamic-symbolic (a.k.a. "concolic") execution in a conventional programming language, adapted to the unique features of the dataflow domain.
In third-party benchmarks, SEDGE achieves higher coverage than past techniques for 5 out of 20 PigMix benchmarks and 7 out of 11 SDSS benchmarks (with equal coverage for the rest of the benchmarks). We also show that our targeting of the high-level dataflow language pays off: for complex programs, state-of-the-art dynamic-symbolic execution at the level of the generated map-reduce code (instead of the original dataflow program) requires many more test cases or achieves much lower coverage than our approach.
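The core move of the approach, turning a dataflow operator into symbolic constraints and asking an SMT solver for input records that cover both outcomes, can be sketched for a single filter operator using Z3's Python bindings. The filter predicate and record shape are invented; SEDGE itself targets Pig programs and the generated map-reduce code, not this toy.

```python
# Sketch: derive example records that pass and fail a filter "age > 30"
# by handing the predicate (and its negation) to an SMT solver.
# Assumptions: the z3-solver Python bindings; the predicate and schema are invented.
from z3 import Int, Solver, sat

age = Int("age")

def example_record(constraint):
    s = Solver()
    s.add(constraint)
    if s.check() == sat:
        return {"age": s.model()[age].as_long()}
    return None  # predicate unsatisfiable: this branch cannot be covered

passing = example_record(age > 30)    # record that passes the filter
failing = example_record(age <= 30)   # record that fails it
print(passing, failing)
```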
Assuming a cryptographically strong cyclic group G of prime order q and a random hash function H, we show that ElGamal encryption with an added Schnorr signature is secure against the adaptive chosen ciphertext attack, in which an attacker can freely use a decryption oracle except for the target ciphertext. We also prove security against the novel one-more-decryption attack. Our security proofs are in a new model, corresponding to a combination of two previously introduced models, the Random Oracle model and the Generic model. The security extends to the distributed threshold version of the scheme. Moreover, we propose a very practical scheme for private information retrieval that is based on blind decryption of ElGamal ciphertexts.
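For orientation, a minimal sketch of ElGamal encryption with an added Schnorr signature (often called signed ElGamal) in the notation of the abstract: a group G of prime order q with generator g, public key h = g^x, and a random hash function H. This is the textbook form of the scheme, written down for illustration; the paper's exact formulation may differ in details.

```latex
% Signed ElGamal encryption of a message m \in G under public key h = g^x:
% pick r, s \leftarrow \mathbb{Z}_q at random and compute
\begin{align*}
  c_1 &= g^{r}, \qquad c_2 = m \cdot h^{r},\\
  c   &= H(g^{s},\, c_1,\, c_2), \qquad z = s + c\,r \bmod q .
\end{align*}
% The ciphertext is (c_1, c_2, c, z). The receiver first verifies the Schnorr
% signature, c = H(g^{z} c_1^{-c},\, c_1,\, c_2), and only then decrypts m = c_2 / c_1^{x}.
```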
We introduce novel security proofs that use combinatorial counting arguments rather than reductions to the discrete logarithm or to the Diffie-Hellman problem. Our security results are sharp and clean, with no polynomial reduction times involved. We consider a combination of the random oracle model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks. In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: a generic adversary performing t generic steps including l sequential interactions with the signer cannot produce l+1 signatures with a better probability than \binom{t}{2}/q. We also characterize the different power of sequential and of parallel attacks. Secondly, we prove that signed ElGamal encryption is secure against the adaptive chosen ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except for the challenge ciphertext. Moreover, signed ElGamal encryption is secure against the one-more decryption attack: a generic adversary performing t generic steps including l interactions with the decryption oracle cannot distinguish the plaintexts of l+1 ciphertexts from random strings with a probability exceeding \binom{t}{2}/q.
We present a novel parallel one-more signature forgery against blind Okamoto-Schnorr and blind Schnorr signatures in which an attacker interacts l times with a legitimate signer and produces from these interactions l+1 signatures. Security against the new attack requires that the following ROS-problem is intractable: find an overdetermined, solvable system of linear equations modulo q with random inhomogeneities (right sides). There is an inherent weakness in the security result of Pointcheval and Stern: Theorem 26 [PS00] does not cover attacks with 4 parallel interactions for elliptic curves of order 2^200. That would require the intractability of the ROS-problem, a plausible but novel complexity assumption. Conversely, assuming the intractability of the ROS-problem, we show that Schnorr signatures are secure in the random oracle and generic group model against the one-more signature forgery.
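For completeness, one common way to state the ROS-problem referenced above is the following; this follows the usual formulation in the literature, and the exact parametrization in the paper may differ.

```latex
% ROS-problem (Random inhomogeneities in an Overdetermined Solvable system):
% given a prime q, a parameter l and a random oracle H mapping vectors to \mathbb{Z}_q,
% find l+1 distinct vectors a_1,\dots,a_{l+1} \in \mathbb{Z}_q^{\,l} and a vector
% c = (c_1,\dots,c_l) \in \mathbb{Z}_q^{\,l} such that the overdetermined system
\begin{equation*}
  \sum_{i=1}^{l} a_{k,i}\, c_i \;\equiv\; H(a_k) \pmod{q}, \qquad k = 1,\dots,l+1,
\end{equation*}
% is satisfied. A solver for this problem yields the parallel one-more forgery described above.
```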
Let G be a finite cyclic group with generator \alpha and with an encoding so that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given \exp_{\alpha}(x), assuming that the exponentiation function \exp_{\alpha}(x) = \alpha^x is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q, the security of the least-significant bits of x and of the most-significant bits of the rational number \frac{x}{q} \in [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive shift bits lsb(2^{-i}x mod q) for i=k+1,...,k+j. When we restrict \exp_{\alpha} to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x) we call it a 2^{-j}-fraction of \exp_{\alpha}. For groups of odd group order q we show that every two 2^{-j}-fractions of \exp_{\alpha} are equally one-way by a polynomial time transformation: either they are all one-way or none of them. Our key theorem shows that arbitrary j consecutive shift bits of x are simultaneously secure when given \exp_{\alpha}(x) iff the 2^{-j}-fractions of \exp_{\alpha} are one-way. In particular this applies to the j least-significant bits of x and to the j most-significant bits of \frac{x}{q} \in [0,1). For one-way \exp_{\alpha} the individual bits of x are secure when given \exp_{\alpha}(x) by the method of Håstad and Näslund [HN98]. For groups of even order 2^{s}q we show that the j least-significant bits of \lfloor x/2^s\rfloor, as well as the j most-significant bits of \frac{x}{q} \in [0,1), are simultaneously secure iff the 2^{-j}-fractions of \exp_{\alpha'} are one-way for \alpha' := \alpha^{2^s}. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of \exp_{\alpha} for the case that \alpha has prime order q. As a consequence, arbitrary segments of (1-\varepsilon)\lg q consecutive shift bits of random x are for constant \varepsilon > 0 simultaneously secure against generic attacks. Every generic algorithm using t generic steps (group operations) for distinguishing bit strings of j consecutive shift bits of x from random bit strings has at most advantage O((\lg q)\, j\sqrt{t}\, (2^j/q)^{1/4}).
Correction to: C.P. Schnorr: Security of 2^t-Root Identification and Signatures, Proceedings CRYPTO '96, Springer LNCS 1109, (1996), pp. 143-156; page 148, section 3, line 5 of the proof of Theorem 3. The correction was presented as "Factoring N via proper 2^t-Roots of 1 mod N" at the Eurocrypt '97 rump session.
The measurement of azimuthal correlations of charged particles is presented for Pb-Pb collisions at √sNN = 2.76 TeV and p-Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the CERN Large Hadron Collider. These correlations are measured for the second, third and fourth order flow vector in the pseudorapidity region |η| < 0.8 as a function of centrality and transverse momentum pT using two observables, to search for evidence of pT-dependent flow vector fluctuations. For Pb-Pb collisions at 2.76 TeV, the measurements indicate that pT-dependent fluctuations are only present for the second order flow vector. Similar results have been found for p-Pb collisions at 5.02 TeV. These measurements are compared to hydrodynamic model calculations with event-by-event geometry fluctuations in the initial state to constrain the initial conditions and transport properties of the matter created in Pb-Pb and p-Pb collisions.