Results on the production of ⁴He and ¯⁴He nuclei in Pb-Pb collisions at √sNN = 2.76 TeV in the rapidity range |y| < 1, using the ALICE detector, are presented in this paper. The rapidity densities corresponding to 0-10% central events are found to be dN/dy(⁴He) = (0.8 ± 0.4 (stat) ± 0.3 (syst)) × 10⁻⁶ and dN/dy(¯⁴He) = (1.1 ± 0.4 (stat) ± 0.2 (syst)) × 10⁻⁶, respectively. This is in agreement with the statistical thermal model expectation assuming the same chemical freeze-out temperature (Tchem = 156 MeV) as for light hadrons. The measured ratio of ¯⁴He/⁴He is 1.4 ± 0.8 (stat) ± 0.5 (syst).
The transverse momentum (pT) differential yields of (anti-)³He and (anti-)³H measured in p-Pb collisions at √sNN = 5.02 TeV with ALICE at the Large Hadron Collider (LHC) are presented. The ratios of the pT-integrated yields of (anti-)³He and (anti-)³H to the proton yields are reported, as well as the pT dependence of the coalescence parameters B3 for (anti-)³He and (anti-)³H. For (anti-)³He, the results obtained in four classes of the mean charged-particle multiplicity density are also discussed. These results are compared to predictions from a canonical statistical hadronization model and coalescence approaches. An upper limit on the total yield of ¯⁴He is determined.
The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). ...
The first measurement at the LHC of charge-dependent directed flow (v1) relative to the spectator plane is presented for Pb-Pb collisions at √sNN = 5.02 TeV. Results are reported for charged hadrons and D0 mesons for the transverse momentum intervals pT > 0.2 GeV/c and 3 < pT < 6 GeV/c in the 5-40% and 10-40% centrality classes, respectively. The difference between the positively and negatively charged hadron v1 is found to have a positive slope as a function of pseudorapidity η, dΔv1/dη = [1.68 ± 0.49 (stat.) ± 0.41 (syst.)] × 10⁻⁴, with a 2.6σ significance. The same measurement for D0 and ¯D0 mesons yields a positive value dΔv1/dη = [4.9 ± 1.7 (stat.) ± 0.6 (syst.)] × 10⁻¹, which is about three orders of magnitude larger than that of the charged hadrons, and is larger than zero with a significance of 2.7σ. These measurements can provide new insights into the effects of the strong electromagnetic field and the initial tilt of matter created in non-central heavy-ion collisions on the dynamics of light (u, d, and s) and heavy (c) quarks. The large difference between the observed Δv1 of charged hadrons and D0 mesons may reflect the different sensitivity of the charm and light quarks to the early-time dynamics of a heavy-ion collision. These observations challenge some of the recent theoretical calculations incorporating effects of the strong electromagnetic field, which predicted a negative and an order of magnitude smaller value of dΔv1/dη for both light-flavour and charmed hadrons.
The Chiral Magnetic Wave (CMW) phenomenon is essential to provide insights into the strong interaction in QCD, the properties of the quark-gluon plasma, and the topological characteristics of the early universe, offering a deeper understanding of fundamental physics in high-energy collisions. Measurements of the charge-dependent anisotropic flow coefficients are studied in Pb-Pb collisions at a center-of-mass energy per nucleon-nucleon collision of √sNN = 5.02 TeV to probe the CMW. In particular, the slope of the normalized difference in the elliptic (v2) and triangular (v3) flow coefficients of positively and negatively charged particles, as a function of their event-wise normalized number difference, is reported for inclusive and identified particles. The slope rNorm3 is found to be larger than zero and to have a magnitude similar to rNorm2, thus pointing to a large background contribution in these measurements. Furthermore, rNorm2 can be described by a blast wave model calculation that incorporates local charge conservation. In addition, using the event shape engineering technique yields a fraction of the CMW contribution (fCMW) to this measurement which is compatible with zero. This measurement provides the very first upper limit for fCMW: in the 10-60% centrality interval it is found to be 26% (38%) at 95% (99.7%) confidence level.
This Letter presents the first measurement of event-by-event fluctuations of the net number (the difference between the particle and antiparticle multiplicities) of the multistrange hadrons Ξ− and ¯Ξ+ and its correlation with the net-kaon number, using the data collected by the ALICE Collaboration in pp, p-Pb, and Pb-Pb collisions at a center-of-mass energy per nucleon pair of √sNN = 5.02 TeV. The statistical hadronization model with a correlation over three units of rapidity between hadrons having the same and opposite strangeness content successfully describes the results. On the other hand, string-fragmentation models, which mainly correlate strange hadrons with opposite strange-quark content over a small rapidity range, fail to describe the data.
We investigate unary regular languages and compare deterministic finite automata (DFAs), nondeterministic finite automata (NFAs), and probabilistic finite automata (PFAs) with respect to their size. Given a unary PFA with n states and an ε-isolated cutpoint, we show that the minimal equivalent DFA has at most n^(1/(2ε)) states in its cycle. This result is almost optimal, since for any α < 1 a family of PFAs can be constructed such that every equivalent DFA has at least n^(α/(2ε)) states. Thus we show that for the model of probabilistic automata with a constant error bound, there is only a polynomial blowup for cyclic languages. Given a unary NFA with n states, we show that efficiently approximating the size of a minimal equivalent NFA within the factor √n / ln n is impossible unless P = NP. This result even holds under the promise that the accepted language is cyclic. On the other hand, we show that we can approximate a minimal NFA within the factor ln n if we are given a cyclic unary n-state DFA.
The generally increasing complexity of technical systems is also evident in embedded systems. In addition, the feature sizes of the components used keep shrinking, which in turn increases the probability of various effects that can lead to faults and failures of these components and thus of the overall system. Since safety requirements must also be met in many application domains, flexible redundancy concepts are necessary to guarantee reliability.
One research field concerned with methods for mastering system complexity is Organic Computing. Within its scope, concepts are investigated for transferring properties and organizational principles observable in natural systems to technical systems. So-called self-X properties, such as self-organization, self-configuration, and self-healing, are of particular importance here.
A concrete instance of this line of research is the artificial hormone system (AHS). The AHS is a middleware for distributed systems that autonomously distributes the system's tasks across its processing elements (PEs) and, in particular, automatically compensates for failures of individual tasks or entire PEs by migrating the affected tasks to other PEs. There is no central instance controlling the task distribution that could constitute a single point of failure. Owing to its automatic (re)configuration of tasks, the AHS can therefore be called self-configuring and self-healing, which in particular increases the reliability of the realized system. Moreover, the duration of self-configuration and self-healing is subject to hard time bounds, which also permits the use of the AHS in real-time systems.
However, the AHS assumes that all tasks are of equal importance, and all tasks are distributed to the individual PEs in random order at system startup. In practice, the tasks to be executed in a system often differ in their importance for the overall system, or even have to be started in a specific order.
To take these properties into account, this dissertation makes the following contributions beyond the current state of research:
First, the previously known time bounds of the AHS are examined more closely and refined.
Next, the AHS is extended by introducing allocation priorities: these priorities define an order in which the tasks are distributed to the PEs at system startup and in which affected tasks are migrated to other PEs after a failure.
The time bounds of this AHS extension are analyzed in detail.
Prioritizing tasks makes it possible to implicitly define subsets of tasks that should be executed if, after a certain number of PE failures, the system's computing capacity no longer suffices to execute all tasks: in such overload situations, the extensions developed in this dissertation allow the system to degrade automatically and in a controlled manner, so that the most important system functionalities remain operational.
Overload situations are therefore examined and analyzed in detail. In such situations, tasks of low priority may have to be stopped in order to free up enough computing capacity on the remaining functional PEs to execute tasks of higher priority and thus bring the system into a well-defined state. The order in which tasks are stopped is decided by a task-dropping strategy, which accordingly has a major influence on the duration of such a self-healing.
Two different task-dropping strategies are developed and analyzed in detail: the naive task-dropping strategy, which stops all low-priority tasks at once, and eager task dropping, which stops at most one task per PE in each of several phases. A comparison shows that the latter almost always stops fewer tasks than the naive strategy, which allows the self-healing to complete considerably faster; only in a few special cases is the naive strategy superior.
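A minimal sketch contrasting the two strategies follows; the data structures, the threshold/capacity interfaces, and the phase granularity are illustrative assumptions, not the dissertation's actual AHS implementation.

```python
# Hypothetical sketch of the two task-dropping strategies described above.

def naive_dropping(pes, priority_threshold):
    """Naive strategy: stop every task below the threshold at once, on all PEs."""
    dropped = []
    for pe in pes:
        dropped += [t for t in pe["tasks"] if t["prio"] < priority_threshold]
        pe["tasks"] = [t for t in pe["tasks"] if t["prio"] >= priority_threshold]
    return dropped

def eager_dropping(pes, capacity_needed):
    """Eager task dropping: in each phase, stop at most one (lowest-priority)
    task per PE, until enough capacity has been freed."""
    dropped, freed = [], 0
    while freed < capacity_needed:
        progress = False
        for pe in pes:                      # one phase across all PEs
            if freed >= capacity_needed:
                break
            if pe["tasks"]:
                victim = min(pe["tasks"], key=lambda t: t["prio"])
                pe["tasks"].remove(victim)
                dropped.append(victim)
                freed += victim["load"]
                progress = True
        if not progress:                    # nothing left to drop
            break
    return dropped
```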
It is shown in detail that the developed AHS extension can guarantee compliance with certain hard time bounds even in overload situations, which permits the use of the extended AHS in real-time systems.
All theoretically derived time bounds are fully confirmed by comprehensive evaluations.
Finally, the extended, priority-based AHS is compared with various related concepts in order to highlight its advantages over the state of research and to motivate future in-depth research.
We provide elementary algorithms for two preservation theorems for first-order sentences (FO) on the class ℭd of all finite structures of degree at most d: for each FO-sentence that is preserved under extensions (homomorphisms) on ℭd, a ℭd-equivalent existential (existential-positive) FO-sentence can be constructed in 5-fold (4-fold) exponential time. This is complemented by lower bounds showing that a 3-fold exponential blow-up of the computed existential (existential-positive) sentence is unavoidable. Both algorithms can be extended (while maintaining the upper and lower bounds on their time complexity) to input first-order sentences with modulo-m counting quantifiers (FO+MODm). Furthermore, we show that for an input FO-formula, a ℭd-equivalent Feferman-Vaught decomposition can be computed in 3-fold exponential time. We also provide a matching lower bound.
A central concern in genetics is to identify mechanisms of transcriptional regulation. The aim is to unravel the mapping between the DNA sequence and gene expression. However, this mapping has turned out to be extremely complex. Gene regulation is highly cell type-specific, and even moderate changes in gene expression can have functional consequences.
Important contributors to gene regulation are transcription factors (TFs), which can interact directly with the DNA. Often, a first step in understanding the effect of a TF on a gene's regulation is to identify the genomic regions the TF binds to. For this, one needs to know the TF's binding preferences, which are commonly summarized in TF binding motifs. Although the binding motif is experimentally validated for many TFs, there is still a large number of TFs for which no binding motif is known. Many tools exist that link TF binding motifs to TFs. We developed the method Massif, which improves the performance of such tools by incorporating a domain score that uses the DNA binding domain of the studied TF as additional information.
TF binding sites are often enriched in regulatory elements (REMs) such as promoters or enhancers, where the latter can be located megabases away from their target genes. To understand the regulation of a gene, it is therefore crucial to know where the REMs of that gene are located. We introduced the EpiRegio webserver, which holds REMs associated with target genes predicted across many cell types and tissues using STITCHIT, a previously established method. Our publicly available webserver makes it possible to query for REMs associated with genes (gene query) and for REMs overlapping genomic regions (region query). We illustrated the usefulness of EpiRegio by pointing to a TF that occurs enriched in the REMs of differentially expressed genes in circPLOD2-depleted pericytes. Further, we highlighted genes that are affected by CRISPR-Cas-induced mutations in non-coding genomic regions using EpiRegio's region query.
Non-coding genetic variants within REMs may alter gene expression by modifying TF binding sites, which can lead to various kinds of traits or diseases. To understand the underlying molecular mechanisms, one aims to evaluate the effect of such genetic variations on TF binding sites. We developed an accurate and fast statistical approach that can assess whether a single nucleotide polymorphism (SNP) is regulatory. Further, we combined this approach with epigenetic data and additional analyses in our Sneep workflow. For instance, it enables identifying TFs whose binding preferences are affected by the analyzed SNPs, which is illustrated on eQTL datasets for different cell types. Additionally, we used our Sneep workflow to highlight cardiovascular disease genes using regulatory SNPs and REM-gene interactions.
Overall, the described results allow a better understanding of REM-gene interactions and their interplay with TFs in gene regulation.
Exported proteases of Helicobacter pylori (H. pylori) are potentially involved in pathogen-associated disorders leading to gastric inflammation and neoplasia. By comprehensive sequence screening of the H. pylori proteome for predicted secreted proteases, we retrieved several candidate genes. We detected caseinolytic activities of several such proteases, which are released independently of the H. pylori type IV secretion system encoded by the cag pathogenicity island (cagPAI). Among these, we found the predicted serine protease HtrA (Hp1019), which was previously identified in the bacterial secretome of H. pylori. Importantly, we further found that the H. pylori genes hp1018 and hp1019 represent a single gene likely coding for an exported protein. Here, we directly verified the proteolytic activity of HtrA in vitro and identified the HtrA protease in zymograms by mass spectrometry. Overexpressed and purified HtrA exhibited pronounced proteolytic activity, which is abolished by mutation of Ser205 to alanine in the predicted active center of HtrA. These data demonstrate that H. pylori secretes HtrA as an active protease, which might represent a novel candidate target for therapeutic intervention strategies.
The measurement of mass differences for systems bound by the strong force has reached a very high precision with protons and anti-protons. Extending such measurements from (anti-)baryons to (anti-)nuclei allows one to probe any difference in the interactions between nucleons and anti-nucleons encoded in the (anti-)nuclei masses. This force is a remnant of the underlying strong interaction among quarks and gluons and can be described by effective theories, but cannot yet be directly derived from quantum chromodynamics. Here we report a measurement of the difference between the mass-over-charge ratios of deuterons and anti-deuterons, and of ³He and ¯³He nuclei, carried out with the ALICE (A Large Ion Collider Experiment) detector in Pb-Pb collisions at a centre-of-mass energy per nucleon pair of 2.76 TeV. Our direct measurement of the mass-over-charge differences confirms CPT invariance to an unprecedented precision in the sector of light nuclei. This fundamental symmetry of nature, which exchanges particles with anti-particles, implies that all physics laws are the same under the simultaneous reversal of charge(s) (charge conjugation C), reflection of spatial coordinates (parity transformation P), and time inversion (T).
A memory checker for a data structure provides a method to check that the output of the data structure's operations is consistent with the input, even if the data is stored on an insecure medium. In [8] we present a general solution for all data structures that are based on insert(i,v) and delete(j) commands; in particular, this includes stacks, queues, deques (double-ended queues), and lists. Here, we describe more time- and space-efficient solutions for stacks, queues, and deques. Each algorithm takes only a single function evaluation of a pseudorandom-like function such as DES, or of a collision-free hash function such as MD5 or SHA, for each push/pop resp. enqueue/dequeue command, making our methods applicable to smart cards.
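To illustrate the flavour of such a construction, here is a minimal sketch (not the paper's algorithm) of a hash-chain memory checker for a stack: the trusted checker keeps a single digest, the stack contents live on untrusted storage, and each push or pop costs exactly one hash evaluation. SHA-256 stands in for the DES/MD5/SHA primitives mentioned above.

```python
import hashlib

class CheckedStack:
    """Hash-chain memory checker for a stack on insecure storage (a sketch)."""

    def __init__(self):
        self.mem = []                       # untrusted storage
        self.top = b"\x00" * 32             # trusted O(1) checker state

    def push(self, value: bytes):
        self.mem.append((value, self.top))  # store value + previous digest
        self.top = hashlib.sha256(self.top + value).digest()

    def pop(self) -> bytes:
        value, prev = self.mem.pop()        # possibly tampered with
        if hashlib.sha256(prev + value).digest() != self.top:
            raise ValueError("memory checker: inconsistency detected")
        self.top = prev
        return value
```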
Sharing of substructures such as subterms and subcontexts is a common method for the space-efficient representation of terms; it allows, for example, exponentially large terms to be represented in polynomial space, or terms with iterated substructures to be represented in compact form. We present singleton tree grammars (STGs) as a general formalism for the treatment of sharing in terms. Singleton tree grammars are recursion-free context-free tree grammars without alternatives for nonterminals and with at most unary second-order nonterminals. STGs generalize Plandowski's singleton context-free grammars to terms (trees). We show that testing whether two different nonterminals of an STG generate the same term can be done in polynomial time, which implies that the equality test for terms with shared subterms and contexts, where composition of contexts is permitted, can be done in polynomial time in the size of the representation. This allows polynomial-time algorithms for terms that exploit sharing. We hope that this technique will lead to improved upper complexity bounds for variants of second-order unification algorithms, in particular for variants of context unification and bounded second-order unification.
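As a small illustration of why such grammars are useful, the following sketch (the representation and helper names are our own, not the paper's) builds an STG whose 31 rules generate a term with 2³¹ − 1 nodes and computes that size by memoized recursion over the grammar instead of expanding the term:

```python
from functools import lru_cache

# Each nonterminal has exactly one production (no alternatives) and the
# grammar is recursion-free, so per-nonterminal properties can be computed
# in time linear in the grammar size.
grammar = {"A0": ("a",)}                    # A0 -> a  (a constant)
for i in range(1, 31):                       # Ai -> f(Ai-1, Ai-1)
    grammar[f"A{i}"] = ("f", f"A{i-1}", f"A{i-1}")

@lru_cache(maxsize=None)
def term_size(nt: str) -> int:
    prod = grammar[nt]                       # (symbol, *argument nonterminals)
    return 1 + sum(term_size(arg) for arg in prod[1:])

print(term_size("A30"))                      # 2**31 - 1 nodes from 31 rules
```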
Polarization of Λ and ¯Λ hyperons along the beam direction in Pb–Pb collisions at √sNN = 5.02 TeV
(2022)
The polarization of the Λ and ¯Λ hyperons along the beam (z) direction, Pz, has been measured in Pb-Pb collisions at √sNN = 5.02 TeV recorded with ALICE at the Large Hadron Collider (LHC). The main contribution to Pz comes from elliptic-flow-induced vorticity and can be characterized by the second Fourier sine coefficient Pz,s2 = ⟨Pz sin(2φ − 2Ψ2)⟩, where φ is the hyperon azimuthal emission angle and Ψ2 is the elliptic flow plane angle. We report the measurement of Pz,s2 for different collision centralities and, in the 30-50% centrality interval, as a function of the hyperon transverse momentum and rapidity. Pz,s2 is positive, similar to what was measured by the STAR Collaboration in Au-Au collisions at √sNN = 200 GeV, but with a somewhat smaller amplitude in semi-central collisions. This is the first experimental evidence of a non-zero hyperon Pz in Pb-Pb collisions at the LHC. The comparison of the measured Pz,s2 with hydrodynamic model calculations shows sensitivity to the competing contributions from thermal and the recently found shear-induced vorticity, as well as to whether the polarization is acquired in the quark-gluon plasma or in the hadronic phase.
Polarization of Λ and ¯Λ hyperons along the beam direction in Pb–Pb collisions at √sNN = 5.02 TeV
(2021)
The polarization of the Λ and ¯Λ hyperons along the beam (z) direction, Pz, has been measured in Pb-Pb collisions at √sNN = 5.02 TeV recorded with ALICE at the Large Hadron Collider (LHC). The largest contribution to Pz comes from elliptic-flow-induced vorticity and can be characterized by the second Fourier sine coefficient Pz,s2 = ⟨Pz sin(2φ − 2Ψ2)⟩, where φ is the hyperon azimuthal emission angle and Ψ2 is the elliptic flow plane angle. We report the measurement of Pz,s2 for different collision centralities and, in the 30-50% centrality interval, as a function of the hyperon transverse momentum and rapidity. Pz,s2 is positive, similar to what was measured by the STAR Collaboration in Au-Au collisions at √sNN = 200 GeV, but with a somewhat smaller amplitude in semi-central collisions. This is the first experimental evidence of a non-zero hyperon Pz in Pb-Pb collisions at the LHC. The comparison of the measured Pz,s2 with hydrodynamic model calculations shows sensitivity to the competing contributions from thermal and the recently found shear-induced vorticity, as well as to whether the polarization is acquired in the quark-gluon plasma or in the hadronic phase.
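For concreteness, the sine coefficient defined above, Pz,s2 = ⟨Pz sin(2φ − 2Ψ2)⟩, is just an average over hyperon candidates. The toy sketch below evaluates it on synthetic data (all numbers are invented) and recovers half of an injected sin(2(φ − Ψ2)) modulation amplitude, as expected from ⟨sin²⟩ = 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
psi2 = 0.3                                   # elliptic flow plane angle (rad)
phi = rng.uniform(0, 2 * np.pi, n)           # hyperon emission angles
# inject a small sin(2(phi - psi2)) modulation into the longitudinal polarization
pz = 0.002 * np.sin(2 * (phi - psi2)) + rng.normal(0, 0.05, n)

pz_s2 = np.mean(pz * np.sin(2 * (phi - psi2)))
print(f"P_z,s2 = {pz_s2:.2e}")               # ~1e-3, half the injected amplitude
```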
Point-Based Animation
(2011)
Point-based animation is a relatively new area within the field of animation. The difference from the widespread polygon-mesh-based methods is that no topology information exists between the individual points that define the surface of the object to be animated. Volume simulation is not possible with polygon-mesh-based techniques, since no volume information is available. Current point-based methods enable the animation of fluids, smoke, or explosion effects. In this thesis, an animation is carried out on the basis of a given point model. To ensure that the animation behaves correctly according to the laws of physics, a physics engine is used; the thesis links these two areas. First, simple simulations in the domain of rigid-body dynamics are performed, in which individual points are dropped onto a plane under the influence of gravity. The computation of the collisions with the plane and among the points themselves is of particular interest here. To animate visually interesting physical effects, elasticity must also be taken into account. Furthermore, the thesis realizes the animation of elastic bodies: an elastic sheet fixed at its corners is animated. Individual points can be pulled out of this elastic object, whereupon the object repairs itself. Likewise, a cut-out point can be reinserted into the object.
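A minimal sketch of the first experiment described above (not the thesis code): point masses fall under gravity onto the plane y = 0, with a restitution coefficient standing in for the elasticity handling; all parameters are illustrative.

```python
import numpy as np

g = np.array([0.0, -9.81, 0.0])              # gravity
restitution = 0.6                            # 0 = fully plastic, 1 = fully elastic
dt = 1e-3                                    # integration time step

pos = np.random.rand(100, 3) * [1.0, 2.0, 1.0] + [0.0, 1.0, 0.0]  # above plane
vel = np.zeros_like(pos)

for _ in range(5000):
    vel += g * dt                            # explicit Euler integration
    pos += vel * dt
    below = pos[:, 1] < 0.0                  # points that penetrated the plane
    pos[below, 1] = 0.0                      # project back onto the plane
    vel[below, 1] *= -restitution            # reflect and damp normal velocity
```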
Retiming is a widely investigated technique for performance optimization that performs powerful modifications on a circuit netlist. However, it is often unclear whether the predicted performance improvement will still be valid after placement has been performed. This paper presents a new retiming algorithm using a highly accurate timing model that takes into account the effect of retiming on the capacitive loads of single wires as well as fanout systems. We propose the integration of retiming into a timing-driven standard cell placement environment based on simulated annealing; retiming is used as an optimization technique throughout the whole placement process. The experimental results show the benefit of the proposed approach: in comparison with the conventional design flow based on standard FEAS, our approach achieved an improvement in cycle time of up to 34% and of 17% on average.
Pion–kaon femtoscopy and the lifetime of the hadronic phase in Pb−Pb collisions at √sNN = 2.76 TeV
(2021)
In this paper, the first femtoscopic analysis of pion-kaon correlations at the LHC is reported. The analysis was performed on the Pb-Pb collision data at √sNN = 2.76 TeV recorded with the ALICE detector. The non-identical particle correlations probe the spatio-temporal separation between the sources of different particle species as well as the average source size of the emitting system. The sizes of the pion and kaon sources increase with centrality, and pions are emitted closer to the centre of the system and/or later than kaons. This is naturally expected in a system with strong radial flow and is qualitatively reproduced by hydrodynamic models. The ALICE data on the pion-kaon emission asymmetry are consistent with a calculation using (3+1)-dimensional viscous hydrodynamics coupled to THERMINATOR 2, a statistical hadronization, resonance propagation, and decay code, with an additional time delay between 1 and 2 fm/c for kaons. The delay can be interpreted as evidence for a significant hadronic rescattering phase in heavy-ion collisions at the LHC.
Pion–kaon femtoscopy and the lifetime of the hadronic phase in Pb−Pb collisions at √sNN = 2.76 TeV
(2020)
In this paper, the first femtoscopic analysis of pion-kaon correlations at the LHC is reported. The analysis was performed on the Pb-Pb collision data at √sNN = 2.76 TeV recorded with the ALICE detector. The non-identical particle correlations probe the spatio-temporal separation between the sources of different particle species as well as the average source size of the emitting system. The sizes of the pion and kaon sources increase with centrality, and pions are emitted closer to the centre of the system and/or later than kaons. This is naturally expected in a system with strong radial flow and is qualitatively reproduced by hydrodynamic models. The ALICE data on the pion-kaon emission asymmetry are consistent with a calculation using (3+1)-dimensional viscous hydrodynamics coupled to THERMINATOR 2, a statistical hadronization, resonance propagation, and decay code, with an additional time delay between 1 and 2 fm/c for kaons. The delay can be interpreted as evidence for a significant hadronic rescattering phase in heavy-ion collisions at the LHC.
An excess of the J/ψ yield at very low transverse momentum (pT < 0.3 GeV/c), originating from coherent photoproduction, is observed in peripheral and semicentral hadronic Pb-Pb collisions at a center-of-mass energy per nucleon pair of √sNN = 5.02 TeV. The measurement is performed with the ALICE detector via the dimuon decay channel at forward rapidity (2.5 < y < 4). The nuclear modification factor at very low pT and the coherent photoproduction cross section are measured as a function of centrality down to the 10% most central collisions. These results extend the previous study at √sNN = 2.76 TeV, confirming the clear excess over hadronic production in the pT range 0−0.3 GeV/c and the centrality range 70−90%, and establishing an excess with a significance greater than 5σ also in the 50−70% and 30−50% centrality ranges. The results are compared with earlier measurements at √sNN = 2.76 TeV and with different theoretical predictions that aim to describe how coherent photoproduction occurs in hadronic interactions with nuclear overlap.
K+K− pairs may be produced in photonuclear collisions, either from the decays of photoproduced ϕ(1020) mesons or directly as non-resonant K+K− pairs. Measurements of K+K− photoproduction probe the couplings of the ϕ(1020) and of charged kaons to photons and nuclear targets. We present the first measurement of coherent photoproduction of K+K− pairs on lead ions in ultra-peripheral collisions using the ALICE detector, including the first investigation of direct K+K− production. There is significant K+K− production at low transverse momentum, consistent with coherent photoproduction on lead targets. In the mass range 1.1 < MKK < 1.4 GeV/c² above the ϕ(1020) resonance, for rapidity |yKK| < 0.8 and pT,KK < 0.1 GeV/c, the measured coherent photoproduction cross section is dσ/dy = 3.37 ± 0.61 (stat.) ± 0.15 (syst.) mb. The centre-of-mass energy per nucleon of the photon-nucleus (Pb) system, WγPb,n, ranges from 33 to 188 GeV, far higher than in previous measurements on heavy-nucleus targets. The cross section is larger than expected for ϕ(1020) photoproduction alone. The mass spectrum is fit to a cocktail consisting of ϕ(1020) decays, direct K+K− photoproduction, and interference between the two. The confidence regions for the amplitude and relative phase angle of direct K+K− photoproduction are presented.
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat posed by ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities, and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem, but they lack effectiveness, as they are hard to enforce in practice. In this paper, we discuss a supportive infrastructure targeting the distribution channel: as soon as a picture is publicly available, the exposed individual has a chance to find it and take proper action.
Performance and storage requirements of topology-conserving maps for robot manipulator control
(1989)
A new programming paradigm for the control of a robot manipulator by learning the mapping between Cartesian space and joint space (the inverse kinematics) is discussed. It is based on Kohonen's neural network model of optimal mappings between two high-dimensional spaces. This paper describes the approach and presents the optimal mapping, based on the principle of maximal information gain. It is shown that Kohonen's mapping in the 2-dimensional case is optimal in this sense. Furthermore, the principal control error made by the learned mapping is evaluated for the example of the commonly used PUMA robot, the trade-off between storage resources and positional error is discussed, and an optimal position-encoding resolution is proposed.
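A compact sketch of the Kohonen-style learning rule underlying this approach: a 2D grid of neurons adapts its weight vectors to sample points of a (here 2D) workspace. The grid size, learning rates, and training data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

grid = 16
w = np.random.rand(grid, grid, 2)            # one weight vector per grid node
ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

for t in range(10_000):
    x = np.random.rand(2)                    # random workspace sample
    d = np.linalg.norm(w - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
    eta = 0.5 * (0.01 / 0.5) ** (t / 10_000)           # decaying learning rate
    sigma = 4.0 * (0.5 / 4.0) ** (t / 10_000)          # shrinking neighborhood
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
    w += eta * h[:, :, None] * (x - w)       # pull neighborhood toward sample
```

After training, the grid forms a topology-conserving discretization of the workspace; in the robot-control setting, each node would additionally store the learned joint angles for its workspace position.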
A generalization of the compressed string pattern match that applies to terms with variables is investigated: given terms s and t compressed by singleton tree grammars, the task is to find an instance of s that occurs as a subterm in t. We show that this problem is in NP and that the task can be performed in time O(n^(c·|Var(s)|)), including the construction of the compressed substitution and a representation of all occurrences. We show that the special case where s is uncompressed can be solved in polynomial time. As a nice application, we show that for an equational deduction from t to t′ by an equality axiom l = r (a rewrite), a single step can be performed in polynomial time in the size of the compression of t, l, and r if the number of variables in l is fixed. We also show that n rewriting steps can be performed in polynomial time if the equational axioms are compressed and assumed to be constant for the rewriting sequence. Another potential application is querying mechanisms on compressed XML databases.
Particle production as a function of charged-particle flattenicity in pp collisions at √s = 13 TeV
(2024)
This paper reports the first measurement of the transverse momentum (pT) spectra of primary charged pions, kaons, (anti)protons, and unidentified particles as a function of the charged-particle flattenicity in pp collisions at √s = 13 TeV. Flattenicity is a novel event-shape observable measured in the pseudorapidity intervals covered by the V0 detector, 2.8 < η < 5.1 and −3.7 < η < −1.7. According to QCD-inspired phenomenological models, it is sensitive to multiparton interactions and is less affected than multiplicity by biases towards larger pT due to local multiplicity fluctuations in the V0 acceptance. The analysis is performed in minimum-bias (MB) as well as in high-multiplicity events up to pT = 20 GeV/c. The event selection requires at least one charged particle produced in the pseudorapidity interval |η| < 1. The measured pT distributions, average pT, and kaon-to-pion and proton-to-pion particle ratios presented in this paper are compared to model calculations using PYTHIA 8, based on color strings, and EPOS LHC. The modification of the pT-spectral shapes in low-flattenicity events, which have large event activity, with respect to those measured in MB events develops a pronounced peak at intermediate pT (2 < pT < 8 GeV/c) and approaches unity at higher pT. The results are qualitatively described by PYTHIA, and they show a different behavior than those measured as a function of the charged-particle multiplicity based on the V0M estimator.
We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian PID approach for charged pions, kaons, and protons in the central barrel of ALICE is studied. PID is performed via measurements of specific energy loss (dE/dx) and time-of-flight. PID efficiencies and misidentification probabilities are extracted and compared with Monte Carlo simulations using high-purity samples of identified particles in the decay channels K0S → π−π+, ϕ → K−K+, and Λ → pπ− in p-Pb collisions at √sNN = 5.02 TeV. In order to thoroughly assess the validity of the Bayesian approach, this methodology was used to obtain corrected pT spectra of pions, kaons, protons, and D0 mesons in pp collisions at √s = 7 TeV. In all cases, the results using Bayesian PID were found to be consistent with previous measurements performed by ALICE using a standard PID approach. For the measurement of D0 → K−π+, it was found that the Bayesian PID approach gave a higher signal-to-background ratio and a similar or larger statistical significance compared with standard PID selections, despite a reduced identification efficiency. Finally, we present an exploratory study of the measurement of Λ+c → pK−π+ in pp collisions at √s = 7 TeV, using the Bayesian approach for the identification of its decay products.
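Schematically, the Bayesian combination described above multiplies abundance priors with per-detector likelihoods and normalizes. The sketch below illustrates this on invented numbers; in the real analysis, the likelihoods come from the dE/dx and time-of-flight detector responses.

```python
import numpy as np

species = ["pion", "kaon", "proton"]
priors = np.array([0.70, 0.20, 0.10])        # assumed relative abundances

likelihoods = {                               # P(measured signal | species)
    "dEdx": np.array([0.30, 0.55, 0.15]),
    "TOF":  np.array([0.20, 0.60, 0.20]),
}

posterior = priors.copy()
for det in likelihoods.values():
    posterior *= det                          # detectors assumed independent
posterior /= posterior.sum()                  # normalize to probabilities

for s, p in zip(species, posterior):
    print(f"P({s} | signals) = {p:.3f}")
```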
Parallel global edge switching for the uniform sampling of simple graphs with prescribed degrees
(2023)
The uniform sampling of simple graphs matching a prescribed degree sequence is an important tool in network science, e.g. to construct graph generators or null-models. Here, the Edge Switching Markov Chain (ES-MC) is a common choice. Given an arbitrary simple graph with the required degree sequence, ES-MC carries out a large number of small changes, called edge switches, to eventually obtain a uniform sample. In practice, reasonably short runs efficiently yield approximate uniform samples.
In this work, we study the problem of executing edge switches in parallel. We discuss parallelizations of ES-MC, but find that this approach suffers from complex dependencies between edge switches. For this reason, we propose the Global Edge Switching Markov Chain (G-ES-MC), an ES-MC variant with simpler dependencies. We show that G-ES-MC converges to the uniform distribution and design shared-memory parallel algorithms for ES-MC and G-ES-MC. In an empirical evaluation, we provide evidence that G-ES-MC requires no more switches than ES-MC (and often fewer), and demonstrate the efficiency and scalability of our parallel G-ES-MC implementation.
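For readers unfamiliar with the chain, a single sequential edge-switch step looks roughly as follows; the representation is our own sketch, not the paper's parallel implementation. Rejecting switches that would create self-loops or multi-edges keeps the graph simple while preserving every vertex degree.

```python
import random

def edge_switch_step(edges: list, edge_set: set) -> bool:
    """Attempt one edge switch; return True iff the switch was applied."""
    i = random.randrange(len(edges))
    j = random.randrange(len(edges))
    a, b = edges[i]
    c, d = edges[j]
    if len({a, b, c, d}) < 4:
        return False                      # same edge or shared endpoint
    if (a, d) in edge_set or (d, a) in edge_set or \
       (c, b) in edge_set or (b, c) in edge_set:
        return False                      # switch would create a multi-edge
    edge_set -= {(a, b), (c, d)}          # apply the switch
    edge_set |= {(a, d), (c, b)}
    edges[i] = (a, d)
    edges[j] = (c, b)
    return True

# Usage: run many switches starting from any graph with the right degrees.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
edge_set = set(edges)
for _ in range(10_000):
    edge_switch_step(edges, edge_set)
```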
Parallel FFT-hashing
(1994)
We propose two families of scalable hash functions for collision-resistant hashing that are highly parallel and based on the generalized fast Fourier transform (FFT). FFT hashing is based on multipermutations, a basic cryptographic primitive for the perfect generation of diffusion and confusion which generalizes the boxes of the classic FFT. The slower FFT hash functions iterate a compression function. For the faster FFT hash functions, all rounds are alike, with the same number of message words entering each round.
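The iteration structure of the slower family can be sketched as follows; this is only the generic iterated-compression skeleton, with SHA-256 standing in as a placeholder for the FFT-based multipermutation compression function, which is beyond the scope of this sketch.

```python
import hashlib

BLOCK = 32  # bytes of message absorbed per round (illustrative)

def compress(state: bytes, block: bytes) -> bytes:
    # Placeholder primitive; the paper's construction uses an FFT-based
    # multipermutation here instead.
    return hashlib.sha256(state + block).digest()

def iterated_hash(message: bytes) -> bytes:
    # Pad to a multiple of the block size, then chain the compression function.
    message += b"\x80" + b"\x00" * (-(len(message) + 1) % BLOCK)
    state = b"\x00" * 32                     # initial chaining value
    for i in range(0, len(message), BLOCK):
        state = compress(state, message[i:i + BLOCK])
    return state
```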
We study online secretary problems with returns in combinatorial packing domains with n candidates that arrive sequentially over time in random order. The goal is to determine a feasible packing of candidates of maximum total value. In the first variant, each candidate arrives exactly twice, and all 2n arrivals occur in random order. We propose a simple 0.5-competitive algorithm. For the online bipartite matching problem, we obtain an algorithm with ratio at least 0.5721 − o(1), and an algorithm with ratio at least 0.5459 for all n ≥ 1. We extend all algorithms and ratios to k ≥ 2 arrivals per candidate. In the second variant, there is a pool of undecided candidates. In each round, a random candidate from the pool arrives. Upon arrival a candidate can be either decided (accept/reject) or postponed. We focus on minimizing the expected number of postponements when computing an optimal solution. An expected number of Θ(n log n) postponements is always sufficient. For bipartite matching, we can show a tight bound of O(r log n), where r is the size of the optimum matching. For matroids, we can improve this further to a tight bound of O(r′ log(n/r′)), where r′ is the minimum rank of the matroid and the dual matroid.
There is no doubt that the privacy of Internet users is currently insufficiently protected. The opportunity to move about the net relatively freely is offset by the possibility of collecting and evaluating all kinds of information about Internet users. This is, of course, also possible on an intranet. In this diploma thesis, the various options offered for publishing privacy practices were examined.
First, the OECD Privacy Statement Generator, whose principles formed the basis for formulating the Lufthansa principles, was examined for its suitability for Lufthansa. It turned out that, despite the theoretical agreement of Lufthansa AG's principles with those of the OECD, the generator cannot be used at Lufthansa in its present form. Since an adaptation of the generator to Lufthansa's needs was initially planned, change proposals were made in this thesis. The code itself was not adapted, as it is accessible only to public institutions and not to private companies.
With P3P, the W3C developed a privacy technology that automates users' control over personal data and thereby improves the protection of privacy and user acceptance. After introducing the P3P and APPEL vocabularies, which express privacy practices and privacy preferences respectively, it was examined whether this vocabulary suffices to make Lufthansa-specific statements and to what extent it can be extended or adapted. The investigation showed that the vocabulary is sufficient to a certain degree and that there is one element, the EXTENSION element, with which the standard P3P vocabulary can be extended. In this thesis, such Lufthansa-specific extensions were formulated both for a P3P policy and for a corresponding APPEL preference. With P3P, Lufthansa AG thus has the ability to make its privacy practices more transparent for its employees, since it can also formulate statements beyond the standard vocabulary.
The thesis also examined the tools currently on the market that can be used to create a machine-readable privacy statement, a so-called privacy policy. The IBM P3P Policy Editor appears suitable for use at Lufthansa, as the tool is easy to handle: employees who are to create the policy for their department need not concern themselves with the details of the P3P vocabulary. With this editor, a basic skeleton of a policy can be created first; the extensions formulated with the P3P EXTENSION element, however, must be written by hand and can subsequently be integrated into the policy skeleton. In addition to the machine-readable form of a policy, the IBM editor also creates a human-readable HTML version, which is a great advantage, as two policy versions are produced in one pass. Compared with the draft wording of the OECD generator, the human-readable version of the IBM editor is also considerably shorter and therefore clearer. All in all, it is therefore more sensible to use the IBM editor than to reprogram the OECD generator and then create the P3P policy with the help of another tool.
For creating an APPEL preference, only one tool is currently available on the market. The reason is surely that APPEL has not yet become a standard but is still at the draft stage. The APPEL editor by JRC is structured similarly to the IBM P3P Policy Editor; here too, extensions must be formulated by hand and integrated into the preference skeleton created by the editor.
After the fundamental extensibility of P3P had been established in this thesis, the so-called user agents were examined. Besides the functionality of the individual user agents, their handling of the EXTENSION element was of interest. Since only few users can be expected to change the default settings of their software themselves and to study the APPEL specification, the default configuration of a P3P user agent is of great importance. All of the user agents presented offered several privacy levels from which the user could choose, and the user's preference was configured automatically according to the chosen level. All of them also allowed importing hand-written APPEL formulations. The JRC proxy makes it possible to include settings containing the EXTENSION element; AT&T does not allow this, and Microsoft provides no information on the matter.
For Lufthansa AG, it therefore seems sensible to offer employees a dedicated user agent that covers all additionally formulated aspects and with which employees can easily express their preferences regarding the intranet. Providing employees with the extended vocabulary for formulating a preference would require every employee to be familiar with the syntax and semantics of P3P and APPEL; compared with programming a dedicated user agent made available to the employees, this is more laborious and probably not feasible.
In principle, these measures can increase employees' trust in the intranet and thus their use of this medium. Increased use of the intranet for private purposes as well, such as booking trips or querying flights, would also bring economic benefits for both sides. Employees would save time, for example, since they would no longer have to go to the travel office to book their trips; this in turn has economic effects, as the effort of getting to the travel office is eliminated. Lufthansa AG would also benefit economically from making its privacy practices more transparent: staff in the travel office, for instance, would be relieved by their colleagues' new trust in the intranet and the resulting self-service online booking, freeing up capacity for other tasks. The full extent of the advantages for Lufthansa and its employees resulting from the publication of privacy practices, including the increased trust of employees in the intranet, cannot yet be assessed completely, since many of Lufthansa AG's intranet projects are still under development. For the future, however, it must also be noted that privacy-enhancing technologies alone cannot be the solution for ensuring data protection on the Internet or on intranets.
Vielmehr müssen die aufkommenden technischen Maßnahmen durch nationale und internationale Regelungen unterstützt und ergänzt werden. Erst durch Festlegung internationaler Konventionen, die den Datenschutz in Zusammenhang mit grenzüberschreitenden Computernetzwerken und Diensten regeln, kann ein effektiver und unabhängiger Kontrollmechanismus sowie die Möglichkeit zu Sanktionen gewährleistet werden. Die Veröffentlichung von Datenschutzmassnahmen gewährleistet leider nicht ihre Einhaltung.
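To make the EXTENSION mechanism discussed above concrete, the following minimal Python sketch assembles a P3P policy fragment whose STATEMENT carries a company-specific EXTENSION element. The extension namespace and the intranet-scope element are hypothetical illustrations; only POLICY, STATEMENT, PURPOSE, admin, and EXTENSION belong to the standard P3P vocabulary.

import xml.etree.ElementTree as ET

P3P_NS = "http://www.w3.org/2002/01/P3Pv1"
EXT_NS = "http://example.com/p3p-extension"   # hypothetical company namespace

policy = ET.Element("POLICY", {"xmlns": P3P_NS, "name": "intranet-policy"})
statement = ET.SubElement(policy, "STATEMENT")

purpose = ET.SubElement(statement, "PURPOSE")
ET.SubElement(purpose, "admin")               # standard P3P purpose element

# EXTENSION with optional="yes": user agents that do not understand the
# extension may safely ignore it, which is what makes it the designated
# extension point of the standard vocabulary.
extension = ET.SubElement(statement, "EXTENSION", {"optional": "yes"})
scope = ET.SubElement(extension, f"{{{EXT_NS}}}intranet-scope")  # hypothetical element
scope.text = "employee-travel-booking"

print(ET.tostring(policy, encoding="unicode"))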
The pathogenesis of nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) and its relationship to other lymphomas are largely unknown. This is partly because of the technical challenge of analyzing its rare neoplastic lymphocytic and histiocytic (L&H) cells, which are dispersed in an abundant nonneoplastic cellular microenvironment. We performed a genome-wide expression study of microdissected L&H lymphoma cells in comparison to normal and other malignant B cells, which indicated a relationship of L&H cells to, and/or an origin from, germinal center B cells at the transition to memory B cells. L&H cells show a surprisingly high similarity to the tumor cells of T cell-rich B cell lymphoma and classical Hodgkin lymphoma, a partial loss of their B cell phenotype, and deregulation of many apoptosis regulators and putative oncogenes. Importantly, L&H cells are characterized by constitutive nuclear factor κB activity and aberrant extracellular signal-regulated kinase signaling. Thus, these findings shed new light on the nature of L&H cells, reveal several novel pathogenetic mechanisms in NLPHL, and may help in differential diagnosis and lead to novel therapeutic strategies.
Heterologously expressed genes require adaptation to the host organism to ensure adequate levels of protein synthesis, which is typically approached by replacing codons by the target organism’s preferred codons. In view of frequently encountered suboptimal outcomes we introduce the codon-specific elongation model (COSEM) as an alternative concept. COSEM simulates ribosome dynamics during mRNA translation and informs about protein synthesis rates per mRNA in an organism- and context-dependent way. Protein synthesis rates from COSEM are integrated with further relevant covariates such as translation accuracy into a protein expression score that we use for codon optimization. The scoring algorithm further enables fine-tuning of protein expression including deoptimization and is implemented in the software OCTOPOS. The protein expression score produces competitive predictions on proteomic data from prokaryotic, eukaryotic, and human expression systems. In addition, we optimized and tested heterologous expression of manA and ova genes in Salmonella enterica serovar Typhimurium. Superiority over standard methodology was demonstrated by a threefold increase in protein yield compared to wildtype and commercially optimized sequences.
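COSEM itself is not reproduced here; the following toy Python sketch of a TASEP-like simulation merely illustrates the underlying idea that codon-specific elongation rates determine a per-mRNA synthesis rate, with a single slow codon throttling throughput. All rates and lengths are invented.

import random

def synthesis_rate(rates, init_prob=0.1, steps=50_000, seed=1):
    random.seed(seed)
    length = len(rates)
    occupied = [False] * length
    finished = 0
    for _ in range(steps):
        if not occupied[0] and random.random() < init_prob:
            occupied[0] = True                  # initiation at the first codon
        for i in range(length - 1, -1, -1):     # back-to-front avoids double moves
            if occupied[i] and random.random() < rates[i]:
                if i == length - 1:
                    occupied[i] = False         # termination: one protein made
                    finished += 1
                elif not occupied[i + 1]:
                    occupied[i], occupied[i + 1] = False, True
    return finished / steps

fast = [0.8] * 30
slow = list(fast)
slow[15] = 0.05                                 # one slow codon in the middle
print(synthesis_rate(fast), synthesis_rate(slow))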
This paper is a contribution to exploring and analyzing space improvements in concurrent programming languages, in particular in the functional process calculus CHF. Space improvements are defined as a generalization of the corresponding notion in deterministic pure functional languages. The main part of the paper is the O(n · log n) algorithm SPOPTN for offline space optimization of several parallel independent processes. Applications of this algorithm are: (i) affirmation of space-improving transformations for particular classes of program transformations; (ii) support of an interpreter-based method for refuting space improvements; and (iii) use as a stand-alone offline optimizer for space (or similar resources) of parallel processes.
Section 4.1 introduced algebras on sets and relations. Furthermore, Section 4.2 defined ALCX, a terminological knowledge representation language extending ALC, and showed that the algebras introduced in Section 4.1 can be used to fix the model-theoretic semantics of ALCX. As a result of this construction, every terminological expression can be associated directly with an algebraic term, and all universal identities listed in the algebras can be translated into semantically equivalent terminological identities. A knowledge base represented in ALCX is formulated in my program using the facilities of the functional programming language Gofer (version 2.28). By simplifying query expressions, my program offers an optimized solution to the retrieval problem known from the field of knowledge representation. Query simplification is achieved by using the universal identities valid in the above algebras as simplification tools. Each universal identity consists of two equivalent terms that require different numbers of reduction steps to evaluate. For some universal identities, one term is more efficient to evaluate than the other regardless of the instances of the argument terms; for these, simplifying from one term to the other is always unproblematic. For other universal identities, this independence from the argument instances does not hold; in these cases, a knowledge-base-dependent complexity estimation procedure is used to determine the more efficiently evaluable of the two terms involved in the identity. That simplifying a query expression can yield large savings in its evaluation was demonstrated on several examples in Section 5.4.4. A further option for simplifying query expressions was reported in Section 5.4.5. The accuracy of the complexity estimation can be increased by extending the estimation procedure; for example, the complexity of the function "or" can be estimated more precisely by checking whether subset relationships hold between the sets to be united, and a more precise estimate of the cardinality of the resulting unions increases the overall accuracy of the estimation. At present, changing the instances of my terminological database requires changing my program's script; this could be remedied by keeping the instance data in a modifiable database, which could be realized as part of creating a user-friendly input and output interface.
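The Gofer implementation itself is not reproduced here; the following minimal Python sketch only illustrates the idea of simplifying query expressions by means of universal identities, using idempotence and absorption as examples and omitting the complexity estimation that decides between instance-dependent alternatives.

def simplify(term):
    if not isinstance(term, tuple):
        return term                              # atomic concept name
    op, *args = term
    args = tuple(simplify(a) for a in args)
    if op in ("and", "or") and len(args) == 2:
        x, y = args
        if x == y:
            return x                             # idempotence: x op x = x
        dual = "or" if op == "and" else "and"
        # absorption: x and (x or y) = x,  x or (x and y) = x
        if isinstance(y, tuple) and y[0] == dual and x in y[1:]:
            return x
        if isinstance(x, tuple) and x[0] == dual and y in x[1:]:
            return y
    return (op,) + args

query = ("and", "Employee", ("or", "Employee", "Manager"))
print(simplify(query))                           # -> 'Employee'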
With the World Wide Web, the stock and availability of information have grown rapidly. Individuals struggle to master this mass of information and knowledge and thus to cope with information overload. Researchers and engineers recognized this problem early on and have tried to solve it with various concepts such as intelligent search and the classification of information. A further concept is personalization, which aims to tailor information to the needs of the individual user. This thesis describes a number of personalization techniques, in particular collaborative filtering. Known weaknesses of collaborative filtering, such as sparsely populated user profiles and the failure to recognize shifts in user interest over time, are addressed by several approaches: a combination with content-based filters and a broadening of the base of rated resources. The goal is to optimize personalization so that users receive information better tailored to them. Some of the described approaches are also implemented in a prototypical information system to assess their feasibility and benefit, using IBM's WebSphere application server, built on the Java 2 Enterprise Edition, and IBM's relational database DB2 UDB.
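As an illustration of the collaborative filtering technique discussed above (not of the WebSphere/DB2 prototype itself), the following minimal Python sketch predicts a user's rating for an unseen item from the ratings of cosine-similar users; the data are invented.

import math

ratings = {                                   # user -> {item: rating}
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 5},
    "carol": {"a": 1, "b": 5},
}

def cosine(u, v):
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def predict(user, item):
    num = den = 0.0
    for other, profile in ratings.items():
        if other == user or item not in profile:
            continue
        w = cosine(ratings[user], profile)    # weight neighbors by similarity
        num += w * profile[item]
        den += abs(w)
    return num / den if den else None

print(predict("carol", "c"))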
Online self-assessment procedures developed at the University of Frankfurt for the degree programs in psychology and computer science are intended to give prospective students, before they begin their studies, feedback on their own abilities, motives, personal competencies, and interests with regard to the respective program, on the basis of self-exploration exercises and tests. Both the findings on the psychometric quality of the instruments and those on their prognostic validity suggest that they are suitable for assessing competencies relevant to studying. Since the competencies and characteristics measured show substantial relationships to academic performance, information about individual strengths could be used to choose a suitable degree program, while weaknesses could provide early pointers to appropriate support measures.
Modern experiments in heavy-ion collisions produce data at rates that cannot be fully stored on currently available storage devices. The data flow therefore has to be reduced by selecting those collisions that potentially carry the information of physics interest. The future CBM experiment will have no simple criteria for selecting such collisions and requires full online reconstruction of the collision topology, including the reconstruction of short-lived particles.
In this work the KF Particle Finder package for online reconstruction and selection of short-lived particles is proposed and developed. It reconstructs more than 70 decays, covering signals from all the physics cases of the CBM experiment: strange particles, strange resonances, hypernuclei, low mass vector mesons, charmonium, and open-charm particles.
The package is based on the Kalman filter method and provides a full set of particle parameters together with their errors, including position, momentum, mass, energy, and lifetime. It shows high quality of the reconstructed particles, high efficiencies, and high signal-to-background ratios.
The KF Particle Finder is extremely fast, achieving a reconstruction speed of 1.5 ms per minimum-bias Au-Au collision at 25 AGeV beam energy on a single CPU core. It is fully vectorized and parallelized and shows strong linear scalability on many-core architectures of up to 80 cores. Within the First Level Event Selection package it also scales on many-core clusters of up to 3200 cores.
The developed KF Particle Finder package is a universal platform for short-lived particle reconstruction, physics analysis, and online selection.
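The full, vectorized KF Particle Finder is far more elaborate; the following one-dimensional Python sketch only illustrates the predict/update recursion of the Kalman filter method named above. All noise parameters and measurements are invented.

def kalman_1d(measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    x, p = x0, p0                 # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: process noise inflates the variance
        k = p / (p + r)           # gain: weighs estimate against measurement
        x = x + k * (z - x)       # update the state with the residual
        p = (1.0 - k) * p         # updated variance shrinks accordingly
        estimates.append(x)
    return estimates

print(kalman_1d([1.2, 0.9, 1.1, 1.0, 0.95]))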
We propose a variation of online paging in two-level memory systems where pages in the fast cache get modified and therefore have to be explicitly written back to the slow memory upon evictions. For increased performance, up to alpha arbitrary pages can be moved from the cache to the slow memory within a single joint eviction, whereas fetching pages from the slow memory is still performed on a one-by-one basis. The main objective in this new alpha-paging scenario is to bound the number of evictions. After providing experimental evidence that alpha-paging can adequately model flash-memory devices in the context of translation layers we turn to the theoretical connections between alpha-paging and standard paging. We give lower bounds for deterministic and randomized alpha-paging algorithms. For deterministic algorithms, we show that an adaptation of LRU is strongly competitive, while for the randomized case we show that by adapting the classical Mark algorithm we get an algorithm with a competitive ratio larger than the lower bound by a multiplicative factor of approximately 1.7.
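A minimal Python sketch of our reading of the LRU adaptation (not the paper's exact pseudocode): on a fault with a full cache, the alpha least-recently-used pages are written back in one joint eviction before the missing page is fetched.

from collections import OrderedDict

def alpha_lru(requests, cache_size, alpha):
    cache = OrderedDict()                 # key order = recency, oldest first
    evictions = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)       # hit: refresh recency
            continue
        if len(cache) >= cache_size:
            for _ in range(min(alpha, len(cache))):
                cache.popitem(last=False) # jointly evict the alpha LRU pages
            evictions += 1                # counted as a single joint eviction
        cache[page] = True                # pages are fetched one by one
    return evictions

print(alpha_lru([1, 2, 3, 1, 4, 5, 2, 6], cache_size=3, alpha=2))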
The size of the particle emission region in high-energy collisions can be deduced using the femtoscopic correlations of particle pairs at low relative momentum. Such correlations arise due to quantum statistics and Coulomb and strong final state interactions. In this paper, results are presented from femtoscopic analyses of π±π±, K±K±, K0S K0S, pp, and p̄p̄ correlations from Pb-Pb collisions at √s_NN = 2.76 TeV by the ALICE experiment at the LHC. One-dimensional radii of the system are extracted from correlation functions in terms of the invariant momentum difference of the pair. The comparison of the measured radii with the predictions from a hydrokinetic model is discussed. The pion and kaon source radii display a monotonic decrease with increasing average pair transverse mass mT, which is consistent with hydrodynamic model predictions for central collisions. The kaon and proton source sizes can be reasonably described by approximate mT-scaling.
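As a schematic illustration of how a one-dimensional radius is extracted (not the ALICE analysis itself, which includes Coulomb and strong final-state-interaction corrections), the following Python sketch fits a Gaussian parametrization C(q) = 1 + λ·exp(−(qR/ħc)²) to a synthetic correlation function.

import numpy as np
from scipy.optimize import curve_fit

HBARC = 0.1973                    # GeV*fm: makes q [GeV/c] times R [fm] dimensionless

def corr(q, lam, radius):
    return 1.0 + lam * np.exp(-(q * radius / HBARC) ** 2)

q = np.linspace(0.005, 0.3, 60)                          # GeV/c
rng = np.random.default_rng(0)
data = corr(q, 0.5, 6.0) + rng.normal(0, 0.01, q.size)   # synthetic "measured" C(q)

(lam, radius), _ = curve_fit(corr, q, data, p0=(0.4, 5.0))
print(f"lambda = {lam:.2f}, R = {radius:.2f} fm")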
The correlations of identical charged kaons were measured in p-Pb collisions at √s_NN = 5.02 TeV by the ALICE experiment at the LHC. The femtoscopic invariant radii and correlation strengths were extracted from one-dimensional kaon correlation functions and were compared with those obtained in pp and Pb-Pb collisions at √s = 7 TeV and √s_NN = 2.76 TeV, respectively. The presented results also complement the identical-pion femtoscopic data published by the ALICE collaboration. The extracted radii increase with increasing charged-particle multiplicity and decrease with increasing pair transverse momentum. At comparable multiplicities, the radii measured in p-Pb collisions are found to be close to those observed in pp collisions. The obtained femtoscopic parameters are reproduced by the EPOS 3 hadronic interaction model and disfavor models with large initial size or strong collective expansion at low multiplicities.
The effect of adding two-way communication to k cells one-way cellular automata (kC-OCAs) on their size of description is studied. kC-OCAs are a parallel model for the regular languages that consists of an array of k identical deterministic finite automata (DFAs), called cells, operating in parallel. Each cell gets information from its right neighbor only. In this paper, two models with different amounts of two-way communication are investigated. Both models always achieve quadratic savings when compared to DFAs. When compared to a one-way cellular model, the result is that minimum two-way communication can achieve at most quadratic savings whereas maximum two-way communication may provide savings bounded by a polynomial of degree k.
At present, there is a huge lag between artificial and biological information processing systems in terms of their capability to learn. This lag could certainly be reduced by gaining more insight into higher brain functions like learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired through experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains to a large extent a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing is composed of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons. The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks, and without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions, and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces residing in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli.
In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during the previous learning. Remarkably, the recognition performance is boosted tremendously after this off-line memory reprocessing. The performance boost is more pronounced for face views that deviate more from the original view shown during learning. This indicates that the off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on the synapse-unspecific, homeostatic activity regulation across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach. It forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising departure point for further studies aiming for deeper insight into the learning mechanisms employed by the brain and their subsequent implementation in artificial adaptive systems for solving complex tasks not tractable so far.
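The full model is far richer; the following minimal Python sketch shows only the local operation attributed to a single module above: winner-take-all competitive Hebbian learning on incoming signals. Dimensions, learning rate, and random inputs are invented stand-ins.

import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random((200, 16))            # stand-in for Gabor filter responses
weights = rng.random((8, 16))             # 8 units competing for each input
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

learning_rate = 0.1
for x in inputs:                          # one input per "decision cycle"
    winner = np.argmax(weights @ x)       # local winner-take-all
    weights[winner] += learning_rate * (x - weights[winner])  # Hebbian step
    weights[winner] /= np.linalg.norm(weights[winner])

print(np.round(weights @ inputs[0], 2))   # unit responses after learning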
This paper proves correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda-calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal-order reductions. However, there remains a small gap: currently, the proof of correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
The free energy of TAP solutions for the SK model of mean-field spin glasses can be expressed as a nonlinear functional of local terms: we exploit this feature in order to contrive abstract REM-like models, which we then solve by a classical large deviations treatment. This allows us to identify the origin of the physically unsettling quadratic (in the inverse temperature) correction to the Parisi free energy for the SK model, and formalizes the true cavity dynamics which acts on TAP space, i.e., on the space of TAP solutions. From a non-spin-glass point of view, this work is the first in a series of refinements which addresses the stability of hierarchical structures in models of evolving populations.
We study Gaifman locality and Hanf locality of an extension of first-order logic with modulo-p counting quantifiers (FO+MODp, for short) with arbitrary numerical predicates. We require that the validity of formulas is independent of the particular interpretation of the numerical predicates and refer to such formulas as arb-invariant formulas. This paper gives a detailed picture of locality and non-locality properties of arb-invariant FO+MODp. For example, on the class of all finite structures, for any p ≥ 2, arb-invariant FO+MODp is neither Hanf nor Gaifman local with respect to a sublinear locality radius. However, in case that p is an odd prime power, it is weakly Gaifman local with a polylogarithmic locality radius. And when restricting attention to the class of string structures, for odd prime powers p, arb-invariant FO+MODp is both Hanf and Gaifman local with a polylogarithmic locality radius. Our negative results build on examples of order-invariant FO+MODp formulas presented in Niemistö's PhD thesis. Our positive results make use of the close connection between FO+MODp and Boolean circuits built from NOT-gates and AND-, OR-, and MODp-gates of arbitrary fan-in.
In this paper, we study the limit of compactness, a graph index originally introduced for measuring structural characteristics of hypermedia. Applying compactness to large-scale small-world graphs, Mehler (2008) observed its limit behaviour to be equal to 1. The striking question concerning this finding was whether this limit behaviour resulted from the specifics of small-world graphs or was simply an artefact. In this paper, we determine the necessary and sufficient conditions for any sequence of connected graphs resulting in a limit value of C_B = 1, which can be generalized with some consideration for the case of disconnected graph classes (Theorem 3). This result can be applied to many well-known classes of connected graphs. Here, we illustrate it by considering four examples. In fact, our proof-theoretical approach allows for quickly obtaining the limit value of compactness for many graph classes, sparing computational costs.
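In one common formulation (following Botafogo et al.'s hypertext metric, which we assume here), compactness is C_B = (Max − Σd)/(Max − Min), with Σd the sum of pairwise distances, Max = K·n·(n−1) for a conversion constant K charged to unreachable pairs (here K = n), and Min = n·(n−1). The Python sketch below computes it by breadth-first search; a complete graph yields exactly 1, the limit value discussed above.

from collections import deque

def compactness(adj):                     # adj: {node: set of neighbors}
    n = len(adj)
    k = n                                 # distance charged to unreachable pairs
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # BFS from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.get(v, k) for v in adj if v != src)
    mx, mn = k * n * (n - 1), n * (n - 1)
    return (mx - total) / (mx - mn)

clique = {i: {j for j in range(4) if j != i} for i in range(4)}
print(compactness(clique))                # 1.0 for the complete graph K4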
We show that non-interactive statistically-secret bit commitment cannot be constructed from arbitrary black-box one-to-one trapdoor functions and thus from general public-key cryptosystems. By reducing the problems of non-interactive crypto-computing, rerandomizable encryption, non-interactive statistically-sender-private oblivious transfer, and low-communication private information retrieval to such commitment schemes, it follows that these primitives can likewise not be constructed from one-to-one trapdoor functions and public-key encryption in general. Furthermore, our separation sheds some light on statistical zero-knowledge proofs. There is an oracle relative to which one-to-one trapdoor functions and one-way permutations exist, while the class of promise problems with statistical zero-knowledge proofs collapses to P. This indicates that nontrivial problems with statistical zero-knowledge proofs require more than (trapdoor) one-wayness.
Given x ∈ ℝ^n, an integer relation for x is a non-trivial vector m ∈ ℤ^n with inner product ⟨m, x⟩ = 0. In this paper we prove the following: unless every NP language is recognizable in deterministic quasi-polynomial time, i.e., in time O(n^poly(log n)), the ℓ∞-shortest integer relation for a given vector x ∈ ℚ^n cannot be approximated in polynomial time within a factor of 2^(log^(0.5−γ) n), where γ is an arbitrarily small positive constant. This result is quasi-complementary to positive results derived from lattice basis reduction. A variant of the well-known L³ algorithm approximates, for a vector x ∈ ℚ^n, the ℓ2-shortest integer relation within a factor of 2^(n/2) in polynomial time. Our proof relies on recent advances in the theory of probabilistically checkable proofs, in particular on a reduction from 2-prover 1-round interactive proof systems. The same inapproximability result is valid for finding the ℓ∞-shortest integer solution for a homogeneous linear system of equations over ℚ.
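To illustrate the definition (not the lattice-reduction methods the result is complementary to), the following exponential-time Python sketch searches exhaustively for an ℓ∞-shortest integer relation of a rational vector within a given coefficient bound.

from fractions import Fraction
from itertools import product

def shortest_relation(x, bound):
    # Return m != 0 with <m, x> = 0 minimizing max|m_i|, searching |m_i| <= bound.
    best = None
    for m in product(range(-bound, bound + 1), repeat=len(x)):
        if any(m) and sum(c * xi for c, xi in zip(m, x)) == 0:
            if best is None or max(map(abs, m)) < max(map(abs, best)):
                best = m
    return best

x = (Fraction(1), Fraction(2), Fraction(3))
print(shortest_relation(x, 3))            # e.g. (-1, -1, 1): -1 - 2 + 3 = 0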
The descriptional complexity of iterative arrays (IAs) is studied. Iterative arrays are a parallel computational model with sequential processing of the input. It is shown that IAs, when compared to deterministic finite automata or pushdown automata, may provide savings in size which are not bounded by any recursive function, so-called non-recursive trade-offs. Additional non-recursive trade-offs are proven to exist between IAs working in linear time and IAs working in real time. Furthermore, the descriptional complexity of IAs is compared with cellular automata (CAs) and non-recursive trade-offs are proven between two restricted classes. Finally, it is shown that many decidability questions for IAs are undecidable and not semidecidable.
Various concurrency primitives have been added to sequential programming languages, in order to turn them concurrent. Prominent examples are concurrent buffers for Haskell, channels in Concurrent ML, joins in JoCaml, and handled futures in Alice ML. Even though one might conjecture that all these primitives provide the same expressiveness, proving this equivalence is an open challenge in the area of program semantics. In this paper, we establish a first instance of this conjecture. We show that concurrent buffers can be encoded in the lambda calculus with futures underlying Alice ML. Our correctness proof results from a systematic method, based on observational semantics with respect to may and must convergence.
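As a small illustration of the first primitive named above, offered as an analogy rather than as any of the cited calculi: a one-place concurrent buffer (an MVar-like cell) written with Python's threading primitives, where put blocks while the cell is full and take blocks while it is empty.

import threading

class Buffer:
    def __init__(self):
        self._lock = threading.Condition()
        self._full = False
        self._value = None

    def put(self, value):
        with self._lock:
            while self._full:             # block until the cell is empty
                self._lock.wait()
            self._value, self._full = value, True
            self._lock.notify_all()

    def take(self):
        with self._lock:
            while not self._full:         # block until a value arrives
                self._lock.wait()
            self._full = False
            self._lock.notify_all()
            return self._value

buf = Buffer()
threading.Thread(target=buf.put, args=(42,)).start()
print(buf.take())                         # prints 42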
We investigate a restricted one-way cellular automaton (OCA) model where the number of cells is bounded by a constant number k, so-called kC-OCAs. In contrast to the general model, the generative capacity of the restricted model is reduced to the set of regular languages. A kC-OCA can be algorithmically converted to a deterministic finite automaton (DFA). The blow-up in the number of states is bounded by a polynomial of degree k. We can exhibit a family of unary languages which shows that this upper bound is tight in order of magnitude. We then study upper and lower bounds for the trade-off when converting DFAs to kC-OCAs. We show that there are regular languages where the use of kC-OCAs cannot reduce the number of states when compared to DFAs. We then investigate trade-offs between kC-OCAs with different numbers of cells and finally treat the problem of minimizing a given kC-OCA.
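The polynomial upper bound can be made concrete: a DFA simulating a kC-OCA may take the k-tuple of cell states as its own state, giving at most |S|^k states. The update scheme in the Python sketch below (each cell reads its right neighbor, the rightmost cell reads the next input symbol) is one plausible formalization chosen purely for illustration.

from itertools import product

def oca_to_dfa(states, delta, start, k):
    # delta(cell_state, right_input) -> cell_state; DFA states are k-tuples.
    dfa = {}
    for config in product(states, repeat=k):
        for symbol in ("0", "1"):
            nxt = tuple(
                delta(config[i], config[i + 1] if i + 1 < k else symbol)
                for i in range(k)
            )
            dfa[(config, symbol)] = nxt
    return dfa, (start,) * k

# toy cell rule: parity of the ones seen coming in from the right
delta = lambda s, r: (s + (1 if r in (1, "1") else 0)) % 2
dfa, q0 = oca_to_dfa((0, 1), delta, start=0, k=3)
print(len({cfg for cfg, _ in dfa}), "DFA states")   # |S|**k = 8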
It is shown that between one-turn pushdown automata (1-turn PDAs) and deterministic finite automata (DFAs) there are savings in the size of description that are not bounded by any recursive function, so-called non-recursive trade-offs. Considering the number of turns of the stack height as a consumable resource of PDAs, we show the existence of non-recursive trade-offs between PDAs performing k+1 turns and k turns for k ≥ 1. Furthermore, non-recursive trade-offs are shown between arbitrary PDAs and PDAs which perform only a finite number of turns. Finally, several decidability questions are shown to be undecidable and not semidecidable.
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for observational semantics of deterministic and non-deterministic higher-order calculi that use a small-step reduction semantics. The generic (sharing) context lemmas are provided for may- as well as two variants of must-convergence, which hold in a broad class of extended process- and extended lambda calculi, if the calculi satisfy certain natural conditions. As a guideline, the proofs of the context lemmas are valid in call-by-need calculi, in call-by-value calculi if substitution is restricted to variable-by-variable, and in process calculi like variants of the π-calculus. For calculi employing beta-reduction using a call-by-name or call-by-value strategy or similar reduction rules, some iu-variants of ciu-theorems are obtained from our context lemmas. Our results reestablish several context lemmas already proved in the literature, and also provide some new context lemmas as well as some new variants of the ciu-theorem. To make the results widely applicable, we use a higher-order abstract syntax that allows untyped calculi as well as certain simple typing schemes. The approach may lead to a unifying view of higher-order calculi, reduction, and observational equality.
This paper proves several generic variants of context lemmas and thus contributes to improving the tools to develop observational semantics that is based on a reduction semantics for a language. The context lemmas are provided for may- as well as two variants of must-convergence and a wide class of extended lambda calculi, which satisfy certain abstract conditions. The calculi must have a form of node sharing, e.g. plain beta reduction is not permitted. There are two variants: weakly sharing calculi, where beta-reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi, where beta-reduction is completely replaced by a sharing variant. The calculi must obey three abstract assumptions, which are in general easily recognizable given the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. The scope of the generic context lemmas comprises not only call-by-need calculi, but also call-by-value calculi with a form of built-in sharing. Investigations in other, new variants of extended lambda-calculi with sharing, where the language or the reduction rules and/or strategy varies, will be simplified by our result, since specific context lemmas are immediately derivable from the generic context lemma, provided our abstract conditions are met.
Functional modules of metabolic networks are essential for understanding the metabolism of an organism as a whole. With the vast amount of experimental data and the construction of complex and large-scale, often genome-wide, models, the computer-aided identification of functional modules becomes more and more important. Since steady states play a key role in biology, many methods have been developed in that context, for example, elementary flux modes, extreme pathways, transition invariants and place invariants. Metabolic networks can also be studied from the point of view of graph theory, and algorithms for graph decomposition have been applied to the identification of functional modules. A prominent and currently intensively discussed field of methods in graph theory addresses Q-modularity. In this paper, we recall known concepts of module detection based on the steady-state assumption, focusing on transition invariants (elementary modes) and their computation as minimal solutions of systems of Diophantine equations. We present the Fourier-Motzkin algorithm in detail. Afterwards, we introduce Q-modularity as an example of a useful non-steady-state method and describe its application to metabolic networks. To illustrate and discuss the concepts of invariants and Q-modularity, we use a part of the central carbon metabolism in potato tubers (Solanum tuberosum) as a running example. The intention of the paper is to give a compact presentation of known steady-state concepts from a graph-theoretical viewpoint in the context of network decomposition and reduction, and to introduce the application of Q-modularity to metabolic Petri net models.
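As a small, self-contained illustration of the Fourier-Motzkin algorithm presented in the paper, the following Python sketch performs one elimination step on a system of rational linear inequalities a·x ≤ b; the integer (Diophantine) refinements used for invariant computation are omitted.

from fractions import Fraction

def eliminate(rows, j):
    # rows: list of (coeffs, bound) encoding coeffs . x <= bound; drop variable j.
    pos, neg, zero = [], [], []
    for a, b in rows:
        (pos if a[j] > 0 else neg if a[j] < 0 else zero).append((a, b))
    out = list(zero)
    for ap, bp in pos:                    # combine every row with positive
        for an, bn in neg:                # coefficient with every negative one
            lam, mu = -an[j], ap[j]       # positive multipliers cancel x_j
            coeffs = [lam * p + mu * n for p, n in zip(ap, an)]
            out.append((coeffs, lam * bp + mu * bn))
    return out

F = Fraction
# x0 - x1 <= 0 and -x0 <= -1 imply -x1 <= -1 after eliminating x0
system = [([F(1), F(-1)], F(0)), ([F(-1), F(0)], F(-1))]
print(eliminate(system, 0))               # [([0, -1], -1)] as Fractions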
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules; and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which enables the proof tool of simulation. Unfortunately, a direct proof appears to be impossible.
The correctness proof is by defining another calculus L comprising the complex variants of copy, case-reduction and seq-reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and allows a proof of correctness of transformations, and that the simple calculus LS, the calculus L, and the calculus LA all have an equivalent contextual preorder.
We provide the first non-trivial result on dynamic breadth-first search (BFS) in external memory: for general sparse undirected graphs of initially n nodes and O(n) edges and monotone update sequences of either Θ(n) edge insertions or Θ(n) edge deletions, we prove an amortized high-probability bound of O(n/B^(2/3) + sort(n) · log B) I/Os per update. In contrast, the currently best approach for static BFS on sparse undirected graphs requires Ω(n/B^(1/2) + sort(n)) I/Os.
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness wrt. program semantics, since the encodings are adequate translations wrt. contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads and monadic concurrent evaluation, and it includes a pure functional lambda-calculus comprising data constructors, case-expressions, letrec-expressions, and Haskell's seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.