Accurate impact parameter determination in a heavy-ion collision is crucial for almost all further analysis. We investigate the capabilities of an artificial neural network in this respect. First results show that the neural network can improve the accuracy of impact parameter determination based on observables such as the flow angle, the average directed in-plane transverse momentum and the difference between transverse and longitudinal momenta. However, further investigations are necessary to discover the full potential of the neural network approach.
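The regression task described above can be sketched with a small feed-forward network. The synthetic event generator, the functional forms of the observables, and all hyperparameters below are invented for illustration; they are not taken from the paper.

```python
# Hedged sketch: a one-hidden-layer network regressing the impact parameter b
# from event-shape observables (flow angle, mean in-plane transverse momentum,
# and a transverse/longitudinal energy ratio). All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_events(n):
    b = rng.uniform(0.0, 10.0, n)                    # "true" impact parameter [fm]
    flow_angle = 40.0 * np.exp(-b / 4.0) + rng.normal(0, 2.0, n)
    px_inplane = 60.0 * b / 10.0 * (1 - b / 12.0) + rng.normal(0, 5.0, n)
    erat = 1.2 * np.exp(-b / 5.0) + rng.normal(0, 0.05, n)
    return np.column_stack([flow_angle, px_inplane, erat]), b

X, y = synthetic_events(4000)
X = (X - X.mean(0)) / X.std(0)                       # standardize inputs

# One hidden tanh layer, trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 1e-2
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)
    pred = (H @ W2 + b2).ravel()
    gp = ((pred - y) / len(y))[:, None]              # gradient of 0.5*MSE
    gW2 = H.T @ gp; gb2 = gp.sum(0)
    gH = gp @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE: {rmse:.2f} fm")
```

Even this toy setup beats simply predicting the mean impact parameter, which is the qualitative point of the abstract: the observables jointly carry recoverable information about b.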
Azimuthal correlations of pions are studied with the quantum molecular dynamics model. Pions are preferentially emitted perpendicular to the reaction plane. Our analysis shows that this anisotropy is dominated by pion absorption on the spectator matter in the reaction plane. Pions emitted perpendicular to the reaction plane undergo less rescattering than those emitted in the reaction plane and might therefore be more sensitive to the early hot and dense reaction phase.
We study dilepton production from a quark-gluon plasma of given energy density at finite quark chemical potential μ and find that the dilepton production rate is a strongly decreasing function of μ. Therefore, the signal to background ratio of dileptons from a plasma created in a heavy-ion collision may decrease significantly.
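The μ-dependence can be made explicit in the lowest-order (Born) rate for quark–antiquark annihilation into a lepton pair. The notation below (T, μ, the distribution functions) is generic textbook convention, an assumption rather than the paper's own formulas:

```latex
% Born rate for q\bar{q} -> l^+ l^-; generic textbook notation (assumption).
\frac{dR}{d^4q}\;\propto\;\int\!\frac{d^3p_1}{(2\pi)^3}\,\frac{d^3p_2}{(2\pi)^3}\;
  f_q(E_1)\,f_{\bar q}(E_2)\;v_{12}\,\sigma(M^2)\;\delta^4(q-p_1-p_2),
\qquad
f_q(E)=\frac{1}{e^{(E-\mu)/T}+1},\qquad
f_{\bar q}(E)=\frac{1}{e^{(E+\mu)/T}+1}.
```

In the Boltzmann limit the product f_q f_q̄ ≈ e^{-(E_1+E_2)/T}, so μ cancels at fixed T; the suppression arises because fixing the energy density while raising μ forces T down (for an ideal massless quark–gluon gas ε contains T⁴, T²μ² and μ⁴ terms), and the rate falls steeply with T.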
Viscous hydrodynamic calculations of high energy heavy-ion collisions (Nb-Nb and Au-Au) from 200 to 800 MeV/nucleon are presented. The resulting baryon rapidity distributions, the in-plane transverse momentum transfer (bounce-off), and the azimuthal dependence of the midrapidity particles (off-plane squeeze-out) compare well with Plastic Ball data. We find that the considered observables are sensitive both to the nuclear equation of state and to the nuclear shear viscosity η. Transverse momentum distributions indicate a high shear viscosity (η ≈ 60 MeV/(fm² c)) in the compression zone, in agreement with nuclear matter estimates. The bulk viscosity ζ influences only the entropy production during the expansion stage; collective observables like flow and dN/dY do not depend strongly on ζ. The recently observed off-plane (φ = 90°) squeeze-out, which is found in the triple-differential rapidity distribution, exhibits the strongest sensitivity to the nuclear equation of state. It is demonstrated that for very central collisions, b = 1 fm, the squeeze-out is visible even in the double-differential cross section. This is experimentally accessible by studying azimuthally symmetric events, as confirmed recently by data of the European 4π detector collaboration at the Gesellschaft für Schwerionenforschung, Darmstadt.
If density isomers exist, they can be detected by measuring the excitation function of subthreshold kaon production. When the system reaches the density at which the density isomer influences the equation of state (which depends on the beam energy and on the optical potential), we observe a jump in the kaon cross section, whereas other observables change little. Above threshold, Λ̄'s or p̄'s may be used to continue the search. This is the result of microscopic Boltzmann-Uehling-Uhlenbeck calculations.
The article connects two strands of the recent sociolegal debate: (1) the empirical discovery of new forms of spontaneous law in the course of globalization, and (2) the emergence of deconstructive theories of law that undermine the law's hierarchy. The article puts forward the thesis that law's hierarchy has successfully resisted all old and new attempts at its deconstruction; it breaks, however, under the pressures of globalization, which have produced a global law without the state, a self-created law of global society that has no institutionalized support whatsoever in international politics and public international law. Consequently, the article criticizes deconstructive theories for their lack of autological analysis: these theories do not take into account the historical conditions of deconstruction. Accordingly, a deconstructive analysis of law would have to look for new legal distinctions that are plausible under the new conditions of a doubly fragmented global society. The article sketches the contours of an emerging polycontextural law.
Jura Studieren [Studying Law]
(1968)
The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.
We present a biologically-inspired system for real-time, feed-forward object recognition in cluttered scenes. Our system utilizes a vocabulary of very sparse features that are shared between and within different object models. To detect objects in a novel scene, these features are located in the image, and each detected feature votes for all objects that are consistent with its presence. Due to the sharing of features between object models our approach is more scalable to large object databases than traditional methods. To demonstrate the utility of this approach, we train our system to recognize any of 50 objects in everyday cluttered scenes with substantial occlusion. Without further optimization we also demonstrate near-perfect recognition on a standard 3-D recognition problem. Our system has an interpretation as a sparsely connected feed-forward neural network, making it a viable model for fast, feed-forward object recognition in the primate visual system.
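The voting scheme described above can be sketched as follows. The feature tokens and object vocabularies are invented stand-ins for the learned sparse features; the real system detects features in the image, whereas here they are given directly.

```python
# Hedged sketch of feature voting: each detected feature votes for every
# object model that contains it, so features shared between models are
# counted once per consistent object. All names below are illustrative.
from collections import Counter

# object model -> set of (shared) features it is consistent with
models = {
    "mug":    {"ellipse", "handle_arc", "vertical_edge"},
    "teapot": {"ellipse", "handle_arc", "spout_curve"},
    "book":   {"rectangle", "vertical_edge", "text_texture"},
}

def recognize(detected_features, models, threshold=2):
    """Accumulate one vote per model for each detected feature it shares,
    and return the models whose vote count reaches the threshold."""
    votes = Counter()
    for feat in detected_features:
        for name, vocab in models.items():
            if feat in vocab:
                votes[name] += 1   # a shared feature votes for all consistent models
    return {name: n for name, n in votes.items() if n >= threshold}

scene = ["ellipse", "handle_arc", "clutter_blob", "vertical_edge"]
print(recognize(scene, models))   # {'mug': 3, 'teapot': 2} -- book gets only 1 vote
```

Because the vocabulary is shared, adding a new object model costs only a new feature set, not a new detector per object, which is the scalability argument made in the abstract.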
In November 2005, a survey was begun of the wells in and around the Hagia Sophia Church in Istanbul. The long-term goal of the survey is to understand the function of the tunnels and the water systems that served Hagia Sophia and its surroundings during the Byzantine and Ottoman periods. Alternative research methods, such as geophysical research, will be used in future surveys. The 2005 survey examined the channels that run from under the narthex and continue to the north and south of the building, as well as channels that run towards the atrium, the hippodrome, and the garden in the north. The survey resulted in the first photos of the well bottoms in the history of Hagia Sophia.
Gene trapping is a method of generating murine embryonic stem (ES) cell lines containing insertional mutations in known and novel genes. A number of international groups have used this approach to create sizeable public cell line repositories available to the scientific community for the generation of mutant mouse strains. The major gene trapping groups worldwide have recently joined together to centralize access to all publicly available gene trap lines by developing a user-oriented Website for the International Gene Trap Consortium (IGTC). This collaboration provides an impressive public informatics resource comprising ~45 000 well-characterized ES cell lines which currently represent ~40% of known mouse genes, all freely available for the creation of knockout mice on a non-collaborative basis. To standardize annotation and provide high confidence data for gene trap lines, a rigorous identification and annotation pipeline has been developed combining genomic localization and transcript alignment of gene trap sequence tags to identify trapped loci. This information is stored in a new bioinformatics database accessible through the IGTC Website interface. The IGTC Website (www.genetrap.org) allows users to browse and search the database for trapped genes, BLAST sequences against gene trap sequence tags, and view trapped genes within biological pathways. In addition, IGTC data have been integrated into major genome browsers and bioinformatics sites to provide users with outside portals for viewing this data. The development of the IGTC Website marks a major advance by providing the research community with the data and tools necessary to effectively use public gene trap resources for the large-scale characterization of mammalian gene function.
Conference report: At the 2004 meeting of the Deutsche Gesellschaft für Allgemeinmedizin und Familienmedizin e.V. (DEGAM), the idea arose to make e-learning activities in general practice visible and to pool them. A congress was to bring together representatives of general practice from teaching and research, as well as industry representatives, in order to survey the spectrum of possibilities and ongoing projects. With motivated speakers, more than 60 active participants and positive feedback, the congress held in Frankfurt on 8–9 July 2005, the first of its kind in Germany, can be called a success.
Background: Depression is a disorder with high prevalence in primary health care and a significant burden of illness. The delivery of health care for depression, as for other chronic illnesses, has been criticized for several reasons, and new strategies to address the needs of these illnesses have been advocated. Case management is a patient-centered approach which has shown efficacy in the treatment of depression in highly organized Health Maintenance Organization (HMO) settings and which might also be effective in other, less structured settings. Methods/Design: PRoMPT (PRimary care Monitoring for depressive Patients Trial) is a cluster randomised controlled trial with the General Practice (GP) as the unit of randomisation. The aim of the study is to evaluate GP-applied case management for patients with major depressive disorder. 70 GPs were randomised either to the intervention group or to the control group, with the control group delivering usual care. Each GP will include 10 patients suffering from major depressive disorder according to the DSM-IV criteria. The intervention group will receive treatment based on standardized guidelines and monthly telephone monitoring by a trained practice nurse. The nurse assesses the patient's status concerning the MDD criteria, adherence to the GP's prescriptions, possible side effects of medication, and treatment goal attainment. The control group receives usual care, including recommended guidelines. The main outcome measure is the cumulative score of the depressive disorders section (PHQ-9) of the German version of the Prime MD Patient Health Questionnaire (PHQ-D). Secondary outcome measures are the Beck Depression Inventory, self-reported adherence (adapted from Morisky) and the SF-36. In addition, data are collected on patients' satisfaction (EUROPEP tool), medication, health care utilization, comorbidity, suicide attempts and days out of work.
The study comprises three assessment times: baseline (T0), follow-up after 6 months (T1) and follow-up after 12 months (T2). Discussion: Depression is now recognized as a disorder with a high prevalence in primary care but with insufficient treatment response. Case management seems to be a promising intervention with the potential to bridge the gap left by the usually time-limited and fragmented provision of care. Case management has proven effective in several studies, but its applicability in the private general medical practice setting remains unclear.
Background: Diabetes model projects in different regions of Germany, including interventions such as quality circles, patient education and documentation of medical findings, have shown improvements of HbA1c levels, blood pressure and occurrence of hypoglycaemia in before-after studies (without control groups). In 2002 the German Ministry of Health defined legal regulations for the introduction of nationwide disease management programs (DMP) to improve the quality of care in chronically ill patients. In April 2003 the first DMP for patients with type 2 diabetes was accredited. The evaluation of the DMP is essential and has been made obligatory in Germany by the Fifth Book of the Social Code. The aim of the study is to assess the effectiveness of DMPs, using type 2 diabetes as an example, in the primary care setting of two German federal states (Rheinland-Pfalz and Sachsen-Anhalt). Methods/Design: The study is three-armed: a prospective cluster-randomized comparison of two interventions (DMP 1 and DMP 2) against routine care without DMP as the control group. In DMP group 1, patients are treated according to the current practice of the German Diabetes DMP. DMP group 2 represents diabetic care within an ideally implemented DMP providing additional interventions (e.g. quality circles, outreach visits). According to a sample size calculation, 200 GPs (each including 20 patients) will be required for the comparison of DMP 1 and DMP 2, allowing for possible drop-outs. For the comparison with routine care, 4000 patients identified by diabetic tracer medication and age (> 50 years) will be analyzed. Discussion: This study will evaluate the effectiveness of the German Diabetes DMP compared with a Diabetes DMP providing additional interventions and with routine care in the primary care setting of two different German federal states.
The LaMedica project (http://www.lamedica.de) aims to develop a multimedia teaching and learning platform, to create content for medicine, and to implement it in teaching. An online authoring environment was created that supports very different didactic approaches: systematic and networked knowledge transfer, case-based learning, the creation of lectures, and assessment of learning progress. The teaching content can be prepared and presented for specific target groups and is aimed in particular at students, physicians in specialist training, and specialists. An online media database supports the reuse and exchange of content on the basis of a content management system using the Learning Objects Metadata (LOM) standard. Funding is provided by the BMBF (FKZ NM054A).
Aims: This paper reviews the literature on problem-related alcohol drinking among medical doctors, covering its epidemiology and findings. Methods: The computer literature databases PubMed and ETOH were searched for articles reporting problem-related drinking in population-based samples of doctors within the last two decades. Results: Reflecting the different definitions of problem-related drinking, the prevalence found in population-based samples of doctors ranged widely, from heavy and hazardous drinking (12%-16%) to misuse and dependence (6%-8%). An increased risk was associated with male doctors, doctors aged 40-45 years and older, and some work-, lifestyle- and health-related factors. Conclusion: For the future, it seems necessary to sensitize research to problem-related drinking among doctors in Germany, e.g. by initiating a representative survey and analysing alcohol drinking in the context of health-, lifestyle- and work-related factors.
Celiac disease
(2006)
Celiac disease is a chronic intestinal disease caused by intolerance to gluten. It is characterized by immune-mediated enteropathy, associated with maldigestion and malabsorption of most nutrients and vitamins. In predisposed individuals, the ingestion of gluten-containing food such as wheat and rye induces a flat jejunal mucosa with infiltration of lymphocytes. The main symptoms are stomach pain, gas, bloating, diarrhea, weight loss, anemia, edema, and bone or joint pain. The prevalence of clinically overt celiac disease varies from 1:270 in Finland to 1:5000 in North America. Since celiac disease can be asymptomatic, most subjects are not diagnosed, or they present with atypical symptoms. Furthermore, severe inflammation of the small bowel can be present without any gastrointestinal symptoms. The diagnosis should be made early, since untreated celiac disease causes growth retardation in children as well as atypical symptoms such as infertility or neurological disorders. Diagnosis requires endoscopy with jejunal biopsy. In addition, tissue-transglutaminase antibodies are important to confirm the diagnosis, since there are other diseases which can mimic celiac disease. The exact cause of celiac disease is unknown, but it is thought to be primarily immune mediated (tissue transglutaminase is the autoantigen); often the disease is inherited. Management consists of lifelong withdrawal of dietary gluten, which leads to significant clinical and histological improvement. However, complete normalization of histology can take years.
Prostaglandin E2 (PGE2) plays an important role in bone development and metabolism. To interfere therapeutically in the PGE2 pathway, however, knowledge about the enzymes (cyclooxygenases) and receptors (PGE2 receptors) involved is essential. We therefore examined the production of PGE2 in cultured growth plate chondrocytes in vitro and the effects of exogenously added PGE2 on cell proliferation. Furthermore, we analysed the expression and spatial distribution of cyclooxygenase (COX)-1 and COX-2 and the PGE2 receptor types EP1, EP2, EP3 and EP4 in the growth plate in situ and in vitro. PGE2 synthesis was determined by mass spectrometry, cell proliferation by [³H]-thymidine incorporation into DNA, and mRNA expression of cyclooxygenases and EP receptors by RT-PCR on cultured cells and homogenized growth plates. To determine cellular expression, frozen sections of rat tibial growth plate and primary chondrocyte cultures were stained by immunohistochemistry with polyclonal antibodies directed against COX-1, COX-2, EP1, EP2, EP3, and EP4. Cultured growth plate chondrocytes transiently secreted PGE2 into the culture medium. Although both enzymes were expressed in chondrocytes in vitro and in vivo, it appears that mainly COX-2 contributed to PGE2-dependent proliferation. Exogenously added PGE2 stimulated DNA synthesis in a dose-dependent fashion, giving a bell-shaped curve with a maximum at 10⁻⁸ M. The EP1/EP3-specific agonist sulprostone and the EP1-selective agonist ONO-D1-004 increased DNA synthesis. The effect of PGE2 was suppressed by ONO-8711. The expression of EP1, EP2, EP3, and EP4 receptors in situ and in vitro was observed; EP2 was homogeneously expressed in all zones of the growth plate in situ, whereas EP1 expression was inhomogeneous, with spared cells in the reserve zone. In cultured cells these four receptors were expressed in a subset of cells only. The most intense staining for the EP1 receptor was found in polygonal cells surrounded by matrix.
Expression of the receptor protein for EP3 and EP4 was also observed in rat growth plates. In cultured chondrocytes, however, only weak expression of the EP3 and EP4 receptors was detected. We suggest that in growth plate chondrocytes, COX-2 is responsible for PGE2 release, which stimulates cell proliferation via the EP1 receptor.
Herman P. Schwan (1915–2005) was a distinguished scientist and engineer, and a founding father of the field of biomedical engineering. A man of integrity, Schwan influenced the lives of many, including his wife and children, and his many students and colleagues. Active in science until nearly the end of his life, he will be very much missed by his family and many colleagues.
High-throughput gene trapping is a random approach for inducing insertional mutations across the mouse genome. This approach uses gene trap vectors that simultaneously inactivate and report the expression of the trapped gene at the insertion site, and provide a DNA tag for the rapid identification of the disrupted gene. Gene trapping has been used by both public and private institutions to produce libraries of embryonic stem (ES) cells harboring mutations in single genes. Presently, ~66% of the protein-coding genes in the mouse genome have been disrupted by gene trap insertions. Among these, however, genes encoding signal peptides or transmembrane domains (secretory genes) are underrepresented because they are not susceptible to conventional trapping methods. Here, we describe a high-throughput gene trapping strategy that effectively targets secretory genes. We used this strategy to assemble a library of ES cells harboring mutations in 716 unique secretory genes, of which 61% were not trapped by conventional trapping, indicating that the two strategies are complementary. The trapped ES cell lines, which can be ordered from the International Gene Trap Consortium (http://www.genetrap.org), are freely available to the scientific community.
Background: Murine leukemia virus (MLV) vector particles can be pseudotyped with a truncated variant of the human immunodeficiency virus type 1 (HIV-1) envelope protein (Env) and selectively target gene transfer to human cells expressing both CD4 and an appropriate co-receptor. Vector transduction mimics the HIV-1 entry process and is therefore a safe tool to study HIV-1 entry. Results: Using FLY cells, which express the MLV gag and pol genes, we generated stable producer cell lines that express the HIV-1 envelope gene and a retroviral vector genome encoding the green fluorescent protein (GFP). The BH10 or 89.6P HIV-1 Env was expressed from a bicistronic vector, which allowed the rapid selection of stable cell lines. A codon-usage-optimized synthetic env gene permitted high, Rev-independent Env expression. Vectors generated by these producer cells displayed different sensitivity to entry inhibitors. Conclusion: These data illustrate that MLV/HIV-1 vectors are a valuable screening system for entry inhibitors or neutralizing antisera generated by vaccines.
Background: Cancer gene therapy will benefit from vectors that are able to replicate in tumor tissue and cause a bystander effect. Replication-competent murine leukemia virus (MLV) has been described as having potential as a cancer therapeutic; however, MLV infection does not cause a cytopathic effect in the infected cell, and viral replication can only be studied by immunostaining or measurement of reverse transcriptase activity. Results: We inserted the coding sequence for green fluorescent protein (GFP) into the proline-rich region (PRR) of the ecotropic envelope protein (Env) and were able to fluorescently label MLV. This allowed us to directly monitor viral replication and attachment to target cells by flow cytometry. We used this method to study viral replication of recombinant MLVs and split viral genomes, which were generated by replacing the MLV env gene with the red fluorescent protein (RFP) and separately cloning GFP-Env into a retroviral vector. Co-transfection of both plasmids into target cells resulted in the generation of semi-replicative vectors, and the two-color labeling allowed us to determine the distribution of the individual genomes in the target cells and was indicative of the occurrence of recombination events. Conclusions: Fluorescently labeled MLVs are excellent tools for the study of factors that influence viral replication and can be used to optimize MLV-based replication-competent viruses or vectors for gene therapy.
The 5'-terminal cloverleaf (CL)-like RNA structures are essential for the initiation of positive- and negative-strand RNA synthesis of entero- and rhinoviruses. SLD is the cognate RNA ligand of the viral proteinase 3C (3Cpro), which is an indispensable component of the viral replication initiation complex. The structure of an 18-mer RNA representing the apical stem and the cGUUAg D-loop of SLD from the first 5'-CL of BEV1 was determined in solution to a root-mean-square deviation (r.m.s.d.) (all heavy atoms) of 0.59 Å (PDB 1Z30). The first (anti-G) and last (syn-A) nucleotides of the D-loop form a novel 'pseudo base pair' without direct hydrogen bonds. The backbone conformation and the base-stacking pattern of the cGUUAg loop, however, are highly similar to those of the coxsackieviral uCACGg D-loop (PDB 1RFR) and of the stable cUUCGg tetraloop (PDB 1F7Y), but surprisingly dissimilar to the structure of a cGUAAg stable tetraloop (PDB 1MSY), even though the cGUUAg BEV D-loop and the cGUAAg tetraloop differ by 1 nt only. Together with the presented binding data, these findings provide independent experimental evidence for our model [O. Ohlenschläger, J. Wöhnert, E. Bucci, S. Seitz, S. Häfner, R. Ramachandran, R. Zell and M. Görlach (2004) Structure, 12, 237–248] that the proteinase 3Cpro recognizes structure rather than sequence.
We have isolated the human protein SNEV as downregulated in replicatively senescent cells. Sequence homology to the yeast splicing factor Prp19 suggested that SNEV might be the orthologue of Prp19 and therefore might also be involved in pre-mRNA splicing. We have used various approaches to determine its function, including gene complementation studies in yeast using a temperature-sensitive mutant with a pleiotropic phenotype, and SNEV immunodepletion from human HeLa nuclear extracts. A human–yeast chimera was indeed capable of restoring the wild-type phenotype of the yeast mutant strain. In addition, immunodepletion of SNEV from human nuclear extracts resulted in a decrease of in vitro pre-mRNA splicing efficiency. Furthermore, as part of our analysis of protein–protein interactions within the CDC5L complex, we found that SNEV interacts with itself. The self-interaction domain was mapped to amino acids 56–74 of the protein's sequence, and synthetic peptides derived from this region inhibit in vitro splicing, surprisingly, by interfering with spliceosome formation and stability. These results indicate that SNEV is the human orthologue of yeast Prp19, that it functions in splicing, and that homo-oligomerization of SNEV in HeLa nuclear extract is essential for spliceosome assembly and might also be important for spliceosome stability.
In order to further understand how DNA polymerases discriminate against incorrect dNTPs, we synthesized two sets of dNTP analogues and tested them as substrates for DNA polymerase α (pol α) and the Klenow fragment (exo⁻) of DNA polymerase I (Escherichia coli). One set of analogues was designed to test the importance of the electronic nature of the base. The bases consisted of a benzimidazole ring with one or two exocyclic substituents that are either electron-donating (methyl and methoxy) or electron-withdrawing (trifluoromethyl and dinitro). Both pol α and the Klenow fragment exhibit a remarkable inability to discriminate against these analogues as compared to their ability to discriminate against incorrect natural dNTPs. Neither polymerase shows any distinct electronic or steric preferences for analogue incorporation. The other set of analogues, designed to examine the importance of hydrophobicity in dNTP incorporation, consists of four regioisomers of trifluoromethyl benzimidazole. Whereas pol α and the Klenow fragment exhibited minimal discrimination against the 5- and 6-regioisomers, they discriminated much more effectively against the 4- and 7-regioisomers. Since all four of these analogues have similar hydrophobicity and stacking ability, these data indicate that hydrophobicity and stacking ability alone cannot account for the inability of pol α and the Klenow fragment to discriminate against unnatural bases. After incorporation, however, both sets of analogues were not efficiently elongated. These results suggest that factors other than hydrophobicity, sterics and electronics govern the incorporation of dNTPs into DNA by pol α and the Klenow fragment.
Background: Costly structures must confer an adaptive advantage in order to be maintained over evolutionary time. Contrary to many other conspicuous shell ornamentations of gastropods, the haired shells of several Stylommatophoran land snails still lack a convincing adaptive explanation. In the present study, we analysed the correlation between the presence/absence of hairs and habitat conditions in the genus Trochulus in a Bayesian framework of character evolution. Results: Haired shells appeared to be the ancestral character state, a feature most probably lost three times independently. These losses were correlated with a shift from humid to dry habitats, indicating an adaptive function of hairs in moist environments. It had previously been hypothesised that these costly protein structures of the outer shell layer facilitate locomotion in moist habitats. Our experiments, on the contrary, showed an increased adherence of haired shells to wet surfaces. Conclusion: We propose the hypothesis that the possession of hairs facilitates the adherence of the snails to their herbaceous food plants during foraging when humidity levels are high. The absence of hairs in some Trochulus species could thus be explained as a loss of the potential adaptive function linked to habitat shifts.
The volume changes of lithium and sodium under pressure are discussed with respect to the packing density of the atoms and their valence. In densely packed Li I (bcc), Li II (fcc), and Li III (α-Hg type), the valence increases from 1 at ~5 GPa to ~2.5 at 40 GPa. The maximum valence of 3 is attained in Li IV (body-centered cubic, 16 atoms per cell, packing density q = 0.965) at 47 GPa. In densely packed Na I (bcc), a linear increase of valence from 1 at ~10 GPa to 2.9 at 65 GPa is found, which continues in Na II (fcc) up to 4.1 at 103 GPa.
A new approach to optimize multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced in the synthesis of multilevel circuits. This method is augmented with transformations that specifically enhance random-pattern testability while reducing the area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other similar tools, but also better testability compared to available testability enhancement tools such as tstfx. Specifically, for the ISCAS-85 benchmark circuits, it was observed that EX-OR gate-based transformations successfully contributed toward generating smaller circuits compared to other state-of-the-art logic optimization tools.
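As background for the Reed–Muller expansions mentioned above: the positive-polarity Reed–Muller form (algebraic normal form) of a Boolean function can be computed from its truth table by a butterfly transform over GF(2). This is textbook material, shown here as a sketch; it is not the paper's synthesis algorithm itself.

```python
# Hedged sketch: positive-polarity Reed-Muller (ANF) coefficients via the
# GF(2) Moebius/butterfly transform. f(x) = XOR of c[m] over all monomials
# m with m a bit-subset of x.
def reed_muller_coeffs(truth_table):
    """truth_table: list of 0/1 of length 2**n, indexed by the input bits.
    Returns the ANF coefficient vector c, same length."""
    c = list(truth_table)
    n = len(c).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for j in range(len(c)):
            if j & step:
                c[j] ^= c[j ^ step]       # butterfly stage over GF(2)
    return c

def eval_anf(c, x):
    """Evaluate the ANF at input x by XOR-ing the active monomials."""
    v = 0
    for m, cm in enumerate(c):
        if cm and (m & x) == m:
            v ^= 1
    return v

# Example: 2-input XOR, truth table f(00,01,10,11) = 0,1,1,0
c = reed_muller_coeffs([0, 1, 1, 0])
print(c)  # [0, 1, 1, 0] -> f = x0 XOR x1, no x0&x1 monomial needed
```

Functions whose ANF has few monomials are exactly the ones that profit from EX-OR gate-based restructuring, which is one intuition behind the transformations the abstract describes.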
Channel routing is an NP-complete problem. Therefore, it is likely that there is no efficient algorithm solving this problem exactly. In this paper, we show that channel routing is a fixed-parameter tractable problem and that we can find a solution in linear time for a fixed channel width. We implemented our approach for the restricted layer model. The algorithm finds an optimal route for channels with up to 13 tracks within minutes or up to 11 tracks within seconds. Such narrow channels occur, for example, as a leaf problem of hierarchical routers or within standard cell generators.
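For intuition about what a channel router assigns, the classic left-edge heuristic places each net's horizontal span on the lowest track where it fits. This is a simplified stand-in that ignores vertical constraints, not the fixed-parameter algorithm of the paper; all names here are invented for illustration.

```python
# Left-edge track assignment: greedily pack horizontal net intervals
# into tracks. Simplified sketch (no vertical constraints).

def left_edge(intervals):
    """intervals: list of (left, right) net spans; returns (assignment, width)."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    track_right = []           # rightmost occupied column per track
    assignment = {}
    for i in order:
        left, right = intervals[i]
        for t, r in enumerate(track_right):
            if r < left:       # interval fits after current track contents
                track_right[t] = right
                assignment[i] = t
                break
        else:                  # no track fits: open a new one
            track_right.append(right)
            assignment[i] = len(track_right) - 1
    return assignment, len(track_right)

nets = [(0, 3), (4, 7), (1, 5), (6, 9)]
assign, width = left_edge(nets)
print(width)  # 2 tracks suffice for this example
```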
We present a theoretical analysis of structural FSM traversal, which is the basis for the sequential equivalence checking algorithm Record & Play presented earlier. We compare the convergence behaviour of exact and approximate structural FSM traversal with that of standard BDD-based FSM traversal. We show that for most circuits encountered in practice exact structural FSM traversal reaches the fixed point as fast as symbolic FSM traversal, while approximation can significantly reduce the number of iterations needed. Our experiments confirm these results.
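FSM traversal, whether structural or BDD-based, iterates an image computation until no new states appear; the number of iterations to this fixed point is what the convergence comparison above refers to. Below is a toy explicit-state version (symbolic traversal performs the same iteration on BDD-encoded state sets); the example machine and names are invented for illustration.

```python
# Explicit-state fixed-point reachability for a small FSM.

def reachable(initial, transitions, inputs):
    """transitions(state, inp) -> next state; returns (reached set, iterations)."""
    reached = set(initial)
    frontier = set(initial)
    iterations = 0
    while frontier:                      # fixed point: frontier empty
        image = {transitions(s, a) for s in frontier for a in inputs}
        frontier = image - reached       # only genuinely new states
        reached |= frontier
        iterations += 1
    return reached, iterations

# 3-bit counter with an enable input: 8 states reachable from state 0
step = lambda s, en: (s + en) % 8
states, iters = reachable({0}, step, inputs=(0, 1))
print(len(states), iters)  # 8 states, 8 image computations
```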
We present the FPGA implementation of an algorithm [4] that computes implications between signal values in a Boolean network. The research was performed as a master's thesis [5] at the University of Frankfurt. The recursive algorithm is rather complex for a hardware realization, and the FPGA implementation is therefore an interesting example of the potential of reconfigurable computing beyond systolic algorithms. A circuit generator was written that transforms a Boolean network into a network of small processing elements and a global control logic which together implement the algorithm. The resulting circuit performs the computation two orders of magnitude faster than a software implementation run on a conventional workstation.
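A minimal sketch of what "computing implications between signal values" means, assuming a network of AND gates only; the cited recursive algorithm and its FPGA mapping handle the general (including indirect) case, and all names below are invented for this sketch.

```python
# Direct implication propagation for two-input AND gates: given a partial
# value assignment, derive the values it forces on other signals.

def imply(gates, assignment):
    """gates: list of (out, in1, in2) AND gates; assignment: {signal: 0/1}."""
    values = dict(assignment)
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for out, a, b in gates:
            va, vb, vo = values.get(a), values.get(b), values.get(out)
            new = {}
            if va == 0 or vb == 0:
                new[out] = 0              # a controlling 0 forces the output
            if va == 1 and vb == 1:
                new[out] = 1
            if vo == 1:
                new[a], new[b] = 1, 1     # backward implication
            if vo == 0 and va == 1:
                new[b] = 0
            if vo == 0 and vb == 1:
                new[a] = 0
            for sig, v in new.items():
                if values.get(sig) != v:
                    values[sig] = v
                    changed = True
    return values

# y = a AND b, z = y AND c; setting z = 1 forces every input to 1
net = [("y", "a", "b"), ("z", "y", "c")]
forced = imply(net, {"z": 1})
print(forced)
```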
One of the most severe shortcomings of currently available equivalence checkers is their inability to verify integer multipliers. In this paper, we present a bit level reverse-engineering technique that can be integrated into standard equivalence checking flows. We propose a Boolean mapping algorithm that extracts a network of half adders from the gate netlist of an addition circuit. Once the arithmetic bit level representation of the circuit is obtained, equivalence checking can be performed using simple arithmetic operations. Experimental results show the promise of our approach.
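To illustrate the half-adder view of addition circuits: a full adder decomposes into two half adders, and once a circuit is expressed as such a network, checking it against integer addition reduces to simple arithmetic. The sketch below verifies a tiny ripple-carry adder by exhaustive simulation on 4-bit operands; it is a hedged illustration, not the paper's Boolean mapping algorithm.

```python
# A full adder built from two half adders, chained into a ripple-carry
# adder and checked exhaustively against Python's integer addition.

def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

def ripple_add(x, y, nbits):
    carry, out = 0, 0
    for i in range(nbits):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        s1, c1 = half_adder(xi, yi)
        s2, c2 = half_adder(s1, carry)
        out |= s2 << i
        carry = c1 | c2          # the two carries are never both 1
    return out | (carry << nbits)

NBITS = 4
ok = all(ripple_add(x, y, NBITS) == x + y
         for x in range(2 ** NBITS) for y in range(2 ** NBITS))
print(ok)  # True
```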
This paper argues that short (clause-internal) scrambling to a pre-subject position has A-properties in Japanese but A'-properties in German, while long scrambling (scrambling across sentence boundaries) from finite clauses, which is possible in Japanese but not in German, has A'-properties throughout. It is shown that these differences between German and Japanese can be traced back to parametric variation of phrase structure and the parameterized properties of functional heads. Due to the properties of Agreement, sentences in Japanese may contain multiple (Agro- and Agrs-) specifiers, whereas German does not allow for this. In Japanese, a scrambled element may be located in a Spec-AgrP, i.e. an A- or L-related position, whereas scrambled NPs in German can only appear in an AgrP-adjoined (broadly-L-related) position, which only has A'-properties. Given our assumption that successive-cyclic adjunction is generally impossible, elements in German may not be long scrambled because a scrambled element that is moved to an adjunction site inside an embedded clause may not move further. In Japanese, long-distance scrambling out of finite CPs is possible since scrambling may proceed in a successive-cyclic manner via embedded Spec-AgrP positions. Our analysis of the differences between German and Japanese scrambling provides us with an account of further contrasts between the two languages, such as the existence of surprising asymmetries between German and Japanese remnant-movement phenomena and the fact that, unlike German, Japanese freely allows wh-scrambling. Investigation of the properties of Japanese wh-movement also leads us to the formulation of the "Wh-cluster Hypothesis", which implies that Japanese is an LF multiple wh-fronting language.
In this article, I discuss some important properties of wh-questions and wh-scrambling in Japanese. The questions I will address are (i) which instances of (wh-)scrambling involve reconstruction and (ii) how the undoing effects of scrambling can be derived. First I will discuss the claim that (wh-)scrambling is semantically vacuous and is therefore undone at LF (Saito 1989, 1992). Then I consider the data that led Takahashi (1993) to the conclusion that at least some instances of wh-scrambling have to be analyzed as instances of "full wh-movement", i.e., overt movement of the wh-phrase to its scopal position. It will be argued that these examples are not instances of full wh-movement in Japanese, but that they also represent semantically vacuous scrambling. Those instances of scrambling that apparently cannot be undone are best explained with recourse to parsing effects. I conclude that wh-scrambling in Japanese is always triggered by a ([-wh]-)scrambling feature. In addition, long-distance scrambling (scrambling out of finite CPs) is analyzed as adjunction movement, whereas short-distance scrambling is movement to a specifier position of IP. Turning to the mechanisms of undoing, I will argue that only long-distance scrambling is undone. This is shown to follow from Chomsky's (1995) bare phrase structure analysis, according to which multi-segmental categories derived by adjunction movement are not licensed at LF. The article is organized as follows. In section 2, the wh-scrambling phenomenon is described. In section 3, I discuss the reconstruction properties of scrambling. In addition, this section provides some basic assumptions about my analysis of Japanese scrambling in general. In section 4, I turn to the analysis of wh-scrambling as an instance of full wh-movement in Japanese. Section 5 provides a discussion of multiple wh-questions in Japanese, and section 6 gives the conclusion.
The double-object construction is an object of study that has substantially influenced theory building in syntactic research. Studies of double-object constructions have had consequences for, among other things, Case theory as well as for analyses of verb movement and of clause, VP, and argument structure. In this paper I present an analysis of some important aspects of the double-object construction in German. It is investigated in which position the objects of the verb are base-generated and in which derived positions they appear. Answering these questions yields an explanation of the asymmetric behaviour of the objects involved with respect to binding and extraction.
In this article, I discuss the distribution of so-called 'coherent (control) infinitives' in German. In section 2, I argue that both coherent as well as incoherent control infinitives have sentential status. In section 3, I argue that only infinitives occupying the position of the direct object show the well-known properties associated with coherent infinitives. Control infinitives in other structural positions represent incoherent infinitives. This situation is not limited to German infinitives: transparent infinitives in Polish and Spanish show the same structural asymmetry. In section 4, I propose a unified analysis of the data that, in addition, correctly predicts further restrictions on the distribution of coherent infinitives. In section 5, I propose an account of the idiolectal variation among speakers with respect to the class of verbs that license coherent infinitives. This account is based on the idea that coherent infinitives require an incorporation feature in their lexical entry that is acquired on the basis of positive evidence.
In this paper I pursue the question of how many distinct positions verbs can occupy in the German clause. Syntactic tests show that the verb in German appears in a total of three distinct positions, not, as traditional grammar assumes, in only two (the right and the left sentence bracket). It is argued that applying the abstract clause schema that is nowadays commonly assumed as a universal model of the clause in generative grammar makes it possible to explain a wide range of syntactic phenomena in German that cannot be explained by the traditional verb-placement analysis, which assumes only two verb positions. According to the universal clause schema, the Infl(ection) position constitutes one verb position in the clause; it is identical with the right sentence bracket. A further verb position is the V position within the middle field, and a third potential position for the verb corresponds to the C(omplementizer) position (i.e. the left sentence bracket). The paper is organized as follows. In the introduction I briefly outline the different views on the verb-placement problem in German that have been held in the past. In section 2 I present the most important arguments that have been raised against the assumption that a total of three positions are available for verbs in the German clause. Sections 3.1 and 3.2 then discuss arguments in favour of three verb positions. Section 3.3 addresses how, against this background, the data from section 2 that proved problematic for this analysis can be explained. In section 4 I turn to further independent evidence from historical syntax that supports three verb positions in German. In section 5 I give a brief summary of the most important results.
The starting point of the following study is the observation that different versions of the Principles and Parameters theory make different predictions about structurally ambiguous word orders in German passive constructions. Within a theory in which Move-alpha applies freely, as in Government and Binding theory (Chomsky 1981, 1986a, 1986b), multiple derivations for such orders cannot be excluded, whereas the situation differs when the relevant constructions are analysed within the Minimalist Program: here the number of possible derivations (compatible with a given word order) can be restricted by economy principles. On the basis of various syntactic tests it is then shown that certain word orders are compatible with only one derivation, in line with a minimalist analysis of the data. The paper is organized as follows. In section 2 I lay out the basic problem of multiple derivations that arises in German, e.g. in passive constructions, if one assumes that NP movement and scrambling apply optionally and that no constraints on potential derivations hold. In section 3 I discuss the prerequisites for testing the predictions of the different variants of the Principles and Parameters model and then attempt to show, on the basis of syntactic tests, that the examples under discussion are in fact not structurally ambiguous but structurally unambiguous, as the analysis of the relevant constructions within the Minimalist Program predicts. Section 4 describes the consequences of the analysis for further languages such as Dutch and Japanese and for additional movement types. Section 5 contains the conclusion.
Homing in with GPS
(2000)
This is a review of the present status of heavy-ion collisions at intermediate energies. The main goal of heavy-ion physics in this energy regime is to shed some light on the nuclear equation of state (EOS); hence we present the basic concept of the EOS in nuclear matter as well as of nuclear shock waves, which provide the key mechanism for the compression of nuclear matter. The main part of this article is devoted to the models currently used for describing heavy-ion reactions theoretically and to the observables useful for extracting information about the EOS from experiments. A detailed discussion of the flow effects with a broad comparison with the available data is presented. The many-body aspects of such reactions are investigated via the multifragmentation breakup of excited nuclear systems, and a comparison of model calculations with the most recent multifragmentation experiments is presented.
In the framework of the relativistic quantum dynamics approach we investigate antiproton observables in Au-Au collisions at 10.7A GeV. The rapidity dependence of the in-plane directed transverse momentum px(y) of antiprotons shows the opposite sign to that of the nucleon flow, which has indeed recently been discovered at 10.7A GeV by the E877 group. This "antiflow" of antiprotons is also predicted at 2A GeV and at 160A GeV and appears at all energies also for pions and kaons. These predicted antiproton anticorrelations are a direct proof of strong antiproton annihilation in massive heavy-ion reactions.
The quantum statistical model (QSM) is used to calculate nuclear fragment distributions in chemical equilibrium. Several observable isotopic effects are predicted for intermediate-energy heavy-ion collisions. It is demonstrated that particle ratios for different systems do not depend on the breakup density, the only free parameter in our model. The importance of entropy measurements is discussed. Specific particle ratios for the system Au-Au are predicted, which can be used to determine the chemical potentials of the hot midrapidity fragment source in nearly central heavy-ion collisions. PACS: 25.70.Pq
The Monte Carlo parton string model for multiparticle production in hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions at high energies is described. An adequate choice of the model parameters makes it possible to recover the main results of the dual parton model, with the advantage of treating both hadron and nuclear interactions on the same footing, reducing them to interactions between partons. The possibility of considering both soft and hard parton interactions is also introduced.
The properties of pions from the hot and dense reaction stage of relativistic heavy-ion collisions are investigated with the quantum molecular dynamics model. Pions originating from this reaction stage stem from the decay of resonances with enhanced masses and carry high transverse momenta. The calculation shows a direct correlation between high-pt pions, early freeze-out times and high freeze-out densities.
Dilepton spectra for p+p and p+d reactions at 4.9 GeV are calculated. We consider electromagnetic bremsstrahlung also in inelastic reactions. N* and Delta* decays present the major contributions to the rho and omega meson yields. Pion annihilation yields only 1.5% of all rho's in p+d. The rho mass spectrum is strongly distorted due to phase space effects, dominantly populating dilepton masses below 770 MeV.
Strong mean meson fields, which are known to exist in normal nuclei, experience a violent deformation in the course of a heavy-ion collision at relativistic energies. This may give rise to a new collective mechanism of the particle production, not reducible to the superposition of elementary nucleon-nucleon collisions.
We investigate the sensitivity of pionic bounce-off and squeeze-out to the density and momentum dependence of the real part of the nucleon optical potential. For the in-plane pion bounce-off we find a strong sensitivity to both the density and momentum dependence, whereas the out-of-plane pion squeeze-out shows a strong sensitivity only to the momentum dependence but little sensitivity to the density dependence.
We demonstrate the importance of the Bose-statistical effects for pion production in relativistic heavy-ion collisions. The evolution of the pion phase-space density in central collisions of ultrarelativistic nuclei is studied in a simple kinetic model taking into account the effect of Bose-stimulated pion production by the NN collisions in a dense cloud of mesons.
The volume changes of solid iodine under pressure are discussed with respect to the packing density of the atoms and to valence. The packing density of solid iodine which is 0.805 under ambient pressure increases to 0.976 in monoatomic iodine-II, 0.993 in iodine-III, and 1 in fcc iodine-IV. Simultaneously, the valence increases from 1 in the free molecule to 1.78 in the crystal structure under ambient pressure, 2.72 – 2.81 in iodine-II, 2.86 – 2.96 in iodine-III, and 3 in fcc iodine-IV. The valence then remains constant up to about 180 GPa and rises moderately to 3.15 at the highest investigated pressure of 276 GPa. Parameters for calculating bond numbers, valences and atomic volumes of densely packed halogens, hydrogen, oxygen, and nitrogen are given.
The volume changes of cesium under pressure are discussed with respect to the packing density of the atoms and valence. The element is univalent in densely packed Cs I and Cs II. Valence increases in Cs III (packing density q = 0.973), in Cs IV (q = 0.943), in Cs V (q ~ 0.99), and in close-packed Cs VI. The diminution of volume beyond ~15 GPa is caused by this increase alone, which implies that electrons of the fifth shell act as valence electrons.
Relationships between bond lengths and bond numbers and also between atomic volumes and valences are derived, and parameters for their calculation are given for the s-block, p-block, and d-block metals. From the atomic volumes under pressure, the valences of three solid lanthanoids have been confirmed or redetermined: La 3; Ce 2, 3, and 4; Yb 2 and 3.
The database BioLIS is provided online free of charge by the University Library Johann Christian Senckenberg (Frankfurt am Main). It indexes German biological journal literature from 1970 to 1996, making BioLIS an essential complement to the database Biological Abstracts. The bibliographic records of the indexed articles are enriched with comprehensive subject headings and the names of the organisms treated, so that specialised searches, in particular for literature on specific organisms, are possible.
We demonstrate that the creation of strange matter is conceivable in the midrapidity region of heavy-ion collisions at Brookhaven RHIC and CERN LHC. A finite net-baryon density, abundant (anti)strangeness production, as well as strong net-baryon and net-strangeness fluctuations provide suitable initial conditions for the formation of strangelets or metastable exotic multistrange (baryonic) objects. Even at a very high initial entropy per baryon S/A ≈ 500 and low initial baryon numbers A_B ≈ 30, a quark-gluon-plasma droplet can immediately charge up with strangeness and accumulate net-baryon number. PACS numbers: 25.75.Dw, 12.38.Mh, 24.85.+
Measured hadron yields from relativistic nuclear collisions can be equally well understood in two physically distinct models, namely a static thermal hadronic source versus a time-dependent, non-equilibrium hadronization off a quark-gluon plasma droplet. Due to the time-dependent particle evaporation off the hadronic surface in the latter approach, the hadron ratios change (by factors of ~5) in time. The overall particle yields then reflect time averages over the actual thermodynamic properties of the system at a certain stage of evolution.
Metallic radii rm are correlated with the ionic radii ri by linear relationships. For groups 1 up to 7 as well as for Al, Ga, In, Tl, Sn, and Pb, the ionic radii refer to the maximum valences (oxidation states) known from compounds, according to rm ~ 1.16 x (ri + 0.64) [Å]. For groups 8 up to 12, rm ~ 0.48 x (ri + 2.26) [Å] with valences W = 14 - G (G = group number). These valences are considered regular (Wr). For groups 1 up to 12, they obey the equation Wr = 7 - |G - 7|. According to this equation, all outer s electrons and the unpaired d electrons should be involved in chemical bonding, i.e. in the cohesion of the element in the solid state. From the melting temperatures and the atomic volumes it is concluded, however, that only 19 out of the 30 d-block elements have regular valences, namely the elements of groups 3, 5, 6, 10, 11 as well as Os, Ir, Zn, Cd, and possibly Ru. All of the non-regular valences are lower than the regular ones. Four of them are integers: Mn 3; Fe, Co 4; Re 6.
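The empirical relations quoted in this abstract can be transcribed directly into code; this is a plain restatement of the formulas above (the function names are ours), useful for checking that W = 14 - G for groups 8 to 12 coincides with the general rule Wr = 7 - |G - 7|.

```python
# Regular valence W_r = 7 - |G - 7| for groups 1-12, and the two linear
# radius correlations: r_m ~ 1.16 (r_i + 0.64) Angstrom (groups 1-7 and
# the listed p-block metals), r_m ~ 0.48 (r_i + 2.26) Angstrom (groups 8-12).

def regular_valence(group):
    assert 1 <= group <= 12
    return 7 - abs(group - 7)

def metallic_radius(r_ionic, group):
    """r_ionic in Angstroms, for the maximum-valence ion of the element."""
    if group <= 7:
        return 1.16 * (r_ionic + 0.64)
    return 0.48 * (r_ionic + 2.26)

print([regular_valence(g) for g in range(1, 13)])
# [1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2]
```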
Local thermal and chemical equilibration is studied for central A+A collisions at 10.7-160 A GeV in the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model. The UrQMD model exhibits strong deviations from local equilibrium in the high-density hadron-string phase formed during the early stage of the collision. Equilibration of the hadron resonance matter is established in the central cell of volume V = 125 fm^3 at later stages, t ≥ 10 fm/c, of the resulting quasi-isentropic expansion. The thermodynamical functions in the cell and their time evolution are presented. Deviations of the UrQMD quasi-equilibrium state from the statistical mechanics equilibrium are found. They increase with energy per baryon and lead to a strong enhancement of the pion number density as compared to statistical mechanics estimates at SPS energies. PACS: 25.75.-q; 24.10.Lx; 24.10.Pa; 64.30.qt
Nonequilibrium models (three-fluid hydrodynamics and UrQMD) are used to discuss the uniqueness of often-proposed experimental signatures for quark matter formation in relativistic heavy-ion collisions. It is demonstrated that these two models, although they treat the most interesting early phase of the collisions quite differently (thermalizing QGP vs. coherent color fields with virtual particles), both yield reasonable agreement with a large variety of the available heavy-ion data.
We study J/psi suppression in AB collisions assuming that the charmonium states evolve from small, color-transparent configurations. Their interaction with nucleons and nonequilibrated secondary hadrons is simulated using the microscopic model UrQMD. The Drell-Yan lepton pair yield and the J/psi to Drell-Yan ratio are calculated as a function of the neutral transverse energy in Pb+Pb collisions at 160 GeV and found to be in reasonable agreement with existing data.
We derive the relativistic quantum transport equation for the pion distribution function based on an effective Lagrangian of the QHD-II model. The closed-time-path Green's function technique and the semiclassical, quasiparticle, and Born approximations are employed in the derivation. Both the mean field and the collision term are derived from the same Lagrangian and presented analytically. The dynamical equation for the pions is consistent with that for the nucleons and Deltas which we developed before. Thus, we obtain a relativistic transport model which describes hadronic matter with N, Delta, and pi degrees of freedom simultaneously. Within this approach, we investigate the medium effects on the pion dispersion relation as well as the pion absorption and pion production channels in cold nuclear matter. In contrast to the results of the nonrelativistic model, the pion dispersion relation becomes harder at low momenta and softer at high momenta as compared to the free one, which is mainly caused by the relativistic kinetics. The theoretically predicted free piN->Delta cross section is in agreement with the experimental data. Medium effects on the piN->Delta cross section and the momentum-dependent Delta decay width are shown to be substantial. PACS numbers: 24.10.Jv, 13.75.Cs, 21.65.+f, 25.75.-q
We calculate the shadowing of sea quarks and gluons and show that the shadowing of gluons is not simply given by the sea quark shadowing, especially at small x. The calculations are done in the lab frame approach by using the generalized vector meson dominance model. Here the virtual photon turns into a hadronic fluctuation long before the nucleus. The subsequent coherent interaction with more than one nucleon in the nucleus leads to the depletion sigma(gamma*A) < A sigma(gamma*N), known as shadowing. A comparison of the shadowing of quarks to E665 data for 40Ca and 207Pb shows good agreement.
This paper evaluates the effects of job creation schemes on the participating individuals in Germany. Since previous empirical studies of these measures have been based on relatively small datasets and focussed on East Germany, this is the first study that allows policy-relevant conclusions to be drawn. The very informative and exhaustive dataset at hand not only justifies the application of a matching estimator but also allows us to take account of threefold heterogeneity. The recently developed multiple treatment framework is used to evaluate the effects with respect to regional, individual and programme heterogeneity. The results show considerable differences with respect to these sources of heterogeneity, but the overall finding is very clear: at the end of our observation period, that is, two years after the start of the programmes, participants in job creation schemes have a significantly lower success probability on the labour market than matched non-participants.
Irmtraud D. Wolcke-Renk has reported in RUNDBRIEF FOTOGRAPHIE N.F. 11 on the picture collection of the Deutsche Kolonialgesellschaft at the Stadt- und Universitätsbibliothek Frankfurt am Main, its history, and the course of the preservation measures. As a supplement, some considerations on the technical aspects of the overall preservation effort are presented here.
A generic property of a first-order phase transition in equilibrium, in the limit of large entropy per unit of conserved charge, is the smallness of the isentropic speed of sound in the mixed phase. A specific prediction is that this should lead to a non-isotropic momentum distribution of nucleons in the reaction plane (for energies < 40A GeV in our model calculation). On the other hand, we show that from present effective theories for low-energy QCD one does not expect the thermal transition rate between various states of the effective potential to be much larger than the expansion rate, questioning the applicability of the idealized Maxwell/Gibbs construction. Experimental data could soon provide essential information on the dynamics of the phase transition.
The flying geese model, a theory of industrial development in latecomer economies, was developed in the 1930s by the Japanese economist Akamatsu Kaname (1896–1974). While rarely known in western countries, it is highly prominent in Japan and seen as the main economic theory underlying Japan’s economic assistance to developing countries. Akamatsu’s original interpretation of the flying geese model differs fundamentally from theories of western origin, such as the neoclassical model and Raymond Vernon’s product cycle theory. These differences include the roles of factors and linkages in economic development, the effects of demand and supply, as well as the dynamic and dialectical character of Akamatsu’s thinking. Later reformulations of the flying geese model, pioneered by Kojima Kiyoshi, attempt to combine aspects of Akamatsu’s theory with neoclassical thinking. This can be described as the “westernization” of the flying geese model. It is this reformulated interpretation that has become popular in Japan’s political discourse, a process that might be explained by the change in Japan’s perspective from that of a developing to that of an advanced economy. The position taken by Japan in its recent controversy with the World Bank, however, shows that many basic elements of Akamatsu’s thinking are still highly influential within both Japan’s academia and its government and are therefore relevant for understanding current debates on development theory.
The lightest supersymmetric particle, most likely the neutralino, might account for a large fraction of the dark matter in the Universe. We show that the primordial spectrum of density fluctuations in neutralino cold dark matter (CDM) has a sharp cut-off due to two damping mechanisms: collisional damping during the kinetic decoupling of the neutralinos at temperatures of order 10 MeV, and free streaming after the last scattering of neutralinos. The cut-off in the primordial spectrum defines a minimal mass for CDM objects in hierarchical structure formation. For typical neutralino and sfermion masses the first gravitationally bound neutralino clouds have masses above 10^-6 solar masses.
We study the bound states of anti-nucleons emerging from the lower continuum in finite nuclei within the relativistic Hartree approach, including the contributions of the Dirac sea to the source terms of the meson fields. The Dirac equation is reduced to two Schrödinger-equivalent equations for the nucleon and the anti-nucleon, respectively. These two equations are solved simultaneously in an iteration procedure. Numerical results show that the bound levels of anti-nucleons vary drastically when the vacuum contributions are taken into account. PACS number(s): 21.10.-k; 21.60.-n; 03.65.Pm
Recent progress in the understanding of the high-density phase of neutron stars advances the view that a substantial fraction of the matter consists of hyperons. The possible impacts of a highly attractive interaction between hyperons on the properties of compact stars are investigated. We find that a hadronic equation of state with hyperons allows for a first-order phase transition to hyperonic matter. The corresponding hyperon stars can have rather small radii of R ≈ 8 km.
The production of black holes at the Tevatron and LHC in spacetimes with compactified space-like large extra dimensions is studied. Either black holes can already be observed in p̄p collisions at √s = 1.8 TeV, or the fundamental gravity scale has to be above 1.4 TeV. At the LHC the creation of a large number of quasi-stable black holes is predicted, with lifetimes beyond several hundred fm/c. A cut-off in the high-pT jet cross section is shown to be a unique signature of black hole production. This signal is compared to the jet plus missing energy signature due to graviton production in the final state as proposed by the ATLAS collaboration.
Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics. PACS: 07.05.Tp; 13; 23
The amount of proton stopping in central Pb+Pb collisions from 20–160 A GeV, as well as hyperon and antihyperon rapidity distributions, are calculated within the UrQMD model and compared to experimental data at 40, 80, and 160 A GeV taken recently by the NA49 collaboration. Furthermore, the amount of baryon stopping at 160 A GeV in Pb+Pb collisions is studied as a function of centrality in comparison to the NA49 data. We find that the strange baryon yield is reasonably described for central collisions; however, the rapidity distributions are somewhat narrower than the data. Moreover, the experimental antihyperon rapidity distributions at 40, 80, and 160 A GeV are underestimated by up to a factor of 3, depending on the annihilation cross section employed, which might be attributed to missing multimeson fusion channels in the UrQMD model. PACS numbers: 25.75.-q, 24.10.Jv, 24.10.Lx
Accelerating cavities exchange HOM power through interconnecting beam pipes when the signal frequencies lie above the cut-off of the propagating waveguide modes. This may lead either to improved HOM damping or, in the most severe case, to unwanted phase coherence of the fields acting on the beam. Knowledge of the scattering properties of a cavity as a line element is therefore needed to analyse all kinds of RF cavity-cavity interaction. Since there is a lack of measurement tools capable of providing a multidimensional scattering matrix at a given frequency point, we have been developing a method for this purpose. It uses a set of 2-port S-parameters of the device under test, embedded in a number of geometrically different RF environments. The application of the method is demonstrated with copper models of TESLA cavities.
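The standard algebra for treating a cavity as a line element is to convert each 2-port S-matrix into a transfer (T) matrix, multiply along the chain, and convert back. The sketch below shows that cascading step under the common [b1, a1] = T [a2, b2] convention; the function names and the example transmission values are illustrative, not the TESLA cavity data.

```python
import numpy as np

def s_to_t(S):
    """Transfer matrix with [b1, a1]^T = T [a2, b2]^T convention."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[-(S11 * S22 - S12 * S21), S11],
                     [-S22, 1.0]], dtype=complex) / S21

def t_to_s(T):
    """Inverse conversion back to the scattering matrix."""
    detT = T[0, 0] * T[1, 1] - T[0, 1] * T[1, 0]
    return np.array([[T[0, 1], detT],
                     [1.0, -T[1, 0]]], dtype=complex) / T[1, 1]

def cascade(*s_matrices):
    """S-matrix of several 2-ports connected in a chain."""
    T = np.eye(2, dtype=complex)
    for S in s_matrices:
        T = T @ s_to_t(S)
    return t_to_s(T)

# Two matched, lossy line sections with transmission 0.5 each:
line = np.array([[0.0, 0.5], [0.5, 0.0]], dtype=complex)
S_total = cascade(line, line)
print(S_total[1, 0])   # combined transmission: 0.25
```

Because the T-matrices simply multiply, measuring the same device embedded in several known environments constrains its own scattering matrix, which is the idea behind the method described above.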
Considered are the classes QL (quasilinear) and NQL (nondeterministic quasilinear) of all those problems that can be solved by deterministic (nondeterministic, respectively) Turing machines in time O(n(log n)^k) for some k. Efficient algorithms have time bounds of this type, it is argued. Many of the "exhaustive search" type problems such as satisfiability and colorability are complete in NQL with respect to reductions that take O(n(log n)^k) steps. This implies that QL = NQL iff satisfiability is in QL. CR CATEGORIES: 5.25
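In the usual DTIME/NTIME notation (assumed here, not spelled out in the abstract), the two classes and the main consequence read:

```latex
\mathrm{QL}  \;=\; \bigcup_{k \ge 1} \mathrm{DTIME}\!\bigl(n (\log n)^k\bigr),
\qquad
\mathrm{NQL} \;=\; \bigcup_{k \ge 1} \mathrm{NTIME}\!\bigl(n (\log n)^k\bigr).
```

Since satisfiability is NQL-complete under quasilinear-time reductions, and QL is closed under such reductions, membership of satisfiability in QL would collapse the two classes:

```latex
\mathrm{SAT} \in \mathrm{QL} \iff \mathrm{QL} = \mathrm{NQL}.
```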
The measured particle ratios in central heavy-ion collisions at RHIC-BNL are investigated within a chemical and thermal equilibrium chiral SU(3) approach. The commonly adopted non-interacting gas calculations yield temperatures close to or above the critical temperature for the chiral phase transition, but without taking any interactions into account. In contrast, the chiral SU(3) model predicts temperature- and density-dependent effective hadron masses and effective chemical potentials in the medium, and a transition to a chirally restored phase at high temperatures or chemical potentials. Three different parametrizations of the model, which show different types of phase transition behaviour, are investigated. We show that if a chiral phase transition occurred in those collisions, freezing of the relative hadron abundances in the symmetric phase is excluded by the data. Therefore, either very rapid chemical equilibration must occur in the broken phase, or the measured hadron ratios are the outcome of the dynamical symmetry breaking. Furthermore, the extracted chemical freeze-out parameters differ considerably from those obtained in simple non-interacting gas calculations. In particular, the three models yield up to 35 MeV lower temperatures than the free gas approximation. The in-medium masses turn out to differ by up to 150 MeV from their vacuum values.
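The "simple non-interacting gas" baseline that the abstract contrasts with is easy to state: in the Boltzmann approximation the particle and antiparticle densities of a hadron species differ only by their fugacity factors, so mass terms cancel in the ratio and, e.g., p̄/p = exp(-2 μ_B / T). The freeze-out values below (T = 170 MeV, μ_B = 40 MeV) are illustrative RHIC-like numbers, not fit results from the paper.

```python
import math

def antibaryon_to_baryon_ratio(mu_b_mev, t_mev):
    """Antibaryon/baryon ratio of a non-interacting Boltzmann hadron gas.

    The thermal density n ~ g * m^2 * T * K_2(m/T) * exp(mu/T) carries the
    same mass factor for particle and antiparticle, so only the fugacities
    exp(+-mu_B/T) survive in the ratio.
    """
    return math.exp(-2.0 * mu_b_mev / t_mev)

ratio = antibaryon_to_baryon_ratio(mu_b_mev=40.0, t_mev=170.0)
print(f"pbar/p ~ {ratio:.2f}")
```

In the chiral SU(3) model the effective chemical potentials and masses are medium-dependent, so the same measured ratio maps to different (T, μ_B) than in this free-gas formula, which is the origin of the up-to-35-MeV shift quoted above.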
By replacing the irises in an electron linac by a slit, one gets a structure capable of focusing/defocusing an electron beam (rf quadrupoles). One can therefore think of combining rf and conventional magnetic quadrupoles for transverse focusing in linear colliders. Furthermore, such structures can meet the demands of BNS damping without initial energy spread. Considering multibunch operation of a collider, the long-range wake behaviour of this kind of structure has to be investigated. A three-cell structure has been built and investigated for dipole-type transverse long-range wakes. The experimental results are compared to numerical simulations done with MAFIA.