Background: Costly structures need to confer an adaptive advantage in order to be maintained over evolutionary time. Unlike many other conspicuous shell ornamentations of gastropods, the haired shells of several stylommatophoran land snails still lack a convincing adaptive explanation. In the present study, we analysed the correlation between the presence or absence of hairs and habitat conditions in the genus Trochulus in a Bayesian framework of character evolution. Results: Haired shells appeared to be the ancestral character state, a feature most probably lost three times independently. These losses were correlated with a shift from humid to dry habitats, indicating an adaptive function of hairs in moist environments. It had previously been hypothesised that these costly protein structures of the outer shell layer facilitate locomotion in moist habitats. Our experiments, on the contrary, showed an increased adherence of haired shells to wet surfaces. Conclusion: We propose the hypothesis that the possession of hairs facilitates the adherence of the snails to their herbaceous food plants during foraging when humidity levels are high. The absence of hairs in some Trochulus species could thus be explained as a loss of this potential adaptive function linked to habitat shifts.
The volume changes of lithium and sodium under pressure are discussed with respect to the packing density of the atoms and their valence. In densely packed Li I (bcc), Li II (fcc), and Li III (alpha-Hg type), valence increases from 1 at ~ 5 GPa to ~ 2.5 at 40 GPa. The maximum valence 3 is attained in Li IV (body-centered cubic, 16 atoms per cell, packing density q = 0.965) at 47 GPa. In densely packed Na I (bcc) a linear increase of valence from 1 at ~ 10 GPa to 2.9 at 65 GPa is found, which continues in Na II (fcc) up to 4.1 at 103 GPa.
A new approach to optimize multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced in the synthesis of multilevel circuits. This method is augmented with transformations that specifically enhance random-pattern testability while reducing the area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only can achieve lower area than other similar tools, but that it achieves better testability compared to available testability enhancement tools such as tstfx. Specifically for ISCAS-85 benchmark circuits, it was observed that EX-OR gate-based transformations successfully contributed toward generating smaller circuits compared to other state-of-the-art logic optimization tools.
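As a toy illustration of the kind of EX-OR based rewriting the abstract mentions (our own minimal example, not the paper's transformation set), a Reed–Muller style rewrite can collapse an AND-OR realisation of exclusive-or into a single gate while preserving the function:

```python
from itertools import product

# Hypothetical illustration: an EX-OR based rewrite replaces the
# AND-OR form  f = (~a & b) | (a & ~b)  with the single-gate
# Reed-Muller form  f = a XOR b.
def and_or_form(a, b):
    return (not a and b) or (a and not b)   # 2 NOT, 2 AND, 1 OR -> 5 gates

def xor_form(a, b):
    return a != b                           # 1 EX-OR gate

# Verify the transformation is functionally equivalent on all inputs.
assert all(and_or_form(a, b) == xor_form(a, b)
           for a, b in product([False, True], repeat=2))
print("equivalent; gate count reduced from 5 to 1")
```

Area improves because five gates become one; random-pattern testability of XOR structures is a separate property that the paper's transformations target explicitly.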
Channel routing is an NP-complete problem. Therefore, it is likely that there is no efficient algorithm solving this problem exactly. In this paper, we show that channel routing is a fixed-parameter tractable problem and that we can find a solution in linear time for a fixed channel width. We implemented our approach for the restricted layer model. The algorithm finds an optimal route for channels with up to 13 tracks within minutes or up to 11 tracks within seconds. Such narrow channels occur for example as a leaf problem of hierarchical routers or within standard cell generators.
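The channel-routing setting can be made concrete with the classic left-edge heuristic (a standard textbook algorithm, not the paper's fixed-parameter method): nets are horizontal intervals along the channel, and each is packed greedily into the first track where it does not overlap.

```python
# Minimal left-edge sketch: assign net intervals to tracks.
# With no vertical constraints it uses exactly max-density tracks.
def left_edge(intervals):
    tracks = []  # each track holds non-overlapping intervals, in order
    for lo, hi in sorted(intervals):
        for t in tracks:
            if t[-1][1] < lo:        # fits after the last interval on this track
                t.append((lo, hi))
                break
        else:
            tracks.append([(lo, hi)])
    return tracks

nets = [(1, 4), (2, 6), (5, 8), (7, 9), (3, 5)]
print(len(left_edge(nets)))  # tracks used equals the channel density here
```

Real channel routing adds vertical constraints between pins, which is what makes the general problem NP-complete and a fixed channel width the natural parameter.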
We present a theoretical analysis of structural FSM traversal, which is the basis for the sequential equivalence checking algorithm Record & Play presented earlier. We compare the convergence behaviour of exact and approximate structural FSM traversal with that of standard BDD-based FSM traversal. We show that for most circuits encountered in practice exact structural FSM traversal reaches the fixed point as fast as symbolic FSM traversal, while approximation can significantly reduce the number of iterations needed. Our experiments confirm these results.
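The notion of FSM traversal reaching a fixed point can be sketched generically (this is plain breadth-first reachability, not the Record & Play algorithm): states are explored until no new ones appear, and the iteration count is the convergence depth being compared in the abstract.

```python
# Generic FSM traversal as a reachability fixed point.
def traverse(initial, next_states):
    reached, frontier, iterations = set(initial), set(initial), 0
    while frontier:
        new = {t for s in frontier for t in next_states(s)} - reached
        reached |= new
        frontier = new
        iterations += 1
    return reached, iterations

# Toy machine: a 3-bit counter, next state (s + 1) mod 8.
reached, iters = traverse({0}, lambda s: {(s + 1) % 8})
print(sorted(reached), iters)
```

Symbolic (BDD-based) and structural traversal differ in how the frontier sets are represented, not in this fixed-point skeleton.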
We present the FPGA implementation of an algorithm [4] that computes implications between signal values in a Boolean network. The research was performed as a master's thesis [5] at the University of Frankfurt. The recursive algorithm is rather complex for a hardware realization, and the FPGA implementation is therefore an interesting example of the potential of reconfigurable computing beyond systolic algorithms. A circuit generator was written that transforms a Boolean network into a network of small processing elements and a global control logic which together implement the algorithm. The resulting circuit performs the computation two orders of magnitude faster than a software implementation run on a conventional workstation.
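A minimal software model of implication computation (our illustration; the cited recursive algorithm is more involved) learns implications between signal values by exhaustive simulation of a tiny Boolean network:

```python
from itertools import product

# Toy network: d = a AND b, e = d OR c.
def simulate(a, b, c):
    d = a and b          # AND gate
    e = d or c           # OR gate
    return {"a": a, "b": b, "c": c, "d": d, "e": e}

rows = [simulate(*bits) for bits in product([False, True], repeat=3)]

# "x = vx implies y = vy" holds if every consistent assignment agrees.
def implies(x, vx, y, vy):
    return all(r[y] == vy for r in rows if r[x] == vx)

print(implies("d", True, "e", True))   # d=1 forces e=1
print(implies("a", True, "d", True))   # a=1 alone does not force d=1
```

The hardware version distributes exactly this kind of local reasoning over one processing element per gate instead of simulating all input vectors.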
One of the most severe short-comings of currently available equivalence checkers is their inability to verify integer multipliers. In this paper, we present a bit level reverse-engineering technique that can be integrated into standard equivalence checking flows. We propose a Boolean mapping algorithm that extracts a network of half adders from the gate netlist of an addition circuit. Once the arithmetic bit level representation of the circuit is obtained, equivalence checking can be performed using simple arithmetic operations. Experimental results show the promise of our approach.
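The arithmetic bit level idea can be sketched as follows (a minimal model under our own assumptions, not the paper's extraction algorithm): once a netlist is expressed as half adders (sum = a XOR b, carry = a AND b), checking the circuit against its word-level specification reduces to simple arithmetic on the bit columns.

```python
# Half adder: the basic arithmetic bit level primitive.
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

# A 4-bit ripple-carry adder built from half adders.
def ripple_add(x, y, width=4):
    bits, carry = [], 0
    for i in range(width):
        a, b = (x >> i) & 1, (y >> i) & 1
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry)
        bits.append(s2)
        carry = c1 | c2          # full adder from two half adders
    bits.append(carry)
    return sum(bit << i for i, bit in enumerate(bits))

# Equivalence check against the word-level specification.
assert all(ripple_add(x, y) == x + y for x in range(16) for y in range(16))
print("netlist matches integer addition")
```

The hard part the paper addresses is the reverse direction: recovering such a half-adder network from an optimized gate netlist where the structure is no longer visible.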
This paper argues that short (clause-internal) scrambling to a pre-subject position has A properties in Japanese but A'-properties in German, while long scrambling (scrambling across sentence boundaries) from finite clauses, which is possible in Japanese but not in German, has A'-properties throughout. It is shown that these differences between German and Japanese can be traced back to parametric variation of phrase structure and the parameterized properties of functional heads. Due to the properties of Agreement, sentences in Japanese may contain multiple (Agro- and Agrs-) specifiers whereas German does not allow for this. In Japanese, a scrambled element may be located in a Spec AgrP, i.e. an A- or L-related position, whereas scrambled NPs in German can only appear in an AgrP-adjoined (broadly-L-related) position, which only has A'-properties. Given our assumption that successive cyclic adjunction is generally impossible, elements in German may not be long scrambled because a scrambled element that is moved to an adjunction site inside an embedded clause may not move further. In Japanese, long distance scrambling out of finite CPs is possible since scrambling may proceed in a successive cyclic manner via embedded Spec- (AgrP) positions. Our analysis of the differences between German and Japanese scrambling provides us with an account of further contrasts between the two languages such as the existence of surprising asymmetries between German and Japanese remnant-movement phenomena, and the fact that unlike German, Japanese freely allows wh-scrambling. Investigation of the properties of Japanese wh-movement also leads us to the formulation of the "Wh-cluster Hypothesis", which implies that Japanese is an LF multiple wh-fronting language.
In this article, I discuss some important properties of wh-questions and wh-scrambling in Japanese. The questions I will address are (i) which instances of (wh-) scrambling involve reconstruction and (ii) how the undoing effects of scrambling can be derived. First I will discuss the claim that (wh-) scrambling is semantically vacuous and is therefore undone at LF (Saito 1989, 1992). Then I consider the data that led Takahashi (1993) to the conclusion that at least some instances of wh-scrambling have to be analyzed as instances of "full wh-movement", i.e., overt movement of the wh-phrase to its scopal position. It will be argued that these examples are not instances of full wh-movement in Japanese, but that they also represent semantically vacuous scrambling. Those instances of scrambling that apparently cannot be undone are best explained with recourse to parsing effects. I conclude that wh-scrambling in Japanese is always triggered by a ([-wh]-) scrambling feature. In addition, long distance scrambling (scrambling out of finite CPs) is analyzed as adjunction movement, whereas short distance scrambling is movement to a specifier position of IP. Turning to the mechanisms of undoing, I will argue that only long distance scrambling is undone. This is shown to follow from Chomsky's (1995) bare phrase structure analysis, according to which multi-segmental categories derived by adjunction movement are not licensed at LF. The article is organized as follows. In section 2, the wh-scrambling phenomenon is described. In section 3, I discuss the reconstruction properties of scrambling. In addition, this section provides some basic assumptions about my analysis of Japanese scrambling in general. In section 4, I turn to the analysis of wh-scrambling as an instance of full wh-movement in Japanese. Section 5 provides discussion of multiple wh-questions in Japanese, and section 6 gives the conclusion.
The double object construction is a subject of investigation that has substantially influenced theory building in syntactic research in the past. Studies of double object constructions have had consequences for, among other things, Case theory as well as for analyses of verb movement and of clause, VP, and argument structure. In this paper I present an analysis of some important aspects of the double object construction in German. I investigate in which position the objects of the verb are base-generated and in which derived positions they appear. Answering these questions yields an explanation for the asymmetric behaviour of the objects involved with respect to binding and extraction.
In this article, I discuss the distribution of so-called 'coherent (control) infinitives' in German. In section 2, I argue that both coherent and incoherent control infinitives have sentential status. In section 3, I argue that only infinitives occupying the position of the direct object (i.e. the position of the accusative NP) show the well-known properties associated with coherent infinitives; control infinitives in other structural positions are necessarily incoherent. This situation is not limited to German: transparent infinitives in languages such as Polish and Spanish are restricted in their distribution in the same way. In section 4, I propose a unified analysis of the relevant data that in addition correctly predicts further distributional generalizations concerning the occurrence of coherent infinitives. In section 5, I propose an account for the idiolectal variation that exists among speakers with respect to certain verbs that license coherent infinitives. 
This account is based on the idea that coherent infinitives require an incorporation-feature in their lexical entry that is acquired on the basis of positive evidence.
In this paper I address the question of how many different positions verbs can occupy in the German clause. Syntactic tests show that the verb in German appears in a total of three different positions, and not, as traditional grammar assumes, in only two (the right and the left sentence bracket). I argue that applying the abstract clause schema commonly assumed in current generative grammar as a universal clause model makes it possible to explain a range of syntactic phenomena in German that cannot be explained by the traditional verb-placement analysis, which posits only two verb positions. According to the universal clause schema, the Infl(ection) position represents one verb position in the clause; this position is identical with the right sentence bracket. A further verb position is the V position within the middle field, and a third potential position for the verb corresponds to the C(omplementizer) position (i.e. the left sentence bracket). The paper is organized as follows. In the introduction I briefly sketch the different views on the verb-placement problem in German that have been held in the past. In section 2 I present the most important arguments that have been raised against the assumption that a total of three positions are available for verbs in the German clause. Sections 3.1 to 3.2 then discuss arguments in favour of three verb positions. Section 3.3 addresses how, against this background, the data from section 2 that proved problematic for this analysis can be explained. In section 4 I turn to further independent evidence from historical syntax that supports three verb positions in German. 
In section 5 I give a brief summary of the most important results.
The starting point of the following study is the observation that different versions of the Principles and Parameters theory make different predictions about structurally ambiguous word orders in German passive constructions. In a theory in which Move-alpha applies freely, as in Government and Binding theory (Chomsky 1981, 1986a, 1986b), multiple derivations for such orders cannot be excluded, whereas the situation is different if the relevant constructions are analysed within the Minimalist Program. Here the number of possible derivations (compatible with a given word order) can be constrained by economy principles. On the basis of various syntactic tests it is then shown that certain word orders are compatible with only one derivation, which is in line with a minimalist analysis of the data. The paper is organized as follows. In section 2 I lay out the basic problem of multiple derivations that arises in German, for example in passive constructions, if one assumes that NP-movement and scrambling apply optionally and that no constraints on potential derivations hold. In section 3 I discuss the prerequisites for testing the predictions of the different variants of the Principles and Parameters model, and then attempt to show on the basis of syntactic tests that the examples discussed are in fact not structurally ambiguous but structurally unambiguous, as the analysis of these constructions within the Minimalist Program predicts. Section 4 describes the consequences of the analysis for further languages such as Dutch and Japanese and for additional movement types. Section 5 contains the conclusion.
Homing in with GPS (2000)
This is a review of the present status of heavy-ion collisions at intermediate energies. The main goal of heavy-ion physics in this energy regime is to shed some light on the nuclear equation of state (EOS), hence we present the basic concept of the EOS in nuclear matter as well as of nuclear shock waves, which provide the key mechanism for the compression of nuclear matter. The main part of this article is devoted to the models currently used for describing heavy-ion reactions theoretically and to the observables useful for extracting information about the EOS from experiments. A detailed discussion of the flow effects with a broad comparison with the available data is presented. The many-body aspects of such reactions are investigated via the multifragmentation breakup of excited nuclear systems, and a comparison of model calculations with the most recent multifragmentation experiments is presented.
In the framework of the relativistic quantum dynamics approach we investigate antiproton observables in Au-Au collisions at 10.7A GeV. The rapidity dependence of the in-plane directed transverse momentum p_x(y) of antiprotons shows the opposite sign of the nucleon flow, which has indeed recently been discovered at 10.7A GeV by the E877 group. The "antiflow" of antiprotons is also predicted at 2A GeV and at 160A GeV and appears at all energies also for pions and kaons. These predicted antiproton anticorrelations are a direct proof of strong antiproton annihilation in massive heavy ion reactions.
The quantum statistical model (QSM) is used to calculate nuclear fragment distributions in chemical equilibrium. Several observable isotopic effects are predicted for intermediate energy heavy ion collisions. It is demonstrated that particle ratios for different systems do not depend on the breakup density, the only free parameter in our model. The importance of entropy measurements is discussed. Specific particle ratios for the system Au-Au are predicted, which can be used to determine the chemical potentials of the hot midrapidity fragment source in nearly central heavy ion collisions. PACS: 25.70.Pq
The Monte Carlo parton string model for multiparticle production in hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions at high energies is described. An adequate choice of the parameters in the model gives the possibility of recovering the main results of the dual parton model, with the advantage of treating both hadron and nuclear interactions on the same footing, reducing them to interactions between partons. Also the possibility of considering both soft and hard parton interactions is introduced.
The properties of pions from the hot and dense reaction stage of relativistic heavy ion collisions are investigated with the quantum molecular dynamics model. Pions originating from this reaction stage stem from resonance decay with enhanced mass. They carry high transverse momenta. The calculation shows a direct correlation between high pt pions, early freeze-out times and high freeze-out densities.
Dilepton spectra for p+p and p+d reactions at 4.9 GeV are calculated. We consider electromagnetic bremsstrahlung also in inelastic reactions. N* and Delta* decays provide the major contributions to the rho and omega meson yields. Pion annihilation yields only 1.5% of all rho's in p+d. The rho mass spectrum is strongly distorted due to phase space effects, populating dominantly dilepton masses below 770 MeV.
Strong mean meson fields, which are known to exist in normal nuclei, experience a violent deformation in the course of a heavy-ion collision at relativistic energies. This may give rise to a new collective mechanism of the particle production, not reducible to the superposition of elementary nucleon-nucleon collisions.
We investigate the sensitivity of pionic bounce-off and squeeze-out to the density and momentum dependence of the real part of the nucleon optical potential. For the in-plane pion bounce-off we find a strong sensitivity to both the density and momentum dependence, whereas the out-of-plane pion squeeze-out shows a strong sensitivity only to the momentum dependence but little sensitivity to the density dependence.
We demonstrate the importance of the Bose-statistical effects for pion production in relativistic heavy-ion collisions. The evolution of the pion phase-space density in central collisions of ultrarelativistic nuclei is studied in a simple kinetic model taking into account the effect of Bose-stimulated pion production by the NN collisions in a dense cloud of mesons.
The volume changes of solid iodine under pressure are discussed with respect to the packing density of the atoms and to valence. The packing density of solid iodine which is 0.805 under ambient pressure increases to 0.976 in monoatomic iodine-II, 0.993 in iodine-III, and 1 in fcc iodine-IV. Simultaneously, the valence increases from 1 in the free molecule to 1.78 in the crystal structure under ambient pressure, 2.72 – 2.81 in iodine-II, 2.86 – 2.96 in iodine-III, and 3 in fcc iodine-IV. The valence then remains constant up to about 180 GPa and rises moderately to 3.15 at the highest investigated pressure of 276 GPa. Parameters for calculating bond numbers, valences and atomic volumes of densely packed halogens, hydrogen, oxygen, and nitrogen are given.
The volume changes of cesium under pressure are discussed with respect to the packing density of the atoms and valence. The element is univalent in densely packed Cs I and Cs II. Valence increases in Cs III (packing density q = 0.973), in Cs IV (q = 0.943), in Cs V (q ~ 0.99), and in close-packed Cs VI. The diminution of volume beyond ~ 15 GPa is caused by this increase alone, which implies that electrons of the fifth shell act as valence electrons.
Relationships between bond lengths and bond numbers and also between atomic volumes and valencies are derived, and parameters for their calculation are given for the s-block, p-block, and d-block metals. From the atomic volumes under pressure, the valencies of three solid lanthanoids have been confirmed or redetermined: La 3; Ce 2, 3, and 4; Yb 2 and 3.
The BioLIS database is provided free of charge online by the University Library Johann Christian Senckenberg (Frankfurt/Main). It indexes German biological journal literature from the period 1970 to 1996, making BioLIS an essential complement to the Biological Abstracts database. The bibliographic records of the indexed articles are enriched with comprehensive subject headings and the names of the organisms treated, so that specialised searches, in particular for literature on specific organisms, are possible.
We demonstrate that the creation of strange matter is conceivable in the midrapidity region of heavy ion collisions at Brookhaven RHIC and CERN LHC. A finite net-baryon density, abundant (anti)strangeness production, as well as strong net-baryon and net-strangeness fluctuations, provide suitable initial conditions for the formation of strangelets or metastable exotic multistrange (baryonic) objects. Even at very high initial entropy per baryon S/A_init ≈ 500 and low initial baryon numbers of A_B^init ≈ 30, a quark-gluon-plasma droplet can immediately charge up with strangeness and accumulate net-baryon number. PACS numbers: 25.75.Dw, 12.38.Mh, 24.85.+p
Measured hadron yields from relativistic nuclear collisions can be equally well understood in two physically distinct models, namely a static thermal hadronic source versus a time-dependent, non-equilibrium hadronization off a quark gluon plasma droplet. Due to the time-dependent particle evaporation off the hadronic surface in the latter approach, the hadron ratios change in time (by factors of up to ~5). The overall particle yields then reflect time averages over the actual thermodynamic properties of the system at a certain stage of evolution.
Metallic radii r_m are correlated with the ionic radii r_i by linear relationships. For groups 1 up to 7 as well as for Al, Ga, In, Tl, Sn, and Pb the ionic radii refer to the maximum valences (oxidation states) as known from compounds, according to r_m ≈ 1.16 (r_i + 0.64) [Å]. For groups 8 up to 12, r_m ≈ 0.48 (r_i + 2.26) [Å], with valences W = 14 - G (G = group number). These valences are considered regular (W_r). For groups 1 up to 12, they obey the equation W_r = 7 - |G - 7|. According to this equation all outer s electrons and the unpaired d electrons should be involved in chemical bonding, i.e. in the cohesion of the element in the solid state. From the melting temperatures and the atomic volumes it is concluded, however, that only 19 out of the 30 d-block elements have regular valences, namely the elements of groups 3, 5, 6, 10, 11 as well as Os, Ir, Zn, Cd, and possibly Ru. All of the non-regular valences are lower than the regular ones. Four of them are integers: Mn 3; Fe, Co 4; Re 6.
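The quoted valence rule can be transcribed directly (the formula W_r = 7 - |G - 7| is from the text; the code is merely a tabulation of it):

```python
# Regular valences for the d-block and adjacent groups, W_r = 7 - |G - 7|:
# groups 1-7 climb 1..7, groups 8-12 fall as W = 14 - G.
def regular_valence(G):
    return 7 - abs(G - 7)

table = {G: regular_valence(G) for G in range(1, 13)}
print(table)
```

For example, group 6 (Cr, Mo, W) gets W_r = 6 and group 12 (Zn, Cd, Hg) gets W_r = 2, matching the W = 14 - G branch stated for groups 8 to 12.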
Local thermal and chemical equilibration is studied for central A+A collisions at 10.7-160 AGeV in the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model. The UrQMD model exhibits strong deviations from local equilibrium in the high-density hadron-string phase formed during the early stage of the collision. Equilibration of the hadron resonance matter is established in the central cell of volume V = 125 fm³ at later stages, t ≳ 10 fm/c, of the resulting quasi-isentropic expansion. The thermodynamical functions in the cell and their time evolution are presented. Deviations of the UrQMD quasi-equilibrium state from the statistical mechanics equilibrium are found. They increase with energy per baryon and lead to a strong enhancement of the pion number density as compared to statistical mechanics estimates at SPS energies. PACS: 25.75.-q; 24.10.Lx; 24.10.Pa; 64.30.+t
Nonequilibrium models (three-fluid hydrodynamics and UrQMD) are used to discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that these two models - although they treat the most interesting early phase of the collisions quite differently (thermalizing QGP vs. coherent color fields with virtual particles) - both yield a reasonable agreement with a large variety of the available heavy ion data.
We study J/psi suppression in AB collisions assuming that the charmonium states evolve from small, color transparent configurations. Their interaction with nucleons and nonequilibrated, secondary hadrons is simulated using the microscopic model UrQMD. The Drell-Yan lepton pair yield and the J/psi to Drell-Yan ratio are calculated as functions of the neutral transverse energy in Pb+Pb collisions at 160 GeV and found to be in reasonable agreement with existing data.
We derive the relativistic quantum transport equation for the pion distribution function based on an effective Lagrangian of the QHD-II model. The closed-time-path Green's function technique and the semiclassical, quasiparticle, and Born approximations are employed in the derivation. Both the mean field and the collision term are derived from the same Lagrangian and presented analytically. The dynamical equation for the pions is consistent with that for the nucleons and Delta's which we developed before. Thus, we obtain a relativistic transport model which describes the hadronic matter with N, Delta, and pi degrees of freedom simultaneously. Within this approach, we investigate the medium effects on the pion dispersion relation as well as the pion absorption and pion production channels in cold nuclear matter. In contrast to the results of the nonrelativistic model, the pion dispersion relation becomes harder at low momenta and softer at high momenta as compared to the free one, which is mainly caused by the relativistic kinetics. The theoretically predicted free piN->Delta cross section is in agreement with the experimental data. Medium effects on the piN->Delta cross section and the momentum-dependent Delta-decay width are shown to be substantial. PACS numbers: 24.10.Jv, 13.75.Cs, 21.65.+f, 25.75.-q
We calculate the shadowing of sea quarks and gluons and show that the shadowing of gluons is not simply given by the sea quark shadowing, especially at small x. The calculations are done in the lab frame approach by using the generalized vector meson dominance model. Here the virtual photon turns into a hadronic fluctuation long before the nucleus. The subsequent coherent interaction with more than one nucleon in the nucleus leads to the depletion sigma(gamma*A) < A sigma(gamma*N), known as shadowing. A comparison of the shadowing of quarks to E665 data for 40Ca and 207Pb shows good agreement.
This paper evaluates the effects of job creation schemes on the participating individuals in Germany. Since previous empirical studies of these measures have been based on relatively small datasets and focussed on East Germany, this is the first study which allows us to draw policy-relevant conclusions. The very informative and exhaustive dataset at hand not only justifies the application of a matching estimator but also allows us to take account of threefold heterogeneity. The recently developed multiple treatment framework is used to evaluate the effects with respect to regional, individual and programme heterogeneity. The results show considerable differences with respect to these sources of heterogeneity, but the overall finding is very clear. At the end of our observation period, that is two years after the start of the programmes, participants in job creation schemes have a significantly lower success probability on the labour market in comparison to matched non-participants.
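The matching idea can be illustrated with a toy nearest-neighbour estimator on synthetic data (our sketch; the paper uses a far richer multiple-treatment framework): each participant is paired with the non-participant whose propensity score is closest, and the average outcome gap estimates the treatment effect on the treated.

```python
import random

# Toy ATT estimator: treated/controls are lists of (propensity, outcome).
def att_nearest_neighbour(treated, controls):
    gaps = []
    for p, y in treated:
        _, y_match = min(controls, key=lambda c: abs(c[0] - p))
        gaps.append(y - y_match)
    return sum(gaps) / len(gaps)

random.seed(0)
controls = [(random.random(), random.random()) for _ in range(200)]
# Synthetic participants: identical scores, outcome lowered by 0.1.
treated = [(p, y - 0.1) for p, y in controls[:50]]
print(round(att_nearest_neighbour(treated, controls), 3))
```

Because the synthetic treated units share scores with their controls, the estimator recovers the built-in effect of -0.1 exactly; with real data the match quality drives the bias.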
Irmtraud D. Wolcke-Renk has reported in RUNDBRIEF FOTOGRAPHIE N.F. 11 on the picture collection of the Deutsche Kolonialgesellschaft held at the Stadt- und Universitätsbibliothek Frankfurt am Main, its history, and the course of the preservation measures. As a supplement, some considerations on the technical aspects of the overall preservation effort are presented here.
A generic property of a first-order phase transition in equilibrium, and in the limit of large entropy per unit of conserved charge, is the smallness of the isentropic speed of sound in the mixed phase. A specific prediction is that this should lead to a non-isotropic momentum distribution of nucleons in the reaction plane (for energies < 40A GeV in our model calculation). On the other hand, we show that from present effective theories for low-energy QCD one does not expect the thermal transition rate between various states of the effective potential to be much larger than the expansion rate, questioning the applicability of the idealized Maxwell/Gibbs construction. Experimental data could soon provide essential information on the dynamics of the phase transition.
The flying geese model, a theory of industrial development in latecomer economies, was developed in the 1930s by the Japanese economist Akamatsu Kaname (1896–1974). While rarely known in western countries, it is highly prominent in Japan and seen as the main economic theory underlying Japan’s economic assistance to developing countries. Akamatsu’s original interpretation of the flying geese model differs fundamentally from theories of western origin, such as the neoclassical model and Raymond Vernon’s product cycle theory. These differences include the roles of factors and linkages in economic development, the effects of demand and supply, as well as the dynamic and dialectical character of Akamatsu’s thinking. Later reformulations of the flying geese model, pioneered by Kojima Kiyoshi, attempt to combine aspects of Akamatsu’s theory with neoclassical thinking. This can be described as the “westernization” of the flying geese model. It is this reformulated interpretation that has become popular in Japan’s political discourse, a process that might be explained by the change in Japan’s perspective from that of a developing to that of an advanced economy. The position taken by Japan in its recent controversy with the World Bank, however, shows that many basic elements of Akamatsu’s thinking are still highly influential within both Japan’s academia and its government and are therefore relevant for understanding current debates on development theory.
The lightest supersymmetric particle, most likely the neutralino, might account for a large fraction of dark matter in the Universe. We show that the primordial spectrum of density fluctuations in neutralino cold dark matter (CDM) has a sharp cut-off due to two damping mechanisms: collisional damping during the kinetic decoupling of the neutralinos at temperatures of order 10 MeV, and free streaming after last scattering of neutralinos. The cut-off in the primordial spectrum defines a minimal mass for CDM objects in hierarchical structure formation. For typical neutralino and sfermion masses the first gravitationally bound neutralino clouds have masses above 10^-6 solar masses.
We study the bound states of anti-nucleons emerging from the lower continuum in finite nuclei within the relativistic Hartree approach including the contributions of the Dirac sea to the source terms of the meson fields. The Dirac equation is reduced to two Schrödinger-equivalent equations for the nucleon and the anti-nucleon, respectively. These two equations are solved simultaneously in an iteration procedure. Numerical results show that the bound levels of anti-nucleons vary drastically when the vacuum contributions are taken into account. PACS number(s): 21.10.-k; 21.60.-n; 03.65.Pm
Recent progress in the understanding of the high density phase of neutron stars advances the view that a substantial fraction of the matter consists of hyperons. The possible impacts of a highly attractive interaction between hyperons on the properties of compact stars are investigated. We find that a hadronic equation of state with hyperons allows for a first order phase transition to hyperonic matter. The corresponding hyperon stars can have rather small radii of R ≈ 8 km.
The production of black holes at the Tevatron and LHC in spacetimes with compactified space-like large extra dimensions is studied. Either black holes can already be observed in p̄p collisions at √s = 1.8 TeV, or the fundamental gravity scale has to be above 1.4 TeV. At the LHC the creation of a large number of quasi-stable black holes is predicted, with lifetimes beyond several hundred fm/c. A cut-off in the high-p_T jet cross section is shown to be a unique signature of black hole production. This signal is compared to the jet plus missing energy signature due to graviton production in the final state as proposed by the ATLAS collaboration.