The transverse momentum and rapidity distributions of negative hadrons and participant protons have been measured for central 32S + 32S collisions at p_lab = 200 GeV/c per nucleon. The proton mean rapidity shift ⟨Δy⟩ ≈ 1.6 and mean transverse momentum ⟨pT⟩ ≈ 0.6 GeV/c are much higher than in pp or peripheral AA collisions and indicate an increase in the nuclear stopping power. All pT spectra exhibit similar source temperatures. Including previous results for K0s, Λ, and Λ̄, we account for all important contributions to particle production.
The NA35 experiment has collected a high statistics set of momentum analyzed negative hadrons near and forward of midrapidity for central collisions of 200A GeV/c 32S + S, Cu, Ag, and Au. Momentum-space correlations are used to study the size of the particle-production source; the transverse source radii are found to decrease by ~40% at midrapidity and by ~20% at forward rapidity, while the longitudinal radius RL is found to decrease by ~50%, as pT increases over the interval 50 < pT < 600 MeV/c. Calculations using a microscopic phase space approach (relativistic quantum molecular dynamics) reproduce the observed trends of the data. PACS: 25.75.+r
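Source radii like those above are conventionally extracted by fitting the two-particle correlation function with a Gaussian parametrization, C(q) = 1 + λ·exp(−(Rq/ħc)²). A minimal one-dimensional sketch on synthetic data (the source values R = 4 fm, λ = 0.5 and the noise model are illustrative assumptions, not NA35 numbers):

```python
import numpy as np

# Gaussian parametrization of the two-pion correlation function:
# C(q) = 1 + lam * exp(-(R q / hbar c)^2), with R in fm and q in GeV/c.
HBARC = 0.1973  # GeV*fm, converts between fm and GeV/c

rng = np.random.default_rng(0)
q = np.linspace(0.005, 0.2, 40)                       # GeV/c
c_true = 1.0 + 0.5 * np.exp(-(4.0 * q / HBARC) ** 2)  # toy source: R = 4 fm, lam = 0.5
c_meas = c_true + rng.normal(0.0, 0.005, q.size)      # toy measurement

# Linearize: ln(C - 1) = ln(lam) - (R / hbar c)^2 * q^2, then fit a straight line
# over the region where the signal is clearly above the noise.
mask = c_meas > 1.05
slope, intercept = np.polyfit(q[mask] ** 2, np.log(c_meas[mask] - 1.0), 1)
r_fit = np.sqrt(-slope) * HBARC   # back to fm
lam_fit = np.exp(intercept)
print(f"R = {r_fit:.2f} fm, lambda = {lam_fit:.2f}")
```

Real analyses fit separate longitudinal and transverse components (RL, RT) in bins of rapidity and pT; the one-dimensional fit above only illustrates the extraction principle.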
Transverse momenta and rapidities of Λ's produced in central nucleus-nucleus collisions at 4.5 GeV/c·u (C-C,...,O-Pb) were studied and compared with those from inelastic He-Li interactions at the same incident momentum. The polarization of the Λ hyperons was found to be consistent with zero (αP = −0.06 ± 0.11 for Λ's from central collisions). An upper limit on the Λ̄/Λ production ratio was estimated to be less than 4.5 × 10⁻³. The experiment was performed in a triggered streamer chamber.
Difficulties of the thermodynamical model approach to pion production in relativistic ion collisions
(1983)
Thermodynamical models with various forms of partial transparency of nuclear matter are considered. It is shown that the introduction of transparency, while significantly improving agreement with pion data on multiplicities and transverse momenta, leads to a serious discrepancy with the average rapidities of pions. Qualitative arguments are given that the difficulties of the thermodynamical approach can be overcome if one assumes hydrodynamical expansion in the first stage of the nuclear interaction.
A detailed study of pion production in inelastic and central nucleus-nucleus collisions was carried out using a 2 m streamer spectrometer. Nuclear targets mounted inside the streamer chamber were exposed to nuclear beams of 4.5 GeV/c per nucleon momentum. A systematic study of the influence of the central trigger on the observed data is performed. The data on multiplicities, rapidities, transverse momenta, and emission angles of negative pions are presented for various pairs of colliding nuclei. Intercorrelations between the various characteristics are studied and discussed. The results are compared with the predictions of several theoretical models. It is shown that the main features of pion production in nuclear collisions can be satisfactorily described by a model assuming independent nucleon-nucleon collisions with a subsequent cascading process. However, the observed correlation between Λ and pion characteristics seems to be unexplained by this picture.
The main results obtained within the energy scan program at the CERN SPS are presented. The anomalies in the energy dependence of hadron production indicate that the onset of the deconfinement phase transition is located at about 30A GeV. For the first time we seem to have clear evidence for the existence of a deconfined state of matter in nature. PACS numbers: 24.85.+p
We present the measured correlation functions for π+π−, π−π−, and π+π+ pairs in central S+Ag collisions at 200 GeV per nucleon. The Gamow function, which has traditionally been used to correct the correlation functions of charged pions for the Coulomb interaction, is found to be inconsistent with all measured correlation functions. Certain problems which have dominated the systematic uncertainty of the correlation analysis are related to this inconsistency. It is demonstrated that a new Coulomb correction method, based exclusively on the measured correlation function for π+π− pairs, may solve the problem.
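The Gamow function referred to above is the standard point-source Coulomb weight G(η) = 2πη/(e^{2πη} − 1), with Coulomb parameter η = z₁z₂αμ/k for a pair of reduced mass μ and relative momentum k. The sketch below is this textbook factor only, not the paper's proposed replacement correction; the sample momentum is arbitrary:

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M_PI = 0.13957          # charged-pion mass, GeV

def gamow(k, z1z2):
    """Point-source Coulomb (Gamow) factor for a charged-pion pair.
    k: relative momentum in the pair rest frame, GeV/c.
    z1z2: product of charges (+1 for like-sign, -1 for unlike-sign)."""
    mu = M_PI / 2.0                  # reduced mass of an equal-mass pair
    eta = z1z2 * ALPHA * mu / k      # Coulomb parameter
    x = 2.0 * math.pi * eta
    return x / math.expm1(x)

# Like-sign pairs are suppressed (G < 1), unlike-sign pairs enhanced (G > 1):
print(gamow(0.02, +1), gamow(0.02, -1))
```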
Using the NA49 main TPC, the central production of hyperons has been measured in CERN SPS Pb-Pb collisions at 158 GeV/c. The preliminary ratio, studied at 2.0 < y < 2.6 and 1 < pT < 3 GeV/c, equals ~ (13 ± 4)% (systematic error only). It is compatible, within errors, with the previously obtained ratios for central S + S [1], S + W [2], and S + Au [3] collisions. The fit to the transverse momentum distribution resulted in an inverse slope parameter T of 297 MeV. At this level of statistics we do not see any noticeable enhancement of hyperon production with the increased volume (and, possibly, degree of equilibration) of the system from S + S to Pb + Pb. This result is unexpected and counterintuitive, and should be investigated further. If confirmed, it will have a significant impact on our understanding of the mechanisms leading to enhanced strangeness production in heavy-ion collisions.
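An inverse slope parameter such as the T = 297 MeV quoted above is obtained by fitting the transverse-mass spectrum with an exponential, dN/dmT ∝ exp(−(mT − m₀)/T). A minimal sketch on synthetic data (the normalization, binning, and 3% error model are illustrative assumptions; only the slope value echoes the abstract):

```python
import numpy as np

# Exponential transverse-mass spectrum: dN/dmT ∝ exp(-(mT - m0)/T).
M0 = 1.116       # Lambda mass, GeV/c^2
T_TRUE = 0.297   # GeV, the inverse slope quoted in the abstract

rng = np.random.default_rng(1)
mt = np.linspace(M0 + 0.05, M0 + 1.5, 30)   # GeV/c^2
y = 100.0 * np.exp(-(mt - M0) / T_TRUE)     # toy spectrum
y *= rng.normal(1.0, 0.03, mt.size)         # 3% toy measurement errors

# ln(dN/dmT) is linear in mT with slope -1/T.
slope, _ = np.polyfit(mt, np.log(y), 1)
t_fit = -1.0 / slope
print(f"inverse slope T = {t_fit * 1000:.0f} MeV")
```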
Preliminary inclusive spectra for K+, K−, Ks0, Λ, and Λ̄ are presented, measured in central Pb + Pb collisions at 158 GeV per nucleon by the NA49 experiment. A comparison with data from lighter collision systems shows a strong change in the shape of the Λ rapidity distribution. The strangeness enhancement observed in S + S compared to p + p and p + A is not further increased in Pb + Pb.
Preliminary data on φ production in central Pb + Pb collisions at 158 GeV per nucleon are presented, measured by the NA49 experiment in the hadronic decay channel φ → K+K−. At mid-rapidity, the kaons were separated from pions and protons by combining dE/dx and time-of-flight information; in the forward rapidity range only dE/dx identification was used to obtain the rapidity distribution and a rapidity-integrated mt-spectrum. The mid-rapidity yield obtained was dN/dy = 1.85 ± 0.3 per event; the total φ multiplicity was estimated to be 5.0 ± 0.7 per event. Comparison with published pp data shows a slight, but not very significant, strangeness enhancement.
Lambda and Antilambda reconstruction in central Pb+Pb collisions using a time projection chamber
(1997)
The large acceptance time projection chambers of the NA49 experiment are used to record the trajectories of charged particles from Pb + Pb collisions at 158 GeV per nucleon. Neutral strange hadrons have been reconstructed from their charged decay products. To obtain distributions of Λ, Λ̄, and Ks0 in discrete bins of rapidity, y, and transverse momentum, pT, calculations have been performed to determine the acceptance of the detector and the efficiency of the reconstruction software as a function of both variables. The lifetime distributions obtained give values of cτ = 7.8 ± 0.6 cm for Λ and cτ = 2.5 ± 0.3 cm for Ks0, consistent with data book values.
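The cτ values above follow from the proper-decay-length distribution, which for an acceptance- and efficiency-corrected sample is a simple exponential, dN/dl ∝ exp(−l/cτ). A minimal sketch on a synthetic corrected sample (the sample size is arbitrary; the generator uses the PDG value cτ(Λ) ≈ 7.89 cm, not the NA49 result):

```python
import numpy as np

# For an ideal (fully corrected) sample, the maximum-likelihood estimate
# of c*tau from an exponential decay-length distribution is the sample mean.
CTAU_LAMBDA = 7.89  # cm, PDG value for the Lambda

rng = np.random.default_rng(2)
lengths = rng.exponential(CTAU_LAMBDA, 100_000)  # synthetic proper decay lengths

ctau_fit = lengths.mean()
ctau_err = lengths.std(ddof=1) / np.sqrt(lengths.size)
print(f"c*tau = {ctau_fit:.2f} +/- {ctau_err:.2f} cm")
```

In the real analysis the acceptance and reconstruction efficiency vary with decay length, so the distribution is fitted after applying the corrections described in the abstract rather than simply averaged.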
The transverse mass mt distributions for deuterons and protons are measured in Pb+Pb reactions near midrapidity and in the range 0 < mt − m < 1.0 (1.5) GeV/c² for minimum bias collisions at 158A GeV and for central collisions at 40 and 80A GeV beam energies. The rapidity density dn/dy, inverse slope parameter T, and mean transverse mass ⟨mt⟩ derived from the mt distributions, as well as the coalescence parameter B2, are studied as a function of the incident energy and the collision centrality. The deuteron mt spectra are significantly harder than those of protons, especially in central collisions. The coalescence factor B2 shows three systematic trends. First, it decreases strongly with increasing centrality, reflecting an enlargement of the deuteron coalescence volume in central Pb+Pb collisions. Second, it increases with mt. Finally, B2 shows an increase with decreasing incident beam energy even within the SPS energy range. The results are discussed and compared to the predictions of models that include the collective expansion of the source created in Pb+Pb collisions.
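The coalescence parameter B2 above is defined through the invariant yields, E_d d³N_d/dp_d³ = B2 (E_p d³N_p/dp_p³)², evaluated at p_p = p_d/2. A minimal sketch with toy exponential spectra (all normalizations and slopes below are illustrative assumptions, not NA49 data; the harder deuteron slope mimics the trend reported in the abstract):

```python
import math

# B2 from invariant yields: inv_d(p_d) = B2 * inv_p(p_d/2)^2 at midrapidity.
M_P, M_D = 0.938, 1.876  # proton and deuteron masses, GeV/c^2

def inv_proton(pt):
    """Toy invariant proton yield vs pT (GeV/c), exponential in mT."""
    mt = math.hypot(M_P, pt)
    return 50.0 * math.exp(-(mt - M_P) / 0.30)

def inv_deuteron(pt):
    """Toy invariant deuteron yield, with a harder slope than the protons."""
    mt = math.hypot(M_D, pt)
    return 0.4 * math.exp(-(mt - M_D) / 0.35)

def b2(pt_d):
    """Coalescence parameter at deuteron transverse momentum pt_d (GeV/c)."""
    return inv_deuteron(pt_d) / inv_proton(pt_d / 2.0) ** 2

print(b2(1.0))
```

With the harder deuteron slope, this toy B2 rises with pT, the same qualitative behavior as the mt dependence described above.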
English version: Alienating Justice: On the Social Surplus Value of the Twelfth Camel. In: David Nelken and Jirí Pribán (eds.) Law's New Boundaries: Consequences of Legal Autopoiesis. Ashgate, London 2001, 21-44. French version: Les multiples aliénations du droit : Sur la plus-value sociale du douzième chameau. Droit et Société 47, 2001, 75-100. Polish version: Sprawiedliwosc alienujaca: O dodatkowej wartosci dwunastego wielblada. Ius et Lex 1, 2002, 109-132. Italian version: Le molteplici alienazioni del diritto: Sul plusvalore sociale del dodicesimo camello. In: Annamaria Rufino and Gunther Teubner, Il diritto possibile: Funzioni e prospettive del medium giuridico. Guerini, Milano 2005, 93-130.
German version: Die Episteme des Rechts. Zu den erkenntnistheoretischen Grundlagen des reflexiven Rechts. In: Dieter Grimm (ed.) Steigende Staatsaufgaben - sinkende Steuerungsfähigkeit des Rechts. Nomos, Baden-Baden 1990, 115-154. English version: How the Law Thinks: Toward a Constructivist Epistemology of Law. Law and Society Review 23, 1989, 727-757; also in: Wolfgang Krohn, Günter Küppers and Helga Nowotny (eds.) Self-Organization: Portrait of a Scientific Revolution. Sociology of the Sciences: A Yearbook, Vol. XIV. Kluwer, Boston 1990, 87-113; and in: M.D.A. Freeman (ed.) Lloyd's Introduction to Jurisprudence, 6th ed. Sweet & Maxwell, London 1995, 636-654. French version: Pour une épistémologie constructiviste du droit. In: Gunther Teubner, Droit et réflexivité. Librairie générale de droit et de jurisprudence, Paris 1994, 171-204. Revised version in: Annales: Economies, Sociétés, Civilisations 1992, Paris, 1149-1169. Italian version: Il diritto come soggetto epistemico: Per una epistemologie giuridica "costruttivista," Rivista critica del diritto privato 8, 1990, 287-326.
German version: Die Episteme des Rechts. Zu den erkenntnistheoretischen Grundlagen des reflexiven Rechts. In: Dieter Grimm (ed.) Steigende Staatsaufgaben - sinkende Steuerungsfähigkeit des Rechts. Nomos, Baden-Baden 1990, 115-154. French version: Pour une épistémologie constructiviste du droit. In: Gunther Teubner, Droit et réflexivité. Librairie générale de droit et de jurisprudence, Paris 1994, 171-204. Revised version in: Annales: Economies, Sociétés, Civilisations 1992, Paris, 1149-1169. Italian version: Il diritto come soggetto epistemico: Per una epistemologie giuridica "costruttivista," Rivista critica del diritto privato 8, 1990, 287-326.
By order of 29 November 1999 the Federal Court of Justice (Bundesgerichtshof) referred to the European Court of Justice (ECJ) for a preliminary ruling under Article 234 EC two questions regarding the interpretation of the "doorstep-selling directive" and the "consumer credit directive", which arose in the course of proceedings involving Mr and Mrs Heininger, who took out from Bayerische Hypo- und Vereinsbank AG a loan to purchase a flat, secured by a charge on the property (Grundschuld). Five years later they sought to cancel the credit agreement, maintaining that an estate agent had called uninvited at their home and induced them to purchase the flat in question and, at the same time acting on a self-employed basis as agent for the bank, to enter into the loan agreement, without informing them of their right of cancellation. Article 1 para. 1 of the doorstep-selling directive provides that it applies to contracts under which a trader supplies goods or services to a consumer and which are concluded during a visit by a trader to the consumer's home where the visit does not take place at the express request of the consumer. Article 3 para. 2 a) of that directive provides that the directive shall not apply to contracts for the construction, sale and rental of immovable property or contracts concerning other rights relating to immovable property. Article 4 of the directive provides that traders shall be required to give consumers written notice of their right of cancellation. Article 5 provides that the consumer shall have the right to cancel the contract within seven days from receipt by the consumer of the notice.
Article 2 of the consumer credit directive provides that it shall not apply to credit agreements intended primarily for the purpose of acquiring or retaining property rights in land or in an existing or projected building, and that Article 1 a) and Articles 4 to 12 of the directive shall not apply to credit agreements secured by mortgage on immovable property. The German legislation transposing the doorstep-selling directive (the "HWiG") provides for a right of cancellation by the consumer within a period of one week if a transaction is entered into away from the trader's business premises. The cooling-off period does not start to run until the customer receives a notice in writing containing information on this right, and if that notice is not given, the right of cancellation will not lapse until one month after both parties have performed their obligations under the agreement in full. Section 5 para. 2 of the HWiG provides that where the transaction also falls within the scope of the legislation transposing the consumer credit directive (the "VerbrKrG"), only the provisions of the latter are to apply. Section 3 para. 2 of the VerbrKrG, in setting out the exceptions to the scope of that law, provides inter alia that Section 7 (right of cancellation) shall not apply to credit agreements in which credit is subject to the giving of security by way of a charge on immovable property and is granted on usual terms for credits secured by a charge on immovable property and the intermediate financing of the same. Given this legal framework it is obvious that the Heiningers could not cancel the credit agreement under the VerbrKrG. Although the agreement constitutes a consumer credit under section 1 VerbrKrG, the right of revocation is excluded by section 3 para. 2 VerbrKrG, an exclusion backed by the consumer credit directive.
Although the credit agreement was entered into away from the bank's business premises, the Heiningers could not cancel it under the HWiG either, since that law is not applicable to consumer credit agreements. Thus, the Heiningers' claim was denied by the German courts until the Federal Court of Justice raised the question whether the subsidiarity clause in section 5 para. 2 of the HWiG contradicts the provisions of the doorstep-selling directive.
In the early nineties the Hague Conference on Private International Law, on the initiative of the United States, started negotiations on a Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters (the "Hague Convention"). In October 1999 the Special Commission on duty presented a preliminary text, which was drafted quite closely along the lines of the European Convention on Jurisdiction and Enforcement of Judgments in Civil and Commercial Matters (the "Brussels Convention"). The latter was concluded between the then six Member States of the EEC in Brussels in 1968 and amended several times on the occasion of the accession of new Member States. In 2000, after the Treaty of Amsterdam altered the legal basis for judicial co-operation in civil matters in Europe, it was transformed into an EC Regulation (the "Brussels I Regulation"). The 1999 draft of the Hague Convention was heavily criticized by the USA and other states for its European approach of a double convention, regulating not only the recognition and enforcement of judgments but at the same time the extent of, and the limits to, jurisdiction to adjudicate in international cases. During a diplomatic conference in June 2001 a second draft was presented which contained alternative versions of several articles and thus resembled a record of the existing dissent more than a draft convention. Difficulties in reaching a consensus remained, especially with regard to activity-based jurisdiction, intellectual property, consumer rights and employee rights. In addition, the appropriateness of the whole draft was questioned in light of the problems posed by the de-territorialization of relevant conduct through the advent of the Internet. In April 2002 it was decided to continue negotiations on an informal level on the basis of a nucleus approach. The core consensus as identified by a working group, however, was not very broad.
The experts involved came to the conclusion that the project should be limited to choice of court agreements. In March 2004 a draft was presented which sets out its aims as follows: "The objective of the Convention is to make exclusive choice of court agreements as effective as possible in the context of international business. The hope is that the Convention will do for choice of court agreements what the New York Convention of 1958 has done for arbitration agreements." In April 2004 the Special Commission of the Hague Conference adopted a Draft "Convention on Exclusive Choice of Court Agreements", which according to its Art. 2 No. 1 a) is not applicable to choice of court agreements "to which a natural person acting primarily for personal, family or household purposes (a consumer) is a party". The broader project of a global judgments convention thus seems to have been abandoned, or at least postponed for an unlimited period. There are, of course, several reasons why the Hague Judgments project failed. Samuel Baumgartner has described an important one as the "Justizkonflikt" between the United States and Europe or, more specifically, Germany. Within the context of the general topic of this conference, that is (international) jurisdiction for human rights, in the remainder of this presentation I shall elaborate on the socio-cultural aspects of the impartiality of judgments and their enforcement on a global scale.
Recapture of two Marsh Tits (Parus palustris) after a series of orientation experiments
(1989)
Two Marsh Tits were recaptured in mist nets after they had been used in orientation experiments for several weeks and then released at the site of capture. One was recaptured 1 1/2 years after the tests. The experiments do not appear to have had any impact on the birds' ability to survive.
The Barn Owl (Tyto alba) is an owl species occurring in almost all regions of the world. In Central Europe it reaches the northernmost limit of its range; here it is found in low-lying areas with little forest. Since 1976, a working group of the Hessische Gesellschaft für Ornithologie und Naturschutz (HGON) and the Deutscher Bund für Vogelschutz (DBV) has carried out measures to protect Barn Owls in the Main-Kinzig district of Hesse. These include installing nest boxes at suitable sites and winter feeding experiments. The nest boxes were checked every year and the young birds found in them were ringed. The aim of the present work is to present the results of these investigations from the past 12 years, with the main focus on the breeding biology of the Barn Owl on the one hand and the dispersal of the young owls on the other.
The title of the conference whose contributions this volume documents is programmatic: beyond the postmodern mood of farewell into which some reflection on the future of the state lapses, melancholically or with Schadenfreude depending on theoretical and political orientation, it presupposes what should actually be self-evident: that the state is not to be bidden farewell, either theoretically or practically, in the future either. It seeks to make clear that in year one of a new millennium, in the Berlin Republic, it can no longer be a matter of continuing the general uncertainty of the eighties and nineties. It is not enough to doubt theoretically (and sometimes, it seems, only theoretically, without taking note of the role modern states actually play in industrial societies) whether the state of the future can still be sovereign, national, social, steering, intervening, etc., to name only some of the attributes of the state that are the subject of such skeptical reflections. Looking back on the debates about the steering capacity of the state, the crisis of the welfare state, deregulation, privatization and de-bureaucratization, as well as internationalization and globalization, it is time to put possible solutions up for discussion. After the sociological disenchantment and philosophical deconstruction of the state, what is currently needed is a countermovement: the practicable reconstruction of normative guiding ideas. ...
By radically lowering, as an infrastructure, the transaction costs of cross-border communication, the Internet acts as a catalyst of the globalization of society. Conflicts of laws thereby acquire heightened significance in all areas of society. Within the general debate on establishing a global governance, Internet governance therefore plays a paradigmatic role. From an economic perspective, the creation of a legal framework for global e-commerce stands in the foreground. With a view to innovation-friendly regulation, it is appealing in this context to pursue the question of a legal framework for cross-border business-to-consumer e-commerce. For German and European consumer contract law currently stand rather for an opposing tendency to limit private autonomy in favor of mandatory legislative requirements, which are also secured at the conflict-of-laws level against a party-autonomous choice of law. While, for example, the country-of-origin principle anchored in the E-Commerce Directive opens not only economic regulatory law but also large parts of private law to the innovative forces of regulatory competition, consumer contract law, owing to its protective purpose, appears incompatible with innovation-friendly regulatory models. If, in the field of consumer contract law, not only the traditional competition of individual contract clauses and standard terms (AGB) within a national private-law order but also the institutional competition between the consumer-protection models of the various national private-law orders is thus excluded, the only potential reservoir of innovation that remains is the space of societal self-regulation beyond (state) law.
Against this background, the following examines whether and to what extent, owing to the specific characteristics of Internet communication in the field of global e-commerce, a densification of phenomena of private norm-setting and social self-regulation can be observed that may be interpreted as the emergence of a transnational consumer contract law. First, a definition of transnational law is developed that ties this concept to the spontaneous innovative forces of global civil (law) society (II.). In a second step, the conditions of emergence and the phenomena of a transnational consumer contract law are examined (III.). The question of a constitutionalization of transnational consumer contract law is then pursued (IV.). The contribution closes with an outlook on potential aims and methods of regulating the competition of transnational consumer-protection regimes (V.).
In April 2003 I commented on the European Commission's Action Plan on a More Coherent European Contract Law [COM(2003) 68 final] and the Green Paper on the Modernisation of the 1980 Rome Convention [COM(2002) 654 final]. While the main argument of that paper, i.e. the common neglect of the inherent interrelation between the further harmonisation of substantive contract law by directives or through an optional European Civil Code on the one hand and the modernisation of the conflict rules for consumer contracts in Art. 5 Rome Convention on the other, remains a pressing issue, and as the German Law Journal continues its efforts to offer timely and critical analysis of consumer law issues, there is a variety of recent developments worth noting.
The negative-pion multiplicity is measured for central collisions of 40Ar with KCl at eight energies from 0.36 to 1.8 GeV/nucleon, and for 4He on KCl and 40Ar on BaI2 at 977 and 772 MeV/nucleon, respectively. A systematic discrepancy with a cascade-model calculation, which fits proton- and pion-nucleus cross sections but omits potential-energy effects, is used to derive the energy going into bulk compression of the system. A value of the incompressibility constant of K = 240 MeV is extracted assuming a parabolic form of the nuclear-matter equation of state.
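The parabolic form of the nuclear-matter equation of state referred to above is the standard one, in which the incompressibility K fixes the curvature of the compression energy per nucleon W at the saturation density ρ₀:

```latex
W(\rho) = W_0 + \frac{K}{18}\left(\frac{\rho - \rho_0}{\rho_0}\right)^{2},
\qquad
K = 9\,\rho_0^{2}\left.\frac{\partial^{2} W}{\partial \rho^{2}}\right|_{\rho=\rho_0}
```

With K = 240 MeV, compressing nuclear matter to twice saturation density costs K/18 ≈ 13 MeV per nucleon in this parametrization.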
The parities of eleven J = 1 levels in 208Pb were determined by nuclear resonance fluorescence scattering of linearly polarized photons. A new 1+ level at Ex = 5.846 MeV with Γ₀²/Γ = 1.2 ± 0.4 eV was found. This level can probably be identified with the theoretically predicted isoscalar 1+ state in 208Pb. All other bound dipole states below 7 MeV with Γ₀²/Γ > 1.5 eV have negative parity. The 1− assignment to the 4.842-MeV level is of special significance because of previous conflicting results about its parity.
The 16O(γ,p0) reaction has been studied with linearly polarized bremsstrahlung photons in and below the giant E1 resonance. The parity of the absorbed radiation was determined from the observed azimuthal asymmetry of the emitted protons. Combined with unpolarized measurements, the polarized results determine the proton decay amplitudes of the M1 resonance at Ex = 16.2 MeV in 16O. The shape of the unpolarized 16O(γ,p3) angular distribution in the giant E1 resonance was derived from the measured analyzing power. NUCLEAR REACTIONS 16O(γ,p), E = 15-25 MeV; measured analyzing power at θ = 90° with linearly polarized bremsstrahlung; 16O dipole levels deduced π; 16.2 MeV 1+ resonance deduced p0 decay amplitudes; 16O GEDR deduced p3 angular distribution.
The ultrarelativistic quantum molecular dynamics model (UrQMD) is used to study global observables in central reactions of Au+Au at √s = 200A GeV at the Relativistic Heavy Ion Collider (RHIC). Strong stopping governed by massive particle production is predicted if secondary interactions are taken into account. The underlying string dynamics and the early hadronic decoupling imply only small transverse expansion rates. However, rescattering with mesons is found to act as a source of pressure, leading to additional flow of baryons and kaons while cooling down pions.
11 262 keV 1+ state in 20Ne
(1983)
The excitation energy of the lowest 1+, T = 1 state in 20Ne, which is important for parity-nonconservation studies, has been determined in a photon scattering experiment to be 11 262.3 ± 1.9 keV. Values for the γ-ray branching of this level to the ground state and to the first 2+ level in 20Ne are 84 ± 5% and 16 ± 5%, respectively. NUCLEAR REACTIONS 20Ne(γ,γ), Eγ < 18 MeV, bremsstrahlung; measured Eγ, γ branching. Natural Ne targets.
Proton emission in relativistic nuclear collisions is examined for events of low and high multiplicity, corresponding to large and small impact parameters. Peripheral reactions exhibit distributions of protons in agreement with spectator-participant decay modes. Central collisions of equal-size nuclei are dominated by the formation and decay of a fireball system. Central collisions of light projectiles with heavy targets exhibit an enhancement in sideward emission which is predicted by recent hydrodynamical calculations.
Angular distributions for elastic and inelastic transitions in 20Ne + 16O scattering have been measured at E(20Ne) = 50 MeV. For the 0+, 2+, and 4+ members of the 20Ne ground-state rotational band, the angular distributions exhibit pronounced backward peaking characteristic of an α-cluster exchange mechanism. The analysis of the ground-state transition in the first-order elastic transfer model yields no satisfactory fit, although microscopic cluster form factors and full recoil corrections are employed. A coupled channels calculation for the 0+, 2+, and 4+ transitions reveals very strong coupling effects, indicating that the coherent superposition of first-order optical model and distorted-wave Born-approximation amplitudes may not be an adequate model for these reactions. NUCLEAR REACTIONS 16O(20Ne,16O) and 16O(20Ne,20Ne), elastic and inelastic transfer; E = 50 MeV; measured σ(Ef, θ); optical model + DWBA and CCBA analyses.
Elastic α scattering to backward angles has been studied for 40,42,44,48Ca between 40.7 and 72.3 MeV. The cross sections for 40Ca are larger than those for the heavier isotopes up to the highest energies. They show backward increases that disappear above 50 MeV. The enhancement factor for 40Ca over 42,44Ca varies smoothly with energy. 48Ca also shows a backward cross-section enhancement over 42,44Ca. α-cluster rotational bands in the 44Ti compound system, four-nucleon correlations in 40Ca, and the l-dependent optical model are discussed as approaches to understanding the anomaly. The rotator model appears to agree qualitatively with the experimental data. It involves rotational bands extending at least up to J = 16 in 44Ti.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex average permanent NOy removal of over 5 ppb in late December between 500 and 550 K, and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in-situ observations from the high altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4·10⁻⁶ cm⁻³ h⁻¹ in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, the denitrification has only moderate impact (≤10%) on the simulated vortex average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloonborne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l'Adour, France. For both flights a ClO daylight and nighttime vertical profile could be derived over an altitude range of approximately 15–31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l'Adour in 1999 show good agreement with the ClO measurements. For the flight launched in León in 1996, similarly good agreement is found, except around ~650 K potential temperature (~26 km altitude). However, for solar zenith angles greater than 86°–87° the simulated ClO mixing ratios overestimate the measured ClO by approximately a factor of 2.5 or more for both flights. We therefore conclude that, apart from the situation at solar zenith angles greater than 86°–87°, where the model simulations substantially overestimate the ClO observations, the presented ClO measurements give no indication of substantial uncertainties in the midlatitude chlorine chemistry of the stratosphere.
Back-angle enhancements of elastic alpha-scattering cross sections have been observed for nuclei at the ends of the 1p, 2s-1d, and f7/2 shells. Strong reduction of this enhancement occurs if excess neutrons enter the next open major shell. The results are discussed in terms of intermediate alpha structure.
Pion-production cross sections have been measured for the reaction 40Ar+40Ca --> pi+ + X at a laboratory energy of 1.05 GeV/nucleon. A maximum in the pi+ cross section occurs at mid-rapidity, which is anomalous relative to p+p, p+nucleus, and many other heavy-ion reactions. Calculations based on cascade and thermal models fail to fit the data.
Inclusive energy spectra of protons, deuterons, and tritons were measured with a telescope of silicon and germanium detectors with a detection range for proton energies up to 200 MeV. Fifteen sets of data were taken using projectiles ranging from protons to 40Ar on targets from 27Al to 238U at bombarding energies from 240 MeV/nucleon to 2.1 GeV/nucleon. Particular attention was paid to the absolute normalization of the cross sections. For three previously reported reactions, He fragment cross sections have been corrected and are presented. To facilitate a comparison with theory, the sum of nucleonic charges emitted as protons plus composite particles was estimated and is presented as a function of fragment energy per nucleon in the interval from 15 to 200 MeV/nucleon. For low-energy fragments at forward angles the protons account for only 25% of the nucleonic charges. The equal-mass 40Ar plus Ca systems were examined in the center of mass. Here, at 0.4 GeV/nucleon 40Ar plus Ca, the proton spectra appear to be nearly isotropic in the center of mass over the region measured. Comparisons of some data with firestreak, cascade, and fluid dynamics models indicate a failure of the first and fair agreement with the latter two. In addition, associated fast charged-particle multiplicities (where the particles had energies larger than 25 MeV/nucleon) and azimuthal correlations were measured with an 80-counter array of plastic scintillators. It was found that the associated multiplicities were a smooth function of the total kinetic energy of the projectile. NUCLEAR REACTIONS U(20Ne,X), E/A=240 MeV/nucleon; U(40Ar,X), Ca(40Ar,X), U(20Ne,X), Au(20Ne,X), Ag(20Ne,X), Al(20Ne,X), U(4He,X), Al(4He,X), E/A=390 MeV/nucleon; U(40Ar,X), Ca(40Ar,X), U(20Ne,X), U(4He,X), U(p,X), E/A=1.04 GeV/nucleon; U(20Ne,X), E/A=2.1 GeV/nucleon; measured sigma(E, theta), X=p,d,t.
Exclusive pi- and charged-particle production in collisions of Ar+KCl is studied at incident energies from 0.4 to 1.8 GeV/u. Complete disintegration of both nuclei is observed. The correlation between pi- and total charge multiplicity shows no islands of anomalous pion production. For constant numbers of proton participants the pi- multiplicity distributions are Poissonian. For central collisions <n pi-> increases smoothly, and to first order linearly, with the c.m. energy. Disagreement with the firestreak model is found. PACS numbers: 25.70.Hi, 24.10.Dp
Lambda hyperons produced in central collisions of 40Ar+KCl at 1.8 GeV/u incident energy were detected in a streamer chamber by their charged-particle decay. For central collisions with impact parameters b<2.4 fm the Lambda production cross section is 7.6±2.2 mb. A calculation in which Lambda production occurs in the early stage of the collision qualitatively reproduces the results but underestimates the transverse momenta. An average Lambda polarization of -0.10±0.05 is observed. PACS numbers: 25.70.Bc
Pion production and charged-particle multiplicity selection in relativistic nuclear collisions
(1982)
Spectra of positive pions with energies of 15-95 MeV were measured for high energy proton, 4He, 20Ne, and 40Ar bombardments of targets of 27Al, 40Ca, 107,109Ag, 197Au, and 238U. A Si-Ge telescope was used to identify charged pions by dE/dx-E and, in addition, stopped pi+ were tagged by the subsequent muon decay. In all, results for 14 target-projectile combinations are presented to study the dependence of pion emission patterns on the bombarding energy (from E/A=0.25 to 2.1 GeV) and on the target and the projectile masses. In addition, associated charged-particle multiplicities were measured in an 80-paddle array of plastic scintillators, and used to make impact parameter selections on the pion-inclusive data. NUCLEAR REACTIONS U(20Ne, pi+), E/A=250 MeV; U(40Ar, pi+), Ca(40Ar, pi+), U(20Ne, pi+), Au(20Ne, pi+), Ag(20Ne, pi+), Al(20Ne, pi+), U(4He, pi+), Al(4He, pi+), E/A=400 MeV; Ca(40Ar, pi+), U(20Ne, pi+), U(4He, pi+), U(p, pi+), E/A=1.05 GeV; U(20Ne, pi+), E/A=2.1 GeV; measured sigma(E, theta), inclusive and selected on associated charged-particle multiplicity.
Energy spectra and angular distributions have been measured of 3He and 4He fragments emitted from Ag and U targets, bombarded with 2.7-GeV protons, and 1.05-GeV/nucleon alpha particles and 16O ions. All cross sections increase dramatically with projectile mass. No narrow peaks are found in the angular distributions or in the energy spectra.
Double-differential cross sections have been measured for high-energy p, d, t, 3He, and 4He particles emitted from uranium targets irradiated with 20Ne ions at energies of 250, 400, and 2100 MeV/nucleon and with 4He ions at 400 MeV/nucleon. Using the shape and yield of the proton energy spectra, the shape and yield of the d, t, 3He, and 4He energy spectra can be deduced at all measured angles and incident projectile energies by assuming that these fragments are formed by coalescence of cascade nucleons, using a model analogous to those of Butler and Pearson, and of Schwarzschild and Zupančić.
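The coalescence ansatz underlying this deduction relates the invariant yield of a mass-A cluster at a given energy per nucleon to the A-th power of the proton yield at that energy. A minimal sketch with a toy exponential proton spectrum (the slope parameter and coalescence constant are illustrative, not fitted values from the paper):

```python
import math

def proton_yield(e_per_nucleon_mev, t_slope=50.0, n0=1.0):
    # toy exponential proton invariant yield with slope parameter t_slope (MeV)
    return n0 * math.exp(-e_per_nucleon_mev / t_slope)

def cluster_yield(e_per_nucleon_mev, a, b_a=1e-3):
    # coalescence: the yield of a mass-a cluster at a given energy per nucleon
    # is proportional to the proton yield at that energy, raised to the power a
    return b_a * proton_yield(e_per_nucleon_mev) ** a

# immediate consequence: an exponential deuteron spectrum falls twice as
# steeply as the proton spectrum it is built from
slope_ratio = (math.log(cluster_yield(0.0, 2) / cluster_yield(50.0, 2))
               / math.log(proton_yield(0.0) / proton_yield(50.0)))
```

The single constant b_a plays the role of the coalescence parameter that the model fits to each fragment species.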
A simple model is proposed for the emission of nucleons with velocities intermediate between those of the target and projectile. In this model, the nucleons which are mutually swept out from the target and projectile form a hot quasiequilibrated fireball which decays as an ideal gas. The overall features of the proton-inclusive spectra from 250- and 400-MeV/nucleon 20Ne ions and 400-MeV/nucleon 4He ions interacting with uranium are fitted without any adjustable parameters.
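The fireball kinematics described here follow from energy-momentum conservation for the swept-out participant nucleons alone. A sketch (relativistic, all energies in MeV; the ideal-gas temperature estimate T = (2/3)·E*/N is the simplest nonrelativistic reading of the model, not the paper's exact treatment):

```python
import math

M_N = 938.0  # nucleon mass in MeV

def fireball(n_proj, n_targ, e_beam_per_nucleon):
    """Velocity and ideal-gas temperature of a fireball formed by n_proj
    projectile participants (at beam energy) and n_targ target participants
    (initially at rest)."""
    gamma = 1.0 + e_beam_per_nucleon / M_N
    p_tot = n_proj * M_N * math.sqrt(gamma ** 2 - 1.0)  # total momentum
    e_tot = n_proj * gamma * M_N + n_targ * M_N         # total energy
    m_fireball = math.sqrt(e_tot ** 2 - p_tot ** 2)     # invariant mass
    beta = p_tot / e_tot                                # fireball velocity
    e_star = m_fireball - (n_proj + n_targ) * M_N       # internal energy
    temperature = (2.0 / 3.0) * e_star / (n_proj + n_targ)  # ideal-gas T
    return beta, temperature

# symmetric participant numbers at 400 MeV/nucleon: the source moves at a
# velocity intermediate between target and projectile, with T of order tens of MeV
beta, t = fireball(10, 10, 400.0)
```

Note that beta and temperature depend only on the ratio n_proj/n_targ, which in the geometric model is fixed by the impact parameter.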
The energy spectra of protons and light nuclei produced by the interaction of 4He and 20Ne projectiles with Al and U targets have been investigated at incident energies ranging from 0.25 to 2.1 GeV per nucleon. Single fragment inclusive spectra have been obtained at angles between 25° and 150°, in the energy range from 30 to 150 MeV/nucleon. The multiplicity of intermediate and high energy charged particles was determined in coincidence with the measured fragments. In a separate study, fragment spectra were obtained in the evaporation energy range from 12C and 20Ne bombardment of uranium. We observe structureless, exponentially decaying spectra throughout the range of studied fragment masses. There is evidence for two major classes of fragments; one with emission at intermediate temperature from a system moving slowly in the lab frame, and the other with high temperature emission from a system propagating at a velocity intermediate between target and projectile. The high energy proton spectra are fairly well reproduced by a nuclear fireball model based on simple geometrical, kinematical, and statistical assumptions. Light cluster emission is also discussed in the framework of statistical models. NUCLEAR REACTIONS U(20Ne,X), E=250 MeV/nucl.; U(20Ne,X), U(α,X) E=400 MeV/nucl.; U(20Ne,X), Al(20Ne,X), E=2.1 GeV/nucl.; measured σ(E,θ), X=p, d, t, 3He,4He. U(20Ne,X), U(α,X), E=400 MeV/nucl.; U(20Ne,X), E=2.1 GeV/nucl.; measured σ(E, θ), Li to O. U(20Ne,X), U(12C,X), E=2.1 GeV/nucl.; measured σ(E, 90°), 4He to B. Nuclear fireballs, coalescence, thermodynamics of light nuclei production.
Particle production in central Pb+Pb collisions was studied with the NA49 large acceptance spectrometer at the CERN SPS at beam energies of 20, 30, 40, 80, and 158 GeV per nucleon. A change of the energy dependence is observed around 30A GeV for the yields of pions and strange particles as well as for the shapes of the transverse mass spectra. At present only a reaction scenario with onset of deconfinement is able to reproduce the measurements.
Das Tetralemma des Rechts : zur Möglichkeit einer Selbstbeschränkung des Kommunikationssystems Recht
(2000)
What does the law do when it does nothing? Niklas Luhmann cast in this question the problem of how judicial self-restraint is conceivable under the prohibition of the denial of justice. Luhmann sketched an answer from the viewpoint of a systems theory that seeks to grasp law as an operatively closed communication system within a theory of society founded on the epistemology of radical constructivism (law as an autopoietic system), but he did not think it through to a satisfying conclusion. The question is particularly interesting against the background of the debate on a procedural paradigm of law, which, in view of current social upheavals, is supposed to replace the traditional material paradigm (proceduralization of law). In searching for answers, it therefore seems attractive to contribute both to the systems theory of law and to a theory of procedural law.
Reflexive transnational law : the privatisation of civil law and the civilisation of private law
(2002)
The author examines the emergence of a transnational private law in alternative dispute resolution bodies and private norm-formulating agencies from a reflexive law perspective. After introducing the concept of reflexive law, he applies the idea of law as a communicative system to the ongoing debate on the existence of a New Law Merchant or lex mercatoria. He then discusses some features of international commercial arbitration (e.g. the lack of transparency) which hinder self-reference (autopoiesis) and thus the production of legal certainty in lex mercatoria as an autonomous legal system. He contrasts these findings with the Domain Name Dispute Resolution System, which, as opposed to lex mercatoria, was rationally planned and highly formally organised by WIPO and ICANN, allows for self-reference, and is thus designed as an autopoietic legal system, albeit with a very limited scope, namely the interference of abusive domain name registrations with trademarks (cybersquatting). From the comparison of both examples the author derives some preliminary ideas towards a theory of reflexive transnational law, suggesting that the established general trend of privatisation of civil law needs to be accompanied by a civilisation of private law, i.e. the constitutionalization of transnational private regimes by embedding them in a procedural constitution of freedom.
"Non-events", life disappointments caused by the permanent absence of desired events or by the failure to reach important life goals, can lead to existential crises. The authors interviewed 40 people and, using their cases, examined the processes of coping with such crises, triggered for example by involuntary childlessness or a professional career that never materialized. They identified several process aids: cognitive and emotional processing, social support, substitute activities, and pragmatic action. All interviewees reported developmental gains resulting from the crisis and its mastery.
The increase in violent acts, especially by children and adolescents, is widely deplored in public and pedagogical debate. Comparative analyses over time show that there can be no talk of a dramatic rise in violent acts; rather, public sensitivity to such incidents has grown. On the other hand, there are shocking examples of particularly brutal assaults, which naturally dominate public awareness. Politically motivated violent acts, especially those with a right-wing extremist background, have clearly increased in recent years. Yet regardless of whether and where the number of violent acts has risen, every single act constitutes an attack on human dignity and on political culture, and therefore calls for countermeasures.
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods are thus not suited to describing the nonlinearities of this system, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Following an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear backpropagation neural network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent concentrations of greenhouse gases, GHG, and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism, and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN far exceeds that of a linear model; on the global scale it amounts to 84% explained variance. Additionally, the results of the nonlinear algorithm are plausible in a physical sense, i.e. in amplitude and time structure. Nevertheless they cover a broad range; e.g., the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the time period 1856-1998. The simulated amplitudes, however, lie within the discussed range (HOUGHTON et al., 2001), and the combined anthropogenic effect corresponds to the observed increase in temperature for the examined time period. Moreover, the BPN detects anthropogenically induced climate change at a high significance level. The concept of neural networks can therefore be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
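As an illustration of the BPN concept (not of the network architecture or forcing data used in the study), here is a minimal one-hidden-layer backpropagation network in pure Python, trained by stochastic gradient descent on a toy input-output relation:

```python
import math
import random

def train_bpn(data, n_hidden=8, lr=0.5, epochs=2000, seed=1):
    """Minimal one-hidden-layer backpropagation network: sigmoid hidden
    units, linear output, squared-error loss, plain SGD."""
    random.seed(seed)
    n_in = len(data[0][0])
    w1 = [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [random.uniform(-1.0, 1.0) for _ in range(n_hidden)]
    b2 = 0.0

    def sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
        return h, sum(w * hi for w, hi in zip(w2, h)) + b2

    for _ in range(epochs):
        for x, target in data:
            h, y = forward(x)
            err = y - target  # dLoss/dy for squared error (up to a factor 2)
            for j in range(n_hidden):
                # error backpropagated through the linear output and sigmoid
                grad_h = err * w2[j] * h[j] * (1.0 - h[j])
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
                w2[j] -= lr * err * h[j]
            b2 -= lr * err
    return lambda x: forward(x)[1]

# toy nonlinear target: logical OR of two inputs
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]
predict = train_bpn(data)
mse = sum((predict(x) - t) ** 2 for x, t in data) / len(data)
```

The "free parameters" the abstract mentions (here n_hidden, lr, epochs, and the seed of the weight initialisation) are exactly what has to be tuned by trial and error.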
Perhaps no one outside the specialist community would have taken an interest in the global climate problem were it not for two explosive, interlinked facts: humanity is highly dependent on the favor of the climate. What happens to our climate therefore cannot leave us indifferent. And: humanity has increasingly begun to influence the climate itself. From this a special responsibility arises for all of us. ...
When delegations from almost all countries of the world meet again at the climate summit in The Hague [more precisely, at the now sixth Conference of the Parties to the United Nations Framework Convention on Climate Change] to deliberate on climate protection measures, one question always resonates: are such measures really necessary? Should we not simply wait until we know more, perhaps even everything? ...
The increase in the concentration of CO2 and other "greenhouse gases" in the atmosphere is beyond doubt, and the climate just as undoubtedly responds to it. Christian-Dietrich Schönwiese, professor of meteorological environmental research and climatology at the University of Frankfurt am Main, sees an urgent need for political action and at the same time pleads for a more objective debate on climate protection.
The public climate debate seems to be taking on a life of its own. Detached from the findings of the specialists, some speak of the "climate catastrophe" that will soon hit us with full force unless we immediately do everything differently; panic is their chosen means of attracting attention. Others see in the "climate swindle" a pretext for research funding and additional tax burdens on the economy; their strategy is confusion and trivialization. Fixating on such extreme positions will certainly not do justice to the challenges of the future. It is high time for a more objective discussion and for a clarifying contribution to the confusion surrounding "climate".
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In the second step, time series components such as trends are removed to obtain deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It turns out that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons, especially autumn, also show this increase, in Germany an insignificant decrease is found in the summer and autumn seasons. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and in the Weibull probability density function parameters, respectively.
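The first analysis step, counting threshold exceedances separately in the two halves of the record, can be sketched as follows (defining the threshold via an empirical quantile is an assumption made here for illustration, not necessarily the paper's threshold definition):

```python
def exceedance_counts(series, quantile=0.95):
    """Count values above the empirical `quantile` threshold, separately
    for the first and second half of the series."""
    ordered = sorted(series)
    threshold = ordered[int(quantile * (len(ordered) - 1))]
    half = len(series) // 2
    first = sum(1 for v in series[:half] if v > threshold)
    second = sum(1 for v in series[half:] if v > threshold)
    return threshold, first, second

# a series whose large values cluster in the second half shows the kind of
# shift in extreme-event frequency discussed in the text
series = list(range(100))  # stand-in for monthly precipitation totals
threshold, first, second = exceedance_counts(series, quantile=0.9)
```

Comparing `first` and `second` against what a stationary series would give is then a simple frequency test for a change in extremes.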
Simulation of global temperature variations and signal detection studies using neural networks
(1998)
The concept of neural network models (NNM) is a statistical strategy which can be used if a superposition of forcing mechanisms leads to some set of effects and if a sufficient related observational data base is available. In comparison to multiple regression analysis (MRA), the main advantages are that NNM is an appropriate tool also in the case of non-linear cause-effect relations and that interactions of the forcing mechanisms are allowed for. In comparison to more sophisticated methods like general circulation models (GCM), the main advantage is that details of the physical background, such as feedbacks, may be unknown: neural networks learn from observations, which reflect feedbacks implicitly. The disadvantage, of course, is that the physical background is neglected. In addition, the results prove to depend sensitively on the network architecture, such as the number of hidden neurons or the initialisation of the learning parameters. We used a supervised backpropagation network (BPN) with three neuron layers, an unsupervised Kohonen network (KHN), and a combination of both called a counterpropagation network (CPN). These concepts are tested with respect to their ability to simulate the observed global as well as hemispheric mean surface air temperature annual variations 1874-1993 when parameter time series of the following forcing mechanisms are incorporated: equivalent CO2 concentrations, tropospheric sulfate aerosol concentrations (both anthropogenic), volcanism, solar activity, and ENSO (all natural). It turns out that in this way up to 83% of the observed temperature variance can be explained, significantly more than by MRA. The inclusion of the North Atlantic Oscillation does not improve these results. On a global average, the greenhouse gas (GHG) signal so far is assessed to be 0.9-1.3 K (warming) and the sulfate signal 0.2-0.4 K (cooling), results closely similar to the GCM findings published in the recent IPCC Report. The related signals of the natural forcing mechanisms considered cover amplitudes of 0.1-0.3 K. Our best NNM estimate of the GHG doubling signal amounts to 2.1 K (equilibrium) or 1.7 K (transient), respectively.
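Of the three network types compared, the unsupervised Kohonen network is the least familiar in this context; a minimal 1-D self-organizing map for scalar inputs might look as follows (unit count, learning rate, and the shrinking neighborhood schedule are illustrative choices, not those of the study):

```python
import random

def train_som(data, n_units=10, lr=0.3, epochs=60, seed=0):
    """Minimal 1-D Kohonen self-organizing map for scalar inputs: the
    best-matching unit (BMU) and its neighbors move toward each sample,
    with a neighborhood radius that shrinks over the epochs."""
    random.seed(seed)
    weights = [random.random() for _ in range(n_units)]
    for epoch in range(epochs):
        radius = max(0, 3 - (4 * epoch) // epochs)  # shrinks 3 -> 0
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                if abs(i - bmu) <= radius:
                    weights[i] += lr * (x - weights[i])
    return weights

# map scalar samples spread over [0, 1] onto 10 topologically ordered units
data = [i / 19.0 for i in range(20)]
weights = train_som(data)
```

In a counterpropagation network, such an unsupervised layer is then combined with a supervised output layer trained on the winning units.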
The climate system can be regarded as a dynamic nonlinear system, so traditional linear statistical methods fail to model its nonlinearities, and alternative statistical techniques are required. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for several years now. Most authors use the standard backpropagation network (BPN) for their investigations, although this specific model architecture carries a certain risk of over- or underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented Fast Simulated Annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a yet undefined cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism, and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of a multiple linear regression model; the simulation quality on the global scale amounts to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994, updated by Jones, 1999a) for the examined period 1856-1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA detects anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
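The Fast Simulated Annealing ingredient can be sketched independently of the network: Cauchy-distributed jumps whose scale shrinks with a T_k = T0/(1+k) schedule, combined with Metropolis acceptance. A toy scalar minimization (the cost function here is an arbitrary stand-in, not the network's cost):

```python
import math
import random

def fast_simulated_annealing(cost, x0, t0=1.0, steps=5000, seed=0):
    """Fast Simulated Annealing on a scalar variable: a Cauchy visiting
    distribution (long jumps remain possible even at low temperature)
    with the schedule T_k = T0 / (1 + k) and Metropolis acceptance."""
    random.seed(seed)
    x = best = x0
    for k in range(steps):
        t = t0 / (1.0 + k)
        # Cauchy step of scale t via the inverse-CDF tangent transform
        x_new = x + t * math.tan(math.pi * (random.random() - 0.5))
        delta = cost(x_new) - cost(x)
        if delta < 0.0 or random.random() < math.exp(-delta / max(t, 1e-300)):
            x = x_new
        if cost(x) < cost(best):
            best = x
    return best

# toy example: recover the minimum of a shifted parabola
best = fast_simulated_annealing(lambda z: (z - 3.0) ** 2, x0=5.0)
```

The fast 1/(1+k) cooling, which would freeze a Gaussian-step annealer prematurely, is admissible here precisely because the heavy Cauchy tails keep occasional long escape jumps possible.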
Observed global and European spatiotemporally related fields of surface air temperature, mean sea-level pressure, and precipitation are analyzed statistically with respect to their response to external forcing factors, such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations, and explosive volcanism, and to known internal climate mechanisms, such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, plus residuals. Finally, a back-transformation leads to the spatiotemporally related patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance including natural variability; this test answers the question whether a significant anthropogenic climate change is visible in the observed data. Second, the anthropogenic signal is tested against the climate noise component only; this test answers the question whether the anthropogenic signal is significant among others in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed the climate (first test).
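The second step, separating a principal-component time series into forcing signals by regression, reduces to solving the least-squares normal equations. A self-contained sketch (pure Python, ordinary rather than stepwise regression, with made-up forcing series):

```python
def separate_signals(y, forcings):
    """Least-squares separation of a time series y (e.g. a PCA score
    series) into an intercept plus one signal per forcing series, by
    solving the normal equations with Gaussian elimination."""
    n = len(y)
    # design matrix with an intercept column
    X = [[1.0] + [f[i] for f in forcings] for i in range(n)]
    p = len(X[0])
    # normal equations: A beta = b, with A = X^T X and b = X^T y
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    # forward elimination (no pivoting; adequate for this small sketch)
    for i in range(p):
        for j in range(i + 1, p):
            factor = A[j][i] / A[i][i]
            for k in range(p):
                A[j][k] -= factor * A[i][k]
            b[j] -= factor * b[i]
    # back substitution
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][k] * beta[k] for k in range(i + 1, p))) / A[i][i]
    # one reconstructed signal time series per forcing
    signals = [[beta[j + 1] * f[i] for i in range(n)] for j, f in enumerate(forcings)]
    return beta, signals

# made-up forcing series and a score series built exactly from them
f1 = [0.0, 1.0, 2.0, 3.0]   # e.g. a GHG-like trend (illustrative)
f2 = [1.0, 0.0, 1.0, 0.0]   # e.g. an oscillation index (illustrative)
y = [2.0 + 3.0 * a - 1.0 * c for a, c in zip(f1, f2)]
beta, signals = separate_signals(y, [f1, f2])
```

Stepwise regression, as used in the paper, adds a selection loop on top of this: regressors are admitted or dropped depending on their significance, and what is left over after back-transformation is the residual field.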
Zur Aktualität von Wagenscheins Schulkritik heute : das Wirklichkeits-Defizit im schulischen Lernen
(2005)
Der Magnetstein - Wozu braucht es Materialien zum Sachunterricht : Ein Gespräch mit Horst Rumpf
(2003)
Sachunterricht deals with real-world subject matter. Its fundamental didactic aim is therefore to engage with the subject matter itself, not with pictures or texts about it. Engaging with things can mean either seeking them out in reality, i.e. outside school, or bringing them into school. Three reasons nevertheless suggest also using materials in Sachunterricht: 1. There are subject matters that cannot be visited and examined: an anthill must not be destroyed, and a school class cannot be made to watch a birth. 2. Materials can prepare or accompany the engagement with subject matter. Books and pictures can supply suggestions, as well as questions or tasks. Documenting the encounter with the subject can likewise be supported by materials. 3. Subject matter is complex. Materials can reduce this complexity by containing the essentials of a given context. This concerns above all objects, devices, or simple processes. Unlike reality, these objects can give children the opportunity to handle them actively themselves. These initial theses imply a particular basic position on Sachunterricht. The discussion of materials for Sachunterricht cannot be separated from its didactics, for the manner of approaching a subject is decisive for what is actually learned about it. Therefore, in a first step, a critique of the kind of Sachunterricht found in some primary school classrooms will be presented using several examples.
The article first critically examines widespread concepts on the topic of weather and computers. It asserts a contradiction between didactic ambition and didactic implementation: the ambition is action- and experience-oriented teaching, but the implementation, so the critique, amounts to an introduction to a science that primary school pupils cannot understand. A precondition for action- and experience-oriented Sachunterricht is seen in giving pupils the opportunity to engage publicly and argumentatively with their own concepts and interpretive patterns. For this understanding of "open teaching", the computer is suitable when it is used not as a knowledge store or for documenting results, but as a medium of communication and for documenting processes. This is illustrated concretely by a teaching concept in which pupils from several schools make weather forecasts, test them against reality, and justify them.
The scientific practice of Sachunterricht can be systematically distinguished from the practice of Sachunterricht in schools. Scientific Sachunterricht and its didactics exist as an independent discourse, one that is not grounded in a composite of the various academic disciplines and their didactics. Its task is to determine the educational mission of Sachunterricht in school and university. The scientific discourse of Sachunterricht moves within the context of the discursive relations concerning child, subject matter, and world. ...
A reader of a number of publications on the didactics of Sachunterricht will notice that, for all the many differences and differentiations on didactic and pedagogical questions, there seems to be something like an agreement, more implicit than explicit, on the constitution of the object "Sache" (the subject matter). According to it, the "Sache" is that which is to be known; Sachunterricht should accordingly enable the pupil to recognize what is given. From a more recent epistemological perspective, the tenability of this basic assumption can be doubted. This doubt will be substantiated by working through several concepts of the didactics of Sachunterricht.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
(2005)
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alarm the neighboring cells as well as the whole plant to activate defense responses to limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown except for the recognition process of pathogens. We delineate the NRP-gene in soybean, which is specifically induced during this programmed cell death and contains a novel protein domain, which is commonly found in different plant proteins.
Results: The sequence analysis of the protein encoded by the NRP-gene from soybean led to the identification of a novel domain, which we named DCD because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role of these proteins in phytohormone response, embryo development, and programmed cell death triggered by pathogens or ozone.
Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins to gain further molecular insight into these processes.
Background: Osteoarthritis (OA) has a high prevalence in primary care. Conservative, guideline orientated approaches aiming at improving pain treatment and increasing physical activity, have been proven to be effective in several contexts outside the primary care setting, as for instance the Arthritis Self management Programs (ASMPs). But it remains unclear if these comprehensive evidence based approaches can improve patients' quality of life if they are provided in a primary care setting. Methods/Design: PraxArt is a cluster randomised controlled trial with GPs as the unit of randomisation. The aim of the study is to evaluate the impact of a comprehensive evidence based medical education of GPs on individual care and patients' quality of life. 75 GPs were randomised either to intervention group I or II or to a control group. Each GP will include 15 patients suffering from osteoarthritis according to the criteria of ACR. In intervention group I GPs will receive medical education and patient education leaflets including a physical exercise program. In intervention group II the same is provided, but in addition a practice nurse will be trained to monitor via monthly telephone calls adherence to GPs prescriptions and advices and ask about increasing pain and possible side effects of medication. In the control group no intervention will be applied at all. Main outcome measurement for patients' QoL is the GERMAN-AIMS2-SF questionnaire. In addition data about patients' satisfaction (using a modified EUROPEP-tool), medication, health care utilization, comorbidity, physical activity and depression (using PHQ-9) will be retrieved. Measurements (pre data collection) will take place in months I-III, starting in June 2005. Post data collection will be performed after 6 months. Discussion: Despite the high prevalence and increasing incidence, comprehensive and evidence based treatment approaches for OA in a primary care setting are neither established nor evaluated in Germany. 
If the evaluation of the presented approach reveals a clear benefit, it is planned to provide these GP-centred interventions on a much larger scale.
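The three-arm cluster design above randomises GP practices, not individual patients. A minimal sketch of such an even allocation of 75 GPs across the three arms is shown below; the function name, the fixed seed, and the block-allocation scheme are illustrative assumptions, not the trial's actual randomisation procedure.

```python
import random

ARMS = ("intervention_I", "intervention_II", "control")

def randomise_gps(gp_ids, arms=ARMS, seed=42):
    """Randomly allocate GP practices (the cluster unit) evenly across arms.

    Illustrative only: shuffles the cluster list, then assigns arms cyclically
    so group sizes stay balanced (here 25 GPs per arm for 75 GPs).
    """
    rng = random.Random(seed)
    ids = list(gp_ids)
    rng.shuffle(ids)
    return {gp: arms[i % len(arms)] for i, gp in enumerate(ids)}

allocation = randomise_gps(range(75))
counts = {arm: sum(1 for a in allocation.values() if a == arm) for arm in ARMS}
print(counts)  # 25 GPs in each of the three arms
```

Because whole practices are the unit of randomisation, the statistical analysis would additionally have to account for intra-cluster correlation; that is beyond this sketch.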
A systematic analysis of data on strangeness and pion production in nucleon–nucleon and central nucleus–nucleus collisions is presented. It is shown that at all collision energies the pion/baryon and strangeness/pion ratios indicate saturation with the size of the colliding nuclei. The energy dependence of the saturation level suggests that the transition to the Quark Gluon Plasma occurs between 15 A·GeV/c (BNL AGS) and 160 A·GeV/c (CERN SPS) collision energies. The experimental results interpreted in the framework of a statistical approach show that the effective number of degrees of freedom increases in the course of the phase transition and that the plasma created at CERN SPS energies may have a temperature of about 280 MeV (energy density ~ 10 GeV/fm^3). The presence of the phase transition can lead to a non-monotonic collision energy dependence of the strangeness/pion ratio. After an initial increase the ratio should drop to the characteristic value for the QGP. Above the transition region the ratio is expected to be collision energy independent. Experimental studies of central Pb+Pb collisions in the energy range 20–160 A·GeV/c are urgently needed in order to localize the threshold energy and study the properties of the QCD phase transition.
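The quoted temperature and energy density are mutually consistent under the ideal-gas (Stefan-Boltzmann) relation eps = (pi^2/30) g T^4 for a quark-gluon plasma. The sketch below checks this; the degeneracy factor g = 37 (16 gluon plus 21 quark degrees of freedom for two light flavours) is an assumption of this check, not a value stated in the abstract.

```python
import math

HBARC = 0.1973  # GeV*fm, used to convert GeV^4 to GeV/fm^3

# Assumed effective degrees of freedom for an ideal two-flavour QGP:
# 16 gluon d.o.f. + 2 flavours * (21/2) quark d.o.f. = 37.
g = 16 + 2 * 21 / 2
T = 0.280  # temperature in GeV, as quoted in the abstract

eps_gev4 = (math.pi**2 / 30) * g * T**4   # Stefan-Boltzmann energy density, GeV^4
eps = eps_gev4 / HBARC**3                 # convert to GeV/fm^3
print(round(eps, 1))                      # ~9.7, i.e. close to the quoted ~10 GeV/fm^3
```

So T ≈ 280 MeV indeed corresponds to roughly 10 GeV/fm^3 for these assumed degrees of freedom, matching the abstract's parenthetical value.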
On the Conflicts on the Way to Becoming a Good and a Bad Person: 2. How Does One Learn to Become Cold?
(1997)
Lecture given at "Die Lebendigkeit kritischer Gesellschaftstheorie", a workshop held on 4-6 July 2003 at the Johann Wolfgang Goethe-Universität in Frankfurt am Main on the occasion of Theodor W. Adorno's 100th birthday, coordinated by Prof. Dr. Andreas Gruschka and Prof. Dr. Ulrich Oevermann. The conference contributions are also available for purchase on CD (one lecture per CD) or as a DVD (all lectures in MP3 format). See also "Die Lebendigkeit der kritischen Gesellschaftstheorie", edited by Andreas Gruschka and Ulrich Oevermann, a documentation of the workshop on the occasion of Theodor W. Adorno's 100th birthday. Büchse der Pandora 2004, ISBN 3-88178-324-5.
To investigate the process of space charge compensation due to residual gas ionization and to study the rise of compensation experimentally, a Low Energy Beam Transport (LEBT) system consisting of an ion source, two solenoids, a decompensation electrode to generate a pulsed decompensated ion beam, and a diagnostic section was set up. The potentials on the beam axis and at the beam edge were determined from time-resolved measurements with a residual gas ion energy analyzer. A numerical simulation of self-consistent equilibrium states of the beam plasma has been developed to determine plasma parameters that are difficult to measure directly. The temporal development of the kinetic and potential energy of the compensation electrons has been analyzed using the numerically obtained results of the simulation. To investigate the compensation process, the distribution and the losses of the compensation electrons were studied as a function of time. The acquired data show that the theoretically estimated rise time of space charge compensation, neglecting electron losses, is shorter than the build-up time determined experimentally. An interpretation of the results is given to describe the process of space charge compensation.
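The "theoretically estimated rise time neglecting electron losses" mentioned above is, to leading order, the mean time for a beam ion to ionize a residual-gas molecule, tau ≈ 1/(n_gas · sigma_ion · v_beam). The sketch below evaluates this estimate; the residual-gas pressure and the ionization cross-section are order-of-magnitude assumptions for illustration, not the paper's measured parameters.

```python
import math

K_B = 1.380649e-23      # J/K, Boltzmann constant
M_P = 1.67262192e-27    # kg, proton mass
E_EV = 1.602176634e-19  # J per eV

def compensation_rise_time(E_keV, p_Pa, T_K=300.0, sigma_m2=1e-20):
    """Compensation rise time (s), neglecting electron losses.

    tau = 1 / (n_gas * sigma_ion * v_beam), with n_gas from the ideal gas law
    and a non-relativistic beam velocity. sigma_m2 ~ 1e-20 m^2 is an assumed
    order-of-magnitude ionization cross-section.
    """
    v = math.sqrt(2 * E_keV * 1e3 * E_EV / M_P)  # beam velocity, m/s
    n_gas = p_Pa / (K_B * T_K)                   # residual-gas density, m^-3
    return 1.0 / (n_gas * sigma_m2 * v)

# Assumed: 10 keV proton beam, ~1e-5 hPa (1e-3 Pa) residual gas pressure.
tau = compensation_rise_time(E_keV=10.0, p_Pa=1e-3)
print(f"{tau * 1e3:.2f} ms")  # of order a few tenths of a millisecond
```

Since this estimate ignores electron losses, the experimentally determined build-up time is expected to be longer, which is exactly the trend the abstract reports.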
Investigation of the focus shift due to compensation process for low energy ion beam transport
(2000)
In magnetic Low Energy Beam Transport (LEBT) sections, space charge compensation helps to enhance the transportable beam current and to reduce emittance growth due to space charge forces. For pulsed beams, the time necessary to establish space charge compensation is of great interest for beam transport. Particularly with regard to beam injection into the first accelerator section (e.g. an RFQ), investigation of the shift of the beam focus due to space charge compensation is very important; the results obtained help to avoid a mismatch into the first RFQ. To investigate space charge compensation due to residual gas ionization, time-resolved measurements using pulsed ion beams were performed at the LEBT system at the IAP and at the CEA-Saclay injection line. A residual gas ion energy analyser (RGIA) equipped with a channeltron was used to measure the potential distribution as a function of time and thus to estimate the rise time of compensation. For time-resolved measurements (delta t_min = 50 ns) of the radial density profile of the ion beam, a CCD camera was used. The measured data were used in a numerical simulation of self-consistent equilibrium states of the beam plasma [1] to determine plasma parameters such as the density, the temperature, and the kinetic and potential energy of the compensation electrons as a function of time. Measurements were performed using focused proton beams (10 keV, 2 mA at the IAP and 92 keV, 62 mA at CEA-Saclay) to gain a better understanding of the influence of the compensation process. An interpretation of the acquired data and the results obtained will be presented.
Influence of space charge fluctuations on the low energy beam transport of high current ion beams
(2000)
For future high current ion accelerators such as SNS, ESS, or IFMIF, the beam behaviour in low energy beam transport sections is dominated by space charge forces. Therefore, space charge fluctuations (e.g. source noise) can drastically influence the transport properties of the low energy beam transport section; losses of beam ions and emittance growth are the most severe problems. For electrostatic transport systems, either a LEBT design has to be found which is insensitive to variations of the space charge, or the origin of the fluctuations has to be eliminated. For space charge compensated transport, as proposed for ESS and IFMIF, the situation is different: no major influence on beam transport is expected for fluctuations below a cut-off frequency given by the production rate of the compensation particles. Above this frequency the fluctuations cannot be compensated by particle production alone, but redistribution of the compensation particles helps to compensate their influence. Above a second cut-off frequency, given by the density and the temperature of the compensation particles, their redistribution is too slow to reduce the influence of the space charge fluctuations. Transport simulations for the IFMIF injector including space charge fluctuations will be presented together with a determination of the cut-off frequencies. The results will be compared with measurements of the rise time of space charge compensation.
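The two cut-off frequencies described above can be given rough numerical form: the lower one, f1, is set by the inverse of the compensation build-up time, while an upper bound for the redistribution response is the electron plasma frequency of the compensation electrons. The numbers below (rise time ~0.3 ms, electron density 1e14 m^-3) are illustrative assumptions, not IFMIF design values.

```python
import math

EPS0 = 8.8541878128e-12  # F/m, vacuum permittivity
M_E = 9.1093837015e-31   # kg, electron mass
E_C = 1.602176634e-19    # C, elementary charge

def plasma_frequency(n_e_m3):
    """Electron plasma frequency in Hz for electron density n_e (m^-3)."""
    omega = math.sqrt(n_e_m3 * E_C**2 / (EPS0 * M_E))
    return omega / (2 * math.pi)

# Lower cut-off: inverse of an assumed compensation rise time of ~0.3 ms.
f1 = 1.0 / 0.3e-3                 # ~3 kHz
# Upper cut-off scale: plasma frequency for an assumed density of 1e14 m^-3.
f2 = plasma_frequency(1e14)       # ~9e7 Hz
print(f"f1 = {f1:.0f} Hz, f2 = {f2:.2e} Hz")
```

For these assumed values the two scales are separated by more than four orders of magnitude, which is why fluctuations over a wide intermediate band can still be damped by electron redistribution rather than by production alone.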
New results on the production of Xi and Omega hyperons in Pb+Pb interactions at 40 A GeV and Lambda at 30 A GeV are presented. Transverse mass spectra as well as rapidity spectra of these hyperons are shown and compared to previously measured data at different beam energies. The energy dependence of hyperon production (4Pi yields) is discussed. Additionally, the centrality dependence of Xi- production at 40 A GeV is presented.
First results on the production of Xi- and Anti-xi hyperons in Pb+Pb interactions at 40 A GeV are presented. The Anti-xi/Xi- ratio at midrapidity is studied as a function of collision centrality. The ratio shows no significant centrality dependence within statistical errors; it ranges from 0.07 to 0.15. The Anti-xi/Xi- ratio for central Pb+Pb collisions increases strongly with the collision energy.
High-perveance negative ion beams with low emittance are essential for several next-generation particle accelerators (e.g. spallation sources such as ESS [1] and SNS [2]). The extraction and transport of these beams involve intrinsic difficulties different from those of positive ion beams; limitation of the beam current and emittance growth have to be avoided. To fulfill the requirements of these projects, a detailed knowledge of the physics of beam formation, of the interaction of the H- ions with the residual gas, and of transport is essential. A compact cesium-free H- volume source delivering a low energy, high-perveance beam (6.5 keV, 2.3 mA, perveance K = 0.0034) has been built to study the fundamental physics of beam transport and will be integrated into the existing LEBT section in the near future. First measurements of the interaction between the ion beam and the residual gas will be presented together with the experimental setup and preliminary results.