Paläozoische Notizen
(1907)
Under this title, minor observations on Palaeozoic fossils will be described. These will mainly be specimens from the Senckenberg Museum, although pieces from other collections will occasionally be considered as well. The first four fossils described here come from the Upper Koblenz beds (Oberkoblenzschichten) of Prüm in the Eifel, where the author was able to make extensive collections in the summer of 1905.
Sinopa rapax Leidy
(1913)
Die Steinauer Höhle
(1914)
The Würmian aeolian sediments overlying the last-interglacial soil (a Parabraunerde, i.e. luvisol, partly pseudogleyed) can be divided by characteristic soil horizons into three sections (Lower, Middle and Upper Würm). The Lower Würm is characterized by humus zones; the Middle Würm by brown loamification zones up to 1.1 m thick alongside several "wet soils" (Naßböden); and the Upper Würm by several weakly developed, thin browning zones and "wet soils". The most important marker horizon of the Upper Würm is the thin Kärlich tuff band, which has recently also been found in northern Hesse. Finally, the stratigraphic scheme established for Hesse is compared with the Würm loess subdivisions in other parts of Europe and a correlation is attempted.
In the Monte Cavallo group the author found pieces of fossil wood in glacial-lake sediments, for which ¹⁴C dating yielded an age of 29,350 ± 460 years before AD 1950. The silty deposits, exposed at about 900 m, almost 80 m above the present river bed of the Torrente Caltea, rest on gravels and are overlain by moraine.
On the basis of its stratigraphic position and its ¹⁴C age, the wood (Picea abies or Larix) can be assigned to a temporal equivalent of the Paudorf Interstadial. This find of plant macro-remains thus provides the first absolute date for the Paudorf Interstadial in the Southern Alps and northern Italy.
The development of Quaternary hollow forms in Franconia is described; their reconstruction is possible with the aid of loess-stratigraphic methods (fossil soils, tuff bands, reworking zones, etc.). Many of the forms prove to have had larger precursor forms of pre-Würmian age. In several examples the development during the Würm can be traced in a particularly instructive way. At the beginning of the Würm, in the lower Middle Würm and in the lower Upper Würm, erosion and redeposition dominated at times; in the upper Middle Würm and the upper Upper Würm, aeolian loess sedimentation prevailed. These results agree well with the findings already known from other Central European loess regions. Using the dry-valley system of Helmstadt, the development of hollow forms whose initiation reaches back into the older Pleistocene is described.
In the present study an attempt was made to subdivide the Riss loess between the first and second fossil Parabraunerde (luvisol) on the basis of weaker soil formations and intercalated erosion phases. In the younger Riss, strong loess sedimentation prevailed, with weak periglacial wet soils (Naßböden) forming in at least six cold-humid intervals; this series of wet soils was named the Bruchköbel soils (B). In the youngest Riss loess, a few decimetres below the Eemian soil, the Kriftel tuff (cf. SEMMEL 1968) is intercalated as a tephrochronological marker horizon. The middle part of the Riss loess profile is characterized by more humid climatic intervals with strong slope-wash phases, which in most profiles have produced considerable unconformities. At the base of the few complete Riss loess profiles, mainly in Hesse, at most two chernozems occur above the usually truncated second fossil Parabraunerde; SEMMEL (1968) termed these the Weilbach humus zones. Immediately above these chernozems follows the Ostheim zone, a solifluction deposit of reworked soil material from the underlying soils. Overall, the climatic sequence reconstructed from the Riss soils shows, apart from minor deviations, surprising parallels to the palaeopedological-climatic subdivision of the Würm glacial.
As first results of the palaeomagnetic investigations currently under way in the Rhine-Main region, findings from several stratigraphically important profiles are reported. While the remanence values from Early Pleistocene clays of the Kelsterbach terrace are, with one exception, reversed, the transition from reversed to normal magnetization was found in the motorway cut at Abenheim (NW of Worms) and in the loess profile of the brickyard pit at Bad Soden am Taunus. In the Bad Soden profile the boundary between the Matuyama and Brunhes epochs lies below the 6th fossil Bt horizon, and the Jaramillo event below the 7th.
Li6UO6 has a reversible phase transformation at 680 °C and decomposes above about 850 °C. At high pressure the low-temperature modification becomes unstable because of an invariant point in the system Li2O–Li4UO5 at approximately 13 kbar and 620 °C. β-Li6UO6 has a triclinic unit cell with a = 5.203, b = 5.520, c = 5.536 Å, α = 114.7°, β = 120.7° and γ = 75.5°. The close relationship between the crystal structures of Li6TeO6 and Li6UO6 is also suggested by similar infrared spectra and by the partial solid solution Li6UO6–Li6TeO6.
The cranium of a fossil hominid of the Homo sapiens sapiens group was dated both relatively (geologically) and absolutely, by radiocarbon and amino-acid methods, to approximately 31,000 years B.P. Other absolute and relative dates on molluscs and mammoth teeth in overlying younger strata give 18,000-21,000 and 16,000 years B.P. Geomorphological and geophysical datings are thus in good agreement. This individual is the oldest dated representative of Homo sapiens sapiens known from Central Europe and its earliest inhabitant belonging to that group.
From the area of the Weichselian glaciation in Poland, widespread thin periglacial cover beds are described. They generally show an aeolian influence and thereby differ from the underlying material. They are Late Glacial formations of the kind long known from the young-moraine area of the GDR. Similar substrates have also been found in the northern Alpine foreland.
In the loess profile of the Glos brickyard near Lisieux, a sequence of fossil soils is exposed that can be correlated well with the Hessian loess stratigraphy. Above the last-interglacial soil, separated by an unconformity, lie the Lohne soil and the E1, E2 and E4 wet soils (Naßböden). The last of these is often regarded as an equivalent of the "Sol de Kesselt". This area thus also shows that this soil cannot correspond stratigraphically to the Lohne soil.
The public climate debate seems to be taking on a life of its own. Detached from the findings of the specialists, some speak of a "climate catastrophe" that will soon hit us with full force unless we immediately change everything; panic is their chosen means of attracting attention. Others see in the "climate swindle" a pretext for research funding and for additional tax burdens on industry; their strategy is confusion and trivialization. Fixating on such extreme positions will certainly not do justice to the challenges of the future. It is high time for a more objective discussion and for a clarifying contribution to the confusion surrounding "climate".
Simulation of global temperature variations and signal detection studies using neural networks
(1998)
The concept of neural network models (NNM) is a statistical strategy which can be used if a superposition of forcing mechanisms leads to observable effects and if a sufficient related observational database is available. Compared with multiple regression analysis (MRA), the main advantages are that an NNM is an appropriate tool also in the case of nonlinear cause-effect relations and that interactions of the forcing mechanisms are allowed. Compared with more sophisticated methods such as general circulation models (GCM), the main advantage is that details of the physical background, such as feedbacks, may be unknown: neural networks learn from observations, which reflect feedbacks implicitly. The disadvantage, of course, is that the physical background is neglected. In addition, the results prove to depend sensitively on the network architecture, e.g. the number of hidden neurons or the initialization of the learning parameters. We used a supervised backpropagation network (BPN) with three neuron layers, an unsupervised Kohonen network (KHN), and a combination of both called a counterpropagation network (CPN). These concepts are tested with respect to their ability to simulate the observed global as well as hemispheric mean surface air temperature annual variations 1874-1993 when parameter time series of the following forcing mechanisms are incorporated: equivalent CO2 concentrations, tropospheric sulfate aerosol concentrations (both anthropogenic), volcanism, solar activity, and ENSO (all natural). It turns out that in this way up to 83% of the observed temperature variance can be explained, significantly more than by MRA. Including the North Atlantic Oscillation does not improve these results. On a global average, the greenhouse gas (GHG) signal so far is assessed to be 0.9-1.3 K (warming) and the sulfate signal 0.2-0.4 K (cooling), results which closely match the GCM findings published in the recent IPCC Report.
The related signals of the natural forcing mechanisms considered cover amplitudes of 0.1-0.3 K. Our best NNM estimate of the GHG doubling signal amounts to 2.1 K (equilibrium) or 1.7 K (transient), respectively.
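A minimal sketch of the approach described above, with synthetic stand-in data rather than the observed 1874-1993 record: a small backpropagation network is trained on toy forcing series, and its explained variance is compared with a multiple regression fit. All series, layer sizes, and learning parameters here are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120                                   # e.g. 120 "years" of annual means
ghg = np.linspace(0.0, 1.0, n)            # stand-in for equivalent-CO2 forcing
sol = np.sin(np.linspace(0.0, 8.0 * np.pi, n))  # stand-in for a cyclic natural forcing
X = np.column_stack([ghg, sol])
# Nonlinear toy target: the ghg*sol interaction is invisible to an additive linear model
y = 0.8 * ghg + 0.5 * ghg * sol + 0.05 * rng.standard_normal(n)

# Linear reference: multiple regression analysis (MRA)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_lin = A @ coef

# Tiny backpropagation network: 2 inputs -> 12 tanh units -> 1 linear output
W1 = 0.5 * rng.standard_normal((2, 12)); b1 = np.zeros(12)
W2 = 0.5 * rng.standard_normal((12, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(30000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    out = (h @ W2 + b2).ravel()
    err = out - y                          # gradient of 0.5 * mean squared error
    gW2 = h.T @ err[:, None] / n
    gb2 = err.mean()
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)  # backpropagate through tanh
    gW1 = X.T @ dh / n
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
y_nn = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

def explained_variance(y_true, y_hat):
    return 1.0 - np.var(y_true - y_hat) / np.var(y_true)

ev_lin = explained_variance(y, y_lin)
ev_nn = explained_variance(y, y_nn)
```

Because the hidden layer can represent the multiplicative interaction that the purely additive regression cannot, the network's explained variance exceeds the linear reference on this toy problem, mirroring the NNM-versus-MRA comparison in the abstract.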
The view that the substrate of the Lockerbraunerde (loose brown earth) on the Oberwald of the Vogelsberg is exclusively an aeolian sediment of the Younger Dryas (Jüngere Tundrenzeit) is contested. Various findings, above all ¹⁴C dates, rather support the conclusion that large parts of the substrate of many Lockerbraunerden are Holocene anthropogenic colluvia. By contrast, the finding of the same (Younger Dryas) age for the Lockerbraunerde substrate and the cover debris (Decksediment) is confirmed.
When delegations from almost every country in the world meet again at the climate summit in The Hague [more precisely, at what is now the 6th Conference of the Parties to the United Nations climate convention] to discuss climate protection measures, one question always resonates: are such measures really necessary? Should we not simply wait until we know more, perhaps even everything? ...
The increase in the concentration of CO2 and other "greenhouse gases" in the atmosphere is beyond doubt, and just as undoubtedly the climate responds to it. Christian-Dietrich Schönwiese, professor of meteorological environmental research and climatology at the University of Frankfurt am Main, sees an urgent need for political action and at the same time argues for a more objective debate on climate protection.
Crustal structure at the western end of the North Anatolian Fault Zone from deep seismic sounding
(2001)
The first deep seismic sounding experiment in northwestern Anatolia was carried out in October 1991 as part of the "German-Turkish Project on Earthquake Prediction Research" in the Mudurnu area of the North Anatolian Fault Zone. The experiment was a joint enterprise of the Institute of Meteorology and Geophysics of Frankfurt University, the Earthquake Research Institute (ERI) in Ankara, and the Turkish Oil Company (TPAO). Two orthogonal profiles, each 120 km in length with a crossing point near Akyazi, were covered in succession by 30 short-period tape-recording seismograph stations at 2 km station spacing. Twelve shots, with charge sizes between 100 and 250 kg, were fired, and 342 of the 360 seismograms were used for evaluation. By coincidence an mb = 4.5 earthquake located below Imroz Island was also recorded and provided additional information on the Moho and the sub-Moho velocity. A ray-tracing method originally developed by Weber (1986) was used for travel-time inversion. From a compilation of all data, two generalized crustal models were derived, one with velocity gradients within the layers and one with constant layer velocities. The latter consists of a sediment cover of about 2 km with Vp ≈ 3.6 km/s, an upper crystalline crust down to 13 km with Vp ≈ 5.9 km/s, a middle crust down to 25 km depth with Vp ≈ 6.5 km/s, a lower crust down to the Moho at 39 km depth with Vp ≈ 7.0 km/s, and Vp ≈ 8.05 km/s below the Moho. The structure of the individual profiles differs slightly. The sediment cover is thickest in the Izmit-Sapanca trough and in the Akyazi basin. Of particular interest is a step of about 4 km in the lower crust near Lake Sapanca and probably an even larger one in the Moho (derived from the Imroz earthquake data). After the catastrophic Izmit earthquake of 17 August 1999, this significant heterogeneity in crustal structure appears in a new light with regard to the possible cause of that earthquake.
Heterogeneities in structure are frequently also heterogeneities in strength and stress that impede or even lock rupture. The Izmit earthquake is discussed in relation to a large stepover or jog at the North Anatolian Fault.
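The constant-velocity model quoted above lends itself to a quick plausibility check: in a flat-layer approximation, the head-wave (Pn) travel time at offset x follows the standard refraction formula t = x/v_ref + Σ 2 h_i √(1/v_i² − 1/v_ref²). The sketch below only evaluates that textbook formula with the published layer values; it is not the ray-tracing inversion used in the study.

```python
import numpy as np

# Layers (thickness in km, Vp in km/s) from the constant-velocity model above;
# thicknesses follow from the quoted interface depths 2 / 13 / 25 / 39 km.
layers = [(2.0, 3.6), (11.0, 5.9), (12.0, 6.5), (14.0, 7.0)]
v_moho = 8.05                      # sub-Moho velocity, km/s

def head_wave_time(x, layers, v_ref):
    """Arrival time (s) of the head wave refracted along the top of the
    medium with velocity v_ref, at offset x km (flat-layer approximation)."""
    t = x / v_ref
    for h, v in layers:
        if v >= v_ref:             # stop at the refractor itself
            break
        t += 2.0 * h * np.sqrt(1.0 / v**2 - 1.0 / v_ref**2)
    return t

pn_time = head_wave_time(120.0, layers, v_moho)   # Pn at the far end of a profile
direct_time = 120.0 / layers[0][1]                # direct wave in the sediments
```

At the 120 km profile length, the Moho head wave arrives well ahead of the direct wave in the slow sediment cover, which is what makes Pn observable as a first arrival at these offsets.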
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods are thus not suited to describing its nonlinearities, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Following up an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear backpropagation neural network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent concentrations, called greenhouse gases, GHG, and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, the free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN far exceeds that of a linear model; on the global scale it amounts to 84% explained variance. Additionally, the results of the nonlinear algorithm are physically plausible in both amplitude and time structure. Nevertheless they cover a broad range; e.g. the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the time period 1856-1998. The simulated amplitudes are, however, situated within the discussed range (HOUGHTON et al., 2001), and the combined anthropogenic effect corresponds to the observed increase in temperature over the examined period. Beyond this, the BPN succeeds in detecting anthropogenically induced climate change at a high significance level.
Therefore the concept of neural networks can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
Perhaps no one outside the specialist sciences would have taken an interest in the global climate problem were it not for two explosive, interlinked facts. Humanity is highly dependent on the favour of the climate; what happens to our climate therefore cannot be a matter of indifference to us. And: humanity has increasingly begun to influence the climate itself. From this, a special responsibility falls to us all. ...
Excitation functions for quasi-elastic scattering have been measured at backward angles for the systems 32,34S + 197Au and 32,34S + 208Pb at energies spanning the Coulomb barrier. Representative distributions, sensitive to the low-energy part of the fusion barrier distribution, have been extracted from the data. For the fusion reactions of 32,34S with 197Au, couplings related to the nuclear structure of 197Au appear to be dominant in shaping the low-energy part of the barrier distribution. For the system 32S + 208Pb the barrier distribution is broader and extends further to lower energies than in the case of 34S + 208Pb. This is consistent with the interpretation that the neutron pick-up channels are energetically more favoured in the 32S-induced reaction and therefore couple more strongly to the relative motion. It may also be due to the increased collectivity of 32S compared with 34S.
An easy-to-use model to evaluate conductivities at high and middle latitudes in the height range 70–100 km is presented. It is based on electron density profiles obtained with the EISCAT VHF radar during 11 years and on the neutral atmospheric model MSIS95. The model uses solar zenith angle, geomagnetic activity and season as input parameters. It was mainly constructed to study the properties of Schumann resonances that depend on such conductivity profiles.
Turbulent fluxes of carbonyl sulfide (COS) and carbon disulfide (CS2) were measured over a spruce forest in Central Germany using the relaxed eddy accumulation (REA) technique. A REA sampler was developed and validated using simultaneous measurements of CO2 fluxes by REA and by eddy correlation. REA measurements were conducted during six campaigns covering spring, summer, and fall between 1997 and 1999. Both uptake and emission of COS and CS2 by the forest were observed, with deposition occurring mainly during the sunlit period and emission mainly during the dark period. On average, however, the forest acts as a sink for both gases. The average fluxes for COS and CS2 are −93 ± 11.7 pmol m⁻² s⁻¹ and −18 ± 7.6 pmol m⁻² s⁻¹, respectively. The fluxes of both gases appear to be correlated with photosynthetically active radiation and with the CO2 and H2O fluxes, supporting the idea that the air-vegetation exchange of both gases is controlled by stomata. An uptake ratio COS/CO2 of 10 ± 1.7 pmol μmol⁻¹ has been derived from the regression line for the correlation between the COS and CO2 fluxes. This uptake ratio, if representative of the global terrestrial net primary production, would correspond to a sink of 2.3 ± 0.5 Tg COS yr⁻¹.
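The quoted uptake ratio is simply the slope of the regression line between simultaneously measured COS and CO2 fluxes. A sketch with synthetic fluxes (the numbers, units and noise level are illustrative assumptions, not campaign data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic flux pairs: CO2 in umol m-2 s-1, COS in pmol m-2 s-1, generated
# around an assumed "true" uptake ratio of 10 pmol COS per umol CO2.
co2_flux = -np.linspace(2.0, 20.0, 30)
cos_flux = 10.0 * co2_flux + rng.normal(0.0, 15.0, co2_flux.size)

# The uptake ratio is the slope of the COS-vs-CO2 flux regression line.
slope, intercept = np.polyfit(co2_flux, cos_flux, 1)
```

With both fluxes negative (toward the surface) during uptake, a positive slope recovers the assumed ratio to within the regression uncertainty.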
Measurements of OH, the sum of peroxy radicals (ROx), non-methane hydrocarbons (NMHCs) and various other trace gases were made at the Meteorological Observatory Hohenpeissenberg in June 2000. The data from an intensive measurement period characterised by high solar insolation (18-21 June) are analysed. The maximum midday OH concentration ranged between 4.5 × 10⁶ molecules cm⁻³ and 7.4 × 10⁶ molecules cm⁻³. The maximum total ROx mixing ratio increased from about 55 pptv on 18 June to nearly 70 pptv on 20 and 21 June. A total of 64 NMHCs, including isoprene and monoterpenes, were measured every 1 to 6 hours. The oxidation rate of the NMHCs by OH was calculated and reached a total of over 14 × 10⁶ molecules cm⁻³ s⁻¹ on two days. A simple photostationary-state balance model was used to simulate the ambient OH and ROx concentrations with the measured data as input. The model was able to reproduce the main features of the diurnal profiles of both OH and ROx. The model results proved to be most sensitive to assumptions about the mixing ratio of formaldehyde (HCHO), which was included as a proxy for carbonyl compounds, and about the partitioning between HO2 and RO2. The measured OH concentration and ROx mixing ratios were reproduced well by assuming the presence of 3 ppbv HCHO and an HO2/RO2 ratio between 1:1 and 1:2. The most important source of OH, and conversely the greatest sink for ROx, was the recycling of HO2 radicals to OH. This reaction was responsible for the recycling of more than 45 × 10⁶ molecules cm⁻³ s⁻¹ on two days. The most important sink for OH, and the largest source of ROx, was the oxidation of NMHCs, in particular of isoprene and the monoterpenes.
The climate system can be regarded as a dynamic nonlinear system; traditional linear statistical methods therefore fail to model its nonlinearities, and alternative statistical techniques must be found. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for some years now. Most authors use the standard backpropagation network (BPN) for their investigations, although this specific model architecture carries a certain risk of over- or underfitting. Here we instead use the so-called Cauchy machine (CM) with an implemented fast simulated annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a given cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, the free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of the multiple linear regression model; the simulation quality on the global scale amounts to up to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994, updated by Jones, 1999a) for the examined period 1856-1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
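A minimal sketch of the fast-simulated-annealing idea behind the Cauchy machine: Cauchy-distributed jumps combined with a fast T(k) = T0/k cooling schedule keep occasional long moves available, so the search can escape local minima. The cost function, schedule constants and step count below are toy assumptions, not the network's training error.

```python
import numpy as np

def fsa_minimize(cost, x0, t0=2.0, steps=50000, seed=0):
    """Fast simulated annealing: Cauchy-distributed jumps, fast cooling
    T(k) = t0 / k (Szu, 1986), and Metropolis acceptance."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    best, fbest = x.copy(), fx
    for k in range(1, steps + 1):
        t = t0 / k                                   # fast cooling schedule
        cand = x + t * rng.standard_cauchy(x.size)   # heavy-tailed jump
        fc = cost(cand)
        # downhill moves always accepted, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

def cost(v):
    # Rastrigin-like multimodal toy cost; global minimum 0 at the origin
    return float(np.sum(v * v - np.cos(3.0 * np.pi * v) + 1.0))

best, fbest = fsa_minimize(cost, x0=[2.5, -2.5])
```

Tracking the best-ever state is what matters here: even after the temperature has collapsed, the heavy Cauchy tails still propose occasional distant candidates, so the recorded minimum keeps improving across basins.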
Observed global and European spatiotemporal fields of surface air temperature, mean sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors, such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and to known internal climate mechanisms, such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporal fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, plus residuals. Finally, a back-transformation yields the spatiotemporal patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance, including natural variability; this test answers the question of whether a significant anthropogenic climate change is visible in the observed data. Second, the anthropogenic signal is tested against the climate noise component only; this test answers the question of whether the anthropogenic signal stands out among the other signals in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed climate (first test).
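The three analysis steps described above can be sketched on synthetic data. In this illustration, ordinary least squares stands in for the stepwise regression and a single forcing series for the full predictor set; the field, forcing, and noise level are all artificial assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
nt, nx = 100, 12                         # time steps x grid points
forcing = np.linspace(0.0, 1.0, nt)      # stand-in for one external forcing series
pattern = rng.standard_normal(nx)        # its (hypothetical) spatial fingerprint
field = np.outer(forcing, pattern) + 0.3 * rng.standard_normal((nt, nx))

# Step 1: PCA of the centered field via singular value decomposition
anom = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U * s                              # principal-component time series
eofs = Vt                                # corresponding spatial patterns

# Step 2: regress every PC time series on the forcing
A = np.column_stack([np.ones(nt), forcing - forcing.mean()])
signal_pcs = np.empty_like(pcs)
for j in range(pcs.shape[1]):
    coef, *_ = np.linalg.lstsq(A, pcs[:, j], rcond=None)
    signal_pcs[:, j] = A @ coef          # forcing-explained part of this PC

# Step 3: back-transform the fitted PCs into a spatiotemporal signal field
signal_field = signal_pcs @ eofs
ev = 1.0 - np.var(anom - signal_field) / np.var(anom)
```

Because PCA and the back-transformation are both linear, the residual field `anom - signal_field` is exactly the part of the observations not attributable to the forcing regressors, which is the quantity the two significance tests above are applied to.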
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899-1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In a second step, time series components such as trends are removed to obtain deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It turns out that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons, especially autumn, also show this increase, in Germany an insignificant decrease is found in the summer and autumn seasons. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and in the Weibull probability density function parameters, respectively.
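The two counting approaches can be sketched on a synthetic monthly series with an imposed wetting trend (the threshold choice, trend size and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1200                                  # e.g. 100 years of monthly totals
trend = np.linspace(0.0, 1.0, n)
series = 50.0 + 10.0 * rng.standard_normal(n) + 20.0 * trend  # imposed wetting trend

# Approach 1: count exceedances of a high threshold in each half of the record
threshold = np.percentile(series, 95)     # "extremely wet month" threshold
count_first = int(np.sum(series[: n // 2] > threshold))
count_second = int(np.sum(series[n // 2:] > threshold))

# Approach 2: remove the trend component first, then count again
detrended = series - 20.0 * trend
det_thr = np.percentile(detrended, 95)
det_first = int(np.sum(detrended[: n // 2] > det_thr))
det_second = int(np.sum(detrended[n // 2:] > det_thr))
```

With the trend present, the exceedances pile up in the second half of the record; after detrending they split roughly evenly, which is how removing time series components isolates the cause of the observed change in extremes.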
Subvisible cirrus clouds (SVCs) may contribute to dehydration close to the tropical tropopause. The higher and colder SVCs are, and the larger their ice crystals, the more likely they represent the last efficient point of contact of the gas phase with the ice phase and, hence, the last dehydrating step before the air enters the stratosphere. The first simultaneous in situ and remote sensing measurements of SVCs were taken during the APE-THESEO campaign in the western Indian Ocean in February/March 1999. The observed clouds, termed Ultrathin Tropical Tropopause Clouds (UTTCs), belong to the geometrically and optically thinnest large-scale clouds in the Earth's atmosphere. Individual UTTCs may exist for many hours as an only 200-300 m thick cloud layer just a few hundred meters below the tropical cold-point tropopause, covering up to 10⁵ km². With temperatures as low as 181 K these clouds are prime representatives for defining the water mixing ratio of air entering the lower stratosphere.
Mechanisms by which subvisible cirrus clouds (SVCs) might contribute to dehydration close to the tropical tropopause are not well understood. Recently, Ultrathin Tropical Tropopause Clouds (UTTCs) with optical depths around 10⁻⁴ have been detected in the western Indian Ocean. These clouds cover thousands of square kilometers as a 200-300 m thick, distinct and homogeneous layer just below the tropical tropopause. In their condensed phase UTTCs contain only 1-5% of the total water and essentially no nitric acid. A new cloud stabilization mechanism is required to explain this small fraction of condensed water and the small vertical thickness of the clouds. This work suggests a mechanism that forces the particles into a thin layer, based on an upwelling of the air of a few mm/s balancing the sedimentation of the ice particles, with supersaturation with respect to ice above and subsaturation below the UTTC. In situ measurements suggest that these requirements are fulfilled. The basic physical properties of this mechanism are explored by means of a single-particle model. Comprehensive 1-D cloud simulations demonstrate this stabilization mechanism to be robust against rapid temperature fluctuations of ±0.5 K. However, rapid warming (ΔT > 2 K) leads to evaporation of the UTTC, while rapid cooling (ΔT < −2 K) leads to destabilization of the particles, with the potential for significant dehydration below the cloud.
Measurements of OH, total peroxy radicals, non-methane hydrocarbons (NMHCs) and various other trace gases were made at the Meteorological Observatory Hohenpeissenberg in June 2000. The data from an intensive measurement period characterised by high solar insolation (18-21 June) are analysed. The maximum midday OH concentration ranged between 4.5 × 10⁶ molecules cm⁻³ and 7.4 × 10⁶ molecules cm⁻³. The maximum total ROx (ROx = OH + RO + HO2 + RO2) mixing ratio increased from about 55 pptv on 18 June to nearly 70 pptv on 20 and 21 June. A total of 64 NMHCs, including isoprene and monoterpenes, were measured every 1 to 6 hours. The oxidation rate of the NMHCs by OH was calculated and reached a total of over 14 × 10⁶ molecules cm⁻³ s⁻¹ on two days. A simple photostationary-state balance model was used to simulate the ambient OH and peroxy radical concentrations with the measured data as input. This approach was able to reproduce the main features of the diurnal profiles of both OH and peroxy radicals. The balance equations were used to test the effect of the assumptions made in this model. The results proved to be most sensitive to assumptions about the impact of unmeasured volatile organic compounds (VOC), e.g. formaldehyde (HCHO), and about the partitioning between HO2 and RO2. The measured OH concentration and peroxy radical mixing ratios were reproduced well by assuming the presence of 3 ppbv HCHO as a proxy for oxygenated hydrocarbons, and an HO2/RO2 ratio between 1:1 and 1:2. The most important source of OH, and conversely the greatest sink for peroxy radicals, was the recycling of HO2 radicals to OH. This reaction was responsible for the recycling of more than 45 × 10⁶ molecules cm⁻³ s⁻¹ on two days. The most important sink for OH, and the largest source of peroxy radicals, was the oxidation of NMHCs, in particular of isoprene and the monoterpenes.
We report measurements of the deuterium content of molecular hydrogen (H2) obtained from a suite of air samples collected during a stratospheric balloon flight between 12 and 33 km at 40° N in October 2002. Strong deuterium enrichments of up to 400 permil versus Vienna Standard Mean Ocean Water (VSMOW) are observed, while the H2 mixing ratio remains virtually constant. Thus, as hydrogen is processed through the H2 reservoir in the stratosphere, deuterium accumulates in H2. Using box model calculations, we investigated the effects of H2 sources and sinks on the stratospheric enrichments. The results show that considerable isotope enrichment must take place in the production of H2 from CH4, i.e. deuterium is transferred preferentially to H2 during the CH4 oxidation sequence. This supports recent conclusions from tropospheric H2 isotope measurements, which show that H2 produced photochemically from CH4 and non-methane hydrocarbons must be enriched in deuterium to balance the tropospheric hydrogen isotope budget. In the absence of further data on isotope fractionation in the individual reaction steps of the CH4 oxidation sequence, this effect cannot be investigated further at present. Our measurements imply that molecular hydrogen has to be taken into account when the hydrogen isotope budget of the stratosphere is investigated.
Out of the need to "sustainably secure the functions of the soil" (§1 BBodSchG), and thus to integrate precautionary soil protection into planning processes, a new soil protection concept was developed. It is based on a differentiated yet transparent soil evaluation. The difficulty with soil evaluation is that something must be evaluated for which, depending on the question at hand, new objectives have to be defined again and again. The soil evaluation is therefore grounded in a system of objectives that clearly defines the protection goals and makes the evaluation traceable. For the soil protection concept, important criteria are presented from the multitude of possible ones, from which those essential with respect to this system of objectives can be selected. To obtain meaningful data for these criteria, the soil evaluation draws on pedological as well as landscape-genetic and geomorphological relationships. The actual evaluation then proceeds in three steps: first, an evaluation of the individual criteria; second, an aggregation according to the soil functions (habitat function, regulation function, information function), the intrinsic value of the soil (worthiness of protection), and its sensitivity and endangerment (need for protection). In the third step these assessments are combined into a weighted, verbal-argumentative overall evaluation of the worthiness of and need for protection. The evaluation procedure also reveals conflicts of objectives between the different assets to be protected. Protection measures then follow stringently from the premises previously set in the system of objectives, i.e. objectives and measures are chosen on justified grounds, stand in an overall ecological context, and are readily traceable. The new soil protection concept presented here is suitable for different planning levels. It can be applied in different natural regions, can determine different protection goals with the help of the system of objectives, and can thus take into account the diversity of natural regions in an area as well as the diversity of opinions on what precautionary soil protection should mean.
We have used the SLIMCAT 3-D off-line chemical transport model (CTM) to quantify the Arctic chemical ozone loss in the year 2002/2003 and compare it with similar calculations for the winters 1999/2000 and 2003/2004. Recent changes to the CTM have improved the model's ability to reproduce polar chemical and dynamical processes. The updated CTM uses σ-θ as a vertical coordinate which allows it to extend down to the surface. The CTM has a detailed stratospheric chemistry scheme and now includes a simple NAT-based denitrification scheme in the stratosphere.
In the model runs presented here the model was forced by ECMWF ERA-40 and operational analyses. The model used 24 levels extending from the surface to ~55 km and a horizontal resolution of either 7.5°×7.5° or 2.8°×2.8°. Two different radiation schemes, MIDRAD and the CCM scheme, were used to diagnose the vertical motion in the stratosphere. Based on tracer observations from balloons and aircraft, the more sophisticated CCM scheme gives a better representation of vertical transport in this model, which includes the troposphere. The higher-resolution model generally produces larger chemical O3 depletion, which agrees better with observations.
The CTM results show that very early chemical ozone loss occurred in December 2002 due to extremely low temperatures and early chlorine activation in the lower stratosphere. Thus, chemical loss in this winter started earlier than in the other two winters studied here. In 2002/2003 the local polar ozone loss in the lower stratosphere was ~40% before the stratospheric final warming. Larger ozone loss occurred in the cold year 1999/2000 which had a persistently cold and stable vortex during most of the winter. For this winter the current model, at a resolution of 2.8°×2.8°, can reproduce the observed loss of over 70% locally. In the warm and more disturbed winter 2003/2004 the chemical O3 loss was generally much smaller, except above 620 K where large losses occurred due to a period of very low minimum temperatures at these altitudes.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in situ observations from the high-altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4×10⁻⁶ cm⁻³ h⁻¹ in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, denitrification has only a moderate impact (≤10%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
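The denitrification scheme described above rests on NAT particles growing large enough to sediment out of the polar vortex. As a rough illustration of why micrometre-sized NAT particles redistribute HNO3 efficiently, a Stokes fall-speed estimate can be used; the sketch below neglects the slip (Cunningham) correction, which is significant at stratospheric pressures, and all numbers are illustrative assumptions rather than CLaMS parameters:

```python
# Stokes terminal velocity of a small NAT sphere in air -- a simplified
# stand-in for the sedimentation step of a Lagrangian denitrification
# scheme. Slip correction omitted for brevity; values are illustrative.

def stokes_velocity(radius_m, rho_particle=1620.0, mu_air=1.27e-5, g=9.81):
    """Terminal fall speed (m/s) of a small sphere in still air.

    radius_m     : particle radius in metres
    rho_particle : assumed NAT density, ~1620 kg m^-3
    mu_air       : dynamic viscosity of air near 190 K (Pa s)
    """
    return 2.0 * radius_m**2 * g * rho_particle / (9.0 * mu_air)

# A 7-micron NAT particle falls on the order of 1 km per day, enough to
# carry HNO3 from ~500-550 K down to the levels below ~450 K where the
# abstract reports a corresponding NOy increase.
v = stokes_velocity(7e-6)            # m/s
fall_per_day_km = v * 86400 / 1000.0  # ~1.2 km per day
```

Because the fall speed grows with the square of the radius, only the few particles that nucleate and keep growing along cold trajectories contribute substantially to the downward HNO3 flux, which is why the assumed nucleation rate matters so much in the simulations.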
Balloon-borne measurements of CFC-11 (on flights of the DIRAC in situ gas chromatograph and the DESCARTES grab sampler), ClO and O3 were made during the 1999/2000 winter as part of the SOLVE-THESEO 2000 campaign. Here we present the CFC-11 data from nine flights and compare them first with data from other instruments which flew during the campaign and then with the vertical distributions calculated by the SLIMCAT 3-D CTM. We calculate ozone loss inside the Arctic vortex between late January and early March using the relation between CFC-11 and O3 measured on the flights. The peak ozone loss (1200 ppbv) occurs in the 440–470 K region in early March, in reasonable agreement with other published empirical estimates. There is also good agreement between the ozone losses derived from the three independent balloon tracer data sets used here. The magnitude and vertical distribution of the loss derived from the measurements are in good agreement with the loss calculated from SLIMCAT over Kiruna for the same days.
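The tracer-correlation method used above works by fitting an early-winter O3 : CFC-11 relation inside the vortex and then attributing any later shortfall of O3 below that relation, at the same measured CFC-11, to chemical loss. A minimal pure-Python sketch; the reference relation and the late-winter values are invented for illustration and are not the SOLVE-THESEO 2000 data:

```python
# Tracer-correlation estimate of chemical ozone loss (illustrative sketch).
# Because CFC-11 and O3 are both long-lived tracers, transport alone is
# assumed to leave their early-winter relation unchanged.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b (pure-Python helper)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Invented early-winter reference profile inside the vortex
ref_cfc11 = [250.0, 200.0, 150.0, 100.0, 50.0]        # pptv
ref_o3 = [1500.0, 2200.0, 2900.0, 3600.0, 4300.0]     # ppbv
a, b = linear_fit(ref_cfc11, ref_o3)

def ozone_loss(cfc11_late, o3_late):
    """Chemical loss = O3 expected from the reference relation for the
    measured CFC-11, minus the O3 actually observed late in winter."""
    return (a * cfc11_late + b) - o3_late

# Invented late-winter sounding: same CFC-11 level, depleted O3
loss = ozone_loss(100.0, 2400.0)  # ppbv of chemically destroyed ozone
```

The design choice to difference against a conserved-tracer relation, rather than against an earlier O3 profile directly, is what separates chemical loss from the apparent changes caused by descent and mixing inside the vortex.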