550 Geowissenschaften
One possible approach to systematically studying the influence of the deformation regime on the geometry of geological structures such as folds and boudins is analogue modelling. For a complete understanding of the resulting structures, consideration of the third dimension is required. This PhD study deals with scaled analogue modelling under constriction and plane-strain conditions to improve our knowledge of folding and boudinage of lower crustal rocks in space and time. Plasticine is an appropriate analogue material for rocks in the lower crust and was therefore used for the experiments. The macroscopic behaviour of most types of plasticine is quite similar to that of rocks undergoing strain-rate softening and strain hardening, regardless of the different microscopic aspects of deformation. Therefore, if one is aware that the stress exponent and viscosity increase with increasing strain, the original plasticine types used, with stress exponents ranging from 5.8 to 8.0, are adequate for modelling geological structures. The same holds for plasticine/oil mixtures. Thus, plasticine and plasticine/oil mixtures can be used to model the viscous flow of different rock types in the lower crust. If climb-accommodated dislocation creep and associated steady-state flow are assumed for the natural rocks, the plasticine/oil mixtures, which flow under steady-state conditions, should be used. Three different experimental studies of plane-strain coaxial deformation of stiff layers, with viscosity η2 and stress exponent n2, embedded in a weak matrix, with viscosity η1 and stress exponent n1, have been carried out. The undeformed samples (matrix plus layer) were cubes with an edge length of 12 cm. All experimental runs were carried out at T = 25 ± 1°C and varying strain rates ė, ranging from 7.9 × 10⁻⁶ s⁻¹ to 1.7 × 10⁻² s⁻¹, until a finite longitudinal strain of 30%–40% was achieved.
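The strain-rate softening behaviour described above follows from the power-law relation σ = K·ė^(1/n), which gives an effective viscosity η = σ/ė. As a minimal sketch (a hypothetical helper, not part of the study), the matrix viscosity quoted later in the abstract (1.2 × 10⁶ Pa s at ė = 10⁻³ s⁻¹, n ≈ 8) can be extrapolated to the other strain rates used:

```python
# Sketch of power-law (strain-rate dependent) effective viscosity,
# eta_eff = K * edot**(1/n - 1); the consistency K is calibrated from a
# reference viscosity at a reference strain rate. Values are illustrative,
# taken from the abstract (matrix: 1.2e6 Pa s at edot = 1e-3 1/s, n = 8).

def effective_viscosity(edot, n, eta_ref, edot_ref=1e-3):
    """Effective viscosity of a power-law material at strain rate edot."""
    K = eta_ref * edot_ref ** (1.0 - 1.0 / n)   # consistency from reference point
    return K * edot ** (1.0 / n - 1.0)

eta_fast = effective_viscosity(1.7e-2, n=8, eta_ref=1.2e6)   # fastest run
eta_slow = effective_viscosity(7.9e-6, n=8, eta_ref=1.2e6)   # slowest run
# A power-law material with n > 1 is strain-rate softening:
# the faster it is deformed, the lower its effective viscosity.
assert eta_fast < 1.2e6 < eta_slow
```

This is why the viscosity ratio between layer and matrix, and hence the resulting structures, depend on the imposed strain rate.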
The first experimental study improved our understanding of the evolution of folds and boudins when the layer is oriented perpendicular to the Y-axis of the finite strain ellipsoid. The rock analogues used were Beck’s green plasticine (matrix) and Beck’s black plasticine (competent layer), both of which are strain-rate softening modelling materials with stress exponent n ≈ 8. The effective viscosity η of the matrix plasticine was changed by adding different amounts of oil to the original plasticine. At a strain rate ė of 10⁻³ s⁻¹ and a finite strain e of 10%, the effective viscosity of the matrix ranges from 1.2 × 10⁶ to 7.2 × 10⁶ Pa s. The effective viscosity of the competent layer has been determined as 4.2 × 10⁷ Pa s. If the viscosity ratio is large (> ca. 20) and the initial thickness of the competent layer is small, folds and boudins develop simultaneously. Although the growth rate of the folds seems to be higher than that of the boudins, the wavelength of both structures is approximately the same, as suggested by analytical solutions. A further unexpected, but characteristic, aspect of the deformed competent layer is a significant increase in thickness, which can be used to distinguish plane-strain folds and boudins from constrictional folds and boudins. In the second experimental study, the impact of varying strain rates on growing folds and boudins under plane strain has been investigated. The strain rates used range from 7.9 × 10⁻⁶ s⁻¹ to 1.7 × 10⁻² s⁻¹. The stiff layer and matrix consist of non-linear viscous Kolb grey and Beck’s green plasticine, respectively, both of which are strain-rate softening modelling materials with power-law exponents (n) and apparent viscosities (η) ranging from 6.5 to 7.9 and from 8.5 × 10⁶ to 7.2 × 10⁶ Pa s, respectively. The effective viscosity (η) of the matrix plasticine was partly modified by adding oil to the original plasticine.
At the strain rates used in the experiments, the viscosity ratio between layer and matrix ranges between 3 and 10. Different runs have been carried out in which the layer was oriented perpendicular to the principal strain axes (X > Y > Z). The results suggest a considerable influence of the strain rate on the geometry of the deformed stiff layer, including its thickness. This holds for every type of layer orientation (S ┴ X, S ┴ Y, S ┴ Z). If the stiff layer is oriented perpendicular to the short axis Z of the finite strain ellipsoid, the number of the resulting boudins and the thickness of the stiff layer increase, whereas the length of the boudins decreases with increasing strain rate. If the stiff layer is oriented perpendicular to the long axis X of the finite strain ellipsoid, an increase in strain rate results in an increasing wavelength of folds, whereas the number of folds and the degree of thickening of the stiff layer decrease. If the stiff layer is oriented perpendicular to the intermediate Y-axis of the finite strain ellipsoid, an increase in strain rate results in a decreasing number of boudins and folds, associated with increasing wavelengths of both structures. The wavelength of the folds is approximately half the wavelength of the boudins. This is true for the case where folds and boudins develop simultaneously (S ┴ Y) and for the cases where both structures develop independently (folds at S ┴ X and boudins at S ┴ Z). In the third experimental study, scaled analogue experiments have been carried out to demonstrate the growth of plane-strain folds and boudins through space and time. Previous 3D studies are based only on finite deformation structures; their results can therefore not be used to prove whether both structures grew simultaneously or in sequence. Plane strain acted on a single stiff layer embedded in a weak matrix, with the layer oriented perpendicular to the intermediate Y-axis of the finite strain ellipsoid.
Two different experimental runs have been carried out using computed tomography (CT) to analyse the results. The first run was carried out without interruption. During the second run, the deformation was stopped at longitudinal strain increments of 10%. Every experiment was carried out at a temperature T of 25°C and a strain rate ė of ca. 4 × 10⁻³ s⁻¹ until a finite longitudinal strain of 40% was achieved, with a viscosity contrast m of 18.6 between the non-linear viscous layer (Kolb brown plasticine) and the matrix (Beck’s green plasticine with 150 ml oil kg⁻¹). The apparent viscosity η and the stress exponent n at a strain rate ė ≈ 10⁻³ s⁻¹ and a finite strain e = 10% are 2.23 × 10⁷ Pa s and 5.8 for the layer, and 1.2 × 10⁶ Pa s and 10.5 for the matrix. These new data resulting from incremental analogue modelling corroborate previous suggestions that folds and boudins are coeval structures in cases of plane-strain coaxial deformation with the stiff layer oriented perpendicular to the intermediate Y-axis of the finite strain ellipsoid. They will be of interest for all workers dealing with plane-strain boudins and folds where the fold axes are parallel to the major axis (X) of the finite strain ellipsoid. As demonstrated by the first experimental study, coeval folding and boudinage under plane strain, with S ┴ Y, are associated with a significant increase in the thickness of the competent layer. The latter phenomenon does not occur in other cases of simultaneous folding and boudinage, such as bulk pure constriction. To study the impact of layer thickness on the geometry of folds and boudins under pure constriction, we carried out additional experiments using different types of plasticine for a stiff layer and a weaker matrix to model folding and boudinage under pure constriction, with the initially planar layer oriented parallel to the X-axis of the finite strain ellipsoid.
The stiff layer and matrix consist of non-linear viscous Kolb brown and Beck’s green plasticine, respectively, both of which are strain-rate softening modelling materials. Six runs have been carried out using stiff-layer thicknesses of 1, 2, 4, 6, 8 and 10 ± 0.2 mm. All experimental runs were carried out at a temperature T of 30 ± 2°C and a strain rate ė of ca. 1.1 × 10⁻⁴ s⁻¹ until a finite longitudinal strain of 40% was achieved, with a viscosity contrast m of 3.1 between the stiff layer (Kolb brown plasticine) and the matrix (Beck’s green plasticine). The apparent viscosity η and the stress exponent n at a strain rate ė ≈ 10⁻³ s⁻¹ and a finite strain e = 10% are 2.23 × 10⁷ Pa s and 5.8 for the layer, and 7.2 × 10⁶ Pa s and 7.9 for the matrix. Our results suggest a considerable influence of the initial thickness of the stiff layer on its deformed geometry. There is no evidence for folding in XY = XZ sections if the initial thickness of the competent layer is larger than ca. 8 mm. If the initial thickness of the competent layer is set at ca. 10 ± 0.2 mm, both folds and boudins develop simultaneously. However, the growth rate of the boudins seems to be higher than that of the folds. A further expected, but characteristic, aspect of the deformed competent layer is the absence of any change in thickness, which can be used to distinguish plane-strain folds and boudins from constrictional folds and boudins. The model results are important for the analysis and interpretation of deformation structures in rheologically stratified rocks undergoing dislocation creep under bulk constriction. Tectonic settings where constrictional folds and boudins may develop simultaneously are stems of salt diapirs, subduction zones or thermal plumes. To make (palaeo)viscosimetric statements possible, the rheological data of the different plasticine types were related to the geometrical data.
When the normalized dominant wavelength Wd obtained from the deformed layer of the models is compared with the theoretical dominant wavelength (Ld) calculated using the equation of Smith (1977, 1979), the latter probably also holds when folding and boudinage develop simultaneously (S ┴ Y) and when boudins develop independently (S ┴ Z), but it evidently cannot be applied at very low viscosity ratios, as indicated by the low-strain-rate experiments.
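For linear viscous materials, the classic Biot relation gives the dominant wavelength as Ld = 2πh·(η_layer/(6·η_matrix))^(1/3); Smith's equations generalize this to power-law materials, which is not reproduced here. A hedged sketch of the linear-viscous case, using viscosities from the first study and an assumed layer thickness of 2 mm:

```python
import math

def biot_dominant_wavelength(h, eta_layer, eta_matrix):
    """Biot's dominant wavelength for buckling of a stiff linear-viscous
    layer of thickness h embedded in a weaker linear-viscous matrix."""
    return 2.0 * math.pi * h * (eta_layer / (6.0 * eta_matrix)) ** (1.0 / 3.0)

# Illustrative numbers: layer 4.2e7 Pa s, matrix 1.2e6 Pa s, h = 2 mm (assumed).
Ld = biot_dominant_wavelength(h=0.002, eta_layer=4.2e7, eta_matrix=1.2e6)
# viscosity ratio ~35 gives Ld of roughly ten times the layer thickness
assert 0.02 < Ld < 0.025
```

The cube-root dependence explains why the dominant wavelength is only weakly sensitive to the viscosity ratio, and why the relation degrades at very low ratios.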
The present work was devised to address the systematic analysis of samples from a range of Roman non-ferrous metal artefacts from different archaeological contexts and sites in the Roman province of Germania Superior. One of the focal points of this study is the provenancing of different lead objects from five important Roman settlements dating between 15 BC and the beginning of the fourth century AD. For this purpose, measurements were made on lead and copper ore samples from the Siegerland, Eifel, Hunsrück and Lahn-Dill areas in Germany and supplemented with data from the literature to create a databank of lead isotope ratios of European deposits. Compositional analysis of the lead objects by electron microprobe showed that the Romans were able to purify lead from ore up to 99%. Multi-collector inductively coupled plasma mass spectrometry was used to determine the source of lead, which played an important role in nearly all aspects of Roman life. Lead isotope ratios were measured for ore samples from German deposits on the eastern side of the Rhine (Siegerland, Lahn-Dill, Ems) and the western side of the Rhine (Eifel, Hunsrück), which contained enough ore reserves to meet the increasing local demand and are believed to have been mined during the Roman period. These data, together with those from Mediterranean ore deposits taken from the literature, were used to establish a databank. The Mediterranean ore deposits range from Cambrian (high 207Pb/206Pb) to Tertiary (lower 207Pb/206Pb) values. In particular, the Cypriot deposits are younger, while the Spanish deposits fall either with the older Sardic ores or close to the younger Cypriot ores. The lead isotope ratios of most German ore deposits fall in between the 208Pb/206Pb vs. 207Pb/206Pb ratios of Sardinia and Cyprus, where the lead isotope signatures of ore deposits from France and Britain are also found.
Over 240 lead objects were measured from Wallendorf (second century BC to first century AD), Dangstetten (15–8 BC), Waldgirmes (AD 1–10), Mainz (AD 1–300), Martberg (first to fourth centuries AD) and Trier (third to fourth centuries AD). Comparing the lead isotope ratios of the objects with those of German ores shows that over 85 per cent of the objects derive from Eifel ore deposits, but the Romans also imported lead from the southern Massif Central and from Great Britain. A further topic of this work was the systematic study of the variation of copper isotope ratios in different copper minerals and the mechanisms that control copper isotope fractionation in ore deposits. For this purpose, copper isotope analyses were made by multi-collector inductively coupled plasma mass spectrometry on a series of hydrothermal copper sulphides and their alteration products. Copper and lead isotope ratios were measured in coexisting phases of chalcopyrite and malachite and also in coexisting malachite and azurite. No significant fractionation was observed between malachite and azurite, but in coexisting chalcopyrite–malachite phases, malachite always shows a positive fractionation towards heavier isotope values. Zhu et al. and Larson et al. showed that isotopic variations in copper principally reflect mass fractionation in response to low-temperature processes rather than source heterogeneity. The low-temperature ore formation processes are mostly represented by weathering of primary sulphide ores to produce secondary carbonate phases and are therefore usually observed at the surface of ore deposits, which was probably removed during the early Bronze Age. Using this concept, copper isotope ratios were measured in some Early Bronze Age copper alloys and Roman copper alloys. However, no large copper isotope fractionation has been observed. Lead and copper isotope ratios were also measured on samples from the Kupferschiefer.
Two profiles were investigated: (1) Sangerhausen, which was not directly influenced by the oxidizing brines of the Rote Fäule, and (2) Oberkatz, where both Rote-Fäule-controlled and structure-controlled mineralization were observed. Results from maturation studies of organic matter suggest that the maximum temperature affecting the Kupferschiefer did not exceed 130°C. δ65Cu ranges between −0.78 and +0.58‰ and shows a positive correlation with copper concentration. The maximum temperature in the Kupferschiefer profile from Oberkatz is supposed to be around 150°C. δ65Cu in this profile ranges between −0.71 and +0.68‰. The pattern of copper isotope fractionation and copper concentration is the same as for the Sangerhausen profile. Original lead isotope ratios are strongly overprinted by high concentrations of uranium at the bottom of both profiles, causing more radiogenic lead.
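The provenancing logic in the study above, matching an object's measured lead isotope signature to the closest deposit field, can be sketched as a nearest-neighbour search in (207Pb/206Pb, 208Pb/206Pb) ratio space. The deposit values below are hypothetical placeholders, not measured data from the databank:

```python
# Hedged sketch of provenance assignment by nearest deposit in lead
# isotope ratio space. Signatures are (207Pb/206Pb, 208Pb/206Pb) pairs;
# all values here are illustrative placeholders.

deposits = {
    "Eifel":    (0.846, 2.086),
    "Sardinia": (0.850, 2.095),
    "Cyprus":   (0.836, 2.065),
}

def nearest_deposit(ratio_207_206, ratio_208_206):
    """Return the deposit whose isotope signature is closest (Euclidean)."""
    def dist(sig):
        return ((sig[0] - ratio_207_206) ** 2 + (sig[1] - ratio_208_206) ** 2) ** 0.5
    return min(deposits, key=lambda name: dist(deposits[name]))

assert nearest_deposit(0.845, 2.085) == "Eifel"
```

In practice deposit fields overlap, so assignments rest on comparing a measurement against the whole field of a deposit rather than a single point; the sketch only shows the distance-based idea.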
The assumption that mankind is able to influence global or regional climate through the emission of greenhouse gases is widely discussed. This assumption is both very important and very unclear. In consequence, it is necessary to clarify definitively which meteorological elements (climate parameters) are influenced by the anthropogenic climate impact, and to what extent in which regions of the world. In addition, to be able to interpret such information properly, it is also necessary to know the magnitude of the different climate signals due to natural variability (for example due to volcanic or solar activity) and the magnitude of stochastic climate noise. General circulation models (GCMs), the usual tool of climatologists, suffer from the problem that they are at least quantitatively uncertain with regard to the regional patterns of the behaviour of climate elements, and from the lack of accurate information about long-term (decadal and centennial) forcing. In contrast, statistical methods as used in this study have the advantage of testing hypotheses directly on observational data. Thus, we focus on the reality of climate variability as it has occurred in the past. We apply two strategies of time series analysis to the observed climate variables under consideration. First, each time series is split into its variation components; this procedure is called 'structure-oriented time series separation'. The second strategy, called 'cause-oriented time series separation', matches various time series representing various forcing mechanisms with those representing the climate behaviour (climate elements). In this way it can be assessed which part of observed climate variability can be explained by this (combined) forcing and which part remains unexplained.
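The 'structure-oriented' separation can be illustrated with a minimal sketch on synthetic data: a linear trend component is estimated by least squares and removed, leaving a residual. Real analyses also extract seasonal, episodic and extreme-event components; this shows only the idea:

```python
import numpy as np

# Minimal sketch of structure-oriented time series separation:
# split a synthetic series into a linear trend component and a residual.

rng = np.random.default_rng(0)
t = np.arange(100)
series = 0.01 * t + rng.normal(0.0, 0.2, size=t.size)  # warming trend + noise

slope, intercept = np.polyfit(t, series, 1)   # least-squares trend estimate
trend = slope * t + intercept
residual = series - trend

# The residual carries (numerically) none of the linear trend component.
assert abs(np.polyfit(t, residual, 1)[0]) < 1e-10
```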
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid waterlogging and to manage high groundwater tables. Among other impacts, it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5' by 5' grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 of the 116 countries with information on artificial drainage areas could sub-national information be taken into account. Due to this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and it discusses the data uncertainty.
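The downscaling idea, distributing a national drained-area total over grid cells in proportion to each cell's cropland area, can be sketched as follows (the numbers are illustrative, not from the dataset):

```python
# Hedged sketch of proportional downscaling: a national total of
# artificially drained area is allocated to grid cells in proportion
# to each cell's cropland area. All numbers are illustrative.

def distribute_drainage(national_total, cropland_per_cell):
    """Allocate national_total proportionally to cropland per cell."""
    total_cropland = sum(cropland_per_cell)
    return [national_total * c / total_cropland for c in cropland_per_cell]

cells = [10.0, 30.0, 60.0]                 # cropland area per cell (km^2)
drained = distribute_drainage(25.0, cells)  # 25 km^2 drained nationally
assert abs(sum(drained) - 25.0) < 1e-9      # allocation conserves the total
assert drained == [2.5, 7.5, 15.0]
```

A proportional allocation like this is exactly why the authors caution against sub-national use: the within-country pattern is inherited from the cropland proxy, not observed.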
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent NOy removal of over 5 ppb in late December between 500 and 550 K, and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in situ observations from the high-altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4 × 10⁻⁶ cm⁻³ h⁻¹ in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, denitrification has only a moderate impact (≤ 10%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloon-borne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire-sur-l'Adour, France. For both flights a ClO daytime and night-time vertical profile could be derived over an altitude range of approximately 15–31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box-model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire-sur-l'Adour in 1999 show good agreement with the ClO measurements. For the flight launched in León in 1996, a similarly good agreement is found, except at around ~650 K potential temperature (~26 km altitude). However, for both flights there is a tendency at solar zenith angles greater than 86°–87° for the simulated ClO mixing ratios to substantially overestimate measured ClO, by approximately a factor of 2.5 or more. We therefore conclude that no indication can be deduced from the presented ClO measurements that substantial uncertainties exist in the midlatitude chlorine chemistry of the stratosphere, the exception being the situation at solar zenith angles greater than 86°–87°, where the model simulations substantially overestimate the ClO observations.
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Thus traditional linear statistical methods are not suited to describing the nonlinearities of this system, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Extending an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear backpropagation neural network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent greenhouse gas concentrations, GHG, and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN exceeds that of a linear model by far; the simulation quality on the global scale amounts to 84% explained variance. Additionally, the results of the nonlinear algorithm are physically plausible in amplitude and time structure. Nevertheless they cover a broad range, e.g. the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the time period 1856–1998. However, the simulated amplitudes are situated within the discussed range (HOUGHTON et al., 2001). Additionally, the combined anthropogenic effect corresponds to the observed increase in temperature for the examined time period. Moreover, the BPN succeeds in detecting anthropogenically induced climate change at a high significance level.
Therefore the concept of neural networks can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
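As an illustration of the statistical tool used above, not the authors' actual code, a one-hidden-layer backpropagation network trained by gradient descent on a synthetic forcing-to-temperature mapping might look like this:

```python
import numpy as np

# Minimal one-hidden-layer backpropagation network (BPN) sketch.
# It learns a toy nonlinear forcing -> temperature mapping; the three
# input columns stand in for forcing series (e.g. GHG, solar, ENSO).

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))          # toy forcing inputs
y = np.tanh(X @ np.array([1.0, 0.5, -0.3]))    # synthetic nonlinear response

W1 = rng.normal(0, 0.5, size=(3, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 0.5, size=(8,));   b2 = 0.0           # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error gradient
    # (the constant factor 2 is absorbed into the learning rate):
    gW2 = h.T @ err / len(X);  gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)      # tanh' = 1 - tanh^2
    gW1 = X.T @ dh / len(X);   gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

assert losses[-1] < losses[0]   # training reduced the fitting error
```

The free parameters mentioned in the abstract (number of hidden neurons, learning rate, initialisation) correspond here to the hard-coded 8, 0.05 and the random seeds, which is exactly what makes the trial-and-error tuning time-consuming.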
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In the second step, time series components, such as trends, are removed to obtain a deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It arises that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons also show this increase, especially in autumn, in Germany an insignificant decrease in the summer and autumn seasons is found. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and in the Weibull probability density function parameters, respectively.
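The first approach, counting threshold exceedances in the two halves of the observation period, can be sketched with synthetic data:

```python
# Sketch of the counting approach: how many monthly totals exceed a
# threshold in the first vs. the second half of the record. The
# precipitation values below are synthetic, not station data.

def exceedances_by_half(series, threshold):
    """Count values above threshold in each half of the series."""
    mid = len(series) // 2
    first = sum(1 for v in series[:mid] if v > threshold)
    second = sum(1 for v in series[mid:] if v > threshold)
    return first, second

precip = [50, 60, 55, 58, 62, 90, 95, 88, 91, 99]   # toy monthly totals (mm)
assert exceedances_by_half(precip, 85) == (0, 5)    # extremes cluster late
```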
Simulation of global temperature variations and signal detection studies using neural networks
(1998)
The concept of neural network models (NNM) is a statistical strategy which can be used if a superposition of any forcing mechanisms leads to any effects and if a sufficient related observational database is available. In comparison to multiple regression analysis (MRA), the main advantages are that NNM are an appropriate tool also in the case of non-linear cause–effect relations, and that interactions of the forcing mechanisms are allowed. In comparison to more sophisticated methods such as general circulation models (GCM), the main advantage is that details of the physical background, such as feedbacks, can be unknown: neural networks learn from observations, which reflect feedbacks implicitly. The disadvantage, of course, is that the physical background is neglected. In addition, the results prove to be sensitively dependent on the network architecture, such as the number of hidden neurons or the initialisation of learning parameters. We used a supervised backpropagation network (BPN) with three neuron layers, an unsupervised Kohonen network (KHN) and a combination of both called a counterpropagation network (CPN). These concepts are tested with respect to their ability to simulate the observed global as well as hemispheric mean surface air temperature annual variations 1874–1993 if parameter time series of the following forcing mechanisms are incorporated: equivalent CO2 concentrations, tropospheric sulfate aerosol concentrations (both anthropogenic), volcanism, solar activity, and ENSO (all natural). It arises that in this way up to 83% of the observed temperature variance can be explained, significantly more than by MRA. The inclusion of the North Atlantic Oscillation does not improve these results. On a global average, the greenhouse gas (GHG) signal so far is assessed to be 0.9–1.3 K (warming) and the sulfate signal 0.2–0.4 K (cooling), results which are in close agreement with the GCM findings published in the recent IPCC report.
The related signals of the natural forcing mechanisms considered cover amplitudes of 0.1–0.3 K. Our best NNM estimate of the GHG doubling signal amounts to 2.1 K (equilibrium) or 1.7 K (transient), respectively.
The climate system can be regarded as a dynamic nonlinear system. Thus, traditional linear statistical methods fail to model the nonlinearities of such a system, and these nonlinearities render it necessary to find alternative statistical techniques. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for a couple of years now. Most authors use the standard backpropagation network (BPN) for their investigations, although this specific model architecture carries a certain risk of over-/underfitting. Here we instead use the so-called Cauchy machine (CM) with an implemented fast simulated annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a yet undefined cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of a multiple linear regression model; the simulation quality on the global scale amounts to up to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994, updated by Jones, 1999a) for the examined period 1856–1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
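The core of fast simulated annealing (Szu, 1986) is a Cauchy-distributed jump combined with the fast cooling schedule T(k) = T0/(1 + k). A hedged sketch on a toy one-dimensional cost function, not the network training of the study:

```python
import math, random

# Hedged sketch of Fast Simulated Annealing: Cauchy-distributed jumps
# with a T(k) = T0/(1+k) cooling schedule, minimizing a toy multimodal
# cost function with several local minima.

random.seed(42)

def cost(x):
    return x ** 2 + 3.0 * math.sin(5.0 * x) + 3.0

x = 4.0                                   # deliberately poor start point
best_x, best_c = x, cost(x)
T0 = 2.0
for k in range(5000):
    T = T0 / (1.0 + k)                    # fast (Cauchy) cooling schedule
    # Sample a Cauchy-distributed step via the inverse-CDF (tan) method:
    step = T * math.tan(math.pi * (random.random() - 0.5))
    cand = x + step
    dc = cost(cand) - cost(x)
    # Metropolis acceptance: always take improvements, sometimes uphill moves.
    if dc < 0 or random.random() < math.exp(-dc / max(T, 1e-12)):
        x = cand
    if cost(x) < best_c:
        best_x, best_c = x, cost(x)

assert best_c < cost(4.0)   # annealing improved on the start point
```

The heavy-tailed Cauchy jumps occasionally make very long moves, which is what lets the fast 1/(1 + k) schedule escape local minima where the classical Boltzmann schedule would need far slower cooling.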
Observed global and European spatiotemporally related fields of surface air temperature, mean sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors, such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and to known internal climate mechanisms, such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporally related fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, as well as the residuals. Finally, a back-transformation leads to the spatiotemporally related patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance including natural variability; this test answers the question of whether a significant anthropogenic climate change is visible in the observed data. Second, the anthropogenic signal is tested with respect to the climate noise component only; this test answers the question of whether the anthropogenic signal is significant among others in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed climate (first test).
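The three-step procedure (PCA, regression of each principal-component series on a forcing, back-transformation to a spatial signal pattern) can be sketched on a synthetic field containing one forced pattern plus noise:

```python
import numpy as np

# Sketch of PCA + regression + back-transformation on a synthetic
# space-time field: one forced spatial pattern scaled by a ramp-like
# forcing, plus noise. Only the leading PC is treated here.

rng = np.random.default_rng(3)
nt, nx = 120, 20
forcing = np.linspace(0.0, 1.0, nt)                 # e.g. a GHG-like ramp
pattern = rng.normal(0, 1, nx)                      # forced spatial pattern
field = np.outer(forcing, pattern) + 0.1 * rng.normal(0, 1, (nt, nx))

# Step 1: PCA via SVD of the anomaly field.
anom = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pc1 = U[:, 0] * s[0]                                # leading PC time series

# Step 2: regress PC1 on the forcing series.
A = np.column_stack([forcing, np.ones(nt)])
coef, *_ = np.linalg.lstsq(A, pc1, rcond=None)
explained = A @ coef                                # forced part of PC1

# Step 3: back-transform the fitted part to a spatial signal amplitude.
signal_pattern = np.outer(explained, Vt[0]).std(axis=0)

r = np.corrcoef(explained, pc1)[0, 1]
assert abs(r) > 0.9   # the forced component dominates the leading PC
```

In the study the regression is stepwise over several forcings and internal modes, and the significance of the fitted part is then tested against the residual variance; the sketch shows only the mechanics of one PC and one forcing.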
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate global-scale groundwater recharge in a spatially resolved way. In this report, improved model estimates of diffuse groundwater recharge at the global scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model), which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This tuning decreased the estimated groundwater recharge under semi-arid and arid conditions compared to the untuned model results, and the new estimates agree better with country-level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse-gas emissions scenarios as interpreted by two different climate models.
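The tuning step — scaling modeled recharge in (semi-)arid cells so that it matches independent point estimates — can be illustrated with a single least-squares correction factor. This is a sketch under simplified assumptions; WGHM's actual calibration is considerably more elaborate.

```python
def tune_and_apply(recharge, is_arid, point_obs, point_model):
    """Fit a multiplicative correction factor f minimizing
    sum((f * modeled - observed)^2) over the tuning points, then apply
    it to the (semi-)arid grid cells only (illustrative sketch).
    recharge: modeled diffuse recharge per cell
    is_arid:  per-cell flag for (semi-)arid climate
    point_obs / point_model: recharge at independent tuning points"""
    num = sum(o * m for o, m in zip(point_obs, point_model))
    den = sum(m * m for m in point_model)
    f = num / den if den else 1.0
    tuned = [r * f if arid else r for r, arid in zip(recharge, is_arid)]
    return tuned, f
```

If the model overestimates recharge at the tuning points, f < 1 and arid-cell recharge is reduced, as reported in the abstract.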
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5° by 0.5° for the years 1961, 1995 and 2050, as simulated by the model WaterGAP-N. Livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as input terms, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The terms in the balance are compared to the results of other global models, and uncertainties are described. The total global surface N surplus increased from 161 Tg N yr⁻¹ in 1961 to 230 Tg N yr⁻¹ in 1995. Using assumptions for scenario A1B of the Special Report on Emissions Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC), as quantified by the IMAGE model, the total global surface N surplus is estimated to be 229 Tg N yr⁻¹ in 2050. However, implementing these scenario assumptions leads to negative surface balances in many agricultural areas of the globe, which indicates that the assumptions about N fertilizer use and crop-production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 than in 1995.
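The cell-wise balance behind these totals is a simple input-minus-output sum; a sketch using the abbreviations above (function names are illustrative):

```python
def n_surplus(n_anm, n_fert, n_dep, n_fix, n_exp, n_vol):
    """Surface N balance for one grid cell: inputs (livestock excretion,
    synthetic fertilizer, deposition, biological fixation) minus outputs
    (plant uptake, ammonia volatilization)."""
    return (n_anm + n_fert + n_dep + n_fix) - (n_exp + n_vol)

def inconsistent_cells(balances):
    """Indices of cells with a negative surplus, which the study reads
    as a sign of inconsistent fertilizer/crop-production assumptions."""
    return [i for i, s in enumerate(balances) if s < 0]
```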
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled; the reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4,428. To distribute the irrigation statistics within each subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage; these maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation.
Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
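Distributing a unit's irrigation statistic over its grid cells according to map evidence amounts to a proportional allocation; the following is a hypothetical sketch (the weights would come from digitized irrigation maps or land-cover likelihood, which is more nuanced in practice):

```python
def allocate_irrigated_area(unit_total_ha, cell_weights):
    """Distribute one subnational unit's irrigated area over its grid
    cells in proportion to per-cell evidence weights (sketch).
    Falls back to a uniform spread when no spatial evidence exists."""
    total_weight = sum(cell_weights)
    if total_weight == 0:
        n = len(cell_weights)
        return [unit_total_ha / n] * n
    return [unit_total_ha * w / total_weight for w in cell_weights]
```

By construction the allocated areas sum back to the unit's statistic, which keeps the gridded map consistent with the inventory.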
In this thesis, chemical ozone loss in the Arctic stratosphere was investigated over eleven years, between 1991 and 2002, using the so-called ozone-tracer correlation technique (TRAC). In this method, correlations between ozone and long-lived tracers are followed over the course of the winter inside the polar vortex, and the annual accumulated ozone loss is derived from them. The results of this work are based mainly on measurement data from the satellite instruments HALOE (Halogen Occultation Experiment) on UARS (Upper Atmosphere Research Satellite) and ILAS (Improved Limb Atmospheric Spectrometer) on ADEOS (Advanced Earth Observing Satellite). The HALOE instrument has measured continuously since October 1991, covering high northern latitudes for a few days every two to three months. ILAS provided measurements only for the winter 1996–97, recorded at high latitudes over seven months. Owing to the extensions and improvements of the method introduced in this work, the method could be validated in a detailed study for the winter 1996–97. The ILAS record was used to examine, for the first time, the temporal evolution of ozone-tracer correlations continuously over the entire lifetime of the polar vortex. Correlations during the formation of the vortex were also investigated, in particular possible mixing between vortex air and air masses outside the vortex. In addition, the results from the ILAS and HALOE data were compared and the differences between them analyzed in depth. Based on HALOE measurements, the extended TRAC method could be applied over eleven years, allowing for the first time a consistent analysis of ozone loss and chlorine activation over this period. The extensions led to a reduction and precise quantification of the uncertainties of the results.
A clear connection between meteorological conditions, chlorine activation and chemical ozone loss emerged. Furthermore, a dependence between the meteorological conditions and the homogeneity of the ozone loss within a winter became apparent, as well as a possible influence of horizontal mixing on air masses in a weakly developed polar vortex. For the eleven years investigated, a positive correlation was found between the potential PSC areas occurring over the entire lifetime of the vortex and the accumulated ozone losses. It could also be shown that the ozone loss is controlled by considerably more factors than the area of potential PSC occurrence alone, depending for example on the strength of solar irradiation. In addition, the effects of volcanic eruptions, such as that of Mount Pinatubo in 1991, can be identified.
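The core of the TRAC technique — fit an early-vortex ozone/tracer reference relation, then read the accumulated chemical loss as the gap between the ozone expected from that relation and the ozone actually measured late in winter — can be sketched as follows. The polynomial form of the reference fit is an assumption for illustration, not necessarily the one used in the thesis.

```python
import numpy as np

def trac_ozone_loss(ref_tracer, ref_ozone, tracer, ozone, deg=2):
    """TRAC-style accumulated chemical ozone loss (sketch).
    1) Fit the early-vortex reference relation O3 = f(tracer).
    2) Evaluate f on late-winter tracer values (tracer is conserved,
       so f(tracer) is the ozone expected without chemistry).
    3) The shortfall of measured ozone is the chemical loss."""
    coeffs = np.polyfit(ref_tracer, ref_ozone, deg)
    expected = np.polyval(coeffs, tracer)
    return expected - np.asarray(ozone)  # positive = ozone destroyed
```

Mixing across the vortex edge changes the tracer values themselves, which is why the thesis pays particular attention to mixing when validating the reference correlation.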
This thesis contributes to the understanding of the role of ROx in tropospheric ozone formation. Tropospheric ozone (O3) plays an important role in the self-cleansing of the atmosphere; on the other hand, elevated ozone concentrations impair human health and damage plants and the environment. The presence of volatile organic compounds (VOCs) leads to the formation of peroxy radicals (ROx), which shift the normal photochemical equilibrium between ozone and nitrogen oxides towards elevated ozone concentrations. Within this work, a chemical amplifier for measuring the total peroxy radical concentration was built. In the inlet of the instrument, ROx reacts with added NO and CO in a chain reaction, forming NO2, which is detected with a luminol detector. The detector is calibrated every two hours. The chain length is determined by calibrating the instrument with HO2 radicals generated by the photolysis of H2O. The amplification factor was corrected for a cross-sensitivity to water vapour; the measurement accuracy is about 70% at 60% relative humidity. Measurements at the Taunus Observatory on the Kleiner Feldberg during the summer months of 1998 and 1999 are presented. The ozone and ROx concentrations are well correlated. However, the daytime temperature is by far the most important factor controlling both, and is therefore the best parameter for the statistical description of photochemical processes. Based on the measurements at the Kleiner Feldberg, a simple statistical model for predicting the ozone maximum was developed. With the parameters temperature and previous-day ozone concentration alone, the statistical model already explained 80% of the variation of the ozone concentration.
Including the morning ROx measurements improved the explained variance by only 0.5%. To obtain an indication of the influence of anthropogenic emissions, the weekly cycles of ozone, ROx and NOx were also examined. The increase of the ozone mixing ratio at the weekend, accompanied by a simultaneous decrease of the nitrogen-oxide mixing ratio, is explained by a VOC-limited situation at the Kleiner Feldberg. The ozone production rate, based on the reaction between ROx and NO, was calculated for days with a maximum global radiation above 600 W m⁻²; its correlation with the measured dataset was low (r = 0.46). The observed change of the ozone mixing ratio was compared with the calculated mean diurnal cycle of the ozone production rate. Around noon the ozone production rate was about 5 ppbv h⁻¹, and loss processes are required to explain the difference from the observed change; in the evening, about 2 ppbv of O3 are destroyed per hour. During a measurement campaign at the Meteorological Observatory Hohenpeißenberg in June/July 2000, the concentrations of ROx, OH, a number of VOCs and other relevant trace gases were measured. The data are interpreted with a model based on the assumption of a local photostationary state of the radicals. The model results agreed very well with the measurements; an overestimation of the concentration on two days was explained by the influence of oxygenated VOCs. The "recycling" of HO2 radicals (the reaction between HO2 and NO) is the most important source of OH and the most important sink of ROx. Because of the elevated NO concentration in the morning, HO2 is converted very rapidly into OH, which in turn drives VOC oxidation and ROx formation. The most important OH sink and ROx source is the oxidation of isoprene and the terpenes.
To investigate the role of photochemical ozone formation on the regional scale, ozone measurements from all over Germany were analyzed statistically on different temporal and spatial scales. During the day, the net rate of change of the ozone concentration was very similar at three closely spaced stations. The ozone data of 277 German monitoring stations were correlated with the ozone values measured at a forest site near Königstein. The Königstein measurements explain 50% of the variance of the summertime ozone measurements between 11:00 and 16:00 CET at stations within a radius of about 250 km of Königstein; taken over the whole year, this "characteristic distance" is about 350 km. These results indicate that the processes exerting an important influence on the ozone concentration act on regional scales of a few hundred kilometres. In summary, the measured ROx concentrations are consistent with the concentrations calculated from the oxidation of the VOCs by OH. Although the ROx concentrations are important for chemical modelling, ROx measurements contribute little to improving the quality of short-term statistical ozone forecasts. Keywords: Ozone, Troposphere, Peroxy Radicals, Free Radicals, Photochemistry, Chemical Amplifier
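The statistical forecast model mentioned above — the daily ozone maximum regressed on temperature and the previous day's ozone — reduces to an ordinary least-squares fit with two predictors. A minimal sketch (variable names and the linear form are illustrative; the thesis does not publish its coefficients here):

```python
import numpy as np

def fit_ozone_forecast(temp_max, ozone_prev, ozone_obs):
    """Fit daily ozone maximum ~ a*T_max + b*O3(prev day) + c
    by ordinary least squares."""
    A = np.column_stack([temp_max, ozone_prev, np.ones(len(ozone_obs))])
    coef, *_ = np.linalg.lstsq(A, ozone_obs, rcond=None)
    return coef  # (a, b, c)

def predict_ozone(coef, temp_max, ozone_prev):
    """Forecast the ozone maximum from today's temperature and
    yesterday's ozone concentration."""
    a, b, c = coef
    return a * temp_max + b * ozone_prev + c
```

The abstract's finding — that adding morning ROx as a third predictor raises the explained variance by only 0.5% — corresponds to comparing the residual variance of this two-predictor fit with that of a three-predictor fit.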
This thesis investigates the tropospheric cycle of carbonyl sulfide (COS). COS is a source gas of the stratospheric sulfate aerosol, which can influence the radiation budget and accelerate the chemical destruction of stratospheric ozone. Despite numerous studies, the sources and sinks of atmospheric COS are still only inadequately quantified. In particular, large uncertainties remain in the estimates of the contributions of the ocean and of anthropogenic sources, and of the sink strength of the land vegetation. Ship- and aircraft-borne measurements of atmospheric COS have not yielded a consistent interhemispheric ratio (IHR = M_NH / M_SH). While the measurements of Bingemer et al. (1990), Staubes-Diederich (1992) and Johnson et al. (1993) showed an IHR between 1.10 and 1.25, the measurements of Torres et al. (1980), Staubes-Diederich (1992), Weiss et al. (1995) and Thornton et al. (1996) found no or only a slight N/S gradient. The study of Chin and Davis (1993) shows an N/S ratio of the COS source strength of 2.3, attributable mainly to the stronger anthropogenic sources in the northern hemisphere. It is unclear whether the occasional excess concentration in the northern hemisphere is a sign of anthropogenic sources there or part of a seasonal signal caused by the sink function of the land plants. The consistency of the latitudinal distribution of the COS mixing ratio with the geographical and seasonal variations of the COS sources and sinks must be checked, which requires precise knowledge of the source and sink strengths of atmospheric COS and of their spatiotemporal variability. Against this background, the main topics of this work are: (1) the exchange of COS between the atmosphere and the ocean, (2) the exchange between the atmosphere and the terrestrial vegetation, and (3) the spatiotemporal variability of atmospheric COS.
To investigate the exchange of COS between atmosphere and ocean, the concentration disequilibrium of COS between ocean and atmosphere was determined by measurements of COS in seawater and in marine air, and the resulting exchange fluxes were calculated with a model. The measurements took place on board the research vessel Polarstern during the cruises ANT/XV-1 (15 October – 6 November 1997, Bremerhaven–Cape Town) and ANT/XV-5 (26 May – 20 June 1998, Cape Town–Bremerhaven). The concentration of dissolved COS and the saturation ratio of COS between ocean and atmosphere show pronounced diurnal cycles as well as seasonal and geographical variations. The mean concentration of COS in seawater is 14.7 pmol L⁻¹ for the autumn cruise and 18.1 pmol L⁻¹ for the summer cruise. The highest COS concentrations are observed in the respective summer hemisphere and in regions of high biological productivity, i.e. in the Benguela Current in November, in the north-east Atlantic in June, and in the upwelling regions off West Africa in October and June; in the remaining regions the concentrations are an order of magnitude lower. The concentration of COS in seawater rises from its lowest level in the early morning, reaches its maximum at about 15:00 local time and decreases thereafter. This diurnal cycle supports the theory that COS is produced photochemically in seawater. During the daytime hours the open ocean is supersaturated in COS, whereas undersaturation is observed in the late hours of the night: the ocean acts as a COS source during the day and as a COS sink late at night. The undersaturation occurs regularly even in summer in productive ocean regions. A consequence of this observation is a further reduction of the oceanic COS source compared with previously published estimates. Methylmercaptan (CH3SH) is observed in all seawater samples.
The daily mean CH3SH concentration varies between 29 and 303 pmol L⁻¹ and is 3 to 16 times higher than the COS concentration. The diurnal cycle of the CH3SH concentration shows a minimum around noon. The daily means of the CH3SH and COS concentrations are significantly correlated; these data provide evidence that CH3SH is one of the important precursors of COS. The regression line of the correlation between the mean COS and CH3SH concentrations has only a small intercept, so the CH3SH concentration can be used as an indicator of the concentration of COS precursors. There is also a correlation between the CH3SH concentration and the logarithm of the concentration of dissolved chlorophyll a, which suggests that the CH3SH content of seawater is closely related to marine primary production. COS is degraded in seawater by hydrolysis. The degradation rate depends on the temperature of the seawater: the warmer the water, the faster COS is degraded and the shorter its lifetime in seawater. The lifetime can be calculated from the Arrhenius rate law on the one hand, and estimated on the other hand by an exponential fit to the nocturnal concentration decay (i.e. in the absence of photoproduction). Such exponential-decay fits were carried out on densely spaced measurements during several nights. The fitted lifetimes agree well with the theoretical values, although the fitted lifetime is influenced by processes other than hydrolysis (e.g. downward transport, air-sea exchange, etc.). This good agreement supports the conclusion that hydrolysis plays a major role in the degradation of COS in seawater. The calculated hydrolysis lifetime is correlated with the daily mean COS concentration.
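The exponential fit to the nocturnal decay amounts to a linear regression of log-concentration on time, and the theoretical lifetime follows from an Arrhenius rate constant. A sketch of both (the Arrhenius helper takes the pre-exponential factor and activation energy as inputs, since the kinetic parameters for COS hydrolysis are not given in the abstract):

```python
import math

def arrhenius_rate(T_kelvin, A, Ea):
    """Rate constant k = A * exp(-Ea / (R*T)); the theoretical lifetime
    is 1/k. A and Ea must come from the hydrolysis kinetics literature."""
    R = 8.314  # J mol^-1 K^-1
    return A * math.exp(-Ea / (R * T_kelvin))

def fit_decay_lifetime(times_h, conc):
    """Lifetime tau from nighttime decay c(t) = c0 * exp(-t/tau),
    via a linear least-squares fit of ln(c) against t."""
    n = len(times_h)
    xm = sum(times_h) / n
    logs = [math.log(c) for c in conc]
    ym = sum(logs) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(times_h, logs))
             / sum((x - xm) ** 2 for x in times_h))
    return -1.0 / slope  # hours
```

Comparing `fit_decay_lifetime` from the night-time measurements with `1 / arrhenius_rate(...)` at the observed water temperature is the consistency check described above.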
Since the daily means represent both temporal and spatial averages of the COS concentrations, this correlation shows that hydrolysis plays a major role in the spatiotemporal variability of the COS concentration. Because the concentration of dissolved COS depends on several factors, a multivariate treatment is appropriate, for which a multiple linear regression analysis (MLRA) was carried out. This analysis yields an empirical model of the following form for the daily mean COS concentration:
[COS] = 1.8 τ + 13 log[Chl] − 1.5 Ws + 0.057 G − 0.73,
with [COS] = mean concentration of COS in pmol L⁻¹, τ = hydrolysis lifetime in hours, [Chl] = mean concentration of chlorophyll a in mg m⁻³, Ws = wind speed in m s⁻¹, and G = intensity of global radiation in W m⁻². The parameters on the right-hand side of the equation can be measured directly or indirectly from satellites, so the model can be used to estimate the concentration of COS in seawater from satellite data. The empirical model remains to be confirmed and improved by further measurements. The exchange flux of COS between the atmosphere and the open ocean was calculated with the air-sea flux model of Liss and Slater (1974) together with the model of Erickson (1993)
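Evaluating the empirical MLRA model is a direct plug-in of the four predictors. In the sketch below the signs between the terms are an assumption — the printed equation has lost its operators — chosen so that longer lifetime, more chlorophyll and stronger radiation raise [COS] while stronger wind lowers it; they should be verified against the original thesis.

```python
import math

def cos_seawater(tau_h, chl, wind, rad):
    """Daily-mean dissolved COS (pmol L^-1) from the empirical MLRA model
    [COS] = 1.8*tau + 13*log10([Chl]) - 1.5*Ws + 0.057*G - 0.73.
    Term signs are reconstructed, not confirmed by the source.
    tau_h: hydrolysis lifetime [h];  chl: chlorophyll a [mg m^-3];
    wind: wind speed [m s^-1];       rad: global radiation [W m^-2]."""
    return (1.8 * tau_h + 13.0 * math.log10(chl)
            - 1.5 * wind + 0.057 * rad - 0.73)
```

Because all four predictors are retrievable from satellites, a gridded version of this function is what turns the regression into the satellite-based COS estimate described above.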