550 Geowissenschaften
Constructive waterfalls
(1911)
The excavation of valleys by waterfalls is one of the best known and most effective processes by which rivers cut down the surface of the earth. The influence of waterfalls is usually regarded as solely destructive, and as always helping to lower the land. They undermine and cut backward the rock faces over which they fall: by this recession they excavate deep gorges; and the existence of these gorges enables the adjacent country to be lowered to the level of the valley floors. The waterfalls, moreover, empty any lakes they may reach in their retreat, while the ravines below the falls may drain the springs and thus desiccate the neighbouring highlands. Observations in various countries had suggested to me that waterfalls may sometimes be constructive instead of destructive, and that they may reverse their usual procedure, advancing instead of retreating, filling valleys instead of excavating them, and forming alluvial plains and lakes instead of destroying them. The best illustrations I have seen of such advancing, constructive waterfalls are on some rivers of Dalmatia and Bosnia, where they occur in various stages of development. ...
The biomarker record in two different lakes in central Europe, Lake Albano and Lake Constance, is used to reflect environmental changes and lake system response during the Late Glacial and Holocene. Extractable organic compounds in lake sediments that can be assigned to their biological source (biomarkers) function as fingerprints of past aquatic or land plant organisms. Using gas chromatography coupled with mass spectrometry, 21 different biomarkers (predominantly steroids and triterpenoids) as well as a variety of n-alkanes, n-alkanols, and n-alkanoic acids could be identified in the sediment records of Lake Albano and Lake Constance. In the Holocene sediments of Lake Albano, the distribution of biomarkers such as dinosterol (dinoflagellates), isoarborinol, and diplopterol (aquatic organisms) indicates three biomarker zones: The period between 0–3,800 years BP (zone 3) is characterized by high concentrations of these biomarkers and others such as tetrahymanol and diploptene. Conversely, zone 2 (3,800–6,500 years BP) shows very low concentrations of all autochthonous biomarkers. In zone 1 (6,500–11,480 years BP), dinosterol, isoarborinol, and diplopterol remain at a relatively high level, whereas diploptene and tetrahymanol display comparatively low concentrations. The results suggest at least two distinct changes in the predominance of primary producers during the Holocene, which are related to changes in the lake system such as lake mixing and water column stratification. This interpretation is consistent with previous investigations of Lake Albano sediments including pigment and hydrogen index data (Ariztegui et al., 1996b; Guilizzoni et al., 2002). Allochthonous biomarkers such as long-chain n-alkanes, amyrenones and friedelin indicate a development from forest to a more open landscape from 6,000 and 5,000 years BP, respectively.
After a period of high concentrations during the first half of the Holocene, all biomarkers derived from deciduous trees exhibit relatively low values until around 1,000 years BP. Again, this is consistent with results from previous pollen investigations (Ariztegui et al., 2000). The sediment core from Upper Lake Constance comprises the Late Glacial and Holocene. It was analysed for biomarkers and inorganic tracers in order to compare the biomarker results with other proxy data from the same core. Magnetic susceptibility (MS) was measured to obtain a high-resolution stratigraphic framework for the core and further information about changes in the proportions of allochthonous and autochthonous input. Enhanced concentrations and accumulation rates of dinosterol (a biomarker for dinoflagellates) and biogenic calcite give evidence of increasing lake productivity at the beginning of the Holocene, followed by a decrease in bioproductivity after around 7,000 years BP. Younger Dryas sediments are characterized by low amounts of both dinosterol and biogenic calcite, indicating low productivity. The comparison of the concentrations and accumulation rates of β-sitosterol and stigmastanol with parameters reflecting lake productivity suggests that both steroids in Lake Constance sediments are mainly derived from terrigenous sources. Biomarkers as well as concentrations and accumulation rates of allochthonous inorganic compounds such as titanium, magnesium and strontium indicate a slightly enhanced allochthonous input after 8,500 years BP. A significant increase in erosive matter input from enhanced soil erosion is not observed before 4,000 years BP. This can be attributed to the combined effects of a precipitation increase resulting from climatic deterioration and anthropogenic deforestation, which is consistent with observations from other lakes in Central Europe.
The MS record of Lake Constance confirms these results by tracing the climatically induced shifts towards more intense bioproduction (low MS caused by increased calcite deposition) during the ‘climatic optimum’. This is followed by increasing input of terrigenous sediment compounds during colder and wetter periods, which leads to higher MS values in the lake sediments. The occurrence of tetrahymanol in Lake Constance sediments questions its unambiguous use as an indicator of water column stratification. Anaerobic organic macroaggregates within the oxygenated, photic zone of the water column have to be considered as a possible habitat for anaerobic microorganisms containing tetrahymanol. The direct comparison of the two very different lakes, Albano and Constance, with respect to biomarkers indicating climate or environmental change contributes to current biomarker research and to a better understanding of biomarkers in lacustrine sediments.
Spatial interpolation of precipitation data is uncertain. How important is this uncertainty, and how can it be considered in the evaluation of high-resolution probabilistic precipitation forecasts? These questions are discussed through an experimental evaluation of the COSMO consortium's limited-area ensemble prediction system COSMO-LEPS. The applied performance measure is the widely used Brier skill score (BSS). The observational references in the evaluation are (a) rain gauge data analyzed by ordinary Kriging and (b) ensembles of interpolated rain gauge data generated by stochastic simulation. This permits the consideration of either a deterministic reference (the event is observed or not with 100% certainty) or a probabilistic reference that makes allowance for uncertainties in spatial averaging. The evaluation experiments show that the evaluation uncertainties are substantial even for the large area (41,300 km²) of Switzerland with a mean rain gauge distance as small as 7 km: the one- to three-day precipitation forecasts have skill that decreases with forecast lead time, but the one- and two-day forecast performances do not differ significantly.
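The Brier skill score compares the mean squared error of the probabilistic forecast against that of a reference forecast; with a probabilistic observational reference, the "observation" itself becomes a probability rather than a 0/1 event. A minimal sketch of the score (function names and the climatological reference below are illustrative, not part of COSMO-LEPS):

```python
def brier_score(p_forecast, o_ref):
    """Mean squared difference between forecast probabilities and the
    observational reference; o_ref entries are 0/1 for a deterministic
    reference, or probabilities for a probabilistic one."""
    pairs = list(zip(p_forecast, o_ref))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

def brier_skill_score(p_forecast, o_ref, p_reference):
    """BSS = 1 - BS / BS_ref; 1 is a perfect forecast, <= 0 means no
    skill relative to the reference forecast (e.g. climatology)."""
    bs = brier_score(p_forecast, o_ref)
    bs_ref = brier_score(p_reference, o_ref)
    return 1.0 - bs / bs_ref
```

With a deterministic reference the `o_ref` entries are 0 or 1; replacing them with exceedance probabilities from a stochastically simulated rain-gauge ensemble generally changes the resulting BSS, which is the evaluation uncertainty discussed above.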
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigation Areas for the continents of Africa and Europe as well as for the countries of Argentina, Brazil, Mexico, Peru and Uruguay in Latin America. For this update, a new inventory of subnational irrigation statistics was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 48.8 million ha, while the total area equipped for irrigation at the global scale is 278.8 million ha. The total number of subnational units in the inventory used for this update is 16 822, while the number of subnational units in the global inventory increased to 26 909. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation.
Mechanisms by which subvisible cirrus clouds (SVCs) might contribute to dehydration close to the tropical tropopause are not well understood. Recently, Ultrathin Tropical Tropopause Clouds (UTTCs) with optical depths around 10⁻⁴ have been detected in the western Indian Ocean. These clouds cover thousands of square kilometers as a distinct, homogeneous layer only 200–300 m thick just below the tropical tropopause. In their condensed phase, UTTCs contain only 1–5% of the total water and essentially no nitric acid. A new cloud stabilization mechanism is required to explain this small fraction of condensed water in the clouds and their small vertical thickness. This work suggests a mechanism that forces the particles into a thin layer, based on upwelling of the air of a few mm/s to balance the falling ice particles, supersaturation with respect to ice above the UTTC, and subsaturation below it. In situ measurements suggest that these requirements are fulfilled. The basic physical properties of this mechanism are explored by means of a single-particle model. Comprehensive 1-D cloud simulations demonstrate this stabilization mechanism to be robust against rapid temperature fluctuations of ±0.5 K. However, rapid warming (ΔT > 2 K) leads to evaporation of the UTTC, while rapid cooling (ΔT < −2 K) leads to destabilization of the particles, with the potential for significant dehydration below the cloud.
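The stabilization mechanism balances the particles' fall speed against the upwelling: a particle sinking below the layer enters subsaturated air, shrinks and falls more slowly; one lifted above it grows in supersaturated air and falls faster. A rough order-of-magnitude check that a few mm/s of upwelling can indeed balance micron-sized ice, using an uncorrected Stokes fall speed (all parameter values are illustrative, and the slip correction that matters at tropopause pressures is deliberately ignored):

```python
def stokes_settling_velocity(radius_m, rho_ice=920.0, rho_air=0.1,
                             mu_air=1.45e-5, g=9.81):
    """Stokes terminal fall speed of a small sphere, v = 2 r^2 (rho_p -
    rho_air) g / (9 mu), in m/s. No Cunningham slip correction, so this
    is only an order-of-magnitude sketch at stratospheric pressures."""
    return 2.0 * radius_m ** 2 * (rho_ice - rho_air) * g / (9.0 * mu_air)

# A ~5 micron ice sphere falls at a few mm/s, the magnitude of the
# upwelling invoked in the text.
v = stokes_settling_velocity(5e-6)
```

The restoring feedback comes from the particle radius entering quadratically: any displacement changes the radius via growth or evaporation and hence changes the fall speed in the direction that returns the particle to the layer.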
Measurements of OH, total peroxy radicals, non-methane hydrocarbons (NMHCs) and various other trace gases were made at the Meteorological Observatory Hohenpeissenberg in June 2000. The data from an intensive measurement period characterised by high solar insolation (18–21 June) are analysed. The maximum midday OH concentration ranged between 4.5×10⁶ molecules cm⁻³ and 7.4×10⁶ molecules cm⁻³. The maximum total ROx (ROx = OH + RO + HO2 + RO2) mixing ratio increased from about 55 pptv on 18 June to nearly 70 pptv on 20 and 21 June. A total of 64 NMHCs, including isoprene and monoterpenes, were measured every 1 to 6 hours. The oxidation rate of the NMHCs by OH was calculated and reached a total of over 14×10⁶ molecules cm⁻³ s⁻¹ on two days. A simple photostationary-state balance model was used to simulate the ambient OH and peroxy radical concentrations with the measured data as input. This approach was able to reproduce the main features of the diurnal profiles of both OH and peroxy radicals. The balance equations were used to test the effect of the assumptions made in this model. The results proved to be most sensitive to assumptions about the impact of unmeasured volatile organic compounds (VOC), e.g. formaldehyde (HCHO), and about the partitioning between HO2 and RO2. The measured OH concentration and peroxy radical mixing ratios were reproduced well by assuming the presence of 3 ppbv HCHO as a proxy for oxygenated hydrocarbons, and a HO2/RO2 ratio between 1:1 and 1:2. The most important source of OH, and conversely the greatest sink for peroxy radicals, was the recycling of HO2 radicals to OH. This reaction was responsible for the recycling of more than 45×10⁶ molecules cm⁻³ s⁻¹ on two days. The most important sink for OH, and the largest source of peroxy radicals, was the oxidation of NMHCs, in particular of isoprene and the monoterpenes.
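A photostationary-state balance equates a radical's production rate with the sum of its first-order losses, so the steady-state concentration is simply production divided by the total loss frequency. A minimal sketch of that balance with purely illustrative numbers (these are not the Hohenpeissenberg measurements or the actual model inputs):

```python
def steady_state_concentration(production_rate, loss_frequencies):
    """Photostationary state: [X]_ss = P / sum(k_i), with P in
    molecules cm^-3 s^-1 and each first-order loss frequency in s^-1,
    giving [X]_ss in molecules cm^-3."""
    return production_rate / sum(loss_frequencies)

# Illustrative only: production 5e7 molecules cm^-3 s^-1 against a
# total OH loss frequency of 10 s^-1 gives [OH] = 5e6 molecules cm^-3,
# the order of magnitude of typical midday OH.
oh = steady_state_concentration(5e7, [6.0, 3.0, 1.0])
```

Sensitivity tests like those described above amount to perturbing individual terms in the production or loss sums (e.g. adding an HCHO-driven loss) and comparing the resulting steady state with the measurements.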
Subvisible cirrus clouds (SVCs) may contribute to dehydration close to the tropical tropopause. The higher and colder SVCs are, and the larger their ice crystals, the more likely they represent the last efficient point of contact of the gas phase with the ice phase and, hence, the last dehydrating step before the air enters the stratosphere. The first simultaneous in situ and remote sensing measurements of SVCs were taken during the APE-THESEO campaign in the western Indian Ocean in February/March 1999. The observed clouds, termed Ultrathin Tropical Tropopause Clouds (UTTCs), belong to the geometrically and optically thinnest large-scale clouds in the Earth's atmosphere. Individual UTTCs may exist for many hours as a cloud layer only 200–300 m thick just a few hundred meters below the tropical cold point tropopause, covering up to 10⁵ km². With temperatures as low as 181 K, these clouds are prime representatives for defining the water mixing ratio of air entering the lower stratosphere.
We have used the SLIMCAT 3-D off-line chemical transport model (CTM) to quantify the Arctic chemical ozone loss in the year 2002/2003 and compare it with similar calculations for the winters 1999/2000 and 2003/2004. Recent changes to the CTM have improved the model's ability to reproduce polar chemical and dynamical processes. The updated CTM uses σ-θ as a vertical coordinate which allows it to extend down to the surface. The CTM has a detailed stratospheric chemistry scheme and now includes a simple NAT-based denitrification scheme in the stratosphere.
In the model runs presented here the model was forced by ECMWF ERA40 and operational analyses. The model used 24 levels extending from the surface to ~55 km and a horizontal resolution of either 7.5° × 7.5° or 2.8° × 2.8°. Two different radiation schemes, MIDRAD and the CCM scheme, were used to diagnose the vertical motion in the stratosphere. Based on tracer observations from balloons and aircraft, the more sophisticated CCM scheme gives a better representation of the vertical transport in this model, which includes the troposphere. The higher resolution model generally produces larger chemical O3 depletion, which agrees better with observations.
The CTM results show that very early chemical ozone loss occurred in December 2002 due to extremely low temperatures and early chlorine activation in the lower stratosphere. Thus, chemical loss in this winter started earlier than in the other two winters studied here. In 2002/2003 the local polar ozone loss in the lower stratosphere was ~40% before the stratospheric final warming. Larger ozone loss occurred in the cold year 1999/2000 which had a persistently cold and stable vortex during most of the winter. For this winter the current model, at a resolution of 2.8° x 2.8°, can reproduce the observed loss of over 70% locally. In the warm and more disturbed winter 2003/2004 the chemical O3 loss was generally much smaller, except above 620K where large losses occurred due to a period of very low minimum temperatures at these altitudes.
Number concentrations of total and non-volatile aerosol particles with diameters >0.01 μm as well as particle size distributions (0.4–23 μm diameter) were measured in situ in the Arctic lower stratosphere (10–20.5 km altitude). The measurements were obtained during the campaigns European Polar Stratospheric Cloud and Lee Wave Experiment (EUPLEX) and Envisat-Arctic-Validation (EAV). The campaigns were based in Kiruna, Sweden, and took place from January to March 2003. Measurements were conducted onboard the Russian high-altitude research aircraft Geophysica using the low-pressure Condensation Nucleus Counter COPAS (COndensation PArticle Counter System) and a modified FSSP 300 (Forward Scattering Spectrometer Probe). Around 18–20 km altitude, typical total particle number concentrations nt range from 10 to 20 cm−3 (ambient conditions). Correlations with the trace gases nitrous oxide (N2O) and trichlorofluoromethane (CFC-11) are discussed. Inside the polar vortex the total number of particles >0.01 μm increases with potential temperature while N2O decreases, which indicates a source of particles in the polar upper stratosphere or mesosphere above. A separate channel of the COPAS instrument measures the fraction of aerosol particles that are non-volatile at 250°C. Inside the polar vortex a much higher fraction of particles contained non-volatile residues than outside the vortex (~67% inside, ~24% outside). This is most likely due to a strongly increased fraction of meteoric material in the particles, which is transported downward from the mesosphere inside the polar vortex. The high fraction of non-volatile residual particles therefore gives experimental evidence for downward transport of mesospheric air inside the polar vortex. It is also shown that the fraction of non-volatile residual particles serves directly as a suitable experimental vortex tracer.
Nanometer-sized meteoric smoke particles may also serve as nuclei for the condensation of gaseous sulfuric acid and water in the polar vortex, and these additional particles may be responsible for the increase in the observed particle concentration at low N2O. The number concentrations of particles >0.4 μm measured with the FSSP decrease markedly inside the polar vortex with increasing potential temperature, also a consequence of the subsidence of air from higher altitudes inside the vortex. A further focus of the analysis was the particle measurements in the lowermost stratosphere. For the total particle density, relatively high number concentrations of several hundred particles per cm3 were observed at altitudes below ~14 km in several flights. To investigate the origin of these high number concentrations, we conducted air mass trajectory calculations and compared the particle measurements with other trace gas observations. The high number concentrations of total particles in the lowermost stratosphere are probably caused by transport of originally tropospheric air from lower latitudes and are potentially influenced by recent particle nucleation.
We report measurements of the deuterium content of molecular hydrogen (H2) obtained from a suite of air samples that were collected during a stratospheric balloon flight between 12 and 33 km at 40° N in October 2002. Strong deuterium enrichments of up to 400 per mil versus Vienna Standard Mean Ocean Water (VSMOW) are observed, while the H2 mixing ratio remains virtually constant. Thus, as hydrogen is processed through the H2 reservoir in the stratosphere, deuterium accumulates in H2. Using box model calculations, we investigated the effects of H2 sources and sinks on the stratospheric enrichments. Results show that considerable isotope enrichments in the production of H2 from CH4 must take place, i.e., deuterium is transferred preferentially to H2 during the CH4 oxidation sequence. This supports recent conclusions from tropospheric H2 isotope measurements which show that H2 produced photochemically from CH4 and non-methane hydrocarbons must be enriched in deuterium to balance the tropospheric hydrogen isotope budget. In the absence of further data on isotope fractionations in the individual reaction steps of the CH4 oxidation sequence, this effect cannot be investigated further at present. Our measurements imply that molecular hydrogen has to be taken into account when the hydrogen isotope budget of the stratosphere is investigated.
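Isotope enrichments of this kind are commonly interpreted in a Rayleigh fractionation framework, in which the isotope ratio of the remaining reservoir evolves as R/R0 = f^(α−1), with f the surviving fraction and α the fractionation factor of the sink. A sketch of that relation in delta notation (the ε value below is illustrative, not a measured stratospheric fractionation constant):

```python
def rayleigh_delta(f_remaining, eps_permil, delta0_permil=0.0):
    """Delta value (per mil) of the remaining reservoir after Rayleigh
    removal: alpha = 1 + eps/1000, R/R0 = f^(alpha - 1). A negative eps
    (sink prefers the light isotopologue) enriches what remains."""
    alpha = 1.0 + eps_permil / 1000.0
    r_ratio = f_remaining ** (alpha - 1.0)
    return ((1.0 + delta0_permil / 1000.0) * r_ratio - 1.0) * 1000.0
```

With f = 1 the delta value is unchanged; as f shrinks, the remaining reservoir becomes progressively enriched, which is the qualitative behaviour of deuterium accumulating in stratospheric H2.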
Balloon-borne measurements of CFC11 (from the DIRAC in situ gas chromatograph and the DESCARTES grab sampler), ClO and O3 were made during the 1999/2000 Arctic winter as part of the SOLVE-THESEO 2000 campaign, based in Kiruna (Sweden). Here we present the CFC11 data from nine flights and compare them first with data from other instruments which flew during the campaign and then with the vertical distributions calculated by the SLIMCAT 3-D CTM. We calculate ozone loss inside the Arctic vortex between late January and early March using the relation between CFC11 and O3 measured on the flights. The peak ozone loss (~1200 ppbv) occurs in the 440–470 K region in early March, in reasonable agreement with other published empirical estimates. There is also good agreement between the ozone losses derived from the three balloon tracer data sets used here. The magnitude and vertical distribution of the loss derived from the measurements are in good agreement with the loss calculated from SLIMCAT over Kiruna for the same days.
Turbulent fluxes of carbonyl sulfide (COS) and carbon disulfide (CS2) were measured over a spruce forest in Central Germany using the relaxed eddy accumulation (REA) technique. A REA sampler was developed and validated using simultaneous measurements of CO2 fluxes by REA and by eddy correlation. REA measurements were conducted during six campaigns covering spring, summer, and fall between 1997 and 1999. Both uptake and emission of COS and CS2 by the forest were observed, with deposition occurring mainly during the sunlit period and emission mainly during the dark period. On average, however, the forest acts as a sink for both gases. The average fluxes for COS and CS2 are -93 ± 11.7 pmol m-2 s-1 and -18 ± 7.6 pmol m-2 s-1, respectively. The fluxes of both gases appear to be correlated with photosynthetically active radiation and with the CO2 and H2O fluxes, supporting the idea that the air-vegetation exchange of both gases is controlled by stomata. An uptake ratio COS/CO2 of 10 ± 1.7 pmol μmol-1 has been derived from the regression line for the correlation between the COS and CO2 fluxes. This uptake ratio, if representative of the global terrestrial net primary production, would correspond to a sink of 2.3 ± 0.5 Tg COS yr-1.
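Relaxed eddy accumulation estimates a turbulent flux from the mean concentration difference between air sampled in updrafts and in downdrafts, scaled by the standard deviation of the vertical wind: F = β·σw·(C↑ − C↓). A minimal sketch (β ≈ 0.56 is a commonly used empirical coefficient; the concentration values below are invented for illustration):

```python
def rea_flux(beta, sigma_w, c_updraft, c_downdraft):
    """Relaxed eddy accumulation flux F = beta * sigma_w * (C_up - C_down).
    sigma_w in m s^-1, concentrations in pmol m^-3 -> flux in
    pmol m^-2 s^-1; negative values denote deposition to the surface."""
    return beta * sigma_w * (c_updraft - c_downdraft)

# Illustrative: updraft air depleted relative to downdraft air implies
# uptake by the canopy (negative flux).
f = rea_flux(0.56, 0.5, 100.0, 150.0)
```

The sign convention mirrors the abstract: negative average COS and CS2 fluxes mean the forest is, on balance, a sink for both gases.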
A comprehensive set of stratospheric balloon and aircraft samples was analyzed for the position-dependent isotopic composition of nitrous oxide (N2O). Results for a total of 220 samples from between 1987 and 2003 are presented, nearly tripling the number of mass-spectrometric N2O isotope measurements in the stratosphere published to date. Cryogenic balloon samples were obtained at polar (Kiruna/Sweden, 68° N), mid-latitude (southern France, 44° N) and tropical sites (Hyderabad/India, 18° N). Aircraft samples were collected with a newly developed whole air sampler on board the high-altitude aircraft M55 Geophysica during the EUPLEX 2003 campaign. For mixing ratios above 200 nmol mol−1, relative isotope enrichments (δ values) and mixing ratios display a compact relationship, which is nearly independent of latitude and season and which can be explained equally well by Rayleigh fractionation or mixing. However, for mixing ratios below 200 nmol mol−1 this compact relationship gives way to meridional, seasonal and interannual variations. A comparison with a previously published mid-latitude balloon profile even shows large zonal variations, justifying the use of three-dimensional (3-D) models for further data interpretation.
In general, the magnitude of the apparent fractionation constants (i.e., apparent isotope effects) increases continuously with altitude and decreases from the equator to the North Pole. Only the latter observation can be understood qualitatively by the interplay between the time-scales of N2O photochemistry and transport in a Rayleigh fractionation framework. Deviations from Rayleigh fractionation behavior also occur where polar vortex air mixes with nearly N2O-free upper stratospheric/mesospheric air (e.g., during the boreal winters of 2003 and possibly 1992). Aircraft observations in the polar vortex at mixing ratios below 200 nmol mol−1 deviate from isotope variations expected for both Rayleigh fractionation and two-end-member mixing, but could be explained by continuous weak mixing between intravortex and extravortex air (Plumb et al., 2000). However, it appears that none of the simple approaches described here can capture all features of the stratospheric N2O isotope distribution, again justifying the use of 3-D models. Finally, correlations between 18O/16O and average 15N/14N isotope ratios or between the position-dependent 15N/14N isotope ratios show that photo-oxidation makes a large contribution to the total N2O sink in the lower stratosphere (possibly up to 100% for N2O mixing ratios above 300 nmol mol−1). Towards higher altitudes, the temperature dependence of these isotope correlations becomes visible in the stratospheric observations.
Balloon-borne stratospheric BrO measurements: comparison with Envisat/SCIAMACHY BrO limb profiles
(2006)
For the first time, results of all four existing stratospheric BrO profiling instruments are presented and compared with reference to the SLIMCAT 3-dimensional chemical transport model (3-D CTM). Model calculations are used to infer a BrO profile validation set, measured by three different balloon sensors, for the new Envisat/SCIAMACHY (ENVIronment SATellite/SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) satellite instrument. The balloon observations include (a) balloon-borne in situ resonance fluorescence detection of BrO, (b) balloon-borne solar occultation DOAS measurements (Differential Optical Absorption Spectroscopy) of BrO in the UV, and (c) BrO profiling from the solar occultation SAOZ (Système d'Analyse par Observation Zénithale) balloon instrument. Since stratospheric BrO is subject to considerable diurnal variation and none of the measurements were performed close enough in time and space for a direct comparison, all balloon observations are considered with reference to outputs from the 3-D CTM. The referencing is performed by forward and backward air mass trajectory calculations to match the balloon with the satellite observations. The diurnal variation of BrO is considered by 1-D photochemical model calculations along the trajectories. The 1-D photochemical model is initialised with output data of the 3-D model, with additional constraints on the vertical transport, the total amount and the photochemistry of stratospheric bromine as given by the various balloon observations. Total [Bry] = (20.1±2.8) pptv, obtained from DOAS BrO observations at mid-latitudes in 2003, serves as an upper limit of the comparison. Most of the balloon observations agree with the photochemical model predictions within their given error estimates. First retrieval exercises of BrO limb profiling from the SCIAMACHY satellite instrument agree to within ±50% with the photochemically corrected balloon observations, and tend to show less agreement below 20 km.
During SPURT (Spurenstofftransport in der Tropopausenregion, trace gas transport in the tropopause region) we performed measurements of a wide range of trace gases with different lifetimes and sink/source characteristics in the northern hemispheric upper troposphere (UT) and lowermost stratosphere (LMS). A large number of in-situ instruments were deployed on board a Learjet 35A, flying at altitudes up to 13.7 km, at times reaching to nearly 380 K potential temperature. Eight measurement campaigns (consisting of a total of 36 flights), distributed over all seasons and typically covering latitudes between 35° N and 75° N in the European longitude sector (10° W–20° E), were performed. Here we present an overview of the project, describing the instrumentation, the encountered meteorological situations during the campaigns and the data set available from SPURT. Measurements were obtained for N2O, CH4, CO, CO2, CFC12, H2, SF6, NO, NOy, O3 and H2O. We illustrate the strength of this new data set by showing mean distributions of the mixing ratios of selected trace gases, using a potential temperature – equivalent latitude coordinate system. The observations reveal that the LMS is most stratospheric in character during spring, with the highest mixing ratios of O3 and NOy and the lowest mixing ratios of N2O and SF6. The lowest mixing ratios of NOy and O3 are observed during autumn, together with the highest mixing ratios of N2O and SF6 indicating a strong tropospheric influence. For H2O, however, the maximum concentrations in the LMS are found during summer, suggesting unique (temperature- and convection-controlled) conditions for this molecule during transport across the tropopause. The SPURT data set is presently the most accurate and complete data set for many trace species in the LMS, and its main value is the simultaneous measurement of a suite of trace gases having different lifetimes and physical-chemical histories. 
It is thus very well suited for studies of atmospheric transport, for model validation, and for investigations of seasonal changes in the UT/LMS, as demonstrated in accompanying studies and in studies published elsewhere.
During several balloon flights inside the Arctic polar vortex in early 2003, unusual trace gas distributions were observed, which indicate a strong influence of mesospheric air in the stratosphere. The tuneable diode laser (TDL) instrument SPIRALE (Spectroscopie InFrarouge par Absorption de Lasers Embarqués) measured unusually high CO values (up to 600 ppb) on 27 January at about 30 km altitude. The cryosampler BONBON sampled air masses with very high molecular hydrogen, extremely low SF6 and enhanced CO values on 6 March at about 25 km altitude. Finally, the MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) Fourier Transform Infra-Red (FTIR) spectrometer showed NOy values significantly higher than NOy* (the NOy derived from a correlation between N2O and NOy under undisturbed conditions) on 21 and 22 March in a layer centred at 22 km altitude. Thus, the mesospheric air seems to have been present in a layer descending from about 30 km in late January to 25 km in early March and about 22 km on 20 March. We present corroborating evidence from a model study using the KASIMA (KArlsruhe SImulation model of the Middle Atmosphere) model, which also shows a layer of mesospheric air that descended into the stratosphere in November and early December 2002, before the minor warming that occurred in late December 2002 led to a descent of upper stratospheric air, cutting off a layer in which mesospheric air is present. This layer then descended inside the vortex over the course of the winter. The same feature is found in trajectory calculations based on a large number of trajectories started in the vicinity of the observations on 6 March.
Based on the difference between the mean age derived from SF6 (which has an irreversible mesospheric loss) and from CO2 (whose mesospheric loss is much smaller and reversible), we estimate that the fraction of mesospheric air in the layer observed on 6 March must have been somewhere between 35% and 100%.
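At heart, a fraction estimate of this kind is a two-end-member mixing calculation: given the observed value of a tracer-derived quantity (such as the SF6-derived mean age offset) and values characteristic of pure stratospheric and pure mesospheric air, solve the linear mixing equation for the mesospheric fraction. A schematic sketch (the numbers in the usage comment are illustrative, not the campaign values):

```python
def end_member_fraction(value_obs, value_strat, value_meso):
    """Two-end-member mixing: find f such that
    f * value_meso + (1 - f) * value_strat = value_obs.
    Works for any linearly mixing tracer-derived quantity."""
    return (value_obs - value_strat) / (value_meso - value_strat)

# Illustrative: an observed value halfway between the stratospheric
# end member (5.0) and the mesospheric end member (0.0) implies f = 0.5.
f = end_member_fraction(2.5, 5.0, 0.0)
```

Uncertainty in the end-member values translates directly into a range for f, which is one way a broad interval such as 35–100% can arise.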
A new version of a digital global map of irrigation areas was developed by combining irrigation statistics for 10 825 sub-national statistical units and geo-spatial information on the location and extent of irrigation schemes. The map shows the percentage of each 5 arc minute by 5 arc minute cell that was equipped for irrigation around the year 2000. It is thus an important data set for global studies related to water and land use. This paper describes the data set and the mapping methodology and gives, for the first time, an estimate of the map quality at the scale of countries, world regions and the globe. Two indicators of map quality were developed for this purpose, and the map was compared to irrigated areas as derived from two remote sensing based global land cover inventories.
Flow velocity in rivers has a major impact on the residence time of water and thus on high and low water as well as on water quality. For global-scale hydrological modeling, only very limited information is available for simulating flow velocity. Based on the Manning-Strickler equation, a simple algorithm to model temporally and spatially variable flow velocity was developed with the objective of improving flow routing in the global hydrological model WaterGAP. An extensive data set of flow velocity measurements in US rivers was used to test and validate the algorithm before integrating it into WaterGAP. In this test, flow velocity was calculated based on measured discharge and compared to measured velocity. Results show that flow velocity can be modeled satisfactorily at selected river cross sections. Flow velocity turned out to be quite sensitive to river roughness, and the results can be optimized by tuning this parameter. After the validation of the approach, the flow velocity algorithm was implemented in the WaterGAP model. A final validation of its effects on the model results is currently being performed.
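The Manning-Strickler equation gives the mean flow velocity as v = k_st · R^(2/3) · S^(1/2), with roughness coefficient k_st, hydraulic radius R and slope S. A minimal sketch with hypothetical parameter values (not the WaterGAP implementation):

```python
def flow_velocity(k_st, hydraulic_radius, slope):
    """Mean flow velocity [m/s] from the Manning-Strickler equation:
    v = k_st * R^(2/3) * S^(1/2)."""
    return k_st * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical values: roughness k_st = 30 m^(1/3)/s (natural channel),
# hydraulic radius R = 2 m, energy slope S = 0.0005.
v = flow_velocity(30.0, 2.0, 0.0005)
```

The sensitivity to roughness noted in the abstract is visible here: velocity scales linearly with k_st, which is why tuning this parameter directly rescales the result.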
Maiduguri, an important city in the Sudano-Sahelian zone of West Africa, experiences both droughts and floods. Although droughts receive more attention, floods are a seasonal occurrence in parts of the city in an average rainy season. Both hazards exert a heavy toll on their victims. The present response to the hazard problems is characterised by a fire-fighting approach which does little to address future occurrences. Much of the perception and response is spiritual and stops short of the structural and organisational programmes needed for effective mitigation of hazards. Future occurrences of drought and flood may have more adverse effects as land use in the city becomes more complex and the agricultural and water supply systems come to depend heavily on surface sources. Future effects will also depend on the socio-economic conditions of the people at risk and the capacity of those who help them. Governments and people need to work together to reduce drought and flood hazards.
One possible approach to systematically study the influence of the deformation regime on the geometry of geological structures like folds and boudins is analogue modelling. For a complete understanding of the resulting structures, consideration of the third dimension is required. This PhD study deals with scaled analogue modelling under constriction and plane-strain conditions to improve our knowledge of folding and boudinage of lower crustal rocks in space and time. Plasticine is an appropriate analogue material for rocks in the lower crust and was therefore used for the experiments. The macroscopic behaviour of most types of plasticine is quite similar to rocks undergoing strain-rate softening and strain hardening, regardless of the different microscopic aspects of deformation. Therefore, if one is aware that the stress exponent and viscosity increase with increasing strain, the original plasticine types used, with stress exponents ranging from 5.8 to 8.0, are adequate for modelling geologic structures. The same holds for plasticine/oil mixtures. Thus, plasticine and plasticine/oil mixtures can be used to model the viscous flow of different rock types in the lower crust. If climb-accommodated dislocation creep and associated steady-state flow is assumed for the natural rocks, the plasticine/oil mixtures, which flow under steady-state conditions, should be used. Three different experimental studies of plane-strain coaxial deformation of stiff layers, with viscosity η2 and stress exponent n2, embedded in a weak matrix, with viscosity η1 and stress exponent n1, have been carried out. The undeformed samples (matrix plus layer) were cubes with an edge length of 12 cm. All experimental runs were carried out at T = 25 ± 1°C and varying strain rates ė, ranging from 7.9 x 10^-6 s^-1 to 1.7 x 10^-2 s^-1, until a finite longitudinal strain of 30-40% was achieved.
The first experimental study improved our understanding of the evolution of folds and boudins when the layer is oriented perpendicular to the Y-axis of the finite strain ellipsoid. The rock analogues used were Beck’s green plasticine (matrix) and Beck’s black plasticine (competent layer), both of which are strain-rate softening modelling materials with stress exponent n = ca. 8. The effective viscosity η of the matrix plasticine was changed by adding different amounts of oil to the original plasticine. At a strain rate ė of 10^-3 s^-1 and a finite strain e of 10%, the effective viscosity of the matrix ranges from 1.2 x 10^6 to 7.2 x 10^6 Pa s. The effective viscosity of the competent layer has been determined as 4.2 x 10^7 Pa s. If the viscosity ratio is large (> ca. 20) and the initial thickness of the competent layer is small, both folds and boudins develop simultaneously. Although the growth rate of the folds seems to be higher than that of the boudins, the wavelength of both structures is approximately the same, as suggested by analytical solutions. A further unexpected, but characteristic, aspect of the deformed competent layer is a significant increase in thickness, which can be used to distinguish plane-strain folds and boudins from constrictional folds and boudins. In the second experimental study, the impact of varying strain rates on growing folds and boudins under plane strain was investigated. The strain rates used range from 7.9 x 10^-6 s^-1 to 1.7 x 10^-2 s^-1. The stiff layer and matrix consist of non-linear viscous Kolb grey and Beck’s green plasticine, respectively, both of which are strain-rate softening modelling materials with power-law exponents (n) and apparent viscosities (η) ranging from 6.5 to 7.9 and 8.5 x 10^6 to 7.2 x 10^6 Pa s, respectively. The effective viscosity (η) of the matrix plasticine was partly modified by adding oil to the original plasticine.
At the strain rates used in the experiments, the viscosity ratio between layer and matrix ranges between 3 and 10. Different runs have been carried out in which the layer was oriented perpendicular to the principal strain axes (X>Y>Z). The results suggest a considerable influence of the strain rate on the geometry of the deformed stiff layer, including its thickness. This holds for every type of layer orientation (S ┴ X, S ┴ Y, S ┴ Z). If the stiff layer is oriented perpendicular to the short axis, Z, of the finite strain ellipsoid, the number of the resulting boudins and the thickness of the stiff layer increase, whereas the length of the boudins decreases with increasing strain rate. If the stiff layer is oriented perpendicular to the long axis, X, of the finite strain ellipsoid, an increase in strain rate results in an increasing wavelength of folds, whereas the number of folds and the degree of thickening of the stiff layer decrease. If the stiff layer is oriented perpendicular to the intermediate Y-axis of the finite strain ellipsoid, an increase in strain rate results in a decreasing number of boudins and folds, associated with increasing wavelengths of both structures. The wavelength of folds is approximately half the boudin wavelength. This is true both for the case where folds and boudins develop simultaneously (S ┴ Y) and for cases where both structures develop independently (folds at S ┴ X and boudins at S ┴ Z). In the third experimental study, scaled analogue experiments have been carried out to demonstrate the growth of plane-strain folds and boudins through space and time. Previous 3D studies are based only on finite deformation structures; their results can therefore not be used to show whether both structures grew simultaneously or in sequence. Plane strain acted on a single stiff layer embedded in a weak matrix, with the layer oriented perpendicular to the intermediate Y-axis of the finite strain ellipsoid.
Two different experimental runs have been carried out using computer tomography (CT) to analyse the results. The first run was carried out without interruption. During the second run, the deformation was stopped at longitudinal strain increments of 10%. Every experiment was carried out at a temperature T of 25°C and a strain rate, ė, of ca. 4 x 10^-3 s^-1 until a finite longitudinal strain of 40% was achieved, with a viscosity contrast m of 18.6 between the non-linear viscous layer (Kolb brown plasticine) and the matrix (Beck’s green plasticine with 150 ml oil kg^-1). The apparent viscosity, η, and the stress exponent, n, at a strain rate ė = ca. 10^-3 s^-1 and a finite strain e = 10% are 2.23 x 10^7 Pa s and 5.8 for the layer, and 1.2 x 10^6 Pa s and 10.5 for the matrix. These new data, resulting from incremental analogue modelling, corroborate previous suggestions that folds and boudins are coeval structures in cases of plane-strain coaxial deformation with the stiff layer oriented perpendicular to the intermediate Y-axis of the finite strain ellipsoid. They will be of interest to all workers dealing with plane-strain boudins and folds where the fold axes are parallel to the major axis (X) of the finite strain ellipsoid. As demonstrated by the first experimental study, coeval folding and boudinage under plane strain, with S ┴ Y, are associated with a significant increase in the thickness of the competent layer. The latter phenomenon does not occur in other cases of simultaneous folding and boudinage, such as bulk pure constriction. To study the impact of layer thickness on the geometry of folds and boudins under pure constriction, we carried out additional experiments using different types of plasticine for a stiff layer and a weaker matrix to model folding and boudinage under pure constriction, with the initially planar layer oriented parallel to the X-axis of the finite strain ellipsoid.
The stiff layer and matrix consist of non-linear viscous Kolb brown and Beck’s green plasticine, respectively, both of which are strain-rate softening modelling materials. Six runs have been carried out using stiff-layer thicknesses of 1, 2, 4, 6, 8 and 10 ± 0.2 mm. All experimental runs were carried out at a temperature T of 30 ± 2°C and a strain rate, ė, of ca. 1.1 x 10^-4 s^-1 until a finite longitudinal strain of 40% was achieved, with a viscosity contrast m of 3.1 between the stiff layer (Kolb brown plasticine) and the matrix (Beck’s green plasticine). The apparent viscosity, η, and the stress exponent, n, at a strain rate ė = ca. 10^-3 s^-1 and a finite strain e = 10% are 2.23 x 10^7 Pa s and 5.8 for the layer, and 7.2 x 10^6 Pa s and 7.9 for the matrix. Our results suggest a considerable influence of the initial thickness of the stiff layer on its deformed geometry. There is no evidence for folding in XY=XZ sections if the initial thickness of the competent layer is larger than ca. 8 mm. If the initial thickness of the competent layer is set at ca. 10 ± 0.2 mm, both folds and boudins develop simultaneously. However, the growth rate of the boudins seems to be higher than that of the folds. A further expected, but characteristic, aspect is that the thickness of the competent layer does not change, which can be used to distinguish plane-strain folds and boudins from constrictional folds and boudins. The model results are important for the analysis and interpretation of deformation structures in rheologically stratified rocks undergoing dislocation creep under bulk constriction. Tectonic settings where constrictional folds and boudins may develop simultaneously are stems of salt diapirs, subduction zones or thermal plumes. To make (paleo)viscosimetric statements possible, the rheological data of the different plasticine types were related to the geometrical data.
A comparison of the normalized dominant wavelength Wd obtained from the deformed layers of the models with the theoretical dominant wavelength Ld calculated from the equation of Smith (1977, 1979) suggests that the latter also holds when folding and boudinage develop simultaneously (S ┴ Y) and when boudins develop independently (S ┴ Z), but evidently cannot be applied at very low viscosity ratios, as indicated by the low-strain-rate experiments.
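For linear viscous materials, the classical dominant-wavelength expression (which Smith generalized to power-law rheologies) is Ld = 2πh (η2/(6 η1))^(1/3) for a layer of thickness h. A small sketch using the viscosities quoted for the first experimental study and a hypothetical 2 mm layer; note that the power-law corrections of Smith (1977, 1979) are not included here:

```python
import math

def biot_dominant_wavelength(thickness, eta_layer, eta_matrix):
    """Dominant fold wavelength for a linear viscous layer in a weaker
    matrix (classical Biot result): Ld = 2*pi*h * (eta2 / (6*eta1))^(1/3).
    Smith's analysis modifies this for power-law (strain-rate dependent)
    materials, which is not modelled here."""
    return 2.0 * math.pi * thickness * (eta_layer / (6.0 * eta_matrix)) ** (1.0 / 3.0)

# Layer viscosity 4.2e7 Pa s and matrix viscosity 1.2e6 Pa s as in the
# first experimental study; 2 mm layer thickness is hypothetical.
wd = biot_dominant_wavelength(2e-3, 4.2e7, 1.2e6)
```

Normalizing by the layer thickness (Wd = Ld/h) gives the dimensionless wavelength that can be compared against the deformed models.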
The present work was devised to address the systematic analysis of samples from a range of Roman non-ferrous metal artefacts from different archaeological contexts and sites in the Roman province of Germania Superior. One of the focal points of this study is the provenancing of different lead objects from five important Roman settlements between 15 BC and the beginning of the fourth century AD. For this purpose, measurements were made on lead and copper ore samples from the Siegerland, Eifel, Hunsrück and Lahn-Dill areas in Germany and supplemented with data from the literature to create a database of lead isotope ratios of European deposits. Compositional analysis of the lead objects by electron microprobe showed that the Romans were able to purify lead from ore up to 99%. Multi-Collector Inductively Coupled Plasma Mass-Spectrometry was used to determine the source of lead, which played an important role in nearly all aspects of Roman life. Lead isotope ratios were measured for ore samples from German deposits on the eastern side of the Rhine (Siegerland, Lahn-Dill, Ems) and the western side of the Rhine (Eifel, Hunsrück), which contained enough ore reserves to meet the increasing local demand and are believed to have been mined during the Roman period. These data, together with literature data on Mediterranean ore deposits, were used to establish the database. The Mediterranean ore deposits range from Cambrian (high 207Pb/206Pb) to Tertiary (lower 207Pb/206Pb) values. In particular, the Cypriot deposits are younger, while the Spanish deposits fall either with the younger Sardinian ores or close to the older Cypriot ores. The lead isotope ratios of most German ore deposits fall in between the 208Pb/206Pb vs. 207Pb/206Pb ratios of Sardinia and Cyprus, where the lead isotope signatures of ore deposits from France and Britain are also found.
Over 240 lead objects were measured from Wallendorf (second century BC to first century AD), Dangstetten (15-8 BC), Waldgirmes (AD 1-10), Mainz (AD 1-300), Martberg (first to fourth centuries AD) and Trier (third to fourth centuries AD). Comparing the lead isotope ratios of the lead objects with those of German ores shows that the source of over 85 percent of the objects is Eifel ore deposits, although the Romans also imported lead from the Southern Massif Central and from Great Britain. A further topic of this work was the systematic study of the variation of copper isotope ratios in different copper minerals and of the mechanisms that control copper isotope fractionation in ore deposits. For this purpose, copper isotope analyses were made by Multi-Collector Inductively Coupled Plasma Mass-Spectrometry on a series of hydrothermal copper sulphides and their alteration products. Copper and lead isotope ratios were measured in coexisting phases of chalcopyrite and malachite and also in coexisting malachite and azurite. No significant fractionation was observed between malachite and azurite, but in coexisting chalcopyrite-malachite phases, malachite always shows a positive fractionation towards heavier isotope values. Zhu et al. and Larson et al. showed that isotopic variations in copper principally reflect mass fractionation in response to low-temperature processes rather than source heterogeneity. The low-temperature ore formation processes are mostly represented by weathering of primary sulphide ores to produce secondary carbonate phases and are therefore usually observed at the surface of ore deposits, which was probably removed during the early Bronze Age. Using this concept, copper isotope ratios were measured in some Early Bronze Age copper alloys and Roman copper alloys. However, no large copper isotope fractionation was observed. Lead and copper isotope ratios were also measured on samples from the Kupferschiefer.
Two profiles were investigated: (1) Sangerhausen, which was not directly influenced by the oxidizing brines of the Rote Fäule, and (2) Oberkatz, where both Rote Fäule-controlled and structure-controlled mineralization were observed. Results from maturation studies of organic matter suggest that the maximum temperature affecting the Kupferschiefer did not exceed 130°C. δ65Cu ranges between -0.78 and +0.58‰ and shows a positive correlation with copper concentration. The maximum temperature in the Kupferschiefer profile from Oberkatz is estimated to be around 150°C. δ65Cu in this profile ranges between -0.71 and +0.68‰. The pattern of copper isotope fractionation and copper concentration is the same as for the Sangerhausen profile. Original lead isotope ratios are strongly overprinted by high uranium concentrations at the bottom of both profiles, producing more radiogenic lead.
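Provenancing by lead isotopes amounts to comparing an artefact's isotope-ratio signature with the fields of candidate ore deposits. A deliberately crude sketch with hypothetical field centres; real provenance work compares measurement uncertainties and error ellipses with whole deposit fields rather than matching single points:

```python
def nearest_deposit(sample, deposits):
    """Return the deposit whose (208Pb/206Pb, 207Pb/206Pb) signature lies
    closest (Euclidean distance) to the sample's measured ratios."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return min(deposits, key=lambda name: dist(sample, deposits[name]))

# Hypothetical isotope-ratio field centres (208Pb/206Pb, 207Pb/206Pb):
deposits = {
    "Eifel":    (2.086, 0.846),
    "Sardinia": (2.091, 0.852),
    "Cyprus":   (2.065, 0.834),
}
source = nearest_deposit((2.085, 0.845), deposits)
```

Because deposit fields overlap (as the abstract notes for German, French and British ores), a nearest-field match alone is rarely conclusive and is combined with archaeological context.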
The assumption that mankind is able to influence global or regional climate through the emission of greenhouse gases is often discussed. This assumption is both very important and very uncertain. In consequence, it is necessary to clarify definitively which meteorological elements (climate parameters) are influenced by the anthropogenic climate impact, to what extent, and in which regions of the world. In addition, to be able to interpret such information properly, it is also necessary to know the magnitude of the different climate signals due to natural variability (for example due to volcanic or solar activity) and the magnitude of stochastic climate noise. The usual tools of climatologists, general circulation models (GCMs), suffer from the problem that they are at least quantitatively uncertain with regard to the regional patterns of the behaviour of climate elements, and from the lack of accurate information about long-term (decadal and centennial) forcing. In contrast, statistical methods as used in this study have the advantage of testing hypotheses directly on observational data. Thus, we focus on the climate variability that has actually occurred in the past. We apply two strategies of time series analysis to the observed climate variables under consideration. First, each time series is split into its variation components. This procedure is called 'structure-oriented time series separation'. The second strategy, called 'cause-oriented time series separation', matches various time series representing various forcing mechanisms with those representing the climate behaviour (climate elements). In this way it can be assessed which part of the observed climate variability can be explained by this (combined) forcing and which part remains unexplained.
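The structure-oriented separation can be illustrated in its simplest form: removing a fitted linear trend from a series and treating the remainder as residual variability. A toy sketch with a synthetic temperature series (the actual study separates several variation components, not just a trend):

```python
import numpy as np

def separate(series, years):
    """Toy 'structure-oriented' separation of a climate series into a
    linear trend component and the residual (a drastic simplification of
    the full decomposition into variation components)."""
    slope, intercept = np.polyfit(years, series, 1)
    trend = slope * years + intercept
    residual = series - trend
    return trend, residual

# Synthetic annual series: a 0.007 K/yr warming trend plus noise.
years = np.arange(1900, 2000, dtype=float)
rng = np.random.default_rng(0)
series = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)
trend, residual = separate(series, years)
```

Further components (e.g. low-frequency fluctuations or extreme events) would be extracted from the residual in the same stepwise manner.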
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid waterlogging and to manage high groundwater tables. Among other impacts, it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5’ by 5’ grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries, the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 out of the 116 countries with information on artificial drainage areas could sub-national information be taken into account. Due to this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and it discusses the data uncertainty.
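The downscaling idea behind the map can be sketched as a proportional allocation of a national drained area to grid cells according to each cell's cropland area. The numbers below are hypothetical, and the real procedure also draws on the irrigation map and CORINE data:

```python
def distribute_drainage(national_drained_area, cropland_area_per_cell):
    """Allocate a national artificially drained area (ha) to grid cells
    in proportion to each cell's cropland area (ha) -- a sketch of the
    proportional downscaling, not the full mapping procedure."""
    total = sum(cropland_area_per_cell.values())
    if total == 0:
        return {cell: 0.0 for cell in cropland_area_per_cell}
    return {cell: national_drained_area * area / total
            for cell, area in cropland_area_per_cell.items()}

# Hypothetical cropland areas (ha) for three grid cells of one country:
cells = {"c1": 4000.0, "c2": 1000.0, "c3": 0.0}
drained = distribute_drainage(2500.0, cells)
```

Dividing each cell's allocated area by the cell's total area then yields the percentage equipped for drainage that the map reports.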
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From these, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in situ observations from the high-altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4 x 10^-6 cm^-3 h^-1 in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, denitrification has only a moderate impact (<= 10%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloonborne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l’Adour, France. For both flights, ClO daylight and night-time vertical profiles could be derived over an altitude range of approximately 15–31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l’Adour in 1999 show good agreement with the ClO measurements. For the flight launched in León in 1996, similarly good agreement is found, except at around ~650 K potential temperature (~26 km altitude). However, there is a tendency for the simulated ClO mixing ratios at solar zenith angles greater than 86°–87° to substantially overestimate measured ClO, by approximately a factor of 2.5 or more, for both flights. We therefore conclude that no indication can be deduced from the presented ClO measurements that substantial uncertainties exist in the midlatitude chlorine chemistry of the stratosphere, the exception being solar zenith angles greater than 86°–87°, where the model simulations substantially overestimate the ClO observations.
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Thus, traditional linear statistical methods are not suited to describing the nonlinearities of this system, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Following an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear Backpropagation Neural Network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent concentrations, termed greenhouse gases (GHG), and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, the free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN far exceeds that of a linear model; the simulation quality on the global scale amounts to 84% explained variance. Additionally, the results of the nonlinear algorithm are physically plausible in both amplitude and time structure. Nevertheless, they cover a broad range; e.g. the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the period 1856-1998. However, the simulated amplitudes lie within the range discussed in the literature (HOUGHTON et al., 2001). Additionally, the combined anthropogenic effect corresponds to the observed increase in temperature for the examined period. Moreover, the BPN succeeds in detecting anthropogenically induced climate change at a high significance level.
Therefore the concept of neural networks can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
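A backpropagation network of the kind referred to here can be illustrated with a minimal one-hidden-layer example trained by gradient descent on a toy forcing-response relationship. This is an illustrative sketch, not the study's architecture or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one input (a "forcing" series), one output ("temperature").
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.tanh(2.0 * x) + 0.05 * rng.normal(size=x.shape)

# One hidden layer with tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)           # forward pass
    out = h @ W2 + b2
    err = out - y                      # backpropagate squared error
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh derivative
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

explained = 1 - ((out - y) ** 2).mean() / y.var()  # explained variance
```

The trial-and-error tuning mentioned in the abstract corresponds to choices such as the number of hidden units, the learning rate and the weight initialization above.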
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In the second step, time series components, such as trends, are removed to obtain a deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It turns out that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons also show this increase, especially in autumn, in Germany an insignificant decrease is found in the summer and autumn seasons. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and in the parameters of the Weibull probability density function.
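The first step of the analysis, counting threshold exceedances in the two halves of the record, can be sketched as follows (with synthetic monthly totals standing in for the observed data):

```python
import numpy as np

def exceedance_counts(series, threshold):
    """Count values above a threshold in the first and second halves of
    a series -- the first step of the extreme-event analysis."""
    half = len(series) // 2
    first = int(np.sum(series[:half] > threshold))
    second = int(np.sum(series[half:2 * half] > threshold))
    return first, second

# Synthetic monthly precipitation totals (mm), threshold at the 90th
# percentile of the whole record.
rng = np.random.default_rng(2)
totals = rng.gamma(shape=2.0, scale=30.0, size=1200)
t90 = np.percentile(totals, 90)
n1, n2 = exceedance_counts(totals, t90)
```

A change in the split between the two counts (here n1 vs. n2) is what indicates a temporal change in the frequency of extremes; the second, component-removal step then probes its causes.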
Simulation of global temperature variations and signal detection studies using neural networks
(1998)
The concept of neural network models (NNM) is a statistical strategy which can be used if a superposition of forcing mechanisms leads to observable effects and if a sufficient observational data base is available. In comparison to multiple regression analysis (MRA), the main advantages are that NNM are an appropriate tool even in the case of non-linear cause-effect relations and that interactions of the forcing mechanisms are allowed. In comparison to more sophisticated methods like general circulation models (GCM), the main advantage is that details of the physical background, such as feedbacks, can remain unknown: neural networks learn from observations, which reflect feedbacks implicitly. The disadvantage, of course, is that the physical background is neglected. In addition, the results prove to be sensitively dependent on the network architecture, e.g. the number of hidden neurons or the initialisation of learning parameters. We used a supervised backpropagation network (BPN) with three neuron layers, an unsupervised Kohonen network (KHN) and a combination of both called a counterpropagation network (CPN). These concepts are tested with respect to their ability to simulate the observed global as well as hemispheric mean surface air temperature annual variations 1874–1993 when parameter time series of the following forcing mechanisms are incorporated: equivalent CO2 concentrations, tropospheric sulfate aerosol concentrations (both anthropogenic), volcanism, solar activity, and ENSO (all natural). In this way, up to 83% of the observed temperature variance can be explained, significantly more than by MRA. The inclusion of the North Atlantic Oscillation does not improve these results. On a global average, the greenhouse gas (GHG) signal so far is assessed to be 0.9–1.3 K (warming) and the sulfate signal 0.2–0.4 K (cooling), results which are in close agreement with the GCM findings published in the recent IPCC Report.
The related signals of the natural forcing mechanisms considered cover amplitudes of 0.1–0.3 K. Our best NNM estimate of the GHG doubling signal amounts to 2.1 K (equilibrium) or 1.7 K (transient).
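The multiple regression analysis (MRA) used as the linear reference can be sketched as an ordinary least-squares fit of a temperature series on several forcing series, from which individual signal amplitudes are read off. All series below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120

# Toy forcing series (hypothetical, standardized): greenhouse gas,
# sulfate aerosol, and a noise-like natural index (ENSO stand-in).
ghg = np.linspace(0.0, 1.0, n)
so4 = np.sqrt(ghg)                      # correlated, slower-rising forcing
enso = rng.normal(0.0, 1.0, n)

# Synthetic temperature: warming from GHG, cooling from sulfate, plus noise.
temp = 1.0 * ghg - 0.3 * so4 + 0.1 * enso + rng.normal(0.0, 0.05, n)

# The MRA reference model: least-squares fit of all forcings at once.
X = np.column_stack([np.ones(n), ghg, so4, enso])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
ghg_signal = coef[1] * ghg              # reconstructed GHG component
```

The regression can only recover additive, linear contributions; the NNMs discussed above relax exactly this restriction by allowing nonlinear responses and interactions between forcings.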
The climate system can be regarded as a dynamic nonlinear system. Thus, traditional linear statistical methods fail to model the nonlinearities of such a system, and alternative statistical techniques are required. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for a couple of years now. Most authors use the standard Backpropagation Network (BPN) for their investigations, although this specific model architecture carries a certain risk of over-/underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented Fast Simulated Annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions, the CM-FSA is guaranteed to find the global minimum of the cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, the free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds that of a multiple linear regression model; the simulation quality on the global scale amounts to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994, updated by Jones, 1999a) for the examined period 1856–1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
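Fast Simulated Annealing in Szu's sense combines Cauchy-distributed jumps with a temperature schedule T(k) = T0/(1+k), whose heavy tails allow occasional long jumps even at low temperature. A minimal one-dimensional sketch of the optimizer (applied here to a toy cost function, not to a Cauchy Machine):

```python
import math
import random

def fast_simulated_annealing(cost, x0, t0=5.0, steps=50000, seed=4):
    """Minimal Fast Simulated Annealing sketch: Cauchy-distributed jumps
    scaled by a temperature T(k) = T0 / (1 + k), Metropolis acceptance.
    Illustrates the optimizer only, not the network it would train."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 / (1.0 + k)
        # Cauchy visiting distribution, scaled by the temperature:
        y = x + t * math.tan(math.pi * (rng.random() - 0.5))
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy multimodal cost function with its global minimum near x = 2:
cost = lambda x: (x - 2.0) ** 2 + 0.5 * math.sin(10.0 * x)
xmin, fmin = fast_simulated_annealing(cost, x0=-5.0)
```

In the CM context, x would be the network's weight vector and the cost function the simulation error; the annealing schedule is what provides the global-minimum guarantee cited above.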
Observed global and European spatiotemporally resolved fields of surface air temperature, mean sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and to known internal climate mechanisms such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each spatial pattern is subjected to a stepwise regression analysis that separates it into signals of the external forcing factors and internal climate mechanisms listed above, plus residuals. Finally, a back-transformation yields the spatiotemporal patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance including natural variability; this test answers the question whether a significant anthropogenic climate change is visible in the observed data. Second, the anthropogenic signal is tested against the climate noise component only; this test answers the question whether the anthropogenic signal is significantly present among the other signals in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed the climate (first test).
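The three-step procedure (PCA of the field, regression of each PC time series on the forcings, back-transformation) can be sketched with synthetic data. The forcing series, grid sizes, and the plain least-squares fit below are illustrative stand-ins, not the study's actual predictors or its stepwise selection.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx = 140, 50                       # years x grid points (toy sizes)

# Hypothetical forcing series: a smooth "anthropogenic" trend and an
# oscillatory "ENSO-like" index (stand-ins for the real predictors)
trend = np.linspace(0.0, 1.0, nt)
enso = np.sin(2.0 * np.pi * np.arange(nt) / 4.5)

# Synthetic observed field: two spatial patterns driven by the forcings, plus noise
pat_trend = rng.normal(size=nx)
pat_enso = rng.normal(size=nx)
field = np.outer(trend, pat_trend) + np.outer(enso, pat_enso) \
        + 0.3 * rng.normal(size=(nt, nx))

# Step 1: PCA of the anomaly field (EOFs via SVD)
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                            # PC time series, columns uncorrelated

# Step 2: regress each leading PC time series on the forcing predictors
X = np.column_stack([np.ones(nt), trend, enso])
k = 5                                  # number of retained modes
coef, *_ = np.linalg.lstsq(X, pcs[:, :k], rcond=None)

# Step 3: back-transform the regression fit into a spatiotemporal signal field
signal = (X @ coef) @ vt[:k]
resid_var = np.var(anom - signal) / np.var(anom)
```

Because the PC time series are mutually uncorrelated, each can be regressed on the forcings independently before the patterns are recombined.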
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate global-scale groundwater recharge in a spatially resolved way. In this report, improved model estimates of diffuse groundwater recharge at the global scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model), which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This tuning has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions as compared to the untuned model results, and the new estimates are more similar to country-level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
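A single-factor least-squares calibration, offered here as a hypothetical stand-in for the actual WGHM tuning procedure, illustrates why tuning against independent point estimates lowers recharge in (semi-)arid cells when the untuned model overestimates it. All numbers are invented for illustration.

```python
# Hypothetical tuning step: scale modelled diffuse recharge in (semi-)arid
# cells by a single correction factor chosen to best match independent
# point estimates in a least-squares sense.
def tune_factor(modelled, observed):
    """Least-squares factor f minimising sum((f*m - o)^2)."""
    num = sum(m * o for m, o in zip(modelled, observed))
    den = sum(m * m for m in modelled)
    return num / den

modelled = [30.0, 12.0, 50.0, 8.0]     # mm yr^-1, before tuning (illustrative)
observed = [18.0, 7.0, 31.0, 5.0]      # independent point estimates (illustrative)
f = tune_factor(modelled, observed)     # f < 1: recharge decreases after tuning
tuned = [f * m for m in modelled]
```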
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as inputs, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as outputs. The different terms in the balance are compared to results of other global models, and uncertainties are described. The total global surface N surplus increased from 161 Tg N yr⁻¹ in 1961 to 230 Tg N yr⁻¹ in 1995. Using assumptions for the scenario A1B of the Special Report on Emissions Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC) as quantified by the IMAGE model, the total global surface N surplus is estimated to be 229 Tg N yr⁻¹ in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas of the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
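The per-cell balance described above reduces to inputs minus outputs. The following sketch shows the arithmetic with illustrative values (not data from the paper); a negative result would flag the kind of inconsistency the paper discusses.

```python
# Hypothetical per-cell values in kg N ha^-1 yr^-1 (illustrative numbers only)
def n_surface_balance(nanm, nfert, ndep, nfix, nexp, nvol):
    """Surface N surplus: the four input terms minus the two output terms,
    following the WaterGAP-N balance structure described above."""
    inputs = nanm + nfert + ndep + nfix
    outputs = nexp + nvol
    return inputs - outputs

surplus = n_surface_balance(nanm=40.0, nfert=80.0, ndep=10.0, nfix=15.0,
                            nexp=90.0, nvol=20.0)
# surplus = 145.0 - 110.0 = 35.0 kg N ha^-1 yr^-1
```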
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled; the reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4 428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation.
Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
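The core downscaling step — distributing a subnational irrigated-area statistic across grid cells according to spatial evidence — can be sketched as a weighted allocation. The function name, weighting scheme, and numbers below are hypothetical illustrations, not the report's actual method.

```python
# Hypothetical downscaling step: distribute a subnational irrigation total
# across its grid cells in proportion to a suitability weight (e.g. derived
# from land-cover or land-use maps).
def distribute_irrigated_area(total_ha, cell_weights):
    wsum = sum(cell_weights)
    if wsum == 0:
        # no spatial information available: spread the total evenly
        return [total_ha / len(cell_weights)] * len(cell_weights)
    return [total_ha * w / wsum for w in cell_weights]

cells = distribute_irrigated_area(1200.0, [0.0, 3.0, 1.0, 2.0])
# Allocation preserves the statistic: the cell values sum back to 1200.0
```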
In this work, chemical ozone loss in the Arctic stratosphere was investigated over eleven years, between 1991 and 2002, using the so-called ozone-tracer correlation technique (TRAC). In this method, correlations between ozone and long-lived tracers are tracked over the course of the winter inside the polar vortex, and the annual accumulated ozone loss is calculated from them. The results of this work are essentially based on measurements from the satellite instruments HALOE (Halogen Occultation Experiment) on UARS (Upper Atmosphere Research Satellite) and ILAS (Improved Limb Atmospheric Spectrometer) on ADEOS (Advanced Earth Observing Satellite). Since October 1991, the HALOE instrument has measured continuously at higher northern latitudes for several days every two to three months. ILAS provided measurements only for the winter 1996-97, recorded at high latitudes over a period of seven months. Owing to the extensions and improvements of the method introduced in this work, the method could be validated in a detailed study of the winter 1996-97. The ILAS data set was used to follow, for the first time, the temporal evolution of ozone-tracer correlations continuously over the entire lifetime of the polar vortex. Correlations during vortex formation were also investigated, in particular possible mixing between vortex air and air masses outside the vortex. In addition, the results from the ILAS and HALOE data sets were compared and the differences between them analyzed in depth. Based on HALOE measurements, the extended TRAC method could be applied over eleven years, making a consistent analysis of ozone loss and chlorine activation over this period possible for the first time. The extensions led to a reduction and a precise quantification of the uncertainties of the results.
A clear connection between meteorological conditions, chlorine activation, and chemical ozone loss became apparent. Furthermore, a dependence emerged between the meteorological conditions and the homogeneity of the ozone loss within a winter, as well as a possible influence of horizontal mixing on air masses in a weakly developed polar vortex. In this work, a positive correlation became evident, for the eleven years investigated, between the potential PSC areas occurring over the entire lifetime of the vortex and the accumulated ozone losses. It could also be shown that ozone loss is determined by considerably more factors than the area of potential PSC occurrence alone, depending for example on the strength of solar irradiation. In addition, effects of volcanic eruptions, such as that of Mount Pinatubo in 1991, can be identified.
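The TRAC idea — establish an early-vortex reference relation between ozone and a long-lived tracer, then read chemical loss as the late-winter deviation from that reference at the same tracer value — can be sketched with synthetic data. The tracer ranges, noise levels, and the polynomial reference fit below are illustrative assumptions, not the thesis's actual retrieval.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic early-vortex profiles: a compact ozone-tracer relation (ppmv vs ppbv)
tracer_early = rng.uniform(40.0, 260.0, 80)
o3_early = 1.5 + 0.008 * tracer_early + 0.02 * rng.normal(size=80)

# Early-winter reference function O3 = f(tracer), here a simple polynomial fit
ref = np.polynomial.Polynomial.fit(tracer_early, o3_early, deg=2)

# Synthetic late-winter profiles: same tracer relation, but ozone chemically
# depleted by a fixed amount (imposed so the retrieval can be checked)
tracer_late = rng.uniform(40.0, 260.0, 80)
true_loss = 0.8                               # ppmv
o3_late = 1.5 + 0.008 * tracer_late - true_loss + 0.02 * rng.normal(size=80)

# Accumulated chemical ozone loss per air parcel, and its vortex average
loss = ref(tracer_late) - o3_late
mean_loss = loss.mean()
```

Because the long-lived tracer is conserved, a dilution-free change in the correlation isolates chemical loss from dynamical descent.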
This thesis contributes to the understanding of the role of ROx in tropospheric ozone formation. Tropospheric ozone (O3) plays an important role in the self-cleansing of the atmosphere; on the other hand, elevated ozone concentrations harm human health, plants, and the environment. The presence of volatile organic compounds (VOCs) leads to the formation of peroxy radicals (ROx), which shift the normal photochemical equilibrium between ozone and nitrogen oxides in favour of increased ozone concentrations. Within this work, a chemical amplifier for measuring the total peroxy radical concentration was built. In the inlet of the instrument, ROx reacts with added NO and CO in a chain reaction, producing NO2, which is detected with a luminol detector. The detector is calibrated every 2 hours. The chain length is determined by calibrating the instrument with HO2 radicals generated by the photolysis of H2O. The amplification factor was corrected for a cross-sensitivity to water vapour. The measurement accuracy is about 70% at 60% relative humidity. Measurements at the Taunus Observatory on the Kleiner Feldberg during the summer months of 1998 and 1999 are presented. The ozone and ROx concentrations are well correlated. However, the daytime temperature is by far the most important factor influencing the ozone and ROx concentrations and is therefore the best parameter for the statistical description of photochemical processes. Based on the measurements at the Kleiner Feldberg, a simple statistical model for predicting the ozone maximum was constructed. With the parameters temperature and previous-day ozone concentration, the statistical model already explained 80% of the variation of the ozone concentration.
Including the morning ROx measurements improved the explained variance by only 0.5%. To obtain an indication of the influence of anthropogenic emissions, the weekly cycles of ozone, ROx and NOx were also examined. The increase of the ozone mixing ratio at the weekend, accompanied by a simultaneous decrease of the nitrogen oxide mixing ratio, is explained by a VOC-limited situation at the Kleiner Feldberg. The ozone formation rate based on the reaction between ROx and NO was calculated for days with a maximum of global radiation above 600 W m⁻²; the correlation with the … data set was low (r = 0.46). The observed change of the ozone mixing ratio was compared with the calculated mean diurnal cycle of the ozone formation rate. The ozone formation rate around noon was about 5 ppbv h⁻¹ … loss processes. In the evening, about 2 ppbv of O3 are destroyed per hour. During a measurement campaign in June/July 2000 at the Meteorological Observatory Hohenpeißenberg, the concentrations of ROx, OH, a number of VOCs, and other relevant trace gases were measured. The data are interpreted with a model based on the assumption of a local photostationary equilibrium of the radicals. The model results agreed very well with the measurements. The overestimation of the concentration on two days was explained by the influence of oxygenated VOCs. The "recycling" of the HO2 radicals (the reaction between HO2 and NO) is the most important source of OH and the most important sink of ROx. Owing to the elevated NO concentration in the morning, HO2 is converted very quickly into OH, which in turn is responsible for VOC oxidation and ROx formation. The most important OH sink and ROx source is the oxidation of isoprene and the terpenes.
To investigate the role of photochemical ozone formation on the regional scale, ozone measurements from all over Germany were analyzed statistically on different temporal and spatial scales. The net rate of change of the ozone concentration during the day was very similar at three closely neighbouring stations. The ozone measurements of 277 German monitoring stations were correlated with the ozone values measured at a forest site near Königstein. The Königstein ozone measurements explain 50% of the variance of the summertime ozone measurements between 11:00 and 16:00 CET at stations within a radius of about 250 km of Königstein. Over the whole year, this "characteristic distance" is about 350 km. These results indicate that the processes exerting an important influence on the ozone concentration act on regional scales of a few hundred kilometres. In summary, the measured ROx concentrations are consistent with the concentrations calculated from the oxidation of the VOCs by OH. Although ROx concentrations matter for chemical modelling, ROx measurements contribute only little to improving the quality of short-term statistical ozone forecasts. Keywords: Ozone, Troposphere, Peroxy Radicals, Free Radicals, Photochemistry, Chemical Amplifier
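The short-term statistical forecast described above — regressing the daily ozone maximum on daytime temperature and the previous day's ozone — can be sketched with synthetic data. The data-generating coefficients and noise level are invented stand-ins, not the Feldberg record.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
temp = rng.uniform(15.0, 35.0, n)              # daily maximum temperature, degC

# Synthetic daily ozone maxima: persistence + temperature dependence + noise
o3 = np.empty(n)
o3[0] = 40.0
for i in range(1, n):
    o3[i] = 0.5 * o3[i - 1] + 2.0 * temp[i] + 5.0 * rng.normal()   # ppbv

# Regress today's maximum on today's temperature and yesterday's ozone
X = np.column_stack([np.ones(n - 1), temp[1:], o3[:-1]])
coef, *_ = np.linalg.lstsq(X, o3[1:], rcond=None)
pred = X @ coef

# Fraction of variance explained by the two-predictor model
r2 = 1.0 - np.sum((o3[1:] - pred) ** 2) / np.sum((o3[1:] - o3[1:].mean()) ** 2)
```

With only these two predictors the fit already captures most of the variance, mirroring the thesis's finding that additional ROx information adds little forecast skill.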
In the present work, the tropospheric cycle of carbonyl sulfide (COS) was investigated. COS is a source gas of the stratospheric sulfate aerosol, which can influence the radiation budget and accelerate the chemical destruction of stratospheric ozone. Despite numerous studies, the sources and sinks of atmospheric COS are still only inadequately quantified. In particular, large uncertainties remain in the estimates of the contributions of the ocean and of the anthropogenic sources, as well as of the sink strength of the land vegetation. Ship- and aircraft-based measurements of atmospheric COS have not yielded a consistent interhemispheric ratio (IHR = M_NH/M_SH). While the measurements of Bingemer et al. (1990), Staubes-Diederich (1992) and Johnson et al. (1993) showed an IHR between 1.10 and 1.25, the measurements of Torres et al. (1980), Staubes-Diederich (1992), Weiss et al. (1995) and Thornton et al. (1996) found no or only a slight N/S gradient. The study of Chin and Davis (1993) shows an N/S ratio of the COS source strength of 2.3, attributable mainly to the stronger anthropogenic sources in the northern hemisphere. It is unclear whether the temporary excess concentration in the northern hemisphere is a sign of anthropogenic sources there or part of a seasonal signal caused by the sink function of land plants. The consistency of the latitudinal distribution of the COS mixing ratio with the geographical and seasonal variations of the COS sources and sinks needs to be checked, which requires precise knowledge of the source and sink strengths of atmospheric COS and of their spatiotemporal variability. Against this background, the focal points of this work are: (1) the exchange of COS between the atmosphere and the ocean, (2) the exchange between the atmosphere and the terrestrial vegetation, and (3) the spatiotemporal variability of atmospheric COS.
To investigate the exchange of COS between the atmosphere and the ocean, the concentration disequilibrium of COS between ocean and atmosphere was determined by measurements of COS in seawater and in marine air, and the resulting exchange fluxes were calculated with a model. The measurements took place on board the research vessel Polarstern during the cruises ANT/XV-1 (15.10.–6.11.1997, Bremerhaven–Cape Town) and ANT/XV-5 (26.5.–20.6.1998, Cape Town–Bremerhaven). The concentration of dissolved COS and the saturation ratio of COS between ocean and atmosphere show pronounced diurnal cycles as well as seasonal and geographical variations. The mean concentration of COS in seawater is 14.7 pmol L⁻¹ for the autumn cruise and 18.1 pmol L⁻¹ for the summer cruise. The highest COS concentrations are observed in the respective summer hemisphere and in regions of high biological productivity, i.e. in the Benguela Current in November, in the north-east Atlantic in June, and in the upwelling regions off West Africa in October and June, respectively. In the remaining regions the concentrations are an order of magnitude lower. The concentration of COS in seawater rises from its lowest level in the early morning, reaches its maximum at about 15:00 local time, and decreases thereafter. This diurnal cycle supports the theory that COS is produced photochemically in seawater. During the daytime the open ocean is supersaturated with COS, whereas an undersaturation is observed in the late night hours: the ocean acts as a COS source during the day and as a COS sink late at night. The undersaturation occurs regularly in productive ocean regions, even in summer. One consequence of this observation is a further reduction of the oceanic COS source compared with previously published estimates. Methyl mercaptan (CH3SH) is found in all seawater samples.
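A two-film bulk flux in the spirit of Liss and Slater (1974), F = k_w (C_w − C_a/H), is enough to show how the observed day/night saturation cycle flips the ocean between COS source and sink. The transfer velocity, Henry constant, and concentrations below are illustrative assumptions, not values from the thesis.

```python
def air_sea_flux(c_water, c_air, henry, k_w):
    """Two-film bulk flux (positive = sea-to-air).

    c_water : dissolved gas concentration in seawater (pmol m^-3)
    c_air   : atmospheric concentration expressed as pmol per m^3 of air
    henry   : dimensionless Henry constant (gas-phase / liquid-phase)
    k_w     : water-side transfer velocity (m day^-1)
    """
    return k_w * (c_water - c_air / henry)

# Daytime supersaturation -> outgassing (source); nighttime undersaturation -> uptake
day_flux = air_sea_flux(c_water=20.0e3, c_air=20.0e3, henry=2.0, k_w=3.0)
night_flux = air_sea_flux(c_water=5.0e3, c_air=20.0e3, henry=2.0, k_w=3.0)
```

The sign of the flux is set entirely by the disequilibrium term C_w − C_a/H, which is what the shipboard saturation-ratio measurements constrain.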
The daily mean of the CH3SH concentration varies between 29 and 303 pmol L⁻¹ and is 3 to 16 times greater than that of the COS concentration. The diurnal cycle of the CH3SH concentration shows a minimum around noon. The daily means of the CH3SH and COS concentrations are significantly correlated with each other. These data provide evidence that CH3SH is one of the important precursors of COS. The regression line of the correlation between the mean COS and CH3SH concentrations has only a small intercept, so the CH3SH concentration can be used as an indicator of the concentration of COS precursors. There is, in addition, a correlation between the CH3SH concentration and the logarithm of the concentration of dissolved chlorophyll a, indicating that the CH3SH content of seawater is closely related to marine primary production. COS is destroyed in seawater by hydrolysis. The decay rate depends on the temperature of the seawater: the warmer the seawater, the faster COS is destroyed and the shorter its lifetime in seawater. The lifetime can be calculated from the Arrhenius rate law on the one hand, and estimated on the other hand by an exponential fit to the nocturnal concentration decline (i.e. in the absence of photoproduction). Such exponential-decay fits were performed on densely spaced measurements during several nights. The fitted lifetimes agree well with the theoretical values, although the fitted lifetime is influenced by other processes besides hydrolysis (e.g. downward transport, air-sea exchange, etc.). This good agreement supports the conclusion that hydrolysis plays a significant role in the destruction of COS in seawater. The calculated hydrolysis lifetime is correlated with the daily mean of the COS concentration.
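The nocturnal lifetime fit described above becomes a linear least-squares problem after taking logarithms of the decay C(t) = C0·exp(−t/τ). The sketch below uses noise-free synthetic data with an imposed τ = 8 h so the fit can be checked; real nighttime series would scatter around this line.

```python
import math

def fit_lifetime(hours, conc):
    """Estimate the e-folding lifetime (h) from a log-linear least-squares
    fit to an exponential concentration decline."""
    n = len(hours)
    y = [math.log(c) for c in conc]
    mx, my = sum(hours) / n, sum(y) / n
    slope = sum((x - mx) * (v - my) for x, v in zip(hours, y)) / \
            sum((x - mx) ** 2 for x in hours)
    return -1.0 / slope

# Synthetic nighttime series: C(t) = 15 * exp(-t / 8) pmol L^-1
hours = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
conc = [15.0 * math.exp(-t / 8.0) for t in hours]
tau = fit_lifetime(hours, conc)      # recovers 8.0 h
```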
Since the daily means represent both temporal and spatial averages of the COS concentrations, this correlation shows that hydrolysis plays a significant role in the spatiotemporal variability of the COS concentration. Since the concentration of dissolved COS depends on several factors, a multivariate treatment seems appropriate. To this end, a multiple linear regression analysis (MLRA) was carried out. This analysis yields an empirical model of the following form for calculating the daily mean COS concentration:
[COS] = 1.8 τ + 13 log[Chl] − 1.5 W_s + 0.057 G − 0.73,
with
[COS] = mean concentration of COS in pmol L⁻¹
τ = hydrolysis lifetime in hours
[Chl] = mean concentration of chlorophyll a in mg m⁻³
W_s = wind speed in m s⁻¹
G = intensity of global radiation in W m⁻².
The parameters on the right-hand side of the equation can be measured directly or indirectly from satellites, so this model can be used to estimate the concentration of COS in seawater from satellite data. The empirical model remains to be confirmed and improved by further measurements. The exchange flux of COS between the atmosphere and the open ocean was calculated with the air-sea flux model of Liss and Slater (1974) together with the model of Erickson (1993) f
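The empirical MLRA model can be evaluated directly once its inputs are known. Note the caveats: the operator signs between some terms and the logarithm base are not fully legible in the source, so the form coded below ([COS] = 1.8τ + 13·log10[Chl] − 1.5·Ws + 0.057·G − 0.73) is an assumed reconstruction, and the input values are illustrative.

```python
import math

def cos_daily_mean(tau_h, chl, w_s, g):
    """Assumed reconstruction of the empirical MLRA model (signs and log base
    are assumptions, see lead-in).

    tau_h : hydrolysis lifetime (h)
    chl   : chlorophyll a concentration (mg m^-3)
    w_s   : wind speed (m s^-1)
    g     : global radiation (W m^-2)
    Returns the estimated daily mean [COS] in pmol L^-1.
    """
    return 1.8 * tau_h + 13.0 * math.log10(chl) - 1.5 * w_s + 0.057 * g - 0.73

# Illustrative open-ocean conditions
cos_est = cos_daily_mean(tau_h=8.0, chl=0.3, w_s=7.0, g=250.0)
```

All four predictors are retrievable from satellite products (sea-surface temperature for τ, ocean colour for [Chl], scatterometry for W_s, radiation budgets for G), which is what makes the model attractive for global extrapolation.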