550 Geowissenschaften
The "Seeloch" sinkhole near Bad Hersfeld is a roughly circular collapse structure in the Buntsandstein, about 80 m in diameter, which is attributed to subrosion processes in the Zechstein salinar. Collapses of this kind are typical of salt slopes, as other examples show. Chimney-like breakthroughs through an overburden several hundred metres thick can only form where pre-existing jointing is present. Similar phenomena have been observed above solution cavities that formed when potash mines were flooded. Experience from salt-works operations shows that even in rock salt, larger cavities can persist for long periods without causing subsidence or sinkholes. The sediments of the "Seeloch" have been studied palynologically and stratigraphically. The key result of these investigations is the interglacial age (Riß-Würm) of the profile section. The salt-leaching depressions investigated so far contain deposits of the Tertiary, the Early Pleistocene or the Holocene. From the corrosion depressions of north-eastern Hesse, only coaly-peaty deposits of Upper Pliocene to Early Pleistocene or postglacial age had previously been known. With the beds from the "Seeloch" of Kathus, deposits are described for the first time for which an Eemian age (Riß-Würm interglacial) is probable.
The geology of Alsace
(1940)
Chronology of the oldest maps of America: (extract from a letter addressed to M. Jomard)
(1835)
This paper presents an evaluation scheme that can be used to assess groundwater vulnerability according to the requirements of the European Water Framework Directive (WFD). The scheme results in a groundwater vulnerability map identifying areas of high, medium and low vulnerability, as needed for the planning of measures under the WFD. It is based on the vulnerability definition of the Intergovernmental Panel on Climate Change (IPCC) and considers the exposure, sensitivity and adaptive capacity of the region. The adaptive capacity was evaluated in an actors' platform, which was constituted for the region in the PartizipA ("Participative modelling, Actor and Ecosystem Analysis in Regions with Intensive Agriculture") project. As a result of the vulnerability assessment, 21% of the catchment area was classified as highly vulnerable, 73% as of medium and 6% as of low vulnerability. The approach can thus be used in practice at the catchment scale for the planning of measures under the WFD.
Global modelling of continental water storage changes : sensitivity to different climate data sets
(2007)
Since 2002, the GRACE satellite mission has provided estimates of the Earth's dynamic gravity field with unprecedented accuracy. Differences between monthly gravity fields contain a clear hydrological signal due to continental water storage changes. In order to evaluate GRACE results, the state-of-the-art WaterGAP Global Hydrological Model (WGHM) is applied to calculate terrestrial water storage changes on a global scale. WGHM is driven by different climate data sets, in particular to analyse the influence of different precipitation data on the calculated water storage. The data sets used are the CRU TS 2.1 climate data set, the GPCC Full Data Product for precipitation and data from the ECMWF integrated forecast system. A simple approach for precipitation correction is introduced. WGHM results are then compared with GRACE data. The use of different precipitation data sets leads to considerable differences in computed water storage change for a large number of river basins. Comparing model results with GRACE observations shows a good spatial correlation and a good agreement in phase. However, seasonal variations of water storage derived from GRACE tend to be significantly larger than those computed by WGHM, regardless of which climate data set is used.
Constructive waterfalls
(1911)
The excavation of valleys by waterfalls is one of the best known and most effective processes by which rivers cut down the surface of the earth. The influence of waterfalls is usually regarded as solely destructive, and as always helping to lower the land. They undermine and cut backward the rock faces over which they fall: by this recession they excavate deep gorges; and the existence of these gorges enables the adjacent country to be lowered to the level of the valley floors. The waterfalls, moreover, empty any lakes they may reach in their retreat, while the ravines below the falls may drain the springs and thus desiccate the neighbouring highlands. Observations in various countries had suggested to me that waterfalls may sometimes be constructive instead of destructive, and that they may reverse their usual procedure, advancing instead of retreating, filling valleys instead of excavating them, and forming alluvial plains and lakes instead of destroying them. The best illustrations I have seen of such advancing, constructive waterfalls are on some rivers of Dalmatia and Bosnia, where they occur in various stages of development. ...
Tropical coral reefs are the most species-rich ecosystems in the ocean. These "tropical rainforests of the sea" are home to around 800 coral species and several tens of thousands of species from almost all known animal phyla. Worldwide, coral reefs cover an area of 600,000 square kilometres, or 0.17 percent of the Earth's surface. They occur as fringing reefs close to the coast, barrier reefs further offshore, ring-shaped atolls and flat carbonate platforms. The term "carbonate" indicates that the reef-building corals have a skeleton of calcium carbonate. Calcareous algae and molluscs such as bivalves and snails also contribute to reef growth through the formation of calcareous skeletons and shells. Since tropical coral reefs grow only near the sea surface, geoscientists can use fossil coral finds to determine how sea level has changed over past millennia. Other important climate data such as water temperature, solar irradiance and the carbon dioxide content of the atmosphere are also "stored" in coral reefs. Frankfurt geoscientists are recovering these important data, which reach back far beyond human measurements, through systematic drilling in coral reefs of the Caribbean, the Persian Gulf and the Maldives.
Spatial interpolation of precipitation data is uncertain. How important is this uncertainty, and how can it be considered in the evaluation of high-resolution probabilistic precipitation forecasts? These questions are discussed by experimental evaluation of the COSMO consortium's limited-area ensemble prediction system COSMO-LEPS. The applied performance measure is the widely used Brier skill score (BSS). The observational references in the evaluation are (a) rain gauge data analysed by ordinary kriging and (b) ensembles of interpolated rain gauge data generated by stochastic simulation. This permits the use of either a deterministic reference (the event is observed or not with 100% certainty) or a probabilistic reference that makes allowance for uncertainties in spatial averaging. The evaluation experiments show that the evaluation uncertainties are substantial even for the large area (41,300 km²) of Switzerland with a mean rain gauge distance as small as 7 km: the skill of the one- to three-day precipitation forecasts decreases with forecast lead time, but the one- and two-day forecast performances do not differ significantly.
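The Brier skill score used as the performance measure above is simple to state. The sketch below is a generic implementation with illustrative toy numbers, not data from the COSMO-LEPS evaluation.

```python
def brier_score(p, o):
    """Mean squared difference between forecast probabilities p
    and binary observations o (1 = event occurred, 0 = it did not)."""
    return sum((pi - oi) ** 2 for pi, oi in zip(p, o)) / len(p)

def brier_skill_score(p, o, p_ref):
    """BSS = 1 - BS / BS_ref: 1 is a perfect forecast, 0 matches the
    reference forecast (e.g. climatology), negative is worse than it."""
    return 1.0 - brier_score(p, o) / brier_score(p_ref, o)

# Toy example: ensemble-derived event probabilities against a
# constant climatological reference of 0.5.
p = [0.9, 0.1, 0.7, 0.2]      # forecast probabilities of the event
o = [1, 0, 1, 0]              # whether the event was observed
p_ref = [0.5] * len(p)        # reference (climatology) forecast
bss = brier_skill_score(p, o, p_ref)
```

With a probabilistic observational reference, the binary vector o would itself be replaced by event probabilities derived from the stochastic-simulation ensemble.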
Mechanisms by which subvisible cirrus clouds (SVCs) might contribute to dehydration close to the tropical tropopause are not well understood. Recently, Ultrathin Tropical Tropopause Clouds (UTTCs) with optical depths around 10⁻⁴ have been detected in the western Indian Ocean. These clouds cover thousands of square kilometres as a distinct, homogeneous layer 200–300 m thick just below the tropical tropopause. In their condensed phase, UTTCs contain only 1–5% of the total water and essentially no nitric acid. A new cloud stabilization mechanism is required to explain this small fraction of condensed water in the clouds and their small vertical thickness. This work suggests such a mechanism, which forces the particles into a thin layer, based on an upwelling of the air of a few mm s⁻¹ that balances the sedimentation of the ice particles, supersaturation with respect to ice above, and subsaturation below the UTTC. In situ measurements suggest that these requirements are fulfilled. The basic physical properties of this mechanism are explored by means of a single-particle model. Comprehensive 1-D cloud simulations demonstrate the stabilization mechanism to be robust against rapid temperature fluctuations of ±0.5 K. However, rapid warming (ΔT > 2 K) leads to evaporation of the UTTC, while rapid cooling (ΔT < −2 K) leads to destabilization of the particles with the potential for significant dehydration below the cloud.
Measurements of OH, total peroxy radicals, non-methane hydrocarbons (NMHCs) and various other trace gases were made at the Meteorological Observatory Hohenpeissenberg in June 2000. The data from an intensive measurement period characterised by high solar insolation (18–21 June) are analysed. The maximum midday OH concentration ranged between 4.5×10⁶ molecules cm⁻³ and 7.4×10⁶ molecules cm⁻³. The maximum total ROx (ROx = OH + RO + HO2 + RO2) mixing ratio increased from about 55 pptv on 18 June to nearly 70 pptv on 20 and 21 June. A total of 64 NMHCs, including isoprene and monoterpenes, were measured every 1 to 6 hours. The oxidation rate of the NMHCs by OH was calculated and reached a total of over 14×10⁶ molecules cm⁻³ s⁻¹ on two days. A simple photostationary-state balance model was used to simulate the ambient OH and peroxy radical concentrations with the measured data as input. This approach was able to reproduce the main features of the diurnal profiles of both OH and peroxy radicals. The balance equations were used to test the effect of the assumptions made in this model. The results proved to be most sensitive to assumptions about the impact of unmeasured volatile organic compounds (VOCs), e.g. formaldehyde (HCHO), and about the partitioning between HO2 and RO2. The measured OH concentration and peroxy radical mixing ratios were reproduced well by assuming the presence of 3 ppbv HCHO as a proxy for oxygenated hydrocarbons, and a HO2/RO2 ratio between 1:1 and 1:2. The most important source of OH, and conversely the greatest sink for peroxy radicals, was the recycling of HO2 radicals to OH. This reaction was responsible for the recycling of more than 45×10⁶ molecules cm⁻³ s⁻¹ on two days. The most important sink for OH, and the largest source of peroxy radicals, was the oxidation of NMHCs, in particular of isoprene and the monoterpenes.
Subvisible cirrus clouds (SVCs) may contribute to dehydration close to the tropical tropopause. The higher and colder SVCs are, and the larger their ice crystals, the more likely they represent the last efficient point of contact of the gas phase with the ice phase and, hence, the last dehydrating step before the air enters the stratosphere. The first simultaneous in situ and remote sensing measurements of SVCs were taken during the APE-THESEO campaign in the western Indian Ocean in February/March 1999. The observed clouds, termed Ultrathin Tropical Tropopause Clouds (UTTCs), are among the geometrically and optically thinnest large-scale clouds in the Earth's atmosphere. Individual UTTCs may exist for many hours as a cloud layer only 200–300 m thick just a few hundred metres below the tropical cold-point tropopause, covering up to 10⁵ km². With temperatures as low as 181 K, these clouds are prime representatives for defining the water mixing ratio of air entering the lower stratosphere.
We have used the SLIMCAT 3-D off-line chemical transport model (CTM) to quantify the Arctic chemical ozone loss in the year 2002/2003 and compare it with similar calculations for the winters 1999/2000 and 2003/2004. Recent changes to the CTM have improved the model's ability to reproduce polar chemical and dynamical processes. The updated CTM uses σ-θ as a vertical coordinate which allows it to extend down to the surface. The CTM has a detailed stratospheric chemistry scheme and now includes a simple NAT-based denitrification scheme in the stratosphere.
In the model runs presented here the model was forced by ECMWF ERA40 and operational analyses. The model used 24 levels extending from the surface to ~55 km and a horizontal resolution of either 7.5° × 7.5° or 2.8° × 2.8°. Two different radiation schemes, MIDRAD and the CCM scheme, were used to diagnose the vertical motion in the stratosphere. Based on tracer observations from balloons and aircraft, the more sophisticated CCM scheme gives a better representation of the vertical transport in this model, which includes the troposphere. The higher-resolution model generally produces larger chemical O3 depletion, which agrees better with observations.
The CTM results show that very early chemical ozone loss occurred in December 2002 due to extremely low temperatures and early chlorine activation in the lower stratosphere. Chemical loss in this winter thus started earlier than in the other two winters studied here. In 2002/2003 the local polar ozone loss in the lower stratosphere was ~40% before the stratospheric final warming. Larger ozone loss occurred in the cold year 1999/2000, which had a persistently cold and stable vortex during most of the winter. For this winter the current model, at a resolution of 2.8° × 2.8°, can reproduce the observed loss of over 70% locally. In the warm and more disturbed winter 2003/2004 the chemical O3 loss was generally much smaller, except above 620 K where large losses occurred due to a period of very low minimum temperatures at these altitudes.
Number concentrations of total and non-volatile aerosol particles with diameters >0.01 μm, as well as particle size distributions (0.4–23 μm diameter), were measured in situ in the Arctic lower stratosphere (10–20.5 km altitude). The measurements were obtained during the campaigns European Polar Stratospheric Cloud and Lee Wave Experiment (EUPLEX) and Envisat-Arctic-Validation (EAV). The campaigns were based in Kiruna, Sweden, and took place from January to March 2003. Measurements were conducted on board the Russian high-altitude research aircraft Geophysica using the low-pressure Condensation Nucleus Counter COPAS (COndensation PArticle Counter System) and a modified FSSP 300 (Forward Scattering Spectrometer Probe). At 18–20 km altitude, typical total particle number concentrations nt range from 10 to 20 cm⁻³ (ambient conditions). Correlations with the trace gases nitrous oxide (N2O) and trichlorofluoromethane (CFC-11) are discussed. Inside the polar vortex the total number of particles >0.01 μm increases with potential temperature while N2O decreases, which indicates a source of particles in the polar upper stratosphere or mesosphere above. A separate channel of the COPAS instrument measures the fraction of aerosol particles that are non-volatile at 250°C. Inside the polar vortex a much higher fraction of particles contained non-volatile residues than outside the vortex (~67% inside, ~24% outside). This is most likely due to a strongly increased fraction of meteoric material in the particles, which is transported downward from the mesosphere inside the polar vortex. The high fraction of non-volatile residual particles therefore gives experimental evidence for downward transport of mesospheric air inside the polar vortex. It is also shown that the fraction of non-volatile residual particles serves directly as a suitable experimental vortex tracer.
Nanometre-sized meteoric smoke particles may also serve as nuclei for the condensation of gaseous sulfuric acid and water in the polar vortex, and these additional particles may be responsible for the increase in the observed particle concentration at low N2O. The number concentrations of particles >0.4 μm measured with the FSSP decrease markedly inside the polar vortex with increasing potential temperature, also a consequence of the subsidence of air from higher altitudes inside the vortex. A further focus of the analysis was the particle measurements in the lowermost stratosphere. Relatively high total particle number concentrations of several hundred particles per cm³ were observed at altitudes below ~14 km in several flights. To investigate the origin of these high number concentrations, we conducted air-mass trajectory calculations and compared the particle measurements with other trace gas observations. The high number concentrations of total particles in the lowermost stratosphere are probably caused by transport of originally tropospheric air from lower latitudes and are potentially influenced by recent particle nucleation.
We report measurements of the deuterium content of molecular hydrogen (H2) obtained from a suite of air samples that were collected during a stratospheric balloon flight between 12 and 33 km at 40° N in October 2002. Strong deuterium enrichments of up to 400 per mil versus Vienna Standard Mean Ocean Water (VSMOW) are observed, while the H2 mixing ratio remains virtually constant. Thus, as hydrogen is processed through the H2 reservoir in the stratosphere, deuterium accumulates in H2. Using box model calculations, we investigated the effects of H2 sources and sinks on the stratospheric enrichments. The results show that considerable isotope enrichment must take place in the production of H2 from CH4, i.e., deuterium is transferred preferentially to H2 during the CH4 oxidation sequence. This supports recent conclusions from tropospheric H2 isotope measurements, which show that H2 produced photochemically from CH4 and non-methane hydrocarbons must be enriched in deuterium to balance the tropospheric hydrogen isotope budget. In the absence of further data on isotope fractionation in the individual reaction steps of the CH4 oxidation sequence, this effect cannot be investigated further at present. Our measurements imply that molecular hydrogen has to be taken into account when the hydrogen isotope budget of the stratosphere is investigated.
Balloon-borne measurements of CFC-11 (from the DIRAC in situ gas chromatograph and the DESCARTES grab sampler), ClO and O3 were made during the 1999/2000 Arctic winter as part of the SOLVE-THESEO 2000 campaign, based in Kiruna (Sweden). Here we present the CFC-11 data from nine flights and compare them first with data from other instruments which flew during the campaign and then with the vertical distributions calculated by the SLIMCAT 3-D CTM. We calculate ozone loss inside the Arctic vortex between late January and early March using the relation between CFC-11 and O3 measured on the flights. The peak ozone loss (~1200 ppbv) occurs in the 440–470 K region in early March, in reasonable agreement with other published empirical estimates. There is also good agreement between the ozone losses derived from the three balloon tracer data sets used here. The magnitude and vertical distribution of the loss derived from the measurements are in good agreement with the loss calculated from SLIMCAT over Kiruna for the same days.
Turbulent fluxes of carbonyl sulfide (COS) and carbon disulfide (CS2) were measured over a spruce forest in Central Germany using the relaxed eddy accumulation (REA) technique. A REA sampler was developed and validated using simultaneous measurements of CO2 fluxes by REA and by eddy correlation. REA measurements were conducted during six campaigns covering spring, summer and fall between 1997 and 1999. Both uptake and emission of COS and CS2 by the forest were observed, with deposition occurring mainly during the sunlit period and emission mainly during the dark period. On average, however, the forest acts as a sink for both gases. The average fluxes for COS and CS2 are −93 ± 11.7 pmol m⁻² s⁻¹ and −18 ± 7.6 pmol m⁻² s⁻¹, respectively. The fluxes of both gases appear to be correlated with photosynthetically active radiation and with the CO2 and H2O fluxes, supporting the idea that the air-vegetation exchange of both gases is controlled by stomata. An uptake ratio COS/CO2 of 10 ± 1.7 pmol μmol⁻¹ has been derived from the regression line for the correlation between the COS and CO2 fluxes. This uptake ratio, if representative of the global terrestrial net primary production, would correspond to a sink of 2.3 ± 0.5 Tg COS yr⁻¹.
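The REA principle itself fits in a few lines: air is conceptually accumulated into "updraft" and "downdraft" reservoirs according to the sign of the vertical wind, and the flux follows from the concentration difference between the two reservoirs. The sketch below uses the commonly cited empirical coefficient b ≈ 0.56 and synthetic numbers; it is a generic illustration, not the authors' sampler implementation.

```python
from statistics import mean, pstdev

def rea_flux(w, c, b=0.56):
    """Relaxed eddy accumulation flux estimate.
    w: vertical wind samples (m/s); c: simultaneous concentration samples;
    b: empirical REA coefficient (~0.56 for ideal Gaussian turbulence).
    Returns b * sigma_w * (mean concentration in updrafts
                           - mean concentration in downdrafts)."""
    sigma_w = pstdev(w)
    c_up = mean(ci for wi, ci in zip(w, c) if wi > 0)
    c_down = mean(ci for wi, ci in zip(w, c) if wi < 0)
    return b * sigma_w * (c_up - c_down)

# Synthetic example: lower concentrations in updrafts than in
# downdrafts yield a negative flux, i.e. deposition to the surface.
flux = rea_flux(w=[1.0, -1.0, 1.0, -1.0], c=[1.0, 3.0, 1.0, 3.0])
```

In a real sampler the conditional sampling happens in hardware (fast valves routing air into two reservoirs), and only the two accumulated concentrations are analysed.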
A comprehensive set of stratospheric balloon and aircraft samples was analyzed for the position-dependent isotopic composition of nitrous oxide (N2O). Results for a total of 220 samples from between 1987 and 2003 are presented, nearly tripling the number of mass-spectrometric N2O isotope measurements in the stratosphere published to date. Cryogenic balloon samples were obtained at polar (Kiruna/Sweden, 68° N), mid-latitude (southern France, 44° N) and tropical sites (Hyderabad/India, 18° N). Aircraft samples were collected with a newly developed whole-air sampler on board the high-altitude aircraft M55 Geophysica during the EUPLEX 2003 campaign. For mixing ratios above 200 nmol mol⁻¹, relative isotope enrichments (δ values) and mixing ratios display a compact relationship, which is nearly independent of latitude and season and which can be explained equally well by Rayleigh fractionation or mixing. However, for mixing ratios below 200 nmol mol⁻¹ this compact relationship gives way to meridional, seasonal and interannual variations. A comparison with a previously published mid-latitude balloon profile even shows large zonal variations, justifying the use of three-dimensional (3-D) models for further data interpretation.
In general, the magnitude of the apparent fractionation constants (i.e., apparent isotope effects) increases continuously with altitude and decreases from the equator to the North Pole. Only the latter observation can be understood qualitatively by the interplay between the time-scales of N2O photochemistry and transport in a Rayleigh fractionation framework. Deviations from Rayleigh fractionation behavior also occur where polar vortex air mixes with nearly N2O-free upper stratospheric/mesospheric air (e.g., during the boreal winters of 2003 and possibly 1992). Aircraft observations in the polar vortex at mixing ratios below 200 nmol mol−1 deviate from isotope variations expected for both Rayleigh fractionation and two-end-member mixing, but could be explained by continuous weak mixing between intravortex and extravortex air (Plumb et al., 2000). However, it appears that none of the simple approaches described here can capture all features of the stratospheric N2O isotope distribution, again justifying the use of 3-D models. Finally, correlations between 18O/16O and average 15N/14N isotope ratios or between the position-dependent 15N/14N isotope ratios show that photo-oxidation makes a large contribution to the total N2O sink in the lower stratosphere (possibly up to 100% for N2O mixing ratios above 300 nmol mol−1). Towards higher altitudes, the temperature dependence of these isotope correlations becomes visible in the stratospheric observations.
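The two candidate explanations for the compact δ-mixing-ratio relationship, Rayleigh fractionation and two-end-member mixing, can be written down directly. The functions below are a generic sketch; the ε, δ and mixing-ratio values in the example are illustrative placeholders, not the measured ones.

```python
def delta_rayleigh(f, delta0, eps):
    """delta value (per mil) of the N2O remaining after a fraction f
    survives Rayleigh fractionation with apparent fractionation
    constant eps (per mil, negative for enrichment of the residue)."""
    return ((1.0 + delta0 / 1000.0) * f ** (eps / 1000.0) - 1.0) * 1000.0

def delta_mixing(a, x1, d1, x2, d2):
    """delta value of a mixture of an air-mass fraction a of end member 1
    (N2O mixing ratio x1, delta d1) with (1 - a) of end member 2:
    deltas mix weighted by the amount of N2O each end member carries."""
    x = a * x1 + (1.0 - a) * x2
    return (a * x1 * d1 + (1.0 - a) * x2 * d2) / x

# Illustrative: half the N2O removed with eps = -30 per mil ...
d_ray = delta_rayleigh(f=0.5, delta0=0.0, eps=-30.0)
# ... versus a 50:50 mix of tropospheric-like and depleted stratospheric air.
d_mix = delta_mixing(a=0.5, x1=320.0, d1=0.0, x2=120.0, d2=50.0)
```

Plotting both curves in δ versus mixing-ratio space shows why the two processes are hard to distinguish at high mixing ratios and diverge at low ones.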
Balloon-borne stratospheric BrO measurements: comparison with Envisat/SCIAMACHY BrO limb profiles
(2006)
For the first time, results of all four existing stratospheric BrO profiling instruments are presented and compared with reference to the SLIMCAT 3-dimensional chemical transport model (3-D CTM). Model calculations are used to infer a BrO profile validation set, measured by three different balloon sensors, for the new Envisat/SCIAMACHY (ENVIronment SATellite/SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) satellite instrument. The balloon observations include (a) balloon-borne in situ resonance fluorescence detection of BrO, (b) balloon-borne solar occultation DOAS measurements (Differential Optical Absorption Spectroscopy) of BrO in the UV, and (c) BrO profiling from the solar occultation SAOZ (Systeme d'Analyse par Observation Zenithale) balloon instrument. Since stratospheric BrO is subject to considerable diurnal variation and none of the measurements were performed close enough in time and space for a direct comparison, all balloon observations are considered with reference to outputs from the 3-D CTM. The referencing is performed by forward and backward air-mass trajectory calculations to match the balloon with the satellite observations. The diurnal variation of BrO is accounted for by 1-D photochemical model calculations along the trajectories. The 1-D photochemical model is initialised with output data of the 3-D model, with additional constraints on the vertical transport, the total amount and the photochemistry of stratospheric bromine as given by the various balloon observations. Total [Bry] = (20.1 ± 2.8) pptv, obtained from DOAS BrO observations at mid-latitudes in 2003, serves as an upper limit of the comparison. Most of the balloon observations agree with the photochemical model predictions within their given error estimates. First retrieval exercises of BrO limb profiling from the SCIAMACHY satellite instrument agree to within ±50% with the photochemically corrected balloon observations, and tend to show less agreement below 20 km.
During SPURT (Spurenstofftransport in der Tropopausenregion, trace gas transport in the tropopause region) we performed measurements of a wide range of trace gases with different lifetimes and sink/source characteristics in the northern hemispheric upper troposphere (UT) and lowermost stratosphere (LMS). A large number of in-situ instruments were deployed on board a Learjet 35A, flying at altitudes up to 13.7 km, at times reaching to nearly 380 K potential temperature. Eight measurement campaigns (consisting of a total of 36 flights), distributed over all seasons and typically covering latitudes between 35° N and 75° N in the European longitude sector (10° W–20° E), were performed. Here we present an overview of the project, describing the instrumentation, the encountered meteorological situations during the campaigns and the data set available from SPURT. Measurements were obtained for N2O, CH4, CO, CO2, CFC12, H2, SF6, NO, NOy, O3 and H2O. We illustrate the strength of this new data set by showing mean distributions of the mixing ratios of selected trace gases, using a potential temperature – equivalent latitude coordinate system. The observations reveal that the LMS is most stratospheric in character during spring, with the highest mixing ratios of O3 and NOy and the lowest mixing ratios of N2O and SF6. The lowest mixing ratios of NOy and O3 are observed during autumn, together with the highest mixing ratios of N2O and SF6 indicating a strong tropospheric influence. For H2O, however, the maximum concentrations in the LMS are found during summer, suggesting unique (temperature- and convection-controlled) conditions for this molecule during transport across the tropopause. The SPURT data set is presently the most accurate and complete data set for many trace species in the LMS, and its main value is the simultaneous measurement of a suite of trace gases having different lifetimes and physical-chemical histories. 
It is thus very well suited for studies of atmospheric transport, for model validation, and for investigations of seasonal changes in the UT/LMS, as demonstrated in accompanying studies and in studies published elsewhere.
During several balloon flights inside the Arctic polar vortex in early 2003, unusual trace gas distributions were observed, which indicate a strong influence of mesospheric air in the stratosphere. The tuneable diode laser (TDL) instrument SPIRALE (Spectroscopie InFrarouge par Absorption de Lasers Embarqués) measured unusually high CO values (up to 600 ppb) on 27 January at about 30 km altitude. The cryosampler BONBON sampled air masses with very high molecular hydrogen, extremely low SF6 and enhanced CO values on 6 March at about 25 km altitude. Finally, the MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) Fourier Transform Infra-Red (FTIR) spectrometer showed NOy values significantly higher than NOy* (the NOy derived from a correlation between N2O and NOy under undisturbed conditions) on 21 and 22 March in a layer centred at 22 km altitude. Thus, the mesospheric air seems to have been present in a layer descending from about 30 km in late January to 25 km in early March and about 22 km on 20 March. We present corroborating evidence from a model study using the KASIMA (KArlsruhe Simulation model of the Middle Atmosphere) model, which also shows a layer of mesospheric air that descended into the stratosphere in November and early December 2002, before the minor warming in late December 2002 led to a descent of upper stratospheric air, cutting off a layer in which mesospheric air is present. This layer then descended inside the vortex over the course of the winter. The same feature is found in trajectory calculations based on a large number of trajectories started in the vicinity of the observations on 6 March.
Based on the difference between the mean age derived from SF6 (which has an irreversible mesospheric loss) and from CO2 (whose mesospheric loss is much smaller and reversible), we estimate that the fraction of mesospheric air in the layer observed on 6 March must have been between 35% and 100%.
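A minimal version of such an end-member estimate, written here for a tracer mixing ratio such as SF6 rather than for mean age, looks as follows; all numerical values are hypothetical placeholders, not the measured ones.

```python
def mesospheric_fraction(x_obs, x_meso, x_strat):
    """Fraction f of mesospheric air in a parcel, from two-end-member
    mixing of a tracer: x_obs = f * x_meso + (1 - f) * x_strat."""
    return (x_obs - x_strat) / (x_meso - x_strat)

# Hypothetical SF6 values (pptv): nearly SF6-free mesospheric air
# mixed into mid-stratospheric vortex air.
f = mesospheric_fraction(x_obs=2.0, x_meso=0.5, x_strat=4.5)
```

The wide 35–100% range quoted above reflects the uncertainty in exactly such end-member values.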
A new version of a digital global map of irrigation areas was developed by combining irrigation statistics for 10 825 sub-national statistical units and geo-spatial information on the location and extent of irrigation schemes. The map shows the percentage of each 5 arc minute by 5 arc minute cell that was equipped for irrigation around the year 2000. It is thus an important data set for global studies related to water and land use. This paper describes the data set and the mapping methodology and gives, for the first time, an estimate of the map quality at the scale of countries, world regions and the globe. Two indicators of map quality were developed for this purpose, and the map was compared to irrigated areas as derived from two remote sensing based global land cover inventories.
Flow velocity in rivers has a major impact on the residence time of water and thus on high and low flows as well as on water quality. For global-scale hydrological modelling, only very limited information is available for simulating flow velocity. Based on the Manning-Strickler equation, a simple algorithm for modelling temporally and spatially variable flow velocity was developed with the objective of improving flow routing in the global hydrological model WaterGAP. An extensive data set of flow velocity measurements in US rivers was used to test and validate the algorithm before integrating it into WaterGAP. In this test, flow velocity was calculated from measured discharge and compared to measured velocity. The results show that flow velocity can be modelled satisfactorily at selected river cross-sections. It turned out to be quite sensitive to river roughness, and the results can be optimized by tuning this parameter. After validation of the approach, the flow velocity algorithm was implemented in the WaterGAP model. A final validation of its effects on the model results is currently being performed.
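The Manning-Strickler relation underlying the algorithm, v = (1/n) · R_h^(2/3) · S^(1/2), can be sketched directly; the parameter values in the example are generic natural-channel values, not WaterGAP's calibrated ones.

```python
def flow_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity (m/s) from the Manning-Strickler equation.
    n: Manning roughness coefficient (s m^(-1/3)),
    hydraulic_radius: flow cross-section area / wetted perimeter (m),
    slope: energy (bed) slope (dimensionless)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Generic example: a moderately rough natural channel.
v = flow_velocity(n=0.035, hydraulic_radius=2.0, slope=0.0005)
```

The sensitivity to roughness noted above is visible directly in the formula: velocity scales with 1/n, so halving n doubles the computed velocity.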
Climate, and precipitation in particular, is one of the most important natural shaping factors of the West African savanna region. Morphodynamics, soil formation, runoff regime and water balance are directly controlled by climate. Precipitation is moreover the limiting element for the growth of flora and fauna. Any change in precipitation amounts has serious consequences for the landscape and its inhabitants. The study of long-term climatic change is a contribution to understanding the formation and transformation of the landscape; it allows parallel developments of the natural and cultural environment to be seen in their long-term context. The aim is to describe the climate of the Lake Chad region since the beginning of regular climate records, using various statistical methods. Furthermore, interactions and relationships with external factors (global circulation, ocean temperature, solar radiation, ...) are to be identified.
The phenomenon of duricrust formation remains a "curiosity" and an enigma for the countries of the intertropical zone. Because of the rather particular lithological and structural characteristics of its rocks (rocks rich in ferromagnesian minerals), Burkina Faso is a prime setting for duricrusts. Despite researchers' efforts to elucidate the crust-forming process, it must be acknowledged that some grey areas still remain today, which calls for a more thorough analysis ... Far from neglecting the problems involved in reconstructing the genesis of the duricrust, we propose here a rather original analysis of the crusts, based on the knowledge already acquired and on our numerous field observations in Burkina Faso.
Maiduguri, an important city in the Sudano-Sahelian zone of West Africa, experiences both drought and floods. Although droughts receive more attention, floods are a seasonal occurrence in parts of the city in an average rainy season. Both hazards exert a heavy toll on their victims. The present response to these hazards is characterised by a fire-fighting approach which does little about future occurrences. Much of the perception and response is spiritual and stops short of the structural and organisational programmes needed for effective hazard mitigation. Future occurrences of drought and flood may have more adverse effects as land use in the city becomes more complex and the agricultural and water supply systems come to depend heavily on surface sources. Future effects will also depend on the socio-economic conditions of the people at risk and on the capacity of those who help them. Governments and people need to work together to reduce drought and flood hazards.
The beginnings of iron-ore mining are still poorly known in Africa in general and in Burkina Faso in particular. Yet during French colonisation, several authors expressed their amazement on discovering the metallurgical industry of certain peoples of what is now Burkina Faso, for example that of the Moose, Bwaba and Sénoufo. According to these authors, those peoples developed technologies that ensured a good output of metallic iron. To learn more, we have been conducting a research programme on iron metallurgy in Burkina Faso since 1983, with particular emphasis on the aspects of heavy iron metallurgy. This is why the study of the old mines is of very special interest to us. Our purpose here is to report briefly some of the information gathered on the site and setting of the mines, the extraction methods, the ore types, and questions relating to the ownership and working conditions of these mines.
Out of the need to "sustainably secure the functions of the soil" (§1 BBodSchG), and thus to integrate precautionary soil protection into planning processes, a new soil protection concept was developed. It is based on a differentiated yet transparent soil evaluation. The problem with soil evaluation is that something is to be assessed for which, depending on the question at hand, new objectives must be defined again and again. The soil evaluation is therefore based on a system of objectives that clearly defines protection goals and makes the evaluation traceable. For the soil protection concept, important criteria are presented from the multitude of possible ones, from which those essential with respect to this system of objectives can be selected. To obtain meaningful data for these criteria, the soil evaluation draws on pedological as well as landscape-genetic and geomorphological relationships. The actual evaluation then proceeds in three steps: first an individual assessment; then a summary by the soil functions habitat function, regulation function and information function, by the intrinsic value of the soil (worthiness of protection), and by its sensitivity and endangerment (need of protection). In the third step, these assessments are combined into a weighted, verbal-argumentative overall evaluation of the worthiness and need of protection. The evaluation procedure also reveals conflicting objectives between the different assets to be protected. Protection measures then follow stringently from the premises previously set in the system of objectives; that is, goals and measures are chosen on justified grounds, stand in an overall ecological context and are readily traceable. The new soil protection concept presented here is suitable for various planning levels. It can be applied in different natural regions and can define different protection goals by means of the system of objectives, thus taking into account the diversity of natural regions in an area as well as the diversity of opinion on what precautionary soil protection should mean.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in situ observations from the high-altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4·10^-6 cm^-3 h^-1 in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, the denitrification has only moderate impact (<=10%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloon-borne in situ measurements of ClO conducted at northern-hemisphere midlatitudes during the period of maximum total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l'Adour, France. For both flights a ClO daytime and nighttime vertical profile could be derived over an altitude range of approximately 15–31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box-model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l'Adour in 1999 show good agreement with the ClO measurements. For the flight launched in León in 1996, similarly good agreement is found, except around ~650 K potential temperature (~26 km altitude). However, for solar zenith angles greater than 86°–87°, the simulated ClO mixing ratios tend to overestimate the measured ClO substantially, by a factor of approximately 2.5 or more for both flights. We therefore conclude that, apart from the situation at solar zenith angles greater than 86°–87°, where the model simulations substantially overestimate the ClO observations, the presented ClO measurements give no indication of substantial uncertainties in midlatitude stratospheric chlorine chemistry.
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods are thus not suited to describing its nonlinearities, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Extending an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear backpropagation neural network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent concentrations of greenhouse gases, GHG, and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN far exceeds that of a linear model; on the global scale it amounts to 84% explained variance. The results of the nonlinear algorithm are also physically plausible in amplitude and time structure. Nevertheless they cover a broad range; e.g. the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the period 1856–1998. The simulated amplitudes lie within the range discussed by HOUGHTON et al. (2001), and the combined anthropogenic effect corresponds to the observed increase in temperature for the examined period. Moreover, the BPN detects anthropogenically induced climate change at a high significance level.
Therefore the concept of neural networks can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
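The backpropagation network discussed above can be illustrated with a minimal sketch: a one-hidden-layer network trained by full-batch gradient descent on a synthetic nonlinear target. The layer size, learning rate and toy data are illustrative assumptions, not the configuration or data of the study:

```python
import numpy as np

# One-hidden-layer backpropagation network for nonlinear regression.
# All sizes, rates and the toy target are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))              # toy predictor
y = np.sin(3.0 * X) + 0.05 * rng.normal(size=X.shape)  # nonlinear target

n_hidden, lr = 16, 0.05
W1 = rng.normal(0.0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for _ in range(8000):
    h = np.tanh(X @ W1 + b1)            # hidden layer (tanh activation)
    out = h @ W2 + b2                   # linear output layer
    err = out - y                       # residuals
    # Backpropagate the mean-squared-error gradients
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # gradient through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
explained = 1.0 - np.mean((pred - y) ** 2) / y.var()   # explained variance
```

The explained-variance score computed at the end corresponds to the quality measure quoted in the abstract; on real forcing and temperature series the inputs would be multivariate and the free parameters tuned by trial and error, as described.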
Perhaps no one outside the discipline would have taken an interest in the global climate problem were it not for two explosive, interlinked facts. Humanity is highly dependent on the favour of the climate; we therefore cannot be indifferent to what happens to it. And humanity has increasingly begun to influence the climate itself. From this, a special responsibility falls on us all. ...
When the delegations from almost all the world's states meet again at the climate summit in The Hague [more precisely, at the now sixth Conference of the Parties to the United Nations climate convention] to deliberate on climate protection measures, one question always resonates: are such measures really necessary? Should we not simply wait until we know more, perhaps even everything? ...
The increase in the concentration of CO2 and other "greenhouse gases" in the atmosphere is beyond doubt, and the climate is responding just as unquestionably. Christian-Dietrich Schönwiese, professor of meteorological environmental research and climatology at the University of Frankfurt am Main, sees an urgent need for political action and at the same time pleads for a more objective debate on climate protection.
The public climate debate seems to be taking on a life of its own. Detached from the findings of the specialists, some speak of the "climate catastrophe" that will soon hit us with full force unless we immediately do everything differently; panic is their chosen means of attracting attention. Others see in the "climate swindle" a pretext for research funding and additional tax burdens on the economy; their strategy is confusion and trivialisation. Fixating on such extreme positions will certainly not do justice to the challenges of the future. It is high time for objectivity and for a clarifying contribution to the "climate" confusion.
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In the second step, time series components, such as trends, are removed to obtain a deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It emerges that most of the events concern extreme wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons also show this increase, especially in autumn, in Germany an insignificant decrease is found in the summer and autumn seasons. Moreover, it is demonstrated that the increase of extreme wet months is reflected in a systematic increase in the variance and in the parameters of the Weibull probability density function.
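The first analysis step, counting values above a defined threshold in the two halves of the observation period, can be sketched as follows. The synthetic monthly series, its trend and the 95th-percentile threshold are illustrative assumptions, not the gridded or station data used in the study:

```python
import numpy as np

# Count threshold exceedances in the first and second halves of a record.
# The synthetic series and the chosen threshold are illustrative.
def count_exceedances(series, threshold):
    """Exceedance counts for the first and second half of `series`."""
    half = len(series) // 2
    first, second = series[:half], series[half:2 * half]
    return int((first > threshold).sum()), int((second > threshold).sum())

rng = np.random.default_rng(1)
n_months = 1200                           # 100 "years" of monthly totals
trend = np.linspace(0.0, 20.0, n_months)  # slight upward trend in the mean
series = rng.gamma(shape=4.0, scale=15.0, size=n_months) + trend
threshold = np.percentile(series, 95)     # "extreme wet month" threshold
n_first, n_second = count_exceedances(series, threshold)
```

Comparing the two counts shows whether extremes have become more frequent; the abstract's second step, removing trend components before recounting, would then reveal how much of the change the trend explains.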
Simulation of global temperature variations and signal detection studies using neural networks
(1998)
The concept of neural network models (NNM) is a statistical strategy which can be used if a superposition of forcing mechanisms leads to observable effects and if a sufficient observational data base is available. Compared with multiple regression analysis (MRA), the main advantages are that NNM are an appropriate tool also in the case of nonlinear cause-effect relations and that interactions of the forcing mechanisms are allowed for. Compared with more sophisticated methods like general circulation models (GCM), the main advantage is that details of the physical background, such as feedbacks, may be unknown: neural networks learn from observations, which reflect feedbacks implicitly. The disadvantage, of course, is that the physical background is neglected. In addition, the results prove to be sensitively dependent on the network architecture, e.g. the number of hidden neurons or the initialisation of learning parameters. We used a supervised backpropagation network (BPN) with three neuron layers, an unsupervised Kohonen network (KHN) and a combination of both called a counterpropagation network (CPN). These concepts are tested with respect to their ability to simulate the observed global as well as hemispheric mean surface air temperature annual variations 1874–1993 when parameter time series of the following forcing mechanisms are incorporated: equivalent CO2 concentrations, tropospheric sulfate aerosol concentrations (both anthropogenic), volcanism, solar activity, and ENSO (all natural). It emerges that in this way up to 83% of the observed temperature variance can be explained, significantly more than by MRA. Including the North Atlantic Oscillation does not improve these results. On a global average, the greenhouse gas (GHG) signal so far is assessed to be 0.9–1.3 K (warming) and the sulfate signal 0.2–0.4 K (cooling), results in close agreement with the GCM findings published in the recent IPCC Report. The related signals of the natural forcing mechanisms considered cover amplitudes of 0.1–0.3 K. Our best NNM estimate of the GHG doubling signal amounts to 2.1 K at equilibrium or 1.7 K transient.
The climate system can be regarded as a dynamic nonlinear system; traditional linear statistical methods thus fail to model its nonlinearities, and alternative statistical techniques are required. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for several years now. Most authors use the standard backpropagation network (BPN) for their investigations, although this specific model architecture carries a certain risk of over- or underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented Fast Simulated Annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a yet undefined cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of a multiple linear regression model; the simulation quality on the global scale amounts to up to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994, updated by Jones, 1999a) for the examined period 1856–1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA detects anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
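Fast simulated annealing of the kind referred to above draws proposal steps from a Cauchy distribution and cools with the fast schedule t(k) = t0/(1+k) (Szu, 1986). A minimal sketch on a toy multimodal cost function follows; the cost function and all parameter choices are illustrative assumptions, not the network cost optimized in the study:

```python
import math
import random

# Fast simulated annealing (FSA): Cauchy-distributed jumps with the fast
# cooling schedule t(k) = t0 / (1 + k). All parameters are illustrative.
def fsa_minimize(cost, x0, t0=10.0, n_steps=50000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    for k in range(n_steps):
        t = t0 / (1.0 + k)                # fast cooling
        # Cauchy-distributed jump: heavy tails allow occasional long hops
        step = t * math.tan(math.pi * (rng.random() - 0.5))
        x_new = x + step
        f_new = cost(x_new)
        # Metropolis acceptance at the current temperature
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Toy multimodal cost: quadratic envelope with sinusoidal ripples,
# global minimum near x = 2.2
cost = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x) + 2.0
x_star, f_star = fsa_minimize(cost, x0=-5.0)
```

The heavy-tailed Cauchy proposals are what distinguish FSA from classical Boltzmann annealing: occasional long hops let the search escape local minima even after the temperature has dropped.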
Observed global and European spatiotemporal fields of surface air temperature, mean sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors, such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and to known internal climate mechanisms, such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporal fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each spatial pattern is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, plus residuals. Finally, a back-transformation yields the spatiotemporal patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance, including natural variability; this test answers the question whether a significant anthropogenic climate change is visible in the observed data. Second, the anthropogenic signal is tested against the climate noise component only; this test answers the question whether the anthropogenic signal stands out among the others in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed climate (first test).
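The two-step procedure, PCA of the space-time field followed by regression of the component time series on forcing series and a back-transformation, can be sketched on synthetic data. The field, the forcings and all dimensions are illustrative assumptions, and ordinary least squares stands in for the stepwise regression of the study:

```python
import numpy as np

# Step 0: build a synthetic space-time field (illustrative assumptions):
# two forcing series imprint two random spatial patterns, plus noise.
rng = np.random.default_rng(2)
n_time, n_grid = 120, 40
forcing = np.vstack([np.linspace(0.0, 1.0, n_time),       # "GHG"-like trend
                     np.sin(np.arange(n_time) / 4.0)]).T  # "ENSO"-like cycle
pattern = rng.normal(size=(2, n_grid))                    # true spatial patterns
field = forcing @ pattern + 0.3 * rng.normal(size=(n_time, n_grid))

# Step 1: PCA of the anomaly field via SVD
anom = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
n_pc = 5
scores = U[:, :n_pc] * s[:n_pc]            # principal-component time series
eofs = Vt[:n_pc]                           # corresponding spatial patterns

# Step 2: least-squares regression of each PC series on the forcings
A = np.column_stack([forcing, np.ones(n_time)])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

# Back-transformation: forced signal expressed on the original grid
signal = (A @ coef) @ eofs
residual_frac = np.var(anom - signal) / np.var(anom)  # unexplained fraction
```

Because the PCA time series are uncorrelated, each can be regressed separately, and the back-transformation through the EOFs distributes the fitted signals over the grid for intercomparison, as the abstract describes.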