Palaeoclimate reconstructions that aim to investigate climate-human interactions over long time series are, encouraged by the currently intense climate debate, gaining an ever greater role in public and scientific perception. Despite all the scientific progress made in modern climate research over recent decades, the reliable prediction and modelling of future climate change remains one of the greatest challenges of our time. Taking the Caribbean as an example, many model calculations predict, as a consequence of rising ocean temperatures, a markedly more frequent occurrence of tropical storms and hurricanes as well as a shift towards higher storm intensities. For the Caribbean and many adjacent states, this trend represents one of the greatest hazards of modern climate change, and one that needs to be investigated scientifically over a long time frame.
Climate projections mostly rely entirely on highly resolved instrumental data sets. These, however, are all limited by one essential aspect: owing to their restricted availability (~150 years), they lack the depth required to adequately capture the processes of global climate dynamics that operate on long time scales. Considering the Holocene in its entirety, global climate dynamics over the past ~11,700 years were governed by periodically occurring processes. These generally act over periods of several decades, in part centuries, and in some cases even millennia. Many of these natural processes cannot be fully identified within the short instrumental era and adequately accounted for in climate models. Considering the instrumental era alone therefore offers only a limited perspective for understanding the causes and course of past climate change, and the possible consequences of future climate change. To overcome this limitation, geoscientific research must use proxy methods to attain a comprehensive and mechanistic understanding of Holocene climate variability.
Bearing in mind this limitation, rising ocean temperatures, and the increased occurrence of strong tropical cyclones in the Caribbean over the past 20 years, it is understandable that this doctoral thesis set out to produce a two-millennia-long, annually resolved climate data set that reflects late Holocene variations in sea surface temperature (SST) and the resulting long-term changes in the frequency of tropical cyclones. In Central America, the end of the Maya civilization (900-1100 CE) is associated with drastic environmental changes (e.g. droughts) caused by global climate change during the Medieval Warm Period (MWP; 900-1400 CE). The information on past climate variations derived from a "blue hole" can serve as a reference for the current climate crisis.
A "blue hole" is a karst cave that formed subaerially in the carbonate framework of a reef system during past sea-level lowstands and was completely flooded by subsequent sea-level rise. In a few marine blue holes, anoxic bottom-water conditions occur. The successions of marine sediments deposited in these anoxic karst caves can be used as a unique climate archive because, in the absence of bioturbation, they exhibit annual layering (varves).
This cumulative dissertation on the "Great Blue Hole" presents the results of a three-year research project whose goal was to produce an outstanding late Holocene climate data set for the south-western Caribbean. The "Great Blue Hole" is a globally unique marine sediment archive of diverse late Holocene climate changes, which was investigated in this dissertation with respect to both palaeoclimatic and sedimentological questions. Specifically, this doctoral thesis deals with (1) the compilation of an annually resolved archive of tropical cyclones, (2) the development of an annually resolved SST data set, and (3) a compositional quantification of the sedimentary successions together with a facies-stratigraphic characterization of fair-weather sediments and storm layers. For each of these three aspects, a research article was published in a recognized peer-reviewed scientific journal.
The 8.55 m long sediment core ("BH6") examined for this dissertation was retrieved from the bottom of the 125 m deep and 320 m wide "Great Blue Hole", situated in the shallow eastern lagoon of the "Lighthouse Reef" Atoll, 80 km off the coast of Belize (Central America). Owing to its particular geomorphology, the "Great Blue Hole", positioned within the Atlantic hurricane belt, acts as a giant sediment trap. The successions of fine-grained carbonate sediments deposited continuously under fair-weather conditions are interrupted by coarse storm layers attributable to overwash processes of tropical cyclones.
...
[Obituary] Arno Semmel
(2010)
In November 2016, magnetotelluric (MT) data were collected at Ceboruco Volcano in cooperation with the Centro de Sismología y Volcanología de Occidente (SisVoc, Universidad de Guadalajara, Mexico). Ceboruco is a 2280 m high stratovolcano located in Nayarit State, Mexico. It lies in the central part of the Tepic-Zacoalco Rift (TZR), which constitutes the north-western end of the Trans-Mexican Volcanic Belt. Together with the Chapala and Colima rifts (in the Jalisco Block), the TZR forms a triple rift system that developed as a consequence of the ongoing subduction of the Rivera and Cocos oceanic plates beneath the North American continental crust. Although its last eruption occurred in 1870, Ceboruco is the most active volcano in the area, showing volcanic earthquake activity together with ongoing vapor emissions. The survey was part of a geothermal project (CeMIEGeo-P24) and focused on determining electrical conductivity properties to characterize the deep structure and the geothermal potential of the volcano. Frequency-dependent magnetotelluric response functions were calculated from 25 broadband MT stations, which covered an area of 10 × 10 km² including the crater, calderas and foreland. The results were interpreted using anisotropic 3-D forward modelling and isotropic 3-D inversion approaches, taking strong topographic effects into account. The final resistivity model implies a highly conductive layer, reaching from near the surface to approximately 2 km depth, which might be related to a hydrothermal system. Here, mineralized fluids and clay minerals can cause high conductivities of around 1 S/m. For longer periods, the principal axes of the MT response tensors (phase tensor, apparent resistivity tensor) are in good agreement with the strike direction of the underlying rift system. However, they are not rendered by the isotropic inversion.
Thus, the data suggest an anisotropic electrical conductivity at greater depth, with its principal axis determined by the response tensors.
The most frequently used boundary-layer turbulence parameterizations in numerical weather prediction (NWP) models are turbulence kinetic energy (TKE) based schemes. However, these parameterizations suffer from a potential weakness, namely a strong dependence on an ad hoc quantity, the so-called turbulence length scale. The physical interpretation of the turbulence length scale is difficult, and hence it cannot be directly related to measurements or large eddy simulation (LES) data. Consequently, the turbulence length scale formulations in essentially all TKE schemes are based on simplified assumptions and are model-dependent. A good reference for the independent evaluation of turbulence length scale expressions in NWP modeling is missing. Here we propose a new turbulence length scale diagnostic which can be used in the gray zone of turbulence without modifying the underlying TKE turbulence scheme. The new diagnostic is based on the TKE budget: the core idea is to encapsulate the sum of the molecular dissipation and the cross-scale TKE transfer into an effective dissipation, and to associate it with the new turbulence length scale. This effective dissipation can then be calculated as a residuum in the TKE budget equation (for horizontal sub-domains of different sizes) using LES data. An estimation of the scale dependence of the diagnosed turbulence length scale using this novel method is presented for several idealized cases.
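The budget-residuum idea can be sketched numerically. This is a minimal illustration, not the paper's exact formulation: the function name, the toy budget terms, and the Kolmogorov-type closure eps = c_eps * e**1.5 / L (from which L is recovered) are assumptions made here for the sketch.

```python
import numpy as np

def diagnose_length_scale(tke, shear_prod, buoy_prod, transport, tendency,
                          c_eps=0.845):
    """Diagnose an effective turbulence length scale from TKE budget terms.

    The effective dissipation is taken as the residuum of the TKE budget
    (production minus transport and tendency), and the length scale is then
    recovered from the assumed closure eps_eff = c_eps * tke**1.5 / L.
    All budget terms are in m^2 s^-3, tke in m^2 s^-2.
    """
    eps_eff = shear_prod + buoy_prod - transport - tendency
    eps_eff = np.maximum(eps_eff, 1e-12)  # guard against non-physical sign
    return c_eps * tke**1.5 / eps_eff

# Toy example: a single sub-domain average
L = diagnose_length_scale(tke=np.array([1.0]),
                          shear_prod=np.array([0.012]),
                          buoy_prod=np.array([0.0]),
                          transport=np.array([0.001]),
                          tendency=np.array([0.001]))
```

In practice the budget terms would be horizontal averages over LES sub-domains of different sizes, yielding the scale dependence of the diagnosed length scale.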
Aim: Predicting future changes in species richness in response to climate change is one of the key challenges in biogeography and conservation ecology. Stacked species distribution models (S‐SDMs) are a commonly used tool to predict current and future species richness. Macroecological models (MEMs), regression models with species richness as the response variable, are a less computationally intensive alternative to S‐SDMs. Here, we aim to compare the results of the two model types (S‐SDMs and MEMs), for the first time for more than 14,000 species across multiple taxa globally, and to trace the uncertainty in future predictions back to the input data and modelling approach used.
Location: Global land, excluding Antarctica.
Taxon: Amphibians, birds and mammals.
Methods: We fitted S‐SDMs and MEMs using a consistent set of bioclimatic variables and model algorithms and conducted species richness predictions under current and future conditions. For the latter, we used four general circulation models (GCMs) under two representative concentration pathways (RCP2.6 and RCP6.0). Predicted species richness was compared between S‐SDMs and MEMs and for current conditions also to extent‐of‐occurrence (EOO) species richness patterns. For future predictions, we quantified the variance in predicted species richness patterns explained by the choice of model type, model algorithm and GCM using hierarchical cluster analysis and variance partitioning.
Results: Under current conditions, species richness predictions from MEMs and S‐SDMs were strongly correlated with EOO‐based species richness. However, both model types over‐predicted areas with low and under‐predicted areas with high species richness. Outputs from MEMs and S‐SDMs were also highly correlated among each other under current and future conditions. The variance between future predictions was mostly explained by model type.
Main conclusions: Both model types were able to reproduce EOO‐based patterns in global terrestrial vertebrate richness, but produced less strongly correlated predictions of future species richness. Model type contributed by far the most to the variation among the future species richness predictions, indicating that the two model types should not be used interchangeably. Nevertheless, both model types have their justification, as MEMs can also include species with a restricted range, whereas S‐SDMs are useful for examining potential species‐specific responses.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the 17 immersion freezing measurement techniques deviate, within a range of about 8 °C in terms of temperature, by 3 orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space for both specific surface area-based ns(T) and geometric surface area-based ns(T) are provided. These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice nucleating particles (INPs). However, an inter-comparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nucleation research UnIT), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. Seventeen measurement methods were involved in the data inter-comparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while ten other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing dataset was evaluated using the ice nucleation active surface-site density (ns) to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers nine orders of magnitude in ns.
Our inter-comparison results revealed a discrepancy between suspension and dry-dispersed particle measurements for this mineral dust. While the agreement was good below ~ −26 °C, the ice nucleation activity, expressed in ns, was smaller for the wet suspended samples and higher for the dry-dispersed aerosol samples between about −26 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −26 and −18 °C is discussed. In general, the seventeen immersion freezing measurement techniques deviate, within a range of about 7 °C in terms of temperature, by three orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency (i.e., ns) of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. A multiple exponential distribution fit is expressed as ns(T) = exp(23.82 × exp(−exp(0.16 × (T + 17.49))) + 1.39) based on the specific surface area and ns(T) = exp(25.75 × exp(−exp(0.13 × (T + 17.17))) + 3.34) based on the geometric area (ns and T in m−2 and °C, respectively). These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and, thereby, IN data from future IN instruments.
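The two fits quoted above can be evaluated directly. The following sketch is a plain transcription of those formulas (ns in m⁻², T in °C); only the function names are invented here.

```python
import math

def ns_ssa(T):
    """Ice nucleation active surface-site density (m^-2) for illite NX,
    specific-surface-area based fit quoted above; T in deg C."""
    return math.exp(23.82 * math.exp(-math.exp(0.16 * (T + 17.49))) + 1.39)

def ns_geo(T):
    """Same, but the geometric-surface-area based fit; T in deg C."""
    return math.exp(25.75 * math.exp(-math.exp(0.13 * (T + 17.17))) + 3.34)
```

As expected for immersion freezing, both fits increase steeply with cooling, e.g. ns_ssa(-25) is several orders of magnitude larger than ns_ssa(-15).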
An easy-to-use model to evaluate conductivities at high and middle latitudes in the height range 70–100 km is presented. It is based on electron density profiles obtained with the EISCAT VHF radar during 11 years and on the neutral atmospheric model MSIS95. The model uses solar zenith angle, geomagnetic activity and season as input parameters. It was mainly constructed to study the properties of Schumann resonances that depend on such conductivity profiles.
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid waterlogging and to manage high groundwater tables. Among other impacts, it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5' by 5' grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries, the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for six countries. The global drainage area on the map is 167 million hectares. For only 11 of the 116 countries with information on artificially drained areas could sub-national information be taken into account. Given this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and it discusses the data uncertainty.
Vegetation responds to drought through a complex interplay of plant hydraulic mechanisms, posing challenges for model development and parameterization. We present a mathematical model that describes the dynamics of leaf water potential over time while considering the different strategies by which plant species regulate their water potentials. The model has two parameters: λ, describing the adjustment of the leaf water potential to changes in soil water potential, and Δψww, describing the typical ‘well-watered’ leaf water potential at non-stressed (near-zero) levels of soil water potential. Our model was tested and calibrated on 110 time-series datasets containing the leaf and soil water potentials of 66 species under drought and non-drought conditions. Our model successfully reproduces the measured leaf water potentials over time based on three different regulation strategies under drought. We found that three parameter sets derived from the measurement data reproduced the dynamics of 53% of the drought dataset and 52% of the control dataset [root mean square error (RMSE) < 0.5 MPa]. We conclude that, instead of quantifying the water potential regulation of different plant species by complex modeling approaches, a small set of parameters may be sufficient to describe the water potential regulation behavior for large-scale modeling. Thus, our approach paves the way for a parsimonious representation of the full spectrum of plant hydraulic responses to drought in dynamic vegetation models.
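Read as a steady-state relation, the two-parameter description suggests a simple linear mapping from soil to leaf water potential. The sketch below is an assumption made here for illustration (the linear form and the function name are hypothetical; the paper's actual model is dynamic), but it conveys what the two parameters control.

```python
def leaf_water_potential(psi_soil, lam, delta_psi_ww):
    """Hypothetical steady-state reading of the two-parameter model.

    psi_leaf tracks the soil water potential with slope lam
    (lam = 0: strictly isohydric regulation, lam = 1: anisohydric),
    offset by the 'well-watered' leaf water potential delta_psi_ww.
    All potentials in MPa (negative under tension).
    """
    return lam * psi_soil + delta_psi_ww
```

For example, an anisohydric species (lam = 1) at psi_soil = −2 MPa with delta_psi_ww = −0.5 MPa would show a leaf water potential of −2.5 MPa, while a strictly isohydric species would stay at its well-watered value.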
Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making.
Using an illustrative future land use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation increases as one possible measure of agricultural intensification that could limit the expansion of cropland required in response to climate change and growing food demand. This example shows that current impact model uncertainties pose an important challenge to long-term mitigation planning and must not be ignored in long-term strategic decision making.
Irrigation intensifies land use by increasing crop yield but also impacts water resources. It affects water and energy balances and consequently the microclimate in irrigated regions. Therefore, knowledge of the extent of irrigated land is important for hydrological and crop modelling, global change research, and assessments of resource use and management. Information on the historical evolution of irrigated lands is limited. The new global historical irrigation data set (HID) provides estimates of the temporal development of the area equipped for irrigation (AEI) between 1900 and 2005 at 5 arcmin resolution. We collected sub-national irrigation statistics from various sources and found that the global extent of AEI increased from 63 million ha (Mha) in 1900 to 111 Mha in 1950 and 306 Mha in 2005. We developed eight gridded versions of time series of AEI by combining sub-national irrigation statistics with different data sets on the historical extent of cropland and pasture. Different rules were applied to maximize consistency of the gridded products to sub-national irrigation statistics or to historical cropland and pasture data sets. The HID reflects very well the spatial patterns of irrigated land as shown on historical maps for the western United States (around year 1900) and on a global map (around year 1960). Mean aridity on irrigated land increased and mean natural river discharge on irrigated land decreased from 1900 to 1950 whereas aridity decreased and river discharge remained approximately constant from 1950 to 2005. The data set and its documentation are made available in an open-data repository at https://mygeohub.org/publications/8 (doi:10.13019/M20599).
Irrigation intensifies land use by increasing crop yield but also impacts water resources. It affects water and energy balances and consequently the microclimate in irrigated regions. Therefore, knowledge of the extent of irrigated land is important for hydrological and crop modelling, global change research, and assessments of resource use and management. Information on the historical evolution of irrigated lands is limited. The new global Historical Irrigation Dataset (HID) provides estimates of the temporal development of the area equipped for irrigation (AEI) between 1900 and 2005 at 5 arc-minute resolution. We collected subnational irrigation statistics from various sources and found that the global extent of AEI increased from 63 million ha (Mha) in 1900 to 112 Mha in 1950 and 306 Mha in 2005. We developed eight gridded versions of time series of AEI by combining subnational irrigation statistics with different data sets on the historical extent of cropland and pasture. Different rules were applied to maximize consistency of the gridded products to subnational irrigation statistics or to historical cropland and pasture data sets. The HID reflects very well the spatial patterns of irrigated land in the western United States as shown on historical maps. Mean aridity on irrigated land increased and river discharge decreased from 1900 to 1950, whereas aridity decreased from 1950 to 2005. The dataset and its documentation are made available in an open data repository at https://mygeohub.org/publications/8 (doi:10.13019/M2MW2G).
Wetlands such as bogs, swamps, or freshwater marshes are hotspots of biodiversity. For 5.1 million km2 of inland wetlands, the dynamics of area and water storage, which strongly impact biodiversity and ecosystem services, were simulated using the global hydrological model WaterGAP. For the first time, the impacts of both human water use and man‐made reservoirs (WUR) and future climate change (CC) on wetlands around the globe were quantified. WUR impacts are concentrated in arid/semiarid regions, where WUR decreased mean wetland water storage by more than 5% on 8.2% of the mean wetland area during 1986–2005 (Am), with the highest decreases in areas of groundwater depletion. Using output of three climate models, CC impacts on wetlands were quantified, distinguishing unavoidable impacts [i.e., at 2 °C global warming (GW)] from avoidable impacts (difference between 3 °C and 2 °C impacts). Even unavoidable CC impacts are projected to be much larger than WUR impacts, also in arid/semiarid regions. On most wetland area with reliable estimates, avoidable CC impacts are more than twice as large as unavoidable impacts. In case of 2 °C GW, half of Am is estimated to be unaffected by mean storage changes of more than 5%, but only one third in case of 3 °C GW. Temporal variability of water storage will increase for most wetlands. Wetlands in dry regions will be affected the most, particularly by water storage decreases in the dry season. In contrast to wealthier countries, low‐income countries will predominantly suffer from a decrease in wetland water storage due to CC.
A graph theoretical approach to the analysis, comparison, and enumeration of crystal structures
(2008)
As an alternative approach to lattices and space groups, this work explores graph theory as a means to model crystal structures. The approach uses quotient graphs and nets - the graph-theoretical equivalents of cells and lattices - to represent crystal structures. After a short review of related work, new classes of cycles in nets are introduced, and their ability to distinguish between non-isomorphic nets as well as their computational complexity are evaluated. Then, two methods to estimate a structure's density from the corresponding net are proposed. The first uses coordination sequences to estimate the number of nodes in a sphere, whereas the second method determines the maximal volume of a unit cell. Based on the quotient graph alone, methods are proposed to determine whether nets consist of islands, chains, planes, or penetrating, disconnected sub-nets. An algorithm for the enumeration of crystal structures is revised and extended to a search for structures possessing certain properties. Particular attention is given to the exclusion of redundant nets and of those that, by the nature of their connectivity, cannot correspond to a crystal structure. Nets with four four-coordinated nodes, corresponding to sp3-hybridised carbon polymorphs with four atoms per unit cell, are completely enumerated in order to demonstrate the approach. In order to render quotient graphs and nets independent of crystal structures, they are reintroduced in a purely graph-theoretical way. On this basis, the issue of iso- and automorphism of nets is reexamined. It is shown that the topology of a net (that is, the bonds in a crystal) severely constrains the symmetry of the embedding (that is, the crystal), and in the case of connected nets determines the space group except for the setting. Several examples are studied and conclusions on phases are drawn (pseudo-cubic FeS2 versus pyrite; α- versus β-quartz; marcasite- versus rutile-like phases).
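The coordination-sequence idea can be illustrated with a small breadth-first search on a quotient graph. The edge encoding used below (node pairs plus integer translation vectors) is a common convention assumed here, not necessarily the thesis's own data structure; the primitive cubic (pcu) net, whose shells in Z³ have size 4k²+2, serves as a check.

```python
def coordination_sequence(quotient_edges, start_node, shells):
    """Count nodes in successive topological shells of the periodic net
    unfolded from a quotient graph.

    quotient_edges: list of (u, v, t) - an edge from node u in the reference
    unit cell to node v in the cell shifted by translation vector t.
    """
    adj = {}
    for u, v, t in quotient_edges:
        # traverse each quotient edge in both directions
        adj.setdefault(u, []).append((v, t))
        adj.setdefault(v, []).append((u, tuple(-x for x in t)))
    origin = (start_node, (0, 0, 0))
    seen = {origin}
    frontier = [origin]
    seq = []
    for _ in range(shells):
        nxt = []
        for node, cell in frontier:
            for v, t in adj.get(node, []):
                w = (v, tuple(c + x for c, x in zip(cell, t)))
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        seq.append(len(nxt))
        frontier = nxt
    return seq

# pcu net: one node, three self-loop edges along the x, y and z axes
pcu = [(0, 0, (1, 0, 0)), (0, 0, (0, 1, 0)), (0, 0, (0, 0, 1))]
print(coordination_sequence(pcu, 0, 4))  # [6, 18, 38, 66]
```

The growth rate of such sequences is what allows the number of nodes within a sphere, and hence a density estimate, to be derived from the net alone.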
As the automorphisms of certain quotient graphs stipulate a translational symmetry higher than an arbitrary embedding of the corresponding net would show, they are examined in more detail and a method to reduce the size of such quotient graphs is proposed. Besides two instructive examples with 2-dimensional graphs, the halite, calcite, magnesite, and barytocalcite structures as well as a strontium feldspar structure are discussed. For some of the structures it is shown that the quotient graph equivalent to a centred cell can be reduced to a quotient graph equivalent to the primitive cell. For the partially disordered strontium feldspar, it is shown that even if it could be annealed to an ordered structure, the unit cell would likely remain unchanged. For the calcite and barytocalcite structures it is shown that the equivalent nets are not isomorphic.
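One of the net descriptors used above, the coordination sequence, can be illustrated with a small sketch: a breadth-first search that grows the infinite net on the fly from its quotient graph, counting nodes per topological distance shell. The quotient graph of the diamond (dia) net used here (two nodes, four edges with voltage vectors) is a standard textbook example, not data taken from the thesis.

```python
# Hypothetical sketch: coordination sequence of a net computed directly
# from a quotient graph. A node of the infinite net is a pair
# (quotient-graph node, lattice cell); edges carry "voltage" vectors
# that say which cell the edge crosses into.

# Quotient graph of the dia net: two nodes "A", "B"; four A->B edges.
VOLTAGES = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

def neighbours(node):
    """Neighbours of an infinite-net node (label, cell) in the dia net."""
    label, (x, y, z) = node
    sign = 1 if label == "A" else -1      # traverse edges forwards/backwards
    other = "B" if label == "A" else "A"
    return [(other, (x + sign * a, y + sign * b, z + sign * c))
            for a, b, c in VOLTAGES]

def coordination_sequence(start, shells):
    """Breadth-first search counting new nodes in each distance shell."""
    seen = {start}
    frontier = [start]
    counts = []
    for _ in range(shells):
        nxt = []
        for node in frontier:
            for nb in neighbours(node):
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        counts.append(len(nxt))
        frontier = nxt
    return counts
```

For the dia net this reproduces the well-known first shells 4, 12, 24, ...; density estimation as described in the abstract would then compare the growth of these counts with the volume of a sphere.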
In the nineteenth century, two Neolithic axe-heads were reported from the Michelsberg enclosure system at Kapellenberg. The recent identification of an unusually large tumulus, from which the axe-heads were almost certainly once recovered, reveals that socio-political hierarchisation, linked to the emergence of high-ranking elites in Brittany and the Paris Basin during the fifth millennium cal BC, may have extended into Central Europe.
Recently, new soil data maps were developed that include vertical soil properties such as soil type. When these are implemented in a multilayer Soil-Vegetation-Atmosphere-Transfer (SVAT) scheme, discontinuities in the water content occur at the interface between dissimilar soils. Therefore, care must be taken in solving the Richards equation for calculating vertical soil water fluxes. We solve a modified form of the mixed (soil water and soil matric potential based) Richards equation by subtracting the equilibrium state of the soil matric potential ψE from the hydraulic potential ψh. The sensitivity of the modified equation is tested under idealized conditions. The paper shows that the modified equation can handle discontinuities in soil water content at the interface of layered soils.
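One way to read this modification (a sketch only; notation and sign conventions may differ from the paper) starts from the mixed form of the Richards equation written in terms of the hydraulic potential and replaces it by the deviation from equilibrium:

```latex
\frac{\partial \theta}{\partial t}
  \;=\; \frac{\partial}{\partial z}\!\left[\,K(\theta)\,
  \frac{\partial\,(\psi_h - \psi_E)}{\partial z}\right],
\qquad \psi_h = \psi_m + z ,
```

where θ is the volumetric water content, K the hydraulic conductivity, ψm the matric potential and z the vertical coordinate. At hydrostatic equilibrium ψh − ψE is uniform with depth, so the computed flux vanishes even though θ itself jumps at the interface between dissimilar soils.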
The first concerted multi-model intercomparison of halogenated very short-lived substances (VSLS) has been performed, within the framework of the ongoing Atmospheric Tracer Transport Model Intercomparison Project (TransCom). Eleven global models or model variants participated (nine chemical transport models and two chemistry–climate models) by simulating the major natural bromine VSLS, bromoform (CHBr3) and dibromomethane (CH2Br2), over a 20-year period (1993–2012). Except for three model simulations, all others were driven offline by (or nudged to) reanalysed meteorology. The overarching goal of TransCom-VSLS was to provide a reconciled model estimate of the stratospheric source gas injection (SGI) of bromine from these gases, to constrain the current measurement-derived range, and to investigate inter-model differences due to emissions and transport processes. Models ran with standardised idealised chemistry, to isolate differences due to transport, and we investigated the sensitivity of results to a range of VSLS emission inventories. Models were tested in their ability to reproduce the observed seasonal and spatial distribution of VSLS at the surface, using measurements from NOAA's long-term global monitoring network, and in the tropical troposphere, using recent aircraft measurements – including high-altitude observations from the NASA Global Hawk platform.
The models generally capture the observed seasonal cycle of surface CHBr3 and CH2Br2 well, with a strong model–measurement correlation (r ≥ 0.7) at most sites. In a given model, the absolute model–measurement agreement at the surface is highly sensitive to the choice of emissions. Large inter-model differences are apparent when using the same emission inventory, highlighting the challenges faced in evaluating such inventories at the global scale. Across the ensemble, most consistency is found within the tropics where most of the models (8 out of 11) achieve best agreement to surface CHBr3 observations using the lowest of the three CHBr3 emission inventories tested (similarly, 8 out of 11 models for CH2Br2). In general, the models reproduce observations of CHBr3 and CH2Br2 obtained in the tropical tropopause layer (TTL) at various locations throughout the Pacific well. Zonal variability in VSLS loading in the TTL is generally consistent among models, with CHBr3 (and to a lesser extent CH2Br2) most elevated over the tropical western Pacific during boreal winter. The models also indicate the Asian monsoon during boreal summer to be an important pathway for VSLS reaching the stratosphere, though the strength of this signal varies considerably among models.
We derive an ensemble climatological mean estimate of the stratospheric bromine SGI from CHBr3 and CH2Br2 of 2.0 (1.2–2.5) ppt, ∼ 57 % larger than the best estimate from the most recent World Meteorological Organization (WMO) Ozone Assessment Report. We find no evidence for a long-term, transport-driven trend in the stratospheric SGI of bromine over the simulation period. The transport-driven interannual variability in the annual mean bromine SGI is of the order of ±5 %, with SGI exhibiting a strong positive correlation with the El Niño–Southern Oscillation (ENSO) in the eastern Pacific. Overall, our results do not show systematic differences between models specific to the choice of reanalysis meteorology, rather clear differences are seen related to differences in the implementation of transport processes in the models.
The first concerted multi-model intercomparison of halogenated very short-lived substances (VSLS) has been performed, within the framework of the ongoing Atmospheric Tracer Transport Model Intercomparison Project (TransCom). Eleven global models or model variants participated, simulating the major natural bromine VSLS, bromoform (CHBr3) and dibromomethane (CH2Br2), over a 20-year period (1993-2012). The overarching goal of TransCom-VSLS was to provide a reconciled model estimate of the stratospheric source gas injection (SGI) of bromine from these gases, to constrain the current measurement-derived range, and to investigate inter-model differences
due to emissions and transport processes. Models ran with standardised idealised chemistry, to isolate differences due to transport, and we investigated the sensitivity of results to a range of VSLS emission inventories. Models were tested in their ability to reproduce the observed seasonal and spatial distribution of VSLS at the surface, using measurements from NOAA's long-term global monitoring network, and in the tropical troposphere, using recent aircraft measurements - including high-altitude observations from the NASA Global Hawk platform.
The models generally capture the seasonal cycle of surface CHBr3 and CH2Br2 well, with a strong model-measurement correlation (r ≥ 0.7) and a low sensitivity to the choice of emission inventory at most sites. In a given model, the absolute model-measurement agreement is highly sensitive to the choice of emissions, and inter-model differences are also apparent even when using the same inventory, highlighting the challenges faced in evaluating such inventories at the global scale. Across the ensemble, most consistency is found within the tropics, where most of the models (8 out of 11) achieve optimal agreement with surface CHBr3 observations using the lowest of the three CHBr3 emission inventories tested (similarly, 8 out of 11 models for CH2Br2). In general, the models reproduce observations of CHBr3 and CH2Br2 obtained in the tropical tropopause layer (TTL) at various locations throughout the Pacific well. Zonal variability in VSLS loading in the TTL is generally consistent among models, with CHBr3 (and to a lesser extent CH2Br2) most elevated over the tropical West Pacific during boreal winter. The models also indicate the Asian monsoon during boreal summer to be an important pathway for VSLS reaching the stratosphere, though the strength of this signal varies considerably among models.
We derive an ensemble climatological mean estimate of the stratospheric bromine SGI from CHBr3 and CH2Br2 of 2.0 (1.2-2.5) ppt, ∼57 % larger than the best estimate from the most recent World Meteorological Organization (WMO) Ozone Assessment Report. We find no evidence for a long-term, transport-driven trend in the stratospheric SGI of bromine over the simulation period. However, transport-driven inter-annual variability in the annual mean bromine SGI is of the order of ±5 %, with SGI exhibiting a strong positive correlation with ENSO in the East Pacific.
The condition of Earth's surface today is the result of long exposure to the metabolism of life forms. In particular, molecular oxygen in the atmosphere is a feature that developed over time. The first substantial and lasting rise of the atmospheric oxygen level happened ≈ 2.5 Ga ago, but localities are reported where transiently elevated oxygen levels appeared before this time. Tracing the timing and circumstances of the earliest availability of free oxygen in the atmosphere is important for understanding the habitats of early microbial life forms on Earth.
This thesis aims to obtain information on oxygen levels and the related atmospheric cycling of metals in sediments of the 3.5 to 3.2 Ga Barberton Greenstone Belt. First, as iron was a ubiquitous constituent of Archean seawater, I investigated its isotopic composition in minerals of chemical sediments. In doing so, I tried to resolve changes within the water basin at the scale of small sedimentary sequence cycles. Second, I focused on the minor constituents of Archean seawater. The Re-Os geochronologic system and the abundance patterns of the platinum-group elements were chosen to integrate information on oxygen-promoted weathering of a large source area. To integrate information over a large time interval, the isotopes of uranium were investigated over a large stratigraphic section.
The two key findings of this thesis are:
• Quantitative oxidation of ferrous iron in surface layers of Paleoarchean seawater occurred during the onset and termination of hydrothermal Fe(II)aq delivery into shallow waters.
• Paleoarchean sedimentary successions of the Barberton Greenstone Belt lack any evidence of transient basin-scale oxygenation.
The Manzimnyama Iron Formation (IF, Fig Tree Group, Barberton Greenstone Belt, South Africa) has been shown to consist of cyclic stacks of lithostratigraphic units with varying amounts of iron oxide and carbonate minerals. In-situ femtosecond laser ablation ICP-MS iron isotope measurements showed that the majority of siderite (δ56Fe ≈ −0.5 ‰) precipitated directly from seawater of δ56Fe ≈ 0 ‰. Ferric iron from the surface layers is preserved in ≤ 1 µm hematite and in magnetite that grew within the consolidated sediment. During Fe(II)aq events, fine-grained hematite (δ56Fe ≈ 2.2 ‰) and magnetite (δ56Fe ≈ 0.5 to 0.8 ‰) indicate oxygen levels in surface waters below 0.0002 µM. Upon onset and termination of iron oxide abundance, magnetite with δ56Fe ≈ 0 ‰ indicates that low concentrations of Fe(II)aq in surface waters were oxidized quantitatively. These observations demonstrate the existence of iron oxidation in Paleoarchean surface waters independent of the Fe(II)aq concentration. This is the first investigation of a Paleoarchean IF showing that lithostratigraphic cyclicity can be traced in the iron isotopic composition of oxide minerals.
ID-ICP-MS measurements of Re, Ir, Ru, Pt and Pd, trace element (SF-ICP-MS) and ID-MC-ICP-MS uranium isotope determinations have been applied to carbonaceous shale of the Mapepe Fm. (Fig Tree Group) after inverse aqua regia leaching and bulk digestion. The sediments reveal a silicified fraction which exhibits a seawater REE signature and a mixture of detrital and meteoritic PGE. Neither enrichment of the redox-sensitive elements Re or Mo nor fractionated uranium isotopes have been found over a stratigraphic interval of several hundred meters. The non-silica fraction shows no depletion of Re, which indicates that the detrital material had no contact with oxidizing fluids. ID-TIMS measurements of Re and Os after the CrO3-SO4 Carius tube method on two sample intervals showed that the Re-Os isotopic systems of the non-silica fractions are identical to those of two komatiite occurrences. Weltevreden Fm. and Komati Fm. rocks were uplifted, eroded and transported to the deep part of the sedimentary basin without any change to the Re-Os system. Negatively fractionated uranium isotopes (δ238U = −0.41 ± 0.01 ‰) associated with detrital Ba-Cr-U occurrences suggest the existence of distal redox processes involving uranium species. This study demonstrates that during exposure and deposition of the Mapepe Fm. sediments, free oxygen was not available for weathering in the catchment area.
This paper presents an analysis of the recent tropospheric molecular hydrogen (H2) budget with a particular focus on soil uptake and surface emissions. A variational inversion scheme is combined with observations from the RAMCES and EUROHYDROS atmospheric networks, which include continuous measurements performed between mid-2006 and mid-2009. Several scenarios were inverted: first the net H2 surface flux, then soil uptake distinct from surface emissions, and finally soil uptake, biomass burning, anthropogenic emissions and N2 fixation-related emissions separately. The various inversions generate an estimate for each term of the H2 budget. The net H2 flux per region (High Northern Hemisphere, Tropics and High Southern Hemisphere) varies between −8 and 8 Tg yr−1. The best inversion in terms of fit to the observations combines updated prior surface emissions and a soil deposition velocity map that is based on soil uptake measurements. Our estimate of global H2 soil uptake is −59 ± 4.0 Tg yr−1. Forty per cent of this uptake is located in the High Northern Hemisphere and 55 % is located in the Tropics. In terms of surface emissions, seasonality is mainly driven by biomass burning emissions. The inferred European anthropogenic emissions are consistent with independent H2 emissions estimated using a H2/CO mass ratio of 0.034 and CO emissions, considering their respective uncertainties. Additional constraints, such as isotopic measurements, would be needed to infer a more robust partition of H2 sources and sinks.
This paper presents an analysis of the recent tropospheric molecular hydrogen (H2) budget with a particular focus on soil uptake and European surface emissions. A variational inversion scheme is combined with observations from the RAMCES and EUROHYDROS atmospheric networks, which include continuous measurements performed between mid-2006 and mid-2009. Net H2 surface flux, then deposition velocity and surface emissions and finally, deposition velocity, biomass burning, anthropogenic and N2 fixation-related emissions were simultaneously inverted in several scenarios. These scenarios focused on the sensitivity of the soil uptake value to different spatio-temporal distributions. The range of variation of these diverse inversion sets generates an estimate of the uncertainty for each term of the H2 budget. The net H2 flux per region (High Northern Hemisphere, Tropics and High Southern Hemisphere) varies between −8 and +8 Tg yr−1. The best inversion in terms of fit to the observations combines updated prior surface emissions and a soil deposition velocity map that is based on bottom-up and top-down estimates. Our estimate of global H2 soil uptake is −59±9 Tg yr−1. Forty per cent of this uptake is located in the High Northern Hemisphere and 55 % is located in the Tropics. In terms of surface emissions, seasonality is mainly driven by biomass burning emissions. The inferred European anthropogenic emissions are consistent with independent H2 emissions estimated using a H2/CO mass ratio of 0.034 and CO emissions within the range of their respective uncertainties. Additional constraints, such as isotopic measurements, would be needed to infer a more robust partition of H2 sources and sinks.
Atmospheric new particle formation is a general phenomenon observed over coniferous forests. So far, nucleation is either parameterised as a function of the gaseous sulphuric acid concentration only, which is unable to explain the observed seasonality of nucleation events at different measurement sites, or as a function of sulphuric acid and organic molecules. Here we introduce different nucleation parameters based on the interaction of sulphuric acid and terpene oxidation products and elucidate their individual importance. They include basic trace gas and meteorological measurements such as ozone and water vapour concentrations, temperature (for terpene emission) and UV-B radiation as a proxy for OH radical formation. We apply these new parameters to field studies conducted at Finnish and German measurement sites and compare them to nucleation observations on a daily and annual scale. General agreement was found, although the specific compounds responsible for the nucleation process remain speculative. This can be interpreted as follows: during cooler seasons the emission of biogenic terpenes and the OH availability limit the new particle formation, while towards warmer seasons the ratio of ozone and water vapour concentration seems to dominate the general behaviour. Therefore, organics seem to support ambient nucleation besides sulphuric acid or an OH-related compound. Using these nucleation parameters to extrapolate the current conditions to projected future concentrations of ozone, water vapour and organics leads to a significant potential increase in the number of nucleation events.
The fractional release factor (FRF) gives information on the amount of a halocarbon that is released at some point into the stratosphere from its source form to the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the ozone depletion potential (ODP). In this context time-independent values are needed which, in particular, should be independent of the trends in the tropospheric mixing ratios (tropospheric trends) of the respective halogenated trace gases. For a given atmospheric situation, such FRF values would represent a molecular property.
We analysed the temporal evolution of FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965 and 2011 on different mean age levels and found that the widely used formulation of FRF yields highly time-dependent values. We show that this is caused by the way that the tropospheric trend is handled in the widely used calculation method of FRF.
Taking into account chemical loss in the calculation of stratospheric mixing ratios reduces the time dependence in FRFs. Therefore we implemented a loss term in the formulation of the FRF and applied the parameterization of a mean arrival time to our data set.
We find that the time dependence in the FRF can almost be compensated for by applying a new trend correction in the calculation of the FRF. We suggest that this new method should be used to calculate time-independent FRFs, which can then be used e.g. for the calculation of ODP.
The fractional release factor (FRF) gives information on the amount of a halocarbon that is released at some point in the stratosphere from its source form to the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the Ozone Depletion Potential (ODP). To apply FRF in this context, steady-state values are needed, thus representing a molecular property for a given atmospheric situation. In particular, these values should be independent of the tropospheric trends of the respective halogenated trace gases.
We analyzed the temporal evolution of FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965–2011 on different mean age levels and found that the current formulation of FRF yields highly time-dependent values. We show that this is caused by the way that the tropospheric trend is handled in the current calculation method of FRF.
Taking into account chemical loss in the calculation of stratospheric mixing ratios reduces the time-dependence in correlations of different tracers. Therefore we implemented a loss term in the formulation of FRF and applied the parameterization of a "mean arrival time" to our data set.
We find that the time-dependence in FRF can almost be compensated for by applying a new trend correction in the calculation of FRF. We suggest that this new method should be used to calculate time-independent FRF, which can then be used e.g. for the calculation of ODP.
The Late Tertiary to Quaternary evolution of the Ntem interior delta in SW Cameroon is modelled. A step fault was formed along neotectonically remobilized Precambrian structures. Uncalibrated 14C dates in this ‘sediment trap’ show Pleistocene to Holocene ages. Both within and below the interior delta, pebbles and clasts cemented in an iron and manganese matrix were found. These ‘fanglomerates’ are used to discuss different processes in the younger evolution, also with respect to climatic fluctuations in the study area.
Appropriate precautions in the case of flood occurrence often require long lead times (several days) in hydrological forecasting. This in turn implies large uncertainties that are mainly inherited from the meteorological precipitation forecast. Here we present a case study of the extreme flood event of August 2005 in the Swiss part of the Rhine catchment (total area 34 550 km2). This event caused tremendous damage and was associated with precipitation amounts and flood peaks with return periods beyond 10 to 100 years. To deal with the underlying intrinsic predictability limitations, a probabilistic forecasting system is tested, which is based on a hydrological-meteorological ensemble prediction system. The meteorological component of the system is the operational limited-area COSMO-LEPS that downscales the ECMWF ensemble prediction system to a horizontal resolution of 10 km, while the hydrological component is based on the semi-distributed hydrological model PREVAH with a spatial resolution of 500 m. We document the setup of the coupled system and assess its performance for the flood event under consideration. We show that the probabilistic meteorological-hydrological ensemble prediction chain is quite effective and provides additional guidance for extreme event forecasting, in comparison to a purely deterministic forecasting system. For the case studied, it is also shown that most of the benefits of the probabilistic approach may be realized with a comparatively small ensemble size of 10 members.
In order to quantitatively analyse the chemical and dynamical evolution of the polar vortex it has proven extremely useful to work with coordinate systems that follow the vortex flow. We propose here a two-dimensional quasi-Lagrangian coordinate system {Xi, ΔXi}, based on the mixing ratio Xi of a long-lived stratospheric trace gas i, and its systematic use with i = N2O, in order to describe the structure of a well-developed Antarctic polar vortex. In the coordinate system {Xi, ΔXi} the mixing ratio Xi is the vertical coordinate and ΔXi = Xi(θ) − Xi,vort(θ) is the meridional coordinate (Xi,vort(θ) being a vertical reference profile in the vortex core). The quasi-Lagrangian coordinates {Xi, ΔXi} persist much longer than the standard isentropic coordinates, potential temperature θ and equivalent latitude Φe, do not require explicit reference to geographic space, and can be derived directly from high-resolution in situ measurements. They are therefore well suited for studying the evolution of the Antarctic polar vortex throughout the polar winter with respect to the relevant chemical and microphysical processes. Using the introduced coordinate system {XN2O, ΔXN2O} we analyse the well-developed Antarctic vortex investigated during the APE-GAIA (Airborne Polar Experiment – Geophysica Aircraft in Antarctica – 1999) campaign (Carli et al., 2000). A criterion, which uses the local in-situ measurements of Xi = Xi(θ) and attributes the inner vortex edge to a rapid change (δ-step) in the meridional profile of the mixing ratio Xi, is developed to determine the (Antarctic) inner vortex edge. In turn, we suggest that the outer vortex edge of a well-developed Antarctic vortex can be attributed to the position of a local minimum of the XH2O gradient in the polar vortex area.
For a well-developed Antarctic vortex, the ΔXN2O parametrization of tracer-tracer relationships makes it possible to distinguish the tracer inter-relationships in the vortex core, the vortex boundary region and the surf zone and to examine their meridional variation throughout these regions. This is illustrated by analysing the tracer-tracer relationships Xi : XN2O obtained from the in-situ data of the APE-GAIA campaign for i = CFC-11, CFC-12, H-1211 and SF6. A number of solitary anomalous points in the CFC-11 : N2O correlation, observed in the Antarctic vortex core, are interpreted in terms of small-scale cross-isentropic dispersion.
Chlorine and bromine atoms lead to catalytic depletion of ozone in the stratosphere. Therefore the use and production of ozone-depleting substances (ODSs) containing chlorine and bromine is regulated by the Montreal Protocol to protect the ozone layer. Equivalent effective stratospheric chlorine (EESC) has been adopted as an appropriate metric to describe the combined effects of chlorine and bromine released from halocarbons on stratospheric ozone. Here we revisit the concept of calculating EESC. We derive a refined formulation of EESC based on an advanced concept of ODS propagation into the stratosphere and reactive halogen release. A new transit time distribution is introduced in which the age spectrum for an inert tracer is weighted with the release function for inorganic halogen from the source gases. This distribution is termed the release time distribution. We show that a much better agreement with inorganic halogen loading from the chemistry transport model TOMCAT is achieved compared with using the current formulation. The refined formulation shows EESC levels in the year 1980 for the mid-latitude lower stratosphere, which are significantly lower than previously calculated. The year 1980 is commonly used as a benchmark to which EESC must return in order to reach significant progress towards halogen and ozone recovery. Assuming that – under otherwise unchanged conditions – the EESC value must return to the same level in order for ozone to fully recover, we show that it will take more than 10 years longer than estimated in this region of the stratosphere with the current method for calculation of EESC. We also present a range of sensitivity studies to investigate the effect of changes and uncertainties in the fractional release factors and in the assumptions on the shape of the release time distributions. 
We further discuss the value of EESC as a proxy for future evolution of inorganic halogen loading under changing atmospheric dynamics using simulations from the EMAC model. We show that while the expected changes in stratospheric transport lead to significant differences between EESC and modelled inorganic halogen loading at constant mean age, EESC is a reasonable proxy for modelled inorganic halogen on a constant pressure level.
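The EESC construction discussed above can be illustrated with a minimal numerical sketch: tropospheric mixing-ratio histories are propagated into the stratosphere through a transit-time distribution and summed as chlorine-equivalent halogen loading. The inverse-Gaussian age spectrum, the species list, the fractional release factors and the bromine weighting α below are illustrative placeholders, not the refined release-time-distribution values derived in the paper.

```python
import numpy as np

def age_spectrum(t, mean_age=3.0, width=1.5):
    """Idealized inverse-Gaussian transit-time distribution (t in years)."""
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    pos = t > 0
    g[pos] = (np.sqrt(mean_age**3 / (4.0 * np.pi * width**2 * t[pos]**3))
              * np.exp(-mean_age * (t[pos] - mean_age)**2
                       / (4.0 * width**2 * t[pos])))
    return g

def eesc(tropospheric_history, species, t_grid):
    """Sum n_i * f_i * alpha_i * (history convolved with the age spectrum).
    species: dicts with halogen atom count n, fractional release f, and
    weighting alpha (1 for Cl, larger for the more effective Br)."""
    dt = t_grid[1] - t_grid[0]
    G = age_spectrum(t_grid)
    G /= G.sum() * dt                 # normalise the spectrum numerically
    total = 0.0
    for sp in species:
        chi = tropospheric_history[sp["name"]]   # mixing ratio at transit time t
        lagged = np.sum(chi * G) * dt            # stratospheric abundance now
        total += sp["n"] * sp["f"] * sp["alpha"] * lagged
    return total
```

The refinement described in the abstract amounts to replacing the inert-tracer spectrum G by a release time distribution, i.e. G weighted by the inorganic halogen release function of each source gas.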
The frequency of extreme events has changed, having a direct impact on human lives. Regional climate models help us to predict these regional climate changes. This work presents an atmosphere–ocean coupled regional climate system model (RCSM; with the atmospheric component COSMO-CLM and the ocean component NEMO) over the European domain, including three marginal seas: the Mediterranean, North, and Baltic Sea. To test the model, we evaluate a simulation of more than 100 years (1900–2009) with a spatial grid resolution of about 25 km. The simulation was nested into a coupled global simulation with the model MPI-ESM in a low-resolution configuration, whose ocean temperature and salinity were nudged to the ocean–ice component of the MPI-ESM forced with the NOAA 20th Century Reanalysis (20CR). The evaluation shows the robustness of the RCSM and discusses the added value by the coupled marginal seas over an atmosphere-only simulation. The coupled system is stable for the complete 20th century and provides a better representation of extreme temperatures compared to the atmosphere-only model. The produced long-term dataset will help us to better understand the processes leading to meteorological and climate extremes.
This study presents a method for adjusting long-term climate data records (CDRs) for the integrated use with near-real-time data using the example of surface incoming solar irradiance (SIS). Recently, a 23-year long (1983–2005) continuous SIS CDR has been generated based on the visible channel (0.45–1 μm) of the MVIRI radiometers onboard the geostationary Meteosat First Generation Platform. The CDR is available from the EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF). Here, it is assessed whether a homogeneous extension of the SIS CDR to the present is possible with operationally generated surface radiation data provided by CM SAF using the SEVIRI and GERB instruments onboard the Meteosat Second Generation satellites. Three extended CM SAF SIS CDR versions consisting of MVIRI-derived SIS (1983–2005) and three different SIS products derived from the SEVIRI and GERB instruments onboard the MSG satellites (2006 onwards) were tested. A procedure to detect shift inhomogeneities in the extended data record (1983–present) was applied that combines the Standard Normal Homogeneity Test (SNHT) and a penalized maximal T-test with visual inspection. Shift detection was done by comparing the SIS time series with the mean of the ground stations, taking statistical significance into account. Several stations of the Baseline Surface Radiation Network (BSRN) and about 50 stations of the Global Energy Balance Archive (GEBA) over Europe were used as the ground-based reference. The analysis indicates several breaks in the data record between 1987 and 1994, probably due to artefacts in the raw data and instrument failures. After 2005 the MVIRI radiometer was replaced by the narrow-band SEVIRI and the broadband GERB radiometers and a new retrieval algorithm was applied. This induces significant challenges for the homogenisation across the satellite generations.
Homogenisation is performed by applying a mean-shift correction depending on the shift size of any segment between two break points to the last segment (2006–present). Corrections are applied to the most significant breaks that can be related to satellite changes. This study focuses on the European region, but the methods can be generalized to other regions. To account for seasonal dependence of the mean-shifts the correction was performed independently for each calendar month. In comparison to the ground-based reference the homogenised data record shows an improvement over the original data record in terms of anomaly correlation and bias. In general the method can also be applied for the adjustment of satellite datasets addressing other variables to bridge the gap between CDRs and near-real-time data.
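The monthly mean-shift adjustment described above can be sketched under simplifying assumptions (break indices already detected, the last segment taken as the fixed reference; the function and variable names are hypothetical, and the real CM SAF processing differs in detail):

```python
import numpy as np

def homogenise(values, months, breaks):
    """Shift every segment between break points so that its calendar-month
    means match those of the last (most recent) segment.
    values: 1-D data record; months: month (1-12) of each sample;
    breaks: sorted indices at which a new segment starts."""
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    edges = [0] + list(breaks) + [len(values)]
    segments = [slice(a, b) for a, b in zip(edges[:-1], edges[1:])]
    ref = segments[-1]                  # reference segment, kept unchanged
    out = values.copy()
    for seg in segments[:-1]:
        for m in range(1, 13):          # correct each calendar month separately
            sel = months[seg] == m
            refsel = months[ref] == m
            if not sel.any() or not refsel.any():
                continue
            shift = values[ref][refsel].mean() - values[seg][sel].mean()
            out[seg][sel] += shift      # in-place update on the slice view
    return out
```

Per-month correction captures the seasonal dependence of the shifts mentioned in the abstract; applying a single annual shift instead would leave a seasonally varying residual bias.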
We present a compact and versatile cryofocusing–thermodesorption unit, which we developed for quantitative analysis of halogenated trace gases in ambient air. Possible applications include aircraft-based in situ measurements, in situ monitoring and laboratory operation for the analysis of flask samples. Analytes are trapped on adsorptive material cooled by a Stirling cooler to low temperatures (e.g. −80 °C) and subsequently desorbed by rapid heating of the adsorptive material (e.g. +200 °C). The set-up involves neither the exchange of adsorption tubes nor any further condensation or refocusing steps. No moving parts are used that would require vacuum insulation. This allows for a simple and robust design. Reliable operation is ensured by the Stirling cooler, which neither contains a liquid refrigerant nor requires refilling a cryogen. At the same time, it allows for significantly lower adsorption temperatures compared to commonly used Peltier elements. We use gas chromatography – mass spectrometry (GC–MS) for separation and detection of the preconcentrated analytes after splitless injection. A substance boiling point range of approximately −80 to +150 °C and a substance mixing ratio range of less than 1 ppt (pmol mol−1) to more than 500 ppt in preconcentrated sample volumes of 0.1 to 10 L of ambient air is covered, depending on the application and its analytical demands. We present the instrumental design of the preconcentration unit and demonstrate capabilities and performance through the examination of analyte breakthrough during adsorption, repeatability of desorption and analyte residues in blank tests. Examples of application are taken from the analysis of flask samples collected at Mace Head Atmospheric Research Station in Ireland using our laboratory GC–MS instruments and by data obtained during a research flight with our in situ aircraft instrument GhOST-MS (Gas chromatograph for the Observation of Tracers – coupled with a Mass Spectrometer).
We present a compact and versatile cryofocusing–thermodesorption unit, which we developed for quantitative analysis of halogenated trace gases in ambient air. Possible applications include aircraft-based in situ measurements, in situ monitoring and laboratory operation for the analysis of flask samples. Analytes are trapped on adsorptive material cooled by a Stirling cooler to low temperatures (e.g. −80 °C) and subsequently desorbed by rapid heating of the adsorptive material (e.g. +200 °C). The set-up involves neither the exchange of adsorption tubes nor any further condensation or refocusing steps. No moving parts are used that would require vacuum insulation. This allows for a simple and robust design. Reliable operation is ensured by the Stirling cooler, which neither contains a liquid refrigerant nor requires refilling a cryogen. At the same time, it allows for significantly lower adsorption temperatures compared to commonly used Peltier elements. We use gas chromatography – mass spectrometry (GC–MS) for separation and detection of the preconcentrated analytes after splitless injection. A substance boiling point range of approximately −80 to +150 °C and a substance mixing ratio range of less than 1 ppt (pmol mol−1) to more than 500 ppt in preconcentrated sample volumes of 0.1 to 10 L of ambient air is covered, depending on the application and its analytical demands. We present the instrumental design of the preconcentration unit and demonstrate capabilities and performance through the examination of analyte breakthrough during adsorption, repeatability of desorption and analyte residues in blank tests. Examples of application are taken from the analysis of flask samples collected at Mace Head Atmospheric Research Station in Ireland using our laboratory GC–MS instruments and by data obtained during a research flight with our in situ aircraft instrument GhOST-MS (Gas chromatograph for the Observation of Tracers – coupled with a Mass Spectrometer).
Abiotic formation of n-alkane hydrocarbons has been postulated to occur within Earth's crust. Apparent evidence was primarily based on uncommon carbon and hydrogen isotope distribution patterns that set methane and its higher chain homologues apart from biotic isotopic compositions associated with microbial production and closed system thermal degradation of organic matter. Here, we present the first global investigation of the carbon and hydrogen isotopic compositions of n-alkanes in volcanic-hydrothermal fluids hosted by basaltic, andesitic, trachytic and rhyolitic rocks. We show that the bulk isotopic compositions of these gases follow trends that are characteristic of high temperature, open system degradation of organic matter. In sediment-free systems, organic matter is supplied by surface waters (seawater, meteoric water) circulating through the reservoir rocks. Our data set strongly implies that thermal degradation of organic matter is able to satisfy isotopic criteria previously classified as being indicative of abiogenesis. Further considering the ubiquitous presence of surface waters in Earth’s crust, abiotic hydrocarbon occurrences might have been significantly overestimated.
The 300 km wide Eucla Shelf of southern Australia is one of the world's largest modern non-tropical depositional systems. During the Pleistocene, a roughly 500 m thick sediment stack composed of prograding clinoforms was deposited here. Ocean Drilling Program Sites 1127, 1129 and 1131 form a proximal-distal transect along the Eucla Shelf continental slope. The Pleistocene periplatform deposits recovered there consist predominantly of bioclast-rich, fine- to coarse-grained, unlithified to partly lithified packstones, wackestones and grainstones. A pronounced sedimentary cyclicity of the analysed deposits is expressed in fluctuations of grain size and mineralogical composition, natural radioactivity and stable isotopes, as well as in changes of facies. To investigate the sedimentary cyclicity of these non-tropical sediments, six sediment intervals of Early to Middle Pleistocene age were selected from Sites 1127, 1129 and 1131. The Early to Middle Pleistocene periplatform succession of the Eucla Shelf is built by the stacking of genetic sequences. These form in response to high-frequency sea-level fluctuations, which directly control the degree of shelf flooding and thus the sediment export from the Eucla Shelf into the adjacent basin. A genetic sequence is about 25 m thick immediately basinward of the shelf edge. The maximum thickness of about 30 m is reached farther basinward, before the genetic sequence thins again and reaches thicknesses of 10-15 m in the most distal depositional areas studied here. The boundaries of the genetic sequences are defined by abrupt changes in grain size or by turning points in grain-size trends. 
Within a genetic sequence, highstand deposits are characterised by coarse-grained, bioclast-rich packstones to grainstones, which in turn contain large amounts of tunicate spicules, brown high-Mg bioclasts and bryozoan detritus. Lowstand deposits, on the other hand, are characterised by fine-grained packstones with elevated contents of sponge spicules and micrite. The metastable carbonate phases aragonite and high-Mg calcite can each make up to 34% of the bulk sample and are enriched in deposits of sea-level rise and highstand. Tunicate spicules are the main aragonite producers. Dolomite is restricted to deposits of the incipient sea-level rise. The primary distribution of the metastable carbonate phases within the genetic sequences may thus lead to differential diagenesis during later burial stages. The sedimentary cyclicity of the late Middle Pleistocene deposits differs from that of the Early to Middle Pleistocene in an increased abundance of allochthonous shelf components, such as red-algal detritus and brown high-Mg calcite bioclasts, together with a decrease in the abundance of autochthonous sponge spicules. These variations during the Early and Middle Pleistocene are interpreted as a consequence of the progradation of the shelf edge, the resulting change in relative position to the shelf edge, and changing nutrient input. Site 1127 furthermore shows a doubling of cycle thicknesses in the Middle Pleistocene deposits, most likely attributable to changes in the Earth's orbital parameters (Milankovitch cyclicity). In the last part of the thesis, the sedimentary cyclicities of these non-tropical periplatform carbonates are compared with Pleistocene tropical deposits of the western flank of the Great Bahama Bank (ODP Site 1009). 
A subdivision into coarsening-upward cycles is an essential feature of both the non-tropical and the tropical periplatform carbonates. In contrast to the non-tropical carbonates studied here, however, tropical deposits of sea-level rise and highstand are characterised by fine-grained, micrite-rich material, maxima in aragonite content and minima in high-Mg calcite content. Moreover, owing to slightly lower sedimentation rates, the thickness of individual cycles of about 10 m is less than in the non-tropical carbonates studied, in which the minimum cycle thicknesses are 10-15 m.
Abrupt climate changes of the last deglaciation detected in a Western Mediterranean forest record
(2010)
Abrupt changes in Western Mediterranean climate during the last deglaciation (20 to 6 cal ka BP) are detected in marine core MD95-2043 (Alboran Sea) through the investigation of high-resolution pollen data and pollen-based climate reconstructions by the modern analogue technique (MAT) for annual precipitation (Pann) and mean temperatures of the coldest and warmest months (MTCO and MTWA). Changes in temperate Mediterranean forest development and composition and MAT reconstructions indicate major climatic shifts with parallel temperature and precipitation changes at the onsets of Heinrich stadial 1 (equivalent to the Oldest Dryas), the Bölling-Allerød (BA), and the Younger Dryas (YD). Multi-centennial-scale oscillations in forest development occurred throughout the BA, YD, and early Holocene. Shifts in vegetation composition and Pann reconstructions indicate that forest declines occurred during dry, and generally cool, episodes centred at 14.0, 13.3, 12.9, 11.8, 10.7, 10.1, 9.2, 8.3 and 7.4 cal ka BP. The forest record also suggests multiple, low-amplitude Preboreal (PB) climate oscillations, and a marked increase in moisture availability for forest development at the end of the PB at 10.6 cal ka BP. Dry atmospheric conditions in the Western Mediterranean occurred in phase with Lateglacial events of high-latitude cooling including GI-1d (Older Dryas), GI-1b (Intra-Allerød Cold Period) and GS-1 (YD), and during Holocene events associated with high-latitude cooling, meltwater pulses and N. Atlantic ice-rafting. A possible climatic mechanism for the recurrence of dry intervals and an opposed regional precipitation pattern with respect to Western-central Europe relates to the dynamics of the westerlies and the prevalence of atmospheric blocking highs. 
Comparison of radiocarbon and ice-core ages for well-defined climatic transitions in the forest record suggests possible enhancement of marine reservoir ages in the Alboran Sea by 200 years (surface water age 600 years) during the Lateglacial.
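The modern analogue technique used above can be illustrated with a minimal sketch: find the modern pollen spectra closest to a fossil spectrum and average their observed climates. The squared-chord distance and inverse-distance weighting shown here are one common choice; the function name and details are assumptions, not the study's implementation.

```python
import numpy as np

def mat_reconstruct(fossil, modern_taxa, modern_climate, k=5):
    """Modern analogue technique (MAT) in its simplest form: select the
    k modern pollen samples closest to the fossil sample and return the
    inverse-distance-weighted mean of their climate values.

    fossil         : taxon proportions of one fossil sample, shape (n_taxa,)
    modern_taxa    : taxon proportions of modern samples, shape (n_sites, n_taxa)
    modern_climate : observed climate value at each modern site, shape (n_sites,)
    """
    # squared-chord distance between the fossil spectrum and each modern spectrum
    d = np.sum((np.sqrt(fossil) - np.sqrt(modern_taxa)) ** 2, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-12)  # guard against division by zero for exact analogues
    return np.sum(w * modern_climate[nearest]) / np.sum(w)
```

Applied per fossil sample and per variable, this yields the reconstructed time series of Pann, MTCO and MTWA discussed in the abstract.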
In the present study, an attempt was made to subdivide the Riss loess between the first and second fossil Parabraunerde (luvisol) on the basis of weaker soil formations and intercalated erosion phases. The younger Riss was dominated by strong loess sedimentation, during which weak periglacial hydromorphic soils (Nassböden) formed in at least six cold-humid intervals. This series of Nassböden was designated the Bruchköbeler Böden (B). In the youngest Riss loess, a few decimetres below the Eemian soil, the Krifteler Tuff (cf. SEMMEL 1968) is intercalated as a tephrochronological marker horizon. The middle part of the Riss loess profile is characterised by more humid climatic intervals with strong phases of slope wash, which have produced considerable unconformities in most profiles. At the base of the few complete Riss loess profiles, mainly in Hesse, at most two chernozems occur above the usually truncated second fossil Parabraunerde; SEMMEL (1968) termed these the Weilbacher Humuszonen. Immediately above these chernozems follows the Ostheimer Zone, a solifluction deposit of reworked soil material from the underlying soils. Overall, the climatic succession reconstructed from the Riss soils shows, apart from minor deviations, surprising parallels to the palaeopedological-climatic subdivision of the Würm cold stage.
We report the first measurements of 1,1,1,2,3,3,3-heptafluoropropane (HFC-227ea), a substitute for ozone depleting compounds, in air samples originating from remote regions of the atmosphere and present evidence for its accelerating growth. Observed mixing ratios ranged from below 0.01 ppt in deep firn air to 0.59 ppt in the current northern mid-latitudinal upper troposphere. Firn air samples collected in Greenland were used to reconstruct a history of atmospheric abundance. Year-on-year increases were deduced, with acceleration in the growth rate from 0.029 ppt per year in 2000 to 0.056 ppt per year in 2007. Upper tropospheric air samples provide evidence for a continuing growth until late 2009. Furthermore we calculated a stratospheric lifetime of 370 years from measurements of air samples collected on board high altitude aircraft and balloons. Emission estimates were determined from the reconstructed atmospheric trend and suggest that current "bottom-up" estimates of global emissions for 2005 are too high by a factor of three.
We evaluate the near-surface representation of thermally driven winds in the Swiss Alps in a numerical weather prediction model at km-scale resolution. In addition, the influence of grid resolution (2.2 km and 1.1 km), topography filtering, and land surface datasets on the accuracy of the simulated valley winds is investigated. The simulations are evaluated against a comprehensive set of surface observations for an 18-day fair-weather summer period in July 2006. The episode is characterized by strong diurnal wind systems and the formation of shallow convection over the mountains, which transitions to precipitating convection in some areas. The near-surface winds (10 m above ground level) follow a typical diurnal pattern with strong daytime up-valley flow and weaker nighttime down-valley flow. At a 2.2 km resolution, the valley winds are poorly simulated for most stations, while at a 1.1 km resolution the diurnal cycle of the valley winds is well represented in most large (e.g., Rhein valley at Chur and Rhone valley at Visp) and medium-sized valleys (e.g., Linth valley at Glarus). In the smaller valleys (e.g., Maggia valley at Cevio), the amplitude of the valley wind is still significantly underestimated, even at a 1.1 km resolution. Detailed sensitivity experiments show that the use of high-resolution land surface datasets, for both the soil characteristics and the land cover, and reduced filtering of the topography are essential to achieve good performance at a 1.1 km resolution.
Rationale: Potassium (K) is a major component of several silicate minerals and seawater, and, therefore, constraining past changes in the potassium cycle is a promising way of tracing large-scale geological processes on Earth. However, [K] measurement using inductively coupled plasma mass spectrometry (ICP-MS) is challenging due to an ArH+ interference, which may be of a similar magnitude to the K+ ion beam in samples with <0.1% m/m [K].
Methods: In this work, we investigated the effect of the ArH+ interference on K/Ca data quality by comparing results from laser-ablation (LA)-ICP-MS measured in medium and high mass resolution modes and validating our LA results via solution ICP-optical emission spectroscopy (OES) and solution ICP-MS measurements. To do so, we used a wide range of geological reference materials, with a particular focus on marine carbonates, which are potential archives of past changes in the K cycle but are typically characterised by [K] < 200 μg/g. In addition, we examine the degree to which trace-element data quality is driven by downhole fractionation during LA-ICP-MS measurements.
Results: Our results show that medium mass resolution (MR) mode is sufficiently capable of minimising the effect of the ArH+ interference on K+. However, the rate of downhole fractionation for Na and K varies between different samples as a result of their differing bulk composition, resulting in matrix-specific inaccuracy. We show how this can be accounted for via downhole fractionation corrections, resulting in an accuracy of better than 1% and a long-term reproducibility (intermediate precision) of <6% (relative standard deviation) in JCp-1NP using LA-ICP-MS in MR mode.
Conclusion: Our [K] measurement protocol is demonstrably precise and accurate and applicable to a wide range of materials. The measurement of K/Ca in relatively low-[K] marine carbonates is presented here as a key example of a new application opened up by these advances.
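One simple form of the downhole fractionation correction discussed above is to fit a straight line to the signal ratio over ablation time and extrapolate back to the start of ablation. The sketch below shows that idea only; it is a simplified stand-in, and the actual protocol of the study (matrix-specific corrections per reference material) is more involved. The function name and argument layout are assumptions.

```python
import numpy as np

def downhole_corrected_ratio(time_s, k_cps, ca_cps):
    """Correct a time-resolved LA-ICP-MS K/Ca signal ratio for downhole
    fractionation by fitting a straight line to the ratio over ablation
    time and extrapolating to the start of ablation (t = 0), where the
    laser pit is shallowest and fractionation is minimal.

    time_s : ablation time in seconds
    k_cps  : background-corrected K signal (counts per second)
    ca_cps : background-corrected Ca signal (counts per second)
    """
    ratio = k_cps / ca_cps
    slope, intercept = np.polyfit(time_s, ratio, 1)  # linear downhole trend
    return intercept  # ratio at the start of ablation
```

Because the downhole slope differs between matrices, such a correction would in practice be calibrated against a matrix-matched reference material rather than applied blindly.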
A twentieth century-long coupled atmosphere-ocean regional climate simulation with COSMO-CLM (Consortium for Small-Scale Modeling, Climate Limited-area Model) and NEMO (Nucleus for European Modelling of the Ocean) is studied here to evaluate the added value of coupled marginal seas over continental regions. The interactive coupling of the marginal seas, namely the Mediterranean, the North and the Baltic Seas, to the atmosphere in the European region gives a comprehensive modelling system. It is expected to be able to describe the climatological features of this geographically complex area even more precisely than an atmosphere-only climate model. The investigated variables are precipitation and 2 m temperature. Sensitivity studies are used to assess the impact of SST (sea surface temperature) changes over land areas. The different SST values affect the continental precipitation more than the 2 m temperature. The simulated variables are compared to the CRU (Climatic Research Unit) observational data, and also to the HOAPS/GPCC (Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite Data, Global Precipitation Climatology Centre) data. In the coupled simulation, added skill is found primarily during winter over the eastern part of Europe. Our analysis shows that, over this region, the coupled system is drier than the uncoupled system, both in terms of precipitation and soil moisture, which means a decrease in the bias of the system. Thus, the coupling improves the simulation of precipitation over the eastern part of Europe, due to cooler SST values and, in consequence, drier soil.
Large-scale hydrological modelling has become increasingly widespread during the last decade. An annual workshop series on large-scale hydrological modelling has provided, since 1997, a forum to the German-speaking community for discussing recent developments and achievements in this research area. In this paper we present the findings from the 2007 workshop, which focused on advances and visions in large-scale hydrological modelling. We identify the state of the art, difficulties and research perspectives with respect to the themes "sensitivity of model results", "integrated modelling" and "coupling of processes in hydrosphere, atmosphere and biosphere". Some achievements in large-scale hydrological modelling during the last ten years are presented together with a selection of remaining challenges for the future.
In 1998 the German Universities of Kassel and Giessen organised a workshop on water and solute transport in large drainage basins. The workshop focused on analysing and summarising the state of research, existing problems and perspectives in this research area. It was the second of a series of annual workshops since 1997 that became an important discussion forum for the German-speaking research community in the field of hydrological modelling. Now the 11th Workshop on Large-scale Hydrological Modelling referred to the same questions as posed in 1998 in order to evaluate the developments and advances of the last ten years. Based on keynote presentations, the workshop focused on discussion in working groups where posters were also presented. This volume of "Advances in Geosciences" comprises seven papers referring to the poster contributions. At the end of the volume, an overview paper summarises the outcome of the workshop presentations and discussions (Döll et al.). ...
Processes occurring in the tropical upper troposphere (UT), the Tropical Transition Layer (TTL), and the lower stratosphere (LS) are of importance for the global climate, for stratospheric dynamics and air chemistry, and for their influence on the global distribution of water vapour, trace gases and aerosols. In this contribution we present aerosol and trace gas (in-situ) measurements from the tropical UT/LS over Southern Brazil, Northern Australia, and West Africa. The instruments were operated on board the Russian high-altitude research aircraft M-55 "Geophysica" and the DLR Falcon-20 during the campaigns TROCCINOX (Araçatuba, Brazil, February 2005), SCOUT-O3 (Darwin, Australia, December 2005), and SCOUT-AMMA (Ouagadougou, Burkina Faso, August 2006). The data cover submicron particle number densities and volatility from the COndensation PArticle counting System (COPAS), as well as relevant trace gases like N2O, ozone, and CO. We use these trace gas measurements to place the aerosol data into a broader atmospheric context. A juxtaposition of the submicron particle data with previous measurements over Costa Rica and other tropical locations between 1999 and 2007 (NASA DC-8 and NASA WB-57F) is also provided. The submicron particle number densities, as a function of altitude, were found to be remarkably constant in the tropical UT/LS altitude band for the two decades after 1987. Thus, a parameterisation suitable for models can be extracted from these measurements. Compared to the average levels in the period between 1987 and 2007, a slight increase of particle abundances was found for 2005/2006 at altitudes with potential temperatures, theta, above 430 K. The origins of this increase are unknown except for increases measured during SCOUT-AMMA. Here, the eruption of the Soufrière Hills volcano in the Caribbean caused elevated particle mixing ratios. 
The vertical profiles from Northern hemispheric mid-latitudes between 1999 and 2006 are also compact enough to derive a parameterisation. The tropical profiles all show a broad maximum of particle mixing ratios (between theta ~ 340 K and 390 K) which extends from below the TTL to above the thermal tropopause. These particles are thus a "reservoir" for vertical transport into the stratosphere. The ratio of non-volatile particle number density to total particle number density was also measured by COPAS. The vertical profiles of this ratio have a maximum of 50% above 370 K over Australia and West Africa and a pronounced minimum directly below. Without detailed chemical composition measurements, a reason for the increase of non-volatile particle fractions cannot yet be given. However, half of the particles from the tropical "reservoir" contain compounds other than sulphuric acid and water. Correlations of the measured aerosol mixing ratios with N2O and ozone exhibit compact relationships for the tropical data from SCOUT-AMMA, TROCCINOX, and SCOUT-O3. Correlations with CO are more scattered, probably because of the connection to different pollution source regions. We provide additional data from the long-distance transfer flights to the campaign sites in Brazil, Australia, and West Africa. These were executed during a time window of 17 months within a period of relative volcanic quiescence. The data thus represent a "snapshot picture" documenting the status of a significant part of the global UT/LS fine aerosol at low concentration levels 15 years after the last major (i.e., the 1991 Mount Pinatubo) eruption. The corresponding latitudinal distributions of the measured particle number densities are presented in this paper to provide data of the UT/LS background aerosol for modelling purposes.
Processes occurring in the tropical upper troposphere and lower stratosphere (UT/LS) are of importance for the global climate, for the stratospheric dynamics and air chemistry, and they influence the global distribution of water vapour, trace gases and aerosols. The mechanisms underlying cloud formation and variability in the UT/LS are of scientific concern as these still are not adequately described and quantified by numerical models. Part of the reason for this is the scarcity of detailed in-situ measurements, in particular from the Tropical Transition Layer (TTL) within the UT/LS. In this contribution we provide measurements of particle number densities and the amounts of non-volatile particles in the submicron size range present in the UT/LS over Southern Brazil, West Africa, and Northern Australia. The data were collected in-situ on board the Russian high-altitude research aircraft M-55 "Geophysica" using the specialised COPAS (COndensation PArticle counting System) instrument during the TROCCINOX (Araçatuba, Brazil, February 2005), the SCOUT-O3 (Darwin, Australia, December 2005), and SCOUT-AMMA (Ouagadougou, Burkina Faso, August 2006) campaigns. The vertical profiles obtained are compared to those from previous measurements from the NASA DC-8 and NASA WB-57F over Costa Rica and other tropical locations between 1999 and 2007. The number density of the submicron particles as a function of altitude was found to be remarkably constant (even back to 1987) over the tropical UT/LS altitude band, such that a parameterisation suitable for models can be extracted from the measurements. At altitudes corresponding to potential temperatures above 430 K, the data indicate a slight increase of the number densities in 2005/2006 in comparison to the 1987 to 2007 measurements. The origins of this increase are unknown. By contrast, the data from Northern hemispheric mid-latitudes do not exhibit such an increase between 1999 and 2006. 
Vertical profiles of the non-volatile fraction of the submicron particles were also measured by a COPAS channel and are presented here. The resulting profiles of the non-volatile number density fraction show a pronounced maximum of 50% in the tropical TTL over Australia and West Africa. Below and above, this fraction is much lower, attaining values of 10% and smaller. In the lower stratosphere the fine particles mostly consist of sulphuric acid, which is reflected in the low numbers of non-volatile residues measured by COPAS. Without detailed chemical composition measurements, the reason for the increase of non-volatile particle fractions cannot yet be given. The long-distance transfer flights to Brazil, Australia and West Africa were executed during a time window of 17 months within a period of relative volcanic quiescence. The data measured during these transfers thus represent a "snapshot picture" documenting the status of a significant part of the global UT/LS aerosol (with sizes below 1 μm) at low concentration levels 15 years after the last major (i.e., the 1991 Mount Pinatubo) eruption. The corresponding latitudinal distributions of the measured particle number densities are also presented in this paper in order to provide input on the UT/LS background aerosol for modelling purposes.
The African continent is regularly portrayed as an indolent space with a well-known reputation as a chaotic continent. Viewed as lacking vision, means and capacities, Africa is perceived at best as a place marked by a permanent status quo and stagnation or, in worst-case scenarios, as a declining continent. Various references to the continent are synonymous with famine, poverty, war, etc. Such portrayals are all the more intriguing given that the continent is known for its abundant natural resources, such as timber, oil, natural gas, minerals, etc., whose reserves are, moreover, not well known either to the African people or to their leaders. As a result, there is still much progress to be made in tapping into these resources in order to improve the daily lives of African citizens.
In such a context dominated by infantile carelessness throughout the continent, the interventions of actors from outside the continent are the only hopes of bringing some vitality to this continent, which is cloaked in "la grande nuit – the great darkness" (Mbembé 2013). Thus, during the main sequences of recent history, representing different forms of Western penetration and activity on the African continent (slavery, imperialism, colonization), all the Western world's contributions have obviously not sufficed to boost Africa and take it out of its never-ending childhood. It has remained just as passive and apathetic today as it was yesterday.
The attraction of Asian actors to the continent is even more recent. And consistent with its abovementioned indolence, Africa is seen as an easy and defenceless prey for the Korean, Japanese, Indian, Malaysian, or Chinese conquerors. In the latter case, the insatiable appetite for natural resources whose reserves are being rapidly depleted is the cornerstone of their foreign aid policy. This led China to colonize the continent, showing a preference for Pariah Regimes which held no appeal for the West, by sending an army of workers to extract those resources (Lum et al. 2009), in defiance of all national and international regulations and based on completely opaque contracts.
Although the concept of African Agency was rapidly developed in several African countries, the aim of this study is more specific to Cameroon's mining sector, in which different entrepreneurs from abroad have become involved over time. The thesis investigates whether indigenous citizens took part in any way in the development of mining projects in the country. The work thus assesses and analyses actions and reactions initiated and undertaken by local people, in the context of China's presence within Cameroon's mining sector, to promote and advance their interests over those of foreign investors. In addition, the author has no knowledge of any other study investigating African Agency in the mining sector as a whole in Cameroon.
In conducting this study, a multi-method research framework was developed, comprising a series of methods used to collect data and analyse concepts of African Agency and the associated Political Ecology as they developed within Cameroon's mining sector. Specifically, the quantitative strand followed a positivist, empirical approach, deducing evidence from statistical data collected through 167 questionnaire surveys administered to local inhabitants and workers randomly selected at mining sites and in riparian communities. The questionnaires helped to capture Cameroonians' perceptions of the recent, gradual but significant influx of international actors, and of Chinese players in particular, into the mining sector; in addition, observational data were collected across the GVC as it developed in the Betare-Oya region. Complementing this, qualitative methods helped to study and deepen the understanding of human behaviour and the social world from a holistic perspective, through individual interviews, focus groups, and direct observations on the ground. A spatial analysis based on the land-use classification technique served to detect changes in land use/land cover brought about by mechanised mining activities undertaken in this region. Sequencing the collected data and processing them from a grounded theory perspective led to the formulation and specification of Cameroon's Ecological Agency theory.
One of the earliest steps of this work consisted in a literature review and in placing the African Agency concept in a broader context. It then led to the state of the art, specifications about research content of the work and the main theories undergirding this thesis. Before examining developments that emerged during the last decade, a historical perspective was provided to the topic in order to show how African societies started mining operations and how they dealt with foreign partners interested in their mining resources. The aim was to show that while Western imperialism presented a challenge for the sector, it did not erase local participation, even despite the constraints associated with such involvement.
...
AirCore-HR: a high-resolution column sampling to enhance the vertical description of CH₄ and CO₂
(2017)
An original and innovative sampling system called AirCore was presented by NOAA in 2010 (Karion et al., 2010). It consists of a long (> 100 m) and narrow (< 1 cm) stainless steel tube that can retain a profile of atmospheric air. The captured air sample then has to be analyzed with a gas analyzer for trace gas mole fractions. In this study, we introduce a new AirCore aiming to improve vertical resolution, with the objectives to (i) better capture the vertical distribution of CO2 and CH4 and (ii) provide a tool to compare AirCores and validate their estimated vertical resolution. This (high-resolution) AirCore-HR consists of a 300 m tube, combining 200 m of 0.125 in. (3.175 mm) tube and 100 m of 0.25 in. (6.35 mm) tube. This new configuration allows us to achieve a vertical resolution of 300 m up to 15 km and better than 500 m up to 22 km (if analysis of the retained sample is performed within 3 h). The AirCore-HR was flown for the first time during the annual StratoScience campaign from CNES in August 2014 from Timmins (Ontario, Canada). High-resolution vertical profiles of CO2 and CH4 up to 25 km were successfully retrieved. These profiles revealed well-defined transport structures in the troposphere (also seen in CAMS-ECMWF high-resolution forecasts of CO2 and CH4 profiles) and captured the decrease of CO2 and CH4 in the stratosphere. The multi-instrument gondola also carried two other low-resolution AirCore-GUF units that allowed us to perform direct comparisons and to study the underlying processing method used to convert the air sample into vertical greenhouse gas profiles. In particular, degrading the AirCore-HR derived profiles to the low resolution of AirCore-GUF yields an excellent match between both sets of CH4 profiles and shows a good consistency in terms of vertical structures. This fully validates the theoretical vertical resolution achievable by AirCores.
Concerning CO2, although good agreement is found in terms of vertical structure, the comparison between the various AirCores yields a large and variable bias (up to almost 3 ppm in some parts of the profiles). The reasons for this bias, possibly related to the drying agent used to dry the air, are still being investigated. Finally, the uncertainties associated with the measurements are assessed, yielding an average uncertainty below 3 ppb for CH4 and 0.25 ppm for CO2, with the major source of uncertainty coming from the potential loss of air sample on the ground and the choice of the starting and ending points of the collected air sample inside the tube. In an ideal case where the sample were fully retained, it would be possible to know precisely the pressure at which air was sampled last and thus to improve the overall uncertainty to about 0.1 ppm for CO2 and 2 ppb for CH4.
AirCore-HR: a high resolution column sampling to enhance the vertical description of CH₄ and CO₂
(2016)
An original and innovative sampling system called AirCore was presented by NOAA in 2010 (Karion et al., 2010). It consists of a long (> 100 m) and narrow (< 1 cm) stainless steel tube that can retain a profile of atmospheric air. The captured air sample then has to be analyzed with a gas analyzer for trace gas mole fractions. In this study, we introduce a new AirCore aiming at improved vertical resolution, with the objectives to (i) better capture the vertical distribution of CO2 and CH4 and (ii) provide a tool to compare AirCores and validate their estimated vertical resolution. This AirCore-HR (high resolution) consists of a 300 m tube, combining 200 m of 1/8 in. (3.175 mm) tube and 100 m of 1/4 in. (6.35 mm) tube. This new configuration allows us to achieve a vertical resolution of 300 m up to 15 km and better than 500 m up to 22 km (if analysis of the retained sample is performed within 3 hours). The AirCore-HR was flown for the first time during the annual StratoScience campaign from CNES in August 2014 from Timmins (Ontario, Canada). High-resolution vertical profiles of CO2 and CH4 up to 25 km were successfully retrieved. These profiles revealed well-defined transport structures in the troposphere (also seen in CAMS-ECMWF high-resolution forecasts of CO2 and CH4 profiles) and captured the decrease of CO2 and CH4 in the stratosphere. The multi-instrument gondola of the flight carried two other low-resolution AirCore-GUF units that allowed us to perform direct comparisons and to study the underlying processing method used to convert the air sample into vertical greenhouse gas profiles. In particular, degrading the AirCore-HR derived profiles to the low resolution of AirCore-GUF yields an excellent match between both sets of CH4 profiles and shows a good consistency between the vertical structures of CO2 and CH4. These results fully validate the theoretical vertical resolution achievable by AirCores.
Finally, the uncertainties associated with the measurements are assessed, yielding an average uncertainty below 3 ppb for CH4 and 0.25 ppm for CO2 with the major source of uncertainty coming from the potential loss of air sample on the ground and the choice of the starting and ending point of the collected air sample inside the tube. In an ideal case where the sample would be fully retained, it would be possible to know precisely the pressure at which air was sampled last and thus to improve the overall uncertainty to about 0.1 ppm for CO2 and 2 ppb for CH4.
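The comparison step described in the abstract — degrading a high-resolution AirCore profile to the coarser resolution of another AirCore before comparing — amounts to smoothing the fine profile with a kernel of the target width. The sketch below uses a boxcar (moving-average) kernel and synthetic data as illustrative assumptions; the real AirCore smoothing kernel is set by molecular diffusion and flow in the tube.

```python
import numpy as np

def degrade_profile(altitude_m, mole_fraction, target_res_m):
    """Smooth a high-resolution vertical profile to a coarser effective
    resolution with a boxcar (moving-average) kernel.

    Illustrative sketch: the true AirCore smoothing kernel is governed by
    diffusion and flow in the tube, not a boxcar."""
    dz = altitude_m[1] - altitude_m[0]              # grid spacing, assumed uniform
    width = max(1, int(round(target_res_m / dz)))   # kernel width in grid points
    kernel = np.ones(width) / width
    padded = np.pad(mole_fraction, width, mode="edge")  # avoid zero-padded edges
    return np.convolve(padded, kernel, mode="same")[width:-width]

# Synthetic CH4 profile (ppb): linear decrease plus a sharp layer near 10 km
z = np.arange(0.0, 25000.0, 100.0)                  # 100 m grid up to 25 km
ch4 = 1900.0 - 0.01 * z + 40.0 * np.exp(-((z - 10000.0) / 200.0) ** 2)

# Degrade to a 500 m effective resolution for comparison with a coarser AirCore
ch4_lowres = degrade_profile(z, ch4, target_res_m=500.0)
```

Smoothing leaves the background gradient intact but damps the sharp layer, which is why the degraded high-resolution profile can be compared point-by-point with a genuinely low-resolution one.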
Atmospheric new particle formation is a general phenomenon observed over coniferous forests. So far, nucleation has been described as a function of gaseous sulfuric acid concentration only, which cannot explain the observed seasonality of nucleation events at different measurement sites. Here we introduce a new nucleation parameter including ozone and water vapor concentrations as well as UV-B radiation as a proxy for OH radical formation. Applied to field studies conducted at Finnish and German measurement sites, the new parameter proves capable of predicting the occurrence of nucleation events and their seasonal and annual variation, indicating a significant role of organics. Extrapolation to possible future conditions of ozone, water vapor, and organic concentrations points to a significant potential increase in the number of nucleation events.
This work describes the development and characterization of two instruments and the evaluation of their data, contributing to a better understanding of new particle formation and growth as well as their interactions with clouds. Both instruments were characterized at the Cosmics Leaving OUtdoor Droplets (CLOUD) experiment at the European Organization for Nuclear Research (CERN).
We present the characterization and application of a new gas chromatography time-of-flight mass spectrometry instrument (GC-TOFMS) for the quantitative analysis of halocarbons in air samples. The setup comprises three fundamental enhancements compared to our earlier work (Hoker et al., 2015): (1) full automation, (2) a mass resolving power R = m/Δm of the TOFMS (Tofwerk AG, Switzerland) increased up to 4000, and (3) a fully accessible data format for the mass spectrometric data. Automation in combination with the accessible data allowed an in-depth characterization of the instrument. Mass accuracy was found to be approximately 5 ppm on average after automatic recalibration of the mass axis in each measurement. A TOFMS configuration giving R = 3500 was chosen to provide an R-to-sensitivity ratio suitable for our purpose. Calculated detection limits are as low as a few femtograms thanks to the accurate mass information. The precision for substance quantification was 0.15 % at best for an individual measurement and was in general mainly determined by the signal-to-noise ratio of the chromatographic peak. Detector non-linearity was found to be insignificant up to a mixing ratio of roughly 150 ppt at a sampled volume of 0.5 L. At higher concentrations, non-linearities of a few percent were observed (precision level: 0.2 %) but could be attributed to a potential source within the detection system. A straightforward correction for those non-linearities was applied in data processing, again by exploiting the accurate mass information. Based on the overall characterization results, the GC-TOFMS instrument was found to be very well suited for the task of quantitative halocarbon trace gas observation and a big step forward compared to scanning quadrupole MS with low mass resolving power and to a TOFMS technique previously reported to be non-linear and restricted by a small dynamic range.
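The ~5 ppm mass accuracy quoted above is a relative deviation of the measured ion mass from its exact mass. A one-line helper (with illustrative numbers) makes the definition concrete:

```python
def mass_accuracy_ppm(m_measured, m_exact):
    """Relative deviation of a measured ion mass from its exact mass,
    in parts per million -- the figure of merit quoted for the TOFMS
    mass axis after automatic recalibration."""
    return (m_measured - m_exact) / m_exact * 1e6

# Illustrative numbers: a 0.5 mDa offset at m/z 100 corresponds to 5 ppm
deviation = mass_accuracy_ppm(100.0005, 100.0000)
```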
Highlights
• Fully automated analysis of teleseismic XKS shear-wave splitting.
• Rapid analysis of large seismological data sets.
• Automated window selection and quality classification.
• Application to the USArray Transportable Array including expansion to Alaska.
• Improved statistical evidence and objectivity of derived effective splitting.
Abstract
Recent technological advances have led to community-wide use of large-scale seismic experiments, which produce seismic data on previously impossible scales. Standard processing procedures therefore require automation to enable fast and objective analysis of the data. Among these procedures, XKS splitting is an important tool for deriving first insights into the Earth's deformation regimes at depth by studying seismic anisotropy. Most often, shear-wave splitting is interpreted as dominated by crystallographic preferred orientation (CPO) of mantle minerals such as olivine and can thus be used as a proxy for mantle flow processes. Here, we introduce an addition to the MATLAB®-based SplitRacer toolbox (Reiss and Rümpker, 2017) which automates the entire XKS-splitting procedure. This is achieved by automating (1) the choice of a time window based on spectral analyses and (2) the categorization of results based on three different XKS-splitting methods (energy minimization, rotation correlation, and splitting intensity). This provides effective and objective results for splitting as well as null measurements. The extension allows SplitRacer to be used without a graphical interface and introduces bootstrap statistics as an error estimate for the single-layer joint-splitting method. The procedures are designed to allow fast and more objective analysis of the vast amounts of data produced by recent seismic deployments (e.g. USArray, AlpArray). We test the automation by applying the analysis to the USArray data set, which comprises approximately 1900 stations with between two and fifteen years of data each. With the more objective automatic analysis we can reproduce the general pattern of results from former studies. Based on a joint-splitting approach, we approximate the splitting effect at individual stations by a single anisotropic layer.
As we include null measurements as well as a larger data set than previous studies, we can provide improved statistical evidence for these effective splitting parameters.
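The bootstrap error estimate mentioned in the abstract can be sketched generically: resample the single-event measurements with replacement many times and take percentiles of the resampled means. This is a standard percentile bootstrap, not the SplitRacer implementation; the data and function names below are illustrative.

```python
import numpy as np

def bootstrap_ci(values, n_resamples=2000, ci=95, seed=0):
    """Percentile bootstrap confidence interval for the mean of a set of
    single-event measurements (e.g. delay times from XKS splitting).
    Generic sketch -- not the SplitRacer implementation."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = np.empty(n_resamples)
    for i in range(n_resamples):
        # resample the measurements with replacement and store the mean
        sample = rng.choice(values, size=values.size, replace=True)
        means[i] = sample.mean()
    half = (100 - ci) / 2
    lo, hi = np.percentile(means, [half, 100 - half])
    return values.mean(), lo, hi

# Example: hypothetical delay times (s) measured at one station
dt = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1, 1.0]
mean_dt, lo, hi = bootstrap_ci(dt)
```

The width of the interval shrinks as more events (including nulls) are added, which is how a larger data set translates into improved statistical evidence for the effective splitting parameters.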
A primordial state of matter consisting of free quarks and gluons that existed in the early universe a few microseconds after the Big Bang is also expected to form in high-energy heavy-ion collisions. Determining the equation of state (EoS) of such primordial matter is the ultimate goal of high-energy heavy-ion experiments. Here we use supervised learning with a deep convolutional neural network to identify the EoS employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in quantum chromodynamics. Such an EoS-meter is model independent and insensitive to other simulation inputs, including the initial conditions of the hydrodynamic simulations.
Knowledge about initial tectonic and depositional dynamics, as well as about the influence of early rifting on climatic and environmental evolution, remains largely speculative because the relevant sediments are usually deeply buried. Within the East African Rift System, inversion tectonics has uplifted a few of these successions to the surface, presenting rare windows into the pre-rift depositional history. One such example, an exceptional 700 m long and up to 60 m high fresh road cut, provided the opportunity to study initial rift successions of the southern Albertine Rift (Western Uganda) in detail. The study focuses on the basal and poorly known Middle to Late Miocene in order to unravel the climatic, environmental, hydrological, and tectonic evolution of the initial Albertine Rift. A large and robust multi-proxy dataset was gathered over 169 m of stratigraphic thickness, spanning 14.5 to 4.9 Ma according to a revised lithostratigraphic model. Fieldwork comprised logging of the sedimentary record, spectral gamma ray, magnetic susceptibility, and 2D wall mapping with photomosaics. Additionally, the sections were sampled for bulk mineral and clay mineral analysis. The succession exposes a suite of lithofacies and architectural elements detailing the evolution of a fluvio-lacustrine system. Five depositional environments were identified, showing an overall back-stepping trend from an alluvial plain to a delta plain and finally palustrine/shallow lacustrine conditions. Mesoscale base-level cycles, the preservation potential of architectural elements, and stacking patterns indicate limited accommodation space that nevertheless increased over time. This overall trend indicates increasing tectonic subsidence, which can be explained by flexural downwarp during the pre-rift phase, grading in the upper part into fault-controlled crustal extension of the syn-rift phase, which increasingly disrupted a large-scale river system.
This study revealed that, from the Middle Miocene to the early Pliocene, palaeoclimate trends become marked by increasing and more strongly fluctuating Th concentrations, loss of feldspar, intercalated lenses of hydroxosulphate minerals, and a shift from smectite-dominated to kaolinite-dominated clays. These signals are all interpreted as detrital except for the hydroxosulphates, and they mirror the increasing intensity of chemical weathering and stripping of soils in the catchment. A trend towards increasing humidity is supported by an increase in lacustrine sediment facies and a lake-level rise. Nevertheless, intercalations of hydroxosulphates, ferricretes, and pedogenised horizons attest to ongoing seasonality and dry intervals. Finally, based on the revised stratigraphic model, a sequence-stratigraphic correlation of the outcrop's depositional cycles with basin-scale cycles is presented. According to these cycles, the transition from the pre-rift to the syn-rift stage is marked by an unconformity and a tectonic pulse in the latest Miocene. However, the responses of fluvial supply, the depositional system, and climate conditions are less punctuated, characterised by gradual trends and temporal delays. The long pre-rift phase (ca. 10 Myr) and the gradual transition to the syn-rift phase are in accordance with the active rifting model, which is based on thermal thinning of the lithosphere by asthenospheric upwelling.
Highlights
• We find DBr(fluid/melt) = 1.19 to 3.92 for experimental Br degassing from basaltic magma into aqueous fluids.
• D < 1 under almost dry conditions suggests only minor Br degassing for dry intra-plate volcanism relative to volcanic arcs.
• An annual global Br flux of 23.5–72.9 × 10⁹ g/yr into the atmosphere was calculated.
Abstract
We present the first in-situ partitioning data for bromine between a natural basaltic melt and a coexisting fluid. For this study, hydrothermal diamond anvil cell experiments were conducted at pressures up to 1.7 GPa. We combined laser heating to melt the basalt glass with external heating to lower the temperature gradient in the cell and to initiate circulation of the aqueous fluid. Bromine concentrations were measured in-situ with X-ray fluorescence in the basaltic melts, glasses, and in the fluid. From the results we calculated partition coefficients of DBr(fluid/melt) = 1.19 to 3.92 in the range of 0.4 to 1 GPa for aqueous fluids. Experiments with neon as the surrounding fluid (DBr(fluid/melt) = 0.38 ± 0.01 at 1.1 GPa) suggest that Br release from a basalt into volatiles that have no bonding affinity with Br is weak, as should be the case for dry intra-plate volcanic eruptions. From the experimentally determined partition coefficients and from global Br concentrations in melt inclusions of arc magmas, we calculated an annual global Br flux of 23.5–72.9 × 10⁹ g/yr.
Production and use of many synthetic halogenated trace gases are regulated internationally due to their contribution to stratospheric ozone depletion or climate change. In many applications they have been replaced by shorter-lived compounds, which have become measurable in the atmosphere as emissions increased. Non-target monitoring of trace gases rather than targeted measurements of well-known substances is needed to keep up with such changes in the atmospheric composition. We regularly deploy gas chromatography (GC) coupled to time-of-flight mass spectrometry (TOF-MS) for analysis of flask air samples and in situ measurements at the Taunus Observatory, a site in central Germany. TOF-MS acquires data over a continuous mass range that enables a retrospective analysis of the dataset, which can be considered a type of digital air archive. This archive can be used if new substances come into use and their mass spectrometric fingerprint is identified. However, quantifying new replacement halocarbons can be challenging, as mole fractions are generally low, requiring high measurement precision and low detection limits. In addition, calibration can be demanding, as calibration gases may not contain sufficiently high amounts of newly measured substances or the amounts in the calibration gas may have not been quantified. This paper presents an indirect data evaluation approach for TOF-MS data, where the calibration is linked to another compound which could be quantified in the calibration gas. We also present an approach to evaluate the quality of the indirect calibration method, select periods of stable instrument performance and determine well suited reference compounds. The method is applied to three short-lived synthetic halocarbons: HFO-1234yf, HFO-1234ze(E), and HCFO-1233zd(E). They represent replacements for longer-lived hydrofluorocarbons (HFCs) and exhibit increasing mole fractions in the atmosphere.
The indirectly calibrated results are compared to directly calibrated measurements using data from TOF-MS canister sample analysis and TOF-MS in situ measurements, which are available for some periods of our dataset. Applying the indirect calibration method to several test cases results in uncertainties of around 6 % to 11 %; for the hydro(chloro)fluoroolefins (H(C)FOs), uncertainties of up to 23 % result. The indirectly calibrated mole fractions of the investigated H(C)FOs at Taunus Observatory lie between those measured at the urban Dübendorf station and the Jungfraujoch station in Switzerland.
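The core of the indirect approach — tying the target's response to a reference compound that is quantified in the calibration gas, so that common instrument drift cancels in the ratio — can be reduced to a simple sketch. The single relative response factor and all numbers below are simplifying, illustrative assumptions; the paper's actual evaluation chain is more involved.

```python
def indirect_mole_fraction(area_target, area_ref, x_ref, rel_response):
    """Indirectly calibrated mole fraction of a target halocarbon.

    area_target, area_ref : detector peak areas from the same measurement
    x_ref                 : known mole fraction of the reference compound
    rel_response          : target-to-reference response factor, determined
                            once during a period with direct calibration

    Simplified sketch of the ratio idea, not the exact published method."""
    return (area_target / area_ref) * x_ref / rel_response

# Illustrative numbers: the target responds 1.25x as strongly per unit
# mole fraction as the reference compound
x_target = indirect_mole_fraction(area_target=0.5, area_ref=2.0,
                                  x_ref=10.0, rel_response=1.25)  # ppt
```

Because both peak areas come from the same run, a sensitivity drift common to target and reference multiplies numerator and denominator alike and drops out — which is why the quality of the indirect calibration hinges on choosing a well-suited, stable reference compound.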
The crude oil constituents benzene, toluene, ethylbenzene, and the three xylene isomers (BTEX) are the dominant groundwater contaminants originating from surface spills at oil production facilities and from gasoline and jet fuel. BTEX thus pose a threat to the world's scarce drinking water resources due to their water solubility and toxicity. Active remediation of a BTEX spill proves not only very expensive but almost impossible when it comes to the complete removal of contaminants from the subsurface. A favoured and common practice is to combine an active remediation process focussing on the contamination source with monitoring of the residual contamination in the subsurface (monitored natural attenuation; MNA). MNA includes all naturally occurring biological, chemical, and physical processes in the subsurface. The general goal of this work was to improve the knowledge of biodegradation of aromatic hydrocarbons under anaerobic conditions in groundwater. To this end, groundwater and soil at the former military underground storage tank (UST) site Schäferhof-Süd near Nienburg/Weser (Niedersachsen, Germany) were sampled and analysed. The investigations were carried out in collaboration with the Umweltbundesamt, the universities of Frankfurt and Bremen, and alphacon GmbH, Ganderkesee. To investigate the extent of groundwater contamination, the terminal electron acceptor processes (TEAPs), and the metabolites of BTEX degradation in groundwater, six observation wells were sampled at regular intervals between January 2002 and September 2004. The wells were positioned to cover the upstream area, the source area, and the downstream area of the presumed contamination source. Additionally, vertical sediment profiles were sampled and investigated with respect to the spreading and concentration of BTEX in the subsurface. A large residual BTEX contamination is present in soil and groundwater at the studied site.
Maximum BTEX concentrations of 17 mg/kg were recorded in sediment from the unsaturated zone. In the capillary fringe, values of 450 mg/kg were recorded (October 2004), and in the saturated zone maximum values of 6.7 mg/kg BTEX were detected. The groundwater samples indicate increasing BTEX concentrations in the direction of groundwater flow (from 532 µg/l up to 3300 µg/l, mean values). Biodegradation of aromatic hydrocarbons under anaerobic conditions in the subsurface of contaminated sites is characterised by the generation of metabolites. From the monoaromatic hydrocarbons (BTEX), metabolites such as benzoic acid (BA), its methylated homologues, and C1- and C2-benzylsuccinic acids (BSA) are generated as intermediates. A solid-phase extraction method based on an octadecyl-bonded silica sorbent was developed to concentrate such metabolites from water samples, followed by derivatization and gas chromatography/mass spectrometry (GC/MS) of the extracts. Recovery rates ranged between 75 and 97 %; the method detection limit was 0.8 µg/l. Organic acids were identified as metabolic by-products of biodegradation. Benzoic acid and C1-, C2-, and C3-benzoic acids were detected at considerable concentrations in all contaminated wells. Furthermore, the depletion of the dominant terminal electron acceptors (TEAs) oxygen, nitrate, and sulphate and the production of dissolved ferrous iron and methane in the groundwater indicate biologically mediated processes in the plume, clearly proving the occurrence of natural attenuation. A large overlap of different redox zones was observed in the studied part of the plume. An important finding of this study is the strong influence of groundwater level fluctuations on BTEX concentrations in groundwater. A very dry summer occurred in 2003 during the monitoring period, resulting on site in a drop of the groundwater level to 1.7 m and a concomitant increase of BTEX concentrations from 240 µg/l to 1300 µg/l.
Groundwater level fluctuations, natural degradation, and retention processes essentially control BTEX concentrations in the groundwater, with groundwater level fluctuations exerting a far stronger influence than biological degradation. Increasing BTEX concentrations are hence not a consequence of limited biological degradation. Another part of the study was to observe the isotopic fractionation of the electron acceptor Fe(III), due to biologically mediated reduction of Fe(III) to water-soluble Fe(II) at the site; first field data are presented. Both groundwater and sediment samples were analysed for their Fe isotopic composition using high-mass-resolution multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). The δ56Fe values of groundwater samples taken from observation wells located downstream of the source area were isotopically lighter than the δ56Fe values obtained from groundwater in the uncontaminated well. The Fe isotopic composition of most parts of the sediment profile was similar to that of uncontaminated groundwater. Thus, a significant iron isotope fractionation can be observed between sediment and groundwater downstream of the BTEX contamination.
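The δ56Fe values above follow the standard per-mil delta notation. A minimal helper makes the definition explicit; the 56Fe/54Fe reference ratio shown is an approximate value derived from natural abundances, used purely for illustration rather than the certified standard value.

```python
# Approximate natural 56Fe/54Fe ratio -- illustrative, not a certified value
R_REF_56_54 = 15.698

def delta56fe(r_sample, r_standard=R_REF_56_54):
    """Iron isotope composition in per-mil delta notation:
    delta56Fe = (R_sample / R_standard - 1) * 1000, with R = 56Fe/54Fe."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample depleted in 56Fe relative to the standard gives a negative delta,
# i.e. it is "isotopically lighter" in the sense used above
light = delta56fe(R_REF_56_54 * 0.9995)
```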
Against the background of global climate change and the debate over human influence ("anthropogenic greenhouse effect"), observational data of near-surface air temperature and precipitation were used to examine which structures of climate change can be discerned in Hesse. The study area covers 49°-52° North / 7°-11° East and thus also includes parts of the neighbouring federal states. The analysis focused on the interval 1951-2000, since by far the most data are available for this period (temperature: 53 stations, precipitation: 674 stations). In addition, analyses were carried out for the period 1901 to 2000 (or 2003) as well as for 30-year sub-intervals. The analysis methodology comprises the calculation of linear trends, including their spatial structures (trend maps), the detection of fluctuations (spectral variance analysis), extreme value analyses, and the discussion of natural and anthropogenic influencing factors (signal analysis by means of multiple stepwise regression). The results obtained from daily, monthly, seasonal, and annual data are highly diverse and heterogeneous. For the areal mean of Hesse, the period 1951-2000 shows an overall temperature increase (annual data) of 0.9 °C, strongest in winter (1.6 °C) and weakest in autumn (0.2 °C). For 1901-2003, annual warming at the stations covered ranges from 0.7 to 1.8 °C; over 30-year sub-intervals, cooling also occurs in part, especially when regional-seasonal or monthly structures are considered. These structures are far more pronounced for precipitation. In the areal mean of Hesse, annual precipitation increased by 8.5 % over 1951-2000, with maxima in autumn (25 %) and winter (22 %; spring 20 %), whereas summer precipitation decreased by 18 % (concentrated in June and especially August).
The fluctuations are dominated by mean periods of about 2.2, 3.3, 5.5, and 7.5-8 years, for precipitation also about 4.5 years. The sunspot cycle is not reflected in the analysed climate data. Together with the extreme values, these fluctuations cause temporal instabilities of the climate trends, especially when relatively short (e.g. 30-year) periods are considered. The again very diverse and varied results of the extreme value analysis largely mirror the trends in the case of temperature, since the scatter of the data has hardly changed: that is, an increase in the probability of exceeding extremely warm thresholds (especially in spring, mostly also in summer and winter, least in autumn) and a decrease in the probability of falling below extremely cold thresholds (though very inconsistent for winter daily data). For precipitation, the decrease of extremely wet months in summer and the increase of extremely wet days in autumn and winter are most striking. In the long term, this results in very marked changes in return periods. For example, between 1901 and 2001 the return period of an extremely wet winter in Alsfeld decreased from 100 to 5.6 years, whereas the corresponding return period of an extremely wet summer in Bad Camberg increased almost to the point of impossibility. In the discussion of causes, a clear anthropogenic influence ("greenhouse effect") can be identified in the temperature data. Finally, it is discussed to what extent it is reasonable to extrapolate the observed trends into the future in comparison with model projections.
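The linear trends reported above (e.g. a 0.9 °C warming over 1951-2000) are total changes over the interval, i.e. a least-squares slope multiplied by the interval length. A minimal sketch with synthetic annual data (the station series and the built-in trend are illustrative assumptions):

```python
import numpy as np

def linear_trend(years, values):
    """Least-squares linear trend: returns the slope (per year) and the
    total change over the interval (slope times the spanned years), the
    quantity typically quoted in trend maps."""
    slope, _intercept = np.polyfit(years, values, 1)
    return slope, slope * (years[-1] - years[0])

# Synthetic annual-mean temperatures with a built-in trend of 0.018 K/yr
years = np.arange(1951, 2001)
temps = 8.0 + 0.018 * (years - 1951)
slope, total_change = linear_trend(years, temps)
```

On short (e.g. 30-year) sub-intervals the fitted slope becomes sensitive to the multi-year fluctuations described above, which is one reason the sub-interval trends in the study partly even change sign.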
The aim of this work was to establish time-of-flight mass spectrometry as a new method for the instrumental analysis of halogenated trace gases in air. The underlying motivation is that anthropogenic emissions of many members of this substance class have a negative impact on the environment: in the atmosphere, these substances or their degradation products act as catalysts for stratospheric ozone depletion and enhance Earth's radiative forcing by absorbing electromagnetic radiation in the so-called atmospheric window. To quantify these effects and their consequences, it is necessary to monitor the concentrations and trends of these substances in the atmosphere. Only then can countermeasures such as production regulations be planned and evaluated. In combination with inverse modelling, conclusions about the quantities actually emitted can also be drawn. This requires an analytical method that can quantify very small amounts of these gases very precisely, so that even weak trends are detected. In addition, the method must be able to keep pace with the growing number of known substances that need to be monitored. Especially in the latter respect, time-of-flight mass spectrometry offers a decisive advantage over the "conventional" method, quadrupole mass spectrometry: it records the full mass spectrum without sacrificing sensitivity. To determine atmospheric mixing ratios of substances in the pmol mol−1 to fmol mol−1 range, a quadrupole mass spectrometer must be operated in single ion monitoring mode – this achieves high sensitivity, but only the intensity of one specific mass-to-charge ratio (in short: mass) is recorded at any given time.
A time-of-flight mass spectrometer, by contrast, extracts ions at a frequency in the kilohertz range and records the complete time-of-flight spectrum – and thus mass spectrum – for every extraction.
The task of this work was to set up a time-of-flight mass spectrometer with an upstream sample enrichment unit and a gas chromatograph for separating the substance mixture prior to detection, and to develop tools for data evaluation. To prepare for future field deployment, the setup was to be as compact, mobile and fully automated as possible. Subsequently, sensitivity, precision and dynamic range were to be tested and optimised, and the applicability to the analysis of halogenated trace gases demonstrated. The results of the instrument development presented in this thesis are reflected in three publications, which, in thematic order, cover the sample enrichment (Obersteiner et al., 2016b), the comparison of quadrupole and time-of-flight mass spectrometry (Hoker et al., 2015), and the properties and application of the new setup (Obersteiner et al., 2016a). With these papers, the Engel working group is the first worldwide to routinely perform high-precision analysis of halogenated trace gases by time-of-flight mass spectrometry. The next step is the transition from laboratory use to field measurements, e.g. in the form of ground-based in situ analysis of tropospheric air masses at the Taunus Observatory on the Kleiner Feldberg. Since no measurement station for the analytical question described here has existed in Germany so far, a substantial improvement in the monitoring of halogenated greenhouse gases and ozone-depleting substances in Europe could be achieved. Furthermore, an aircraft application would be conceivable in the future, which, in addition to the range of substances covered by the time-of-flight mass spectrometer, could also benefit from its high achievable spectral rate. In combination with high-speed gas chromatography, an unprecedented time resolution of atmospheric sampling by gas chromatography–mass spectrometry could be achieved.
The East African Rift System (EARS) was initiated in the Eocene epoch between 50 and 21 Ma, probably due to the influence of mantle plumes that caused volcanism, flood basalts and rift extension in Ethiopia and the Afar region. As a result of magmatic intrusions and adiabatic decompression melting within the lithosphere caused by the impact of the Kenya plume, the EARS propagated southward from Ethiopia to Kenya between about 30 and 15 Ma, coinciding with the occurrence of volcanism. The EARS developed towards the south along the margins of the Tanzania Craton between 15 and 8 Ma. Previous findings of low-velocity anomalies within the upper mantle and the mantle transition zone indicate an upwelling of hot mantle material in the vicinity of the Afar region and the East African Rift. This study includes the analysis of P- and S-receiver functions in order to determine further impacts on the lithosphere from below. The aim was to determine the topographic undulations of further boundary layers and to identify their variability owing to the rifting processes and the formation of the EARS. The study area included the Tanzania Craton and the surrounding rift branches of the East African Rift System.
The region of the Rwenzori Mountains can be analysed in detail thanks to the large dataset of the RiftLink project. The use of the P-receiver function technique and the H-K stacking method made it possible to determine different vP/vS ratios depending on the tectonic setting in the Rwenzori region: rift shoulders (vP/vS = 1.74), Albert Rift segment (vP/vS = 1.80), Edward Rift segment (vP/vS = 1.87) and Rwenzori Mountains (vP/vS = 1.86). To determine the topography of the Moho, it is necessary to take into account the thickness of the sedimentary layer, the surface topography, the azimuthal variations in crustal thickness and the impact of local anomalies. After correcting the Moho depths for these effects, significant variations in Moho topography could be determined. The Moho depths range from 29 to 39 km beneath the rift shoulders of the Albertine Rift. Within the rift valley, the crustal thickness varies between 25 and 31 km in the Edward Rift segment and between 22 and 30 km in the Albert Rift segment. An average crustal thickness of about 26 km within the rift valley indicates the lack of a crustal root beneath the Rwenzoris. Similar variations in crustal thickness were determined using an automatic procedure for analysing S-receiver functions that was developed in this study.
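The H-K stacking mentioned above grid-searches crustal thickness H and vP/vS ratio κ by summing receiver-function amplitudes at the arrival times predicted for the Moho-converted Ps phase and its multiple. The following is a minimal synthetic sketch of that idea (the crustal vP, ray parameter, pulse shapes and stacking weights are assumptions for illustration, not the study's implementation):

```python
import math

VP, P = 6.5, 0.06          # assumed crustal vP (km/s) and ray parameter (s/km)

def delays(h, kappa):
    """Predicted Ps and PpPs delay times (s) for Moho depth h (km)
    and vP/vS ratio kappa."""
    qs = math.sqrt((kappa / VP) ** 2 - P ** 2)   # S vertical slowness
    qp = math.sqrt((1 / VP) ** 2 - P ** 2)       # P vertical slowness
    return h * (qs - qp), h * (qs + qp)

def pulse(t, t0, width=0.3):
    """Gaussian pulse standing in for a receiver-function arrival."""
    return math.exp(-((t - t0) / width) ** 2)

# Synthetic receiver function with arrivals at the true delay times
H_TRUE, K_TRUE = 30.0, 1.80
T_PS, T_PPPS = delays(H_TRUE, K_TRUE)

def rf(t):
    return pulse(t, T_PS) + 0.5 * pulse(t, T_PPPS)

# Grid search: stack the amplitudes at the predicted times (weights 0.7/0.3)
best = max(
    (0.7 * rf(tp) + 0.3 * rf(tm), h, k)
    for h in [28 + 0.25 * i for i in range(17)]      # 28-32 km
    for k in [1.70 + 0.01 * j for j in range(21)]    # 1.70-1.90
    for tp, tm in [delays(h, k)]
)
print(f"H = {best[1]:.2f} km, vP/vS = {best[2]:.2f}")
```

The grid maximum recovers the input model; with real data the stack is evaluated on observed traces, and the PpPs (and PsPs) weights break the trade-off between H and κ that the Ps time alone leaves unresolved.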
The S-receiver functions are created by applying a rotation criterion to rotate the Z, N and E components into the L, Q and T components. It is necessary to perform trial rotations using different incidence and azimuth angles to determine the correct rotation angles. The latter are identified by means of the rotation criterion, namely the amplitude ratio of the converted Moho signal to the direct S/SKS-wave signal. The L component is rotated correctly into the direction of the incident shear wave when this amplitude ratio reaches its maximum. After analysing the frequency content of the receiver functions in order to sort out harmonic and long-period traces, the individual Moho signals are checked for consistency in order to remove atypical signals. To increase the signal-to-noise ratios of the traces, the S-receiver functions are stacked. For this purpose, the signals of the direct shear waves must originate from similar epicenters. On the basis of similar ray paths, the receiver functions show comparable waveforms and converted signals. To perform the stacking procedure, it is necessary to merge the datasets of adjacent stations in order to obtain a sufficient number of receiver functions. This analysis is based on the assumption that seismic waves arriving at adjacent stations penetrate, to some extent, the same subsurface structures if their propagation paths are similar. This approach accounts for the fact that the converted signals do not result exclusively from the piercing points at the boundary layers; further signals originate from conversions at the boundary layer within the Fresnel zone. The piercing points are derived from the significant signals in the receiver functions. Depending on the order of arrival of the converted phases on the traces, the signals are attributed to the theoretical discontinuities DIS1, DIS2, DIS3 and DIS4.
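The trial-rotation criterion can be sketched with a toy two-phase example: the direct S wave is SV-polarised perpendicular to its ray, while a converted Sp phase is P-polarised along its (shallower) ray, so the trial angle that maximises the converted-to-direct amplitude ratio on L recovers the S incidence angle. All angles and amplitudes below are hypothetical, and the real procedure additionally searches over azimuth:

```python
import math

def l_amplitude(pol, a_deg):
    """Projection of a (Z, R) polarisation vector onto the trial L axis
    for a rotation angle a_deg measured from the vertical."""
    a = math.radians(a_deg)
    return pol[0] * math.cos(a) + pol[1] * math.sin(a)

# Hypothetical polarisations in the vertical (Z, R) plane: the direct S
# is SV-polarised, perpendicular to a ray arriving at 25 deg incidence;
# the converted Sp phase is P-polarised along a ray at 15 deg incidence.
i_s, i_p = math.radians(25.0), math.radians(15.0)
direct_s = (-math.sin(i_s), math.cos(i_s))
converted = (math.cos(i_p), math.sin(i_p))

def criterion(a_deg, eps=1e-6):
    """Amplitude ratio of the converted signal to the direct S on L."""
    return abs(l_amplitude(converted, a_deg)) / (abs(l_amplitude(direct_s, a_deg)) + eps)

# Trial rotations over 0-60 deg: keep the angle maximising the criterion
best_angle = max(range(61), key=criterion)
print(best_angle)
```

At the correct angle the direct S vanishes from L, so the ratio peaks there; with noisy data the maximum is broader but sits at the same angle.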
However, partly due to the low signal-to-noise ratios of the traces, it is difficult to identify the real conversions and to ensure that the converted signals are attributed to the correct boundary layers. For this reason, it is necessary to check the conversion depths for mutual consistency. In the case of inconsistent conversion depths, the corresponding signals are either assigned to another seismic boundary layer or removed from the dataset. To verify the functionality of the automatic procedure and to determine its resolving power with respect to two boundary layers, several models are tested, including horizontal and dipping discontinuities. To resolve distinct discontinuities, their depths must differ by at least 60 km; otherwise, due to the similar depth ranges of the different boundary layers, the converted signals cannot be separated from each other. As a consequence, converted signals that originate from different discontinuities are attributed to a single one. Further tests including break-off edges of seismic discontinuities are performed to check the attribution of the converted signals to the discontinuities. Owing to the varying number of boundary layers, the converted signals cannot simply be attributed to the discontinuities according to the order of their arrival on the traces; it is necessary to correct their attribution to the seismic discontinuities in order to resolve the boundary layers.
The crust-mantle boundary and further discontinuities within the lithospheric mantle are investigated by applying this automatic procedure. Depending on the tectonic setting, the conversion depths of the Moho range from about 30–45 km beneath the western rift shoulder, to 20–35 km within the rift valley, to 30–40 km beneath the eastern rift shoulder. The long wavelengths of the shear waves hamper the correct identification of the converted phases in the S-receiver functions. With respect to the relative differences in conversion depth, the topographic undulations of the crust-mantle boundary are consistent with the Moho depths derived from P-receiver functions. In contrast to the Rwenzori region, it is difficult to fully resolve the trend of the Moho in the remaining area of the East African Rift due to the small dataset provided by IRIS. The results exhibit an increase in crustal thickness to up to 45 km in the region of the Cenozoic volcanics such as Virunga, Kivu, Rungwe and Kenya. The greatest Moho depths of more than 50 km are located near Mount Kilimanjaro. In addition to the Moho, the analysis of the S-receiver functions revealed two further boundary layers at depths of 60–140 km and 110–260 km, which are associated with a mid-lithospheric discontinuity and the lithosphere-asthenosphere boundary, respectively. The shallowest conversion depths of the LAB are concentrated in small-scale regions within the rift branches, namely the northern Albertine Rift, the Chyulu Hills and the Mozambique Belt, which are located around the Tanzania Craton. The larger thickness of the lithosphere beneath the cratonic terrain indicates that the Tanzania Craton is not significantly eroded. However, there are indications that the lithosphere beneath the craton and the rift branches is penetrated by ascending asthenospheric melts to depths of up to 140 and 60 km, respectively.
The top of the ascending melts is associated with the occurrence of the mid-lithospheric discontinuity. The shallowest conversion depths of this boundary layer (60 – 90 km) are related to the rifted areas of the EARS and the Cenozoic volcanic provinces, which are located along the Albertine Rift, the Kenya Rift and the Rukwa-Malawi rift zones. The deepest conversion depths of up to 140 km are related to the Rwenzori Belt, the Ugandan Basement Complex and the interior of the Tanzania Craton.
In the past sixty years, excessive water consumption and dam construction have significantly influenced natural flow regimes and surface freshwater ecosystems throughout China, resulting in serious environmental problems. In order to balance the competing water demands of humans and the environment and to provide knowledge for sustainable water management, assessments of anthropogenic flow alterations and their impacts on aquatic and riparian ecosystems in China are needed.
In this study, the first evaluation of quantitative relationships between anthropogenic flow alterations and ecological responses in eleven river basins and watersheds in China was performed, based on data obtained from published case studies. Quantitative relationships between changes in average annual discharge, seasonal low flow and seasonal high flow and changes in ecological indicators (fish diversity, fish catch, vegetation cover, etc.) were analyzed. The results showed that changes in riparian vegetation cover as well as changes in fish diversity and fish catch were strongly correlated with changes in flow magnitude (r = 0.77, 0.66), especially with changes in average annual river discharge. In addition, more than half of the variation in vegetation cover could be explained by changes in average annual river discharge (r² = 0.63), and roughly 50 % of the changes in fish catch in arid and semi-arid regions and 60 % of the changes in fish catch in humid regions could be related to alterations in average annual river discharge (r² = 0.53, 0.58).
In a supplementary analysis of this study, the first estimation of quantitative relationships between decreases in native fish species richness and anthropogenic flow alterations in 34 river basins and sub-basins in China was conducted. Linear relationships between losses of native fish species and five ecologically relevant flow indicators were analyzed by single and multiple regression models. For the single regression analysis, significant linear relationships were detected for the indicators of long-term average annual discharge (ILTA) and statistical low flow Q90 (IQ90). For the multiple regressions, no indicator other than ILTA had a significant relationship with changes in the number of fish species, mainly due to collinearity. Two conclusions emerged from the analysis: 1) losses of fish species were positively correlated with changes in ILTA in China, and 2) ILTA was dominant over the other flow indicators included in this research for the given dataset. These results provide a guideline for sustainable water resources management in rivers with a high risk of fish extinction in China.
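The collinearity argument can be made concrete: when a second flow indicator largely tracks long-term average discharge, the correlation between the two predictors inflates the variance of their multiple-regression coefficients (variance inflation factor, VIF = 1/(1 − r²)), so only one of them remains significant. The synthetic data below are purely illustrative and are not the study's dataset:

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
# Synthetic stand-ins for the flow indicators (34 basins, as in the study,
# but the numbers are made up): a low-flow indicator that largely tracks
# long-term average annual discharge.
ilta = [random.gauss(100, 20) for _ in range(34)]      # long-term mean flow
iq90 = [0.3 * v + random.gauss(0, 2) for v in ilta]    # statistical low flow

r12 = pearson_r(ilta, iq90)
vif = 1 / (1 - r12 ** 2)    # variance inflation factor of either predictor
print(f"r = {r12:.2f}, VIF = {vif:.1f}")
```

A VIF well above the conventional thresholds of 5–10 signals that the two indicators carry nearly the same information, which is exactly why ILTA alone absorbed the explanatory power in the multiple regressions.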
Induced polarization (IP) is a geoelectrical method that was originally developed for the exploration of ore deposits. Besides metallic conductors, clay minerals, the pore space and the chemical composition of the pore fluid also contribute to the polarizability of the subsurface. Spectral induced polarization (SIP) investigates polarizability in a frequency range from 1 mHz to 1 kHz and uses the recorded spectra to discriminate between materials. Formerly associated with an enormous instrumental effort, advances in instrumentation over the past two decades have led to SIP being used increasingly in environmental geophysics, for problems such as the detection of contaminated sites and groundwater protection. In archaeology, induced polarization has so far been a rarely used method. Within the graduate school "Archäologische Analytik" at J. W. Goethe University, the development of a multi-electrode instrument, the SIP-256, was begun. The aim of the present work was to continue this development. Since the scientific scope of this dissertation is limited to the investigation of archaeological objects, the first task was to implement automated measurement procedures that allow the complex electrical conductivity of small-scale 2D and 3D structures to be recorded. The use of the SIP-256 led to a considerable reduction in measurement time and was decisive for the realisation of this work. The second focus of the work is the search for fields of application for SIP within archaeological prospection. Based on the causes of polarization effects, three fields of application are presented in this work. The first exploits the advantages of SIP in the prospection of ores.
At a medieval smelting site near Seesen in the Harz mountains, more slag pits could be located than with a conventional resistivity survey. During a German-Bulgarian excavation campaign in Pliska (Bulgaria) in 1999, an areal application of IP succeeded in detecting a settlement horizon overlying loam deposits. The remains of a building produced a measurable polarization effect. The early medieval settlement remains lie at a relatively great depth of 2 to 3 m and were uncovered in a subsequent excavation. A key question was whether wooden objects can be detected with SIP. Laboratory measurements at TU Clausthal established that wood is a polarizable material. The samples investigated include timbers from a Bronze Age plank trackway, recovered during excavations in the Federsee bog (district of Biberach). Motivated by the laboratory investigations, a field survey was carried out over the trackway. For the first time, a wooden object was detected with spectral induced polarization. Wood plays an important role through the dendrochronological dating of sites, but so far no geophysical method had been able to prospect it satisfactorily. In conclusion, spectral induced polarization has established itself as a valuable method in archaeological prospection. Structures that could not be recognised with a conventional resistivity survey were clearly identified by SIP. Of course, the present results still need to be confirmed by further measurements, but it is becoming apparent that, with continuing instrumental development leading to faster measurement procedures, important additional information can be obtained through spectral induced polarization.
The oxidation state of sulfur in slab fluids is controversial, with both dominantly oxidized and reduced species proposed. Here we use in situ X-ray absorption spectroscopy analysis of sulfur-in-apatite to monitor changes in the oxidation state of sulfur during high-P metasomatism by slab fluids in the subduction channel. Our samples include a 73 cm continuous transect of reaction zones between a metagabbroic eclogite block and serpentinite matrix from a mélange zone on the island of Syros, Greece. The block core consists of garnet, omphacite, phengite, paragonite, epidote-clinozoisite, and rutile. In this region, apatite is only observed as elongate inclusions in omphacite cores. From the core outwards micas are increasingly replaced by epidote-clinozoisite, garnets are smaller and more frequent, pyrite + bornite is observed as inclusions in recrystallized omphacite, and apatite is increasingly abundant in the matrix and inclusions in garnet. A major transition at 48 cm separates an assemblage of Ca-Na amphibole, omphacite, chlorite, pyrite, and apatite from the inner garnet-bearing eclogite assemblages. Omphacite disappears from the assemblage at ~56 cm and amphibole compositions sharply transition to tremolite at 59 cm. Finally, the assemblage tremolite + talc + pyrite is observed after ~70 cm. Apatites in the eclogite assemblages exclusively display S6+ peaks in their absorption spectra. This includes apatite inclusions in omphacite in the least altered lithology, as well as matrix apatite and isolated apatite inclusions in garnet in the outermost metasomatized eclogite zone. In the intermediate pyrite-rich (~1-5 vol %) amphibole + omphacite + chlorite zone, apatite displays a strong S1- absorption peak in most grains, with rare analyses showing mixed S1- and S6+. Finally, apatite in the outermost tremolite-bearing assemblages only displays a S6+ peak.
The pyrite-rich zone at 48 cm occurs at the initial interface between the serpentinite matrix and eclogite block, characterized by a dramatic decrease in Na content and Mg#. Our data suggest that reduction of S6+ in infiltrating fluids to S1- in pyrite became focused as Fe diffused across the steep Mg# gradient, resulting in pyrite precipitation. In contrast, S reduction in the Mg-rich tremolite-dominant portions of the transect was limited by a lack of Fe, resulting in low modes of pyrite and fluid buffered S6+ in apatite. Finally, S6+-bearing apatite is also observed in reaction zone lithologies from elsewhere on Syros, suggesting our observations are not isolated. Two important conclusions are drawn from these data and observations: (1) In the case of Syros, slab fluids at eclogite-facies conditions carried oxidized S6+, and (2) The interaction of these fluids with eclogites composed of ferrous-Fe silicates resulted in extensive sulfide precipitation.
This study presents an evaluation of a pulse height condensation particle counter (PH-CPC) and an expansion condensation particle counter (E-CPC) in terms of measuring ambient and laboratory-generated molecular and ion clusters. Ambient molecular cluster concentrations were measured with both instruments as they were deployed in conjunction with an ion spectrometer and other aerosol instruments in Hyytiälä, Finland at the SMEAR II station between 1 March and 30 June 2007. The observed cluster concentrations varied and ranged from some thousands to 100 000 cm−3. Both instruments showed similar (within a factor of ~5) concentrations. An average size of the detected clusters was approximately 1.8 nm. As the atmospheric measurement of sub-2-nm particles and molecular clusters is a challenging task, we conclude that most likely we were unable to detect the smallest clusters. Nevertheless, the reported concentrations are the best estimates to date for minimum cluster concentrations in a boreal forest environment.
Ambient and laboratory-generated molecular and ion clusters were investigated. Here we present data on the ambient concentrations of both charged and uncharged molecular clusters, as well as on the performance of a pulse height condensation particle counter (PH-CPC) and an expansion condensation particle counter (E-CPC). Ambient molecular cluster concentrations were measured using both instruments, which were deployed in conjunction with ion spectrometers and other aerosol instruments in Hyytiälä, Finland at the SMEAR II station from 1 March to 30 June 2007. The observed cluster concentrations varied from ca. 1000 to 100 000 cm−3. Both instruments showed similar concentrations. The average size of the detected clusters was approximately 1.8 nm. As the atmospheric measurement of sub-2-nm particles and molecular clusters is a challenging task and we were most likely unable to detect the smallest clusters, the reported concentrations are our best estimates for minimum cluster concentrations in a boreal forest environment.
We present the application of Time-of-Flight Mass Spectrometry (TOF MS) for the analysis of halocarbons in the atmosphere, after cryogenic sample preconcentration and gas chromatographic separation. For the described field of application, the Quadrupole Mass Spectrometer (QP MS) is the state-of-the-art detector. This work aims at comparing two commercially available instruments, a QP MS and a TOF MS, with respect to mass resolution, mass accuracy, sensitivity, measurement precision and detector linearity. Both mass spectrometers are operated on the same gas chromatographic system by splitting the column effluent to both detectors. The QP MS had to be operated in optimised Single Ion Monitoring (SIM) mode to achieve a sensitivity that could compete with the TOF MS. The TOF MS provided full mass range information in any acquired mass spectrum without losing sensitivity. Whilst the QP MS showed the performance already achieved in earlier tests, the sensitivity of the TOF MS was on average higher than that of the QP MS in the "operational" SIM mode by a factor of up to 3, reaching detection limits of less than 0.2 pg. Measurement precision determined for the whole analytical system was up to 0.2 %, depending on substance and sampled volume. The TOF MS instrument used for this study displayed significant non-linearities of up to 10 % for two thirds of all analysed substances.
The growth of aerosol due to the aqueous phase oxidation of sulfur dioxide by ozone was measured in laboratory-generated clouds created in the Cosmics Leaving OUtdoor Droplets (CLOUD) chamber at the European Organization for Nuclear Research (CERN). Experiments were performed at 10 and −10 °C, on acidic (sulfuric acid) and on partially to fully neutralised (ammonium sulfate) seed aerosol. Clouds were generated by performing an adiabatic expansion – pressurising the chamber to 220 hPa above atmospheric pressure, and then rapidly releasing the excess pressure, resulting in a cooling, condensation of water on the aerosol and a cloud lifetime of approximately 6 min. A model was developed to compare the observed aerosol growth with that predicted using oxidation rate constants previously measured in bulk solutions. The model captured the measured aerosol growth very well for experiments performed at 10 and −10 °C, indicating that, in contrast to some previous studies, the oxidation rates of SO2 in a dispersed aqueous system can be well represented by using accepted rate constants, based on bulk measurements. To the best of our knowledge, these are the first laboratory-based measurements of aqueous phase oxidation in a dispersed, super-cooled population of droplets. The measurements are therefore important in confirming that the extrapolation of currently accepted reaction rate constants to temperatures below 0 °C is correct.
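The magnitude of the cooling produced by venting a 220 hPa overpressure can be estimated with the dry-adiabatic Poisson relation. The starting temperature and ambient pressure below are assumed round numbers, and in the real chamber the temperature drop is moderated by latent-heat release once droplets form, so this is only an order-of-magnitude sketch:

```python
# Dry-adiabatic estimate of the cooling when a 220 hPa overpressure is
# vented to ambient pressure (assumed 1013 hPa, start at 10 degC).
R_CP = 287.0 / 1004.0        # R/cp for dry air

def expansion_temperature(t_start_k, p_start_hpa, p_end_hpa):
    """Temperature after adiabatic expansion (Poisson's equation)."""
    return t_start_k * (p_end_hpa / p_start_hpa) ** R_CP

t_end = expansion_temperature(283.15, 1013.0 + 220.0, 1013.0)
print(f"cooling: {t_end - 283.15:.1f} K")
```

The roughly 15 K of dry-adiabatic cooling comfortably drives the air past water saturation, which is what nucleates the droplet population on the seed aerosol.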
The mechanisms of transfer of crustal material from the subducting slab to the overlying mantle wedge are still debated. Mélange rocks, formed by mixing of sediments, oceanic crust, and ultramafics along the slab-mantle interface, are predicted to ascend as diapirs from the slab-top and transfer their compositional signatures to the source region of arc magmas. However, the compositions of melts that result from the interaction of mélanges with a peridotite wedge remain unknown. Here we present experimental evidence that melting of peridotite hybridized by mélanges produces melts that carry the major and trace element abundances observed in natural arc magmas. We propose that differences in nature and relative contributions of mélanges hybridizing the mantle produce a range of primary arc magmas, from tholeiitic to calc-alkaline. Thus, assimilation of mélanges into the wedge may play a key role in transferring subduction signatures from the slab to the source of arc magmas.
The cheilostome bryozoan Anoteropora latirostris, a colonial marine invertebrate, constructs its skeleton from calcite and aragonite. This study presents the first correlated multi-scale electron microscopy, micro-computed tomography, electron backscatter diffraction and NanoSIMS mapping of this species. We show that all primary, coarse-grained platy calcitic lateral walls are covered by fine-grained fibrous aragonite. Vertical lateral walls separating autozooid chambers have aragonite only on their distal side. This type of asymmetric mineralization of lateral walls results from the vertical arrangement of the zooids at the growth margins of the colony and represents a type of biomineralization previously unknown in cheilostome bryozoans. NanoSIMS mapping across the aragonite-calcite interface indicates an organic layer between the two mineral phases, likely representing an organic template for biomineralization of aragonite on the calcite layer. Analysis of crystallographic orientations shows a moderately strong crystallographic preferred orientation (CPO) for calcite (7.4 times random orientation) and an overall weaker CPO for aragonite (2.4 times random orientation) with a high degree of twinning (45%) of the aragonite grains. The calculated Young's modulus for the CPO map shows a weak mechanical direction perpendicular to the colony's upper surface, facilitating this organism's strategy of clonal reproduction by fragmentation along the vertical zooid walls.
Brachiopod shells are the most widely used geological archive for the reconstruction of the temperature and the oxygen isotope composition of Phanerozoic seawater. However, it is not conclusive whether brachiopods precipitate their shells in thermodynamic equilibrium. In this study, we investigated the potential impact of kinetic controls on the isotope composition of modern brachiopods by measuring the oxygen and clumped isotope compositions of their shells. Our results show that clumped and oxygen isotope compositions depart from thermodynamic equilibrium due to growth rate-induced kinetic effects. These departures are in line with incomplete hydration and hydroxylation of dissolved CO2. These findings imply that the determination of taxon-specific growth rates alongside clumped and bulk oxygen isotope analyses is essential to ensure accurate estimates of past ocean temperatures and seawater oxygen isotope compositions from brachiopods.
This study examines the urban heat island (UHI) of Brussels, for both current (2000–2009) and projected future (2060–2069) climate conditions, by employing very high resolution (250 m) modelling experiments using the urban boundary layer climate model UrbClim. Meteorological parameters that are related to the intensity of the UHI are identified, and it is investigated how these parameters and the magnitude of the UHI evolve for two plausible trajectories of future climate conditions. UHI intensity is found to be strongly correlated to the inversion strength in the lowest 100 m of the atmosphere. The results for the future scenarios indicate that the magnitude of the UHI is expected to decrease slightly due to global warming. This can be attributed to the increased incoming longwave radiation caused by higher air temperature and humidity values. The presence of the UHI also has a significant impact on the frequency of extreme temperature events in the city area, both in present and future climates, and exacerbates the impact of climate change on the urban population, as the number of heat wave days in the city increases twice as fast as in the rural surroundings.
This study aims to assess the skill of regional climate models (RCMs) at reproducing the climatology of Mediterranean cyclones. Seven RCMs are considered, five of which were also coupled with an oceanic model. All simulations were forced at the lateral boundaries by the ERA-Interim reanalysis for a common 20-year period (1989–2008). Six different cyclone tracking methods have been applied to all twelve RCM simulations and to the ERA-Interim reanalysis in order to assess the RCMs from the perspective of different cyclone definitions. All RCMs reproduce the main areas of high cyclone occurrence in the region south of the Alps, in the Adriatic, Ionian and Aegean Seas, as well as in the areas close to Cyprus and to the Atlas Mountains. The RCMs tend to underestimate intense cyclone occurrences over the Mediterranean Sea and reproduce 24–40 % of these systems, as identified in the reanalysis. The use of grid nudging in one of the RCMs is shown to be beneficial, reproducing about 60 % of the intense cyclones and keeping better track of the seasonal cycle of intense cyclogenesis. Finally, the most intense cyclones tend to be similarly reproduced in coupled and uncoupled model simulations, suggesting that modeling atmosphere–ocean coupled processes has only a weak impact on the climatology and intensity of Mediterranean cyclones.
This paper is a contribution to the special issue on Med-CORDEX, an international coordinated initiative dedicated to the multi-component regional climate modelling (atmosphere, ocean, land surface, river) of the Mediterranean under the umbrella of HyMeX, CORDEX, and Med-CLIVAR and coordinated by Samuel Somot, Paolo Ruti, Erika Coppola, Gianmaria Sannino, Bodo Ahrens, and Gabriel Jordà.
In this study we show how size-resolved measurements of aerosol particles and cloud condensation nuclei (CCN) can be used to characterize the supersaturation of water vapor in a cloud. The method was developed and applied for the investigation of a cloud event during the ACRIDICON-Zugspitze campaign (17 September to 4 October 2012) at the high-alpine research station Schneefernerhaus (German Alps, 2650 m a.s.l.). Number size distributions of total and interstitial aerosol particles were measured with a scanning mobility particle sizer (SMPS), and size-resolved CCN efficiency spectra were recorded with a CCN counter system operated at different supersaturation levels.
During the evolution of a cloud, aerosol particles are exposed to different supersaturation levels. We outline and compare different estimates for the lower and upper bounds (Slow, Shigh) and the average value (Savg) of peak supersaturation encountered by the particles in the cloud. For the investigated cloud event, we derived Slow ≈ 0.19–0.25%, Shigh ≈ 0.90–1.64% and Savg ≈ 0.38–0.84%. Estimates of Slow, Shigh and Savg based on aerosol size distribution data require specific knowledge or assumptions of aerosol hygroscopicity, which are not required for the derivation of Slow and Savg from the size-resolved CCN efficiency spectra.
In this study we show how size-resolved measurements of aerosol particles and cloud condensation nuclei (CCN) can be used to characterize the supersaturation of water vapor in a cloud. The method was developed and applied during the ACRIDICON-Zugspitze campaign (17 September to 4 October 2012) at the high-Alpine research station Schneefernerhaus (German Alps, 2650 m a.s.l.). Number size distributions of total and interstitial aerosol particles were measured with a scanning mobility particle sizer (SMPS), and size-resolved CCN efficiency spectra were recorded with a CCN counter system operated at different supersaturation levels.
During the evolution of a cloud, aerosol particles are exposed to different supersaturation levels. We outline and compare different estimates for the lower and upper bounds (Slow, Shigh) and the average value (Savg) of peak supersaturation encountered by the particles in the cloud. A major advantage of the derivation of Slow and Savg from size-resolved CCN efficiency spectra is that it does not require the specific knowledge or assumptions about aerosol hygroscopicity that are needed to derive estimates of Slow, Shigh, and Savg from aerosol size distribution data. For the investigated cloud event, we derived Slow ≈ 0.07–0.25%, Shigh ≈ 0.86–1.31% and Savg ≈ 0.42–0.68%.
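The hygroscopicity dependence noted above can be made concrete with standard κ-Köhler theory: for a given dry diameter, the critical (peak) supersaturation a particle must encounter to activate scales as κ^(-1/2). The sketch below is illustrative only; the parameter values (surface tension, temperature) and the 100 nm example diameter are assumptions, not values taken from the study.

```python
import math

def critical_supersaturation(d_dry_m, kappa, T=275.0):
    """Critical supersaturation (in %) from kappa-Koehler theory:
    S_c = sqrt(4 A^3 / (27 kappa D^3)), with A = 4 sigma M_w / (R T rho_w)."""
    sigma = 0.072   # surface tension of water, N/m (assumed)
    M_w = 0.018     # molar mass of water, kg/mol
    rho_w = 1000.0  # density of water, kg/m^3
    R = 8.314       # gas constant, J/(mol K)
    A = 4.0 * sigma * M_w / (R * T * rho_w)
    return 100.0 * math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry_m**3))

# e.g. a 100 nm particle with kappa = 0.3 activates near 0.24 % supersaturation
print(round(critical_supersaturation(100e-9, 0.3), 2))
```

Inverting this relation is how a measured activation diameter, together with an assumed κ, yields a supersaturation estimate; the size-resolved CCN efficiency spectra avoid exactly this κ assumption.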
Assessment of ecologically relevant hydrological change in China due to water use and reservoirs
(2008)
As China’s economy booms, increasing water use has significantly affected hydro-geomorphic processes and thus the ecology of surface waters. A large variety of hydrological changes arising from human activities such as reservoir construction and management, water abstraction, water diversion and agricultural land expansion have been sustained throughout China. Using the global scale hydrological and water use model WaterGAP, natural and anthropogenically altered flow conditions are calculated, taking into account flow alterations due to human water consumption and 580 large reservoirs. The impacts resulting from water consumption and reservoirs have been analyzed separately. A modified “Indicators of Hydrologic Alteration” approach is used to describe the human pressures on aquatic ecosystems due to anthropogenic alterations in river flow regimes. The changes in long-term average river discharge, average monthly mean discharge and coefficients of variation of monthly river discharges under natural and impacted conditions are compared and analyzed. The indicators show very significant alterations of natural river flow regimes in a large part of northern China and only minor alterations in most of southern China. The detected large alterations in long-term average river discharge, the seasonality of flows and the inter-annual variability in the northern half of China are very likely to have caused significant ecological impacts.
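The comparison described above rests on two indicator types: changes in long-term average discharge and changes in the variability (coefficient of variation) of monthly flows between natural and impacted conditions. A toy sketch of these two indicators, with invented monthly discharge values standing in for WaterGAP output:

```python
import statistics

def cv(series):
    """Coefficient of variation of a discharge series."""
    return statistics.pstdev(series) / statistics.mean(series)

def alteration_indicators(natural, impacted):
    """Relative change (%) in long-term mean discharge and in the CV of
    monthly flows, the two indicator types compared in the study."""
    d_mean = 100.0 * (statistics.mean(impacted) - statistics.mean(natural)) \
             / statistics.mean(natural)
    d_cv = 100.0 * (cv(impacted) - cv(natural)) / cv(natural)
    return d_mean, d_cv

# toy monthly discharges (m^3/s): a reservoir flattens the seasonal cycle
# and consumptive water use lowers the mean
nat = [20, 25, 60, 120, 180, 150, 90, 60, 45, 35, 28, 22]
imp = [40, 42, 55, 80, 100, 95, 75, 60, 50, 45, 42, 40]
dmean, dcv = alteration_indicators(nat, imp)
print(round(dmean, 1), round(dcv, 1))  # both negative: less water, less seasonality
```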
It is common practice to use a 30-year period to derive climatological values, as recommended by the World Meteorological Organization. However, this convention relies on important assumptions whose validity can be examined by deriving the uncertainty inherent in using a limited time period for climatological values. In this study a new method aimed at deriving this uncertainty has been developed, with an application to precipitation for one station in Europe (Westdorpe) and one in Africa (Gulu). The weather generator framework is used to produce synthetic daily precipitation time series that can also be regarded as alternative climate realizations. The framework consists of an improved Markov model, which shows good performance in reproducing the 5-day precipitation variability. Sub-seasonal, seasonal and inter-annual signals are introduced into the weather generator framework by including covariates. These covariates are derived from an empirical mode decomposition analysis with an improved stability and significance assessment. Introducing covariates was found to substantially improve the monthly precipitation variability for Gulu. From the weather generator, 1,000 synthetic time series were produced. The divergence between these time series demonstrates an uncertainty, inherent in using a 30-year period for mean precipitation, of 11 % for Westdorpe and 15 % for Gulu. The uncertainty in 10-year precipitation return levels was found to be 37 % for both sites.
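The occurrence component of such a precipitation weather generator is classically a first-order two-state Markov chain for wet/dry days, whose transition probabilities can then be modulated by covariates. A minimal sketch; the transition probabilities here are illustrative, not the fitted Westdorpe or Gulu values:

```python
import random

def simulate_occurrence(n_days, p_wd, p_ww, seed=42):
    """First-order two-state Markov chain for daily precipitation occurrence.
    p_wd = P(wet | previous day dry), p_ww = P(wet | previous day wet);
    p_ww > p_wd encodes the persistence of wet spells."""
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(n_days):
        p = p_ww if wet else p_wd
        wet = rng.random() < p
        series.append(wet)
    return series

series = simulate_occurrence(100_000, p_wd=0.25, p_ww=0.60)
wet_frequency = sum(series) / len(series)
# stationary wet-day frequency: p_wd / (1 + p_wd - p_ww) = 0.25 / 0.65 ≈ 0.385
```

Repeating such a simulation many times (here 1,000 in the study) and comparing 30-year statistics across the realizations is what yields the uncertainty estimates quoted above.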
A realistic simulation of the atmospheric boundary layer (ABL) depends on an accurate representation of the land–atmosphere coupling. Land surface temperature (LST) plays an important role in this context, and the assimilation of LST can lead to improved estimates of the boundary layer and its processes. We assimilated synthetic satellite LST retrievals, derived from a nature run taken as truth, into a fully coupled, state-of-the-art land–atmosphere numerical weather prediction model. A local ensemble transform Kalman filter was used as the assimilation system, and the control vector was augmented by the soil temperature and humidity. To evaluate the concept of the augmented control vector, two-day case studies with different control vector settings were conducted for clear-sky periods in March and August 2017. These experiments with hourly LST assimilation were validated against the nature run, and overall the RMSE of atmospheric and soil temperature in the first guess (and analysis) was reduced. The temperature estimate of the ABL was particularly improved during daytime, as was the estimate of the soil temperature during the whole diurnal cycle. The best impact of LST assimilation on the soil and the ABL was achieved with the augmented control vector. Through the coupling between the soil and the atmosphere, the assimilation of LST can have a positive impact on the temperature forecast of the ABL even after 15 hr because of the memory of the soil. These encouraging results motivate further work towards the assimilation of real satellite LST retrievals.
It has been demonstrated in climate models that both the Indian and East Asian summer monsoons (ISM and EASM) are strengthened by the uplift of the entire Asian orography or Tibetan Plateau (TP) (i.e. bulk mountain uplift). Such an effect is widely perceived as the major mechanism contributing to the evolution of Asian summer monsoons in the Neogene. However, geological evidence suggests a more diachronous growth of the Asian orography (i.e. regional mountain uplift) than bulk mountain uplift. This demands a re-evaluation of the relation between mountain uplift and the Asian monsoon in the geological periods. In this study, sensitivity experiments considering the diachronous growth of different parts of the Asian orography are performed using the regional climate model COSMO-CLM to investigate their effects on the Asian summer monsoons. The results show that, different from the bulk mountain uplift, the regional mountain uplift can lead to an asynchronous development of the ISM and EASM. While the ISM is primarily intensified by the thermal insulation (mechanical blocking) effect of the southern TP (Zagros Mountains), the EASM is mainly enhanced by the surface sensible heating of the central, northern and eastern TP. Such elevated surface heating can induce a low-level cyclonic anomaly around the TP that reduces the ISM by suppressing the lower tropospheric monsoon vorticity, but promotes the EASM by strengthening the warm advection from the south of the TP that sustains the monsoon convection. Our findings provide new insights into the evolution of the Asian summer monsoons and their interaction with the tectonic changes in the Neogene.
We sampled atmospheric ice nuclei (IN) and aerosol in Germany and in Israel during spring 2010. IN were analyzed by the static vapor diffusion chamber FRIDGE, as well as by electron microscopy. During the Eyjafjallajökull volcanic eruption of April 2010 we measured the highest ice nucleus number concentrations (>600 L−1) in our record of two years of daily IN measurements in central Germany. Even in Israel, located about 5000 km away from Iceland, IN were as high as otherwise only during desert dust storms. The fraction of aerosol activated as ice nuclei at −18 °C and 119 % RHice, and the corresponding area density of ice-active sites per aerosol surface, were considerably higher than what we observed during an intense outbreak of Saharan dust over Europe in May 2008.
Pure volcanic ash accounts for at least 53–68 % of the 239 individual ice nucleating particles that we collected in aerosol samples from the event and analyzed by electron microscopy. Volcanic ash samples that had been collected close to the eruption site were aerosolized in the laboratory and measured by FRIDGE. Our analysis confirms the relatively poor ice nucleating efficiency (at −18 °C and 119 % ice saturation) of such "fresh" volcanic ash, as recently reported by other groups. We find that both the fraction of the aerosol that is active as ice nuclei and the density of ice-active sites on the aerosol surface are three orders of magnitude larger in the samples collected from ambient air during the volcanic peaks than in the aerosolized samples of ash collected close to the eruption site. From this we conclude that the ice-nucleating properties of volcanic ash may be altered substantially by aging and processing during long-range transport in the atmosphere, and that global volcanism deserves further attention as a potential source of atmospheric ice nuclei.
Explosive volcanism affects weather and climate. Primary volcanic ash particles which act as ice nuclei (IN) can modify the phase and properties of cold tropospheric clouds. During the Eyjafjallajökull volcanic eruption we measured the highest ice nucleus number concentrations (>600 L−1) in our record of two years of daily IN measurements in central Germany. Even in Israel, located about 5000 km away from Iceland, IN were as high as otherwise only during desert dust storms. These measurements are the only ones available on the properties of IN in the Eyjafjallajökull plume. The measured high concentrations and high activation temperature (−8 °C) point to an important impact of volcanic ash on the microphysical and radiative properties of clouds through enhanced glaciation.
Atmospheric nanoaerosols have extensive effects on the Earth's climate and human health. This cumulative work focuses on the development and characterization of instrumentation for measuring various parameters of atmospheric nanoaerosols, and on its use to understand new particle formation from organic precursors. The principal research question is how the chemical composition of nanoaerosol particles can be measured and how atmospheric chemistry influences aerosol processes, especially new particle formation and growth. To this end, nanoaerosols are investigated from various angles: an instrument is developed to analyze nanoparticles, and both field and chamber studies are conducted.
The main project is the instrument development of the Thermal Desorption Differential Mobility Analyzer (TD-DMA, project 1, Wagner et al. (2018)). This instrument analyzes the chemical composition of small aerosol particles. By characterization and testing in chamber experiments, it is proven to be suitable for the analysis of freshly nucleated particles.
The second project (Wagner et al. (2017)) applies a broad spectrum of aerosol measurement instruments to characterize the aerosol particles produced by the blasting of a skyscraper. A comprehensive picture of the particle population emitted by the demolition is obtained.
Project 3 (Kürten et al. (2016)) is also an ambient aerosol measurement, focusing on new particle formation in a rural area in central Germany and on the ability of a negative nitrate CI-APi-TOF to detect various substances in the atmosphere. Project 4 (Heinritzi et al. (2016)) is a characterization of the negative nitrate CI-APi-TOF used in projects 1, 3, 5, 6, 7 and 8. The following projects focus on understanding new particle formation from atmospherically abundant organic precursors. Key instruments comprise the negative nitrate CI-APi-TOF for gas-phase measurements of the nucleating species, and various sizing and counting instruments for quantifying particle formation and growth. Project 5 (Kirkby et al. (2016)) shows that biogenic organic compounds formed from alpha-pinene can nucleate on their own, without the influence of e.g. sulfuric acid. Project 6 (Tröstl et al. (2016)) describes the subsequent growth of these particles. Project 7 (Stolzenburg et al. (2018)) covers the temperature dependence of this growth, and in project 8 (Heinritzi et al. (2018)) the suppressing influence of isoprene on new particle formation is assessed.
Atmospheric observation-based global SF6 emissions - comparison of top-down and bottom-up estimates
(2009)
Emissions of sulphur hexafluoride (SF6), one of the strongest greenhouse gases on a per-molecule basis, are targeted for collective reduction under the Kyoto Protocol. Because of its long atmospheric lifetime (≈3000 years), the accumulation of SF6 in the atmosphere is a direct measure of its global emissions. Examination of our extended data set of globally distributed high-precision SF6 observations shows an increase in SF6 abundance from near zero in the 1970s to a global mean of 6.7 ppt by the end of 2008. In-depth evaluation of our long-term data records shows that the global source of SF6 decreased after 1995, most likely due to SF6 emission reductions in industrialised countries, but increased again after 1998. Subtracting the emissions reported by Annex I countries to the United Nations Framework Convention on Climate Change (UNFCCC) from our observation-inferred SF6 source leaves a surprisingly large gap of more than 70–80 % of non-reported SF6 emissions in the last decade.
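The top-down logic above reduces to simple bookkeeping: with a ~3000-year lifetime, sinks are negligible on decadal scales, so global emissions roughly equal the atmospheric accumulation rate. A sketch of the unit conversion from a mixing-ratio growth rate to a mass flux; the total moles of air and the 0.29 ppt/yr example growth rate are illustrative assumptions, not numbers from the study:

```python
N_AIR = 1.77e20   # approximate total moles of air in the atmosphere
M_SF6 = 146.06    # molar mass of SF6, g/mol

def emissions_gg_per_yr(growth_ppt_per_yr):
    """Convert a global mean SF6 growth rate (ppt/yr) into emissions (Gg/yr),
    assuming no significant sinks over the averaging period."""
    moles_per_yr = growth_ppt_per_yr * 1e-12 * N_AIR
    return moles_per_yr * M_SF6 / 1e9   # g -> Gg

# an illustrative growth rate of 0.29 ppt/yr corresponds to ~7.5 Gg SF6/yr
print(round(emissions_gg_per_yr(0.29), 1))
```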
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods are therefore not suited to describing the nonlinearities of this system, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Following up on an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear backpropagation neural network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent concentrations, called greenhouse gases, GHG, and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN far exceeds that of a linear model; on the global scale it amounts to 84 % explained variance. Additionally, the results of the nonlinear algorithm are physically plausible in both amplitude and time structure. Nevertheless they cover a broad range; e.g. the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the period 1856-1998. The simulated amplitudes nonetheless lie within the discussed range (HOUGHTON et al., 2001), and the combined anthropogenic effect corresponds to the observed increase in temperature for the examined period. Furthermore, the BPN detects anthropogenically induced climate change at a high significance level.
The concept of neural networks can therefore be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
Supplementing a previous project (Schönwiese et al., 2005), this study carried out a further extreme-value statistical analysis. Based on daily climate data from Hesse and its surroundings (49°N to 52°N, 7°E to 11°E), namely temperature from 53 stations and precipitation from 84 stations, thresholds for extreme values were defined in order to test the number of exceedances of these thresholds for significant trends. For temperature, a systematic increase in hot days (maximum temperature above 30 °C) is found in August, whereas almost no significant increases were found in July and only isolated ones in June. Here, as for other temperature thresholds, significance decreases with increasing threshold height, which is caused by the rarer occurrence of particularly extreme events. Correspondingly, the number of frost and ice days (minimum and maximum temperature below 0 °C, respectively) has decreased significantly in winter and spring, most markedly for frost days in spring. For precipitation, the number of dry days has increased in summer, again especially in August. Extremely high precipitation totals, by contrast, have become rarer in this season but more frequent in the other seasons. March in particular stands out with widespread, highly significant increases in days with heavy precipitation. The persistence of particularly warm or cold weather spells has not changed significantly in most months. However, a tendency towards shorter, relatively uniform weather spells can be observed in February and March, and towards longer ones in October and November. These results are probably not very robust, though, since shortening the time window of the autocorrelation function partly changes the significances considerably (especially in April).
For the trends in the number of dry periods, a positive trend is seen in summer; they are thus increasing. This holds for both the 7-day and the 11-day dry periods. The other seasons show only weak or negative trends for the 7-day dry periods. For the 11-day dry periods this applies only to spring and autumn; in winter the trends are predominantly positive in the north and negative in the south. The length of the longest dry periods increases in summer but decreases in spring and in the year as a whole. In autumn the picture is mixed; this may, however, also be because long summer dry periods extend into autumn and are then counted there. A further aspect is the analysis of the number and length of particular weather spells (cluster analysis), defined by relatively high or low temperature or by relatively little or much precipitation. For example, days with less than 1 mm of precipitation can be classed as dry clusters. In summer a trend towards more dry clusters is seen, in agreement with the results above, but in the other seasons and in the year as a whole there is a trend towards fewer dry clusters. Within summer this trend is strongest in August, and in the other seasons in March, October and December. For clusters of wet events, i.e. days with relatively much precipitation, the picture is reversed: in summer their number decreases, otherwise it increases. The strongest trends are again found in August (decrease) and in March, October and December (increase in each case). All these trends generally become weaker the higher the chosen precipitation threshold. For the temperature data, the trends in frost and ice days are predominantly positive only in November, but almost exclusively negative in all winter months (December, January and February).
This reflects the trend towards higher temperatures. For warm clusters, the trends change with the threshold height. At the 25 °C threshold, July and especially August show positive trends. At the 30 °C threshold the August trend remains positive, while the July trend becomes negative. At the 35 °C threshold the August trends become markedly weaker, while the July trends remain negative, albeit less pronounced. The trends in June, by contrast, are weak overall. Regarding significance, trends at high thresholds in particular are less significant. This applies to the high precipitation thresholds (20 mm, 30 mm, 95th percentile, 99th percentile) as well as to the high temperature thresholds (30 °C, 35 °C). Furthermore, significance is low when the number of clusters in the period considered is small: in months and seasons with only few clusters, the trend in the number of clusters is mostly not significant. Overall, for precipitation, the lower threshold of 1 mm and the upper thresholds of 10 mm and the 90th percentile show the most significant trends. For the temperature data this is the case for frost days in general, for ice days in January and February, and for summer clusters with a daily maximum temperature above 25 °C (partly with over 95 % or even 99 % significance).
To investigate the depot function of swellable clay minerals for organic environmental chemicals and the possible displacement of these chemicals by biogenic surfactants, kinetic studies were carried out by means of batch experiments. First, the adsorption and desorption behaviour of selected environmental chemicals on mineral solid phases was examined, followed by the displacement of these chemicals by biogenic surfactants. The environmental chemicals used in the experiments were di-(n-butyl) phthalate (DBP) and di-(2-ethylhexyl) phthalate (DEHP), which are used on an industrial scale mainly as plasticizers in plastics, and five selected polycyclic aromatic hydrocarbons (PAHs), which are formed in pyrolytic processes and during the incomplete combustion of organic material. In the test series, a smectite-rich bentonite, quartz sand, mixtures of these two materials with varying weight fractions of the bentonite and sand phases, and sea sand served as adsorbent media for the environmental chemicals. These variations were intended to illustrate the different behaviour of the various solid phases with respect to the three processes investigated (adsorption, desorption and exchange). Analyses of the bentonite used showed that its main constituent was a calcium montmorillonite. Montmorillonite is a swellable, dioctahedral clay mineral of the smectite group. The swelling capacity of this smectite was established in swelling tests with ethylene glycol and glycerol using X-ray diffractometry. The chemical composition of the mineral was analysed by X-ray fluorescence measurements. The Greene-Kelly test identified montmorillonite as the smectitic component of the bentonite. In each test series, three processes were investigated successively with each sample in the laboratory: 1. adsorption of environmental chemicals (phthalates and PAHs) onto sand samples with different clay contents and onto pure clay samples; 2. desorption of the adsorbed environmental chemicals from the sand/clay mixtures and clay samples in four steps; 3. exchange of these chemicals from the sand/clay mixtures and clay samples against biogenic surfactants.
In the first step of the batch experiments, the two phthalates or the PAHs (naphthalene, acenaphthene, fluorene, phenanthrene and fluoranthene) were adsorbed from an aqueous solution onto the mineral solid phases. The phthalates were used in a 1:1 ratio in the experiments, the five PAHs as a mixture or also individually. For the PAH adsorption, a water-acetone mixture was also used, since this considerably improved their solubility and the kinetic series experiments proceeded much more uniformly with respect to the establishment of equilibrium. The samples were shaken in an overhead mixer for 20 hours until equilibrium was reached. The solid phases were then separated from the aqueous phases and used further to determine the establishment of the desorption equilibrium. The aqueous phases were extracted with organic solvents and their content of environmental chemicals quantified by gas chromatography. The remaining solid phases were each shaken four times with fresh distilled water for 20 hours to determine the desorption equilibrium, the separated aqueous phases being analysed for their organic content as described above. These four desorption steps were followed by the displacement experiment of a test series: saponified, long-chain biogenic surfactants (alcoholates and carboxylic acid salts with an even number of carbon atoms) were added to each sample, and each solid phase was shaken again with fresh water in the overhead mixer.
This step was intended to check whether the phthalates and PAHs remaining in the solid phases were recovered in the aqueous phase to a greater extent after the addition of biogenic surfactants than expected from the respective desorption equilibrium. From the results, adsorption isotherms (for phthalates only) could be recorded and statements made on the establishment of the desorption equilibrium or its disturbance after exchange experiments. Evaluation of the adsorption experiments showed that solid phases containing bentonite are capable of adsorbing a higher proportion of phthalates and PAHs than pure sand samples. At low phthalate concentrations, DEHP was adsorbed better than DBP owing to a stronger affinity to the solid phase. As the amounts of phthalate added increased, DBP was adsorbed to a greater extent than DEHP. This was made possible by a better incorporation of the DBP molecules into the intracrystalline interlayers of the montmorillonite mineral (intercalation). X-ray analysis showed a markedly increased interlayer spacing in the montmorillonite compared with its original state (up to 18 Å versus 15.3 Å). The desorption isotherms frequently showed irregular behaviour for solid phases containing quartz sand: in the second and third desorption steps an unexpectedly high amount of phthalates was often found in the aqueous solution. Pure bentonite samples, by contrast, showed a uniform decrease in phthalate concentration after each desorption step. The bentonite used was able to retain phthalates against desorption more strongly than quartz sand, and desorption equilibrium was reached faster with pure bentonite than with sand samples or sand-bentonite mixtures. In exchange experiments in which the initial amount of phthalates was below 1 mg, no displacement processes were observed. When the phthalate amounts increased (up to about 200 mg), displacement of the phthalates by biogenic surfactants occurred owing to the greater surface coverage in the montmorillonite.
After the exchange experiment, extraction of the aqueous solution yielded a higher amount of phthalates than expected from the desorption experiments. Overall, more DBP than DEHP was found in the aqueous solution after the exchange experiments. Since DBP was incorporated into the interlayers of the montmorillonite better than DEHP, this finding could also be explained by the biogenic surfactants displacing the phthalates from the intracrystalline interlayers. For the PAHs, displacement processes were observed only in the case of phenanthrene. For the other PAHs used in the experiments (mainly naphthalene, acenaphthene and fluorene), the vapour pressure was evidently so high that not enough organic material remained adsorbed in the soil sample before the exchange experiment. In parallel experiments with pure quartz sand and with sea sand as the solid phase, by contrast, no substantial disturbance of the desorption equilibrium of the order of that in the bentonite-containing samples was observed after the displacement experiment, for either phthalates or PAHs. This indicates that displacement processes take place preferentially on the surfaces of clay minerals. Overall, this work showed that equilibria of environmental chemicals on clay minerals can be disturbed by biogenic surfactants: exposure to biogenic surfactants leads to enhanced desorption of the environmental chemicals from the clay minerals.
Flusssysteme im mediterranen Raum reagieren besonders sensitiv auf Veränderungen von Umweltbedingungen, z.B. durch Neotektonik, Klimaänderungen und Landnutzung. Geowissenschaftler der Goethe-Universität Frankfurt untersuchen in diesem Zusammenhang das Einzugsgebiet des Rio Palancia (Spanien), um über die Erstellung einer Sediment-Massenbilanzierung die Entwicklungsgeschichte des Systems zu erforschen. Zur Identifizierung und Quantifizierung verschiedener Sediment-Ablagerungstypen wurde das Georadarverfahren (GPR) eingesetzt. Ziel dieser Arbeit ist es, am Beispiel fluvialer Lockersedimente das Zustandekommen von Radargrammen noch besser zu verstehen und möglichst viel Information über den Untergrund aus einem Radargramm zu extrahieren. An 30 Standorten wurden GPR-Messungen durchgeführt und mit Geoelektrik und Rammkernsondierungen kombiniert. Die Einführung einer Bearbeitungs- und Auswertesystematik gewährleistet die Vergleichbarkeit von Radardaten unterschiedlicher Standorte. Als Besonderheit werden die Radargramme jeweils auf zwei verschiedene Arten bearbeitet und dargestellt, um sowohl Strukturen herauszuarbeiten als auch die – zumindest relative – Amplitudencharakteristik zu erhalten. Erst dadurch wird eine Auswertung mithilfe der erweiterten Radarstratigraphie-Methode möglich. Diese setzt sich aus der klassischen Radarstratigraphie und der neu entwickelten Reflexionsanalyse zusammen. Dabei werden systematisch Radar-Schichtflächen, Radareinheiten und Radarfazies ermittelt und anschließend die Amplitudengröße, die Polarität und die Breite der Reflexionen betrachtet. Die Radarstratigraphie liefert objektive Erkenntnisse über Form und Verlauf von Untergrundstrukturen, während mithilfe der Reflexionsanalyse Aussagen zu relativen Änderungen von Wassergehalt, Korngrößenverteilung und elektrischer Leitfähigkeit möglich sind. Mithilfe der Radarstratigraphie wurde die Radarantwort verschiedener Sediment-Ablagerungstypen im Untersuchungsgebiet verglichen. 
The radargrams show different compositions of radar facies; distinguishing and spatially delimiting different deposit types with GPR is therefore feasible. The permittivity of the medium, together with its electrical conductivity, determines the velocity and attenuation of the electromagnetic wave as well as the reflection coefficients. To understand in detail how radargrams come about, it is necessary to know the dielectric coefficients (DK) of the investigated sediments at the time of the measurement and to understand how the DK depends on petrophysical parameters. Samples were therefore taken from the percussion cores. In the laboratory, the real and imaginary parts of the DK in the radar frequency range (focused on 200 MHz) were determined as a function of water content, dry density, grain-size distribution and carbonate content using the parallel-plate capacitor method. The DK depends primarily on the water content. A water-content/DK relationship characteristic of the sediments in the study area could be established. The resulting curve is shifted relative to corresponding relationships found in the literature, presumably because of the high carbonate contents of the samples. For dry sediments, a correlation of the DK with dry density was found. When determining the absorption coefficients, it was striking that samples with a high clay fraction can show extraordinarily high attenuation coefficients even at low water contents. The characteristic water-content/DK relationship was used for modelling radar data, which were then compared with measured data.
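The dependence of wave velocity and reflection strength on the permittivity, and the kind of generic water-content/DK relation the thesis compares its site-specific curve against, can be sketched as follows. This is a simplified low-loss approximation (conductive and magnetic effects neglected), and the Topp et al. (1980) polynomial is the standard literature relation for mineral soils, not the carbonate-shifted relation derived in the thesis.

```python
import math

C0 = 0.2998  # vacuum speed of light in m/ns


def radar_velocity(eps_r):
    """Radar wave velocity in a low-loss medium: v = c0 / sqrt(eps_r)."""
    return C0 / math.sqrt(eps_r)


def reflection_coefficient(eps_upper, eps_lower):
    """Normal-incidence amplitude reflection coefficient between two
    low-loss layers."""
    return ((math.sqrt(eps_upper) - math.sqrt(eps_lower))
            / (math.sqrt(eps_upper) + math.sqrt(eps_lower)))


def topp_water_content(eps_r):
    """Volumetric water content from the empirical Topp et al. (1980)
    polynomial for mineral soils (generic literature relation)."""
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3
```

A wet sand with eps_r = 16, for example, slows the wave to about a quarter of the vacuum speed, which is why water content dominates both velocity and reflectivity contrasts.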
By modelling a single radar trace, the particular characteristics of the corresponding measured trace could be explained; they arise from the influence of a thin layer whose thickness lies at the limit of the theoretical resolution for the radar frequency used. Based on the insights gained from applying the extended radar stratigraphy to a radargram over unconsolidated fluvial sediments, it was also possible to simulate a complete radargram. It reproduces the measured radargram in simplified form, but in good agreement. The GPR method proved very well suited to investigating, identifying and quantifying fluvial sediments in the Palancia catchment. The extended radar stratigraphy method developed in this doctoral thesis is a systematic and largely objective procedure for interpreting radargrams that should also be transferable to other study areas. Laboratory investigations quantified the influence of petrophysical parameters on the DK. The modelling linked the results of large-scale field measurements with those of small-scale laboratory measurements. Taken together, these findings contribute to a better understanding of radargrams.
The Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, 2001a, b) confirms the human influence on the global climate and warns of a temperature rise and of precipitation changes over the next 100 years that could lastingly impair societal prosperity and the environment. Far-reaching consequences of climate change are assumed, ranging from sea-level rise and possible land degradation to the loss of animal and plant species, the scarcity of water resources, an increase in natural disasters such as floods and droughts, the spread of diseases, and negative effects on the population's food supply. Climate change is only one aspect of the broader 'global change', which encompasses a multitude of anthropogenically caused environmental changes. Demographic and socio-economic developments as well as human-induced land-use changes, for example, are also expected to have a considerable effect on the future state of the global environment. Among the gravest consequences of global change is the alteration of the spatial and temporal distribution of local and regional water resources. Strategies must therefore be developed to protect both the population and the environment from the possible negative effects of raised or lowered river levels, or to prepare them for a change in the available water volumes. Developing such strategies in turn requires scientific scenarios and model calculations with which future hydrological conditions can be estimated. Numerous such scenario analyses have already been carried out to investigate the influence of climate change and global change on water availability and on the hydrological discharge regime.
Since river basins are a natural and appropriate unit of analysis for this problem, most of these studies concentrate on medium-sized to large basins or on particular regions of connected river systems. In Europe there are examples from the early 1990s, when the results of the first climate models became available (e.g. Ott et al., 1991: for the Moselle; Kwadijk and van Deursen, 1993: Rhine; Vehviläinen and Huttunen, 1994: Vuoksi; Broadhurst and Naden, 1996: Severn; Bergström, 1996: Baltic Sea drainage basin). For these studies, hydrological models of the respective basin were developed and the influence of climate change on discharge was determined. Krahe and Grabs (1996) developed a water balance model with a resolution of 0.5° x 0.5° for the whole of Central Europe and validated it against discharge data of the Rhine, the Weser, the Ems, the Elbe and the German part of the Danube. Arnell (1994, 1999) and Arnell et al. (2000) likewise investigated the effects of climate change on European water resources using grid-based model approaches. Finally, Stanners and Bourdeau (1995), EEA (1999), Parry (2000), and, at the global level, WBGU (1999) and IPCC (1992, 2001a, b) presented, in more general and policy-oriented studies, the current state and possible future developments of the environment in Europe and worldwide, including various aspects of continental water resources and hydrology. Compared with the numerous basin-oriented analyses and their steadily rising scientific ambition, extending to highly detailed research questions, the regional or global approaches are rather rare and usually remain relatively unspecific in their conclusions.
Moreover, the effect of water use, which can contribute considerably to changes in future water resources and discharge volumes, is in most cases not taken into account because the corresponding data are lacking. In view of these shortcomings, the EuroWasser project was initiated in 1999 at the Center for Environmental Systems Research of the University of Kassel; the present dissertation is based on its implementation. Using an integrated modelling approach, EuroWasser investigated the consequences of climate change and socio-economic changes for natural water availability and water use at the pan-European scale (see the final report, Lehner et al., 2001). The project seeks to answer three questions that are critical from the perspectives of society, the economy and the environment: (1) How high is current water stress in different regions of Europe, and what future changes are to be expected? (2) How will global change affect Europe's hydropower potential? And (3) in which "critical regions" of Europe must the flood and drought hazard be expected to increase in future, based on the results of different global-change scenarios, and of what magnitude are these changes? ...
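The water-stress indicator behind question (1) is, in global assessments, commonly computed as the ratio of withdrawals to renewable availability and classified with 0.2/0.4 thresholds. The sketch below uses these widely cited thresholds; it is not claimed to be EuroWasser's exact scheme, and the example figures are purely illustrative.

```python
def water_stress_ratio(withdrawals, availability):
    """Water stress indicator: annual withdrawals divided by annual
    renewable water availability (same units, e.g. km^3/yr)."""
    return withdrawals / availability


def stress_class(ratio):
    """Common classification: < 0.2 low, 0.2-0.4 medium, >= 0.4 severe."""
    if ratio < 0.2:
        return "low"
    if ratio < 0.4:
        return "medium"
    return "severe"
```

Under a scenario, both the numerator (through socio-economic change) and the denominator (through climate change) shift, which is why an integrated approach is needed.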
This thesis investigates the directional dependence of seismic velocities in the Earth's mantle beneath Germany and adjacent areas by analysing the teleseismic core phase SKS for birefringence (shear-wave splitting). The anisotropy is described by the splitting parameters Φ and δt and allows inferences about geodynamic processes.
Recordings of the German Regional Seismic Network (GRSN) and associated stations from the period 1993 to 2009 are examined. For three stations of the Gräfenberg array (GRF array), waveforms are available from 1976 onwards, providing a data set that is unique worldwide.
Thanks to the continuous expansion of the seismological networks and the long observation period, more than 3,000 seismograms can be evaluated. The main part of this work therefore consists of developing an automatic method for SKS splitting analysis: ADORE ("Automatische Bestimmung von DOppelbrechungsparametern in REgionalseismischen Netzwerken", automatic determination of birefringence parameters in regional seismic networks). For regional networks such as the GRSN, ADORE guarantees an objective determination of the splitting parameters. First, the seismological network is treated as a seismic array so that the onset of the SKS phase can be picked without manual intervention by means of a frequency-wavenumber analysis. The splitting parameters are then computed by an inversion that minimises the energy on the transverse component. The optimal window around the SKS onset is positioned automatically; for each event-station combination, 3,600 individual inversions are carried out for this purpose.
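The core of such an inversion, minimising transverse-component energy over trial fast-axis azimuths and delay times in the style of Silver & Chan, can be sketched as follows. This is a deliberately coarse, simplified illustration (5-degree grid, circular sample shifts, fixed window), not ADORE itself.

```python
import numpy as np

def splitting_grid_search(north, east, dt_sample, baz_deg, max_delay_s=3.0):
    """Grid search over fast-axis azimuth phi and delay time dt that
    minimises the energy left on the transverse component after the
    trial splitting is undone (Silver & Chan-style, simplified)."""
    baz = np.radians(baz_deg)
    n_shift = int(round(max_delay_s / dt_sample))
    best_phi, best_dt, best_e = 0, 0.0, np.inf
    for phi_deg in range(0, 180, 5):            # coarse 5-degree grid
        phi = np.radians(phi_deg)
        # rotate N/E into the trial fast/slow coordinate system
        fast = np.cos(phi) * north + np.sin(phi) * east
        slow = -np.sin(phi) * north + np.cos(phi) * east
        for k in range(n_shift + 1):
            slow_c = np.roll(slow, -k)          # advance slow by trial delay
            # rotate back to N/E with the corrected slow component
            n_c = np.cos(phi) * fast - np.sin(phi) * slow_c
            e_c = np.sin(phi) * fast + np.cos(phi) * slow_c
            # energy on the transverse component for this backazimuth
            e_t = np.sum((-np.sin(baz) * n_c + np.cos(baz) * e_c) ** 2)
            if e_t < best_e:
                best_phi, best_dt, best_e = phi_deg, k * dt_sample, e_t
    return best_phi, best_dt
```

Repeating this search for many trial window positions is what produces the thousands of single inversions per event-station pair mentioned above.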
To complete this large number of computations in acceptable time, ADORE exploits modern computer architectures, distributing the calculations across several computers in the local network and thereby achieving a speed-up by a factor of 60.
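Because the single inversions are independent, the distribution pattern is embarrassingly parallel. On one machine it can be sketched with a process pool, as a stand-in for the distribution across networked computers described above; `one_inversion` here is a hypothetical placeholder, not an ADORE function.

```python
from multiprocessing import Pool

def one_inversion(task):
    """Placeholder for one independent splitting inversion; in practice
    each task would encode a trial window for an event-station pair."""
    return task * task  # dummy workload standing in for the inversion

def run_all(n_tasks, workers=4):
    """Map independent inversions onto a pool of worker processes."""
    with Pool(workers) as pool:
        return pool.map(one_inversion, range(n_tasks))
```

The achievable speed-up then scales with the number of workers until I/O or the per-task overhead dominates.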
The analysis of the entire data set yields the following results: shear-wave splitting was detected at all analysed stations, so the subsurface beneath every station is anisotropic. For 240 earthquakes, a total of 494 parameter pairs of the highest quality can be determined.
Under the assumption of a single homogeneous, non-dipping anisotropic layer beneath each station, the individual measurements per station can be averaged. Regions with similar characteristics are then readily identified: NW-SE directions prevail in northern Germany, W-E directions in central Germany, and SW-NE directions in the south.
The delay times lie between 1.0 s (station Taunus) and 2.2 s (Tannenbergsthal, TANN). Because these values are so high, the observed delays must originate in the mantle rather than in the crust. The source of the observed anisotropy is the preferred alignment of anisotropic crystals by flow of mantle material. Recent mantle flow is most likely at the base of the lithosphere; mountain-building processes, existing orogenic roots, and regional changes in lithospheric thickness create barriers for viscous mantle material.
The tectonic causes of the measured orientations are the Tornquist-Teisseyre Zone (TTZ) in the north, the Variscan orogeny in central Germany, and influences of the Alpine arc in the south. Exceptions are the stations Clausthal-Zellerfeld (CLZ), Rügen and the Black Forest Observatory (BFO). While the latter is presumably influenced by the spreading zone of the Upper Rhine Graben, the observations at CLZ appear to be shaped by the intrusion of the Brocken granite. Rügen lies in a transition zone between the Sorgenfrei-Tornquist Zone and the TTZ.
The large number of individual measurements makes it possible to examine complex models at some stations, including gradient models, a dipping layer, and two-layer models. A two-layer model can be constructed for six stations: BFO, Gräfenberg A1 (GRA1), Fürstenfeldbruck (FUR), Rüdersdorf (RUE), TANN and Unterbreitzbach (UBBA). The directions of the upper and lower layers can be interpreted for some of these stations: at BFO, the orientation of the lower layer is parallel to the preferred direction of the Variscan orogeny, while that of the upper layer is antiparallel to the spreading direction of the Rhine Graben. For station FUR, a superposition with the strike direction of the Alpine massif is observed. At GRA1, the lower layer is apparently influenced by recent or frozen-in anisotropy of the Bohemian Massif and the Eger rift system. A comparable effect, caused by the TTZ, is seen at station RUE.
ADORE was furthermore applied to a data set from the temporary RIFTLINK project.
Balloon-borne stratospheric BrO measurements: comparison with Envisat/SCIAMACHY BrO limb profiles
(2005)
For the first time, results of all four existing stratospheric BrO profiling instruments are presented and compared with reference to the SLIMCAT 3-dimensional chemical transport model (3-D CTM). Model calculations are used to infer a BrO profile validation set, measured by three different balloon sensors, for the new Envisat/SCIAMACHY (ENVIronment SATellite/SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) satellite instrument. The balloon observations include (a) balloon-borne in situ resonance fluorescence detection of BrO, (b) balloon-borne solar occultation DOAS measurements (Differential Optical Absorption Spectroscopy) of BrO in the UV, and (c) BrO profiling from the solar occultation SAOZ (Systeme d'Analyse par Observation Zenithale) balloon instrument. Since stratospheric BrO is subject to considerable diurnal variation and none of the measurements are performed close enough in time and space for a direct comparison, all balloon observations are considered with reference to outputs from the 3-D CTM. The referencing is performed by forward and backward air mass trajectory calculations to match the balloon with the satellite observations. The diurnal variation of BrO is accounted for by 1-D photochemical model calculations along the trajectories. The 1-D photochemical model is initialised with output data of the 3-D model, with additional constraints on the vertical transport, the total amount and the photochemistry of stratospheric bromine as given by the various balloon observations. Total [Bry]=(20.1±2.8)pptv, obtained from DOAS BrO observations at mid-latitudes in 2003, serves as an upper limit of the comparison. Most of the balloon observations agree with the photochemical model predictions within their given error estimates. First retrieval exercises of BrO limb profiling from the SCIAMACHY satellite instrument agree to within ±50% with the photochemically corrected balloon observations, and tend to show less agreement below 20 km.
Balloon-borne stratospheric BrO measurements: comparison with Envisat/SCIAMACHY BrO limb profiles
(2006)
In this study, station-based measurements of near-surface air temperature, precipitation and wind in Germany, and partly in Central Europe, for the period 1901 (or 1951) to 2000 were examined with respect to changes in their extreme behaviour. A two-method approach was chosen. Method I, the "temporally moving extreme value analysis", defines fixed thresholds for the (moving) period under consideration. Both empirical and theoretical frequency distributions were fitted to the time series of threshold exceedances and deficits, from which extreme-value quantities such as the waiting-time distribution, the return period and the risk were derived. Method II, the "structure-oriented time series decomposition", searches, based on an assumed theoretical distribution, for time-dependent parameters of the associated probability density; this yields time-dependent probabilities of exceeding or falling below thresholds. For monthly precipitation, the moving analysis shows a trend towards rarer extreme events for lower thresholds throughout Germany. For upper thresholds, by contrast, there is a trend towards rarer extreme events in the east and towards more frequent ones in the west. In the east, therefore, monthly precipitation totals have overall become less extreme, while in the west there is a trend towards higher monthly totals. For daily precipitation, for which only upper thresholds are meaningful, the trends resemble those of the monthly data in their regional distribution, but here they depend on the threshold: in northern Germany in particular, relatively low thresholds show a trend towards fewer exceedances, whereas high thresholds show a trend towards more frequent exceedances.
Overall, this amounts to a trend towards more extreme daily precipitation. For temperature, the moving analysis of the monthly data shows, with few exceptions, rarer crossings below lower thresholds (i.e. cold events); for the daily data, this behaviour is observed everywhere. For upper thresholds (i.e. heat events), there is generally a trend towards more frequent extreme events, although not everywhere: in all regions of Germany there are individual stations with a trend towards rarer exceedances of upper thresholds. The "structure-oriented time series decomposition" yielded the following results: the probability densities of the monthly and seasonal temperature data predominantly show positive trends in the mean, while their spread has changed only in exceptional cases. This led to, in part, markedly increased probabilities of particularly warm monthly and seasonal means in the 20th century (exception: autumn in the 1951-2000 data set). Correspondingly, the probabilities of extremely cold monthly and seasonal means decreased widely over this period. Likewise, the probabilities of high counts of particularly warm days (above the 10% percentile) increased from 1951 onwards in all seasons, especially for daily maximum temperatures in winter. This corresponds to an accelerated decrease in the frequency of particularly cold days in all seasons, especially in southern Germany. For precipitation, pronounced seasonal differences dominate: in winter there is both a trend towards higher monthly and seasonal totals and an increased variability, which widely leads to a marked increase in extremely high precipitation totals in this season.
In summer, by contrast, a trend towards reduced variability was found, so that extremely high monthly and seasonal precipitation totals have become rarer in large parts of Central Europe in this season. Correspondingly, days with high (above the 10% percentile) and extremely high (above the 5% and 2% percentiles) precipitation totals have widely decreased in summer, but increased in the other seasons, above all in winter and in western Germany. For wind, the results are rather inconsistent, making a general characterisation difficult. The frequencies of extreme daily wind maxima tend to increase in winter and decrease in summer. This does not hold, however, for stations near the coast, where negative trends of extreme daily maxima were often observed even in winter; in southern Germany, on the other hand, positive trends in the frequencies of extremely strong daily maxima are also found in summer. However, the data examined (wind maxima above Beaufort 8 and mean monthly wind speeds) are probably affected by large measurement errors and are only of limited suitability for the analyses carried out here. It has thus been shown that the extreme behaviour of climate elements such as temperature and precipitation was subject to very strong changes in the 20th century. These changes in the extremes in turn depend strongly on changes in the "mean" state of these climate elements, which can be described by statistical characteristics such as mean and standard deviation (or, more generally, location and spread).
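The extreme-value quantities named under Method I, return period and risk, follow directly from a per-interval exceedance probability. The sketch below assumes independent time steps, a simplification the empirical analysis in the study does not require.

```python
def return_period(p_exceed):
    """Mean waiting time (in time steps) between exceedances of a
    threshold with per-step exceedance probability p_exceed."""
    return 1.0 / p_exceed


def risk(p_exceed, n_steps):
    """Probability of at least one exceedance within n_steps,
    assuming independent steps: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_exceed) ** n_steps
```

A "100-year event" (p = 0.01 per year) thus still carries a roughly 63% risk of occurring at least once within any given 100-year window.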