Geosciences / Geography
Influence of sea surface roughness length parameterization on Mistral and Tramontane simulations
(2016)
The Mistral and Tramontane are mesoscale winds in southern France and over the western Mediterranean Sea. They are phenomena well suited for studying channeling effects as well as atmosphere–land/ocean processes. This sensitivity study deals with the influence of the sea surface roughness length parameterization on simulated Mistral and Tramontane wind speed and wind direction. Several simulations with the regional climate model COSMO-CLM were performed for the year 2005 with varying values of the Charnock parameter α. Over the western Mediterranean, the simulated wind speed and wind direction patterns on Mistral days change depending on the parameterization used. Higher values of α lead to lower simulated wind speeds. In areas where the simulated wind speed does not change much, a counterclockwise rotation of the simulated wind direction is observed.
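The Charnock relation that the varied parameter α enters can be sketched as follows. This is a minimal illustration of the general relation, with assumed, purely illustrative values for the friction velocity and α, not the settings tested in the study:

```python
# Charnock relation: sea surface roughness length grows with wind stress,
#   z0 = alpha * u_star**2 / g
# A larger alpha gives a rougher sea surface, hence more drag and lower
# simulated near-surface wind speeds, consistent with the abstract above.

G = 9.81  # gravitational acceleration (m s^-2)

def roughness_length(u_star, alpha):
    """Roughness length z0 (m) for friction velocity u_star (m s^-1)."""
    return alpha * u_star**2 / G

# Illustrative values: friction velocity 0.4 m s^-1, two Charnock parameters.
z0_low = roughness_length(0.4, 0.0123)   # a commonly quoted default value
z0_high = roughness_length(0.4, 0.0300)  # a larger, "rougher" assumption
```

Doubling α roughly doubles z0, which is the lever the sensitivity experiments pull on.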
Evaluation of radiation components in a global freshwater model with station-based observations
(2016)
In many hydrological models, the amount of evapotranspired water is calculated using the potential evapotranspiration (PET) approach. The main driver of several PET approaches is net radiation, whose downward components are usually obtained from meteorological input data, whereas the upward components are calculated by the model itself. Thus, uncertainties can be large due to both the input data and model assumptions. In this study, we compare the radiation components of the WaterGAP Global Hydrology Model, driven by two meteorological input datasets and two radiation setups from ERA-Interim reanalysis. We assess the performance with respect to monthly observations provided by the Baseline Surface Radiation Network (BSRN) and the Global Energy Balance Archive (GEBA). The assessment is done for the global land area and specifically for energy- and water-limited regions. The results indicate that there is no optimal radiation input across the model variants, but standard meteorological input datasets perform better for the key variable net radiation than radiation obtained directly from ERA-Interim reanalysis. The low number of observations for some radiation components, as well as the scale mismatch between station observations and the 0.5° × 0.5° grid cell size, limits the assessment.
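The split described above, downward components from forcing and upward components computed by the model, can be sketched as a simple net radiation balance. The albedo, emissivity and surface temperature below are illustrative assumptions, not WaterGAP's actual parameter choices:

```python
# Net radiation from its four components:
#   Rn = SWD*(1 - albedo) + LWD - eps*sigma*T^4
# SWD and LWD (downward shortwave/longwave) come from the meteorological
# forcing; the upward components are computed by the model itself.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def net_radiation(swd, lwd, albedo, emissivity, t_surf_k):
    """Net radiation (W m^-2); upward components computed from assumed surface state."""
    sw_net = swd * (1.0 - albedo)             # reflected fraction removed
    lw_up = emissivity * SIGMA * t_surf_k**4  # model-computed upward longwave
    return sw_net + (lwd - lw_up)

# Illustrative mid-latitude monthly means:
rn = net_radiation(swd=200.0, lwd=320.0, albedo=0.23, emissivity=0.98, t_surf_k=288.0)
```

Because Rn feeds PET, any bias in SWD or LWD propagates directly into simulated evapotranspiration, which is why the forcing comparison above matters.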
Mistral and tramontane wind speed and wind direction patterns in regional climate simulations
(2016)
The Mistral and Tramontane are important wind phenomena that occur over southern France and the northwestern Mediterranean Sea. Both winds travel through constricting valleys before flowing out towards the Mediterranean Sea. The Mistral and Tramontane are thus interesting phenomena, and represent an opportunity to study channeling effects, as well as the interactions between the atmosphere and land/ocean surfaces. This study investigates Mistral and Tramontane simulations using five regional climate models with grid spacing of about 50 km and smaller. All simulations are driven by ERA-Interim reanalysis data. Spatial patterns of surface wind, as well as wind development and error propagation along the wind tracks from inland France to offshore during Mistral and Tramontane events, are presented and discussed. To disentangle the results from large-scale error sources in Mistral and Tramontane simulations, only days with well simulated large-scale sea level pressure field patterns are evaluated. Comparisons with the observations show that the large-scale pressure patterns are well simulated by the considered models, but the orographic modifications to the wind systems are not well simulated by the coarse-grid simulations (with a grid spacing of about 50 km), and are reproduced slightly better by the higher resolution simulations. On days with Mistral and/or Tramontane events, most simulations underestimate (by 13 % on average) the wind speed over the Mediterranean Sea. This effect is strongest at the lateral borders of the main flow—the flow width is underestimated. All simulations of this study show a clockwise wind direction bias over the sea during Mistral and Tramontane events. Simulations with smaller grid spacing show smaller biases than their coarse-grid counterparts.
When assessing global water resources with hydrological models, it is essential to know about methodological uncertainties. The values of simulated water balance components may vary due to different spatial and temporal aggregations, reference periods, and applied climate forcings, as well as due to the consideration of human water use, or the lack thereof. We analyzed these variations over the period 1901–2010 by forcing the global hydrological model WaterGAP 2.2 (ISIMIP2a) with five state-of-the-art climate data sets, including a homogenized version of the concatenated WFD/WFDEI data set. Absolute values and temporal variations of global water balance components are strongly affected by the uncertainty in the climate forcing, and no temporal trends of the global water balance components are detected for the four homogeneous climate forcings considered (except for human water abstractions). The calibration of WaterGAP against observed long-term average river discharge Q significantly reduces the impact of climate forcing uncertainty on estimated Q and renewable water resources. For the homogeneous forcings, Q of the calibrated and non-calibrated regions of the globe varies by 1.6 and 18.5 %, respectively, for 1971–2000. On the continental scale, most differences for long-term average precipitation P and Q estimates occur in Africa and, due to snow undercatch of rain gauges, also in the data-rich continents Europe and North America. Variations of Q at the grid-cell scale are large, except in a few grid cells upstream and downstream of calibration stations, with an average variation of 37 and 74 % among the four homogeneous forcings in calibrated and non-calibrated regions, respectively. 
Considering only the forcings GSWP3 and WFDEI_hom, i.e., excluding the forcing without undercatch correction (PGFv2.1) and the one with a much lower shortwave downward radiation SWD than the others (WFD), Q variations are reduced to 16 and 31 % in calibrated and non-calibrated regions, respectively. These simulation results support the need for extended Q measurements and data sharing for better constraining global water balance assessments. Over the 20th century, the human footprint on natural water resources has become larger. For 11–18 % of the global land area, the change of Q between 1941–1970 and 1971–2000 was driven more strongly by change of human water use including dam construction than by change in precipitation, while this was true for only 9–13 % of the land area from 1911–1940 to 1941–1970.
When assessing global water resources with hydrological models, it is essential to know the methodological uncertainties in the water resources estimates. The study presented here quantifies the effects of uncertainty in the spatial and temporal patterns of meteorological variables on water balance components at the global, continental and grid cell scale by forcing the global hydrological model WaterGAP 2.2 (ISI-MIP 2.1) with five state-of-the-art climate forcing datasets. While global precipitation over land during 1971–2000 varies between 103 500 and 111 000 km³ yr⁻¹, global river discharge varies between 39 200 and 42 200 km³ yr⁻¹. Temporal trends of global water balance components are strongly affected by the uncertainty in the climate forcing (except human water abstractions), and there is a need for temporal homogenization of climate forcings (in particular WFD/WFDEI). On about 10–20 % of the global land area, the change of river discharge between two consecutive 30-year periods was driven more strongly by changes of human water use, including dam construction, than by changes in precipitation. This fraction increases towards the end of the 20th century due to intensified human water use and dam construction. The calibration approach of WaterGAP against observed long-term average river discharge significantly reduces the impact of climate forcing uncertainty on estimated river discharge. Different homogeneous climate forcings lead to a variation of Q of only 1.6 % for the 54 % of the global land area that is constrained by discharge observations, while estimated renewable water resources in the remaining uncalibrated regions vary by 18.5 %. Uncertainties are especially high in Southeast Asia, where Global Runoff Data Centre (GRDC) data availability is very sparse.
By sharing already available discharge data, or installing new streamflow gauging stations in such regions, water balance uncertainties could be reduced which would lead to an improved assessment of the world’s water resources.
The assessment of water balance components using global hydrological models is subject to climate forcing uncertainty as well as to the increasing intensity of human water use during the 20th century. The uncertainty of five state-of-the-art climate forcings and the resulting range of cell runoff simulated by the global hydrological model WaterGAP are presented. On the global land surface, about 62 % of precipitation evapotranspires, whereas 38 % discharges into oceans and inland sinks. During 1971–2000, evapotranspiration due to human water use amounted to almost 1 % of precipitation, while this anthropogenic water flow increased by a factor of approximately 5 between 1901 and 2010. The deviation of estimated global discharge from the ensemble mean due to climate forcing uncertainty is approximately 4 %. Precipitation uncertainty is the most important reason for the uncertainty of discharge and evapotranspiration, followed by shortwave downward radiation. At the continental level, deviations of water balance components due to uncertain climate forcing are higher, with the largest deviations occurring for river discharge in Africa (−6 to 11 % from the ensemble mean). Uncertain climate forcings also affect the estimation of irrigation water use and thus the estimated human impact on river discharge. The uncertainty range of global irrigation water consumption amounts to approximately 50 % of the global sum of water consumption in the other water use sectors.
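The long-term partitioning stated above (about 62 % of land precipitation evapotranspires, 38 % discharges) can be checked against the global numbers quoted in the neighbouring abstracts. A minimal sketch; the precipitation value below is simply a mid-range figure within the 103 500–111 000 km³ yr⁻¹ spread quoted above, not a result of this study:

```python
# Long-term global land water balance: with negligible storage change over a
# 30-year period, precipitation P splits into evapotranspiration E and
# discharge Q:  P = E + Q.

def partition(precip_km3, et_fraction):
    """Split annual land precipitation (km^3/yr) into (E, Q) by an ET fraction."""
    et = et_fraction * precip_km3
    q = precip_km3 - et
    return et, q

# Assumed mid-range global land precipitation of ~107,000 km^3/yr and the
# 62 % evapotranspiration share from the abstract:
et, q = partition(107000.0, 0.62)
```

The resulting discharge of roughly 40 700 km³ yr⁻¹ falls inside the 39 200–42 200 km³ yr⁻¹ discharge range reported above, so the two abstracts are mutually consistent.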
The Earth's future depends on how we manage the manifold risks of climate change (CC). It is state-of-the-art to assume that risk reduction requires participatory management involving a broad range of stakeholders and scientists. However, there is still little knowledge about the optimal design of participatory climate change risk management processes (PRMPs), in particular with respect to considering the multitude of substantial uncertainties that are relevant for PRMPs. To support the many local to regional PRMPs that are necessary for a successful global-scale reduction of CC risks, we present a roadmap for designing such transdisciplinary knowledge integration processes. The roadmap suggests ways in which uncertainties can be comprehensively addressed within a PRMP. We discuss the concept of CC risks and their management and propose an uncertainty framework that distinguishes epistemic, ontological, and linguistic uncertainty as well as ambiguity. Uncertainties relevant for CC risk management are identified. Communicative and modeling methods that support social learning as well as the development of risk management strategies are proposed for each of six phases of a PRMP. Finally, we recommend how to evaluate PRMPs as such evaluations and their publication are paramount for achieving a reduction of CC risks.
This article analyses the inequality effects of the parental allowance (Elterngeld) introduced in 2007. We show that the introduction of the Elterngeld as a family-policy resource has transferred the income inequalities of the sphere of production and paid work onto the sphere of reproduction and the family. At least so far, the Elterngeld has thus not contributed (as intended) to overcoming the asymmetric division of (paid) employment and (unpaid) care work between parents. Instead, our spatially oriented analysis of Elterngeld uptake reveals unequal patterns in the ability to manage child-related care work. We interpret the differentiation of opportunities to organise parenthood, which becomes visible in the unequal geography of the Elterngeld, as an expression of tendencies towards social division in the sphere of reproduction that have been reinforced by the family-policy introduction of the Elterngeld.
A recent CLOUD (Cosmics Leaving OUtdoor Droplets) chamber study showed that sulfuric acid and dimethylamine produce new aerosols very efficiently, and yield particle formation rates that are compatible with boundary layer observations. These previously published new particle formation (NPF) rates are re-analyzed in the present study with an advanced method. The results show that the NPF rates at 1.7 nm are more than a factor of 10 faster than previously published due to earlier approximations in correcting particle measurements made at a larger detection threshold. The revised NPF rates agree almost perfectly with calculated rates from a kinetic aerosol model at different sizes (1.7 nm and 4.3 nm mobility diameter). In addition, modeled and measured size distributions show good agreement over a wide range (up to ca. 30 nm). Furthermore, the aerosol model is modified such that evaporation rates for some clusters can be taken into account; these evaporation rates were previously published from a flow tube study. Using this model, the findings from the present study and the flow tube experiment can be brought into good agreement. This confirms that nucleation proceeds at rates that are compatible with collision-controlled (a.k.a. kinetically controlled) new particle formation for the conditions during the CLOUD7 experiment (278 K, 38 % RH, sulfuric acid concentration between 1 × 10⁶ and 3 × 10⁷ cm⁻³, and dimethylamine mixing ratio of ~40 pptv). Finally, the simulation of atmospheric new particle formation reveals that even tiny mixing ratios of dimethylamine (0.1 pptv) yield NPF rates that could explain significant boundary layer particle formation. This highlights the need for improved speciation and quantification techniques for atmospheric gas-phase amine measurements.
Chlorine and bromine atoms can lead to catalytic destruction of ozone in the stratosphere. Therefore the use and production of ozone-depleting substances (ODSs) containing chlorine and bromine are regulated by the Montreal Protocol to protect the ozone layer. Equivalent effective stratospheric chlorine (EESC) has been adopted as an appropriate metric to describe the combined effects of chlorine and bromine released from halocarbons on stratospheric ozone. Here we revisit the concept of calculating EESC. We derive a new formulation of EESC based on an advanced concept of ODS propagation into the stratosphere and reactive halogen release. A new transit time distribution is introduced in which the age spectrum for an inert tracer is weighted with the release function for inorganic halogen from the source gases. This distribution is termed the "release time distribution". The improved formulation shows that EESC levels in the year 1980 for the mid-latitude lower stratosphere were significantly lower than previously calculated. The year 1980 is commonly defined as the onset of anthropogenic ozone depletion in the stratosphere. Assuming that the EESC value must return to the same level in order for ozone to fully recover, we show that it will take more than 10 years longer than currently assumed in this region of the stratosphere. Based on the improved formulation, the EESC level at mid-latitudes will reach this landmark only in 2060. We also present a range of sensitivity studies to investigate the effect of changes and uncertainties in the fractional release factors and in the assumptions on the shape of the release time distributions. We conclude that, under the assumption that all other atmospheric parameters such as stratospheric dynamics and chemistry remain unchanged, the recovery of mid-latitude stratospheric ozone would be expected to be delayed by about 10 years, in a similar way as EESC.
In his essay "Wer plant die Planung?" ("Who Plans the Planning?"), Lucius Burckhardt describes with a sharp pen the conflicting rationalities of the actors involved in the planning process. He shows how "the parallelogram of forces between the governing bureaucracy, building speculation, the citizenry and the people affected by the adopted measures" (p. 107) frequently makes the "ills of the city" worse in the very attempt to improve them. What is missing, he argues, is a "strategic approach" that would be "appropriate to the systemic character of the city" (p. 113).
Within gentrification research, rent gap theory analyses how small-scale differences between current conditions of capital valorisation on the one hand and expectations of future rent increases on the other drive processes of displacement. In contrast, Eric Clark (2014) has recently called on urban research to focus more strongly on how displacement can be prevented. Taking up this appeal, we show for the German context how tenancy law, urban planning decisions and the respective ownership structure largely determine whether real estate valorisation pressure actually translates into displacement. This is illustrated by the changing ownership structure in Frankfurt's Gallus district since the 1970s. It becomes clear that gentrification is not a law of nature but a deeply political process, and one that can be effectively prevented.
The reviewed monograph succeeds in interpreting "New Public Management" as a strategic and political project and soundly sets out its consequences for public land and property policy (Liegenschaftspolitik). The review attempts to situate the work within existing research and to offer a critical appraisal of it.
The multi-valence nature of vanadium means that its geochemical behaviour will be ƒO₂-dependent, so that its concentration, or ratios such as V/Sc or V/Ga, can serve as proxies for oxidation state in mantle peridotites. Compared to Fe³⁺/Fe²⁺-based equilibria, such trace elements may be less sensitive to metasomatic processes. To investigate these systematics, we have measured V, Sc, Ga and Fe³⁺ contents in clinopyroxene from well-characterised spinel peridotite xenoliths from the Massif Central, France. These samples were metasomatised by a variety of agents with different oxidation states. V contents can be modified by metasomatic interactions, and other geochemically similar elements, including Sc and Ga, can also be added, removed or remain constant. A link between V/Sc and Fe³⁺–Fe²⁺ equilibria is apparent. Partial removal of V is caused by different metasomatic agents; the common factor is that all agents were significantly more oxidised than the initial ambient mantle peridotite. This extraction can be understood by a decreasing partition coefficient for V at ΔlogƒO₂ > ~FMQ−2. Considering that mineral/melt partitioning of V decreases similarly for all peridotite minerals, the bulk-rock V/Sc will also change during relatively oxidising metasomatic interactions and mirror the results obtained for clinopyroxene.
During the Holocene, North American ice sheet collapse and rapid sea-level rise reconnected the Black Sea with the global ocean. Rapid meltwater releases into the North Atlantic and the associated climate change arguably slowed the pace of Neolithisation across southeastern Europe, and were originally hypothesized as a catastrophic flooding that fueled culturally widespread deluge myths. However, we currently lack an independent record linking the timing of meltwater events, sea-level rise and environmental change with the timing of Neolithisation in southeastern Europe. Here, we present a sea surface salinity record from the Northern Aegean Sea indicative of two meltwater events at ~8.4 and ~7.6 kiloyears ago that can be directly linked to rapid declines in the establishment of Neolithic sites in southeast Europe. The meltwater events point to an increased outflow of low-salinity water from the Black Sea driven by rapid sea-level rise of >1.4 m following freshwater outbursts from Lake Agassiz and the final decay of the Laurentide ice sheet. Our results shed new light on the link between catastrophic sea-level rise and the Neolithisation of southeastern Europe, and provide a historical example of how coastal populations may be impacted by rapid sea-level rise in the future.
Convection-permitting models (CPMs) have proven their usefulness in representing precipitation on a sub-daily scale. However, investigations on sub-hourly scales are still lacking, even though these are the scales on which showers exhibit the most variability. A Lagrangian approach is implemented here to evaluate the representation of showers in a CPM, using the limited-area climate model COSMO-CLM. This approach consists of tracking 5-min precipitation fields to retrieve different features of showers (e.g., temporal pattern, horizontal speed, lifetime). In total, 312 cases are simulated at a resolution of 0.01° over Central Germany, and among these cases, 78 are evaluated against a radar dataset. The model is able to represent most observed features for different types of convective cells. In addition, the CPM reproduces the observed relationship between precipitation characteristics and temperature well, indicating that the COSMO-CLM model is sophisticated enough to represent the climatological features of showers.
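One of the tracked shower features named above, horizontal cell speed, can be sketched from two successive centroid positions of a tracked cell. A minimal illustration with hypothetical coordinates; the actual tracking algorithm of the study is not reproduced here:

```python
# Lagrangian tracking idea: a convective cell is followed between successive
# 5-min precipitation fields; its horizontal speed follows from the centroid
# displacement divided by the time step.

import math

def cell_speed(p0, p1, dt_seconds=300.0):
    """Horizontal speed (m/s) between two centroid positions (x, y) in metres."""
    dx = p1[0] - p0[0]
    dy = p1[1] - p0[1]
    return math.hypot(dx, dy) / dt_seconds

# Hypothetical cell displaced 3 km east and 4 km north within one 5-min step:
speed = cell_speed((0.0, 0.0), (3000.0, 4000.0))  # 5 km in 5 min, ~16.7 m/s
```

Other features (lifetime, temporal pattern) follow analogously from the full sequence of matched positions and intensities.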
A recent CLOUD (Cosmics Leaving OUtdoor Droplets) chamber study showed that sulfuric acid and dimethylamine produce new aerosols very efficiently and yield particle formation rates that are compatible with boundary layer observations. These previously published new particle formation (NPF) rates are reanalyzed in the present study with an advanced method. The results show that the NPF rates at 1.7 nm are more than a factor of 10 faster than previously published due to earlier approximations in correcting particle measurements made at a larger detection threshold. The revised NPF rates agree almost perfectly with calculated rates from a kinetic aerosol model at different sizes (1.7 and 4.3 nm mobility diameter). In addition, modeled and measured size distributions show good agreement over a wide range of sizes (up to ca. 30 nm). Furthermore, the aerosol model is modified such that evaporation rates for some clusters can be taken into account; these evaporation rates were previously published from a flow tube study. Using this model, the findings from the present study and the flow tube experiment can be brought into good agreement for the high base-to-acid ratios (∼ 100) relevant for this study. This confirms that nucleation proceeds at rates that are compatible with collision-controlled (a.k.a. kinetically controlled) NPF for the conditions during the CLOUD7 experiment (278 K, 38 % relative humidity, sulfuric acid concentration between 1 × 10⁶ and 3 × 10⁷ cm⁻³, and dimethylamine mixing ratio of ∼ 40 pptv, i.e., 1 × 10⁹ cm⁻³).
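The collision-controlled limit invoked above means that essentially every acid-acid collision sticks, so the formation rate scales with the square of the sulfuric acid concentration. A minimal sketch; the collision rate coefficient below is an assumed order-of-magnitude value, not a number from the CLOUD study:

```python
# Collision-controlled ("kinetic") limit for new particle formation:
#   J = K * [H2SO4]^2
# with K an assumed, illustrative collision rate coefficient.

K_COLL = 5e-10  # assumed collision rate coefficient (cm^3 s^-1)

def kinetic_npf_rate(h2so4):
    """NPF rate J (cm^-3 s^-1) in the collision-controlled limit."""
    return K_COLL * h2so4**2

# Across the CLOUD7 acid range quoted above (1e6 to 3e7 cm^-3), the quadratic
# scaling spans nearly three orders of magnitude in J:
j_low = kinetic_npf_rate(1e6)
j_high = kinetic_npf_rate(3e7)
```

The quadratic dependence is what makes even modest changes in sulfuric acid concentration translate into large changes in formation rate.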
In late 2013, a whole air flask collection programme was started at Taunus Observatory (TO) in central Germany. Being a rural site in close proximity to the Rhine–Main area, Taunus Observatory allows assessment of emissions from a densely populated region. Owing to its altitude of 825 m, the site also regularly experiences background conditions, especially when air masses approach from north-westerly directions. With a large footprint area mainly covering central Europe north of the Alps, halocarbon measurements at the site have the potential to improve the database for estimation of regional and total European halogenated greenhouse gas emissions. Flask samples are collected weekly for offline analysis using a GC/MS system simultaneously employing a quadrupole as well as a time-of-flight mass spectrometer. As background reference, additional samples are collected approximately once every 2 weeks at the Mace Head Atmospheric Research Station (MHD) when air masses approach from the site's clean air sector. Thus the time series at TO can be linked to the in situ AGAGE measurements and the NOAA flask sampling programme at MHD. An iterative baseline identification procedure separates polluted samples from baseline data. While there is good agreement of baseline mixing ratios between TO and MHD, with a larger variability of mixing ratios at the continental site, measurements at TO are regularly influenced by elevated halocarbon mixing ratios. Here, first time series are presented for CFC-11, CFC-12, HCFC-22, HFC-134a, HFC-227ea, HFC-245fa, and dichloromethane. While atmospheric mixing ratios of the chlorofluorocarbons (CFCs) decrease, they increase for the hydrochlorofluorocarbons (HCFCs) and the hydrofluorocarbons (HFCs). 
Small unexpected differences between CFC-11 and CFC-12 are found with regard to the frequency and relative enhancement of high mixing ratio events and to seasonality, although production and use of both compounds are strictly regulated by the Montreal Protocol, and therefore a similar decrease in atmospheric mixing ratios should occur. Dichloromethane, a solvent about which concerns have recently been raised regarding its growing influence on stratospheric ozone depletion, shows no significant trend in either baseline mixing ratios or the occurrence of pollution events at Taunus Observatory for the time period covered, indicating stable emissions in the regions that influence the site. An analysis of trajectories from the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model reveals differences in halocarbon mixing ratios depending on air mass origin.
Chlorine and bromine atoms lead to catalytic depletion of ozone in the stratosphere. Therefore the use and production of ozone-depleting substances (ODSs) containing chlorine and bromine are regulated by the Montreal Protocol to protect the ozone layer. Equivalent effective stratospheric chlorine (EESC) has been adopted as an appropriate metric to describe the combined effects of chlorine and bromine released from halocarbons on stratospheric ozone. Here we revisit the concept of calculating EESC. We derive a refined formulation of EESC based on an advanced concept of ODS propagation into the stratosphere and reactive halogen release. A new transit time distribution is introduced in which the age spectrum for an inert tracer is weighted with the release function for inorganic halogen from the source gases. This distribution is termed the release time distribution. We show that a much better agreement with the inorganic halogen loading from the chemistry transport model TOMCAT is achieved compared with the current formulation. The refined formulation shows that EESC levels in the year 1980 for the mid-latitude lower stratosphere are significantly lower than previously calculated. The year 1980 is commonly used as a benchmark to which EESC must return in order to reach significant progress towards halogen and ozone recovery. Assuming that – under otherwise unchanged conditions – the EESC value must return to the same level in order for ozone to fully recover, we show that it will take more than 10 years longer than estimated with the current method for calculating EESC in this region of the stratosphere. We also present a range of sensitivity studies to investigate the effect of changes and uncertainties in the fractional release factors and in the assumptions on the shape of the release time distributions.
We further discuss the value of EESC as a proxy for future evolution of inorganic halogen loading under changing atmospheric dynamics using simulations from the EMAC model. We show that while the expected changes in stratospheric transport lead to significant differences between EESC and modelled inorganic halogen loading at constant mean age, EESC is a reasonable proxy for modelled inorganic halogen on a constant pressure level.
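The classical EESC metric that the refined formulation above improves on is a chlorine- and bromine-weighted sum of lagged source-gas mixing ratios. A minimal sketch of that classical form only; the species entries, fractional release factors and the bromine efficiency α below are illustrative assumptions, not values from the study:

```python
# Classical EESC (schematic):
#   EESC = sum_i (n_Cl,i + alpha * n_Br,i) * f_i * chi_i
# where n is the number of halogen atoms per molecule, f the fractional
# release factor, chi the (age-lagged) tropospheric mixing ratio, and alpha
# the relative ozone-destruction efficiency of bromine vs. chlorine.
# The refined formulation discussed above replaces the single age lag with a
# release time distribution; that refinement is not reproduced here.

ALPHA_BR = 60.0  # assumed Br vs. Cl efficiency factor

def eesc(species):
    """species: iterable of (n_cl, n_br, fractional_release, mixing_ratio_ppt)."""
    total = 0.0
    for n_cl, n_br, f, chi in species:
        total += (n_cl + ALPHA_BR * n_br) * f * chi
    return total  # ppt of equivalent chlorine

# Illustrative entries: CFC-11 (3 Cl), CFC-12 (2 Cl), halon-1211 (1 Cl, 1 Br):
example = [
    (3, 0, 0.47, 230.0),
    (2, 0, 0.23, 510.0),
    (1, 1, 0.62, 3.5),
]
value = eesc(example)
```

The halon term shows why bromine matters despite tiny mixing ratios: the α weighting makes a few ppt of halon contribute as much as a major CFC.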
Analysis of stratospheric transport from an observational point of view is frequently realized by evaluating mean age of air values derived from long-lived trace gases. However, this provides more insight into the general transport strength than into its mechanism. Deriving complete transit time distributions (age spectra) is desirable, but their deduction from direct measurements is difficult and has so far primarily been achieved through assumptions about the dynamics and the spectra themselves. This paper introduces a modified version of an inverse method to infer age spectra from mixing ratios of short-lived trace gases. For a full description of transport seasonality, the formulation includes an imposed seasonal cycle to obtain multimodal spectra. The EMAC model simulation used for a proof of concept features an idealized dataset of 40 radioactive trace gases with different chemical lifetimes as well as 40 chemically inert pulsed trace gases to calculate pulse age spectra. Annual and seasonal mean inverse spectra are compared to pulse spectra, including first and second moments as well as the ratio between them, to assess the performance on these time scales. Results indicate that the modified inverse age spectra match the annual and seasonal pulse age spectra well on the global scale beyond 1.5 years mean age of air. The imposed seasonal cycle emerges as a reliable tool for including transport seasonality in the age spectra. Below 1.5 years mean age of air, tropospheric influence intensifies and breaks the assumption of a single entry point through the tropical tropopause, leading to inaccurate spectra, in particular in the Northern Hemisphere. The imposed seasonal cycle wrongly prescribes seasonal entry in this lower region and does not lead to a better agreement between inverse and pulse age spectra without further improvement. As the inverse method aims at future application to in situ observational data, possible critical factors for this purpose are finally delineated.
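A common reference shape for the age spectra discussed above is the unimodal inverse Gaussian, characterized by a mean age Γ and a width parameter Δ. The inverse method of the paper infers more general, multimodal spectra; this sketch only illustrates the reference shape and numerically checks that its first moment recovers the mean age (Γ = 3 yr and Δ = 1.5 yr are illustrative values):

```python
# Inverse Gaussian age spectrum:
#   G(t) = sqrt(Gamma^3 / (4*pi*Delta^2*t^3)) * exp(-Gamma*(t-Gamma)^2 / (4*Delta^2*t))
# The first moment of G(t) is the mean age of air, Gamma.

import math

def inverse_gaussian_spectrum(t, gamma, delta):
    """Transit time density G(t) (1/yr) with mean age gamma and width delta."""
    if t <= 0.0:
        return 0.0
    return math.sqrt(gamma**3 / (4.0 * math.pi * delta**2 * t**3)) * \
        math.exp(-gamma * (t - gamma)**2 / (4.0 * delta**2 * t))

# Numerical first moment over 0-40 yr (Riemann sum) should recover ~3 yr:
dt = 0.001
mean_age = sum(t * inverse_gaussian_spectrum(t, 3.0, 1.5) * dt
               for t in (i * dt for i in range(1, 40000)))
```

Mean age alone collapses this whole distribution to one number, which is exactly the loss of mechanistic information the abstract points out.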
In late 2013, a whole air flask collection program was started at the Taunus Observatory (TO) in central Germany. Being a rural site in close vicinity to the densely populated Rhein-Main area, Taunus Observatory allows assessment of local and regional emissions. Owing to its altitude of 825 m, the site also regularly experiences background conditions, especially when air masses approach from north-westerly directions. With a large footprint area mainly covering central Europe north of the Alps, halocarbon measurements at the site have the potential to improve the database for estimation of regional and total European halogenated greenhouse gas emissions. Flask samples are collected weekly for offline analysis using a GC-MS system employing a quadrupole as well as a time-of-flight mass spectrometer. As background reference, additional samples are collected approximately bi-weekly at the Mace Head Atmospheric Research Station (MHD) when air masses approach from the site's clean air sector. Thus the TO time series can be linked to the in-situ AGAGE measurements and the NOAA flask sampling program at MHD. An iterative baseline identification procedure separates polluted samples from baseline data. While there is good agreement of baseline mixing ratios between TO and MHD, with a larger variability of mixing ratios at the continental site, measurements at TO are regularly influenced by elevated halocarbon mixing ratios. Here, first time series are presented for CFC-11, CFC-12, HCFC-22, HFC-134a, HFC-227ea, HFC-245fa, and dichloromethane. While atmospheric mixing ratios of the CFCs decrease, they increase for the HCFCs and the HFCs. Small unexpected differences between CFC-11 and CFC-12 are found with regard to the occurrence of high mixing ratio events and seasonality, although production and use of both compounds are strictly regulated by the Montreal Protocol, and therefore a similar decrease of atmospheric mixing ratios should occur.
Dichloromethane, a solvent that has recently raised concerns regarding its growing influence on stratospheric ozone depletion, shows no significant trend in either baseline mixing ratios or the occurrence of pollution events at Taunus Observatory for the time period covered, indicating stable emissions in the regions that influence the site. An analysis of HYSPLIT trajectories reveals differences in halocarbon mixing ratios depending on air mass origin.
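An iterative baseline identification of the kind mentioned above can be sketched as a sigma-clipping loop. This is a minimal illustration, not the actual procedure applied to the TO time series, which may use different statistics and thresholds:

```python
import numpy as np

def baseline_filter(mixing_ratios, n_sigma=2.0, max_iter=20):
    """Iteratively separate baseline samples from pollution-influenced ones.

    Minimal sketch of an iterative baseline identification: samples exceeding
    the baseline mean by more than n_sigma standard deviations are flagged as
    polluted and excluded before the statistics are recomputed. The procedure
    actually used for the TO data may differ.
    """
    x = np.asarray(mixing_ratios, dtype=float)
    baseline = np.ones_like(x, dtype=bool)
    for _ in range(max_iter):
        mu, sd = x[baseline].mean(), x[baseline].std()
        new_baseline = x <= mu + n_sigma * sd
        if np.array_equal(new_baseline, baseline):
            break  # converged: no sample was reclassified
        baseline = new_baseline
    return baseline

# Example: a background level around 50 ppt with two pollution events
obs = np.array([50.1, 49.8, 50.3, 65.0, 50.0, 49.9, 80.2, 50.2])
mask = baseline_filter(obs)  # False marks the polluted samples
```

Iterating matters here: a single pass with the raw mean and standard deviation would let large pollution events inflate the threshold and mask smaller ones.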
To quantify water flows between groundwater (GW) and surface water (SW) as well as the impact of capillary rise on evapotranspiration in global hydrological models (GHMs), it is necessary to replace the bucket-like linear GW reservoir model typical for hydrological models with a fully integrated gradient-based GW flow model. Linear reservoir models can only simulate GW discharge to SW bodies, provide no information on the location of the GW table and assume that there is no GW flow among grid cells. A gradient-based GW model simulates not only GW storage but also hydraulic head, which together with information on SW table elevation enables the quantification of water flows from GW to SW and vice versa. In addition, hydraulic heads are the basis for calculating lateral GW flow among grid cells and capillary rise.
G³M is a new global gradient-based GW model with a spatial resolution of 5' that will replace the current linear GW reservoir in the 0.5° WaterGAP Global Hydrology Model (WGHM). The newly developed model framework enables in-memory coupling to WGHM while keeping overall runtime relatively low, allowing sensitivity analyses and data assimilation. This paper presents the G³M concept and specific model design decisions together with results under steady-state naturalized conditions, i.e. neglecting GW abstractions. Cell-specific conductances of river beds, which govern GW-SW interaction, were determined based on the 30'' steady-state water table computed by Fan et al. (2013). Together with an appropriate choice for the effective elevation of the SW table within each grid cell, this enables a reasonable simulation of drainage from GW to SW such that, in contrast to the GW model of de Graaf et al. (2015, 2017), no additional drainage based on externally provided values for GW storage above the floodplain is required in G³M. Comparison of simulated hydraulic heads to observations around the world shows better agreement than in de Graaf et al. (2015). In addition, G³M output is compared to the output of two established macro-scale models for the Central Valley, California, and the continental United States, respectively. As expected, depth to the GW table is highest in mountainous and lowest in flat regions. A first analysis of losing and gaining rivers and lakes/wetlands indicates that GW discharge to rivers is by far the dominant flow, draining diffuse GW recharge, such that lateral flows only become a large fraction of total diffuse and focused recharge in the case of losing rivers and some areas with very low GW recharge. G³M does not represent losing rivers in some dry regions.
This study presents the first steps towards replacing the linear GW reservoir model in a GHM while improving on recent efforts, demonstrating the feasibility of the approach and the robustness of the newly developed framework.
A new method for size-resolved chemical analysis of nucleation mode aerosol particles (size range from ∼10 to ∼30 nm) is presented. The Thermal Desorption Differential Mobility Analyzer (TD-DMA) uses an online, discontinuous principle. The particles are charged, a specific size is selected by differential mobility analysis and they are collected on a filament by electrostatic precipitation. Subsequently, the sampled mass is evaporated in a clean carrier gas and analyzed by a chemical ionization mass spectrometer. Gas-phase measurements are performed with the same mass spectrometer during the sampling of particles. The characterization shows reproducible results, with a particle size resolution of 1.19 and the transmission efficiency for 15 nm particles being slightly above 50 %. The signal from the evaporation of a test substance can be detected starting from 0.01 ng and shows a linear response in the mass spectrometer. Instrument operation in the range of pg m−3 is demonstrated by an example measurement of 15 nm particles produced by nucleation from dimethylamine, sulfuric acid and water.
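The quoted figures (0.01 ng detectable mass, transmission slightly above 50 % for 15 nm particles) can be related to ambient concentrations with a simple back-of-the-envelope conversion. The sampling flow and collection time below are purely assumed for illustration; the instrument's actual data reduction includes further corrections (e.g., charging probability):

```python
def mass_concentration(detected_mass_ng, sample_flow_lpm, sampling_time_min,
                       collection_efficiency=0.5):
    """Convert mass evaporated from the TD-DMA filament into an ambient
    mass concentration (ng per m3).

    Illustrative only: the ~50 % collection efficiency for 15 nm particles
    comes from the abstract; flow rate and sampling time are assumptions.
    """
    sampled_volume_m3 = sample_flow_lpm * sampling_time_min / 1000.0  # L -> m3
    # scale the detected mass up by the collection efficiency, then normalize
    return detected_mass_ng / collection_efficiency / sampled_volume_m3

# 0.01 ng detected after sampling 1 L/min for 60 min at 50 % efficiency:
conc = mass_concentration(0.01, 1.0, 60.0)  # ~0.33 ng/m3, i.e. ~330 pg/m3
```

Under these assumed operating conditions the 0.01 ng detection threshold indeed corresponds to concentrations of a few hundred pg m−3, consistent with the stated operating range.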
The social construction of technological stasis : the stagnating data structure in OpenStreetMap
(2018)
The article examines the ‘technological stasis’ of the data structure in OpenStreetMap – the successful global collaborative geodata project devoted to the goal to ‘create and distribute free geographic data for the world’. Digital structures are strongly shaped by continuing stagnation. This technological stasis – the lack of change in technology – influences data in various ways, as demonstrated by the intensive discussion of the issue among computer scientists and software engineers. However, existing research describing stagnating software is often technology-centred and fuzzy, while critical research barely considers issues of technological stasis in the digital context at all. This paper therefore aims to enrich this body of knowledge in order to shed light on aging data structures. I reframe technological stasis from a social-constructivist perspective – using the approach of the Social Construction of Technology – especially the concept of technological frames. Based on the case example of OpenStreetMap, my findings suggest that the data structure – and its stasis – is the outcome of competing understandings and perspectives, shaped by power asymmetries. Although the data structure has not significantly changed for more than 10 years, I demonstrate that this is due neither to a lack of motivation nor to technological difficulties of carrying out such changes. Rather, the technological stasis is rooted in the dominant position of a few project members who are able to change the software design; it is their perception of the project that defines how data should be stored and which features are dispensable.
The administration of the entrepreneurial city : (not) a topic in geographical urban research?!
(2018)
In geographical urban research, general references can be found to the effect that the introduction of New Public Management in the cities of Western industrialized countries is part of the canon of neoliberal rescaling and urban transformation. Building on this, I argue that what I summarize as the administration of the entrepreneurial city is not merely the result of abstract neoliberalization processes or of the technocratic modernization of a mechanical executive apparatus. In this contribution, I show that the managerially reformed administration is both an effect of and an important terrain for the elaboration, articulation and enforcement of entrepreneurial urban policy.
According to a press release of the Statistisches Bundesamt (2018), 19,000 of all registered wage and income tax payers in Germany had incomes of at least one million euros. That work, however, is not the primary route to becoming and remaining rich can be seen from the fact that in 2017 the number of high-net-worth individuals (HNWI) with assets of more than one million euros exceeded that of income millionaires in Germany by 1,345,600. The number of HNWI has also developed more favourably in Germany than that of income millionaires. According to the Statistisches Bundesamt, the number of income millionaires rose by "only" 1,600 between 2013 and 2018 (Statistisches Bundesamt 2018). According to Capgemini's World Wealth Report 2018, however, 85,000 more people in Germany could call themselves HNWI between 2016 and 2017 alone (Capgemini 2018). Quite obviously, work is less promising if one sets out to become a millionaire. This applies not only in Germany but is a widespread phenomenon. It is simply due to the fact that wealth is taxed at a lower rate than income.
Subduction zone magmas are more oxidised on eruption than those at mid-ocean ridges. This is attributed either to oxidising components, derived from subducted lithosphere (slab) and added to the mantle wedge, or to oxidation processes occurring during magma ascent via differentiation. Here we provide direct evidence for contributions of oxidising slab agents to melts trapped in the sub-arc mantle. Measurements of sulfur (S) valence state in sub-arc mantle peridotites identify sulfate, both as crystalline anhydrite (CaSO4) and dissolved SO42− in spinel-hosted glass (formerly melt) inclusions. Copper-rich sulfide precipitates in the inclusions and increased Fe3+/∑Fe in spinel record a S6+–Fe2+ redox coupling during melt percolation through the sub-arc mantle. Sulfate-rich glass inclusions exhibit high U/Th, Pb/Ce, Sr/Nd and δ34S (+ 7 to + 11‰), indicating the involvement of dehydration products of serpentinised slab rocks in their parental melt sources. These observations provide a link between liberated slab components and oxidised arc magmas.
The frequency of extreme events has changed, having a direct impact on human lives. Regional climate models help us to predict these regional climate changes. This work presents an atmosphere–ocean coupled regional climate system model (RCSM; with the atmospheric component COSMO-CLM and the ocean component NEMO) over the European domain, including three marginal seas: the Mediterranean, North, and Baltic Sea. To test the model, we evaluate a simulation of more than 100 years (1900–2009) with a spatial grid resolution of about 25 km. The simulation was nested into a coupled global simulation with the model MPI-ESM in a low-resolution configuration, whose ocean temperature and salinity were nudged to the ocean–ice component of the MPI-ESM forced with the NOAA 20th Century Reanalysis (20CR). The evaluation shows the robustness of the RCSM and discusses the added value by the coupled marginal seas over an atmosphere-only simulation. The coupled system is stable for the complete 20th century and provides a better representation of extreme temperatures compared to the atmosphere-only model. The produced long-term dataset will help us to better understand the processes leading to meteorological and climate extremes.
Parabens and sorbic acid are commonly used as food preservatives due to their antimicrobial effect. However, their use in foods for infants and young children is not permitted in the European Union. Previous studies found these compounds in some gel-filled baby teethers, whereby parabens, which are well-known as endocrine disruptors, were identified in the polymer-based chewing surface consisting of ethylene-vinyl acetate (EVA). To assess the exposure of infants and young children to these products, the application of parabens in teethers should be thoroughly investigated. Therefore, the present study aimed to apply a representative migration test procedure combined with an accurate analytical method to examine gel-filled baby teethers without elaborate sample preparation, high costs, and long processing times. Accordingly, solid-phase extraction (SPE), in combination with a stable isotope dilution assay (SIDA) and subsequent gas chromatography–mass spectrometry (GC–MS) for analysis of methyl-, ethyl-, and n-propylparaben (MeP, EtP, and n-PrP), was found to be well-suited, with recoveries ranging from 93 to 99%. The study compared the release of these parabens from intact teether surfaces into water and saliva simulant under real-life conditions, with total amounts of detected parabens found to be in the range of 101–162 µg 100 mL−1 and 57–148 µg 100 mL−1, respectively. Furthermore, as a worst-case scenario, the release into water was examined using a long-term migration study.
A twentieth-century-long coupled atmosphere-ocean regional climate simulation with COSMO-CLM (Consortium for Small-Scale Modeling, Climate Limited-area Model) and NEMO (Nucleus for European Modelling of the Ocean) is studied here to evaluate the added value of coupled marginal seas over continental regions. The interactive coupling of the marginal seas, namely the Mediterranean, the North and the Baltic Seas, to the atmosphere in the European region gives a comprehensive modelling system. It is expected to describe the climatological features of this geographically complex area even more precisely than an atmosphere-only climate model. The investigated variables are precipitation and 2 m temperature. Sensitivity studies are used to assess the impact of SST (sea surface temperature) changes over land areas. The different SST values affect continental precipitation more than the 2 m temperature. The simulated variables are compared to the CRU (Climatic Research Unit) observational data, and also to the HOAPS/GPCC (Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite Data, Global Precipitation Climatology Centre) data. In the coupled simulation, added skill is found primarily during winter over the eastern part of Europe. Our analysis shows that, over this region, the coupled system is drier than the uncoupled system, in terms of both precipitation and soil moisture, which means a decrease in the bias of the system. Thus, the coupling improves the simulation of precipitation over the eastern part of Europe, due to cooler SST values and, in consequence, drier soil.
Often in climate system studies, linear and symmetric statistical measures are applied to quantify interactions among subsystems or variables. However, they do not allow identification of the driving and responding subsystems. Therefore, in this study, we aimed to apply asymmetric measures from information theory: the axiomatically proposed transfer entropy and the first-principles-based information flow to detect and quantify climate interactions. As their estimation is challenging, we initially tested nonparametric estimators like transfer entropy (TE)-binning, TE-kernel, and TE k-nearest neighbor and parametric estimators like TE-linear and information flow (IF)-linear with idealized two-dimensional test cases, along with their sensitivity to sample size. Thereafter, we applied these methods to the Lorenz-96 model and to two real climate phenomena, i.e., (1) the Indo-Pacific Ocean coupling and (2) North Atlantic Oscillation (NAO)–European air temperature coupling. As expected, the linear estimators work for linear systems but fail for strongly nonlinear systems. The TE-kernel and TE k-nearest neighbor estimators are reliable for linear and nonlinear systems. Nevertheless, the nonparametric methods are sensitive to parameter selection and sample size. Thus, this work proposes a composite use of the TE-kernel and TE k-nearest neighbor estimators along with parameter testing for consistent results. The revealed information exchange in Lorenz-96 is dominated by the slow subsystem component. For the real climate phenomena, the expected bidirectional information exchange between the Indian and Pacific SSTs was detected. Furthermore, the expected information exchange from the NAO to European air temperature was detected, but also an unexpected information exchange in the reverse direction. The latter might hint at a hidden process driving both the NAO and European temperatures.
Hence, the limitations of the estimators, the available time series length, and the system at hand must be taken into account before drawing conclusions from TE and IF-linear estimations.
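The simplest of the estimators discussed above, TE-binning, can be sketched in a few lines. This is a minimal illustration with history length 1 and an arbitrary bin count, not the configuration used in the study:

```python
import numpy as np

def transfer_entropy_binning(x, y, n_bins=8):
    """Transfer entropy TE_{X->Y} with history length 1, estimated by binning.

    Sketch of the TE-binning estimator: discretize both series, then combine
    joint Shannon entropies, TE = H(yf,yp) + H(yp,xp) - H(yf,yp,xp) - H(yp).
    Bin count and history length are illustrative choices. Returns nats.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    xb = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
    yf, yp, xp = yb[1:], yb[:-1], xb[:-1]   # future y, past y, past x

    def H(*cols):
        # joint Shannon entropy of the discretized columns, in nats
        _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    return H(yf, yp) + H(yp, xp) - H(yf, yp, xp) - H(yp)

# Synthetic test case: y is driven by x with a one-step lag
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.empty_like(x)
y[0] = 0.0
for t in range(1, x.size):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
te_xy = transfer_entropy_binning(x, y)  # should exceed the reverse direction
te_yx = transfer_entropy_binning(y, x)
```

The asymmetry te_xy > te_yx is what makes TE attractive for identifying the driving subsystem; note, as the abstract stresses, that the finite-sample bias of such binning estimates depends strongly on bin count and series length.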
Convective shower characteristics simulated with the convection-permitting climate model COSMO-CLM
(2019)
This paper evaluates convective precipitation as simulated by the convection-permitting climate model (CPM) Consortium for Small-Scale Modeling in climate mode (COSMO-CLM) (with 2.8 km grid spacing) over Germany in the period 2001–2015. Characteristics of simulated convective precipitation objects like lifetime, area, mean intensity, and total precipitation are compared to characteristics observed by weather radar. For this purpose, a tracking algorithm was applied to simulated and observed precipitation with 5-min temporal resolution. The total amount of convective precipitation is well simulated, with a small overestimation of 2%. However, the simulation underestimates convective activity, represented by the number of convective objects, by 33%. This underestimation is especially pronounced in the lowlands of Northern Germany, whereas the simulation matches observations well in the mountainous areas of Southern Germany. The underestimation of activity is compensated by an overestimation of the simulated lifetime of convective objects. The observed mean intensity, maximum intensity, and area of precipitation objects increase with their lifetime, showing the spectrum of convective storms ranging from short-lived single-cell storms to long-lived organized convection like supercells or squall lines. The CPM is capable of reproducing the lifetime dependence of these characteristics but shows a weaker increase in mean intensity with lifetime, resulting in an especially pronounced underestimation (up to 25%) of the mean precipitation intensity of long-lived, extreme events. This limitation of the CPM is not identifiable by classical evaluation techniques using rain gauges. The simulation can reproduce the general increase of the highest percentiles of cell area, total precipitation, and mean intensity with temperature but fails to reproduce the increase in lifetime.
The scaling rates of mean intensity and total precipitation resemble observed rates only in parts of the temperature range. The results suggest that the evaluation of coarse-grained (e.g., hourly) precipitation fields is insufficient for revealing challenges in convection-permitting simulations.
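A temperature scaling rate of the kind evaluated above is commonly computed by binning cell properties by temperature, taking a high percentile per bin, and fitting an exponential. The sketch below uses assumed bin widths and synthetic data (constructed to scale at roughly the Clausius-Clapeyron rate of ~7 % per K); the paper's exact binning strategy may differ:

```python
import numpy as np

def scaling_rate(temps, intensities, q=99, bin_width=2.0):
    """Estimate the temperature scaling rate (% per K) of a high percentile.

    Sketch only: bin by `bin_width` K, take the q-th percentile per bin,
    then fit log(percentile) against temperature.
    """
    temps = np.asarray(temps, float)
    intensities = np.asarray(intensities, float)
    bins = np.arange(temps.min(), temps.max() + bin_width, bin_width)
    centers, pcts = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (temps >= lo) & (temps < hi)
        if sel.sum() >= 30:                      # require enough samples per bin
            centers.append(0.5 * (lo + hi))
            pcts.append(np.percentile(intensities[sel], q))
    slope = np.polyfit(centers, np.log(pcts), 1)[0]
    return (np.exp(slope) - 1.0) * 100.0         # % change per K

# Synthetic intensities constructed to scale at ~7 % per K
rng = np.random.default_rng(1)
T = rng.uniform(5.0, 25.0, 20000)
I = np.exp(0.068 * T) * rng.lognormal(0.0, 0.4, T.size)
rate = scaling_rate(T, I)  # recovers a rate near the built-in ~7 % per K
```

Comparing such fitted rates between simulation and radar observations, percentile by percentile and property by property, is what reveals the partial agreement described above.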
Deriving stratospheric age of air spectra using an idealized set of chemically active trace gases
(2019)
Analysis of stratospheric transport from an observational point of view is frequently realized by evaluation of the mean age of air values from long-lived trace gases. However, this provides more insight into general transport strength and less into its mechanism. Deriving complete transit time distributions (age spectra) is desirable, but their deduction from direct measurements is difficult. It is so far primarily based on model work. This paper introduces a modified version of an inverse method to infer age spectra from mixing ratios of short-lived trace gases and investigates its basic principle in an idealized model simulation. For a full description of transport seasonality the method includes an imposed seasonal cycle to gain multimodal spectra. An ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulation is utilized for a general proof of concept of the method and features an idealized dataset of 40 radioactive trace gases with different chemical lifetimes as well as 40 chemically inert pulsed trace gases to calculate pulse age spectra. It is assessed whether the modified inverse method in combination with the seasonal cycle can provide matching age spectra when chemistry is well known. Annual and seasonal mean inverse spectra are compared to pulse spectra including first and second moments as well as the ratio between them to assess the performance on these timescales. Results indicate that the modified inverse age spectra match the annual and seasonal pulse age spectra well on the global scale beyond 1.5 years of mean age of air. The imposed seasonal cycle emerges as a reliable tool for including transport seasonality in the age spectra. Below 1.5 years of mean age of air, tropospheric influence intensifies and breaks the assumption of single entry through the tropical tropopause, leading to inaccurate spectra, in particular in the Northern Hemisphere.
The imposed seasonal cycle wrongly prescribes seasonal entry in this lower region and does not lead to a better agreement between inverse and pulse age spectra without further improvement. Tests with a focus on future application to observational data imply that subsets of trace gases with 5 to 10 species are sufficient for deriving well-matching age spectra. These subsets can also compensate for an average uncertainty of up to ±20 % in the knowledge of chemical lifetime if a deviation of circa ±10 % in modal age and amplitude of the resulting spectra is tolerated.
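The basic principle of the inversion can be illustrated with a toy linear system: each radioactive tracer with lifetime tau obeys chi = sum_j G(t_j) exp(-t_j/tau) dt, so with as many tracers as spectrum bins the discretized age spectrum follows from a linear solve. This idealized, noise-free sketch (all grids, lifetimes, and the gamma-shaped test spectrum are invented for illustration) omits the seasonal cycle and regularization of the actual method:

```python
import numpy as np

def invert_age_spectrum(lifetimes, mixing_ratios, transit_times, dt):
    """Recover a discretized age spectrum G(t) from mixing ratios of
    radioactive tracers with known chemical lifetimes.

    Sketch of the inversion principle only: chi_i = sum_j G(t_j) *
    exp(-t_j / tau_i) * dt, solved as a square linear system. The paper's
    method additionally imposes a seasonal cycle and uses many more tracers.
    """
    taus = np.asarray(lifetimes, float)
    t = np.asarray(transit_times, float)
    A = np.exp(-t[None, :] / taus[:, None]) * dt   # decay kernel matrix
    return np.linalg.solve(A, np.asarray(mixing_ratios, float))

# Idealized demonstration on a coarse transit-time grid (0-10 years)
dt = 1.25
t = np.arange(dt / 2, 10.0, dt)                    # 8 bin centres
G_true = t ** 1.5 * np.exp(-t / 1.2)               # gamma-like test spectrum
G_true /= (G_true * dt).sum()                      # normalize to unit area
taus = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])  # lifetimes (yr)
chi = (np.exp(-t[None, :] / taus[:, None]) * dt) @ G_true     # "observed"
G_est = invert_age_spectrum(taus, chi, t, dt)      # recovers G_true exactly
```

With perfectly known chemistry and no noise the recovery is exact; the abstract's finding that subsets of 5 to 10 tracers suffice, and that ±20 % lifetime errors remain tolerable, concerns the much harder regularized version of this problem.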
In global hydrological models, groundwater (GW) is typically represented by a bucket-like linear groundwater reservoir. Reservoir models, however, (1) can only simulate GW discharge to surface water (SW) bodies but not recharge from SW to GW, (2) provide no information on the location of the GW table, and (3) assume that there is no GW flow among grid cells. This may lead, for example, to an underestimation of groundwater resources in semiarid areas where GW is often replenished by SW or to an underestimation of evapotranspiration where the GW table is close to the land surface. To overcome these limitations, it is necessary to replace the reservoir model in global hydrological models with a hydraulic head gradient-based GW flow model.
We present G3M, a new global gradient-based GW model with a spatial resolution of 5′ (arcminutes), which is to be integrated into the 0.5° WaterGAP Global Hydrology Model (WGHM). The newly developed model framework enables in-memory coupling to WGHM while keeping overall runtime relatively low, which allows sensitivity analyses, calibration, and data assimilation. This paper presents the G3M concept and model design decisions that are specific to the large grid size required for a global-scale model. Model results under steady-state naturalized conditions, i.e., neglecting GW abstractions, are shown. Simulated hydraulic heads show better agreement with observations around the world than the model output of de Graaf et al. (2015). Locations of simulated SW recharge to GW are found, as expected, in dry and mountainous regions, but the areal extent of SW recharge may be underestimated. Globally, GW discharge to rivers is by far the dominant flow component, such that lateral GW flows only become a large fraction of total diffuse and focused recharge in the case of losing rivers, some mountainous areas, and some areas with very low GW recharge. A strong sensitivity of simulated hydraulic heads to the spatial resolution of the model and the related choice of the water table elevation of surface water bodies was found. We suggest investigating how global-scale groundwater modeling at 5′ spatial resolution can benefit from more highly resolved land surface elevation data.
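The gradient-based principle that distinguishes G3M from a linear reservoir can be shown on a toy 1-D chain of grid cells: lateral flow between neighbours is conductance times head difference, a river cell exchanges water with surface water through a river-bed conductance, and in steady state inflows balance diffuse recharge. This is a minimal sketch of the idea, not G3M's actual code or numerics:

```python
import numpy as np

def steady_state_heads(n, conductance, recharge, river_cell, river_head,
                       river_conductance):
    """Solve steady-state hydraulic heads in a 1-D chain of grid cells.

    For each cell i: sum_j C*(h_j - h_i) + [C_riv*(h_riv - h_i)] + R_i = 0,
    assembled as a linear system A h = b. All parameter values below are
    illustrative assumptions.
    """
    A = np.zeros((n, n))
    b = -np.asarray(recharge, float).copy()        # recharge enters each cell
    for i in range(n - 1):                         # lateral links i <-> i+1
        A[i, i] -= conductance
        A[i, i + 1] += conductance
        A[i + 1, i + 1] -= conductance
        A[i + 1, i] += conductance
    A[river_cell, river_cell] -= river_conductance  # GW-SW exchange term
    b[river_cell] -= river_conductance * river_head
    return np.linalg.solve(A, b)

heads = steady_state_heads(n=5, conductance=2.0, recharge=[0.1] * 5,
                           river_cell=2, river_head=10.0, river_conductance=1.0)
# In steady state all recharge must leave through the river cell:
river_outflow = 1.0 * (heads[2] - 10.0)  # equals total recharge, 0.5
```

Unlike a linear reservoir, this formulation yields a water table (the head profile sloping toward the river), allows flow in either direction across the river bed depending on the head difference, and lets water move laterally between cells.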
Understanding new particle formation and growth is important because of the strong impact of these processes on climate and air quality. Measurements to elucidate the main new particle formation mechanisms are essential; however, these mechanisms have to be implemented in models to estimate their impact on the regional and global scale. Parameterizations are computationally cheap ways of implementing nucleation schemes in models, but they have their limitations, as they do not necessarily include all relevant parameters. Process models using sophisticated nucleation schemes can be useful for the generation of look-up tables in large-scale models or for the analysis of individual new particle formation events. In addition, some other important properties can be derived from a process model that implicitly calculates the evolution of the full aerosol size distribution, e.g., the particle growth rates. Within this study, a model (SANTIAGO – Sulfuric acid Ammonia NucleaTIon And GrOwth model) is constructed that simulates new particle formation starting from the monomer of sulfuric acid up to a particle size of several hundred nanometers. The smallest sulfuric acid clusters containing one to four acid molecules and a varying amount of base (ammonia) are allowed to evaporate in the model, whereas growth beyond the pentamer (five sulfuric acid molecules) is assumed to be entirely collision-controlled. The main goal of the present study is to derive appropriate thermodynamic data needed to calculate the cluster evaporation rates as a function of temperature. These data are derived numerically from CLOUD (Cosmics Leaving OUtdoor Droplets) chamber new particle formation rates for neutral sulfuric acid–water–ammonia nucleation at temperatures between 208 and 292 K. The numeric methods include an optimization scheme to derive the best estimates for the thermodynamic data (dH and dS) and a Monte Carlo method to derive their probability density functions. 
The derived data are compared to literature values. Using different data sets for dH and dS in SANTIAGO, a detailed comparison between model results and measured CLOUD new particle formation rates is discussed.
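The role of dH and dS is to set cluster evaporation rates through the free energy dG = dH - T*dS, typically via detailed balance against a collision rate. The sketch below illustrates this standard construction under assumed values (1 atm reference pressure, a fixed collision rate coefficient, invented dH and dS); SANTIAGO's actual rate expressions may differ in detail:

```python
import math

def evaporation_rate(dH_kcal, dS_cal, temperature, collision_rate=5e-10):
    """Cluster evaporation rate (1/s) from formation enthalpy dH (kcal/mol)
    and entropy dS (cal/mol/K) via detailed balance.

    Sketch under standard assumptions: 1 atm reference pressure and an
    assumed collision rate coefficient in cm^3/s.
    """
    R = 1.987204e-3            # gas constant, kcal / (mol K)
    kB = 1.380649e-23          # Boltzmann constant, J / K
    p_ref = 101325.0           # reference pressure, Pa
    dG = dH_kcal - temperature * dS_cal * 1e-3     # free energy, kcal/mol
    c_ref = p_ref / (kB * temperature) * 1e-6      # reference conc., cm^-3
    return collision_rate * c_ref * math.exp(dG / (R * temperature))

# Illustrative dH/dS for a strongly bound cluster at the two CLOUD extremes:
rate_cold = evaporation_rate(dH_kcal=-25.0, dS_cal=-40.0, temperature=208.0)
rate_warm = evaporation_rate(dH_kcal=-25.0, dS_cal=-40.0, temperature=292.0)
```

The exponential dependence on dG/(RT) is why evaporation rates span many orders of magnitude between 208 and 292 K, and why small uncertainties in dH and dS translate into large uncertainties in simulated formation rates.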
Worldwide, academics and practitioners are developing ‘planning-oriented’ approaches to reduce the negative impacts of car traffic for more sustainable urban and transport development. One such example is the design of car-reduced neighborhoods, although these remain controversial within the hegemonic ‘system’ of automobility. Despite the reduction of emissions and frequent recognition as ‘best practice examples’, ‘planning-critical’ research questions the underlying objectives and narratives of such sustainable developments. Our study contributes to this research perspective by improving the understanding of narratives that emerge along with car-reduced housing developments. For this purpose, we analyze two car-reduced neighborhoods in the City of Darmstadt (Germany) by conducting interviews with different actors involved in the planning and implementation processes. Our investigation reveals that the development of car-reduced neighborhoods (i) is consciously embedded in the context of sustainability, (ii) is characterized by power relations, (iii) follows normative indicators, and (iv) does not always correspond to lived realities. Altogether, the traced narratives of car-reduced neighborhoods are embedded in the overarching debate on sustainability, while at the same time revealing the dependence of society on the automobile. Thus, the hegemonic ‘system’ of automobility—although it is beginning to crack—continues to exist.
In the early 1990s, anglophone geographies began to engage with the relationship between psychoanalysis and the city. Building on this, the early 2000s saw the proclamation of a psychoanalytic turn and the establishment of subdisciplines such as psychoanalytic geographies and psychoanalytic planning theory, which in recent years have become established components of scholarly engagement with cities in the anglophone world. Since no such turn has occurred in the German-speaking world, this contribution asks about the potential of psychoanalytic urban research there. To this end, the author pursues the thesis that the city is haunted by the unconscious from its very formation. The urban unconscious denotes a kind of constitutive disruptive factor that inscribes itself into the topology of the city and, in the last instance, renders the city impossible as an object (of urban research). Starting from this impossibility, the contribution traces the fantasies surrounding the social, political and material conditions of a city. From the perspective of psychoanalytic urban research, fantasies play a central role in lending the city an illusory consistency and keeping the urban unconscious at a distance. They make it possible to imagine the city, to feel it and to speak about it. The contribution closes with a few words on the challenges of a future mobilization of psychoanalysis for critical urban research.
Building on the experiences of two workshops on (urban) austerity in Greece and Germany, this contribution discusses the (differing) history and geography of austerity with particular attention to the regions of Frankfurt/Rhine-Main and Athens. The experiences of the multiple crisis since 2008, which unfolded in Greece against the background of an austerity-driven "shock doctrine" and in Germany in the context of a long-term project of "piecemeal" austerity, open up the possibility of subjecting the debates on urban austerity to critical scrutiny. The contribution identifies a need for further research especially regarding the crises of (urban) social reproduction and the crises of (municipal) politics and representation.
In this article, we develop a critical perspective on the geography of the electoral results of the Alternative für Deutschland (AfD) in the 2017 German federal elections. We question explanatory patterns that remain locked in a rigid urban-rural dichotomy and ignore the complex processual nature of urbanization. Drawing on Henri Lefebvre and Theodor W. Adorno, we instead conceive of the urban and the rural as social relations that diverge dialectically within the overarching process of urbanization and materialize spatially in the tension between centre and periphery. We illustrate this process by discussing three different places in which the AfD was particularly successful in the federal elections: the district of Vorpommern-Greifswald as a case of comprehensive peripheralization, the Pforzheim-Haidach neighbourhood as a peripheral centre, and the Mannheim-Schönau district as a central periphery. The article thus attempts to develop a spatial perspective on the recent successes of right-wing populism and to conceptually reframe urban-rural relations.
Here we present a comprehensive attempt to correlate aragonitic Na∕Ca ratios from Desmophyllum pertusum (formerly known as Lophelia pertusa), Madrepora oculata and a caryophylliid cold-water coral (CWC) species with different seawater parameters such as temperature, salinity and pH. Living CWC specimens were collected from 16 different locations and analyzed for their Na∕Ca ratios using solution-based inductively coupled plasma-optical emission spectrometry (ICP-OES) measurements.
The results reveal no apparent correlation with salinity (30.1–40.57 g kg−1) but a significant inverse correlation with temperature (−0.31 ± 0.04 mmol mol−1 °C−1). Other marine aragonitic organisms such as Mytilus edulis (inner aragonitic shell portion) and Porites sp. exhibit similar results, highlighting the consistency of the calculated CWC regressions. Corresponding Na∕Mg ratios show a similar temperature sensitivity to Na∕Ca ratios, but the combination of the two ratios appears to reduce the impact of vital effects and domain-dependent geochemical variation. The high degree of scatter and the elemental heterogeneities between the different skeletal features in both Na∕Ca and Na∕Mg, however, limit the use of these ratios as a proxy and/or make a high number of samples necessary. Additionally, we explore two models to explain the observed temperature sensitivity of Na∕Ca ratios for an open and a semi-enclosed calcifying space, based on temperature-sensitive Na- and Ca-pumping enzymes and transport proteins that change the composition of the calcifying fluid and consequently the skeletal Na∕Ca ratio.
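The reported regression slope can be turned into a simple temperature proxy. The sketch below inverts a linear Na/Ca-temperature relation using the slope stated in the abstract; the intercept is a placeholder assumption, not a published calibration value:

```python
# Illustrative inversion of the reported Na/Ca-temperature regression.
# Slope from the abstract: -0.31 mmol/mol per degree C; the intercept
# is a placeholder, not a published calibration value.
SLOPE = -0.31          # mmol/mol per degree C (reported)
INTERCEPT = 25.0       # mmol/mol at 0 degrees C (hypothetical)

def na_ca_from_temperature(t_celsius):
    """Forward model: skeletal Na/Ca (mmol/mol) for a given temperature."""
    return INTERCEPT + SLOPE * t_celsius

def temperature_from_na_ca(na_ca):
    """Invert the linear regression to estimate temperature (degrees C)."""
    return (na_ca - INTERCEPT) / SLOPE

# With this slope, a 1 mmol/mol decrease in Na/Ca corresponds to
# roughly 3.2 degrees C of warming:
delta_t = -1.0 / SLOPE
```

The inverse of the slope is what sets the practical sensitivity of the proxy, which is why the scatter discussed above matters: uncertainty in Na/Ca propagates into temperature amplified by about 3.2 °C per mmol/mol.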
Abiotic formation of n-alkane hydrocarbons has been postulated to occur within Earth's crust. Apparent evidence was primarily based on uncommon carbon and hydrogen isotope distribution patterns that set methane and its higher chain homologues apart from biotic isotopic compositions associated with microbial production and closed system thermal degradation of organic matter. Here, we present the first global investigation of the carbon and hydrogen isotopic compositions of n-alkanes in volcanic-hydrothermal fluids hosted by basaltic, andesitic, trachytic and rhyolitic rocks. We show that the bulk isotopic compositions of these gases follow trends that are characteristic of high temperature, open system degradation of organic matter. In sediment-free systems, organic matter is supplied by surface waters (seawater, meteoric water) circulating through the reservoir rocks. Our data set strongly implies that thermal degradation of organic matter is able to satisfy isotopic criteria previously classified as being indicative of abiogenesis. Further considering the ubiquitous presence of surface waters in Earth’s crust, abiotic hydrocarbon occurrences might have been significantly overestimated.
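The isotopic compositions discussed above are conventionally reported in delta notation, i.e. the relative deviation of a sample's isotope ratio from a reference standard; a minimal sketch:

```python
def delta_per_mil(r_sample, r_standard):
    """Delta value in per mil: relative deviation of the sample isotope
    ratio (e.g. 13C/12C or D/H) from a reference standard (e.g. VPDB
    for carbon, VSMOW for hydrogen)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Example: a sample whose 13C/12C ratio is 2.5% below the standard
# has delta-13C = -25 per mil, a value typical of organic matter.
```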
We present novel measurements of five short-lived brominated source gases (CH2Br2, CHBr3, CH2ClBr, CHCl2Br and CHClBr2) obtained using a gas chromatograph-mass spectrometer system on board the High Altitude and Long Range Research Aircraft (HALO). The instrument is extremely sensitive due to the use of chemical ionisation, allowing detection limits in the lower parts per quadrillion (10⁻¹⁵) range. Data from three campaigns using the HALO aircraft are presented, during which the Upper Troposphere/Lower Stratosphere (UTLS) of the Northern Hemisphere mid to high latitudes was sampled during winter and during late summer to early fall. We show that an observed decrease with altitude in the stratosphere is consistent with the relative lifetimes of the different compounds. Distributions of the five source gases and of total organic bromine just below the tropopause show an increase in mixing ratio with latitude, in particular during polar winter. This increase is explained by increasing lifetimes at higher latitudes during winter. As the mixing ratios at the extratropical tropopause are generally higher than those derived for the tropical tropopause, extratropical troposphere-to-stratosphere transport will result in elevated levels of organic bromine in comparison to air transported over the tropical tropopause. The observations are compared to model estimates using different emission scenarios. A scenario with emissions most strongly concentrated at low latitudes cannot reproduce the observed latitudinal distributions and will tend to overestimate bromine input through the tropical tropopause from CH2Br2 and CHBr3. Consequently, this scenario also overestimates the amount of brominated organic gases in the stratosphere.
The two scenarios with the highest overall emissions of CH2Br2 tend to overestimate mixing ratios at the tropical tropopause but are in much better agreement with extratropical tropopause values, showing that not only the total emissions but also their latitudinal distribution are of importance. While an increase in tropopause values with latitude is reproduced by all emission scenarios during winter, the simulated extratropical tropopause values are on average lower than the observations during late summer to fall. We show that a good knowledge of the latitudinal distribution of tropopause mixing ratios and of the fractional contributions of tropical and extratropical air is needed to derive stratospheric inorganic bromine in the lowermost stratosphere from observations. Depending on the underlying emission scenario, differences of a factor of 2 between reactive bromine derived from observations and from model output are found for the lowermost stratosphere, based on source gas injection. We conclude that a good representation of the contributions of different source regions is required in models for a robust assessment of the role of short-lived halogen source gases in ozone depletion in the UTLS.
Diamond formation in the Earth has been extensively discussed in recent years on the basis of geochemical analysis of natural materials, high-pressure experimental studies, and theoretical considerations. Here, we demonstrate experimentally, for the first time, the spontaneous crystallization of diamond from CH4-rich fluids at pressure, temperature and redox conditions approximating those of the deeper parts of the cratonic lithospheric mantle (5–7 GPa), without using diamond seed crystals or carbides. In these experiments the fluid phase is nearly pure methane, even though the oxygen fugacity was significantly above metal saturation. We propose several previously unidentified mechanisms that may promote diamond formation under such conditions and that may also have implications for the origin of sublithospheric diamonds. These include the hydroxylation of silicate minerals like olivine and pyroxene, H2 incorporation into these phases, and the "etching" of graphite by H2 and CH4 and its reprecipitation as diamond. This study also serves as a demonstration of our new high-pressure experimental technique for obtaining reduced fluids, which is relevant not only for diamond synthesis but also for investigating the metasomatic origins of diamond in the upper mantle, with further implications for the deep carbon cycle.
Tropical cyclones (TCs) represent a substantial threat to life and property for Caribbean and adjacent populations. The prospective increase in TC magnitudes, expressed in the 15th chapter of the IPCC AR5 report, entails a rising probability of ecological and social disasters, as tragically exemplified by several severe Caribbean TC strikes during the past 20 years. Modern IPCC-grade climate models, however, still lack the spatial and temporal resolution required to accurately consider the underlying boundary conditions that modulate long-term TC patterns beyond the Instrumental Era. It is thus necessary to provide a synoptic mechanistic understanding of the origin of such long-term patterns in order to predict reliable changes in TC magnitude and frequency under future climate scenarios. Caribbean TC records are still rare and often lack the continuity and resolution necessary to overcome these limitations. Here, we report on an annually resolved sedimentary archive from the bottom of the Great Blue Hole (Lighthouse Reef, Belize). The TC record encompasses 1885 years and surpasses all existing site-specific TC archives in both resolution and duration. We identified a likely connection between long-term TC patterns and the responses of climate phenomena to Common Era climate variations and offer a conceptual and comparative view considering several involved tropospheric and oceanographic control mechanisms such as the El Niño–Southern Oscillation, the North Atlantic Oscillation and the Atlantic Multidecadal Oscillation. These basin-scale climate modes exert internal control on TC activity by modulating the thermodynamic environment (sea-surface temperature and vertical wind shear dynamics) for enhanced/suppressed TC formation on both millennial (primary) and multi-decadal (secondary) time scales.
We interpret the beginning of the Medieval Warm Period (MWP) as an important time interval of the Common Era record and suspect that the southward migration of the Intertropical Convergence Zone (ITCZ), in combination with extensive hydro-climate changes, caused a shift in the tropical Atlantic TC regime. TC activity in the south-western Caribbean changed overall from a stable and less active stage (100–900 CE) to a more active and variable state (1100 CE to modern).
In partially molten regions inside the Earth, melt buoyancy may trigger upwelling of both solid and fluid phases, i.e. diapirism. If the melt is allowed to move separately with respect to the matrix, melt perturbations may evolve into solitary porosity waves. While diapirs may form on a wide range of scales, porosity waves are restricted to sizes of a few times the compaction length. Thus, the size of a partially molten perturbation controls whether a diapir or a porosity wave will emerge. We study the transition from diapiric rise to solitary porosity waves by solving the two-phase flow equations of conservation of mass and momentum in 2D with porosity-dependent matrix viscosity. We systematically vary the initial size of a porosity perturbation from 1 to 100 times the compaction length. If the perturbation is much larger than a regular solitary wave, its Stokes velocity is large and therefore faster than the segregating melt. Consequently, the fluid is not able to form a porosity wave and a diapir emerges. For small perturbations solitary waves emerge, with either a positive or negative vertical matrix velocity inside. Between the diapir and solitary-wave regimes we observe a third regime of solitary-wave-induced focusing of melt. In these cases, diapirism is dominant but the fluid is still fast enough to locally build up small solitary waves, which rise slightly faster than the diapir and form finger-like structures at its front. In our numerical simulations the width of these fingers is controlled by the compaction length or the grid size, whichever is larger. In cases where the compaction length becomes similar to or smaller than the grid size, the finger-like leading solitary porosity waves are no longer properly resolved, and the result may be waves that are too large and too fast. Therefore, one should be careful in large-scale two-phase flow modelling with melt focusing, especially when the compaction length and the grid size are of similar order.
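The compaction length that separates the two regimes can be estimated from the standard two-phase flow scaling δc = √(k(ζ + 4η/3)/μ); the parameter values below are illustrative only, not those used in the simulations:

```python
import math

def compaction_length(k, eta_bulk, eta_shear, mu_fluid):
    """Compaction length delta_c = sqrt(k * (zeta + 4*eta/3) / mu),
    the intrinsic length scale separating porosity-wave behaviour
    (perturbations of a few delta_c) from diapiric rise (much larger).
    k: permeability (m^2); eta_bulk, eta_shear: matrix bulk and shear
    viscosities (Pa s); mu_fluid: melt viscosity (Pa s)."""
    return math.sqrt(k * (eta_bulk + 4.0 * eta_shear / 3.0) / mu_fluid)

# Illustrative (assumed) mantle values: delta_c of a few kilometres,
# which is why grid resolution can become the limiting factor.
delta_c = compaction_length(k=1e-12, eta_bulk=1e19, eta_shear=1e19, mu_fluid=1.0)
```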
The strained housing market has induced a new wave of displacement processes in many cities and has frequently rendered the situation of low-income households precarious. In view of these developments, rent-politics movements have formed in many places, advocating a departure from a neoliberalized and increasingly financialized housing provision. In her research, Lisa Vollmer examines two such movements and asks how political collectivity forms in the everyday practices of tenants in Berlin and New York.
The most frequently used boundary-layer turbulence parameterizations in numerical weather prediction (NWP) models are turbulence kinetic energy (TKE) based schemes. However, these parameterizations suffer from a potential weakness, namely a strong dependence on an ad hoc quantity, the so-called turbulence length scale. The physical interpretation of the turbulence length scale is difficult, and hence it cannot be directly related to measurements or large eddy simulation (LES) data. Consequently, formulations of the turbulence length scale in basically all TKE schemes are based on simplified assumptions and are model-dependent. A good reference for the independent evaluation of turbulence length scale expressions for NWP modeling is missing. Here we propose a new turbulence length scale diagnostic which can be used in the gray zone of turbulence without modifying the underlying TKE turbulence scheme. The new diagnostic is based on the TKE budget: the core idea is to encapsulate the sum of the molecular dissipation and the cross-scale TKE transfer into an effective dissipation and associate it with the new turbulence length scale. This effective dissipation can then be calculated as a residuum in the TKE budget equation (for horizontal sub-domains of different sizes) using LES data. An estimation of the scale dependence of the diagnosed turbulence length scale using this novel method is presented for several idealized cases.
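The residual approach described above can be sketched in two steps: the effective dissipation is diagnosed as the residuum of the TKE budget, then associated with a length scale through the common Kolmogorov-type closure ε = C e^(3/2)/l. The budget values and the closure constant below are assumed for illustration:

```python
C_EPS = 0.845  # closure constant (value assumed for illustration)

def effective_dissipation(shear_prod, buoy_prod, transport, tendency):
    """Effective dissipation diagnosed as the residuum of the TKE budget:
    de/dt = P_shear + P_buoy + Transport - eps_eff, so
    eps_eff = P_shear + P_buoy + Transport - de/dt.
    Here eps_eff bundles molecular dissipation and cross-scale transfer."""
    return shear_prod + buoy_prod + transport - tendency

def turbulence_length_scale(tke, eps_eff):
    """Associate the effective dissipation with a length scale via the
    common closure eps = C * e^(3/2) / l, solved for l."""
    return C_EPS * tke ** 1.5 / eps_eff

# Synthetic budget terms (m^2 s^-3), as might be diagnosed from an
# LES sub-domain, and a TKE of 0.5 m^2 s^-2:
eps = effective_dissipation(4e-3, 1e-3, -5e-4, 5e-4)
l = turbulence_length_scale(tke=0.5, eps_eff=eps)
```

Repeating this diagnosis for sub-domains of different sizes yields the scale dependence of the length scale in the gray zone without touching the TKE scheme itself.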
Drought is understood as both a lack of water (i.e., a deficit as compared to some requirement) and an anomaly in the condition of one or more components of the hydrological cycle. Most drought indices, however, only consider the anomaly aspect, i.e., how unusual the condition is. In this paper, we present two drought hazard indices that reflect both the deficit and the anomaly aspect. The soil moisture deficit anomaly index, SMDAI, is based on the drought severity index, DSI, but is computed in a more straightforward way that does not require the definition of a mapping function. We propose a new indicator of drought hazard for water supply from rivers, the streamflow deficit anomaly index, QDAI, which takes into account the surface water demand of humans and freshwater biota. Both indices are computed and analyzed at the global scale, with a spatial resolution of roughly 50 km, for the period 1981–2010, using monthly time series of variables computed by the global water resources and use model WaterGAP 2.2d. We found that the SMDAI and QDAI values are broadly similar to values of purely anomaly-based indices. However, the deficit anomaly indices provide more differentiated spatial and temporal patterns that help to distinguish the degree of the actual drought hazard to vegetation health or water supply. QDAI can be made relevant for stakeholders with different perceptions of the importance of ecosystem protection by adapting the approach for computing the amount of water that is required to remain in the river for the well-being of the river ecosystem. Both deficit anomaly indices are well suited for inclusion in local or global drought risk studies.
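The two aspects, deficit and anomaly, can be combined in various ways. The sketch below is one illustrative combination, not the paper's exact SMDAI or QDAI formulation: a month is flagged only when it is both unusually dry (anomaly) and actually short of the requirement (deficit):

```python
import statistics

def deficit_anomaly_index(series, requirement):
    """Illustrative deficit-anomaly combination (not the published SMDAI):
    for each month, the relative deficit below a requirement is scaled by
    the standardized anomaly, so a purely anomalous but deficit-free month
    scores zero, and drought months yield negative values."""
    mean = statistics.mean(series)
    std = statistics.stdev(series)
    out = []
    for x in series:
        anomaly = (x - mean) / std                         # how unusual
        deficit = max(requirement - x, 0.0) / requirement  # how short of need
        out.append(anomaly * deficit)
    return out
```

This mirrors the motivation stated above: a month that is unusual but still above the requirement is no hazard, and a month in deficit that is climatologically normal is chronically scarce rather than a drought.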
Analysing the composition of ambient ultrafine particles (UFP) is a challenging task due to the low mass and chemical complexity of small particles, yet it is a prerequisite for the identification of particle sources and the assessment of potential health risks. Here, we show the molecular characterization of UFP, based on cascade impactor (Nano-MOUDI) samples that were collected at an air quality monitoring station near one of Europe's largest airports in Frankfurt, Germany. At this station, particle-size-distribution measurements show enhanced number concentrations of particles smaller than 50 nm during airport operating hours. We sampled the lower UFP fractions (0.010–0.018 μm, 0.018–0.032 μm, 0.032–0.056 μm) when the air masses arrived from the airport. We developed an optimized filter extraction procedure, used ultra-high-performance liquid chromatography (UHPLC) for compound separation, and a heated electrospray ionization (HESI) source with an Orbitrap high-resolution mass spectrometer (HRMS) as a detector for organic compounds. A non-target screening detected ~200 organic compounds in the UFP fraction with sample-to-blank ratios larger than five. We identified the largest signals as homologous series of pentaerythritol esters (PEE) and trimethylolpropane esters (TMPE), which are base stocks of aircraft lubrication oils. We unambiguously attribute the majority of detected compounds to jet engine lubrication oils by matching retention times and high-resolution/accurate mass (HR/AM) measurements, and by comparing MS/MS fragmentation patterns between ambient samples and commercially available jet oils. For each UFP stage, we created molecular fingerprints to visualize the complex chemical composition of the organic fraction and its average carbon oxidation state. These graphs underline the presence of the homologous series of PEE and TMPE and the appearance of jet oil additives (e.g. tricresyl phosphate, TCP).
Targeted screening for TCP confirmed the absence of the harmful tri-ortho isomer, while we identified a thermal transformation product of TMPE-based lubrication oil (trimethylolpropane phosphate, TMP-P). Even though a quantitative determination of the identified compounds is limited, the presented method enables the qualitative detection of molecular markers for jet engine lubricants in UFP and thus strongly improves the source apportionment of UFP near airports.
Background: Point-of-care devices for performing targeted coagulation substitution in bleeding patients have become increasingly important in recent years. New on the market is the Quantra® from HemoSonics, LLC (Charlottesville, VA, USA). It uses SEER sonorheometry (sonic estimation of elasticity via resonance), a novel ultrasound-based technology that measures the viscoelastic properties of whole blood. Several studies have already shown its comparability with devices established on the market such as the ROTEM® (TEM International GmbH, Munich, Germany).
Objective: In contrast to existing studies, the planned study will be the first prospective interventional study using the new Quantra® system in a cardiac surgical patient cohort. The aim is to investigate the non-inferiority between an already existing coagulation algorithm, based on ROTEM®/Multiplate®, and a new algorithm based on the Quantra®, for the treatment of coagulopathic cardiac surgical patients.
Methods: The study is divided into two phases. In an initial observation phase, whole blood samples of 20 patients will be analyzed using both ROTEM®/Multiplate® and Quantra®, obtained at three defined time points (prior to surgery, after completion of cardiopulmonary bypass, and on arrival in the intensive care unit). The obtained threshold values will be used to create an algorithm for hemotherapy. In a second intervention phase, the new algorithm will be tested for non-inferiority against an algorithm that has been used routinely at our department for years.
Results: The primary endpoint is the cumulative blood loss within 24 hours after surgery. Statistical calculations based on the literature and in-house data suggest that the new algorithm is non-inferior if the difference in cumulative blood loss is <150 ml/24 h.
Conclusions: Because of the comparability of the Quantra® sonorheometry system with ROTEM® rotational thromboelastometric measurement methods, the existing hemotherapy treatment algorithm can be adapted to the Quantra® device given proof of non-inferiority. Clinical trial: International Registered Report Identifier (IRRID); ClinicalTrials.gov NCT03902275
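The non-inferiority criterion stated above can be illustrated with a normal-approximation sketch: the new algorithm passes if the one-sided upper confidence bound for the mean difference in blood loss stays below the 150 ml margin. The summary statistics in the example are hypothetical, not trial results:

```python
from statistics import NormalDist

MARGIN_ML = 150.0  # non-inferiority margin for 24 h blood loss (from the protocol)

def non_inferior(mean_diff, se_diff, margin=MARGIN_ML, alpha=0.025):
    """Normal-approximation sketch of a non-inferiority test: the new
    algorithm is non-inferior if the one-sided upper (1 - alpha)
    confidence bound for the mean difference in cumulative blood loss
    (new minus standard, in ml/24 h) lies below the margin."""
    z = NormalDist().inv_cdf(1.0 - alpha)
    upper_bound = mean_diff + z * se_diff
    return upper_bound < margin

# Hypothetical example: observed mean difference 40 ml with standard
# error 45 ml gives an upper bound of about 128 ml, below the margin.
```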
During the first two days of August 2016 a seismic crisis occurred on Brava, Cabo Verde, which, according to observations based on a local seismic network, was characterized by more than a thousand volcano-seismic signals. Brava is considered an active volcanic island, although it has not experienced any historic eruptions. Seismicity significantly exceeded the usual level during the crisis. We report on results based on data from a temporary seismic-array deployment on the neighbouring island of Fogo at a distance of about 35 km. The array was in operation from October 2015 to December 2016 and recorded a total of 1343 earthquakes in the region of Fogo and Brava; 355 thereof were localized. On 1 and 2 August we observed 54 earthquakes, 25 of which could be located beneath Brava. We further evaluate the observations with regard to possible precursors to the crisis and its continuation. Our analysis shows a significant variation in seismicity around Brava, but no distinct precursory pattern. However, the observations suggest that similar earthquake swarms commonly occur close to Brava. The results further confirm the advantages of seismic arrays as tools for the remote monitoring of regions with limited station coverage or access.
In order to encourage a shift from the car to the more sustainable transport mode of cycling, cycle streets have been implemented in cities all over the world in the last few years. In these shared streets, the entire carriageway is designated for cyclists, while motorized traffic is subordinated. However, evidence on the impact of cycle street interventions on travel behavior change has been limited until now. Therefore, the objective of this study was to evaluate whether cycle streets are an effective measure to facilitate bicycle use and discourage car use, thus contributing to the aim of promoting sustainable travel. For this purpose, we conducted a written household survey in the German city of Offenbach am Main involving participants affected by a cycle street intervention (n = 701). Based on the stage model of self-regulated behavioral change (SSBC), we identified the participants' level of willingness to use a bicycle frequently and to reduce car use. By means of bivariate and multivariate statistical methods, we analyzed the influence of awareness, use, and perceptions of the cycle street on the willingness to change behavior towards more sustainable travel. The results show that the intervention has a positive impact on frequent bicycle use, while we observed only a limited effect on car use reduction. Traffic conflicts and car speeding within the cycle street adversely affect the acceptance of the intervention. The study's findings provide new insights into the actual effects of a cycle street and its potential to encourage sustainable travel behavior.
Rodrigues Ridge connects the Réunion hotspot track with the Central Indian Ridge (CIR) and has been suggested to represent the surface expression of a sub-lithospheric flow channel. From global earthquake catalogues, the seismicity in the region has been associated mainly with events related to the fracture zones at the CIR. However, some segments of the CIR appear void of seismic events. Here, we report on the seismicity recorded at a temporary array of 10 seismic stations operating on Rodrigues Island from September 2014 to June 2016. The array analysis was performed in the time domain by time shifting and stacking the complete waveforms. Event distances were estimated based on a 1-D velocity model and the travel time differences between S and P wave arrivals. We detected and located 63 new events that were not reported by the global networks. Most of the events (51) are located off the CIR and can be classified as intraplate earthquakes. Local magnitudes varied between 1.6 and 3.7. Four seismic clusters were observed that occurred to the west of the spreading segment of the CIR. The Rodrigues Ridge appeared to be aseismic during the period of operation. The lack of seismic activity along both Rodrigues Ridge and the sections of the CIR to the east of Rodrigues may be explained by partially molten upper-mantle material, possibly in relation to the proposed material flow between the Réunion plume and the CIR.
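The distance estimation from S-P arrival-time differences mentioned above reduces, in the uniform-velocity limit, to a simple closed form. The study used a 1-D velocity model, so the generic crustal velocities below are illustrative assumptions, not the values of that model:

```python
def sp_distance_km(ts_minus_tp, vp=6.0, vs=3.46):
    """Hypocentral distance (km) from the S-P arrival-time difference
    (s), assuming straight rays in a uniform medium:
    d = dt * Vp * Vs / (Vp - Vs).
    vp, vs (km/s) are generic crustal values, assumed for illustration;
    a layered 1-D model as used in the study refines this estimate."""
    return ts_minus_tp * vp * vs / (vp - vs)

# With these velocities, an S-P time of 5 s corresponds to roughly 40 km.
d = sp_distance_km(5.0)
```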
Inclusions of breyite (previously known as walstromite-structured CaSiO3) in diamond are usually interpreted as retrogressed CaSiO3 perovskite trapped in the transition zone or the lower mantle. However, the thermodynamic stability field of breyite does not preclude its crystallization together with diamond under upper-mantle conditions (6–10 GPa). The possibility of breyite forming in subducted sedimentary material through the reaction CaCO3 + SiO2 = CaSiO3 + C + O2 was experimentally evaluated in the CaO–SiO2–C–O2 ± H2O system at 6–10 GPa, 900–1500 ∘C and oxygen fugacity 0.5–1.0 log units below the Fe–FeO (IW) buffer. One experimental series was conducted in the anhydrous subsystem and aimed at determining the melting temperature of the aragonite–coesite (or stishovite) assemblage. It was found that melting occurs at a lower temperature (∼1500 ∘C) than the decarbonation reaction, which indicates that breyite cannot be formed from aragonite and silica under anhydrous conditions and an oxygen fugacity above IW – 1. In the second experimental series, we investigated partial melting of an aragonite–coesite mixture under hydrous conditions at the same pressures and redox conditions. The melting temperature in the presence of water decreased strongly (to 900–1200 ∘C), and the melt had a hydrous silicate composition. The reduction of melt resulted in graphite crystallization in equilibrium with titanite-structured CaSi2O5 and breyite at ∼1000 ∘C. The maximum pressure of possible breyite formation is limited by the reaction CaSiO3 + SiO2 = CaSi2O5 at ∼8 GPa. Based on the experimental results, it is concluded that breyite inclusions found in natural diamond may be formed from an aragonite–coesite assemblage or carbonate melt at 6–8 GPa via reduction at high water activity.
Inappropriate land management leads to soil loss, with destruction of the land's resource and sediment input into the receiving river. Estimating soil loss is part of establishing the sediment budget of a catchment. In the Ruzizi catchment in the eastern Democratic Republic of the Congo (DRC), only limited research has been conducted on soil loss, mainly dealing with local observations of geomorphological forms or river load measurements; a regional quantification of soil loss has been missing so far. Such quantifications can be calculated using the Universal Soil Loss Equation (USLE). It is composed of four factors: precipitation (R), soil (K), topography (LS), and vegetation cover (C). The factors can be calculated in different ways according to the characteristics of the study area. In this paper, different approaches for calculating the single factors are reviewed and validated with field work in two sub-catchments of the Ruzizi River supplying the water for the reservoirs of the Ruzizi I and II hydroelectric dams. It became apparent that the (R)USLE model provides the best results with revised R and LS factors. Calculating the C factor required a supervised classification using the maximum likelihood procedure; different C factor values were then assigned to the land cover classes. The calculations resulted in a soil loss rate of around 576 kt/yr for the predominantly occurring Ferralsols and Leptosols in both catchments, when 2016 land cover and precipitation are used. This represents an area-normalized value of 40.4 t/ha/yr for Ruzizi I and 50.5 t/ha/yr for Ruzizi II due to the different land cover in the two sub-catchments. The mean value for the whole study area is 47.8 t/ha/yr, or 27.1 t/ha/yr when considering land management techniques like terracing on the slopes (P factor). This work has shown that the (R)USLE model can serve as an easy-to-handle tool for soil loss quantification when comprehensive field work results are sparse.
The model can be implemented in Geographic Information Systems (GIS) with free data; hence, validation is crucial. It becomes apparent that the use of high-resolution Sentinel-2A MSI data as the basis for C factor calculations is an appropriate method for considering heterogeneous Land Use Land Cover (LULC) patterns. To transfer the approach to other regions, the calculation of the R factor needs to be modified.
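The (R)USLE calculation itself is a simple product of the factors, A = R · K · LS · C · P. The sketch below uses placeholder factor values to show the mechanics, including how a support-practice factor P < 1 (e.g. for terracing) scales the estimate; none of the numbers are results of the study:

```python
def usle_soil_loss(r, k, ls, c, p=1.0):
    """Annual soil loss A (t/ha/yr) from the (Revised) Universal Soil
    Loss Equation, A = R * K * LS * C * P, where R is rainfall
    erosivity, K soil erodibility, LS slope length/steepness, C cover,
    and P the support-practice factor (1.0 = no practices).
    All factor values used below are illustrative placeholders."""
    return r * k * ls * c * p

# Without support practices (P = 1) vs. with terracing (P = 0.55 here,
# an assumed value): the loss estimate scales down proportionally.
a_no_practice = usle_soil_loss(r=3000, k=0.02, ls=4.0, c=0.2)
a_terraced = usle_soil_loss(r=3000, k=0.02, ls=4.0, c=0.2, p=0.55)
```

In a GIS implementation each argument becomes a raster layer and the product is evaluated per grid cell, which is why the factor rasters (especially C from the supervised classification) dominate the effort.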
Surface temperature is a fundamental parameter of Earth’s climate. Its evolution through time is commonly reconstructed using the oxygen isotope and clumped isotope compositions of carbonate archives. However, reaction kinetics involved in the precipitation of carbonates can introduce inaccuracies in the derived temperatures. Here, we show that dual clumped isotope analyses, i.e., simultaneous ∆47 and ∆48 measurements on a single carbonate phase, can identify the origin and quantify the extent of these kinetic biases. Our results verify theoretical predictions and show that the isotopic disequilibrium commonly observed in speleothems and scleractinian coral skeletons is inherited from the dissolved inorganic carbon pool of their parent solutions. Further, we show that dual clumped isotope thermometry can achieve reliable palaeotemperature reconstructions devoid of kinetic bias. Analysis of a belemnite rostrum implies that it precipitated near isotopic equilibrium and confirms warmer-than-present temperatures during the Early Cretaceous at southern high latitudes.
Measurements of halogenated trace gases in ambient air frequently rely on canister sampling followed by offline laboratory analysis. This allows a large number of compounds to be analysed under stable conditions, maximizing measurement precision. However, individual compounds might be affected during the sampling and storage of canister samples. In order to assess halocarbon stability in whole-air samples from the upper troposphere and lowermost stratosphere, we performed stability tests using the high-resolution sampler (HIRES) air sampling unit, which is part of the Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container (CARIBIC) instrument package. The HIRES unit holds 88 lightweight stainless-steel cylinders that are pressurized in flight to 4.5 bar using metal bellows pumps. The HIRES unit was first deployed in 2010 but, with the exception of chloromethane, has not yet been used for regular halocarbon analysis. The sample collection unit was tested for sampling and storage effects on 28 halogenated compounds. The focus was on compound stability in the stainless-steel canisters during storage of up to 5 weeks and on the influence of ozone, since flights take place in the upper troposphere and lowermost stratosphere, where ozone mixing ratios reach up to several hundred parts per billion by volume (ppbv). Most of the investigated (hydro)chlorofluorocarbons and long-lived hydrofluorocarbons were found to be stable over a storage time of up to 5 weeks and were unaltered by ozone being present during pressurization. Some compounds, such as dichloromethane, trichloromethane, and tetrachloroethene, started to decrease in the canisters after a storage time of more than 2 weeks or exhibited lowered mixing ratios in samples pressurized with ozone present. A few compounds, such as tetrachloromethane and tribromomethane, were found to be unstable in the HIRES stainless-steel canisters independent of ozone levels.
Furthermore, growth during storage was observed for some compounds, namely HFC-152a, HFC-23, and Halon 1301.
Acesta excavata (Fabricius, 1779) is a slow-growing bivalve of the family Limidae and is often found associated with cold-water coral reefs along the European continental margin. Here we present the compositional variability of frequently used proxy elemental ratios (Mg/Ca, Sr/Ca, Na/Ca) measured by laser-ablation mass spectrometry (LA-ICP-MS) and compare it to in-situ recorded instrumental seawater parameters such as temperature and salinity. Shell Mg/Ca measured in the fibrous calcitic shell section was overall not correlated with seawater temperature or salinity; however, some samples show significant correlations with temperature, with a sensitivity that is unusually high in comparison to other marine organisms. Mg/Ca and Sr/Ca measured in the fibrous calcitic shell section display significant negative correlations with the linear extension rate of the shell, which indicates strong vital effects in these bivalves. Multiple linear regression analysis indicates that up to 79% of the elemental variability is explicable with temperature and salinity as independent predictor variables. Yet, the overall results clearly show that the application of element/Ca (E/Ca) ratios in these bivalves to reconstruct past changes in temperature and salinity is likely to be complicated by strong vital effects and the effects of organic material embedded in the shell. We therefore suggest applying additional techniques, such as clumped isotopes, in order to determine and quantify the underlying vital effects and possibly account for them. We found differences in chemical composition between the two calcitic shell layers that are possibly explained by differences in crystal morphology. Sr/Ca ratios also appear to be partly controlled by the amount of magnesium, because the small magnesium ions bend the crystal lattice, which increases the space for strontium incorporation.
Oxidative cleaning with H2O2 did not significantly change the Mg/Ca and Sr/Ca composition of the shell. Na/Ca ratios decreased after the oxidative cleaning, which is most likely a leaching effect and not caused by the removal of organic matter.
Analysing the composition of ambient ultrafine particles (UFPs) is a challenging task due to the low mass and chemical complexity of small particles, yet it is a prerequisite for the identification of particle sources and the assessment of potential health risks. Here, we show the molecular characterization of UFPs, based on cascade impactor (Nano-MOUDI) samples that were collected at an air quality monitoring station near one of Europe's largest airports, in Frankfurt, Germany. At this station, particle-size-distribution measurements show an enhanced number concentration of particles smaller than 50 nm during airport operating hours. We sampled the lower UFP fraction (0.010–0.018, 0.018–0.032, 0.032–0.056 µm) when the air masses arrived from the airport. We developed an optimized filter extraction procedure using ultra-high-performance liquid chromatography (UHPLC) for compound separation and a heated electrospray ionization (HESI) source with an Orbitrap high-resolution mass spectrometer (HRMS) as a detector for organic compounds. A non-target screening detected ∼200 organic compounds in the UFP fraction with sample-to-blank ratios larger than 5. We identified the largest signals as homologous series of pentaerythritol esters (PEEs) and trimethylolpropane esters (TMPEs), which are base stocks of aircraft lubrication oils. We unambiguously attribute the majority of detected compounds to jet engine lubrication oils by matching retention times, high-resolution and accurate mass measurements, and comparing tandem mass spectrometry (MS2) fragmentation patterns between both ambient samples and commercially available jet oils. For each UFP stage, we created molecular fingerprints to visualize the complex chemical composition of the organic fraction and their average carbon oxidation state. These graphs underline the presence of the homologous series of PEEs and TMPEs and the appearance of jet oil additives (e.g. tricresyl phosphate, TCP). 
Targeted screening of TCP confirmed the absence of the harmful tri-ortho isomer, while we identified a thermal transformation product of TMPE-based lubrication oil (trimethylolpropane phosphate, TMP-P). Even though a quantitative determination of the identified compounds is limited, the presented method enables the qualitative detection of molecular markers for jet engine lubricants in UFPs and thus strongly improves the source apportionment of UFPs near airports.
Wildfire is the most common disturbance type in boreal forests and can trigger significant changes in forest composition. In peatlands, waterlogging determines the degree of tree cover and the depth of the burning horizon associated with wildfires. However, interactions between peatland moisture, vegetation composition and flammability, and fire regime in the forested peatlands of Eurasia remain largely unexplored, despite their huge extent in boreal regions. To address this knowledge gap, we reconstructed the Holocene fire regime, vegetation composition and peatland hydrology at two sites in Western Siberia (Tomsk Oblast, Russia). The palaeoecological records originate from forested peatland areas in predominantly light taiga (Pinus-Betula), with an increasing share of dark taiga communities (Pinus sibirica, Picea obovata, Abies sibirica) towards the east. We found that the past water level fluctuated between 8 and 30 cm below the peat surface. Wet peatland conditions promoted broadleaf trees (Betula), whereas dry peatland conditions favoured conifers and a greater forest density (dark-to-light-taiga ratio). The frequency and severity of fire increased with a declining water table, which enhanced fuel dryness and flammability, and at intermediate forest density. We found that the probability of intensification in fire severity increased when the water
level declined below 20 cm, suggesting a tipping point in peatland hydrology at which the wildfire regime intensifies. On a Holocene scale, we found two scenarios of moisture-vegetation-fire interactions. In the first, severe fires were recorded between 7.5 and 4.5 ka BP, with a lower water level and an increased proportion of dark taiga and fire avoiders (Pinus sibirica at Rybanya and Abies sibirica at Ulukh-Chayakh) mixed into the dominantly light taiga and fire-resister community of Pinus
sylvestris. The second occurred over the last 1.5 ka and was associated with fluctuating water tables, a declining abundance of fire avoiders, and an expansion of fire invaders (Betula). These findings suggest that frequent high-severity fires can lead to compositional and structural changes in forests when trees fail to reach reproductive maturity between fire events or where extensive forest gaps limit seed dispersal. This study also shows prolonged periods of synchronous fire activity across the sites, particularly during the early to mid-Holocene, suggesting a regional imprint of centennial to millennial-scale Holocene climate
variability on wildfire activity. Increasing human presence in the region of the Ulukh-Chayakh Mire near Teguldet over the last four centuries drastically enhanced ignitions compared to natural background levels. Frequent warm and dry spells predicted for the future in Siberia by climate change scenarios will enhance peatland drying and may convey a competitive advantage to conifer taxa. However, dry conditions, particularly a water table decline below the threshold of 20 cm, will probably exacerbate the frequency and severity of wildfire, disrupt conifers’ successional pathway and accelerate shifts towards more fire-adapted broadleaf tree cover. Furthermore, climate-disturbance-fire feedbacks will accelerate changes in the carbon balance of forested boreal peatlands and affect their overall future resilience to climate change.
Drought is understood as both a lack of water (i.e., a deficit compared to demand) and a temporal anomaly in one or more components of the hydrological cycle. Most drought indices, however, only consider the anomaly aspect, i.e., how unusual the condition is. In this paper, we present two drought hazard indices that reflect both the deficit and anomaly aspects. The soil moisture deficit anomaly index, SMDAI, is based on the drought severity index, DSI (Cammalleri et al., 2016), but is computed in a more straightforward way that does not require the definition of a mapping function. We propose a new indicator of drought hazard for water supply from rivers, the streamflow deficit anomaly index, QDAI, which takes into account the surface water demand of humans and freshwater biota. Both indices are computed and analyzed at the global scale, with a spatial resolution of roughly 50 km, for the period 1981–2010, using monthly time series of variables computed by the global water resources and use model WaterGAP 2.2d. We found that the SMDAI and QDAI values are broadly similar to values of purely anomaly-based indices. However, the deficit anomaly indices provide more differentiated spatial and temporal patterns that help to distinguish the degree and nature of the actual drought hazard to vegetation health or the water supply. QDAI can be made relevant for stakeholders with different perceptions about the importance of ecosystem protection, by adapting the approach for computing the amount of water that is required to remain in the river for the well-being of the river ecosystem. Both deficit anomaly indices are well suited for inclusion in local or global drought risk studies.
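The deficit-versus-anomaly distinction can be sketched in a few lines of code. The following Python sketch is NOT the SMDAI/QDAI formulation of the paper; it only illustrates the general idea of combining a deficit term (how far supply falls short of demand) with an anomaly term (how unusual the month is relative to its climatology). All names and numbers are illustrative assumptions.

```python
# Illustrative deficit-anomaly style drought index (not SMDAI/QDAI).
from statistics import mean, pstdev

def deficit(supply, demand):
    """Relative deficit in [0, 1]: 0 means demand is fully met."""
    if demand <= 0:
        return 0.0
    return max(0.0, (demand - supply) / demand)

def anomaly(value, climatology):
    """Negative z-score, clipped to [0, 1] over a +/-2 sigma range."""
    mu, sigma = mean(climatology), pstdev(climatology)
    if sigma == 0:
        return 0.0
    z = (value - mu) / sigma
    return min(1.0, max(0.0, -z / 2.0))

def deficit_anomaly_index(supply, demand, climatology):
    """Non-zero only where the month is BOTH deficient and anomalous."""
    return deficit(supply, demand) * anomaly(supply, climatology)

clim = [80, 95, 110, 70, 100, 90, 85, 105]  # hypothetical monthly supply
dai = deficit_anomaly_index(supply=40, demand=100, climatology=clim)
```

A purely anomaly-based index would flag any unusually dry month, even where supply still exceeds demand; the multiplicative combination above only flags months that are both unusual and actually deficient, which is the qualitative behaviour the abstract attributes to SMDAI and QDAI.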
The analysis of charcoal fragments in peat and lake sediments is the most widely used approach to reconstruct past biomass burning. With a few exceptions, this method typically relies on the quantification of the total charcoal content of the sediment. To enhance charcoal analyses for the reconstruction of past fire regimes, and to make the method more relevant to studies of both plant evolution and fire management, more information must be extracted from charcoal particles. Here, I burned seven fuel types comprising 17 species from boreal Siberia in the laboratory and built on published schemes to develop morphometric and finer diagnostic classifications of the experimentally charred particles. As most of the species used in this study are common to Northern Hemisphere forests and peatlands, these results are directly applicable over a broad geographical scale. Results show that the effect of temperature on charcoal production is fuel dependent. Graminoids, Sphagnum, and trunk wood lose the most mass at low burn temperatures, whereas heathland shrub leaves, brown moss, and ferns retain the most mass at high burn temperatures. In contrast to trunk wood, twig wood retained its mass at intermediate temperatures. This suggests that species with low mass retention at hotter burning temperatures might be underrepresented in the fossil charcoal record. Charred-particle aspect ratio (L/W) appeared to be the strongest indicator of the fuel type burnt. Graminoid charcoals are more elongate than those of all other fuel types, leaf charcoals are the shortest and bulkiest, and twig and wood charcoals are intermediate. Finer diagnostic features were the most useful in distinguishing between wood, graminoid, and leaf particles, but further distinctions within these fuel types are difficult. High-aspect-ratio particles dominated by graminoid and Sphagnum morphologies are robust indicators of cooler surface fires.
By contrast, abundant wood and leaf morphologies and low-aspect-ratio particles likely indicate higher-temperature fires. However, the overlapping morphologies of leaves and wood from trees and shrubs make it hard to distinguish between high-intensity surface fires combusting living shrubs and dead wood and leaves, and high-intensity crown fires combusting living trees. Despite these limitations, the combined use of charred-particle aspect ratios and fuel morphotypes can aid in more robustly interpreting changes in fuel source and fire type, thereby substantially refining histories of past wildfires. Further improving the interpretation of fossil charcoal records will require: (i) more in-depth knowledge of plant anatomy for a better determination of fuel sources; (ii) relating the proportion of particular charcoal morphotypes to the quantity of biomass burnt; and (iii) linking the chemical composition of fuels, combustion temperature, and charcoal production. The advanced use of image-recognition software to collect data on other charcoal features could also aid in extracting fire temperatures, as well as in tracking changes in particle morphology and morphometry during transport.
The analysis of charcoal fragments in peat and lake sediments is the most widely used approach to reconstruct past biomass burning. With a few exceptions, this method typically relies on the quantification of the total charcoal content of the sediment. To enhance charcoal analyses for the reconstruction of past fire regimes and make the method more relevant to studies of both plant evolution and fire management, the extraction of more information from charcoal particles is critical. Here, I used a muffle oven to burn seven fuel types comprising 17 species from boreal Siberia (near Teguldet village), which are also commonly found in the Northern Hemisphere, and built on published schemes to develop morphometric and finer diagnostic classifications of the experimentally charred particles. I then combined these results with those from fossil charcoal from a peat core taken from the same location (Ulukh-Chayakh mire) in order to demonstrate the relevance of these experiments to the fossil charcoal records. Results show that graminoids, Sphagnum, and wood (trunk) lose the most mass at low burn temperatures (<300 ∘C), whereas heathland shrub leaves, brown moss, and ferns lose the most mass at high burn temperatures. This suggests that species with low mass retention in high-temperature fires are likely to be under-represented in the fossil charcoal record. The charcoal particle aspect ratio appeared to be the strongest indicator of the fuel type burnt. Graminoid charcoal particles are the most elongate (6.7–11.5), with a threshold above 6 that may be indicative of wetland graminoids; leaves are the shortest and bulkiest (2.1–3.5); and twigs and wood are intermediate (2.0–5.2). Further, the use of fine diagnostic features was more successful in separating wood, graminoids, and leaves, but it was difficult to further differentiate these fuel types due to overlapping features. 
High-aspect-ratio particles, dominated by graminoid and Sphagnum morphologies, may be robust indicators of low-temperature surface fires, whereas abundant wood and leaf morphologies as well as low-aspect-ratio particles are indicative of higher-temperature fires. However, the overlapping morphologies of leaves and wood from trees and shrubs make it hard to distinguish between high-intensity surface fires, combusting living shrubs and dead wood and leaves, and high-intensity crown fires that have burnt living trees. Distinct particle shape may also influence charcoal transportation, with elongated particles (graminoids) potentially having a more heterogeneous distribution and being deposited farther away from the origin of fire than the rounder, polygonal leaf particles. Despite these limitations, the combined use of charred-particle aspect ratios and fuel morphotypes can aid in the more robust interpretation of fuel source and fire-type changes. Lastly, I highlight the further investigations needed to refine the histories of past wildfires.
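Using the aspect-ratio ranges reported in the abstract above (graminoids 6.7–11.5, leaves 2.1–3.5, twigs/wood 2.0–5.2, with values above ~6 possibly indicative of wetland graminoids), a tiny classifier makes the overlap problem concrete. This is an illustrative sketch, not a published tool.

```python
# Fuel-type attribution from charred-particle aspect ratios (L/W),
# using the experimental ranges reported in the study. Because the
# leaf and twig/wood ranges overlap, the classifier returns all
# candidate fuel types rather than a single label.

ASPECT_RANGES = {
    "graminoid": (6.7, 11.5),
    "leaf":      (2.1, 3.5),
    "twig/wood": (2.0, 5.2),
}

def candidate_fuels(aspect_ratio):
    """All fuel types whose experimental L/W range contains the value."""
    return sorted(fuel for fuel, (lo, hi) in ASPECT_RANGES.items()
                  if lo <= aspect_ratio <= hi)

def is_likely_graminoid(aspect_ratio, threshold=6.0):
    """Aspect ratios above ~6 may indicate (wetland) graminoids."""
    return aspect_ratio > threshold

ambiguous = candidate_fuels(2.8)    # leaf and twig/wood ranges overlap
elongate = candidate_fuels(8.0)     # falls only in the graminoid range
```

The ambiguous case (a particle with L/W of 2.8 matching both leaf and twig/wood) is exactly why the abstracts recommend combining aspect ratios with finer diagnostic morphotypes rather than relying on shape alone.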
In partially molten regions inside the Earth, melt buoyancy may trigger upwelling of both solid and fluid phases, i.e., diapirism. If the melt is allowed to move separately with respect to the matrix, melt perturbations may evolve into solitary porosity waves. While diapirs may form on a wide range of scales, porosity waves are restricted to sizes of a few times the compaction length. Thus, the size of a partially molten perturbation in terms of compaction length controls whether material is dominantly transported by porosity waves or by diapirism. We study the transition from diapiric rise to solitary porosity waves by solving the two-phase flow equations of conservation of mass and momentum in 2D with porosity-dependent matrix viscosity. We systematically vary the initial size of a porosity perturbation from 1.8 to 120 times the compaction length. If the perturbation is of the order of a few compaction lengths, a single solitary wave will emerge, with either a positive or negative vertical matrix flux. If the melt is not allowed to move separately from the matrix, a diapir will emerge. In between these end members we observe a regime where the partially molten perturbation splits up into numerous solitary waves whose phase velocity is so low compared to the Stokes velocity that the whole swarm of waves ascends jointly as a diapir, slowly elongating due to a higher-amplitude main solitary wave. Only if the melt cannot move separately from the matrix will no solitary waves build up; as soon as two-phase flow is enabled, solitary waves will eventually emerge. The time required to build them up increases nonlinearly with the perturbation radius in terms of compaction length and might, in many cases, be too long to allow for them in nature.
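For readers unfamiliar with the compaction length that sets the scale in this study, a minimal sketch using the standard two-phase flow definition δc = sqrt(kφ(ζ + 4η/3)/μf) (McKenzie, 1984) shows how a perturbation radius translates into multiples of δc, the quantity varied from 1.8 to 120 in the abstract. All material parameters below are hypothetical upper-mantle values, not those of the study.

```python
# Compaction length delta_c = sqrt(k * (zeta + 4/3 * eta) / mu_f),
# the intrinsic length scale of two-phase (melt + matrix) flow.
from math import sqrt

def compaction_length(permeability, bulk_visc, shear_visc, melt_visc):
    """delta_c in metres (standard McKenzie-type definition)."""
    return sqrt(permeability * (bulk_visc + 4.0 * shear_visc / 3.0)
                / melt_visc)

# Hypothetical upper-mantle values:
delta_c = compaction_length(permeability=1e-12,  # m^2
                            bulk_visc=1e19,      # Pa s
                            shear_visc=1e19,     # Pa s
                            melt_visc=1.0)       # Pa s (basaltic melt)

# A perturbation's radius in compaction lengths decides the regime:
radius_km = 500.0
ratio = radius_km * 1e3 / delta_c  # >> 1 favours diapir-like ascent
```

With these illustrative parameters δc is of order a few kilometres, so a 500 km perturbation spans on the order of a hundred compaction lengths, i.e. well into the regime where the study observes diapir-like joint ascent rather than a single solitary wave.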
The future physiology of marine phytoplankton will be impacted by a range of changes in global ocean conditions, including salinity regimes that vary spatially and on a range of short to geological timescales. Coccolithophores have global ecological and biogeochemical significance as the most important calcifying marine phytoplankton group. Previous research has shown that the morphology of their exoskeletal calcified plates (coccoliths) responds to changing salinity in the most abundant coccolithophore species, Emiliania huxleyi. However, the extent to which these responses may be strain-specific is not well established. Here we investigated the growth response of six strains of E. huxleyi under low (ca. 25) and high (ca. 45) salinity batch culture conditions and found substantial variability in the magnitude and direction of the response to salinity change across strains. Growth rates declined under low and high salinity conditions in four of the six strains, but increased under both low and high salinity in strain RCC1232, and were higher under low salinity and lower under high salinity in strain PLYB11. When detailed changes in coccolith and coccosphere size were quantified in two of these strains, isolated from contrasting salinity regimes (coastal Norwegian low salinity of ca. 30 and Mediterranean high salinity of ca. 37), the Norwegian strain showed an average 26% larger mean coccolith size at high salinities compared to low salinities. In contrast, the Mediterranean strain showed a smaller coccolith size increase (11%) but severely impeded coccolith formation in the low-salinity treatment. Coccosphere size similarly increased with salinity in the Norwegian strain, but this trend was not observed in the Mediterranean strain. Coccolith size changes with salinity compiled for other strains also show variability, strongly suggesting that the effect of salinity change on coccolithophore morphology is likely to be strain-specific.
We propose that physiological adaptation to local conditions, in particular strategies for plasticity under stress, has an important role in determining ecotype responses to salinity.
AirCore samplers have been increasingly used to capture vertical profiles of trace gases reaching from the ground up to about 30 km, in order to validate remote sensing instruments and to investigate transport processes in the stratosphere. When deployed on a weather balloon, accurately attributing the trace gas measurements to the sampling altitudes is nontrivial, especially in the stratosphere. In this paper we present the CO-spiking experiment, which can be deployed to any AirCore on any platform in order to evaluate different computational altitude attribution processes and to experimentally derive the vertical resolution of the profile by injecting small volumes of signal gas at predefined GPS altitudes during sampling. We performed two CO-spiking flights with an AirCore from the Goethe University Frankfurt (GUF) deployed on a weather balloon in Traînou, France, in June 2019. The altitude retrieval based on an instantaneous pressure equilibrium assumption slightly overestimates the sampling altitudes, especially at the top of the profiles. For these two flights our altitude attribution is accurate within 250 m below 20 km. Above 20 km the positive bias becomes larger and reaches up to 1.2 km at 27 km altitude. Differences in descent velocities are shown to have a major impact on the altitude attribution bias. We parameterize the time lag between the theoretically attributed altitude and the actual CO-spike release altitude for both flights together and use it to empirically correct our AirCore altitude retrieval. Regarding the corrected profiles, the altitude attribution is accurate within ±120 m throughout the profile. Further investigations are needed in order to test the scope of validity of this correction parameter regarding different ambient conditions and maximum flight altitudes. We derive the vertical resolution from the CO spikes of both flights and compare it to the modeled vertical resolution.
The modeled vertical resolution is too optimistic compared to the experimentally derived resolution throughout the profile, albeit agreeing within 220 m. All our findings derived from the two CO-spiking flights are strictly bound to the GUF AirCore dimensions. The newly introduced CO-spiking experiment can be used to test different combinations of AirCore configurations and platforms in future studies.
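Conceptually, the instantaneous pressure-equilibrium retrieval discussed above amounts to interpolating the balloon's GPS altitude at the ambient pressure under which each air parcel entered the tube. The sketch below shows that interpolation step only, with an entirely hypothetical descent record and without the empirical time-lag correction the paper derives.

```python
# Pressure-based altitude attribution under the instantaneous
# pressure-equilibrium assumption: each sampled air parcel is assigned
# the GPS altitude at which ambient pressure matched its fill pressure.
from bisect import bisect_left

# Hypothetical descent record: ambient pressure (hPa, increasing)
# versus GPS altitude (km).
pressures = [20.0, 50.0, 100.0, 200.0, 400.0, 700.0, 1000.0]
altitudes = [26.5, 20.5, 16.2, 11.8, 7.2, 3.0, 0.1]

def altitude_at(p):
    """Linearly interpolate GPS altitude (km) at ambient pressure p."""
    i = bisect_left(pressures, p)
    if i == 0:
        return altitudes[0]           # above the record: clamp
    if i == len(pressures):
        return altitudes[-1]          # below the record: clamp
    p0, p1 = pressures[i - 1], pressures[i]
    a0, a1 = altitudes[i - 1], altitudes[i]
    return a0 + (a1 - a0) * (p - p0) / (p1 - p0)
```

The paper's key finding is that this simple attribution is biased (up to 1.2 km at 27 km), because pressure equilibration is not truly instantaneous; correcting the interpolated altitudes with a parameterized time lag reduces the error to within ±120 m.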
Constraining the architecture of complex 3D volcanic plumbing systems within active rifts, and their impact on rift processes, is critical for examining the interplay between faulting, magmatism and magmatic fluids in developing rift segments. The Natron basin of the East African Rift System provides an ideal location to study these processes, owing to its recent magmatic-tectonic activity and ongoing active carbonatite volcanism at Oldoinyo Lengai. Here, we report seismicity and fault plane solutions from a 10-month temporary seismic network spanning Oldoinyo Lengai, Naibor Soito volcanic field and Gelai volcano. We locate 6827 earthquakes with ML −0.85 to 3.6, which are related to previous and ongoing magmatic and volcanic activity in the region, as well as regional tectonic extension. We observe seismicity down to ~17 km depth north and south of Oldoinyo Lengai and shallow seismicity (3–10 km) beneath Gelai, including two swarms. The deepest seismicity (down to ~20 km) occurs above a previously imaged magma body below Naibor Soito. These seismicity patterns reveal a detailed image of a complex volcanic plumbing system, supporting potential lateral and vertical connections between shallow- and deep-seated magmas, where fluid and melt transport to the surface is facilitated by intrusion of dikes and sills. Focal mechanisms vary spatially. T-axis trends reveal dominantly WNW-ESE extension near Gelai, while strike-slip mechanisms and a radial trend in P-axes are observed in the vicinity of Oldoinyo Lengai. These data support local variations in the state of stress, resulting from a combination of volcanic edifice loading and magma-driven stress changes imposed on a regional extensional stress field. Our results indicate that the southern Natron basin is a segmented rift system, in which fluids preferentially percolate vertically and laterally in a region where strain transfers from a border fault to a developing magmatic rift segment.
Deformation in the upper mantle is localized in shear zones. In order to localize strain, weakening has to occur, which can be achieved by a reduction in grain size. In order for grains to remain small and preserve shear zones, phases have to mix. Phase mixing leads to dragging or pinning of grain boundaries, which slows down or halts grain growth. Multiple phase mixing processes have been suggested to be important during shear zone evolution, and the relevance of a given process depends on the geodynamic setting. This study presents a detailed microstructural analysis of spinel-bearing shear zones from the Erro-Tobbio peridotite (Italy) that formed during pre-Alpine rifting. The first stage of deformation occurred under melt-free conditions, during which clinopyroxene and olivine porphyroclasts dynamically recrystallized. With ongoing extension, silica-undersaturated melt percolated through the shear zones and reacted with the clinopyroxene neoblasts, forming olivine–clinopyroxene layers. Furthermore, the melt reacted with orthopyroxene porphyroclasts, forming fine-grained polymineralic layers (ultramylonites) adjacent to the porphyroclasts. Strain rates in these layers are estimated to be about an order of magnitude faster than within the olivine-rich matrix. This study demonstrates the importance of melt–rock reactions for grain size reduction, phase mixing and strain localization in these shear zones.
In the Central German Uplands, Fagus sylvatica and Picea abies have been particularly affected by climate change. With the establishment of beech forests about 3000 years ago and of pure spruce stands 500 years ago, both might be regarded as ‘neophytes’ in the Hessian forests. Palaeoecological investigations at wetland sites in the low mountain ranges and intramontane basins point to an asynchronous vegetation evolution in a comparatively small but heterogeneous region. Palynological data also show that sustainably managed woodlands with high proportions of Tilia persisted for several millennia before the spread of beech took place as a result of a cooler and wetter climate and changes in land management. In view of increasingly warmer and drier conditions, Tilia cordata appears especially well qualified to be an important silvicultural constituent of the future, not only due to its tolerance of drought but also its resistance to browsing and its ability to reproduce vegetatively. Forest managers should be encouraged to actively promote the return to more stress-tolerant lime-dominated woodlands, similar to those that existed in the Subboreal chronozone.
The current state of research on ancient settlements within the Nile Delta allows hypothesizing fluvial connections to ancient settlements all over the delta. Previous studies suggest a larger Nile branch close to Kom el-Gir, an ancient settlement hill in the northwestern Nile Delta. To contribute new knowledge to this little-known site and test this hypothesis, this study uses small-scale paleogeographic investigations to reconstruct an ancient channel system in the surroundings of Kom el-Gir. The study pursues the following: (1) the identification of sedimentary environments via stratigraphic and portable X-ray fluorescence (pXRF) analyses of the sediments, (2) the detection of fluvial elements via electrical resistivity tomography (ERT), and (3) the synthesis of all results into a comprehensive reconstruction of a former fluvial network in the surroundings of Kom el-Gir. To this end, auger core drillings, pXRF analyses, and ERT measurements were conducted to examine the sediments within the study area. Based on the evaluation of the results, the study presents clear evidence of a former channel system in the surroundings of Kom el-Gir. It is the combination of the two methods, 1-D corings and 2-D ERT profiles, that yields a more detailed picture of previous environmental conditions, an approach other studies can adopt. Especially within the Nile Delta, which comprises a large number of smaller and larger ancient settlement hills, this approach can contribute to paleogeographic investigations and improve the general understanding of the former fluvial landscape.
The ICON single-column mode
(2021)
The single-column mode (SCM) of the ICON (ICOsahedral Nonhydrostatic) modeling framework is presented. The primary purpose of the ICON SCM is to serve as a tool for research, model evaluation and development. Thanks to the simplified geometry of the ICON SCM, various aspects of the ICON model, in particular the model physics, can be studied in a well-controlled environment. Additionally, the ICON SCM has a reduced computational cost and a low data storage demand. The ICON SCM can be utilized for idealized cases—several well-established cases are already included—or for semi-realistic cases based on analyses or model forecasts. As the case setup is defined by a single NetCDF file, new cases can be prepared easily by the modification of this file. We demonstrate the usage of the ICON SCM for different idealized cases such as shallow convection, stratocumulus clouds, and radiative transfer. Additionally, the ICON SCM is tested for a semi-realistic case together with an equivalent three-dimensional setup and the large eddy simulation mode of ICON. Such consistent comparisons across the hierarchy of ICON configurations are very helpful for model development. The ICON SCM will be implemented into the operational ICON model and will serve as an additional tool for advancing the development of the ICON model.
Production and use of many synthetic halogenated trace gases are regulated internationally due to their contribution to stratospheric ozone depletion or climate change. In many applications they have been replaced by shorter-lived compounds, which have become measurable in the atmosphere as emissions increased. Non-target monitoring of trace gases rather than targeted measurements of well-known substances is needed to keep up with such changes in the atmospheric composition. We regularly deploy gas chromatography (GC) coupled to time-of-flight mass spectrometry (TOF-MS) for analysis of flask air samples and in situ measurements at the Taunus Observatory, a site in central Germany. TOF-MS acquires data over a continuous mass range that enables a retrospective analysis of the dataset, which can be considered a type of digital air archive. This archive can be used if new substances come into use and their mass spectrometric fingerprint is identified. However, quantifying new replacement halocarbons can be challenging, as mole fractions are generally low, requiring high measurement precision and low detection limits. In addition, calibration can be demanding, as calibration gases may not contain sufficiently high amounts of newly measured substances, or the amounts in the calibration gas may not have been quantified. This paper presents an indirect data evaluation approach for TOF-MS data, in which the calibration is linked to another compound that can be quantified in the calibration gas. We also present an approach to evaluate the quality of the indirect calibration method, select periods of stable instrument performance, and determine well-suited reference compounds. The method is applied to three short-lived synthetic halocarbons: HFO-1234yf, HFO-1234ze(E), and HCFO-1233zd(E). They represent replacements for longer-lived hydrofluorocarbons (HFCs) and exhibit increasing mole fractions in the atmosphere.
The indirectly calibrated results are compared to directly calibrated measurements using data from TOF-MS canister sample analysis and TOF-MS in situ measurements, which are available for some periods of our dataset. Applying the indirect calibration method to several test cases yields uncertainties of around 6 % to 11 %; for the hydro(chloro)fluoroolefins (denoted H(C)FOs), uncertainties of up to 23 % are obtained. The indirectly calculated mole fractions of the investigated H(C)FOs at Taunus Observatory lie between the mole fractions measured at the urban Dübendorf station and the Jungfraujoch station in Switzerland.
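The core idea of an indirect calibration, scaling a reference compound's known mole fraction by the ratio of detector responses and a relative response factor, can be sketched as follows. This is a highly simplified illustration, not the study's actual procedure; the function name, arguments, and values are hypothetical:

```python
def indirect_mole_fraction(area_target, area_ref, x_ref, rrf):
    """Estimate the mole fraction of a target compound that is absent from
    (or unquantified in) the calibration gas by linking it to a reference
    compound of known mole fraction x_ref measured in the same sample.

    area_target, area_ref: integrated TOF-MS peak areas in the sample
    rrf: assumed relative response factor of target vs. reference
    """
    return x_ref * (area_target / area_ref) / rrf

# Hypothetical example: target peak twice the reference peak, equal sensitivity
print(indirect_mole_fraction(2.0, 1.0, 10.0, 1.0))  # 20.0
```

In practice the relative response factor itself must be characterized, which is where the paper's quality-evaluation and reference-compound selection come in.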
We evaluate the influence of a forest parameterization on the simulation of boundary layer flow over moderately complex terrain in the context of the Perdigão 2017 field campaign. The numerical simulations are performed with the Weather Research and Forecasting model in large-eddy simulation mode (WRF-LES). The short-term, high-resolution (40 m horizontal grid spacing) and long-term (200 m horizontal grid spacing) WRF-LES runs are evaluated for integration times of 12 h and 1.5 months, respectively, with and without the forest parameterization. The short-term simulations focus on low-level jet events over the valley, while the long-term simulations cover the whole intensive observation period (IOP) of the field campaign. The results are validated against lidar and meteorological tower observations. The mean diurnal cycle during the IOP shows a significant improvement in the along-valley wind speed and the wind direction when the forest parameterization is used. However, the drag imposed by the parameterization results in an underestimation of the cross-valley wind speed, which can be attributed to a poor representation of the land surface characteristics. The evaluation of the high-resolution WRF-LES shows a positive influence of the forest parameterization on the simulated winds in the first 500 m above the surface.
Images establish manifold references to spaces and space-related practices. As a research method in human geography, image analysis asks about the reality and the effects of images in the relationship between society and space. This contribution provides a disciplinary and methodological introduction to image analysis in human geography and discusses its contribution to geographical education research with regard to teaching visual (reading) literacy and a critically informed engagement with media imagery. As a classroom example, an analysis of visual material for a differentiated engagement with the problem of waste is presented.
Wildfire is the most common disturbance type in boreal forests and can trigger significant changes in forest composition. Waterlogging in peatlands determines the degree of tree cover and the depth of the burnt horizon associated with wildfires. However, interactions between peatland moisture, vegetation composition and flammability, and fire regime in forest and forested peatland in Eurasia remain largely unexplored, despite their huge extent in boreal regions. To address this knowledge gap, we reconstructed the Holocene fire regime, vegetation composition, and peatland hydrology at two sites located in predominantly light taiga (Pinus sylvestris, Betula) with interspersed dark taiga communities (Pinus sibirica, Picea obovata, Abies sibirica) in western Siberia in the Tomsk Oblast, Russia. We found marked shifts in past water levels over the Holocene. The probability of fire occurrence and the intensification of fire frequency and severity increased at times of low water table (drier conditions), enhanced fuel dryness, and an intermediate dark-to-light taiga ratio. High water level, and thus wet peat surface conditions, prevented fires from spreading on peatland and surrounding forests. Deciduous trees (i.e. Betula) and Sphagnum were more abundant under wetter peatland conditions, and conifers and denser forests were more prevalent under drier peatland conditions. On a Holocene scale, severe fires were recorded between 7.5 and 4.5 ka with an increased proportion of dark taiga and fire avoiders (Pinus sibirica at Rybnaya and Abies sibirica at Ulukh–Chayakh) in a predominantly light taiga and fire-resister community characterised by Pinus sylvestris and lower local water level. Severe fires also occurred over the last 1.5 kyr and were associated with a declining abundance of dark taiga and fire avoiders, an expansion of fire invaders (Betula), and fluctuating water tables.
These findings suggest that frequent, high-severity fires can lead to compositional and structural changes in forests when trees fail to reach reproductive maturity between fire events or where extensive forest gaps limit seed dispersal. This study also shows prolonged periods of synchronous fire activity across the sites, particularly during the early to mid-Holocene, suggesting a regional imprint of centennial- to millennial-scale Holocene climate variability on wildfire activity. Humans may have affected vegetation and fire from the Neolithic; however, increasing human presence in the region, particularly at the Ulukh–Chayakh Mire over the last 4 centuries, drastically enhanced ignitions compared to natural background levels. Frequent warm and dry spells predicted by climate change scenarios for Siberia in the future will enhance peatland drying and may convey a competitive advantage to conifer taxa. However, dry conditions will probably exacerbate the frequency and severity of wildfire, disrupt conifers' successional pathway, and accelerate shifts towards deciduous broadleaf tree cover. Furthermore, climate–disturbance–fire feedbacks will accelerate changes in the carbon balance of boreal peatlands and affect their overall future resilience to climate change.
Marine stratocumuli are the most dominant cloud type by area coverage in the Southern Ocean (SO). They can be divided into different self-organized cellular morphological regimes known as open and closed mesoscale-cellular convective (MCC) clouds. Open and closed cells are the two most frequent types of organizational regimes in the SO. Using the liDAR-raDAR (DARDAR) version 2 retrievals, we quantify 59 % of all MCC clouds in this region as mixed-phase clouds (MPCs) during a 4-year time period from 2007 to 2010. The net radiative effect of SO MCC clouds is governed by changes in cloud albedo. Both cloud morphology and phase have previously been shown to impact cloud albedo individually, but their interactions and their combined impact on cloud albedo remain unclear.
Here, we investigate the relationships between cloud phase, organizational patterns, and their differences regarding their cloud radiative properties in the SO. The mixed-phase fraction, which is defined as the number of MPCs divided by the sum of MPC and supercooled liquid cloud (SLC) pixels, of all MCC clouds at a given cloud-top temperature (CTT) varies considerably between austral summer and winter. We further find that seasonal changes in cloud phase at a given CTT across all latitudes are largely independent of cloud morphology and are thus seemingly constrained by other external factors. Overall, our results show a stronger dependence of cloud phase on cloud-top height (CTH) than CTT for clouds below 2.5 km in altitude.
Preconditioning through ice-phase processes in MPCs has been observed to accelerate individual closed-to-open cell transitions in extratropical stratocumuli. The hypothesis of preconditioning has been further substantiated in large-eddy simulations of open and closed MPCs. In this study, we do not find preconditioning to primarily impact climatological cloud morphology statistics in the SO. Meanwhile, in-cloud albedo analysis reveals stronger changes in open and closed cell albedo in SLCs than in MPCs. In particular, few optically thick (cloud optical thickness >10) open cell stratocumuli are characterized as ice-free SLCs. These differences in in-cloud albedo are found to alter the cloud radiative effect in the SO by 21 to 39 W m−2 depending on season and cloud phase.
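The mixed-phase fraction defined above has a simple arithmetic form. As a minimal illustrative sketch (the pixel counts here are hypothetical, not the study's data):

```python
def mixed_phase_fraction(n_mpc, n_slc):
    """Mixed-phase fraction as defined in the abstract: the number of
    mixed-phase cloud (MPC) pixels divided by the sum of MPC and
    supercooled liquid cloud (SLC) pixels."""
    total = n_mpc + n_slc
    return n_mpc / total if total else float("nan")

# Hypothetical pixel counts at one cloud-top temperature bin
print(mixed_phase_fraction(59, 41))  # 0.59
```

In the study this quantity is evaluated per cloud-top temperature bin and season, which is what reveals the summer-to-winter variation described above.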
Particulate matter (PM) largely consists of secondary organic aerosol (SOA), which is formed via oxidation of biogenic and anthropogenic volatile organic compounds (VOCs). Unambiguously identifying SOA molecules and assigning them to their precursor vapors is a challenge that has so far been met for only a few SOA marker compounds, which are now well characterized and (partly) available as authentic standards. In this work, we resolve the complex composition of SOA by means of a top-down approach based on the newly created Aerosolomics database, which is fed by non-target analysis results of filter samples from oxidation flow reactor experiments. We investigated the oxidation products of the five biogenic VOCs α-pinene, β-pinene, limonene, 3-carene, and trans-caryophyllene and of the four anthropogenic VOCs toluene, o-xylene, 1,2,4-trimethylbenzene, and naphthalene. Using ultrahigh-performance liquid chromatography coupled to a high-resolution (Orbitrap) mass spectrometer, we determine the molecular formulas of 596 chromatographically separated compounds based on exact mass and isotopic pattern. We utilize retention time and fragmentation mass spectra as the basis for an unambiguous attribution of the oxidation products to their parent VOCs. Based on this molecularly resolved application of the database, we are able to assign roughly half of the total signal of oxygenated hydrocarbons in ambient suburban PM2.5 to one of the nine studied VOCs. The database also enabled us to interpret the appearance of diurnal compound clusters that are formed by different oxidation processes. Furthermore, by performing a hierarchical cluster analysis (HCA) on the same set of filter samples, we identified compound clusters that depend on sulfur dioxide mixing ratio and temperature.
This study demonstrates how Aerosolomics tools (database and HCA) applied to PM filter samples can improve our understanding of SOA sources, their formation pathways, and temperature-driven partitioning of SOA compounds.
Monitoring woody cover by remote sensing is considered a key methodology for the sustainable management of trees in dryland forests. However, while modern very high resolution satellite (VHRS) sensors allow woodland mapping at the individual tree level, the historical perspective is often hindered by a lack of appropriate image data. In this first study employing the newly accessible historical HEXAGON KH-9 stereo-panoramic camera images for environmental research, we propose their use for mapping trees in open-canopy conditions. The 2–4 feet resolution panchromatic HEXAGON satellite photographs were taken between 1971 and 1986 within the American reconnaissance programs that are better known to the scientific community for their lower-resolution CORONA images. Our aim is to evaluate the potential of combining historical CORONA and HEXAGON imagery with recent WorldView VHRS imagery for retrospective woodland change mapping at the tree level. We mapped all trees on 30 1-ha test sites in open-canopy argan woodlands in Morocco, both in the field and from the VHRS imagery, to estimate changes in tree density and size between 1967/1972 and 2018. Prior to image interpretation, we used simulations based on unmanned aerial system (UAS) imagery to examine, by way of example, the role of illumination, viewing geometry, and image resolution in the appearance of trees and their shadows in the historical panchromatic images. We show that understanding these parameters is imperative for the correct detection and size estimation of tree crowns. Our results confirm that tree maps derived solely from VHRS image analysis generally underestimate the number of small trees and trees in clumped-canopy groups. Nevertheless, HEXAGON images compare remarkably well with WorldView images and have a much higher tree-mapping potential than CORONA. By classifying the trees into three size classes, we were able to measure tree-cover changes on an ordinal scale.
Although we found no clear trend of forest degradation or recovery, our argan forest sites show varying patterns of change, which are further analysed in Part B of our study. We conclude that the HEXAGON stereo-panoramic camera images, of which 670,000 worldwide will soon be available, open exciting opportunities for retrospective monitoring of trees in open-canopy conditions and other woody vegetation patterns back into the 1980s and 1970s.
Ten years of sub\urban are a reason to celebrate. Thanks to sub\urban, critical interdisciplinary urban research in the German language has a place where we can discuss and theorize the manifold processes that shape cities at all spatial scales. What is no reason to celebrate, however, is that many of these processes contribute to our living in conditions "in which man is a debased, enslaved, forsaken, despicable being" (Marx 1976: 385). It still holds that radical critique is needed to "overthrow" these conditions (ibid.). And this still requires an understanding of capitalism in its concrete manifestations and in its entanglement with changing forms of domination: patriarchy, racism and nationalism, hostility towards homosexual, queer, and trans people, and all the other forms of hierarchizing exclusion that make life hell for so many people (Arruzza/Bhattacharya/Fraser 2020; Brown 2018; Federici 2012; Harvey 2017). Radical critique questions these prevailing conditions, which change over time and differ between places, and thereby enlightens us about them in order to change them in an emancipatory way, indeed to overcome them.
I struggle with a fixed concept of the city. To understand cities is to understand change. Over the centuries, across continents and social formations, cities have shed their skin so often that definitions oriented towards a congealed state are doomed to fail. Time and again there have been attempts to derive the city from its population size, its settlement structure, or its economy and built form, that is, from its particular spatio-physical characteristics, which at best have period-specific historical value.
The Russian invasion of Ukraine illustrates the increasingly judicialized nature of international relations and geopolitics. By viewing aspects of the invasion as illegal – in particular through the identification of war crimes and crimes against humanity – the international response draws attention to the political geographies of international criminal investigation. Human rights groups, academics, journalists, and open-source forensic investigations have joined forces to collect, evaluate and analyze the violent nature of war crimes. While similar shifts in evidence gathering have been observed in the case of the Bosnia-Herzegovina war and the Assad regime's violence against Syrian citizens, the use of evidence-gathering technologies and evidence-securing institutions in the case of Ukraine is distinctive. In this scholarly intervention we seek to illustrate the intimate geopolitics of evidence gathering by zooming in on two different elements that shape evidential procedures in Ukraine: i) the blurring of civilian/military boundaries; and ii) the challenges of access. By evaluating what is new and what is similar to previous war sites, we suggest that these two areas reflect a geopolitics of evidence gathering, highlighting its global-local intimacies. Both these areas are well positioned to foster new research on the (geo)legal nature of war crimes in political geography and beyond.