University Publications
In this work we present the setup and first tests of our new BIO IN detector. This detector is designed to classify atmospheric ice nuclei (IN) according to their biological content. Biological material is identified via its auto-fluorescence (intrinsic fluorescence) after irradiation with UV radiation. Ice nuclei are key substances for precipitation development via the Bergeron–Findeisen process. Scientific knowledge of the origin and climatology (temporal and spatial distribution) of IN is still very limited. Some biological material is known to be active as IN even at relatively high temperatures of up to −2°C (e.g. Pseudomonas syringae bacteria). These biological IN could have a strong influence on the formation of clouds and precipitation. We have designed the new BIO IN sensor to analyze the abundance of IN of biological origin. The instrument will be flown on one of the first missions of the new German research aircraft "HALO" (High Altitude and LOng Range).
Active chlorine species play a dominant role in the catalytic destruction of stratospheric ozone in the polar vortices during the late winter and early spring seasons. Recently, the correct understanding of the ClO dimer cycle was challenged by the release of new laboratory absorption cross sections (Pope et al., 2007) yielding significant model underestimates of observed ClO and ozone loss (von Hobe et al., 2007). In this context, Arctic stratospheric limb emission measurements carried out by the balloon version of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS-B) from Kiruna (Sweden) on 11 January 2001 and 20/21 March 2003 have been reanalyzed with regard to the chlorine reservoir species ClONO2 and the active species ClO and ClOOCl (Cl2O2). New laboratory measurements of IR absorption cross sections of ClOOCl for various temperatures and pressures allowed, for the first time, the retrieval of ClOOCl mixing ratios from remote sensing measurements. High values of active chlorine (ClOx) of roughly 2.3 ppbv at 20 km were observed by MIPAS-B in the cold mid-winter Arctic vortex on 11 January 2001. While nighttime ClOOCl shows enhanced values of nearly 1.1 ppbv at 20 km, ClONO2 mixing ratios are less than 0.1 ppbv at this altitude. In contrast, high ClONO2 mixing ratios of nearly 2.4 ppbv at 20 km have been observed in the late winter Arctic vortex on 20 March 2003. No significant ClOx amounts are detectable on this date since most of the active chlorine has already recovered to its main reservoir species ClONO2. The observed values of ClOx and ClONO2 are in line with the established chlorine chemistry. The thermal equilibrium constants between the dimer formation and its dissociation, as derived from the balloon measurements, are on the lower side of reported data and in good agreement with values recommended by von Hobe et al. (2007).
Calculations with the ECHAM/MESSy Atmospheric Chemistry model (EMAC) using established kinetics show similar chlorine activation and deactivation, compared to the measurements in January 2001 and March 2003, respectively.
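The thermal equilibrium constant discussed above relates the dimer concentration to the square of the monomer concentration. A minimal sketch of that relationship, with purely hypothetical number densities (not values from the study):

```python
def dimer_equilibrium_constant(cloocl, clo):
    """Thermal equilibrium constant K_eq = [ClOOCl] / [ClO]^2 for the
    ClO dimer equilibrium (number densities in molecules cm^-3)."""
    return cloocl / clo**2

# Hypothetical number densities, for illustration only
k_eq = dimer_equilibrium_constant(cloocl=1.2e9, clo=2.0e9)
print(k_eq)  # 3e-10 (cm^3 molecule^-1)
```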
Two different single particle mass spectrometers were operated in parallel at the Swiss High Alpine Research Station Jungfraujoch (JFJ, 3580 m a.s.l.) during the Cloud and Aerosol Characterization Experiment (CLACE 6) in February and March 2007. During mixed phase cloud events ice crystals from 5 μm up to 20 μm were separated from large ice aggregates, non-activated, interstitial aerosol particles and supercooled droplets using an Ice-Counterflow Virtual Impactor (Ice-CVI). During one cloud period supercooled droplets were additionally sampled and analyzed by changing the Ice-CVI setup. The small ice particles and droplets were evaporated by injection into dry air inside the Ice-CVI. The resulting ice and droplet residues (IR and DR) were analyzed for size and composition by two single particle mass spectrometers: a custom-built Single Particle Laser-Ablation Time-of-Flight Mass Spectrometer (SPLAT) and a commercial Aerosol Time of Flight Mass Spectrometer (ATOFMS, TSI Model 3800). During CLACE 6 the SPLAT instrument characterized 355 individual ice residues that produced a mass spectrum for at least one polarity and the ATOFMS measured 152 particles. The mass spectra were binned in classes, based on the combination of dominating substances, such as mineral dust, sulfate, potassium and elemental carbon or organic material. The derived chemical information from the ice residues is compared to the JFJ ambient aerosol that was sampled while the measurement station was out of clouds (several thousand particles analyzed by SPLAT and ATOFMS) and to the composition of the residues of supercooled cloud droplets (SPLAT: 162 cloud droplet residues analyzed, ATOFMS: 1094). The measurements showed that mineral dust particles were strongly enhanced in the ice particle residues: 57% of the SPLAT spectra from ice residues were dominated by signatures from mineral compounds, and 78% of the ATOFMS spectra.
Sulfate and nitrate containing particles were strongly depleted in the ice residues. Sulfate was found to dominate the droplet residues (~90% of the particles). The results from the two different single particle mass spectrometers were generally in agreement. Differences in the results originate from several causes, such as the different wavelength of the desorption and ionisation lasers and different size-dependent particle detection efficiencies.
Tracer measurements in the tropical tropopause layer during the AMMA/SCOUT-O3 aircraft campaign
(2009)
We present airborne in situ measurements made during the AMMA (African Monsoon Multidisciplinary Analysis)/SCOUT-O3 campaign between 31 July and 17 August 2006 on board the M55 Geophysica aircraft, based in Ouagadougou, Burkina Faso. CO2 and N2O were measured with the High Altitude Gas Analyzer (HAGAR), CO was measured with the Cryogenically Operated Laser Diode (COLD) instrument, and O3 with the Fast Ozone ANalyzer (FOZAN). We analyze the data obtained during five local flights to study the dominant transport processes controlling the tropical tropopause layer (TTL) above West-Africa: deep convection up to the level of main convective outflow, overshooting of deep convection, horizontal inmixing across the subtropical tropopause, and horizontal transport across the subtropical barrier. Except for the flight of 13 August, distinct minima in CO2 indicate convective outflow of boundary layer air in the TTL. The CO2 profiles show that the level of main convective outflow was mostly located between 350 and 360 K, and for 11 August reached up to 370 K. While the CO2 minima indicate quite significant convective influence, the O3 profiles suggest that the observed convective signatures were mostly not fresh, but of older origin. When compared with the mean O3 profile measured during a previous campaign over Darwin in November 2005, the O3 minimum at the main convective outflow level was less pronounced over Ouagadougou. Furthermore O3 mixing ratios were much higher throughout the whole TTL and, unlike over Darwin, rarely showed low values observed in the regional boundary layer. Signatures of irreversible mixing following overshooting of convective air were scarce in the tracer data. Some small signatures indicative of this process were found in CO2 profiles between 390 and 410 K during the flights of 4 and 8 August, and in CO data at 410 K on 7 August. 
However, the absence of expected corresponding signatures in other tracer data makes this evidence inconclusive, and overall there is little indication from the observations that overshooting convection had a profound impact on TTL composition during AMMA. We find the amount of photochemically aged air isentropically mixed into the TTL across the subtropical tropopause to be insignificant. Using the N2O observations we estimate the fraction of aged extratropical stratospheric air in the TTL to be 0.0±0.1 up to 370 K during the local flights, increasing above this level to 0.2±0.15 at 390 K. The subtropical barrier, as indicated by the slope of the correlation between N2O and O3 between 415 and 490 K, does not appear as a sharp border between the tropics and extratropics, but rather as a gradual transition region between 10 and 25° N latitude where isentropic mixing between these two regions may occur.
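The fraction of aged extratropical air quoted above follows from treating N2O as a conserved tracer in a two-endmember mixture. A minimal sketch of that calculation; the endmember mixing ratios below are chosen purely for illustration and are not values from the campaign:

```python
def stratospheric_fraction(n2o_obs, n2o_tropical, n2o_extratropical):
    """Two-endmember mixing: fraction of aged extratropical stratospheric
    air implied by an observed N2O mixing ratio (all in ppbv)."""
    return (n2o_tropical - n2o_obs) / (n2o_tropical - n2o_extratropical)

# Hypothetical endmembers: 320 ppbv (tropical tropospheric air) vs.
# 250 ppbv (aged extratropical stratospheric air)
f = stratospheric_fraction(n2o_obs=306.0, n2o_tropical=320.0, n2o_extratropical=250.0)
print(round(f, 2))  # 0.2
```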
Processes occurring in the tropical upper troposphere and lower stratosphere (UT/LS) are of importance for the global climate and for stratospheric dynamics and air chemistry, and they influence the global distribution of water vapour, trace gases and aerosols. The mechanisms underlying cloud formation and variability in the UT/LS are of scientific concern as these are still not adequately described and quantified by numerical models. Part of the reason for this is the scarcity of detailed in-situ measurements, in particular from the Tropical Transition Layer (TTL) within the UT/LS. In this contribution we provide measurements of particle number densities and of the amounts of non-volatile particles in the submicron size range present in the UT/LS over Southern Brazil, West Africa, and Northern Australia. The data were collected in situ on board the Russian high-altitude research aircraft M-55 "Geophysica" using the specialised COPAS (COndensation PArticle counting System) instrument during the TROCCINOX (Araçatuba, Brazil, February 2005), SCOUT-O3 (Darwin, Australia, December 2005), and SCOUT-AMMA (Ouagadougou, Burkina Faso, August 2006) campaigns. The vertical profiles obtained are compared to those from previous measurements from the NASA DC-8 and NASA WB-57F over Costa Rica and other tropical locations between 1999 and 2007. The number density of the submicron particles as a function of altitude was found to be remarkably constant (even back to 1987) over the tropical UT/LS altitude band, such that a parameterisation suitable for models can be extracted from the measurements. At altitudes corresponding to potential temperatures above 430 K, the 2005/2006 data show a slight increase in number densities compared with the 1987 to 2007 measurements. The origins of this increase are unknown. By contrast, the data from Northern Hemisphere mid-latitudes do not exhibit such an increase between 1999 and 2006.
Vertical profiles of the non-volatile fraction of the submicron particles were also measured by a COPAS channel and are presented here. The resulting profiles of the non-volatile number density fraction show a pronounced maximum of 50% in the tropical TTL over Australia and West Africa. Below and above this level, the fraction is much lower, attaining values of 10% and smaller. In the lower stratosphere the fine particles mostly consist of sulphuric acid, which is reflected in the low numbers of non-volatile residues measured by COPAS. Without detailed chemical composition measurements, the reason for the increase of non-volatile particle fractions cannot yet be given. The long-distance transfer flights to Brazil, Australia and West Africa were executed during a time window of 17 months within a period of relative volcanic quiescence. Thus the data measured during these transfers represent a "snapshot" documenting the status of a significant part of the global UT/LS aerosol (with sizes below 1 μm) at low concentration levels 15 years after the last major (i.e., the 1991 Mount Pinatubo) eruption. The corresponding latitudinal distributions of the measured particle number densities are also presented in this paper in order to provide input on the UT/LS background aerosol for modelling purposes.
Current atmospheric models do not include secondary organic aerosol (SOA) production from gas-phase reactions of polycyclic aromatic hydrocarbons (PAHs). Recent studies have shown that primary semivolatile emissions, previously assumed to be inert, undergo oxidation in the gas phase, leading to SOA formation. This opens the possibility that low-volatility gas-phase precursors are a potentially large source of SOA. In this work, SOA formation from gas-phase photooxidation of naphthalene, 1-methylnaphthalene (1-MN), 2-methylnaphthalene (2-MN), and 1,2-dimethylnaphthalene (1,2-DMN) is studied in the Caltech dual 28-m3 chambers. Under high-NOx conditions and aerosol mass loadings between 10 and 40 μg m−3, the SOA yields (mass of SOA per mass of hydrocarbon reacted) ranged from 0.19 to 0.30 for naphthalene, 0.19 to 0.39 for 1-MN, 0.26 to 0.45 for 2-MN, and were constant at 0.31 for 1,2-DMN. Under low-NOx conditions, the SOA yields were measured to be 0.73, 0.68, and 0.58 for naphthalene, 1-MN, and 2-MN, respectively. The SOA was observed to be semivolatile under high-NOx conditions and essentially nonvolatile under low-NOx conditions, owing to the higher fraction of ring-retaining products formed under low-NOx conditions. When applying these measured yields to estimate SOA formation from primary emissions of diesel engines and wood burning, PAHs are estimated to yield 3–5 times more SOA than light aromatic compounds. PAHs can also account for up to 54% of the total SOA from oxidation of diesel emissions, representing a potentially large source of urban SOA.
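The SOA yield used throughout is simply the ratio of aerosol mass formed to hydrocarbon mass reacted. A minimal sketch with made-up chamber values (not data from the study):

```python
def soa_yield(soa_mass_formed, hc_mass_reacted):
    """SOA yield: mass of SOA formed per mass of parent hydrocarbon
    reacted (both in the same units, e.g. ug m^-3)."""
    return soa_mass_formed / hc_mass_reacted

# Hypothetical chamber numbers: 30 ug m^-3 of SOA formed from
# 100 ug m^-3 of naphthalene reacted gives a yield of 0.30
print(soa_yield(30.0, 100.0))  # 0.3
```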
During a 4-week run in October–November 2006, a pilot experiment was performed at the CERN Proton Synchrotron in preparation for the CLOUD experiment, whose aim is to study the possible influence of cosmic rays on clouds. The purpose of the pilot experiment was firstly to carry out exploratory measurements of the effect of ionising particle radiation on aerosol formation from trace H2SO4 vapour, and secondly to provide technical input for the CLOUD design. A total of 44 nucleation bursts were produced and recorded, with formation rates of particles above the 3 nm detection threshold of between 0.1 and 100 cm−3 s−1, and growth rates between 2 and 37 nm h−1. The corresponding H2SO4 concentrations were typically around 10^6 cm−3 or less. The experimentally measured formation rates and H2SO4 concentrations are comparable to those found in the atmosphere, supporting the idea that sulphuric acid is involved in the nucleation of atmospheric aerosols. However, sulphuric acid alone is not able to explain the observed rapid growth rates, which suggests the presence of additional trace vapours in the aerosol chamber, whose identity is unknown. By analysing the charged fraction, a few of the aerosol bursts appear to have a contribution from ion-induced nucleation and ion-ion recombination to form neutral clusters. Some indications were also found for the accelerator beam timing and intensity to influence the aerosol particle formation rate at the highest experimental SO2 concentration of 6 ppb, although none was found at lower concentrations. Overall, the exploratory measurements provide suggestive evidence for ion-induced nucleation or ion-ion recombination as sources of aerosol particles. However, in order to quantify the conditions under which ion processes become significant, improvements are needed in controlling the experimental variables and in the reproducibility of the experiments.
Finally, concerning technical aspects, the most important lessons for the CLOUD design include the stringent requirement of internal cleanliness of the aerosol chamber, as well as maintenance of extremely stable temperatures (variations below 0.1°C).
Global-scale information on natural river flows and anthropogenic river flow alterations is required to identify areas where aqueous ecosystems are expected to be strongly degraded. Such information can support the identification of environmental flow guidelines and a sustainable water management that balances the water demands of humans and ecosystems. This study presents the first global assessment of the anthropogenic alteration of river flow regimes by water withdrawals and dams, focusing in particular on the change of flow variability. Six ecologically relevant flow indicators were quantified using an improved version of the global water model WaterGAP. WaterGAP simulated, with a spatial resolution of 0.5 degree, river discharge as affected by human water withdrawals and dams, as well as naturalized discharge without this type of human interference. Mainly due to irrigation, long-term average river discharge and the statistical low flow Q90 (the monthly river discharge that is exceeded in 9 out of 10 months) have decreased by more than 10% on one sixth and one quarter of the global land area (excluding Antarctica and Greenland), respectively. Q90 has increased significantly on only 5% of the land area, downstream of reservoirs. Due to both water withdrawals and dams, seasonal flow amplitude has decreased significantly on one sixth of the land area, while interannual variability has increased on one quarter of the land area, mainly due to irrigation. It has decreased on only 8% of the land area, in areas with little consumptive water use that are downstream of dams. Areas most affected by anthropogenic river flow alterations are the western and central USA, Mexico, the western coast of South America, the Mediterranean rim, Southern Africa, the semi-arid and arid countries of the Near East and Western Asia, Pakistan and India, Northern China and the Australian Murray-Darling Basin, as well as some Arctic rivers. Due to a large number of uncertainties related, e.g., to the estimation of water use and reservoir operation rules, the analysis is expected to provide only first estimates of river flow alterations that should be refined in the future.
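The Q90 indicator defined above (the monthly discharge exceeded in 9 out of 10 months) is the 10th percentile of the monthly discharge series. A sketch using the nearest-rank percentile method, with a hypothetical monthly series:

```python
import math

def q90(monthly_discharge):
    """Q90 low flow: the monthly discharge value exceeded in 9 out of
    10 months, i.e. the nearest-rank 10th percentile of the series."""
    s = sorted(monthly_discharge)
    # nearest-rank method: value at the 10% rank of the sorted series
    rank = max(math.ceil(0.10 * len(s)) - 1, 0)
    return s[rank]

# Hypothetical monthly discharges in m^3/s for one year
flows = [120, 95, 80, 60, 45, 30, 25, 22, 28, 55, 90, 110]
print(q90(flows))  # 25
```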
Pollen-based climate reconstructions were performed on two high-resolution marine pollen records from the Alboran and Aegean Seas in order to unravel the climatic variability in the coastal settings of the Mediterranean region between 15 000 and 4000 cal yrs BP (the Lateglacial and early to mid-Holocene). The quantitative climate reconstructions for the Alboran and Aegean Sea records focus mainly on the reconstruction of seasonality changes (temperatures and precipitation), a crucial parameter in the Mediterranean region. This study is based on a multi-method approach comprising three methods: the Modern Analogues Technique (MAT), the recent Non-Metric Multidimensional Scaling/Generalized Additive Model method (NMDS/GAM) and Partial Least Squares regression (PLS). The climate signal inferred from this comparative approach confirms that cold and dry conditions prevailed in the Mediterranean region during the Heinrich event 1 and Younger Dryas periods, while temperate conditions prevailed during the Bølling/Allerød and the Holocene. Our records suggest a west/east gradient of decreasing precipitation across the Mediterranean region during the cooler Lateglacial and early Holocene periods, similar to present-day conditions. Winter precipitation was highest during warm intervals and lowest during cooling phases. Several short-lived cool intervals (i.e., the Older Dryas, a subsequent oscillation (GI-1c2), the Gerzensee/Preboreal Oscillations, the 8.2 ka event, and Bond events) connected to the North Atlantic climate system are documented in the Alboran and Aegean Sea records, indicating that the climate oscillations associated with the successive steps of the deglaciation in the North Atlantic area occurred in both the western and eastern Mediterranean regions. This observation confirms the presence of strong climatic linkages between the North Atlantic and Mediterranean regions.
Abrupt climate changes of the last deglaciation detected in a western Mediterranean forest record
(2009)
Abrupt changes in Western Mediterranean climate during the last deglaciation (20 to 6 cal ka BP) are detected in marine core MD95-2043 (Alboran Sea) through the investigation of high-resolution pollen data and pollen-based climate reconstructions by the modern analogue technique (MAT) for annual precipitation (Pann) and mean temperatures of the coldest and warmest months (MTCO and MTWA). Changes in temperate Mediterranean forest development and composition and MAT reconstructions indicate major climatic shifts with parallel temperature and precipitation changes at the onsets of Heinrich stadial 1 (equivalent to the Oldest Dryas), the Bölling-Allerød (BA), and the Younger Dryas (YD). Multi-centennial-scale oscillations in forest development occurred throughout the BA, YD, and early Holocene. Shifts in vegetation composition and Pann reconstructions indicate that forest declines occurred during dry, and generally cool, episodes centred at 14.0, 13.3, 12.9, 11.8, 10.7, 10.1, 9.2, 8.3 and 7.4 cal ka BP. The forest record also suggests multiple, low-amplitude Preboreal (PB) climate oscillations, and a marked increase in moisture availability for forest development at the end of the PB at 10.6 cal ka BP. Dry atmospheric conditions in the Western Mediterranean occurred in phase with Lateglacial events of high-latitude cooling including GI-1d (Older Dryas), GI-1b (Intra-Allerød Cold Period) and GS-1 (YD), and during Holocene events associated with high-latitude cooling, meltwater pulses and N. Atlantic ice-rafting. A possible climatic mechanism for the recurrence of dry intervals and an opposed regional precipitation pattern with respect to Western-central Europe relates to the dynamics of the westerlies and the prevalence of atmospheric blocking highs.
Comparison of radiocarbon and ice-core ages for well-defined climatic transitions in the forest record suggests possible enhancement of marine reservoir ages in the Alboran Sea by 200 years (surface water age 600 years) during the Lateglacial.
George Orwell's novel 1984, published in 1949, is commonly regarded as one of the classics of dystopian literature. Even though the actual year 1984 has long since passed, Orwell's vision of a repressive, totalitarian society has lost none of its topicality. Concepts such as "Big Brother" or "doublethink" have entered our everyday vocabulary, and Orwell's novel continues to serve as the model for many contemporary dystopias. Yet it is not only Orwell's depiction of a bleak, futuristic surveillance state, in which a group of power holders attempts to control both the past and the thoughts of the population, that embodies key leitmotifs of dystopian literature. The role and use of language in this vision of the future has also left lasting traces in dystopian literature, even though this role is frequently overlooked in the research literature. Critics do regularly engage with the aspect of language in novels such as 1984 or Aldous Huxley's Brave New World, but there are hardly any comparative studies that treat language as a distinct, central dystopian motif; instead, language is usually subsumed under other aspects.
The present thesis addresses precisely this shortcoming. Based on eight dystopian novels in English, all published within the last 80 years, it works out the role of language and makes clear its relevance for the genre of dystopia. The works used are, in chronological order: Aldous Huxley's Brave New World (1932), George Orwell's 1984 (1949), Anthony Burgess's A Clockwork Orange (1960), Russell Hoban's Riddley Walker (1980), Suzette Haden Elgin's Native Tongue (1984) and The Judas Rose (1987), Margaret Atwood's The Handmaid's Tale (1985), and Will Self's The Book of Dave (2006). The novels were deliberately chosen to cover as broad a range and time span as possible, while also taking up different currents and traditions within the genre of dystopian literature.
Before the actual textual analysis begins, the origins and characteristics of the dystopian concept are outlined. The study looks briefly at the development of utopia, the counter-concept to dystopia, from the classical period to modernity, and then traces the emergence of anti-utopian tendencies up to the appearance of dystopia, a specific subcategory of anti-utopian literature, in the late 19th century. On this basis, some of the most important leitmotifs are introduced, which later also play a decisive role in connection with language. Finally, the problem of organizing and classifying language in the following analysis is addressed. Not only is language in itself a far-reaching concept; the use of language in the individual novels also differs considerably. Novels such as Riddley Walker, A Clockwork Orange and The Book of Dave, for example, are written entirely or largely in an invented, fictional language, which requires the reader to adjust his or her interpretative frame. In other novels, such as Brave New World, The Handmaid's Tale or 1984, language plays a role almost exclusively at the level of the plot. A comprehensive analysis would have to cover all aspects of language use, although the limited scope of this thesis allows only the most important aspects to be addressed.
The structure of the main analysis follows from these different forms of language use, in which language is considered both as a written and as a spoken medium. The first part deals with the role of language at the level of the plot. Drawing on Michel Foucault's theory of discourse, it is shown how language is used, on the one hand, by an authoritarian power or institution to enforce particular discourses, to guarantee the stability of the dystopian society, and to prevent the expression of critical thoughts. On the other hand, in line with Foucault's notion of discourse, according to which a discourse always also produces its own resistance, language is employed in some novels as the opposite: as a medium for liberation and for preserving individuality. This reciprocal relationship is analysed in detail. The third analytical section uncovers the relationship between social class and status.
The second half of the study turns away from the plot level and concentrates on stylistic and structural aspects. It shows how language is used by the authors to intensify the dystopian experience: how the inclusion of fictional languages, para- and intertextuality, and naming are employed as stylistic devices which in turn highlight two of the most important characteristics of dystopian literature. These are, on the one hand, the didactic intent with which dystopias warn of a possible (and inevitably worse) future if no countermeasures are taken, and on the other, the way dystopias deliberately take up aspects of the authors' own time and extrapolate them into the framework of the plot. Building on this premise, the study finally takes up a number of ideas from linguistic and cultural theory that have found their way into the individual works, thus enabling a distinct discourse of language in the dystopian novel.
In conclusion, the results are taken up and evaluated with a view to a possible repositioning of language in research on the dystopian novel. Three specific functions of language use are derived from the analysis, and it is finally proposed that language be regarded in future as a motif in its own right within dystopian literature, since the aspect of language in the texts discussed here is inextricably bound up with the intent and form of dystopia.
In this paper, similarity hypotheses for the atmospheric surface layer (ASL) are reviewed using nondimensional characteristic invariants, referred to as π-numbers. The basic idea of this dimensional π-invariants analysis (sometimes also called Buckingham's π-theorem) is described in a mathematically generalized formalism. To illustrate the scope of this powerful method and how it can be applied to deduce a variety of reasonable solutions by the formalized procedure of non-dimensionalization, various instances are presented that are relevant to the turbulence transfer across the ASL and the prevailing structure of ASL turbulence. Within the framework of our review we consider both (a) Monin-Obukhov scaling for forced-convective conditions, and (b) Prandtl-Obukhov-Priestley scaling for free-convective conditions. It is shown that in the various instances of Monin-Obukhov scaling generally two π-numbers occur that result in corresponding similarity functions. In contrast, Prandtl-Obukhov-Priestley scaling leads to only one π-number in each case, usually considered as a non-dimensional universal constant. Since an explicit mathematical relationship for the similarity functions cannot be obtained from a dimensional π-invariants analysis, elementary laws of π-invariants have to be pointed out using empirical and/or theoretical findings. To evaluate empirical similarity functions usually considered within the framework of flux-profile relationships, so-called integral similarity functions for momentum and sensible heat are presented and assessed on the basis of the friction velocity and the vertical component of the eddy flux densities of sensible and latent heat directly measured during the GREIV I 1974 field campaign.
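In Monin-Obukhov scaling, the central π-number is the stability parameter ζ = z/L, where the Obukhov length L is built from the friction velocity, a reference virtual potential temperature, and the kinematic surface heat flux. A minimal sketch of this non-dimensionalization; the input values below are hypothetical:

```python
KAPPA = 0.4   # von Karman constant (dimensionless)
G = 9.81      # gravitational acceleration, m s^-2

def obukhov_length(u_star, theta_v, heat_flux):
    """Obukhov length L (m) from friction velocity u* (m s^-1), virtual
    potential temperature (K) and kinematic heat flux w'theta' (K m s^-1)."""
    return -u_star**3 * theta_v / (KAPPA * G * heat_flux)

def zeta(z, L):
    """Dimensionless stability parameter z/L, the pi-number of
    Monin-Obukhov similarity."""
    return z / L

# Hypothetical unstable (daytime) surface-layer values
L = obukhov_length(u_star=0.3, theta_v=300.0, heat_flux=0.1)
print(round(L, 1), round(zeta(10.0, L), 3))  # -20.6 -0.484
```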
Diabetic nephropathy (DN) is a major cause of end-stage renal failure worldwide. Oxidative stress has been reported to be a major culprit of the disease, and increased oxidized low-density lipoprotein (oxLDL) immune complexes were found in patients with DN. In this study we present evidence that CXCL16 is the main receptor in human podocytes mediating the uptake of oxLDL. In contrast, in primary tubular cells CD36 was mainly involved in the uptake of oxLDL. We further demonstrate that oxLDL down-regulated α3-integrin expression and increased the production of fibronectin in human podocytes. In addition, oxLDL uptake induced the production of reactive oxygen species (ROS) in human podocytes. Inhibition of oxLDL uptake by CXCL16-blocking antibodies abrogated the fibronectin and ROS production and restored α3-integrin expression in human podocytes. Furthermore, we present evidence that hyperglycaemic conditions increased CXCL16 and reduced ADAM10 expression in podocytes. Importantly, in streptozotocin-induced diabetic mice an early induction of CXCL16 was accompanied by higher levels of oxLDL. Finally, immunofluorescence analysis in biopsies of patients with DN revealed increased glomerular CXCL16 expression, which was paralleled by high levels of oxLDL. In summary, regulation of CXCL16, ADAM10 and oxLDL expression may be an early event in the onset of DN, and therefore all three proteins may represent potential new targets for diagnosis and therapeutic intervention in DN.
The manifestation of chronic back pain depends on structural, psychosocial, occupational and genetic influences. Heritability estimates for back pain range from 30% to 45%. Genetic influences stem from genes affecting intervertebral disc degeneration or the immune response, and from genes involved in pain perception, signalling and psychological processing. This inter-individual variability, which is partly due to genetic differences, would require an individualized pain management to prevent the transition from acute to chronic back pain or to improve the outcome. The genetic profile may help to identify patients at high risk for chronic pain. We summarize genetic factors that (i) impact on intervertebral disc stability, namely Collagen IX, COL9A3, COL11A1, COL11A2, COL1A1, aggrecan (ACAN), cartilage intermediate layer protein, vitamin D receptor, metalloproteinase-3 (MMP3), MMP9 and thrombospondin-2, (ii) modify inflammation, namely interleukin-1 (IL-1) locus genes and IL-6, and (iii) modulate pain signalling and analgesic drug metabolism, namely guanosine triphosphate (GTP) cyclohydrolase 1, catechol-O-methyltransferase, μ opioid receptor (OPRM1), melanocortin 1 receptor (MC1R), transient receptor potential channel A1, fatty acid amide hydrolase, and the drug-metabolizing enzymes cytochrome P450 (CYP) 2D6 and CYP2C9.
Protein catabolism should be reduced and protein synthesis promoted with parenteral nutrition (PN). Amino acid (AA) solutions should always be infused with PN. Standard AA solutions are generally used, whereas specially adapted AA solutions may be required in certain conditions such as severe disorders of AA utilisation or inborn errors of AA metabolism. An AA intake of 0.8 g/kg/day is generally recommended for adult patients with a normal metabolism, which may be increased to 1.2–1.5 g/kg/day, or to 2.0 or 2.5 g/kg/day in exceptional cases. Sufficient non-nitrogen energy sources should be added in order to assure adequate utilisation of AA. A nitrogen-calorie ratio of 1:130 to 1:170 (g N/kcal), i.e. 1:21 to 1:27 (g AA/kcal), is recommended under normal metabolic conditions. In critically ill patients, glutamine should be administered parenterally, if indicated, in the form of peptides, for example 0.3–0.4 g glutamine dipeptide/kg body weight/day (=0.2–0.26 g glutamine/kg body weight/day). No recommendation can be made for glutamine supplementation in PN for patients with acute pancreatitis or after bone marrow transplantation (BMT), or in newborns. The application of arginine is currently not warranted as a supplement in PN in adults. N-acetyl AAs are only of limited use as alternative AA sources. There is currently no indication for the use of AA solutions with an increased content of glycine, branched-chain AAs (BCAA) or ornithine-α-ketoglutarate (OKG) in all patients receiving PN. AA solutions with an increased proportion of BCAA are recommended in the treatment of hepatic encephalopathy (III–IV).
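The dosing ranges quoted above can be turned into a small worked example. The 70 kg body weight and the factor of 6.25 g amino acid per g nitrogen are assumptions made for illustration (the conversion factor is a common convention, not stated in the abstract); this is a numerical sketch, not clinical guidance.

```python
# Worked example of the quoted ranges for a hypothetical 70 kg adult
# with normal metabolism (illustration only, not clinical guidance).
weight_kg = 70.0

# Standard amino acid intake: 0.8 g/kg/day
aa_standard = 0.8 * weight_kg            # 56 g AA/day

# Glutamine dipeptide 0.3-0.4 g/kg/day, equivalent glutamine 0.2-0.26 g/kg/day
dipeptide_low, dipeptide_high = 0.3 * weight_kg, 0.4 * weight_kg   # 21-28 g/day
glutamine_low, glutamine_high = 0.2 * weight_kg, 0.26 * weight_kg  # 14-18.2 g/day

# Nitrogen-calorie ratio 1:130 to 1:170 (g N/kcal); the 6.25 g AA per g N
# conversion is an assumed convention
nitrogen = aa_standard / 6.25            # 8.96 g N/day
kcal_low = nitrogen * 130                # 1164.8 kcal/day
kcal_high = nitrogen * 170               # 1523.2 kcal/day
```

The energy range obtained via the nitrogen ratio (about 1165–1523 kcal/day) is consistent with the abstract's alternative 1:21 to 1:27 g AA/kcal formulation (56 × 21 = 1176 to 56 × 27 = 1512 kcal/day).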
There are special challenges in implementing parenteral nutrition (PN) in paediatric patients, which arise from the wide range of patients, from extremely premature infants up to teenagers weighing 100 kg or more, and from their varying substrate requirements. Age- and maturity-related changes of the metabolism and of fluid and nutrient requirements must be taken into consideration, along with the clinical situation in which PN is applied. The indication, the procedure and the intake of fluid and substrates differ considerably from PN practice in adult patients; for example, the fluid, nutrient and energy needs of premature infants and newborns per kg body weight are markedly higher than those of older paediatric and adult patients. Premature infants born before 35 weeks of gestation and most sick term infants usually require full or partial PN. In neonates, the actual amount of PN administered must be calculated (not estimated). Enteral nutrition should be introduced gradually and should replace PN as quickly as possible in order to minimise any side effects from exposure to PN. Inadequate substrate intake in early infancy can cause long-term detrimental effects in terms of metabolic programming of the risk of illness in later life. If energy and nutrient demands in children and adolescents cannot be met through enteral nutrition, partial or total PN should be considered within 7 days or less, depending on the nutritional state and clinical conditions.
The genomes of the epsilonproteobacteria Wolinella succinogenes and Campylobacter jejuni both contain operons (sdhABE) that encode hitherto uncharacterized enzyme complexes annotated as 'non-classical' succinate:quinone reductases (SQRs). However, the role of such an enzyme, ostensibly involved in aerobic respiration, in an anaerobic organism such as W. succinogenes has hitherto been unknown. We have established the first genetic system for the manipulation and production of a member of the non-classical succinate:quinone oxidoreductase family. Biochemical characterization of the W. succinogenes enzyme reveals that the putative SQR is in fact a novel methylmenaquinol:fumarate reductase (MFR) with no detectable succinate oxidation activity, clearly indicative of its involvement in anaerobic metabolism. We demonstrate that the hydrophilic subunits of the MFR complex are, in contrast to all other previously characterized members of the superfamily, exported into the periplasm via the twin-arginine translocation (Tat) pathway. Furthermore, we show that a single amino acid exchange (Ala86→His) in the flavoprotein of that enzyme complex is the only additional requirement for the covalent binding of the otherwise non-covalently bound FAD. Our results provide an explanation for the previously published puzzling observation that the C. jejuni sdhABE operon is upregulated in an oxygen-limited environment as compared with microaerophilic laboratory conditions.
Perturbation theory for non-abelian gauge theories at finite temperature is plagued by infrared divergences which are caused by magnetic soft modes ~ g²T, corresponding to gluon fields of a 3d Yang-Mills theory. While the divergences can be regulated by a dynamically generated magnetic mass on that scale, the gauge coupling drops out of the effective expansion parameter, requiring summation of all loop orders for the calculation of observables. Some gauge-invariant possibilities to implement such infrared-safe resummations are reviewed. We use a scheme based on the non-linear sigma model to estimate some of the contributions ~ g⁶ of the soft magnetic modes to the QCD pressure through two loops. The NLO contribution amounts to ~ 10% of the LO, suggestive of a reasonable convergence of the series.
The so-called sign problem of lattice QCD prohibits Monte Carlo simulations at finite baryon density by means of importance sampling. Over the last few years, methods have been developed which are able to circumvent this problem as long as the quark chemical potential satisfies μ/T ≲ 1. After a brief review of these methods, their application to a first-principles determination of the QCD phase diagram for small baryon densities is summarised. The location and curvature of the pseudo-critical line of the quark–hadron transition is under control, and extrapolations to physical quark masses and the continuum are feasible in the near future. No definite conclusions can as yet be drawn regarding the existence of a critical end point, which turns out to be extremely quark-mass and cut-off sensitive. Investigations with different methods on coarse lattices show the light-mass chiral phase transition to weaken when a chemical potential is switched on. If this persists on finer lattices, it would imply that there is no chiral critical point or phase transition for physical QCD. Any critical structure would then be related to physics other than chiral symmetry breaking.
The chiral critical surface is a surface of second-order phase transitions bounding the region of first-order chiral phase transitions for small quark masses in the {m_u,d, m_s, μ} parameter space. The potential critical endpoint of the QCD (T, μ) phase diagram is widely expected to be part of this surface. Since for μ = 0 with physical quark masses QCD is known to exhibit an analytic crossover, this expectation requires the region of chiral transitions to expand with μ for a chiral critical endpoint to exist. Instead, on coarse Nt = 4 lattices, we find the area of chiral transitions to shrink with μ, which excludes a chiral critical point for QCD at moderate chemical potentials μB < 500 MeV. First results on finer Nt = 6 lattices indicate a curvature of the critical surface consistent with zero, leaving these conclusions unchanged. We also comment on the interplay of phase diagrams between the Nf = 2 and Nf = 2+1 theories and its consequences for physical QCD.
We perform a two-flavor dynamical lattice computation of the Isgur-Wise functions τ1/2 and τ3/2 at zero recoil in the static limit. We find τ1/2(1) = 0.297(26) and τ3/2(1) = 0.528(23), fulfilling Uraltsev's sum rule by around 80%. We also comment on a persistent conflict between theory and experiment regarding semileptonic decays of B mesons into orbitally excited P-wave D mesons, the so-called "1/2 versus 3/2 puzzle", and we discuss the relevance of lattice results in this context.
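The quoted ~80% saturation can be checked arithmetically from the central values. The check below assumes the lowest-order form of Uraltsev's sum rule, |τ3/2(1)|² − |τ1/2(1)|² = 1/4, saturated by the ground-state contribution alone; this is a reading of the abstract, not a statement taken from the paper itself.

```python
# Saturation of Uraltsev's sum rule (assumed lowest-order form:
# |tau_3/2(1)|^2 - |tau_1/2(1)|^2 = 1/4) from the quoted central values.
tau_12 = 0.297   # tau_1/2(1)
tau_32 = 0.528   # tau_3/2(1)

lhs = tau_32**2 - tau_12**2      # ground-state contribution, ~0.19
saturation = lhs / 0.25          # fraction of the sum rule fulfilled
```

With the central values this gives roughly 0.76, consistent with the abstract's statement that the sum rule is fulfilled "by around 80%".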
We present a lattice QCD calculation of the heavy-light decay constants fB and fBs performed with Nf = 2 maximally twisted Wilson fermions, at four values of the lattice spacing. The decay constants have also been computed in the static limit, and the results are used to interpolate the observables between the charm and the infinite-mass sectors, thus obtaining the value of the decay constants at the physical b quark mass. Our preliminary results are fB = 191(14) MeV, fBs = 243(14) MeV, fBs/fB = 1.27(5). They are in good agreement with those obtained with a novel approach, recently proposed by our Collaboration (ETMC), based on the use of suitable ratios having an exactly known static limit.
We present first results from runs performed with Nf = 2+1+1 flavours of dynamical twisted mass fermions at maximal twist: a degenerate light doublet and a mass-split heavy doublet. An overview of the input parameters and tuning status of our ensembles is given, together with a comparison with results obtained with Nf = 2 flavours. The problem of extracting the mass of the K- and D-mesons is discussed, and the tuning of the strange and charm quark masses is examined. Finally, we compare two methods of extracting the lattice spacings to check the consistency of our data, and we present some first results of χPT (chiral perturbation theory) fits in the light-meson sector.
"Entre direitos iguais, a força decide", proferiu karl marx ao descrever a antinomia do direito em situações antagônicas das relações de produção capitalistas, em que "o direito [oferece resistência] ao direito" nesse ponto, marx aborda uma questão que se situa no centro de todas as teorias jurídicas críticas: que tipo de violência é velada por meio do mecanismo de ocultação denominado "direito"? Para responder a esta questão, tentar-se-á, a seguir, tornar a teoria da hegemonia de antonio gramsci e seu modelo de direito hegemônico produtivos para o campo da teoria do direito. Tal tarefa tem de lidar com a dupla dificuldade de que, por um lado, gramsci não foi um teórico do direito no sentido mais estrito, razão pela qual o potencial de sua teoria para uma análise do direito raramente foi utilizada. Por outro lado, sua abordagem só pode ser empregada por meio de uma crítica às restrições relacionadas a seu tempo. isso se aplica especialmente à sua concepção de economia como a base e a núcleo essencialista oculto (laclau; mouffe, 2001:69), assim como à sua ideia de 'classismo' sob a forma de um enfoque unilateral das classes, em que há preferencialmente mais de um "pluralismo de poder" e inúmeras lutas (litowitz, 2000: 536). Recuperar-se-á, consequentemente, argumentos-chave, ampliando-os pela utilização das recentes descobertas feitas pelas abordagens feminista e neomaterialista da teoria jurídica, bem como as análises de foucault acerca das tecnologias de poder. por fim, uma interpretação da teoria sistêmica das autonomizações comunicativas.
September 11 accelerated the development of a transnational security architecture that intervenes deeply in individual civil liberties, both in the basic rights of the citizens of states and in the human rights of world citizens. The article outlines this architecture, shows how it dissolves the traditional legal categories that preserve liberty, and discusses why the priority of security over liberty is widely accepted today.
The limits of tolerance [Os limites da tolerância]
(2009)
This article presents the constitutive elements of the concept of toleration and discusses two different conceptions of the term, as permission and as moral respect, which express different ways of drawing the limits of toleration. Toleration is presented as a concept which, in order to gain any content, depends normatively on a right to justification based on the idea of a public use of reason, according to which the practices and the political-legal institutions that determine the social life of citizens must be justifiable in the light of norms that they cannot reciprocally and generally reject.
Background: A growing number of German hospitals have been privatized with the intention of increasing cost effectiveness and improving the quality of health care. Numerous studies have investigated the possible qualitative and economic consequences these changes might have on patient care. However, little is known about how this privatization trend relates to physicians' working conditions and job satisfaction. It was anticipated that different working conditions would be associated with different types of hospital ownership. To that end, this study's purpose is to compare how physicians, working for both public and privatized hospitals, rate their respective psychosocial working conditions and job satisfaction.
Methods: The study was designed as a cross-sectional comparison using questionnaire data from 203 physicians working at German hospitals of different ownership types (private for-profit, public and private nonprofit).
Results: The present study shows that several aspects of physicians' perceived working conditions differ significantly depending on hospital ownership. However, results also indicated that physicians' job satisfaction does not vary between different types of hospital ownership. Finally, it was demonstrated that job demands and resources are associated with job satisfaction, while type of ownership is not.
Conclusion: This study is one of the few to investigate the effect of hospital ownership on physicians' work situation; it demonstrates that the type of ownership is a potential factor accounting for differences in working conditions. The findings provide an informative basis for finding solutions to improve physicians' work at German hospitals.
This thesis analyses the concept and the value of freedom in the writings of the Canadian philosopher Charles Taylor, with reference to his political philosophy and philosophical anthropology. The conceptual clarification is based on a systematisation of the positive use of the concept of freedom across Taylor's complete works. The value analysis interprets the results of this systematisation with regard to the question of whether freedom, as Taylor understands it, is an extrinsic or an intrinsic value.
In its admissibility decision in the Al-Saadoon case, the ECtHR held that the United Kingdom had jurisdiction over the applicants, who had been arrested by British forces and kept in a British-run military prison in Iraq. Just before the respective mandate of the Security Council expired on 31 December 2008, the applicants were transferred to Iraqi custody at Iraqi request and thereby exposed to the risk of an unfair trial followed by capital punishment. In this respect, the case resembles the Soering case, although the applicants were, unlike Soering, not on British territory but on occupied Iraqi soil before they were handed over. This aspect raises the question of Iraqi sovereignty as a norm competing with the UK's human rights obligations. The authors trace the ECtHR's case law concerning the extraterritorial application of the Convention and analyse the UK judgments and the ECtHR's admissibility decision in the Al-Saadoon affair from this angle. Furthermore, they consider the doctrinal consequences of the ECHR's extraterritorial effect in cases like Soering and Al-Saadoon, where contracting parties violate guarantees of the Convention by exposing a person within their jurisdiction to the risk of treatment contrary to these guarantees by a third state. Finally, they test the argument brought forward by the UK that not transferring the applicants would have violated Iraqi sovereignty, and identify how the ECtHR and the UK courts have coped in the past with international law norms potentially competing with the Convention.
* Cooperation between "jeder-fehlerzaehlt.de" and the Techniker statutory insurance company
* "PRIoritising multiple medication in multi-morbid patients" – PRIMUM-Pilot study gets off to successful start
* New work area: Quality promotion and concept development
* Frankfurt Training Program in Evidence-Based Medicine
* Another change in our institute is the new arrival of Sabine Pommeresch
* 2nd General Practice Day in Frankfurt
kurz und kn@pp news : Nr. 17
(2009)
kurz und kn@pp news : Nr. 16
(2009)
* Kooperation von "jeder-Fehler-zaehlt.de" mit der Techniker Krankenkasse
* "PRIorisierung von MUltimedikation bei Multimorbidität" – PRIMUM-Pilotstudie erfolgreich gestartet
* Neuer Arbeitsbereich: Qualitätsförderung und Konzeptentwicklung
* Neu im Institut ist Sabine Pommeresch
* Frankfurter Fortbildungsreihe Evidenzbasierte Medizin
* 2. Frankfurter Tag der Allgemeinmedizin: Jetzt online
kurz und kn@pp news : Nr. 15
(2009)