The term intercultural linguistics remains contested among Germanists despite the nearly forty-year history of intercultural German studies. On the one hand, applied linguistics is booming on topics such as intercultural communication and the cultural determinants of translation; on the other, this branch of philology is accused of conceptual syncretism, a lack of theoretical rigor, and methodological vagueness (Glück 2010: 300). The broad fields of application, especially in foreign-language acquisition and translation studies, stand in contrast to the scarce engagement with the discipline's theoretical foundations. A compact monograph on the basic positions of intercultural linguistics is therefore all the more welcome...
This contribution deals with the level of linguistic connections in the syntax of two language systems under comparison. The study focuses on the problem of syntactic correlation in German and Slovak. It treats syntactic correlates in both German and Slovak, their function in hypotaxis, and their use in specific types of subordinate clause in the two languages...
The following contribution presents the problem of eponyms in technical language and examines their occurrence in selected fields of the natural sciences. The natural sciences are a rich source of eponymous designations. Three disciplines were selected for the analysis: medicine, chemistry, and physics. All three were analyzed and compared on the basis of German-language material. The analysis concentrates on the particular features of eponyms, i.e. their structure, use, occurrence, categorization, and evaluation within the selected disciplines. Several authors have dealt with eponyms, e.g. Morton S. Freeman and Dorothy Auchter. In Slovakia, Mária Bujalková, Božena Džuganová, Gabriela Poláčková, and Ivan Masár have devoted attention to them. I. Masár uses the term dedikačné termíny (dedication terms, from Latin dedicare = to dedicate). In German-language research, Hans-R. Fluck and Ingrid Wiese should be mentioned...
The fairy tale Der blonde Eckbert appeared in 1797 in the first volume of the collection Volksmärchen and opened the series of Tieck's fairy-tale creations. By placing this story in that collection, Tieck blurred the genre concept: the story is not a folk tale but is regarded as the first Romantic literary fairy tale (Kunstmärchen). A good connoisseur of the folk and fairy tales so popular at the time, Tieck defamiliarized the familiar genre by combining the elements and motifs of the fairy tale with thoroughly realistic moments and, as it were, psychologizing the plot. Tieck also knew the chapbooks of his day, but in his beginnings stood close to the Gothic novel. He used fairy-tale motifs not only in prose but also in other genres. His fairy-tale plays Ritter Blaubart and Der gestiefelte Kater are subjectively colored, modernized dramatizations of fairy tales...
The smaller, the better – meaning the distance to the setting of a work, if the writer does not place it in his homeland. For a poet of the German tongue there was hardly any reason to pay attention to Slovakia. A more intensive contact between the two countries, calamitous for both sides, came only in the autumn of 1944, when the German Wehrmacht set about occupying Slovakia and crushing the uprising that had broken out at the end of August...
This contribution deals with the functions of the titles of literary works. In dialogue with various concepts (Arnold Rothe, Harald Weinrich, Daniela Hodrová) and the author's own views, an attempt is made to extend and systematize Gérard Genette's classification of title functions. The functions of literary titles are illustrated using the debut novel Agnes (1998) by the German-speaking Swiss author Peter Stamm (b. 1963). The choice of this text can be justified in three ways: 1. Agnes is one of the key works of contemporary German-language (Swiss) literature (for the argument see Jambor 2008: 28–38). 2. Paradoxically, a short title consisting only of the female protagonist's first name is particularly suited to demonstrating the polyfunctionality of the literary title, for as Harald Weinrich observes in his comparison of titles with longer texts: "Titles distinguish themselves not by less but by more structure and function, even and especially when they are short" (Weinrich 2000: 6). 3. As will be shown later, it is precisely titles containing proper names that expose weak points in Genette's stimulating literary classification and cause difficulties in linguistic discourse as well (Weinrich)...
Today, children and adolescents grow up in the midst of a media world dominated by audiovisual media: television and the computer are the lead media of the young generation. German 6- to 13-year-olds spend 100 minutes a day in front of the television and 40 minutes at the computer, their two favorite media, but devote on average only 22 minutes to reading books (KIM study 2006, cited in Frederking/Krommer/Maiwald 2008: 84). The situation at school stands in stark contrast: the textbook is the central medium, while audiovisual and new media are used only rarely to "freshen up" lessons...
The phenomenon of collocation – from Latin collocatio, meaning placement or arrangement – established itself in linguistics in the second half of the twentieth century. The term "collocation" was introduced by the British linguist John Rupert Firth. Franz Josef Hausmann defines collocations as "fundamentally binary units" which, however, also admit a "triple structure" arising from the combination of two collocations (fester Beruf + Beruf aufgeben = festen Beruf aufgeben)...
Different linguistic approaches and starting points, proceeding from the same research questions, produce different methods and theoretical conceptions. As products of different research procedures, sometimes even controversial definitions arise within the same subfield. Collocation, too, is a contested question in linguistics. The controversy concerns the description of the character of these linguistic phenomena as well as their delimitation, their place within linguistics, and their classification.
In this study we present the heterogeneity of conceptions of collocation in both domestic and international linguistics...
In June 2009 an event entitled "20 Jahre Freiheit: Deutschland sagt Danke!" ("20 Years of Freedom: Germany Says Thank You!") took place in the Slovak capital Bratislava, organized by the German Federal Foreign Office and the German Embassy in Pressburg in cooperation with the Foreign and Culture Ministries of the Slovak Republic and other partners. Similar events were held in Prague, Warsaw, Gdańsk, and Budapest in order to bring German perspectives on the events before and after 1989 into dialogue with the viewpoints of the other post-socialist countries. The organizers also wished to highlight what had been achieved together since the fall of the Iron Curtain, to engage people emotionally, and to make Germany's gratitude and openness to the world tangible. As part of the varied cultural and entertainment program, Monika Maron read "from her new work Stille Zeile sechs", as all announcements of the reading put it. Maron's novel, however, was published in 1991, eighteen years earlier. After the reading, one of the well-known young presenters of a Slovak news channel conducted a short interview with the writer, opening with the question "Frau Maron, how do you feel as a person twenty years after the fall of the Wall?" Before the running television cameras, Monika Maron struggled to find an answer to this embarrassing question. She seemed aware of the reason: the anniversary of the fall of the Wall is a media-staged event...
Promoting Pupils' Creativity and Emotionality through Works of Art in German as a Foreign Language (DaF) Lessons
(2013)
The contribution points out that working with works of art, and thus with creative design processes, enriches the school learning environment in the sense of an integrated concept that combines emotion and cognition. Learners' emotional engagement ensures that their whole personality is involved in the learning process. Theoretical considerations are supplemented by examples from literature and the visual arts...
Teaching a foreign language is closely tied to conveying culturally and socially specific elements. From the very first levels of foreign-language instruction, cultural and regional information is built into lessons, and the higher the learners' language competence, the richer the additional information passed on to them. Literature, history, and the visual arts are usually treasure troves of material suitable for didactic use, but sociolinguistic topics can also offer an interesting starting point for stimulating, action-oriented language teaching...
At the last SUNG conference two years ago, my lecture sought above all to point out that we teachers and Germanists, whose profession is rooted in the humanities, ever more rarely get around – if at all – to our most essential task of education and Bildung in a sense broader than the merely informational, given today's information-obsessed age and the technical constraints it places on our work. I looked back at earlier interpretations of the concepts of education and Bildung in order to bring the worrying current developments in this field before our eyes and to make us aware of what valuable ideas of our predecessors we are renouncing, more or less lightly, or are being forced to renounce – insofar as they still appear worth striving for at all in today's precarious situation...
Every two years, a Slovak university hosts the conference of the Slovak Association of Teachers of German and Germanists (SUNG), the publisher of the Slovak Journal of Germanistics. In the twenty years of its existence, this conference has become not only the association's most important forum but also the largest and most significant gathering of mediators of the German language, German-language culture, and the research devoted to them in Slovakia...
In the cognitive-linguistic reading, language – as part of the biological-cognitive ensemble of innate human faculties – is not regarded as an abstract, autonomous system independent of human beings. Language, together with meaning, is accordingly anchored in human cognition as a network of different aspects of knowledge. Those strands of cognitive linguistics that understand language as part of human cognition seek in particular to analyze the influence of human perceptual processes on language and linguistic structures...
The journal "Aussiger Beiträge", published annually since 2007, aims as an international periodical to provide new impulses for current scholarly debates and discussions. In an effort to address the broadest possible readership, it covers all areas of German studies and places alternating emphases on literature, linguistics, didactics, and cultural history in its individual issues. The current issue focuses on lexicology and lexicography and their recent developments and challenges. The volume contains a total of 13 scholarly contributions, 8 reviews, and 5 reports on relevant meetings and conferences in the Czech Republic, Austria, and Germany...
Heike Simon works as a lawyer and legal translator and also teaches at the University of Lille 2 in France, focusing on the foundations of German law and German legal language. Dr. Gisela Funk-Baker was likewise active in academia, working on languages for special purposes, including German legal language...
The author sets out to examine phraseologisms involving body parts, i.e. verbal phrasemes containing one or more lexemes from the domain of somatisms. The work presents a model for comparing the phraseologisms of two typologically different languages – German and Czech...
Consonant clusters pose a particular challenge in first-language acquisition. Producing them requires elaborating the most natural syllable structure, CV. For acquisition this means that children must learn to assign several consonants to the onset or coda of a syllable. The study of consonant clusters therefore occupies an important place in language acquisition research...
It is a fact of language that, from a semantic point of view, every word is polysemous, i.e. a word can have several meanings or connotations. Exceptions are the terms of technical language, which must be clearly defined and are therefore often monosemous. Polysemy is demonstrated by the possibilities of use that arise in context in various meaningful combinations. The word considered here, Macht ('power'), is a noun with neutral meaning denoting an abstract concept, with numerous synonyms such as Ansehen, Autorität, Einfluss, Geltung, Gewicht, Machtstellung, Stärke, Vermögen, Prestige, Machtposition, Befehlsgewalt, Führung, Gewalt, Herrschaftsgewalt, Regierungsgewalt, Regiment, Staatsgewalt, and Staatsmacht. Macht acquires a concrete meaning only in combination with other words, e.g. die Macht der Liebe, seine Macht festigen, in jmds. Macht stehen. The variability of lexeme combination is demonstrated in practice by dictionaries, which present the meaning specifications. In this contribution we compare the presentation of the dictionary entry Macht in monolingual dictionaries such as the DWDS dictionary and the Duden dictionary and make suggestions for extending them on the basis of a co-occurrence profile of the base Macht that we compiled, which is included in the German-Slovak collocation dictionary and was produced within the project VEGA 1/0947/11 'Contrastive research of collocations in Slovak and German'...
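The co-occurrence profiles mentioned in the abstracts above rest on statistical association measures computed from corpus counts. As a minimal sketch – with an invented toy corpus, not the actual DWDS or VEGA project data – the following computes pointwise mutual information (PMI), one of the standard measures, for adjacent word pairs:

```python
import math
from collections import Counter

# Toy corpus (invented for illustration; real co-occurrence profiles
# are computed from large reference corpora).
sentences = [
    "die macht der liebe",
    "seine macht festigen",
    "die macht festigen",
    "seine liebe zeigen",
    "der einfluss der liebe",
]
tokenized = [s.split() for s in sentences]
unigrams = Counter(w for s in tokenized for w in s)
bigrams = Counter(p for s in tokenized for p in zip(s, s[1:]))
n_uni = sum(unigrams.values())
n_bi = sum(bigrams.values())

def pmi(w1, w2):
    """Pointwise mutual information log2 P(w1,w2) / (P(w1)*P(w2)).
    Positive values mean the pair co-occurs more often than the
    frequencies of its parts alone would predict."""
    p_pair = bigrams[(w1, w2)] / n_bi
    p1 = unigrams[w1] / n_uni
    p2 = unigrams[w2] / n_uni
    return math.log2(p_pair / (p1 * p2))

score = pmi("macht", "festigen")
```

In this toy corpus the pair macht + festigen receives a clearly positive PMI score, i.e. it behaves like a collocation candidate; lexicographic projects of the kind described above additionally rank candidates with further measures and filter them manually.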
In an article written in connection with the compilation of the online dictionary Pattern Dictionary of English Verbs, Patrick Hanks and James Pustejovsky (2005) aptly noted that "words in isolation, [...], do not have specific meanings; rather they have a multifaceted potential to contribute to the meaning of an utterance". The semantic potential of a word manifests itself only within various contexts. Examining a word's contextual embedding, which has a disambiguating effect, thus leads to its identification as a lexical unit and thereby also to the identification of other units involved, in a broader sense, in that identification...
This contribution examines the collocability of the partially synonymous verbs bewilligen and genehmigen and contrasts it with that of the antonym verhindern. The focus is on investigating the commonalities and differences in the verbs' collocability and on determining the influence of collocability on their semantics. Studying the collocability of linguistic units is important because in every language the combinatorial properties of linguistic units are specific and unique...
The topic of this contribution is practically oriented and ties in with the VEGA project "Verbal Collocations in German and Slovak" led by Prof. Peter Ďurčo at the Institute of German Studies of the University of SS. Cyril and Methodius in Trnava. The project aims to analyze and describe verbal collocations. It thus presupposes that the collocability of linguistic means can be defined and measured...
On the Concept of Turnen (German Gymnastics)
(2014)
Sports culture has always been a mirror of society. It has changed with society and has always stood in relation to society's sociocultural and political conditions. The colorful world of sport and play has always attracted the interest of many people, both as spectators and as participants...
Language corpora today are sources of information that stand at linguists' disposal as an indispensable instrument for investigating and comparing languages; there is no way around corpora in efficient language research and data collection. Corpora can, however, also be used for the effective teaching of any language. The rapid spread of computer technology and fast access to information have strongly influenced the immense development of language processing and brought about the rapid growth of corpus linguistics. Against this background, the insufficient teaching of such knowledge and the lack of practical skills in handling corpus-linguistic tools among students of philological disciplines appear as major deficits. The publication under review, conceived as a textbook, has a good chance of filling this gap in the range of teaching materials...
The Colloquium on Lexicography and Dictionary Research was first organized in 2000 by Herbert Ernst Wiegand and Pavel Petkov. Since then, lexicographers have met every two years to discuss current topics in lexicography and dictionary research. The colloquium offers an international space for exchanging experience and presenting research results...
Biodiversity is unevenly distributed on Earth and hotspots of biodiversity are often associated with areas that have undergone orogenic activity during recent geological history (i.e. tens of millions of years). Understanding the underlying processes that have driven the accumulation of species in some areas and not in others may help guide prioritization in conservation and may facilitate forecasts on ecosystem services under future climate conditions. Consequently, the study of the origin and evolution of biodiversity in mountain systems has motivated growing scientific interest. Despite an increasing number of studies, the origin and evolution of diversity hotspots associated with the Qinghai-Tibetan Plateau (QTP) remains poorly understood. We review literature related to the diversification of organisms linked to the uplift of the QTP. To promote hypothesis-based research, we provide a geological and palaeoclimatic scenario for the region of the QTP and argue that further studies would benefit from providing a complete set of complementary analyses (molecular dating, biogeographic, and diversification rates analyses) to test for a link between organismic diversification and past geological and climatic changes in this region. In general, we found that the contribution of biological interchange between the QTP and other hotspots of biodiversity has not been sufficiently studied to date. Finally, we suggest that the biological consequences of the uplift of the QTP would be best understood using a meta-analysis approach, encompassing studies on a variety of organisms (plants and animals) from diverse habitats (forests, meadows, rivers), and thermal belts (montane, subalpine, alpine, nival). Since the species diversity in the QTP region is better documented for some organismic groups than for others, we suggest that baseline taxonomic work should be promoted.
Objective: To compare breech outcomes when mothers delivering vaginally are upright, on their back, or planning cesareans. Methods: A retrospective cohort study was undertaken of all women who presented for singleton breech delivery at a center in Frankfurt, Germany, between January 2004 and June 2011. Results: Of 750 women with term breech delivery, 315 (42.0%) planned and received a cesarean. Of 269 successful vaginal deliveries of neonates, 229 in the upright position were compared with 40 in the dorsal position. Upright deliveries were associated with significantly fewer delivery maneuvers (OR 0.45, 95% CI 0.31–0.68) and neonatal birth injuries (OR 0.08, 95% CI 0.01–0.58), second stages that were on average shorter (1 vs 1.75 hours), and nonsignificantly decreased serious perineal lacerations (OR 0.34, 95% CI 0.05–3.99). When upright position was used almost exclusively, the cesarean rate decreased. Serious fetal and neonatal morbidity potentially related to birth mode was low, and similar for upright vaginal deliveries compared with planned cesareans (OR 1.37, 95% CI 0.10–19.11). Three neonates died; all had lethal birth defects. Forceps were never required. Conclusion: Upright vaginal breech delivery was associated with reductions in duration of the second stage of labor, maneuvers required, maternal/neonatal injuries, and cesarean rate when compared with vaginal delivery in the dorsal position.
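The odds ratios and 95% confidence intervals reported in the obstetric abstract above follow the standard computation from a 2x2 contingency table. A minimal sketch – using invented counts, NOT the study's actual data – of the odds ratio with a Wald confidence interval on the log-odds scale:

```python
import math

# Hypothetical 2x2 table (invented numbers for illustration only):
# rows = delivery position, columns = neonatal injury yes / no.
a, b = 4, 225   # upright: injured / uninjured
c, d = 5, 35    # dorsal:  injured / uninjured

# Odds ratio: odds of injury in the upright group divided by
# odds of injury in the dorsal group.
odds_ratio = (a * d) / (b * c)

# 95% Wald confidence interval, computed on the log-odds scale
# and transformed back; an interval excluding 1 indicates a
# statistically significant association at the 5% level.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
```

An OR below 1 with an upper confidence limit still below 1, as in the injury result quoted above (OR 0.08, 95% CI 0.01–0.58), means the upright group had significantly lower odds of the outcome.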
In a contribution written in 1936 and published in the volume 'Ausdruckswelt' (1949), Gottfried Benn calls the last line of Rilke's 'Requiem' a "verse my generation will never forget". The poem, written in 1908 and printed a year later, closes with the words: "Wer spricht von Siegen -, Überstehn ist alles!" ("Who speaks of victories? To endure is everything!"). Through Benn the verse became a familiar quotation. It had, however, been preceded by two world wars with millions of dead and the devastation of large parts of Europe. Yet nothing of mourning or humility can be felt in Benn's avowal. Rather, it points to an idea of Nietzsche's, in whose writings survival is understood as the achievement of the strong-willed individual. "A well-turned-out person", Nietzsche writes in his autobiography 'Ecce homo', composed in 1888, "guesses remedies against injuries, he turns bad accidents to his advantage; what does not kill him makes him stronger". This statement is part of Nietzsche's idea of a 'will to power', which includes the will to live. Besides its physical and psychic side it has an intellectual one, which may be described as a happy taking-leave of the past. Marx had already formulated this thought with regard to Greek literary history in the introduction to his 'Critique of Hegel's Philosophy of Right' (1843/44). It is bound to concrete experience in Elias Canetti's study 'Masse und Macht' (Crowds and Power, 1960): "Everyone who has been in wars knows this feeling of exaltation above the dead. It may be obscured by grief for comrades; but these are few, the dead are always many. [...] He who often succeeds in such survival is a 'hero'. He is stronger. He has more life in him. The higher powers favor him." Ernst Jünger repeatedly made the fortunate survival of wars, in Nietzsche's sense, a theme of his diaries.
Whereas Benn foregrounded the nihilistic dimension, as his retrospective 'Nietzsche nach 50 Jahren' (1950) makes clear, Jünger took up the work's optimistic impulses. Above all in his later writings he transferred the idea of survival from the state of exception to everyday life and, at the same time, to the future. Like Nietzsche, Jünger wanted not only to survive happily in his own time but also planned an immaterial afterlife in the memory of posterity. The precondition of this twofold idea of survival is a hopeful look into the future.
The definitions that attempt to grasp the survival paradigm as a pattern of thought shaping a worldview are manifold, and the aspects each observer singles out as dominant vary. Nonetheless, the modern understanding of 'survival' is ultimately rooted in the evolutionist discourse of the second half of the nineteenth century. By evolutionist discourse is meant not merely Darwin's works but rather the constellation of authors, discourses, corrections, suggestions, and supplements that revolve around Darwin's theory of evolution and bear the name Darwinism. Put differently: our understanding of the concept of survival is caught up in this discourse and cannot entirely free itself from it. This becomes most evident in reflections on the survival of cultural artifacts, which are interpreted, in analogy to the specimens of existing species, as the result of a 'natural' selection. Among the countless examples of a transfer of the law of selection from biological to cultural evolution, Hans Blumenberg's reflection may be presented here, for it offers much more than one of the purely evolutionist models that explain cultural survival. No other postwar author, in my view, has undertaken such an ambitious attempt to rehabilitate Darwin's law of evolution beyond the fallacies of social Darwinism and to make it fruitful for cultural or aesthetic anthropology. Furthermore, with his theoretical correction Blumenberg ultimately achieved the description of a more humane model of cultural production, whose ethical dimension is explored in what follows.
Drawing on some considerations of Primo Levi's narrative experience then serves to distill the fundamental ethical problem that remaining within this – albeit corrected – survival paradigm implies with regard to historical memory.
On the Survival of the Wish as Death Drive: Deferred Action (Nachträglichkeit), Subject, and History in Freud
(2011)
It seems obvious, on the topic of 'survival', to turn to Freud's 1920 essay 'Beyond the Pleasure Principle', to the traumatic neurosis, and to Freud's dictum that the aim of life is death. Freud begins, after all, by postulating a beyond of the pleasure principle and introducing a death drive precisely on account of the suffering of those who survived the war (or a serious accident), since the traumatic compulsion to repeat seems so thoroughly to contradict the pleasure principle: what is at stake is the perpetuation of suffering, the repetition of something terrible. In what follows, however, I would like to take a detour and begin with a constellation from the beginnings of psychoanalysis of which it can likewise be said that it deals with survival, though in an entirely different context, that of the constitution of the psyche. At issue are the experience of satisfaction and the unconscious wish – two concepts central to Freud's thinking around 1900.
This essay engages, from some distance, with the 'dilemma of testimony' as formulated by Giorgio Agamben and Jean-François Lyotard: 'The victims of the murders are dead. Who speaks for them, who is to bear witness to the crime?' The aim of this contribution is to criticize certain kinds of discourse connected with this dilemma that I regard as unfortunate.
Since 2009, the central Nigerian Nok Culture – until then primarily known for its highly artistic terracotta figurines and early evidence of iron working in the first millennium BCE – has been the focus of a research project at Goethe University Frankfurt/Main, Germany. The analysis of Nok sculptures has so far been restricted almost entirely to their stylistic features, whose great similarities led to one hypothesis of the Frankfurt project: that these artfully crafted figurines may have been produced centrally.
This volume, written within the scope of a dissertation project completed in 2015, tests this hypothesis by means of scientific materials analysis. By combining the results of mineralogical and geochemical analyses with geographic and geological observations, an alternative model for the organisation and procedure of manufacturing the famous Nok terracottas is proposed.
Like the domestic pottery used for comparison and differentiation in this study, the terracottas were manufactured from locally available raw materials (clay and temper), but in different manufacturing sequences with regard to temper and clay composition. The terracottas' clay was evidently reserved for their production alone, demonstrating – beyond the stylistic similarities – the value these figurines held during the Nok Culture.
Invasive non-native species are key components of human-induced global environmental change and lead to a loss of biodiversity, alterations of species interactions and changes of ecosystem services. Freshwater ecosystems in particular are strongly affected by biological invasions, since they are spatially restricted environments and often already heavily impacted by anthropogenic activities. Recent human-induced species invasions are often characterized by long-distance dispersal, with many species having extended their native distribution range within a very short time frame. However, a long-term view into the past shows that biological invasions are common phenomena in nature—representing the arrival of a species into a location in which it did not originally evolve—as a result of climatic changes, geotectonic activity or other natural events. Once a species arrives in a new habitat, it may experience an array of novel selection pressures resulting from abiotic and biotic environmental factors and simultaneously act as a novel selective agent on the native fauna. Consequences of species invasions are manifold. My thesis, which combines seven studies on different aspects of biological invasions, aims to explore the influence of abiotic stressors and biotic interactions during species introductions and range expansions, as well as the consequences of biological invasions on evolutionary and ecosystem processes.
The first part of my thesis examines human-induced biological invasions, dealing with basic ecological characteristics of invaded ecosystems, novel predator-prey interactions, functional consequences of species invasions and certain behavioral traits that may contribute to the invasiveness of some species. The second part of my thesis examines distribution patterns and phenotypic trait divergence in species that historically invaded new geographical areas. I investigated variation of abiotic and biotic selection factors along a stream gradient as well as ecological and evolutionary consequences of species invasions into extreme habitats. The results highlight the importance of simultaneously considering processes involved in natural invasions and in human-induced invasions to understand the success of invading species.
We often lack detailed information on the impacts of historical biological invasions. We are also currently lacking crucial knowledge about the time scales on which different mechanisms (behavioral flexibility, plastic phenotypic changes, and genetic adaptation) play a role during biological invasions and affect species exchange and establishment. Comparative analyses of historical, natural invasions and recent (man-made) invasions can provide insights into the relative importance of the processes governing adaptation to abiotic stressors and selection resulting from biotic interactions. Beyond their negative effects, the establishment of invasive species and the subsequent range expansion represent "natural experiments" for investigating fundamental questions in ecology and evolution. My comparison of natural and human-induced biological invasions revealed that in many cases preadaptation to altered abiotic conditions plays a key role during the early stages of invasions and range expansions. Considering the evolutionary history of invasive species and that of the recipient native fauna might therefore help predict the consequences of biological invasions for the ecosystem under consideration and the future success of the invading species. This knowledge can also be applied when formulating conservation strategies, including methods to mitigate and manage human-induced biological invasions.
The organic-rich Livello Bonarelli formed as a result of oxygen deficiency and carbonate dissolution in the oceans during the Cenomanian/Turonian (C/T) transition. During this Oceanic Anoxic Event 2 (OAE2), a combination of factors caused increased productivity, incomplete decomposition of organic matter and widespread deposition of black shales. Although these sediments are extensively studied, the exact extent, cause, timing and duration of oceanic anoxia are debated (Sinton and Duncan, 1997; Mitchell et al., 2008). Contrasting causal mechanisms have been suggested, including stratification of the water column (Lanci et al., 2010) versus intensification of the hydrological cycle driving a dynamic ocean circulation (Trabucho-Alexandre et al., 2010). Studies of trace-elemental and (radiogenic) isotope compositions of Cenomanian marine successions have suggested a volcanic origin of OAE2, through the delivery of nutrients to the semi-enclosed proto-North Atlantic (Zheng et al., 2013, and references therein; Du Vivier et al., 2014). Deciphering the importance of volcanic and oceanographic processes requires tight constraints on their relative timing. Regularly occurring black cherts and shales below the Livello Bonarelli demonstrate that oceanic conditions in the Umbria-Marche Basin were punctuated by episodes of regional anoxia from the mid-Cenomanian onwards. Their hierarchical stacking pattern suggests an orbital control on the deposition of organic-rich horizons (Mitchell et al., 2008; Lanci et al., 2010). Stable carbon isotope data reveal that long-term variations in eccentricity paced the carbon cycle (Sprovieri et al., 2013) and sea level changes (Voigt et al., 2006) of the Late Cretaceous. Here we investigate the role of orbital forcing on climate and the carbon cycle and, specifically, on organic-rich sedimentation prior to, during, and after OAE2.
We also explore the potential for establishing an anchored astrochronology for the C/T interval in Europe. Recent improvements in the astronomical solution (La2011; Laskar et al., 2011b) and in the intercalibration of radiometric and astronomical dating techniques (Kuiper et al., 2008; Renne et al., 2013) allow the extension of the astronomical time scale into the Cretaceous. The C/T boundary in the Western Interior (USA) has been dated at 93.90 ± 0.15 Ma by intercalibration of radioisotopic and astrochronologic time scales (Meyers et al., 2012b). Also, reinterpretation of proxy records spanning the C/T interval seems to resolve discrepancies in reported durations of the OAE2 (Sageman et al., 2006; Meyers et al., 2012a). The well-documented Italian rhythmic successions, reference sections for climatic processes in the Tethyan realm, need to be tied in with the absolute time scale. Biostratigraphic correlation to radioisotopically dated ash beds in the Western Interior is complicated by the provinciality of faunas and floras. However, δ13C stratigraphy provides a reliable correlation tool (Gale et al., 2005), and we present a new 40Ar/39Ar age for the Thatcher bentonite from the Western Interior, which occurs within the mid-Cenomanian δ13C event (MCE). This study integrates the well-developed cyclostratigraphy from the Umbria-Marche Basin with radioisotopic ages from the Western Interior and derives a numerical timescale for this critical interval in Earth's history.
The oceans at the time of the Cenomanian–Turonian transition were abruptly perturbed by a period of bottom-water anoxia. This led to the brief but widespread deposition of black organic-rich shales, such as the Livello Bonarelli in the Umbria–Marche Basin (Italy). Despite intensive studies, the origin and exact timing of this event are still debated. In this study, we assess leading hypotheses about the inception of oceanic anoxia in the Late Cretaceous greenhouse world by providing a 6 Myr long astronomically tuned timescale across the Cenomanian–Turonian boundary. We gain insights into the relationship between orbital forcing and the Late Cretaceous carbon cycle by deciphering the imprint of astronomical cycles on lithologic, physical-property, and stable isotope records obtained from the Bottaccione, Contessa and Furlo sections in the Umbria–Marche Basin. The deposition of black shales and cherts, as well as the onset of oceanic anoxia, is related to maxima in the 405 kyr cycle of eccentricity-modulated precession. Correlation to radioisotopic ages from the Western Interior (USA) provides unprecedented age control for the studied Italian successions. The most likely tuned age for the base of the Livello Bonarelli is 94.17 ± 0.15 Ma (tuning 1); however, a 405 kyr older age cannot be excluded (tuning 2) due to uncertainties in stratigraphic correlation, radioisotopic dating, and orbital configuration. Our cyclostratigraphic framework suggests that the exact timing of major carbon cycle perturbations during the Cretaceous may be linked to increased variability in seasonality (i.e. a 405 kyr eccentricity maximum) after the prolonged avoidance of seasonal extremes (i.e. a 2.4 Myr eccentricity minimum). Volcanism is probably the ultimate driver of oceanic anoxia, but orbital periodicities determine the exact timing of carbon cycle perturbations in the Late Cretaceous.
This scenario unites two leading hypotheses about the inception of oceanic anoxia in the Late Cretaceous greenhouse world.
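The two candidate tunings quoted above differ by exactly one long eccentricity cycle; as a quick sketch of the arithmetic (values taken from the abstract):

```python
# Tuned age options for the base of the Livello Bonarelli (values from the text).
tuning1_ma = 94.17       # Ma, preferred tuned age (quoted uncertainty: ±0.15 Ma)
ecc_cycle_myr = 0.405    # long (405 kyr) eccentricity period, in Myr

tuning2_ma = tuning1_ma + ecc_cycle_myr  # the alternative, one cycle older
print(f"tuning 2 age: {tuning2_ma:.3f} Ma")  # → tuning 2 age: 94.575 Ma
```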
"Wringing the neck of the victors' eloquence" : Jewish humor as a strategy for survival
(2011)
It is no coincidence that Freud discovered, of all places, in Eastern European Jewish culture an arsenal of jokes through which he expounded joke techniques and their psychological foundations. The vehemence with which he marks the fact that his examples are drawn mainly from the culture of Eastern European Jews as at once incidental and in need of explanation, only to leave it largely unexplained, suggests precisely an unspoken relationship between the particularity of the joke and that "origin". But in 'what' relation do wit and humor stand to the Jewish culture of tradition? A question that rabbis and philosophers also posed, among them Marc Alain Ouaknin, to whose reflections on Jewish humor this contribution owes fundamental impulses. His observation of a methodological proximity between the textual strategies of Talmud and Midrash and certain techniques of the Jewish joke can easily be extended to the assumption of a coincidence between an understanding of the dynamics of Jewish tradition that surfaces in these jokes, both in their semiotic strategies and their themes, and notions of the connection between interpretation and the renewal of meaning as they are grounded in those sources of the Jewish tradition. The place of the jokes and humorous anecdotes discussed here is marked by at least two particular situations: they spring from the immediate experience of Jewish life in Eastern Europe, which even before its annihilation was continually confronted with changing situations of persecution and oppression, and from the continuum of its cultural and religious origin and tradition.
In the sense of a history of music as a "plural of contexts" (H. Blumenberg), whose intersecting threads require narrative bundling by listeners and chroniclers, Schoenberg's "A Survivor from Warsaw" proves to be a hinge piece in a twentieth-century music history riven by political upheavals. The inclusion of the credo symbolizing the dignity of cultural and religious self-assertion continues, in terms of compositional history, a national-religious tradition, if one thinks for instance of the key position that the quotation of Luther's chorale "Ein feste Burg" occupies in quite a few nineteenth-century works. On the other hand, the effect of the 'Survivor' also results, as Reinhold Brinkmann stressed in the context of the debate on the political import of musical works fought out so passionately around and after 1968, from the activation of the political text content by a decidedly musical conception: the musical intensification preceding the "Shema Yisrael" can already be traced as a compositional model in the apotheosis-like ending of Schoenberg's 'Gurrelieder' ("Erwacht, erwacht ihr Blumen zur Wonne"). On the dynamic level, this climax of measures 72-80, which literally sets out to overwhelm the listener, is determined by the accelerating tempo (from quarter note = 60 to quarter note = 160) and a crescendo to triple forte; rhythmically by the triplet figures that leap from the narrator's voice into the orchestra; and tonally by a chromatic interplay between the four forms of the augmented triad and the transposition of the row forms that this likewise necessitates. Schoenberg's ideal recourse to religion thus rests compositionally, to some extent, precisely on its secularized counterpart: those musical exaltations that the age of "Weltanschauungsmusik" celebrated in the form of an at times hypertrophic religion of art.
The memory of the "survivor" is thus concentrated on the "grandiose moment" of musical resistance, while Schoenberg decidedly rules out an equivalent integration of the narrator's text into the motivic-tonal fabric of the composition. Despite this historical positioning, the 'Survivor', viewed with suspicion by Adorno as an "autonomous shaping of heteronomy intensified into hell", is at the same time connected, not only in the history of ideas but also in compositional technique, with advanced examples of a 'musique engagée' of the 1960s, as is to be shown here with compositions by Schoenberg's (posthumous) son-in-law Luigi Nono.
The invention of cinema was aptly commented on in France with the words: "La vie est prise sur le vif." Through the reproduction of visible reality in moving images, the impression of the living is not literally fixed, as in paintings and even in photographs; rather, the people and things recorded appear, at the moment they are projected onto the screen as moving and equally fleeting images, as if awakened to new life. The present takes the place of what has been. If, from this perspective, painting and photography can be described as media of eternalization, film is a medium of constant actualization. For that very reason, however, it is also capable of presenting killing in a way that comes exceedingly close to the viewer. Insofar as it combines photographic realism and living movement, theatrical staging and literary narration, it can terrify an audience more impressively than any other medium. Precisely these qualities, praised (or censured) as especially sensuous, make it questionable whether film is suited to granting insight into the deepest abyss of history. When fundamental objections are raised against any possible representation of the National Socialist concentration and extermination camps, they are directed at film in particular. No one would still claim that literature, say, cannot or must not do this; Art Spiegelman's 'Maus' has meanwhile shown the learned that even a graphic novel, that is, a comic, can come into consideration as an entirely appropriate form of representation. Film, especially fictional film, has not yet achieved comparably broad acceptance.
The LPJ-GUESS dynamic vegetation model uniquely combines an individual- and patch-based representation of vegetation dynamics with ecosystem biogeochemical cycling from regional to global scales. We present an updated version that includes plant and soil N dynamics, analysing the implications of accounting for C-N interactions on predictions and performance of the model. Stand structural dynamics and allometric scaling of tree growth suggested by global databases of forest stand structure and development were well-reproduced by the model in comparison to an earlier multi-model study. Accounting for N cycle dynamics improved the goodness-of-fit for broadleaved forests. N limitation associated with low N mineralisation rates reduces productivity of cold-climate and dry-climate ecosystems relative to mesic temperate and tropical ecosystems. In a model experiment emulating free-air CO2 enrichment (FACE) treatment for forests globally, N-limitation associated with low N mineralisation rates of colder soils reduces CO2-enhancement of NPP for boreal forests, while some temperate and tropical forests exhibit increased NPP enhancement. Under a business-as-usual future climate and emissions scenario, ecosystem C storage globally was projected to increase by c. 10 %; additional N requirements to match this increasing ecosystem C were within the high N supply limit estimated on stoichiometric grounds in an earlier study. Our results highlight the importance of accounting for C-N interactions not only in studies of global terrestrial C cycling, but to understand underlying mechanisms on local scales and in different regional contexts.
Survival? : after Auschwitz
(2011)
Thinking about 'survival', too, is in fact a matter of experience and of the concept of experience, a qualitative concept of experience which in the social sciences is repeatedly in danger of being watered down, or diffused, by the dominance of a purely quantifiable empiricism. Yet at the core of social research, precisely at its critical core, stands the category of 'experience'. Experience is also something that is mediated, and can be mediated, through persons. Experience can transcend the narrow limits that characterize an enlightenment concerned solely with texts. Experience can overcome these limits. It can also overcome the narrow view that only what has been individually experienced can be communicated and understood, a narrowness that can be transcended in communication between different subjects. Horkheimer and Adorno, it is true, were always horrified when they heard the word 'communication', but here, too, what is at stake is in fact a transgression: a transgression of the limits of the individual.
In this thesis, the production of charged kaons and Φ mesons in Au+Au collisions at √s_NN = 2.4 GeV is studied. At this energy, all particles carrying open and hidden strangeness are produced below their respective free nucleon-nucleon threshold, with the corresponding so-called excess energies √s_exc(K+) = -0.15 GeV, √s_exc(K-) = -0.46 GeV and √s_exc(Φ) = -0.49 GeV. As a consequence, the production cross sections are very sensitive to medium effects such as momentum distributions, two- or multi-step collisions, and modification of the in-medium spectral distribution of the produced states [1]. K+ and K- mesons exhibit different properties in baryon-dominated matter, since only K- can be resonantly absorbed by nucleons. Although strangeness exchange reactions have been proposed to be the dominant channel for K- production in the analyzed energy regime, the production yield and kinematic distributions in smaller systems could also be explained by statistical hadronization model fits to the measured particle yields that include a canonical strangeness suppression radius RC and take the Φ feed-down to kaons into account [2, 3]. For the first time in central Au+Au collisions at such low energies, it is possible to reconstruct K- and Φ mesons and to perform a multi-differential analysis. In principle, this should be the ideal environment for strangeness exchange reactions to occur, as the particles are produced deeply sub-threshold in a large and long-lived system. It is therefore the ultimate test to differentiate between the different sources of K- production in heavy-ion collisions (HIC).
In total, 7.3 × 10^9 of the 40% most central Au(1.23 GeV per nucleon)+Au collisions are analyzed. The data were recorded with the High Acceptance DiElectron Spectrometer (HADES), located at the Helmholtzzentrum für Schwerionenforschung GSI, in April/May 2012. A substantially improved reconstruction method has been employed to reconstruct the hadrons with high purity in a wide phase-space region.
The estimated particle multiplicities follow a clear hierarchy in excess energy: 41.5 ± 2.1|sys protons per unit of rapidity at mid-rapidity, 11.1 ± 0.6|sys ± 0.4|extrapol π-, (3.01 ± 0.03|stat ± 0.15|sys ± 0.30|extrapol) × 10^-2 K+, (1.94 ± 0.09|stat ± 0.10|sys ± 0.10|extrapol) × 10^-4 K- and (0.99 ± 0.24|stat ± 0.10|sys ± 0.05|extrapol) × 10^-4 Φ per event. The multiplicities of the strange hadrons increase faster than linearly with the mean number of participating nucleons ⟨A_part⟩, supporting the assumption that the energy necessary to overcome the elementary production threshold is accumulated in multi-particle interactions. Transport models predict such an increase, but overestimate the measured particle yields and are unable to describe the kinematic distributions of the K+ mesons perfectly. The best description is given by the IQMD model with a density-dependent kaon-nucleon potential of 40 MeV at nuclear ground-state density.
The K-/K+ multiplicity ratio is constant as a function of centrality and, with a value of (6.45 ± 0.77) × 10^-3, follows the trend of an increase with beam energy indicated by previous experiments [4]. The effective temperature of the K-, T_eff(K-) = (84 ± 6) MeV, is found to be systematically lower than that of the K+, T_eff(K+) = (104 ± 1) MeV, as has also been observed by other experiments.
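As a consistency check, the quoted ratio follows directly from the per-event multiplicities given above (a minimal sketch; central values only, uncertainties ignored):

```python
# Per-event multiplicities from the measured yields quoted above.
n_kplus  = 3.01e-2   # K+ per event
n_kminus = 1.94e-4   # K- per event

ratio = n_kminus / n_kplus
print(f"K-/K+ = {ratio:.2e}")  # → K-/K+ = 6.45e-03, matching (6.45 ± 0.77) x 10^-3
```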
The Φ/K- ratio, with a value of 0.52 ± 0.16, is higher than those obtained at higher center-of-mass energies and in smaller systems. This behavior is predicted by a tuned version of the UrQMD transport model [5] that includes higher-mass baryonic resonances which can decay into Φ mesons, and by statistical hadronization models with canonical suppression of open strangeness. The ratio is constant as a function of centrality and implies, with a branching ratio of 48.9%, that ~25% of all measured K- originate from Φ feed-down decays. A two-component PLUTO simulation, consisting of a purely thermal contribution and a K- contribution originating from Φ decays, can fully explain the lower effective temperature observed in comparison to the K+ and the shape of the measured K- rapidity distribution. As a result, we find no indication that strangeness exchange reactions are the dominant mechanism for K- production in the SIS18 energy regime once the contribution from Φ feed-down decays is taken into account.
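The ~25% feed-down fraction quoted above is simply the product of the Φ/K- yield ratio and the Φ → K+K- branching ratio; a minimal check:

```python
phi_over_kminus = 0.52   # measured Φ/K- yield ratio (quoted above)
br_phi_to_kk    = 0.489  # branching ratio Φ -> K+ K- (quoted above)

feeddown_fraction = phi_over_kminus * br_phi_to_kk
print(f"{feeddown_fraction:.1%} of measured K- stem from Φ decays")  # → 25.4%
```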
The hadron yields for the 20% most central collisions can be described by a statistical hadronization model fit with a chemical freeze-out temperature of T_chem = (68 ± 2) MeV and a baryochemical potential of μ_B = (883 ± 25) MeV, which is higher than expected from previous parameterizations. The analysis of the transverse-mass spectra of protons indicates a kinetic freeze-out temperature of T_kin = (70 ± 4) MeV and a radial flow velocity of β_r = 0.43 ± 0.01, in agreement with the parameters obtained from the linear dependence of the effective temperatures on the particle mass, T_kin = (71.5 ± 4.2) MeV and β_r = 0.28 ± 0.09.
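The linear mass dependence of the effective temperature mentioned above can be sketched as follows; the specific form T_eff = T_kin + m·β_r² is an assumption here (a common radial-flow parameterisation), not necessarily the exact one used in the analysis:

```python
# Hypothetical evaluation of a linear T_eff(m) relation with the fit
# parameters quoted above; the functional form is an assumption.
t_kin  = 71.5   # MeV, kinetic freeze-out temperature from the linear fit
beta_r = 0.28   # radial flow velocity (in units of c) from the linear fit

for name, m in {"pi": 139.6, "K": 493.7, "p": 938.3}.items():  # masses in MeV
    t_eff = t_kin + m * beta_r**2
    print(f"T_eff({name}) = {t_eff:.0f} MeV")
```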
Strong seasonal variability of hygric and thermal soil conditions is a defining environmental feature in northern Australia. However, how such changes affect the soil–atmosphere exchange of nitrous oxide (N2O), nitric oxide (NO) and dinitrogen (N2) is still not well explored. By incubating intact soil cores from four sites (three savanna, one pasture) under controlled soil temperatures (ST) and soil moisture (SM) we investigated the release of the trace gas fluxes of N2O, NO and carbon dioxide (CO2). Furthermore, the release of N2 due to denitrification was measured using the helium gas flow soil core technique. Under dry pre-incubation conditions NO and N2O emissions were very low (<7.0 ± 5.0 μg NO-N m−2 h−1; <0.0 ± 1.4 μg N2O-N m−2 h−1) or in the case of N2O, even a net soil uptake was observed. Substantial NO (max: 306.5 μg N m−2 h−1) and relatively small N2O pulse emissions (max: 5.8 ± 5.0 μg N m−2 h−1) were recorded following soil wetting, but these pulses were short-lived, lasting only up to 3 days. The total atmospheric loss of nitrogen was generally dominated by N2 emissions (82.4–99.3% of total N lost), although NO emissions contributed almost 43.2% to the total atmospheric nitrogen loss at 50% SM and 30 °C ST incubation settings (the contribution of N2 at these soil conditions was only 53.2%). N2O emissions were systematically higher for 3 of 12 sample locations, which indicates substantial spatial variability at site level, but on average soils acted as weak N2O sources or even sinks. By using a conservative upscale approach we estimate total annual emissions from savanna soils to average 0.12 kg N ha−1 yr−1 (N2O), 0.68 kg N ha−1 yr−1 (NO) and 6.65 kg N ha−1 yr−1 (N2). The analysis of long-term SM and ST records makes it clear that extreme soil saturation that can lead to high N2O and N2 emissions only occurs a few days per year and thus has little impact on the annual total.
The potential contribution of nitrogen released due to pulse events compared to the total annual emissions was found to be of importance for NO emissions (contribution to total: 5–22%), but not for N2O emissions. Our results indicate that the total gaseous release of nitrogen from these soils is low and clearly dominated by loss in the form of inert nitrogen. Effects of seasonally varying soil temperature and moisture were detected, but were found to be low due to the small amounts of available nitrogen in the soils (total nitrogen <0.1%).
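The upscaling from chamber-scale fluxes (μg N m−2 h−1) to annual areal totals (kg N ha−1 yr−1) rests on a plain unit conversion; a minimal sketch of that factor (not the authors' actual upscaling procedure, which accounts for seasonally varying soil conditions):

```python
def flux_ug_m2_h_to_kg_ha_yr(flux_ug_m2_h):
    """Convert a trace-gas flux from ug N m^-2 h^-1 to kg N ha^-1 yr^-1."""
    # 1 ug = 1e-9 kg, 1 ha = 1e4 m^2, 1 yr = 8760 h
    return flux_ug_m2_h * 1e-9 * 1e4 * 8760

# A hypothetical year-round flux of ~1.4 ug N m^-2 h^-1 would already match
# the ~0.12 kg N ha^-1 yr^-1 annual N2O estimate quoted above.
print(flux_ug_m2_h_to_kg_ha_yr(1.4))  # ≈ 0.123
```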
Human bodies and spaces are mutually related and subject to very similar societal conditions of constitution. Nevertheless, the body, with its significance for the construction and appropriation of spaces, has so far hardly been addressed in geography. From a feminist-poststructuralist perspective, this contribution is devoted to the dynamic interrelation of bodies and spaces. Particular emphasis is placed on the significance of perception by others and self-perception for manifold strategies of spatial appropriation. At the same time, this puts up for discussion a theoretical approach that opens up new levels of observation and analysis.
Drawing on the example of a research project on the extension of the margins of the global agricultural market through the workings of agribusiness in Ghana, this paper explores what contribution ethnographic approaches can make to the study of quotidian market constructions in organizational settings. It demonstrates how ethnographies of marketization can be grasped conceptually, epistemologically and methodologically, as well as what practical and methodological challenges such a practice-oriented approach towards the everyday organization of markets might encounter. By doing so, the paper offers a methodological contribution to the interdisciplinary field of marketization studies. Moreover, this paper urges economic geographers to further harness the epistemological potential of ethnographic approaches.
Space and spatial relations are constructs. Ways of seeing, thinking and interpreting, against the background of current and historically evolved societal conditions, shape the production of realities that are, not least, linguistically constructed. This contribution sketches the main features of poststructuralism and postmodernism and establishes the significance of the present issue for (human) geography. Furthermore, it introduces the individual contributions to the issue.
In the current globalization debate, metropolitan regions are understood as decision-making, control and coordination centers of international importance. They "bundle" corresponding hubs, whose location, functional importance and regional reach determine the role and the development path of the metropolitan region. Frankfurt/Rhine-Main has grown into this role only over the last two decades. Hub functions exist today in three fields: the innovation center, the financial center and the market(-information) place. The development path of metropolitan regions shows that they first had to attain national importance before they could gain international importance. The hub functions of the Frankfurt/Rhine-Main metropolitan region in the three fields mentioned are, however, "unsecured". It must therefore remain open whether the rise of Frankfurt/Rhine-Main to a European metropolitan region will have effects on the wider system of metropolitan regions in Europe.
The Standard Model is one of the greatest successes of modern theoretical physics. It describes the physics of elementary particles by means of three forces: the electromagnetic, the weak and the strong interactions. The electromagnetic and the weak interaction are rather well understood in comparison to the strong interaction.
The latter is as fundamental as the others: it is responsible for the formation of all hadrons, which are classified into mesons and baryons. A well-known example of the former is the pion; of the latter, the proton and the neutron, which form the nucleus of every atom. This fundamental force is believed to be described by the theory of Quantum Chromodynamics (QCD). According to this theory, hadrons are not elementary particles but are composed of quarks and gluons. The latter are the vector particles of the force and thus bosons of spin 1; the former constitute matter and are fermions of spin 1/2. To describe the interaction, a new quantum number had to be introduced: the color charge, which exists in three different types (blue, green and red). The name has not been chosen arbitrarily, as states built from three quarks of different colors are colorless, in the same way that mixing the three primary colors yields white. Experimentally, however, no colored structure has ever been observed. The quarks and the gluons seem to be confined in colorless hadrons. This property of QCD is called confinement and results from a large coupling constant at low energy (or large distance). At high energy (or small distance), the perturbative analysis of QCD shows that the coupling constant is small and that quarks and gluons are almost free. This property is called asymptotic freedom. The possibility for QCD to describe both behaviors is one of its remarkable characteristics. However, both phenomena are not fully understood, and one needs a method to study both the perturbative and the confining regime.
The only known method which fulfills the above criteria is Lattice QCD and, more generally, Lattice Quantum Field Theory (LQFT). It consists of a discretization of spacetime and a formulation of QCD on a four-dimensional Euclidean spacetime grid of spacing a. In this way, the theory is naturally regularized and mathematically well defined. On the other hand, the path integral formalism allows the theory to be treated as a statistical mechanics system, which can be evaluated via a Markov chain Monte Carlo algorithm. This method was first suggested by Wilson in 1974 [1], and shortly after, Creutz performed the first numerical simulations of Yang-Mills theory [2] using a heat-bath Monte Carlo algorithm. The method is, however, extremely demanding in computational power. In its early days it was criticized because the only feasible simulations involved non-physical settings such as extremely large quark masses, large lattice spacings a and no dynamical quarks. With the progress of computers and the advent of supercomputers, studies have come close to the physical point. But one still needs to deal with discrete spacetime and finite volume. Several techniques have been developed to estimate the infinite-volume and continuum limits; the smaller the lattice spacing and the larger the volume, the better these extrapolations are. The simulations are still very expensive, and at present a typical box length is L ≈ 4 fm with a ≈ 0.08 fm. However, it has been realized in simulations of pure Yang-Mills theory and of lower-dimensional models that the topology freezes at small a [3]. This was also observed recently in full QCD simulations [4,5].
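The Markov chain Monte Carlo approach sketched above can be illustrated with a toy model; the following is a pedagogical Metropolis simulation of a free scalar field on a 1-D periodic lattice (an illustration only — lattice QCD updates SU(3) link variables on a 4-D grid, not site scalars):

```python
import math
import random

def metropolis_sweep(phi, mass2=1.0, step=0.5):
    """One Metropolis sweep over a 1-D periodic lattice scalar field."""
    n = len(phi)
    for i in range(n):
        old = phi[i]
        new = old + random.uniform(-step, step)  # propose a local update

        def local_action(x):
            # Part of the Euclidean action S = sum_i [ (phi_{i+1}-phi_i)^2/2
            # + m^2 phi_i^2 / 2 ] that involves site i.
            return (0.5 * (x - phi[(i - 1) % n]) ** 2
                    + 0.5 * (phi[(i + 1) % n] - x) ** 2
                    + 0.5 * mass2 * x * x)

        d_s = local_action(new) - local_action(old)
        # Metropolis accept/reject with probability min(1, exp(-dS)).
        if d_s < 0 or random.random() < math.exp(-d_s):
            phi[i] = new
    return phi

random.seed(1)
phi = [0.0] * 32
for _ in range(200):          # thermalise the Markov chain
    metropolis_sweep(phi)
print(sum(x * x for x in phi) / len(phi))  # crude estimator of <phi^2>
```

The same accept/reject logic, applied to gauge links with the Wilson action, is the conceptual core of the heat-bath and Metropolis simulations mentioned in the text.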
The typical lattice spacing at which this problem appears in QCD is a ≈ 0.05 fm, but this value depends on the quark mass used and on the algorithm. The freezing of topology leads to results which differ from physical ones, so solving this issue is important for the future of LQCD [6]. Recently, several methods to overcome the problem have been suggested; one of the most popular is the use of open boundary conditions [7], but this promising method still has issues of its own, chiefly the breaking of translation invariance.
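The Markov chain Monte-Carlo evaluation mentioned above can be illustrated on a toy model. The following is a minimal Metropolis sketch for a 2D Ising lattice, chosen purely for brevity; it is not the heat-bath gauge-field algorithm of [2], and all parameters are illustrative.

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep over an L x L Ising lattice with periodic boundaries."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest neighbours (periodic boundary conditions).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nn  # energy cost of flipping spin (i, j)
        # Accept the flip if it lowers the energy, else with Boltzmann probability.
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

def magnetization(spins):
    """Absolute magnetization per site, between 0 and 1."""
    return abs(sum(sum(row) for row in spins)) / (len(spins) ** 2)

rng = random.Random(0)
L, beta = 8, 0.6          # beta above the 2D critical coupling (~0.44): ordered phase
spins = [[1] * L for _ in range(L)]
for _ in range(200):      # thermalization sweeps
    metropolis_sweep(spins, L, beta, rng)
m = magnetization(spins)
```

In real Lattice QCD the degrees of freedom are SU(3) link variables rather than spins, but the accept/reject structure of the Markov chain is the same.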
After the reunification of the two German states in 1990, land that had largely been nationalized in the new federal states was returned to private ownership. Since restitution for National Socialist injustice had largely failed to occur in the GDR, the restitution rules were extended to cover property expropriations dating back to 1933. The little-researched "Aryanization" of land ownership under National Socialism thereby regained topical relevance. As part of the National Socialist persecution of the Jews, "Aryanization" meant the complete displacement of Jews from the economy, and thus also from the real-estate sector. The result of "Aryanization" was "one of the largest redistributions of property in modern times". The aim of this article is to provide an overview of the legal regulations governing the "Aryanization" of land ownership and of how it proceeded, and to present this process in a case study of a specific quarter in the eastern part of Berlin.
Background: Despite a recent statutory ruling stating the binding nature of advance directives (ADs), only a minority of the population has signed one. Yet, a majority deem it of utmost importance to ensure their wishes are followed through in case they are no longer able to decide. The reasons for this discrepancy have not yet been investigated sufficiently.
Patients and methods: This article is based on a survey of patients using a well-established structured questionnaire. First, patients were asked about their attitudes toward six therapeutic options at the end of life: intravenous fluids, artificial feeding, antibiotics, analgesia, chemotherapy/dialysis, and artificial ventilation. Second, they were asked about apprehensions associated with ADs: coercion to fulfill an AD, a dictatorial reading of what had been laid down, and abuse of ADs.
Results: A total of 1,260 interviewees completed the questionnaires. A significant percentage of interviewees were indecisive with respect to therapeutic options, ranging from 25% (analgesia) to 45% (artificial feeding). There was no connection to health status. Apprehensions about unwanted effects of ADs were widespread, at 51%, 35%, and 43% for coercion, dictatorial reading, and abuse, respectively.
Conclusion: A significant percentage of interviewees were unable to anticipate decisions about treatment options at the end of life. Apprehensions about negative adverse effects of ADs are widespread.
This essay proposes combining and extending analyses of (neoliberal) governing with non-subject-centered and affect-theoretical approaches. Through an analysis of how social policy and social work deal with homeless people, it traces what can be gained from combining governmental and affect-theoretical perspectives. From a governmental perspective, it first reconstructs how affects and emotions become the object of caring intervention in assisted-living facilities for the homeless. Assisted living employs micro-techniques that work toward a "balanced" emotional attachment to living spaces and their inventory. Assisted living is permeated by problematizations that interpret homelessness as an emotional attitude of restlessness and unrest, as a lack of attachment to places and things. At the same time, residents are often also accused of an excessive affective attachment to things, which supposedly prevents so-called "hoarders" and "messies" from running a socially inconspicuous household. A governmental analysis can make visible the therapeutic rationality underlying these problematizations. A governmental analysis alone, however, offers no way to develop alternative narratives about the significance of affective relations for dwelling. Drawing on different affect-theoretical approaches, the essay therefore also asks how dwelling and the significance of attachments to places and things can be thought beyond therapeutic perspectives. Non-subject-centered concepts of affectivity enable such alternative narratives and open up new lines of flight for critique: they make dwelling visible as always already "assisted", embedded in a web of intersubjective and interobjective relations.
Background: Although the risk of developing colorectal cancer (CRC) is 2-4 times higher for persons with a positive family history, risk-adapted screening programs for family members of CRC patients do not exist in the German health care system. CRC screening recommendations for persons under 55 years of age with a family predisposition have been published in several guidelines.
The primary aim of this study is to determine the frequency of positive family history of CRC (1st degree relatives with CRC) among 40–54 year old persons in a general practitioner (GP) setting in Germany. Secondary aims are to detect the frequency of occurrence of colorectal neoplasms (CRC and advanced adenomas) in 1st degree relatives of CRC patients and to identify the variables (e.g. demographic, genetic, epigenetic and proteomic characteristics) that are associated with it. This study also explores whether evidence-based information contributes to informed decisions and how screening participation correlates with anxiety and (anticipated) regret.
Methods/Design: Prior to the beginning of the study, the GP team (GP and one health care assistant) in around 50 practices will be trained, and about 8,750 persons that are registered with them will be asked to complete the “Network against colorectal cancer” questionnaire. The 10 % who are expected to have a positive family history will then be invited to give their informed consent to participate in the study. All individuals with positive family history will be provided with evidence-based information and prevention strategies. We plan to examine each participant’s family history of CRC in detail and to collect information on further variables (e.g. demographics) associated with increased risk. Additional stool and blood samples will be collected from study-participants who decide to undergo a colonoscopy (n ~ 350) and then analyzed at the German Cancer Research Center (DKFZ) Heidelberg to see whether further relevant variables are associated with an increased risk of CRC. One screening list and four questionnaires will be used to collect the data, and a detailed statistical analysis plan will be provided before the database is closed (expected to be June 30, 2015).
Discussion: It is anticipated that when persons with a family history of colorectal cancer have been provided with professional advice by the practice team, there will be an increase in the availability of valid information on the frequency of affected individuals and an increase in the number of persons making informed decisions. We also expect to identify further variables that are associated with colorectal cancer. This study therefore has translational relevance from lab to practice.
Trial registration: German Clinical Trials Register DRKS00006277
Under neoliberalism, the scope for political action has narrowed for the creative subject condemned to self-management and self-valorization, and in the city run as a business, political processes are increasingly subordinated to market logics and "constraints". Using the example of the disputes over the planning of a KulturCampus in Frankfurt am Main, and drawing on recent theories of the political, this article examines current forms of dissensus against hegemonic forms of entrepreneurial politics and explores new possibilities for political subjectivities in the creative city, as currently being tested in the context of the right-to-the-city movement and in performance studies, among others. It asks to what extent these new forms of resistance are able to make the market-driven, post-democratic rules of politics themselves a topic of debate, to make new subject positions articulable, and to make the city politically negotiable again.
At the 8th meeting of the early-career network "Stadt, Raum, Architektur", researchers from the social sciences, humanities, and spatial sciences discussed the topic of "diversity and plurality" at the institutes of Human Geography and of Cultural Anthropology and European Ethnology of Goethe University Frankfurt am Main on 9 and 10 November 2012. Against the background of current debates on the conceptualization of, and practical engagement with, socio-cultural diversity, a productive exchange took place between the perspectives of urban planning, architectural studies, and social- and cultural-science urban and spatial research. This conference report discusses the results of this interdisciplinary engagement with the global discursive shift from "multiculturalism" to "diversity" and the adoption of corresponding strategies in politics, business, and society, drawing on theoretical approaches to "super-diversity", cosmopolitanism, and transnationalism. Empirically, it addresses in particular questions of place marketing, integration policies, and the spatialization of diversity, as well as concrete practices of segregation, marginalization, and the negotiation of difference. Finally, it discusses the conflicts and potentials of a "new diversity" from urban-planning, decolonial, and post-structuralist perspectives.
In his lectures on governmentality, Foucault sketches the way in which the modern state governs "at a distance". The article presents this approach, grounds it in materialist terms, and on this basis discusses the concepts of risk and securitization. The usefulness of this approach is illustrated by the EU's current border and migration policy, and the spaces produced in this context are outlined.
Phylogenetic relationships of the primarily wingless insects are still considered unresolved. Even the most comprehensive phylogenomic studies that addressed this question did not yield congruent results. To identify the sources of incongruence in these phylogenomic studies, we here analyzed an extended transcriptome dataset. Our analyses showed that unevenly distributed missing data can be severely misleading by inflating node support despite the absence of phylogenetic signal. In consequence, only decisive datasets should be used which exclusively comprise data blocks containing all taxa whose relationships are addressed. Additionally, we employed Four-cluster Likelihood-Mapping (FcLM) to measure the degree of congruence among genes of a dataset, as a measure of support alternative to bootstrap. FcLM showed incongruent signal among genes, which in our case is correlated neither with the functional class assignment of these genes nor with model misspecification due to unpartitioned analyses. The dataset analyzed here is currently the largest covering the primarily wingless insects, yet it failed to resolve their interordinal phylogenetic relationships. While this is unsatisfying from a phylogenetic perspective, we show that analyses of structure and signal within phylogenomic data can protect us from biased phylogenetic inferences due to analytical artefacts.
Recently, the conserved intracellular digestion mechanism ‘autophagy’ has been considered to be involved in early tumorigenesis and its blockade proposed as an alternative treatment approach. However, there is an ongoing debate about whether blocking autophagy has positive or negative effects in tumor cells. Since there is only poor data about the clinico-pathological relevance of autophagy in gliomas in vivo, we first established a cell culture based platform for the in vivo detection of the autophago-lysosomal components. We then investigated key autophagosomal (LC3B, p62, BAG3, Beclin1) and lysosomal (CTSB, LAMP2) molecules in 350 gliomas using immunohistochemistry, immunofluorescence, immunoblotting and qPCR. Autophagy was induced pharmacologically or by altering oxygen and nutrient levels. Our results show that autophagy is enhanced in astrocytomas as compared to normal CNS tissue, but largely independent from the WHO grade and patient survival. A strong upregulation of LC3B, p62, LAMP2 and CTSB was detected in perinecrotic areas in glioblastomas suggesting micro-environmental changes as a driver of autophagy induction in gliomas. Furthermore, glucose restriction induced autophagy in a concentration-dependent manner while hypoxia or amino acid starvation had considerably lesser effects. Apoptosis and autophagy were separately induced in glioma cells both in vitro and in vivo. In conclusion, our findings indicate that autophagy in gliomas is rather driven by micro-environmental changes than by primary glioma-intrinsic features thus challenging the concept of exploitation of the autophago-lysosomal network (ALN) as a treatment approach in gliomas.
Questions about how human–environment relations can be conceptualized in a non-dualistic way have been intensively discussed throughout the last decades. The majority of the established realist and constructivist perspectives aim at explaining a given situation by analytically dissecting it. Unfortunately, such an interactionist perspective systematically reproduces the dualistic division between humans, environment and nature.
In contrast, this paper offers a transactive perspective originating in classical pragmatism and discusses its meta-theoretical consequences for human–environment research. A transactionist perspective interprets the world as a flow of unique and entangled events. Instead of ontologically separating humans and environment, it advocates looking at their relations as being part of a "connatural world". Such a point of view raises new ethical and political questions for geographical human–environment research, argues for a renaissance of idiographic methodologies, and points to a fruitful unity of geographical inquiry.
Peptidyl arginine deiminase 4 (PAD4) is a nuclear enzyme that converts arginine residues to citrulline. Although increasingly implicated in inflammatory disease and cancer, the mechanism of action of PAD4 and its functionally relevant pathways remains unclear. E2F transcription factors are a family of master regulators that coordinate gene expression during cellular proliferation and diverse cell fates. We show that E2F-1 is citrullinated by PAD4 in inflammatory cells. Citrullination of E2F-1 assists its chromatin association, specifically to cytokine genes in granulocyte cells. Mechanistically, citrullination augments binding of the BET (bromodomain and extra-terminal domain) family bromodomain reader BRD4 (bromodomain-containing protein 4) to an acetylated domain in E2F-1, and PAD4 and BRD4 coexist with E2F-1 on cytokine gene promoters. Accordingly, the combined inhibition of PAD4 and BRD4 disrupts the chromatin-bound complex and suppresses cytokine gene expression. In the murine collagen-induced arthritis model, chromatin-bound E2F-1 in inflammatory cells and consequent cytokine expression are diminished upon small-molecule inhibition of PAD4 and BRD4, and the combined treatment is clinically efficacious in preventing disease progression. Our results shed light on a new transcription-based mechanism that mediates the inflammatory effect of PAD4 and establish the interplay between citrullination and acetylation in the control of E2F-1 as a regulatory interface for driving inflammatory gene expression.
Fire is the primary disturbance factor in many terrestrial ecosystems. Wildfire alters vegetation structure and composition, affects carbon storage and biogeochemical cycling, and results in the release of climatically relevant trace gases including CO2, CO, CH4, NOx, and aerosols. One way of assessing the impacts of global wildfire on centennial to multi-millennial timescales is to use process-based fire models linked to dynamic global vegetation models (DGVMs). Here we present an update to the LPJ-DGVM and a new fire module based on SPITFIRE that includes several improvements to the way in which fire occurrence, behaviour, and the effects of fire on vegetation are simulated. The new LPJ-LMfire model includes explicit calculation of natural ignitions, the representation of multi-day burning and coalescence of fires, and the calculation of rates of spread in different vegetation types. We describe a new representation of anthropogenic biomass burning under preindustrial conditions that distinguishes the different relationships between humans and fire among hunter-gatherers, pastoralists, and farmers. We evaluate our model simulations against remote-sensing-based estimates of burned area at regional and global scale. While wildfire in much of the modern world is largely influenced by anthropogenic suppression and ignitions, in those parts of the world where natural fire is still the dominant process (e.g. in remote areas of the boreal forest and subarctic), our results demonstrate a significant improvement in simulated burned area over the original SPITFIRE. The new fire model we present here is particularly suited for the investigation of climate–human–fire relationships on multi-millennial timescales prior to the Industrial Revolution.
Fire is the primary disturbance factor in many terrestrial ecosystems. Wildfire alters vegetation structure and composition, affects carbon storage and biogeochemical cycling, and results in the release of climatically relevant trace gases, including CO2, CO, CH4, NO and aerosols. Assessing the impacts of global wildfire on centennial to multimillennial timescales requires the linkage of process-based fire modeling with vegetation modeling using Dynamic Global Vegetation Models (DGVMs). Here we present a new fire module, SPITFIRE-2, and an update to the LPJ-DGVM that includes major improvements to the way in which fire occurrence, behavior, and the effects of fire on vegetation are simulated. The new fire module includes explicit calculation of natural ignitions, the representation of multi-day burning and coalescence of fires and the calculation of rates of spread in different vegetation types, as well as a simple scheme to model crown fires. We describe a new representation of anthropogenic biomass burning under preindustrial conditions that distinguishes the ways in which the relationship between humans and fire differs among hunter-gatherers, obligate pastoralists, and farmers. Where and when available, we evaluate our model simulations against remote-sensing-based estimates of burned area. While wildfire in much of the modern world is largely influenced by anthropogenic suppression and ignitions, in those parts of the world where natural fire is still the dominant process, e.g. in remote areas of the boreal forest, our results demonstrate a significant improvement in simulated burned area over previous models. With its unique ability to simulate preindustrial fire, the new module we present here is particularly well suited for the investigation of climate-human-fire relationships on multi-millennial timescales.
We investigate complexes of two paramagnetic metal ions Gd3+ and Mn2+ to serve as polarizing agents for solid-state dynamic nuclear polarization (DNP) of 1H, 13C, and 15N at magnetic fields of 5, 9.4, and 14.1 T. Both ions are half-integer high-spin systems with a zero-field splitting and therefore exhibit a broadening of the mS = −1/2 ↔ +1/2 central transition which scales inversely with the external field strength. We investigate experimentally the influence of the chelator molecule, of strong hyperfine coupling to the metal nucleus, and of deuteration of the bulk matrix on DNP properties. At small Gd-DOTA concentrations the narrow central transition allows us to polarize nuclei with small gyromagnetic ratio such as 13C and even 15N via the solid effect. We demonstrate that the observed enhancements are limited by the available microwave power and that large enhancement factors of >100 (for 1H) and on the order of 1000 (for 13C) can be achieved in the saturation limit even at 80 K. At larger Gd(III) concentrations (≥10 mM), where dipolar couplings between two neighboring Gd3+ complexes become substantial, a transition towards the cross effect as the dominating DNP mechanism is observed. Furthermore, the slow spin diffusion among 13C and among 15N nuclei, respectively, allows for temporally resolved observation of enhanced polarization spreading from nuclei close to the paramagnetic ion towards nuclei further removed. Subsequently, we present preliminary DNP experiments on ubiquitin by site-directed spin-labeling with Gd3+ chelator tags. The results hold promise for applications of such paramagnetically labeled proteins for DNP in biophysical chemistry and/or structural biology.
We present the first 3-D model of seismic P and S velocities in the crust and uppermost mantle beneath the Gulf of Aqaba and surrounding areas based on the results of passive travel time tomography. The tomographic inversion was performed based on travel time data from ∼ 9000 regional earthquakes provided by the Egyptian National Seismological Network (ENSN), and this was complemented with data from the International Seismological Centre (ISC). The resulting P and S velocity patterns were generally consistent with each other at all depths. Beneath the northern part of the Red Sea, we observed a strong high-velocity anomaly with abrupt limits that coincide with the coastal lines. This finding may indicate the oceanic nature of the crust in the Red Sea, and it does not support the concept of gradual stretching of the continental crust. According to our results, in the middle and lower crust, the seismic anomalies beneath the Gulf of Aqaba seem to delineate a sinistral shift (∼ 100 km) in the opposite flanks of the fault zone, which is consistent with other estimates of the left-lateral displacement in the southern part of the Dead Sea Transform fault. However, no displacement structures were visible in the uppermost lithospheric mantle.
Funded by the German Ministry for Education and Research (BMBF), a major research project called MiKlip (Mittelfristige Klimaprognose, Decadal Climate Prediction) was launched, and global as well as regional predictive ensemble hindcasts have been generated. The aim of the project is to demonstrate, for past climate change, whether predictive models are capable of predicting climate on time scales of decades. This includes the development of a decadal forecast system, on the one hand to support decision making for the economy, politics and society on decadal time spans; on the other hand, the scientific aspect is to explore the feasibility and prospects of global and regional forecasts on decadal time scales. The focus of this paper lies on the description of the regional hindcast ensemble for Europe generated by COSMO-CLM and on the assessment of decadal variability and predictability against observations. To measure decadal variability, we remove the long-term bias as well as the long-term linear trend from the data. Further, we applied low-pass filters to the original data to separate the decadal climate signal from high-frequency noise. The decadal variability and predictability assessment is applied to temperature and precipitation data for the summer and winter half-year averages/sums. The best results were found for the prediction of decadal temperature anomalies, i.e. we detected a distinct predictive skill and reasonable reliability. Hence it is possible to predict regional temperature variability on decadal timescales. However, the situation is less satisfactory for precipitation: here we found regions showing good predictability, but also regions without any predictive skill.
The prediction of climate on time scales of years to decades is attracting the interest of both climate researchers and stakeholders. The German Ministry for Education and Research (BMBF) has launched a major research programme on decadal climate prediction called MiKlip (Mittelfristige Klimaprognosen, Decadal Climate Prediction) in order to investigate the prediction potential of global and regional climate models (RCMs). In this paper we describe a regional predictive hindcast ensemble, its validation, and the added value of regional downscaling. Global predictions are obtained from an ensemble of simulations by the MPI-ESM-LR model (baseline 0 runs), which were downscaled for Europe using the COSMO-CLM regional model. Decadal hindcasts were produced for the 5 decades starting in 1961 until 2001. Observations were taken from the E-OBS data set. To identify decadal variability and predictability, we removed the long-term mean, as well as the long-term linear trend from the data. We split the resulting anomaly time series into two parts, the first including lead times of 1–5 years, reflecting the skill which originates mainly from the initialisation, and the second including lead times from 6–10 years, which are more related to the representation of low frequency climate variability and the effects of external forcing. We investigated temperature averages and precipitation sums for the summer and winter half-year. Skill assessment was based on correlation coefficient and reliability. We found that regional downscaling preserves, but mostly does not improve the skill and the reliability of the global predictions for summer half-year temperature anomalies. In contrast, regionalisation improves global decadal predictions of half-year precipitation sums in most parts of Europe. The added value results from an increased predictive skill on grid-point basis together with an improvement of the ensemble spread, i.e. the reliability.
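The preprocessing and skill measure described above (removing the long-term mean and linear trend, then correlating hindcast and observed anomalies) can be sketched as follows. The data are synthetic and all names are illustrative; this is not MiKlip code or output.

```python
import numpy as np

def decadal_anomalies(series, years):
    """Remove the long-term mean and long-term linear trend from a time series."""
    coeffs = np.polyfit(years, series, 1)      # least-squares linear trend
    return series - np.polyval(coeffs, years)  # detrended, zero-mean anomalies

def correlation_skill(hindcast, observation):
    """Pearson correlation between hindcast and observed anomaly series."""
    return np.corrcoef(hindcast, observation)[0, 1]

# Illustrative synthetic data: a shared low-frequency signal plus a linear
# trend, with independent noise standing in for observation and hindcast.
rng = np.random.default_rng(42)
years = np.arange(1961, 2011)
signal = np.sin(2 * np.pi * (years - 1961) / 20.0)    # ~20-year variability
trend = 0.02 * (years - 1961)
obs = decadal_anomalies(trend + signal + 0.3 * rng.normal(size=years.size), years)
hc = decadal_anomalies(trend + signal + 0.3 * rng.normal(size=years.size), years)
skill = correlation_skill(hc, obs)
```

A real assessment would additionally low-pass filter the anomalies and compute reliability over the full ensemble, as described in the abstracts above.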
Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. We indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.
In old and heavily weathered soils, the availability of P might be so small that the primary production of plants is limited. However, plants have evolved several mechanisms to actively take up P from the soil or mine it to overcome this limitation. These mechanisms involve the active uptake of P mediated by mycorrhiza, biotic de-occlusion through root clusters, and the biotic enhancement of weathering through root exudation. The objective of this paper is to investigate how and where these processes contribute to alleviate P limitation on primary productivity. To do so, we propose a process-based model accounting for the major processes of the carbon, water, and P cycles including chemical weathering at the global scale. Implementing P limitation on biomass synthesis allows the assessment of the efficiencies of biomass production across different ecosystems. We use simulation experiments to assess the relative importance of the different uptake mechanisms to alleviate P limitation on biomass production. We find that active P uptake is an essential mechanism for sustaining P availability on long timescales, whereas biotic de-occlusion might serve as a buffer on timescales shorter than 10 000 yr. Although active P uptake is essential for reducing P losses by leaching, humid lowland soils reach P limitation after around 100 000 yr of soil evolution. Given the generalized modelling framework, our model results compare reasonably with observed or independently estimated patterns and ranges of P concentrations in soils and vegetation. Furthermore, our simulations suggest that P limitation might be an important driver of biomass production efficiency (the fraction of the gross primary productivity used for biomass growth), and that vegetation on old soils has a smaller biomass production rate when P becomes limiting. 
With this study, we provide a theoretical basis for investigating the responses of terrestrial ecosystems to P availability linking geological and ecological timescales under different environmental settings.
Profiles of CFC-11 (CCl3F) and CFC-12 (CCl2F2) of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European satellite Envisat have been retrieved from versions MIPAS/4.61 to MIPAS/4.62 and MIPAS/5.02 to MIPAS/5.06 level-1b data using the scientific level-2 processor run by Karlsruhe Institute of Technology (KIT), Institute of Meteorology and Climate Research (IMK) and Consejo Superior de Investigaciones Científicas (CSIC), Instituto de Astrofísica de Andalucía (IAA). These profiles have been compared to measurements taken by the balloon-borne cryosampler, Mark IV (MkIV) and MIPAS-Balloon (MIPAS-B), the airborne MIPAS-STRatospheric aircraft (MIPAS-STR), the satellite-borne Atmospheric Chemistry Experiment Fourier transform spectrometer (ACE-FTS) and the High Resolution Dynamic Limb Sounder (HIRDLS), as well as the ground-based Halocarbon and other Atmospheric Trace Species (HATS) network for the reduced spectral resolution period (RR: January 2005–April 2012) of MIPAS. ACE-FTS, MkIV and HATS also provide measurements during the high spectral resolution period (full resolution, FR: July 2002–March 2004) and were used to validate MIPAS CFC-11 and CFC-12 products during that time, as well as profiles from the Improved Limb Atmospheric Spectrometer, ILAS-II. In general, we find that MIPAS shows slightly higher values for CFC-11 at the lower end of the profiles (below ∼ 15 km) and in a comparison of HATS ground-based data and MIPAS measurements at 3 km below the tropopause. Differences range from approximately 10 to 50 pptv ( ∼ 5–20 %) during the RR period. In general, differences are slightly smaller for the FR period. An indication of a slight high bias at the lower end of the profile exists for CFC-12 as well, but this bias is far less pronounced than for CFC-11 and is not as obvious in the relative differences between MIPAS and any of the comparison instruments. 
Differences at the lower end of the profile (below ∼ 15 km) and in the comparison of HATS and MIPAS measurements taken at 3 km below the tropopause mainly stay within 10–50 pptv (corresponding to ∼ 2–10 % for CFC-12) for the RR and the FR period. Between ∼ 15 and 30 km, most comparisons agree within 10–20 pptv (10–20 %), apart from ILAS-II, which shows large differences above ∼ 17 km. Overall, relative differences are usually smaller for CFC-12 than for CFC-11. For both species – CFC-11 and CFC-12 – we find that differences at the lower end of the profile tend to be larger at higher latitudes than in tropical and subtropical regions. In addition, MIPAS profiles have a maximum in their mixing ratio around the tropopause, which is most obvious in tropical mean profiles. Comparisons of the standard deviation in a quiescent atmosphere (polar summer) show that only the CFC-12 FR error budget can fully explain the observed variability, while for the other products (CFC-11 FR and RR and CFC-12 RR) only two-thirds to three-quarters can be explained. Investigations regarding the temporal stability show very small negative drifts in MIPAS CFC-11 measurements. These instrument drifts vary between ∼ 1 and 3 % decade−1. For CFC-12, the drifts are also negative and close to zero up to ∼ 30 km. Above that altitude, larger drifts of up to ∼ 50 % decade−1 appear which are negative up to ∼ 35 km and positive, but of a similar magnitude, above.
Reconstructions of biomass burning from sediment-charcoal records to improve data–model comparisons
(2016)
The location, timing, spatial extent, and frequency of wildfires are changing rapidly in many parts of the world, producing substantial impacts on ecosystems, people, and potentially climate. Paleofire records based on charcoal accumulation in sediments enable modern changes in biomass burning to be considered in their long-term context. Paleofire records also provide insights into the causes and impacts of past wildfires and emissions when analyzed in conjunction with other paleoenvironmental data and with fire models. Here we present new 1000-year and 22 000-year trends and gridded biomass burning reconstructions based on the Global Charcoal Database version 3 (GCDv3), which includes 736 charcoal records (57 more than in version 2). The new gridded reconstructions reveal the spatial patterns underlying the temporal trends in the data, allowing insights into likely controls on biomass burning at regional to global scales. In the most recent few decades, biomass burning has sharply increased in both hemispheres but especially in the north, where charcoal fluxes are now higher than at any other time during the past 22 000 years. We also discuss methodological issues relevant to data–model comparisons and identify areas for future research. Spatially gridded versions of the global data set from GCDv3 are provided to facilitate comparison with and validation of global fire simulations.
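Compositing heterogeneous charcoal records into regional or gridded trends typically involves transforming each record to standardized scores before averaging. The following is a deliberately simplified sketch of one such transformation (log-transform plus z-scoring against a base period); the actual GCDv3 workflow is more elaborate, using e.g. Box–Cox transformation and rescaling, and the numbers here are hypothetical.

```python
import numpy as np

def charcoal_zscores(influx, ages, base=(200, 2000)):
    """Convert a charcoal-influx record to z-scores: log-transform to
    homogenize variance across sites, then standardize against a base
    period (ages in cal yr BP)."""
    x = np.log(np.asarray(influx, dtype=float) + 1.0)  # +1 avoids log(0)
    ages = np.asarray(ages)
    sel = (ages >= base[0]) & (ages <= base[1])
    mu, sd = x[sel].mean(), x[sel].std(ddof=1)
    return (x - mu) / sd

# Hypothetical record: influx values with their sample ages
z = charcoal_zscores([0.0, 1.0, 2.0, 3.0, 4.0],
                     [100, 500, 1000, 1500, 2500])
```

Standardizing each record against a common base period makes records from lakes with very different sedimentation rates comparable before gridding.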
Amines are potentially important for atmospheric new particle formation, but their concentrations are usually low with typical mixing ratios in the pptv range or even smaller. Therefore, the demand for highly sensitive gas-phase amine measurements has emerged in the last several years. Nitrate chemical ionization mass spectrometry (CIMS) is routinely used for the measurement of gas-phase sulfuric acid in the sub-pptv range. Furthermore, extremely low volatile organic compounds (ELVOCs) can be detected with a nitrate CIMS. In this study we demonstrate that a nitrate CIMS can also be used for the sensitive measurement of dimethylamine (DMA, (CH3)2NH) using the NO3−•(HNO3)1 − 2• (DMA) cluster ion signal. Calibration measurements were made at the CLOUD chamber during two different measurement campaigns. Good linearity between 0 and ∼ 120 pptv of DMA as well as a sub-pptv detection limit of 0.7 pptv for a 10 min integration time are demonstrated at 278 K and 38 % RH.
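Quantifying DMA with a nitrate CIMS comes down to a linear calibration of the cluster-ion signal against known mixing ratios, with the detection limit derived from the background variability and the calibration slope. A hedged sketch with made-up numbers follows; the real calibration data, counting statistics and transmission corrections differ.

```python
import numpy as np

def detection_limit(bg_counts, slope, k=3.0):
    """Detection limit as k times the standard deviation of the
    background signal, divided by the calibration slope (counts/pptv)."""
    return k * np.std(bg_counts, ddof=1) / slope

# Hypothetical calibration: cluster-ion signal vs DMA mixing ratio (pptv)
mix = np.array([0.0, 20.0, 40.0, 80.0, 120.0])
sig = np.array([2.0, 42.0, 82.0, 162.0, 242.0])   # counts, ~2 counts/pptv
slope, intercept = np.polyfit(mix, sig, 1)
print(round(slope, 2))  # -> 2.0
```

A longer integration time reduces the background standard deviation and thus lowers the detection limit, which is how sub-pptv limits are reached with a 10 min integration.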
Metal-organic frameworks (MOFs) have emerged as a promising class of crystalline porous inorganic-organic hybrid materials showing a wide range of applications. In order to realize the integration of MOFs into specific devices, this thesis mainly focuses on the controlled growth and the properties of highly oriented surface-mounted metal-organic frameworks (SURMOFs).
The stepwise layer-by-layer (LbL) growth method offers great advantages for the controllable growth of SURMOFs with respect to crystallite orientation, film thickness and homogeneity. However, to date only a few MOFs have been shown to be suited to this protocol. The first project of this thesis was therefore designed to extend the applicability of LbL growth. To this end, a MOF based on a semi-rigid linker, [Cu2(sdb)2(bipy)] (sdb = 4,4'-sulfonylbiphenyl dicarboxylate; bipy = 4,4'-bipyridine), was chosen. Employing LbL growth, [Cu2(sdb)2(bipy)] SURMOFs were successfully grown onto both pyridyl- and carboxyl-terminated surfaces over the temperature range of 15-65 °C. Interestingly, the orientation of the SURMOFs on both surfaces depends largely on temperature. At low temperatures (below 40 °C), SURMOFs with exclusive [010] orientation are obtained. In contrast, at high temperatures (40-65 °C), [001]-oriented SURMOF growth is favored. This demonstrates a novel growth mode in which temperature-induced ripening processes and the tendency to minimize surface energies, rather than surface chemistry, can dominate SURMOF growth.
Inspired by the advantages of LbL deposition of isoreticular SURMOFs, the second project was conceived to grow multivariate SURMOFs (MTV-SURMOFs) using mixed dicarboxylate linkers. We advance the hypothesis that linker acidity (expressed by the pKa values) may influence the oriented growth of MTV-SURMOFs. To test this hypothesis, seven isoreticular [Cu2L2(dabco)] (L = a single kind of dicarboxylate linker; dabco = 1,4-diazabicyclo[2.2.2]octane) SURMOFs were grown onto pyridyl-terminated surfaces at 60 °C. The quality of the [001] orientation is strongly affected by the acidity of the linkers. Building on this observation, we deposited a series of [Cu2Lm2(dabco)] (Lm = mixed dicarboxylate linkers) SURMOFs under the same conditions. [Cu2Lm2(dabco)] SURMOFs with exclusive [001] orientation are obtained when the growth solution contains two linkers of relatively high pKa value, or more than two kinds of linkers (independent of their pKa values), whereas mixtures of ligands with relatively low pKa values, or with a high content of low-pKa linkers, can result in mis-oriented growth of SURMOFs with an unexpected [100] orientation.
Moreover, LbL growth shows enormous potential for the rational construction of functional SURMOFs. The third project of this thesis was therefore devised to deposit SURMOFs containing redox-active species. For this, the 4,4'-biphenyldicarboxylic acid (H2(bpdc)) linker was functionalized with ferrocene (Fc) and dimethylferrocene (Me2Fc) moieties. The [Cu2(bpdc-amide-Fc)2(dabco)] SURMOF (Fc-SURMOF) grows perfectly along the [100] direction, while mis-oriented growth of the [Cu2(bpdc-amide-Me2Fc)2(dabco)] SURMOF (Me2Fc-SURMOF) was observed. Surprisingly, the Fc-SURMOF shows excellent electrochemical properties owing to the reversible oxidation and reduction of the ferrocene moieties in the oriented pores, whereas the Me2Fc-SURMOF was found to behave as a closely packed insulating layer, since no extensive charge transfer is observed. A diffusion-controlled redox mechanism is proposed, in which the diffusion of the counter-anions within the pores limits the current.
Besides the LbL growth protocol, the spin-coating technique is also promising for the oriented growth of SURMOFs. Driven by specific applications, the fourth project of this thesis was planned to grow functional SURMOFs containing catalytically active units. Keggin-type polyoxometalates (POMs) with high catalytic activity were chosen to functionalize HKUST-1 SURMOFs. Combining spin-coating with methanol-vapor-induced growth, a series of POM-functionalized HKUST-1 SURMOFs (denoted POM@HKUST-1 SURMOFs) were controllably deposited onto pyridyl-terminated surfaces. These SURMOFs exhibit great potential as electrocatalysts in electrochemical devices owing to the excellent redox properties of POMs. In addition, the PTA@HKUST-1 (PTA = phosphotungstic acid) SURMOF can be employed as an ideal platform for the selective, high-efficiency loading of methylene blue (MB) dye. Owing to the strong binding between the dye molecules and the framework, the MB dye cannot be desorbed by ion exchange, and the MB-loaded PTA@HKUST-1 SURMOF shows reliable redox properties under inert conditions, further confirming its application potential in electrochemical devices.
MIPAS-Envisat is a satellite-borne sensor which measured vertical profiles of a wide range of trace gases from 2002 to 2012 using IR emission spectroscopy. We present a geophysical validation of the MIPAS-Envisat operational retrieval (version 6.0) of N2O, CH4, CFC-12, and CFC-11 by the European Space Agency (ESA). The validation data are derived from measurements of samples collected by a cryogenic whole air sampler flown to altitudes of up to 34 km by means of large scientific balloons. In order to increase the number of coincidences between the satellite and the balloon observations, we applied a trajectory matching technique. The results are presented for different time periods due to a change in the spectral resolution of MIPAS in early 2005. Retrieval results for N2O, CH4, and CFC-12 show good agreement in some altitude regions, with differences between the periods of different spectral resolution. The more recent low-spectral-resolution data above 20 km altitude agree within the combined uncertainties, while the earlier high-spectral-resolution data set tends to underestimate these species above 25 km. The earlier high-spectral-resolution data also show a significant overestimation of the mixing ratios of N2O, CH4, and CFC-12 below 20 km. These differences need to be considered when using these data. The CFC-11 results from the operational retrieval version 6.0 cannot be recommended for scientific studies due to a systematic overestimation of the CFC-11 mixing ratios at all altitudes.
Modelling short-term variability in carbon and water exchange in a temperate Scots pine forest
(2015)
The vegetation–atmosphere carbon and water exchange at one particular site can strongly vary from year to year, and understanding this interannual variability in carbon and water exchange (IAVcw) is a critical factor in projecting future ecosystem changes. However, the mechanisms driving this IAVcw are not well understood. We used data on carbon and water fluxes from a multi-year eddy covariance study (1997–2009) in a Dutch Scots pine forest and forced a process-based ecosystem model (Lund–Potsdam–Jena General Ecosystem Simulator; LPJ-GUESS) with local data to, firstly, test whether the model can explain IAVcw and seasonal carbon and water exchange from direct environmental factors only. Initial model runs showed low correlations with estimated annual gross primary productivity (GPP) and annual actual evapotranspiration (AET), while monthly and daily fluxes showed high correlations. The model underestimated GPP and AET during winter and drought events. Secondly, we adapted the temperature inhibition function of photosynthesis to account for the observation that at this particular site, trees continue to assimilate at very low atmospheric temperatures (up to daily averages of −10 °C), resulting in a net carbon sink in winter. While we were able to improve daily and monthly simulations during winter by lowering the modelled minimum temperature threshold for photosynthesis, this did not increase explained IAVcw at the site. Thirdly, we implemented three alternative hypotheses concerning water uptake by plants in order to test which one best corresponds with the data. In particular, we analyse the effects during the 2003 heatwave. These simulations revealed a strong sensitivity of the modelled fluxes during dry and warm conditions, but no single formulation was consistently superior in reproducing the data for all timescales and the overall model–data match for IAVcw could not be improved. 
Most probably, access to deep soil water leads to the higher AET and GPP simulated during the 2003 heatwave. We conclude that photosynthesis at lower temperatures than assumed in most models can be important for winter carbon and water fluxes in pine forests. Furthermore, details of the model representations of water uptake, which are often overlooked, need further attention, and deep water access should be treated explicitly.
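The adaptation described above amounts to shifting the lower bound of the model's temperature inhibition of photosynthesis. A schematic sketch of such a function is shown below; this is a generic piecewise-linear ramp with illustrative thresholds, not the actual response function or parameter values used in LPJ-GUESS.

```python
def temperature_inhibition(t_c, t_min=-10.0, t_low=5.0):
    """Piecewise-linear low-temperature inhibition of photosynthesis:
    0 at or below t_min, ramping linearly to 1 at t_low (both in deg C).
    Lowering t_min lets simulated assimilation continue at colder
    temperatures, as observed at the Dutch Scots pine site."""
    if t_c <= t_min:
        return 0.0
    if t_c >= t_low:
        return 1.0
    return (t_c - t_min) / (t_low - t_min)

# With t_min lowered to -10 deg C, a -2.5 deg C day still assimilates
print(temperature_inhibition(-2.5))  # -> 0.5
```

Moving `t_min` from, say, 0 to -10 °C changes winter fluxes substantially while leaving the growing-season response untouched, which is why the change improved winter simulations without affecting the interannual variability.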
Influence of sea surface roughness length parameterization on Mistral and Tramontane simulations
(2016)
The Mistral and Tramontane are mesoscale winds in southern France and over the western Mediterranean Sea. They are well suited for studying channeling effects as well as atmosphere–land/ocean processes. This sensitivity study deals with the influence of the sea surface roughness length parameterization on simulated Mistral and Tramontane wind speed and wind direction. Several simulations with the regional climate model COSMO-CLM were performed for the year 2005 with varying values of the Charnock parameter α. Over the western Mediterranean, the simulated wind speed and wind direction patterns on Mistral days change depending on the parameterization used. Higher values of α lead to lower simulated wind speeds. In areas where the simulated wind speed does not change much, a counterclockwise rotation of the simulated wind direction is observed.
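The Charnock relation ties the sea surface roughness length to the friction velocity, z0 = α u*² / g: a larger α yields a rougher surface, more drag, and hence lower simulated near-surface wind speeds. A minimal sketch with illustrative values only:

```python
G = 9.81  # gravitational acceleration, m s-2

def charnock_roughness(u_star, alpha):
    """Sea surface roughness length from the Charnock relation
    z0 = alpha * u*^2 / g (u_star in m/s, z0 in m)."""
    return alpha * u_star**2 / G

# Larger alpha -> larger z0 -> more surface drag -> lower wind speed
print(charnock_roughness(0.3, 0.0123))  # -> roughly 1.1e-4 m
```

Varying α in such a formula is exactly the kind of sensitivity experiment the study performs, with the full model of course also feeding z0 back into the turbulent flux calculations.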
We present the application of time-of-flight mass spectrometry (TOF MS) for the analysis of halocarbons in the atmosphere after cryogenic sample preconcentration and gas chromatographic separation. For the described field of application, the quadrupole mass spectrometer (QP MS) is a state-of-the-art detector. This work aims at comparing two commercially available instruments, a QP MS and a TOF MS, with respect to mass resolution, mass accuracy, stability of the mass axis and instrument sensitivity, detector sensitivity, measurement precision and detector linearity. Both mass spectrometers are operated on the same gas chromatographic system by splitting the column effluent to both detectors. The QP MS had to be operated in optimised single ion monitoring (SIM) mode to achieve a sensitivity which could compete with the TOF MS. The TOF MS provided full mass range information in any acquired mass spectrum without losing sensitivity. Whilst the QP MS showed the performance already achieved in earlier tests, the sensitivity of the TOF MS was on average higher than that of the QP MS in the "operational" SIM mode by a factor of up to 3, reaching detection limits of less than 0.2 pg. Measurement precision determined for the whole analytical system was up to 0.2% depending on substance and sampled volume. The TOF MS instrument used for this study displayed significant non-linearities of up to 10% for two-thirds of all analysed substances.
We present the characterization and application of a new gas chromatography time-of-flight mass spectrometry instrument (GC-TOFMS) for the quantitative analysis of halocarbons in air samples. The setup comprises three fundamental enhancements compared to our earlier work (Hoker et al., 2015): (1) full automation, (2) a mass resolving power R = m/Δm of the TOFMS (Tofwerk AG, Switzerland) increased up to 4000 and (3) a fully accessible data format of the mass spectrometric data. Automation in combination with the accessible data allowed an in-depth characterization of the instrument. Mass accuracy was found to be approximately 5 ppm on average after automatic recalibration of the mass axis in each measurement. A TOFMS configuration giving R = 3500 was chosen to provide an R-to-sensitivity ratio suitable for our purpose. Calculated detection limits are as low as a few femtograms by means of the accurate mass information. The precision for substance quantification was 0.15 % at best for an individual measurement and was in general mainly determined by the signal-to-noise ratio of the chromatographic peak. Detector non-linearity was found to be insignificant up to a mixing ratio of roughly 150 ppt at 0.5 L sampled volume. At higher concentrations, non-linearities of a few percent were observed (precision level: 0.2 %) but could be attributed to a potential source within the detection system. A straightforward correction for those non-linearities was applied in data processing, again by exploiting the accurate mass information. Based on the overall characterization results, the GC-TOFMS instrument was found to be very well suited for the task of quantitative halocarbon trace gas observation and a big step forward compared to scanning quadrupole MS with low mass resolving power and to a TOFMS technique reported to be non-linear and restricted by a small dynamic range.
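Two of the quoted figures of merit follow directly from their definitions: the mass resolving power R = m/Δm (with Δm the peak width, typically FWHM) and the mass accuracy in parts per million. A trivial sketch with illustrative numbers:

```python
def resolving_power(m, delta_m):
    """Mass resolving power R = m / delta_m (delta_m = peak FWHM, in u)."""
    return m / delta_m

def mass_accuracy_ppm(m_measured, m_exact):
    """Mass accuracy: relative deviation of the measured from the exact
    mass, expressed in parts per million."""
    return (m_measured - m_exact) / m_exact * 1e6

# A 0.1 u wide peak at m/z 350 corresponds to R = 3500
print(resolving_power(350.0, 0.1))
# A 0.0005 u deviation at m/z 100 corresponds to 5 ppm
print(round(mass_accuracy_ppm(100.0005, 100.0), 1))
```

It is this few-ppm mass accuracy that allows narrow extraction windows around the exact fragment masses, which in turn yields the low calculated detection limits and enables the non-linearity correction described above.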