University publications
The HBO series The Wire tells a story of crime, policing, and politics in Baltimore. One of its strengths lies in how it traces the ambivalence of social and political life. Ubiquitous corruption plays a central role in this, and its ambivalence is reflected not least in the portrayal of the character of State Senator Clay Davis.
The murder of more than 40 student teachers in the southern Mexican town of Ayotzinapa at the end of 2014, not yet fully solved but presumably attributable to organized crime, as well as the violence escalating in the western state of Jalisco since the downing of a military helicopter in May 2015, have once again painfully recalled that a bloody conflict has been raging in Mexico for almost nine years, a conflict that, given the economic successes of the "Aztec tiger", at times seemed almost forgotten.
The present work deals with the investigation of individual chiral molecules by coincidence measurements. A molecule is called chiral if it occurs in two variants, so-called enantiomers, whose structural models are mirror images of each other.
Since many biologically relevant molecules are chiral, the methods and findings of this field are of great importance for biochemistry and pharmacy. Remarkably, in nature usually only one of the two possible enantiomers occurs. Whether this choice was random, whether it resulted from the initial conditions at the origin of life, or whether it has a fundamental cause remains unresolved. Since the discovery of chiral molecular structures in the second half of the 19th century, a multitude of methods has been developed to distinguish the two enantiomers of a molecule and to investigate their properties. Statements about the microscopic structure (absolute configuration), however, can usually be made only with the aid of theoretical models.
The innovative step of the present work consists in applying, for the first time, a technique developed in atomic physics for the investigation of individual microscopic systems to chiral molecules: with Cold Target Recoil Ion Momentum Spectroscopy (COLTRIMS), single molecules in the gas phase can be ionized multiple times and the resulting fragments (ions and electrons) can be analyzed. The simultaneous detection of these fragments is referred to as a coincidence measurement.
First, the prototypical chiral molecule CHBrClF was multiply ionized with a femtosecond laser pulse so that all five atoms fly apart as singly charged ions in a so-called Coulomb explosion. By measuring the momentum vectors of these ions, the microscopic configuration of individual molecules could be determined with very high reliability. The coincidence method is therefore also suited to determining the fractions of right- and left-handed enantiomers in a sample. For the racemic sample used, ionization with linearly polarized light yielded, as expected, an equal distribution of the two enantiomers within the statistical uncertainty.
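The handedness assignment from measured momentum vectors can be sketched in a few lines. This is an illustrative toy, not the thesis' analysis code: it assumes the enantiomer label follows the sign of a scalar triple product of three fragment momenta (here F, Cl, Br), and the momenta in the usage example are made-up numbers.

```python
# Toy handedness assignment for a Coulomb explosion of a chiral molecule:
# the sign of p_F . (p_Cl x p_Br) distinguishes the mirror images.

def triple_product(a, b, c):
    """Scalar triple product a . (b x c) of three 3-vectors."""
    bxc = (b[1] * c[2] - b[2] * c[1],
           b[2] * c[0] - b[0] * c[2],
           b[0] * c[1] - b[1] * c[0])
    return a[0] * bxc[0] + a[1] * bxc[1] + a[2] * bxc[2]

def handedness(p_F, p_Cl, p_Br):
    """Label a single molecule's configuration from fragment momenta (toy convention)."""
    return "R" if triple_product(p_F, p_Cl, p_Br) > 0 else "S"
```

Swapping any two momentum vectors flips the sign, i.e. the mirror image gets the opposite label; event-by-event counts of the two labels then give the enantiomeric fractions of a sample.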
In a subsequent experiment it was shown that the Coulomb explosion can also be realized with single high-energy photons from a synchrotron radiation source. For both ionization mechanisms, laser and synchrotron, several fragmentation channels were investigated. With a view to extending the method to more complex, biologically relevant molecules, it is crucial to know to what extent the handedness can still be determined when not all atoms of the molecule are detected as atomic ions. It turned out that molecular ions can also be used to determine the absolute configuration. A significant gain in efficiency was demonstrated for the case that not all fragments from the Coulomb explosion are detected; in that case, however, only statistical statements about the absolute configuration and the abundance of the two enantiomers remain possible.
To test the limits of the method with respect to mass resolution, isotopically chiral molecules were investigated, i.e. molecules that are chiral only by virtue of two different isotopes. Here, too, a separation of the enantiomers is possible, albeit with certain restrictions.
An important characteristic of chiral molecules is the different behavior of the enantiomers when interacting with circularly polarized radiation. This asymmetry is called circular dichroism. The coincident investigation of ions and electrons from the fragmentation of a molecule opens up new possibilities for studying dichroism: the momentum vectors of the ions can be linked to known asymmetries in the electron distribution (photoelectron circular dichroism), which may lead to a better understanding of the interaction of electromagnetic radiation with chiral molecules.
In this work, asymmetries in the angular distributions of both the ions and the electrons were searched for after ionization of CHBrClF and propylene oxide (C3H6O) with circularly polarized synchrotron radiation. In the measurements performed, no unambiguous evidence of dichroism could be found under the experimental conditions used. Technical and fundamental limitations of the method were discussed, and suggestions for improving future measurements were given.
With the successful determination of the absolute configuration and the demonstrated possibility of probing asymmetries in previously inaccessible observables, this work lays the foundation for applying coincidence spectroscopy to questions of stereochemistry.
This dissertation is situated in the fields of semiclassical quantum gravity and pseudo-complex General Relativity (pc-GR). Here, semiclassical quantum gravity means the study of quantum-mechanical phenomena in a gravitational background field given by a classical theory of gravity, while pc-GR is an alternative to the currently accepted classical theory of gravity, General Relativity (GR), which extends the real spacetime coordinates of GR pseudo-complexly. Together with a modification of the variational principle, this leads in leading order to a correction of Einstein's equation of GR with an additional source term (an energy-momentum tensor) whose exact form is not yet known.
The description of gravity as a background field follows inevitably from the fact that no quantized description of it has yet been found on the basis of GR. It is hoped, however, that the study of semiclassical phenomena will provide hints towards the correct theory of quantum gravity. The lack of a quantized theory of gravity also motivates the use of alternative theories, since it raises the question of whether GR is the correct description of classical fields.
The aim of this dissertation was to identify the fundamental differences between GR and pc-GR for bound, spherically symmetric states of the Klein-Gordon and Dirac equations, and to determine a qualitative model of the vacuum fluctuations in spherically symmetric matter distributions; the connection between pc-GR and the vacuum fluctuations lies in the assumption that they are related to the additional source term of pc-GR. To this end, the bound states of the Klein-Gordon and Dirac equations were computed numerically and systematically for three different metric models of constant density (two GR models and one pc-GR model); representative plots were produced, on the basis of which the fundamental differences between the GR and pc-GR results were discussed; and the GR results for the Dirac equation were compared with results from the literature as far as possible. In particular, it was found that, unlike in GR, the energy eigenvalues in pc-GR exhibit a minimum as a function of the extension of the central object. In addition, the energy eigenvalues of the Klein-Gordon equation were in part computed both via the eigenvalue problem of a matrix and via an initial-value problem, and it was found that the eigenvalue-problem formulation is considerably less efficient when the basis of the three-dimensional harmonic oscillator is used. For the development of the qualitative vacuum-fluctuation model, two leading-order approximations for the expectation value of the energy-momentum tensor were compared for the Schwarzschild metric (GR), and the use of a qualitative model was justified by the discrepancy that emerged.
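The initial-value route to eigenvalues mentioned above can be illustrated by a minimal shooting-method sketch. This is purely illustrative and uses a toy one-dimensional Schrödinger problem (an infinite square well with hbar = m = 1), not the Klein-Gordon or Dirac equations in a curved background that the dissertation actually solves: one integrates the wave equation from one wall and bisects on the energy until the boundary condition at the other wall is met.

```python
# Shooting method for the ground state of -psi''/2 = E psi on [0, 1]
# with psi(0) = psi(1) = 0. Exact result: E_1 = pi^2 / 2 ~ 4.9348.

def boundary_value(E, n_steps=2000):
    """Integrate psi'' = -2 E psi from x = 0 with psi = 0, psi' = 1 (midpoint RK2);
    return psi(1), which vanishes exactly at an eigenvalue."""
    h = 1.0 / n_steps
    psi, dpsi = 0.0, 1.0
    for _ in range(n_steps):
        psi_mid = psi + 0.5 * h * dpsi
        dpsi_mid = dpsi + 0.5 * h * (-2.0 * E * psi)
        psi += h * dpsi_mid
        dpsi += h * (-2.0 * E * psi_mid)
    return psi

def ground_state_energy(E_lo=1.0, E_hi=10.0, tol=1e-8):
    """Bisect on E between two energies where psi(1) has opposite signs."""
    f_lo = boundary_value(E_lo)
    while E_hi - E_lo > tol:
        E_mid = 0.5 * (E_lo + E_hi)
        f_mid = boundary_value(E_mid)
        if f_mid * f_lo > 0:
            E_lo, f_lo = E_mid, f_mid
        else:
            E_hi = E_mid
    return 0.5 * (E_lo + E_hi)
```

The matrix alternative would instead expand psi in a fixed basis and diagonalize the resulting Hamiltonian matrix; the dissertation's finding is that this is the less efficient route when an oscillator basis is used.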
Subsequently, the vacuum fluctuations for metrics of constant matter density were computed in leading order with the aid of one of these approximations, and a model was sought that exhibits the same qualitative behavior. This model was then verified for simple metrics with variable matter density.
With its analysis of bound states, the dissertation contributes to identifying the differences between pc-GR and GR and thus points to further possible observables that could serve to distinguish the two theories. Furthermore, the derived model enables a refinement of the already published results on neutron stars, and the preparatory work required for its construction contributes to the identification of the pc-GR source term.
The endocannabinoids (EC), their synthesizing and metabolizing enzymes, and the cannabinoid (CB) receptors comprise the endocannabinoid system (ECS), which Yasuo et al. (2010) detected in rodent and human brain areas essential for circadian rhythm control and hormone secretion. The EC are secreted in the pars tuberalis (PT) of the pituitary gland and act as ligands on cannabinoid receptors type 1 (CB1) in the pars distalis (PD). CB1 is mostly expressed on folliculo-stellate (fs) cells of the PD. The fs cells exert regulatory and supportive functions on adjacent hormone-producing cells (Allaerts and Vankelecom, 2005; Mitsuishi et al., 2013). The lipid- and calcium-binding protein annexin A1 (Anx A1) and the cell-membrane-permeable compound nitric oxide (NO) have been detected in fs cells (Woods et al., 1990; Devnath and Inoue, 2008). Published findings indicate a strong influence of Anx A1 and NO on hormone production (Taylor et al., 1993; Vankelecom et al., 1997). The hypothesis of this study is that the EC influence hormone secretion by acting on CB1 receptors on fs cells and thus activating or inhibiting Anx A1 and NO, which directly affect adjacent glandular cells.
Cell models were predominantly used to carry out the experimental work. The TtT/GF and Tpit/F1 cell lines represent the fs cells, AtT20/D16v the ACTH-producing corticotroph (C) cells, and GH4C1 the PRL-producing lactotroph (L) cells. Wherever comparison with an intact-tissue model was possible, tissue from C3H mice was used. Chemiluminescent and photometric detection, enzyme-linked immunosorbent assay (ELISA), fluorescence-activated cell sorting (FACS), immunoblot (IB), immunocyto- and immunohistochemical analysis (ICC, IHC), in situ hybridization (ISH), and (q)PCR were used as assays to investigate CB1, Anx A1, the Anx A1 receptor Fpr-rs1, NO, ACTH, and PRL.
CB1 was detected on the fs, C, and L cell models. The presence of fatty acid amide hydrolase (FAAH, an EC-degrading enzyme) was confirmed in the fs cells. The fs cells were incubated with CB1 agonists (2-AG, AEA, WIN) and an antagonist (otenabant), and the resulting increase of Anx A1 and inhibition of NO were detected. Anx A1 binding sites, known as formyl peptide receptor-related sequence 1 (Fpr-rs1), were identified on the C and L cells. The hormone-producing cells were treated with 2-AG, Anx A1, and NO, and the resulting changes in the levels of ACTH and PRL were measured. Anx A1 stimulated ACTH release in the corticotroph AtT20/D16v cells and inhibited PRL in the lactotroph GH4C1 cells. NO inhibited both ACTH and PRL release. Additional analysis of the mRNA expression levels of Anx A1 and Fpr-rs1 in murine PD tissue demonstrated that while the expression of the former was not influenced by time, the expression of the latter was activated during the subjective day.
The present study shows that the EC stimulate ACTH release by activating Anx A1 and inhibiting NO. As for PRL, the EC exert inhibition through activating Anx A1 and stimulation through inhibiting NO. A clear regulatory linkage between the EC and the control of ACTH and PRL is revealed, involving the fs cells, with a possible time dependence.
Starting from the societal problem of childhood overweight, the particular importance and responsibility of physical education for this group is highlighted. The thesis advanced here is that physical education can fulfil its mandate only if it succeeds in conveying positive experiences of movement, play, and sport to overweight children as well. Within this field of sport pedagogy, a questionnaire was first designed and validated that compares well-being, as an indicator of positive experiences, between overweight and normal-weight pupils (n = 336). A subsequent qualitative study in the form of guided interviews (with eight overweight/obese children) supplements and substantiates the results.
The main result is that well-being, measured by a factor-analytically generated model with the three factors "physical education/PE teacher" (factor I), "athletic self-esteem" (factor II), and "classmates/school satisfaction" (factor III), shows no significant differences between the weight classes (factor I p = .57; factor II p = .04; factor III p = .23). Overweight pupils thus feel no less comfortable than their normal-weight classmates; on the athletic self-esteem scale they even achieved higher scores (normal-weight m = 2.06 ± 0.96; overweight m = 2.27 ± 0.89). Despite this positive finding, overweight pupils do experience some dissatisfaction. The question about the importance of, and satisfaction with, the components physical education, own athletic performance, cooperation with classmates, figure, and PE teacher made clear that figure and athletic performance matter greatly to overweight children, yet they are only moderately satisfied with them. The differences on the corresponding scales proved highly significant (figure p = .00, d = .28; athletic performance p = .01, d = .29). The examination of question F1.2 concerning gender-specific differences in the statements on well-being yielded only one significant result (p = .01) with a medium effect (d = .48): overweight girls reported higher scores on the factor "classmates/school satisfaction" (m = 3.27 ± 0.66) than overweight boys (m = 2.93 ± 0.75). It can be concluded that overweight girls feel better understood and supported by their classmates and experience greater overall school satisfaction than their male counterparts.
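The effect sizes reported above are Cohen's d values for two independent groups. A minimal sketch of that computation, using made-up score lists rather than the study's data:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd
```

By the usual conventions, d around .2 counts as a small effect and d around .5 as a medium effect, which is how the values .28/.29 and .48 above are read.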
The analysis with respect to the pupils' origin yielded no significant results. This finding suggests a successful integration of pupils of foreign origin, though it may not be representative given the high proportion of foreigners in the Offenbach urban area.
The analysis of the interviews showed that the positive self-esteem can be traced back to a high degree of social recognition. Contrary to numerous theoretical assumptions, no child reported persistent discrimination or feelings of shame because of their weight. The importance of one's own athletic performance repeatedly emerged as a key criterion for the pedagogical challenge of creating awareness of the problem without dampening self-esteem and the joy of doing sport. Overweight children attach great value to performance and, hoping for possible improvement, recognize that a reduction in weight would be advantageous.
Empirical credit demand analysis undertaken at the aggregate level obscures potential behavioral heterogeneity between various borrowing sectors. Looking at disaggregated data and analyzing bank loans to non-financial companies, to financial companies, to households for consumption and for house purchases separately with respect to a common set of macroeconomic determinants may facilitate more accurate empirical relationships and more reliable insights for economic policy. Using quarterly Euro area panel data between 2003 and 2013, empirical evidence for heterogeneity in borrowing behavior across sectors and the credit cycle with respect to interest rates, output and house prices is found. The results motivate sector-specific, counter-cyclical capital requirements.
This paper empirically investigates how organizational hierarchy affects the allocation of credit within a bank. Using an exogenous variation in organizational design, induced by a reorganization plan implemented in roughly 2,000 bank branches in India during 1999-2006, and employing a difference-in-differences research strategy, we find that increased hierarchization of a branch decreases its ability to produce "soft" information on loans, leads to increased standardization of loans and rationing of "soft information" loans. Furthermore, this loss of information brings about a reduction in performance on loans: delinquency rates and returns on similar loans are worse in more hierarchical branches. We also document how hierarchical structures perform better in environments that are characterized by a high degree of corruption, thus highlighting the benefits of hierarchical decision making in restraining rent seeking activities. Finally, we document a channel - managerial interference - through which hierarchy affects loan outcomes.
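The difference-in-differences logic behind the branch study can be sketched in its simplest mean-comparison form. The numbers in the test below are invented branch-level delinquency rates, not the paper's data: the estimate is the pre/post change in the reorganized (treated) branches minus the same change in the control branches.

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Mean-comparison difference-in-differences estimate:
    (treated change over time) minus (control change over time)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (mean(control_post) - mean(control_pre))
```

The regression version adds fixed effects and controls, but the identifying comparison is exactly this double difference, which nets out both time-invariant branch differences and common time trends.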
Our paper evaluates recent regulatory proposals mandating the deferral of bonus payments and claw-back clauses in the financial sector. We study a broadly applicable principal-agent setting, in which the agent exerts effort for an immediately observable task (acquisition) and a task for which information becomes available only gradually over time (diligence). Optimal compensation contracts trade off the cost and benefit of delay resulting from agent impatience and the informational gain. Mandatory deferral may increase or decrease equilibrium diligence depending on the importance of the acquisition task. We provide concrete conditions on economic primitives that make mandatory deferral socially (un)desirable.
In its meeting on 6 September 2012, the Governing Council of the ECB took decisions on a number of technical features regarding the Eurosystem's outright transactions in secondary sovereign bond markets (OMT). This decision was challenged before the German Federal Constitutional Court (GFCC) by a number of constitutional complaints and other petitions. In its seminal judgment of 14 January 2014, the German court expressed serious doubts about the compatibility of the ECB's decision with European Union law.
It admitted the complaints and petitions even though actual purchases had not been executed and the control of acts of an organ of the EU is in principle not the task of the GFCC. As justification for this procedure, the court resorted to its case law on a reserved "ultra vires" control and the defense of the "constitutional identity" of Germany. In the end, however, the court referred the case pursuant to Article 267 TFEU to the European Court of Justice (ECJ) for a preliminary ruling on several questions of EU law. In substance, the German court assessed OMT as an act of economic policy not covered by the competences of the ECB. Furthermore, it judged OMT to be monetary financing of sovereign debt, which is prohibited by EU primary law. The defense of the ECB (disruption of the monetary policy transmission mechanism) was dismissed without closer scrutiny as "irrelevant". Finally, however, the court opened a path to compromise via an interpretation of OMT in conformity with EU law, subject to preconditions specified in detail.
The procedure and findings of this judgment were harshly criticized by many economists, but also by the majority of legal scholars. This criticism is largely convincing with regard to the admissibility of the complaints. Even if the "ultra vires" control is in conformity with prior decisions of the court, in this judgment it is expanded further without compelling reasons. It is also questionable whether the standing of the complaining parties had to be accepted and whether the referral to the ECJ was indicated. The arguments of the court are, however, conclusive in respect of the transgression of competences by the ECB and, to a somewhat lesser extent, in respect of the monetary debt financing. The dismissal of the defense as "irrelevant" is absolutely persuasive.
The Treaty of Maastricht imposed on the European Union (EU) the strict obligation to establish an economic and monetary union, now Article 3(4) TEU. This economic and monetary union is, however, not designed as a separate entity but as an integral part of the EU. The single currency was to become the currency of the EU and to be legal tender in all Member States unless an exemption was explicitly granted in the primary law of the EU, as in the case of the UK and Denmark. Newly admitted Member States are obliged to introduce the euro as their currency as soon as they fulfil the admission criteria. Technically, this has been achieved by transferring the exclusive competence for the monetary policy of the Member States whose currency is the euro to the EU, Article 3(1)(c) TFEU, and by conferring on the euro the status of legal tender, the only legal tender in the EU, Article 128(1) sentence 3 TFEU.
Savings accounts are owned by most households, but little is known about the performance of households’ investments. We create a unique dataset by matching information on individual savings accounts from the DNB Household Survey with market data on account-specific interest rates and characteristics. We document considerable heterogeneity in returns across households, which can be partly explained by financial sophistication. A one-standard deviation increase in financial literacy is associated with a 13% increase compared to the median interest rate. We isolate the usage of modern technology (online accounts) as one channel through which financial literacy has a positive association with returns.
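The reported association (a one-standard-deviation increase in financial literacy linked to a sizeable interest-rate gain) is a standardized regression effect. A minimal sketch of that quantity, a simple OLS slope rescaled to a one-standard-deviation change in the regressor, computed on made-up data rather than the DNB survey:

```python
import math

def slope_per_sd(x, y):
    """OLS slope of y on x, rescaled to the effect of a one-standard-deviation
    increase in x (i.e. beta * sd(x))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    var_x = sum((a - mx) ** 2 for a in x) / (n - 1)
    return (cov / var_x) * math.sqrt(var_x)
```

Dividing such an effect by the median of y turns it into a statement like the one in the abstract ("a 13% increase compared to the median interest rate"); the paper's actual estimate of course comes from a full regression with controls.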
A theory of the boundaries of banks with implications for financial integration and regulation
(2015)
We offer a theory of the "boundary of the firm" that is tailored to banking, as it builds on a single inefficiency arising from risk-shifting and as it takes into account both interbank lending as an alternative to integration and the role of possibly insured deposit funding. Among other things, it explains why deeper economic integration should also cause greater financial integration through both bank mergers and interbank lending, albeit typically remaining inefficiently incomplete, and why economic disintegration (or "desynchronization"), as currently witnessed in the European Union, should cause less interbank exposure. It also suggests that recent policy measures such as the preferential treatment of retail deposits, the extension of deposit insurance, or penalties on "connectedness" could all lead to substantial welfare losses.
In-depth analyses of cancer cell proteomes are needed to elucidate oncogenic pathomechanisms, as well as to identify potential drug targets and diagnostic biomarkers. However, methods for quantitative proteomic characterization of patient-derived tumors and in particular their cellular subpopulations are largely lacking. Here we describe an experimental set-up that allows quantitative analysis of proteomes of cancer cell subpopulations derived from either liquid or solid tumors. This is achieved by combining cellular enrichment strategies with quantitative Super-SILAC-based mass spectrometry followed by bioinformatic data analysis. To enrich specific cellular subsets, liquid tumors are first immunophenotyped by flow cytometry followed by FACS-sorting; for solid tumors, laser-capture microdissection is used to purify specific cellular subpopulations. In a second step, proteins are extracted from the purified cells and subsequently combined with a tumor-specific, SILAC-labeled spike-in standard that enables protein quantification. The resulting protein mixture is subjected to either gel electrophoresis or Filter Aided Sample Preparation (FASP) followed by tryptic digestion. Finally, tryptic peptides are analyzed using a hybrid quadrupole-orbitrap mass spectrometer, and the data obtained are processed with bioinformatic software suites including MaxQuant. By means of the workflow presented here, up to 8,000 proteins can be identified and quantified in patient-derived samples, and the resulting protein expression profiles can be compared among patients to identify diagnostic proteomic signatures or potential drug targets.
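The quantification step of the Super-SILAC workflow reduces to forming per-protein intensity ratios against the labeled spike-in standard; because every sample is measured against the same standard, ratios can then be compared across patients. A toy sketch with invented intensities (this is the arithmetic idea only, not MaxQuant's actual algorithm):

```python
import math

def silac_log_ratios(sample_intensity, standard_intensity):
    """Per-protein log2(sample/standard) ratios against the SILAC spike-in standard."""
    return {protein: math.log2(sample_intensity[protein] / standard_intensity[protein])
            for protein in sample_intensity}

def compare_patients(ratios_a, ratios_b):
    """Relative abundance between two patients: differencing the log ratios
    cancels the common spike-in standard."""
    return {p: ratios_a[p] - ratios_b[p] for p in ratios_a}
```

In this scheme a protein with a large positive value in `compare_patients` is over-represented in patient A relative to patient B, which is the kind of contrast used to search for diagnostic signatures.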
In the title compound, C20H24N2O4, both peptide bonds adopt a trans configuration with respect to the —N—H and —C=O groups. The dihedral angle between the aromatic rings is 53.58 (4)°. The molecular conformation is stabilized by an intramolecular N—H⋯O hydrogen bond. The crystal packing is characterized by zigzag chains of N—H⋯O hydrogen-bonded molecules running along the b-axis direction.
The main goal of the present work was to determine the energy-dependent cross sections of (γ,n) reactions for 169Tm, 170Yb, 176Yb, and 130Te using the photoactivation method.
To this end, the efficiencies of the detectors used were first corrected with the aid of simulations, since the targets have an extended geometry, in contrast to the point-like calibration sources. It turned out that the efficiencies of the MCA detectors could be corrected energy-dependently with the simulations, since the simulations reproduced the shape of the measured efficiencies well. For the efficiencies of the LEPS detectors, by contrast, no energy-dependent correction could be made, since the LEPS detectors exhibited strong summing effects owing to the small source-detector distance. Within the scope of this work, these summing effects could not be corrected for or otherwise taken into account.
In the EU there are longstanding and ongoing pressures towards a tax levied at the EU level to substitute for national contributions. We discuss conditions under which such a transition can make sense, starting from what we call a "decentralization theorem of taxation" that is analogous to Oates's (1972) famous result that, in the absence of spill-over effects and economies of scale, decentralized public good provision weakly dominates central provision. We then drop assumptions that turn out to be unnecessary for this result. While spill-over effects of taxation may call for central rules for taxation, as long as spill-over effects do not depend on the intra-regional distribution of the tax burden, decentralized taxation plus tax coordination is found to be superior to a union-wide tax.
Do markets correct individual behavioral biases? In an experimental asset market, we compare the outcomes of a standard market economy to those of an island economy in which market interactions are removed. We observe asset price bubbles in the market economy, while prices are stable in the island economy. We also find that subjects took more risk following larger losses, resulting in higher prices and consistent with a gambling-for-resurrection motive. This motive can translate into bubbles in the market economy because higher prices increase average losses and thus reinforce the desire to resurrect. By contrast, the absence of such a strategic complementarity in island economies can explain their more stable outcome. These results suggest that markets do not correct behavioral biases, rather the contrary.
This paper analyzes sovereign risk shift-contagion, i.e. positive and significant changes in the propagation mechanisms, using bond yield spreads for the major eurozone countries. Employing two econometric approaches based on quantile regressions (standard quantile regression and Bayesian quantile regression with heteroskedasticity), we find that the propagation of shocks in eurozone bond yield spreads shows almost no evidence of shift-contagion. All the increases in correlation witnessed over recent years stem from larger shocks propagated with higher intensity across Europe.
Research on interbank networks and systemic importance is starting to recognise that the web of exposures linking banks' balance sheets is more complex than the single-layer-of-exposure paradigm. We use data on exposures between large European banks, broken down by both maturity and instrument type, to characterise the main features of the multiplex structure of the network of large European banks. This multiplex network exhibits positively correlated multiplexity and a high similarity between layers, a finding that emerges both from standard similarity analyses and from a core-periphery analysis of the different layers. We propose measures of systemic importance that fit the case in which banks are connected through an arbitrary number of layers (be it by instrument, maturity or a combination of both). Such measures allow the global systemic importance index of any bank to be decomposed into the contributions of each of the sub-networks, providing a useful tool for banking regulators and supervisors. We use the dataset of exposures between large European banks to illustrate the proposed measures.
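The idea of a layer-decomposable importance index can be sketched as follows. This is an illustrative toy measure (a bank's exposure share summed across layers), not the specific index proposed in the paper; the bank names and exposure figures are invented.

```python
# Hedged sketch: a bank's global score is the sum of its exposure shares
# in each layer, so the global index decomposes exactly into per-layer
# contributions. Illustrative only; not the paper's actual measure.

def layer_contributions(layers, bank):
    """layers: {layer_name: {bank: total_exposure}} -> {layer_name: share}."""
    contrib = {}
    for name, exposures in layers.items():
        total = sum(exposures.values())
        contrib[name] = exposures.get(bank, 0.0) / total
    return contrib

# Two hypothetical layers (e.g. short- vs long-term exposures).
layers = {
    "short_term": {"A": 50.0, "B": 30.0, "C": 20.0},
    "long_term":  {"A": 10.0, "B": 60.0, "C": 30.0},
}
contrib = layer_contributions(layers, "A")
global_index = sum(contrib.values())  # decomposes into per-layer terms
```

The decomposition property is what makes such a measure useful to supervisors: each layer's term shows where a bank's importance comes from.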
Although banks are at the center of systemic risk, other institutions contribute to it as well. With the publication of the leveraged lending guideline in March 2013, the U.S. regulators showed that they are especially worried about private equity firms and their high-risk deals. Given these risks and the interconnectedness of banks through LBO loan syndicates, I shed light on the impact of a bank's LBO loan exposure on its systemic risk. Using 3,538 observations between 2000 and 2013 from 165 global banks, I show that banks with higher LBO exposure also have a higher level of systemic risk. Other loan purposes do not show this positive relationship. The main drivers of this relationship are the bank's interconnectedness with other LBO-financing banks and its size. Lending experience with a specific PE sponsor, experience with leading LBO syndicates, or a bank's credit rating, however, lead to a lower impact of the LBO loan exposure on systemic risk.
In the mid-1990s, institutional investors entered the syndicated loan market and started to serve borrowers as lead arrangers. Why are non-banks able to compete against banks for this role? How do the composition of syndicates and loan pricing differ among lead arrangers? Using a dataset of 12,847 leveraged loans between 1997 and 2012, I aim to answer these questions. Non-banks benefit from looser regulatory requirements, have industry expertise that helps them screen and monitor borrowers, and focus on firms that only ask for loans rather than on cross-selling additional services. I show that non-banks specialize in more opaque and less experienced borrowers, are more likely than banks to choose participants who help reduce the potentially higher information asymmetries, and earn 105 basis points more than banks.
This paper analyzes the influence that leveraged buyouts (LBOs) have on the operating performance of the LBO target companies' direct competitors. A unique, hand-collected data set on LBOs in the United States in the period 1985-2009 allows us to analyze the effects that different restructuring activities undertaken as part of the LBO have on the competitors' revenues. These restructuring activities include changes to leverage, governance, or the operating business, as well as M&A activities of the LBO target company. We find that although LBOs themselves have a negative influence on competitors' revenue growth, some restructuring mechanisms may actually benefit competing companies.
The Liikanen Group proposes contingent convertible (CoCo) bonds as a potential mechanism to enhance financial stability in the banking industry. Especially life insurance companies could serve as CoCo bond holders as they are already the largest purchasers of bank bonds in Europe. We develop a stylized model with a direct financial connection between banking and insurance and study the effects of various types of bonds such as non-convertible bonds, write-down bonds and CoCos on banks' and insurers' risk situations. In addition, we compare insurers' capital requirements under the proposed Solvency II standard model as well as under an internal model that ex-ante anticipates additional risks due to possible conversion of the CoCo bond into bank shares. In order to check the robustness of our findings, we consider different CoCo designs (write-down factor, trigger value, holding time of bank shares) and compare the resulting capital requirements with those for holding non-convertible bonds. We identify situations in which insurers benefit from buying CoCo bonds due to lower capital requirements and higher coupon rates. Furthermore, our results highlight how the Solvency II standard model can mislead insurers in their CoCo investment decision due to economically irrational incentives.
I assess how Basel III, Solvency II and the low interest rate environment will affect the financial connection between the bank and insurance sector by changing the funding patterns of banks as well as the investment strategies of life insurance companies. Especially for life insurance companies, the current low interest rate environment poses a key risk since declining returns on investments jeopardize the guaranteed return on life insurance contracts, a core component of traditional life insurance contracts in several European countries. I consider a contingent claim framework with a direct financial connection between banks and life insurers via bank bonds. The results indicate that life insurers' demand for bank bonds increases over the mid-term but ultimately declines in the long-run. Since life insurers are the largest purchasers of bank bonds in Europe, banks could lose one of their main funding sources. In addition, I show that shareholder value driven life insurers' appetite for risk increases when the gap between asset return and liability growth diminishes. To check the robustness of the findings, I calibrate a prolonged low interest rate scenario. The results show that the insurer's risk appetite is even higher when interest rates remain persistently low. A sensitivity analysis regarding industry-specific regulatory safety levels reveals that contagion between bank and life insurer is driven by the insurers' demand for bank bonds which itself depends on the regulatory safety level of banks.
The creation of the Banking Union is likely to come with substantial implications for the governance of Eurozone banks. The European Central Bank, in its capacity as supervisory authority for systemically important banks, as well as the Single Resolution Board, under the EU Regulations establishing the Single Supervisory Mechanism and the Single Resolution Mechanism, have been provided with a broad mandate and corresponding powers that allow for far-reaching interference with the relevant institutions’ organisational and business decisions. Starting with an overview of the relevant powers, the present paper explores how these could – and should – be exercised against the backdrop of the fundamental policy objectives of the Banking Union. The relevant aspects directly relate to a fundamental question associated with the reallocation of the supervisory landscape, namely: Will the centralisation of supervisory powers, over time, also lead to the streamlining of business models, corporate and group structures of banks across the Eurozone?
This paper examines the dynamic relationship between credit risk and liquidity in the sovereign bond market in the context of the European Central Bank (ECB) interventions. Using a comprehensive set of liquidity measures obtained from a detailed, quote-level dataset of the largest interdealer market for Italian government bonds, we show that changes in credit risk, as measured by the Italian sovereign credit default swap (CDS) spread, generally drive the liquidity of the market: a 10% change in the CDS spread leads to an 11% change in the bid-ask spread. This relationship is stronger, and the transmission is faster, when the CDS spread is above the 500 basis point threshold, estimated endogenously, and can be ascribed to changes in margins and collateral, as well as clientele effects. Moreover, we show that the Long-Term Refinancing Operations (LTRO) intervention by the ECB weakened the sensitivity of the liquidity provision by the market makers to changes in the Italian government's credit risk. We also document the importance of market-wide and dealer-specific funding liquidity measures in determining the market liquidity for Italian government bonds.
The European Commission has published a Green Paper outlining possible measures to create a single market for capital in Europe. Our comments on the Commission's capital markets union project take the functional finance approach as a starting point. Policy decisions, from the functional finance perspective, should be essentially neutral (agnostic) with regard to institutions (a level playing field). The main angle from which we assess proposals for the capital markets union agenda is information asymmetries and the agency problems (screening, monitoring) that arise as a result. Within this perspective, we make a number of more specific proposals.
The paper traces the developments from the formation of the European Economic and Monetary Union to the present day. It discusses the fact that the primary mandate of the European System of Central Banks (ESCB) is confined to safeguarding price stability and does not include general economic policy. Finally, the paper contributes to the discussion on whether the primary law of the European Union would support a eurozone exit. The Treaty of Maastricht imposed on the European Union (EU) the strict obligation to establish an economic and monetary union, now Article 3(4) TEU. This economic and monetary union is, however, not designed as a separate entity but as an integral part of the EU. The single currency was to become the currency of the EU and to be legal tender in all Member States unless an exemption was explicitly granted in the primary law of the EU, as in the case of the UK and Denmark. Newly admitted Member States are obliged to introduce the euro as their currency as soon as they fulfil the admission criteria. Technically, this has been achieved by transferring the exclusive competence for the monetary policy of the Member States whose currency is the euro to the EU, Article 3(1)(c) TFEU, and by bestowing on the euro the quality of legal tender, the only legal tender in the EU, Article 128(1) sentence 3 TFEU.
German tax policy combines high tax rates with numerous exemptions. This opens gaps in tax fairness, steers investment toward the wrong purposes, and complicates the tax system, at times beyond recognition. The inheritance tax is a particularly striking case. The attempt to bring consistency to the inheritance and gift tax through minimally invasive corrections is almost inevitably doomed to fail. There is instead much to be said for substantially lower tax rates combined with a simultaneous abolition of the preferential treatment of business assets.
Critical discourse is essential to scholarship. That is a truism, but it is often forgotten in the current dispute over "Münkler Watch", a blog in which students of Humboldt-Universität Berlin anonymously criticize a lecture course by the political scientist Prof. Herfried Münkler. Yet the students, too, seem to be after attention rather than a substantive dialogue.
25 years of ISOE – event documentation online +++ More than just housing: event series "Gemeinsam Leben in der Stadt" +++ Construction of wind turbines: conflict parties in dialogue +++ Ceremony at BiK-F – admission to the Senckenberg Gesellschaft für Naturforschung +++ ISOE Lecture in the 2014/15 winter semester at Goethe University Frankfurt +++ From ISOE: European biodiversity research: ISOE is a member of ALTER-Net +++ Events +++ Publications
ISOE research team accompanies "Reallabore" (real-world laboratories) in Baden-Württemberg +++ ISOE at the Berliner Energietage +++ Water for the dry season – handover of the flood-water collection facility in Namibia +++ Zukunftsstadt – ISOE is a partner of the Science Year 2015 +++ World Water Decade ends – problems in the global water supply remain +++ Capital4Health – research network for transdisciplinary health research +++ From ISOE: Dr. Alexandra Lux is the new head of the research unit "Transdisziplinäre Methoden und Konzepte" +++ Events +++ Publications
The Best! What's New in the Blogroll
(2015)
Every now and then, things simply need a good decluttering and tidying up. That holds for life in general and, once in a while, for the Bretterblog as well. Today was one of those days. Highly motivated by the daring plans from our last editorial meeting, I tackled, among other things, our blogroll: clicked through everything, cleared out the dead blogs, and marveled at how many strong blogs are out there that one tends to lose sight of from time to time!...
The design of rainwater-harvesting-based gardens requires considering not only the current climate but also climate change over the lifespan of the facility. The goal of this study is to present an approach for designing garden variants that can be safely supplied with harvested rainwater, taking into account climate change and adaptation measures. In addition, the study presents a methodology to quantify the effects of climate change on rainwater-harvesting-based gardening. Results of the study may not be accurate due to the assumptions made for the climate projections and may need to be further refined. We used a tank flow model and an irrigation water model. We then established three simple climate scenarios and analyzed the impact of climate change on harvested rainwater and horticultural production for a semi-arid region in northern Namibia. In the two climate scenarios with decreased precipitation and a medium or high temperature increase, adaptation measures are required to avoid substantial decreases in horticultural production. The study found that the most promising adaptation measures to sustain yields and revenues are a more water-efficient garden variant and an enlargement of the roof size. The proposed measures can partly or completely compensate for the negative impacts of climate change.
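A tank flow model of the kind mentioned above can be sketched as a daily water balance. The parameters below (roof area, runoff coefficient, tank size, irrigation demand) are illustrative assumptions, not the study's values; the sketch only shows why roof size and water-efficient demand matter for how much demand the tank can meet.

```python
# Minimal daily water-balance sketch of a rainwater-harvesting tank.
# All parameters are illustrative assumptions, not the study's data.

def simulate_tank(rain_mm, roof_m2=100.0, runoff_coeff=0.8,
                  capacity_l=30000.0, demand_l=200.0):
    """Return (litres of demand met, litres unmet) over a daily rain series."""
    storage = 0.0
    met = unmet = 0.0
    for mm in rain_mm:
        inflow = mm * roof_m2 * runoff_coeff         # 1 mm on 1 m^2 = 1 litre
        storage = min(storage + inflow, capacity_l)  # overflow is lost
        supplied = min(demand_l, storage)            # irrigate from the tank
        storage -= supplied
        met += supplied
        unmet += demand_l - supplied
    return met, unmet

# Toy series: 10 rainy days (5 mm each) followed by a 20-day dry spell.
met, unmet = simulate_tank([5.0] * 10 + [0.0] * 20)
```

Rerunning the sketch with a larger `roof_m2` or a smaller `demand_l` shows the two adaptation levers the study identifies: more harvested inflow, or a more water-efficient garden.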
Bayesian networks are computer-based environmental models that are frequently used to support decision-making under uncertainty. Under data-scarce conditions, Bayesian networks can be developed, parameterized, and run based on expert knowledge alone. However, the efficiency of expert-based Bayesian network modeling is limited by the difficulty of deriving model inputs in the time available during expert workshops. This thesis therefore aimed at developing a simple and robust method for deriving conditional probability tables from expert estimates in a time-efficient way. The design and application of this new elicitation and conversion method are demonstrated using a case study in Xinjiang, Northwest China. The key characteristics of the method are its time-efficiency and its use of different conversion tables depending on the expert's level of confidence. Although the method has its limitations (e.g., it can only be applied to variables with a single conditioning variable), it provides the opportunity to complete the parameterization of Bayesian networks that would otherwise remain half-finished due to time constraints. In addition, a case study in the Murray-Darling Basin, Australia, is used to compare Bayesian network types and software in order to improve the presentation clarity of large Bayesian networks. Both case studies aimed at gaining insights into how to improve the applicability of Bayesian networks to support environmental management.
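The core idea of such a conversion method can be illustrated as follows. The conversion table below is a made-up example, not the thesis's actual tables: an expert names the most likely state of a variable and a confidence level, and the conversion table turns that into one row of a conditional probability table (CPT).

```python
# Illustrative sketch of converting an expert estimate into a CPT row.
# The probability masses below are invented for illustration; the thesis
# uses its own confidence-dependent conversion tables.

CONVERSION = {
    # confidence level -> probability mass on the expert's chosen state;
    # the remainder is split evenly over the other states.
    "high":   0.80,
    "medium": 0.60,
    "low":    0.40,
}

def cpt_row(chosen_state, states, confidence):
    p_main = CONVERSION[confidence]
    p_rest = (1.0 - p_main) / (len(states) - 1)
    return {s: (p_main if s == chosen_state else p_rest) for s in states}

# Expert: "most likely 'decline', medium confidence" -> one CPT row.
row = cpt_row("decline", ["increase", "stable", "decline"], "medium")
assert abs(sum(row.values()) - 1.0) < 1e-9  # a valid CPT row sums to 1
```

Filling a whole CPT then just means eliciting one such estimate per combination of parent states, which is what makes the approach workable within a workshop's time budget.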
After last week's hiatus, there is a Netzschau (link roundup) again this week. Everything as usual, I am inclined to say. But that is not quite true. This edition is a transitional Netzschau and therefore comes in a very slim format. We are considering changing the format drastically. No decision has yet been made on the "how", though. Anyone who wants to give us tips on what her or his perfect Netzschau would look like, or what we should definitely change, please comment on this post. Thanks!
According to the prevailing reading, interests in exclusivity and interests in access collide on the Internet. Copyright law is supposed to bring this conflict into an appropriate balance. The following article interprets the disputes over digital copyright differently. On this view, online communication is shaped by two coexisting cultures that relate to copyright in different ways. The design of digital copyright will help decide whether the dynamic coexistence of the culture of exclusivity and the culture of access continues or whether one of the two cultures is displaced. Copyright law is consequently to be regarded as part of Internet regulation.
FIAS Scientific Report 2014
(2015)
This dissertation shows that global coherence in life narratives first emerges in adolescence and continues to develop in adulthood. It further demonstrates that the fragmentary use of the life story, in the form of autobiographical reasoning, contributes to maintaining self-continuity in times of profound life change.
A study of the scholarly production on the School of Salamanca in recent years and of future perspectives. It raises the difficulty of delimiting the School in time and proposes expanding its field of study, not only to the traditional topics such as theology (morality, the problem of evil, the De auxiliis controversy), law (natural law and human rights, sovereignty, just war, ...) and economics (private property, money, value and price, interest), but also to scientific problems concerning space, time and other matters.
The following article presents a reconstruction of the sixteenth-century debate on the theological-political condition of the American Indians. It concentrates, in particular, on one element of a complex controversy: the opinions on the paganism of the peoples "discovered" in America and Asia. After analyzing the condemnations of the Indians for "idolatry" found in the writings of chroniclers such as López de Gómara or Fernández de Oviedo, the article summarizes the arguments elaborated by important masters of the University of Salamanca (De Paz, Vitoria, Soto) to reject the confused way in which the theological dilemmas raised by the discovery of the new pagan peoples were being framed. The article also emphasizes the important role played by the Salamancan theologians in a broader process of conceptualizing the "innocent" nature of the "idolatries" of the native Americans, a process in which other missionary theologians (Las Casas, Zumárraga, ...) also took part, albeit with other methods and arguments. The final sections are devoted to the consolidation of the arguments forged by the Salamancan theologians in the ongoing debate on evangelization and Spanish domination over the Indies. In particular, writings by two prominent figures, Alonso de la Veracruz and Domingo de Salazar, are revisited to show how, under the influence of Vitoria and Soto, their professors at the University of Salamanca, Veracruz and Salazar adapted some of their ideas to the missionary contexts of America and Asia.
The purpose of this article is to discuss, first, to what extent the masters linked to the University of Salamanca and to its corresponding School of Salamanca contributed to the validation of a body of knowledge related to the discoveries, one that made it possible to conceive a new geographical configuration of the Earth. Second, we show how the University of Salamanca, together with other institutions of knowledge, operated as a center of "scientific" activity in the service of the projects of the Spanish monarchy.
The first part of this text sets out the requirements for a scientific concept of crime. The subsequent examination of the "general theories of crime" shows that these theories cannot make good on their claim, because they lack a scientifically viable concept of crime. Yet by not acknowledging this deficiency, and by cloaking the gap in silence or in loosely defined concepts of crime, they gloss over it deceptively.
Background: Acquired resistance to standard chemotherapy causes treatment failure in patients with metastatic bladder cancer. Overexpression of pro-survival Bcl-2 family proteins has been associated with a poor chemotherapeutic response, suggesting that Bcl-2-targeted therapy may be a feasible strategy in patients with these tumors. The small-molecule pan-Bcl-2 inhibitor (−)-gossypol (AT-101) is known to induce apoptotic cell death, but can also induce autophagy through release of the pro-autophagic BH3 only protein Beclin-1 from Bcl-2. The potential therapeutic effects of (−)-gossypol in chemoresistant bladder cancer and the role of autophagy in this context are hitherto unknown.
Methods: Cisplatin (5637rCDDP1000, RT4rCDDP1000) and gemcitabine (5637rGEMCI20, RT4rGEMCI20) chemoresistant sub-lines of the chemo-sensitive bladder cancer cell lines 5637 and RT4 were established for the investigation of acquired resistance mechanisms. Cell lines carrying a stable lentiviral knockdown of the core autophagy regulator ATG5 were created from chemosensitive 5637 and chemoresistant 5637rGEMCI20 and 5637rCDDP1000 cell lines. Cell death and autophagy were quantified by FACS analysis of propidium iodide, Annexin and Lysotracker staining, as well as LC3 translocation.
Results: Here we demonstrate that (−)-gossypol induces an apoptotic type of cell death in 5637 and RT4 cells which is partially inhibited by the pan-caspase inhibitor z-VAD. Cisplatin- and gemcitabine-resistant bladder cancer cells exhibit enhanced basal and drug-induced autophagosome formation and lysosomal activity which is accompanied by an attenuated apoptotic cell death after treatment with both (−)-gossypol and ABT-737, a Bcl-2 inhibitor which spares Mcl-1, in comparison to parental cells. Knockdown of ATG5 and inhibition of autophagy by 3-MA had no discernible effect on apoptotic cell death induced by (−)-gossypol and ABT-737 in parental 5637 cells, but evoked a significant increase in early apoptosis and overall cell death in BH3 mimetic-treated 5637rGEMCI20 and 5637rCDDP1000 cells.
Conclusions: Our findings show for the first time that (−)-gossypol concomitantly triggers apoptosis and a cytoprotective type of autophagy in bladder cancer and support the notion that enhanced autophagy may underlie the chemoresistant phenotype of these tumors. Simultaneous targeting of Bcl-2 proteins and the autophagy pathway may be an efficient new strategy to overcome their "autophagy addiction" and acquired resistance to current therapy.
The family of lysosome-associated membrane proteins (LAMP) includes the ubiquitously expressed LAMP1 and LAMP2, which account for half of the proteins in the lysosomal membrane. Another member of the LAMP family is LAMP3, which is expressed only in certain cell types and differentiation stages. LAMP3 expression is linked with poor prognosis of certain cancers, and the locus where it is encoded was identified as a risk factor for Parkinson's disease (PD). Here, we investigated the role of LAMP3 in the two main cellular degradation pathways, the proteasome and autophagy. LAMP3 mRNA was not detected in mouse models of PD or in the brain of human patients. However, it was strongly induced upon proteasomal inhibition in the neuroblastoma cell line SH-SY5Y. Induction of LAMP3 mRNA following proteasomal inhibition was dependent on UPR transcription factor ATF4 signaling and induced autophagic flux. Prevention of LAMP3 induction enhanced apoptotic cell death. In summary, these data demonstrate that LAMP3 regulation as part of the UPR contributes to protein degradation and cell survival during proteasomal dysfunction. This link between autophagy and the proteasome may be of special importance for the treatment of tumor cells with proteasomal inhibitors.
This contribution is a review essay on Beatrice Brunhöber's dissertation Die Erfindung „demokratischer Repräsentation“ in den Federalist Papers, published in 2010 (Mohr Siebeck, Tübingen: Grundlagen der Rechtswissenschaft, vol. 14), in which Brunhöber works out the innovative force of the American founding fathers' combination of democracy, political representation and the idea of federalism, a force that also shaped constitutional development elsewhere. Building on Brunhöber's study, the essay asks in particular how the "old" concept designed by Hamilton, Madison and Jay for building a strong polity (including the trust-building principle of the separation of powers) can be made fruitful for an integrative handling of the "modern" conditions of pluralistic societies, with the entirety (and diversity) of the people in view as the foundation of legitimate rule. In the background stands the general question of whether and how such historical reassurances can be made useful for today's debates at all.
Background: Influenza vaccination is recommended for all healthcare personnel (HCP) and most institutions offer vaccination for free and on site. However, medical students do not always have such easy access, and the predictors that might guide the motivation of medical students to get vaccinated are largely unknown.
Methods: We conducted a cross-sectional survey study among pre-clinical medical students in a German University hospital to assess the social cognitive predictors of influenza vaccination, as well as reasons for refusal and acceptance of the vaccine.
Results: Findings show that pre-clinical medical students have knowledge gaps and negative attitudes towards influenza vaccination comparable to those previously reported among HCP. Lower injunctive norms and higher feelings of autonomy contribute to having no intention to get vaccinated against influenza, while a positive instrumental attitude and higher feelings of autonomy contribute to a high intention to get vaccinated. The variables in the regression model explained 20% of the variance in the intention to get vaccinated.
Conclusions: The identified factors should be addressed early in medical education, and hospitals might benefit from a more inclusive vaccination program and the accessibility of free vaccines for their medical students.
Background: The objective measurement of the mechanical component and its role in chronic ankle instability are still a matter of scientific debate. We analyzed the known-group and diagnostic validity of our ankle arthrometer. Additionally, functional aspects of chronic ankle instability were evaluated in relation to the anterior talar drawer.
Methods: Using manual stress testing, 41 functionally unstable ankles were classified as either mechanically stable (n = 15) or mechanically unstable (n = 26). Ankle laxity was quantified using an ankle arthrometer. Stiffness values from the load-displacement curves were calculated between 40 and 60 N. Known-group validity and eta² were established by comparing manual and arthrometer testing results. Diagnostic validity of the ankle arthrometer was determined by a 2 × 2 contingency table. Functional ankle instability severity was quantified by the German version of the Foot and Ankle Ability Measure (FAAM-G). Stiffness (40–60 N) and FAAM-G values were correlated.
Results: Mechanically unstable ankles had lower 40–60 N stiffness values than mechanically stable ankles (p = 0.006 and <0.001). Eta for the relation between manual and arthrometer anterior talar drawer testing was 0.628. With 5.1 N/mm as cut-off value, accuracy, sensitivity, and specificity were 85%, 81%, and 93%, respectively. The correlation between individual 40–60 N arthrometer stiffness values and FAAM-G scores was r = 0.286 and 0.316 (p = 0.07 and 0.04).
Conclusions: In this investigation, the ankle arthrometer demonstrated a high diagnostic validity for the determination of mechanical ankle instability. A clear interaction between mechanical (ankle arthrometer) and functional (FAAM-G) measures could not be demonstrated.
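The diagnostic-validity figures in the abstract above come from a 2 × 2 contingency table comparing the arthrometer classification (stiffness relative to the 5.1 N/mm cut-off) against manual stress testing. The computation can be sketched as follows; the cell counts are invented for illustration (chosen only so that, with the reported group sizes of 26 unstable and 15 stable ankles, the rounded percentages match those reported), since the abstract does not give the actual table.

```python
# Sketch of accuracy/sensitivity/specificity from a 2x2 contingency table,
# as used to validate the arthrometer cut-off. The counts below are
# invented for illustration; they are not the study's data.

def diagnostic_validity(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)    # unstable ankles correctly flagged
    specificity = tn / (tn + fp)    # stable ankles correctly cleared
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return accuracy, sensitivity, specificity

# Hypothetical counts: 26 mechanically unstable, 15 mechanically stable.
acc, sens, spec = diagnostic_validity(tp=21, fn=5, fp=1, tn=14)
```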
Background: Patients with liver cirrhosis have a highly elevated risk of developing bacterial infections that significantly decrease survival rates. One of the most relevant infections is spontaneous bacterial peritonitis (SBP). Recently, NOD2 germline variants were found to be potential predictors of the development of infectious complications and mortality in patients with cirrhosis. The aim of the INCA (Impact of NOD2 genotype-guided antibiotic prevention on survival in patients with liver Cirrhosis and Ascites) trial is to investigate whether survival of this genetically defined high-risk group of patients with cirrhosis defined by the presence of NOD2 variants is improved by primary antibiotic prophylaxis of SBP.
Methods/Design: The INCA trial is a double-blind, placebo-controlled clinical trial with two parallel treatment arms (arm 1: norfloxacin 400 mg once daily; arm 2: placebo once daily; 12-month treatment and observational period). Balanced randomization of 186 eligible patients with stratification for the protein content of the ascites (<15 versus ≥15 g/L) and the study site is planned. In this multicenter national study, patients are recruited in at least 13 centers throughout Germany. The key inclusion criterion is the presence of a NOD2 risk variant in patients with decompensated liver cirrhosis. The most important exclusion criteria are current SBP or previous history of SBP and any long-term antibiotic prophylaxis. The primary endpoint is overall survival after 12 months of treatment. Secondary objectives are to evaluate whether the frequencies of SBP and other clinically relevant infections necessitating antibiotic treatment, as well as the total duration of unplanned hospitalization due to cirrhosis, differ in both study arms. Recruitment started in February 2014.
Discussion: Preventive strategies are required to avoid life-threatening infections in patients with liver cirrhosis, but unselected use of antibiotics can trigger resistant bacteria and worsen outcome. Thus, individualized approaches that direct intervention only to patients with the highest risk are urgently needed. This trial meets this need by suggesting stratified prevention based on genetic risk assessment. To our knowledge, the INCA trial is first in the field of hepatology aimed at rapidly transferring and validating information on individual genetic risk into clinical decision algorithms.
Trial registrations: German Clinical Trials Register DRKS00005616. Registered 22 January 2014. EU Clinical Trials Register EudraCT 2013-001626-26. Registered 26 January 2015.
Recent studies have revealed an important role for Ltbp-4 in elastogenesis. Its mutational inactivation in humans causes autosomal recessive cutis laxa type 1C (ARCL1C), which is a severe disorder caused by defects of the elastic fiber network. Although the human gene involved in ARCL1C has been discovered based on similar elastic fiber abnormalities exhibited by mice lacking the short Ltbp-4 isoform (Ltbp4S−/−), the murine phenotype does not replicate ARCL1C. We therefore inactivated both Ltbp-4 isoforms in the mouse germline to model ARCL1C. Comparative analysis of Ltbp4S−/− and Ltbp4-null (Ltbp4−/−) mice identified Ltbp-4L as an important factor for elastogenesis and postnatal survival, and showed that it has distinct tissue expression patterns and specific molecular functions. We identified fibulin-4 as a previously unknown interaction partner of both Ltbp-4 isoforms and demonstrated that at least Ltbp-4L expression is essential for incorporation of fibulin-4 into the extracellular matrix (ECM). Overall, our results contribute to the current understanding of elastogenesis and provide an animal model of ARCL1C.
The three-dimensional quantification of small-scale processes in the upper troposphere and lower stratosphere is one of the challenges of current atmospheric research and requires the development of new measurement strategies. This work presents the first results from the newly developed Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) obtained during the ESSenCe (ESA Sounder Campaign) and TACTS/ESMVal (TACTS: Transport and composition in the upper troposphere/lowermost stratosphere; ESMVal: Earth System Model Validation) aircraft campaigns. The focus of this work is on the so-called dynamics-mode data, characterized by medium spectral and very high spatial resolution. The retrieval strategy for the derivation of two- and three-dimensional constituent fields in the upper troposphere and lower stratosphere is presented. Uncertainties of the main retrieval targets (temperature, O3, HNO3, and CFC-12) and their spatial resolution are discussed. During ESSenCe, high-resolution two-dimensional cross-sections were obtained. Comparisons to collocated remote-sensing and in situ data indicate good agreement between the data sets. During TACTS/ESMVal, a tomographic flight pattern was flown to sense an intrusion of stratospheric air deep into the troposphere. It was possible to reconstruct this filament at an unprecedented spatial resolution of better than 500 m vertically and 20 × 20 km horizontally.
Microstructural abnormalities in white matter (WM) are often reported in Alzheimer's disease (AD) and may reflect primary or secondary circuitry degeneration (i.e., due to cortical atrophy). The interpretation of diffusion tensor imaging (DTI) eigenvectors, known as multiple indices, may provide new insights into the main pathological models supporting primary or secondary patterns of WM disruption in AD, the retrogenesis, and Wallerian degeneration models, respectively. The aim of this review is to analyze the current literature on the contribution of DTI multiple indices to the understanding of AD neuropathology, taking the retrogenesis model as a reference for discussion. A systematic review using MEDLINE, EMBASE, and PUBMED was performed. Evidence suggests that AD evolves through distinct patterns of WM disruption, in which retrogenesis or, alternatively, the Wallerian degeneration may prevail. Distinct patterns of WM atrophy may be influenced by complex interactions which comprise disease status and progression, fiber localization, concurrent risk factors (i.e., vascular disease, gender), and cognitive reserve. The use of DTI multiple indices in addition to other standard multimodal methods in dementia research may help to determine the contribution of retrogenesis hypothesis to the understanding of neuropathological hallmarks that lead to AD.
Numerous studies have reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ with respect to how closely these two constructs are related. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that the experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow these two kinds of processes to be represented by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise, purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data from 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable reflecting processes that varied as a function of the experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf.
Taken together, these two latent variables explained the same portion of variance in Gf as a single latent variable obtained by traditional CFA (β = .65), indicating that traditional CFA overestimates the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
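The core idea of fixed-links modeling — one latent variable whose loadings are fixed to grow with task demand, and a second whose loadings stay constant — can be illustrated with a toy simulation. This is not the authors' model or data; it is a hypothetical two-component generating process in which the constant and the demand-dependent components are recovered separately (here by per-person least squares rather than full CFA estimation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, conditions = 200, np.arange(1, 6)            # 5 demand levels, as in the study

# Hypothetical generative model: each person's score at demand level c is
# a constant component a_i plus a demand-dependent component b_i * c + noise.
a = rng.normal(50, 5, n)                         # processes constant across levels
b = rng.normal(2, 0.5, n)                        # processes growing with WM demand
scores = a[:, None] + b[:, None] * conditions + rng.normal(0, 1, (n, 5))

# 'Fixed links': the loadings on the experimental factor are fixed to the
# demand level; here the two components are recovered by per-person OLS.
X = np.column_stack([np.ones(5), conditions])
coef, *_ = np.linalg.lstsq(X, scores.T, rcond=None)
a_hat, b_hat = coef                              # intercepts ~ constant, slopes ~ WMC

# Toy criterion variable related to both components:
gf = 0.5 * a + 0.5 * b * 10 + rng.normal(0, 3, n)
r_const = np.corrcoef(a_hat, gf)[0, 1]           # constant processes vs. Gf
r_exp = np.corrcoef(b_hat, gf)[0, 1]             # demand-related processes vs. Gf
```

In this simulation both components correlate with the criterion, mirroring the abstract's finding that the purified WMC variable and the constant processes each relate to Gf.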
Introduction: In this article three research questions are addressed: (1) Is there an association between socioeconomic status (SES) and patient-reported outcomes in a cohort of multimorbid patients? (2) Does the association vary according to SES indicator used (income, education, occupational position)? (3) Can the association between SES and patient-reported outcomes (self-rated health, health-related quality of life and functional status) be (partly) explained by burden of disease?
Methods: Analyses are based on the MultiCare Cohort Study, a German multicentre, prospective, observational cohort study of multimorbid patients from general practice. We analysed baseline data and data from the first follow-up after 15 months (N = 2,729). To assess burden of disease we used the patients’ morbidity data from standardized general practitioner (GP) interviews based on a list of 46 groups of chronic conditions including the GP’s severity rating of each chronic condition ranging from marginal to very severe.
Results: In the cross-sectional analyses SES was significantly associated with the patient-reported outcomes at baseline. Associations with income were more consistent and stronger than with education and occupational position. Associations were partly explained (17% to 44%) by burden of disease. In the longitudinal analyses only income (but not education and occupational position) was significantly related to the patient-reported outcomes at follow-up. Associations between income and the outcomes were reduced by 18% to 27% after adjustment for burden of disease.
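The percentages "explained by burden of disease" reported above follow the usual attenuation logic of comparing an SES coefficient before and after adjustment; a sketch of that computation (the coefficient values below are hypothetical, not taken from the study):

```python
def percent_attenuation(b_unadjusted, b_adjusted):
    """Share of an association 'explained' by an added covariate,
    computed as the relative reduction of the regression coefficient
    after adjustment (standard attenuation logic)."""
    return 100 * (b_unadjusted - b_adjusted) / b_unadjusted

# Hypothetical income coefficient before/after adjusting for burden of disease:
percent_attenuation(0.40, 0.31)   # ≈ 22.5 (within the 18-27% range reported)
```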
Conclusions: Results indicate social inequalities in self-rated health, functional status, and health-related quality of life among older multimorbid patients. As associations with education and occupational position were inconsistent, these inequalities were mainly due to income. Inequalities were partly explained by burden of disease. However, even among patients with a similar disease burden, those with a low income were worse off in terms of the three patient-reported outcomes under study.
Background: Xanthophyllomyces dendrorhous is a basal agaricomycete with uncertain taxonomic placement, known for its unique ability to produce astaxanthin, a carotenoid with antioxidant properties. It was the aim of this study to elucidate the organization of its CoA-derived pathways and to use the genomic information of X. dendrorhous for a phylogenomic investigation of the Basidiomycota.
Results: The genome assembly of a haploid strain of Xanthophyllomyces dendrorhous revealed a genome of 19.50 Megabases with 6385 protein coding genes. Phylogenetic analyses were conducted including 48 fungal genomes. These revealed Ustilaginomycotina and Agaricomycotina as sister groups. In the latter a well-supported sister-group relationship of two major orders, Polyporales and Russulales, was inferred. Wallemia occupies a basal position within the Agaricomycotina and X. dendrorhous represents the basal lineage of the Tremellomycetes, highlighting that the typical tremelloid parenthesomes have either convergently evolved in Wallemia and the Tremellomycetes, or were lost in the Cystofilobasidiales lineage. A detailed characterization of the CoA-related pathways was done and all genes for fatty acid, sterol and carotenoid synthesis have been assigned.
Conclusions: The current study ascertains that Wallemia with tremelloid parenthesomes is the most basal agaricomycotinous lineage and that Cystofilobasidiales without tremelloid parenthesomes are deeply rooted within Tremellomycetes, suggesting that parenthesomes at septal pores might be the core synapomorphy for the Agaricomycotina. Apart from evolutionary insights the genome sequence of X. dendrorhous will facilitate genetic pathway engineering for optimized astaxanthin or oxidative alcohol production.
Background: Aging is associated with loss of balance and activity in daily life. It impacts postural control and increases the risk of falls. The current study was conducted to determine the feasibility and long-term impact of stochastic resonance whole-body vibration (SR-WBV) on static and dynamic balance and reaction time among elderly individuals.
Methods: A randomized crossover pilot study with blinding of the participants was conducted. Twenty elderly participants were allocated to group A (SR-WBV 5 Hz, Noise 4/SR-WBV 1 Hz, Noise 1) or group B (SR-WBV 1 Hz, Noise 1/SR-WBV 5 Hz, Noise 1). Feasibility outcomes included recruitment, compliance, and safety. Secondary outcomes were the Semi-Tandem Stand (STS), Functional Reach Test (FRT), Expanded Timed Get Up-and-Go (ETGUG), walking under single-task (ST) and dual-task (DT) conditions, and hand and foot reaction time (RTH/RTF). Puri and Sen Rank-Order L statistics were used to analyse carry-over effects; Wilcoxon signed-rank tests were used to analyse SR-WBV effects.
Results: With a good recruitment rate (55%) and good compliance (attrition 15%; adherence 85%), the intervention was deemed feasible. Three participants dropped out: two due to knee pain and one for personal reasons. ETGUG 0 to 2 m (p = 0.143; ES: 0.36) and ETGUG total time (p = 0.097; ES: 0.40) showed medium effect sizes.
Conclusions: Stochastic resonance training is feasible in untrained elderly individuals, resulting in good recruitment and compliance. Low-volume SR-WBV exercise over 12 training sessions at 5 Hz, Noise 4 seems to be a sufficient stimulus to improve ETGUG total time. The stimulation did not elicit changes in the other outcomes.
Trial registration: This trial has been registered at the U.S. National Institutes of Health under ClinicalTrials.gov: NCT01045746.
The genetic factors responsible for the inter-individually variable G-CSF responsiveness remain elusive. A single nucleotide polymorphism (SNP) in the 3'UTR of CXCL12, rs1801157, was implicated in susceptibility to X4-tropic HIV and later, in two small studies, in G-CSF responsiveness in patients and donors. The position of the SNP in the 3'UTR, together with in-silico predictions, suggested differential binding of microRNA-941 (miR-941) as an underlying mechanism. In a cohort of 515 healthy stem cell donors, we attempted to reproduce the correlation between the CXCL12 3'UTR SNP and mobilization responses and tested the role of miR-941 in this context. The SNP was distributed with the expected frequency. Mobilization efficiency for CD34+ cells in WT, heterozygous and homozygous SNP individuals was indistinguishable, even after controlling for gender. miR-941 expression in non-hematopoietic bone marrow cells was undetectable, and miR-941 did not interact with the 3'UTR of CXCL12. The proposed effects of SNP rs1801157 on G-CSF responsiveness could thus not be confirmed in a larger cohort.
Genocide of the Armenians: diplomatic considerations must not stand in the way of recognition
(2015)
In their guest commentary, Matthias Winkler and Timo Leimeister of Genocide Alert argue that Germany, despite possible diplomatic tensions, should not shy away from explicitly naming the genocide of the Armenians of 1915 as such. A century ago, large parts of the Armenian people in the Ottoman Empire were annihilated in a genocide. The German Reich was a close ally of the Ottoman government of the time and placed alliance politics above the survival of the Armenians. By also acknowledging its own historical responsibility for these events, the Federal Republic can, on the contrary, strengthen the position of the advocates of reconciliation in Turkey...
Single-molecule super-resolution microscopy allows imaging of fluorescently-tagged proteins in live cells with a precision well below that of the diffraction limit. Here, we demonstrate 3D sectioning with single-molecule super-resolution microscopy by making use of the fitting information that is usually discarded to reject fluorophores that emit from above or below a virtual-'light-sheet', a thin volume centred on the focal plane of the microscope. We describe an easy-to-use routine (implemented as an open-source ImageJ plug-in) to quickly analyse a calibration sample to define and use such a virtual light-sheet. In addition, the plug-in is easily usable on almost any existing 2D super-resolution instrumentation. This optical sectioning of super-resolution images is achieved by applying well-characterised width and amplitude thresholds to diffraction-limited spots that can be used to tune the thickness of the virtual light-sheet. This allows qualitative and quantitative imaging improvements: by rejecting out-of-focus fluorophores, the super-resolution image gains contrast and local features may be revealed; by retaining only fluorophores close to the focal plane, virtual-'light-sheet' single-molecule localisation microscopy improves the probability that all emitting fluorophores will be detected, fitted and quantitatively evaluated.
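The described optical sectioning reduces, in essence, to thresholding each localization's fitted PSF width and amplitude against values derived from a calibration sample. A schematic sketch of this filtering step (the threshold values, units, and the column layout of the localization table are assumptions for illustration, not the plug-in's actual interface):

```python
import numpy as np

def virtual_light_sheet(locs, sigma_range=(100.0, 180.0), min_photons=500.0):
    """Keep only localizations whose fitted PSF width and amplitude fall
    inside calibration-derived thresholds, i.e. emitters close to the
    focal plane.  `locs` columns are assumed to be
    (x_nm, y_nm, fitted_sigma_nm, photons)."""
    sigma, photons = locs[:, 2], locs[:, 3]
    in_sheet = (sigma >= sigma_range[0]) & (sigma <= sigma_range[1]) \
               & (photons >= min_photons)
    return locs[in_sheet]

# Three fits: in focus, too wide (out of focus), too dim to pass the
# amplitude threshold — only the first survives.
locs = np.array([[10.0, 20.0, 130.0, 900.0],
                 [11.0, 21.0, 260.0, 700.0],
                 [12.0, 22.0, 140.0, 200.0]])
filtered = virtual_light_sheet(locs)
```

Tightening `sigma_range` corresponds to thinning the virtual light-sheet, trading localization density for axial confinement.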
The formation of particles from precursor vapors is an important source of atmospheric aerosol. Research at the Cosmics Leaving OUtdoor Droplets (CLOUD) facility at CERN tries to elucidate which vapors are responsible for this new-particle formation, and how in detail it proceeds. Initial measurement campaigns at the CLOUD stainless-steel aerosol chamber focused on investigating particle formation from ammonia (NH3) and sulfuric acid (H2SO4). Experiments were conducted in the presence of water, ozone and sulfur dioxide. Contaminant trace gases were suppressed at the technological limit. For this study, we mapped out the compositions of small NH3–H2SO4 clusters over a wide range of atmospherically relevant environmental conditions. We covered [NH3] in the range from < 2 to 1400 pptv, [H2SO4] from 3.3 × 106 to 1.4 × 109 cm−3 (0.1 to 56 pptv), and a temperature range from −25 to +20 °C. Negatively and positively charged clusters were directly measured by an atmospheric pressure interface time-of-flight (APi-TOF) mass spectrometer, as they initially formed from gas-phase NH3 and H2SO4, and then grew to larger clusters containing more than 50 molecules of NH3 and H2SO4, corresponding to mobility-equivalent diameters greater than 2 nm. Water molecules evaporate from these clusters during sampling and are not observed. We found that the composition of the NH3–H2SO4 clusters is primarily determined by the ratio of gas-phase concentrations [NH3] / [H2SO4], as well as by temperature. Pure binary H2O–H2SO4 clusters (observed as clusters of only H2SO4) only form at [NH3] / [H2SO4] < 0.1 to 1. For larger values of [NH3] / [H2SO4], the composition of NH3–H2SO4 clusters was characterized by the number of NH3 molecules m added for each added H2SO4 molecule n (Δm/Δn), where n is in the range 4–18 (negatively charged clusters) or 1–17 (positively charged clusters). For negatively charged clusters, Δm/Δn saturated between 1 and 1.4 for [NH3] / [H2SO4] > 10.
Positively charged clusters grew on average by Δm/Δn = 1.05 and were only observed at sufficiently high [NH3] / [H2SO4]. The H2SO4 molecules of these clusters are partially neutralized by NH3, in close resemblance to the acid–base bindings of ammonium bisulfate. Supported by model simulations, we substantiate previous evidence for acid–base reactions being the essential mechanism behind the formation of these clusters under atmospheric conditions and up to sizes of at least 2 nm. Our results also suggest that electrically neutral NH3–H2SO4 clusters, unobservable in this study, have generally the same composition as ionic clusters for [NH3] / [H2SO4] > 10. We expect that NH3–H2SO4 clusters form and grow also mostly by Δm/Δn > 1 in the atmosphere's boundary layer, as [NH3] / [H2SO4] is mostly larger than 10. We compared our results from CLOUD with APi-TOF measurements of NH3–H2SO4 anion clusters during new-particle formation in the Finnish boreal forest. However, the exact role of NH3–H2SO4 clusters in boundary layer particle formation remains to be resolved.
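The composition metric Δm/Δn used above is the average number of NH3 molecules added per added H2SO4 molecule; for an observed cluster series it can be computed as the slope of m against n. A minimal sketch with a hypothetical anion-cluster series growing by roughly one NH3 per H2SO4:

```python
def delta_m_over_delta_n(clusters):
    """Least-squares slope of m (NH3 count) against n (H2SO4 count)
    over an observed cluster series, given as (n, m) pairs."""
    n = [c[0] for c in clusters]
    m = [c[1] for c in clusters]
    nbar, mbar = sum(n) / len(n), sum(m) / len(m)
    num = sum((ni - nbar) * (mi - mbar) for ni, mi in zip(n, m))
    den = sum((ni - nbar) ** 2 for ni in n)
    return num / den

# Hypothetical series: each step adds one H2SO4 and one NH3.
series = [(4, 2), (6, 4), (8, 6), (10, 8)]
delta_m_over_delta_n(series)   # -> 1.0
```

A value near 1, as measured for [NH3] / [H2SO4] > 10, corresponds to ammonium-bisulfate-like composition.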
Seven different instruments and measurement methods were used to examine the immersion freezing of bacterial ice nuclei from Snomax® (hereafter Snomax), a product containing ice-active protein complexes from non-viable Pseudomonas syringae bacteria. The experimental conditions were kept as similar as possible for the different measurements. Of the participating instruments, some examined droplets which had been made from suspensions directly, and the others examined droplets activated on previously generated Snomax particles, with particle diameters of mostly a few hundred nanometers and up to a few micrometers in some cases. Data were obtained in the temperature range from −2 to −38 °C, and it was found that all ice-active protein complexes were already activated above −12 °C. Droplets with different Snomax mass concentrations covering 10 orders of magnitude were examined. Some instruments had very short ice nucleation times down to below 1 s, while others had comparatively slow cooling rates around 1 K min−1. Displaying data from the different instruments in terms of numbers of ice-active protein complexes per dry mass of Snomax, nm, showed that within their uncertainty, the data agree well with each other as well as with previously reported literature results. Two parameterizations were taken from the literature for a direct comparison to our results: a time-dependent approach based on a contact angle distribution (Niedermeier et al., 2014) and a modification of the parameterization presented in Hartmann et al. (2013) representing a time-independent approach. The agreement between these and the measured data was good; i.e., they agreed within a temperature range of 0.6 K or, equivalently, a range in nm of a factor of 2. From the results presented herein, we propose that Snomax, at least when carefully stored and prepared, is a suitable material to test and compare different instruments for their accuracy of measuring immersion freezing.
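Expressing the data as ice-active entities per dry mass of Snomax, nm, follows the standard singular (Vali-type) counting from the frozen fraction of droplets with known suspension concentration and droplet volume. A sketch of that conversion with purely illustrative numbers:

```python
import math

def n_m(frozen_fraction, mass_conc_g_per_L, droplet_volume_L):
    """Cumulative number of ice-active entities per dry mass (g^-1)
    at a given temperature, from the frozen fraction f via the
    standard singular description n_m = -ln(1 - f) / (C * V),
    where C * V is the dry mass of material per droplet."""
    mass_per_droplet = mass_conc_g_per_L * droplet_volume_L   # g per droplet
    return -math.log(1.0 - frozen_fraction) / mass_per_droplet

# Illustrative: 50% of droplets frozen at some temperature, for a
# 1e-5 g/L Snomax suspension dispersed into 1 nL droplets.
n_m(0.5, 1e-5, 1e-9)   # ~7e13 ice-active protein complexes per gram
```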
Ecolabels are frequently presented as consumer information tools that efficiently promote environmental aims such as the sustainability of fisheries. Two recent WTO dispute settlement cases -- Tuna II and COOL -- have called into question the characterisation of labels as ‘consumer information tools’ by illuminating the regulatory power and purposes of labelling. Tuna II moreover clarifies that WTO law does not necessarily privilege ecolabelling over more openly interventionist government measures aimed at environmental protection. In this contribution I first sketch two views of ecolabelling -- one that depicts ecolabelling as primarily aiming at consumer information and another that stresses the regulatory function of labelling. I then turn to the dispute settlement reports in Tuna II and COOL in order to specify the government authority involved in many labelling schemes. I conclude this contribution with the call for a critical assessment of ecolabelling. The power of ecolabelling may be employed to reshape markets and promote green growth. At the same time, however, it may consolidate a trend that places the consumer at the centre of initiatives for societal change and loses sight of potentially more radical transformations through the engagement of human beings as citizens.
The relationship between the law of compulsory enforcement and constitutional law is a current topic in the debates on civil procedure, constitutional law and (procedural) legal policy, and not only in Germany, as the choice of theme for the above-mentioned annual conference of the International Association of Procedural Law (IAPL) shows. This national report, written from the perspective of German (procedural) law, addresses one aspect of this overall topic: under the general theme 'Constitution, Fundamental Rights and Enforcement Law', it deals in particular with the tension between the conflicting fundamental rights of the judgment debtor and the judgment creditor.
Although much is known about the critical importance of active verbal rehearsal for successful recall, knowledge about the mechanisms of rehearsal and their respective development in children is very limited. To be able to rehearse several items together, these items have to be available, or, if presented and rehearsed previously, retrieved from memory. Therefore, joint rehearsal of several items may itself be considered recall. Accordingly, by analyzing free recall, one cannot only gain insight into how recall and rehearsal unfold, but also into how principles that govern children’s recall govern children’s rehearsal. Over a period of three and a half years (beginning at grade 3) 54 children were longitudinally assessed seven times on several overt rehearsal free recall trials. A first set of analyses on recall revealed significant age-related increases in the primacy effect and an age-invariant recency effect. In the middle portion of the list, wave-shaped recall characteristics emerged and increased with age, indicating grouping of the list into subsequences. In a second set of analyses, overt rehearsal behavior was decomposed into distinct rehearsal sets. Analyses of these sets revealed that the distribution of rehearsals within each set resembled the serial position curves with one- or two-item primacy and recency effects and wave-shaped rehearsal patterns in between. In addition, rehearsal behavior throughout the list was characterized by a decreasing tendency to begin rehearsal sets with the first list item. This result parallels the phenomenon of beginning recall with the first item on short lists and with the last item on longer lists.
Prostaglandin E2 (PGE2) favors multiple aspects of tumor development and immune evasion. Therefore, microsomal prostaglandin E synthase (mPGES-1/-2), is a potential target for cancer therapy. We explored whether inhibiting mPGES-1 in human and mouse models of breast cancer affects tumor-associated immunity. A new model of breast tumor spheroid killing by human PBMCs was developed. In this model, tumor killing required CD80 expression by tumor-associated phagocytes to trigger cytotoxic T cell activation. Pharmacological mPGES-1 inhibition increased CD80 expression, whereas addition of PGE2, a prostaglandin E2 receptor 2 (EP2) agonist, or activation of signaling downstream of EP2 reduced CD80 expression. Genetic ablation of mPGES-1 resulted in markedly reduced tumor growth in PyMT mice. Macrophages of mPGES-1-/- PyMT mice indeed expressed elevated levels of CD80 compared to their wildtype counterparts. CD80 expression in tumor-spheroid infiltrating mPGES-1-/- macrophages translated into antigen-specific cytotoxic T cell activation. In conclusion, mPGES-1 inhibition elevates CD80 expression by tumor-associated phagocytes to restrict tumor growth. We propose that mPGES-1 inhibition in combination with immune cell activation might be part of a therapeutic strategy to overcome the immunosuppressive tumor microenvironment.
The aim of this study was to assess whether endosperm-specific carotenoid biosynthesis influenced core metabolic processes in maize embryo and endosperm and how global seed metabolism adapted to this expanded biosynthetic capacity. Although enhancement of carotenoid biosynthesis was targeted to the endosperm of maize kernels, a concurrent up-regulation of sterol and fatty acid biosynthesis in the embryo was measured. Targeted terpenoid analysis, and non-targeted metabolomic, proteomic, and transcriptomic profiling revealed changes especially in carbohydrate metabolism in the transgenic line. In-depth analysis of the data, including changes of metabolite pools and increased enzyme and transcript concentrations, gave a first insight into the metabolic variation precipitated by the higher up-stream metabolite demand by the extended biosynthesis capacities for terpenoids and fatty acids. An integrative model is put forward to explain the metabolic regulation for the increased provision of terpenoid and fatty acid precursors, particularly glyceraldehyde 3-phosphate and pyruvate or acetyl-CoA from imported fructose and glucose. The model was supported by higher activities of fructokinase, glucose 6-phosphate isomerase, and fructose 1,6-bisphosphate aldolase indicating a higher flux through the glycolytic pathway. Although pyruvate and acetyl-CoA utilization was higher in the engineered line, pyruvate kinase activity was lower. A sufficient provision of both metabolites may be supported by a by-pass in a reaction sequence involving phosphoenolpyruvate carboxylase, malate dehydrogenase, and malic enzyme.
Global warming, changes in the hydrological cycle and enhanced marine primary productivity all have been invoked as having contributed to the occurrence of widespread ocean anoxia during the Cenomanian–Turonian oceanic anoxic event (OAE2; ~94 Ma), but disentangling these factors on a regional scale has remained problematic. In an attempt to separate these forcing factors, we generated palynological and organic geochemical records using a core spanning the OAE2 from Wunstorf, Lower Saxony Basin (LSB; northern Germany), which exhibits cyclic black shale–marl alternations related to the orbital precession cycle.
Despite the widely varying depositional conditions complicating the interpretation of the obtained records, TEX86H indicates that sea-surface temperature (SST) evolution in the LSB during OAE2 resembled that of previously studied sites throughout the proto-North Atlantic. Cooling during the so-called Plenus Cold Event interrupted black shale deposition during the early stages of OAE2. However, TEX86 does not vary significantly across the black shale–marl alternations, suggesting that temperature variations did not force the formation of the cyclic black shale horizons. Relative (i.e., with respect to marine palynomorphs) and absolute abundances of pollen and spores are elevated during phases of black shale deposition, indicative of enhanced precipitation and run-off. High abundances of cysts from inferred heterotrophic and euryhaline dinoflagellates support high run-off, which likely introduced additional nutrients to the epicontinental shelf, resulting in elevated marine primary productivity.
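For context, TEX86H-based SSTs such as those discussed here are obtained from a logarithmic calibration of the form SST = a · log10(TEX86) + b. The constants below are the widely used modern global core-top values; applied to the Cretaceous they are an extrapolation and serve only as an illustration of the conversion:

```python
import math

def sst_from_tex86(tex86, slope=68.4, intercept=38.6):
    """Sea-surface temperature (deg C) from a TEX86 ratio via the
    logarithmic TEX86H calibration, SST = slope * log10(TEX86) + intercept.
    Constants are the commonly cited modern core-top values and are
    purely illustrative for greenhouse-climate applications."""
    return slope * math.log10(tex86) + intercept

sst_from_tex86(0.9)   # ~35.5 degC, in the range of reported OAE2 values
```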
We conclude that orbitally forced enhanced precipitation and run-off, in tandem with elevated marine primary productivity, were critical in cyclic black shale formation on the northern European epicontinental shelf and potentially for other OAE2 sections in the proto-Atlantic and Western Interior Seaway at similar latitudes as well.
The forest, savanna, and grassland biomes, and the transitions between them, are expected to undergo major changes in the future due to global climate change. Dynamic global vegetation models (DGVMs) are very useful for understanding vegetation dynamics under the present climate, and for predicting its changes under future conditions. However, several DGVMs display high uncertainty in predicting vegetation in tropical areas. Here we perform a comparative analysis of three different DGVMs (JSBACH, LPJ-GUESS-SPITFIRE and aDGVM) with regard to their representation of the ecological mechanisms and feedbacks that determine the forest, savanna, and grassland biomes, in an attempt to bridge the knowledge gap between ecology and global modeling. The outcomes of the models, which include different mechanisms, are compared to observed tree cover along a mean annual precipitation gradient in Africa. By drawing on the large number of recent studies that have delivered new insights into the ecology of tropical ecosystems in general, and of savannas in particular, we identify two main mechanisms that need improved representation in the examined DGVMs. The first mechanism includes water limitation to tree growth, and tree–grass competition for water, which are key factors in determining savanna presence in arid and semi-arid areas. The second is a grass–fire feedback, which maintains both forest and savanna presence in mesic areas. Grasses constitute the majority of the fuel load, and at the same time benefit from the openness of the landscape after fires, since they recover faster than trees. Additionally, these two mechanisms are better represented when the models also include tree life stages (adults and seedlings), and distinguish between fire-sensitive, shade-tolerant forest trees and fire-resistant, shade-intolerant savanna trees.
Including these basic elements could improve the predictive ability of the DGVMs, not only under current climate conditions but also and especially under future scenarios.
The Tarim River basin, located in Xinjiang, NW China, is the largest endorheic river basin in China and one of the largest in all of Central Asia. Due to the extremely arid climate, with an annual precipitation of less than 100 mm, the water supply along the Aksu and Tarim rivers depends solely on river water. Both anthropogenic activities (e.g., agriculture) and natural and semi-natural ecosystems rely on this water and compete for it. The ongoing increase in water consumption by agriculture and other human activities in this region has been intensifying the competition for water between human needs and nature. Against this background, 11 German and 6 Chinese universities and research institutes have formed the consortium SuMaRiO (Sustainable Management of River Oases along the Tarim River; http://www.sumario.de), which aims to create a holistic picture of the availability of water resources in the Tarim River basin and of the impacts that the water distribution within the basin has on anthropogenic activities and natural ecosystems. On the basis of the results from field studies and modeling approaches as well as suggestions by the relevant regional stakeholders, a decision support tool (DST) will be implemented that will assist stakeholders in balancing the competition for water, acknowledging the major external effects of water allocation to agriculture and to natural ecosystems. The consortium was formed in 2011 and is funded by the German Federal Ministry of Education and Research. As the data collection phase was finished this year, the paper presented here brings together the results of the field studies in the disciplines of climate modeling, cryology, hydrology, agricultural sciences, ecology, geoinformatics, and social sciences in order to present a comprehensive picture of the effects of different water availability schemes on anthropogenic activities and natural ecosystems along the Tarim River.
The second objective is to present the project structure of the whole consortium, the current status of work (i.e., major new results and findings), explain the foundation of the decision support tool as a key product of this project, and conclude with application recommendations for the region. The discharge of the Aksu River, which is the major tributary of the Tarim, has been increasing over the past 6 decades. From 1989 to 2011, agricultural area more than doubled: cotton became the major crop and there was a shift from small-scale to large-scale intensive farming. The ongoing increase in irrigated agricultural land leads to the increased threat of salinization and soil degradation caused by increased evapotranspiration. Aside from agricultural land, the major natural and semi-natural ecosystems are riparian (Tugai) forests, shrub vegetation, reed beds, and other grassland, as well as urban and peri-urban vegetation. Within the SuMaRiO cluster, focus has been set on the Tugai forests, with Populus euphratica as the dominant tree species, because these forests belong to the most productive and species-rich natural ecosystems of the Tarim River basin. At sites close to the groundwater, the annual stem diameter increments of Populus euphratica correlated with the river runoffs of the previous year. However, the natural river dynamics cease along the downstream course and thus hamper the recruitment of Populus euphratica. A study on the willingness to pay for the conservation of the natural ecosystems was conducted to estimate the concern of the people in the region and in China's capital. These household surveys revealed that there is a considerable willingness to pay for conservation of the natural ecosystems, with mitigation of dust and sandstorms considered the most important ecosystem service. Stakeholder dialogues contributed to creating a scientific basis for a sustainable management in the future.
This study sets out to empirically explore the question posed by Paul Natorp (1907) of what community means for education and, conversely, what education means for community. At the center of the research interest is the relationship between school and community in the 'postnational constellation' (Habermas 1998), which the study approaches through the analysis of classroom communication on the topics of 'National Socialism/Holocaust' and 'multiculturalism/racism'.
In preparation for the empirical study, a semantic study first traces the pedagogical discourse on community, from the earliest precursors of community-oriented pedagogical thought to current conceptual designs. The function and meaning of the figure of community as a reference category of educational reflection in modernity are worked out. Furthermore, the reconstructed community-pedagogical concepts are examined with regard to their potential tension with the guiding principles of democratic education.
Using sequence-analytical interpretations, the study then investigates what role references to community play as a vanishing point of pedagogical efforts at influence in the classroom. At the same time, it asks whether, and in what way, teaching draws on community as an enabling form for realizing its pedagogical intentions. Finally, the analysis turns to how teachers deal with the potential challenges that, under conditions of migration, are associated with recourse to community in (history) lessons.
The interpretations show how teaching draws on community in order to secure the conditions of its own processing. They further reveal two contrasting types of recourse to community as a vanishing point of pedagogical communication on the topic of National Socialism. As a solution for dealing with the migration-related challenges of history lessons on National Socialism, a tendency emerges to refer increasingly to a universalistic culture of remembrance and responsibility.
The potential danger attributed to the idea of community in the current educational-science debate proves to be limited in the classroom sequences examined. Rather, the observations suggest the thesis of a contained form of community education in current teaching practice, in which the risk that, according to its critics, is inscribed in community-pedagogical approaches is curbed by the normative self-commitments of pedagogical practice.
I’m probably not alone in observing that there seems to be an increasing number of data articles being published in the field of conflict studies and IR. Together with some colleagues, I’m even preparing one myself at the moment! Is that perceived increase in data publication actually measurable? And does it indeed amount to “drowning”?
This paper explores how banks adjust their risk-based capital ratios and asset allocations following an exogenous shock to their asset quality caused by Hurricane Katrina in 2005. We find that independent banks based in the disaster areas increase their risk-based capital ratios after the hurricane, while those that are part of a bank holding company do not. The effect on independent banks is driven mainly by the subgroup of highly capitalized banks. These banks increase their holdings of government securities and reduce loans to non-financial firms. Hence, banks that become more stable achieve this at the cost of reduced lending.
A number of recent studies regress a "narratively" identified measure of a macroeconomic shock directly on an outcome variable. In this note, we argue that this approach can be viewed as the reduced-form regression of an instrumental variable approach in which the narrative time series is used as an instrument for an endogenous series of interest. This motivates evaluating the validity of narrative measures through the lens of a randomized experiment. We apply our framework to four recently constructed narrative measures of tax shocks by Romer and Romer (2010), Cloyne (2013), and Mertens and Ravn (2012). All of them turn out to be weak instruments for observable measures of taxes. After correcting for weak instruments, we find that using any of the considered narrative tax measures as an instrument for cyclically adjusted tax revenues yields tax multiplier estimates that are indistinguishable from zero. We conclude that the literature currently understates the uncertainty associated with quantifying the tax multiplier.
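The equivalence the note describes — a reduced-form regression of the outcome on the narrative measure being one half of an instrumental-variable estimate — can be illustrated with a toy simulation. All data here are synthetic and the coefficients are invented for illustration; this is not the authors' dataset or specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: z is a "narrative" shock measure used as an instrument,
# x is the endogenous tax series, y is the outcome. The common shock u
# enters both x and y, making OLS of y on x inconsistent.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + u + rng.normal(size=n)        # first stage (relevance: 0.8)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # true causal effect of x is 2.0

def slope(a, b):
    """OLS slope of b on a (both series are mean zero, so no intercept)."""
    return np.dot(a, b) / np.dot(a, a)

reduced_form = slope(z, y)   # regress outcome directly on the narrative measure
first_stage = slope(z, x)    # regress endogenous series on the instrument
iv_estimate = reduced_form / first_stage  # indirect least squares = IV estimate

print(reduced_form, first_stage, iv_estimate)
```

The reduced-form coefficient is exactly the product of the first-stage and IV coefficients, which is why a direct regression on the narrative measure is only interpretable when the first stage is strong — a weak first stage inflates the IV estimate's uncertainty.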
This paper studies a dynamic stochastic general equilibrium model involving climate change. Our model allows for damage to economic growth resulting from global warming. In the calibration, we capture effects from climate change and feedback effects on the temperature dynamics. We solve for the optimal state-dependent abatement policy. In our simulations, the costs of this policy, measured in terms of lost GDP growth, are moderate. On the other hand, postponing abatement action reduces the probability that the climate can be stabilized. For instance, waiting for 10 years reduces this probability from 60% to 30%; waiting for another 10 years leads to a probability of less than 10%. Finally, doing nothing creates the risk that temperatures explode and economic growth decreases significantly.
The banking system is highly interconnected and these connections can be conveniently represented as an interbank network. This survey presents a systematic overview of the recent advances in the theoretical literature on interbank networks. We assess our current understanding of the structure of interbank networks, of how network characteristics affect contagion in the banking system and of how banks form connections when faced with the possibility of contagion and systemic risk. In particular, we highlight how the theoretical literature on interbank networks offers a coherent way of studying interconnections, contagion processes and systemic risk, while emphasizing at the same time the challenges that must be addressed before general results on the link between the structure of the interbank network and financial stability can be established. The survey concludes with a discussion of the policy relevance of interbank network models with a special focus on macroprudential policies and monetary policy.
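The contagion processes the survey discusses can be made concrete with a minimal threshold-contagion sketch on an interbank exposure network. This is a generic illustration, not a model from the surveyed literature: a bank defaults once write-downs on its exposures to already-defaulted counterparties exceed its capital buffer, and the cascade is iterated to a fixed point.

```python
def cascade(exposures, capital, initially_failed):
    """Iterate a default cascade to its fixed point.

    exposures[i][j] is the amount bank i owes bank j (i.e., j's exposure
    to i); capital[j] is bank j's loss-absorbing buffer. Returns the set
    of banks in default at the end of the cascade.
    """
    n = len(capital)
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for j in range(n):
            if j in failed:
                continue
            loss = sum(exposures[i][j] for i in failed)
            if loss > capital[j]:
                failed.add(j)
                changed = True
    return failed

# Ring network of 4 banks: each owes 5 to its successor, buffers of 3.
# One initial failure propagates all the way around the ring.
exposures = [[0, 5, 0, 0],
             [0, 0, 5, 0],
             [0, 0, 0, 5],
             [5, 0, 0, 0]]
capital = [3, 3, 3, 3]
print(sorted(cascade(exposures, capital, {0})))  # full cascade: [0, 1, 2, 3]
```

Raising the buffers above the single-counterparty exposure (e.g., capital of 6) stops the cascade at the initial failure, which is the kind of structure-dependent result the theoretical literature formalizes.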
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the 17 immersion freezing measurement techniques deviate, within a range of about 8 °C in terms of temperature, by 3 orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space for both specific surface area-based ns(T) and geometric surface area-based ns(T) are provided. These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
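The active-site density used throughout the intercomparison follows the standard singular description, in which the frozen fraction of a droplet population is related to ns via f_ice = 1 − exp(−ns · A), with A the ice-nucleating particle surface area per droplet. A minimal sketch of inverting that relation (the numbers below are illustrative, not values from the study):

```python
import math

def ns_from_frozen_fraction(f_ice, surface_area_per_droplet):
    """Ice-nucleation active surface-site density n_s (m^-2) from the
    frozen fraction of a droplet population, assuming the singular
    description f_ice = 1 - exp(-n_s * A)."""
    if not 0.0 <= f_ice < 1.0:
        raise ValueError("frozen fraction must be in [0, 1)")
    return -math.log(1.0 - f_ice) / surface_area_per_droplet

# Example: 10% of droplets frozen at a given temperature, each droplet
# containing particles with a total surface area of 1e-10 m^2
# (illustrative numbers only).
ns = ns_from_frozen_fraction(0.10, 1e-10)
print(f"{ns:.3e} m^-2")  # ~1.05e9 m^-2
```

Whether A is taken as geometric or as specific (gas-adsorption) surface area is exactly the distinction behind the two families of ns(T) fits the study provides.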
Objectives: The aim of our study was to determine how much energy (tube current-time product) can be applied in second-generation dual-source high-pitch computed tomography (CT) imaging of the abdomen.
Materials and methods: We examined an upper abdominal phantom using a Somatom Definition Flash CT scanner (Siemens, Forchheim, Germany). The study protocol consisted of scan series at 100 kV and at 120 kV. In each scan series we started with a pitch of 3.2 and reduced it in steps of 0.2 until a pitch of 1.6 was reached. The tube current was set to the maximum the scanner could achieve. Energy values, image noise, image quality, and radiation exposure were evaluated.
Results: At a pitch of 3.2, the maximum applicable current-time product was 142 mAs at 120 kV and 114 mAs at 100 kV. For conventional abdominal imaging, levels of 200 to 260 mAs are generally used. To achieve similar levels at 100 kV, we had to decrease the pitch to 1.8, at which we could image at 204 mAs. At a pitch of 2.2 at 120 kV, we could apply 206 mAs.
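The reported values are consistent with the maximum effective mAs scaling inversely with pitch at a fixed tube output (effective mAs ≈ tube current × rotation time / pitch). A toy check, where the fixed-output constants are inferred from the reported data points rather than taken from any scanner specification:

```python
# Sketch of the inverse pitch/effective-mAs relationship implied by the
# reported measurements (constants inferred, not scanner specifications).

def max_effective_mas(pitch, tube_output_mas):
    """Maximum effective mAs at a given pitch, assuming the scanner's
    tube output (current x rotation time) is the fixed limit."""
    return tube_output_mas / pitch

# Fixed tube-output constants inferred from the pitch-3.2 data points:
# 120 kV: 142 mAs at pitch 3.2 -> ~454 "mAs x pitch"
# 100 kV: 114 mAs at pitch 3.2 -> ~365 "mAs x pitch"
output_120kv = 142 * 3.2
output_100kv = 114 * 3.2

print(round(max_effective_mas(2.2, output_120kv)))  # ~207; reported: 206 mAs
print(round(max_effective_mas(1.8, output_100kv)))  # ~203; reported: 204 mAs
```

The predicted values at pitch 2.2 (120 kV) and pitch 1.8 (100 kV) match the measured 206 and 204 mAs to within rounding, supporting the simple inverse-pitch reading of the results.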
Conclusion: If a higher current-time product is needed, the pitch must be reduced. In high-pitch dual-source CT, the pitch should therefore be chosen according to the energy required in the body region to be imaged and to the clinical question being addressed.
This paper presents results from the "INUIT-JFJ/CLACE 2013" field campaign at the high alpine research station Jungfraujoch in January/February 2013. The chemical composition of ice particle residuals (IPR) in a size diameter range of 200–900 nm was measured in orographic, convective and non-convective clouds with a single particle mass spectrometer (ALABAMA) under ambient conditions characterized by temperatures between −28 and −4 °C and wind speeds from 0.1 to 21 km h−1. Additionally, background aerosol particles in cloud-free air were investigated. The IPR were sampled from mixed-phase clouds with two inlets that selectively extract small ice crystals in-cloud, namely the Counterflow Virtual Impactor (Ice-CVI) and the Ice Selective Inlet (ISI). The IPR as well as the aerosol particles were classified into seven different particle types: (1) black carbon, (2) organic carbon, (3) black carbon internally mixed with organic carbon, (4) minerals, (5) one particle group (termed "BioMinSal") that may contain biological particles, minerals, or salts, (6) industrial metals, and (7) lead-containing particles. For each sampled particle population, the single particle mass spectrometer was used to determine how many of the analyzed particles belonged to each of these categories. Accordingly, between 20 and 30% of the IPR and roughly 42% of the background particles contained organic carbon. The measured fractions of minerals in the IPR composition varied from 6 to 33%, while the values for the "BioMinSal" group were between 15 and 29%. Between 4 and 31% of the IPR contained organic carbon mixed with black carbon. Both inlets delivered similar results for the chemical composition and the particle size distribution, although lead was found only in the IPR sampled by the Ice-CVI.
The results show that the ice particle residual composition varies substantially between different cloud events, which indicates the influence of different meteorological conditions, such as origin of the air masses, temperature and wind speed.
Irrigation intensifies land use by increasing crop yield but also impacts water resources. It affects water and energy balances and consequently the microclimate in irrigated regions. Therefore, knowledge of the extent of irrigated land is important for hydrological and crop modelling, global change research, and assessments of resource use and management. Information on the historical evolution of irrigated land is limited. The new global historical irrigation data set (HID) provides estimates of the temporal development of the area equipped for irrigation (AEI) between 1900 and 2005 at 5 arcmin resolution. We collected sub-national irrigation statistics from various sources and found that the global extent of AEI increased from 63 million ha (Mha) in 1900 to 111 Mha in 1950 and 306 Mha in 2005. We developed eight gridded versions of time series of AEI by combining sub-national irrigation statistics with different data sets on the historical extent of cropland and pasture. Different rules were applied to maximize the consistency of the gridded products with either the sub-national irrigation statistics or the historical cropland and pasture data sets. The HID reproduces well the spatial patterns of irrigated land shown on historical maps for the western United States (around 1900) and on a global map (around 1960). Mean aridity on irrigated land increased and mean natural river discharge on irrigated land decreased from 1900 to 1950, whereas aridity decreased and river discharge remained approximately constant from 1950 to 2005. The data set and its documentation are made available in an open-data repository at https://mygeohub.org/publications/8 (doi:10.13019/M20599).
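The time-series step of such a reconstruction — estimating AEI for years between statistical reference points — can be sketched with a simple piecewise-linear interpolation. The anchor values below are the global totals reported in the abstract (63 Mha in 1900, 111 Mha in 1950, 306 Mha in 2005); the actual HID works with sub-national statistics and gridded land-use data, so this only illustrates the idea, not the data set's method.

```python
def interpolate_aei(year, anchors):
    """Piecewise-linear interpolation of AEI (Mha) between anchor years.

    anchors maps reference years to AEI values; years outside the
    anchor range are clamped to the nearest endpoint.
    """
    years = sorted(anchors)
    if year <= years[0]:
        return anchors[years[0]]
    if year >= years[-1]:
        return anchors[years[-1]]
    for y0, y1 in zip(years, years[1:]):
        if y0 <= year <= y1:
            frac = (year - y0) / (y1 - y0)
            return anchors[y0] + frac * (anchors[y1] - anchors[y0])

# Global AEI anchors from the abstract (Mha).
anchors = {1900: 63.0, 1950: 111.0, 2005: 306.0}
print(interpolate_aei(1925, anchors))  # midpoint of 63 and 111 -> 87.0
```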