University Publications
Year of publication
- 2015 (1742)
Document Type
- Article (606)
- Doctoral Thesis (187)
- Working Paper (169)
- Contribution to a Periodical (164)
- Book (159)
- Report (157)
- Part of Periodical (124)
- Review (70)
- Preprint (55)
- Conference Proceeding (22)
Language
- English (866)
- German (835)
- Spanish (14)
- Italian (11)
- Portuguese (11)
- French (3)
- Multiple languages (1)
- Russian (1)
Is part of the Bibliography
- no (1742)
Keywords
- Islamischer Staat (34)
- IS (25)
- Terrorismus (23)
- Deutschland (16)
- Dschihadismus (13)
- Syrien (12)
- Terror (11)
- Irak (10)
- Islamismus (10)
- Salafismus (10)
Institute
- Präsidium (336)
- Medizin (252)
- Gesellschaftswissenschaften (230)
- Physik (185)
- Wirtschaftswissenschaften (149)
- Exzellenzcluster Die Herausbildung normativer Ordnungen (116)
- Center for Financial Studies (CFS) (115)
- Biowissenschaften (99)
- Frankfurt Institute for Advanced Studies (FIAS) (97)
- Informatik (96)
The Critique of the Phenomenological Vision is an attempt to critique phenomenology, by way of Critical Theory and the philosophy of Emmanuel Lévinas, which characterizes phenomenology as an eidetic science. We therefore offer a brief history of the concept of the eidos, which has been understood as an ideal archetype since Platonism. We address the opposition between materialism and idealism rooted in Plato's Theory of Forms, Aristotle's hylomorphism, and Lucretius's materialist theory of simulacra. The substantive question, "materialism and/or idealism", leads us to the principles of individuation, to formalism, and to concepts of reification. Husserl's phenomenology was born in the Kulturkampf, which was characterized by the surge of positivism into idealism. Seen from this angle, phenomenology is a certain idealist tour de force against positivism. Phenomenology tries to integrate the contemporary currents of German philosophy, and it is here, and not in biology, that the struggle for life is located, according to Husserl. The problem of the phenomenological vision, with regard to a notion of "race" carrying meanings that are not specifically biological, is a problem that goes back to Aristotle. According to him, the use of the eidos is also synonymous with the categories of genus and species. Husserl's eidos includes Aristotle's conception and presents itself as a possible means of constructing a metaphysical concept of race outside of biology. The eidos as type, as it is constituted in the Lebenswelt, is ultimately characterized by the transformation of the Umwelt into a Heimwelt, in which the individual is passively formed by tradition, by habitus, by soil and blood: a world of the average, of "normality". We attempt to show, within this process of the irrational upheaval of philosophy in Germany, the particular and tragic case of the fate of Husserl's phenomenology in the hands of Heidegger, who suggests a self-limitation of phenomenology to the search for a meaning aimed at the unity of Dasein. Our aim here is simple and radical: just as Marx showed that Hegel's philosophy is nothing but the collection of the categories of bourgeois philosophy in decline, Lévinas and the Frankfurt School have shown that Heidegger's philosophy is nothing but a continuation of Hegelian philosophy, but at a more abstract and also more global level.
Objective: To investigate the accuracy, efficiency, and radiation dose of a novel laser navigation system (LNS) compared with those of free-handed punctures under computed tomography (CT) guidance.
Materials and methods: Sixty punctures were performed on a phantom body to compare the accuracy, time required, and radiation dose of the conventional free-handed procedure with those of the LNS-guided method. An additional 20 LNS-guided interventions were performed on a second phantom to confirm accuracy. Ten patients subsequently underwent LNS-guided punctures.
Results: The phantom 1-LNS group showed a target point accuracy of 4.0 ± 2.7 mm (freehand, 6.3 ± 3.6 mm; p = 0.008), entrance point accuracy of 0.8 ± 0.6 mm (freehand, 6.1 ± 4.7 mm), needle angulation accuracy of 1.3 ± 0.9° (freehand, 3.4 ± 3.1°; p < 0.001), an intervention time of 7.03 ± 5.18 minutes (freehand, 8.38 ± 4.09 minutes; p = 0.006), and 4.2 ± 3.6 CT images (freehand, 7.9 ± 5.1; p < 0.001). These results show a significant improvement over the freehand technique across 60 punctures. The phantom 2-LNS group showed a target point accuracy of 3.6 ± 2.5 mm, entrance point accuracy of 1.4 ± 2.0 mm, needle angulation accuracy of 1.0 ± 1.2°, an intervention time of 1.44 ± 0.22 minutes, and 3.4 ± 1.7 CT images. In the first experience with patients, the LNS group achieved a target point accuracy of 5.0 ± 1.2 mm, entrance point accuracy of 2.0 ± 1.5 mm, needle angulation accuracy of 1.5 ± 0.3°, and an intervention time of 12.08 ± 3.07 minutes, using 5.7 ± 1.6 CT images.
Conclusion: The laser navigation system improved the accuracy, intervention time, and radiation dose of CT-guided interventions.
Background: The complex cellular networks within tumors, the cytokine milieu, and tumor immune escape mechanisms affecting infiltration and anti-tumor activity of immune cells are of great interest to understand tumor formation and to decipher novel access points for cancer therapy. However, cellular in vitro assays, which rely on monolayer cultures of mammalian cell lines, neglect the three-dimensional architecture of a tumor, thus limiting their validity for the in vivo situation.
Methods: Three-dimensional in vivo-like tumor spheroids were established from human cervical carcinoma cell lines as proof of concept to investigate infiltration and cytotoxicity of NK cells in a 96-well plate format, which is applicable to high-throughput screening. Tumor spheroids were monitored for NK cell infiltration and cytotoxicity by flow cytometry. Infiltrated NK cells could be recovered by magnetic cell separation.
Results: The tumor spheroids were stable over several days with minor alterations in phenotypic appearance. The tumor spheroids expressed high levels of cellular ligands for the natural killer (NK) group 2D receptor (NKG2D), mediating spheroid destruction by primary human NK cells. Interestingly, destruction of a three-dimensional tumor spheroid took much longer when compared to the parental monolayer cultures. Moreover, destruction of tumor spheroids was accompanied by infiltration of a fraction of NK cells, which could be recovered at high purity.
Conclusion: Tumor spheroids represent a versatile in vivo-like model system to study cytotoxicity and infiltration of immune cells in high-throughput screening. This system might prove useful for investigating the modulatory potential of soluble factors and cells of the tumor microenvironment on immune cell activity, as well as for profiling patient-/donor-derived immune cells to personalize cellular immunotherapy.
Background: Plant hormones are well known regulators which balance plant responses to abiotic and biotic stresses. We investigated the role of abscisic acid (ABA) in resistance of barley (Hordeum vulgare L.) against the plant pathogenic fungus Magnaporthe oryzae.
Results: Exogenous application of ABA prior to inoculation with M. oryzae led to more disease symptoms on barley leaves. This result contrasts with the finding that ABA application enhances resistance of barley against the powdery mildew fungus. Microscopic analysis identified diminished penetration resistance as the cause of the enhanced susceptibility. Consistently, the barley mutant Az34, impaired in ABA biosynthesis, was less susceptible to infection by M. oryzae and displayed elevated penetration resistance compared to the isogenic wild-type cultivar Steptoe. Chemical complementation of Az34 mutant plants by exogenous application of ABA re-established disease severity at the wild-type level. The role of ABA in susceptibility of barley to M. oryzae was corroborated by showing that ABA application led to increased disease severity in all barley cultivars under investigation except for the most susceptible cultivar, Pallas. Interestingly, endogenous ABA concentrations did not change significantly after infection of barley with M. oryzae.
Conclusion: Our results revealed that elevated ABA levels led to higher severity of M. oryzae disease on barley leaves. This supports earlier reports on the role of ABA in enhancing susceptibility of rice to the same pathogen and thereby demonstrates a host-plant-independent function of this phytohormone in the susceptibility of monocotyledonous plants to M. oryzae.
Background: High reproducibility of LV mass and volume measurement from cine cardiovascular magnetic resonance (CMR) has been shown within single centers. However, the extent to which contours may vary from center to center, due to different training protocols, is unknown. We aimed to quantify sources of variation between many centers, and provide a multi-center consensus ground truth dataset for benchmarking automated processing tools and facilitating training for new readers in CMR analysis.
Methods: Seven independent expert readers, representing seven experienced CMR core laboratories, analyzed fifteen cine CMR data sets in accordance with their standard operating protocols and SCMR guidelines. Consensus contours were generated for each image according to a statistical optimization scheme that maximized contour placement agreement between readers.
Results: Reader-consensus agreement was better than inter-reader agreement (end-diastolic volume 14.7 ml vs 15.2–28.4 ml; end-systolic volume 13.2 ml vs 14.0–21.5 ml; LV mass 17.5 g vs 20.2–34.5 g; ejection fraction 4.2 % vs 4.6–7.5 %). Compared with consensus contours, readers were very consistent (small variability across cases within each reader), but bias varied between readers due to differences in contouring protocols at each center. Although larger contour differences were found at the apex and base, the main effect on volume was due to small but consistent differences in the position of the contours in all regions of the LV.
Conclusions: A multi-center consensus dataset was established for the purposes of benchmarking and training. Achieving consensus on contour drawing protocol between centers before analysis, or bias correction after analysis, is required when collating multi-center results.
We analyze the macroeconomic implications of increasing the top marginal income tax rate using a dynamic general equilibrium framework with heterogeneous agents and a fiscal structure resembling the actual U.S. tax system. The wealth and income distributions generated by our model replicate the empirical ones. In two policy experiments, we increase the statutory top marginal tax rate from 35 to 70 percent and redistribute the additional tax revenue among households, either by decreasing all other marginal tax rates or by paying out a lump-sum transfer to all households. We find that increasing the top marginal tax rate decreases inequality in both wealth and income but also leads to a contraction of the aggregate economy. This is primarily driven by the negative effects that the tax change has on top income earners. The aggregate gain in welfare is sizable in both experiments mainly due to a higher degree of distributional equality.
This paper looks into the specific influence that the European banking union will have on (future) bank client relationships. It shows that the intended regulatory influence on market conditions in principle serves as a powerful governance tool to achieve financial stability objectives.
From this vantage point, it analyzes macro-prudential instruments with a particular view to mortgage lending markets, which have been critical in the emergence of many modern financial crises. In gauging the impact of the new European supervisory framework, it finds that the ECB will lack influence on key macro-prudential tools to push through more rigid supervisory policies vis-à-vis forbearing national authorities.
Furthermore, this paper points out that the current design of the European bail-in tool supplies resolution authorities with undue discretion. This feature, which also afflicts the SRM, imperils the key policy objective of re-instilling market discipline on banks' debt financing operations. The latter is also called into question because the nested regulatory technique that aims at preventing bail-outs unintentionally opens additional maneuvering space for political decision makers.
Article 136(3) of the Treaty on the Functioning of the EU (TFEU) stipulates that ESM funds may be used to grant financial assistance only "... if indispensable to safeguard the stability of the euro area as a whole." In this article, Alfons Weichenrieder argues that the situation that arose after the Greek referendum does not threaten the stability of the "euro area as a whole", so that granting new loans, especially since these would presumably be granted under soft and, in case of doubt, unenforceable conditions, would be an obvious violation of the foundations of the ESM.
In this statement the European Shadow Financial Regulatory Committee (ESFRC) advocates a conditional relief of Greece's government debt, based on Greece meeting certain targets for structural economic reforms in areas such as its labor market and pension sector. The authors argue that the position of the European institutions that debt relief for Greece cannot be part of an agreement is based on the illusion that Greece will be able to service its sovereign debt and reduce its debt overhang after implementing a set of fiscal and structural reforms. However, the Greek economy would need to grow at an unrealistic rate to achieve debt sustainability solely on the basis of reforms. The authors therefore view substantial debt relief as inevitable and argue that three questions must be resolved urgently in order to structure debt relief adequately: first, which groups must accept losses associated with debt relief; second, how much debt relief should be offered; and third, under what conditions relief should be offered.
In light of the failed negotiations with Greece, Jan Krahnen argues that an effective reform agenda for Greece can only be designed by the elected government. Fundamental reforms will take time to take full effect and euro area member states will, in the meantime, have to offer Greece a basic level of economic security.
Krahnen demands that policy makers and the professional public involved view the Greek crisis as an opportunity to take the next necessary steps to formulate a reform agenda for the European Monetary Union. A community of supranational and non-party researchers and intellectuals could take the initiative and in a structured process develop a trustworthy and realistic concept that drafts the next big step towards a political union of Europe, including elements of a fiscal union.
In view of the failed negotiations with Greece, Jan Krahnen argues in this policy contribution that an effective reform agenda can only be formulated by Greece's elected government. The euro-area member states would have to guarantee Greece a basic level of economic security for the duration of a restructuring period. Krahnen calls on the EU member states to draw the necessary conclusions from the Greek crisis: the euro area, too, needs an effective reform agenda. The debt dynamics within the monetary union, whose excesses become particularly evident in the case of Greece, can, in the absence of good will, only be resolved by a political union and a fiscal union embedded within it. Krahnen argues that continuing to negotiate over restructuring conditions will not lead out of the current deadlock. What is decisive is to put together a more or less comprehensive package that combines elements of a partial international sharing of liability with elements of a partial surrender of national sovereignty.
Negative interest rates on deposits: legal obstacles and their implications for competition policy (Negative Zinsen auf Einlagen – juristische Hindernisse und ihre wettbewerbspolitischen Auswirkungen)
(2015)
In the persistent low-interest-rate environment, banks are struggling to channel the liquidity made available to them into profitable demand. In addition, they must pay penalty interest on liquidity surpluses deposited overnight with the national central banks of the euro area under the Eurosystem's deposit facility. Against this background, banks might pursue negative deposit rates in order to reduce the demand for holding (sight) deposits. From a legal perspective, however, such a strategy faces obstacles insofar as the described paradigm shift is to be imposed unilaterally within existing customer relationships. The legal hurdles are neither an expression of hair-splitting remote from reality nor of a consumer-protection furor. Rather, they enable private and commercial bank customers to make a conscious decision about the use of their liquid funds at the time of the intended interest rate adjustment.
When markets are incomplete, social security can partially insure against idiosyncratic and aggregate risks. We incorporate both risks into an analytically tractable model with two overlapping generations. We derive the equilibrium dynamics in closed form and show that the joint presence of both risks leads to over-proportional risk exposure for households. This implies that the total benefit from insurance through social security is greater than the sum of the benefits from insuring against each of the two risks in isolation. We measure this through interaction effects, which appear even though the two risks are orthogonal by construction. While the interactions unambiguously increase the welfare benefits from insurance, they can either increase or decrease the welfare costs from crowding out of capital formation. The net effect depends on the relative strengths of the opposing forces.
We investigate the relationship between anchoring and the emergence of bubbles in experimental asset markets. We show that setting a visual anchor at the fundamental value (FV) in the first period only is sufficient to eliminate or to significantly reduce bubbles in laboratory asset markets. If no FV-anchor is set, bubble-crash patterns emerge. Our results indicate that bubbles in laboratory environments are primarily sparked in the first period. If prices are initiated around the FV, they stay close to the FV over the entire trading horizon. Our insights can be related to initial public offerings and the interaction between prices set on pre-opening markets and subsequent intra-day price dynamics.
The pressure on tax haven countries to engage in tax information exchange shows first effects on capital markets. Empirical research suggests that investors do react to information exchange and partially withdraw from previous secrecy jurisdictions that open up to information exchange. While some of the economic literature emphasizes possible positive effects of tax havens, the present paper argues that proponents of positive effects may have started from questionable premises, in particular when it comes to the effects that tax havens have for emerging markets like China and India.
In this paper we compute the optimal tax and education policy transition in an economy where progressive taxes provide social insurance against idiosyncratic wage risk but distort the education decision of households. Optimally chosen tertiary education subsidies mitigate these distortions. We highlight the importance of two different channels through which academic talent is transmitted across generations (persistence of innate ability vs. the impact of parental education) for the optimal design of these policies, and we model different forms of labor as imperfect substitutes, thereby generating general equilibrium feedback effects from policies to the relative wages of skilled and unskilled workers. We show that subsidizing higher education has important redistributive benefits by shrinking the college wage premium in general equilibrium. We also argue that a full characterization of the transition path is crucial for policy evaluation. We find that optimal education policies are always characterized by generous tuition subsidies, but the optimal degree of income tax progressivity depends crucially on whether transitional costs of policies are explicitly taken into account and on how strongly the college premium responds to policy changes in general equilibrium.
In an experimental setting in which investors can entrust their money to traders, we investigate how compensation schemes affect liquidity provision and asset prices. Investors face a trade-off between risk and return. In exchange for a potentially higher return, they can entrust their money to a trader. However, this investment is risky, as the trader might not be trustworthy. Alternatively, they can opt for a safe but low return. We study how subjects solve this trade-off when traders are either liable for losses or not, and when their bonuses are either capped or not. Limited liability introduces a conflict of interest because it makes traders value the asset more than investors. To limit losses, investors should thus restrict liquidity provision to force traders to trade at a lower price. By contrast, bonus caps make traders value the asset less than investors. This should encourage liquidity provision and decrease prices. Contrary to these predictions, we find that under limited liability investors contribute to asset price bubbles by increasing liquidity provision, and that caps fail to tame bubbles. Overall, giving investors skin in the game fosters financial stability.
Since August 2009, German legislation has allowed voluntary Say on Pay Votes (SoPV) at Annual General Meetings (AGMs). We examine 1,169 AGMs of all German listed firms, covering more than 10,000 agenda items over the period 2010-2013, to identify (1) determinants and approval rates of voluntary SoPVs, (2) the effect of voluntary SoPVs on AGM participation, and (3) the effect of SoP on executive compensation. Our data reveal that in the first four years of the voluntary say-on-pay regime, every second firm in our sample opted for a SoPV. The propensity for a SoPV increases with firm size, abnormal executive compensation, and the free float of shares. Indeed, smaller firms with concentrated ownership not only have a lower propensity for a SoPV but also show a higher propensity to opt for only limited disclosure of executive compensation. Approval rates of SoPVs are lower than the approval rate for the average AGM agenda item, and this effect is stronger (i) in widely held firms and (ii) in firms with abnormal executive compensation. Additionally, SoPVs can increase AGM participation; this result is particularly evident for widely held firms. Finally, we find stronger pay-for-performance elements within total executive compensation, particularly when the effect of executive compensation is lagged over the years following the vote. Overall, our results are consistent with the view that firms use voluntary SoPVs to gain legitimation for executive remuneration policies in firms with low ownership concentration. This is reinforced where (small) shareholders consider executive compensation part of the agency problem of listed firms and view SoPVs as a means to actively influence corporate decisions, with these decisions leading to a higher degree of alignment between executive management boards and shareholders.
The standard view suggests that removing barriers to entry and improving judicial enforcement reduces informality and boosts investment and growth. However, a general equilibrium approach shows that this conclusion may hold to a lesser extent in countries with a constrained supply of funds because of, for example, a more concentrated banking sector or lower financial openness. When the formal sector grows larger in those countries, more entrepreneurs become creditworthy, but the higher pressure on the credit market limits further capital accumulation. We show empirical evidence consistent with these predictions.
The seismicity of the northern Upper Rhine Graben (URG) is of general interest because of its potential for geothermal exploitation and the seismic risks possibly associated with it. Detailed knowledge of the natural seismicity allows conclusions about active fault zones and stress conditions in the subsurface. It also provides important background information for assessing possible induced seismicity. Investigations characterizing the natural seismicity, the stress field, and the seismic hazard of the northern URG are the main component of this thesis, which was carried out within the BMU/BMWi project SiMoN (seismic monitoring in connection with the geothermal exploitation of the northern Upper Rhine Graben). Recordings from a network of 13 seismic stations serve as the data basis for characterizing the seismicity within an area of about 50 x 60 km2 in the densely populated Rhine-Main region. Investigations of the noise conditions, carried out to evaluate the suitability of the station sites for recording natural seismicity, yielded very good spectral properties for stations on bedrock, whereas all stations on the sediments of the URG showed considerably higher noise levels. Systematic measurements in shallow boreholes made it possible to describe lateral and vertical variations of the seismic noise and thereby to observe an improvement of the detection threshold.
The results of the seismic monitoring are presented for the period from November 2010 to December 2014. The detection threshold of the network corresponds to a local magnitude of about 0.5; the magnitude of completeness is Mc = 1.2. Since the beginning of data recording, 243 earthquakes with magnitudes between ML = -0.5 and ML = 4.2 have been located in the immediate vicinity of the station network. The epicenters lie mainly along the eastern graben shoulder and within the graben; along the western graben shoulder the seismic activity is markedly lower. A further active region was identified along the southern foothills of the Taunus in the northwest of the study area. The seismicity extends down to a depth of 24 km, with a maximum of the hypocentral depth distribution in the range of 12-18 km. Within the graben, the seismicity is restricted to the deeper crust in the range of 9-24 km. The absence of seismic activity in the upper crust down to about 9 km depth within the graben could indicate aseismic deformation in this depth range. Since May 2014, earthquake swarm activity has been recorded in the northern URG for the first time in almost 150 years, southeast of Darmstadt near the town of Ober-Ramstadt. The hypocenters fall into two clusters that are spatially separated and show different activity rates. The focal depths lie in the range of 1-8 km.
In addition to the data of the SiMoN network, recordings from the regional earthquake services were included in focal mechanism analyses for a total of 58 earthquakes. The focal mechanisms are dominated by strike-slip faulting; normal and reverse faulting play only a minor role. The computed focal mechanisms confirm that the stress field of the northern URG is transtensional; compared to earlier studies, however, a clearly pronounced strike-slip component was identified. To determine the principal stress axes, an inversion of the focal mechanisms was performed and the direction of the maximum horizontal stress, which is mainly oriented N135°E, was determined.
Building on the newly gained insights into the natural seismicity and the stress field of the northern URG, a probabilistic seismic hazard analysis was carried out. To account for uncertainties in the seismic source-zone models, six different models were developed. Specific parameters were determined for each source zone, and their uncertainties are treated in a logic tree. The magnitude-frequency parameters were determined on the basis of a newly compiled moment-magnitude-based earthquake catalog. Taking into account the tectonic regime in each source, different ground-motion attenuation relations were used. To quantify the maximum expected magnitude in each source, probability density functions were computed. The results of the hazard analysis are presented as maps of spectral ground accelerations and peak ground accelerations for return periods of 475 and 2475 years, and as response acceleration spectra. Compared to earlier studies, an increased seismic hazard was found for the northern URG.
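The magnitude-frequency parameters referred to above are conventionally derived from a Gutenberg-Richter fit to the part of the catalog above the magnitude of completeness. Purely as a hedged illustration of that step (using synthetic magnitudes, not the SiMoN catalog, and the classical Aki maximum-likelihood estimator rather than the specific procedure of the thesis), a b-value estimate can be computed as follows:

```python
import numpy as np

def aki_b_value(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value estimate.

    Uses only events at or above the magnitude of completeness mc;
    dm is the magnitude binning width of the catalog.
    """
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mc]
    if m.size < 2:
        raise ValueError("not enough events above Mc")
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0)), m.size

# synthetic catalog: above Mc, Gutenberg-Richter magnitudes are exponentially
# distributed with rate beta = b * ln(10); here the true b-value is 1.0
rng = np.random.default_rng(0)
mags = 1.2 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=500)

# continuous synthetic magnitudes, so no binning correction (dm = 0)
b_est, n_events = aki_b_value(mags, mc=1.2, dm=0.0)
print(f"estimated b-value: {b_est:.2f} from {n_events} events")
```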
This thesis represents an approach to the extensive thematic complex of "child, art, and compensation". By engaging with a wide range of points of contact, it shows why creative, formative activity is beneficial and necessary for development, particularly in childhood. The focus is, on the one hand, on the fundamental mechanisms inherent in these processes and, on the other hand, on the socio-economic change that challenges precisely these mechanisms and in part makes them all the more necessary. This area is framed by the central thematic complex of the impoverishment of action, one-sidedness, and the debate on diversity, which in essence concerns not only adolescents but ultimately every person in the Western world.
There is still a need for research both on modeling the language competence of multilingual children and on determining the indicators required for this purpose. The corresponding findings are, however, indispensable for valid language assessment, on the basis of which language support can also take place. In this dissertation, selected indicators of language proficiency and the construct of language competence of children with German as a second language at the beginning of grade 1 are therefore modeled using distribution statistics, confirmatory factor analyses, and structural equation models, and their influence on the orthographic competence of these pupils at the end of grade 2 is subsequently examined. In addition, phonological working memory is included in the model as a further predictor. The study investigates whether the indicators used to model language competence, such as picture naming, case, stages of syntax acquisition, and mean length of utterance (MLU), are suitable and valid for the group of multilingual children at school entry. To this end, the respective dispersions, the correlations among the indicators (convergent validity), and the performance of the indicators for the respective linguistic competence domains are examined on the basis of the magnitude of the factor loadings. The prognostic validity of the indicators with respect to spelling competence is also examined. Furthermore, it is tested whether the language competence of children with German as a second language can be divided into a semantic, a morphological, and a syntactic ability, and whether language competence can be modeled as a second-order factor. Further research questions concern the modeling of spelling competence, i.e. the number and type of latent variables representing this construct, as well as the influence of phonological working memory and of the individual linguistic sub-competences on the orthographic sub-competences.
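In standard confirmatory factor analysis notation, a second-order structure of the kind tested here can be written as follows (a generic textbook sketch; the symbols are not taken from the dissertation):

```latex
x   = \Lambda_x \, \xi + \delta   % observed indicators load on first-order factors
\xi = \Gamma \, \eta + \zeta      % first-order factors (semantic, morphological,
                                  % syntactic) load on a second-order factor \eta
```

with measurement errors \delta and factor residuals \zeta; the second-order factor \eta then represents overall language competence.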
Artificial ribonucleases that catalyze the cleavage of RNA phosphodiester bonds sequence-specifically and efficiently could potentially serve not only as biochemical tools but could also play an important role as drugs against a variety of diseases in which mRNA or miRNA is involved. Although numerous sequence-specific RNA cleavers have been developed over the past two decades, the cleavage activity of these compounds still lags considerably behind that of their natural counterparts. The optimization of artificial ribonucleases, and as its basis the investigation of the factors that influence the cleavage activity of a compound, therefore remain of great interest. While most artificial ribonucleases contain metal ions, metal-free RNA cleavers, for example based on heterocyclic guanidines, are also known. In principle, the hydrolysis of the RNA backbone can be catalyzed by deprotonation of the 2'-OH group that attacks the phosphorus atom nucleophilically, by protonation of the 5'-OH group acting as the leaving group, and by stabilization of the dianionic phosphorane intermediate passed through during cleavage. Potential RNA cleavers should therefore be able to act both as a base and as an acid, which is best fulfilled at a pKa value in the region of 7. If one and the same molecule acts as proton acceptor and proton donor, guanidine analogues undergo a tautomerization from the amino to the imino isomer. The smallest possible energy difference between the two forms should therefore have a positive effect on the cleavage activity. In the present work, a series of heterocyclic guanidines was synthesized, their pKa values were determined, and the respective energy differences between the amino and imino tautomers were roughly estimated by AM1 calculations. In cleavage experiments, Cy5-labeled RNA substrates were incubated with the various compounds (cleaver concentration: 2 or 10 mM). The cleavage products were subsequently analyzed and quantified using a DNA sequencer. All investigated and sufficiently soluble substances that exhibited both a suitable pKa value (6-8) and a low energy difference between the amino and imino tautomers (≤ 5 kcal/mol), or for which only the pKa value or only the energy difference deviated slightly from the ideal value, cleaved RNA, albeit in some cases with only low activity. In the cleavage experiments, guanidine analogues with a large aromatic system proved to be particularly active, above all 2-aminoperimidine and its derivatives, which showed cleavage activity even at concentrations below 50 µM. At the same time, these compounds revealed a strong tendency to aggregate with RNA in fluorescence correlation spectroscopy experiments, so that in these cases cleavage may have been effected not by single molecules but by aggregates. In order to also cleave RNA substrates sequence-specifically, PNA conjugates of the known RNA cleaver tris(2-aminobenzimidazole) were prepared, the cleaver being synthesized via a new, mercury-free route. It was shown that these PNA conjugates cleave RNA sequence-specifically with a half-life of about 11 h, which is in the range of the half-life of comparable DNA conjugates.
To investigate whether 2-aminoperimidines are also active as single molecules, two PNA conjugates of 2-aminoperimidine derivatives substituted at the naphthyl ring were synthesized. Both conjugates showed no cleavage activity at all, which could indicate that the hydrolysis of the RNA backbone can only be catalyzed efficiently by several cleaver units, either covalently linked or in the form of aggregates.
Atomistic molecular dynamics approach for channeling of charged particles in oriented crystals
(2015)
Channeling is the process of propagation of charged particles along the planes or axes of crystalline materials. Since the 1960s, this effect has been studied extensively, both theoretically and experimentally. It has been applied to the manipulation of high-energy beams, to high-precision structure and defect analysis of crystalline media, and to the production of high-energy radiation. To tune the parameters of channeling and channeling radiation, the process has been considered for artificially nanostructured materials such as bent crystals, nanotubes, and fullerite. In recent years, the concept of the crystalline undulator has been formulated and tested, which predicts special properties of the radiation arising from the channeling of projectiles in periodically bent crystals.
In this thesis, the channeling of sub- and multi-GeV electrons and positrons is investigated by means of the atomistic molecular dynamics approach. The results of these studies were presented in a series of articles during my doctoral studies in Frankfurt. This approach allows the simulation of complex cases of channeling in straight, bent, and periodically bent crystals made of pure crystalline materials and of mixed materials such as Si-Ge crystals, as well as in multilayered and nanostructured crystalline systems. The thesis describes the simulation method, presents results of simulations for various cases, and compares the simulation results with current experimental data. The results are compared in terms of estimates of the dechanneling length, the fraction of channeled projectiles, the angular distribution of the outgoing projectiles, and the radiation spectrum.
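For orientation, the angular acceptance of a channel is commonly characterized by Lindhard's critical angle, a standard relation of channeling theory rather than a result specific to this thesis: a planar channel with potential-well depth U_0 accepts projectiles of momentum p and velocity v only within

```latex
\theta_{L} = \sqrt{\frac{2\, U_{0}}{p\, v}} ,
```

so the acceptance shrinks with increasing energy roughly as 1/\sqrt{pv}.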
The upcoming CBM Experiment at FAIR aims at exploring the region of highest net baryonic densities reproducible in energetic heavy ion collisions. Due to the very high beam intensities expected at FAIR, unprecedented data regarding rare observables such as charm quarks and hyperons will be accessible. Open charm mesons are particularly interesting, since they support the reconstruction of the total charm cross-section in order to search for exotic phenomena, e.g. a phase transition towards the quark-gluon plasma, which is predicted by several theoretical models. Open charm studies will be performed via secondary vertex reconstruction with a suitable Micro-Vertex Detector (MVD). The CBM-MVD is currently in the development and prototyping phase with primary design goals concentrating on spatial resolution, radiation hardness, material budget, and readout performance. CMOS Monolithic Active Pixel Sensors (MAPS) provide an excellent spatial resolution for the MVD on the order of a few µm, combined with a low material budget (50 µm thickness) and high radiation hardness. The active volume of the devices is formed from the epitaxial layer of standard CMOS wafers. This allows for the integration of pixels together with analogue and digital data processing circuits on a single chip. This option was explored with the MIMOSA-26 prototype, which integrates functionalities such as pedestal correction, correlated double sampling, discrimination, and data sparsification based on zero suppression, combined with a small and dense pixel matrix. The pixel array, composed of 576 lines of 1152 pixels, is read out in a column-parallel rolling shutter mode. One discriminator per column and the digital data processing circuits are located on the same chip in a 3 mm wide area beneath the pixel matrix, allowing for binary hit encoding. This area also contains the circuits for pedestal correction and the configuration memory, which is programmed via JTAG. The preprocessed digital data is read out via two 80 Mbit/s LVDS links per sensor, which stream their data continuously based on a low-level protocol.
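The data sparsification just mentioned rests on zero suppression of the binary hit map. Purely to illustrate the principle (the encoding below is a simplified stand-in, not the actual MIMOSA-26 output format), a Python sketch of row-wise zero suppression of one frame might look like this:

```python
import numpy as np

N_ROWS, N_COLS = 576, 1152  # MIMOSA-26 pixel matrix dimensions

def zero_suppress(frame):
    """Encode a binary hit frame as (row, [hit columns]) pairs.

    Only rows containing at least one hit are kept, so the output size
    scales with the number of hits rather than the number of pixels;
    this is the basic idea behind on-chip data sparsification.
    """
    assert frame.shape == (N_ROWS, N_COLS)
    encoded = []
    for row in range(N_ROWS):
        cols = np.flatnonzero(frame[row])
        if cols.size:
            encoded.append((row, cols.tolist()))
    return encoded

# toy frame with three hits (hypothetical data)
frame = np.zeros((N_ROWS, N_COLS), dtype=np.uint8)
frame[10, 5] = frame[10, 6] = frame[300, 1000] = 1
print(zero_suppress(frame))   # [(10, [5, 6]), (300, [1000])]
```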
Within the scope of this thesis, a readout concept of the CBM-MVD is proposed and studied based on the current MIMOSA sensor generation. The backbone of the system is formed by the Readout Controller boards (ROCs) featuring FPGA microchips and optical links. Several ROC prototypes are considered using the synergy with the HADES Experiment. Finally, the TRB3 board is selected as a possible candidate for the initial FAIR experiments. Furthermore, a highly scalable, hardware independent FPGA firmware is implemented in order to steer and read out multiple MIMOSA-26 sensors. The reconfigurable firmware is also designed with the support for future MIMOSA sensor generations. The free-streaming sensor data is deserialized and error-checked, prior to its transmission over a suitable network interface. In order to demonstrate the validity of the concept, a readout network similar to the HADES Data Acquisition (DAQ) system is developed. The ROC is tested on the HADES TRB2 boards and data is acquired using suitable MAPS add-on boards and the TrbNet protocol.
In the context of the CBM-MVD prototype project, a readout network with 12 MIMOSA-26 sensors was prepared for an in-beam test at the CERN SPS facility. A comprehensive control system was designed, comprising customized software tools. The subsequent in-beam test was used to validate the design choices. As a result, the system could be operated synchronously and dead-time free for several days. The behavior of the readout network in a realistic operating environment was carefully studied, with the outcome that the TrbNet-based approach handles the MVD prototype setup without any difficulties. A procedure to keep the sensors synchronous even in case of a data overflow was pioneered as well. After the beam test, improvements and conceptual changes to the readout system are being addressed which allow an integration into the global CBM DAQ system.
In the first part of the thesis, we show that the payment flow of a linear tax on trading gains from a security with a semimartingale price process can be constructed for all càglàd and adapted trading strategies. It is characterized as the unique continuous extension of the tax payments for elementary strategies with respect to convergence "uniformly in probability". In this framework, we prove that under quite mild assumptions dividend payoffs almost surely have a negative effect on the investor's after-tax wealth if the riskless interest rate is always positive. In addition, we give an example of tax-efficient strategies for which the tax payment flow can be computed explicitly.
In the second part of the thesis, we investigate the impact of capital gains taxes on optimal investment decisions in a quite simple model. Namely, we consider a risk-neutral investor who owns one risky stock, which she assumes to have a lower expected return than the riskless bank account, and we determine the optimal stopping time at which she sells the stock to invest the proceeds in the bank account up to the maturity date. In the case of linear taxes and a positive riskless interest rate, the problem is nontrivial because at the selling time the investor has to realize book profits, which triggers tax payments. We derive a boundary that is continuous, increasing in time, and decreasing in the volatility of the stock, such that the investor sells the stock at the first time its price is smaller than or equal to this boundary.
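The tension described in the second part can be made explicit with a deliberately simplified worked expression (an illustration under a flat tax rate \alpha, constant riskless rate r, purchase price S_0, and maturity T; it ignores the finer points of the thesis's construction): selling at time \tau and holding the after-tax proceeds in the bank account yields

```latex
\Bigl( S_{\tau} - \alpha \, (S_{\tau} - S_{0})^{+} \Bigr) \, e^{\, r (T - \tau)} ,
```

so selling early earns riskless interest on the after-tax proceeds, while holding on defers the tax on book profits; the optimal stopping boundary balances these opposing effects.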
The implementation of pump-probe experiments with ultrashort laser pulses enables the study of dynamical processes in atoms or molecules, which may provide deeper insight into their physical origin. The application of this method to systems such as nitrous oxide, which is not only a simple example of a polyatomic molecule but also plays a crucial role in the greenhouse effect, promises interesting and beneficial findings. This thesis presents, on the one hand, the technical extension of an existing experimental setup for high-harmonic generation (HHG) and ultrafast laser physics by an extreme ultraviolet (XUV) spectrometer for the in-situ observation of the harmonic spectrum during ongoing measurements. The present setup enables the production of short laser pulse trains in the XUV spectral range with durations of a few hundred attoseconds (1 as = 10^-18 s) via HHG and allows XUV-IR pump-probe experiments to be performed using the infrared (IR) driving field with durations of a few femtoseconds. Moreover, a reaction microscope is implemented, which enables the coincident detection of several charged particles emerging from an ionization or dissociation process and the reconstruction of their full 3D momentum vectors. With this technique it is possible to perform time-resolved momentum spectroscopy of few-particle quantum systems. Here, the design and calibration of the XUV spectrometer are presented, as well as a first application to the analysis of experimental data by providing information on the produced photon energies. On the other hand, the results of an XUV-pump IR-probe measurement on nitrous oxide (N2O) are discussed. With the broad harmonic spectrum (~17-45 eV) it is possible to address several states of the singly and doubly charged cation. One reaction channel is single ionization into a stable state of N2O+. Here, the photoelectron energies measured in coincidence allow the observation of sidebands, which served to estimate the pulse durations of the involved XUV pulse trains as well as of the fundamental IR pulses. Additionally, single ionization of nitrous oxide can lead to dissociation into a charged and a neutral fragment. The four respective dissociation channels are compared by presenting their branching ratios, kinetic energy release (KER) distributions, and their dependencies on the time delay between pump and probe pulse. In the production of the dication, two processes compete: direct double ionization for photon energies above the double-ionization threshold, and autoionization of singly ionized and excited molecules for photon energies near the double-ionization threshold. In both cases, the ionization leads to a Coulomb explosion into two charged fragments, where either the N-N bond or the N-O bond may break. The influence of the IR probe field on the ionization yield and the KER was investigated and compared for both dissociation channels. In addition, the corresponding photoelectron energy spectra are presented, which show indications that autoionizing states are involved, and their dependence on the delay and on the KER of the respective ions is analyzed.
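The sideband analysis mentioned above exploits a standard relation of XUV-IR cross-correlation measurements with attosecond pulse trains (quoted here as general background, not as this experiment's specific calibration): photoelectrons released by harmonic 2q+1 of the IR frequency \omega_IR from a state with ionization potential I_p appear at

```latex
E_{2q+1} = (2q+1)\,\hbar\omega_{\mathrm{IR}} - I_p ,
\qquad
E_{\mathrm{SB}} = E_{2q+1} \pm \hbar\omega_{\mathrm{IR}} ,
```

i.e. the sidebands lie midway between the odd-harmonic photoelectron peaks, and their dependence on the XUV-IR delay encodes the temporal overlap, and hence the durations, of the two pulses.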
According to the prevailing view, the purpose of digital copyright is to balance conflicting interests in exclusivity on the one hand and in access to information on the other. This article offers an alternative reading of the conflicts surrounding copyright in the digital era. It argues that two cultures of communication coexist on the internet, each of which has a different relationship to copyright. Whereas copyright institutionalizes and supports a culture of exclusivity, it is at best neutral towards a culture of free and open access. The article shows that, depending on the future regulation of copyright and the internet in general, the dynamic coexistence of these cultures may well be replaced by an overwhelming dominance of the culture of exclusivity.
Since 2008, the Rhein-Main-Mobilitätspanel (RMP) has provided the Rhine-Main region with a data set that allows a better description of regional mobility trends than earlier data sets. This methodological study examines to what extent this data set can be linked to other regional data sets. The aim of the study is thus to assess how far mobility data and other (in particular spatial) data available for the Rhine-Main region can be combined with the RMP in order to derive new insights and options for action for local decision-makers. The study compares the sample composition and survey methodology as well as the recorded indicators, and examines possibilities for combination with spatial structure data. In addition, central mobility indicators of the surveys considered (MiD 2002, 2008; SrV 2008; Deutsches Mobilitätspanel 2002-2008) are compared, and the applicability of the harmonized and combined data set is tested with respect to a substantive research question.
Containment problems belong to the classical problems of (convex) geometry. In the proper sense, a containment problem is the task to decide the set-theoretic inclusion of two given sets, which is hard from both the theoretical and the practical perspective. In a broader sense, this includes, e.g., radii or packing problems, which are even harder. For some classes of convex sets there has been strong interest in containment problems. This includes containment problems of polyhedra and balls, and containment of polyhedra, which have been studied in the late 20th century because of their inherent relevance in linear programming and combinatorics.
Since then, there has only been limited progress in understanding containment problems of that type. In recent years, containment problems for spectrahedra, which naturally generalize the class of polyhedra, have seen great interest. This interest is particularly driven by the intrinsic relevance of spectrahedra and their projections in polynomial optimization and convex algebraic geometry. Except for the treatment of special classes or situations, there has been no overall treatment of that kind of problems, though.
In this thesis, we provide a comprehensive treatment of containment problems concerning polyhedra, spectrahedra, and their projections from the viewpoint of low-degree semialgebraic problems and study algebraic certificates for containment. This leads to a new and systematic access to studying containment problems of (projections of) polyhedra and spectrahedra, and provides several new and partially unexpected results.
The main idea, which is by now common in polynomial optimization but whose particular potential for low-degree geometric problems is still far from fully understood, can be explained as follows. One point of view on linear programming is as an application of Farkas' Lemma, which characterizes the (non-)solvability of a system of linear inequalities. The affine form of Farkas' Lemma characterizes the linear polynomials that are nonnegative on a given polyhedron. By omitting the linearity condition, one arrives at a polynomial nonnegativity question on a semialgebraic set, leading to so-called Positivstellensaetze (or, more precisely, Nichtnegativstellensaetze). A Positivstellensatz provides a certificate for the positivity of a polynomial function in terms of a polynomial identity. As in the linear case, these Positivstellensaetze are the foundation of polynomial optimization and relaxation methods. The transition from positivity to nonnegativity is still a major challenge in real algebraic geometry and polynomial optimization.
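For reference, the affine form of Farkas' Lemma alluded to here can be stated as follows (a standard formulation, independent of this thesis's notation): a linear polynomial \ell(x) = c_0 + c^T x is nonnegative on a nonempty polyhedron P = {x : Ax <= b} if and only if it admits a representation

```latex
\ell(x) \;=\; \lambda_0 \;+\; \sum_{i=1}^{m} \lambda_i \, \bigl( b_i - a_i^{T} x \bigr)
\qquad \text{with } \lambda_0, \lambda_1, \dots, \lambda_m \ge 0 ,
```

where a_i^T are the rows of A; replacing the linear constraints b_i - a_i^T x by general polynomials and the nonnegative multipliers by sums of squares is exactly the passage to Positivstellensatz-type certificates.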
With this in mind, several principal questions arise in the context of containment problems: Can the particular containment problem be formulated as a polynomial nonnegativity (or, feasibility) problem in a sophisticated way? If so, how are positivity and nonnegativity related to the containment question in the sense of their geometric meaning? Is there a sophisticated Positivstellensatz for the particular situation, yielding certificates for containment? Concerning the degree of the semialgebraic certificates, which degree is necessary, which degree is sufficient to decide containment?
Indeed, (almost) all containment problems studied in this thesis can be formulated as polynomial nonnegativity problems allowing the application of semialgebraic relaxations. Other than this general result, the answer to all the other questions (highly) depends on the specific containment problem, particularly with regard to its underlying geometry. An important point is whether the hierarchies coming from increasing the degree in the polynomial relaxations always decide containment in finitely many steps.
We focus on the containment problem of an H-polytope in a V-polytope and of a spectrahedron in a spectrahedron. Moreover, we address containment problems concerning projections of H-polyhedra and spectrahedra. This selection is justified by the fact that the mentioned containment problems are computationally hard and their geometry is not well understood.
Derivation and characterization of a new filter for nonlinear high-dimensional data assimilation
(2015)
Data assimilation (DA) combines model forecasts with real-world observations to achieve an optimal estimate of the state of a dynamical system. The quality of predictions in nonlinear and chaotic systems such as atmospheric or oceanic circulation is strongly sensitive to the initial conditions. Therefore, beyond the consistent reconstruction of past states, a primary relevance of advanced DA methods concerns the proper model initialization. The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational DA schemes. They are applied in a wide range of research and operations. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the mean and covariance of the analysis ensemble are biased and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) relies on Bayes' theorem without further assumptions, which guarantees an exact asymptotic behavior. However, it is exposed to weight collapse, particularly in higher-dimensional settings, known as the curse of dimensionality.
This work presents a new method to obtain an analysis ensemble with mean and covariance that exactly match the corresponding Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The limitation with respect to fully-nonlinear filtering is that the NETF only considers the mean and covariance of the Bayesian analysis density, neglecting higher-order moments.
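The transform described in the preceding paragraph can be illustrated with a short NumPy sketch. The code below is a hedged, simplified rendering of such a weight-based ensemble transform (a linear-Gaussian observation operator is assumed, and the mean-preserving random rotation and localization discussed in the thesis are omitted); it is not the authoritative NETF implementation:

```python
import numpy as np

def weight_based_transform_analysis(X_f, y, H, R):
    """One analysis step of a particle-weight-based ensemble transform (sketch).

    X_f : (n, N) forecast ensemble of N state vectors of dimension n
    y   : (p,)   observation vector
    H   : (p, n) linear observation operator (assumption for this sketch)
    R   : (p, p) observation-error covariance (Gaussian likelihood)
    """
    n, N = X_f.shape

    # particle weights from the Gaussian likelihood of each forecast member
    innov = y[:, None] - H @ X_f                       # (p, N) innovations
    logw = -0.5 * np.sum(innov * np.linalg.solve(R, innov), axis=0)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                       # normalized weights, (N,)

    x_a = X_f @ w                                      # weighted (Bayesian) analysis mean

    # symmetric square root of diag(w) - w w^T; scaled by sqrt(N) it maps the
    # forecast members onto zero-mean perturbations whose sample covariance
    # (1/N normalization) matches the weighted covariance estimate
    W = np.diag(w) - np.outer(w, w)
    vals, vecs = np.linalg.eigh(W)
    T = np.sqrt(N) * (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

    X_a = x_a[:, None] + X_f @ T                       # analysis ensemble, (n, N)
    return X_a, w

# toy usage with hypothetical dimensions (not a real assimilation setup)
rng = np.random.default_rng(1)
X_f = rng.standard_normal((3, 20))
X_a, w = weight_based_transform_analysis(
    X_f, y=np.array([0.5, -0.2]), H=np.eye(3)[:2], R=0.1 * np.eye(2))
```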
The properties and performance of the proposed algorithm are investigated via a set of experiments. The results indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. They also confirm that localization enhances the applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel filter is coupled to a large-scale ocean general circulation model with a realistic observation scenario. The NETF remains stable with a small ensemble size and shows consistent behavior. Additionally, its analyses exhibit low estimation errors, as revealed by a comparison with a free ensemble integration and the ETKF. The results confirm that, in principle, the filter can be applied successfully, and as simply as the ETKF, in high-dimensional problems. No further modifications are needed, even though the algorithm is based only on the particle weights. Thus, it is able to overcome the curse of dimensionality, even in deterministic systems. This shows that the NETF constitutes a promising and user-friendly method for nonlinear high-dimensional DA.
The dissertation entitled "Verrechtlichung von Geschichte. Parlamentarische Debatten um die gesetzlichen Bestimmungen gegen Holocaustleugnung in Österreich und Deutschland" (The Juridification of History: Parliamentary Debates on the Statutory Provisions against Holocaust Denial in Austria and Germany) examines the criminal-law provisions against Holocaust denial in Germany and Austria. Its central question is how a historical event could be framed and codified in political and legal terminology in such a way that denying it has since been punishable by law. To this end, the study primarily analyses the parliamentary proceedings and debates that preceded the passage of the laws. The evaluation of these sources also helps to explain why, and in what form, the logic of these laws has been adopted by other states over the past twenty years and extended to other historical events. In addition to this extensive empirical part, which addresses the respective historical specifics, the dissertation contains a more analytically oriented concluding section that traces, in the form of thesis-like observations, the phenomenon and the intellectual-historical genesis of the Holocaust denial laws. These observations cover, among other things, the politics of history, legal policy, language policy and science policy, and examine from different angles how the laws have been justified, legitimized and criticized, and still are.
Amphibians have existed on the planet for over 300 million years and are today one of the most diverse vertebrate classes in the world, with over 7000 known species and still many more to be discovered. However, several studies estimate that approximately one third of the world's known living amphibians are directly threatened with extinction, making it the most endangered vertebrate class. Relative to its small land mass, the state of Panama supports one of the most diverse amphibian faunas. However, in many cases the ecological role of single species in a wider context and their habitat preferences are still poorly understood and subject to ongoing research. Modern taxonomic approaches in other tropical regions have shown that former assumptions of amphibian diversity were distinct underestimations of the actual species diversity; a situation that is probably also true for Panama. Concurrently, the collection of amphibian diversity data and the description of new species is a race against time. The amphibian fauna of the world, and that of Panama in particular, has suffered from an unprecedented loss of diversity over the last 30 years. The reasons are manifold and include destruction, alteration, and fragmentation of their natural habitats as the main causes, but also the deadly amphibian disease chytridiomycosis caused by the fungal pathogen Batrachochytrium dendrobatidis (Bd). In Panama and Costa Rica, this Emerging Infectious Disease (EID) spread in a wave-like manner from west to east, causing mass die-offs and reduced amphibian diversity even in well-preserved habitats. The disease has primarily affected stream-associated highland species. The last large-scale evaluation of the conservation status of Panama's amphibians through the IUCN Red List of Threatened Species in 2004 concluded that approximately 30% of the known species are acutely threatened with extinction. Furthermore, around 17% of the amphibian species known at that time lacked adequate data to be assessed. In view of Panama's already overwhelming amphibian diversity, as well as the variety of habitats and the large number of sites that had not been examined with regard to amphibians before, I started this study with the conviction that the inventory of Panama's amphibian diversity is far from complete. Furthermore, when I started this study, it was uncertain whether there would be any surviving amphibian species in areas where chytridiomycosis had emerged. The loss of whole amphibian communities in upland western Panama following Bd arrival led to a shift of amphibian research to lowland sites in central and eastern Panama, aiming primarily at pathogen arrival and the documentation of epizootic outbreaks and subsequent population declines. The situation of amphibian communities in areas post-decline was therefore largely unknown. Accordingly, the main goals of my study were to add to the taxonomic inventory of amphibians in Panama and to assess the situation of amphibian populations in habitats where chytrid-driven declines have been observed. To address these tasks I conducted fieldwork in western Panama with a focus on mountainous elevations between 1000 and 3475 m asl. Additionally, I visited different lowland sites between sea level and 1000 m asl to collect comparative material. In the period between 2008 and 2013, I conducted five collection trips to Panama, adding up to a total of approximately 13 months in the field.
I sampled nine regions in western Panama and collected 767 specimens together with student collaborators, 531 of which were collected under my personal field number. Additional data obtained from these specimens include 68 male anuran call recordings, 102 standardized color descriptions of specimens in life, and 259 tissue samples that to date have yielded 185 16S mtDNA sequences. This comprises the most comprehensive data set for amphibians of Panama and the first large-scale DNA barcoding approach for western Panama to date. After preliminary DNA barcoding and a subsequent comparative examination of the morphological and bioacoustic data of all specimens collected, the number of taxonomic problems that needed to be addressed was higher than I had anticipated. For most genetic lineages, deeper taxonomic analyses were required to reach conclusive results. A selection had to be made of which lineages to proceed with in the analyses, in view of the substantial financial and time expenditure that a complete taxonomic revision would require. Therefore, I chose to run deeper analyses on one genus from each of the three amphibian orders in Panama. The selection of genera depended largely on the availability of sufficient material and the scientific relevance of the respective genus.
I selected the genus Diasporus from the order Anura. These small frogs are omnipresent in many habitats and thus relatively easy to find. In addition, the genus is underrepresented in taxonomic studies. This is the first taxonomic study on the genus Diasporus to include a molecular phylogeny and the first comparison of advertisement calls between several populations from western Panama. In total, I collected 67 Diasporus specimens throughout western Panama and compared them morphologically with 49 additional specimens from Central America in collections, including the primary types of D. diasporus and D. hylaeformis. Additional comparative data were taken from the literature. The DNA barcoding analysis of a fragment of the 16S rRNA gene included 43 of my own sequences, complemented with 15 relevant GenBank sequences. In addition, I compared the advertisement calls of 26 male individuals with each other and with call descriptions from the literature. The DNA barcoding approach revealed several unnamed genetic lineages, but in some cases also resulted in the lumping of morphologically and bioacoustically distinct specimens. Generally, the morphological examination of the collected material revealed almost no specific characters that could be used to distinguish between genetic lineages. However, it was possible to identify species using a combination of several morphological characteristics; which ones are relevant in the individual case depends on the respective species. My extensive collection of call recordings made it possible to test, for the first time, the intraspecific call variation of D. hylaeformis as a function of various parameters. This analysis showed that the dominant frequency depends significantly on the body size of the calling male: the smaller the calling male, the higher the frequency of the call. A similar relationship was observed between call rate and temperature: the lower the temperature during calling, the lower the call rate. I assume that these general patterns, which have already been observed in other anuran genera, also hold in other Diasporus species that could not be tested in this study. Taking into account the intraspecific variation of Diasporus advertisement calls, I consider comparative call analyses to be the best way to distinguish between species, especially for syntopic species. Integration of the three lines of evidence (i.e., morphology, DNA barcoding, and bioacoustics) led to the identification of four new species, two of which (D. citrinobapheus and D. igneus) colleagues and I have already formally described.
I conducted an integrative taxonomic analysis of the western Panamanian representatives of the genus Bolitoglossa from the order Caudata, the larger of the two Panamanian salamander genera. Bolitoglossa is very species-rich, with a centre of diversification in the high mountains of Costa Rica and western Panama. I collected 53 Bolitoglossa specimens and compared them to twelve specimens in collections, including the holotype and one paratype of B. gomezi. The dataset was complemented with information from the literature. Among the sampled specimens were two species considered to be endangered that had not been collected or observed for several decades; B. magnifica had not been seen for 34 years and B. anthracina for 22 years. Further, I collected salamanders at several new locations. To date, my 16S mtDNA barcoding analysis represents the densest taxon sampling for Panamanian Bolitoglossa, composed of 21 of my own sequences that were combined in the final alignment with 47 GenBank sequences. Even though the molecular phylogeny is based only on a single marker, the resulting trees largely coincide with previous studies and the nodes received high statistical support. In these trees, I recover all previously defined subgenera and species groups. On the basis of this molecular phylogeny, I placed B. anthracina, sequenced here for the first time, in the B. subpalmata species group. Because B. anthracina is a large and dark-colored species, it had previously been placed by implication in the B. schizodactyla species group along with other large black salamanders of the B. nigrescens species complex. Moreover, I found deeply divergent genetic lineages among geographically separated populations of B. minutula. However, until now no additional morphological characteristics have been detectable to distinguish between these lineages. Additionally, my colleagues and I described a new deeply divergent lineage in the B. robinsoni species group as B. jugivagans, a species new to science. In contrast, I found only minor genetic differences between specimens of B. sombra and B. nigrescens. After combining morphometric data and tooth counts from the literature for both species with additional data from specimens of B. sombra that I collected near the type locality, the distinguishing features blurred. In particular, including much larger specimens of B. sombra, not yet known at the time of its description, showed that the difference in tooth counts depends on the size and age of the specimen examined: larger specimens have more maxillary and vomerine teeth. Based on this evidence I regard B. sombra as a junior synonym of B. nigrescens. Further, I revised the Panamanian distribution of the two relatively common lowland salamanders, B. colonnea and B. lignicolor. Besides filling gaps in the fragmentarily known distributions of these species, I assessed the molecular and morphological variation of both species among populations in Panama. While there was little variation in B. lignicolor, I found divergent genetic lineages among geographically distinct populations of B. colonnea that require further taxonomic examination.
Caecilians (order Gymnophiona) are among the least investigated terrestrial vertebrates. After I received a first specimen of the predominantly South American genus Oscaecilia (family Caeciliidae) from western Panama, I started to work more extensively on the taxonomy of Caeciliidae in Central America. The specimens from western Panama were not readily assignable to a single described species, but shared characters with O. elongata and O. osae. While O. osae was known only from the holotype, the type material of O. elongata was destroyed during World War II. On the basis of the original description, the unique feature of O. elongata within Oscaecilia is the absence of subdermal scales in the posterior part of the body. In a referred specimen of O. elongata from eastern Panama mentioned in the original description, this characteristic cannot be examined, as the specimen consists of head and neck only. Therefore, I used non-destructive high-resolution, synchrotron-based X-ray micro-CT imaging (HRμCT) to examine cranial characters in the specimens in question and took conventional radiographs to count vertebrae and to make subdermal scales visible. I found that the fragmentary specimen from eastern Panama likely belongs to the well-sampled species O. ochrocephala and has little in common with O. osae or the specimens from western Panama. In contrast, O. osae and the specimens from western Panama share many morphological characters, but also show some differences. Genetic barcoding revealed that both species are close relatives, but the genetic distance could not be finally resolved, because the 16S sequences obtained from blood samples of living O. osae were of poor quality. Thus, I compare the Oscaecilia from western Panama to O. osae in this study, but postpone a taxonomic decision until further material becomes available. Further, I designate O. elongata a nomen dubium, because the type material is lost, the type locality is not defined in more detail than "Panama", and the original description does not allow for a definite assignment. Since previous molecular studies only considered O. ochrocephala, the monophyly of Oscaecilia had never been tested before. So far, the genus Oscaecilia is based largely on a single cranial character, the eyes being covered with bone. Here, I combined two 16S mtDNA sequences of O. osae from Costa Rica and two sequences of O. sp. from western Panama with two sequences of O. ochrocephala and ten sequences of four species of the genus Caecilia, the sister genus of Oscaecilia. The resulting phylogeny contains two well-supported clades: one clade contains two species of Caecilia, one from Panama and one from western Ecuador, together with all species of Oscaecilia tested; the other clade consists of two species of Caecilia from the Amazon basin. I therefore assume that the split into the two clades is due to the rise of the Andes, which led to today's cis-/trans-Andean distribution of the two clades. For now, to restore monophyly, I suggest placing Oscaecilia within the synonymy of Caecilia until more taxa have been tested. When assessing the conservation status of the amphibian species in mountainous western Panama, I first compiled a list of known species that I could potentially have found during my fieldwork. Using the IUCN categories, I analyzed how many of the endangered species I actually found and how these are distributed over families and species groups.
Surprisingly, my rediscoveries of lost species were not equally distributed among the four families that comprise most endangered amphibian species (i.e., Bufonidae, Craugastoridae, Hylidae, and Plethodontidae). While I discovered ten of eleven endangered hylids and six of nine endangered plethodontids, I found only one of four endangered bufonids and none of the nine endangered craugastorids. I assume that the secretive plethodontids, for which no Bd-related declines have been documented, were simply overlooked in the past decades. In contrast, I propose that hylids, in which Bd-related population decline is well documented, developed distinct evolutionary solutions permitting coexistence with the pathogen. The situation is obviously different in bufonids and craugastorids, where I found no signs of population recovery at present. So far, the only surviving populations of species from these families exist in climatic or physiographic niches that have probably shielded them from Bd. My data confirm the current view that the risk for naïve amphibian populations to decline during Bd epizootics is predicted by ecological traits (e.g., aquatic index, vertical distribution) and is not dependent on taxonomic affiliation. However, I propose that only certain amphibian families (e.g., hylids and centrolenids) have the ability to acquire immune defenses that allow coexistence with the pathogen during enzootic stages. This is a very new perspective on the worst infectious disease of amphibians worldwide, allowing for new research approaches to understand the host-pathogen dynamics. Moreover, I examined where the share of surviving endangered amphibian species is particularly high in mountainous western Panama. As was to be expected, most of the endangered species are found within the boundaries of protected areas. One exception is the unprotected Cerro Colorado region in the Comarca Ngöbe-Buglé, which provides habitat for a wide variety of endangered and undiscovered amphibian species. Nonetheless, planned open-pit mining would destroy the forests in a large part of the area. This demonstrates once again that human activities are the biggest threat to amphibians in Panama and elsewhere.
Epigenetic silencing of transgene expression represents a major obstacle for the efficient genetic modification of multipotent and pluripotent stem cells. We and others have demonstrated that a 1.5 kb methylation-free CpG island from the human HNRPA2B1-CBX3 housekeeping genes (A2UCOE) effectively prevents transgene silencing and variegation in cell lines, multipotent and pluripotent stem cells, and their differentiated progeny. However, the bidirectional promoter activity of this element may disturb expression of neighboring genes. Furthermore, the epigenetic basis underlying the anti-silencing effect of the UCOE on juxtaposed promoters has been only partially explored. In this study we removed the HNRPA2B1 moiety from the A2UCOE and demonstrate efficient anti-silencing properties also for a minimal 0.7 kb element containing merely the CBX3 promoter. This DNA element largely prevents silencing of viral and tissue-specific promoters in multipotent and pluripotent stem cells. The protective activity of CBX3 was associated with reduced promoter CpG-methylation, decreased levels of repressive and increased levels of active histone marks. Moreover, the anti-silencing effect of CBX3 was locally restricted and when linked to tissue-specific promoters did not activate transcription in off target cells. Thus, CBX3 is a highly attractive element for sustained, tissue-specific and copy-number dependent transgene expression in vitro and in vivo.
In the present work, mismatch negativity (MMN) was used to examine the contribution of spectral vs. temporal perceptual features to vowel length discrimination in children and adults. Three age groups (adults, 9-10-year-olds, and 10-11-year-olds) were compared to examine developmental effects on vowel length perception. Natural (i.e., spectrotemporal) vowel length differences were compared with (artificially modified) stimulus pairs varying only in temporal or spectral characteristics in order to contrast spectral, temporal and spectrotemporal processing.
The results indicate that, while adults integrate spectral and temporal aspects of the speech signal in an additive way, children aged 9-10 years process both features sequentially. Vowel length processing is found to become adult-like by the age of 10-11 years.
This thesis covers the analysis of radix sort, radix select and the path length of digital trees under a stochastic input assumption known as the Markov model.
The main results are asymptotic expansions of mean and variance as well as a central limit theorem for the complexity of radix sort and the path length of tries, PATRICIA tries and digital search trees.
Concerning radix select, a variety of different models for ranks is discussed, including a law of large numbers for the worst-case behavior, a limit theorem for the grand averages model and the first-order asymptotics of the average complexity in the quantile model.
Some of the results are obtained by moment transfer techniques; the limit laws are based on a novel use of the contraction method suited for systems of stochastic recurrences.
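To make the object of the analysis concrete, here is a minimal Python sketch of MSD radix sort on binary strings, with a counter for the bucket operations whose mean, variance and limit law the thesis studies (illustrative only: the input here consists of finite strings, whereas the probabilistic model assumes keys generated by a Markov source):

    def radix_sort(strings, pos=0, cost=None):
        """MSD radix sort for strings over the alphabet {'0', '1'}.

        `cost` accumulates the number of bucket operations, a standard
        complexity measure for radix sort.
        """
        if cost is None:
            cost = [0]
        if len(strings) <= 1:
            return strings
        done, zeros, ones = [], [], []
        for s in strings:                  # distribute into buckets by bit `pos`
            cost[0] += 1
            if len(s) <= pos:
                done.append(s)             # string exhausted: already in place
            elif s[pos] == '0':
                zeros.append(s)
            else:
                ones.append(s)
        return done + radix_sort(zeros, pos + 1, cost) + radix_sort(ones, pos + 1, cost)

Radix select follows the same bucketing scheme but recurses only into the bucket that contains the sought rank.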
This time not a pointer to a text worth reading, but to an event organized by Genocide Alert together with the Deutsche Atlantische Gesellschaft in Berlin: on 15 June at 6:30 p.m., at the Press and Information Office of the Federal Government, Dr. Klaus Kinkel, former Federal Foreign Minister, Alfred Grannas of the Federal Foreign Office, Prof. Dr. Axel Hagedorn, counsel to the foundation "Mütter von Srebrenica", and Prof. Dr. Wolfgang Höpken of the University of Leipzig will discuss the massacre that took place in Srebrenica in 1995...
Hematopoietic stem cells (HSCs) have the unique abilities of life-long self-renewal and multi-lineage differentiation. They are routinely used in BM or stem cell transplantations to reconstitute the blood system of patients suffering from malignant or monogenic blood disorders. For an adequate production of each blood cell lineage in homeostasis and under stress conditions, the fate choice of HSCs to either self-renew or to differentiate must be strictly controlled. The incomplete understanding of the molecular mechanisms that control this balance makes it still impossible to maintain or expand undifferentiated HSCs in culture for advanced regenerative medical purposes.
The aim of this thesis was the identification and molecular characterisation of mechanisms that control the decision of HSCs to self-renew or to differentiate, and of how they are connected to extrinsic cytokine signaling. Prior to this thesis, a screening for genes upregulated under self-renewal-promoting thrombopoietin (TPO) signaling via the transcription factors STAT5A/B in HSCs had been conducted, and Growth arrest and DNA damage inducible 45 gamma (Gadd45g) was one of the regulated genes. GADD45G has been described as a stress sensor, DNA-damage-response and tumor-suppressor gene that is epigenetically silenced in many solid tumors and leukemias. Furthermore, Gadd45g is upregulated in aged HSCs with impaired multi-lineage reconstitution abilities, and it is induced by differentiation-promoting cytokines in GM-committed cells. However, the function of GADD45G in LT-HSCs was unknown. All these points warrant further investigation to unravel the function of GADD45G in early cell fate decisions of HSCs in hematopoiesis.
The expression of Gadd45g was stimulated by the hematopoietic cytokines TPO, IL3 and IL6 both in HSCs and MPPs, making GADD45G an interesting target to focus on. To simulate the cytokine-induced expression, GADD45G was lentivirally transduced into HSCs. Surprisingly, GADD45G did not induce cell cycle arrest or cell death in hematopoietic cells, either in vitro or in vivo, as reported in many cell lines. Instead, GADD45G induced an enhanced and markedly accelerated differentiation of HSCs into mainly myelomonocytic cells, similar to that observed in IL3- and IL6-containing cultures. Also in vivo, GADD45G rapidly initiated the differentiation program in HSCs at the expense of self-renewal and long-term engraftment, as shown by serial HSC transplantation experiments. Along the same line, HSCs from Gadd45g-knockout mice exhibited increased self-renewal. In vitro, Gadd45g-/- progenitors showed higher and prolonged colony formation potential and slower expansion after cytokine stimulation. The loss of Gadd45g increased HSC self-renewal and improved repopulation in secondary recipients, as determined by serial competitive transplantations. Taken together, GADD45G could be identified as a molecular link between differentiation-promoting cytokine signaling and rapid differentiation induction in murine LT-HSCs.
As presented in this thesis, the differentiation induction by GADD45G was mediated by activation of the MAP3K4 – MKK6 – p38 MAPK cascade. Small-molecule inhibition of p38, but not JNK, blocked the GADD45G-induced differentiation. GADD45G binds to MAP3K4 and releases its auto-inhibitory loop by a change in conformation, initiating this cascade. Phospho-flow cytometry demonstrated the activation of p38 and its downstream kinase MK2 upon GADD45G expression in MPPs. Furthermore, the expression of constitutively active MAP3K4 and MKK6 was able to phenocopy GADD45G-induced differentiation, which could be blocked by p38 inhibition.
The other two family members, GADD45A and GADD45B, also induced accelerated differentiation in LT-HSCs. Interestingly, only GADD45G suppressed differentiation into megakaryocyte and erythrocyte (Mek/E) lineage cells, suggesting a role of GADD45G in lineage choice. Long-term time-lapse microscopy-based cell tracking of single LT-HSCs and their progeny revealed that, once GADD45G is expressed, the development of LT-HSCs into granulocyte-macrophage-committed progeny occurs within 36 hours, and uncovered a selective lineage choice with a severe reduction in Mek/E cells. Furthermore, no megakaryocytic-erythroid progenitors (MEPs) could develop from HSPCs in BM two weeks after transplantation, suggesting a very early selection against Mek/E cell fates. In line with these findings, GADD45G-transduced MEPs could not expand or form colonies in vitro, demonstrating that the differentiation program induced by GADD45G is not compatible with Mek/E lineage fate. Gene expression profiling of HSCs indicated that GADD45G promotes myelomonocytic differentiation programs over programs for self-renewal or megakaryo-/erythropoiesis. The differentiation-inducing potential of GADD45G identified here is so strong that the expression of GADD45G in primary acute myeloid leukemia (AML) cells inhibited their expansion, accompanied by enhanced differentiation and increased apoptosis.
The work presented here shows that IL3 and IL6 induce a differentiation program in HSCs via GADD45G and p38, thereby establishing the link between extrinsic cytokine signaling and differentiation induction. Since the loss of Gadd45g increased self-renewal and slowed HSC differentiation, this may be exploited, e.g. by p38 inhibition, to maintain and expand HSCs ex vivo by preventing cytokine-induced differentiation. Furthermore, re-expression of GADD45G may overcome the differentiation block in leukemia and eliminate these cells by driving them into terminal differentiation and apoptosis.
In the current issue of the Zeitschrift für Internationale Beziehungen, Christopher Daase and Nicole Deitelhoff (Uni Frankfurt) analyse the state of the German discipline of International Relations (verdict: not so good, and a lot of nonsense) and call for more participation, above all at the upcoming DVPW congress in Duisburg.
The HBO series The Wire tells a story of crime, policing and politics in Baltimore. One of its strengths lies in how it traces the ambivalence of social and political life. Omnipresent corruption plays a central role in this, and its ambivalence is reflected not least in the portrayal of the character of State Senator Clay Davis.
The murder of more than 40 student teachers in the small southern Mexican town of Ayotzinapa at the end of 2014, which has not yet been fully solved but is presumably attributable to organized crime groups, as well as the violence escalating in the western state of Jalisco since a military helicopter was shot down in May 2015, have once again painfully recalled that a bloody violent conflict has been under way in Mexico for almost nine years, one that, in view of the economic successes of the "Aztec tiger", at times seemed almost forgotten.
The present thesis is concerned with the investigation of individual chiral molecules by means of coincidence measurements. A molecule is called chiral if it occurs in two variants, so-called enantiomers, whose structural models are mirror images of each other.
Since many biologically relevant molecules are chiral, the methods and insights of this field are of great importance for biochemistry and pharmacy. Remarkably, in nature usually only one of the two possible enantiomers occurs. Whether this choice was random, whether it followed from the initial conditions at the origin of life, or whether it has a fundamental cause is still unresolved. Since the discovery of chiral molecular structures in the second half of the 19th century, a multitude of methods has been developed to distinguish the two enantiomers of a molecule and to investigate their properties. Statements about the microscopic structure (absolute configuration), however, can usually only be made with the aid of theoretical models.
The innovative step of the present work is to apply, for the first time, a technique developed in atomic physics for the investigation of individual microscopic systems to chiral molecules: with so-called Cold Target Recoil Ion Momentum Spectroscopy (COLTRIMS) it is possible to multiply ionize individual molecules in the gas phase and to investigate the resulting fragments (ions and electrons). The simultaneous detection of these fragments is referred to as a coincidence measurement.
First, the prototypical chiral molecule CHBrClF was multiply ionized with a femtosecond laser pulse, so that all five atoms "fly apart" as singly charged ions in a so-called Coulomb explosion. By measuring the momentum vectors of these ions, the microscopic configuration of individual molecules could be determined with very high reliability. The coincidence method is therefore also suitable for determining the proportions of right- and left-handed enantiomers in a sample. The measurements on the racemic sample used show, for ionization with linearly polarized light, an equal distribution of the two enantiomers within the statistical uncertainty, as expected.
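The principle behind assigning the handedness from measured ion momenta can be illustrated with a short Python sketch (a simplified illustration under assumptions of my own; the actual choice of fragments, reference frame and sign convention follows the thesis):

    import numpy as np

    def chirality_parameter(p1, p2, p3):
        """Normalized scalar triple product of three fragment momentum vectors.

        A triple product is a pseudoscalar: it changes sign under mirror
        reflection, so its sign can separate the two mirror-image forms of a
        molecule reconstructed from Coulomb-explosion momenta.
        """
        triple = float(np.dot(p1, np.cross(p2, p3)))
        norm = np.linalg.norm(p1) * np.linalg.norm(p2) * np.linalg.norm(p3)
        return triple / norm

    def assign_enantiomer(p1, p2, p3):
        # Placeholder labels; the mapping of the sign to the R/S configuration
        # depends on the chosen fragments and conventions.
        return "enantiomer A" if chirality_parameter(p1, p2, p3) > 0 else "enantiomer B"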
In a subsequent experiment it could be shown that the Coulomb explosion can also be realized with single high-energy photons from a synchrotron radiation source. For both ionization mechanisms, at the laser and at the synchrotron, several fragmentation channels were investigated. With a view to extending the method to more complex, biologically relevant molecules, it is crucial to know to what extent the handedness can still be determined when not all atoms of the molecule are detected as atomic ions. It turned out that molecular ions can also be used to determine the absolute configuration. A significant increase in efficiency could be demonstrated for the case in which not all fragments from the Coulomb explosion of the molecule are detected; in this case, however, only statistical statements about the absolute configuration and the abundance of the two enantiomers remain possible.
To test the limits of the method with respect to mass resolution, isotopically chiral molecules were investigated, i.e. molecules that are chiral only by virtue of two different isotopes. Here, too, a separation of the enantiomers is possible, albeit with certain restrictions.
An important characteristic of chiral molecules is the different behaviour of the enantiomers when interacting with circularly polarized radiation. This asymmetry is called circular dichroism. The coincident investigation of ions and electrons from the fragmentation of a molecule opens up new possibilities for the study of dichroism. For instance, the momentum vectors of the ions can be linked to known asymmetries in the electron distribution (photoelectron circular dichroism), which may lead to a better understanding of the interaction of electromagnetic radiation with chiral molecules.
In this work, asymmetries in the angular distributions of both the ions and the electrons were searched for after the ionization of CHBrClF and propylene oxide (C3H6O) with circularly polarized synchrotron radiation. In the measurements performed, no unambiguous evidence of a dichroism could be established under the experimental conditions used. Technical and fundamental limitations of the method are discussed and suggestions for improving future measurements are given.
With the successful determination of the absolute configuration and the possibility, in principle, of investigating asymmetries in previously inaccessible observables, this work lays the foundation for applying coincidence spectroscopy to questions of stereochemistry.
This dissertation is situated in the areas of semiclassical quantum gravity and pseudo-complex General Relativity (pc-GR). Here, semiclassical quantum gravity is understood as the investigation of quantum-mechanical phenomena in a gravitational background field given by a classical theory of gravity, and pc-GR is an alternative to the currently accepted classical theory of gravity, General Relativity (GR), which extends the real space-time coordinates of GR to pseudo-complex ones. Together with a modification of the variational principle, this leads, at leading order, to a correction of Einstein's equation of GR with an additional source term (energy-momentum tensor), whose exact form is, however, not yet known.
Describing gravity as a background field follows inevitably from the fact that, on the basis of GR, no quantized description of it has yet been found. It is hoped, however, that the investigation of semiclassical phenomena will provide hints towards the correct theory of quantum gravity. Moreover, the lack of a quantized theory of gravity motivates the use of alternative theories, since it raises the question of whether GR is the correct description of classical fields.
The aim of the present dissertation was to identify the fundamental differences between GR and pc-GR for bound, spherically symmetric states of the Klein-Gordon and Dirac equations, and to determine a qualitative model of the vacuum fluctuations in spherically symmetric matter distributions, where the connection between pc-GR and the vacuum fluctuations lies in the assumption that a relation exists between them and the additional source term of pc-GR. For this purpose, the bound states of the Klein-Gordon and Dirac equations were computed systematically and numerically for three different metric models (two GR models and one pc-GR model) with constant density; some representative plots were produced, on the basis of which the fundamental differences between the results of the GR models and the pc-GR model were discussed; and the GR results for the Dirac equation were compared as far as possible with results from the literature. In particular, it was found that, in contrast to GR, the energy eigenvalues in pc-GR exhibit a minimum as a function of the extension of the central object. In addition, the energy eigenvalues of the Klein-Gordon equation were in part computed both via the eigenvalue problem of a matrix and via an initial value problem, and it was found that the formulation as an eigenvalue problem is considerably less efficient if the basis of the three-dimensional harmonic oscillator is used for it. For the development of the qualitative vacuum-fluctuation model, two approximations for the expectation value of the energy-momentum tensor at leading order were compared for the Schwarzschild metric (GR), and the use of a qualitative model was justified by the discrepancy that arises there. The vacuum fluctuations were then calculated for metrics of constant matter density using one of the approximations at leading order, and a model was sought that exhibits the same qualitative behaviour. This model was subsequently verified for simple metrics with variable matter density.
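For reference, the field equation whose spherically symmetric bound states are computed is the minimally coupled Klein-Gordon equation on a curved background (standard form, units with c = hbar = 1; the overall sign depends on the metric signature convention):

    \frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,g^{\mu\nu}\,\partial_\nu\phi\right) - m^2\phi = 0 ,

which, for a static spherically symmetric metric and a stationary ansatz, reduces to a radial eigenvalue problem for the energy; the Dirac case is treated analogously with the covariant Dirac equation.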
With its analysis of the bound states, the dissertation contributes to identifying the differences between pc-GR and GR and thus points to further possible observables that could serve to distinguish the two theories. Furthermore, the derived model allows a refinement of the already published results on neutron stars, and the preparatory work required for its construction contributes to the identification of the pc-GR source term.
The endocannabinoids (EC), their synthesizing and metabolizing enzymes, and the cannabinoid (CB) receptors comprise the endocannabinoid system (ECS), which has been detected by Yasuo et al. (2010) in rodent and human brain areas essential for circadian rhythm control and hormone secretion. The EC are secreted in the pars tuberalis (PT) of the pituitary gland and exert their effect as ligands of the type 1 cannabinoid receptor (CB1) in the pars distalis (PD). CB1 is mostly expressed on folliculo-stellate (fs) cells of the PD. The fs cells exert regulatory and supportive functions on adjacent hormone-producing cells (Allaerts and Venkelecom, 2005; Mitsuishi et al., 2013). The lipid- and calcium-binding protein Annexin A1 (Anx A1) and the cell-membrane-permeable compound nitric oxide (NO) have been detected in fs cells (Woods et al., 1990; Devnath and Inoue, 2008). Published findings indicate a strong influence of Anx A1 and NO on hormone production (Taylor et al., 1993; Venkelecom et al., 1997). The hypothesis of this study is that the EC influence hormone secretion by acting upon CB1 receptors on fs cells and thus activating or inhibiting Anx A1 and NO, which directly affect adjacent glandular cells.
The experimental work was carried out predominantly using cell models. The TtT/GF and Tpit/F1 cell lines represent the fs cells, AtT20/D16v the ACTH-producing corticotroph (C) cells, and GH4C1 the PRL-producing lactotroph (L) cells. Whenever a comparison with intact tissue was possible, tissue from C3H mice was used. Chemiluminescent and photometric detection, enzyme-linked immunosorbent assay (ELISA), fluorescence-activated cell sorting (FACS), immunoblot (IB), immunocyto- and immunohistochemical analysis (ICC, IHC), in situ hybridization (ISH), and (q)PCR methods were used as assaying tools to investigate CB1, Anx A1, the Anx A1 receptor Fpr-rs1, NO, ACTH, and PRL.
CB1 was detected on the fs, C, and L cell models. The presence of fatty acid amide hydrolase (FAAH, an EC-degrading enzyme) was confirmed in the fs cells. Incubations of the fs cells with CB1 agonists (2-AG, AEA, WIN) and an antagonist (otenabant) were performed, and a resulting increase in Anx A1 and inhibition of NO were detected. Anx A1 binding sites, known as formyl peptide receptor-related sequence 1 (Fpr-rs1), were identified on the C and L cells. The hormone-producing cells were treated with 2-AG, Anx A1, and NO, and the resulting changes in the levels of ACTH and PRL were measured. Anx A1 acted as a stimulator of ACTH in the corticotroph AtT20/D16v cells and as an inhibitor of PRL in the lactotroph GH4C1 cells. NO inhibited both ACTH and PRL release. Additional analysis of the expression levels of mRNA for Anx A1 and Fpr-rs1 in murine PD tissue demonstrated that, while the expression of the former was not influenced by time, the expression of the latter was activated during the subjective day.
The study presented here shows that the EC stimulate ACTH release by activating Anx A1 and inhibiting NO. For PRL, the EC exert inhibition by activating Anx A1 and stimulation by inhibiting NO. A clear regulatory linkage between the EC and the control of ACTH and PRL is revealed, involving the fs cells and a possible time dependence.
Starting from the societal problem of childhood overweight, the particular significance and responsibility of physical education for this group is highlighted. The thesis argues that physical education can only fulfil its mandate if it succeeds in conveying positive experiences of movement, play and sport to overweight children as well. Within this field of sports pedagogy, a questionnaire was first designed and validated that compares the well-being of overweight pupils, as an indicator of positive experiences, with that of normal-weight children (n = 336). A subsequent qualitative study in the form of guided interviews (with eight overweight/obese children) supplements and substantiates the results.
The main finding is that well-being, measured by a factor-analytically generated model with the three factors "physical education / PE teacher" (factor I), "athletic self-esteem" (factor II) and "classmates / school satisfaction" (factor III), shows no significant differences between the weight classes (factor I p = .57; factor II p = .04; factor III p = .23). Overweight pupils therefore do not feel less comfortable than their normal-weight classmates; on the athletic self-esteem scale they even achieved higher values (normal weight m = 2.06 ± 0.96; overweight m = 2.27 ± 0.89). Despite this positive finding, overweight pupils do experience some dissatisfaction. The question about the importance of, and satisfaction with, the components physical education, own athletic performance, cooperation with classmates, figure and PE teacher made clear that figure and athletic performance are very important to overweight pupils, yet they are only partly satisfied with them. The differences in the corresponding scales proved to be highly significant (figure p = .00, d = .28; athletic performance p = .01, d = .29). Examining question F1.2 regarding gender-specific differences in the statements on well-being yielded only one significant result (p = .01) with a medium effect (d = .48): overweight girls reported higher values on the factor "classmates / school satisfaction" (m = 3.27 ± 0.66) than overweight boys (m = 2.93 ± 0.75). From this it can be concluded that overweight girls feel better understood and supported by their classmates and experience a generally greater school satisfaction than the male comparison group.
The analysis with respect to the pupils' origin yielded no significant results. This finding suggests a successful integration of pupils of foreign origin, which, however, may not be representative given the high proportion of foreigners in the city of Offenbach.
The analysis of the interviews showed that the positive self-esteem can be attributed to a high degree of social recognition. Contrary to numerous theoretical assumptions, no child reported persistent discrimination or feelings of shame on account of its weight. The importance of one's own athletic performance repeatedly emerged as a key criterion in dealing with the pedagogical challenge of creating an awareness of the problem without diminishing self-esteem and the enjoyment of sport. Overweight children attach great importance to performance and, hoping for possible improvement, recognize that a reduction in weight would be advantageous.
Empirical credit demand analysis undertaken at the aggregate level obscures potential behavioral heterogeneity between various borrowing sectors. Looking at disaggregated data and analyzing bank loans to non-financial companies, to financial companies, to households for consumption and for house purchases separately with respect to a common set of macroeconomic determinants may facilitate more accurate empirical relationships and more reliable insights for economic policy. Using quarterly Euro area panel data between 2003 and 2013, empirical evidence for heterogeneity in borrowing behavior across sectors and the credit cycle with respect to interest rates, output and house prices is found. The results motivate sector-specific, counter-cyclical capital requirements.
This paper empirically investigates how organizational hierarchy affects the allocation of credit within a bank. Using an exogenous variation in organizational design, induced by a reorganization plan implemented in roughly 2,000 bank branches in India during 1999-2006, and employing a difference-in-differences research strategy, we find that increased hierarchization of a branch decreases its ability to produce "soft" information on loans, leads to increased standardization of loans and rationing of "soft information" loans. Furthermore, this loss of information brings about a reduction in performance on loans: delinquency rates and returns on similar loans are worse in more hierarchical branches. We also document how hierarchical structures perform better in environments that are characterized by a high degree of corruption, thus highlighting the benefits of hierarchical decision making in restraining rent seeking activities. Finally, we document a channel - managerial interference - through which hierarchy affects loan outcomes.
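A generic difference-in-differences specification of the kind implied by this research design reads (illustrative notation of mine, not the paper's exact regression):

    y_{bt} = \alpha_b + \gamma_t + \delta\,(\mathrm{Hierarchical}_b \times \mathrm{Post}_t) + x_{bt}'\beta + \varepsilon_{bt},

where \alpha_b and \gamma_t are branch and time fixed effects, Hierarchical_b marks branches whose hierarchy deepened under the reorganization plan, Post_t marks the period after the change, and \delta is the effect of interest.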
Our paper evaluates recent regulatory proposals mandating the deferral of bonus payments and claw-back clauses in the financial sector. We study a broadly applicable principal agent setting, in which the agent exerts effort for an immediately observable task (acquisition) and a task for which information is only gradually available over time (diligence). Optimal compensation contracts trade off the cost and benefit of delay resulting from agent impatience and the informational gain. Mandatory deferral may increase or decrease equilibrium diligence depending on the importance of the acquisition task. We provide concrete conditions on economic primitives that make mandatory deferral socially (un)desirable.
In its meeting on 6 September 2012, the Governing Council of the ECB took decisions on a number of technical features regarding the Eurosystem's outright transactions in secondary sovereign bond markets (OMT). This decision was challenged before the German Federal Constitutional Court (GFCC) by a number of constitutional complaints and other petitions. In its seminal judgment of 14 January 2014, the German court expressed serious doubts about the compatibility of the ECB's decision with European Union law.
It admitted the complaints and petitions even though actual purchases had not been executed and the review of acts of an organ of the EU is, in principle, not the task of the GFCC. As justification for this procedure, the court resorted to its case law on a reserved "ultra vires" control and the defense of the "constitutional identity" of Germany. In the end, however, the court referred the case, pursuant to Article 267 TFEU, to the European Court of Justice (ECJ) for preliminary rulings on several questions of EU law. In substance, the German court assessed OMT as an act of economic policy that is not covered by the competences of the ECB. Furthermore, it judged OMT to be a monetary financing of sovereign debt prohibited by EU primary law. The defense of the ECB (disruption of the monetary policy transmission mechanism) was dismissed without closer scrutiny as being "irrelevant". Finally, however, the court opened a path to compromise through an interpretation of OMT in conformity with EU law, subject to preconditions specified in detail.
The procedure and findings of this judgment were harshly criticized by many economists but also by the majority of legal scholars. This criticism is largely convincing with regard to the admissibility of the complaints. Even if the "ultra vires" control is in conformity with prior decisions of the court, it is expanded further in this judgment without compelling reasons. It is also questionable whether the standing of the complaining parties had to be accepted and whether the referral to the ECJ was indicated. The arguments of the court are, however, conclusive in respect of the transgression of competences by the ECB and, to a somewhat lesser extent, in respect of the monetary financing of debt. The dismissal of the defense as "irrelevant" is absolutely persuasive.
The Treaty of Maastricht imposed on the European Union (EU) the strict obligation to establish an economic and monetary union, now Article 3(4) TEU. This economic and monetary union is, however, not designed as a separate entity but as an integral part of the EU. The single currency was to become the currency of the EU and to be legal tender in all Member States unless an exemption was explicitly granted in the primary law of the EU, as in the case of the UK and Denmark. Newly admitted Member States are obliged to introduce the euro as their currency as soon as they fulfil the admission criteria. Technically, this has been achieved by transferring the exclusive competence for the monetary policy of the Member States whose currency is the euro to the EU, Article 3(1)(c) TFEU, and by bestowing on the euro the quality of legal tender, the only legal tender in the EU, Article 128(1) sentence 3 TFEU.
Savings accounts are owned by most households, but little is known about the performance of households’ investments. We create a unique dataset by matching information on individual savings accounts from the DNB Household Survey with market data on account-specific interest rates and characteristics. We document considerable heterogeneity in returns across households, which can be partly explained by financial sophistication. A one-standard deviation increase in financial literacy is associated with a 13% increase compared to the median interest rate. We isolate the usage of modern technology (online accounts) as one channel through which financial literacy has a positive association with returns.
A theory of the boundaries of banks with implications for financial integration and regulation
(2015)
We offer a theory of the "boundary of the firm" that is tailored to banking, as it builds on a single inefficiency arising from risk-shifting and as it takes into account both interbank lending as an alternative to integration and the role of possibly insured deposit funding. Among other things, it explains both why deeper economic integration should also cause greater financial integration, through both bank mergers and interbank lending, albeit this typically remains inefficiently incomplete, and why economic disintegration (or "desynchronization"), as currently witnessed in the European Union, should cause less interbank exposure. It also suggests that recent policy measures such as the preferential treatment of retail deposits, the extension of deposit insurance, or penalties on "connectedness" could all lead to substantial welfare losses.
In-depth analyses of cancer cell proteomes are needed to elucidate oncogenic pathomechanisms, as well as to identify potential drug targets and diagnostic biomarkers. However, methods for quantitative proteomic characterization of patient-derived tumors and in particular their cellular subpopulations are largely lacking. Here we describe an experimental set-up that allows quantitative analysis of proteomes of cancer cell subpopulations derived from either liquid or solid tumors. This is achieved by combining cellular enrichment strategies with quantitative Super-SILAC-based mass spectrometry followed by bioinformatic data analysis. To enrich specific cellular subsets, liquid tumors are first immunophenotyped by flow cytometry followed by FACS-sorting; for solid tumors, laser-capture microdissection is used to purify specific cellular subpopulations. In a second step, proteins are extracted from the purified cells and subsequently combined with a tumor-specific, SILAC-labeled spike-in standard that enables protein quantification. The resulting protein mixture is subjected to either gel electrophoresis or Filter Aided Sample Preparation (FASP) followed by tryptic digestion. Finally, tryptic peptides are analyzed using a hybrid quadrupole-orbitrap mass spectrometer, and the data obtained are processed with bioinformatic software suites including MaxQuant. By means of the workflow presented here, up to 8,000 proteins can be identified and quantified in patient-derived samples, and the resulting protein expression profiles can be compared among patients to identify diagnostic proteomic signatures or potential drug targets.
In the title compound, C20H24N2O4, both peptide bonds adopt a trans configuration with respect to the —N—H and —C=O groups. The dihedral angle between the aromatic rings is 53.58 (4)°. The molecular conformation is stabilized by an intramolecular N—H⋯O hydrogen bond. The crystal packing is characterized by zigzag chains of N—H⋯O hydrogen-bonded molecules running along the b-axis direction.
The main goal of the present work was to determine the energy-dependent cross sections of (γ,n) reactions for 169Tm, 170Yb, 176Yb and 130Te by means of the photoactivation method.
To this end, the efficiencies of the detectors used were first corrected with the aid of simulations, since the targets used have an extended geometry, in contrast to the point-like calibration sources. It turned out that the efficiencies of the MCA detectors could be corrected in an energy-dependent way with the simulations, as the simulations reproduced the shape of the measured efficiencies well. For the efficiencies of the LEPS detectors, on the other hand, no energy-dependent correction could be carried out, since the LEPS detectors showed strong summing effects owing to the small source-detector distance. Within the scope of this work, these summing effects could not be corrected or otherwise accounted for.
In the EU there are longstanding and ongoing pressures towards a tax that is levied at the EU level to substitute for national contributions. We discuss conditions under which such a transition can make sense, starting from what we call a "decentralization theorem of taxation" that is analogous to Oates' (1972) famous result that, in the absence of spill-over effects and economies of scale, decentralized public good provision weakly dominates central provision. We then drop assumptions that turn out to be unnecessary for this result. While spill-over effects of taxation may call for central rules for taxation, as long as spill-over effects do not depend on the intra-regional distribution of the tax burden, decentralized taxation plus tax coordination is found to be superior to a union-wide tax.
Do markets correct individual behavioral biases? In an experimental asset market, we compare the outcomes of a standard market economy to those of an island economy that removes market interactions. We observe asset price bubbles in the market economy, while prices are stable in the island economy. We also find that subjects took more risk following larger losses, resulting in higher prices, consistent with a gambling-for-resurrection motive. This motive can translate into bubbles in the market economy because higher prices increase average losses and thus reinforce the desire to resurrect. By contrast, the absence of such a strategic complementarity in island economies can explain their more stable outcome. These results suggest that markets do not correct behavioral biases; rather the contrary.
This paper analyzes sovereign risk shift-contagion, i.e. positive and significant changes in the propagation mechanisms, using bond yield spreads for the major eurozone countries. Employing two econometric approaches based on quantile regressions (standard quantile regression and Bayesian quantile regression with heteroskedasticity), we find that the propagation of shocks in euro-area bond yield spreads shows almost no evidence of shift-contagion. All the increases in correlation witnessed over the last years come from larger shocks propagated with higher intensity across Europe.
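For reference, the quantile-regression estimator underlying both approaches minimizes the check-function loss (standard formulation; notation mine):

    \hat{\beta}(\tau) = \arg\min_{\beta} \sum_t \rho_\tau\!\left(y_t - x_t'\beta\right), \qquad \rho_\tau(u) = u\,(\tau - \mathbf{1}\{u<0\}),

estimated over a grid of quantiles \tau of the spread distribution; the Bayesian variant addresses the same problem in a likelihood-based framework and additionally models heteroskedasticity.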
Research on interbank networks and systemic importance is starting to recognise that the web of exposures linking banks' balance sheets is more complex than the single-layer-of-exposure paradigm. We use data on exposures between large European banks, broken down by both maturity and instrument type, to characterise the main features of the multiplex structure of the network of large European banks. This multiplex network exhibits positively correlated multiplexity and a high similarity between layers, as shown both by standard similarity analyses and by a core-periphery analysis of the different layers. We propose measures of systemic importance that fit the case in which banks are connected through an arbitrary number of layers (be it by instrument, maturity or a combination of both). Such measures allow the global systemic importance index of any bank to be decomposed into the contributions of each of the sub-networks, providing a useful tool for banking regulators and supervisors. We use the dataset of exposures between large European banks to illustrate the proposed measures.
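As a toy illustration of the layer-decomposition idea (not the systemic-importance measure proposed in the paper), the following Python sketch splits a simple exposure-based index into per-layer contributions:

    import numpy as np

    def layer_decomposed_importance(layers, weights=None):
        """Decompose a simple importance index into per-layer contributions.

        layers: list of (n, n) exposure matrices, one per layer (instrument,
        maturity, ...), with entry [i, j] the exposure of bank i to bank j.
        The index used here (weighted out-strength, i.e. total exposure
        extended per bank) is purely illustrative.
        """
        L = len(layers)
        weights = np.ones(L) / L if weights is None else np.asarray(weights, float)
        contrib = np.array([w * A.sum(axis=1) for w, A in zip(weights, layers)])  # (L, n)
        total = contrib.sum(axis=0)                        # global index per bank
        share = contrib / np.where(total > 0, total, 1.0)  # per-layer shares
        return total, share

Here total[i] is the aggregate index of bank i and share[k, i] the contribution of layer k, mirroring the decomposition property highlighted in the abstract.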
Although banks are at the center of systemic risk, other institutions contribute to it as well. With the publication of the leveraged lending guideline in March 2013, the U.S. regulators show that they are especially worried about private equity firms and their high-risk deals. Given these risks and the interconnectedness of banks through LBO loan syndicates, I shed light on the impact of a bank's LBO loan exposure on its systemic risk. Using 3,538 observations between 2000 and 2013 from 165 global banks, I show that banks with higher LBO exposure also have a higher level of systemic risk. Other loan purposes do not show this positive relationship. The main drivers of this positive relationship are the bank's interconnectedness with other LBO-financing banks and its size. Lending experience with a specific PE sponsor, experience with leading LBO syndicates or a bank's credit rating, however, lead to a lower impact of the LBO loan exposure on systemic risk.
In the mid-1990s, institutional investors entered the syndicated loan market and started to serve borrowers as lead arrangers. Why are non-banks able to compete for this role against banks? How do the composition of syndicates and loan pricing differ among lead arrangers? Using a dataset of 12,847 leveraged loans between 1997 and 2012, I aim to answer these questions. Non-banks benefit from looser regulatory requirements, have industry expertise that helps them screen and monitor borrowers, and focus on firms that ask for loans only, rather than on cross-selling additional services. I show that non-banks specialize in more opaque and less experienced borrowers, are more likely than banks to choose participants who help reduce the potentially higher information asymmetries, and earn 105 basis points more than banks.
This paper analyzes the influence Leveraged Buyouts (LBOs) have on the operating performance of the LBO target companies' direct competitors. A unique, hand-collected data set on LBOs in the United States in the period 1985-2009 allows us to analyze the effects that different restructuring activities undertaken as part of the LBO have on competitors' revenues. These restructuring activities include changes to leverage, governance, or the operating business, as well as M&A activities of the LBO target company. We find that although LBOs themselves have a negative influence on competitors' revenue growth, some restructuring mechanisms might actually benefit competing companies.
The Liikanen Group proposes contingent convertible (CoCo) bonds as a potential mechanism to enhance financial stability in the banking industry. Life insurance companies in particular could serve as CoCo bond holders, as they are already the largest purchasers of bank bonds in Europe. We develop a stylized model with a direct financial connection between banking and insurance and study the effects of various types of bonds, such as non-convertible bonds, write-down bonds and CoCos, on banks' and insurers' risk situations. In addition, we compare insurers' capital requirements under the proposed Solvency II standard model as well as under an internal model that ex ante anticipates the additional risks due to a possible conversion of the CoCo bond into bank shares. In order to check the robustness of our findings, we consider different CoCo designs (write-down factor, trigger value, holding time of bank shares) and compare the resulting capital requirements with those for holding non-convertible bonds. We identify situations in which insurers benefit from buying CoCo bonds due to lower capital requirements and higher coupon rates. Furthermore, our results highlight how the Solvency II standard model can mislead insurers in their CoCo investment decision due to economically irrational incentives.
I assess how Basel III, Solvency II and the low interest rate environment will affect the financial connection between the banking and insurance sectors by changing the funding patterns of banks as well as the investment strategies of life insurance companies. For life insurance companies in particular, the current low interest rate environment poses a key risk, since declining returns on investments jeopardize the guaranteed return, a core component of traditional life insurance contracts in several European countries. I consider a contingent claim framework with a direct financial connection between banks and life insurers via bank bonds. The results indicate that life insurers' demand for bank bonds increases over the mid-term but ultimately declines in the long run. Since life insurers are the largest purchasers of bank bonds in Europe, banks could lose one of their main funding sources. In addition, I show that the risk appetite of shareholder-value-driven life insurers increases when the gap between asset return and liability growth diminishes. To check the robustness of the findings, I calibrate a prolonged low interest rate scenario. The results show that the insurer's risk appetite is even higher when interest rates remain persistently low. A sensitivity analysis regarding industry-specific regulatory safety levels reveals that contagion between banks and life insurers is driven by the insurers' demand for bank bonds, which itself depends on the regulatory safety level of banks.
The creation of the Banking Union is likely to come with substantial implications for the governance of Eurozone banks. The European Central Bank, in its capacity as supervisory authority for systemically important banks, as well as the Single Resolution Board, under the EU Regulations establishing the Single Supervisory Mechanism and the Single Resolution Mechanism, have been provided with a broad mandate and corresponding powers that allow for far-reaching interference with the relevant institutions’ organisational and business decisions. Starting with an overview of the relevant powers, the present paper explores how these could – and should – be exercised against the backdrop of the fundamental policy objectives of the Banking Union. The relevant aspects directly relate to a fundamental question associated with the reallocation of the supervisory landscape, namely: Will the centralisation of supervisory powers, over time, also lead to the streamlining of business models, corporate and group structures of banks across the Eurozone?
This paper examines the dynamic relationship between credit risk and liquidity in the sovereign bond market in the context of the European Central Bank (ECB) interventions. Using a comprehensive set of liquidity measures obtained from a detailed, quote-level dataset of the largest interdealer market for Italian government bonds, we show that changes in credit risk, as measured by the Italian sovereign credit default swap (CDS) spread, generally drive the liquidity of the market: a 10% change in the CDS spread leads to an 11% change in the bid-ask spread. This relationship is stronger, and the transmission is faster, when the CDS spread is above the 500 basis point threshold, estimated endogenously, and can be ascribed to changes in margins and collateral, as well as to clientele effects. Moreover, we show that the Long-Term Refinancing Operations (LTRO) intervention by the ECB weakened the sensitivity of the liquidity provision by the market makers to changes in the Italian government's credit risk. We also document the importance of market-wide and dealer-specific funding liquidity measures in determining the market liquidity for Italian government bonds.
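Read literally, the reported magnitudes correspond to an elasticity of roughly 1.1; a hedged sketch of that reading in log-log form (the notation is ours, not necessarily the authors' specification):

\[
\Delta \log(\text{bid-ask}_t) = \alpha + \beta\,\Delta \log(\text{CDS}_t) + \varepsilon_t, \qquad \hat{\beta} \approx 1.1,
\]

so a 10% increase in the CDS spread translates into roughly 1.1 × 10% = 11% wider bid-ask spreads.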
The European Commission has published a Green Paper outlining possible measures to create a single market for capital in Europe. Our comments on the Commission's capital markets union project take the functional finance approach as a starting point. Policy decisions, from the functional finance perspective, should be essentially neutral (agnostic) in terms of institutions (level playing field). The main angle from which we assess proposals for the capital markets union agenda is information asymmetries and the agency problems (screening, monitoring) that arise from them. Within this perspective, we make a number of more specific proposals.
The paper traces the developments from the formation of the European Economic and Monetary Union to this date. It discusses the fact that the primary mandate of the European System of Central Banks (ESCB) is confined to safeguarding price stability and does not include general economic policy. Finally, the paper contributes to the discussion on whether the primary law of the European Union would support a eurozone exit. The Treaty of Maastricht imposed the strict obligation on the European Union (EU) to establish an economic and monetary union, now Article 3(4) TEU. This economic and monetary union is, however, not designed as a separate entity but as an integral part of the EU. The single currency was to become the currency of the EU and to be the legal tender in all Member States unless an exemption was explicitly granted in the primary law of the EU, as in the case of the UK and Denmark. Newly admitted Member States are obliged to introduce the euro as their currency as soon as they fulfil the admission criteria. Technically, this has been achieved by transferring the exclusive competence for the monetary policy of the Member States whose currency is the euro to the EU (Article 3(1)(c) TFEU) and by bestowing on the euro the quality of legal tender, the only legal tender in the EU (Article 128(1) sentence 3 TFEU).
German tax policy combines high tax rates with numerous exemptions. This opens up gaps in fairness, steers investment towards the wrong purposes and at times complicates the tax system beyond recognition. This is particularly evident in the case of the inheritance tax. The attempt to bring consistency into inheritance and gift taxation through minimally invasive corrections is almost inevitably doomed to fail. Much speaks instead for markedly lower tax rates combined with a simultaneous abolition of the preferential treatment of business assets.
Critical discourse is essential to scholarship. This may be a truism, but it is often forgotten in the current dispute over "Münkler Watch", a blog in which students at Humboldt-Universität Berlin anonymously criticise a lecture course by the political scientist Prof. Herfried Münkler. Yet the students, too, seem to be after attention rather than a substantive dialogue.
25 years of ISOE – event documentation online +++ More than just housing: event series "Gemeinsam Leben in der Stadt" +++ Construction of wind turbines: parties to the conflict in dialogue +++ Ceremony at BiK-F – admission to the Senckenberg Gesellschaft für Naturforschung +++ ISOE lecture in the 2014/15 winter semester at Goethe-Universität Frankfurt +++ From ISOE: European biodiversity research: ISOE is a member of ALTER-Net +++ Dates +++ Publications
ISOE research team accompanies "Reallabore" (real-world laboratories) in Baden-Württemberg +++ ISOE at the Berliner Energietage +++ Water for the dry season – handover of the floodwater collection facility in Namibia +++ Zukunftsstadt – ISOE is a partner of the Wissenschaftsjahr 2015 +++ World Water Decade ends – problems in the global water supply remain +++ Capital4Health – research network for transdisciplinary health research +++ From ISOE: Dr. Alexandra Lux is the new head of the research focus area "Transdisziplinäre Methoden und Konzepte" +++ Dates +++ Publications
The best! New additions to the blogroll
(2015)
Every now and then you simply have to declutter and tidy up properly. That goes for life in general and, every now and then, for the Bretterblog as well. Today it was that time again. Highly motivated by the daring plans from our last editorial meeting, I tackled, among other things, our blogroll: clicked through it once, cleared out the dead blogs, and marvelled at how many strong blogs are out there that one tends to lose sight of every now and then!...
The design of rainwater harvesting based gardens requires considering not only the current climate but also climate change over the lifespan of the facility. The goal of this study is to present an approach for designing garden variants that can be safely supplied with harvested rainwater, taking into account climate change and adaptation measures. In addition, the study presents a methodology to quantify the effects of climate change on rainwater harvesting based gardening. Results of the study may not be accurate due to the assumptions made for the climate projections and may need to be further refined. We used a tank flow model and an irrigation water model. We then established three simple climate scenarios and analyzed the impact of climate change on harvested rain and horticulture production for a semi-arid region in northern Namibia. In the two climate scenarios with decreased precipitation and a medium or high temperature increase, adaptation measures are required to avoid substantial decreases in horticulture production. The study found that the most promising adaptation measures to sustain yields and revenues are a more water-efficient garden variant and an enlargement of the roof size. The proposed measures can partly or completely compensate for the negative impacts of climate change.
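A minimal daily water-balance sketch of a rainwater harvesting tank illustrates the kind of calculation such a tank flow model performs; all parameter values (roof area, runoff coefficient, tank size, daily irrigation demand) and the rainfall series are illustrative assumptions, not the study's calibration.

```python
# Minimal daily water-balance sketch of a rainwater harvesting tank (illustrative only).
def simulate_tank(rainfall_mm, roof_area_m2=100.0, runoff_coeff=0.8,
                  tank_capacity_m3=30.0, irrigation_demand_m3=0.4):
    storage = 0.0
    supplied = deficit = overflow = 0.0
    for rain in rainfall_mm:
        inflow = rain / 1000.0 * roof_area_m2 * runoff_coeff  # mm of rain -> m3 harvested
        storage += inflow
        if storage > tank_capacity_m3:                        # tank spills once full
            overflow += storage - tank_capacity_m3
            storage = tank_capacity_m3
        take = min(storage, irrigation_demand_m3)             # irrigate with what is available
        storage -= take
        supplied += take
        deficit += irrigation_demand_m3 - take
    return supplied, deficit, overflow

# toy dry-season rainfall series (mm per day)
rain_series = [20, 0, 0, 5, 0, 0, 0, 15, 0, 0] * 9
supplied, deficit, overflow = simulate_tank(rain_series)
print(f"supplied {supplied:.1f} m3, unmet demand {deficit:.1f} m3, overflow {overflow:.1f} m3")
```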
Bayesian Networks are computer-based environmental models that are frequently used to support decision-making under uncertainty. Under data-scarce conditions, Bayesian Networks can be developed, parameterized, and run based on expert knowledge only. However, the efficiency of expert-based Bayesian Network modeling is limited by the difficulty of deriving model inputs in the time available during expert workshops. This thesis therefore aimed at developing a simple and robust method for deriving conditional probability tables from expert estimates in a time-efficient way. The design and application of this new elicitation and conversion method are demonstrated using a case study in Xinjiang, Northwest China. The key characteristics of the method are its time-efficiency and its use of different conversion tables depending on the expert's level of confidence. Although the method has its limitations (e.g. it can only be applied to variables with one conditioning variable), it provides the opportunity to support the parameterization of Bayesian Networks that would otherwise remain half-finished due to time constraints. In addition, a case study in the Murray-Darling Basin, Australia, is used to compare Bayesian Network types and software in order to improve the presentation clarity of large Bayesian Networks. Both case studies aimed at gaining insights into how to improve the applicability of Bayesian Networks to support environmental management.
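To illustrate the general idea of a confidence-dependent conversion of expert estimates into conditional probability table (CPT) rows, a small sketch follows; the categories and conversion probabilities are made-up assumptions and differ from the tables developed in the thesis.

```python
# Illustrative sketch: converting qualitative expert estimates plus a stated level
# of confidence into CPT rows via lookup tables (assumed values, for demonstration only).
CONVERSION = {
    # (expert's most likely outcome, expert's confidence) ->
    # probability distribution over the child states (low, medium, high)
    ("low",    "high"):   (0.80, 0.15, 0.05),
    ("low",    "medium"): (0.65, 0.25, 0.10),
    ("medium", "high"):   (0.10, 0.80, 0.10),
    ("medium", "medium"): (0.20, 0.60, 0.20),
    ("high",   "high"):   (0.05, 0.15, 0.80),
    ("high",   "medium"): (0.10, 0.25, 0.65),
}

def cpt_from_estimates(estimates):
    """Map {parent_state: (most likely outcome, confidence)} to CPT rows."""
    return {parent: CONVERSION[(outcome, confidence)]
            for parent, (outcome, confidence) in estimates.items()}

# one conditioning (parent) variable with three states, matching the method's scope
expert_input = {
    "parent_low":    ("low", "high"),
    "parent_medium": ("medium", "medium"),
    "parent_high":   ("high", "medium"),
}
for parent_state, row in cpt_from_estimates(expert_input).items():
    print(parent_state, "->", row)
```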