Year of publication: 2008
Document type: Working Paper
This paper analyzes liquidity in an order-driven market. We not only investigate the best limits in the limit order book, but also take into account the book behind these inside prices. When subsequent prices are close to the best ones and depth at them is substantial, larger orders can be executed without an extensive price impact and without deterring liquidity. We develop and estimate several econometric models, based on depth and prices in the book, as well as on the slopes of the limit order book. The dynamics of different dimensions of liquidity are analyzed: prices, depth at and beyond the best prices, as well as resiliency, i.e. how fast the different liquidity measures recover after a liquidity shock. Our results show a somewhat less favorable image of liquidity than often found in the literature. After a liquidity shock (in the spread, in depth, or in the book beyond the best limits), several dimensions of liquidity deteriorate at the same time. Not only does the inside spread increase and depth at the best prices decrease; the difference between subsequent bid and ask prices may also become larger, and the depth provided at them decreases. The impacts are both econometrically and economically significant. Our findings also point to an interaction between different measures of liquidity, between liquidity at the best prices and beyond in the book, and between the ask and bid sides of the market.
This paper introduces adaptive learning and endogenous indexation in the New-Keynesian Phillips curve and studies disinflation under inflation targeting policies. The analysis is motivated by the disinflation performance of many inflation-targeting countries, in particular the gradual Chilean disinflation with temporary annual targets. At the start of the disinflation episode, price-setting firms expect inflation to be highly persistent and opt for backward-looking indexation. As the central bank acts to bring inflation under control, price-setting firms revise their estimates of the degree of persistence. Such adaptive learning lowers the cost of disinflation. This reduction can be exploited by a gradual approach to disinflation. Firms that choose the rate for indexation also re-assess the likelihood that announced inflation targets determine steady-state inflation and adjust indexation of contracts accordingly. A strategy of announcing and pursuing short-term targets for inflation is found to influence the likelihood that firms switch from backward-looking indexation to the central bank's targets. As firms abandon backward-looking indexation, the costs of disinflation decline further. We show that an inflation targeting strategy that employs temporary targets can benefit from lower disinflation costs due to the reduction in backward-looking indexation.
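The learning mechanism the abstract describes can be illustrated with a minimal sketch. The code below is not the paper's model (which embeds learning in a New-Keynesian Phillips curve); it only shows, under assumed parameter values, how a constant-gain updating rule lets price setters revise a perceived inflation-persistence parameter downward during a disinflation. All numbers (initial belief 0.95, true persistence 0.8, gain 0.1) are hypothetical.

```python
def update_persistence(rho_hat, infl_prev, infl_now, gain=0.1):
    """One constant-gain learning step: move the perceived AR(1)
    persistence rho_hat toward the observed inflation ratio."""
    if infl_prev == 0:
        return rho_hat  # no information in this observation
    return rho_hat + gain * (infl_now / infl_prev - rho_hat)

# Hypothetical disinflation path: inflation decays with true persistence 0.8,
# while firms initially believe persistence is 0.95.
rho_hat, infl = 0.95, 10.0
for _ in range(50):
    infl_next = 0.8 * infl          # assumed noiseless AR(1) disinflation
    rho_hat = update_persistence(rho_hat, infl, infl_next)
    infl = infl_next
```

With noiseless data, the belief converges geometrically to 0.8; in the paper, this downward revision of perceived persistence is what lowers the cost of disinflation.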
Motivated by the recent discussion of the declining importance of deposits as banks' major source of funding, we investigate which factors determine funding costs at local banks. Using a panel data set of more than 800 German local savings and cooperative banks for the period from 1998 to 2004, we show that funding costs are driven not only by the relative share of comparatively cheap deposits among banks' liabilities but, among other factors, especially by the size of the bank. In our empirical analysis we find strong and robust evidence that, ceteris paribus, smaller banks exhibit lower funding costs than larger banks, suggesting that small banks are able to attract deposits more cheaply than their larger counterparts. We argue that this is the case because smaller banks interact more personally with customers, operate in customers' geographic proximity, and have longer and stronger relationships than larger banks, and hence are able to charge higher prices for their services. Our finding of a strong influence of bank size on funding costs is also of great interest in an international context, as mergers among small local banks, the key driver of bank growth, are a recent phenomenon, not only in European banking, that is expected to continue in the future. At the same time, net interest income remains by far the most important source of revenue for most local banks, accounting for approximately 70% of total operating revenues in the case of German local banks. The influence of size on funding costs is of strong economic relevance: our results suggest that an increase in size by 50%, for example from EUR 500 million in total assets to EUR 750 million (representative of M&A transactions among local banks), increases funding costs, ceteris paribus, by approximately 18 basis points, which corresponds to roughly 7% of banks' average net interest margin.
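The quoted magnitudes can be sanity-checked with a line of arithmetic: if 18 basis points correspond to roughly 7% of the average net interest margin, the implied average margin is about 2.6%. The input figures below come from the abstract; the derived margin is our back-calculation, not a number reported by the paper.

```python
funding_cost_increase = 0.0018   # 18 basis points, from the abstract
share_of_margin = 0.07           # approx. 7% of the net interest margin
implied_avg_margin = funding_cost_increase / share_of_margin
# implied average net interest margin ≈ 0.0257, i.e. roughly 2.6%
```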
This paper is one of the first to analyse political influence on state-owned savings banks in a developed country with an established financial market: Germany. Combining a large dataset with financial and operating figures of all 457 German savings banks from 1994 to 2006 and information on over 1,250 local elections during this period, we investigate the change in business behavior around elections. We find strong indications of political influence: the probability that savings banks close branches, lay off employees, or engage in merger activities is significantly reduced around elections. At the same time, they tend to increase their extraordinary spending, which includes support for social and cultural events in the area, on average by over 15%. Finally, we find that savings banks extend significantly more loans to their corporate and private customers in the run-up to an election. In further analyses, we show that the magnitude of political influence depends on bank-specific, economic, and political circumstances in the city or county: political influence seems to be facilitated by weak political majorities and profitable banks. Banks in economically weak areas seem to be less prone to political influence.
This study addresses the phonetic motivation of phonological palatalization processes in which front vocoids trigger the palatalization (or affrication) of preceding plosives. Acoustic analyses of German and Bulgarian voiceless alveolar and velar stops examine the influence of following front vocoids and of the low vowel /a/ on the noise-like phase after the plosive release of the consonants. To test a hierarchy of likely input candidates for palatalization, formulated according to universal phonological principles, acoustic measurements of the duration and the spectral properties of the consonantal segment in word-initial consonant-vocoid sequences are presented. The results of the study only partially support the proposed hierarchy hypothesis and show that language-specific characteristics influence the ordering of the elements of the hierarchy.
This paper considers a trading game in which sequentially arriving liquidity traders opt either for a market order or for a limit order. One class of traders is considered to have an extended trading horizon, implying their impatience is linked to their trading orientation. More specifically, sellers are considered to have a trading horizon of two periods, whereas buyers only have a single-period trading scope (the extended buyer-horizon case is completely symmetric). Clearly, as the life span of their submitted limit orders is longer, this setting implies sellers are granted a natural advantage in supplying liquidity. This benefit is hampered, however, by the direct competition arising between consecutively arriving sellers. Closed-form characterizations of the order submission strategies are obtained when solving for the equilibrium of this dynamic game. These allow us to examine how these forces affect traders' order placement decisions. Further, the analysis yields insight into the dynamic process of price formation and into the market clearing process of a non-intermediated, order-driven market.
Slovenia's imminent accession to the OECD (Organisation for Economic Co-operation and Development), its recent second-place ranking in the 2008 BTI status index (Bertelsmann Transformation Index), its Council presidency of the EU (European Union) in the first half of 2008, its membership in the Schengen area, and the introduction of the euro are only the most recent milestones of the successful and sustained transformation into a democratic system and the commitment to a market economy. For a long time, Slovenia's history stood in the shadow of the histories of Austria and Yugoslavia. As a nation in a state of its own, Slovenia has found itself in an entirely new role since the disintegration of Yugoslavia. The legacy of earlier dependencies has given way to a new self-confidence. Slovenia's gradual transformation during the 1990s into a sovereign state under international law, a political democracy, and a free market economy appears in the European context "…only [as] a chapter in the larger tale of the democratic wave that rather unexpectedly swept across Central, Eastern, and Southeastern Europe during the last years of the twentieth century." Reflecting on the historical events, Kornai judges the transformation in Europe at the end of the last century "…in spite of serious problems and anomalies …[as] a success story." In the course of the transformation process, Slovenia, a "political and economic dwarf", was able to integrate and firmly anchor itself as an independent state in democratic Europe and the European Union. To identify the reasons and factors behind this process, it is necessary to consider the developments of the 1980s that led to the dissolution of non-aligned socialist Yugoslavia and to Slovenia's independence.
Each of Yugoslavia's constituent republics and regions looks back on its own historical, religious, and linguistic tradition, with individual experiences and specific tensions within and outside the common federation. Slovenia's path to political, economic, and democratic independence was an individual national process of differentiation and transformation, and the result of manifold multidimensional conflicts. The break with and secession from the Yugoslav federation on 25 June 1991 was not unexpected or sudden. The founding and the decline of a state are complex phenomena that are difficult to explain. Weißenbacher regards reducing the driving forces of the dissolving social processes in the Yugoslavia of the 1980s exclusively to the nationality question as too narrow a focus on the ethnic tensions within the multi-ethnic state. He argues: "To seek the roots of the disintegration of socialist Yugoslavia in old ethnic hostilities would be to ignore the economic, social, and political processes…." Viewed against the specific development path of the republic of Slovenia, Yugoslavia's growing regional incompatibilities in the 1980s make clear that the political-social, cultural, and socio-economic structures were ultimately not permanently compatible with the structures of the other Yugoslav republics. Yugoslavia's political and economic instability and the early change within Slovenian society and the Communist Party in the 1980s led, through pressure for political reform and macroeconomic imbalances, to the collapse of the Yugoslav federation.
Mencinger emphasizes that Yugoslavia's deep crisis could ultimately not have been overcome without a radical systemic break and a structural change in the distribution of political and economic power. This contribution takes up the framework conditions, developments, conflicts, and goals, and traces the essential political and economic events that the Slovenian population and political leadership faced in the years before secession and that led to the founding of the independent state.
Motivated by the prominent role of electronic limit order book (LOB) markets in today's stock market environment, this paper provides the basis for understanding, reconstructing, and adapting the methodology of Hollifield, Miller, Sandas, and Slive (2006) (henceforth HMSS) for estimating the gains from trade to the Xetra LOB market at the Frankfurt Stock Exchange (FSE), in order to evaluate its performance in this respect. To this end, the paper examines HMSS's base model in depth and provides a structured recipe for the planned implementation with Xetra LOB data. The contribution of this paper lies in modifying HMSS's methodology to account for particularities of the Xetra trading system that are not yet considered in HMSS's base model. These necessary modifications, expressed as empirical caveats, are essential for ultimately deriving unbiased market efficiency measures for Xetra.
Contents: Prof. Dr. Helmut Siekmann: Using Federalism Commission II for a sustainable design of the financial systems. Statement for the expert hearing of the Budget and Finance Committee of the Landtag of North Rhine-Westphalia on 14 February 2008 (Statement 14/1785). Motion of the BÜNDNIS 90/Die Grünen parliamentary group in the Landtag of North Rhine-Westphalia: Drucksache 14/4338. List of questions for the expert hearing of the Budget and Finance Committee and the Main Committee on 14 February 2008.
Contents: Prof. Dr. Helmut Siekmann: Statement for the public hearing of the Budget Committee on the bill of the SPD and Bündnis 90/Die Grünen parliamentary groups for an act amending the Hessian State Budget Code (Landeshaushaltsordnung, LHO): Drucksache 17/265. List of persons to be heard in the Budget Committee on 17 September 2008 on Drucksache 17/265.
A new global crop water model was developed to compute blue (irrigation) water requirements and crop evapotranspiration from green (precipitation) water at a spatial resolution of 5 by 5 arc minutes for 26 different crop classes. The model is based on soil water balances performed for each crop and each grid cell. For the first time, a new global data set was applied consisting of monthly growing areas of irrigated crops and related cropping calendars. Crop water use was computed for irrigated land for the period 1998-2002. This documentation report describes the data sets used as model input and the methods used in the model calculations, followed by a presentation of the first results for blue and green water use at the global scale, for countries, and for specific crops. Additionally, the simulated seasonal distribution of water use on irrigated land is presented. The computed model results are compared to census-based statistical information on irrigation water use and to results of another crop water model developed at FAO.
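A toy version of such a soil water balance can make the green/blue split concrete. The sketch below is our illustration, not the model documented in the report (which runs per crop and 5-arc-minute grid cell with crop calendars): it tracks a single soil-moisture bucket, meets crop evapotranspiration from stored precipitation (green water) first, and books any remaining deficit as irrigation requirement (blue water).

```python
def water_balance(precip, crop_et, capacity):
    """Toy daily soil water balance: returns (green, blue) water use.

    precip, crop_et: daily precipitation and crop ET demand (mm)
    capacity: soil water holding capacity (mm); assumed full at start
    """
    soil, green, blue = capacity, 0.0, 0.0
    for p, et in zip(precip, crop_et):
        soil = min(soil + p, capacity)   # infiltration; excess is runoff
        from_soil = min(et, soil)        # ET met from green (soil) water
        soil -= from_soil
        green += from_soil
        blue += et - from_soil           # deficit met by irrigation
    return green, blue
```

For example, with a 2 mm bucket, no rain, and a demand of 3 mm per day for two days, the first day draws 2 mm of green water and the two days together book 4 mm of irrigation (blue water) requirement.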
The government draft of the ARUG: contents and key changes compared with the ministerial draft
(2008)
The draft of an Act Implementing the Shareholders' Rights Directive (ARUG) contains far more than just the implementation of the Directive on the exercise of certain rights of shareholders in listed companies (the so-called Shareholders' Rights Directive), which must be completed by 3 August 2009. The present ARUG draft addresses three further regulatory areas. As a second focus, deregulation options arising from the amendment of the Capital Directive are to be used in the area of raising capital through contributions in kind. In a third area, the draft turns to deregulating the proxy voting rights of banks; here, entirely new courses of action are opened up. A final important goal of the draft is curbing abusive shareholder lawsuits. The ARUG draft was presented to the public as a ministerial draft in May 2008. With the 2009 federal election approaching, the draft must not fall victim to parliamentary discontinuity, and the government draft was therefore prepared under great time pressure. The cabinet adopted it on 5 November. The act thus has a good chance of entering into force on 1 November 2009. ...
This article also reviews the judgment of the Regional Court (LG) of Cologne of 5 October 2007, 82 O 114/06 (STRABAG AG). The rights that a shareholder loses under § 28 sentence 1 WpHG for the period in which he fails to fulfil his notification duty under § 21 (1) or (1a) WpHG include the voting right in the general meeting. That this rule harbours considerable potential for challenges to general meeting resolutions was recognized even before the WpHG had entered into force. Today this potential is more apparent than ever: according to a recent empirical study, the complaint that the majority shareholder or another large shareholder was excluded from voting for breach of statutory notification duties is among the most frequently raised grounds for challenge. Against this background, the judgment of the Regional Court of Cologne of 5 October 2007 in the STRABAG AG case, which offers several fundamental (and in part surprising) statements on the interpretation of §§ 21 et seq. WpHG as well as on the possibilities and procedural consequences of a confirmatory resolution under § 244 AktG, deserves attention from all sides. The article first presents the facts of the STRABAG case and those theses of the judgment (in II) that are then examined in turn (in III-VI). The case also gives occasion to ask what is to be made of the planned tightening of § 28 WpHG by the draft Risk Limitation Act (Risikobegrenzungsgesetz) (in VII).
Foreword: Climate is of not only scientific but also public interest above all because it is changeable and because such changes can have grave ecological and socio-economic consequences. In detail, however, climate changes exhibit complicated temporal and spatial structures, whose detection and interpretation is anything but simple. Among the temporal structures, relatively long-term trends and extreme events rightly stand in the spotlight: the former because they express systematic climate change, the latter because of their particularly drastic impacts. Our working group has repeatedly dealt with both aspects in depth. Regarding extreme events and extreme value statistics, see for example institute reports nos. 1, 2 and 5 and the literature cited there. Here we are once again concerned with climate trends, and in particular with spatial trend structures. Relatively long-term, and hence systematic, climate change proceeds very differently from region to region, which is best expressed in trend maps. Such regional, in part very small-scale, peculiarities are especially pronounced for precipitation. Moreover, the spatial trend structures also differ greatly by season and month. In our working group, Dr. Jörg Rapp studied this problem intensively in his diploma and especially his doctoral thesis, which led to the publication of the "Atlas der Niederschlags- und Temperaturtrends in Deutschland 1891-1990" (Rapp and Schönwiese, 2nd ed. 1996) and the "Climate Trend Atlas of Europe – Based on Observations 1891-1990" (Schönwiese and Rapp, 1997). The great attention these works received has long made an update appear necessary.
This was first done for the climate trend atlas of Germany, which is now available for the interval 1901-2000 (institute report no. 4, 2005). Here, a corresponding update for Europe is presented, based on the calculations that Reinhard Janoschitz carried out in his diploma thesis. There is a close link to the project VASClimO (Variability Analysis of Surface Climate Observations), which was gratefully funded by the Federal Ministry of Education and Research (BMBF) within DEKLIM (German Climate Research Programme); see institute report no. 6, into which a few Europe climate trend maps were already incorporated. With the publication of the present "Klima-Trendatlas Europa 1901-2000", comprehensive information on climate change in Europe is again presented in a total of 261 maps (of which 17 colour maps are integrated into the text). They are based predominantly on linear trend analyses of near-surface air temperature and precipitation for the period 1901-2000 as well as for the sub-intervals 1951-2000, 1961-1990 and 1971-2000, in each case on the basis of annual, seasonal, and monthly observational data. The significance of the trends is marked by hatching in the (black-and-white) map section. Since the analysis closely follows the above-cited work of Schönwiese and Rapp (1997), where detailed textual explanations can be found (likewise in Rapp, 2000), the text section here has been kept very brief.
Various concurrency primitives have been added to sequential programming languages in order to make them concurrent. Prominent examples are concurrent buffers for Haskell, channels in Concurrent ML, joins in JoCaml, and handled futures in Alice ML. Even though one might conjecture that all these primitives provide the same expressiveness, proving this equivalence is an open challenge in the area of program semantics. In this paper, we establish a first instance of this conjecture. We show that concurrent buffers can be encoded in the lambda calculus with futures underlying Alice ML. Our correctness proof results from a systematic method, based on observational semantics with respect to may- and must-convergence.
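The flavour of the buffer primitive can be conveyed by a small analogue. The sketch below is written in Python rather than in the calculi the paper studies, and is only our illustration of a one-place concurrent buffer (in the spirit of Haskell's MVar): `put` blocks while the buffer is full, `take` blocks while it is empty.

```python
import threading

class OnePlaceBuffer:
    """A one-place blocking buffer: put blocks when full, take when empty."""
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._full = False

    def put(self, value):
        with self._cond:
            while self._full:            # wait until the slot is free
                self._cond.wait()
            self._value, self._full = value, True
            self._cond.notify_all()      # wake any waiting take

    def take(self):
        with self._cond:
            while not self._full:        # wait until a value arrives
                self._cond.wait()
            value, self._value, self._full = self._value, None, False
            self._cond.notify_all()      # wake any waiting put
            return value

# A producer thread hands one value to the main thread through the buffer.
buf = OnePlaceBuffer()
producer = threading.Thread(target=lambda: buf.put(42))
producer.start()
received = buf.take()
producer.join()
```

The encoding studied in the paper expresses exactly this blocking behaviour with handled futures instead of locks and condition variables.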
Building on a literature review, the current state of technical development in the field of energy recovery from municipal wastewater is outlined. Besides heat recovery, which is possible both in the sewer network and decentrally in buildings, the recovery of biogas both at aerobic treatment plants and in anaerobic plants, and the subsequent upgrading of sewage gas to natural gas quality, are discussed, as is the use of sludge as fuel. The survey of the current state of development helped to identify possible development tasks that, on the one hand, could be most instrumental in allowing wastewater to be regarded as an energy resource in the future and whose solution, on the other hand, requires particularly innovative achievements. The development tasks were sharpened into theses so that they could subsequently be examined in a Delphi survey.
Building on a literature review, the current state of technical development in the field of recovering phosphate and nitrogen compounds from domestic wastewater is outlined. Besides (chemical) recovery from wastewater, the use of anaerobic processes, and recovery from sewage sludge, irrigation with wastewater, composting, and the fractionation of wastewater ("yellow water") are also options for better exploiting the nutrient content of wastewater. The resulting overview of the current state of nutrient recovery served to identify possible development tasks that, on the one hand, appear most urgent (particularly for solving global problems, e.g. ending resource scarcity) and whose solution, on the other hand, requires particularly innovative achievements. The development tasks were sharpened into theses so that they could subsequently be examined in a Delphi survey.
Building on a literature review, the current state of technical development in greywater recycling is outlined. Besides mechanical-biological plants, membrane filter plants and also "low-tech" plants occasionally appear. The overview helped to identify possible development tasks that, on the one hand, appear most urgent (particularly for solving future water quantity problems) and whose solution, on the other hand, requires particularly innovative achievements. The development tasks were sharpened into theses so that they could subsequently be examined in a Delphi survey.
The process of European integration increasingly affects the design of the member states' health care systems. The application of European internal market and competition law to health policy, driven forward by the Commission and the ECJ, has the consequence that market-based steering principles are given primacy over state and corporatist regulation. Competence for shaping health policy lies with the member states, but they must observe the "four freedoms" and European competition law. The principle of solidarity, by contrast, plays only a subordinate role in the European treaties. In European discourse, solidarity appears as a value that constitutes an important point of reference for the European Union without having been given a legally binding form. As a result, it is the Court's interpretation of the solidarity principle that decides whether solidarity-based elements of national health policy are compatible with European law. This mechanism does not rest on democratically organized processes of opinion and will formation, but is the subject of judicial interpretation that is difficult to predict.
The impact of European integration on the German system of pharmaceutical product authorization
(2008)
The European Union has evolved since 1965 into an influential political player in the regulation of pharmaceutical safety standards. The objective of establishing a single European market for pharmaceuticals makes it necessary for member-states to adopt uniform safety standards and marketing authorization procedures. This article investigates the impact of the European integration process on the German marketing authorization system for pharmaceuticals. The analysis shows that the main focal points and objectives of European regulation of pharmaceutical safety have shifted since 1965. The initial phase saw the introduction of uniform European safety standards as a result of which Germany was obliged to undertake “catch-up” modernization. From the mid-1970s, these standards were extended and specified in greater detail. Since the mid-1990s, a process of reorientation has been under way. The formation of the European Agency for the Evaluation of Medicinal Products (EMEA) and the growing importance of the European authorization procedure, combined with intensified global competition on pharmaceutical markets, are exerting indirect pressure for EU member-states to adjust their medicines policies. Consequently, over the past few years Germany has been engaged in a competition-oriented reorganization of its pharmaceutical product authorization system the outcome of which will be to give higher priority to economic interests.
With the "Act on the Limitation of Risks Associated with Financial Investments" (Risk Limitation Act, Risikobegrenzungsgesetz), currently available as a government draft, the Federal Government plans to impede or prevent activities of financial investors that are undesirable for the economy as a whole, while financial or corporate transactions that enhance efficiency are to remain unaffected. To what extent the government draft of the Risk Limitation Act will achieve this self-imposed goal cannot currently be foreseen. What is foreseeable, however, is that the new takeover-law rule on so-called "acting in concert" contained in the draft will come into conflict with Community law. This article sets out that conflict and its reasons. Part A first presents the new provision; Part B then examines its compatibility with the Takeover Directive (I.) and with the free movement of capital (II.). Part C summarizes the results.
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for observational semantics of deterministic and non-deterministic higher-order calculi that use a small-step reduction semantics. The generic (sharing) context lemmas are provided for may- as well as two variants of must-convergence, which hold in a broad class of extended process- and extended lambda calculi, if the calculi satisfy certain natural conditions. As a guideline, the proofs of the context lemmas are valid in call-by-need calculi, in call-by-value calculi if substitution is restricted to variable-by-variable, and in process calculi like variants of the π-calculus. For calculi employing beta-reduction using a call-by-name or call-by-value strategy or similar reduction rules, some iu-variants of ciu-theorems are obtained from our context lemmas. Our results re-establish several context lemmas already proved in the literature, and also provide some new context lemmas as well as some new variants of the ciu-theorem. To make the results widely applicable, we use a higher-order abstract syntax that allows untyped calculi as well as certain simple typing schemes. The approach may lead to a unifying view of higher-order calculi, reduction, and observational equality.
We show on an abstract level that contextual equivalence in non-deterministic program calculi defined by may- and must-convergence is maximal in the following sense: also using all the test predicates generated by the Boolean, forall-, and existential closure of may- and must-convergence does not change the contextual equivalence. The situation is different if may- and total must-convergence are used, where an expression totally must-converges if all reductions are finite and terminate with a value: there is an infinite sequence of test predicates generated by the Boolean, forall-, and existential closure of may- and total must-convergence, which also leads to an infinite sequence of different contextual equalities.
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus' semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe's proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
This paper discusses the so-called commercial approach to microfinance under economic and ethical aspects. It first shows how microfinance has developed from a purely welfare-oriented activity into a commercially relevant line of banking business. The background of this stunning success is the – almost universal – adoption of the so-called commercial approach to microfinance over the course of the last decade. As the author argues, the commercial approach is the only sound approach to adopt if microfinance is to have any social and developmental impact; the widespread "moralistic" criticism of the commercial approach, voiced again and again in the 1990s, is therefore misplaced from both an economic and an ethical perspective. However, some recent events in microfinance raise doubts as to whether the commercial approach has not, in a number of cases, gone too far. The evident example of such a development is the Mexican microfinance institution Compartamos, which recently undertook a financially extremely successful IPO. It seems that some microfinance institutions have by now become so radically commercial that the social and developmental considerations which have traditionally motivated work in the field of microfinance have lost their importance. Thus there is a conflict between commercial and developmental aspirations. However, this conflict is not inevitable. The paper concludes by showing that, and how, a microfinance institution can combine the strengths of the capital market with maintaining its developmental focus and importance.
The ministerial draft of an act implementing the Shareholder Rights Directive (ARUG), released to the public on 6 May 2008, introduces several long-awaited amendments to the German Stock Corporation Act (Aktiengesetz) that had been widely discussed in the literature in advance. The occasion for the draft is the implementation of Directive 2007/36/EC of 11 July 2007 on the exercise of certain rights of shareholders in listed companies (the so-called Shareholder Rights Directive). In line with the Directive's objective, the cross-border exercise of shareholder rights is to be facilitated; this concerns above all the possibility of online participation in the general meeting and communication with shareholders in the run-up to the general meeting. Beyond that, the German legislator has taken the implementation of the Directive as an opportunity to amend stock corporation law in several further respects. The proxy voting rights of credit institutions (Depotstimmrecht) are further deregulated, and the setting of a minimum amount for convertible bonds is made possible. The audit of the value of contributions in kind in the context of formations and capital increases is restricted, implementing several options of the Capital Directive as amended by Directive 2006/68/EC. A particular focus of the ministerial draft lies on the specification of the release procedures under stock corporation, transformation and group law, which are intended to further curb abusive shareholder actions.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation, and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering ≤c is also analysed: Ω is not a least element, and ≤c already implies contextual equivalence w.r.t. may-convergence.
This text is intended as a brief presentation of the phonological tones found in the Bantu languages spoken in Gabon. What is new here, compared with what is known about the analysis of tone in Bantu languages in general, is the inclusion of intonation in the explanation of certain lexical-level tonal modifications for which the lexical tones (fixed or floating) cannot account.
Both the diversification and the focusing of corporate activities are frequently justified by the maximization of firm value. We examine the share-price effects of 184 acquisitions and 139 divestitures by German groups over the period 1996-2005. Contrary to the frequently voiced criticism, corporate diversification does not exert a significantly negative influence on market value. Focusing acquisitions, by contrast, are associated with a significant value premium. The sale of business units generally leads to an increase in market value. Divestitures outside the core business yield a higher, though insignificantly so, value increase than divestitures of core business activities. Instead of a systematic diversification discount, we thus find a "focus premium" for the German market.
In the last few years, many of the world’s largest financial exchanges have converted from mutual, not-for-profit organizations to publicly-traded, for-profit firms. In most cases, these exchanges have substantial responsibilities with respect to enforcing various regulations that protect investors from dishonest agents. We examine how the incentives to enforce such regulations change as an exchange converts from mutual to for-profit status. In contrast to oft-stated concerns, we find that, in many circumstances, an exchange that maximizes shareholder (rather than member) income has a greater incentive to aggressively enforce these types of regulations.
On the disclosure of severance payments and pension commitments to a former management board member
(2008)
Severance payments and pension commitments are among the particularly controversial components of management board remuneration. With the Act on the Disclosure of Management Board Remuneration (VorstOG), the German legislator followed the international trend. Section 4.2.4 of the German Corporate Governance Code (old version) had already recommended the individualized disclosure of the remuneration of current management board members. In France, an obligation to disclose executive salaries was included in Art. L. 225-102-1 of the Code de commerce as early as 2001. The French parliament is currently dealing with the act "Croissance, emploi et pouvoir d’achat: modernisation de l’économie", which would make a resolution of the general meeting necessary for severance agreements. In England, the remuneration of directors must be disclosed in a remuneration report (Sec. 420 CA 2006). The pioneer in the field of disclosure obligations was the United States, which has required individualized disclosure since 1992. The European Commission has also come out in favour of mandatory individualized disclosure. At the centre of the discussion is, in particular, the question of the disclosure of severance and pension commitments. If a management board member leaves office prematurely, he is in principle entitled to remuneration until the end of his service contract, unless the supervisory board has terminated it for good cause. As a rule, however, severance agreements are concluded with the board member. Alongside severance agreements, pension and retirement commitments also play an important role in practice. In view of the wording of § 285 of the German Commercial Code (HGB), the question still arises, even two years after the VorstOG came into force, whether listed stock corporations must disclose severance payments and pension commitments on an individualized basis or only in aggregate.
One question is how an agreed severance payment is to be treated in the management report when stating board remuneration if a management board member leaves office prematurely (III.). A further question is how pension commitments are to be presented (IV.). Before these two questions can be addressed, the statutory framework of the disclosure obligation is briefly outlined (II.). ...
A data set of monthly growing areas of 26 irrigated crops (MGAG-I) and related crop calendars (CC-I) was compiled for 402 spatial entities. The selection of crops comprises all major food crops including regionally important ones (wheat, rice, maize, barley, rye, millet, sorghum, soybeans, sunflower, potatoes, cassava, sugar cane, sugar beets, oil palm, rapeseed/canola, groundnuts/peanuts, pulses, citrus, date palm, grapes/vine, cocoa, coffee), major water-consuming crops (cotton), and unspecified other crops (other perennial crops, other annual crops, managed grassland). The data set refers to the time period 1998-2002 and has a spatial resolution of 5 arc minutes by 5 arc minutes, which is about 9.2 km by 9.2 km at the equator. This is the first time that a data set of cell-specific growing areas of irrigated crops with this spatial resolution has been created. The data set is consistent with the irrigated area and water use statistics of the AQUASTAT programme of the Food and Agriculture Organization of the United Nations (FAO) (http://www.fao.org/ag/agl/aglw/aquastat/main/index.stm) and with the Global Map of Irrigation Areas (GMIA) (http://www.fao.org/ag/agl/aglw/aquastat/irrigationmap/index.stm). At the cell level we tried to maximise consistency with the cropland extent and cropland harvested area from the Department of Geography and Earth System Science Program of McGill University at Montreal, Quebec, Canada and the Center for Sustainability and the Global Environment (SAGE) of the University of Wisconsin at Madison, USA (http://www.geog.mcgill.ca/~nramankutty/Datasets/Datasets.html and http://geomatics.geog.mcgill.ca/~navin/pub/Data/175crops2000/). The consistency between the grid product and the input data was quantified. MGAG-I and CC-I are fully consistent with each other at the entity level. For input data other than CC-I, the consistency of MGAG-I at the cell level was calculated.
The consistency of MGAG-I with respect to the area equipped for irrigation (AEI) of GMIA and to the cropland extent of SAGE was characterised by the sum of the cell-specific maximum difference between the MGAG-I monthly total irrigated area and the reference area when the latter was exceeded in the grid cell. The consistency of the harvested area contained in MGAG-I with respect to SAGE harvested area was characterised by the crop-specific sum of the cell-specific difference between MGAG-I harvested area and the SAGE harvested area when the latter was exceeded in the grid cell. In all three cases, the sums are the excess areas that should not have been distributed under the assumption that the input data were correct. Globally, this cell-level excess of MGAG-I as compared to AEI is 331,304 ha or only about 0.12 % of the global AEI of 278.9 Mha found in the original grid. The respective cell-level excess of MGAG-I as compared to the SAGE cropland extent is 32.2 Mha, corresponding to about 2.2 % of the total cropland area. The respective cell-level excess of MGAG-I as compared to the SAGE harvested area is 27 % of the irrigated harvested area, or 11.5 % of the AEI. In a further step, to be published later, rainfed areas were also compiled in order to form the Global data set of monthly irrigated and rainfed crop areas around the year 2000 (MIRCA2000). The data set can be used for global and continental-scale studies on food security and water use. In the future, it will be improved, e.g. with a better spatial resolution of crop calendars and an improved crop distribution algorithm. The MIRCA2000 data set and its full documentation, together with future updates, will be freely available through the following long-term internet site: http://www.geo.uni-frankfurt.de/ipg/ag/dl/forschung/MIRCA/index.html.
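The excess-area bookkeeping described above can be sketched in a few lines (function and variable names are hypothetical; the real computation runs over the global 5-arc-minute grid): per cell, take the peak monthly irrigated total and sum the amount by which it exceeds the reference area.

```python
# Hypothetical sketch of the cell-level consistency metric: for each grid
# cell, the maximum monthly irrigated total is compared with a reference
# area (e.g. the AEI), and only the excess over the reference is summed.

def cell_excess(monthly_irrigated, reference):
    """monthly_irrigated: per-cell lists of 12 monthly totals (ha);
    reference: per-cell reference area (ha), e.g. area equipped for irrigation."""
    excess = 0.0
    for months, ref in zip(monthly_irrigated, reference):
        peak = max(months)
        if peak > ref:
            excess += peak - ref
    return excess

# two toy cells: only the first exceeds its reference, by 5 ha in its peak month
monthly = [[100, 120, 105] + [0] * 9, [50, 40, 30] + [0] * 9]
aei = [115, 60]
print(cell_excess(monthly, aei))  # 5.0
```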
The research presented here was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within the framework of the research project entitled "Consistent assessment of global green, blue and virtual water fluxes in the context of food production: regional stresses and worldwide teleconnections". The authors thank Navin Ramankutty and Chad Monfreda for making available the current SAGE datasets on cropland extent (Ramankutty et al., 2008) and harvested area (Monfreda et al., 2008) prior to their publication.
The execution, clearing, and settlement of financial transactions are all subject to substantial scale and scope economies which make each of these complementary functions a natural monopoly. Integration of trade, execution, and settlement in an exchange improves efficiency by economizing on transactions costs. When scope economies in clearing are more extensive than those in execution, integration is more costly, and efficient organization involves a trade-off of scope economies and transactions costs. A properly organized clearing cooperative can eliminate double marginalization problems and exploit scope economies, but can result in opportunism and underinvestment. Moreover, a clearing cooperative may exercise market power. Vertical integration and tying can foreclose entry, but foreclosure can be efficient because market power rents attract excessive entry. Integration of trading and post-trade services is the modal form of organization in financial markets, which is consistent with the hypothesis that transactional efficiencies explain organizational arrangements in these markets.
Central counterparties (CCPs) have increasingly become a cornerstone of financial markets infrastructure. We present a model where trades are time-critical, liquidity is limited and there is limited enforcement of trades. We show that a CCP novating trades implements efficient trading behaviour. It is optimal for the CCP to face default losses to achieve the efficient level of trade. To cover these losses, the CCP optimally uses margin calls, and, as the default problem becomes more severe, also requires default funds and then imposes position limits.
Monetary policy analysts often rely on rules-of-thumb, such as the Taylor rule, to describe historical monetary policy decisions and to compare current policy to historical norms. Analysis along these lines also permits evaluation of episodes where policy may have deviated from a simple rule and examination of the reasons behind such deviations. One interesting question is whether such rules-of-thumb should draw on policymakers' forecasts of key variables such as inflation and unemployment or on observed outcomes. Importantly, deviations of policy from the prescriptions of a Taylor rule that relies on outcomes may be due to systematic responses to information captured in policymakers' own projections. We investigate this proposition in the context of FOMC policy decisions over the past 20 years using publicly available FOMC projections from the biannual monetary policy reports to the Congress (Humphrey-Hawkins reports). Our results indicate that FOMC decisions can indeed be predominantly explained in terms of the FOMC's own projections rather than observed outcomes. Thus, a forecast-based rule-of-thumb better characterizes FOMC decision-making. We also confirm that many of the apparent deviations of the federal funds rate from an outcome-based Taylor-style rule may be considered systematic responses to information contained in FOMC projections.
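The forecast-based Taylor-style rule discussed above can be illustrated with a minimal sketch, assuming the standard textbook coefficients (1.5 on the inflation gap, 0.5 on the output gap); the paper's estimated rule and the FOMC projection data are not reproduced here.

```python
# Minimal sketch of a Taylor-style rule: the prescribed nominal funds rate as
# a function of (projected) inflation and the (projected) output gap.
# Coefficients and inputs are illustrative, not the paper's estimates.

def taylor_rate(pi_forecast, gap_forecast, r_star=2.0, pi_star=2.0):
    """Prescribed nominal funds rate (percent) from inflation and gap inputs."""
    return r_star + pi_forecast + 1.5 * (pi_forecast - pi_star) + 0.5 * gap_forecast

# the same rule can be fed either observed outcomes or FOMC projections
print(taylor_rate(pi_forecast=3.0, gap_forecast=0.0))   # 6.5
print(taylor_rate(pi_forecast=2.0, gap_forecast=-1.0))  # 3.5
```

The difference between an outcome-based and a forecast-based prescription is then simply the rule evaluated at observed versus projected inputs.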
CONTENTS
Preamble
1. Concept and Drivers of Globalization
 1.0 A Brief Historical Perspective
 1.1 Concept of Globalization
 1.2 Economic Globalization
 1.3 Drivers of Economic Globalization
2. Globalization and Markets
 2.1 The Free Market System
 2.2 Markets and the Solution of Economic Problems
 2.3 African Markets and “Getting the Prices Right”
 2.4 Implications of the Imperfect Market System
 2.5 Government’s Inevitable Role
 2.6 The International Environment/Markets
3. Globalization and Trade Liberalisation
 3.1 The Experience of the Developing Countries
 3.2 Nigeria’s Experience with Trade Liberalisation
4. Global Economic Integration and Sub-Saharan Africa
 4.1 Global Economic Integration
 4.2 Africa’s Integration with the World Economy
 4.3 The Benefits of Economic Globalization and Sub-Saharan Africa
 4.4 Why has Africa Lagged?
5. Nigeria and the Global Economy
 5.1 Openness of the Economy and Integration with the World Economy
 5.2 Globalization and Nigeria’s Trade
 5.3 Globalization and Foreign Capital Flows to Nigeria
 5.4 Foreign Capital Flows and Debt Accumulation
 5.5 Globalization, Growth and Development
6. Appropriate Policy Responses and Lessons
7. Concluding Remarks
8. Appreciation
9. Annex
10. References
While companies have emerged as very proactive donors in the wake of recent major disasters like Hurricane Katrina, it remains unclear whether that corporate generosity generates benefits to firms themselves. The literature on strategic philanthropy suggests that such philanthropic behavior may be valuable because it can generate direct and indirect benefits to the firm, yet it is not known whether investors interpret donations in this way. We develop hypotheses linking the strategic character of donations to positive abnormal returns. Using event study methodology, we investigate stock market reactions to corporate donation announcements by 108 US firms made in response to Hurricane Katrina. We then use regression analysis to examine if our hypothesized predictors are associated with positive abnormal returns. Our results show that overall, corporate donations were linked to neither positive nor negative abnormal returns. We do, however, see that a number of factors moderate the relationship between donation announcements and abnormal stock returns. Implications for theory and practice are discussed.
When a spot market monopolist participates in a derivatives market, she has an incentive to deviate from the spot market monopoly optimum to make her derivatives market position more profitable. When contracts can only be written contingent on the spot price, a risk-averse monopolist chooses to participate in the derivatives market to hedge her risk, and she reduces expected profits by doing so. However, eliminating all risk is impossible. These results are independent of the shape of the demand function, the distribution of demand shocks, the nature of preferences or the set of derivatives contracts.
We show that the use of correlations for modeling dependencies may lead to counterintuitive behavior of risk measures, such as Value-at-Risk (VaR) and Expected Shortfall (ES), when the risk of very rare events is assessed via Monte-Carlo techniques. The phenomenon is demonstrated for mixture models adapted from credit risk analysis as well as for common Poisson-shock models used in reliability theory. An obvious implication of this finding pertains to the analysis of operational risk. The alleged incentive suggested by the New Basel Capital Accord (Basel II), namely decreasing minimum capital requirements by allowing for less than perfect correlation, may not necessarily be attainable.
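The Monte-Carlo setting discussed above can be sketched with a standard one-factor Gaussian mixture model from credit risk (parameters and names are illustrative, not the paper's): correlation enters through a common factor, losses are simulated, and VaR is read off the empirical loss distribution.

```python
# Sketch of Monte-Carlo VaR under a one-factor Gaussian mixture model.
# All parameters are illustrative assumptions, not the paper's calibration.

import random
from statistics import NormalDist

def simulate_losses(n_sims, n_obligors, pd, rho, seed=1):
    nd = NormalDist()
    threshold = nd.inv_cdf(pd)          # default threshold per obligor
    random.seed(seed)
    losses = []
    for _ in range(n_sims):
        m = random.gauss(0, 1)          # common factor induces correlation rho
        cond_p = nd.cdf((threshold - rho ** 0.5 * m) / (1 - rho) ** 0.5)
        defaults = sum(random.random() < cond_p for _ in range(n_obligors))
        losses.append(defaults / n_obligors)
    return losses

def value_at_risk(losses, alpha=0.99):
    """Empirical alpha-quantile of the simulated loss distribution."""
    return sorted(losses)[int(alpha * len(losses)) - 1]

losses = simulate_losses(n_sims=5000, n_obligors=100, pd=0.01, rho=0.2)
print(value_at_risk(losses))
```

Rerunning with a lower `rho` illustrates the effect at issue: for very rare events the simulated tail quantile can react to the correlation assumption in ways that are hard to anticipate.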
Macro announcements change the equilibrium risk-free rate. We find that Treasury prices reflect part of the impact instantaneously, but intermediaries rely on their customer order flow in the 15 minutes after the announcement to discover the full impact. We show that this customer flow informativeness is strongest at times when analyst forecasts of macro variables are highly dispersed. We study 30-year Treasury futures to identify the customer flow. We further show that intermediaries appear to benefit from privately recognizing informed customer flow, as, in the cross-section, their own-account trade profitability correlates with access to customer orders, controlling for volatility, competition, and the announcement surprise. These results suggest that intermediaries learn about equilibrium risk-free rates through customer orders.
Many older US households have done little or no planning for retirement, and a substantial population appears to undersave for retirement. Of particular concern is the relative position of older women, who are more vulnerable to old-age poverty due to their greater longevity. This paper uses data from a special module we devised on planning and financial literacy in the 2004 Health and Retirement Study. It shows that women display much lower levels of financial literacy than the older population as a whole. In addition, women who are less financially literate are also less likely to plan for retirement and to be successful planners. These findings have important implications for policy and for programs aimed at fostering financial security at older ages.
Increasingly, individuals are in charge of their own financial security and are confronted with ever more complex financial instruments. However, there is evidence that many individuals are not well-equipped to make sound saving decisions. This paper demonstrates widespread financial illiteracy among the U.S. population, particularly among specific demographic groups. Those with low education, women, African-Americans, and Hispanics display particularly low levels of literacy. Financial literacy impacts financial decision-making. Failure to plan for retirement, lack of participation in the stock market, and poor borrowing behavior can all be linked to ignorance of basic financial concepts. While financial education programs can result in improved saving behavior and financial decision-making, much can be done to improve these programs’ effectiveness.
We study the effect of randomness in the adversarial queueing model. All proofs of instability for deterministic queueing strategies exploit a finespun strategy of insertions by an adversary. If the local queueing decisions in the network are subject to randomness, it is far from obvious that an adversary can still trick the network into instability. We show that uniform queueing is unstable even against an oblivious adversary. Consequently, randomizing the queueing decisions made to operate a network is not in itself a suitable fix for poor network performance due to packet pileups.
The paper provides novel insights on the effect of a firm’s risk management objective on the optimal design of risk transfer instruments. I analyze the interrelation between the structure of the optimal insurance contract and the firm’s objective to minimize the required equity it has to hold to accommodate losses in the presence of multiple risks and moral hazard. In contrast to the case of risk aversion and moral hazard, the optimal insurance contract involves a joint deductible on aggregate losses in the present setting.
The paper proposes a panel cointegration analysis of the joint development of government expenditures and economic growth in 23 OECD countries. The empirical evidence provides indication of a structural positive correlation between public spending and per-capita GDP which is consistent with the so-called Wagner's law. A long-run elasticity larger than one suggests a more than proportional increase of government expenditures with respect to economic activity. In addition, in keeping with the spirit of the law, we find that the correlation is usually higher in countries with lower per-capita GDP, suggesting that the catching-up period is characterized by a stronger development of government activities than in economies in a more advanced state of development.
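The long-run elasticity at the heart of Wagner's law can be illustrated with a toy log-log regression (the numbers below are invented for illustration and are not the paper's panel estimates): a slope above one means spending grows more than proportionally with income.

```python
# Illustrative sketch: the Wagner's-law elasticity as the OLS slope of
# log government expenditure on log per-capita GDP. Toy data, single series;
# the paper uses panel cointegration techniques instead.

import math

def log_log_elasticity(gdp_per_capita, gov_expenditure):
    x = [math.log(v) for v in gdp_per_capita]
    y = [math.log(v) for v in gov_expenditure]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# toy series where spending rises faster than income
gdp = [10_000, 12_000, 15_000, 20_000]
gov = [2_000, 2_600, 3_600, 5_400]
print(round(log_log_elasticity(gdp, gov), 2))  # 1.43, i.e. larger than one
```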
Do we measure what we get?
(2008)
Performance measures are intended to enhance the performance of companies by directing the attention of decision makers towards the achievement of organizational goals. Goal congruence is therefore regarded in the literature as a major factor in the quality of such measures. As reality is affected by many variables, practitioners have tried to achieve a high degree of goal congruence by incorporating an increasing number of these variables into performance measures. However, a goal-congruent measure does not automatically lead to superior decisions, because decision makers’ restricted cognitive abilities can counteract the intended effects. This paper addresses the interplay between goal congruence and the complexity of performance measures for cognitively restricted decision makers. Two types of decision quality are derived which allow a differentiated view on the influence of this interplay on decision quality and learning. The simulation experiments based on this differentiation provide results which allow a critical reflection on the costs and benefits of goal congruence and on the assumptions regarding the goal congruence of incentive systems.
This study develops a novel 2-step hedonic approach, which is used to construct a price index for German paintings. This approach enables the researcher to use every single auction record, instead of only those auction records that belong to a sub-sample of selected artists. This results in a substantially larger sample available for research and it lowers the selection bias that is inherent in the traditional hedonic and repeat sales methodologies. Using a unique sample of 61,135 auction records for German artworks created by 5,115 different artists over the period 1985 to 2007, we find that the geometric annual return on German art is just 3.8 percent, with a standard deviation of 17.87 percent. Although our results indicate that art underperforms the market portfolio and is not proportionally rewarded for downside risk, under some circumstances art should be included in an optimal portfolio for diversification purposes.
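The hedonic idea behind the index above can be sketched with a toy example (hypothetical data, not the 61,135 auction records; the paper's 2-step estimator is more involved): strip a quality characteristic out of log prices via a simple regression, then read the index off quality-adjusted yearly means.

```python
# Toy sketch of a hedonic price index: regress log price on a hedonic
# characteristic, then exponentiate quality-adjusted yearly mean log prices
# relative to the base year. Data and variable names are hypothetical.

import math
from collections import defaultdict

# (year, log_size, log_price) — log_size stands in for the hedonic characteristics
sales = [(1985, 0.0, 8.0), (1985, 1.0, 8.6),
         (1986, 0.0, 8.1), (1986, 1.0, 8.7),
         (1987, 1.0, 8.9), (1987, 0.0, 8.3)]

# step 1: pooled OLS slope of log_price on log_size (one regressor)
xs = [s for _, s, _ in sales]
ys = [p for _, _, p in sales]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs))

# step 2: quality-adjusted yearly means define the price index (base 1985 = 1)
by_year = defaultdict(list)
for year, size, price in sales:
    by_year[year].append(price - beta * size)
base = sum(by_year[1985]) / len(by_year[1985])
index = {year: math.exp(sum(v) / len(v) - base)
         for year, v in sorted(by_year.items())}
print(index[1985])  # 1.0 by construction
```

Geometric annual returns then follow from the ratios of successive index values.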
Risk transfer with CDOs
(2008)
Modern bank management comprises both classical lending business and transfer of asset risk to capital markets through securitization. Sound knowledge of the risks involved in securitization transactions is a prerequisite for solid risk management. This paper aims to resolve a part of the opaqueness surrounding credit-risk allocation to tranches that represent claims of different seniority on a reference portfolio. In particular, this paper analyzes the allocation of credit risk to different tranches of a CDO transaction when the underlying asset returns are driven by a common macro factor and an idiosyncratic component. Junior and senior tranches are found to be nearly orthogonal, motivating a search for the whereabouts of systematic risk in CDO transactions. We propose a metric for capturing the allocation of systematic risk to tranches. First, in contrast to a widely-held claim, we show that (extreme) tail risk in standard CDO transactions is held by all tranches. While junior tranches take on all types of systematic risk, senior tranches take on almost no non-tail risk. This is in stark contrast to an untranched bond portfolio of the same rating quality, which on average suffers substantial losses for all realizations of the macro factor. Second, given tranching, a shock to the risk of the underlying asset portfolio (e.g. a rise in asset correlation or in mean portfolio loss) has the strongest impact, in relative terms, on the exposure of senior tranche CDO-investors. Our findings can be used to explain major stylized facts observed in credit markets.
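The seniority structure underlying this analysis can be illustrated with a toy loss waterfall (the attachment points below are hypothetical, not calibrated to the paper): each tranche absorbs the slice of the portfolio loss between its attachment and detachment points.

```python
# Hypothetical sketch of how a realized portfolio loss is allocated to
# tranches of different seniority. Attachment/detachment points are
# illustrative assumptions.

def tranche_loss(portfolio_loss, attach, detach):
    """Fraction of the tranche notional lost, given a portfolio loss rate."""
    hit = min(max(portfolio_loss - attach, 0.0), detach - attach)
    return hit / (detach - attach)

tranches = {"equity": (0.00, 0.03), "mezzanine": (0.03, 0.10), "senior": (0.10, 1.00)}
for loss in (0.02, 0.06, 0.25):
    alloc = {name: round(tranche_loss(loss, a, d), 3)
             for name, (a, d) in tranches.items()}
    print(loss, alloc)
```

The waterfall makes the paper's point visible in miniature: moderate losses are borne almost entirely by the junior tranches, while the senior tranche is hit only in the tail.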
Modern macroeconomics empirically addresses economy-wide incentives behind economic actions by using insights from the way a single representative household would behave. This analytical approach requires that incentives of the poor and the rich are strictly aligned. In empirical analysis a challenging complication is that consumer and income data are typically available at the household level, and individuals living in multimember households have the potential to share goods within the household. The analytical approach of modern macroeconomics would require that intra-household sharing is also strictly aligned across the rich and the poor. Here we have designed a survey method that allows the testing of this stringent property of intra-household sharing and find that it holds: once expenditures for basic needs are subtracted from disposable household income, household-size economies implied by the remainder household incomes are the same for the rich and the poor.
The introduction of a common currency as well as the harmonization of rules and regulations in Europe has significantly reduced distance in all its guises. By reducing the costs of overcoming space, this strengthens centripetal forces and should foster consolidation of financial activity. In a national context, as a rule, this led to the emergence of one financial center. Hence, Europeanization of financial and monetary affairs could foretell the relegation of some European financial hubs such as Frankfurt and Paris to third-rank status. Frankfurt’s financial history is interesting insofar as it has lost (in the 1870s) and regained (mainly in the 1980s) its preeminent place in the German context. Because Europe is still characterized by local pockets of information-sensitive assets as well as a demand for variety, the national analogy probably does not hold. There is room in Europe for a number of financial hubs of an international dimension, including Frankfurt.
This paper discusses the implications of transnational media production and diasporic networks for the cultural politics of migrant minorities. How are fields of cultural politics transformed if Hirschman’s famous options ‘exit’ and ‘voice’ no longer constitute mutually exclusive responses to dissent within a nation-state, but are modes of action that can combine and build upon each other in the context of migration and diasporic media activism? Two case studies are discussed in more detail, relating to Alevi amateur television production in Germany and to a Kurdish satellite television station that reaches out to a diaspora across Europe and the Middle East. Keywords: migrant media, transnationalism, Alevis, Kurds, Turkey, Germany
The "quiet life hypothesis" (QLH) posits that banks enjoy the advantages of market power in terms of foregone revenues or cost savings. We suggest a unified approach to measure competition and efficiency simultaneously in order to test this hypothesis. We estimate bank-specific Lerner indices as measures of competition and test whether cost and profit efficiency are negatively related to market power in the case of German savings banks. We find that both market power and average revenues declined among these banks between 1996 and 2006. While we find clear evidence supporting the QLH, the estimated effects of the QLH are small from an economic perspective.
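The Lerner index mentioned above is a standard markup-based measure of market power; a minimal sketch (all numbers hypothetical, not from the paper's data):

```python
def lerner_index(price: float, marginal_cost: float) -> float:
    """Lerner index L = (P - MC) / P: zero under perfect competition,
    rising toward one as market power grows."""
    if price <= 0:
        raise ValueError("price must be positive")
    return (price - marginal_cost) / price

# Hypothetical bank: average revenue of 5% on assets against an
# estimated marginal cost of 4% implies a Lerner index of about 0.2,
# i.e. the markup is roughly one fifth of the price.
print(lerner_index(0.05, 0.04))
```

In practice the marginal cost is not observed and is itself estimated from a cost function, which is where the paper's simultaneous treatment of competition and efficiency comes in.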
Central counterparties
(2008)
Central counterparties (CCPs) have increasingly become a cornerstone of financial markets infrastructure. We present a model where trades are time-critical, liquidity is limited and there is limited enforcement of trades. We show that a CCP novating trades implements efficient trading behaviour. It is optimal for the CCP to face default losses to achieve the efficient level of trade. To cover these losses, the CCP optimally uses margin calls and, as the default problem becomes more severe, also requires default funds and then imposes position limits.
The single most important policy-induced innovation in the international financial system since the collapse of the Bretton-Woods regime is the institution of the European Monetary Union. This paper provides an account of how the process of financial integration has promoted financial development in the euro area. It starts by defining financial integration and how to measure it, analyzes the barriers that can prevent it and the effects of their removal on financial markets, and assesses whether the euro area has actually become more integrated. It then explores to what extent these changes in financial markets have influenced the performance of the euro-area economy, that is, its growth and investment, as well as its ability to adjust to shocks and to allow risk-sharing. The paper concludes by analyzing further steps that are required to consolidate financial integration and enhance the future stability of financial markets.
We examine insurance markets with two types of customers: those who regret suboptimal decisions and those who don't. In this setting, we characterize the equilibria under hidden information about the type of customers and hidden action. We show that both pooling and separating equilibria can exist. Furthermore, there exist separating equilibria that predict a positive correlation between the amount of insurance coverage and risk type, as in the standard economic models of adverse selection, but there also exist separating equilibria that predict a negative correlation between the amount of insurance coverage and risk type, i.e. advantageous selection. Since the optimal choice of regretful customers depends on foregone alternatives, any equilibrium includes a contract which is offered but not purchased.
Generally, information provision and certification have been identified as the major economic functions of rating agencies. This paper analyzes whether the “watchlist” (rating review) instrument has extended the agencies' role towards a monitoring position, as proposed by Boot, Milbourn, and Schmeits (2006). Using a data set of Moody's rating history between 1982 and 2004, we find that the overall information content of rating actions has indeed increased since the introduction of the watchlist procedure. Our findings suggest that rating reviews help to establish implicit monitoring contracts between agencies and borrowers and as such enable a finer partition of rating information, thereby contributing to higher information quality.
Algorithmic trading has sharply increased over the past decade. Equity market liquidity has improved as well. Are the two trends related? For a recent five-year panel of New York Stock Exchange (NYSE) stocks, we use a normalized measure of electronic message traffic (order submissions, cancellations, and executions) as a proxy for algorithmic trading, and we trace the associations between liquidity and message traffic. Based on within-stock variation, we find that algorithmic trading and liquidity are positively related. To sort out causality, we use the start of autoquoting on the NYSE as an exogenous instrument for algorithmic trading. Previously, specialists were responsible for manually disseminating the inside quote. As stocks were phased in gradually during early 2003, the manual quote was replaced by a new automated quote whenever there was a change to the NYSE limit order book. This market structure change provides quicker feedback to traders and algorithms and results in more message traffic. For large-cap stocks in particular, quoted and effective spreads narrow under autoquote and adverse selection declines, indicating that algorithmic trading does causally improve liquidity.
Bayesian learning provides the core concept of processing noisy information. In standard Bayesian frameworks, assessing the price impact of information requires perfect knowledge of news’ precision. In practice, however, precision is rarely disclosed. Therefore, we extend standard Bayesian learning, suggesting traders infer news’ precision from magnitudes of surprises and from external sources. We show that interactions of the different precision signals may result in highly nonlinear price responses. Empirical tests based on intra-day T-bond futures price reactions to employment releases confirm the model’s predictions and show that the effects are statistically and economically significant.
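The standard normal-normal benchmark that this framework extends can be sketched as follows; names and numbers are illustrative only, and the paper's point is precisely that the signal precision below is unobserved in practice and must be inferred:

```python
def bayesian_update(prior_mean, prior_precision, signal, signal_precision):
    """Normal-normal Bayesian update: precisions add, and the posterior
    mean is a precision-weighted average of prior mean and signal."""
    post_precision = prior_precision + signal_precision
    post_mean = (prior_precision * prior_mean
                 + signal_precision * signal) / post_precision
    return post_mean, post_precision

# A unit surprise moves the posterior mean by the signal's share of
# total precision: here 3 / (1 + 3) = 0.75.
mean, precision = bayesian_update(0.0, 1.0, 1.0, 3.0)
print(mean, precision)  # 0.75 4.0
```

When the signal precision must itself be estimated from the size of the surprise, the effective weight on the news becomes state-dependent, which is one way price responses can turn nonlinear.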
An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out–of–sample Value–at–Risk measures.
We develop a multivariate generalization of the Markov–switching GARCH model introduced by Haas, Mittnik, and Paolella (2004b) and derive its fourth–moment structure. An application to international stock markets illustrates the relevance of accounting for volatility regimes from both a statistical and economic perspective, including out–of–sample portfolio selection and computation of Value–at–Risk.
Innovative automated execution strategies like algorithmic trading have gained significant market share on electronic market venues worldwide, although their impact on market outcomes has not yet been investigated in depth. To assess the impact of such concepts, e.g. effects on price formation or price volatility, a simulation environment is presented that provides stylized implementations of algorithmic trading behavior and allows for modeling latency. Because simulations can reproduce exactly the same basic situation, the impact of an algorithmic trading model can be assessed by comparing simulation runs that include and exclude a trader following such a model. In this way the impact of algorithmic trading on different characteristics of market outcomes can be assessed. The results indicate that the larger the volume the algorithmic trader must execute, the stronger its impact on market prices. Lower latency, on the other hand, appears to lower market volatility.
Marginal income taxes may have an insurance effect by decreasing the effective fluctuations of after-tax individual income. By compressing the idiosyncratic component of personal income fluctuations, higher marginal taxes should be negatively correlated with the dispersion of consumption across households, a necessary implication of an insurance effect of taxation. Our study empirically examines this negative correlation, exploiting the ample variation of state taxes across US states. We show that taxes are negatively correlated with the consumption dispersion of the within-state distribution of non-durable consumption and that this correlation is robust.
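The compression mechanism can be made concrete under a simplifying assumption: with a constant marginal rate t, after-tax income is (1 − t)·y, so the variance of idiosyncratic income shocks passes through scaled by (1 − t)². A minimal sketch (the paper's empirical setup is richer than this):

```python
def after_tax_variance(pre_tax_variance: float, marginal_rate: float) -> float:
    """With after-tax income (1 - t) * y, idiosyncratic variance is
    scaled by (1 - t)**2 -- the insurance effect in its simplest form."""
    return (1.0 - marginal_rate) ** 2 * pre_tax_variance

# Raising the marginal rate from 20% to 30% cuts the pass-through of
# income shocks from 64% to 49% of pre-tax variance.
print(after_tax_variance(1.0, 0.2), after_tax_variance(1.0, 0.3))
```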
This paper addresses and resolves the issue of microstructure noise when measuring the relative importance of home and U.S. market in the price discovery process of Canadian interlisted stocks. In order to avoid large bounds for information shares, previous studies applying the Cholesky decomposition within the Hasbrouck (1995) framework had to rely on high frequency data. However, due to the considerable amount of microstructure noise inherent in return data at very high frequencies, these estimators are distorted. We offer a modified approach that identifies unique information shares based on distributional assumptions and thereby enables us to control for microstructure noise. Our results indicate that the role of the U.S. market in the price discovery process of Canadian interlisted stocks has been underestimated so far. Moreover, we suggest that rather than stock specific factors, market characteristics determine information shares.
Since independence from British colonial rule, Uganda has had a turbulent political history characterised by putsches, dictatorship, contested electoral outcomes, civil wars and a military invasion. There were eight changes of government within a period of twenty-four years (from 1962-1986), five of which were violent and unconstitutional. This paper identifies factors that account for these recurrent episodes of political violence and state collapse. While colonialism bequeathed the country a negative legacy including a weak state apparatus, ethnic division, skewed development, elite polarisation and a narrow economic base, post-colonial leaders have on the whole exacerbated rather than reversed these trends. Factors such as ethnic rivalry, political exclusion, militarisation of politics, weak state institutions, and unequal access to opportunities for self-advancement help to account for the recurrent cycles of violence and state failure prior to 1986. External factors have also been important, particularly the country’s politically turbulent neighbourhood, the outcome of political instability and civil conflict in surrounding countries. Neighbourhood turbulence stemming from such factors as civil wars in Congo and Sudan has had spill-over effects in that it has allowed insurgent groups geographical space within which to operate as well as provided opportunities for the acquisition of instruments of war with which to destabilise the country. Critical to these processes have been the porosity of post-colonial borders and the inability by the Ugandan state to exercise effective control over its entire territory. By demonstrating the interplay between internal and external factors in shaping Uganda’s postcolonial experience, the paper makes an important shift away from conventional explanations that have focused disproportionately on internal processes. 
Lastly, the paper provides pointers to areas of further research such as the economic foundations of conflict that should ultimately strengthen our understanding of factors that combine to make state-making fail or succeed.
The privatization of health-care costs through increased co-payments, informal denials of benefits in the statutory health insurance system (GKV), and the coexistence of statutory and private health insurance amid a growing gap between the two systems have raised the social and spatio-temporal barriers to the use of health services for socially disadvantaged groups. This increases the danger that health-care policy itself becomes an independent cause of the reinforcement and perpetuation of health inequality. At the same time, the statutory health insurance system's opportunities to help reduce health inequality through improved prevention are used only insufficiently. The participation rate of persons of low social status in numerous early-detection measures, especially cancer screening, remains well below average. With the amendment of § 20 SGB V in 2000, reducing the social inequality of health opportunities did enter the GKV's system of objectives. However, this goal is only partially reflected in the insurers' prevention practice. Numerous hurdles still exist in implementing context-based measures of structural prevention.
Hong Kong’s Linked Exchange Rate System (LERS) has been in operation for twenty-five years, during which time many other fixed exchange rate systems have succumbed to shocks and/or speculative attacks. This fact alone suggests that the LERS is a robust system which enjoys a large measure of credibility in financial markets. This paper intends to investigate whether this is indeed the case, and whether it has been the case throughout its 25-year history. In particular, we use the tools of modern finance to extract information from financial asset prices about market expectations that are related to the credibility of the LERS. The main focus is on how market participants ‘judged’ the various changes made to the LERS, such as the ‘seven technical measures’ introduced in September 1998 and the ‘three refinements’ made in May 2005. These changes have been characterized as making the system less discretionary over time, and we hypothesize that they have also made it more credible, as revealed in exchange-rate-related asset prices. We also investigate the relationship between interest rates and exchange rates in the current system in light of modern models of target-zone exchange rate systems. We will examine whether the intramarginal intervention in November 2007 changed the dynamic properties of the exchange rate as suggested by such models.
In this paper we consider the dynamics of spot and futures prices in the presence of arbitrage. We propose a partially linear error correction model where the adjustment coefficient is allowed to depend non-linearly on the lagged price difference. We estimate our model using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear. The linear alternative is rejected. The speed of price adjustment is increasing almost monotonically with the magnitude of the price difference.
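A stylized version of such a partially linear error correction specification (a sketch of the general idea, not necessarily the paper's exact equation) lets the adjustment speed depend on the lagged futures-spot basis $z_{t-1} = f_{t-1} - s_{t-1}$:

```latex
\Delta s_t = \mu + g(z_{t-1})\, z_{t-1}
  + \sum_{j=1}^{p} \gamma_j \Delta s_{t-j}
  + \sum_{j=1}^{p} \delta_j \Delta f_{t-j}
  + \varepsilon_t ,
\qquad z_{t-1} = f_{t-1} - s_{t-1} ,
```

where $g(\cdot)$ is an unrestricted function estimated nonparametrically. The linear error correction model is the special case $g(z) \equiv \alpha$, so rejecting it means the speed of adjustment genuinely varies with the size of the mispricing.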
A data set of annual values of area equipped for irrigation for all 236 countries in the world during the time period 1900 - 2003 was generated. The basis for this data product was information available through various online data bases and from other published materials. The complete time series were then constructed around the reported data by applying six statistical methods. The methods are discussed in terms of reliability and data uncertainties. The total area equipped for irrigation in the world in 1900 was 53.2 million hectares. Irrigation was mainly practiced in all the arid regions of the globe and in paddy rice areas of South and East Asia. In some temperate countries in Western Europe irrigation was practiced widely on pastures and meadows. The time series suggest a modest rate of increase of irrigated areas in the first half of the 20th century, followed by a more dynamic development in the second half. The turn of the century is characterized by an overall consolidating trend, resulting in a total of 285.8 million hectares in 2003. The major contributing countries have changed little throughout the century. This data product is regarded as a preliminary result of an ongoing effort to develop a detailed data set and map of areas equipped for irrigation in the world over the 20th century using sub-national statistics and historical irrigation maps.
We report evidence that the presence of hidden liquidity is associated with greater liquidity in the order books, greater trading volume, and smaller price impact. Limit and market order submission behavior changes when hidden liquidity is present, consistent with at least some traders being able to detect hidden liquidity. We estimate a model of liquidity provision that allows us to measure variations in the marginal and total payoffs from liquidity provision in states with and without hidden liquidity. Our estimates of the expected surplus to providers of visible and hidden liquidity are positive and typically on the order of one-half to one basis point per trade. The positive liquidity provider surpluses combined with the increased trading volume when hidden liquidity is present are both consistent with liquidity externalities.
The future of securitization
(2008)
Securitization is a financial innovation that has experienced a boom-bust cycle, like many other innovations before it. This paper analyzes possible reasons for the breakdown of primary and secondary securitization markets, and argues that misaligned incentives along the value chain are the primary cause of the problems. The illiquidity of asset and interbank markets, in this view, is a market failure derived from ill-designed mechanisms of coordinating financial intermediaries and investors. Thus, illiquidity is closely related to the design of the financial chains. Our policy conclusions emphasize crisis prevention rather than crisis management, and the objective is to restore a “comprehensive incentive alignment”. The toe-hold for strengthening regulation is surprisingly small. First, we emphasize the importance of equity piece retention for the long-term quality of the underlying asset pool. As a consequence, equity piece allocation needs to be publicly known, facilitating market pricing. Second, on a micro level, the accountability of managers can be improved by compensation packages aiming at long-term incentives and penalizing policies with destabilizing effects on financial markets. Third, on a macro level, increased transparency relating to effective risk transfer, risk-related management compensation, and credible measurement of rating performance stabilizes the valuation of financial assets and, hence, improves the solvency of financial intermediaries. Fourth, financial intermediaries whose risk is opaque may be subjected to higher capital requirements.
We find and describe four futures markets where the bid-ask spread is bid down to the fixed price tick size practically all the time, and which match counterparties using a pro-rata rule. These four markets' offered depths at the quotes on average exceed mean market order size by two orders of magnitude, and their order cancellation rates (the probability of any given offered lot being cancelled) are significantly over 96 per cent. We develop a simple theoretical model to explain these facts, where strategic complementarities in the choice of limit order size cause traders to risk overtrading by submitting over-sized limit orders, most of which they expect to cancel.
How do fiscal and technology shocks affect real exchange rates? : New evidence for the United States
(2008)
Using vector autoregressions on U.S. time series relative to an aggregate of industrialized countries, this paper provides new evidence on the dynamic effects of government spending and technology shocks on the real exchange rate and the terms of trade. To achieve identification, we derive robust restrictions on the sign of several impulse responses from a two-country general equilibrium model. We find that both the real exchange rate and the terms of trade – whose responses are left unrestricted – depreciate in response to expansionary government spending shocks and appreciate in response to positive technology shocks.
This paper identifies some common errors that occur in comparative law, offers some guidelines to help avoid such errors, and provides a framework for entering into studies of the company laws of three major jurisdictions. The first section illustrates why a conscious approach to comparative company law is useful. Part I discusses some of the problems that can arise in comparative law and offers a few points of caution that can be useful for practical, theoretical and legislative comparative law. Part II discusses some relatively famous examples of comparative analysis gone astray in order to demonstrate the utility of heeding the outlined points of caution. The second section offers a framework for approaching comparative company law. Part III provides an example of using functional definition to demarcate the topic "company law", offering an "effects" test to determine whether a given provision of law should be considered as functionally part of the rules that govern the core characteristics of companies. It does this by presenting the relevant company law statutes and related topical laws of Germany, the United Kingdom and the United States, using Delaware as a proxy for the 50 states. On the basis of this definition, Part IV analyzes the system of legal functions that comprises "company law" in the United States and the European Union. It selects as the predominant factor for consideration the jurisdictions, sub-jurisdictions and rule-making entities that have legislative or rule-making competence in the relevant territorial unit, analyzes the extent of their power, presents the type of law (rules) they enact (issue), and discusses the concrete manner in which the laws and rules of the jurisdictions and sub-jurisdictions can legally interact. 
Part V looks at the way these jurisdictions do interact on the temporal axis of history, that is, their actual influence on each other, which in the relevant jurisdictions currently takes the form of regulatory competition and legislative harmonization. The method of the approach outlined in this paper borrows much from system theory. The analysis attempts to be detailed without losing track of the overall jurisdictional framework in the countries studied.
Measuring financial asset return and volatility spillovers, with application to global equity markets
(2008)
We provide a simple and intuitive measure of interdependence of asset returns and/or volatilities. In particular, we formulate and examine precise and separate measures of return spillovers and volatility spillovers. Our framework facilitates study of both non-crisis and crisis episodes, including trends and bursts in spillovers, and both turn out to be empirically important. In particular, in an analysis of nineteen global equity markets from the early 1990s to the present, we find striking evidence of divergent behavior in the dynamics of return spillovers vs. volatility spillovers: Return spillovers display a gently increasing trend but no bursts, whereas volatility spillovers display no trend but clear bursts.
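The general idea of such a spillover measure can be sketched in miniature: given a forecast-error variance decomposition in which entry (i, j) is the share of market i's variance attributable to market j, the total spillover index is the off-diagonal share. This is an illustrative reconstruction of the idea, not the paper's exact estimator:

```python
def spillover_index(fevd):
    """Total spillover: fraction of forecast-error variance that comes
    from cross-market (off-diagonal) contributions. Each row of `fevd`
    is one market's variance decomposition and sums to 1."""
    n = len(fevd)
    total = sum(sum(row) for row in fevd)
    own = sum(fevd[i][i] for i in range(n))
    return (total - own) / total

# Hypothetical two-market decomposition: 10% of each market's variance
# comes from the other market, so the spillover index is about 0.1.
fevd = [[0.9, 0.1],
        [0.1, 0.9]]
print(spillover_index(fevd))
```

In practice the decomposition matrix would come from a fitted vector autoregression, and tracking the index over rolling windows is what reveals trends and bursts.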
We argue for incorporating the financial economics of market microstructure into the financial econometrics of asset return volatility estimation. In particular, we use market microstructure theory to derive the cross-correlation function between latent returns and market microstructure noise, which feature prominently in the recent volatility literature. The cross-correlation at zero displacement is typically negative, and cross-correlations at nonzero displacements are positive and decay geometrically. If market makers are sufficiently risk averse, however, the cross-correlation pattern is inverted. Our results are useful for assessing the validity of the frequently-assumed independence of latent price and microstructure noise, for explaining observed cross-correlation patterns, for predicting as-yet undiscovered patterns, and for making informed conjectures as to improved volatility estimation methods.
The popular Nelson-Siegel (1987) yield curve is routinely fit to cross sections of intra-country bond yields, and Diebold and Li (2006) have recently proposed a dynamized version. In this paper we extend Diebold-Li to a global context, modeling a potentially large set of country yield curves in a framework that allows for both global and country-specific factors. In an empirical analysis of term structures of government bond yields for Germany, Japan, the U.K. and the U.S., we find that global yield factors do indeed exist and are economically important, generally explaining significant fractions of country yield curve dynamics, with interesting differences across countries.
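The Nelson-Siegel curve underlying this exercise maps a maturity tau into a yield via level, slope, and curvature factors; a minimal sketch with hypothetical parameter values:

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel (1987) yield at maturity tau: beta0 is the level
    (the long end), beta1 loads on a slope term, beta2 on a curvature
    term, and lam controls the exponential decay rate."""
    x = lam * tau
    slope = (1.0 - math.exp(-x)) / x
    curvature = slope - math.exp(-x)
    return beta0 + beta1 * slope + beta2 * curvature

# Hypothetical parameters: as tau grows the yield approaches the level
# factor beta0; as tau shrinks it approaches beta0 + beta1.
print(nelson_siegel(120.0, beta0=0.05, beta1=-0.02, beta2=0.01, lam=0.06))
```

In the Diebold-Li dynamization the three betas become time-varying factors, and the global extension described above adds common world factors behind each country's betas.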
This paper explores the role of trade integration—or openness—for monetary policy transmission in a medium-scale New Keynesian model. Allowing for strategic complementarities in price-setting, we highlight a new dimension of the exchange rate channel by which monetary policy directly impacts domestic inflation. Although the strength of this effect increases with economic openness, it also requires that import prices respond to exchange rate changes. In this case domestic producers find it optimal to adjust their prices to exchange rate changes which alter the domestic currency price of their foreign competitors. We pin down key parameters of the model by matching impulse responses obtained from a vector autoregression on U.S. time series relative to an aggregate of industrialized countries. While we find evidence for strong complementarities, exchange rate pass-through is limited. Openness has therefore little bearing on monetary transmission in the estimated model.
Reform of the securities class action is once again the subject of national debate. The impetus for this debate is the reports of three different groups – The Committee on Capital Market Regulation, The Commission on the Regulation of U.S. Capital Markets In the 21st Century, and McKinsey & Company. Each of the reports focuses on a single theme: how the contemporary regulatory culture places U.S. capital markets at a competitive disadvantage to foreign markets. While multiple regulatory forces are targeted by each report’s call for reform, each of the reports singles out securities class actions as one of the prime villains that place U.S. capital markets at a competitive disadvantage. The reports’ recommendations range from insignificant changes to drastic curtailments of private class actions. Surprisingly, these current-day cries echo calls for reform heeded by Congress in the not too distant past. Major reform of the securities class action occurred with the Private Securities Litigation Reform Act of 1995. Among the PSLRA’s contributions is the introduction of procedures by which the court chooses from among competing petitioners a lead plaintiff for the class. The statute commands that the petitioner with the largest financial loss suffered as a consequence of the defendant’s alleged misrepresentation is presumed to be the most adequate plaintiff. Thus, the lead plaintiff provision supplants the traditional “first to file” rule for selecting the suit’s plaintiff with a mechanism that seeks to harness the plaintiff’s economic self-interest to the suit’s prosecution. Also, by eliminating the race to be the first to file, the lead plaintiff provision seeks to avoid “hair trigger” filings by overly eager plaintiffs’ counsel, which Congress believed too frequently gave rise to incomplete and insubstantially pled causes of action.
The PSLRA also introduced for securities class actions a heightened pleading requirement as well as a bar to the plaintiff obtaining any discovery prior to the district court disposing of the defendants’ motions to dismiss. By introducing the requirement that allegations involving fraud must be pled not only with particularity, but also that the pled facts must establish a “strong inference” of fraud, the PSLRA cast aside, albeit only for securities actions, the much lower notice pleading requirement that has been a fixture of American civil procedure for decades. Substantive changes to the law were also introduced by the PSLRA. With few exceptions, joint and several liability was replaced by proportionate liability so that a particular defendant’s liability is capped by that defendant’s relative degree of fault. Similarly, contribution rights among co-violators are also based on the proportionate fault of each defendant. Three years after the PSLRA, Congress returned to the topic again by enacting the Securities Litigation Uniform Standards Act; this provision was prompted by aggressive efforts of plaintiff lawyers to bypass the limitations, most notably the bar to discovery and the higher pleading requirement, of the PSLRA by bringing suit in state court. Post-SLUSA, securities fraud class actions are exclusively the domain of the federal courts. In this paper, we examine the impact of the PSLRA and, more particularly, the impact of the type of lead plaintiff on the size of settlements in securities fraud class actions. We thus provide insight into whether the type of plaintiff that heads the class action impacts the overall outcome of the case. Furthermore, we explore possible indicia that may explain why some suits settle for extremely small sums – small relative to the “provable losses” suffered by the class, small relative to the asset size of the defendant company, and small relative to other settlements in our sample.
This evidence bears heavily on the debate over “strike suits.” Part I of this paper sets forth the contemporary debate surrounding the need for further reforms of securities class actions. In this section, we set forth the insights advanced in three prominent reports focused on the competitiveness of U.S. capital markets. In Part II we first provide descriptive statistics of our extensive data set, and then use multivariate regression analysis to explore the underlying relationships. In Part III, we closely examine small settlements for clues to whether they reflect evidence of strike suits. We conclude in Part IV with a set of policy recommendations based on our analysis of the data. Our goals in this paper are more modest than those of the Committee Report, the Chamber Report and the McKinsey Report, each of which called for wide-ranging reforms: we focus on how the PSLRA changed securities fraud settlements so as to determine whether the reforms it introduced accomplished at least some of the Act’s important goals. If the PSLRA was successful, and we think it was, then one must be somewhat skeptical of the need for further cutbacks in private securities class actions so soon after the Act was passed.
We study the relation between cognitive abilities and stockholding using the recent Survey of Health, Ageing and Retirement in Europe (SHARE), which has detailed data on the wealth and portfolio composition of individuals aged 50+ in 11 European countries and three indicators of cognitive abilities: mathematical skill, verbal fluency, and recall. We find that the propensity to invest in stocks is strongly associated with cognitive abilities, for both direct stock market participation and indirect participation through mutual funds and retirement accounts. Since the decision to invest in less information-intensive assets (such as bonds) is less strongly related to cognitive abilities, we conclude that the association between cognitive abilities and stockholding is driven by information constraints, rather than by features of preferences or psychological traits.
This paper documents and studies sources of international differences in participation and holdings in stocks, private businesses, and homes among households aged 50+ in the US, England, and eleven continental European countries, using new internationally comparable, household-level data. With greater integration of asset and labor markets and policies, households of given characteristics should be holding more similar portfolios for old age. We decompose observed differences across the Atlantic, within the US, and within Europe into those arising from differences (a) in the distribution of characteristics and (b) in the influence of given characteristics. We find that US households are generally more likely to own these assets than their European counterparts. However, European asset owners tend to hold smaller real, PPP-adjusted amounts in stocks, and larger amounts in private businesses and primary residences, than US owners at comparable points in the distribution of holdings, even controlling for differences in the configuration of characteristics. Differences in characteristics often play a minimal or no role. Differences in market conditions are much more pronounced among European countries than among US regions, suggesting significant potential for further integration.
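The two-part decomposition described here, into (a) differences in the distribution of characteristics and (b) differences in the influence of given characteristics, is in the spirit of a Blinder–Oaxaca decomposition; the abstract does not name the method, so the following is an illustrative sketch rather than the paper's exact specification. For a mean outcome modeled as $\bar{y} = \bar{x}'\hat{\beta}$ in each region, the US–Europe gap can be split as:

```latex
\bar{y}^{\,US} - \bar{y}^{\,EU}
  = \underbrace{\left(\bar{x}^{\,US} - \bar{x}^{\,EU}\right)'\hat{\beta}^{\,US}}_{\text{(a) distribution of characteristics}}
  \;+\;
  \underbrace{\left(\bar{x}^{\,EU}\right)'\left(\hat{\beta}^{\,US} - \hat{\beta}^{\,EU}\right)}_{\text{(b) influence of given characteristics}}
```

Here $\bar{x}$ is the vector of mean household characteristics and $\hat{\beta}$ the estimated coefficients for each region; the choice of reference coefficients ($\hat{\beta}^{\,US}$ versus $\hat{\beta}^{\,EU}$) is one of the standard variants of the decomposition.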
We investigate, using the 2002 US Health and Retirement Study, the factors influencing individuals’ insecurity and expectations about terrorism, and study the effects the latter have on households’ portfolio choices and spending patterns. We find that females, the religiously devout, those equipped with a better memory, the less educated, and those living close to where the events of September 2001 took place worry a lot about their safety. In addition, fear of terrorism discourages households from investing in stocks, mostly through the high levels of insecurity felt by females. Insecurity due to terrorism also makes single men less likely to own a business. Finally, we find evidence of expenditure shifting away from recreational activities that can potentially leave one exposed to a terrorist attack and towards goods that might help one cope with the consequences of terrorism materially (increased use of the car and spending on the house) or psychologically (spending on personal care products by females in couples).