In this thesis, the transverse flow velocity of a heavy-ion reaction was determined directly. To this end, the conventional Yano-Koonin-Podgoretskii formalism, which had already been applied successfully to the determination of the longitudinal expansion, was modified. The transverse expansion was determined in several kinematic regions. Individual source elements reach velocities of up to β = 0.8, which agrees with the values obtained for the transverse flow by indirect methods. In the intervals of intermediate longitudinal pair rapidity, the Yano-Koonin-Podgoretskii rapidity equals the mean pair rapidity, the behaviour expected of a source with boost-invariant expansion. The HBT radii obtained in the course of the analysis of the correlation function are of the same order of magnitude as those determined in the study of the longitudinal expansion. Only the parameter R0 deviates, taking smaller values at lower rapidities; this parameter, however, carries a large uncertainty. The consistency of the formalism with respect to differently chosen transverse directions was verified: despite considerable differences in the transverse rapidity distributions, comparable results were obtained in four different directions. To cover a larger momentum range, the measurements were performed in two different magnetic-field configurations; in the regions where the parameters of the correlation function could be determined in both, comparable values were obtained.
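For orientation, the Yano-Koonin-Podgoretskii (YKP) parametrisation of the two-particle correlation function that such an analysis fits can be sketched as follows (this is the standard form from the HBT literature; the notation is our assumption and is not taken from the thesis itself):

```latex
% YKP parametrisation of the two-pion correlation function
% (q = momentum difference, K = pair momentum, standard form)
C(q, K) \;=\; 1 + \lambda \,
  \exp\!\Big[ -\,q_\perp^{2}\, R_\perp^{2}
              \;-\; \big(q_\parallel^{2} - (q^{0})^{2}\big)\, R_\parallel^{2}
              \;-\; \big(q \cdot U(K)\big)^{2}\, \big(R_0^{2} + R_\parallel^{2}\big) \Big]
```

Here U(K) = γ(1, 0, 0, v) is the Yano-Koonin 4-velocity; the rapidity associated with the fitted v is the YKP rapidity discussed in the abstract, and R⊥, R∥ and R0 are the HBT radius parameters.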
In this thesis, pion production in C+C and Si+Si collisions at 40A GeV and 158A GeV was studied. Two entirely different methods were used: dE/dx particle identification, and the h⁻ method, in which the contribution of non-pions is simulated. The results of the two methods agree well; their difference enters the systematic error. For the determination of the total multiplicities and mean transverse masses, the h⁻ method was chosen because of its larger acceptance. In addition, a centrality-dependent analysis of the pion multiplicities was carried out for 40A GeV C+C; its results should, however, be regarded as preliminary. The results of my analysis were compared with those of C. Höhne [14] at 158A GeV; they agree within errors. Models for the simulation of collisions (UrQMD, Venus) were presented and applied in order to compare the experimental results with the predictions of the simulations. A further model (Statistical Model of the Early Stage) was presented, which allows a qualitative and intuitive interpretation of the data. The results were shown and discussed as plots of the energy and system-size dependence, together with other NA49 results, results of other experiments, and simulation predictions. The transition from a suppression of pion production in Pb+Pb collisions relative to p+p to an enhancement of pion production at low SPS energies was also observed in the small systems C+C and Si+Si. An interpretation of the pion multiplicities with the Statistical Model of the Early Stage suggests that a quark-gluon plasma is already formed in C+C collisions at 40A GeV. This conjecture, however, still has to be confirmed by examining further observables.
Conventional extreme-value statistics, which is based on the frequency with which certain thresholds are exceeded or undershot, has the drawback that changes in the parameters of the frequency distribution affect the extreme-value probability. The mere presence of a trend can be responsible for such changes. The methodology chosen here avoids this drawback by decomposing the time series under consideration into a structured and an unstructured part. The structured part is composed of a trend component, a seasonal component and a smooth component. From the sum of those components that are significantly present in the time series, the probability of occurrence of extreme values can be derived. Something similar holds for the unstructured part, in particular for the variance of the residual. The residual may, however, also contain values that do not fit the frequency distribution otherwise fitted to it. Such values are termed extreme events and must be distinguished from extreme values. In the present work, variations of the extreme-value probability caused by changes in the parameters of the frequency distribution and parameter-independent extreme events of the near-surface air temperature are considered separately. The data basis consisted of 41 probably homogeneous European station time series of monthly mean temperatures covering the period from 1871 to 1990. A positive trend was detected in the temperature time series at 37 of the 41 stations, resulting in an increase of the extreme-value probability over time. In most cases the smooth, low-frequency oscillations affect the extreme-value probability negatively around 1890 and 1975 and positively around 1871, 1940 and 1990. Furthermore, changes in the seasonal pattern occur with respect to amplitude and phase.
Detected increases in the amplitude of the annual cycle lead to a positive change in the extreme-value probability. Significant changes in the phase of the seasonal pattern produce a seasonally varying trend in the anomaly time series whose amplitude, in the cases considered, is of the order of magnitude of the trend component. Seasonally varying trends influence the probability of occurrence of extreme values differently in different seasons. The residuals of five temperature time series exhibit significant non-stationarities of variance; in only one case does the variance increase with time and thus produce a rise in the extreme-value probability. Extreme events occur predominantly in the form of exceptionally cold winters and can probably be interpreted as realisations of a Poisson process. They appear randomly distributed over the observation period, with a mean return period of more than 10 years.
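The structured/unstructured split described above can be illustrated with a short numerical sketch. All names, the single-harmonic seasonal model and the Gaussian residual assumption are ours, for illustration only; the thesis itself uses a more elaborate selective decomposition.

```python
import numpy as np
from math import erf, sqrt

def decompose_and_exceedance(y, period=12, threshold=2.0):
    """Least-squares fit of a constant, a linear trend and one annual
    harmonic (the 'structured' part); the residual is the 'unstructured'
    part and is treated as Gaussian noise for the exceedance probability."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([
        np.ones_like(t),                  # mean
        t,                                # linear trend component
        np.sin(2 * np.pi * t / period),   # seasonal component ...
        np.cos(2 * np.pi * t / period),   # ... (amplitude and phase)
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    structured = X @ coef
    residual = y - structured
    sigma = residual.std(ddof=X.shape[1])
    # Exceedance probability at the last time step: a trend shifts the
    # structured part and hence this probability -- the effect the
    # abstract attributes to changing distribution parameters.
    z = (threshold - structured[-1]) / sigma
    p_exceed = 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return coef, sigma, p_exceed
```

On a series with a positive trend, re-evaluating the exceedance probability at later times shows exactly the rise in extreme-value probability the abstract describes.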
The main aim of this work is to investigate statistical relationships between the North Atlantic Oscillation (NAO) and the near-surface air temperature in Europe. To this end, the Pearson, Kendall and Spearman correlation coefficients and the transinformation were computed, and the associated significances estimated. These analyses were also carried out in a time-sliding fashion in order to detect possible changes in the influence of the NAO on temperature. Furthermore, selective time-series decomposition was used to search for significant characteristic temporal structures both in the NAO and in the air-temperature time series: trend, smooth, seasonal and harmonic components, and noise. The purpose of this investigation was to find matching temporal structures in the NAO and in temperature, if present, in order to describe the relationship between them more closely. The investigations were carried out for the period from 1871 to 1990 at monthly, seasonal and annual resolution, based on time series of mean monthly air temperature from 41 European WMO (World Meteorological Organization) stations and on two differently defined NAO index time series, likewise available as monthly means. In addition, a global data set of areal temperature means was used to obtain statements about relationships between the NAO and near-surface air temperature from a global perspective; these investigations referred to the interval from 1892 to 1994. The relationship between the temperatures observed in Europe and the NAO is linear in nature and most pronounced in the winter months. The maximum relationship is found in the northern European winter, with an explained variance of about 40%. A comparison of extremely cold winters with the NAO showed that extreme cold events occur only under a weak NAO (negative NAO index).
Over the annual cycle, the area influenced by the NAO shifts in the east-west direction; the relationship is weakest in summer, at maximum eastward displacement. The influence of the NAO on temperature is, moreover, strongly zonal: there is a north-south gradient from positive correlation in northern Europe to negative correlation in southern Europe. Both the analysis of the European data and that of the global data set led to this result. The influence of the NAO on temperature is not stationary; since the beginning of the twentieth century it has shifted increasingly eastwards. No significant trend, however, could be detected in the NAO index series. Significant temporal structures of the NAO were found in the ranges of both low-frequency and high-frequency variability. The winter NAO (mean index value from December to February) in particular shows a low-frequency behaviour matching that of the winter temperatures (temperature means of the months December to February), which can be described by polynomials of fourth and fifth order. In the range of high-frequency variability, a harmonic oscillation with a period of about 7 years was detected in all NAO index series except the summer and autumn data. The same oscillation is found in the winter temperatures of western and central Europe.
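The four association measures named above (Pearson, Kendall, Spearman, transinformation) can be sketched in plain NumPy as follows. This is a minimal illustration: tie handling and the significance estimation that the study performs are omitted.

```python
import numpy as np

def rankdata(x):
    """Simple ranks 1..n (ties are not averaged -- fine for continuous data)."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # Spearman = Pearson correlation of the ranks
    return pearson(rankdata(x), rankdata(y))

def kendall(x, y):
    # tau-a: (concordant - discordant) pairs / total pairs, O(n^2)
    n = len(x)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

def transinformation(x, y, bins=8):
    """Mutual information in nats from a 2-D histogram estimate."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```

Unlike the three correlation coefficients, the transinformation also picks up nonlinear dependence, which is why the study computes it alongside them.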
First, the necessity of schema changes was explained and various approaches from the literature were described for carrying out schema changes in running systems in such a way that the affected data objects can be converted as simply and automatically as possible. In this comparison, the concept of schema versioning proves to be the most powerful solution. The basic idea of schema versioning is that every schema change creates a new schema version, while the old schema version can continue to be used. The data objects likewise exist in several versions, and the schema change is mirrored at the object level by propagating data changes, i.e. converting the data automatically. The propagation exploits the relationships between the schema versions. With schema versioning it is possible to use several versions of a schema in parallel and to adapt only those applications accessing the database that are actually affected by the schema change. This diploma thesis is part of the COAST project, which implements schema versioning as a prototype. Before this thesis, COAST only offered the possibility of performing simple schema changes. Complex schema-change operations were newly introduced and the propagation concept extended accordingly. Complex schema changes differ from simple ones in that they can merge attributes from several source classes into one target class (or vice versa). The default conversion functions, previously only touched upon briefly, were examined more closely and introduced concretely. Several typical schema-change operations were presented and examined as to whether they could be carried out with the existing simple schema-change operations or whether complex ones are required.
In addition, it was analysed whether the system can automatically generate sensible default conversion operations for the respective operations or whether intervention by the schema developer is necessary. For this, the operations were divided into one of four categories, indicating whether simple or complex schema-change operations are required and whether sensible default conversion functions can be generated without the schema developer's intervention. For each of the schema-change operations listed, the corresponding default conversion function generated by the system was given and, where the schema developer has to check it, it was indicated where manual changes may still be needed. The next chapter analysed the effects of introducing complex schema-change operations on propagation and found that the previous concept of propagation edges between pairs of object versions of the same object is no longer sufficient. Accordingly, the new concept of combined propagation edges, which admits edges between more than two object versions, was developed, and various possible solutions were compared. Further, various approaches to the representation and storage of conversion functions were presented, and it was decided to store the conversion functions in textual form in the propagation edges. For specifying the desired conversions during propagation, a conversion language was developed and designed according to several criteria, covering both the required functional scope of the language and design aspects. All commands of the developed language were presented in detail, and finally the language was presented in BNF (Backus-Naur form). COAST has meanwhile been implemented as a prototype and was presented, among other venues, at CeBIT '99 (see [Lau99b] and [LDHA99]). After a description of the functioning and structure of COAST, and in particular of the propagation manager, some implementation details were presented and various considerations for possible optimisation described. The goals of the diploma thesis were thus achieved: schema evolution can be carried out with the advantages of versioning, complex schema changes are now possible and have been integrated into the model, propagation has been extended accordingly, and a language for specifying propagation has been developed.
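The core idea of combined propagation edges carrying conversion functions can be sketched in miniature. This is a hypothetical illustration, not COAST's actual design: COAST stores conversion functions textually in the edges, whereas this sketch uses Python callables for brevity.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SchemaVersion:
    name: str
    attributes: List[str]

@dataclass
class PropagationEdge:
    """A (possibly combined) propagation edge: it may read from several
    source object versions, mirroring a complex schema change that merges
    attributes of several source classes into one target class."""
    sources: List[SchemaVersion]
    target: SchemaVersion
    convert: Callable[[List[Dict]], Dict]  # default or developer-supplied

def propagate(edge: PropagationEdge, source_objects: List[Dict]) -> Dict:
    """Apply the edge's conversion function to produce the target
    object version, checking that every target attribute was set."""
    obj = edge.convert(source_objects)
    missing = [a for a in edge.target.attributes if a not in obj]
    if missing:
        raise ValueError(f"conversion left attributes unset: {missing}")
    return obj
```

A combined edge from two source classes, e.g. Person_v1(name) and Address_v1(city) merged into Contact_v2(name, city), is exactly the case that single-source propagation edges cannot express.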
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method of strictness analysis for lazy functional languages based on their operational semantics. We improve upon the work that Clark, Hankin and Hunt did on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
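For readers unfamiliar with what a strictness analysis computes, here is a deliberately much simpler relative of SAL: classic first-order strictness analysis over the two-point domain {0, 1} (0 = definitely undefined), in the style of Mycroft's abstract interpretation rather than Nöcker's abstract reduction. Everything in this sketch is our illustration and is not taken from the paper.

```python
from itertools import product

def fix_strictness(body, arity):
    """Least-fixpoint iteration of an abstract function over {0, 1}^arity.
    `body(f, *args)` is the abstract right-hand side of a recursive
    definition; recursive calls go through `f`. On this finite lattice
    the iteration always terminates."""
    table = {args: 0 for args in product((0, 1), repeat=arity)}
    while True:
        new = {args: body(lambda *a: table[a], *args) for args in table}
        if new == table:
            return table
        table = new

def strict_positions(table, arity):
    """f is strict in argument i iff an undefined i-th argument (0)
    forces an undefined result, all other arguments being defined (1)."""
    ones = (1,) * arity
    return [i for i in range(arity)
            if table[ones[:i] + (0,) + ones[i + 1:]] == 0]
```

For example, `add x y = if x == 0 then y else add (x - 1) y` abstracts to `f(x, y) = x & (y | f(x, y))` (a conditional becomes test & (then | else)); the fixpoint shows `add` is strict in both arguments, so a compiler may evaluate them eagerly.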
Syndicated loans and the number of lending relationships have attracted growing attention. All other terms being equal (e.g. seniority), syndicated loans provide larger payments (in basis points) to lenders funding larger amounts. The paper explores empirically the motivation for such price discrimination on sovereign syndicated loans in the period 1990-1997. First evidence suggests that larger premia are associated with renegotiation prospects. This is consistent with the hypothesis that price discrimination is aimed at reducing the number of lenders and thus the expected renegotiation costs. However, larger payment discrimination is also associated with more targeted market segments and with larger loans, thus minimising borrowing costs and/or attempting to widen the circle of lending relationships in order to successfully raise the requested amount. JEL Classification: F34, G21, G33. This version: June 2002. Later version (October 2003) with the title "Why Borrowers Pay Premiums to Larger Lenders: Empirical Evidence from Sovereign Syndicated Loans": http://publikationen.ub.uni-frankfurt.de/volltexte/2005/992/
We use consumer price data for 205 cities/regions in 21 countries to study deviations from the law-of-one-price before, during and after the major currency crises of the 1990s. We combine data from industrialised nations in North America (United States, Canada, Mexico), Europe (Germany, Italy, Spain and Portugal) and Asia (Japan, Korea, New Zealand, Australia) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). We confirm previous results that both distance and borders explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects by significantly increasing these border effects and by raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent, since relative price volatility across emerging markets today is still significantly larger than a decade ago. JEL classification: F40, F41
We use consumer price data for 81 European cities (in Germany, Austria, Switzerland, Italy, Spain and Portugal) to study deviations from the law-of-one-price before and during the European Economic and Monetary Union (EMU). Analysing both aggregate and disaggregate CPI data for 7 categories of goods, we find that the distance between cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of the relative price is much higher for two cities located in different countries than for two equidistant cities in the same country. Under EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and borders still matter for intra-European relative price volatility. JEL classification: F40, F41
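The kind of regression behind such distance and border effects can be sketched on synthetic data. All coefficients and variable names below are invented for illustration; the actual studies use CPI data for real city pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400                                         # hypothetical city pairs
log_dist = rng.uniform(3.0, 8.0, n)             # log kilometres between cities
border = rng.integers(0, 2, n).astype(float)    # 1 if the pair straddles a border
# synthetic 'volatility of the relative price' with known coefficients:
# volatility rises with distance and jumps at a national border
vol = 0.02 + 0.004 * log_dist + 0.03 * border + rng.normal(0, 0.005, n)

# OLS of volatility on log distance and the border dummy
X = np.column_stack([np.ones(n), log_dist, border])
beta, *_ = np.linalg.lstsq(X, vol, rcond=None)
```

A positive, significant coefficient on the border dummy after controlling for distance is precisely the "border effect" both abstracts refer to.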
This paper analyzes a comprehensive data set of 108 non-venture-backed, 58 venture-backed and 33 bridge-financed companies going public at Germany's Neuer Markt between March 1997 and March 2000. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary, or rather competing, role of venture capitalists and underwriters in certifying the quality of a company when going public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to reduced ex-ante uncertainty. This study provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non-VC-backed IPOs.
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Taking into account potential downward nominal rigidities in the price-setting behavior of firms, in addition, appears not to impose significant further distortions on the economy. JEL Classification: C63, E31, E52.
Ignoring the existence of the zero lower bound on nominal interest rates, one considerably understates the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with lower bound, calibrated to the U.S. economy, suggests that low values for the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and deflation are much larger than in the case with policy commitment and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL Classification: E31, E52
Using data from the Consumer Expenditure Survey we first document that the recent increase in income inequality in the US has not been accompanied by a corresponding rise in consumption inequality. Much of this divergence is due to different trends in within-group inequality, which has increased significantly for income but little for consumption. We then develop a simple framework that allows us to analytically characterize how within-group income inequality affects consumption inequality in a world in which agents can trade a full set of contingent consumption claims, subject to endogenous constraints emanating from the limited enforcement of intertemporal contracts (as in Kehoe and Levine, 1993). Finally, we quantitatively evaluate, in the context of a calibrated general equilibrium production economy, whether this set-up, or alternatively a standard incomplete markets model (as in Aiyagari, 1994), can account for the documented stylized consumption inequality facts from the US data. JEL Classification: E21, D91, D63, D31, G22
In this paper, we examine the cost of insurance against model uncertainty for the Euro area considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range in terms of a Minimax policy comes at moderate costs in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity-aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL Classification: E52, E58, E61
This paper studies an overlapping generations model with stochastic production and incomplete markets to assess whether the introduction of an unfunded social security system leads to a Pareto improvement. When returns to capital and wages are imperfectly correlated, a system that endows retired households with claims to labor income enhances the sharing of aggregate risk between generations. Our quantitative analysis shows that, abstracting from the capital crowding-out effect, the introduction of social security represents a Pareto improving reform, even when the economy is dynamically efficient. However, the severity of the crowding-out effect in general equilibrium tends to overturn these gains. JEL Classification: E62, H55, H31, D91, D58. April 2005.
While much of classical statistical analysis is based on Gaussian distributional assumptions, statistical modeling with the Laplace distribution has gained importance in many applied fields. This phenomenon is rooted in the fact that, like the Gaussian, the Laplace distribution has many attractive properties. This paper investigates two methods of combining the two distributions and their use in modeling and predicting financial risk. Based on 25 daily stock return series, the empirical results indicate that the new models offer a plausible description of the data. They are also shown to be competitive with, or superior to, use of the hyperbolic distribution, which has gained some popularity in asset-return modeling and, in fact, also nests the Gaussian and Laplace. JEL Classification: C16, C50. March 2005.
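The Gaussian-versus-Laplace comparison underlying such models can be sketched via maximum likelihood. This is our minimal illustration; the paper's models combining the two distributions are considerably richer.

```python
import numpy as np

def gaussian_loglik(x):
    """Log-likelihood of x under the ML-fitted Gaussian
    (mean and standard deviation estimated from the data)."""
    mu, sigma = x.mean(), x.std()
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - (x - mu) ** 2 / (2 * sigma**2)))

def laplace_loglik(x):
    """Log-likelihood of x under the ML-fitted Laplace: the ML location
    is the median, the ML scale is the mean absolute deviation from it."""
    m = np.median(x)
    b = np.mean(np.abs(x - m))
    return float(np.sum(-np.log(2 * b) - np.abs(x - m) / b))
```

On fat-tailed return data the Laplace fit typically attains the higher log-likelihood, which is the basic motivation for moving beyond the Gaussian in asset-return modeling.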
This paper computes the optimal progressivity of the income tax code in a dynamic general equilibrium model with household heterogeneity in which uninsurable labor productivity risk gives rise to a nontrivial income and wealth distribution. A progressive tax system serves as a partial substitute for missing insurance markets and enhances an equal distribution of economic welfare. These beneficial effects of a progressive tax system have to be traded off against the efficiency loss arising from distorting endogenous labor supply and capital accumulation decisions. Using a utilitarian steady state social welfare criterion we find that the optimal US income tax is well approximated by a flat tax rate of 17.2% and a fixed deduction of about $9,400. The steady state welfare gains from a fundamental tax reform towards this tax system are equivalent to 1.7% higher consumption in each state of the world. An explicit computation of the transition path induced by a reform of the current tax system towards the optimal one indicates that a majority of the population currently alive (roughly 62%) would experience welfare gains, suggesting that such fundamental income tax reform is not only desirable, but may also be politically feasible. JEL Classification: E62, H21, H24.
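The tax schedule found to be approximately optimal is easy to state explicitly; a minimal sketch using the paper's reported numbers (17.2% flat rate, deduction of about $9,400):

```python
def optimal_flat_tax(income, rate=0.172, deduction=9400.0):
    """Tax liability under a flat rate applied to income above a fixed
    deduction. Despite the flat marginal rate, the deduction makes the
    *average* tax rate rise with income, i.e. the schedule is progressive."""
    return rate * max(0.0, income - deduction)
```

For example, the average tax rate at $100,000 of income exceeds that at $20,000, which is the sense in which this two-parameter schedule is progressive.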
Financial markets embed expectations of central bank policy into asset prices. This paper compares two approaches that extract a probability density of market beliefs. The first is a simulated-moments estimator for option volatilities described in Mizrach (2002); the second is a new approach developed by Haas, Mittnik and Paolella (2004a) for fat-tailed conditionally heteroskedastic time series. In an application to the 1992-93 European Exchange Rate Mechanism crises, we find that both the options and the underlying exchange rates provide useful information for policy makers. JEL Classification: G12, G14, F31.
Volatility forecasting
(2005)
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1.
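As a minimal illustration of the GARCH paradigm surveyed here, the GARCH(1,1) variance recursion and its multi-step forecasts can be sketched as follows (the parameter values used below are assumed for illustration, not estimated):

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta, horizon=10):
    """GARCH(1,1): sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t.
    Filters the conditional variance through the sample, then produces
    h-step-ahead forecasts, which revert geometrically (at rate
    alpha + beta) to the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = np.var(returns)                       # initialise at sample variance
    for r in returns:                              # variance filter
        sigma2 = omega + alpha * r**2 + beta * sigma2
    uncond = omega / (1.0 - alpha - beta)
    forecasts = []
    for _ in range(horizon):                       # forecast recursion:
        forecasts.append(sigma2)                   # E[r^2 | info] = sigma2,
        sigma2 = omega + (alpha + beta) * sigma2   # so alpha and beta combine
    return np.array(forecasts), uncond
```

The mean reversion of the forecast path toward the unconditional variance is the key property exploited when such forecasts feed into portfolio allocation or risk management.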
This paper analyzes dynamic equilibrium risk sharing contracts between profit-maximizing intermediaries and a large pool of ex-ante identical agents that face idiosyncratic income uncertainty that makes them heterogeneous ex-post. In any given period, after having observed her income, the agent can walk away from the contract, while the intermediary cannot, i.e. there is one-sided commitment. We consider the extreme scenario that the agents face no costs of walking away, and can sign up with any competing intermediary without any reputational losses. We demonstrate that not only autarky, but also partial and full insurance can obtain, depending on the relative patience of agents and financial intermediaries. Insurance can be provided because in an equilibrium contract an up-front payment effectively locks in the agent with an intermediary. We then show that our contract economy is equivalent to a consumption-savings economy with one-period Arrow securities and a short-sale constraint, similar to Bulow and Rogoff (1989). From this equivalence and our characterization of dynamic contracts it immediately follows that without costs of switching financial intermediaries debt contracts are not sustainable, even though a risk allocation superior to autarky can be achieved. JEL classification: G22, E21, D11, D91.
Default risk sharing between banks and markets : the contribution of collateralized debt obligations
(2005)
This paper contributes to the economics of financial institutions' risk management by exploring how loan securitization affects their default risk, their systematic risk, and their stock prices. In a typical CDO transaction a bank retains through a first loss piece a very high proportion of the expected default losses, and transfers only the extreme losses to other market participants. The size of the first loss piece is largely driven by the average default probability of the securitized assets. If the bank sells loans in a true sale transaction, it may use the proceeds to expand its loan business, thereby incurring more systematic risk. We find an increase of the banks' betas, but no significant stock price effect around the announcement of a CDO issue. Our results suggest a role for supervisory requirements in stabilizing the financial system, related to transparency of tranche allocation, and to regulatory treatment of senior tranches. JEL classification: D82, G21, D74.
This paper makes an attempt to present the economics of credit securitization in a non-technical way, starting from the description and the analysis of a typical securitization transaction. The paper sketches a theoretical explanation for why tranching, or nonproportional risk sharing, which is at the heart of securitization transactions, may allow commercial banks to maximize their shareholder value. However, the analysis also makes clear that the conditions under which credit securitization enhances welfare are fairly restrictive, and require not only an active role of the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort. JEL classification: D82, G21, D74. February 16, 2005.
We selectively survey, unify and extend the literature on realized volatility of financial asset returns. Rather than focusing exclusively on characterizing the properties of realized volatility, we progress by examining economically interesting functions of realized volatility, namely realized betas for equity portfolios, relating them both to their underlying realized variance and covariance parts and to underlying macroeconomic fundamentals.
From a macroeconomic perspective, the short-term interest rate is a policy instrument under the direct control of the central bank. From a finance perspective, long rates are risk-adjusted averages of expected future short rates. Thus, as illustrated by much recent research, a joint macro-finance modeling strategy will provide the most comprehensive understanding of the term structure of interest rates. We discuss various questions that arise in this research, and we also present a new examination of the relationship between two prominent dynamic, latent factor models in this literature: the Nelson-Siegel and affine no-arbitrage term structure models. JEL classification: G1, E4, E5.
What do academics have to offer market risk management practitioners in financial institutions? Current industry practice largely follows one of two extremely restrictive approaches: historical simulation or RiskMetrics. In contrast, we favor flexible methods based on recent developments in financial econometrics, which are likely to produce more accurate assessments of market risk. Clearly, the demands of real-world risk management in financial institutions - in particular, real-time risk tracking in very high-dimensional situations - impose strict limits on model complexity. Hence we stress parsimonious models that are easily estimated, and we discuss a variety of practical approaches for high-dimensional covariance matrix modeling, along with what we see as some of the pitfalls and problems in current practice. In so doing we hope to encourage further dialog between the academic and practitioner communities, hopefully stimulating the development of improved market risk management technologies that draw on the best of both worlds.
This study offers a historical review of the monetary policy reform of October 6, 1979, and discusses the influences behind it and its significance. We lay out the record from the start of 1979 through the spring of 1980, relying almost exclusively upon contemporaneous sources, including the recently released transcripts of Federal Open Market Committee (FOMC) meetings during 1979. We then present and discuss in detail the reasons for the FOMC's adoption of the reform and the communications challenge presented to the Committee during this period. Further, we examine whether the essential characteristics of the reform were consistent with monetarism, new, neo, or old-fashioned Keynesianism, nominal income targeting, and inflation targeting. The record suggests that the reform was adopted when the FOMC became convinced that its earlier gradualist strategy using finely tuned interest rate moves had proved inadequate for fighting inflation and reversing inflation expectations. The new plan had to break dramatically with established practice, allow for the possibility of substantial increases in short-term interest rates, yet be politically acceptable, and convince financial market participants that it would be effective. The new operating procedures were also adopted for the pragmatic reason that they would likely succeed. JEL classification: E52, E58, E61, E65.
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB-banks are well-capitalized or gain little from taking risks, then they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL classification: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in macroeconomic performance of the U.S. economy.
Recent evidence on the effect of government spending shocks on consumption cannot be easily reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL classification: E32, E62.
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely about real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL classification: E5, E61, D78.
The Basle securitisation framework explained: the regulatory treatment of asset securitisation
(2005)
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on the intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger cause bond spread changes for a higher number of firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market and the magnitude of this sensitivity increases when credit quality becomes worse. Finally, the CDS market plays a more important role for price discovery than the corporate bond market. JEL classification: G10, G14, C32.
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally-expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over and above the direct news announcement effects. JEL classification: F3, F4, G1, C5.
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of efforts. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, when firms are less profitable, and when poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL classification: D82, G21, G32.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication and a lower probability of potentially harmful co-investment, and facilitate board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which goes hand in hand with an increased probability of investment in high-tech companies. JEL classification: G24, G31, G32.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL classification: C1, G1.
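The realized-beta construction underlying the abstract above — the realized covariance of stock and market returns divided by the realized market variance over a window of high-frequency returns — can be sketched in a few lines. The simulated "one-minute" returns and the true beta of 1.2 are purely illustrative, not data from the paper.

```python
import numpy as np

def realized_beta(stock_returns, market_returns):
    """Realized beta over a window: sum of intraday cross-products
    divided by the sum of squared market returns."""
    rcov = np.sum(stock_returns * market_returns)   # realized covariance
    rvar = np.sum(market_returns ** 2)              # realized market variance
    return rcov / rvar

# illustrative simulation: 390 one-minute returns in a trading day
rng = np.random.default_rng(1)
m = rng.normal(0.0, 0.001, 390)                     # market returns
s = 1.2 * m + rng.normal(0.0, 0.0005, 390)          # stock with true beta 1.2
beta_hat = realized_beta(s, m)
```

Because the beta is a ratio of two highly persistent realized quantities, its own persistence can be much lower — which is exactly the nonlinear-fractional-cointegration point the abstract makes.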
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL classification: C62, D83, D84, E31.
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. JEL classification: E31, E52, E58, E61.
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL classification: C63, E31, E52.
In this article, we investigate risk-return characteristics and diversification benefits when private equity is used as a portfolio component. We use a unique dataset describing 642 US portfolio companies with 3620 private equity investments. Information about precisely dated cash flows at the company level enables, for the first time, a cash-flow-equivalent and simultaneous investment simulation in stocks, as well as the construction of stock portfolios for benchmarking purposes. With respect to the methodology involved, we construct private equity, stock-benchmark and mixed-asset portfolios using bootstrap simulations. For the late 1990s we find a dramatic increase in the extent to which private equity outperforms stock investment; in earlier years private equity was underperforming its stock benchmarks. Within the overall class of private equity, returns on earlier private equity investment categories, like venture capital, show on average higher variations and even higher rates of failure. It is in this category in particular that high average portfolio returns are generated solely by the ability to select a few extremely well performing companies, thus compensating for lost investments. There is a high marginal diversifiable risk reduction of about 80% when the portfolio size is increased to include 15 investments. When the portfolio size is increased from 15 to 200 there are few marginal risk diversification effects on the one hand, but a large increase in management expenditure on the other, so that an actual average portfolio size of between 20 and 28 investments seems to be well balanced. We provide empirical evidence that the non-diversifiable risk that a constrained investor, who is exclusively investing in private equity, has to hold exceeds that of constrained stock investors and also the market risk. From the viewpoint of unconstrained investors with complete investment freedom, risk can be optimally reduced by constructing mixed-asset portfolios.
According to the various private equity subcategories analyzed, there are big differences in optimal allocations to this asset class for minimizing mixed-asset portfolio variance or maximizing performance ratios. We observe optimal portfolio weightings to be between 3% and 65%.
We take a simple time-series approach to modeling and forecasting daily average temperature in U.S. cities, and we inquire systematically as to whether it may prove useful from the vantage point of participants in the weather derivatives market. The answer is, perhaps surprisingly, yes. Time-series modeling reveals conditional mean dynamics, and crucially, strong conditional variance dynamics, in daily average temperature, and it reveals sharp differences between the distribution of temperature and the distribution of temperature surprises. As we argue, it also holds promise for producing the long-horizon predictive densities crucial for pricing weather derivatives, so that additional inquiry into time-series weather forecasting methods will likely prove useful in weather derivatives contexts.
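A stripped-down version of such a time-series model for daily average temperature — a Fourier seasonal mean plus AR(1) surprises, fitted in two least-squares steps — might look as follows. The simulated series and all parameters are hypothetical, not the paper's U.S.-city data, and the seasonal conditional-variance dynamics the paper stresses are omitted here for brevity.

```python
import numpy as np

def seasonal_design(day_of_year, n_harmonics=2):
    """Regression design with Fourier terms capturing the annual cycle in mean temperature."""
    t = 2 * np.pi * day_of_year / 365.25
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

# simulate three years of daily temperature: seasonal mean + AR(1) surprises
rng = np.random.default_rng(3)
days = np.arange(3 * 365)
mean_temp = 12.0 + 10.0 * np.cos(2 * np.pi * (days - 200) / 365.25)
eps = np.zeros(len(days))
for t in range(1, len(days)):
    eps[t] = 0.7 * eps[t - 1] + rng.normal(0.0, 2.0)   # persistent weather surprises
temp = mean_temp + eps

# two-step fit: OLS seasonal mean, then AR(1) coefficient on the residuals
X = seasonal_design(days)
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta
phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
```

Iterating the fitted AR(1) forward (and, in a fuller model, the variance dynamics) is what produces the long-horizon predictive densities needed for weather-derivative pricing.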
Despite powerful advances in yield curve modeling in the last twenty years, comparatively little attention has been paid to the key practical problem of forecasting the yield curve. In this paper we do so. We use neither the no-arbitrage approach, which focuses on accurately fitting the cross section of interest rates at any given time but neglects time-series dynamics, nor the equilibrium approach, which focuses on time-series dynamics (primarily those of the instantaneous rate) but pays comparatively little attention to fitting the entire cross section at any given time and has been shown to forecast poorly. Instead, we use variations on the Nelson-Siegel exponential components framework to model the entire yield curve, period-by-period, as a three-dimensional parameter evolving dynamically. We show that the three time-varying parameters may be interpreted as factors corresponding to level, slope and curvature, and that they may be estimated with high efficiency. We propose and estimate autoregressive models for the factors, and we show that our models are consistent with a variety of stylized facts regarding the yield curve. We use our models to produce term-structure forecasts at both short and long horizons, with encouraging results. In particular, our forecasts appear much more accurate at long horizons than various standard benchmark forecasts. JEL Code: G1, E4, C5
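The three-factor representation referred to above is the Nelson-Siegel curve y(tau) = b1 + b2*(1-exp(-lam*tau))/(lam*tau) + b3*[(1-exp(-lam*tau))/(lam*tau) - exp(-lam*tau)], where b1, b2, b3 play the roles of level, slope and curvature. The sketch below fixes lam (0.0609 is the value commonly used when maturities are in months) and recovers the three factors by OLS; the yield curve here is synthetic, constructed only to illustrate the fit.

```python
import numpy as np

def nelson_siegel_loadings(tau, lam=0.0609):
    """Level, slope and curvature factor loadings of the Nelson-Siegel curve."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x          # loads heavily at short maturities
    curvature = slope - np.exp(-x)        # humped, peaks at medium maturities
    return np.column_stack([np.ones_like(tau), slope, curvature])

def fit_nelson_siegel(tau, yields, lam=0.0609):
    """With lambda fixed, the factors are recovered by ordinary least squares."""
    X = nelson_siegel_loadings(tau, lam)
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta

tau = np.array([3.0, 6.0, 12.0, 24.0, 60.0, 120.0])   # maturities in months
true_beta = np.array([6.0, -2.0, 1.5])                # level, slope, curvature
y = nelson_siegel_loadings(tau) @ true_beta           # synthetic yield curve
beta_hat = fit_nelson_siegel(tau, y)
```

Re-estimating (level, slope, curvature) period by period and fitting autoregressions to the resulting factor series is what turns this cross-sectional fit into a forecasting model.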
We consider three sets of phenomena that feature prominently - and separately - in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that: (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis.
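The core link in (a) can be made concrete under conditional normality: with a constant positive mean, Pr(r > 0) = Phi(mu / sigma) moves with the volatility forecast, so forecastable volatility implies forecastable signs. A tiny sketch with illustrative numbers:

```python
import math

def prob_positive_return(mu, sigma):
    """Pr(r > 0) for r ~ N(mu, sigma^2), via the standard normal CDF.
    Sign forecastability inherits volatility dynamics whenever mu != 0."""
    return 0.5 * (1 + math.erf(mu / (sigma * math.sqrt(2))))

mu = 0.0005                                   # small positive expected daily return (illustrative)
p_calm = prob_positive_return(mu, 0.005)      # low-volatility day: sign more predictable
p_turb = prob_positive_return(mu, 0.02)       # high-volatility day: closer to a coin flip
```

With mu = 0 the probability is exactly 0.5 regardless of volatility, which is why nonzero expected returns are essential to the argument.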
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
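A sketch of the idea, under simplifying assumptions: each variance is recovered from daily highs and lows by Parkinson's range-based estimator, and the covariance then follows from the identity var(x - y) = var(x) + var(y) - 2 cov(x, y) applied to a "difference" asset (as with an FX cross rate, whose log is the difference of two log exchange rates). The simulated driftless log-price paths and parameters below are illustrative, not the paper's data or exact estimator.

```python
import numpy as np

def parkinson_variance(high, low):
    """Parkinson range-based daily variance: mean of (ln(H/L))^2 / (4 ln 2)."""
    return np.mean(np.log(high / low) ** 2) / (4.0 * np.log(2.0))

def hi_lo(log_path):
    """Daily high and low price levels from intraday log-price paths starting at 0."""
    return np.exp(log_path.max(axis=1)), np.exp(log_path.min(axis=1))

# simulate two correlated driftless log-price processes (illustrative parameters)
rng = np.random.default_rng(2)
n_days, n_steps = 250, 1000
sx, sy, rho = 0.010, 0.012, 0.5                  # daily vols and correlation
z1 = rng.normal(size=(n_days, n_steps))
z2 = rng.normal(size=(n_days, n_steps))
dx = sx / np.sqrt(n_steps) * z1
dy = sy / np.sqrt(n_steps) * (rho * z1 + np.sqrt(1 - rho**2) * z2)
zero = np.zeros((n_days, 1))
px = np.concatenate([zero, np.cumsum(dx, axis=1)], axis=1)
py = np.concatenate([zero, np.cumsum(dy, axis=1)], axis=1)
pd_ = px - py                                    # log price of the 'difference' asset

vx = parkinson_variance(*hi_lo(px))
vy = parkinson_variance(*hi_lo(py))
vd = parkinson_variance(*hi_lo(pd_))
cov_hat = 0.5 * (vx + vy - vd)                   # true value here: rho*sx*sy = 6e-05
```

Because the range summarizes the whole intraday path, estimators of this type remain informative even when tick-level prices are contaminated by bid-ask bounce.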
Financial theory creates a puzzle. Some authors argue that high-risk entrepreneurs choose debt contracts instead of equity contracts since risky but high returns are of relatively more value for a loan-financed firm. On the contrary, authors who focus explicitly on start-up finance predict that entrepreneurs are more likely to seek equity-like venture capital contracts the riskier their projects are. Our paper makes a first step to resolve this puzzle empirically. We present microeconometric evidence on the determinants of debt and equity financing in young and innovative SMEs. We pay special attention to the role of risk for the choice of the financing method. Since risk is not directly observable we use different indicators for financial and project risk. It turns out that our data generally confirms the hypothesis that the probability that a young high-tech firm receives equity financing is an increasing function of the financial risk. With regard to the intrinsic project risk, our results are less conclusive, as some of our indicators of a risky project are found to have a negative effect on the likelihood to be financed by private equity.
We study the returns to venture capital and private equity investment using data from 221 venture capital and private equity funds that are part of 72 venture capital and private equity firms, 5040 entrepreneurial firms (3826 venture capital and 1214 private equity), spanning 32 years (1971-2003) and 39 countries in North and South America, Europe and Asia. We make use of four main categories of variables to proxy for value-added activities and risks that explain venture capital and private equity returns: market and legal environment, VC characteristics, entrepreneurial firm characteristics, and the characteristics and structure of the investment. We show that Heckman sample selection issues with regard to both unrealized and partially realized investments are important to consider when analysing the determinants of realized returns. We further compare the actual unrealized returns, as reported to investment managers, to the predicted unrealized returns based on the estimates of realized returns from the sample selection models. We show that there exist significant systematic biases in the reporting of unrealized investments to institutional investors, depending on the level of the earnings aggressiveness and disclosure indices in a country, as well as on proxies for the degree of information asymmetry between investment managers and venture capital and private equity fund managers. JEL classification: G24, G28, G31, G32, G35.
We analyze welfare-maximizing monetary policy in a dynamic two-country model with price stickiness and imperfect competition. In this context, a typical terms-of-trade externality affects policy interaction between independent monetary authorities. Unlike the existing literature, we remain consistent with a public finance approach by an explicit consideration of all the distortions that are relevant to the Ramsey planner. This strategy entails two main advantages. First, it allows an accurate characterization of optimal policy in an economy that evolves around a steady state which is not necessarily efficient. Second, it allows us to describe a full range of alternative dynamic equilibria when price setters in both countries are completely forward-looking and households' preferences are not restricted. In this context, we study optimal policy both in the long run and along a dynamic path, and we compare optimal commitment policy under Nash competition and under cooperation. By deriving a second-order accurate solution to the policy functions, we also characterize the welfare gains from international policy cooperation. JEL classification: E52, F41. This version: January 2004. First draft: October 2003.
This paper considers a theoretical model of n asymmetric firms that reduce their initial unit costs by spending on R&D activities. In accordance with Schumpeterian hypotheses we obtain that more efficient (bigger) firms spend more on R&D, and this leads to a more concentrated market structure. We also find a positive relationship between innovation and market concentration. This calls for a corrective tax on R&D activities to curtail strategic incentives to over-invest in R&D in an attempt to achieve a higher market share. JEL classification: L11, L52, O31. February 2004.
This paper aims to analyze the impact of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We thereby investigate the hypothesis that the different governance structures, objectives and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set embracing all IPOs which occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after the IPO compared to all other IPOs, and their share prices fluctuate less than those of their counterparts in this period of time. Obviously, independent VCs, which concentrated mainly on growth stocks (low book-to-market ratio) and large firms (high market value), were able to add value, leading to less post-IPO idiosyncratic risk and more return (after controlling for all other effects). On the contrary, firms backed by public VCs (being small and having a high book-to-market ratio) showed relative underperformance. JEL classification: G10, G14, G24. 29 January 2004.
How might retirees consider deploying the retirement assets accumulated in a defined contribution pension plan? One possibility would be to purchase an immediate annuity. Another approach, called the "phased withdrawal" strategy in the literature, would have the retiree invest his funds and then withdraw some portion of the account annually. Using this second tactic, the withdrawal rate might be determined according to a fixed benefit level payable until the retiree dies or the funds run out, or it could be set using a variable formula, where the retiree withdraws funds according to a rule linked to life expectancy. Using a range of data consistent with the German experience, we evaluate several alternative designs for phased withdrawal strategies, allowing for endogenous asset allocation patterns, and also allowing the worker to make decisions both about when to retire and when to switch to an annuity. We show that one particular phased withdrawal rule is appealing since it offers relatively low expected shortfall risk, good expected payouts for the retiree during his life, and some bequest potential for the heirs. We also find that unisex mortality tables, if used for annuity pricing, can make women's expected shortfalls higher, expected benefits higher, and bequests lower under a phased withdrawal program. Finally, we show that delayed annuitization can be appealing since it provides higher expected benefits with lower expected shortfalls, at the cost of somewhat lower anticipated bequests. JEL classification: G22, G23, J26, J32, H55. January 2004.
This paper proposes an intertemporal model of venture capital investment with screening and advising where the venture capitalist's time endowment is the scarce input factor. Screening improves the selection of firms receiving finance; advising allows firms to develop a marketable product; both have a variable intensity. In our setup, optimal linear contracts solve the moral hazard problem. Screening, however, requires an entrepreneur wage and does not allow for upfront payments, which would cause severe adverse selection. Project characteristics have implications for screening and advising intensity and the distribution of profits. Finally, we develop a formal version of the "venture capital cycle" by extending the basic setup to a simple model of venture capital supply and demand.
This paper analyses the effects of the Initial Public Offering (IPO) market on real investment decisions in emerging industries. We first propose a model of IPO timing based on divergence of opinion among investors and short-sale constraints. Using a real option approach, we show that firms are more likely to go public when the ratio of overvaluation to profits is high, that is, after stock market run-ups. Because initial returns increase with the demand from optimistic investors at the time of the offer, the model provides an explanation for the observed positive causality between average initial returns and IPO volume. Second, we discuss the possibility of real overinvestment in high-tech industries. We claim that investing in the industry gives agents an option to sell the project on the stock market at an overvalued price, thereby enabling the financing of positive-NPV projects which would not be undertaken otherwise. It is shown, however, that the IPO market can also lead to overinvestment in new industries. Finally, we present some econometric results supporting the idea that funds committed to the financing of high-tech industries may respond positively to optimistic stock market valuations.
Equal size, equal role? Interest rate interdependence between the euro area and the United States
(2003)
This paper investigates whether the degree and the nature of economic and monetary policy interdependence between the United States and the euro area have changed with the advent of EMU. Using real-time data, it addresses this issue from the perspective of financial markets by analysing the effects of monetary policy announcements and macroeconomic news on daily interest rates in the United States and the euro area. First, the paper finds that the interdependence of money markets has increased strongly around EMU. Although spillover effects from the United States to the euro area remain stronger than in the opposite direction, we present evidence that US markets have also started reacting to euro area developments since the onset of EMU. Second, beyond these general linkages, the paper finds that certain macroeconomic news about the US economy has a large and significant effect on euro area money markets, and that these effects have become stronger in recent years. Finally, we show that US macroeconomic news has become a good leading indicator for economic developments in the euro area. This indicates that the higher money market interdependence between the United States and the euro area is at least partly explained by the increased real integration of the two economies in recent years.
Based on a broad set of regional aggregated and disaggregated consumer price index (CPI) data from major industrialized countries in Asia, North America and Europe, we examine the role that national borders play in goods market integration. In line with the existing literature, we find that intra-national markets are better integrated than international markets. Additionally, our results show that there is a large "ocean" effect, i.e., inter-continental markets are significantly more segmented than intra-continental markets. To examine the impact of the establishment of the European Monetary Union (EMU) on integration, we split our sample into a pre-EMU and an EMU sample. We find that border effects across EMU countries have declined by about 80% to 90% after 1999, whereas border estimates across non-EMU countries have remained basically unchanged. Since global factors have affected all countries in our sample similarly, and major integration efforts across EMU countries were made before 1999, we suggest that most of the reduction in EMU border estimates has been "nominal". Panel unit root evidence shows that the observed large differences in integration across intra- and inter-continental markets remain valid in the long run. This finding implies that real factors are responsible for the documented segmentation across our sample countries.
We estimate a Bayesian vector autoregression for the U.K. with drifting coefficients and stochastic volatilities. We use it to characterize posterior densities for several objects that are useful for designing and evaluating monetary policy, including local approximations to the mean, persistence, and volatility of inflation. We present diverse sources of uncertainty that impinge on the posterior predictive density for inflation, including model uncertainty, policy drift, structural shifts and other shocks. We use a recently developed minimum entropy method to bring outside information to bear on inflation forecasts. We compare our predictive densities with the Bank of England's fan charts.
We show that diverse beliefs are an important propagation mechanism of fluctuations, money non-neutrality and the efficacy of monetary policy. Since expectations affect demand, our theory shows that economic fluctuations are mostly driven by varying demand, not supply shocks. Using a competitive model with flexible prices in which agents hold Rational Beliefs (see Kurz (1994)), we show that (i) our economy replicates well the empirical record of fluctuations in the U.S.; (ii) under monetary rules without discretion, monetary policy has a strong stabilization effect, and an aggressive anti-inflationary policy can reduce inflation volatility to zero; (iii) the statistical Phillips Curve changes substantially with policy instruments, and activist policy rules render it vertical; (iv) although prices are flexible, money shocks result in less than proportional changes in inflation, hence the aggregate price level appears "sticky" with respect to money shocks; (v) discretion in monetary policy adds a random element to policy and increases volatility. The impact of discretion on the efficacy of policy depends upon the structure of market beliefs about future discretionary decisions. We study two rationalizable beliefs. In one case, market beliefs weaken the effect of policy; in the second, beliefs bolster policy outcomes, and discretion could be a desirable attribute of the policy rule. Since the central bank does not know any more than the private sector, real social gains from discretion arise only in extraordinary cases. Hence, the weight of the argument leads us to conclude that the bank's policy should be transparent and should abandon discretion except in rare and unusual circumstances. (vi) An implication of our model suggests that the current effective policy is only mildly activist and aims mostly at targeting inflation.
Permanent and transitory policy shocks in an empirical macro model with asymmetric information
(2003)
Despite a large literature documenting that the efficacy of monetary policy depends on how inflation expectations are anchored, many monetary policy models assume: (1) the inflation target of monetary policy is constant; and (2) the inflation target is known by all economic agents. This paper proposes an empirical specification with two policy shocks: permanent changes to the inflation target and transitory perturbations of the short-term real rate. The public cannot correctly distinguish between these two shocks and, under incomplete learning, private perceptions of the inflation target will not equal the true target. The paper shows how imperfect policy credibility can affect economic responses to structural shocks, including the transition to a new inflation target - a question that cannot be addressed by many commonly used empirical and theoretical models. In contrast to models where all monetary policy actions are transient, the proposed specification implies that sizable movements in historical bond yields and inflation are attributable to perceptions of permanent shocks to target inflation.
This paper investigates the role that imperfect knowledge about the structure of the economy plays in the formation of expectations, macroeconomic dynamics, and the efficient formulation of monetary policy. Economic agents rely on an adaptive learning technology to form expectations and to update continuously their beliefs regarding the dynamic structure of the economy based on incoming data. The process of perpetual learning introduces an additional layer of dynamic interaction between monetary policy and economic outcomes. We find that policies that would be efficient under rational expectations can perform poorly when knowledge is imperfect. In particular, policies that fail to maintain tight control over inflation are prone to episodes in which the public's expectations of inflation become uncoupled from the policy objective and stagflation results, in a pattern similar to that experienced in the United States during the 1970s. Our results highlight the value of effective communication of a central bank's inflation objective and of continued vigilance against inflation in anchoring inflation expectations and fostering macroeconomic stability. July 2003.
Monetary policy is sometimes formulated in terms of a target level of inflation, a fixed time horizon and a constant interest rate that is anticipated to achieve the target at the specified horizon. These requirements lead to constant interest rate (CIR) instrument rules. Using the standard New Keynesian model, it is shown that some forms of CIR policy lead to both indeterminacy of equilibria and instability under adaptive learning. However, some other forms of CIR policy perform better. We also examine the properties of the different policy rules in the presence of inertial demand and price behaviour.
Escapist policy rules
(2003)
We study a simple, microfounded macroeconomic system in which the monetary authority employs a Taylor-type policy rule. We analyze situations in which the self-confirming equilibrium is unique and learnable according to Bullard and Mitra (2002). We explore the prospects for the use of 'large deviation' theory in this context, as employed by Sargent (1999) and Cho, Williams, and Sargent (2002). We show that our system can sometimes depart from the self-confirming equilibrium towards a non-equilibrium outcome characterized by persistently low nominal interest rates and persistently low inflation. Thus we generate events that have some of the properties of "liquidity traps" observed in the data, even though the policymaker remains committed to a Taylor-type policy rule which otherwise has desirable stabilization properties.
The development of tractable forward-looking models of monetary policy has led to an explosion of research on the implications of adopting Taylor-type interest rate rules. Indeterminacies have been found to arise for some specifications of the interest rate rule, raising the possibility of inefficient fluctuations due to the dependence of expectations on extraneous "sunspots". Separately, recent work by a number of authors has shown that sunspot equilibria previously thought to be unstable under private agent learning can in some cases be stable when the observed sunspot has a suitable time series structure. In this paper we generalize the "common factor" technique used in this analysis to examine standard monetary models that combine forward-looking expectations and predetermined variables. We consider a variety of specifications that incorporate both lagged and expected inflation in the Phillips Curve, and both expected inflation and inertial elements in the policy rule. We find that some policy rules can indeed lead to learnable sunspot solutions, and we investigate the conditions under which this phenomenon arises.
A financial system can only perform its function of channelling funds from savers to investors if it offers sufficient assurance to the providers of the funds that they will reap the rewards which have been promised to them. To the extent that this assurance is not provided by contracts alone, potential financiers will want to monitor and influence managerial decisions. This is why corporate governance is an essential part of any financial system. It is almost obvious that providers of equity have a genuine interest in the functioning of corporate governance. However, corporate governance encompasses more than investor protection. Similar considerations also apply to other stakeholders who invest their resources in a firm and whose expectations of later receiving an appropriate return on their investment also depend on decisions at the level of the individual firm which would be extremely difficult to anticipate and prescribe in a set of complete contingent contracts. Lenders, especially long-term lenders, are one such group of stakeholders who may also want to play a role in corporate governance; employees, especially those with high skill levels and firm-specific knowledge, are another. The German corporate governance system is different from that of the Anglo-Saxon countries because it provides for the possibility, and even the necessity, of integrating lenders and employees in the governance of large corporations. The German corporate governance system is generally regarded as the standard example of an insider-controlled and stakeholder-oriented system. Moreover, only a few years ago it was a consistent system in the sense of being composed of complementary elements which fit together well. The first objective of this paper is to show why and in what respects these characterisations were once appropriate.
However, the past decade has seen a wave of developments in the German corporate governance system, which make it worthwhile and indeed necessary to investigate whether German corporate governance has recently changed in a fundamental way. More specifically one can ask which elements and features of German corporate governance have in fact changed, why they have changed and whether those changes which did occur constitute a structural change which would have converted the old insider-controlled system into an outsider-controlled and shareholder-oriented system and/or would have deprived it of its former consistency. It is the second purpose of this paper to answer these questions. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
A rapidly growing literature has documented important improvements in volatility measurement and forecasting performance through the use of realized volatilities constructed from high-frequency returns coupled with relatively simple reduced-form time series modeling procedures. Building on recent theoretical results from Barndorff-Nielsen and Shephard (2003c,d) for related bi-power variation measures involving the sum of high-frequency absolute returns, the present paper provides a practical framework for non-parametrically measuring the jump component in realized volatility measurements. Exploiting these ideas for a decade of high-frequency five-minute returns for the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find the jump component of the price process to be distinctly less persistent than the continuous sample path component. Explicitly including the jump measure as an additional explanatory variable in an easy-to-implement reduced form model for realized volatility results in highly significant jump coefficient estimates at the daily, weekly and quarterly forecast horizons. As such, our results hold promise for improved financial asset allocation, risk management, and derivatives pricing, by separate modeling, forecasting and pricing of the continuous and jump components of total return variability.
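The decomposition described above - total realized variance minus jump-robust bi-power variation, truncated at zero - can be written down compactly. The following is a minimal sketch of that logic applied to hypothetical intraday returns; it omits the finite-sample corrections and jump significance tests used in the actual literature.

```python
# Sketch of the non-parametric jump decomposition of realized volatility,
# following the realized-variance / bi-power-variation logic. Inputs are
# hypothetical intraday (e.g. five-minute) returns for one day.
import math

def realized_variance(returns):
    """RV = sum of squared intraday returns (total return variation)."""
    return sum(r * r for r in returns)

def bipower_variation(returns):
    """BV = (pi/2) * sum of products of adjacent absolute returns.

    BV is robust to jumps, so RV - BV estimates the jump contribution.
    The factor pi/2 is 1/mu_1^2 with mu_1 = sqrt(2/pi).
    """
    return (math.pi / 2.0) * sum(
        abs(returns[j]) * abs(returns[j - 1]) for j in range(1, len(returns))
    )

def jump_component(returns):
    """Truncate at zero: the jump part of variation cannot be negative."""
    return max(realized_variance(returns) - bipower_variation(returns), 0.0)
```

A day of small, smooth returns yields a jump component of zero, while a single large return (a jump) inflates RV much more than BV, producing a positive jump measure - the quantity the paper then includes as an extra regressor in the reduced-form volatility model.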
While focusing on the protection of distressed sovereigns, the current debate on reforming the International Financial Architecture has hardly addressed the protection of creditors' rights, which varies across governing laws. I suspect, however, that this constitutes an essential determinant of the success of the suggested solutions, especially under the contractual approach. Based on a sample of bonds issued by developing-country states in the period January 1987 to December 1997, I find that, for given contract characteristics (e.g. listing markets and currency), the governing law is selected according to its ability to enforce repayment. However, although New York law seems looser and incurs larger enforcement costs than England & Wales law, the former permits equivalent yearly credit amounts. I interpret this as a consequence of the existence of a larger set of valuable assets (e.g. trade) in the US that constitute implicit securities. My findings yield important implications for the reforms. In particular, provided that there exists a seemingly equivalent enforcement credibility between England and New York laws, the prompt implementation of the contractual approach solution should constitute a valuable first step toward efficient sovereign debt markets. October 2003.
The paper offers an innovative contribution to the investigation of the pricing of banking liabilities contracted by sovereign agents. To address fundamental issues of banking, the study focuses on the determinants of up-front fees (the up-front fee is a charge paid out at the signature of the loan arrangement). The investigation is based on a uniquely extensive sample of bank loans contracted or guaranteed by 58 less-developed-country sovereigns in the period from 1983 to 1997. The well-detailed reports allow for the calculation of the equivalent yearly margin over the utilization period for each individual loan. The main findings suggest a significant impact of renegotiation and agency costs on front-end borrowing payments. Unlike the interest spread alone, the all-in interest margin better takes account of these costs. The model estimates suggest, however, that the non-linear pricing is hardly associated with an exogenous split-up intended by the borrower and his banker to cover up information. Instead, the up-front payment is a liquidity transfer, as described by Gorton and Kahn (2000), to compensate for renegotiation and monitoring costs. The second interesting result is that banks demand payment for all types of sovereign risk in the same manner as public debt holders do. The difference is that, unlike bond holders, bankers have the possibility to charge an up-front fee to compensate for renegotiation costs. Hence, beyond information-related issues, the greater complexity of the pricing design makes bank loans optimal for lenders on sovereign capital markets, especially relative to public debt, thus motivating their presence. The paper contributes to the expanding literature on loan syndication and banking-related issues. The study also has relevance for the investigation of developing countries' debt pricing.
We present an analysis of the VaR forecasts and P&L series of all 13 German banks that used internal models for regulatory purposes in the year 2001. To this end, we introduce the notion of well-behaved forecast systems. Furthermore, we provide a series of statistical tools to perform our analyses. The results shed light on the forecast quality of the VaR models of the individual banks, the regulator's portfolio as a whole, and the main ingredients of the computation of the regulatory capital required by the Basel rules.
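The paper's specific statistical tools are not spelled out in the abstract, but a standard building block for backtests of this kind is to count VaR exceedances and test whether their frequency matches the nominal level - for example with Kupiec's unconditional coverage likelihood-ratio test. A hedged sketch with hypothetical inputs:

```python
# Kupiec unconditional coverage test for a VaR forecast system (sketch;
# not necessarily one of the paper's tools). Inputs are hypothetical
# daily P&L figures and positive VaR forecasts at level p.
import math

def kupiec_lr(pnl, var_forecasts, p=0.01):
    """Return (number of exceedances, LR statistic).

    An exceedance occurs when the daily loss exceeds the VaR forecast,
    i.e. pnl < -var. Under correct coverage the LR statistic is
    asymptotically chi-squared with 1 degree of freedom.
    """
    n = len(pnl)
    x = sum(1 for loss, var in zip(pnl, var_forecasts) if loss < -var)
    phat = x / n

    def loglik(q):  # binomial log-likelihood, guarding log(0)
        ll = 0.0
        if n - x > 0:
            ll += (n - x) * math.log(1.0 - q)
        if x > 0:
            ll += x * math.log(q)
        return ll

    return x, -2.0 * (loglik(p) - loglik(phat))
```

If the observed exceedance frequency equals the nominal level, the LR statistic is zero; large values indicate a forecast system that is not well behaved in the unconditional coverage sense.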
We estimate a model with latent factors that summarize the yield curve (namely, level, slope, and curvature) as well as observable macroeconomic variables (real activity, inflation, and the stance of monetary policy). Our goal is to provide a characterization of the dynamic interactions between the macroeconomy and the yield curve. We find strong evidence of the effects of macro variables on future movements in the yield curve and much weaker evidence for a reverse influence. We also relate our results to a traditional macroeconomic approach based on the expectations hypothesis.
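One common way to give "level, slope, and curvature" concrete form is the Nelson-Siegel loading structure used in this strand of the literature. The sketch below is illustrative, with an assumed decay parameter; it is not the paper's estimated model, which adds dynamics and macro variables.

```python
# Nelson-Siegel style yield curve: three latent factors with fixed
# maturity loadings. The decay parameter lam = 0.0609 is an assumption
# (a value often used with maturities measured in months).
import math

def nelson_siegel_yield(tau, level, slope, curvature, lam=0.0609):
    """Model yield for maturity tau (in months) given the three factors.

    Loadings: level -> 1 (constant across maturities),
    slope -> (1 - e^{-lam*tau}) / (lam*tau)  (high at the short end),
    curvature -> slope loading minus e^{-lam*tau}  (humped in the middle).
    """
    decay = math.exp(-lam * tau)
    slope_loading = (1.0 - decay) / (lam * tau)
    return level + slope * slope_loading + curvature * (slope_loading - decay)
```

Because the slope and curvature loadings die out at long maturities, the long end of the fitted curve is governed almost entirely by the level factor - which is what makes the factor interpretation, and its link to macro variables such as inflation, natural.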
Using the Johansen test for cointegration, we examine to what extent inflation rates in the euro area have converged after the introduction of the single currency. Since the assumption of non-stationary variables represents the pivotal point in cointegration analyses, we pay special attention to the appropriate identification of non-stationary inflation rates by applying six different unit root tests. We compare two periods, the first ranging from 1993 to 1998 and the second from 1993 to 2002, with monthly observations. The Johansen test finds only partial convergence for the former period and no convergence for the latter.
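The unit-root screening step described above can be illustrated with a bare-bones Dickey-Fuller regression: regress the first difference of a series on its lagged level plus an intercept and inspect the t-statistic on the lagged level. This is a toy sketch (no lag augmentation, no Dickey-Fuller critical values), not one of the paper's six tests.

```python
# Toy Dickey-Fuller statistic: diff(y)_t = a + rho * y_{t-1} + e_t.
# A strongly negative t-statistic on rho speaks against a unit root.

def dickey_fuller_t(y):
    """t-statistic on rho from an OLS regression with intercept."""
    x = y[:-1]                                   # lagged levels
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    n = len(x)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, dy))
    rho = sxy / sxx                              # OLS slope
    a = my - rho * mx                            # OLS intercept
    resid = [yi - a - rho * xi for xi, yi in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)     # residual variance
    return rho / (s2 / sxx) ** 0.5
```

For a stationary series the statistic is strongly negative; for a random walk it hovers near zero, which is the pattern the formal unit root tests exploit before any cointegration analysis.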
Financial markets are to a very large extent influenced by the advent of information. Such disclosures, however, do not only contain information about the fundamentals underlying the markets; they also serve as a focal point for the beliefs of market participants. This dual role of information gains further importance for explaining the development of asset valuations when taking into account that information may be perceived individually (private information) or may be commonly shared by all traders (public information). This study investigates the recently developed theoretical structures explaining the operating mechanism of the two types of information and emphasizes the empirical testability of, and differentiation between, the role of private and public information. Concluding from a survey of experimental studies and our own econometric analyses, it is argued that public information most often dominates private information. This finding justifies central bankers' unease when disseminating news to the markets and argues against the recent trend of demanding full transparency both for financial institutions and for financial markets themselves.
The paper describes the legal and economic environment of mergers and acquisitions in Germany and explores barriers to obtaining and executing corporate control. Various cases are used to demonstrate that resistance by different stakeholders, including minority shareholders, organized labour and the government, may present powerful obstacles to takeovers in Germany. In spite of the overall convergence of European takeover and securities trading laws, Germany still shows many peculiarities that make its market for corporate control distinct from that of other countries. Concentrated share ownership, cross-shareholdings and pyramidal ownership structures are frequent barriers to acquiring majority stakes. Codetermination laws, the supervisory board structure and supermajority requirements for important corporate decisions limit the execution of control by majority shareholders. Bidders that disregard the German preference for consensual solutions and the specific balance of powers risk having their takeover attempt frustrated by opposing influence groups. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
This paper is a draft for the chapter "German banks and banking structure" of the forthcoming book "The German financial system" edited by J.P. Krahnen and R.H. Schmidt (Oxford University Press). As such, the paper starts out with a description of past and present structural features of the German banking industry. Given the presented empirical evidence it then argues that great care has to be taken when generalising structural trends from one financial system to another. Whilst conventional commercial banking is clearly in decline in the US, it is far from clear whether the dominance of banks in the German financial system has been significantly eroded over the last decades. We interpret the immense stability in intermediation ratios and financing patterns of firms between 1970 and 2000 as strong evidence for our view that the way in which and the extent to which German banks fulfil the central functions for the financial system are still consistent with the overall logic of the German financial system. In spite of the current dire business environment for financial intermediaries we do not expect the German financial system and its banking industry as an integral part of this system to converge to the institutional arrangements typical for a market-oriented financial system.
We present a survey of the role of initial public offerings (IPOs) and venture capital (VC) in Germany after the Second World War. Between 1945 and 1983 IPOs hardly played a role at all, and only a minor role thereafter. In addition, companies that chose an IPO were much older and larger than the average companies going public for the first time in the US or the UK. The level of IPO underpricing in Germany, in contrast, has not been fundamentally different from that in other countries. The picture for venture capital financing is not much different from that provided by IPOs in Germany. For a long time venture capital financing was hardly significant, particularly as a source of early-stage financing. The unprecedented boom on the Neuer Markt between 1997 and 2000, when many small venture-capital-financed firms entered the market, provides a striking contrast to the preceding era. However, by US standards, the levels of both IPO and venture capital activity remained rather low even in this boom phase. The extent to which recent developments will have a lasting impact on the financing of German firms, the level of IPO activity, and venture capital financing remains to be seen. At the time of writing, activity has come to a near standstill and the Neuer Markt has just been dissolved. The low number of IPOs and the fairly low volume of VC financing in Germany before the introduction of the Neuer Markt are a striking and much debated phenomenon. Understanding the reasons for these apparent peculiarities is vital to understanding the German financial system. The potential explanations that have been put forward range from differences in mentality to legal and institutional impediments and the availability of alternative sources of financing. Moreover, the recent literature discusses how interest groups may have benefited from and influenced the situation. These groups include politicians, unions/workers, managers/controlling owners of established firms as well as banks.
Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
We analyze the venture capitalist's decision on the timing of the IPO, the offer price and the fraction of shares he sells in the course of the IPO. A venture capitalist may decide to take a company public or to liquidate it after one or two financing periods. A longer participation by the venture capitalist in a firm (a later IPO) may increase its value while also increasing his costs. Due to his active involvement, the venture capitalist knows the type of firm and the kind of project he finances before potential new investors do. This information asymmetry is resolved at the end of the second period. Under certain assumptions about the parameters and the structure of the model, we obtain a single equilibrium in which high-quality firms separate from low-quality firms. The latter are liquidated after the first period, while the former go public either after having been financed by the venture capitalist for two periods or after one financing period using a lock-up. Whether a strategy of one or two financing periods is chosen depends on the consulting intensity of the project and/or on the experience of the venture capitalist. In the separating equilibrium, the offer price corresponds to the true value of the firm. An earlier version of this paper appeared as: The Decision of Venture Capitalists on Timing and Extent of IPOs (ZEW Discussion Paper No. 03-12). This version July 2003.
Using a unique, hand-collected database of all venture-backed firms listed on Germany's Neuer Markt, we analyze the history of venture capital financing of these firms before the IPO and the behavior of venture capitalists at the IPO. We detect significant differences in the behavior and characteristics of German vs. foreign venture capital firms. The discrepancy in investment and divestment strategies may be explained by the grandstanding phenomenon, the value-added hypothesis and certification issues. German venture capitalists are typically younger and smaller than their counterparts from abroad. They syndicate less. The sectoral structure of their portfolios differs from that of foreign venture capital firms. We also find that German venture capitalists typically take companies with lower offering volumes to the market. They usually finance firms at a later stage, carry through fewer investment rounds and take their portfolio firms public earlier. In companies where a German firm is the lead venture capitalist, the fraction of equity held by the group of venture capitalists is lower, their selling intensity at the IPO is higher and the committed lock-up period is longer.
This paper deals with the proposed use of sovereign credit ratings in the "Basel Accord on Capital Adequacy" (Basel II) and considers its potential effect on emerging market financing. As a first attempt, it investigates the consequences of the planned revisions for the two central aspects of international bank credit flows: the impact on capital costs and the volatility of credit supply across the risk spectrum of borrowers. The empirical findings cast doubt on the usefulness of credit ratings in determining commercial banks' capital adequacy ratios, since the standardized approach to credit risk would lead to more divergence rather than convergence between investment-grade and speculative-grade borrowers. This conclusion is based on the lateness and cyclical determination of credit rating agencies' sovereign risk assessments and the continuing incentives for short-term rather than long-term interbank lending ingrained in the proposed Basel II framework.
Do changes in sovereign credit ratings contribute to financial contagion in emerging market crises?
(2003)
Credit rating changes for long-term foreign currency debt may act as a wake-up call, with upgrades and downgrades in one country affecting other financial markets within and across national borders. Such a potential (contagious) rating effect is likely to be stronger in emerging market economies, where institutional investors' problems of asymmetric information are more pronounced. This empirical study complements earlier research by explicitly examining cross-security and cross-country contagious rating effects of credit rating agencies' sovereign risk assessments. In particular, the specific impact of sovereign rating changes during the financial turmoil in emerging markets in the latter half of the 1990s has been examined. The results indicate that sovereign rating changes in a ground-zero country have a (statistically) significant impact on the financial markets of other emerging market economies, although the spillover effects tend to be regional.
Accounting for financial instruments in the banking industry: conclusions from a simulation model
(2003)
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature.
Some of the most widely expressed myths about the German financial system concern the close ties and intensive interaction between banks and firms, often described as Hausbank relationships. Links between banks and firms include direct shareholdings, board representation, and proxy voting and are particularly significant for corporate governance. Allegedly, these relationships promote investment and improve the performance of firms. Furthermore, German universal banks are believed to play a special role as large and informed monitoring investors (shareholders). However, for the very same reasons, German universal banks are frequently accused of abusing their influence on firms by exploiting rents and sustaining the entrenchment of firms against efficient transfers of firm control. In this paper, we review recent empirical evidence regarding the special role of banks in the corporate governance of German firms. We differentiate throughout between large exchange-listed firms and small and medium-sized companies. With respect to the role of banks as monitoring investors, the evidence does not unanimously support a special role of banks for large firms. Only one study finds that banks' control of management goes beyond what non-bank shareholders achieve. Proxy-voting rights apparently do not provide a significant means for banks to exert management control. Most of the recent evidence regarding small firms suggests that a Hausbank relationship can indeed be beneficial. Hausbanks are more willing to sustain financing when borrower quality deteriorates, and they invest more often than arm's-length banks in workouts if borrowers face financial distress.
In Germany a public discussion on the "power of banks" has been going on for decades now, with power having at least two meanings. On the one hand it is the power of banks to control public corporations through direct shareholdings or the exercise of proxy votes - this is the power of banks in corporate control. On the other hand it is market power - due to imperfect competition in markets for financial services - that banks exercise vis-à-vis their loan and deposit customers. In the past, bank regulation has often been blamed for undermining competition and the working of market forces in the financial industry for the sake of the soundness and stability of financial services firms. This chapter tries to shed some light on the historical development and current state of bank regulation in Germany. In so doing it tries to embed the analysis of bank regulation into a more general industrial organisation framework. For every regulated industry, competition and regulation are deeply interrelated, as most regulatory institutions - even if they do not explicitly address the competitiveness of the market - affect either market structure or conduct. This paper tries to uncover some of the specific relationships between monetary policy, government interference and bank regulation on the one hand and bank market structure and economic performance on the other. In so doing we hope to point to several areas for fruitful research in the future. While our focus is on Germany, some of the questions that we raise and some of our insights might also be applicable to banking systems elsewhere. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
The experience during and after the Asian crisis of 1997-98 has provoked an extensive debate about the credit rating agencies' evaluation of sovereign risk in emerging markets lending. This study analyzes the role of credit rating agencies in international financial markets, particularly whether sovereign credit ratings have an impact on financial stability in emerging market economies. The event study and panel regression results indicate that credit rating agencies have substantial influence on the size and volatility of emerging markets lending. The empirical results are significantly stronger for governments' downgrades and negative imminent sovereign credit rating actions, such as credit watches and rating outlooks, than for positive adjustments by the credit rating agencies, while sovereign credit rating changes already anticipated by market participants have a smaller impact on financial markets in emerging economies.
The German financial system is the archetype of a bank-dominated system. This implies that organized equity markets are, in some sense, underdeveloped. The purpose of this paper is, first, to describe the German equity markets and, second, to analyze whether they are underdeveloped in any meaningful sense. In the descriptive part we provide a detailed account of the microstructure of the German equity markets, putting special emphasis on recent developments. When comparing the German market with its peers, we find that it is indeed underdeveloped with respect to market capitalization. In terms of liquidity, on the other hand, the German equity market is not generally underdeveloped. It does, however, lack a liquid market for block trading. Classification: G51. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
This chapter analyzes the role of financial accounting in the German financial system. It starts from the common perception that German accounting is rather "uninformative". This characterization is appropriate from the perspective of an arm's-length or outside investor and when confined to the financial statements per se. But it is no longer accurate when a broader perspective is adopted. The German accounting system exhibits several arrangements that privately communicate information to insiders, notably the supervisory board. Due to these features, the key financing and contracting parties seem reasonably well informed. The same cannot be said about outside investors relying primarily on public disclosure. A descriptive analysis of the main elements of the German system and a survey of extant empirical accounting research generally support these arguments.
The paper explores factors that influence the design of financing contracts between venture capital investors and European venture capital funds. 122 Private Placement Memoranda and 46 Partnership Agreements are investigated with respect to the use of covenant restrictions and compensation schemes. The analysis focuses on the impact of two key factors: the reputation of VC funds and changes in the overall demand for venture capital services. We find that established funds are more severely restricted by contractual covenants. This contradicts the conventional wisdom, which assumes that established market participants care more about their reputation, have less incentive to behave opportunistically and therefore need fewer covenant restrictions. We also find that managers of established funds are more often obliged to invest their own capital alongside investors' money. We interpret this as evidence that established funds actually have less reason to care about their reputation than young funds. One reason for this surprising result could be that managers of established VC funds are older and closer to retirement and therefore put less weight on the effects of their actions on future business opportunities. We also explore the effects of venture capital supply on contract design. Gompers and Lerner (1996) show that VC funds in the US are able to reduce the number of restrictive covenants in years with a high supply of venture capital and interpret this as a result of increased bargaining power by VC funds. We do not find similar evidence for Europe. Instead, we find that VC funds receive less base compensation and higher performance-related compensation in years with strong capital inflows into the VC industry. This may be interpreted as a signal of overconfidence: strong investor demand seems to coincide with overoptimistic expectations by fund managers, which makes them willing to accept higher-powered incentive schemes.
Price stability and monetary policy effectiveness when nominal interest rates are bounded at zero
(2003)
This paper employs stochastic simulations of a small structural rational expectations model to investigate the consequences of the zero bound on nominal interest rates. We find that if the economy is subject to stochastic shocks similar in magnitude to those experienced in the U.S. over the 1980s and 1990s, the consequences of the zero bound are negligible for target inflation rates as low as 2 percent. However, the effects of the constraint are non-linear with respect to the inflation target and produce a quantitatively significant deterioration of the performance of the economy with targets between 0 and 1 percent. The variability of output increases significantly and that of inflation also rises somewhat. Also, we show that the asymmetry of the policy ineffectiveness induced by the zero bound generates a non-vertical long-run Phillips curve. Output falls increasingly short of potential with lower inflation targets.
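The nonlinearity described in this abstract can be illustrated with a crude back-of-the-envelope exercise (my construction, not the paper's structural model): draw inflation and output-gap shocks around a given target and count how often a notional Taylor-rule rate would fall below zero. All parameter values below are illustrative assumptions.

```python
import random

def zero_bound_frequency(pi_target, r_star=2.0, n=100_000, seed=7):
    """Share of periods in which a notional Taylor-rule rate is negative.

    Illustrative only: inflation and the output gap are drawn as
    independent Gaussian shocks around the target, not from a
    structural model. All parameters are assumptions.
    """
    rng = random.Random(seed)
    binding = 0
    for _ in range(n):
        pi = pi_target + rng.gauss(0.0, 1.5)   # inflation around its target
        gap = rng.gauss(0.0, 2.0)              # output gap
        i_star = r_star + pi + 0.5 * (pi - pi_target) + 0.5 * gap
        if i_star < 0.0:                       # the zero bound would bind
            binding += 1
    return binding / n

for target in (0.0, 1.0, 2.0, 4.0):
    freq = zero_bound_frequency(target)
    print(f"target {target:.0f}%: bound binds in {freq:.1%} of draws")
```

Consistent with the abstract, the frequency with which the bound binds rises sharply as the inflation target approaches zero, while it is negligible for higher targets.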
We study optimal nominal demand policy in an economy with monopolistic competition and flexible prices when firms have imperfect common knowledge about the shocks hitting the economy. Parametrizing firms' information imperfections by a (Shannon) capacity parameter that constrains the amount of information flowing to each firm, we study how policy that minimizes a quadratic objective in output and prices depends on this parameter. When price setting decisions of firms are strategic complements, for a large range of capacity values optimal policy nominally accommodates mark-up shocks in the short-run. This finding is robust to the policy maker observing shocks imperfectly or being uncertain about firms' capacity parameter. With persistent mark-up shocks accommodation may increase in the medium term, but decreases in the long-run thereby generating a hump-shaped price response and a slow reduction in output. Instead, when prices are strategic substitutes, policy tends to react restrictively to mark-up shocks. However, rational expectations equilibria may then not exist with small amounts of imperfect common knowledge.
In this study a regime switching approach is applied to estimate the chartist and fundamentalist (c&f) exchange rate model originally proposed by Frankel and Froot (1986). The c&f model is tested against alternative regime switching specifications using likelihood ratio tests. Nested atheoretical models like the popular segmented trends model suggested by Engel and Hamilton (1990) are rejected in favour of the multi-agent model. Moreover, the c&f regime switching model seems to describe the data much better than a competing regime switching GARCH(1,1) model. Finally, our findings proved relatively robust when the model was estimated in subsamples. The empirical results suggest that the model is able to explain daily DM/Dollar forward exchange rate dynamics from 1982 to 1998.
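Two-regime Markov-switching models of the kind tested in this study are typically estimated with Hamilton's filter. The sketch below evaluates the log-likelihood of a two-state Gaussian switching-mean model on synthetic data; the data-generating process and all parameter values are illustrative assumptions, not the chartist-and-fundamentalist specification itself.

```python
import math
import random

def hamilton_loglik(data, mu, sigma, p_stay):
    """Log-likelihood of a 2-state Markov-switching Gaussian model via
    Hamilton's filter. mu/sigma hold per-state means and standard
    deviations; p_stay holds the two diagonal transition probabilities."""
    P = [[p_stay[0], 1 - p_stay[0]],
         [1 - p_stay[1], p_stay[1]]]
    # start from the ergodic distribution of the 2-state chain
    pi0 = (1 - p_stay[1]) / (2 - p_stay[0] - p_stay[1])
    prob = [pi0, 1 - pi0]
    loglik = 0.0
    for x in data:
        # one-step-ahead state probabilities
        pred = [sum(prob[i] * P[i][j] for i in range(2)) for j in range(2)]
        dens = [math.exp(-0.5 * ((x - mu[j]) / sigma[j]) ** 2)
                / (sigma[j] * math.sqrt(2 * math.pi)) for j in range(2)]
        lik = sum(pred[j] * dens[j] for j in range(2))
        loglik += math.log(lik)
        prob = [pred[j] * dens[j] / lik for j in range(2)]  # filtering step
    return loglik

# synthetic regime-switching series (illustrative, not exchange-rate data)
rng = random.Random(0)
sample = [rng.gauss(0.5 if t % 200 < 100 else -0.5, 1.0) for t in range(400)]
print(hamilton_loglik(sample, mu=(0.5, -0.5), sigma=(1.0, 1.0), p_stay=(0.95, 0.95)))
```

In a full estimation this log-likelihood would be maximized over the means, variances, and transition probabilities; likelihood ratio tests between nested specifications then compare the maximized values.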
We develop a behavioral exchange rate model with chartists and fundamentalists to study cyclical behavior in foreign exchange markets. Within our model, the market impact of fundamentalists depends on the strength of their belief in fundamental analysis. Estimation of a STAR GARCH model shows that the more the exchange rate deviates from its fundamental value, the more fundamentalists leave the market. In contrast to previous findings, our paper indicates that due to the nonlinear presence of fundamentalists, market stability decreases with increasing misalignments. A stabilization policy such as central bank interventions may help to deflate bubbles.
In this paper we study the role of the exchange rate in conducting monetary policy in an economy with near-zero nominal interest rates as experienced in Japan since the mid-1990s. Our analysis is based on an estimated model of Japan, the United States and the euro area with rational expectations and nominal rigidities. First, we provide a quantitative analysis of the impact of the zero bound on the effectiveness of interest rate policy in Japan in terms of stabilizing output and inflation. Then we evaluate three concrete proposals that focus on depreciation of the currency as a way to ameliorate the effect of the zero bound and evade a potential liquidity trap. Finally, we investigate the international consequences of these proposals.
In this paper we estimate a small model of the euro area to be used as a laboratory for evaluating the performance of alternative monetary policy strategies. We start with the relationship between output and inflation and investigate the fit of the nominal wage contracting model due to Taylor (1980) and three different versions of the relative real wage contracting model proposed by Buiter and Jewitt (1981) and estimated by Fuhrer and Moore (1995a) for the United States. While Fuhrer and Moore reject the nominal contracting model in favor of the relative contracting model which induces more inflation persistence, we find that both models fit euro area data reasonably well. When considering France, Germany and Italy separately, however, we find that the nominal contracting model fits German data better, while the relative contracting model does quite well in countries which transitioned out of a high inflation regime such as France and Italy. We close the model by estimating an aggregate demand relationship and investigate the consequences of the different wage contracting specifications for the inflation-output variability tradeoff, when interest rates are set according to Taylor's rule.
In this study, we perform a quantitative assessment of the role of money as an indicator variable for monetary policy in the euro area. We document the magnitude of revisions to euro area-wide data on output, prices, and money, and find that monetary aggregates have a potentially significant role in providing information about current real output. We then proceed to analyze the information content of money in a forward-looking model in which monetary policy is optimally determined subject to incomplete information about the true state of the economy. We show that monetary aggregates may have substantial information content in an environment with high variability of output measurement errors, low variability of money demand shocks, and a strong contemporaneous linkage between money demand and real output. As a practical matter, however, we conclude that money has fairly limited information content as an indicator of contemporaneous aggregate demand in the euro area.
We investigate the performance of forecast-based monetary policy rules using five macroeconomic models that reflect a wide range of views on aggregate dynamics. We identify the key characteristics of rules that are robust to model uncertainty: such rules respond to the one-year-ahead inflation forecast and to the current output gap and incorporate a substantial degree of policy inertia. In contrast, rules with longer forecast horizons are less robust and are prone to generating indeterminacy. Finally, we identify a robust benchmark rule that performs very well in all five models over a wide range of policy preferences.
Inflation-targeting central banks have only imperfect knowledge about the effect of policy decisions on inflation. An important source of uncertainty is the relationship between inflation and unemployment. This paper studies optimal monetary policy in the presence of uncertainty about the natural unemployment rate, the short-run inflation-unemployment tradeoff and the degree of inflation persistence in a simple macroeconomic model, which incorporates rational learning by the central bank as well as private sector agents. Two conflicting motives drive the optimal policy. In the static version of the model, uncertainty provides a motive for the policymaker to move more cautiously than she would if she knew the true parameters. In the dynamic version, uncertainty also motivates an element of experimentation in policy. I find that the optimal policy that balances the cautionary and activist motives typically exhibits gradualism, that is, it still remains less aggressive than a policy that disregards parameter uncertainty. Exceptions occur when uncertainty is very high and inflation is close to target.
The use of GARCH models with stable Paretian innovations in financial modeling has been recently suggested in the literature. This class of processes is attractive because it allows for conditional skewness and leptokurtosis of financial returns without ruling out normality. This contribution illustrates their usefulness in predicting the downside risk of financial assets in the context of modeling foreign exchange rates and demonstrates their superiority over the use of normal or Student's t GARCH models.
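A minimal sketch of the kind of process discussed in this abstract: a GARCH(1,1)-type recursion driven by symmetric alpha-stable innovations, drawn with the Chambers-Mallows-Stuck method. All parameter values are illustrative assumptions, and the recursion uses absolute rather than squared residuals so that it has finite expectation for alpha > 1; this is one common way stable-GARCH models are specified, not necessarily the exact variant in the literature cited.

```python
import math
import random

def stable_symmetric(alpha, rng):
    """Symmetric alpha-stable draw via the Chambers-Mallows-Stuck method."""
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(v)  # Cauchy special case
    return (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
            * (math.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))

def simulate_stable_garch(n, alpha=1.8, omega=0.05, a1=0.1, b1=0.85, seed=1):
    """GARCH(1,1)-type recursion on the conditional scale, driven by
    symmetric alpha-stable innovations. Absolute (not squared) residuals
    are used so the scale recursion is well behaved for alpha > 1.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    sigma, returns = omega / (1 - a1 - b1), []
    for _ in range(n):
        r = sigma * stable_symmetric(alpha, rng)
        returns.append(r)
        sigma = omega + a1 * abs(r) + b1 * sigma  # conditional scale update
    return returns

rets = simulate_stable_garch(5000)
var_99 = sorted(rets)[int(0.01 * len(rets))]  # empirical 1% quantile (VaR)
print(f"1% value-at-risk estimate: {var_99:.3f}")
```

Setting alpha = 2 recovers Gaussian innovations; values below 2 produce the heavier tails that make such models attractive for downside-risk prediction.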
Learning and equilibrium selection in a monetary overlapping generations model with sticky prices
(2003)
We study adaptive learning in a monetary overlapping generations model with sticky prices and monopolistic competition for the case where learning agents observe current endogenous variables. Observability of current variables is essential for informational consistency of the learning setup with the model setup, but it generates multiple temporary equilibria when prices are flexible and prevents a straightforward construction of the learning dynamics. Sticky prices overcome this problem by avoiding simultaneity between prices and price expectations. Adaptive learning then robustly selects the determinate (monetary) steady state independently of the degree of imperfect competition. The indeterminate (non-monetary) steady state and non-stationary equilibria are never stable. Stability in a deterministic version of the model may differ because perfect foresight equilibria can be the limit of restricted perceptions equilibria of the stochastic economy with vanishing noise and thereby inherit different stability properties. This discontinuity at the zero variance of shocks suggests analyzing learning in stochastic models.
This paper compares Bayesian decision theory with robust decision theory, where the decision maker optimizes with respect to the worst state realization. For a class of robust decision problems there exists a sequence of Bayesian decision problems whose solution converges towards the robust solution. It is shown that the limiting Bayesian problem displays infinite risk aversion and that decisions are insensitive (robust) to the precise assignment of prior probabilities. This holds independently of whether the preference for robustness is global or restricted to local perturbations around some reference model.
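The convergence claimed here can be checked numerically in a toy problem (my construction, not the paper's formal setup): quadratic loss over two states, with the Bayesian objective computed under an exponential transformation whose parameter gamma indexes risk aversion. As gamma grows, the Bayesian-optimal action approaches the minimax action, whatever the prior.

```python
import math

def bayes_action(prior, gamma, states=(0.0, 1.0), step=0.001):
    """Grid-search minimizer of E[exp(gamma * (a - s)^2)] under `prior`.
    gamma indexes risk aversion; large gamma mimics the minimax criterion.
    Toy example with two states and quadratic loss (an assumption)."""
    best_a, best_val = 0.0, float("inf")
    a = 0.0
    while a <= 1.0:
        val = sum(p * math.exp(gamma * (a - s) ** 2)
                  for p, s in zip(prior, states))
        if val < best_val:
            best_a, best_val = a, val
        a += step
    return best_a

# the minimax action 0.5 equalizes the two losses (a - 0)^2 and (a - 1)^2
for gamma in (1.0, 50.0, 500.0):
    act = bayes_action((0.9, 0.1), gamma)
    print(f"gamma={gamma:6.1f}: optimal action {act:.3f}")
```

With a strongly asymmetric prior the low-risk-aversion action stays near the likelier state, while the high-gamma action is pulled toward 0.5 and becomes insensitive to the prior, matching the insensitivity result stated in the abstract.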