Working Paper
This paper employs individual bidding data to analyze the empirical performance of the longer term refinancing operations (LTROs) of the European Central Bank (ECB). We investigate how banks' bidding behavior is related to exogenous variables such as collateral costs, interest rate expectations, and market volatility, as well as to individual bank characteristics like country of origin, size, and experience. Panel regressions reveal that a bank's bidding depends on bank characteristics. Yet, different bidding behavior generally does not translate into differences in bidder success. In contrast to the ECB's main refinancing operations, we find evidence for the winner's curse effect in LTROs. Our results indicate that LTROs lead neither to market distortions nor to unfair auction outcomes. JEL classification: E52, D44
Even though tourism is recognised today as an important field for transnational research, there have been few attempts to place tourism in the context of transnational theories or to think about transnationalism from the perspective of tourists. I argue that researching tourist practices can add important aspects to transnational approaches. Mobility and interaction, for example, are the features backpackers choose to describe what their round-the-world trip is about. A form of tourism is adopted, or created, that itself confronts many aspects of globalisation. First of all, there is the immense dynamic involved: backpackers try to cover as many places and experiences as possible, travelling at high speed. They take in all kinds of touristic experiences, ranging from beach to adventure to cultural tourism. They do not focus on a specific area or country but travel the world, crossing national borders perpetually. Additionally, they form a transnational network in which they interact with strangers of similar backgrounds (other backpackers, tourism professionals). This network helps them interact with people from different backgrounds (the so-called hosts or locals). Drawing on my research, I argue that backpackers forge a particular identity from these transnational practices, which I call globedentity. Globedentity denotes a type of identity construction that refers not only to the individual (I) but reflects the world (globe) in this identity. This globedentity is not fixed but is perpetually re-created and re-defined. It also embraces the growing popular awareness of globalisation, with which backpackers, coming from highly educated middle-class backgrounds, have identified in particular. Constantly aware of the latest global social, cultural and economic developments in these educated milieus, they know exactly which tools to use to become successful members of their societies.
Taxation and tax policy reform appear on the political agenda in most advanced welfare states in Europe and North America. Of course, studies of taxation and tax policy are nothing new and have existed ever since people began paying taxes. The current work is situated in the context of the future of the welfare state and the reinforced international economic and political integration referred to as "globalization." The purpose of this paper is to analyze how globalization is affecting tax policy in advanced welfare states. In comparing the evolution of tax policy in Canada with that in the United States, Germany and Sweden from 1960 to 1995, I review the conventional antiglobalization thesis, i.e., that globalization leads to a "race to the bottom" in revenue and expenditure policies, or, as others have called it, a "beggar the neighbour policy" (Tanzi and Bovenberg 1990, 187). ... Conclusion: The empirical data and theoretical models clearly show that globalization is one relatively minor factor among many that explain tax policy reforms. And even that limited influence is mediated by domestic political systems, institutions and constellations of actors. As the data have shown, the conventional globalization thesis of a race to the bottom is not borne out. Tax rates and tax revenues are still increasing, despite the ongoing trend toward international trade integration. Countervailing pressures, such as the high cost of welfare programs, different parties in government, strong labour unions, and institutional veto players, counteract the pressure of globalization on tax policy. As for the future of taxation in Canada, it is more likely to be one of gradual evolution than radical change. Although the data do not show any downward pressure on tax rates and tax revenues in comparative terms, there are at least four key factors in Canada that are likely to put pressure on future tax rates, although regional political dynamics and the workings of fiscal federalism suggest that tax reductions will be a higher priority in some provinces than in others (Hale 2002). First, neoliberalism will continue to shape fiscal and tax policy, including the role of the tax system in delivering social policies and programs, in most parts of Canada. Second, governments that seek to define their own economic and social priorities rather than simply react to events beyond their borders will have to exercise centralized control over budgetary policies and spending levels if they hope to foster the economic growth needed to finance social services in the context of Canada's changing demographics. Third, the ability of governments to promote economic growth and higher living standards will be closely linked to their ability to develop a workable division of responsibilities among federal and provincial governments and with other national governments. Finally, the diffusion of new technologies will continue to transform national and regional economies while giving individuals greater opportunity to avoid government and tax regulations that run contrary to their perceived interests and values. This discussion of the determinants that shape tax policy reform has shown that successful management of fiscal and tax policy requires a capacity to set priorities; adapt to changing circumstances; and build a consensus that enables competing economic, social, regional and ideological interests to identify their own well-being in the broader political and economic environment.
Tax policy is shaped by many political, economic and social determinants. As Geoffrey Hale correctly concludes, "it should not be surprising if the tax system stubbornly refuses to confirm either economic theories or political ideologies, but reflects past decisions and the policy tradeoffs of the political process" (2002, 71). The notion of tax policy being driven by globalization and forces associated with globalization (both positive and negative) is simply not borne out by the facts.
This study analyzes the social policy reforms in the United States and Canada during the 1990s in comparative perspective. Particular attention is paid to the role of tax policy instruments in these reforms, and to the question of whether a new type of welfare state is emerging here. The first part of the paper outlines the model of the liberal welfare state established in comparative welfare state research, against which the reforms in the United States and Canada are then examined and compared. Subsequently, the output performance of the two welfare states is analyzed in a broader comparative perspective. The primary normative criterion applied is the redistributive function of social policy instruments, understood here chiefly as income redistribution.
At the beginning of the 21st century, the state of US democracy is controversially debated. While some observers claim to detect an excessive responsiveness of the political system to the demands of its citizens, and therefore speak of demosclerosis and a hyperdemocracy in which the popular will has been elevated to an untouchable, quasi-divine rank, others conclude that the Founding Fathers, guided by their fear of a "tyranny of the majority", did a thorough job and created an almost insurmountable system of veto positions that structurally favors particular interests and therefore translates citizens' majority preferences into policy only in exceptional situations. In short: the Federalists' fear of a "tyranny of the majority" is said to have opened the door to a "tyranny of the minority". The article attempts to locate the United States within this spectrum. Its aim is to problematize the quality of American democracy at the beginning of the 21st century. Developments after September 11 are also taken into account.
This study evaluates the effects of job creation schemes (Arbeitsbeschaffungsmaßnahmen, ABM) in Germany on the participants' individual probabilities of integration into regular employment. The analysis draws on a comprehensive and informative data set from the sources of the Federal Employment Agency (Bundesagentur für Arbeit, BA), which makes it possible to examine programme effects differentiated by individual participant characteristics and with due regard to the heterogeneous structure of the labour market. The data set contains information on all ABM participants who started their schemes in February 2000, and on a control group of non-participants who were unemployed in January 2000 and did not enter the programmes in February 2000. Drawing on the employment statistics, it is possible here for the first time to study transitions into regular employment on the basis of administrative data. The observation period extends to December 2002. Using matching methods based on the potential outcomes approach, the effects of ABM are estimated with regional differentiation and for particular problem and target groups of the labour market. Although the results show clear differences in the effects across subgroups, the empirical findings overall indicate that the goal of integration into regular, unsubsidized employment was largely not achieved by ABM. JEL: C40, C13, J64, H43, J68
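The matching approach used in such evaluations can be illustrated with a minimal sketch. The code below, a simple propensity-score matching estimator on simulated placeholder data (all variable names and numbers are hypothetical, not the study's actual specification), shows the three basic steps: estimate participation probabilities, match participants to comparable non-participants, and compare outcomes.

```python
# Sketch of a propensity-score matching estimator of the treatment effect on
# the treated (ATT), as used in programme evaluation studies of this kind.
# All variable names and data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                              # covariates (age, region, ...)
treated = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))     # selection into the programme
# outcome with a true treatment effect of 0.5 (hypothetical DGP)
y = 0.5 * treated + X @ np.array([0.2, 0.1, 0.0, -0.1]) + rng.normal(size=n)

# Step 1: estimate propensity scores P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to its nearest control on the propensity score.
controls = ~treated
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# Step 3: ATT = mean outcome difference between treated and matched controls.
att = y[treated].mean() - y[controls][idx.ravel()].mean()
print(f"estimated ATT: {att:.3f}")
```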
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
Open source projects produce goods or standards that do not allow those who contribute to their production to appropriate private returns. In this paper we analyze why programmers nevertheless invest their time and effort to write open source software. We argue that the particular way in which open source projects are managed, and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that mediocre programmers cannot match. By setting themselves apart, they can turn this signal into monetary rewards that correspond to their superior capabilities. Given this incentive, they will forgo the immediate rewards they could earn in software companies that produce proprietary software by restricting access to their products' source code. Whenever institutional arrangements are in place that enable the acquisition of such a signal and its subsequent conversion into monetary rewards, contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, in building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, over the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development, and contrasting the new strategy with former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning public-private partnership. The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to creating financially self-sustaining financial institutions with a clear developmental objective.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that, for some option pricing models under continuous hedging, the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test either to lose its ability to detect additional priced risk factors or to fail to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the BS model, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the Heston model, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world, we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios similar to those reported in many empirical studies, the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore depend strongly on the location of current volatility relative to its long-term mean and on the degree of moneyness of the option. As a consequence, tests reported in the literature may suffer from the problem that, in a time-series framework, the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13 Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
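To illustrate the object these tests are built on, here is a minimal simulation sketch of the mean hedging error of a discretely rebalanced Black-Scholes delta hedge under the real-world measure. All parameter values are hypothetical choices, and this is a generic sketch, not the paper's exact experimental design.

```python
# Minimal sketch: mean hedging error of a discretely rebalanced Black-Scholes
# delta hedge for a short call, simulated under the real-world measure.
# Parameters are hypothetical; the sign and size of the mean error depend on
# the rebalancing frequency and the equity risk premium, as the paper discusses.
import numpy as np
from scipy.stats import norm

S0, K, T, r, sigma, mu = 100.0, 100.0, 0.25, 0.02, 0.2, 0.08
n_steps, n_paths = 13, 20000          # weekly rebalancing over three months
dt = T / n_steps
rng = np.random.default_rng(1)

def bs_call(S, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def bs_delta(S, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

S = np.full(n_paths, S0)
shares = np.full(n_paths, bs_delta(S0, T))
cash = bs_call(S0, T) - shares * S0            # premium received minus stock bought
for i in range(1, n_steps + 1):
    z = rng.standard_normal(n_paths)
    S *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    cash *= np.exp(r * dt)                     # interest on the cash account
    if i < n_steps:                            # rebalance to the new delta
        new_shares = bs_delta(S, T - i * dt)
        cash -= (new_shares - shares) * S
        shares = new_shares

hedging_error = shares * S + cash - np.maximum(S - K, 0.0)
print(f"mean hedging error: {hedging_error.mean():.4f}")
```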
In a framework closely related to Diamond and Rajan (2001), we characterize different financial systems and analyze the welfare implications of different LOLR policies in these financial systems. We show that in a bank-dominated financial system it is less likely that a LOLR policy following the Bagehot rules is preferable. In financial systems with rather illiquid assets, discretionary individual liquidity assistance might be welfare improving, while in market-based financial systems, with rather liquid assets on banks' balance sheets, emergency liquidity assistance provided freely to the market at a penalty rate is likely to be efficient. Thus, a "one size fits all" approach that does not take the differences between financial systems into account is misguided. JEL classification: D52, E44, G21, E52, E58
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size (and not just "the" market price of jump risk). Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed-form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us much about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending against the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flows or high interim liquidation values of assets tend to finance without a relationship bank. JEL classification: G21, G78, G33
This paper suggests a motive for bank mergers that goes beyond alleged and typically unverifiable scale economies: preemptive resolution of banks' financial distress. Such "distress mergers" can be a significant motivation for mergers because they can foster reorganizations, realize diversification gains, and avoid public attention. However, since none of these potential benefits comes without a cost, the overall assessment of distress mergers is unclear. We conduct an empirical analysis to provide evidence on the consequences of distress mergers. The analysis is based on comprehensive data from Germany's savings and cooperative banking sectors over the period 1993 to 2001. During this period both sectors faced significant structural problems, and superordinate institutions (associations) presumably engaged in coordinated actions to manage distress mergers. The data comprise 3640 banks and 1484 mergers. Our results suggest that bank mergers as a means of preemptive distress resolution have moderate costs in terms of their economic impact on performance. We do find strong evidence consistent with diversification gains. Thus, distress mergers seem to have benefits without adversely affecting systemic stability.
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
The determination of risk-adjusted discount rates is of central importance in business valuation. When the CAPM is used for this purpose in practice, risk-free interest rates and risk premia have to be determined, which requires expected returns on the market portfolio and beta factors as measures of systematic risk. To match the expected surpluses being valued, the required returns used for discounting should reflect the future returns on comparable investments expected as of the valuation date. The vast majority of contributions on operationalizing the CAPM, however, derive required returns from historical capital market returns. In this paper we show how expected future returns can be derived in a forward-looking manner from observable quantities, above all from term structures of interest rates and observable analyst forecasts. This permits a conceptually more consistent valuation of the surpluses expected as of the valuation date using the future returns expected at the same time.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, in particular given the European Union's decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high-quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms that adopted such standards before 2005, and investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital, and therefore measurable economic benefits, for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against the claim that reporting under internationally accepted standards per se lowers the cost of equity capital of adopting firms.
In this study, we develop a technique for estimating a firm's expected cost of equity capital from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only information publicly available at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics that are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
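As a rough illustration of how an implied (ex-ante) cost of equity can be backed out from prices and analyst forecasts, the sketch below solves a finite-horizon residual income valuation for the discount rate that matches the observed price. The horizon, terminal value treatment and all input numbers are illustrative assumptions, not the authors' exact specification.

```python
# Stylized sketch: back out the implied cost of equity r that equates a
# residual-income valuation to the observed stock price. Horizon, terminal
# value treatment and the input numbers are illustrative assumptions only.
from scipy.optimize import brentq

price = 50.0                  # observed price per share
book = 30.0                   # current book value per share
eps = [3.0, 3.4, 3.8]         # analyst consensus EPS forecasts, years 1-3
payout = 0.4                  # assumed dividend payout ratio (clean surplus)

def ri_value(r):
    b, pv = book, 0.0
    for t, e in enumerate(eps, start=1):
        pv += (e - r * b) / (1 + r) ** t      # discounted residual income
        b += e * (1 - payout)                 # clean-surplus book value update
    # flat perpetuity of last-year residual income beyond the forecast horizon
    pv += (eps[-1] - r * b) / (r * (1 + r) ** len(eps))
    return book + pv

# implied cost of equity: the r at which model value equals the market price
r_star = brentq(lambda r: ri_value(r) - price, 0.01, 0.30)
print(f"implied cost of equity: {r_star:.2%}")
```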
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information, we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It is argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures it supports.
Since the introduction of the German Corporate Governance Code ("the Code") in 2002, German listed companies have been obliged to issue the declaration of conformity pursuant to § 161 AktG (comply-or-explain principle). On the basis of this information, compliance with the Code is meant to be monitored, and if necessary sanctioned, through the pressure of the capital market. It is regularly postulated that above-average compliance with the Code's recommendations is rewarded with price premia, and non-compliance sanctioned with price discounts. The results of an event study show that the issuance of the declaration of conformity does not trigger any notable price reaction, and that the self-regulation through the capital market assumed (and required) for the enforcement of the Code does not take place. It is therefore critically examined whether the flexible regulatory approach chosen for the Code, welcome as it is in principle, constitutes a suitable enforcement mechanism within the system of mandatory German company law. This paper studies the short-run announcement effects of compliance with the German Corporate Governance Code ("the Code") on firm value. Event study results suggest that firm value is unaffected by the announcement, although such market reactions to the first-time disclosure of the declaration of conformity were widely assumed by the private and public promoters of the Code. This result adds evidence to the hypothesis that regulatory corporate governance initiatives that rely on mandatory disclosure without monitoring and enforcement are ineffective in civil law countries.
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about the financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that financing patterns do not differ very much between countries, and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as either bank-based or capital market-based. This creates a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side and the structure of the financial sector and the prevailing corporate governance system in a given country on the other. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by comparing and analyzing various ways of measuring financial structure and financing patterns, and by demonstrating that the surprising empirical results found by studies that relied on net flows are due to a hidden assumption. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. The empirical results, obtained using an estimation technique to determine gross flows of funds where empirical data are not available, are very much in line with the commonly held belief prior to Mayer's influential contribution: they indicate that the financial systems of the three countries do indeed differ from one another substantially, and in a way which is largely consistent with the general view of the differences between these financial systems.
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to force the hot and dense nuclear matter into a deconfined phase.
The present case study of the anciens combattants in Diébougou is the result of a teaching research project of the Johann Wolfgang Goethe-Universität, Frankfurt am Main, Institut für Historische Ethnologie, which took place in Burkina Faso from 24 August 2001 to 16 December 2001 under the direction of Prof. Carola Lentz, supervised by Dr. Katja Werthmann and Dr. Richard Kuba, and in cooperation with the University of Ouagadougou (Département d'Histoire et d'Archéologie). All persons and institutions named here are expressly thanked. I submitted the case study in February 2004 as a Magister thesis at the Fachbereich Geschichtswissenschaften (Historische Ethnologie) of the Johann Wolfgang Goethe-Universität, Frankfurt am Main, with Prof. Carola Lentz and Prof. Karl-Heinz Kohl. Most of the documents, maps and photographs originally contained in the appendix of the thesis have been removed, and the text has been slightly revised. Following a four-week Dioula language course in Bobo-Dioulasso, the three-month fieldwork phase took place in Diébougou, the capital (chef-lieu) of the province of Bougouriba in south-western Burkina Faso and a market and administrative centre with 11,637 inhabitants (see http://www.ambf.bf/f_mairies.html). Interethnic relations, settlement history and land law were the overarching themes of the anthropological subproject A9 of the Sonderforschungsbereich 268 "Kulturentwicklung und Sprachgeschichte im Naturraum Westafrikanische Savanne" of the Johann Wolfgang Goethe-Universität, which supported the research financially and which I would likewise like to thank.
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new "securitisation framework" to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks of the new securitisation framework. JEL classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB banks are well capitalized or gain little from taking risks, they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL classification: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in macroeconomic performance of the U.S. economy.
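Agents who "continually update their beliefs" are commonly modeled with constant-gain (perpetual) learning, i.e. recursive least squares with a constant gain so that older observations are discounted. The sketch below shows that generic updating recursion with a hypothetical data-generating process; it illustrates the mechanism, not the paper's estimated model.

```python
# Minimal sketch of constant-gain (perpetual) learning: recursive least
# squares with a constant gain, so old data are discounted and beliefs keep
# adapting. The data-generating process and the gain value are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
gain = 0.02                                    # constant gain (discount factor)
T, k = 500, 2
true_b = np.array([1.0, 0.5])

b = np.zeros(k)                                # belief about regression coefficients
R = np.eye(k)                                  # estimate of the second-moment matrix
for t in range(T):
    x = np.array([1.0, rng.normal()])          # regressors (constant + shock)
    y = x @ true_b + 0.1 * rng.normal()        # observed outcome
    R += gain * (np.outer(x, x) - R)           # update second-moment matrix
    b += gain * np.linalg.solve(R, x) * (y - x @ b)   # update beliefs
print("final beliefs:", b.round(3))
```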
Recent evidence on the effect of government spending shocks on consumption cannot be easily reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL classification: E32, E62.
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely with real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL classification: E5, E61, D78
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger-cause bond spread changes for more firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market is, and the magnitude of this sensitivity increases as credit quality worsens. Finally, the CDS market plays a more important role in price discovery than the corporate bond market. JEL classification: G10, G14, C32.
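The lead-lag analysis described can be sketched as follows: fit a bivariate VAR to stock returns and CDS spread changes and run a Granger-causality test. The data below are simulated placeholders in which stock returns lead CDS spread changes by construction; the series names are hypothetical.

```python
# Sketch of a lead-lag analysis: a bivariate VAR on stock returns and CDS
# spread changes, plus a Granger-causality test. Simulated placeholder data.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
T = 300
stock = rng.normal(size=T)
# CDS spread changes respond to lagged stock returns (stocks lead CDS)
cds = np.empty(T)
cds[0] = rng.normal()
for t in range(1, T):
    cds[t] = -0.4 * stock[t - 1] + rng.normal()

data = pd.DataFrame({"stock_ret": stock, "cds_change": cds})
res = VAR(data).fit(maxlags=5, ic="aic")       # lag length chosen by AIC
print(res.summary())

# H0: stock returns do not Granger-cause CDS spread changes
grangercausalitytests(data[["cds_change", "stock_ret"]], maxlag=5)
```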
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects on equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over and above the direct news announcement effects. JEL classification: F3, F4, G1, C5
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification, in terms of more intense monitoring, dominates the costs of free-riding and duplication of effort. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, when firms are less profitable, and when poor financial integration, weak regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL classification: D82, G21, G32.
We analyze governance using a dataset on investments by venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia, spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication, a lower probability of potentially harmful co-investment, and board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which goes hand in hand with an increased probability of investment in high-tech companies. JEL classification: G24, G31, G32.
A large literature spanning several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL classification: C1, G1
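For concreteness, a realized beta over a period is simply the realized covariance of the stock with the market divided by the realized market variance, both computed from high-frequency returns within the period. A minimal sketch with simulated placeholder data:

```python
# Sketch of a realized beta: the ratio of the realized covariance of a stock
# with the market to the realized market variance, computed from
# high-frequency returns within a period. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_days, n_intraday = 66, 78           # one quarter of 5-minute returns (hypothetical)
true_beta = 1.2
r_mkt = 0.001 * rng.standard_normal((n_days, n_intraday))
r_stock = true_beta * r_mkt + 0.002 * rng.standard_normal((n_days, n_intraday))

realized_var = (r_mkt ** 2).sum()              # realized market variance
realized_cov = (r_mkt * r_stock).sum()         # realized covariance with market
realized_beta = realized_cov / realized_var    # nonlinear function of the two
print(f"realized beta over the quarter: {realized_beta:.3f}")
```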
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL classification: C62, D83, D84, E31
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. JEL classification: E31, E52, E58, E61
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without a lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future, and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL classification: C63, E31, E52.
In this article, we investigate risk-return characteristics and diversification benefits when private equity is used as a portfolio component. We use a unique dataset describing 642 US portfolio companies with 3620 private equity investments. Information about precisely dated cash flows at the company level enables, for the first time, a cash-flow-equivalent and simultaneous investment simulation in stocks, as well as the construction of stock portfolios for benchmarking purposes. Methodologically, we construct private equity, stock-benchmark and mixed-asset portfolios using bootstrap simulations. For the late 1990s we find a dramatic increase in the extent to which private equity outperforms stock investment; in earlier years private equity underperformed its stock benchmarks. Within the overall class of private equity, returns on earlier-stage investment categories, such as venture capital, show on average higher variation and higher rates of failure. In this category in particular, high average portfolio returns are generated solely by the ability to select a few extremely well performing companies, thus compensating for lost investments. There is a high marginal diversifiable risk reduction of about 80% when the portfolio size is increased to 15 investments. When the portfolio size is increased from 15 to 200, there are few marginal risk diversification effects on the one hand, but a large increase in management expenditure on the other, so that an actual average portfolio size between 20 and 28 investments seems well balanced. We provide empirical evidence that the non-diversifiable risk that a constrained investor, who invests exclusively in private equity, has to hold exceeds that of constrained stock investors and also the market risk. From the viewpoint of unconstrained investors with complete investment freedom, risk can be optimally reduced by constructing mixed-asset portfolios. Across the various private equity subcategories analyzed, there are large differences in the optimal allocations to this asset class for minimizing mixed-asset portfolio variance or maximizing performance ratios. We observe optimal portfolio weightings between 3% and 65%.
We take a simple time-series approach to modeling and forecasting daily average temperature in U.S. cities, and we inquire systematically as to whether it may prove useful from the vantage point of participants in the weather derivatives market. The answer is, perhaps surprisingly, yes. Time-series modeling reveals conditional mean dynamics, and crucially, strong conditional variance dynamics, in daily average temperature, and it reveals sharp differences between the distribution of temperature and the distribution of temperature surprises. As we argue, it also holds promise for producing the long-horizon predictive densities crucial for pricing weather derivatives, so that additional inquiry into time-series weather forecasting methods will likely prove useful in weather derivatives contexts.
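A minimal sketch of such a time-series approach: a conditional mean with trend, annual Fourier seasonality and one autoregressive lag, fit by OLS, followed by GARCH(1,1) variance dynamics on the residuals. The data are simulated placeholders, and this exact specification is an illustrative assumption rather than necessarily the authors' model.

```python
# Sketch: seasonal conditional mean for daily average temperature, fit by OLS,
# then GARCH(1,1) conditional variance dynamics on the residuals.
# Simulated placeholder data; the specification is an illustrative assumption.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
T = 3 * 365
d = np.arange(T)
season = 12.0 * np.cos(2 * np.pi * d / 365.25 - 2.0)
temp = 10.0 + 0.001 * d + season + rng.normal(scale=3.0, size=T)

# conditional mean: constant, trend, annual sine/cosine, one AR lag
X = np.column_stack([
    np.ones(T - 1),
    d[1:],
    np.sin(2 * np.pi * d[1:] / 365.25),
    np.cos(2 * np.pi * d[1:] / 365.25),
    temp[:-1],                      # AR(1) term
])
coef, *_ = np.linalg.lstsq(X, temp[1:], rcond=None)
resid = temp[1:] - X @ coef

# conditional variance: GARCH(1,1) on the mean residuals
garch = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
print(garch.summary())
```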
Despite powerful advances in yield curve modeling in the last twenty years, comparatively little attention has been paid to the key practical problem of forecasting the yield curve. In this paper we do so. We use neither the no-arbitrage approach, which focuses on accurately fitting the cross section of interest rates at any given time but neglects time-series dynamics, nor the equilibrium approach, which focuses on time-series dynamics (primarily those of the instantaneous rate) but pays comparatively little attention to fitting the entire cross section at any given time and has been shown to forecast poorly. Instead, we use variations on the Nelson-Siegel exponential components framework to model the entire yield curve, period-by-period, as a three-dimensional parameter evolving dynamically. We show that the three time-varying parameters may be interpreted as factors corresponding to level, slope and curvature, and that they may be estimated with high efficiency. We propose and estimate autoregressive models for the factors, and we show that our models are consistent with a variety of stylized facts regarding the yield curve. We use our models to produce term-structure forecasts at both short and long horizons, with encouraging results. In particular, our forecasts appear much more accurate at long horizons than various standard benchmark forecasts. JEL Code: G1, E4, C5
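The framework can be sketched compactly. With a fixed decay parameter lambda, the Nelson-Siegel curve is y_t(tau) = beta1_t + beta2_t (1 - e^(-lambda tau))/(lambda tau) + beta3_t [(1 - e^(-lambda tau))/(lambda tau) - e^(-lambda tau)], so the three factors can be extracted by cross-sectional OLS each period and then forecast with autoregressions. The sketch below uses simulated placeholder yields; the decay value is an illustrative choice.

```python
# Sketch: Nelson-Siegel loadings with a fixed decay parameter, cross-sectional
# OLS each period to extract level, slope and curvature factors, then AR(1)
# forecasts of the factors. Simulated placeholder yields; illustrative decay.
import numpy as np

lam = 0.0609                              # decay parameter, maturities in months
tau = np.array([3, 6, 12, 24, 36, 60, 120], dtype=float)

def ns_loadings(tau, lam):
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

L = ns_loadings(tau, lam)

rng = np.random.default_rng(6)
T = 120
true_f = np.cumsum(0.1 * rng.standard_normal((T, 3)), axis=0) + [6.0, -2.0, 1.0]
yields = true_f @ L.T + 0.05 * rng.standard_normal((T, len(tau)))

# Step 1: extract the three factors period by period via cross-sectional OLS
factors = np.linalg.lstsq(L, yields.T, rcond=None)[0].T    # shape (T, 3)

# Step 2: fit an AR(1) to each factor and forecast one step ahead
forecast_f = np.empty(3)
for j in range(3):
    f = factors[:, j]
    X = np.column_stack([np.ones(T - 1), f[:-1]])
    c, b = np.linalg.lstsq(X, f[1:], rcond=None)[0]
    forecast_f[j] = c + b * f[-1]

# Step 3: map the factor forecasts back into a full yield-curve forecast
print("forecast yield curve:", (L @ forecast_f).round(2))
```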
We consider three sets of phenomena that feature prominently - and separately - in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that: (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis.
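The mechanism behind point (a) can be made concrete: if returns are conditionally Gaussian with constant positive mean mu and time-varying volatility sigma_t, then P(r_{t+1} > 0) = Phi(mu / sigma_{t+1}), which varies with volatility even though the conditional mean is constant. A tiny numerical illustration (the numbers are hypothetical):

```python
# If r_{t+1} ~ N(mu, sigma_{t+1}^2) with constant mu > 0, the probability of a
# positive return is Phi(mu / sigma_{t+1}): volatility dependence alone makes
# the sign of returns forecastable. Numbers below are illustrative only.
from scipy.stats import norm

mu = 0.0005                                   # daily expected return
for sigma in (0.005, 0.01, 0.02):             # low / medium / high volatility
    p_up = norm.cdf(mu / sigma)
    print(f"sigma={sigma:.3f}  P(return > 0) = {p_up:.4f}")
```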
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
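One way such a range-based covariance can be constructed (a sketch under stated assumptions, not necessarily the authors' exact estimator) combines the Parkinson range-based variance with the polarization identity cov(x, y) = [var(x + y) - var(x - y)] / 4; in foreign exchange, no-arbitrage makes the "sum" series an observable cross rate, which is the economic motivation mentioned above.

```python
# Sketch: Parkinson (1980) recovers variance from the high-low range, and a
# covariance follows from ranges of sum and difference series via
#   cov(x, y) = (var(x + y) - var(x - y)) / 4 .
# Data are simulated placeholders; discrete sampling slightly biases the
# observed range downward, which is acceptable for a sketch.
import numpy as np

rng = np.random.default_rng(7)

def parkinson_var(high, low):
    # Parkinson range-based variance estimator
    return (np.log(high / low) ** 2).mean() / (4 * np.log(2))

# simulate intraday paths of two correlated log prices and record daily ranges
n_days, n_ticks = 250, 390
z = rng.standard_normal((2, n_days, n_ticks))
x = 0.01 / np.sqrt(n_ticks) * np.cumsum(z[0], axis=1)
y = 0.01 / np.sqrt(n_ticks) * np.cumsum(0.6 * z[0] + 0.8 * z[1], axis=1)

def ranges(p):
    return np.exp(p.max(axis=1)), np.exp(p.min(axis=1))   # high, low in levels

var_sum = parkinson_var(*ranges(x + y))
var_diff = parkinson_var(*ranges(x - y))
cov_xy = (var_sum - var_diff) / 4                         # polarization identity
print(f"range-based daily covariance: {cov_xy:.2e}  (true: 6.0e-05)")
```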
Financial theory creates a puzzle. Some authors argue that high-risk entrepreneurs choose debt contracts instead of equity contracts, since risky but high returns are of relatively more value to a loan-financed firm. By contrast, authors who focus explicitly on start-up finance predict that the riskier their projects are, the more likely entrepreneurs are to seek equity-like venture capital contracts. Our paper takes a first step toward resolving this puzzle empirically. We present microeconometric evidence on the determinants of debt and equity financing in young and innovative SMEs, paying special attention to the role of risk in the choice of financing method. Since risk is not directly observable, we use different indicators of financial and project risk. Our data generally confirm the hypothesis that the probability that a young high-tech firm receives equity financing is an increasing function of financial risk. With regard to intrinsic project risk, our results are less conclusive, as some of our indicators of a risky project are found to have a negative effect on the likelihood of being financed by private equity.
We study the returns to venture capital and private equity investment using data from 221 venture capital and private equity funds that are part of 72 venture capital and private equity firms, covering 5040 entrepreneurial firms (3826 venture capital and 1214 private equity) and spanning 32 years (1971-2003) and 39 countries from North and South America, Europe and Asia. We use four main categories of variables to proxy for the value-added activities and risks that explain venture capital and private equity returns: the market and legal environment, VC characteristics, entrepreneurial firm characteristics, and the characteristics and structure of the investment. We show that Heckman sample selection issues with regard to both unrealized and partially realized investments are important to consider when analysing the determinants of realized returns. We further compare the actual unrealized returns, as reported to investment managers, with the predicted unrealized returns based on the estimates of realized returns from the sample selection models. We show that there exist significant systematic biases in the reporting of unrealized investments to institutional investors, depending on the level of the earnings aggressiveness and disclosure indices in a country, as well as on proxies for the degree of information asymmetry between investment managers and venture capital and private equity fund managers. JEL classification: G24, G28, G31, G32, G35
We analyze welfare-maximizing monetary policy in a dynamic two-country model with price stickiness and imperfect competition. In this context, a typical terms-of-trade externality affects policy interaction between independent monetary authorities. Unlike the existing literature, we remain consistent with a public finance approach by explicitly considering all the distortions that are relevant to the Ramsey planner. This strategy entails two main advantages. First, it allows an accurate characterization of optimal policy in an economy that evolves around a steady state which is not necessarily efficient. Second, it allows us to describe a full range of alternative dynamic equilibria when price setters in both countries are completely forward-looking and households' preferences are not restricted. In this context, we study optimal policy both in the long run and along a dynamic path, and we compare optimal commitment policy under Nash competition and under cooperation. By deriving a second-order accurate solution to the policy functions, we also characterize the welfare gains from international policy cooperation. JEL classification: E52, F41. This version: January 2004. First draft: October 2003.
This paper considers a theoretical model of n asymmetric firms that reduce their initial unit costs by spending on R&D activities. In accordance with Schumpeterian hypotheses, we find that more efficient (bigger) firms spend more on R&D, and that this leads to a more concentrated market structure. We also find a positive relationship between innovation and market concentration. This calls for a corrective tax on R&D activities to curtail strategic incentives to over-invest in R&D in an attempt to achieve a higher market share. JEL classification: L11, L52, O31. February 2004.
This paper analyzes the impact of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We investigate the hypothesis that the different governance structures, objectives and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set comprising all IPOs that occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist: firms backed by independent VCs perform significantly better two years after the IPO than all other IPOs, and their share prices fluctuate less than those of their counterparts over this period. Evidently, independent VCs, which concentrated mainly on growth stocks (low book-to-market ratio) and large firms (high market value), were able to add value, leading to lower post-IPO idiosyncratic risk and higher returns (after controlling for all other effects). By contrast, firms backed by public VCs (being small and having a high book-to-market ratio) showed relative underperformance. JEL classification: G10, G14, G24. 29 January 2004.