Working Paper
Year of publication
- 2004 (114)
Document Type
- Working Paper (114)
Has Fulltext
- yes (114)
Is part of the Bibliography
- no (114)
Keywords
- Anton Ulrich <Braunschweig-Wolfenbüttel, Herzog> / Octavia (18)
- Germany (13)
- Estimation (7)
- Monetary policy (6)
- Theory (5)
- USA (5)
- Venture Capital (5)
- Corporate Governance (4)
- Congress (4)
This paper sets out to analyze the influence of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We investigate the hypothesis that the different governance structures, objectives, and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set embracing all IPOs that occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VC types exist. Firms backed by independent VCs perform significantly better two years after the IPO than all other IPOs, and their share prices fluctuate less than those of their counterparts over this period. By contrast, firms backed by public VCs show relative underperformance. That this could occur implies that market participants did not correctly assess the roles played by the different types of VCs.
Using a normalized CES function with factor-augmenting technical progress, we estimate a supply-side system of the US economy from 1953 to 1998. By avoiding potential estimation biases that occurred in earlier studies and placing strong emphasis on the consistency of the data set required by the estimated system, we obtain robust results not only for the aggregate elasticity of substitution but also for the parameters of labor- and capital-augmenting technical change. We find that the elasticity of substitution is significantly below unity and that the growth rates of technical progress show an asymmetrical pattern: the growth of labor-augmenting technical progress is exponential, while that of capital is hyperbolic or logarithmic.
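A normalized CES function with factor-augmenting technical progress of the kind described above is commonly written as follows. This is a sketch of the standard textbook form, not necessarily the paper's exact specification; the subscript 0 denotes the normalization point and the Γ terms are the factor-augmenting efficiency levels:

```latex
Y_t = Y_0 \left[ \pi_0 \left( \frac{\Gamma_{K,t}\, K_t}{\Gamma_{K,0}\, K_0} \right)^{\frac{\sigma-1}{\sigma}}
      + (1 - \pi_0) \left( \frac{\Gamma_{L,t}\, L_t}{\Gamma_{L,0}\, L_0} \right)^{\frac{\sigma-1}{\sigma}}
      \right]^{\frac{\sigma}{\sigma-1}}
```

Here $\sigma$ is the aggregate elasticity of substitution (found to be significantly below unity in the abstract above) and $\pi_0$ is the capital income share at the normalization point.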
The paper explains the absence of resultative secondary predication in Russian as arising from a conflict of inferential interpretations. It formalises the framework necessary to express this proposal in terms of abductive reasoning with Poole systems in Gricean contexts. The conflict is shown to arise for default rules regulating alternative realisation of verb-internally specified consequent states. The paper thus indicates that typological variation may be due not only to different parameter values but to general inferential properties of the syntax-semantics mapping. The proposed theory also contradicts some widespread proposals that the absence of resultative secondary predication is due to the absence of some particular language feature.
Volume II of II
Volume I of II
The papers in this volume were presented at the eleventh meeting of the Austronesian Formal Linguistics Association (AFLA 11), held from April 23-25 at the Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany. The conference was organized by Hans-Martin Gärtner, Joachim Sabel, and myself, as part of the research project Clause Structure and Adjuncts in Austronesian Languages. We gratefully acknowledge the financial support of the German Research Foundation (Deutsche Forschungsgemeinschaft). We would like to thank Wayan Arka, Abigail Cohn, Laura Downing, Silke Hamann, S J Hannahs, Ray Harlow, Nikolaus Himmelmann, Yuchua E. Hsiao, Lillian Huang, Ed Keenan, Glyne Piggott, Charles Randriamasimanana, Joszef Szakos, Barbara Stiebels, Jane Tang, Lisa Travis, Naomi Tsukido, Sam Wang, Elizabeth Zeitoun, Kie Ross Zuraw, and Marzena Zygis for reviewing the abstracts. We are grateful to Mechthild Bernhard, Jenny Ehrhardt, Fabienne Fritzsche, Theódóra Torfadóttir and Tue Trinh for their help during the conference. I would like to thank Theódóra for providing essential editorial assistance.
The papers of this 33rd volume of the ZAS Papers in Linguistics present intermediate results of the ZAS project on language acquisition. We currently deal with the question of which functions children assign to the first grammatical forms they use productively. The goal is to identify the grammatical features comprising the child's early grammar. This issue is investigated through analyses of longitudinal data (cf. the papers of Gagarina/Bittner, Gagarina, Kühnast/Popova/Popov, Bewer) as well as through experimental research (see the papers of Bittner, Kühnast/Popova/Popov). The main topic of this volume is the acquisition of definite articles and verbal aspect.
Bewer, who worked for a long time as a student assistant in the project and wrote her MA thesis on the project's topic, investigates children's acquisition of gender features in German. Kühnast/Popova/Popov discuss the correlations between the acquisition of definite articles and verbal aspect in Bulgarian. Bittner presents results of an experimental study on definite article perception in adult German. Gagarina traces the emergence of aspectual oppositions in Russian and examines the validity of the 'aspect before tense' hypothesis for L1-speaking children. Additionally, the paper of Gagarina/Bittner deals with the interrelation between the acquisition of finiteness and verb arguments in Russian and German.
Table of Contents:
T. A. Hall (Indiana University): English syllabification as the interaction of markedness constraints
Antony D. Green: Opacity in Tiberian Hebrew: Morphology, not phonology
Sabine Zerbian (ZAS Berlin): Phonological Phrases in Xhosa (Southern Bantu)
Laura J. Downing (ZAS Berlin): What African Languages Tell Us About Accent Typology
Marzena Zygis (ZAS Berlin): (Un)markedness of trills: the case of Slavic r-palatalisation
Laura J. Downing (ZAS Berlin), Al Mtenje (University of Malawi), Bernd Pompino-Marschall (Humboldt-Universität Berlin): Prosody and Information Structure in Chichewa
T. A. Hall (Indiana University), Silke Hamann (ZAS Berlin), Marzena Zygis (ZAS Berlin): The phonetics of stop assibilation
Christian Geng (ZAS Berlin), Christine Mooshammer (Universität Kiel): The Hungarian palatal stop: phonological considerations and phonetic data
Outside the Indo-European languages, the category 'adjective' is less widespread than a layperson would assume, and non-Indo-European languages show divisions of the world into nouns and verbs that differ strongly from those of the European languages. The topic of this paper is a previously undescribed distribution of concepts across word classes in Guarani, a language spoken mainly in Paraguay.
Seventh volume of the second version (B) of the "Römische Octavia". Published posthumously in Vienna. The text breaks off in mid-sentence. Beyond this seventh volume, large parts of the dictation transcripts for a concluding eighth volume of the novel also survive. The most reliable information on this is provided by the introduction to the historical-critical edition in HKA I, pp. XIX-LIX. Especially in the last roughly 200 pages of the seventh volume there are strong, partly also conceptual, deviations from the dictation transcripts; these go back to Anton Ulrich's secretary Gottfried Alberti, who after the duke's death was entrusted with bringing the novel project to a close. On the circumstances of why the seventh volume appeared only in 1762 in Vienna and the eighth volume not at all, cf. Otte (1983) and HKA I, pp. XLVIII-LVIII. Between 1714 (publication of the sixth volume) and 1716 (Zilliger's bankruptcy), a partial printing of the seventh volume was produced in Wolfenbüttel that was never completed and never went on sale: Der| Römischen| Octavia| Siebender Theil.| [vignette]| Braunschweig/| Gedruckt und verlegt durch Johann Georg Zilligern| Hochfürstl. privil. Hof-Buchdrucker.
Sixth volume of the second version (B) of the "Römische Octavia". Listed in the Easter fair catalogue of 1714. Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Contains several poems that are probably not by the author but cannot be attributed with certainty. Contains on pp. 875-884 a small oratorio whose authorship is unresolved. Various revisions compared with the first version. About half of the original text of the sixth volume of the first version has now been allotted to the fifth volume. Roughly from the middle of the sixth volume onwards, entirely new text begins. Newly added in the first half is the following self-contained story: "Die Geschichte der Epicharis", pp. 315-356.
Fifth volume of the second version (B) of the "Römische Octavia". Listed in the Easter fair catalogue of 1713. Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). About half of the text of the sixth volume of the first version is integrated into the fifth volume in the second version. Contains several poems that are probably not by the author but cannot be attributed with certainty. Various revisions compared with the first version. Newly added are the following self-contained stories: "Die Geschichte des Corillus", pp. 15-54; "Des Vatinius Gesandschafft", pp. 57-67; "Begebenheit der Apasia", pp. 857-860.
Fourth volume of the second version (B) of the "Römische Octavia". Listed in the Easter fair catalogue of 1713. Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Contains poems that are probably not by the author but cannot be attributed with certainty, as well as an autobiographical roman-à-clef narrative by Aurora von Königsmarck, "Die Geschichte der Solane", pp. 603-658. Contains on pp. 415-466 fragments of a David epic in alexandrines, "Die Geschichte des Davids/ Königs in Juda", which is probably an early work of Anton Ulrich. Some revisions compared with the first version. Newly added are the following self-contained stories: "Fortsetzung der Geschichte der Königin Berenice", pp. 221-232; "Die Geschichte des Davids/ Königs in Juda", pp. 415-466; "Die Geschichte der Solane", pp. 603-658, written by Aurora von Königsmarck; "Die Geschichte des Agbarus und der Printzeßin Nitocris", pp. 1051-1066.
Third volume of the second version (B) of the "Römische Octavia". Listed in the Easter fair catalogue of 1713. Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Contains several poems that are probably not by the author, none of which, however, can be attributed with certainty (cf. HKA III, pp. XXIII-XXVII). Only minor revisions compared with the first version. Newly added is "Die Geschichte Der Königin Berenice", pp. 276-350.
Second volume of the second version (B) of the "Römische Octavia". Listed in the Easter fair catalogue of 1713. Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Contains several poems that are partly probably, partly certainly not by the author. Two of them are by Christian Hofmann von Hofmannswaldau (cf. HKA I, pp. CLXXIVf., note 195). Only minor revisions compared with the first version. Newly added is "Fortsetzung der Geschichte/ Des Königs Monobazes und der Königin Susanna von Adiabene", pp. 808-855. Of the "Geschichte der Flavia Domitilla und der Cönis", pp. 920-1015, there exists a translation into French by an unknown French lady-in-waiting, dated 9 March 1714 (23: Cod.Guelf. 196.1 Extravag.).
Sixth volume of the first version of the "Römische Octavia". Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). The unsold copies of this printing were moreover offered again in 1711 under a new title page as part of a collected edition.
Fifth volume of the first version of the "Römische Octavia". The volume was first announced in the Easter fair catalogue of 1704 and then again in the Easter fair catalogue of 1706. The volume was probably meant to appear in 1704, as its title page also states, but actually came onto the market only in 1706. Cf. HKA I (1993), p. CXVI. Cf. for the complete publication history: Octavia. Römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). The unsold copies of this printing were moreover offered again in 1711 under a new title page as part of a collected edition.
Third and final third of the fourth volume of the first version of the "Römische Octavia". The first two thirds had appeared one year earlier. Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). The unsold copies of this printing were moreover offered again in 1711 under a new title page as part of a collected edition.
The first two thirds of the fourth volume of the first version of the "Römische Octavia". The final third followed separately one year later. Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Of this fourth volume there exists a partial printing, never completed and never published, which covers pages 250-418 with some gaps and was produced between 1680 and 1682. The individual leaves are partly pasted and partly loosely inserted into manuscripts of the "Römische Octavia" (SA Wolfenbüttel: 1 Alt 22, 317, 368, 369). Cf. HKA I (1993), pp. CVIf. Originally this fourth volume was to conclude the novel. When work was resumed around the turn of the century, after an interruption of some twenty years, the project was expanded to six volumes. From about the middle of the volume onwards, newly written text composed after 1700 dominates. The unsold copies of this printing were moreover offered again in 1711 under a new title page as part of a collected edition.
Reprint of the third volume of the first version of the "Römische Octavia". Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Contains several poems that are probably not by the author, none of which, however, can be attributed with certainty (cf. HKA III, pp. XXIII-XXVII).
Reprint of the second volume of the first version of the "Römische Octavia". Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Of this reprint there exists a twin printing, textually identical apart from a few variant readings but completely reset, published after 1685 and before 1711. Of each of the first three volumes, twin printings were produced before 1711, that is, editions with completely new typesetting but the old date. This practice is rather unusual, given that printers of the period were as a rule keen to present their editions as 'new' in order to promote sales. The most important argument that these are not simultaneous twin printings from the respective volume's year of first publication is the fact that the 1711 title-page issue of the third volume used not the sheets of the second edition of 1702 but chiefly sheets that must be assigned to one of the two editions dated 1679. If these sheets really dated from 1679, there would have been no need to reprint in 1702. On this argument cf. Boghardt (1993), pp. XCVIIIf.
Reprint of the first volume of the first version of the "Römische Octavia". Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Of this reprint there exists a twin printing, textually identical apart from a few variant readings but completely reset, published after 1685 and before 1711. Of each of the first three volumes, twin printings were produced before 1711, that is, editions with completely new typesetting but the old date. This practice is rather unusual, given that printers of the period were as a rule keen to present their editions as 'new' in order to promote sales. The most important argument that these are not simultaneous twin printings from the respective volume's year of first publication is the fact that the 1711 title-page issue of the third volume used not the sheets of the second edition of 1702 but chiefly sheets that must be assigned to one of the two editions dated 1679. If these sheets really dated from 1679, there would have been no need to reprint in 1702. On this argument cf. Boghardt (1993), pp. XCVIIIf.
Bibliography: Octavia römische Geschichte: Zweyter Theil, [vol. 3] (Nürnberg: J. Hoffmann, 1679)
Third volume of the first version of the "Römische Octavia". Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Of this first printing there exists a twin printing, textually identical apart from a few variant readings but completely reset, published after 1702 and before 1711. Of each of the first three volumes, twin printings were produced before 1711, that is, editions with completely new typesetting but the old date. This practice is rather unusual, given that printers of the period were as a rule keen to present their editions as 'new' in order to promote sales. The most important argument that these are not simultaneous twin printings from the respective volume's year of first publication is the fact that the 1711 title-page issue of the third volume used not the sheets of the second edition of 1702 but chiefly sheets that must be assigned to one of the two editions dated 1679. If these sheets really dated from 1679, there would have been no need to reprint in 1702. On this argument cf. Boghardt (1993), pp. XCVIIIf.
Second volume of the first version of the "Römische Octavia". Cf. for the complete publication history: Octavia römische Geschichte, [vol. 1] (Nürnberg: J. Hoffmann, 1677). Contains several poems that are partly probably, partly certainly not by the author. Two of them are by Christian Hofmann von Hofmannswaldau (cf. HKA I, pp. CLXXIVf., note 195). Of the "Geschichte der Flavia Domitilla und der Cönis", pp. 920-1015, there exists a translation into French by an unknown French lady-in-waiting, dated 9 March 1714 (23: Cod.Guelf. 196.1 Extravag.).
For the "Römische Octavia", extensive preparatory materials, manuscripts, and dictation transcripts survive in the Herzog August Bibliothek (23:) and in the Staatsarchiv (SA) Wolfenbüttel, an absolute rarity for a novel of the 17th and early 18th centuries. Beyond the printed volumes, large parts of the dictation transcripts for a concluding eighth volume of the novel also survive. The most reliable information on this is provided by the introduction to the historical-critical edition in HKA I, pp. XIX-LIX. On Anton Ulrich's extensive surviving correspondence, cf. Mazingue (1978), pp. 887-900. The novel appeared in two versions: version A, published in Nürnberg, and version B, published in Braunschweig and Vienna, differ from each other in their text. A was first conceived in four volumes but was then expanded to six; these six volumes were grouped into a collected edition in 1711. Version B was likewise first conceived in six volumes, later in eight; six volumes of it appeared in Braunschweig between 1712 and 1714, and a seventh in Vienna in 1762. Of the editions A.3.a, A.1.b, and A.2.b, printing variants survive, each produced after the reprints of the first three volumes (1685-1702) and before 1711. The first three volumes of the 1711 edition consist for the most part of material from these printing variants.
It is shown that between one-turn pushdown automata (1-turn PDAs) and deterministic finite automata (DFAs) there are savings in the size of description that are not bounded by any recursive function, so-called non-recursive trade-offs. Considering the number of turns of the stack height as a consumable resource of PDAs, we can show the existence of non-recursive trade-offs between PDAs performing k+1 turns and k turns for k >= 1. Furthermore, non-recursive trade-offs are shown between arbitrary PDAs and PDAs which perform only a finite number of turns. Finally, several decision problems are shown to be undecidable and not semidecidable.
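A non-recursive trade-off of the kind stated above can be sketched formally; this is a standard descriptional-complexity formulation, not a quotation from the paper:

```latex
\forall f\colon \mathbb{N} \to \mathbb{N} \text{ recursive} \;\; \exists (L_n)_{n \ge 1}\colon \quad
L_n = L(A_n) \text{ for some 1-turn PDA } A_n \text{ with } |A_n| \le n, \;\text{ yet }
|B| > f(n) \text{ for every DFA } B \text{ with } L(B) = L_n .
```

The same schema, with 1-turn PDAs and DFAs replaced by (k+1)-turn and k-turn PDAs, expresses the other trade-offs claimed in the abstract.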
The paper explores factors that influence the design of financing contracts between venture capital investors and European venture capital funds. 122 private placement memoranda and 46 partnership agreements are examined with respect to the use of covenant restrictions and compensation schemes. The analysis focuses on the impact of two key factors: the reputation of VC funds and changes in the overall demand for venture capital services. We find that established funds are more severely restricted by contractual covenants. This contradicts the conventional wisdom that established market participants care more about their reputation, have less incentive to behave opportunistically, and therefore need fewer covenant restrictions. We also find that managers of established funds are more often obliged to invest their own capital alongside investors' money. We interpret this as evidence that established funds actually have less reason to care about their reputation than young funds. One reason for this surprising result could be that managers of established VC funds are older and closer to retirement and therefore put less weight on the effects of their actions on future business opportunities. We also explore the effects of venture capital supply on contract design. Gompers and Lerner (1996) show that VC funds in the US are able to reduce the number of restrictive covenants in years with a high supply of venture capital and interpret this as a result of the funds' increased bargaining power. We do not find similar evidence for Europe. Instead, we find that VC funds receive less base compensation and higher performance-related compensation in years with strong capital inflows into the VC industry. This may be interpreted as a signal of overconfidence: strong investor demand seems to coincide with overoptimistic expectations by fund managers, which make them willing to accept higher-powered incentive schemes.
JEL: G32. Keywords: Venture Capital, Contracting, Limited Partnership, Funds, Principal Agent, Compensation, Covenants, Reputation, Bargaining Power
This paper is concerned with tagging spatial expressions in German newspaper articles: assigning a meaning to each expression, classifying its usage, and linking the derived referent to an event description. In our system, the activation of concepts is implemented in a very simple fashion: a concept is activated once (at a cost depending on the item that activated it) and remains activated thereafter. For example, a city also activates the nodes for the region and the country it is part of, so that cities from one country are chosen over cities from different countries. A test corpus of 12 German newspaper articles was evaluated with several disambiguation strategies. Disambiguation was carried out via a beam search to find an approximately cost-optimal solution for the conflict set of potential grounding candidates for each tagged spatial expression. Tests showed that the disambiguation strategies improved accuracy significantly.
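The cost-based grounding described above can be sketched as follows. Concept nodes (countries, regions) are paid for the first time a chosen reading needs them and stay active afterwards, so readings that share a country with earlier choices become cheaper, and a beam search keeps the cheapest partial assignments over the conflict set. All toponyms, concept nodes, and costs here are invented for illustration and are not from the paper's system:

```python
# Hypothetical activation costs for concept nodes; a node is paid for once.
COSTS = {"DE": 2.0, "US": 2.0, "Hessen": 1.0, "Brandenburg": 1.0,
         "Berlin": 1.0, "NewHampshire": 1.0}

def disambiguate(candidate_sets, beam_width=3):
    """Pick one reading per expression, minimizing total activation cost."""
    # Each beam state: (total_cost, chosen_readings, activated_concepts).
    beam = [(0.0, [], frozenset())]
    for candidates in candidate_sets:
        expanded = []
        for cost, chosen, active in beam:
            for reading, concepts in candidates:
                new = [c for c in concepts if c not in active]  # pay only once
                step = sum(COSTS.get(c, 1.0) for c in new)
                expanded.append((cost + step, chosen + [reading],
                                 active | set(new)))
        # Keep only the cheapest partial solutions (the "beam").
        beam = sorted(expanded, key=lambda state: state[0])[:beam_width]
    return beam[0]

# Two ambiguous toponyms; the German readings share the "DE" node,
# so the search grounds both expressions in Germany.
candidate_sets = [
    [("Frankfurt am Main", ["DE", "Hessen"]),
     ("Frankfurt (Oder)", ["DE", "Brandenburg"])],
    [("Berlin, Germany", ["DE", "Berlin"]),
     ("Berlin, New Hampshire", ["US", "NewHampshire"])],
]
best_cost, best_readings, _ = disambiguate(candidate_sets)
# best_readings prefers the two readings that share the "DE" concept node.
```

Because already-activated concepts cost nothing, the German reading of "Berlin" wins even though the New Hampshire reading would be equally plausible in isolation.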
Much has been written on the success of the Indian software industry, enumerating systemic factors such as first-class higher education and research institutions, both public and private; low labour costs; and stimulating (state) policies. However, although most studies analyzing the 'Indian' software industry essentially cover the South (and West) Indian clusters, this issue has not been tackled explicitly. This paper supplements the economic geography explanations mentioned above with the additional factor of social capital, which is important not only within the region but also in the transnational (ethnic) networks linking Indian software clusters with Silicon Valley. In other words, spatial proximity is complemented by cultural proximity, thereby extending the system of innovation. The main hypothesis is that some Indian regions are more apt for economic development and innovation due to their higher affinity to education and learning as well as their greater general openness, which has been a main finding of my interviews. In addition, the transnational networks of Silicon Valley Indians seem to be dominated by South Indians, thus corroborating the regional clustering of the Indian software industry. JEL Classifications: O30, R12, Z13, L86
During the past decade, processes associated with what is popularly though perhaps misleadingly known as globalization have come within the purview of anthropology. Migration and mobility, and the footloose or even rootless social groups that they produce, as well as the worldwide diffusion of commodities, media images, political ideas and practices, technologies and scientific knowledge are today on anthropology's research agenda. As a consequence, received notions about the ways in which culture relates to territory have been abandoned. The term transnationalisation captures cultural processes that stream across the borders of nation states. Anthropologists have been forced to revise the notion that transnationalisation would inevitably bring about a culturally homogenized world. Instead, we are witnessing a surge of cultural diversity. New cultural forms grow out of historically situated articulations of the local and the global. Rather than left-over relics of traditional orders, these are decidedly modern, yet far from uniform. The essay engages the idea of the pluralization of modernities, explores its potential for interdisciplinary research agendas, and also inquires into problematic assumptions underlying this new theoretical concept.
As of today, estimating interest rate reaction functions for the Euro Area is hampered by the short time span since the start of the single monetary policy. In this paper we circumvent the common use of aggregated data before 1999 by estimating interest rate reaction functions based on a panel of actual EMU member states. We find that exploiting the cross-section dimension of a multi-country panel and accounting for cross-country heterogeneity in advance of the single monetary policy pays off with regard to the estimated reaction functions' ability to describe actual interest rate dynamics. We retrieve a panel reaction function which is demonstrated to be a valuable tool for evaluating episodes of monetary policy since 1999. JEL classification: E43, E58, C33
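Interest rate reaction functions of the kind estimated in such panels typically take the form of a Taylor rule with interest rate smoothing. As a generic sketch (the paper's exact regressors and specification may differ), with $i$ indexing countries:

```latex
r_{i,t} = \rho\, r_{i,t-1} + (1-\rho)\bigl(\alpha_i + \beta\, \pi_{i,t} + \gamma\, x_{i,t}\bigr) + \varepsilon_{i,t}
```

where $r_{i,t}$ is the short-term interest rate, $\pi_{i,t}$ inflation, $x_{i,t}$ the output gap, $\alpha_i$ a country-specific intercept capturing cross-country heterogeneity, and $\rho$ the smoothing parameter.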
This paper employs individual bidding data to analyze the empirical performance of the longer term refinancing operations (LTROs) of the European Central Bank (ECB). We investigate how banks' bidding behavior is related to a series of exogenous variables such as collateral costs, interest rate expectations, and market volatility, and to individual bank characteristics such as country of origin, size, and experience. Panel regressions reveal that a bank's bidding depends on bank characteristics. Yet different bidding behavior generally does not translate into differences in bidder success. In contrast to the ECB's main refinancing operations, we find evidence of the winner's curse effect in LTROs. Our results indicate that LTROs lead neither to market distortions nor to unfair auction outcomes. JEL classification: E52, D44
Even though tourism is recognised as an important field for transnational research today, there are few attempts to place tourism in the context of transnational theories or to think about transnationalism from the perspective of tourists. I argue that researching tourist practices can add important aspects to transnational approaches. The prerequisites of mobility and interaction, for example, are the features backpackers choose to describe what their round-the-world trip is about. A form of tourism is adopted, or created, that itself confronts many aspects of globalisation. First of all, there is the immense dynamic involved: backpackers try to cover as many places and experiences as possible, travelling at high speed. They adopt all kinds of touristic experiences, ranging from beach to adventure to culture tourism. They do not focus on a specific area or country but travel the world, crossing national borders perpetually. Additionally, they form a transnational network in which they interact with strangers of similar backgrounds (other backpackers, tourism professionals). This network helps them interact with people from different backgrounds (the so-called hosts or locals). Based on my research, backpackers forge a certain identity from these transnational practices, which I want to name globedentity. Globedentity expresses a type of identity construction that not only refers to the individual (I) but reflects the world (globe) in this identity. This globedentity is not fixed but is perpetually re-created and re-defined. It also embraces the increasing popular awareness of globalisation, with which backpackers, coming from highly educated middle-class backgrounds, have particularly identified. Due to their constant awareness of the latest global social, cultural and economic developments in these educated milieus, they know exactly which tools to use to become successful members of their societies.
Taxation and tax policy reform appear on the political agenda in most advanced welfare states in Europe and North America. Of course, studies of taxation and tax policy are nothing new and have existed ever since people have paid taxes. The current work is situated in the context of the future of the welfare state and the reinforced international economic and political integration referred to as "globalization." The purpose of this paper is to analyze how globalization is affecting tax policy in advanced welfare states. In comparing the evolution of tax policy in Canada with that in the United States, Germany and Sweden from 1960 to 1995, I will try to review the conventional antiglobalization thesis, i.e., that globalization leads to a "race to the bottom" in revenue and expenditure policies, or, as others have called it, a "beggar the neighbour policy" (Tanzi and Bovenberg 1990, 187). ... Conclusion: The empirical data and theoretical models clearly show that globalization is one relatively minor factor among many that explain tax policy reforms. And even that limited influence is mediated by domestic political systems, institutions and constellations of actors. As the data have shown, the conventional globalization thesis of a race to the bottom is not borne out. Tax rates and tax revenues are still increasing, despite the ongoing trend toward international trade integration. Countervailing pressures such as the high cost of welfare programs, different parties in government, strong labour unions, and institutional veto players counteract the pressure of globalization on tax policy. As for the future of taxation in Canada, it is more likely to be one of gradual evolution than radical change.
Although the data don’t show any downward pressure on tax rates and tax revenues comparatively speaking, there are at least four key factors in Canada that are likely to put pressure on future tax rates, although regional political dynamics and the workings of fiscal federalism suggest that tax reductions will be a higher priority in some provinces than others (Hale 2002). First, neoliberalism will continue to shape fiscal and tax policy, including the role of the tax system in delivering social policies and programs in most parts of Canada. Second, governments that seek to define their own economic and social priorities rather than simply react to events beyond their borders will have to exercise centralized control over budgetary policies and spending levels if they hope to foster the economic growth needed to finance social services in the context of Canada’s changing demographics. Third, the ability of governments to combine the promotion of economic growth and higher living standards will be closely linked to their ability to develop a workable division of responsibilities among federal and provincial governments and with other national governments. Finally, the diffusion of new technologies will continue to transform national and regional economies while giving individuals greater opportunity to avoid government and tax regulations that run contrary to their perceived interests and values. This discussion of determinants that shape tax policy reform has shown that successful management of fiscal and tax policy requires a capacity to set priorities; adapt to changing circumstances; and build a consensus that enables competing economic, social, regional and ideological interests to identify their own well-being in the broader political and economic environment. Tax policy is shaped by many political, economic and social determinants. 
As Geoffrey Hale correctly concludes, "it should not be surprising if the tax system stubbornly refuses to confirm either economic theories or political ideologies, but reflects past decisions and the policy tradeoffs of the political process" (2002, 71). The notion of tax policy being driven by globalization and forces associated with globalization (both positive and negative) is simply not borne out by the facts.
This study analyzes the social policy reforms in the United States and Canada during the 1990s in a comparative perspective. Particular attention is paid to the role of tax policy instruments in these reforms, and to the question of whether a new type of welfare state is emerging here. The first part of the paper sketches the model of the liberal welfare state established in comparative welfare state research, against which the reforms in the United States and Canada are then examined and compared. Subsequently, the output performance of the two welfare states is analyzed in a broader comparative perspective. The primary normative criterion is the redistributive function of social policy instruments, understood here chiefly as income redistribution.
At the beginning of the 21st century, the state of American democracy is being controversially debated. While some observers claim to detect an excessive responsiveness of the political system to the demands of its citizens, and therefore speak of demosclerosis and of a hyperdemocracy in which the will of the people has been elevated to an untouchable, near-divine rank, others conclude that the Founding Fathers, guided by their fear of a "tyranny of the majority", did thorough work and created an almost insurmountable system of veto positions that structurally favors particular interests and therefore translates the majority preferences of citizens into policy only in exceptional situations. In short: the Federalists' fear of a "tyranny of the majority" is said to have opened the door to a "tyranny of the minority". The article attempts to locate the United States within this spectrum. Its aim is to problematize the quality of American democracy at the beginning of the 21st century. Developments after September 11 are also taken into account.
This study evaluates the effects of job creation schemes (Arbeitsbeschaffungsmaßnahmen, ABM) in Germany on participants' individual probabilities of reintegration into regular employment. The analysis draws on an extensive and informative data set compiled from the data sources of the Federal Employment Agency (Bundesagentur für Arbeit, BA), which makes it possible to study programme effects differentiated by individual characteristics of the participants and with due regard to the heterogeneous structure of the labour market. The data set contains information on all ABM participants who entered their programmes in February 2000, and on a control group of non-participants who were unemployed in January 2000 and did not enter the programmes in February 2000. Drawing on the employment statistics, it is possible for the first time to study transitions into regular employment on the basis of administrative data. The observation period extends to December 2002. Using matching methods based on the potential-outcomes approach, the effects of ABM are estimated with regional differentiation and for particular problem and target groups of the labour market. Although the results show clear differences in the effects across subgroups, the empirical findings as a whole indicate that the goal of reintegration into regular, unsubsidized employment was largely not achieved by ABM. JEL: C40, C13, J64, H43, J68
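The potential-outcomes matching approach used in such programme evaluations can be sketched in a few lines. The sketch below is a toy illustration, not the paper's actual estimator: the variable names, the logit participation model, and the assumed zero treatment effect are all hypothetical, and for simplicity the true propensity score is reused instead of being estimated.

```python
import math
import random

random.seed(0)

# Toy data: programme participation depends on prior unemployment duration;
# the outcome is re-employment in the follow-up window.
# The true programme effect is set to zero for illustration.
def simulate(n=5000):
    people = []
    for _ in range(n):
        age = random.uniform(20, 60)
        dur = random.uniform(0, 24)        # months unemployed so far
        p_treat = 1 / (1 + math.exp(-(-2.0 + 0.08 * dur)))
        d = random.random() < p_treat
        p_emp = 1 / (1 + math.exp(-(1.0 - 0.05 * dur - 0.02 * (age - 40))))
        y = random.random() < p_emp
        people.append((age, dur, d, y))
    return people

def pscore(dur):
    # in practice the propensity score is estimated (e.g. by logit);
    # here we reuse the true participation model for simplicity
    return 1 / (1 + math.exp(-(-2.0 + 0.08 * dur)))

data = simulate()
treated = [(pscore(dur), y) for age, dur, d, y in data if d]
controls = [(pscore(dur), y) for age, dur, d, y in data if not d]

# nearest-neighbour matching (with replacement) on the propensity score
diffs = []
for ps, y in treated:
    _, y_match = min(controls, key=lambda c: abs(c[0] - ps))
    diffs.append(int(y) - int(y_match))

att = sum(diffs) / len(diffs)   # average treatment effect on the treated
print(f"ATT estimate: {att:+.3f}")
```

Since the simulated treatment effect is zero, the ATT estimate should be close to zero; the same machinery, applied to real administrative data and an estimated propensity score, yields the subgroup effects reported in studies of this kind.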
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
Open source projects produce goods or standards that do not allow those who contribute to their production to appropriate private returns. In this paper we analyze why programmers nevertheless invest their time and effort to write open source software. We argue that the particular way in which open source projects are managed, and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that more mediocre programmers cannot achieve. By setting themselves apart, they can turn this signal into monetary rewards that correspond to their superior capabilities. Given this incentive, they will forgo the immediate rewards they could earn in software companies that produce proprietary software by restricting access to the source code of their products. Whenever institutional arrangements are in place that enable the acquisition of such a signal and its subsequent conversion into monetary rewards, contributing to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, in building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development, and contrasting the new strategy with former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to creating financially self-sustaining financial institutions with a clear developmental objective.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that for some option pricing models under continuous hedging the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test to be either no longer able to detect additional priced risk factors or to be unable to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the model of BS, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the model of Heston, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios which are similar to those reported in many empirical studies the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore strongly depend on the location of current volatility relative to its long-term mean, and on the degree of moneyness of the option. 
As a consequence, tests reported in the literature may suffer from the problem that, in a time-series framework, the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13. Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
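The discretization problem discussed above is easy to reproduce by simulation. The following sketch is a toy Monte Carlo with assumed parameter values (not those of the paper): it delta-hedges a short call under the Black-Scholes model with weekly versus daily rebalancing and shows that discrete hedging errors do not vanish, shrinking only as the rebalancing interval falls.

```python
import math
import random

random.seed(1)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, r, sigma, tau):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def bs_delta(S, K, r, sigma, tau):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1)

def hedge_error(n_steps, S0=100, K=100, r=0.02, mu=0.08, sigma=0.2, T=0.25):
    # sell one call, receive the premium, and rebalance the delta hedge
    # n_steps times along a simulated GBM path with drift mu
    dt = T / n_steps
    S, delta = S0, bs_delta(S0, K, r, sigma, T)
    cash = bs_call(S0, K, r, sigma, T) - delta * S0
    for i in range(1, n_steps + 1):
        z = random.gauss(0, 1)
        S *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        cash *= math.exp(r * dt)
        tau = T - i * dt
        new_delta = bs_delta(S, K, r, sigma, tau) if tau > 0 else float(S > K)
        cash -= (new_delta - delta) * S
        delta = new_delta
    # terminal hedging error: hedge portfolio value minus option payoff
    return cash + delta * S - max(S - K, 0.0)

def stats(n_steps, n_paths=2000):
    errs = [hedge_error(n_steps) for _ in range(n_paths)]
    m = sum(errs) / n_paths
    sd = (sum((e - m) ** 2 for e in errs) / n_paths) ** 0.5
    return m, sd

m_w, sd_w = stats(13)    # weekly rebalancing over three months
m_d, sd_d = stats(63)    # daily rebalancing
print(f"weekly: mean {m_w:+.3f}, sd {sd_w:.3f}")
print(f"daily : mean {m_d:+.3f}, sd {sd_d:.3f}")
```

Even under the correct model, the hedging error at any finite rebalancing frequency has nonzero dispersion, which is exactly why its mean is a noisy and potentially biased test statistic for a volatility risk premium.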
In a framework closely related to Diamond and Rajan (2001), we characterize different financial systems and analyze the welfare implications of different LOLR policies in these financial systems. We show that in a bank-dominated financial system it is less likely that a LOLR policy following the Bagehot rules is preferable. In financial systems with rather illiquid assets, discretionary individual liquidity assistance may be welfare improving, while in market-based financial systems, with rather liquid assets on the banks' balance sheets, emergency liquidity assistance provided freely to the market at a penalty rate is likely to be efficient. Thus, a "one size fits all" approach that does not take the differences between financial systems into account is misguided. JEL Classification: D52, E44, G21, E52, E58
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size (and not just \emph{the} market price of jump risk). Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us much about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending against the bargaining power of a pivotal relationship bank. We show that firms with low expected cash flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash flows or high interim liquidation values of assets tend to finance without a relationship bank. JEL Classification: G21, G78, G33
This paper suggests a motive for bank mergers that goes beyond alleged and typically unverifiable scale economies: the preemptive resolution of banks' financial distress. Such "distress mergers" can be a significant motivation for mergers because they can foster reorganizations, realize diversification gains, and avoid public attention. However, since none of these potential benefits comes without a cost, the overall assessment of distress mergers is unclear. We conduct an empirical analysis to provide evidence on the consequences of distress mergers. The analysis is based on comprehensive data from Germany's savings and cooperative banking sectors over the period 1993 to 2001. During this period both sectors faced significant structural problems, and superordinate institutions (associations) presumably engaged in coordinated actions to manage distress mergers. The data comprise 3640 banks and 1484 mergers. Our results suggest that bank mergers as a means of preemptive distress resolution have moderate costs in terms of the economic impact on performance. We do find strong evidence consistent with diversification gains. Thus, distress mergers seem to have benefits without adversely affecting systemic stability.
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
The determination of risk-adjusted discount rates is of central importance in business valuation. If the CAPM is used for this purpose in practical applications, risk-free interest rates and risk premia must be determined, for which expected returns on the market portfolio and beta factors are needed as measures of systematic risk. To match the expected surpluses being valued, the required returns used for discounting should also reflect the future returns on comparable investments expected at the valuation date. The vast majority of contributions on operationalizing the CAPM, however, derive required returns from historical capital market returns. In this paper we show how expected future returns can be derived in a forward-looking manner from observable quantities, above all from term structures of interest rates and observable analyst forecasts. This permits a conceptually more consistent valuation of the future surpluses expected at the valuation date with the future returns expected at the same time.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, in particular given the European Union's decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high-quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms which pre-adopted such standards before 2005, but investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital, and therefore measurable economic benefits, for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against concluding that reporting under internationally accepted standards, per se, lowers the cost of equity capital of adopting firms.
In this study, we develop a technique for estimating a firm's expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only information publicly available at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study of DaimlerChrysler and the European automobile industry.
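A minimal sketch of an implied cost-of-equity estimate in the spirit of the residual-income approaches cited above: given a price, book value, analyst EPS forecasts and a growth assumption (all numbers hypothetical), the discount rate that equates model value and price is found by bisection. This illustrates the general idea only, not the paper's specific estimation technique.

```python
# Hypothetical inputs: price, book value, two years of analyst EPS
# forecasts, payout ratio, and a long-run growth assumption.
price = 50.0
book0 = 30.0
eps = [4.0, 4.5]
payout = 0.4
g = 0.02

def value(r):
    # residual-income valuation: book value plus discounted residual
    # income, with a growing perpetuity after the forecast horizon
    b, v = book0, book0
    for t, e in enumerate(eps, start=1):
        v += (e - r * b) / (1 + r) ** t
        b += e * (1 - payout)           # clean-surplus book value update
    ri_next = eps[-1] * (1 + g) - r * b
    v += ri_next / ((r - g) * (1 + r) ** len(eps))
    return v

# bisection for the implied cost of equity: solve value(r) = price
# (value is decreasing in r on this interval)
lo, hi = 0.03, 0.30
for _ in range(100):
    mid = (lo + hi) / 2
    if value(mid) > price:
        lo = mid
    else:
        hi = mid
r_implied = (lo + hi) / 2
print(f"implied cost of equity: {r_implied:.2%}")
```

Repeating this inversion for each firm and each day, with that day's price and consensus forecasts, yields the kind of daily firm-level estimates the study works with.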
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information, we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It will be argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures it supports.
Since the introduction of the German Corporate Governance Code ('the Code') in 2002, German listed companies have been required to issue a declaration of conformity pursuant to § 161 AktG (the comply-or-explain principle). On the basis of this information, compliance with the Code is meant to be monitored, and where appropriate sanctioned, through the pressure of the capital market. It is regularly postulated that above-average compliance or non-compliance with the Code's recommendations is rewarded with price premiums or sanctioned with price discounts, respectively. The results of an event study show that the issuance of the declaration of conformity does not trigger any substantial price reaction, and that the self-regulation by the capital market assumed (and required) for the enforcement of the Code does not take place. It is therefore critically examined whether the flexible regulatory approach chosen for the Code, welcome as it is in principle, constitutes a suitable enforcement mechanism within the system of mandatory German company law. This paper studies the short-run announcement effects of compliance with the German Corporate Governance Code ('the Code') on firm value. Event study results suggest that firm value is unaffected by the announcement, although such market reactions to the first-time disclosure of the declaration of conformity were widely assumed by the private and public promoters of the Code. This result on the acceptance of the German Code adds evidence to the hypothesis that regulatory corporate governance initiatives that rely on mandatory disclosure without monitoring and enforcement are ineffective in civil law countries.
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that financing patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by comparing and analyzing various ways of measuring financial structure and financing patterns and by demonstrating that the surprising empirical results found by studies that relied on net flows are due to a hidden assumption. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. 
The empirical results, which use an estimation technique for determining gross flows of funds in those cases in which empirical data are not available, are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way, and moreover in a way which is largely in line with the general view of the differences between the financial systems of the countries covered in the present paper.
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to force the hot and dense nuclear matter into a deconfined phase.
The present case study of the anciens combattants in Diébougou is the result of a teaching research project of the Johann Wolfgang Goethe-Universität, Frankfurt am Main, Institut für Historische Ethnologie, which took place from 24 August 2001 to 16 December 2001 under the direction of Prof. Carola Lentz, supervised by Dr. Katja Werthmann and Dr. Richard Kuba, and in cooperation with the University of Ouagadougou (Département d'Histoire et d'Archéologie) in Burkina Faso. All of the persons and institutions named are expressly thanked here. I submitted the case study in February 2004 as a Magister thesis at the Fachbereich Geschichtswissenschaften (Historische Ethnologie) of the Johann Wolfgang Goethe-Universität, Frankfurt am Main, with Prof. Carola Lentz and Prof. Karl-Heinz Kohl. Most of the documents, maps and photographs originally contained in the appendix of the thesis have been removed, and the text has been slightly revised. Following a four-week Dioula language course in Bobo-Dioulasso came the three-month fieldwork phase in Diébougou, the capital (chef-lieu) of Bougouriba province in southwestern Burkina Faso and a market as well as administrative centre with 11,637 inhabitants (see http://www.ambf.bf/f_mairies.html). Interethnic relations, settlement history and land rights were the overarching themes of the ethnological subproject A9 of the Sonderforschungsbereich 268 "Kulturentwicklung und Sprachgeschichte im Naturraum Westafrikanische Savanne" of the Johann Wolfgang Goethe-Universität, which financially supported the teaching research project and which I likewise wish to thank.
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Klassifikation: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1 .
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB banks are well capitalized or gain little from taking risks, then they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL Classification: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in macroeconomic performance of the U.S. economy.
Recent evidence on the effect of government spending shocks on consumption cannot be easily reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL Klassifikation: E32, E62.
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely about real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL Classification: E5, E61, D78
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on the intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger cause bond spread changes for a higher number of firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market and the magnitude of this sensitivity increases when credit quality becomes worse. Finally, the CDS market plays a more important role for price discovery than the corporate bond market. JEL Klassifikation: G10, G14, C32.
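A minimal sketch of the lead-lag logic: with simulated data in which stock returns lead CDS spread changes by one period (the direction the paper reports), the asymmetry shows up already in simple cross-correlations. The data-generating process and all parameters below are hypothetical; the paper itself uses a vector autoregressive model with Granger-causality and cointegration tests.

```python
import random

random.seed(2)

# Toy data: stock returns lead CDS spread changes by one period;
# CDS spreads widen after negative stock returns.
n = 2000
stock = [random.gauss(0, 1) for _ in range(n)]
cds = [0.0] * n
for t in range(1, n):
    cds[t] = -0.5 * stock[t - 1] + random.gauss(0, 1)

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# cross-correlations in both directions
stock_leads = corr(stock[:-1], cds[1:])   # stock_{t-1} vs cds_t
cds_leads = corr(cds[:-1], stock[1:])     # cds_{t-1} vs stock_t
print(f"lagged stock vs current CDS: {stock_leads:+.3f}")
print(f"lagged CDS vs current stock: {cds_leads:+.3f}")
```

The first correlation is strongly negative while the second is near zero, mirroring the one-directional lead-lag pattern; a VAR generalizes this check to multiple lags and controls for each series' own history.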
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally-expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over-and-above the direct news announcement effects. JEL Klassifikation: F3, F4, G1, C5
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of efforts. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, when firms are less profitable, and when poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL Classification: D82, G21, G32.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia, spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication, a lower probability of potentially harmful co-investment, and board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which goes hand in hand with an increased probability of investment in high-tech companies. JEL Classification: G24, G31, G32.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL Classification: C1, G1.
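The realized quantities discussed above are simple sums of products of high-frequency returns. A toy computation on synthetic five-minute returns (not the paper's data; the "true" beta of 1.2 is invented) shows how a daily realized beta is built from the realized market variance and the realized covariance:

```python
# Toy realized-beta computation from synthetic intraday data -- not the
# paper's dataset. 78 five-minute returns stand in for one trading day.
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(0.0, 0.001, 78)              # market returns
r = 1.2 * m + rng.normal(0.0, 0.0005, 78)   # stock returns, true beta 1.2

realized_var = np.sum(m ** 2)               # realized market variance
realized_cov = np.sum(r * m)                # realized covariance with market
realized_beta = realized_cov / realized_var # daily realized beta
print(f"realized beta: {realized_beta:.2f}")
```

The paper's point is about the time-series behavior of such daily betas: the numerator and denominator are each highly persistent, but their ratio is much less so.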
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Classification: C62, D83, D84, E31.
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. JEL Classification: E31, E52, E58, E61.
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without a lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future, and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL Classification: C63, E31, E52.
In this article, we investigate risk-return characteristics and diversification benefits when private equity is used as a portfolio component. We use a unique dataset describing 642 US portfolio companies with 3620 private equity investments. Information about precisely dated cash flows at the company level enables, for the first time, a cash-flow-equivalent and simultaneous investment simulation in stocks, as well as the construction of stock portfolios for benchmarking purposes. With respect to the methodology involved, we construct private equity, stock-benchmark and mixed-asset portfolios using bootstrap simulations. For the late 1990s we find a dramatic increase in the extent to which private equity outperforms stock investment; in earlier years private equity underperformed its stock benchmarks. Within the overall class of private equity, returns on earlier private equity investment categories, like venture capital, show on average higher variation and even higher rates of failure. It is in this category in particular that high average portfolio returns are generated solely by the ability to select a few extremely well performing companies, thus compensating for lost investments. There is a high marginal diversifiable risk reduction of about 80% when the portfolio size is increased to include 15 investments. When the portfolio size is increased from 15 to 200 there are few marginal risk diversification effects on the one hand, but a large increase in management expenditure on the other, so that an actual average portfolio size of between 20 and 28 investments seems well balanced. We provide empirical evidence that the non-diversifiable risk that a constrained investor, who invests exclusively in private equity, has to hold exceeds that of constrained stock investors and also the market risk. From the viewpoint of unconstrained investors with complete investment freedom, risk can be optimally reduced by constructing mixed-asset portfolios.
Across the various private equity subcategories analyzed, optimal allocations to this asset class differ substantially, depending on whether the aim is to minimize mixed-asset portfolio variance or to maximize performance ratios. We observe optimal portfolio weightings of between 3% and 65%.
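The diversification pattern described above, a rapidly shrinking marginal risk reduction as portfolio size grows, can be illustrated with a toy bootstrap on an invented fat-tailed return universe (nothing here reproduces the paper's cash-flow data or its exact 80% figure):

```python
# Toy bootstrap: dispersion of equally weighted portfolio returns as a
# function of portfolio size. The return universe is invented (lognormal,
# fat-tailed, many losses), not the paper's private equity data.
import numpy as np

rng = np.random.default_rng(42)
universe = rng.lognormal(mean=-0.5, sigma=1.0, size=5000) - 1.0

def portfolio_risk(k, n_boot=2000):
    """Std. dev. of k-asset equally weighted portfolio returns across draws."""
    draws = rng.choice(universe, size=(n_boot, k), replace=True)
    return draws.mean(axis=1).std()

for k in (1, 15, 200):
    print(f"portfolio size {k:3d}: risk {portfolio_risk(k):.3f}")
```

Risk falls roughly with the square root of portfolio size, so most of the diversifiable risk is gone by the mid-teens, matching the qualitative pattern the paper reports.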
We take a simple time-series approach to modeling and forecasting daily average temperature in U.S. cities, and we inquire systematically as to whether it may prove useful from the vantage point of participants in the weather derivatives market. The answer is, perhaps surprisingly, yes. Time-series modeling reveals conditional mean dynamics, and crucially, strong conditional variance dynamics, in daily average temperature, and it reveals sharp differences between the distribution of temperature and the distribution of temperature surprises. As we argue, it also holds promise for producing the long-horizon predictive densities crucial for pricing weather derivatives, so that additional inquiry into time-series weather forecasting methods will likely prove useful in weather derivatives contexts.
Despite powerful advances in yield curve modeling in the last twenty years, comparatively little attention has been paid to the key practical problem of forecasting the yield curve. In this paper we do so. We use neither the no-arbitrage approach, which focuses on accurately fitting the cross section of interest rates at any given time but neglects time-series dynamics, nor the equilibrium approach, which focuses on time-series dynamics (primarily those of the instantaneous rate) but pays comparatively little attention to fitting the entire cross section at any given time and has been shown to forecast poorly. Instead, we use variations on the Nelson-Siegel exponential components framework to model the entire yield curve, period-by-period, as a three-dimensional parameter evolving dynamically. We show that the three time-varying parameters may be interpreted as factors corresponding to level, slope and curvature, and that they may be estimated with high efficiency. We propose and estimate autoregressive models for the factors, and we show that our models are consistent with a variety of stylized facts regarding the yield curve. We use our models to produce term-structure forecasts at both short and long horizons, with encouraging results. In particular, our forecasts appear much more accurate at long horizons than various standard benchmark forecasts. JEL Classification: G1, E4, C5.
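The three-factor framework sketched above can be made concrete: with the decay parameter fixed, the level, slope and curvature loadings are known functions of maturity, and each period's cross section reduces to a least-squares fit. The decay value 0.0609 (for maturities in months) is the one commonly used in this framework; the yields below are invented for illustration:

```python
# Cross-sectional Nelson-Siegel fit for one period. lambda = 0.0609
# (maturities in months) is the conventional choice; the yields below
# are hypothetical, not data from the paper.
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Level, slope and curvature loadings at maturities tau (months)."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    curve = slope - np.exp(-x)
    return np.column_stack([np.ones_like(tau), slope, curve])

maturities = np.array([3.0, 6.0, 12.0, 24.0, 36.0, 60.0, 120.0])
yields = np.array([4.0, 4.2, 4.5, 4.9, 5.1, 5.4, 5.7])   # hypothetical

X = ns_loadings(maturities)
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)        # level, slope, curvature
fitted = X @ beta
```

Repeating this fit period by period yields three factor time series, to which autoregressions can then be fit for forecasting, as the paper proposes.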
We consider three sets of phenomena that feature prominently - and separately - in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that: (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis.
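Point (a) above has a one-line illustration: if returns are conditionally Gaussian with a constant positive mean and time-varying volatility, the conditional probability of a positive return moves with volatility, so volatility forecastability implies sign forecastability. A sketch with invented daily numbers:

```python
# If r ~ N(mu, sigma_t^2) with mu > 0, then P(r > 0) = Phi(mu / sigma_t),
# which varies with sigma_t: volatility dependence induces sign dependence
# even though the conditional mean is constant. Numbers are hypothetical.
from math import erf, sqrt

def prob_positive(mu, sigma):
    """P(r > 0) for r ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + erf((mu / sigma) / sqrt(2.0)))

mu = 0.0005                              # hypothetical daily expected return
for sigma in (0.005, 0.01, 0.02):        # low / medium / high volatility days
    print(f"sigma={sigma}: P(r>0)={prob_positive(mu, sigma):.4f}")
```

On calm days the sign is more predictable than on turbulent days; as the volatility forecast rises, the probability shrinks toward one half, which is why sign dependence is hard to detect with unconditional autocorrelation or runs tests.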
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
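For the univariate building block, the classic Parkinson estimator recovers a day's variance from the high/low range alone; the multivariate extension then obtains a covariance by applying the same idea to a cross rate, since no-arbitrage makes the log cross rate the difference of the two log rates. The prices below are hypothetical:

```python
# Parkinson range-based variance, the univariate building block:
# one day's return variance is identified by (ln(H/L))^2 / (4 ln 2).
# Prices are hypothetical.
from math import log

def parkinson_var(high, low):
    """Range-based estimate of one day's return variance."""
    return log(high / low) ** 2 / (4.0 * log(2.0))

day_var = parkinson_var(101.5, 99.0)
print(f"range-based variance: {day_var:.6f}")
```

Applying the same estimator to the range of the cross rate delivers Var(x - y), from which Cov(x, y) = [Var(x) + Var(y) - Var(x - y)] / 2 follows without any high-frequency returns, which is what makes the approach robust to bid-ask bounce and asynchronous trading.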
Financial theory creates a puzzle. Some authors argue that high-risk entrepreneurs choose debt contracts instead of equity contracts, since risky but high returns are of relatively more value for a loan-financed firm. By contrast, authors who focus explicitly on start-up finance predict that the riskier their projects are, the more likely entrepreneurs are to seek equity-like venture capital contracts. Our paper makes a first step toward resolving this puzzle empirically. We present microeconometric evidence on the determinants of debt and equity financing in young and innovative SMEs, paying special attention to the role of risk in the choice of financing method. Since risk is not directly observable, we use different indicators for financial and project risk. It turns out that our data generally confirm the hypothesis that the probability that a young high-tech firm receives equity financing is an increasing function of financial risk. With regard to intrinsic project risk, our results are less conclusive, as some of our indicators of a risky project are found to have a negative effect on the likelihood of being financed by private equity.
We study the returns to venture capital and private equity investment using data from 221 venture capital and private equity funds that are part of 72 venture capital and private equity firms, covering 5040 entrepreneurial firms (3826 venture capital and 1214 private equity) and spanning 32 years (1971-2003) and 39 countries in North and South America, Europe and Asia. We make use of four main categories of variables to proxy for the value-added activities and risks that explain venture capital and private equity returns: the market and legal environment, VC characteristics, entrepreneurial firm characteristics, and the characteristics and structure of the investment. We show that Heckman sample selection issues with regard to both unrealized and partially realized investments are important to consider when analysing the determinants of realized returns. We further compare the actual unrealized returns, as reported to investment managers, to the predicted unrealized returns based on the estimates of realized returns from the sample selection models. We show that significant systematic biases exist in the reporting of unrealized investments to institutional investors, depending on the level of the earnings aggressiveness and disclosure indices in a country, as well as on proxies for the degree of information asymmetry between investment managers and venture capital and private equity fund managers. JEL Classification: G24, G28, G31, G32, G35.
We analyze welfare-maximizing monetary policy in a dynamic two-country model with price stickiness and imperfect competition. In this context, a typical terms of trade externality affects policy interaction between independent monetary authorities. Unlike the existing literature, we remain consistent with a public finance approach by explicitly considering all the distortions that are relevant to the Ramsey planner. This strategy entails two main advantages. First, it allows an accurate characterization of optimal policy in an economy that evolves around a steady state which is not necessarily efficient. Second, it allows us to describe a full range of alternative dynamic equilibria when price setters in both countries are completely forward-looking and households' preferences are not restricted. In this context, we study optimal policy both in the long run and along a dynamic path, and we compare optimal commitment policy under Nash competition and under cooperation. By deriving a second-order accurate solution to the policy functions, we also characterize the welfare gains from international policy cooperation. JEL Classification: E52, F41. This version: January 2004. First draft: October 2003.
This paper considers a theoretical model of n asymmetric firms that reduce their initial unit costs by spending on R&D activities. In accordance with Schumpeterian hypotheses, we obtain that more efficient (bigger) firms spend more on R&D, and this leads to a more concentrated market structure. We also find a positive relationship between innovation and market concentration. This calls for a corrective tax on R&D activities to curtail strategic incentives to over-invest in R&D in an attempt to achieve a higher market share. JEL Classification: L11, L52, O31. February 2004.
This paper aims to analyze the impact of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We thereby investigate the hypothesis that the different governance structures, objectives and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set embracing all IPOs that occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after the IPO compared to all other IPOs, and their share prices fluctuate less than those of their counterparts in this period of time. Evidently, independent VCs, which concentrated mainly on growth stocks (low book-to-market ratio) and large firms (high market value), were able to add value, leading to less post-IPO idiosyncratic risk and more return (after controlling for all other effects). By contrast, firms backed by public VCs (being small and having a high book-to-market ratio) showed relative underperformance. JEL Classification: G10, G14, G24. 29 January 2004.
How might retirees consider deploying the retirement assets accumulated in a defined contribution pension plan? One possibility would be to purchase an immediate annuity. Another approach, called the "phased withdrawal" strategy in the literature, would have the retiree invest his funds and then withdraw some portion of the account annually. Under this second tactic, the withdrawal rate might be determined according to a fixed benefit level payable until the retiree dies or the funds run out, or it could be set using a variable formula, where the retiree withdraws funds according to a rule linked to life expectancy. Using a range of data consistent with the German experience, we evaluate several alternative designs for phased withdrawal strategies, allowing for endogenous asset allocation patterns, and also allowing the worker to decide both when to retire and when to switch to an annuity. We show that one particular phased withdrawal rule is appealing, since it offers relatively low expected shortfall risk, good expected payouts for the retiree during his life, and some bequest potential for the heirs. We also find that unisex mortality tables, if used for annuity pricing, can make women's expected shortfalls higher, expected benefits higher, and bequests lower under a phased withdrawal program. Finally, we show that delayed annuitization can be appealing, since it provides higher expected benefits with lower expected shortfalls, at the cost of somewhat lower anticipated bequests. JEL Classification: G22, G23, J26, J32, H55. January 2004.
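A life-expectancy-linked phased withdrawal rule of the kind discussed above has a compact form: each year the retiree withdraws current wealth divided by remaining life expectancy, so the account cannot be exhausted while life expectancy exceeds one year, and payouts track investment performance. A stylized sketch (the balance, life expectancies and returns are invented, not the paper's German calibration):

```python
# Stylized "1/E[T]" phased withdrawal rule: withdraw balance divided by
# remaining life expectancy, then let the rest earn the year's return.
# All inputs below are hypothetical.
def phased_withdrawal(balance, life_expectancies, returns):
    payouts = []
    for e_t, r in zip(life_expectancies, returns):
        w = balance / e_t                 # withdraw fraction 1/E[T]
        payouts.append(w)
        balance = (balance - w) * (1.0 + r)
    return payouts, balance

payouts, remaining = phased_withdrawal(
    100_000.0,
    life_expectancies=[20, 19, 18, 17, 16],  # invented remaining E[T] by year
    returns=[0.05, -0.10, 0.04, 0.06, 0.03], # invented annual returns
)
```

Because withdrawals scale with wealth, bad return years automatically cut payouts rather than deplete the account, which is the mechanism behind the low shortfall risk the paper reports for such rules.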
This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a non-deterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of the correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
Work on proving the congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called "lazy computation systems". Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how Howe's original work can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naïve bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
The New Insider Trading Law (Das neue Insiderrecht)
(2004)
With the entry into force of Art. 1 of the Act to Improve Investor Protection (Anlegerschutzverbesserungsgesetz, AnSVG) on 30 October 2004, the WpHG underwent numerous changes. The following remarks use selected examples to examine the extent to which the Market Abuse Directive and its implementation through the AnSVG have changed the previously applicable insider trading law. It should be noted at the outset that the task of interpreting the new law reasonably accurately is not exactly made easier by the peculiarities of the legislative process that ultimately led to the revised version of the WpHG: the European law requirements are no longer found in a single directive but, owing to the comitology procedure, are spread across numerous legal acts. For insider trading law, several implementing directives and a Commission regulation are relevant in addition to the Market Abuse Directive, and the CESR proposals in turn offer additional guidance for understanding them. Since the German version of the directives already deviates from the respective English versions on a number of points, and the WpHG in turn frequently deviates from the directives, a kind of "telephone game" effect sometimes arises, which makes it more necessary than ever, when interpreting the terms of the WpHG, to ascertain whether the German legislator's implementation stays within the European law framework. On the sanctions side in particular, this is not consistently the case.
European banking groups are not only obliged to prepare consolidated annual financial statements; since the mid-1980s they have additionally been required to determine their entire regulatory capital by means of a further consolidation procedure. The German legislator has codified this procedure in the Banking Act (Kreditwesengesetz). The following contribution discusses open questions that arise in applying the Banking Act provisions on capital consolidation and shows the consequences the consolidation procedure has for the business opportunities of group companies. The subsequent analysis of the procedure's usefulness aims to demonstrate that the obligation to carry out a special supervisory capital consolidation can hardly be justified. The author therefore argues for its abolition and for the introduction of a general obligation to back banks' participations in other banks with regulatory capital.
Bank secrecy constitutes neither an absolute prohibition on passing on customer-related information nor a prohibition on assigning claims against customers. The duty of confidential treatment of customer information that follows from bank secrecy is in turn limited by inherent boundaries insofar as the exercise of the bank's creditor rights is at issue. A sale and assignment of claims, and the transfer of customer data necessary for it, is therefore not precluded by bank secrecy. Bank secrecy does, however, oblige the bank, when exercising its creditor rights, to preserve the confidentiality of information about the business relationship as far as possible. Data protection law likewise imposes no further-reaching limits on the bank's administration and realization of claims.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. 
In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
Comments on the judgment of the German Federal Court of Justice (BGH) of 24 November 2003, II ZR 171/01: The BGH's judgment of 24 November 2003 considerably tightens the law of capital maintenance. The headnote, according to which loans granted to shareholders that are made not out of reserves or retained profits but at the expense of the GmbH's tied assets are in principle to be regarded as a prohibited distribution of company assets even where the repayment claim against the shareholder is of full value in the individual case, together with the accompanying reasoning, gives cause to fear considerable effects not only on the financing practices of small companies, which were at issue in the case decided by the BGH, but also on the internal financing options of large corporate groups.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.