The increasing digitalization of the world of work is accompanied by accelerated structural change, which brings with it changing qualification profiles and thus new challenges for vocational education and training (VET). Companies, vocational schools, and other educational institutions must respond appropriately. The volume focuses on the diverse demands placed on teachers, learners, and educational institutions in VET and aims to provide up-to-date findings on learning in the digital age.
We extend the classical "martingale-plus-noise" model for high-frequency prices by an error correction mechanism originating from prevailing mispricing. The speed of price reversal is a natural measure for informational efficiency. The strength of the price reversal relative to the signal-to-noise ratio determines the signs of the return serial correlation and the bias in standard realized variance estimates. We derive the model's properties and locally estimate it based on mid-quote returns of the NASDAQ 100 constituents. There is evidence of mildly persistent local regimes of positive and negative serial correlation, arising from lagged feedback effects and sluggish price adjustments. The model performance is decidedly superior to existing stylized microstructure models. Finally, we document intraday periodicities in the speed of price reversion and noise-to-signal ratios.
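The mechanism described in this abstract, an efficient martingale price with an observed price that error-corrects prevailing mispricing, can be illustrated with a minimal simulation. All parameter values and variable names below are hypothetical choices for the sketch, not taken from the paper:

```python
import numpy as np

# Simulate an efficient (martingale) price and an observed price that
# error-corrects toward it. Parameters are illustrative only.
rng = np.random.default_rng(0)
n = 20_000
kappa = 0.5            # speed of price reversal (hypothetical)
sigma_m = 0.01         # efficient-price innovation scale (the "signal")
sigma_e = 0.05         # microstructure noise scale

m = np.cumsum(rng.normal(0.0, sigma_m, n))   # efficient log-price: random walk
p = np.empty(n)
p[0] = m[0]
for t in range(1, n):
    # observed price: carry over, correct a fraction kappa of the prevailing
    # mispricing, track the efficient-price move, and add fresh noise
    p[t] = (p[t - 1] + kappa * (m[t - 1] - p[t - 1])
            + (m[t] - m[t - 1]) + rng.normal(0.0, sigma_e))

r = np.diff(p)                                # observed returns
rho1 = np.corrcoef(r[1:], r[:-1])[0, 1]       # first-order serial correlation
print(rho1)  # negative: fast reversal plus heavy noise induces return reversals
```

With this parameterization the mispricing follows an AR(1) with coefficient (1 - kappa), so a fast reversal combined with a large noise-to-signal ratio produces negative return serial correlation, matching the sign logic the abstract describes.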
We define a sentiment indicator that exploits two contrasting views of return predictability, and study its properties. The indicator, which is based on option prices, valuation ratios and interest rates, was unusually high during the late 1990s, reflecting dividend growth expectations that in our view were unreasonably optimistic. We interpret it as helping to reveal irrational beliefs about fundamentals. We show that our measure is a leading indicator of detrended volume, and of various other measures associated with financial fragility. We also make two methodological contributions. First, we derive a new valuation-ratio decomposition that is related to the Campbell and Shiller (1988) loglinearization, but which resembles the traditional Gordon growth model more closely and has certain other advantages for our purposes. Second, we introduce a volatility index that provides a lower bound on the market's expected log return.
We show that time-varying volatility of volatility is a significant risk factor which affects the cross-section and the time-series of index and VIX option returns, beyond volatility risk itself. Volatility and volatility-of-volatility measures, identified model-free from the option price data as the VIX and VVIX indices, respectively, are only weakly related to each other. Delta-hedged index and VIX option returns are negative on average, and are more negative for strategies which are more exposed to volatility and volatility-of-volatility risks. Volatility and volatility of volatility significantly and negatively predict future delta-hedged option payoffs. The evidence is consistent with a no-arbitrage model featuring time-varying market volatility and volatility-of-volatility factors, both of which carry a negative market price of risk.
Discretionary disclosure theory suggests that firms' incentives to provide proprietary versus nonproprietary information differ markedly. To test this conjecture, the paper investigates the incentives of German firms to voluntarily disclose business segment reports and cash flow statements in their annual financial reports. While the former is likely to reveal proprietary information to competitors, the latter is less proprietary in nature. Using these proxies for proprietary and non-proprietary disclosures, respectively, I find that the determinants or at least their relative magnitudes differ in a way consistent with the proprietary cost hypothesis. That is, cash flow statement disclosures appear to be governed primarily by capital-market considerations, whereas segment disclosures are more strongly associated with proxies for product-market and proprietary-cost considerations.
From "Netz-Doktor" to "Health 2.0": the opportunities the internet can offer the chronically ill
(2011)
One in five Germans is already over 65, and demographic change is advancing. With the growing share of older people, the number of chronically ill people is rising steadily. These patients have a particularly high need for up-to-date medical information, which poses new challenges for everyone involved in the health system. Under the heading "Health 2.0", information systems researcher Christoph Rosenkranz examines which interactive possibilities the internet already offers those affected and what it should additionally deliver in the future.
From the Kinderzuschlag to the Kindergeldzuschlag: a reform proposal for combating child poverty
(2007)
Starting from a critical analysis of the Kinderzuschlag (child supplement) introduced in the course of the 2005 Hartz IV reform, this study develops a reform concept for combating child poverty and provides a quantitative estimate of the reform's immediate effects. The proposal builds on basic principles of the general system of family benefits (Familienleistungsausgleich), which, regardless of the cause of poverty, should not only exempt the child's subsistence minimum from taxation but, where necessary, guarantee it through positive transfers, namely a Kindergeldzuschlag (child benefit supplement). This requires a) topping up the child benefit, via a supplement, to the level of the material subsistence minimum, i.e. by up to 150 euros to 304 euros, and for single parents, given the additional needs for the first child, by up to 250 euros to 404 euros; b) no time limit on the Kindergeldzuschlag; c) counting family income only after deducting an allowance equal to the parents' standardized subsistence minimum (1,238 euros for couples, 860 euros for single parents); d) a moderate withdrawal of countable income that is compatible with principles of taxation; we propose a withdrawal rate of 50%; e) no means test on assets. The essential differences from the current Kinderzuschlag lie in replacing the exact calculation of the parental subsistence minimum with a lump sum and in dispensing, first, with an explicit upper income limit (though the withdrawal rate of course implies one) and, second, with a minimum income threshold.
Parents thus remain free to claim the Kindergeldzuschlag even if their income situation and individual housing costs point to a higher ALG II entitlement which they do not take up, whether for fear of stigma, out of ignorance, because they fear being referred first to small savings, or because the bureaucratic effort deters them. Available estimates show that, for these reasons, the extent of hidden poverty is large. A comparatively unbureaucratic Kindergeldzuschlag could counteract this, especially if the paying agency, the Familienkasse, were obliged to alert applicants with very low income to possibly higher ALG II entitlements. To estimate the immediate effects of the reform, a microsimulation model was developed and run in several variants on data from the 2006 German Socio-Economic Panel. Based on an adjusted sample, and depending on the reform variant, 3 to 3.6 million children would potentially benefit, corresponding to about one sixth to one fifth of all children for whom child benefit is received. Among children of single parents, the recipient rate, at a good one third, would be far above average. The gross fiscal costs of the reform model would amount to 3.7 or 4.5 billion euros per year (11% or 13% of current child benefit expenditure); they would be somewhat reduced by savings on the subordinate housing benefit, on education-related transfers, and on ALG II insofar as some eligible households prefer to draw the Kindergeldzuschlag. The average payment per benefit unit entitled to the Kindergeldzuschlag is 190 euros per month; the median is 150 euros.
With this overall limited outlay, a considerable reduction in families' relative income poverty can be achieved. The poverty rate of children for whom child benefit is received, currently about 18%, would fall by about four percentage points after introduction of the Kindergeldzuschlag; the rate for all family members would fall from 16% by three percentage points. At about two thirds, the largest share of potential claimants lives in working families, and the strongest relative reduction in the poverty rate occurs among families with full-time employment. Given families' high employment rate, the reduction in child poverty achieved by the Kindergeldzuschlag would thus go hand in hand with a reduction of in-work poverty. The reform effects are particularly large for single parents, for whom the simulation yields a reduction of the current poverty rate of 40% by about eight percentage points. Even so, their poverty rate would remain at a distressingly high level after introduction of the Kindergeldzuschlag. This is overwhelmingly due to the large number of single parents receiving ALG II and Sozialgeld or social assistance who, by assumption, remain on basic income support after the reform and thus do not claim the prioritized Kindergeldzuschlag. For couple families, the Kindergeldzuschlag shows, in relative terms, a similar effect as for single parents; their poverty rate would fall by one fifth, from currently 12.5% to 10%. The reform effect is larger the more children live in the family.
Couple families that remain below the relative poverty line despite the introduction of the Kindergeldzuschlag are, to a smaller extent than single parents, recipients of subordinate general basic income support and, to a larger extent, cases in which even income raised by the Kindergeldzuschlag does not reach the poverty line. Their situation would nevertheless improve considerably through the reform, since the relative poverty gap would on average fall from 21% to 14%, corresponding to an average income increase of 267 euros. Finally, it should be noted that the reform proposal presented here is to be understood only as a first step toward a general basic income scheme for children. It was designed with quick implementability in mind but should not displace more far-reaching considerations, which must address not only the child's material subsistence minimum but also the needs for care, upbringing, and education (Betreuungs- und Erziehungs- oder Ausbildungsbedarf, BEA) recognized by the Federal Constitutional Court. The BEA is taken into account in income taxation through an allowance (§ 32(6) EStG) but has not entered the calculation of the Kindergeldzuschlag presented here. A systematic further development of the family benefits system in tax law would require the introduction of a uniform (gross) child benefit covering both the material subsistence minimum and the BEA, to be taxed according to the parents' ability to pay, i.e. at the general income tax schedule (Lenze 2007).
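The proposal's headline parameters (maximum top-up, parental income allowance, 50% withdrawal rate) can be sketched as a simple benefit formula. This is an illustrative encoding under stated assumptions, not the study's microsimulation model; the function name and the per-child treatment are hypothetical:

```python
def kindergeldzuschlag(net_income, single_parent=False):
    """Monthly child benefit supplement under the proposed reform (sketch).

    Hypothetical encoding of the study's headline parameters: a top-up of
    at most 150 EUR per child (250 EUR for a single parent's first child),
    a parental income allowance of 1,238 EUR (860 EUR for single parents),
    and a 50% withdrawal rate on income above the allowance.
    """
    max_supplement = 250.0 if single_parent else 150.0
    allowance = 860.0 if single_parent else 1238.0
    countable = max(0.0, net_income - allowance)       # income subject to withdrawal
    return max(0.0, max_supplement - 0.5 * countable)  # 50% taper, floor at zero
```

For example, a couple with 1,400 euros of net income would have 162 euros of countable income and receive 150 - 81 = 69 euros; above 1,538 euros the supplement phases out entirely, which is the implicit upper income limit mentioned in the text.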
The bill always comes at the end, and paying it is rarely a pleasure. But if we pay reluctantly anyway, the payment method itself should at least be simple, available everywhere, and very secure. Security in particular is a key issue for payments in the 21st century, one addressed by interdisciplinary research approaches from economics, computer science, and law in an environment marked by dynamic development.
Variations and disparities between von Hayek and Ordoliberalism can be detected on several levels: 1. philosophy of science; 2. the setting of dissimilar priorities; 3. social philosophy; 4. the genesis of norms; and 5. the notion of freedom. It is therefore possible to draw an important distinction within neoliberalism itself, which contains at least two factions: von Hayek's evolutionary liberalism and German Ordoliberalism. The following essay not only takes this separation of neoliberalism into different varieties as given; it goes further. It focuses on the topic of justice and elaborates the (slightly) differing conceptions of justice within neoliberalism. The specific contribution of the paper is thus that it adds a sixth dimension of difference (one highly interconnected with the differing conceptions of the genesis of norms). I emphasize the (often neglected) subtle differences between von Hayek, Eucken, Röpke, and Rüstow, with special emphasis on their theories of justice. In this regard, I focus not only on Eucken and von Hayek but also include the concepts of justice developed by Rüstow and Röpke, thereby broadening the perspective to incorporate Eucken as a member of the Freiburg School of Law and Economics and Rüstow and Röpke as representatives of Ordoliberalism in the wider sense. The paper tackles these topics in three steps. After briefly examining the existing literature on the decades-long debate on von Hayek and Ordoliberalism, I describe von Hayek's conception of commutative justice, in particular the justice of rules and procedures (rather than end-state justice). I then examine Eucken's, Rüstow's, and Röpke's theories of justice, which consist of a mixture of commutative and distributive justice, and draw a comparison between the ideas of justice developed by Eucken, Röpke, Rüstow, and von Hayek. The essay ends with a summary of my main findings.
According to Orio Giarini, vulnerability comes with two kinds of risk: human-made risks, also called entrepreneurial risks, and natural or pure risks such as accidents and earthquakes. Both types of risk are growing in dimension and are increasingly interrelated. Controlling this vulnerability calls for sophisticated insurance products. Mutual insurance is relevant here, in particular when risks are large, probabilities uncertain or unknown, and events interrelated or correlated. This paper discusses three examples and shows the advantages of mutual insurance in each: unknown probabilities connected with unforeseeable events, correlated risks, and macroeconomic or demographic risks.
Conference report on the workshop "Völkerrecht und Weltwirtschaft im 19. Jahrhundert. Die Internationalisierung der Ökonomie aus völkerrechts- und wirtschafts(theorie-)geschichtlicher Perspektive" (International Law and the World Economy in the Nineteenth Century: The Internationalization of the Economy from the Perspective of the History of International Law and of Economic Theory), held in Frankfurt am Main on 3-4 September 2009. Organizer: the Cluster of Excellence "Die Herausbildung normativer Ordnungen" (The Formation of Normative Orders), in cooperation with Goethe University Frankfurt am Main and the Max Planck Institute for European Legal History.
War, as we have learned from the mass media, exists only where someone is watching; the spectaculum requires a medium. Media attention determines whether wars and their consequences are perceived at all by anyone other than those directly affected. Manipulation of news about war is as old as warfare itself; today, however, manipulation is joined by the struggle for the scarce currency of attention. We have long grown accustomed to bizarre statements such as that the Afghanistan war was the first of the new millennium, while permanently smoldering theaters of war around the world remain unnoticed. And those certain of victory have long been striving above all to win their war in the media; the rest hardly matters. Thomas Scharff's habilitation thesis is actually not about any of this. Actually. Yet with his analysis of texts about the wars of the Carolingians, he comes very close to the 21st century. Not why war is planned, waged, and won, but how the "intellectuals" wrote about war, war as a topic: that is Scharff's declared project. After political and social history, the history of historiography is now to open the view to new facets. ...
The dissertation collects four self-contained essays which contribute to the literature on wage structures, heterogeneous labor demand, and the impact of trade unions. The first paper provides a detailed description of the evolution of wage inequality in East and West Germany in the late years of the twentieth century. In contrast to previous decades, wage inequality has been rising in several dimensions during that period. The second paper identifies cohort effects in the evolution of both wages and employment. Observed structures are consistent with a labor demand framework that incorporates steady skill-biased technical change. Substitutability between skill and age groups in the German labor market is found to be relatively high. Simulations based on estimated elasticities of substitution illustrate that higher wage dispersion between skill groups would have contributed to a reduction in unemployment. The third paper estimates determinants of individual union membership decisions and studies the erosion of union density in East and West Germany. Using corresponding predictions of net union density, the fourth paper analyzes the link between union strength and the structure of wages. A higher union density is associated with lower residual wage dispersion, reduced skill wage differentials, and a lower wage level. This finding is in line with an insurance motive for union action. The thesis comprises the following articles: (1) “Rising Wage Dispersion, After All! The German Wage Structure at the Turn of the Century,” IZA Discussion Paper 2098, April 2006. (2) “Skill Wage Premia, Employment, and Cohort Effects: Are Workers in Germany All of the Same Type?”, IZA Discussion Paper 2185, June 2006, joint with Bernd Fitzenberger. (3) “The Erosion of Union Membership in Germany: Determinants, Densities, Decompositions,” IZA Discussion Paper 2193, July 2006, joint with Bernd Fitzenberger and Qingwei Wang. (4) “Equal Pay for Equal Work? 
On Union Power and the Structure of Wages in West Germany, 1985–1997,” translation of “Gleicher Lohn für gleiche Arbeit? Zum Zusammenhang zwischen Gewerkschaftsmitgliedschaft und Lohnstruktur in Westdeutschland 1985–1997,” Zeitschrift für Arbeitsmarkt-Forschung, 38 (2/3), 125-146, joint with Bernd Fitzenberger, 2005.
This dissertation contains five independent chapters dealing with wage dispersion and unemployment. The first chapter explains international changes in wage inequality and unemployment in the 1980s and 1990s; both theoretically and empirically, social benefits and their link to average income account for the different experiences across countries. The second chapter discusses the search framework as an explanation of residual wage inequality and finds that institutional wage compression has ambiguous effects on employment. In the third chapter, we apply the theory to German data and show that job-to-job transitions are important in explaining both frictions and career advances. In the fourth chapter, we empirically assess the relationship between wage dispersion and unemployment for homogeneous workers. We find that neither a frictional nor a neo-classical view convincingly explains this relationship: unemployment within cells is not negatively correlated with wage dispersion. Finally, the last chapter builds a theoretical model that treats heterogeneous individuals in a production function framework with a frictional labor market. The model generates wage dispersion both within and between skill groups, as well as both frictional and structural unemployment. In sum, the dissertation stresses the importance of modelling frictions to understand different types of wage inequality and unemployment.
The right to ask questions and voice opinions at annual general meetings (AGMs) represents one of the few avenues for shareholders to communicate directly and publicly with a firm's management. Examining AGM transcripts of U.S. companies between 2007 and 2021, we find that shareholders actively express concerns about environmental, social, and governance (ESG) issues in accordance with their specific relationship with the company. They are also demonstrably more vocal about ESG issues at AGMs of firms with poor sustainability performance. Moreover, we show that this soft engagement translates into a more negative tone which, in turn, results in lower approval rates for management proposals. Shareholders' soft engagement at AGMs is hence an effective way to "walk the talk".
As recent newspaper headlines show, the topic of patents and patent laws remains heavily disputed. In this paper I approach the topic from the perspectives of theoretical history and the history of economic thought, linking the patent controversy of the nineteenth century with Walter Eucken's Ordoliberalism, a German version of neoliberalism. The paper is structured as follows: the second chapter provides a historical introduction, centered on the nineteenth-century European controversy and discourse on patent laws and on the arguments for and against presented by the anti-patent/free-trade movement and by the advocates of patent protection, respectively. The focus is on the struggle over the protection of inventions and innovations in nineteenth-century Germany, since Walter Eucken, the main representative of the Freiburg School of Law and Economics, picks up the counter-arguments presented in the national debate, in particular by the Kongress deutscher Volkswirthe. The third chapter deals intensively with the question of whether patent laws are simply 'nonsense upon stilts' from an ordoliberal perspective; here, Eucken's arguments against the prevailing patent system are elaborated in great detail. The paper ends with a summary of my main findings.
Between February and June 2015, the European Central Bank (ECB) expanded emergency liquidity assistance (ELA) for Greek banks from 50 to about 90 billion euros. This sparked a debate among academics, politicians, and practitioners over whether these liquidity measures were lawful. It was alleged that the ECB was knowingly helping to delay the insolvency of already insolvent Greek banks.
We take this allegation as an occasion to examine the principles of the ELA program more closely and to discuss whether the program was lawful in the situation at hand. We first describe, from a financial-economics perspective, the complex relationship between the European Union, the ECB, and the Greek banks, focusing in particular on the economic-policy principles of a monetary union with an incomplete fiscal union (or fiscal consolidation). Against this background, we then analyze the ECB's decision to continue providing liquidity assistance to Greek banks. We conclude that the ECB's actions cannot be characterized as insolvency delay.
"Major financial centers" are narrowly delimited places with a considerable concentration of important professional financial-services activities and the corresponding institutions. However, "finance is a footloose industry": the financial sector can migrate, a financial center can relocate, possibly even simply dissolve. The possibility of dissolution and migration poses a threat that, in an age of globalization and rapid advances in transport, information, and communication technology, is likely more pronounced than it has ever been. Frankfurt is undoubtedly a "major financial center," and some consider it threatened. For that reason alone our topic is important; and even though assessments of its importance and vulnerability are by no means new, the topic remains timely. The aspect of threat shapes how we understand and discuss the question in the title. What is a "major financial center"? Even setting the attribute "major" aside for the moment, the question is by no means trivial. It does not merely aim at a definition or a linguistic convention; behind the concept often stands an idea of the "essence" of what it designates. So: what constitutes a financial center? And further: why do financial centers exist at all as considerable concentrations of certain important activities and institutions? Which forces lead, or at least led, to this spatial concentration of activities and institutions, how do these forces operate, and how, if at all, are they changing? These questions form the core of this contribution and shape its structure. Section II discusses what a "major financial center" is, how one recognizes it, and "what it needs."
Section III first takes up the idea, recently much debated under the heading "the end of geography," of a dissolution or virtualization of financial centers, not because this is the more important threat, but because it is the more fundamental question. We then discuss the competition among financial centers in Europe. We conclude with reflections on the prospects of Frankfurt as a financial center and on how its development might be fostered.
This paper conducts a detailed empirical study of the role of the official brokers (Kursmakler) on the Frankfurt Stock Exchange. The dataset allows an analysis of the impact of broker activity on liquidity and volatility as well as an assessment of the profitability of their proprietary trading.
The brokers' participation in floor trading is substantial. Their proprietary trades account for over 20% of trading volume at batch auction prices and over 40% of trading volume in continuous trading. For the latter, we also document that broker activity contributes to a marked reduction in bid-ask spreads. The effective spread ultimately paid averages less than one third of the spread implied by the order book.
For trading at batch auction prices, we show that price determination by the brokers reduces volatility. An assessment of the brokers' impact on volatility in continuous trading fails because the measure sometimes used for this purpose, the stabilization rate, does not, in our view, yield meaningful results.
On average, the brokers earned no profit from their proprietary trading during our sample period. Decomposing profits into two components shows that positive spread revenues cannot, in the aggregate, compensate for positioning losses.
Overall, our study shows that the official brokers on the German stock exchanges contribute to securing market quality. The implications of these results for the organization of equity trading in Germany are discussed.
The authors present evidence of a new propagation mechanism for wealth inequality, based on differential responses, by education, to greater inequality at the start of economic life. The paper is motivated by a novel positive cross-country relationship between wealth inequality and perceptions of opportunity and fairness, which holds only for the more educated. Using unique administrative micro data and a quasi-field experiment of exogenous allocation of households, the authors find that exposure to a greater top 10% wealth share at the start of economic life in the country leads only the more educated placed in locations with above-median wealth mobility to attain higher wealth levels and position in the cohort-specific wealth distribution later on. Underlying this effect is greater participation in risky financial and real assets and in self-employment, with no evidence for a labor income, unemployment risk, or human capital investment channel. This differential response is robust to controlling for initial exposure to fixed or other time-varying local features, including income inequality, and consistent with self-fulfilling responses of the more educated to perceived opportunities, without evidence of imitation or learning from those at the top.
We use data from the 2009 Internet Survey of the Health and Retirement Study to examine the consumption impact of wealth shocks and unemployment during the Great Recession in the US. We find that many households experienced large capital losses in housing and in their financial portfolios, and that a non-trivial fraction of respondents lost their job. As a consequence of these shocks, many households substantially reduced their expenditures. We estimate that the marginal propensities to consume with respect to housing and financial wealth are 1 and 3.3 percentage points, respectively. In addition, those who became unemployed reduced spending by 10 percent. We also distinguish the effect of perceived transitory and permanent wealth shocks, splitting the sample between households who think that the stock market is likely to recover in a year's time and those who do not. In line with the predictions of standard models of intertemporal choice, we find that the latter group adjusted its spending much more than the former in response to financial wealth shocks.
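A back-of-the-envelope reading of the estimated marginal propensities to consume (1 cent per dollar of housing wealth, 3.3 cents per dollar of financial wealth, plus a 10 percent spending cut upon job loss) can be sketched as follows; the function name and the example figures are illustrative, not from the paper:

```python
# Illustrative consumption-response calculator using the estimates above.
MPC_HOUSING = 0.01      # marginal propensity to consume out of housing wealth
MPC_FINANCIAL = 0.033   # marginal propensity to consume out of financial wealth
UNEMPLOYMENT_CUT = 0.10 # proportional spending cut upon job loss

def spending_change(housing_shock, financial_shock, spending, unemployed=False):
    """Predicted change in spending (negative values are cuts)."""
    change = MPC_HOUSING * housing_shock + MPC_FINANCIAL * financial_shock
    if unemployed:
        change -= UNEMPLOYMENT_CUT * spending
    return change

# A household losing $50,000 in housing and $20,000 in financial wealth
# would be predicted to cut spending by roughly $1,160.
print(round(spending_change(-50_000, -20_000, 40_000), 2))
```

The larger financial-wealth coefficient means that, dollar for dollar, stock market losses bite harder than housing losses in this accounting.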
Numerous empirical studies building on methodological advances in evaluation research have examined the effects of subsidized further vocational training in East Germany in recent years. This article reviews the empirical evidence in the context of the institutional framework and the methodological problems of an adequate micro- or macroeconomic evaluation. In particular, we address two problems that are important in the East German context but have so far received little attention in empirical analyses: multiple program participation and the decline in participants' employment rates shortly before training ("Ashenfelter's dip"). The microeconomic evaluation studies conducted so far rest on survey data with small numbers of participants. Their results paint a sobering picture of the effects of subsidized training: program effects on employment are negative rather than positive. Given the weak data basis and remaining methodological problems, however, the evidence so far does not permit economic-policy conclusions. Instead, policymakers should take the rather negative results as an occasion to create the conditions for an adequate and continuous evaluation of active labor market policy.
To achieve a green transformation of the economy, financial markets and the banks connected to them will have to play an important role. On their own, however, banks and capital markets can accomplish little unless they are seen in the context of a sound political framework and a transparent accounting, at the firm level, of the damages caused. These three pillars figuratively form the supporting substructure of a bridge towards a climate-neutral economic order. Their interplay is a precondition for the financial sector being able to provide the funds needed for the green transformation.
The "generational contract" of the statutory pension insurance has reached the limits of its capacity. As a result, the "first pillar" of old-age provision, which is based on this pay-as-you-go system, has begun to falter. This is due to the increasing ageing of society, but also to persistently high unemployment, which causes enormous shortfalls in contributions. Already today, only about 75 percent of pension payments are covered by the social security contributions of the working population; the rest, much like civil servants' pensions, must be financed from general tax revenues. This poses substantial risks, especially for young contributors. Given these prospects, fewer and fewer young people are willing to accept rising pension contributions in the face of steadily declining benefits. Can funded old-age provision make up for these deficits? How should the many concepts of private old-age provision be assessed?
When relatives take over caregiving : on the costs and benefits of intra-family care agreements
(2007)
Whether an elderly relative can be cared for at home by the family depends on many factors, not only on whether the family members have the necessary caregiving skills, are motivated, or feel morally obliged. When considering the options for care within the family, also referred to as "intra-family care arrangements", economic aspects must be taken into account as well: a person in need of care who decides not to engage an outpatient care service or not to move into a nursing home prefers, in economic terms, home production in the form of family care over purchasing professional care services on external markets. What reasons do families have for preferring intra-family care, and which benefits and costs do they weigh in their care decision? The economic perspective can shed light on interesting aspects when explaining the observable stability and potential advantages of care within families and private households. In this article, these aspects are confronted with the results of a written survey on the effects of statutory long-term care insurance in Hesse.
Since the introduction of the German Corporate Governance Code ("the Code") in 2002, German listed companies have been obliged to issue a declaration of conformity pursuant to § 161 AktG (comply-or-explain principle). On the basis of this information, compliance with the Code is supposed to be monitored, and where necessary sanctioned, through the pressure of the capital market. It is regularly postulated that above-average compliance or non-compliance with the Code's recommendations is rewarded with price premiums or punished with price discounts, respectively. The results of an event study show that the issuance of the declaration of conformity does not trigger any significant price reaction, and that the self-regulation by the capital market assumed (and required) for the enforcement of the Code does not take place. It is therefore critically examined whether the flexible regulatory approach chosen for the Code, welcome as it is in principle, constitutes a suitable enforcement mechanism within the system of mandatory German corporate law. This paper studies the short-run announcement effects of compliance with the German Corporate Governance Code ("the Code") on firm value. Event study results suggest that firm value is unaffected by the announcement, although such market reactions to the first-time disclosure of the declaration of conformity were widely assumed by the private and public promoters of the Code. These results on the acceptance of the German Code add evidence to the hypothesis that regulatory corporate governance initiatives that rely on mandatory disclosure without monitoring and enforcement are ineffective in civil law countries.
This in-depth analysis provides evidence on differences in the practice of supervising large banks in the UK and in the euro area. It identifies the diverging institutional architecture (partially supranationalised vs. national oversight) as a pivotal determinant of the higher effectiveness of supervisory decision-making in the UK. The ECB is likely to take a more stringent stance in prudential supervision than UK authorities. The setting of risk weights and the design of macroprudential stress test scenarios support this hypothesis. This document was provided by the Economic Governance Support Unit at the request of the ECON Committee.
This document was requested by the European Parliament's Committee on Economic and Monetary Affairs. It was originally published on the European Parliament’s webpage: www.europarl.europa.eu/RegData/etudes/IDAN/2021/689443/IPOL_IDA(2021)689443_EN.pdf
In this paper we argue that neither the findings of the SSM's own Thematic Review on Profitability and Business Model nor the academic literature on bank profitability provide support for the business model approach of supervisory guidance. We discuss several reasons why the regulator should stay away from intervening in management practices. We conclude that by taking the role of a coach instead of a referee, the supervisor generates a hazard for financial stability.
The paper discusses the policy implications of the Wirecard scandal. The study finds that all lines of defense against corporate fraud, including internal control systems, external audits, the oversight bodies for financial reporting and auditing and the market supervisor, contributed to the scandal and are in need of reform. To ensure market integrity and investor protection in the future, the authors make eight suggestions for the market and institutional oversight architecture in Germany and in Europe.
The health and genetic data of deceased people are a particularly important asset in the field of biomedical research. However, in practice, using them is complicated, as the legal framework that should regulate their use has not been fully developed yet. The General Data Protection Regulation (GDPR) is not applicable to such data and the Member States have not been able to agree on an alternative regulation. Recently, normative models have been proposed in an attempt to face this issue. The most well-known of these is posthumous medical data donation (PMDD). This proposal supports an opt-in donation system of health data for research purposes. In this article, we argue that PMDD is not a useful model for addressing the issue at hand, as it does not consider that some of these data (the genetic data) may be the personal data of the living relatives of the deceased. Furthermore, we find the reasons supporting an opt-in model less convincing than those that vouch for alternative systems. Indeed, we propose a normative framework that is based on the opt-out system for non-personal data combined with the application of the GDPR to the relatives’ personal data.
What constitutes a financial system in general and the German financial system in particular?
(2003)
This paper is one of the two introductory chapters of the book "The German Financial System". It first discusses two issues that have a general bearing on the entire book, and then provides a broad overview of the German financial system. The first general issue is that of clarifying what we mean by the key term "financial system" and, based on this definition, of showing why the financial system of a country is important and what it might be important for. Obviously, a definition of its subject matter and an explanation of its importance are required at the outset of any book. As we will explain in Section II, we use the term "financial system" in a broad sense which sets it clearly apart from the narrower concept of the "financial sector". The second general issue is that of how financial systems are described and analysed. Obviously, the definition of the object of analysis and the method by which the object is to be analysed are closely related to one another. The remainder of the paper provides a general overview of the German financial system. In addition, it is intended to provide a first indication of how the elements of the German financial system are related to each other, and thus to support our claim from Section II that there is indeed some merit in emphasising the systemic features of financial systems in general and of the German financial system in particular. The chapter concludes by briefly comparing the general characteristics of the German financial system with those of the financial systems of other advanced industrial countries, and taking a brief look at recent developments which might undermine the "systemic" character of the German financial system.
Research results confirm the existence of various forms of international tax planning by multinational firms. Prominent examples for firms employing tax avoidance strategies are Amazon, Google and Starbucks. Increasing availability of administrative data for Europe has enabled researchers to study behavioural responses of European multinationals to taxation. The present paper summarizes what we can learn from these recent studies in general and about German multinationals in particular.
On 23 July 2014, the U.S. Securities and Exchange Commission (SEC) passed the “Money Market Reform: Amendments to Form PF,” designed to prevent investor runs on money market mutual funds such as those experienced in institutional prime funds following the bankruptcy of Lehman Brothers. The present article evaluates the reform choices in the U.S. and draws conclusions for the proposed EU regulation of money market funds.
We show that banks that are facing relatively high locally non-diversifiable risks in their home region expand more across states than banks that do not face such risks following branching deregulation in the 1990s and 2000s. These banks with high locally non-diversifiable risks also benefit relatively more from deregulation in terms of higher bank stability. Further, these banks expand more into counties where risks are relatively high and positively correlated with risks in their home region, suggesting that they do not only diversify but also build on their expertise in local risks when they expand into new regions.
What happened in Cyprus
(2013)
This policy letter sheds light on the economic and political background in Cyprus and provides an analysis of the factors that led to an intensification of the crisis there. It discusses the severe consequences for European economic management as a whole of the errors the Eurogroup made in its recent establishment of an adjustment program for Cyprus.
Participation in further education is a central success factor for economic growth and societal as well as individual development. This is especially true today because in most industrialized countries, labor markets and work processes are changing rapidly. Data on further education, however, show that not everybody participates and that different social groups participate to different degrees. Activities in continuous vocational education and training (CVET) are mainly differentiated as formal, non-formal and informal CVET, whereby further differences between offers of non-formal and informal CVET are seldom elaborated. Furthermore, reasons for participation or non-participation are often neglected. In this study, we therefore analyze and compare predictors for participation in both forms of CVET, namely, non-formal and informal. To learn more about the reasons for participation, we focus on the individual perspective of employees (individual factors, job-related factors, and learning biography) and additionally integrate institutional characteristics (workplace and company-based characteristics). The results mainly show that non-formal CVET is still strongly influenced by institutional settings. In the case of informal CVET, on the other hand, the learning biography plays a central role.
The current economic landscape is complex and globalized, and it imposes on individuals the responsibility for their own financial security. This situation has been intensified by the COVID-19 crisis, since short-time work and layoffs significantly limit the availability of financial resources for individuals. Due to the long duration of the lockdown, these challenges will have a long-term impact and affect the financial well-being of many citizens. Moreover, it can be assumed that the consequences of this crisis will once again particularly affect groups of people who have already frequently been identified as having low financial literacy. Financial literacy is therefore an important target for educational measures and interventions. However, it cannot be considered in isolation but must take into account the many potential factors that influence financial literacy alone or in combination. These include personality traits and socio-demographic factors as well as the (in)ability to defer gratification. Against this background, individualized support offers can be made. With this in mind, in the first step of this study, we analyze the complex interaction of personality traits, socio-demographic factors, the (in)ability to delay gratification, and financial literacy. In the second step, we differentiate the identified effects regarding different groups to identify moderating effects, which, in turn, allow conclusions to be drawn about the need for individualized interventions. The results show that gender and educational background moderate the effects occurring between self-reported financial literacy, financial learning opportunities, delay of gratification, and financial literacy.
Facebook’s proposal to create a global digital currency, Libra, has generated a wide discussion about its potential benefits and drawbacks. This note contributes to this discussion and, first, characterizes similarities and dissimilarities of Libra’s building blocks with existing institutions. Second, the note discusses open questions about Libra which arise from this characterization and, third, potential future developments and their policy implications. A central issue is that Libra raises considerable questions about its role in and impact on the international monetary and financial system that should be addressed before policymakers and regulators give Libra the green light.
Stocks are exposed to the risk of sudden downward jumps. Additionally, a crash in one stock (or index) can increase the risk of crashes in other stocks (or indices). Our paper explicitly takes this contagion risk into account and studies its impact on the portfolio decision of a CRRA investor both in complete and in incomplete market settings. We find that the investor significantly adjusts his portfolio when contagion is more likely to occur. Capturing the time dimension of contagion, i.e. the time span between jumps in two stocks or stock indices, is thus of first-order importance when analyzing portfolio decisions. Investors ignoring contagion completely or accounting for contagion while ignoring its time dimension suffer large and economically significant utility losses. These losses are larger in complete than in incomplete markets, and the investor might be better off if he does not trade derivatives. Furthermore, we emphasize that the risk of contagion has a crucial impact on investors' security demands, since it reduces their ability to diversify their portfolios.
Most event studies rely on cumulative abnormal returns, measured as percentage changes in stock prices, as their dependent variable. Stock price reflects the value of the operating business plus non-operating assets minus debt. Yet, many events, in particular in marketing, only influence the value of the operating business, but not non-operating assets and debt. For these cases, the authors argue that the cumulative abnormal return on the operating business, defined as the ratio between the cumulative abnormal return on stock price and the firm-specific leverage effect, is a more appropriate dependent variable. Ignoring the differences in firm-specific leverage effects inflates the impact of observations pertaining to firms with large debt and deflates those pertaining to firms with large non-operating assets. Observations of firms with high debt receive several times the weight attributed to firms with low debt. A simulation study and the reanalysis of three previously published marketing event studies shows that ignoring the firm-specific leverage effects influences an event study's results in unpredictable ways.
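A hedged numeric sketch of the correction the authors describe (variable names are ours, not theirs): since stock price reflects operating business plus non-operating assets minus debt, an event that moves only the operating business moves equity one-for-one in levels, but the *return* on equity is scaled by a leverage effect equal to operating-business value over equity:

```python
# Leverage-effect adjustment for event-study abnormal returns,
# following the decomposition in the abstract (our own variable names).

def car_operating(car_stock, equity, debt, non_operating_assets):
    """Cumulative abnormal return on the operating business."""
    v_operating = equity + debt - non_operating_assets  # value of operations
    leverage_effect = v_operating / equity              # firm-specific leverage effect
    return car_stock / leverage_effect

# A 2% stock-price CAR for a firm with equity 100, debt 150, non-operating 50:
# V_op = 200, leverage effect = 2, so the operating-business CAR is only 1%.
print(car_operating(0.02, 100, 150, 50))  # → 0.01
```

High-debt firms have a leverage effect above one, so their stock-price CARs overstate the operating effect, which is exactly the inflation the abstract warns about.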
Whatever it takes to understand a central banker : embedding their words using neural networks
(2023)
Dictionary approaches are at the forefront of current techniques for quantifying central bank communication. In this paper, the authors propose a novel language model that is able to capture subtleties of messages, such as one of the most famous sentences in central bank communication, when ECB President Mario Draghi stated that "within [its] mandate, the ECB is ready to do whatever it takes to preserve the euro".
The authors utilize a text corpus that is unparalleled in size and diversity in the central bank communication literature, as well as introduce a novel approach to text quantification from computational linguistics. This allows them to provide high-quality central bank-specific textual representations and demonstrate their applicability by developing an index that tracks deviations in the Fed's communication towards inflation targeting. Their findings indicate that these deviations in communication significantly impact monetary policy actions, substantially reducing the reaction towards inflation deviation in the US.
The ECB’s Outright Monetary Transactions (OMT) program, launched in summer 2012, indirectly recapitalized periphery country banks through its positive impact on the value of sovereign bonds. However, the regained stability of the European banking sector has not fully transferred into economic growth. We show that zombie lending behavior of banks that still remained undercapitalized after the OMT announcement is an important reason for this development. As a result, there was no positive impact on real economic activity like employment or investment. Instead, firms mainly used the newly acquired funds to build up cash reserves. Finally, we document that creditworthy firms in industries with a high prevalence of zombie firms suffered significantly from the credit misallocation, which slowed down the economic recovery.
This in-depth analysis proposes ways to retract from supervisory COVID-19 support measures without perils for financial stability. It simulates the likely impact of the corona crisis on euro area banks’ capital and predicts a significant capital shortfall. We recommend ending accounting practices that conceal loan losses and sustaining capital relief measures. Our in-depth analysis also proposes how to address the impending capital shortfall in resolution/liquidation and a supranational recapitalisation.
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
We consider the continuous-time portfolio optimization problem of an investor with constant relative risk aversion who maximizes expected utility of terminal wealth. The risky asset follows a jump-diffusion model with a diffusion state variable. We propose an approximation method that replaces the jumps by a diffusion and solve the resulting problem analytically. Furthermore, we provide explicit bounds on the true optimal strategy and the relative wealth equivalent loss that do not rely on quantities known only in the true model. We apply our method to a calibrated affine model. Our findings are threefold: Jumps matter more, i.e. our approximation is less accurate, if (i) the expected jump size or (ii) the jump intensity is large. Fixing the average impact of jumps, we find that (iii) rare, but severe jumps matter more than frequent, but small jumps.
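A minimal sketch of the general idea, using a standard moment-matching device rather than the authors' exact method (the functional form and parameter values here are our own assumptions): fold the compound Poisson jump component into the drift via its expected jump return and into the variance via the jumps' second moment, then apply the classical Merton solution for CRRA utility to the resulting pure-diffusion problem:

```python
# Replace jumps by a diffusion with matched first two moments, then use
# the Merton optimal risky share (mu_eff - r) / (gamma * var_eff).
# This is an illustration of the approximation idea, not the paper's model.

def approx_merton_weight(mu, sigma, r, gamma, lam, jump_mean, jump_second_moment):
    """Approximate optimal risky share for a CRRA investor with risk aversion gamma."""
    mu_eff = mu + lam * jump_mean                    # drift incl. expected jump return
    var_eff = sigma**2 + lam * jump_second_moment    # variance incl. jump contribution
    return (mu_eff - r) / (gamma * var_eff)

# Hypothetical calibration: 8% drift, 20% vol, 2% rate, gamma = 5,
# on average one -5% jump per year.
w = approx_merton_weight(0.08, 0.20, 0.02, 5, 1.0, -0.05, 0.05**2)
```

Raising either the jump intensity or the jump size lowers the approximate risky share, consistent with the abstract's finding that jumps matter more when they are severe or frequent.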
We consider the continuous-time portfolio optimization problem of an investor with constant relative risk aversion who maximizes expected utility of terminal wealth. The risky asset follows a jump-diffusion model with a diffusion state variable. We propose an approximation method that replaces the jumps by a diffusion and solve the resulting problem analytically. Furthermore, we provide explicit bounds on the true optimal strategy and the relative wealth equivalent loss that do not rely on results from the true model. We apply our method to a calibrated affine model and find that relative wealth equivalent losses are below 1.16% if the jump size is stochastic and below 1% if the jump size is constant and γ ≥ 5. We perform robustness checks for various levels of risk-aversion, expected jump size, and jump intensity.
In this study, we investigate the wealth decumulation decision from the perspective of a retiree who is averse to the prospect of fully annuitizing her accumulated savings. We field a large online survey of hypothetical product choices for phased drawdown offerings and annuities. While the demand for annuities remains low in our sample, we find significant demand for phased withdrawal products with equity-based asset allocations and flexible payout structures. Consistent with the product choice, the most important self-reported considerations for the wealth decumulation decision are low default risk in the products they purchase, the size of the withdrawal rates, and flexibility in the timing of their withdrawal. As determinants of the decision of how much wealth individuals are willing to draw down, we identify consumers’ attitudes towards future economic conditions, the extent to which they are protected against longevity risk, and their desire to leave bequests. Policy implications are discussed.
This paper studies a household’s optimal demand for a reverse mortgage. These contracts allow homeowners to tap their home equity to finance consumption needs. In stylized frameworks, we show that the decision to enter a reverse mortgage is mainly driven by the differential between the aggregate appreciation of the house price and principal limiting factor on the one hand and the funding costs of a household on the other hand. We also study a rich life-cycle model that can explain the low demand for reverse mortgages as observed in US data. In this model, we analyze the optimal response of a household that is confronted with a health shock or financial disaster. If an agent suffers from an unexpected health shock, she reduces the risky portfolio share and is more likely to enter a reverse mortgage. On the other hand, if there is a large drop in the stock market, she keeps the risky portfolio share almost constant by buying additional shares of stock. Besides, the probability to take out a reverse mortgage is hardly affected.
Having a gatekeeper position in a collaborative network offers firms great potential to gain competitive advantages. However, it is not well understood what kind of collaborations are associated with such a position. Conceptually grounded in social network theory, this study draws on the resource-based view and the relational factors view to investigate which types of collaboration characterize firms that are in a gatekeeper position, which ultimately could improve firm performance in subsequent periods. The empirical analysis utilizes a unique longitudinal data set to examine dynamic network formation. We used a data crawling approach to reconstruct collaboration networks among the 500 largest companies in Germany over nine years and matched these networks with performance data. The results indicate that firms in gatekeeper positions often engage in medium-intensity collaborations and are less likely to engage in weak-intensity collaborations. Strong-intensity collaborations are not related to the likelihood of being a gatekeeper. Our study further reveals that a firm's knowledge base is an important moderator and that this knowledge base can increase the benefits of having a gatekeeper position in terms of firm performance.
This paper compares the dynamics of the financial integration process as described by different empirical approaches. To this end, a wide range of measures accounting for several dimensions of integration is employed. In addition, we evaluate the performance of each measure by relying on an established international finance result, i.e., increasing financial integration leads to declining international portfolio diversification benefits. Using monthly equity market data for three different country groups (i.e., developed markets, emerging markets, developed plus emerging markets) and a dynamic indicator of international portfolio diversification benefits, we find that (i) all measures give rise to a very similar long-run integration pattern; (ii) the standard correlation explains variations in diversification benefits as well as or better than more sophisticated measures. These findings are robust to a battery of robustness checks.
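A minimal sketch (our own construction, not the paper's code) of the simplest integration measure the paper benchmarks: a rolling average pairwise correlation of monthly equity returns, where rising average correlation is read as rising integration and, correspondingly, shrinking diversification benefits:

```python
# Rolling average pairwise correlation as a simple integration measure.
import numpy as np

def avg_pairwise_correlation(returns, window):
    """returns: (T, N) array of monthly returns; one value per window end."""
    T, N = returns.shape
    out = []
    for t in range(window, T + 1):
        c = np.corrcoef(returns[t - window:t].T)  # N x N correlation matrix
        off_diag = c[np.triu_indices(N, k=1)]     # distinct market pairs
        out.append(off_diag.mean())
    return np.array(out)

# Toy example: 3 markets, 60 months of simulated returns, 36-month window.
rng = np.random.default_rng(0)
r = rng.normal(0.01, 0.05, size=(60, 3))
series = avg_pairwise_correlation(r, window=36)
```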
Cryptocurrencies have received growing attention from individuals, the media, and regulators. However, little is known about the investors whom these financial instruments attract. Using administrative data, we describe the investment behavior of individuals who invest in cryptocurrencies with structured retail products. We find that cryptocurrency investors are active traders, prone to investment biases, and hold risky portfolios. In line with attention effects and anticipatory utility, we find that the average cryptocurrency investor substantially increases log-in and trading activity after his or her first cryptocurrency purchase. Our results document which investors are more likely to adopt new financial products and help inform regulators about investors' vulnerability to cryptocurrency investments.
This paper investigates the determinants of value and growth investing in a large administrative panel of Swedish residents over the 1999-2007 period. We document strong relationships between a household’s portfolio tilt and the household’s financial and demographic characteristics. Value investors have higher financial and real estate wealth, lower leverage, lower income risk, lower human capital, and are more likely to be female than the average growth investor. Households actively migrate to value stocks over the life-cycle and, at higher frequencies, dynamically offset the passive variations in the value tilt induced by market movements. We verify that these results are not driven by cohort effects, financial sophistication, biases toward popular or professionally close stocks, or unobserved heterogeneity in preferences. We relate these household-level results to some of the leading explanations of the value premium.
This paper compares the shareholder-value-maximizing capital structure and pricing policy of insurance groups against that of stand-alone insurers. Groups can utilise intra-group risk diversification by means of capital and risk transfer instruments. We show that using these instruments enables the group to offer insurance with less default risk and at lower premiums than is optimal for stand-alone insurers. We also take into account that shareholders of groups could find it more difficult to prevent inefficient overinvestment or cross-subsidisation, which we model by higher dead-weight costs of carrying capital. The tradeoff between risk diversification on the one hand and higher dead-weight costs on the other can result in group building being beneficial for shareholders but detrimental for policyholders.
Manipulative communications touting stocks are common in capital markets around the world. Although the price distortions created by so-called “pump-and-dump” schemes are well known, little is known about the investors in these frauds. By examining 421 “pump-and-dump” schemes between 2002 and 2015 and a proprietary set of trading records for over 110,000 individual investors from a major German bank, we provide evidence on the participation rate, magnitude of the investments, losses, and the characteristics of the individuals who invest in such schemes. Our evidence suggests that participation is quite common and involves sizable losses, with nearly 6% of active investors participating in at least one “pump-and-dump” and an average loss of nearly 30%. Moreover, we identify several distinct types of investors, some of which should not be viewed as falling prey to these frauds. We also show that portfolio composition and past trading behavior can better explain participation in touted stocks than demographics. Our analysis offers insights into the challenges associated with designing effective investor protection against market manipulation.
Who gains from inter-corporate credit? To answer this question we measure the impact of the announcements of inter-corporate loans in China on the stock prices of the firms involved. We find that the average abnormal return for the issuers of inter-corporate loans is significantly negative, whereas it is positive for the receivers. Issuing firms may be perceived by investors to have run out of worthwhile projects to finance, while receiving firms are being certified as creditworthy. Subsequent firm performance and investment confirm that these valuations are broadly accurate.
Homestead exemptions to personal bankruptcy allow households to retain their home equity up to a limit determined at the state level. Households that may experience bankruptcy thus have an incentive to bias their portfolios towards home equity. Using US household data from the Survey of Income and Program Participation for the period 1996-2006, we find that especially households with low net worth maintain a larger share of their wealth as home equity if a larger homestead exemption applies. This home equity bias is also more pronounced if the household head is in poor health, increasing the chance of bankruptcy on account of unpaid medical bills. The bias is further stronger for households with mortgage finance, shorter house tenures, and younger household heads, which taken together reflect households that face more financial uncertainty.
Homestead exemptions to personal bankruptcy allow households to retain their home equity up to a limit determined at the state level. Households that may experience bankruptcy thus have an incentive to bias their portfolios towards home equity. Using US household data for the period 1996 to 2006, we find that household demand for real estate is relatively high if the marginal investment in home equity is covered by the exemption. The home equity bias is more pronounced for younger households that face more financial uncertainty and therefore have a higher ex ante probability of bankruptcy.
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
Who knows what when? : The information content of pre-IPO market prices : [Version March/June 2002]
(2002)
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
We present novel evidence on the value of cross-border political access. We analyze data on meetings of US multinational enterprises (MNEs) with European Commission (EC) policymakers. Meetings with Commissioners are associated with positive abnormal equity returns. We study channels of value creation through political access in the areas of regulation and taxation. US enterprises with EC meetings are more likely to receive favorable outcomes in their European merger decisions and have lower effective tax rates on foreign income than their peers without meetings. Our results suggest that access to foreign policymakers is of substantial value for MNEs.
We consider an additively time-separable life-cycle model for the family of power period utility functions u with u′(c) = c^(−θ), where θ > 0 measures resistance to inter-temporal substitution. The utility maximization problem over life-time consumption is dynamically inconsistent for almost all specifications of effective discount factors. Pollak (1968) shows that the savings behavior of a sophisticated agent and her naive counterpart is always identical for a logarithmic utility function (i.e., for θ = 1). As an extension of Pollak’s result we show that the sophisticated agent saves a greater (smaller) fraction of her wealth in every period than her naive counterpart whenever θ > 1 (θ < 1), irrespective of the specification of discount factors. We further show that this finding extends to an environment with risky returns and dynamically inconsistent Epstein-Zin-Weil preferences.
This paper studies discrete time finite horizon life-cycle models with arbitrary discount functions and iso-elastic per period power utility with concavity parameter θ. We distinguish between the savings behavior of a sophisticated versus a naive agent. Although both agent types have identical preferences, they solve different utility maximization problems whenever the model is dynamically inconsistent. Pollak (1968) shows that the savings behavior of both agent types is nevertheless identical for logarithmic utility (θ = 1). We generalize this result by showing that the sophisticated agent saves in every period a greater fraction of her wealth than the naive agent if and only if θ ≥ 1. While this result goes through for model extensions that preserve linearity of the consumption policy function, it breaks down for non-linear model extensions.
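For reference, the iso-elastic (CRRA) utility family underlying both abstracts can be written out explicitly; this is the standard textbook notation, not a formula taken verbatim from the papers:

```latex
u(c) =
\begin{cases}
\dfrac{c^{1-\theta}}{1-\theta}, & \theta > 0,\ \theta \neq 1,\\[4pt]
\ln c, & \theta = 1,
\end{cases}
\qquad\text{so that}\qquad u'(c) = c^{-\theta}.
```

The logarithmic case θ = 1 is the knife-edge at which, per Pollak (1968), sophisticated and naive savings behavior coincide.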
Who should hold bail-inable debt and how can regulators police holding restrictions effectively?
(2023)
This paper analyses the demand-side prerequisites for the efficient application of the bail-in tool in bank resolution, scrutinises whether the European bank crisis management and deposit insurance (CMDI) framework is apt to establish them, and proposes amendments to remedy identified shortcomings.
The first applications of the new European CMDI framework, particularly in Italy, have shown that a bail-in of debt holders is especially problematic if they are households or other types of retail investors. Such debt holders may be unable to bear losses, and the social implications of bailing them in may create incentives for decision makers to refrain from involving them in bank resolution. In turn, however, if investors can expect resolution authorities (RAs) to behave inconsistently over time and bail out bank capital and debt holders despite earlier vows to involve them in bank rescues, the pricing and monitoring incentives that the crisis management framework seeks to invigorate would vanish. As a result, market discipline would be suboptimal and moral hazard would persist. Therefore, the policy objectives of the CMDI framework will only be achieved if critical bail-in capital is not held by retail investors without sufficient loss-bearing capacity. Currently, neither the CMDI framework nor capital market regulation suffices to ensure that this precondition is met. Some amendments are therefore necessary. In particular, debt instruments that are most likely to absorb losses in resolution should have a high minimum denomination, and banks should not be allowed to self-place such securities.
We show strong overall and heterogeneous economic incidence effects, as well as distortionary effects, of only shifting statutory incidence (i.e., the agent on which taxes are levied), without any tax rate change. For identification, we exploit a tax change and administrative data from the credit market: (i) a policy change in 2018 in Spain shifting an existing mortgage tax from being levied on borrowers to being levied on banks; (ii) some areas, for historical reasons, were exempt from paying this tax (or have different tax rates); and (iii) an exhaustive matched credit register. We find the following robust results: First, after the policy change, the average mortgage rate increases consistently with a strong – but not complete – tax pass-through. Second, there is a large heterogeneity in such pass-through: larger for borrowers with lower income, a smaller number of lending relationships, not working for the lender, or facing fewer banks in their zip code, thereby suggesting a bargaining power mechanism at work. Third, despite no variation in the tax rate, and consistent with the non-full tax pass-through, the tax shift increases banks’ risk-taking. More affected banks reduce costly mortgage insurance in case of loan default (especially so if banks have weaker ex-ante balance sheets) and expand into non-affected but (much) ex-ante riskier consumer lending, experiencing even higher ex-post defaults within consumer loans.
External linkages allow nascent ventures to access crucial resources during the process of new product development. Forming external linkages can substantially contribute to a venture’s performance. However, little is known about the paths of external linkage formation, as well as the circumstances that drive the choice to pursue one rather than another path. This gap deserves further investigation, because we do not know whether insights developed for incumbent firms also apply to nascent ventures. To address this gap, we explore a novel dataset of 370 venture creation processes. Using sequence analyses based on optimal matching techniques and cluster analyses, we reveal that nascent ventures pursue one of four distinct paths of linkage formation activities during new product development. Contrary to the findings of the strategy literature, we find that if nascent ventures engage in external linkages at all, they do not combine exploration- and exploitation-oriented linkages but form either exploration- or exploitation-oriented linkages. Additional regression analyses highlight the circumstances that lead nascent ventures to pursue one path rather than another. Taken together, our analyses point out that resource scarcity constitutes an important factor shaping the linkage formation activities of nascent ventures. Accordingly, we show that nascent ventures tend not to optimize by adding complementary knowledge to the firm’s knowledge base but rather to extend the existing knowledge base—a strategy which we call bricolage.
In the euro area, monetary policy is conducted by a single central bank for 20 member countries. However, countries are heterogeneous in their economic development, including their inflation rates. This paper combines a New Keynesian model and a neural network to assess whether the European Central Bank (ECB) conducted monetary policy between 2002 and 2022 according to the weighted average of the inflation rates within the European Monetary Union (EMU) or reacted more strongly to the inflation rate developments of certain EMU countries.
The New Keynesian model first generates data which is used to train and evaluate several machine learning algorithms. The authors find that a neural network performs best out-of-sample. They use this algorithm to classify historical EMU data and to determine the exact weight on the inflation rate of EMU members in each quarter of the past two decades. Their findings suggest a disproportionate emphasis of the ECB on the inflation rates of EMU members that exhibited high inflation rate volatility for the vast majority of the time frame considered (80%), with a median inflation weight of 67% on these countries. They show that these results stem from a tendency of the ECB to react more strongly to countries whose inflation rates exhibit greater deviations from their long-term trend.
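The pipeline the abstract describes — simulate labeled data from a structural model, fit a classifier on a training split, evaluate it out-of-sample, then apply it to historical observations — can be sketched in miniature. The paper trains a neural network on New Keynesian simulations; here a plain logistic regression in NumPy stands in for the classifier, and the two-feature data-generating process is entirely invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Invented features (think: inflation and output-gap observations); the label
# marks which hypothetical policy-rule regime generated each observation.
X = rng.normal(size=(n, 2))
y = (0.8 * X[:, 0] + 0.2 * X[:, 1] > 0).astype(float)

Xb = np.hstack([X, np.ones((n, 1))])  # append an intercept column
w = np.zeros(3)

# Gradient descent on the logistic loss, using the first 1500 rows as
# the training split.
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb[:1500] @ w))
    w -= 0.5 * (Xb[:1500].T @ (p - y[:1500])) / 1500

# Out-of-sample accuracy on the held-out 500 observations; a trained
# classifier would then be applied to historical data in the same way.
acc = ((Xb[1500:] @ w > 0).astype(float) == y[1500:]).mean()
```

Because the synthetic labels are a deterministic linear function of the features, out-of-sample accuracy is close to one here; in the paper's setting the simulated mapping from observables to regime is noisier, which is why several algorithms are compared out-of-sample first.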
Why bank money creation?
(2022)
We provide a rationale for bank money creation in our current monetary system by investigating its merits over a system with banks as intermediaries of loanable funds. The latter system could result when CBDCs are introduced. In the loanable funds system, households limit banks’ leverage ratios when providing deposits to make sure they have enough “skin in the game” to opt for loan monitoring. When there is unobservable heterogeneity among banks with regard to their (opportunity) costs from monitoring, aggregate lending to bank-dependent firms is inefficiently low. A monetary system with bank money creation alleviates this problem, as banks can initiate lending by creating bank deposits without relying on household funding. With a suitable regulatory leverage constraint, the gains from higher lending by banks with a high repayment pledgeability outweigh losses from banks which are less diligent in monitoring. Bank-risk assessments, combined with appropriate risk-sensitive capital requirements, can reduce or even eliminate such losses.
A large empirical literature has shown that user fees significantly deter public service utilization in developing countries. While most of these results reflect partial equilibrium analysis, we find that the nationwide abolition of public school fees in Kenya in 2003 led to no increase in net public enrollment rates, but rather a dramatic shift toward private schooling. Results suggest this divergence between partial- and general-equilibrium effects is partially explained by social interactions: the entry of poorer pupils into free education contributed to the exit of their more affluent peers.
This Policy Letter presents two event studies based on pre-war data that foreshadow the remarkable way in which the Russian economy was able to withstand the pressure from an unprecedented package of international sanctions. First, it shows that a sudden stop of one of the two domestic producers of zinc in 2018 did not lead to a slowdown in the steel industry, which heavily relied on this input. Second, it demonstrates that a huge increase in the cost of a fuel called mazut in 2020 had virtually no impact on firms that used it, even in regions where it was hard to substitute alternative fuels for it. This Policy Letter argues that such stability in production can be explained by the fact that the Russian economy is heavily oriented toward commodities. It is much easier to replace a commodity supplier than a supplier of manufactured goods, and many commodity producers operate at high profit margins that allow them to continue operating even after large increases in their costs. Thus, sanctions had a much smaller impact on Russia than they would have had on an economy with a larger manufacturing sector, where inputs are less substitutable and profit margins are smaller.
We provide a comprehensive analysis of the determinants of trading in the sovereign credit default swaps (CDS) market, using weekly data for single-name sovereign CDS from October 2008 to September 2015. We describe the anatomy of the sovereign CDS market, derive a law of motion for gross positions and their components, and identify the key factors that drive the cross-sectional and time-series properties of trading volume and net notional amounts outstanding. While a single principal component accounts for 54 percent of the variation in sovereign CDS spreads, the largest common factor explains only 7 percent of the variation in sovereign CDS net notional amounts outstanding. Moreover, unlike for CDS spreads, common global factors explain very little of the variation in sovereign CDS trading and net notional amounts outstanding, suggesting that it is driven primarily by idiosyncratic country risk. We analyze several local and regional channels that may explain the trading in sovereign CDS: (a) country-specific credit risk shocks, including changes in a country's credit rating and related outlook changes, (b) the announcement and issuance of domestic and international debt, (c) macroeconomic sentiment derived from conventional and unconventional monetary policy, macroeconomic news and shocks, and (d) regulatory channels, such as changes in bank capital adequacy requirements. All our findings suggest that sovereign CDS are more likely used for hedging than for speculative purposes.
Identifying the cause of discrimination is crucial to design effective policies and to understand discrimination dynamics. Building on traditional models, this paper introduces a new explanation for discrimination: discrimination based on motivated reasoning. By systematically acquiring and processing information, individuals form motivated beliefs and consequentially discriminate based on these beliefs. Through a series of experiments, I show the existence of discrimination based on motivated reasoning and demonstrate important differences to statistical discrimination and taste-based discrimination. Finally, I demonstrate how this form of discrimination can be alleviated by limiting individuals’ scope to interpret information.
From 1963 through 2015, idiosyncratic risk (IR) is high when market risk (MR) is high. We show that the positive relation between IR and MR is highly stable through time and is robust across exchanges, firm size, liquidity, and market-to-book groupings. Though stock liquidity affects the strength of the relation, the relation is strong for the most liquid stocks. The relation has roots in fundamentals as higher market risk predicts greater idiosyncratic earnings volatility and as firm characteristics related to the ability of firms to adjust to higher uncertainty help explain the strength of the relation. Consistent with the view that growth options provide a hedge against macroeconomic uncertainty, we find evidence that the relation is weaker for firms with more growth options.
The bail-in tool as implemented in the European bank resolution framework suffers from severe shortcomings. To some extent, the regulatory framework can remedy the impediments to the desirable incentive effect of private sector involvement (PSI) that emanate from a lack of predictability of outcomes, if it compels banks to issue a sufficiently sized minimum of high-quality, easy to bail-in (subordinated) liabilities. Yet, even the limited improvements any prescription of bail-in capital can offer for PSI’s operational effectiveness seem compromised in important respects.
The main problem, echoing the general concerns voiced against the European bail-in regime, is that the specifications for minimum requirements for own funds and eligible liabilities (MREL) are also highly detailed and discretionary and thus alleviate the predicament of investors in bail-in debt, at best, only insufficiently. Quite importantly, given the character of typical MREL instruments as non-runnable long-term debt, even if investors are able to gauge the relevant risk of PSI in a bank’s failure correctly at the time of purchase, subsequent adjustment of MREL-prescriptions by competent or resolution authorities potentially changes the risk profile of the pertinent instruments. Therefore, original pricing decisions may prove inadequate, and so may the market discipline that follows from them.
The pending European legislation aims at an implementation of the already complex specifications of the Financial Stability Board (FSB) for Total Loss Absorbing Capacity (TLAC) by very detailed and case specific amendments to both the regulatory capital and the resolution regime with an exorbitant emphasis on proportionality and technical fine-tuning. What gets lost in this approach, however, is the key policy objective of enhanced market discipline through predictable PSI: it is hardly conceivable that the pricing of MREL-instruments reflects an accurate risk-assessment of investors because of the many discretionary choices a multitude of agencies are supposed to make and revisit in the administration of the new regime. To prove this conclusion, this chapter looks in more detail at the regulatory objectives of the BRRD’s prescriptions for MREL and their implementation in the prospectively amended European supervisory and resolution framework.
Why MREL won’t help much
(2017)
This policy letter provides evidence for the crucial importance of the initial regulatory treatment for the further development of financial innovations by exploring the emergence and initial legal framing of off-balance-sheet leasing in Germany. In the absence of a legal framework, lease contracts emerged as an innovative social practice of off-balance-sheet financing. However, this lack of legal framing impeded the development of the financial innovation, as it also created legal uncertainties. This changed with the initial legal framing of leasing in the 1970s, which eliminated those legal uncertainties; off-balance-sheet leasing then entered a period of remarkable growth, while laying the foundation for a regulatory resilience against efforts to abandon the off-balance-sheet treatment of leases. As the initial legal framing is crucial for the further development of a financial innovation, we propose the French approach to the initial vindication of new financial products, in which principles-based rules are aligned with the capabilities of regulators to intervene even when a financial innovation complies with the letter of the law. In this way, regulators could police the frontier of financial innovations and weed out those that are entirely or mainly driven by regulatory arbitrage considerations, while maintaining the beneficial elements of such products.
Wider die schwarze Null
(2019)