Working Paper
This paper examines the role that information provision to the market has played throughout the crisis. We consider two main sources of information to the market: financial statements and information provided by credit rating agencies. We examine how these sources of information work and the effectiveness of their disclosure process during the crisis. Contrary to the commonly held view, fair value accounting did not have a major impact on the crisis development and severity. However, the structure and lack of accountability of credit rating agencies had a profound impact on their incentives, which may have jeopardized the accuracy of the whole rating process. We claim that the crisis experience has changed the way we think about information as well as market discipline, and we discuss policy implications and proposals for regulation. JEL Classification: G01, G24, G28, M41, M48
Prepared by Christian Laux, Vienna University of Economics and Business & Center for Financial Studies (CFS) for the “Workshop on Liquidity Premium in Solvency II: Conceptual and Measurement Issues,” DNB Amsterdam, March 18, 2011. The insurance industry and the Committee of European Insurance and Occupational Pension Supervisors (CEIOPS) propose to add a liquidity premium to the risk-free rate when discounting liabilities in times of financial turmoil. The objective is to counterbalance adverse effects on regulatory capital due to a decrease in asset values caused by illiquidity in a crisis. As I argue in this note, although the motive might be sensible, the proposal to add a liquidity premium when discounting liabilities is not the right approach to tackle the problem.
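The mechanism at issue is simple discounting arithmetic: adding a liquidity premium to the risk-free rate lowers the present value of liabilities and thereby raises reported regulatory capital. A minimal sketch with invented figures (the rates and horizon are assumptions, not taken from the note):

```python
# Effect of a liquidity premium on a discounted insurance liability.
# All figures are illustrative only.
def present_value(cashflow, years, rate):
    return cashflow / (1 + rate) ** years

liability = 100.0  # single payment due in 10 years
pv_rf = present_value(liability, 10, 0.03)          # risk-free rate only
pv_lp = present_value(liability, 10, 0.03 + 0.01)   # plus 100bp liquidity premium
print(round(pv_rf, 2), round(pv_lp, 2))  # the premium shrinks the liability value
```

The lower liability value under the augmented discount rate is exactly the counterbalancing effect on regulatory capital that the proposal aims for, and which the note argues is the wrong tool.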
In this paper we challenge the view that corporate bonds are always arm’s length debt. We analyze the effect of bond ratings on the stock price return to acquirers in M&A transactions, which tend to have significant effects on creditor wealth. We find acquirers’ abnormal returns to be higher if they are unrated, controlling for a wide variety of other effects identified in the literature. Tracing the difference in returns to distinct managerial decisions, we find that, everything else constant, rated firms increase their leverage in takeover transactions by less than their unrated counterparts. Consistent with a significant role for rating agencies, we find monitoring effects to be strongest when acquirer bonds are rated at the borderline between investment grade and junk. Finally, we are able to empirically exclude a large number of alternative explanations for the empirical regularities that we uncover. JEL Classification: G21, G24, G32, G34 Keywords: Acquisitions, Credit Ratings, Mergers and Acquisitions, Arm’s Length Debt, Abnormal Returns
This paper outlines a new method for using qualitative information to analyze the monetary policy strategy of central banks. Quantitative assessment indicators extracted from a central bank's public statements via the balance statistic approach are employed to estimate a Taylor-type rule. This procedure allows us to capture directly a policymaker's assessments of the macroeconomic variables relevant to its decision-making process. As an application of the proposed method, the monetary policy of the Bundesbank is re-investigated with a new dataset. One distinctive feature of the Bundesbank's strategy was its targeting of growth in monetary aggregates. The analysis provides evidence that, since 1975, the Bundesbank indeed took into account not only monetary aggregates but also real economic activity and inflation developments in its monetary policy strategy. JEL Classification: E52, E58, N14 Keywords: Monetary Policy Rule, Statement Indicators, Bundesbank, Monetary Targeting
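A Taylor-type rule of the kind estimated here relates the policy rate to assessments of inflation and real activity. The following sketch fits such a rule by ordinary least squares on synthetic data; in the paper the regressors are qualitative assessment indicators, whereas here they are invented numeric series purely for illustration:

```python
import numpy as np

# Estimate i_t = c + a*pi_t + b*gap_t + e_t by OLS on simulated data.
rng = np.random.default_rng(0)
T = 500
pi = rng.normal(2.0, 1.0, T)    # inflation assessment (illustrative)
gap = rng.normal(0.0, 1.0, T)   # real-activity assessment (illustrative)
i = 1.0 + 1.5 * pi + 0.5 * gap + rng.normal(0, 0.1, T)  # policy rate

X = np.column_stack([np.ones(T), pi, gap])
coef, *_ = np.linalg.lstsq(X, i, rcond=None)
c_hat, a_hat, b_hat = coef
print(round(a_hat, 2), round(b_hat, 2))  # recovers the reaction coefficients
```

The inflation coefficient above one (the "Taylor principle") is what such estimations typically test for; the balance statistic approach replaces `pi` and `gap` with indicators built from central-bank statements.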
In his detailed examination of different philosophical approaches to the principle of "responsibility," Ludger Heidbrink (2003) argues that the standard theory of responsibility rests on three pillars: "the subject of responsibility, the object of responsibility, and the authority of responsibility" (ibid.: pp. 21 f.; emphasis by B. H.). He draws on several philosophical approaches that understand responsibility as a multi-place relation: a person bears (1) responsibility for something (2) before and toward someone (3) and is judged according to certain criteria (4) (among others Lenk/Maring 1993; Höffe 1993). This definition makes clear that "responsibility" is a deeply social principle of action, for a person who acts responsibly always interacts in some form with his or her social environment. Parents, for example, care for their children; employees, within a collegial division of labor, manufacture a product or perform a service for a customer. Even someone who behaves responsibly toward an animal or toward nature thereby fulfills a moral norm whose observance society expects. This shows that a person, even when her actions do not directly concern other people, must answer for the consequences of her conduct before persons or authorities, which means that, within the framework of accountability, she ultimately also enters into interaction with other people. Responsibility for one's actions can be expected only of persons capable of mature judgment. The intersubjective character of the postulate of responsibility normally also allows the persons involved to reach an understanding about the conditions under which the required action is, or was, possible. For in most cases a person's will alone does not suffice for the assumption of responsibility.
The text "Constitutive Rules – Normative or Not? A Look at Their Role in Practices" pursues the question of whether – and if so, in what way – constitutive rules are normative. The challenge is that these rules, or compliance with them, can arguably be described throughout in non-normative terms – essentially as the fulfillment of necessary and/or sufficient conditions. Of course one can, for whatever reasons, demand at any time that a constitutive rule be followed. But that would bring normativity to such rules "from the outside"; what is at issue is whether these rules are themselves normative. Our dealings with these rules, and our everyday talk about them, certainly speak for such an "internal" normativity. In everyday practice, for instance, we describe the following of game rules (as paradigm cases of constitutive rules) as something that is correct or ought to be done – and deviations, accordingly, as violations. The text's line of argument has two parts. In a first step, several kinds of constitutive rules are distinguished. The systematic yield of these conceptual considerations is the proposal that some kinds of constitutive rules can quite unproblematically be characterized as normative phenomena as well, whereas others cannot. In a second step, the text seeks above all to show that some of the genuinely "problematic cases" of constitutive rules can at least be described as weakly normative rules (as distinct from strongly normative phenomena such as obligations or prohibitions). The "weak normativity" of these rules comes to light when one considers their role in practices – in particular, the way in which agents in these practices criticize one another by appeal to constitutive rules without thereby already treating themselves as obligated to follow those rules.
There is ample empirical evidence documenting widespread financial illiteracy and limited pension knowledge. At the same time, the distribution of wealth is widely dispersed and many workers arrive on the verge of retirement with few or no personal assets. In this paper, we investigate the relationship between financial literacy and household net worth, relying on comprehensive measures of financial knowledge designed for a special module of the DNB (De Nederlandsche Bank) Household Survey. Our findings provide evidence of a strong positive association between financial literacy and net worth, even after controlling for many determinants of wealth. Moreover, we discuss two channels through which financial literacy might facilitate wealth accumulation. First, financial knowledge increases the likelihood of investing in the stock market, allowing individuals to benefit from the equity premium. Second, financial literacy is positively related to retirement planning, and the development of a savings plan has been shown to boost wealth. Overall, financial literacy, both directly and indirectly, is found to have a strong link to household wealth. JEL Classification: D91, D12, J26 Keywords: Financial Education, Savings and Wealth Accumulation, Retirement Preparation, Knowledge of Finance and Economics, Overconfidence, Stock Market Participation
This paper studies the impact of the concentration of control, the type of controlling shareholder and the dividend tax preference of the controlling shareholder on dividend policy for a panel of 220 German firms over 1984-2005. While the concentration of control does not have an effect on the dividend payout, there is strong evidence that the type of controlling shareholder matters as family controlled firms have high dividend payouts whereas bank controlled firms have low dividend payouts. However, there is no evidence that the dividend preference of the large shareholder has an impact on the dividend decision. JEL Classification: G32, G35 Keywords: Dividend Policy, Payout Policy, Lintner Dividend Model, Tax Clientele Effects, Corporate Governance
We make three points. First, the decade before the financial crisis in 2007 was characterized by a collapse in the yield on TIPS. Second, estimated VARs for the federal funds rate and the TIPS yield show that while monetary policy shocks had negligible effects on the TIPS yield, shocks to the latter had one-to-one effects on the federal funds rate. Third, these findings can be rationalized in a New Keynesian model.
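The second point rests on a bivariate VAR for the federal funds rate and the TIPS yield. A minimal sketch of such an estimation, run on simulated data in which (as an assumption matching the paper's finding) the TIPS yield feeds into the funds rate but not vice versa:

```python
import numpy as np

# Bivariate VAR(1): y_t = c + A y_{t-1} + e_t, estimated by OLS.
# A_true encodes the paper's pattern: funds rate (row 0) responds to
# the TIPS yield (col 1), the TIPS yield is near-exogenous. Simulated
# data only; not the authors' dataset or specification.
rng = np.random.default_rng(1)
A_true = np.array([[0.90, 0.30],
                   [0.00, 0.95]])
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 0.1, 2)

Y = y[1:]
X = np.column_stack([np.ones(T - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # rows of B: constant, then lags
A_hat = B[1:].T                            # estimated coefficient matrix
print(np.round(A_hat, 2))
```

The zero in the lower-left of `A_hat` versus the sizable upper-right entry is the asymmetry the abstract describes: TIPS-yield shocks move the funds rate, but not the other way around.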
The results of the study can be summarized as follows: 1. The early-warning system of § 91(2) AktG is not a "risk" early-warning system, as IDW PS 340 assumes. That assumption entails excessively broad supposed risk-management and audit duties; a continuous and permanent recording, assessment, and analysis of individual risks is in fact not required. In individual cases a form of risk steering approaching comprehensive risk management may be advisable, but measures going beyond § 91(2) AktG then follow from §§ 76, 93 AktG and are accordingly not subject to the auditor's review under § 317(4) HGB. 2. IDW PS 340 ignores the centrally important matter of liquidity management. Here it is too narrow and not specific enough, since developments threatening the company's existence can also arise from liquidity risks. § 91(2) AktG therefore obliges the company to draw up a financial plan setting future cash inflows against future cash outflows. 3. IDW PS 340 does not insist clearly enough on written documentation of the early-warning system. Putting the early-warning and monitoring systems in writing not only serves the functioning of these systems but also forms the basis of the audit, without which the auditor cannot conduct an adequate review.
We revisit the role of time in measuring the price impact of trades using a new empirical method that combines spread decomposition and dynamic duration modeling. Previous studies which have addressed the issue in a vector-autoregressive framework conclude that times when markets are most active are times when there is an increased presence of informed trading. Our empirical analysis based on recent European and U.S. data offers challenging new evidence. We find that as trade intensity increases, the informativeness of trades tends to decrease. This result is consistent with the predictions of Admati and Pfleiderer’s (1988) rational expectations model, and also with models of dynamic trading like those proposed by Parlour (1998) and Foucault (1999). Our results cast doubt on the common wisdom that fast markets bear particularly high adverse selection risks for uninformed market participants. JEL Classification: G10, C32 Keywords: Price Impact of Trades, Trading Intensity, Dynamic Duration Models, Spread Decomposition Models, Adverse Selection Risk
We present an intertemporal consumption model of consumer investment in financial literacy. Consumers benefit from such investment because their stock of financial literacy allows them to increase the returns on their wealth. Since literacy depreciates over time and has a cost in terms of current consumption, the model determines an optimal investment in literacy. The model shows that financial literacy and wealth are determined jointly, and are positively correlated over the life cycle. Empirically, the model leads to an instrumental variables approach, in which the initial stock of financial literacy (as measured by math performance in school) is used as an instrument for the current stock of literacy. Using microeconomic and aggregate data, we find a strong effect of financial literacy on wealth accumulation and national saving, and also show that ordinary least squares estimates understate the impact of financial literacy on saving. JEL Classification: E2, D8, G1, J24 Keywords: Financial Literacy, Cognitive Abilities, Human Capital, Saving
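The instrumental-variables idea can be sketched in a few lines: measured literacy is a noisy proxy, so OLS is attenuated toward zero (understating the effect, as the abstract reports), while instrumenting with early math performance recovers the true coefficient. All data and magnitudes below are simulated assumptions, not the paper's estimates:

```python
import numpy as np

# 2SLS with one instrument reduces to the Wald/IV ratio of covariances.
rng = np.random.default_rng(2)
n = 20000
math_score = rng.normal(0, 1, n)                      # instrument (pre-determined)
literacy = 0.8 * math_score + rng.normal(0, 1, n)     # true literacy stock
measured = literacy + rng.normal(0, 1, n)             # noisy survey measure
wealth = 1.0 * literacy + rng.normal(0, 1, n)         # true effect = 1.0

# OLS on the noisy measure suffers attenuation bias
b_ols = np.cov(measured, wealth)[0, 1] / np.var(measured)
# IV uses only the variation in measured literacy induced by math scores
b_iv = np.cov(math_score, wealth)[0, 1] / np.cov(math_score, measured)[0, 1]
print(round(b_ols, 2), round(b_iv, 2))  # OLS well below 1.0, IV close to 1.0
```

The instrument works here because math performance predates (and so is uncorrelated with) the measurement noise, exactly the exclusion logic behind using school-age math scores.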
Ernst Bloch pointed out in a particularly emphatic way that the concept of human dignity featured centrally in historical struggles against different forms of unjustified rule, i.e. domination – to which one must add that it continues to do so to the present day. The “upright gait,” putting an end to humiliation and insult: this is the most powerful demand, in both political and rhetorical terms, that a “human rights-based” claim expresses. It marks the emergence of a radical, context-transcending reference point immanent to social conflicts which raises fundamental questions concerning the customary opposition between immanent and transcendent criticism. For within the idiom of demanding respect for human dignity, a right is invoked “here and now,” in a particular, context-specific form, which at its core is owed to every human being as a person. Thus Bloch is in one respect correct when he asserts that human rights are not a natural “birthright” but must be achieved through struggle; but in another respect this struggle can develop its social power only if it has a firm and in a certain sense “absolute” normative anchor. Properly understood, it becomes apparent that these social conflicts always affect “two worlds”: the social reality, on the one hand, which is criticized in part or radically in the light of an ideal normative dimension, on the other. For those who engage in this criticism there is no doubt that the normative dimension is no less real than the reality to which they refuse to resign themselves. Those who critically transcend reality always also live elsewhere.
This paper studies the market quality of an internalization system which is designed as part of an open limit order book (the Xetra system operated by Deutsche Börse AG). The internalization system (Xetra BEST) guarantees a price improvement over the inside spread in the Xetra order book. We develop a structural model of this unique dual market environment and show that, while adverse selection costs of internalized trades are significantly lower than those of regular order book trades, the realized spreads (the revenue earned by the suppliers of liquidity) are significantly larger. The cost savings of the internalizer are larger than the mandatory price improvement. This suggests that internalization can be profitable both for the customer and the internalizer. JEL Classification: G10
This paper analyzes the emergence of systemic risk in a network model of interconnected bank balance sheets. Given a shock to asset values of one or several banks, systemic risk in the form of multiple bank defaults depends on the strength of balance sheets and asset market liquidity. The price of bank assets on the secondary market is endogenous in the model, thereby relating funding liquidity to expected solvency - an important stylized fact of banking crises. Based on the concept of a system value at risk, Shapley values are used to define the systemic risk charge levied upon individual banks. Using a parallelized simulated annealing algorithm the properties of an optimal charge are derived. Among other things we find that there is not necessarily a correspondence between a bank's contribution to systemic risk - which determines its risk charge - and the capital that is optimally injected into it to make the financial system more resilient to systemic risk. The analysis has policy implications for the design of optimal bank levies. JEL Classification: G01, G18, G33 Keywords: Systemic Risk, Systemic Risk Charge, Systemic Risk Fund, Macroprudential Supervision, Shapley Value, Financial Network
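The Shapley-value allocation used to set the systemic risk charge assigns each bank its average marginal contribution to system-wide loss over all orderings of banks. A toy sketch (the quadratic loss function and exposures are invented stand-ins for the paper's system value at risk):

```python
from itertools import combinations
from math import factorial

def system_loss(coalition):
    # Toy convexity: joint distress is worse than the sum of the parts.
    return sum(coalition) + 0.5 * sum(a * b for a, b in combinations(coalition, 2))

def shapley_charges(exposures):
    n = len(exposures)
    charges = []
    for i, x in enumerate(exposures):
        rest = exposures[:i] + exposures[i + 1:]
        phi = 0.0
        for k in range(n):
            for subset in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (system_loss(subset + (x,)) - system_loss(subset))
        charges.append(phi)
    return charges

banks = (1.0, 2.0, 4.0)  # stylized bank exposures
charges = shapley_charges(banks)
print(charges, system_loss(banks))  # charges sum to total system loss
```

The efficiency property (charges exactly exhaust the grand-coalition loss) is what makes Shapley values attractive for a levy; note, as the abstract stresses, that a bank's charge need not coincide with the capital injection that best stabilizes the system.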
The past thirty years have seen dramatic changes to the character of state membership regimes in which practices of easing access to membership for resident non-citizens, extending the franchise to expatriate citizens as well as, albeit in typically more limited ways, to resident non-citizens and an increasing toleration of dual nationality have become widespread. These processes of democratic inclusion, while variously motivated, represent an important trend in the contemporary political order in which we can discern two distinct shifts. The first concerns membership as a status and is characterised in terms of the movement from a simple distinction between single-nationality citizens and single-nationality aliens to a more complex structure of state membership in which we also find dual nationals and denizens (Baubock, 2007a:2395-6). The second shift relates to voting rights and is marked by the movement from the requirement that voting rights are grounded in both citizenship and residence to the relaxing of the joint character of this requirement such that citizenship or residence now increasingly serve as a basis for, at least partial, enfranchisement. In the light of these transformations, it is unsurprising that normative engagement with transnational citizenship – conceived in terms of the enjoyment of membership statuses in two (or more) states – has focused on the issues of access to, and maintenance of, national citizenship, on the one hand, and entitlement to voting rights, on the other hand.
"Buffer-stock" models of saving are now standard in the consumption literature. This paper builds theoretical foundations for rigorous understanding of the main features of such models, including the existence of a target wealth ratio and the proposition that aggregate consumption growth equals aggregate income growth in a small open economy populated by buffer stock savers. JEL Classification: D81, D91, E21 Keywords: Precautionary Saving, Buffer Stock Saving, Marginal Propensity to Consume, Permanent Income Hypothesis
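The target-wealth property can be illustrated by simulation: under a stabilizing consumption rule, the wealth-to-income ratio converges to the same target from very different starting points. The linear rule and parameters below are invented for illustration and are not the model's optimal policy:

```python
import numpy as np

# Wealth-ratio dynamics m' = (R/G)(m - c(m)) + y with c(m) = kappa*m.
rng = np.random.default_rng(4)
R, G, kappa = 1.02, 1.01, 0.35  # interest factor, income growth, MPC

def long_run_wealth(m0, T=2000):
    m = m0
    path = []
    for _ in range(T):
        c = kappa * m                      # ad hoc consumption rule
        y = rng.lognormal(-0.005, 0.1)     # transitory income shock, mean ~1
        m = (R / G) * (m - c) + y
        path.append(m)
    return np.mean(path[-500:])            # average after burn-in

low = long_run_wealth(0.1)    # start nearly broke
high = long_run_wealth(10.0)  # start wealthy
print(round(low, 2), round(high, 2))  # both settle near the same target
```

Convergence from above and below to a common ratio is the defining "buffer-stock" behavior; in the paper this target emerges from optimal precautionary saving rather than an assumed rule.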
The title I have chosen seems to signal a tension, even a contradiction, in a number of respects. Democracy appears to be a form of political organisation and government in which, through general and public participatory procedures, a sufficiently legitimate political will is formed which acquires the force of law. Justice, by contrast, appears to be a value external to this context which is not so much linked to procedures of “input” or “throughput” legitimation but is understood instead as an output- or outcome-oriented concept. At times, justice is even understood as an otherworldly idea which, when transported into the Platonic cave, merely causes trouble and ends up as an undemocratic elite project. In methodological terms, too, this difference is sometimes signalled in terms of a contrast between a form of “worldly” political thought and “abstract” and otherworldly philosophical reflection on justice. In my view, we are bound to talk past the issues to be discussed under the heading “transnational justice and democracy” unless we first root out false dichotomies such as the ones mentioned. My thesis will be that justice must be “secularised” or “grounded” both with regard to how we understand it and to its application to relations beyond the state.
Since World War II, direct stock ownership by households has largely been replaced by indirect stock ownership by financial institutions. We argue that tax policy is the driving force. Using long time-series from eight countries, we show that the fraction of household ownership decreases with measures of the tax benefits of holding stocks inside a pension plan. This finding is important for policy considerations on effective taxation and for financial economics research on the long-term effects of taxation on corporate finance and asset prices. JEL Classification: G10, G20, H22, H30 Keywords: Capital Gains Tax, Income Tax, Stock Ownership, Bond Ownership, Inflation, Bracket Creep, Pension Funds
We analytically show that a Stone-Geary utility function with subsistence consumption, common across rich and poor individuals, in the context of a simple two-asset portfolio-choice model is capable of qualitatively and quantitatively explaining: (i) the higher saving rates of the rich, (ii) the higher fraction of personal wealth held in risky assets by the rich, and (iii) the higher volatility of consumption of the wealthier. On the contrary, a time-varying "keeping-up-with-the-Joneses" weighted average consumption, which plays the role of a moving benchmark subsistence level, gives the same portfolio composition and saving rates across the rich and the poor, failing to reconcile the model with what micro data say. JEL Classification: G11, D91, E21, D81, D14, D11
Based on Foucault’s analysis of German Neoliberalism and his thesis of ambiguity, the following paper draws a two-level distinction between individual and regulatory ethics. The individual ethics level – which has received surprisingly little attention – contains the Christian foundation of values and the liberal-Kantian heritage of so-called Ordoliberalism, one variety of neoliberalism. The regulatory or formal-institutional ethics level, by contrast, refers to the ordoliberal framework of a socio-economic order. By differentiating these two levels of ethics incorporated in German Neoliberalism, it is feasible to distinguish dissimilar varieties of neoliberalism and to link Ordoliberalism to modern economic ethics. Furthermore, it allows a revision of the dominant reception of Ordoliberalism, which focuses solely on the formal-institutional level while largely neglecting the individual ethics level.
Regulations in the pre-Sarbanes–Oxley era allowed corporate insiders considerable flexibility in strategically timing their trades and SEC filings, for example, by executing several trades and reporting them jointly after the last trade. We document that even these lax reporting requirements were frequently violated and that the strategic timing of trades and reports was common. Event study abnormal returns are larger after reports of strategic insider trades than after reports of otherwise similar nonstrategic trades. Our results also imply that delayed reporting is detrimental to market efficiency and lend strong support to the more stringent trade reporting requirements established by the Sarbanes–Oxley Act. JEL Classification: G14, G30, G32 Keywords: Insider Trading, Directors' Dealings, Corporate Governance, Market Efficiency
The overvaluation hypothesis (Miller 1977) predicts that a) stocks are overvalued in the presence of short selling restrictions and that b) the overvaluation increases in the degree of divergence of opinion. We design an experiment that allows us to test these predictions in the laboratory. The results indicate that prices are higher with short selling constraints, but the overvaluation does not increase in the degree of divergence of opinion. We further find that trading volume is lower and bid-ask spreads are higher when short sale restrictions are imposed. JEL Classification: C92, G14 Keywords: Overvaluation Hypothesis, Short Selling Constraints, Divergence of Opinion
I investigate the effect of transparency on the borrowing costs of Emerging Markets Economies. Transparency is measured by whether or not the countries publish the IMF Article IV Staff report and the Reports on the Observance of Standards and Codes (ROSC). Using difference-in-difference estimation, I study the effect on the sovereign credit spreads for 18 Emerging Market Economies over the period 1999-2007. I show that the effect of publishing the Article IV reports is negligible while publishing the ROSC matters, leading to a reduction in the spreads of over 15% in the samples 1999-2006 and 1999-2007. JEL Classification: F33, F34, G15 Keywords: Sovereign Bond Markets, Transparency, Emerging Market Economies
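The difference-in-difference logic compares the change in spreads for countries that publish (treated) with the change for countries that do not (control), so that common time trends cancel. A minimal sketch on invented data (the 15-basis-point effect and all other numbers are assumptions for illustration):

```python
import numpy as np

# DiD estimate = (treated post - treated pre) - (control post - control pre).
rng = np.random.default_rng(3)
n = 5000
treated = rng.integers(0, 2, n)   # publishes the ROSC
post = rng.integers(0, 2, n)      # observation after publication date
effect = -0.15                    # assumed spread reduction from publishing
spread = (1.0 + 0.2 * treated - 0.1 * post
          + effect * treated * post + rng.normal(0, 0.05, n))

def cell_mean(t, p):
    return spread[(treated == t) & (post == p)].mean()

did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(round(did, 2))  # recovers the interaction effect
```

The level difference between publishers and non-publishers (`0.2 * treated`) and the common trend (`-0.1 * post`) both drop out, which is exactly why the design isolates the publication effect.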
Do firms buy their stock at bargain prices? Evidence from actual stock repurchase disclosure
(2011)
We use new data from SEC filings to investigate how S&P 500 firms execute their open market repurchase programs. We find that smaller S&P 500 firms repurchase less frequently than larger firms, and at a price which is significantly lower than the average market price. Their repurchase activity is followed by a positive and significant abnormal return which lasts up to three months after the repurchase. These findings do not hold for large S&P 500 firms. Our interpretation is that small firms repurchase strategically, whereas the repurchase activity of large firms is more focused on the disbursement of free cash. JEL Classification: G14, G30, G35 Keywords: Stock Repurchases, Stock Buybacks, Payout Policy, Timing, Bid-Ask Spread, Liquidity
The euro has done very well since its introduction. Price stability has been secured, and the external value of the new currency is more than satisfactory. The confidence placed in it is also shown by its increasing use as a global reserve currency, and it has been a stabilizing factor in the current crisis. The recent budgetary problems of some member states are not, in principle, a problem of the Monetary Union; it is therefore in no way justified to speak of a "euro crisis". It is true, however, that the Monetary Union restricts the options available to member states for solving their financial problems, but it does not eliminate them so entirely that outside help would have become indispensable. The purchase of debt instruments of member states in financial distress by the ECB is questionable from an economic and, more importantly, from a legal point of view; the longer it continues, the harder it is to justify legally. Financial support for member states in severe financial distress might be acceptable as a temporary crisis resolution mechanism, but a permanent support mechanism needs a basis in the primary law of the EU. The treatment of the risk of "sovereign" debt in the legal framework for financial institutions urgently needs improvement; in particular, the capital requirements for credit institutions have to be adjusted.
In the microstructure literature, information asymmetry is an important determinant of market liquidity. The classic setting is that uninformed dedicated liquidity suppliers charge price concessions when incoming market orders are likely to be informationally motivated. In limit order book markets, however, this relationship is less clear, as market participants can switch roles, and freely choose to immediately demand or patiently supply liquidity by submitting either market or limit orders. We study the importance of information asymmetry in limit order books based on a recent sample of thirty German DAX stocks. We find that Hasbrouck’s (1991) measure of trade informativeness Granger-causes book liquidity, in particular that required to fill large market orders. Picking-off risk due to public news induced volatility is more important for top-of-the-book liquidity supply. In our multivariate analysis we control for volatility, trading volume, trading intensity and order imbalance to isolate the effect of trade informativeness on book liquidity. JEL Classification: G14 Keywords: Price Impact of Trades, Trading Intensity, Dynamic Duration Models, Spread Decomposition Models, Adverse Selection Risk
It has become commonplace to say that, in the past, international governance has been legitimated mainly, if not exclusively, by its welfare-enhancing ‘output’. There has been very little research, however, on the history of legitimating international governance by its output to validate this point. In this essay I begin to address this gap by inquiring into the origins of output-oriented strategies for legitimating international organizations. Scrutinizing the programmatic literature on international organizations from the early 20th century, I illustrate how a new and distinctive account of technocratic legitimation emerged and in the 1920s separated from other types of liberal internationalism. My inquiry, centring on the works of James Arthur Salter, David Mitrany, Paul S. Reinsch and Pitman B. Potter, explores their respective conceptions of ‘good functional governance’, executed by a non-political international technocracy. Their account is explicitly pitched against a notion of ‘international politics’, perceived as violent, polarizing, and irrational. The emergence of such a technocratic legitimation of international governance, I submit, needs to be seen in the context of societal modernization and bureaucratization that unfolded in the first half of the 20th century. I also highlight how in this account the material output of governance is intimately linked to the virtues of the organizational form that brings it about.
Der folgende Aufsatz ist eine geringfügig überarbeitete Version eines Vortrags über das japanische Gesellschaftsrecht anlässlich der im August 2010 veranstalteten Summer School für japanisches Recht an der Goethe-Universität Frankfurt am Main1. Der Vortrag war nicht nur an Juristen und Studierende der Rechtswissenschaften, sondern auch an Japanologen und Studierende der Japanologie gerichtet, weshalb ich mich bemüht habe, mich möglichst nicht in juristischen Details zu verlieren, sondern eine Einführung in das Wesen des japanischen Gesellschaftsrechts zu geben. In den letzten Jahren gab es im Umfeld japanischer Unternehmen zahlreiche bedeutende gesetzliche Änderungen, und es scheint, als würde sich diese Entwicklung zunehmend beschleunigen. In dieser Lage, in der die täglich neu auftretenden Rechtsfragen schnellstmöglich einer Lösung bedürfen, ist es für die Gelehrten des Gesellschaftsrechts oft schwierig, sich wissenschaftlich mit der substantiellen Dimension dieser Rechtsfragen zu beschäftigen. Auch dieser Aufsatz über das japanische Gesellschaftsrecht ist nur eine Skizze zum Gegenstand meines Forschungsgebietes. Die Darstellung beginnt – zum besseren Verständnis der zugrundeliegenden Systematik vor allem für den deutschen Leserkreis – mit einer knappen Erläuterung der Entsprechungen des deutschen Begriffs „Gesellschaft“ im Japanischen. Auf diese terminologische Klärung folgt ein kurzer Überblick zur Geschichte des japanischen Gesellschaftsrechts bis hin zur Erstellung des japanischen Gesellschaftsgesetzes, da zum Verständnis des heutigen japanischen Gesellschaftsrechts eine gewisse Kenntnis seiner Vorgeschichte höchst wichtig ist. Bereits hier sei der wichtige Punkt erwähnt, dass das historische japanische (Aktien-)Gesellschaftsrecht unter dem starken Einfluss Deutschlands und der Vereinigten Staaten von Amerika gestaltet wurde, was sich noch gegenwärtig auf das japanische Gesellschaftsgesetz und die Struktur der japanischen Aktiengesellschaften auswirkt. 
The subsequent sections illustrate these particular features of Japanese company law with reference to the structure of the Japanese stock corporation.
This paper proposes a new approach for modeling investor fear after rare disasters. The key element is to take into account that investors’ information about fundamentals driving rare downward jumps in the dividend process is not perfect. Bayesian learning implies that beliefs about the likelihood of rare disasters drop to a much more pessimistic level once a disaster has occurred. Such a shift in beliefs can trigger massive declines in price-dividend ratios. Pessimistic beliefs persist for some time. Thus, belief dynamics are a source of apparent excess volatility relative to a rational expectations benchmark. Due to the low frequency of disasters, even an infinitely-lived investor will remain uncertain about the exact probability. Our analysis is conducted in continuous time and offers closed-form solutions for asset prices. We distinguish between rational and adaptive Bayesian learning. Rational learners account for the possibility of future changes in beliefs in determining their demand for risky assets, while adaptive learners take beliefs as given. Thus, risky assets tend to be lower-valued and price-dividend ratios vary less under adaptive versus rational learning for identical priors. Keywords: beliefs, Bayesian learning, controlled diffusions and jump processes, learning about jumps, adaptive learning, rational learning. JEL classification: D83, G11, C11, D91, E21, D81, C61
The lessons from QE and other 'unconventional' monetary policies - evidence from the Bank of England
(2011)
This paper investigates the effectiveness of the ‘quantitative easing’ policy, as implemented by the Bank of England in March 2009. Similar policies had been previously implemented in Japan, the U.S. and the Eurozone. The effectiveness is measured by the impact of Bank of England policies (including, but not limited to QE) on nominal GDP growth – the declared goal of the policy, according to the Bank of England. Unlike the majority of the literature on the topic, the general-to-specific econometric modeling methodology (a.k.a. the ‘Hendry’ or ‘LSE’ methodology) is employed for this purpose. The empirical analysis indicates that QE as defined and announced in March 2009 had no apparent effect on the UK economy. Meanwhile, it is found that a policy of ‘quantitative easing’ defined in the original sense of the term (Werner, 1994) is supported by empirical evidence: a stable relationship between a lending aggregate (disaggregated M4 lending, i.e. bank credit for GDP transactions) and nominal GDP is found. The findings imply that BoE policy should more directly target the growth of bank credit for GDP-transactions.
Using life-history survey data from eleven European countries, we investigate whether childhood conditions, such as socioeconomic status, cognitive abilities and health problems influence portfolio choice and risk attitudes later in life. After controlling for the corresponding conditions in adulthood, we find that superior cognitive skills in childhood (especially mathematical abilities) are positively associated with stock and mutual fund ownership. Childhood socioeconomic status, as indicated by the number of rooms and by having at least some books in the house during childhood, is also positively associated with the ownership of stocks, mutual funds and individual retirement accounts, as well as with the willingness to take financial risks. On the other hand, less risky assets like bonds are not affected by early childhood conditions. We find only weak effects of childhood health problems on portfolio choice in adulthood. Finally, favorable childhood conditions affect the transition in and out of risky asset ownership, both by making divesting less likely and by facilitating investing (i.e., transitioning from non-ownership to ownership).
The unintended consequences of the debt ... will increased government expenditure hurt the economy?
(2011)
In 2008, governments in many countries embarked on large fiscal expenditure programmes with the intention of supporting the economy and preventing a more serious recession. In this study, the overall impact of a substantial increase in fiscal expenditure is considered by providing a novel analysis of the most relevant recent experience in similar circumstances, namely that of Japan in the 1990s. At the time, a weak economy with risk-averse banks seemed to require some of the largest peacetime fiscal stimulation programmes on record, albeit with disappointing results. The explanations provided by the literature and their unsatisfactory empirical record are reviewed. An alternative explanation, derived from early Keynesian models of the ineffectiveness of fiscal policy, is presented in the form of a modified Fisher equation which incorporates recent findings from the credit view literature. The model postulates complete quantity crowding out. It is subjected to empirical tests, which prove supportive. Evidence is thus found that fiscal policy, if not supported by suitable monetary policy, is likely to crowd out private sector demand, even in an environment of falling or near-zero interest rates. As a policy conclusion it is pointed out that by changing the funding strategy, complete crowding out can be avoided and a positive net effect produced. The proposed framework creates common ground between proponents of Keynesian views (as held, among others, by Blinder and Solow), monetarist views (as held in particular by Milton Friedman) and those of leading contemporary macroeconomists (such as Mankiw).
We use data from the 2009 Internet Survey of the Health and Retirement Study to examine the consumption impact of wealth shocks and unemployment during the Great Recession in the US. We find that many households experienced large capital losses in housing and in their financial portfolios, and that a non-trivial fraction of respondents lost their jobs. As a consequence of these shocks, many households substantially reduced their expenditures. We estimate that the marginal propensities to consume with respect to housing and financial wealth are 1 and 3.3 percentage points, respectively. In addition, those who became unemployed reduced spending by 10 percent. We also distinguish the effect of perceived transitory and permanent wealth shocks, splitting the sample between households who think that the stock market is likely to recover in a year’s time and those who do not. In line with the predictions of standard models of intertemporal choice, we find that the latter group adjusted its spending much more than the former in response to financial wealth shocks.
Central banks have recently introduced new policy initiatives, including a policy called ‘Quantitative Easing’ (QE). Since it has been argued by the Bank of England that “Standard economic models are of limited use in these unusual circumstances, and the empirical evidence is extremely limited” (Bank of England, 2009b), we have taken an entirely empirical approach and have focused on the QE-experience on which substantial data is available, namely that of Japan (2001-2006). Recent literature on the effectiveness of QE has neglected any reference to final policy goals. In this paper, we adopt the view that ultimately effectiveness will be measured by whether the policy is able to “boost spending” (Bank of England, 2009b) and “will ultimately be judged by their impact on the wider macroeconomy” (Bank of England, 2010). In line with a widely held view among leading macroeconomists of various persuasions, while attempting to stay agnostic and open-minded on the distribution of demand changes between real output and inflation, we have thus identified nominal GDP growth as the key final policy goal of monetary policy. The empirical research finds that the policy conducted by the Bank of Japan between 2001 and 2006 made little empirical difference, while an alternative policy targeting credit creation (the original definition of QE) would likely have been more successful.
Insurance contracts are often complex and difficult to verify outside the insurance relationship. We show that standard one-period insurance policies with an upper limit and a deductible are the optimal incentive-compatible contracts in a competitive market with repeated interaction. Optimal group insurance policies involve a joint upper limit but individual deductibles, and insurance brokers can play a role in implementing such contracts for a group of clients. Our model provides new insights and predictions about the determinants of insurance.
This final report summarizes the results of the study „Berufliche Weiterbildung von Teilzeitkräften" (Continuing Vocational Training for Part-Time Employees). The project ran from 15 June 2010 to 31 March 2011. The study was funded by the Hessian Ministry of Economics, Transport and Regional Development (HMWVL) and the European Social Fund (ESF).
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories of how this process works. We give a short overview of the most influential theories before focusing on the most widely accepted one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory holds that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Based on data from both experimental and aphasiological studies, we develop a model based on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production.
The foundations of today's part-of-speech classifications go back to antiquity: as early as that period, Dionysius Thrax established a scheme of eight parts of speech, comprising nouns, verbs, adjectives, articles, pronouns, prepositions, adverbs and conjunctions. This number varies across the grammatical approaches of our time: the generative approach, for example, works with four parts of speech, whereas Bergenholtz/Schaeder (1977) list no fewer than 51 different parts of speech plus 5 lexeme classes. These strong fluctuations in the assumed number of parts of speech alone illustrate the general difficulty of delimiting parts of speech by clear criteria.
The quotation „Denn sie gliedern sich in Stämme wie die Menschen" („For they are divided into tribes like human beings") from Érik Orsenna's „Die Grammatik ist ein sanftes Lied" gives this thesis its title and at the same time marks an interface between literary studies and linguistics, grammar in particular. As a metalinguistic narrative, Orsenna's tale engages in literary form with language and its grammar. In this thesis I am primarily concerned with analysing the criteria for the classification of parts of speech and with their literary representation and elaboration in Orsenna's text about the words that live together in tribes in the city of words and can be combined into sentences in a factory. Orsenna's original text is a narrative in French. The translator Caroline Vollmann adapted the text to the conditions and particular phenomena of the German language; for this reason I refer to Orsenna and Vollmann jointly as the authors.
Since Orsenna and Vollmann represent the parts of speech primarily through metaphors, and since the words, as „tribes" in a city, are assigned human characteristics, I pay particular attention to the foundations of the cognitive metaphor theory of Lakoff and Johnson. To place the analysis of part-of-speech classification criteria on as sound a scholarly footing as possible, I have selected three grammars as a basis of comparison for the subsequent analysis of Orsenna and Vollmann's text. This provides a syntactically as well as morphologically and semantically oriented perspective on the object of study. From the grammars of Hentschel/Weydt (2003), Helbig/Buscha (2005) and Boettcher (2009), a catalogue of criteria will be compiled in the course of the thesis, which can then be applied to the analysis of the part-of-speech classification in the literary text.
This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin – which tried to establish whether computer-generated algorithms could "recognize" literary genres. You take 'David Copperfield', run it through a program without any human input – "unsupervised", as the expression goes – and ... can the program figure out whether it's a gothic novel or a 'Bildungsroman'? The answer is, fundamentally, Yes: but a Yes with so many complications that it is necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.
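The core idea of "unsupervised" genre recognition can be sketched in a few lines: represent each text as a word-frequency vector and let the texts group themselves by similarity, with no genre labels supplied. The snippet below is a minimal illustration of that idea, not the study's actual method; the toy texts, the bag-of-words features, and the nearest-neighbour grouping are all assumptions for demonstration.

```python
from collections import Counter
import math

def vector(text, vocab):
    """Normalized bag-of-words frequency vector over a shared vocabulary."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def cosine(a, b):
    """Cosine similarity between two frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy stand-ins for novels; the study itself worked on full texts
# with far richer features.
texts = {
    "gothic_a":  "the castle dark ghost fear the castle night dark",
    "gothic_b":  "ghost castle dark night fear the the dark",
    "bildung_a": "young hero learns grows school life young learns",
    "bildung_b": "hero school life grows learns young the hero",
}
vocab = sorted({w for t in texts.values() for w in t.split()})
vecs = {name: vector(t, vocab) for name, t in texts.items()}

# Pair each text with its most similar neighbour -- no genre labels used.
best = {name: max((n for n in vecs if n != name),
                  key=lambda n: cosine(v, vecs[n]))
        for name, v in vecs.items()}
print(best)
```

On this toy data the two "gothic" texts pick each other as nearest neighbours, as do the two "Bildungsroman" texts; the complications the paper describes arise precisely when real texts do not separate this cleanly.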
In the last few years, literary studies have experienced what we could call the rise of quantitative evidence. This had happened before of course, without producing lasting effects, but this time it’s probably going to be different, because this time we have digital databases, and automated data retrieval. As Michel’s and Lieberman’s recent article on "Culturomics" made clear, the width of the corpus and the speed of the search have increased beyond all expectations: today, we can replicate in a few minutes investigations that took a giant like Leo Spitzer months and years of work. When it comes to phenomena of language and style, we can do things that previous generations could only dream of.
When it comes to language and style. But if you work on novels or plays, style is only part of the picture. What about plot – how can that be quantified? This paper is the beginning of an answer, and the beginning of the beginning is network theory. This is a theory that studies connections within large groups of objects: the objects can be just about anything – banks, neurons, film actors, research papers, friends... – and are usually called nodes or vertices; their connections are usually called edges; and the analysis of how vertices are linked by edges has revealed many unexpected features of large systems, the most famous one being the so-called "small-world" property, or "six degrees of separation": the uncanny rapidity with which one can reach any vertex in the network from any other vertex. The theory proper requires a level of mathematical intelligence which I unfortunately lack; and it typically uses vast quantities of data which will also be missing from my paper. But this is only the first in a series of studies we’re doing at the Stanford Literary Lab; and then, even at this early stage, a few things emerge.
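The "degrees of separation" idea mentioned above can be made concrete with very little machinery: a graph is just a set of vertices and edges, and the distance between two vertices is the number of edges on the shortest path connecting them, found by breadth-first search. The sketch below uses a hypothetical character network loosely inspired by Hamlet; the edge list is invented for illustration, not taken from any actual plot analysis.

```python
from collections import deque

# Hypothetical undirected network: nodes are characters, an edge means
# the two characters share a scene (invented data, for illustration only).
edges = [("Hamlet", "Horatio"), ("Hamlet", "Gertrude"),
         ("Gertrude", "Claudius"), ("Claudius", "Polonius"),
         ("Polonius", "Ophelia"), ("Hamlet", "Ophelia")]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def distance(start, goal):
    """Breadth-first search: number of edges on the shortest path."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # unreachable

print(distance("Horatio", "Ophelia"))  # 2: Horatio -> Hamlet -> Ophelia
```

The "small-world" property is simply the observation that in many large real networks such distances stay surprisingly small even as the number of vertices grows; computing them, as here, requires nothing beyond breadth-first search.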