This discussion paper is the expanded and updated version of the chapter "Neoliberalismus und Arzt-Patient-Beziehung" of my book "Zur sozialen Anatomie des Gesundheitswesens. Neoliberalismus und Gesundheitspolitik in Deutschland" (Frankfurt 2005). It deals with the economization, or commercialization, of a social sphere that had long been spared such pressures. The influence of the market and competition on the doctor-patient relationship is described and analyzed, and the important changes that follow from it are pointed out. It becomes apparent that the patient is increasingly turned into a customer and that the physician has to think in ever more entrepreneurial terms. The discretionary scope for medical decisions, from indications to therapeutic interventions, is affected by this to no small degree. This raises ethical concerns of the kind that "critical medicine" voiced decades ago. Health is understood here as a human right. New forms of care based on solidarity are presented as a countermodel to the spreading commercialization.
A large empirical literature has shown that user fees significantly deter public service utilization in developing countries. While most of these results reflect partial equilibrium analysis, we find that the nationwide abolition of public school fees in Kenya in 2003 led to no increase in net public enrollment rates, but rather a dramatic shift toward private schooling. Results suggest this divergence between partial- and general-equilibrium effects is partially explained by social interactions: the entry of poorer pupils into free education contributed to the exit of their more affluent peers.
This paper compares the shareholder-value-maximizing capital structure and pricing policy of insurance groups against that of stand-alone insurers. Groups can utilise intra-group risk diversification by means of capital and risk transfer instruments. We show that using these instruments enables the group to offer insurance with less default risk and at lower premiums than is optimal for stand-alone insurers. We also take into account that shareholders of groups may find it more difficult to prevent inefficient overinvestment or cross-subsidisation, which we model by higher dead-weight costs of carrying capital. The trade-off between risk diversification on the one hand and higher dead-weight costs on the other can result in group building being beneficial for shareholders but detrimental for policyholders.
We use data from the 2009 Internet Survey of the Health and Retirement Study to examine the consumption impact of wealth shocks and unemployment during the Great Recession in the US. We find that many households experienced large capital losses in housing and in their financial portfolios, and that a non-trivial fraction of respondents lost their job. As a consequence of these shocks, many households substantially reduced their expenditures. We estimate that the marginal propensities to consume with respect to housing and financial wealth are 1 and 3.3 percentage points, respectively. In addition, those who became unemployed reduced spending by 10 percent. We also distinguish the effect of perceived transitory and permanent wealth shocks, splitting the sample between households who think that the stock market is likely to recover in a year's time, and those who don't. In line with the predictions of standard models of intertemporal choice, we find that the latter group adjusted its spending in response to financial wealth shocks much more than the former.
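To make these magnitudes concrete, a back-of-the-envelope sketch in Python; the shock sizes below are hypothetical, and only the marginal propensities come from the abstract above:

```python
# Back-of-the-envelope spending response implied by the estimates above.
# The loss amounts are hypothetical; the MPCs are the quoted point estimates.
mpc_housing = 0.01       # 1 cent per dollar of housing wealth lost
mpc_financial = 0.033    # 3.3 cents per dollar of financial wealth lost

housing_loss = 50_000    # hypothetical capital loss on the home, in dollars
financial_loss = 20_000  # hypothetical loss on stocks and funds, in dollars

drop = mpc_housing * housing_loss + mpc_financial * financial_loss
print(f"Implied annual spending cut: ${drop:,.0f}")  # -> $1,160
```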
I investigate the effect of transparency on the borrowing costs of Emerging Market Economies. Transparency is measured by whether or not a country publishes the IMF Article IV Staff Report and the Reports on the Observance of Standards and Codes (ROSC). Using difference-in-differences estimation, I study the effect on sovereign credit spreads for 18 Emerging Market Economies over the period 1999-2007. I show that the effect of publishing the Article IV reports is negligible, while publishing the ROSC matters, leading to a reduction in spreads of over 15% in the samples 1999-2006 and 1999-2007. JEL Classification: F33, F34, G15. Keywords: Sovereign Bond Markets, Transparency, Emerging Market Economies
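A minimal sketch of the difference-in-differences design on synthetic data; the variable names, sample size, and the -15% effect built into the simulation are illustrative, not the paper's data:

```python
# Difference-in-differences sketch (synthetic data, not the paper's sample).
# 'publisher' marks countries that eventually publish the ROSC; 'post' marks
# country-periods after publication. The DiD effect is the interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "publisher": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
# log spreads with a hypothetical -15% treatment effect built in
df["log_spread"] = (5.0 - 0.15 * df["publisher"] * df["post"]
                    + 0.1 * df["publisher"] + rng.normal(0, 0.05, n))

m = smf.ols("log_spread ~ publisher * post", data=df).fit()
print(m.params["publisher:post"])  # recovers roughly -0.15
```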
The title I have chosen seems to signal a tension, even a contradiction, in a number of respects. Democracy appears to be a form of political organisation and government in which, through general and public participatory procedures, a sufficiently legitimate political will is formed which acquires the force of law. Justice, by contrast, appears to be a value external to this context which is not so much linked to procedures of “input” or “throughput” legitimation but is understood instead as an output- or outcome-oriented concept. At times, justice is even understood as an otherworldly idea which, when transported into the Platonic cave, merely causes trouble and ends up as an undemocratic elite project. In methodological terms, too, this difference is sometimes signalled in terms of a contrast between a form of “worldly” political thought and “abstract” and otherworldly philosophical reflection on justice. In my view, we are bound to talk past the issues to be discussed under the heading “transnational justice and democracy” unless we first root out false dichotomies such as the ones mentioned. My thesis will be that justice must be “secularised” or “grounded” both with regard to how we understand it and to its application to relations beyond the state.
The past thirty years have seen dramatic changes to the character of state membership regimes in which practices of easing access to membership for resident non-citizens, extending the franchise to expatriate citizens as well as, albeit in typically more limited ways, to resident non-citizens and an increasing toleration of dual nationality have become widespread. These processes of democratic inclusion, while variously motivated, represent an important trend in the contemporary political order in which we can discern two distinct shifts. The first concerns membership as a status and is characterised in terms of the movement from a simple distinction between single-nationality citizens and single-nationality aliens to a more complex structure of state membership in which we also find dual nationals and denizens (Baubock, 2007a:2395-6). The second shift relates to voting rights and is marked by the movement from the requirement that voting rights are grounded in both citizenship and residence to the relaxing of the joint character of this requirement such that citizenship or residence now increasingly serve as a basis for, at least partial, enfranchisement. In the light of these transformations, it is unsurprising that normative engagement with transnational citizenship – conceived in terms of the enjoyment of membership statuses in two (or more) states – has focused on the issues of access to, and maintenance of, national citizenship, on the one hand, and entitlement to voting rights, on the other hand.
Towards correctness of program transformations through unification and critical pair computation
(2011)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions.
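For orientation, a plain first-order syntactic unification sketch in Python (Robinson-style); it deliberately omits the sorts, context variables, and equational theory the abstract adds on top:

```python
# First-order syntactic unification, the baseline machinery that the paper's
# algorithm extends. Terms: variables are strings starting with an uppercase
# letter; applications are (symbol, [args]) tuples.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return not is_var(t) and any(occurs(v, a, subst) for a in t[1])

def unify(s, t, subst=None):
    subst = dict(subst or {})
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    (f, fargs), (g, gargs) = s, t
    if f != g or len(fargs) != len(gargs):
        return None  # symbol clash: no unifier exists
    for a, b in zip(fargs, gargs):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# unify app(F, x) with app(id, Y) -> {'F': ('id', []), 'Y': ('x', [])}
print(unify(("app", ["F", ("x", [])]), ("app", [("id", []), "Y"])))
```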
We revisit the role of time in measuring the price impact of trades using a new empirical method that combines spread decomposition and dynamic duration modeling. Previous studies which have addressed the issue in a vector-autoregressive framework conclude that times when markets are most active are times when there is an increased presence of informed trading. Our empirical analysis based on recent European and U.S. data offers challenging new evidence. We find that as trade intensity increases, the informativeness of trades tends to decrease. This result is consistent with the predictions of Admati and Pfleiderer's (1988) rational expectations model, and also with models of dynamic trading like those proposed by Parlour (1998) and Foucault (1999). Our results cast doubt on the common wisdom that fast markets bear particularly high adverse selection risks for uninformed market participants. JEL Classification: G10, C32. Keywords: Price Impact of Trades, Trading Intensity, Dynamic Duration Models, Spread Decomposition Models, Adverse Selection Risk
"Buffer-stock" models of saving are now standard in the consumption literature. This paper builds theoretical foundations for rigorous understanding of the main features of such models, including the existence of a target wealth ratio and the proposition that aggregate consumption growth equals aggregate income growth in a small open economy populated by buffer stock savers. JEL Classification: D81, D91, E21 Keywords: Precautionary Saving, Buffer Stock Saving, Marginal Propensity to Consume, Permanent Income Hypothesis
The unintended consequences of the debt ... will increased government expenditure hurt the economy?
(2011)
In 2008, governments in many countries embarked on large fiscal expenditure programmes with the intention to support the economy and prevent a more serious recession. In this study, the overall impact of a substantial increase in fiscal expenditure is considered by providing a novel analysis of the most relevant recent experience in similar circumstances, namely that of Japan in the 1990s. At the time, a weak economy with risk-averse banks seemed to require some of the largest peacetime fiscal stimulation programmes on record, albeit with disappointing results. The explanations provided by the literature and their unsatisfactory empirical record are reviewed. An alternative explanation, derived from early Keynesian models of the ineffectiveness of fiscal policy, is presented in the form of a modified Fisher equation which incorporates recent findings from the credit-view literature. The model postulates complete quantity crowding out. It is subjected to empirical tests, the results of which are supportive. Thus evidence is found that fiscal policy, if not supported by suitable monetary policy, is likely to crowd out private sector demand, even in an environment of falling or near-zero interest rates. As a policy conclusion it is pointed out that by changing the funding strategy, complete crowding out can be avoided and a positive net effect produced. The proposed framework creates common ground between proponents of Keynesian views (as held, among others, by Blinder and Solow), monetarist views (as held in particular by Milton Friedman) and those of leading contemporary macroeconomists (such as Mankiw).
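A sketch of what a credit-view quantity equation of this kind looks like; the notation and decomposition follow the general Werner-style credit framework and are not taken verbatim from the paper:

```latex
% Illustrative credit-view decomposition (notation is mine, not the paper's):
% total credit creation C splits into credit financing GDP transactions (C_R)
% and credit financing financial transactions (C_F):
\[ C = C_R + C_F, \qquad C_R V_R = P_R Y \]
% Complete quantity crowding out: bond-financed fiscal spending \(\Delta G\)
% that comes with no additional credit creation leaves C_R unchanged, so
\[ \Delta (P_R Y) = V_R \, \Delta C_R = 0 , \]
% i.e. the rise in public demand displaces an equal amount of private demand.
```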
The aim of this paper is to give the semantic profile of the Greek verb-deriving suffixes -íz(o), -én(o), -év(o), -ón(o), -(i)áz(o), and -ín(o), with a special account of the ending -áo/-ó. The patterns presented are the result of an empirical analysis of data extracted from extended interviews conducted with 28 native Greek speakers in Athens, Greece in February 2009. In the first interview task the test persons were asked to force (i.e., create) verbs by using the suffixes -ízo, -évo, -óno, -(i)ázo, and -íno and a variety of bases which conformed to the ontological distinctions made in Lieber (2004). In the second task the test persons were asked to evaluate three groups of forced verbs with a noun, an adjective, and an adverb, respectively, by using one (best/highly acceptable verb) to six (worst/unacceptable verb) points. In the third task nineteen established verb pairs with different suffixes and the ending -áo/-ó were presented. The test persons were asked to report whether there was some difference between them and what exactly this difference was. The differences reported were transformed into 16 alternations. In the fourth task 21 established verbs with different suffixes were presented. The test persons were asked to give the "opposite" or "near opposite" expression for each verb. The rationale behind this task was to arrive at the meaning of the suffixes through the semantics of the opposites. In the analysis Rochelle Lieber's (2004) theoretical framework is used. The results of the analysis suggest (i) a sign-based treatment of affixes, (ii) a vertical preference structure in the semantic structure of the head suffixes which takes into account the semantic make-up of the bases, and (iii) the integration of socioexpressive meaning into verb structures.
This paper addresses the open debate about the usefulness of high-frequency (HF) data in large-scale portfolio allocation. Daily covariances are estimated based on HF data for the S&P 500 universe employing a blocked realized kernel estimator. We propose forecasting covariance matrices using a multi-scale spectral decomposition in which volatilities, correlation eigenvalues and eigenvectors evolve on different frequencies. In an extensive out-of-sample forecasting study, we show that the proposed approach yields less risky and more diversified portfolio allocations than prevailing methods employing daily data. These performance gains hold over longer horizons than previous studies have shown.
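A rough sketch of such a multi-scale forecast; the smoothing spans are invented, and the naive eigenvector averaging below ignores the sign- and order-matching a real implementation would need:

```python
# Multi-scale covariance forecast in the spirit described above: volatilities
# move fast, correlation eigenvalues slower, eigenvectors slowest.
# Smoothing spans are illustrative choices, not the paper's calibration.
import numpy as np

def forecast_cov(realized_covs, span_vol=5, span_eig=22, span_vec=66):
    """realized_covs: list of daily realized covariance matrices (2-D arrays)."""
    def ewma(mats, span):
        lam = 1 - 2 / (span + 1)
        out = mats[0]
        for m in mats[1:]:
            out = lam * out + (1 - lam) * m
        return out

    vols = [np.sqrt(np.diag(c)) for c in realized_covs]
    corrs = [c / np.outer(v, v) for c, v in zip(realized_covs, vols)]

    v_hat = ewma(vols, span_vol)                 # fast-moving volatilities
    eigvals, eigvecs = zip(*(np.linalg.eigh(r) for r in corrs))
    lam_hat = ewma(list(eigvals), span_eig)      # slower-moving eigenvalues
    q_hat = ewma(list(eigvecs), span_vec)        # slowest: eigenvectors
    # naive average; a serious version must sign-align eigenvectors first
    q_hat, _ = np.linalg.qr(q_hat)               # re-orthogonalize the average
    corr_hat = q_hat @ np.diag(lam_hat) @ q_hat.T

    return corr_hat * np.outer(v_hat, v_hat)     # recombine into a covariance
```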
The lessons from QE and other 'unconventional' monetary policies - evidence from the Bank of England
(2011)
This paper investigates the effectiveness of the ‘quantitative easing’ policy as implemented by the Bank of England in March 2009. Similar policies had previously been implemented in Japan, the U.S. and the Eurozone. Effectiveness is measured by the impact of Bank of England policies (including, but not limited to, QE) on nominal GDP growth, the declared goal of the policy according to the Bank of England. Unlike the majority of the literature on the topic, this paper employs the general-to-specific econometric modeling methodology (a.k.a. the ‘Hendry’ or ‘LSE’ methodology) for this purpose. The empirical analysis indicates that QE as defined and announced in March 2009 had no apparent effect on the UK economy. Meanwhile, a policy of ‘quantitative easing’ defined in the original sense of the term (Werner, 1994) is supported by empirical evidence: a stable relationship between a lending aggregate (disaggregated M4 lending, i.e. bank credit for GDP transactions) and nominal GDP is found. The findings imply that BoE policy should more directly target the growth of bank credit for GDP transactions.
Existing studies from the United States, Latin America, and Asia provide scant evidence that private schools dramatically improve academic performance relative to public schools. Using data from Kenya—a poor country with weak public institutions—we find a large effect of private schooling on test scores, equivalent to one full standard deviation. This finding is robust to endogenous sorting of more able pupils into private schools. The magnitude of the effect dwarfs the impact of any rigorously tested intervention to raise performance within public schools. Furthermore, nearly two-thirds of private schools operate at lower cost than the median government school.
Ernst Bloch pointed out in a particularly emphatic way that the concept of human dignity featured centrally in historical struggles against different forms of unjustified rule, i.e. domination – to which one must add that it continues to do so to the present day. The “upright gait,” putting an end to humiliation and insult: this is the most powerful demand, in both political and rhetorical terms, that a “human rights-based” claim expresses. It marks the emergence of a radical, context-transcending reference point immanent to social conflicts which raises fundamental questions concerning the customary opposition between immanent and transcendent criticism. For within the idiom of demanding respect for human dignity, a right is invoked “here and now,” in a particular, context-specific form, which at its core is owed to every human being as a person. Thus Bloch is in one respect correct when he asserts that human rights are not a natural “birthright” but must be achieved through struggle; but in another respect this struggle can develop its social power only if it has a firm and in a certain sense “absolute” normative anchor. Properly understood, it becomes apparent that these social conflicts always affect “two worlds”: the social reality, on the one hand, which is criticized in part or radically in the light of an ideal normative dimension, on the other. For those who engage in this criticism there is no doubt that the normative dimension is no less real than the reality to which they refuse to resign themselves. Those who critically transcend reality always also live elsewhere.
Since World War II, direct stock ownership by households has largely been replaced by indirect stock ownership by financial institutions. We argue that tax policy is the driving force. Using long time-series from eight countries, we show that the fraction of household ownership decreases with measures of the tax benefits of holding stocks inside a pension plan. This finding is important for policy considerations on effective taxation and for financial economics research on the long-term effects of taxation on corporate finance and asset prices. JEL Classification: G10, G20, H22, H30. Keywords: Capital Gains Tax, Income Tax, Stock Ownership, Bond Ownership, Inflation, Bracket Creep, Pension Funds
It has become commonplace to say that, in the past, international governance has been legitimated mainly, if not exclusively, by its welfare-enhancing ‘output’. There has been very little research, however, on the history of legitimating international governance by its output to validate this point. In this essay I begin to address this gap by inquiring into the origins of output-oriented strategies for legitimating international organizations. Scrutinizing the programmatic literature on international organizations from the early 20th century, I illustrate how a new and distinctive account of technocratic legitimation emerged and in the 1920s separated from other types of liberal internationalism. My inquiry, centring on the works of James Arthur Salter, David Mitrany, Paul S. Reinsch and Pitman B. Potter, explores their respective conceptions of ‘good functional governance’, executed by a non-political international technocracy. Their account is explicitly pitched against a notion of ‘international politics’, perceived as violent, polarizing, and irrational. The emergence of such a technocratic legitimation of international governance, I submit, needs to be seen in the context of societal modernization and bureaucratization that unfolded in the first half of the 20th century. I also highlight how in this account the material output of governance is intimately linked to the virtues of the organizational form that brings it about.
Regulations in the pre-Sarbanes–Oxley era allowed corporate insiders considerable flexibility in strategically timing their trades and SEC filings, for example, by executing several trades and reporting them jointly after the last trade. We document that even these lax reporting requirements were frequently violated and that the strategic timing of trades and reports was common. Event-study abnormal returns are larger after reports of strategic insider trades than after reports of otherwise similar nonstrategic trades. Our results also imply that delayed reporting is detrimental to market efficiency and lend strong support to the more stringent trade reporting requirements established by the Sarbanes–Oxley Act. JEL Classification: G14, G30, G32. Keywords: Insider Trading, Directors' Dealings, Corporate Governance, Market Efficiency
The overvaluation hypothesis (Miller 1977) predicts that a) stocks are overvalued in the presence of short selling restrictions and that b) the overvaluation increases in the degree of divergence of opinion. We design an experiment that allows us to test these predictions in the laboratory. The results indicate that prices are higher with short selling constraints, but the overvaluation does not increase in the degree of divergence of opinion. We further find that trading volume is lower and bid-ask spreads are higher when short sale restrictions are imposed. JEL Classification: C92, G14. Keywords: Overvaluation Hypothesis, Short Selling Constraints, Divergence of Opinion
We analytically show that a Stone-Geary utility function with subsistence consumption, common across rich and poor individuals, in the context of a simple two-asset portfolio-choice model is capable of qualitatively and quantitatively explaining: (i) the higher saving rates of the rich, (ii) the higher fraction of personal wealth held in risky assets by the rich, and (iii) the higher volatility of consumption of the wealthier. By contrast, a time-varying "keeping-up-with-the-Joneses" weighted average consumption, which plays the role of a moving benchmark subsistence level, gives the same portfolio composition and saving rates across the rich and the poor, failing to reconcile the model with what micro data say. JEL Classification: G11, D91, E21, D81, D14, D11
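The mechanism behind (ii) can be seen in the textbook Merton solution under subsistence consumption; this is the standard HARA result, sketched here for orientation rather than taken from the paper:

```latex
% Stone-Geary / HARA utility over consumption above subsistence \(\bar{c}\):
\[ u(c) = \frac{(c - \bar{c})^{1-\gamma}}{1-\gamma} \]
% In the classic Merton problem the optimal risky-asset holding is then
% proportional to wealth net of capitalized subsistence needs \(\bar{c}/r\):
\[ \alpha^{*}(W) = \frac{\mu - r}{\gamma \sigma^{2}}
                   \left( W - \frac{\bar{c}}{r} \right) \]
% Hence the risky share \(\alpha^{*}(W)/W\) rises with wealth W: richer
% households hold a larger fraction of wealth in risky assets, as in (ii).
```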
The papers in this volume were originally presented at the Workshop on Bantu Wh-questions, held at the Institut des Sciences de l’Homme, Université Lyon 2, on 25-26 March 2011, which was organized by the French-German cooperative project on the Phonology/Syntax Interface in Bantu Languages (BANTU PSYN). This project, which is funded by the ANR and the DFG, comprises three research teams, based in Berlin, Paris and Lyon. The Berlin team, at the ZAS, is: Laura Downing (project leader) and Kristina Riedel (post-doc). The Paris team, at the Laboratoire de phonétique et phonologie (LPP; UMR 7018), is: Annie Rialland (project leader), Cédric Patin (Maître de Conférences, STL, Université Lille 3), Jean-Marc Beltzung (post-doc), Martial Embanga Aborobongui (doctoral student), Fatima Hamlaoui (post-doc). The Lyon team, at the Dynamique du Langage (UMR 5596) is: Gérard Philippson (project leader) and Sophie Manus (Maître de Conférences, Université Lyon 2). These three research teams bring together the range of theoretical expertise necessary to investigate the phonology-syntax interface: intonation (Patin, Rialland), tonal phonology (Aborobongui, Downing, Manus, Patin, Philippson, Rialland), phonology-syntax interface (Downing, Patin) and formal syntax (Riedel, Hamlaoui). They also bring together a range of Bantu language expertise: Western Bantu (Aborobongui, Rialland), Eastern Bantu (Manus, Patin, Philippson, Riedel), and Southern Bantu (Downing).
This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin – which tried to establish whether computer-generated algorithms could "recognize" literary genres. You take 'David Copperfield', run it through a program without any human input – "unsupervised", as the expression goes – and ... can the program figure out whether it's a gothic novel or a 'Bildungsroman'? The answer is, fundamentally, Yes: but a Yes with so many complications that it is necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.
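In this spirit, a minimal "unsupervised" pipeline; the toy texts, the feature choice (tf-idf over word frequencies) and the cluster count are placeholders rather than the study's actual corpus and features:

```python
# Minimal unsupervised setup of the kind described above: represent each text
# by word statistics and let k-means group them with no human labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

texts = ["...full text of novel 1...", "...novel 2...", "...novel 3..."]
X = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster ids; do they line up with gothic vs. Bildungsroman?
```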
Conclusion and outlook: Measured against the high demands that case law and the legal literature place on the qualifications and duties of supervisory board members, there appears to be considerable need for a professionalization of supervisory board work. In fact, however, the expectations placed in the supervisory board's monitoring of the management are likely to be excessive. Moreover, developing the supervisory board into a body of professional monitors would bring a number of disadvantages that would likely outweigh the supposed benefits. Professional monitoring of the management might therefore be better located within the management board itself, by entrusting one or more of its members exclusively with monitoring tasks. The management board has to ensure the legality, orderliness and expediency of the company's organization and decision-making processes. This includes not only overseeing one's own department but also the duty to observe business operations as a whole and to bring irregularities in other departments to the attention of the full board. Current law thus already takes account of the fact that effective monitoring requires constant presence in the company and the support of a staff of employees. Entrusting individual management board members with full-time monitoring tasks would expand the oversight already incumbent on the management board for these reasons, give the supervisory board a contact within the management board who specializes in monitoring, and thereby help reduce the supervisory board's monitoring task to a scale that can realistically be handled in a part-time capacity.
Plagiarism in medicine has been increasingly researched abroad over the last decade, but not in Germany. Prominent plagiarism cases outside medicine, moreover, raise fundamental questions about the quality of scholarship. This working paper discusses plagiarism and unethical behavior in science in the context of the fundamental institutional and organizational transformation of the research and higher-education system brought about by the transfer of New Public Management (NPM) concepts to its governance. The possibilities and limits of various strategies for dealing with plagiarism are presented, with particular attention to the use of plagiarism-detection software. The use of a software solution in the Department of Human Medicine is viewed critically for several reasons. First results from an empirical study of student plagiarism likewise show that more attention must be paid to preventing plagiarism through education and training. Building on the theoretical considerations, the literature review, and our own empirical surveys, building blocks for a systematic approach to plagiarism in academic medicine are developed.
A generalization of the compressed string pattern match that applies to terms with variables is investigated: given terms s and t compressed by singleton tree grammars, the task is to find an instance of s that occurs as a subterm in t. We show that this problem is in NP and that the task can be performed in time O(n^{c·|Var(s)|}), including the construction of the compressed substitution and a representation of all occurrences. We show that the special case where s is uncompressed can be performed in polynomial time. As a nice application we show that for an equational deduction from t to t' by an equality axiom l = r (a rewrite), a single step can be performed in polynomial time in the size of the compressions of t and l, r if the number of variables in l is fixed. We also show that n rewriting steps can be performed in polynomial time if the equational axioms are compressed and assumed to be constant for the rewriting sequence. Another potential application is querying mechanisms on compressed XML databases.
Prepared by Christian Laux, Vienna University of Economics and Business & Center for Financial Studies (CFS) for the “Workshop on Liquidity Premium in Solvency II: Conceptual and Measurement Issues,” DNB Amsterdam, March 18, 2011. The insurance industry and the Committee of European Insurance and Occupational Pension Supervisors (CEIOPS) propose to add a liquidity premium to the risk-free rate when discounting liabilities in times of financial turmoil. The objective is to counterbalance adverse effects on regulatory capital due to a decrease in asset values caused by illiquidity in a crisis. As I argue in this note, although the motive might be sensible, the proposal to add a liquidity premium when discounting liabilities is not the right approach to tackle the problem.
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
Central banks have recently introduced new policy initiatives, including a policy called ‘Quantitative Easing’ (QE). Since it has been argued by the Bank of England that “Standard economic models are of limited use in these unusual circumstances, and the empirical evidence is extremely limited” (Bank of England, 2009b), we have taken an entirely empirical approach and have focused on the QE experience on which substantial data is available, namely that of Japan (2001-2006). Recent literature on the effectiveness of QE has neglected any reference to final policy goals. In this paper, we adopt the view that ultimately effectiveness will be measured by whether QE is able to “boost spending” (Bank of England, 2009b) and that such policies “will ultimately be judged by their impact on the wider macroeconomy” (Bank of England, 2010). In line with a widely held view among leading macroeconomists of various persuasions, while attempting to stay agnostic and open-minded on the distribution of demand changes between real output and inflation, we have thus identified nominal GDP growth as the key final policy goal of monetary policy. The empirical research finds that the policy conducted by the Bank of Japan between 2001 and 2006 made little empirical difference, while an alternative policy targeting credit creation (the original definition of QE) would likely have been more successful.
In the last few years, literary studies have experienced what we could call the rise of quantitative evidence. This had happened before, of course, without producing lasting effects, but this time it’s probably going to be different, because this time we have digital databases, and automated data retrieval. As Michel and Lieberman’s recent article on "Culturomics" made clear, the width of the corpus and the speed of the search have increased beyond all expectations: today, we can replicate in a few minutes investigations that took a giant like Leo Spitzer months and years of work. When it comes to phenomena of language and style, we can do things that previous generations could only dream of.
When it comes to language and style. But if you work on novels or plays, style is only part of the picture. What about plot – how can that be quantified? This paper is the beginning of an answer, and the beginning of the beginning is network theory. This is a theory that studies connections within large groups of objects: the objects can be just about anything – banks, neurons, film actors, research papers, friends... – and are usually called nodes or vertices; their connections are usually called edges; and the analysis of how vertices are linked by edges has revealed many unexpected features of large systems, the most famous one being the so-called "small-world" property, or "six degrees of separation": the uncanny rapidity with which one can reach any vertex in the network from any other vertex. The theory proper requires a level of mathematical intelligence which I unfortunately lack; and it typically uses vast quantities of data which will also be missing from my paper. But this is only the first in a series of studies we’re doing at the Stanford Literary Lab; and then, even at this early stage, a few things emerge.
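As a toy version of what such an analysis looks like in practice, a tiny character network built with the networkx library; the characters are drawn from Hamlet for concreteness, but the edge list is my own rough simplification, not data from any study:

```python
# Toy character network: characters are vertices, interactions are edges.
# The cast and edges below are an invented simplification for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Hamlet", "Claudius"), ("Hamlet", "Gertrude"), ("Hamlet", "Horatio"),
    ("Hamlet", "Ophelia"), ("Claudius", "Gertrude"), ("Claudius", "Polonius"),
    ("Polonius", "Ophelia"), ("Horatio", "Ghost"), ("Hamlet", "Ghost"),
])

print(nx.average_shortest_path_length(G))           # "degrees of separation"
print(sorted(G.degree, key=lambda kv: -kv[1])[:3])  # most connected characters
```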
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories of how this process works. We give a short overview of the most influential theories and then focus on the most widely received one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Based on data from both experimental and aphasiological studies, we develop a model based on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production means.
We make three points. First, the decade before the financial crisis in 2007 was characterized by a collapse in the yield on TIPS. Second, estimated VARs for the federal funds rate and the TIPS yield show that while monetary policy shocks had negligible effects on the TIPS yield, shocks to the latter had one-to-one effects on the federal funds rate. Third, these findings can be rationalized in a New Keynesian model.
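A compact sketch of the VAR step on synthetic series; the variable names, lag order and the simulated feedback from the TIPS yield to the funds rate are illustrative stand-ins for the paper's estimation:

```python
# Bivariate VAR sketch with statsmodels, on synthetic data built so that the
# TIPS yield feeds into the funds rate (mimicking the pattern described above).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 300
tips = np.cumsum(rng.normal(0, 0.1, T))    # stand-in for the TIPS yield
ffr = np.empty(T)                          # stand-in for the federal funds rate
ffr[0] = 5.0
for t in range(1, T):
    ffr[t] = 0.9 * ffr[t-1] + 0.1 * tips[t-1] + rng.normal(0, 0.05)

data = pd.DataFrame({"ffr": ffr, "tips": tips})
res = VAR(data).fit(maxlags=4, ic="aic")
irf = res.irf(12)          # impulse responses over 12 periods
irf.plot(impulse="tips")   # response of both variables to a TIPS-yield shock
```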
Measuring confidence and uncertainty during the financial crisis: evidence from the CFS survey
(2011)
The CFS survey covers the individual situations of banks and other companies of the financial sector during the financial crisis. This provides a rare opportunity to analyze appraisals, expectations and forecast errors of the core sector of the recent turmoil. Following standard ways of aggregating individual survey data, we first present and introduce the CFS survey by comparing CFS indicators of confidence and predicted confidence to ifo and ZEW indicators. The major contribution is the analysis of several indicators of uncertainty. In addition to well-established concepts, we introduce innovative measures based on the skewness of forecast errors and on the share of ‘no response’ replies. Results show that the uncertainty indicators fit quite well with the patterns of real and financial time series over the period 2007 to 2010. Keywords: Business Sentiment, Financial Crisis, Survey Indicator, Uncertainty. CFS Working Paper Series 2010/18, revised version July 2011.
This paper reconsiders the effect of investor sentiment on stock prices. Using survey-based sentiment indicators from Germany and the US we confirm previous findings of predictability at intermediate time horizons. The main contribution of our paper is that we also analyze the immediate price reaction to the publication of sentiment indicators. We find that the sign of the immediate price reaction is the same as that of the predictability at intermediate time horizons. This is consistent with sentiment being related to mispricing but is inconsistent with the alternative explanation that sentiment indicators provide information about future expected returns. JEL Classification: G12, G14 Keywords: Investor Sentiment , Event Study , Return Predictability
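For the immediate-reaction part, a bare-bones event-study helper; the benchmark (unconditional mean return) and the single short window are deliberate simplifications of what an actual event study would use:

```python
# Bare-bones event study: average abnormal return in a short window after
# each publication date. 'returns' and 'events' are placeholders for actual
# daily return data and sentiment-indicator release dates.
import numpy as np
import pandas as pd

def mean_abnormal_return(returns: pd.Series, events: pd.DatetimeIndex,
                         window: int = 1) -> float:
    """Average cumulative return from day 0 to day +window after each event,
    in excess of the unconditional mean daily return (a crude benchmark)."""
    benchmark = returns.mean() * (window + 1)
    cars = []
    for d in events:
        span = returns.loc[d:].iloc[: window + 1]
        if len(span) == window + 1:
            cars.append(span.sum() - benchmark)
    return float(np.mean(cars))
```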
This paper examines to what extent the build-up of "global imbalances" since the mid-1990s can be explained in a purely real open-economy DSGE model in which agents’ perceptions of long-run growth are based on filtering observed changes in productivity. We show that long-run growth estimates based on filtering U.S. productivity data comove strongly with long-horizon survey expectations. By simulating the model in which agents filter data on U.S. productivity growth, we closely match the U.S. current account evolution. Moreover, with household preferences that control the wealth effect on labor supply, we can generate output movements in line with the data. JEL Classification: E13, E32, D83, O40
In the microstructure literature, information asymmetry is an important determinant of market liquidity. The classic setting is that uninformed dedicated liquidity suppliers charge price concessions when incoming market orders are likely to be informationally motivated. In limit order book markets, however, this relationship is less clear, as market participants can switch roles, and freely choose to immediately demand or patiently supply liquidity by submitting either market or limit orders. We study the importance of information asymmetry in limit order books based on a recent sample of thirty German DAX stocks. We find that Hasbrouck’s (1991) measure of trade informativeness Granger-causes book liquidity, in particular that required to fill large market orders. Picking-off risk due to public news induced volatility is more important for top-of-the-book liquidity supply. In our multivariate analysis we control for volatility, trading volume, trading intensity and order imbalance to isolate the effect of trade informativeness on book liquidity. JEL Classification: G14. Keywords: Price Impact of Trades, Trading Intensity, Dynamic Duration Models, Spread Decomposition Models, Adverse Selection Risk
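The Granger-causality step, sketched with statsmodels on synthetic series in which, by construction, lagged informativeness drains liquidity; all names and coefficients are invented:

```python
# Granger-causality sketch: does lagged trade informativeness help predict
# book liquidity? Data are simulated with that link built in.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
T = 500
informativeness = rng.normal(size=T)
liquidity = np.zeros(T)
for t in range(1, T):
    # hypothetical link: yesterday's informed trading drains today's depth
    liquidity[t] = 0.5 * liquidity[t-1] - 0.3 * informativeness[t-1] + rng.normal()

# column order matters: the test asks whether column 2 Granger-causes column 1
data = np.column_stack([liquidity, informativeness])
res = grangercausalitytests(data, maxlag=2)  # prints F- and chi2-tests per lag
```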
The direct financial impact of the financial crisis has been to deal a heavy blow to investment-based pensions; many workers lost a substantial portion of their retirement saving. The financial sector implosion produced an economic crisis for the rest of the economy via high unemployment and reduced labor earnings, which reduced household contributions to Social Security and some private pensions. Our research asks which types of individuals were most affected by these dual financial and economic shocks, and it also explores how people may react by changing their consumption, saving and investment, work and retirement, and annuitization decisions. We do so with a realistically calibrated lifecycle framework allowing for time-varying investment opportunities and countercyclical risky labor income dynamics. We show that households near retirement will reduce both short- and long-term consumption, boost work effort, and defer retirement. Younger cohorts will initially reduce their work hours, consumption, saving, and equity exposure; later in life, they will work more, retire later, consume less, invest more in stocks, save more, and reduce their demand for private annuities. Keywords: Financial Crisis, Household Finance, Life Cycle Portfolio Choice, Labor Supply. JEL Classification: D1, G11, G23, G35, J14, J26, J32
This paper outlines important lessons for monetary policy. In particular, the role of inflation targeting, which was much acclaimed prior to the financial crisis and since then has not lost much of its endorsement, is critically reviewed. Ignoring the relation between monetary policy and asset prices, as is the case in this monetary policy approach, can lead to financial instability. In contrast, giving, inter alia, monetary factors a role in central banks’ policy decisions, as is done in the ECB’s encompassing approach, helps prevent these potentially harmful side effects and thus allows for fostering financial stability. Finally, this paper makes a case against increasing the central banks’ inflation target. JEL Classification: E44, E52, E58 Keywords: Inflation Targeting, Asset Prices, Financial Stability, ECB
The euro, the currency of the European Monetary Union, has done very well since its introduction. Price stability has been secured and the external value of the new currency is more than satisfactory. The confidence placed in it is also shown by its increasing use as a global reserve currency. It has been a stabilizing factor in the current crisis. The recent budgetary problems of some member states are not, in principle, a problem of the Monetary Union. It is therefore in no way justified to speak of a "Euro-crisis". It is true, however, that the Monetary Union restricts the options available to member states for solving their financial problems, but it does not eliminate them so entirely that outside help becomes indispensable. The purchase of debt instruments of member states in financial distress by the ECB is questionable from an economic and, more importantly, from a legal point of view; the longer it continues, the less legally justifiable it is. Financial support for member states in severe financial distress might be acceptable as a temporary crisis resolution mechanism, but a permanent support mechanism needs a basis in the primary law of the EU. The treatment of the risk of "sovereign" debt in the legal framework for financial institutions urgently needs improvement; especially the capital requirements for credit institutions have to be adjusted.
On 1 January 2012, the KORUS free trade agreement between South Korea and the USA enters into force. With KORUS and the comparably comprehensive agreement with the EU (KOREU), legally in force since July 2011, South Korea's export-oriented economy enjoys nearly unrestricted access to the two strongest economic areas in the world, which together generate more than 50 percent of global gross domestic product (GDP). Not least in view of the recent global financial and economic crisis and the deadlock of the multilateral Doha round of negotiations on global trade liberalization, numerous states in East Asia, South Korea among them, are pursuing an expansive and above all bilateral free trade policy. For Seoul, this policy has reached a provisional high point with the two most recent agreements. The country's trade strategy envisages a further diversification and expansion of its free trade policy.
* The recent agreements with India, the USA and the EU in particular represent a further development of South Korea's free trade policy to date. Compared with the initial agreements with Chile or Singapore, these comprehensive and far-reaching accords are of a new quality.
* South Korea's newer bilateral free trade projects with the Gulf Cooperation Council or Australia increasingly also address issues such as securing raw materials and food supplies. The conclusion of free trade agreements also pursues political goals, such as strengthening the alliance with the USA or forging strategic partnerships beyond the East Asian region.
* For Japan above all, KORUS and KOREU pose an economic challenge. It is possible that further large free trade agreements will now follow, in East Asia as well as across the entire Pacific region.
The text Konstitutive Regeln – normativ oder nicht? Ein Blick auf ihre Rolle in Praktiken pursues the question of whether, and if so in what way, constitutive rules are normative. The challenge is that these rules, or compliance with them, can arguably be described throughout in non-normative terms, essentially as the satisfaction of necessary and/or sufficient conditions. Of course one can always demand, for whatever reasons, that a constitutive rule be followed. But that would bring normativity to such rules 'from outside'; what is at issue is whether these rules are themselves normative. The way we handle these rules, and our everyday talk about them, certainly speaks for such 'internal' normativity. In everyday practice we describe following the rules of a game (the paradigm case of constitutive rules) as something that is correct or ought to be done, and deviations accordingly as violations. The argument of the text proceeds in two steps. In a first step, several kinds of constitutive rules are distinguished. The systematic yield of these conceptual considerations is the proposal that some kinds of constitutive rules can quite unproblematically be characterized as normative phenomena as well, whereas others cannot. In a second step, the text seeks to show that some of the genuinely 'problematic cases' of constitutive rules can at least be described as weakly normative rules (in contrast to strongly normative phenomena such as obligations or prohibitions). The 'weak normativity' of these rules comes to light when one considers their role in practices, in particular the way agents within these practices criticize one another by appeal to constitutive rules without thereby already treating themselves as obligated to follow them.
It has often been asked whether today's Japan will be able to move into new and promising industries, or whether it is locked into an innovation system with an inherent inability to give birth to new industries. One argument reasons that the thick institutional complementarities among labour, innovation, and finance in its enterprises and public sector favour industrial development in sectors of intermediate uncertainty, while making it difficult to move into areas of major uncertainty. In this paper, we present the case of the silver industry or, somewhat more prosaically, the 60+ or even 50+ industry, for which most would agree that Japan has indeed become a lead market and lead producer on the global market. For an institutional economist, the case of the silver industry is particularly interesting because Japan's success is based on the cooperation of existing actors, the enterprise and public sector in particular, which helped overcome the information uncertainties and asymmetries involved in the new market by relying on several established mechanisms developed well before. In that sense, Japan's silver industry presents a case of what we propose to call successful institutional path activation, with the effect of an innovative market creation, instead of the problematic lock-in effects that are usually associated with the term path dependence.