Working Paper
This paper compares the shareholder-value-maximizing capital structure and pricing policy of insurance groups against that of stand-alone insurers. Groups can utilise intra-group risk diversification by means of capital and risk transfer instruments. We show that using these instruments enables the group to offer insurance with less default risk and at lower premiums than is optimal for stand-alone insurers. We also take into account that shareholders of groups could find it more difficult to prevent inefficient overinvestment or cross-subsidisation, which we model by higher dead-weight costs of carrying capital. The trade-off between risk diversification on the one hand and higher dead-weight costs on the other can result in group building being beneficial for shareholders but detrimental for policyholders.
Depending on the point in time and location, insurance companies are subject to different forms of solvency regulation. In modern regulation regimes, such as the future standard Solvency II in the EU, insurance pricing is liberalized and risk-based capital requirements are introduced. In many economies in Asia and Latin America, on the other hand, supervisors require the prior approval of policy conditions and insurance premiums, but do not conduct risk-based capital regulation. This paper compares the outcome of insurance rate regulation and risk-based capital requirements by deriving stock insurers’ best responses. It turns out that binding price floors affect insurers’ optimal capital structures and induce them to choose higher safety levels. Risk-based capital requirements are a more efficient instrument of solvency regulation and allow for lower insurance premiums, but may come at the cost of investment efforts into adequate risk monitoring systems. The paper derives threshold values for the regulator’s investments into risk-based capital regulation and provides starting points for designing a welfare-enhancing insurance regulation scheme.
If there is one thing to be learned from David Foster Wallace, it is that cultural transmission is a tricky game. This was a problem Wallace confronted as a literary professional, a university-based writer during what Mark McGurl has called the Program Era. But it was also a philosophical issue he grappled with on a deep level as he struggled to combat his own loneliness through writing. This fundamental concern with literature as a social, collaborative enterprise has also gained some popularity among scholars of contemporary American literature, particularly McGurl and James English: both critics explore the rules by which prestige or cultural distinction is awarded to authors (English; McGurl). Their approach requires a certain amount of empirical work, since these claims move beyond the individual experience of the text into forms of collective reading and cultural exchange influenced by social class, geographical location, education, ethnicity, and other factors. Yet McGurl and English's groundbreaking work is limited by the very forms of exclusivity they analyze: the protective bubble of creative writing programs in the academy and the elite economy of prestige surrounding literary prizes, respectively. To really study the problem of cultural transmission, we need to look beyond the symbolic markets of prestige to the real market, the site of mass literary consumption, where authors succeed or fail based on their ability to speak to that most diverse and complicated of readerships: the general public. Unless we study what I call the social lives of books, we make the mistake of keeping literature in the same ascetic laboratory that Wallace tried to break out of with his intense authorial focus on popular culture, mass media, and everyday life.
In the last few years, literary studies have experienced what we could call the rise of quantitative evidence. This had happened before of course, without producing lasting effects, but this time it’s probably going to be different, because this time we have digital databases, and automated data retrieval. As Michel and Lieberman’s recent article on "Culturomics" made clear, the width of the corpus and the speed of the search have increased beyond all expectations: today, we can replicate in a few minutes investigations that took a giant like Leo Spitzer months and years of work. When it comes to phenomena of language and style, we can do things that previous generations could only dream of.
When it comes to language and style. But if you work on novels or plays, style is only part of the picture. What about plot – how can that be quantified? This paper is the beginning of an answer, and the beginning of the beginning is network theory. This is a theory that studies connections within large groups of objects: the objects can be just about anything – banks, neurons, film actors, research papers, friends... – and are usually called nodes or vertices; their connections are usually called edges; and the analysis of how vertices are linked by edges has revealed many unexpected features of large systems, the most famous one being the so-called "small-world" property, or "six degrees of separation": the uncanny rapidity with which one can reach any vertex in the network from any other vertex. The theory proper requires a level of mathematical intelligence which I unfortunately lack; and it typically uses vast quantities of data which will also be missing from my paper. But this is only the first in a series of studies we’re doing at the Stanford Literary Lab; and then, even at this early stage, a few things emerge.
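To make the basic vocabulary concrete, here is a minimal sketch in Python, assuming a small, invented cast of characters and co-occurrence edges (none of it data from the Literary Lab): characters are vertices, shared scenes are edges, and standard graph routines return the degrees and shortest path lengths behind the "small-world" observation.

```python
# Minimal sketch of a character network; names and edges are invented
# for illustration, not taken from the study.
import networkx as nx

G = nx.Graph()
# An edge means the two characters share at least one scene.
edges = [
    ("Hamlet", "Claudius"), ("Hamlet", "Gertrude"), ("Hamlet", "Horatio"),
    ("Hamlet", "Ophelia"), ("Claudius", "Gertrude"), ("Ophelia", "Polonius"),
    ("Polonius", "Claudius"), ("Horatio", "Ophelia"),
]
G.add_edges_from(edges)

# Degree of each vertex: how many characters someone is directly linked to.
print(dict(G.degree()))

# "Degrees of separation": length of the shortest chain of scenes
# connecting any two characters, and its average over all pairs.
print(nx.shortest_path_length(G, "Gertrude", "Polonius"))
print(nx.average_shortest_path_length(G))
```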
This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin – which tried to establish whether computer-generated algorithms could "recognize" literary genres. You take 'David Copperfield', run it through a program without any human input – "unsupervised", as the expression goes – and ... can the program figure out whether it's a gothic novel or a 'Bildungsroman'? The answer is, fundamentally, Yes: but a Yes with so many complications that it is necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.
The article discusses the methodology adopted for a cross-linguistic synchronic and diachronic corpus study on indefinites. The study covered five indefinite expressions, each in a different language. The main goal of the study was to verify the distribution of these indefinites synchronically and to attest their historical development. The methodology we used is a form of functional labeling which combines both context (syntax) and meaning (semantics) using as a starting point Haspelmath’s (1997) functional map. In the article we identify Haspelmath’s functions with logico-semantic interpretations and propose a binary branching decision tree assigning each instance of an indefinite exactly one function in the map.
This paper examines to what extent the build-up of 'global imbalances' since the mid-1990s can be explained in a purely real open-economy DSGE model in which agents' perceptions of long-run growth are based on filtering observed changes in productivity. We show that long-run growth estimates based on filtering U.S. productivity data comove strongly with long-horizon survey expectations. By simulating the model in which agents filter data on U.S. productivity growth, we closely match the U.S. current account evolution. Moreover, with household preferences that control the wealth effect on labor supply, we can generate output movements in line with the data.
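A hedged illustration of the filtering idea, not the paper's actual estimator: agents update their perceived long-run productivity growth with a constant-gain rule, so a sustained run of strong observed growth gradually raises the perceived trend, which in turn can rationalize borrowing against expected future income.

```python
# Constant-gain ("adaptive") updating of perceived long-run productivity
# growth; the gain value and the simulated data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_trend = np.concatenate([np.full(80, 0.01), np.full(80, 0.02)])  # trend break
observed_growth = true_trend + 0.01 * rng.standard_normal(160)

gain = 0.03            # hypothetical updating gain
belief = 0.01          # initial perceived long-run growth
beliefs = []
for g in observed_growth:
    belief += gain * (g - belief)   # move the belief toward the latest observation
    beliefs.append(belief)

print(f"perceived trend at end of sample: {beliefs[-1]:.4f}")
```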
Towards correctness of program transformations through unification and critical pair computation
(2011)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then the computation of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive let-expressions.
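For readers unfamiliar with the first-order machinery the abstract refers to, a minimal sketch of plain syntactic unification with an occurs check follows; the paper's algorithm additionally handles many-sorted terms, left-commutativity, context variables and binding chains, none of which appear in this toy version.

```python
# Plain first-order syntactic unification; terms are ("var", name) or
# (fname, arg1, ..., argn).  A generic textbook sketch, not the paper's algorithm.
def walk(t, subst):
    while t[0] == "var" and t[1] in subst:
        t = subst[t[1]]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t[0] == "var":
        return t[1] == v
    return any(occurs(v, arg, subst) for arg in t[1:])

def unify(s, t, subst=None):
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s[0] == "var":
        if s == t:
            return subst
        if occurs(s[1], t, subst):
            return None                      # occurs-check failure
        return {**subst, s[1]: t}
    if t[0] == "var":
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):
        return None                          # clash of function symbols
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Unify f(x, g(y)) with f(g(a), g(a)): yields {x -> g(a), y -> a}.
print(unify(("f", ("var", "x"), ("g", ("var", "y"))),
            ("f", ("g", ("a",)), ("g", ("a",)))))
```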
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories as to how this process works. We give a short overview of the most influential theories before focusing on the most widely received one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Drawing on data from both experimental and aphasiological papers, we develop a model based on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production.
The papers in this volume were originally presented at the Workshop on Bantu Wh-questions, held at the Institut des Sciences de l’Homme, Université Lyon 2, on 25-26 March 2011, which was organized by the French-German cooperative project on the Phonology/Syntax Interface in Bantu Languages (BANTU PSYN). This project, which is funded by the ANR and the DFG, comprises three research teams, based in Berlin, Paris and Lyon. The Berlin team, at the ZAS, is: Laura Downing (project leader) and Kristina Riedel (post-doc). The Paris team, at the Laboratoire de phonétique et phonologie (LPP; UMR 7018), is: Annie Rialland (project leader), Cédric Patin (Maître de Conférences, STL, Université Lille 3), Jean-Marc Beltzung (post-doc), Martial Embanga Aborobongui (doctoral student), Fatima Hamlaoui (post-doc). The Lyon team, at the Dynamique du Langage (UMR 5596) is: Gérard Philippson (project leader) and Sophie Manus (Maître de Conférences, Université Lyon 2). These three research teams bring together the range of theoretical expertise necessary to investigate the phonology-syntax interface: intonation (Patin, Rialland), tonal phonology (Aborobongui, Downing, Manus, Patin, Philippson, Rialland), phonology-syntax interface (Downing, Patin) and formal syntax (Riedel, Hamlaoui). They also bring together a range of Bantu language expertise: Western Bantu (Aborobongui, Rialland), Eastern Bantu (Manus, Patin, Philippson, Riedel), and Southern Bantu (Downing).
Existing studies from the United States, Latin America, and Asia provide scant evidence that private schools dramatically improve academic performance relative to public schools. Using data from Kenya—a poor country with weak public institutions—we find a large effect of private schooling on test scores, equivalent to one full standard deviation. This finding is robust to endogenous sorting of more able pupils into private schools. The magnitude of the effect dwarfs the impact of any rigorously tested intervention to raise performance within public schools. Furthermore, nearly two-thirds of private schools operate at lower cost than the median government school.
A large empirical literature has shown that user fees significantly deter public service utilization in developing countries. While most of these results reflect partial equilibrium analysis, we find that the nationwide abolition of public school fees in Kenya in 2003 led to no increase in net public enrollment rates, but rather a dramatic shift toward private schooling. Results suggest this divergence between partial- and general-equilibrium effects is partially explained by social interactions: the entry of poorer pupils into free education contributed to the exit of their more affluent peers.
Rare Earth Elements (REE) have become the new strategic economic weapon for the modern age. Used in the manufacturing of products ranging from mobile phones to jet fighter engines, REEs have become the new “oil” of today in terms of economic and strategic importance. Currently, 95% of REEs mined globally are mined in China, giving China a monopoly on the industry. Deng Xiaoping foresaw the importance of REEs in 1992 when he commented: “as there is oil in the Middle East, there is rare earth in China.” Recently, China temporarily stopped exports of REEs to Japan, the EU and the US as an unofficial response to varying political and economic issues. This stoppage raised concerns as to the dependability of China and REE exports. Using the theory of neo-mercantilism, this paper analyzes China’s actions in the REE market and its subsequent economic and political implications. It concludes with a look at how countries are trying to position themselves away from a dependency on China.
Japan's quest for energy security : risks and opportunities in a changing geopolitical landscape
(2011)
For much of the 20th century, economic growth was fueled by cheap oil-based energy supply. Due to increasing resource constraints, however, the political and strategic importance of oil has become a significant part of energy and foreign policy making in East and Southeast Asian countries. In Japan, the rise of China’s economic and military power is a source of considerable concern. To enhance energy security, the Japanese government has recently amended its energy regulatory framework, which reveals high political awareness of risks resulting from the looming key resources shortage and competition over access. Understanding that national energy security is a politically and economically sensitive area with a clear international dimension affecting everyday life is critical in shaping a nation’s energy future.
It has often been asked whether today's Japan will be able to move into new and promising industries, or whether it is locked into an innovation system with an inherent inability to give birth to new industries. One argument reasons that the thick institutional complementarities among labour, innovation, and finance among its enterprises and the public sector favour industrial development in sectors of intermediate uncertainty, while it is difficult to move into areas of major uncertainty. In this paper, we present the case of the silver industry or, somewhat more prosaically, the 60+ or even 50+ industry, for which most would agree that Japan has indeed become a lead market and lead producer on the global market. For an institutional economist, the case of the silver industry is particularly interesting, because Japan's success is based on the cooperation of existing actors, the enterprise and public sector in particular, which helped overcome the information uncertainties and asymmetries involved in the new market by relying on several established mechanisms developed well before. In that sense, Japan's silver industry presents a case of what we propose to call successful institutional path activation with the effect of an innovative market creation, instead of the problematic lock-in effects that are usually associated with the term path dependence.
The emergence of Capitalism is said to always lead to extreme changes in the structure of a society. This view implies that Capitalism is a universal and unique concept that needs an explicit institutional framework and should not discriminate between a German or US Capitalism. In contrast, this work argues that the ‘ideal type’ of Capitalism in a Weberian sense does not exist. It will be demonstrated that Capitalism is not a concept that shapes a uniform institutional framework within every society, constructing a specific economic system. Rather, depending on the institutional environment - family structures in particular - different forms of Capitalism arise. To exemplify this, the networking (Guanxi) Capitalism of contemporary China will be presented, where social institutions known from the past were reinforced for successful development. It will be argued that especially the change, destruction and creation of family and kinship structures are key factors that determined the further development and success of the Chinese economy and the type of Capitalism arising there. In contrast to Weber, it will be argued that Capitalism does not necessarily lead to a process of destruction of traditional structures and to large-scale enterprises under rational, bureaucratic management, without leaving space for socio-cultural structures like family businesses. Flexible global production increasingly favours small business production over larger corporations. Small Chinese family firms are able to respond to rapidly changing market conditions and motivate maximum efforts for modest pay. The structure of the Chinese family proved to be very persistent over time and to be able to accommodate diverse economic and political environments while maintaining its core identity. This implies that Chinese Capitalism may be an entirely new economic system, based on Guanxi and the family.
The aim of this paper is to give the semantic profile of the Greek verb-deriving suffixes -íz(o), -én(o), -év(o), -ón(o), -(i)áz(o), and -ín(o), with a special account of the ending -áo/-ó. The patterns presented are the result of an empirical analysis of data extracted from extended interviews conducted with 28 native Greek speakers in Athens, Greece in February 2009. In the first interview task the test persons were asked to force (= create) verbs by using the suffixes -ízo, -évo, -óno, -(i)ázo, and -íno and a variety of bases which conformed to the ontological distinctions made in Lieber (2004). In the second task the test persons were asked to evaluate three groups of forced verbs with a noun, an adjective, and an adverb, respectively, by using one (best/highly acceptable verb) to six (worst/unacceptable verb) points. In the third task nineteen established verb pairs with different suffixes and the ending -áo/-ó were presented. The test persons were asked to report whether there was some difference between them and what exactly this difference was. The differences reported were transformed into 16 alternations. In the fourth task 21 established verbs with different suffixes were presented. The test persons were asked to give the "opposite" or "near opposite" expression for each verb. The rationale behind this task was to arrive at the meaning of the suffixes through the semantics of the opposites. In the analysis Rochelle Lieber's (2004) theoretical framework is used. The results of the analysis suggest (i) a sign-based treatment of affixes, (ii) a vertical preference structure in the semantic structure of the head suffixes which takes into account the semantic make-up of the bases, and (iii) the integration of socioexpressive meaning into verb structures.
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
Insurance contracts are often complex and difficult to verify outside the insurance relation. We show that standard one-period insurance policies with an upper limit and a deductible are the optimal incentive-compatible contracts in a competitive market with repeated interaction. Optimal group insurance policies involve a joint upper limit but individual deductibles and insurance brokers can play a role implementing such contracts for the group of clients. Our model provides new insights and predictions about the determinants of insurance.
Central banks have recently introduced new policy initiatives, including a policy called ‘Quantitative Easing’ (QE). Since it has been argued by the Bank of England that “Standard economic models are of limited use in these unusual circumstances, and the empirical evidence is extremely limited” (Bank of England, 2009b), we have taken an entirely empirical approach and have focused on the QE-experience, on which substantial data is available, namely that of Japan (2001-2006). Recent literature on the effectiveness of QE has neglected any reference to final policy goals. In this paper, we adopt the view that ultimately effectiveness will be measured by whether it will be able to “boost spending” (Bank of England, 2009b) and “will ultimately be judged by their impact on the wider macroeconomy” (Bank of England, 2010). In line with a widely held view among leading macroeconomists from various persuasions, while attempting to stay agnostic and open-minded on the distribution of demand changes between real output and inflation, we have thus identified nominal GDP growth as the key final policy goal of monetary policy. The empirical research finds that the policy conducted by the Bank of Japan between 2001 and 2006 made little difference, while an alternative policy targeting credit creation (the original definition of QE) would likely have been more successful.
The lessons from QE and other 'unconventional' monetary policies - evidence from the Bank of England
(2011)
This paper investigates the effectiveness of the ‘quantitative easing’ policy, as implemented by the Bank of England in March 2009. Similar policies had been previously implemented in Japan, the U.S. and the Eurozone. The effectiveness is measured by the impact of Bank of England policies (including, but not limited to QE) on nominal GDP growth – the declared goal of the policy, according to the Bank of England. Unlike the majority of the literature on the topic, the general-to-specific econometric modeling methodology (a.k.a. the ‘Hendry’ or ‘LSE’ methodology) is employed for this purpose. The empirical analysis indicates that QE as defined and announced in March 2009 had no apparent effect on the UK economy. Meanwhile, it is found that a policy of ‘quantitative easing’ defined in the original sense of the term (Werner, 1994) is supported by empirical evidence: a stable relationship between a lending aggregate (disaggregated M4 lending, i.e. bank credit for GDP transactions) and nominal GDP is found. The findings imply that BoE policy should more directly target the growth of bank credit for GDP-transactions.
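A hedged sketch of the kind of relationship the paper tests, using simulated data rather than the Bank of England series: regress nominal GDP growth on the growth of bank credit for GDP transactions and inspect the estimated coefficient.

```python
# Illustrative OLS of nominal GDP growth on growth of bank credit for
# GDP transactions; the data are simulated, not the series used in the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
credit_growth = rng.normal(0.04, 0.02, 120)                 # quarterly, hypothetical
ngdp_growth = 0.005 + 0.8 * credit_growth + rng.normal(0, 0.005, 120)

X = sm.add_constant(credit_growth)
model = sm.OLS(ngdp_growth, X).fit()
print(model.params)      # intercept and slope on credit growth
print(model.rsquared)
```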
We use data from the 2009 Internet Survey of the Health and Retirement Study to examine the consumption impact of wealth shocks and unemployment during the Great Recession in the US. We find that many households experienced large capital losses in housing and in their financial portfolios, and that a non-trivial fraction of respondents lost their jobs. As a consequence of these shocks, many households substantially reduced their expenditures. We estimate that the marginal propensities to consume with respect to housing and financial wealth are 1 and 3.3 percentage points, respectively. In addition, those who became unemployed reduced spending by 10 percent. We also distinguish the effect of perceived transitory and permanent wealth shocks, splitting the sample between households who think that the stock market is likely to recover in a year’s time, and those who don’t. In line with the predictions of standard models of intertemporal choice, we find that the latter group adjusted its spending in response to financial wealth shocks much more than the former.
Using life-history survey data from eleven European countries, we investigate whether childhood conditions, such as socioeconomic status, cognitive abilities and health problems influence portfolio choice and risk attitudes later in life. After controlling for the corresponding conditions in adulthood, we find that superior cognitive skills in childhood (especially mathematical abilities) are positively associated with stock and mutual fund ownership. Childhood socioeconomic status, as indicated by the number of rooms and by having at least some books in the house during childhood, is also positively associated with the ownership of stocks, mutual funds and individual retirement accounts, as well as with the willingness to take financial risks. On the other hand, less risky assets like bonds are not affected by early childhood conditions. We find only weak effects of childhood health problems on portfolio choice in adulthood. Finally, favorable childhood conditions affect the transition in and out of risky asset ownership, both by making divesting less likely and by facilitating investing (i.e., transitioning from non-ownership to ownership).
The unintended consequences of the debt ... will increased government expenditure hurt the economy?
(2011)
In 2008, governments in many countries embarked on large fiscal expenditure programmes, with the intention to support the economy and prevent a more serious recession. In this study, the overall impact of a substantial increase in fiscal expenditure is considered by providing a novel analysis of the most relevant recent experience in similar circumstances, namely that of Japan in the 1990s. Then a weak economy with risk-averse banks seemed to require some of the largest peacetime fiscal stimulation programmes on record, albeit with disappointing results. The explanations provided by the literature and their unsatisfactory empirical record are reviewed. An alternative explanation, derived from early Keynesian models on the ineffectiveness of fiscal policy, is presented in the form of a modified Fisher equation, which incorporates the recent findings in the credit view literature. The model postulates complete quantity crowding out. It is subjected to empirical tests, which prove supportive. Thus evidence is found that fiscal policy, if not supported by suitable monetary policy, is likely to crowd out private sector demand, even in an environment of falling or near-zero interest rates. As a policy conclusion it is pointed out that by changing the funding strategy, complete crowding out can be avoided and a positive net effect produced. The proposed framework creates common ground between proponents of Keynesian views (as held, among others, by Blinder and Solow), monetarist views (as held in particular by Milton Friedman) and those of leading contemporary macroeconomists (such as Mankiw).
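As a hedged sketch of the "modified Fisher equation" the abstract alludes to, following the credit-view decomposition associated with Werner (notation illustrative, not the paper's exact formulation): total credit creation is split into credit used for GDP transactions and credit used for financial transactions; only the former finances nominal GDP, so bond-financed fiscal spending that leaves credit creation unchanged must crowd out private demand one for one.

```latex
% Illustrative credit-view decomposition behind a "modified Fisher equation";
% the notation is a sketch, not the paper's exact formulation.
\begin{align*}
  C_R V_R &= P_R Y
    && \text{credit used for GDP transactions circulates against nominal GDP}\\
  C_F V_F &= P_F Q_F
    && \text{credit used for financial transactions circulates against asset turnover}\\
  \Delta(P_R Y) &= \Delta(C_R V_R)
    && \text{so if } \Delta(C_R V_R) \text{ is unchanged, } \Delta G > 0
       \text{ implies } \Delta D_{\mathrm{priv}} = -\Delta G
\end{align*}
```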
Capturing the zero: a new class of zero-augmented distributions and multiplicative error processes
(2011)
We propose a novel approach to model serially dependent positive-valued variables which realize a non-trivial proportion of zero outcomes. This is a typical phenomenon in financial time series observed at high frequencies, such as cumulated trading volumes. We introduce a flexible point-mass mixture distribution and develop a semiparametric specification test explicitly tailored for such distributions. Moreover, we propose a new type of multiplicative error model (MEM) based on a zero-augmented distribution, which incorporates an autoregressive binary choice component and thus captures the (potentially different) dynamics of both zero occurrences and of strictly positive realizations. Applying the proposed model to high-frequency cumulated trading volumes of both liquid and illiquid NYSE stocks, we show that the model captures the dynamic and distributional properties of the data well and is able to correctly predict future distributions.
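For orientation, a hedged sketch of a generic multiplicative error model and of a point-mass-at-zero mixture of the kind the abstract describes; these are textbook forms, not necessarily the authors' exact specification.

```latex
% Generic multiplicative error model (MEM) for a positive-valued series x_t,
% and a point-mass-at-zero mixture; textbook forms, not the authors' exact model.
\begin{align*}
  x_t &= \mu_t\,\varepsilon_t, \qquad \varepsilon_t \ge 0,\quad
         \mathrm{E}[\varepsilon_t \mid \mathcal{F}_{t-1}] = 1,\\
  \mu_t &= \omega + \alpha\,x_{t-1} + \beta\,\mu_{t-1},\\
  P(x_t = 0 \mid \mathcal{F}_{t-1}) &= \pi_t, \qquad
  x_t \mid \{x_t > 0,\, \mathcal{F}_{t-1}\} \sim f(\,\cdot\,;\mu_t).
\end{align*}
```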
This paper addresses the open debate about the usefulness of high-frequency (HF) data in large-scale portfolio allocation. Daily covariances are estimated based on HF data of the S&P 500 universe employing a blocked realized kernel estimator. We propose forecasting covariance matrices using a multi-scale spectral decomposition where volatilities, correlation eigenvalues and eigenvectors evolve on different frequencies. In an extensive out-of-sample forecasting study, we show that the proposed approach yields less risky and more diversified portfolio allocations than prevailing methods employing daily data. These performance gains hold over longer horizons than previous studies have shown.
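A hedged sketch of the downstream allocation step, using a simulated covariance matrix rather than the S&P 500 kernel estimates: decompose a covariance forecast into eigenvalues and eigenvectors and map it into global-minimum-variance portfolio weights.

```python
# From a covariance forecast to global-minimum-variance weights; the
# covariance matrix here is simulated, not a blocked realized kernel estimate.
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0, 0.01, size=(250, 10))     # 10 hypothetical assets
cov = np.cov(returns, rowvar=False)

# Spectral decomposition: eigenvalues and eigenvectors could be forecast on
# different frequencies, as the abstract describes; here they are recomposed as-is.
eigvals, eigvecs = np.linalg.eigh(cov)
cov_forecast = eigvecs @ np.diag(eigvals) @ eigvecs.T

ones = np.ones(cov_forecast.shape[0])
w = np.linalg.solve(cov_forecast, ones)
w /= w.sum()                                      # GMV weights sum to one
print(np.round(w, 3))
```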
The direct financial impact of the financial crisis has been to deal a heavy blow to investment-based pensions; many workers lost a substantial portion of their retirement saving. The financial sector implosion produced an economic crisis for the rest of the economy via high unemployment and reduced labor earnings, which reduced household contributions to Social Security and some private pensions. Our research asks which types of individuals were most affected by these dual financial and economic shocks, and it also explores how people may react by changing their consumption, saving and investment, work and retirement, and annuitization decisions. We do so with a realistically calibrated lifecycle framework allowing for time-varying investment opportunities and countercyclical risky labor income dynamics. We show that households near retirement will reduce both short- and long-term consumption, boost work effort, and defer retirement. Younger cohorts will initially reduce their work hours, consumption, saving, and equity exposure; later in life, they will work more, retire later, consume less, invest more in stocks, save more, and reduce their demand for private annuities. Keywords: Financial Crisis, Household Finance, Life-Cycle Portfolio Choice, Labor Supply. JEL Classification: D1, G11, G23, G35, J14, J26, J32
A generalization of the compressed string pattern match that applies to terms with variables is investigated: Given terms s and t compressed by singleton tree grammars, the task is to find an instance of s that occurs as a subterm in t. We show that this problem is in NP and that the task can be performed in time O(n^{c·|Var(s)|}), including the construction of the compressed substitution, and a representation of all occurrences. We show that the special case where s is uncompressed can be performed in polynomial time. As a nice application we show that for an equational deduction from t to t′ by an equality axiom l = r (a rewrite), a single step can be performed in polynomial time in the size of the compressions of t and l, r if the number of variables in l is fixed. We also show that n rewriting steps can be performed in polynomial time, if the equational axioms are compressed and assumed to be constant for the rewriting sequence. Another potential application is querying mechanisms on compressed XML databases.
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We describe an effective unification algorithm to determine all overlaps of transformations with reduction rules for the lambda calculus LR which comprises recursive let-expressions, constructor applications, case expressions and a seq construct for strict evaluation. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modeling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive let-expressions. As a result the algorithm computes a finite set of overlaps for the reduction rules of the calculus LR that serve as a starting point for the automation of the analysis of program transformations.
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.
We make three points. First, the decade before the financial crisis in 2007 was characterized by a collapse in the yield on TIPS. Second, estimated VARs for the federal funds rate and the TIPS yield show that while monetary policy shocks had negligible effects on the TIPS yield, shocks to the latter had one-to-one effects on the federal funds rate. Third, these findings can be rationalized in a New Keynesian model.
There is ample empirical evidence documenting widespread financial illiteracy and limited pension knowledge. At the same time, the distribution of wealth is widely dispersed and many workers arrive on the verge of retirement with few or no personal assets. In this paper, we investigate the relationship between financial literacy and household net worth, relying on comprehensive measures of financial knowledge designed for a special module of the DNB (De Nederlandsche Bank) Household Survey. Our findings provide evidence of a strong positive association between financial literacy and net worth, even after controlling for many determinants of wealth. Moreover, we discuss two channels through which financial literacy might facilitate wealth accumulation. First, financial knowledge increases the likelihood of investing in the stock market, allowing individuals to benefit from the equity premium. Second, financial literacy is positively related to retirement planning, and the development of a savings plan has been shown to boost wealth. Overall, financial literacy, both directly and indirectly, is found to have a strong link to household wealth. JEL Classification: D91, D12, J26 Keywords: Financial Education, Savings and Wealth Accumulation, Retirement Preparation, Knowledge of Finance and Economics, Overconfidence, Stock Market Participation
This paper outlines a new method for using qualitative information to analyze the monetary policy strategy of central banks. Quantitative assessment indicators that are extracted from a central bank's public statements via the balance statistic approach are employed to estimate a Taylor-type rule. This procedure makes it possible to directly capture a policymaker's assessments of macroeconomic variables that are relevant for its decision making process. As an application of the proposed method the monetary policy of the Bundesbank is re-investigated with a new dataset. One distinctive feature of the Bundesbank's strategy consisted of targeting growth in monetary aggregates. The analysis using the proposed method provides evidence that the Bundesbank indeed took into consideration not only monetary aggregates but also real economic activity and inflation developments in its monetary policy strategy since 1975. JEL Classification: E52, E58, N14 Keywords: Monetary Policy Rule, Statement Indicators, Bundesbank, Monetary Targeting
This paper analyzes the emergence of systemic risk in a network model of interconnected bank balance sheets. Given a shock to asset values of one or several banks, systemic risk in the form of multiple bank defaults depends on the strength of balance sheets and asset market liquidity. The price of bank assets on the secondary market is endogenous in the model, thereby relating funding liquidity to expected solvency - an important stylized fact of banking crises. Based on the concept of a system value at risk, Shapley values are used to define the systemic risk charge levied upon individual banks. Using a parallelized simulated annealing algorithm the properties of an optimal charge are derived. Among other things we find that there is not necessarily a correspondence between a bank's contribution to systemic risk - which determines its risk charge - and the capital that is optimally injected into it to make the financial system more resilient to systemic risk. The analysis has policy implications for the design of optimal bank levies. JEL Classification: G01, G18, G33 Keywords: Systemic Risk, Systemic Risk Charge, Systemic Risk Fund, Macroprudential Supervision, Shapley Value, Financial Network
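A hedged sketch of the Shapley-value allocation step; the system risk function below is a toy placeholder, not the paper's network model of interconnected balance sheets. Each bank's charge is its average marginal contribution to system-wide risk over all orderings in which banks could be added.

```python
# Shapley-value allocation of a system-wide risk measure across banks.
# `system_risk` is a toy placeholder; the paper derives it from a network
# model of interconnected balance sheets.
from itertools import permutations

banks = ["A", "B", "C"]
standalone = {"A": 10.0, "B": 6.0, "C": 4.0}       # hypothetical loss contributions

def system_risk(coalition):
    # Toy super-additive risk measure: standalone losses plus a contagion
    # surcharge that grows with coalition size.
    base = sum(standalone[b] for b in coalition)
    return base * (1 + 0.1 * max(len(coalition) - 1, 0))

shapley = {b: 0.0 for b in banks}
orders = list(permutations(banks))
for order in orders:
    coalition = []
    for b in order:
        before = system_risk(coalition)
        coalition.append(b)
        shapley[b] += (system_risk(coalition) - before) / len(orders)

print(shapley)                                     # charges per bank
print(sum(shapley.values()), system_risk(banks))   # charges sum to total system risk
```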
Since World War II, direct stock ownership by households has largely been replaced by indirect stock ownership by financial institutions. We argue that tax policy is the driving force. Using long time-series from eight countries, we show that the fraction of household ownership decreases with measures of the tax benefits of holding stocks inside a pension plan. This finding is important for policy considerations on effective taxation and for financial economics research on the long-term effects of taxation on corporate finance and asset prices. JEL Classification: G10, G20, H22, H30 Keywords: Capital Gains Tax, Income Tax, Stock Ownership, Bond Ownership, Inflation, Bracket Creep, Pension Funds
Do firms buy their stock at bargain prices? : Evidence from actual stock repurchase disclosure
(2011)
We use new data from SEC filings to investigate how S&P 500 firms execute their open market repurchase programs. We find that smaller S&P 500 firms repurchase less frequently than larger firms, and at a price which is significantly lower than the average market price. Their repurchase activity is followed by a positive and significant abnormal return which lasts up to three months after the repurchase. These findings do not hold for large S&P 500 firms. Our interpretation is that small firms repurchase strategically, whereas the repurchase activity of large firms is more focused on the disbursement of free cash. JEL Classification: G14, G30, G35 Keywords: Stock Repurchases, Stock Buybacks, Payout Policy, Timing, Bid-Ask Spread, Liquidity
This paper studies the impact of the concentration of control, the type of controlling shareholder and the dividend tax preference of the controlling shareholder on dividend policy for a panel of 220 German firms over 1984-2005. While the concentration of control does not have an effect on the dividend payout, there is strong evidence that the type of controlling shareholder matters as family controlled firms have high dividend payouts whereas bank controlled firms have low dividend payouts. However, there is no evidence that the dividend preference of the large shareholder has an impact on the dividend decision. JEL Classification: G32, G35 Keywords: Dividend Policy, Payout Policy, Lintner Dividend Model, Tax Clientele Effects, Corporate Governance
This paper proposes a new approach for modeling investor fear after rare disasters. The key element is to take into account that investors’ information about fundamentals driving rare downward jumps in the dividend process is not perfect. Bayesian learning implies that beliefs about the likelihood of rare disasters drop to a much more pessimistic level once a disaster has occurred. Such a shift in beliefs can trigger massive declines in price-dividend ratios. Pessimistic beliefs persist for some time. Thus, belief dynamics are a source of apparent excess volatility relative to a rational expectations benchmark. Due to the low frequency of disasters, even an infinitely-lived investor will remain uncertain about the exact probability. Our analysis is conducted in continuous time and offers closed-form solutions for asset prices. We distinguish between rational and adaptive Bayesian learning. Rational learners account for the possibility of future changes in beliefs in determining their demand for risky assets, while adaptive learners take beliefs as given. Thus, risky assets tend to be lower-valued and price-dividend ratios vary less under adaptive versus rational learning for identical priors. Keywords: beliefs, Bayesian learning, controlled diffusions and jump processes, learning about jumps, adaptive learning, rational learning. JEL classification: D83, G11, C11, D91, E21, D81, C61
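A hedged sketch of the belief-updating mechanism, using a standard Gamma-Poisson setup for illustration rather than the paper's continuous-time model: with a Gamma prior over the disaster intensity, the posterior mean jumps up after each observed disaster and then decays only slowly, which is what keeps post-disaster valuations depressed.

```latex
% Illustrative Bayesian learning about a rare-disaster intensity lambda:
% Gamma prior, Poisson disaster count; not the paper's continuous-time setup.
\begin{align*}
  \lambda &\sim \mathrm{Gamma}(\alpha_0, \beta_0), \qquad
  N_t \mid \lambda \sim \mathrm{Poisson}(\lambda t),\\
  \lambda \mid N_t &\sim \mathrm{Gamma}(\alpha_0 + N_t,\ \beta_0 + t), \qquad
  \mathrm{E}[\lambda \mid N_t] = \frac{\alpha_0 + N_t}{\beta_0 + t}.
\end{align*}
% Each observed disaster raises the posterior mean by 1/(\beta_0 + t), and the
% revision fades only slowly as further disaster-free time accumulates.
```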
The euro of the European Monetary Union has done very well since its introduction. Price stability has been secured and the external value of the new currency is more than satisfactory. The confidence in it is also shown by its increasing use as a global reserve currency. It has been a stabilizing factor in the current crisis. The recent budgetary problems of some member states are principally not a problem of the Monetary Union. It is therefore in no way justified to speak of a "Euro-crisis". It is true, however, that the Monetary Union restricts the possibilities for member states to solve their financial problems, but it does not eliminate them so entirely that outside help would have become indispensable. The purchase of debt instruments of member states in financial distress by the ECB is questionable from an economic and, more importantly, from a legal point of view. The longer it continues, the less legally justifiable it is. Financial support for member states in severe financial distress might be acceptable as a temporary crisis resolution mechanism. A permanent support mechanism needs a basis in the primary law of the EU. The treatment of the risk of "sovereign" debt in the legal framework for financial institutions urgently needs improvement. In particular, the capital requirements for credit institutions have to be adjusted.
Based on Foucault’s analysis of German Neoliberalism and his thesis of ambiguity, the following paper draws a two-level distinction between individual and regulatory ethics. The individual ethics level – which has received surprisingly little attention – contains the Christian foundation of values and the liberal-Kantian heritage of so-called Ordoliberalism – as one variety of neoliberalism. The regulatory or formal-institutional ethics level, by contrast, refers to the ordoliberal framework of a socio-economic order. By differentiating these two levels of ethics incorporated in German Neoliberalism, it is feasible to distinguish dissimilar varieties of neoliberalism and to link Ordoliberalism to modern economic ethics. Furthermore, it allows a revision of the dominant reception of Ordoliberalism, which focuses solely on the formal-institutional level while mainly neglecting the individual ethics level.
The past thirty years have seen dramatic changes to the character of state membership regimes in which practices of easing access to membership for resident non-citizens, extending the franchise to expatriate citizens as well as, albeit in typically more limited ways, to resident non-citizens and an increasing toleration of dual nationality have become widespread. These processes of democratic inclusion, while variously motivated, represent an important trend in the contemporary political order in which we can discern two distinct shifts. The first concerns membership as a status and is characterised in terms of the movement from a simple distinction between single-nationality citizens and single-nationality aliens to a more complex structure of state membership in which we also find dual nationals and denizens (Baubock, 2007a:2395-6). The second shift relates to voting rights and is marked by the movement from the requirement that voting rights are grounded in both citizenship and residence to the relaxing of the joint character of this requirement such that citizenship or residence now increasingly serve as a basis for, at least partial, enfranchisement. In the light of these transformations, it is unsurprising that normative engagement with transnational citizenship – conceived in terms of the enjoyment of membership statuses in two (or more) states – has focused on the issues of access to, and maintenance of, national citizenship, on the one hand, and entitlement to voting rights, on the other hand.
"Buffer-stock" models of saving are now standard in the consumption literature. This paper builds theoretical foundations for rigorous understanding of the main features of such models, including the existence of a target wealth ratio and the proposition that aggregate consumption growth equals aggregate income growth in a small open economy populated by buffer stock savers. JEL Classification: D81, D91, E21 Keywords: Precautionary Saving, Buffer Stock Saving, Marginal Propensity to Consume, Permanent Income Hypothesis
Measuring confidence and uncertainty during the financial crisis: evidence from the CFS survey
(2011)
The CFS survey covers the individual situations of banks and other companies in the financial sector during the financial crisis. This provides a rare opportunity to analyze appraisals, expectations and forecast errors of the core sector of the recent turmoil. Following standard ways of aggregating individual survey data, we first present and introduce the CFS survey by comparing CFS indicators of confidence and predicted confidence to ifo and ZEW indicators. The major contribution is the analysis of several indicators of uncertainty. In addition to well established concepts, we introduce innovative measures based on the skewness of forecast errors and on the share of ‘no response’ replies. Results show that the uncertainty indicators fit quite well with the patterns of real and financial time series over the period 2007 to 2010. Keywords: Business Sentiment, Financial Crisis, Survey Indicator, Uncertainty. CFS Working Paper Series, 2010, 18; revised version July 2011.
I investigate the effect of transparency on the borrowing costs of Emerging Markets Economies. Transparency is measured by whether or not the countries publish the IMF Article IV Staff report and the Reports on the Observance of Standards and Codes (ROSC). Using difference-in-difference estimation, I study the effect on the sovereign credit spreads for 18 Emerging Market Economies over the period 1999-2007. I show that the effect of publishing the Article IV reports is negligible while publishing the ROSC matters, leading to a reduction in the spreads of over 15% in the samples 1999-2006 and 1999-2007. JEL Classification: F33, F34, G15 Keywords: Sovereign Bond Markets, Transparency, Emerging Market Economies
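A hedged sketch of the difference-in-differences design in generic two-way fixed-effects notation; variable names here are illustrative, not the paper's: the coefficient of interest is the interaction between being a publishing country and the post-publication period.

```latex
% Generic difference-in-differences specification (illustrative notation).
\[
  \mathrm{spread}_{it}
    = \alpha_i + \gamma_t
      + \delta\,\bigl(\mathrm{Publish}_i \times \mathrm{Post}_t\bigr)
      + X_{it}'\beta + \varepsilon_{it},
\]
% alpha_i: country fixed effects; gamma_t: time fixed effects; delta: the change
% in spreads after publication for publishing relative to non-publishing countries.
```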