Papers on pragmasemantics
(2009)
Optimality theory as used in linguistics (Prince & Smolensky, 1993/2004; Smolensky & Legendre, 2006) and cognitive psychology (Gigerenzer & Selten, 2001) is a theoretical framework that aims to integrate constraint-based knowledge representation systems, generative grammar, cognitive skills, and aspects of neural network processing. In recent years considerable progress has been made in overcoming the artificial separation between the disciplines of linguistics on the one hand, which are mainly concerned with the description of natural language competence, and the psychological disciplines on the other, which are interested in real language performance.
The semantics and pragmatics of natural language is a research topic that calls for an integration of philosophical, linguistic, and psycholinguistic aspects, including their neural underpinnings. Recent work on experimental pragmatics in particular (e.g. Noveck & Sperber, 2005; Garrett & Harnish, 2007) has shown that real progress in the area of pragmatics is not possible without using data from all available domains, including data from language acquisition and from actual language generation and comprehension performance. A conceivable research programme is to use the optimality theoretic framework to realize this integration.
Game theoretic pragmatics is a relatively young development in pragmatics. The idea of viewing communication as a strategic interaction between speaker and hearer is not new; it is already present in Grice's (1975) classical paper on conversational implicatures. What game theory offers is a mathematical framework in which strategic interaction can be precisely described. It is a leading paradigm in economics, as witnessed by a series of Nobel prizes in the field, and it is of growing importance to other disciplines of the social sciences. In linguistics, its main applications so far have been pragmatics and theoretical typology. For pragmatics, game theory promises a firm foundation, and a rigor which will hopefully allow pragmatic phenomena to be studied with the same precision as that achieved in formal semantics.
The development of game theoretic pragmatics is closely connected to the development of bidirectional optimality theory (Blutner, 2000). It can be easily seen that the game theoretic notion of a Nash equilibrium and the optimality theoretic notion of a strongly optimal form-meaning pair are closely related to each other. The main impulse that bidirectional optimality theory gave to research on game theoretic pragmatics stemmed from serious empirical problems that resulted from interpreting the principle of weak optimality as a synchronic interpretation principle.
In this volume, we have collected papers that are concerned with several aspects of game and optimality theoretic approaches to pragmatics.
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence, which as a consequence also yields soundness of the encodings with respect to a contextually defined notion of program equivalence. While these translations encode blocking into queuing and waiting, we also describe an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
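The buffer-from-futures direction can be illustrated outside the lambda calculus. Below is a minimal Python sketch (hypothetical, and simplified to one producer and one consumer; the paper's encodings handle full concurrency and are proved correct with respect to may- and must-convergence): a one-slot buffer built from handled futures, where blocking arises from reading a not-yet-resolved future.

    import threading

    class Future:
        """A handled future: read() blocks until some thread uses the
        handle, resolve(), to bind the value."""
        def __init__(self):
            self._done = threading.Event()
            self._value = None

        def resolve(self, value):          # the 'handle' side
            self._value = value
            self._done.set()

        def read(self):                    # touching the future blocks
            self._done.wait()
            return self._value

    class Buffer:
        """One-slot buffer encoded with handled futures (single producer,
        single consumer only): get() blocks by reading an unresolved value
        future; put() blocks on a permission future resolved when the
        slot is free."""
        def __init__(self):
            self._slot = Future()          # next value, initially absent
            self._free = Future()
            self._free.resolve(None)       # the slot starts out free

        def put(self, value):
            self._free.read()              # wait until the slot is free
            self._free = Future()
            self._slot.resolve(value)      # hand the value to the reader

        def get(self):
            value = self._slot.read()      # wait until a value arrives
            self._slot = Future()
            self._free.resolve(None)       # signal that the slot is free
            return value

    buf = Buffer()
    threading.Thread(target=lambda: [buf.put(i) for i in range(3)]).start()
    print([buf.get() for _ in range(3)])   # [0, 1, 2]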
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
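To make the observables concrete, here is a minimal sketch (illustrative only; the papers work with typed translations and arbitrary program contexts) of may- and must-convergence for a toy calculus with a binary choice operator and a diverging term:

    def steps(t):
        """All one-step reducts of term t; an empty list means t is a value.
        Terms: integers are values, 'omega' reduces to itself, and the
        tuple ('choice', a, b) nondeterministically reduces to a or b."""
        if t == 'omega':
            return ['omega']
        if isinstance(t, tuple) and t[0] == 'choice':
            return [t[1], t[2]]
        return []

    def may_must(t, depth=50):
        """Explore all reduction sequences up to `depth` steps and report
        (may-convergence, must-convergence); the depth bound means
        divergence is only detected up to this approximation."""
        frontier, may = [t], False
        for _ in range(depth):
            nxt = []
            for s in frontier:
                reducts = steps(s)
                if not reducts:
                    may = True            # this sequence reached a value
                nxt.extend(reducts)
            frontier = nxt
            if not frontier:
                break                     # every sequence has converged
        return may, may and not frontier

    print(may_must(('choice', 1, 'omega')))   # (True, False)
    print(may_must(('choice', 1, 2)))         # (True, True)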
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus’ semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as nondeterminism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe’s proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules; and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which enables the proof tool of simulation. Unfortunately, a direct proof appears to be impossible.
The correctness proof proceeds by defining another calculus L, comprising the complex variants of the copy, case and seq reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and permits a proof that the transformations are correct and that the simple calculus LS, the calculus L, and the calculus LA all have the same contextual preorder.
In contrast to the US and, more recently, Europe, Japan appears to be unsuccessful in establishing new industries. An oft-cited example is Japan's practical invisibility in the global business software sector. The literature has ascribed Japan's weakness – or conversely, America's strength – to the specific institutional settings and competences of actors within the respective national innovation system. It has additionally been argued that, unlike the American innovation system with its proven ability to give birth to new industries, the inherent path dependency of the Japanese innovation system makes innovation and the establishment of new industries quite difficult. However, there are two notable weaknesses underlying current propositions postulating that only certain innovation systems enable the creation of new industries: first, they mistakenly confound context-specific with general empirical observations; and second, they grossly underestimate – or altogether fail to examine – the dynamics within innovation systems. This paper shows that it is precisely the dynamics within innovation systems – dynamics founded on the concept of path plasticity – which have enabled Japan to charge forward as a global leader in two highly innovative fields: the game software sector and the biotechnology industry.
European scholars, colonial administrators, missionaries, bibliophiles and others were the main collectors of Malay books in the nineteenth century, both in manuscript and in printed form. Among these persons were many well-known names in the field of Malay literature and culture, such as Raffles, Marsden, Crawfurd, Klinkert, van der Tuuk, von Dewall, Roorda, Favre, Maxwell, Overbeck, Wilkinson and Skeat, to name only a few. Their collections were often handed over to public libraries, where they form an important part of the relevant Oriental or Southeast Asian manuscript collections.
Knowledge of the intellectual culture of the Malay Peninsula, and of the Malay World in general, has therefore depended very much on these manuscripts and printed books, often collected by chance or in a rather unsystematic way. The collections strongly reflect the interests of their administrative or philologist collectors: court histories, genealogies of aristocratic lineages, law collections (adat-istiadat as well as undang-undang) and prose belles-lettres form the vast bulk of these collections, while Islamic religious texts and poetic forms popular in the 19th century (especially syair) are fairly underrepresented. Malay manuscripts and books located in religious institutions like mosques or pondok/pesantren schools have not been searched for; to this day there are virtually no systematic studies of these collections. Since, according to some statistics, religious texts make up about 20% of all existing Malay manuscripts, their neglect by European scholars leads to a distorted view of the literary culture in the Malay language.
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities for many dimensions from only a few samples is treated by using the Rényi mutual information of order two as the computational base. For this, the Grassberger-Takens correlation integral, which was developed for estimating probability densities in chaos theory, is used. Additionally, an adaptive procedure for computing the hypercube size is introduced, and for real-world applications the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for time series. A small blackbox-glassbox example shows how the relevant features and their time lags are determined in a time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. With this computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
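As a rough illustration of the core estimator, the following hypothetical Python sketch computes the order-two Rényi entropy from a correlation sum and scores features by I2(X;Y) = H2(X) + H2(Y) - H2(X,Y); the paper's adaptive hypercube size, missing-value treatment and ranking-based acceleration are omitted:

    import numpy as np

    def renyi_h2(data, eps):
        """Order-2 Renyi entropy via the correlation integral: H2 is
        approximately -log C(eps), where C(eps) is the fraction of sample
        pairs closer than eps in the maximum norm (same hypercube)."""
        n = len(data)
        dist = np.max(np.abs(data[:, None, :] - data[None, :, :]), axis=-1)
        c = np.mean(dist[np.triu_indices(n, k=1)] < eps)
        return -np.log(max(c, 1.0 / (n * (n - 1))))   # guard against log(0)

    def renyi_mi2(x, y, eps):
        """Order-2 Renyi mutual information I2 = H2(X) + H2(Y) - H2(X,Y)."""
        return renyi_h2(x, eps) + renyi_h2(y, eps) - renyi_h2(np.hstack([x, y]), eps)

    # score five candidate features against a target that depends
    # nonlinearly on feature 2 only
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 5))
    y = (X[:, 2] ** 2 + 0.1 * rng.normal(size=300)).reshape(-1, 1)
    scores = [renyi_mi2(X[:, [j]], y, eps=0.5) for j in range(5)]
    print(int(np.argmax(scores)))   # feature 2 should score highest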
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
We use a novel disaggregate sectoral euro area dataset with a regional breakdown that allows explicit estimation of the sectoral component of price changes (rather than interpreting the idiosyncratic component as sectoral as done in other papers). Employing a new method to extract factors from overlapping data blocks, we find for our euro area data set that the sectoral component explains much less of the variation in sectoral regional inflation rates and exhibits much less volatility than previous findings for the US indicate. Country- and region-specific factors play an important role in addition to the sector-specific factors. We conclude that sectoral price changes have a “geographical” dimension, as yet unexplored in the literature, that might lead to new insights regarding the properties of sectoral price changes.
We examine intra-day market reactions to news in stock-specific sentiment disclosures. Using pre-processed data from an automated news analytics tool based on linguistic pattern recognition, we extract information on the relevance as well as the direction of company-specific news. Information-implied reactions in returns, volatility as well as liquidity demand and supply are quantified by a high-frequency VAR model using 20 second intervals. Analyzing a cross-section of stocks traded at the London Stock Exchange (LSE), we find market-wide robust news-dependent responses in volatility and trading volume. However, this is only true if news items are classified as highly relevant. Liquidity supply reacts less distinctly due to a stronger influence of idiosyncratic noise. Furthermore, we find evidence of abnormal high-frequency returns after news in sentiments. JEL-Classification: G14, C32
This paper reviews the rationale for quantitative easing when central bank policy rates reach near zero levels in light of recent announcements regarding direct asset purchases by the Bank of England, the Bank of Japan, the U.S. Federal Reserve and the European Central Bank. Empirical evidence from the previous period of quantitative easing in Japan between 2001 and 2006 is presented. During this earlier period the Bank of Japan was able to expand the monetary base very quickly and significantly. Quantitative easing translated into a greater and more lasting expansion of M1 relative to nominal GDP. Deflation subsided by 2005. As soon as inflation appeared to stabilize near a rate of zero, the Bank of Japan rapidly reduced the monetary base as a share of nominal income as it had announced in 2001. The Bank was able to exit from extensive quantitative easing within less than a year. Some implications for the current situation in Europe and the United States are discussed.
We investigate the effects of both trust and sociability on stock market participation, whose roles have been examined separately in the existing finance literature. We use internationally comparable household data from the Survey of Health, Ageing and Retirement in Europe, supplemented with regional information on generalized trust from the World Value Survey and on specific trust in financial institutions from Eurobarometer. We show that trust and sociability have distinct and sizeable positive effects on stock market participation and that sociability is likely to partly balance the discouragement effect on stockholding induced by low generalized trust in the region of residence. We also show that specific trust in advice given by financial institutions represents a prominent factor for stock investing, compared to other tangible features of the banking environment. Probing further into various groups of households, we find that sociability can induce stockholding among the less well off in Sweden, Denmark, and Switzerland, where stock market participation is widespread. On the other hand, the effect of generalized trust is strong in countries with limited participation and low average trust like Austria, Spain, and Italy, offering an explanation for the remarkably low participation rates of the wealthy living therein.
We investigate US households’ direct investment in stocks, bonds and liquid accounts and their foreign counterparts, in order to identify the different participation hurdles affecting asset investment domestically and overseas. To this end, we estimate a trivariate probit model with three further selection equations that allows correlations among unobservables of all possible asset choices. Our results point to the existence of a second hurdle that stock owners need to overcome in order to invest in foreign stocks. Among stockholders, we show that economic resources, willingness to assume greater financial risks, and shopping around for the best investment opportunities all increase the probability of investing in foreign stocks. Furthermore, we find that households who seek financial advice from relatives, friends and work contacts are less likely to invest in foreign stocks. This result corroborates the conjecture by Hong et al. (2004) that social interactions should discourage investment in foreign stocks, given their limited popularity. On the other hand, we find little evidence for additional pecuniary or informational costs associated with investment in foreign bonds and liquid accounts. Finally, we show that ignoring correlations of unobservables across different asset choices can lead to very misleading results.
We reconsider the issue of price discovery in spot and futures markets. We use a threshold error correction model to allow for arbitrage operations to have an impact on the return dynamics. We estimate the model using quote midpoints, and we modify the model to account for time-varying transaction costs. We find that the futures market leads in the process of price discovery. The lead of the futures market is more pronounced in the presence of arbitrage signals. Thus, when the deviation between the spot and the futures market is large, the spot market tends to adjust to the futures market.
After the introduction of the euro in 1999, the debate on the financial stability architecture in the EU focused on the adequacy of a decentralised setting based on national responsibilities for preventing and managing crises. The Financial Services Action Plan in 1999 and the introduction of the Lamfalussy process for financial regulation and supervision in 2001 enhanced the decentralised arrangements by increasing significantly the level of legal harmonisation and supervisory cooperation. In addition, authorities adopted EU-wide MoUs to safeguard cross-border financial stability. In this context, the financial crisis has proved to be a major challenge to the ongoing process of European financial integration. In particular, momentous events such as the freezing of interbank markets, the loss of confidence in financial institutions, runs on banks and difficulties affecting cross-border financial groups questioned the ability of the EU financial stability architecture to contain threats to the integrated single financial market. The crisis has also demonstrated the importance of coupling micro-prudential supervision with a macro dimension aimed at a broad and effective monitoring and assessment of the potential risks covering all components of the financial system. In Europe, following the de Larosière Report, the European Commission has put forward proposals for establishing a European System of Financial Supervision and a European Systemic Risk Board, the latter body to be set up under the auspices of the ECB. While the details for the implementation of these structures still need to be spelt out, they should reinforce significantly – ten years after the introduction of the euro – the financial stability architecture at the EU level.
Misselling through agents
(2009)
This paper analyzes the implications of the inherent conflict between two tasks performed by direct marketing agents: prospecting for customers and advising on the product's "suitability" for the specific needs of customers. When structuring sales-force compensation, firms trade off the expected losses from "misselling" unsuitable products with the agency costs of providing marketing incentives. We characterize how the equilibrium amount of misselling (and thus the scope of policy intervention) depends on features of the agency problem including: the internal organization of a firm's sales process, the transparency of its commission structure, and the steepness of its agents' sales incentives. JEL Classification: D18 (Consumer Protection), D83 (Search; Learning; Information and Knowledge), M31 (Marketing), M52 (Compensation and Compensation Methods and Their Effects).
This paper considers a firm that has to delegate to an agent, such as a mortgage broker or a security dealer, the twin tasks of approaching and advising customers. The main contractual restriction, in particular in light of related research in Inderst and Ottaviani (2007), is that the firm can only compensate the agent through commissions. This standard contracting restriction has the following key implications. First, the firm can only ensure internal compliance to a "standard of sales", in terms of advice for the customer, if this standard is not too high. Second, if this is still feasible, then a higher standard is associated with higher, instead of lower, sales commissions. Third, once the limit for internal compliance is approached, tougher regulation and prosecution of "misselling" have (almost) no effect on the prevailing standard. Besides having practical implications, in particular on how to (re-)regulate the sale of financial products, the novel model, which embeds a problem of advice into a framework with repeated interactions, may also be of separate interest for future work on sales force compensation. JEL Classification: D18 (Consumer Protection), D83 (Search; Learning; Information and Knowledge), M31 (Marketing), M52 (Compensation and Compensation Methods and Their Effects).
This article shows that investors financing a portfolio of projects may use the depth of their financial pockets to overcome entrepreneurial incentive problems. Competition for scarce informed capital at the refinancing stage strengthens investors’ bargaining positions. And yet, entrepreneurs’ incentives may be improved, because projects funded by investors with “shallow pockets” must have not only a positive net present value at the refinancing stage, but one that is higher than that of competing portfolio projects. Our article may help understand provisions used in venture capital finance that limit a fund’s initial capital and make it difficult to add more capital once the initial venture capital fund is raised. (JEL G24, G31)
We analyze how two key managerial tasks interact: that of growing the business through creating new investment opportunities and that of providing accurate information about these opportunities in the corporate budgeting process. We show how this interaction endogenously biases managers toward overinvesting in their own projects. This bias is exacerbated if managers compete for limited resources in an internal capital market, which provides us with a novel theory of the boundaries of the firm. Finally, managers of more risky and less profitable divisions should obtain steeper incentives to facilitate efficient investment decisions.
We present a simple model of personal finance in which an incumbent lender has an information advantage vis-à-vis both potential competitors and households. In order to extract more consumer surplus, a lender with sufficient market power may engage in "irresponsible" lending, approving credit even if this is knowingly against a household’s best interest. Unless rival lenders are equally well informed, competition may reduce welfare. This holds, in particular, if less informed rivals can free ride on the incumbent’s superior screening ability.
This paper argues that banks must be sufficiently levered to have first-best incentives to make new risky loans. This result, which is at odds with the notion that leverage invariably leads to excessive risk taking, derives from two key premises that focus squarely on the role of banks as informed lenders. First, banks finance projects that they do not own, which implies that they cannot extract all the profits. Second, banks conduct a credit risk analysis before making new loans. Our model may help understand why banks take on additional unsecured debt, such as unsecured deposits and subordinated loans, over and above their existing deposit base. It may also help understand why banks and finance companies have similar leverage ratios, even though the latter are not deposit takers and hence not subject to the same regulatory capital requirements as banks.
This paper shows that active investors, such as venture capitalists, can affect the speed at which new ventures grow. In the absence of product market competition, new ventures financed by active investors grow faster initially, though in the long run those financed by passive investors are able to catch up. By contrast, in a competitive product market, new ventures financed by active investors may prey on rivals that are financed by passive investors by “strategically overinvesting” early on, resulting in long-run differences in investment, profits, and firm growth. The value of active investors is greater in highly competitive industries as well as in industries with learning curves, economies of scope, and network effects, as is typical for many “new economy” industries. For such industries, our model predicts that start-ups with access to venture capital may dominate their industry peers in the long run. JEL Classifications: G24; G32 Keywords: Venture capital; dynamic investment; product market competition
We study a model of “information-based entrenchment” in which the CEO has private information that the board needs to make an efficient replacement decision. Eliciting the CEO’s private information is costly, as it implies that the board must pay the CEO both higher severance pay and higher on-the-job pay. While higher CEO pay is associated with higher turnover in our model, there is too little turnover in equilibrium. Our model makes novel empirical predictions relating CEO turnover, severance pay, and on-the-job pay to firm-level attributes such as size, corporate governance, and the quality of the firm’s accounting system.
Corporate borrowers care about the overall riskiness of a bank’s operations as their continued access to credit may rely on the bank’s ability to roll over loans or to expand existing credit facilities. As we show, a key implication of this observation is that increasing competition among banks should have an asymmetric impact on banks’ incentives to take on risk: Banks that are already riskier will take on yet more risk, while their safer rivals will become even more prudent. Our results offer new guidance for bank supervision in an increasingly competitive environment and may help to explain existing, ambiguous findings on the relationship between competition and risk-taking in banking. Furthermore, our results stress the beneficial role that competition can have for financial stability as it turns a bank’s "prudence" into an important competitive advantage.
This paper presents a novel model of the lending process that takes into account that loan officers must spend time and effort to originate new loans. Besides generating predictions on loan officers’ compensation and its interaction with the loan review process, the model sheds light on why competition could lead to excessively low lending standards. We also show how more intense competition may hasten the adoption of credit scoring. More generally, hard-information lending techniques such as credit scoring make it possible to give loan officers high-powered incentives without compromising the integrity and quality of the loan approval process. Finally, the model is applied to study the implications of loan sales for the adopted lending process and lending standard.
This paper analyses the regulatory framework which applies to the determination of directors’ remuneration in Europe and examines the extent to which European firms follow best practices in corporate governance in this area, drawing on an empirical analysis of the governance systems that European firms adopt in setting remuneration and, in particular, on an empirical assessment of their diverging approaches to disclosure. These divergences persist despite recent reforms. After an examination of the link between optimal remuneration, corporate governance and regulation and an assessment of how regulatory reform has evolved in this area, the paper provides an overview of national laws and best practice corporate governance recommendations across the Member States, following the adoption of the important EC Recommendations on directors’ remuneration and on the role of non-executive directors in 2004 and 2005, respectively. This overview is largely based on the answers to questionnaires sent to legal experts from seventeen European Member States. The paper also provides an empirical analysis of governance practices and, in particular, firm disclosure of directors’ remuneration in Europe’s largest 300 listed firms by market capitalisation. The paper reveals that, notwithstanding a swathe of reforms across the Member States in recent years and related harmonisation efforts, disclosure levels still vary from country to country and are strongly dependent on the existence of regulations and best practice guidelines in the firm’s home Member State. Convergence in disclosure practices is not strong; only a few basic standards are followed by the majority of the firms examined and there is strong divergence with respect to most of the criteria considered in the study. Consistent with previous research, our study reveals clear differences not only with respect to remuneration disclosure, but also with respect to shareholder engagement and the board’s role in the remuneration process and in setting remuneration guidelines. Ownership structures still ‘matter’; these divergences tend to follow different corporate governance systems and, in particular, the dispersed ownership/block-holding ownership divide. They do not appear to have been smoothed since the EC Company Law Action Plan was launched and notwithstanding the harmonisation that has been attempted in this field. Keywords: Directors’ remuneration, corporate governance, disclosure, European regulation JEL Classifications: G30, G38, J33, K22, M52
Lessons from the crisis
(2009)
A lot happened even before the perceived beginning of this crisis in 2007, so although the events are recent, I will give an overview from a US perspective of the period from 2001 to date, in our search for the lessons to be learned. Much of it is probably familiar, but worth revisiting. I will break this necessarily simplified account into three stages: first, a look at the key factors that led to the increasing riskiness of US home mortgages; second, how those risks were transmitted as securities from US housing lenders to institutional investors around the globe; and third, how those risks led to huge losses and created a credit crunch that moved the impact from the financial economy to the real economy and produced a severe recession. Then we will have a factual foundation for deriving the lessons that ought to be taken away from this very expensive experience.
Recent evaluations of the fiscal stimulus packages enacted in the United States and Europe, such as Cogan, Cwik, Taylor and Wieland (2009) and Cwik and Wieland (2009), suggest that the GDP effects will be modest due to crowding-out of private consumption and investment. Corsetti, Meier and Mueller (2009a,b) argue that spending shocks are typically followed by consolidations with substantive spending cuts, which enhance the short-run stimulus effect. This note investigates the implications of this argument for the estimated impact of the recent stimulus packages and the case for discretionary fiscal policy.
The global financial crisis has led to a renewed interest in discretionary fiscal stimulus. Advocates of discretionary measures emphasize that government spending can stimulate additional private spending — the so-called Keynesian multiplier effect. Thus, we investigate whether the discretionary spending announced by euro area governments for 2009 and 2010 is likely to boost euro area GDP by more than one for one. Because of modeling uncertainty, it is essential that such policy evaluations be robust to alternative modeling assumptions and different parameterizations. Therefore, we use five different empirical macroeconomic models with Keynesian features such as price and wage rigidities to evaluate the impact of fiscal stimulus. Four of them suggest that the planned increase in government spending will reduce private spending for consumption and investment purposes significantly. If announced government expenditures are implemented with delay, the initial effect on euro area GDP, when stimulus is most needed, may even be negative. Traditional Keynesian multiplier effects only arise in a model that ignores the forward-looking behavioral response of consumers and firms. Using a multi-country model, we find that spillovers between euro area countries are negligible or even negative, because direct demand effects are offset by the indirect effect of euro appreciation.
Despite their importance in modern electronic trading, virtually no systematic empirical evidence on the market impact of incoming orders exists. We quantify the short-run and long-run price effect of posting a limit order by proposing a high-frequency cointegrated VAR model for ask and bid quotes and several levels of order book depth. Price impacts are estimated by means of appropriate impulse response functions. Analyzing order book data of 30 stocks traded at Euronext Amsterdam, we show that limit orders have significant market impacts and cause a dynamic (and typically asymmetric) rebalancing of the book. The strength and direction of quote and spread responses depend on the incoming orders’ aggressiveness, their size and the state of the book. We show that the effects are qualitatively quite stable across the market. Cross-sectional variations in the magnitudes of price impacts are well explained by the underlying trading frequency and relative tick size.
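For flavor, here is a heavily simplified stand-in for this design (a plain VAR on simulated quote and depth changes with hypothetical column names, rather than the paper's cointegrated high-frequency VAR on Euronext order book data):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # hypothetical order-book series: best quotes and top-of-book depth
    rng = np.random.default_rng(0)
    levels = pd.DataFrame(rng.normal(size=(1000, 4)).cumsum(axis=0),
                          columns=['ask', 'bid', 'ask_depth', 'bid_depth'])
    changes = levels.diff().dropna()          # model quote/depth changes

    res = VAR(changes).fit(maxlags=10, ic='aic')
    irf = res.irf(periods=20)                 # impulse response analysis
    # response of the ask quote to a one-s.d. orthogonalized shock in
    # bid-side depth, traced over the next 20 steps
    ask = changes.columns.get_loc('ask')
    bid_depth = changes.columns.get_loc('bid_depth')
    print(irf.orth_irfs[:, ask, bid_depth])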
The recent financial crisis has led to a major debate about fair-value accounting. Many critics have argued that fair-value accounting, often also called mark-to-market accounting, has significantly contributed to the financial crisis or, at least, exacerbated its severity. In this paper, we assess these arguments and examine the role of fair-value accounting in the financial crisis using descriptive data and empirical evidence. Based on our analysis, it is unlikely that fair-value accounting added to the severity of the current financial crisis in a major way. While there may have been downward spirals or asset-fire sales in certain markets, we find little evidence that these effects are the result of fair-value accounting. We also find little support for claims that fair-value accounting leads to excessive write-downs of banks’ assets. If anything, empirical evidence to date points in the opposite direction, that is, towards overvaluation of bank assets.
In this paper we investigate the comparative properties of empirically-estimated monetary models of the U.S. economy. We make use of a new data base of models designed for such investigations. We focus on three representative models: the Christiano, Eichenbaum, Evans (2005) model, the Smets and Wouters (2007) model, and the Taylor (1993a) model. Although the three models differ in terms of structure, estimation method, sample period, and data vintage, we find surprisingly similar economic impacts of unanticipated changes in the federal funds rate. However, the optimal monetary policy responses to other sources of economic fluctuations are widely different in the different models. We show that simple optimal policy rules that respond to the growth rate of output and smooth the interest rate are not robust. In contrast, policy rules with no interest rate smoothing and no response to the growth rate, as distinct from the level, of output are more robust. Robustness can be improved further by optimizing rules with respect to the average loss across the three models.
We introduce a regularization and blocking estimator for well-conditioned high-dimensional daily covariances using high-frequency data. Using the Barndorff-Nielsen, Hansen, Lunde, and Shephard (2008a) kernel estimator, we estimate the covariance matrix block-wise and regularize it. A data-driven grouping of assets of similar trading frequency ensures the reduction of data loss due to refresh time sampling. In an extensive simulation study mimicking the empirical features of the S&P 1500 universe we show that the ’RnB’ estimator yields efficiency gains and outperforms competing kernel estimators for varying liquidity settings, noise-to-signal ratios, and dimensions. An empirical application of forecasting daily covariances of the S&P 500 index confirms the simulation results.
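The regularization idea can be sketched generically (a hypothetical eigenvalue-clipping step; the paper's ’RnB’ estimator combines block-wise realized-kernel estimation with regularization in a specific, data-driven way):

    import numpy as np

    def regularize(cov, floor_frac=1e-4):
        """Return a well-conditioned version of a (possibly singular or
        indefinite) covariance estimate by flooring its eigenvalues at a
        small fraction of the largest one."""
        vals, vecs = np.linalg.eigh(cov)
        vals = np.clip(vals, floor_frac * vals.max(), None)
        return (vecs * vals) @ vecs.T          # rebuild: V diag(vals) V'

    # a sample covariance with more assets than observations is singular
    X = np.random.default_rng(2).normal(size=(50, 100))
    S = np.cov(X, rowvar=False)
    print(np.linalg.cond(S), np.linalg.cond(regularize(S)))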
In the New-Keynesian model, optimal interest rate policy under uncertainty is formulated without reference to monetary aggregates as long as certain standard assumptions on the distributions of unobservables are satisfied. The model has been criticized for failing to explain common trends in money growth and inflation, and it has therefore been argued that money should be used as a cross-check in policy formulation (see Lucas (2007)). We show that the New-Keynesian model can explain such trends if one allows for the possibility of persistent central bank misperceptions. Such misperceptions motivate the search for policies that include additional robustness checks. In earlier work, we proposed an interest rate rule that is near-optimal in normal times but includes a cross-check with monetary information. In case of unusual monetary trends, interest rates are adjusted. In this paper, we show in detail how to derive the appropriate magnitude of the interest rate adjustment following a significant cross-check with monetary information, when the New-Keynesian model is the central bank’s preferred model. The cross-check is shown to be effective in offsetting persistent deviations of inflation due to central bank misperceptions. Keywords: Monetary Policy, New-Keynesian Model, Money, Quantity Theory, European Central Bank, Policy Under Uncertainty
The mitsva reflects one of the most pivotal concepts of Judaism. It sanctifies those who answer its calling, and the Jew and Judaism are unique and “chosen” because of it. In this article we highlight the various ways the mitsvot and Halakha transform us and mold the Jewish personality: (a) by converting the “ought” into a “must”; (b) by transforming the daily prosaic acts of man into sacred deeds; (c) by converting simple chronological, linear time into special moments of kedusha. The mitsva involves the total personality – “head, heart and hand” – and makes the body equally important with the soul in the service of Hashem. Sanctification is accomplished both through deed and thought. The Torah wants the Jew to build an environment which strengthens his religious values and has designated Erets Yisrael as the most fitting place for kedusha.
We propose a variation of online paging in two-level memory systems where pages in the fast cache get modified and therefore have to be explicitly written back to the slow memory upon eviction. For increased performance, up to alpha arbitrary pages can be moved from the cache to the slow memory within a single joint eviction, whereas fetching pages from the slow memory is still performed on a one-by-one basis. The main objective in this new alpha-paging scenario is to bound the number of evictions. After providing experimental evidence that alpha-paging can adequately model flash-memory devices in the context of translation layers, we turn to the theoretical connections between alpha-paging and standard paging. We give lower bounds for deterministic and randomized alpha-paging algorithms. For deterministic algorithms, we show that an adaptation of LRU is strongly competitive, while for the randomized case we show that by adapting the classical Mark algorithm we get an algorithm with a competitive ratio larger than the lower bound by a multiplicative factor of approximately 1.7.
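A toy version of the deterministic setting might look as follows (a hypothetical sketch that evicts plain LRU victims and counts joint evictions; the strongly competitive adaptation of LRU analyzed in the paper may choose evictions differently):

    from collections import OrderedDict

    def alpha_lru(requests, cache_size, alpha):
        """Serve a page-request sequence with an LRU-style alpha-paging
        policy: on a miss with a full cache, move the alpha least recently
        used pages to slow memory in one joint eviction, then fetch the
        requested page; return the number of joint evictions."""
        cache = OrderedDict()                  # ordered from LRU to MRU
        evictions = 0
        for page in requests:
            if page in cache:
                cache.move_to_end(page)        # refresh recency on a hit
                continue
            if len(cache) >= cache_size:       # miss with a full cache
                for _ in range(min(alpha, len(cache))):
                    cache.popitem(last=False)  # evict LRU pages jointly
                evictions += 1                 # ... counted as one event
            cache[page] = True                 # fetch one page at a time
        return evictions

    print(alpha_lru([1, 2, 3, 4, 1, 2, 5, 1, 2], cache_size=3, alpha=2))  # 2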
We model the dynamics of ask and bid curves in a limit order book market using a dynamic semiparametric factor model. The shape of the curves is captured by a factor structure which is estimated nonparametrically. Corresponding factor loadings are assumed to follow multivariate dynamics and are modelled using a vector autoregressive model. Applying the framework to four stocks traded at the Australian Stock Exchange (ASX) in 2002, we show that the suggested model captures the spatial and temporal dependencies of the limit order book. Relating the shape of the curves to variables reflecting the current state of the market, we show that the recent liquidity demand has the strongest impact. In an extensive forecasting analysis we show that the model is successful in forecasting the liquidity supply over various time horizons during a trading day. Moreover, it is shown that the model’s forecasting power can be used to improve optimal order execution strategies.
This paper explores the relationship between equity prices and the current account for 17 industrialized countries in the period 1980-2007. Based on a panel vector autoregression, I compare the effects of equity price shocks to those originating from monetary policy and exchange rates. While monetary policy shocks have a limited impact, shocks to equity prices have sizeable effects. The results suggest that equity prices impact on the current account through their effects on real activity and exchange rates. Furthermore, shocks to exchange rates play a key role as well. Keywords: current account fluctuations, equity prices, panel vector autoregression
The risk of deflation
(2009)
This paper was prepared for the meeting on Financial Regulation and Macroeconomic Stability: Key Issues for the G20, organised by the CEPR and the Reinventing Bretton Woods Committee, London, 31 January 2009.

Introduction: The onset of financial instability in August 2007, which quickly spread across the world, raises a number of questions for policy makers. First, what are the roots of the crisis? Many factors have been emphasized in the debate, including the opacity of complex financial products; the excessive confidence in ratings; weak risk management by financial institutions; massive reliance on wholesale funding; and the presumption that markets would always be liquid. Furthermore, poorly understood incentive effects – arising from the originate-to-distribute model, remuneration policies and the period of low interest rates – are also widely seen as having played a role.

Second, how can a repetition of the crisis be avoided? Much attention is being focused on regulation and supervision of financial intermediaries. The G-20, at its summit in November 2008, noted that measures need to be taken in five areas: (i) financial market transparency and disclosure by firms need to be strengthened; (ii) regulation needs to be enhanced to ensure that all financial markets, products and participants are regulated or subject to oversight, as appropriate; (iii) the integrity of financial markets should be improved by bolstering investor and consumer protection, avoiding conflicts of interest, and by promoting information sharing; (iv) international cooperation among regulators must be enhanced; and (v) international financial institutions must be reformed to better reflect changing economic weights in the world economy in order to increase the legitimacy and effectiveness of these institutions.

Third, how can the consequences for economic activity be minimized? Many of the adverse developments in financial markets – in particular the collapse of term interbank markets – reflect deeply entrenched perceptions of counterparty risk. Prompt and far-reaching action to support the financial system, in particular the infusion of equity capital in financial institutions to reduce counterparty risk and get credit to flow again, is essential in order to restore market functioning. A particular risk at present is that the rapid decline in inflation in many countries in recent months will turn into deflation with highly adverse real economic developments.

This background paper considers how large the risk of deflation may be and discusses what policy can do to reduce it. It is organized as follows. Section 2 defines deflation and discusses downward nominal wage rigidities and the zero lower bound on interest rates. While these factors are frequently seen as two reasons why deflation can be associated with very poor economic outcomes, they should not be overemphasized. Section 3 looks at the current situation. Inflation expectations and forecasts in the subset of economies we look at (the euro area, the UK and the US) are positive, indicating that deflation is not expected. This does not imply that the current concerns about deflation are unwarranted, only that the public expects the central bank to be successful in avoiding deflation. The section also looks at the evolution of headline and “core” inflation, focusing on data from the US and the euro area. Section 4 reviews how monetary and fiscal policy can be conducted to ensure that deflation is avoided. Section 5 briefly discusses special issues arising in emerging market economies. Finally, Section 6 offers some conclusions. An Appendix discusses deflation episodes in the period 1882-1939.
This paper examines the sustainability of the currency board arrangements in Argentina and Hong Kong. We employ a Markov switching model with two regimes to infer the exchange rate pressure due to economic fundamentals and market expectations. The empirical results suggest that economic fundamentals and expectations are key determinants of a currency board’s sustainability. We also show that the government’s credibility played a more important role in Argentina than in Hong Kong. The trade surplus, real exchange rate and inflation rate were more important drivers of the sustainability of the Hong Kong currency board.
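Generically, the econometric device can be sketched as follows (hypothetical simulated data and a plain two-regime switching model with switching variance via statsmodels, not the paper's exact specification of exchange rate pressure):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # hypothetical exchange-market-pressure index: a calm low-variance
    # regime interrupted by a high-mean, high-variance crisis spell
    rng = np.random.default_rng(3)
    emp = pd.Series(np.concatenate([rng.normal(0.0, 0.5, 80),
                                    rng.normal(1.5, 2.0, 40),
                                    rng.normal(0.0, 0.5, 80)]))

    mod = sm.tsa.MarkovRegression(emp, k_regimes=2, switching_variance=True)
    res = mod.fit()
    # smoothed probability of each regime, period by period; one column
    # should track the simulated crisis window (observations 80-119)
    print(res.smoothed_marginal_probabilities.iloc[75:85])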
The objective of this paper is to test the hypothesis that financially constrained firms in particular lease a higher share of their assets in order to mitigate problems of asymmetric information. The assumptions are tested in a GMM framework which simultaneously controls for endogeneity problems and firms’ fixed effects. We find that the share of total annual lease expenses attributable to either finance or operating leases is considerably higher for financially strained as well as for small and fast-growing firms – those likely to face higher agency-cost premiums on marginal financing. Furthermore, our results confirm the substitution of leasing and debt financing for lessee firms. However, we find no evidence that firms use leasing as an instrument to reduce their tax burdens. Keywords: Leasing; financial constraints; capital structure; asymmetric information.
This paper investigates the impact of IT standardization on bank performance based on a panel of 457 German savings banks over the period from 1996 to 2006. We measure IT standardization as the fraction of IT expenses for centralized services over banks' total IT expenses. Bank efficiency, in turn, is measured by traditional accounting performance indicators as well as by cost and profit efficiencies that are estimated by a stochastic frontier approach. Our results suggest that IT standardization is conducive to cost efficiency. The relation is positive and robust for small and medium-sized banks but vanishes for very large banks. Furthermore, our study confirms the often-cited computer paradox by showing that total IT expenditures negatively impact cost efficiency and have no influence on bank profits. To the best of our knowledge, this paper is the first to empirically explore whether IT standardization enhances efficiency by employing genuine data on banks' IT expenditures. JEL Classification: C23, G21 Keywords: IT standardization, cost and profit efficiency, savings banks
Renewed interest in fiscal policy has increased the use of quantitative models to evaluate policy. Because of modeling uncertainty, it is essential that policy evaluations be robust to alternative assumptions. We find that models currently being used in practice to evaluate fiscal policy stimulus proposals are not robust. Government spending multipliers in an alternative empirically-estimated and widely-cited new Keynesian model are much smaller than in these old Keynesian models; the estimated stimulus is extremely small with GDP and employment effects only one-sixth as large.
A question of Mesorah?
(2009)
In the upcoming Krias Hatorah in Parshat Shoftim and Parshat Ki Savo there are a number of instances where the meaning of a phrase changes completely based on the pronunciation of a single word – םד – with either a Komatz or a Patah. Until recently, most Chumashim and Tikunim generally followed the famous Yaakov Ben Hayyim 1525 edition of Mikraot Gedolot published in Venice, which printed a seemingly inconsistent pattern in the pronunciation of the different occurrences of this word.
The budget constraint requires that, eventually, consumption must adjust fully to any permanent shock to income. Intuition suggests that, knowing this, optimizing agents will fully adjust their spending immediately upon experiencing a permanent shock. However, this paper shows that if consumers are impatient and are subject to transitory as well as permanent shocks, the optimal marginal propensity to consume out of permanent shocks (the MPCP) is strictly less than 1, because buffer stock savers have a target wealth-to-permanent-income ratio; a positive shock to permanent income moves the ratio below its target, temporarily boosting saving. Keywords: Risk, Uncertainty, Consumption, Precautionary Saving, Buffer Stock Saving, Permanent Income Hypothesis.
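The mechanism can be mimicked with a stylized rule-of-thumb saver (a hypothetical sketch, not the paper's optimizing buffer-stock model: consumption equals permanent income plus a fraction of the gap between wealth and its target level):

    import numpy as np

    rng = np.random.default_rng(0)
    T, target, kappa = 120, 1.5, 0.25   # horizon, wealth/income target, adjustment speed
    p = 1.0                              # permanent income
    w = target * p                       # start at the target wealth ratio
    ratio = []
    for t in range(T):
        if t == 60:
            p *= 1.10                    # one-off +10% permanent income shock
        y = p * rng.lognormal(0.0, 0.1)  # income with a transitory shock
        c = p + kappa * (w - target * p) # spend more/less as wealth is above/below target
        w += y - c
        ratio.append(c / p)

    # right after the shock the consumption/permanent-income ratio dips
    # below 1: the MPCP is less than one while the wealth buffer is rebuilt
    print([round(r, 3) for r in ratio[58:64]])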