This paper uses unique administrative data and a quasi-field experiment of exogenous allocation in Sweden to estimate medium- and longer-run effects on financial behavior from exposure to financially literate neighbors. It contributes evidence of a causal impact of exposure and of a social multiplier of financial knowledge, but also of unfavorable distributional aspects of the externalities. Exposure promotes saving in private retirement accounts and stockholding, especially when neighbors have economics or business education, but only for educated households and when interaction possibilities are substantial. The findings point to a transfer of knowledge rather than mere imitation or effects through labor, education, or mobility channels.
The authors present evidence of a new propagation mechanism for wealth inequality, based on differential responses, by education, to greater inequality at the start of economic life. The paper is motivated by a novel positive cross-country relationship between wealth inequality and perceptions of opportunity and fairness, which holds only for the more educated. Using unique administrative micro data and a quasi-field experiment of exogenous allocation of households, the authors find that exposure to a greater top 10% wealth share at the start of economic life in the country leads only the more educated placed in locations with above-median wealth mobility to attain higher wealth levels and position in the cohort-specific wealth distribution later on. Underlying this effect is greater participation in risky financial and real assets and in self-employment, with no evidence for a labor income, unemployment risk, or human capital investment channel. This differential response is robust to controlling for initial exposure to fixed or other time-varying local features, including income inequality, and consistent with self-fulfilling responses of the more educated to perceived opportunities, without evidence of imitation or learning from those at the top.
The authors identify U.S. monetary and fiscal dominance regimes using machine learning techniques. The algorithms are trained and verified on simulated data from Markov-switching DSGE models before they classify regimes from 1968 to 2017 using actual U.S. data. All machine learning methods outperform a standard logistic regression on the simulated data. Among these, the Boosted Ensemble Trees classifier yields the best results. The authors find clear evidence of fiscal dominance before Volcker. Monetary dominance is detected between 1984 and 1988, before a fiscally led regime emerges around the stock market crash and lasts until 1994. Until the beginning of the new century, monetary dominance is established, while the more recent evidence following the financial crisis is mixed, with a tendency towards fiscal dominance.
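As a rough illustration of the classification step (a minimal sketch, not the authors' pipeline; the features, labels, and hyperparameters below are placeholder assumptions):

```python
# Minimal sketch: fit a boosted-tree ensemble and a logistic-regression
# baseline on placeholder "simulated" regime data, then compare held-out
# accuracy. The features and labels are synthetic stand-ins, not output
# from the authors' Markov-switching DSGE models.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))  # e.g., moments of simulated fiscal/monetary series
# Nonlinear regime boundary, so trees should beat the linear benchmark.
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print("boosted trees:", GradientBoostingClassifier().fit(X_tr, y_tr).score(X_te, y_te))
print("logistic reg.:", LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te))
```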
This paper examines the sustainability of the currency board arrangements in Argentina and Hong Kong. We employ a Markov switching model with two regimes to infer the exchange rate pressure due to economic fundamentals and market expectations. The empirical results suggest that economic fundamentals and expectations are key determinants of a currency board’s sustainability. We also show that the government’s credibility played a more important role in Argentina than in Hong Kong. The trade surplus, real exchange rate and inflation rate were more important drivers of the sustainability of the Hong Kong currency board.
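In generic notation, the two-regime specification behind such an exercise can be written as (symbols illustrative; the paper's exact measurement equation may differ):

\[
\mathit{EMP}_t = \mu_{s_t} + \beta_{s_t}' x_t + \sigma_{s_t}\,\varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,1), \qquad s_t \in \{1,2\},
\]
\[
\Pr(s_t = j \mid s_{t-1} = i) = p_{ij},
\]

where \(\mathit{EMP}_t\) is exchange market pressure, \(x_t\) collects fundamentals and expectation proxies, and the filtered probabilities of the pressure regime \(s_t = 2\) gauge sustainability.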
Distributed ledger technology, especially in the form of publicly coordinated validation networks such as Ethereum and Bitcoin with their own monetary circuits, provides a revealing litmus test for current financial regulatory schemes. The paper highlights the interrelation between distributed coordination and the issuance of virtual currency to make sense of the function of this new monetary phenomenon. It then argues for regulating financial services built on the technology so as to ensure integrity standards. In this respect, it is useful to gear the development of a regulatory scheme towards existing financial regulatory principles. However, future regulatory measures must take the distributed nature of the platforms into account by relying on a "regulated self-regulation" of the community. Finally, the article focuses on the shortcomings of the current EU regulatory regimes, especially the regulatory frameworks for financial services, payment services and electronic money.
Our paper evaluates recent regulatory proposals mandating the deferral of bonus payments and claw-back clauses in the financial sector. We study a broadly applicable principal agent setting, in which the agent exerts effort for an immediately observable task (acquisition) and a task for which information is only gradually available over time (diligence). Optimal compensation contracts trade off the cost and benefit of delay resulting from agent impatience and the informational gain. Mandatory deferral may increase or decrease equilibrium diligence depending on the importance of the acquisition task. We provide concrete conditions on economic primitives that make mandatory deferral socially (un)desirable.
This paper applies structure-preserving doubling methods to solve the matrix quadratic underlying the recursive solution of linear DSGE models. We present and compare two Structure-Preserving Doubling Algorithms (SDAs) to other competing methods – the QZ method, a Newton algorithm, and an iterative Bernoulli approach – as well as the related cyclic and logarithmic reduction algorithms. Our comparison is completed using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium scale New Keynesian model of Smets and Wouters (2007) iteratively. We find that both SDAs perform very favorably relative to QZ, with generally more accurate solutions computed in less time. While we collect theoretical convergence results that promise quadratic convergence rates to a unique stable solution, the algorithms may fail to converge when there is a breakdown due to singularity of the coefficient matrices in the recursion. One of the proposed algorithms can overcome this problem by an appropriate (re)initialization. This SDA also performs particularly well in refining solutions from different methods or from nearby parameterizations.
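As a minimal sketch of this family of fixed-point solvers, the related cyclic reduction recursion (Bini and Meini) for the matrix quadratic \(A P^2 + B P + C = 0\) underlying \(x_t = P x_{t-1}\) can be coded as follows; note this is the related algorithm named in the abstract, not the authors' SDA implementation:

```python
# Hedged sketch: cyclic reduction for the matrix quadratic A P^2 + B P + C = 0.
# Like the SDAs, each pass roughly squares the approximation order (quadratic
# convergence); a singular B_k in the recursion is the breakdown case the
# abstract discusses.
import numpy as np

def cyclic_reduction(A, B, C, tol=1e-12, max_iter=50):
    """Return the minimal (stable) solvent P of A P^2 + B P + C = 0."""
    n = A.shape[1]
    Ak, Bk, Ck, Bhat = A.copy(), B.copy(), C.copy(), B.copy()
    for _ in range(max_iter):
        S = np.linalg.solve(Bk, np.hstack([Ak, Ck]))  # B_k^{-1} [A_k  C_k]
        SA, SC = S[:, :n], S[:, n:]
        Bhat = Bhat - Ak @ SC
        Bk = Bk - Ak @ SC - Ck @ SA
        Ak, Ck = -Ak @ SA, -Ck @ SC
        if np.linalg.norm(Ak) * np.linalg.norm(Ck) < tol:
            break
    return -np.linalg.solve(Bhat, C)

# Scalar check: p^2 - 3p + 2 = 0 has stable root p = 1.
print(cyclic_reduction(np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]])))
```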
This paper considers a firm that has to delegate to an agent, such as a mortgage broker or a security dealer, the twin tasks of approaching and advising customers. The main contractual restriction, in particular in light of related research in Inderst and Ottaviani (2007), is that the firm can only compensate the agent through commissions. This standard contracting restriction has the following key implications. First, the firm can only ensure internal compliance to a "standard of sales", in terms of advice for the customer, if this standard is not too high. Second, if this is still feasible, then a higher standard is associated with higher, instead of lower, sales commissions. Third, once the limit for internal compliance is approached, tougher regulation and prosecution of "misselling" have (almost) no effect on the prevailing standard. Besides having practical implications, in particular on how to (re-)regulate the sale of financial products, the novel model, which embeds a problem of advice into a framework with repeated interactions, may also be of separate interest for future work on sales force compensation. JEL Classification: D18 (Consumer Protection), D83 (Search; Learning; Information and Knowledge), M31 (Marketing), M52 (Compensation and Compensation Methods and Their Effects).
This paper presents a novel model of the lending process that takes into account that loan officers must spend time and effort to originate new loans. Besides generating predictions on loan officers’ compensation and its interaction with the loan review process, the model sheds light on why competition could lead to excessively low lending standards. We also show how more intense competition may hasten the adoption of credit scoring. More generally, hard-information lending techniques such as credit scoring make it possible to give loan officers high-powered incentives without compromising the integrity and quality of the loan approval process. The model is finally applied to study the implications of loan sales for the adopted lending process and lending standard.
We present a simple model of personal finance in which an incumbent lender has an information advantage vis-a-vis both potential competitors and households. In order to extract more consumer surplus, a lender with sufficient market power may engage in "irresponsible" lending, approving credit even if this is knowingly against a household’s best interest. Unless rival lenders are equally well informed, competition may reduce welfare. This holds, in particular, if less informed rivals can free ride on the incumbent’s superior screening ability.
We analyze how two key managerial tasks interact: that of growing the business through creating new investment opportunities and that of providing accurate information about these opportunities in the corporate budgeting process. We show how this interaction endogenously biases managers toward overinvesting in their own projects. This bias is exacerbated if managers compete for limited resources in an internal capital market, which provides us with a novel theory of the boundaries of the firm. Finally, managers of more risky and less profitable divisions should obtain steeper incentives to facilitate efficient investment decisions.
We consider an imperfectly competitive loan market in which a local relationship lender has an information advantage vis-à-vis distant transaction lenders. Competitive pressure from the transaction lenders prevents the local lender from extracting the full surplus from projects, so that she inefficiently rejects marginally profitable projects. Collateral mitigates the inefficiency by increasing the local lender’s payoff from precisely those marginal projects that she inefficiently rejects. The model predicts that, controlling for observable borrower risk, collateralized loans are more likely to default ex post, which is consistent with the empirical evidence. The model also predicts that borrowers for whom local lenders have a relatively smaller information advantage face higher collateral requirements, and that technological innovations that narrow the information advantage of local lenders, such as small business credit scoring, lead to a greater use of collateral in lending relationships. JEL classification: D82; G21. Keywords: Collateral; Soft information; Loan market competition; Relationship lending
This paper shows that active investors, such as venture capitalists, can affect the speed at which new ventures grow. In the absence of product market competition, new ventures financed by active investors grow faster initially, though in the long run those financed by passive investors are able to catch up. By contrast, in a competitive product market, new ventures financed by active investors may prey on rivals that are financed by passive investors by “strategically overinvesting” early on, resulting in long-run differences in investment, profits, and firm growth. The value of active investors is greater in highly competitive industries as well as in industries with learning curves, economies of scope, and network effects, as is typical for many “new economy” industries. For such industries, our model predicts that start-ups with access to venture capital may dominate their industry peers in the long run. JEL Classifications: G24; G32 Keywords: Venture capital; dynamic investment; product market competition
We study a model of “information-based entrenchment” in which the CEO has private information that the board needs to make an efficient replacement decision. Eliciting the CEO’s private information is costly, as it implies that the board must pay the CEO both higher severance pay and higher on-the-job pay. While higher CEO pay is associated with higher turnover in our model, there is too little turnover in equilibrium. Our model makes novel empirical predictions relating CEO turnover, severance pay, and on-the-job pay to firm-level attributes such as size, corporate governance, and the quality of the firm’s accounting system.
This paper argues that banks must be sufficiently levered to have first-best incentives to make new risky loans. This result, which is at odds with the notion that leverage invariably leads to excessive risk taking, derives from two key premises that focus squarely on the role of banks as informed lenders. First, banks finance projects that they do not own, which implies that they cannot extract all the profits. Second, banks conduct a credit risk analysis before making new loans. Our model may help understand why banks take on additional unsecured debt, such as unsecured deposits and subordinated loans, over and above their existing deposit base. It may also help understand why banks and finance companies have similar leverage ratios, even though the latter are not deposit takers and hence not subject to the same regulatory capital requirements as banks.
This article shows that investors financing a portfolio of projects may use the depth of their financial pockets to overcome entrepreneurial incentive problems. Competition for scarce informed capital at the refinancing stage strengthens investors’ bargaining positions. And yet, entrepreneurs’ incentives may be improved, because projects funded by investors with “shallow pockets” must have not only a positive net present value at the refinancing stage, but one that is higher than that of competing portfolio projects. Our article may help understand provisions used in venture capital finance that limit a fund’s initial capital and make it difficult to add more capital once the initial venture capital fund is raised. (JEL G24, G31)
Misselling through agents
(2009)
This paper analyzes the implications of the inherent conflict between two tasks performed by direct marketing agents: prospecting for customers and advising on the product's "suitability" for the specific needs of customers. When structuring sales-force compensation, firms trade off the expected losses from "misselling" unsuitable products with the agency costs of providing marketing incentives. We characterize how the equilibrium amount of misselling (and thus the scope of policy intervention) depends on features of the agency problem including: the internal organization of a firm's sales process, the transparency of its commission structure, and the steepness of its agents' sales incentives. JEL Classification: D18 (Consumer Protection), D83 (Search; Learning; Information and Knowledge), M31 (Marketing), M52 (Compensation and Compensation Methods and Their Effects).
In this paper, we provide some reflections on the development of monetary theory and monetary policy over the last 150 years. Rather than presenting an encompassing overview, which would be overambitious, we simply concentrate on a few selected aspects that we view as milestones in the development of this subject. We also try to illustrate some of the interactions with the political and financial system, academic discussion and the views and actions of central banks.
In his speech at the conference "The SNB and its Watchers", Otmar Issing, member of the ECB Governing Council from its start in 1998 until 2006, looks back at more than twenty years of the conference series "The ECB and Its Watchers". In June 1999, Issing established this format together with Axel Weber, then Director of the Center for Financial Studies, to discuss the monetary policy strategy of the newly founded central bank on "neutral ground" with a broad circle of participants, that is, academics, bank economists and members of the media. At the annual conference, the ECB and its representatives would play an active role and engage in a lively exchange of views with the other participants. Over the years, Volker Wieland took over as organizer of the conference series, which was also adopted by other central banks. In his contribution at the second conference "The SNB and its Watchers", Issing summarizes the experience gained from over twenty years of the ECB Watchers Conference.
The Eurosystem and the Deutsche Bundesbank will incur substantial losses in 2023 that are likely to persist for several years. Due to the massive purchases of securities over the last ten years, especially of government bonds, banks' excess reserves have risen sharply. The resulting high interest payments to banks since the turnaround in monetary policy, combined with little income from the large-scale securities holdings, have led to massive criticism. Banks were said to be making "unfair" profits as a result, while the fiscal authorities had to forgo the previously customary transfers of central bank profits. Populist demands to limit bank profits, for example by drastically increasing the minimum reserve ratios in the Eurosystem to reduce excess reserves, would create severe new problems and are neither justified nor helpful. Ultimately, the EU member states have benefited for a very long time from historically low interest rates because of the Eurosystem's extraordinarily loose monetary policy and must now bear the flip side of the massive expansion of central bank balance sheets during the necessary period of monetary policy normalisation.
The so-called Troika, consisting of the EU-Commission, the European Central Bank (ECB) and the International Monetary Fund (IMF), was supposed to support the member states of the euro area which had been hit hard by a sovereign debt crisis. For that purpose, economic adjustment programs were drafted and monitored in order to prevent the break-up of the euro area and sovereign defaults. The cooperation of these institutions, which was born out of necessity, has been partly successful, but has also created persistent problems. With the further increase of public debt, especially in France and Italy, the danger of a renewed crisis in the euro area was growing. The European Stability Mechanism (ESM) together with the European Commission will replace the Troika in the future, following decisions of the EU Summit of December 2018. It shall play the role of a European Monetary Fund in the event of a crisis. The IMF, on the other hand, will no longer play an active role in solving sovereign debt crises in the euro area. The current course is, however, inadequate to tackle the core problems of the euro zone and to avoid future crises, which are mainly structural in nature and due to escalating public debt and lack of international competitiveness of some member countries. The current Corona crisis will aggravate the institutional problems. It has led to a common European fiscal response ("Next Generation EU"). This rescue and recovery program will not be financed by ESM resources and will not be monitored by the ESM. One important novelty of this package is that it involves the issuance of substantial common European debt.
Debt levels in the eurozone have reached new record highs. The member countries have tried to cushion the economic consequences of the corona pandemic with a massive increase in government spending. By the end of 2021, public debt in relation to GDP will approach 100% on average. There are various calls to abolish or soften the Maastricht rules limiting sovereign debt. We see the risk of a new sovereign debt crisis in this decade if it is not possible to bring public debt down to an acceptable level. Our new fiscal rule would be suitable and appropriate for this purpose, because the Maastricht criteria have evidently failed. In contrast to the rigid 3% Maastricht criterion, our rule is flexible and addresses the main problem: excessively high public debt ratios. It also lowers the existing incentives for highly indebted governments to exert expansionary pressure on monetary policy. If obeyed strictly, our rule reinforces the snowball effect and reduces the excessively high debt ratios within a manageable period, even if nominal growth is weak. This is confirmed by simulations with different scenarios as well as by the hypothetical application of the new fiscal rule to eurozone economies from 2022 to 2026. Finally, we take up the recent proposal by ESM economists to increase the permissible debt ratio from 60 to 100% of GDP in the eurozone.
The term structure of interest rates is crucial for the transmission of monetary policy to financial markets and the macroeconomy. Disentangling the impact of monetary policy on the components of interest rates, expected short rates and term premia, is essential to understanding this channel. To accomplish this, we provide a quantitative structural model with endogenous, time-varying term premia that are consistent with empirical findings. News about future policy, in contrast to unexpected policy shocks, has quantitatively significant effects on term premia along the entire term structure. This provides a plausible explanation for partly contradictory estimates in the empirical literature.
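In standard notation, the decomposition in question splits the \(n\)-period yield into average expected short rates and a term premium:

\[
y_t^{(n)} = \frac{1}{n}\sum_{k=0}^{n-1}\mathbb{E}_t\, i_{t+k} + \mathit{TP}_t^{(n)},
\]

so the question is how policy news and unexpected policy shocks move each component.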
Rising temperatures, falling ratings: the effect of climate change on sovereign creditworthiness
(2021)
How will a changing climate impact the creditworthiness of governments over the very long term? Financial markets need credible, digestible information on how climate change translates into material risks. To bridge the gap between climate science and real-world financial indicators, the authors simulate the effect of climate change on sovereign credit ratings for 108 countries, creating the world’s first climate-adjusted sovereign credit rating. The study offers a first methodological approach to extend the long-term rating to an ultra-long-term reality, aiming at long-term investors, but also regulators and rating agencies.
This paper proposes a new approach for modeling investor fear after rare disasters. The key element is to take into account that investors’ information about fundamentals driving rare downward jumps in the dividend process is not perfect. Bayesian learning implies that beliefs about the likelihood of rare disasters drop to a much more pessimistic level once a disaster has occurred. Such a shift in beliefs can trigger massive declines in price-dividend ratios. Pessimistic beliefs persist for some time. Thus, belief dynamics are a source of apparent excess volatility relative to a rational expectations benchmark. Due to the low frequency of disasters, even an infinitely-lived investor will remain uncertain about the exact probability. Our analysis is conducted in continuous time and offers closed-form solutions for asset prices. We distinguish between rational and adaptive Bayesian learning. Rational learners account for the possibility of future changes in beliefs in determining their demand for risky assets, while adaptive learners take beliefs as given. Thus, risky assets tend to be lower-valued and price-dividend ratios vary less under adaptive versus rational learning for identical priors. Keywords: beliefs, Bayesian learning, controlled diffusions and jump processes, learning about jumps, adaptive learning, rational learning. JEL classification: D83, G11, C11, D91, E21, D81, C61
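As a simple conjugate illustration of the learning mechanism (a Gamma–Poisson sketch; the paper's continuous-time prior structure may differ): with disaster arrivals Poisson with unknown intensity \(\lambda\) and prior \(\lambda \sim \mathrm{Gamma}(\alpha,\beta)\), observing \(N_t\) disasters by time \(t\) gives

\[
\lambda \mid N_t \sim \mathrm{Gamma}(\alpha + N_t,\; \beta + t), \qquad \mathbb{E}[\lambda \mid N_t] = \frac{\alpha + N_t}{\beta + t},
\]

so each disaster shifts the posterior mean up discretely by \(1/(\beta + t)\), and because \(N_t\) grows only slowly for rare events, posterior uncertainty about \(\lambda\) never fully resolves.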
This paper studies the macro-financial implications of using carbon prices to achieve ambitious greenhouse gas (GHG) emission reduction targets. My empirical evidence shows a 0.6% output loss and a 0.3% rise in inflation in response to a 1% carbon policy shock. I also observe financial instability and reallocation between the clean and highly polluting energy sectors. To better gauge the medium- and long-term impact, I use a medium-to-large macro-financial DSGE model with environmental features to show the recessionary effect of ambitious carbon pricing: a 40% reduction in GHG emissions causes a 0.7% output loss, while reaching a zero-emission economy in 30 years causes a 2.6% output loss. I document an amplifying effect of the banking sector along the transition path. The paper also uncovers the beneficial role of pre-announcing carbon policies, which mitigates inflation volatility by 0.2% at its peak; the results thus argue for well-communicated carbon policies and for investment to expand the green sector. My findings also stress the use of optimal green monetary and financial policies in mitigating the effects of transition risk and assisting the transition to a zero-emission world. Using a heterogeneous-agent approach with macroprudential tools, I find that optimal macroprudential tools can mitigate the output loss by 0.1% and the investment loss by 1%. Importantly, my work highlights the use of capital flow management in the green transition when a globally cooperative solution is challenging.
The authors embed human capital-based endogenous growth into a New-Keynesian model with search and matching frictions in the labor market and skill obsolescence from long-term unemployment. The model can account for key features of the Great Recession: a decline in productivity growth, the relative stability of inflation despite a pronounced fall in output (the "missing disinflation puzzle"), and a permanent gap between output and the pre-crisis trend output.
In the model, lower aggregate demand raises unemployment and the training costs associated with skill obsolescence. Lower employment hinders learning-by-doing, which slows down human capital accumulation, feeding back into even fewer vacancies than justified by the demand shock alone. These feedback channels mitigate the disinflationary effect of the demand shock while amplifying its contractionary effect on output. The temporary growth slowdown translates into output hysteresis (permanently lower output and labor productivity).
Central banks normally accept debt of their own governments as collateral in liquidity operations without reservations. This gives rise to a valuable liquidity premium that reduces the cost of government finance. The ECB is an interesting exception in this respect. It relies on external assessments of the creditworthiness of its member states, such as credit ratings, to determine eligibility and the haircut it imposes on such debt. The authors show how such features in a central bank’s collateral framework can give rise to cliff effects and multiple equilibria in bond yields and increase the vulnerability of governments to external shocks. This can potentially induce sovereign debt crises and defaults that would not otherwise arise.
This paper characterises optimal monetary policy in an economy with endogenous firm entry, a cash-in-advance constraint and preset wages. Firms must make profits to cover entry costs; thus the markup on goods prices is efficient. However, because leisure is not priced at a markup, the consumption-leisure tradeoff is distorted. Consequently, the real wage, hours and production are suboptimally low. Due to the labour requirement in entry, insufficient labour supply also implies that entry is too low. The paper shows that in the absence of fiscal instruments such as labour income subsidies, the optimal monetary policy under sticky wages achieves higher welfare than under flexible wages. The policy maker uses the money supply instrument to raise the real wage - the cost of leisure - above its flexible-wage level, in response to expansionary shocks to productivity and entry costs. This raises labour supply, expanding production and firm entry.
How do changes in market structure affect the US business cycle? We estimate a monetary DSGE model with endogenous firm/product entry and a translog expenditure function by Bayesian methods. The dynamics of net business formation allow us to identify the 'competition effect', by which desired price markups and inflation decrease when entry rises. We find that a 1 percent increase in the number of competitors lowers desired markups by 0.18 percent. Most of the cyclical variability in inflation is driven by markup fluctuations due to sticky prices or exogenous shocks rather than endogenous changes in desired markups.
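Under translog preferences of the kind referenced here (as in Bilbiie, Ghironi, and Melitz), the desired markup falls in the number of competitors; one common parameterization (an assumption of this sketch, not taken from the paper) is

\[
\mu_t = 1 + \frac{1}{\sigma N_t} \qquad\Longrightarrow\qquad \frac{\partial \ln \mu_t}{\partial \ln N_t} = -\frac{1}{1 + \sigma N_t},
\]

under which an elasticity of \(-0.18\) would correspond to \(\sigma N \approx 4.6\).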
This paper investigates the effect of a change in the informational environment of borrowers on the organizational design of bank lending. We use micro-data from a large multinational bank and exploit the sudden introduction of a credit registry, an information-sharing mechanism across banks, for a subset of borrowers. Using within-borrower and within-loan-officer variation in a difference-in-differences empirical design, we show that the expansion of the credit registry led to an improvement in the allocation of credit to affected borrowers. There was a concurrent change in the organizational structure of the bank that involved a dramatic increase in the delegation of lending decisions for affected borrowers to loan officers. We also find a significant expansion in the scope of activities of loan officers who deal primarily with affected borrowers, as well as of their superiors. There is suggestive evidence that larger banks in the economy were better able to implement changes similar to those at our bank. We argue that these patterns can be understood within the framework of incentive-based and information-processing-cost theories. Our findings could help rationalize why improvements in the information environment of borrowers may be altering the landscape of lending by moving decisions outside the boundaries of financial intermediaries.
There is substantial disagreement about the consequences of the Tax Cuts and Jobs Act (TCJA) of 2017, which constitutes the most extensive tax reform in the United States in more than 30 years. Using a large-scale two-country dynamic general equilibrium model with nominal rigidities, we find that the TCJA increases GDP by about 2% in the medium-run and by about 2.5% in the long-run. The short-run impact depends crucially on the degree and costs of variable capital utilization, with GDP effects ranging from 1 to 3%. At the same time, the TCJA does not pay for itself. In our analysis, the reform decreases tax revenues and raises the debt-to-GDP ratio by about 15 percentage points in the medium-run until 2025. We show that combining the TCJA with spending cuts can dampen the increase in government indebtedness without reducing its expansionary effect.
This paper develops and implements a backward and forward error analysis of, and condition numbers for, the numerical stability of the solutions of linear dynamic stochastic general equilibrium (DSGE) models. Comparing seven different solution methods from the literature, I demonstrate an economically significant loss of accuracy specifically in standard, generalized Schur (or QZ) decomposition based solution methods, resulting from large backward errors in solving the associated matrix quadratic problem. This is illustrated in the monetary macro model of Smets and Wouters (2007) and two production-based asset pricing models: a simple model of external habits with a readily available symbolic solution, and the model of Jermann (1998), which lacks such a symbolic solution. QZ-based numerical solutions miss the equity premium by up to several annualized percentage points for parameterizations that either match the chosen calibration targets or are close to those in the literature. While the numerical solution methods from the literature fail to give any indication of these potential errors, easily implementable backward-error metrics and condition numbers are shown to successfully warn of such potential inaccuracies. The analysis is then performed for a database of roughly 100 DSGE models from the literature and a large set of draws from the model of Smets and Wouters (2007). While economically relevant errors do not appear pervasive in these latter applications, accuracies that differ by several orders of magnitude persist.
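A minimal sketch of one such readily implementable backward-error metric for a computed solvent \(P\) of \(A P^2 + B P + C = 0\), following the normwise definition of Higham and Kim (2001); the paper's exact metrics and condition numbers may differ in detail:

```python
# Hedged sketch: normwise relative backward error of a computed DSGE
# solution P with A P^2 + B P + C = 0 (cf. Higham & Kim 2001).
import numpy as np

def backward_error(A, B, C, P):
    """Residual norm scaled by the problem data; 0 for an exact solvent."""
    residual = A @ P @ P + B @ P + C
    nP = np.linalg.norm(P)
    scale = (np.linalg.norm(A) * nP**2 + np.linalg.norm(B) * nP
             + np.linalg.norm(C))
    return np.linalg.norm(residual) / scale

# Example: the exact scalar solvent of p^2 - 3p + 2 = 0 has zero backward error.
A, B, C = np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]])
print(backward_error(A, B, C, np.array([[1.0]])))  # 0.0
```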
On the accuracy of linear DSGE solution methods and the consequences for log-normal asset pricing
(2021)
This paper demonstrates a failure of standard, generalized Schur (or QZ) decomposition based solution methods for linear dynamic stochastic general equilibrium (DSGE) models when there is insufficient eigenvalue separation about the unit circle. The significance of this is demonstrated in a simple production-based asset pricing model with external habit formation. While the exact solution afforded by the simplicity of the model matches post-war US consumption growth and the equity premium, QZ-based numerical solutions miss the latter by many annualized percentage points.
This paper presents and compares Bernoulli iterative approaches for solving linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007) iteratively. I find that Bernoulli methods compare favorably to the QZ method in solving DSGE models, providing similar accuracy as measured by the forward error of the solution at a comparable computational burden. The method can guarantee convergence to a particular (e.g., the unique stable) solution and can be combined with other iterative methods, such as the Newton method, lending itself especially to refining solutions.
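A minimal sketch of a Bernoulli-type fixed-point iteration for the matrix quadratic \(A P^2 + B P + C = 0\) behind the recursive solution \(x_t = P x_{t-1}\) (an illustrative variant, not necessarily the paper's exact scheme):

```python
# Hedged sketch: Bernoulli-type functional iteration P <- -(A P + B)^{-1} C.
# Starting from P = 0, under standard conditions it converges linearly
# toward the minimal (stable) solvent, which is what lets convergence to a
# particular solution be guaranteed.
import numpy as np

def bernoulli_solve(A, B, C, tol=1e-12, max_iter=10_000):
    P = np.zeros_like(B)
    for _ in range(max_iter):
        P_new = -np.linalg.solve(A @ P + B, C)
        if np.linalg.norm(P_new - P) < tol:
            return P_new
        P = P_new
    raise RuntimeError("no convergence")

# Scalar check: p^2 - 3p + 2 = 0, stable root p = 1.
print(bernoulli_solve(np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]])))
```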
The authors relax the standard assumption in the dynamic stochastic general equilibrium (DSGE) literature that exogenous processes are governed by AR(1) processes and estimate ARMA(p,q) orders and parameters of exogenous processes. Methodologically, they contribute to the Bayesian DSGE literature by using Reversible Jump Markov Chain Monte Carlo (RJMCMC) to sample from the unknown ARMA orders and their associated parameter spaces of varying dimensions.
In estimating the technology process in the neoclassical growth model using post war US GDP data, they cast considerable doubt on the standard AR(1) assumption in favor of higher order processes. They find that the posterior concentrates density on hump-shaped impulse responses for all endogenous variables, consistent with alternative empirical estimates and the rigidities behind many richer structural models. Sampling from noninvertible MA representations, a negative response of hours to a positive technology shock is contained within the posterior credible set. While the posterior contains significant uncertainty regarding the exact order, the results are insensitive to the choice of data filter; this contrasts with the authors’ ARMA estimates of GDP itself, which vary significantly depending on the choice of HP or first difference filter.
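Concretely, the generalization replaces the AR(1) law of motion for an exogenous process \(z_t\) with (notation here is generic)

\[
z_t = \sum_{i=1}^{p} \rho_i\, z_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j}, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2),
\]

with the orders \((p,q)\) themselves sampled by RJMCMC alongside the coefficients.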
The authors present and compare Newton-based methods from the applied mathematics literature for solving the matrix quadratic that underlies the recursive solution of linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007) iteratively. They find that Newton-based methods compare favorably in solving DSGE models, providing higher accuracy as measured by the forward error of the solution at a comparable computation burden. The methods, however, suffer from their inability to guarantee convergence to a particular (e.g., the unique stable) solution, but their iterative procedures lend themselves to refining solutions either from different methods or parameterizations.
Highlights
• Six Newton methods for solving matrix quadratic equations in linear DSGE models.
• Compared to QZ using 99 different DSGE models including Smets and Wouters (2007).
• Newton methods more accurate than QZ with comparable computation burden.
• Apt for refining solutions from alternative methods or nearby parameterizations.
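A minimal sketch of the Newton step for the matrix quadratic \(A P^2 + B P + C = 0\): with residual \(R(P) = A P^2 + B P + C\), each iteration solves the Sylvester-type equation \((A P + B) H + A H P = -R(P)\) for the update \(H\). The Kronecker-based solve below is fine for small systems but is illustrative, not the paper's production implementation:

```python
# Hedged sketch: Newton iteration for A P^2 + B P + C = 0. As the abstract
# notes, plain Newton can converge to a solvent other than the unique stable
# one, depending on the initial guess.
import numpy as np

def newton_solve(A, B, C, P0=None, tol=1e-12, max_iter=50):
    n = B.shape[0]
    P = np.zeros_like(B) if P0 is None else P0.copy()
    I = np.eye(n)
    for _ in range(max_iter):
        R = A @ P @ P + B @ P + C
        if np.linalg.norm(R) < tol:
            return P
        # vec((AP+B)H + AHP) = (I (x) (AP+B) + P' (x) A) vec(H)
        K = np.kron(I, A @ P + B) + np.kron(P.T, A)
        H = np.linalg.solve(K, -R.reshape(-1, order="F")).reshape((n, n), order="F")
        P = P + H
    raise RuntimeError("no convergence")

# Scalar check: p^2 - 3p + 2 = 0; from P = 0, Newton converges to p = 1.
print(newton_solve(np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]])))
```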
The authors propose a new method to forecast macroeconomic variables that combines two existing approaches to mixed-frequency data in DSGE models. The first existing approach estimates the DSGE model at a quarterly frequency and uses higher-frequency auxiliary data only for forecasting. The second method transforms a quarterly state space into a monthly frequency. Their algorithm combines the advantages of these two existing approaches. They compare the new method with the existing methods using simulated data and real-world data. With simulated data, the new method outperforms all other methods, including forecasts from the standard quarterly model. With real-world data, incorporating auxiliary variables as in their method substantially decreases forecasting errors for recessions, but casting the model in a monthly frequency delivers better forecasts in normal times.
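One standard way to cast a quarterly model at a monthly frequency, which such a combination builds on (notation generic; the paper's exact mapping may differ), is to let the state transition run monthly and treat a quarterly flow observable as the average of its monthly counterparts, observed only in the third month of each quarter:

\[
x_m = T x_{m-1} + R \varepsilon_m, \qquad Y_q = \tfrac{1}{3}\left(y_m + y_{m-1} + y_{m-2}\right),
\]

with the intervening months handled as missing observations in the Kalman filter.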
We present determinacy bounds on monetary policy in the sticky information model. We find that these bounds are more conservative when the long-run Phillips curve is vertical than in the standard Calvo sticky-price New Keynesian model. Specifically, the Taylor principle is now directly necessary: no amount of output targeting can substitute for the monetary authority’s concern for inflation. These determinacy bounds are obtained by appealing to frequency domain techniques that themselves provide novel interpretations of the Phillips curve.
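The contrast can be stated for a rule \(i_t = \phi_\pi \pi_t + \phi_x x_t\): in the standard Calvo model, the familiar Bullard–Mitra condition

\[
\kappa(\phi_\pi - 1) + (1-\beta)\,\phi_x > 0
\]

lets a sufficiently strong output response \(\phi_x\) compensate for \(\phi_\pi < 1\), whereas with sticky information and a vertical long-run Phillips curve the bound, as described in the abstract, collapses to \(\phi_\pi > 1\) regardless of \(\phi_x\).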
The authors examine the effectiveness of labor cost reductions as a means to stimulate economic activity and assess the differences that may arise under different exchange rate regimes. They develop a medium-scale three-region DSGE model and show that the impact of a cut in the employers’ social security contribution rate does not vary significantly across exchange rate regimes. They find that both the interest rate and the exchange rate channel matter. Furthermore, the measure appears to be effective even if it comes along with a consumption tax increase to preserve long-term fiscal sustainability.
Finally, they assess whether the theoretical results hold up empirically by applying the local projection method. Regression results suggest that changes in employers’ social security contribution rates have statistically significant real effects: a one percentage point reduction leads to an average cumulative rise in output of around 1.3 percent in the medium term. Moreover, the outcome does not differ significantly across exchange rate regimes.
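In generic notation, the local projections estimate a separate regression per horizon \(h\) (variable names illustrative, following Jordà):

\[
y_{t+h} = \alpha_h + \beta_h\, \Delta \tau_t + \gamma_h' X_t + u_{t+h}, \qquad h = 0, 1, \dots, H,
\]

where \(\Delta \tau_t\) is the change in the employers' social security contribution rate, \(X_t\) collects controls, and the sequence \(\{\beta_h\}\) traces out the cumulative output response.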
This treatise is a revised and extended version of the author's Guest Lecture on "Demystifying Hedge Funds", hosted by the Institute for Monetary and Financial Stability on June 19, 2006.