On the accuracy of linear DSGE solution methods and the consequences for log-normal asset pricing
(2021)
This paper demonstrates a failure of standard solution methods for linear dynamic stochastic general equilibrium (DSGE) models based on the generalized Schur (or QZ) decomposition when there is insufficient eigenvalue separation about the unit circle. The significance of this is demonstrated in a simple production-based asset pricing model with external habit formation. While the exact solution afforded by the simplicity of the model matches post-war US consumption growth and the equity premium, QZ-based numerical solutions miss the latter by many annualized percentage points.
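The QZ step that this abstract critiques can be sketched as follows: order the generalized Schur form of the model's matrix pencil so that stable eigenvalues come first, then check how well the eigenvalues separate about the unit circle. The matrices below are illustrative placeholders, not the paper's asset pricing model.

```python
import numpy as np
from scipy.linalg import ordqz

# Illustrative linear rational expectations system  B x_t = A E_t[x_{t+1}].
A = np.array([[1.0, 0.3],
              [0.0, 1.0]])
B = np.array([[0.95, 0.00],
              [0.10, 1.04]])

# Ordered QZ: eigenvalues inside the unit circle ('iuc') come first.
AA, BB, alpha, beta, Q, Z = ordqz(B, A, sort="iuc")

eigs = np.abs(alpha / beta)
# The paper's warning: numerical accuracy degrades when generalized
# eigenvalues are not well separated about the unit circle.
separation = np.min(np.abs(eigs - 1.0))
print("moduli of generalized eigenvalues:", eigs)
print("separation from unit circle:", separation)
```

With these placeholder matrices both eigenvalue moduli land very close to one, the situation in which QZ-based solutions lose accuracy.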
This paper presents and compares Bernoulli iterative approaches for solving linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and, iteratively, different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). I find that Bernoulli methods compare favorably to QZ in solving DSGE models, providing similar accuracy as measured by the forward error of the solution at a comparable computational burden. The method can guarantee convergence to a particular, e.g., unique stable, solution and can be combined with other iterative methods, such as the Newton method, lending itself especially to refining solutions.
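A Bernoulli-type functional iteration of the kind discussed here can be sketched for the DSGE matrix quadratic A P² + B P + C = 0, whose stable solvent P gives the recursive solution x_t = P x_{t-1}. The iteration rewrites the quadratic as P = -(A P + B)⁻¹ C and repeats to a fixed point; the matrices below are illustrative, not from any particular model.

```python
import numpy as np

# Illustrative coefficient matrices for  A P^2 + B P + C = 0.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[-2.2, 0.0],
              [0.3, -2.5]])
C = np.array([[0.9, 0.0],
              [0.0, 0.8]])

P = np.zeros((2, 2))  # starting guess
for _ in range(500):
    # Fixed-point step: solve (A P + B) P_new = -C.
    P_new = -np.linalg.solve(A @ P + B, C)
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new

residual = np.linalg.norm(A @ P @ P + B @ P + C)
stable = np.max(np.abs(np.linalg.eigvals(P))) < 1.0
print("residual:", residual, "stable solvent:", stable)
```

For these matrices the iteration converges linearly to the stable solvent, illustrating the abstract's point that such iterations can target a particular solution.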
The authors relax the standard assumption in the dynamic stochastic general equilibrium (DSGE) literature that exogenous processes are governed by AR(1) processes and estimate ARMA(p,q) orders and parameters of exogenous processes. Methodologically, they contribute to the Bayesian DSGE literature by using Reversible Jump Markov Chain Monte Carlo (RJMCMC) to sample from the unknown ARMA orders and their associated parameter spaces of varying dimensions.
In estimating the technology process in the neoclassical growth model using post-war US GDP data, they cast considerable doubt on the standard AR(1) assumption in favor of higher-order processes. They find that the posterior concentrates density on hump-shaped impulse responses for all endogenous variables, consistent with alternative empirical estimates and the rigidities behind many richer structural models. Sampling from noninvertible MA representations, a negative response of hours to a positive technology shock is contained within the posterior credible set. While the posterior contains significant uncertainty regarding the exact order, the results are insensitive to the choice of data filter; this contrasts with the authors’ ARMA estimates of GDP itself, which vary significantly depending on the choice of HP or first-difference filter.
The authors present and compare Newton-based methods from the applied mathematics literature for solving the matrix quadratic that underlies the recursive solution of linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and, iteratively, different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). They find that Newton-based methods compare favorably in solving DSGE models, providing higher accuracy as measured by the forward error of the solution at a comparable computational burden. The methods, however, cannot guarantee convergence to a particular, e.g., unique stable, solution, but their iterative procedures lend themselves to refining solutions either from different methods or parameterizations.
Highlights
• Six Newton methods for solving matrix quadratic equations in linear DSGE models.
• Compared to QZ using 99 different DSGE models including Smets and Wouters (2007).
• Newton methods more accurate than QZ with comparable computation burden.
• Apt for refining solutions from alternative methods or nearby parameterizations.
Abstract
This paper presents and compares Newton-based methods from the applied mathematics literature for solving the matrix quadratic that underlies the recursive solution of linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and, iteratively, different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). We find that Newton-based methods compare favorably in solving DSGE models, providing higher accuracy as measured by the forward error of the solution at a comparable computational burden. The methods, however, cannot guarantee convergence to a particular, e.g., unique stable, solution, but their iterative procedures lend themselves to refining solutions either from different methods or parameterizations.
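The basic Newton iteration for the matrix quadratic F(P) = A P² + B P + C = 0 can be sketched as follows: each step solves the linearized (generalized Sylvester) equation A ΔP P + (A P + B) ΔP = -F(P), here via Kronecker vectorization for compactness. The matrices are illustrative and this is a bare sketch, not the paper's tuned implementations.

```python
import numpy as np

# Illustrative coefficient matrices for  F(P) = A P^2 + B P + C = 0.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[-2.2, 0.0],
              [0.3, -2.5]])
C = np.array([[0.9, 0.0],
              [0.0, 0.8]])
n = A.shape[0]
I = np.eye(n)

P = np.zeros((n, n))  # starting guess; convergence target is not guaranteed
for _ in range(50):
    F = A @ P @ P + B @ P + C
    # vec(A dP P) = (P^T kron A) vec(dP);  vec((A P + B) dP) = (I kron (A P + B)) vec(dP)
    J = np.kron(P.T, A) + np.kron(I, A @ P + B)
    dP = np.linalg.solve(J, -F.flatten(order="F")).reshape((n, n), order="F")
    P = P + dP
    if np.linalg.norm(dP) < 1e-12:
        break

residual = np.linalg.norm(A @ P @ P + B @ P + C)
print("residual:", residual)
```

Note the abstract's caveat: nothing in the iteration itself forces convergence to the unique stable solvent; from this particular starting guess and these matrices it happens to reach one, which is why such methods are best used to refine solutions from other methods or nearby parameterizations.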
The authors propose a new method to forecast macroeconomic variables that combines two existing approaches to mixed-frequency data in DSGE models. The first approach estimates the DSGE model at a quarterly frequency and uses higher-frequency auxiliary data only for forecasting. The second transforms a quarterly state space into a monthly frequency. Their algorithm combines the advantages of these two approaches. They compare the new method with the existing ones using simulated and real-world data. With simulated data, the new method outperforms all other methods, including forecasts from the standard quarterly model. With real-world data, incorporating auxiliary variables as in their method substantially decreases forecasting errors during recessions, while casting the model in a monthly frequency delivers better forecasts in normal times.
We present determinacy bounds on monetary policy in the sticky information model. We find that these bounds are more conservative when the long-run Phillips curve is vertical than in the standard Calvo sticky price New Keynesian model. Specifically, the Taylor principle is now directly necessary: no amount of output targeting can substitute for the monetary authority’s concern for inflation. These determinacy bounds are obtained by appealing to frequency domain techniques that themselves provide novel interpretations of the Phillips curve.
The authors examine the effectiveness of labor cost reductions as a means to stimulate economic activity and assess how the effects differ with the prevailing exchange rate regime. They develop a medium-scale three-region DSGE model and show that the impact of a cut in the employers’ social security contributions rate does not vary significantly under different exchange rate regimes. They find that both the interest rate and the exchange rate channel matter. Furthermore, the measure appears to be effective even if it comes along with a consumption tax increase to preserve long-term fiscal sustainability.
Finally, they assess whether the theoretical results obtained hold up empirically by applying the local projection method. Regression results suggest that changes in employers’ social security contributions rates have statistically significant real effects: a one percentage point reduction leads to an average cumulative rise in output of around 1.3 percent in the medium term. Moreover, the outcome does not differ significantly across exchange rate regimes.
This paper is a revised and expanded version of the author's guest lecture on "Demystifying Hedge Funds", hosted by the Institute for Monetary and Financial Stability on 19 June 2006.
The forward guidance trap
(2023)
This paper examines the policy experience of the Fed, ECB and BOJ during and after the Covid-19 pandemic and draws lessons for monetary policy strategy and its communication. All three central banks provided appropriate accommodation during the pandemic, but two failed to unwind this accommodation in a timely manner. The Fed and ECB guided real interest rates to inappropriately negative levels as the economy recovered from the pandemic, fueling high inflation. The policy error can be traced to decisions regarding forward guidance on policy rates that delayed liftoff while the two central banks continued to expand their balance sheets. The Fed and the ECB fell into the forward guidance trap. This could have been avoided if policy were guided by a forward-looking rule that properly adjusted the nominal interest rate with the evolution of the inflation outlook.
What happened in Cyprus? The economic consequences of the last communist government in Europe
(2014)
This paper reviews developments in the Cypriot economy following the introduction of the euro on 1 January 2008 and leading to the economic collapse of the island five years later. The main cause of the collapse is identified with the election of a communist government in February 2008, within two months of the introduction of the euro, and its subsequent choices for action and inaction on economic policy matters. The government allowed a rapid deterioration of public finances, and despite repeated warnings, damaged the country's creditworthiness and lost market access in May 2011. The destruction of the island's largest power station in July 2011 subsequently threw the economy into recession. Together with the intensification of the euro area crisis in the summer and fall of 2011, these events weakened the banking system which was vulnerable due to its exposure in Greece. Rather than deal with its fiscal crisis, the government secured a loan from the Russian government that allowed it to postpone action until after the February 2013 election. Rather than protect the banking system, losses were imposed on banks and a campaign against them was coordinated and used as a platform by the communist party for the February 2013 election. The strategy succeeded in delaying resolution of the crisis and avoiding short-term political cost for the communist party before the election, but also in precipitating a catastrophe right after the election.
Under ordinary circumstances, the fiscal implications of central bank policies tend to be seen as relatively minor and escape close scrutiny. The global financial crisis of 2008, however, demanded an extraordinary response by central banks which brought to light the immense power of central bank balance sheet policies as well as their major fiscal implications. Once the zero lower bound on interest rates is reached, expanding a central bank’s balance sheet becomes the central instrument for providing additional monetary policy accommodation. However, with interest rates near zero, the line separating fiscal and monetary policy is blurred. Furthermore, discretionary decisions associated with asset purchases and liquidity provision, as well as with lender-of-last-resort operations benefiting private entities, can have major distributional effects that are ordinarily associated with fiscal policy. In the euro area, discretionary central bank decisions can have immense distributional effects across member states. However, decisions of this nature are incompatible with the role of unelected officials in democratic societies. Drawing on the response to the crisis by the Federal Reserve and the ECB, this paper explores the tensions arising from central bank balance sheet policies and addresses pertinent questions about the governance and accountability of independent central banks in a democratic society.
The Federal Reserve’s muddled mandate to attain simultaneously the incompatible goals of maximum employment and price stability invites short-term-oriented discretionary policymaking inconsistent with the systematic approach needed for monetary policy to contribute best to the economy over time. Fear of liftoff, the reluctance to start the process of policy normalization after the end of a recession, serves as an example. Causes of the problem are discussed, drawing on public choice and cognitive psychology perspectives. The Federal Reserve could adopt a framework that relies on a simple policy rule subject to periodic reviews and adaptation. Replacing meeting-by-meeting discretion with a simple policy rule would eschew discretion in favor of systematic policy. Periodic review of the rule would allow the Federal Reserve the flexibility to account for and occasionally adapt to the evolving understanding of the economy. Congressional legislation could guide the Federal Reserve in this direction. However, the Federal Reserve may be best placed to select the simple rule and could embrace this improvement on its own, within its current mandate, with the publication of a simple rule along the lines of its statement of longer-run goals.
What institutional arrangements for an independent central bank with a price stability mandate promote good policy outcomes when unconventional policies become necessary? Unconventional monetary policy poses challenges. The large scale asset purchases needed to counteract the zero lower bound on nominal interest rates have uncomfortable fiscal and distributional consequences and require central banks to assume greater risks on their balance sheets.
In his paper, Athanasios Orphanides draws lessons from the experience of the Bank of Japan (BoJ) since the late 1990s for the institutional design of independent central banks. He comes to the conclusion that lack of clarity on the precise definition of price stability, coupled with concerns about the legitimacy of large balance sheet expansions, hinders policy: It encourages the central bank to eschew the decisive quantitative easing needed to reflate the economy and instead to accommodate too-low inflation. The BoJ’s experience with the zero lower bound suggests important benefits from a clear definition of price stability as a symmetric 2% goal for inflation, which the Bank adopted in 2013.
Following the experience of the global financial crisis, central banks have been asked to undertake unprecedented responsibilities. Governments and the public appear to have high expectations that monetary policy can provide solutions to problems that do not necessarily fit in the realm of traditional monetary policy. This paper examines three broad public policy goals that may overburden monetary policy: full employment; fiscal sustainability; and financial stability. While central banks have a crucial position in public policy, the appropriate policy mix also involves other institutions, and overreliance on monetary policy to achieve these goals is bound to disappoint. Central bank policies that facilitate postponement of needed policy actions by governments may also have longer-term adverse consequences that could outweigh more immediate benefits. Overburdening monetary policy may eventually diminish and compromise the independence and credibility of the central bank, thereby reducing its effectiveness to preserve price stability and contribute to crisis management.
Are rules and boundaries sufficient to limit harmful central bank discretion? Lessons from Europe
(2014)
Marvin Goodfriend’s (2014) insightful, informative and provocative work explains concisely and convincingly why the Fed needs rules and boundaries. This paper reviews the broader institutional design problem regarding the effectiveness of the central bank in practice and confirms the need for rules and boundaries. The framework proposed for improving the Fed incorporates key elements that have already been adopted in the European Union. The case of ELA provision by the ECB and the Central Bank of Cyprus to Marfin-Laiki Bank during the crisis, however, suggests that the existence of rules and boundaries may not be enough to limit harmful discretion. During a crisis, novel interpretations of the legal authority of the central bank may be introduced to create a grey area that might be exploited to justify harmful discretionary decisions even in the presence of rules and boundaries. This raises the question of how to ensure that rules and boundaries are respected in practice.
Despite a number of helpful changes, including the adoption of an inflation target, the Fed’s monetary policy strategy proved insufficiently resilient in recent years. While the Fed eased policy appropriately during the pandemic, it fell behind the curve during the post-pandemic recovery. During 2021, the Fed kept easing policy while the inflation outlook was deteriorating and the economy was growing considerably faster than its natural growth rate: the sum of the Fed’s 2% inflation goal and the growth rate of potential output.
The resilience of the Fed’s monetary policy strategy could be enhanced, and such errors avoided, with guidance from a simple natural growth targeting rule that prescribes that the federal funds rate during each quarter be raised (cut) when projected nominal income growth exceeds (falls short of) the economy’s natural growth rate. An illustration with real-time data and forecasts since the early 1990s shows that Fed policy has not persistently deviated from this simple rule, with the notable exception of the period coinciding with the Fed’s post-pandemic policy error.
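The natural growth targeting rule described above can be sketched in a few lines: the funds rate moves up (down) when projected nominal income growth exceeds (falls short of) the natural growth rate. The function name, the response coefficient theta, and all numbers below are illustrative assumptions, not the paper's calibration.

```python
def natural_growth_rule(rate_prev, nominal_growth_forecast,
                        inflation_goal=2.0, potential_growth=2.0,
                        theta=0.5):
    """Illustrative first-difference rule: adjust the policy rate by
    theta times the gap between projected nominal income growth and
    the natural growth rate (inflation goal + potential growth)."""
    natural_growth = inflation_goal + potential_growth
    return rate_prev + theta * (nominal_growth_forecast - natural_growth)

# Example: 7% projected nominal income growth against a 4% natural
# growth rate prescribes a hike of 0.5 * (7 - 4) = 1.5 points.
print(natural_growth_rule(1.0, 7.0))  # 2.5
```

A first-difference form like this needs no estimate of the neutral real rate or the output gap level, which is part of the appeal of nominal income growth rules.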
The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as a laboratory, this paper explores the design of robust policy guides aiming to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model data base to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
We analyze cyclical co-movement in credit, house prices, equity prices, and long-term interest rates across 17 advanced economies. Using a time-varying multi-level dynamic factor model and more than 130 years of data, we analyze the dynamics of co-movement at different levels of aggregation and compare recent developments to earlier episodes such as the early era of financial globalization from 1880 to 1913 and the Great Depression. We find that joint global dynamics across various financial quantities and prices as well as variable-specific global co-movements are important to explain fluctuations in the data. From a historical perspective, global co-movement in financial variables is not a new phenomenon, but its importance has increased for some variables since the 1980s. For equity prices, global cycles currently play a historically unprecedented role, explaining more than half of the fluctuations in the data. Global cycles in credit and housing have become much more pronounced and longer, but their importance in explaining dynamics has increased only for some economies, including the US, the UK and Nordic European countries. We also include GDP in the analysis and find an increasing role for a global business cycle.