C61 Optimization Techniques; Programming Models; Dynamic Analysis
This paper presents and compares Bernoulli iterative approaches for solving linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). I find that Bernoulli methods compare favorably to the QZ algorithm in solving DSGE models, providing similar accuracy as measured by the forward error of the solution at a comparable computational burden. The methods can guarantee convergence to a particular, e.g., unique stable, solution and can be combined with other iterative methods, such as the Newton method, lending themselves especially to refining solutions.
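The matrix quadratic in question is standard: with structural matrices A, B, C from the linearized model, the recursive solution x_t = P x_{t-1} solves A P^2 + B P + C = 0. The sketch below illustrates the Bernoulli idea in this setting (a minimal fixed-point iteration in the spirit of the paper, not its exact algorithm; the toy matrices are hypothetical):

```python
import numpy as np

def bernoulli_solve(A, B, C, tol=1e-12, max_iter=500):
    """Minimal Bernoulli-type iteration for the matrix quadratic
    A P^2 + B P + C = 0 underlying the recursive solution x_t = P x_{t-1}
    of a linear DSGE model.  Starting from P_0 = 0, iterate
    P_{k+1} = -(A P_k + B)^{-1} C; under standard dominance conditions
    this converges to the stable solvent."""
    P = np.zeros_like(C)
    for k in range(max_iter):
        P_next = -np.linalg.solve(A @ P + B, C)
        if np.max(np.abs(P_next - P)) < tol:
            return P_next, k
        P = P_next
    raise RuntimeError("Bernoulli iteration did not converge")

# Toy example (hypothetical 2x2 system, not from the paper):
A = np.array([[0.5, 0.0], [0.0, 0.3]])
B = np.array([[-1.2, 0.1], [0.0, -1.1]])
C = np.array([[0.2, 0.0], [0.05, 0.15]])
P, iters = bernoulli_solve(A, B, C)
print("forward residual:", np.max(np.abs(A @ P @ P + B @ P + C)))
```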
The authors propose a new method to forecast macroeconomic variables that combines two existing approaches to mixed-frequency data in DSGE models. The first approach estimates the DSGE model at a quarterly frequency and uses higher-frequency auxiliary data only for forecasting. The second transforms the quarterly state space into a monthly frequency. Their algorithm combines the advantages of these two approaches. They compare the new method with the existing ones using simulated and real-world data. With simulated data, the new method outperforms all other methods, including forecasts from the standard quarterly model. With real-world data, incorporating auxiliary variables as in their method substantially decreases forecasting errors during recessions, while casting the model in a monthly frequency delivers better forecasts in normal times.
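A stylized sketch of the second ingredient: mixed-frequency state spaces are typically handled by casting everything at monthly frequency and treating the quarterly series as missing in two of every three months, so a Kalman filter that skips missing observations does the work. The matrices F, H, Q, R below are generic placeholders, not the authors' model:

```python
import numpy as np

def kalman_filter_missing(y, F, H, Q, R, x0, P0):
    """Kalman filter that skips missing (NaN) entries of y_t, the usual
    device for mixed-frequency state spaces."""
    x, P, loglik, xs = x0, P0, 0.0, []
    for yt in y:
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        obs = ~np.isnan(yt)              # which series are observed this month
        if obs.any():
            Ho, Ro = H[obs], R[np.ix_(obs, obs)]
            v = yt[obs] - Ho @ x         # innovation on observed rows only
            S = Ho @ P @ Ho.T + Ro
            K = P @ Ho.T @ np.linalg.inv(S)
            x = x + K @ v                # update
            P = P - K @ Ho @ P
            loglik += -0.5 * (len(v) * np.log(2 * np.pi)
                              + np.linalg.slogdet(S)[1]
                              + v @ np.linalg.solve(S, v))
        xs.append(x)
    return np.array(xs), loglik

# Demo: series 0 observed monthly, series 1 only every third month
rng = np.random.default_rng(0)
F, H = np.array([[0.9]]), np.array([[1.0], [3.0]])
Q, R = np.array([[0.1]]), np.eye(2) * 0.05
y = np.column_stack([rng.normal(size=12), np.full(12, np.nan)])
y[2::3, 1] = rng.normal(size=4)          # 'quarterly' observations
xs, ll = kalman_filter_missing(y, F, H, Q, R, np.zeros(1), np.eye(1))
```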
The authors present and compare Newton-based methods from the applied mathematics literature for solving the matrix quadratic that underlies the recursive solution of linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). They find that Newton-based methods compare favorably in solving DSGE models, providing higher accuracy as measured by the forward error of the solution at a comparable computational burden. The methods, however, cannot guarantee convergence to a particular, e.g., unique stable, solution, but their iterative procedures lend themselves to refining solutions obtained from other methods or parameterizations.
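For intuition, here is a generic Newton iteration on F(P) = A P^2 + B P + C = 0 (a hedged sketch, not necessarily the exact variants compared in the paper). Each step solves the generalized Sylvester equation (A P + B) D + A D P = -F(P) for the update D; it is vectorized with Kronecker products for clarity, which costs O(n^6) per step and is fine for a sketch but not for large models. The initial guess could come from another solver, echoing the refinement use the authors highlight:

```python
import numpy as np

def newton_solve(A, B, C, P0, tol=1e-12, max_iter=50):
    """Newton's method on F(P) = A P^2 + B P + C = 0."""
    n = C.shape[0]
    I = np.eye(n)
    P = P0.copy()
    for k in range(max_iter):
        F = A @ P @ P + B @ P + C
        if np.max(np.abs(F)) < tol:
            return P, k
        # vec((AP+B) D + A D P) = [I (x) (AP+B) + P' (x) A] vec(D)
        J = np.kron(I, A @ P + B) + np.kron(P.T, A)
        D = np.linalg.solve(J, -F.flatten(order="F"))
        P = P + D.reshape((n, n), order="F")
    raise RuntimeError("Newton iteration did not converge")

# Toy example (hypothetical matrices, as before):
A = np.array([[0.5, 0.0], [0.0, 0.3]])
B = np.array([[-1.2, 0.1], [0.0, -1.1]])
C = np.array([[0.2, 0.0], [0.05, 0.15]])
P, iters = newton_solve(A, B, C, P0=np.zeros((2, 2)))
print("forward residual:", np.max(np.abs(A @ P @ P + B @ P + C)))
```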
We introduce a new measure of systemic risk, the change in the conditional joint probability of default, which assesses the effects of the interdependence in the financial system on the general default risk of sovereign debtors. We apply our measure to examine the fragility of the European financial system during the ongoing sovereign debt crisis. Our analysis documents an increase in systemic risk contributions in the euro area during the post-Lehman global recession and especially after the beginning of the euro area sovereign debt crisis. We also find a considerable potential for cascade effects from small to large euro area sovereigns. When we investigate the effect of sovereign default on the European Union banking system, we find that larger banks, banks with riskier activities, banks with poor asset quality, and banks facing funding and liquidity constraints tend to be more vulnerable to a sovereign default. Surprisingly, an increase in leverage does not seem to influence systemic vulnerability.
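As a stylized illustration only (not the authors' estimator), a conditional joint probability of default and its change under stress can be computed under a one-period Gaussian copula; all default probabilities and the correlation below are hypothetical:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def conditional_joint_pd(pd_marginal, corr, given):
    """P(all sovereigns default | sovereign `given` defaults) under a
    one-period Gaussian copula: default_i <=> Z_i < norm.ppf(pd_i),
    with Z ~ N(0, corr)."""
    thresh = norm.ppf(pd_marginal)
    joint = multivariate_normal(np.zeros(len(pd_marginal)), corr).cdf(thresh)
    return joint / pd_marginal[given]

# Hypothetical one-year PDs for three sovereigns, common correlation 0.5
pds = np.array([0.02, 0.05, 0.10])
corr = np.full((3, 3), 0.5)
np.fill_diagonal(corr, 1.0)
base = conditional_joint_pd(pds, corr, given=2)
# the measure is the *change* in the conditional joint PD, e.g. under stress
stressed = conditional_joint_pd(pds * 1.5, corr, given=2)
print(base, stressed - base)
```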
This chapter aims to provide a hands-on approach to New Keynesian models and their uses for macroeconomic policy analysis. It starts by reviewing the origins of the New Keynesian approach, the key model ingredients, and representative models. Building blocks of current-generation dynamic stochastic general equilibrium (DSGE) models are discussed in detail. These models address the famous Lucas critique by deriving behavioral equations systematically from the optimizing and forward-looking decision-making of households and firms subject to well-defined constraints. State-of-the-art methods for solving and estimating such models are reviewed and presented in examples. The chapter goes beyond the mere presentation of the most popular benchmark model by providing a framework for model comparison along with a database that includes a wide variety of macroeconomic models. Thus, it offers a convenient approach for comparing new models to available benchmarks and for investigating whether particular policy recommendations are robust to model uncertainty. Such robustness analysis is illustrated by evaluating the performance of simple monetary policy rules across a range of recently estimated models including some with financial market imperfections and by reviewing recent comparative findings regarding the magnitude of government spending multipliers. The chapter concludes with a discussion of important objectives for ongoing and future research using the New Keynesian framework.
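One concrete instance of such analysis of simple monetary policy rules is the textbook determinacy check in the three-equation New Keynesian model; the sketch below uses standard but hypothetical parameter values and is far simpler than the chapter's model-comparison machinery:

```python
import numpy as np

def determinate(phi_pi, phi_y, beta=0.99, sigma=1.0, kappa=0.1):
    """Blanchard-Kahn check for the textbook three-equation NK model with
    rule i_t = phi_pi*pi_t + phi_y*y_t.  Writing the system as
    E_t[z_{t+1}] = M z_t with z = (y, pi), both variables non-predetermined,
    determinacy requires both eigenvalues of M outside the unit circle."""
    # Phillips curve: E_t pi_{t+1} = (pi_t - kappa*y_t)/beta
    # IS curve:       E_t y_{t+1}  = y_t + (i_t - E_t pi_{t+1})/sigma
    M = np.array([
        [1 + (phi_y + kappa / beta) / sigma, (phi_pi - 1 / beta) / sigma],
        [-kappa / beta,                      1 / beta],
    ])
    return bool(np.all(np.abs(np.linalg.eigvals(M)) > 1))

print(determinate(1.5, 0.125))  # Taylor's rule: determinate
print(determinate(0.8, 0.0))    # violates the Taylor principle: indeterminate
```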
This paper proposes a new approach for modeling investor fear after rare disasters. The key element is to take into account that investors’ information about fundamentals driving rare downward jumps in the dividend process is not perfect. Bayesian learning implies that beliefs about the likelihood of rare disasters drop to a much more pessimistic level once a disaster has occurred. Such a shift in beliefs can trigger massive declines in price-dividend ratios. Pessimistic beliefs persist for some time. Thus, belief dynamics are a source of apparent excess volatility relative to a rational expectations benchmark. Due to the low frequency of disasters, even an infinitely-lived investor will remain uncertain about the exact probability. Our analysis is conducted in continuous time and offers closed-form solutions for asset prices. We distinguish between rational and adaptive Bayesian learning. Rational learners account for the possibility of future changes in beliefs in determining their demand for risky assets, while adaptive learners take beliefs as given. Thus, risky assets tend to be lower-valued and price-dividend ratios vary less under adaptive versus rational learning for identical priors.
Keywords: beliefs, Bayesian learning, controlled diffusions and jump processes, learning about jumps, adaptive learning, rational learning
JEL classification: D83, G11, C11, D91, E21, D81, C61
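A discrete-time Beta-Bernoulli sketch conveys the learning mechanism (the paper itself works in continuous time with jump processes; the prior and horizon below are hypothetical):

```python
# Stylized Bayesian learning about a rare-disaster probability p.
# Prior: p ~ Beta(a, b); each period a disaster occurs with probability p.
# After k disasters in T periods the posterior is Beta(a + k, b + T - k),
# so a single disaster after a long calm spell moves beliefs sharply.
a, b = 1.0, 99.0                     # prior mean 1% per period (hypothetical)
T_calm = 100                         # 100 calm periods observed
mean_calm = (a + 0) / (a + b + T_calm)       # beliefs drift optimistic
mean_after = (a + 1) / (a + b + T_calm + 1)  # then one disaster occurs
print(f"belief before disaster: {mean_calm:.4f}")   # ~0.50%
print(f"belief after disaster:  {mean_after:.4f}")  # ~0.99%, nearly doubles
```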
We model the motives for residents of a country to hold foreign assets, including the precautionary motive that has been omitted from much previous literature as intractable. Our model captures many of the principal insights from the existing specialized literature on the precautionary motive, deriving a convenient formula for the economy’s target value of assets. The target is the level of assets that balances impatience, prudence, risk, intertemporal substitution, and the rate of return. We use the model to shed light on two topical questions: the “upstream” flows of capital from developing countries to advanced countries, and the long-run impact of resorbing global financial imbalances.
We present a tractable model of the effects of nonfinancial risk on intertemporal choice. Our purpose is to provide a simple framework that can be adopted in fields like representative-agent macroeconomics, corporate finance, or political economy, where most modelers have chosen not to incorporate serious nonfinancial risk because available methods were too complex to yield transparent insights. Our model produces an intuitive analytical formula for target assets, and we show how to analyze transition dynamics using a familiar Ramsey-style phase diagram. Despite its starkness, our model captures most of the key implications of nonfinancial risk for intertemporal choice.
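A minimal numerical counterpart to the target-assets idea, assuming CRRA utility, iid two-point income risk, and a no-borrowing constraint (all parameter values hypothetical; the papers' own formula is analytical):

```python
import numpy as np

# Solve a CRRA consumption-saving problem by time iteration on the Euler
# equation u'(c_t) = beta*R*E[u'(c_{t+1})], then locate the resources level
# m* at which expected resources are stationary (the buffer-stock target).
rho, beta, R = 2.0, 0.96, 1.02           # risk aversion, discount, gross return
y_vals, y_prob = np.array([0.7, 1.3]), np.array([0.5, 0.5])   # income risk
m_grid = np.linspace(0.1, 10.0, 400)

c = m_grid.copy()                        # initial guess: consume everything
for _ in range(2000):
    m_next = R * (m_grid - c)[:, None] + y_vals[None, :]
    c_next = np.interp(m_next, m_grid, c)
    rhs = beta * R * (c_next ** -rho) @ y_prob
    c_new = np.minimum(rhs ** (-1.0 / rho), m_grid)   # borrowing constraint
    if np.max(np.abs(c_new - c)) < 1e-9:
        break
    c = c_new

# Target m*: R*(m - c(m)) + E[y] = m; the drift crosses zero from above.
drift = R * (m_grid - c) + y_vals @ y_prob - m_grid
m_star = np.interp(0.0, -drift, m_grid)
print(f"target market resources m* ~= {m_star:.3f}")
```

Above the target, impatience dominates and expected resources fall; below it, prudence dominates and they rise, which is the balancing act the analytical formula captures.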
In recent years stock exchanges have been increasingly diversifying their operations into related business areas such as derivatives trading, post-trading services, and software sales. This trend can be observed most notably among profit-oriented trading venues. While the pursuit of diversification is likely to be driven by the attractiveness of these investment opportunities, it remains an open question whether certain integration activities are also efficient, both from a social welfare and from the exchanges' perspective. Academic contributions so far have analyzed different business models primarily from the social welfare perspective, whereas there is only little literature considering their impact on the exchange itself. By employing a panel data set of 28 stock exchanges for the years 1999-2003, we seek to shed light on this topic by comparing the factor productivity of exchanges with different business models. Our findings suggest three conclusions: (1) Integration activity comes at the cost of increased operational complexity, which in some cases outweighs the potential synergies between related activities and therefore leads to technical inefficiencies and lower productivity growth. (2) We find no evidence that vertical integration is more efficient and productive than other business models. This finding could contribute to the ongoing discussion about the merits of vertical integration from a social welfare perspective. (3) The existence of a strong in-house IT competence seems to be beneficial in overcoming these complexity costs.
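For readers unfamiliar with the method, below is a minimal input-oriented, constant-returns DEA program of the kind such efficiency comparisons build on; the data are hypothetical, not the paper's panel:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CRS (CCR) DEA score for unit o:
       min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0,
    where X is (inputs x units) and Y is (outputs x units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]          # decision vector: [theta, lam]
    A_ub = np.block([
        [-X[:, [o]], X],                 # X lam - theta x_o <= 0
        [np.zeros((s, 1)), -Y],          # -Y lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Hypothetical data: 2 inputs (staff, IT spend), 1 output (trades), 5 exchanges
X = np.array([[120, 80, 150, 60, 100],
              [ 30, 20,  45, 15,  35]], dtype=float)
Y = np.array([[900, 700, 950, 520, 820]], dtype=float)
scores = [dea_efficiency(X, Y, o) for o in range(X.shape[1])]
print(np.round(scores, 3))               # 1.0 marks the efficient frontier
```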
Academic contributions on the demutualization of stock exchanges have so far been predominantly devoted to social welfare issues, whereas there is scarce empirical literature on the impact of a governance change on the exchange itself. While there is consensus that the case for demutualization is predominantly driven by the need to improve the exchange's competitiveness in a changing business environment, it remains unclear how different governance regimes actually affect stock exchange performance. Some authors propose that a public listing is the governance arrangement best suited to improve an exchange's competitiveness. By employing a panel data set of 28 stock exchanges for the years 1999-2003, we seek to shed light on this topic by comparing the efficiency and productivity of exchanges with differing governance arrangements. For this purpose we calculate in a first step individual efficiency and productivity values via DEA. In a second step we regress the derived values against variables that, amongst others, map the institutional arrangement of the exchanges, in order to determine efficiency and productivity differences between (1) mutuals, (2) demutualized but customer-owned exchanges, and (3) publicly listed and thus at least partly outsider-owned exchanges. We find evidence that demutualized exchanges exhibit higher technical efficiency than mutuals. However, they perform relatively poorly as far as productivity growth is concerned. Furthermore, we find no evidence that publicly listed exchanges possess higher efficiency and productivity values than demutualized exchanges with a customer-dominated structure. We conclude that the merits of outside ownership may lie in other areas, such as solving conflicts of interest among overly heterogeneous members.
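The second step might be sketched as follows; a plain OLS on hypothetical scores and governance dummies stands in for the authors' procedure, and with DEA-generated dependent variables, bootstrapped standard errors would be the usual refinement:

```python
import numpy as np

# Stylized second stage: regress DEA efficiency scores on governance
# dummies (baseline: mutual).  `demutualized` is 1 for all demutualized
# exchanges; `listed` adds the incremental effect of a public listing.
# All values are hypothetical.
scores = np.array([0.81, 0.92, 0.77, 1.00, 0.88, 0.95])
demutualized = np.array([0, 1, 0, 1, 1, 1])
listed = np.array([0, 0, 0, 1, 0, 1])
Xreg = np.column_stack([np.ones(len(scores)), demutualized, listed])
beta, *_ = np.linalg.lstsq(Xreg, scores, rcond=None)
print(dict(zip(["const", "demutualized", "listed"], beta.round(3))))
```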