Refine
Year of publication
- 2012 (87)
Document Type
- Working Paper (87)
Has Fulltext
- yes (87)
Is part of the Bibliography
- no (87)
Keywords
- model uncertainty (4)
- monetary policy (4)
- DSGE models (3)
- Monetary Policy (3)
- forecasting (3)
- ECB (2)
- Fiscal Policy (2)
- Formale Semantik (2)
- Funktionale Programmierung (2)
- Greenbook (2)
Institute
- Center for Financial Studies (CFS) (25)
- Institute for Monetary and Financial Stability (IMFS) (20)
- Wirtschaftswissenschaften (16)
- Institut für sozial-ökologische Forschung (ISOE) (10)
- House of Finance (HoF) (7)
- Rechtswissenschaft (6)
- Exzellenzcluster Die Herausbildung normativer Ordnungen (5)
- Gesellschaftswissenschaften (4)
- Informatik (4)
- Institute for Law and Finance (ILF) (4)
An essential prerequisite for deciphering prevailing conceptions of justice is an engagement with the roles that the actors involved assume within a legal system, together with an examination of the legal and institutional conditions under which these actors operate. This contribution first addresses the distribution of power and tasks between judges and parties. It becomes apparent that the allocation of roles is not uniform but varies with different procedural and institutional preconditions. In jury proceedings, judicial authority is strongly constrained by party autonomy at its fullest extent. By contrast, judges act as legal honoratiores (in Weber’s sense) whenever they adjudicate without a jury. This occurs in particular in the state supreme courts and the federal courts of appeals, but also in first-instance proceedings in which „claims in equity“ are to be decided. The contribution concludes by examining the influence that the particularities of American legal education exert on the American understanding of justice: they shape and reproduce the roles and self-images of American lawyers, both within the bar and on the bench.
Venture capital (VC) investment has long been conceptualized as a local business, in which the VC’s ability to source, syndicate, fund, monitor, and add value to portfolio firms critically depends on access to knowledge obtained through ties to the local (i.e., geographically proximate) network. Consistent with the view that local networks matter, existing research confirms that local and geographically distant portfolio firms are sourced, syndicated, funded, and monitored differently. Curiously, emerging research on VC investment practice within the United States finds that distant investments, as measured by “exits” (either initial public offering or merger & acquisition), outperform local investments. These findings raise important questions about the assumed benefits of local network membership and proximity. To probe these questions more deeply, we contrast the deal structure of cross-border VC investment with domestic VC investment, and contrast the deal structure of cross-border VC investments that include a local partner with those that do not. Evidence from 139,892 rounds of venture capital financing in the period 1980-2009 suggests that cross-border investment practice, in terms of deal sourcing, syndication, and performance, indeed changes with proximity, but that monitoring practices do not. Further, we find that the inclusion of a local partner in the investment syndicate yields surprisingly few benefits. This evidence, we argue, raises important questions about VC investment practice as well as the ability of firms to capture and leverage the presumed benefits of network membership.
This paper investigates the accuracy of point and density forecasts of four DSGE models for inflation, output growth and the federal funds rate. Model parameters are estimated and forecasts are derived successively from historical U.S. data vintages synchronized with the Fed’s Greenbook projections. Point forecasts of some models are similar in accuracy to the forecasts of nonstructural large-dataset methods. Despite their common underlying New Keynesian modeling philosophy, forecasts of different DSGE models turn out to be quite distinct. Weighted forecasts are more precise than forecasts from individual models. The accuracy of a simple average of DSGE model forecasts is comparable to Greenbook projections for medium-term horizons. Comparing density forecasts of DSGE models with the actual distribution of observations shows that the models overestimate uncertainty around point forecasts.
This paper investigates the accuracy of forecasts from four DSGE models for inflation, output growth and the federal funds rate using a real-time dataset synchronized with the Fed’s Greenbook projections. Conditioning the model forecasts on the Greenbook nowcasts leads to forecasts that are as accurate as the Greenbook projections for output growth and the federal funds rate. Only for inflation are the model forecasts dominated by the Greenbook projections. A comparison with forecasts from Bayesian VARs shows that the economic structure of the DSGE models, which is useful for interpreting the forecasts, does not lower forecast accuracy. Combining forecasts of several DSGE models increases precision relative to individual model forecasts. Comparing density forecasts with the actual distribution of observations shows that DSGE models overestimate uncertainty around point forecasts.
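The forecast-combination result in the two abstracts above — that a simple average across DSGE models is more precise than individual model forecasts — reflects a general property of pooling point forecasts. A minimal sketch, using made-up data rather than the papers’ actual model forecasts (the arrays and RMSE comparison below are purely illustrative):

```python
import numpy as np

# Hypothetical point forecasts for one variable (e.g., inflation) from
# four models over the same evaluation sample; all values are simulated,
# not taken from the papers.
rng = np.random.default_rng(0)
actual = rng.normal(2.0, 0.5, size=40)                       # "observed" outcomes
model_forecasts = actual + rng.normal(0.0, 0.8, size=(4, 40))  # 4 noisy models

def rmse(forecast, outcome):
    """Root mean squared forecast error."""
    return float(np.sqrt(np.mean((forecast - outcome) ** 2)))

individual_rmse = [rmse(f, actual) for f in model_forecasts]
pooled = model_forecasts.mean(axis=0)   # simple equal-weight average
pooled_rmse = rmse(pooled, actual)

print("individual:", [round(r, 3) for r in individual_rmse])
print("pooled:    ", round(pooled_rmse, 3))
```

By Jensen’s inequality the squared error of the average forecast can never exceed the average of the squared errors, so with imperfectly correlated model errors the pooled RMSE tends to beat the typical individual model — the mechanism behind the combination gains reported above.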
In the aftermath of the global financial crisis, the state of macroeconomic modeling and the use of macroeconomic models in policy analysis has come under heavy criticism. Macroeconomists in academia and policy institutions have been blamed for relying too much on a particular class of macroeconomic models. This paper proposes a comparative approach to macroeconomic policy analysis that is open to competing modeling paradigms. Macroeconomic model comparison projects have helped produce some very influential insights such as the Taylor rule. However, they have been infrequent and costly, because they require the input of many teams of researchers and multiple meetings to obtain a limited set of comparative findings. This paper provides a new approach that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. Using this approach a model archive is built that includes many well-known empirically estimated models that may be used for quantitative analysis of monetary and fiscal stabilization policies. A computational platform is created that allows straightforward comparisons of models’ implications. Its application is illustrated by comparing different monetary and fiscal policies across selected models. Researchers can easily include new models in the data base and compare the effects of novel extensions to established benchmarks thereby fostering a comparative instead of insular approach to model development.
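The comparative platform described above lines up the implications of many estimated models side by side. A toy sketch of that idea — not the paper’s actual archive or platform, and with purely illustrative parameters — simulates two stylized models under the same policy shock and tabulates their implications:

```python
import numpy as np

def impulse_response(persistence, impact, horizon=12):
    """Response to a one-off policy shock in a stylized AR(1) model:
    y_t = persistence * y_{t-1}, with y_0 = impact."""
    y = np.empty(horizon)
    y[0] = impact
    for t in range(1, horizon):
        y[t] = persistence * y[t - 1]
    return y

# Two hypothetical models with different internal propagation; the
# parameters are illustrative, not estimates from any archived model.
models = {
    "model_A": impulse_response(persistence=0.9, impact=-0.5),
    "model_B": impulse_response(persistence=0.5, impact=-0.8),
}

# A comparison "platform" in miniature: the same experiment, the same
# summary statistics, reported for every model in the archive.
for name, irf in models.items():
    print(f"{name}: peak effect {irf.min():.2f} at impact, "
          f"back to half the peak after {int(np.argmax(irf > irf.min() / 2))} periods")
```

The design point is that adding a model means adding one entry to the dictionary; the comparison code is shared, which is what makes frequent, low-cost comparisons feasible.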
Motivated by the U.S. events of the 2000s, we address whether a “too low for too long” interest rate policy may generate a boom-bust cycle. We simulate anticipated and unanticipated monetary policies in state-of-the-art DSGE models and in a model with bond financing via a shadow banking system, in which the bond spread is calibrated for normal and optimistic times. Our results suggest that the U.S. boom-bust was caused by the combination of (i) too low for too long interest rates, (ii) excessive optimism and (iii) a failure of agents to anticipate the extent of the abnormally favorable conditions.
In this paper, I introduce lumpy micro-level capital adjustment into a sticky information general equilibrium model. Lumpy adjustment arises because of inattentiveness in capital investment decisions instead of the more common assumption of non-convex adjustment costs. The model features inattentiveness as the only source of stickiness. I find that the model with lumpy investment yields business cycle dynamics which differ substantially from those of an otherwise identical model with frictionless investment and are much more consistent with the empirical evidence. These results therefore strengthen the case in favour of the relevance of microeconomic investment lumpiness for the business cycle.