This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method of strictness analysis for lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully covers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda calculus with case, constructors, letrec, and seq, extended by set constants such as Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest-fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in such a sequence. Our tools and results provide new insights into call-by-need lambda calculi, into the role of sharing in functional programming languages, and into strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
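The SAL algorithm itself works by abstract reduction with set constants and is too involved to condense here. As a hint of the property a strictness analyser certifies (f is strict in an argument exactly when applying f to a divergent argument diverges), the following Python sketch runs the classic two-point-domain abstract interpretation — a simpler, different technique than the paper's abstract reduction; the example function and all names are hypothetical.

```python
# Two-point-domain strictness analysis (Mycroft-style abstract
# interpretation), NOT the paper's SAL algorithm. BOT means "definitely
# diverges", TOP means "no information". A function is strict in an
# argument iff plugging BOT there yields BOT.
BOT, TOP = 0, 1

def abs_cond(c, t, e):
    # 'if' is strict in its condition; the two branches are joined.
    return BOT if c == BOT else max(t, e)

def abs_prim(x, y):
    # Primitive operations (==, +, -) are strict in both arguments.
    return BOT if BOT in (x, y) else TOP

def analyse():
    # Abstract version of  f x y = if x == 0 then y else f (x - 1) (y + x),
    # computed by Kleene iteration: start from "f diverges everywhere"
    # and iterate the abstract body until the table stabilises.
    args = [(x, y) for x in (BOT, TOP) for y in (BOT, TOP)]
    table = {a: BOT for a in args}
    changed = True
    while changed:
        changed = False
        for (x, y) in args:
            cond = abs_prim(x, TOP)                        # x == 0
            rec = table[(abs_prim(x, TOP), abs_prim(y, x))]  # f (x-1) (y+x)
            new = abs_cond(cond, y, rec)
            if new != table[(x, y)]:
                table[(x, y)] = new
                changed = True
    return table

table = analyse()
print("strict in x:", table[(BOT, TOP)] == BOT)  # True
print("strict in y:", table[(TOP, BOT)] == BOT)  # True
```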
Syndicated loans and the number of lending relationships have attracted growing attention. All other terms being equal (e.g. seniority), syndicated loans provide larger payments (in basis points) to lenders funding larger amounts. The paper explores empirically the motivation for such price discrimination on sovereign syndicated loans in the period 1990-1997. First evidence suggests that larger premia are associated with renegotiation prospects. This is consistent with the hypothesis that price discrimination is aimed at reducing the number of lenders and thus the expected renegotiation costs. However, larger payment discrimination is also associated with more targeted market segments and with larger loans, thus minimising borrowing costs and/or attempting to widen the circle of lending relationships in order to successfully raise the requested amount. JEL Classification: F34, G21, G33. This version: June 2002. Later version (October 2003) with the title "Why Borrowers Pay Premiums to Larger Lenders: Empirical Evidence from Sovereign Syndicated Loans": http://publikationen.ub.uni-frankfurt.de/volltexte/2005/992/
We use consumer price data for 205 cities/regions in 21 countries to study deviations from the law of one price before, during and after the major currency crises of the 1990s. We combine data from industrialised nations in North America (United States, Canada, Mexico), Europe (Germany, Italy, Spain and Portugal) and Asia-Pacific (Japan, Korea, New Zealand, Australia) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). We confirm previous results that both distance and borders explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects, significantly increasing these border effects and raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent, since relative price volatility across emerging markets today is still significantly larger than a decade ago. JEL classification: F40, F41
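The distance-and-border decomposition described here is in the spirit of Engel and Rogers (1996): the volatility of a city pair's relative price is regressed on log distance and a cross-border dummy. A hedged sketch of that core regression on synthetic data follows; all variable names and coefficient values are invented, not the paper's estimates.

```python
import numpy as np

# Engel-Rogers style regression: relative price volatility of a city
# pair explained by log distance and a border dummy. Synthetic data.
rng = np.random.default_rng(0)
n_pairs = 500
log_dist = rng.uniform(3, 8, n_pairs)      # log kilometres between cities
border = rng.integers(0, 2, n_pairs)       # 1 if the pair straddles a border
# "True" model for the simulation: both distance and the border raise
# volatility (coefficients are illustrative only).
vol = 0.02 + 0.004 * log_dist + 0.015 * border + rng.normal(0, 0.005, n_pairs)

X = np.column_stack([np.ones(n_pairs), log_dist, border])
beta, *_ = np.linalg.lstsq(X, vol, rcond=None)
print("intercept, log-distance, border effect:", beta.round(4))
# A positive border coefficient is the "border effect"; the paper finds
# it widens significantly around currency crises.
```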
We use consumer price data for 81 European cities (in Germany, Austria, Switzerland, Italy, Spain and Portugal) to study deviations from the law of one price before and during the European Economic and Monetary Union (EMU). Analysing both aggregate and disaggregated CPI data for seven categories of goods, we find that the distance between cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of the relative price is much higher for two cities located in different countries than for two equidistant cities in the same country. Under EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and borders still matter for intra-European relative price volatility. JEL classification: F40, F41
This paper analyzes a comprehensive data set of 108 non-venture-backed, 58 venture-backed and 33 bridge-financed companies going public at Germany's Neuer Markt between March 1997 and March 2000. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary or rather competing role of venture capitalists and underwriters in certifying the quality of a company when going public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to reduced ex-ante uncertainty. This study provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non-VC-backed IPOs.
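Underpricing in such studies is the first-day return relative to the offer price. The snippet below sketches the group comparison the abstract describes on synthetic numbers; the group sizes mirror the sample, but all return figures are invented.

```python
import numpy as np
from scipy import stats

# Underpricing = (first trading-day close - offer price) / offer price.
# Synthetic illustration of comparing VC-backed vs non-VC-backed IPOs;
# group sizes mirror the study's sample (58 vs 108), returns are made up.
rng = np.random.default_rng(1)
up_vc = rng.normal(0.55, 0.40, 58)       # hypothetical VC-backed returns
up_nonvc = rng.normal(0.40, 0.40, 108)   # hypothetical non-VC-backed

t, p = stats.ttest_ind(up_vc, up_nonvc, equal_var=False)  # Welch t-test
print(f"mean VC-backed: {up_vc.mean():.2%}, "
      f"mean non-VC-backed: {up_nonvc.mean():.2%}, p-value: {p:.3f}")
```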
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
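The paper's simulation model is far richer, but its central point about a fully hedged bank can be shown with a two-instrument toy: under full fair value both the hedged item and the hedge run through P&L and net to the zero economic result, while a mixed-attribute model without hedge accounting remeasures only the derivative. All numbers and the single "rate shock" factor below are hypothetical.

```python
# Toy illustration: a fully hedged bank has zero economic earnings, but
# mixed-attribute accounting can show volatile income because the hedged
# item and the hedge are measured differently. Numbers are hypothetical.
rate_shock_bp = 100          # rates rise by 100 basis points
pnl_asset = -0.05            # fixed-rate asset loses value (per 100 notional)
pnl_swap = +0.05             # pay-fixed swap gains exactly as much

# Full fair value (the JWG proposal): both legs at fair value through P&L.
ffv_income = pnl_asset + pnl_swap        # = 0.0, the economic result

# Mixed model without hedge accounting: the asset stays at amortised cost
# (no remeasurement), only the derivative is at fair value through P&L.
mixed_income = pnl_swap                  # artificial income volatility

print("full fair value income:", ffv_income)    # 0.0
print("mixed-attribute income:", mixed_income)  # 0.05 per 100 notional
```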
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Additionally accounting for potential downward nominal rigidities in the price-setting behavior of firms appears not to impose significant further distortions on the economy. JEL Classification: C63, E31, E52.
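The paper solves a forward-looking sticky-price model; purely as an intuition pump, the backward-looking toy below shows how the max(0, ·) constraint on the policy rate generates extra output and inflation losses when the natural rate turns negative. All equations and parameter values are illustrative, not the paper's.

```python
import numpy as np

# Deliberately simple backward-looking toy (NOT the paper's forward-
# looking New Keynesian model): a Taylor rule truncated at zero cannot
# track a negative natural rate, so the real rate stays too high and
# output and inflation fall. Parameters are illustrative.
kappa, sigma, phi_pi, pistar = 0.1, 0.5, 1.5, 0.02
T = 20
rnat = np.full(T, 0.02)
rnat[2:8] = -0.04                     # temporary negative natural rate
pi, x, i = np.zeros(T), np.zeros(T), np.zeros(T)
pi[0], i[0] = pistar, 0.04            # start at steady state

for t in range(1, T):
    desired = rnat[t] + pi[t-1] + phi_pi * (pi[t-1] - pistar)
    i[t] = max(0.0, desired)                  # the zero lower bound
    real_gap = (i[t] - pi[t-1]) - rnat[t]     # real rate above natural rate
    x[t] = x[t-1] - sigma * real_gap          # IS-type relation
    pi[t] = pi[t-1] + kappa * x[t]            # accelerationist Phillips curve

print("periods with binding ZLB:", np.where(i == 0.0)[0])
print("trough of output gap: %.3f" % x.min())
```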
Ignoring the existence of the zero lower bound on nominal interest rates, one considerably understates the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with a lower bound, calibrated to the U.S. economy, suggests that low values of the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and the deflation are much larger than in the case with policy commitment and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private-sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL Classification: E31, E52
Using data from the Consumer Expenditure Survey we first document that the recent increase in income inequality in the US has not been accompanied by a corresponding rise in consumption inequality. Much of this divergence is due to different trends in within-group inequality, which has increased significantly for income but little for consumption. We then develop a simple framework that allows us to analytically characterize how within-group income inequality affects consumption inequality in a world in which agents can trade a full set of contingent consumption claims, subject to endogenous constraints emanating from the limited enforcement of intertemporal contracts (as in Kehoe and Levine, 1993). Finally, we quantitatively evaluate, in the context of a calibrated general equilibrium production economy, whether this set-up, or alternatively a standard incomplete markets model (as in Aiyagari, 1994), can account for the documented stylized consumption inequality facts from the US data. JEL Classification: E21, D91, D63, D31, G22
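"Within-group inequality" here is the part of dispersion not explained by observable group characteristics. A minimal sketch of the between/within variance decomposition on synthetic data (the groups and all numbers are invented; in the paper the groups are observable cells in the CEX):

```python
import numpy as np

# Between/within decomposition of log-income variance: the part
# explained by group means is "between-group", the residual variance is
# "within-group". Synthetic data with four hypothetical groups.
rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 4, n)                  # e.g. education groups
true_means = np.array([2.8, 3.0, 3.3, 3.7])    # hypothetical log-income means
log_y = true_means[group] + rng.normal(0, 0.5, n)

means = np.array([log_y[group == g].mean() for g in range(4)])
fitted = means[group]                  # prediction from observables
between = fitted.var()                 # explained by group membership
within = (log_y - fitted).var()        # within-group (residual) inequality
print(f"total {log_y.var():.3f} = between {between:.3f} + within {within:.3f}")
```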
In this paper, we examine the cost of insurance against model uncertainty for the Euro area, considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range in terms of a Minimax policy comes at moderate costs in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with the highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL Classification: E52, E58, E61
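The Minimax-versus-Bayesian logic can be made concrete with a small loss matrix: the Minimax policy minimises the worst-case loss across models, a Bayesian policy minimises the prior-weighted loss. The sketch below is purely illustrative; the matrix is made up, whereas the paper works with four estimated euro-area models and optimized policy rules.

```python
import numpy as np

# Rows: candidate reference models; columns: candidate policy rules.
# Loss values are invented for illustration; the last row plays the
# role of the model with the highest baseline losses.
loss = np.array([[1.0, 1.4, 2.0],
                 [1.6, 1.1, 1.8],
                 [2.2, 1.7, 1.3],
                 [4.0, 2.6, 2.4]])

minimax = loss.max(axis=0).argmin()            # best worst-case rule
uniform = np.full(4, 0.25)
bayes = (uniform @ loss).argmin()              # best on average
print("minimax rule:", minimax, "| Bayes rule, uniform prior:", bayes)

# A prior that rationalises the Minimax choice concentrates on the
# worst model, mirroring the paper's finding:
skewed = np.array([0.05, 0.05, 0.10, 0.80])
print("Bayes rule under worst-model prior:", (skewed @ loss).argmin())
```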
This paper studies an overlapping generations model with stochastic production and incomplete markets to assess whether the introduction of an unfunded social security system leads to a Pareto improvement. When returns to capital and wages are imperfectly correlated, a system that endows retired households with claims to labor income enhances the sharing of aggregate risk between generations. Our quantitative analysis shows that, abstracting from the capital crowding-out effect, the introduction of social security represents a Pareto-improving reform, even when the economy is dynamically efficient. However, the severity of the crowding-out effect in general equilibrium tends to overturn these gains. JEL Classification: E62, H55, H31, D91, D58. April 2005.
While much of classical statistical analysis is based on Gaussian distributional assumptions, statistical modeling with the Laplace distribution has gained importance in many applied fields. This phenomenon is rooted in the fact that, like the Gaussian, the Laplace distribution has many attractive properties. This paper investigates two methods of combining the Gaussian and Laplace distributions and their use in modeling and predicting financial risk. Based on 25 daily stock return series, the empirical results indicate that the new models offer a plausible description of the data. They are also shown to be competitive with, or superior to, use of the hyperbolic distribution, which has gained some popularity in asset-return modeling and which, in fact, also nests the Gaussian and Laplace. JEL Classification: C16, C50. March 2005.
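As a hedged illustration of the kind of distributional horse race the abstract describes, the snippet below fits Gaussian and Laplace distributions to a fat-tailed return series and compares log-likelihoods; the paper's actual contribution, models that combine the two distributions, is not reproduced here.

```python
import numpy as np
from scipy import stats

# Fit Gaussian and Laplace distributions to a fat-tailed return series
# and compare in-sample log-likelihoods. The Student-t draws are a crude
# stand-in for daily stock returns; all numbers are illustrative.
rng = np.random.default_rng(3)
r = stats.t.rvs(df=4, scale=0.01, size=2500, random_state=rng)

for name, dist in [("gaussian", stats.norm), ("laplace", stats.laplace)]:
    params = dist.fit(r)                     # maximum-likelihood (loc, scale)
    ll = dist.logpdf(r, *params).sum()
    print(f"{name:8s} log-likelihood: {ll:.1f}")
# For fat-tailed data the Laplace fit typically attains the higher
# log-likelihood, the kind of comparison the paper formalises.
```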
This paper computes the optimal progressivity of the income tax code in a dynamic general equilibrium model with household heterogeneity in which uninsurable labor productivity risk gives rise to a nontrivial income and wealth distribution. A progressive tax system serves as a partial substitute for missing insurance markets and promotes a more equal distribution of economic welfare. These beneficial effects of a progressive tax system have to be traded off against the efficiency loss arising from distorting endogenous labor supply and capital accumulation decisions. Using a utilitarian steady-state social welfare criterion, we find that the optimal US income tax is well approximated by a flat tax rate of 17.2% and a fixed deduction of about $9,400. The steady-state welfare gains from a fundamental tax reform towards this tax system are equivalent to 1.7% higher consumption in each state of the world. An explicit computation of the transition path induced by a reform of the current towards the optimal tax system indicates that a majority of the population currently alive (roughly 62%) would experience welfare gains, suggesting that such a fundamental income tax reform is not only desirable but may also be politically feasible. JEL Classification: E62, H21, H24
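The optimal tax code reported in the abstract is easy to state as a formula, T(y) = 0.172 · max(0, y − 9,400), which the snippet below transcribes directly; the example incomes are invented.

```python
# Flat rate and fixed deduction taken from the abstract; incomes below
# are arbitrary examples.
RATE, DEDUCTION = 0.172, 9_400

def tax(income):
    """Tax owed under the flat-rate-with-deduction schedule."""
    return RATE * max(0.0, income - DEDUCTION)

for y in (8_000, 20_000, 50_000, 150_000):
    t = tax(y)
    print(f"income {y:>7,}: tax {t:>9,.0f}  average rate {t / y:.1%}")
# Average tax rates rise with income (from 0% towards 17.2%), so the
# flat rate with a deduction is still progressive in average-rate terms.
```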
Financial markets embed expectations of central bank policy into asset prices. This paper compares two approaches that extract a probability density of market beliefs. The first is a simulated-moments estimator for option volatilities described in Mizrach (2002); the second is a new approach developed by Haas, Mittnik and Paolella (2004a) for fat-tailed conditionally heteroskedastic time series. In an application to the 1992-93 European Exchange Rate Mechanism crises, we find that both the options and the underlying exchange rates provide useful information for policy makers. JEL Classification: G12, G14, F31.
Volatility forecasting
(2005)
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1.
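The GARCH paradigm surveyed in Section 3 rests on a one-line variance recursion, sigma2[t+1] = omega + alpha*r[t]^2 + beta*sigma2[t]. A minimal self-contained sketch of filtering and multi-step forecasting follows; the parameter values are typical textbook numbers and the return series is a placeholder, not real data.

```python
import numpy as np

# GARCH(1,1): tomorrow's conditional variance is a weighted combination
# of a constant, today's squared return, and today's variance.
omega, alpha, beta = 1e-5, 0.08, 0.90

def garch_filter(returns, sigma2_0):
    """Run the variance recursion through an observed return series."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = sigma2_0
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2

def forecast(sigma2_last, h):
    """Multi-step forecast: mean reversion to the unconditional variance."""
    uncond = omega / (1 - alpha - beta)
    return uncond + (alpha + beta) ** h * (sigma2_last - uncond)

rng = np.random.default_rng(4)
r = rng.normal(0, 0.01, 1000)          # placeholder daily return series
s2 = garch_filter(r, r.var())
print("1-day vol forecast:  %.4f" % np.sqrt(forecast(s2[-1], 1)))
print("20-day vol forecast: %.4f" % np.sqrt(forecast(s2[-1], 20)))
```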
This paper analyzes dynamic equilibrium risk-sharing contracts between profit-maximizing intermediaries and a large pool of ex-ante identical agents that face idiosyncratic income uncertainty that makes them heterogeneous ex-post. In any given period, after having observed her income, the agent can walk away from the contract, while the intermediary cannot, i.e. there is one-sided commitment. We consider the extreme scenario that the agents face no costs to walking away and can sign up with any competing intermediary without any reputational losses. We demonstrate that not only autarky, but also partial and full insurance can obtain, depending on the relative patience of agents and financial intermediaries. Insurance can be provided because in an equilibrium contract an up-front payment effectively locks in the agent with an intermediary. We then show that our contract economy is equivalent to a consumption-savings economy with one-period Arrow securities and a short-sale constraint, similar to Bulow and Rogoff (1989). From this equivalence and our characterization of dynamic contracts it immediately follows that, without costs of switching financial intermediaries, debt contracts are not sustainable, even though a risk allocation superior to autarky can be achieved. JEL Classification: G22, E21, D11, D91.
Default risk sharing between banks and markets : the contribution of collateralized debt obligations
(2005)
This paper contributes to the economics of financial institutions' risk management by exploring how loan securitization affects their default risk, their systematic risk, and their stock prices. In a typical CDO transaction a bank retains, through a first loss piece, a very high proportion of the expected default losses, and transfers only the extreme losses to other market participants. The size of the first loss piece is largely driven by the average default probability of the securitized assets. If the bank sells loans in a true sale transaction, it may use the proceeds to expand its loan business, thereby incurring more systematic risk. We find an increase of the banks' betas, but no significant stock price effect around the announcement of a CDO issue. Our results suggest a role for supervisory requirements in stabilizing the financial system, related to the transparency of tranche allocation and to the regulatory treatment of senior tranches. JEL Classification: D82, G21, D74.
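The division of losses between the first loss piece and more senior tranches follows a simple attach/detach rule. The sketch below uses invented attachment points to illustrate why expected losses stay with the retained first loss piece while only extreme losses reach the senior tranches sold to the market.

```python
# Tranche loss allocation in a stylised CDO. Attachment points are
# illustrative, not taken from any actual transaction.

def tranche_loss(pool_loss, attach, detach):
    """Loss borne by a tranche covering [attach, detach) of pool notional."""
    return min(max(pool_loss - attach, 0.0), detach - attach)

tranches = [("first loss piece", 0.00, 0.07),
            ("mezzanine",        0.07, 0.15),
            ("senior",           0.15, 1.00)]

for pool_loss in (0.03, 0.08, 0.25):   # mild, bad, extreme default losses
    alloc = {name: tranche_loss(pool_loss, a, d) for name, a, d in tranches}
    print(f"pool loss {pool_loss:.0%}:",
          {name: f"{v:.1%}" for name, v in alloc.items()})
# With expected pool losses of a few percent, nearly all expected loss
# stays in the first loss piece retained by the bank; only tail
# scenarios touch the senior tranches.
```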