Inflation-targeting central banks have only imperfect knowledge about the effect of policy decisions on inflation. An important source of uncertainty is the relationship between inflation and unemployment. This paper studies optimal monetary policy in the presence of uncertainty about the natural unemployment rate, the short-run inflation-unemployment tradeoff and the degree of inflation persistence in a simple macroeconomic model that incorporates rational learning by the central bank as well as by private-sector agents. Two conflicting motives drive the optimal policy. In the static version of the model, uncertainty provides a motive for the policymaker to move more cautiously than she would if she knew the true parameters. In the dynamic version, uncertainty also motivates an element of experimentation in policy. I find that the optimal policy that balances the cautionary and activist motives typically exhibits gradualism, that is, it remains less aggressive than a policy that disregards parameter uncertainty. Exceptions occur when uncertainty is very high and inflation is close to target.
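The cautionary motive in the static case is the classic Brainard-style attenuation result. A minimal numeric sketch (my own illustration with invented numbers, not the paper's learning model): when the outcome is y = b·x + ε and the slope b is uncertain, minimizing expected squared deviation from target shrinks the optimal instrument setting relative to the certainty-equivalent rule.

```python
import numpy as np

def optimal_policy(b_mean, b_var, target):
    """Static Brainard-type problem: outcome y = b*x + eps with an
    uncertain slope b (mean b_mean, variance b_var). Minimizing
    E[(y - target)^2] over x yields the attenuated rule
    x* = b_mean * target / (b_mean**2 + b_var)."""
    return b_mean * target / (b_mean**2 + b_var)

# Illustrative values (assumptions, not estimates from the paper).
b_mean, target = 1.0, 2.0
certainty_equivalent = target / b_mean            # ignores parameter uncertainty
cautious = optimal_policy(b_mean, b_var=0.5, target=target)
print(certainty_equivalent)  # 2.0
print(cautious)              # 2/1.5 = 1.333...
```

The cautious setting lies below the certainty-equivalent one; in the dynamic version of the model, the experimentation motive pushes back against exactly this attenuation.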
The use of GARCH models with stable Paretian innovations in financial modeling has recently been suggested in the literature. This class of processes is attractive because it allows for conditional skewness and leptokurtosis of financial returns without ruling out normality. This contribution illustrates their usefulness in predicting the downside risk of financial assets in the context of modeling foreign exchange rates and demonstrates their superiority over normal or Student's t GARCH models.
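A rough sketch of the idea (all parameter values are invented, and the scale recursion below is a generic power-GARCH-style one, not necessarily the authors' exact specification): simulate heavy-tailed returns with stable Paretian innovations via `scipy.stats.levy_stable` and read off an empirical downside-risk quantile.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n = 4_000
alpha_stab, beta_skew = 1.8, -0.1   # assumed tail index and skewness
omega, a1, b1 = 0.05, 0.10, 0.85    # assumed recursion parameters

# Stable innovations, and a GARCH-type recursion for the *scale*
# (absolute values are used because the variance is infinite for alpha < 2).
z = levy_stable.rvs(alpha_stab, beta_skew, size=n, random_state=rng)
sigma = np.empty(n)
r = np.empty(n)
sigma[0] = omega / (1 - a1 - b1)
r[0] = sigma[0] * z[0]
for t in range(1, n):
    sigma[t] = omega + a1 * abs(r[t - 1]) + b1 * sigma[t - 1]
    r[t] = sigma[t] * z[t]

var_1pct = np.quantile(r, 0.01)     # empirical 1% value-at-risk level
print(var_1pct)
```

The fat left tail produced by the stable innovations is what drives the downside-risk comparison against normal or Student's t alternatives.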
Learning and equilibrium selection in a monetary overlapping generations model with sticky prices
(2003)
We study adaptive learning in a monetary overlapping generations model with sticky prices and monopolistic competition for the case where learning agents observe current endogenous variables. Observability of current variables is essential for informational consistency of the learning setup with the model setup, but it generates multiple temporary equilibria when prices are flexible and prevents a straightforward construction of the learning dynamics. Sticky prices overcome this problem by avoiding simultaneity between prices and price expectations. Adaptive learning then robustly selects the determinate (monetary) steady state, independently of the degree of imperfect competition. The indeterminate (non-monetary) steady state and non-stationary equilibria are never stable. Stability in a deterministic version of the model may differ, because perfect-foresight equilibria can be the limit of restricted perceptions equilibria of the stochastic economy with vanishing noise and thereby inherit different stability properties. This discontinuity at zero shock variance suggests analyzing learning in stochastic models.
This paper compares Bayesian decision theory with robust decision theory, where the decision maker optimizes with respect to the worst state realization. For a class of robust decision problems there exists a sequence of Bayesian decision problems whose solution converges towards the robust solution. It is shown that the limiting Bayesian problem displays infinite risk aversion and that decisions are insensitive (robust) to the precise assignment of prior probabilities. This holds independently of whether the preference for robustness is global or restricted to local perturbations around some reference model.
This paper considers a sticky price model with a cash-in-advance constraint in which agents forecast inflation rates with the help of econometric models. Agents use least squares learning to estimate two competing models, of which one is consistent with rational expectations once learning is complete. When past performance governs the choice of forecast model, agents may prefer to use the inconsistent forecast model, which generates an equilibrium in which forecasts are inefficient. While average output and inflation are the same as under rational expectations, higher moments differ substantially: output and inflation show persistence, inflation responds sluggishly to nominal disturbances, and the dynamic correlations of output and inflation match U.S. data surprisingly well.
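The workhorse behind least squares learning is recursive least squares with a decreasing gain. A self-contained sketch (simulated AR(1) inflation with invented parameters, not the paper's cash-in-advance model) shows an agent's coefficient estimate converging toward the true value:

```python
import numpy as np

def rls_learning(y, x, phi0=0.0, r0=1.0):
    """Recursive least squares with decreasing gain: each period the
    agent updates the second-moment estimate R and the coefficient phi
    using the latest forecast error. r0 acts as a prior observation on
    the second moment, keeping the recursion well-defined."""
    phi, R = phi0, r0
    path = []
    for t, (xt, yt) in enumerate(zip(x, y), start=1):
        gain = 1.0 / (t + 1)                   # decreasing gain
        R = R + gain * (xt * xt - R)
        phi = phi + gain * (xt / R) * (yt - phi * xt)
        path.append(phi)
    return np.array(path)

# Simulated AR(1) "inflation" with persistence 0.8 (illustrative numbers).
rng = np.random.default_rng(0)
rho, n = 0.8, 5_000
pi = np.zeros(n)
for t in range(1, n):
    pi[t] = rho * pi[t - 1] + 0.1 * rng.standard_normal()

estimates = rls_learning(y=pi[1:], x=pi[:-1])
print(estimates[-1])  # settles near the true coefficient 0.8
```

Competing forecast models can be compared in the same loop by tracking each model's squared forecast errors, which is how past performance can govern model choice.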
Over-allotment arrangements are nowadays part of almost any initial public offering. The underwriting banks borrow stocks from the previous shareholders to issue more than the initially announced number of shares. This is combined with the option to cover this short position at the issue price. We present empirical evidence on the value of these arrangements to the underwriters of initial public offerings on the Neuer Markt. The over-allotment arrangement is regarded as a portfolio of a long call option and a short position in a forward contract on the stock, which is different from other approaches presented in the literature. Given the economically substantial values for these option-like claims we try to identify benefits to previous shareholders or new investors when the company is using this instrument in the process of going public. Although we carefully control for potential endogeneity problems, we find virtually no evidence for a reduction in underpricing for firms using over-allotment arrangements. Furthermore, we do not find evidence for more pronounced price stabilization activities or better aftermarket performance for firms granting an over-allotment arrangement to the underwriting banks.
A number of recent studies have suggested that activist stabilization policy rules responding to inflation and the output gap can attain simultaneously a low and stable rate of inflation as well as a high degree of economic stability. The foremost example of such a strategy is the policy rule proposed by Taylor (1993). In this paper, I demonstrate that the policy settings that would have been suggested by this rule during the 1970s, based on real-time data published by the U.S. Commerce Department, do not greatly differ from actual policy during this period. To the extent macroeconomic outcomes during this period are considered unfavorable, this raises questions regarding the usefulness of this strategy for monetary policy. To the extent the Taylor rule is believed to provide a reasonable guide to monetary policy, this finding raises questions regarding earlier critiques of monetary policy during the 1970s.
Ambivalence in the regulatory definition of capital adequacy for credit risk has recently steered the financial services industry towards collateralised loan obligations (CLOs) as an important balance sheet management tool. CLOs represent a specialised form of asset-backed securitisation (ABS), with investors acquiring a structured claim on the interest proceeds generated from a portfolio of bank loans, in the form of tranches with different seniority. By modelling Merton-type risk-neutral asset returns of contingent claims on a multi-asset portfolio of corporate loans in a CLO transaction, we analyse the optimal design of loan securitisation from the perspective of credit risk in potential collateral default. We propose a pricing model that draws on a careful simulation of expected loan loss based on parametric bootstrapping through extreme value theory (EVT). The analysis illustrates the dichotomous effect of loss cascading, as the most junior tranche of CLO transactions exhibits a distinctly different default tolerance compared to the remaining tranches. By solving the puzzling question of properly pricing the risk premium for expected credit loss, we explain the rationale of first-loss retention as credit risk cover on the basis of our simulation results for pricing purposes under the impact of asymmetric information. JEL Classification: C15, C22, D82, F34, G13, G18, G20
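The EVT building block can be sketched in a few lines. Below, a peaks-over-threshold fit of a generalized Pareto distribution to a simulated portfolio-loss sample yields a tail quantile of the loss distribution; the lognormal loss sample and the 95% threshold are my own illustrative choices, not the paper's calibration.

```python
import numpy as np
from scipy.stats import genpareto, lognorm

# Hypothetical loss sample standing in for simulated loan losses.
rng = np.random.default_rng(42)
losses = lognorm.rvs(s=1.0, size=10_000, random_state=rng)

# Peaks-over-threshold: fit a GPD to exceedances above a high threshold.
u = np.quantile(losses, 0.95)
exceed = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0)   # shape xi, scale beta

# Standard EVT estimator of the 99.9% loss quantile (valid for xi != 0).
p, n, n_u = 0.999, losses.size, exceed.size
q = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1)
print(q)
```

Resampling from the fitted tail (the parametric bootstrap) then delivers the extreme collateral-loss scenarios against which tranche default tolerances can be priced.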
The following descriptive paper surveys the various types of loan securitisation and provides a working definition of so-called collateralised loan obligations (CLOs). Free of the common rhetoric and slogans that sometimes substitute for an understanding of the complex nature of structured finance, this paper describes the theoretical foundations of this specialised form of loan securitisation. Not only the distinctive properties of CLOs, but also the information economics inherent in the transfer of credit risk are considered, with equal weight given to the critical aspects of security design in the structuring of CLO transactions.
In this paper we assess the implications of sunk costs and product differentiation for the pricing decisions of multinational firms. For this purpose we use a modified version of Salop's model of spatial competition. The model yields clear-cut predictions regarding the effects of exchange rate shocks on market structure and on pass-through. The main results are as follows: shocks within the band of inaction do not affect market structure. The upper bound of this range rises as the industry ratio of sunk to fixed costs increases. As fixed costs and product heterogeneity jointly increase, the lower bound drops. Outside of the range, depreciations cause one or several of the foreign brands closest to the home brand to exit. This decreases the overall responsiveness of prices to exchange rate shocks. Large appreciations induce entry and increase the elasticity of prices. This asymmetry implies larger positive than negative PPP deviations. When accounting for price changes in foreign markets, strategic pricing behaviour is no longer sufficient to generate real exchange rate variability. Incomplete pass-through obtains if and only if the domestic firms have a smaller market share abroad. With large nominal exchange rate shocks, a hysteresis result obtains if and only if sunk costs are non-zero. JEL Classification: C33, E31
Since the second half of the nineties the euro area has been subject to a considerable accumulation of temporary and idiosyncratic price shocks. Core inflation indicators for the euro area are thus of utmost interest. Based on euro area-wide data, core inflation is analyzed in this paper by means of an indicator derived from the generalized dynamic factor model. This indicator reveals that HICP inflation strongly exaggerated both the decline and the increase in the price trend in 1999 and 2000/2001. Our results reinforce those obtained by Cristadoro, Forni, Reichlin and Veronese (2001) based on euro area country data, which indicates the robustness of the indicator. JEL Classification: C33, E31
Both unconditional mixed-normal distributions and GARCH models with fat-tailed conditional distributions have been employed for modeling financial return data. We consider a mixed-normal distribution coupled with a GARCH-type structure which allows for conditional variance in each of the components as well as dynamic feedback between the components. Special cases and relationships with previously proposed specifications are discussed and stationarity conditions are derived. An empirical application to NASDAQ-index data indicates the appropriateness of the model class and illustrates that the approach can generate a plausible disaggregation of the conditional variance process, in which the components' volatility dynamics have clearly distinct behavior that is, for example, compatible with the well-known leverage effect. JEL Classification: C22, C51, G10
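A minimal simulation sketch of the idea: two GARCH(1,1) variance components mixed with fixed weights, each updated from the common return. All numbers are invented for illustration, not taken from the paper's NASDAQ estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000
# Two components: a calm, high-weight one and a volatile, low-weight one.
w     = np.array([0.8,  0.2])     # mixture weights
omega = np.array([0.02, 0.10])
alpha = np.array([0.05, 0.15])
beta  = np.array([0.90, 0.80])
mu    = np.array([0.05, -0.20])   # component means, weighted to zero overall

sig2 = omega / (1 - alpha - beta)        # start near unconditional levels
r = np.zeros(n)
for t in range(1, n):
    k = rng.choice(2, p=w)               # draw the active mixture component
    r[t] = mu[k] + np.sqrt(sig2[k]) * rng.standard_normal()
    sig2 = omega + alpha * r[t] ** 2 + beta * sig2   # update BOTH components

kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2
print(r.std(), kurt)   # the mixture generates excess kurtosis
```

Even with normal innovations in each component, the variance mixture produces the skewness and leptokurtosis the abstract describes, and the two variance paths can behave quite differently, mirroring the disaggregation result.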
In this paper, we present a monetary policy game in which the central bank has a private forecast of supply and demand shocks. The public needs to form its inflationary expectations and can make use of central bank announcements. However, because of the credibility problem that the central bank faces, the public will not believe a precise announcement. By extending the arrangement proposed by Garfinkel and Oh (1995) to a model that includes private information about both demand and supply shocks, we investigate the feasibility of making imprecise credible announcements concerning the rate of inflation. JEL Classification: E52, E58
This paper investigates the financial contracting behavior of German venture capitalists against the results of recent theoretical work on the design of venture capital contracts, especially with regard to the use of convertible securities. First, we identify a special feature of the German market, namely that public-private partnership agencies require significantly lower returns than private and young venture capitalists. The latter are most likely to follow their North American counterparts by refinancing themselves with closed-end funds. Second, with regard to financing practices it is shown that the use of convertibles, relative to other instruments, is influenced by the anticipated severity of agency problems. JEL Classification: C24, G24, G32
We analyze the disinvestment decision of venture capitalists in the course of an IPO of their portfolio firms. The capital market learns of the project quality only in the period following the IPO. Venture capitalists with high-quality firms face a trade-off between immediately selling their stake in the venture at a price below the true value and having to wait until the true value is revealed. We show that the dilemma may be resolved via a reputation-acquiring mechanism in a repeated game set-up. Thereby, we can explain, e.g., the advent of "hot-issue market" behavior involving early disinvestments and a high degree of price uncertainty. Furthermore, we provide a new rationale for underpricing. Young venture capitalists may use underpricing as a device for credibly committing themselves to acquiring reputation.
Deutsche Börse AG plans to introduce a system (Xetra Best) allowing brokers and broker-dealers to internalize the orders of retail customers. Further, Xetra Best supports payment-for-order-flow arrangements. Both internalization and payment for order flow may be detrimental to market quality. This paper discusses advantages and disadvantages of these arrangements, drawing on experience gained in the US. We derive policy implications that aim at a more stringent interpretation of "best execution" and at higher transparency. JEL Classification: G10, G14
Within a two-step GARCH framework we estimate the time-varying spillover effects from European and US return innovations on 10 economic sectors within the euro area, the United States, and the United Kingdom. We use daily data from January 1988 to March 2002. At the beginning of our sample, sectors in all three currency areas/blocs formed a quite homogeneous group exhibiting only minor sector-specific characteristics. However, over time sectors became more heterogeneous, that is, the response to aggregate shocks increasingly varies across sectors. This provides evidence that sector-specific effects gained in importance. European industries show increased heterogeneity simultaneously with the start of the European Monetary Union, whereas in the US this trend started in the early 1990s. Information technology and non-cyclical services (including telecommunication services) became the most integrated sectors worldwide and are most affected by aggregate European and US shocks. On the other hand, basic industries, non-cyclical consumer goods, resources, and utilities became less affected by aggregate shocks. Volatility spillovers proved to be small and volatile. JEL Classification: G1, F36
Forecasting stock market volatility and the informational efficiency of the DAX-index options market
(2002)
Alternative strategies for predicting stock market volatility are examined. In out-of-sample forecasting experiments, implied-volatility information derived from contemporaneously observed option prices is compared with history-based volatility predictors, such as GARCH models, to determine which is more appropriate for predicting future return volatility. Employing German DAX-index return data, it is found that past returns do not contain useful information beyond the volatility expectations already reflected in option prices. This supports the efficient market hypothesis for the DAX-index options market.
Money-back guarantees in individual pension accounts : evidence from the German pension reform
(2002)
The German Retirement Saving Act instituted a new funded system of supplementary pensions coupled with a general reduction in the level of state pay-as-you-go old-age pensions. In order to qualify for tax relief, the providers of supplementary savings products must offer a guarantee of the nominal value at retirement of contributions paid into these saving accounts. This paper explores how this "money-back" guarantee works and evaluates alternative designs for guarantee structures, including a life cycle model (dynamic asset allocation), a plan with a pre-specified blend of equity and bond investments (static asset allocation), and some type of portfolio insurance. We use a simulation methodology to compare hedging effectiveness and hedging costs associated with the provision of the money-back guarantee. In addition, the guarantee has important implications for regulators, who must find an appropriate solvency system for such saving schemes. This version: June 17, 2002. JEL Classification: G11, G23, G28
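The hedging-effectiveness comparison rests on Monte Carlo simulation of terminal wealth against the nominal guarantee. A stripped-down sketch for the static-allocation case, where the return parameters, contribution pattern, and the 60/40 weight are all assumptions for illustration rather than the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, years = 20_000, 30
contrib = 1.0                       # one unit paid in at the start of each year
mu, sigma, bond = 0.08, 0.20, 0.03  # assumed equity drift/vol and bond rate
w = 0.6                             # static 60/40 equity/bond allocation

terminal = np.zeros(n_sims)
for _ in range(years):
    # Lognormal equity return and riskless bond return for one year.
    eq = np.exp((mu - 0.5 * sigma**2) + sigma * rng.standard_normal(n_sims))
    terminal = (terminal + contrib) * (w * eq + (1 - w) * np.exp(bond))

# Money-back guarantee: terminal wealth must cover nominal contributions.
shortfall_prob = np.mean(terminal < years * contrib)
print(shortfall_prob)
```

The shortfall probability (and the average shortfall size) is the raw material for comparing guarantee designs and for costing the hedge the provider must hold.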
Why borrowers pay premiums to larger lenders : empirical evidence from sovereign syndicated loans
(2003)
All other terms being equal (e.g. seniority), syndicated loan contracts provide larger lending compensations (in percentage points) to institutions funding larger amounts. This paper explores empirically the motivation for such a price design, using a sample of sovereign syndicated loans from the period 1990-1997. I find strong evidence that a larger premium is associated with a higher renegotiation probability and with information asymmetries; it hardly has any impact on the number of lenders, though. This is consistent with the hypothesis that larger lenders act as main lenders, namely that they help reduce information asymmetries and provide services in situations of liquidity shortage. This constitutes new evidence of the existence of compensation for such unique services. Moreover, larger payment discrepancies are also associated with larger syndicated loan amounts. This provides further new evidence that larger borrowers bear additional borrowing costs.
This empirical study analyzes the contractual arrangements between investors and European venture capital funds. It focuses on the compensation of fund management and on the contractual covenants employed, whose design is crucial for overcoming the agency problems inherent in the principal-agent relationship. To this end, 122 fund prospectuses and 46 partnership agreements of European venture capital funds are evaluated, covering funds raised between 1996 and 2001, the first major boom phase of the European venture capital market. While the annual compensation of fund management appears highly standardized at first glance, a present-value analysis of all management fees payable over the entire fund lifetime reveals clear signs of price differentiation. Regarding the use of contractual covenants, an increase over time, and hence a growing complexity of contract design, can be observed. Against the background of experience from the US venture capital market, however, this development cannot yet be regarded as complete: with respect to the use of contractual restrictions, the European market is at the level the US had already reached in the early 1990s.
We use consumer price data for 205 cities/regions in 21 countries to study PPP deviations before, during and after the major currency crises of the 1990s. We combine data from industrialized nations in North America (United States, Canada and Mexico), Europe (Germany, Italy, Spain and Portugal), Asia (Japan and South Korea), and Oceania (Australia and New Zealand) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). By doing so, we confirm previous results that both distance and borders explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects, considerably increasing these border effects and raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent, since relative price volatility across emerging markets today is still significantly larger than a decade ago.
We investigate the role of the trade channel as an important determinant of a country's current account position and of the degree of business cycle synchronization with the rest of the world by comparing the predictions of two types of DGE models. It is shown that the behavior of a country's external balance and the international transmission of shocks depend, amongst other things, on two factors: (i) the magnitude of trade interdependence, and (ii) the degree of substitutability between importable and domestically produced goods. Using time series data on bilateral trade flows, we estimate the magnitude of trade interdependence and the elasticity of substitution between importable and domestic goods for the G7 countries. Given these estimates, idiosyncratic supply shocks potentially induce changes in the current account and foreign output that vary in direction and magnitude across G7 countries. The relationship between the magnitude of foreign trade and import substitutability on the one hand and various correlation measures on the other is examined empirically in a cross-sectional dimension. First draft: July 2001. Final draft: November 2001. JEL Classification: E32, F41
Industrial production in G7 countries is assumed to be driven by two exogenous disturbances. These disturbances are identified in a VAR model so that they can be interpreted as country-specific and global supply shocks. The dynamic properties of the model are analyzed and the relative importance of each shock is measured. It is shown that the VAR model matches most of the theoretical predictions of standard intertemporal open-economy models. The identified structural disturbances are analyzed with regard to their impact on the current account and investment. First draft: October 2000. Final draft: January 2001. This paper is based on the second chapter of my doctoral dissertation at the University of Frankfurt. JEL Classification: E32, F41
Our study examines the existence and the nature of private benefits of control in Germany. We do this by analyzing initial public offerings of founding-family owned firms and tracking their fate up to ten years following the IPO. Our sample includes a uniquely rich data set of 105 IPOs of family-owned firms floated from 1970 to 1991 on German stock exchanges. We find that, first, even ten years after the IPO, family owners, in the cross section, continue to exercise considerable control. Second, we show that there exist substantial private benefits of control in these firms and, to our understanding for the first time, we empirically measure what the nature of these private benefits really is. We also show that the separation of cash flow rights and voting rights via the issuance of dual-class shares is used to create controlling shareholder structures in order to preserve these private benefits. Third, we find a puzzling and significant underperformance of dual-class share IPOs, which can be explained by ex ante unanticipated expropriation of minority shareholders due to poor investor protection in Germany. This version: 4th draft, November 2001. This paper has been presented at the European Financial Management Association 2001 Meeting in Lugano, the CEPR Conference "The Firm and its Stakeholders" in Courmayeur, the Fall 2000 WAFA Conference in Washington, D.C., the European Economic Association 2000 Conference in Bozen, the ABN AMRO International Conference on IPOs in Amsterdam, the SIRIF Conference on Corporate Governance in Edinburgh, the Financial Center Seminar at the Tinbergen Institute Rotterdam, and the G-Forum on Entrepreneurship Research in Vienna. JEL Classification: G14, G32, G15
Against the difficult background of analysing aggregated data, core inflation in the euro area is estimated in this paper by means of the structural vector autoregressive approach. We demonstrate that the HICP sometimes seems to be a misleading indicator for monetary policy in the euro area. We furthermore compare our core inflation measure to the widespread "ex food and energy" measure often referred to by the ECB. In addition, we provide evidence that our measure is a coincident indicator of HICP inflation. Assessing the robustness of our core inflation measure, we carefully conclude that it seems to be quite reliable. This version: April 2002. Revised edition published in: Allgemeines Statistisches Archiv, Vol. 87, 2003. JEL Classification: C32, E31
Recent empirical research has found that the strong short-term relationship between monetary aggregates and US real output and inflation, as outlined in the classical study by M. Friedman and Schwartz, has mostly disappeared since the early 1980s. In the light of the B. Friedman and Kuttner (1992) information-value approach, we reevaluate the vanishing relationship between US monetary aggregates and these macroeconomic fundamentals by taking into account the international currency feature of the US dollar. In practice, by using official US data on foreign flows constructed by Porter and Judson (1996), we find that domestic money (the currency component of M1 corrected for foreign holdings of dollars) contains valuable information about future movements of US real output and inflation. The statistical evidence provided here thus suggests that Friedman and Schwartz's stylized facts can be reestablished once the focus of the analysis is back on domestic monetary aggregates. This version: August 2001. JEL Classification: E3, E4, E5
We use consumer price data for 81 European cities (in Germany, Austria, Finland, Italy, Spain, Portugal and Switzerland) to study the impact of the introduction of the euro on goods market integration. Employing both aggregated and disaggregated consumer price index (CPI) data we confirm previous results which showed that the distance between European cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of relative prices is much higher for two cities located in different countries than for two equidistant cities in the same country. Under the EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and border still matter for intra-European relative price volatility.
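The workhorse regression behind such results relates the volatility of a city-pair relative price to log distance and a cross-border dummy. A sketch on synthetic data (the coefficients 0.1 and 0.3 and all other numbers are invented for illustration, not estimates from the paper):

```python
import numpy as np

# Synthetic city-pair data standing in for CPI-based relative prices:
# relative-price volatility rises with log distance and with a
# cross-border dummy, the two effects at the center of the analysis.
rng = np.random.default_rng(1)
n = 2_000
log_dist = rng.uniform(3, 8, n)                 # log of pair distance
border = rng.integers(0, 2, n).astype(float)    # 1 if pair straddles a border
vol = 0.5 + 0.1 * log_dist + 0.3 * border + 0.05 * rng.standard_normal(n)

# OLS via least squares: intercept, distance effect, border effect.
X = np.column_stack([np.ones(n), log_dist, border])
coef, *_ = np.linalg.lstsq(X, vol, rcond=None)
print(coef)
```

A shrinking border coefficient across sub-samples before and after the elimination of nominal exchange rate volatility is the kind of evidence the integration result rests on.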
This paper investigates how US and European equity markets affected the US dollar-euro rate from the introduction of the euro through April 2001. More specifically, the following questions are raised: First, do movements in the stock market help to explain movements in the exchange rate? Second, how large is the impact of stock market returns on the exchange rate? And third, does the exchange rate respond differently to different equity markets? The investigation was carried out using daily data within a vector-autoregression (VAR) model. Surprisingly, positive returns on US equities as well as on European stock markets had a negative impact on the US dollar-euro rate. Quantitatively, the US dollar-euro rate seems to be more influenced by European stock markets than by US stock markets. Further, there is evidence of a somewhat weaker impact of technology stock indices on the US dollar-euro rate compared with broader market indices. Finally, the long-term interest rate differential seems to contain more information about exchange rate movements than the short-term interest rate differential. This version: August 2001. JEL Classification: C32, F31
This paper uses a unique data set from the credit files of six leading German banks to provide some empirical insights into the rating systems they use to classify corporate borrowers. On the basis of the New Basle Capital Accord, which allows banks to use their internal rating systems to compute their minimum capital requirements, the relations between potential risk factors, rating decisions and default probabilities are analysed to answer the question whether German banks are ready for the internal ratings-based approach. The results suggest that the answer is not affirmative at this stage. We find that internal rating systems are not comparable across banks, and furthermore we reveal differences between the factors determining credit ratings and those determining default probabilities. JEL Classification: G21, G33, G38
In the recent theoretical literature on lending risk, the coordination problem in multi-creditor relationships has been analyzed extensively. We address this topic empirically, relying on a unique panel data set that includes detailed credit-file information on distressed lending relationships in Germany. In particular, it includes information on creditor pools, a legal institution aimed at coordinating lender interests in borrower distress. We report three major findings. First, the existence of creditor pools increases the probability of workout success. Second, the results are consistent with coordination costs being positively related to pool size. Third, major determinants of pool formation are found to be the number of banks, the distribution of lending shares, and the severity of the distress shock.
In recent years new methods and models have been developed to quantify credit risk on a portfolio basis. CreditMetrics (tm), CreditRisk+ and CreditPortfolioView (tm) are among the best known, and many others are similar to them. At first glance they are quite different in their approaches and methodologies. A comparison of these models, especially with regard to their applicability to typical middle-market loan portfolios, is the focus of this study. The analysis shows that differences in the results of applying the models to a given loan portfolio are mainly due to different approaches to approximating default correlations. That is especially true for typically non-rated medium-sized counterparties. On the other hand, the distributional assumptions and solution techniques of the models are more or less compatible.
This paper shows that emerging market eurobond spreads after the Asian crisis can be almost completely explained by market expectations about macroeconomic fundamentals and international interest rates. Contrary to the claim that emerging market bond spreads are driven by market variables such as stock market volatility in the developed countries, it is found that this did not play a significant role after the Asian crisis. Using panel data techniques, it is shown that the determinants of bond spreads can be divided into long-term structural variables and medium-term variables which explain month-to-month changes in bond spreads. As relevant medium-term variables, ''consensus forecasts'' of real GDP growth and inflation, and international interest rates are identified. The long-term structural factors do not explicitly enter the model and show up as fixed or random country-specific effects. These intercepts are highly correlated with the countries' credit rating.
This paper analyzes a comprehensive data set of 160 non-venture-backed, 79 venture-backed and 61 bridge-financed companies going public at Germany's Neuer Markt between March 1997 and March 2002. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary, or rather competing, role of venture capitalists and underwriters in certifying the quality of a company when going public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to reduced ex-ante uncertainty. This analysis provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non-VC-backed IPOs.
This paper provides a thorough examination of the Chilean Pension Reform, first giving an overview of the mandatory saving plan, the relevant institutions, and the rules for the transition from the old to the new system. The main part of the paper contains a critical evaluation of the reform: in particular, the macroeconomic performance with respect to capital formation and growth is assessed, and the effects on the savings rate as well as on rates of return and the labor market are discussed. Furthermore, the development of capital markets is reviewed. A short critique is presented with respect to intergenerational distribution and risk sharing as well as to the social consequences. This paper is the result of a CFS-sponsored research project. A preliminary version was presented at the meeting of the Committee of Social Policy of the Verein für Socialpolitik, May 1999, and at the 55th Congress of the IIPF, 23-26 August 1999, in Moscow.
This paper provides empirical evidence on initial public offerings (IPOs) by investigating the pricing and long-run performance of IPOs using a unique data set collected on the German capital market before World War I. Our findings indicate that underpricing of IPOs existed, but decreased significantly over time in our sample. Employing a mixture-of-distributions approach, we also find evidence of price stabilization of IPOs. Concerning long-run performance, investors who bought their shares in the early after-market and held them for more than three years experienced significantly lower returns than the respective industry as a whole. Earlier versions of this paper were presented at the ABN-AMRO Conference on IPOs in Amsterdam, the Annual Meetings of the European Finance Association, the Annual Meetings of the Verein für Socialpolitik, the IX Tor Vergata International Conference on Banking and Finance in Rome, and at Johann Wolfgang Goethe-University in Frankfurt.
This paper examines empirically whether the presence of foreign banks and a liberal trade regime with regard to financial services can contribute to a stabilization of capital flows to emerging markets. Since foreign banks, so the argument goes, provide better information to foreign investors and increase transparency, the danger of herding is reduced. Previous findings by Kono and Schuknecht (1998) confirmed empirically that such an effect does exist. This study expands their data set with respect to the length of the time period and the number of countries. Contrary to Kono and Schuknecht, it is found that foreign bank penetration tends instead to increase the volatility of capital flows. The trade regime variables are not significant in explaining cross-country variations in the volatility of capital flows. This result does not change significantly when alternative measures of volatility are considered. This paper was presented at the conference "Financial crisis in transition countries: recent lessons and problems yet to solve" on 13-14 July 2000 at the Institute for Economic Research (IWH) in Halle, Germany.
This paper discusses the role of internal corporate ratings as a means by which commercial banks condense their informational advantage and preserve it vis-à-vis a competitive lending market. Drawing on a unique data set collected from leading universal banks in Germany, we are able to evaluate the extent to which non-public information determines corporate ratings. As a point of departure, the paper describes a sample of rating systems currently in use and points out methodological differences between them. Relying on a probit analysis, we are able to show that the set of qualitative, or soft, factors is not simply redundant with respect to publicly available accounting data. Rather, qualitative information tends to be decisive in at least one third of cases, and it tends to improve the firms' overall corporate rating. In the case of conflicting rating changes, i.e. when qualitative and quantitative rating changes have opposing signs, quantitative criteria dominate the overall rating change. Furthermore, the more restrictive the weighting scheme within the rating methodology, the stronger the impact of qualitative information on the firms' overall rating. The implications of our results underline the need to define stringent rating standards, from both a risk management and a regulatory point of view. Revised edition published in: ZEW Wirtschaftsanalysen 2001, Bd 54, Baden-Baden, Nomos
This paper provides a broad empirical examination of the major currencies' roles in international capital markets, with a special emphasis on the first year of the euro. A contribution is made as to how to measure these roles, both for international financing and for international investment. The time series collected for these measures allow for the identification of changes in the role of the euro during 1999 compared to the aggregate of euro predecessor currencies, net of intra-euro area assets/liabilities, before stage 3 of EMU. A number of key factors determining the currency distribution of international portfolio investments, such as relative market liquidity and relative risk characteristics of assets, are also examined empirically. It turns out that for almost all important market segments for which data are available, the euro immediately became the second most widely used currency for international financing and investment. For the flow of international bond and note issuance it experienced significant growth in 1999, even slightly overtaking the US dollar in the second half of the year. The euro's international investment role appears more static, though, since most of the early external asset supply in euro is actually absorbed by euro area residents.
This paper examines the interaction of G7 real exchange rates with real output and interest rate differentials. Using cointegration methods, we generally find a link between the real exchange rate and the real interest differential. This finding contrasts with the majority of the extant research on the real exchange rate - real interest rate link. We identify a new measure of the equilibrium exchange rate in terms of the permanent component of the real exchange rate that is consistent with the dynamic equilibrium given by the cointegration relation. Furthermore, the presence of cointegration also allows us to identify real, nominal and transitory disturbances with only minimal identifying restrictions. Our findings suggest that persistent deviations of real exchange rates from their equilibrium value can have feedback effects on the underlying fundamentals, hence altering the equilibrium exchange rate itself. This has important implications for the persistence measures of real exchange rates that are reported elsewhere in the literature.
In this study, the firms' choice of the number of bank relationships is analyzed with respect to influential factors such as borrower quality, size and the existence of a close housebank relationship. The number of bank relationships is then used as a proxy to examine whether bank competition is reflected in loan terms. It is shown that the number of bank relationships is foremost determined by borrower size and the existence of a housebank relationship. Loan rate spreads are not affected by the number of bank relationships. However, borrowers with a small number of bank relationships provide more collateral and obtain more credit. These effects are amplified by a housebank relationship. Housebanks obtain more collateral and are ready to take a larger stake in the financing of their customers.
The globalization of markets and companies has increased the demand for internationally comparable, high-quality accounting information resulting from a common set of accounting rules. Despite remarkable harmonization efforts over more than 25 years, accounting regulation is still the domain of national legislators or delegated standard setters. The paper starts by outlining the reasons for this state of affairs and by characterizing the different institutional backgrounds of accounting standard setting in four selected countries as well as on the international level. This is followed by a summary of important international differences in accounting rules and of the empirical evidence on the impact of different rules on the resulting numbers and their relevance to users. It is argued that neither a priori theoretical reasoning nor the evidence from empirical studies provides a convincing basis for choices between accounting regimes, and even less so between specific accounting rules. As there is a broad consensus on the need for one set of global accounting standards, the final sections of the paper discuss currently existing and proposed structures of international accounting standard setting. The evolving new IASC structure is critically evaluated.
This paper discusses the role of the credit rating agencies during the recent financial crises. In particular, it examines whether the agencies can add to the dynamics of emerging market crises. Academics and investors often argue that sovereign credit ratings are responsible for pronounced boom-bust cycles in emerging-markets lending. Using a vector autoregressive system this paper examines how US dollar bond yield spreads and the short-term international liquidity position react to an unexpected sovereign credit rating change. Contrary to common belief and previous studies, the empirical results suggest that an abrupt downgrade does not necessarily intensify a financial crisis.
Bank internal ratings of corporate clients are intended to quantify the expected likelihood of future borrower defaults. This paper develops a comprehensive framework for evaluating the quality of standard rating systems. We suggest a number of principles that ought to be met by 'good rating practice'. These 'generally accepted rating principles' are potentially relevant for the improvement of existing rating systems. They are also relevant for the development of certification standards for internal rating systems, as currently discussed in a consultative paper issued by the Bank for International Settlements in Basle, entitled 'A new capital adequacy framework'. We would very much appreciate any comments by readers that help to develop these rating standards further. Simply send us an E-mail, or give us a call.
This paper measures the economy-wide impact of bank distress on the loss of relationship benefits. We use the near-collapse of the Norwegian banking system during the period 1988 to 1991 to measure the impact of bank distress announcements on the stock prices of firms maintaining a relationship with a distressed bank. We find that although banks experience large and permanent downward revisions in their equity value during the event period, firms maintaining relationships with these banks face only small and temporary changes, on average, in stock price. In other words, the aggregate impact of bank distress on the real economy appears small. We analyze the cross-sectional variation in firm abnormal returns and find that firms that maintain international bank relationships suffer more upon announcement of bank distress.
This paper presents evidence that spillovers through shifts in bank lending can help explain the pattern of contagion. To test the role of bank lending in transmitting currency crises, we examine a panel of data on capital flows to 30 emerging markets disaggregated by 11 banking centers. In addition, we study a cross-section of emerging markets for which we construct a number of measures of competition for bank funds. For the Mexican and Asian crises, we find that the degree to which countries compete for funds from common bank lenders is a fairly robust predictor of both disaggregated bank flows and the incidence of a currency crisis. In the Russian crisis, the common bank lender helps to predict the incidence of contagion, but there is also evidence of a generalized outflow from all emerging markets. We test extensively for robustness to sample, specification and definition of the common bank lender effect. Overall, our findings suggest that spillovers through banking centers may be more important in explaining contagion than similarities in macroeconomic fundamentals and even than trade linkages.
For some time now the buzzword 'transparency' has been bandied about in the media almost daily. For example, calls were made for greater transparency in the financial system in connection with developments in the Asian financial markets. But the call for greater transparency goes far beyond the financial markets. It is now regarded as a necessary part of "good governance" demanded of all economic policy makers. As the World Bank's chief economist Joseph Stiglitz put it: 'No one would dare say that they were against transparency (....): It would be like saying you were against motherhood or apple pie.' This paper focuses on transparency in monetary policy, in particular with respect to the European System of Central Banks.
This study uses Markov-switching models to evaluate the informational content of the term structure as a predictor of recessions in eight OECD countries. The empirical results suggest that for all countries the term spread is sensibly modelled as a two-state regime-switching process. Moreover, our simple univariate model turns out to be a filter that accurately transforms term spread changes into turning point predictions. The term structure is confirmed to be a reliable recession indicator. However, the results of probit estimations show that the Markov-switching filter does not significantly improve the forecasting ability of the spread.
Modeling short-term interest rates as following regime-switching processes has become increasingly popular. Theoretically, regime-switching models are able to capture rational expectations of infrequently occurring discrete events. Technically, they allow for potential time-varying stationarity. After discussing both aspects with reference to the recent literature, this paper provides estimations of various univariate regime-switching specifications for the German three-month money market rate and bivariate specifications additionally including the term spread. However, the main contribution is a multi-step out-of-sample forecasting competition. It turns out that forecasts are improved substantially when allowing for state-dependence. Particularly, the informational content of the term spread for future short rate changes can be exploited optimally within a multivariate regime-switching framework.
Collateral, default risk, and relationship lending : an empirical study on financial contracting
(2000)
This paper provides further insights into the nature of relationship lending by analyzing the link between relationship lending, borrower quality and collateral as a key variable in loan contract design. We used a unique data set based on the examination of credit files of five leading German banks, thus relying on information actually used in the process of bank credit decision-making and contract design. In particular, bank internal borrower ratings serve to evaluate borrower quality, and the bank's own assessment of its housebank status serves to identify information-intensive relationships. Additionally, we used data on workout activities for borrowers facing financial distress. We found no significant correlation between ex ante borrower quality and the incidence or degree of collateralization. Our results indicate that the use of collateral in loan contract design is mainly driven by aspects of relationship lending and renegotiations. We found that relationship lenders or housebanks do require more collateral from their debtors, thereby increasing the borrower's lock-in and strengthening the banks' bargaining power in future renegotiation situations. This result is strongly supported by our analysis of the correlation between ex post risk, collateral and relationship lending since housebanks do more frequently engage in workout activities for distressed borrowers, and collateralization increases workout probability. First version: March 12, 1999
The Designated Sponsors (Betreuer) on the Neuer Markt are intended to increase trading efficiency by providing additional liquidity. This study examines the liquidity contribution of the Designated Sponsors in two consecutive years. Their share of market turnover declined markedly over the observation period. Their order limits and volumes, however, increased market depth. Furthermore, the Designated Sponsors contributed to liquidity both in illiquid stocks and in illiquid market phases.
We analyze the role of different kinds of primary and secondary market interventions for the government's goal of maximizing its revenues from public bond issuances. Some of these interventions can be thought of as characteristics of a "primary dealer system". Overall, we find that a primary dealer system with a restricted number of participants may be useful in the case of only restricted competition among sufficiently heterogeneous market makers. We further show that minimum secondary market turnover requirements for primary dealers with respect to bond sales generally seem more adequate than the definition of maximum bid-ask spreads or minimum turnover requirements with respect to bond purchases. Moreover, official price management operations are not able to completely substitute for a system of primary dealers. Finally, there is in general no reason for monetary compensation to primary dealers, since they already possess some privileges with respect to public bond auctions.
Frankfurt's position in the international competition among financial centers : a resource-based analysis
(1999)
This article presents the approach and the main results of an international financial-center study carried out in 1998 on behalf of the Center for Financial Studies (Frankfurt am Main). The aim of the study was to draw conclusions about Frankfurt's position in the international competition among financial centers from an analysis of important financial-center resources and the interactions between these resources. From a resource-based view, it could be shown that, on the one hand, Frankfurt has considerable competitive disadvantages relative to the financial centers New York and London, which can hardly be made up in the short to medium term. On the other hand, Frankfurt has competitive advantages over the financial centers Paris and Tokyo, which from Frankfurt's perspective are defensible in the short to medium term. In contrast to Frankfurt's competitive disadvantages vis-à-vis the Anglo-Saxon financial centers, however, Frankfurt's competitive advantages over Paris and Tokyo are considerably smaller.
This paper considers the desirability of the observed tendency of central banks to adjust interest rates only gradually in response to changes in economic conditions. It shows, in the context of a simple model of optimizing private-sector behavior, that such inertial behavior on the part of the central bank may indeed be optimal, in the sense of minimizing a loss function that penalizes inflation variations, deviations of output from potential, and interest-rate variability. Sluggish adjustment characterizes an optimal policy commitment, even though no such inertia would be present in the case of a reputationless (Markovian) equilibrium under discretion. Optimal interest-rate feedback rules are also characterized, and shown to involve substantial positive coefficients on lagged interest rates. This provides a theoretical explanation for the numerical results obtained by Rotemberg and Woodford (1998) in their quantitative model of the U.S. economy.
This paper analyses two reasons why inflation may interfere with price adjustment so as to create inefficiencies in resource allocation at low rates of inflation. The first argument is that the higher the rate of inflation, the lower the likelihood that downward nominal rigidities are binding (the Tobin argument), which implies a non-linear Phillips curve. The second argument is that low inflation strengthens nominal price rigidities and thus impairs the flexibility of the price system, resulting in a less efficient resource allocation. It is argued that inflation can be too low from a welfare point of view due to the presence of nominal rigidities, but the quantitative importance is an open question.
As inflation rates in the United States decline, analysts are asking whether there are economic reasons to hold the rates at levels above zero. Previous studies of whether inflation "greases the wheels" of the labor market ignore inflation's potential for disrupting wage patterns in the same market. This paper outlines an institutionally based model of wage-setting that allows the benefits of inflation (downward wage flexibility) to be separated from disruptive uncertainty about the inflation rate (undue variation in relative prices). Our estimates, using a unique 40-year panel of wage changes made by large Midwestern employers, suggest that low rates of inflation do help the economy to adjust to changes in labor supply and demand. However, when inflation's disruptive effects are balanced against this benefit, the labor market justification for pursuing a positive long-term inflation goal effectively disappears.
Since 1990, a number of countries have adopted inflation targeting as their declared monetary strategy. Interpretations of the significance of this movement, however, have differed widely. To some, inflation targeting mandates the single-minded, rule-like pursuit of price stability without regard for other policy objectives; to others, inflation targeting represents nothing more than the latest version of cheap talk by central banks unable to sustain monetary commitments. Advocates of inflation targeting, including the adopting central banks themselves, have expressed the view that the efforts at transparency and communication in the inflation targeting framework grant the central bank greater short-run flexibility in pursuit of its long-run inflation goal. This paper assesses whether the talk that inflation targeting central banks engage in matters to central bank behavior, and which interpretation of the strategy is consistent with that assessment. We identify five distinct interpretations of inflation targeting, consistent with various strands of the current literature, and identify those interpretations as movements between various strategies in a conventional model of time-inconsistency in monetary policy. The empirical implications of these interpretations are then compared to the response of central banks to movements in inflation of three countries that adopted inflation targets in the early 1990s: The United Kingdom, Canada, and New Zealand. For all three, the evidence shows a break in the behavior of inflation consistent with a strengthened commitment to price stability. In no case, however, is there evidence that the strategy entails a single-minded pursuit of the inflation target. 
For the U.K., the results are consistent with the successful implementation of the optimal state-contingent rule, thereby combining flexibility and credibility; similarly, New Zealand's improved inflation performance was achieved without a discernible increase in counter-inflationary conservatism. The results for Canada are less clear, perhaps reflecting the broader fiscal and international developments affecting the Canadian economy during this period.
Derivatives usage in risk management by U.S. and German non-financial firms : a comparative survey
(1998)
This paper is a comparative study of the responses to the 1995 Wharton School survey of derivative usage among US non-financial firms and a 1997 companion survey on German non-financial firms. It is not a mere comparison of the results of both studies but a comparative study, drawing a comparable subsample of firms from the US study to match the sample of German firms on both size and industry composition. We find that German firms are more likely to use derivatives than US firms, with 78% of German firms using derivatives compared to 57% of US firms. Aside from this higher overall usage, the general pattern of usage across industry and size groupings is comparable across the two countries. In both countries, foreign currency derivative usage is most common, followed closely by interest rate derivatives, with commodity derivatives a distant third. Usage rates across all three classes of derivatives are higher for German firms than US firms. In contrast to the similarities, firms in the two countries differ notably on issues such as the primary goal of hedging, their choice of instruments, and the influence of their market view when taking derivative positions. These differences appear to be driven by the greater importance of financial accounting statements in Germany than the US and stricter German corporate policies of control over derivative activities within the firm. German firms also indicate significantly less concern about derivative related issues than US firms, which appears to arise from a more basic and simple strategy for using derivatives. Finally, among the derivative non-users, German firms tend to cite reasons suggesting derivatives were not needed whereas US firms tend to cite reasons suggesting a possible role for derivatives, but a hesitation to use them for some reason.
The purpose of the paper is to survey and discuss inflation targeting in the context of monetary policy rules. The paper provides a general conceptual discussion of monetary policy rules, attempts to clarify the essential characteristics of inflation targeting, compares inflation targeting to other monetary policy rules, and draws some conclusions for the monetary policy of the European System of Central Banks.
Despite the relevance of credit financing for the profit and risk situation of commercial banks, only little empirical evidence on the initial credit decision and the monitoring process exists, due to the lack of appropriate data on bank debt financing. The present paper provides a systematic overview of a data set generated during the Center for Financial Studies research project on "Credit Management", which was designed to fill this empirical void. The data set contains a broad list of variables taken from the credit files of five major German banks. It is a random sample drawn from all customers that have engaged in some form of borrowing from the banks in question between January 1992 and January 1997 and that meet a number of selection criteria. The sampling design and data collection procedure are discussed in detail. Additionally, the project's research agenda is described and some general descriptive statistics of the firms in our sample are provided.
We studied information and interaction processes in six lending relationships between a universal bank and medium sized firms. The study is based on the credit files of the respective firms. If no problems occur in these lending relationships, bank monitoring is based mainly on cheap, retrospective and internal data. In case of distress, more expensive, prospective and external information is used. The level of monitoring and the willingness to renegotiate the lending relationship depends on what the lending officers can learn about the future prospects of the firm from the behaviour of the debtors. We identify both signalling and bonding activities. Such learning from past behaviour seems to allow monitoring at low cost, whereas the direct observation of the firm's investment outlook seems to be very costly. Also, too much knowledge about the firm's investments might leave the bank in a very strong bargaining position and distort investment incentives. Therefore, the traditional view of credit assessment as observation of the quality of a borrower's investment programme needs to be reconsidered.
Shares trading on the Bolsa Mexicana de Valores do not seem to react to company news. Using a sample of Mexican corporate news announcements from the period July 1994 through June 1996, this paper finds that there is nothing unusual about returns, volatility of returns, volume of trade or bid-ask spreads in the event window. This suggests one of five possibilities: our sample size is small; or markets are inefficient; or markets are efficient but the corporate news announcements are not value-relevant; or markets are efficient and corporate news announcements are value-relevant, but they have been fully anticipated; or markets are efficient and corporate news announcements are value-relevant, but unrestricted insider trading has caused prices to fully incorporate the information. The evidence supports the last hypothesis. The paper thus points towards a methodology for ranking emerging stock markets in terms of their market integrity, an approach that can be used with the limited data available in such markets.
No one seems to be neutral about the effects of EMU on the German economy. Roughly speaking, there are two camps: those who see the euro as the advent of a newly open, large, and efficient regime which will lead to improvements in European and in particular German competitiveness, and those who see the euro as a weakening of the German commitment to price stability. From a broader macroeconomic perspective, however, it is clear that EMU is unlikely to directly cause any meaningful change, either for the better in Standort Deutschland or for the worse in German price stability. There is ample evidence that changes in monetary regimes (so long as they do not involve an escape from hyperinflation) induce little change in real economic structures such as labor or financial markets. Regional asymmetries of the sort found in the EU do not tend to translate into monetary differences. Most importantly, there is no good reason to believe that the ECB will behave any differently than the Bundesbank.
Where do we stand in the theory of finance? : a selective overview with reference to Erich Gutenberg
(1998)
For the past 20 years, financial markets research has concerned itself with issues related to the evaluation and management of financial securities in efficient capital markets and with issues of management control in incomplete markets. The following selective overview focuses on key aspects of the theory and empirical evidence on management control under conditions of asymmetric information. The objective is to examine the validity of the recently advanced hypothesis on the myths of corporate control. The present overview is based on Gutenberg's position that there exists a discrete corporate interest, distinct and separate from the interests of the shareholders or other stakeholders. In the third volume of Grundlagen der BWL: Die Finanzen, published in 1969, this position of Gutenberg's is coupled with an appeal for a so-called financial equilibrium to be maintained. Not until recently have models grounded in capital market theory been developed which also allow for a firm's management to exercise autonomy vis-à-vis its stakeholders. This paper was prepared for the Erich Gutenberg centenary conference on December 12 and 13, 1997 in Cologne.
This study examines the relation of bank loan terms such as interest rates, collateral, and lines of credit to borrower risk as defined by the banks' internal credit rating. The analysis is not restricted to a static view; it also incorporates rating transitions and their implications for this relation. Money illusion and phenomena linked to relationship banking emerge as important factors. The results show that riskier borrowers pay higher loan rate premiums and rely more on bank finance. Housebanks obtain more collateral and provide more finance. Owing to money illusion, loan rate premiums are relatively small in times of high market interest rates and relatively high in times of low market interest rates. There was no evidence of an appropriate adjustment of loan terms to rating changes. However, bank market power, represented by a weighted average of the credit rating before and after a rating transition, serves to compensate for low earlier profits caused by interest rate smoothing. JEL Classification: G21.
Banks increasingly recognize the need to measure and manage the credit risk of their loans on a portfolio basis. We address the subportfolio "middle market". Due to their specific lending policy for this market segment, it is an important task for banks to systematically identify regional and industrial credit concentrations and to reduce the detected concentrations through diversification. In recent years, the development of markets for credit securitization and credit derivatives has provided new credit risk management tools. However, in the addressed market segment adverse selection and moral hazard problems are quite severe. A successful application of credit securitization and credit derivatives to managing the credit risk of middle market commercial loan portfolios therefore depends on the development of incentive-compatible structures which solve, or at least mitigate, the adverse selection and moral hazard problems. In this paper we identify a number of general requirements and describe two possible solution concepts.
In only a few years, the European Union will be enlarged by a group of Eastern European states. This enlargement carries both opportunities and risks. The opportunities lie, among other things, in the expansion of markets and mutual trade relations. A prerequisite for this, however, is mutual understanding in the double sense of the word. If people do not understand one another, new opportunities cannot be exploited; if business does not find the right language, it cannot sell anything. Functioning verbal communication is indispensable in all economic, cultural and social spheres, and all the more so in cross-border relations. The enlargement of the EU can lead to an integration of the new candidate countries only if communication among all parties is assured. In the present volume, the authors show that intercultural language competence can be acquired only through language culture and pertinent research that also takes culture-specific connotations into account. Amid the linguistic diversity of a united Europe, such competence will be in greater demand than ever. Which paths bring us closer to this goal, and what possibilities the German language has for meeting growing international language competition, are the questions addressed by the studies published here, which contribute a wealth of proposals, recommendations and suggestions. The papers are results of the forost project "Sprachkultur und Sprachkultivierung in Osteuropa – ein paradigmatischer Vergleich" (language culture and language cultivation in Eastern Europe – a paradigmatic comparison), which addressed this topic within Group III of the research network "Nationale Identität, ethnischer Pluralismus und internationale Beziehungen" (national identity, ethnic pluralism and international relations).
In recent years, the lending business has come under considerable competitive pressure, and bank managers often express concern regarding its profitability vis-à-vis other activities. This paper tries to empirically identify factors that can explain the financial performance of bank lending activities. The analysis is based on the CFS data set collected in 1997 from 200 medium-sized firms. Two regressions are performed: the first is directed at relationships between interest rate premiums and various determining factors; the second aims at detecting relationships between those factors and the occurrence of several types of problems during the course of a credit engagement. Furthermore, the results of both regressions are used to test theoretical hypotheses regarding the impact of certain parameters on credit terms and distress probabilities. The findings are somewhat "puzzling": first, the rating is not as significant as expected; second, credit contracts seem to be priced lower in situations with greater risks; finally, the results do not fully support any of the three hypotheses that are often advanced to describe the role of collateral and covenants in credit contracts.
The German financial market is often characterized as a bank-based system with strong bank-customer relationships. The corresponding notion of a housebank is closely related to the theoretical idea of relationship lending. It is the objective of this paper to provide a direct comparison between housebanks and "normal" banks as to their credit policy. Therefore, we analyze a new data set, representing a random sample of borrowers drawn from the credit portfolios of five leading German banks over a period of five years. We use credit-file data rather than industry survey data and, thus, focus the analysis on information that is directly related to actual credit decisions. In particular, we use bank-internal borrower rating data to evaluate borrower quality, and the bank's own assessment of its housebank status to control for information-intensive relationships.
This paper reviews the factors that will determine the shape of financial markets under EMU. It argues that financial markets will not be unified by the introduction of the euro. National central banks have a vested interest in preserving local idiosyncrasies (e.g. the Wechsels in Germany), and they might be allowed to do so by promoting the use of so-called tier two assets under the common monetary policy. Moreover, a host of national regulations (prudential and fiscal) will make assets expressed in euro imperfect substitutes across borders. Prudential control will also continue to be handled differently from country to country. In the long run these national idiosyncrasies cannot survive competitive pressures in the euro area. The year 1999 will thus see the beginning of a process of unification of financial markets that will be irresistible in the long run, but might still take some time to complete.
Various methods for measuring the risk attitudes of individuals are presented and critically discussed. Among others, self-assessments and experimentally oriented procedures are considered. The compilation is aimed in particular at researchers and practitioners looking for applicable procedures for measuring risk attitudes.
A value-at-risk limit is defined as a DM amount that actual trading losses within a given period may exceed only with a small probability. Since a bank's board typically sets annual value-at-risk limits, while trades in the trading area are concluded for a short-term planning horizon (assumed here to be one day), it must be clarified how annual limits can be converted into daily limits and how gains and losses realized during the year can be counted against the limits. Based on conversion via the square-root-of-T formula, three procedures for determining the daily limit can be distinguished: 1. Realized gains and losses are not counted (rigid limit). 2. When losses occur, the daily limit for the remaining period is reduced; realized gains reverse the reductions (loss-limiting limit). 3. Daily limits are adjusted for both gains and losses, which may widen the trader's scope of action (dynamic limit). The three limits are weighed against each other in a simulation model, assuming that a trader trades only a single stock and anticipates the direction of prices in 55% of cases. The simulation results are largely identical under the assumed return processes (geometric Brownian motion and actual returns of 77 German stocks for the period from 01.01.1974 to 31.12.1995). The dynamic limit produces markedly higher average results than the rigid limit and the loss-limiting limit. The annual limit is exceeded only under the rigid procedure, though the frequency is considerably lower than the admissible probability of 1%.
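The square-root-of-T conversion underlying the three limit schemes can be sketched as follows. This is a minimal illustration, not the paper's simulation model; the function names and the assumption of 250 trading days per year are hypothetical:

```python
import math

TRADING_DAYS = 250  # assumed number of trading days per year


def daily_limit(annual_limit: float, days: int = TRADING_DAYS) -> float:
    """Convert an annual value-at-risk limit into a daily limit via the
    square-root-of-T formula: L_day = L_year / sqrt(T)."""
    return annual_limit / math.sqrt(days)


def dynamic_limit(annual_limit: float, realized_pnl: float,
                  days: int = TRADING_DAYS) -> float:
    """Dynamic variant: realized gains enlarge, and realized losses
    shrink, the remaining annual limit before conversion."""
    return max(0.0, annual_limit + realized_pnl) / math.sqrt(days)


# A 1,000,000 DM annual limit yields a daily limit of roughly 63,245 DM;
# after a realized loss of 100,000 DM the dynamic daily limit shrinks.
```

The rigid limit corresponds to calling `daily_limit` unchanged all year; the loss-limiting variant would pass only negative `realized_pnl` (capped at zero when gains restore earlier reductions).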
In this paper we analyze the relation between fund performance and market share. Using three performance measures, we first establish that significant differences exist in the risk-adjusted returns of the funds in the sample. Thus, investors may react to past fund performance when making their investment decisions. We estimate a model relating past performance to changes in market share and find that past performance has a significant positive effect on market share. The results of a specification test indicate that investors react to risk-adjusted returns rather than to raw returns. This suggests that investors may be more sophisticated than is often assumed.
From the mid-seventies on, the central banks of most major industrial countries switched to monetary targeting. The Bundesbank was the first central bank to take this step, making the switch at the end of 1974. This changeover to monetary targeting was due to the difficulties which the Bundesbank - like other central banks - was facing in pursuing its original strategy, and which came to a head in the early seventies, when inflation escalated. A second factor was the collapse of the Bretton Woods system of fixed exchange rates, which created the necessary scope for national monetary targeting. Finally, the advance of monetarist ideas fostered the explicit turn towards monetary targets, although the Bundesbank did not implement these in a mechanistic way. Whereas the Bundesbank has adhered to its policy of monetary targeting up to the present, nowadays monetary targeting plays only a minor role worldwide. Many central banks have switched to the strategy of direct inflation targeting. Others favour a more discretionary approach or a policy which is geared to the exchange rate. In the academic debate, monetary targeting is often presented as an outdated approach which has long since lost its basis of stable money demand. These findings give rise to a number of questions: Has monetary targeting actually become outdated? Which role is played by the concrete design of this strategy, and, against this background, how easily can it be transferred to European monetary union? This paper aims to answer these questions, drawing on the particular experience which the Bundesbank has gained of monetary targeting. It seems appropriate to discuss monetary targeting by using a specific example, since this notion is not very precise. This applies, for example, to the money definition used, the way the target is derived, the stringency applied in pursuing the target and the monetary management procedure.
In this speech (given at the CFS research conference on the Implementation of Price Stability, held at the Bundesbank in Frankfurt am Main, 10-12 September 1998), John Vickers discusses theoretical and practical issues relating to inflation targeting as used in the United Kingdom during the past six years. After outlining the role of the Bank's Monetary Policy Committee, he considers the Committee's task from a theoretical perspective, before discussing the concept and measurement of domestically generated inflation.
Credit Unions are cooperative financial institutions specializing in the basic financial needs of certain groups of consumers. A distinguishing feature of credit unions is the legal requirement that members share a common bond. This organizing principle recently became the focus of national attention as the Supreme Court and the U.S. Congress took opposite sides in a controversy regarding the number of common bonds that could co-exist within the membership of a single credit union. Despite its importance, little research has been done into how common bonds affect how credit unions actually operate. We frame the issues with a simple theoretical model of credit-union formation and consolidation. To provide intuition into the flexibility of multiple-group credit unions in serving members, we simulate the model and present some comparative-static results. We then apply a semi-parametric empirical model to a large dataset drawn from federally chartered occupational credit unions in 1996 to investigate the effects of common bonds. Our results suggest that credit unions with multiple common bonds have higher participation rates than credit unions that are otherwise similar but whose membership shares a single common bond.
In this paper, I analyse the conduct of business rules included in the Directive on Markets in Financial Instruments (MiFID), which has replaced the Investment Services Directive (ISD). These rules, in addition to being part of the regulation of investment intermediaries, operate as contractual standards in the relationships between intermediaries and their clients. While the need to harmonise such rules is generally acknowledged, in the present paper I ask whether the Lamfalussy regulatory architecture, which governs securities lawmaking in the EU, has in some way improved regulation in this area. In section II, I examine the general aspects of the Lamfalussy process. In section III, I critically analyse the MiFID's provisions on conduct of business obligations, best execution of transactions and client order handling, taking into account the new regime of trade internalisation by investment intermediaries and the ensuing competition between these intermediaries and market operators. In section IV, I draw some general conclusions on the re-regulation carried out under the Lamfalussy regulatory structure and its limits. In this section, I make a few preliminary comments on the relevance of conduct of business rules to contract law, the ISD rules of conduct and the role of harmonisation.
The legal assessment of the use of central bank profits lies at the intersection of: 1) monetary law, 2) fiscal constitutional law, and 3) fiscal policy. Legal concerns arise essentially from the constitutional requirements for state financing and from the guarantee of independence of the European Central Bank and the Bundesbank. The relevant sources of law are both European Union law and German fiscal constitutional law, supplemented by the ordinary federal budget law.
The law of so-called equity-replacing shareholder loans has increasingly come under criticism in the recent past. Based on a critical analysis of the lex lata, the following contribution develops a proposal for simplifying the rules on shareholder debt financing in times of corporate crisis.
This paper proves the correctness of Nocker's method of strictness analysis, implemented for Clean, which is an effective way to perform strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nocker's strictness analysis. We reformulate Nocker's strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a non-deterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
Work on proving congruence of bisimulation in functional programming languages often refers to [How89, How96], where Howe gave a highly general account of this topic in terms of so-called "lazy computation systems". Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how the original work of Howe can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naïve bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
In this paper we demonstrate how to relate the semantics given by the non-deterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO actions in Haskell. We sketch the investigations of [Sab03b], where many of the program transformations performed by the compiler have been shown to be correct w.r.t. the FUNDIO semantics. This enabled us to achieve a FUNDIO-compatible Haskell compiler by turning off the not yet investigated transformations and the small set of incompatible transformations. With this compiler, Haskell programs which use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language FUNDIO, which is also a call-by-need lambda calculus, is investigated. The syntax of FUNDIO has case, letrec, constructors and an IO interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences is defined, and a context lemma is proved. This enables the study of the semantics of FUNDIO and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams makes it possible to show that a considerable set of program transformations is correct. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and shown to be correct w.r.t. contextual equivalence. Correctness of strictness optimizations also justifies correctness of parallel evaluation. Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output and also to provide a useful semantics for this combination. It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO. The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IOs is not fixed.
Context unification is a variant of second-order unification. It can also be seen as a generalization of string unification to tree unification. Currently it is not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable; however, the previous algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: (i) for every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle; (ii) the exponent of periodicity is 2^O(n), which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows decidability of stratified context unification as well as of satisfiability of one-step rewrite constraints.
It is well known that first-order unification is decidable, whereas second-order and higher-order unification are undecidable. Bounded second-order unification (BSOU) is second-order unification under the restriction that only a bounded number of holes is permitted in the instantiating terms for second-order variables; the size of the instantiation, however, is not restricted. In this paper, a decision algorithm for bounded second-order unification is described. This is the first non-trivial decidability result for second-order unification where the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second-order unification (MSOU), a specialization of BSOU, is in Σ₂ᵖ. Since MSOU is related to word unification, this compares favourably to the best known upper bound NEXPTIME (and also to the announced upper bound PSPACE) for word unification. This supports the claim that bounded second-order unification is easier than context unification, whose decidability is currently an open question.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi file, just like TEX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It was implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered a status report on the implementation as well as a reference manual for the resulting system.
The extraction of strictness information is an indispensable element of an efficient compilation of lazy functional languages like Haskell. Based on the method of abstract reduction, we have developed an efficient strictness analyser for a core language of Haskell. It is completely written in Haskell and compares favourably with known implementations. The implementation is based on the G#-machine, an extension of the G-machine that has been adapted to the needs of abstract reduction.
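The property that a strictness analyser approximates can be stated semantically: a function f is strict in an argument if forcing f's result always forces that argument. The following minimal sketch tests this property dynamically (it is an illustration of the concept, not the static analysis by abstract reduction the paper describes; all names are hypothetical):

```python
class Bottom(Exception):
    """Raised when a 'non-terminating' (undefined) argument is demanded."""


def is_strict(f, n, i):
    """Semantic strictness test: f takes n thunks (zero-argument
    callables); pass a bottom thunk at position i and defined thunks
    elsewhere, then demand the result."""
    def bot():
        raise Bottom()
    args = [(lambda: 0)] * n
    args[i] = bot
    try:
        f(*args)        # demand the result
        return False    # result obtained without argument i
    except Bottom:
        return True     # evaluating the result forced argument i


# A two-argument 'const' demands only its first argument:
const = lambda x, y: x()
# is_strict(const, 2, 0) -> True ; is_strict(const, 2, 1) -> False
```

A static analyser must compute this answer without running the program, which is why abstract domains and abstract reduction are needed; the dynamic test above only witnesses strictness for one concrete evaluation.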
This paper describes context analysis, an extension of strictness analysis for lazy functional languages. In particular, it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus is presented, based on abstract reduction, which, given the abstract values for the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
A partial rehabilitation of side-effecting I/O : non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done such that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics is a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
Automatic termination proofs for functional programming languages are an often challenging problem. Most work in this area has been done on strict languages, where orderings for the arguments of recursive calls are generated. In lazily evaluated languages, the arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point where sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. Analysis is performed on a higher-order, polymorphically typed language, and termination of higher-order functions can be proved as well. The calculus can be used to derive information on a wide range of different notions of termination.
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem decidable using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, and hence that D-matching is NP-complete.
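The two distributivity axioms can be read as one-step rewrite rules applied left to right. The sketch below is a hypothetical illustration (terms encoded as nested tuples, not the paper's transformation algorithm):

```python
def rewrite_d(term):
    """Apply one of the two-sided distributivity axioms once, at the root:
       x*(y+z) -> x*y + x*z   and   (x+y)*z -> x*z + y*z.
    Terms are atoms (strings) or tuples ('*', a, b) / ('+', a, b)."""
    if isinstance(term, tuple):
        op, a, b = term
        if op == '*':
            if isinstance(b, tuple) and b[0] == '+':
                _, y, z = b
                return ('+', ('*', a, y), ('*', a, z))
            if isinstance(a, tuple) and a[0] == '+':
                _, x, y = a
                return ('+', ('*', x, b), ('*', y, b))
    return term  # no axiom applies at the root


# ('*', 'x', ('+', 'y', 'z'))  rewrites to
# ('+', ('*', 'x', 'y'), ('*', 'x', 'z'))
```

Deciding equality of two terms modulo D (the word problem) amounts to comparing their normal forms under exhaustive application of these rules at all positions; the polynomial-time result mentioned in the abstract concerns exactly this problem.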
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and furthermore using monadic second-order variables that may stand for a term with one hole, and parametric terms. We consider stratified systems, where for every first-order and second-order variable, the string of second-order variables on the path from the root of a term to every occurrence of this variable is always the same. It is shown that unification of stratified second-order terms is decidable by describing a nondeterministic decision algorithm that eventually uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and describe conditions for termination in the general case.