We investigate the role of the trade channel as an important determinant of a country's current account position and of the degree of business cycle synchronization with the rest of the world by comparing the predictions of two types of DGE models. It is shown that the behavior of a country's external balance and the international transmission of shocks depend, among other things, on two factors: i) the magnitude of trade interdependence, and ii) the degree of substitutability between importable and domestically-produced goods. Using time series data on bilateral trade flows, we estimate the magnitude of trade interdependence and the elasticity of substitution between importable and domestic goods for the G7 countries. Given these estimates, idiosyncratic supply shocks potentially induce changes in the current account and foreign output that vary in direction and magnitude across G7 countries. The relationship between the magnitude of foreign trade and import substitutability on the one hand and various correlation measures on the other is examined empirically in a cross-sectional dimension. First Draft, July 2001. Final Draft, November 2001. Classification: E32, F41
Industrial production in G7 countries is assumed to be driven by two exogenous disturbances. These disturbances are identified in a VAR model such that they can be interpreted as country-specific and global supply shocks. The dynamic properties of the model are analyzed and the relative importance of each shock is measured. It is shown that the VAR model matches most of the theoretical predictions of standard intertemporal open-economy models. The identified structural disturbances are analyzed with regard to their impact on the current account and investment. First Draft, October 2000. Final Draft, January 2001. This paper is based on the second chapter of my doctoral dissertation at the University of Frankfurt. Classification: E32, F41
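To make the identification step concrete, the following is a minimal sketch of how two supply shocks could be separated with a long-run restriction in the spirit of Blanchard and Quah, where the country-specific shock is assumed to have no long-run effect on world output. The data file and series names are hypothetical, and the paper's exact scheme may differ.

```python
# Sketch: long-run identification of a global and a country-specific supply
# shock in a bivariate VAR. Data file and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

y = pd.read_csv("ip_growth.csv", index_col=0)   # columns: world, country
res = VAR(y[["world", "country"]]).fit(maxlags=4, ic="aic")

A1 = sum(res.coefs)                        # sum of lag coefficient matrices
F = np.linalg.inv(np.eye(2) - A1)          # long-run multiplier (I - A(1))^-1
Sigma = np.asarray(res.sigma_u)            # reduced-form residual covariance

# Choose B with B B' = Sigma so that the long-run matrix F B is lower
# triangular: shock 2 (country-specific) then has no long-run effect on
# variable 1 (world industrial production).
P = np.linalg.cholesky(F @ Sigma @ F.T)
B = np.linalg.solve(F, P)
shocks = res.resid.values @ np.linalg.inv(B).T   # structural innovations
```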
Our study examines the existence and the nature of private benefits of control in Germany. We do this by analyzing initial public offerings of founding-family owned firms and tracking their fate for up to ten years following the IPO. Our sample is a uniquely rich data set of 105 IPOs of family-owned firms floated between 1970 and 1991 on German stock exchanges. We find, first, that even ten years after the IPO, family owners, in the cross section, continue to exercise considerable control. Second, we show that substantial private benefits of control exist in these firms and, to our knowledge for the first time, we empirically measure the nature of these private benefits. We also show that the separation of cash flow rights and voting rights via the issuance of dual-class shares is used to create controlling shareholder structures in order to preserve these private benefits. Third, we find a puzzling and significant underperformance of dual-class share IPOs, which can be explained by ex ante unanticipated expropriation of minority shareholders due to poor investor protection in Germany. This version: 4th draft, November 2001. This paper has been presented at the European Financial Management Association 2001 Meeting in Lugano, the CEPR Conference "The Firm and its Stakeholders" in Courmayeur, the Fall 2000 WAFA Conference in Washington, D.C., the European Economic Association 2000 Conference in Bozen, the ABN AMRO International Conference on IPOs in Amsterdam, the SIRIF Conference on Corporate Governance in Edinburgh, the Financial Center Seminar at the Tinbergen Institute Rotterdam, and the G-Forum on Entrepreneurship Research in Vienna. Classification: G14, G32, G15
Against the difficult background of analysing aggregated data, this paper estimates core inflation in the euro area by means of the structural vector autoregressive (SVAR) approach. We demonstrate that the HICP sometimes appears to be a misleading indicator for monetary policy in the euro area. We furthermore compare our core inflation measure to the widespread "ex food and energy" measure often referred to by the ECB. In addition, we provide evidence that our measure is a coincident indicator of HICP inflation. Assessing the robustness of our core inflation measure, we cautiously conclude that it appears to be quite reliable. This version: April 2002. Revised edition published in: Allgemeines Statistisches Archiv, Vol. 87, 2003. Classification: C32, E31
Recent empirical research has found that the strong short-term relationship between monetary aggregates and US real output and inflation, documented in the classical study by M. Friedman and Schwartz, has mostly disappeared since the early 1980s. In the light of the information value approach of B. Friedman and Kuttner (1992), we reevaluate the vanishing relationship between US monetary aggregates and these macroeconomic fundamentals by taking into account the international currency feature of the US dollar. Using official US data on foreign flows constructed by Porter and Judson (1996), we find that domestic money (the currency component of M1 corrected for foreign holdings of dollars) contains valuable information about future movements of US real output and inflation. The statistical evidence provided here thus suggests that Friedman and Schwartz's stylized facts can be reestablished once the focus of analysis returns to domestic monetary aggregates. This version: August 2001. Classification: E3, E4, E5
We use consumer price data for 81 European cities (in Germany, Austria, Finland, Italy, Spain, Portugal and Switzerland) to study the impact of the introduction of the euro on goods market integration. Employing both aggregated and disaggregated consumer price index (CPI) data we confirm previous results which showed that the distance between European cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of relative prices is much higher for two cities located in different countries than for two equidistant cities in the same country. Under the EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and border still matter for intra-European relative price volatility.
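The border effect described here is typically estimated with a regression of relative-price volatility on log distance and a cross-border dummy, in the spirit of Engel and Rogers. A minimal sketch, with hypothetical file and column names:

```python
# Sketch of a border regression: the volatility of the relative price
# between two cities is explained by log distance and a cross-border dummy.
# Data layout and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per city pair: volatility of the relative price, distance in km,
# and an indicator for the pair straddling a national border.
pairs = pd.read_csv("city_pairs.csv")
pairs["log_dist"] = np.log(pairs["distance_km"])

model = smf.ols("rel_price_vol ~ log_dist + border", data=pairs).fit()
print(model.summary())
# A positive, significant coefficient on `border` is the border effect;
# its decline after 1999 would indicate EMU-driven integration.
```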
This paper investigates how US and European equity markets affected the US dollar-euro rate from the introduction of the euro through April 2001. More specifically, the following questions are raised: First, do movements in the stock market help to explain movements in the exchange rate? Second, how large is the impact of stock market returns on the exchange rate? And third, does the exchange rate respond differently to different equity markets? The investigation is carried out using daily data within a vector autoregression (VAR) model. Surprisingly, positive returns on US equities as well as on European stock markets had a negative impact on the US dollar-euro rate. Quantitatively, the US dollar-euro rate seems to be influenced more strongly by European stock markets than by US stock markets. Further, there is evidence of a somewhat weaker impact of technology stock indices on the US dollar-euro rate compared with broader market indices. Finally, the long-term interest rate differential seems to contain more information about exchange rate movements than the short-term interest rate differential. This version: August 2001. Classification: C32, F31
This paper uses a unique data set from the credit files of six leading German banks to provide some empirical insights into the rating systems they use to classify corporate borrowers. Against the background of the New Basle Capital Accord, which allows banks to use their internal rating systems to compute their minimum capital requirements, the relations between potential risk factors, rating decisions and default probabilities are analysed to answer the question of whether German banks are ready for the internal ratings-based approach. The results suggest that the answer is not affirmative at this stage. We find that internal rating systems are not comparable across banks; moreover, the factors that determine credit ratings differ from those that determine default probabilities. Classification: G21, G33, G38
In the recent theoretical literature on lending risk, the coordination problem in multi-creditor relationships has been analyzed extensively. We address this topic empirically, relying on a unique panel data set that includes detailed credit-file information on distressed lending relationships in Germany. In particular, it includes information on creditor pools, a legal institution aimed at coordinating lender interests in borrower distress. We report three major findings. First, the existence of creditor pools increases the probability of workout success. Second, the results are consistent with coordination costs being positively related to pool size. Third, the major determinants of pool formation are found to be the number of banks, the distribution of lending shares, and the severity of the distress shock.
In recent years, new methods and models have been developed to quantify credit risk on a portfolio basis. CreditMetrics(tm), CreditRisk+ and CreditPortfolio(tm) are among the best known, and many others are similar to them. At first glance they are quite different in their approaches and methodologies. This study focuses on a comparison of these models, especially with regard to their applicability to typical middle-market loan portfolios. The analysis shows that differences in the results of applying the models to a given loan portfolio are mainly due to different approaches to approximating default correlations. This is especially true for typically non-rated medium-sized counterparties. The distributional assumptions and solution techniques of the models, by contrast, are more or less compatible.
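How strongly default-correlation assumptions drive portfolio results can be illustrated with a generic one-factor simulation. This is a toy sketch, not a reimplementation of any of the named vendor models, and all parameter values are made up:

```python
# Toy one-factor default model: the assumed asset correlation rho largely
# determines the tail of the loss distribution, mirroring the point that
# correlation assumptions drive cross-model differences.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_loans, n_sims = 250, 20_000
pd_default, lgd = 0.02, 0.45
c = norm.ppf(pd_default)        # default threshold for the asset value

def loss_quantile(rho, q=0.999):
    z = rng.standard_normal((n_sims, 1))           # systematic factor
    eps = rng.standard_normal((n_sims, n_loans))   # idiosyncratic noise
    assets = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
    loss_rate = lgd * (assets < c).mean(axis=1)    # loss per scenario
    return np.quantile(loss_rate, q)

for rho in (0.05, 0.15, 0.30):
    print(f"rho={rho:.2f}  99.9% loss rate = {loss_quantile(rho):.4f}")
```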
This paper shows that emerging market eurobond spreads after the Asian crisis can be almost completely explained by market expectations about macroeconomic fundamentals and international interest rates. Contrary to the claim that emerging market bond spreads are driven by market variables such as stock market volatility in the developed countries, it is found that such variables did not play a significant role after the Asian crisis. Using panel data techniques, it is shown that the determinants of bond spreads can be divided into long-term structural variables and medium-term variables which explain month-to-month changes in bond spreads. The relevant medium-term variables identified are "consensus forecasts" of real GDP growth and inflation, and international interest rates. The long-term structural factors do not explicitly enter the model and show up as fixed or random country-specific effects. These intercepts are highly correlated with the countries' credit ratings.
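A hedged sketch of the kind of panel specification described, with country fixed effects standing in for the long-term structural factors; all variable names and the data file are hypothetical:

```python
# Sketch of a panel regression of bond spreads on medium-term fundamentals
# with country fixed effects; variable names are hypothetical.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("spreads_panel.csv")
df = df.set_index(["country", "month"])      # entity/time MultiIndex

# EntityEffects absorbs the long-term structural factors as
# country-specific intercepts, as in the paper's fixed-effects variant.
mod = PanelOLS.from_formula(
    "log_spread ~ gdp_forecast + infl_forecast + us_rate + EntityEffects",
    data=df,
)
print(mod.fit(cov_type="clustered", cluster_entity=True))
```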
This paper analyzes a comprehensive data set of 160 non-venture-backed, 79 venture-backed and 61 bridge-financed companies going public at Germany's Neuer Markt between March 1997 and March 2002. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary, or rather competing, roles of venture capitalists and underwriters in certifying the quality of a company when it goes public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to reduced ex-ante uncertainty. This analysis provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non-VC-backed IPOs.
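A minimal sketch of how the underpricing comparison could be computed, with hypothetical column names:

```python
# Sketch: first-day underpricing by backing type and a two-sample test of
# the VC-backed vs. non-VC-backed difference. Column names are hypothetical.
import pandas as pd
from scipy import stats

ipos = pd.read_csv("neuer_markt_ipos.csv")
ipos["initial_return"] = ipos["first_close"] / ipos["offer_price"] - 1.0

print(ipos.groupby("backing")["initial_return"].agg(["mean", "median", "count"]))

vc = ipos.loc[ipos["backing"] == "vc", "initial_return"]
non_vc = ipos.loc[ipos["backing"] == "none", "initial_return"]
t, p = stats.ttest_ind(vc, non_vc, equal_var=False)  # Welch t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```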
This paper thoroughly examines the Chilean pension reform, first giving an overview of the mandatory saving plan, the relevant institutions, and the rules for the transition from the old to the new system. The main part of the paper contains a critical evaluation of the reform: in particular, the macroeconomic performance with respect to capital formation and growth is assessed, and the effects on the savings rate as well as on rates of return and the labor market are discussed. Furthermore, the development of capital markets is reviewed. A short critique is presented with respect to intergenerational distribution and risk sharing as well as the social consequences. This paper is the result of a CFS sponsored research project. A preliminary version was presented at the meeting of the committee of Social Policy of the Verein für Socialpolitik, May 1999, and at the 55th Congress of the IIPF, 23-26 August 1999, in Moscow.
This paper provides empirical evidence on initial public offerings (IPOs) by investigating the pricing and long-run performance of IPOs using a unique data set collected on the German capital market before World War I. Our findings indicate that underpricing of IPOs existed, but decreased significantly over time in our sample. Employing a mixture-of-distributions approach, we also find evidence of price stabilization of IPOs. Concerning long-run performance, investors who bought their shares in the early after-market and held them for more than three years experienced significantly lower returns than the respective industry as a whole. Earlier versions of this paper were presented at the ABN-AMRO Conference on IPOs in Amsterdam, the Annual Meetings of the European Finance Association, the Annual Meetings of the Verein für Socialpolitik, the IX Tor Vergata International Conference on Banking and Finance in Rome, and at Johann Wolfgang Goethe-University in Frankfurt.
This paper examines empirically whether the presence of foreign banks and a liberal trade regime with regard to financial services can contribute to a stabilization of capital flows to emerging markets. Since foreign banks, so the argument goes, provide better information to foreign investors and increase transparency, the danger of herding is reduced. Previous findings by Kono and Schuknecht (1998) confirmed empirically that such an effect does exist. This study expands their data set with respect to both the length of the time period and the number of countries. Contrary to Kono and Schuknecht, it is found that foreign bank penetration tends to increase, rather than decrease, the volatility of capital flows. The trade regime variables are not significant in explaining cross-country variations in the volatility of capital flows. This result does not change significantly when alternative measures of volatility are considered. This paper was presented at the conference "Financial crisis in transition countries: recent lessons and problems yet to solve" on 13-14 July 2000 at the Institute for Economic Research (IWH) in Halle, Germany.
This paper discusses the role of internal corporate ratings as a means by which commercial banks condense their informational advantage and preserve it vis-à-vis a competitive lending market. Drawing on a unique data set collected from leading universal banks in Germany, we are able to evaluate the extent to which non-public information determines corporate ratings. As a point of departure, the paper describes a sample of rating systems currently in use and points out methodological differences between them. Relying on a probit analysis, we are able to show that the set of qualitative, or soft, factors is not simply redundant with respect to publicly available accounting data. Rather, qualitative information tends to be decisive in at least one third of cases, and it tends to improve the firms' overall corporate rating. In the case of conflicting rating changes, i.e. when qualitative and quantitative rating changes have opposing signs, quantitative criteria dominate the overall rating change. Furthermore, the more restrictive the weighting scheme in the rating methodology, the stronger the impact of qualitative information on the firms' overall rating. The implications of our results underline the need to define stringent rating standards, from both a risk management and a regulatory point of view. Revised edition published in: ZEW Wirtschaftsanalysen, Vol. 54, Baden-Baden: Nomos, 2001.
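The probit step can be sketched as follows; the factor names are hypothetical stand-ins for the paper's quantitative and qualitative rating inputs:

```python
# Sketch of a probit of the rating decision on quantitative (accounting)
# and qualitative (soft) factors; names are hypothetical.
import pandas as pd
import statsmodels.api as sm

loans = pd.read_csv("rating_data.csv")
y = loans["good_rating"]                       # 1 if rating above a cutoff
X = sm.add_constant(loans[["leverage", "roa", "interest_cover",  # quantitative
                           "mgmt_quality", "market_position"]])  # qualitative
res = sm.Probit(y, X).fit()
print(res.summary())
# If the soft-factor coefficients stay significant given the accounting
# ratios, qualitative information is not redundant, as the paper finds.
```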
This paper provides a broad empirical examination of the major currencies' roles in international capital markets, with a special emphasis on the first year of the euro. A contribution is made as to how to measure these roles, both for international financing and for international investment. The time series collected for these measures allow for the identification of changes in the role of the euro during 1999 compared to the aggregate of the euro's predecessor currencies, net of intra-euro area assets/liabilities, before stage 3 of EMU. A number of key factors determining the currency distribution of international portfolio investments, such as relative market liquidity and relative risk characteristics of assets, are also examined empirically. It turns out that for almost all important market segments for which data are available, the euro immediately became the second most widely used currency for international financing and investment. In the flow of international bond and note issuance it experienced significant growth in 1999, even slightly overtaking the US dollar in the second half of the year. The euro's international investment role appears more static, though, since most of the early external asset supply in euro was actually absorbed by euro area residents.
This paper examines the interaction of G7 real exchange rates with real output and interest rate differentials. Using cointegration methods, we generally find a link between the real exchange rate and the real interest differential. This finding contrasts with the majority of the extant research on the real exchange rate - real interest rate link. We identify a new measure of the equilibrium exchange rate in terms of the permanent component of the real exchange rate that is consistent with the dynamic equilibrium given by the cointegration relation. Furthermore, the presence of cointegration also allows us to identify real, nominal and transitory disturbances with only minimal identifying restrictions. Our findings suggest that persistent deviations of real exchange rates from their equilibrium value can have feedback effects on the underlying fundamentals, hence altering the equilibrium exchange rate itself. This has important implications for the persistence measures of real exchange rates that are reported elsewhere in the literature.
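The cointegration step could be sketched with a Johansen test along these lines; series names, the data file, and the lag order are hypothetical:

```python
# Sketch of a Johansen cointegration test on the real exchange rate, the
# real output differential and the real interest differential.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

data = pd.read_csv("g7_fundamentals.csv", index_col=0)
y = data[["real_exchange_rate", "output_diff", "real_rate_diff"]]

res = coint_johansen(y, det_order=0, k_ar_diff=4)
print("trace statistics:", res.lr1)
print("95% critical values:", res.cvt[:, 1])
# A trace statistic above its critical value rejects "no cointegration";
# the cointegrating vector can then be read off res.evec[:, 0].
```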
In this study, firms' choice of the number of bank relationships is analyzed with respect to influential factors such as borrower quality, size and the existence of a close housebank relationship. The number of bank relationships is then used as a proxy to examine whether bank competition is reflected in loan terms. It is shown that the number of bank relationships is determined foremost by borrower size and the existence of a housebank relationship. Loan rate spreads are not affected by the number of bank relationships. However, borrowers with a small number of bank relationships provide more collateral and obtain more credit. These effects are amplified by a housebank relationship: housebanks obtain more collateral and are ready to take a larger stake in the financing of their customers.
The globalization of markets and companies has increased the demand for internationally comparable, high-quality accounting information resulting from a common set of accounting rules. Despite remarkable international harmonization efforts over more than 25 years, accounting regulation is still the domain of national legislators or delegated standard setters. The paper starts by outlining the reasons for this state of affairs and by characterizing the different institutional backgrounds of accounting standard setting in four selected countries as well as at the international level. This is followed by a summary of important international differences in accounting rules and of the empirical evidence on the impact of different rules on the resulting numbers and their relevance to users. It is argued that neither a priori theoretical reasoning nor the evidence from empirical studies provides a convincing basis for choices between accounting regimes, and even less so between specific accounting rules. As there is a broad consensus on the need for one set of global accounting standards, the final sections of the paper discuss currently existing and proposed structures of international accounting standard setting. The evolving new IASC structure is critically evaluated.
This paper discusses the role of the credit rating agencies during the recent financial crises. In particular, it examines whether the agencies can add to the dynamics of emerging market crises. Academics and investors often argue that sovereign credit ratings are responsible for pronounced boom-bust cycles in emerging-markets lending. Using a vector autoregressive system, this paper examines how US dollar bond yield spreads and the short-term international liquidity position react to an unexpected sovereign credit rating change. Contrary to common belief and previous studies, the empirical results suggest that an abrupt downgrade does not necessarily intensify a financial crisis.
Bank internal ratings of corporate clients are intended to quantify the expected likelihood of future borrower defaults. This paper develops a comprehensive framework for evaluating the quality of standard rating systems. We suggest a number of principles that ought to be met by 'good rating practice'. These 'generally accepted rating principles' are potentially relevant for the improvement of existing rating systems. They are also relevant for the development of certification standards for internal rating systems, as currently discussed in a consultative paper issued by the Bank for International Settlements in Basle, entitled 'A new capital adequacy framework'. We would very much appreciate any comments by readers that help to develop these rating standards further. Simply send us an e-mail, or give us a call.
This paper measures the economy-wide impact of bank distress on the loss of relationship benefits. We use the near-collapse of the Norwegian banking system during the period 1988 to 1991 to measure the impact of bank distress announcements on the stock prices of firms maintaining a relationship with a distressed bank. We find that although banks experience large and permanent downward revisions in their equity value during the event period, firms maintaining relationships with these banks face only small and temporary changes, on average, in stock price. In other words, the aggregate impact of bank distress on the real economy appears small. We analyze the cross-sectional variation in firm abnormal returns and find that firms that maintain international bank relationships suffer more upon announcement of bank distress.
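A minimal sketch of the underlying market-model event study for a single client firm, with hypothetical data, column names and dates:

```python
# Sketch of a market-model event study: abnormal returns of a client firm
# around a bank-distress announcement. Names and windows are hypothetical.
import pandas as pd
import statsmodels.api as sm

rets = pd.read_csv("daily_returns.csv", index_col=0, parse_dates=True)
event_day = pd.Timestamp("1991-10-14")        # hypothetical announcement date

# Estimate the market model on a pre-event window.
est = rets.loc[:event_day - pd.Timedelta(days=30)].tail(200)
mm = sm.OLS(est["firm"], sm.add_constant(est["market"])).fit()

# Abnormal return = actual minus market-model prediction in the event window.
window = rets.loc[event_day - pd.Timedelta(days=2):
                  event_day + pd.Timedelta(days=2)]
ar = window["firm"] - mm.predict(sm.add_constant(window["market"]))
print("CAR over [-2, +2]:", ar.sum())
```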
This paper presents evidence that spillovers through shifts in bank lending can help explain the pattern of contagion. To test the role of bank lending in transmitting currency crises, we examine a panel of data on capital flows to 30 emerging markets disaggregated by 11 banking centers. In addition, we study a cross-section of emerging markets for which we construct a number of measures of competition for bank funds. For the Mexican and Asian crises, we find that the degree to which countries compete for funds from common bank lenders is a fairly robust predictor of both disaggregated bank flows and the incidence of a currency crisis. In the Russian crisis, the common bank lender helps to predict the incidence of contagion, but there is also evidence of a generalized outflow from all emerging markets. We test extensively for robustness to sample, specification and definition of the common bank lender effect. Overall, our findings suggest that spillovers through banking centers may be more important in explaining contagion than similarities in macroeconomic fundamentals and even than trade linkages.
For some time now the buzzword 'transparency' has been bandied about in the media almost daily. For example, calls were made for greater transparency in the financial system in connection with developments in the Asian financial markets. But the call for greater transparency goes far beyond the financial markets. It is now regarded as a necessary part of the "good governance" demanded of all economic policy makers. As the World Bank's chief economist Joseph Stiglitz put it: 'No one would dare say that they were against transparency (...): It would be like saying you were against motherhood or apple pie.' This paper focuses on transparency in monetary policy, in particular with respect to the European System of Central Banks.
This study uses Markov-switching models to evaluate the informational content of the term structure as a predictor of recessions in eight OECD countries. The empirical results suggest that for all countries the term spread is sensibly modelled as a two-state regime-switching process. Moreover, our simple univariate model turns out to be a filter that accurately transforms term spread changes into turning-point predictions. The term structure is confirmed to be a reliable recession indicator. However, the results of probit estimations show that the Markov-switching filter does not significantly improve the forecasting ability of the spread.
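A sketch of such a two-state model using statsmodels' regime-switching tools; the data file is hypothetical, and which regime corresponds to recessions must be read off the estimated means:

```python
# Sketch of a two-state Markov-switching model for the term spread.
import pandas as pd
import statsmodels.api as sm

spread = pd.read_csv("term_spread.csv", index_col=0, parse_dates=True)["spread"]

# Two regimes with switching mean and variance; the smoothed probability of
# the low-mean regime can serve as the recession signal.
mod = sm.tsa.MarkovRegression(spread, k_regimes=2,
                              trend="c", switching_variance=True)
res = mod.fit()
print(res.summary())
regime0_prob = res.smoothed_marginal_probabilities[0]
```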
Modeling short-term interest rates as following regime-switching processes has become increasingly popular. Theoretically, regime-switching models are able to capture rational expectations of infrequently occurring discrete events. Technically, they allow for potentially time-varying stationarity. After discussing both aspects with reference to the recent literature, this paper provides estimations of various univariate regime-switching specifications for the German three-month money market rate and of bivariate specifications additionally including the term spread. The main contribution, however, is a multi-step out-of-sample forecasting competition. It turns out that forecasts improve substantially when allowing for state dependence. In particular, the informational content of the term spread for future short rate changes can be exploited optimally within a multivariate regime-switching framework.
Collateral, default risk, and relationship lending: an empirical study on financial contracting
(2000)
This paper provides further insights into the nature of relationship lending by analyzing the link between relationship lending, borrower quality and collateral as a key variable in loan contract design. We use a unique data set based on the examination of credit files of five leading German banks, thus relying on information actually used in the process of bank credit decision-making and contract design. In particular, bank-internal borrower ratings serve to evaluate borrower quality, and the bank's own assessment of its housebank status serves to identify information-intensive relationships. Additionally, we use data on workout activities for borrowers facing financial distress. We find no significant correlation between ex ante borrower quality and the incidence or degree of collateralization. Our results indicate that the use of collateral in loan contract design is mainly driven by aspects of relationship lending and renegotiation. We find that relationship lenders, or housebanks, do require more collateral from their debtors, thereby increasing the borrowers' lock-in and strengthening the banks' bargaining power in future renegotiation situations. This result is strongly supported by our analysis of the correlation between ex post risk, collateral and relationship lending, since housebanks engage more frequently in workout activities for distressed borrowers, and collateralization increases workout probability. First version: March 12, 1999
We analyze the role of different kinds of primary and secondary market interventions in the government's goal of maximizing its revenues from public bond issuance. Some of these interventions can be thought of as characteristics of a "primary dealer system". We find that a primary dealer system with a restricted number of participants may be useful in the case of limited competition among sufficiently heterogeneous market makers. We further show that minimum secondary market turnover requirements for primary dealers with respect to bond sales generally seem more adequate than the definition of maximum bid-ask spreads or minimum turnover requirements with respect to bond purchases. Moreover, official price management operations cannot completely substitute for a system of primary dealers. Finally, there is in general no reason for monetary compensation of primary dealers, since they already possess some privileges with respect to public bond auctions.
This paper considers the desirability of the observed tendency of central banks to adjust interest rates only gradually in response to changes in economic conditions. It shows, in the context of a simple model of optimizing private-sector behavior, that such inertial behavior on the part of the central bank may indeed be optimal, in the sense of minimizing a loss function that penalizes inflation variations, deviations of output from potential, and interest-rate variability. Sluggish adjustment characterizes an optimal policy commitment, even though no such inertia would be present in the case of a reputationless (Markovian) equilibrium under discretion. Optimal interest-rate feedback rules are also characterized, and shown to involve substantial positive coefficients on lagged interest rates. This provides a theoretical explanation for the numerical results obtained by Rotemberg and Woodford (1998) in their quantitative model of the U.S. economy.
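In a standard formulation of such an objective (a sketch; the paper's exact weights and timing may differ), the central bank minimizes

\[
  L = E_0 \sum_{t=0}^{\infty} \beta^{t}
  \left[ \pi_t^{2} + \lambda_x x_t^{2} + \lambda_i \left( i_t - i^{*} \right)^{2} \right],
\]

where \(\pi_t\) is inflation, \(x_t\) the output gap, \(i_t\) the nominal interest rate, and \(\lambda_x, \lambda_i \ge 0\). The last term penalizes interest-rate variability; the point of the paper is that, under commitment, the policy minimizing such a loss exhibits inertia even though the loss itself contains no explicit smoothing term.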
This paper analyses two reasons why inflation may interfere with price adjustment so as to create inefficiencies in resource allocation at low rates of inflation. The first argument is that the higher the rate of inflation, the lower the likelihood that downward nominal rigidities are binding (the Tobin argument), which implies a non-linear Phillips curve. The second argument is that low inflation strengthens nominal price rigidities and thus impairs the flexibility of the price system, resulting in a less efficient resource allocation. It is argued that inflation can be too low from a welfare point of view due to the presence of nominal rigidities, but the quantitative importance of this effect is an open question.
As inflation rates in the United States decline, analysts are asking whether there are economic reasons to hold the rates at levels above zero. Previous studies of whether inflation "greases the wheels" of the labor market ignore inflation's potential for disrupting wage patterns in the same market. This paper outlines an institutionally based model of wage-setting that allows the benefits of inflation (downward wage flexibility) to be separated from disruptive uncertainty about the inflation rate (undue variation in relative prices). Our estimates, using a unique 40-year panel of wage changes made by large Midwestern employers, suggest that low rates of inflation do help the economy to adjust to changes in labor supply and demand. However, when inflation's disruptive effects are balanced against this benefit, the labor market justification for pursuing a positive long-term inflation goal effectively disappears.
Since 1990, a number of countries have adopted inflation targeting as their declared monetary strategy. Interpretations of the significance of this movement, however, have differed widely. To some, inflation targeting mandates the single-minded, rule-like pursuit of price stability without regard for other policy objectives; to others, inflation targeting represents nothing more than the latest version of cheap talk by central banks unable to sustain monetary commitments. Advocates of inflation targeting, including the adopting central banks themselves, have expressed the view that the efforts at transparency and communication in the inflation targeting framework grant the central bank greater short-run flexibility in pursuit of its long-run inflation goal. This paper assesses whether the talk that inflation targeting central banks engage in matters to central bank behavior, and which interpretation of the strategy is consistent with that assessment. We identify five distinct interpretations of inflation targeting, consistent with various strands of the current literature, and characterize them as movements between various strategies in a conventional model of time-inconsistency in monetary policy. The empirical implications of these interpretations are then compared to the responses to inflation movements of the central banks of three countries that adopted inflation targets in the early 1990s: the United Kingdom, Canada, and New Zealand. For all three, the evidence shows a break in the behavior of inflation consistent with a strengthened commitment to price stability. In no case, however, is there evidence that the strategy entails a single-minded pursuit of the inflation target. For the U.K., the results are consistent with the successful implementation of the optimal state-contingent rule, thereby combining flexibility and credibility; similarly, New Zealand's improved inflation performance was achieved without a discernible increase in counter-inflationary conservatism. The results for Canada are less clear, perhaps reflecting the broader fiscal and international developments affecting the Canadian economy during this period.
Derivatives usage in risk management by U.S. and German non-financial firms: a comparative survey
(1998)
This paper is a comparative study of the responses to the 1995 Wharton School survey of derivatives usage among US non-financial firms and a 1997 companion survey of German non-financial firms. It is not a mere comparison of the results of both studies but a comparative study, drawing a comparable subsample of firms from the US study to match the sample of German firms on both size and industry composition. We find that German firms are more likely to use derivatives than US firms, with 78% of German firms using derivatives compared to 57% of US firms. Aside from this higher overall usage, the general pattern of usage across industry and size groupings is comparable across the two countries. In both countries, foreign currency derivative usage is most common, followed closely by interest rate derivatives, with commodity derivatives a distant third. Usage rates across all three classes of derivatives are higher for German firms than for US firms. In contrast to these similarities, firms in the two countries differ notably on issues such as the primary goal of hedging, their choice of instruments, and the influence of their market view when taking derivative positions. These differences appear to be driven by the greater importance of financial accounting statements in Germany than in the US and by stricter German corporate policies of control over derivative activities within the firm. German firms also indicate significantly less concern about derivative-related issues than US firms, which appears to arise from a more basic and simple strategy for using derivatives. Finally, among the derivative non-users, German firms tend to cite reasons suggesting that derivatives are not needed, whereas US firms tend to cite reasons suggesting a possible role for derivatives but a hesitation to use them.
The purpose of the paper is to survey and discuss inflation targeting in the context of monetary policy rules. The paper provides a general conceptual discussion of monetary policy rules, attempts to clarify the essential characteristics of inflation targeting, compares inflation targeting to other monetary policy rules, and draws some conclusions for the monetary policy of the European System of Central Banks.
Despite the relevance of credit financing for the profit and risk situation of commercial banks, little empirical evidence on the initial credit decision and the monitoring process exists, owing to the lack of appropriate data on bank debt financing. The present paper provides a systematic overview of a data set generated during the Center for Financial Studies research project on "Credit Management", which was designed to fill this empirical void. The data set contains a broad list of variables taken from the credit files of five major German banks. It is a random sample drawn from all customers that engaged in some form of borrowing from the banks in question between January 1992 and January 1997 and that meet a number of selection criteria. The sampling design and data collection procedure are discussed in detail. Additionally, the project's research agenda is described and some general descriptive statistics of the firms in our sample are provided.
We studied information and interaction processes in six lending relationships between a universal bank and medium-sized firms. The study is based on the credit files of the respective firms. As long as no problems occur in these lending relationships, bank monitoring is based mainly on cheap, retrospective and internal data. In case of distress, more expensive, prospective and external information is used. The level of monitoring and the willingness to renegotiate the lending relationship depend on what the lending officers can learn about the future prospects of the firm from the behaviour of the debtors. We identify both signalling and bonding activities. Such learning from past behaviour seems to allow monitoring at low cost, whereas the direct observation of the firm's investment outlook seems to be very costly. Moreover, too much knowledge about the firm's investments might leave the bank in a very strong bargaining position and distort investment incentives. Therefore, the traditional view of credit assessment as observation of the quality of a borrower's investment programme needs to be reconsidered.
Shares trading on the Bolsa Mexicana de Valores do not seem to react to company news. Using a sample of Mexican corporate news announcements from the period July 1994 through June 1996, this paper finds that there is nothing unusual about returns, volatility of returns, volume of trade or bid-ask spreads in the event window. This suggests one of five possibilities: our sample size is small; or markets are inefficient; or markets are efficient but the corporate news announcements are not value-relevant; or markets are efficient and corporate news announcements are value-relevant, but they have been fully anticipated; or markets are efficient and corporate news announcements are value-relevant, but unrestricted insider trading has caused prices to fully incorporate the information. The evidence supports the last hypothesis. The paper thus points towards a methodology for ranking emerging stock markets in terms of their market integrity, an approach that can be used with the limited data available in such markets.
No one seems to be neutral about the effects of EMU on the German economy. Roughly speaking, there are two camps: those who see the euro as the advent of a newly open, large, and efficient regime which will lead to improvements in European and in particular in German competitiveness; and those who see the euro as a weakening of the German commitment to price stability. From a broader macroeconomic perspective, however, it is clear that EMU is unlikely to directly cause any meaningful change, either for the better in Standort Deutschland or for the worse in German price stability. There is ample evidence that changes in monetary regimes (so long as they are not exits from hyperinflation) induce little change in real economic structures such as labor or financial markets. Regional asymmetries of the sort found in the EU do not tend to translate into monetary differences. Most importantly, there is no good reason to believe that the ECB will behave any differently than the Bundesbank.
Where do we stand in the theory of finance? A selective overview with reference to Erich Gutenberg
(1998)
For the past 20 years, financial markets research has concerned itself with issues related to the evaluation and management of financial securities in efficient capital markets and with issues of management control in incomplete markets. The following selective overview focuses on key aspects of the theory and empirical evidence on management control under conditions of asymmetric information. The objective is to examine the validity of the recently advanced hypothesis on the myths of corporate control. The present overview is based on Gutenberg's position that there exists a discrete corporate interest, distinct and separate from the interests of the shareholders or other stakeholders. In the third volume of Grundlagen der BWL: Die Finanzen, published in 1969, this position of Gutenberg's is coupled with an appeal for a so-called financial equilibrium to be maintained. Not until recently have models grounded in capital market theory been developed which also allow for a firm's management to exercise autonomy vis-à-vis its stakeholders. This paper was prepared for the Erich Gutenberg centenary conference on December 12 and 13, 1997 in Cologne.
This study examines the relation of bank loan terms such as interest rates, collateral, and lines of credit to borrower risk, as defined by the banks' internal credit ratings. The analysis is not restricted to a static view; it also incorporates rating transitions and their implications for this relation. Money illusion and phenomena linked with relationship banking are identified as important factors. The results show that riskier borrowers pay higher loan rate premiums and rely more on bank finance. Housebanks obtain more collateral and provide more finance. Owing to money illusion, loan rate premiums are relatively small in times of high market interest rates, whereas they are relatively high in times of low market interest rates. There was no evidence of an appropriate adjustment of loan terms to rating changes. However, bank market power, represented by a weighted average of the credit rating before and after a rating transition, serves to compensate for low earlier profits caused by interest rate smoothing. Classification: G21.
Banks increasingly recognize the need to measure and manage the credit risk of their loans on a portfolio basis. We address the subportfolio "middle market". Given their specific lending policy for this market segment, it is an important task for banks to systematically identify regional and industrial credit concentrations and to reduce the detected concentrations through diversification. In recent years, the development of markets for credit securitization and credit derivatives has provided new credit risk management tools. However, in the market segment addressed here, adverse selection and moral hazard problems are quite severe. A successful application of credit securitization and credit derivatives to managing the credit risk of middle-market commercial loan portfolios therefore depends on the development of incentive-compatible structures which solve, or at least mitigate, the adverse selection and moral hazard problems. In this paper we identify a number of general requirements and describe two possible solution concepts.
In recent years, the lending business has come under considerable competitive pressure, and bank managers often express concern regarding its profitability vis-à-vis other activities. This paper tries to empirically identify factors that can explain the financial performance of bank lending activities. The analysis is based on the CFS data set collected in 1997 from 200 medium-sized firms. Two regressions are performed: the first is directed towards relationships between interest rate premiums and various determining factors; the second aims at detecting relationships between those factors and the occurrence of several types of problems during the course of a credit engagement. Furthermore, the results of both regressions are used to test theoretical hypotheses regarding the impact of certain parameters on credit terms and distress probabilities. The findings are somewhat "puzzling": First, the rating is not as significant as expected. Second, credit contracts seem to be priced lower in situations with greater risks. Finally, the results do not fully support any of the three hypotheses that are often advanced to describe the role of collateral and covenants in credit contracts.
The German financial market is often characterized as a bank-based system with strong bank-customer relationships. The corresponding notion of a housebank is closely related to the theoretical idea of relationship lending. The objective of this paper is to provide a direct comparison of the credit policies of housebanks and "normal" banks. To this end, we analyze a new data set, representing a random sample of borrowers drawn from the credit portfolios of five leading German banks over a period of five years. We use credit-file data rather than industry survey data and thus focus the analysis on information that is directly related to actual credit decisions. In particular, we use bank-internal borrower rating data to evaluate borrower quality, and the bank's own assessment of its housebank status to control for information-intensive relationships.
This paper reviews the factors that will determine the shape of financial markets under EMU. It argues that financial markets will not be unified by the introduction of the euro. National central banks have a vested interest in preserving local idiosyncrasies (e.g. the Wechsel in Germany), and they might be allowed to do so by promoting the use of so-called tier-two assets under the common monetary policy. Moreover, a host of national regulations (prudential and fiscal) will make assets denominated in euro imperfect substitutes across borders. Prudential control will also continue to be handled differently from country to country. In the long run these national idiosyncrasies cannot survive competitive pressures in the euro area. The year 1999 will thus see the beginning of a process of unification of financial markets that will be irresistible in the long run, but might still take some time to complete.
In this paper we analyze the relation between fund performance and market share. Using three performance measures we first establish that significant differences in the risk-adjusted returns of the funds in the sample exist. Thus, investors may react to past fund performance when making their investment decisions. We estimated a model relating past performance to changes in market share and found that past performance has a significant positive effect on market share. The results of a specification test indicate that investors react to risk-adjusted returns rather than to raw returns. This suggests that investors may be more sophisticated than is often assumed.
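One common risk-adjusted measure in such exercises is Jensen's alpha; a minimal sketch with hypothetical column names (the paper's three measures may differ):

```python
# Sketch of a risk-adjusted performance measure (Jensen's alpha) used to
# rank funds; file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

rets = pd.read_csv("fund_returns.csv", index_col=0, parse_dates=True)
excess_mkt = rets["market"] - rets["riskfree"]

def jensen_alpha(fund: str) -> float:
    y = rets[fund] - rets["riskfree"]             # fund excess return
    res = sm.OLS(y, sm.add_constant(excess_mkt)).fit()
    return res.params["const"]                    # the intercept is alpha

alphas = {f: jensen_alpha(f) for f in rets.columns.drop(["market", "riskfree"])}
print(pd.Series(alphas).sort_values(ascending=False))
```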
From the mid-seventies on, the central banks of most major industrial countries switched to monetary targeting. The Bundesbank was the first central bank to take this step, making the switch at the end of 1974. This changeover to monetary targeting was due to the difficulties which the Bundesbank, like other central banks, was facing in pursuing its original strategy, and which came to a head in the early seventies, when inflation escalated. A second factor was the collapse of the Bretton Woods system of fixed exchange rates, which created the necessary scope for national monetary targeting. Finally, the advance of monetarist ideas fostered the explicit turn towards monetary targets, although the Bundesbank did not implement these in a mechanistic way. Whereas the Bundesbank has adhered to its policy of monetary targeting up to the present, nowadays monetary targeting plays only a minor role worldwide. Many central banks have switched to the strategy of direct inflation targeting. Others favour a more discretionary approach or a policy which is geared to the exchange rate. In the academic debate, monetary targeting is often presented as an outdated approach which has long since lost its basis of stable money demand. These findings give rise to a number of questions: Has monetary targeting actually become outdated? What role is played by the concrete design of this strategy, and, against this background, how easily can it be transferred to European monetary union? This paper aims to answer these questions, drawing on the particular experience which the Bundesbank has gained of monetary targeting. It seems appropriate to discuss monetary targeting by using a specific example, since this notion is not very precise. This applies, for example, to the money definition used, the way the target is derived, the stringency applied in pursuing the target and the monetary management procedure.
In this speech (given at the CFS research conference on the Implementation of Price Stability, held at the Bundesbank in Frankfurt am Main, 10-12 September 1998), John Vickers discusses theoretical and practical issues relating to inflation targeting as used in the United Kingdom during the past six years. After outlining the role of the Bank's Monetary Policy Committee, he considers the Committee's task from a theoretical perspective, before discussing the concept and measurement of domestically generated inflation.
Credit Unions are cooperative financial institutions specializing in the basic financial needs of certain groups of consumers. A distinguishing feature of credit unions is the legal requirement that members share a common bond. This organizing principle recently became the focus of national attention as the Supreme Court and the U.S. Congress took opposite sides in a controversy regarding the number of common bonds that could co-exist within the membership of a single credit union. Despite its importance, little research has been done into how common bonds affect how credit unions actually operate. We frame the issues with a simple theoretical model of credit-union formation and consolidation. To provide intuition into the flexibility of multiple-group credit unions in serving members, we simulate the model and present some comparative-static results. We then apply a semi-parametric empirical model to a large dataset drawn from federally chartered occupational credit unions in 1996 to investigate the effects of common bonds. Our results suggest that credit unions with multiple common bonds have higher participation rates than credit unions that are otherwise similar but whose membership shares a single common bond.
"In this paper, I analyse the conduct of business rules included in the Directive on Markets in Financial Instruments (MiFID) which has replaced the Investment Services Directive (ISD). These rules, in addition to being part of the regulation of investment intermediaries, operate as contractual standards in the relationships between intermediaries and their clients. While the need to harmonise similar rules is generally acknowledged, in the present paper I ask whether the Lamfalussy regulatory architecture, which governs securities lawmaking in the EU, has in some way improved regulation in this area. In section II, I examine the general aspects of the Lamfalussy process. In section III, I critically analyse the MiFID s provisions on conduct of business obligations, best execution of transactions and client order handling, taking into account the new regime of trade internalisation by investment intermediaries and the ensuing competition between these intermediaries and market operators. In sectionIV, I draw some general conclusions on the re-regulation made under the Lamfalussy regulatory structure and its limits. In this section, I make a few preliminary comments on the relevance of conduct of business rules to contract law, the ISD rules of conduct and the role of harmonisation."
This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a non-deterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of the correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
Work on proving the congruence of bisimulation in functional programming languages often refers to [How89, How96], where Howe gave a highly general account of this topic in terms of so-called 'lazy computation systems'. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how the original work of Howe can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naive bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
In this paper we demonstrate how to relate the semantics given by the non-deterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO actions in Haskell. We sketch the investigations of [Sab03b], in which many of the program transformations performed by the compiler were shown to be correct w.r.t. the FUNDIO semantics. This enabled us to obtain a FUNDIO-compatible Haskell compiler by turning off the not-yet-investigated transformations and the small set of incompatible ones. With this compiler, Haskell programs which use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language FUNDIO, which is also a call-by-need lambda calculus, is investigated. The syntax of FUNDIO has case, letrec, constructors and an IO interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences is defined, and a context lemma is proved. This makes it possible to study a semantics of FUNDIO and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams makes it possible to show the correctness of a considerable set of program transformations. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and are shown to be correct w.r.t. contextual equivalence. The correctness of the strictness optimizations also justifies the correctness of parallel evaluation. Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output, and to provide a useful semantics for this combination. It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO. The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IO operations is not fixed.
Context unification is a variant of second order unification. It can also be seen as a generalization of string unification to tree unification. Currently it is not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable. However, the previous algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: i. for every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle; ii. the exponent of periodicity is 2^{O(n)}, which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
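The "power expressions" can be pictured as contexts raised to a symbolic exponent, so that a context repeated n times keeps an O(log n)-sized description instead of being expanded. A small illustrative Haskell encoding (names and representation are mine, not the paper's):

    data Term = V String | F String [Term] deriving Show

    -- Contexts: terms with exactly one hole. 'Power c n' abbreviates c plugged
    -- into itself n times, with n stored as an Integer (binary size O(log n)).
    data Ctx = Hole
             | Branch String [Term] Ctx [Term]  -- f(..., C, ...), hole below one argument
             | Power Ctx Integer
             deriving Show

    -- Plug a context into the hole of another.
    plugC :: Ctx -> Ctx -> Ctx
    plugC Hole d             = d
    plugC (Branch f l c r) d = Branch f l (plugC c d) r
    plugC (Power c n) d      = plugC (expand (Power c n)) d

    -- Expanding 'Power c n' blows the representation up by a factor of n;
    -- the point of the optimized algorithm is to avoid this wherever possible.
    expand :: Ctx -> Ctx
    expand Hole             = Hole
    expand (Branch f l c r) = Branch f l (expand c) r
    expand (Power c n)
      | n <= 0    = Hole
      | otherwise = plugC (expand c) (expand (Power c (n - 1)))

    main :: IO ()
    main = print (expand (Power (Branch "f" [] Hole []) 3))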
Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows decidability of stratified context unification as well as of satisfiability of one-step rewrite constraints.
It is well known that first-order unification is decidable, whereas second-order and higher-order unification are undecidable. Bounded second-order unification (BSOU) is second-order unification under the restriction that only a bounded number of holes in the instantiating terms for second-order variables is permitted; the size of the instantiation, however, is not restricted. In this paper, a decision algorithm for bounded second-order unification is described. This is the first non-trivial decidability result for second-order unification where the (finite) signature is not restricted and there are no restrictions on the occurrences of variables. We show that monadic second-order unification (MSOU), a specialization of BSOU, is in Σ^p_2. Since MSOU is related to word unification, this compares favourably to the best known upper bound NEXPTIME (and also to the announced upper bound PSPACE) for word unification. This supports the claim that bounded second-order unification is easier than context unification, whose decidability is currently an open question.
This paper describes the development of a typesetting program for music in the lazy functional programming language Clean. The system transforms a description of the music to be typeset into a dvi-file, just as TEX does with mathematical formulae. The implementation makes heavy use of higher-order functions. It was implemented in just a few weeks and is able to typeset quite impressive examples. The system is easy to maintain and can be extended to typeset arbitrarily complicated musical constructs. The paper can be considered a status report of the implementation as well as a reference manual for the resulting system.
The extraction of strictness information is an indispensable element of an efficient compilation of lazy functional languages like Haskell. Based on the method of abstract reduction, we have developed an efficient strictness analyser for a core language of Haskell. It is completely written in Haskell and compares favourably with known implementations. The implementation is based on the G#-machine, an extension of the G-machine that has been adapted to the needs of abstract reduction.
This paper describes context analysis, an extension of strictness analysis for lazy functional languages. In particular, it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus based on abstract reduction is presented which, given the abstract value of the result, automatically finds the abstract values of the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
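For orientation, Wadler's four-point domain for lists over a flat base domain can be written down directly. The Haskell sketch below (with my own names, and only the forward direction of the analysis) shows the linearly ordered abstract values and two standard abstract functions; read backwards, as in context analysis, a demanded result value determines how much of the argument is demanded.

    -- Flat base domain: B = divergent, T = no information.
    data Two = B | T deriving (Eq, Ord, Show)

    -- Wadler's four points for lists, linearly ordered BotL < InfL < BotIn < TopL:
    --   BotL  : the undefined list
    --   InfL  : infinite or partial lists
    --   BotIn : finite lists that may contain undefined elements
    --   TopL  : finite lists of defined elements
    data Four = BotL | InfL | BotIn | TopL deriving (Eq, Ord, Show)

    -- length demands the whole spine but none of the elements:
    lengthA :: Four -> Two
    lengthA BotL = B
    lengthA InfL = B
    lengthA _    = T

    -- sum demands the spine and every element:
    sumA :: Four -> Two
    sumA TopL = T
    sumA _    = B

    -- Backwards reading: a defined result of length needs the argument to be
    -- at least BotIn (spine only); a defined result of sum needs TopL.
    main :: IO ()
    main = print (map lengthA [BotL, InfL, BotIn, TopL],
                  map sumA    [BotL, InfL, BotIn, TopL])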
A partial rehabilitation of side-effecting I/O: non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done such that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics is a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend current I/O mechanisms by non-deterministic memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
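The correctness issue for program transformations can be made concrete in today's Haskell (a hedged sketch: it relies on unsafePerformIO from base and randomRIO from the random package, and on the compiler not sharing or inlining more aggressively than written):

    import System.IO.Unsafe (unsafePerformIO)
    import System.Random (randomRIO)

    -- A memoryless non-deterministic call: each evaluation may give a new number.
    rand :: () -> Int
    rand () = unsafePerformIO (randomRIO (0, 9))
    {-# NOINLINE rand #-}

    main :: IO ()
    main = do
      let x = rand ()           -- shared: one draw, so x + x is always even
      print (x + x)
      print (rand () + rand ()) -- duplicated: possibly two different draws

    -- Common subexpression elimination would identify the two calls in the
    -- last line with each other, changing the set of possible results; this is
    -- exactly the kind of transformation whose correctness must be re-checked.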
Automatic termination proofs for functional programming languages are an often challenged problem. Most work in this area is done on strict languages: orderings for the arguments of recursive calls are generated. In lazily evaluated languages, arguments of functions are not necessarily evaluated to a normal form. It is not a trivial task to define orderings on expressions that are not in normal form or that do not even have a normal form. We propose a method based on an abstract reduction process that reduces up to the point where sufficient ordering relations can be found. The proposed method is able to find termination proofs for lazily evaluated programs that involve non-terminating subexpressions. The analysis is performed on a higher-order, polymorphically typed language, and termination of higher-order functions can be proved too. The calculus can be used to derive information on a wide range of different notions of termination.
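The phenomenon the method targets is easy to exhibit (a standard Haskell toy example of mine, not one from the paper): a terminating program containing a subexpression with no normal form.

    -- 'nats' has no normal form (it is an infinite list), yet the program
    -- terminates because lazy evaluation demands only a finite prefix.
    nats :: [Integer]
    nats = iterate (+ 1) 0

    main :: IO ()
    main = print (take 5 nats)   -- prints [0,1,2,3,4]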
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions and a second-order unification problem that can be transformed into a word-unification problem decidable by Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, hence D-matching is NP-complete.
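A toy instance (my example, not one from the paper) shows what solving modulo D means:

    \begin{align*}
    &\text{Theory } D:\quad x*(y+z) \approx x*y + x*z, \qquad (x+y)*z \approx x*z + y*z\\
    &\text{Problem:}\quad a*X + a*Y \stackrel{?}{=}_{D} a*(b+c)\\
    &\text{Solution:}\quad \{X \mapsto b,\ Y \mapsto c\}, \text{ since } a*(b+c) =_{D} a*b + a*c .
    \end{align*}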
We consider the problem of unifying a set of equations between second-order terms. Terms are constructed from function symbols, constant symbols and variables, and in addition from monadic second-order variables, which may stand for a term with one hole, and parametric terms. We consider stratified systems, where for every first-order and second-order variable the string of second-order variables on the path from the root of a term to every occurrence of this variable is always the same. It is shown that unification of stratified second-order terms is decidable, by describing a nondeterministic decision algorithm that eventually uses Makanin's algorithm for deciding the unifiability of word equations. As a generalization, we show that the method can be used as a unification procedure for non-stratified second-order systems, and describe conditions for termination in the general case.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
Revised Draft: January 2005, First Draft: December 8, 2004 The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of control and ownership of public corporations remain true.
Taking shareholder protection seriously? Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.
With the Council regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into effect on 31 May 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor’s assets and to avoid forum shopping. The Insolvency Regulation, therefore, provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the ‘liquidator’ in the other Member States. The Regulation also deals with important choice of law (or: private international law) provisions. The Regulation is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
Increasingly, alternative investments via hedge funds are gaining importance in Germany. Only recently was this subject taken up in the legal literature, too, resulting in greater product transparency. However, German investment law and, particularly, the special segment of hedge funds is still a field dominated by practitioners. First, the present situation is outlined. In addition, a description of the current development is given, drawing on the practical knowledge of the author. Finally, the hedge fund regulation intended by the legislator for the beginning of the year 2004 is legally evaluated against this background.
In response to recent developments in the financial markets and the stunning growth of the hedge fund industry in the United States, policy makers, most notably the Securities and Exchange Commission (“SEC”), are turning their attention to the regulation, or lack thereof, of hedge funds. U.S. regulators have scrutinized the hedge fund industry on several occasions in the recent past without imposing substantial regulatory constraints. Will this time be any different? The focus of the regulators’ interest has shifted. Traditionally, they approached the hedge fund industry by focusing on systemic risk to and integrity of the financial markets. The current inquiry is almost exclusively driven by investor protection concerns. What has changed? First, since 2000, new kinds of investors have poured capital into hedge funds in the United States, facilitated by the “retailization” of hedge funds through the development of funds of hedge funds and the dismal performance of the stock market. Second, in a post-Enron era, regulators and policy makers are increasingly sensitive to investor protection concerns. On May 14 and 15, 2003, the SEC held for the first time a public roundtable discussion on the single topic of hedge funds. Among the investor protection concerns highlighted were: an increase in incidents of fraud, inadequate suitability determinations by brokers who market hedge fund interests to individual investors, conflicts of interest of managers who manage mutual funds and hedge funds side-by-side, a lack of transparency that hinders investors from making informed investment decisions, layering of fees, and unbounded discretion by managers in pricing private hedge fund securities. Although there has been discussion about imposing wide-ranging restrictions on hedge funds, such as reining in short selling, requiring disclosure of long/short positions and limiting leverage, such a response would be heavy-handed and probably unnecessary. The existing regulatory regime is largely adequate to address the most flagrant abuses. Moreover, as the hedge fund market further matures, it is likely that institutional investors will continue to weed out weak performers and mediocre or dishonest hedge fund managers. What is likely to emerge from the newest regulatory focus on investor protection is a measured response that would enhance the SEC’s enforcement and inspection authority, while leaving hedge funds’ inherent investment flexibility largely unfettered. A likely scenario, for example, might be a requirement that some, or possibly all, hedge fund sponsors register with the SEC as investment advisers. Today, most are exempt from registration, although more and more are registering to provide advice to public hedge funds and attract institutions. Registration would make it easier for the SEC to ferret out potential fraudsters in advance by reviewing the professional history of hedge fund operators, allow the SEC to bring administrative proceedings against hedge fund advisers for statutory violations and give the agency access to books and records that it does not have today. Other possible initiatives, including additional disclosure requirements for publicly offered hedge funds, are discussed below. This article addresses the question whether U.S. regulation of hedge funds is really taking a new direction. It (i) provides a brief overview of the current U.S. 
regulatory scheme, from which hedge funds are generally exempt, (ii) describes recent events in the United States that have contributed to regulators’ anxiety, (iii) examines the investor protection rationale for hedge fund regulation and considers whether these concerns do, in fact, merit increased regulation of hedge funds at this time, and (iv) considers the likelihood and possible scope of a potential regulatory response, principally by the SEC.
In an ideal world all investment products, including hedge funds, would be marketable to all investors. In this ideal world, all investors would fully understand the nature of the products and would be able to make an informed choice whether to invest. Of course the ideal world does not exist – the retail investment market is characterised by asymmetries of information. Product providers know most about the products on offer (or at least they should do). Investment advisers often know rather less than the provider but much more than their retail customers. Providers and intermediary advisers are understandably motivated by the desire to sell their products. There is therefore a risk that investment products will be mis-sold by investment advisers or mis-bought by ill-informed investors. This asymmetry of information is dealt with in most countries through regulation. However, the regulatory response in different countries is not necessarily the same. There are various ways in which protections can be applied, and it is important to understand that the cultural background and regulatory histories of countries flavour the way regulation has developed. This means (as will be explained in greater detail later) that some countries are better able than others to admit hedge funds to the retail sector. Following this Introduction, Section II looks at some key background issues. Section III then looks at some important questions raised by the retail hedge fund issue. Many of these are questions of balance. Balance lies at the heart of regulation, of course – regulation must always balance the needs of investors with market efficiency. Understanding the “retail hedge fund” question requires particular attention to balance. Section IV then looks at the UK regime and how the FSA has answered the balance question. Section V offers some international perspectives. Section VI concludes. It will be seen that there is no obviously right answer to the question whether hedge fund products should be marketed to retail investors. Each regulator in each jurisdiction needs to make up its own mind on how to deal with the various issues and balances. It is evident, however, that internationally there is a move towards a greater variety of retail funds. There is nothing wrong with that, provided the regulators, and the retail customers they protect, sufficiently understand what sort of protection is, or is not, being offered in the regulatory regime.
While hedge funds have been around at least since the 1940s, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate them in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998, LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor-bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions. Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgement that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally. 
Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, The President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties and investors. In the Fall of 1999, two bills directed at increasing hedge fund disclosure were introduced before the U.S. Congress (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined. First, whether hedge funds pose a systemic threat to the stability of financial markets, and, if so, whether additional government regulation would be useful. And second, whether existing regulation provides sufficient protection for hedge fund investors, and, if not, what additional regulation is needed.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
Despite the apparent stability of the wage bargaining institutions in West Germany, aggregate union membership has been declining dramatically since the early 1990s. However, aggregate gross membership numbers do not distinguish by employment status, and it is impossible to disaggregate them sufficiently. This paper uses four waves of the German Socioeconomic Panel, in 1985, 1989, 1993 and 1998, to perform a panel analysis of net union membership among employees. We estimate a correlated random effects probit model suggested in Chamberlain (1984) to take proper account of individual-specific effects. Our results suggest that at the individual level the propensity to be a union member has not changed considerably over time. Thus, the aggregate decline in membership is due to composition effects. We also use the estimates to predict net union density at the industry level based on the IAB employment subsample for the period 1985 to 1997. JEL-Klassifikation: J5
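For reference, the correlated random effects device can be stated as follows (standard notation of my choosing; details of the paper's specification may differ). The individual effect is projected on the covariates from all waves,

    \begin{align*}
    P(y_{it}=1 \mid x_{i1},\dots,x_{iT}, c_i) &= \Phi\!\left(x_{it}'\beta + c_i\right),\\
    c_i &= \psi + x_{i1}'\lambda_1 + \dots + x_{iT}'\lambda_T + u_i,
    \qquad u_i \mid x_i \sim N(0,\sigma_u^2),
    \end{align*}

so that, in outline, a reduced-form probit can be estimated for each wave and the common parameters recovered by minimum distance.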
The paper analyses the financial structure of German inward FDI. From a tax perspective, intra-company loans granted by the parent should be all the more strongly preferred over equity the lower the tax rate of the parent and the higher the tax rate of the German affiliate. From our study of a panel of more than 8,000 non-financial affiliates in Germany, we find only small effects of the tax rate of the foreign parent. However, our empirical results show that subsidiaries that are profitable on average react more strongly to changes in the German corporate tax rate than less profitable firms do. This supports the frequent concern that high German taxes are partly responsible for the high levels of intra-company loans. Taxation, however, does not fully explain the high levels of intra-company borrowing: roughly 60% of the cross-border intra-company loans turn out to be held by firms that are running losses. JEL-Klassifikation: H25, F23.
This paper is a draft of the chapter "German banks and banking structure" of the forthcoming book "The German Financial System". As such, the paper starts out with a description of past and present structural features of the German banking industry. Given the presented empirical evidence, it then argues that great care has to be taken when generalising structural trends from one financial system to another. Whilst conventional commercial banking is clearly in decline in the US, it is far from clear whether the dominance of banks in the German financial system has been significantly eroded over the last decades. We interpret the immense stability in intermediation ratios and financing patterns of firms between 1970 and 2000 as strong evidence for our view that the way in which and the extent to which German banks fulfil the central functions of the financial system are still consistent with the overall logic of the German financial system. In spite of the current dire business environment for financial intermediaries, we do not expect the German financial system, and its banking industry as an integral part of this system, to converge to the institutional arrangements typical of a market-oriented financial system. This Version: March 25, 2003
Initiated by the seminal work of Diamond/Dybvig (1983) and Diamond (1984), advances in the theory of financial intermediation have sharpened our understanding of the theoretical foundations of banks as special financial institutions. What makes them "unique" is the combination of accepting deposits and issuing loans. However, in recent years the notion of "disintermediation" has gained tremendous popularity, especially among American observers. These observers argue that deregulation, globalisation and advances in information technology have been eroding the role of banks as intermediaries and thus their alleged uniqueness. It is even assumed that ever more efficiently organised capital markets and specialised financial institutions that take advantage of these markets, such as mutual funds or finance companies, will lead to the demise of banks. Using a novel measurement concept based on intermediation and securitisation ratios, the present article provides evidence which shows that banking disintermediation is indeed a reality for the US financial system. This seems to indicate that American banks are not all that "unique"; they can be replaced to a considerable extent. Moreover, many observers seem to believe that what has happened in the US reflects a universal trend. However, empirical results reported in this paper indicate that such a trend has not manifested itself in other financial systems, and in particular, not in Germany or Japan. Evidence on the enormous structural differences between financial systems and the lack of unequivocal signs of convergence render any inferences from the American experience to other financial systems very problematic.
It is commonplace in the debate on Germany's labor market problems to argue that high unemployment and low wage dispersion are related. This paper analyses the relationship between unemployment and residual wage dispersion for individuals with comparable attributes. From the conventional neoclassical point of view, wages are determined by the marginal product of the workers. Accordingly, increases in union minimum wages result in a decline of residual wage dispersion and higher unemployment. A competing view regards wage dispersion as the outcome of search frictions and the associated monopsony power of the firms. Accordingly, an increase in search frictions causes both higher unemployment and higher wage dispersion. The empirical analysis attempts to discriminate between the two hypotheses for West Germany by analyzing the relationship between wage dispersion and both the level of unemployment and the transition rates between different labor market states. The findings are not completely consistent with either theory. However, as predicted by search theory, one robust result is that unemployment by cells is not negatively correlated with the within-cell wage dispersion.
This paper evaluates the effects of publicly sponsored training in East Germany in the context of reiterated treatments. Selection bias based on observed characteristics is corrected for by applying kernel matching on the propensity score. We control for further selection and for the presence of Ashenfelter's Dip before the program with conditional difference-in-differences estimators. Training as a first treatment shows insignificant effects on the transition rates. The effect of program sequences and the incremental effect of a second program on the reemployment probability are insignificant. However, the incremental effect on the probability of remaining employed is slightly positive. JEL-Klassifikation: H43, C23, J6, J64, C14
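The estimator family used here combines kernel matching on the estimated propensity score with a before-after difference; in standard notation (mine, and only a sketch of one variant):

    \begin{align*}
    \widehat{\mathrm{ATT}} \;=\; \frac{1}{N_1}\sum_{i \in T}\Bigl(\Delta Y_i \;-\; \sum_{j \in C} w_{ij}\,\Delta Y_j\Bigr),
    \qquad
    w_{ij} \;=\; \frac{K\bigl((\hat p_j - \hat p_i)/h\bigr)}{\sum_{k \in C} K\bigl((\hat p_k - \hat p_i)/h\bigr)},
    \end{align*}

where T and C are the treated and comparison samples, \hat p is the estimated propensity score, K a kernel with bandwidth h, and \Delta Y the change in the outcome from before to after the program; differencing nets out time-invariant selection and the pre-program dip.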
The Box-Cox quantile regression model, using the two-stage method introduced by Chamberlain (1994) and Buchinsky (1995), provides an attractive extension of linear quantile regression techniques. However, a major numerical problem exists when implementing this method which has not been addressed so far in the literature. We suggest a simple solution that modifies the estimator slightly. This modification is easy to implement. The modified estimator is still √n-consistent, and its asymptotic distribution can easily be derived. A simulation study confirms that the modified estimator works well.
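To see where a numerical problem can arise (standard notation, not necessarily the paper's): the Box-Cox transform is applied to the dependent variable, and the conditional quantile of y is recovered by the equivariance of quantiles under monotone transformations,

    \begin{align*}
    y^{(\lambda)} \;=\; \frac{y^{\lambda}-1}{\lambda}, \qquad
    Q_\tau\bigl(y^{(\lambda)} \mid x\bigr) \;=\; x'\beta(\tau)
    \;\;\Longrightarrow\;\;
    Q_\tau(y \mid x) \;=\; \bigl(1 + \lambda\, x'\beta(\tau)\bigr)^{1/\lambda},
    \end{align*}

which is only well defined when 1 + \lambda x'\beta(\tau) > 0; in a two-stage estimation this positivity can fail at intermediate parameter values, the kind of numerical difficulty a modified estimator has to work around.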
This paper investigates the magnitude and the main determinants of share price reactions to buy-back announcements by German corporations. For our comprehensive sample of 224 announcements between May 1998 and April 2003, we find average cumulative abnormal returns of around -7.5% for the thirty days preceding the announcement and around +7.0% for the ten days following the announcement. We regress post-announcement abnormal returns on multiple firm characteristics and provide evidence which supports the undervaluation signaling hypothesis but not the excess cash hypothesis or the tax-efficiency hypothesis. Extending prior empirical work, we also analyze price effects of initial statements by firms that they intend to seek shareholder approval for a buy-back plan. Observed cumulative abnormal returns on this initial date are in excess of 5%, implying a total average price effect between 12% and 15% from implementing a buy-back plan. We conjecture that the German regulatory environment is the main reason why market reactions to buy-back announcements are much stronger in Germany than in other countries, and conclude that initial statements by managers to seek shareholders’ approval for a buy-back plan should also be subject to legal ad-hoc disclosure requirements.
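The quoted numbers are cumulative abnormal returns in the usual event-study sense (standard definitions, reproduced here only for orientation):

    \begin{align*}
    AR_{it} \;=\; R_{it} - \mathrm{E}\bigl[R_{it} \mid \Omega_t\bigr], \qquad
    CAR_i(t_1,t_2) \;=\; \sum_{t=t_1}^{t_2} AR_{it}, \qquad
    \overline{CAR}(t_1,t_2) \;=\; \frac{1}{N}\sum_{i=1}^{N} CAR_i(t_1,t_2),
    \end{align*}

with expected returns taken from a benchmark model (e.g. the market model) and the event window (t_1, t_2) placed around the announcement day.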
This paper analyzes empirically the distribution of unemployment durations in West Germany before and after the changes, during the mid-1980s, in the maximum entitlement periods for unemployment benefits for elderly unemployed. The analysis is based on the comprehensive IAB employment subsample containing register panel data for about 500,000 individuals in West Germany. We analyze two proxies for unemployment, since the data do not precisely measure unemployment in an economic sense. We provide a theoretical analysis of the link between the durations of nonemployment and of unemployment between jobs. Our empirical analysis finds significant changes in the distributions of nonemployment durations for older unemployed individuals. At the same time, the distribution of unemployment durations between jobs did not change in response to the reforms. Our findings are consistent with an interpretation that many firms and workers used the more beneficial laws as part of early retirement packages, but those workers who were still looking for a job did not reduce their search effort in response to the extension of the maximum entitlement periods. This interpretation is consistent with our theoretical model under plausible assumptions. JEL: C24, J64, J65
This paper examines intraday stock price effects and trading activity caused by ad hoc disclosures in Germany. The evidence suggests that the observed stock prices react within 90 minutes after the ad hoc disclosures. Trading volumes take even longer to adjust. We find no evidence for abnormal price reactions or abnormal trading volume before announcements. The bigger the company that announces an ad hoc disclosure, the less severe is the abnormal price effect following the announcement. The number of analysts is negatively correlated with the trading volume effect before the ad hoc disclosure. The higher the trading volume on the last trading day before the announcement, the greater is the price effect after the ad hoc disclosure and the greater the trading volume effect. Keywords: ad hoc disclosure rules, intraday stock price adjustments, market efficiency.
We show that multi-bank loan pools improve the risk-return profile of banks’ loan business. Banks write simple contracts on the proceeds from pooled loan portfolios, taking into account the free-rider problems in joint loan production. Thus, banks benefit greatly from diversifying credit risk while limiting the efficiency loss due to adverse incentives. We present calibration results showing that the formation of loan pools reduces the volatility of default rates in participating banks’ loan portfolios (our proxy for credit risk) by roughly 70% in our sample. Under reasonable assumptions, the gain in return on equity (in certainty equivalent terms) is around 20 basis points annually.
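The certainty-equivalent comparison can be read in the standard mean-variance way (a textbook formulation, not necessarily the paper's exact objective):

    \begin{align*}
    CE(\pi) \;=\; \mathrm{E}[\pi] \;-\; \frac{\gamma}{2}\,\mathrm{Var}(\pi),
    \end{align*}

so a pool that leaves expected loan returns roughly unchanged while cutting the volatility of default rates lowers \mathrm{Var}(\pi) and thereby raises the certainty equivalent; the quoted 20 basis points express this gain in return-on-equity terms.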
Global reserves of coal, oil and natural gas are diminishing; global energy requirements, however, are dramatically increasing. Renewable energy sources lower the threat to the earth’s climate but are not able to meet the energy consumption of major urban areas. The opinion of many experts is that the future will be dominated by hydrogen. However, this gas is at present manufactured essentially entirely from fossil fuels and is hence of limited abundance – not to mention the hazards involved in its utilisation. A novel energy concept involving solar, and thus carbon-independent, hydrogen-based technology necessitates an intermediate storage vehicle for renewable energy. This future energy carrier should be simple to manufacture, be available to an unlimited degree or at least be suitable for recycling, be able to store and transport the energy without hazards, demonstrate a high energy density and release no carbon dioxide or other climatically detrimental substances. Silicon successfully functions as a tailor-made intermediate linking decentrally operating renewable energy-generation technology with equally decentrally organised hydrogen-based infrastructure at any location of choice. In contrast to oil, and in particular hydrogen, the transport and storage of silicon are free from potential hazards and require a simple infrastructure similar to that needed for coal.
This paper compares the accuracy of the credit ratings of Moody's and Standard & Poor's. Based on 11,428 issuer ratings and 350 defaults in several datasets from 1999 to 2003, a slight advantage for the rating system of Moody's is detected. Compared to former research, the robustness of the results is increased by using nonparametric bootstrap approaches. Furthermore, robustness checks are made to control for the impact of Watchlist entries, staleness of ratings and the effect of unsolicited ratings on the results.
National borders in Europe have been opening since 1992 and the Union is expanding to embrace more countries, prompting enterprises to consider alternative and more attractive locations outside their home country to handle part of their activities (Van Dijk and Pellenbarg, 2000; Cantwell and Iammarino, 2002). International relocation is becoming more and more popular even for small and medium-sized firms, which are involved in a growing internationalisation process mirroring the path of multinational enterprises. Italy, like other industrialised countries, is experiencing a fragmentation of the production chain: firms tend to shift highly labour-intensive manufacturing activities to areas characterised by an abundance of low-cost labour (i.e. Central Eastern Europe, India, South East Asia, Latin America, Russia and Central Asia). The internationalisation process of Italian district SMEs has assumed significant dimensions. It has become a relevant topic in recent economic debate because of its consequences for the local context and, in particular, the implications for the survival of the Italian district model (see, among others, Becattini, 2002; Rullani, 1998 and Cor, 2000). The purpose of the paper is twofold: it aims (i) to identify the managerial approaches to the internationalisation process adopted by Italian district SMEs and by the Industrial District (ID) itself, and (ii) to investigate whether international delocalisation to the South Eastern European countries (SEECs) constitutes a threat or an opportunity for the Italian district model. The paper is organised as follows. The general introduction is followed by a description of the evolution of internationalisation processes in Italy over the last three decades. Section three presents a discussion of the internationalisation strategies adopted by Italian SMEs. Section four focuses on the internationalisation process of Italian industrial district SMEs. A review of the studies on the subject is offered in section five. Section six presents a qualitative study of the internationalisation process as undergone by sports shoe manufacturers in the Montebelluna district in north-east Italy. This study identifies different managerial strategies towards the internationalisation process and emphasises that the motivations can evolve over time, from originally cost-saving to increasingly market-oriented or global strategies. On the basis of a literature review, section seven investigates whether internationalisation constitutes a threat (i.e. loss of jobs and knowledge) or an opportunity (i.e. enlargement of the ID, updating the district's competitiveness) for the district model. Finally, some summarising remarks in section eight conclude the paper.
Over the past decade, a variety of studies have shown that other sectors in addition to high technology industries can provide a basis for regional growth and income and employment opportunities. In addition, design-intensive, craft-based, creative industries which operate in frequently changing, fashion-oriented markets have established regional concentrations. Such industries focus on the production of products and services with a particular cultural and social content and frequently integrate new information technologies into their operations and outputs. Among these industries, the media and, more recently, multimedia industries have received particular attention (Brail/ Gertler 1999; Egan/ Saxenian 1999). Especially the film (motion picture) and TV industries have been the focus of a number of studies (e.g. Storper/ Christopherson 1987; Scott 1996). For the purpose of this paper, cultural products industries are defined as those industries which are involved in the commodification of culture, especially those operations that depend for their success on the commercialization of objects and services that transmit social and cultural messages (Scott 1996, p. 306). Empirical studies on the size, structure and organizational attributes of the firms in media-related industry clusters have revealed a number of common characteristics (Scott 1996; Brail/ Gertler 1999; Egan/ Saxenian 1999). Most firms in these industries are fairly young, often existing for only a few years. They also tend to be small in terms of employment. Often, regional clusters of specialized industries are the product of a local growth process which has been driven by innovative local start-ups. In their early stages, many firms have been established by teams of persons rather than by individual entrepreneurs and have relied heavily on owner capital. Another important feature which distinguishes these industries from others is that they concentrate in inner-city instead of suburban locations (Storper/ Christopherson 1987; Eberts/ Norcliffe 1998; Brail/ Gertler 1999). In this study, I provide evidence that the Leipzig media industry shows similar tendencies and characteristics as those displayed by the multimedia and cultural products industry clusters in Los Angeles, San Francisco and Toronto, albeit at a much smaller scale. Cultural products industries are characterized by a strong tendency towards the formation of regional clusters, despite the fact that in some sectors, such as the multimedia industry, technological opportunities (i.e. internet technologies) have seemingly reduced the necessity of proximity in operations between interlinked firms. In fact, it seems that regional concentration tendencies are even more dominant in cultural products industries than in many industries of the "old economy". Cultural products industries have formed particular regional clusters of suppliers, producers and customers which are interlinked within the same commodity chains (Scott 1996; Leslie/ Reimer 1999). These clusters are characterized by a deep social division of labor between vertically-linked firms and patterns of interaction and cooperation in production and innovation. Within close networks of social relations and reflexive collective action, they have developed a strong tendency towards product- and process-related specialization (Storper 1997; Maskell/ Malmberg 1999; Porter 2000). 
In the context of the rise of a new media industry cluster in Leipzig, Germany, I discuss in the next section of this paper those approaches which provide an understanding of complex industrial clustering processes. In these approaches, socio-institutional settings, inter-firm communication and interactive learning play a decisive role in generating regional innovation and growth. However, I will also emphasize that inter-firm networks can have a negative impact on competitiveness if social relations and linkages are too close, too exclusive and too rigid. Leipzig's historical role as a location of media-related businesses will be presented in section 3. As part of this, I will argue the need to view the present cluster of media firms as an independent phenomenon which is not a mere continuation of tradition. In section 4, the start-up and location processes are analyzed which have contributed to the rise of a new media industry cluster in Leipzig during the 1990s. Related to this, section 5 will discuss the role and variety of institutions which have developed in Leipzig and how they support specialization processes. This will be interpreted as a process of re-embedding into a local context. In section 6, I will discuss how media firms have become over-embedded due to their strong orientation towards regional markets. This will be followed by some brief conclusions regarding the growth potential of the Leipzig media industry.
There are few changes in the history of human existence comparable to urbanization in scope and potential to bring about biologic change. "The transition in the developed world from an agricultural to an industrial-urban society has already produced substantial changes in human health, morphology and growth" (Schell, Smith and Bilsborough, 1993, p.1). By the year 2000, about 50% of the world's total population will be living crowded in urban areas, and soon thereafter, by the year 2025, as the global urban population reaches the 5 billion mark, more of the world's population will be living in urban areas. This has enormous health consequences. "By the close of the twenty-first century, more people will be packed into the urban areas of the developing world than are alive on the planet today" (UNCHS (Habitat), 1996, p.xxi). Africa presents a particularly poignant example of the problems involved, as it has the fastest population and urban growth in the world as well as the lowest economic development and growth and many of the poorest countries, especially in Tropical Africa. "Thus it exemplifies in stark reality many of the worst difficulties of urban health and ecology" (Clarke, 1993, p.260). This essay therefore analyses the trends of urbanization in Africa. This is followed by an overview of the environmental conditions of Africa's towns and cities. The subsequent section explores the links between the urban environment and health. Although the focus is on physical hazards, it is important to note that the social milieu is also vital in the reproduction of health. The paper concludes by providing some policy recommendations.
Characterised as the "mighty capital of the eurozone" (Sassen 1999, 83), Frankfurt is said to be a rising world city, primarily due to its financial centre. This is reflected in the use of such common catchphrases as "Bankfurt" and "Mainhattan" for the city, as well as in scientific publications. As Ronneberger and Keil (1995, 305) state, for instance, "a service economy [...] mastered by the finance sector forms the basis for the continuing integration of Frankfurt into the international market". Frankfurt is the most important German financial centre and one of the most important in Europe. Thirteen of the 30 largest German banks and about two thirds of Germany's foreign banks are based here. Frankfurt's stock exchange (ranked 4th in the world) is by far the biggest in Germany, with a turnover share of more than 80%. Its derivatives exchange (Eurex) aims to become the biggest in the world. As the host city of the European Central Bank, Frankfurt is also the centre of European monetary policy. Frankfurt is a major node in the global financial network today, and its specific functions within this network will be investigated in this paper. Unlike most other predominant national financial centres, Frankfurt has not continuously held this position in Germany since the Middle Ages: it regained its position from Berlin only after World War II. In contrast to the static phenomenon of the financial centre, which is well covered in the literature, the emergence and development of financial centres is not as well understood. The study of the development of the financial centre Frankfurt after World War II gives insights into the dynamics of the self-reinforcing mechanisms within financial centres, the second topic covered in the paper. The paper is organised as follows: the remainder of this chapter looks at the method used in this study and the theory of financial centres, with an emphasis on the basic approaches to the emergence of financial centres. After that, it is asked whether Frankfurt meets the basic requirements for the concept of path dependence, i.e. whether there are self-reinforcing mechanisms. After a positive answer to that question, the development of Frankfurt as a financial centre is discussed in chapter two, as well as its role as a node in the world (financial) system today. Chapter three provides some more or less speculative remarks about Frankfurt's future; the last chapter briefly summarises the findings of the paper.
One of the most important but less well understood phenomena at the beginning of the 21st century has been a shift toward knowledge-based economic activity in the comparative advantage of modern industrialized countries. Two broad trends have been observed in the global economy: the output from the world's science and technology system has been growing rapidly, and the nature of investment has changed (MILLER, 1996). The relative proportions of physical and intangible investment have changed considerably, with a relative increase of intangible investments since the 1980s. In addition, there has been increased complementarity between physical and intangible investments and a more important role of high technology in both kinds of investment (MILLER, 1996). Even in the newly industrialized countries, the growth of technology-intensive industries, the increase of R&D activities and the growth of knowledge-intensive producer services have been common features in recent years. In this change of the structure of productive assets, knowledge has in recent years come to be recognized as the most fundamental resource (OECD, 1996; WORLD BANK, 1998). The development of information and communication technology (ICT) and the trend toward globalisation have promoted this shift toward the knowledge-based economy.
The globalisation of contemporary capitalism is bringing about at least two important implications for the emergence and significance of business services. First, the social division of labour steadily increases (ILLERIS 1996). Within the complex organisation of production and trade, new intermediate actors emerge, either from the externalisation of existing functions in the course of corporate restructuring or from the fragmentation of the production chain into newly defined functions. Second, competitive advantages of firms increasingly rest on their ability to innovate and learn. As global communication erodes knowledge advantages more quickly, product life cycles shorten, and permanent organisational learning proves crucial for the creation and maintenance of competitiveness. Intra- and inter-organisational relations of firms are now the key assets for learning and reflexivity (STORPER 1997). These two aspects of globalisation help to understand why management consulting - as only one among other knowledge-intensive business services (KIBS) - has been experiencing such a boost throughout the last two decades. Over the last ten years, the business has grown by 10% annually on average in Europe. Management consulting can be seen, first, as a new organisational intermediary and, second, as an agent of change and reflexivity for business organisations. Although the KIBS industry may not account for a great share of national GDP, its impact on national economies should not be underestimated. Estimates suggest that today up to 80% of the value added to industrial products stems from business services (ILLERIS 1996). Economic geographers have been paying more attention to KIBS since the late 1970s, focusing on the transformation of the spatial economy through the emerging business services. This market survey is conceived as a first step of a research programme on the internationalisation of management consulting and as a contribution to the lively debate in economic geography. The management consulting industry is unbounded in many ways: there are only scarce institutional boundaries, low barriers to entry, a very heterogeneous supply structure and multiple forms of transaction. Official statistics have not yet provided means of capturing this market, which may be why research and literature on this business are rather sparse. The following survey is an attempt to selectively compile existing material, empirical studies and statistics in order to draw a sketchy picture of the European market, its institutional constraints, agents and dynamics. German examples will be employed to pursue arguments in more depth.