In this paper, we show the pivotal role business owners play in estimating the importance of the precautionary saving motive. The fact that business owners hold higher-than-average wealth while facing higher income risk than other households leads to a correlation between wealth and labor income risk regardless of whether or not a precautionary motive is important. Using data from the Panel Study of Income Dynamics in the 1980s and the 1990s, we show that within separate samples of both business owners and non-business owners the size of precautionary savings with respect to labor income risk is modest and accounts for less than ten percent of total household wealth. However, pooling together these two groups leads to an artificially high estimate of the importance of precautionary savings. Data from the Survey of Consumer Finances further confirm that precautionary savings account for less than ten percent of total wealth for both business owners and non-business owners. Thus, while a precautionary saving motive exists and affects all households, it does not give rise to high amounts of wealth in the economy, particularly among those households who face the most volatile labor earnings. JEL Classification: D91
We evaluate the importance of the precautionary saving motive by relying on a direct question about precautionary wealth from the 1995 and 1998 waves of the Survey of Consumer Finances. In this survey, a new question has been designed to elicit the amount of desired precautionary wealth. This allows us to assess the amount of precautionary accumulation and to overcome many of the problems of previous work on this topic. We find that a precautionary saving motive exists and affects virtually every type of household. However, precautionary savings account for only 8 percent of total wealth holdings. Even though this motive does not give rise to large amounts of wealth, particularly for young and middle-aged households, it is especially important for two groups: older households and business owners. Overall, we provide strong evidence that we need to take the precautionary saving motive into account when modeling saving behavior. JEL Classification: D91, E21, C21
Several recent studies have addressed household participation in the stock market, but relatively few have focused on household stock trading behavior. Household trading is important for the stock market, as households own more than 40% of the NYSE capitalization directly and can also influence trading patterns of institutional investors by adjusting their indirect stock holdings. Existing studies based on administrative data offer conflicting results. Discount brokerage data show excessive trading to the detriment of stockholders, while data on retirement accounts indicate extreme inactivity. This paper uses data representative of the population to document the extent of household portfolio inertia and to link it to household characteristics and to stock market movements. We document considerable portfolio inertia, as regards both changing stockholding participation status and trading stocks, and find that specific household characteristics contribute to the tendency to exhibit such inertia. Although our findings suggest some dependence of trading directly-held equity through brokerage accounts on the performance of the stock market index, they do not indicate that the recent expansion in the stockholder base and the experience of the stock market downswing have significantly altered the overall propensity of households to trade in stocks or to switch participation status in a way that could contribute to stock market instability. JEL Classification: G110, E210
This paper compares the boom-bust cycle in Finland and Sweden 1984-1995 with the average boom-bust pattern in industrialized countries as calculated from an international sample for the period 1970-2002. Two clear conclusions emerge. First, the Finnish-Swedish experience is much more volatile than the average boom-bust pattern. This holds for virtually every time series examined. Second, the bust and the recovery in the two Nordic countries differ markedly more from the international pattern than the boom phase does. The bust is considerably deeper and the recovery comes earlier and is more rapid. We explain the highly volatile character of the Finnish and Swedish boom-bust episode by the design of economic policies in the 1980s and 1990s. The boom-bust cycle in Finland and Sweden 1984-1995 was driven by financial liberalization and a hard currency policy, causing large pro-cyclical swings in the real rate of interest transmitted via the financial sector into the real sector and then into the public finances. JEL Classification: E32, E62, E63
We model the impact of bank mergers on loan competition, reserve holdings and aggregate liquidity. A merger changes the distribution of liquidity shocks and creates an internal money market, leading to financial cost efficiencies and more precise estimates of liquidity needs. The merged banks may increase their reserve holdings through an internalization effect or decrease them because of a diversification effect. The merger also affects loan market competition, which in turn modifies the distribution of bank sizes and aggregate liquidity needs. Mergers among large banks tend to increase aggregate liquidity needs and thus the public provision of liquidity through monetary operations of the central bank. JEL Classification: G24, G32, G34
One of the most acute problems in the world today is providing a decent standard of living for the elderly. Population aging, driven by declining birth rates and increasing life expectancy, now affects all countries of the world, developed economies as well as countries like Russia. Consequently, reforming traditional pension systems to deal with this changing situation has become an important issue around the world. These reforms typically center on the implementation of some form of funding of future pension benefits. This also holds for Russia, where pension reform legislation of 1995 introduced the so-called “accumulation pension”. Against this background, this article deals with the establishment of mutual funds, the legal aspects of their operation and their investment opportunities. It also offers a comparative analysis of mutual funds and other forms of public investment vehicles, namely Common Funds of Bank Management, Voucher Investment Funds and Joint-stock Investment Funds.
Open-end real estate funds are of particular importance in the German bank-dominated financial system. However, the German open-end fund industry recently came under severe distress, which triggered a broad discussion of required regulatory interventions. This paper gives a detailed description of the institutional structure of these funds and of the events that led to the crisis. Furthermore, it applies recent banking theory to open-end real estate funds in order to understand why the open-end fund structure was so prevalent in Germany. Based on these theoretical insights we evaluate the various policy recommendations that have been raised.
In this paper, I tackle the question whether one share - one vote should become a European law rule. I examine, first of all, the economic theory concerning one share - one vote and its optimality, and the law and economics literature on dual class recapitalizations and other deviations from one share - one vote. I also consider the agency costs of deviations from one share - one vote and examine whether they justify regulation. I subsequently analyze the rules implementing the one share - one vote standard in the US and Europe. In particular, I analyze the self-regulatory rules of US exchanges, the relevant provisions of the European Takeover Directive (including the well-known breakthrough rule), and the European Court of Justice's position on golden shares (which also are deviations from the one share - one vote standard). I conclude that one share - one vote is not justified by economic efficiency, as also confirmed by comparative law. Likewise, the European breakthrough rule, which ultimately strikes down all deviations from one share - one vote, does not appear to be well grounded. Only transparency rules appear to be justified at EU level, as disclosure of ownership and voting structures serves a pricing and governance function, while harmonisation of the relevant rules reduces transaction costs in integrated markets.
One of the dangers of harmonisation and unification processes taking place within the framework of the EU is that they may result in the codification of the lowest common denominator. This is precisely what is threatening to happen in respect of assignment. Referring the transfer of receivables by way of assignment to the law of the assignor’s residence, as article 13 of the Proposal does, would be opting for the most conservative solution and would for many Member States be a step backward rather than forward. A conflict rule referring assignment to the law of the assignor's residence is too rigid to do justice to the dynamic nature of assignments in cross-border transactions, and it is unjustly one-sided. It offers no real advantages when compared to other conflict rules; it even has serious disadvantages which make the conflict rule unsuitable for efficient assignment-based cross-border transactions. It is not inconceivable that this conflict rule would even be contrary to the fundamental freedoms of the EC Treaty. The Community legislators in particular should be careful not to needlessly adopt rules which create insurmountable obstacles for cross-border business where choice of law by the parties would do perfectly well. Community legislation has a special responsibility to create a smooth legal environment for single market transactions.
This paper will look at the changing nature of asset management, and will examine in that light the nature of the European framework for collective investment undertakings, enshrined in the UCITS Directive. The question whether the UCITS Directive in its current form remains an appropriate European response to the changing investment management landscape is an issue with which the European Commission is actively engaging through its Green Paper on the Enhancement of the EU Framework for Investment Funds, published in July 2005. But before considering these important questions, it is necessary to begin with an idea of what a collective investment undertaking, and more specifically a UCITS, actually is and how it fits conceptually in the broader world of pooled investments...
We analyze the degree of contract completeness with respect to staging of venture capital investments using a hand-collected German data set of contract data from 464 rounds into 290 entrepreneurial firms. We distinguish three forms of staging (pure milestone financing, pure round financing and mixes). Contract completeness decreases when moving from pure milestone financing via mixes to pure round financing. We show that the decision for a specific form of staging is determined by the expected distribution of bargaining power between the contracting parties when new funding becomes necessary and by the predictability of the development process. More precisely, parties choose more complete contracts the lower the entrepreneur's expected bargaining power, with the maximum level depending on the predictability of the development process. JEL Classification: G24, G32, D86, D80, G34
This paper investigates whether the stock market reacts to unsolicited ratings for a sample of S&P rated firms from January 1996 to December 2005. We first analyze the stock market reaction associated with the assignment of an initial unsolicited rating. We find evidence that this reaction is negative and particularly accentuated for Japanese firms. A comparison of S&P’s initial unsolicited ratings with previously published ratings of two Japanese rating agencies for a Japanese subsample shows that ratings assigned by S&P are systematically worse. Further, we find that the stock market does not react to the transition from an unsolicited to a solicited rating. A comparison of the upgrades in the sample with a matched sample of upgrades of solicited ratings reveals that the price reactions are no different. In addition, abnormal returns are worse for firms whose rating remained unchanged after the solicitation than for upgraded firms. Finally, we find that Japanese firms are less likely to receive an upgrade. Our findings suggest that unsolicited ratings are biased downwards, and that the capital market therefore expects upgrades of formerly unsolicited ratings and punishes firms whose ratings remain unchanged. All these effects seem to be more pronounced for Japanese firms.
In this paper, we propose a model of credit rating agencies using the global games framework to incorporate information and coordination problems. We introduce a refined utility function of a credit rating agency that, in addition to reputation maximization, also embeds aspects of competition and feedback effects of the rating on the rated firms. Apart from hinting at explanations for several hypotheses with regard to agencies' optimal rating assessments, our model suggests that the existence of rating agencies may decrease the incidence of multiple equilibria. If investors have discretionary power over the precision of their private information, we can prove that public rating announcements and private information collection are complements rather than substitutes in order to secure uniqueness of equilibrium. In this respect, rating agencies may spark off a virtuous circle that increases the efficiency of the market outcome.
Using data on US domestic mergers and acquisitions transactions, this paper shows that acquirers have a preference for geographically proximate target companies. We measure the ‘home bias’ against benchmark portfolios of hypothetical deals in which the potential targets consist of firms of similar size in the same four-digit SIC code that were targets in other transactions at about the same time, or firms that were listed on a stock exchange at that time. There is a strong and consistent home bias in US M&A transactions, which declined significantly over the observation period, i.e. between 1990 and 2004. At the same time, the average distance between target and acquirer increased markedly. The home bias is stronger for small and relatively opaque target companies, suggesting that local information is the decisive factor in explaining the results. Acquirers that diversify into new business lines also display a stronger preference for more proximate targets. With an event study we show that investors react relatively better to proximate acquisitions than to distant ones. This reaction is more pronounced and becomes statistically significant in times when the average distance between target and acquirer is larger, but it never becomes economically significant. We interpret this as evidence for the familiarity hypothesis brought forward by Huberman (2001): acquirers know about the existence of proximate targets and are more likely to merge with them without necessarily being better informed. However, when comparing the best and the worst deals, we are able to show a dramatic difference in distances and home bias: the most successful deals display on average a much stronger home bias and a distinctly smaller distance between acquirer and target than the least successful deals. Proximity in M&A transactions is therefore a necessary but not a sufficient condition for success. The paper contributes to the growing literature on the role of distance in financial decisions.
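The home-bias measure described above compares actual acquirer-target distances against a benchmark of hypothetical deals. A minimal sketch of that idea follows; the haversine distance, the coordinates, and the `home_bias` formula (one minus the ratio of mean actual to mean benchmark distance) are illustrative assumptions, not the authors' actual procedure.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def home_bias(actual_deals, benchmark_deals):
    """Home bias = 1 - (mean actual distance / mean benchmark distance).

    Positive values mean acquirers pick closer targets than a benchmark
    of comparable hypothetical deals would suggest.
    """
    def mean_distance(deals):
        return sum(haversine_km(*a, *t) for a, t in deals) / len(deals)
    return 1.0 - mean_distance(actual_deals) / mean_distance(benchmark_deals)

# Toy illustration: acquirer in New York, actual target in Boston,
# benchmark targets in Chicago and Los Angeles (hypothetical deals).
ny, boston = (40.71, -74.01), (42.36, -71.06)
chicago, la = (41.88, -87.63), (34.05, -118.24)
bias = home_bias([(ny, boston)], [(ny, chicago), (ny, la)])
```

With these coordinates the actual deal is far closer than the benchmark average, so the measure is close to one.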
We estimate the effect of pension reforms on households' expectations of retirement outcomes and private wealth accumulation decisions, exploiting a decade of intense Italian pension reforms as a source of exogenous variation in expected pension wealth. The Survey of Household Income and Wealth, a large random sample of the Italian population, elicits expectations of the age at which workers expect to retire and of the ratio of pension benefits to pre-retirement income between 1989 and 2002. We find that workers have revised expectations in the direction suggested by the reform and that there is substantial offset between private wealth and perceived pension wealth, particularly among workers who are better informed about their pension wealth. JEL Classification: E21, H55
We present a multivariate generalization of the mixed normal GARCH model proposed in Haas, Mittnik, and Paolella (2004a). Issues of parametrization and estimation are discussed. We derive conditions for covariance stationarity and the existence of the fourth moment, and provide expressions for the dynamic correlation structure of the process. These results are also applicable to the single-component multivariate GARCH(p, q) model and simplify the results existing in the literature. In an application to stock returns, we show that the disaggregation of the conditional (co)variance process generated by our model provides substantial intuition, and we highlight a number of findings with potential significance for portfolio selection and further financial applications, such as regime-dependent correlation structures and leverage effects. JEL Classification: C32, C51, G10, G11
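The full multivariate model is beyond a short sketch, but the univariate two-component special case conveys the idea: each mixture component carries its own GARCH(1,1) variance recursion, and every return is drawn from one component at random. All parameter values and the function name below are illustrative assumptions, not taken from the paper.

```python
import random

def mixed_normal_garch(n, weights=(0.5, 0.5), omega=(0.05, 0.10),
                       alpha=(0.05, 0.10), beta=(0.90, 0.80), seed=0):
    """Simulate a univariate two-component mixed-normal GARCH(1,1).

    Each component k follows its own variance recursion
        h_k[t] = omega[k] + alpha[k] * r[t-1]**2 + beta[k] * h_k[t-1],
    and each return is drawn from component k with probability weights[k].
    """
    rng = random.Random(seed)
    h = [o / (1.0 - b) for o, b in zip(omega, beta)]  # rough starting values
    r_prev, returns = 0.0, []
    for _ in range(n):
        # update every component's conditional variance from the last return
        h = [o + a * r_prev ** 2 + b * hk
             for o, a, b, hk in zip(omega, alpha, beta, h)]
        # draw the active mixture component, then the return
        k = 0 if rng.random() < weights[0] else 1
        r_prev = rng.gauss(0.0, h[k] ** 0.5)
        returns.append(r_prev)
    return returns

rets = mixed_normal_garch(5000)
```

The parameters are chosen so that each component has a finite unconditional variance; the simulated series exhibits the volatility clustering the model is designed to capture.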
When a spot market monopolist has a position in a corresponding futures market, he has an incentive to deviate from the spot market optimum to make this position more profitable. Rational futures market makers take this into account when setting prices. We show that the monopolist, by randomizing his futures market position, can strategically exploit his market power at the expense of other futures market participants. Furthermore, traders without market power can manipulate futures prices by hiding their orders behind the monopolist's strategic trades. The moral hazard problem stemming from spot market power thus provides an avenue for strategic trading and manipulation that parallels the adverse selection problem stemming from inside information. JEL Classification: D82, G13
We study a set of German open-end mutual funds for a time period during which this industry emerged from its infancy. In those years, the distribution channel for mutual funds was dominated by the brick-and-mortar retail networks of the large universal banks. Using monthly observations from 12/1986 through 12/1998, we investigate whether cross-sectional return differences across mutual funds affect their market shares. Although such a causal relation has been established in highly competitive markets, such as the United States, the rigid distribution system in place in Germany at the time may have caused retail performance and investment performance to uncouple. In fact, although we observe stark differences in investment performance across mutual funds (and over time), we find no evidence that cross-sectional performance differences affect the market shares of these funds. JEL Classification: G23
Large banks often sell part of their loan portfolio in the form of collateralized debt obligations (CDO) to investors. In this paper we raise the question whether credit asset securitization affects the cyclicality (or commonality) of bank equity values. The commonality of bank equity values reflects a major component of systemic risks in the banking market, caused by correlated defaults of loans in the banks' loan books. Our simulations take into account the major stylized fact of CDO transactions, the non-proportional nature of risk sharing that goes along with tranching. We provide a theoretical framework for the risk transfer through securitization that builds on a macro risk factor and an idiosyncratic risk factor, allowing an identification of the types of risk that the individual tranche holders bear. This allows conclusions about the risk positions of issuing banks after risk transfer. Building on the strict subordination of tranches, we first evaluate the correlation properties both within and across risk classes. We then determine the effect of securitization on the systematic risk of all tranches, and derive its effect on the issuing bank's equity beta. The simulation results show that under plausible assumptions concerning bank reinvestment behaviour and capital structure choice, the issuing intermediary's systematic risk tends to rise. We discuss the implications of our findings for financial stability supervision. JEL Classification: G28
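The macro-plus-idiosyncratic risk structure with strict subordination of tranches can be illustrated with a one-factor Monte Carlo sketch. Everything below (default probability, factor loading, attachment points, zero recovery) is a hypothetical parameterization, not the paper's calibration.

```python
from statistics import NormalDist
import random

def tranche_expected_losses(n_sims=5000, n_loans=100, pd=0.02, rho=0.2,
                            tranches=((0.00, 0.03), (0.03, 0.10), (0.10, 1.00)),
                            seed=1):
    """One-factor Monte Carlo: loan i defaults when
    sqrt(rho)*M + sqrt(1-rho)*eps_i falls below the threshold implied by pd,
    where M is the macro factor and eps_i is idiosyncratic.
    Returns the expected loss fraction of each tranche under strict
    subordination (zero recovery assumed for simplicity)."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(pd)
    s_m, s_i = rho ** 0.5, (1.0 - rho) ** 0.5
    avg = [0.0] * len(tranches)
    for _ in range(n_sims):
        m = rng.gauss(0.0, 1.0)                  # macro (systematic) factor
        defaults = sum(1 for _ in range(n_loans)
                       if s_m * m + s_i * rng.gauss(0.0, 1.0) < threshold)
        loss = defaults / n_loans                # portfolio loss fraction
        for j, (attach, detach) in enumerate(tranches):
            # loss borne by tranche j, as a fraction of its notional
            avg[j] += min(max(loss - attach, 0.0), detach - attach) / (detach - attach)
    return [total / n_sims for total in avg]

equity, mezzanine, senior = tranche_expected_losses()
```

Strict subordination shows up directly in the output: the equity tranche absorbs losses first and bears the bulk of them, while the senior tranche is hit only in the rare states where the macro factor drives many correlated defaults.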
In this paper, we consider expected value, variance and worst-case optimization of nonlinear models. We present algorithms, based on iterative Taylor expansions, for computing optimal expected values and variances. We establish convergence and consider the relative merits of policies based on expected value optimization and worst-case robustness. The latter is a minimax strategy and ensures optimal cover against the worst-case scenario(s), while the former delivers optimal expected performance in a stochastic setting. Both approaches are applied to a macroeconomic policy model to illustrate their relative performance, robustness and the trade-offs between the strategies. JEL Classification: C61, E43
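The contrast between expected-value optimization and worst-case (minimax) robustness can be made concrete on a stylized one-instrument policy problem. The quadratic loss, the scenario values and the grid search below are illustrative assumptions, not the paper's Taylor-expansion algorithms.

```python
def loss(x, theta):
    """Stylized quadratic policy loss: squared deviation of the outcome
    theta * x from a target of 1, plus a small cost of using the instrument."""
    return (theta * x - 1.0) ** 2 + 0.1 * x ** 2

def optimal_policy(scenarios, probs=None):
    """Grid search over x in [0, 3]: minimize expected loss when scenario
    probabilities are given, otherwise minimize the worst-case (minimax) loss."""
    grid = [i / 1000 for i in range(3001)]
    if probs is not None:
        objective = lambda x: sum(p * loss(x, t) for p, t in zip(probs, scenarios))
    else:
        objective = lambda x: max(loss(x, t) for t in scenarios)
    return min(grid, key=objective)

scenarios = [0.5, 1.0, 1.5]                      # uncertain model parameter
x_ev = optimal_policy(scenarios, probs=[1/3, 1/3, 1/3])
x_wc = optimal_policy(scenarios)                 # worst-case robust policy
```

The minimax policy equalizes the losses of the two extreme scenarios, whereas the expected-value policy trades them off probabilistically, so the two prescriptions differ even in this tiny example.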
Market efficiency today
(2006)
This CFS Working Paper was presented at the CFS symposium "Market Efficiency Today" held in Frankfurt/Main on October 6, 2005. In 2004, the Center for Financial Studies (CFS), in cooperation with the Johann Wolfgang Goethe University, Frankfurt/Main, established an international academic prize, known as the Deutsche Bank Prize in Financial Economics. The prize honors an internationally renowned researcher who has excelled through influential contributions to research in the fields of finance, money and macroeconomics, and whose work has led to practice- and policy-relevant results. The Deutsche Bank Prize in Financial Economics was awarded for the first time in October 2005. The prize, sponsored by the Stiftungsfonds Deutsche Bank im Stifterverband für die Deutsche Wissenschaft, carries a cash award of €50,000. It is awarded every two years, and the prize holder is appointed a "Distinguished Fellow" of the CFS. The role of media partner for the Deutsche Bank Prize in Financial Economics is filled by the internationally renowned publication The Economist and by Handelsblatt, the leading German-language financial and business newspaper.
Latent-variable approaches that treat conditional variance as unobservable, such as GARCH and stochastic-volatility models, have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application to S&P 500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement of the model's fit as well as its predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting. JEL Classification: C22, C51, C52, C53
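Constructing an "observable" realized volatility series from intraday data reduces, in its simplest form, to summing squared intraday log returns for each day. The sketch below illustrates that construction; the 5-minute prices are hypothetical numbers, not data from the paper.

```python
import math

def realized_volatility(prices):
    """Daily realized volatility: the square root of the sum of squared
    intraday log returns computed from a day's price path."""
    returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return math.sqrt(sum(r * r for r in returns))

# One toy trading day of 5-minute index futures prices (hypothetical numbers).
prices = [100.0, 100.2, 99.9, 100.4, 100.1, 100.3]
rv = realized_volatility(prices)
```

Computing this quantity day by day yields the realized volatility series to which standard time-series models such as ARFIMA can then be fitted.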
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb, which is locally bottom-avoiding. We use a small-step operational semantics in the form of a normal order reduction. As equational theory we use contextual equivalence, i.e. two terms are equal if, when plugged into an arbitrary program context, their termination behaviour is the same. We use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We develop several proof tools for proving the correctness of program transformations. We provide a context lemma for may- as well as must-convergence which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. In contrast to other approaches, neither our syntax nor our semantics makes use of a heap for sharing expressions. Instead, we represent shared expressions explicitly via letrec-bindings.
Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf, and Bot denoting all expressions, all lists without a last Nil as tail, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of the subset relationship of co-inductively defined sets by using algorithms and results from tree automata. This shows decidability of the test for set inclusion, which is required by certain strictness analysis algorithms in lazy functional programming languages.
Extending the method of Howe, we establish a large class of untyped higher-order calculi, in particular calculi with call-by-need evaluation, for which similarity, also called applicative simulation, can be used as a proof tool for showing contextual preorder. The paper also demonstrates that Mann’s approach using an intermediate “approximation” calculus scales up well from a basic call-by-need non-deterministic lambda calculus to more expressive lambda calculi. That is, it is demonstrated that, after transferring the contextual preorder of a non-deterministic call-by-need lambda calculus to its corresponding approximation calculus, it is possible to apply Howe’s method to show that similarity is a precongruence. The transfer itself is not treated in this paper. The paper also proposes an optimization of the similarity test by cutting off redundant computations. Our results also apply to deterministic or non-deterministic call-by-value lambda calculi, and improve upon previous work insofar as it is proved that only closed values are required as arguments for similarity testing instead of all closed expressions.
The paper examines challenges in effectively implementing the lender-of-last-resort function in the EU single financial market. Briefly highlighted are features of the EU financial landscape that could increase EU systemic financial risk. Briefly described are the complexities of the EU’s financial-stability architecture for preventing and resolving financial problems, including lender-of-last-resort operations. The paper examines how the lender-of-last-resort function might materialize during a systemic financial disturbance affecting more than one EU Member State. The paper identifies challenges and possible ways of enhancing the effectiveness of the existing architecture.
The assumption that mankind is able to influence global or regional climate through the emission of greenhouse gases is often discussed. This assumption is both very important and very obscure. In consequence, it is necessary to clarify definitively which meteorological elements (climate parameters) are influenced by the anthropogenic climate impact, and to what extent and in which regions of the world. In addition, to be able to interpret such information properly, it is also necessary to know the magnitude of the different climate signals due to natural variability (for example due to volcanic or solar activity) and the magnitude of stochastic climate noise. The usual tool of climatologists, general circulation models (GCMs), suffers from the problem that such models are at least quantitatively uncertain with regard to the regional patterns of the behaviour of climate elements, and from the lack of accurate information about long-term (decadal and centennial) forcing. In contrast, statistical methods as used in this study have the advantage of testing hypotheses directly on observational data. We therefore focus on the reality of climate variability as it has occurred in the past. We apply two strategies of time series analysis to the observed climate variables under consideration. First, each time series is split into its variation components; this procedure is called 'structure-oriented time series separation'. The second strategy, called 'cause-oriented time series separation', matches various time series representing various forcing mechanisms with those representing the climate behaviour (climate elements). In this way it can be assessed which part of observed climate variability can be explained by this (combined) forcing and which part remains unexplained.
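A structure-oriented separation in its simplest form splits a series into a trend, a smoothed low-frequency component and residual noise. The sketch below (linear least-squares trend plus a moving average of the detrended series) is a toy stand-in for the study's method; the window length and the test series are arbitrary assumptions.

```python
import math

def separate(series, window=11):
    """Toy structure-oriented separation of a time series into a linear
    trend, a smoothed low-frequency component of the detrended series,
    and residual noise. The three parts sum back to the original series."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    # ordinary least-squares slope of the linear trend
    slope = (sum((i - t_mean) * (y - y_mean) for i, y in enumerate(series))
             / sum((i - t_mean) ** 2 for i in range(n)))
    trend = [y_mean + slope * (i - t_mean) for i in range(n)]
    detrended = [y - tr for y, tr in zip(series, trend)]
    # centered moving average as the low-frequency component
    half = window // 2
    smooth = []
    for i in range(n):
        win = detrended[max(0, i - half):i + half + 1]
        smooth.append(sum(win) / len(win))
    noise = [d - s for d, s in zip(detrended, smooth)]
    return trend, smooth, noise

# Hypothetical series: a warming trend plus a quasi-cyclical component.
series = [0.01 * i + 0.5 * math.sin(i / 8) for i in range(120)]
trend, smooth, noise = separate(series)
```

By construction the decomposition is exact: adding the three components recovers the input series, which is the defining property of such a separation.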
This paper makes a case for the future development of European corporate law through regulatory competition rather than EC legislation. It is for the first time becoming legally possible for firms within the EU to select the national company law that they wish to govern their activities. A significant number of firms can be expected to exercise this freedom, and national legislatures can be expected to respond by seeking to make their company laws more attractive to firms. Whilst the UK is likely to be the single most successful jurisdiction in attracting firms, the presence of different models of corporate governance within Europe makes it quite possible that competition will result in specialisation rather than convergence, and that no Member State will come to dominate as Delaware has done in the US. Procedural safeguards in the legal framework will direct the selection of laws which increase social welfare, as opposed simply to the welfare of those making the choice. Given that European legislators cannot be sure of the ‘optimal’ model for company law, the future of European company law-making would be better left to Member States than take the form of harmonized legislation.
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid waterlogging and to manage high groundwater tables. Among other impacts, it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5' by 5' grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries, the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 of the 116 countries with information on artificial drainage areas could sub-national information be taken into account. Due to this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and discusses the data uncertainty.
We find that on average consumers chose the contract that ex post minimized their net costs. A substantial fraction of consumers (about 40%) nevertheless chose the ex post sub-optimal contract, with some incurring hundreds of dollars of avoidable interest costs. Nonetheless, the probability of choosing the sub-optimal contract declines with the dollar magnitude of the potential error, and consumers with larger errors were more likely to subsequently switch to the optimal contract. Thus most of the errors appear not to have been very costly, with the exception that a small minority of consumers persists in holding substantially sub-optimal contracts without switching. JEL Classification: G11, G21, E21, E51
Using a set of regional inflation rates, we examine the dynamics of inflation dispersion within the U.S., within Japan, and across U.S. and Canadian regions. We find that inflation rate dispersion is significant throughout the sample period in all three samples. Based on methods applied in the empirical growth literature, we provide evidence in favor of significant mean reversion (β-convergence) in inflation rates in all considered samples. The evidence on σ-convergence is mixed, however. Observed declines in dispersion are usually associated with decreasing overall inflation levels, which indicates a positive relationship between mean inflation and overall inflation rate dispersion. Our findings for the within-distribution dynamics of regional inflation rates show that dynamics are largest for Japanese prefectures, followed by U.S. metropolitan areas. For the combined U.S.-Canadian sample, we find a pattern of within-distribution dynamics comparable to that found for regions within the European Monetary Union (EMU). In line with findings in the so-called 'border literature', these results suggest that frictions across European markets are at least as large as those across, e.g., North American markets. JEL Classification: E31, E52, E58
Using a unique data set of regional inflation rates, we examine the extent and dynamics of inflation dispersion in major EMU countries before and after the introduction of the euro. For both periods, we find strong evidence in favor of mean reversion (β-convergence) in inflation rates. However, half-lives to convergence are considerable and seem to have increased after 1999. The results indicate that the convergence process is nonlinear in the sense that its speed declines the further convergence has proceeded. An examination of the dynamics of overall inflation dispersion (σ-convergence) shows a decline in dispersion in the first half of the 1990s. For the second half of the 1990s, no further decline can be observed; at the end of the sample period, dispersion has even increased. The existence of large persistence in European inflation rates is confirmed when distribution dynamics methodology is applied. At the end of the paper we present evidence on the sustainability of the ECB's inflation target of an EMU-wide average inflation rate below, but close to, 2%. JEL Classification: E31, E52, E58
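A minimal sketch of the β-convergence logic described above: pool the regional series, regress the change in inflation on its lagged level, and translate a negative slope into a half-life via the implied AR(1) process. The function names and the plain pooled-OLS simplification are illustrative assumptions; the paper's actual estimators involve panel methods and proper inference.

```python
import math

def beta_convergence(series_list):
    """Pooled OLS slope of the change in regional inflation on its lagged
    level.  A significantly negative slope indicates mean reversion
    (beta-convergence).  `series_list` holds one inflation time series
    (list of floats) per region."""
    xs, ys = [], []
    for series in series_list:
        for t in range(1, len(series)):
            xs.append(series[t - 1])              # lagged level
            ys.append(series[t] - series[t - 1])  # change
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    return cov_xy / var_x

def half_life(beta):
    """Periods until half of an initial deviation has decayed, from the
    AR(1) representation pi_t = (1 + beta) * pi_{t-1} + e_t."""
    return math.log(0.5) / math.log(1.0 + beta)
```

With a slope of, say, -0.2, the implied half-life is ln(0.5)/ln(0.8), roughly three periods.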
The paper documents lack of awareness of financial assets in the 1995 and 1998 Bank of Italy Surveys of Household Income and Wealth. It then explores the determinants of awareness and finds that the probability that survey respondents are aware of stocks, mutual funds and investment accounts is positively correlated with education, household resources, long-term bank relations and proxies for social interaction. Lack of financial awareness has important implications for understanding the stockholding puzzle and for estimating stock market participation costs. JEL Classification: E2, D8, G1
The theory of intertemporal consumption choice makes sharp predictions about the evolution of the entire distribution of household consumption, not just about its conditional mean. In the paper, we study the empirical transition matrix of consumption using a panel drawn from the Bank of Italy Survey of Household Income and Wealth. We estimate the parameters that minimize the distance between the empirical and the theoretical transition matrix of the consumption distribution. The transition matrix generated by our estimates matches the empirical matrix remarkably well, both in the aggregate and in samples stratified by education. Our estimates strongly reject the consumption insurance model and suggest that households smooth income shocks to a lesser extent than implied by the permanent income hypothesis. JEL Classification: D52, D91, I30
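The empirical transition matrix at the heart of such a minimum-distance exercise can be sketched as follows: sort households into consumption quantile bins in two adjacent waves and count the transitions between bins. The binning choice, the pair encoding, and the function names are illustrative assumptions, not the paper's specification.

```python
def quantile_bins(values, k):
    """Cut points splitting `values` into k approximately equal-sized bins."""
    s = sorted(values)
    return [s[(i * len(s)) // k] for i in range(1, k)]

def transition_matrix(pairs, k):
    """Empirical k-by-k transition matrix between consumption quantile
    bins.  `pairs` is a list of (c_t, c_{t+1}) household observations;
    each row of the result is normalized to sum to one."""
    cuts_now = quantile_bins([a for a, _ in pairs], k)
    cuts_next = quantile_bins([b for _, b in pairs], k)

    def bin_of(x, cuts):
        for i, cut in enumerate(cuts):
            if x < cut:
                return i
        return len(cuts)

    counts = [[0] * k for _ in range(k)]
    for c_now, c_next in pairs:
        counts[bin_of(c_now, cuts_now)][bin_of(c_next, cuts_next)] += 1
    return [[c / sum(row) if sum(row) else 0.0 for c in row]
            for row in counts]
```

A minimum-distance estimator would then pick model parameters so that the theoretical matrix is as close as possible to this empirical one.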
Trusting the stock market
(2005)
We provide a new explanation for the limited stock market participation puzzle. In deciding whether to buy stocks, investors factor in the risk of being cheated. The perception of this risk is a function not only of the objective characteristics of the stock, but also of the subjective characteristics of the investor. Less trusting individuals are less likely to buy stock and, conditional on buying stock, they buy less. The calibration of the model shows that this problem is sufficiently severe to account for the lack of participation of some of the richest investors in the United States as well as for differences in participation rates across countries. We also find evidence consistent with these propositions in Dutch and Italian micro data, as well as in cross-country data. JEL Classification: D1, D8
Credit card debt puzzles
(2005)
Most US credit card holders revolve high-interest debt, often combined with substantial (i) asset accumulation by retirement, and (ii) low-rate liquid assets. Hyperbolic discounting can resolve only the former puzzle (Laibson et al., 2003). Bertaut and Haliassos (2002) proposed an 'accountant-shopper' framework for the latter. The current paper builds, solves, and simulates a fully specified accountant-shopper model, showing that this framework can actually generate both types of co-existence, as well as target credit card utilization rates consistent with Gross and Souleles (2002). The benchmark model is compared to setups without self-control problems, with alternative mechanisms, and with impatient but fully rational shoppers. JEL Classification: E210, G110
Some have argued that recent increases in credit risk transfer are desirable because they improve the diversification of risk. Others have suggested that they may be undesirable if they increase the risk of financial crises. Using a model with banking and insurance sectors, we show that credit risk transfer can be beneficial when banks face uniform demand for liquidity. However, when banks face idiosyncratic liquidity risk and hedge this risk in an interbank market, credit risk transfer can be detrimental to welfare: it can lead to contagion between the two sectors and increase the risk of crises. JEL Classification: G21, G22
How do markets spread risk when events are unknown or unknowable and were not anticipated in an insurance contract? While the policyholder can "hold up" the insurer for extra-contractual payments, the continuing gains from trade on a single contract are often too small to yield useful coverage. We show that, by acting as a repository of the reputations of the parties, brokers provide a coordinating mechanism that leverages the collective hold-up power of policyholders. This extends the degree of both implicit and explicit coverage. The role is reflected in the terms of broker engagement, specifically in the broker's ownership of the renewal rights. Finally, we argue that brokers can be motivated to play this role when they receive commissions that are contingent on insurer profits. This last feature calls into question a recent, well-publicized attack on broker compensation by New York attorney general Eliot Spitzer. JEL Classification: G22, G24, L14
This analysis examines the employment effects of placement vouchers (Vermittlungsgutscheine) and personnel service agencies (Personal-Service-Agenturen) by means of a macroeconometric evaluation. In addition to a microeconometric evaluation, which examines effects at the individual level, a macroeconometric analysis can make statements about the aggregate effects of the measures. However, the structural multiplier effects within the macroeconomic circular flow are not taken into account. The econometric model for analysing the two measures is based on a matching function that depicts the search process of firms and of workers for an employment relationship. The empirical analyses are carried out separately for East and West Germany as well as for the strategy types of the Bundesagentur für Arbeit. They show that the issuance of placement vouchers has a significantly positive effect on the search process only in "predominantly West German, metropolitan districts with high unemployment" (strategy type II). For the personnel service agencies, significantly positive effects are found for both East and West Germany. However, owing to the relatively small number of participants, a comparison with microeconometric analyses is still needed for a conclusive assessment of the results for the personnel service agencies.
In this paper we evaluate the employment effects of job creation schemes on the participating individuals in Germany. Job creation schemes are a major element of active labour market policy in Germany and are targeted at long-term unemployed and other hard-to-place individuals. Access to very informative administrative data of the Federal Employment Agency justifies the application of a matching estimator and allows us to account for individual (group-specific) and regional effect heterogeneity. We extend previous studies in four directions. First, we are able to evaluate the effects on regular (unsubsidised) employment. Second, we observe the outcomes of participants and non-participants for nearly three years after programme start and can therefore analyse mid- and long-term effects. Third, we test the sensitivity of the results with respect to various decisions that have to be made during implementation of the matching estimator, e.g. choosing the matching algorithm or estimating the propensity score. Finally, we check whether a possible occurrence of 'unobserved heterogeneity' distorts our interpretation. The overall results are rather discouraging, since the employment effects are negative or insignificant for most of the analysed groups. One notable exception is long-term unemployed individuals, who benefit from participation. Hence, one policy implication is to target programmes more tightly at this problem group. JEL Classification: J68, H43, C13
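The matching step of such an estimator can be caricatured in a few lines: for each participant, find the non-participant with the closest (first-stage) propensity score, with replacement, and average the outcome differences to obtain the average treatment effect on the treated (ATT). This sketch assumes the scores are already estimated and omits calipers, common-support trimming, and inference; all names are illustrative.

```python
def att_nearest_neighbour(treated, controls):
    """Nearest-neighbour propensity-score matching estimate of the average
    treatment effect on the treated (ATT).  `treated` and `controls` are
    lists of (propensity_score, outcome) tuples; matching is done with
    replacement on the absolute score distance."""
    differences = []
    for score, outcome in treated:
        # closest control unit by propensity score
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        differences.append(outcome - matched_outcome)
    return sum(differences) / len(differences)
```

Matching with replacement keeps the bias small at the cost of reusing some controls, which the variance estimation then has to account for.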
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the still unsatisfactory labour market situation has raised doubts about the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Administration covering October 1999 to December 2002, we apply a bivariate mixed proportional hazards model. By doing so, we are able to use the information on the timing of treatment as well as observable and unobservable influences to identify the treatment effects. The results show that participation in vocational training prolongs unemployment duration in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
This paper evaluates the effects of job creation schemes on the participating individuals in Germany. Since previous empirical studies of these measures have been based on relatively small datasets and focussed on East Germany, this is the first study that allows policy-relevant conclusions to be drawn. The very informative and exhaustive dataset at hand not only justifies the application of a matching estimator but also allows us to take account of threefold heterogeneity. The recently developed multiple treatment framework is used to evaluate the effects with respect to regional, individual and programme heterogeneity. The results show considerable differences with respect to these sources of heterogeneity, but the overall finding is very clear: at the end of our observation period, that is two years after the start of the programmes, participants in job creation schemes have a significantly lower success probability on the labour market in comparison to matched non-participants. JEL Classification: H43, J64, J68, C13, C40
This paper investigates the macroeconomic effects of job creation schemes and vocational training on the matching processes in West Germany. The empirical analysis is based on regional data for local employment office districts for the period from 1999 to 2003. The empirical model relies on a dynamic version of a matching function augmented by ALMP. In order to obtain consistent estimates in a dynamic panel data model, a first-differences GMM estimator and a transformed maximum likelihood estimator are applied. Furthermore, the paper addresses the endogeneity problem of the policy measures. The results obtained from our estimates indicate that vocational training does not significantly affect the matching process and that job creation schemes have a negative effect. JEL Classification: C23, E24, H43, J64, J68
Most evaluation studies of active labour market policies (ALMP) focus on the microeconometric evaluation approach using individual data. However, as the microeconometric approach usually ignores impacts on non-participants, it should be seen as a first step towards a complete evaluation, which has to be followed by an analysis at the macroeconomic level. As a starting point for our analysis we discuss the effects of ALMP in a theoretical labour market framework augmented by ALMP. We estimate the impacts of ALMP in Germany for the period 1999-2001 with regional data for 175 labour office districts. Due to the high persistence of German labour market data, the application of a dynamic model is crucial. Furthermore, our analysis accounts especially for the inherent simultaneity problem of ALMP. For West Germany we find positive effects of vocational training and job creation schemes on the labour market situation, whereas the results for East Germany do not permit firm conclusions. JEL Classification: C33, E24, H43, J64, J68
Previous empirical studies of job creation schemes in Germany have shown that the average effects for the participating individuals are negative. However, we find that this is not true for all strata of the population. Identifying individual characteristics that are responsible for the effect heterogeneity and using this information for a better allocation of individuals therefore offers some scope for improving programme efficiency. We present several stratification strategies and discuss the resulting effect heterogeneity. Our findings show that job creation schemes neither harm nor improve the labour market chances of most of the groups. Exceptions are long-term unemployed men in West Germany and long-term unemployed women in East and West Germany, who benefit from participation in terms of higher employment rates. JEL Classification: C13, J68, H43
Innovations are a key factor in ensuring the competitiveness of establishments as well as in enhancing the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. As a response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for an empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on the innovation activities of German establishments. We find that neglecting sample selection due to observable as well as unobservable characteristics leads to an overestimation of the treatment effect, and that there are considerable differences with regard to size class and between West and East German establishments.
Persistently high unemployment, tight government budgets and growing scepticism regarding the effects of active labour market policies (ALMP) are the basis for a growing interest in evaluating these measures. This paper intends to explain the need for evaluation at the micro- and macroeconomic level, introduce the fundamental evaluation problem and solutions to it, give an overview of newer developments in the evaluation literature and, finally, take a look at empirical estimates of ALMP effects. JEL Classification: C14, C33, H43, J64, J68
This study analyses the effects of public sector sponsored vocational training (PSVT) on individuals' unemployment duration in West Germany for the period from 1985 to 1993. The data are taken from the German Socio-Economic Panel (GSOEP). To resolve the intriguing sample selection problem, i.e. to find an adequate control group for the group of trainees, we employ matching methods. These matching methods use, as the main matching variable, the individual propensity to participate in training, obtained by estimating a panel probit model. On the basis of the matched sample, a discrete-time hazard rate model is utilized to assess the effects of training participation on unemployment duration. Our results indicate that a significant positive effect on reemployment chances due to PSVT can only be expected for courses lasting no longer than six months. No significant positive effects on post-training reemployment chances were found for courses lasting longer than six months. In fact, these PSVT courses are significantly less effective at increasing reemployment chances than those lasting no longer than three months. JEL Classification: C40, J20, J64
This paper provides a review of empirical evidence on the impact of training on employment performance. Since a central issue in estimating training effects is the sample selection problem, a short theoretical discussion of different evaluation strategies is given. The empirical overview primarily focuses on non-experimental evidence for Germany. In addition, selected studies for other countries and experimental investigations are discussed.
In this study we are concerned with the impact of vocational training on the individual's unemployment duration in West Germany. The data basis is the German Socio-Economic Panel (GSOEP) for the period from 1984 to 1994. To resolve the intriguing sample selection problem, i.e. to find an adequate control group for the group of trainees, we employ matching methods developed in the statistical literature. These matching methods use as the main matching variable the individual propensity score for participation in training, obtained by estimating a random effects probit model. On the basis of the matched sample, a discrete-time hazard rate model is utilized to assess the impact of vocational training on unemployment duration. Our results indicate that training significantly raises the transition rate of the unemployed into employment in the short but not in the long run. JEL Classification: C40, J20, J64
We estimate a semiparametric single-risk discrete-time duration model to assess the effect of vocational training on the duration of unemployment spells. The data basis used in this study is the German Socio-Economic Panel (GSOEP) for West Germany for the period from 1986 to 1994. To take into account a possible selection bias, actual participation in vocational training is instrumented using estimates of a random-effects probit model for participation in qualification measures. Our main results show that training does have a significant short-term effect in reducing unemployment duration, but that this effect does not persist in the long run. JEL Classification: C41, J20, J64
This paper is intended as a short survey of the most relevant methods for grouped transition data. The fundamentals of duration analysis are discussed in a continuous-time framework, whereas the treatment of methods for discrete durations is limited to the peculiarities of these models. In addition, some recent empirical applications of the methods are discussed.
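For grouped (discrete-time) durations, the basic empirical object is the life-table hazard: among spells still at risk at duration t, the share ending in a transition at t. A minimal stdlib sketch, with an illustrative (duration, exited) encoding for possibly right-censored spells:

```python
def empirical_hazard(spells):
    """Life-table hazard for grouped durations.  `spells` is a list of
    (duration, exited) pairs, where `exited` is False for right-censored
    spells.  Returns h(t) = exits at t / spells at risk at t, t = 1..T."""
    horizon = max(d for d, _ in spells)
    hazard = []
    for t in range(1, horizon + 1):
        at_risk = sum(1 for d, _ in spells if d >= t)
        exits = sum(1 for d, e in spells if d == t and e)
        hazard.append(exits / at_risk if at_risk else 0.0)
    return hazard
```

Regression versions of this (e.g. logit or complementary log-log on person-period data) let covariates shift the hazard, which is the setting the discrete-duration models surveyed above cover.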
In recent econometric work, most analyses of female labour supply consider married women, whereas results for unmarried women are provided rather as a by-product (Burtless/Greenberg, 1982; Johnson/Pencavel, 1984; Leu/Kugler, 1986; Merz, 1990). When the particular interest is focused on unmarried women, data from the seventies or rather simple econometric models are used (Keeley et al., 1978; Hausman, 1980; Coverman/Kemp, 1987). Often very specific populations are examined, such as lone mothers in Blundell/Duncan/Meghir (1992), Jenkins (1992), Staat/Wagenhals (1993) or Laisney et al. (1993). In analysing the economic behaviour of unmarried women, one is confronted with the problem that the term 'unmarried' is not clearly defined. It includes single, divorced, separated and widowed women. They live in different types of households, such as one-person households or family households, where they occupy different economic positions, for example head of the household or relative of the head. The present work considers unmarried female heads of household. We assume that the dominant economic position as head of household, voluntarily or involuntarily occupied, forces these women towards similar behaviour regardless of their family status; they are therefore analysed together across the different family statuses: single, divorced, separated and widowed. Being unmarried is often regarded as a temporary state, voluntary or involuntary, for example in the case of young women before marriage or of divorced women after their separation. Nevertheless, the demographic development shows the increased importance of unmarried women in the population during the last decades. In the USA the proportion of female-headed households rose from 21.1% in 1970 to 26.2% in 1980 and 29.0% in 1992 (Statistical Abstracts of the United States, 1993; own calculations).
In the FRG, female-headed households constituted 26.4% of total households in 1970, 27.4% in 1980 and 30.1% in 1992 (Stat. Bundesamt, FS 1, Reihe 3, 1970, 1980, 1992). It therefore seems an interesting topic to analyse the labour supply behaviour of unmarried female heads. Of particular interest is the question whether the labour supply of unmarried women resembles rather that of married women or that of prime-age males. Another purpose of this analysis is to apply modern econometric panel data models, with special emphasis on the problem of unbalanced panel data. Most panel data analyses are carried out using balanced panel data, which is unproblematic if the selection process can be ignored and if enough cases are available to guarantee efficient estimation. Especially the last point was crucial for the present analysis of unmarried females: in the available panel data sets, unmarried female heads constitute only a rather small population. Therefore the estimation techniques were modified to take missing observations on individuals into account. The paper is organized as follows: Section 2 briefly presents the underlying theoretical model of intertemporal labour supply under uncertainty. Section 3 deals with the econometric specification and estimation techniques, where the use of unbalanced panel data is considered. Section 4 contains the data description, with a particular look at the unbalancedness of the samples. Section 5 presents the empirical results: we compare the estimated parameters for unmarried women between the USA and the FRG, analyse the differences between unmarried and married women, and provide a comparison between different samples of unmarried women.
This paper provides an empirical assessment of hypotheses that identify causes of demand-side constraints on individual labour supply. In a comparative study for the USA and the FRG, we focus on analysing the effect of productivity gaps (industry wage growth beyond productivity growth), industry investment intensity and regional labour market conditions on individual employment probabilities. Furthermore, we investigate whether demand-side constraints on labour supply can be caused by a spillover from commodity markets. Efficiency wage theory and the theory of inter-industry wage differentials are utilised to derive identifying restrictions applicable to the labour supply models for both countries. The econometric contribution of the paper is the derivation and application of a two-step estimation method for the class of simultaneous random effects double hurdle models, of which the labour supply model employed in this paper is a special case. To provide the empirical basis for the comparative study, the Panel Study of Income Dynamics and the German Socio-Economic Panel are linked to the OECD's International Sectoral Database. JEL Classification: C33, C34, J64, O57
Modelling consumer behaviour in a profile design using a three equation generalised Tobit model
(1997)
We propose the application of a three-equation generalised Tobit model to capture different aspects of consumer behaviour in a full profile study design. The model takes into account that consumer behaviour can be measured by preference scores, purchase probability and purchase volume. We aim to avoid the drawbacks of traditional conjoint analysis, where the latter two aspects are disregarded. Starting from a full profile design, we develop the appropriate questionnaire layout, the econometric model, the likelihood function and tests. The model is applied in a market entry study for an innovative medicament after a reform of Germany's public health system in 1993-1994. JEL Classification: C35, M31, L65
Sharing of substructures such as subterms and subcontexts is a common method for space-efficient representation of terms; it allows, for example, exponentially large terms to be represented in polynomial space, and terms with iterated substructures to be represented in compact form. We present singleton tree grammars as a general formalism for the treatment of sharing in terms. Singleton tree grammars (STGs) are recursion-free context-free tree grammars without alternatives for nonterminals and with at most unary second-order nonterminals. STGs generalize Plandowski's singleton context-free grammars to terms (trees). We show that testing whether two different nonterminals in an STG generate the same term can be done in polynomial time, which implies that the equality test for terms with shared terms and contexts, where composition of contexts is permitted, can be done in polynomial time in the size of the representation. This allows polynomial-time algorithms for terms exploiting sharing. We hope that this technique will lead to improved upper complexity bounds for variants of second-order unification algorithms, in particular for variants of context unification and bounded second-order unification.
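The benefit of sharing can be illustrated with hash-consing, a far simpler device than STGs (no contexts, no second-order nonterminals): structurally equal subterms are mapped to the same node id, so the equality test on shared terms reduces to comparing two integers. The tuple encoding and the function name are illustrative.

```python
def share(term, table):
    """Hash-cons a term -- nested tuples such as ('f', ('a',), ('a',)) --
    into a DAG of numbered nodes stored in `table`.  Structurally equal
    subterms receive the same id, so equality of two terms shared through
    the same table is just an id comparison."""
    head, args = term[0], term[1:]
    key = (head,) + tuple(share(arg, table) for arg in args)
    if key not in table:
        table[key] = len(table)  # allocate a fresh node id
    return table[key]
```

Iterated substructures thus occupy space proportional to the number of distinct subterms rather than to the full term size; STGs extend this idea to shared contexts with composition, which is where the polynomial-time equality result becomes non-trivial.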
A new method for the determination of S-matrices of devices in multimoded waveguides is presented, together with first experimental results. The theoretical foundations are given. The scattering matrix of a TESLA copper cavity has been measured at a frequency above the cut-off of the second waveguide mode.
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
The paper analyzes the incentive for the ECB to establish reputation by pursuing a restrictive policy right at the start of its operation. The bank is modelled as risk averse with respect to deviations of both inflation and output from its targets. The public, being imperfectly informed about the bank's preferences, uses observed inflation as an (imperfect) signal of the unknown preferences. Under linear learning rules, which are commonly used in the literature, a gradual build-up of reputation is the optimal response. The paper shows that such a linear learning rule is not consistent with efficient signaling. In a game with efficient signaling, a cold-turkey approach, allowing for deflation, is optimal for a strong bank, which accepts high current output losses at the beginning in order to demonstrate its toughness. JEL Classification: D82, E58
During the last years the relationship between financial development and economic growth has received widespread attention in the literature on growth and development. This paper summarises in its first part the results of this research, stressing the growth-enhancing effects of an increased interpersonal re-allocation of resources promoted by financial development. The second part of the paper seeks to identify the determinants of financial development based on Diamond's theory of financial intermediation as delegated monitoring. The analysis shows that the quality of corporate governance of banks is the key factor in financial system development. Accordingly, financial sector reforms in developing countries will only succeed if they strengthen the corporate governance of financial institutions. In this area, financial institution building has an important contribution to make. Paper presented at the First Annual Seminar on New Development Finance held at the Goethe University of Frankfurt, September 22 - October 3, 1997
The extension of long-term loans, e.g. to finance housing, is adversely affected by inflation. For one thing, the higher nominal interest rates charged by the banks in response to inflation mean that borrowers have to make (nominally) higher interest payments, which unnecessarily reduces their borrowing capacity. For another, long-term loans with variable interest rates increase the probability that borrowers will become unable to meet their payment obligations. The present paper examines these two assertions in detail. At the same time, it presents a concept for substantially reducing the weaknesses of conventional lending methodologies. We start by investigating the consequences of a stable inflation rate on the borrowing capacity of credit clients, then go on to analyze the impact of fluctuating inflation rates on the risk of default.
Competition for order flow can be characterized as a coordination game with multiple equilibria. Analyzing competition between dealer markets and a crossing network, we show that the crossing network is more stable for lower traders’ disutilities from unexecuted orders. By introducing private information, we prove existence of a unique equilibrium with market consolidation. Assets with low volatility and large volumes are traded on crossing networks, others on dealer markets. Efficiency requires more assets to be traded on crossing networks. If traders’ disutilities differ sufficiently, a unique equilibrium with market fragmentation exists. Low disutility traders use the crossing network while high disutility traders use the dealer market. The crossing network’s market share is inefficiently small.
In this paper, we estimate the demand for homeowners insurance in Florida. Since we are interested in a number of factors influencing demand, we approach the problem from two directions. We first estimate two hedonic equations representing the premium per contract and the price mark-up. We analyze how the contracts are bundled and how contract provisions, insurer characteristics and insured risk characteristics and demographics influence the premium per contract and the price mark-up. Second, we estimate the demand for homeowners insurance using two-stage least squares regression. We employ ISO's indicated loss costs as our proxy for real insurance services demanded. We assume that the demand for coverage is essentially a joint demand and thus we can estimate the demand for catastrophe coverage separately from the demand for noncatastrophe coverage. We determine that price elasticities are less elastic for catastrophic coverage than for non-catastrophic coverage. Further, estimated income elasticities suggest that homeowners insurance is an inferior good. Finally, we conclude based on the results of a selection model that our sample of ISO reporting companies well represents the demand for insurance in the Florida market as a whole.
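The two-stage least squares mechanics used in the abstract above can be sketched as follows. The data, the instrument, and all coefficient values here are hypothetical and serve only to illustrate the procedure (first stage: project the endogenous price on an instrument; second stage: regress quantity demanded on the fitted price), not the paper's actual specification.

```python
import random

def ols(x, y):
    """Slope and intercept of a one-regressor OLS fit via normal equations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

random.seed(0)
n = 500
# Hypothetical data: cost shock z instruments price; u shifts both price and demand,
# making price endogenous in the demand equation.
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
price = [1.0 + 0.8 * zi + 0.5 * ui + random.gauss(0, 0.3) for zi, ui in zip(z, u)]
demand = [2.0 - 1.5 * pi + 1.0 * ui + random.gauss(0, 0.3) for pi, ui in zip(price, u)]

# Stage 1: project the endogenous price on the instrument z.
a1, b1 = ols(z, price)
price_hat = [a1 + b1 * zi for zi in z]

# Stage 2: regress demand on the fitted price; the slope recovers the true
# price coefficient (-1.5), while naive OLS on the raw price is biased.
a2, b2 = ols(price_hat, demand)
_, b_naive = ols(price, demand)
print(b2, b_naive)
```

The demand shifter `u` is correlated with price, so the naive OLS slope is pulled toward zero, while the instrumented estimate stays close to the true coefficient.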
At present, the question of how national pension or retirement payment systems should be organised is being hotly debated in various countries, and opinions vary widely as to what should be regarded as the optimal design for such systems. It appears to the authors of the present paper that in this entire discussion one aspect is largely overlooked: What relationships exist between the pension system and the financial system in a given country? As such relationships might prove to be important, the present paper investigates the following questions: (1) Are there differences between the national pension systems of three major European countries – Germany, France and the U.K. – and between the financial systems of these countries? (2) And if the existence of such differences can be demonstrated, is there a correspondence between the differences with respect to the various national pension systems and the differences as regards the countries’ financial systems? (3) And if such a correspondence exists, is there any kind of interrelationship between the national financial and pension systems of the individual countries which goes beyond a mere correspondence? Looking mainly at two aspects – namely, risk allocation and the incentives to create human capital – the authors of this paper argue (1) that there are indeed considerable differences between the financial and pension systems of the three countries; (2) that in both Germany and the U.K. there are also systematic correspondences between the respective pension systems and financial systems and their economic characteristics, but that such a correspondence cannot be identified in the case of France; and (3) that these parallels are, in the final analysis, based on complementarities and are therefore likely to contribute to the efficiency of the German and the British systems. 
The paper concludes with a brief look at policy implications which the existence of, or the lack of, consistency between national pension systems and national financial systems might have.
Although the world of banking and finance is becoming more integrated every day, in most aspects the world of financial regulation continues to be narrowly defined by national boundaries. The main players here are still national governments and governmental agencies. And until recently, they tended to follow a policy of shielding their activities from scrutiny by their peers and members of the academic community rather than inviting critical assessments and an exchange of ideas. The turbulence in international financial markets in the 1980s, and its impact on U.S. banks, gave rise to the notion that academics working in the field of banking and financial regulation might be in a position to make a contribution to the improvement of regulation in the United States, and thus ultimately to the stability of the entire financial sector. This provided the impetus for the creation of the “U.S. Shadow Financial Regulatory Committee”. In the meantime, similar shadow committees have been founded in Europe and Japan. The specific problems associated with financial regulation in Europe, as well as the specific features which distinguish the European Shadow Financial Regulatory Committee from its counterparts in the U.S. and Japan, derive from the fact that while Europe has already made substantial progress towards economic and political integration, it is still primarily a collection of distinct nation-states with differing institutional set-ups and political and economic traditions. Therefore, any attempt to work towards a European approach to financial regulation must include an effort to promote the development of a European culture of co-operation in this area, and this is precisely what the European Shadow Financial Regulatory Committee (ESFRC) seeks to do. In this paper, Harald Benink, chairman of the ESFRC, and Reinhard H. Schmidt, one of the two German members, discuss the origin, the objectives and the functioning of the committee and the thrust of its recommendations.
In this paper we have developed a financial model of the non-life insurer to provide assistance for the management of the insurance company in making decisions on product, investment and reinsurance mix. The model is based on portfolio theory and recognizes the stochastic nature of and the interaction between the underwriting and investment income of the insurance business. In the context of an empirical application we illustrate how a portfolio optimisation approach can be used for asset-liability management.
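The portfolio-theoretic idea behind such a model can be sketched in closed form for the two-source case. The moments below (variances of an underwriting result and an investment result, and their covariance) are purely hypothetical; the sketch only illustrates how the interaction between the two income sources enters the optimal mix, not the paper's actual model.

```python
# Minimum-variance mix of two correlated return sources, e.g. an underwriting
# result and an investment result. All moments below are hypothetical.
def min_variance_weight(var_a, var_b, cov_ab):
    """Weight on source A that minimises the variance of w*A + (1-w)*B."""
    return (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)

var_uw, var_inv, cov = 0.04, 0.09, -0.006  # hypothetical variances and covariance
w = min_variance_weight(var_uw, var_inv, cov)
port_var = w**2 * var_uw + (1 - w)**2 * var_inv + 2 * w * (1 - w) * cov

print(w, port_var)
```

With a negative covariance between underwriting and investment results, the mixed portfolio's variance falls below that of either source alone, which is the diversification effect the model exploits.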
Our study provides evidence on the share price reactions to the announcement of equity issues in Germany, where the capital market is characterized by institutional features distinct from the U.S. market. German seasoned equity issues yield a positive market reaction, which contrasts with the significant negative abnormal returns reported for the U.S. We provide evidence that these results are due to differences in both issuing characteristics and flotation methods, and in the corporate governance and ownership structures of the two countries. Our study explains much of the empirical puzzle of different market reactions to seemingly similar events across financial markets.
Real options theory applies techniques known from finance theory to the valuation of capital investments. The present paper investigates further into this analogy, considering the case of a portfolio of real options. An implementation of real option models in practice will mostly be concerned with a portfolio of real options, so the analysis of portfolio aspects is of both academic and practical interest. Is a portfolio of real options special? In order to shed some light on this question, the present paper will outline the relevant features of a portfolio of real options. It will show that the analogy to financial options remains strong if compound option models are applied. As a result, a portfolio of real options, and therefore the firm as such, is generally to be understood as a single compound real option.
We present an empirical study focusing on the estimation of a fundamental multi-factor model for a universe of European stocks. Following the approach of the BARRA model, we have adopted a cross-sectional methodology. The proportion of explained variance ranges from 7.3% to 66.3% in the weekly regressions with a mean of 32.9%. For the individual factors we give the percentage of the weeks when they yielded statistically significant influence on stock returns. The best explanatory power - apart from the dominant country factors - was found among the statistical constructs "success" and "variability in markets".
Who knows what when? : The information content of pre-IPO market prices : [Version March/June 2002]
(2002)
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
We propose a new framework for modelling time dependence in duration processes on financial markets. The well known autoregressive conditional duration (ACD) approach introduced by Engle and Russell (1998) will be extended in a way that allows the conditional expectation of the duration process to depend on an unobservable stochastic process, which is modelled via a Markov chain. The Markov switching ACD model (MSACD) is a very flexible tool for the description and forecasting of financial duration processes. In addition, the introduction of an unobservable, discrete-valued regime variable can be justified in the light of recent market microstructure theories. In an empirical application we show that the MSACD approach is able to capture several specific characteristics of inter-trade durations while alternative ACD models fail. Furthermore, we use the MSACD to test implications of a sequential trade model.
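The data-generating process the abstract describes can be sketched by simulation. In the basic ACD model, durations are x_i = psi_i * eps_i with psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}; the Markov-switching extension lets the parameters depend on a latent regime. All parameter values and the two-regime setup below are hypothetical illustrations, not the paper's estimated specification.

```python
import random

random.seed(42)

# Hypothetical (omega, alpha, beta) per regime for the conditional expected
# duration psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}; regime 1 is a
# "slow" regime with longer expected durations.
params = {0: (0.1, 0.1, 0.8), 1: (0.5, 0.15, 0.7)}
stay_prob = {0: 0.95, 1: 0.90}  # probability of remaining in the current regime

def simulate_msacd(n):
    state, psi, x = 0, 1.0, 1.0
    durations, states = [], []
    for _ in range(n):
        if random.random() > stay_prob[state]:  # latent Markov-chain switch
            state = 1 - state
        omega, alpha, beta = params[state]
        psi = omega + alpha * x + beta * psi
        x = psi * random.expovariate(1.0)  # unit-mean exponential innovation
        durations.append(x)
        states.append(state)
    return durations, states

durations, states = simulate_msacd(5000)
mean0 = sum(d for d, s in zip(durations, states) if s == 0) / states.count(0)
mean1 = sum(d for d, s in zip(durations, states) if s == 1) / states.count(1)
print(mean0, mean1)
```

Because the latent regime shifts the level of the conditional expectation, durations drawn in the slow regime are longer on average, which is the clustering pattern a single-regime ACD model struggles to capture.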
Banking and markets
(2001)
This paper integrates a number of recent themes in the literature on banking and asset markets - optimal risk sharing, limited market participation, asset-price volatility, market liquidity, and financial crises - in a general-equilibrium theory of the financial system. A complex financial system comprises both financial markets and financial institutions. Financial institutions can take the form of intermediaries or banks. Banks, unlike intermediaries, are subject to runs, but crises do not imply market failure. We show that a sophisticated financial system - a system with complete markets for aggregate risk and limited market participation - is incentive-efficient if the institutions take the form of intermediaries, or else constrained-efficient if they take the form of banks. We also consider an economy in which the markets for aggregate risks are incomplete. In this context, there is a role for prudential regulation: regulating liquidity can improve welfare.
Executive Stock Option Programs (SOPs) have become the dominant compensation instrument for top management in recent years. The incentive effects of an SOP with respect to both corporate investment and financing decisions critically depend on the design of the SOP. A specific problem in designing SOPs concerns dividend protection. Usually, SOPs are not dividend protected, i.e. any dividend payout decreases the value of a manager's options. Empirical evidence shows that this results in a significant decrease in the level of corporate dividends and, at the same time, in an increase in share repurchases. Yet, few suggestions have been made on how to account for dividends in SOPs. This paper applies arguments from principal-agent theory and from the theory of finance to analyze different forms of dividend protection, and to address the relevance of dividend protection in SOPs. Finally, the paper relates the theoretical analysis to empirical work on the link between share repurchases and SOPs.
Since the beginning of the 1990s, it has been widely expected that the implementation of the European Single Market would lead to a rapid convergence of Europe’s financial systems. In the present paper we will show that at least in the period prior to the introduction of the common currency this expected convergence did not materialise. Our empirical studies on the significance of various institutions within the financial sectors, on the financing patterns of firms in various countries and on the predominant mechanisms of corporate governance, which are summarised and placed in a broader context in this paper, point to few, if any, signs of a convergence at a fundamental or structural level between the German, British and French financial systems. The German financial system continues to appear to be bank-dominated, while the British system still appears to be capital market-dominated. During the period covered by the research, i.e. 1980 – 1998, the French system underwent the most far-reaching changes, and today it is difficult to classify. In our opinion, these findings can be attributed to the effects of strong path dependencies, which are in turn an outgrowth of relationships of complementarity between the individual system components. Projecting what we have observed into the future, the results of our research indicate that one of two alternative paths of development is most likely to materialise: either the differences between the national financial systems will persist, or – possibly as a result of systemic crises – one financial system type will become the dominant model internationally. And if this second path emerges, the Anglo-American, capital market-dominated system could turn out to be the “winner”, because it is better able to withstand and weather crises, but not necessarily because it is more efficient.
In this paper we study the benefits derived from international diversification of stock portfolios from the German and Hungarian points of view. In contrast to the German capital market, which is one of the largest in the world, the Hungarian Stock Exchange is an emerging market. The Hungarian stock market is highly volatile, and high returns are often accompanied by extremely large risk. Therefore, there is a good potential for Hungarian investors to realize substantial benefits in terms of risk reduction by creating multi-currency portfolios. The paper gives evidence on the above mentioned benefits for both countries by examining the performance of several ex ante portfolio strategies. In order to control the currency risk, different types of hedging approaches are implemented.
Financial development and financial institution building are important prerequisites for economic growth. However, both the potential and the problems of institution building are still vastly underestimated by those who design and fund institution building projects. The paper first underlines the importance of financial development for economic growth, then describes the main elements of “serious” institution building: the lending technology, the methodological approaches, and the question of internal structure and corporate governance. Finally, it discusses three problems which institution building efforts have to cope with: inappropriate expectations on the part of donor and partner institutions regarding the problems and effects of institution building efforts, the lack of awareness of the importance of governance and ownership issues, and financial regulation that is too restrictive for microfinance operations. All three problems together explain why there are so few successful micro and small business institutions operating worldwide.
We analyze incentives for loan officers in a model with hidden action, limited liability and truth-telling constraints under the assumption that the principal has private information from an automatic scoring system. First we show that the truth-telling problem reduces the bank's expected profit whenever the loan officer cannot only conceal bad types, but can also falsely report bad types. Second, we investigate whether the bank should reveal her private information to the agent. We show that this depends on the percentage of good loans in the population and on the signal's informativeness. Although the answer depends on the parameter region, we conclude that it is often favorable not to reveal the signal. This contradicts current practice.
We investigate the suggested substitutive relation between executive compensation and the disciplinary threat of takeover imposed by the market for corporate control. We complement other empirical studies on managerial compensation and corporate control mechanisms in three distinct ways. First, we concentrate on firms in the oil industry for which agency problems were especially severe in the 1980s. Due to the extensive generation of excess cash flow, product and factor market discipline was ineffective. Second, we obtain a unique data set drawn directly from proxy statements which accounts not only for salary and bonus but for the value of all stock-market based compensation held in the portfolio of a CEO. Our data set consists of 51 firms in the U.S. oil industry from 1977 to 1994. Third, we employ ex ante measures of the threat of takeover at the individual firm level which are superior to ex post measures like actual takeover occurrence or past incidence of takeovers in an industry. Results show that annual compensation and, to a much higher degree, stock-based managerial compensation increase after a firm becomes protected from a hostile takeover. However, clear-cut evidence that CEOs of protected firms receive higher compensation than those of firms considered susceptible to a takeover cannot be found.
Individual financial systems can be understood as very specific configurations of certain key elements. Often these configurations remain unchanged for decades. We hypothesize that there is a specific relationship between key elements, namely that of complementarity. Thus, complementarity seems to be an essential feature of financial systems. Intuitively speaking, complementarity exists if the elements of a (financial) system reinforce each other in terms of contributing to the functioning of the system. It is the purpose of this paper to provide an analytical clarification of the concept of complementarity. This is done by modeling financial systems as combinations of four elements: firm-specific human capital of an entrepreneur, the ability of a bank to restructure the borrower's firm in the case of distress, the possibility to appropriate private benefits from running the firm, and the bankruptcy law. A specific configuration of these elements constitutes one financial system. The bankruptcy law and the potential private benefits are treated as exogenous. They determine the bargaining power of the contracting parties in the case that recontracting occurs. In a two-stage game, the optimal values for the other elements are determined by the agents individually - by investing in human capital and restructuring skills, respectively - and jointly by writing, executing and possibly renegotiating a financing contract for the firm. The paper discusses the equilibria for different types of bankruptcy law and demonstrates that equilibria exhibit the sought-after feature of complementarity. Three particularly significant equilibria correspond to stylized accounts of the British, German and the US-American financial system, respectively.
The paper presents an empirical analysis of the alleged transformation of the financial systems in the three major European economies, France, Germany and the UK. Based on a unified data set developed on the basis of national accounts statistics, and employing a new and consistent method of measurement, the following questions are addressed: Is there a common pattern of structural change; do banks lose importance in the process of change; and are the three financial systems becoming more similar? We find that there is neither a general trend towards disintermediation, nor towards a transformation from bank-based to capital market-based financial systems, nor for a loss of importance of banks. Only in the case of France could strong signs of transformation, as well as signs of a general decline in the role of banks, be found. Thus the three financial systems also do not seem to be becoming more similar. However, there is also a common pattern of change: the intermediation chains are lengthening in all three countries. Nonbank financial intermediaries are taking over a more important role as mobilizers of capital from the non-financial sectors. In combination with the trend towards securitization of bank liabilities, this change increases the funding costs of banks and may put banks under pressure. In the case of France, this change is so pronounced that it might even threaten the stability of the financial system.
Market discipline for financial institutions can be imposed not only from the liability side, as has often been stressed in the literature on the use of subordinated debt, but also from the asset side. This will be particularly true if good lending opportunities are in short supply, so that banks have to compete for projects. In such a setting, borrowers may demand that banks commit to monitoring by requiring that they use some of their own capital in lending, thus creating an asset market-based incentive for banks to hold capital. Borrowers can also provide banks with incentives to monitor by allowing them to reap some of the benefits from the loans, which accrue only if the loans are in fact paid off. Since borrowers do not fully internalize the cost of raising capital to the banks, the level of capital demanded by market participants may be above the one chosen by a regulator, even when capital is a relatively costly source of funds. This implies that capital requirements may not be binding, as recent evidence seems to indicate. JEL classification: G21, G38
We explore the macro/finance interface in the context of equity markets. In particular, using half a century of Livingston expected business conditions data we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a statistically and economically significant counter-cyclical fashion: depressed expected business conditions are associated with high expected excess returns. Moreover, inclusion of expected business conditions in otherwise standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R2. Expected business conditions retain predictive power even after controlling for an important and recently introduced non-financial predictor, the generalized consumption/wealth ratio, which accords with the view that expected business conditions play a role in asset pricing different from and complementary to that of the consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, while time-varying consumption/wealth may capture time-varying risk aversion. JEL classification: G12
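The counter-cyclical predictive regression described above can be illustrated with a minimal sketch. The series, the coefficient, and the noise level are hypothetical simulated stand-ins; the sketch only demonstrates the mechanics of regressing next-period excess returns on an expected-business-conditions index and reading off the slope and R-squared.

```python
import random

random.seed(1)

def ols_r2(x, y):
    """Slope, intercept and R-squared of a one-regressor OLS fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, a, 1 - ss_res / ss_tot

# Hypothetical series: a business-conditions index g and next-period excess
# returns generated counter-cyclically (depressed conditions -> high returns).
n = 400
g = [random.gauss(0, 1) for _ in range(n)]
ret = [-0.5 * gi + random.gauss(0, 1) for gi in g]

slope, intercept, r2 = ols_r2(g, ret)
print(slope, r2)
```

A negative slope on the conditions index is the counter-cyclical pattern the paper documents; in the paper's own regressions the predictor enters alongside conventional financial predictors rather than alone.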
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model as they are useful, but not used in equilibrium because of basis risk. JEL classification: D82, G22
This chapter focuses on institutional investors in the German financial markets. Institutional investors are specialized financial intermediaries who collect and manage funds on behalf of small investors toward specific objectives in terms of risk, return and maturity. The major types of institutional investors in Germany are insurance companies and investment funds. We will examine the nature of their businesses, their size and role in the financial sector, the size and the composition of the assets under their management, aspects of financial regulation, and features of their asset-liability management.
We analyze exchange rates along with equity quotes for 3 German firms from New York (NYSE) and Frankfurt (XETRA) during overlapping trading hours to see where price discovery occurs and how stock prices adjust to an exchange rate shock. Findings include: (a) the exchange rate is exogenous with respect to the stock prices; (b) exchange rate innovations are more important in understanding the evolution of NYSE prices than XETRA prices; and (c) most (but not all) of the fundamental or random walk component of firm value is determined in Frankfurt.
For the Neuer Markt, the year 2001 is not considered one of its best compared with its prior performance. Investors who once piled into the Neuer Markt have become wary of the exchange, which was launched in 1997 as Europe's leading growth market and answer to the U.S. Nasdaq Stock Market. The Neuer Markt's reputation has been marred by the misleading information policy of several Neuer Markt companies, which published false annual and quarterly data. Some of these companies are responsible for having misinformed investors about their pending bankruptcies. Under these circumstances, it is time to find an explanation for the dramatic loss of credibility in Neuer Markt enterprises. In seeking an answer, two aspects come under consideration: what type of information (annual versus quarterly reports) was available to investors, and of what quality were the data provided. Interim reports can be seen as an important instrument in the reporting system to inform all kinds of investors. For this reason we examine the quality of Neuer Markt quarterly reports by concentrating on the disclosure level of 52 Neuer Markt companies' reports for the third quarter of 1999 and 2000. To enable comparison we establish four disclosure indexes that measure the reports' compliance with the Neuer Markt Rules and Regulations as well as with IAS and US GAAP interim reporting standards. The results demonstrate that the level of disclosure has increased over time. We then aim to find typical attributes of Neuer Markt enterprises that provide a high or low level of accounting information in their quarterly reports. The study also shows that there is no correlation between market capitalization and the quality of interim reports. However, it can be suggested that an additional enforcement mechanism could improve quality and lure investors back. A step towards this aim is the standardization project for quarterly reports of Deutsche Boerse AG.
Open source projects produce goods or standards that do not allow for the appropriation of private returns by those who contribute to their production. In this paper we analyze why programmers will nevertheless invest their time and effort to code open source software. We argue that the particular way in which open source projects are managed and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that more mediocre programmers cannot achieve. Through setting themselves apart they can turn this signal into monetary rewards that correspond to their superior capabilities. With this incentive they will forgo the immediate rewards they could earn in software companies producing proprietary software by restricting the access to the source code of their product. Whenever institutional arrangements are in place that enable the acquisition of such a signal and the subsequent substitution into monetary rewards, the contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.