Using unobservable conditional variance as a measure, latent-variable approaches such as GARCH and stochastic-volatility models have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application to S&P500 index futures we show that allowing for time-varying volatility of realized volatility substantially improves the model's fit as well as its predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting. JEL Classification: C22, C51, C52, C53
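The construction described above can be sketched in a few lines: realized volatility is built by summing squared intraday returns within each day, and a simple autoregression (standing in for the ARFIMA models mentioned in the abstract) is then fitted to its logarithm. The simulated data, the AR(1) choice, and all variable names are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-minute log returns: 250 trading days x 78 intraday intervals
# (purely synthetic stand-in for intraday transaction data)
intraday_returns = rng.normal(0.0, 0.001, size=(250, 78))

# Realized variance: sum of squared intraday returns within each day;
# realized volatility is its square root
realized_var = (intraday_returns ** 2).sum(axis=1)
realized_vol = np.sqrt(realized_var)

# Fit an AR(1) to log realized volatility by least squares, a simple
# stand-in for the ARFIMA models discussed in the text
y = np.log(realized_vol)
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
residuals = y[1:] - X @ beta

# The paper's point: check these residuals for non-Gaussianity and
# volatility clustering before trusting density forecasts
print("AR(1) coefficient:", round(beta[1], 3))
print("residual std:", round(residuals.std(), 4))
```

The paper's extensions would replace the constant residual variance implied by this least-squares fit with a time-varying specification.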
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation, and a non-deterministic operator amb, which is locally bottom-avoiding. We use a small-step operational semantics in the form of a normal order reduction. As equational theory we use contextual equivalence, i.e. two terms are equal if, plugged into an arbitrary program context, their termination behaviour is the same. We use a combination of may- and must-convergence, which is appropriate for non-deterministic computations. We develop different proof tools for proving the correctness of program transformations. We provide a context lemma for may- as well as must-convergence which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules, and also some additional transformations, preserve contextual equivalence. In contrast to other approaches, our syntax as well as our semantics does not make use of a heap for sharing expressions. Instead we represent shared expressions explicitly via letrec-bindings.
Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf, and Bot, denoting all expressions, all lists without a last Nil as tail, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of the subset relationship between co-inductively defined sets, using algorithms and results from tree automata. This shows decidability of the test for set inclusion, which is required by certain strictness analysis algorithms in lazy functional programming languages.
Extending the method of Howe, we establish a large class of untyped higher-order calculi, in particular ones with call-by-need evaluation, for which similarity, also called applicative simulation, can be used as a proof tool for showing contextual preorder. The paper also demonstrates that Mann's approach using an intermediate "approximation" calculus scales up well from a basic call-by-need non-deterministic lambda calculus to more expressive lambda calculi. That is, it is demonstrated that after transferring the contextual preorder of a non-deterministic call-by-need lambda calculus to its corresponding approximation calculus, Howe's method can be applied to show that similarity is a precongruence. The transfer itself is not treated in this paper. The paper also proposes an optimization of the similarity test by cutting off redundant computations. Our results also apply to deterministic or non-deterministic call-by-value lambda calculi, and improve upon previous work insofar as it is proved that only closed values are required as arguments for similarity testing instead of all closed expressions.
The paper examines challenges in effectively implementing the lender-of-last-resort function in the EU single financial market. It briefly highlights features of the EU financial landscape that could increase EU systemic financial risk, and briefly describes the complexities of the EU's financial-stability architecture for preventing and resolving financial problems, including lender-of-last-resort operations. The paper then examines how the lender-of-last-resort function might materialize during a systemic financial disturbance affecting more than one EU Member State, and identifies challenges and possible ways of enhancing the effectiveness of the existing architecture.
The assumption that mankind is able to influence global or regional climate through the emission of greenhouse gases is often discussed. This assumption is both very important and very obscure. It is therefore necessary to clarify definitively which meteorological elements (climate parameters) are influenced by the anthropogenic climate impact, to what extent, and in which regions of the world. In addition, to interpret such information properly, it is also necessary to know the magnitude of the different climate signals due to natural variability (for example due to volcanic or solar activity) and the magnitude of stochastic climate noise. The usual tools of climatologists, general circulation models (GCMs), suffer from the problem that they are at least quantitatively uncertain with regard to the regional patterns of the behaviour of climate elements, and from the lack of accurate information about long-term (decadal and centennial) forcing. In contrast, statistical methods as used in this study have the advantage of testing hypotheses directly on observational data. We therefore focus on climate variability as it has actually occurred in the past. We apply two strategies of time series analysis to the observed climate variables under consideration. First, each time series is split into its variation components; this procedure is called 'structure-oriented time series separation'. The second strategy, called 'cause-oriented time series separation', matches various time series representing various forcing mechanisms with those representing the climate behaviour (climate elements). In this way it can be assessed which part of observed climate variability can be explained by this (combined) forcing and which part remains unexplained.
This paper makes a case for the future development of European corporate law through regulatory competition rather than EC legislation. For the first time it is becoming legally possible for firms within the EU to select the national company law that they wish to govern their activities. A significant number of firms can be expected to exercise this freedom, and national legislatures can be expected to respond by seeking to make their company laws more attractive to firms. Whilst the UK is likely to be the single most successful jurisdiction in attracting firms, the presence of different models of corporate governance within Europe makes it quite possible that competition will result in specialisation rather than convergence, and that no Member State will come to dominate as Delaware has done in the US. Procedural safeguards in the legal framework will direct the selection of laws which increase social welfare, as opposed simply to the welfare of those making the choice. Given that European legislators cannot be sure of the 'optimal' model for company law, the future of European company law-making would be better left with Member States than take the form of harmonized legislation.
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid water logging and to manage high groundwater tables. Among other impacts it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5' by 5' grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 of the 116 countries with information on artificial drainage areas could sub-national information be taken into account. Due to this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and it discusses the data uncertainty.
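The downscaling step described above can be illustrated with a toy example: a nationally reported drained area is distributed across grid cells in proportion to each cell's cropland area, capped at the cell's size. All numbers and names are hypothetical; the real procedure additionally draws on the irrigation and CORINE land-cover datasets.

```python
import numpy as np

# Six illustrative 5' x 5' grid cells of equal area (km^2)
cell_area = np.full(6, 100.0)

# Fraction of each cell covered by cropland (hypothetical values,
# standing in for the Global Croplands Dataset)
cropland_frac = np.array([0.9, 0.7, 0.5, 0.2, 0.1, 0.0])

# Nationally reported drained area to be distributed (km^2)
national_drained = 150.0

# Distribute in proportion to cropland area, capped at the cell size
cropland_area = cell_area * cropland_frac
weights = cropland_area / cropland_area.sum()
drained = np.minimum(national_drained * weights, cell_area)

# Map value: percentage of each cell equipped for drainage
drained_pct = 100.0 * drained / cell_area
print("Drained % per cell:", np.round(drained_pct, 1))
```

Note that area clipped by the cap is simply dropped in this sketch; a fuller procedure would redistribute it to the remaining cells.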
We find that on average consumers chose the contract that ex post minimized their net costs. A substantial fraction of consumers (about 40%) nevertheless chose the ex post sub-optimal contract, with some incurring hundreds of dollars of avoidable interest costs. Nonetheless, the probability of choosing the sub-optimal contract declines with the dollar magnitude of the potential error, and consumers with larger errors were more likely to subsequently switch to the optimal contract. Thus most of the errors appear not to have been very costly, with the exception that a small minority of consumers persists in holding substantially sub-optimal contracts without switching. JEL Classification: G11, G21, E21, E51
Using a set of regional inflation rates, we examine the dynamics of inflation dispersion within the U.S.A., within Japan and across U.S. and Canadian regions. We find that inflation rate dispersion is significant throughout the sample period in all three samples. Based on methods applied in the empirical growth literature, we provide evidence in favor of significant mean reversion (β-convergence) in inflation rates in all considered samples. The evidence on σ-convergence is mixed, however. Observed declines in dispersion are usually associated with decreasing overall inflation levels, which indicates a positive relationship between mean inflation and overall inflation rate dispersion. Our findings for the within-distribution dynamics of regional inflation rates show that dynamics are largest for Japanese prefectures, followed by U.S. metropolitan areas. For the combined U.S.-Canadian sample, we find a pattern of within-distribution dynamics that is comparable to that found for regions within the European Monetary Union (EMU). In line with findings in the so-called 'border literature', these results suggest that frictions across European markets are at least as large as they are, e.g., across North American markets. JEL Classification: E31, E52, E58
Using a unique data set of regional inflation rates, we examine the extent and dynamics of inflation dispersion in major EMU countries before and after the introduction of the euro. For both periods, we find strong evidence in favor of mean reversion (β-convergence) in inflation rates. However, half-lives to convergence are considerable and seem to have increased after 1999. The results indicate that the convergence process is nonlinear in the sense that its speed declines the further convergence has proceeded. An examination of the dynamics of overall inflation dispersion (σ-convergence) shows that there was a decline in dispersion in the first half of the 1990s. For the second half of the 1990s, no further decline can be observed. At the end of the sample period, dispersion even increased. The existence of large persistence in European inflation rates is confirmed when distribution dynamics methodology is applied. At the end of the paper we present evidence on the sustainability of the ECB's inflation target of an EMU-wide average inflation rate of less than but close to 2%. JEL Classification: E31, E52, E58
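The two convergence concepts used in this and the preceding abstract can be sketched on simulated data: β-convergence is a negative slope in a regression of inflation changes on lagged inflation levels (with an implied half-life), while σ-convergence asks whether cross-regional dispersion falls over time. Everything below is an illustrative assumption, not the papers' data or exact estimators.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated inflation rates for 12 regions over 20 years, mean-reverting
# toward a common 2% level (all values illustrative)
n_regions, n_years = 12, 20
infl = np.empty((n_regions, n_years))
infl[:, 0] = rng.normal(3.0, 1.5, n_regions)
for t in range(1, n_years):
    infl[:, t] = 2.0 + 0.7 * (infl[:, t - 1] - 2.0) + rng.normal(0, 0.2, n_regions)

# beta-convergence: regress inflation changes on lagged levels;
# a negative slope indicates mean reversion
dy = (infl[:, 1:] - infl[:, :-1]).ravel()
lag = infl[:, :-1].ravel()
X = np.column_stack([np.ones_like(lag), lag])
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
beta_conv = coef[1]  # expected to be negative under convergence

# Implied half-life of deviations from the common mean (in years)
half_life = np.log(0.5) / np.log(1.0 + beta_conv)

# sigma-convergence: does cross-regional dispersion fall over time?
dispersion = infl.std(axis=0)

print(f"beta: {beta_conv:.2f}, half-life: {half_life:.1f} years")
print("dispersion, first vs last year:",
      round(dispersion[0], 2), round(dispersion[-1], 2))
```

A long half-life despite a significantly negative β is exactly the pattern the abstract describes for the post-1999 period.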
The paper documents lack of awareness of financial assets in the 1995 and 1998 Bank of Italy Surveys of Household Income and Wealth. It then explores the determinants of awareness, and finds that the probability that survey respondents are aware of stocks, mutual funds and investment accounts is positively correlated with education, household resources, long-term bank relations and proxies for social interaction. Lack of financial awareness has important implications for understanding the stockholding puzzle and for estimating stock market participation costs. JEL Classification: E2, D8, G1
The theory of intertemporal consumption choice makes sharp predictions about the evolution of the entire distribution of household consumption, not just about its conditional mean. In the paper, we study the empirical transition matrix of consumption using a panel drawn from the Bank of Italy Survey of Household Income and Wealth. We estimate the parameters that minimize the distance between the empirical and the theoretical transition matrix of the consumption distribution. The transition matrix generated by our estimates matches the empirical matrix remarkably well, both in the aggregate and in samples stratified by education. Our estimates strongly reject the consumption insurance model and suggest that households smooth income shocks to a lesser extent than implied by the permanent income hypothesis. JEL Classification: D52, D91, I30
Trusting the stock market
(2005)
We provide a new explanation for the limited stock market participation puzzle. In deciding whether to buy stocks, investors factor in the risk of being cheated. The perception of this risk is a function not only of the objective characteristics of the stock, but also of the subjective characteristics of the investor. Less trusting individuals are less likely to buy stock and, conditional on buying stock, they will buy less. The calibration of the model shows that this problem is sufficiently severe to account for the lack of participation of some of the richest investors in the United States as well as for differences in the rate of participation across countries. We also find evidence consistent with these propositions in Dutch and Italian micro data, as well as in cross-country data. JEL Classification: D1, D8
Credit card debt puzzles
(2005)
Most US credit card holders revolve high-interest debt, often combined with substantial (i) asset accumulation by retirement, and (ii) low-rate liquid assets. Hyperbolic discounting can resolve only the former puzzle (Laibson et al., 2003). Bertaut and Haliassos (2002) proposed an 'accountant-shopper' framework for the latter. The current paper builds, solves, and simulates a fully specified accountant-shopper model, to show that this framework can actually generate both types of co-existence, as well as target credit card utilization rates consistent with Gross and Souleles (2002). The benchmark model is compared to setups without self-control problems, with alternative mechanisms, and with impatient but fully rational shoppers. JEL Classification: E210, G110
Some have argued that recent increases in credit risk transfer are desirable because they improve the diversification of risk. Others have suggested that they may be undesirable if they increase the risk of financial crises. Using a model with banking and insurance sectors, we show that credit risk transfer can be beneficial when banks face uniform demand for liquidity. However, when they face idiosyncratic liquidity risk and hedge this risk in an interbank market, credit risk transfer can be detrimental to welfare. It can lead to contagion between the two sectors and increase the risk of crises. JEL Classification: G21, G22
How do markets spread risk when events are unknown or unknowable and were not anticipated in an insurance contract? While the policyholder can "hold up" the insurer for extra-contractual payments, the continuing gains from trade on a single contract are often too small to yield useful coverage. We show that, by acting as a repository of the reputations of the parties, brokers provide a coordinating mechanism that leverages the collective hold-up power of policyholders. This extends the degree of both implicit and explicit coverage. This role is reflected in the terms of broker engagement, specifically in the broker's ownership of the renewal rights. Finally, we argue that brokers can be motivated to play this role when they receive commissions that are contingent on insurer profits. This last feature calls into question a recent, well-publicized attack on broker compensation by New York Attorney General Eliot Spitzer. JEL Classification: G22, G24, L14
This analysis examines the employment effects of placement vouchers (Vermittlungsgutscheine) and personnel service agencies (Personal-Service-Agenturen) by means of a macroeconometric evaluation. In contrast to a microeconometric evaluation, which examines the effects at the individual level, a macroeconometric analysis can make statements about the economy-wide effects of the measures. Structural multiplier effects within the macroeconomic circular flow are, however, not taken into account. The econometric model for analysing the two measures is based on a matching function that depicts the search process of firms and of workers for an employment relationship. The empirical analyses are carried out separately for East and West Germany as well as for the strategy types of the Federal Employment Agency (Bundesagentur für Arbeit). They show that the issuing of placement vouchers has a significantly positive effect on the search process only in "predominantly West German districts dominated by large cities with high unemployment" (strategy type II). For the personnel service agencies, significantly positive effects appear for both East and West Germany. However, owing to the relatively small number of participants, a final assessment of the results for the personnel service agencies still requires a comparison with microeconometric analyses.
In this paper we evaluate the employment effects of job creation schemes on the participating individuals in Germany. Job creation schemes are a major element of active labour market policy in Germany and are targeted at long-term unemployed and other hard-to-place individuals. Access to very informative administrative data of the Federal Employment Agency justifies the application of a matching estimator and allows us to account for individual (group-specific) and regional effect heterogeneity. We extend previous studies in four directions. First, we are able to evaluate the effects on regular (unsubsidised) employment. Second, we observe the outcomes of participants and non-participants for nearly three years after programme start and can therefore analyse mid- and long-term effects. Third, we test the sensitivity of the results with respect to various decisions which have to be made during implementation of the matching estimator, e.g. choosing the matching algorithm or estimating the propensity score. Finally, we check whether a possible occurrence of 'unobserved heterogeneity' distorts our interpretation. The overall results are rather discouraging, since the employment effects are negative or insignificant for most of the analysed groups. One notable exception is long-term unemployed individuals, who benefit from participation. Hence, one policy implication is to target programmes more tightly at this problem group. JEL Classification: J68, H43, C13
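A minimal version of the matching estimator described above might look as follows: a propensity score is estimated by logit, each participant is matched to the nearest non-participant on that score, and the effect is the mean outcome difference. The simulated data, the plain gradient-ascent logit, and single nearest-neighbour matching are simplifying assumptions; the paper's estimator and administrative data are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Simulated covariates, programme participation, and employment outcome
x = rng.normal(size=(n, 2))
p_true = 1.0 / (1.0 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
d = rng.random(n) < p_true                                 # participation
y = (0.3 * x[:, 0] + 0.1 * d + rng.normal(0, 1, n)) > 0    # employed?

# Step 1: estimate the propensity score with a logit, fitted here by
# plain gradient ascent to keep the sketch dependency-free
X = np.column_stack([np.ones(n), x])
w = np.zeros(3)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 1.0 * X.T @ (d - p) / n
pscore = 1.0 / (1.0 + np.exp(-X @ w))

# Step 2: match each participant to the nearest non-participant by score
treated = np.where(d)[0]
controls = np.where(~d)[0]
dist = np.abs(pscore[treated][:, None] - pscore[controls][None, :])
matches = controls[dist.argmin(axis=1)]

# Step 3: average treatment effect on the treated (ATT)
att = y[treated].mean() - y[matches].mean()
print("Estimated ATT:", round(att, 3))
```

The sensitivity checks mentioned in the abstract correspond to varying step 2 (matching algorithm) and step 1 (propensity-score specification).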
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the still unsatisfactory situation of the labour market has cast doubt on the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Administration for the period from October 1999 to December 2002, we apply a bivariate mixed proportional hazards model. By doing so, we are able to use the information on the timing of treatment as well as observable and unobservable influences to identify the treatment effects. The results show that participation in vocational training prolongs unemployment duration in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
This paper evaluates the effects of job creation schemes on the participating individuals in Germany. Since previous empirical studies of these measures have been based on relatively small datasets and focussed on East Germany, this is the first study which allows us to draw policy-relevant conclusions. The very informative and exhaustive dataset at hand not only justifies the application of a matching estimator but also allows us to take account of threefold heterogeneity. The recently developed multiple treatment framework is used to evaluate the effects with respect to regional, individual and programme heterogeneity. The results show considerable differences with respect to these sources of heterogeneity, but the overall finding is very clear. At the end of our observation period, that is two years after the start of the programmes, participants in job creation schemes have a significantly lower success probability on the labour market in comparison to matched non-participants. JEL Classification: H43, J64, J68, C13, C40
This paper investigates the macroeconomic effects of job creation schemes and vocational training on the matching processes in West Germany. The empirical analysis is based on regional data for local employment office districts for the period from 1999 to 2003. The empirical model relies on a dynamic version of a matching function augmented by ALMP. In order to obtain consistent estimates in a dynamic panel data model, a first-differences GMM estimator and a transformed maximum likelihood estimator are applied. Furthermore, the paper addresses the endogeneity problem of the policy measures. Our estimates indicate that vocational training does not significantly affect the matching process and that job creation schemes have a negative effect. JEL Classification: C23, E24, H43, J64, J68
Most evaluation studies of active labour market policies (ALMP) focus on the microeconometric evaluation approach using individual data. However, as the microeconometric approach usually ignores impacts on non-participants, it should be seen as a first step towards a complete evaluation, which has to be followed by an analysis on the macroeconomic level. As a starting point for our analysis, we discuss the effects of ALMP in a theoretical labour market framework augmented by ALMP. We estimate the impacts of ALMP in Germany for the period 1999-2001 with regional data for 175 labour office districts. Due to the high persistence of German labour market data, the application of a dynamic model is crucial. Furthermore, our analysis accounts especially for the inherent simultaneity problem of ALMP. For West Germany we find positive effects of vocational training and job creation schemes on the labour market situation, whereas the results for East Germany do not allow firm conclusions. JEL Classification: C33, E24, H43, J64, J68
Previous empirical studies of job creation schemes in Germany have shown that the average effects for the participating individuals are negative. However, we find that this is not true for all strata of the population. Identifying individual characteristics that are responsible for the effect heterogeneity, and using this information for a better allocation of individuals, therefore offers some scope for improving programme efficiency. We present several stratification strategies and discuss the resulting effect heterogeneity. Our findings show that job creation schemes neither harm nor improve the labour market chances of most of the groups. Exceptions are long-term unemployed men in West Germany and long-term unemployed women in East and West Germany, who benefit from participation in terms of higher employment rates. JEL Classification: C13, J68, H43
Innovations are a key factor in ensuring the competitiveness of establishments as well as in enhancing the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. In response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for an empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on the innovation activities of German establishments. We find that neglecting sample selection due to observable as well as unobservable characteristics leads to an overestimation of the treatment effect, and that there are considerable differences with regard to size class and between West and East German establishments.
Persistently high unemployment, tight government budgets and growing scepticism regarding the effects of active labour market policies (ALMP) are the basis for a growing interest in evaluating these measures. This paper intends to explain the need for evaluation on the micro- and macroeconomic level, introduce the fundamental evaluation problem and solutions to it, give an overview of newer developments in the evaluation literature, and finally take a look at empirical estimations of ALMP effects. JEL Classification: C14, C33, H43, J64, J68
This study analyses the effects of public sector sponsored vocational training (PSVT) on individuals' unemployment duration in West Germany for the period from 1985 to 1993. The data are taken from the German Socio-Economic Panel (GSOEP). To resolve the intriguing sample selection problem, i.e. to find an adequate control group for the group of trainees, we employ matching methods. These matching methods use the individual propensity to participate in training, obtained by estimating a panel probit model, as the main matching variable. On the basis of the matched sample, a discrete-time hazard rate model is utilized to assess the effects of training participation on unemployment duration. Our results indicate that a significant positive effect on reemployment chances due to PSVT can only be expected for courses with a duration of no longer than six months. No significant positive effects on post-training reemployment chances were found for courses lasting longer than six months. In fact, these PSVT courses are significantly less effective at increasing reemployment chances than those lasting no longer than three months. JEL classification: C40, J20, J64
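The discrete-time hazard model mentioned here can be sketched by expanding unemployment spells into person-period rows and fitting a logit for the exit probability that includes a training dummy. The simulated spell data, the hand-rolled logit, and the positive true training effect are illustrative assumptions only, not the study's data or results.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, max_t = 500, 12

# Simulated spells: trainees get a higher true monthly exit hazard
# (an illustrative assumption for the sketch)
trained = rng.random(n_persons) < 0.4
rows = []
for i in range(n_persons):
    hazard = 0.15 + 0.10 * trained[i]
    for t in range(1, max_t + 1):
        exit_now = rng.random() < hazard
        # one person-period row: (period, training dummy, exit indicator)
        rows.append((t, float(trained[i]), float(exit_now)))
        if exit_now:
            break
data = np.array(rows)

# Discrete-time hazard as a logit of exit on an intercept and the
# training dummy, fitted by plain gradient ascent
X = np.column_stack([np.ones(len(data)), data[:, 1]])
y = data[:, 2]
w = np.zeros(2)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 1.0 * X.T @ (y - p) / len(y)

print("Training coefficient on the exit hazard:", round(w[1], 3))
```

A full specification along the lines of the study would add duration dependence (period dummies) and covariates to the design matrix.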
This paper provides a review of empirical evidence relating to the impact of training on employment performance. Since a central issue in estimating training effects is the sample selection problem, a short theoretical discussion of different evaluation strategies is given. The empirical overview primarily focuses on non-experimental evidence for Germany. In addition, selected studies for other countries and experimental investigations are discussed.
In this study we are concerned with the impact of vocational training on the individual's unemployment duration in West Germany. The data basis is the German Socio-Economic Panel (GSOEP) for the period from 1984 to 1994. To resolve the intriguing sample selection problem, i.e. to find an adequate control group for the group of trainees, we employ matching methods developed in the statistical literature. These matching methods use as the main matching variable the individual propensity score to participate in training, which is obtained by estimating a random-effects probit model. On the basis of the matched sample, a discrete-time hazard rate model is utilized to assess the impact of vocational training on unemployment duration. Our results indicate that training significantly raises the transition rate of the unemployed into employment in the short run but not in the long run. JEL classification: C40, J20, J64
We estimate a semiparametric single-risk discrete-time duration model to assess the effect of vocational training on the duration of unemployment spells. The data basis used in this study is the German Socio-Economic Panel (GSOEP) for West Germany for the period from 1986 to 1994. To take into account a possible selection bias, actual participation in vocational training is instrumented using estimates of a random-effects probit model for participation in qualification measures. Our main results show that training does have a significant short-term effect of reducing unemployment duration, but that this effect does not persist in the long run. JEL classifications: C41, J20, J64
This paper is intended as a short survey of the most relevant methods for grouped transition data. The fundamentals of duration analysis are discussed in a continuous-time framework, whereas the treatment of methods for discrete durations is limited to the peculiarities of these models. In addition, some recent empirical applications of the methods are discussed.
In recent econometric work, most analyses of female labour supply consider married women, whereas results for unmarried women are provided rather as a by-product (Burtless/Greenberg, 1982; Johnson/Pencavel, 1984; Leu/Kugler, 1986; Merz, 1990). Where the particular interest is focused on unmarried women, data from the seventies or rather simple econometric models are used (Keeley et al., 1978; Hausman, 1980; Coverman/Kemp, 1987). Often very specific populations are examined, for example lone mothers in Blundell/Duncan/Meghir (1992), Jenkins (1992), Staat/Wagenhals (1993) or Laisney et al. (1993). In analysing the economic behaviour of unmarried women, one is confronted with the problem that the term 'unmarried' is not clearly defined. It includes single, divorced, separated and widowed women. They live in different types of households, such as one-person households or family households, where they occupy different economic positions, for example head of the household or relative of the head. The present work considers unmarried female heads of household. We assume that the dominant economic position as head of household, voluntarily or involuntarily occupied, forces these women towards similar behaviour independent of their family status. Thus women of the different family statuses (single, divorced, separated and widowed) are taken together in the analysis. Being unmarried is often regarded as a temporary state, voluntary or involuntary, for example in the case of young women before marriage or of divorced women after their separation. Nevertheless, demographic developments show the increased importance of unmarried women in the population during the last decades. In the USA the share of female-headed households rose from 21.1% in 1970 to 26.2% in 1980 and 29.0% in 1992 (Statistical Abstracts of the United States, 1993; own calculations).
In the FRG, female-headed households constituted 26.4% of total households in 1970, 27.4% in 1980 and 30.1% in 1992 (Stat. Bundesamt, FS 1, Reihe 3, 1970, 1980, 1992). It therefore seems an interesting topic to analyse the labour supply behaviour of unmarried female heads of household. In particular, the question of whether the labour supply of unmarried women more closely resembles that of married women or that of prime-age males is of interest. Another purpose of this analysis is to apply modern econometric panel data models, with special emphasis on the problem of unbalanced panel data. Most panel data analyses are carried out using balanced panel data, which poses no problem if the selection process can be ignored and if enough cases are available to guarantee efficient estimation. Especially the last point was crucial for the present analysis of unmarried females: in the available panel data sets, unmarried female heads constitute only a rather small population. Therefore the estimation techniques were modified to take missing observations for individuals into account. The paper is organized as follows: In section 2 the underlying theoretical model of intertemporal labour supply under uncertainty is briefly presented. Section 3 deals with the econometric specification and estimation techniques, where the use of unbalanced panel data is considered. Section 4 contains the data description, with a particular look at the unbalancedness of the samples. In the final section 5 the empirical results are presented. We compare the estimated parameters for unmarried women between the USA and the FRG and also analyse the differences between unmarried and married women. Moreover, a comparison between different samples of unmarried women is provided.
This paper provides an empirical assessment of hypotheses that identify causes of demand-side constraints on individual labour supply. In a comparative study for the USA and the FRG we focus on analysing the effect of productivity gaps (industry wage growth beyond productivity growth), industry investment intensity and regional labour market conditions on individual employment probabilities. Furthermore, we investigate whether demand-side constraints on labour supply can be caused by a spillover from commodity markets. Efficiency wage theory and the theory of inter-industry wage differentials are utilised to derive identifying restrictions that are applicable to the labour supply models for both countries. The econometric contribution of the paper is the derivation and application of a two-step estimation method for the class of simultaneous random effects double hurdle models, of which the labour supply model employed in this paper is a special case. To provide the empirical basis for the comparative study, the Panel Study of Income Dynamics and the German Socio-Economic Panel are linked to the OECD's International Sectoral Database. JEL classification: C33, C34, J64, O57
Modelling consumer behaviour in a profile design using a three equation generalised Tobit model
(1997)
We propose the application of a three-equation generalised Tobit model to capture different aspects of consumer behaviour in a full profile study design. The model takes into account that consumer behaviour can be measured by preference scores, purchase probability and purchase volume. We aim to avoid the drawbacks of traditional conjoint analysis, where the latter two aspects are disregarded. Starting from a full profile design, we develop the appropriate questionnaire layout, the econometric model, the likelihood function and tests. The model is applied in a market entry study for an innovative medicament after a reform of Germany's public health system in 1993-1994. JEL Classification: C35, M31, L65
Sharing of substructures such as subterms and subcontexts is a common method for the space-efficient representation of terms, which makes it possible, for example, to represent exponentially large terms in polynomial space, or to represent terms with iterated substructures in a compact form. We present singleton tree grammars as a general formalism for the treatment of sharing in terms. Singleton tree grammars (STGs) are recursion-free context-free tree grammars without alternatives for nonterminals and with at most unary second-order nonterminals. STGs generalize Plandowski's singleton context-free grammars to terms (trees). We show that testing whether two different nonterminals in an STG generate the same term can be done in polynomial time, which implies that the equality test for terms with shared subterms and contexts, where composition of contexts is permitted, can be done in polynomial time in the size of the representation. This allows polynomial-time algorithms for terms that exploit sharing. We hope that this technique will lead to improved upper complexity bounds for variants of second-order unification algorithms, in particular for variants of context unification and bounded second-order unification.
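The core intuition behind the polynomial-time equality test is that a shared substructure needs to be compared only once. A minimal sketch of that sharing idea, using hash-consing of a term DAG rather than the paper's full STG machinery with second-order nonterminals (class and function names here are illustrative, not from the paper):

```python
# Hash-consed term DAG: every distinct (sub)term is built exactly once, so
# equality of two terms with shared substructure is a single id comparison.
# This illustrates the sharing idea only; it is not the full STG algorithm.
class TermTable:
    def __init__(self):
        self._ids = {}  # (symbol, child ids...) -> unique term id

    def make(self, symbol, *children):
        key = (symbol,) + children
        if key not in self._ids:
            self._ids[key] = len(self._ids)
        return self._ids[key]

tab = TermTable()
x = tab.make("x")
fx = tab.make("f", x)            # f(x), stored once
big = tab.make("g", fx, fx)      # g(f(x), f(x)) reuses the shared f(x)
# Rebuilding the same term yields the same id: equality in O(1).
assert tab.make("g", tab.make("f", tab.make("x")), fx) == big
```

The full STG setting is harder because nonterminals may expand to exponentially large terms, so equality must be decided on the grammar itself rather than on an expanded DAG.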
A new method for the determination of S-matrices of devices in multimoded waveguides is presented, together with first experimental results. The theoretical foundations are given. The scattering matrix of a TESLA copper cavity has been measured at a frequency above the cut-off of the second waveguide mode.
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
The paper analyzes the incentive for the ECB to establish reputation by pursuing a restrictive policy right at the start of its operation. The bank is modelled as risk averse with respect to deviations of both inflation and output from its targets. The public, being imperfectly informed about the bank's preferences, uses observed inflation as an (imperfect) signal of the unknown preferences. Under linear learning rules, which are commonly used in the literature, a gradual build-up of reputation is the optimal response. The paper shows that such a linear learning rule is not consistent with efficient signaling. In a game with efficient signaling, a cold-turkey approach that allows for deflation is optimal for a strong bank, which accepts high current output losses at the beginning in order to demonstrate its toughness. JEL classification: D82, E58
During the last years the relationship between financial development and economic growth has received widespread attention in the literature on growth and development. This paper summarises in its first part the results of this research, stressing the growth-enhancing effects of an increased interpersonal re-allocation of resources promoted by financial development. The second part of the paper seeks to identify the determinants of financial development based on Diamond's theory of financial intermediation as delegated monitoring. The analysis shows that the quality of corporate governance of banks is the key factor in financial system development. Accordingly, financial sector reforms in developing countries will only succeed if they strengthen the corporate governance of financial institutions. In this area, financial institution building has an important contribution to make. Paper presented at the First Annual Seminar on New Development Finance held at the Goethe University of Frankfurt, September 22 - October 3, 1997
The extension of long-term loans, e.g. to finance housing, is adversely affected by inflation. For one thing, the higher nominal interest rates charged by the banks in response to inflation mean that borrowers have to make (nominally) higher interest payments, which unnecessarily reduces their borrowing capacity. For another, long-term loans with variable interest rates increase the probability that borrowers will become unable to meet their payment obligations. The present paper examines these two assertions in detail. At the same time, it presents a concept for substantially reducing the weaknesses of conventional lending methodologies. We start by investigating the consequences of a stable inflation rate on the borrowing capacity of credit clients, then go on to analyze the impact of fluctuating inflation rates on the risk of default.
Competition for order flow can be characterized as a coordination game with multiple equilibria. Analyzing competition between dealer markets and a crossing network, we show that the crossing network is more stable for lower traders’ disutilities from unexecuted orders. By introducing private information, we prove existence of a unique equilibrium with market consolidation. Assets with low volatility and large volumes are traded on crossing networks, others on dealer markets. Efficiency requires more assets to be traded on crossing networks. If traders’ disutilities differ sufficiently, a unique equilibrium with market fragmentation exists. Low disutility traders use the crossing network while high disutility traders use the dealer market. The crossing network’s market share is inefficiently small.
In this paper, we estimate the demand for homeowners insurance in Florida. Since we are interested in a number of factors influencing demand, we approach the problem from two directions. We first estimate two hedonic equations representing the premium per contract and the price mark-up. We analyze how the contracts are bundled and how contract provisions, insurer characteristics, insured risk characteristics and demographics influence the premium per contract and the price mark-up. Second, we estimate the demand for homeowners insurance using two-stage least squares regression. We employ ISO's indicated loss costs as our proxy for real insurance services demanded. We assume that the demand for coverage is essentially a joint demand, so that we can estimate the demand for catastrophe coverage separately from the demand for non-catastrophe coverage. We find that demand is less price-elastic for catastrophe coverage than for non-catastrophe coverage. Furthermore, estimated income elasticities suggest that homeowners insurance is an inferior good. Finally, based on the results of a selection model, we conclude that our sample of ISO-reporting companies represents the demand for insurance in the Florida market as a whole well.
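The two-stage least squares step can be sketched on synthetic data. All numbers below (instrument, coefficients, sample size) are invented purely for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

z = rng.normal(size=n)                      # instrument (e.g. a cost shifter)
u = rng.normal(size=n)                      # demand shock, also moves price
price = 1.0 + 0.8 * z + 0.5 * u + rng.normal(size=n)
quantity = 10.0 - 2.0 * price + u           # true price coefficient is -2

X = np.column_stack([np.ones(n), price])
Z = np.column_stack([np.ones(n), z])

# Naive OLS is biased because price is correlated with the demand shock u.
beta_ols = np.linalg.lstsq(X, quantity, rcond=None)[0]

# Stage 1: project the endogenous regressor on the instrument set.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
# Stage 2: regress quantity on the fitted values.
beta = np.linalg.lstsq(X_hat, quantity, rcond=None)[0]
print(beta_ols[1], beta[1])  # OLS is biased toward zero; 2SLS is near -2
```

The same two-stage logic applies with a full set of demand shifters and instruments; here one instrument suffices because there is a single endogenous regressor.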
At present, the question of how national pension or retirement payment systems should be organised is being hotly debated in various countries, and opinions vary widely as to what should be regarded as the optimal design for such systems. It appears to the authors of the present paper that in this entire discussion one aspect is largely overlooked: What relationships exist between the pension system and the financial system in a given country? As such relationships might prove to be important, the present paper investigates the following questions: (1) Are there differences between the national pension systems of three major European countries – Germany, France and the U.K. – and between the financial systems of these countries? (2) And if the existence of such differences can be demonstrated, is there a correspondence between the differences with respect to the various national pension systems and the differences as regards the countries’ financial systems? (3) And if such a correspondence exists, is there any kind of interrelationship between the national financial and pension systems of the individual countries which goes beyond a mere correspondence? Looking mainly at two aspects – namely, risk allocation and the incentives to create human capital – the authors of this paper argue (1) that there are indeed considerable differences between the financial and pension systems of the three countries; (2) that in both Germany and the U.K. there are also systematic correspondences between the respective pension systems and financial systems and their economic characteristics, but that such a correspondence cannot be identified in the case of France; and (3) that these parallels are, in the final analysis, based on complementarities and are therefore likely to contribute to the efficiency of the German and the British systems. 
The paper concludes with a brief look at policy implications which the existence of, or the lack of, consistency between national pension systems and national financial systems might have.
Although the world of banking and finance is becoming more integrated every day, in most aspects the world of financial regulation continues to be narrowly defined by national boundaries. The main players here are still national governments and governmental agencies. And until recently, they tended to follow a policy of shielding their activities from scrutiny by their peers and members of the academic community rather than inviting critical assessments and an exchange of ideas. The turbulence in international financial markets in the 1980s, and its impact on U.S. banks, gave rise to the notion that academics working in the field of banking and financial regulation might be in a position to make a contribution to the improvement of regulation in the United States, and thus ultimately to the stability of the entire financial sector. This provided the impetus for the creation of the “U.S. Shadow Financial Regulatory Committee”. In the meantime, similar shadow committees have been founded in Europe and Japan. The specific problems associated with financial regulation in Europe, as well as the specific features which distinguish the European Shadow Financial Regulatory Committee from its counterparts in the U.S. and Japan, derive from the fact that while Europe has already made substantial progress towards economic and political integration, it is still primarily a collection of distinct nation-states with differing institutional set-ups and political and economic traditions. Therefore, any attempt to work towards a European approach to financial regulation must include an effort to promote the development of a European culture of co-operation in this area, and this is precisely what the European Shadow Financial Regulatory Committee (ESFRC) seeks to do. In this paper, Harald Benink, chairman of the ESFRC, and Reinhard H. Schmidt, one of the two German members, discuss the origin, the objectives and the functioning of the committee and the thrust of its recommendations.
In this paper we have developed a financial model of the non-life insurer to assist the management of the insurance company in making decisions on the product, investment and reinsurance mix. The model is based on portfolio theory and recognizes the stochastic nature of, and the interaction between, the underwriting and investment income of the insurance business. In the context of an empirical application we illustrate how a portfolio optimisation approach can be used for asset-liability management.
Our study provides evidence on the share price reactions to the announcement of equity issues in Germany, where the capital market is characterized by institutional features distinct from those of the U.S. market. German seasoned equity issues yield a positive market reaction, which contrasts with the significantly negative abnormal returns reported for the U.S. We provide evidence that these results are due to differences both in issuing characteristics and flotation methods and in the corporate governance and ownership structures of the two countries. Our study explains much of the empirical puzzle of different market reactions to seemingly similar events across financial markets.
Real options theory applies techniques known from finance theory to the valuation of capital investments. The present paper investigates this analogy further, considering the case of a portfolio of real options. An implementation of real option models in practice will mostly be concerned with a portfolio of real options, so the analysis of portfolio aspects is of both academic and practical interest. Is a portfolio of real options special? To shed some light on this question, the present paper outlines the relevant features of a portfolio of real options. It shows that the analogy to financial options remains close if compound option models are applied. As a result, a portfolio of real options, and therefore the firm as such, is generally to be understood as one single compound real option.
We present an empirical study focusing on the estimation of a fundamental multi-factor model for a universe of European stocks. Following the approach of the BARRA model, we have adopted a cross-sectional methodology. The proportion of explained variance ranges from 7.3% to 66.3% in the weekly regressions, with a mean of 32.9%. For the individual factors we give the percentage of weeks in which they had a statistically significant influence on stock returns. The best explanatory power, apart from the dominant country factors, was found for the statistical constructs "success" and "variability in markets".
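The cross-sectional methodology regresses, week by week, the stock returns on the stocks' factor exposures and records the fit. A toy version on simulated data (the exposures, factor labels and volatility scales are made up for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks, n_weeks = 200, 50

# Stock exposures to two hypothetical fundamental factors
# (think "success" and "variability in markets"; values are simulated).
exposures = rng.normal(size=(n_stocks, 2))
X = np.column_stack([np.ones(n_stocks), exposures])

r2s = []
for _ in range(n_weeks):
    # Each week the factors realize a return; stocks earn exposure times
    # factor return plus idiosyncratic noise.
    f = rng.normal(scale=0.02, size=2)
    returns = exposures @ f + rng.normal(scale=0.03, size=n_stocks)
    beta, res, *_ = np.linalg.lstsq(X, returns, rcond=None)  # beta estimates f
    r2s.append(1.0 - res[0] / ((returns - returns.mean()) ** 2).sum())

print(round(float(np.mean(r2s)), 2))  # average weekly cross-sectional R^2
```

In the actual model the weekly estimated coefficients are the realized factor returns, and their significance across weeks is what the reported percentages summarize.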
Who knows what when? : The information content of pre-IPO market prices : [Version March/June 2002]
(2002)
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
We propose a new framework for modelling time dependence in duration processes on financial markets. The well-known autoregressive conditional duration (ACD) approach introduced by Engle and Russell (1998) is extended in a way that allows the conditional expectation of the duration process to depend on an unobservable stochastic process, which is modelled via a Markov chain. The Markov switching ACD (MSACD) model is a very flexible tool for the description and forecasting of financial duration processes. In addition, the introduction of an unobservable, discrete-valued regime variable can be justified in the light of recent market microstructure theories. In an empirical application we show that the MSACD approach is able to capture several specific characteristics of inter-trade durations where alternative ACD models fail. Furthermore, we use the MSACD to test implications of a sequential trade model.
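A Markov switching ACD process is straightforward to simulate: a latent two-state Markov chain shifts the constant of an ACD(1,1) recursion for the conditional expected duration. The parameter values below are illustrative only, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two latent regimes (say, fast and slow trading) governed by a Markov chain.
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])            # regime transition probabilities
omega = np.array([0.2, 1.0])            # regime-specific ACD constants
alpha, beta = 0.1, 0.8                  # common ACD(1,1) dynamics

n = 10_000
state, psi, x = 0, 1.0, 1.0
durations = np.empty(n)
states = np.empty(n, dtype=int)
for i in range(n):
    state = rng.choice(2, p=P[state])             # regime switch
    psi = omega[state] + alpha * x + beta * psi   # conditional expected duration
    x = psi * rng.exponential()                   # observed duration
    durations[i], states[i] = x, state

# Durations in the slow regime are systematically longer.
print(durations[states == 1].mean() > durations[states == 0].mean())
```

Estimation of the model is of course more involved, since the regime path is unobserved and must be integrated out, but the simulation shows how regime persistence generates duration clustering beyond what a single-regime ACD model produces.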
Banking and markets
(2001)
This paper integrates a number of recent themes in the literature on banking and asset markets – optimal risk sharing, limited market participation, asset-price volatility, market liquidity, and financial crises – in a general-equilibrium theory of the financial system. A complex financial system comprises both financial markets and financial institutions. Financial institutions can take the form of intermediaries or banks. Banks, unlike intermediaries, are subject to runs, but crises do not imply market failure. We show that a sophisticated financial system – a system with complete markets for aggregate risk and limited market participation – is incentive-efficient if the institutions take the form of intermediaries, or else constrained-efficient if they take the form of banks. We also consider an economy in which the markets for aggregate risks are incomplete. In this context, there is a role for prudential regulation: regulating liquidity can improve welfare.
Executive Stock Option Programs (SOPs) have become the dominant compensation instrument for top management in recent years. The incentive effects of an SOP with respect to both corporate investment and financing decisions critically depend on the design of the SOP. A specific problem in designing SOPs concerns dividend protection. Usually, SOPs are not dividend protected, i.e. any dividend payout decreases the value of a manager's options. Empirical evidence shows that this results in a significant decrease in the level of corporate dividends and, at the same time, in an increase in share repurchases. Yet few suggestions have been made on how to account for dividends in SOPs. This paper applies arguments from principal-agent theory and from the theory of finance to analyze different forms of dividend protection and to address the relevance of dividend protection in SOPs. Finally, the paper relates the theoretical analysis to empirical work on the link between share repurchases and SOPs.
Since the beginning of the 1990s, it has been widely expected that the implementation of the European Single Market would lead to a rapid convergence of Europe’s financial systems. In the present paper we will show that at least in the period prior to the introduction of the common currency this expected convergence did not materialise. Our empirical studies on the significance of various institutions within the financial sectors, on the financing patterns of firms in various countries and on the predominant mechanisms of corporate governance, which are summarised and placed in a broader context in this paper, point to few, if any, signs of a convergence at a fundamental or structural level between the German, British and French financial systems. The German financial system continues to appear to be bank-dominated, while the British system still appears to be capital market-dominated. During the period covered by the research, i.e. 1980 – 1998, the French system underwent the most far-reaching changes, and today it is difficult to classify. In our opinion, these findings can be attributed to the effects of strong path dependencies, which are in turn an outgrowth of relationships of complementarity between the individual system components. Projecting what we have observed into the future, the results of our research indicate that one of two alternative paths of development is most likely to materialise: either the differences between the national financial systems will persist, or – possibly as a result of systemic crises – one financial system type will become the dominant model internationally. And if this second path emerges, the Anglo-American, capital market-dominated system could turn out to be the “winner”, because it is better able to withstand and weather crises, but not necessarily because it is more efficient.
In this paper we study the benefits derived from international diversification of stock portfolios from the German and the Hungarian points of view. In contrast to the German capital market, which is one of the largest in the world, the Hungarian Stock Exchange is an emerging market. The Hungarian stock market is highly volatile, and high returns are often accompanied by extremely large risk. Therefore, there is good potential for Hungarian investors to realize substantial benefits in terms of risk reduction by creating multi-currency portfolios. The paper gives evidence on the above-mentioned benefits for both countries by examining the performance of several ex ante portfolio strategies. In order to control the currency risk, different types of hedging approaches are implemented.
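The basic risk-reduction mechanism behind such multi-currency portfolios can be illustrated with a one-factor toy simulation; the volatility figures are invented, and currency hedging itself is not modelled here:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_assets = 1000, 8

# One common market factor plus idiosyncratic noise per asset
# (all volatility numbers are hypothetical).
factor = rng.normal(scale=0.01, size=T)
rets = factor[:, None] + rng.normal(scale=0.04, size=(T, n_assets))

single_vol = rets[:, 0].std()        # stand-alone volatility of one asset
port_vol = rets.mean(axis=1).std()   # equally weighted portfolio volatility
print(round(float(single_vol), 3), round(float(port_vol), 3))
```

Diversification averages away the idiosyncratic component, leaving mainly the common factor risk; for a Hungarian investor holding foreign assets, currency risk adds a further component that the paper's hedging approaches address.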
Financial development and financial institution building are important prerequisites for economic growth. However, both the potential and the problems of institution building are still vastly underestimated by those who design and fund institution building projects. The paper first underlines the importance of financial development for economic growth, then describes the main elements of “serious” institution building: the lending technology, the methodological approaches, and the question of internal structure and corporate governance. Finally, it discusses three problems which institution building efforts have to cope with: inappropriate expectations on the part of donor and partner institutions regarding the problems and effects of institution building efforts, the lack of awareness of the importance of governance and ownership issues, and financial regulation that is too restrictive for microfinance operations. All three problems together explain why there are so few successful micro and small business institutions operating worldwide.
We analyze incentives for loan officers in a model with hidden action, limited liability and truth-telling constraints under the assumption that the principal has private information from an automatic scoring system. First we show that the truth-telling problem reduces the bank's expected profit whenever the loan officer cannot only conceal bad types, but can also falsely report bad types. Second, we investigate whether the bank should reveal her private information to the agent. We show that this depends on the percentage of good loans in the population and on the signal's informativeness. Though different parameter regions have to be distinguished, we conclude that it may often be favorable not to reveal the signal. This contradicts current practice.
We investigate the suggested substitutive relation between executive compensation and the disciplinary threat of takeover imposed by the market for corporate control. We complement other empirical studies on managerial compensation and corporate control mechanisms in three distinct ways. First, we concentrate on firms in the oil industry for which agency problems were especially severe in the 1980s. Due to the extensive generation of excess cash flow, product and factor market discipline was ineffective. Second, we obtain a unique data set drawn directly from proxy statements which accounts not only for salary and bonus but for the value of all stock-market based compensation held in the portfolio of a CEO. Our data set consists of 51 firms in the U.S. oil industry from 1977 to 1994. Third, we employ ex ante measures of the threat of takeover at the individual firm level which are superior to ex post measures like actual takeover occurrence or past incidence of takeovers in an industry. Results show that annual compensation and, to a much higher degree, stock-based managerial compensation increase after a firm becomes protected from a hostile takeover. However, clear-cut evidence that CEOs of protected firms receive higher compensation than those of firms considered susceptible to a takeover cannot be found.
Individual financial systems can be understood as very specific configurations of certain key elements. Often these configurations remain unchanged for decades. We hypothesize that there is a specific relationship between key elements, namely that of complementarity. Thus, complementarity seems to be an essential feature of financial systems. Intuitively speaking, complementarity exists if the elements of a (financial) system reinforce each other in terms of contributing to the functioning of the system. It is the purpose of this paper to provide an analytical clarification of the concept of complementarity. This is done by modeling financial systems as combinations of four elements: firm-specific human capital of an entrepreneur, the ability of a bank to restructure the borrower's firm in the case of distress, the possibility to appropriate private benefits from running the firm, and the bankruptcy law. A specific configuration of these elements constitutes one financial system. The bankruptcy law and the potential private benefits are treated as exogenous. They determine the bargaining power of the contracting parties in the case that recontracting occurs. In a two-stage game, the optimal values for the other elements are determined by the agents individually - by investing in human capital and restructuring skills, respectively - and jointly by writing, executing and possibly renegotiating a financing contract for the firm. The paper discusses the equilibria for different types of bankruptcy law and demonstrates that equilibria exhibit the sought-after feature of complementarity. Three particularly significant equilibria correspond to stylized accounts of the British, German and the US-American financial system, respectively.
The paper presents an empirical analysis of the alleged transformation of the financial systems in the three major European economies, France, Germany and the UK. Based on a unified data set developed on the basis of national accounts statistics, and employing a new and consistent method of measurement, the following questions are addressed: Is there a common pattern of structural change; do banks lose importance in the process of change; and are the three financial systems becoming more similar? We find that there is neither a general trend towards disintermediation, nor towards a transformation from bank-based to capital market-based financial systems, nor a general loss of importance of banks. Only in the case of France could strong signs of transformation, as well as signs of a general decline in the role of banks, be found. Thus the three financial systems do not seem to be becoming more similar. However, there is a common pattern of change: the intermediation chains are lengthening in all three countries. Non-bank financial intermediaries are taking over a more important role as mobilizers of capital from the non-financial sectors. In combination with the trend towards securitization of bank liabilities, this change increases the funding costs of banks and may put banks under pressure. In the case of France, this change is so pronounced that it might even threaten the stability of the financial system.
Market discipline for financial institutions can be imposed not only from the liability side, as has often been stressed in the literature on the use of subordinated debt, but also from the asset side. This will be particularly true if good lending opportunities are in short supply, so that banks have to compete for projects. In such a setting, borrowers may demand that banks commit to monitoring by requiring that they use some of their own capital in lending, thus creating an asset market-based incentive for banks to hold capital. Borrowers can also provide banks with incentives to monitor by allowing them to reap some of the benefits from the loans, which accrue only if the loans are in fact paid off. Since borrowers do not fully internalize the cost of raising capital to the banks, the level of capital demanded by market participants may be above the one chosen by a regulator, even when capital is a relatively costly source of funds. This implies that capital requirements may not be binding, as recent evidence seems to indicate. JEL Classification: G21, G38
We explore the macro/finance interface in the context of equity markets. In particular, using half a century of Livingston expected business conditions data, we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a statistically and economically significant counter-cyclical fashion: depressed expected business conditions are associated with high expected excess returns. Moreover, inclusion of expected business conditions in otherwise standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R². Expected business conditions retain predictive power even after controlling for an important and recently introduced non-financial predictor, the generalized consumption/wealth ratio, which accords with the view that expected business conditions play a role in asset pricing different from and complementary to that of the consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, while time-varying consumption/wealth may capture time-varying risk aversion. JEL Classification: G12
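The kind of predictive return regression described can be sketched as follows; the persistence, slope and noise values are invented to mimic a counter-cyclical premium, not estimated from the Livingston data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 600  # months of hypothetical data

# Persistent survey-style expectations series (AR(1); parameters invented).
cond = np.zeros(T)
for t in range(1, T):
    cond[t] = 0.9 * cond[t - 1] + rng.normal()

# Counter-cyclical premium: depressed expected conditions raise expected
# excess returns, so the true slope is negative.
excess = 0.005 - 0.004 * cond[:-1] + rng.normal(scale=0.04, size=T - 1)

# Predictive regression of next-period excess returns on the lagged predictor.
X = np.column_stack([np.ones(T - 1), cond[:-1]])
beta, res, *_ = np.linalg.lstsq(X, excess, rcond=None)
r2 = 1.0 - res[0] / ((excess - excess.mean()) ** 2).sum()
print(beta[1] < 0, round(float(r2), 3))
```

In the paper's horse race, further lagged predictors (dividend yield, default and term premia, consumption/wealth) would simply be added as extra columns of X, and the question is how much R² each contributes.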
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. ART products thus play an interesting role in our model: they are useful, but not used in equilibrium because of basis risk. JEL Classification: D82, G22
This chapter focuses on institutional investors in the German financial markets. Institutional investors are specialized financial intermediaries who collect and manage funds on behalf of small investors toward specific objectives in terms of risk, return and maturity. The major types of institutional investors in Germany are insurance companies and investment funds. We will examine the nature of their businesses, their size and role in the financial sector, the size and the composition of the assets under their management, aspects of financial regulation, and features of their asset-liability management.
We analyze exchange rates along with equity quotes for 3 German firms from New York (NYSE) and Frankfurt (XETRA) during overlapping trading hours to see where price discovery occurs and how stock prices adjust to an exchange rate shock. Findings include: (a) the exchange rate is exogenous with respect to the stock prices; (b) exchange rate innovations are more important in understanding the evolution of NYSE prices than XETRA prices; and (c) most (but not all) of the fundamental or random walk component of firm value is determined in Frankfurt.
For the Neuer Markt, the year 2001 is not considered one of its best, compared to its prior performance. Investors who once piled into the Neuer Markt have become wary of the exchange, which was launched in 1997 as Europe's leading growth market and answer to the U.S. Nasdaq Stock Market. The Neuer Markt's reputation has been marred by the misleading information policy of several of its companies, which published false annual and quarterly data. Some of these companies are responsible for having misinformed investors about their pending bankruptcies. Under these circumstances, it is time to find an explanation for the dramatic loss of credibility of Neuer Markt enterprises. In seeking an answer, two aspects come under consideration: • what type of information (annual versus quarterly reports) was available to investors, and • of what quality these data were. Interim reports can be seen as an important instrument in the reporting system to inform all kinds of investors. For this reason we examine the quality of Neuer Markt quarterly reports by concentrating on the disclosure level of 52 Neuer Markt companies' reports for the third quarters of 1999 and 2000. To enable comparison we establish four disclosure indexes that measure the reports' compliance with the Neuer Markt Rules and Regulations as well as with IAS and US GAAP interim reporting standards. The results demonstrate that the level of disclosure has increased over time. We then aim to find typical attributes of Neuer Markt enterprises that provide a high or low level of accounting information in their quarterly reports. The study also shows, however, that there is no correlation between market capitalization and the quality of interim reports. It can nevertheless be suggested that an additional enforcement mechanism could improve quality and lure investors back. A step towards this aim is Deutsche Boerse AG's standardization project for quarterly reports.
Open source projects produce goods or standards that do not allow for the appropriation of private returns by those who contribute to their production. In this paper we analyze why programmers will nevertheless invest their time and effort to code open source software. We argue that the particular way in which open source projects are managed and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that more mediocre programmers cannot achieve. Through setting themselves apart they can turn this signal into monetary rewards that correspond to their superior capabilities. With this incentive they will forgo the immediate rewards they could earn in software companies producing proprietary software by restricting the access to the source code of their product. Whenever institutional arrangements are in place that enable the acquisition of such a signal and the subsequent substitution into monetary rewards, the contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
What constitutes a financial system in general and the German financial system in particular?
(2003)
This paper is one of the two introductory chapters of the book "The German Financial System". It first discusses two issues that have a general bearing on the entire book, and then provides a broad overview of the German financial system. The first general issue is that of clarifying what we mean by the key term "financial system" and, based on this definition, of showing why the financial system of a country is important and what it might be important for. Obviously, a definition of its subject matter and an explanation of its importance are required at the outset of any book. As we will explain in Section II, we use the term "financial system" in a broad sense which sets it clearly apart from the narrower concept of the "financial sector". The second general issue is that of how financial systems are described and analysed. Obviously, the definition of the object of analysis and the method by which the object is to be analysed are closely related to one another. The remainder of the paper provides a general overview of the German financial system. In addition, it is intended to provide a first indication of how the elements of the German financial system are related to each other, and thus to support our claim from Section II that there is indeed some merit in emphasising the systemic features of financial systems in general and of the German financial system in particular. The chapter concludes by briefly comparing the general characteristics of the German financial system with those of the financial systems of other advanced industrial countries, and taking a brief look at recent developments which might undermine the "systemic" character of the German financial system.
Portfolio choice and estimation risk : a comparison of Bayesian approaches to resampled efficiency
(2002)
Estimation risk is known to have a huge impact on mean/variance (MV) optimized portfolios, which is one of the primary reasons standard Markowitz optimization is infeasible in practice. Several approaches to incorporating estimation risk into portfolio selection have been suggested in the earlier literature. These papers regularly discuss heuristic approaches (e.g., placing restrictions on portfolio weights) and Bayesian estimators. Among the Bayesian class of estimators, we focus in this paper on the Bayes/Stein estimator developed by Jorion (1985, 1986), which is probably the most popular estimator. We show that optimal portfolios based on the Bayes/Stein estimator correspond to portfolios on the original mean-variance efficient frontier with a higher risk aversion. We quantify this increase in risk aversion. Furthermore, we review a relatively new approach introduced by Michaud (1998), resampling efficiency. Michaud argues that the limitations of MV efficiency in practice generally derive from a lack of statistical understanding of MV optimization. He advocates a statistical view of MV optimization that leads to new procedures that can reduce estimation risk. Resampling efficiency has until now been contrasted with standard Markowitz portfolios, but not with other approaches that explicitly incorporate estimation risk. This paper attempts to fill this gap. Optimal portfolios based on the Bayes/Stein estimator and resampling efficiency are compared in an empirical out-of-sample study in terms of their Sharpe ratio and in terms of stochastic dominance.
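The Bayes/Stein shrinkage discussed above can be sketched in a few lines: each asset's sample mean is pulled toward the mean return of the global minimum-variance portfolio, with an intensity estimated from the data. A minimal sketch following Jorion's 1986 formula, with a hypothetical function name and simulated return data:

```python
import numpy as np

def bayes_stein_means(returns):
    """Jorion-style Bayes/Stein shrinkage of sample mean returns.

    Shrinks each asset's sample mean toward the mean return of the
    global minimum-variance portfolio. `returns` is a T x N array.
    """
    T, N = returns.shape
    mu = returns.mean(axis=0)                  # sample means
    sigma = np.cov(returns, rowvar=False)      # sample covariance
    sigma_inv = np.linalg.inv(sigma)
    ones = np.ones(N)
    # mean return of the global minimum-variance portfolio
    mu0 = ones @ sigma_inv @ mu / (ones @ sigma_inv @ ones)
    diff = mu - mu0
    # shrinkage intensity (Jorion 1986): higher when means are noisy
    lam = (N + 2) / (diff @ sigma_inv @ diff)
    w = lam / (lam + T)
    return (1 - w) * mu + w * mu0 * ones

rng = np.random.default_rng(0)
r = rng.normal(0.01, 0.05, size=(120, 4))      # 10 years of monthly returns, 4 assets
shrunk = bayes_stein_means(r)
print(shrunk)
```

Because the shrunk vector is a convex combination of the sample means and a constant, the cross-sectional spread of the estimates always narrows, which is exactly the effect the paper interprets as an increase in effective risk aversion.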
Tying management compensation to profit measures plays an important role in aligning management decisions with the objectives of the firm's owners. This paper shows under which profit-measurement rules an agent is motivated to make optimal investment decisions when he participates in residual income. In particular, it addresses the question of whether, for the purpose of optimal investment control, finished goods should be valued at full cost or at variable cost. Against this background, different valuation approaches for receivables are also examined with respect to their incentive effects.
Recent changes in accounting regulation for financial instruments (SFAS 133, IAS 39) have been heavily criticized by representatives from the banking industry. They argue for retaining a historical cost based "mixed model" where accounting for financial instruments depends on their designation to either trading or nontrading activities. In order to demonstrate the impact of different accounting models for financial instruments on the financial statements of banks, we develop a bank simulation model capturing the essential characteristics of a modern universal bank with investment banking and commercial banking activities. In our simulations we look at different scenarios with periods of increasing/decreasing interest rates using historical data and with different banking strategies (fully hedged; partially hedged). The financial statements of our model bank are prepared under different accounting rules ("Old" IAS before implementation of IAS 39; current IAS) with and without hedge accounting as offered by the respective sets of rules. The paper identifies critical issues of applying the different accounting rules for financial instruments to the activities of a universal bank. It demonstrates important shortcomings of the "Old" IAS rules (before IAS 39), and of the current IAS rules. Under the current IAS rules the results of a fully hedged bank may have to show volatility in income statements due to changes in market interest rates. Accounting results of a partially hedged bank in the same scenario may be less affected even though there are economic gains or losses.
As past research suggests, currency exposure risk is a main source of the overall risk of internationally diversified portfolios. Thus, controlling currency risk is an important instrument for controlling and improving the investment performance of international investments. This study examines the effectiveness of controlling currency risk for internationally diversified mixed-asset portfolios via different hedge tools. Several hedging strategies, using currency forwards and currency options, are evaluated and compared with each other. For this purpose, the stock and bond markets of the United Kingdom, Germany, Japan, Switzerland, and the U.S. over the period January 1985 to December 2002 are considered, from the point of view of a German investor. Due to the highly skewed return distributions of options, the application of the traditional mean-variance framework for portfolio optimization is doubtful when options are considered. To account for this problem, a mean-LPM model is employed. Currency trends are also taken into account to examine how the relative potential gains of risk-controlling strategies depend on time trends in currency movements.
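The mean-LPM framework mentioned above replaces variance with a lower partial moment, which penalizes only returns below a target and therefore handles the skewed return distributions of option strategies. A minimal sketch of the LPM calculation, with a hypothetical function name and toy returns:

```python
import numpy as np

def lpm(returns, target=0.0, order=2):
    """Lower partial moment: mean of (target - r)^order over shortfalls.

    order=2 penalizes squared shortfalls below the target return;
    returns above the target contribute nothing, unlike variance.
    """
    shortfall = np.maximum(target - np.asarray(returns), 0.0)
    return float(np.mean(shortfall ** order))

r = [0.05, -0.02, 0.03, -0.04, 0.01]
print(lpm(r, target=0.0, order=2))   # only -0.02 and -0.04 contribute
```

With these toy returns, only the two negative observations enter the moment, so the statistic equals (0.02² + 0.04²) / 5 = 0.0004.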
Rating agencies state that they take a rating action only when it is unlikely to be reversed shortly afterwards. Based on a formal representation of the rating process, I show that such a policy provides a good explanation for the empirical evidence: Rating changes occur relatively seldom, exhibit serial dependence, and lag changes in the issuers’ default risk. In terms of informational losses, avoiding rating reversals can be more harmful than monitoring credit quality only twice per year.
The purpose of this paper is to compare three different index construction methodologies for commercial property investments. We examine for different European countries (i) appraisal-based indices and methods of "unsmoothing" the corresponding return series, (ii) indices that trace average ex-post transaction prices over time, and (iii) indices based on Real Estate Investment Trust share prices.
Substantial research attention has been devoted to the pension accumulation process, whereby employees and those advising them work to accumulate funds for retirement. Until recently, less analysis has been devoted to the pension decumulation process – the process by which retirees finance their consumption during retirement. This gap has recently begun to be filled by an active group of researchers examining key aspects of the pension payout market. One of the areas of most interesting investigation has been in the area of annuities, which are financial products intended to cover the risk of retirees outliving their assets. This paper reviews and extends recent research examining the role of annuities in helping finance retirement consumption. We also examine key market and regulatory factors.
This paper examines the provision of managerial investment incentives by an accounting-based incentive scheme in a multiperiod agency setting in which an impatient manager has to choose between mutually exclusive investment projects. We study the properties of accounting rules that motivate an impatient manager to exert unobservable effort and to make optimal investment decisions. In this analysis, a realized cash flow constitutes a noisy signal that contains information about the unknown profitability of the investment project. By observing these signals a principal is able to revise his prior beliefs about the agent's investment decision. The revision of the principal's prior beliefs leads to a trade-off between the provision of efficient investment incentives and intertemporal sharing of output.
Under a new Basel capital accord, bank regulators might use quantitative measures when evaluating the eligibility of internal credit rating systems for the internal ratings based approach. Based on data from Deutsche Bundesbank and using a simulation approach, we find that it is possible to identify strongly inferior rating systems out-of time based on statistics that measure either the quality of ranking borrowers from good to bad, or the quality of individual default probability forecasts. Banks do not significantly improve system quality if they use credit scores instead of ratings, or logistic regression default probability estimates instead of historical data. Banks that are not able to discriminate between high- and low-risk borrowers increase their average capital requirements due to the concavity of the capital requirements function.
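The two kinds of quality statistics mentioned in the abstract above (the quality of ranking borrowers from good to bad, and the quality of individual default probability forecasts) are commonly operationalized as the area under the ROC curve and the Brier score. A minimal sketch of both, with hypothetical function names and a toy portfolio:

```python
import numpy as np

def auc(default_prob, defaulted):
    """Area under the ROC curve: the probability that a randomly drawn
    defaulter received a higher predicted PD than a randomly drawn
    non-defaulter (ties count half). Measures ranking quality."""
    pd_def = default_prob[defaulted == 1]
    pd_ok = default_prob[defaulted == 0]
    wins = (pd_def[:, None] > pd_ok[None, :]).sum()
    ties = (pd_def[:, None] == pd_ok[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pd_def) * len(pd_ok))

def brier(default_prob, defaulted):
    """Brier score: mean squared error of the PD forecasts.

    Measures calibration of the individual probabilities; lower is better."""
    return float(np.mean((default_prob - defaulted) ** 2))

probs = np.array([0.01, 0.03, 0.10, 0.25, 0.40])   # predicted PDs
outcomes = np.array([0, 0, 0, 1, 1])               # realized defaults
print(auc(probs, outcomes), brier(probs, outcomes))
```

Note that the two statistics can disagree: a rating system can rank borrowers perfectly (AUC of 1, as in the toy data) while its probability levels remain poorly calibrated, which is why both dimensions are evaluated separately.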
The theoretical derivation of credit market segmentation as the result of a free market process
(2003)
Information asymmetries make it difficult for banks to assess accurately whether specific entrepreneurs are able and/or willing to repay their loans. This leads to implicit interest rate ceilings, i.e. banks "refuse" to increase their interest rates beyond this ceiling as this would lower their net returns. Although the maximum interest rate increases as the size of enterprises decreases, such ceilings nonetheless constrain the banks’ ability to set interest rates at a level that would enable them to cover costs. If transaction costs are high, the total costs associated with granting small and medium-sized loans will exceed the maximum average return which the banks can earn by issuing such loans. For this reason, banks do not lend to small and medium-sized enterprises, and, as a consequence, these businesses have no access to formal sector loans. Because micro and small enterprises have a very high RoI, it is worthwhile for them to rely on expensive informal loans to finance their operations, at least until they reach a certain size. Once they have reached this size, however, it does not make economic sense for them to continue taking out informal credits, and thus they face a growth constraint imposed by the credit market. Medium-sized enterprises earn a lower RoI than small ones, which is why borrowing in the informal credit market is not a worthwhile option for them. Moreover, they do not have access to credit from formal financial institutions, and are thus excluded from obtaining any kind of financing in either of the two credit markets. As the result of free, unregulated market forces we get a stable equilibrium in which the credit market is segmented into an informal (small loan) segment, a formal (large loan) segment and, in between, a "non-market" (medium loan) segment.
This paper analyses the long-term effects of improved small-scale lending, often provided by microfinance institutions set up with the support of development aid. The analysis shows that some common assumptions about microfinance are not true at all: First, it shows that the impact on income will accrue not to the microenterprises themselves, but rather to the consumers of their products. Second, microfinance will have a significant positive effect on the wage levels of employees in the informal sector. Third, microfinance will cause high growth rates in the informal production sector, whereas the trade sector will either contract or at best grow very little.
An economy in which deposit-taking banks of a Diamond/ Dybvig style and an asset market coexist is modelled. Firstly, within this framework we characterize distinct financial systems depending on the fraction of households with direct investment opportunities that are less efficient than those available to banks. With this fraction comparatively low, the evolving financial system can be interpreted as market-oriented. In this system, banks only provide efficient investment opportunities to households with inferior investment alternatives. Banks are not active in the secondary financial market nor do they provide any liquidity insurance to their depositors. Households participate to a large extent in the primary as well as in the secondary financial markets. In the other case of a relatively high fraction of households with inefficient direct investment opportunities, a bank-dominated financial system arises, in which banks provide liquidity transformation, are active in secondary financial markets and are the only player in primary markets, while households only participate in secondary financial markets. Secondly, we analyze the effect a run on a single bank has on the entire financial system. Interestingly, we can show that a bank run on a single bank causes contagion via the financial market neither in market-oriented nor in extremely bank-dominated financial systems. But in only moderately bank-dominated (or hybrid) financial systems fire sales of long-term financial claims by a distressed bank cause a sudden drop in asset prices that precipitates other banks into crisis.
Capital rationing is an empirically well-documented phenomenon. This constraint requires managers to make investment decisions between mutually exclusive investment opportunities. In a multiperiod agency setting, this paper analyses accounting rules that provide managerial incentives for efficient project selection. In order to motivate a shortsighted manager to expend unobservable effort and to make efficient investment decisions, the principal sets up an incentive scheme based on residual income (e.g., EVA™). The paper shows that income smoothing generates a trade-off between agency costs resulting from differences in discount rates and the costs associated with the "congruity" of residual earnings.
Open-end real estate funds (so called "Offene Immobilienfonds") play a major role in the German market for securitised real estate investments. Such funds are pools of money from many investors, which are invested in real estate by special investment management companies. This study seeks to identify the risk and return profile of this investment vehicle (before and after income taxes), to compare them with those of other major asset classes, and to provide implications for their appropriate role in a mixed-asset portfolio. Additionally, an overview of the institutional architecture and role of German open-end real estate funds is given. Empirical evidence suggests that the financial characteristics of open-end real estate funds are in many respects similar to those reported for direct real estate investments. Accordingly, German open-end real estate funds qualify for medium and long-term investment horizons, rather than for shorter holding periods.
This paper investigates the magnitude and the main determinants of share price reactions to buy-back announcements of German corporations. Based on a sample of 224 announcements from the period May 1998 to April 2003 we find average cumulative abnormal returns around -7.5% for the thirty days preceding the announcement and around +7.0% for the ten days following the announcement. We regress post-announcement abnormal returns on multiple firm characteristics and provide evidence which supports the undervaluation signaling hypothesis but not the excess cash hypothesis. In extending prior empirical work, we also analyze price effects from an initial statement by management that it intends to seek shareholder approval for a buy-back plan. Observed cumulative abnormal returns on this initial date are in excess of 5%, implying a total average price effect between 12% and 15% from implementing a buy-back plan. We conjecture that the German regulatory environment is the main reason why market reactions to buy-back announcements are much stronger in Germany than in other countries and conclude that initial statements by managers to seek shareholders' approval for a buy-back plan should also be subject to legal ad-hoc disclosure requirements. EFM classification: 330, 350
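Cumulative abnormal returns of the kind reported above are typically computed with a market-model event study: alpha and beta are estimated over a clean estimation window, and abnormal returns are summed over the event window. A minimal sketch on noise-free toy data, so the recovered CAR equals the injected effect exactly; the function name and data are hypothetical:

```python
import numpy as np

def cumulative_abnormal_return(stock, market, est_slice, event_slice):
    """Market-model event study: estimate alpha and beta over the
    estimation window, then sum abnormal returns over the event window."""
    # np.polyfit returns coefficients highest degree first: [beta, alpha]
    beta, alpha = np.polyfit(market[est_slice], stock[est_slice], 1)
    abnormal = stock[event_slice] - (alpha + beta * market[event_slice])
    return float(abnormal.sum())

# Noise-free toy data: stock is exactly linear in the market, plus a
# hypothetical announcement effect of 0.01 per day over an 11-day window.
rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, 300)
stock = 0.0002 + 1.2 * market
stock[260:271] += 0.01
car = cumulative_abnormal_return(stock, market, slice(0, 250), slice(260, 271))
print(round(car, 4))   # → 0.11, the injected 11 x 0.01 effect
```

On real data the residual noise of course does not vanish, which is why event studies average CARs across many announcements before testing significance.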
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that therefore financial patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by demonstrating that the surprising empirical results found by Mayer et al. are due to a hidden assumption underlying their methodology. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. The empirical results are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way.
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that for some option pricing models under continuous hedging the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test to be either no longer able to detect additional priced risk factors or to be unable to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the model of BS, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the model of Heston, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios which are similar to those reported in many empirical studies the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore strongly depend on the location of current volatility relative to its long-term mean, and on the degree of moneyness of the option. As a consequence, tests reported in the literature may suffer from the problem that in a time-series framework the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13 Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
This study contributes to the valuation of employee stock options (ESO) in two ways: First, a new pricing model is presented, admitting a major part of calculations to be solved in closed form. Designed with a focus on good replication of empirics, the model fits with publicly observable exercise characteristics better than earlier models. In particular, it is able to account for the correlation of the time of exercise and the stock price at exercise, suspected of being crucial for the option value. The impact of correlation is weak, however, whereas cancellations play a central role. The second contribution of this paper is an examination to what extent the ESO pricing method of SFAS 123 is subject to discretion of the accountant. Given my model were true, the SFAS price would be a good proxy. Yet, outside shareholders usually cannot observe one of the SFAS input parameters. On behalf of an example I show that there is wide latitude left to the accountant.
In a framework closely related to Diamond and Rajan (2001) we characterize different financial systems and analyze the welfare implications of different LOLR-policies in these financial systems. We show that in a bank-dominated financial system it is less likely that a LOLR-policy that follows the Bagehot rules is preferable. In financial systems with rather illiquid assets a discretionary individual liquidity assistance might be welfare improving, while in market-based financial systems, with rather liquid assets in the banks' balance sheets, emergency liquidity assistance provided freely to the market at a penalty rate is likely to be efficient. Thus, a "one size fits all" approach that does not take the differences of financial systems into account is misguided. JEL Classification: D52, E44, G21, E52, E58
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size (and not just "the" market price of jump risk). Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed-form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.