Many tax codes around the world allow for special tax treatment of savings in retirement accounts. In particular, profits in retirement accounts are usually tax exempt, which allows investors to increase an asset's return by holding it in such a retirement account. While the existing literature on asset location shows that risk-free bonds are usually the preferred asset to hold in a retirement account, we explain how the tax exemption of profits in retirement accounts affects private investors' asset allocation. We show that total final wealth can be decomposed into what the investor would have earned in a taxable account and what is due to the tax exemption of profits in the retirement account. The tax exemption of profits can thus be considered a tax gift which is similar to an implicit bond holding. As this tax gift's impact on total final wealth decreases over time, so does the investor's equity exposure.
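The decomposition of final wealth can be illustrated with a minimal numerical sketch (this is not the paper's model; the return, tax rate, and horizons below are invented for illustration):

```python
# Sketch: decompose final wealth in a tax-exempt retirement account into the
# taxable-account counterfactual plus a "tax gift". Illustrative parameters only.

def final_wealth(w0, r, tax, years, tax_exempt):
    """Terminal wealth with annual returns; profits taxed yearly unless exempt."""
    growth = 1 + r if tax_exempt else 1 + r * (1 - tax)
    return w0 * growth ** years

w0, r, tax = 100.0, 0.05, 0.30
for years in (5, 20):
    w_retirement = final_wealth(w0, r, tax, years, tax_exempt=True)
    w_taxable = final_wealth(w0, r, tax, years, tax_exempt=False)
    tax_gift = w_retirement - w_taxable   # wealth due to the tax exemption
    print(years, round(tax_gift, 2), round(tax_gift / w_retirement, 3))
```

In this toy calculation the tax gift's share of final wealth shrinks as the remaining horizon shortens, which mirrors the mechanism behind the declining equity exposure described above.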
This paper analyses cross-border contagion in a sample of European banks from January 1994 to January 2003. We use a multinomial logit model to estimate the number of banks in a given country that experience a large shock on the same day (“coexceedances”) as a function of variables measuring common shocks and coexceedances in other countries. Large shocks are measured by the bottom 95th percentile of the distribution of the first difference in the daily distance to default of the bank. We find evidence in favour of significant cross-border contagion. We also find some evidence that since the introduction of the euro cross-border contagion may have increased. The results seem to be very robust to changes in the specification.
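The coexceedance count can be sketched as follows (synthetic data; the paper's exact tail definition may differ, here the bottom 5% of each bank's own distribution of distance-to-default changes is flagged):

```python
# Count "coexceedances": per bank, flag days whose change in distance to
# default falls in the lower tail of that bank's distribution, then count
# how many banks are flagged on the same day. All data below is synthetic.
import random

random.seed(0)
n_days, banks = 500, ["A", "B", "C"]
dd_changes = {b: [random.gauss(0, 1) for _ in range(n_days)] for b in banks}

def exceedance_days(series, tail=0.05):
    cutoff = sorted(series)[int(len(series) * tail)]  # empirical tail quantile
    return {t for t, x in enumerate(series) if x <= cutoff}

flags = {b: exceedance_days(s) for b, s in dd_changes.items()}
coexceedances = [sum(t in flags[b] for b in banks) for t in range(n_days)]
print(max(coexceedances), coexceedances.count(0))
```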
We compute the optimal dynamic asset allocation policy for a retiree with Epstein-Zin utility. The retiree can decide how much he consumes and how much he invests in stocks, bonds, and annuities. In pricing the annuities, we account for asymmetric mortality beliefs and administration expenses. We show that the retiree does not purchase annuities only once but rather several times during retirement (gradual annuitization). We also analyze the case in which the retiree is restricted to buying annuities only once and has to perform a (complete or partial) switching strategy. This restriction reduces both the utility and the demand for annuities.
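Annuity pricing of the kind described (survival probabilities plus an expense loading) reduces to an actuarial present value; the mortality and loading numbers below are hypothetical:

```python
# Actuarial present value of an annual life annuity with an expense loading,
# as in the paper's setup; survival probabilities and loading are invented.
def annuity_price(payment, survival_probs, r, loading=0.0):
    """Sum of discounted payments weighted by cumulative survival."""
    cum = 1.0
    pv = 0.0
    for t, p in enumerate(survival_probs, start=1):
        cum *= p                          # probability of surviving to year t
        pv += payment * cum / (1 + r) ** t
    return pv * (1 + loading)

probs = [0.99, 0.98, 0.97, 0.95, 0.93]    # hypothetical one-year survival probs
fair = annuity_price(1.0, probs, r=0.03)
loaded = annuity_price(1.0, probs, r=0.03, loading=0.05)
print(round(fair, 4), round(loaded, 4))
```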
The dissertation collects four self-contained essays which contribute to the literature on wage structures, heterogeneous labor demand, and the impact of trade unions. The first paper provides a detailed description of the evolution of wage inequality in East and West Germany in the late years of the twentieth century. In contrast to previous decades, wage inequality has been rising in several dimensions during that period. The second paper identifies cohort effects in the evolution of both wages and employment. Observed structures are consistent with a labor demand framework that incorporates steady skill-biased technical change. Substitutability between skill and age groups in the German labor market is found to be relatively high. Simulations based on estimated elasticities of substitution illustrate that higher wage dispersion between skill groups would have contributed to a reduction in unemployment. The third paper estimates determinants of individual union membership decisions and studies the erosion of union density in East and West Germany. Using corresponding predictions of net union density, the fourth paper analyzes the link between union strength and the structure of wages. A higher union density is associated with lower residual wage dispersion, reduced skill wage differentials, and a lower wage level. This finding is in line with an insurance motive for union action. The thesis comprises the following articles: (1) “Rising Wage Dispersion, After All! The German Wage Structure at the Turn of the Century,” IZA Discussion Paper 2098, April 2006. (2) “Skill Wage Premia, Employment, and Cohort Effects: Are Workers in Germany All of the Same Type?”, IZA Discussion Paper 2185, June 2006, joint with Bernd Fitzenberger. (3) “The Erosion of Union Membership in Germany: Determinants, Densities, Decompositions,” IZA Discussion Paper 2193, July 2006, joint with Bernd Fitzenberger and Qingwei Wang. (4) “Equal Pay for Equal Work? 
On Union Power and the Structure of Wages in West Germany, 1985–1997,” translation of “Gleicher Lohn für gleiche Arbeit? Zum Zusammenhang zwischen Gewerkschaftsmitgliedschaft und Lohnstruktur in Westdeutschland 1985–1997,” Zeitschrift für Arbeitsmarkt-Forschung, 38 (2/3), 125-146, joint with Bernd Fitzenberger, 2005.
We focus on a quantitative assessment of rigid labor markets in an environment of stable monetary policy. We ask how wages and labor market shocks feed into the inflation process and derive monetary policy implications. Towards that aim, we structurally model matching frictions and rigid wages in line with an optimizing rationale in a New Keynesian closed economy DSGE model. We estimate the model using Bayesian techniques for German data from the late 1970s to the present. Given the pre-euro heterogeneity in wage bargaining, we take this as the first-best approximation at hand for modelling monetary policy in the presence of labor market frictions in the current European regime. In our framework, we find that labor market structure is of prime importance for the evolution of the business cycle, and for monetary policy in particular. Yet shocks originating in the labor market itself may contain only limited information for the conduct of stabilization policy. JEL classification: J64, E32, C11, E52
As of today, estimating interest rate reaction functions for the Euro Area is hampered by the short time span since the conduct of a single monetary policy. In this paper we circumvent the common use of aggregated data before 1999 by estimating interest rate reaction functions based on a panel including actual EMU Member States. We find that exploiting the cross-section dimension of a multi-country panel and accounting for cross-country heterogeneity in advance of the single monetary policy pays off with regard to the estimated reaction functions' ability to describe actual interest rate dynamics. We retrieve a panel reaction function which is demonstrated to be a valuable tool for evaluating episodes of monetary policy since 1999. JEL classification: E43, E58, C33
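A typical interest rate reaction function of the kind estimated here combines a Taylor-type target with partial adjustment; the coefficients below are textbook values, not the paper's estimates:

```python
# Stylised interest-rate reaction function with interest-rate smoothing.
# All coefficient values are standard textbook numbers, purely illustrative.
def policy_rate(i_prev, inflation, output_gap,
                rho=0.8, r_star=2.0, pi_star=2.0, a=0.5, b=0.5):
    target = r_star + inflation + a * (inflation - pi_star) + b * output_gap
    return rho * i_prev + (1 - rho) * target   # partial adjustment to target

i = 3.0
for pi, gap in [(2.0, 0.0), (3.0, 1.0), (1.0, -1.0)]:
    i = policy_rate(i, pi, gap)
    print(round(i, 3))
```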
This paper investigates various theories explaining banks' overbidding in the fixed rate tenders of the European Central Bank (ECB). Using auction data from both the Bundesbank and the ECB, we show that none of the theories can on its own explain the observed overbidding. This implies that the proposed new rules by the ECB, aimed at neutralizing interest rate expectations, would not eliminate overbidding if the rationing rule in the fixed rate tenders remains unchanged. JEL classification: D44, E32
Although stable money demand functions are crucial for the monetary model of the exchange rate, empirical research on exchange rates and money demand is more or less disconnected. This paper tries to fill the gap for the Euro/Dollar exchange rate. We investigate whether monetary disequilibria provided by the empirical literature on U.S. and European money demand functions contain useful information about exchange rate movements. Our results suggest that the empirical performance of the monetary exchange rate model improves when insights from the money demand literature are explicitly taken into account. JEL classification: F31, E41
The dynamic relationship between the Euro overnight rate, the ECB's policy rate and the term spread
(2006)
This paper investigates how the dynamic adjustment of the European overnight rate Eonia to the term spread and the ECB's policy rate has been affected by rate expectations and the operational framework of the ECB. In line with recent evidence found for the US and Japan, the reaction of the Eonia to the term spread is non-symmetric. Moreover, the response of the Eonia to the policy rate depends on both the repo auction format and the position of the Eonia in the ECB's interest rate corridor. JEL classification: E43, E52
Inflation and relative price variability in the Euro area : evidence from a panel threshold model
(2006)
In recent macroeconomic theory, relative price variability (RPV) generates the central distortions of inflation. This paper provides first evidence on the empirical relation between inflation and RPV in the euro area, focusing on threshold effects of inflation. We find that expected inflation significantly increases RPV if inflation is either very low (below -1.38% p.a.) or very high (above 5.94% p.a.). In the intermediate regime, however, expected inflation has no distorting effects, which supports price stability as an outcome of optimal monetary policy. JEL classification: E31, C23
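The estimated regimes can be encoded directly (the two thresholds are the ones reported in the abstract; the regime labels summarize the stated findings):

```python
# Three inflation regimes implied by the panel threshold estimates quoted
# in the abstract: distortionary effects only outside the middle band.
LOW, HIGH = -1.38, 5.94   # % p.a., thresholds reported in the abstract

def regime(inflation):
    if inflation < LOW:
        return "low (expected inflation raises RPV)"
    if inflation > HIGH:
        return "high (expected inflation raises RPV)"
    return "intermediate (no distorting effect)"

for pi in (-2.0, 1.9, 7.0):
    print(pi, regime(pi))
```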
This paper employs individual bidding data to analyze the empirical performance of the longer term refinancing operations (LTROs) of the European Central Bank (ECB). We investigate how banks' bidding behavior is related to a series of exogenous variables such as collateral costs, interest rate expectations, market volatility and to individual bank characteristics like country of origin, size, and experience. Panel regressions reveal that a bank's bidding depends on bank characteristics. Yet, different bidding behavior generally does not translate into differences concerning bidder success. In contrast to the ECB's main refinancing operations, we find evidence for the winner's curse effect in LTROs. Our results indicate that LTROs lead neither to market distortions nor to unfair auction outcomes. JEL classification: E52, D44
A distinguishing feature of the ECB's monetary policy setup is the preannouncement of a minimum bid rate in its weekly repo auctions. However, whenever interest rates are expected to decline, the minimum bid rate is viewed as too high and banks refrain from bidding, severely impeding the ECB's money market management. To shed more light on banks' underbidding, we perform a panel analysis of the bidder behavior in the repo auctions of the Bundesbank, where no minimum bid rate was set. Our results indicate that neither banks' participation nor the submitted bid amount is significantly affected by an expected rate cut. This suggests that abandoning the minimum bid rate might increase the efficiency of the ECB's money market management.
Despite a legal framework being in place for several years, the market share of qualified electronic signatures is disappointingly low. Mobile signatures provide a new and promising opportunity for the deployment of an infrastructure for qualified electronic signatures. We argue that SIM-based signatures are the most secure and convenient solution. However, using the SIM-card as a secure signature creation device (SSCD) raises new challenges, because it would contain the user's private key as well as the subscriber identification. Combining both functions in one card raises the question of who will have control over the keys and certificates. We propose a protocol called Certification on Demand (COD) that separates certification services from subscriber identification information and allows consumers to choose their appropriate certification services and service providers based on their needs. This infrastructure could be used to enable secure mobile brokerage services that can omit the need for TAN lists and therefore allow a better integration of information and transaction services.
The tax codes in many countries allow for special tax advantages for investments in special retirement plans. Probably the most important advantage of these plans is that profits usually remain untaxed. This paper deals with the question which assets are preferable in a tax-deferred account (TDA). Contrary to the conventional wisdom that one should prefer bonds in the TDA, it is shown that, especially in early years, stocks can be the preferred asset to hold in the TDA for an investor maximizing final wealth, given a certain asset allocation. The higher the performance of stocks compared to bonds, the higher the tax burden put on stocks compared to bonds. Simultaneously, the longer the remaining investment horizon, the larger the relative outperformance of the optimal asset location strategy compared to the myopic strategy of locating bonds in the TDA. An algorithm is provided to determine the investment strategy that maximizes (expected) funds at the end of a given investment horizon when there is an analytical solution.
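A stripped-down version of the asset-location comparison (annual taxation of the taxable account, a fixed 50/50 allocation, and invented returns; the paper's model is richer):

```python
# Compare terminal wealth from holding one asset in the tax-deferred account
# (TDA, profits untaxed) and the other in the taxable account (taxed yearly).
# Returns, tax rate and horizon are illustrative assumptions.
def terminal_wealth(w_tda, w_taxable, r_tda, r_taxable, tax, years):
    tda = w_tda * (1 + r_tda) ** years
    taxable = w_taxable * (1 + r_taxable * (1 - tax)) ** years
    return tda + taxable

r_stock, r_bond, tax, years = 0.08, 0.03, 0.30, 30
stocks_in_tda = terminal_wealth(50, 50, r_stock, r_bond, tax, years)
bonds_in_tda = terminal_wealth(50, 50, r_bond, r_stock, tax, years)
print(round(stocks_in_tda, 1), round(bonds_in_tda, 1))
```

With a long horizon and a large stock outperformance, sheltering the stocks wins in this toy setting, in line with the abstract's claim against the conventional wisdom.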
This paper analyzes the relation between demographic structure and real asset returns on treasury bills, bonds and stocks for the G7 countries (United States, Canada, Japan, Italy, France, the United Kingdom and Germany). A macroeconomic multifactor model is used to examine a variety of different demographic factors from 1951 to 2002. No robust relationship was found between shocks in demographic variables and asset returns in the framework of these models, which suggests that the Asset Meltdown is fiction rather than fact.
Both banks and open-end real estate funds effectuate liquidity transformation in large amounts and at high scale. Because of this similarity, the latter should be analyzed using the same methodologies as usually applied to banks. We show that the work in the tradition of Diamond and Dybvig (1983), especially Allen and Gale (1998) and Diamond and Rajan (2001), provides an applicable theoretical framework, and we use it as the basis for our model of open-end real estate funds. We then examine the usefulness of this modeling structure for analyzing open-end real estate funds. First, we show that the withdrawal of capital resulting in a run is not always inefficient. Instead, withdrawal can also reflect a situation in which the low return of an open-end fund unit relative to other opportunities makes (partial) withdrawal optimal from the risk-sharing perspective. Even with costly liquidation this result holds, although such a situation entails deadweight losses. Second, introducing a secondary market in our model does not, in general, resolve the problem of deadweight losses associated with foreclosure. If assets are sold during a run, there is not only a transfer of value; an economic cost can also arise. Because funds are forced to liquidate the illiquid asset in order to fulfill their obligations, the price of the real estate asset is driven down, making the crisis worse. Rather than providing insurance, such that investors receive a transfer in negative outcomes, the secondary market does the opposite: it provides negative insurance. Third, our model shows that the open-end structure provides a monitoring function which serves as an efficient instrument to discipline the fund's management. Therefore, we argue that an open-end structure can represent a more adequate solution for securitizing real estate or other illiquid assets. Instead of transforming open-end into closed-end structures, fund runs should be accepted as a normal phenomenon that clears the market of funds with mismanagement.
Customer channel migration
(2006)
Customer Channel Migration deals with the active management of a customer's channel usage behavior with the aim of increasing her profitability and lifetime. Hence, the dissertation answers two distinct questions: on the one hand, it investigates the impact of channel use on a customer's profitability and lifetime. On the other hand, it researches how a customer's channel usage behavior can be influenced and managed. The cumulative dissertation consists of five articles: the first article describes the matching method and its application to marketing problems. The matching method is necessary to estimate the unbiased impact of channel use on a customer's profitability and lifetime. The second article describes the application of the matching method to determine the monetary implications of using the internet in the financial services industry. The third article investigates the impact of internet use on a customer's lifetime. The fourth and the fifth article of the dissertation both investigate the management of a customer's channel usage behavior. The fourth article designs a scale to measure a customer's perceived channel value. The fifth article builds upon these findings and develops a model which explains a customer's channel usage behavior. Based on these insights, this article derives some managerial implications on how to manage customers between different channels.
One of the most acute problems in the world today is the provision of a respectable living for the elderly. Today the process of population aging (as a result of a declining birth rate and increased life expectancy) has touched all countries of the world, developed countries as well as countries like Russia. Consequently, reforming traditional pension systems to deal with the changing situation has become an important issue around the world. These reforms typically center on the implementation of some form of funding of future pension benefits. This also holds for Russia, where in 1995 pension reform legislation introduced the so-called “accumulation pension”. In this context, this article deals with the issues concerning the establishment of mutual funds, the legal aspects of their operation and their investment opportunities. A comparative analysis of mutual funds with other forms of public investment is carried out, namely with Common Funds of Bank Management, Voucher Investment Funds and Joint-stock Investment Funds.
Open-end real estate funds are of particular importance in the German bank-dominated financial system. However, recently the German open-end fund industry came under severe distress, which triggered a broad discussion of required regulatory interventions. This paper gives a detailed description of the institutional structure of these funds and of the events that led to the crisis. Furthermore, it applies recent banking theory to open-end real estate funds in order to understand why the open-end fund structure was so prevalent in Germany. Based on these theoretical insights we evaluate the various policy recommendations that have been raised.
One of the dangers of harmonisation and unification processes taking place within the framework of the EU is that they may result in the codification of the lowest common denominator. This is precisely what is threatening to happen in respect of assignment. Referring the transfer of receivables by way of assignment to the law of the assignor's residence, as article 13 of the Proposal does, would be opting for the most conservative solution and would for many Member States be a step backward rather than forward. A conflict rule referring assignment to the law of the assignor's residence is too rigid to do justice to the dynamic nature of assignments in cross-border transactions and it is unjustly one-sided. It offers no real advantages when compared to other conflict rules; it even has serious disadvantages which make the conflict rule unsuitable for efficient assignment-based cross-border transactions. It is not inconceivable that this conflict rule would even be contrary to the fundamental freedoms of the EC Treaty. The Community legislators in particular should be careful not to needlessly adopt rules which create insurmountable obstacles for cross-border business where choice-of-law by the parties would perfectly do. Community legislation has a special responsibility to create a smooth legal environment for single market transactions.
This paper investigates whether the stock market reacts to unsolicited ratings for a sample of S&P rated firms from January 1996 to December 2005. We first analyze the stock market reaction associated with the assignment of an initial unsolicited rating. We find evidence that this reaction is negative and particularly accentuated for Japanese firms. A comparison between S&P’s initial unsolicited ratings with previously published ratings of two Japanese rating agencies for a Japanese subsample shows that ratings assigned by S&P are systematically worse. Further, we find that the stock market does not react to the transition from an unsolicited to a solicited rating. Comparison of the upgrades in the sample with a matched-sample of upgrades of solicited ratings reveals that the price reactions are no different. In addition, abnormal returns are worse for firms whose rating remained unchanged after the solicitation compared to those for upgraded firms. Finally, we find that Japanese firms are less likely to receive an upgrade. Our findings suggest that unsolicited ratings are biased downwards, that the capital market therefore expects upgrades of formerly unsolicited ratings and punishes firms whose ratings remain unchanged. All these effects seem to be more pronounced for Japanese firms.
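The event-study methodology behind these stock market reactions can be sketched in a few lines (market-model abnormal returns on synthetic data with one injected event day; this is the generic method, not the paper's exact specification):

```python
# Event-study sketch: fit a market model on an estimation window, then
# compute abnormal returns in the event window. All returns are synthetic,
# with a negative shock injected on the event day.
import random

random.seed(1)
market = [random.gauss(0.0, 0.01) for _ in range(120)]
stock = [0.001 + 1.2 * m + random.gauss(0, 0.005) for m in market]
stock[110] -= 0.03                        # injected event-day reaction

est_m, est_s = market[:100], stock[:100]  # estimation window

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    return my - beta * mx, beta           # alpha, beta

alpha, beta = ols(est_m, est_s)
abnormal = [s - (alpha + beta * m) for s, m in zip(stock[100:], market[100:])]
print(round(beta, 2), round(abnormal[10], 3))   # abnormal[10] is the event day
```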
In this paper, we propose a model of credit rating agencies using the global games framework to incorporate information and coordination problems. We introduce a refined utility function of a credit rating agency that, in addition to reputation maximization, also embeds aspects of competition and feedback effects of the rating on the rated firms. Apart from hinting at explanations for several hypotheses with regard to agencies' optimal rating assessments, our model suggests that the existence of rating agencies may decrease the incidence of multiple equilibria. If investors have discretionary power over the precision of their private information, we can prove that public rating announcements and private information collection are complements rather than substitutes in order to secure uniqueness of equilibrium. In this respect, rating agencies may spark off a virtuous circle that increases the efficiency of the market outcome.
Using data on US domestic mergers and acquisitions transactions, this paper shows that acquirers have a preference for geographically proximate target companies. We measure the ‘home bias’ against benchmark portfolios of hypothetical deals where the potential targets consist of firms of similar size in the same four-digit SIC code that have been targets in other transactions at about the same time or firms that have been listed at a stock exchange at that time. There is a strong and consistent home bias for M&A transactions in the US, which is significantly declining during the observation period, i.e. between 1990 and 2004. At the same time, the average distances between target and acquirer increase markedly. The home bias is stronger for small and relatively opaque target companies, suggesting that local information is the decisive factor in explaining the results. Acquirers that diversify into new business lines also display a stronger preference for more proximate targets. With an event study we show that investors react relatively better to proximate acquisitions than to distant ones. That reaction is more important and becomes significant in times when the average distance between target and acquirer becomes larger, but never becomes economically significant. We interpret this as evidence for the familiarity hypothesis brought forward by Huberman (2001): acquirers know about the existence of proximate targets and are more likely to merge with them without necessarily being better informed. However, when comparing the best and the worst deals, we are able to show a dramatic difference in distances and home bias: the most successful deals display on average a much stronger home bias and a distinctively smaller distance between acquirer and target than the least successful deals. Proximity in M&A transactions therefore is a necessary but not sufficient condition for success.
The paper contributes to the growing literature on the role of distance in financial decisions.
The paper examines challenges in effectively implementing the lender-of-last-resort function in the EU single financial market. It briefly highlights features of the EU financial landscape that could increase EU systemic financial risk, and describes the complexities of the EU's financial-stability architecture for preventing and resolving financial problems, including lender-of-last-resort operations. The paper then examines how the lender-of-last-resort function might materialize during a systemic financial disturbance affecting more than one EU Member State, and identifies challenges and possible ways of enhancing the effectiveness of the existing architecture.
Location-based services (LBS) are services that position your mobile phone to provide some context-based service for you. Some of these services, called ‘location tracking’ applications, need frequent updates of the current position to decide whether a service should be initiated. Thus, internet-based systems will continuously collect and process the location in relationship to a personal context of an identified customer. This paper will present the concept of location as part of a person's identity. I will conceptualize location in information systems and relate it to concepts like privacy, geographical information systems and surveillance. The talk will present how knowledge of a person's private life and identity can be enhanced with data mining technologies on location profiles and movement patterns. Finally, some first concepts for protecting location information are presented.
Mobile telephony and mobile internet are driving a new application paradigm: location-based services (LBS). Based on a person’s location and context, personalized applications can be deployed. Thus, internet-based systems will continuously collect and process the location in relationship to a personal context of an identified customer. One of the challenges in designing LBS infrastructures is the concurrent design for economic infrastructures and the preservation of privacy of the subjects whose location is tracked. This presentation will explain typical LBS scenarios, the resulting new privacy challenges and user requirements and raises economic questions about privacy-design. The topics will be connected to “mobile identity” to derive what particular identity management issues can be found in LBS.
In this paper, I examine the potential of mobile alerting services empowering investors to react quickly to critical market events. To this end, an analysis of short-term (intraday) price effects is performed. I find abnormal returns to company announcements which are completed within a timeframe of minutes. To make use of these findings, these price effects are predicted using pre-defined external metrics and different estimation methodologies. Compared to previous research, the results provide support that artificial neural networks and multiple linear regression are good estimation models for forecasting price effects also on an intraday basis. As most of the price effect magnitude and effect delay can be estimated correctly, it is demonstrated how a suitable mobile alerting service combining a low level of user-intrusiveness and timely information supply can be designed.
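The regression-based forecast can be sketched as a one-regressor toy version (the metric, coefficients and data are invented; the paper uses several pre-defined external metrics and also neural networks):

```python
# Toy forecast of an intraday abnormal-return magnitude from one
# pre-announcement metric via simple OLS. All data is synthetic.
import random

random.seed(2)
metric = [random.uniform(0, 1) for _ in range(200)]          # e.g. surprise size
effect = [0.002 + 0.01 * x + random.gauss(0, 0.002) for x in metric]

n = len(metric)
mx, my = sum(metric) / n, sum(effect) / n
slope = sum((x - mx) * (y - my) for x, y in zip(metric, effect)) / \
        sum((x - mx) ** 2 for x in metric)
intercept = my - slope * mx

def predict(x):
    return intercept + slope * x

print(round(slope, 4), round(predict(0.5), 4))
```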
Multiplayer games have become very popular in the PC market. Almost none of the current games are shipped without some support for multiplayer gaming. At the same time, mobile devices are becoming more powerful and the popularity of games on these platforms increases. However, there are almost no games that support multiplayer gaming, despite the multiple options of these devices to connect with each other and build mobile ad hoc networks. Reasons for this lack of multiplayer support are the high diversity of mobile devices as well as the different protocols and their properties that these devices support. With “SmartBlaster” we developed a multiplayer game for several different platforms that uses several different channels (Bluetooth, IrDA, 802.11 and other networks supporting TCP/IP) to communicate between them.
This analysis examines the employment effects of placement vouchers (Vermittlungsgutscheine) and personnel service agencies (Personal-Service-Agenturen) by means of a macroeconometric evaluation. In addition to a microeconometric evaluation, which examines the effects at the individual level, a macroeconometric analysis can make statements about the economy-wide effects of the measures. The structural multiplier effects within the macroeconomic circular flow are, however, not taken into account. The econometric model for analysing the two measures is based on a matching function which depicts the search process of firms and of workers for an employment relationship. The empirical analyses are carried out separately for East and West Germany as well as for the strategy types of the Federal Employment Agency (Bundesagentur für Arbeit). They show that the issuance of placement vouchers has a significantly positive effect on the search process only in "metropolitan districts, predominantly in West Germany, with high unemployment" (strategy type II). For the personnel service agencies, significantly positive effects are found for both East and West Germany. However, owing to the relatively small number of participants, a comparison with microeconometric analyses is still lacking for a final assessment of the results for the personnel service agencies.
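The matching function underlying such evaluations is commonly specified as Cobb-Douglas, with the programmes entering as a shift in matching efficiency; the functional form is standard, but the parameter values below are invented:

```python
# Standard Cobb-Douglas matching function: M = A * U^alpha * V^(1-alpha).
# A programme effect is modelled as a shift in matching efficiency A.
# Parameter values are illustrative, not the study's estimates.
def matches(unemployed, vacancies, A=0.5, alpha=0.5):
    return A * unemployed ** alpha * vacancies ** (1 - alpha)

base = matches(10_000, 2_000)
with_vouchers = matches(10_000, 2_000, A=0.55)   # hypothetical 10% efficiency gain
print(round(base, 1), round(with_vouchers, 1))
```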
Serial correlation in dynamic panel data models with weakly exogenous regressor and fixed effects
(2005)
Our paper presents and compares two estimation methodologies for dynamic panel data models in the presence of serially correlated errors and weakly exogenous regressors. The first is the first difference GMM estimator as proposed by Arellano and Bond (1991) and the second is the transformed Maximum Likelihood Estimator as proposed by Hsiao, Pesaran, and Tahmiscioglu (2002). Thereby, we consider the fixed effects case and weakly exogenous regressors. The finite sample properties of both estimation methodologies are analysed within a simulation experiment. Furthermore, we present an empirical example to consider the performance of both estimators with real data. JEL classification: C23, J64
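The shared starting point of both estimators is the first-difference transformation, which eliminates the fixed effect; below is a minimal simulation of that single step (not the full GMM or ML machinery):

```python
# Simulate one panel unit of a dynamic model y_t = rho*y_{t-1} + alpha_i + e_t
# and first-difference it: the fixed effect alpha_i cancels in the differences.
# Parameters and the start value are illustrative.
import random

random.seed(3)

def simulate_unit(alpha_i, rho=0.5, T=6):
    y = [alpha_i]                                   # crude start value
    for _ in range(T):
        y.append(rho * y[-1] + alpha_i + random.gauss(0, 1))
    return y

y = simulate_unit(alpha_i=5.0)
dy = [b - a for a, b in zip(y, y[1:])]              # alpha_i drops out here
print([round(v, 2) for v in dy])
```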
In this paper we evaluate the employment effects of job creation schemes on the participating individuals in Germany. Job creation schemes are a major element of active labour market policy in Germany and are targeted at long-term unemployed and other hard-to-place individuals. Access to very informative administrative data of the Federal Employment Agency justifies the application of a matching estimator and allows us to account for individual (group-specific) and regional effect heterogeneity. We extend previous studies in four directions. First, we are able to evaluate the effects on regular (unsubsidised) employment. Second, we observe the outcome of participants and non-participants for nearly three years after programme start and can therefore analyse mid- and long-term effects. Third, we test the sensitivity of the results with respect to various decisions which have to be made during implementation of the matching estimator, e.g. choosing the matching algorithm or estimating the propensity score. Finally, we check if a possible occurrence of 'unobserved heterogeneity' distorts our interpretation. The overall results are rather discouraging, since the employment effects are negative or insignificant for most of the analysed groups. One notable exception are long-term unemployed individuals, who benefit from participation. Hence, one policy implication is to target programmes at this problem group more tightly. JEL classification: J68, H43, C13
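The core of such a matching estimator is pairing each participant with the closest non-participant in terms of the propensity score; here is a toy nearest-neighbour version with made-up scores and outcomes:

```python
# Nearest-neighbour propensity-score matching sketch: match each treated
# observation to the control with the closest score and average the outcome
# differences (ATT). Scores and outcomes are invented for illustration.
def att_nearest_neighbour(treated, controls):
    """treated/controls: lists of (propensity_score, outcome) tuples."""
    diffs = []
    for score, outcome in treated:
        _, match_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - match_outcome)
    return sum(diffs) / len(diffs)

treated = [(0.8, 1), (0.6, 0), (0.7, 1)]
controls = [(0.81, 0), (0.55, 0), (0.72, 1), (0.30, 1)]
print(att_nearest_neighbour(treated, controls))
```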
Job creation schemes (JCS) have been one important programme of active labour market policy in Germany, aiming at the re-integration of hard-to-place unemployed individuals into regular employment. In contrast to earlier evaluation studies of these programmes based on survey data, we use administrative data containing more than 11,000 participants for our analysis and can hence take effect heterogeneity explicitly into account. We focus on effect heterogeneity caused by differences in the implementation of programmes (economic sector, types of support and implementing institutions). The results are rather discouraging and show that, in general, JCS are unable to improve the re-integration chances of participants into regular employment.
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the persistently unsatisfactory labour market situation has raised doubts about the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Administration for the period from October 1999 to December 2002, we apply a bivariate mixed proportional hazards model. By doing so, we are able to use the information on the timing of treatment as well as observable and unobservable influences to identify the treatment effects. The results show that participation in vocational training prolongs the unemployment duration in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
The effects of vocational training programmes on the duration of unemployment in Eastern Germany
(2005)
This paper evaluates the effects of job creation schemes on the participating individuals in Germany. Since previous empirical studies of these measures have been based on relatively small datasets and focussed on East Germany, this is the first study which allows policy-relevant conclusions to be drawn. The very informative and exhaustive dataset at hand not only justifies the application of a matching estimator but also allows us to take account of threefold heterogeneity. The recently developed multiple treatment framework is used to evaluate the effects with respect to regional, individual and programme heterogeneity. The results show considerable differences with respect to these sources of heterogeneity, but the overall finding is very clear. At the end of our observation period, that is, two years after the start of the programmes, participants in job creation schemes have a significantly lower success probability on the labour market in comparison to matched non-participants. JEL Classification: H43, J64, J68, C13, C40
This paper investigates the macroeconomic effects of job creation schemes and vocational training on the matching processes in West Germany. The empirical analysis is based on regional data for local employment office districts for the period from 1999 to 2003. The empirical model relies on a dynamic version of a matching function augmented by ALMP. To obtain consistent estimates in a dynamic panel data model, a first-differences GMM estimator and a transformed maximum likelihood estimator are applied. Furthermore, the paper addresses the endogeneity problem of the policy measures. The results obtained from our estimates indicate that vocational training does not significantly affect the matching process and that job creation schemes have a negative effect. JEL Classification: C23, E24, H43, J64, J68
Most evaluation studies of active labour market policies (ALMP) focus on the microeconometric evaluation approach using individual data. However, as the microeconometric approach usually ignores impacts on non-participants, it should be seen as a first step towards a complete evaluation, which has to be followed by an analysis on the macroeconomic level. As a starting point for our analysis we discuss the effects of ALMP in a theoretical labour market framework augmented by ALMP. We estimate the impacts of ALMP in Germany for the period 1999-2001 with regional data for 175 labour office districts. Due to the high persistence of German labour market data, the application of a dynamic model is crucial. Furthermore, our analysis accounts especially for the inherent simultaneity problem of ALMP. For West Germany we find positive effects of vocational training and job creation schemes on the labour market situation, whereas the results for East Germany do not permit firm conclusions. JEL Classification: C33, E24, H43, J64, J68
Previous empirical studies of job creation schemes in Germany have shown that the average effects for the participating individuals are negative. However, we find that this is not true for all strata of the population. Identifying individual characteristics that are responsible for the effect heterogeneity and using this information for a better allocation of individuals therefore offers some scope for improving programme efficiency. We present several stratification strategies and discuss the resulting effect heterogeneity. Our findings show that job creation schemes neither harm nor improve the labour market chances of most of the groups. Exceptions are long-term unemployed men in West Germany and long-term unemployed women in East and West Germany, who benefit from participation in terms of higher employment rates. JEL Classification: C13, J68, H43
Innovations are a key factor to ensure the competitiveness of establishments as well as to enhance the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. As a response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for an empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on the innovation activities of German establishments. We find that neglecting sample selection due to observable as well as unobservable characteristics leads to an overestimation of the treatment effect and that there are considerable differences with regard to size class and between West and East German establishments.
Persistently high unemployment, tight government budgets and growing scepticism regarding the effects of active labour market policies (ALMP) are the basis for a growing interest in evaluating these measures. This paper intends to explain the need for evaluation on the micro- and macroeconomic level, introduce the fundamental evaluation problem and solutions to it, give an overview of newer developments in the evaluation literature and finally take a look at empirical estimates of ALMP effects. JEL Classification: C14, C33, H43, J64, J68
This study analyses the effects of public sector sponsored vocational training (PSVT) on individuals’ unemployment duration in West Germany for the period from 1985 to 1993. The data is taken from the German Socio-Economic Panel (GSOEP). To resolve the intriguing sample selection problem, i.e. to find an adequate control group for the group of trainees, we employ matching methods. These matching methods use the individual propensity to participate in training, obtained by estimating a panel probit model, as the main matching variable. On the basis of the matched sample, a discrete time hazard rate model is utilized to assess the effects of training participation on unemployment duration. Our results indicate that a significant positive effect on reemployment chances due to PSVT can only be expected for courses with a duration of no longer than six months. No significant positive effects on post-training reemployment chances were found for courses lasting longer than six months. In fact, these PSVT courses are significantly less effective at increasing reemployment chances than those lasting no longer than three months. JEL classification: C40, J20, J64
This paper provides a review of empirical evidence relating to the impact of training on employment performance. Since a central issue in estimating training effects is the sample selection problem, a short theoretical discussion of different evaluation strategies is given. The empirical overview primarily focuses on non-experimental evidence for Germany. In addition, selected studies for other countries and experimental investigations are discussed.
In this study we are concerned with the impact of vocational training on individual unemployment duration in West Germany. The data basis used is the German Socio-Economic Panel (GSOEP) for the period from 1984 to 1994. To resolve the intriguing sample selection problem, i.e. to find an adequate control group for the group of trainees, we employ matching methods developed in the statistical literature. These matching methods use as the main matching variable the individual propensity score for participating in training, which is obtained by estimating a random effects probit model. On the basis of the matched sample, a discrete time hazard rate model is utilized to assess the impact of vocational training on unemployment duration. Our results indicate that training significantly raises the transition rate of the unemployed into employment in the short but not in the long run. JEL classification: C40, J20, J64
We estimate a semiparametric single-risk discrete-time duration model to assess the effect of vocational training on the duration of unemployment spells. The data basis used in this study is the German Socio-Economic Panel (GSOEP) for West Germany for the period from 1986 to 1994. To take into account a possible selection bias, actual participation in vocational training is instrumented using estimates of a random effects probit model for the participation in qualification measures. Our main results show that training does have a significant short-term effect of reducing unemployment duration, but that this effect does not persist in the long run. JEL classification: C41, J20, J64
This paper is intended as a short survey of the most relevant methods for grouped transition data. The fundamentals of duration analysis are discussed in a continuous time framework, whereas the treatment of methods for discrete durations is limited to the peculiarities of these models. In addition, some recent empirical applications of the methods are discussed.
In recent econometric work, most analyses of female labour supply consider married women, whereas results for unmarried women are provided rather as a by-product (Burtless/Greenberg, 1982; Johnson/Pencavel, 1984; Leu/Kugler, 1986; Merz, 1990). When the particular interest is focused on unmarried women, data from the seventies or rather simple econometric models are used (Keeley et al., 1978; Hausman, 1980; Coverman/Kemp, 1987). Often very specific populations are examined, such as lone mothers in Blundell/Duncan/Meghir (1992), Jenkins (1992), Staat/Wagenhals (1993) or Laisney et al. (1993). In analysing the economic behaviour of unmarried women, one is confronted with the problem that the term ‘unmarried’ is not clearly defined. It includes single, divorced, separated and widowed women. They live in different types of households, such as one-person households or family households, where they occupy different economic positions, for example head of the household or relative of the head. The present work considers unmarried female heads of household. We assume that the dominant economic position as head of household, voluntarily or involuntarily occupied, forces these women to behave similarly independent of their family status. Thus women of the different family statuses (single, divorced, separated and widowed) are taken together in the analysis. Being unmarried is often regarded as a temporary state, voluntary or involuntary, for example in the case of young women before marriage or of divorced women after their separation. Nevertheless, the demographic development shows the increased importance of unmarried women in the population during the last decades. In the USA the share of female-headed households rose from 21.1% in 1970 to 26.2% in 1980 and 29.0% in 1992 (Statistical Abstracts of the United States, 1993; own calculations).
In the FRG, female-headed households constituted 26.4% of total households in 1970, 27.4% in 1980 and 30.1% in 1992 (Stat. Bundesamt, FS 1, Reihe 3, 1970, 1980, 1992). It therefore seems an interesting topic to analyse the labour supply behaviour of unmarried female heads of household. Of particular interest is the question whether the labour supply of unmarried women resembles rather that of married women or that of prime-age males. Another purpose of this analysis is to apply modern econometric panel data models, with special emphasis on the problem of unbalanced panel data. Most panel data analyses are carried out using balanced panel data, which is no problem if the selection process can be ignored and if enough cases are available to guarantee efficient estimation. Especially the last point was crucial for the present analysis of unmarried females: in the available panel data sets the unmarried female heads constitute only a rather small population. Therefore the estimation techniques were modified to take missing observations of the individuals into account. The paper is organized as follows: Section 2 briefly presents the underlying theoretical model of intertemporal labour supply under uncertainty. Section 3 deals with the econometric specification and estimation techniques, where the use of unbalanced panel data is considered. Section 4 contains the data description with a particular look at the unbalancedness of the samples. Section 5 presents the empirical results. We compare the estimated parameters for unmarried women between the USA and the FRG and also analyse the differences between unmarried and married women. Moreover, a comparison between different samples of unmarried women is provided.
This paper provides an empirical assessment of hypotheses that identify causes of demand side constraints of individual labour supply. In a comparative study for the USA and the FRG we focus on analysing the effect of productivity gaps (industry wage growth beyond productivity growth), industry investment intensity and regional labour market conditions on individual employment probabilities. Furthermore, we investigate whether demand side constraints of labour supply can be caused by a spill over from commodity markets. Efficiency wage theory and the theory of inter-industry wage differentials are utilised to derive identifying restrictions that are applicable to the labour supply models for both countries. The econometric contribution of the paper is the derivation and application of a two step estimation method for the class of simultaneous random effects double hurdle models, of which the labour supply model employed in this paper is a special case. To provide the empirical basis for the comparative study, the Panel Study of Income Dynamics and the German Socio-Economic Panel are linked to the OECD’s International Sectoral Database. JEL classification: C33, C34, J64, O57
Modelling consumer behaviour in a profile design using a three equation generalised Tobit model
(1997)
We propose the application of a three-equation generalised Tobit model to capture different aspects of consumer behaviour in a full profile study design. The model takes into account that consumer behaviour can be measured by preference scores, purchase probability and purchase volume. We aim to avoid the drawbacks of traditional conjoint analysis, where the latter two aspects are disregarded. Starting from a full profile design, we develop the appropriate questionnaire layout, the econometric model, the likelihood function and tests. The model is applied in a market entry study for an innovative medicament after a reform of Germany's public health system in 1993-1994. JEL Classification: C35, M31, L65
Comparison of MSACD models
(2003)
We propose a new framework for modelling time dependence in duration processes on financial markets. The well known autoregressive conditional duration (ACD) approach introduced by Engle and Russell (1998) will be extended in a way that allows the conditional expectation of the duration process to depend on an unobservable stochastic process which is modelled via a Markov chain. The Markov switching ACD model (MSACD) is a very flexible tool for description and forecasting of financial duration processes. In addition, the introduction of an unobservable, discrete-valued regime variable can be justified in the light of recent market microstructure theories. In an empirical application we show that the MSACD approach is able to capture several specific characteristics of inter-trade durations while alternative ACD models fail. JEL classification: C22, C25, C41, G14
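The baseline ACD(1,1) recursion of Engle and Russell (1998) that these extensions build on can be simulated in a few lines; all parameter values below are illustrative, not estimates from any of the papers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
omega, alpha, beta = 0.1, 0.1, 0.8   # alpha + beta < 1 ensures stationarity

# ACD(1,1): x_i = psi_i * eps_i with E[eps]=1, psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1}
psi = np.empty(n)
x = np.empty(n)
psi[0] = omega / (1.0 - alpha - beta)          # unconditional mean duration
x[0] = psi[0] * rng.exponential()
for i in range(1, n):
    psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
    x[i] = psi[i] * rng.exponential()

# Duration clustering: raw durations are positively autocorrelated,
# while the standardized residuals x_i / psi_i are approximately i.i.d.
def acf1(s):
    s = s - s.mean()
    return (s[1:] * s[:-1]).mean() / s.var()

print(f"ACF(1) of durations: {acf1(x):.3f}")
print(f"ACF(1) of residuals: {acf1(x / psi):.3f}")
```

The positive first-order autocorrelation of the raw durations, and its absence in the standardized residuals, is the clustering pattern that the conditional expectation psi is meant to absorb.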
We propose a new framework for modelling the time dependence of duration processes on financial markets. The pioneering ACD model introduced by Engle and Russell (1998) is extended such that the duration process is accompanied by an unobservable stochastic process. The Discrete Mixture ACD framework provides a general methodology which puts this idea into practice. It is established by introducing a discrete-valued latent regime variable, which can be justified in the light of recent market microstructure theories. The empirical application demonstrates its ability to capture specific characteristics of intraday transaction durations while alternative approaches fail. JEL classification: C41, C22, C25, C51, G14.
In recent methodological work, the well known ACD approach, originally introduced by Engle and Russell (1998), has been supplemented by an unobservable stochastic process which accompanies the underlying duration process via a discrete mixture of distributions. The Mixture ACD model, emanating from the specialized proposal of De Luca and Gallo (2004), has proved to be a moderately successful tool for the description of financial duration data. The use of one and the same family of ordinary distributions has been common practice until now. Our contribution advocates the use of a richly parameterized, comprehensive family of distributions which allows different distributional idiosyncrasies to interact. JEL classification: C41, C22, C25, C51, G14.
We propose a new framework for modeling time dependence in duration processes. The ACD approach introduced by Engle and Russell (1998) will be extended so that the conditional expectation of the durations depends on an unobservable stochastic process which is modeled via a Markov chain. The Markov switching ACD model (MSACD) is a flexible tool for the description of financial duration processes. The introduction of a latent information regime variable can be justified in the light of recent market microstructure theories. In an empirical application we show that the MSACD approach is able to capture specific characteristics of inter-trade durations while alternative ACD models fail. JEL classification: C41, C22, C25, C51, G14
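The regime-switching idea can be sketched by letting a latent two-state Markov chain shift the intercept of the ACD recursion; the transition probabilities and regime levels below are invented for illustration and are not the MSACD specification of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Two latent regimes (e.g. calm vs. active trading) with persistent switching
P = np.array([[0.99, 0.01],     # transition matrix, rows sum to 1
              [0.02, 0.98]])
omega = np.array([1.5, 0.3])    # regime-specific baseline duration level
alpha, beta = 0.05, 0.7

# Simulate the unobservable regime path
s = np.empty(n, dtype=int)
s[0] = 0
for i in range(1, n):
    s[i] = rng.choice(2, p=P[s[i - 1]])

# ACD recursion whose intercept is driven by the latent regime
psi = np.empty(n)
x = np.empty(n)
psi[0] = omega[s[0]] / (1.0 - alpha - beta)
x[0] = psi[0] * rng.exponential()
for i in range(1, n):
    psi[i] = omega[s[i]] + alpha * x[i - 1] + beta * psi[i - 1]
    x[i] = psi[i] * rng.exponential()

print(f"mean duration in regime 0: {x[s == 0].mean():.2f}")
print(f"mean duration in regime 1: {x[s == 1].mean():.2f}")
```

Estimation of such a model treats s as unobserved and integrates over the regime path (e.g. via the Hamilton filter), which is what distinguishes the MSACD from a plain ACD fit.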
We develop an interregional version of the standard textbook input-output model, which is extended by including consumption expenditures and the income generation process in the endogenous part of the input-output table. We also introduce a new method for deriving a two-region version of an interregional input-output table from original input-output tables for an overall economy and one of its regions. In an empirical assessment of the economic effects of the Frankfurt Airport, the interregional model is successfully employed. It is shown that the model is capable of reducing the degree of overestimation of economic effects that results from the inappropriate use of national input-output tables in the assessment of regional impact effects.
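The mechanics of endogenizing households in an input-output model can be sketched with a hypothetical two-sector example (all coefficients are invented for illustration; this is not the Frankfurt Airport data): the open "Type I" model solves x = (I - A)^(-1) f, while the closed "Type II" model adds an income row and a consumption column so that induced household spending enters the multipliers.

```python
import numpy as np

# Hypothetical technical coefficient matrix A (a_ij = input of i per unit output of j)
A = np.array([[0.20, 0.30],
              [0.10, 0.25]])

# Type I: open model, households exogenous; x = (I - A)^{-1} f
f = np.array([100.0, 50.0])
L1 = np.linalg.inv(np.eye(2) - A)
x_open = L1 @ f

# Type II: close the model with respect to households by adding an income row
# (value-added coefficients) and a consumption column (spending per unit income)
va = np.array([0.35, 0.25])      # household income generated per unit of output
c = np.array([0.40, 0.30])       # household spending on each sector per unit income
A2 = np.block([[A, c[:, None]],
               [va[None, :], np.zeros((1, 1))]])
L2 = np.linalg.inv(np.eye(3) - A2)
x_closed = L2 @ np.append(f, 0.0)

print("Type I  output:", x_open.round(1))
print("Type II output:", x_closed[:2].round(1))
```

Because induced consumption feeds back into demand, the Type II outputs exceed the Type I outputs for the same final demand vector, which is exactly the extra impact channel the endogenized model captures.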
Models with multiple equilibria are a popular way to explain currency attacks. Morris and Shin (1998) have shown that, in the context of those models, unique equilibria may prevail once noisy private information is introduced. In this paper, we generalize the results of Morris and Shin to a broader class of probability distributions and show, using the technique of iterated elimination of dominated strategies, that uniqueness holds even if we allow for sunspots and individual uncertainty about the strategic behavior of other agents. We provide a clear exposition of the logic of this model and analyse the impact of transparency on the probability of a speculative attack. For the case of a uniform distribution of noisy signals, we show that increased transparency of government policy reduces the likelihood of attacks. JEL Classification: F31, D82
During the last decade, there has been a significant bias towards bond financing on emerging markets, with private investors relying on a bail-out of bonds by the international community. The bias has been a main cause of the recent excessive fragility of international capital markets. The paper shows how collective action clauses in bond contracts help to involve the private sector in risk sharing. It argues that such clauses, as a market based instrument, will raise spreads for emerging market debt and so help to correct a market failure towards excessive bond finance. Recent pressure by the IMF to involve the private sector is facing a conflict between the principle to honour existing contracts and the principle of equal treatment of bondholders.
The paper analyzes the incentive for the ECB to establish reputation by pursuing a restrictive policy right at the start of its operation. The bank is modelled as risk averse with respect to deviations of both inflation and output from its targets. The public, being imperfectly informed about the bank's preferences, uses observed inflation as an (imperfect) signal of the unknown preferences. Under linear learning rules, which are commonly used in the literature, a gradual build-up of reputation is the optimal response. The paper shows that such a linear learning rule is not consistent with efficient signaling. It is shown that in a game with efficient signaling, a cold turkey approach, allowing for deflation, is optimal for a strong bank, which accepts high current output losses at the beginning in order to demonstrate its toughness. JEL classification: D82, E58
In recent years, the relationship between financial development and economic growth has received widespread attention in the literature on growth and development. The first part of this paper summarises the results of this research, stressing the growth-enhancing effects of an increased interpersonal re-allocation of resources promoted by financial development. The second part seeks to identify the determinants of financial development based on Diamond's theory of financial intermediation as delegated monitoring. The analysis shows that the quality of corporate governance of banks is the key factor in financial system development. Accordingly, financial sector reforms in developing countries will only succeed if they strengthen the corporate governance of financial institutions. In this area, financial institution building has an important contribution to make. Paper presented at the First Annual Seminar on New Development Finance held at the Goethe University of Frankfurt, September 22 - October 3, 1997
The focus of this article is the analysis of the inflation risk of European real estate securities. Following both a causal and a final understanding of risk, the analysis is twofold. First, to examine the causal influence of inflation on short- and long-term asset returns, different regression approaches are employed based on the methodology of Fama and Schwert (1977). Hedging capacities against expected inflation are found only for German open-end funds. Second, different shortfall risk measures are used to study whether an investment in European real estate securities protects against a negative real return at the end of a given investment period.
The extension of long-term loans, e.g. to finance housing, is adversely affected by inflation. For one thing, the higher nominal interest rates charged by the banks in response to inflation mean that borrowers have to make (nominally) higher interest payments, which unnecessarily reduces their borrowing capacity. For another, long-term loans with variable interest rates increase the probability that borrowers will become unable to meet their payment obligations. The present paper examines these two assertions in detail. At the same time, it presents a concept for substantially reducing the weaknesses of conventional lending methodologies. We start by investigating the consequences of a stable inflation rate on the borrowing capacity of credit clients, then go on to analyze the impact of fluctuating inflation rates on the risk of default.
Competition for order flow can be characterized as a coordination game with multiple equilibria. Analyzing competition between dealer markets and a crossing network, we show that the crossing network is more stable for lower traders’ disutilities from unexecuted orders. By introducing private information, we prove existence of a unique equilibrium with market consolidation. Assets with low volatility and large volumes are traded on crossing networks, others on dealer markets. Efficiency requires more assets to be traded on crossing networks. If traders’ disutilities differ sufficiently, a unique equilibrium with market fragmentation exists. Low disutility traders use the crossing network while high disutility traders use the dealer market. The crossing network’s market share is inefficiently small.
In this paper, we estimate the demand for homeowners insurance in Florida. Since we are interested in a number of factors influencing demand, we approach the problem from two directions. We first estimate two hedonic equations representing the premium per contract and the price mark-up. We analyze how the contracts are bundled and how contract provisions, insurer characteristics and insured risk characteristics and demographics influence the premium per contract and the price mark-up. Second, we estimate the demand for homeowners insurance using two-stage least squares regression. We employ ISO's indicated loss costs as our proxy for real insurance services demanded. We assume that the demand for coverage is essentially a joint demand and thus we can estimate the demand for catastrophe coverage separately from the demand for non-catastrophe coverage. We determine that price elasticities are less elastic for catastrophic coverage than for non-catastrophic coverage. Furthermore, estimated income elasticities suggest that homeowners insurance is an inferior good. Finally, we conclude based on the results of a selection model that our sample of ISO-reporting companies represents well the demand for insurance in the Florida market as a whole.
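The two-stage least squares logic used for the demand equation can be sketched on simulated data; the demand and supply design and every coefficient below are hypothetical, not the Florida data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Demand with an endogenous price: OLS is biased because price responds to the
# demand shock u; a cost shifter z serves as instrument (relevant and excluded)
z = rng.normal(size=n)
u = rng.normal(size=n)
p = 2.0 + 1.0 * z + 0.8 * u + rng.normal(size=n)   # supply side: price rises with u
q = 10.0 - 1.5 * p + u                              # true price coefficient: -1.5

def ols(X, y):
    # least-squares coefficients
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), p])

# OLS: biased towards zero here, since cov(p, u) > 0
b_ols = ols(X, q)

# 2SLS: first stage regresses p on the instruments, second stage uses fitted p
Z = np.column_stack([np.ones(n), z])
p_hat = Z @ ols(Z, p)
b_2sls = ols(np.column_stack([np.ones(n), p_hat]), q)

print(f"true slope = -1.5, OLS = {b_ols[1]:.3f}, 2SLS = {b_2sls[1]:.3f}")
```

Because the demand shock pushes price up while pushing quantity up, OLS attenuates the (negative) price elasticity; instrumenting price with the cost shifter recovers it.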
At present, the question of how national pension or retirement payment systems should be organised is being hotly debated in various countries, and opinions vary widely as to what should be regarded as the optimal design for such systems. It appears to the authors of the present paper that in this entire discussion one aspect is largely overlooked: What relationships exist between the pension system and the financial system in a given country? As such relationships might prove to be important, the present paper investigates the following questions: (1) Are there differences between the national pension systems of three major European countries – Germany, France and the U.K. – and between the financial systems of these countries? (2) And if the existence of such differences can be demonstrated, is there a correspondence between the differences with respect to the various national pension systems and the differences as regards the countries’ financial systems? (3) And if such a correspondence exists, is there any kind of interrelationship between the national financial and pension systems of the individual countries which goes beyond a mere correspondence? Looking mainly at two aspects – namely, risk allocation and the incentives to create human capital – the authors of this paper argue (1) that there are indeed considerable differences between the financial and pension systems of the three countries; (2) that in both Germany and the U.K. there are also systematic correspondences between the respective pension systems and financial systems and their economic characteristics, but that such a correspondence cannot be identified in the case of France; and (3) that these parallels are, in the final analysis, based on complementarities and are therefore likely to contribute to the efficiency of the German and the British systems. 
The paper concludes with a brief look at policy implications which the existence of, or the lack of, consistency between national pension systems and national financial systems might have.
Although the world of banking and finance is becoming more integrated every day, in most aspects the world of financial regulation continues to be narrowly defined by national boundaries. The main players here are still national governments and governmental agencies. And until recently, they tended to follow a policy of shielding their activities from scrutiny by their peers and members of the academic community rather than inviting critical assessments and an exchange of ideas. The turbulence in international financial markets in the 1980s, and its impact on U.S. banks, gave rise to the notion that academics working in the field of banking and financial regulation might be in a position to make a contribution to the improvement of regulation in the United States, and thus ultimately to the stability of the entire financial sector. This provided the impetus for the creation of the “U.S. Shadow Financial Regulatory Committee”. In the meantime, similar shadow committees have been founded in Europe and Japan. The specific problems associated with financial regulation in Europe, as well as the specific features which distinguish the European Shadow Financial Regulatory Committee from its counterparts in the U.S. and Japan, derive from the fact that while Europe has already made substantial progress towards economic and political integration, it is still primarily a collection of distinct nation-states with differing institutional set-ups and political and economic traditions. Therefore, any attempt to work towards a European approach to financial regulation must include an effort to promote the development of a European culture of co-operation in this area, and this is precisely what the European Shadow Financial Regulatory Committee (ESFRC) seeks to do. In this paper, Harald Benink, chairman of the ESFRC, and Reinhard H. Schmidt, one of the two German members, discuss the origin, the objectives and the functioning of the committee and the thrust of its recommendations.
Structural positions are very common in investment practice. A structural position is defined as a permanent overweighting of a riskier asset class relative to a prespecified benchmark portfolio. The most prominent example of a structural position is the equity bias in a balanced fund that arises from consistently overweighting equities in tactical asset allocation. Another example is the permanent allocation of credit in a fixed income portfolio with a government benchmark. The analysis provided in this article shows that, whenever possible, structural positions should be avoided. Graphical illustrations based on the Pythagorean theorem are used to make a connection between the active risk/return and the total risk/return framework. Structural positions alter the risk profile of the portfolio substantially, and the appeal of active management – to provide active returns uncorrelated to benchmark returns and hence to shift the efficient frontier outwards – gets lost. The article demonstrates that the commonly used alpha/tracking-error criterion is not sufficient for active management. In addition, structural positions complicate measuring managers’ skill. The paper also develops normative implications for active portfolio management. Tactical asset allocation should be based on the comparison of expected excess returns of an asset class to the equilibrium risk premium of the same asset class, and not to expected excess returns of other asset classes. For cases where structural positions cannot be avoided, a risk budgeting approach is introduced and applied to determine the optimal position size. Finally, investors are advised not to base performance evaluation only on simple manager rankings, because this encourages managers to take structural positions and does not reward efforts to produce alpha. The same holds true for comparing managers’ information ratios. Information ratios, in investment practice defined as the ratio of active return to active risk, do not uncover structural positions.
In this paper we have developed a financial model of the non-life insurer to assist the management of the insurance company in making decisions on product, investment and reinsurance mix. The model is based on portfolio theory and recognizes the stochastic nature of, and the interaction between, the underwriting and investment income of the insurance business. In the context of an empirical application we illustrate how a portfolio optimisation approach can be used for asset-liability management.
Our study provides evidence on the share price reactions to the announcement of equity issues in Germany, where the capital market is characterized by institutional features distinct from the U.S. market. German seasoned equity issues yield a positive market reaction, which contrasts with the significant negative abnormal returns reported for the U.S. We provide evidence that these results are due to differences in both issuing characteristics and flotation methods, and in the corporate governance and ownership structures of the two countries. Our study explains much of the empirical puzzle of different market reactions to seemingly similar events across financial markets.
Real options theory applies techniques known from finance theory to the valuation of capital investments. The present paper investigates this analogy further, considering the case of a portfolio of real options. An implementation of real option models in practice will mostly be concerned with a portfolio of real options, so the analysis of portfolio aspects is of both academic and practical interest. Is a portfolio of real options special? In order to shed some light on this question, the present paper will outline the relevant features of a portfolio of real options. It will show that the analogy to financial options remains strong if compound option models are applied. As a result, a portfolio of real options, and therefore the firm as such, is generally to be understood as one single compound real option.
We present an empirical study focusing on the estimation of a fundamental multi-factor model for a universe of European stocks. Following the approach of the BARRA model, we have adopted a cross-sectional methodology. The proportion of explained variance ranges from 7.3% to 66.3% in the weekly regressions, with a mean of 32.9%. For the individual factors we give the percentage of weeks in which they exerted a statistically significant influence on stock returns. The best explanatory power – apart from the dominant country factors – was found among the statistical constructs “success” and “variability in markets”.
Who knows what when? : The information content of pre-IPO market prices : [Version March/June 2002]
(2002)
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
We propose a new framework for modelling time dependence in duration processes on financial markets. The well known autoregressive conditional duration (ACD) approach introduced by Engle and Russell (1998) will be extended in a way that allows the conditional expectation of the duration process to depend on an unobservable stochastic process, which is modelled via a Markov chain. The Markov switching ACD model (MSACD) is a very flexible tool for the description and forecasting of financial duration processes. In addition, the introduction of an unobservable, discrete-valued regime variable can be justified in the light of recent market microstructure theories. In an empirical application we show that the MSACD approach is able to capture several specific characteristics of inter-trade durations while alternative ACD models fail. Furthermore, we use the MSACD to test implications of a sequential trade model.
Banking and markets
(2001)
This paper integrates a number of recent themes in the literature on banking and asset markets – optimal risk sharing, limited market participation, asset-price volatility, market liquidity, and financial crises – in a general-equilibrium theory of the financial system. A complex financial system comprises both financial markets and financial institutions. Financial institutions can take the form of intermediaries or banks. Banks, unlike intermediaries, are subject to runs, but crises do not imply market failure. We show that a sophisticated financial system – a system with complete markets for aggregate risk and limited market participation – is incentive-efficient if the institutions take the form of intermediaries, or else constrained-efficient if they take the form of banks. We also consider an economy in which the markets for aggregate risks are incomplete. In this context, there is a role for prudential regulation: regulating liquidity can improve welfare.
Executive Stock Option Programs (SOPs) have become the dominant compensation instrument for top management in recent years. The incentive effects of an SOP with respect to both corporate investment and financing decisions critically depend on the design of the SOP. A specific problem in designing SOPs concerns dividend protection. Usually, SOPs are not dividend protected, i.e. any dividend payout decreases the value of a manager’s options. Empirical evidence shows that this results in a significant decrease in the level of corporate dividends and, at the same time, in an increase in share repurchases. Yet, few suggestions have been made on how to account for dividends in SOPs. This paper applies arguments from principal-agent theory and from the theory of finance to analyze different forms of dividend protection, and to address the relevance of dividend protection in SOPs. Finally, the paper relates the theoretical analysis to empirical work on the link between share repurchases and SOPs.
Since the beginning of the 1990s, it has been widely expected that the implementation of the European Single Market would lead to a rapid convergence of Europe’s financial systems. In the present paper we will show that at least in the period prior to the introduction of the common currency this expected convergence did not materialise. Our empirical studies on the significance of various institutions within the financial sectors, on the financing patterns of firms in various countries and on the predominant mechanisms of corporate governance, which are summarised and placed in a broader context in this paper, point to few, if any, signs of a convergence at a fundamental or structural level between the German, British and French financial systems. The German financial system continues to appear to be bank-dominated, while the British system still appears to be capital market-dominated. During the period covered by the research, i.e. 1980 – 1998, the French system underwent the most far-reaching changes, and today it is difficult to classify. In our opinion, these findings can be attributed to the effects of strong path dependencies, which are in turn an outgrowth of relationships of complementarity between the individual system components. Projecting what we have observed into the future, the results of our research indicate that one of two alternative paths of development is most likely to materialise: either the differences between the national financial systems will persist, or – possibly as a result of systemic crises – one financial system type will become the dominant model internationally. And if this second path emerges, the Anglo-American, capital market-dominated system could turn out to be the “winner”, because it is better able to withstand and weather crises, but not necessarily because it is more efficient.
In this paper we study the benefits derived from international diversification of stock portfolios from the German and Hungarian points of view. In contrast to the German capital market, which is one of the largest in the world, the Hungarian Stock Exchange is an emerging market. The Hungarian stock market is highly volatile, and high returns are often accompanied by extremely large risk. Therefore, there is good potential for Hungarian investors to realize substantial benefits in terms of risk reduction by creating multi-currency portfolios. The paper gives evidence on the above-mentioned benefits for both countries by examining the performance of several ex ante portfolio strategies. In order to control the currency risk, different types of hedging approaches are implemented.
Financial development and financial institution building are important prerequisites for economic growth. However, both the potential and the problems of institution building are still vastly underestimated by those who design and fund institution building projects. The paper first underlines the importance of financial development for economic growth, then describes the main elements of “serious” institution building: the lending technology, the methodological approaches, and the question of internal structure and corporate governance. Finally, it discusses three problems which institution building efforts have to cope with: inappropriate expectations on the part of donor and partner institutions regarding the problems and effects of institution building efforts, the lack of awareness of the importance of governance and ownership issues, and financial regulation that is too restrictive for microfinance operations. All three problems together explain why there are so few successful micro and small business institutions operating worldwide.
We analyze incentives for loan officers in a model with hidden action, limited liability and truth-telling constraints under the assumption that the principal has private information from an automatic scoring system. First, we show that the truth-telling problem reduces the bank’s expected profit whenever the loan officer can not only conceal bad types, but can also falsely report bad types. Second, we investigate whether the bank should reveal her private information to the agent. We show that this depends on the percentage of good loans in the population and on the signal’s informativeness. Though we had to define different regions for different parameters, we conclude that it might often be favorable not to reveal the signal. This contradicts current practice.
We investigate the suggested substitutive relation between executive compensation and the disciplinary threat of takeover imposed by the market for corporate control. We complement other empirical studies on managerial compensation and corporate control mechanisms in three distinct ways. First, we concentrate on firms in the oil industry for which agency problems were especially severe in the 1980s. Due to the extensive generation of excess cash flow, product and factor market discipline was ineffective. Second, we obtain a unique data set drawn directly from proxy statements which accounts not only for salary and bonus but for the value of all stock-market based compensation held in the portfolio of a CEO. Our data set consists of 51 firms in the U.S. oil industry from 1977 to 1994. Third, we employ ex ante measures of the threat of takeover at the individual firm level which are superior to ex post measures like actual takeover occurrence or past incidence of takeovers in an industry. Results show that annual compensation and, to a much higher degree, stock-based managerial compensation increase after a firm becomes protected from a hostile takeover. However, clear-cut evidence that CEOs of protected firms receive higher compensation than those of firms considered susceptible to a takeover cannot be found.
Individual financial systems can be understood as very specific configurations of certain key elements. Often these configurations remain unchanged for decades. We hypothesize that there is a specific relationship between key elements, namely that of complementarity. Thus, complementarity seems to be an essential feature of financial systems. Intuitively speaking, complementarity exists if the elements of a (financial) system reinforce each other in terms of contributing to the functioning of the system. It is the purpose of this paper to provide an analytical clarification of the concept of complementarity. This is done by modeling financial systems as combinations of four elements: firm-specific human capital of an entrepreneur, the ability of a bank to restructure the borrower's firm in the case of distress, the possibility to appropriate private benefits from running the firm, and the bankruptcy law. A specific configuration of these elements constitutes one financial system. The bankruptcy law and the potential private benefits are treated as exogenous. They determine the bargaining power of the contracting parties in the case that recontracting occurs. In a two-stage game, the optimal values for the other elements are determined by the agents individually - by investing in human capital and restructuring skills, respectively - and jointly by writing, executing and possibly renegotiating a financing contract for the firm. The paper discusses the equilibria for different types of bankruptcy law and demonstrates that equilibria exhibit the sought-after feature of complementarity. Three particularly significant equilibria correspond to stylized accounts of the British, the German and the US-American financial systems, respectively.
The paper presents an empirical analysis of the alleged transformation of the financial systems in the three major European economies, France, Germany and the UK. Based on a unified data set developed on the basis of national accounts statistics, and employing a new and consistent method of measurement, the following questions are addressed: Is there a common pattern of structural change; do banks lose importance in the process of change; and are the three financial systems becoming more similar? We find that there is neither a general trend towards disintermediation, nor towards a transformation from bank-based to capital market-based financial systems, nor towards a loss of importance of banks. Only in the case of France could strong signs of transformation, as well as signs of a general decline in the role of banks, be found. Thus the three financial systems also do not seem to be becoming more similar. However, there is a common pattern of change: the intermediation chains are lengthening in all three countries. Nonbank financial intermediaries are taking over a more important role as mobilizers of capital from the non-financial sectors. In combination with the trend towards securitization of bank liabilities, this change increases the funding costs of banks and may put banks under pressure. In the case of France, this change is so pronounced that it might even threaten the stability of the financial system.
Hackethal and Schmidt (2003) criticize a large body of literature on the financing of corporate sectors in different countries that questions some of the distinctions conventionally drawn between financial systems. Their criticism is directed against the use of net flows of finance and they propose alternative measures based on gross flows which they claim re-establish conventional distinctions. This paper argues that their criticism is invalid and that their alternative measures are misleading. There are real issues raised by the use of aggregate data but they are not the ones discussed in Hackethal and Schmidt’s paper. JEL Classification: G30
US investors hold much less foreign stock than mean/variance analysis applied to historical data predicts. In this article, we investigate whether this home bias can be explained by Bayesian approaches to international asset allocation. In contrast to mean/variance analysis, Bayesian approaches employ different techniques for obtaining the set of expected returns. They shrink sample means towards a reference point that is inferred from economic theory. We also show that one of the Bayesian approaches leads to the same implications for asset allocation as the mean/variance/tracking-error criterion. In both cases, the optimal portfolio is a combination of the market portfolio and the mean/variance-efficient portfolio with the highest Sharpe ratio.
Applying the Bayesian approaches to the subject of international diversification, we find that substantial home bias can be explained when a US investor has a strong belief in the global mean/variance efficiency of the US market portfolio and a high regret aversion to falling behind the US market portfolio. We also find that the current level of home bias can be justified whenever regret aversion is significantly higher than risk aversion.
Finally, we compare the Bayesian approaches to mean/variance analysis in an empirical out-of-sample study. The Bayesian approaches prove to be superior to mean/variance optimized portfolios in terms of higher risk-adjusted performance and lower turnover. However, they do not systematically outperform the US market portfolio or the minimum-variance portfolio.
We analyze exchange rates along with equity quotes for 3 German firms from New York (NYSE) and Frankfurt (XETRA) during overlapping trading hours to see where price discovery occurs and how stock prices adjust to an exchange rate shock. Findings include: (a) the exchange rate is exogenous with respect to the stock prices; (b) exchange rate innovations are more important in understanding the evolution of NYSE prices than XETRA prices; and (c) most (but not all) of the fundamental or random walk component of firm value is determined in Frankfurt.
In contrast to the United States and the United Kingdom, little empirical work exists on the distributional characteristics of appraisal-based real estate returns outside these countries. The purpose of this study is to fill this gap by focusing on Germany. In line with other studies, this paper offers an extensive investigation into the distribution of German real estate returns and compares them with U.S. and U.K. data over the same period. Furthermore, the comovements with bonds and stocks are also examined. In essence, the distributional characteristics of German real estate returns are comparable to those of the U.S. and U.K.
U.S. investors hold much less international stock than is optimal according to mean–variance portfolio theory applied to historical data. We investigated whether this home bias can be explained by Bayesian approaches to international asset allocation. In comparison with mean–variance analysis, Bayesian approaches use different techniques for obtaining the set of expected returns by shrinking the sample means toward a reference point that is inferred from economic theory. Applying the Bayesian approaches to the field of international diversification, we found that a substantial home bias can be explained when a U.S. investor has a strong belief in the global mean–variance efficiency of the U.S. market portfolio, and in this article, we show how to quantify the strength of this belief. We also found that one of the Bayesian approaches leads to the same implications for asset allocation as the mean–variance/tracking-error criterion. In both cases, the optimal portfolio is a combination of the U.S. market portfolio and the mean–variance-efficient portfolio with the highest Sharpe ratio.
For the Neuer Markt, the year 2001 is not considered one of its best, compared to its prior performance. Investors who once piled into the Neuer Markt have now become wary of the exchange, which was launched in 1997 as Europe’s leading growth market and answer to the U.S.’s Nasdaq Stock Market. The Neuer Markt’s reputation has been marred by the misleading information policy of several Neuer Markt companies, which published false annual and quarterly data. Some of these companies are responsible for having misinformed investors about their pending bankruptcies. Under these circumstances, it is time to find an explanation for the dramatic loss of credibility in Neuer Markt enterprises. In seeking an answer, two aspects come under consideration: what type of information (annual versus quarterly reports) was available to investors, and of what quality were the data provided. Interim reports can be seen as an important instrument in the reporting system for informing all kinds of investors. For this reason we examine the quality of Neuer Markt quarterly reports by concentrating on the disclosure level of 52 Neuer Markt companies‘ reports for the third quarters of 1999 and 2000. To enable comparison, we establish four disclosure indexes that measure a report’s compliance with the Neuer Markt Rules and Regulations as well as with IAS and US GAAP interim reporting standards. The results demonstrate that the level of disclosure has increased over time. We then aim to find typical attributes of Neuer Markt enterprises that provide a high or low level of accounting information in their quarterly reports. However, the study also shows that there is no correlation between market capitalization and the quality of interim reports. It can nevertheless be suggested that an additional enforcement mechanism could improve quality and lure investors back. A step towards this aim is the quarterly report standardization project of Deutsche Boerse AG.
Open source projects produce goods or standards that do not allow for the appropriation of private returns by those who contribute to their production. In this paper we analyze why programmers will nevertheless invest their time and effort to code open source software. We argue that the particular way in which open source projects are managed and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that more mediocre programmers cannot achieve. Through setting themselves apart they can turn this signal into monetary rewards that correspond to their superior capabilities. With this incentive they will forgo the immediate rewards they could earn in software companies producing proprietary software by restricting the access to the source code of their product. Whenever institutional arrangements are in place that enable the acquisition of such a signal and the subsequent substitution into monetary rewards, the contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
What constitutes a financial system in general and the German financial system in particular?
(2003)
This paper is one of the two introductory chapters of the book "The German Financial System". It first discusses two issues that have a general bearing on the entire book, and then provides a broad overview of the German financial system. The first general issue is that of clarifying what we mean by the key term "financial system" and, based on this definition, of showing why the financial system of a country is important and what it might be important for. Obviously, a definition of its subject matter and an explanation of its importance are required at the outset of any book. As we will explain in Section II, we use the term "financial system" in a broad sense which sets it clearly apart from the narrower concept of the "financial sector". The second general issue is that of how financial systems are described and analysed. Obviously, the definition of the object of analysis and the method by which the object is to be analysed are closely related to one another. The remainder of the paper provides a general overview of the German financial system. In addition, it is intended to provide a first indication of how the elements of the German financial system are related to each other, and thus to support our claim from Section II that there is indeed some merit in emphasising the systemic features of financial systems in general and of the German financial system in particular. The chapter concludes by briefly comparing the general characteristics of the German financial system with those of the financial systems of other advanced industrial countries, and taking a brief look at recent developments which might undermine the "systemic" character of the German financial system.