Motivated by the prominent role of electronic limit order book (LOB) markets in today's stock market environment, this paper provides the basis for understanding, reconstructing and adapting Hollifield, Miller, Sandas, and Slive's (2006) (henceforth HMSS) methodology for estimating the gains from trade to the Xetra LOB market at the Frankfurt Stock Exchange (FSE), in order to evaluate the market's performance in this respect. To this end, the paper examines HMSS's base model in depth and provides a structured recipe for the planned implementation with Xetra LOB data. The contribution of this paper lies in the modification of HMSS's methodology with respect to the particularities of the Xetra trading system that are not yet considered in HMSS's base model. The necessary modifications, expressed in terms of empirical caveats, are essential for deriving unbiased market efficiency measures for Xetra.
We explore the pattern of elderly homeownership using microeconomic surveys of 15 OECD countries, merging 60 national household surveys covering about 300,000 individuals. In all countries the survey is repeated over time, permitting construction of an international dataset of repeated cross-sections. We find that ownership rates decline considerably after age 60 in all countries. However, a large part of the decline is due to cohort effects. Adjusting for them, we find that ownership rates start falling only after age 70 and decline by about one percentage point per year after age 75. We also find that differences in ownership trajectories across countries are correlated with indicators of the degree of market regulation.
This paper introduces adaptive learning and endogenous indexation in the New-Keynesian Phillips curve and studies disinflation under inflation targeting policies. The analysis is motivated by the disinflation performance of many inflation-targeting countries, in particular the gradual Chilean disinflation with temporary annual targets. At the start of the disinflation episode price-setting firms expect inflation to be highly persistent and opt for backward-looking indexation. As the central bank acts to bring inflation under control, price-setting firms revise their estimates of the degree of persistence. Such adaptive learning lowers the cost of disinflation. This reduction can be exploited by a gradual approach to disinflation. Firms that choose the rate for indexation also re-assess the likelihood that announced inflation targets determine steady-state inflation and adjust indexation of contracts accordingly. A strategy of announcing and pursuing short-term targets for inflation is found to influence the likelihood that firms switch from backward-looking indexation to the central bank's targets. As firms abandon backward-looking indexation the costs of disinflation decline further. We show that an inflation targeting strategy that employs temporary targets can benefit from lower disinflation costs due to the reduction in backward-looking indexation.
Monetary policy analysts often rely on rules-of-thumb, such as the Taylor rule, to describe historical monetary policy decisions and to compare current policy to historical norms. Analysis along these lines also permits evaluation of episodes where policy may have deviated from a simple rule and examination of the reasons behind such deviations. One interesting question is whether such rules-of-thumb should draw on policymakers' forecasts of key variables such as inflation and unemployment or on observed outcomes. Importantly, deviations of the policy from the prescriptions of a Taylor rule that relies on outcomes may be due to systematic responses to information captured in policymakers' own projections. We investigate this proposition in the context of FOMC policy decisions over the past 20 years using publicly available FOMC projections from the biannual monetary policy reports to the Congress (Humphrey-Hawkins reports). Our results indicate that FOMC decisions can indeed be predominantly explained in terms of the FOMC's own projections rather than observed outcomes. Thus, a forecast-based rule-of-thumb better characterizes FOMC decision-making. We also confirm that many of the apparent deviations of the federal funds rate from an outcome-based Taylor-style rule may be considered systematic responses to information contained in FOMC projections.
Risk transfer with CDOs
(2008)
Modern bank management comprises both classical lending business and transfer of asset risk to capital markets through securitization. Sound knowledge of the risks involved in securitization transactions is a prerequisite for solid risk management. This paper aims to resolve a part of the opaqueness surrounding credit-risk allocation to tranches that represent claims of different seniority on a reference portfolio. In particular, this paper analyzes the allocation of credit risk to different tranches of a CDO transaction when the underlying asset returns are driven by a common macro factor and an idiosyncratic component. Junior and senior tranches are found to be nearly orthogonal, motivating a search for the whereabouts of systematic risk in CDO transactions. We propose a metric for capturing the allocation of systematic risk to tranches. First, in contrast to a widely-held claim, we show that (extreme) tail risk in standard CDO transactions is held by all tranches. While junior tranches take on all types of systematic risk, senior tranches take on almost no non-tail risk. This is in stark contrast to an untranched bond portfolio of the same rating quality, which on average suffers substantial losses for all realizations of the macro factor. Second, given tranching, a shock to the risk of the underlying asset portfolio (e.g. a rise in asset correlation or in mean portfolio loss) has the strongest impact, in relative terms, on the exposure of senior tranche CDO-investors. Our findings can be used to explain major stylized facts observed in credit markets.
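The one-factor structure described above can be illustrated with a small simulation. All parameters below (100 names, 5% default probability, 20% asset correlation, the 0-3% junior and 10-100% senior tranche boundaries) are hypothetical choices for illustration, not the paper's calibration:

```python
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical portfolio: 100 names, 5% default probability,
# 20% asset correlation, zero recovery.
n_names, pd_, rho, n_sims = 100, 0.05, 0.20, 50_000
thresh = NormalDist().inv_cdf(pd_)            # default threshold

M = rng.standard_normal(n_sims)               # common macro factor
eps = rng.standard_normal((n_sims, n_names))  # idiosyncratic shocks
assets = np.sqrt(rho) * M[:, None] + np.sqrt(1 - rho) * eps
loss = (assets < thresh).mean(axis=1)         # portfolio loss fraction

def tranche_loss(L, attach, detach):
    """Fractional loss of a tranche with the given attachment points."""
    return np.clip(L - attach, 0, detach - attach) / (detach - attach)

junior = tranche_loss(loss, 0.00, 0.03)       # hypothetical equity tranche
senior = tranche_loss(loss, 0.10, 1.00)       # hypothetical senior tranche

# Junior losses occur across all macro states; senior losses are
# concentrated in the worst macro-factor realizations (tail risk).
bad = M < np.quantile(M, 0.05)
print(f"junior E[loss]: overall {junior.mean():.3f}, worst macro states {junior[bad].mean():.3f}")
print(f"senior E[loss]: overall {senior.mean():.4f}, worst macro states {senior[bad].mean():.4f}")
```

The simulation reproduces the qualitative pattern the abstract describes: the junior tranche absorbs losses in essentially every state of the macro factor, while the senior tranche's expected loss is close to zero outside the extreme tail.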
We show that the use of correlations for modeling dependencies may lead to counterintuitive behavior of risk measures, such as Value-at-Risk (VaR) and Expected Shortfall (ES), when the risk of very rare events is assessed via Monte Carlo techniques. The phenomenon is demonstrated for mixture models adapted from credit risk analysis as well as for common Poisson-shock models used in reliability theory. An obvious implication of this finding pertains to the analysis of operational risk. The alleged incentive suggested by the New Basel Capital Accord (Basel II), namely decreasing minimum capital requirements by allowing for less than perfect correlation, may not necessarily be attainable.
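The rare-event phenomenon can be sketched with a toy Monte Carlo comparison. The setup (ten risk cells, each producing a unit loss with probability 0.001, evaluated at the 99.5% level) is an invented illustration, not the paper's model; it shows that perfectly correlated rare losses can yield a *lower* VaR than independent ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 10 risk cells, each with a 0.1% unit-loss probability.
n_cells, p, n_sims, level = 10, 0.001, 200_000, 0.995

indep = rng.random((n_sims, n_cells)) < p           # independent loss indicators
comon = np.tile((rng.random(n_sims) < p)[:, None],  # comonotone: all cells
                (1, n_cells))                       # fire together

def var_es(losses, level):
    """Empirical Value-at-Risk and Expected Shortfall."""
    q = np.quantile(losses, level)
    tail = losses[losses >= q]
    return q, tail.mean()

var_i, es_i = var_es(indep.sum(axis=1), level)
var_c, es_c = var_es(comon.sum(axis=1), level)
print(f"independent: VaR={var_i:.0f}, ES={es_i:.3f}")
print(f"comonotone:  VaR={var_c:.0f}, ES={es_c:.3f}")
```

With independence, some loss occurs in roughly 1% of simulations, so the 99.5% VaR is positive; under perfect correlation the (large) loss occurs with probability only 0.1%, which falls beyond the 99.5% quantile, so the VaR drops to zero.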
The paper proposes a panel cointegration analysis of the joint development of government expenditures and economic growth in 23 OECD countries. The empirical evidence indicates a structural positive correlation between public spending and per-capita GDP, which is consistent with the so-called Wagner's law. A long-run elasticity larger than one suggests a more than proportional increase of government expenditures with respect to economic activity. In addition, in keeping with the spirit of the law, we find that the correlation is usually higher in countries with lower per-capita GDP, suggesting that the catching-up period is characterized by a stronger development of government activities relative to economies at a more advanced stage of development.
Do we measure what we get?
(2008)
Performance measures are intended to enhance the performance of companies by directing the attention of decision makers towards the achievement of organizational goals. Goal congruence is therefore regarded in the literature as a major determinant of the quality of such measures. As reality is affected by many variables, practitioners have tried to achieve a high degree of goal congruence by incorporating an increasing number of these variables into performance measures. However, a goal-congruent measure does not automatically lead to superior decisions, because decision makers' restricted cognitive abilities can counteract the intended effects. This paper addresses the interplay between the goal congruence and the complexity of performance measures for cognitively restricted decision makers. Two types of decision quality are derived which allow a differentiated view of how this interplay affects decision quality and learning. Simulation experiments based on this differentiation provide results that allow a critical reflection on the costs and benefits of goal congruence and on the assumptions regarding the goal congruence of incentive systems.
A new global crop water model was developed to compute blue (irrigation) water requirements and crop evapotranspiration from green (precipitation) water at a spatial resolution of 5 arc minutes by 5 arc minutes for 26 different crop classes. The model is based on soil water balances performed for each crop and each grid cell. For the first time, a new global data set consisting of monthly growing areas of irrigated crops and related cropping calendars was applied. Crop water use was computed for irrigated land for the period 1998-2002. This documentation report describes the data sets used as model input and the methods used in the model calculations, followed by a presentation of first results for blue and green water use at the global scale, for countries and for specific crops. Additionally, the simulated seasonal distribution of water use on irrigated land is presented. The computed model results are compared to census-based statistical information on irrigation water use and to the results of another crop water model developed at FAO.
A data set of monthly growing areas of 26 irrigated crops (MGAG-I) and related crop calendars (CC-I) was compiled for 402 spatial entities. The selection of crops comprised all major food crops including regionally important ones (wheat, rice, maize, barley, rye, millet, sorghum, soybeans, sunflower, potatoes, cassava, sugar cane, sugar beets, oil palm, rapeseed/canola, groundnuts/peanuts, pulses, citrus, date palm, grapes/vine, cocoa, coffee), major water-consuming crops (cotton), and unspecified other crops (other perennial crops, other annual crops, managed grassland). The data set refers to the time period 1998-2002 and has a spatial resolution of 5 arc minutes by 5 arc minutes, which is 8 km by 8 km at the equator. This is the first time that a data set of cell-specific monthly growing areas of irrigated crops with this spatial resolution has been created. The data set is consistent with the irrigated area and water use statistics of the AQUASTAT programme of the Food and Agriculture Organization of the United Nations (FAO) (http://www.fao.org/ag/agl/aglw/aquastat/main/index.stm) and with the Global Map of Irrigation Areas (GMIA) (http://www.fao.org/ag/agl/aglw/aquastat/irrigationmap/index.stm). At the cell level, an attempt was made to maximise consistency with the cropland extent and cropland harvested area from the Department of Geography and Earth System Science Program of McGill University at Montreal, Quebec, Canada and the Center for Sustainability and the Global Environment (SAGE) of the University of Wisconsin at Madison, USA (http://www.geog.mcgill.ca/~nramankutty/Datasets/Datasets.html and http://geomatics.geog.mcgill.ca/~navin/pub/Data/175crops2000/). The consistency between the grid product and the input data was quantified. MGAG-I and CC-I are fully consistent with each other at the entity level. For input data other than CC-I, the consistency of MGAG-I at the cell level was calculated.
The consistency of MGAG-I with respect to the area equipped for irrigation (AEI) of GMIA and to the cropland extent of SAGE was characterised by the sum, over all grid cells, of the maximum difference between the MGAG-I monthly total irrigated area and the reference area wherever the latter was exceeded. The consistency of the harvested area contained in MGAG-I with respect to the SAGE harvested area was characterised by the crop-specific sum of the cell-specific difference between the MGAG-I harvested area and the SAGE harvested area wherever the latter was exceeded. In all three cases, the sums are the excess areas that should not have been distributed under the assumption that the input data were correct. Globally, this cell-level excess of MGAG-I compared to AEI is 331,304 ha, or only about 0.12% of the global AEI of 278.9 Mha found in the original grid. The respective cell-level excess compared to the SAGE cropland extent is 32.2 Mha, corresponding to about 2.2% of the total cropland area. The respective cell-level excess compared to the SAGE harvested area is 27% of the irrigated harvested area, or 11.5% of the AEI. In a further step, to be published later, rainfed areas were also compiled in order to form the global data set of monthly irrigated and rainfed crop areas around the year 2000 (MIRCA2000). The data set can be used for global and continental-scale studies on food security and water use. In the future it will be improved, e.g. with a better spatial resolution of crop calendars and an improved crop distribution algorithm. The MIRCA2000 data set and its full documentation, together with future updates, will be freely available through the following long-term internet site: http://www.geo.uni-frankfurt.de/ipg/ag/dl/forschung/MIRCA/index.html.
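The cell-level excess metric, the sum over grid cells of the amount by which the maximum monthly MGAG-I total exceeds the reference area, can be sketched with toy numbers (the areas below are invented, not MIRCA2000 values):

```python
import numpy as np

# Invented example: per-cell monthly irrigated areas from MGAG-I versus a
# per-cell reference area such as the GMIA AEI, all in hectares.
mgag_monthly = np.array([
    [120, 150, 140],   # cell 1, three months
    [300, 280, 310],   # cell 2
    [ 50,  60,  55],   # cell 3
])
reference = np.array([140, 305, 70])   # reference area per cell

# Excess = sum over cells of how far the maximum monthly total exceeds
# the reference area (zero where it stays below).
max_monthly = mgag_monthly.max(axis=1)
excess = np.clip(max_monthly - reference, 0, None).sum()
print(excess)  # 10 (cell 1) + 5 (cell 2) + 0 (cell 3) = 15
```

Under the assumption that the reference data are correct, this excess is exactly the area that "should not have been distributed" to those cells.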
The research presented here was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within the framework of the research project entitled "Consistent assessment of global green, blue and virtual water fluxes in the context of food production: regional stresses and worldwide teleconnections". The authors thank Navin Ramankutty and Chad Monfreda for making available the current SAGE datasets on cropland extent (Ramankutty et al., 2008) and harvested area (Monfreda et al., 2008) prior to their publication.
The introduction of a common currency and the harmonization of rules and regulations in Europe have significantly reduced distance in all its guises. Lower costs of overcoming space strengthen centripetal forces and should foster the consolidation of financial activity. In a national context this has, as a rule, led to the emergence of a single financial center. Hence, the Europeanization of financial and monetary affairs could foretell the relegation of some European financial hubs, such as Frankfurt and Paris, to third-rank status. Frankfurt's financial history is interesting insofar as it lost (in the 1870s) and regained (mainly in the 1980s) its preeminent place in the German context. Because Europe is still characterized by local pockets of information-sensitive assets as well as a demand for variety, the national analogy probably does not hold. There is room in Europe for a number of financial hubs of an international dimension, including Frankfurt.
In this paper we consider the dynamics of spot and futures prices in the presence of arbitrage. We propose a partially linear error correction model where the adjustment coefficient is allowed to depend non-linearly on the lagged price difference. We estimate our model using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear. The linear alternative is rejected. The speed of price adjustment is increasing almost monotonically with the magnitude of the price difference.
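The flavor of a state-dependent adjustment speed can be sketched in a stylized simulation. The adjustment function, the simulation parameters, and the crude binned estimator below are illustrative assumptions, not the authors' partially linear estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylized basis dynamics: z_t (spot minus futures, after carry) reverts
# faster when |z| is large, mimicking arbitrageurs who act only on
# deviations large enough to be profitable. Assumed functional form.
T = 20_000

def speed(z):                        # assumed adjustment-speed function
    return 0.05 + 0.4 * np.tanh(abs(z))

z = np.zeros(T)
for t in range(1, T):
    z[t] = (1 - speed(z[t - 1])) * z[t - 1] + 0.1 * rng.standard_normal()

# Crude binned (local-average) recovery of the speed: within bins of |z|,
# least-squares slope of the adjustment -dz on the lagged basis.
dz, lag = np.diff(z), z[:-1]
edges = np.quantile(abs(lag), [0.0, 0.5, 0.9, 1.0])
ests = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (abs(lag) >= lo) & (abs(lag) < hi)
    ests.append(-(dz[m] * lag[m]).sum() / (lag[m] ** 2).sum())
    print(f"|z| in [{lo:.3f}, {hi:.3f}): estimated speed {ests[-1]:.3f}")
```

The recovered speed rises across the bins, which is the kind of almost monotonic increase in the adjustment coefficient with the magnitude of the price difference that the paper reports for the DAX data.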
This study develops a novel 2-step hedonic approach, which is used to construct a price index for German paintings. This approach enables the researcher to use every single auction record, instead of only those auction records that belong to a sub-sample of selected artists. This results in a substantially larger sample available for research and it lowers the selection bias that is inherent in the traditional hedonic and repeat sales methodologies. Using a unique sample of 61,135 auction records for German artworks created by 5,115 different artists over the period 1985 to 2007, we find that the geometric annual return on German art is just 3.8 percent, with a standard deviation of 17.87 percent. Although our results indicate that art underperforms the market portfolio and is not proportionally rewarded for downside risk, under some circumstances art should be included in an optimal portfolio for diversification purposes.
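The time-dummy logic of a hedonic index can be sketched on synthetic data. Every number below (sample size, the stand-in characteristic, the true index path) is invented for illustration and is not the authors' two-step procedure or sample:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: log auction prices = characteristics effect
# + a common yearly log index + noise.
n, n_years = 5_000, 10
year = rng.integers(0, n_years, n)
size = rng.normal(0.0, 1.0, n)                 # stand-in artwork characteristic
true_log_index = np.linspace(0.0, 0.35, n_years)
logp = 8.0 + 0.5 * size + true_log_index[year] + rng.normal(0.0, 0.3, n)

# Hedonic regression: log price on characteristics plus year dummies
# (first year as base); the dummy coefficients recover the cumulative
# log price index.
dummies = [(year == t).astype(float) for t in range(1, n_years)]
X = np.column_stack([np.ones(n), size] + dummies)
beta, *_ = np.linalg.lstsq(X, logp, rcond=None)
log_index = np.concatenate([[0.0], beta[2:]])

# Geometric mean annual return implied by the estimated index.
geo_annual = np.expm1((log_index[-1] - log_index[0]) / (n_years - 1))
print(f"estimated geometric annual return: {geo_annual:.2%}")
```

Because every record enters the regression, there is no need to restrict attention to repeat sales or to a sub-sample of selected artists, which is the selection-bias advantage the abstract emphasizes.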
While companies have emerged as very proactive donors in the wake of recent major disasters like Hurricane Katrina, it remains unclear whether that corporate generosity generates benefits to firms themselves. The literature on strategic philanthropy suggests that such philanthropic behavior may be valuable because it can generate direct and indirect benefits to the firm, yet it is not known whether investors interpret donations in this way. We develop hypotheses linking the strategic character of donations to positive abnormal returns. Using event study methodology, we investigate stock market reactions to corporate donation announcements by 108 US firms made in response to Hurricane Katrina. We then use regression analysis to examine if our hypothesized predictors are associated with positive abnormal returns. Our results show that overall, corporate donations were linked to neither positive nor negative abnormal returns. We do, however, see that a number of factors moderate the relationship between donation announcements and abnormal stock returns. Implications for theory and practice are discussed.
We estimate the degree of 'stickiness' in aggregate consumption growth (sometimes interpreted as reflecting consumption habits) for thirteen advanced economies. We find that, after controlling for measurement error, consumption growth has a high degree of autocorrelation, with a stickiness parameter of about 0.7 on average across countries. The sticky-consumption-growth model outperforms the random walk model of Hall (1978), and typically fits the data better than the popular Campbell and Mankiw (1989) model. In several countries, the sticky-consumption-growth and Campbell-Mankiw models work about equally well.
Both the diversification and the focusing of corporate activities are frequently justified by the maximization of firm value. We examine the stock price effects of 184 acquisitions and 139 divestitures by German groups over the period 1996-2005. Contrary to frequently voiced criticism, diversifying acquisitions do not exert a significantly negative influence on market value. Focus-increasing acquisitions, by contrast, are associated with a significant value premium. The sale of business units generally leads to an increase in market value. Spin-offs outside the core business yield a higher, albeit statistically insignificant, value increase than divestitures of core business activities. Instead of a systematic diversification discount, we thus find a "focus premium" for the German market.
Generally, information provision and certification have been identified as the major economic functions of rating agencies. This paper analyzes whether the "watchlist" (rating review) instrument has extended the agencies' role towards a monitoring position, as proposed by Boot, Milbourn, and Schmeits (2006). Using a data set of Moody's rating history between 1982 and 2004, we find that the overall information content of rating actions has indeed increased since the introduction of the watchlist procedure. Our findings suggest that rating reviews help to establish implicit monitoring contracts between agencies and borrowers and as such enable a finer partition of rating information, thereby contributing to higher information quality.
Also a commentary on LG Köln, judgment of 5 October 2007 – 82 O 114/06 (STRABAG AG). Among the rights that a shareholder loses under § 28 sentence 1 WpHG for the period in which he fails to fulfil his notification duty under § 21 para. 1 or 1a WpHG is the voting right in the general meeting. That this rule harbours considerable potential for challenges to shareholder resolutions was recognized even before the WpHG had entered into force. Today that potential is more apparent than ever: according to a recent empirical study, the complaint that the majority shareholder or another large shareholder was excluded from voting for breach of statutory notification duties ranks among the most frequently raised grounds for challenge. Against this background, the judgment of the Regional Court of Cologne of 5 October 2007 in the STRABAG AG case deserves attention from all sides, as it offers several fundamental – and in part surprising – statements on the interpretation of §§ 21 et seq. WpHG and on the possibilities and procedural consequences of a confirming resolution under § 244 AktG. The article first presents the facts of the STRABAG case and the theses of the judgment (under II), which are then put to the test one by one (under III-VI). The case also provides occasion to examine what is to be made of the planned tightening of § 28 WpHG by the draft Risk Limitation Act (under VII).
On the initiative of Professor Paul Krüger Andersen, Denmark, and the author of this article, the first meeting of a commission that has set itself the goal of drafting a European Model Company Law Act (EMCLA) took place in Denmark on 27 and 28 September 2007. The project is described in what follows. It aims neither at a mandatory harmonization of national company laws nor at the creation of an additional European company form. The goal is to draft model rules for companies limited by shares, initially for the public limited company, which national legislatures could adopt in whole or in part. The project should thus be understood as an alternative or complement to the existing instruments of legal harmonization at Community level (II.). The American experience with such "model acts" in company law is then described (III.). Finally, the specific problems the EMCLA will face are outlined, and the composition and work plan of the commission are presented (IV.).
On 27 and 28 September 2007, a commission formed on the initiative of the authors held its first meeting in Aarhus, Denmark to deliberate on its goal of drafting a "European Model Company Law Act" (EMCLA). This project, outlined in the following pages, aims neither to force a mandatory harmonization of national company law nor to create a further, European corporate form. The goal is rather to draft model rules for a corporation that national legislatures would be free to adopt in whole or in part. Thus, the project is thought as an alternative and supplement to the existing EU instruments for the convergence of company law. The present EU instruments, their prerequisites and limits will be discussed in more detail in Part II, below. Part III will examine the US experience with such "model acts" in the area of company law. Part IV will then conclude by discussing several topics concerning the content of an EMCLA, introducing the members of the EMCLA Working Group, and explaining the Group's preliminary working plan.
This paper identifies some common errors that occur in comparative law, offers some guidelines to help avoid such errors, and provides a framework for entering into studies of the company laws of three major jurisdictions. The first section illustrates why a conscious approach to comparative company law is useful. Part I discusses some of the problems that can arise in comparative law and offers a few points of caution that can be useful for practical, theoretical and legislative comparative law. Part II discusses some relatively famous examples of comparative analysis gone astray in order to demonstrate the utility of heeding the outlined points of caution. The second section offers a framework for approaching comparative company law. Part III provides an example of using functional definition to demarcate the topic "company law", offering an "effects" test to determine whether a given provision of law should be considered as functionally part of the rules that govern the core characteristics of companies. It does this by presenting the relevant company law statutes and related topical laws of Germany, the United Kingdom and the United States, using Delaware as a proxy for the 50 states. On the basis of this definition, Part IV analyzes the system of legal functions that comprises "company law" in the United States and the European Union. It selects as the predominant factor for consideration the jurisdictions, sub-jurisdictions and rule-making entities that have legislative or rule-making competence in the relevant territorial unit, analyzes the extent of their power, presents the type of law (rules) they enact (issue), and discusses the concrete manner in which the laws and rules of the jurisdictions and sub-jurisdictions can legally interact. 
Part V looks at the way these jurisdictions do interact on the temporal axis of history, that is, their actual influence on each other, which in the relevant jurisdictions currently takes the form of regulatory competition and legislative harmonization. The method of the approach outlined in this paper borrows much from system theory. The analysis attempts to be detailed without losing track of the overall jurisdictional framework in the countries studied.
Securities lending transactions are today part of the standard repertoire in the execution of capital market transactions. This article examines what possibilities such transactions offer with respect to a company's own shares and what limits §§ 71 et seq. AktG place on their use for own shares.
On 27 and 28 September of last year, on the initiative of Prof. Paul Krüger Andersen, Denmark, and the author, the first meeting of the working group that has set itself the goal of developing a "European Model Company Law Act" (EMCLA) took place in Aarhus, Denmark. This project is presented in the following. It aims neither at a mandatory harmonization of national company laws nor at the creation of a further European company form. The goal is rather to draft model rules for a corporation, initially the stock corporation, which national legislatures can adopt in whole or in part. The project thus stands as an alternative and complement to the existing instruments of company law harmonization in the European Union, which are addressed first (II.). A further section points to the US experience with such uniform "model acts" in the field of company law (III.). The final part then addresses selected individual problems that arise in developing an EMCLA, introduces the working group, and explains its preliminary work plan (IV.).
With the "Act on the Limitation of Risks Associated with Financial Investments" (Risk Limitation Act), currently available as a government draft, the Federal Government plans to impede or prevent activities of financial investors that are undesirable from a macroeconomic perspective, while leaving financial and corporate transactions that enhance efficiency unaffected. To what extent the draft Risk Limitation Act will achieve this self-imposed goal is currently not foreseeable. What is foreseeable, however, is that the new takeover-law rule on so-called "acting in concert" contained in the draft conflicts with Community law. The article sets out this conflict and its causes. Part A first presents the new provision; Part B then examines its compatibility with the Takeover Directive (I.) and with the free movement of capital (II.). Part C summarizes the results.
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus' semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe's proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
We develop a multivariate generalization of the Markov–switching GARCH model introduced by Haas, Mittnik, and Paolella (2004b) and derive its fourth–moment structure. An application to international stock markets illustrates the relevance of accounting for volatility regimes from both a statistical and economic perspective, including out–of–sample portfolio selection and computation of Value–at–Risk.
An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out-of-sample Value-at-Risk measures.
This paper documents and studies sources of international differences in participation and holdings in stocks, private businesses, and homes among households aged 50+ in the US, England, and eleven continental European countries, using new internationally comparable, household-level data. With greater integration of asset and labor markets and policies, households of given characteristics should be holding more similar portfolios for old age. We decompose observed differences across the Atlantic, within the US, and within Europe into those arising from differences: a) in the distribution of characteristics and b) in the influence of given characteristics. We find that US households are generally more likely to own these assets than their European counterparts. However, European asset owners tend to hold smaller real, PPP-adjusted amounts in stocks and larger in private businesses and primary residence than US owners at comparable points in the distribution of holdings, even controlling for differences in configuration of characteristics. Differences in characteristics often play minimal or no role. Differences in market conditions are much more pronounced among European countries than among US regions, suggesting significant potential for further integration.
Marginal income taxes may have an insurance effect by decreasing the effective fluctuations of after-tax individual income. By compressing the idiosyncratic component of personal income fluctuations, higher marginal taxes should be negatively correlated with the dispersion of consumption across households, a necessary implication of an insurance effect of taxation. Our study empirically examines this negative correlation, exploiting the ample variation of state taxes across US states. We show that taxes are negatively correlated with the within-state dispersion of non-durable consumption and that this correlation is robust.
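The mechanical compression channel behind this prediction can be sketched with a stylized flat-tax example (an illustration of the variance identity Var((1-t)y) = (1-t)^2 Var(y), not the paper's empirical design, which relies on actual cross-state tax variation):

```python
import statistics

def after_tax_dispersion(incomes, marginal_rate):
    """Sample std. dev. of after-tax income under a flat marginal rate.
    With a linear tax, var((1 - t) * y) = (1 - t)**2 * var(y), so a
    higher marginal rate mechanically compresses income fluctuations,
    which is the insurance channel the paper tests via consumption data.
    """
    return statistics.stdev([(1 - marginal_rate) * y for y in incomes])
```

Under these assumptions, raising the marginal rate from 10% to 30% scales the dispersion of after-tax income by 0.7/0.9, before any behavioral response.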
Based on a unique dataset of legislative changes in industrial countries, we identify events that strengthen the competition control of mergers and acquisitions, analyze their impact on banks and non-financial firms and explain the different reactions observed with specific regulatory characteristics of the banking sector. Covering nineteen countries for the period 1987 to 2004, we find that more competition-oriented merger control increases the stock prices of banks and decreases the stock prices of non-financial firms. Bank targets become more profitable and larger, while those of non-financial firms remain mostly unaffected. A major determinant of the positive bank returns is the degree of opaqueness that characterizes the institutional setup for supervisory bank merger reviews. The legal design of the supervisory control of bank mergers may therefore have important implications for real activity.
Many older US households have done little or no planning for retirement, and there is a substantial population that seems to undersave for retirement. Of particular concern is the relative position of older women, who are more vulnerable to old-age poverty due to their greater longevity. This paper uses data from a special module we devised on planning and financial literacy in the 2004 Health and Retirement Study. It shows that women display much lower levels of financial literacy than the older population as a whole. In addition, women who are less financially literate are also less likely to plan for retirement and to be successful planners. These findings have important implications for policy and for programs aimed at fostering financial security at older ages.
Generally, information provision and certification have been identified as the major economic functions of rating agencies. This paper analyzes whether the "watchlist" (rating review) instrument has extended the agencies' role towards a monitoring position, as proposed by Boot, Milbourn, and Schmeits (2006). Using a data set of Moody's rating history between 1982 and 2004, we find that the overall information content of rating actions has indeed increased since the introduction of the watchlist procedure. Our findings suggest that rating reviews help to establish implicit monitoring contracts between agencies and borrowers and as such enable a finer partition of rating information, thereby contributing to a higher information quality.
The reaction of consumer spending and debt to tax rebates – evidence from consumer credit data
(2008)
We use a new panel dataset of credit card accounts to analyze how consumers responded to the 2001 Federal income tax rebates. We estimate the monthly response of credit card payments, spending, and debt, exploiting the unique, randomized timing of the rebate disbursement. We find that, on average, consumers initially saved some of the rebate, by increasing their credit card payments and thereby paying down debt. But soon afterwards their spending increased, counter to the canonical Permanent-Income model. Spending rose most for consumers who were initially most likely to be liquidity constrained, whereas debt declined most (so saving rose most) for unconstrained consumers. More generally, the results suggest that there can be important dynamics in consumers' response to "lumpy" increases in income like tax rebates, working in part through balance sheet (liquidity) mechanisms.