This paper investigates how an office-motivated incumbent can use enhanced transparency of public spending to signal his budgetary management ability and win re-election. We show that when the incumbent faces a popular challenger, transparency policy can be an effective signaling device. A more popular challenger can reduce the probability that transparency is enhanced, while voters can be better off because signaling becomes more informative. We also show that a higher level of public interest in fiscal issues can increase the probability that transparency is enhanced, while voters can be worse off because signaling becomes less informative.
This paper constructs a dynamic model of health insurance to evaluate the short- and long-run effects of policies that prevent firms from conditioning wages on the health conditions of their workers, and that prevent health insurance companies from charging individuals with adverse health conditions higher insurance premia. Our study is motivated by recent US legislation that has tightened regulations on wage discrimination against workers with poorer health status (the Americans with Disabilities Act, ADA, and the ADA Amendments Act of 2008, ADAAA) and that will prohibit health insurance companies from charging different premiums to workers of different health status starting in 2014 (the Patient Protection and Affordable Care Act, PPACA). In the model, a trade-off arises between the static gains from better insurance against poor health induced by these policies and their adverse dynamic incentive effects on household efforts to lead a healthy life. Using household panel data from the PSID, we estimate and calibrate the model and then use it to evaluate the static and dynamic consequences of no-wage-discrimination and no-prior-conditions laws for the evolution of the cross-sectional health and consumption distribution of a cohort of households, as well as for the ex-ante lifetime utility of a typical member of this cohort. In our quantitative analysis we find that although a combination of both policies is effective in providing full consumption insurance period by period, it is suboptimal to introduce both policies jointly, since doing so induces a more rapid deterioration of the cohort's health distribution over time. This is because the combination of both laws severely undermines the incentives to lead healthier lives.
The resulting negative effects on health outcomes in society more than offset the static gains from better consumption insurance, so that expected discounted lifetime utility is lower under both policies relative to implementing only wage nondiscrimination legislation.
This paper investigates the effect of anticipated and experienced regret and pride on individual investors' decisions to hold or sell a winning or losing investment, in the form of the disposition effect. As expected, the results suggest that in the loss domain low anticipated regret predicts a greater probability of selling a losing investment, while in the gain domain high anticipated pride predicts a greater probability of selling a winning investment. Corresponding effects of high experienced regret and pride on the probability of selling are found as well. An unexpected finding is that regret (pride) seems to be relevant not only for the loss (gain) domain but also for the gain (loss) domain. In addition, the paper presents evidence of interconnectedness between anticipated and experienced emotions. The authors discuss the implications of these findings and possible avenues for further research.
After nearly two decades of US leadership during the 1980s and 1990s, are Europe's venture capital (VC) markets in the 2000s finally catching up in the provision of financing and successful exits, or is the performance gap as wide as ever? Are we amid an overall VC performance slump with no encouraging news? We attempt to answer these questions by tracking over 40,000 VC-backed firms from six industries in 13 European countries and the US between 1985 and 2009, determining which type of exit, if any, each firm's investors chose for the venture.
Venture capital (VC) investment has long been conceptualized as a local business, in which a VC's ability to source, syndicate, fund, monitor, and add value to portfolio firms critically depends on access to knowledge obtained through ties to the local (i.e., geographically proximate) network. Consistent with the view that local networks matter, existing research confirms that local and geographically distant portfolio firms are sourced, syndicated, funded, and monitored differently. Curiously, emerging research on VC investment practice within the United States finds that distant investments, as measured by "exits" (either initial public offering or merger & acquisition), outperform local investments. These findings raise important questions about the assumed benefits of local network membership and proximity. To probe these questions more deeply, we contrast the deal structure of cross-border VC investment with domestic VC investment, and contrast the deal structure of cross-border VC investments that include a local partner with those that do not. Evidence from 139,892 rounds of venture capital financing in the period 1980-2009 suggests that cross-border investment practice, in terms of deal sourcing, syndication, and performance, indeed changes with proximity, but that monitoring practices do not. Further, we find that the inclusion of a local partner in the investment syndicate yields surprisingly few benefits. This evidence, we argue, raises important questions about VC investment practice as well as the ability of firms to capture and leverage the presumed benefits of network membership.
Conclusion
1. The product intervention rules of Artt. 31, 32 MiFIR-E could in future provide the Member States' authorities and, on a subsidiary basis, ESMA with a sharp sword for averting threats to investor protection, the financial markets and financial stability within the framework of securities supervision. Since the conditions for intervention are vaguely worded and have yet to be fleshed out by delegated acts of the Commission, the future significance of these supervisory powers cannot be conclusively assessed at this point. In particular, it is unclear what requirements a threat to investor protection must meet to justify intervention.
2. The management board of an investment services firm organised as a stock corporation will in future also have to ensure in its decisions that neither the financial instruments the company develops and distributes nor its financial activities or practices pose a threat to investor protection, to the integrity and functioning of the financial markets or to the stability of the financial system that could give rise to an intervention.
3. If the company is the addressee of a prohibition or restriction issued on the basis of Artt. 31, 32 MiFIR-E, the management board must base its decision on whether to pursue legal remedies on the general principles of stock corporation law, that is, on what best serves the company's interests.
4. Finally, the question of the company's liability towards investors will arise where financial instruments are distributed contrary to a prohibition issued in the interest of investor protection. Unless the legislator decides otherwise, it must be assumed that the contracts concluded are not void under § 134 BGB but at most voidable. In addition, investors may have contractual or tortious claims for damages.
In its Fresenius decision (Der Konzern 2012, 420), the German Federal Court of Justice (Bundesgerichtshof) held that the management board of a stock corporation acts in breach of duty if it pays a supervisory board member the agreed remuneration for consultancy services before the supervisory board has approved the contract. In this context, the court confirmed the previously prevailing doctrine according to which § 114 AktG also covers consultancy contracts between a supervisory board member and a company dependent on the stock corporation. Finally, the Bundesgerichtshof refined its case law under which § 114 AktG also applies where the consultancy services are rendered not by a supervisory board member personally but by a company in which the member holds a stake, provided the member participates in the remuneration to a more than insignificant extent. The present contribution takes a critical view of all of these aspects of the Fresenius judgment.
The index catalogues the picture plates from the following books: Schumm (2008) - Flechten Madeiras, der Kanaren und Azoren; Schumm & Aptroot (2010) - Seychelles Lichen Guide; Schumm (2011) - Kalkflechten der Schwäbischen Alb - ein mikroskopisch anatomischer Atlas; Aptroot & Schumm (2011) - Fruticose Roccellaceae - an anatomical-microscopical Atlas and Guide with a worldwide Key and further Notes on some crustose Roccellaceae or similar Lichens; and Schumm & Aptroot (2012) - A microscopical Atlas of some tropical Lichens from SE-Asia (Thailand, Cambodia, Philippines, Vietnam), Volume 1 and Volume 2.
A key prerequisite for decoding prevailing understandings of justice is an examination of the roles that the actors involved assume in a legal system, together with an analysis of the legal and institutional conditions under which these actors operate. The present contribution first addresses the distribution of power and tasks between judges and parties. It becomes apparent that the allocation of roles is not uniform but varies with different procedural and institutional conditions. In jury trials, judicial authority is strongly constrained by party autonomy at its maximum. By contrast, judges act as legal honoratiores (in Weber's sense) whenever they adjudicate without a jury, particularly in the state supreme courts and the federal courts of appeals, but also in first-instance proceedings in which claims in equity are to be decided. The contribution concludes by examining the influence that the peculiarities of American legal education exert on the American understanding of justice: they shape and reproduce the roles and self-images of American lawyers, both at the bar and on the bench.
The present contribution introduced the programme of the workshop "Schlichten und Richten – Differenzierung und Hybridisierung" ("Settling and Judging – Differentiation and Hybridisation", Frankfurt/Main, 9-10 February 2012). This workshop opened the work programme of the LOEWE research focus "Außergerichtliche und gerichtliche Konfliktlösung" ("Extrajudicial and Judicial Conflict Resolution"), which had commenced its activities on 1 January 2012 (see www.konfliktloesung.eu; a slightly revised English version of the contribution will shortly be available at http://www.ssrn.com/link/Max-Planck-Legal-History-RES.html). The workshop's point of departure is a German tradition of debate that discussed the alternatives of judicial and non-judicial, adversarial or consensual, and more formalised or more informalised forms of conflict resolution under the catchphrase "Schlichten oder Richten" ("settling or judging", also "Schlichten statt Richten", "settling instead of judging").
The contribution first problematises the lack of attention legal history has so far paid to alternatives to judicial conflict resolution. It points out that today's discussion of successful conflict management is also often shaped, explicitly or implicitly, by historical presuppositions that are at times insufficiently reflected upon, and, connected with this, by notions of legal-cultural strangeness and proximity.
The second and third sections briefly sketch the historical course of the German discussion of "Schlichten und Richten" since the emergence of conciliation institutions recognised by legal scholarship at the beginning of the 20th century. The contribution attempts to make visible the changing contemporary contexts of these discussions and shows how legal-policy promises (at times seemingly utopian) could take root in them, but also what fertile ground these discussions offered for new category formation and multidisciplinary approaches.
The fourth section attempts to establish links with the current ADR discussion, while the fifth section presents, for analytical purposes, configurations of the word pair "Schlichten" and "Richten": as alternatives, as a relationship of dependence, and as a sequence. A final section asks about the functional elements and operating conditions of settling and judging, that is: which guiding rationalities, participation mechanisms, legitimation narratives and forms of reflection can be assigned to one or the other form of conflict resolution?
All of these considerations are tentative and convey only first, rough outlines. They serve primarily as a stimulus for discussion and are intended to cut first paths into this complex field of research. The lecture format has been retained and the footnotes have been reduced to the necessary minimum.
Since its early post-war catch-up phase, Germany's formidable export engine has been a consistent driver of growth. But Germany has almost equally consistently run current account surpluses. Exports have powered the dynamic phases and helped the economy emerge from stagnation. Volatile external demand, in turn, has elevated German GDP growth volatility by advanced-country standards, while domestic consumption growth has remained surprisingly low. As a consequence, despite the size of its economy and important labor market reforms, Germany's ability to act as a global locomotive has been limited. With increasing competition in its traditional areas of manufacturing, a more domestically driven growth dynamic, especially in the production and delivery of services, will be good for Germany and for the global economy. Absent such an effort, German growth will remain constrained, and Germany will play only a modest role in spurring growth elsewhere.
In this paper we develop empirical measures for the strength of spillover effects. Modifying and extending the framework of Diebold and Yilmaz (2011), we quantify spillovers between sovereign credit markets and banks in the euro area. Spillovers are estimated recursively from a vector autoregressive model of daily CDS spread changes with exogenous common factors. We account for interdependencies between sovereign and bank CDS spreads and derive generalised impulse response functions. Specifically, we assess the systemic effect of an unexpected shock to the creditworthiness of a particular sovereign or country-specific bank index on other sovereign or bank CDSs between October 2009 and July 2012. Channels of transmission from or to sovereigns and banks are aggregated into a contagion index (CI). This index is disentangled into four components, the average potential spillover: i) among sovereigns, ii) among banks, iii) from sovereigns to banks, and iv) vice versa. We highlight the impact of policy-related events along the different components of the contagion index. The systemic contribution of each sovereign or banking group is quantified as its net spillover weight in the total net spillover measure. Finally, the captured time-varying interdependence between banks and sovereigns underlines the evolution of their strong nexus.
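The four-way decomposition described above can be illustrated with a toy example. The sketch below assumes a pairwise spillover matrix has already been estimated (the values are invented; in the paper they would come from a generalised forecast-error-variance decomposition of the VAR), and shows only the aggregation bookkeeping into the contagion-index components and per-entity net spillovers:

```python
# Toy aggregation of a pairwise spillover matrix into the four
# contagion-index components named in the abstract.
# spill[dst][src] = assumed spillover from entity src to entity dst.

kind = {"DE_sov": "sov", "IT_sov": "sov",
        "DE_banks": "bank", "IT_banks": "bank"}

spill = {
    "DE_sov":   {"IT_sov": 0.10, "DE_banks": 0.05, "IT_banks": 0.02},
    "IT_sov":   {"DE_sov": 0.08, "DE_banks": 0.03, "IT_banks": 0.12},
    "DE_banks": {"DE_sov": 0.15, "IT_sov": 0.04, "IT_banks": 0.06},
    "IT_banks": {"DE_sov": 0.05, "IT_sov": 0.20, "DE_banks": 0.07},
}

def component(src_kind, dst_kind):
    """Average potential spillover from entities of src_kind to dst_kind."""
    vals = [w for dst, row in spill.items() if kind[dst] == dst_kind
            for src, w in row.items() if kind[src] == src_kind]
    return sum(vals) / len(vals)

# The four components: sov->sov, bank->bank, sov->bank, bank->sov.
ci = {(s, d): component(s, d)
      for s in ("sov", "bank") for d in ("sov", "bank")}

def net(e):
    """Net spillover of entity e: transmitted minus received."""
    transmitted = sum(row.get(e, 0.0) for dst, row in spill.items() if dst != e)
    received = sum(spill[e].values())
    return transmitted - received

print(ci)
print({e: round(net(e), 2) for e in spill})
```

The real exercise differs in that the matrix is re-estimated recursively over time; the aggregation step, however, is exactly this kind of averaging over entity types.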
Law making is becoming an increasingly important function of the higher courts in civil law matters. This observation raises the question of whether the law making function is nevertheless carried out in a "classical" legal-principled way or whether the courts increasingly employ a political-formative style. To answer this question, one should focus not only on the content of the courts' reasoning but also on their procedural-institutional framework. From that perspective, the processing of so-called legislative facts is a key issue in determining the role of courts between legal reasoning and social engineering. The paper shows that Germany, England and the United States pursue different lines in processing legislative facts. Notwithstanding these differences, the increasing importance of law making seems likely to change the institutional framework of appellate courts towards a quasi-legislative forum.
We use a novel disaggregate sectoral euro area data set with a regional breakdown to investigate price changes, and we suggest a new method to extract factors from overlapping data blocks. This allows us to estimate separately the aggregate, sectoral, country-specific and regional components of price changes. We thereby provide an improved estimate of the sectoral factor compared with the previous literature, which decomposes price changes into an aggregate and an idiosyncratic component only and interprets the latter as sectoral. We find that the sectoral component explains much less of the variation in sectoral regional inflation rates, and exhibits much less volatility, than previous findings for the US indicate. We further contribute to the literature on price setting by providing evidence that country- and region-specific factors play an important role in addition to sector-specific factors, emphasising the heterogeneity of inflation dynamics along different dimensions. We also conclude that sectoral price changes have a "geographical" dimension, which leads to new insights into the properties of sectoral price changes.
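The idea of splitting a price change into aggregate, sectoral and country-specific parts can be conveyed with a simple two-way mean decomposition. This is only a stand-in for the paper's factor extraction from overlapping data blocks, and the inflation rates below are invented for illustration:

```python
# Additive decomposition of sector-by-country inflation rates into an
# aggregate component, sector effects, country effects and a residual.
# A simple mean decomposition, not the paper's factor model.

rates = {                       # hypothetical inflation rates (percent)
    ("food", "DE"): 2.0, ("food", "FR"): 2.4,
    ("energy", "DE"): 4.0, ("energy", "FR"): 4.8,
}

sectors = sorted({s for s, _ in rates})
countries = sorted({c for _, c in rates})

aggregate = sum(rates.values()) / len(rates)

sector_eff = {s: sum(rates[(s, c)] for c in countries) / len(countries)
                 - aggregate for s in sectors}
country_eff = {c: sum(rates[(s, c)] for s in sectors) / len(sectors)
                  - aggregate for c in countries}

# What is left after removing all common components.
residual = {k: rates[k] - aggregate - sector_eff[k[0]] - country_eff[k[1]]
            for k in rates}
```

By construction the sector and country effects average to zero, so each observed rate splits cleanly into the four parts, mirroring the aggregate/sectoral/country decomposition the abstract describes.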
In the aftermath of the global financial crisis and great recession, many countries face substantial deficits and growing debts. In the United States, federal government outlays as a ratio to GDP rose substantially from about 19.5 percent before the crisis to over 24 percent after the crisis. In this paper we consider a fiscal consolidation strategy that brings the budget to balance by gradually reducing this spending ratio over time to the level that prevailed prior to the crisis. A crucial issue is the impact of such a consolidation strategy on the economy. We use structural macroeconomic models to estimate this impact focussing primarily on a dynamic stochastic general equilibrium model with price and wage rigidities and adjustment costs. We separate out the impact of reductions in government purchases and transfers, and we allow for a reduction in both distortionary taxes and government debt relative to the baseline of no consolidation. According to the model simulations GDP rises in the short run upon announcement and implementation of this fiscal consolidation strategy and remains higher than the baseline in the long run. We explore the role of the mix of expenditure cuts and tax reductions as well as gradualism in achieving this policy outcome. Finally, we conduct sensitivity studies regarding the type of model used and its parameterization.
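The consolidation path described, gradually returning the spending ratio from about 24 percent of GDP to its pre-crisis level of about 19.5 percent, can be sketched as a linear phase-in. The ten-year horizon below is an assumption for illustration, not a figure from the paper:

```python
def spending_path(start=24.0, target=19.5, years=10):
    """Linear reduction of the spending-to-GDP ratio (percent).

    start and target are the ratios quoted in the abstract; the
    phase-in horizon `years` is an assumed value for illustration.
    """
    step = (start - target) / years
    return [round(start - step * t, 2) for t in range(years + 1)]

path = spending_path()
print(path[0], path[-1])   # 24.0 19.5
```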
We examine both the degree and the structural stability of inflation persistence at different quantiles of the conditional inflation distribution. Previous research focused exclusively on persistence at the conditional mean of the inflation rate. Economic theory, however, provides various reasons (for example, downward wage rigidities or menu costs) to expect higher inflation persistence at the upper than at the lower tail of the conditional inflation distribution.
Based on post-war US data we indeed find slower mean reversion in response to positive than to negative shocks. We find robust evidence for a structural break in persistence at all quantiles of the inflation process in the early 1980s. Inflation persistence has decreased and become more homogeneous across quantiles. Persistence at the conditional mean became more informative about the degree of persistence across the entire conditional inflation distribution. While prior to the 1980s inflation was not mean reverting in response to large positive shocks, our evidence strongly suggests that since the end of the Volcker disinflation the unit root can be rejected at every quantile including the upper tail of the conditional inflation distribution.
This paper investigates the accuracy of point and density forecasts of four DSGE models for inflation, output growth and the federal funds rate. Model parameters are estimated and forecasts are derived successively from historical U.S. data vintages synchronized with the Fed's Greenbook projections. Point forecasts of some models are similar in accuracy to the forecasts of nonstructural large-dataset methods. Despite their common underlying New Keynesian modeling philosophy, the forecasts of different DSGE models turn out to be quite distinct. Weighted forecasts are more precise than forecasts from individual models. The accuracy of a simple average of DSGE model forecasts is comparable to Greenbook projections at medium-term horizons. Comparing density forecasts of DSGE models with the actual distribution of observations shows that the models overestimate the uncertainty around their point forecasts.
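The finding that a simple average of model forecasts can outperform individual models is easy to illustrate. The numbers below are invented; the point is that when two models' errors partly offset, the pooled forecast's RMSE can fall well below either individual RMSE:

```python
import math

def rmse(forecast, actual):
    """Root mean squared forecast error."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual))
                     / len(actual))

actual  = [2.0, 2.5, 3.0, 2.0]   # e.g. realised inflation (hypothetical)
model_a = [2.4, 2.1, 3.3, 1.8]   # hypothetical model 1 forecasts
model_b = [1.6, 2.9, 2.6, 2.3]   # hypothetical model 2 forecasts
pooled  = [(a + b) / 2 for a, b in zip(model_a, model_b)]

# The two models err in opposite directions, so the equal-weight
# average is far more accurate than either model alone.
print(rmse(model_a, actual), rmse(model_b, actual), rmse(pooled, actual))
```

This offsetting of model-specific errors is the same mechanism that makes the equal-weight DSGE average competitive with individual models in the paper.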
The withdrawal of foreign capital from emerging countries at the height of the recent financial crisis and its quick return sparked a debate about the impact of capital flow surges on asset markets. This paper addresses the response of property prices to an inflow of foreign capital. For that purpose we estimate a panel VAR on a set of Asian emerging market economies, for which the waves of inflows were particularly pronounced, and identify capital inflow shocks based on sign restrictions. Our results suggest that capital inflow shocks have a significant effect on the appreciation of house prices and equity prices. Capital inflow shocks account for roughly twice the portion of overall house price changes that they explain in OECD countries. We also address cross-country differences in the house price responses to shocks, which are most likely due to differences in the monetary policy response to capital inflows.
The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as laboratory this paper explores the design of robust policy guides aiming to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model data base to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
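The "simple difference rule" mentioned above can be written down compactly. In this literature such a rule adjusts the policy rate by the inflation gap and the output-growth gap; the 0.5 coefficients below are the values commonly used in this line of work and are an assumption here, not taken from the paper:

```python
def difference_rule(i_prev, inflation, infl_target,
                    output_growth, growth_target,
                    c_pi=0.5, c_dy=0.5):
    """First-difference interest-rate rule: change the policy rate by
    a weighted inflation gap plus a weighted output-growth gap.
    The 0.5 coefficients are assumed for illustration."""
    return (i_prev + c_pi * (inflation - infl_target)
                   + c_dy * (output_growth - growth_target))

# Example: inflation 0.5pp above target, growth at trend
# -> the rate rises by 25 basis points.
new_rate = difference_rule(1.0, 2.5, 2.0, 1.5, 1.5)
print(new_rate)   # 1.25
```

Because the rule responds to observed inflation and output growth rather than to an estimated output gap level, it sidesteps much of the gap mismeasurement the abstract documents, which is one reason such rules fare well in model-averaging exercises.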
Motivated by the U.S. events of the 2000s, we address whether a too low for too long interest rate policy may generate a boom-bust cycle. We simulate anticipated and unanticipated monetary policies in state-of-the-art DSGE models and in a model with bond financing via a shadow banking system, in which the bond spread is calibrated for normal and optimistic times. Our results suggest that the U.S. boom-bust was caused by the combination of (i) too low for too long interest rates, (ii) excessive optimism and (iii) a failure of agents to anticipate the extent of the abnormally favorable conditions.
In this paper, I introduce lumpy micro-level capital adjustment into a sticky information general equilibrium model. Lumpy adjustment arises because of inattentiveness in capital investment decisions instead of the more common assumption of non-convex adjustment costs. The model features inattentiveness as the only source of stickiness. I find that the model with lumpy investment yields business cycle dynamics which differ substantially from those of an otherwise identical model with frictionless investment and are much more consistent with the empirical evidence. These results therefore strengthen the case in favour of the relevance of microeconomic investment lumpiness for the business cycle.
This paper outlines relatively easy-to-implement reforms for the supervision of transnational banking groups in the E.U. that are based not primarily on legal form but on the actual risk structures of the pertinent financial institutions. The proposal also pays close attention to the economics of public administration and international relations in allocating competences among national and supranational supervisory bodies. Before detailing its own proposal, the paper looks into the relationship between the sovereign debt and banking crises that drive regulatory reactions to the financial turmoil in the euro area. These initiatives inter alia affirm effective prudential supervision as a pivotal element of crisis prevention. In order to arrive at a more informed idea of which determinants, apart from a perceived appetite for regulatory arbitrage, drive banks' organizational choices, the paper scrutinizes the merits of either a branch or a subsidiary structure for the cross-border business of financial institutions. In doing so, it also considers the policy-makers' perspective. The analysis shows that no one-size-fits-all organizational structure is available and concludes that banks' choices should generally not be second-guessed, particularly because they are subject to (some) market discipline. The analysis proceeds by describing and evaluating how competences in prudential supervision are currently allocated among national and supranational supervisory authorities. To assess the findings, the appraisal adopts insights from the economics of public administration and international relations. It argues that the supervisory architecture has to be better aligned with bureaucrats' incentives and that inefficient requirements to cooperate and share information should be reduced. Contrary to a widespread perception, shifting responsibility to a supranational authority cannot solve all the problems identified.
Resting on these foundations, the last part of the paper sketches an alternative solution that builds on far-reaching mutual recognition of national supervisory regimes and allocates competences in line with supervisors' incentives and the risks inherent in cross-border banking groups.
Using the mathematical machinery for filtering noise processes, which played a major role in the early days of operational numerical weather prediction, this study examines, on the linearised equations of a barotropic shallow-water model on the β-plane, how the Rossby-type wave mode develops when the simultaneous inertia-gravity wave modes are suppressed by varying filtering equations.
First, some basic analytical relations of a noise-free adjustment of initial fields are set out. The investigation then turns to noise-filtering prognostic model equations (Sections 5, 6, 7). From this analysis, as a function of the filtering level, one finds, in comparison with the conventional synoptic-scale Rossby formula (which corresponds to filtering level zero), partly extended wave speeds of the Rossby-type physics. Unexpected new effects even emerge, which are likely to matter primarily in the long-wave part of the spectrum.
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
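To make the mechanism concrete, here is a hypothetical Python sketch of optimistic software transactional memory with versioned variables. Unlike the paper's Concurrent Haskell implementation, it validates only at commit time rather than aborting conflicting transactions early, and it is illustration code, not the authors' algorithm:

```python
import threading

class TVar:
    """A transactional variable with a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

class Transaction:
    """Records reads (with the version seen) and buffered writes."""
    def __init__(self):
        self.reads = {}    # TVar -> version observed at first read
        self.writes = {}   # TVar -> pending value

    def read(self, tvar):
        if tvar in self.writes:          # read-your-own-writes
            return self.writes[tvar]
        with tvar.lock:
            self.reads.setdefault(tvar, tvar.version)
            return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

def atomically(action):
    """Run action(tx) optimistically; commit only if no TVar read by
    the transaction was modified concurrently, otherwise retry."""
    while True:
        tx = Transaction()
        result = action(tx)
        touched = sorted(set(tx.reads) | set(tx.writes), key=id)
        for tv in touched:               # fixed order avoids deadlock
            tv.lock.acquire()
        try:
            if all(tv.version == v for tv, v in tx.reads.items()):
                for tv, val in tx.writes.items():
                    tv.value = val
                    tv.version += 1
                return result
        finally:
            for tv in touched:
                tv.lock.release()
        # validation failed: loop and re-run the transaction

# Usage: an atomic transfer between two transactional variables.
acct_a, acct_b = TVar(10), TVar(0)

def transfer(tx):
    tx.write(acct_a, tx.read(acct_a) - 3)
    tx.write(acct_b, tx.read(acct_b) + 3)

atomically(transfer)
print(acct_a.value, acct_b.value)   # 7 3
```

The correctness question the abstract addresses is exactly about such an implementation: showing that running transactions optimistically and retrying on conflict is observationally equivalent to the atomic big-step specification.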
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way to more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting the infinite trees that emerge by unfolding the letrec expressions. The second translation encodes letrec expressions using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions in both calculi. A further result of this paper is an isomorphism between the calculi mentioned, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
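The flavour of encoding letrec via a multi-fixpoint combinator can be glimpsed in any language with first-class functions. The Python sketch below is hypothetical and not the paper's translation (which works inside an untyped call-by-name lambda calculus); it merely shows a two-place fixpoint tying the knot for two mutually recursive definitions:

```python
def fix2(f, g):
    """Minimal two-place fixpoint: returns functions (e, o) with
    e = f(e, o) and o = g(e, o). The eta-expansion delays the
    recursive calls, as needed under strict evaluation."""
    e = lambda *args: f(e, o)(*args)
    o = lambda *args: g(e, o)(*args)
    return e, o

# letrec even = \n -> n == 0 or  odd (n - 1)
#        odd  = \n -> n != 0 and even (n - 1)
even, odd = fix2(lambda e, o: lambda n: n == 0 or o(n - 1),
                 lambda e, o: lambda n: n != 0 and e(n - 1))
```

In the paper the analogous combinators are pure lambda terms, and correctness is argued by comparing reduction sequences of the two calculi rather than by running examples.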
Power and law in enlightened absolutism : Carl Gottlieb Svarez' theoretical and practical approach
(2012)
The term Enlightened Absolutism reflects a certain tension between its two components, a tension that in a way continues the dichotomy between power on the one hand and law on the other. The present paper provides an analysis of these two concepts from the perspective of Carl Gottlieb Svarez, who, in his position as a high-ranking Prussian civil servant and legal reformer, had unparalleled influence on the legislative history of the
Prussian states towards the end of the 18th century. Working side-by-side with Johann Heinrich Casimir von Carmer, who held the post of Prussian minister of justice from 1779 to 1798, Svarez was able to make use of his talent for reforming and legislating. From 1780 to 1794 he was primarily responsible for the elaboration of the codification of the Prussian private law – the “Allgemeines Landrecht für die Preußischen Staaten” in 1794. In the present paper, Svarez’ approach to the relation between law and power shall be analysed on two different levels. Firstly, on a theoretical level, the reformist’s thoughts and reflections as laid down in his numerous works, papers and memorandums, shall be discussed. Secondly, on a practical level, the question of the extent to which he implemented his ideas in Prussian legal reality shall be explored.
With European legal history, the discipline of legal history has for many decades possessed a tradition of transnational legal-historical research. It was shaped by German-speaking scholars of the pre- and post-war eras (Emil Seckel, Paul Koschaker, Franz Wieacker, Helmut Coing) and stood in the context of the Western European post-war project. We still build on their great achievements today. Like all historiography, it was part of a process of societal self-understanding about identity, and it drew the picture of a distinct European legal culture.
In recent years, in the course of the debate about postcolonial perspectives on history and about transnational and global history, many foundations of traditional European historiography have been criticised and severely shaken. This also raises questions for European legal history: What image of Europe underlies it? On which intellectual and conceptual foundations does it rest? How does it answer the charges of Eurocentrism and epistemic colonialism, and the demand to ‘provincialise’ Europe? How does it define the relationship of European to transnational and global legal history? The following reflections address these and similar questions. The focus lies on an engagement with the tradition, its conceptual foundations and their context in the history of scholarship (Part 1, sections 1-6). From this critical stocktaking and from the results of the debate on global history emerge the starting points and tasks of a legal history of Europe in global-historical perspective, one that builds in many respects on the achievements of the discipline but necessarily rests on a different conception (Part 2, sections 7-11).
In the aftermath of the global financial crisis and great recession, many countries face substantial deficits and growing debts. In the United States, federal government outlays as a ratio to GDP rose substantially from about 19.5 percent before the crisis to over 24 percent after the crisis. In this paper we consider a fiscal consolidation strategy that brings the budget to balance by gradually reducing this spending ratio over time to the level that prevailed prior to the crisis. A crucial issue is the impact of such a consolidation strategy on the economy. We use structural macroeconomic models to estimate this impact focussing primarily on a dynamic stochastic general equilibrium model with price and wage rigidities and adjustment costs. We separate out the impact of reductions in government purchases and transfers, and we allow for a reduction in both distortionary taxes and government debt relative to the baseline of no consolidation. According to the model simulations GDP rises in the short run upon announcement and implementation of this fiscal consolidation strategy and remains higher than the baseline in the long run. We explore the role of the mix of expenditure cuts and tax reductions as well as gradualism in achieving this policy outcome. Finally, we conduct sensitivity studies regarding the type of model used and its parameterization.
The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as laboratory this paper explores the design of robust policy guides aiming to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model data base to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
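A first-difference rule of the kind described, responding to current inflation and output growth rather than to an output-gap level, can be sketched as follows (the 0.5/0.5 coefficients and the target values are illustrative assumptions, not the paper's estimates):

```python
# Hedged sketch of a first-difference interest-rate rule: the policy
# rate is adjusted in response to current inflation and output growth,
# avoiding any reliance on an (unobservable and mismeasured) output-gap
# level. All coefficients and targets are illustrative assumptions.

def difference_rule(i_prev, inflation, output_growth,
                    pi_target=2.0, growth_target=2.0,
                    a_pi=0.5, a_growth=0.5):
    """One period's policy-rate update under the difference rule."""
    return (i_prev
            + a_pi * (inflation - pi_target)
            + a_growth * (output_growth - growth_target))

# Three hypothetical quarters of (inflation, output growth) outcomes.
rate = 2.0
for pi, growth in [(2.5, 3.0), (3.0, 1.0), (1.5, 0.5)]:
    rate = difference_rule(rate, pi, growth)

print(round(rate, 2))  # 1.75
```

Because only the change in the rate depends on observed outcomes, the rule needs no estimate of the output-gap level, which is one reason such rules tend to be robust to mismeasurement of the state of the economy.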
We argue that the U.S. personal saving rate’s long stability (1960s–1980s), subsequent steady decline (1980s–2007), and recent substantial rise (2008–2011) can be interpreted using a parsimonious ‘buffer stock’ model of consumption in the presence of labor income uncertainty and credit constraints. Saving in the model is affected by the gap between ‘target’ and actual wealth, with the target determined by credit conditions and uncertainty. An estimated structural version of the model suggests that increased credit availability accounts for most of the long-term saving decline, while fluctuations in wealth and uncertainty capture the bulk of the business-cycle variation.
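The saving mechanism described, saving driven by the gap between target and actual wealth, with the target set by credit conditions and uncertainty, can be sketched in a few lines (the functional forms and all parameter values here are illustrative assumptions, not the paper's estimates):

```python
# Hedged sketch of the buffer-stock saving mechanism: the target
# wealth-to-income ratio rises with income uncertainty and falls with
# credit availability, and households save a fraction of the gap
# between target and actual wealth. Forms and numbers are illustrative.

def target_wealth(credit_limit, income_uncertainty, kappa=3.0):
    """Target wealth-to-income ratio."""
    return kappa * income_uncertainty - credit_limit

def saving_rate(wealth, credit_limit, income_uncertainty, speed=0.2):
    """Fraction of income saved, proportional to the wealth gap."""
    gap = target_wealth(credit_limit, income_uncertainty) - wealth
    return speed * gap

# Looser credit (a larger credit limit) lowers the target wealth ratio,
# turning saving into dissaving in this example.
print(round(saving_rate(1.0, 0.2, 0.5), 3))  # 0.06
print(round(saving_rate(1.0, 0.8, 0.5), 3))  # -0.06
```

The comparison at the bottom mirrors the paper's narrative: greater credit availability lowers target wealth and hence the saving rate, while higher uncertainty raises both.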
This paper investigates whether preference interactions can explain why risk preferences change over time and across contexts. We conduct an experiment in which subjects accept or reject gambles involving real money gains and losses. We introduce within-subject variation by alternating subjectively liked music and disliked music in the background. We find that favourite music increases risk-taking, and disliked music suppresses risk-taking, compared to a baseline of no music. Several theories in psychology propose mechanisms by which mood affects risk-taking, but none of them fully explains our results. The results are, however, consistent with preference complementarities that extend to risk preference.
This paper examines data on financial sophistication among the U.S. older population, using a special-purpose module implemented in the Health and Retirement Study. We show that financial sophistication is deficient for older respondents (aged 55+). Specifically, many in this group lack a basic grasp of asset pricing, risk diversification, portfolio choice, and investment fees. Subpopulations with particular deficits include women, the least educated, persons over the age of 75, and non-Whites. In view of the fact that people are increasingly being asked to take on responsibility for their own retirement security, such lack of knowledge can have serious implications.
This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies the martingale method. More precisely, we construct the non-separable value function by formalizing the optimal constrained terminal wealth to be a (conjectured) contingent claim on the optimal non-constrained terminal wealth. This is relevant by itself, but also opens up the opportunity to derive new solutions to constrained problems. As a second contribution, we thus derive new results for non-strict constraints on the shortfall of intermediate wealth and/or consumption.
I characterize optimal monetary and fiscal policy in a stochastic New Keynesian model when nominal interest rates may occasionally hit the zero lower bound. The benevolent policymaker controls the short-term nominal interest rate and the level of government spending. Under discretionary policy, accounting for fiscal stabilization policy eliminates to a large extent the welfare losses associated with the presence of the zero bound. Under commitment, the gains associated with the use of the fiscal policy tool remain modest, even though fiscal stabilization policy is part of the optimal policy mix.
This chapter aims to provide a hands-on approach to New Keynesian models and their uses for macroeconomic policy analysis. It starts by reviewing the origins of the New Keynesian approach, the key model ingredients and representative models. Building blocks of current-generation dynamic stochastic general equilibrium (DSGE) models are discussed in detail. These models address the famous Lucas critique by deriving behavioral equations systematically from the optimizing and forward-looking decision-making of households and firms subject to well-defined constraints. State-of-the-art methods for solving and estimating such models are reviewed and presented in examples. The chapter goes beyond the mere presentation of the most popular benchmark model by providing a framework for model comparison along with a database that includes a wide variety of macroeconomic models. Thus, it offers a convenient approach for comparing new models to available benchmarks and for investigating whether particular policy recommendations are robust to model uncertainty. Such robustness analysis is illustrated by evaluating the performance of simple monetary policy rules across a range of recently-estimated models including some with financial market imperfections and by reviewing recent comparative findings regarding the magnitude of government spending multipliers. The chapter concludes with a discussion of important objectives for on-going and future research using the New Keynesian framework.
Until now, mainly broad-based instruments have been available for informing homeowners about, and raising their awareness of, energy and CO2 savings in the building sector. Dialogue-based communication formats, which are used in other areas of sustainability marketing, are as yet not widespread here.
This working paper examines the building blocks of an integrated communication strategy for energy-efficient building refurbishment. As the authors understand it, such a strategy comprises monological and dialogical marketing, energy consulting, and brand building. Drawing on conceptual considerations and empirical findings, the first part explains the basic goals and elements of a dialogue-based communication strategy for energy-efficient refurbishment. In the second part, concrete examples illustrate how, among other things, dialogue-based communication can be designed in practice for different refurbishment occasions.
The OPTUM project investigated which environmental benefits electric vehicles could deliver in the future. To this end, an integrative approach was pursued that considers not only the vehicles themselves but also their interactions with the electricity market. In detail, analyses were carried out on the following central aspects: acceptance and attractiveness of electric vehicles, market potentials for electric vehicles, interaction of electric vehicles with the electricity sector, CO2 reduction potentials of electromobility, economic assessment of the storage media, and resource efficiency of the electromobility system. This study presents the research findings on the attractiveness and acceptance of electric cars. It draws on results from two empirical studies conducted within OPTUM: a qualitative study using focus groups and a standardised survey of new-car buyers. The standardised survey was coupled with a conjoint analysis of vehicle choice in which respondents had to decide between vehicles with a combustion engine, a plug-in hybrid drive, and a fully electric drive. The empirical analyses make clear that there is considerable acceptance potential for both electric vehicle concepts, plug-in hybrids and fully electric vehicles. Specifically for fully electric vehicles, depending on the scenario and vehicle class, there is an acceptance potential of 12 to 25 percent. Furthermore, both empirical surveys provide indications of how this acceptance potential can be tapped or even enlarged.
A. Introduction
B. The Legal Bases of Compliance in the Stock Corporation and Corporate Group
I. Codification of compliance in banking and insurance supervisory law
II. German Corporate Governance Code
III. Component of the early risk detection system under stock corporation law
IV. Organisational duties of the management board in the internal relationship
1. Duty of legality
2. Duty to ensure legality
a. Residual duties in the case of vertical delegation
b. Duty to avert damage
3. Interim conclusion
V. Organisational duties of the company in the external relationship
1. Supervisory measures under § 130 OWiG
a. Supervisory measures
b. Limited scope
c. Application within the corporate group
2. Liability for vicarious agents under § 831 BGB
a. Scope of duties
aa. Supervision of suitability
bb. Instruction and guidance
b. Limited scope
aa. Decentralised exculpatory evidence
bb. Pointillist concept
c. Application within the corporate group
3. Operational organisational duties under § 823 BGB
a. Content
b. Creation through delegation of duties of care
c. Creation through division of labour
d. Application within the corporate group
aa. Delegation of duties of care
bb. Outsourcing of a dangerous activity
cc. Duty of care arising from division of labour
C. Conclusion
What is 'neoliberalism' and what is its current state? After all the obituaries and revivals, what role does the concept play in politics and the social sciences today? The aim of this contribution is to survey the confusing field of research directions concerned with neoliberalism and to present the most important debates and their developments, in order to facilitate orientation. Starting from a brief overview of current positions on neoliberalism in political discourse, the two most important theoretical perspectives from which neoliberalism is studied, hegemony theory and governmentality studies, are presented, before various key arenas of neoliberalism are surveyed. The critical interest of researchers, most of whom work from one of these two perspectives, is directed among other things at the role of the nation state, the restructuring of urban spaces, neoliberalism's effects on gender relations, and the way life under neoliberalism transforms subjects' relations to themselves. The article closes with reflections on the theoretical price to be paid for the enormously broad concept of neoliberalism, a price that consists not least in an apparent lack of alternatives which, ironically, emerges from the countless critically intended invocations of neoliberalism.
The new German Bond Act (Schuldverschreibungsgesetz, SchVG) entered into force on 5 August 2009. Where the bond terms so provide, it permits far-reaching restructurings of a bond by majority resolution of the bondholders' meeting, e.g. changes to maturity or to the interest rate, substitution of the debtor, debt-equity swaps and more (so-called collective action clauses, CACs). Bonds issued before the SchVG entered into force can likewise be made subject to the new SchVG by majority resolution. This is expressly clarified for the (few) issues to which the old SchVG of 1899 already applied. The following shows that, under the pertinent but rather infelicitously worded transitional provision of § 24 SchVG 2009, this also holds for the far more numerous cases in which German substantive law, in particular §§ 793 et seq. BGB, but not the old SchVG of 1899, applies to the old bond. This question is of the greatest importance both for old bonds of private issuers and for outstanding bonds of foreign sovereigns.
Debt-induced crises, including the subprime crisis, are usually attributed exclusively to supply-side factors. We examine the role of social influences on debt culture, emanating from perceived average income of peers. Utilizing unique information from a household survey representative of the Dutch population, which circumvents the issue of defining the social circle, we consider collateralized, consumer, and informal loans. We find robust social effects on borrowing, especially among those who consider themselves poorer than their peers, and on indebtedness, suggesting a link to financial distress. We employ a number of approaches to rule out spurious associations and to handle correlated effects.
Trading under limited pre-trade transparency is becoming increasingly popular on financial markets. We provide first evidence on traders’ use of (completely) hidden orders, which may be placed even inside the (displayed) bid-ask spread. Employing TotalView-ITCH data on order messages at NASDAQ, we propose a simple method to conduct statistical inference on the location of hidden depth and to test economic hypotheses. Analyzing a wide cross-section of stocks, we show that market conditions reflected by the (visible) bid-ask spread, (visible) depth, recent price movements and trading signals significantly affect the aggressiveness of ’dark’ liquidity supply and thus the ’hidden spread’. Our evidence suggests that traders balance hidden order placements to (i) compete for the provision of (hidden) liquidity and (ii) protect themselves against adverse selection, front-running as well as ’hidden order detection strategies’ used by high-frequency traders. Accordingly, our results show that hidden liquidity locations are predictable given the observable state of the market.
In the aftermath of the global financial crisis, the state of macroeconomic modeling and the use of macroeconomic models in policy analysis has come under heavy criticism. Macroeconomists in academia and policy institutions have been blamed for relying too much on a particular class of macroeconomic models. This paper proposes a comparative approach to macroeconomic policy analysis that is open to competing modeling paradigms. Macroeconomic model comparison projects have helped produce some very influential insights such as the Taylor rule. However, they have been infrequent and costly, because they require the input of many teams of researchers and multiple meetings to obtain a limited set of comparative findings. This paper provides a new approach that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. Using this approach a model archive is built that includes many well-known empirically estimated models that may be used for quantitative analysis of monetary and fiscal stabilization policies. A computational platform is created that allows straightforward comparisons of models’ implications. Its application is illustrated by comparing different monetary and fiscal policies across selected models. Researchers can easily include new models in the data base and compare the effects of novel extensions to established benchmarks thereby fostering a comparative instead of insular approach to model development.
How do changes in market structure affect the US business cycle? We estimate a monetary DSGE model with endogenous firm/product entry and a translog expenditure function by Bayesian methods. The dynamics of net business formation allow us to identify the 'competition effect', by which desired price markups and inflation decrease when entry rises. We find that a 1 percent increase in the number of competitors lowers desired markups by 0.18 percent. Most of the cyclical variability in inflation is driven by markup fluctuations due to sticky prices or exogenous shocks rather than endogenous changes in desired markups.
This paper characterises optimal monetary policy in an economy with endogenous firm entry, a cash-in-advance constraint and preset wages. Firms must make profits to cover entry costs; thus the markup on goods prices is efficient. However, because leisure is not priced at a markup, the consumption-leisure tradeoff is distorted. Consequently, the real wage, hours and production are suboptimally low. Due to the labour requirement in entry, insufficient labour supply also implies that entry is too low. The paper shows that in the absence of fiscal instruments such as labour income subsidies, the optimal monetary policy under sticky wages achieves higher welfare than under flexible wages. The policy maker uses the money supply instrument to raise the real wage, the cost of leisure, above its flexible-wage level in response to expansionary shocks to productivity and entry costs. This raises labour supply, expanding production and firm entry.