Valence is a time bomb deposited in the lexicon that detonates in the grammar. The present contribution lays the foundations of a new valence theory whose task is to construct this bomb so sensitively that it can no longer be defused. Let me stress at the outset that valence theory (precisely and only in the sense of the above metaphor) constitutes a grammatical subtheory that is not tied to any particular model of grammar. Although valence theory emerged in close connection with dependency grammar, valence theory and dependency grammar have clearly distinct objects of study. I return to the determination of these objects at the end of my discussion (cf. 5). The following working definitions are assumed: (I) Valence potential (short: valence) is the potential of relational lexeme words ('lexeme word' in Coseriu's sense) to predetermine the grammatical structure to be realized (cf. also Welke 1993; on relationality cf. Lehmann 1992:437f.). From this working definition it follows (a) that valence is responsible for part of the grammatical realization, but also (b) that valence is by no means responsible for everything in the grammatical realization. A whole series of morphological, syntactic, semantic, and conceptual processes, such as derivation (verbal prefixation), conjugation type, syntactic conversion, serialization, gradations of transitivity, determination, focusing, etc., interacts with valence as soon as valence has to help generate a grammatical structure (cf. also 3.6).
On the occasion of the 50th anniversary of the start of television broadcasting in West and East Germany, a symposium entitled "Fernsehgeschichte als Zeitgeschichte - Zeitgeschichte als Fernsehgeschichte" ("Television History as Contemporary History - Contemporary History as Television History") took place at the Hans-Bredow-Institut in Hamburg on 5 and 6 December 2002. In a critical contribution, Peter Zimmermann examined above all the "constructions of the enemy" in Western television, which in the speaker's view can be traced even beyond the fall of the Berlin Wall. Zimmermann: "In the joy-drunk month of November 1989, the German winter's tale finally seemed to come to a happy end with the fall of the Wall. Yet the reunification of the 'German brothers and sisters' did not proceed entirely untroubled in media terms either. With the so-called winding-up of GDR television and of DEFA, the television of the Federal Republic also took over East German 'sovereignty over the images'. The positive self-images that had hitherto dominated film and television in the GDR were henceforth replaced by the negative images of the other that dominated in the West. It is therefore hardly surprising that, since reunification, the West German perspective has dominated almost without exception in television documentaries on German history, while the history of the GDR is marginalized, devalued, or caricatured."
The fragmented juridification of the international sphere, the proliferation of regulatory arrangements beyond the state, and the diffusion of global norms, together with the resulting conflicts over validity, competence, and authority, have for some time been a much-discussed phenomenon in the social-science literature. Overlaps between national systems of government and classical international regimes anchored in international law have existed since the creation of the Westphalian state system. More recently, however, the pluralism of normative orders has intensified globally through novel types of regulatory arrangements beyond the state. Even among intergovernmentally created international institutions there are some that have been granted autonomous powers of action and decision and that exercise them as actors with a subjectivity of their own. Added to this is an ever stronger inclusion of "behind the border issues" in the remit of these regimes and organizations (Zürn 2004). These developments lead to a new degree of contestation of global normative orders. Neither the establishment of a unified global normative order nor a re-nationalization of law appears a realistic prognosis today. It is therefore all the more important to engage with the consequences of this pluralism of normative orders.
This paper is the first to conduct an incentive-compatible experiment using real monetary payoffs to test the hypothesis of probabilistic insurance which states that willingness to pay for insurance decreases sharply in the presence of even small default probabilities as compared to a risk-free insurance contract. In our experiment, 181 participants state their willingness to pay for insurance contracts with different levels of default risk. We find that the willingness to pay sharply decreases with increasing default risk. Our results hence strongly support the hypothesis of probabilistic insurance. Furthermore, we study the impact of customer reaction to default risk on an insurer’s optimal solvency level using our experimentally obtained data on insurance demand. We show that an insurer should choose to be default-free rather than having even a very small default probability. This risk strategy is also optimal when assuming substantial transaction costs for risk management activities undertaken to achieve the maximum solvency level.
The main thesis of this dissertation is that Northern Sotho makes no obligatory use of grammatical means to mark focus, neither in syntax nor in prosody or morphology. Nevertheless, the language structures an utterance according to information-structural aspects. Constituents that are given in the discourse are either deleted, pronominalized, or moved to the right or left edge of the sentence. These (morpho-)syntactic processes interact in such a way that the focused constituent often appears finally in its clause. Although the final position is not a designated focus position, awareness of this tendency is nonetheless crucial for understanding a morphological alternation that appears on the verb in Northern Sotho and that has been discussed in the literature in connection with focus.
Although Northern Sotho thus lacks a direct grammatical expression of formal F(ocus)-marking, F-marking is nevertheless crucial for the grammar of this language: focused logical subjects cannot appear in the canonical preverbal position. Instead, they appear either postverbally or in a cleft sentence, depending on the valence of the verb. While Northern Sotho shows a correspondence of complex form with complex meaning in the use of clefts with objects, this correspondence does not hold for logical subjects.
This dissertation models the above results within the theoretical framework of Optimality Theory (OT). Syntactic in-situ focus and the absence of prosodic focus marking can be captured with uncontroversial constraints. For the ungrammaticality of focused logical subjects in preverbal position, the present work proposes a modification of a constraint found in the literature that is of crucial importance in Northern Sotho. The form-meaning correspondence is treated, like other phenomena of pragmatic division of labour, within weakly bidirectional Optimality Theory.
An essential prerequisite for decoding prevailing understandings of justice is an engagement with the roles that the participating actors assume in a legal system, as well as an examination of the legal and institutional conditions under which these actors operate. This contribution first deals with the distribution of power and tasks between judges and parties. It becomes clear that the allocation of roles is not uniform but varies depending on differing procedural and institutional conditions. In jury trials, judicial authority is strongly constrained by a maximally developed party autonomy. By contrast, judges act as legal honoratiores (in Weber's sense) whenever they adjudicate without a jury. This occurs in particular in the state supreme courts and the federal courts of appeals, but also in first-instance proceedings in which "claims in equity" are to be decided. The contribution concludes with the influence that the peculiarities of American legal education exert on the American understanding of justice: they shape and reproduce the roles and self-images of American jurists, both in the bar and on the bench.
Biodiversity loss poses a significant threat to the global economy and affects ecosystem services on which most large companies rely heavily. The severe financial implications of such a reduced species diversity have attracted the attention of companies and stakeholders, with numerous calls to increase corporate transparency. Using textual analysis, this study thus investigates the current state of voluntary biodiversity reporting of 359 European blue-chip companies and assesses the extent to which it aligns with the upcoming disclosure framework of the Task Force on Nature-related Financial Disclosures (TNFD). The descriptive results suggest a substantial gap between current reporting practices and the proposed TNFD framework, with disclosures largely lacking quantification, details and clear targets. In addition, the disclosures appear to be relatively unstandardized. Companies in sectors or regions exposed to higher nature-related risks as well as larger companies are more likely to report on aspects of biodiversity. This study contributes to the emerging literature on nature-related risks and provides detailed insights on the extent of the reporting gap in light of the upcoming standards.
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories of how this process works. We give a short overview of the most influential theories before focusing on the most widely received one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Based on data from both experimental and aphasiological studies, we develop a model based on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production means.
With free delivery of products virtually being a standard in E-commerce, product returns pose a major challenge for online retailers and society. For retailers, product returns involve significant transportation, labor, disposal, and administrative costs. From a societal perspective, product returns contribute to greenhouse gas emissions and packaging disposal and are often a waste of natural resources. Therefore, reducing product returns has become a key challenge. This paper develops and validates a novel smart green nudging approach to tackle the problem of product returns during customers’ online shopping processes. We combine a green nudge with a novel data enrichment strategy and a modern causal machine learning method. We first run a large-scale randomized field experiment in the online shop of a German fashion retailer to test the efficacy of a novel green nudge. Subsequently, we fuse the data from about 50,000 customers with publicly-available aggregate data to create what we call enriched digital footprints and train a causal machine learning system capable of optimizing the administration of the green nudge. We report two main findings: First, our field study shows that the large-scale deployment of a simple, low-cost green nudge can significantly reduce product returns while increasing retailer profits. Second, we show how a causal machine learning system trained on the enriched digital footprint can amplify the effectiveness of the green nudge by “smartly” administering it only to certain types of customers. Overall, this paper demonstrates how combining a low-cost marketing instrument, a privacy-preserving data enrichment strategy, and a causal machine learning method can create a win-win situation from both an environmental and economic perspective by simultaneously reducing product returns and increasing retailers’ profits.
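The "smart" administration the abstract describes, estimating a heterogeneous nudge effect and targeting only customers with a predicted reduction, can be illustrated with a minimal two-model (T-learner) uplift sketch on synthetic data. All variables, effect sizes, and the binned-mean "model" below are illustrative assumptions, not the authors' actual system or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical customer feature: past return rate (not the paper's data).
n = 10_000
past_return_rate = rng.uniform(0, 1, n)
nudged = rng.integers(0, 2, n).astype(bool)

# Simulated outcome: the nudge lowers return probability, but only for
# customers with a high past return rate (heterogeneous treatment effect).
base_p = 0.2 + 0.5 * past_return_rate
effect = -0.15 * (past_return_rate > 0.5)
returns = rng.uniform(0, 1, n) < base_p + np.where(nudged, effect, 0.0)

# T-learner: fit one outcome model per treatment arm (here: binned means),
# then score the uplift as the difference between the two predictions.
bins = np.clip((past_return_rate * 10).astype(int), 0, 9)
mu_treated = np.array([returns[nudged & (bins == b)].mean() for b in range(10)])
mu_control = np.array([returns[~nudged & (bins == b)].mean() for b in range(10)])
uplift = mu_treated[bins] - mu_control[bins]

# "Smart" administration: nudge only where a reduction in returns is predicted.
target = uplift < 0
print(f"share of customers targeted: {target.mean():.2f}")
```

In practice one would replace the binned means with a proper causal machine learning estimator and the single synthetic feature with the enriched digital footprint, but the targeting logic is the same.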
By focusing on the cost conditions at issuance, I find not only that the effects of the Covid-19 pandemic differed across bonds and firms at different stages, but also that the market composition was significantly affected, collapsing onto investment-grade bonds, a segment in which the share of bonds eligible for the ECB corporate programmes strikingly increased from 15% to 40%. At the same time, the high-yield segment shrank to almost disappear at 4%. In addition to a market segmentation along bond grade and eligibility for the ECB programmes, another source of risk detected in the pricing mechanism is weak resilience to the pandemic: the premium requested is around 30 basis points and started to be priced only after the early containment actions taken by the national authorities. By contrast, I find no evidence supporting an increased risk for corporations headquartered in countries with reduced fiscal space, nor the existence of a premium in favour of green bonds, which should be the backbone of a possible “green recovery”.
We assess the degree of market fragmentation in the euro-area corporate bond market by disentangling the determinants of the risk premium paid on bonds at origination. By looking at over 2,400 bonds, we are able to isolate the country-specific effects, which are a suitable indicator of market fragmentation. We find that, after peaking during the sovereign debt crisis, fragmentation shrank in 2013 and receded to pre-crisis levels only in 2014. However, the low level of estimated market fragmentation is coupled with still high heterogeneity in actual bond yields, challenging the consistency of the new equilibrium.
We analyze the risk premium on bank bonds at origination with a special focus on the role of implicit and explicit public guarantees and the systemic relevance of the issuing institutions. By looking at the asset swap spread on 5,500 bonds, we find that explicit guarantees and sovereign creditworthiness have a substantial effect on the risk premium. In addition, while large institutions still enjoy lower issuance costs linked to the TBTF framework, we find evidence of enhanced market discipline for systemically important banks, which have faced, since the onset of the financial crisis, an increased premium on bond placements.
Unconventional green
(2023)
We analyze the effects of the PEPP (Pandemic Emergency Purchase Programme), the temporary quantitative easing implemented by the ECB immediately after the outbreak of the Covid-19 pandemic. We show that the differences in aim, size, and flexibility with respect to the traditional Corporate Sector Purchase Programme (CSPP) allowed the programme to significantly reach, in addition to the directly targeted bonds, the green bond segment as well. Via a standard difference-in-differences model, we estimate that the yield on green bonds declined by more than 20 basis points after the PEPP. In order to also take into account the differences attributable to eligibility for the programme, we employ a triple-difference estimator. Bonds that were at the same time green and eligible benefited from an additional premium of 39 basis points.
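The difference-in-differences logic referred to above can be sketched on synthetic data: compare the pre/post change in yields for the "treated" group (green bonds) against the same change for a control group. The effect size, noise level, and group construction below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic bond yields (in %): green bonds are 'treated' by the programme,
# other bonds serve as controls. The -0.20 pp effect is illustrative only.
n = 2_000
green = rng.integers(0, 2, n).astype(bool)
post = rng.integers(0, 2, n).astype(bool)   # observed after the programme?
true_effect = -0.20
yields = (
    1.5
    + 0.3 * green                           # fixed level difference
    - 0.4 * post                            # common time trend
    + true_effect * (green & post)          # the DiD estimand
    + rng.normal(0, 0.1, n)                 # idiosyncratic noise
)

# DiD: (treated post - treated pre) minus (control post - control pre).
did = (
    (yields[green & post].mean() - yields[green & ~post].mean())
    - (yields[~green & post].mean() - yields[~green & ~post].mean())
)
print(f"DiD estimate: {did:.3f} (true effect {true_effect})")
```

A triple-difference estimator extends this by differencing once more over a third dimension (here, eligibility for the programme), removing confounds that vary with any two of the three dimensions.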
Chen and Zadrozny (1998) developed the linear extended Yule-Walker (XYW) method for determining the parameters of a vector autoregressive (VAR) model with available covariances of mixed-frequency observations on the variables of the model. If the parameters are determined uniquely for available population covariances, then, the VAR model is identified. The present paper extends the original XYW method to an extended XYW method for determining all ARMA parameters of a vector autoregressive moving-average (VARMA) model with available covariances of single- or mixed-frequency observations on the variables of the model. The paper proves that under conditions of stationarity, regularity, miniphaseness, controllability, observability, and diagonalizability on the parameters of the model, the parameters are determined uniquely with available population covariances of single- or mixed-frequency observations on the variables of the model, so that the VARMA model is identified with the single- or mixed-frequency covariances.
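For the special case of a stationary VAR(1) observed at a single frequency, the Yule-Walker idea underlying the XYW method reduces to the moment equation Gamma(1) = A Gamma(0), so A = Gamma(1) Gamma(0)^{-1}. The minimal numerical sketch below illustrates only this single-frequency base case, not the mixed-frequency extension itself; the coefficient matrix and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary bivariate VAR(1): x_t = A x_{t-1} + e_t
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
T = 100_000
x = np.zeros((T, 2))
e = rng.normal(0, 1, (T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + e[t]
x -= x.mean(axis=0)

# Sample autocovariances Gamma(0) = E[x_t x_t'] and Gamma(1) = E[x_t x_{t-1}']
gamma0 = x.T @ x / T
gamma1 = x[1:].T @ x[:-1] / (T - 1)

# Yule-Walker: Gamma(1) = A Gamma(0)  =>  A = Gamma(1) Gamma(0)^{-1}
A_hat = gamma1 @ np.linalg.inv(gamma0)
print(np.round(A_hat, 2))
```

The XYW method generalizes this step by stacking additional covariance equations so that the parameters remain uniquely determined when some variables are observed only at a lower frequency.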
Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic restrictions in terms of deep parameters. For given deep parameters, structural equations are solved for "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in a LREM. The sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate a LREM in terms of deep parameters. Forwardly-estimated deep parameters determine forwardly-estimated RES equations that Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called economic identifying restrictions on deep parameters of forwardly-estimated LREMs "incredible", because he considered in-sample fits of forwardly-estimated OVAR equations inadequate and out-of-sample policy predictions of forwardly-estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and directly using the directly-estimated OVAR equations to make policy predictions. However, if assumed or predicted out-of-sample policy variables in directly-made policy predictions differ significantly from in-sample values, then, the out-of-sample policy predictions won't satisfy Lucas's critique.
If directly-estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then, identification 2 derived in the paper can linearly "inversely" estimate the underlying RES equations from the directly-estimated OVAR equations and the inversely-estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly-estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely-estimated RES equations to make accurate (credible) out-of-sample policy predictions, then, he should consider the inversely-estimated RES equations to be credible. Thus, inversely-estimated RES equations by identification 2 can reconcile Lucas's advocacy for making policy predictions with RES equations and Sims's advocacy for directly estimating OVAR equations.
The paper also derives identification 1 of structural coefficients from RES coefficients that contributes mainly by showing that directly estimated reduced-form OVAR equations can have underlying LREM-structural equations.
Over the past few decades, changes in market conditions such as the globalisation and deregulation of financial markets, as well as product innovation and technical advancements, have induced financial institutions to expand their business activities beyond their traditional boundaries and to engage in cross-sectoral operations. As combining different sectoral businesses offers opportunities for operational synergies and diversification benefits, financial groups comprising banks, insurance undertakings and/or investment firms, usually referred to as financial conglomerates, have rapidly emerged, providing a wide range of services and products in distinct financial sectors and oftentimes in different geographic locations. In the European Union (EU), financial conglomerates have in recent years become some of the biggest and most active financial market participants. Financial conglomerates generally pose new problems for financial authorities as they can raise new risks and exacerbate existing ones. In particular, their cross-sectoral business activities can involve prudentially substantial risks such as the risk of regulatory arbitrage and contagion risk arising from intra-group transactions. Moreover, the generally large size of financial conglomerates as well as the high complexity and interconnectedness of their corporate structures and risk exposures can entail substantial systemic risk and can therefore threaten the stability of the financial system as a whole. Until a few years ago, there was no supervisory framework in place which addressed a financial conglomerate in its entirety as a group. Instead, each group entity within a financial conglomerate was subject to the supervisory rules of its pertinent sector only. Such a silo supervisory approach had the drawback of not taking account of risks which arise or are aggravated at the group level.
It also failed to consider how the risks from different business lines within the group interrelate with each other and affect the group as a whole. In order to address this lack of group-wide prudential supervision of financial conglomerates, the European legislator adopted the Financial Conglomerates Directive 2002/87/EC ('FCD') on 16 December 2002. The FCD was transposed into national law in the member states of the EU ('Member States') by 11 August 2004 for application to financial years beginning on 1 January 2005 and after. The FCD primarily aims at supplementing the existing sectoral directives to address the additional risks of concentration, contagion, and complexity presented by financial conglomerates. It therefore provides for a supervisory framework which is applicable in addition to the sectoral supervision. Most importantly, the FCD has introduced additional capital requirements at the conglomerate level so as to prevent the multiple use of the same capital by different group entities. This paper seeks to examine to what extent the FCD provides for an adequate capital regulation of financial conglomerates in the EU, taking into account the underlying sectoral capital requirements and the inherent risks associated with financial conglomerates. In Part 1, the definition and the basic corporate models of financial conglomerates will be presented (I), followed by an illustration of the core motives behind the phenomenon of financial conglomeration (II) and an overview of the development of the supervision of financial conglomerates in the EU (III). Part 2 begins with a brief elaboration on the role of regulatory capital (I) and gives a general overview of the EU capital requirements applicable to banks and insurance undertakings respectively. A delineation of the commonalities and differences of the banking and the insurance capital requirements will be provided (II).
It goes on to examine the need for a group-wide capital regulation of financial conglomerates and analyses the adequacy of the FCD capital requirements. In this context, the technical advice rendered by the Joint Committee on Financial Conglomerates (JCFC) as well as the currently ongoing legislative reforms at the EU level will be discussed (III). The paper closes with a conclusion and an outlook on remaining open issues (IV).
The financial services industry worldwide has undergone major transformation since the late 1970s. Technological advancements in information processing and communication facilitated financial innovation and narrowed traditional distinctions in financial products and services, allowing them to become close substitutes for one another. The deregulation process in many major economies prior to the recent financial crisis blurred the traditional lines of demarcation between the distinct types of financial institutions, exposing those firms to new competitors in their traditional business areas, while the increasing globalization of financial markets fostered the provision of financial services across national borders. Against this backdrop, a trend toward consolidation across financial sectors as well as across national borders increasingly manifested itself from the 1990s onward. The developments in the financial markets further intensified competition in the financial services industry and induced financial institutions to redefine their business strategies in search of higher profitability and growth opportunities. Consolidation across distinct financial sectors, i.e. financial conglomeration, in particular became a popular business strategy in light of the potential operational synergies and diversification benefits it can offer. This trend spurred the growth of diversified financial groups, the so-called financial conglomerates, which commingle banking, securities, and insurance activities under one corporate umbrella. Still today, large, complex financial conglomerates are represented among the major players in the financial markets worldwide, whose activities span not only the traditional boundaries of the banking, securities, and insurance sectors but also national borders.
Notwithstanding the economic benefits that conglomeration may produce as a business strategy, the emergence of financial conglomerates has also exacerbated existing prudential risks in the financial system and created new ones. The mixing of a variety of financial products and services under one corporate roof and the generally large and complex group structure of financial conglomerates expose such organizations to specific group risks such as contagion and arbitrage risk, as well as systemic risk. When realized, these risks may not only cause the failure of an entire financial group but threaten the stability of the financial system as a whole, as evidenced by the events of the recent financial crisis of 2007-2009.
I propose a dynamic stochastic general equilibrium model in which the leverage of borrowers as well as banks and housing finance play a crucial role in the model dynamics. The model is used to evaluate the relative effectiveness of a policy to inject capital into banks versus a policy to relieve households of mortgage debt. In normal times, when the economy is near the steady state and policy rates are set according to a Taylor-type rule, capital injections to banks are more effective in stimulating the economy in the long run. However, in the middle of a housing debt crisis, when households are highly leveraged, the short-run output effects of the debt relief are more substantial. When the zero lower bound (ZLB) is additionally considered, the debt relief policy can be much more powerful in boosting the economy both in the short run and in the long run. Moreover, the output effects of the debt relief become increasingly larger the longer the ZLB is binding.
Permanent conflict resolution at the high courts was one of the Holy Roman Empire's main characteristics. This applies even to conflicts between rural communities and their lords, which could be dealt with, at least under certain circumstances, at the Imperial Chamber Court or the Aulic Council. These trials, however, were embedded in complicated processes of establishing and legitimizing claims at the local level, as well as in attempts to reach a solution by violence or by arbitration. Researchers have argued that conflict resolution underwent, in the long run, a process of "juridification" ("Verrechtlichung"). This working paper proposes a method, based on Niklas Luhmann's theory of procedural legitimation ("Legitimation durch Verfahren"), which may make it possible to detect elements of juridification and conflict resolution in the actions of parties and courts.
The opening-up of German accounting law has led to an increasingly broad application of international accounting standards (in particular US GAAP and IAS) by German practitioners; the heterogeneous types of norms and, correspondingly, the differing economic properties of these norms require a comparative theory of accounting for any meaningful comparison of accounting regimes. A return to economic theory can in principle be observed here, triggered in part by the internationalization of accounting, just as modern German accounting law received its present shape from economic theory, not primarily from practitioners. The aim of this essay is to contribute to an institutional-economics theory of accounting for the purpose of determining information content and profit claims, as well as to a comparative theory of accounting. In the first main part (2), building on the research programme of institutional economics, the significance of institutions within decision-makers' utility calculus is sketched; the individual types of institutions relevant to comparative accounting are then typified (into formal and informal rules), and their attributes are introduced into the individual target-stream calculus (namely predicates of freedom from manipulation and predicates of decision relevance). The relationship of the institutions to one another is developed in the following section (3) on the basis of a legal and an economic understanding of systems. It is shown that both concepts of a system rest on a non-additivity of their constituent institutions, which complicates the qualitative comparison of different systems; the differences between the legal and the economic understanding of systems are, however, overestimated: the two are functionally similar.
In the final main part (4), against the background of a design-oriented theory, the relevant subareas (sub-systems) of the accounting order are presented, and individual disclosure norms are interpreted functionally. The contribution closes with summarizing theses (5).
Mängel bei der Abschlußprüfung : Tatsachenberichte und Analysen aus betriebswirtschaftlicher Sicht
(2001)
Corporate crises, especially "surprising" ones, stood at the beginning of the statutory regulation of the audit of financial statements in Germany. It is therefore a legitimate concern of the public and of experts to treat the prevailing standards of audit quality and the credibility of auditors as a heightened problem whenever companies slide into crisis despite the statutory protective purposes and protective norms of the established audit: within the institutional mechanisms for the early detection of such crises, a functional part of the German system of corporate governance, the statutory audit rightly counts as a pivotal element. Much of the criticism voiced may be owed to a lay understanding unable to bridge the complexity of the matters at issue; some of it, however, can certainly be explained by statutory provisions in need of improvement, by theoretical (economic and legal) problems still to be solved, and by good professional practice still to be fostered. Recent questionable audit failures are the occasion for the present factual reports and business-economics analyses. The selection of companies here is as arbitrary as that of the audit firms concerned; the selection of the underlying business-economics problems, by contrast, is not accidental: they touch on essential expectations of the audit which have evidently been disappointed so regularly that even government explanatory memoranda to draft legislation now invoke a "so-called expectation gap".
These expectation gaps arise in particular from the notions that (a) the statutory auditor, in a properly conducted audit, is necessarily bound to uncover fraudulent conduct; (b) balance-sheet valuations are sufficiently reliable figures reporting on the company's financial position; and finally (c) the audit of the company's actual economic situation, and reporting on it in the audit report and the audit opinion, is a matter of course in the statutory audit. The course of the investigation follows these expectations.
The present publication pursues a different concept. It, too, seeks to draw attention to interesting offerings in the digital realm. However, it does not confine itself to a bare link or a briefly annotated pointer to a program, but characterizes each offering with a view to its practical usefulness. The criterion of usefulness is the experience the author has had with each of them. One might call such a selection subjective, but who, other than subjects, can judge anything at all?
For those who love voyages of discovery and are occasionally seized by wanderlust without having the time or money for grand expeditions, Google Earth can be something of an addictive substance. From Ayers Rock to Mount Fuji and from the street canyons of Manhattan to the African steppe: with Google Earth, a matter of seconds.
The first part of the following paper deals with various points of criticism leveled against Ordoliberalism. The aim here is not to directly falsify each argument on its own; rather, the author tries to give a precise overview of the spectrum of critique. The second section picks out one line of criticism, namely that the ordoliberal concept of the state is somewhat elitist and rests on intellectual experts. Based on the previous sections, the final part differentiates two kinds of genesis of norms: an evolutionary and an elitist one, both (latently) present within Ordoliberalism. In combination with the two-level differentiation between individual and regulatory ethics, the essay allows for a distinction between individual-ethical norms based on an evolutionary genesis of norms and regulatory-ethical norms based on an elitist understanding of norms. A by-product of the author's argument is a (further) demarcation within neoliberalism.
Based on Foucault’s analysis of German Neoliberalism and his thesis of ambiguity, the following paper draws a two-level distinction between individual and regulatory ethics. The individual ethics level, which has received surprisingly little attention, contains the Christian foundation of values and the liberal-Kantian heritage of so-called Ordoliberalism, one variety of neoliberalism. The regulatory or formal-institutional ethics level, by contrast, refers to the ordoliberal framework of a socio-economic order. By differentiating these two levels of ethics incorporated in German Neoliberalism, it is feasible to distinguish dissimilar varieties of neoliberalism and to link Ordoliberalism to modern economic ethics. Furthermore, it allows a revision of the dominant reception of Ordoliberalism, which focuses solely on the formal-institutional level while largely neglecting the individual ethics level.
June 4th, 2013 marks the formal launch of the third generation of the Equator Principles (EP III) and the tenth anniversary of the EPs, reason enough to evaluate the EPs initiative from an economic and business ethics perspective. In particular, this essay deals with the following questions: What are the EPs and where are they going? What has been achieved so far by the EPs? What are the strengths and weaknesses of the EPs? Which reform steps need to be adopted in order to further strengthen the EPs framework? Can the EPs be regarded as a role model in the field of sustainable finance and CSR? The paper is structured as follows: The first chapter defines the term EPs and introduces the keywords related to the EPs framework. The second chapter gives a brief overview of the history of the EPs. The third chapter discusses the Equator Principles Association, the governing, administering, and managing institution behind the EPs. The fourth chapter summarizes the main features and characteristics of the newly released third generation of the EPs. The fifth chapter critically evaluates EP III from an economic and business ethics perspective. The paper concludes with a summary of the main findings.
This paper analyzes liquidity in an order driven market. We not only investigate the best limits in the limit order book, but also take into account the book behind these inside prices. When subsequent prices are close to the best ones and depth at them is substantial, larger orders can be executed without an extensive price impact and without deterring liquidity. We develop and estimate several econometric models, based on depth and prices in the book, as well as on the slopes of the limit order book. The dynamics of different dimensions of liquidity are analyzed: prices, depth at and beyond the best prices, as well as resiliency, i.e. how fast the different liquidity measures recover after a liquidity shock. Our results show a somewhat less favorable image of liquidity than often found in the literature. After a liquidity shock (in the spread or depth or in the book beyond the best limits), several dimensions of liquidity deteriorate at the same time. Not only does the inside spread increase and depth at the best prices decrease; the difference between subsequent bid and ask prices may also become larger, and the depth provided at them decreases. The impacts are both econometrically and economically significant. Also, our findings point to an interaction between different measures of liquidity, between liquidity at the best prices and beyond in the book, and between the ask and bid sides of the market.
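The liquidity dimensions named in this abstract can be illustrated with a small sketch. This is not the authors' econometric model; the order-book snapshot, the function names, and the crude "slope" proxy below are hypothetical, chosen only to show how inside spread, depth at the best quotes, and the book beyond them can be quantified:

```python
# Illustrative only: three simple liquidity measures computed from a
# toy snapshot of a limit order book. Prices and quantities are made up.

def inside_spread(bids, asks):
    """Best ask minus best bid; bids/asks are lists of (price, depth)."""
    best_bid = max(p for p, _ in bids)
    best_ask = min(p for p, _ in asks)
    return best_ask - best_bid

def depth_at_best(bids, asks):
    """Quantity available at the best quote on each side."""
    best_bid = max(p for p, _ in bids)
    best_ask = min(p for p, _ in asks)
    bid_depth = sum(q for p, q in bids if p == best_bid)
    ask_depth = sum(q for p, q in asks if p == best_ask)
    return bid_depth, ask_depth

def ask_side_slope(asks):
    """Average price concession per unit of cumulative depth beyond the
    best ask: a crude proxy for the 'slope of the book'."""
    levels = sorted(asks)            # ascending price
    best_price = levels[0][0]
    cum_depth, rises = 0, []
    for price, qty in levels:
        cum_depth += qty
        rises.append((price - best_price) / cum_depth)
    return sum(rises) / len(rises)

bids = [(99.0, 200), (98.5, 500), (98.0, 800)]
asks = [(99.5, 150), (100.0, 400), (100.5, 900)]

print(inside_spread(bids, asks))     # 0.5
print(depth_at_best(bids, asks))     # (200, 150)
```

Resiliency, the third dimension in the abstract, is a time-series property (how fast these measures revert after a shock) and so cannot be read off a single static snapshot like this one.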
Sissi im Film
(2018)
Ludwig van Beethoven im Film
(2018)
Contents
1. Strauß, Johann (father/Sr.) (* 14.3.1804 – † 25.9.1849)
2. Strauß, Johann (son/Jr.) (* 25.10.1825 – † 3.6.1899)
3. The operetta adaptations
3.1 Die Fledermaus (operetta in 3 acts, Johann Strauß Jr.) – premiere: 5.4.1874
3.2 Eine Nacht in Venedig (operetta in 3 acts, Johann Strauß Jr.) – premiere: 3.10.1883
3.3 Der Zigeunerbaron (operetta in 3 acts, Johann Strauß Jr.) – premiere: 24.10.1885
3.4 Wiener Blut (Johann Strauß Jr.) – premiere: 25.10.1899
3.5 Frühlingsluft (Josef Strauß) – premiere: 9.5.1903
Les Blank
(2017)
Venture capital (VC) investment has long been conceptualized as a local business, in which the VC’s ability to source, syndicate, fund, monitor, and add value to portfolio firms critically depends on access to knowledge obtained through ties to the local (i.e., geographically proximate) network. Consistent with the view that local networks matter, existing research confirms that local and geographically distant portfolio firms are sourced, syndicated, funded, and monitored differently. Curiously, emerging research on VC investment practice within the United States finds that distant investments, as measured by “exits” (either initial public offering or merger & acquisition), out-perform local investments. These findings raise important questions about the assumed benefits of local network membership and proximity. To probe these questions more deeply, we contrast the deal structure of cross-border VC investment with that of domestic VC investment, and contrast the deal structure of cross-border VC investments that include a local partner with those that do not. Evidence from 139,892 rounds of venture capital financing in the period 1980-2009 suggests that cross-border investment practice, in terms of deal sourcing, syndication, and performance, indeed changes with proximity, but that monitoring practices do not. Further, we find that the inclusion of a local partner in the investment syndicate yields surprisingly few benefits. This evidence, we argue, raises important questions about VC investment practice as well as the ability of firms to capture and leverage the presumed benefits of network membership.
We examine the dynamics of assets under management (AUM) and management fees at the portfolio manager level in the closed-end fund industry. We find that managers capitalize on good past performance and favorable investor perception about future performance, as reflected in fund premiums, through AUM expansions and fee increases. However, the penalties for poor performance or unfavorable investor perception are either insignificant, or substantially mitigated by manager tenure. Long tenure is generally associated with poor performance and high discounts. Our findings suggest substantial managerial power in capturing CEF rents. We also document significant diseconomies of scale at the manager level.
This paper considers the desirability of the observed tendency of central banks to adjust interest rates only gradually in response to changes in economic conditions. It shows, in the context of a simple model of optimizing private-sector behavior, that such inertial behavior on the part of the central bank may indeed be optimal, in the sense of minimizing a loss function that penalizes inflation variations, deviations of output from potential, and interest-rate variability. Sluggish adjustment characterizes an optimal policy commitment, even though no such inertia would be present in the case of a reputationless (Markovian) equilibrium under discretion. Optimal interest-rate feedback rules are also characterized, and shown to involve substantial positive coefficients on lagged interest rates. This provides a theoretical explanation for the numerical results obtained by Rotemberg and Woodford (1998) in their quantitative model of the U.S. economy.
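The inertial feedback rule characterized in the abstract above can be sketched numerically. The partial-adjustment form and all coefficient values below (rho, phi_pi, phi_x) are illustrative assumptions for exposition, not the paper's derived optimal rule:

```python
# Minimal sketch of an inertial interest-rate feedback rule:
#   i_t = rho * i_{t-1} + (1 - rho) * (phi_pi * pi_t + phi_x * x_t)
# A large rho puts a "substantial positive coefficient on the lagged
# interest rate", so the rate adjusts only gradually. Coefficients are
# hypothetical, not estimates from the paper.

def interest_rate_path(inflation, output_gap,
                       rho=0.8, phi_pi=1.5, phi_x=0.5, i0=0.0):
    """Simulate gradual interest-rate responses to given paths of
    inflation and the output gap."""
    rates, i_prev = [], i0
    for pi, x in zip(inflation, output_gap):
        target = phi_pi * pi + phi_x * x       # static Taylor-type target
        i = rho * i_prev + (1 - rho) * target  # smoothed adjustment
        rates.append(i)
        i_prev = i
    return rates

# A one-off inflation shock: the rate rises only part of the way toward
# the static target, then decays slowly even after inflation is back at zero.
path = interest_rate_path([2.0, 0.0, 0.0, 0.0], [0.0] * 4)
```

With these numbers the rate moves to only 20 percent of the static target on impact and then shrinks geometrically, which is the qualitative "sluggish adjustment" the abstract describes.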
The paper considers optimal monetary stabilization policy in a forward-looking model, when the central bank recognizes that private-sector expectations need not be precisely model-consistent, and wishes to choose a policy that will be as good as possible in the case of any beliefs that are close enough to model-consistency. It is found that commitment continues to be important for optimal policy, that the optimal long-run inflation target is unaffected by the degree of potential distortion of beliefs, and that optimal policy is even more history-dependent than if rational expectations are assumed. JEL Classification: E52, E58, E42
This paper investigates the accuracy of point and density forecasts of four DSGE models for inflation, output growth and the federal funds rate. Model parameters are estimated and forecasts are derived successively from historical U.S. data vintages synchronized with the Fed’s Greenbook projections. Point forecasts of some models are of similar accuracy as the forecasts of nonstructural large dataset methods. Despite their common underlying New Keynesian modeling philosophy, forecasts of different DSGE models turn out to be quite distinct. Weighted forecasts are more precise than forecasts from individual models. The accuracy of a simple average of DSGE model forecasts is comparable to Greenbook projections for medium term horizons. Comparing density forecasts of DSGE models with the actual distribution of observations shows that the models overestimate uncertainty around point forecasts.
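The forecast pooling mentioned above (a simple average of DSGE model forecasts) is elementary arithmetic; the sketch below, with entirely made-up numbers, shows why an equal-weight average can beat every individual model when their errors partly offset:

```python
# Illustrative only: point forecasts from three hypothetical models and
# the realized values; demonstrates equal-weight forecast pooling.

def rmse(forecasts, actuals):
    """Root mean squared forecast error."""
    n = len(actuals)
    return (sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / n) ** 0.5

actual  = [2.0, 1.5, 1.0, 2.5]
model_a = [2.6, 1.0, 1.5, 3.0]   # errors tilt one way ...
model_b = [1.4, 2.0, 0.5, 2.0]   # ... roughly opposite to model_a
model_c = [2.2, 1.8, 0.8, 2.8]

# Equal-weight average of the three model forecasts, period by period.
pooled = [(a + b + c) / 3 for a, b, c in zip(model_a, model_b, model_c)]

for name, f in [("A", model_a), ("B", model_b), ("C", model_c), ("pooled", pooled)]:
    print(name, round(rmse(f, actual), 3))
```

Because the errors of models A and B largely cancel, the pooled forecast has a lower RMSE than any single model, the same mechanism that makes the average of DSGE model forecasts competitive in the paper.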
The paper illustrates, based on an example, the importance of consistency between the empirical measurement and the concept of variables in estimated macroeconomic models. Since standard New Keynesian models do not account for demographic trends and sectoral shifts, the authors propose adjusting the hours worked per capita used to estimate such models so as to enhance consistency between the data and the model. Without this adjustment, low-frequency shifts in hours lead to unreasonable trends in the output gap, caused by the close link between hours and the output gap in such models.
The retirement wave of baby boomers, for example, lowers U.S. aggregate hours per capita, which leads to erroneously permanent negative output gap estimates following the Great Recession. After correcting hours for changes in the age composition, the estimated output gap instead closes gradually in the years after the Great Recession.
This paper investigates the accuracy of forecasts from four DSGE models for inflation, output growth and the federal funds rate using a real-time dataset synchronized with the Fed’s Greenbook projections. Conditioning the model forecasts on the Greenbook nowcasts leads to forecasts that are as accurate as the Greenbook projections for output growth and the federal funds rate. Only for inflation are the model forecasts dominated by the Greenbook projections. A comparison with forecasts from Bayesian VARs shows that the economic structure of the DSGE models, which is useful for the interpretation of forecasts, does not lower forecast accuracy. Combining the forecasts of several DSGE models increases precision relative to individual model forecasts. Comparing density forecasts with the actual distribution of observations shows that DSGE models overestimate the uncertainty around point forecasts.
Large companies are increasingly on trial. Over the last decade, many of the world’s biggest firms have been embroiled in legal disputes over corruption charges, financial fraud, environmental damage, taxation issues or sanction violations, ending in convictions or settlements involving record-breaking fines, well above the billion-dollar mark. For critics of globalization, this turn towards corporate accountability is a welcome sea change showing that multinational companies are no longer above the law. For legal experts, the trend is noteworthy because of the extraterritorial dimensions of law enforcement, as companies are increasingly held accountable for activities regardless of their nationality or the place of those activities. Indeed, understanding the global trend requires understanding the evolution of corporate criminal law enforcement in the United States in particular, where authorities have skillfully expanded their effective jurisdiction beyond U.S. territory. This paper traces the evolution of corporate prosecutions in the United States. Analyzing federal prosecution data, it then shows that foreign firms are more likely to pay a fine, which is on average 6.6 times larger.
One of the motivations for establishing a European banking union was the desire to break the ties between national regulators and domestic financial institutions in order to prevent regulatory capture. However, supervisory authority over the financial sector at the national level can also have valuable public benefits. The aim of this policy letter is to detail these public benefits in order to counter discussions that focus only on conflicts of interest. It is informed by an analysis of how financial institutions interacted with policy-makers in the design of national bank rescue schemes in response to the banking crisis of 2008. Using this information, it discusses the possible benefits of close cooperation between financial institutions and regulators and analyzes these benefits in the wake of a European banking union.
Over the last three decades, countries across the Andean region have moved toward legal recognition of indigenous justice systems. This turn toward legal pluralism, however, has been and continues to be heavily contested. The working paper explores a theoretical perspective that aims at analyzing and making sense of this contentious process by assessing the interplay between conflict and (mis)trust. Based on a review of the existing scholarship on legal pluralism and indigenous justice in the Andean region, with a particular focus on the cases of Bolivia and Ecuador, it is argued that manifest conflict over the contested recognition of indigenous justice can be considered helpful and even necessary for the deconstruction of mistrust of indigenous justice. Still, such conflict can also help reproduce and even reinforce mistrust, depending on the ways in which conflict is dealt with politically and socially. The exploratory paper suggests four propositions that specify the complex and contingent relationship between conflict and (mis)trust in the contested negotiation of pluralist justice systems in the Andean region.
Schuldenanstieg und Haftungsausschluss im deutschen Föderalstaat : zur Rolle des Moral Hazard
(2007)
Introduction: German public debt has risen continuously over recent decades. Future generations will additionally be burdened, owing to demographic developments, through the pay-as-you-go social security systems. The rise in the indebtedness of the German federal states (Länder) in particular has been palpable over recent decades. In 1991 the combined debt of all German Länder still stood at 168 billion euros, while at the beginning of 2007 it amounted to 483 billion euros, implying that the Länder debt ratio (debt as a percentage of GDP) nearly doubled, to about 21 percent. In the current discussion on reforming German federalism there is agreement on the diagnosis of the problem: the trajectory of public debt is critical and must not continue in this way. There is disagreement, however, about the cause of the increase, and the best way to curb it is likewise contested. Several authors argue that the rise in the indebtedness of the German Länder is attributable above all to a moral hazard incentive. The present discussion paper examines this as one possible reason for the growth of debt. To this end, the concept is first briefly introduced; the existing empirical evidence for Germany is then discussed; finally, an assessment and a placement within the current debate are offered. Concluding remarks: This discussion paper examines the moral hazard problem as one possible reason for the observed sharp rise in the indebtedness of the German Länder. It was shown that financial markets barely react to the considerable differences in the fiscal fundamentals of the Länder.
A case study furthermore made clear that the recent Federal Constitutional Court ruling on a possible budgetary emergency of Berlin did not change the markets' risk assessment of the German Länder. All in all, it seems sensible to consider a greater participation of creditors in the risks of individual Länder. This, however, would likely limit debt growth only for Länder that are already highly indebted, and perhaps forestall an emergency, but would not compensate for the fundamental deficit bias of fiscal policy. Overall, ex ante rules therefore appear necessary to stop the growth of debt early on and thus to reduce the burdens on future generations.
This paper studies the long-run effects of credit market disruptions on real firm outcomes and how these effects depend on nominal wage rigidities at the firm level. I trace out the long-run investment and growth trajectories of firms which are more adversely affected by a transitory shock to aggregate credit supply. Affected firms exhibit a temporary investment gap for two years following the shock, resulting in a persistent accumulated growth gap. I show that affected firms with a higher degree of wage rigidity exhibit a steeper drop in investment and grow more slowly than affected firms with more flexible wages.
This working paper outlines perspectives of a doctoral project dealing with the reform of the English common law and equity courts in the Victorian era. After an overview of the relevant sources and literature, it turns to the members and tasks of the Judicature Commission appointed in 1867. It then presents the innovations for the English court system that followed from the Judicature Acts passed in the 1870s.
During the last years the relationship between financial development and economic growth has received widespread attention in the literature on growth and development. This paper summarises in its first part the results of this research, stressing the growth-enhancing effects of an increased interpersonal re-allocation of resources promoted by financial development. The second part of the paper seeks to identify the determinants of financial development based on Diamond's theory of financial intermediation as delegated monitoring. The analysis shows that the quality of corporate governance of banks is the key factor in financial system development. Accordingly, financial sector reforms in developing countries will only succeed if they strengthen the corporate governance of financial institutions. In this area, financial institution building has an important contribution to make. Paper presented at the First Annual Seminar on New Development Finance held at the Goethe University of Frankfurt, September 22 - October 3, 1997
This study focuses on White volunteers from Germany who completed a voluntary service abroad and who occupy a privileged, that is, White position within racist power relations. Critical Whiteness Studies serve as a fruitful basis for examining the engagement with racism from a White, privileged perspective. The study therefore asks to what extent the experiences of the voluntary service and the accompanying seminars on critical approaches to racism prompt White North-North and North-South volunteers to reflect on their privileges and to position themselves critically within the racist power system. The analysis of interviews with White volunteers shows, on the one hand, that the interviewees had different experiences of being confronted with Whiteness and, on the other, that the resulting processes of reflection and ways of dealing with these experiences vary widely. Differences appear not only between North-North and North-South volunteers but also, depending on the situation, in the particular experiences of the individual White volunteers. From these findings it can be concluded that it remains a challenge, even for accompanying seminars critical of racism and power, to convey the relevance of a personal engagement with Whiteness, and thus with one's own privileges and entanglements in racism, regardless of the destination country of the voluntary service.
In the course of the ongoing digitalization of the mobility sector, shared on-demand transport services are currently being implemented on a growing scale, particularly in large cities. So-called ridepooling is a dynamic, digital form of the conventional shared taxi in which an intelligent algorithm combines several independent but temporally compatible ride requests into a single route in real time. Customers unknown to one another can thus be transported together, simultaneously, on direct connections according to their individual needs. Many ridepooling services are operated in urban areas by private transport companies, in some cases even on a commercial basis, and are marketed as a sustainable form of mobility: they are meant to satisfy citizens' increasingly individualized mobility needs, thereby address urban problems such as high air and noise pollution, congestion and scarcity of space, and lead to an environmentally friendly shift of local traffic volumes (modal shift).
Using the cities of Berlin and Hamburg as examples, the present study investigates how, and with which objectives on the part of the various actors, the new service formats were implemented and what effects they have on urban mobility systems.
Through expert interviews with municipal authorities, public and private transport companies, transport associations, and experts on digital and urban mobility, the currently sparse state of research on the objectives, forms and effects of ridepooling services in urban areas is to be extended by practice-oriented observations and insights. It can be assumed that the different designs of the services examined (ioki, CleverShuttle, MOIA and BerlKönig) have quite distinct effects on customers' usage behavior, on urban traffic planning, and on their ecological and social dimensions of sustainability.
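The core pooling idea described in this abstract, combining independent but temporally compatible ride requests into one route, can be caricatured in a few lines. The matcher below is a hypothetical greedy sketch, not the proprietary routing logic of ioki, CleverShuttle, MOIA or BerlKönig; it groups requests only by pickup-time proximity and a shared direction:

```python
# Hypothetical greedy ridepooling sketch: group compatible ride
# requests into shared trips. Real services solve a much richer
# vehicle-routing problem with live street-network travel times.

from dataclasses import dataclass

@dataclass
class Request:
    rider: str
    pickup_minute: int   # requested pickup time
    direction: str       # crude stand-in for route compatibility

def pool_requests(requests, max_wait=10, seats=4):
    """Greedily assign each request to the first compatible open trip."""
    trips = []
    for req in sorted(requests, key=lambda r: r.pickup_minute):
        for trip in trips:
            anchor = trip[0]
            if (req.direction == anchor.direction
                    and req.pickup_minute - anchor.pickup_minute <= max_wait
                    and len(trip) < seats):
                trip.append(req)
                break
        else:                        # no compatible trip found
            trips.append([req])
    return trips

demo = [
    Request("A", 0, "north"), Request("B", 5, "north"),
    Request("C", 20, "north"), Request("D", 3, "south"),
]
trips = pool_requests(demo)
# A and B share a trip; C (too late) and D (wrong direction) ride alone.
```

The study's empirical question, whether such pooling actually shifts urban traffic in a sustainable direction, of course depends on occupancy rates and substituted trips, not on the matching mechanics alone.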
In this paper we test previous claims concerning the universality of patterns of polysemy and semantic change in perception verbs. Implicit in such claims are two elements: firstly, that the sharing of two related senses A and B by a given form is cross-linguistically widespread, and matched by a complementary lack of some rival polysemy, and secondly that the explanation for the ubiquity of a given pattern of polysemy is ultimately rooted in our shared human cognitive make-up. However, in comparison to the vigorous testing of claimed universals that has occurred in phonology, syntax and even basic lexical meaning, there has been little attempt to test proposed universals of semantic extension against a detailed areal study of non-European languages. To address this problem we examine a broad range of Australian languages to evaluate two hypothesized universals: one by Viberg (1984), concerning patterns of semantic extension across sensory modalities within the domain of perception verbs (i.e. intra-field extensions), and the other by Sweetser (1990), concerning the mapping of perception to cognition (i.e. trans-field extensions). Testing against the Australian data allows one claimed universal to survive, but demolishes the other, even though both assign primacy to vision among the senses.
The speakers of the Paraná dialect of Kaingáng, from whom the data of this study were gathered, have lived in close contact with the Brazilians since before the turn of the century. Although many members of this group are still monolingual and Kaingáng is spoken in all the homes, the influence of Portuguese is making an impact on the language. This can be seen not only in isolated loan words; it is slowly changing the time dimension of the language and the thinking of the Indians. The change seems to have come about first through loan words, but it is now also affecting the semantic structure of the language and is beginning to affect the grammatical structure as well. The study here presented deals with this change as it can be seen in relation to time expressions such as yesterday – today – tomorrow; units of time such as day – month – year; kinship terms; and finally aspect particles. In considering the time expressions, the meaning of various paradigms will be discussed. The paradigms are related to the time when events took place, to the sequence of events, and to the point of the action. No Brazilian influence can be observed here. In the discussion of the units of time, the semantic area of these units before and after Brazilian influence will be explored. Through Brazilian influence, vocabulary has been developed with which it is possible to pinpoint events accurately in time, which was not possible before. The time distinctions within the kinship system will be discussed, and how they change with the influence of Brazilian terms. A whole new generation distinction is added in the modified kinship system. Similarly, several new aspect particles are being created through contractions, which now contain a time element. The whole development shows an emphasis on fine distinctions in time depth which came about through the contact with Portuguese and which can be observed at several points in the structure of Kaingáng.
This paper investigates the accuracy and heterogeneity of output growth and inflation forecasts during the current and the four preceding NBER-dated U.S. recessions. We generate forecasts from six different models of the U.S. economy and compare them to professional forecasts from the Federal Reserve’s Greenbook and the Survey of Professional Forecasters (SPF). The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability to historical forecasts by professionals. The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy even though the models only make use of a small number of data series. Model forecasts compare particularly well to professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time. Thus, forecast heterogeneity constitutes a potentially important source of economic fluctuations. While the particular reasons for diversity in professional forecasts are not observable, the diversity in model forecasts can be traced to different modeling assumptions, information sets and parameter estimates. JEL Classification: C53, D84, E31, E32, E37 Keywords: Forecasting, Business Cycles, Heterogeneous Beliefs, Forecast Distribution, Model Uncertainty, Bayesian Estimation
The recent decline in euro area inflation has triggered new calls for additional monetary stimulus by the ECB in order to counter the threat of a self-reinforcing deflation and recession spiral. This note reviews the available evidence on inflation expectations, output gaps and other factors driving current inflation through the lens of the Phillips curve. It also draws a comparison to the Japanese experience with deflation in the late 1990s and the evidence from Japan concerning the output-inflation nexus at low trend inflation. The note concludes from this evidence that the risk of a self-reinforcing deflation remains very small. Thus, the ECB had best await the impact of the long-term refinancing operations decided in June, which have the potential to induce substantial monetary accommodation once implemented for the first time in September.
In the aftermath of the global financial crisis, the state of macroeconomic modeling and the use of macroeconomic models in policy analysis has come under heavy criticism. Macroeconomists in academia and policy institutions have been blamed for relying too much on a particular class of macroeconomic models. This paper proposes a comparative approach to macroeconomic policy analysis that is open to competing modeling paradigms. Macroeconomic model comparison projects have helped produce some very influential insights such as the Taylor rule. However, they have been infrequent and costly, because they require the input of many teams of researchers and multiple meetings to obtain a limited set of comparative findings. This paper provides a new approach that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. Using this approach a model archive is built that includes many well-known empirically estimated models that may be used for quantitative analysis of monetary and fiscal stabilization policies. A computational platform is created that allows straightforward comparisons of models’ implications. Its application is illustrated by comparing different monetary and fiscal policies across selected models. Researchers can easily include new models in the data base and compare the effects of novel extensions to established benchmarks thereby fostering a comparative instead of insular approach to model development.
The global financial crisis and the ensuing criticism of macroeconomics have inspired researchers to explore new modeling approaches. There are many new models that deliver improved estimates of the transmission of macroeconomic policies and aim to better integrate the financial sector in business cycle analysis. Policy making institutions need to compare available models of policy transmission and evaluate the impact and interaction of policy instruments in order to design effective policy strategies. This paper reviews the literature on model comparison and presents a new approach for comparative analysis. Its computational implementation enables individual researchers to conduct systematic model comparisons and policy evaluations easily and at low cost. This approach also contributes to improving reproducibility of computational research in macroeconomic modeling. Several applications serve to illustrate the usefulness of model comparison and the new tools in the area of monetary and fiscal policy. They include an analysis of the impact of parameter shifts on the effects of fiscal policy, a comparison of monetary policy transmission across model generations and a cross-country comparison of the impact of changes in central bank rates in the United States and the euro area. Furthermore, the paper includes a large-scale comparison of the dynamics and policy implications of different macro-financial models. The models considered account for financial accelerator effects in investment financing, credit and house price booms and a role for bank capital. A final exercise illustrates how these models can be used to assess the benefits of leaning against credit growth in monetary policy.
This paper reviews the rationale for quantitative easing when central bank policy rates reach near zero levels in light of recent announcements regarding direct asset purchases by the Bank of England, the Bank of Japan, the U.S. Federal Reserve and the European Central Bank. Empirical evidence from the previous period of quantitative easing in Japan between 2001 and 2006 is presented. During this earlier period the Bank of Japan was able to expand the monetary base very quickly and significantly. Quantitative easing translated into a greater and more lasting expansion of M1 relative to nominal GDP. Deflation subsided by 2005. As soon as inflation appeared to stabilize near a rate of zero, the Bank of Japan rapidly reduced the monetary base as a share of nominal income as it had announced in 2001. The Bank was able to exit from extensive quantitative easing within less than a year. Some implications for the current situation in Europe and the United States are discussed.
Recent evaluations of the fiscal stimulus packages enacted in the United States and Europe, such as Cogan, Cwik, Taylor and Wieland (2009) and Cwik and Wieland (2009), suggest that the GDP effects will be modest due to crowding-out of private consumption and investment. Corsetti, Meier and Mueller (2009a,b) argue that spending shocks are typically followed by consolidations with substantive spending cuts, which enhance the short-run stimulus effect. This note investigates the implications of this argument for the estimated impact of recent stimulus packages and the case for discretionary fiscal policy.
This paper introduces adaptive learning and endogenous indexation in the New-Keynesian Phillips curve and studies disinflation under inflation targeting policies. The analysis is motivated by the disinflation performance of many inflation-targeting countries, in particular the gradual Chilean disinflation with temporary annual targets. At the start of the disinflation episode, price-setting firms expect inflation to be highly persistent and opt for backward-looking indexation. As the central bank acts to bring inflation under control, price-setting firms revise their estimates of the degree of persistence. Such adaptive learning lowers the cost of disinflation. This reduction can be exploited by a gradual approach to disinflation. Firms that choose the rate for indexation also re-assess the likelihood that announced inflation targets determine steady-state inflation and adjust indexation of contracts accordingly. A strategy of announcing and pursuing short-term targets for inflation is found to influence the likelihood that firms switch from backward-looking indexation to the central bank's targets. As firms abandon backward-looking indexation, the costs of disinflation decline further. We show that an inflation targeting strategy that employs temporary targets can benefit from lower disinflation costs due to the reduction in backward-looking indexation.
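The adaptive-learning mechanism described above, in which firms revise their estimate of inflation persistence as disinflation proceeds, can be illustrated with a constant-gain recursive least-squares sketch. All values below (gain, persistence levels, noise scale) are hypothetical and chosen only for illustration; this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate inflation whose true persistence falls once disinflation starts
# (all numbers hypothetical)
T = 200
pi = np.zeros(T)
for t in range(1, T):
    rho_true = 0.9 if t < 100 else 0.3   # persistence drops under targeting
    pi[t] = rho_true * pi[t - 1] + rng.normal(scale=0.5)

# Constant-gain recursive least squares on pi_t = rho * pi_{t-1} + e_t:
# beliefs about persistence drift toward the new, lower value
gain = 0.05       # constant gain (hypothetical)
rho_hat = 0.9     # initial belief: inflation is highly persistent
R = 1.0           # running estimate of the regressor's second moment
path = []
for t in range(1, T):
    x = pi[t - 1]
    R += gain * (x * x - R)
    rho_hat += gain * (x / R) * (pi[t] - rho_hat * x)
    path.append(rho_hat)

print(path[98], path[-1])
```

With a constant gain, beliefs discount old data geometrically, which is what allows the estimated persistence to track the structural break instead of averaging over it.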
Inflation-targeting central banks have only imperfect knowledge about the effect of policy decisions on inflation. An important source of uncertainty is the relationship between inflation and unemployment. This paper studies optimal monetary policy in the presence of uncertainty about the natural unemployment rate, the short-run inflation-unemployment tradeoff and the degree of inflation persistence in a simple macroeconomic model, which incorporates rational learning by the central bank as well as private sector agents. Two conflicting motives drive the optimal policy. In the static version of the model, uncertainty provides a motive for the policymaker to move more cautiously than she would if she knew the true parameters. In the dynamic version, uncertainty also motivates an element of experimentation in policy. I find that the optimal policy that balances the cautionary and activist motives typically exhibits gradualism, that is, it still remains less aggressive than a policy that disregards parameter uncertainty. Exceptions occur when uncertainty is very high and when inflation is close to target.
This note argues that the European Central Bank should adjust its strategy in order to consider broader measures of inflation in its policy deliberations and communications. In particular, it points out that a broad measure of domestic goods and services price inflation such as the GDP deflator has increased along with the euro area recovery and the expansion of monetary policy since 2013, while HICP inflation has become more variable and, on average, has declined. Similarly, the cost of owner-occupied housing, which is excluded from the HICP, has risen during this period. Furthermore, it shows that optimal monetary policy at the effective lower bound on nominal interest rates aims to return inflation more slowly to the inflation target from below than in normal times because of uncertainty about the effects and potential side effects of quantitative easing.
This working paper presents the summary statement of Prof. Volker Wieland on the European Central Bank's purchase programme for public sector bonds (Public Sector Purchase Programme, PSPP), delivered before the Federal Constitutional Court on 30 July 2019. The focus is on the classification of the PSPP as a monetary policy measure and on the proportionality of the programme and its implementation. Further questions regarding implementation, in particular the announcement of purchases, their limits and the distance to the primary market for government bonds, are also briefly addressed.
While record-making prices at art auctions receive headline news coverage, artists typically do not receive any direct proceeds from those sales. Early-stage creative work in any field is perennially difficult to value, but the valuation, reward, and incentivization of artistic labor are particularly fraught. A core challenge in studying the real return on artists' work is the extreme difficulty of accessing data from when an artwork was first sold. Galleries keep private records that are difficult to access and to match to public auction results. This paper, for the first time, uses archivally sourced primary-market records for the artists Jasper Johns and Robert Rauschenberg. Although this approach restricts the size of the data set, it yields much more accurate returns on art than typical regression and hedonic models. We find that if Johns and Rauschenberg had retained 10% equity in their work when it was first sold, the returns to them when the work was resold at auction would have outperformed the US S&P 500 by between 2 and 986 times. This work opens up vast policy recommendations with regard to secondary art market sales, entrepreneurial strategies using blockchain technology, and implications for how we compensate creative work.
Employing the art-collection records of Burton and Emily Hall Tremaine, we consider whether early-stage art investors can be understood as venture capitalists. Because the Tremaines bought artists' work very close to an artwork's creation, with 69% of the works in our study purchased within one year of their making, their collecting practice can best be framed as venture-capital investment in art. The Tremaines also illustrate art collecting as social-impact investment, owing to their combined strategy of art sales and museum donations, for which the collectors received a tax credit under US rules. Because the Tremaines' museum donations took place at a time when U.S. marginal tax rates ranged from 70% to 91%, donating a work achieved near "donation parity" with selling it on the market, creating a parallel to ESG investment in the management of multiple forms of value.
With Council Regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into force on 31 May 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with its 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor's assets and to avoid forum shopping. The Insolvency Regulation, therefore, provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the 'liquidator' in the other Member States. The Regulation also contains important choice of law (or: private international law) provisions. It is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
Namibia is known to be the most arid country south of the Sahara. Average annual rainfall is not only relatively low in most parts of the country, it is also highly variable. Only 8 per cent of the country receives enough rain during a normal rainy season to practice rainfed cultivation. At the same time between 60 per cent and 70 per cent of the population depend on subsistence agro-pastoralism in non-freehold or communal areas. Against the background of rising unemployment, the livelihoods of the majority of these people are likely to depend on natural resources in the foreseeable future.
Natural resources generally are under considerable strain. As the rural population increases, so does the demand for natural resources, land and water specifically. Dependency on subsistence farming, which is the result of large-scale rural poverty, exacerbates the problem. Large parts of the country are stocked injudiciously, resulting in overgrazing, and water is frequently over-abstracted, leading to declining water tables (MET 2005: 2).
Unequal access to both land and water has prompted government to introduce reforms in these sectors. These reforms were guided by the desire to manage resources more sustainably while providing more equal access to them. In terms of NDP 2, sustainability means using natural resources in such a way as not to 'compromise the ability of future generations to make use of these resources' (NDP 2: 595).
Immediately after Independence government started reform processes in the land and water sectors. However, these reforms have happened at different paces and largely independent of each other. Increasingly policy makers and development practitioners realised that land and water management needed to be integrated, as decisions about land management and land use options had a direct impact on water resources. Conversely the availability of water sets the parameters for what is possible in terms of agricultural production and other land uses. The north-central regions face a particular challenge in this regard as the region carries more livestock than it can sustain in the long run. At the same time, close to half the households do not own any livestock. Access to livestock by these households would improve their abilities to cultivate their land more efficiently in order to feed themselves and thus reduce poverty levels.
But livestock are a major consumer of water. In 2000 livestock consumed more water than the domestic sector: 77 Mm3/a compared to 67 Mm3/a (Urban et al. 2003, Annex 7: 2). This situation prompted a Project Progress Report on the Namibia Water Resources Management Review in 2003 to conclude: 'Given the extreme water scarcity in most parts of the country, land and water issues are closely linked. It therefore seems indispensable to mutually adjust land- and water-sector reform processes' (Ibid: 20).
This paper will briefly look at four institutions that are central to land and water management with a view to assessing the extent to which they interact. These are Communal Land Boards, Water Point Committees, Traditional Authorities and Regional Councils. A discussion of relevant policy documents and legislative instruments will investigate whether the existing policy framework provides for an integrated approach or not. Before doing this, it appears sensible to briefly situate these four institutions in the wider maze of institutions operating at regional and sub-regional level. All these institutions, important as they are in the quest to improve participation at the regional and sub-regional level, are competing for the time and input of small-scale farmers.
The unintended consequences of the debt ... will increased government expenditure hurt the economy?
(2011)
In 2008, governments in many countries embarked on large fiscal expenditure programmes, with the intention to support the economy and prevent a more serious recession. In this study, the overall impact of a substantial increase in fiscal expenditure is considered by providing a novel analysis of the most relevant recent experience in similar circumstances, namely that of Japan in the 1990s. Then a weak economy with risk-averse banks seemed to require some of the largest peacetime fiscal stimulation programmes on record, albeit with disappointing results. The explanations provided by the literature and their unsatisfactory empirical record are reviewed. An alternative explanation, derived from early Keynesian models on the ineffectiveness of fiscal policy, is presented in the form of a modified Fisher equation, which incorporates the recent findings in the credit view literature. The model postulates complete quantity crowding out. It is subjected to empirical tests, which prove supportive. Thus evidence is found that fiscal policy, if not supported by suitable monetary policy, is likely to crowd out private sector demand, even in an environment of falling or near-zero interest rates. As a policy conclusion it is pointed out that by changing the funding strategy, complete crowding out can be avoided and a positive net effect produced. The proposed framework creates common ground between proponents of Keynesian views (as held, among others, by Blinder and Solow), monetarist views (as held in particular by Milton Friedman) and those of leading contemporary macroeconomists (such as Mankiw).
During the past decade, processes associated with what is popularly though perhaps misleadingly known as globalization have come within the purview of anthropology. Migration and mobility ‐ and the footloose or even rootless social groups that they produce ‐ as well as the worldwide diffusion of commodities, media images, political ideas and practices, technologies and scientific knowledge today are on anthropology's research agenda. As a consequence, received notions about the ways in which culture relates to territory have been abandoned. The term transnationalisation captures cultural processes that stream across the borders of nation states. Anthropologists have been forced to revise the notion that transnationalisation would inevitably bring about a culturally homogenized world. Instead, we are witnessing greatly increasing cultural diversity. New cultural forms grow out of historically situated articulations of the local and the global. Rather than left-over relics from traditional orders, these are decidedly modern, yet far from uniform. The essay engages the idea of the pluralization of modernities, explores its potential for interdisciplinary research agendas, and also inquires into problematic assumptions underlying this new theoretical concept.
"If there is a sense of reality," Robert Musil concluded at the beginning of the 20th century, "there must also be a sense of possibility." By this he means the ability "to think everything that could just as well be, and to attach no more importance to what is than to what is not." With the concept of the sense of possibility, which points to the relativity and alternativity of individual thought as well as to the utopia of a different, hypothetical life, Robert Musil gave expression in his epochal novel Der Mann ohne Eigenschaften to the modern individual's awareness of contingency, which at the end of the 20th century was to become the basic mode of existence and of the constitution of the individual as such. For all its vagueness, the concept of contingency rests on a fundamental understanding going back to Aristotle, which Niklas Luhmann defines as follows: something is contingent if it is neither necessary nor impossible; that is, if it can be as it is (was, will be), but could also be otherwise. The concept thus designates what is given (experienced, expected, thought, imagined) in view of its possibly being otherwise; it designates objects within the horizon of possible variations.
The foregoing considerations lead to the following results:
1. The SchVG permits the creditors of all bonds issued before the Act entered into force, including those not governed by the SchVG 1899, to adopt a resolution on the applicability of the SchVG (opt-in).
2. A partial choice of foreign law in the bond terms does not preclude the applicability of the SchVG, and in particular of the opt-in rule, as long as the substance of the securitized claim is governed by German law.
This already follows from the law as it stands. Owing to conflicting lower-court case law, however, there is a need for statutory clarification, particularly because the questions raised affect the functioning and market acceptance of the new Act in essential areas of application. In the course of the reform of the law on debt securities, the Federal Government announced that it would continuously review whether the intended effects of the Act have been achieved and, where necessary, take the resulting measures in good time. After the recent streamlining of the release procedure, it is to be hoped that the statutory clarification identified here will also be addressed swiftly.
This contribution recalls Hugo Sinzheimer's central ideas on social self-determination, the labour constitution, labour law as a field of law sui generis that transcends the boundary between private and public law, and the sociology of law, in order to derive from them some conclusions for labour-law scholarship at the Faculty of Law of Goethe University.
On the verbalization of space in picture stories by German-speaking preschool and primary school children
(2002)
The subject of the present study is the verbalization of space in picture stories by German-speaking preschool and primary school children. Methodologically, the study follows work on the development of children's narrative competence based on picture stories as it has been conducted in recent times […] (see above all Berman & Slobin 1994). With regard to the general linguistic analysis of the verbalization of space, the study is indebted above all to the typological studies of L. Talmy (1985, 1991). […] The aim of the present study is to examine the role of the verbalization of spatial relations under two aspects: with regard to the construction of coherent narrative discourse, and with regard to the linguistic means by which the children refer to static and dynamic spatial relations. In accordance with the threefold division of the I-now-here origo, spatial relations, alongside reference to persons and to temporal relations, constitute one of the three domains in which textual coherence manifests itself. The verbalization of space concerns, on the one hand, the introduction, maintenance and shifting of narrative locations and, on the other hand, static spatial configurations as opposed to dynamic spatial events. […] The study is divided into a theoretical and an empirical main part.
The modern tontine: an innovative instrument for longevity risk management in an aging society
(2016)
The changing social, financial and regulatory frameworks, such as an increasingly aging society, the current low interest rate environment, as well as the implementation of Solvency II, lead to the search for new product forms for private pension provision. In order to address the various issues, these product forms should reduce or avoid investment guarantees and risks stemming from longevity, still provide reliable insurance benefits and simultaneously take account of the increasing financial resources required for very high ages. In this context, we examine whether a historical concept of insurance, the tontine, entails enough innovative potential to extend and improve the prevailing privately funded pension solutions in a modern way. The tontine basically generates an age-increasing cash flow, which can help to match the increasing financing needs at old ages. However, the tontine generates volatile cash flows, so that - especially in the context of an aging society - the insurance character of the tontine cannot be guaranteed in every situation. We show that partial tontinization of retirement wealth can serve as a reliable supplement to existing pension products.
A tontine provides a mortality-driven, age-increasing payout structure through the pooling of mortality. Because a tontine does not entail any guarantees, the payout structure of a tontine is determined by the pooling of individual characteristics of tontinists. Therefore, the surrender decision of single tontinists directly affects the remaining members' payouts. Nevertheless, the opportunity to surrender is crucial to the success of a tontine from a regulatory as well as a policyholder perspective. Therefore, this paper derives the fair surrender value of a tontine, first on the basis of expected values, and then incorporates the increasing payout volatility to determine an equitable surrender value. Results show that the surrender decision requires a discount on the fair surrender value as security for the remaining members. The discount intensifies with decreasing tontine size and increasing risk aversion. However, tontinists are less willing to surrender for decreasing tontine size and increasing risk aversion, creating a natural protection against tontine runs stemming from short-term liquidity shocks. Furthermore, we argue that a surrender decision based on private information requires a discount on the fair surrender value as well.
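The age-increasing payout profile that both tontine abstracts rely on follows from simple mortality pooling: a fixed fund return is split among ever fewer survivors. A deliberately stylized simulation can make this visible; the mortality curve, fund size and return below are hypothetical and not calibrated to the papers.

```python
import numpy as np

rng = np.random.default_rng(1)

n0 = 1000            # initial tontinists (hypothetical)
contribution = 100.0 # one-off contribution per head (hypothetical)
fund = n0 * contribution
rate = 0.03          # flat investment return (hypothetical)

alive = n0
payouts = []
for age in range(65, 100):
    q = 0.01 + 0.001 * (age - 65) ** 1.5   # stylized mortality probability
    deaths = rng.binomial(alive, min(q, 1.0))
    alive -= deaths
    if alive == 0:
        break
    # the fund's return for the year is split among the survivors
    payouts.append(fund * rate / alive)

print(payouts[0], payouts[-1])
```

Because the number of survivors can only fall, the per-survivor payout can only rise, which is exactly the age-increasing cash flow, and its volatility, that the abstracts discuss.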
FIFO is the most prominent queueing strategy due to its simplicity and the fact that it only works with local information. Its analysis within adversarial queueing theory, however, has shown that there are networks that are not stable under the FIFO protocol, even at arbitrarily low rate. On the other hand, there are networks that are universally stable, i.e., they are stable under every greedy protocol at any rate r < 1. The question as to which networks are stable under the FIFO protocol arises naturally. We offer the first polynomial time algorithm for deciding FIFO stability and simple-path FIFO stability of a directed network, answering an open question posed in [1, 4]. It turns out that there are networks that are FIFO stable but not universally stable, hence FIFO is not a worst-case protocol in this sense. Our characterization of FIFO stability is constructive and disproves an open characterization in [4].
Central banks have faced a succession of crises over the past years, as well as a number of structural factors such as the transition to a greener economy, demographic developments, digitalisation and possibly increased onshoring. These suggest that the future inflation environment will be different from the one we know. Thus, uncertainty about important macroeconomic variables and, in particular, inflation dynamics will likely remain high.
The paper uses fiscal reaction functions for a panel of euro-area countries to investigate whether euro membership has reduced the responsiveness of countries to shocks in the level of inherited debt compared to the period prior to accession to the euro. While we find some evidence for such a loss in prudence, the results are not robust to changes in the specification, such as the exclusion of Greece from the panel. This suggests that the current debt problems may result to a large extent from pre-existing debt levels prior to entry or from a larger need for fiscal prudence in a common currency, while an adverse change in the fiscal reaction functions cannot be established for most countries.
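A fiscal reaction function of the kind estimated in the abstract above regresses the primary balance on the inherited (lagged) debt level; a positive debt coefficient signals fiscal prudence. A minimal single-country sketch with simulated data; the coefficient value, sample size and noise level are hypothetical, and the paper's actual panel specification is richer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data for one country (all numbers hypothetical)
T = 100
debt = rng.uniform(40, 120, size=T)      # lagged debt-to-GDP ratio, per cent
true_response = 0.05                     # prudent: balance improves with debt
balance = -2.0 + true_response * debt + rng.normal(scale=0.5, size=T)

# OLS estimate of the debt response, the coefficient of interest
X = np.column_stack([np.ones(T), debt])
coef, *_ = np.linalg.lstsq(X, balance, rcond=None)
print(coef[1])
```

In the panel setting of the paper, the interesting question is whether this slope falls after euro accession, which amounts to adding an interaction with a membership dummy.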
The pressure on tax haven countries to engage in tax information exchange shows first effects on capital markets. Empirical research suggests that investors do react to information exchange and partially withdraw from previous secrecy jurisdictions that open up to information exchange. While some of the economic literature emphasizes possible positive effects of tax havens, the present paper argues that proponents of positive effects may have started from questionable premises, in particular when it comes to the effects that tax havens have for emerging markets like China and India.
Inflation is a construct. It is perceived differently by different actors, partly because consumption baskets differ, and partly because expectations are formed differently. This contribution discusses the heterogeneity of inflation and of its perception, and what this implies for the target variable of central bank policy.
This paper studies the distributional consequences of a systematic variation in expenditure shares and prices. Using European Union Household Budget Surveys and Harmonized Index of Consumer Prices data, we construct household-specific price indices and reveal the existence of a pro-rich inflation in Europe. Particularly, over the period 2001-15, the consumption bundles of the poorest deciles in 25 European countries have, on average, become 10.5 percentage points more expensive than those of the richest decile. We find that ignoring the differential inflation across the distribution underestimates the change in the Gini (based on consumption expenditure) by up to 0.03 points. Cross-country heterogeneity in this change is large enough to alter the inequality ranking of numerous countries. The average inflation effect we detect is almost as large as the change in the standard Gini measure over the period of interest.
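The household-specific price indices used in the abstract above are, in essence, expenditure-share-weighted averages of goods-level inflation rates. A minimal two-group sketch; the shares and inflation rates are invented for illustration, whereas the paper builds them from Household Budget Survey shares and HICP price data.

```python
import numpy as np

# Hypothetical expenditure shares (rows = household groups, cols = goods);
# poorer households spend relatively more on food and energy
shares = np.array([
    [0.5, 0.3, 0.2],   # bottom decile: food, energy, services
    [0.2, 0.2, 0.6],   # top decile
])
inflation = np.array([0.04, 0.06, 0.01])  # goods-level inflation rates

# Group-specific inflation = share-weighted average of goods inflation
group_pi = shares @ inflation
pro_rich_gap = group_pi[0] - group_pi[1]  # positive => pro-rich inflation
print(group_pi, pro_rich_gap)
```

With these illustrative numbers the bottom decile faces higher inflation than the top decile, the qualitative pattern the paper documents for Europe over 2001-15.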