A resampling method based on the bootstrap and a bias-correction step is developed for improving the Value-at-Risk (VaR) forecasting ability of the normal-GARCH model. Compared to the use of more sophisticated GARCH models, the new method is fast, easy to implement, numerically reliable, and, except for the choice of a window length L for the bias-correction step, fully data driven. The results for several different financial asset returns over a long out-of-sample forecasting period, as well as for simulated data, strongly support the use of the new method, and its performance is not sensitive to the choice of L. JEL Classification: C22, C53, C63, G12
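The mechanics of a bootstrap bias correction for a parametric VaR forecast can be sketched as follows. This is a hypothetical illustration, not the authors' procedure: it uses a plain normal model rather than the paper's normal-GARCH specification, and the function names and the number of bootstrap replications B are assumptions.

```python
import random
import statistics

def normal_var(returns, alpha=0.01):
    # Parametric VaR under normality: the alpha-quantile of the fitted
    # N(mu, sigma) distribution, reported as a positive loss number.
    mu = statistics.fmean(returns)
    sigma = statistics.stdev(returns)
    return -statistics.NormalDist(mu, sigma).inv_cdf(alpha)

def bias_corrected_var(returns, L=250, B=500, alpha=0.01, seed=0):
    # Bootstrap bias correction over a window of the last L observations:
    # resample the window B times, re-estimate VaR on each resample, and
    # subtract the estimated bias from the original estimate.
    rng = random.Random(seed)
    window = list(returns)[-L:]
    base = normal_var(window, alpha)
    boot = [normal_var([rng.choice(window) for _ in window], alpha)
            for _ in range(B)]
    bias = statistics.fmean(boot) - base
    return base - bias
```

The bias-corrected estimate is the standard `2 * base - mean(boot)` adjustment; only the window length L must be chosen by the user, mirroring the single tuning parameter mentioned in the abstract.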
This chapter outlines the conditions under which accounting-based smoothing can be beneficial for policyholders who hold with-profit or participating payout life annuities (PLAs). We use a realistically-calibrated model of PLAs to explore how alternative accounting techniques influence policyholder welfare as well as insurer profitability and stability. We find that accounting smoothing of participating life annuities is favorable to consumers and insurers, as it mitigates the impact of short-term volatility and enhances the utility of these long-term annuity contracts.
Accounting for financial stability: Bank disclosure and loss recognition in the financial crisis
(2020)
This paper examines banks’ disclosures and loss recognition in the financial crisis and identifies several core issues for the link between accounting and financial stability. Our analysis suggests that, going into the financial crisis, banks’ disclosures about relevant risk exposures were relatively sparse. Such disclosures came later after major concerns about banks’ exposures had arisen in markets. Similarly, the recognition of loan losses was relatively slow and delayed relative to prevailing market expectations. Among the possible explanations for this evidence, our analysis suggests that banks’ reporting incentives played a key role, which has important implications for bank supervision and the new expected loss model for loan accounting. We also provide evidence that shielding regulatory capital from accounting losses through prudential filters can dampen banks’ incentives for corrective actions. Overall, our analysis reveals several important challenges if accounting and financial reporting are to contribute to financial stability.
This paper investigates what we can learn from the financial crisis about the link between accounting and financial stability. The picture that emerges ten years after the crisis is substantially different from the picture that dominated the accounting debate during and shortly after the crisis. Widespread claims about the role of fair-value (or mark-to-market) accounting in the crisis have been debunked. However, we identify several other core issues for the link between accounting and financial stability. Our analysis suggests that, going into the financial crisis, banks’ disclosures about relevant risk exposures were relatively sparse. Such disclosures came later after major concerns about banks’ exposures had arisen in markets. Similarly, banks delayed the recognition of loan losses. Banks’ incentives seem to drive this evidence, suggesting that reporting discretion and enforcement deserve careful consideration. In addition, bank regulation through its interlinkage with financial accounting may have dampened banks’ incentives for corrective actions. Our analysis illustrates that a number of serious challenges remain if accounting and financial reporting are to contribute to financial stability.
Accounting for financial instruments in the banking industry: conclusions from a simulation model
(2003)
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature.
Recent changes in accounting regulation for financial instruments (SFAS 133, IAS 39) have been heavily criticized by representatives from the banking industry. They argue for retaining a historical cost based "mixed model" where accounting for financial instruments depends on their designation to either trading or nontrading activities. In order to demonstrate the impact of different accounting models for financial instruments on the financial statements of banks, we develop a bank simulation model capturing the essential characteristics of a modern universal bank with investment banking and commercial banking activities. In our simulations we look at different scenarios with periods of increasing/decreasing interest rates using historical data and with different banking strategies (fully hedged; partially hedged). The financial statements of our model bank are prepared under different accounting rules ("Old" IAS before implementation of IAS 39; current IAS) with and without hedge accounting as offered by the respective sets of rules. The paper identifies critical issues of applying the different accounting rules for financial instruments to the activities of a universal bank. It demonstrates important shortcomings of the "Old" IAS rules (before IAS 39), and of the current IAS rules. Under the current IAS rules the results of a fully hedged bank may have to show volatility in income statements due to changes in market interest rates. Accounting results of a partially hedged bank in the same scenario may be less affected even though there are economic gains or losses.
Returns to experience for U.S. workers have changed over the post-war period. This paper argues that a simple model goes a long way towards replicating these changes. The model features three well-known ingredients: (i) an aggregate production function with constant skill-biased technical change; (ii) cohort qualities that vary with average years of schooling; and crucially (iii) time-invariant age-efficiency profiles. The model quantitatively accounts for changes in longitudinal and cross-sectional returns to experience, as well as the differential evolution of the college wage premium for young and old workers.
From the late Middle Ages to early modern times (ca. 1200-1600) the Lübeck City Council was the most important courthouse in the Baltic. About 100 cities and towns on its shores lived according to the law of Lübeck. The paper deals with the old theory that Imperial law, i.e. mainly the learned Ius commune, was generally rejected by the council on the grounds of its foreign nature. The paper rejects this view with the help of eight case studies. There are some rather spectacular statements against Imperial law, but a closer look reveals that they have to be seen in the light of a specific practical context. They must not be confused with general statements, in which the council had no interest. Its attitude towards Learned Law was flexible and purely pragmatic.
It is my intention to make two major points in this paper: 1. The first has to do with finding a frame within which the modal expressions of one particular Ancient IE [Indoeuropean] language – I have chosen Classical Greek – can be best described. I shall try to point out that the regularities which we find in these expressions must depend on an underlying principle, represented by abstract structures. These structures are semanto-syntactic, which means that the semantic properties or bundles of properties are arranged not in a linear order but in a hierarchical order, analogous to a bracketing in a PS structure. The abstract structures we propose have, of course, a very tentative character. They can only be accepted as far as evidence for them can be furnished. 2. My second point has to do with the modal verb forms that were the object of the studies of most Indo-Europeanists. If in the innermost bracket of a semanto-syntactic structure two semantic properties or bundles of properties can be exchanged without any further change in the total structure, and if this change is correlated with a change in verbal mood forms and nothing else, then I think we are faced with a case where these forms can be said to have a meaning of their own. I shall also try to show how these meanings are to be understood as bundles of features rather than as unanalyzed terms. In my final remarks: I shall try to outline the bearing these views have on comparative IE linguistics.
Unless the parties have agreed otherwise, fixed-term service relationships end upon expiry of their term under § 620 Abs. 1 BGB or through extraordinary termination under § 626 BGB. Where no important reason within the meaning of § 626 BGB exists, at most an unsuccessful formal warning (Abmahnung) can open the way to early termination of the service relationship. No compelling reasons speak against applying the formal warning to a managing director (Geschäftsführer) as well. Nothing to the contrary follows from § 14 Abs. 1 Nr. 1 KSchG or from the now settled case law of the Bundesgerichtshof, which does not address the case discussed here of breaches of duty below the threshold of § 626 BGB. As to the concrete application, two constellations must in principle be distinguished. In the first, where the managing director has been removed from office under § 38 GmbHG but his service relationship continues, he can be formally warned only if, after his removal, he breaches a duty that continues to apply and this breach does not suffice for termination without notice under § 626 BGB. If, in connection with the end of his corporate office, the managing director is transferred to a position below the management level, the formal warning applies to future breaches of duty according to the general principles. If a managing director is left in office despite a breach of duty, there is likewise no obstacle to a formal warning. The balancing of the parties' interests required by § 314 Abs. 2 Satz 2 in conjunction with § 323 Abs. 2 Nr. 3 BGB shows that, in the constellation discussed, the formal warning is a suitable instrument for balancing interests, one to which both parties should resort first.
The analyses based on the German Income and Consumption Surveys (Einkommens- und Verbrauchsstichproben) show that the largely stable distribution of net equivalent income found for the "old" Federal Republic conceals marked changes at earlier stages of the distribution process. For individual earnings and individual factor incomes (recipients only), the aggregate inequality measures considered here barely rose between 1973 and 1988; kernel density estimates, however, reveal a slight polarization trend in the bimodal distribution, since density increased at the tails and the trough between the two peaks deepened. Once the household context is taken into account - by aggregating individual factor incomes at the household level and weighting with an equivalence scale - the distributional changes prove even more pronounced. The aggregate inequality measures rose sharply, and the relative heights of the two modes of the bimodal distribution reversed: whereas in 1973 the first peak, located in the range of marginal factor equivalent incomes, was clearly below the second peak just under the mean, in 1988 the first peak was clearly higher than the second. The relative frequency of marginal factor equivalent incomes thus clearly increased over time, as did that in the upper income range. Nevertheless, one can speak of polarization only in a broader sense, since the density trough between the modes was higher in 1988 than in 1973. It may seem reassuring that - at least in the period before reunification - the tax-and-transfer system was able to compensate for the growing disparity of the factor income distribution insofar as the relative frequency of the low-income range (defined here as 50% of mean net equivalent income) increased only comparatively moderately.
This impression must, however, be qualified in view of the data limitations mentioned at the outset. The inadequate coverage of the upper and lower tails of the income distribution suggests that our analyses underestimate the actual trend of increasing inequality and polarization.
We develop a model that endogenizes the manager's choice of firm risk and of inside debt investment strategy. Our model delivers two predictions. First, managers have an incentive to reduce the correlation between inside debt and company stock in bad times. Second, managers that reduce such a correlation take on more risk in bad times. Using a sample of U.S. public firms, we provide evidence consistent with the model's predictions. Our results suggest that the weaker link between inside debt and company stock in bad times does not translate into a mitigation of debt-equity conflicts.
The long-run consumption risk model provides a theoretically appealing explanation for prominent asset pricing puzzles, but its intricate structure presents a challenge for econometric analysis. This paper proposes a two-step indirect inference approach that disentangles the estimation of the model's macroeconomic dynamics and the investor's preference parameters. A Monte Carlo study explores the feasibility and efficiency of the estimation strategy. We apply the method to recent U.S. data and provide a critical re-assessment of the long-run risk model's ability to reconcile the real economy and financial markets. This two-step indirect inference approach is potentially useful for the econometric analysis of other prominent consumption-based asset pricing models that are equally difficult to estimate.
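The core logic of indirect inference - matching an auxiliary statistic between real and simulated data - can be illustrated with a deliberately tiny example. This is not the paper's two-step estimator for the long-run risk model; it is a hypothetical sketch in which the lag-1 autocorrelation serves as the auxiliary statistic and an AR(1) coefficient is recovered by simulating over a parameter grid.

```python
import random
import statistics

def simulate_ar1(phi, n, seed):
    # Simulate an AR(1) process x_t = phi * x_{t-1} + e_t with e_t ~ N(0, 1).
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

def autocorr1(xs):
    # Sample lag-1 autocorrelation, used as the auxiliary statistic.
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

def indirect_inference_phi(data, grid, n_sim=5000, seed=1):
    # Pick the phi whose simulated auxiliary statistic is closest
    # to the one computed from the observed data.
    target = autocorr1(data)
    return min(grid,
               key=lambda phi: abs(autocorr1(simulate_ar1(phi, n_sim, seed)) - target))
```

In the paper's setting, the auxiliary model and the simulated structural model are far richer, but the matching step follows the same pattern.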
We model the motives for residents of a country to hold foreign assets, including the precautionary motive that has been omitted from much previous literature as intractable. Our model captures many of the principal insights from the existing specialized literature on the precautionary motive, deriving a convenient formula for the economy’s target value of assets. The target is the level of assets that balances impatience, prudence, risk, intertemporal substitution, and the rate of return. We use the model to shed light on two topical questions: the “upstream” flows of capital from developing countries to advanced countries, and the long-run impact of resorbing global financial imbalances.
We present a tractable model of the effects of nonfinancial risk on intertemporal choice. Our purpose is to provide a simple framework that can be adopted in fields like representative-agent macroeconomics, corporate finance, or political economy, where most modelers have chosen not to incorporate serious nonfinancial risk because available methods were too complex to yield transparent insights. Our model produces an intuitive analytical formula for target assets, and we show how to analyze transition dynamics using a familiar Ramsey-style phase diagram. Despite its starkness, our model captures most of the key implications of nonfinancial risk for intertemporal choice.
A theory of the boundaries of banks with implications for financial integration and regulation
(2015)
We offer a theory of the "boundary of the firm" that is tailored to banking, as it builds on a single inefficiency arising from risk-shifting and as it takes into account both interbank lending as an alternative to integration and the role of possibly insured deposit funding. Among other things, it explains both why deeper economic integration should cause greater financial integration through bank mergers and interbank lending, albeit this typically remains inefficiently incomplete, and why economic disintegration (or "desynchronization"), as currently witnessed in the European Union, should cause less interbank exposure. It also suggests that recent policy measures such as the preferential treatment of retail deposits, the extension of deposit insurance, or penalties on "connectedness" could all lead to substantial welfare losses.
The well-known proof of termination of reduction in simply typed calculi is adapted to a monomorphically typed lambda-calculus with case, constructors, and recursive data types. The proof differs at several places from the standard proof. Perhaps it is useful and can also be extended to more complex calculi.
A tale of one exchange and two order books: effects of fragmentation in the absence of competition
(2018)
Exchanges nowadays routinely operate multiple, almost identically structured limit order markets for the same security. We study the effects of such fragmentation on market performance using a dynamic model where agents trade strategically across two identically-organized limit order books. We show that fragmented markets, in equilibrium, offer higher welfare to intermediaries at the expense of investors with intrinsic trading motives, and lower liquidity than consolidated markets. Consistent with our theory, we document improvements in liquidity and lower profits for liquidity providers when Euronext, in 2009, consolidated its order flow for stocks traded across two country-specific and identically-organized order books into a single order book. Our results suggest that competition in market design, not fragmentation, drives previously documented improvements in market quality when new trading venues emerge; in the absence of such competition, market fragmentation is harmful.
Did the Federal Reserve's Quantitative Easing (QE) in the aftermath of the financial crisis have macroeconomic effects? To answer this question, the authors estimate a large-scale DSGE model over the sample from 1998 to 2020, including data on the Fed’s balance sheet. The authors allow for QE to affect the economy via multiple channels that arise from several financial frictions. Their nonlinear Bayesian likelihood approach fully accounts for the zero lower bound on nominal interest rates. They find that between 2009 and 2015, QE increased output by about 1.2 percent. This reflects a net increase in investment of nearly 9 percent, accompanied by a 0.7 percent drop in aggregate consumption. Both government bond and capital asset purchases were effective in improving financing conditions. Capital asset purchases in particular significantly facilitated new investment and increased production capacity. Against the backdrop of a fall in consumption, supply-side effects dominated, which led to a mild disinflationary effect of about 0.25 percent annually.
A stochastic forward-looking model to assess the profitability and solvency of European insurers
(2016)
In this paper, we develop an analytical framework for conducting forward-looking assessments of profitability and solvency of the main euro area insurance sectors. We model the balance sheet of an insurance company encompassing both life and non-life business and we calibrate it using country level data to make it representative of the major euro area insurance markets. Then, we project this representative balance sheet forward under stochastic capital markets, stochastic mortality developments and stochastic claims. The model highlights the potential threats to insurers' solvency and profitability stemming from a sustained period of low interest rates, particularly in those markets which are largely exposed to reinvestment risks due to the relatively high guarantees and generous profit participation schemes. The model also shows how the resilience of insurers to adverse financial developments heavily depends on the diversification of their business mix. Finally, the model identifies potential negative spillovers between life and non-life business through the redistribution of capital within groups.
In this paper we estimate a small model of the euro area to be used as a laboratory for evaluating the performance of alternative monetary policy strategies. We start with the relationship between output and inflation and investigate the fit of the nominal wage contracting model due to Taylor (1980) and three different versions of the relative real wage contracting model proposed by Buiter and Jewitt (1981) and estimated by Fuhrer and Moore (1995a) for the United States. While Fuhrer and Moore reject the nominal contracting model in favor of the relative contracting model which induces more inflation persistence, we find that both models fit euro area data reasonably well. When considering France, Germany and Italy separately, however, we find that the nominal contracting model fits German data better, while the relative contracting model does quite well in countries which transitioned out of a high inflation regime such as France and Italy. We close the model by estimating an aggregate demand relationship and investigate the consequences of the different wage contracting specifications for the inflation-output variability tradeoff, when interest rates are set according to Taylor's rule.
A safe core mandate
(2023)
Central banks have vastly expanded their footprint on capital markets. At a time of extraordinary pressure by many sides, a simple benchmark for the scale and scope of their core mandate of price and financial stability may be useful.
We make a case for a narrow mandate to maintain and safeguard the border between safe and quasi-safe assets. This ex-ante definition minimizes ambiguity, discourages risk creation, and limits panic runs, primarily by separating market demand for reliable liquidity from risk-intolerant, price-insensitive demand for a safe store of value. The central bank may occasionally be forced to intervene beyond the safe core but should not be bound by any such ex-ante mandate, unless directed to specific goals set by legislation with explicit fiscal support.
We review distinct features of liquidity and safety demand, seeking a definition of the safety border, and discuss LOLR support for borderline safe assets such as MMF or uninsured deposits.
A safe core formulation is close to the historical focus on regulated entities, collateralized lending and attention to the public debt market, but its specific framing offers some context on controversial issues such as the extent of LOLR responsibilities. It also justifies a persistently large scale for central bank liabilities (Greenwood, Hanson and Stein 2016), as safety demand is related to financial wealth rather than GDP. Finally, it is consistent with an active central bank role in supporting liquidity, trading, and clearing in government debt markets (Duffie 2020, 2021).
This paper was presented at the workshop “Goods, Languages, and Cultures along the Silk Road” at Goethe University Frankfurt am Main, October 18 and 19, 2019. While many contributions to the workshop focused on recent developments in China’s current “New Silk Road” politics, on forms of communication, and on contemporary exchange of goods and ideas across so-called Silk Road countries in the Caucasus and Central Asia and with China, this short essay focuses on the history of the so-called Silk Road as an important transport connection. Although what is now called the “Silk Road” was not a pure East-West binary in antiquity but rather developed into a network that also led to the South and North, the focus here will be on describing the East-West connection.
I will start with a few brief remarks on the origins of the connection referred to as the Silk Road and will then introduce the different great empires that shaped this connection between antiquity and the Middle Ages through military campaigns and by using it as a trading route and network. But the Silk Road was by no means only of economic and military importance. Its significance for the exchange and dissemination of religions should also be mentioned. This paper does not detail the importance of the numerous individual religions in the area of the Silk Road but discusses the phenomenon of the spread of religions and the loss of some of their own distinguishing characteristics in this spread, a phenomenon that could be described as a “unity of opposites” (coincidentia oppositorum). Finally, the essay asks who, in the face of the regular replacement of powers, held sovereignty over the transport connection: the subject (in the form of the empires) or the object (in the form of the road).
Who were the main protagonists of and along the Silk Road in the course of history? Who were the people who became the great powers of the ancient Silk Road, building up the material route, governing parts of it, and organizing trade and relationships from the far East to the extreme West of the Eurasian continent?
This study examines the recent literature on the expectations, beliefs and perceptions of investors who incorporate Environmental, Social, and Governance (ESG) considerations in investment decisions with the aim of generating superior performance while also making a societal impact. Through the lens of equilibrium models of agents with heterogeneous tastes for ESG investments, green assets are expected to generate lower returns in the long run than their non-ESG counterparts. In the short run, however, ESG investment can outperform non-ESG investment through various channels. Empirically, results on ESG outperformance are mixed. We find consensus in the literature that some investors have an ESG preference and that their actions can generate positive social impact. The shift towards more sustainable policies in firms is motivated by the increased market values and lower cost of capital of green firms, driven by investors’ choices.
This paper analyzes how on-the-job search (OJS) by an agent impacts the moral hazard problem in a repeated principal-agent relationship. OJS is found to constitute a source of agency costs because efficient search incentives require that the agent receives all gains from trade. Further, the optimal incentive contract with OJS matches the design of empirically observed compensation contracts more accurately than models that ignore OJS. In particular, the optimal contract entails excessive performance pay plus efficiency wages. Efficiency wages reduce the opportunity costs of work effort and hence serve as a complement to bonuses. Thus, the model offers a novel explanation for the use of efficiency wages. When allowing for renegotiation, the model generates wage and turnover dynamics that are consistent with empirical evidence. I argue that the model contributes to explaining the concomitant rise in the use of performance pay and in competition for high-skill workers during the last three decades.
High-frequency changes in interest rates around FOMC announcements are an important tool for identifying the effects of monetary policy on asset prices and the macroeconomy. However, some recent studies have questioned both the exogeneity and the relevance of these monetary policy surprises as instruments, especially for estimating the macroeconomic effects of monetary policy shocks. For example, monetary policy surprises are correlated with macroeconomic and financial data that is publicly available prior to the FOMC announcement. The authors address these concerns in two ways: First, they expand the set of monetary policy announcements to include speeches by the Fed Chair, which essentially doubles the number and importance of announcements in our dataset. Second, they explain the predictability of the monetary policy surprises in terms of the “Fed response to news” channel of Bauer and Swanson (2021) and account for it by orthogonalizing the surprises with respect to macroeconomic and financial data. Their subsequent reassessment of the effects of monetary policy yields two key results: First, estimates of the high-frequency effects on financial markets are largely unchanged. Second, estimates of the macroeconomic effects of monetary policy are substantially larger and more significant than what most previous empirical studies have found.
This paper examines the interaction of G7 real exchange rates with real output and interest rate differentials. Using cointegration methods, we generally find a link between the real exchange rate and the real interest differential. This finding contrasts with the majority of the extant research on the real exchange rate - real interest rate link. We identify a new measure of the equilibrium exchange rate in terms of the permanent component of the real exchange rate that is consistent with the dynamic equilibrium given by the cointegration relation. Furthermore, the presence of cointegration also allows us to identify real, nominal and transitory disturbances with only minimal identifying restrictions. Our findings suggest that persistent deviations of real exchange rates from their equilibrium value can have feedback effects on the underlying fundamentals, hence altering the equilibrium exchange rate itself. This has important implications for the persistence measures of real exchange rates that are reported elsewhere in the literature.
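The first step of an Engle-Granger style cointegration analysis, the kind of method the abstract refers to, can be sketched as follows. This is an illustrative fragment, not the authors' procedure: it runs the candidate cointegrating regression by ordinary least squares and returns the residuals whose stationarity would then be tested (e.g. with an ADF test, omitted here); all function names are hypothetical.

```python
import statistics

def ols_fit(x, y):
    # Simple OLS of y on x with intercept: returns (intercept, slope).
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    beta = sxy / sxx
    return my - beta * mx, beta

def cointegrating_residuals(x, y):
    # Residuals from the candidate cointegrating regression y = a + b*x + u.
    # If x and y are cointegrated, these residuals should be stationary
    # even though x and y individually are not.
    a, b = ols_fit(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]
```

The permanent component of the real exchange rate discussed in the abstract would be identified from the estimated cointegration relation, a step well beyond this sketch.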
A question of Mesorah?
(2009)
In the upcoming Krias Hatorah in Parshat Shoftim and Parshat Ki Savo there are a number of instances where the meaning of a phrase changes completely based on the pronunciation of a single word – דם – with either a Komatz or a Patah. Until recently, most Chumashim and Tikunim generally followed the famous Yaakov Ben Hayyim 1525 edition of Mikraot Gedolot published in Venice, which printed a seemingly inconsistent pattern in the pronunciation of the different occurrences of this word.
Using a novel dataset, we develop a structural model of the Very Large Crude Carrier (VLCC) market between the Arabian Gulf and the Far East. We study how fluctuations in oil tanker rates, oil exports, shipowner profits, and bunker fuel prices are determined by shocks to the supply and demand for oil tankers, to the utilization of tankers, and to the cost of operating tankers, including bunker fuel costs. Our analysis shows that time charter rates are largely unresponsive to tanker cost shocks. In response to higher costs, voyage profits decline, as cost shocks are only partially passed on to round-trip voyage rates. Oil exports from the Arabian Gulf also decline, reflecting lower demand for VLCCs. Positive utilization shocks are associated with higher profits, a slight increase in time charter rates and lower fuel prices and oil export volumes. Tanker supply and tanker demand shocks have persistent effects on time charter rates, round-trip voyage rates, the volume of oil exports, fuel prices, and profits with the expected sign.
The nineteenth century in Britain saw tumultuous changes that reshaped the fabric of society and altered the course of modernization. It also saw the rise of the novel to the height of its cultural power as the most important literary form of the period. This paper reports on a long-term experiment in tracing such macroscopic changes in the novel during this crucial period. Specifically, we present findings on two interrelated transformations in novelistic language that reveal a systemic concretization in language and fundamental change in the social spaces of the novel. We show how these shifts have consequences for setting, characterization, and narration as well as implications for the responsiveness of the novel to the dramatic changes in British society.
This paper has a second strand as well: the project was simultaneously an experiment in developing quantitative and computational methods for tracing changes in literary language. We wanted to see how far quantifiable features such as word usage could be pushed toward the investigation of literary history. Could we leverage quantitative methods in ways that respect the nuance and complexity we value in the humanities? To this end, we present a second set of results: the techniques and methodological lessons gained in the course of designing and running this project.
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target and the gap between output and its potential. Under "the opportunistic approach to disinflation," a central bank controls inflation aggressively when inflation is far from its target but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. JEL Classification: E31, E52, E58, E61. July 2005.
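The contrast between the two rules can be illustrated with stylized rule functions. This is a minimal sketch, not the paper's small-scale rational expectations model; all parameter values (inflation target, neutral rate, response coefficients, band width) are illustrative assumptions.

```python
def linear_rule(pi, y_gap, pi_star=2.0, r_star=2.0,
                phi_pi=1.5, phi_y=0.5):
    """Conventional linear rule: react proportionally to both the
    inflation gap and the output gap."""
    return r_star + pi + phi_pi * (pi - pi_star) + phi_y * y_gap

def opportunistic_rule(pi, y_gap, pi_star=2.0, r_star=2.0,
                       phi_pi=1.5, phi_y=0.5, band=1.0):
    """Stylized opportunistic rule: ignore inflation gaps inside a
    band around target (focus on output stabilization there) and
    react as under the linear rule outside the band."""
    gap = pi - pi_star
    if abs(gap) <= band:
        gap = 0.0  # inside the band: no inflation response
    return r_star + pi + phi_pi * gap + phi_y * y_gap

# Inside the band the opportunistic rate is lower than the linear one;
# far from target the two rules coincide.
print(linear_rule(2.5, 0.0), opportunistic_rule(2.5, 0.0))
print(linear_rule(4.0, 0.0), opportunistic_rule(4.0, 0.0))
```

Feeding both rule functions into the same simulated model would reproduce the kind of comparison the paper performs.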
We raise some critical points against a naïve interpretation of “green finance” products and strategies. These critical insights are the background against which we take a closer look at instruments and policies that might allow green finance to become more impactful. In particular, we focus on the role of a taxonomy and investor activism. We also describe the interaction of government policies with green finance practice – an aspect that has been mostly neglected in policy debates but needs to be taken into account. Finally, the special case of green government bonds is discussed.
We create an alternative version of the present utility value formula to explicitly show that every store-of-value in the economy bears utility-interest (non-pecuniary income) for its holder regardless of possible interest earnings from financial markets. In addition, we generalize the well-known welfare measures of consumer and producer surplus as present value concepts and apply them not only to the production and use of consumer goods and durables but also to money and other financial assets. This helps us, inter alia, to formalize the circumstances under which even a producer of legal tender might become insolvent. We also develop a new measure of seigniorage and demonstrate why the well-established concept of monetary seigniorage is flawed. Our framework also allows us to formulate the conditions under which liability-issued money such as inside money and financial instruments such as debt certificates become – somewhat paradoxically – net wealth of the society.
[I]n its present form, the bibliography contains approximately 1100 entries. Bibliographical work is never complete, and the present one is still modest in a number of respects. It is not annotated, and it still contains a lot of mistakes and inconsistencies. It has nevertheless reached a stage which justifies considering the possibility of making it available to the public. The first step towards this is its pre-publication in the form of this working paper. […]
The bibliography is less complete for earlier years. For works before 1970, the bibliographies of Firbas and Golková (1975) and Tyl (1970), which have not been incorporated here, may be consulted.
The Russian war of aggression against Ukraine since 24 February 2022 has intensified the discussion of Europe’s reliance on energy imports from Russia. A ban on imports of Russian oil, natural gas and coal has already been imposed by the United States, while the United Kingdom plans to cease imports of oil and coal from Russia by the end of 2022. The German Federal Government is currently opposing an energy embargo against Russia. However, the Federal Ministry for Economic Affairs and Climate Action is working on a strategy to reduce energy imports from Russia. In this paper, the authors give an overview of the German and European reliance on energy imports from Russia with a focus on gas imports and discuss price effects, alternative suppliers of natural gas, and the potential for saving and replacing natural gas. They also provide an overview of estimates of the consequences for the economic outlook if the conflict intensifies.
In this paper we consider the dynamics of spot and futures prices in the presence of arbitrage. We propose a partially linear error correction model in which the adjustment coefficient is allowed to depend nonlinearly on the lagged price difference. We estimate our model using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear; the linear alternative is rejected. The speed of price adjustment increases almost monotonically with the magnitude of the price difference.
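An adjustment coefficient that depends nonlinearly on the lagged price difference can be sketched as follows. The functional form and all parameters below are hypothetical, chosen only to illustrate weak adjustment for small deviations (inside a transaction-cost band) and strong adjustment for large ones; they are not the paper's estimates.

```python
import numpy as np

def adjustment_speed(z, gamma=4.0):
    """Illustrative smooth nonlinear adjustment coefficient:
    near zero for small mispricings z, approaching -0.8 for
    large ones. Hypothetical functional form."""
    return -0.8 * (1.0 - np.exp(-gamma * z**2))

# Simulate the spot-futures basis under this error-correction scheme:
# z_t = z_{t-1} + alpha(z_{t-1}) * z_{t-1} + noise.
rng = np.random.default_rng(1)
z = np.empty(500)
z[0] = 0.0
for t in range(1, 500):
    z[t] = z[t - 1] + adjustment_speed(z[t - 1]) * z[t - 1] + 0.05 * rng.normal()

# Small deviations are barely corrected; large ones revert quickly.
print(adjustment_speed(0.05), adjustment_speed(1.0))
```

Estimating such a model from data would replace the assumed function with a nonparametric fit, which is the partially linear aspect of the paper's specification.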
A partial rehabilitation of side-effecting I/O : non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics which describes the reduction of graphs, this can be done such that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics provides a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic, memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
We build a novel leading indicator (LI) for EU industrial production (IP). Differently from previous studies, the technique developed in this paper is able to produce an ex-ante LI that is immune to “overlapping information drawbacks”. In addition, the set of variables composing the LI relies on a dynamic and systematic criterion, which ensures that the choice of variables is not driven by subjective views. Our LI anticipates swings (including the 2007-2008 crisis) in EU industrial production by 2 to 3 months on average. The predictive power improves if the indicator is revised every five or ten years. In a forward-looking framework, via a general-to-specific procedure, we also show that our LI is the most informative variable for forming expectations about EU IP growth.
Riley's (1979) reactive equilibrium concept addresses problems of equilibrium existence in competitive markets with adverse selection. The game-theoretic interpretation of the reactive equilibrium concept in Engers and Fernandez (1987) yields the Rothschild-Stiglitz (1976)/Riley (1979) allocation as an equilibrium allocation; however, multiplicity of equilibria emerges. In this note we embed the reactive equilibrium's logic in a dynamic market context with active consumers. We show that the Riley/Rothschild-Stiglitz contracts constitute the unique equilibrium allocation in any pure-strategy subgame perfect Nash equilibrium.
This note argues that in a situation of inelastic natural gas supply, a restrictive monetary policy in the euro zone could reduce the energy bill and would therefore have additional merits. A more hawkish monetary policy may be able to indirectly exploit monopsony power on the gas market. The welfare benefits of such a policy are diluted to the extent that some of the supply (approximately 10 percent) comes from within the euro zone, which may give rise to distributional concerns.
The Box-Cox quantile regression model estimated by the two-stage method introduced by Chamberlain (1994) and Buchinsky (1995) provides an attractive extension of linear quantile regression techniques. However, a major numerical problem, which has not been addressed in the literature so far, arises when implementing this method. We suggest a simple solution that modifies the estimator slightly. This modification is easy to implement. The modified estimator is still √n-consistent and its asymptotic distribution can easily be derived. A simulation study confirms that the modified estimator works well.
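The Box-Cox transform and its inverse illustrate the kind of domain restriction that can cause numerical trouble in this setting: for λ ≠ 0 the inverse is undefined whenever 1 + λz ≤ 0, which can happen for fitted regression indices. This is a minimal sketch of the transform pair only; the specific failure handled by the paper's modification may differ.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform, defined for y > 0:
    (y**lam - 1)/lam for lam != 0, log(y) for lam == 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y**lam - 1.0) / lam

def boxcox_inverse(z, lam):
    """Inverse transform; for lam != 0 it requires 1 + lam*z > 0,
    the domain restriction that can break down when z is a fitted
    quantile-regression index."""
    z = np.asarray(z, dtype=float)
    if lam == 0.0:
        return np.exp(z)
    base = 1.0 + lam * z
    if np.any(base <= 0.0):
        raise ValueError("inverse Box-Cox undefined: 1 + lam*z <= 0")
    return base ** (1.0 / lam)

# Round trip on the valid domain:
y = np.array([1.0, 2.0, 3.0])
print(boxcox_inverse(boxcox(y, 0.5), 0.5))
```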
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
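A classical univariate building block for range-based estimation is the Parkinson (1980) estimator; one way to obtain a covariance from range-based variances, when the range of a "sum" series is itself observable (as for a cross exchange rate under absence of triangular arbitrage), is the polarization identity. The simulation below is a synthetic illustration of these two ingredients, not the paper's full estimator; all simulation parameters are assumptions.

```python
import numpy as np

LN2_X4 = 4.0 * np.log(2.0)

def parkinson_var(high, low):
    """Parkinson (1980) range-based variance estimator:
    sigma^2 ~ mean(ln(H/L)^2) / (4 ln 2)."""
    r = np.log(np.asarray(high) / np.asarray(low))
    return np.mean(r**2) / LN2_X4

def range_covariance(var_x, var_y, var_sum):
    """Covariance via the polarization identity:
    Cov(x, y) = (Var(x+y) - Var(x) - Var(y)) / 2."""
    return 0.5 * (var_sum - var_x - var_y)

# Synthetic check: simulate driftless intraday log-price paths,
# record each day's high and low (including the opening price),
# and recover the daily variance from the ranges.
rng = np.random.default_rng(2)
n_days, n_intra, sigma = 2000, 390, 0.01
steps = rng.normal(scale=sigma / np.sqrt(n_intra), size=(n_days, n_intra))
paths = np.cumsum(steps, axis=1)
high = np.exp(paths.max(axis=1).clip(min=0.0))
low = np.exp(paths.min(axis=1).clip(max=0.0))
print(parkinson_var(high, low), sigma**2)
```

The estimate is close to the true daily variance (with a small downward bias from discrete monitoring of the range), which is the efficiency property the paper exploits in the multivariate setting.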